key: cord-005687-gj6q0ft0
authors: Paiva, José-Artur; Laupland, Kevin B.
title: Real-time PCR for early microbiological diagnosis: is it time?
date: 2017-05-23
journal: Intensive Care Med
doi: 10.1007/s00134-017-4793-1
sha:
doc_id: 5687
cord_uid: gj6q0ft0

Blood cultures are the classical gold standard for microbiological diagnosis of bloodstream infection (BSI) and sepsis. However, only 10% of processed blood cultures are positive, and finalized results typically take 48-72 h. Empirical antimicrobial therapy, administered until the etiological agent is identified and antimicrobial susceptibility test results are available, may be either excessive or inadequate, and unnecessary treatment with broad-spectrum antimicrobials can lead to significant collateral damage, including drug toxicity, antimicrobial drug resistance, increased length of stay, and additional cost. This is an important and relevant quality gap. It is evident that improved identification methods and practices that shorten the time to microbiological diagnosis and targeted therapy constitute a major quality-improvement framework in antibiotic use [1]. Diagnostic techniques that do not depend on growth of organisms in culture may offer a distinct advantage over current methods: they shorten the time to results and detect non-cultivable microorganisms under antibiotic pressure. Two recent studies have shown that matrix-assisted laser desorption/ionization time-of-flight mass spectrometry following isolation from clinical specimens, coupled with an antimicrobial stewardship team (AST) intervention, decreases time to organism identification and to effective and optimal antibiotic therapy in adult [2] and pediatric [3] patients with BSI. In the adult population, acceptance of an AST intervention has also been associated with a trend toward reduced mortality on multivariable analysis.
Moreover, nucleic acid amplification testing and mass spectrometry can identify selected antibiotic resistance patterns for vancomycin (vanA/vanB), methicillin (mecA), cephalosporins (beta-lactamases) and carbapenems (CPE) [4]. Polymerase chain reaction (PCR) is well established for the diagnosis of "atypical" pathogens in severe community-acquired pneumonia [6] and for the work-up of ARDS with possible infectious etiology, namely for viruses such as HSV and CMV, with viral load quantification, and also for Pneumocystis and Aspergillus [5]. In a retrospective case-control study of adult ICU patients with pneumonia and severe sepsis or septic shock, a strategy of bronchoalveolar lavage (BAL) cultures plus BAL multiplex PCR (m-PCR) led to higher microbiological yield and a shorter time to antibiotic therapy modification than a BAL-culture-only strategy (32.40 ± 14.41 vs. 41.74 ± 45.61 h; p < 0.001) [7]. However, several criticisms have been raised about the use of real-time PCR for the study of suspected sepsis and BSI. One study showed that the post-test probabilities of a positive (26.3%, 95% CI 19.8-33.7%) and a negative (5.6%, 95% CI 4.1-7.4%) SeptiFast test indicated potential limitations of the technique in diagnosing BSI in patients who had been admitted for an average of 8 days in hospital and had recently received antibiotics and organ support [8]. A systematic review and meta-analysis showed that, in suspected sepsis, SeptiFast has higher specificity than sensitivity, making it better for ruling in than for ruling out infection [9]. There are a number of important considerations when critiquing studies comparing blood cultures with nucleic acid diagnostic techniques. False-negative PCR tests can occur through interference from human DNA and the presence of PCR inhibitors in the blood. Furthermore, these assays can only detect pathogens that are specifically tested for. On the other hand, blood cultures are less sensitive, especially in the setting of recent exposure to antibiotics.
Overall, blood cultures have only 70% specificity, and sensitivity is approximately 10% in suspected bacteremia, 30% in febrile neutropenia, 35% in severe sepsis, and 50% in septic shock [10]. (*Correspondence: jarturpaiva@gmail.com) Defining a true positive result for evaluating diagnostic tests for infection is challenging and may best be done with a composite measure that includes clinical status and the type and severity of infection. Recently, a comprehensive literature search was conducted to identify studies with measurable outcomes in order to evaluate the evidence for the effectiveness of different rapid diagnostic practices in decreasing time to targeted therapy for hospitalized patients with BSI [1]. The authors concluded that rapid phenotypic techniques with direct communication likely improve the timeliness of targeted therapy, and that rapid molecular testing with direct communication significantly improves timeliness and significantly reduces mortality, compared with standard testing. Since publication of that review, the RADICAL study [11], an observational study of patients with suspected or proven BSI, pneumonia, or sterile fluid and tissue infection in nine ICUs, showed that PCR/electrospray ionization-mass spectrometry provides rapid pathogen identification with a sensitivity of 81%, a specificity of 69%, and a negative predictive value of 97% at 6 h from sample acquisition, and that treatment could have been altered in up to 57% of patients. Further, Banerjee et al., in a prospective randomized controlled trial, studied 617 patients with positive blood cultures with stratified randomization into three arms: standard blood culture processing (control); rapid multiplex PCR (rmPCR) reported with templated comments; or rmPCR reported with templated comments plus real-time audit and feedback of antimicrobial orders by an AST [12].
Antibiotic de-escalation occurred 19 h faster in the rmPCR/AST group than in controls, with almost a 25% reduction in broad-spectrum antibiotic days of therapy. The EVAMICA study, recently published in this journal, is an important addition to the body of literature investigating rapid diagnostic techniques in the ICU [13]. This multicentre cluster-randomised crossover trial included 1416 patients and confirms that adding direct molecular detection of pathogens in the blood of patients hospitalized with severe sepsis to standard blood cultures results in an overall higher microbial diagnosis rate (an increase from 28.1% to 42.6%) and a shorter time to results (22.9 vs. 49.5 h). Given the higher diagnostic sensitivity and faster turnaround, do the results of the EVAMICA study indicate that rapid diagnostic tests should be integrated into standard diagnostic laboratory practice? There are a number of considerations for discussion in this regard. First, it is important to recognize that, while in theory the availability of these results should lead to an increase in targeted therapies and a reduction in excessive or inadequate ones, this study does not prove this. Second, it must be recognized that for rapid tests to be most useful they should ideally be offered 24 h per day, 7 days a week; in the EVAMICA study, tests were not offered at weekends and were batched for daily runs. Whether many clinical laboratories could implement this test for provision of prompt results in the "real-world" setting remains to be determined. Third, while a significant improvement in diagnostic certainty was observed, the fact that the majority of patients remained undiagnosed for an infecting etiology leaves much to be desired. So, is it time? Yes! It is our contention that real-time PCR should be incorporated into the standard clinical management of patients with sepsis. However, use of these tests will still require adjunct use of standard blood culture methods and, for full benefit, coupling with an AST.
We must recognize that, even with the important gains we have witnessed with the use of new diagnostic tests, the majority of patients with sepsis will remain undiagnosed for a specific etiology. Research into further improving diagnostic certainty through ongoing development of rapid culture-independent microbiological identification methods, means to enhance swift communication of results between the microbiology laboratory and the ICU, and enhanced integration with ASTs is needed to improve individual patient outcomes and to reduce the burden of excessive antibiotic use and the subsequent emergence of antimicrobial resistance.

References
- Effectiveness of practices to increase timeliness of providing targeted therapy for inpatients with bloodstream infections: a laboratory medicine best practices systematic review and meta-analysis
- Impact of rapid organism identification via matrix-assisted laser desorption/ionization time-of-flight combined with antimicrobial stewardship team intervention in adult patients with bacteremia and candidemia
- Impact of matrix-assisted laser desorption and ionization time-of-flight and antimicrobial stewardship intervention on treatment of bloodstream infections in hospitalized children
- Evaluation of matrix-assisted laser desorption ionization-time of flight mass spectrometry for rapid detection of β-lactam resistance in Enterobacteriaceae derived from blood cultures
- Community-acquired pneumonia related to intracellular pathogens
- Diagnostic workup for ARDS patients
- Impact of bronchoalveolar lavage multiplex polymerase chain reaction on microbiological yield and therapeutic decisions in severe pneumonia in intensive care unit
- Diagnostic accuracy of SeptiFast multi-pathogen real-time PCR in the setting of suspected healthcare-associated bloodstream infection
- Accuracy of LightCycler SeptiFast for the detection and identification of pathogens in the blood of patients with suspected sepsis: a systematic review and meta-analysis
- Multi-pathogen real-time PCR system adds benefit for my patients: yes
- Rapid diagnosis of infection in the critically ill, a multicenter study of molecular detection in bloodstream infections, pneumonia, and sterile site infections
- Randomized trial of rapid multiplex polymerase chain reaction-based blood culture identification and susceptibility testing
- Performance and economic evaluation of the molecular detection of pathogens for patients with severe infections: the EVAMICA open-label cluster-randomised interventional crossover trial

key: cord-265597-hiqqx1a2
authors: Abdellatif, Amal; Gatto, Mark
title: It's OK not to be OK: shared reflections from two PhD parents in a time of pandemic
date: 2020-05-13
journal: Gend Work Organ
doi: 10.1111/gwao.12465
sha:
doc_id: 265597
cord_uid: hiqqx1a2

Adopting an intersectional feminist lens, we explore our identities as single and co-parents thrust into the new reality of the UK COVID-19 lockdown. As two PhD students, we present shared reflections on our intersectional and divergent experiences of parenting and our attempts to protect our work and families during a pandemic. We reflect on the social constructions of 'masculinities' and 'emphasised femininities' (Connell, 2005) as complicated influences on our roles as parents. Finally, we highlight the importance of time and self-care as ways of managing our shared realities during this uncertain period. Through sharing reflections, we became closer friends in mutual appreciation and solidarity as we learned about each other's struggles and vulnerabilities.

Protecting your family is one of the most important roles you can play as a parent, but what happens when you cannot shield yourself or your loved ones from the threat of trauma (Cobham & Newnham, 2018)? These reflections provide a glimpse into the lives of two PhD parents. Amal is a second-year PhD student (international), an associate lecturer, and a single parent to a 3-year-old and a 13-year-old.
Mark is a third-year PhD student (home), a research assistant, and a co-parent with a 14-month-old baby (13 months old during the reflections); his wife works in the NHS. We are both exploring gender in the workplace for our PhDs. Our shared stories of the UK COVID-19 lockdown acted as both individual catharsis and collective empowerment. Through sharing, we both learned more about our intersectional identities and our efforts to act as protective shields for our families during this traumatic time. Importantly, we also chose to write together to expose and resist patriarchal models of gender through our divergent parental roles and converging feminist principles towards gender equity. We present our reflective stories in three acts represented in a single day: the morning, afternoon and evening of April 10th, 2020, 'Good Friday' (additionally recorded as a shared time-log exercise). This method provided a reassuring structure for us to work with, while also framing our lived experiences thematically and over a longer time period. We include snapshots of our 'Good Friday' to highlight how our days progressed, with various points of similarity and difference. We also include reflections on our developing response to the lockdown across a three-week period from the start of the UK lockdown on March 23rd 2020 until April 8th. We intentionally shared our reflections with each other after each new entry to enrich our collective writing experience, a process which had the additional benefit of deepening our friendship and mutual admiration. We were inspired in our collective writing approach by 'writing resistance together' (Ahonen et al., 2020). We also drew on Grenier (2015) as a model for constructing our shared autoethnography in a quasi-conversational form that expresses insights into our shared truths. We present our captured stories as both ordered and messy, including
occasional spelling and grammatical errors; a messiness that reflects our lives. By making ourselves vulnerable in this way, we hope our reflective stories can pay tribute to the canon of emancipatory feminist writing (for examples, see Cixous, Cohen, & Cohen, 1976; Haraway, 1991) that challenges how we write about ourselves and our experiences as feminists who aspire to transcend gender binaries.

"It is not sufficient to have an experience in order to learn. Without reflecting on this experience, it may quickly be forgotten, or its learning potential lost." Graham Gibbs (1988)

We share similar identities as PhD students and parents. Through our reflective diaries, we found that our experiences converge around these two intersectional identities (Crenshaw, 2018). Yet our diaries also reflect how our experiences diverge across the other identities we hold: gender, ethnicity, and co-parenting vs. single parenting; all of which influenced our pandemic reality. We echo Rodriguez, Holvino, Fletcher, and Nkomo (2016) in moving beyond the favoured triumvirate of gender, race and class to build a more complex ontology of intersecting socio-cognitive categories in our experiences. As we both believe in the principles of social equity, we examined and acknowledged where our identities were privileged or discriminated against in a pandemic. We feel this represents a foundational step of our feminist reflective praxis.

"… are different to other PhD students. We must protect our study time. Just as we must protect our time with our families. And these two worlds, though, naturally will cross over." (diary excerpt)

"… always means that we strain to separate and retain difference." hooks (1990)

We found resemblance between our diary reflections around feminism and examining our femininity and masculinity in the context of COVID-19. We reflected on Mark's experiences of 're-embodied masculinity' (Connell, 2005), embracing his caring role against the cultural template of 'hegemonic masculinity'.
At the same time, Amal reflected on her resistance, through single parenting, to the cultural template of 'emphasised femininity' (Connell & Messerschmidt, 2005). We present our non-conformant femininities and masculinities as a challenge to the institutionalised, 'assumed' and regulated practices around our gender (Butler, 2004).

"14:30 - After lunch, helped her this time to draw on the paper rather than the walls! Also coloured an Easter bunny and little eggs." (Amal)

"… (the older) practicing some masculine domination over his sister, for example, asking her not to talk in a certain way as she is a 'girl'. I observed similar behaviour from my daughter towards my son; for example, seeing him wearing or playing with something that does not conform to his gender, she directly says, 'This is not for boys, this is for girls.' Here, I realised how I come out as a feminist rather than only a mother and intervene in the conversation. I try to challenge the way I was raised (and resisted)."

Sartre (1956, p. 29)

Our experiences of time have been stretched, squeezed and snapped at various stages of this pandemic. As our working days stretched into evenings, we tended to squeeze our time with more intensity until, with fatigue, we snapped. Some experiences converged, especially our moments of vulnerability and need to take time for self-care, while others diverged. Our need to compartmentalise our time as parents and PhD students acts as an ever-present pressure we both battle with.

"Work always feels like a race against time. It is a race that I will never win." (Mark)

As PhD parents, we cannot escape the ticking of time and looming deadlines; we constantly feel this pressure, even at the best of times. The lockdown has meant we cannot produce the volume, nor achieve the quality of focus and output, required to meet our own perceived expectations.
With each passing minute, we experience prodding fingers and shouts for attention, which wrench us away from the immersion needed to produce our best work. In such a competitive discipline as academia, where success is measured on the 'publish or perish' continuum, our very survival as early-career academics is at the forefront of our minds.

"In the midst of every crisis, lies great opportunity." Albert Einstein

"To be a complete individual, equal to man, woman has to have access to the male world as man does to the female one, access to the other; but the demands of the other are not symmetrical in the two cases." Beauvoir (2011, p. 818)

Reviewing our reflections, we have both been comforted by our two lives lived in simultaneously divergent, yet similar, moments of vulnerability with our families. As we have shared reflections during these early days and weeks, we have grown closer as friends, despite the enforced distance we must observe. We have glimpsed behind the veil of our professional selves, allowed ourselves to share our precious private lives, and gained something far more valuable in our mutual admiration for each other as people. Amal has embodied the total parent, from teacher to chef, carer, friend and protector, while squeezing in her studies; Mark has experienced periods of re-embodied masculinity as transient sole primary carer and support to his wife. Our experiences are unequal, but we have both gained unplanned access to the 'other' as working parents, peers and friends. It is this 'other' that builds our case to embrace our vulnerabilities as parents towards a collective strength that could endure beyond this lockdown. Is this a beginning to an end or an end to a new beginning? Will we get back to the life we knew? In these ambiguous, uncertain times, there are plenty of unanswered questions.
Even with this ambiguity, we think everyone by now will already have a long 'to do after the lockdown' list. This could be something as simple as a friend's hug, a cautious handshake, staff-kitchen gossip, a chilled drink at the pub, or that long-overdue haircut! Since the lockdown started, each household has become a huge conglomerate of organisations: we are the university, the school, the nursery, the gym, the restaurant, the library, and the hairdresser. Will we see this as an ugly experience that brought all social inequalities and injustice to the surface? Or will we see it as a great opportunity for family, self-discovery, open vulnerability, resilience, love, compassion and solidarity? Will we value one another differently, or will it be a matter of time before we get back to the 'old' reality of busy bees buzzing around the hive? All we do know is that this shared experience has meant more to us than we anticipated. We helped each other see the light at the end of our separate tunnels and, out of our solidarity as feminists, our friendship has blossomed.

References
- Writing resistance together. Gender, Work & Organization
- The second sex
- Undoing gender. London: Routledge
- Toward a field of intersectionality studies: theory, applications, and praxis. Signs
- The laugh of the Medusa
- Trauma and parenting: considering humanitarian crisis contexts
- Masculinities. Berkeley and Los Angeles
- Hegemonic masculinity: rethinking the concept
- Demarginalizing the intersection of race and sex: a Black feminist critique of antidiscrimination doctrine, feminist theory, and antiracist politics. Feminist legal theory
- Learning by doing: a guide to teaching and learning methods
- Autoethnography as a legitimate approach to HRD research: a methodological conversation at 30,000 feet
- A cyborg manifesto: science, technology, and socialist-feminism in the late twentieth century
- Yearning: race, gender, and cultural politics
- The theory and praxis of intersectionality in work and organisations: where do we go from here? Gender, Work & Organization
- Being and nothingness: an essay on phenomenological ontology

Acknowledgements: We both wish to formally thank our supervisor, Professor Jamie Callahan, whose continuous mentorship and empathy has ensured we felt fully supported, especially during these traumatic times. We also want to thank Jamie for encouraging us to write collectively and learn from each other. We both feel very fortunate to work with such an inspiring leader.

key: cord-152028-c8xit4tf
authors: Javid, Alireza M.; Liang, Xinyue; Venkitaraman, Arun; Chatterjee, Saikat
title: Predictive analysis of COVID-19 time-series data from Johns Hopkins University
date: 2020-05-07
journal: nan
doi: nan
sha:
doc_id: 152028
cord_uid: c8xit4tf

We provide a predictive analysis of the spread of COVID-19, also known as SARS-CoV-2, using the dataset made publicly available online by the Johns Hopkins University. Our main objective is to provide predictions of the number of infected people for different countries over the next 14 days. The predictive analysis is done using time-series data transformed on a logarithmic scale. We use two well-known methods for prediction: polynomial regression and neural networks.
As the number of training data for each country is limited, we use a single-layer neural network called the extreme learning machine (ELM) to avoid over-fitting. Due to the non-stationary nature of the time-series, a sliding-window approach is used to provide a more accurate prediction.

The COVID-19 pandemic has led to a massive global crisis, caused by the rapid spread rate and severe fatality, especially among those with a weak immune system. In this work, we use the available COVID-19 time-series of infected cases to build models for predicting the number of cases in the near future. In particular, given the time-series up to a particular day, we make predictions for the number of cases in the next τ days, where τ ∈ {1, 3, 7, 14}. This means that we predict for the next day, after 3 days, after 7 days, and after 14 days. Our analysis is based on the time-series data made publicly available on the COVID-19 dashboard by the Center for Systems Science and Engineering (CSSE) at the Johns Hopkins University (JHU) (https://systems.jhu.edu/research/public-health/ncov/) [1].

Let y_n denote the number of confirmed cases on the n-th day of the time-series after the start of the outbreak. We then have the following setup:
• The input consists of the last n samples of the time-series, given by y_n ≜ [y_1, y_2, …, y_n].
• The predicted output is t_n = ŷ_{n+τ}, τ ∈ {1, 3, 7, 14}.
• Due to the non-stationary nature of the time-series data, a sliding window of size w is used over y_n to make the prediction, and w is found via cross-validation.
• The predictive function f(·) is modeled either by a polynomial or by a neural network, and is used to make the prediction ŷ_{n+τ} = f(y_n).

The dataset from JHU contains the cumulative number of cases reported daily for different countries. We base our analysis on 12 of the countries listed in Table I. For each country, we consider the time-series y_n starting from the day when the first case was reported.
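The windowed setup above can be made concrete with a few lines of code. The sketch below (our own illustration; the function name and defaults are not from the paper) turns a series into supervised pairs with input x_n = [y_{n−w+1}, …, y_n] and target t_n = y_{n+τ}:

```python
import numpy as np

def make_windows(y, w, tau):
    """Build supervised pairs from a series y_0..y_{N-1} (0-based):
    x_n = [y_{n-w+1}, ..., y_n] and t_n = y_{n+tau}, mirroring the
    sliding-window setup described in the text (names are ours)."""
    y = np.asarray(y, dtype=float)
    X, T = [], []
    # n is the index of the last observed day inside each window
    for n in range(w - 1, len(y) - tau):
        X.append(y[n - w + 1 : n + 1])
        T.append(y[n + tau])
    return np.array(X), np.array(T)
```

For example, a length-10 series with w = 3 and τ = 2 yields 6 pairs, the first being x = [y_0, y_1, y_2] with target t = y_4.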
Given the current day index n, we predict the number of cases for day n + τ by considering as input the number of cases reported for the past w days, that is, for days n − w + 1 to n. We use data-driven prediction approaches without considering any other aspect, for example, models of infectious disease spread [2]. We apply two approaches to analyze the data and make predictions, or in other words, to learn the function f:
• Polynomial model approach: the simplest curve-fitting or approximation model, in which the number of cases is approximated locally with polynomials; f is a polynomial.
• Neural network approach: a supervised learning approach that uses training data in the form of input-output pairs to learn a predictive model; f is a neural network.
We describe each approach in detail in the following subsections.

A. Polynomial model

1) Model: We model the expected value of y_n as a third-degree polynomial function of the day number n:

E[y_n] = p_0 + p_1 n + p_2 n² + p_3 n³.

The set of coefficients {p_0, p_1, p_2, p_3} is learned using the available training data. Given the highly non-stationary nature of the time-series, we consider local polynomial approximations of the signal over a window of w days, instead of using all the data to estimate a single polynomial f(·) for the entire time-series. Thus, on the n-th day, we learn the corresponding polynomial f(·) using y_{n,w} ≜ [y_{n−w+1}, …, y_{n−1}, y_n].

2) How the model is used: Once the polynomial is determined, we use it to predict for the (n + τ)-th day as ŷ_{n+τ} = f(n + τ). For every polynomial regression model, we construct the corresponding polynomial function f(·) using y_{n,w} as the most recent input data of size w. The appropriate window size w is found through cross-validation.

B. Neural networks

1) Model: We use the extreme learning machine (ELM) as the neural network model to avoid overfitting to the training data.
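The local polynomial approach described in subsection A can be sketched as follows: fit a degree-3 polynomial to the most recent w samples and evaluate it τ days ahead. This is our own reconstruction using NumPy's polyfit, not the authors' code; the window size, horizon, and degree are placeholders following the description in the text.

```python
import numpy as np

def poly_predict(y, w=14, tau=7, deg=3):
    """Fit a local degree-`deg` polynomial to the last `w` samples of
    the series and extrapolate `tau` days ahead (illustrative sketch;
    hyperparameter values are assumptions, chosen via CV in the paper)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    days = np.arange(n - w, n)            # day indices of the window
    coeffs = np.polyfit(days, y[-w:], deg)
    return np.polyval(coeffs, n - 1 + tau)  # evaluate tau days past the last day
```

On data that is exactly polynomial of degree ≤ 3, the fit recovers the generating curve and the extrapolation is exact up to numerical precision.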
As the length of the time-series data for each country is limited, the number of training samples for the neural network is quite small, which can lead to severe overfitting in large-scale neural networks such as deep neural networks (DNNs), convolutional neural networks (CNNs), etc. [3], [4]. ELM, on the other hand, is a single-layer neural network that uses random weights in its first hidden layer [5]. The use of random weights has gained popularity due to its simplicity and effectiveness in training [6]-[8]. We now briefly describe ELM.

Consider a dataset containing N samples of pairwise p-dimensional input data x ∈ ℝ^p and the corresponding q-dimensional target vector t ∈ ℝ^q, as D = {(x_n, t_n)}_{n=1}^{N}. We construct the feature vector z_n = g(W x_n) ∈ ℝ^h, where
• the weight matrix W ∈ ℝ^{h×p} is an instance of a normal distribution,
• h is the number of hidden neurons, and
• the activation function g(·) is the rectified linear unit (ReLU).
To predict the target, we use a linear projection of the feature vector z_n onto the target. Let the predicted target for the n-th sample be O z_n; note that O ∈ ℝ^{q×h}. Using ℓ_2-norm regularization, we find the optimal solution of the convex optimization problem

O* = arg min_O Σ_{n=1}^{N} ‖t_n − O z_n‖² + λ ‖O‖_F²,

where ‖·‖_F denotes the Frobenius norm. Once the matrix O* is learned, the prediction for any new input x is given by t̂ = O* g(W x).

2) How the model is used: When using ELM to predict the number of cases, we define x_n = [y_{n−w+1}, …, y_{n−1}, y_n]^⊤ and t_n = [y_{n+τ}]. Note that x_n ∈ ℝ^w and t_n ∈ ℝ. For a fixed τ ∈ {1, 3, 7, 14}, we use cross-validation to find the proper window size w, the number of hidden neurons h, and the regularization hyperparameter λ.

In this subsection, we make predictions based on the time-series data currently available, until today, May 4, 2020, for τ ∈ {1, 3, 7}. We estimate the number of cases for the last 31 days for the countries in Table I.
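The ELM above admits a short closed-form implementation: a fixed random first layer with ReLU features, followed by a ridge-regularised linear readout solved via the normal equations. The sketch below is our own minimal rendering of that model (class and parameter names, and the default hyperparameter values, are assumptions, not the authors' code):

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine: random fixed
    first layer, ReLU features, ridge-regularised linear readout."""
    def __init__(self, n_hidden=200, lam=1e-8, seed=0):
        self.h, self.lam = n_hidden, lam
        self.rng = np.random.default_rng(seed)

    def _features(self, X):
        # z_n = g(W x_n) with g = ReLU; columns of Z are the z_n
        return np.maximum(self.W @ X.T, 0.0)

    def fit(self, X, T):
        """X: N x p inputs, T: N x q targets."""
        p = X.shape[1]
        self.W = self.rng.standard_normal((self.h, p))  # random first layer
        Z = self._features(X)                           # h x N
        # Closed-form ridge solution of min ||T' - O Z||_F^2 + lam ||O||_F^2
        A = Z @ Z.T + self.lam * np.eye(self.h)
        self.O = np.linalg.solve(A, Z @ T).T            # q x h
        return self

    def predict(self, X):
        return (self.O @ self._features(X)).T           # N x q
```

With more hidden units than training samples and a small λ, the readout nearly interpolates the training targets, which is why the paper controls h and λ by cross-validation.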
For each value of τ ∈ {1, 3, 7}, we compare the estimated number of cases ŷ_{n+τ} with the true value y_{n+τ} and report the estimation error in percentage, i.e.,

e = |ŷ_{n+τ} − y_{n+τ}| / y_{n+τ} × 100.

We carry out two sets of experiments for each of the two approaches (polynomial and ELM) to examine their sensitivity to newly arriving training samples. In the first set of experiments, we run cross-validation once to find the hyperparameters, without updating them as new samples of the time-series are observed over the 31-day span. In the second set of experiments, we re-run cross-validation daily as new samples of the time-series are observed; in this setup, the window size w varies over time as the optimal hyperparameters are re-selected. We refer to this setup as 'ELM time-varying' and 'Poly time-varying' in the rest of the manuscript.

We first show the reported and estimated numbers of infection cases for Sweden using ELM time-varying for different τ's in Figure 1. For each τ, we estimate the number of cases up to τ days beyond the day on which the JHU data were collected. In our later experiments, we show that ELM time-varying is typically more accurate than the other three methods (Polynomial, Poly time-varying, and ELM). This better accuracy conforms to the non-stationary behavior of the time-series data, or in other words, the best model parameters change over time; hence, the results of ELM time-varying are shown explicitly for Sweden. According to our experimental results, we predict that a total of 23039, 23873, and 26184 people will be infected in Sweden on May 5, May 7, and May 11, 2020, respectively. Histograms of the error percentage of the four methods are shown in Figure 2 for different values of τ. The histograms are calculated using a nonparametric kernel-smoothing distribution over the past 31 days for all 12 countries. The daily error percentage for each country in Table I is shown in Figures 7-15.
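The 'time-varying' setup, re-selecting hyperparameters as new samples arrive, can be sketched as follows. This is our own schematic of window-size re-selection based on past percentage error; the toy trend predictor and the scoring scheme are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def linear_trend_predict(window, tau):
    """Toy stand-in model: extrapolate the window's average slope."""
    window = np.asarray(window, dtype=float)
    slope = (window[-1] - window[0]) / (len(window) - 1)
    return window[-1] + tau * slope

def select_and_predict(y, tau, candidate_ws, fit_predict):
    """Score each candidate window size by its past percentage error,
    keep the best, and forecast tau days past the last observed day
    (our illustration of hyperparameter re-selection)."""
    y = np.asarray(y, dtype=float)
    scores = {}
    for w in candidate_ws:
        errs = [abs(fit_predict(y[n - w + 1 : n + 1], tau) - y[n + tau])
                / y[n + tau] * 100
                for n in range(w - 1, len(y) - tau)]
        scores[w] = np.mean(errs)
    best_w = min(candidate_ws, key=lambda w: scores[w])
    return best_w, fit_predict(y[-best_w:], tau)
```

Re-running this selection each day, as in the paper's daily cross-validation, lets the chosen w track the changing statistics of the series.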
Note that the reported error percentage of ELM is averaged over 100 Monte Carlo trials. The average and standard deviation of the error over the 31 days are reported (in percentage) in the legend of each figure for all four methods. It can be seen that daily cross-validation is crucial to preserving consistent performance throughout the pandemic, resulting in more accurate estimates. In other words, the variation of the time-series as n increases is significant enough to change the statistics of the training and validation sets, which in turn leads to different optimal hyperparameters as the length of the time-series grows. It can also be seen that ELM time-varying provides a more accurate estimate, especially for large values of τ. Therefore, for the rest of the experiments, we focus only on ELM time-varying as our favored approach. Another interesting observation is that the performance of ELM time-varying improves as n increases, which verifies the general principle that neural networks typically perform better as more data become available. We report the average error percentage of ELM time-varying over the last 10 days of the time-series in Table II. We see that the estimation error increases as τ increases. For τ = 7, ELM time-varying works well for most of the countries, but not for France and India. This poor estimation for a few countries could be due to a significant amount of noise in the time-series data, possibly even caused by inaccurately reported daily cases.

In this subsection, we repeat the prediction based on the time-series data available until today, May 12, 2020, for τ ∈ {1, 3, 7}. In subsection IV-A, we predicted the total number of cases in Sweden on May 5, May 7, and May 11, 2020. The reported numbers of cases on these days for Sweden turned out to be 23216, 24623, and 26670, respectively, which is in a similar range of error to that reported in Table II.
we show the reported and estimated number of infection cases for sweden using elm time-varying for different τ's in figure 3. for each τ, we estimate the number of cases up to τ days after the end of the period over which jhu data is collected. according to our experimental results, we predict that a total of 27737, 28522, and 30841 people will be infected in sweden on may 13, may 15, and may 19, 2020, respectively. histograms of the error percentage of the four methods are shown in figure 4 for different values of τ. these experiments verify that elm time-varying is the most consistent approach as the length of the time-series increases from may 4 to may 12. we report the average error percentage of elm time-varying over the last 10 days of the time-series in table iii. we see that the estimation error increases as τ increases. when τ = 7, elm time-varying works well for all of the countries except india, even though the number of training samples has increased compared to subsection iv-a. in this subsection, we repeat the prediction based on the time-series data available until today, may 20, 2020, for τ ∈ {1, 7, 14}. in subsection iv-b, we predicted the total number of cases in sweden on may 13, may 15, and may 19, 2020. the reported number of cases on these days for sweden turned out to be 27909, 29207, and 30799, respectively, which is within the range of prediction error reported in table iii. we increase the prediction range τ in this subsection and show the reported and estimated number of infection cases for sweden using elm time-varying for τ = 1, 7, and 14 in figure 5. for each τ, we estimate the number of cases up to τ days after the end of the period over which jhu data is collected. according to our experimental results, we predict that a total of 32032, 34702, and 37188 people will be infected in sweden on may 21, may 27, and june 3, 2020, respectively. histograms of the error percentage of the four methods are shown in figure 6 for different values of τ.
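the per-day error percentage used throughout these comparisons can be computed directly; for example, checking the subsection iv-b predictions for sweden against the totals reported a week later:

```python
def error_percentage(predicted, reported):
    """Absolute percentage error between predictions and later-reported values."""
    return [abs(p - r) / r * 100.0 for p, r in zip(predicted, reported)]
```

with the sweden numbers from the text, `error_percentage([27737, 28522, 30841], [27909, 29207, 30799])` stays below a few percent for each horizon, consistent with the table iii range.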
these experiments verify that elm time-varying is the most consistent approach as the length of the time-series increases from may 12 to may 20. we report the average error percentage of elm time-varying over the last 10 days of the time-series in table iv. we see that the estimation error increases as τ increases. when τ = 7, elm time-varying works well for all of the countries, so we increase the prediction range to 14 days. we observe that elm time-varying fails to provide an accurate estimate for several countries such as france, india, iran, and the usa. this experiment shows that long-term prediction of the spread of covid-19 remains an open problem. however, based on tables ii-iv, we expect the performance of elm time-varying to improve in the future as the number of training samples increases during the pandemic. we studied the estimation capabilities of two well-known approaches for dealing with the spread of the covid-19 pandemic. we showed that a small-sized neural network such as elm provides a more consistent estimation compared to its polynomial regression counterpart. we found that a daily update of the model hyperparameters is of paramount importance to achieving stable prediction performance. the proposed models currently use only the samples of the time-series data to predict the future number of cases. a potential future direction to improve the estimation accuracy is to incorporate constraints such as infectious disease spread models, non-pharmaceutical interventions, and authority policies [2].
[3] christian szegedy, alexander toshev, and dumitru erhan, "deep neural networks for object detection," in advances in neural information processing systems.
[figures 7-15: daily error percentage over the last 31 days for each of the 12 countries. each figure's legend reports the average and standard deviation of the error (in percentage) for elm time-varying, elm, poly time-varying, and poly; the elm variants mostly remain within tens of percent, while the polynomial variants occasionally diverge to average errors of several thousand percent. fig. 15 caption: daily error percentage of the last 31 days of 12 countries for elm and polynomial regression.]
key: cord-024290-8z6us7v4 authors: allen, edward e.; farrell, john;
harkey, alexandria f.; john, david j.; muday, gloria; norris, james l.; wu, bo title: time series adjustment enhancement of hierarchical modeling of arabidopsis thaliana gene interactions date: 2020-02-01 journal: algorithms for computational biology doi: 10.1007/978-3-030-42266-0_11 sha: doc_id: 24290 cord_uid: 8z6us7v4 network models of gene interactions, using time course gene transcript abundance data, are computationally created using a genetic algorithm designed to incorporate hierarchical bayesian methods with time series adjustments. the posterior probabilities of interaction between pairs of genes are based on likelihoods of directed acyclic graphs. this algorithm is applied to transcript abundance data collected from arabidopsis thaliana genes. this study extends the underlying statistical and mathematical theory of the norris-patton likelihood by including time series adjustments. cell signaling is accomplished via networks of transcriptional changes that lead to synthesis of distinct sets of proteins, which cause changes in growth, development, or metabolism. treatments that elevate levels of hormones result in cascades of changes in gene expression driven by activation and synthesis of transcription factors which are required to turn on downstream genes. one approach to model these gene regulatory networks is to collect measurements of changes in abundance of gene transcripts across a time course. the expression of a gene encoding a transcriptional activator or repressor protein may signal to the next gene to either turn on or turn off downstream genes and their encoded proteins. thus, time course transcriptomic data sets contain important information about how genes drive these changes in biological networks. yet genome-wide transcript abundance assays examine tens of thousands of genes so identification of patterns or networks within these large data sets is difficult. 
it is also critical to filter the meaningful transcript changes in these data sets to remove genes whose responses are not above background or that are dissimilar due to biological or technical variation. yet even though the bioinformatics community has developed statistical methods to filter the data [9] , additional approaches are needed to identify the networks and patterns in these large data sets. an important modern approach to statistical modeling includes bayesian techniques involving likelihoods and posterior probabilities. here, we extend our previous work on this problem by incorporating time series adjustments in the computation of bayesian likelihoods. we apply this method to time course data generated in response to treatments that elevate the levels of the hormone ethylene in arabidopsis thaliana. we take advantage of a previously published genome-wide transcriptional data set [9] , subjected to rigorous filtering and from which all the genes predicted to encode transcription factors have been identified. the goal is to predict gene regulatory networks that control time-matched developmental changes. the results in this paper are novel for several reasons. first, the methods use the hierarchical nature of the data sets. for example, replicate data are not averaged. rather, the method constructs a model over all of the data that uses each replicate as a source of information. the assumption is that at each level of the hierarchy there are commonalities in the data and parameters. thus, the replicate data is not independent. second, the addition of time series adjustment to improve the independence of the model's residuals gives these techniques stronger statistical foundations. third, the combination of bayesian model averaging with a cutting edge genetic algorithm provides rigorous estimates of posterior probabilities for edges. 
these computational modeling algorithms are derived using rigorous mathematical and statistical techniques and are computationally efficient. the models produced are easily understandable. many different techniques for modeling non-hierarchical data using gene expression data have been proposed. an excellent recent survey on this subject was given by emily [4] . there are many techniques for modeling gene and protein networks-with various different properties-available in the literature. our technique in this paper is a bayesian regression type method. variations of bayesian modeling can be found in [7, 11, 19] . other methods that use types of regression include [2, 21] which focus on logistic regression techniques, and [22, 23] which use poisson regression. other approaches to modeling these types of problems include differential equations [1] and boolean modeling [14] . this bayesian likelihood computational algorithm incorporates additional important features from earlier versions. earlier variations included computing posterior probabilities for a single replicate [11] and for multiple replicates with both hierarchical [18] and independent [17] structures. over the course of this research, the search procedure has changed from metropolis hastings to genetic algorithms. genetic algorithms' execution times are typically polynomial rather than the doubly exponential execution time, in terms of the numbers of time points and genes, of metropolis hastings. this variation also uses a bayesian version of the cross generational elitist selection, heterogeneous recombination, cataclysmic mutation algorithm (chc) [6] . genetic algorithms are motivated by the operators of selection, crossover, and mutation. the chc variation does not allow the crossover of similar parents. once the population becomes too homogeneous, then a cataclysmic mutation event regenerates the population from the current most fit parents. 
the bayesian chc (bchc) implemented in this paper uses a hierarchical statistical construct (the norris-patton likelihood) as the fitness function. the hormone ethylene, whose precursor is acc, is known to activate root growth in arabidopsis thaliana [9]. transcription factors (tfs) are cellular proteins that bind to dna to turn genes either on (activation) or off (repression). developmental changes are controlled by these genes. the data set used in this modeling process was the complete set of abundance levels of the twenty-six tfs believed to be involved in the activation of root growth at eight time points after treatment with the ethylene precursor acc [9]. here, constructing an appropriate network model has potential agricultural applications, as it should lead to a more complete understanding of root development. three network modeling paradigms are generally considered in the literature: cotemporal, next state one step, and next state one and two steps. a next state one step model predicts the transcript abundance relationships between genes at time j based on the transcript abundance at time j − 1. in this paper, we will only consider next state one step models; for simplicity, we will refer to next state one step as next state. the time series adjusted (tsa) next state models are an amalgamation of next state modeling with standard time series adjustments [12]. the time series adjustment methodology makes the residuals (i.e., the estimated error terms) more independent. a directed graph g = (v, e) consists of a pair of collections: v, a set of vertices (or nodes); and e, a collection of directed edges between pairs of vertices. a cycle is a sequence v_1, e_1, v_2, e_2, . . . , v_k, e_k, v_1 of vertices and directed edges that begins and ends at the same vertex. directed acyclic graphs (dags) do not contain cycles. an example of a dag is given in fig. 1. in this modeling algorithm, dags form the mathematical foundation of our computational approach.
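the acyclicity requirement on candidate graphs can be checked with a standard topological sort; a minimal library-free sketch using kahn's algorithm (the paper's own implementation uses networkx):

```python
from collections import deque

def is_dag(n_vertices, edges):
    """Kahn's algorithm: a directed graph is acyclic iff every vertex can be
    emitted in a topological order. Returns (acyclic?, order-found-so-far)."""
    indeg = [0] * n_vertices
    adj = [[] for _ in range(n_vertices)]
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n_vertices) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return len(order) == n_vertices, order
```

a candidate graph that contains a cycle leaves some vertex with positive in-degree, so the order comes back short and the candidate is rejected (or repaired).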
the vertices of a dag represent genes and the directed edges are one-way relationships between pairs of vertices. when there is a directed edge from v_i to v_j, v_i is a parent of v_j. for any dag d with vertex set v = {v_1, v_2, · · · , v_n}, the vertices can be topologically sorted. this gives a total order on v in which every ancestor precedes its descendants. conditional probability gives that for any two events a and b, p(a, b) = p(a)p(b|a). similarly, the density function f for two continuous variables y_1 and y_2 is f(y_1, y_2) = f(y_1)f(y_2|y_1). recursively, using the order implied by topologically sorting the dag on the set of continuous variables specific to a particular dag d, let y_1 be the gene that cannot have any parents. let y_2 be the gene that can have at most the parent y_1. similarly, let y_h be the gene that can have parents from the collection {y_1, · · · , y_{h−1}}. therefore, if we let y_i represent the data of child i for all of the r replicates, we have for d: f(y_1, y_2, . . . , y_n|d) = f(y_1|d)f(y_2|y_1, d)f(y_3|y_1, y_2, d) · · · f(y_n|y_1, · · · , y_{n−1}, d). statistical regression models of response (child) data from predictor (parent) data over time nearly always have residuals that are correlated over time. this is usually due to the remaining influence of the previous time's response data. in complicated modeling situations (e.g., ours, where we need to obtain closed form likelihoods of dags within a hierarchical structure in order to produce posterior probabilities of edges), it is common to derive results as if the residuals were uncorrelated, as we have done in previous work. our previous work has shown utility for both simulated and biological data, but we now rigorously incorporate a time series adjustment into our model. this should result in substantially less correlated residuals and thus more accurate likelihoods for the dags. since these likelihoods are the foundations for the edges' estimated posterior probabilities, these estimates should also be improved.
our time series adjustment is an integer autoregressive adjustment of order 1 in the commonly used family of markov conditioning. it is a version of kedem's and fokianos' autoregressive model [12, page 184]. in our setting, this simply adds the child's data at the previous time as an additional regressor for the child's data at the current time. thus, much of the influence of the child's data at the previous time is regressed out, leaving less correlated, closer to independent, residuals from one time to the next. for each h, with 1 ≤ h ≤ n, f(y_h|y_1, y_2, . . . , y_{h−1}, d) gives the density of y_h given y_h's parents' data for dag d. now, let i y c be the data vector of any given child c from the i-th replicate. the vector i y c has dimension t, the number of utilized time points in the child c data set for a given replicate i. the symbol i x c is the t × k_c regressor matrix for i y c. for next state with time series adjustment, t is the number of time points per replicate minus one, since at time 1 the child data has no previous parent data nor previous child (tsa) data; so the utilized child data starts at time 2. the value of k_c is the number of parents of c plus two, since i x c has a separate column for each parent's data at the previous time, a column of 1's for the intercept, and a column of the child's data at the previous time (the time series adjustment). a k_c-dimensional slope vector for child c's regressors is i β c. the common within-replicate residual variance of child c is σ²_c. assumptions which detail the hierarchical structure are imposed on these quantities. the proof of theorem 1 uses the following lemmas, whose computations can be found in [16] (a thesis from our research group). we include the proof of lemma 2 to show how the computation of the likelihood includes the slope parameters i β c of each of the replicates separately. proof.
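the regressor matrix i x c just described can be assembled as follows; this is a sketch with numpy arrays standing in for one replicate's data, and the names are illustrative rather than the authors' code:

```python
import numpy as np

def tsa_design_matrix(child, parents):
    """Next-state design with time series adjustment: regress the child at
    time j on an intercept, each parent at time j-1, and the child itself
    at time j-1 (the autoregressive adjustment)."""
    T = len(child)                      # raw time points in the replicate
    y = child[1:]                       # utilized child data starts at time 2
    cols = [np.ones(T - 1)]             # intercept column
    cols += [p[:-1] for p in parents]   # each parent's value at the previous time
    cols.append(child[:-1])             # the child's own previous value (TSA)
    X = np.column_stack(cols)           # shape (T-1, number_of_parents + 2)
    return X, y
```

with eight time points and one parent, as in the data set above, this yields t = 7 utilized rows and k_c = 3 columns; the slope vector i β c would then be estimated from (X, y), e.g. via least squares within the hierarchical model.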
using integration, and letting |m| denote the determinant of the matrix m, we obtain the stated form. extending lemma 2 to the product of density functions used in lemma 1 gives the likelihood. note that g, v_0 and σ²_c are positive free parameters. in our modeling algorithm, we set g = v_0 = σ²_c = 1. the use of the time series adjusted next state norris-patton likelihood, along with a tailor-made genetic algorithm and bayesian model averaging, allows for the rigorous estimation of posterior probabilities for all gene pair interactions.
17: if indicator < 0 then
18: p(t) ← cataclysm(p(t))
19: indicator ← 50
20: end if
21: archive ← archive ∪ p(t)
22: end while
23: return archive
24: end procedure
simply put, a genetic algorithm (ga) takes the current population and produces the next generation using the operations of selection, crossover, and mutation [15]. individuals (i.e., dags) are automatically moved to the next generation with preference given to those with the higher likelihoods (the elitist strategy). the first population must be initialized. the genetic algorithm terminates after a specified number of iterations. the tbchc genetic algorithm is an extension of bchc [13], which was heavily influenced by chc [5]. the tbchc fitness function includes the next state time series adjustment. the tbchc operators of selection, crossover, mutation, and repair will be discussed in the following paragraphs. the population of each generation consists of a fixed number of dags. each dag represents gene relationships. the genetic algorithm's aim is to move from the current population of dags to a new generation where the overall quality improves (as measured by the norris-patton likelihood). the elitist strategy only moves the top 10% of dags from the current generation to the next, and the balance is filled by crossover. as tbchc iterates, all distinct dags are archived. the final gene interaction model is produced from this archived collection.
generally, the selection operator chooses which members of the current population can potentially contribute children to the next generation. in fig. 2, selection is accomplished through a random pairing of all parents in the current population (lines 8-10). by assuming equal prior probabilities for the dags, the likelihood of a given dag d is proportional to d's npl [3]. thus, the fitness of a candidate d can be computed using the npl. the crossover operator (line 12) exchanges genetic information (i.e., directed edges) between two parents, producing two new offspring. the edges to be exchanged are chosen randomly. there is one caveat: if the two parents are too similar (as determined by the hamming distance between them), then the two selected parent dags are not allowed to produce offspring (line 11). in a simple genetic algorithm, all selected parents are allowed to produce offspring. this tbchc prohibition of mating by similar parents may result in fewer dags in the next population than in the current population. since the modeling process is based on dags, if the crossover operator introduces a cycle in an offspring, a repair operator is applied. selection and crossover are used exclusively in tbchc until the population becomes too similar. at that point, cataclysmic mutation (line 17) is applied to reset the population by creating a new population of dags from the top 10% of dags by npl. there are no known techniques for assigning optimal values to the genetic algorithm parameters. however, experience and the literature give general criteria for appropriate values. still, values are often determined on a case by case basis. the tbchc algorithm parameters include the following: 20 parallel executions, each with 600 generations; the number of initial dags is 400; the crossover probability is 0.30; and the number of parents of any given node is limited to 3.
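a compact sketch of the chc-style loop described above; this is illustrative only: candidate graphs are restricted here to edges (i, j) with i < j so that every individual is trivially acyclic (standing in for the repair operator), and the fitness function is supplied by the caller rather than being the norris-patton likelihood:

```python
import random

def chc_ga(fitness, n_genes, pop_size=40, generations=60, elite_frac=0.10,
           cross_p=0.30, hamming_threshold=2, seed=1):
    """CHC-flavoured GA: elitism, incest prevention (too-similar parents may
    not mate), and cataclysmic mutation that restarts from the elite."""
    rng = random.Random(seed)
    all_edges = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)]

    def random_dag():
        return frozenset(e for e in all_edges if rng.random() < 0.3)

    def crossover(a, b):
        # exchange a random subset of the edges on which the parents differ
        swap = {e for e in a ^ b if rng.random() < cross_p}
        return frozenset(a ^ swap), frozenset(b ^ swap)

    pop = [random_dag() for _ in range(pop_size)]
    archive = set(pop)                               # every distinct DAG visited
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:max(1, int(elite_frac * pop_size))]
        rng.shuffle(pop)
        children = []
        for a, b in zip(pop[::2], pop[1::2]):
            if len(a ^ b) > hamming_threshold:       # incest prevention
                children.extend(crossover(a, b))
        if not children:                             # cataclysmic mutation
            while len(children) < pop_size - len(elite):
                base = set(rng.choice(elite))
                base ^= {rng.choice(all_edges)}      # flip one random edge
                children.append(frozenset(base))
        pop = elite + children[:pop_size - len(elite)]
        archive.update(pop)
    best = max(archive, key=fitness)
    return best, archive
```

the archive, not just the final population, is what feeds the model averaging step: every distinct high-likelihood graph visited contributes to the edge posteriors.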
cataclysmic mutation causes the population of dags to be replaced by dags generated by crossover and mutation on the top 10% of the population, restoring the candidate class to 400. this tbchc algorithm is implemented in python 3.0 using the networkx [8] and dispy [20] packages. it is important to realize that each directed edge in the model is labeled by a number in the interval [0, 1] indicating the posterior bayesian probability that the associated relationship exists in the biological network. using bayesian statistics, the posterior probability of an edge is estimated as the sum of the likelihoods of the archived dags that contain it, divided by the sum of the likelihoods of all archived dags, which simply and appropriately weights each visited dag d according to its likelihood. this methodology requires equally likely priors, since in such a situation the posterior for d is proportional to its likelihood [3]. in order for this estimate to reflect its true value, it is necessary that ar contain a large and varied collection of dags of high likelihood. using the transcript abundance data for 26 arabidopsis thaliana genes stimulated by acc, gene interaction models for next state with and without time series adjustment were computationally created, shown in fig. 3. each edge is labeled by its posterior probability. figure 4 provides comparisons of three similar models to those given in fig. 3. figure 4(a) shows a stronger and tighter distribution of posterior probabilities than fig. 4(b). there is significant agreement across the models for average posterior probabilities exceeding 0.8 and less than 0.2. however, for average posterior probabilities between 0.2 and 0.8 there is a great deal of variance, which reflects the lack of a strong posterior probability over this range. a typical underlying assumption of statistical analysis is that the residuals are independent [3, page 737]. it is well understood, however, that the residuals associated with time course data are not usually independent.
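the bayesian model averaging step can be sketched directly from the archive; log-likelihoods are assumed as input here, and each dag is reduced to its set of edges (names are illustrative):

```python
from math import exp

def edge_posteriors(archive_loglikes):
    """Bayesian model averaging over an archive of DAGs: with equal priors,
    the posterior probability of an edge is the likelihood-weighted fraction
    of archived DAGs containing it. `archive_loglikes` maps each DAG (a
    frozenset of edges) to its log-likelihood."""
    m = max(archive_loglikes.values())                # stabilise the exponentials
    weights = {d: exp(ll - m) for d, ll in archive_loglikes.items()}
    total = sum(weights.values())
    posts = {}
    for d, w in weights.items():
        for e in d:
            posts[e] = posts.get(e, 0.0) + w
    return {e: p / total for e, p in posts.items()}
```

an edge present in every high-likelihood graph gets a posterior near 1; an edge present only in low-likelihood graphs is pulled toward 0, which is exactly the [0, 1] label attached to each edge in the figures.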
by incorporating time series adjustments into the modeling process, the independence of the residuals is much improved, thus yielding a less approximated, more accurate likelihood function. the continuation of this research includes four tasks. first, the computational networks have been sent to the muday lab for biological investigation, confirmation and interpretation. second, in this paper, we investigated the enhancement of time series adjustment on a next state one step model. there are two other time paradigms, next state one and two steps and cotemporal, each of which has a time series adjustment analogue and a corresponding norris-patton likelihood. comparing and contrasting the computational results of these three distinct modeling methods, as well as their biological interpretations, is important in understanding the gene interaction models developed using this methodology. third, we will further consider higher order autoregressive adjustments to continue improving the independence of the residuals. fourth, effort is underway to implement nonuniform priors in the modeling techniques. this would permit construction of gene interaction models that reflect relationships found in the literature.
[1] modeling gene regulation networks using ordinary differential equations
[2] detecting gene-gene interactions that underlie human diseases
[3] probability and statistics, 4th edn
[4] a survey of statistical methods for gene-gene interaction in case-control genome-wide association studies
[5] the chc adaptive search algorithm: how to have safe search when engaging in nontraditional genetic recombination
[6] evolutionary computation 1 - basic algorithms and operators
[7] using bayesian networks to analyze expression data
[8] exploring network structure, dynamics, and function using networkx
[9] identification of transcriptional and receptor networks that control root responses to ethylene
[10] bayesian model averaging: a tutorial
[11] continuous cotemporal probabilistic modeling of systems biology networks from sparse data
[12] regression models for time series analysis
[13] a bchc genetic algorithm model of cotemporal hierarchical arabidopsis thaliana gene interactions
[14] stochastic boolean networks: an efficient approach to modeling gene regulatory networks
[15] an introduction to genetic algorithms
[16] bayesian interaction and associated networks from multiple replicates of sparse time-course data
[17] bayesian probabilistic network modeling from multiple independent replicates
[18] hierarchical bayesian system network modeling of multiple related replicates
[19] bayesian network analysis of signaling networks: a primer
[20] dispy: distributed and parallel computing with/for python
[21] plink: a toolset for whole-genome association and population-based linkage analysis
[22] boost: a fast approach to detecting gene-gene interactions in disease data
[23] gboost: a gpu-based tool for detecting gene-gene interactions in genome-wide case control studies
acknowledgments. the authors thank the national science foundation for their support with a grant, nsf#1716279. john farrell thanks wake forest university for support as a wake forest fellow for summer 2019.
key: cord-027134-1k6oegu4 authors: turky, ayad; rahaman, mohammad saiedur; shao, wei; salim, flora d.; bradbrook, doug; song, andy title: deep learning assisted memetic algorithm for shortest route problems date: 2020-05-25 journal: computational science iccs 2020 doi: 10.1007/978-3-030-50426-7_9 sha: doc_id: 27134 cord_uid: 1k6oegu4 finding the shortest route between a pair of origin and destination is known to be a crucial and challenging task in intelligent transportation systems. current methods assume fixed travel time between any pairs, thus the efficiency of these approaches is limited because the travel time in reality can dynamically change due to factors including the weather conditions, the traffic conditions, the time of the day and the day of the week, etc. to address this dynamic situation, we propose a novel two-stage approach to find the shortest route. firstly deep learning is utilised to predict the travel time between a pair of origin and destination. weather conditions are added into the input data to increase the accuracy of travel time predicition. secondly, a customised memetic algorithm is developed to find shortest route using the predicted travel time. the proposed memetic algorithm uses genetic algorithm for exploration and local search for exploiting the current search space around a given solution. the effectiveness of the proposed two-stage method is evaluated based on the new york city taxi benchmark dataset. the obtained results demonstrate that the proposed method is highly effective compared with state-of-the-art methods. finding shortest routes is crucial in intelligent transportation systems. shortest route information can be utilised to enable route planners to compute and provide effective routing decisions [8, 11, 14, 16, 24] . however, shortest route computation is a challenging task partially due to dynamic environments [3] . 
for instance, the shortest path is impacted by various spatio-temporal factors, which are dynamic in nature, including weather, the time of the day, and the day of the week. this makes current shortest route computation techniques ineffective [3, 7]. moreover, it is challenging to incorporate these dynamic factors into shortest route computation. in recent years, the proliferation of pervasive technologies has enabled the collection of spatio-temporal big data associated with user mobility and travel routes in a real-time manner [15]. modern cars are equipped with telematics devices, including in-car gps (global positioning system) devices, which can be used as a source of valuable information in traffic modelling [23]. the traces generated by gps devices have been leveraged in many scenarios, such as spatio-temporal context recognition, taxi-passenger queue time prediction, the study of city dynamics, and transport demand estimation [3, 12, 13, 17, 23]. one important aspect of finding shortest routes in realistic environments, which are inherently dynamic, is travel time prediction [8, 22]. due to the dynamic nature of the travel routes, traditional machine learning methods cannot be applied directly to travel time prediction. one of the key challenges for traditional machine learning models is the unavailability of hand-crafted features, which requires substantial involvement of domain experts. one relevant approach is the recent use of evolutionary algorithms in other domains to work along with deep learning models for effective feature extraction and selection [18-21]. in this study, we aim to identify relevant features for shortest route finding between an origin and destination, leveraging the auto-feature generation capability of deep learning. thereby we propose a novel two-stage architecture for the travel time prediction and route finding task.
in particular, we design a customized memetic algorithm to find the shortest route based on the predicted travel time from the earlier stage. the contributions of this research are summarised as follows:
- a novel two-stage architecture for shortest route finding under dynamic environments.
- development of a deep learning method to predict the travel time between an origin-destination pair.
- a customised memetic algorithm to find the shortest route using the predicted travel time.
the rest of the paper is organized as follows. in sect. 2, we present our proposed methodology for this study. section 3 describes the experimental settings, which is followed by the discussion of experimental results in sect. 4. finally, we conclude the paper in sect. 5. in this paper, we propose a deep learning assisted memetic algorithm to solve shortest route problems. the proposed method has two stages: (1) a prediction stage and (2) an optimisation stage. the prediction stage is responsible for predicting the travel times between a pair of origin and destination along the given route by using deep learning. the second stage uses a memetic algorithm to actually find the shortest path to visit all locations along the given route. in the following subsections, we discuss the main steps of the proposed method and the components of each stage in detail. figure 1 shows our proposed approach. conventional route finding methods assume fixed cost or travel time between any pair of points. that is rarely the case in reality. one approach to the dynamic travel time issue is prediction. in this work, we incorporate the weather data along with the temporal-spatial data to develop a deep learning predictive approach. the goal of the proposed predictive approach is to predict future travel time between any points in the problem based on historical observations and weather conditions.
specifically, given a group of historical travel time data, weather data and road network data, the aim is to predict the travel time between a source s_i and a destination d_i, i ∈ {1, 2, ..., n}, where n is the number of locations in the road network. our predictive approach tries to predict the travel time at t+1 based on the given data at t. the proposed predictive approach has three parts: input data, data cleaning and aggregation, and the prediction approach. figure 2 shows the deep learning approach. input data. in this work, we use data from three different sources. the data involves around 1.5 million trip records. these include the travel time data, weather data and road network data.
- travel time data. the travel times between different locations were collected using 2016 nyc yellow cab trip record data.
- weather data. we use the weather data in new york city, 2016. the data involves: date, maximum temperature, minimum temperature, average temperature, precipitation, snow fall and snow depth.
- road network data. the road network data involves temporal and spatial information as follows:
• id - a trip identifier.
• vendor id - a code indicating whether the provider is involved with the trip record.
• pickup date-time - date and time when the meter was started.
• drop-off date-time - date and time when the meter was disconnected.
• passenger count - indicates the total number of riders in the vehicle.
• pickup longitude - the longitude of the picked-up passenger.
• pickup latitude - the latitude of the picked-up passenger.
• dropoff longitude - the longitude of the dropped-off passenger.
• dropoff latitude - the latitude of the dropped-off passenger.
• store flag - indicates if the trip record was saved in vehicle memory before sending to the vendor, where y = store and forward; n = not a store and forward trip.
• trip duration - duration of the trip in seconds.
data cleaning and aggregation. this process involves removal of all error values, outliers, imputation of missing values and data aggregation.
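a minimal sketch of the cleaning rule used in this section, i.e. mean imputation followed by removal of values outside average ± 2 standard deviations, applied one column at a time (the paper's full pipeline details are not shown):

```python
import numpy as np

def clean_column(values):
    """Impute missing values (NaN) with the column mean, then drop points
    outside mean ± 2 standard deviations (treated as outliers)."""
    x = np.asarray(values, dtype=float)
    mean = np.nanmean(x)
    x = np.where(np.isnan(x), mean, x)               # mean imputation
    mu, sd = x.mean(), x.std()
    keep = (x >= mu - 2 * sd) & (x <= mu + 2 * sd)   # outlier bounds
    return x[keep]
```

after cleaning, the travel time, weather and road network columns are aligned per time step before being fed to the network.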
To facilitate the prediction, we bound the data to the range from average − 2 × standard deviation to average + 2 × standard deviation. Values outside this range are considered outliers and are removed. Missing values are imputed with the average value. Any overlapping pickup and drop-off locations are also removed. In the aggregation step, we combine the travel time data, weather data and road network data at each time step so that they can be fed into our deep networks.

Prediction approach. The main goal of this step is to provide high-accuracy prediction of the travel times between different locations in the road network. The processed and aggregated data is provided as input to the prediction approach. Once the prediction model is trained, it is ready to predict the travel times between given locations. In this work, we propose a deep learning technique based on a feedforward neural network to build our prediction approach. The deep neural network consists of one input layer, multiple hidden layers and one output layer. Each layer (input, hidden and output) involves a set of neurons. The total number of neurons in the input layer equals the number of input variables in our input data. The output layer has one single neuron, which represents the predicted value. The deep neural network has m hidden layers, each with k neurons. The input layer takes the input data and feeds it into the hidden layers; the output of the hidden layers is used as input to the output layer. Given the input data x = (x_1, ..., x_n) and the output value y, the prediction approach aims to find the estimated value y_est. In its simplest form,

y_est = f(w · x + b),

where w is the weight vector and b is the bias. Using a four-layer (one input, two hidden and one output) neural network as an example, y_est can be calculated as

y_est = f_3(W_3 · f_2(W_2 · f_1(W_1 · x + b_1) + b_2) + b_3),

where the outermost result is the output of the network and f_1, f_2, f_3 are the activation functions of the successive layers.
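The layered computation above can be illustrated with a small NumPy sketch. The dimensions, the ReLU activation and the random weights below are illustrative assumptions, not the paper's trained model:

```python
import numpy as np

def relu(z):
    """ReLU activation, a common choice for the activation function f."""
    return np.maximum(0.0, z)

def forward(x, params):
    """Forward pass of a feedforward network: each hidden layer applies
    f(W·h + b); the single output neuron is linear, so with two hidden
    layers y_est = W3·f2(W2·f1(W1·x + b1) + b2) + b3."""
    h = x
    for W, b in params[:-1]:          # hidden layers
        h = relu(W @ h + b)
    W_out, b_out = params[-1]         # output layer (one neuron)
    return W_out @ h + b_out

# Tiny illustrative network: 2 inputs -> 3 -> 3 -> 1 output
rng = np.random.default_rng(0)
params = [
    (rng.standard_normal((3, 2)), rng.standard_normal(3)),
    (rng.standard_normal((3, 3)), rng.standard_normal(3)),
    (rng.standard_normal((1, 3)), rng.standard_normal(1)),
]
y_est = forward(np.array([0.5, -1.0]), params)   # predicted value (arbitrary units)
```

In practice the weights would be learned by gradient descent rather than drawn at random; the paper delegates this to Keras/TensorFlow.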
In this work, Keras [1] on top of TensorFlow [2] is used to develop our prediction model.

This subsection presents the proposed memetic algorithm (MA) for shortest route problems. MA is a population-based metaheuristic that combines the strengths of a local search algorithm with a population-based metaheuristic to improve the convergence process [9, 10]. In this paper, we combine a genetic algorithm (GA) and a local search (LS) algorithm to form our proposed MA. The GA is responsible for exploring new areas in the search space of solutions; LS is used to accelerate the search convergence. The pseudocode of the proposed MA is shown in Algorithm 1. An overview of the process is given below, followed by a detailed description of each step.

Our proposed algorithm starts by setting the parameters, creating a population of solutions, calculating the quality of each solution and identifying the best solution in the current population. Next, the main steps of the MA iterate over a number of generations until the stopping criterion is met. At each generation, good solutions are selected from the population by the selection procedure. The crossover operator is then applied to the selected solutions to generate new solutions, after which the mutation operator randomly changes them. A repair procedure checks the feasibility of the generated solutions and fixes the infeasible ones, as some solutions may no longer be feasible. Afterwards, a local search algorithm is invoked to iteratively improve the current solutions. If one of the stopping criteria is satisfied, the whole MA procedure stops and the current best solution is returned as the output. Otherwise, the fitness of the current pool of solutions is calculated and the population is updated, since new solutions have been generated by crossover, mutation, the repair procedure and local search. A new iteration then starts from the selection procedure again.
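The generational loop just described (selection, crossover, mutation, repair, local search, population update) can be sketched as follows. The function signature and the way operators are passed in as plain functions are our own simplification of the authors' Algorithm 1, not a reproduction of it:

```python
import random

def memetic_algorithm(init_population, fitness, select, crossover, mutate,
                      repair, local_search, generations=100):
    """Skeleton of the MA loop: each generation selects parents, applies
    crossover and mutation, repairs infeasible offspring, runs local search,
    and tracks the best (lowest-fitness) solution found so far."""
    population = [repair(s) for s in init_population]
    best = min(population, key=fitness)
    for _ in range(generations):
        parents = [select(population) for _ in range(len(population))]
        offspring = []
        for a, b in zip(parents[::2], parents[1::2]):
            offspring.extend(crossover(a, b))
        offspring = [local_search(repair(mutate(c))) for c in offspring]
        population = offspring or population
        gen_best = min(population, key=fitness)
        if fitness(gen_best) < fitness(best):
            best = gen_best
    return best

# Toy demo with trivial operators on integer lists (fitness = total "travel time")
random.seed(1)
pop = [[5, 5, 5], [9, 1, 4], [3, 3, 3]]
def _mutate(s):
    s = s[:]
    s[random.randrange(len(s))] = random.randrange(10)
    return s
best = memetic_algorithm(
    pop, sum,
    select=lambda P: min(random.sample(P, 2), key=sum),
    crossover=lambda a, b: (a[:1] + b[1:], b[:1] + a[1:]),
    mutate=_mutate,
    repair=lambda s: s,
    local_search=lambda s: s,
    generations=50,
)
```

The toy operators are placeholders; the paper's concrete tournament selection, single-point crossover, repair and steepest descent components are described in the following steps.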
Parameter setting. The main parameters of the proposed MA are initialised in this step. The proposed MA has several parameters: the population size, the number of generations, the crossover rate, the mutation rate and the number of non-improvement iterations for the local search.

Initial population. The initial population is randomly generated. Each solution is represented as one chromosome, i.e., a one-dimensional array in which each cell contains an integer representing a location.

Fitness function. In this step, the fitness value of each solution is calculated based on the objective function. The better the fitness value, the higher the chance that the solution will be selected to reproduce the next generation of solutions. For shortest route problems, the fitness is the total travel time between the origin and destination locations; therefore, the solution with the shortest travel time is the best.

Selection procedure. This step is responsible for selecting two solutions for producing the next generation. In this paper, we adopt the traditional tournament selection mechanism [4-6]. The tournament size is set to 2, meaning each tournament has two solutions competing with each other. At each call, two solutions are randomly selected from the current population and the one with the better fitness value (i.e., the shorter travel time) is added to the reproduction pool.

Crossover. This step generates new solutions by taking the selected solutions and mixing their genetic material to produce new offspring. In this paper, the single-point crossover method is used, which swaps genetic material at a single point [5, 6]. It first finds a common point between the source node and the destination node, and then all points after the common point are exchanged between the two solutions, resulting in two offspring.

Mutation. The mutation operator helps explore a large search space by producing random changes in various solutions. In this paper, we use a one-point mutation operator [5].
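Tournament selection with size 2 and single-point crossover, as described above, might look like the following sketch (our illustration, not the authors' code). Note that because a route is a sequence of locations, naive single-point crossover can duplicate or drop locations, which is exactly what the repair procedure described next is for:

```python
import random

def tournament_select(population, fitness, k=2):
    """Pick k random solutions; return the one with the best
    (lowest travel time) fitness value."""
    contestants = random.sample(population, k)
    return min(contestants, key=fitness)

def single_point_crossover(parent_a, parent_b):
    """Swap the tails of two parent routes at a randomly chosen point,
    producing two offspring. Offspring may need repair afterwards."""
    point = random.randrange(1, len(parent_a))
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b
```

A one-point mutation would similarly pick a random position and alter the tail, as described in the next step.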
A mutation point is randomly selected, and all points after the selected point are replaced with a random sequence.

Repair procedure. The aim of this step is to turn infeasible solutions into feasible ones. After the crossover and mutation operations, the resulting solutions may become infeasible [5, 6]. The MA in our experiments therefore includes a repair procedure that ensures all infeasible solutions are repaired.

Local search algorithm. The main role of this step is to improve the convergence of the search process in order to attain higher-quality solutions [9, 10]. The local search algorithm used in this paper is the steepest descent algorithm, a simple variant of gradient descent. It starts with a given solution as input and uses a neighbourhood structure to move the search process to other, possibly better, solutions. It uses an accept-only-improving acceptance criterion, whereby only a better solution is used as a new starting point: given s_i, it applies a neighbourhood structure to create s_n, and replaces s_i with s_n if s_n is better. The pseudocode of the steepest descent algorithm is shown in Algorithm 2.

Stopping condition. If the stopping condition is met, the search process terminates and the best found solution is returned. Our proposed memetic algorithm stops when the maximum number of generations is reached; otherwise, go to Step 24.

In this section, the parameter settings of the deep learning model and the proposed algorithm are provided. The parameter values were selected empirically based on our preliminary experiments, in which we tested the deep learning model and the proposed algorithm with different parameter combinations. The values of these parameters were determined one by one by manually changing the value of one parameter while fixing the others; the best values for all parameters were then recorded.
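The steepest descent local search described earlier, instantiated for routes with a pairwise-swap neighbourhood, could be sketched as below. The neighbourhood choice and the toy cost function are our illustrative assumptions; the paper does not specify its neighbourhood structure:

```python
from itertools import combinations

def steepest_descent(route, cost):
    """Steepest-descent local search over the pairwise-swap neighbourhood:
    at each step move to the best neighbouring route; stop as soon as no
    neighbour improves on the current solution (accept-only-improving)."""
    best, best_cost = list(route), cost(route)
    while True:
        neighbours = []
        for i, j in combinations(range(len(best)), 2):
            nb = best[:]
            nb[i], nb[j] = nb[j], nb[i]   # swap two positions
            neighbours.append(nb)
        if not neighbours:
            return best
        cand = min(neighbours, key=cost)  # steepest: best neighbour overall
        cand_cost = cost(cand)
        if cand_cost < best_cost:
            best, best_cost = cand, cand_cost
        else:
            return best                   # local optimum reached
    
```

Within the MA, `cost` would be the predicted total travel time of the route; a non-improvement counter, as parameterised above, could replace the strict local-optimum stop.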
The final parameter values of the deep learning model and the proposed algorithm are presented in Tables 1 and 2.

This section is divided into two subsections. The first examines the performance comparison between the deep learning approach and other machine learning models (Sect. 4.1). The second assesses the benefit of incorporating the proposed components on search performance (Sect. 4.2).

In this paper, we implemented a number of machine learning models and compared their results with those of the proposed deep learning model. We tested the following methods: XGBoost, random forest, artificial neural network and multivariate regression. The root-mean-squared error (RMSE) was used as the evaluation metric. Table 3 shows the results in terms of RMSE on the NYC taxi dataset; the best obtained result is highlighted in bold. From Table 3, it can be seen that our deep prediction model is superior to the other machine learning models in terms of RMSE. The lowest RMSE, 11.01, is achieved by our approach, followed by 21.34 for random forest, 24.06 for XGBoost, 27.19 for multivariate regression and 70.21 for the artificial neural network. This good result can be attributed to the fact that deep learning considers all input features and utilises the best ones through its internal learning process. The other machine learning methods, by contrast, require a feature engineering step to identify the best subset of features, which is very time-consuming and requires a human expert.

This section evaluates the effectiveness of the machine learning models and the proposed memetic algorithm. To this end, the genetic algorithm (GA) and memetic algorithm (MA) are tested with different machine learning models and compared against each other.
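The RMSE metric used for this comparison is straightforward to compute; a minimal sketch:

```python
from math import sqrt

def rmse(y_true, y_pred):
    """Root-mean-squared error between actual and predicted travel times."""
    assert len(y_true) == len(y_pred)
    return sqrt(sum((a - p) ** 2 for a, p in zip(y_true, y_pred)) / len(y_true))
```

Lower values indicate predictions closer to the observed travel times, which is why 11.01 beats 21.34 and above in Table 3.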
The compared configurations are: GA with XGBoost, GA with random forest, GA with artificial neural network, GA with multivariate regression, GA with the deep prediction model, MA with XGBoost, MA with random forest, MA with artificial neural network, MA with multivariate regression and MA with the deep prediction model. The main aim is to evaluate the benefit of using our deep prediction model and the local search algorithm within MA. To ensure a fair comparison between the compared algorithms, the initial solution, number of runs, stopping condition and computing resources are the same for all instances. All algorithms were executed for 30 independent runs over all instances. We used 4 instances with different numbers of locations, ranging between 500 and 2000, which can be seen as small, medium, large and very large.

The computational comparisons of the above algorithms are presented in Tables 4 and 5. The comparison is in terms of the best cost (travel time) and standard deviation (std) for each number of locations, where lower is better; the best results are highlighted in bold. Close scrutiny of Tables 4 and 5 reveals that the proposed MA with the deep learning approach outperforms the other algorithms on all instances. From Tables 4 and 5, we can make the following observation:

- GA with the deep prediction model obtained better results than GA with all other prediction models across all instances. This justifies the benefit of using the deep learning approach to predict the travel time, and of the proposed memetic algorithm to exploit the search space around the given solution.

In this study, we proposed a novel two-stage approach for finding the shortest route in a dynamic environment where travel times change. First, we developed a deep learning method to predict the travel time between the origin and destination.
We also added weather conditions to the input, showing that our approach can predict the travel time more accurately. Second, a customised memetic algorithm was developed to find the shortest route using the predicted travel times. The effectiveness of the proposed method was evaluated on the New York City taxi dataset. The obtained results lead to the conclusion that the proposed two-stage shortest route approach is effective compared with conventional methods, and that the proposed deep prediction model and memetic algorithm are beneficial.

References

- TensorFlow: large-scale machine learning on heterogeneous systems
- DeepIST: deep image-based spatio-temporal network for travel time estimation
- A comparative analysis of selection schemes used in genetic algorithms
- Genetic algorithms and machine learning
- Genetic algorithms
- Travel time estimation without road networks: an urban morphological layout representation approach
- Multi-task representation learning for travel time estimation
- On evolution, search, optimization, genetic algorithms and martial arts: towards memetic algorithms. Caltech Concurrent Computation Program, C3P Report
- Memetic algorithms and memetic computing optimization: a literature review
- Solving multiple travelling officers problem with population-based optimization algorithms
- Predicting imbalanced taxi and passenger queue contexts in airport
- Queue context prediction using taxi driver knowledge
- CoAct: a framework for context-aware trip planning using active transport
- Using big spatial data for planning user mobility
- CAPRA: a contour-based accessible path routing algorithm
- Wait time prediction for airport taxis using weighted nearest neighbor regression
- Optimising deep belief networks by hyper-heuristic approach
- An evolutionary hyper-heuristic to optimise deep belief networks for image reconstruction
- Evolutionary model construction for electricity consumption prediction
- Multi-resolution selective ensemble extreme learning machine for electricity consumption prediction
- When will you arrive? Estimating travel time based on deep neural networks. In: Thirty-Second AAAI Conference on Artificial Intelligence
- Ridesourcing systems: a framework and review
- Learning to estimate the travel time

Acknowledgements. This work is supported by the Smarter Cities and Suburbs Grant from the Australian Government and the Mornington Peninsula Shire Council.

key: cord-033473-z79bt8hp
authors: Grote, Gudela; Pfrombeck, Julian
title: Uncertainty in Aging and Lifespan Research: COVID-19 as Catalyst for Addressing the Elephant in the Room
date: 2020-09-28
journal: Work Aging Retire
doi: 10.1093/workar/waaa020
doc_id: 33473
cord_uid: z79bt8hp

Uncertainty is at the center of debates on how to best cope with the COVID-19 pandemic. In our exploration of the role of uncertainty in current aging and lifespan research, we build on an uncertainty regulation framework that includes both reduction and creation of uncertainty as viable self-regulatory processes.
In particular, we propose that future time perspective, a key component in models of successful aging, should be reconceptualized in terms of uncertainty regulation. We argue that by proactively regulating the amount of uncertainty one is exposed to, individuals' future time perspective can be altered. We show how extant research might be (re)interpreted based on these considerations and suggest directions for future research, challenging a number of implicit assumptions about how age and uncertainty are interlinked. We close with some practical implications for individuals and organizations for managing the COVID-19 crisis.

The COVID-19 pandemic has painfully exposed our global vulnerability. We are all called upon to cope with the ensuing uncertainties. It might be considered a particularly inappropriate time to discuss potential benefits of uncertainty. However, there is also much debate about what we can learn from this massive change in the way we live and work for a more positive future. We follow this reasoning and propose a model of uncertainty regulation which includes mechanisms that both reduce and create uncertainty, and use it as a basis for extending research and practice for successful aging. Notably, some age-related research has already exposed positive attitudes toward uncertainty by examining young children's curiosity and desire to learn (e.g., Kidd & Hayden, 2015; Oudeyer & Smith, 2016) and older adults' willingness to proactively approach new identities as part of their transition to retirement (e.g., Bordia, Read, & Bordia, 2020). There also appears to be a direct positive link between age and tolerance for uncertainty (Basevitz, Pushkar, Chaikelson, Conway, & Dalton, 2008; Laguerre & Barnes-Farrell, 2019). We take these findings as inspiration to explore the role of uncertainty in aging and lifespan research.
Consistent with Griffin and Grote's (in press) uncertainty regulation model, we consider that individuals may not always reduce uncertainty, but rather regulate uncertainty toward an optimal level, which contributes to fostering a more positive future time perspective as a crucial resource for successful aging. We aim to delineate new avenues for aging and lifespan research regarding uncertainty and to promote a fuller understanding of individuals' uncertainty management, which is particularly relevant in difficult times such as the current pandemic. Adopting such a fresh look at uncertainty may raise awareness of opportunities amidst the many personal, social, and economic threats.

In this commentary, we start with a brief introduction to Griffin and Grote's (in press) uncertainty regulation model. We then discuss future time perspective as a key component of self-regulatory processes in aging and position it within an uncertainty regulation framework. We show how extant research that touches on the age-uncertainty relationship might be (re)interpreted based on this framework. We propose directions for future research, challenging a number of implicit assumptions about how age and uncertainty are interlinked. We close with a few practical considerations for individuals and organizations that are relevant for managing the COVID-19 crisis but should also be applicable in happier times.

Griffin and Grote (in press) postulate that individuals not only self-regulate their efforts to achieve certain goals at work but also regulate the amount of uncertainty they are exposed to in this process. They outline a work performance model in which individuals align their preferred level of endogenous uncertainty (that is, uncertainty over which individuals have immediate control) with the requirements for uncertainty management inherent in the task they are trying to accomplish.
Griffin and Grote argue that the appraisal of uncertainty can create an aversive and/or a desirable state depending on individuals' predispositions and situational demands. Based on that appraisal, individuals reduce or increase uncertainty in a self-regulatory feedback loop. Whereas uncertainty reduction aims to (re)establish predictability and control, uncertainty creation is founded in the desire to enhance learning opportunities and to expand one's vision of possible futures well beyond the currently known, in an act of expansive agency. In Griffin and Grote's model, a second self-regulatory loop is postulated through which individuals align the regulation of endogenous uncertainty with the requirements stemming from exogenous uncertainty. Exogenous uncertainty is largely determined by the broader environment, such as uncertainties at the macroeconomic level caused by the current pandemic. In the following, we focus on the self-regulatory cycles of managing endogenous uncertainty. Especially in relation to future time perspective, we see endogenous uncertainty regulation at the center of intraindividual processes linked to individuals' aging experience. Toward the end of this commentary, we broaden our perspective and discuss how individuals' uncertainty regulation may help in managing some of the exogenous challenges related to the COVID-19 crisis.

Griffin and Grote (in press) developed their model of uncertainty regulation in line with the fundamental principles of self-regulation.
Most theories in lifespan development, on which aging research is based, also build on self-regulation as the core process through which individuals plan and implement action in pursuit of valued goals, such as the motivational theory of lifespan development (Heckhausen, Wrosch, & Schulz, 2010), the theory of selection, optimization, and compensation (Baltes, 1997; Baltes & Baltes, 1990), and socioemotional selectivity theory (Carstensen, 1991, 2006; Carstensen, Isaacowitz, & Charles, 1999). Opportunities for self-determined goal striving and sufficient control and self-efficacy are key for successful personal development and aging. However, such opportunities and the capabilities to capitalize on them are assumed to dwindle across the life course, beginning in midlife with the proverbial midlife crisis (Heckhausen, 2001) and declining more rapidly in old age (Heckhausen et al., 2010). A variety of compensatory processes to cope with the loss of opportunities and primary control have been theorized, which center on individuals' adaptive capacity and a shift from achievement goals to a search for emotionally rewarding experiences and larger meaning. For example, the motivational theory of lifespan development suggests an optimization of primary and secondary control strategies by means of adaptive goal engagement and disengagement (Heckhausen et al., 2010), where primary control aims at changing external conditions to better fit personal needs and interests, and secondary control does the opposite, that is, changes the self to better cope with external forces. The selection, optimization, and compensation model proposes the selection of goals, the optimization of skills or resources, and the compensation of aging-related resource losses as coping strategies. Socioemotional selectivity theory emphasizes emotion regulation, with a focus on emotionally rewarding experiences and closer relationships in later age (Carstensen, 2006).
Much of this adaptation is assumed to hinge on individuals' future time perspective (Carstensen, 1991, 2006; Carstensen et al., 1999), where a more open-ended future time perspective, which includes more opportunities and a longer timeframe, promotes successful lifespan development and aging (Henry, Zacher, & Desmette, 2017; Kooij, Kanfer, Betts, & Rudolph, 2018; Rudolph, Kooij, Rauvola, & Zacher, 2018). However, with progressing chronological age, individuals have persistently been found to perceive fewer opportunities and less remaining time for making use of those opportunities (Baltes, Wynne, Sirabian, Krenn, & de Lange, 2014; Rudolph et al., 2018; Weikamp & Göritz, 2015; Zacher & Frese, 2009). Besides age, studies have also found socioeconomic status, health, personality dimensions and other dispositional characteristics such as self-efficacy, locus of control, and optimism, the experience of aging-related gains and losses, and the adoption of a growth mindset to influence future time perspective (Fasbender, Wöhrmann, Wang, & Klehe, 2019; Kooij et al., 2018; Rudolph et al., 2018; Weiss, Job, Mathias, Grah, & Freund, 2016). In addition to individual factors, contextual factors such as job autonomy and job complexity have been identified as antecedents.

In the following, we propose that future time perspective is also influenced by individuals' uncertainty regulation. With its focus not only on remaining time per se but on the opportunities that may arise and could be taken advantage of, future time perspective seems a natural ally to uncertainty regulation. A longer future, and more possibilities offered by that future, imply more unpredictability and thereby more uncertainty. However, uncertainty has to date not been explicitly included in theories of how future time perspective affects lifespan development and aging. We argue that by regulating the amount of uncertainty one is exposed to, future time perspective can be altered.
This consideration complements prior research that has focused on the impact of experienced aging-related gains and losses on perceived future time perspective (Fasbender et al., 2019; Weiss et al., 2016). In Figure 1, we illustrate how, in addition to past experiences, an individual's uncertainty regulation might influence future time perspective. In the center of Figure 1, key processes from Griffin and Grote's (in press) model are depicted, where individuals strive to maintain a desired level of uncertainty by engaging in either opening or closing behaviors. Opening behaviors are proactive and future-oriented and generate uncertainty as opportunities for learning and the exploration of entirely new goals, such as changing one's occupation. In contrast, closing behaviors rely on existing knowledge and intend to exploit that knowledge, thereby also reducing uncertainty, for example by selecting a task one can do particularly well. By regulating the amount of experienced uncertainty, individuals can increase or reduce their perceived future opportunities and remaining time, that is, their future time perspective, in a recursive cycle.

Some similarities of the proposed processes to existing aging models and research are apparent. For example, Kooij, Zacher, Wang, and Heckhausen's (in press) model of successful aging, defined as older workers' ability and motivation to continue working, centers around proactive and adaptive processes of goal (dis)engagement. These processes are aimed at restoring person-environment fit after anticipated or experienced discrepancies between personal needs and abilities and environmental demands and resources. We suggest that individuals may not only continue to strive for their goals in the face of adverse conditions, but may occasionally give up more routine goals in search of learning opportunities and the expression of expansive agency.
Over their careers, individuals may dynamically switch back and forth between exploiting existing skills and competencies and exploring new knowledge domains to achieve both optimal exposure to uncertainty and successful goal striving. In the study by Laguerre and Barnes-Farrell (2019), future time perspective mediated the positive relationship between tolerance for uncertainty and motivation to continue working after retirement, as well as financial risk tolerance. In Griffin and Grote's model, tolerance for uncertainty is discussed as an individual predisposition that influences the level of desirable uncertainty, which in turn constitutes the set value for uncertainty regulation. Thus, Laguerre and Barnes-Farrell's (2019) findings might be considered tentative support for our proposition that future time perspective is shaped by, and exerts its influence through, processes of uncertainty regulation. In their study of entrepreneurial activity, Gielnik, Zacher, and Wang (2018) found that the relationship between opportunity identification and entrepreneurial intentions was weaker for employees with a more limited future time perspective. At the same time, prior entrepreneurial experience strengthened the relationship between entrepreneurial intentions and activity. Based on our theorizing, one might investigate whether entrepreneurial activity affects future time perspective via the experience of successfully regulating endogenous uncertainty, for instance having exploited existing personal networks to explore a new business sector. We believe that these examples underscore the value of exploring the role of uncertainty regulation in the development and impact of future time perspective. By achieving a more balanced uncertainty regulation, which includes both the exploitation of existing knowledge and the deliberate encounter with and exploration of the unknown, the perception of future opportunities in one's life may be promoted.
In the final part of our commentary, we employ our proposed model to (re)interpret, by way of example, research that touches on the age-uncertainty relationship, and discuss how researchers could further examine uncertainty regulation in relation to individuals' future time perspective. We conclude with some practical considerations for managing the uncertainties that ensue from the COVID-19 pandemic.

The COVID-19 pandemic has brought to the fore uncertainty which, despite or possibly because of its omnipresence, often remains implicit in psychology and management research. Where it is explicitly addressed, it is usually treated as an aversive state individuals try to avoid (e.g., Cooper & Thatcher, 2010; Heckhausen, 2016; Hogg, 2007). An important first step we propose is to explicitly include uncertainty in the study designs used in lifespan and aging research and to approach the experience of uncertainty as something potentially positive as well. In support of this proposal, Dweck (2017) argues that individuals strive not for complete but for optimal predictability concerning the relationships among events and among things in the world, as one basic need that drives development. Quite similarly to Griffin and Grote (in press), Dweck (2017) postulates that complete predictability is not desirable because people are motivated to experience new and complex situations. In our literature search for this commentary, we identified a number of studies whose findings might be interpreted differently, and possibly even more convincingly, if an uncertainty regulation perspective were used. For instance, meta-analytical evidence shows that job autonomy and complexity have positive effects on occupational future time perspective. The authors explain these results with the fact that these two job characteristics act as resources. However, one may also argue that job autonomy and complexity imply more uncertainty through the discretion they offer to the job holder.
Along with this uncertainty, opportunities arise, for instance for competence development, learning, and exploration, leading also to a more general perception of occupational opportunities expressed in occupational future time perspective. Similarly, the finding that older employees experience less strain when confronted with role ambiguity may not (only) be explained by their greater reliance on crystallized abilities, which help them to manage uncertainty (Abbasi & Bordia, 2019), but also by their interest in capitalizing on the opportunities that arise from uncertainty. Such an interpretation receives some backing from the finding by Basevitz and colleagues (2008) that older adults generally are more tolerant of uncertainty and worry less, presumably due to their increased focus on and capability for emotion regulation (Scheibe, Spieler, & Kuba, 2016; Toomey & Rudolph, 2018). Ainsworth's (2015) data on increasingly numerous and successful older entrepreneurs may also be interpreted as an indicator of older adults being more willing to expose themselves to uncertainty.

Uncertainty regulation may be one of those processes that should be scrutinized in response to Heckhausen's (2016) call to examine new constructs as sources of differences between more and less successful developmental paths. Future studies may examine whether age differences exist in uncertainty regulation or in preferred levels of uncertainty. Moreover, researchers may want to explore whether engaging in expansive agency to create uncertainty at different time points in life may be an effective strategy for successful aging. There might be more gain-oriented development in older age than accounted for in contemporary models of aging, based on opportunities generated by an openness to, and possibly an active search for, uncertainty. Gains may include primary and secondary control aimed at the maintenance of workability, but also an exploration of new routes to meaningful engagement in society.
For instance, building on the finding by Bordia and colleagues (2020) that openness to shedding old identities and exploring new ones was important for successful retirement transitions, interventions could be designed to get individuals to reflect on, and possibly change, their personal approach to managing uncertainty. Thereby, not only secondary control of uncertainty could be strengthened, for instance through developing more tolerance for uncertainty, but also primary control, as individuals are enabled to choose more freely between reducing and increasing uncertainty for themselves. This may support individuals both in their daily life and during major transformations. Studying the impact of such interventions would also allow us to better understand the relationships between uncertainty regulation, future time perspective, and successful aging.

Although the COVID-19 pandemic has primarily led to societal and economic uncertainties that represent serious threats, these difficult times may also hold a promise. Specifically, they may allow for a future where individuals and societal actors become better equipped to regulate uncertainty in ways that balance the rewards of exploiting the known and exploring the opportunities offered by the unknown. As we have aimed to show, research on aging and lifespan development is well positioned to investigate this promise further. In addition, we want to point out some practical implications of our suggested new perspective on uncertainty, related to mastering the immediate challenges that COVID-19 has brought upon us. Most fundamentally, a systematic reflection on the uncertainties caused by the pandemic, and the threats and opportunities they imply, is paramount so that individuals and organizational actors can develop a more measured approach to uncertainty management.
rather than proclaiming false certainties (e.g., that there will be a vaccine within a year), as we hear daily from many politicians, we need an understanding of how covid-19-related uncertainties might develop and how we can best brace ourselves to master and possibly even take advantage of them. this may sound harsh, but even job loss has been found to open new avenues for personal development (zikic & richardson, 2007). for some older workers, this experience turned out to be an opportunity to reflect upon their careers and engage in career exploration, opening avenues to alternative career paths such as self-employment. if, as prior research has shown, older individuals have more measured responses to uncertainty due to their superior emotion regulation, they may not only be better at coping with uncertainty related to themselves, but they can also be valuable resources in their organizations by helping others to cope with covid-19-induced uncertainties (settersten et al., 2020). lastly, the proven resourcefulness of organizations and individuals in managing uncertainties during this crisis could be harnessed more broadly in a dialogue between employers and employees on more flexible and self-directed forms of working that lie at the heart of well-being at work across all ages.
thinking, young and old: cognitive job demands and strain across the lifespan
aging entrepreneurs and volunteers: transition in late career
on the incomplete architecture of human ontogeny
psychological perspectives on successful aging: the model of selective optimization with compensation
future time perspective, regulatory focus, and selection, optimization, and compensation: testing a longitudinal model
age-related differences in worry and related processes
retiring: role identity processes in retirement transition
selectivity theory: social activity in life-span context
the influence of a sense of time on human development
taking time seriously: a theory of socioemotional selectivity
identification in organizations: the role of self-concept orientations and identification motives
from needs to goals and representations: foundations for a unified theory of motivation, personality, and development
is the future still open? the mediating role of occupational future time perspective in the effects of career adaptability and aging experience on late career planning
age in the entrepreneurial process: the role of future time perspective and prior entrepreneurial experience
when is more uncertainty better? a model of uncertainty regulation and effectiveness
adaptation and resilience in midlife
social inequalities across the life course: societal unfolding and individual agency
a motivational theory of life-span development
future time perspective in the work context: a systematic review of quantitative studies
uncertainty-identity theory
the psychology and neuroscience of curiosity
future time perspective: a systematic review and meta-analysis
successful aging at work: a process model to guide future research and practice
the role of intolerance of uncertainty in predicting future time perspective and retirement-related outcomes
how evolution may work through curiosity-driven developmental process
occupational future time perspective: a meta-analysis of antecedents and outcomes
an older-age advantage? emotion regulation and emotional experience after a day of work
understanding the effects of covid-19 through a life course lens
age-conditional effects in the affective arousal, empathy, and emotional labor linkage: within-person evidence from an experience sampling study
how stable is occupational future time perspective over time? a six-wave study across 4 years
the end is (not) near: aging, essentialism, and future time perspective
remaining time and opportunities at work: relationships between age, work characteristics, and occupational future time perspective
unlocking the careers of business professionals following job loss: sensemaking and career exploration of older workers
g. g. and j. p. contributed equally to this study.
key: cord-012349-wutnt8yk authors: lech, karolina; liu, fan; davies, sarah k.; ackermann, katrin; ang, joo ern; middleton, benita; revell, victoria l.; raynaud, florence j.; hoveijn, igor; hut, roelof a.; skene, debra j.; kayser, manfred title: investigation of metabolites for estimating blood deposition time date: 2017-08-05 journal: int j legal med doi: 10.1007/s00414-017-1638-y sha: doc_id: 12349 cord_uid: wutnt8yk trace deposition timing reflects a novel concept in forensic molecular biology involving the use of rhythmic biomarkers for estimating the time within a 24-h day/night cycle a human biological sample was left at the crime scene, which in principle allows verifying a sample donor’s alibi. previously, we introduced two circadian hormones for trace deposition timing and recently demonstrated that messenger rna (mrna) biomarkers significantly improve time prediction accuracy. here, we investigate the suitability of metabolites measured using a targeted metabolomics approach, for trace deposition timing. analysis of 171 plasma metabolites collected around the clock at 2-h intervals for 36 h from 12 male participants under controlled laboratory conditions identified 56 metabolites showing statistically significant oscillations, with peak times falling into three day/night time categories: morning/noon, afternoon/evening and night/early morning. time prediction modelling identified 10 independently contributing metabolite biomarkers, which together achieved prediction accuracies expressed as auc of 0.81, 0.86 and 0.90 for these three time categories respectively. combining metabolites with previously established hormone and mrna biomarkers in time prediction modelling resulted in an improved prediction accuracy reaching aucs of 0.85, 0.89 and 0.96 respectively. 
the additional impact of metabolite biomarkers, however, was rather minor, as the previously established model with melatonin, cortisol and three mrna biomarkers achieved auc values of 0.88, 0.88 and 0.95 for the same three time categories respectively. nevertheless, the selected metabolites could become practically useful in scenarios where rna marker information is unavailable, for example due to rna degradation. this is the first metabolomics study investigating circulating metabolites for trace deposition timing, and more work is needed to fully establish their usefulness for this forensic purpose. knowing the time of the day or night when a biological trace was placed at a crime scene has valuable implications for criminal investigation. it would allow verifying the alibi and/or testimony of the suspect(s) and could indicate whether other, yet unknown suspects may be involved in the crime. as such, knowing the trace deposition time would provide a link, or lack thereof, between the sample donor, identified via forensic dna profiling, and the criminal event. therefore, finding a means to retrieve information about the deposition time of biological material is of inestimable forensic value. in principle, molecular biomarkers with rhythmic changes in their concentration during the 24-h day/night cycle, and analysable in crime scene traces, would provide a useful resource for trace deposition timing. circadian rhythms are oscillations with a (near) 24-h period present in almost every physiological and behavioural aspect of human biology. they are generated on a molecular level by coordinated expression, translation and interaction of core clock genes and their respective protein products [1] .
together, these genes form a transcriptionaltranslational feedback loop driving the expression of various clock-controlled genes, which manifests as rhythms in numerous processes including metabolism [2] [3] [4] [5] [6] [7] , where circadian timing plays a role in coordinating biochemical reactions and metabolic activities. because of this ubiquity of circadian rhythms and their association with many biological processes, the pool of potential rhythmic biomarkers is vast and diverse [8] . in a proof-of-principle study, we previously introduced the concept of molecular trace deposition timing, i.e. to establish the day/night time when (not since) a biological sample was placed at the crime scene, by measuring two circadian hormones, melatonin and cortisol, in small amounts of blood and saliva, and demonstrated that the established rhythmic concentration pattern of both biomarkers can be observed in such forensic-type samples [9] . recently, we identified various rhythmically expressed genes in the blood [10] and subsequently demonstrated the suitability of such messenger rna (mrna) biomarkers for blood trace deposition timing by establishing a statistical model based on melatonin, cortisol and three mrna biomarkers for predicting three day/night time categories: morning/noon, afternoon/evening and night/early morning [11] . here, we investigate different types of molecular biomarkers, namely metabolites, i.e. intermediates or products of metabolism, for their suitability in trace deposition timing. metabolic processes are known to be coupled with the circadian timing system in order to properly coordinate and execute them [6, 12, 13] . thus, many (by-)products of metabolism have been shown to exhibit rhythms in their daily concentration levels in metabolomics studies [7, 14, 15] , while none of them as yet have been tested for trace deposition timing. 
using plasma obtained from blood samples collected every 2 h across a 36-h period from healthy, young males, 171 metabolites were screened via a targeted metabolomics approach to identify those with statistically significant rhythms in concentration. rhythmic markers, as shown previously with hormones and mrna [9, 11] , are able to predict day/night time categories. thus, we hypothesized that applying rhythmic metabolites (with or without previously established rhythmic biomarkers) for time prediction modelling could improve the categorical time prediction for trace deposition timing, which was assessed in this study. the plasma metabolite data used in this study were obtained from blood samples collected during the sleep/sleep deprivation study (s/sd) conducted at surrey clinical research centre (crc) at the university of surrey, uk. full details of the study protocol and eligibility criteria have been reported elsewhere [4, 5, 7] . for the present analysis, 18 sequential two-hourly blood samples per participant (n = 12 males, mean age ± standard deviation = 23 ± 5 years) were used, giving a total of 216 observations for subsequent model building. these samples spanned the first 36 h of the s/sd study (from 12:00 h on day 2 to 22:00 h on day 3). the samples covering the subsequent sleep deprivation condition, from 00:00 h on day 3 to 12:00 h on day 4, were excluded from the analysis. full details of the blood sample collection, plasma extraction method, targeted lc/ms metabolomics analysis and subsequent statistical analyses have been described in the materials and methods and supplementary material sections of the previous articles [4, 5, 7] .
concentration data of 171 metabolites (μm), belonging to the acylcarnitine, amino acid, biogenic amine, hexose, glycerophospholipid and sphingolipid classes, were obtained using the absoluteidq p180 targeted metabolomics kit (biocrates life sciences ag, innsbruck, austria) run on a waters xevo tq-s mass spectrometer coupled to an acquity hplc system (waters corporation, milford, ma, usa). after correcting the metabolite data for batch effects, described in detail in [7] , we analysed the metabolite profiles with the single cosinor and nonlinear curve fitting (nlcf) methods to determine the presence of 24-h rhythmicity, as was done previously [4, 5] . this first selection step of metabolites for time category prediction was based on the statistically significant outcomes from the nlcf and single cosinor methods. the selected metabolites had to have a statistically significant amplitude and acrophase, calculated with the nlcf method, and a statistically significant fit to a cosine curve, as calculated with the single cosinor method. final selection of markers for prediction modelling was done using multiple regression, including all markers as the explanatory variables and the sampling time as the dependent variable, and ensuring that all of the selected markers had a statistically significant and independent effect on the overall model fit. the metabolite markers that did not show a statistically significant independent effect were excluded from the marker selection process. the most suitable predicted time categories were established based on the average peak times of the metabolite and hormone concentrations, as calculated with the nlcf method. the prediction model was built based on multinomial logistic regression, where the batch-corrected concentration values of metabolites were considered as the predictors and the day/night time categories as the response variable, as described elsewhere [11, 16] .
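the single cosinor screen described above reduces to ordinary least squares on a cosine/sine basis with a fixed 24-h period. the following is a minimal sketch of this idea, not the authors' actual pipeline; the function name and the synthetic 2-h-sampled profile are illustrative:

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    """Fit y(t) = mesor + amplitude * cos(2*pi*(t - acrophase)/period)
    by linear least squares on a cosine/sine basis."""
    omega = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(omega * t_hours),
                         np.sin(omega * t_hours)])
    mesor, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(beta, gamma)
    acrophase = (np.arctan2(gamma, beta) / omega) % period  # peak time in hours
    return mesor, amplitude, acrophase

# synthetic metabolite profile sampled every 2 h over 36 h, peaking at 15:00 h
t = np.arange(0, 36, 2.0)
y = 5.0 + 2.0 * np.cos(2 * np.pi * (t - 15.0) / 24.0)
mesor, amp, peak = cosinor_fit(t, y)
```

on this noise-free profile the fit recovers the mesor (5.0), amplitude (2.0) and acrophase (15:00 h) exactly; significance testing of the fit, as used in the study, would be layered on top.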
additionally, we combined the previously proposed circadian hormones melatonin and cortisol [9] as well as the previously established rhythmic mrna biomarkers mknk2, per3 and hspa1b [11] with the metabolites in a prediction model, to determine whether a combination of the different types of rhythmic markers improves the prediction accuracy of time estimations. the dataset used for prediction modelling consisted of 216 observations, i.e. 12 individuals and 18 time points per individual. with the third time category as the reference, the multinomial logistic regression is written as ln(π_j/π_3) = β_j0 + Σ_k β_jk x_k for j = 1, 2, and the probabilities for a certain day/night category can be estimated as π_j = exp(β_j0 + Σ_k β_jk x_k)/[1 + Σ_l exp(β_l0 + Σ_k β_lk x_k)], l = 1, 2, with π_3 = 1 − π_1 − π_2. the day and night category with max(π_1, π_2, π_3) was considered as the predicted time category. the model predicted the probabilities of different possible outcomes of a categorical dependent variable, given a set of variables (predictors), as previously described and applied for eye and hair colour prediction based on snp genotypes [16] [17] [18] and for trace deposition time using circadian mrna biomarkers [11] . because of the small sample size, the performance of the generated model(s) was evaluated using the leave-one-out cross-validation (loocv) method [19] . this approach builds a prediction model from all observations minus one, in this case from 215 observations, and predicts the time category for the one remaining observation. the whole procedure is repeated once for each observation, i.e. in this case 216 times. the area under the receiver operating characteristic (roc) curve (auc), which describes the accuracy of the prediction, was derived for each time category based on the concordance between the predicted probabilities and the observed time category. in general, auc values range from 0.5, which corresponds to random prediction, to 1.0, which represents perfect prediction.
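the modelling scheme described above (multinomial logistic regression, loocv, per-category auc) can be sketched as follows. this is a toy stand-in with simulated class-shifted predictors, not the study data, and scikit-learn in place of whatever software the authors used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# toy stand-in for 216 observations x 10 metabolite predictors,
# with three day/night time categories (0, 1, 2)
n, p = 216, 10
y = np.repeat([0, 1, 2], n // 3)
X = rng.normal(size=(n, p)) + y[:, None] * 1.0  # class-shifted features

# each of the 216 observations is predicted by a model trained on the other 215
model = LogisticRegression(max_iter=1000)
proba = cross_val_predict(model, X, y, cv=LeaveOneOut(), method='predict_proba')

# per-category AUC: each time category versus the other two
aucs = [roc_auc_score(y == k, proba[:, k]) for k in range(3)]
```

with well-separated simulated classes the aucs come out close to 1.0; on the real data they were 0.81-0.90 for the metabolite-only model.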
the concordance between the predicted and observed categories was categorized into four groups: true positives (tp), true negatives (tn), false positives (fp) and false negatives (fn). four accuracy parameters were derived: sensitivity = tp / (tp + fn) × 100, specificity = tn / (tn + fp) × 100, positive predictive value (ppv) = tp / (tp + fp) × 100 and negative predictive value (npv) = tn / (tn + fn) × 100. notably, the 216 observations that were used in this study were not completely independent from each other; however, we aimed to minimize this bias by cross-validation using loocv. from the 171 metabolites analysed in the plasma samples, we identified 56 metabolite biomarkers showing statistically significant oscillations with both the nlcf and cosinor methods (table 1). next, these 56 metabolites were assigned to day or night time categories based on their mean peak (acrophase) time estimates (table 1). an overrepresentation of metabolites (n = 50, 89%) demonstrating peak concentrations in the afternoon, between 13:00 and 17:30 h, was noted. five out of 56 (9%) metabolites had their highest concentration values during the night, between 21:00 and 03:00 h. only one metabolite showed a peak time in the early morning, around 06:00 h. consequently, we assigned all 56 metabolites to three day/night time categories, i.e. morning/noon (07:00-14:59 h), afternoon/evening (15:00-22:59 h) and night/early morning (23:00-06:59 h), together comprising one complete 24-h day/night cycle. in the first step of the biomarker selection, we applied linear regression to all 56 metabolites, identified as significantly rhythmic, to select those with an independent contribution to the model for predicting the three day/night time categories: morning/noon, afternoon/evening and night/early morning, as previously done for mrna and hormone biomarkers [11] .
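the four accuracy parameters defined above follow directly from the confusion counts; a minimal sketch, with made-up counts for one time category evaluated one-versus-rest:

```python
def accuracy_parameters(tp, tn, fp, fn):
    """Sensitivity, specificity, PPV and NPV (in %) from a 2x2 confusion table,
    matching the formulas used in the text."""
    sens = tp / (tp + fn) * 100  # sensitivity
    spec = tn / (tn + fp) * 100  # specificity
    ppv = tp / (tp + fp) * 100   # positive predictive value
    npv = tn / (tn + fn) * 100   # negative predictive value
    return sens, spec, ppv, npv

# hypothetical counts for one category out of 216 observations (72 positives)
sens, spec, ppv, npv = accuracy_parameters(tp=60, tn=130, fp=14, fn=12)
```

for these counts the sketch gives a sensitivity of about 83%, specificity of about 90%, ppv of about 81% and npv of about 92%.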
this analysis revealed a subset of 10 metabolite biomarkers (ac-c16, ac-c18:1, ac-c4, isoleucine, proline, pcaac38:5, pcaac42:2, pcaec32:2, pcaec36:5 and smc24:1). the remaining 46 metabolites were omitted from the subsequent model building and model validation analysis, as their effect on time category prediction was 'masked' by the 10 metabolite biomarkers included in the model. the model based on these 10 metabolites achieved auc values of 0.81 for morning/noon, 0.86 for afternoon/evening and 0.90 for night/early morning (table 2). figure 1 presents z-scored concentration values across the day/night cycle for these 10 metabolite biomarkers. however, our previously established model based on two circadian hormones (melatonin and cortisol) and three mrna biomarkers (mknk2, hspa1b and per3) gave considerably higher auc values of 0.88, 0.88 and 0.95 for the same three time categories respectively [11] than achieved here with the model based on the 10 plasma metabolites. therefore, we performed time prediction modelling using the 10 metabolite biomarkers highlighted here together with the previously identified hormone and mrna biomarkers. this analysis revealed a subset of seven independently contributing biomarkers: five metabolites (ac-c16, ac-c18:1, ac-c4, isoleucine and smc24:1), one hormone (melatonin) and one mrna biomarker (mknk2). the auc values obtained with this combined biomarker model were 0.85 for morning/noon, 0.89 for afternoon/evening and 0.96 for night/early morning (table 2). in this forensically motivated metabolomics study, 56 metabolite biomarkers exhibiting significant daily rhythms in concentration were identified in plasma and were further investigated for their suitability for estimating blood trace deposition time.
the 171 metabolites initially tested were included in the absoluteidq p180 targeted metabolomics kit (biocrates life sciences ag, innsbruck, austria), belong to five compound classes and are involved in major metabolic pathways, such as energy metabolism, ketosis, metabolism of amino acids, cell cycle and cell proliferation and carbohydrate metabolism, to name a few. metabolism is interconnected with circadian rhythms, influencing them and, in turn, being influenced by them [2, 6, 12, 13, 20] . among the metabolites with statistically significant oscillations identified here, we found a strong overrepresentation of those exhibiting peak concentrations in the afternoon, mainly from the phosphatidylcholine class (table 1). although we currently cannot fully understand what causes this overrepresentation, the observed peak times agree with data showing lipid metabolism transcripts in humans having maximum transcription levels during the day [21] . the prediction model established here utilized 10 metabolite biomarkers for estimating three day/night time categories (table 2). in both model comparisons (i and ii), the remaining category was predicted slightly less accurately in the metabolite-based model. however, the final comparison (iii) with the combined model, based on two hormones (melatonin, cortisol) and three mrna biomarkers (mknk2, hspa1b and per3), showed that the metabolite-based model was considerably less accurate, giving lower auc values by 0.07, 0.02 and 0.05 for morning/noon, afternoon/evening and night/early morning respectively [11] . this final finding was the motivation to combine together in one time prediction model the 10 metabolite biomarkers identified here with the hormone and mrna biomarkers identified previously [11] .
the best combined model was based on five metabolites (ac-c16, ac-c18:1, ac-c4, isoleucine and smc24:1), melatonin and the mrna biomarker mknk2, and reached auc values of 0.85 for morning/noon, 0.89 for afternoon/evening and 0.96 for night/early morning. overall, this combined model was slightly more accurate in predicting the afternoon/evening and the night/early morning categories (auc increase of 0.01) and slightly less accurate in predicting the morning/noon category (auc decrease of 0.03) compared with the previously established combined hormone and mrna-based model [11] . this rather minor impact of the newly tested metabolites, relative to the previously tested hormone and mrna biomarkers [11] , calls into question the value of using plasma metabolites for trace deposition timing. the major subset of the metabolites identified in the current study peaked during the day, and this might reflect either the feeding-fasting schedule [7, 22] or their original source. the original source of metabolites circulating in plasma is difficult to determine accurately, since they can be derived from multiple organs that are regulated by different systemic and external cues influencing their function and rhythmicity, which, in turn, modifies the rhythms of the generated metabolites. consequently, if the metabolites identified here are sensitive to feeding and fasting cues, their applicability for trace deposition timing may be rather limited, but their value for monitoring peripheral circadian rhythms, in the liver for instance, may be crucial. furthermore, the previously introduced hormone and mrna biomarkers [11] can feasibly be analysed using an elisa assay and rt-qpcr respectively, techniques that are nowadays straightforward, require only basic laboratory instruments and have been shown to be suitable for forensic trace analysis.
in comparison, relatively specialized lc/ms equipment and methodology are needed to simultaneously analyse a large number of metabolites circulating in plasma, even more so when measuring a forensic trace sample. regardless of these constraints, it has been shown that measuring metabolites in dried blood is possible [23, 24] , but this needs to be studied further in the forensic context, where the quantity and the quality of dried blood stains are often compromised. however, in situations where intact rna is not available and the preferred mrna-based time estimation models can therefore not be used, metabolite markers might be the markers of choice. in such situations, metabolite analysis may provide valuable information on trace deposition time. the technical challenges should thus not impede future studies from fully establishing whether plasma metabolites could be useful biomarkers for trace deposition timing, and whether additional metabolites can achieve a more detailed and accurate time estimation than the metabolites identified here. additionally, more samples collected around the 24-h clock from more individuals need to be analysed to make the time prediction model more robust, and the analysis method, at best a multiplex system, needs to be forensically validated, including sensitivity, specificity and stability testing, before final forensic casework application may be considered.
central and peripheral circadian clocks in mammals
mammalian circadian clock and metabolism-the epigenetic link
effects of sleep and circadian rhythm on the human immune system
diurnal rhythms in blood cell populations and the effect of acute sleep deprivation in healthy young men
effect of sleep deprivation on rhythms of clock gene expression and melatonin in humans
metabolism and the circadian clock converge
effect of sleep deprivation on the human metabolome
improving human forensics through advances in genetics, genomics and molecular biology
estimating trace deposition time with circadian biomarkers: a prospective and versatile tool for crime scene reconstruction
dissecting daily and circadian expression rhythms of clock-controlled genes in human blood
evaluation of mrna markers for estimating blood deposition time: towards alibi testing from human forensic stains with rhythmic biomarkers
circadian rhythms, sleep, and metabolism
circadian integration of metabolism and energetics
identification of human plasma metabolites exhibiting time-of-day variation using an untargeted liquid chromatography-mass spectrometry metabolomic approach
human blood metabolite timetable indicates internal body time
eye color and the prediction of complex phenotypes from genotypes
irisplex: a sensitive dna tool for accurate prediction of blue and brown eye colour in the absence of ancestry information
the hirisplex system for simultaneous prediction of hair and eye colour from dna
molecular classification of cancer: class discovery and class prediction by gene expression monitoring
circadian rhythm and sleep disruption: causes, metabolic consequences, and countermeasures
effects of insufficient sleep on circadian rhythmicity and expression amplitude of the human blood transcriptome
plasma amino acid responses in humans to evening meals of differing nutritional composition
biomarkers for nutrient intake with focus on alternative sampling techniques
targeted metabolomics of dried blood spot extracts
ethical statement and informed consent: all procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 helsinki declaration and its later amendments or comparable ethical standards. informed consent was obtained from all individual participants included in the study. open access: this article is distributed under the terms of the creative commons attribution 4.0 international license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the creative commons license, and indicate if changes were made. key: cord-010758-ggoyd531 authors: valdano, eugenio; fiorentin, michele re; poletto, chiara; colizza, vittoria title: epidemic threshold in continuous-time evolving networks date: 2018-02-06 journal: nan doi: 10.1103/physrevlett.120.068302 sha: doc_id: 10758 cord_uid: ggoyd531 current understanding of the critical outbreak condition on temporal networks relies on approximations (time scale separation, discretization) that may bias the results. we propose a theoretical framework to compute the epidemic threshold in continuous time through the infection propagator approach. we introduce the weak commutation condition allowing the interpretation of annealed networks, activity-driven networks, and time scale separation into one formalism. our work provides a coherent connection between discrete and continuous time representations applicable to realistic scenarios. contagion processes, such as the spread of diseases, information, or innovations [1-5], share a common theoretical framework coupling the underlying population contact structure with contagion features to provide an understanding of the resulting spectrum of emerging collective behaviors [6] .
a common keystone property is the presence of a threshold behavior defining the transition between a macroscopic-level spreading regime and one characterized by a null or negligibly small contagion of individuals. known as the epidemic threshold in the realm of infectious disease dynamics [1] , the concept is analogous to the phase transition in nonequilibrium physical systems [7, 8] , and is also central in social contagion processes [5, 9-13] . a vast array of theoretical results characterize the epidemic threshold [14] , mainly under the limiting assumptions of quenched and annealed networks [4, 15-18] , i.e., when the time scale of the network evolution is much slower or much faster, respectively, than the dynamical process. the recent availability of data on time-resolved contacts of epidemic relevance [19] has, however, challenged the time scale separation, showing it may introduce important biases in the description of the epidemic spread [19-33] and in the characterization of the transition behavior [31, 34-37] . departing from traditional approximations, a few novel approaches are now available that derive the epidemic threshold constrained to specific contexts of generative models of temporal networks [22, 32, 35, 38-41] or considering generic discrete-time evolving contact patterns [42-44] . in particular, the recently introduced infection propagator approach [43, 44] is based on a matrix encoding the probabilities of transmission of the infective agent along time-respecting paths in the network. its spectrum allows the computation of the epidemic threshold at any given time scale and for an arbitrary discrete-time temporal network.
leveraging an original mapping of the temporal network and epidemic spread in terms of a multilayer structure, the approach is valid in the discrete representation only, similarly to previous methods [17, 18, 35] . meanwhile, a large interest in the study of continuously evolving temporal networks has developed, introducing novel representations [19, 20, 27, 45] and proposing optimal discretization schemes [44, 46, 47] that may, however, be inaccurate close to the critical conditions [48] . most importantly, the two representations of a temporal network, continuous and discrete, remain disjointed in current network epidemiology. a discrete-time evolving network is indeed a multilayer object interpretable as a tensor in a linear algebraic representation [49] . this is clearly no longer applicable when time is continuous, as it cannot be expressed in the form of successive layers. hence, a coherent theoretical framework to bridge the gap between the two representations is still missing. in this letter, we address this issue by analytically deriving the infection propagator in continuous time. formally, we show that the dichotomy between discrete and continuous time translates into the separation between a linear algebraic approach and a differential one, and that the latter can be derived as the structural limit of the former. our approach yields a solution for the threshold of epidemics spreading on generic continuously evolving networks, and a closed form under a specific condition that is then validated through numerical simulations. in addition, the proposed novel perspective allows us to cast an important set of network classes into one single rigorous and comprehensive mathematical definition, including annealed [4, 50, 51] and activity-driven [35, 52] networks, widely used in both methodological and applied research. let us consider a susceptible-infected-susceptible (sis) epidemic model unfolding on a continuously evolving temporal network of n nodes.
the sis model constitutes a basic paradigm for the description of epidemics with reinfection [1] . infectious individuals (i) can propagate the contagion to susceptible neighbors (s) with rate λ, and recover to the s state with rate μ. the temporal network is described by the adjacency matrix aðtþ, with t ∈ ½0; t. we consider a discretized version of the system by sampling aðtþ at discrete time steps of length δt (fig. 1 ). this yields a finite sequence of adjacency matrices fa 1 ; a 2 ; …; a t step g, where t step ¼ ⌊t=δt⌋, and a h ¼ aðhδtþ. the sequence approximates the original continuous-time network with increasing accuracy as δt decreases. we describe the sis dynamics on this discrete sequence of static networks as a discrete-time markov chain [17, 18] : where p h;i is the probability that a node i is in the infectious state at time step h, and μδt (λδt) is the probability that a node recovers (transmits the infection) during a time step δt, for sufficiently small δt. by mapping the system into a multilayer structure encoding both network evolution and diffusion dynamics, the infection propagator approach derives the epidemic threshold as the solution of the equation ρ½pðt step þ ¼ 1 [43, 44] , where ρ is the spectral radius of the following matrix: the generic element p ij ðt step þ represents the probability that the infection can propagate from node i at time step 1 to node j at time step t step , when λ is close to λ c and within the quenched mean-field approximation (locally treelike network [53] ). for this reason, p is denoted as the infection propagator. to compute the continuous-time limit of the infection propagator, we observe that p obeys the recursive relation pðh þ 1þ ¼ pðhþ½1 − μδt þ λδta hþ1 . expressed in continuous time and dividing both sides by δt, the relation becomes that in the limit δt → 0 yields a system of n 2 coupled differential equations whose components are the lhs of eq. 
(4) is the derivative of P, which is well behaved if all entries of A(t) are continuous functions of time. The entries A_ij(t) are, however, often binary, so that their evolution is a sequence of discontinuous steps. To overcome this, it is possible to approximate these steps with one-parameter families of continuous functions, compute the threshold, and then perform the limit of the parameter that recovers the discontinuity. More formally, this is equivalent to interpreting derivatives in the sense of tempered distributions [54]. In order to check that our limit process correctly connects the discrete-time framework to the continuous-time one, let us now consider the standard Markov chain formulation of the continuous dynamics:

dp_i(t)/dt = −μ p_i(t) + λ [1 − p_i(t)] Σ_j A_ji(t) p_j(t).

Performing a linear stability analysis of the disease-free state [i.e., around p_i(t) = 0] in the quenched mean-field approximation [17, 18], we obtain

dp_i(t)/dt = −μ p_i(t) + λ Σ_j A_ji(t) p_j(t) [Eq. (7)].

We note that this expression is formally equivalent to Eq. (5). In particular, each row of P_ij of Eq. (5) satisfies Eq. (7). Furthermore, the initial condition P_ij(0) = δ_ij guarantees that, in varying the row i, we consider all vectors of the space basis as initial conditions. Every solution p(t) of Eq. (7) can therefore be expressed as a linear combination of the rows of P(t). Any fundamental matrix solution of Eq. (7) obeys Eq. (5) within the framework of the Floquet theory of nonautonomous linear systems [55]. The equivalence of the two equations shows that our limit of the discrete-time propagator encodes the dynamics of the continuous process. It is important to note that the limit process leading to Eq. (4) entails a fundamental change of paradigm in the representation of the network structure and contagion process, where the linear algebraic representation suitable in discrete time turns into a differential geometrical description of the continuous-time flow.
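The discrete-time construction above is straightforward to implement numerically. The following is a minimal sketch (not the authors' code): it builds the infection propagator as the ordered product of the per-step matrices 1 − μΔt + λΔt·A_h over a toy sequence of random snapshots, and evaluates its spectral radius, whose crossing of 1 locates the threshold. The network, parameter values, and function name are illustrative assumptions.

```python
import numpy as np

def infection_propagator(snapshots, lam, mu, dt):
    # Ordered product P = prod_h [ (1 - mu*dt) * I + lam*dt * A_h ],
    # with earlier snapshots multiplying on the left (toy sketch).
    n = snapshots[0].shape[0]
    P = np.eye(n)
    for A in snapshots:
        P = P @ ((1.0 - mu * dt) * np.eye(n) + lam * dt * A)
    return P

# toy temporal network: 5 snapshots of a 4-node random undirected graph
rng = np.random.default_rng(0)
snapshots = []
for _ in range(5):
    A = rng.integers(0, 2, size=(4, 4)).astype(float)
    A = np.triu(A, 1)            # keep upper triangle, no self-loops
    snapshots.append(A + A.T)    # symmetrize

P = infection_propagator(snapshots, lam=0.3, mu=0.5, dt=0.1)
rho = np.max(np.abs(np.linalg.eigvals(P)))  # threshold condition: rho == 1
```

Since every factor is entrywise nonnegative, increasing λ can only increase ρ, so the critical λ can be located by simple bisection on λ.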
While network and spreading dynamics in discrete time are encoded in a multilayer adjacency tensor, the continuous-time description proposed in Eq. (5) rests on a representation of the dynamical process in terms of a manifold whose points are adjacency matrices (or rank-2 tensors in the sense of Ref. [49]) corresponding to possible network and contagion states. The dynamics of Eq. (5) is then a curve on such a manifold, indicating which adjacency matrices to visit and in which order. In practice, we recover that the contagion process on a discrete temporal network, corresponding to an ordered subset of the full multilayer structure of Ref. [49], becomes in the limit Δt → 0 a spreading process on a continuous temporal network, represented through a one-dimensional ordered subset of a tensor field (formally, the pullback on the evolution curve). The two frameworks, so far considered independent and mutually exclusive, thus merge coherently through a smooth transition in this novel perspective. We now turn to solving Eq. (4) to derive an analytic expression of the infection propagator. By defining the rescaled transmissibility γ = λ/μ, we can solve Eq. (4) in terms of a series in μ [56],

P = Σ_{k≥0} μ^k P^(k) [Eq. (8)],

with P^(0) = 1 and under the assumption that γ remains finite around the epidemic threshold for varying recovery rates. The recursion relation from which we derived Eq. (4) provides the full propagator for t = T. Equation (8) computed in T therefore yields the infection propagator for the continuous-time adjacency matrix A(t), and is defined by the sum of the terms

P^(k)(t) = ∫_0^t dx P^(k−1)(x) [γA(x) − 1] [Eq. (9)].

Equations (8) and (9) can be put in a compact form by using Dyson's time-ordering operator T [57], defined as T A(t_1)A(t_2) = A(t_1)A(t_2)θ(t_1 − t_2) + A(t_2)A(t_1)θ(t_2 − t_1), with θ being Heaviside's step function. The expression of the propagator is thus

P(T) = T exp{μ ∫_0^T dt [γA(t) − 1]} [Eq. (10)].

Equation (10) represents an explicit general solution of Eq. (4) that can be computed numerically to arbitrary precision [56].
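Why the time-ordering operator is needed can be illustrated with a two-snapshot toy network in which an edge 0–1 exists first and an edge 1–2 exists later: the infection can reach node 2 from node 0 only if the per-step factors are multiplied in chronological order. A minimal sketch (parameter values are illustrative):

```python
import numpy as np

# two non-commuting snapshots of a 3-node network
A1 = np.array([[0., 1., 0.],
               [1., 0., 0.],
               [0., 0., 0.]])   # edge 0-1 active first
A2 = np.array([[0., 0., 0.],
               [0., 0., 1.],
               [0., 1., 0.]])   # edge 1-2 active later

lam, mu, dt = 0.4, 0.1, 1.0
M1 = (1.0 - mu * dt) * np.eye(3) + lam * dt * A1
M2 = (1.0 - mu * dt) * np.eye(3) + lam * dt * A2

P_ordered = M1 @ M2    # chronological order: the path 0 -> 1 -> 2 exists
P_reversed = M2 @ M1   # wrong order: edge 1-2 closes before 0-1 opens

commutator = A1 @ A2 - A2 @ A1   # nonzero, so the ordering matters
```

The (0, 2) entry of the propagator is positive only in the chronological product, which is exactly the information the operator T preserves.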
The epidemic threshold in the continuous-time limit is then given by ρ[P(T)] = 1. We now discuss a special case where we can recover a closed-form solution of Eq. (10), and thus of the epidemic threshold. We consider continuously evolving temporal networks satisfying the following weak commutation condition:

[A(t), ∫_0^t dx A(x)] = 0 [Eq. (11)],

i.e., the adjacency matrix at a certain time, A(t), commutes with the aggregated matrix up to that time. In the introduced tensor field formalism, the weak commutation condition represents a constraint on the temporal trajectory or, equivalently, an equation of motion for A(t). Equation (11) implies that the order of factors in Eq. (9) no longer matters. Hence, we can simply remove the time-ordering operator T in Eq. (10), yielding

P(T) = exp{μT [γ⟨A⟩ − 1]} [Eq. (12)],

where ⟨A⟩ = (1/T) ∫_0^T dt A(t) is the adjacency matrix averaged over time. The resulting expression for the epidemic threshold for weakly commuting networks is then

λ_c = μ / ρ[⟨A⟩] [Eq. (13)].

This closed-form solution proves to be extremely useful, as a wide range of network classes satisfies the weak commutation condition of Eq. (11). An important class is constituted by annealed networks [4, 50, 51]. In the absence of dynamical correlations, the annealed regime leads to ⟨[A(x), A(y)]⟩ = 0, as the time ordering of contacts becomes irrelevant. Equation (11) can thus be reinterpreted as ⟨[A(t), A(x)]⟩_x = 0, where the average is carried out over x ∈ [0, t). For long enough t, (1/t) ∫_0^t dx A(x) approximates well the expected adjacency matrix ⟨A⟩ of the annealed model, leading the annealed regime to satisfy Eq. (13). This result thus provides an alternative mathematical framework for the conceptual interpretation of annealed networks in terms of weak commutation. Originally introduced to describe disorder on quenched networks [58, 59], annealed networks were mathematically described in probabilistic terms, with the probability of establishing a contact depending on the degree distribution P(k) and the two-node degree correlations P(k′|k) [50].
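For weakly commuting networks, Eq. (13) reduces the threshold computation to the spectral radius of the time-averaged adjacency matrix. A minimal sketch on an annealed-like toy sequence of independently drawn snapshots (sizes, density, and rates are illustrative assumptions):

```python
import numpy as np

def threshold_weakly_commuting(snapshots, mu):
    # Eq. (13): lambda_c = mu / rho(<A>), with <A> the time-averaged adjacency matrix
    A_avg = np.mean(snapshots, axis=0)
    rho = np.max(np.abs(np.linalg.eigvals(A_avg)))
    return mu / rho

# annealed-like toy network: snapshots drawn independently,
# so the time ordering of contacts is irrelevant
rng = np.random.default_rng(1)
snapshots = []
for _ in range(200):
    A = (rng.random((6, 6)) < 0.3).astype(float)
    A = np.triu(A, 1)
    snapshots.append(A + A.T)

lam_c = threshold_weakly_commuting(snapshots, mu=0.1)
```

For this toy ensemble ⟨A⟩ is close to 0.3 on the off-diagonal, so ρ[⟨A⟩] ≈ 1.5 and the predicted critical rate sits well below μ.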
Here we show that temporal networks whose adjacency matrix A(t) asymptotically commutes with the expected adjacency matrix are found to be in the annealed regime. Equation (13) can also be used to test the limits of the time-scale separation approach, by considering a generic temporal network not satisfying the weak commutation condition. If μ is small, we can truncate the series of the infection propagator [Eq. (8)] at the first order, P = 1 + μP^(1) + O(μ²), where P^(1)(T) = T[γ⟨A⟩ − 1], to recover indeed Eq. (13). The truncation thus provides a mathematical expression of the range of validity of the time-separation scheme for spreading processes on temporal networks, since temporal correlations can be disregarded when the network evolves much faster than the spreading process. Extending the result on annealed networks, we show that the weak commutation condition also holds for networks whose expected adjacency matrix depends on time through a scalar function (instead of being constant, as in the annealed case), ⟨A(t)⟩ = c(t)⟨A(0)⟩. Also in this case we have ⟨[A(x), A(y)]⟩ = 0, so that the same treatment performed for annealed networks applies. Examples are provided by global trends in activation patterns, as often considered in infectious disease epidemiology to model seasonal variations of human contact patterns (e.g., due to the school calendar) [60]. When the time-scale separation approach is not applicable, we find another class of weakly commuting temporal networks that are used as a paradigmatic example for the study of contagion processes occurring on the same time scale as the evolution of contacts: the activity-driven model [35]. It considers heterogeneous populations where each node i activates according to an activity rate a_i, drawn from a distribution F(a). When active, the node establishes m connections with randomly chosen nodes, lasting a short time δ (δ ≪ 1/a_i).
Phys. Rev. Lett. 120, 068302 (2018)
Since the dynamics lacks time correlations, the weak commutation condition holds, and the epidemic threshold can be computed from Eq. (13). In the limit of large network size, it is possible to write the average adjacency matrix as ⟨A⟩_ij = (mδ/N)(a_i + a_j) + O(1/N²). Through row operations we find that the matrix has rank(⟨A⟩) = 2, and thus only two nonzero eigenvalues α, σ, with α > σ. We compute them through the traces of ⟨A⟩ (Tr[⟨A⟩] = α + σ and Tr[⟨A⟩²] = α² + σ²) to obtain the expression of ρ[⟨A⟩] for Eq. (13):

ρ[⟨A⟩] = mδ (⟨a⟩ + √⟨a²⟩).

The epidemic threshold becomes

λ_c δ = μ / [m (⟨a⟩ + √⟨a²⟩)],

yielding the same result as Ref. [35], provided here that the transmission rate λ is multiplied by δ to make it a probability, as in Ref. [35]. Finally, we verify that for the trivial example of static networks, with an adjacency matrix constant in time, Eq. (13) reduces immediately to the result of Refs. [17, 18]. We now validate our analytical prediction against numerical simulations on two synthetic models. The first is the activity-driven model with activation rate a_i = a, m = 1, and average interactivation time τ = 1/a = 1, fixed as the time unit of the simulations. The transmission parameter is the probability upon contact, λδ, and the model is implemented in continuous time. The second model is based on a bursty interactivation time distribution P(Δt) ∼ (ϵ + Δt)^(−β) [31], with β = 2.5 and ϵ tuned to obtain the same average interactivation time as before, τ = 1. We simulate an SIS spreading process on the two networks with four different recovery rates, μ ∈ {10⁻³, 10⁻², 10⁻¹, 1}, i.e., ranging from a value whose associated time scale is 3 orders of magnitude larger than the time scale τ of the networks (slow disease) to a value equal to τ (fast disease). We compute the average simulated endemic prevalence for specific values of λ, μ using the quasistationary method [61] and compare the threshold computed with Eq. (13) with the simulated critical transition from extinction to the endemic state. As expected, we find Eq.
(13) to hold for the activity-driven model at all time scales of the epidemic process (Fig. 2), as the network lacks temporal correlations. The agreement with the transition observed in the bursty model, however, is recovered only for slow diseases, as at those time scales the network is found in the annealed regime. When network and disease time scales become comparable, the weakly commuting approximation of Eq. (13) no longer holds, as burstiness results in dynamical correlations in the network evolution [31]. Our theory offers a novel mathematical framework that rigorously connects the discrete-time and continuous-time critical behaviors of spreading processes on temporal networks. It uncovers a coherent transition from an adjacency tensor to a tensor field, resulting from a limit performed on the structural representation of the network and contagion process. We derive an analytic expression of the infection propagator in the general case, which assumes a closed-form solution in the introduced class of weakly commuting networks. This allows us to provide a rigorous mathematical interpretation of annealed networks, encompassing the different definitions historically introduced in the literature. This work also provides the basis for important theoretical extensions, assessing, for example, the impact of bursty activation patterns or of adaptive dynamics in response to the circulating epidemic. Finally, our approach offers a tool for applicative studies on the estimation of the vulnerability of temporal networks to contagion processes in many real-world scenarios for which the discrete-time assumption would be inadequate. We thank Luca Ferreri and Mason Porter for fruitful discussions. This work is partially sponsored by the EC-Health Contract No. 278433 (PREDEMICS) and the ANR Contract No. ANR-12-MONU-0018 (HARMSFLU) to V. C., and the EC-ANIHWA Contract No. ANR-13-ANWA-0007-03 (LIVEEPI) to E. V., C. P., and V. C.
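The rank-2 structure of ⟨A⟩ in the activity-driven model, and the resulting ρ[⟨A⟩] = mδ(⟨a⟩ + √⟨a²⟩), can be checked numerically. A small sketch with illustrative parameter values (the O(1/N²) diagonal terms are dropped, so the agreement is approximate rather than exact):

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, delta, mu = 500, 1, 0.01, 0.05
a = rng.uniform(0.1, 1.0, size=N)   # activity rates a_i drawn from F(a)

# expected adjacency of the activity-driven model: <A>_ij = (m*delta/N)(a_i + a_j)
A_avg = (m * delta / N) * (a[:, None] + a[None, :])
np.fill_diagonal(A_avg, 0.0)        # no self-loops: an O(1/N) correction

rho_numeric = np.max(np.abs(np.linalg.eigvals(A_avg)))
rho_analytic = m * delta * (a.mean() + np.sqrt((a ** 2).mean()))

# threshold from Eq. (13): lambda_c * delta = mu / (m (<a> + sqrt(<a^2>)))
lam_delta_c = mu / (m * (a.mean() + np.sqrt((a ** 2).mean())))
```

The numerical spectral radius of the (rank-2 up to diagonal terms) averaged matrix matches the closed form to within a fraction of a percent at this size.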
* Present address: Departament d'Enginyeria Informàtica i Matemàtiques

References:
[1] Modeling Infectious Diseases in Humans and Animals
[2] Generalization of epidemic theory: an application to the transmission of ideas
[3] Epidemics and rumours
[4] Epidemic spreading in scale-free networks
[5] A simple model of global cascades on random networks
[6] Modelling dynamical processes in complex socio-technical systems
[7] Contact interactions on a lattice
[8] On the critical behavior of the general epidemic process and dynamical percolation
[9] Cascade dynamics of complex propagation
[10] Propagation and immunization of infection on general networks with both homogeneous and heterogeneous components
[11] Dynamics of rumor spreading in complex networks
[12] Kinetics of social contagion
[13] Critical behaviors in contagion dynamics
[14] Epidemic processes in complex networks
[15] Resilience of the Internet to random breakdowns
[16] Spread of epidemic disease on networks
[17] Epidemic spreading in real networks: an eigenvalue viewpoint
[18] Discrete-time Markov chain approach to contact-based disease spreading in complex networks
[19] Modern temporal network theory: a colloquium
[20] Impact of non-Poissonian activity patterns on spreading processes
[21] Disease dynamics over very different time-scales: foot-and-mouth disease and scrapie on the network of livestock movements in the UK
[22] Epidemic thresholds in dynamic contact networks
[23] How disease models in static networks can fail to approximate disease in dynamic networks
[24] Representing the UK's cattle herd as static and dynamic networks
[25] Impact of human activity patterns on the dynamics of information diffusion
[26] Small but slow world: how network topology and burstiness slow down spreading
[27] Dynamical strength of social ties in information spreading
[28] High-resolution measurements of face-to-face contact patterns in a primary school
[29] Dynamical patterns of cattle trade movements
[30] Multiscale analysis of spreading in a large communication network
[31] Bursts of vertex activation and epidemics in evolving networks
[32] Interplay of network dynamics and heterogeneity of ties on spreading dynamics
[33] Predicting and controlling infectious disease epidemics using temporal networks, F1000Prime Rep.
[34] The dynamic nature of contact networks in infectious disease epidemiology
[35] Activity driven modeling of time varying networks
[36] Temporal percolation in activity-driven networks
[37] Contrasting effects of strong ties on SIR and SIS processes in temporal networks
[38] Monogamous networks and the spread of sexually transmitted diseases
[39] Epidemic dynamics on an adaptive network
[40] Effect of social group dynamics on contagion
[41] Epidemic threshold and control in a dynamic network
[42] Virus propagation on time-varying networks: theory and immunization algorithms
[43] Analytical computation of the epidemic threshold on temporal networks
[44] Infection propagator approach to compute epidemic thresholds on temporal networks: impact of immunity and of limited temporal resolution
[45] Machine Learning: ECML
[46] Effects of time window size and placement on the structure of an aggregated communication network
[47] Epidemiologically optimal static networks from temporal network data
[48] Limitations of discrete-time approaches to continuous-time contagion dynamics
[49] Mathematical formulation of multilayer networks
[50] Langevin approach for the dynamics of the contact process on annealed scale-free networks
[51] Thresholds for epidemic spreading in networks
[52] Controlling contagion processes in activity driven networks
[53] Beyond the locally treelike approximation for percolation on real networks
[54] A Course of Modern Analysis
[55] Some results in Floquet theory, with application to periodic epidemic models
[56] The Magnus expansion and some of its applications
[57] The radiation theories of Tomonaga, Schwinger, and Feynman
[58] Optimal disorder for segregation in annealed small worlds
[59] Diffusion in scale-free networks with annealed disorder
[60] Recurrent outbreaks of measles, chickenpox and mumps: I. Seasonal variation in contact rates
[61] Epidemic thresholds of the susceptible-infected-susceptible model on networks: a comparison of numerical and theoretical results

key: cord-024494-i6puqauk
title: Deep Multivariate Time Series Embedding Clustering via Attentive-Gated Autoencoder
authors: Ienco, Dino; Interdonato, Roberto
date: 2020-04-17
journal: Advances in Knowledge Discovery and Data Mining
doi: 10.1007/978-3-030-47426-3_25
sha:
doc_id: 24494
cord_uid: i6puqauk

Nowadays, great quantities of data are produced by a large and diverse family of sensors (e.g., remote sensors, biochemical sensors, wearable devices), which typically measure multiple variables over time, resulting in data streams that can be profitably organized as multivariate time series. In practical scenarios, the speed at which such information is collected often makes the data labeling task uneasy and too expensive, limiting the use of supervised approaches. For this reason, unsupervised and exploratory methods represent a fundamental tool to deal with the analysis of multivariate time series. In this paper we propose a deep-learning based framework for clustering multivariate time series data with varying lengths. Our framework, namely DeTSEC (Deep Time Series Embedding Clustering), includes two stages: first, a recurrent autoencoder exploits attention and gating mechanisms to produce a preliminary embedding representation; then, a clustering refinement stage is introduced to stretch the embedding manifold towards the corresponding clusters. Experimental assessment on six real-world benchmarks coming from different domains has highlighted the effectiveness of our proposal.

Nowadays, a huge amount of data is produced by a large and diverse family of sensors (e.g., remote sensors, biochemical sensors, wearable devices). Modern sensors typically measure multiple variables over time, resulting in streams of data that can be profitably organized as multivariate time series.
While a major part of the recent literature on multivariate time series focuses on tasks such as forecasting [14, 19, 20] and classification [11, 26], the study of multivariate time-series clustering has often been neglected. The development of effective unsupervised clustering techniques is crucial in practical scenarios, where labeling enough data to deploy a supervised process may be too expensive (in terms of both time and money). Moreover, clustering allows the discovery of characteristics of multivariate time series that go beyond the a priori knowledge of a specific domain, serving as a tool to support subsequent exploration and analysis processes. While several methods exist for the clustering of univariate time series [13], the clustering of multivariate time series remains a challenging task. Early approaches were generally based on adaptations of standard clustering techniques to such data, e.g., density-based methods [3], methods based on independent component analysis [25], and fuzzy approaches [5, 8]. Recently, Hallac et al. [9] proposed a method, namely TICC (Toeplitz Inverse Covariance-based Clustering), that segments multivariate time series and subsequently clusters the subsequences through a Markov random field based approach. The algorithm leverages an EM-like strategy, based on alternating minimization, that iteratively clusters the data and then updates the cluster parameters. Unfortunately, this method does not produce a clustering solution over the original time series but a data partition in which the unit of analysis is the subsequence. As regards deep learning based clustering, such methods have recently become popular in the context of image and relational data [17, 24], but their potential has not yet been fully exploited in the context of the unsupervised analysis of time-series data. Tzirakis et al.
[24] recently proposed a segmentation/clustering framework based on agglomerative clustering that works on video data (time series of RGB images). The approach first extracts a clustering assignment via hierarchical clustering, then performs temporal segmentation and, finally, extracts a representation via a convolutional neural network (CNN). The clustering assignment is used as pseudo-label information to extract the new representation (training the CNN) and to perform video segmentation. The proposed approach is specific to RGB video segmentation/clustering and is not well suited to varying-length information. All these factors limit its use for standard multivariate time-series analysis. A method based on recurrent neural networks (RNNs) has also been recently proposed in [23]. The representation provided by the RNN is clustered using a divergence-based clustering loss function in an end-to-end manner. The loss function is designed to account for cluster separability and compactness, cluster orthogonality, and closeness of cluster memberships to a simplex corner. The approach requires training and validation data to learn the parameters and choose the hyperparameter settings, respectively. Finally, that framework is evaluated on a test set, indicating that the approach is not completely unsupervised and, for this reason, not directly exploitable in our scenario. In this work, we propose a new deep-learning based framework, namely DeTSEC (Deep Time Series Embedding Clustering), to cope with multivariate time-series clustering. Differently from previous approaches, our framework is general enough to deal with time series coming from different domains, provides a partition at the time-series level, and manages varying-length information. DeTSEC has two stages: first, a recurrent autoencoder exploits attention and gating mechanisms to produce a preliminary embedding representation.
Then, a clustering refinement stage is introduced to stretch the embedding manifold towards the corresponding clusters. We provide an experimental analysis that includes a comparison with five state-of-the-art methods and an ablation analysis of the proposed framework on six real-world benchmarks from different domains. The results of this analysis highlight the effectiveness of the proposed framework as well as the added value of the newly learnt representation. The rest of the paper is structured as follows: Sect. 2 introduces DeTSEC, Sect. 3 presents our experimental evaluation, and Sect. 4 concludes the work. In this section we introduce DeTSEC (Deep Time Series Embedding Clustering via attentive-gated autoencoder). Let X = {x_i}_{i=1..N} be a multivariate time-series dataset. Each x_i ∈ X is a time series where x_ij ∈ R^d is the multidimensional vector of the time series x_i at timestamp j, with 1 ≤ j ≤ T, d being the dimensionality of x_ij and T the maximum time-series length. We underline that X can contain time series with different lengths. The goal of DeTSEC is to partition X into a given number of clusters, provided as an input parameter. To this purpose, we propose to deal with the multivariate time-series clustering task by means of recurrent neural networks (RNNs) [1], in order to manage at the same time (i) the sequential information exhibited by time-series data and (ii) the multivariate (multi-dimensional) information that characterizes time series acquired by real-world sensors. Our approach exploits a Gated Recurrent Unit (GRU) [4], a type of RNN, to model the time-series behavior and to encode the original time series into a new vector embedding representation.
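The key property exploited here is that a GRU maps a variable-length multivariate sequence to a fixed-size hidden vector. A minimal NumPy sketch of a single GRU cell follows (one common gate convention; the weights, sizes, and initialization are illustrative assumptions, not the paper's TensorFlow implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_encode(X, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    # Run a GRU over a (T, d) series; return the last hidden state,
    # i.e., a fixed-size embedding regardless of the series length T.
    l = bz.shape[0]
    h = np.zeros(l)
    for x in X:
        z = sigmoid(Wz @ x + Uz @ h + bz)          # update gate
        r = sigmoid(Wr @ x + Ur @ h + br)          # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)
        h = (1 - z) * h + z * h_tilde              # gated state update
    return h

rng = np.random.default_rng(6)
d, l = 3, 5
params = [rng.normal(scale=0.5, size=s) for s in
          [(l, d), (l, l), l, (l, d), (l, l), l, (l, d), (l, l), l]]
emb_short = gru_encode(rng.normal(size=(4, d)), *params)   # length-4 series
emb_long = gru_encode(rng.normal(size=(9, d)), *params)    # length-9 series
```

Two series of different lengths are encoded to vectors of the same dimension l, which is what makes the downstream clustering stage possible.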
in the first one, the gru based autoencoder is exploited to summarize the time-series information and to produce the new vector embedding representation, obtained by forcing the network to reconstruct the original signal, that integrates the temporal behavior and the multi-dimensional information. once the autoencoder network has been pretrained, the second stage of our framework refines such representation by taking into account a twofold task, i.e., the reconstruction one and another one devoted to stretch the embedding manifold towards clustering centroids. such centroids can be derived by applying any centroid-based clustering algorithm (i.e. k-means) on the new data representation. the final clustering assignment is derived by applying the k-means clustering algorithm on the embeddings produced by detsec. figure 1 visually depicts the encoder/decoder structure of detsec, consisting of three different components in our network architecture: i) an encoder, ii) a backward decoder and iii) a forward decoder. the encoder is composed by two gru units that process the multivariate time series: the first one (in red) processes the time-series in reverse order (backward) while the second one (in green) processes the input time-series in the original order (forward). successively, for each gru unit, an attention mechanism [2] is applied to combine together the information coming from different timestamps. attention mechanisms are widely used in automatic signal processing [2] (1d signal or natural language processing) as they allow to merge together the information extracted by the rnn model at different timestamps via a convex combination of the input sources. the attention formulation we used is the following one: the network has three main components: i) an encoder, ii) a forward decoder and iii) a backward decoder. the encoder includes forward/backward gru networks. for each network an attention mechanism is employed to combine the sequential information. 
Subsequently, the gating mechanism combines the forward/backward information to produce the embedding representation. The two decoder networks have a similar structure: the forward decoder reconstructs the original signal in its original order (forward, green), while the backward decoder reconstructs the same signal in inverse order (backward, red). (Color figure online.)

Here, H ∈ R^{T,l} is the matrix obtained by vertically stacking all the feature vectors h_{t_j} ∈ R^l learned at the T different timestamps by the GRU, and l is the hidden state size of the GRU network. The matrix W_a ∈ R^{l,l} and the vectors b_a, u_a ∈ R^l are parameters learned during the process. The symbol ⊙ indicates element-wise multiplication. The purpose of this procedure is to learn a set of weights (λ_{t_1}, …, λ_{t_T}) that allows the contribution of each timestamp h_{t_j} to be weighted. The softmax function is used to normalize the weights λ so that their sum is equal to 1. The results of the attention mechanism for the backward (h_att_back) and forward (h_att_forw) GRU units are depicted with red and green boxes, respectively, in Fig. 1. Finally, the two sets of features are combined by means of a gating mechanism [18], where the gating function gate(·) performs a nonlinear transformation of its input via a sigmoid activation function and a set of parameters (W and b) that are learned at training time. The result of the gate(·) function is a vector of elements in the interval [0, 1] that is successively used to modulate the information derived from the attention operation. The gating mechanism adds a further decision level in the fusion of the h_att_forw and h_att_back information, since it has the ability to select (or retain part of) the features that are helpful to support the task at hand [27]. The forward and backward decoder networks are fed with the representation (embedding) generated by the encoder.
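The attention and gating computations described above can be sketched in NumPy as follows. The attention weights follow the formulation given in the text (tanh projection, scoring vector u_a, softmax); the exact wiring of the gated fusion is our assumption of one plausible form, since the text specifies only its ingredients (a sigmoid over learned parameters W and b, followed by element-wise modulation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention(H, W_a, b_a, u_a):
    # H: (T, l) stacked GRU outputs; returns a convex combination of its rows
    V = np.tanh(H @ W_a + b_a)      # (T, l)
    lam = softmax(V @ u_a)          # (T,), nonnegative, sums to 1
    return lam @ H, lam

def gated_fusion(h_forw, h_back, W_f, b_f, W_b, b_b):
    # hypothetical fusion: sigmoid gates in (0, 1) modulate each branch, then sum
    g_f = sigmoid(W_f @ h_forw + b_f)
    g_b = sigmoid(W_b @ h_back + b_b)
    return g_f * h_forw + g_b * h_back

rng = np.random.default_rng(3)
T, l = 7, 4
H_forw, H_back = rng.normal(size=(T, l)), rng.normal(size=(T, l))
W_a, b_a, u_a = rng.normal(size=(l, l)), rng.normal(size=l), rng.normal(size=l)
W_f, b_f = rng.normal(size=(l, l)), rng.normal(size=l)
W_b, b_b = rng.normal(size=(l, l)), rng.normal(size=l)

h_att_forw, lam_f = attention(H_forw, W_a, b_a, u_a)
h_att_back, _ = attention(H_back, W_a, b_a, u_a)
emb = gated_fusion(h_att_forw, h_att_back, W_f, b_f, W_b, b_b)
```

Because each gate lies in (0, 1), the fused embedding is component-wise bounded by the sum of the magnitudes of the two attention outputs.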
The two decoders deal with the reconstruction of the original signal considering the same order (resp. the reverse order) for the forward (resp. backward) decoder. This means that the autoencoder copes with the sum of two reconstruction tasks (forward and backward), where each reconstruction task minimizes the mean squared error between the original data and the reconstructed one. Formally, the loss function implemented by the autoencoder network is

L_ae = (1/|X|) Σ_{x_i ∈ X} ( ||x_i − dec(enc(x_i, Θ₁), Θ₂)||²₂ + ||rev(x_i) − dec_back(enc(x_i, Θ₁), Θ₃)||²₂ ),

where ||·||²₂ is the squared L2 distance, dec (resp. dec_back) is the forward (resp. backward) decoder network, enc is the encoder network, and rev(x_i) is the time series x_i in reverse order. Θ₁ are the parameters associated with the encoder, while Θ₂ (resp. Θ₃) are the parameters associated with the forward (resp. backward) decoder. Algorithm 1 depicts the whole procedure implemented by DeTSEC. It takes as input the dataset X, the number of epochs nEpochs, and the number of expected clusters nClust. The output of the algorithm is the new representation derived by the GRU-based attentive-gated autoencoder, named embeddings. The first stage of the framework (lines 2-6) trains the autoencoder reported in Fig. 1 for a total of 50 epochs. Successively, the second stage of the framework (lines 8-14) performs a loop over the remaining epochs in which, at each epoch, the current representation is extracted and the k-means algorithm is executed to obtain the current cluster assignment and the corresponding centroids. Successively, the autoencoder parameters are optimized considering the reconstruction loss L_ae plus a third term whose objective is to stretch the data embeddings closer to the corresponding cluster centroids:

L = L_ae + (1/|X|) Σ_{x_i ∈ X} Σ_l δ_il ||enc(x_i, Θ₁) − centroid_l||²₂,

where δ_il is equal to 1 if the data embedding of the time series x_i belongs to cluster l, and 0 otherwise, and centroid_l is the centroid of cluster l.
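The refinement stage alternates a centroid-extraction step with gradient updates on L_ae plus the centroid-attraction term. A minimal sketch of both ingredients — a plain k-means standing in for the runKMeans step, and the clustering term of the loss — on toy embeddings (all names and data here are illustrative, not the paper's implementation):

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    # plain k-means, standing in for the runKMeans step of Algorithm 1
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):          # keep empty clusters at old centroid
                centroids[j] = X[assign == j].mean(axis=0)
    return assign, centroids

def clustering_term(embeddings, centroids, assign):
    # mean squared distance of each embedding to its assigned centroid
    return ((embeddings - centroids[assign]) ** 2).sum(axis=1).mean()

emb = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])   # toy embeddings
assign, cent = kmeans(emb, k=2)
loss_term = clustering_term(emb, cent, assign)
```

Minimizing this term pulls each embedding toward its current centroid; on the toy data above, where every point already sits on its centroid, the term is zero.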
Finally, the new data representation (embeddings) is extracted (line 15) and returned by the procedure. The final partition is obtained by applying the k-means clustering algorithm on the new data representation.

Algorithm 1. DeTSEC
Require: X, nEpochs, nClust.
Ensure: embeddings.
1: i = 0
2: while i < 50 do
3:   update Θ₁, Θ₂ and Θ₃ by descending the gradient of L_ae
   …
10:  δ, centroids = runKMeans(embeddings, nClust)
11:  update Θ₁, Θ₂ and Θ₃ by descending the gradient of L
   …

In this section we assess the behavior of DeTSEC on six real-world multivariate time-series benchmarks. To evaluate the performance of our proposal, we compare it with several competing and baseline approaches by means of standard clustering evaluation metrics. In addition, we perform a qualitative analysis based on a visual inspection of the embedding representations learnt by our framework and by the competing approaches. For the comparative study, we consider the following competitors:
- The classic k-means algorithm [21] based on Euclidean distance.
- The spectral clustering algorithm [15] (SC). This approach leverages spectral graph theory to extract a new representation of the original data; the k-means method is then applied to obtain the final data partition.
- The Deep Embedding Clustering algorithm [28] (DEC), which performs partitional clustering through deep learning. Similarly to k-means, this approach is suited for data with fixed length, so also in this case we perform zero padding to fit all the time-series lengths to the size of the longest one.
- The Dynamic Time Warping measure [7] (DTW) coupled with the k-means algorithm. This distance measure is especially tailored to time-series data of variable length.
- The soft Dynamic Time Warping measure (softDTW), a differentiable distance measure recently introduced to manage dissimilarity evaluation between multivariate time series of variable length. We couple this measure with the k-means algorithm.
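The zero padding used above to feed variable-length series to the fixed-length baselines (k-means, SC, DEC) can be sketched as follows (a hypothetical helper, not the authors' code):

```python
import numpy as np

def zero_pad(series_list):
    # pad each (T_i, d) multivariate series with zeros up to the longest length
    t_max = max(s.shape[0] for s in series_list)
    d = series_list[0].shape[1]
    out = np.zeros((len(series_list), t_max, d))
    for i, s in enumerate(series_list):
        out[i, :s.shape[0], :] = s
    return out

batch = zero_pad([np.ones((3, 2)), np.ones((5, 2))])   # lengths 3 and 5
```

The padded tensor can then be flattened per series so that Euclidean-distance methods operate on vectors of equal dimension.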
Note that when using k-means and SC, since the multivariate time series can have different lengths, we perform zero padding to fit all the time-series lengths to the longest one. For the DEC method, we use the Keras implementation. For the DTW and softDTW measures, we use their publicly available implementations [22]. With the aim of understanding the interplay among the different components of DeTSEC, we also propose an ablation study that takes into account the following variants of our framework:
- A variant of our approach that does not involve the gating mechanism: the information coming from the forward and backward encoders is summed directly, without any weighting schema. We name this ablation DeTSEC_noGate.
- A variant of our approach that only involves the forward encoder/decoder GRU networks, disregarding the use of the multivariate time series in reverse order. We name this ablation DeTSEC_noBack.
Our comparative evaluation has been carried out on six benchmarks with different characteristics in terms of number of samples, number of attributes (dimensions), and time length: Auslan, JapVowel, ArabicDigits, RemSensing, BasicM, and ECG. All datasets, except RemSensing (which was obtained by contacting the authors of [10]), are available online. The characteristics of the six datasets are reported in Table 1. Clustering performances were evaluated using two measures: the Normalized Mutual Information (NMI) and the Adjusted Rand Index (ARI) [21]. The NMI measure varies in the range [0, 1], while the ARI measure varies in the range [−1, 1]. These measures take their maximum value when the clustering partition completely matches the original one, i.e., the partition induced by the available class labels.
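The ARI used for evaluation can be computed from the contingency table (the Hubert-Arabie adjusted form); in practice one would call scikit-learn's adjusted_rand_score and normalized_mutual_info_score, but a self-contained sketch makes the definition explicit:

```python
import numpy as np
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    # ARI from the contingency table (Hubert & Arabie adjustment)
    classes = sorted(set(labels_true))
    clusters = sorted(set(labels_pred))
    C = np.zeros((len(classes), len(clusters)), dtype=int)
    for t, p in zip(labels_true, labels_pred):
        C[classes.index(t), clusters.index(p)] += 1
    index = sum(comb(int(n), 2) for n in C.ravel())
    sum_rows = sum(comb(int(n), 2) for n in C.sum(axis=1))
    sum_cols = sum(comb(int(n), 2) for n in C.sum(axis=0))
    n_pairs = comb(int(C.sum()), 2)
    expected = sum_rows * sum_cols / n_pairs
    max_index = (sum_rows + sum_cols) / 2
    return (index - expected) / (max_index - expected)

# ARI is invariant to label permutations: a perfect partition scores 1
ari_perfect = adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0])
```

Unlike plain accuracy, this score needs no matching between predicted cluster ids and class labels, which is why it suits unsupervised evaluation.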
due to the non-deterministic nature of all the clustering algorithms involved in the evaluation, we run the clustering process 30 times for each configuration, and we report average and standard deviation for each method, benchmark and measure. detsec is implemented via the tensorflow python library. for the comparison, we set the size of the hidden units in each of the gru networks (forward/backward encoder/decoder) to 64 for the basicm and ecg benchmarks and to 512 for the auslan, japvowel, arabicdigits and remsensing benchmarks. this difference is due to the fact that the former group includes datasets with a limited number of samples, which cannot be employed to efficiently learn recurrent neural networks with too many parameters. to train the model, we set the batch size to 16 and the learning rate to 10^−4, and we use the adam optimizer [12] to learn the parameters of the model. the model is trained for 300 epochs: in the first 50 epochs the autoencoder is pre-trained, while in the remaining 250 epochs the model is refined via the clustering loss. experiments are carried out on a workstation equipped with an intel(r) xeon(r) e5-2667 v4@3.20 ghz cpu, 256 gb of ram and one titan x gpu.

table 2 reports the performances of detsec and the competing methods in terms of nmi and ari. we can observe that detsec outperforms all the other methods on five datasets out of six. the highest gains in performance are achieved on the speech and activity recognition datasets (i.e., japvowel, arabicdigits, auslan and basicm). on such benchmarks, detsec outperforms the best competitor by at least 8 points (auslan), with a maximum gap of 45 points on arabicdigits. regarding the ecg dataset, we can note that the best performances are obtained by k-means and dec. however, it should be noted that also in this case detsec outperforms the competitors specifically tailored to manage multivariate time-series data (i.e., dtw and softdtw). table 3 reports the comparison between detsec and its ablations.
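the two-phase schedule described above (50 pre-training epochs followed by 250 clustering-refinement epochs) can be sketched as follows; `pretrain_step`, `refine_step`, `run_kmeans` and `model.embed` are hypothetical stand-ins for the actual tensorflow operations, which are not reproduced here:

```python
N_EPOCHS = 300        # total training epochs reported above
PRETRAIN_EPOCHS = 50  # autoencoder pre-training phase

def train(model, data, pretrain_step, refine_step, run_kmeans, n_clust):
    """Two-phase schedule: reconstruction-only pre-training, then
    refinement where k-means assignments feed the joint
    reconstruction + clustering loss."""
    for epoch in range(N_EPOCHS):
        if epoch < PRETRAIN_EPOCHS:
            pretrain_step(model, data)  # descend reconstruction-loss gradient
        else:
            assignments, centroids = run_kmeans(model.embed(data), n_clust)
            refine_step(model, data, assignments, centroids)  # joint loss
    return model.embed(data)  # final embeddings, clustered with k-means
```

the returned embeddings are then partitioned with a final run of k-means to obtain the clustering result.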
no clear winner emerges from this analysis. detsec obtains the best performance (in terms of nmi and ari) on two benchmarks (arabicdigits and basicm), while detsec_nogate and detsec_noback appear to be more suitable for other benchmarks (even if the performances of detsec always remain comparable to the best ones). for instance, we can observe that detsec_nogate achieves the best performances on ecg. this is probably due to the fact that this ablation requires a lower number of parameters to learn, which can be beneficial for processing datasets with a limited number of samples, timestamps and dimensions.

to proceed further in the analysis, we visually inspect the new data representation produced by detsec and by the two best competing methods (i.e., sc and dtw), using basicm as an illustrative example. we choose this benchmark since it includes a limited number of samples (easing the visualization and avoiding possible visual cluttering) and it is characterized by time-series of fixed length, which avoids the zero padding transformation. the basicm benchmark includes examples belonging to four different classes that, in fig. 2, are depicted with four different colors: red, blue, green and black. figure 2(a), (b), (c) and (d) show the two-dimensional projections of the original data and of the representations produced by the dtw, sc and detsec approaches on such a dataset. the two-dimensional representation is obtained via the t-distributed stochastic neighbor embedding (t-sne) approach [16]. in this evaluation, we clearly observe that detsec recovers the underlying data structure better than the competing approaches. the original data representation (fig. 2(a)) drastically fails to capture data separability. the dtw method (fig. 2(b)) retrieves the cluster involving the blue points, on the left side of the figure, but it can be noted that all the other classes still remain mixed up.
sc produces a better representation than the previous two cases, but it still exhibits some issues in recovering the four-cluster structure: the green and black examples are slightly separated but some confusion is still present, while the red and blue examples lie in a very close region (a fact that negatively impacts the discrimination between these two classes). conversely, detsec is able to stretch the data manifold, producing embeddings that visually fit the underlying data distribution better than the competing approaches and distinctly organize the samples according to their inner cluster structure.

to sum up, we can underline that explicitly managing the temporal autocorrelation leads to better performances regarding the clustering of multivariate time-series of variable length. considering the benchmarks involved in this work, detsec exhibits a generally better behavior with respect to the competitors when the benchmark contains enough data to learn the model parameters. this is particularly evident when speech or activity recognition tasks are considered. in addition, the visual inspection of the generated embedding representations is in line with the quantitative results and underlines the quality of the proposed framework.

in this paper we have presented detsec, a deep learning based approach to cluster multivariate time series data of variable length. detsec is a two-stage framework in which, firstly, an attentive-gated rnn-based autoencoder is learnt with the aim of reconstructing the original data; successively, the reconstruction task is complemented with a clustering refinement loss devoted to further stretching the embedding representations towards the corresponding cluster structure. the evaluation on six real-world time-series benchmarks has demonstrated the effectiveness of detsec and its flexibility on data coming from different application domains.
we also showed, through a visual inspection, how the embedding representations generated by detsec greatly improve data separability. as future work, we plan to extend the proposed framework considering a semi-supervised and/or constrained clustering setting.

representation learning: a review and new perspectives
efficient attention using a fixed-size memory representation
a density based method for multivariate time series clustering in kernel feature space
learning phrase representations using rnn encoder-decoder for statistical machine translation
a fuzzy clustering model for multivariate spatial time series
soft-dtw: a differentiable loss function for time-series
optimizing dynamic time warping's window width for time series data mining applications
wavelets-based clustering of multivariate time series
duplo: a dual view point deep learning architecture for time series classification
multivariate lstm-fcns for time series classification
adam: a method for stochastic optimization
clustering of time series data - a survey
an ensemble model based on adaptive noise reducer and over-fitting prevention lstm for multivariate time series forecasting
a tutorial on spectral clustering
visualizing data using t-sne
a survey of clustering with deep learning: from the perspective of network architecture
improving speech recognition by revising gated recurrent units
temporal pattern attention for multivariate time series forecasting
mv-kwnn: a novel multivariate and multi-output weighted nearest neighbours algorithm for big data time series forecasting
introduction to data mining, 1st edn
tslearn: a machine learning toolkit dedicated to time-series data
recurrent deep divergence-based clustering for simultaneous feature learning and clustering of variable length time series
time-series clustering with jointly learning deep representations, clusters and temporal boundaries
independent component analysis for clustering multivariate time series data
learning kullback-leibler divergence-based gaussian model for multivariate time series classification
gated multi-task network for text classification
unsupervised deep embedding for clustering analysis

key: cord-033851-bxpmxvkk
authors: harmon, justin; duffy, lauren n.
title: a moment in time: leisure and the manifestation of purpose
date: 2020-10-16
journal: int j sociol leis
doi: 10.1007/s41978-020-00073-0
doc_id: 33851
cord_uid: bxpmxvkk

there has been little consideration given to understanding the concept of time within leisure. just what is time when considered as an ordering mechanism of our leisure behaviors? most leisure research has approached the concept of time through a largely western, monochronic understanding which emphasizes time for its linear ordering and quantifiable qualities. the dominance of this implicit understanding of time is also notably influenced by pressing ideologies that define western society, such as neoliberalism, which can distort our personal discourse with our own time: we see it as a commodity, something to be used efficiently and to be invested. what this thought-piece aims to do is consider the existential properties of time, particularly the "moment," as an opportunity to "achieve [the] total realization of a possibility" as illustrated by lefebvre.

"there is a wide discrepancy between time as it is lived and time as it is considered." -edward t. hall (1983)

"time" has always been a central component to understanding leisure. the narrative of modern, industrialized nations with 40-h work weeks led to the emergence of our dualistic view of work and leisure that comprise our lives, and thus, early concepts of leisure were defined by time outside work, instilling the centrality of time in how we understand leisure (godbey 2016).
as a result, most research on time in the field of leisure studies has focused on the use of time, typically through time diaries (robinson and godbey 1997) and other self-report time-use studies (bureau of labor statistics 2020; godbey 2016). building from this, then, has been the utility function of time, where the purposeful use of it is considered an essential ingredient to a life well-lived in the pursuit of the "american dream" (hunnicutt 2013). time has been implicitly explored through the frameworks of serious leisure (e.g., stebbins 1992), recreation specialization (e.g., scott and shafer 2001), and enduring involvement (e.g., mcintyre and pigram 1992). in each of these frameworks, time manifests as forms of commitment, or the behavioral components that tie individuals to routines of leisure behavior (habitual use of time) and a level of involvement indicative of devotion to an activity (intentional use of time). other scholars have looked at the segmented phases of leisure experiences and their effect on future participation (harmon and dunlap 2018), as well as the continuation of leisure experiences after participation (scott and harmon 2016), both key aspects of understanding the role of temporality, particularly as it relates to continuity in leisure repertoires. likewise, with the interest in intentional leisure experience design, there has been more recent attention paid to the temporal episodes that occur within experiences and the methods to study these across the duration of an experience (e.g., the experiential sampling method; see ito et al. 2019; quinlan cutler et al. 2018; zajchowski et al. 2017). though there is no one guiding theory of time, there has been little consideration given to understanding the concept of time within leisure. just what is time when considered as an ordering mechanism of our leisure behaviors?
time has been theorized as both absolute (merrifield 2013) and abstract (hall 1983), but our internalization of it is often entirely subjective, as something that can be transcended (berdyaev 1938/2009), or in many cases, as something that has control over us (rose 2017). what this thought-piece aims to do is consider the existential properties of time, particularly the "moment," as an opportunity to "achieve [the] total realization of a possibility" (lefebvre, as cited in merrifield 2013, p. 27). moments, according to the work of henri lefebvre, are "the delirious climax of pure feeling, of pure immediacy, of being there and only there," and a desire to "endure" in the ephemeral properties of bliss, revelation, and connection (merrifield 2013, p. 28). while moments evaporate as a requirement of their fleeting qualities, their effects can endure. moments are understood as the opposite of alienation, which is reflected as "absence, a dead moment empty of critical content"; the lefebvrian moment signifies "presence, a fullness, [being] alive and [feeling] connected" (merrifield 2013, p. 29; emphasis original). while presence has certainly been explored in leisure through the concept of flow (e.g., csikszentmihalyi 2008), ruminations in this area have focused solely on absorption into the leisure activity, absent any significant consideration of the temporal component aside from simply "losing track of time." that is, most leisure research has approached the concept of time through a largely western, monochronic understanding which emphasizes time for its linear ordering and quantifiable qualities. the dominance of this implicit understanding of time is also notably influenced by pressing ideologies that define western society, such as neoliberalism, which can distort our personal discourse with our own time: we see it as a commodity, something to be used efficiently and to be invested.
metaphors are flippantly attributed to the experience of time without any depth of consideration in definition or description as it relates to the temporal aspects of experience: we "speak of it as being saved, spent, wasted, lost, made up, crawling, killed, and running out," but we rarely explore the complexities that lie beneath the surface (hall 1983, p. 48). as hall (1983) stated, "time is so thoroughly woven into the fabric of existence that we are hardly aware of the degree to which it determines and coordinates everything we do, including the molding of relations with others in many subtle ways" (p. 48). the essential value of time can be found in "the manifestation of purpose" (berdyaev 1938/2009), something often sought through, and attributed to, meaningful leisure experiences. thus, the moment, in leisure, is the point of departure, the pivotal processing of experience where we make decisions in the existential reckoning of our lives in the pursuit of transcendence. below, three cases are examined through the lens of lefebvre's moments in time. each demonstrates the notion that moments pass, yet the imprint they leave on us can go on to define the trajectory of our lives and meaning in life. there are generations of americans who remember exactly where they were when president john f. kennedy and dr. martin luther king, jr. were assassinated, when the space shuttle challenger blew up, or when the twin towers fell on 9/11; all moments that were jarring and decentering to the life that followed. building on lefebvre's work, elden et al. (2003) state that a moment not only defines a form, but a form is also defined by it. moments can redefine or confirm one's trajectory. moments are recognized through individual-level consciousness, but those recognitions can take place in the broader tapestry of social consciousness and collective memory.
that a moment could either confirm, deny, or disrupt routines, understandings, or beliefs suggests that it can lead to shockwaves that reorder social life, even if only for a finite period of time, but in some instances permanently. much like being in a multi-car accident, there is a cascading effect from the point of impact on all involved that can extend beyond just those onsite, especially if the injuries of those involved are serious. the injuries sustained, and the rehabilitation required to follow, or the long-lasting effects of the permanence of injury, can alter the time which is still yet to come. in the moment, a blink of the eye, whole lives can be forever changed. just the same, moments can be pivotal and positive in terms of how they reorient life, interests, relationships, goals, and meanings. moments can be ambiguous, their importance not immediately recognizable until the series of events that follows has completed its evolution or escalation. elden et al. (2003) state that "there is no moment except in so far as it embraces and aims to constitute an absolute" (p. 175). but that "absolute" might not be recognized immediately for lack of information or context, might only come into being through reflection on historical events, or may not be understood until the sequences that initiated it, or follow it, are put into place. in what follows, we explore the state of altered time through life during the covid-19 pandemic and the moment "it all changed"; the death of george floyd as the breaking point of public consciousness after centuries of injustice culminating in feelings of "enough"; and the moment when one recognizes their life is no longer lived as their own when there is a loss of control of one's time through incarceration. the surreality of life in 2020, both in the united states and across the globe, once covid-19 forced itself on humankind, was quite abrupt, even though there were signs indicating we should have been better prepared.
the everyday, taken-for-granted "normalcy" of life was disrupted, toppled on its head. in the united states alone, millions lost their jobs; those who did not, in many instances, had to work from home, where they were met with competition for their time. for many, it came in the form of children who were now out of school, sent home to continue their "education," albeit with parents playing a significant role in their instruction (burk et al. 2020). still others were not so lucky, as the realization of just what and who was "essential" became very clear (rose 2020). those on the "front line" were forced to navigate the precarious circumstances of the new world order as daily mis/information and ever-evolving updates from scrambling medical professionals and beleaguered governmental bureaucrats tried to keep up with what had become an almost minute-by-minute news cycle, seemingly always two steps behind the best practices for life during a global pandemic. our lives, almost literally, changed moment-by-moment as new information surfaced. realization of a "new normal" also came through moments that captured changes in the most mundane aspects of everyday life: scrolling tv listings for live sports and finding none, negotiating a transaction at the post office through a makeshift protective barrier, or participating in telehealth appointments instead of visiting a doctor's office. all of these examples illustrate lefebvre's notion that moments can be those events that simply register the possibility of a life that could look, feel, and be, different. however, the case of covid-19 is not only about the moments that signify the new normal; it in itself became a disrupter of how time is spent, with the oft-used descriptor "covid time." with the shake-up in daily routines came a perceived glut of "free time," in large part because most social and recreational outlets and activities were halted for the foreseeable future.
entertainment, edification, and exercise were all now fully our responsibility whether we liked it or not (simpson 2020; williams 2020). slowly but surely, we were forced to recognize that the hours in our days were our own to put in order and make work for us in the face of global uncertainty. numerous moments presented themselves to us as opportunities for engagement and action; still others seemed to give us permission to not take advantage of our situations, retreating to the couch to binge-watch netflix, or engage in passive activities, and not make accommodations to otherwise previously-held healthy leisure routines. some had few choices due to other health or social forces (son et al. 2020), and for many, the moments that followed lockdown likely resulted in a sense of isolation, loneliness, or alienation (palgi et al. 2020). but all were opportunities for enacting agency, for recognizing the potential to "seize the moment," to make the best of a situation that continued to remain unclear. through the long, cruel, and unjust history of chattel slavery, jim crow, segregation, and structural racism in the united states, there have been a series of moments that signaled atrocity and horror and the need for change. central in this rising awareness of injustice and brutality is the rash of police killings of unarmed, and oftentimes innocent, black men in the twenty-first century (staggers-hakim 2016). while public shock and sadness simmered to the top with every black man and woman lost to senseless state-imposed violence, it was the killing of one man, george floyd, on may 25, 2020, where the anger boiled over in the broader public and a moment, captured in an eight-minute-and-46-second video, ignited action.
while the black lives matter movement had had a significant presence since the murder of michael brown on august 9, 2014 (see rickford 2016), with the death of george floyd it took on a new insurgence of power, brought on by a public who had finally had enough; the moment for action and change was ripe. the departures from normal routine brought about by covid-19 intersected with the moment triggered by floyd's death; it created the opportunity for people to seize that altered state of time to capitalize on the historical moment and take to the streets in protest (abbady 2020). for many, out of work and enraged by police violence, and without structured outlets for leisure, activism and protest became a renewed forum for solidarity and meaning making; the streets became the surrogate for ball fields, basketball courts, dancefloors, and festivals; leisure blended with social justice to create the forum for revolution (arora 2013). it is in the temporal relations that occur in shared space where "societal sense-making" takes place (poell 2019, p. 1), the communal processing of generations of injustices in a collective call to action instigated by a pivotal moment in history. incarceration is the ultimate form of alienation (cochran and mears 2013). there is an old adage that when you go to prison you only "do" two days: the day you go in, and the day you come out. the rest of the time spent behind bars is not yours. not only does being sentenced to jail or prison alienate the inmate from their family, work, and leisure, but it also alienates them from society, and from the self (barry et al. 2020). while many convicted of crimes may have been living a life of alienation before arrest and sentencing, it is only fully realized once the individual is locked up. elden et al. (2003) stated that, metaphorically, "the alienated person locks himself in the moment; he makes himself its prisoner" (p. 175); but this is likely literally true as well.
either at the moment of capture or sentencing, but certainly once an inmate crosses into government confinement to serve their sentence, it is a series of moments, beginning with the decision to engage in a crime, through sentencing, to incarceration, where past decisions in life can be seen as part of a greater series of decisions outlining how one loses control of their own time (severson et al. 2011). prisons and jails are the epicenters of unfree time, the ultimate paradox and contradiction, where agency is removed and inmates are subject to the impositions of state-sanctioned mandates (shaw and elger 2015). the same can be said of leisure in this blended temporal-spatial environment. while inmates have the opportunity to engage in certain types of "leisure" due to an excess of "free time," they have little control over much of it, and more often than not, that leisure is used primarily for helping pass time (johnsen and johansen 2019). while an individual's initial decision to commit a crime, in most instances, was the key moment in their incarceration, they are constrained in their decision-making ability due to their imprisonment. moments of opportunity are few and far between, and the onus of moments shifts to maintaining habits and making smart decisions until they are released and given back their agency and ability to respond to future moments and sculpt their lives as they see fit. time, in a western context, is an intentionally "disciplining instrument meant to indoctrinate a particular set of arrangements and values among those operating within its purview" (saul 2020, p. 53), something that is evidenced equally, but differently, through the covid-19 and incarceration examples.
frequently, time has the appearance of a "homogenous medium" evoking the endurance properties of humans as they work, socialize, reflect, and recreate, but in reality, time is experienced heterogeneously, as a multitude of series where we have the opportunity to process our thoughts and actions in situ (bergson 1889/2004). rojek (2010) said that leisure is shaped by history and that all theories of leisure must be situated in time, something that is captured by the social unrest, activism, and protests related to police brutality. yet, often, what is perceived as "free time" is infused with anxiety-inducing properties, causing individuals to seek out some form of instrumental or practical structure within which to situate their leisure (batchelor et al. 2020). in our intentional use of "excess" free time at home during covid-19, in the justice movements we collectively try to develop through social protest, and in the routines we create for ourselves while in confinement, our decisions in the moment redirect us back to regimented patterns of behavior that, while familiar, if not necessarily comfortable, can also be limiting in their predictability to our personal evolution. as we search for "the manifestation of purpose" (berdyaev 1938/2009) in our lives through leisure, whatever the context, we must be reminded of the importance of the opportunity to "achieve [the] total realization of a possibility" when presented in the moment (lefebvre, as cited in merrifield 2013, p. 27). leisure research is uniquely suited to explore the complexities and intricacies of human experience and behavior and their resulting impact on identity, perception, and potential. while the field has explored the temporality of leisure experiences for more than half a century (cf. clawson and knetsch 1966), there has been scant effort in the dissection of the units of experience, which are often richly rooted in histories, both personal and social.
related, the sociopsychological processing of experience is inextricable from time, and therefore there must be a reconstituted effort to investigate how moments in time not only define the form of a leisure experience, but how the form of leisure is also defined by time (elden et al. 2003). phenomenological investigation and oral histories, common methodologies in leisure studies, are apt for generating more intimate knowledge of the micro-temporal moments in leisure. leisure scholars can approach investigations of time, particularly the moment, by placing more emphasis on points of origin, integral instances, realizations, calls-to-action, ultimatums, and the role of personal responsibility and accountability in leisure. some expedient examples of the exploration of a leisure moment could be the point when one begins to feel a sense of accomplishment or competence (e.g., breaking a personal record, learning to play the guitar capably), or when one reaffirms their purpose (e.g., seeing the sunrise over a lake, being acknowledged for one's contributions). moments are not static, and they are not uniform in their duration, but they have the potential to be life-changing, and it is the duty of the field of leisure studies to understand how. as a final posit, this paper makes an explicit call for more attention to be paid to the concept of time within leisure research. simply, researchers interested in self-realization through leisure should revisit assumptions regarding time. this paper approached time through a lefebvrian lens using his theory of moments; however, future research can draw from a range of philosophers, theoretical physicists, and theologists concerned with time, purpose, and meaning making in life.

how to protest safely in a pandemic
usurping public leisure space for protest: social activism in the digital and material commons
functional disability, depression, and suicidal ideation in older prisoners
precarious leisure: (re)imagining youth, transitions and temporality
solitude & society
time and free will: an essay on the immediate data of consciousness
pandemic motherhood and the academy: a critical examination of the leisure-work dichotomy
economics of outdoor recreation
social isolation and inmate behavior: a conceptual framework for theorizing prison visitation and guiding and assessing research
flow: the psychology of optimal experience
history, time and space
leisure matters: the state and future of leisure studies
the dance of life: the other dimension of time
the temporal phases of leisure experience: expectation, experience and reflection of leisure participation
free time: the forgotten american dream
a comparison of immediate and retrospective affective reports in leisure contexts
serving time: organization and the affective dimension of time
recreation specialization re-examined: the case of vehicle-based campers
henri lefebvre: a critical introduction
the loneliness pandemic: loneliness and other concomitants of depression, anxiety and their comorbidity during the covid-19 outbreak
social media, temporality, and the legitimacy of protest
the experience sampling method: examining its use and potential in tourist experience research
black lives matter: toward a modern practice of mass struggle
time for life: the surprising ways americans use their time
the labour of leisure: the culture of free time
"never enough hours in the day": employed mothers' perceptions of time pressure
biopolitics, essential labor, and the political-economic crises of covid-19
temporality and inequity: how dominant cultures of time promote injustices in schools
extended leisure experiences: a sociological conceptualization
recreation specialization: a critical look at the construct
prisoner reentry programming: who recidivates and when
improving public health by respecting autonomy: using social science research to enfranchise prison populations
mass hysteria, manufacturing crisis and the legal reconstruction of acceptable exercise during a pandemic
promoting older adults' physical activity and social well-being during covid-19
the nation's unprotected children and the ghost of mike brown, or the impact of national police killings on the health and social development of african american boys
amateurs, professionals, and serious leisure
from gym rat to rock star! negotiating constraints to leisure experience via a strengths and substitutability approach
the experiencing self and the remembering self: implications for leisure science

key: cord-023988-u60l07jv
authors: bao, yinyin; bossion, amaury; brambilla, davide; buriak, jillian m.; cai, kang; chen, long; cooley, joya a.; correa-baena, juan-pablo; dagdelen, john m.; fenniri, miriam z.; horton, matthew k.; joshi, hrishikesh; khau, brian v.; kupgan, grit; la pierre, henry s.; rao, chengcheng; rosales, adrianne m.; wang, dong; yan, qifan
title: snapshots of life—early career materials scientists managing in the midst of a pandemic
date: 2020-04-23
journal: chem mater
doi: 10.1021/acs.chemmater.0c01624
doc_id: 23988
cord_uid: u60l07jv

■ yinyin bao, group leader, institute of pharmaceutical sciences, eth zürich

i never expected that a simple assembly of certain macromolecules could have such a huge worldwide impact on human life, which is what we as polymer scientists endeavor to achieve but cannot. ironically, the coronavirus succeeded in making it happen. working in the second most densely infected country, as a father of two little kids and a junior group leader, i would not say that lockdown life is easy.
with our lab closed, i can only focus on papers, reports and other writing work, and communication with students can only be done online. what is more complicated is that i have had to shift my working time mainly to the night, so that i can have "bulk time" to concentrate on one thing. although i am less productive, fortunately i have more time to teach my daughter mathematics, read story books to my son, make handicrafts with them, and have other fun. this greatly eases my anxiety about the suspended research, which makes my lockdown life better than expected. during the spread of the coronavirus, another thing i did not expect was the huge difference between east and west in people's reactions. a typical example is the big debate on whether uninfected people should wear masks. since the outbreak of sars in 2003, chinese and other asians have become very sensitive to unknown viruses, and wearing masks has been considered an effective method to prevent their spread. however, a mask is treated as a sign of sickness and overreaction by most europeans, and is thus not socially acceptable. three weeks ago, when i wore my first mask just before the lockdown of eth, i saw only asian people wearing masks in switzerland. surprisingly, as the covid-19 situation has worsened, i have started to see europeans wearing masks in the stores and on the street. i have even read news that austrians are now required to wear masks in supermarkets. it is interesting to see this transformation due to the reconciliation between eastern and western culture, and i will continue to follow this trend during this special period. ■ amaury bossion, post-doctoral scientist. while the new coronavirus pandemic is dominating headlines worldwide and the french president, mr. emmanuel macron, has just extended the stay-at-home order, here are my thoughts on the surreal atmosphere.
as a young postdoctoral scientist, i must admit it is quite harsh not only to sacrifice laboratory time but also to socialize only remotely. it is even more frustrating that this crisis put an instantaneous halt to promising ongoing experiments. this bizarre and heavy atmosphere is all the more present in that i have become, partly against my will, a troglodyte in my small parisian apartment. while many of my friends and colleagues returned home to spend more time with their families at the beginning of the crisis, a mixture of logic and reflection advised me to stay home for the greater good, despite my family's calls to come back. although i am organized and rigorous, covid-19 is putting a strain on me mentally, as remaining focused on writing articles and reviews all day is really demanding with all the other distractions surrounding me. physically, the lockdown has considerably reduced my daily physical activity, to the point where the most exercise i get is the few walking steps needed to move around my flat. i sincerely hope this horrific time will end soon and that we will learn lessons for future preparedness, but i definitely believe that we can take advantage of it to grow mentally and learn new things. ■ davide brambilla, assistant professor, faculté de pharmacie, université de montréal as the son of a health professional in the red zone in italy, i was rapidly aware of the seriousness of the infection. nonetheless, the pandemic materialized as a storm when, from one day to the next, we had to shut down the laboratory and stop all nonessential experiments. after the initial phase of disorientation, the first work-related thoughts went to teaching, the graduate students and their projects. while for undergraduate classes the université de montréal rapidly reacted and provided support to generate online classes, for research, the initial stress slowly converted into the recognition that this could be a great opportunity to review and better plan our projects.
now, after a month of home-office, i feel that this forced shutdown has brought me out of a working routine, and made me appreciate even more the importance of our profession. scientific research is the only actual weapon we have to fight this infection and to prevent, or give a rapid response to, future ones. i deeply hope this pandemic will teach us all something, and that the opinions of scientists will receive higher consideration from society and the decision-making institutions. ■ kang cai, post-doctoral fellow, department of chemistry, northwestern university i am experiencing my second "stay-at-home" period in the us now. i am a postdoctoral researcher at northwestern university in the us. three months ago, i went back to china to attend an academic forum, and then stayed at my hometown in hunan province to spend the chinese spring festival with my family. on jan. 23, the covid-19 outbreak happened, and i experienced my first "stay-at-home" period. my return flight to the us was on january 31st, which turned out to be united airlines' last flight from china to the us. after a two-week self-quarantine, i worked hard in the lab and tried to get as many results as possible, since i realized that universities in the us could also be shut down in the near future, which happened one month later. now i have been staying in my apartment for three weeks. i work on manuscripts, read papers, think about proposals, do "reactions" in the kitchen, and watch tv. the group meeting is held every week via zoom video, which is good, because it has become the only "social activity" every week. currently i have plenty of work to do, which makes the "stay-at-home" days not that boring. but if these days continue for another one to two months, or even longer, which seems very likely, i am not sure whether i will become anxious. it is a tough year for all of us. i am supposed to be on the job market this year, but now the situation has changed a lot.
the future is full of uncertainty. ■ long chen, professor, department of chemistry, tianjin university during the lockdown period in our city, which began in february, although the laboratories are still closed and all the students are staying in their hometowns, we all have great confidence that our country, and the entire world, can win this covid-19 pandemic crisis. we keep in touch with our group members via wechat, and also continue to hold group meetings online, with a focus on literature reports. although from the early days of this pandemic we were not allowed to enter our offices on campus, all the online resources of the university can be conveniently accessed via a vpn connection. it is also an opportunity for the principal investigator and graduate students to analyze and summarize their research work. we recently managed to finish an invited review contribution and several manuscripts. with the situation steadily improving in china, some universities have already announced their gradual reopening schedules. it is my hope that humanity pulls together and builds a future in which we are united to fight this virus and bring about the fastest possible victory. ■ joya cooley, post-doctoral fellow, materials research laboratory, university of california santa barbara i've read plenty of advice saying that a routine is most helpful for maintaining overall sanity. i've been keeping a routine, but have found that a flexible routine with some goals works better for me. some of those goals include: sit down at my desk and get some writing done, read some literature, do yoga, make meals, check on my parents. i find that variation in my routine helps with maintaining sanity: if i stray from a rigid routine, i start to create more inner turmoil when i cannot keep up. however, if i just try to set some daily or weekly goals, i can tackle them based on how i'm feeling that particular day.
i try to practice yoga at least 4 times a week; some days it happens in the morning, some days in the evening, some days not at all. i write for at least one hour a day; some days that happens all at once, some days it is broken up into shorter sessions. i feel i'm in a precarious position as i'm wrapping up my postdoctoral work and gearing up to begin my independent career, but all i can do for now is take it one day at a time. ■ juan-pablo correa-baena, assistant professor, georgia institute of technology our laboratories closed unofficially on march 13th, 2020, as i worried about the health of my students and postdocs. it coincided with spring break, and some students were traveling abroad to see their families. i was hesitant to advise them on travel and the institute had not officially closed, making it difficult to issue strong recommendations. georgia tech officially closed all nonessential activities one week later. the weeks after were tough, as students got stranded abroad and experiments were not finished to the extent we wanted. our laboratories only opened in october 2019, and we spent the past 6 months ramping up. it took one day to ramp everything down. nonetheless, this has presented itself as an opportunity to come up with strong experimental plans, revisit the literature, and compile and analyze data that will go into future manuscripts. the week of april 6, 2020, finally started feeling normal, and we are trying to make the most of it. i have a plan for each of my students and postdocs for the next two months; after that, i will have to reevaluate! will we turn into a computational group? we will see! as for teaching, transitioning to an online format was easy for this millennial (me), and my students adapted quite well; they ask more questions than ever. ■ john m. dagdelen, graduate student researcher, lawrence berkeley national laboratory we are a team at lawrence berkeley national laboratory working with colleagues from the biomedical research community to make text mining and search tools specifically tailored to covid-19 research, with the goal of helping to accelerate our colleagues' research.
our team is made up of a number of graduate student researchers and postdocs from lbnl and uc berkeley who specialize in natural language processing methods for analyzing the materials science literature, but we were approached about a month ago by colleagues from the innovative genomics institute about applying some of our techniques to the covid-19 literature. since then, we have been working around the clock to build covidscholar.org, a knowledge portal designed to help researchers stay on top of the covid-19 literature. to our knowledge, our database is the most comprehensive and current source for covid-19 papers available today, with more than 45,000 papers, and we are expanding to include patents and clinical trials in the near future. our site also includes features that leverage machine learning models to extract knowledge from the literature and help researchers make new connections that they might have missed due to the sheer volume of research coming out every day (we are seeing more than 150 papers published on covid-19 daily). this project has been extremely motivating for everyone on the team, and we have been able to make rapid progress as a result. ■ miriam fenniri, undergraduate student, university of british columbia i am a soon-to-be fourth (senior) year undergraduate student at the university of british columbia (ubc). i was fortunate enough to spend last summer in an organic chemistry lab at université laval, where i was working on the synthesis of low band gap conducting polymers in the laboratory of prof. mario leclerc. this summer, i was planning on staying on the ubc campus doing research and continuing my work as a teaching assistant, until covid-19 got in the way. not only is the course that i was going to assist with canceled, but the research center where i was going to spend the next 4 and a half months is closed until further notice. i am still living on campus.
many of my colleagues and classmates have returned home; some are still here, either by choice or because of travel restrictions. classes were moved online on march 13th, and shortly thereafter, laboratories closed and research was halted. the transition to online learning has been smooth; however, i never expected to have to write a midterm, much less three midterms, in my living room. at least i am now prepared to do the same for my final exams. to combat loneliness (and boredom), my family has been hosting weekly video-chats, and strangely enough, i look forward to them! ■ matthew horton, materials project staff, lawrence berkeley national lab i work in computational materials science, a field for which i'm enormously grateful that my colleagues and i can continue to make contributions remotely. however, this can only happen because of people out there who are still performing the necessary maintenance and support for the high-performance computers and servers that we use. for me, these are the people who are keeping the facilities at the national energy research scientific computing center (nersc) running smoothly here in berkeley, but i know that my colleagues greatly appreciate similar efforts all across the world. it is important we recognize all of these people, and the personal risk they're taking on, as well as everyone working to keep laboratories in a safe and stable condition. beyond computation, much of my job is working to share the data we generate online at the materials project so that it is accessible to as broad a group of people as possible, and part of that is working to build a community. i have many open questions about how best to do this, but it feels more necessary now than ever that we better understand what steps we can take to make our community stronger and more inclusive.
for my part, i'm enjoying talking to scientists on an online materials science discussion forum we recently launched, as well as helping to welcome new developers who make contributions to the open-source codes we work on. as this situation evolves, we can challenge ourselves to become more inventive in finding ways to connect and collaborate, and carry these lessons forward. ■ hrishikesh joshi, graduate student, i have been living in germany for 3.5 years, pursuing a doctorate in chemistry. presently, i am preparing for a big day in my career: my ph.d. defense, "remotely." if i had to describe my current state in one word, it would be "uncertain," on all accounts. will the internet hold up during the defense? will my online presentation be good enough? will i find a job after my defense? the global economy is headed toward a recession, and most companies are downsizing. being an international candidate, it will be more challenging to find a job now than before. on the one hand, i am thrilled that i at least get to defend my ph.d. on the other hand, i am disappointed that i miss out on the opportunity to share this day with people. working remotely has made me appreciate personal interactions even more. every thursday, i am very excited, as i get to go to the lab (under reduced-workforce regulations) and feel a bit normal again. nevertheless, these times have also been productive, as i am working on overdue programming projects and experimenting a lot with cooking. i feel these times are uncertain and disappointing, but opportunistic in some ways. ■ brian v. khau, graduate student the enactment of unfamiliar public health measures and the rapid breakdown of our status quo are two major emotional stressors associated with the ongoing covid-19 pandemic. as an early career scientist, it is easy to fall into the rut of futility that comes with leaving experiments half-finished with looming deadlines.
instead of focusing on events outside of my control, i found it more productive to reframe the current situation as a unique opportunity to work on myself, whether it be through reading up on current literature or investing more time into hobbies. in the past month, i've invested more time to properly care for my existing houseplants, repurposing a garment rack to create a diy grow light setup. if you are in need of a hobby, i recommend cultivating common, inexpensive vining plants such as pothos or philodendron! both plants display rapid growth and thrive even when you occasionally forget to water them, and with enough care and time they will grow into respectable, climbing foliage. houseplants are also inspiring metaphors for how we should live our lives; by constantly reaching for the sky while taking care of our essential needs, we can succeed and flourish in the face of unexpected change. ■ grit kupgan, postdoctoral researcher, as a member of a theoretical/computational group, i feel grateful that our research does not have to come to a halt. i cannot imagine the frustrations of researchers who have to delay their work as laboratories become inaccessible. for me, the transition from working in the lab to working from home has been a smooth one. at the beginning of march, our advisor requested a meeting to develop a contingency plan. as expected, the plan was invoked within a week due to the worsening pandemic. because of this preparation, everyone in the group was ready. our data were backed up, and some of us even took our workstations home. indeed, working from home has several advantages. i get to spend more time with my family who lives in a different state, and eliminate daily commutes. to keep the research momentum going, our group holds the research meetings twice a week online. additionally, i try to maintain my usual routines, such as work hours and breaks, and attend online seminars whenever possible. 
unfortunately, in my opinion, working from home slightly reduces the sense of camaraderie. in the lab, we can have research discussions, ask questions, get suggestions, and share personal stories and thoughts about politics (domestic and international) with our colleagues daily, which is always a pleasure. ■ henry s. la pierre, assistant professor, my professional and personal life move on different frequencies. my research group is in a rush to publish and to meet grant deadlines. even with the closure of laboratories and the cancellation of a busy travel schedule, leaving synchrotron and magnet lab experiments indefinitely suspended, our scientific progress is still planned on the order of days. while i am building plans for my students to safely return to the lab, these may very well not be implemented: there is no justification to rush back without effective and organized testing. as my wife, daughter, and i prepare for the arrival of her baby brother in july, i am acutely aware that the changes and dangers of this new world will not abate soon. it is exhausting simultaneously meeting the demands of the moment and mitigating the risks of a nebulous "return." these risks are particularly worrisome in the vacuum of federal and state leadership. as we rebuild our institutions, scientists, engineers, and academics must demand the integration of our technical and organizational expertise into the structure and function of our governments. one bright spot in this debacle has been the competent and measured response of scientific leadership across disciplines and institutions. ■ chengcheng rao, graduate student, with respect to my own research, my ongoing experiments had to be postponed, accompanied by a shift of focus to literature, writing, and paperwork. i have been wondering: would it be possible to execute my experiments automatically and/or remotely?
currently, the answer is no, but i cannot help but observe that, with the growth of ai and breakthroughs underpinned by intelligent robots, some experiments will become doable by machines with fewer hands in the lab. it is an eye-opening moment to think about how to bridge and transfer this advanced technology to my research as well. for interpersonal communication, our group meetings have moved online to maintain social distancing and self-isolation, which is a new format for me. hence, we need some time to get familiar with this new communication method and software, as face-to-face communication is more productive. all graduate courses are online as well, which is a challenge for the students who need to take them. meanwhile, so much information is shared by email, and it sure feels like we are receiving literally tons of emails every day; it is very hard to follow every email, as some have too much or too little information, and some are duplicated or even conflicting. it is another challenge to obtain useful information effectively through this information explosion. as for my graduation, as i approach the end of my ph.d., my defense was expected at the end of the summer of 2020. will i have an online defense, and a virtual graduation convocation? i hope not. hence, i am always thinking about when the coronavirus outbreak will come to an end. wuhan's shutdown was lifted on april 8, which shows that the epidemic can be brought under control if effective controls are taken. but will it manifest itself as a second wave? this is the part i am most worried about. after considering all aspects, it is necessary to create and maintain a routine during this ever-changing time. do some work on paper or computer, avoid going out unnecessarily, and be sure to get some exercise to strengthen immunity; these are the things that i am doing. although the temperature is still below zero celsius in edmonton, i do believe spring is on its way.
■ adrianne rosales, assistant professor, almost exactly one year ago, i was working from home for a different reason: the birth of my daughter. while i was over the moon to be a new mother, there was a part of me that struggled with the anxiety of staying productive. my research group was only two years old, and i had watched others continue to submit grants, write papers, and advise students soon after the births of their children. whether that productivity was real or not, i held myself to an impossibly high standard on very little sleep. when covid-19 began to spread, i was acutely aware of the implications of shutting down my lab space and working from home again, not even one year later. nap time only goes so far, and my group was still mostly first- and second-year graduate students. things were finally starting to work! ultimately, it was clear that much larger consequences were at hand and that many would suffer. while it was still voluntary, i decided to pause our lab activity. our university mandate came the following week, but in the meantime, it felt like a decision i made every day. and although i still hold high expectations for my group, we have worked together to make sure those expectations are also realistic. this will not be the last challenge my lab faces, and i hope that in addition to research productivity, i am training my lab in leadership and resiliency. ■ dong wang the covid-19 pandemic is now well under control in china. most of our regular daily life has recovered, although the institute is only open to permanent staff. the "good" thing is that i have much more time to concentrate on research: i reviewed several papers during this time, probably more than i usually do, and i feel that i have read each manuscript more carefully than before. senior graduate students have not yet come back from home, which will certainly have a big impact on the progress of their theses and the productivity of the group as a whole.
as an experimental chemistry group, we urgently await, with great anticipation, the resumption of normal experiments and research. social media on the web have played an important role during this special period. i taught an online course to first-year graduate students, despite having no prior experience of online teaching, and with relief i can finally say that it went well. many conferences have been canceled or postponed to next year, but on the other hand, online conferences have become increasingly popular. i have already received several invitations to online ph.d. defenses, which are critical for our young people's progress. web-based conferences can save significant time, and may become more and more popular even after the pandemic. i am confident that the pandemic will be brought under control worldwide, probably starting around the summer. however, the impact of the pandemic on the economy is already showing, and i hope that it will not affect the funding situation in the future. ■ qifan yan during this winter vacation, i went to wuhan to visit my family on the 16th of january, one week before the lockdown of the city caused by the covid-19 outbreak. i stayed at my parents' home with my family for more than two months, wondering about the fates of my entire family. the virus infected several of my relatives, four of whom were hospitalized. luckily, they all recovered, except my 99-year-old grandfather. during the direst days, like the rest of the people of wuhan, i could not sleep at night and looked for the slightest hope in the news, while planning how to limit the exposure of uninfected family members. with strong backup from the chinese people, especially medical personnel, wuhan survived and reopened on the 8th of april. i returned to shanghai by plane on the first day wuhan reopened, and was greeted by media instead of medical teams upon arrival.
during the city lockdown, a journal article authored by a student of mine and me was submitted in spite of difficulties caused by a lagging internet connection. after peer review, a reviewer raised stability issues, which required further experimental data. luckily, my collaborators were able to provide relevant results. i honestly cannot anticipate what the outcome would have been had i written to the editor explaining that we could not provide any further experimental data in the near future due to the coronavirus outbreak, or had i asked for an unlimited extension of the revision deadline. as of now, we still do not have a schedule for reopening the laboratories. i still have to wait another 14 days after my arrival in shanghai before the university can clear me to return to campus. almost all my group members are in their respective hometowns, longing for a notice to return to shanghai, especially the ones who are graduating this year. online meetings are possible but difficult due to slow internet connections. i have individual discussions about the literature with each of them, hoping they are coping well with the current situation. chemistry of materials: we wish the best to all of our authors, readers, and reviewers.
we are in this together, and we look forward to another set of snapshots in a month. key: cord-268524-lr51ubz5 authors: droit-volet, sylvie; gil, sandrine; martinelli, natalia; andant, nicolas; clinchamps, maélys; parreira, lénise; rouffiac, karine; dambrun, michael; huguet, pascal; dubuis, benoît; pereira, bruno; bouillon, jean-baptiste; dutheil, frédéric title: time and covid-19 stress in the lockdown situation: time free, «dying» of boredom and sadness date: 2020-08-10 journal: plos one doi: 10.1371/journal.pone.0236465 sha: doc_id: 268524 cord_uid: lr51ubz5 a lockdown of people has been used as an efficient public health measure to fight the exponential spread of the coronavirus disease (covid-19) and allows the health system to manage the number of patients. the aim of this study (clinicaltrials.gov nct 04308187) was to evaluate the impact of both the perceived stress aroused by covid-19 and the emotions triggered by the lockdown situation on the individual experience of time. a large sample of the french population responded to a survey on their experience of the passage of time during the lockdown compared to before the lockdown. the perceived stress resulting from covid-19 and stress at work and home were also assessed, as were the emotions felt. the results showed that people experienced a slowing down of time during the lockdown. this time experience was not explained by the levels of perceived stress or anxiety, although these were considerable, but rather by the increase in boredom and sadness felt in the lockdown situation. the increased anger and fear of death explained only a small part of the variance in the time judgment. the conscious experience of time therefore reflected the psychological difficulties experienced during lockdown and was not related to the perceived level of stress or anxiety.
in 2020, faced with a virus that is uncontrollable because of its unknown [1] and virulent nature (sars-cov-2), the governments of different countries of the european union, as well as of the whole world, found themselves obliged to impose a lockdown on their citizens. this unprecedented public measure is thought to allow the health system to manage the number of patients in hospital and ensure that they receive proper care in the context of the covid-19 outbreak. in france, confinement was officially imposed in the month of march (on march 17th at 12:00 noon). this lockdown, which requires a large number of people to stay at home, thus depriving them of their liberty, is a situation never previously encountered, and its psychological consequences in the short and medium term are not yet known. researchers into time perception can nevertheless easily imagine that this life in lockdown completely changes individuals' relationship to time, i.e. their experience of time. however, to our knowledge, no studies have as yet investigated this question. very recent large-scale surveys or survey projects on covid-19 conducted all around the world (e.g., china, korea, iran and the united kingdom) suggest that the lockdown situation generates new or heightened emotional states in the form of an increase in psychological distress [2] [3] [4] [5] [6] . nonetheless, in the different distress scales used, the different dimensions of emotion (valence and arousal) were not dissociated, and no survey has examined their relationship to time experience, even though emotion and the experience of time are known to be intrinsically linked.
the aim of the present study was thus to conduct a large-scale survey of a sample of an as yet untested population, french people, in order to assess not only the perceived stress related to covid-19 but also the emotions (happiness, boredom, arousal) felt during as compared to before the lockdown, and their links to the subjective experience of time. the experience of time corresponds to one's feeling about time, i.e., the conscious judgment of the speed of the passage of time [7, 8] . this has received relatively little attention from researchers in the field compared to research into individuals' abilities to perceive short durations (< 1 minute), probably because of the challenge of objectively examining just what makes up the experience of each individual, and therefore the role of higher-level cognitive mechanisms (e.g., consciousness, memory, self-awareness) [9] [10] [11] . indeed, the judgment of the passage of time can be seen as a mirror of the subjective experience of one's internal state [12] [13] [14] . for example, contrary to the generally held belief that time seems to pass faster as we get older, some studies have demonstrated that the feeling of the passage of time in the immediate moment is not directly related to age (young adult vs. older adult), but to people's subjective emotional experience and lived activities [10, 15, 16] . the passage of time is in fact a sensitive index of the emotional experience felt in the present moment and of its variations as a function of life conditions. it is thus important to investigate individuals' judgments about how fast time seems to pass in the exceptional situation of lockdown, and the factors explaining these judgments. from a general standpoint, the literature provides evidence of the role of emotional experience as a critical factor in the experience of time.
nevertheless, the famous expression "time flies when you feel good; time drags when you feel bad" is not straightforward to explain, as negative feelings are diverse and may involve varying mechanisms. more precisely, the emotional experience can be divided into two fundamental dimensions, valence (pleasure vs. displeasure) and activation (calmness vs. excitement/alertness) [17, 18] . these two dimensions interact in the characterization of any given emotion. for example, while the emotions of sadness and fear are both negative, the former is weakly activating (or even deactivating) while the latter is strongly activating. accordingly, the level of felt arousal has been shown to be a prominent factor in temporal mechanisms: the more individuals report being in a state of arousal, the faster time is reported to pass. several studies have shown a lengthening of estimates of short temporal intervals in situations of acute stress, for example when participants are faced with unpleasant stimuli [19] [20] [21] or when they imminently expect a very unpleasant event, e.g., electric shock [22, 23] . however, few studies have examined the effect of chronic stress on time judgments, such as that experienced by people with the covid-19 virus or subjected to lockdown. in the context of chronic stress, i.e. when stress is extended over several days or weeks as in the case of hospital nurses, cocenas-silva et al. [24] showed that duration judgments were no longer altered by physiological stress as measured by physiological markers, but rather by subjective psychological stress as assessed by a self-reported scale. in addition, one can assume that different mechanisms are at work in the case of an emotion, such as fear (an immediate and ephemeral negative state directed towards a specific event), compared to a more diffuse affective state, like anxiety or perceived stress (a prolonged negative state whose origin is not necessarily identified) [25] . 
the covid-19 pandemic, i.e., the risk that you or your loved ones will be affected by the disease, as well as uncertainty about this disease, could produce chronic stress that has consequences for mental and physical health. it is well known that chronic stress affects the immune system, suppressing protective and increasing pathological immune responses [26] . there is thus a risk in this period of pandemic that the chronic stress related to covid-19 and its corollaries (anxiety, fear of death) is particularly high and therefore impacts the subjective experience of time by speeding up the perceived passage of time. consequently, we hypothesized a significant relationship between stress and time experience during the lockdown imposed by the covid-19 pandemic. furthermore, in this covid-19 period, it is critical to consider not only the disease-related perceived stress but also the consequences for life of being locked down at home, as well as the direct and indirect effects on daily psychological and social functioning. as a recent survey highlighted, confining people increases their sense of boredom [2] . boredom corresponds to "the aversive state of wanting, but being unable, to engage in satisfying activity" and involves, in particular, low arousal and negative affect [27, p. 483] . some studies have shown that boredom produces a feeling of a slowing down of time rather than a speeding up [14, 28] . an alternative hypothesis was thus that boredom would prevail over stress in the experience of time. since boredom is associated with negative emotion and a low level of arousal, we expected participants to experience a slowing down of time with the boredom felt during the lockdown. 
it was not possible a priori to identify which hypothesis would be valid, i.e., which factors relate to and influence the experience of time in a lockdown situation: the perceived stress of the stressful covid-19 situation, and/or, by contrast, other affective states characterized by a decrease in arousal, such as boredom. indeed, on the one hand, the fear and distress generated by the morbid nature of the crisis and its repercussions (fear for one's health and for that of one's family and friends), or by inappropriate housing quality (stress at home) or working conditions (job stress), could increase people's sense of alertness, and therefore lead to a speeding up of the passage of time. on the other hand, confinement at home and social distancing could result in an increased sense of sadness (i.e., less happiness) and boredom, and thus in the feeling that the passage of time slows down. here, a large sample of french people were asked to answer a scale survey during the lockdown period. this consisted of a series of questions, i.e., demographic questions but also questions on the perceived stress (covid-19 stress, home stress, job stress, anxiety), the emotions (happiness, arousal, boredom) felt during compared to before the lockdown, and the experience of time. the participants were asked to assess their experience of the passage of time for three periods of the lockdown (in the immediate moment, during the day, during the last week), as well as before the lockdown for comparison purposes. the sample consisted of 4364 french participants, 3436 women and 928 men (mean age = 41.5, sd = 12.81, min = 16, max = 89, n 16-17 years = 11). the participants completed the questionnaire at home (72.5%) or at work (27.5%). the study was reviewed and approved by the human ethics committee sud est vi, france (clinicaltrials.gov nct 04308187). 
all participants were volunteers and were informed of the objective of the survey and that their data would be processed anonymously and used for research purposes. the ethics committee waived the need for written consent, considering that people who respond to the questionnaires by going to the website are thereby giving their consent; furthermore, they can withdraw it at any time. the few minors who completed the questionnaire did so with the consent of their parents, who sent them the survey. the responses to the demographic questions allowed us to characterize the surveyed population. 71.8% of participants were married or equivalent (civil partner, etc.) and 27.2% were single (1% other). their distribution as a function of education level was: 1.5% certificate of general education, 21.9% high school vocational certificate, 0% high school diploma, 40.6% bachelor's degree, 24.5% master's degree and 11% doctoral degree. the percentage of participants per professional category was: jobseekers: 4.4%; students: 6.2%; farmers: 0.3%; craftsmen/shopkeepers/business executives: 5.7%; white-collar workers: 30%; manual workers: 8.9%; intermediate professions: 35.7%; retired: 6.3% (2.5% no response). we implemented an open epidemiological, observational, descriptive study by administering a self-reported questionnaire proposed to volunteers using redcap software available through the covistress.org website. the redcap questionnaire was hosted by the university hospital of clermont-ferrand. the questions analyzed in this manuscript were specific questions included in a larger questionnaire composed of different thematic sections (s1 questions). the thematic sections were presented in random order after the demographic questions. the online questionnaire was distributed several times through mailing lists held by institutions and french social groups. there were no exclusion criteria. 
the data that we analyzed were obtained for the lockdown period from march 31st to april 12th, 2020, whereas the french lockdown was ordered on march 17th at 12:00 noon. completing the survey took between 5 and 20 minutes, depending on sub-items. for the main outcomes, we used a visual analog scale (vas), i.e., a non-calibrated line of 100 mm, ranging from 0 to 100 [29, 30] . the subjective experience of time was thus assessed using this vas, which went from very slowly (0) to very fast (100). the question was "what are your feelings about the speed of the passage of time?". there were four time questions, one for the passage of time before the lockdown, and three for during the lockdown: now, for the day, and for the week. the stress resulting from covid-19 as well as job stress and home stress, health-related and financial concerns, and anxiety were assessed using the same vas. the emotional dimensions tested were also assessed with the vas for the period before the lockdown and during the lockdown (now): fear of death (not at all vs. a lot), arousal (calm vs. excited), happiness (sad vs. happy), anger (peaceful vs. angry), boredom (occupied vs. bored). the quality of sleep and level of fatigue were also examined in the survey using the vas. as explained above, these different questions were presented in thematic sections displayed in a random order (s1 questions). we performed analyses of variance on the subjective experience of time. we also examined correlations and ran a linear regression model on all the measures of interest using the standardized data. we used the variance inflation factor (vif) to examine multicollinearity in the regression analysis [31] . finally, to examine the results of the linear regression model in more detail, we also performed a mediation analysis. the analyses were performed with spss, and the bonferroni correction was systematically applied when necessary. 
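the vif screening step can be sketched in a few lines of numpy. the data below are simulated stand-ins (the survey data themselves are not reproduced here); the point is only the mechanics: each predictor is regressed on the others and vif_j = 1/(1 - r²_j), with values below 3 taken as unproblematic, as in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# simulated z-scored predictors (hypothetical stand-ins for the survey measures)
boredom = rng.normal(size=n)
happiness = -0.5 * boredom + rng.normal(scale=0.9, size=n)  # correlated with boredom
covid_stress = rng.normal(size=n)

X = np.column_stack([boredom, happiness, covid_stress])

def vif(X):
    """variance inflation factor of each column, regressing it on the others."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

print(vif(X))  # all well below 3 -> no problematic multicollinearity
```

the same check could of course be done with a statistics package; the manual version just makes the definition explicit.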
a preliminary analysis of variance performed on the subjective experience of time showed a marked difference between the experience of time before and during the lockdown (fig 1), and showed that time was judged to pass faster when a longer period of time was considered, i.e., a week compared to a day or the present moment (bonferroni comparisons, p < 0.01). to simplify the results, the subsequent statistical analyses are based on the difference between the time rating for the period before the lockdown and that for the present moment (during the lockdown). indeed, the meaning of the temporal judgment during the lockdown is relative to that before the lockdown. in addition, the results were similar when the analyses were performed only on the ratings for the present moment. a positive value of our temporal difference index therefore indicates that the individual experienced a slowing down of time during the lockdown, a negative value a speeding up of time, and a null value no difference. the anova performed on this temporal difference index, with level of education, professional category and whether the individuals were at work or at home as factors, did not show any significant effect (all f < 1); there was indeed no significant difference in time experience before the lockdown as a function of these factors, and only a small effect of professional category was observed on the present-time judgment during the lockdown. the anova on the temporal index with sex and marital status (single vs. not single) as factors showed a significant main effect of sex, f(1, 4084) = 14.77, p < 0.001, η2p = .004, and status, f(1, 4084) = 11.74, p < 0.001, η2p = .003, with no sex x status interaction (p > 0.10). this suggests that the single people in our sample tended to experience a greater difference in the flow of time during the lockdown when compared to before (29.19 vs. 24.19). 
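the temporal difference index just defined (rating before the lockdown minus rating for the present moment) can be made concrete with a toy sketch; the ratings below are invented for illustration only.

```python
import numpy as np

# hypothetical vas ratings (0 = very slowly, 100 = very fast)
before = np.array([80, 60, 75, 50])  # before the lockdown
now = np.array([45, 60, 30, 70])     # in the present moment, during the lockdown

# positive index -> time felt slower during the lockdown,
# negative -> faster, zero -> no change
index = before - now
labels = np.where(index > 0, "slowing",
                  np.where(index < 0, "speeding", "no change"))
print(list(zip(index.tolist(), labels.tolist())))
```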
indeed, in the lockdown situation, time in the present was judged to pass slower by the single people (m = 47.12, sd = 30.12) than by the others (m = 53.93, sd = 29.15). the women also tended to feel a greater slowing down of time than the men (29.41 vs. 23.89) during as compared to before the lockdown, but time passed faster for the women than for the men before the lockdown (80.51 vs. 74.37), f(1, 4084) = 71.11, p < 0.001, η2p = .02. nevertheless, their responses to the stress questions indicated that they tended to be more stressed than the men, even though the sex difference only explained a very small proportion of variance. table 1 shows the correlation matrix (s1 table) between the subjective experience of time (difference in the judgment of the passage of time between before the lockdown and the present moment, i.e., during the lockdown) and the different tested factors. an examination of table 1 reveals that several dimensions were associated with the slowing down of time during as compared to before the lockdown. with regard to stress, the participants reported that time passed slower, rather than faster, with an increase in the level of perceived stress, i.e., the perceived stress related to covid-19 (r = .18) as well as the stress at home (r = .23) and at work (r = .08). a slowing down of time was therefore observed as the stress level increased. this deceleration of subjective time was observed even though the stress values reported on the vas were high, and higher for covid-19-related stress than for home and job stress (covid-19 stress, m = 61.50, sd = 28.87; job stress, m = 57.94, sd = 32.65; home stress, m = 46.97, sd = 32.65), f(2, 7466) = 342.78, p < 0.001, η2p = .08 (all bonferroni tests, p < 0.001). the rating for each type of stress was indeed significantly different from zero (t(4196) = 138.18, t(4184) = 93.06, t(3892) = 10.13, respectively, all p < 0.001). 
finally, the stress resulting from covid-19 was more closely associated with anxiety (r = .75, p < 0.001) and the fear of death (r = -.42, p < 0.001) than with the experience of time per se. contrary to our first hypothesis, the level of correlation between the experience of time and covid-19-related stress was therefore very low, and this was also the case for stress in the other contexts (home, work). as table 1 suggests, the experience of time was more strongly correlated with boredom (r = -.48, p < 0.001) and decreased happiness (r = .39, p < .0001) than with the level of perceived stress. the participants therefore experienced a slowing down of time as boredom increased and happiness decreased during the lockdown. as the time judgment was significantly correlated with several dimensions, to identify the best predictor of the subjective experience of time we performed a regression analysis on the time judgments with the different significant dimensions entered into the same model (table 2). the examination of multicollinearity in the regression analysis using the vif indicated no problematic presence of multicollinearity (all vif < 3) [31] . the results of this regression analysis indicated that the perceived stress resulting from covid-19 and its spread was not a significant predictor of the experience of time; rather, the more bored the participants were in the lockdown situation, the more they experienced a slowing down of time. indeed, time was experienced as passing increasingly slowly in the present moment compared to before the lockdown as the level of boredom rose (fig 2). it also seemed to slow down as happiness decreased, i.e., as sadness increased (fig 3). increasing boredom and decreasing happiness were therefore the two main predictors of the experience of the passage of time during the lockdown. 
since these two dimensions are related, we conducted statistical analyses to estimate whether boredom mediated the effect of emotion on the experience of time and, conversely, whether emotion mediated the effect of boredom on the experience of time. the mediation analyses indicated that boredom contributes to explaining the effect of emotion on the experience of the passage of time, with a significant indirect effect (β = 0.159, se = .01, 95% ci (.138; .1812), z = 14.7, p < 0.001, 34.4% of mediation) (fig 4). the direct effect of emotion (sadness) on the time experience nevertheless remained significant. the results of our survey showed that the stress felt by a broad cross-section of the french population during the lockdown was high, in particular with regard to the covid-19 pandemic, as indicated by the rating of 61.50 (±28.87) on a 100-mm vas. the level of perceived stress linked to covid-19 was even higher than the stress at work and at home. covid-19 stress was, in fact, related to the participants' anxiety and their fear of death: the more anxious and frightened about death they were, the more stressed they were in the face of this disease. these results are entirely consistent with the initial results of surveys on covid-19 conducted, in particular, in china [5, 6] and iran [4] , which have shown an increase in psychological distress as a result of the covid-19 pandemic. however, as reported by qiu et al. [5] , it is noteworthy that people's distress does not reach a pathological level (m = 23.65), with only 5% of the population suffering from severe distress and 29% from mild or moderate distress. in addition, the proportion of individuals presenting psychological distress disorders before the covid-19 pandemic is unknown. 
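the mediation logic above (boredom partially carrying the effect of sadness on the experience of time) can be sketched with the standard product-of-coefficients approach. this is a simplified stand-in for the spss analysis reported in the text, run on simulated data with invented effect sizes, so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# simulated data: sadness -> boredom -> perceived slowing of time
sadness = rng.normal(size=n)
boredom = 0.5 * sadness + rng.normal(size=n)                  # a path
slowing = 0.3 * boredom + 0.2 * sadness + rng.normal(size=n)  # b and c' paths

def slope(x, y, control=None):
    """ols slope of y on x, optionally controlling for one covariate."""
    cols = [np.ones_like(x), x] + ([] if control is None else [control])
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return coef[1]

a = slope(sadness, boredom)           # effect of sadness on the mediator
b = slope(boredom, slowing, sadness)  # effect of mediator, controlling for sadness
c = slope(sadness, slowing)           # total effect of sadness
indirect = a * b                      # mediated (indirect) effect
print(indirect, indirect / c)         # indirect effect and share of mediation
```

a bootstrap over `indirect` would give the confidence interval reported in the text; the share `indirect / c` corresponds to the "% of mediation" figure.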
however, chinese people report less psychological distress and greater life satisfaction when working in the office than at home, whereas the opposite seems to be the case in the french population, as suggested by the significantly lower level of stress at home than at work. this suggests that there are some differences in culture or living conditions between people in different countries with regard to stress management in similar social isolation situations. the originality of our results is to show that, although the level of stress was quite high, it had little impact on the current subjective experience of time. indeed, the participants did not feel a speeding up of time related to the increase in their stress level. this is contrary to the results of studies on timing, which have described a lengthening of duration estimates and the experience of a faster passage of time when the levels of stress and anxiety are high [21, 32, 33] . however, these findings were obtained in intense, short-lived emotional situations, when the subjects were facing or expecting an imminent threatening event, or in individuals with high-anxiety traits. in the lockdown-at-home situation, the current level of stress was therefore not high enough to affect the sense of time; indeed, the level of arousal remained low, although it increased slightly between the period before and during the lockdown. one might nevertheless object that it would have been more convincing to record physiological markers of stress. however, this was not possible in the lockdown situation, which was rapidly decided on by the public authorities [34, 35] . in addition, cocenas-silva et al. [24] recently showed that perceived stress was a better predictor of changes in time estimates than physiological stress per se in the case of prolonged stressful situations, for example for hospital nurses at work. 
in addition, the likelihood of encountering a series of intensely stressful events may be reduced in the present isolation situation. family life involving the care of children can obviously be a source of stress. our study did indeed indicate that women were more stressed at home than men, and even more so when they were single than when part of a family, and that the number of children only slightly increased the stress level at home (r = .08, p < 0.001). rather than covid-19-related stress or home and job stress, our study showed that it was the emotional experience of everyday life during the lockdown that influenced the sense of time. indeed, the participants clearly reported experiencing a slowing down of the passage of time during as compared to before the lockdown, and the most reliable predictors of this slowing down were the feelings of boredom and sadness. our results are consistent with those of recent studies on time judgments that have pointed out the critical role of emotion [for a review, 35] and of boredom [14, 28, 36] in human beings' sense of time. these studies have indeed found a slowing down of time as both sadness and boredom increase. in line with theoretical models of boredom [27] , the present study found that the degree of boredom experienced was related not only to arousal but mostly to negative emotional experience: the more bored people were in lockdown, the sadder they were. boredom is known to be linked to depression [37, 38] , and depressed people feel a slowing down of time [39] . consequently, the experience of boredom in the lockdown and the judgment of a slower passage of time may have increased sadness and could lead to pathological depression. however, in the lockdown situation, the level of boredom explained a proportion, but not all, of the effect of sadness on the experience of the passage of time. 
other factors that we need to examine in a future study, such as social withdrawal, could also help to explain sadness and the time experience in the lockdown. the changes in the sense of time in lockdown were therefore due to the significant increase in both boredom and sadness. the literature on boredom suggests that it is involved in a multitude of behaviors and psychological dimensions and that it has a negative side, as in the sadness observed in our study, as well as a positive side. indeed, trait boredom is associated with psychological difficulties (e.g., drug abuse, depression, anxiety, binge eating) [40, 41] . however, some recent functional approaches have also suggested that boredom constitutes a key signal to change behavior by orientating humans to try to find a more satisfying situation [42] . in the context of lockdown, one may therefore wonder what influence this feeling of boredom has on the development of pro-social behaviors or on compliance with the containment measures in the short or longer term (does it only result in bad things, or also in good things?). in the lockdown situation, people may have more time; however, they "die" of boredom and sadness, and time slows down and drags on. the sense of the passage of time is, ultimately, a phenomenological time that is closely related to the self and the sense of existence [13] . as stated by jean-paul sartre, human beings are defined by their acts and their effects on others. however, when they have more time but are isolated and cannot act (they have nothing to do), they are overwhelmed by sadness and boredom. it would seem important for future surveys to examine whether this feeling is valid in all cultures and for all people. it also seems important to identify whether other factors specific to individual characteristics or living conditions, or to representations/beliefs toward covid-19 or government policies, contribute to changes in the sense of time in the lockdown situation. 
some authors nevertheless defend the benefits of boredom. however, this raises the question of individual abilities to cope with the feeling of boredom in industrial societies. individual differences in coping with boredom can potentially predict psychological difficulties, health problems and increased vulnerability to psychopathologies such as depression [43] . it is thus a serious problem and one which has to be taken into account. in conclusion, the changes in the sense of time in the lockdown situation, imposed as an efficient solution to the covid-19 pandemic, reflect the major psychological difficulties that people are experiencing during the lockdown.

supporting information: s1 questions (docx); s1 table: members of the research group, including nicolas andant, maélys clinchamps, china; peter dieckmann, copenhagen academy for medical education and simulation (cames), denmark. the covistress network is headed by pr. frédéric dutheil (frederic.dutheil@uca.fr), chu.

references:
how will country-based mitigation measures influence the course of the covid-19 epidemic?
the psychological impact of quarantine and how to reduce it: rapid review of the evidence
multidisciplinary research priorities for the covid-19 pandemic: a call for action for mental health science
the distress of iranian adults during the covid-19 pandemic: more distressed than the chinese and with different predictors. medrxiv
a nationwide survey of psychological distress among chinese people in the covid-19 epidemic: implications and policy recommendations
unprecedented disruption of lives and work: health, distress and life satisfaction of working adults in china one month into the covid-19 outbreak
passage of time judgements
intertwined facets of subjective time
passage of time judgments in everyday life are not related to duration judgments except for long durations of several minutes
passage of time judgments are not duration judgments: evidence from a study using experience sampling methodology
what day is today? a social-psychological investigation into the process of time orientation
mindfulness meditation, time judgment and time experience: importance of the time scale considered (seconds or minutes)
awareness of the passage of time and self-consciousness: what do meditators report? psych journal
individual differences in self-rated impulsivity modulate the estimation of time in a real waiting situation
time does not fly but slow down in old age
experience sampling methodology reveals similarities in the experience of passage of time in young and elderly adults
a circumplex model of affect
core affect, prototypical emotional episodes, and other things called emotion: dissecting the elephant
the effect of expectancy of a threatening event on time perception in human adults
time estimation of fear cues in human observers
negative emotionality influences the effects of emotion on time perception
fear and time: fear speeds up the internal clock
emotional modulation of interval timing and time perception
chronic stress impairs temporal memory. timing time percept
anxiety makes time pass quicker while fear has no effect
effects of stress on immune function: the good, the bad, and the beautiful
the unengaged mind: defining boredom in terms of attention
what happens while waiting? how self-regulation affects boredom and subjective time during a real waiting situation
clinical stress assessment using a visual analogue scale
validity of occupational stress assessment using a visual analogue scale
extracting the variance inflation factor and other multicollinearity diagnostics from typical regression results
when time slows down: the influence of threat on time perception in anxiety
the effects of valence and arousal on time perception in individuals with social anxiety
jobstress study: comparison of heart rate variability in emergency physicians working a 24-hour shift or a 14-hour night shift: a randomized trial
urinary interleukin-8 is a biomarker of stress in emergency physicians, especially with advancing age: the jobstress* randomized trial
the temporal dynamic of emotional effect on judgments of durations
proneness to boredom mediates relationships between problematic smartphone use with depression and anxiety severity
relationships between boredom proneness, mindfulness, anxiety, depression, and substance use
time perception in depression: a meta-analysis
time flies when you're having fun: temporal estimation and the experience of boredom
psychometric measures of boredom: a review of the literature
high boredom proneness and low trait self-control impair adherence to social distancing guidelines during the covid-19 pandemic
intrinsic enjoyment and boredom coping scale: validation with personality, evoked potential and attention measures

key: cord-104133-d01joq23
authors: arthur, ronan f.; jones, james h.; bonds, matthew h.; ram, yoav; feldman, marcus w.
title: adaptive social contact rates induce complex dynamics during epidemics
date: 2020-07-14
journal: biorxiv
doi: 10.1101/2020.04.14.028407
doc_id: 104133
cord_uid: d01joq23

the covid-19 pandemic has posed a significant dilemma for governments across the globe. the public health consequences of inaction are catastrophic; but the economic consequences of drastic action are likewise catastrophic. governments must therefore strike a balance in the face of these trade-offs. but with critical uncertainty about how to find such a balance, they are forced to experiment with their interventions and await the results of their experimentation. models have proved inaccurate because behavioral response patterns are either not factored in or are hard to predict. 
one crucial behavioral response in a pandemic is adaptive social contact: potentially infectious contact between people is deliberately reduced, either individually or by fiat, and this must be balanced against the economic cost of having fewer people in contact and therefore active in the labor force. we develop a model for adaptive optimal control of the effective social contact rate within a susceptible-infectious-susceptible (sis) epidemic model, using a dynamic utility function with delayed information. this utility function trades off the population-wide contact rate with the expected cost and risk of increasing infections. our analytical and computational analysis of this simple discrete-time deterministic model reveals the existence of a non-zero equilibrium, oscillatory dynamics around this equilibrium under some parametric conditions, and complex dynamic regimes that shift under small parameter perturbations. these results support the supposition that infectious disease dynamics under adaptive behavior-change may have an indifference point, may produce oscillatory dynamics without other forcing, and constitute complex adaptive systems with associated dynamics. implications for covid-19 include an expectation of fluctuations, for a considerable time, around a quasi-equilibrium that balances public health and economic priorities, that shows multiple peaks and surges in some scenarios, and that implies a high degree of uncertainty in mathematical projections.

author summary

epidemic response in the form of social contact reduction, such as has been utilized during the ongoing covid-19 pandemic, presents inherent tradeoffs between the economic costs of reducing social contacts and the public health costs of neglecting to do so. such tradeoffs introduce an interactive, iterative mechanism which adds complexity to an infectious disease system. 
consequently, infectious disease modeling typically has not included dynamic behavior change that must address such a tradeoff. here, we develop a theoretical model that introduces lost or gained economic and public health utility through the adjustment of social contact rates with delayed information. we find this model produces an equilibrium, a point of indifference where the tradeoff is neutral, and at which a disease will be endemic for a long period of time. under small perturbations, this model exhibits complex dynamic regimes, including oscillatory behavior, runaway exponential growth, and eradication. these dynamics suggest that for epidemic response that relies on social contact reduction, secondary waves and surges with accompanying business re-closures and shutdowns may be expected, and that accurate projection under such circumstances is unlikely.

the covid-19 pandemic had infected almost 9 million people and caused over 450,000 deaths worldwide as of june 23, 2020 [1]. in the absence of effective therapies and vaccines [2], many governments responded with lock-down policies and social distancing laws to reduce the rate of social contacts and curb transmission of the virus. prevalence of covid-19 in the wake of these policies in the united states indicates they may have been successful at decreasing the reproduction number (r_t) of the epidemic [1]. however, they have also led to economic recession, with an unemployment rate at an 80-year peak, the stock market in decline, and the federal government forced to borrow heavily to financially support businesses and households. solutions to these economic crises may conflict with public health recommendations. thus, governments worldwide must decide how to balance the economic and public health consequences of their epidemic response interventions. 
behavior-change in response to an epidemic, whether autonomously adopted by individuals or externally directed by governments, affects the dynamics of infectious diseases [3, 4]. prominent examples of behavior-change in response to infectious disease prevalence include measles-mumps-rubella (mmr) vaccination choices [5], social distancing in influenza outbreaks [6], condom purchases in hiv-affected communities [7], and social distancing during the ongoing covid-19 pandemic [2]. behavior is endogenous to an infectious disease system because it is, in part, a consequence of the prevalence of the disease, which in turn responds to changes in behavior [8, 9]. individuals and governments have greater incentive to change behavior as prevalence increases; conversely, they have reduced incentive as prevalence decreases [10, 11]. endogenous behavioral response may then theoretically produce a non-zero endemic equilibrium of infection. this happens because, at low levels of prevalence, the cost of avoidance of a disease may be higher than the private benefit to the individual, even though the collective, public benefit in the long term may be greater. however, in epidemic response we typically think of behavior-change as an exogenously induced intervention without considering associated costs. while guiding positive change is an important intervention, neglecting to recognize the endogeneity of behavior can lead to a misunderstanding of incentives and a resurgence of the epidemic when behavior change is reversed prematurely. although there is growing interest in the role of adaptive human behavior in infectious disease dynamics, there is still a lack of general understanding of the most important properties of such systems [3, 8, 12]. 
Behavior is difficult to measure, quantify, or predict [8], in part due to the complexity and diversity of the human beings who make these choices. One early approach simply allowed the transmission parameter (β) to be a negative function of the number infected, effectively introducing an intrinsic negative feedback to the infected class that regulated the disease [13]. Modelers have used a variety of tools, including agent-based modeling [14], network structures for the replacement of central nodes when sick [15] or for behavior-change as a social contagion process [16], game-theoretic descriptions of rational choice under changing incentives, as with vaccination [6, 11, 17], and a branching process for heterogeneous agents and the effect of behavior during the West Africa Ebola epidemic in 2014 [18]. A common approach to incorporating behavior into epidemic models is to track co-evolving dynamics of behavior and infection [16, 19, 20], where behavior represents an i-state of the model [21]. In a compartmental model, this could mean separate compartments (and transitions therefrom) for susceptible individuals in a state of fear and those not in a state of fear [16]. Periodicity (i.e., multi-peak dynamics) has long been documented empirically in epidemiology [22, 23]. Periodicity can be driven by seasonal contact-rate changes (e.g., when children are in school) [24], seasonality in the climate or ecology [25], sexual behavior change [26], and host immunity cycling through new births of susceptibles or a decay of immunity over time. Some papers in nonlinear dynamics have studied delay differential equations in the context of epidemic dynamics and found periodic solutions [27]. Although it is atypical to include delay in modeling, delay is an important feature of epidemics.
For example, if behavior responds to mortality rates, there will inevitably be a lag with an average duration of the incubation period plus the life expectancy upon becoming infected. In a tightly interdependent system, reacting to outdated information can result in an irrational response and periodic cycling. The original epidemic model of Kermack and McKendrick [28] was first expressed in discrete time. Then, by allowing "the subdivisions of time to increase in number so that each interval becomes very small", the famous differential equations of the SIR epidemic model were derived. Here we begin with a discrete-time susceptible-infected-susceptible model that is adjusted on the principle of endogenous behavior-change through an adaptive social-contact rate that can be thought of as either individually motivated or institutionally imposed. We introduce a dynamic utility function that motivates the population's effective contact rate at a particular time period. This utility function is based on information about the epidemic size that may not be current. This leads to a time delay in the contact function that increases the complexity of the population dynamics of the infection. Results from the discrete-time model show that the system approaches an equilibrium in many cases, although small parameter perturbations can lead the dynamics to enter qualitatively distinct regimes. The analogous continuous-time model retains periodicities for some sets of parameters, but numerical investigation shows that the continuous-time version is much better behaved than the discrete-time model. This dynamical behavior is similar to models of ecological population dynamics, and a useful mathematical parallel is drawn between these systems.
To represent endogenous behavior-change, we start with the classical discrete-time susceptible-infected-susceptible (SIS) model [28], which, when incidence is relatively small compared to the total population [29, 30], can be written in terms of the recursions

s_{t+1} = s_t − b s_t i_t / n + γ i_t,
i_{t+1} = i_t + b s_t i_t / n − γ i_t,

where at time t, s_t represents the number of susceptible individuals, i_t the infected individuals, and n_t the number of individuals that make up the population, which is assumed fixed in a closed population. We can therefore write n for the constant population size. Here γ, with 0 < γ < 1, is the rate of removal from i to s due to recovery. This model in its simplest form assumes random mixing, where the parameter b represents a composite of the average contact rate and the disease-specific transmissibility given a contact event. In order to introduce human behavior, we substitute for b a time-dependent b_t, which is a function of both b_0, the probability that disease transmission takes place on contact, and a dynamic social rate of contact c_t whose optimal value, c*_t, is determined at each time t as in economic epidemiological models [31], namely b_t = b_0 c*_t, where c*_t represents the optimal contact rate, defined as the number of contacts per unit time that maximize utility for the individual. Here, c*_t is a function of the number of infecteds through a utility function, which is assumed to take the form

u(c) = α_0 − α_1 (c − ĉ)^2 − α_2 [1 − (1 − (i_{t−Δ}/n) b_0)^c],

where u represents utility for an individual at time t given a particular number of contacts per unit time c, and α_0 is a constant that represents the maximum potential utility, achieved at a target contact rate ĉ. The second term, α_1 (c − ĉ)^2, is a concave function that represents the penalty for deviating from ĉ. The third term is the expected cost of infection given c contacts; its delayed argument i_{t−Δ} reflects the delay in information acquisition and the speed of response to that information. We note that (1 − (i/n) b_0)^c can be approximated by 1 − c (i/n) b_0 when (i/n) b_0 is small and c (i/n) b_0 ≪ 1.
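The baseline SIS recursions above (before behavior is introduced) can be iterated directly. A minimal sketch with a constant composite parameter b; all numerical values are illustrative assumptions, not estimates from the paper:

```python
# Baseline discrete-time SIS recursion (constant composite parameter b):
#   s_{t+1} = s_t - b*s_t*i_t/n + gamma*i_t
#   i_{t+1} = i_t + b*s_t*i_t/n - gamma*i_t
def iterate_sis(n=10_000, i0=1, b=0.3, gamma=0.1, steps=500):
    s, i = n - i0, float(i0)
    path = [i]
    for _ in range(steps):
        new_infections = b * s * i / n   # random-mixing transmission
        recoveries = gamma * i           # removal from i back to s
        s, i = s - new_infections + recoveries, i + new_infections - recoveries
        path.append(i)
    return path

path = iterate_sis()
# For b > gamma the trajectory settles at the endemic level i* = n*(1 - gamma/b).
```

With b = 0.3 and γ = 0.1 this converges to i* = n(1 − γ/b) ≈ 6667; substituting the behavior-dependent b_t for b is what the rest of the section develops.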
We thus assume (i/n) b_0 is small, and approximate u(c) in eq. 5 using eq. 6. Eq. 5 assumes a strictly negative relationship between the number of infecteds and contact. We assume an individual or government will balance the cost of infection, the probability of infection, and the cost of deviating from the target contact rate ĉ to select an optimal contact rate c*_t, namely the number of contacts that takes into account the risk of infection and the penalty for deviating from the target contact rate. This captures the idea that individuals trade off how many people they want to interact with against their risk of getting sick, or that authorities who want to reopen the economy during a pandemic must trade off morbidity and mortality from increasing infections against the need to allow additional social contacts to help the economy restart. This optimal contact rate can be calculated by finding the maximum of u with respect to c from eq. 5 with substitution from eq. 6. Differentiating, we have

du/dc = −2 α_1 (c − ĉ) − α_2 b_0 i_{t−Δ} / n,

which vanishes at the optimal contact rate, c*, which we write as c*_t to show its dependence on time. Then

c*_t = ĉ − (α_2 / 2α_1) b_0 i_{t−Δ} / n,

which we assume to be positive. Therefore, total utility will decrease as i_t increases and c*_t also decreases. Utility is maximized at each time step, rather than over the course of lifetime expectations. In addition, eq. 9 assumes a strictly negative relationship between the number of infecteds at time t − Δ and c*. While behavior at high degrees of prevalence has been shown to be non-linear and fatalistic [32, 33], in this model prevalence (i.e., b_0 i_t / n) is assumed to be small, consistent with eq. 6. We introduce the new parameter α = α_2 / (2 α_1). We can now rewrite the recursion from eq. 2, using eq. 4 and replacing c_t with c*_t as defined by eq.
10, as

i_{t+1} = f(i_{t−Δ}, i_t) = i_t + b_0 (ĉ − α b_0 i_{t−Δ} / n) i_t (n − i_t) / n − γ i_t.    (11)

When Δ = 0 and there is no time delay, f(·) is a cubic polynomial in i_t (eq. 12). For the susceptible-infected-removed (SIR) version of the model, we include the removed category and write the (discrete-time) recursion system as eqs. 13-15, with b_t determined by the baseline contact rate ĉ and c*_t specified by eq. 10. With b_t = b, say, and not changing over time, eqs. 13-15 form the discrete-time version of the classical Kermack-McKendrick SIR model [28]. The inclusion of the removed category entails that ĩ = 0 is the only equilibrium of the system eqs. 13-15; unlike the SIS model, there is no equilibrium with infecteds present. In general, since c*_t includes the delay Δ, the dynamic approach to ĩ = 0 is expected to be quite complex. Intuitively, since the infecteds are ultimately removed, we do expect that from any initial frequency i_0 of infecteds all n individuals will eventually be in the r category. Numerical analysis of this SIR model shows strong similarity between the SIS and SIR models for several hundred time steps before the SIR model converges to ĩ = 0 with r = n. In the section "Numerical iteration and continuous-time analog" we compare the numerical iteration of the SIS (eq. 11) and SIR (eqs. 13-15) models and integration of the continuous-time (differential equation) versions of the SIS and SIR models. To determine the dynamic trajectories of (11) without time delay, we first solve for the fixed point(s) of the recursion (11), i.e., the value or values of i such that f(i) = i (eq. 16). From eq. 16, it is clear that i = 0 is an equilibrium, as no new infections can occur in the next time step if none exist in the current one. This is the disease-free equilibrium, denoted by ĩ. The other equilibria are the solutions of a quadratic equation (eq. 17); we label the solution with the + sign i* and the one with the − sign î.
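Under the small-prevalence approximation, the optimal contact rate has the closed form c*_t = ĉ − α b_0 i_{t−Δ}/n, and the nonzero fixed points of the Δ = 0 map solve a quadratic in i. A minimal numerical sketch, with illustrative parameter values (not the paper's Table 1 settings):

```python
import numpy as np

def c_star(i_lag, c_hat=10.0, alpha=500.0, b0=0.01, n=10_000):
    """Optimal contact rate c* = c_hat - alpha*b0*i_lag/n (closed form of eq. 10)."""
    return c_hat - alpha * b0 * i_lag / n

def endemic_fixed_points(c_hat=10.0, alpha=500.0, b0=0.01, gamma=0.05, n=10_000):
    """Nonzero fixed points of the no-delay map satisfy
       b0*(c_hat - alpha*b0*i/n)*(n - i)/n = gamma,
       a quadratic in i; only roots inside (0, n) are legitimate equilibria."""
    a2 = alpha * b0**2 / n
    a1 = -b0 * (c_hat + alpha * b0)
    a0 = (b0 * c_hat - gamma) * n
    return np.real(np.roots([a2, a1, a0]))

roots = endemic_fixed_points()
i_hat = roots.min()   # the "-" branch; the "+" branch may exceed n and be illegitimate
```

With these illustrative values the "−" root î lies inside (0, n) while the "+" root i* exceeds n, matching the paper's observation that i* can be an illegitimate equilibrium.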
It is important to note that under these conditions î is an equilibrium of the recursion only when it is legitimate, i.e., when inequality (20) holds. If (20) and n ĉ b_0 > γ hold, then î is locally stable. However, even if both of these inequalities hold, the number of infecteds may not converge to î. It is well known that iterations of discrete-time recursive relations, of which (12) is an example (i.e., with Δ = 0), may produce cycles or chaos depending on the parameters and the starting frequency i_0 of infecteds. Table 1 shows an array of possible asymptotic dynamics with Δ = 0, found by numerical iteration of (12) for specific sets of parameters and an initial frequency i_0. Some rows of Table 1 are examples for which, beginning with a single infected, the number of infecteds explodes, becoming unbounded; of course, this is an illegitimate trajectory, since i_t cannot exceed n. However, in the case marked *, î is locally stable, and with a large enough initial number of infecteds there is damped oscillatory convergence to î. In the case marked **, with i_0 = 1 the number of infecteds becomes unbounded, but in this case î is locally unstable, and starting with i_0 close to î a stable two-point cycle is approached; in this case df(i)/di|_{i=î} < −1 (Table 1). Stability analysis of the SIS model is more complicated when Δ ≠ 0, and in the appendix we outline the procedure for local analysis of the recursion (11) near î. Local stability is sensitive to the delay time Δ, as can be seen from the numerical iteration of (11) for the specific set of parameters shown in Table 2. Some analytical details related to Table 2 are in the appendix. Table 1 reports an array of dynamic trajectories for some choices of parameters and, in two cases, an initial number of infecteds other than i_0 = 1.
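The appendix's local-stability procedure amounts to checking whether all roots of a characteristic polynomial lie inside the unit circle. For a linearization of the form ε_{t+1} = a ε_t + b ε_{t−Δ}, the roots of λ^{Δ+1} − a λ^Δ − b = 0 can be computed numerically; a and b below are generic stand-ins, not the paper's specific expressions:

```python
import numpy as np

def max_root_modulus(a, b, delay):
    """Largest modulus among the delay+1 roots of the characteristic equation
       lambda**(delay+1) - a*lambda**delay - b = 0
       for the linearization eps_{t+1} = a*eps_t + b*eps_{t-delay}.
       The equilibrium is locally stable when this value is below 1."""
    if delay == 0:
        return abs(a + b)
    coeffs = [1.0, -a] + [0.0] * (delay - 1) + [-b]
    return max(abs(r) for r in np.roots(coeffs))
```

For example, max_root_modulus(1.0, -0.5, 1) gives roots 0.5 ± 0.5i with modulus √0.5 ≈ 0.707 (stable), whereas the paper's Δ = 2 case, with complex roots of modulus 1.0225, would be flagged as unstable.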
The first three rows show three sets of parameters for which the equilibrium values of î are very similar but the trajectories of i_t are different: a two-point cycle, a four-point cycle, and apparently chaotic cycling above and below î. In all of these cases, df(i)/di|_{i=î} < −1. Clearly the dynamics are sensitive to the target contact rate ĉ in these cases. The fourth and eighth rows show that i_t becomes unbounded (tends to +∞) from i_0 = 1, but a two-point cycle is approached if i_0 is close enough to î; df(i)/di|_{i=î} < −1 in this case. For the parameters in the ninth row, if i_0 is close enough to î there is damped oscillation into î; here −1 < df(i)/di|_{i=î} < 0. The fifth and sixth rows of Table 1 exemplify another interesting dynamic starting from i_0 = 1: i_t becomes larger than î (overshoots) and then converges monotonically down to î; in each case 0 < df(i)/di|_{i=î} < 1. For the parameters in the seventh row, there is oscillatory convergence to î from i_0 = 1 (−1 < df(i)/di|_{i=î} < 0), while in the last row there is straightforward monotone convergence to î. A continuous-time analog of the discrete-time recursion (11), in the form of a differential equation, substitutes di/dt for i_{t+1} − i_t in (11). We then solve the resulting delay differential equation numerically using the VODE differential equation integrator in SciPy [36, 37] (source code available at https://github.com/yoavram/sanjose). Using the parameters in Table 2, the resulting dynamics are shown in Figure 1 with i_0 = 1. In Figure 1, with no delay (Δ = 0) and a one-unit delay (Δ = 1), the discrete and continuous dynamics are very similar, both converging to î. However, with Δ = 2 the differential equation oscillates into î while the discrete-time recursion enters a regime of inexact cycling around î, which appears to be a state of chaos. For Δ = 3 and Δ = 4, the discrete recursion "collapses".
In other words, i_t becomes negative and appears to go off to −∞; in Figure 1, this is cut off at i = 0. The continuous version, however, in these cases enters a stable cycle around î. In Fig. S5S there appears to be convergence to î, but in Fig. S5L, after about 500 time units, in both discrete- and continuous-time SIR versions, the number of infected begins to decline towards zero. It is worth noting that if the total population size n decreases over time, for example if we take n(t) = n exp(−zt), with z = 50 b_0 ĉ γ, then the short-term dynamics of the SIS model in (11) begins to closely resemble the SIR version. This is illustrated in supplementary Fig. S5N, where b_0, ĉ, γ are, as in Figs. S5S and S5L, the same as in Fig. 2, panel (a). With n decreasing to zero, both s and i will approach zero in the long run. Our model makes a number of simplifying assumptions. We assume, for example, that all individuals in the population will respond in the same fashion to government policy. We assume that governments choose a uniform contact rate according to an optimized utility function, which is homogeneous across all individuals in the population. Finally, we assume that the utility function is symmetric around the optimal number of contacts, so that increasing or decreasing contacts above or below the target contact rate, respectively, yields the same reduction in utility. These assumptions allowed us to create the simplest possible model that includes adaptive behavior and time delay. In Holling's heuristic distinction in ecology between tactical models, built to be parameterized and predictive, and strategic models, which aim to be as simple as possible to highlight phenomenological generalities, this is a strategic model [38].
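The continuous-time analog replaces i_{t+1} − i_t with di/dt, giving a delay differential equation. The paper integrates it with a VODE-based solver; as a simpler stand-in, a fixed-step Euler scheme with a constant pre-history can be sketched (all parameter values are illustrative assumptions):

```python
import numpy as np

def euler_dde(c_hat=10.0, alpha=500.0, b0=0.01, gamma=0.05, n=10_000,
              delay=2.0, i0=1.0, t_max=600.0, dt=0.05):
    """Fixed-step Euler for the delayed SIS analog
       di/dt = b0*(c_hat - alpha*b0*i(t - delay)/n) * i * (n - i)/n - gamma*i,
       with constant pre-history i(t) = i0 for t <= 0."""
    steps = int(t_max / dt)
    lag = int(round(delay / dt))
    i = np.empty(steps + 1)
    i[0] = i0
    for t in range(steps):
        i_lag = i[t - lag] if t >= lag else i0  # constant history before t = 0
        c = c_hat - alpha * b0 * i_lag / n      # delayed optimal contact rate
        i[t + 1] = i[t] + dt * (b0 * c * i[t] * (n - i[t]) / n - gamma * i[t])
    return i

i_path = euler_dde()
```

With these illustrative values the trajectory grows slowly from a single infected and then relaxes to the endemic level; larger delays or steeper penalty weights can instead yield sustained oscillations, mirroring the regime changes described above.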
We note that the five distinct kinds of dynamical trajectories seen in these computational experiments come from a purely deterministic recursion. This means that oscillations, and even erratic, near-chaotic dynamics and collapse, in an epidemic may not necessarily be due to seasonality, complex agent-based interactions, changing or stochastic parameter values, demographic change, host immunity, or socio-cultural idiosyncrasies. This dynamical behavior in the number of infecteds can result from mathematical properties of a simple deterministic system with homogeneous endogenous behavior-change, similar to the complex population dynamics of biological organisms [39]. The mathematical consistency with population dynamics suggests a parallel in ecology: the indifference point for human behavior functions in a similar way to a carrying capacity in ecology, below which a population will tend to grow and above which it will tend to decline. When individuals are incentivized to change their behavior to protect themselves, they will, and they will cease to do this when they are not [10]. Further, our results show that certain parameter sets can lead to limit-cycle dynamics, consistent with other negative feedback mechanisms with time delays [41, 42]. This is because the system is reacting to conditions that were true in the past, but not necessarily true in the present. In our discrete-time model, there is the added complexity that the non-zero equilibrium may be locally stable but not attained from a wide range of initial conditions, including the most natural one, namely a single infected individual. Observed epidemic curves of many transient disease outbreaks typically inflect and go extinct, as opposed to this model, which may oscillate perpetually or converge [43]; double-peaked outbreaks have nevertheless been observed, as in the waves and surges in fluctuations in COVID-19 cases globally [1].
There may be many causes for such double-peaked outbreaks, one of which may be a lapse in behavior-change after the epidemic begins to die down, due to decreasing incentives [16], as represented in our simple theoretical model. This is consistent with findings that voluntary vaccination programs suffer from decreasing incentives to participate as prevalence decreases [44, 45]. It should be noted that the continuous-time version of our model can support a stable cyclic epidemic whose interpretation in empirical terms will depend on the time scale, and hence on the meaning of the delay, Δ. One of the responsibilities of infectious disease modelers (e.g., COVID-19 modelers) is to predict and project forward what epidemics will do in the future, in order to better assist in the proper and strategic allocation of preventative resources. COVID-19 models have often proved wrong by orders of magnitude because they lack the means to account for adaptive response. An insight from this model, however, is that prediction becomes very difficult, perhaps impossible, if we allow for adaptive behavior-change, because the system is qualitatively sensitive to small differences in the values of key parameters. These parameters are very hard to measure precisely; they change depending on the disease system and context, and their inference is generally subject to large errors. Further, we do not know how policy-makers weight the economic trade-offs against the public health priorities (i.e., the ratio between α_1 and α_2 in our model) to arrive at new policy recommendations. To maximize the ability to predict and minimize loss of life or morbidity, outbreak response should seek to minimize not only the reproduction number, but also the length of time taken to gather and distribute information.
Another approach would be to use a predetermined strategy for the contact rate, as opposed to a contact rate that depends on the number of infecteds. In our model, complex dynamic regimes occur more often when there is a time delay. If behavior-change arises from fear, and fear is triggered by high local mortality and high local prevalence, such delays seem plausible, since death and incubation periods are lagging epidemiological indicators. Lags mean that people can respond sluggishly to an unfolding epidemic crisis, but they also mean that people can abandon protective behaviors prematurely. Developing approaches to incentivize protective behavior throughout the duration of any lag introduced by the natural history of the infection (or otherwise) should be a priority in applied research. This paper represents a first step in understanding endogenous behavior-change and time-lagged protective behavior, and we anticipate further developments along these lines that could incorporate long incubation periods and/or recognition of asymptomatic transmission. In the neighborhood of the equilibrium î, write i_t = î + ε_t and i_{t−Δ} = î + ε_{t−Δ}, where ε_t and ε_{t−Δ} are small enough that quadratic terms in them can be neglected in the expression for i_{t+1} = î + ε_{t+1}. The linear approximation to (A1) is then

ε_{t+1} = (∂f/∂i_t)|_{î} ε_t + (∂f/∂i_{t−Δ})|_{î} ε_{t−Δ},    (A2)

and in the case Δ = 0 this reduces to a single-term relation (A3). We focus first on Δ = 0 and write (A3) as ε_{t+1} = ε_t L(î). Recall that î satisfies eq. (17), and substituting γ from (17) simplifies L(î). Now we turn to the general case Δ ≠ 0 and eq. (A2), which we write as

ε_{t+1} = a ε_t + b ε_{t−Δ},    (A7)

where a and b are the corresponding terms on the right side of (A2), constants with respect to time. Local stability of î is then determined by the properties of recursion (A7), whose solution first involves solving its characteristic equation

λ^{Δ+1} = a λ^Δ + b.    (A8)

In principle there are Δ + 1 real or complex roots of (A8), which we represent as λ_1, λ_2, . . .
, λ_{Δ+1}, and the solution of (A7) can be written as ε_t = Σ_i c_i λ_i^t, where the c_i are found from the initial conditions. Convergence to, and hence local stability of, î is determined by the magnitude of the absolute value (if real) or modulus (if complex) of the roots λ_1, λ_2, . . . , λ_{Δ+1}: î is locally stable if the largest among the Δ + 1 of these is less than unity. In Table 2, results of numerically iterating the complete recursion (11) are listed for the delay Δ varying from Δ = 0 to Δ = 4, all starting from i_0 = 1, with n = 10,000 and the stated parameters. Figure 3 illustrates the discrete- and continuous-time dynamics summarized in Table 2. For Δ = 1, the characteristic equation is a quadratic with complex roots 0.4999 ± 0.6461i, whose modulus is 0.8169, which is less than 1. The complexity implies cyclic behavior, and since the modulus is less than one, we see locally damped oscillatory convergence to î. For Δ = 2, the characteristic equation is a cubic, which has one real root 0.6383 and complex roots 0.8190 ± 0.6122i. Here the modulus of the complex roots is 1.0225, which is greater than unity, so that î is not locally stable. In this case the dynamics depend on the initial value i_0: if i_0 < 72, i_t oscillates but not in a stable cycle; if i_0 > 73, the oscillation becomes unbounded.

References
1. World Health Organization. Coronavirus disease (COVID-19): situation report.
2. Scientific and ethical basis for social-distancing interventions against COVID-19.
The Lancet Infectious Diseases.
3. Social factors in epidemiology.
4. Modelling the influence of human behaviour on the spread of infectious diseases: a review.
5. Evolving public perceptions and stability in vaccine uptake.
6. Game theory of social distancing in response to an epidemic.
7. The responsiveness of the demand for condoms to the local prevalence of AIDS.
8. Nine challenges in incorporating the dynamics of behaviour in infectious diseases models.
9. Impact and behaviour: the importance of social forces to infectious disease dynamics and disease ecology.
10. Economic epidemiology and infectious diseases.
11. Erratic flu vaccination emerges from short-sighted behavior in contact networks.
12. Capturing human behaviour.
13. A generalization of the Kermack-McKendrick deterministic epidemic model.
14. A hybrid epidemic model: combining the advantages of agent-based and equation-based approaches. IEEE.
15. The effect of a prudent adaptive behaviour on disease transmission.
16. Coupled contagion dynamics of fear and disease: mathematical and computational explorations.
17. A general approach for population games with application to vaccination.
18. Ebola cases and health system demand in Liberia.
19. The spread of awareness and its ... Proceedings of the National Academy of Sciences.
20. ... a review.
21. The dynamics of physiologically structured populations.
22. Periodicity in epidemiological models.
23. Measles in England and Wales, I: an analysis of factors underlying seasonal patterns.
24. Seasonal and interannual cycles of endemic cholera in Bengal 1891-1940 in relation to climate and geography.
25. Etiology of newly emerging marine diseases.
26. Epidemic cycles driven by host behaviour.
27. Periodic solutions of delay differential equations arising in some models of epidemics.
28. A contribution to the mathematical theory of epidemics. The Royal Society.
29. Modeling infectious diseases in humans and animals. Princeton University Press.
30. Time series modelling of childhood diseases: a dynamical systems approach.
31. Adaptive human behavior in epidemiological models.
32. Choices, beliefs, and infectious disease dynamics.
33. Higher disease prevalence can induce greater sociality: a game theoretic coevolutionary model.
34. Global stability of an SIR epidemic ...
35. Global stability for the SEIR model in epidemiology.
36. SciPy.
37. SciPy-based delay differential equation (DDE) solver.
38. The strategy of building models of complex ecological systems.
39. Simple mathematical models with very complicated dynamics.
40. ... Journal of the Fisheries Board of Canada.
41. Time-delay versus stability in population models with two and three trophic levels.
42. Time delays are not necessarily destabilizing.
43. Different epidemic curves for severe acute respiratory ...
44. Rational epidemics and their public control.
45. Group interest versus self-interest in smallpox vaccination policy.

key: cord-026144-buctm04o
title: Modeling the costs of trade finance during the financial crisis of 2008-2009: an application of dynamic hierarchical linear model
date: 2020-05-18
journal: Information Processing and Management of Uncertainty in Knowledge-Based Systems
doi: 10.1007/978-3-030-50146-4_47
sha:
doc_id: 26144
cord_uid: buctm04o

The authors propose a dynamic hierarchical linear model (DHLM) to study the variations in the costs of trade finance over time and across countries in dynamic environments such as the global financial crisis of 2008-2009. The DHLM can cope with the challenges that a dynamic environment entails: nonstationarity, parameters changing over time, and cross-sectional heterogeneity. The authors employ a DHLM to examine how the effects of four macroeconomic indicators (GDP growth, inflation, trade intensity, and stock market capitalization) on trade finance costs varied over a period of five years, from 2006 to 2010, across 8 countries.
We find that the effect of these macroeconomic indicators varies over time, and most of this variation is present in the years preceding and succeeding the financial crisis. In addition, the trajectories of the time-varying effects of GDP growth and inflation support the "flight to quality" hypothesis: the cost of trade finance falls in countries with high GDP growth and low inflation during the crisis. The authors also note the presence of country-specific heterogeneity in some of these effects. The authors propose extensions to the model and discuss its alternative uses in different contexts. Trade finance consists of borrowing using trade credit as collateral and/or the purchase of insurance against the possibility of trade credit defaults [2, 4]. According to some estimates, more than 90% of trade transactions involve some form of credit, insurance, or guarantee [7], making trade finance extremely critical for smooth trades. After the global financial crisis of 2008-2009, the limited availability of international trade finance has emerged as a potential cause of the sharp decline in global trade [4, 13, 21] (though see [27] for counter-evidence). As a result, understanding how trade finance costs varied over the period in and around the financial crisis has become critical for policymakers (see note 2) to ensure adequate availability of trade finance during crisis periods, in order to mitigate the severity of the crisis. In addition, as the drivers of trade finance may vary across countries, it is important to account for heterogeneity while studying the effect of these drivers on trade finance [20]. A systematic study of the drivers of trade finance costs can be challenging: modeling the effects of these drivers in dynamic environments (e.g., a financial crisis) requires a method that can account for nonstationarity, changes in parameters over time, and cross-sectional heterogeneity [42].
First, nonstationarity is an important issue in time-series analysis of observational data [36, 42] (see note 3). The usual approach to addressing nonstationarity requires filtering the data in the hope of making the time-series mean and covariance stationary (see note 4). However, methods for filtering time series, such as first differences, can lead to distortion in the spectrum, thereby impacting inferences about the dynamics of the system [22]. Further, filtering the data to make the time-series stationary can (i) hinder model interpretability, and (ii) emphasize noise at the expense of signal [43]. Second, the effects of the drivers of trade finance costs change over time [10]. These shifts happen due to time-varying technological advances, regulatory changes, and the evolution of the banking sector's competitive environment, among others. As we are studying the 2008-2009 global financial crisis, many drivers of the costs may have different effects during the crisis compared to the pre-crisis period. For example, during the crisis many lenders may prefer only borrowers of the top quality, thus exhibiting a "flight to quality" [12]. To capture changes in model parameters over time, studies typically either (1) use moving windows to provide parameter paths, or (2) perform a before-and-after analysis. However, both of these methods suffer from certain deficiencies. Models that yield parameter paths [11, 32] by using moving windows to compute changes in parameters over time lead to inefficient estimates, since each time only a subset of the data is analyzed. These methods also present a dilemma in terms of the selection of the window length, as short windows yield unreliable estimates while long windows imply coarse estimates and may also induce artificial autocorrelation. Using before-and-after analysis [9, 25, 38] to study parameter changes over time implies estimating different models before and after the event.
The 'after' model is estimated using data from after the event, under the assumption that these data represent the new and stabilized situation. A disadvantage of this approach is the loss in statistical efficiency as a result of ignoring effects present in part of the data. Further, this approach assumes that the underlying adjustment (due to events such as the financial crisis) occurs instantaneously. However, in practice, it may take time for financial markets to adjust before they reach a new equilibrium. This also serves to highlight the drawback of the approach in assuming time-invariant parameters for the 'before' model as well as for the 'after' model. Third, the effects of the drivers of trade finance cost may vary across countries [28], and we need to account for this heterogeneity. A well-accepted way to incorporate heterogeneity is to use hierarchical models that estimate country-specific effects of the drivers of trade finance cost [40]. Notes: (2) Such as the World Trade Organization (WTO), the World Bank (WB), and the International Monetary Fund (IMF). (3) The studies that used surveys to understand the impact of the financial crisis on trade finance costs [30] are also susceptible to the biases present in survey methods. First, survey responses have subjective components; if this subjectivity is common across the survey respondents, a strong bias will be present in their responses (for example, managers from the same country tend to exhibit a common bias in their responses [8]). Second, survey responses are difficult to verify: managers may over- or under-estimate their trade finance costs systematically, depending on the countries where their firms operate. Finally, survey research is often done in one cross-section of time, making it impossible to capture variation over time. (4) Methods like vector autoregression (VAR) often filter data to make it stationary [15, 17, 34].
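The moving-window approach discussed above can be made concrete with a rolling OLS on simulated data; everything below is an illustrative sketch (simulated series, not trade finance data), in which the window-length dilemma is visible: short windows respond quickly but noisily, long windows smooth the path but blur the timing of changes:

```python
import numpy as np

def rolling_ols_slope(x, y, window):
    """Re-estimate the OLS slope of y on x over each moving window."""
    slopes = []
    for start in range(len(x) - window + 1):
        xs, ys = x[start:start + window], y[start:start + window]
        xc, yc = xs - xs.mean(), ys - ys.mean()
        slopes.append(float(xc @ yc / (xc @ xc)))
    return np.array(slopes)

rng = np.random.default_rng(0)
t = np.arange(200)
beta_t = 1.0 + 0.01 * t                       # true coefficient drifts from 1 to 3
x = rng.normal(size=200)
y = beta_t * x + rng.normal(scale=0.1, size=200)
short = rolling_ols_slope(x, y, window=20)    # responsive but noisy path
long_ = rolling_ols_slope(x, y, window=100)   # smooth but coarse path
```

Both paths track the drifting coefficient on average, but neither uses all observations for any single estimate, which is the efficiency loss noted in the text.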
However, as hierarchical models are difficult to embed in time-series analysis [24], studies tend to aggregate data across cross-sections, which leads to aggregation biases in the parameter estimates [14]. Nonstationarity, time-varying parameters, and cross-sectional heterogeneity render the measurement and modeling of factors that impact the dependent variable of interest (in our case, the cost of trade finance) challenging in dynamic environments such as a financial crisis. Therefore, we propose a dynamic hierarchical linear model (DHLM) that addresses all three concerns and permits us to explain the variations in trade finance costs over several years, while also allowing us to detect any variation across countries, if present. Our DHLM consists of three levels of equations. At the highest level, the observation equation specifies, for each country in each year, the relationship between trade finance costs and a set of macroeconomic variables (e.g., inflation in the country). The coefficients of the predictors in the observation equation are allowed to vary across cross-sections (i.e., countries) and over time. Next, in the pooling equation we specify the relationship between the country-specific, time-varying coefficients (i.e., parameters) from the observation equation and a new set of parameters that vary over time but are common across countries. Thus, the pooling equation enables us to capture the "average" time-varying effect of the macroeconomic variables on trade finance cost. Finally, this "average" effect can vary over time and is likely to depend on its level in the previous period. The evolution equation, which is the lowest level of the DHLM, captures these potential changes in the "average" effects of the macroeconomic variables in a flexible way, through a random walk.
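One simple additive version of the three levels is: observation y_ct = x_ct'β_ct + ε_ct; pooling β_ct = θ_t + ν_ct; evolution θ_t = θ_{t−1} + ω_t. A hedged sketch of this data-generating process follows; the dimensions, variances, and the additive pooling form are illustrative assumptions, not the paper's specification or estimates:

```python
import numpy as np

def simulate_dhlm(n_years=5, n_countries=8, n_covariates=4,
                  sd_obs=0.5, sd_pool=0.2, sd_evol=0.3, seed=1):
    """Data-generating process of a three-level DHLM:
       evolution:   theta_t = theta_{t-1} + omega_t    (common trend, random walk)
       pooling:     beta_ct = theta_t + nu_ct          (country-specific coefficients)
       observation: y_ct    = x_ct . beta_ct + eps_ct  (e.g., trade finance cost)"""
    rng = np.random.default_rng(seed)
    theta = np.cumsum(rng.normal(scale=sd_evol, size=(n_years, n_covariates)), axis=0)
    beta = theta[:, None, :] + rng.normal(scale=sd_pool,
                                          size=(n_years, n_countries, n_covariates))
    x = rng.normal(size=(n_years, n_countries, n_covariates))
    y = (x * beta).sum(axis=-1) + rng.normal(scale=sd_obs, size=(n_years, n_countries))
    return theta, beta, x, y

theta, beta, x, y = simulate_dhlm()
```

Estimation of such a model typically proceeds by forward filtering and backward smoothing of θ_t in state-space form; the sketch above only generates data consistent with the three levels.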
we employ our dhlm to study how the effects of four macroeconomic variables (gdp growth, trade intensity, inflation, and stock market capitalization) on trade finance costs varied across 8 nations over a period of five years, from 2006 to 2010. although the objective of our paper is to introduce a model that can address the different challenges outlined earlier, our model estimates provide several interesting insights. we find that the effect of macroeconomic indicators on the cost of trade finance varies over time, and that most of this variation is present in the years preceding and succeeding the financial crisis. this is of interest to policymakers in deciding how long to implement interventions designed to ease the cost of trade finance. in addition, the trajectories of the time-varying effects of gdp growth and inflation are consistent with the "flight to quality" story [12]: during the crisis, the cost of trade finance falls in countries that have high gdp growth and low inflation. the time-varying effects of trade intensity are also consistent with our expectations, but the time-varying effect of market capitalization is counter-intuitive. finally, we also note heterogeneity in the trajectories of the country-specific time-varying effects, primarily for the effects of stock market capitalization and trade intensity. this research makes two contributions. first, we introduce a new model to the finance literature to study the evolution of the drivers of trade finance costs over time in dynamic environments such as a financial crisis, while also allowing the impact of these drivers to be heterogeneous across countries. our modeling approach addresses concerns related to nonstationarity, time-varying model parameters and cross-sectional heterogeneity that are endemic to time-series analysis of dynamic environments. our model can be adapted to study the evolution of various other variables, such as financial services costs and global trade.
our model can also be extended to a more granular level to incorporate firm-level heterogeneity by using a second pooling equation. doing so can pave the way to identifying the characteristics of companies that may need assistance during a financial crisis. thus, our research can remove subjectivity from the process of extending benefits to affected exporters and importers; even large-scale surveys may not be able to provide such granular implications for policy makers. second, our research has substantive implications. using a combination of data from the loan pricing corporation's dealscan database and the world bank, we complement the finance literature by empirically studying the evolution of the drivers of trade finance cost. we find that the impact of these drivers varies over time, with a large part of the variation present in the years preceding and succeeding the financial crisis. to the best of our knowledge, we are the first to study the time-varying impact of these macroeconomic drivers on trade finance, and this is of use to policy makers in deciding how long to extend benefits to parties affected by the crisis. the paper proceeds as follows. we first describe the dhlm and provide the theoretical underpinnings necessary to estimate the model. next, we describe the data and variables used in the empirical analysis. we then provide a detailed discussion of the results and conclude with a discussion of the findings. we specify the trade finance cost of a country as a function of country-specific macroeconomic variables and country-specific time-varying parameters using a dhlm. the dhlm has been used in previous studies in marketing and statistics [19, 26, 33, 35, 39] to estimate time-varying parameters at the disaggregate level (e.g., at the level of a brand or store).
a dhlm is a combination of dynamic linear models (dlm), which estimate time-varying parameters at an aggregate level [5, 6], and hierarchical bayesian (hb) models, which estimate time-invariant parameters at the disaggregate level [31]. the dhlm and the hb model both have a hierarchical structure that permits us to pool information across different countries to arrive at overall aggregate-level inferences. shrinking the country-specific parameters toward an "average" effect of the key variables across countries has been used by other researchers to estimate country-specific tourism marketing elasticities [39] and store-level price elasticities [31]. we specify the trade finance cost of a country as a function of the country-level variables gdp growth, inflation, stock market capitalization and trade intensity:

TradeFinanceCost_{it} = a_{it} + b_{it} GDPGrowth_{it} + c_{it} Inflation_{it} + d_{it} StockMarketCapitalization_{it} + f_{it} TradeIntensity_{it} + u_{1,it},    (1)

where TradeFinanceCost_{it} is the cost of trade finance of country i at time t, GDPGrowth_{it} is the gdp growth of country i at time t, Inflation_{it} is the inflation of country i at time t, StockMarketCapitalization_{it} is the stock market capitalization of country i at time t, TradeIntensity_{it} is the intensity of trade of country i at time t, a_{it}, b_{it}, c_{it}, d_{it} and f_{it} are country-specific time-varying coefficients, and u_{1,it} is the error term. in order to specify the equations in a compact manner, we cast eq. (1) as the observation equation of the dhlm. a dhlm also consists of a pooling equation and an evolution equation, and we specify these three equations below. we specify the observation equation as

y_t = F_{1t} \theta_{1t} + v_{1t}, \qquad v_{1t} \sim N(0, \sigma^2_{v_1,i} I_1),

where an observation y_t is a vector that consists of the country-specific trade finance costs at time t, and F_{1t} is a matrix that contains the country-specific macroeconomic variables at time t. the vector of parameters \theta_{1t} contains all the country-specific time-varying parameters defined in eq. (1): a_{it}, b_{it}, c_{it}, d_{it} and f_{it}.
the error term v_{1it} is multivariate normal and is allowed to have a heteroskedastic variance \sigma^2_{v_1,i}, and I_1 is an identity matrix of appropriate dimension. we specify y_t, F_{1t}, and \theta_{1t} similarly to [17, 35]. we specify the pooling equation as

\theta_{1t} = F_{2t} \theta_{2t} + v_{2t}, \qquad v_{2t} \sim N(0, \sigma^2_{v_2} I_2),

i.e., we specify the country-specific time-varying parameters \theta_{1t} as a function of a new set of parameters \theta_{2t} that vary only in time. this hierarchical structure pools information across countries at every point in time, and thus \theta_{2t} represents the "average" time-varying effect. here F_{2t} is a matrix of 0's and 1's that specifies the relationship between the average time-varying parameters \theta_{2t} and the country-specific time-varying parameters \theta_{1t}. the error v_{2t} is multivariate normal, and I_2 is an identity matrix of appropriate dimension. finally, we specify how the average time-varying parameters \theta_{2t} evolve over time. following the dynamic linear models (dlm) literature [43], we model the evolution of these parameters as a random walk. we specify the evolution equation as

\theta_{2t} = G \theta_{2,t-1} + w_t, \qquad w_t \sim N(0, \sigma^2_w I_3),

where the random walk specification requires G to be an identity matrix, w_t is a multivariate normal error, and I_3 is an identity matrix of appropriate dimension. we compute the full joint posterior of the set of parameters (\theta_{1t}, \theta_{2t}, and the variance parameters \sigma^2_{v_1,i}, \sigma^2_{v_2}, and \sigma^2_w) conditional on the observed data. to generate the posteriors of the parameters we use the gibbs sampler [16]. in the interest of space, we refer the reader to [26] for more details. as a robustness check, we estimate our dhlm on simulated data to check whether our sampler is able to recover the parameters. the model we use to simulate the data is similar to the one [26] used for their simulation study. we find that our sampler performs well and recovers the parameters used to simulate the data; space constraints prevent us from including further details. for the empirical tests, the data are derived from two sources.
the information on trade finance costs is obtained from loan pricing corporation's dealscan database. the information on macroeconomic variables for the countries is obtained from the world bank. we briefly describe the data sources. dealscan provides detailed information on loan contract terms including the spread above libor, maturity, and covenants since 1986. the primary sources of data for dealscan are attachments on sec filings, reports from loan originators, and the financial press [41] . as it is one of the most comprehensive sources of syndicated loan data, prior literature has relied on it to a large extent [1, 3, 16, 23, 41] . the dealscan data records, for each year, the loan deals a borrowing firm makes. in some instances, a borrower firm may make several loan deals in a year. to focus on trade finance, we limit the sample to only those loans where the purpose was identified by dealscan as one of the following: trade finance, cp backup, pre-export, and ship finance. our trade finance costs are measured as the loan price for each loan facility, which equals the loan facility's at-issue yield spread over libor (in basis points). due to the limited number of observations, we don't differentiate between different types of loans. instead, the trade finance costs are averaged across different types of loans such as revolver loans, term loans, and fixed-rate bonds. we use the world bank data to get information on the economic and regulatory climate, and extent of development of the banking sector of the countries where the borrowing firms are headquartered. the economic and regulatory climate of a country is captured by gdp growth, inflation, stock market capitalization, and trade intensity. countries with high gdp growth are likely to face lower cost of trade finance, particularly during the financial crisis. 
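the firm-to-country aggregation and merge described above can be sketched with pandas. the column names and toy values below are assumptions for illustration; they do not reflect the actual dealscan or world bank schemas.

```python
import pandas as pd

# Hypothetical firm-level loan records in the style of DealScan (names assumed).
loans = pd.DataFrame({
    "country": ["USA", "USA", "Brazil", "Brazil", "Ghana"],
    "year":    [2008,  2008,  2008,     2009,     2009],
    "spread_bps": [150.0, 250.0, 300.0, 420.0, 380.0],  # at-issue spread over LIBOR
})

# Average firm-level spreads up to the country-year level ...
country_costs = (loans.groupby(["country", "year"], as_index=False)
                      .agg(trade_finance_cost=("spread_bps", "mean")))

# ... then merge with country-year macro data (stand-in for the World Bank frame).
macro = pd.DataFrame({
    "country": ["USA", "Brazil", "Brazil", "Ghana"],
    "year":    [2008,  2008,     2009,     2009],
    "gdp_growth": [-0.1, 5.1, -0.1, 4.8],
})
panel = country_costs.merge(macro, on=["country", "year"], how="inner")
print(panel)
```

the inner merge keeps only country-years present in both sources, mirroring how the final sample is restricted to countries with complete data.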
because high gdp growth is an indicator of the health of an economy, lenders are likely to move their assets to such economies during a financial crisis. countries with higher inflation will likely have a higher cost of trade finance, as the rate of return on the loans will incorporate the rate of inflation. we include stock market capitalization scaled by gdp as a proxy for capital market development in the country. countries with higher stock market capitalization are likely to have more developed financial markets; therefore, the cost of trade finance in such markets is likely to be lower. finally, we include total trade for the country scaled by the country's gdp as a measure of trade intensity. we expect that countries with a higher trade intensity will face a higher trade finance cost, since a greater reliance on trade may make a country more risky during a crisis. as our objective is to study the phenomenon at the national level, we need to merge these two data sets. as our data from dealscan contain trade finance costs at the firm level in a given year, we use the average of the trade finance costs at the level of a borrowing firm's home country to derive country-specific trade finance costs. this permits us to merge the data from dealscan with the macroeconomic data from the world bank. our interest is in modelling trade finance costs around the financial crisis of 2008-2009. therefore, we use a 5-year time series starting in 2006 and ending in 2010. this gives us a reasonable window that contains pre-crisis, crisis, and post-crisis periods. while we would like to use a longer window, we are constrained by the number of years for which the data are available to us from dealscan. after merging the two databases, our final sample consists of eight countries for which we have information on trade finance costs as well as macroeconomic indicators for all five years.
the eight countries are: brazil, ghana, greece, russia, turkey, ukraine, united kingdom (uk), and the united states (usa). we report the descriptive statistics for the sample in table 1 . average trade finance costs are approximately 190 basis points above libor. mean gdp growth is just 2.57%, reflecting the lower growth during the financial crisis. although average inflation is at 10.53%, we calculated the median inflation to be a moderate 6.55%. on average stock market capitalization/gdp ratio is around 63% while trade/gdp ratio is around 54%. more detailed summary statistics for the trade finance costs are depicted in fig. 1 . figure 1 captures the variation in trade finance cost over time and across 8 countries. we find countries experience a large increase in trade finance costs from 2008 to 2009. also, except for greece, these costs came down in 2010 from their peak in 2009. this suggests that the crisis impacted trade finance costs uniformly in our sample. we also see heterogeneity across countries in the manner in which these costs evolve over time. we also tested for multicollinearity among the independent variables, gdp growth, inflation, stock market capitalization and trade intensity. we specified a panel data regression model (i.e., without time-varying parameters) and calculated the variance inflation factors (vifs). the vifs we get for gdp growth, inflation, stock market capitalization and trade intensity are 1.13, 1.15, 1.42 and 1.36 respectively. as the vifs are less than 10, we can conclude that multicollinearity is not a concern [44]. in this section, we present the main results based on our dhlm, and subsequently compare our model to the benchmark hb model in which the parameters do not vary over time. we estimate our model using the gibbs sampler [18] . we use 200,000 iterations and use the last 20,000 iterations for computing the posterior, while keeping every tenth draw. 
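the multicollinearity check can be reproduced with the standard definition vif_j = 1 / (1 - r^2_j), where r^2_j comes from regressing predictor j on the remaining predictors. the data below are simulated, not the paper's panel:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the design matrix X
    (predictors only, no intercept), computed as 1 / (1 - R^2_j)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        # Regress column j on the remaining columns plus an intercept.
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 4))     # 8 countries x 5 years, 4 predictors (simulated)
X[:, 3] += 0.5 * X[:, 0]         # induce mild collinearity between two columns
print(vif(X))                    # values stay well below the usual cutoff of 10
```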
we verified the convergence of our gibbs sampler by using standard diagnostics: (1) we plotted the autocorrelation plots of the parameters and see that the autocorrelation goes to zero [40], and (2) we plotted and inspected the posterior draws of our model parameters and find that they resemble a "fat, hairy caterpillar" that does not bend [29]. we first present the estimates for the pooling equation (\theta_2), which are summarized in fig. 2. these estimates represent the "average" effect across countries of the four macroeconomic variables: gdp growth, inflation, stock market capitalization and the trade/gdp ratio. in fig. 2, each of the four panels depicts the "average" effect, over time, of one macroeconomic variable on the cost of trade finance. the dotted lines depict the 95% confidence interval (ci). we discuss these "average" time-varying effects in the subsequent paragraphs. we see that for all four macroeconomic variables, the effects vary over time. in addition, a large part of the variation occurs between 2007 and 2009, the 2-year span during which the financial crisis happened. our estimates will interest policy makers, as they imply that interventions to alleviate the impact of the crisis should start before its onset and should continue for some time after it has blown over. we find that gdp growth has a negative effect on trade finance costs, and this effect becomes more negative over time, especially during the years 2006 to 2009. our result implies that countries with high gdp growth faced a monotonically decreasing cost of trade finance in the years before and during the financial crisis, and can be explained by the "flight to quality" hypothesis advanced in the finance literature [12]. inflation has a positive effect on the cost of trade finance, and this effect becomes more positive over time, especially during 2007 to 2009, the year preceding the crisis and the years of the crisis itself.
our result implies that countries with high inflation faced monotonically increasing costs of trade finance from 2007 to 2009, and is also consistent with the "flight to quality" theory. stock market capitalization has a positive effect on the cost of trade finance. this effect seems somewhat counterintuitive: we used stock market capitalization as a proxy for the development of financial markets, and one would expect that during the financial crisis trade finance costs would decrease as financial markets became more developed. we note that the trade/gdp ratio has a positive effect on the cost of trade finance, and this effect becomes more positive between the years 2007 to 2009, similar to the pattern we noticed for the effects of inflation. since this variable measures the intensity of trade of a country, our results indicate that, during the financial crisis, a greater reliance on trade leads to higher costs of trade finance. this is expected, since a higher reliance on trade may make a country more risky in a financial crisis; countries with higher trade intensity are also exposed to higher counterparty risks. our model can also estimate the country-specific time-varying parameters presented in the observation equation (\theta_1). these estimates underscore the advantage of using a model such as ours: with only 40 observations of our dependent variable, we are able to obtain 200 country-specific, time-varying estimates. we note some heterogeneity in the country-specific trajectories of the effects of stock market capitalization and trade intensity. for example, we see that for some countries, such as ghana, russia and greece, the effect of the trade/gdp ratio on the cost of trade finance witnesses a steeper increase than in other countries, such as the usa and ukraine, in 2008 to 2009, the years of the crisis; we are unable to present these results due to space constraints. however, these findings offer directions for future research.
to assess model fit, we compare the forecasting accuracy of our proposed model to that of a benchmark hierarchical bayesian (hb) model with time-invariant parameters. we specify the hb model as

y_t = X_1 \lambda_1 + e_{1t}, \qquad \lambda_1 = X_2 \lambda_2 + e_2.

the above specification is similar to the dhlm, with the major difference being that the parameters now do not vary over time. the dependent variables (y) and independent variables (X_1) are the same as those in the proposed model, while X_2 is a matrix that adjusts the size of \lambda_1 to that of \lambda_2. we compare model fit by computing the out-of-sample one-step-ahead forecasts of our proposed model and the benchmark model. we calculate the mean absolute percentage error (mape), a standard fit statistic for model comparison [17, 35]. we find that the mape of our proposed model is 21.11, while that of the benchmark hb model is 42.68; our proposed model thus forecasts more accurately than the benchmark hb model. in this research, we attempt to shed light on the following question: how can we develop a model that permits us to examine variations in trade finance costs over time in dynamic environments (such as a financial crisis), while also accounting for possible variations across countries? we addressed this question by proposing a dhlm that can cope with the three challenges present when modeling data from dynamic environments: nonstationarity, changes in parameters over time, and cross-sectional heterogeneity. our model estimates detect variation over time in the macroeconomic drivers of trade finance, which is of interest to policy makers in deciding when and for how long to schedule interventions to alleviate the impact of a financial crisis. further, the trajectories of the time-varying effects of the macroeconomic indicators are in line with our expectations.
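the mape comparison can be sketched as follows; the forecast values are invented for illustration and are not the paper's out-of-sample forecasts.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Toy one-step-ahead forecasts of trade finance costs (values assumed).
actual  = np.array([190.0, 210.0, 320.0, 260.0])
model_a = np.array([180.0, 220.0, 300.0, 255.0])   # time-varying model
model_b = np.array([150.0, 150.0, 400.0, 200.0])   # static benchmark
print(mape(actual, model_a), mape(actual, model_b))
```

a lower mape indicates more accurate forecasts, which is the criterion used to prefer the dhlm over the static hb benchmark.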
we also note some degree of country-specific heterogeneity in the manner in which these drivers evolve over time, and a detailed scrutiny of these findings may prove fertile ground for future research. the dhlm can be easily scaled up, thereby allowing us to extend our analysis. first, we can add another level in the model hierarchy by specifying a second pooling equation. this would permit us to study the problem at the firm level, since evidence suggests that, during the crisis, firms from developing countries and financially vulnerable sectors faced higher trade finance costs [13, 30], and one can use recent nlp approaches [37] to gather firm information across different data sources. second, more macroeconomic variables can be added to the observation equation. in addition, our model can be used to study other contexts that face dynamic environments, such as financial services costs and global trade. the suitability of our model for dynamic environments also implies that it can be used to study the impact of the recent coronavirus (covid-19) on financial activities, since reports from the european central bank have suggested that the virus can lead to economic uncertainty. in many ways, the way the virus impacts the economy is similar to that of the financial crisis: there is no fixed date on which the intervention starts and ends (unlike, for example, the imposition of a new state tax), and its impact may vary over time as the virus, as well as people's reaction to it, gains in strength and then wanes; it would be interesting to model these time-varying effects to see how they evolve over time.

references:
- aggregate risk and the choice between cash and lines of credit
- a theory of domestic and international trade finance
- liquidity mergers
- exports and financial shocks
- the long-term effect of marketing strategy on brand sales
- building brands
- boosting trade finance in developing countries: what link with the wto? working paper, ssrn elibrary
- response styles in marketing research: a cross-national investigation
- do depositors discipline banks and did government actions during the recent crisis reduce this discipline? an international perspective
- financial institutions and markets across countries and over time: data and analysis
- the emergence of market structure in new repeat-purchase categories: the interplay of market share and retailer distribution
- international and domestic collateral constraints in a model of emerging market crises
- off the cliff and back? credit conditions and international trade during the global financial crisis
- using market-level data to understand promotion effects in a nonlinear model
- improving consumer mindset metrics and shareholder value through social media: the different roles of owned and earned media
- debtor-in-possession financing and bankruptcy resolution: empirical evidence
- the persistence of marketing effects on sales
- sample-based approaches to calculating marginal densities
- investigating the relationship between the content of online word of mouth, advertising, and brand performance
- econometric analysis
- decomposing the great trade collapse: products, prices, and quantities in the 2008-2009 crisis
- time series analysis
- foreign banks in syndicated loan markets
- combining time series and cross sectional data for the analysis of dynamic marketing systems. working paper
- product line extensions and competitive market interactions: an empirical analysis
- dynamic hierarchical models: an extension to matrix-variate observations
- the collapse of international trade during the 2008-09 crisis: in search of the smoking gun
- financial intermediation and growth: causality and causes
- the bugs book: a practical introduction to bayesian analysis
- trade and trade finance developments in 14 developing countries post
- creating micro-marketing pricing strategies using supermarket scanner data
- the long-term impact of promotion and advertising on consumer brand choice
- price elasticity variations across locations, time and customer segments: an application to the self-storage industry
- reducing food waste through digital platforms: a quantification of cross-side network effects
- modeling and forecasting the sales of technology products
- consumer bankruptcies and the bankruptcy reform act: a time-series intervention analysis, 1960-1997
- term based semantic clusters for very short text classification
- who benefits from store brand entry?
- marketing budget allocation across countries: the role of international business cycles
- information asymmetry and financing arrangements: evidence from syndicated loans
- the dynamic effect of innovation on market structure
- bayesian forecasting and dynamic models
- predicting nitrogen and chlorophyll content and concentrations from reflectance spectra (400-2500 nm) at leaf and canopy scales

key: cord-010712-6idcbl66 authors: fennell, peter g.; melnik, sergey; gleeson, james p. title: limitations of discrete-time approaches to continuous-time contagion dynamics date: 2016-11-16 journal: phys rev e doi: 10.1103/physreve.94.052125 sha: doc_id: 10712 cord_uid: 6idcbl66

continuous-time markov process models of contagions are widely studied, not least because of their utility in predicting the evolution of real-world contagions and in formulating control measures. it is often the case, however, that discrete-time approaches are employed to analyze such models or to simulate them numerically. in such cases, time is discretized into uniform steps and transition rates between states are replaced by transition probabilities.
in this paper, we illustrate potential limitations of this approach. we show how discretizing time leads to a restriction on the values of the model parameters that can accurately be studied. we examine numerical simulation schemes employed in the literature, showing how synchronous-type updating schemes can bias discrete-time formalisms when compared against continuous-time formalisms. event-based simulations, such as the gillespie algorithm, are proposed as optimal simulation schemes, both in terms of replicating the continuous-time process and in terms of computational speed. finally, we show how discretizing time can affect the value of the epidemic threshold for large values of the infection rate and the recovery rate, even if the ratio between the former and the latter is small. a feature of our environment is the existence of networks, from real-life human contact networks, to virtual networks such as online social networks, to functional and technological networks such as transport networks and the internet [1]. networks form a medium for contagions, which spread from node to node through the links of the networks. contagions can be physical [2, 3], cultural [4, 5], societal [6-8], or financial [9-11], and the modeling of such contagions [12-16], along with an understanding of the suitability of various modeling approaches [17-19], is vital for matters of the utmost public importance [20-22]. a common modeling paradigm for studying contagions is the framework of continuous-time markov processes [23-25], where events (such as the infection of a susceptible individual by an infected individual) occur at certain rates. the most well known of these models are epidemiological compartment models [2], which, although introduced as models of disease spread [26], are also widely used as models of social contagions such as the diffusion of information and innovations [27-29].
continuous-time markov process models can provide valuable insights into contagion processes, and have real value in both predicting and controlling contagious outbreaks [30-32]. one avenue to study continuous-time markov process models is by using discrete-time approximations [32-48]. such approaches can be either numerical (i.e., synchronous updating monte carlo simulations) or theoretical. in a discrete-time approach, time is discretized into time steps of length Δt (which usually takes the value Δt = 1), and events occur with certain probabilities. these probabilities are known as the state transition probabilities, and are simply the product of the corresponding rate and the time step Δt. although discrete-time approaches correspond to their continuous-time counterparts in the limit Δt → 0, they can differ significantly when Δt is finite. allen, in her work [33], shows that discrete-time susceptible-infected-susceptible (sis) and susceptible-infected-recovered (sir) models can produce complex behavior, such as period doubling and chaotic effects, for sufficiently large values of the time step and/or contact rate. this behavior is not possible in the continuous-time sis and sir models, and is thus no more than an artifact of discretizing time. similarly, gomez et al. [39] observe that differences between continuous- and discrete-time sis dynamics are substantial when an arbitrary time step of Δt = 1 is employed. an understanding of the discrepancies introduced as a result of discretizing time is thus important, allowing us to gauge the validity of discrete-time approaches and when they may accurately be employed. in this paper, we show the limitations of discrete-time approaches when used to study continuous-time contagion dynamics.
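the gap between a rate and its discrete-time transition probability is easy to quantify: for a process with rate λ, the exact probability of at least one event in a window Δt is 1 - exp(-λΔt), whereas the discrete-time scheme uses the linearization λΔt, which overshoots as λΔt grows (and ceases to be a probability once λΔt > 1). a quick numerical check, with illustrative parameter values:

```python
import numpy as np

lam = 2.0  # illustrative transition rate
for dt in (0.01, 0.1, 0.5, 1.0):
    exact = 1.0 - np.exp(-lam * dt)   # exact probability of an event in dt
    approx = lam * dt                 # discrete-time transition probability
    print(f"dt={dt:4}: exact={exact:.4f}  lam*dt={approx:.4f}  "
          f"rel. error={(approx - exact) / exact:.2%}")
```

for small λΔt the two agree closely; for λΔt near or above 1 the discrete-time probability is badly wrong, which is the restriction on parameter values discussed in the text.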
our message is clear: the accuracy of such methods will be poor if the state transition probabilities are too large, leading to deviations from the underlying continuous-time process. the repercussions of this are manifold. discrete-time theoretical approaches can be significantly inaccurate for large values of the contagion parameters (such as infection and recovery rates), and thus the analysis of such approaches will not be valid. furthermore, discrete-time monte carlo simulations, often used as a gold standard [34, 49, 50], can be inaccurate for large parameter values, and such inaccurate simulations can lead to misleading conclusions. we illustrate this latter point with an example from the literature in sec. iv. our work highlights the consequences of erroneous approaches to studying continuous-time contagion dynamics, which has important implications not only for the academic study of these dynamics [34-48] but also for the implementation of such dynamics within large-scale simulators for real contagions [31, 32]. to begin, we describe in some detail both continuous- and discrete-time markov processes, to illustrate mathematically the difference between the two. in continuous-time markov processes, events are described by rates λ, while events in the discrete-time analog are described by transition probabilities λ̃, where λ̃ = λΔt. in the course of our analysis we focus on the specific example of sis dynamics; however, our analysis holds for any continuous-time markovian dynamics, with the core message being the limitation on the size of the transition probabilities λ̃ for which discrete-time approaches are accurate. consider sis dynamics taking place on a network of n nodes. this is a continuous-time markov process where at any time t each node i in the network has a corresponding state x^i_t, which is either susceptible (x^i_t = s) or infected (x^i_t = i) [23, 51, 52].
the states of the nodes change dynamically over time. susceptible nodes become infected through each of their infected neighbors at a rate β per infected neighbor, while infected nodes recover at a rate μ. "rate" here refers to instantaneous transition rates, which in continuous-time dynamics define the transitions between states; these are defined in terms of probabilities as

β = lim_{Δt→0} P(x^i_{t+Δt} = i via j | x^i_t = s, x^j_t = i) / Δt,    (1)
μ = lim_{Δt→0} P(x^i_{t+Δt} = s | x^i_t = i) / Δt,    (2)

where {x^i_{t+Δt} = i via j} is the event that a susceptible node i became infected through an infected neighbor j. the fraction terms on the right-hand sides of eqs. (1) and (2) are the probabilities of state changes per unit time, and taking the limit of these fractions as Δt → 0 leads to the concept of transition rates. in general, we can define r_i as the rate at which node i changes from its current state to the opposite state; this is given by

r_i = β m_{i,t} if x^i_t = s, and r_i = μ if x^i_t = i,

where m_{i,t} is the number of infected neighbors of node i at time t. the evolution of the dynamics on the network can be fully described by the master equation for the markov process [24, 53]. if we denote by y_t = {x^i_t}_{i=1}^n the state of the network at time t, and by p(y, t) the probability that the network is in state y_t = y, then the master equation is given by

dp(y, t)/dt = \sum_{y'} [ r_{y'→y} p(y', t) - r_{y→y'} p(y, t) ],

with initial conditions p(y, 0) = p_0(y). here r_{y→y'} is the instantaneous rate at which the network changes from state y to state y', and is fully determined by the network structure and the transition rates μ and β. while the master equation is the gold standard, exactly describing the evolution of sis dynamics, the dimension of its sample space is 2^n, which in general is prohibitively large for analytical or numerical studies. one way to tackle this problem is to study the dynamics as a series of individual transitions between states. in continuous-time dynamics, nodes change state one at a time, or asynchronously [54].
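the per-node rates r_i defined above, the total rate that parametrizes the network's holding time, and the probability that a given node is the next to change state can all be computed directly from the network state. the graph and parameter values here are illustrative:

```python
import numpy as np

beta, mu = 0.6, 1.0   # infection and recovery rates (illustrative values)

# A 4-node path graph 0-1-2-3, with node 1 infected.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
infected = np.array([False, True, False, False])

m = adj @ infected                         # infected neighbors per node
rates = np.where(infected, mu, beta * m)   # r_i: mu if infected, beta*m_i if susceptible
total_rate = rates.sum()                   # parameter of the exponential holding time
probs = rates / total_rate                 # probability that node i changes state next
print(rates, total_rate)
```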
given the state of the network, the probability distributions governing both the length of time until the next state change and the node which will change state can be constructed. these are given by the following lemmas (rigorous derivations can be found in the literature, e.g., ref. [23], chapter 10):

lemma 1. let τ be the holding time of the network, i.e., the length of time that the network remains in its current state before changing to the next state. then τ is an exponentially distributed random variable whose parameter is the sum of the individual node transition rates, i.e., Σ_{i=1}^N r_i.

lemma 2. the probability that the next node in the network to change state will be node i is r_i / Σ_{j=1}^N r_j.

lemmas 1 and 2 describe how the network probabilistically evolves from one state to another. they are the basis of continuous-time stochastic simulation methods such as the well-known gillespie algorithm, also known as the stochastic simulation algorithm or kinetic monte carlo [55-58]. such simulations are often referred to as event-based simulations because the time intervals are not fixed but rather correspond to the time between consecutive state changes in the system. at each step in such algorithms, time advances by an amount τ and node i changes its state, where τ and i are random numbers drawn according to lemmas 1 and 2 (fig. 1). stochastic simulations give the opportunity to construct p(y,t) empirically by running multiple realizations of the stochastic process and aggregating over an ensemble of realizations. such simulations are statistically exact, as they are fully based on lemmas 1 and 2, which are derived without approximation from the axioms of the markov process. in a discrete-time framework, time is no longer treated as a continuous variable but rather takes the form of a discrete variable, which advances in time intervals of length Δt. instantaneous transition rates are replaced by transition probabilities.
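as a concrete illustration of lemmas 1 and 2, a single event-based update of sis dynamics can be sketched in a few lines of python. this is a minimal sketch under the rates of eq. (3), not the released c++ code; the function name and data layout are our own:

```python
import math
import random

def gillespie_sis_step(states, neighbors, beta, mu, rng):
    """one event-based update: draw the holding time tau (lemma 1) and
    the node that changes state (lemma 2), then flip that node."""
    # per-node transition rates r_i: beta * (number of infected neighbors)
    # for a susceptible node, mu for an infected node
    rates = []
    for i, s in enumerate(states):
        if s == 'I':
            rates.append(mu)
        else:
            m_i = sum(1 for j in neighbors[i] if states[j] == 'I')
            rates.append(beta * m_i)
    total = sum(rates)
    if total == 0.0:                 # all-susceptible absorbing state
        return list(states), math.inf
    tau = rng.expovariate(total)     # lemma 1: tau ~ exponential(sum_i r_i)
    # lemma 2: node i is chosen with probability r_i / sum_j r_j
    x, acc, chosen = rng.random() * total, 0.0, len(states) - 1
    for i, r in enumerate(rates):
        acc += r
        if x < acc:
            chosen = i
            break
    new_states = list(states)
    new_states[chosen] = 'S' if states[chosen] == 'I' else 'I'
    return new_states, tau
```

repeatedly calling this step and accumulating the holding times τ yields one statistically exact realization of the continuous-time process.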
in a single time interval, susceptible nodes become infected through their infected neighbors with probability β̂ = βΔt per infected neighbor, while infected nodes recover with probability μ̂ = μΔt. note that Δt is often assumed to take the value Δt = 1, but even in this case it should be included in the expressions for β̂ and μ̂ to clarify that a rate needs to be multiplied by a time step before it can be expressed as a probability. the discretization of time in this manner leads to two deviations from the continuous-time process. these deviations arise through both the transition probabilities, which are used in place of transition rates, as well as the parallel (synchronous) state changes in discrete-time systems that are uncharacteristic of continuous-time dynamics. to understand the roots of the deviations introduced through the transition probabilities, we can examine the definitions of μ and β as rates given in eqs. (1) and (2). these equations can be rearranged to give transition probabilities in terms of these rates, i.e.,

p(x^i_{t+Δt} = s | x^i_t = i) = μΔt = μ̂,   (5)
p({x^i_{t+Δt} = i via j} | x^i_t = s) = βΔt = β̂,   (6)

where in this case Δt is an infinitesimally small length of time. in the case that Δt is not infinitesimally small, eqs. (5) and (6) become approximations. in a time interval of length Δt in the continuous-time markov process, the exact probability that an infected node will recover is 1 − e^{−μΔt}, while the probability that a susceptible node will become infected by a given infected neighbor is 1 − e^{−βΔt}. the transition probabilities μ̂ and β̂, the right-hand sides of eqs. (5) and (6), are approximations to 1 − e^{−μΔt} and 1 − e^{−βΔt}, respectively, and an important question then arises of the effect these approximations have on the dynamics.

fig. 1. schematics of both (a) the gillespie algorithm and (b) the synchronous updating scheme. vertical ticks on the t axis indicate the moments through which the simulation advances; in synchronous updating the interval between these moments is a fixed time step with value Δt, while in the gillespie algorithm the interval is a random variable τ given by lemma 1. the light green and dark red circles are nodes in the network, which are in the susceptible and infected states, respectively. a square around a node means that the node has been chosen for updating at a certain moment and may change its state. in the gillespie algorithm a node is chosen according to lemma 2 and will always change its state, while in the synchronous updating scheme every node has the chance to change state and will do so with a probability that depends on its state and the states of its nearest neighbors.

figure 2 shows the actual probability 1 − e^{−λΔt} along with the discrete-time probability λ̂ = λΔt, where we use the parameter λ to represent either μ or β. we also plot the error ε, which is defined as the difference between the discrete-time probability and the actual probability. when λ̂ < 0.1, ε < 0.01, and so the approximation is fairly accurate in this range. for larger values of the state transition probability λ̂, however, the approximation differs significantly from the true values: at λ̂ = 0.5, ε ≈ 0.1, and when λ̂ = 1, ε ≈ 0.37. these individual errors can accumulate and have significant implications for the dynamics as a whole; indeed we show empirically in secs. iii and iv that although discrete-time approaches can be very accurate when μ̂ and β̂ are very small, they begin to lose accuracy when μ̂ and β̂ are of the order of magnitude of 10^−1.

fig. 2. the actual probability (blue solid line) that a rate-λ event will occur in a time step of length Δt, plotted along with the approximate probability λ̂ (black dash-dotted line) as used in discrete-time formalisms. the error ε is defined as the absolute distance between the two.
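the size of this discretization error can be checked directly; a short sketch (the function name is ours):

```python
import math

def discretization_error(lam_hat):
    """error eps between the discrete-time probability lam_hat = lam * dt
    and the exact continuous-time probability 1 - exp(-lam * dt)."""
    return lam_hat - (1.0 - math.exp(-lam_hat))

# eps < 0.01 for lam_hat = 0.1, eps ~ 0.1 at lam_hat = 0.5, eps ~ 0.37 at 1.0
for lam_hat in (0.1, 0.5, 1.0):
    print(lam_hat, discretization_error(lam_hat))
```

the error grows quickly with λ̂, which is why the approximation is only safe for small transition probabilities.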
second, we comment on the synchronous updating nature of discrete-time approaches. this is in contrast to the continuous-time process, where nodes change state asynchronously and the change of state of one node immediately affects the transition rates of the other nodes (fig. 1). the strength of this effect will depend on the transition probabilities, as the values μ̂ and β̂ dictate the number of state changes that take place in each time step and thus the propensity of multiple nodes to change state at the same time. thus, we arrive at a simple conclusion: the values of μ̂ and β̂ (and thus μ, β, and Δt) used in discrete-time approaches should be controlled so that these approaches are accurate representations of the continuous-time process. for large values of μ or β, the time step Δt should be small, while if Δt = 1, as in the majority of discrete-time approaches, the values of μ and β should be relatively small. throughout the rest of this paper we give empirical evidence of this conclusion. finally, we comment on discrete-time numerical simulation schemes that are used to stochastically simulate sis dynamics. a commonly used simulation scheme is synchronous updating, also referred to as rejection sampling (fig. 1) [34, 49, 57, 59]. in this case, time advances in steps of one time unit, i.e., Δt = 1. in a single time unit, a susceptible node will become infected by its infected neighbors with probability β̂ per infected neighbor, while infected nodes become susceptible with probability μ̂. synchronous updating simulations are statistically exact realizations of the discrete-time dynamics; these dynamics are fully described by the discrete-time master equation

p(y, t + Δt) = Σ_{y'} q_{y'→y} p(y', t),   (7)

where q_{y'→y} is the probability that the network changes from state y' to state y in a time step of length Δt = 1, and is fully determined by the network structure and the transition probabilities μ̂ and β̂ [24].
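the synchronous (rejection-sampling) scheme described above can be sketched as a parallel update in which every node draws its next state from the current configuration (a minimal sketch; names are our own):

```python
import random

def synchronous_sis_step(states, neighbors, beta_hat, mu_hat, rng):
    """one synchronous update with a fixed time step: all nodes are
    updated in parallel from the *current* states, so several nodes may
    change state in the same step."""
    new_states = []
    for i, s in enumerate(states):
        if s == 'I':
            # an infected node recovers with probability mu_hat
            new_states.append('S' if rng.random() < mu_hat else 'I')
        else:
            # a susceptible node is infected with probability beta_hat per
            # infected neighbor, i.e., it escapes all m_i of them with
            # probability (1 - beta_hat) ** m_i
            m_i = sum(1 for j in neighbors[i] if states[j] == 'I')
            p_infect = 1.0 - (1.0 - beta_hat) ** m_i
            new_states.append('I' if rng.random() < p_infect else 'S')
    return new_states
```

note that with μ̂ = β̂ = 1 every infected node recovers and every susceptible node with an infected neighbor becomes infected, so the update is fully deterministic, the degenerate Δt = 1, μ = β = 1 case discussed in sec. iii.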
because synchronous updating simulations exactly mimic the discrete-time dynamics and master equation, they will be used throughout this paper to gauge the accuracy of the discrete-time approach. in the remainder of the paper, we show how the approximations introduced in discrete-time approaches can lead to misrepresentation of the actual continuous-time dynamics. we begin in the next section by examining the discrete-time approximations of eqs. (5) and (6) for fixed μ and β and various values of Δt. we show that discrete-time dynamics can accurately reproduce continuous-time dynamics for small values of Δt, but that they incur a breakdown in accuracy as Δt increases. further to this, we show in sec. iv that when the time step is fixed to the value Δt = 1, as in much of the literature, discrete-time approaches break down in accuracy when the transition rates (μ and β) are too large. this limits the range of parameters that can be studied with discrete-time approaches. we illustrate this with an example from the literature, also showing how synchronous updating simulation schemes can favour discrete-time formalisms, leading to biased conclusions when comparing against continuous-time theories. finally, in sec. v we show that overly large values of β̂ and μ̂ can affect the value of the epidemic threshold, even if the effective transition rate defined as γ = β/μ = β̂/μ̂ is small. in this section, we analyze the discrete-time approximations introduced in sec. ii b as a function of the size of the discrete time step Δt. we do this by carrying out synchronous updating simulations for various values of Δt and comparing them against exact results obtained from the master equation. numerical simulations are carried out in c++ and the code is available online [60]. as our example, we consider sis dynamics on a complete graph of N nodes, i.e., a graph where every pair of nodes is connected.
on such a graph, the sis dynamics are defined by the rate functions

r(z_t → z_t + 1) = β z_t (N − z_t),   r(z_t → z_t − 1) = μ z_t,   (8)

where z_t is the number of infected nodes at time t and β and μ are the infection rate and recovery rate, respectively, consistent with eq. (3) for the complete graph. we choose the complete graph because on such a graph the master equation given in eq. (4) can be reduced from a system of 2^N equations for p(y,t) to a system of N + 1 equations for p(n,t), the probability that there are n infected nodes in the graph at time t [24]. this reduced system is given by

dp(n,t)/dt = β (n − 1)(N − n + 1) p(n − 1, t) + μ (n + 1) p(n + 1, t) − [β n (N − n) + μ n] p(n,t),   (9)

for 0 ≤ n ≤ N, with initial conditions p(n,0) = p_0(n). for small values of N, this system can easily be solved using standard differential-equation solvers, giving us a gold standard against which to compare the discrete-time simulations. we also perform gillespie algorithm simulations to illustrate the accuracy and the speed of such simulations and thus their efficacy in simulating continuous-time dynamics. we present the results for sis dynamics with β = 1, μ = 1 running on a complete graph with N = 10 nodes in fig. 3. we plot the solution of eq. (9) as well as the numerical results given by the gillespie algorithm and synchronous updating schemes with different time steps Δt. for the numerical simulations, we performed 10^6 realizations and obtained the corresponding p(n,t) by taking the fraction of realizations in which there are n infected nodes at time t. for the synchronous updating simulations, we consider time steps of Δt = 0.01, 0.1, and 1. from fig. 2, it is clear that these values of Δt with μ = β = 1 will give a comprehensive range on which to judge the accuracy of the discrete-time approach, while noting that for Δt = 1 and for these μ and β parameter values the system is deterministic. we consider the sis process at time t = 3, at which stage the expected fraction of infected nodes η_t = (1/N) Σ_{n=0}^{N} n p(n,t) has reached a metastable state (fig. 3) [61].
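the reduced system can be integrated directly; below is a minimal pure-python sketch using a fixed-step euler scheme as a stand-in for the standard differential-equation solvers mentioned above (function names and the step size are our choices):

```python
def master_rhs(p, N, beta, mu):
    """right-hand side of the reduced master equation on the complete
    graph: dp(n)/dt = beta*(n-1)*(N-n+1)*p(n-1) + mu*(n+1)*p(n+1)
                      - (beta*n*(N-n) + mu*n)*p(n)."""
    dp = [0.0] * (N + 1)
    for n in range(N + 1):
        flow_out = (beta * n * (N - n) + mu * n) * p[n]
        flow_in = 0.0
        if n >= 1:
            flow_in += beta * (n - 1) * (N - n + 1) * p[n - 1]
        if n <= N - 1:
            flow_in += mu * (n + 1) * p[n + 1]
        dp[n] = flow_in - flow_out
    return dp

def integrate_master(p0, N, beta, mu, t_end, dt=1e-3):
    """fixed-step euler integration of the reduced master equation."""
    p = list(p0)
    for _ in range(int(round(t_end / dt))):
        dp = master_rhs(p, N, beta, mu)
        p = [pi + dt * dpi for pi, dpi in zip(p, dp)]
    return p
```

since the inflow and outflow terms balance when summed over n, the total probability is conserved by the scheme up to floating-point error, a useful sanity check on any implementation.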
at t = 3 we empirically construct p(n,3) from the synchronous updating simulations and compare it to p(n,3) calculated from the master equation (9). the histogram of fig. 3(b) shows this comparison. from this histogram, it is clear that while the discrete-time simulations are quite accurate for small Δt, this accuracy can fully break down when Δt is too large. the accuracy of the probability distribution in the metastable state depends highly on the value of the time step used to reach the metastable state. in the synchronous updating simulations with Δt = 1 the results are highly inaccurate, with all of the probability concentrated on n = 9, i.e., p(n,3) = δ_{n,9}. even the case Δt = 0.1, while fairly accurate, shows discrepancies in both the probability distribution p(n,3) and the expected fraction of infected nodes η_t [fig. 3(a)]. considering that the error between μ̂ (β̂) and 1 − e^{−μΔt} (1 − e^{−βΔt}) is less than 0.005 for μ = β = 1 and Δt = 0.1 (fig. 2), we conclude that these discrepancies are due to the simultaneous state changes in synchronous updating, which are uncharacteristic of the continuous-time process. in the histogram of fig. 3(c) we compare p(n,3) constructed empirically from the gillespie algorithm to p(n,3) calculated from the master equation. the gillespie algorithm is extremely accurate and matches the exact p(n,t) to a high degree of precision. furthermore, this algorithm is computationally rapid. we performed a short comparison of the simulation algorithms in terms of speed, showing in table i the simulation run times for the 10^6 realizations for the gillespie algorithm and for synchronous updating with various values of Δt. for Δt = 0.01, corresponding to the simulations which most closely match the accuracy of the gillespie simulations, the gillespie algorithm is an order of magnitude faster.
this computational speed, along with the natural precision of the algorithm, makes the gillespie algorithm an optimal algorithm for simulating continuous-time dynamics. to summarize, the accuracy of discrete-time approximations to continuous-time dynamics depends highly on the size of the discrete time step Δt at which the system evolves. this has extremely important implications for real-world predictive models of epidemic spread that are discrete-time based [31, 32], as overly large time steps can affect the prediction of both the expected evolution of a contagion [fig. 3(a)] and the full probability distribution of outcomes [fig. 3(b)]. in the next section, we fix the time step at Δt = 1 and show how the accuracy breaks down when the infection and recovery rates are too large, showing that discrete-time formalisms using this approach are limited in the ranges of the rate parameters that they can study and thus in their ability to match continuous-time dynamics. as mentioned in sec. ii b, synchronous updating has the same characteristics as discrete-time systems, which are characterized by transition probabilities and difference equations of the form

p(y, t + Δt) = Σ_{y' ∈ Ω} q_{y'→y} p(y', t),   (10)

where p(y, t + Δt), the probability that the system is in state y at time t + Δt, is a function of the probabilities p(y',t) for all possible states y' in the sample space Ω. on the other hand, continuous-time systems are characterized by transition rates and differential equations of the form given by the master equation (4). although the discrete-time formulation coincides with the continuous-time one in the limit Δt → 0, the dynamics will differ for noninfinitesimal Δt. issues then arise when comparing discrete and continuous-time systems, and the choice of numerical scheme becomes important. we illustrate this now with an example from the literature, while also showing how the accuracy of discrete-time approaches with Δt = 1 can be insufficient for large values of the transition rates.
a prominent current strand of research is the behavior of the sis model on infinite networks with power-law degree distributions [36, 49, 62-66]. in ref. [36], chakrabarti et al. introduced the nonlinear dynamical systems (nlds) theory, a discrete-time approach to sis modeling with a set of mean-field difference equations, one for each node i (0 ≤ i ≤ N), where p_{i,t+1} is the probability that node i is infected at time t + 1. they compare their results to two continuous-time formulations, the heterogeneous mean-field (hmf) approach of pastor-satorras and vespignani [49] and the kephart-white (kw) approach [67]. the basis of the comparison is synchronous updating numerical simulations with a time step Δt = 1, and it is found (see, for example, fig. 4 of ref. [36]) that the nlds theory is much closer to the numerical simulations than both the hmf and kw theories. however, the comparison of discrete-time and continuous-time formulations in this manner is biased. synchronous updating with a time step Δt = 1 is the correct procedure for numerically simulating discrete-time dynamics. on the other hand, to simulate continuous-time dynamics, either synchronous updating with a vanishingly small time step or a continuous-time simulation scheme such as the gillespie algorithm should be used. to illustrate the difference resulting from the use of the different updating methods, we reproduce an example from ref. [36]. the example is sis dynamics on an erdős-rényi network of 1000 nodes and mean degree k = 4. figure 4 shows various numerical simulations of these dynamics. again, the computer code used to perform the simulations is available from ref. [60]. included in fig. 4 are synchronous updating simulations with a time step Δt = 1, as in ref. [36], along with synchronous updating simulations with a small time step Δt = 0.01 and gillespie algorithm simulations. in fig. 4(c) of ref.
[36], where μ = 0.72 and β = 0.2, it can be seen that the fraction η of infected nodes in the metastable state given by the nlds theory matches very closely the synchronous updating numerical simulations. however, as can be seen in fig. 4(b) here, these synchronous updating simulations differ quite significantly from continuous-time simulations, which plateau at the metastable state with η ≈ 0.04. the kw theory, which in ref. [36] is rejected as being inaccurate, actually converges to a value much closer to the continuous-time simulations than the nlds theory. thus, using the correct simulation technique, the conclusions in ref. [36] should be reversed: the kw model is more accurate than the nlds model. for fixed Δt = 1, the accuracy of the discrete-time approach decreases as μ and β increase. in the example above, when μ is decreased from μ = 0.72 to μ = 0.48, the discrete-time simulations match the continuous-time simulations relatively more closely [fig. 4(a)], while when μ is decreased further to μ = 0.24, the discrepancy between the two simulations is negligible. chakrabarti et al. state that their model "outperforms (the kw model) when μ is high." however, the opposite is the case: their discrete-time approach breaks down in accuracy (as an approximation to the continuous-time process) as μ increases. we conclude with an observation to motivate the next section. as μ is increased from μ = 0.48 [fig. 4(a)] to μ = 0.72 [fig. 4(b)], the fraction of infected nodes in the metastable state η (at, for example, t = 100) decreases for both the continuous-time and discrete-time simulations. however, η decreases more quickly for the continuous-time simulations, and so it would seem that the critical value μ_c at which η first becomes zero will be different depending on whether a discrete-time or continuous-time approach is used. this has implications for the epidemic threshold, which is the focus of the next section.
a characteristic of sis dynamics is the occurrence of phase transitions as the effective transition rate γ is varied. recall that the effective transition rate is defined as the ratio of the infection rate to the recovery rate, i.e., γ = β/μ. depending on the structure of the network and on whether the network is finite or infinite, the critical point, or epidemic threshold, γ_c between different phases can vary. as mentioned in sec. iv, there are still remaining questions about the steady-state behavior of the sis model, particularly the value of the epidemic threshold on such networks, and so a good understanding of how different approximations affect the value of the epidemic threshold is important. in this section, we show that although the epidemic threshold is defined in terms of the ratio γ = β/μ = β̂/μ̂, the individual values of the transition probabilities β̂ and μ̂ used in discrete-time approaches affect the value of the epidemic threshold when it is calculated by (i) performing discrete-time numerical simulations or (ii) iterating a discrete-time system [such as eq. (10)] from a set of initial conditions (as, for example, in ref. [37]). note, however, that the epidemic threshold predicted by steady-state analysis [i.e., setting p_{t+1} = p_t in eq. (11)], such as in ref. [36], is completely valid. we show how μ̂ and β̂ affect the value of the epidemic threshold in the following manner. for a given network, we fix the value of μ̂ and vary β̂ so that the effective transition rate γ varies between γ_min and γ_max, where γ_min and γ_max are chosen so that the epidemic threshold lies between them, i.e., γ_min ≤ γ_c ≤ γ_max. thus when μ̂ is small (large), β̂ will be small (large) so that their ratio lies in the range γ_min ≤ β̂/μ̂ ≤ γ_max. we perform standard synchronous updating simulations (with Δt = 1) and obtain the critical value γ_c as the smallest value of γ such that the fraction of nodes in the metastable state is nonzero.
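the dependence of the iterated metastable state on the individual sizes of μ̂ and β̂ at fixed γ can be illustrated with a toy calculation. the sketch below forward-iterates a simple degree-k mean-field map; this map and its parameters are an illustrative stand-in we chose for the purpose, not the nlds equations or the network simulations reported here:

```python
def iterate_mean_field(k, beta_hat, mu_hat, rho0=0.5, steps=2000):
    """forward-iterate the degree-k mean-field discrete-time map
       rho <- rho*(1 - mu_hat) + (1 - rho)*(1 - (1 - beta_hat*rho)**k)
    from rho0 and return the long-time (metastable) infected fraction."""
    rho = rho0
    for _ in range(steps):
        rho = rho * (1.0 - mu_hat) + (1.0 - rho) * (1.0 - (1.0 - beta_hat * rho) ** k)
    return rho
```

with the ratio γ = β̂/μ̂ = 0.5 held fixed on a k = 4 mean field, `iterate_mean_field(4, 0.05, 0.1)` and `iterate_mean_field(4, 0.5, 1.0)` settle at clearly different values, while below the mean-field threshold γ = 1/k the iteration decays to zero; the iterated long-time state depends on the individual transition probabilities, not only on their ratio.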
if the epidemic threshold depends only on the ratio γ = β̂/μ̂ and is independent of the individual values of μ̂ and β̂, then γ_c should be the same regardless of the value of μ̂, which is fixed. however, we find that this is not the case. we perform this experiment on an erdős-rényi network with N = 10^4 nodes and mean degree k = 4, similar to the network used in the example of sec. iv (fig. 5) but with a greater number of nodes. on such a network, the epidemic threshold is predicted by steady-state analysis of both the nlds and hmf theories as γ_c = 0.25. from fig. 5 we see that when μ̂ is small (μ̂ = 0.1), the epidemic threshold predicted by synchronous updating simulations corresponds to this value γ_c = 0.25. however, as μ̂ (and thus β̂) increases, the accuracy of the discrete-time approach breaks down and both the fraction of infected nodes in the metastable state and the epidemic threshold deviate from the true values. the epidemic threshold decreases from γ_c = 0.25 when μ̂ = 0.1 to γ_c = 0.2 when μ̂ = 1, even though the ratio γ = β̂/μ̂ remains in the same range. thus, in discrete-time formalisms the steady-state behavior is not fully determined by the effective transition rate γ but also depends on μ̂ and β̂. from our analysis in sec. iii (fig. 3) it is clear that the metastable state reached iteratively from an initial condition depends on the single-step transition probabilities μ̂ and β̂. if these are too large, the errors introduced in the discrete-time approximation become significant, affecting the metastable state and the value of the epidemic threshold. the results of this section have important implications for discrete-time approaches. first, they show that the epidemic threshold calculated empirically using synchronous updating simulations can be incorrect if μ̂ and β̂ are too large, even if the ratio between them is small.
second, they have implications for calculating the epidemic threshold from discrete-time systems of the form p_{t+1} = f(p_t) by forward-iterating the system from an initial condition [37]. if the transition probabilities μ̂ and β̂ used in such systems are too large, then the metastable state will be affected, possibly leading to a miscalculation of the epidemic threshold. in this paper, we have provided conclusive evidence of the limitations of discrete-time approaches as approximations to continuous-time contagion processes. when the state transition probabilities are too large, such approaches become inaccurate and misrepresentative of the underlying continuous-time processes, thus compromising their utility and their applicability to prediction and analysis. our message is clear: due care needs to be taken when implementing discrete-time methods as approximations to continuous-time dynamical processes. being constructive, we have briefly discussed alternatives. for simulations of continuous-time processes on networks, event-based simulations such as the gillespie algorithm are more favorable than synchronous updating schemes both in terms of accuracy and speed. for theoretical analysis, continuous-time analogs [63, 68] of discrete-time approaches should be employed, because they are unconstrained in the range of dynamics parameter values that can be studied.
advances in network analysis and its applications
performance analysis of complex networks and systems
dynamical processes on complex networks
the theory of stochastic processes
proceedings of the 22nd international symposium on reliable distributed systems
the mathematical theory of infectious diseases and its applications
stochastic interacting systems: contact, voter and exclusion processes
handbook of stochastic methods, springer series in synergetics
interacting particle systems
we refer to the metastable state as the state when the expected fraction of infected nodes has reached a plateau
proceedings of the 1991 ieee computer society symposium on research in security and privacy
this work has been supported by science foundation

key: cord-157736-n1cwg58b authors: bernini, antonio; bonaccorsi, lorella; fanti, pietro; ranaldi, francesco; santosuosso, ugo title: use of it tools to search for a correlation between weather factors and onset of pulmonary thromboembolism date: 2020-08-11 journal: nan doi: nan sha: doc_id: 157736 cord_uid: n1cwg58b

pulmonary embolism (pe) and deep vein thrombosis (dvt) are grouped under venous thromboembolism (vte), which represents the third most frequent cardiovascular disease. recent studies suggest that meteorological parameters such as atmospheric pressure, temperature, and humidity could affect pe incidence, but the relationship between these two phenomena is still debated and the evidence is not completely explained. the clinical experience of the department of emergency medicine at aouc hospital suggests the possibility that a relationship effectively exists. we have collected data concerning the emergency medicine unit admissions of pe patients to verify this hypothesis. at the same time, atmospheric parameters were collected from the lamma consortium of tuscany region. we have implemented new it models and statistical tools that use high-resolution, semi-hourly weather records to process the dataset.
we have carried over tools from econometrics, such as moving averages, and we have studied anomalies through the search for peaks and possible patterns. we have created a framework in python to represent and study time series and to analyze data and plot graphs. the project has been uploaded on github. our analyses highlighted a strong correlation between the moving averages of atmospheric pressure and those of the number of hospitalizations (r = -0.9468, p < 0.001), although causality is still unknown. we also detected an increase in the number of hospitalizations in the days following short-to-medium periods of time characterized by a high number of half-hourly pressure changes. the study of spectrograms obtained by the fourier transform requires a larger dataset: the analyzed data (especially hospitalization data) were too few to carry out this kind of analysis. we have represented and studied time series by developing a python project to analyze data and plot graphs; the project has been uploaded on github.

aim of the study. the aim of our study is to describe a method to evaluate the relationship between spikes in atmospheric parameters and the admissions to the emergency unit at careggi hospital of subjects diagnosed with pe. to set up the study, we have retrospectively collected clinical data of subjects diagnosed with pe among 2016 hospital admissions. we have set up a database characterized by the demographic and pathological parameters of the subjects. in another database, the meteorological data of the atmospheric parameters for the investigated period of time have been collected.

results. to approach this issue we have used approaches from econometrics, chronobiology (time series analysis) and pattern recognition. the moving average of the time series of daily hospitalizations, on an annual window, has evidenced an increasing trend. conversely, the annual moving average of pressure values has a decreasing trend.
our analyses highlighted a strong correlation between the moving averages of atmospheric pressure and those of the number of hospitalizations, although causality is still unknown. we also detected an increase in the number of hospitalizations in the days following short-to-medium periods of time characterized by a high number of half-hourly pressure changes: the average number of pressure changes over 4 days of half-hourly records, divided by the number of hospitalizations on the 4th day, shows an increase of 8.66% in 2016; over the whole studied period this increment was 3.18%. the study of spectrograms obtained by the fourier transform requires a larger dataset; the analyzed data (especially hospitalization data) were too few to carry out this kind of analysis.

conclusions. in conclusion, our results evidence a correlation between pulmonary embolism and meteorological parameters, in particular atmospheric pressure (r = -0.9468, p < 0.001), which is more relevant than temperature and wind speed. further data collection will extend the investigated time span and the number of enrolled subjects, treating the information through more complex computational approaches to confirm our results. studying the seasonality of certain diseases and the possible influence of weather factors on their onset and mortality is a current and much debated topic, also in light of recent studies done in this sense on the spread of covid-19 [1-3]. in particular, some recent studies seem to suggest the existence of a correlation between the onset and mortality of deep vein thrombosis and/or pulmonary embolism, two diseases strongly linked together (see 9), and weather factors; however, since the studies are recent, conclusions are often not statistically significant or in contrast with each other. indeed, some analyses highlight an increase in cases during spring months, which are characterized by a lower atmospheric pressure [4], while other analyses find peaks of cases in winter [5].
anyway, most studies agree that some kind of correlation probably exists, but more investigation is needed [6]. the clinical experience of the department of emergency medicine at aouc hospital seems to suggest that a connection between the number of hospitalizations for pulmonary embolism and weather factors actually exists. from this comes this research, which has the purpose of using statistical tools and models to verify this correlation and, if present, to describe it. besides attempting methods already used by other researchers, we also tried to tackle the problem with different and new tools. in particular, we found that much of the existing literature tends to focus on studying annual and monthly means, which nevertheless have the defect of flattening values, leading to a loss of information. therefore we tried to study the problem with a higher resolution of data, using daily and semi-hourly means, the latter obtained thanks to a concession granted by the laboratory for meteorology and environmental modelling of the lamma consortium of tuscany region. we have also used tools from econometrics, such as moving averages, and we have studied anomalies through the search for peaks and possible patterns. everything has been realized using python. we have created a framework to represent and study the time series. it can be used for future developments of this study or to analyze other data regarding other diseases, both pulmonary and not, or statistical samples from different sources.

2 software tools and preliminary data analysis

we have created a python project and used it to analyse data and plot the graphs shown in this document. the project uses the pyplot library [7] and can be consulted and downloaded on github [8]. for this study we crossed data from different sources. daily hospitalization data were obtained from the medical records of 476 cases of pulmonary embolism in the period 2016-2018, which aouc hospital gave us.
the corresponding time series is shown in figure 1 . daily weather data (in particular atmospheric pressure; minimum, maximum and average temperature; and minimum, maximum and average wind speed) for the same period were initially obtained from meteo.it [9] . the corresponding time series is shown in figure 2 . later, since we needed half-hourly rather than daily values, we used data provided by the lamma consortium. these data are publicly available, only in the form of graphs, at [8, lamma.txt] . for the narrow range without data (from august 25th 2018 to august 29th 2018) we manually assigned the standard value 1000 mbar. since we worked most of the time with meteo.it's data, as initially we did not need half-hourly means, in the rest of the document weather data always refer to that source, unless otherwise specified. as a first approach, the autocorrelation of the daily values of atmospheric pressure and, separately, of the number of hospitalizations was investigated. the tool used to represent and study autocorrelation is the correlogram, also known as the autocorrelogram. it is a graph constructed by examining the correlation between a time series and several delayed copies of it: the time series y_t = y_1, ..., y_T is correlated with its translations of amplitude k. in our case, k assumes the values 0, 1, ..., 10. for every lag k, the autocorrelation index is calculated as r_k = Σ_{t=k+1}^{T} (y_t − ȳ)(y_{t−k} − ȳ) / Σ_{t=1}^{T} (y_t − ȳ)², with ȳ representing the arithmetic mean of the values of the starting time series, not translated. the correlogram is then constructed by reporting in a cartesian chart the pairs of values (k, r_k). as can be seen in figure 3 , the correlogram of pressure looks flat, with a monotonic trend which is highlighted by the autocorrelation of the normalized time series (figure 4 ). for details about the normalization of a time series, see section 8.3.1.
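the correlogram computation just described can be sketched in a few lines (the function name here is hypothetical; the project's actual implementation is in [8, auto_correlation.py]):

```python
def correlogram(y, max_lag=10):
    """autocorrelation coefficients r_k for lags k = 0, ..., max_lag:
    the series is correlated with its translation of amplitude k,
    using the mean ybar of the full, untranslated series."""
    ybar = sum(y) / len(y)
    denom = sum((v - ybar) ** 2 for v in y)
    out = []
    for k in range(max_lag + 1):
        num = sum((y[t] - ybar) * (y[t - k] - ybar) for t in range(k, len(y)))
        out.append(num / denom)
    return out
```

plotting the pairs (k, r_k) with pyplot then gives correlograms like those in figures 3, 4 and 5.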
instead the correlogram of hospitalizations, figure 5 , is more variegated, with a peak at 0. this is a predictable result: as regards hospitalizations, the past has almost no influence on the present and the accidental component prevails. in the pressure values, on the contrary, the trend component prevails, with the present being strongly influenced by the past, in particular the nearest past, with decreasing influence as we move backwards, as evidenced by the decreasing correlogram [11] . for the algorithm used, see [8, auto_correlation.py] . since the graph in figure 2 shows peaks of annual seasonality, in an attempt to seasonally adjust the series, its moving average was computed on a 365-day window. the resulting time series, shown in figure 6 , has a decreasing trend. using the data provided by lamma, a similar graph is obtained, so this time series can be considered completely reliable. the observed trend is not surprising: barometric variations of this amplitude are in fact normal over the time spans considered. by taking the moving average of the time series of daily hospitalizations, again on an annual window, we obtain a graph ( figure 7 ) with an increasing trend, which cannot be ignored in the analysis of the results obtained later. some hypotheses about the cause of this growing trend include: • improvement of diagnostic tools; • increased awareness of the pathology, with a consequent increase in the frequency of diagnosis; • increased prescription of drugs with pulmonary embolism as a side effect; • an increase in triggering factors such as atmospheric phenomena and/or polluting agents. note that only the last of the hypotheses formulated above can lead to finding a correlation between the data in our possession.
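a minimal sketch of the trailing moving average used for the seasonal adjustment (the window is 365 days in the text; the standalone function name is hypothetical):

```python
def moving_average(values, window=365):
    """trailing arithmetic moving average; the output has
    len(values) - window + 1 points."""
    acc = sum(values[:window])
    out = [acc / window]
    for i in range(window, len(values)):
        acc += values[i] - values[i - window]  # slide the window in o(1)
        out.append(acc / window)
    return out
```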
if, on the other hand, the cause were one of the other hypothesised phenomena (or of yet another nature), the growing trend found would make the data less "clear" for the purposes of the study, constituting an obstacle to the data analysis. 3 search for a correlation between atmospheric pressure and number of hospitalizations the most immediate tool for finding a correlation is the scatter plot; however, if used alone, it can be conditioned by subjective interpretations. for this reason, the use of this tool has been accompanied by two of the most widely used correlation coefficients. the pearson coefficient is defined as r = Σ_i (x_i − m_x)(y_i − m_y) / √( Σ_i (x_i − m_x)² · Σ_i (y_i − m_y)² ), where x and y represent the values assumed by the two random variables x and y whose correlation we are studying, while m_x and m_y represent their averages. the p value associated with the calculation of the pearson coefficient indicates the probability that a randomly generated system of variables has a degree of correlation at least equal to that of the system examined. note however that the calculation of p is fully correct only under the assumption that all the variables examined have a normal distribution, which is not always ensured in the analyses made below, which is why the reported values of p could be subject to errors [12] . the spearman's rank correlation coefficient, or spearman's coefficient, indicates the degree of monotonic correlation between two random variables [13] . like the pearson coefficient, it assumes values in the range [−1, 1], where 0 indicates the absence of a monotonic correlation, while −1 and 1, on the contrary, indicate an exact monotonic correlation, negative in the first case and positive in the second. it is defined as a particular case of the pearson coefficient in which the values are converted into ranks before computing the coefficient [14] .
compared to the pearson coefficient, it has the advantage of also detecting non-linear (albeit monotonic) correlations, and of not requiring normality of the distribution of the variables for the calculation of the associated p value, which is however sufficiently reliable only for data sets of at least 500 values [12] , a hypothesis that, unlike what happened for the pearson coefficient, is always satisfied in the present study. a first approach in the search for a correlation between the time series of hospitalizations and that of atmospheric pressure was to generate a scatter plot (figure 8 ), which however did not highlight anything in particular, showing at first sight a fairly random relationship between the data. something that, at least apparently, seems more relevant is shown in figure 9 , where a scatter plot between the time series of hospitalizations and the time series of the day-to-day variation of the pressure values has been generated. the resulting graph appears to deviate slightly more from a completely random distribution, with the upper left part of the figure completely empty; however, computing the pearson and spearman coefficients produced no relevant results. for the algorithm used, see [8, corr_pressure.py] . note that in both figures 8 and 9 many of the points actually represent multiple occurrences, not visible because they overlap. correlating the seasonally adjusted time series mentioned in section 2.3.2, the result obtained is completely different. the dispersion graph in figure 10 shows a negative linear correlation, confirmed by the pearson (−0.9468, p = 0.0) and spearman (−0.9324, p = 0.0) coefficients, both close to −1. this result is in line with studies in the literature which show a higher incidence of pulmonary embolism in months characterized by low atmospheric pressure [4] .
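both coefficients can be computed directly from their definitions; the sketch below (hypothetical names; in practice scipy.stats offers pearsonr and spearmanr) makes the rank-based definition of spearman's coefficient explicit:

```python
def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman(x, y):
    # spearman = pearson computed on ranks (ties get the average rank)
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            for t in range(i, j + 1):
                r[order[t]] = (i + j) / 2 + 1  # 1-based average rank
            i = j + 1
        return r
    return pearson(ranks(x), ranks(y))
```

for a monotonic but non-linear relationship, spearman's coefficient reaches 1 while pearson's stays below it.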
in general, the negative correlation increases as the width of the moving average window increases; this is highlighted by the graphs in figure 11 (notice that graph (d) is the same as shown in figure 10 ) and by the correlation coefficients in table 1 . for the algorithm used, see [8, corr_mobile_means.py] . although a correlation between the seasonally adjusted values is undeniable, causality is not necessarily so. in fact, we have already discussed in section 2.3.2 the possible causes of the increase in hospitalizations, and several have been hypothesized that have nothing to do with atmospheric pressure. investigations on this causality and possible explanations will be the subject of future research in the context of a collaboration with biomedical and experimental doctors. given the nature of the biological phenomenon that causes pulmonary embolisms and the possible correlation with environmental factors (see appendix b), it is reasonable to consider the analysis of the peaks of the time series of such data, and of their patterns, as a valid tool to look for some kind of correlation between the data. it was initially thought that a sudden pressure jump could cause the detachment of the thrombus with the consequent appearance of the embolus (see b.1). let a = a_0, a_1, ..., a_n be a time series, let w ∈ ℕ with w > 0 and f ∈ ℝ with f > 0, and let mean : ℝ^w → ℝ and std : ℝ^w → ℝ denote the arithmetic mean and the standard deviation functions, respectively. we define the set {p+} of positive peaks as {p+} = { a_i : a_i − mean(a_{i−w+1}, ..., a_i) > f · std(a_{i−w+1}, ..., a_i) }; similarly, the set {p−} of negative peaks is {p−} = { a_i : mean(a_{i−w+1}, ..., a_i) − a_i > f · std(a_{i−w+1}, ..., a_i) }. once the time series has been transformed into a peak time series, by simply setting to 1 and −1 the values that are, respectively, positive or negative peaks, and to 0 all other values, it is possible to obtain a time series of patterns.
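the peak definition can be sketched as follows (hypothetical function name; the window and threshold semantics are inferred from the worked example with w = 7 given in the text, so treat the details as an assumption):

```python
def peak_series(a, w=7, f=1.0):
    """mark each value as +1 / -1 / 0: a_i is a positive peak when it
    exceeds the mean of its w-value window (ending at a_i) by more than
    f times that window's standard deviation, and symmetrically for
    negative peaks. the first w - 1 values carry no marker."""
    out = []
    for i in range(w - 1, len(a)):
        win = a[i - w + 1:i + 1]
        m = sum(win) / w
        s = (sum((x - m) ** 2 for x in win) / w) ** 0.5
        if a[i] - m > f * s:
            out.append(1)
        elif m - a[i] > f * s:
            out.append(-1)
        else:
            out.append(0)
    return out
```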
this is achieved by analyzing all the consecutive n-tuples (with n equal to the length of the considered pattern) of the peak series (which is a sequence of −1, 0 and 1 values), and setting 1 for each occurrence of the pattern, 0 otherwise. note that, since w = 7 and n = 3, the occurrence of a pattern involves an interval of 9 consecutive days: the day the pattern is found and the 8 previous days. figures 12, 13 and 14 , obtained by running [8, pattern_hospitalizations.py] , show a comparison between the incidence of some patterns of the atmospheric pressure peaks and the number of daily hospitalizations. the chosen patterns are (1, −1, 1), (1, 0, 1), and (1, 0, −1), since they reveal the greatest pressure changes in restricted periods. notice that the (1, −1, 1) pattern never occurs, from which we deduce that in the geographical area of study the atmospheric pressure does not change too quickly, or at least it did not in the period of interest. as an example, consider the following values: {900, 900, 900, 900, 900, 900, 1020, 900, 1000, 900}. with w = 7, the corresponding sequence of moving averages is {917.14, 917.14, 931.43, 931.43}. we then calculate the absolute value of the difference between the last value of each window and its moving average, obtaining {102.86, 17.14, 68.57, 31.43}; comparing each difference with f times the standard deviation of its window (for instance, with f = 1 the thresholds are about 41.99, 41.99, 49.98 and 49.98) yields the peak series {1, 0, 1, 0}, so if we are looking for the pattern (1, 0, 1) we obtain the following pattern time series: {1, 0}. note that the same result would have been obtained even if the values 1000 and 1020 had been swapped. at least with the data in our possession, it is difficult to describe a relationship between the occurrence of a certain pattern (represented in the graphs by vertical orange segments) and the number of hospitalizations in the following days. for this reason, in order to better study the effect of pressure surges on the number of hospitalizations, we also used other tools (described in section 6.1), which required half-hourly pressure data.
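the pattern search itself is a direct sliding comparison over the peak series (hypothetical function name; the project's version is [8, pattern_hospitalizations.py]):

```python
def pattern_occurrences(peaks, pattern):
    """1 at each position where the n-tuple pattern occurs in the
    peak series (a sequence of -1, 0 and 1 values), 0 otherwise."""
    n = len(pattern)
    return [1 if tuple(peaks[i:i + n]) == tuple(pattern) else 0
            for i in range(len(peaks) - n + 1)]
```

on a peak series {1, 0, 1, 0}, searching for (1, 0, 1) yields the pattern time series {1, 0}, matching the worked example in the text.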
almost all parameters are easily modifiable, in particular: • the meteorological data to be considered; • the length of the intervals used to generate the time series of the variations; • the w and f values for the search for peaks; • the length of the maximum translation between the time series of hospitalizations and the peak time series; • the pattern of peaks to search for. with the configuration we used to run our algorithm, the correlations examined amounted to 138. the algorithm then returns, for the series which after correlation with daily hospitalizations has the highest pearson index and for the one with the highest spearman index (both in absolute value), a comparison graph between the time series, the dispersion graph and the correlation indices. since neither index reached the value 0.2, the results are omitted as irrelevant. however, this failed attempt motivates the choice to continue the investigation focusing only on the atmospheric pressure values, since neither temperature nor wind speed seems to have a particular correlation with the number of hospitalizations. in this study, the pressure measurements show that small, short and continuous pressure variations are present. in order to study the effects these variations could have on the incidence of pulmonary embolism cases, it was no longer sufficient to analyse the daily averages of atmospheric pressure. for this reason an analysis was carried out using pressure values recorded every half an hour (i.e. 48 daily records, during the period 2016-2018; data provided by the lamma consortium). in accordance with the hypothesis that pressure changes are determinants for human health, the total pressure variation during the day was calculated for each day in the sample interval. this methodology was found to be more effective than analyzing peaks and patterns. the choice of this methodology was also due to empirical observations.
imagine having a glass tube full of fluid with an impurity not occluding the lumen of the vessel, for example a gas bubble. to detach the impurity from the wall and then remove it, it is more effective to apply a series of repeated and delicate taps to the tube, rather than a single strong blow. by analogy, it was thought that a series of repeated, small and rapid changes in pressure was a more probable cause of the detachment of a thrombus from the venous wall than a single strong surge. the calculation of the variations has been done in the following way: for every day d, given the half-hourly values d_1, d_2, ..., d_48, the daily pressure change ∆_d has been calculated as ∆_d = Σ_{i=1}^{47} |d_{i+1} − d_i|. a graph has then been drawn showing the averages of the variations ∆_d as a function of the number of daily hospitalizations, on the same day and in the three preceding days. initially only the period corresponding to the year 2016 was considered as a test data set. the result is in figure 15 (the size of the points in the graph is directly proportional to the number of occurrences). this graph shows that, in the periods immediately preceding days with two or more hospitalizations, the average pressure variation ∆_d is 8.66% higher than the average variation over the whole period considered. this result seems to confirm the hypothesis that a greater number of pressure changes corresponds, in the short term, to an increase in cases of pulmonary embolism. for better confirmation, the procedure was repeated on all the data in our possession, that is on the whole 2016-2018 period. the analysis we have done seems to confirm only partially what was found: the graph shown in figure 16 is more flattened, and shows an increase in the variation in the periods before the days with more hospitalizations equal to only 3.18%. all this is implemented in the script [8, daily_variation.py] .
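the daily variation ∆_d reduces to a one-liner (hypothetical function name; the project's script is [8, daily_variation.py]):

```python
def daily_variation(halfhourly):
    """total daily pressure change: the sum of absolute differences
    between consecutive half-hourly records (48 per day)."""
    return sum(abs(halfhourly[i + 1] - halfhourly[i])
               for i in range(len(halfhourly) - 1))
```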
only the graphs obtained by analyzing the averages on the time interval that produced the best results over the whole period 2016-2018, i.e. four days, have been reported. for completeness, the results obtained for some of the other ranges analysed are given in table 2 (percentage increase in pressure changes in the periods preceding days with two or more hospitalizations, in relation to the average of the variations over the whole period). in particular, it shows the percentage increase of the average pressure changes in the periods before the days with two or more hospitalizations compared to the average over the whole period considered. although with more or less incisive results, it should be noted that there is an increase independent of the amplitude of the range and the length of the sample analysed. finally, it can be observed that, by extending the length of the sample to three years, the increase is always scaled down. having increased, at least for the pressure, the sampling frequency, we now have a much larger data set. this allowed us to try to apply the fast fourier transform to calculate the spectrograms of the time series of pressure and of hospitalizations [12] . in order to have two series of equal length and to compare the spectrograms more easily, the series of hospitalizations was also expanded, by a formal artifice, to half-hourly samples. since the information on the distribution of hospitalizations within the day was unknown, a homogeneous distribution was assumed: each daily observation was divided into 48 half-hourly observations, each of 1/48 of the daily hospitalizations. the two spectrograms obtained are those in figures 17 and 18 , where the frequency on the x-axis is expressed in 1/days. note that, to make the graphs more readable, in both figures the value of the first frequency (0 hz) has been manually set to 0.
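the expansion of the daily hospitalization series to half-hourly samples and the magnitude spectrum can be sketched as below (hypothetical names; a naive dft is shown here only for self-containment, whereas in practice numpy.fft.rfft would be used):

```python
import cmath

def expand_daily_to_halfhourly(daily):
    # spread each daily count uniformly over 48 half-hour slots
    return [v / 48.0 for v in daily for _ in range(48)]

def amplitude_spectrum(x):
    """naive dft magnitude spectrum; the dc component (frequency 0)
    is zeroed, as in figures 17 and 18."""
    n = len(x)
    amps = []
    for k in range(n // 2 + 1):
        s = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        amps.append(abs(s))
    amps[0] = 0.0
    return amps
```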
this was done because, due to the properties of the treated signals, it would normally have a very high value. to see the algorithm used in detail and the graphs with the first frequency values left unchanged, see [8, fft.py] . no relationship or analogy seems to emerge by comparing the two time series in the frequency domain. this could also be due to the method by which the series of hospitalizations was expanded; therefore, no particular conclusion can be drawn with a sample of this size. many of the attempts we made led to no statistically significant results, thus confirming the difficulty of the existing literature in giving a definite answer on the subject. despite this, the research carried out highlighted the strong correlation between the moving averages of atmospheric pressure and those of the number of hospitalizations discussed in section 3.2. the existence of this correlation, from the results obtained, is undeniable for the sample studied. as already mentioned, however, a possible causality between the time series of pressures and that of hospitalizations is far from certain. this is due to the fact that the possible factors that could have caused the increase in the number of hospitalizations are manifold, and many of them probably do not concern meteorological factors, such as, for example, the improvement of diagnostic tools. for this reason, the result obtained is solid, but it is necessary to start more studies in this field, analyzing different samples. it would be interesting to see if, in a geographical area ( both the same and other areas ) which shows, during a period of time similar to the one studied ( 3 years ), an increase in the annual moving average of atmospheric pressure, a decrease in the moving average of the number of hospitalizations occurs. in this case, many of the causes of another nature could be excluded, thus obtaining a more certain answer.
such efforts should be a priority in any future developments given the severity of the disease and the difficulty of its diagnosis. another significant finding is the existence of an increase in the number of hospitalizations in the days following short-to-medium periods of time characterized by a high number of half-hourly pressure changes, observed in section 6.1. the results obtained seem to give credit to the hypothesis of considering the physical phenomenon of thrombus detachment as the effect of very small, recurring pressure variations. however, unlike the moving average result, this is not an unequivocal one, considering that the result over the whole period 2016-2018 is much more contained than the one related to 2016 only. in this sense it would be interesting to confirm or deny our results by studying what happens over longer periods of time, covering several years. the study of the spectrograms obtained by the fourier transform will undoubtedly be at the centre of future developments: the data in our possession (especially hospitalization data) were found to be too few to carry out analyses of this type, but better results could emerge by repeating the procedure on larger datasets. in conclusion, although further confirmation is needed, there seems to be some kind of correlation between pulmonary embolism and meteorological parameters; in particular, atmospheric pressure seems to be more relevant than temperature and wind speed, the latter being, moreover, strongly related to pressure variations. the min-max normalization is described by x'_i = a + (x_i − min) · (b − a) / (max − min), where x_i is a generic value of the random variable x, min and max are respectively the minimum and maximum values assumed by x, and [a, b] is the new range within which the values of the random variable will be scaled. this transformation is implemented in an efficient and intuitive way in the method normalise(self, feature_range), where the feature_range pair, equal to (0, 1) if not specified, indicates the extremes a and b.
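a sketch of the min-max normalization as described (the project's method is normalise(self, feature_range) on the series class; the standalone function here is a hypothetical simplification):

```python
def normalise(values, feature_range=(0, 1)):
    """min-max normalization: rescale values into [a, b]."""
    a, b = feature_range
    lo, hi = min(values), max(values)
    return [a + (x - lo) * (b - a) / (hi - lo) for x in values]
```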
variation_series(self, length: int, unsigned: bool), executed on an object of type series representing a time series of length n, returns an object of the same type representing a time series of length n − length + 1, where the value at instant t contains: • in the case unsigned = false (the default option), the difference between the first and the last value of the interval [t, t + length] of the starting time series; • in the case unsigned = true, the sum of the absolute values of the differences between each value of the starting time series and the next one, within the interval [t, t + length]. mobile_mean(self, window: int, ws: list), executed on an object of type series representing a time series of length n, returns an object of the same type representing a time series of length n − window + 1, where the value at instant t contains the moving average, weighted according to the weights contained in ws, over the interval [t, t + window]; if the weights are not specified, the arithmetic moving average is computed. eventuality(self), executed on an object of type series representing a time series of length n, returns an object of the same type representing a time series of the same length, where the value at instant t contains: • 0 if the starting time series is also 0 at instant t; • 1 otherwise. peakseries extends series, with the additional method pattern_series(self, pattern: tuple), generating the time series containing the occurrences of a given peaks pattern. objects of this class are generated by an object of type peakmaker by means of the method get_peaks(self). it is a class composed of two objects of the series class that provides different methods to calculate correlation indices between the two time series, to draw graphs of various types, or to translate one series with respect to the other.
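the semantics of variation_series can be illustrated with a standalone sketch (a simplification of the method described above, operating on a plain list instead of a series object):

```python
def variation_series(values, length, unsigned=False):
    """for each window of `length` consecutive values, return either the
    difference between its first and last value (unsigned=False) or the
    sum of absolute step-to-step changes inside it (unsigned=True);
    the output has n - length + 1 entries."""
    out = []
    for t in range(len(values) - length + 1):
        win = values[t:t + length]
        if unsigned:
            out.append(sum(abs(win[i + 1] - win[i]) for i in range(length - 1)))
        else:
            out.append(win[0] - win[-1])
    return out
```

with unsigned = true and a window of 48 values this reproduces, over a sliding window, the daily variation ∆_d of section 6.1.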
9 appendix b: definitions of medical and meteorological terms 9.1 medical terms a thrombus, colloquially called a blood clot, is a semisolid substance consisting of cells and fibrin that can be located anywhere in the circulatory system, such as arteries and veins, and is attached to the inner wall of the blood vessels. a clot is a healthy response to injury intended to prevent bleeding but, when clots obstruct blood flow through healthy blood vessels, they can become the leading cause of some severe pathologies, in which case we talk about a thrombus [16] . when thrombosis occurs within deep veins, usually at the level of the lower limbs, we speak of deep vein thrombosis (dvt) [17] . an embolism occurs when a thrombus detaches from the wall of the blood vessel to which it is attached. the thrombus, or some of its parts, can enter the blood circulation until it stops in a blood vessel smaller than the source vessel, reducing the blood supply to the downstream tissues [16] . pulmonary embolism (pe) is a blockage of an artery in the lungs by a clot that has moved from other districts of the body through the bloodstream (embolism). pe usually results from a blood clot in the leg that travels to the lung. the main signs of pe include low blood oxygen levels, rapid breathing and rapid heart rate, which cause circulatory and respiratory problems. in most cases, pe is preceded by deep vein thrombosis (dvt). pe and dvt share both risk factors and triggers; among these, advanced age and some pathological conditions have been found to be the main risk factors. in italy, pe occurs in one patient per 100000 and is the cause of about 15% of hospital deaths, rising to 30% if it is not treated correctly [18] . pulmonary embolism and deep vein thrombosis are two closely related pathological manifestations, which can be described by a single pathological process known as venous thromboembolism (vte) or thromboembolism [19] .
atmospheric pressure measures the total weight exerted on a horizontal unit surface by the air column above [20] . atmospheric pressure is measured with an instrument called a barometer. generally, atmospheric pressure is measured in atmospheres (atm) or millibars (mbar). however, neither of these two units of measurement is the one adopted by the international system of units, which instead adopts the pascal (pa) for the measurement of pressure [21] . in this study the adopted unit of measurement of atmospheric pressure is the millibar; one millibar corresponds to 100 pa. the average daily atmospheric pressure is the arithmetic mean of all atmospheric pressure values recorded over a full day, normally from 00:00 to 23:59. the recordings are made at regular intervals throughout the day, achieving a discretization of the measured signal. although atmospheric pressure values in a given area tend to remain the same over the long term, these pressure values can change from day to day or from month to month due to weather phenomena. different average values of annual pressure are also possible due to climate factors.
correlation between weather and covid-19 pandemic
humidity and latitude analysis to predict potential spread and seasonality for covid-19
high temperature and high humidity reduce the transmission of covid-19
barometric pressure and the incidence of pulmonary embolism
venous thromboembolism in denmark: seasonality in occurrence and mortality
meteorological parameters and seasonal variations in pulmonary thromboembolism
serie storiche economiche
the proof and measurement of association between two things
research design and statistical analysis, seconda edizione
medicina preventiva e riabilitativa, padova, piccin
trombosi venosa profonda, manuale msd versione per i pazienti
malattie dell'apparato respiratorio
bureau international des poids et mesures, the international system of units (si)
this appendix is intended as a tool to help the reader understand some scripts and functions of the python project weatherpe. this project was created to carry out the analyses of this study, in a modular way applicable to the analysis of other statistical samples, as mentioned in the document. this appendix is not intended as documentation or a full explanation of the code; for that, see the code itself [8] . the project is divided into four folders: • res: contains the data files used; • weape: contains the classes and functions used; • launchers: contains executable code to generate all the results included in this document; • tests: contains tests of methods and functions. the file series.py implements the series class, which has been used to represent time series. this class has only two attributes, values and label, storing respectively a list containing the values of the series and a string containing its name. below, the most important methods for the understanding of the document are described.
due to the various meanings it assumes in different scientific disciplines, the concept of normalisation always causes some ambiguity, unless it is explicitly defined [15] . for this project, the term normalization has been used to mean a transformation of random variables, also known as min-max normalization, described by the following formula: x'_i = a + (x_i − min) · (b − a) / (max − min). key: cord-025439-3rlvmwce authors: christman, ananya; chung, christine; jaczko, nicholas; li, tianzhi; westvold, scott; xu, xinyue; yuen, david title: new bounds for maximizing revenue in online dial-a-ride date: 2020-04-30 journal: combinatorial algorithms doi: 10.1007/978-3-030-48966-3_14 sha: doc_id: 25439 cord_uid: 3rlvmwce in the online dial-a-ride problem (oldarp) a server travels to serve requests for rides. we consider a variant where each request specifies a source, destination, release time, and revenue that is earned for serving the request. the goal is to maximize the total revenue earned within a given time limit. we prove that no non-preemptive deterministic online algorithm for oldarp can be guaranteed to earn more than half the revenue earned by an optimal offline solution (opt). we then investigate the segmented best path (sbp) algorithm of [8] for the general case of weighted graphs. the previously established lower and upper bounds for the competitive ratio of sbp are 4 and 6, respectively, under reasonable assumptions about the input instance. we eliminate the gap by proving that the competitive ratio is 5 (under the same assumptions). we also prove that when revenues are uniform, sbp has competitive ratio 4. finally, we provide a competitive analysis of sbp on complete bipartite graphs. in the online dial-a-ride problem (oldarp), a server travels through a graph to serve requests for rides.
each request specifies a source, which is the pick-up (or start) location of the ride, a destination, which is the delivery (or end) location, and the release time of the request, which is the earliest time the request may be served. requests arrive over time; specifically, each arrives at its release time and the server must decide whether to serve the request and at what time, with the goal of meeting some optimality criterion. the server has a capacity that specifies the maximum number of requests it can serve at any time. common optimality criteria include minimizing the total travel time (i.e. makespan) to satisfy all requests, minimizing the average completion time (i.e. latency), or maximizing the number of served requests within a specified time limit. in many variants preemption is not allowed, so if the server begins to serve a request, it must do so until completion. on-line dial-a-ride problems have many practical applications in settings where a vehicle is dispatched to satisfy requests involving pick-up and delivery of people or goods. important examples include ambulance routing, transportation for the elderly and disabled, taxi services including ride-for-hire systems (such as uber and lyft), and courier services. we study a variation of oldarp where in addition to the source, destination and release time, each request also has a priority and there is a time limit within which requests must be served. the server has unit capacity and the goal for the server is to serve requests within the time limit so as to maximize the total priority. a request's priority may simply represent the importance of serving the request in settings such as courier services. in more time-sensitive settings such as ambulance routing, the priority may represent the urgency of a request. in profit-based settings, such as taxi and ride-sharing services, a request's priority may represent the revenue earned from serving the request. 
for the remainder of this paper, we will refer to the priority as "revenue," and to this variant of the problem as roldarp. note that if revenues are uniform the problem is equivalent to maximizing the number of served requests. the online dial-a-ride problem was introduced by feuerstein and stougie [10] and several variations of the problem have been studied since. for a comprehensive survey on these and many other problems in the general area of vehicle routing see [12] and [16] . feuerstein and stougie studied the problem for two different objectives: minimizing completion time and minimizing latency. for minimizing completion time, they showed that any deterministic algorithm must have a competitive ratio of at least 2 regardless of the server capacity. they presented algorithms for the cases of finite and infinite capacity with competitive ratios of 2.5 and 2, respectively. for minimizing latency, they proved that any algorithm must have a competitive ratio of at least 3 and presented a 15-competitive algorithm on the real line when the server has infinite capacity. ascheuer et al. [2] studied oldarp with multiple servers with the goal of minimizing completion time and presented a 2-competitive algorithm. more recently, birx et al. [5] studied oldarp on the real line and presented a new upper bound of 2.67 for the smartstart algorithm [2] , which improves the previous bounds of 3.41 [14] and 2.94 [4] . for oldarp on the real line, bjelde et al. [6] present a preemptive algorithm with competitive ratio 2.41. the online traveling salesperson problem (oltsp), introduced by ausiello et al. [3] and also studied by krumke [15] , is a special case of oldarp where for each request the source and destination are the same location. there are many studies of variants of oldarp and oltsp [3, 11, 13, 15] that differ from the variant we study; we omit them here due to space limitations.
in this paper, we study oldarp where each request has a revenue that is earned if the request is served, and the goal is to maximize the total revenue earned within a specified time limit; the offline version of the problem was shown to be np-hard in [8]. more recently, it was shown that even the special case of the offline version with uniform revenues and uniform weights is np-hard [1]. christman and forcier [9] presented a 2-competitive algorithm for oldarp on graphs with uniform edge weights. christman et al. [8] showed that if edge weights may be arbitrarily large, then regardless of revenue values, no deterministic algorithm can be competitive. they therefore considered graphs where edge weights are bounded by a fixed fraction of the time limit, and gave a 6-competitive algorithm for this problem. note that this is a natural subclass of inputs since in real-world dial-a-ride systems, drivers would be unlikely to spend a large fraction of their day moving to or serving a single request. in this work we begin with improved lower and upper bounds for the competitive ratio of the segmented best path (sbp) algorithm that was presented in [8]. we study sbp because it has the best known competitive ratio for roldarp and is a relatively straightforward algorithm. in [8], it was shown that sbp's competitive ratio has lower bound 4 and upper bound 6, provided that the edge weights are bounded by a fixed fraction of the time limit, i.e. t/f where t is the time limit and 1 < f < t, and that the revenue earned by the optimal offline solution (opt) in the last 2t/f time units is bounded by a constant. this assumption is imposed because, as we show in lemma 1, no non-preemptive deterministic online algorithm can be guaranteed to earn this revenue. we note that as t grows, the significance of the revenue earned by opt in the last two time segments diminishes.
we then close the gap between the upper and lower bounds of sbp by providing an instance where the lower bound is 5 (sect. 3.1) and a proof for an upper bound of 5 (sect. 3.2). we note that another interpretation of our result is that under a weakened-adversary model where opt has two fewer time segments available, while sbp has the full time limit t, sbp is 5-competitive. we then investigate the problem for uniform revenues (so the objective is to maximize the total number of requests served) and prove that sbp earns at least 1/4 the revenue of opt, minus an additive term linear in f, the number of time segments (sect. 4). this variant is useful for settings where all requests have equal priorities, such as not-for-profit services that provide transportation to elderly and disabled passengers and courier services where deliveries are not prioritized. we then consider the problem for complete bipartite graphs; for these graphs every source is from the left-hand side and every destination is from the right-hand side (sect. 5). these graphs model the scenario where only a subset of locations may be source nodes and a disjoint subset may be destinations, e.g. in the delivery of goods from commercial warehouses only the warehouses may be sources and only customer locations may be destinations. we refer to this problem as roldarp-b. we first show that if edge weights are not bounded by a minimum value, then roldarp on general graphs reduces to roldarp-b. we therefore impose a minimum edge weight of kt/f for some constant k such that 0 < k ≤ 1. we show that if revenues are uniform, sbp has competitive ratio 1/k. finally, we show that if revenues are nonuniform, sbp has competitive ratio 1/k, provided that the revenue earned by opt in the last 2t/f time units is bounded by a constant. (this assumption is justified by lemma 1, which says no non-preemptive deterministic algorithm can be guaranteed to earn any fraction of what is earned by opt in the last 2t/f time units.) table 1 summarizes our results.

table 1. bounds on the algorithm sbp for roldarp variants.

  competitive ratio of sbp        uniform revenue    nonuniform revenue
  general graphs                  4 (‡)              5 (†)
  complete bipartite graphs       1/k (§)            1/k (†, §)

† this upper bound assumes the optimal revenue of the last two time segments is bounded by a constant. ‡ this upper bound assumes the number of time segments is constant. § k is a constant where 0 < k ≤ 1 such that the minimum edge weight is kt/f, where t is the time limit and 1 < f < t.

the revenue-online-dial-a-ride problem (roldarp) is formally defined as follows. the input is an undirected complete graph g = (v, e), where for every pair of nodes u, v there is a weight w_{u,v} > 0, which represents the amount of time it takes to traverse (u, v). one node in the graph, o, is designated as the origin and is where the server is initially located (i.e. at time 0). the input also includes a time limit t and a sequence of requests, σ, that are dynamically issued to the server. each request is of the form (s, d, t, p) where s is the source node, d is the destination, t is the time the request is released, and p is the revenue (or priority) earned by the server for serving the request. the server does not know about a request until its release time t. to serve a request, the server must move from its current location x to s, then from s to d. the total time for serving the request is equal to the length (i.e. travel time) of the path from x to s to d, and the earliest time a request may be released is at t = 0. for each request, the server must decide whether to serve the request and if so, at what time. a request may not be served earlier than its release time and at most one request may be served at any given time. once the server decides to serve a request, it must do so until completion. the goal for the server is to serve requests within the time limit so as to maximize the total earned revenue.
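the problem definition above can be made concrete with a small sketch. the names (`Request`, `travel_time`, `can_serve`) and the dictionary-based edge-weight representation are illustrative choices, not notation from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    source: str
    dest: str
    release: float   # earliest time t the request may be served
    revenue: float   # revenue/priority p earned if the request is served

def travel_time(w, a, b):
    """Symmetric edge-weight lookup: time to traverse (a, b) in the complete graph."""
    if a == b:
        return 0.0
    return w[(a, b)] if (a, b) in w else w[(b, a)]

def can_serve(w, loc, now, req, time_limit):
    """Can a server at `loc` at time `now` finish `req` by `time_limit`?
    It must wait for the release time, then move loc -> source -> dest."""
    start = max(now, req.release)
    finish = (start
              + travel_time(w, loc, req.source)
              + travel_time(w, req.source, req.dest))
    return finish <= time_limit
```

for example, with edge weights w = {('o','a'): 2, ('a','b'): 3}, a server at the origin o at time 0 can finish the request (a, b, 0, 5) exactly at time 5, so it is feasible for any time limit of at least 5.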
(the server need not return to the origin and may move freely through the graph at any time, even if it is not traveling to serve a request.) the algorithm segmented best path (sbp) [8] starts by splitting the total time t into f segments, each of length t/f (recall that f is fixed and 1 < f < t). at a high level, it proceeds as follows:

  input: complete graph g with time limit t and maximum edge weight t/f.
  at the start of time segment t_i, find the max-revenue-request-set, r.
  if r is non-empty then
    move to the source location of the first request in r;
    at the start of t_{i+1}, serve request-set r.
  else
    remain idle for t_i and t_{i+1}.
  end if
  let i = i + 2 and repeat.

in other words, at the start of a time segment, the server determines the max-revenue-request-set, i.e. the maximum-revenue set of unserved requests that can be served within one time segment, and moves to the source of the first request in this set. during the next time segment, it serves the requests in this set. it continues this way, alternating between moving to the source of the first request in the max-revenue-request-set during one time segment, and serving this request-set in the next time segment. to find the max-revenue-request-set, the algorithm maintains a directed auxiliary graph g to keep track of unserved requests (an edge between two vertices u, v represents a request with source u and destination v). it finds all paths of length at most t/f between every pair of nodes in g and returns the path that yields the maximum total revenue (please refer to [8] for full details). it was observed in [8] that no deterministic online algorithm can be guaranteed to serve the requests served by opt during the last time segment, and the authors proved that sbp is 6-competitive barring an additive factor equal to the revenue earned by opt during the last two time segments. more formally, let rev(sbp(t_j)) and rev(opt(t_j)) denote the revenue earned by sbp and opt, respectively, during the j-th time segment.
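the max-revenue-request-set computation that sbp performs each round can be sketched by brute force. the paper's version uses an auxiliary-graph path search; this simplified stand-in, with requests given as (source, dest, revenue) triples whose release times are assumed already filtered, is only meant to illustrate the selection criterion:

```python
from itertools import permutations

def tt(w, a, b):
    # symmetric edge-weight lookup (travel time); zero for staying in place
    if a == b:
        return 0
    return w[(a, b)] if (a, b) in w else w[(b, a)]

def max_revenue_request_set(w, requests, seg_len):
    """Among all orderings of the pending requests, find the chain that can be
    served back-to-back within one time segment (the move to the first source
    happens in the preceding segment, so it is not counted here) and that
    maximizes total revenue. Exponential-time, for illustration only."""
    best, best_rev = (), 0
    for r in range(1, len(requests) + 1):
        for chain in permutations(requests, r):
            t, loc, ok = 0, chain[0][0], True
            for s, d, _ in chain:
                t += tt(w, loc, s) + tt(w, s, d)  # deadhead move, then serve
                loc = d
                if t > seg_len:
                    ok = False
                    break
            rev = sum(p for _, _, p in chain)
            if ok and rev > best_rev:
                best, best_rev = chain, rev
    return best, best_rev
```

with unit edge weights and the two requests ('a','b',2) and ('b','c',3), a segment of length 2 admits the full chain (revenue 5), while a segment of length 1 only admits the single more valuable request (revenue 3).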
it was also shown in [8] that as t grows, the competitive ratio of sbp is at best 4 (again with the additive term equal to rev(opt(t_f)) + rev(opt(t_{f−1}))), resulting in a gap between the upper and lower bounds. we first present a general lower bound for this problem and show that no non-preemptive deterministic online algorithm (e.g. sbp) can be better than 2-competitive with respect to the revenue earned by the offline optimal schedule (ignoring the last two time segments; see lemma 1, below). theorem 1. no non-preemptive deterministic online algorithm can be guaranteed to earn more than half the revenue earned by opt in the first t − 2t/f time units. this is the case whether revenues are uniform or nonuniform. proof (sketch). the adversary repeatedly releases requests such that, depending on which request(s) the algorithm serves, other request(s) are released that the algorithm cannot serve in time. this scheme requires carefully constructed edge weights, release times, and revenues so that the optimal offline revenue is always twice that of any online algorithm. please see the full version of the paper for details [7]. we now show (lemma 1) that no non-preemptive deterministic online algorithm (e.g. sbp) can be competitive with the revenue earned by opt in the last two segments of time. we note that this claim applies to the version of non-preemption where, as in real-world systems like uber/lyft, once the server decides to serve a request, it must move there and serve it to completion. proof (sketch). the adversary releases a request in the last two time segments, and if the online algorithm chooses not to serve it, no other requests will be released. if the algorithm chooses to serve it, another batch of requests will be released elsewhere that the algorithm cannot serve in time. please see the full version of the paper for details [7]. in this section we improve the lower and upper bounds for the competitive ratio of the segmented best path algorithm [8].
in particular, we eliminate the gap between the lower and upper bounds of 4 and 6, respectively, from [8] , by providing an instance where the lower bound is 5 and a proof for an upper bound of 5. note that throughout this section we assume the revenue earned by opt in the last two time segments is bounded by some constant. we must impose this restriction on the opt revenue of the last two time segments because, as we showed in lemma 1, no non-preemptive deterministic online algorithm can be guaranteed to earn any constant fraction of this revenue. theorem 2. if the revenue earned by opt in the last two time segments is bounded by some constant, and sbp is γ-competitive, then γ ≥ 5. proof (sketch). for the formal details, please refer to the proof of theorem 2 in the full version [7] . consider the instance depicted in fig. 1 . since t = 2hf in this instance, h represents "half" the length of one time segment, so only one request of length h + 1 fits within a single time segment for sbp. the general idea of the instance is that while sbp is serving every other request across the top row of requests (since the other half across the top are not released until after sbp has already passed them by), opt is serving the entire bottom row in one long chain, then also has time to serve the top row as one long chain. we now show that sbp is 5-competitive by creating a modified, hypothetical sbp schedule that has additional copies of requests. first, we note that sbp loses a factor of 2 due to the fact that it serves requests during only every other time segment. then, we lose another factor of two to cover requests in opt that overlap between time segments. finally, by adding at most one more copy of the requests served by sbp to make up for requests that sbp "incorrectly" serves prior to when they are served by opt, we end up with 5 copies of sbp being sufficient for bounding the total revenue of opt. 
note that while this proof uses some of the techniques of the proof of the 6-competitive upper bound in [8], it reduces the competitive ratio from 6 to 5 by cleverly extracting the set of requests that sbp serves prior to opt before making the additional copies. let rev(opt) and rev(sbp) denote the total revenue earned by opt and sbp over all time segments t_j from j = 1 . . . f. theorem 3. if the revenue earned by opt in the last two time segments is bounded by some constant c, then sbp is 5-competitive, i.e., if rev(opt(t_f)) + rev(opt(t_{f−1})) ≤ c, then Σ_{j=1}^{f} rev(opt(t_j)) ≤ 5 Σ_{j=1}^{f} rev(sbp(t_j)) + c. note that another interpretation of this result is that under a resource augmentation model where sbp has two more time segments available than opt, sbp is 5-competitive. proof. we analyze the revenue earned by sbp by considering the time segments in pairs (recall that the length of a time segment is t/f for some 1 < f < t). we refer to each pair of consecutive time segments as a time window, so if there are f time segments, there are f/2 time windows. note that the last time window may have only one time segment. for notational convenience we consider a modified version of the sbp schedule, which we refer to as sbp′, that serves exactly the same set of requests as sbp, but does so one time window earlier. specifically, if sbp serves a set of requests during time window i ≥ 2, sbp′ serves this set during time window i − 1 (so sbp′ ignores the set served by sbp in window 1). we note that the schedule of requests served by sbp′ may be infeasible, and that it will earn at most the amount of revenue earned by sbp. let b_i denote the set of requests served by opt in window i that sbp′ already served in some window j < i, and let b be the set of all requests that have already been served by sbp′ in a previous window by the time they are served in the opt schedule; formally, b = ∪_i b_i. let opt(t_j) denote the set of requests served by opt in time segment t_j.
let opt_i denote the set of requests served by opt in the time segment of window i with greater revenue, i.e. opt_i = arg max{rev(opt(t_{2i−1})), rev(opt(t_{2i}))}. note this set may include a request that was started in the prior time segment, as long as it was completed in the time segment of opt_i. let rev(opt_i) denote the revenue earned in opt_i. let sbp′_i denote the set of requests served by sbp′ in window i and let rev(sbp′_i) denote the revenue earned by sbp′_i. let h denote the chronologically ordered set of time windows w where rev(opt_w) > rev(sbp′_w), and let h_j denote the j-th time window in h. we refer to each window of h as a window with a "hole," in reference to the fact that sbp′ does not earn as much revenue as opt in these windows. in each window h_j there is some amount of revenue that opt earns that sbp′ does not. in particular, there must be a set of requests that opt serves in window h_j that sbp′ does not serve in h_j. note that this set must be available for sbp′ in h_j since opt does not include the set b. let opt_{h_j} = a_j ∪ c*_j, where a_j is the subset of requests served by both opt and sbp′ in h_j, and c*_j is the subset of opt requests available for sbp′ to serve in h_j but that sbp′ chooses not to serve. let us refer to the set of requests served by sbp′ in h_j as sbp′_{h_j} = a_j ∪ c_j for some set of requests c_j. note that if opt_{h_j} = a_j ∪ c*_j can be executed within a single time segment, then rev(c_j) ≥ rev(c*_j) by the greediness of sbp′. however, since h_j is a hole, we know that the set opt_{h_j} cannot be served within one time segment. our plan is to build an infeasible schedule sbp′′ that is similar to sbp′ but contains additional "copies" of some requests, such that no windows of sbp′′ contain holes. we first initialize sbp′′ to have the same schedule of requests as sbp′. we then add additional requests to h_j for each j = 1 . . . |h|, based on opt_{h_j}.
consider one such window with a hole, h_j, and let k be the index of the time segment corresponding to opt_{h_j}. we know opt must have begun serving a request of opt_{h_j} in time segment t_{k−1} and completed this request in time segment t_k. let us use r* to denote this request that "straddles" the two time segments. after the initialization of sbp′′ = sbp′, recall that the set of requests served by sbp′′ in h_j is sbp′′_{h_j} = a_j ∪ c_j for some set of requests c_j. we add to sbp′′ a copy of a set of requests; there are two sub-cases depending on whether r* ∈ c*_j or not. case r* ∈ c*_j. in this case, by the greediness of sbp′, and the fact that both r* alone and c*_j \ {r*} can separately be completed within a single time segment, we have: rev(c_j) ≥ max{rev(r*), rev(c*_j \ {r*})} ≥ (1/2) rev(c*_j). we then add a copy of the set c_j to the sbp′′ schedule, so there are two copies of c_j in h_j. note that for sbp′′, h_j will no longer be a hole since: rev(opt_{h_j}) = rev(a_j) + rev(c*_j) ≤ rev(a_j) + 2 · rev(c_j) = rev(sbp′′_{h_j}). case r* ∉ c*_j. in this case c*_j can be served within one time segment, but sbp′ chooses to serve a_j ∪ c_j instead. so we have rev(a_j) + rev(c_j) ≥ rev(c*_j); therefore we know either rev(a_j) ≥ (1/2) rev(c*_j) or rev(c_j) ≥ (1/2) rev(c*_j). in the latter case, we can do as we did in the first case above and add a copy of the set c_j to the sbp′′ schedule in window h_j, to get rev(opt_{h_j}) ≤ rev(sbp′′_{h_j}), as above. in the former case, we instead add a copy of a_j to the sbp′′ schedule in window h_j. then again, for sbp′′, h_j will no longer be a hole, since this time: rev(opt_{h_j}) = rev(a_j) + rev(c*_j) ≤ 2 · rev(a_j) + rev(c_j) = rev(sbp′′_{h_j}). note that for all windows w ∉ h that are not holes, we already have rev(sbp′_w) ≥ rev(opt_w). so we have Σ_w rev(opt_w) ≤ rev(sbp′′) ≤ 2 · rev(sbp′), (1) where the second inequality is because sbp′′ contains no more than two instances of every request in sbp′.
combining (1) with the fact that sbp′ earns at most what sbp does yields Σ_w rev(opt_w) ≤ 2 · rev(sbp). (2) since opt_w is only the time segment of window w with greater revenue, and each window contains two time segments, summing over both segments loses at most another factor of two. hence, by the definition of opt′, and by (2), we can say Σ_{j=1}^{f} rev(opt′(t_j)) ≤ 4 Σ_{j=1}^{f} rev(sbp(t_j)) + rev(opt(t_{f−1})) + rev(opt(t_f)). (3) now we must add in any request in b, such that opt serves the request in a time window after sbp′ serves that request. by definition of b (as the set of all requests that have been served by sbp′ in a previous window), b may contain at most the same set of requests served by sbp′. therefore rev(b) ≤ rev(sbp′), so rev(b) ≤ rev(sbp). by the definition of opt′, rev(opt) = rev(opt′) + rev(b), (4) and by combining (3) and (4) with the fact that rev(b) ≤ rev(sbp), we have Σ_{j=1}^{f} rev(opt(t_j)) ≤ 5 Σ_{j=1}^{f} rev(sbp(t_j)) + c. we now consider the setting where revenues are uniform among all requests, so the goal is to maximize the total number of requests served. this variant is useful for settings where all requests have equal priorities, for example for not-for-profit services that provide transportation to elderly and disabled passengers. the proof strategy is to carefully consider the requests served by sbp in each window and track how they differ from those of opt. the final result is achieved through a clever accounting of the differences between the two schedules, and bounding the revenue of the requests that are "missing" from sbp. we note that the lower bound instance of theorem 2 can be modified to become a uniform-revenue instance that has ratio 5 − 14/f. we further note that the lower bound instance provided in [8] immediately establishes a lower bound instance for sbp that has a ratio of 4. we now show that opt earns at most 4 times the revenue of sbp in this setting if we assume the revenue earned by opt in the last two time segments is bounded by a constant, and allow sbp an additive bonus of f. note that even when revenues are uniform, no non-preemptive deterministic online algorithm can earn the revenue earned by opt in the last two time segments (see lemma 1).
we begin with several definitions and lemmas. as in the proof of theorem 3, we consider a modified version of the sbp schedule, which we refer to as sbp′, that serves exactly the same set of requests as sbp, but does so one time window earlier. for all windows i = 1, 2, ..., m, where m = f/2 − 1, we let s_i denote the set of requests served by sbp′ in window i and s*_i the set of requests served by opt during the time segment of window i with greater revenue. case 2: the set s*_i cannot be served within one time segment. this means there must be one request in s*_i that opt started serving in the previous time segment. we refer to this straddling request as r*. there are three sub-cases based on where r* appears. (a) if r* ∈ y*_i, then due to the greediness of sbp′, we know that rev(x_i) + rev(y_i) ≥ rev(r*), (5) since otherwise sbp′ would have chosen to serve r*. we also know rev(x_i) + rev(y_i) ≥ rev(y*_i \ {r*}), (6) since otherwise sbp′ would have chosen to serve y*_i \ {r*}. from (5), we have |x_i| + |y_i| ≥ 1, and from (6), |x_i| + |y_i| ≥ |y*_i| − 1. (b) if r* ∈ a_i, then r* is served by both opt and sbp′. we know that a_i ∪ y*_i \ {r*} can be served within one time segment, since r* is the only request that causes s*_i to straddle between two time segments. again by the greediness of sbp′, we have rev(x_i) + rev(y_i) ≥ rev(y*_i \ {r*}). therefore, for all cases, for window i, we obtain a per-window bound, (7). now we will build an infeasible schedule sbp′′ that is similar to sbp′ but contains additional "copies" of some requests, such that no windows of sbp′′ contain holes, i.e. such that rev(sbp′′) ≥ Σ_{i=1}^{m} rev(s*_i). we define a modified opt schedule, which we refer to as opt′, such that, by lemma 2 and eq. (7), we can say rev(opt′) ≤ rev(sbp′) + Σ_{i=1}^{m} rev(y_i) + m. (8) this tells us that to form an sbp′′ whose revenue is at least that of opt′, we must "compensate" sbp′′ by adding to it at most one copy of each request in the set y_i for all i = 1, 2, ..., m, plus m "dummy requests." in other words, the total revenue of all y_i cannot exceed the total revenue of sbp′, hence we have Σ_{i=1}^{m} rev(y_i) ≤ rev(sbp′). (9) combining (8) and (9), we get rev(opt′) ≤ 2 rev(sbp′) + m, which means rev(opt′) ≤ 2 rev(sbp) + m. (10) recall that s*_i is the set of requests served by opt during the time segment of window i with greater revenue.
in other words, Σ_{j=1}^{f−2} rev(s*(t_j)) ≤ 2 rev(opt′), which, combined with (10), gives us Σ_{j=1}^{f−2} rev(s*(t_j)) ≤ 4 rev(sbp) + 2m. (11) we assumed that the total revenue of requests served in the last two time segments by opt is bounded by c. from (11), we get Σ_{j=1}^{f} rev(s*(t_j)) ≤ 4 rev(sbp) + 2m + c. (12) we also know that the total revenue of requests served by sbp during the first m windows is less than or equal to the total revenue of sbp. therefore, from (12), we have Σ_{j=1}^{f} rev(s*(t_j)) ≤ 4 Σ_{j=1}^{f} rev(s(t_j)) + 2m + c. in this section, we consider roldarp for complete bipartite graphs g = (v = v_1 ∪ v_2, e), where only nodes in v_1 may be source nodes and only nodes in v_2 may be destination nodes. one node is designated as the origin and there is an edge from this node to every node in v_1 (so the origin is a node in v_2). due strictly to space limitations, most proofs of theorems in this section are deferred to the full version of the paper [7]. we refer to this problem as roldarp-b and the offline version as rdarp-b. we first show that if edge weights of the bipartite graph are not bounded by a minimum value, then the offline version of roldarp on general graphs, which we refer to as rdarp, reduces to rdarp-b. since rdarp has been shown in [1, 8] to be np-hard (even if revenues are uniform), this means rdarp-b is np-hard as well. theorem 5. the problem rdarp is poly-time reducible to rdarp-b. also, rdarp with uniform revenues is poly-time reducible to rdarp-b with uniform revenues. proof (sketch). the idea of the reduction is to split each node into two nodes connected by an edge in the bipartite graph with an arbitrarily small distance. then we turn each edge in the original graph into two edges in the bipartite graph. please see the full version for details [7]. we show that for bipartite graph instances, if revenues are uniform, we can guarantee that sbp earns a fraction of opt equal to the ratio between the minimum and maximum edge-length. proof (sketch). the proof idea is akin to that of theorem 7 below. please see the full version of the paper for details [7].
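the node-splitting reduction sketched in the proof can be illustrated as follows. the tuple-based node labels and the `eps` parameter (standing in for the arbitrarily small split-edge length, whose exact value is elided in the extraction) are assumptions of this sketch:

```python
def to_bipartite(w, eps):
    """Sketch of the reduction from general graphs to bipartite instances:
    each node v becomes a left (source-side) copy (v, 'L') and a right
    (destination-side) copy (v, 'R') joined by a short edge of length eps;
    each original edge (u, v) induces the two corresponding left-right edges."""
    wb = {}
    nodes = {x for edge in w for x in edge}
    for v in nodes:
        wb[((v, 'L'), (v, 'R'))] = eps          # split edge, arbitrarily short
    for (u, v), d in w.items():
        wb[((u, 'L'), (v, 'R'))] = d            # u -> v direction
        wb[((v, 'L'), (u, 'R'))] = d            # v -> u direction
    return wb
```

a single original edge of weight 3 thus yields four bipartite edges: two split edges of length eps and one length-3 edge per direction, which is why dropping the minimum-edge-weight bound collapses the two problems together.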
in this section we show that even if revenues are nonuniform, we can still guarantee that sbp earns a fraction of opt equal to the ratio between the minimum and maximum edge-length, minus the revenue earned by opt in the last window. recall that we refer to each pair of consecutive time segments as a time window. note that no non-preemptive deterministic online algorithm can be competitive with any fraction of the revenue earned by opt in the last 2t/f time units (i.e. lemma 1 also holds for roldarp-b with nonuniform revenues). due to space limitations, please refer to the full version of this work [7] for the proof of the following theorem.

references
[1] maximizing the number of rides served for dial-a-ride
[2] online dial-a-ride problems: minimizing the completion time
[3] algorithms for the on-line travelling salesman
[4] tight analysis of the smartstart algorithm for online dial-a-ride on the line
[5] improved bounds for open online dial-a-ride on the line
[6] tight bounds for online tsp on the line
[7] new bounds for maximizing revenue in online dial-a-ride
[8] revenue maximization in online dial-a-ride
[9] maximizing revenues for on-line dial-a-ride
[10] on-line single-server dial-a-ride problems
[11] generalized online routing: new competitive ratios, resource augmentation, and asymptotic analyses
[12] online vehicle routing problems: a survey
[13] online travelling salesman problem on a circle
[14] online optimization: competitive analysis and beyond
[15] on minimizing the maximum flow time in the online dial-a-ride problem
[16] typology and literature review for dial-a-ride problems

let s*_i denote the set of requests served by opt during the time segment of window i with greater revenue, i.e. s*_i = arg max{rev(opt(t_{2i−1})), rev(opt(t_{2i}))}, where rev(opt(t_j)) denotes the revenue earned by opt in time segment t_j. we define a new set j*_i as the set of requests served by opt during the time segment of window i with less revenue, i.e.
j*_i = arg min{rev(opt(t_{2i−1})), rev(opt(t_{2i}))}. (1) a_i is the set of requests that appear in both s*_i and s_i; (2) x*_i is the set of requests in s*_i that appear in s_w for some w = 1, 2, ..., i − 1 (note there is only one possible w for each individual request r ∈ x*_i, because each request can be served only once); (3) y*_i is the set of requests in s*_i such that no request from y*_i appears in s_w for any w = 1, 2, ..., i − 1, i; (4) x_i is the set of requests in s_i that appear in s*_w for some w = 1, 2, ..., i − 1 (note there is only one possible w for each individual request r ∈ x_i, because each request can be served only once); (5) y_i is the set of requests in s_i such that no request from y_i appears in s*_w for any w = 1, 2, ..., m, or may not appear in any other sets. also note that each request can be served at most once. given the above definitions, we have the following lemma, whose proof has been deferred to the full version of the paper [7]. it states that at any given time window, the cumulative requests of opt that were earlier served by sbp′ are no more than the number that have been served by sbp′ but not yet by opt. proof. note that since revenues are uniform, the revenue of a request-set u is equal to the size of the set u, i.e., rev(u) = |u|. consider each window i where rev(s*_i) > rev(s_i). note that the set s*_i may not fit within a single time segment. we consider two cases based on s*_i. case 1: the set s*_i can be served within one time segment. note that within s*_i = a_i ∪ x*_i ∪ y*_i, the set x*_i is not available for sbp′ to serve, because sbp′ has served the requests in x*_i prior to window i. among requests that are available to sbp′, sbp′ greedily chooses to serve the maximum-revenue set that can be served within one time segment. therefore, we have rev(x_i) + rev(y_i) ≥ rev(y*_i). since revenues are uniform, we also have |x_i| + |y_i| ≥ |y*_i|.
if this is not the case, then sbp′ would have chosen to serve y*_i instead of x_i ∪ y_i, since it is feasible for sbp′ to do so because the entire s*_i can be served within one time segment.

key: cord-262594-kzt09vmf
authors: huang, x.; li, z.; lu, j.; wang, s.; wei, h.; chen, b.
title: time-series clustering for home dwell time during covid-19: what can we learn from it?
date: 2020-09-30
doi: 10.1101/2020.09.27.20202671
cord_uid: kzt09vmf

in this study, we investigate the potential driving factors that lead to the disparity in the time-series of home dwell time, aiming to provide fundamental knowledge that benefits policy-making for better mitigation strategies of future pandemics. taking metro atlanta as a study case, we perform a trend-driven analysis by conducting kmeans time-series clustering using fine-grained home dwell time records from safegraph, and further assess the statistical significance of sixteen demographic/socioeconomic variables from five major categories. we find that demographic/socioeconomic variables can explain the disparity in home dwell time in response to the stay-at-home order, which potentially leads to disparate exposures to the risk from covid-19. the results further suggest that socially disadvantaged groups are less likely to follow the order to stay at home, pointing out that extensive gaps in the effectiveness of social distancing measures exist between socially disadvantaged groups and others. our study reveals that the long-standing inequity issue in the u.s. stands in the way of the effective implementation of social distancing measures. policymakers need to carefully evaluate the inevitable trade-off among different groups, making sure the outcomes of their policies reflect the interests of the socially disadvantaged groups.

• we perform a trend-driven analysis by conducting kmeans time-series clustering using fine-grained home dwell time records from safegraph.
• we find that demographic/socioeconomic variables can explain the disparity in home dwell time in response to the stay-at-home order.
• the results suggest that socially disadvantaged groups are less likely to follow the order to stay at home, potentially leading to more exposure to covid-19.
• policymakers need to make sure the outcomes of their policies reflect the interests of the disadvantaged groups.

regardless of their unique characteristics, all selected mobility datasets suggest a statistically significant positive correlation between mobility reduction and income at the u.s. county scale. despite the above efforts, the soundness of correlating disparity in response to demographic/socioeconomic variables is hampered by the coarse geographical units, as mitigation policies may vary across countries, states, and even counties; therefore, the documented disparity in response may result from the discrepancy in mitigation policies, not from the varying demographic/socioeconomic indicators. thus, the examination of fine-grained mobility records (e.g., at the census tract or block group level) is greatly needed. in addition, most existing studies utilize indices summarized during a specific period to quantify the mobility-related response, neglecting the dynamic perspectives revealed from time-series data. in comparison, time-series trend-based analytics may provide valuable insights in distinguishing different dynamic patterns of mobility records, thus warranting further investigation. the objective of this study is to explore the capability of time-series clustering in categorizing fine-grained mobility records during the covid-19 pandemic, and to further investigate which demographic/socioeconomic variables differ among the categories with statistical significance.
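assessing whether a demographic/socioeconomic variable differs among categorized groups with statistical significance is commonly done with a one-way anova f statistic. the stdlib sketch below is a generic illustration of that computation, not the authors' implementation:

```python
def one_way_anova_f(groups):
    """groups: list of lists, each holding one cluster's values of a variable
    (e.g., the low-income percentage of the CBGs assigned to that cluster).
    Returns the F statistic = between-group mean square / within-group mean
    square; a large F suggests the cluster means differ."""
    k = len(groups)                              # number of clusters
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n      # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

in practice one would compare the resulting F value against an F distribution with (k − 1, n − k) degrees of freedom (e.g., via `scipy.stats.f_oneway`) to obtain a p-value.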
taking advantage of the home dwell time at census block group (cbg) level from safegraph [17], and using the atlanta-sandy springs-roswell metropolitan statistical area (msa) (hereafter referred to as metro atlanta) as a study case, this study investigates the potential driving factors that lead to the disparity in the time-series of home dwell time during the covid-19 pandemic, providing fundamental knowledge that benefits policy-making for better mitigation measures of future pandemics. the contributions of this work are summarized as follows:
• we perform a trend-driven analysis by conducting kmeans time-series clustering using fine-grained home dwell time records from safegraph.
• we assess the statistical significance of sixteen selected demographic/socioeconomic variables among categorized groups derived from the time-series clustering. those variables cover economic status, races and ethnicities, age and household type, education, and transportation.
• we discuss the potential demographic/socioeconomic variables that lead to the disparity in home dwell time during the covid-19 pandemic, how they reflect the long-standing health inequity in the u.s., and what can be suggested for better policy-making.
the remainder of the paper is organized as follows. section 2 introduces the datasets used in this study. section 3 presents the methodological approaches we applied. section 4 describes the contexts of the study case (metro atlanta). section 5 presents the results of time-series clustering, the results of the analysis of variance, and the discussion. section 6 concludes our article. the home dwell time records are derived from safegraph (https://www.safegraph.com/), a data company that aggregates anonymized location data from numerous applications in order to provide insights about physical places.
safegraph aggregates data using a panel of gps points from anonymous mobile devices and determines the home location as the common nighttime location of each mobile device over a six-week period to a geohash-7 granularity (~153 m × ~153 m) [17] . to enhance privacy, safegraph excludes cbg information if fewer than five devices from a given cbg visited an establishment in a month. the data records used in this study are the median home dwell time in minutes for all devices within a certain cbg on a daily basis. for each device, the observed minutes at home across the day are summed, and the median value for all devices within a certain cbg is further calculated [17] . the raw safegraph dataset we used for the year 2020 spans from january 1, 2020, to august 31, 2020 (244 days), with daily home dwell records (in mins) for a total of 219,972 cbgs. a heat map of home dwell time for these cbgs is presented in figure 1 . (the copyright holder for this preprint, which was not certified by peer review, is the author/funder, who has granted medrxiv a license to display it; this version was posted september 30, 2020.) the impact of covid-19 can be observed, as home dwell time notably increased after the declaration of national emergency on march 13, 2020 [18] (figure 1 ), despite the disparity in the intensity of the increase. after the lifting of strict social distancing measures in early may, however, home dwell time starts to decrease and returns to the pre-pandemic level (figure 1 ). the increased variation of home dwell time after the national emergency declaration indicates that cbgs had different responses to the pandemic and the government order. despite the large number of cbgs, not all cbgs contain sufficient records to derive stable time-series that can be used for clustering. the details of the preprocessing steps are presented in section 3.1.
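the per-cbg aggregation described above (sum each device's observed minutes at home across the day, then take the median over all devices within the cbg) can be sketched with pandas. this is an illustration of the metric, not safegraph's actual pipeline; the column names (`cbg`, `date`, `device_id`, `minutes_at_home`) are hypothetical.

```python
import pandas as pd

def daily_median_dwell(device_minutes: pd.DataFrame) -> pd.DataFrame:
    """Replicate the metric described in the text: for each device, sum the
    observed minutes at home across the day; then take the median over all
    devices within a CBG on that day."""
    # sum minutes at home per device per day
    per_device = (device_minutes
                  .groupby(["cbg", "date", "device_id"])["minutes_at_home"]
                  .sum())
    # median across devices within each CBG-day
    return (per_device
            .groupby(["cbg", "date"])
            .median()
            .rename("median_home_dwell_time")
            .reset_index())
```

for example, a cbg-day with one device observed at home for 100 + 200 minutes and another for 500 minutes yields a median of 400 minutes.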
demographic and socioeconomic variables in this study are derived from the american community survey (acs), collected by the u.s. census bureau. acs is an ongoing nationwide survey that investigates a variety of aggregated information about u.s. residents at different geographic levels every year [19] . acs randomly selects monthly samples based on housing unit addresses and publishes annual estimate datasets (i.e., 12-month samples). in addition to the 1-year datasets, acs also releases 3-year estimates (i.e., 36-month samples) and 5-year estimates (i.e., 60-month samples). compared to the 1-year and 3-year datasets, the 5-year estimates cover the most areas, have the largest sample size, and contain the most reliable information [20] . in this study, we use the latest 5-year acs data, i.e., the 2014-2018 acs 5-year estimates, obtained from social explorer (https://www.socialexplorer.com/). we recode the variables from the acs data into five major categories: 1) economic status; 2) races and ethnicities; 3) gender, age and household type; 4) education; 5) transportation. previous empirical studies suggested that these variables could be associated with the pattern of daily travels and participation in out-of-home activities [21] [22] [23] [24] . the detailed information on the variables within the five categories is presented in table 1 . in addition, cbg boundaries are derived from the 2019 tiger/line shapefiles by the u.s. census bureau (https://www.census.gov/cgi-bin/geo/shapefiles/index.php). [table 1 excerpt: economic status — pct_low_income: percent of households with income less than $15,000.] several preprocessing steps are applied to ensure that cbgs within the study area contain sufficient and valid records to derive stable time-series that can be used for clustering.
we first select cbgs that fall within the study area, i.e., metro atlanta (more details on metro atlanta can be found in section 4), which results in a total of 2,687 cbgs. as safegraph uses digital devices to measure home dwell time, the number of available devices in each cbg greatly determines the representativeness and the stability of the time-series. we plot the spatial distribution of the median daily device count within the metro atlanta area and observe that cbgs dominated by non-residential zones tend to have lower daily device counts (figure 2a) , presumably due to the low number of home locations identified via safegraph's algorithm (see section 3.1). we keep cbgs with more than 200 days (out of 244 days) of home dwell time records to ensure reliable time-series can be generated. to fill the missing data, we adopt the approach from huang et al. [10] , where missing data are filled via a simple linear interpolation by assuming that home dwell time changes linearly between two consecutive available records. our preliminary investigation suggests that a stable time-series of daily home dwell time can be achieved when the daily device count reaches 100. thus, we calculate the median of the daily device count for each cbg during the 244-day period and select cbgs with a median equal to or larger than 100. we also observe that some cbgs present abnormal home dwell patterns with consecutive 0 values for a certain period of time. to avoid the potential problems these cbgs may cause for the clustering algorithm, we remove cbgs with 0 values that span more than three consecutive days. a total of 1,483 cbgs remain after the aforementioned preprocessing steps, and their representativeness is presented in figure 2b . the representativeness is defined as the ratio between the median daily device count and the population from the acs 2014-2018 estimates.
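the filtering and gap-filling steps above can be sketched as follows. the thresholds mirror the text (more than 200 of 244 observed days, median daily device count of at least 100, linear interpolation for gaps, no zero-run longer than three days), while the function and column layout are illustrative assumptions.

```python
import pandas as pd

def preprocess_dwell(wide: pd.DataFrame, median_device_count: pd.Series,
                     min_days: int = 200, min_devices: int = 100,
                     max_zero_run: int = 3) -> pd.DataFrame:
    """`wide` is a CBG-by-day table of median home dwell time (NaN = missing);
    `median_device_count` is the median daily device count per CBG."""
    # 1) keep CBGs with more than `min_days` observed days
    wide = wide[wide.notna().sum(axis=1) > min_days]
    # 2) keep CBGs whose median daily device count reaches `min_devices`
    wide = wide[median_device_count.reindex(wide.index) >= min_devices]
    # 3) fill remaining gaps, assuming dwell time changes linearly
    #    between two consecutive available records
    wide = wide.interpolate(axis=1, limit_direction="both")

    # 4) drop CBGs with zeros spanning more than `max_zero_run` consecutive days
    def longest_zero_run(row) -> int:
        run = best = 0
        for v in row:
            run = run + 1 if v == 0 else 0
            best = max(best, run)
        return best

    return wide[wide.apply(longest_zero_run, axis=1) <= max_zero_run]
```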
the representativeness for most cbgs ranges from 5% to 10% (figure 2b ), which is considerably higher than that of twitter [25] , a commonly used open-source platform for deriving mobility-related statistics. time-series clustering is the process of partitioning a time-series dataset into a certain number of clusters according to a certain similarity criterion. in this study, we aim to cluster the time-series of home dwell time in the cbgs within the study area. we adopt the design of kmeans [26] , an unsupervised partition-based clustering algorithm in which observations are categorized into the cluster with the nearest mean. the choice of similarity measurement in kmeans is crucial to the detection of clusters [27] . considering that the time-series of home dwell time for the majority of the cbgs present a similar shape but vary in intensity (figure 1 ), we decide to calculate the euclidean distance between two time-series. given a time-series dataset $X = \{x_1, x_2, \dots, x_n\}$, we aim to partition $X$ into a total of $k$ clusters, i.e., $C = \{c_1, c_2, \dots, c_k\}$, by minimizing the objective function $J$, given as:

$$J = \sum_{j=1}^{k} \sum_{x_i \in c_j} \lVert x_i - \mu_j \rVert^2$$

where $x_i$ denotes a time-series in cluster $c_j$, $\mu_j$ denotes the cluster center of $c_j$, and $\lVert \cdot \rVert$ denotes the similarity measurement that measures the distance between $x_i$ and the cluster center of $c_j$. let $x_i$ and $\mu_j$ each be a $d$-dimensional vector, where $d$ equals the length of the time-series (244 in this case). as euclidean distance is selected as the similarity measurement in this study, $\lVert x_i - \mu_j \rVert$ can be rewritten as:

$$\lVert x_i - \mu_j \rVert = \sqrt{\sum_{t=1}^{d} (x_{i,t} - \mu_{j,t})^2}$$

further, kmeans utilizes an iterative procedure with the following steps to derive the final category for each time-series candidate: 1. initialize cluster centroids $\mu_1, \mu_2, \dots, \mu_k$ arbitrarily. 2.
assign each time-series $x_i$ to its closest cluster $c_j$, according to $\lVert x_i - \mu_j \rVert$. 3. update each centroid $\mu_j$ as the mean of the time-series assigned to $c_j$, and repeat steps 2 and 3 until the cluster assignments no longer change. the kmeans time-series clustering requires pre-specification of the total number of clusters (i.e., $k$), which inevitably introduces subjectivity in deciding what constitutes reasonable clusters [28] . through the investigation of the time-series dataset, we set $k = 3$, expecting to find three cbg clusters with different home dwell time patterns following the stay-at-home order: 1) cbgs with a significant increase in home dwell time; 2) cbgs with a moderate increase in home dwell time; 3) cbgs with unnoticeable changes in home dwell time. after the time-series clustering, three cbg clusters are therefore formed, each with a unique distribution pattern of daily home dwell time. identifying the statistical differences in demographic/socioeconomic variables among these clusters facilitates a better understanding of which variables potentially lead to the disparity in home dwell time during the covid-19 pandemic. qualitatively, we label the cbg clusters, plot them spatially, and compare the spatial pattern of the clusters with the spatial patterns of several major demographic/socioeconomic variables in the study area (see figure 3 in section 4). quantitatively, we apply one-way anova (analysis of variance) (α = 0.001) [29] to assess the statistical significance of the variables in the five major categories (see table 1 ) among the categorized cbg groups derived from the time-series clustering. as anova does not provide insights into particular differences between pairs of cluster means, we further conduct tukey's test (α = 0.05, 0.01, 0.001) [30] , a common and popular post-hoc analysis following anova, to assess the statistical difference of each demographic/socioeconomic variable between cluster pairs.
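because the similarity measure above is plain euclidean distance on fixed-length ($d = 244$) vectors, the time-series clustering reduces to ordinary k-means on the raw series. a minimal sketch with scikit-learn follows; the paper does not name its software, so this is an assumed implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_dwell_series(series: np.ndarray, k: int = 3, seed: int = 0):
    """series: (n_cbgs, n_days) array of daily home dwell time.
    With Euclidean distance, standard k-means on the raw vectors is
    equivalent to the time-series k-means described in the text.
    Returns one cluster label per CBG and the k cluster centroids."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    labels = km.fit_predict(series)
    return labels, km.cluster_centers_
```

with k = 3, series hovering around three intensity levels (e.g., unnoticeable, moderate, and strong increases) are separated into three groups.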
the study area defined in this study is referred to as metro atlanta, designated by the united states office of management and budget (omb) as the atlanta-sandy springs-alpharetta, georgia (ga) metropolitan statistical area (msa). metro atlanta is the twelfth-largest msa in the u.s. and the most populous metro area in ga [31] . the study area includes a total of 30 ga counties (listed in table a ) and has an estimated population of 5,975,424, according to the acs 2014-2018 estimates. metro atlanta has grown rapidly since the 1940s. despite its rapid growth, however, metro atlanta has shown widening disparities, including class and racial divisions, underlying its uneven growth and development, making it one of the metro regions with the most inequity [32] [33] [34] . this is the main reason why we chose this metro region to explore the disparity in responses to the covid-19 pandemic. in the last few decades, the north metro area has absorbed most of the new growth, thanks to the northward shifting trend of the metro region's white population and the rapid office, commercial, and retail development [35] . after the increasingly unbalanced development in recent decades, metro atlanta started to present a distinct north-south spatial disparity in many demographic/socioeconomic variables (figure 3 ). compared to the south metro region, the north region is characterized by higher income (figure 3a) , higher white percentages (figure 3b ), higher education (figure 3c) , and higher percentages of work-from-home workers (figure 3d) . in contrast to the substantial spatial heterogeneity of socioeconomic status, ga's governmental reactions to the covid-19 pandemic are rather homogeneous in space. on march 14, 2020, governor brian p.
kemp announced a public health state of emergency in ga. twenty days later (april 3), the shelter-in-place order took effect for the entire state [36] . the strict social distancing measures lasted until late april, when ga started to reopen gradually: resuming restaurant dine-in services (april 27), reopening bars and nightclubs with capacity limits (june 1), allowing gatherings of 50 people (june 16), and reopening conventions and live performances (july 1) [37] . cbgs in cluster #1 present unnoticeable changes in home dwell time, suggesting that they did not respond strongly to the social distancing measures implemented in march and april. cbgs in cluster #2 experienced a moderate increase in home dwell time during the implementation of strict social distancing measures (figure 4b ). compared to cluster #2, where the daily home dwell time increased up to 1,200 mins, cbgs in cluster #3 saw a more dramatic increase, as the home dwell time for most of the cbgs in cluster #3 reached 1,400 mins (out of 1,440 mins in a day) in march and april, suggesting that mitigation measures have greatly changed people's travel behavior in these cbgs (figure 4c ). note that the three identified clusters contain different numbers of cbgs: clusters #1, #2, and #3 have 157 cbgs, 778 cbgs, and 552 cbgs, respectively. figure 5 shows the spatial distribution of the three cbg clusters, which presents a certain level of spatial autocorrelation, especially for cluster #2 and cluster #3. the global moran's i [38] for the distribution of the three identified clusters is 0.243, and it is significant at the significance level of 0.01. in general, the spatial distribution implies that demographic/socioeconomic variables potentially drive the disparity in home dwell time during the pandemic.
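global moran's i, used above to test the spatial autocorrelation of the cluster assignments, has a compact closed form: $I = (n/S_0)\,(z^{\top} W z)/(z^{\top} z)$, where $z$ are deviations from the mean and $W$ is a spatial weights matrix. a minimal numpy sketch with a binary contiguity matrix (an illustration, not the authors' gis workflow):

```python
import numpy as np

def global_morans_i(values: np.ndarray, w: np.ndarray) -> float:
    """Global Moran's I = (n / S0) * (z' W z) / (z' z), where z are
    deviations from the mean and w is an (n x n) spatial weights matrix
    with a zero diagonal (e.g., 1 for contiguous CBGs, 0 otherwise)."""
    z = values - values.mean()
    n, s0 = len(values), w.sum()
    return float((n / s0) * (z @ w @ z) / (z @ z))
```

spatially clustered values yield a positive i (as with the reported 0.243), while alternating values yield a negative i.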
the distribution of cbgs in cluster #3 suggests a high correlation between home dwell time and income, as the distribution patterns of cbgs in cluster #3 and cbgs with high household income (see figure 3a ) are largely similar. north metro atlanta, where cbgs with high percentages of work-from-home workers and high educational levels are concentrated, exhibits a strong response to the stay-at-home order, evidenced by the high concentration of cbgs in cluster #3, a cluster with significantly increased home dwell time in march and april. the selected sixteen demographic/socioeconomic variables present unique distribution patterns in the three identified clusters (figure 6 ). compared with the other two clusters, cluster #3 is characterized by a high median household income, a high percentage of high-earning groups, a low percentage of low-earning groups, and a low unemployment rate, suggesting that residents in wealthy cbgs responded to the stay-at-home order more aggressively by considerably reducing their out-of-home activities. it indicates that financial resources can, to a certain degree, influence the effectiveness of policies, as stated in other studies [16, 39] . in terms of racial composition, the three clusters are distinctly different. the mean black percentages of cbgs in clusters #1, #2, and #3 are respectively 49.5%, 31.3%, and 14.5%. cbgs in cluster #1 (with an unnoticeable home dwell time increase) present much higher black percentages than cluster #3 (with a strong home dwell time increase), revealing that the stay-at-home order was less effective for cbgs with higher black percentages. this finding coincides with other recent studies that identified racial disparities during the covid-19 pandemic [40, 41] .
as expected, cluster #1 also presents a higher single-parent family percentage, given the fact that a high percentage of single-parent families is usually seen in black communities [42] . in contrast, the three identified clusters present similar hispanic and female percentages, indicating their weaker role in distinguishing the patterns of home dwell time. as for education, cbgs in clusters #1 and #2 show similar distributions of the percentage of low education (42.1% and 43.1% as means), while cbgs in cluster #3 show a considerably lower percentage (22.7% as mean). a reversed pattern can be found for high education, where cluster #3 presents a notably higher percentage of high education compared to clusters #1 and #2. the percentages of short-commuters remain similar in all three clusters, while the percentages of long-commuters differ. the mean percentages of long-commuters in clusters #1, #2, and #3 are 27.3%, 31.8%, and 35.4%, respectively. this result points out that a stronger increase in home dwell time goes in tandem with a higher percentage of long-commuters. figure 6 . selected demographic/socioeconomic variables in the three identified clusters. the descriptions of these variables can be found in table 1 . we perform anova to assess the statistical difference of demographic/socioeconomic variables among the three identified clusters and a post-hoc tukey's test to evaluate the statistical difference between each cluster pair. the results from anova suggest that all selected variables, except for the percentage of females (pct_female) and the percentage of short-commuters (pct_short_commute), show a statistically significant difference (α = 0.001) among the three clusters (table 2 ).
the results reveal that gender and the percentage of short-commuters are not significantly different (α = 0.001) among the means of the three identified clusters, indicating that these two variables play a weaker role in explaining the disparity in patterns of home dwell time. to provide deeper insights into the comparisons of selected variables between specific pairs of clusters, we further conduct a post-hoc tukey's test (figure 7) . for variables regarding economic status, cluster #3 is statistically different (α = 0.001) from clusters #1 and #2 in all four economic-related variables, i.e., pct_low_income, pct_high_income, median_hhinc, and pct_unemployrate. cluster #1 and cluster #2 present a weaker difference (α = 0.01) in median_hhinc and are not significantly different in pct_high_income. results for the racial and ethnic variables suggest that the three clusters are statistically different from each other in pct_white, pct_black, and pct_hispanic, despite the weaker difference in pct_hispanic (α = 0.05) between clusters #1 and #2. the difference in education (pct_low_edu and pct_high_edu) is not significant between cluster #1 and cluster #2 but is significant (α = 0.001) when comparing cluster #3 to either cluster #1 or #2. it suggests that cbgs in cluster #3, a cluster with a strong increase in home dwell time, are characterized by residents with high education, which is statistically different from the other two clusters. in addition, the three clusters are statistically different (α = 0.001) from each other in terms of long-commuters (pct_long_commute) and car ownership (pct_0car), suggesting that these two variables partially explain the disparity in home dwell time.
this study applies a time-series clustering technique to categorize fine-grained mobility records (at the cbg level) during the covid-19 pandemic. through the investigation of the demographic/socioeconomic variables in the identified time-series clusters, we find that they are able to explain the disparity in home dwell time in response to the stay-at-home order, which potentially leads to disproportionate exposure to the risk from covid-19. this study also reveals that socially disadvantaged groups are less likely to follow the order to stay at home, pointing out that extensive gaps in the effectiveness of social distancing measures exist between socially disadvantaged groups and others. to make matters worse, the existing socioeconomic-status-induced disparities are often exaggerated by the shortcomings of u.s. protection measures (e.g., health insurance, minimum incomes, unemployment benefits), potentially causing long-term negative outcomes for the socially disadvantaged populations [10] . in addition to the many pieces of epidemiological evidence that prove a strong relationship between social inequality and health outcomes [43, 44] , this study offers evidence from the covid-19 pandemic we are facing. specifically, we find that all selected variables, except for the percentage of females (pct_female) and the percentage of short-commuters (pct_short_commute), show a statistically significant difference (α = 0.001) among the three identified clusters.
cbgs in cluster #3, a cluster with a strong response in home dwell time, are characterized by a high median household income, a low black percentage, a high percentage of high-earning groups, a low unemployment rate, high education, a low percentage of single parents, high car ownership, and a high percentage of long-commuters. the statistically significant differences of demographic/socioeconomic variables in cluster #3 collectively point out the privilege of the advantaged groups, usually the white and the affluent. the weak response from the socially disadvantaged groups in home dwell time can possibly be explained by the fact that policies can sometimes unintentionally create discrimination among groups with different socioeconomic status [16] , as people react to policies based on the financial resources they have [45] , which in turn influences the effectiveness of the policies. our study reveals that the long-standing inequity issue in the u.s. stands in the way of the effective implementation of social distancing measures. thus, policymakers need to carefully evaluate the inevitable trade-offs among different groups and make sure the outcomes of their policies reflect not only the preferences of the advantaged but also the interests of the disadvantaged. it is important to mention several limitations of this study and provide guidelines for future directions. first, we acknowledge the subjectivity of predefining the number of clusters in the kmeans clustering algorithm. in this study, we set the number of clusters to three (i.e., k = 3) via the investigation and interpretation of the home dwell time records from safegraph. we notice that, even after the preprocessing, some cbgs still present unstable temporal patterns due to the low and varying daily device count.
our interpretation of the data records reveals three distinct temporal patterns with a strong, moderate, and unnoticeable increase in home dwell time during march and april (hence, k is predefined as 3). to ensure the interpretability of clusters, selecting the number of clusters in kmeans via prior knowledge (a priori) is common. however, we acknowledge that approaches like the elbow curve [46] and silhouette analysis [47] are largely adopted to facilitate the optimization of k without prior knowledge. when conducting a cross-city comparison or reproducing our approach in another region, we advise re-investigating the pattern of the time-series or adopting the aforementioned approaches to derive a reasonable setting of k. second, we construct and cluster the time-series of home dwell time using the data in the year 2020 (january 1 to august 31), without considering the changes in the time-series compared to the previous year. it is reasonable to assume that deriving a cross-year change index would facilitate the identification of cbgs that behave differently compared to the year 2019. however, we need to acknowledge that involving data records from the year 2019 inevitably introduces a certain level of uncertainty, as the daily device count may vary substantially, leading to different representativeness of the same cbg between the two years. in addition, the kmeans time-series clustering algorithm in this study takes the 8-month period as input. further efforts can be directed towards the exploration of how cbgs behave differently within a certain time window, e.g., march and april, when strict social distancing measures were implemented. third, this study selects a total of sixteen variables from five major categories and explores the distribution of these variables in the three identified clusters.
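as an alternative to fixing k a priori, the silhouette analysis mentioned above can score candidate values of k directly. a scikit-learn sketch (illustrative; the paper itself fixed k = 3 by inspection):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def pick_k_by_silhouette(series: np.ndarray, candidates=range(2, 7),
                         seed: int = 0):
    """Return the k in `candidates` maximizing the mean silhouette
    coefficient, along with the score for every candidate."""
    scores = {}
    for k in candidates:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=seed).fit_predict(series)
        scores[k] = silhouette_score(series, labels)
    return max(scores, key=scores.get), scores
```

for a cross-city replication, running this over the new region's series gives a data-driven k instead of carrying over k = 3.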
although previous studies have demonstrated the strong linkage between these variables and participation in out-of-home activities, we cannot rule out the possible contribution of other demographic/socioeconomic variables that are not included in this study. future studies need to incorporate more variables to understand their roles in how social distancing guidelines are practiced. in addition, it is reasonable to assume that these variables drive the disparity in home dwell time not independently but collectively. therefore, statistical approaches like multinomial logit regression [48] can be used to further investigate the interactions among these variables towards time-series-based cluster generation. finally, it should be noted that the demographic structure, spatial pattern, and built environment vary substantially across areas, especially across densely populated urban fabrics [49, 50] . thus, the influence of demographic/socioeconomic variables on the disparity in home dwell time following the stay-at-home order may not hold everywhere and tends to vary geographically. in addition, local governments had differing responses to the pandemic with varying strictness of the implemented social distancing measures, potentially leading to an unequal impact that disfavors disadvantaged groups. this study only explores the situation in metro atlanta, which cannot be generalized to other regions without caution. thus, it is necessary to conduct comparative studies that include multiple regions to better understand the contribution of demographic/socioeconomic variables to the impact of the covid-19 pandemic on mobility-related behaviors. this study categorizes the time-series of home dwell time records during the covid-19 pandemic, and further explores which demographic/socioeconomic variables differ among the categories with statistical significance.
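the suggested follow-up — modeling cluster membership as a joint function of the variables — can be sketched as a multinomial logit. here we use scikit-learn's `LogisticRegression`, which fits a multinomial model for the three-class case; the feature matrix and labels are hypothetical stand-ins for the sixteen variables and the cluster assignments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_multinomial_logit(X: np.ndarray, cluster_labels: np.ndarray):
    """X: (n_cbgs, n_variables) matrix of demographic/socioeconomic
    variables; cluster_labels: cluster id (0, 1, 2) per CBG. The fitted
    coefficients describe how the variables jointly relate to cluster
    membership, rather than testing each variable in isolation."""
    model = LogisticRegression(max_iter=1000)
    return model.fit(X, cluster_labels)
```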
taking the atlanta-sandy springs-roswell metropolitan statistical area (metro atlanta) as a study case, we investigate the potential driving factors that lead to the disparity in the time-series of home dwell time, providing fundamental knowledge that benefits policy-making for better mitigation measures in future pandemics. we find that demographic/socioeconomic variables can explain the disparity in home dwell time in response to the stay-at-home order, which potentially leads to disproportionate exposure to the risk from covid-19. the results further suggest that socially disadvantaged groups are less likely to follow the order to stay at home, pointing out that extensive gaps in the effectiveness of social distancing measures exist between socially disadvantaged groups and others. specifically, we find that cbgs with a strong response to the stay-at-home order are characterized by a high median household income, a low black percentage, a high percentage of high-earning groups, a low unemployment rate, high education, a low percentage of single parents, high car ownership, and a high percentage of long-commuters, pointing out the privilege of the advantaged groups, usually the white and the affluent. in other words, populations with lower socioeconomic status may lack the freedom or flexibility to stay at home, leading to exposure to more risks during the pandemic. our study reveals that the long-standing inequity issue in the u.s. stands in the way of the effective implementation of social distancing measures. thus, policymakers need to carefully evaluate the inevitable trade-offs among different groups and make sure the outcomes of their policies reflect not only the preferences of the advantaged but also the interests of the disadvantaged.
references:
- covid-19) - events as they happen
- covid-19) - weekly epidemiological update
- the covid-19 vaccine development landscape
- social distancing responses to covid-19 emergency declarations strongly differentiated by income
- the effect of human mobility and control measures on the covid-19 epidemic in china
- transmission potential and severity of covid-19 in south korea
- covid-19 and italy: what next?
- first cases of coronavirus disease 2019 (covid-19) in france: surveillance, investigations and control measures
- unemployment effects of stay-at-home orders: evidence from high frequency claims data. institute for research on labor and employment working paper 2020
- the characteristics of multi-source mobility datasets and how they reveal the luxury nature of social distancing in the u.s. during the covid-19 pandemic
- the determinants of the differential exposure to covid-19 in new york city and their evolution over time. covid economics: vetted and real-time papers
- economic and social consequences of human mobility restrictions under covid-19
- social distancing, internet access and inequality (no. w26982)
- the benefits and costs of social distancing in rich and poor countries
- urban residents in states hit hard by covid-19 most likely to see it as a threat to daily life
- are stay-at-home orders more difficult to follow for low-income groups? working paper
- american community survey information guide
- when to use 1-year, 3-year, or 5-year estimates
- distance traveled in three canadian cities: spatial analysis from the perspective of vulnerable population segments
- a time-use investigation of shopping participation in three canadian cities: is there evidence of social exclusion?
- my car, my friends, and me: a preliminary analysis of automobility and social activity participation
- relative accessibility deprivation indicators for urban settings: definitions and application to food deserts in montreal
- human mobility, and covid-19. arxiv preprint 2020
- an efficient kmeans clustering algorithm: analysis and implementation
- clustering of time series data - a survey
- selection of k in k-means clustering
- analysis of variance (anova)
- tukey's honestly significant difference (hsd) test
- metropolitan and micropolitan statistical areas population totals and components of change
- multi-city study of urban inequality
- inequities of transit access: the case of
- sprawl atlanta: social equity dimensions of uneven growth and development
- atlanta: race, class, and urban expansion
- kemp - office of the governor
- where states reopened and cases spiked after the u.s. shutdown, the washington post
- local spatial autocorrelation statistics: distributional issues and an application
- the impact of social vulnerability on covid-19 in the us: an analysis of spatially varying relationships
- assessing racial and ethnic disparities using a covid-19 outcomes continuum for new york state
- the covid-19 pandemic: a call to action to identify and address racial and ethnic disparities
- the changing demographic and socioeconomic characteristics of single parent families
- anthropology, inequality, and disease: a review
- the income-associated burden of disease in the united states
- disadvantage, inequality, and social policy
- review on determining number of cluster in k-means clustering
- selecting variables for k-means cluster analysis by using a genetic algorithm that optimises the silhouettes
- multinomial logistic regression algorithm
- impact of metropolitan-level built environment on travel behavior
- how built environment affects travel behavior: a comparative analysis of the connections between land use and vehicle miles traveled in us cities

key: cord-035127-we3lmrps authors: yoo, geunsik title: real-time information on air pollution and avoidance behavior: evidence from south korea date: 2020-11-10 journal: popul environ doi: 10.1007/s11111-020-00368-0 sha: doc_id: 35127 cord_uid: we3lmrps

this study provides new empirical evidence on the
relationship between information about air pollution and avoidance behavior. many countries provide real-time information to describe the current level of air pollution exposure. however, little research has been done on people’s reactions to that real-time information. using data on attendance at professional baseball games in south korea, this study investigates whether real-time information on particulate matter affects individuals’ decisions to participate in outdoor activities. regression models that include various fixed effects are used for the analysis, with the results showing that real-time alerts reduce the number of baseball game spectators by 7%, and that the size of the effect is not statistically different from that of air pollution forecasts. the study demonstrates that providing real-time information can be a way to protect the public’s health from the threat of air pollution. moreover, the findings suggest that having easy access to the relevant information and an awareness of the risks involved are necessary for a real-time information policy to succeed. the hazards of air pollution are well known, and government authorities worldwide have implemented various policies to protect their people from the threat it presents. providing information on the level of air pollution is one of these efforts and is based on the expectation that people will adjust their behavior in response to the information. thus, the information provided typically includes behavioral guidelines to explain the actions the public should take in response to elevated levels of air pollution. a number of studies have shown that providing information on air pollution, such as forecasts, prompts avoidance behavior (neidell 2009; graff zivin and neidell 2009; janke 2014; altindag et al. 2017) . 
with developments in information and communication technologies, providing and acquiring information have become easier, and the types of information that can be exchanged have become more diverse than ever before. many countries now provide the public with real-time information on air pollution, which describes the current level of pollution exposure more accurately than air pollution forecasts can. in addition, individuals can obtain this information easily and immediately through smart devices (mobile phones, tablets, etc.). while the expectation is that people will adjust their behavior based on real-time information, little research has examined people's actual reactions to that information. providing access to information in real time does not guarantee that people will respond to it. therefore, this study analyzes whether real-time information about air pollution triggers avoidance behavior, based on data about air pollution levels and baseball game attendance in south korea from 2012 to 2016. since the mid-2000s, the south korean government has been providing real-time information on air pollutants via a website created for this purpose (www.airkorea.or.kr). the information has also been disseminated through an open api system since december 2013, enabling the public to access the information from various portals or mobile applications. this study focuses on information regarding particulate matter (pm) among the various air pollutants tracked by the south korean government. many studies have investigated the health effects of pm, which is associated with morbidity and mortality from cardiovascular and respiratory diseases (pope and dockery 2006; epa 2014). pm also negatively affects non-health aspects, such as human capital formation, cognitive abilities, and labor productivity (neidell 2017; roth 2017; shier et al. 2019). pm is the most significant threat among the air pollutants in south korea.
the annual average pm10 concentration in south korea as of 2016 was 47 μg/m³, more than double the world health organization (who) standard for acceptable risk. kim et al. (2016) reported that south koreans perceived "micro dust" as the most significant among public health threats. the most popular mobile application that presents real-time information on pm in south korea, named "misemise," has been downloaded more than one million times from the google play store (as of april 2020). therefore, it is reasonable to assume that some south koreans would adjust their behavior based on real-time pm information. this study focuses on pm10, particulate matter with an aerodynamic diameter smaller than 10 μm, because information about pm2.5, the other form of particulate matter, was made available only after 2015. given that the typical avoidance behavior in response to air pollution is to reduce one's outdoor activities, the reaction to real-time pm10 information is measured using the change in attendance at professional baseball games. baseball is one of the most popular sports in south korea. focusing on data about attendance at professional baseball games has some useful attributes for this study. first, since the baseball season runs from march to october, we observe extensive variation in pm levels throughout the season. second, given that baseball games are typically held at night during the summer, the effects of ozone, another pollutant that triggers avoidance behavior, can be eliminated (neidell 2009). third, the data are suitable for investigating avoidance behavior considering that the amount of time it takes to play a baseball game (approximately 3 h) is generally longer than for other sports, and the amount of time spent at an outdoor event is directly linked to an individual's exposure to air pollution on highly polluted days. the results of this study show that people do adjust their behavior in response to real-time pm10 information.
i find that attendance at baseball games decreases by approximately 7% when real-time information shows the level of pm10 to be bad or very bad, and the results are highly robust to alternative specifications. the effects of real-time information have increased drastically since 2014 due to the greater accessibility of the information and heightened sensitivity to pm. i also find that the size of the effect of real-time information is not statistically different from that of air pollution forecasts. these results demonstrate that providing real-time information can be a way to protect people's health from the threat of air pollution and suggest that easily accessible channels of information and awareness of the risk are necessary for a real-time information policy to succeed. the risks created by air pollution provide incentives to avoid it, and a typical avoidance behavior is to reduce one's outdoor activities. previous studies investigated the relationship between information on levels of air pollution and changes in outdoor activities. neidell (2009), janke (2014), and altindag et al. (2017) reported that information about air pollution such as smog alerts led people to reduce their outdoor activities, reducing the adverse health effects of air pollution. the scope of those studies was more comprehensive than what this study addresses in that they focused not only on the existence of avoidance behavior but also on the health effects. however, the previous studies did not consider real-time information, which is of growing importance. in this way, this study offers a new perspective. nam and jeon (2019) examined the effect of pm on the number of spectators in attendance at professional baseball games in south korea. the authors reported that a high level of pm on the morning of a game day decreased the number of spectators.
they concluded that poor visibility due to the high level of pm caused the reduced attendance but did not consider the effect of information. in contrast, this study shows that information about the level of pm is a key factor in the decline in spectators. there are other notable studies concerning avoidance behavior. moretti and neidell (2011) estimated the welfare costs of avoidance behavior using daily boat traffic as an instrumental variable. graff zivin et al. (2011) showed that responses to information about water quality violations led people to buy bottled water. sheldon and sankaran (2019) reported that poor air quality due to forest fires in indonesia induced people to stay indoors, resulting in increased electricity demand by households in singapore. zhang and mu (2018) and liu et al. (2018) found that a high level of pm increased online searches for and sales of anti-pm2.5 masks and air filters. kim (2019) showed that estimates of health effects based on data from south korea could be biased when avoidance behavior is not considered. yoon (2019) reported that retail sales declined when the level of pm became worse than the "bad" category, which indicated that people were reacting to government-enacted air quality standards. eom and oh (2019) demonstrated that an ambient level of pm10 indirectly affects avoidance behavior by changing subjective risk perceptions. the south korean government and its local agencies operate more than 250 air quality monitoring stations (as of 2017) and collect hourly data on various pollutants, including so2, no2, co, o3, pm10, and pm2.5. the national institute of environmental research (nier) of korea receives data from these monitoring stations and disseminates it to the public via various channels, such as a government-operated website (www.airkorea.or.kr), portals, and mobile applications.
real-time pm10 information is provided as numerical values and by categories that are determined by the numerical values as follows: 0-30, good; 31-80, moderate; 81-150, bad; and 151 or above, very bad. each of these categories is represented by a different color, i.e., blue, green, yellow, and red, to express the risk levels visually and intuitively. behavioral guidelines associated with each category, provided along with the real-time information, are shown in table 1. the nier distributes real-time information on pm at various levels, from the individual monitoring station to province-level data. this study uses county-level information that is generated by averaging the pm10 concentrations of monitors within a county. information about air pollution forecasts is also considered in this study. the south korean government has been operating an air quality forecasting system since 2014 that provides forecasts of pm and ozone levels for the next day, four times a day at the following times: 5 a.m., 11 a.m., 5 p.m., and 11 p.m. this study uses information from the 5 p.m. forecast because it is disseminated via the main evening news, which is presumed to be the most widely viewed news source. these forecasts are provided in terms of the four categories, namely, good, moderate, bad, and very bad, but not as numerical values. since the forecast is disseminated at the province level, all counties within a province receive the same forecast information. data on pm10 forecasts are available on www.airkorea.or.kr. data on professional baseball games are obtained from the korea baseball organization, including information about the ballpark, the home team, the away team, and the number of attendees per game. the baseball season in south korea runs from march to october, and each team plays approximately 140 games per season.
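the category scheme can be made concrete with a short sketch. `categorize_pm10` and `is_alert` are hypothetical helper names, and the thresholds are the cutoffs quoted above (with "very bad" taken as 151 μg/m³ and above):

```python
def categorize_pm10(value):
    """map a pm10 reading (ug/m3) to its south korean category label."""
    if value <= 30:
        return "good"
    elif value <= 80:
        return "moderate"
    elif value <= 150:
        return "bad"
    return "very bad"


def is_alert(value):
    """a real-time alert corresponds to the 'bad' or 'very bad' categories."""
    return categorize_pm10(value) in ("bad", "very bad")
```

for example, the sample average of about 43 μg/m³ falls in the moderate band, so no alert would be triggered, while a reading of 95 μg/m³ would.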
the league consisted of eight teams in 2012, and two teams were added during the analysis period; thus, there were ten teams in the korean professional baseball league as of 2016. three teams changed their home ballparks during the analysis period. only data from the regular season was used; in other words, post-season (playoff/championship) games were not considered. occasionally, a home team plays at a substitute field rather than at its home ballpark. these cases are also excluded from the sample because the number of such games is negligible. weather data are obtained from the climate data open portal (data.kma.go.kr) operated by the korea meteorological administration. this portal provides hourly data from more than 90 weather monitors across the country. for the several counties that have no weather stations, the weather observed from the station nearest to the administrative office of a county is defined to be the weather of that county. table 2 shows summary statistics for the data. the study covers 3004 baseball games, and all variables were linked based on location and day. the average number of fans in attendance per game was 11,579, and the average pm10 concentration for the overall sample was approximately 43 μg/m³. over 90% of real-time pm information was categorized as good or moderate. real-time alerts, indicating that real-time pm10 information was in the "bad" or "very bad" category, occurred for 257 games. forecast alerts, referring to instances when the forecasted level of pm10 on a game day was expected to be bad or very bad, occurred for 110 games. forecast alerts have a value of 0 before 2014, when the air pollution forecast system did not exist. for those days where real-time alerts occurred, only 26.1% of them were simultaneously affected by forecast alerts. even if the time period is limited to 2014 or later, in which the forecasting system was in operation, only 37.9% of the real-time alerts appeared together with forecast alerts.
this means that the correlation between the types of alerts may not cause problems of multicollinearity in estimations. [notes: eight teams existed in 2012, and two teams joined the league in 2013 and 2015. one team moved its ballpark in 2014, and two teams moved in 2016. the summary statistic is of real-time information available at the reservation cancelation deadline of a game; the reason for using this variable is explained in section 4. real-time and forecast alerts do not match each other well because they cover different regions and periods: the forecast provides information on daily average pollution levels at the province level, while real-time information is hourly at the county level.] [table 1, behavioral guidelines by category (source: www.airkorea.or.kr): limit extended or strenuous outdoor activities; people should avoid strenuous outdoor activities; sensitive people should stay indoors. the pm10 classification follows the standard of the south korean government, which is less stringent than the who standard.] figure 1 displays the variations in attendance, frequency of real-time alerts as a percentage of games played, and the average pm10 concentration by year and by month. pm pollution is more severe in the spring; therefore, real-time alerts appear more frequently in the first half of the baseball season. the number of attendees per game is especially large in march and may. the reason for the high average number of spectators in march is that the season starts at the end of march and many people are eager to attend a baseball game at the start of the season, especially the popular first game of the year. may has a large average number of spectators because it is family month in korea. figure 2 shows the locations of the ballparks. the average annual pm10 levels in counties that have a ballpark ranged from 35 to 50 μg/m³ during the analysis period.
a total of ten teams participate in the korean baseball league, but two of them share the same ballpark, so only nine ballparks are shown on the map. figure 3 shows the distribution of the percentage of real-time alert occurrences among games held on the same day (days when the percentage is zero are excluded from the figure). the histogram shows variation in the real-time alerts within a given day, not only for the entire country but also in areas where a number of ballparks are clustered, such as the northwest region. the influence of real-time information is estimated using the real-time pm10 information available as of the reservation cancelation deadline in the county where the game is to be held. the assumption behind the use of this variable is that people adjust their behavior by canceling their reservations, and they refer to the real-time pm information available at the deadline by which ticketholders must decide whether or not to cancel. later in the analysis, i verify this assumption. only those seats that have not been reserved can be purchased on-site, and people have to line up for on-site purchases; therefore, those who want to go to a ball game usually make reservations in advance. a report on professional sports in south korea revealed that for the 2016 season, only 20.5% of people purchased baseball game tickets on-site (kpsa 2017). although the rules for canceling a reservation differ across teams, all reservations can be canceled up to 2 to 4 h before the game. the cost of canceling is approximately $1 plus 10% of the ticket price, which varies from $10 to $80, and cancelation is not possible after the deadline. actual data on canceled reservations would be helpful in this analysis but none was available; therefore, i was only able to use the number of attendees in the study. as mentioned above, real-time pm10 information is provided not only as numeric values but also as categories.
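as a small worked example of the cancelation rule quoted above (a hypothetical `cancelation_fee` helper; the roughly-$1 flat component and the 10% rate are the figures given in the text):

```python
def cancelation_fee(ticket_price):
    """approximate cost of canceling a reservation:
    about $1 plus 10% of the ticket price (as described in the text)."""
    return 1.0 + 0.10 * ticket_price


# for the quoted $10-$80 ticket range, the fee runs from about $2 to $9
print(cancelation_fee(10.0))
print(cancelation_fee(80.0))
```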
although the actual level of pm and the associated category are both pieces of information, people are more likely to respond to the category than to the actual level of pm10 because each category has its own color and behavioral guidelines corresponding to the risk level. moreover, high levels of pm are directly related to poor visibility or physical reactions, so the effect estimated from the actual level of pm could not be considered entirely as the result of viewing the information. on the other hand, people's reactions to a category can be interpreted as the direct effect of information, as the thresholds for each category are arbitrarily determined by authorities. therefore, a dummy variable indicating whether the real-time pm10 is categorized as bad or very bad as of the cancelation deadline is used to estimate the real-time information effects. a number of previous studies have used air pollution forecasts to identify avoidance behavior (neidell 2009; janke 2014; altindag et al. 2017). in this study, however, forecasts can be a confounding factor that affects the estimation of the impact of real-time information. the forecast is associated with both real-time information and the number of attendees simultaneously, so omitting it would cause omitted variable bias. therefore, all regression models in this study include a forecast alert variable indicating whether the forecasted pm for a ballpark location is categorized as bad or very bad. [fig. 2 caption: there are ten teams across the country and four teams in the northwest region.] the regression model controls for the average pm10 level before the game starts. the pm10 level before the game can affect the number of attendees because people can identify a high level of pollution based on poor visibility or physical reactions (e.g., difficulty breathing, eye irritation). also, the higher the level of pm10 before a game, the more likely a real-time alert will occur.
although pm10 levels during a game can also be associated with real-time alerts, this is less likely to affect the number of attendees. the attendance data only show the number of people who enter the ballpark; therefore, if people leave the game early due to high levels of pollution, it does not influence the number of attendees. the regression model used to estimate the effect of real-time information is as follows:

log(y_ct) = β0 + β1·ra_ct + f(pm_ct) + β2·fa_pt + g(w_ct) + team + time + ε_ct   (1)

where y_ct is the number of attendees in the game held in county (or ballpark) c on day t, entered in logs. ra indicates that the real-time information available at the cancelation deadline is categorized as either bad or very bad. if real-time information affects attendance, the effect will be shown in β1. pm is the average level of pm10 before the game starts and is included, via f(·), as a quadratic function or as 20 μg/m³ interval dummies. fa is the indicator for forecast alerts, which indicates that the forecasted pm is categorized as bad or very bad. fa is provided at the province level (p), so counties within the same province have the same fa. w represents a set of weather variables, including temperature, precipitation, wind speed, and relative humidity. omitting weather variables could also create omitted variable bias because weather affects both attendance and the level of air pollution. weather variables are included as second-order polynomials, g(·). team is a vector of home team fixed effects, away team fixed effects, and home-away team fixed effects. home team and away team fixed effects capture the time-invariant characteristics of teams when they are the home team or an away team. the unique relationships between teams can influence the number of attendees. as shown in fig. 2, some teams are clustered in the northwest and southeast regions of the country, and two teams share a ballpark. this indicates that the number of fans for the away team is likely to vary depending on the location of the home team's ballpark.
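eq. (1) can be illustrated on synthetic data. the sketch below is not the author's estimation code: it simulates log attendance with a built-in alert effect of −0.07, a linear pm term, and team fixed effects (all magnitudes and variable names here are assumptions), and then recovers the alert coefficient by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000                                   # roughly the size of the real sample

pm = rng.uniform(10, 160, n)               # average pm10 level before the game
alert = (pm + rng.normal(0, 15, n) > 80).astype(float)   # real-time alert dummy
team = rng.integers(0, 9, n)               # hypothetical home-team ids
team_fe = np.linspace(-0.3, 0.3, 9)[team]  # time-invariant team effects

# simulated log attendance with a built-in -0.07 alert effect
log_y = 9.3 - 0.07 * alert + 0.001 * pm + team_fe + rng.normal(0, 0.05, n)

# design matrix: intercept, alert, quadratic in pm, team dummies (one dropped)
X = np.column_stack(
    [np.ones(n), alert, pm, pm ** 2]
    + [(team == k).astype(float) for k in range(1, 9)]
)
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
print(round(float(beta[1]), 3))            # ols estimate of the alert effect
```

with enough data the recovered coefficient should land near the built-in −0.07, which is the order of magnitude the paper reports for real-time alerts.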
in addition, rivalries between two teams can affect game attendance. home-away fixed effects, measured by the interaction terms of the home team and away team dummies, account for the unique relationships between teams. time represents a set of time fixed effects including year, month, day of the week, holiday, and the mers outbreak, to capture temporal and seasonal trends in baseball game attendance and real-time alerts. during the analysis period, a few teams changed their ballparks; the interaction terms for the home team and year dummy variables are included in the model to account for such changes. the results obtained from the regression specified in eq. (1) are shown in table 3. each column provides results for different specifications of the model, but weather, team fixed effects, and time fixed effects are included in all of the specifications. results show that real-time information about pm available at the cancelation deadline reduces baseball game attendance by approximately 7%. moreover, the coefficient of the impact of real-time alerts is largely unaffected by the functional form of the average pm level before the start time of the game and is not sensitive to the forecast alert. the results suggest that the issuance of real-time alerts affects participation in outdoor activities independently of the actual pm level or the forecast. considering nam and jeon (2019)'s results reporting that the number of baseball spectators decreased by 8.9% when the average pm10 on the morning of the game day was high (81 μg/m³ or over), we can infer that real-time information is a key factor for the decrease. in this analysis, reverse causality can exist in that the regional air pollution level could be influenced by game-related traffic (locke 2019). although home team fixed effects capture the average level of local traffic, the daily traffic variations due to baseball games can affect the level of air pollution and, consequently, real-time alerts.
however, even if reverse causality does exist, it would only cause a downward bias on the effect of real-time information because traffic and air pollution are positively correlated. [table 3 notes: robust standard errors in parentheses. ***p < 0.01, **p < 0.05, *p < 0.1. the analysis period is from 2012 to 2016. real-time alert indicates that real-time information on pm10 is categorized as bad or very bad. forecast alert indicates that the forecasted level of pm10 is categorized as bad or very bad. pm indicates the average level of pm10 before the game starts and is included as a quadratic function or 20 μg/m³ interval dummies. weather, team-fe, and time-fe are controlled.] despite the possibility of underestimation, the results of the analysis are statistically significant. this study uses the information available as of the cancelation deadline as the real-time information to which people respond. the underlying assumption is that people adjust their behavior in response to information about the level of pm by canceling their reservations, and they refer to the information that is available as of the cancelation deadline to make their decision. i check the validity of this assumption by including all of the real-time pm10 information available before and after the cancelation deadline. real-time alerts occurring within 6 h before and after the deadline are included in the model. since real-time information at a given time can be highly correlated with that of adjacent times, real-time alerts around the deadline are included at 2- to 3-h intervals. table 4 and fig. 4 show that real-time alerts occurring after the cancelation deadline have no significant effect on attendance. this supports the hypothesis that the people who do respond to real-time information adjust their behavior by canceling their reservations prior to the deadline.
in addition, although the effects of real-time alerts increase as the deadline approaches, the alert as of the deadline has the largest effect and is the only statistically significant source of information. therefore, we conclude that our underlying assumption, namely, that people adjust their behavior by canceling their reservations and that they refer to information as of the cancelation deadline to decide whether or not to cancel, is reasonable. the effects of real-time information can differ by year if factors such as accessibility and sensitivity to real-time information vary over time. the south korean government began to publish air pollution data through an open api system in december 2013, allowing people to easily check pm information in real time via various portals and mobile applications. furthermore, the international agency for research on cancer (iarc) classified pm as a group 1 carcinogen in october 2013 (iarc 2013). figure 5 shows the google search trend for pm10 in south korea, revealing significant increases after these events. therefore, in this section, i examine whether the effect of real-time information differs before and after 2014. the interaction term of real-time alerts and a dummy variable indicating the years after 2014 is included in the main regression model to estimate this effect. column (1) of table 5 shows the result of the main regression model, which estimates the average impact of real-time alerts over the entire analysis period. column (2) shows the differential effect over time. the results show that real-time alerts on pm10 do not have a noticeable effect prior to 2014 and that the impact increases dramatically after 2014. this finding confirms that the increased accessibility of real-time information and sensitivity to the risks of pm exposure have changed the way people respond to real-time information about pm levels.
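the before/after-2014 comparison amounts to adding an interaction term to the model. in a hedged sketch on synthetic data (hypothetical names; effects of roughly zero pre-2014 and −0.09 afterwards are built in as assumptions), the two sub-period effects are read off the base and interaction coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3000
alert = rng.integers(0, 2, n).astype(float)   # real-time alert dummy
post = rng.integers(0, 2, n).astype(float)    # 1 for games from 2014 onward

# built-in effects: about -0.01 before 2014, about -0.09 afterwards
log_y = (9.3 - 0.01 * alert - 0.08 * alert * post + 0.02 * post
         + rng.normal(0, 0.05, n))

# intercept, alert, post-2014 dummy, and their interaction
X = np.column_stack([np.ones(n), alert, post, alert * post])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)

pre_effect = float(beta[1])                # alert effect before 2014
post_effect = float(beta[1] + beta[3])     # alert effect from 2014 onward
```

the same reading applies to the paper's table 5: the pre-2014 effect is the base alert coefficient, and the post-2014 effect is the base term plus the interaction term.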
unfortunately, it is not possible to decompose the contributions of accessibility and sensitivity to the overall change due to data limitations. however, we can infer that easily accessible information and education about risks are necessary to the success of a real-time information policy. figure 6, which displays the real-time information effects by year, also supports this finding. [table 5 notes: robust standard errors in parentheses. ***p < 0.01, **p < 0.05, *p < 0.1. the analysis period is from 2012 to 2016. real-time alert indicates that real-time information on pm10 is categorized as bad or very bad. in all analyses, pm, forecast alerts, weather, team-fe, and time-fe are controlled.] the effect of real-time information can also be differentiated by the importance of the particular baseball game. for example, more people may decide to go to the ballpark despite a high level of air pollution when the game is crucial for their team. this section investigates whether there are differentiated effects based on the importance of specific games by including the ranking of the home team and the difference in ranking between the home and away teams. variables related to ranking are converted as follows: the home team's ranking ranges from 1 to 10, with 10 representing first place and 1 representing last place. ranking differences range from 0 to 9; the larger the value, the closer the ranks of the home and away teams. [table 6 notes: robust standard errors in parentheses. ***p < 0.01, **p < 0.05, *p < 0.1. the analysis period is from 2012 to 2016. real-time alert indicates that real-time information on pm10 is categorized as bad or very bad. in all analyses, pm, forecast alerts, weather, team-fe, and time-fe are controlled. the home team's ranking ranges from 1 to 10, with 10 representing first place and 1 representing last place. ranking differences range from 0 to 9; the larger the value, the closer the ranks of home and away teams.] table 6 shows the results.
consistent with our expectations, fans are more likely to attend a baseball game when their team is highly ranked, or when their team is playing another team with a similar ranking. however, real-time information about pm levels does have a greater impact on more crucial games. the real-time alert decreases the number of spectators by an additional 1.25% when the home team's ranking increases by one unit. this may be because those spectators whose behavior is affected by the importance of the game are relatively less devoted to their team and therefore are more likely to forgo going to a ballpark on a high-pollution day. considering that a one-unit increase in ranking increases the number of spectators by 4.6% when the pm level does not trigger an alert, a real-time alert reduces approximately 27% of this increase. column (3) shows that the effect of real-time information is greater for fans who decide to attend a game based on the ranking difference (i.e., the importance of the game to their team). this section compares the magnitude of the effects of real-time alerts and forecast alerts using data after 2014. given that the forecast system was implemented in 2014 and that the effects of real-time alerts have surged since then, the data after 2014 are deemed most appropriate for this comparison. column (2) includes the interaction term of real-time and forecast alerts, and columns (3) and (4) estimate the effect of one type of alert on days when the other type of alert does not appear. table 7 shows that real-time alerts reduce baseball game attendance by approximately 6-8%, which is comparable to the main results. forecast alerts are shown to decrease the number of spectators by approximately 7-9%. although the regression coefficient of forecast alerts is slightly greater than that of real-time alerts across all specifications, the difference is not statistically significant.
therefore, the results suggest that people adjust their behaviors in response to real-time information, and the extent to which they depend on that information is almost the same as their dependence on forecasts. in this section, i conduct additional robustness checks on the effect of real-time information, and the findings support the main results. column (1) in table 8 estimates the regression with the dependent variable expressed in levels rather than log values. it shows that a real-time alert reduced attendance at baseball games in south korea by approximately 936 individuals, or 8.1% of average attendance. this percentage is slightly larger than the main result but is quite comparable. column (2) presents the results where dummy variables representing the week are included instead of month dummies. although month fixed effects account for seasonal trends, if time trends exist in our dependent and key explanatory variables within a month, the regression results could show a spurious relationship. the result in column (2) shows that the effect of real-time information is not affected by the inclusion of week dummies. column (3) in table 8 represents the result of a multi-pollutant model. pm is associated with other pollutants, given that gases such as sulfur oxides and nitrogen oxides can be transformed into pm through chemical reactions. this model controls for the average levels of so2, co, o3, and no2 before the game, and real-time alerts on these air pollutants are also included. the result is almost unchanged, even when the other pollutants are considered. in column (4), the games that attract full-capacity crowds are excluded from the sample. the main regression model can be considered a censored model with upper limits because each ballpark has a maximum number of spectators it can accommodate. i perform an analysis that excludes the games that attracted full-capacity crowds (318 cases) to remove any distortion related to these upper limits.
the result using the restricted sample shows that the effects are still largely negative and statistically significant. the models in columns (5) and (6) investigate the impact of real-time information on indoor activities. in 2016, nexen, one of the professional baseball teams in korea, changed its home stadium to the gocheok skydome, the first and only domed stadium in korea. the influence of information about pm on games played in this stadium could differ from that of other ballparks if people believe that games held in the domed stadium are not affected, or are significantly less affected, by pm. column (5) estimates the differentiated effect of the domed stadium by including the interaction term of real-time alerts and the gocheok skydome dummy in the model. separately, column (6) examines the effect of real-time pm information on basketball game attendance, as all basketball games are held indoors. the results in columns (5) and (6) show that attendance at games held in the gocheok skydome, as well as at basketball games, is negatively influenced by real-time pm alerts. this finding may imply that fewer people are willing to go out on highly polluted days, resulting in a lower number of visitors even to indoor facilities.

[footnote 15: the variance inflation factors (vif) of the real-time alert and the forecast alert are 1.92 and 2.91 for column (1), and 3.07 and 3.23 for column (2). the vifs reconfirm that the correlation between real-time and forecast alerts does not cause a serious problem in the regression.]
[footnote 16: there were 39 real-time alerts for o3, while there were no alerts for the other pollutants.]

this study investigates the impact of real-time information regarding the level of pm on outdoor activities using data on attendance at professional baseball games in south korea. the main results show that real-time alerts reduced the number of spectators at baseball games by approximately 7%. this result is robust under various model specifications.
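the multicollinearity check in footnote 15 uses the standard variance inflation factor; a minimal sketch of the relationship between a regressor's vif and the r-squared of regressing it on the other regressors (the value 1.92 is taken from the footnote; the helper names are mine):

```python
def vif(r_squared):
    # variance inflation factor for a regressor whose auxiliary regression
    # on the remaining regressors has the given r-squared
    return 1.0 / (1.0 - r_squared)

def auxiliary_r2(v):
    # invert the formula: how collinear must a regressor be to produce vif v?
    return 1.0 - 1.0 / v

# the reported vif of 1.92 corresponds to an auxiliary r-squared of about 0.48,
# well below the levels (vif of 5 or 10) that are usually flagged as problematic
print(round(auxiliary_r2(1.92), 2))
```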
the finding suggests that people adjust their behavior based on real-time information and that the dependence on real-time information about pollution levels is not statistically different from the dependence on air pollution forecasts. this study uses the real-time information about pm levels that is available as of the deadline for canceling baseball game reservations as the real-time information to which people respond, and additional analysis confirmed that the use of this variable is reasonable. i find that the effect of real-time information has dramatically increased since 2014 due to a change in the accessibility of the information and the public's sensitivity to the risks associated with pm exposure. real-time information on pm has a greater impact on spectators whose attendance is affected by the importance of the game. in addition, i find that the desire to avoid air pollution affects attendance even at indoor facilities. with the development of technology, authorities can more easily provide the public with information about air pollution in real time. despite its growing importance, little research has been done regarding the impact of real-time information.

[footnote 17: the gocheok skydome fixed effect is already captured in the main model because the interaction terms of home teams and year dummy variables are included in that model.]
[table notes: robust standard errors in parentheses. ***p < 0.01, **p < 0.05, *p < 0.1. the analysis period is from 2012 to 2016. real-time alert indicates that real-time information on pm appears as bad or very bad. in all analyses, pm, forecast information, weather, team fixed effects, and time fixed effects are controlled. in the multi-pollutant model, so2, co, o3, and no2 are considered. nexen, one of the korean professional baseball teams, changed its home stadium in 2016 to the gocheok skydome, the first and only domed stadium in korea. data on basketball games of the 2014-2015 and 2015-2016 seasons are used for column (6).]
this study identified that real-time information can be a way to protect people's health from the threat of air pollution by triggering avoidance behavior. these findings may apply not only to air pollution but also to other fields of health policy where real-time information can be provided and applied in a practical way. the results also indicate that for a real-time information policy to succeed, authorities should provide real-time information through easily accessible channels such as mobile applications or portals and should increase people's awareness of health risks through education. unlike some previous research, this study focuses only on a behavioral response and does not consider the costs or benefits of a specific action. unfortunately, i was not able to analyze the differential effects with respect to groups with greater than average health risks. nevertheless, this study is meaningful in that it provides new empirical evidence on the impact of real-time information and how it can help improve health policy decisions.
acknowledgments: this article is based on the first chapter of the author's ph.d. dissertation at seoul national university.
the author would like to thank chulhee lee, dea-il kim, sok chul hong, jungmin lee, sung won kang, and anonymous referees for their useful comments and suggestions.

funding: this research was supported by the bk21plus program (future-oriented innovative brain raising type, 21b20130000013) funded by the ministry of education (moe, korea) and the national research foundation of korea (nrf).

key: cord-026513-3myuf5q2 authors: feo-arenis, sergio; vujinović, milan; westphal, bernd title: on implementable timed automata date: 2020-05-13 journal: formal techniques for distributed objects, components, and systems doi: 10.1007/978-3-030-50086-3_5 sha: doc_id: 26513 cord_uid: 3myuf5q2

generating code from networks of timed automata is a well-researched topic with many proposed approaches, which have in common that they not only generate code for the processes in the network but necessarily generate additional code for a global scheduler which implements the timed automata semantics. for distributed systems without shared memory, this additional component is, in general, undesired. in this work, we present a new approach to the generation of correct code (without a global scheduler) for distributed systems without shared memory yet with (almost) synchronous clocks, if the source model does not depend on a global scheduler. we characterise a set of implementable timed automata models and provide a translation to a timed while language. we show that each computation of the generated program has a network computation path with the same observable behaviour.

automatic code generation from real-time system models promises to avoid human implementation errors and to be cost- and time-efficient, so there is a need to automatically derive (at least parts of) an implementation from a model.
in this work, we consider a particular class of distributed real-time systems consisting of multiple components with (almost) synchronous clocks, yet without shared memory, a shared clock, or a global scheduler. prominent examples of such systems are distributed data acquisition systems such as data aggregation in satellite constellations [16, 18], the wireless fire alarm system [15], iot sensors [30], or distributed database systems (e.g. [12]). for these systems, a common notion of time is important (to meet real-time requirements or for energy efficiency) and is maintained up to a certain precision by clock synchronisation protocols, e.g., [17, 23, 24]. global scheduling is undesirable because schedulers are expensive in terms of network bandwidth and computational power, and because the number of components in the system may change dynamically, so keeping track of all components requires large computational resources.

timed automata, in particular in the flavour of uppaal [7], are widely used to model real-time systems (see, for example, [14, 32]) and to reason about the correctness of systems such as the ones named above. modelling assumptions of timed automata, such as instantaneous updates of variables and zero-time message exchange, are often convenient for the analysis of timed system models, yet they in general inhibit direct implementations of model behaviour on real-world platforms where, e.g., updating a variable takes time.

in this work, we aim for the generation of distributed code from networks of timed automata with exactly one program per network component (and no other programs, in particular no implicit global scheduler), where all execution times are considered and modelled (including the selection of subsequent edges), and that comes with a comprehensible notion of correctness. our work can be seen as the first of two steps towards bridging the gap between timed automata models and code.
we propose to firstly consider a simple, iterative programming language with an exact real-time semantics (cf. sect. 4) as the target for code generation. in this step, which we consider to be the harder one of the two, we deal with the discrepancy between the atomicity of the timed automaton semantics and the non-atomic execution on real platforms. the second step will then be to deal with imprecise timing on real-world platforms.

our approach is based on the following ideas. we define a short-hand notation (called implementable timed automata) for a sub-language of the well-known timed automata (cf. sect. 3). we assume independency from a global scheduler [5] as a sufficient criterion for the existence of a distributed implementation. for the timing aspect, we propose not to use platform clocks directly in, e.g., edge guards (see related work below) but to turn model clocks into program variables and to assume a "sleep" operation with absolute deadlines on the target platform (cf. sect. 4). in sect. 5, we establish the strong and concrete notion of correctness that for each time-safe computation of a program obtained by our translation scheme there is a computation path in the network with the same observable behaviour. section 6 shows that our short-hand notation is sufficiently expressive to support industrial case studies and discusses the remaining gap towards real-world programming languages like c, and sect. 7 concludes.

generating code for timed systems from timed automata models has been approached before [3, 4, 20, 25, 29]. all these works also generate code for a scheduler (as an additional, explicit component) that corresponds to the implicit, global scheduler introduced by the timed automata semantics [5]. thus, these approaches do not yield the distributed programs that we aim for. a different approach in the context of timed automata is to investigate discrete sampling of the behaviour [28] and so-called robust semantics [28, 33].
a timed automaton model is then called implementable with respect to certain robustness parameters. bouyer et al. [11] have shown that each timed automaton (not a network, as in our case) can be sampled and made implementable at the price of a potentially exponential increase in size.

a different line of work is [1, 2, 31]. they use timed automata (in the form of rt-bip components [6]) as an abstract model of the scheduling of tasks. considering execution times for tasks, a so-called physical model (in a slightly different formalism) is obtained, for which an interpreter has been implemented (the real-time execution engine) that then realises a scheduling of the tasks. the computation time necessary to choose the subsequent task (including the evaluation of guards) is "hidden" in the execution engine (which at least warns if the available time is exceeded), and they state the unfortunate observation that time-safety does not imply time-robustness with their approach.

there is an enormous amount of work on so-called synchronous languages like esterel [10], signal [8], lustre [19] and time-triggered architectures such as giotto/htl [21]. these approaches provide an abstract programming or modelling language such that for each program a deployable implementation, in particular for signal processing applications, can be generated.

as modelling formalism (and input to code generation), we consider timed automata as introduced in [7]. in the following, we recall the definition of timed automata for self-containedness. our presentation follows [26] and is standard with the single exception that we exclude strict inequalities in clock constraints. a timed automaton A = (L, A, X, V, I, E, ℓini) consists of a finite set L of locations (including the initial location ℓini) and sets A, X, and V of channels, clocks, and (data) variables. a location invariant I : L → Φ(X) assigns a clock constraint over X from Φ(X) to each location.
finitely many edges in E are of the form (ℓ, α, ϕ, r, ℓ′), where the action α is an output a! or input a? on a channel, or the internal action τ; the guards ϕ ∈ Φ(X, V) are conjunctions of clock constraints from Φ(X) and data constraints from Φ(V); and r ∈ R(X, V)* is a finite sequence of updates, an update either resetting a clock or updating a data variable. for clock constraints, we exclude strict inequalities as we do not yet support their semantics (of reaching the upper or lower bound arbitrarily close but not inclusive) in the code generation. in the following, we may write ℓ(e) etc. to denote the source location of edge e.

the operational semantics of a network N = A1 … An of timed automata as components - and with pairwise disjoint sets of clocks and variables - is the (labelled) transition system T(N) = (C, Λ, {→λ | λ ∈ Λ}, C_ini) over configurations. a configuration c ∈ C = {⟨ℓ⃗, ν⟩ | ν ⊨ I(ℓ⃗)} consists of a location vector ℓ⃗ (an n-tuple whose i-th component is a location of Ai) and a valuation ν : X(N) ∪ V(N) → R≥0 ∪ D of clocks and variables. the location vector has the invariant I(ℓ⃗) = I(ℓ1) ∧ … ∧ I(ℓn), and we assume a satisfaction relation between valuations and clock and data constraints as usual. labels are Λ = {τ} ∪ R≥0, and the set of initial configurations C_ini consists of the configurations whose location vector holds the initial locations and whose valuation assigns 0 to all clocks and variables.

there is an internal transition ⟨ℓ⃗, ν⟩ →τ ⟨ℓ⃗′, ν′⟩ if and only if there is an edge e = (ℓ, τ, ϕ, r, ℓ′) enabled in ⟨ℓ⃗, ν⟩ and ν′ is the result of applying e's update vector to ν. an edge is enabled in ⟨ℓ⃗, ν⟩ if and only if its source location occurs in the location vector, its guard is satisfied by ν, and ν′ satisfies the destination location's invariant. there is a rendezvous transition ⟨ℓ⃗, ν⟩ →τ ⟨ℓ⃗′, ν′⟩ if and only if there are edges e0 = (ℓ0, a!, ϕ0, r0, ℓ0′) and e1 = (ℓ1, a?, ϕ1, r1, ℓ1′) in two different automata, both enabled in ⟨ℓ⃗, ν⟩, and ν′ is the result of first applying e0's and then e1's update vector to ν.
a transition sequence of N is any finite or infinite, initial and consecutive sequence of the form ⟨ℓ⃗0, ν0⟩ →λ1 ⟨ℓ⃗1, ν1⟩ →λ2 ⋯. N is called deadlock-free if no transition sequence of N ends in a configuration c such that there are no λ, c′ with c →λ c′.

next, deadline, boundary. given an edge e with source location ℓ and clock constraint ϕclk, and a configuration c = ⟨ℓ⃗, ν⟩, we define next(c, ϕclk) = min{d ∈ R≥0 | ν + d ⊨ I(ℓ) ∧ ϕclk} and deadline(c, ϕclk) = max{d ∈ R≥0 | ν + next(c, ϕclk) + d ⊨ I(ℓ) ∧ ϕclk} if minimum/maximum exist, and ∞ otherwise. that is, next gives the smallest delay after which e is enabled from c, and deadline gives the largest delay for which e remains enabled after next. the boundary of a location invariant ϕclk is a clock constraint ∂ϕclk s.t. ν + d ⊨ ∂ϕclk if and only if d = next(c, ϕclk) + deadline(c, ϕclk). a simple sufficient criterion to ensure the existence of boundaries is to use location invariants of the form ϕclk = x ≤ q; then ∂ϕclk = x ≥ q.

in the following, we introduce implementable timed automata, which can be seen as a definition of a sub-language of timed automata as recalled in sect. 2. as briefly discussed in the introduction, a major obstacle with implementing timed automata models is the assumption that actions are instantaneous. the goal of considering the sub-language defined below is to make the execution time of resets and the duration of message transmissions explicit. other works like, e.g., [13], propose higher-dimensional timed automata where actions take time. we propose to make action times explicit within the timed automata formalism. implementable timed automata distinguish internal, send, and receive edges by action and update, in contrast to timed automata.
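for the common special case of a guard x ≥ p together with an invariant x ≤ q (both non-strict, as required above), next and deadline reduce to simple interval arithmetic; a small sketch (the interval encoding and function names are assumptions of mine, not the paper's):

```python
def next_delay(v, p, q):
    # smallest delay d >= 0 such that clock value v + d satisfies
    # guard x >= p and invariant x <= q; None if unsatisfiable from v
    d = max(0.0, p - v)
    return d if v + d <= q else None

def deadline_delay(v, p, q):
    # largest additional delay after next_delay for which
    # x >= p and x <= q still hold
    d = next_delay(v, p, q)
    return None if d is None else (q - v) - d

# from v = 0 with guard x >= 2 and invariant x <= 5: the edge becomes
# enabled after 2 time units and stays enabled for 3 more
```

with these conventions, the boundary ∂(x ≤ q) = x ≥ q from the text is hit exactly at delay next_delay + deadline_delay.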
an internal edge models (only) updates of data variables or sleeping idle (which takes time on the platform), a send edge models (only) the sending of a message (which takes time), and a receive edge models (only) the ability to receive a message with a timeout. all kinds of edges may reset clocks. figure 1 shows an example implementable timed automaton, using double-outline edges to distinguish the graphical representation from timed automata. the edge from ℓ0 to ℓ1, for example, models that message 'lz[id]' may be transmitted between times s0 + g (including guard time g and operating time) and s0 + g + m, i.e., the maximal transmission duration here is m. the time nℓ1 would be the operating time budgeted for location ℓ1.

the semantics of the implementable network N consisting of implementable timed automata I1, …, In is the labelled transition system T(A_I1 … A_In). the timed automata A_Ii are obtained from Ii by applying the translation scheme in fig. 2 edge-wise. the construction introduces fresh ℓ×-locations. intuitively, a discrete transition to an ℓ×-location marks the completion of a data update or message transmission in Ii that started at the next time of the considered configuration. after completion of the update or transmission, implementable timed automata always wait up to the deadline. if the update or transmission has a certain time budget, then we need to expect that the time budget may be completely used in some cases. using the time budget, possibly with a subsequent wait, yields a certain independence from platform speed: if one platform is fast enough to execute the update or transmission within the time budget, then all faster platforms are. note that the duration of an action may be zero in implementable timed automata (exactly as in timed automata), yet then there will be no time-safe execution of any corresponding program on a real-world platform.

in [5], the concept of not depending on a global scheduler is introduced.
intuitively, independency requires that sending edges are never blocked because no matching receive edge is enabled or because another send edge in a different component is enabled. that is, the schedule of the network behaviour ensures that, at each point in time, at most one automaton is ready to send, and that each automaton that is ready to send finds an automaton that is ready for the matching receive. similar restrictions have been imposed on timed automaton models in [9] to verify the zeroconf protocol. whether a network depends on a global scheduler is decidable; for details, we refer the reader to [5].

figure 3 shows an artificial network of implementable timed automata whose independency from a global scheduler depends on the parameters s1,0 + w1 and s2,0 + w2. if the location ℓ1,1 is reached, then the standard semantics of timed automata would (using the implicit global scheduler) block the sending edge until ℓ2,1 is reached. yet in a distributed system, the sender should not be assumed to know the current location of the receiver. by choosing the parameters accordingly (i.e., by protocol design), we can ensure that the receiver is always ready before the sender, so that the sender is never blocked. in this case, we can offer a distributed implementation. in the following sections, we only consider networks of implementable timed automata that are deadlock-free, closed component (no shared clocks or variables, no committed locations (cf. [7])), and do not depend on a global scheduler.

in this section, we introduce a timed programming language that provides the necessary expressions and statements to implement networks of implementable timed automata as detailed in sect. 5. the semantics is defined as a structural operational semantics (sos) [27] that is tailored towards proving the correctness of the implementations obtained by our translation scheme from sect. 5.
we use a dedicated time component in configurations of a program to track the execution times of statements, and we support a snapshot operator to measure the time that passed since the execution of a particular statement. due to lack of space, we introduce expressions on a strict as-needed basis, including message, location, edge, and time expressions. in a general-purpose programming language, the former kinds of expressions can usually be realised using integers (or enumerations), and time expressions can be realised using platform-specific representations of the current system time.

syntax. expressions of our programming language are defined with respect to given network variables V and X. we assume that each constraint from Φ(X, V) or expression from Ψ(V) over V and X has a corresponding (basic type) program expression, and thus that each variable v ∈ V and each clock x ∈ X have corresponding (basic type) program variables v_v, v_x ∈ V_b. in addition, we assume typed variables for locations, edges, and messages, and for times (on the target platform). we additionally consider location variables V_l to store the current location, edge variables V_e to store the edge currently worked on, message variables V_m to store the outcome of a receive operation, and time variables V_t to store platform time. message expressions are of the form mexpr ::= m | a, with m ∈ V_m and a ∈ A; location expressions are of the form lexpr ::= l | ℓ | nextloc_i(mexpr), with l ∈ V_l and ℓ ∈ L; and edge expressions are of the form eexpr ::= e | e, with e ∈ V_e a variable or e ∈ E an edge constant. a time expression has the form texpr ::= ⊙ | t | t + expr, where ⊙ is the current platform time and t ∈ V_t. note that time variables are different from clock variables: the values of a clock variable v_x are used to compute a new next time, which is then stored in a time variable, which in turn can be compared to the platform time.
clock variables can be represented by platform integers (given their range is sufficient for the model), while time variables will be represented by platform-specific data types like timespec with c [22] and posix. in this way, model clocks are only indirectly connected (and compared) to the platform clock.

table 1. statements s, statement sequences S, and programs P:
s ::= … | if e = eexpr1 : S1 … e = eexprn : Sn fi | while expr do S od
S ::= ε | s | ●s | s; S | ●s; S (ε; S ≡ S; ε ≡ S)
P ::= S1 ⋯ Sn

the set of statements, statement sequences, and timed programs are given by the grammar in table 1. the term nextedge_i([mexpr]) represents an implementation of the edge selection in an implementable timed automaton that can optionally be called with a message expression. we denote the empty statement sequence by ε and introduce ● as an artificial snapshot operator on statements (see below). the particular syntax with snapshot and non-snapshot statements allows us to simplify the semantics definition below. we use stmseq to denote the set of all statement sequences.

a component configuration π = ⟨S, (β, γ, w, u), σ⟩ consists of a statement sequence S ∈ stmseq, the operating time of the current statement β ∈ R≥0 (i.e., the time passed since starting to work on the current statement), the time to completion of the current statement γ ∈ R≥0 ∪ {∞} (i.e., the time it will take to complete the work on the current statement), the snapshot time w ∈ R≥0 (i.e., the time since the last snapshot), the platform clock value u ∈ R≥0, and a type-consistent valuation σ of the program variables. we will use operating time and time to completion to define computations of timed while programs (with discrete transitions when the time to completion is 0), and we will use the snapshot time w as an auxiliary variable in the construction of predicates by which we relate program and network computations.
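the component configuration and its delay behaviour can be sketched as a small data structure (a sketch under the definitions above; the field and function names are mine, not the paper's):

```python
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class ComponentConfig:
    stmts: tuple               # remaining statement sequence S
    beta: float = 0.0          # operating time of the current statement
    gamma: float = 0.0         # time to completion of the current statement
    w: float = 0.0             # snapshot time: time since the last snapshot
    u: float = 0.0             # platform clock value
    sigma: dict = field(default_factory=dict)  # valuation of program variables

def delay(cfg: ComponentConfig, d: float) -> ComponentConfig:
    # a delay step of d time units; only allowed if the current statement
    # does not complete strictly before d has elapsed
    assert d <= cfg.gamma, "delay must not pass the time to completion"
    return replace(cfg, beta=cfg.beta + d, gamma=cfg.gamma - d,
                   w=cfg.w + d, u=cfg.u + d)
```

a discrete reduction rule would then apply exactly when `gamma` has reached 0, mirroring the sos rules of table 2.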
the valuation σ maps basic-type variables from V_b to values from a domain that includes all values of data variables from D as used in the implementable timed automaton and all values needed to evaluate clock constraints (see below), i.e. σ(V_b) ⊆ D_b. time variables from V_t are mapped to non-negative real numbers, i.e. σ(V_t) ⊆ R≥0; message variables from V_m are mapped to channels or the dedicated value ⊥ representing 'no message', i.e. σ(V_m) ⊆ A ∪ {⊥}; location variables from V_l are mapped to locations, i.e. σ(V_l) ⊆ L; and edge variables from V_e are mapped to edges, i.e. σ(V_e) ⊆ E.

for the interpretation of expressions in a component configuration we assume that, if the valuation σ of the program variables corresponds to the valuation ν of data variables, then the interpretation ⟦expr⟧(π) of a basic-type expression expr corresponds to the value of expr under ν. other variables obtain their values from σ, too, i.e. ⟦t⟧(π) = σ(t), ⟦m⟧(π) = σ(m), ⟦l⟧(π) = σ(l), and ⟦e⟧(π) = σ(e); constant symbols are interpreted by their corresponding value, i.e. ⟦a⟧(π) = a, ⟦ℓ⟧(π) = ℓ, and ⟦e⟧(π) = e; and we have ⟦t + expr⟧(π) = ⟦t⟧(π) + ⟦expr⟧(π). there are two non-standard cases. the ⊙-symbol denotes the platform clock value of π, i.e. ⟦⊙⟧(π) = u, and we assume that ⟦nextloc_i([mexpr])⟧(π) yields the destination location of the edge that is currently processed (as given by e), possibly depending on a message name given by mexpr. if ⟦e⟧(π) denotes an internal-action or send edge e, this is just the destination location ℓ′(e); for receive edges it is ℓ′(e) if mexpr evaluates to the special value ⊥, and an ℓi from an (ai?, ℓi) pair in the edge otherwise. if the receive edge is non-deterministic, we assume that the semantics of nextloc_i resolves the non-determinism.
note that the rules in table 2 (with the exception of receive) apply when the time to completion is 0, that is, at the point in time where the current statement completes. each rule then yields a configuration with the operating time γ for the new current statement. the new snapshot time w is 0 if the first statement in s is a snapshot statement s , and w otherwise. rule (r7) updates m to a, which is a channel or, in case of timeout, the 'no message' indicator '⊥'. rule (r8) is special in that it is supposed to represent the transition relation of an implementable timed automaton. depending on the program valuation σ, (r8) is supposed to yield a triple of the next edge to work on, this edge's next and deadline. for simplicity, we assume that the interpretation of nextedge i ([mexpr ]) is deterministic for a given valuation of program variables. a configuration of program p = s 1 · · · s n is an n-tuple π = ( s 1 , (β 1 , γ 1 , w 1 , u 1 ), σ 1 , . . . , s n , (β n , γ n , w n , u n ), σ n ) of component configurations; c(p ) denotes the set of all configurations of p . the operational semantics of a program p is the labelled transition system on system configurations defined as follows. there is a delay transition if no current statement completes strictly before δ. there is an internal transition if for some i, 1 ≤ i ≤ n, a discrete reduction rule from table 2 there is a synchronisation transition , σ j by (r7), and β j ≥ β i , i.e. if component j has been listening at least as long as component i has been sending. note that this definition of synchronisation allows multiple components to send at the same time (which may cause message collision on a shared medium) and that, similar to the rendezvous communication of timed automata, out of multiple receivers, only one takes the message. in our application domain these cases do not happen because we assume that implementable networks do not depend on a global scheduler. 
that is, the program of an implementable network never exhibits any of these two behaviours. a program configuration is called initial if and only if the k-th component configuration, 1 ≤ k ≤ n, is at s k , with any β k , γ k = 0, w k = 0, u k = 0, and any σ k with σ k (v b ) = 0. we use c ini (p ) to denote the set of initial configurations of program p . a computation of p is an initial and consecutive sequence of program configurations ζ = π 0 , π 1 , . . . , i.e. π 0 ∈ c ini (p ) and for all i ∈ n 0 exists λ ∈ r + 0 ∪ {τ } such that π i λ − → π i+1 as defined above. we need not consider terminating computations of programs here because we assume networks of implementable timed automata without deadlocks. the program of the network of implementable timed automata n = i 1 . . . i n is p (n ) = s(i 1 ) . . . s(i n ) (cf. table 3c ). the edges' work is implemented in the corresponding line 2 of the statement sequences in tables 3a and 3b. the remaining lines 3 to 8 include the evaluation of guards to choose the edge to be executed next. the result of choosing the edge is stored in program variable e which (by the while loop and the if-statement) moves to line 1 of the implementation of that edge. the program's timing behaviour is controlled by variable t and is thus decoupled from clocks in the timed automata model. after line 8, the value of t denotes the absolute time where the execution of the next edge is due. that is, clocks in the program are not directly compared to the platform time (which would raise issues with the precision of platform clocks) but are used to determine points in time that the target platform is supposed to sleep to. by doing so, we also lower the risk of accumulating imprecisions in the sleep operation of the target platform when sleeping for many relative durations. the idea of scheduling work and operating time is illustrated by the timing diagram in fig. 4 . 
row (a) shows a naïve schedule for comparison: from time t_{i-1}, decide on the next edge to execute and determine this edge's next time at t_i (light grey phase: operating time, which must complete within the next edge's next time n_e), then sleep up to the next time (dashed grey line), then execute the edge's actions (dark grey phase: work time, which must complete within the edge's deadline d_e), then sleep up to the edge's deadline at t_{i+1}, and start over. the program obtained by our translation scheme implements the schedule shown in row (b). the program begins with determining the next edge right after the work phase and then has only one sleep phase up to, e.g., t_{i+2}, where the next work phase begins. in this manner, we require only one interaction with the execution platform that implements the sleep phases. row (c) illustrates a possible extension of our approach where operating time is needed right before the work phase, e.g., to prepare the platform's transceiver for sending a message.

we call the program P(N) a correct implementation of network N if and only if for each observable behaviour of a time-safe execution of P(N) there is a corresponding computation path of N. in the following, we provide our notion of time-safety and then elaborate on the above-mentioned correspondence between program and network computations. intuitively, a computation of P(N) is not time-safe if either the execution of an edge's statement sequence takes longer than the admitted deadline or if the next time of the subsequent edge is missed, e.g., by an execution platform that is too slow. note that in a given program computation, the performance of the platform is visible in the operating time β and the time to completion γ. we write πk : l_e^n to denote that the program counter of component k is at line n of the statement sequence of edge e. we use σ|_{X∪V} to denote the (network) configuration encoded by the values of the corresponding program variables.
we assume² that for each program variable v, the old value, i.e., the value before the last assignment in the computation, is available as @v. that is, if the i-th configuration completes (γ_{i,k} = 0) line 2 of an edge's statement sequence, not more time than admitted by its deadline has been used (w_k), i.e., the sleepto statement in line 1 completes exactly after the deadline of the previously worked-on edge plus the current edge's next time. ♦ note that, by definition 2, operating times may be larger than the subsequent edge's next time in a time-safe computation (if the execution of the current edge completes before its deadline). stronger notions of time-safety are possible. for correctness of p(n), recall that we introduced timed while programs to consider the computation time that is needed to compute the transition relation of an implementable network on the fly. in addition, program computations have a finer granularity than network computations: in network computations, the current location and the valuation of clocks and variables are updated atomically in a transition. in the program p(n), these updates are spread over three lines. we show that, for each time-safe computation ζ of program p(n), there is a computation of network n that is related to ζ in a well-defined way. the relation between program and network configurations decouples both computations in the sense that at some times (given by the respective timestamp) the, e.g., clock values in the program configuration are "behind" network clocks (i.e., correspond to an earlier network configuration), at some times they are "ahead", and there are points where they coincide. figure 5 illustrates the relation for one edge e. the top row of fig. 5 gives a timing diagram of the execution of the program for edge e of one component. the rows below show the values over time for each program variable v up to e, n, and d.
for example, the value of l will denote the source location of e until line 3 is completed, and then denotes the destination location. similarly, v and x denote the effects of the update vector of e on data variables and clocks. note that, during the execution of line 3, we may observe combinations of values for v and l that are never observed in a network computation due to the atomic semantics of networks. the two bottom rows of fig. 5 show related network configurations aligned with their corresponding program lines. note that the execution of each line except for line 1 may be related to two network configurations depending on whether the program timestamp is before or after the current edge's deadline. figure 6 illustrates the three possible cases: the execution of program line 2 (work time, dark gray) is related to network configurations with the source location of the current edge. right after the work time, the intermediate network location × is related, and at the current edge's deadline the destination location is related. in the related network computation, the transition from the intermediate location × to the destination location always takes place at the current edge's deadline. this point in time may, in the program computation, be right after work time (fig. 6a, no delay in ×), in the operating time (fig. 6b), or in the sleep time (fig. 6c). the relation between program and network configurations as illustrated in fig. 5 can be formalised by predicates over program and network configurations, one predicate per edge and program line.³ the following lemma states the described existence of a network computation for each time-safe program computation. the lemma gives a precise, component-wise and phase-wise relation of program computations to network computations. in other words, we obtain a precise accounting of which phases of a time-safe program computation correspond to a network computation and how. we can argue component-wise by the closed-component assumption from sect. 3.
table 3c reach the line 2 of a send or receive edge (cf . table 3a and 3b) and establish a related network configuration. for the induction step, we need to consider delays and discrete steps of the program. from time-safety of ζ we can conclude to possible delays in n for the related configurations with a case-split wrt. the deadline (cf. fig. 6 ). when the program time is at the current edge's deadline, the network may delay up to the deadline in an intermediate location × , take a transition to the successor location , and possibly delay further. for discrete program steps, we can verify that n has enabled discrete transitions that reach a network configuration that is related to the next program line. here, we use our assumptions from the program semantics that update vectors have the same effect in the program and the network. and we use the convenient property of our program semantics that the effects of statements only become visible with the discrete transitions. for synchronisation transitions of the program, we use the assumption that the considered network of implementable timed automata does not depend on a global scheduler, in particular that send actions are never blocked, or, in other words, that whenever a component has a send edge locally enabled, then there is a receiving edge enabled on the same channel. our main result in theorem 1 is obtained from lemma 1 by a projection onto observable behaviour (cf. definition 3). intuitively, the theorem states that at each point in time with a discrete transition to line 2, the program configuration exactly encodes a configuration of network p (n ) right before taking an internal, send, or receive edge. . . be the projection of a computation path ξ of the implementable network n onto component k, 1 ≤ k ≤ n, labelled such that each configuration k i,0 , ν k i,0 is initial or reached by a discrete transition to a source location of an internal, send, or receive edge. 
the sequence ξ_k, where i_j is the largest index such that between c := ⟨ℓ_{j,0}, ν_{j,0}⟩ and ⟨ℓ_{j,i_j}, ν_{j,i_j} + d_j⟩ exactly next(c) time units have passed, is called the observable behaviour of component k in ξ. ♦ theorem 1. let n be an implementable network and ζ_k = π_{0,0}, ..., π_{0,n_0}, π_{1,0}, ... the projection onto the k-th component of a time-safe computation ζ of p(n), labelled such that π_{i,n_i}, π_{i+1,0} are exactly those transitions in ζ from a line 1 to the subsequent line 2. then (σ_{i,0}(l), σ_{i,0}|_{x∪v}, u_{i,0})_{i∈ℕ_0} is an observable behaviour of component k on some computation path of n. ♦ [fig. 7 caption: timed automaton of the implementable timed automaton (after applying the scheme from fig. 2) for the lz-protocol of sensors [15].] the work presented here was motivated by a project to support the development of a new communication protocol for a distributed wireless fire alarm system [15], without shared memory, only assuming clock synchronisation and message exchange. we provided modelling and analysis of the protocol a priori, that is, before the first line of code had been written. in the project, the engineers manually implemented the model and appreciated how the model indicates exactly which action is due in which situation. later, we were able to study the handwritten code and observed (with little surprise) striking regularities and similarities to the model. we therefore conjectured that there exists a significant sublanguage of timed automata that is implementable. in our previous work [5], we identified independence from a global scheduler as a useful precondition for the existence of a distributed implementation (cf. sect. 2). for this work, we have modelled the lz-protocol of sensors in the wireless fire alarm system from [15] as an implementable timed automaton (cf. fig. 1; fig. 7 shows the timed automaton obtained by applying the scheme from fig. 2). hence our modelling language supports real-world, industrial case studies.
implementable timed automata also subsume some models of time-triggered, periodic tasks that we would model by internal edges only. from the program obtained by the translation scheme given in table 3 , we have derived an implementation of the protocol in c. clock, data, location, edge, and message variables become enumerations or integers, time variables use the posix data-structure timespec. the implementation runs timely for multiple days. although our approach with sleeping to absolute times reduces the risk of drift, there is jitter on real-world platforms. the impact of timing imprecision needs to be investigated per application and platform when refining the program of a network to code, e.g., following [11] . in our case study, jitter is much smaller than the model's time unit. another strong assumption that we use is synchrony of the platform clocks and synchronised starting times of programs which can in general not be achieved on real-world platforms. in the wireless fire alarm system, component clocks are synchronised in an initialisation phase and kept (sufficiently) synchronised using system time information in messages. robustness against limited clock drift is obtained by including so-called guard times [23, 24] in the protocol design. in the model, this is constant g: components are ready to receive g time units before message transmission starts in another component. note that theorem 1 only applies to time-safe computations. whether an implementation is time-safe needs to be analysed separately, e.g., by conducting worst-case execution time (wcet) analyses of the work code and the code that implements the timed automata semantics. the c code for the lz-model mentioned above actually implements a sleepto function that issues a warning if the target time has already passed (thus indicating non-time-safety). 
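the paper's c implementation realises sleeping to absolute times via the posix timespec structure and reports a warning when a target time has already passed. the following is a minimal python sketch of that sleepto idea; the function name, signature, and the use of a relative sleep for the remaining duration are illustrative assumptions, not the actual c api (a real implementation would sleep to the absolute deadline directly, e.g. with clock_nanosleep and TIMER_ABSTIME):

```python
import time

def sleepto(target: float) -> bool:
    """Sleep until the absolute time `target` (seconds on a monotonic clock).

    Returns True if the deadline had already passed when called, i.e. the
    computation was not time-safe; this mirrors the warning behaviour
    described for the handwritten sleepto function."""
    now = time.monotonic()
    if now >= target:
        print("warning: target time already passed (non-time-safe)")
        return True
    # approximate absolute sleeping by sleeping the remaining duration
    time.sleep(target - now)
    return False
```

scheduling each edge at an absolute time t_i, rather than sleeping for a chain of relative durations, is what keeps imprecisions in individual sleeps from accumulating.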
the translation scheme could easily be extended by a statement between lines 2 and 3 that checks whether the deadline was kept and issues a warning if not. then, theorem 1 would strengthen to the statement that all computations of p (i) either correspond to observable behaviour of i or issue a warning. note that, in contrast to [1, 2, 31] , our approach has the practically important property that time-safety implies time-robustness, i.e., if a program is time-safe on one platform then it is time-safe on any 'faster' platform. furthermore, we have assumed a deterministic choice of the next edge to be executed for simplicity and brevity of the presentation. non-deterministic models can be supported by providing a non-deterministic semantics to the nextedge i function in the programming language and the correctness proof. we have presented a shorthand notation that defines a subset of timed automata that we call implementable. for networks of implementable timed automata that do not depend on a global scheduler, we have given a translation scheme to a simple, exact-time programming language. we obtain a distributed implementation with one program for each network component, the programs are supposed to be executed concurrently, possibly on different computers. we propose to not substitute (imprecise) platform clocks for (model) clocks in guards and invariants, but to rely on a sleep function with absolute deadlines. the generated programs do not include any "hidden" execution times, but all updates, actions, and the time needed to select subsequent edges are taken into account. for the generated programs, we have established a notion of correctness that closely relates program computations to computation paths of the network. the close relation lowers the mental burden for developers that is induced by other approaches that switch to a slightly different, e.g., robust semantics for the implementation. 
our work decomposes the translation from timed automata models to code into a first step that deals with the discrepancy between atomicity of the timed automaton semantics and the non-atomic execution on real platforms. the second step, to relate the exact-time program to real platforms with imprecise timing, is the subject of future work.

references:
- model-based implementation of real-time applications
- rigorous implementation of real-time systems - from theory to application
- synthesis of ada code from graph-based task models
- code synthesis for timed automata
- on global scheduling independency in networks of timed automata
- modeling heterogeneous real-time components in bip
- a tutorial on uppaal
- synchronous programming with events and relations: the signal language and its semantics
- compositional abstraction in real-time model checking
- the esterel synchronous programming language: design, semantics, implementation
- timed automata can always be made implementable
- spanner: google's globally distributed database
- higher-dimensional timed automata
- automated analysis of aodv using uppaal
- ready for testing: ensuring conformance to industrial standards through formal verification
- parameterized verification of track topology aggregation protocols
- clock synchronization of distributed, real-time, industrial data acquisition systems
- ridesharing: fault tolerant aggregation in sensor networks using corrective actions
- the synchronous data flow programming language lustre
- translating uppaal to not quite c
- giotto: a time-triggered language for embedded programming
- programming languages - c
- formal approach to guard time optimization for tdma
- optimizing guard time for tdma in a wireless sensor network - case study
- automatic translation from uppaal to c
- real-time systems - formal specification and automatic verification
- a structural approach to operational semantics
- dynamical properties of timed automata
- on generating soft real-time programs for non-realtime environments
- a methodology for choosing time synchronization strategies for wireless iot networks
- model-based implementation of parallel real-time systems
- ad hoc routing protocol verification through broadcast abstraction
- almost asap semantics: from timed models to timed implementations

key: cord-102705-mcit0luk
authors: gupta, chitrak; cava, john kevin; sarkar, daipayan; wilson, eric; vant, john; murray, steven; singharoy, abhishek; karmaker, shubhra kanti
title: mind reading of the proteins: deep-learning to forecast molecular dynamics
date: 2020-07-29
journal: biorxiv
doi: 10.1101/2020.07.28.225490
sha:
doc_id: 102705
cord_uid: mcit0luk

molecular dynamics (md) simulations have emerged to become the backbone of today's computational biophysics. simulation tools such as namd, amber and gromacs have accumulated more than 100,000 users. despite this remarkable success, now also bolstered by compatibility with graphics processing units (gpus) and exascale computers, even the most scalable simulations cannot access biologically relevant timescales: the number of numerical integration steps necessary for solving differential equations in a million-to-billion-dimensional space is computationally intractable. recent advancements in deep learning have made it possible to find patterns in high-dimensional data. in addition, deep learning has also been used for simulating physical dynamics. here, we utilize lstms in order to predict future molecular dynamics from current and previous timesteps, and examine how this physics-guided learning can benefit researchers in computational biophysics. in particular, we test fully connected feed-forward neural networks and recurrent neural networks with lstm/gru memory cells, built with the tensorflow and pytorch frameworks and trained on data from namd simulations, to predict conformational transitions in two different biological systems.
we find that non-equilibrium md is easier to train and performance improves under the assumption that each atom is independent of all other atoms in the system. our study represents a case study for high-dimensional data that switches stochastically between fast and slow regimes. applications of resolving these sets will allow real-world applications in the interpretation of data from atomic force microscopy experiments. molecular dynamics or md simulations have emerged to become the cornerstone of today's computational biophysics, enabling the description of structure-function relationships at atomistic details [19] . these simulations have brought forth milestone discoveries including resolving the mechanisms of drug-protein interactions, protein synthesis and membrane transport, molecular motors and biological energy transfer, and viral maturation, encompassing a number of our contributions [9] . more recently, we have employed molecular modeling to predict mortality rates from sars-cov-2 [26] , showcasing its application in epidemiology. in md simulations, the chronological evolution of an n -particle system is computed by solving the newton's equations of motion. methodological developments in md has pushed the limits of computable system-sizes to hundreds of millions of interacting particles, and timescales from femtoseconds (10 −15 second) to microseconds (10 −6 second), allowing all-atom simulations of an entire cell organelle [23] . high performance computing, parallelized architecture, speciality hardware and gpuaccelerated simulations have made notable contributions towards this progress. however, in spite of significant advancements in both development and applications, computational resources required to achieve biologically relevant system-sizes and timescales in brute-force md simulations remain prohibitively "expensive". notably, md involves solving newtonian dynamics by integrating over millions of coupled linear equations. 
a universal bottleneck arises from the time span chosen to perform the numerical integration. akin to any paradigm in dynamic systems, the time span for numerical integration is limited by the dynamics of the fastest mode. in biological systems, this span is 2 femtoseconds (fs) or lower, owing to the physical limitations of capturing the fast vibrations of hydrogen atoms. thus, an md simulation of at least 1 microsecond, wherein biologically relevant events occur, requires the computation of 500 million fs-resolved time steps. each step involves the calculation of the interaction between every particle and its neighbors, which scales as n² or n log n. when n = 1-100 million atoms, these simulations are only feasible on peta- to exascale supercomputers. several techniques have been employed to accelerate atomistic simulations, which can broadly be classified into two categories: coarse-graining and enhanced sampling. in the former, the description of the system under study is simplified in order to reduce the number of particles required to completely define the system [9]. in the latter, either the potential energy surface and gradients (or forces) that drive the molecular dynamics are made artificially long-range so as to accelerate the movements, or multiple short replicas of the system are simulated in order to sample a broader range of molecular movements than a long brute-force md [13]. a major contention with these techniques is that the simulated protein movements can be ascribed neither chemical precision nor a realistic time label [9]. we explore machine-learning methodologies for predicting the outcomes of md simulations while preserving their accurate time labels. this idea will greatly reduce the computational expenses associated with performing md, making it broadly accessible beyond the current user-base of scientific researchers to high schools and colleges, where computational resources are sparse.
the developments will imminently expedite the efforts of nearly 20,000 users of our open-source md engine namd [19]. [fig. 1 caption, panels b-c: trajectory (green: high dimension, red: reduced dimension) visualized in 3d and rendered in 2d using the molecular visualization software vmd [12]; (c) deviation from gaussian behavior (quantified by kurtosis, where a higher value denotes larger deviation) of the distribution of x, y, and z positions of each of the 214 particles (shown in red in b).] in this resource paper, we present two types of data sets, the dynamic correlations within which pose a significant challenge to existing machine-learning techniques for predicting the real-time nonlinear dynamics of proteins. the underlying physics of these data sets represents out-of-equilibrium and in-equilibrium conditions, wherein the n-particle systems evolve in the presence vs. absence of external perturbations. beyond tracking the nonlinear transformations, these examples also create an opportunity to study whether the prediction accuracy of future outcomes with fs resolution improves if prior knowledge is utilized to enhance the signal-to-noise ratio of key features in the training set. a number of works in the past have focused on predicting protein structures from protein sequence/compositional information by training on the so-called sequence-structure relationship using massive data sets accrued over the pdb and psi databases [2]. however, knowledge of stationary 3d coordinates offers little to no information on how the system evolves in time following the laws of classical or quantum physics. little data is available to train algorithms on such time series information despite the imminent need to predict molecular dynamics [15]. the presented data sets capture both the linear and nonlinear movements of molecules, resolved contiguously across millions of time points.
these time series data enable the learning of the spatio-temporal correlation or memory effect that underpins the newtonian dynamics of large biomolecules, a physical property that remains obscure to the popular sequence-structure models constructed from stationary data. we establish that the success of any deep learning strategy towards predicting the dynamics of a molecule with fs precision is contingent on accurately capturing these many-body correlations. thus, the resolution of our md data sets will result in novel training strategies that decrypt an inhomogeneously evolving time series. as a publicly accessible resource, our md simulation trajectories of even larger systems (10^5 - 10^7 particles) [23] will be provided in the future to seek generalizable big-data solutions of fundamental physics problems. in what follows, we use equilibrium and non-equilibrium md to create high-dimensional time series data with atom-scale granularity. for simplicity, we derive a sub-space of intermediate size composed only of carbon atoms. in this intermediate-dimensional space, where the data distribution is dense and highly correlated, we train state-of-the-art time sequence modeling techniques including recurrent neural networks (rnns) with long short-term memory (lstm) cells to predict the future state of the system (fig. 1). we explore how a kirchhoff decomposition [1] of the many-body problem dramatically enhances the learning accuracy both under equilibrium and non-equilibrium data, even when the number of hidden layers is << the number of atoms. the hardness of the time series is captured in terms of root mean square deviation (rmsd) errors, computed at different lead times. the rmsd between two n-dimensional data points a and b is defined as

rmsd(a, b) = sqrt( (1/n) Σ_{i=1}^{n} ||a_i - b_i||² ),   (1)

where a and b could be either real or predicted points.
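eq. (1) translates directly into code; the sketch below mirrors the definition for a pair of conformations stored as lists of (x, y, z) coordinates:

```python
import math

def rmsd(a, b):
    """Root mean square deviation between two conformations (eq. 1).

    `a` and `b` are equal-length lists of (x, y, z) coordinates; the squared
    per-particle displacement is averaged over all n particles."""
    if len(a) != len(b):
        raise ValueError("conformations must have the same number of particles")
    n = len(a)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(a, b))
    return math.sqrt(sq / n)
```

identical structures give an rmsd of zero; a single particle displaced by the vector (3, 4, 0) gives an rmsd of 5.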
we also define history time and lead time to be moving windows of cumulative time steps (in units of fs), respectively in the past and in the future of a given data point in the time series, over which training and predictions are achieved. modeling accuracy was evaluated by varying the amount of historical data points incorporated during the training phase, and then comparing its prediction accuracy against that of a static or linear model. surprisingly, we find that the equilibrium md time series is more challenging to learn, despite the non-gaussian distribution of atoms associated with the non-equilibrium md. henceforth, we discuss how these new data-set resources can be used for future research on modeling high-dimensional, high-frequency, event-driven md time series data. in the recent past, machine learning approaches have been successful in analyzing the results of md simulations. support vector machines and variational auto-encoders have been developed to extract free energy information from md simulations [15]. kinetic properties of small molecules have also been extracted using neural networks. it has also been shown that neural networks trained on limited data selected from very expensive md simulations can resurrect the entire boltzmann distribution for small proteins [15]. however, none of these approaches are aimed at resurrecting the real-time (i.e., fs-resolution) molecular movements of biological molecules, one of the central goals of md simulations [19]. rnns and lstms have been used to predict md [5], but the tested data sets fail to wholly capture the dynamical complexity of a biological molecule. a key observation made therein that inspires our current investigations is that training on molecular dynamics beyond 16 particles is improbable. the data sets we present in the next section challenge this seminal bottleneck that must be overcome to forecast md simulations of real biological systems.
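the static and linear baselines mentioned above can be made concrete; the paper does not spell out its exact baselines, so the functions below are one plausible formulation (names and signatures are illustrative), operating on flattened coordinate vectors:

```python
def static_predict(history, lead):
    """Static baseline: predict that the system stays at its last
    observed configuration, regardless of the lead time."""
    return history[-1]

def linear_predict(history, lead):
    """Linear baseline: extrapolate the last observed displacement
    (a finite-difference velocity) `lead` steps into the future."""
    if len(history) < 2:
        return history[-1]
    last, prev = history[-1], history[-2]
    return [x + lead * (x - p) for x, p in zip(last, prev)]
```

any learned model must beat these trivial predictors at the chosen lead time to justify its training cost.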
from a computational perspective, any dynamically evolving system can be regarded as event-driven time series data; in this sense, md simulations are essentially high-dimensional, high-frequency time series data, and sequence modeling techniques like recurrent neural networks [4], hidden markov models and arma can be used to model md trajectories. deep learning has recently emerged as a popular paradigm for modeling dynamically evolving time series and predicting future events. these techniques have also been vastly studied in special application areas like business and finance [21], healthcare [14], and power and energy [20]. at room temperature, where biology exists, the newtonian mechanics of the molecules becomes stochastic, described by the fluctuation-dissipation theorem. the ensuing molecular trajectories converge at boltzmann-distributed ensembles at infinitely long times. it has been established that protein dynamics in cells can be modeled as motions of molecules within a medium that is highly viscous. imposing this so-called friction-dominated condition on the stochastic newton's equations, and assuming that a complete set of the degrees of freedom for describing the dynamical system is known, molecular dynamics is deemed to be a markovian process. in simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state, and most importantly, such predictions are just as good as the ones that could be made knowing the process's full history. the equation of motion of a particle of mass (m), at position (x) in time (t) within an environment of friction coefficient (γ) becomes

m (d²x/dt²) = f(x) - γ (dx/dt) + ζ(t),   (2)

where f(x) is the systematic force derived from the potential energy, and the random force ζ is constrained by requiring the integral of its autocorrelation function to be inversely related to the friction coefficient. however, we often cannot find a complete set of descriptors to probe the molecular dynamics of proteins.
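in the friction-dominated (overdamped) limit of eq. 2, the inertial term is dropped and the dynamics γ dx/dt = f(x) + ζ(t) can be integrated with a simple euler-maruyama scheme. the sketch below is a one-dimensional illustration with a noise amplitude fixed by the fluctuation-dissipation theorem; it is not the integrator used by any production md engine:

```python
import math
import random

def langevin_step(x, force, gamma, kT, dt, rng):
    """One Euler-Maruyama step of the overdamped limit of eq. 2:
    gamma * dx/dt = f(x) + zeta(t). The noise amplitude
    sqrt(2*kT*dt/gamma) encodes the fluctuation-dissipation theorem."""
    drift = force(x) / gamma * dt
    noise = math.sqrt(2.0 * kT * dt / gamma) * rng.gauss(0.0, 1.0)
    return x + drift + noise

def simulate(x0, force, gamma=1.0, kT=1.0, dt=1e-3, steps=1000, seed=7):
    """Integrate a seeded overdamped Langevin trajectory from x0."""
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(steps):
        xs.append(langevin_step(xs[-1], force, gamma, kT, dt, rng))
    return xs
```

with a harmonic force f(x) = -x, long trajectories sample the boltzmann distribution of the well, which is the convergence property described above.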
the problem becomes particularly challenging once the number of amino acids in the protein sequence becomes more than 115 [17] (i.e., roughly n = 1150 atoms). the associated phase space (of 3n positions, x = x_1, x_2, ... x_n, and 3n momenta) for systems of these sizes (or higher) becomes too extended for physics-based methods such as md to visit all the possible points in the 6n-dimensional space. this incomplete description of the phase space, together with the well-known finite-size artifacts [19], introduces "memory" into any realistic md simulation. introduced originally by zwanzig and used in ref. [22], this memory shows up as a "long-time" tail in the auto-correlation functions of atoms undergoing simulation. in a fully equilibrated system, this memory is short-term, vanishing within picoseconds (10^-12 seconds) for the carbon, hydrogen and oxygen atoms that primarily compose proteins [7]. in non-equilibrium simulations that are often employed to accelerate md [10], the long-time tail stretches to nanoseconds (10^-9 seconds). noting that every integration time step in md is 1-2 fs (10^-15 seconds), there exist at least six orders of magnitude in time within which the memory of the system is relevant and offers the opportunity to leverage deep learning techniques for making predictions. computational modeling of any complex dynamics essentially boils down to a multivariate time series forecasting task, and hence time series trajectory data capturing an evolving biological system is necessary to analyze and computationally learn the underlying molecular dynamics. below we first present some basic definitions and notations we will use to characterize the md time series.
- lead time: for a forecasting problem, the lead time specifies how far ahead the user wants to predict the future positions of atoms. predicting far ahead (high lead) enables faster md simulation, and at the same time makes the forecasting task more challenging.
- history size: next, we must decide how much historical data we wish to use to predict the future positions of atoms. this value is known as the history size.
- prediction window: the prediction window indicates the discrete time window in the future used for creating the prediction outcome. for simplicity, in this paper, we always use a prediction window of 1 fs.
- prediction error: error is defined as the root mean square deviation (rmsd, eq. 1) between real and predicted structures at a given time point. during the learning stages, the error across individual iterations is denoted loss.
we present two new data sets to introduce subtleties in the equilibrium and non-equilibrium molecular dynamics from the perspective of time series forecasting. an analysis of these data sets will bring to light how effects of the history (or correlation) in the time series data can be described at different lead times and prediction windows to model a real-time, dynamically evolving md time series. the training objective here is to minimize the prediction error for a sufficiently large batch of training instances over a historical time span. we introduce two data sets from two distinct kinds of md simulation systems. illustrated in fig. 2, the first data set is an equilibrium simulation of the enzyme adenosine kinase (adk). the second one is a steered molecular dynamics (smd) or non-equilibrium simulation of the 100-alanine polypeptide helix (fig. 3). in smd, an external force is applied to the system along a chosen direction. we applied a force of 1 nanonewton along one end of the 100-alanine helix, unfolding the protein [25]. we have generated high- as well as low-dimensional data for both the systems. in high dimension, the position of every atom is explicitly defined, resulting in a 3324 × 3 representation per time point for adk. in both the data sets, the shape transformation of a 3-dimensional (3d) many-body system is recorded over time.
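the history-size / lead-time / prediction-window bookkeeping defined above can be sketched as a simple slicing routine that turns one trajectory into supervised (input, target) training pairs (the function name and exact indexing convention are illustrative):

```python
def make_windows(series, history, lead, window=1):
    """Slice a time series into (input, target) training pairs.

    `series` is a list of frames (one per fs). Each sample uses `history`
    past frames as input; the target is the `window` frames starting
    `lead` steps after the last input frame."""
    samples = []
    for t in range(history, len(series) - lead - window + 2):
        past = series[t - history:t]
        future = series[t + lead - 1:t + lead - 1 + window]
        samples.append((past, future))
    return samples
```

a larger lead shrinks the number of usable samples and, as discussed above, makes each prediction harder; a prediction window of 1 fs matches the setting used in the paper.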
for adk, a transition from an open to a closed 3d shape is observed due to concomitant rearrangements of 214 particles (fig. 2b), while in 100-alanine, a more non-linear helix-to-coil transition is probed by tracking the changes in position of 100 particles (fig. 4a). beyond such high dimensionality of the data sets, the uniqueness of the equilibrium md time series is in its dynamical evolution: the kinetic behavior stochastically switches between fast and slowly evolving regimes. using rmsd values of all the particle positions with respect to the very first, t = 0 position, we showcase these sudden changes in single-particle as well as collective dynamics in fig. 2a. for the non-equilibrium time series data of 100-alanine, the movements occur in the presence of an external force. these simulations produce less noisy data than the equilibrium md of adk (fig. 4b vs. 2a). however, given that the shape changes are highly directed, we find that there are multiple classes of single-particle dynamics hidden under a collective behavior. unlike the equilibrium md simulations, where the positions of all the particles are gaussian-distributed about a mean, at least two different classes of particle distributions are observed in the non-equilibrium time series (fig. 4c vs. 2c). the distribution of the significant majority of atoms is non-gaussian, reflective of the positional biases from the high external forces to which they are subjected. during protein structure determination experiments, the atomic positions of a target protein are assigned by averaging the observed electron densities [9]. while this assignment offers a good starting model, the derived protein structure is typically in a non-biological (or non-native) state, and therefore severely limits biological application. such artifacts can be resolved by bringing the starting model into thermal equilibrium at room temperature. once in equilibrium, the protein adopts its native structure (3d shape) and dynamics.
by numerically integrating eq. 2, equilibrium md simulations monitor the real-time evolution of native proteins. the challenges involved in modeling equilibrium md data can be presented using the lead times of the associated time series. the difficulty of the time series data is quantified by tracking how the rmsd values between the data points change at different lead times, namely at leads of 10, 50, ..., 1200 fs (fig. 3). the change in rmsd at different lead times also serves as a direct probe of the correlation in the data. if the lead time is short (10 or 50 fs), then it is simple to computationally probe the 0.1-0.2 å scale changes in molecular position (fig. 3, black and red traces) by analyzing the associated short-time correlations (fig. 4d). in contrast, if the lead time is too long (600 and 1200 fs), then key short-time correlations within the data are missed. thus, the associated small 3d shape changes may not be accurately learnt at this scale. one advantage of this data set is that all the particles are "well-behaved" and their dynamics is gaussian distributed (fig. 2c). thus, an optimal lead time is desired: one which is sufficiently large (far into the future) to be interesting from a biological standpoint and which, at the same time, can be used to train a machine learning model aimed at replacing computationally expensive md. data preparation. a starting 3d protein model of adk was generated using an x-ray diffraction crystal structure obtained from the pdb [2]. the atomic coordinates of adk are encoded in the traditional pdb format presenting the x, y, z positions. x-ray diffraction is unable to resolve hydrogen atom positions. thus, the positions of hydrogen atoms were inferred using the run_adk.py script located in the equilibrium md simulation directory of the github repository for this project [11]. in this way, a complete initial model was determined. the goal of equilibrium md is to recreate the native dynamics of a protein of interest.
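the lead-time probe described above (rmsd between frames separated by a given number of steps) can be sketched as follows. the trajectory here is a synthetic random walk standing in for the md data, so only the qualitative trend (rmsd grows with lead time) carries over; all names and sizes are ours.

```python
import math, random

def frame_rmsd(a, b):
    """rmsd between two frames given as flat [x1, y1, z1, x2, ...] lists."""
    n_atoms = len(a) // 3
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / n_atoms)

def mean_rmsd_at_lead(traj, lead):
    """average rmsd between frames separated by `lead` steps: a direct
    probe of how fast the time series decorrelates."""
    pairs = range(len(traj) - lead)
    return sum(frame_rmsd(traj[t], traj[t + lead]) for t in pairs) / len(pairs)

# synthetic stand-in: 50 atoms on a small random walk, 2000 frames
random.seed(1)
traj, frame = [], [0.0] * 150
for _ in range(2000):
    frame = [x + random.gauss(0.0, 0.01) for x in frame]
    traj.append(frame)

for lead in (10, 50, 600, 1200):
    print(lead, round(mean_rmsd_at_lead(traj, lead), 3))
# rmsd grows with lead time: short leads see small, learnable changes,
# while long leads miss the short-time correlations
```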
therefore, the forces acting on each atom of the protein are defined using a potential energy function or force field. the amber force field ff14sbonlysc was used for the adk simulation [6]. an implicit water model, gb-neck2, was chosen to capture the equilibrium adk environment; it is computationally efficient and enhances conformational sampling through decreased friction (γ in eq. 2) [6]. after force field and water model selection, the energy of the protein model is minimized. the energy minimization corrects atoms that are in erroneously close contact due to artifacts from structure determination. if uncorrected, these contacts can produce unrealistic forces that cause the simulation to become unstable. once minimized using conjugate gradients, the all-atom model is ready for production simulation. the adk simulation was performed for 10^5 timesteps of 1 fs, and atomic models were saved every 10 fs. this results in a 0.1 nanosecond (10^5 steps × 1 fs/step) simulation of the adk protein, providing a time series of 10^4 data points. the simulation of adk was performed using the openmm python library [6]. five copies of the simulation were performed at a temperature of 310 k. the collective dynamics of adk was monitored by computing its rmsd relative to the t = 0 time point (fig. 2a). a plateau in this profile suggests that equilibrium is attained at 0.8 × 10^5 fs. the trajectory data, containing 10^4 time points or snapshots, was initially stored in single-precision binary fortran files known as dcd files. the positional coordinates (x, y, z) of all atoms in each snapshot were extracted from the dcd file, resulting in a rank-3 tensor which was (3324 × 3 × 10^4) for the high-dimensional space and (214 × 3 × 10^4) for the low-dimensional data. the entire simulation can be reproduced with a single openmm python script located in the equilibrium md simulation directory on github [11]. life as we know it exists out of equilibrium.
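the reshaping of per-frame (atoms × 3) snapshots into the rank-3 (atoms × 3 × time) tensor described above is just index bookkeeping; a toy-sized sketch (4 atoms, 5 frames instead of 3324 × 10^4), with invented data. the actual coordinate extraction from dcd files would use a trajectory reader such as vmd or mdanalysis.

```python
import random

# toy frames as read frame-by-frame from a trajectory file:
# 4 atoms and 5 snapshots stand in for 3324 atoms x 10^4 frames
random.seed(2)
n_atoms, n_frames = 4, 5
frames = [[[random.random() for _ in range(3)] for _ in range(n_atoms)]
          for _ in range(n_frames)]

# rearrange into the rank-3 (atoms x xyz x time) tensor used in the paper
tensor = [[[frames[t][a][d] for t in range(n_frames)]
           for d in range(3)]
          for a in range(n_atoms)]

print(len(tensor), len(tensor[0]), len(tensor[0][0]))  # 4 3 5
```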
traditionally, experiments focusing on the non-equilibrium behavior of proteins were performed by either adding heat or inducing chemical perturbations. another factor that can bring proteins out of equilibrium is mechanical stress (e.g., stretching of the molecules). such stretching arises naturally in proteins located in muscle tissue. the response of these proteins to mechanical stress can be studied by investigating an individual domain's response to stretching within atomic force microscopy (afm) experiments [16]. these molecular events are analogous to the process of pulling a rubber band while holding one end fixed in our hand (fig. 4a). now, we employ non-equilibrium md simulations to computationally recreate the afm experiments. in particular, steered md (smd) is used to generate a relevant and challenging data set on which learning algorithms can be trained and validated. it is notable that events from such non-equilibrium pulling experiments, or their equivalent smd simulations, have never been used within an rnn, in particular an lstm, framework for time series forecasting. the challenge in smd is commensurate with that of equilibrium md in that an optimal lead time should be derived respecting the correlation limits of the data. however, the subtleties are twofold: first, for the same lead time steps the rmsd error bars in smd are much higher (fig. 5), consistent with more prominent 3d shape changes than those observed for equilibrium md simulations of adk (fig. 4a vs. 2b). yet the longer correlation times (fig. 4e) indicate smoother shifts within the time series. second, there are multiple classes of atoms with different dynamics distributions (fig. 4c). data preparation. the 100-alanine helix was prepared using the avogadro software on a single cpu. the external force acts on the c-terminus of the long helical protein, while the n-terminus region remains constrained.
as the molecule is stretched, it undergoes a gradual conformational change, transitioning from an α-helix to a random coil (fig. 4a). typically, there are two variants of smd: constant force and constant velocity pulling. the external pulling force (f_spring) acting on the atom in the c-terminal region of the protein is given by f_spring = k(vt − x). here, x is the displacement of the pulled atom from its original position, v is the prescribed pulling velocity, and k is the spring constant. in the presence of this external force, the equation of motion (eq. 2) is augmented by the additional force term f_spring. for our data set, we adopt the smd with constant velocity (smd-cv) protocol from our open-source namd tutorial [25]. the smd-cv simulations are performed using the langevin dynamics scheme of md at a constant temperature of 300 k in generalized-born implicit solvent with the charmm36m force field [19]. one end of the molecule (the n-terminus) is constrained while the other end (the c-terminus) is free to move along the z-axis with a constant speed of 0.2 å/ps and a force constant of 7 kcal/mol/å^2, exerting an overall force of 1 nanonewton (fig. 4a) [16]. a set of 5 copies of smd is used to generate an ensemble of conformations under smd-cv pulling. all simulations are performed using a recent nightly build of namd with a time step of 1 fs, a dielectric constant of 20, and a user-defined cut-off for coulomb forces with a switching function starting at a distance of 10 å and plateauing to zero at 12 å. a simulation time of 10^7 fs is required for extension of the helix to a random coil. here, we save the trajectory every 10 fs, mainly to generate a large data set of 10^6 points to train an lstm model in sect. 5. the data presented in figs. 4 and 5 are saved at even longer time intervals, namely 5000 fs, to reduce the number of time points to 2000 for computing lead times and correlations.
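the smd-cv pulling force above can be sketched directly from its definition; k and v below are the values quoted in the text, while the function name and toy inputs are ours.

```python
# smd-cv spring force f_spring = k * (v*t - x): a restraint point moves at
# constant velocity v while the pulled atom lags behind at displacement x
K = 7.0  # spring constant, kcal/mol/A^2 (value used in the paper)
V = 0.2  # pulling velocity, A/ps (value used in the paper)

def f_spring(t_ps, x_ang):
    """force (kcal/mol/A) on the pulled c-terminal atom at time t_ps (ps)
    given its displacement x_ang (angstrom) along the pulling axis."""
    return K * (V * t_ps - x_ang)

# toy check: an atom lagging 1 angstrom behind the restraint at t = 100 ps
print(f_spring(100.0, V * 100.0 - 1.0))  # k * 1 A -> 7.0 kcal/mol/A
```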
the full data set ((1003 or 100) × 3 × 10^6 points), which is used in the lstms below, is accessible through the google drive link provided on github [11]. a tcl script, smd.constvel.namd, is used to implement the outlined simulation protocol. the script includes all the standard namd parameters outlined above to perform smd. this script, together with all other input files, is available freely through github [11] and the namd website [25] to reproduce our data set for non-equilibrium md simulations. our md data is documented in tutorial files, scripts, and an openly accessible github page [11], so any user with access to a single cpu or gpu node will be able to reproduce the results. the full time series can be loaded, visualized in 3d, and analyzed for rmsd using the molecular visualization tool vmd (figs. 2 and 4). the presented data set arguably represents a first attempt at capturing the entire range of time series variations typical of a biomolecule. we describe two broad classes of data with distinct correlation timescales. more importantly, the data clearly shows how external physical forces can alter time series correlations and provides an avenue to experiment with machine learning models for probing such external factors. accordingly, a data scientist can choose a suite of different learning algorithms to model these fast-evolving, high-dimensional md trajectory data. the equilibrium data at a single-particle level appears to be well behaved, with relatively uniform kurtosis values (fig. 2c), but poses difficulties in training due to the rapid variability in rmsds (fig. 2a, multiple shaded regions). in contrast, the non-equilibrium data shows non-gaussian statistics (fig. 4c), eliciting complexity at the single-particle level, but manifests smooth changes in the time series when the particles are treated together (fig. 4b).
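the per-particle kurtosis contrast noted above (uniform, near-gaussian values for equilibrium adk vs. non-gaussian statistics under pulling) can be probed with a simple excess-kurtosis estimate; the traces below are synthetic stand-ins, not the actual data.

```python
import random, statistics

def excess_kurtosis(xs):
    """sample excess kurtosis: ~0 for gaussian data, nonzero for the
    biased position distributions seen under strong external forces."""
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    m4 = sum((x - mu) ** 4 for x in xs) / len(xs)
    return m4 / sd ** 4 - 3.0

random.seed(0)
gaussian_trace = [random.gauss(0.0, 1.0) for _ in range(20000)]
# a steadily driven coordinate (uniform ramp) is strongly platykurtic
driven_trace = [i / 20000.0 for i in range(20000)]
print(round(excess_kurtosis(gaussian_trace), 2))  # near 0
print(round(excess_kurtosis(driven_trace), 2))    # near -1.2 (uniform limit)
```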
a key question these data sets pose is whether a common learning algorithm can ever be introduced to work with all the limits of biomolecular dynamics. a second question the data sets raise pertains to the identification of limits that are easier to model using popular sequence modeling techniques like rnns with lstm or gru cells, either in isolation or in concert. finally, will the learning algorithms scale if the dimensions of the data sets increase from the hundred-to-thousand variables chosen here for simplicity to the more realistic million-to-billion dimensional spaces? these three questions also offer the opportunity to think about the use of the existing petascale or the upcoming exascale resources for handling convoluted biomolecular problems with data science methodologies. put together, these data sets place a machine learning expert in a position to address one of the central questions at the interface of the life sciences and computer sciences, namely: to what extent can numerical simulation schemes be bypassed using machine learning tools? the community of computational biophysics, with nearly 20,000 namd users and a 3-4 fold larger cadre of researchers applying md, will immediately benefit from answering this question. the findings from this data set are further generalizable to any domain with quantitative data on high-dimensional, rapidly fluctuating time series. due to the recent success of recurrent neural networks (rnns) for modeling time series data [4], we conducted an exploratory study with rnns to model the two new dynamically evolving md trajectory data sets. we used long short-term memory (lstm) cells in the hidden layers and trained rnns on both equilibrium and non-equilibrium md simulations to decipher which data set is more amenable to learning. more specifically, we conducted a series of experiments to produce baseline accuracy numbers for lstms as well as to tune the different hyperparameters associated with the same.
below we present a brief summary of the experiments that were conducted and report our findings to facilitate in-depth future research in this direction. as a starting point, we set the static model as our baseline, where we assume that the position of an atom at a future timestamp x_{t+lead} does not change relative to its last known position x_t, where t is the current timestamp. the assumption is incorrect, but it still helps us set a realistic baseline for evaluating the performance of advanced machine learning techniques like lstms. figures 6a,b (adk) and 8a,b (smd) show the rmsd distributions of the static model for lead time steps 15 and 120, respectively. for starters, we trained an rnn with 32 lstm units in the hidden layer, a learning rate of 0.01, history size 5, and varying lead time steps of {1, 5, 15, 60, 120}. the output layer used linear activation, and mean squared loss was used as the training loss function. below we report some of our key observations from the experiments. curse of dimensionality and kirchhoff decomposition: we found that learning by treating the entire protein structure at a given timestamp as a single training instance is very challenging due to the high dimensionality of the problem, generating higher errors than the static model. to deal with this issue, we assumed that the position of each atom within the protein structure is independent of the others and can be modeled as a separate one-dimensional time series. this so-called kirchhoff decomposition scheme boosted the performance of the lstm significantly. adk vs 100-alanine: we report the rmsd of each simulated system, i.e., adk and smd (figs. 2a and 4b). we found that the rmsd of the smd simulation of 100-alanine is one order of magnitude higher than that of the equilibrium md simulation of adk. this is due to the non-equilibrium nature of the former, where an external force is used to pull the system.
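the static baseline and the kirchhoff decomposition described above can be sketched together: each coordinate is treated as an independent 1-d series, and the static model predicts x_{t+lead} = x_t. synthetic random-walk traces stand in for the md coordinates, and all names are ours.

```python
import math, random

def static_error(series, lead):
    """static-model baseline: predict x_{t+lead} = x_t; return rms error."""
    diffs = [series[t + lead] - series[t] for t in range(len(series) - lead)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# kirchhoff decomposition: model each coordinate as an independent 1-d
# time series instead of one high-dimensional training instance
random.seed(3)
coords = []
for _ in range(10):                      # 10 independent coordinate traces
    x, trace = 0.0, []
    for _ in range(1000):
        x += random.gauss(0.0, 0.05)
        trace.append(x)
    coords.append(trace)

per_coord = [static_error(c, lead=15) for c in coords]
baseline = sum(per_coord) / len(per_coord)
print(round(baseline, 3))  # the number an lstm has to beat at this lead
```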
this difference is also reflected in the static model error at varying lead time steps (figs. 3 and 5). effect of lead time: increasing lead time makes time series forecasting harder, which we expected would justify the use of complex sequence modeling techniques like lstms. in other words, we hypothesized that an increase in lead time would cause the lstm error to increase less than the static model error. we found this to be true for the 100-alanine smd simulation. with lead time steps of 1 and 5, the lstm loss was higher than the static model error. however, with a lead time step of 15, the lstm performed better than the static model, and the improvement over the static model increased further at even higher lead time steps (60 and 120). due to lack of space, we only present the results for lead times 15 and 120 (fig. 8a,b). in contrast, we have not been able to achieve lower lstm losses compared to the static model loss for the equilibrium md simulation of adk for lead time steps 1 through 120. the equilibrium md simulation of adk decorrelates much faster than smd, in the picosecond regime (fig. 2a). this yields the interesting and counter-intuitive result that equilibrium md trajectories were more difficult to model than the non-equilibrium md trajectories. effect of history: for this set of experiments, we hypothesized that an increase in history size would reduce the lstm training error, as we are using more information from the past. indeed, the results confirm our hypothesis (figs. 6a,b and 8a,b). more specifically, we varied history size among {1, 5, 10} and found that increasing the history reduces the lstm training errors for both the adk and 100-alanine trajectories. effect of learning rate: we trained the lstm network separately while varying the learning rate among {1.0, 0.1, 0.01, 0.001, 0.0001}. we found that rates of 1.0 and 0.1 were unstable, while 0.001 and 0.0001 were too slow to converge for smd simulations (fig.
7) [results for md simulations were similar, and are provided in github [11]]. thus, we recommend 0.01 as the learning rate. summary of hyper-parameter tuning study: based on our exploratory study, we recommend the set of empirical values for each hyper-parameter shown in table 1. in regards to future directions for methods that can be applied to the data set, there are still more ways to improve training of lstms. one possible improvement is through more stacked lstm layers, which would be able to learn more nonlinear dynamical relationships between the points. beyond lstms, we can also borrow from deep learning in natural language processing by utilizing attention models, which have recently been achieving state-of-the-art results without the use of a recurrent hidden layer [4]. another consideration for future work is the ability to reformulate the 3d structural input of the data as a 3d point cloud. there have been recent deep learning architectures used in 3d point cloud segmentation and classification, such as voxelnet and pointnet [3]. both architectures leverage the underlying 3d relationship between points and objects in 3d space for the supervising task. with voxelnet, the data is voxelized into fixed voxels on which a 3d convolutional neural network is used. however, with architectures like pointnet, the input can be variable. in this case, a future direction can be the addition of a data set in which the number of atoms per dynamical system can be varied. with architectures that deal with data in 3d space, there is the consideration of new loss functions. here, we utilized mse loss in optimizing our lstm. loss functions such as earth mover's distance (emd) and chamfer loss are two of the most notable losses used for 3d point generation [8]. moreover, emd can be extended to graphs, which can be useful for learning not only the 3d geometrical relationships, but also the graph relationships between atoms.
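of the two point-cloud losses mentioned above, chamfer loss is simple enough to sketch in pure python (emd additionally requires an optimal matching, e.g. via the hungarian algorithm, and is omitted); the toy clouds are invented.

```python
def chamfer(a, b):
    """symmetric chamfer distance between two 3d point sets: each point is
    matched to its nearest neighbour in the other set (squared distances)."""
    def sq(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    d_ab = sum(min(sq(p, q) for q in b) for p in a) / len(a)
    d_ba = sum(min(sq(q, p) for p in a) for q in b) / len(b)
    return d_ab + d_ba

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
shifted = [(x + 0.1, y, z) for x, y, z in cloud]
print(chamfer(cloud, cloud))              # identical clouds -> 0.0
print(round(chamfer(cloud, shifted), 3))  # small shift -> small loss (0.02)
```

unlike mse on ordered coordinates, this loss is invariant to the ordering of points, which is what makes it attractive for point-cloud generation.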
the external information sought in the current data sets from afm or force measurements to improve temporal correlation can also be derived from other experimental modalities such as x-ray crystallography [18] or cryo-electron microscopy [24]. finally, recovery of the all-atom description from an lstm-predicted reduced space of only heavy atoms opens the door to inverse-boltzmann approaches for reverse coarse-graining [9]. in the present study, we report two new data sets describing equilibrium and non-equilibrium protein dynamics produced by physics-based simulations. these data sets fill a knowledge gap in the protein-learning field, providing a synergistic augmentation to the popular existing data sets used for learning molecular structure [2]. protein dynamics was represented as time-series data and was modeled through a recurrent neural network with lstm cells in the hidden layer. we found that the learning of both data sets was improved when using a kirchhoff decomposition on models with a constant number of hidden layers. the ability to forecast future structure was shown to depend on the correlation among the recent past structures. specifically, dynamics within the non-equilibrium molecular dynamics simulations were highly correlated, and thus protein dynamics were effectively learned. conversely, the movements of a protein at thermal equilibrium were poorly correlated, making accurate forecasting more difficult. increasing the history size improved the prediction accuracy for both data sets, and the lstm outperformed the static baseline when forecasting at higher lead times. overall, lstms provide an exciting tool for modeling non-equilibrium protein dynamics. virtually all biologically relevant actions occur out of equilibrium; therefore, these results indicate an exciting advance with far-reaching implications.
- on the range of applicability of the reissner-mindlin and kirchhoff-love plate bending models
- the protein data bank
- pointnet: deep learning on point sets for 3d classification and segmentation
- recurrent neural networks for multivariate time series with missing values
- a thorough review on the current advance of neural network structures
- openmm 7: rapid development of high performance algorithms for molecular dynamics
- shear viscosity of the hard-sphere fluid via nonequilibrium molecular dynamics
- a point set generation network for 3d object reconstruction from a single image
- computational methodologies for real-space structural refinement of large macromolecular complexes
- reconstructing potentials of mean force through time series analysis of steered molecular dynamics simulations
- cikm2020 md prediction
- vmd: visual molecular dynamics
- generalized scalable multiple copy algorithms for molecular dynamics simulations in namd
- ensemble of multi-headed machine learning architectures for time-series forecasting of healthcare expenditures
- boltzmann generators: sampling equilibrium states of many-body systems with deep learning
- calculating potentials of mean force from steered molecular dynamics simulations
- accelerating molecular simulations of proteins using bayesian inference on weak information
- predicting improved protein conformations with a temporal deep recurrent neural network
- time series forecasting of petroleum production using deep lstm recurrent networks
- financial time series forecasting with deep learning: a systematic literature review
- order parameters for macromolecules: application to multiscale simulation
- atoms to phenotypes: molecular design principles of
- molecular dynamics-based refinement and validation for sub-5 å cryo-electron microscopy maps
- total predicted mhc-i epitope load is inversely associated with mortality from sars-cov-2.
key: cord-000988-79fp75u3 authors: al-siyabi, turkiya; binkhamis, khalifa; wilcox, melanie; wong, sallene; pabbaraju, kanti; tellier, raymond; hatchette, todd f; leblanc, jason j title: a cost effective real-time pcr for the detection of adenovirus from viral swabs date: 2013-06-07 journal: virol j doi: 10.1186/1743-422x-10-184 sha: doc_id: 988 cord_uid: 79fp75u3 compared to traditional testing strategies, nucleic acid amplification tests such as real-time pcr offer many advantages for the detection of human adenoviruses. however, commercial assays are expensive and cost prohibitive for many clinical laboratories. to overcome fiscal challenges, a cost effective strategy was developed using a combination of homogenization and heat treatment with an "in-house" real-time pcr. in 196 swabs submitted for adenovirus detection, this crude extraction method showed performance characteristics equivalent to those of viral dna obtained from a commercial nucleic acid extraction. in addition, the in-house real-time pcr outperformed traditional testing strategies using virus culture, with sensitivities of 100% and 69.2%, respectively. overall, the combination of homogenization and heat treatment with a sensitive in-house real-time pcr provides accurate results at a cost comparable to viral culture. human adenoviruses (hadv) are ubiquitous dna viruses that cause a wide spectrum of illness [1]. the majority of hadvs cause mild and self-limiting respiratory tract infections, gastroenteritis or conjunctivitis; however, more severe disease can occur, such as kerato-conjunctivitis, pneumonitis, and disseminated disease in the immunodeficient host [2-5]. hadv is increasingly being recognized as a significant viral pathogen, particularly in immunocompromised patients where accurate and timely diagnosis can play an integral part of management [6-12].
hadv diagnosis can be achieved using virus culture, antigen-based methods (immunofluorescence, enzyme immunoassays or immunochromatography), or nucleic acid amplification tests (naats). for respiratory viruses, naats are well established as the most sensitive methods for detection and have become front-line diagnostic procedures [7, 13-18]. most commercially available naats are highly multiplexed assays and enable simultaneous detection of several respiratory pathogens; however, their poor performance for detecting hadv emphasizes the need for single-target detection [15, 17, 19]. adenovirus-specific naats have been challenged by the diversity of hadv species, which now include more than 60 different types [20, 21]. commercial qualitative and quantitative naats are available for the detection of all hadv species and most types, yet these are cost prohibitive for many laboratories. "in-house" real-time pcr assays are relatively inexpensive alternatives to commercial naats that provide rapid and accurate results [7, 17, 18, 21-26]. wong and collaborators [18] developed an in-house real-time pcr assay designed for the detection of all hadv species. it has been extensively validated using a variety of clinical specimens [17, 18]. in addition to the pcr reaction itself, extraction of nucleic acids prior to pcr is also a substantial contributor to cost. recently, a crude mechanical lysis using silica glass beads (i.e., homogenization) combined with heat treatment was shown to recover herpes simplex virus dna from swabs submitted in universal transport media (utm) [27, 28]. while defying the traditional paradigm of specimen processing for molecular testing, homogenization with heat treatment was shown to be a cost effective alternative to nucleic acid extraction.
this study evaluated whether the combination of homogenization and heat treatment with an in-house real-time pcr would be a cost effective strategy for the detection of hadv from viral swabs transported in utm. in patients suspected of respiratory infection or conjunctivitis, flocked nasopharyngeal or ocular swabs, respectively, were submitted for adenovirus detection. swabs were collected by clinicians at the capital health district authority (cdha) and were submitted to the cdha microbiology laboratory (halifax, ns, canada) between april 2010 and march 2012. the swabs were transported in 3 ml of utm (copan diagnostics inc., murrieta, ca) and kept at 4°c for no more than 24 hours prior to processing. viral cultures were performed as part of routine diagnostic testing by experienced technologists. following virus culture, specimens were transferred in aliquots into cryotubes (without any identifiable patient information) and the anonymized specimen tubes were archived at −80°c for retrospective molecular analyses. twenty-seven virus culture-positive specimens and 169 virus culture-negative specimens were randomly selected and tested for the presence of hadv using a well established in-house real-time pcr assay [18] following recovery of viral dna by homogenization with heat treatment or automated nucleic acid extraction. the world medical association (wma) declaration of helsinki is a statement of ethical principles for medical research involving human subjects, including research on identifiable human material and data. since the purpose of this clinical validation was quality improvement of the laboratory detection of adenovirus and it relied exclusively on anonymous human biological materials that did not use or generate identifiable patient information, research ethics board (reb) review was not required based on chapter 2, article 2.4 of the tri-council policy statement: ethical conduct for research involving humans (2nd edition).
viral cultures were performed as part of routine diagnostic testing by experienced technologists in the cdha microbiology laboratory (halifax, ns, canada). briefly, 500 μl of specimen was inoculated onto cultured a549 cells (atcc ccl-185), incubated at 37°c in a 5% co2 atmosphere, and monitored daily for the presence of characteristic cytopathic effect (cpe) [29]. if cpe was observed, cells were fixed with acetone and stained using specific fluorescein isothiocyanate (fitc)-labeled monoclonal antibodies in the d2 ultra dfa reagent kit (diagnostic hybrids, athens, oh). in the absence of cpe, cells were fluid changed on day 7 and incubated for an additional 7 days. on day 14, the culture was discontinued and a terminal stain was performed. a549 cells were propagated in nutrient mixture f-12 ham with l-glutamine (sigma-aldrich canada ltd., oakville, on) supplemented with 1% fetal calf serum (hyclone, thermo fisher scientific, ottawa, on), 2 μg/ml amphotericin b (sigma-aldrich), 25 μg/ml ampicillin (novapharm ltd, toronto, on), and 1 mg/ml vancomycin (sigma-aldrich). for quantification, 10-fold dilutions of hadv-c, type 6 (strain tonsil 99, atcc vr-6) were inoculated onto 96-well plates in volumes of 100 μl. cells were maintained as described above and after 14 days were subjected to direct immunofluorescence (dfa) to determine the 50% tissue culture infective dose (tcid50). results were expressed as tcid50/ml and represent 8 replicates obtained in four independent experiments (n = 32). prior to molecular testing, viral dna was recovered from specimens using either homogenization with heat treatment as previously described [27], or a commercial nucleic acid extraction as recommended by the manufacturer.
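the tcid50 endpoint described above can be computed from well counts with a 50% endpoint method; the source does not state which estimator was used, so the reed-muench method is shown purely as one standard illustration, with invented toy counts.

```python
def tcid50_reed_muench(log10_dilutions, n_pos, n_total):
    """reed-muench 50% endpoint; returns log10 of the endpoint dilution.
    dilutions are ordered from least to most dilute."""
    # cumulative positives counted toward higher dilutions,
    # cumulative negatives toward lower dilutions (reed-muench convention)
    cum_pos = [sum(n_pos[i:]) for i in range(len(n_pos))]
    neg = [t - p for t, p in zip(n_total, n_pos)]
    cum_neg = [sum(neg[: i + 1]) for i in range(len(n_pos))]
    pct = [p / (p + n) for p, n in zip(cum_pos, cum_neg)]
    # locate the dilution interval straddling 50% and interpolate
    for i in range(len(pct) - 1):
        if pct[i] >= 0.5 > pct[i + 1]:
            prop = (pct[i] - 0.5) / (pct[i] - pct[i + 1])
            step = log10_dilutions[i + 1] - log10_dilutions[i]
            return log10_dilutions[i] + prop * step
    raise ValueError("50% endpoint not bracketed")

# toy plate: 8 wells per 10-fold dilution (invented counts)
dils = [-4, -5, -6, -7]   # log10 dilution factors
pos = [8, 6, 2, 0]        # cpe/dfa-positive wells
tot = [8, 8, 8, 8]
print(tcid50_reed_muench(dils, pos, tot))  # endpoint near 10^-5.5
```

the titer in tcid50/ml then follows from the endpoint dilution and the inoculated volume (100 μl per well here).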
for homogenization, 500 μl of specimen and 0.5 g of acid-washed silica beads of various sizes (≤106 μm; 150-212 μm; 719-1180 μm, at a ratio of 3:2:1; sigma-aldrich, oakville, on) were placed on a fastprep-24 homogenizer (mp biomedicals, solon, oh) at 6.5 m/s for 45 s. following a brief centrifugation at 14,000 × g for 1 min, 200 μl of the supernatant was diluted in two volumes of te buffer (10 mm tris-hcl, 1 mm edta, ph 8.0). the homogenate was then heated at 95°c for 15 min, cooled to room temperature, and 5 μl was subjected to adenovirus real-time pcr. automated extractions were performed on 200 μl of specimen using a magna pure total nucleic acid isolation kit (roche diagnostics, mannheim, germany) on a roche magnapure lc instrument. the elution volume was 100 μl. specimens with discordant results during the method comparison were subjected to a manual dna extraction using a qiaamp dna blood mini kit (qiagen, toronto, on) with a sample volume of 200 μl. the dna was eluted in 100 μl and concentrated 10-fold using a qiagen minelute pcr purification kit. plasmid dna, used for the internal control, was purified from a 5 ml overnight culture using a qiaprep spin miniprep kit (qiagen) as recommended by the manufacturer. for molecular typing, amplicon was purified using a qiaquick gel extraction kit (qiagen) with a final elution volume of 50 μl. all nucleic acid extractions were performed according to the manufacturers' instructions. nucleic acids were used immediately following extraction and aliquots were placed at −80°c for long-term storage. the real-time pcr assay has been extensively validated using respiratory specimens [18]. to facilitate workflow in the cdha microbiology laboratory (halifax, nova scotia, canada), the in-house assay was optimized for amplification and detection on a roche lightcycler 2.0 platform. real-time pcr was performed as duplex reactions with primers and probes (table 1) targeting the adenovirus hexon gene and an exogenous internal control.
for adenovirus, two sets of primers and probes were used to span the genetically diverse adenovirus types [17, 18]. primers were synthesized by sigma genosys (oakville, on). probes for adenovirus and the internal control were purchased from biosearch technologies (novato, ca) and tib molbiol llc (adelphia, nj), respectively. the internal control, termed pgfp, is added to each reaction to monitor for the presence of pcr inhibitors. pgfp is a pmk-derived plasmid carrying a fragment of the gene encoding green fluorescence protein (gfp). the construct was synthesized, assembled, and transformed into escherichia coli k12 by life technologies (burlington, on). the final construct was verified by dna sequencing and restriction endonuclease digestion. e. coli harboring pgfp was inoculated into luria bertani broth supplemented with 50 μg/ml kanamycin. plasmid dna was purified from a 5 ml overnight culture and quantified by spectrophotometry. ten-fold serial dilutions were used as template for the in-house real-time pcr. an inverse linear relationship (y = −3.3916x + 40.275; r^2 = 0.9982) was generated by plotting crossing point (cp) values against plasmid concentration (data not shown). the linear range spanned cp values ranging from 7 to 37, corresponding to concentrations of 10^9 to 10^0 copies per μl, respectively. for each pcr reaction, approximately 2000 copies were added. the real-time pcr assay was performed using the lightcycler dna master hybprobe kit (roche diagnostics) in 20 μl reactions consisting of: 5 μl of template; 1× lightcycler faststart mix; 3 mm mgcl2; 0.5 units of heat-labile uracil-n-glycosylase [30]; 5 μl of the internal control at 400 copies/μl; 400 nm of each adenovirus primer (adv2f, adv2r, adv4f, adv4r) and 200 nm of each probe (adv2pr and adv4pr); and 500 nm of each pgfp primer (fgfp and rgfp) and 300 nm of each probe (gfppr1 and gfppr2) (table 1).
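the standard curve reported above (y = −3.3916x + 40.275, with y the cp value and x the log10 copy number) can be inverted to estimate copy number from a measured cp; a small illustrative sketch, not part of the study's workflow.

```python
def copies_from_cp(cp, slope=-3.3916, intercept=40.275):
    """invert the standard curve cp = slope * log10(copies) + intercept
    (slope and intercept as reported in the text)."""
    return 10 ** ((cp - intercept) / slope)

# sanity checks along the reported linear range (cp ~7 to ~37)
print(round(copies_from_cp(40.275)))  # 1 copy at the intercept
print(f"{copies_from_cp(7.0):.1e}")   # high-copy end of the curve
print(f"{copies_from_cp(37.0):.1f}")  # low-copy end of the curve
```

note that the slope of −3.3916 corresponds to an amplification efficiency close to the ideal −3.32 (one log10 of template per ~3.3 cycles).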
amplification and detection were performed on the lightcycler 2.0 instrument under the thermocycling conditions described for the roche hsv-1/2 detection kit: initial activation at 95°c for 10 min, followed by 45 amplification cycles of denaturation at 95°c for 10 s, annealing at 55°c for 15 s, and elongation at 72°c for 15 s. following amplification, melting temperature (tm) analysis was performed by measuring the fluorescent signal during the following cycling profile: 95°c for 0 s, 40°c for 60 s, and 80°c for 0 s with a 0.2°c/s transition. fluorescence was acquired at the annealing stage during amplification and continuously during the melting curve. cp and tm values were determined using software provided by the manufacturer. the 530 nm (adenovirus) and 705 nm (pgfp) channels were analyzed for the presence or absence of target. pcr inhibition was suspected upon either loss of positivity in the 705 nm channel, or a shift in cp values greater than two standard deviations (cp ≥ 1.0) from the value obtained with the negative control. to resolve discrepant results obtained between the in-house pcr assay and virus culture, or to quantify the adenovirus dna during evaluation of the analytical sensitivity, the adenovirus r-gene kit (argene inc., shirley, ny) was used according to the manufacturer's protocol following a manual dna extraction. this internally controlled quantitative real-time pcr assay targets the hexon gene of adenovirus, and is validated for the detection of types 1 to 52 [7].

table 1. nucleotide sequences of primers and probes used in this study (sequences listed 5′ to 3′; remaining entries truncated in the source):
adv2f: cca gga cgc ctc gga gta [18]
adv2r: aaa ctt gtt att cag gct gaa gta cgt [18]
adv2pr: fam-agt ttg ccc gcg cca cca ccg-bhq1* [18]
adv4f: gga cag gac gct tcg gag ta [18]
adv4r: ctt gtt ccc cag act gaa gta ggt [18]
adv4pr: fam-cag ttc gcc cgy gcm aca g-bhq1* [18]
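the inhibition rule above can be expressed as a small predicate; the function and argument names here are illustrative, not part of the instrument software:

```python
def pcr_inhibited(ic_detected: bool, ic_cp: float,
                  neg_control_ic_cp: float, max_shift: float = 1.0) -> bool:
    """Flag suspected PCR inhibition for one reaction.

    Inhibition is suspected when the internal-control signal is lost in
    the 705 nm channel, or when its Cp is delayed by two or more standard
    deviations (>= 1.0 cycles) relative to the negative control.
    """
    if not ic_detected:
        return True
    return (ic_cp - neg_control_ic_cp) >= max_shift
```

for example, an internal control crossing at cp 31.5 against a negative-control value of 30.0 would be flagged, while 30.4 would not.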
the kit contains a ready-to-use premix (primers, probe, polymerase, and buffer) needed for amplification, 4 quantification standards (at 50, 500, 5,000, and 50,000 copies/reaction), and a sensitivity control at 10 copies/reaction. results were expressed as the number of copies per reaction.

analytical specificity, limit of detection, and reproducibility

the analytical specificity was first determined in silico by performing a basic local alignment search tool (blast) search for the primers, probes, and entire amplicon sequences using the national center for biotechnology information website (http://www.ncbi.nlm.nih.gov). in addition, high-titer nucleic acids were extracted from a panel of microorganisms chosen based on their ability to cause similar diseases or their potential for being found in the clinical specimen as a pathogen or normal flora (figure 1 and table 2). the analytical sensitivity (or limit of detection, lod) of the homogenization with heat treatment or nucleic acid extraction, in combination with the real-time pcr, was determined using 10-fold serial dilutions (in utm) of a cultured hadv-c type 6. each dilution was simultaneously processed by both extraction methods, and an aliquot was immediately inoculated onto a549 cells for virus culture. the lod was defined by probit analysis [31] using triplicate values obtained in four independent experiments by two different operators (n = 24). each virus dilution was expressed as tcid50/ml in the original sample. the virus dilutions were also quantified using a commercial real-time pcr and expressed as target copies/reaction for each assay. intra- and inter-assay reproducibility were calculated for each dilution and expressed as percent coefficients of variation (%cv). the performance of each method was compared to a modified gold standard to determine sensitivity, specificity, accuracy, and precision. a case was defined by concordant results (positive or negative) between at least two assays.
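the probit-based lod estimation can be sketched as follows. the dilution series and hit counts below are hypothetical stand-ins (the study's raw replicate data are not given), and the grid-search maximum-likelihood fit is an illustrative substitute for a statistical package's probit routine:

```python
import math

def _phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def probit_lod(dilutions, hits, trials=24, p=0.95):
    """Estimate the concentration detected with probability p.

    Fits P(hit) = Phi(a + b*log10(conc)) to replicate hit/miss data by a
    coarse grid-search over (a, b), then inverts the fitted curve at p.
    """
    logc = [math.log10(c) for c in dilutions]
    best, best_ll = (0.0, 1.0), -float("inf")
    for a_i in range(-200, 201):            # a in [-4, 4], step 0.02
        a = a_i / 50
        for b_i in range(1, 251):           # b in (0, 5], step 0.02
            b = b_i / 50
            ll = 0.0
            for x, k in zip(logc, hits):
                pr = min(max(_phi(a + b * x), 1e-9), 1 - 1e-9)
                ll += k * math.log(pr) + (trials - k) * math.log(1 - pr)
            if ll > best_ll:
                best_ll, best = ll, (a, b)
    a, b = best
    z95 = 1.6449                            # Phi^{-1}(0.95)
    return 10 ** ((z95 - a) / b)

# hypothetical replicate data: copies/reaction vs. positives out of 24
lod = probit_lod([120, 12, 1.2, 0.12], [24, 24, 20, 5])
```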
to resolve discrepant results obtained between the in-house real-time pcr assay and virus culture, dna was extracted manually and subjected to the commercial real-time pcr. the 27 virus culture-positive specimens were subjected to pcr targeting the conserved segments surrounding the hypervariable region 7 (hvr7) of the hexon gene, and amplicons were sequenced using dye terminator chemistry on the applied biosystems 3130xl dna sequencer. type designation was undertaken by blast analysis, and confirmed by comparison to a database generated from sequences obtained from genbank [32]. sequence analysis and multiple sequence alignments (clustalw analysis) were performed using the seqman and megalign components of lasergene 6 software (dnastar, madison, wi). the phylogenetic tree was inferred using a neighbor-joining (nj) method with bootstrapping analysis for n = 1000; clades are shaded to depict species a to f (figure 1). chi-square and two-tailed fisher's exact tests were used to compare proportions in 2-by-2 contingency tables. confidence intervals (99%) for the estimated parameters were computed by a general method based on "constant chi-square boundaries" [33]. agreement between assays was measured using kappa statistics. the statistical package for social sciences (spss) software v.10 was used, and p ≤ 0.01 was used to denote statistical significance.

analytical specificity, limit of detection, and reproducibility

blast searches of the primers and probes targeting the adenovirus hexon gene and the internal control sequences revealed that these were highly specific targets. in fact, no cross-reactions were observed with high-titer nucleic acids extracted from other respiratory viruses or bacteria (table 2). the in-house real-time assay was able to detect serogroups a to f, including a variety of genetically diverse types: 1, 2, 3, 4, 6, 7, 10, 20, 26, 31, and 40 (figure 1 and table 2).
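the kappa statistic mentioned above can be reproduced from the 2-by-2 counts reported in the results (27 concordant positives, 12 pcr-positive/culture-negative, 157 concordant negatives for the homogenization protocol versus culture); a minimal sketch of cohen's kappa:

```python
def cohens_kappa(both_pos, a_pos_only, b_pos_only, both_neg):
    """Cohen's kappa for agreement between two binary assays a and b."""
    n = both_pos + a_pos_only + b_pos_only + both_neg
    po = (both_pos + both_neg) / n                     # observed agreement
    a_pos = both_pos + a_pos_only                      # marginal positives, assay a
    b_pos = both_pos + b_pos_only                      # marginal positives, assay b
    pe = (a_pos * b_pos + (n - a_pos) * (n - b_pos)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# in-house PCR (homogenization) vs. virus culture on the 196 specimens
kappa = cohens_kappa(27, 12, 0, 157)
```

the resulting kappa of roughly 0.78 reflects substantial, but not perfect, agreement, driven by the 12 culture-negative specimens detected only by pcr.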
as seen in figure 2, the performance of the in-house pcr following the homogenization- or nucleic acid extraction-based protocols was equivalent. for each method, overlapping linear relationships were observed (y = −3.7668x + 44.733; r² = 0.9987, compared to y = −3.9058x + 45.313; r² = 0.9985, respectively) that spanned eight orders of magnitude with cp values ranging from 14 to 40 (figure 2a). the intra- and inter-assay reproducibility of the real-time pcr following homogenization and heat treatment ranged from 0.03 to 4.80% and 1.45 to 3.79%, respectively. similarly, the intra- and inter-assay reproducibility following the nucleic acid extraction protocol ranged from 0.2 to 2.15% and 0.85 to 3.15%. as expected, the highest %cv values observed for both methods were with virus dilutions near the lod. for hadv-c type 6, the lod for virus culture was 0.2 tcid50/ml. the in-house real-time pcr was reproducibly positive following nucleic acid extraction or homogenization with viral stock dilutions corresponding to 0.02 tcid50/ml (24/24 and 24/24, respectively), and positive pcr reactions were frequently observed using virus dilutions of 0.002 tcid50/ml (20/24 and 21/24, respectively). virus stock dilutions were quantified using the commercial real-time pcr assay, and the lods for the homogenization- and nucleic acid extraction-based protocols were shown to be approximately equivalent (figure 2). with a probability of 95%, the lods for the homogenization- and nucleic acid extraction-based protocols were 12 copies/reaction (log10 = 1.08) and 18 copies/reaction (log10 = 1.26), respectively (figure 2b). dilutions corresponding to the lod for virus culture were also quantified by real-time pcr and estimated at approximately 380 copies/reaction (figure 2b).
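the %cv used above is the sample standard deviation relative to the mean; the cp triplicate below is hypothetical:

```python
import statistics

def percent_cv(values):
    """Coefficient of variation (%): sample SD relative to the mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# hypothetical Cp triplicate for one dilution level
intra = percent_cv([30.1, 30.4, 30.2])
```

values near the lod show the largest %cv because stochastic template sampling dominates the replicate-to-replicate variation there.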
of the 196 clinical specimens, 157 concordant negative and 27 concordant positive results were obtained when comparing virus culture to the in-house pcr following either of the two extraction methods (figure 3a and table 3). real-time pcr generated 12 additional positive results that were later resolved as true positives using a manual dna extraction and a commercial real-time pcr (figure 3a). all 12 pcr-positive culture-negative results were detected following the homogenization protocol, whereas 11 were detected following nucleic acid extraction (figure 3a). the single discordant result between the molecular assays had a cp value of 37.22, suggesting that it may be attributed to sampling error (poisson distribution) at low concentrations of template [34]. since the internal control also failed to amplify in this sample, the negative result could also be attributed to pcr inhibition. upon repeat processing by automated and manual nucleic acid extractions, positive results were obtained; therefore, the original specimen result was considered a false negative. overall, compared to the modified gold standard, the sensitivity of the in-house real-time pcr following homogenization with heat treatment or nucleic acid extraction was approximately equivalent at 100% (89.7-100%) and 97.4% (86.4-97.4%), respectively (table 3). in contrast, the sensitivity of virus culture was only 69.2% (56.0-69.2%) (table 3). the accuracy of each method was 100% (95.6-100%), 99.5% (95.1-99.5%), and 93.9% (88.6-93.9%), respectively (table 3). all assays showed a high degree of specificity and precision (table 3). when comparing cp values for the positive results obtained with the real-time pcr following both extraction methods, a linear relationship was observed (y = 0.9416x + 4.5731; r² = 0.9756) (figure 3b).
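the reported performance figures follow directly from the 2-by-2 counts against the modified gold standard (39 true positives among the 196 specimens); a minimal check in python:

```python
def performance(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from 2-by-2 counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, acc

# 39 true positives by the modified gold standard among 196 specimens:
# culture found 27; PCR after homogenization found all 39; PCR after
# nucleic acid extraction found 38 (one false negative).
culture = performance(tp=27, fp=0, fn=12, tn=157)
homog   = performance(tp=39, fp=0, fn=0,  tn=157)
extract = performance(tp=38, fp=0, fn=1,  tn=157)
```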
cp values for homogenization with heat treatment were consistently higher than those obtained using the nucleic acid extraction; however, no significant differences in sensitivity (analytical or clinical) were observed (figure 2 and table 3). as expected, virus culture-positive specimens had positive pcr results with low cp values, whereas the virus culture-negative specimens had pcr-positive results with cp values greater than 30 (figure 3b). dna extracted from the 39 real-time pcr-positive specimens was subjected to a conventional pcr targeting the conserved segments surrounding the hvr7 of the hexon gene [32]. successful sequences were obtained from dna extracted from the 27 specimens that were both virus culture- and real-time pcr-positive. a type could be assigned using multiple sequence alignment against sequences derived from genbank, as previously described [32]. individual blast analysis yielded similar results. three serogroups were observed: b (types 3, 7, 14, and 34), c (types 2 and 6), and d (types 8, 10, and 29). the predominant types observed were 3 (37.0%), 29 (18.5%), 2 (14.8%), and 8 (11.1%). the conventional pcr was unable to amplify the target sequences from dna extracted from the 12 virus culture-negative/real-time pcr-positive specimens. the cp values for these specimens ranged from 30 to 40, suggesting only low quantities of virus were present (figure 3). dna sequencing was also used to distinguish the prototypic hadv type 14p (strain de wit) from the newly emerged type 14p1. adenovirus type 14p1 has been associated with severe disease in europe and north america [2-5]. while the hexon hvr7 sequences obtained in this study share 100% identity with hadv type 14p1, only two mutations (g1341a and g1491a) separate type 14p1 from 14p in this region.

(figure 2 caption: analytical sensitivity of the in-house real-time pcr. prior to amplification, 10-fold serial dilutions of hadv-c type 6 were processed by homogenization and heat treatment (open circles, solid line) or nucleic acid extraction (filled squares, dashed line). in both cases, equivalent results were obtained with respect to: a) the linear range; and b) the lod determined by probit analysis (n = 24). at a probability of 95%, the lods for the homogenization- and nucleic acid extraction-based protocols were 12 copies/reaction (log10 = 1.08) and 18 copies/reaction (log10 = 1.26), respectively. the same dilutions used to inoculate virus culture and dfa staining (open triangles, dotted line) were also quantified and demonstrated a lod of approximately 380 copies/reaction (log10 = 2.58).)

to further characterize the virus, the fiber knob gene was sequenced with primer pair f14mut and r14mut (table 1), using reaction conditions, thermocycling parameters, and dna sequencing as described for the molecular typing. compared to wild-type 14p, the fiber knob gene of hadv type 14p1 displays a 6-bp deletion (referred to as the k250-e251 deletion) [4, 35, 36]. the adenovirus type 14 from this study harbored the characteristic 6-bp deletion, consistent with hadv type 14p1 (figure 4). an exogenous internal control was used in this study which is non-competitive (it contains a primer pair that does not target adenovirus). the addition of the internal control primers and probes to the in-house pcr reaction did not affect the analytical sensitivity of the assay (data not shown). since the internal control was added at the level of pcr, both extraction methods could be directly evaluated for the presence of pcr inhibitors. despite the subsequent heat treatment and dilution step, homogenization is a crude method to recover viral dna and may not be sufficient to remove or inactivate pcr inhibitors. amplification of the internal control in adenovirus-negative specimens is therefore consistent with a true negative result and not simply attributable to pcr inhibition.
pcr inhibition was suspected by either loss of positivity in the 705 nm channel, or a shift in cp values greater than two standard deviations (which corresponds to approximately ±1.0 cp) from the value obtained with the negative control. this value was established previously, where the internal control cp values from 150 consecutive hsv-negative specimens were compared by homogenization and heat treatment or nucleic acid extraction [27] . this cutoff value remains true for the internal control used in this study. since the in-house pcr was performed as a duplex with an internal control added at the level of pcr, the 196 clinical specimens processed following homogenization and heat treatment or nucleic acid extraction could be monitored directly for the presence of potential pcr inhibitors. potential inhibitory substances were observed in two distinct cases: the first was a specimen that had been processed by homogenization with heat treatment, and the second, in a specimen subjected to nucleic acid extraction. in both cases, pcr inhibition was not observed upon repeat processing, suggesting either a processing error had occurred or the pcr inhibitor was labile [37] . therefore, pcr inhibition could not be proven or excluded. as a result, the rate of possible pcr inhibition with either extraction method was equivalent at 0.51% (1/196). at cdha (halifax, ns, canada), the average number of specimens submitted yearly for adenovirus testing is 312 (range 208 to 466 for years 2009 to 2012) and the turnaround time for virus culture can be up to 14 days. a cost analysis was performed that assumed a more practical approach of bi-weekly molecular testing (3-5 specimens with positive, negative and reagent controls). excluding labor, the average cost of a commercial pcr following nucleic acid extraction would range from $45 to $55 (cad) per specimen. 
in comparison, the in-house real-time pcr following a nucleic acid extraction would reduce the cost approximately 2-fold ($21.44 to $25.97). replacement of the nucleic acid extraction with the homogenization-based protocol further reduces the cost approximately 2-fold ($8.84 to $10.97), which is comparable to the average cost of virus culture ($9.47 to $11.64). the time required for bi-weekly processing with either molecular method is approximately 5 h/week, which is far lower than the time required for weekly maintenance and processing of specimens using cell culture and dfa staining. nucleic acid amplification tests (naats) like real-time pcr have revolutionized the detection of human pathogens in clinical microbiology laboratories. rapid specimen throughput and excellent performance characteristics make them an appealing alternative to traditional culture methods; however, cost limits their use in many clinical laboratories. both the recovery of nucleic acids using extraction and the pcr reaction itself contribute to the cost. we have shown that combining a crude extraction method like homogenization with heat treatment [27] and an in-house real-time pcr [18] is a cost-effective strategy for the detection of hadv from swabs submitted in utm. homogenization uses multidirectional motion to disrupt cells through contact with silica beads [27, 28]. in combination with a subsequent heat treatment to inactivate heat-labile pcr inhibitors, this crude mechanical lysis has been shown to be a cost-effective method to recover viral dna from swabs transported in utm [27]. the performance characteristics of this approach were equivalent to those of a traditional nucleic acid extraction, and both molecular methods far exceeded those obtained with virus culture. replacing the nucleic acid extraction with the homogenization protocol did not affect the analytical (or clinical) sensitivity of the real-time pcr (figure 2 and table 3).
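the per-specimen cost ranges reported above can be compared programmatically; using range midpoints is an assumption here (the text reports ranges only, excluding labor):

```python
# per-specimen cost ranges (CAD) reported in the study, excluding labor
costs = {
    "commercial PCR + extraction":   (45.00, 55.00),
    "in-house PCR + extraction":     (21.44, 25.97),
    "in-house PCR + homogenization": (8.84, 10.97),
    "virus culture":                 (9.47, 11.64),
}

def fold_reduction(reference, alternative):
    """Fold cost reduction of `alternative` vs `reference` (range midpoints)."""
    mid = lambda lo_hi: sum(lo_hi) / 2
    return mid(costs[reference]) / mid(costs[alternative])
```

by midpoints, the in-house assay roughly halves the commercial cost, and homogenization roughly halves it again, landing near the cost of culture.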
using dilutions of hadv-c type 6, the lod for the homogenization protocol was approximately 12 copies/reaction, which is consistent with previously reported values (22-33 copies/reaction) for hadv types 2 and 4 [18]. this analytical sensitivity is approximately 32-fold higher than the estimated lod for virus culture. furthermore, positive results could even be obtained at 6 copies/reaction with a probability of 87.5% (figure 2b). while no significant differences were observed between the molecular assays, both demonstrated a high level of analytical sensitivity. when comparing 196 clinical specimens using a modified gold standard, the in-house pcr following homogenization and heat treatment or nucleic acid extraction demonstrated similar sensitivities of 100% and 97.4%, respectively (table 3). this far surpasses the performance of virus culture at 69.2%. the 30% increase in positivity is consistent with the approximately 32-fold increase in analytical sensitivity and is not surprising, since similar results were observed when transitioning other viruses from culture to naats [38-41]. when comparing positive results from the in-house real-time pcr, cp values obtained following the homogenization protocol were consistently higher than those obtained following nucleic acid extraction (figure 3b); however, the analytical and clinical sensitivities of each assay were not significantly different (figure 2 and table 3). it should be noted that all virus culture-negative/pcr-positive specimens had cp values greater than 30, corresponding to viral loads that fell below the lod for virus culture (figure 3b). the homogenization- and nucleic acid extraction-based protocols both showed excellent analytical specificity, with no cross-reactions from other organisms (table 2). both methods were able to detect diverse hadv types spanning all the different species (figure 1 and table 2).
of the virus culture-positive specimens, the most predominant types detected were 3, 29, and 2, belonging to species b, d, and c, respectively. these hadv types are well-recognized causes of acute respiratory tract and ocular infections, and their distribution is consistent with that reported in other regions of canada [42, 43]. interestingly, a variant of hadv type 14, termed 14p1, has been described as an emerging pathogen associated with outbreaks and sporadic cases of acute respiratory disease in europe and the united states [2-5]. while most recorded cases were mild infections, severe disease and deaths have occurred. hadv type 14p1 has a characteristic 6-bp deletion (k250-e251) in the fiber knob gene [4, 35, 36]. the adenovirus type 14 from this study was consistent with type 14p1 and harbored these mutations (figure 4). while there have been a number of reports of type 14p1 circulating in the us and europe, this variant has been reported only once in canada [4]. the first adenovirus 14p1 cases in canada were reported from nova scotia's neighboring province, new brunswick, and included one fatality (figure 4) [4]. the specimen identified as 14p1 in this study was obtained from a fatal case dating back to the same time period as the new brunswick cases. further epidemiological investigations are underway. while severe and fatal cases associated with type 14p1 have been reported, similar outcomes have been reported with many other common hadv types [6, 7, 10, 44]. the most likely determinant of disease severity is the immune status of the host, not the adenovirus type or species. it should be noted that the thermocycling conditions for the adenovirus pcr were modified to allow simultaneous processing of other real-time pcr assays (hsv and vzv) in the cdha microbiology laboratory [18]. simultaneous processing of multiple pcr assays on the same lightcycler instrument allows more efficient batch testing when equipment availability is limited.
interestingly, these modifications allowed the detection of hadv type 31, which had previously been problematic on an abi instrument [18]. differences between assays can be attributed to numerous factors (e.g., instrumentation, kits); however, the most likely explanation in this case is the annealing temperature. using the original pcr protocol, hadv type 31 could only be detected when the annealing temperature was reduced from 60°c to 57°c [18]; the annealing temperature in this study was 55°c. using the conditions described in this study, the detection of hadv type 31 has now been replicated in both collaborating laboratories. a limitation of this study is that the validation of homogenization was only performed using swabs in utm. future experiments will need to examine whether homogenization can be applied to other relevant specimen types (urine, stool, blood, and tissue); however, the real-time pcr following a nucleic acid extraction has been shown to be effective for this purpose [18, 21]. secondly, the performance characteristics of homogenization may vary between pcr assays, and it should not be implemented without proper validation [27]. while homogenization with heat treatment has been shown to be effective for the recovery of viral dna from hadv (this study), hsv [27], and varicella zoster virus, decreased sensitivity was observed for enveloped rna viruses like mumps and influenza viruses ([24, 45]; leblanc, j., unpublished data). homogenization and heat treatment showed performance characteristics equivalent to a commercial nucleic acid extraction for the detection of hadvs. in combination with a sensitive in-house real-time pcr, homogenization with heat treatment generated results far superior to virus culture, and at a comparable cost. modifying the thermocycling conditions to match those used by other assays in the cdha microbiology laboratory further streamlined workflow and facilitated the transition from virus culture to molecular testing.
compared to virus isolation and propagation using culture, molecular testing also reduces the risk of laboratory-acquired infections [46]. overall, homogenization with heat treatment combined with a sensitive in-house real-time pcr is a cost-effective method for the detection of hadvs.

references (titles as recovered from the source):
1. principles and practice of infectious diseases
2. a community-based outbreak of severe respiratory illness caused by human adenovirus serotype 14
3. severe pneumonia due to adenovirus serotype 14: a new respiratory threat?
4. adenovirus serotype 14 infection
5. first reported cases of human adenovirus serotype 14p1 infection
6. treatment of adenovirus infections in patients undergoing allogeneic hematopoietic stem cell transplantation
7. comparison of in-house real-time quantitative pcr to the adenovirus r-gene kit for determination of adenovirus load in clinical samples
8. quantification of adenovirus dna in plasma for management of infection in stem cell graft recipients
9. t-cell immunotherapy for adenoviral infections of stem-cell transplant recipients
10. clinical features and treatment of adenovirus infections
11. high levels of adenovirus dna in serum correlate with fatal outcome of adenovirus infection in children after allogeneic stem-cell transplantation
12. invasive adenoviral infections in t-cell-depleted allogeneic hematopoietic stem cell transplantation: high mortality in the era of cidofovir
13. comparison of three multiplex pcr assays for the detection of respiratory viral infections: evaluation of xtag respiratory virus panel fast assay, respifinder 19 assay and respifinder smart 22 assay
14. switching gears for an influenza pandemic: validation of a duplex reverse transcriptase pcr assay for simultaneous detection and confirmatory identification of pandemic (h1n1) 2009 influenza virus
15. comparison of the filmarray respiratory panel and prodesse real-time pcr assays for detection of respiratory pathogens
16. development of a respiratory virus panel test for detection of twenty human respiratory viruses by use of multiplex pcr and a fluid microbead-based assay
17. detection of adenoviruses
18. detection of a broad range of human adenoviruses in respiratory tract samples using a sensitive multiplex real-time pcr assay
19. comparison of the luminex xtag respiratory viral panel with in-house nucleic acid amplification tests for diagnosis of respiratory virus infections
20. members of the adenovirus research community: toward an integrated human adenovirus designation system that utilizes molecular and serological data and serves both clinical and fundamental virology
21. real-time qualitative pcr for 57 human adenovirus types from multiple specimen sources
22. development of a pcr-based assay for detection, quantification, and genotyping of human adenoviruses
23. molecular detection and quantitative analysis of the entire spectrum of human adenoviruses by a two-reaction real-time pcr assay
24. multiplexed, real-time pcr for quantitative detection of human adenovirus
25. pring-akerblom p: rapid and quantitative detection of human adenovirus dna by real-time pcr
26. evaluation of type-specific real-time pcr assays using the lightcycler and j.b.a.i.d.s. for detection of adenoviruses in species hadv-c
27. homogenization with heat treatment: a cost effective alternative to nucleic acid extraction for herpes simplex virus real-time pcr from viral swabs
28. a reliable and inexpensive method of nucleic acid extraction for the pcr-based detection of diverse plant pathogens
29. presumptive identification of common adenovirus serotypes by the development of differential cytopathic effects in the human lung carcinoma (a549) cell culture
30. uracil-dna glycosylase (ung) influences the melting curve profiles of herpes simplex virus (hsv) hybridization probes
31. probit analysis
32. comprehensive detection and serotyping of human adenoviruses by pcr and sequencing
33. logistic regression
34. quantitation of targets for pcr by use of limiting dilution
35. genome sequences of human adenovirus 14 isolates from mild respiratory cases and a fatal pneumonia, isolated during 2006-2007 epidemics in north america
36. genome sequence of the first human adenovirus type 14 isolated in china
37. inhibition and facilitation of nucleic acid amplification
38. a comparison of cell culture versus real-time pcr for the detection of hsv1/2 from routine clinical specimens
39. adenovirus polymerase chain reaction assay for rapid diagnosis of conjunctivitis
40. comparison of a commercial qualitative real-time rt-pcr kit with direct immunofluorescence assay (dfa) and cell culture for detection of influenza a and b in children
41. efficacy of pcr and other diagnostic methods for the detection of respiratory adenoviral infections
42. epidemiology of severe pediatric adenovirus lower respiratory tract infections in manitoba, canada
43. characterization of culture-positive adenovirus serotypes from respiratory specimens in
44. genome type analysis of adenovirus types 3 and 7 isolated during successive outbreaks of lower respiratory tract infections in children
45. detection of mumps virus rna by real-time one-step reverse transcriptase pcr using the lightcycler platform
46. viral agents of human disease: biosafety concerns

a cost effective real-time pcr for the detection of adenovirus from viral swabs

acknowledgements: we would like to thank the members of the division of microbiology, department of pathology and laboratory medicine at cdha (halifax, nova scotia), for their ongoing support and for funding this project. in particular, we are indebted to wanda brewer for the propagation and maintenance of a549 cells, and to the various technologists responsible for routine virus culture. the authors declare that they have no competing interests.

authors' contributions: jl conceived the study. jl, th, and rt participated in its design and coordination. ta, kb, and jl carried out the molecular testing. mw quantified the adenovirus stocks and established tcid50 values. ta and kb performed statistical analyses. jl analyzed the dna sequencing results. rt, sw, and kp were involved in the phylogenetic analyses and typing of the adenoviruses, as well as preparing the specificity panels. all authors were involved in the preparation of the manuscript. all authors have read and approved the final manuscript.

key: cord-026550-h7360j3q authors: pianini, danilo; mariani, stefano; viroli, mirko; zambonelli, franco title: time-fluid field-based coordination date: 2020-05-13 journal: coordination models and languages doi: 10.1007/978-3-030-50029-0_13 sha: doc_id: 26550 cord_uid: h7360j3q

emerging application scenarios, such as cyber-physical systems (cpss), the internet of things (iot), and edge computing, call for coordination approaches addressing openness, self-adaptation, heterogeneity, and deployment agnosticism. field-based coordination is one such approach, promoting the idea of programming system coordination declaratively from a global perspective, in terms of functional manipulation and evolution in "space and time" of distributed data structures, called fields.
more specifically, regarding time, in field-based coordination it is assumed that local activities in each device, called computational rounds, are regulated by a fixed clock: typically, a fair and unsynchronized distributed scheduler. in this work, we challenge this assumption, and propose an alternative approach where round execution scheduling is naturally programmed along with the usual coordination specification, namely, in terms of a field of causal relations dictating what the notion of causality is (why and when a round has to be locally scheduled) and how it should change across time and space. this abstraction over the traditional view on global time allows us to express what we call "time-fluid" coordination, where causality can be finely tuned to select the event triggers to react to, so as to achieve an improved balance between performance (system reactivity) and cost (usage of computational resources). we propose an implementation in the aggregate computing framework, and evaluate it via simulation on a case study. emerging application scenarios, such as the internet of things (iot), cyber-physical systems (cpss), and edge computing, call for software design approaches addressing openness, heterogeneity, self-adaptation, and deployment agnosticism [19]. to effectively address this issue, researchers strive to define increasingly higher-level concepts, reducing the "abstraction gap" with the problems at hand, e.g., by designing new languages and paradigms. in the context of coordination models and languages, field-based coordination is one such approach [3, 5, 21, 23, 37, 40]. in spite of its many variants and implementations, field-based coordination is rooted in the idea of programming system coordination declaratively and from a global perspective, in terms of distributed data structures called (computational) fields, which span the entire deployment in space (each device holds a value) and time (each device continuously produces such values).
regarding time, which is the focus of this paper, field-based coordination typically abstracts from it in two ways: (i) when a specific notion of local time is needed, this is accessed through a sensor as for any other environmental variable; and (ii) a specification is actually interpreted as a small computation chunk to be carried out in computational rounds. in each round a device: (i) sleeps for some time; (ii) gathers information about the state of the computation in the previous round, messages received from neighbors while sleeping, and contextual information (i.e., sensor readings); and (iii) uses such data to evaluate the coordination specification, storing the state information in memory, producing a value output, and sending relevant information to neighbors. so far, field-based coordination approaches have considered computational rounds as being regulated by an externally imposed, fixed distributed clock: typically, a fair and unsynchronized distributed scheduler. this assumption, however, has a number of consequences and limitations, both philosophical and pragmatic, which this paper aims to address. under a philosophical point of view, it follows a pre-relativity view of time that meets general human perception, i.e., where time is absolute and independent of the actual dynamics of events. this hardly fits with more modern views connecting time with a deeper concept of causality [22], as being only meaningful relative to the existence of events as in relational interpretations of space-time [30], or even being a mere derived concept introduced by our cognition [29], as in loop quantum gravity [31]. under a practical point of view, the consequences on field-based coordination are mixed. the key practical advantage is simplicity. first, the designer must abstract from time, leaving the scheduling issue to the underlying platform.
Second, the platform itself can simply impose local schedulers statically, using fixed frequencies that at most depend on the device's computational power or energy requirements. Third, the execution in proactive rounds allows a device to discard messages received a few rounds before the current one, thus considering non-proactive senders to have abandoned the neighborhood, and to simply model the state of communication by maintaining the most recent message received from each neighbor. However, there is a price to pay for such a simple approach. The first is that "stability" of the computation, namely, situations in which the field will not change after a round execution, is ignored. As a consequence, "unnecessary" computations are sometimes performed, consuming resources (both energy and bandwidth capacity) and thus reducing the efficiency of the system. Symmetrically, there is a potential responsiveness issue: some computations may need to be executed more quickly under some circumstances. For instance, consider a crowd monitoring and steering system for urban mass events like the one exemplified in [7]: in case the measured density of people gets dangerous, a more frequent evaluation of the steering advice field is likely to provide more precise and timely advice. Similar considerations apply, for example, to the area of landslide monitoring [28], where long intervals of immobility are interspersed with sudden slope movements: the sensors' sampling rate can and should be low most of the time, but it needs to be promptly increased on slope changes. This generally suggests a key unexpressed potential for field-based computation: the general ability to provide an improved balance between performance (system reactivity) and cost (usage of computational resources).
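To make the round-based execution model described above concrete, here is a deliberately simplified Python loop; all names (`classic_round_loop`, the `(state, value, outbox)` convention) are illustrative and do not reflect any real field-calculus API:

```python
import time

def classic_round_loop(program, sleep_s, rounds):
    """Sketch of the classic time-driven execution model: sleep, gather
    context, evaluate the specification, export to neighbors.
    'program' consumes (state, inbox, sensors) and returns
    (new_state, output_value, outbox)."""
    state, inbox = None, []
    outputs = []
    for _ in range(rounds):
        # (i) sleep for a fixed interval: the externally imposed clock
        time.sleep(sleep_s)
        # (ii) gather previous state, neighbor messages, sensor readings
        sensors = {"temperature": 21.0}  # placeholder local context
        # (iii) evaluate, store state, produce output, send to neighbors
        state, value, outbox = program(state, inbox, sensors)
        outputs.append(value)
        inbox = []  # in a real platform, refilled by the network layer
    return outputs
```

Note that the loop runs at its fixed pace regardless of whether anything changed, which is exactly the stability and responsiveness limitation discussed above.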
For instance, the crowd monitoring and landslide monitoring systems should ideally slow down (possibly, halt entirely) the evaluation in case of sparse crowd density or absence of surface movements, respectively, and they should become more and more responsive with growing crowd densities or in case of landslide activation. The general idea that the distribution of round executions can actually dynamically depend on the outcome of the computation itself can be captured in field-based coordination by modeling time through a causality field, namely, a field programmable along with (and hence intertwined with) the usual coordination specification, dictating (at each point in space-time) what the triggers are whose occurrence should correspond to the execution of computation rounds. Programming causality along with coordination leads us to a notion of time-fluid coordination, where it is possible to flexibly control the balance between performance and cost of system execution. Accordingly, in this work we discuss a causality-driven interpretation of field-based coordination, proposing an integration with the field calculus [3] with the goal of evaluating a model for time-fluid, field-based coordination. In practice, we assume computations are not driven by time-based rounds, but by perceivable local event triggers provided by the platform (hardware/software stack) executing the aggregate program, such as messages received, changes in sensor values, and time passing by. The aggregate program specification itself, then, may affect the scheduling of subsequent computations through policies (expressed in the same language) based on such triggers. The contribution of this work can be summarized under three points of view.
First, the proposed model enriches the coordination abstraction of field-based coordination with the possibility to explicitly, and possibly reactively, program the scheduling of the coordination actions; second, it enables a functional description of causality and observability, since manipulation of the interaction frequency among single components of the coordinated system is reflected in changes in how causal events are perceived and in how actions are taken in response to event triggers; third, the most immediate practical implication of time-fluid coordination when compared to a traditional time-driven approach is improved efficiency, intended as improved responsiveness at the same resource cost. The remainder of this work is organized as follows: Sect. 2 frames this work with respect to the existing literature on the topic; Sect. 3 introduces the proposed time-fluid model and discusses its implications; Sect. 4 presents a prototype implementation in the framework of aggregate computing, showing examples and evaluating the potential practical implications via simulation; finally, Sect. 5 discusses future directions and concludes the work.

Time and synchronization have always been key issues in the area of distributed and pervasive computing systems. In general, in distributed systems, the absence of a globally shared physical clock among nodes makes it impossible to rely on absolute notions of time. Logical clocks are hence used instead [17], realizing a sort of causally-driven notion of time, in which the "passing time" of a distributed computation (that is, the ticks of logical clocks) directly expresses causal relations between distributed events. As a consequence, any observation of a distributed computation that respects such causal relations, independently of the relative speeds of processes, is a consistent one [4].
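The causal notion of time realized by logical clocks [17] can be recalled with a minimal textbook sketch (our illustration, not code from the cited works):

```python
class LamportClock:
    """Minimal logical clock: ticks express causal order between events,
    not absolute time."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # local event: advance the logical time
        self.time += 1
        return self.time

    def send(self):
        # timestamp attached to an outgoing message
        return self.tick()

    def receive(self, msg_time):
        # receiving: jump past the sender's timestamp, preserving causality
        self.time = max(self.time, msg_time) + 1
        return self.time
```

Any run in which every receive is ordered after the corresponding send yields a consistent observation, independently of the processes' relative speeds.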
Our proposal absorbs these foundational lessons, and brings them forward to consider the strict relations between the spatial dimension and the temporal dimension that situated aggregate computations have to account for. In the area of sensor networks, acquiring an (as accurate as possible) globally shared notion of time is of fundamental importance [33], to properly capture snapshots of the distributed phenomena under observation. However, global synchronization also serves energy-saving purposes. In fact, when not monitoring or not communicating, the nodes of the network should go to sleep to avoid energy waste, but this implies that, to exchange monitoring information with each other, they must periodically wake up in a synchronized way. In most existing proposals, though, this is done in awakening and communication rounds of fixed duration, which makes it impossible to adapt to the actual dynamics of the phenomena under observation. Several proposals exist for adaptive synchronization in wireless sensor networks [1, 13, 16], dynamically changing the sampling frequency (and hence the frequency of communication rounds) so as to adapt to the dynamics of the observed phenomena. For instance, in the case of crowd monitoring systems, it is likely that people (e.g., during an event) stay nearly immobile for most of the time, then suddenly start moving (e.g., at the end of the event). Similarly, in the area of landslide monitoring, the situation of a slope is stable for most of the time, with periodic occurrences of (sometimes very fast) slope movements. In these cases, waking up the nodes of the network periodically would not make any sense and would waste a lot of energy. Nodes should rather sleep most of the time, and wake up only upon detectable slope movements.
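The adaptive-sampling idea can be condensed into a toy rule; the two-period scheme, the threshold test, and all parameter names below are illustrative, not taken from the cited proposals:

```python
def next_sampling_period(change, quiet_period, busy_period, threshold):
    """Sample sparsely while the observed signal is stable, densely once it
    starts moving (e.g., a slope movement in landslide monitoring).
    'change' is the magnitude of the latest observed variation."""
    return busy_period if abs(change) > threshold else quiet_period
```

For example, with a quiet period of ten minutes and a busy period of one second, a node sleeps almost always and densifies its sampling only when the slope actually moves.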
Such adaptive sampling approaches challenge the underlying notion of time, but they tend to focus on the temporal dimension only (i.e., adapting to the dynamics of a phenomenon as locally perceived by the nodes). Our approach goes further, by making it possible to adapt in space as well as in time: not only to how fast a phenomenon changes in time, but to how fast it propagates and induces causal effects in space. For instance, in the case of landslide monitoring or crowd monitoring, this means adapting not just to the dynamics of locally perceived movements, but also to the overall propagation speed of such movements across the monitored area. Besides sensor networks, the issue of adaptive sampling has recently landed in the broader area of IoT systems and applications [35], again with the primary goal of optimizing the energy consumption of devices while not losing relevant phenomena under observation. However, unlike what is promoted in sensor networks, such optimizations typically take place in a centralized (cloud) [34] or semi-decentralized (fog) [18] way, which again disregards spatial issues and the strict space-time relations of phenomena. Since coordination models and languages typically address a crosscutting concern of distributed systems, they have historically been concerned with the notion of time in a variety of ways. For instance, time has been addressed in space-based coordination since JavaSpaces [12], and in the corresponding foundational calculi for time-based Linda [6, 20]: the general idea is to equip tuples and query operations with timeouts, which can be interpreted either in terms of global or local clocks. The problem of abstracting the notion of time became crucial when coordination models started addressing self-adaptive systems, and hence openness and reactivity. In [25], it is suggested that a tuple may eventually fade, with a rate that depends on a usefulness concept measuring how many new operations are related to such a tuple.
In the biochemical tuple-space model [38], tuples have a time-dynamic "concentration" driven by stochastic coordination rules embedded in the data-space. Field-based coordination emerged as a coordination paradigm for self-adaptive systems focusing more on "space" than on "time", in works such as TOTA [24], the field calculus [3, 37], and fixpoint-based computational fields [21]. However, the need for dealing with time is a deep consequence of dealing with space, since propagation in space necessarily impacts "evolution". These approaches tend to abstract from the scheduling dynamics of local field evolution in various ways. In TOTA, the update model for distributed "fields of tuples" is an asynchronous, event-based one: anytime a change in network connectivity is detected by a node, the TOTA middleware triggers an update of the distributed field structures so as to immediately reflect the new situation. In the field calculus and aggregate computing [5], as already mentioned, an external, proactive clock is typically used. In [21] this issue is mostly neglected, since the focus is on the "eventual behavior", namely the stabilized configuration of a field, as in [36]. For all these models, the scheduling of updates is always transparent to the application/programming level, so the application designer cannot intervene on coordination so as to possibly optimize communication, energy expenses, and reactivity.

In this section, we introduce a model for time-fluid field-based coordination. The core idea of our proposed approach is to leverage field-based coordination itself for maintaining a causality field that drives the dynamics of computation of the application-level fields. Our discussion is in principle applicable to any field-based coordination framework; however, for the sake of clarity, we here focus on the field calculus [3].
Considering a field calculus program P, each of its rounds can be thought of as consuming: (i) a set of valid messages received from neighbors, m ∈ M; and (ii) some contextual information s ∈ S, usually obtained via so-called sensors. The platform or middleware in charge of executing field calculus programs has to decide when to launch the next evaluation round of P, also providing valid values for m and s. Note that in general the platform could execute many programs concurrently. In order to support causality-driven coordination, we first require the platform to be able to reactively respond to local event triggers, each representing some kind of change in the values of m or s, e.g., "a new message has arrived", "a given sensor provides a new value", or "1 second has passed". We denote by T the set of all possible local event triggers the platform can manage. Then, we propose to associate to every field calculus program P a guard policy G (policy in short), which itself denotes a field computation, and can hence be written with a program expressed in the same language as P, as will be detailed in the next section. More specifically, whenever evaluated across space and time, the field computation of a policy can be locally modeled as a function

G : M × S → {0, 1} × P(T)

where P(T) denotes the powerset of T. Namely, a policy has the same input as any field computation, but specifically returns a pair of a boolean b ∈ {0, 1} and a set of event triggers T_c ⊆ T. T_c is essentially the set of "causes": G will get evaluated next time by the platform only when a new event trigger is detected that belongs to T_c. Then, such an evaluation produces the second output b: when this is true (value 1), it means that the program P associated with the policy must be evaluated as soon as possible. On system bootstrap, every policy gets evaluated for the first time.
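The platform behavior induced by policies can be sketched with a toy dispatch loop in Python; the names (`schedule`, `on_message_policy`) and the dictionary-based context are illustrative simplifications of the model above, not the actual platform API:

```python
def schedule(policy, program, triggers, context):
    """Evaluate 'policy' at bootstrap and then only on triggers it declared
    as causes; run 'program' whenever the policy's boolean output is true."""
    executed = []
    run_now, causes = policy(context)  # on bootstrap, every policy runs once
    if run_now:
        executed.append(program(context))
    for trigger in triggers:
        if trigger not in causes:
            continue                   # not a declared cause: keep waiting
        context["last_trigger"] = trigger  # reified as a sensor value
        run_now, causes = policy(context)
        if run_now:
            executed.append(program(context))
    return executed

def on_message_policy(context):
    # run the associated program on every received message; ignore timers
    return True, {"msg_received"}
```

For instance, feeding the trigger stream `["msg_received", "timer", "msg_received"]` to `schedule` with `on_message_policy` executes the program three times (once at bootstrap, once per message) and skips the timer tick entirely.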
In the proposed framework, hence, computations are caused by a field of event triggers (the causality field) computed by a policy, which is used to (i) decide whether to run the actual application round immediately, and (ii) decide which event triggers will cause a re-evaluation of the policy itself. This mechanism thus introduces a sort of guard mediating between the evolution of the causality field and the actual execution of application rounds, allowing for fine control over the actual temporal dynamics, as exemplified in Sect. 4.2. Crucially, the ability to sense context (namely, the contents of S) and to express event triggers (namely, the possible contents of T) has a large impact on the expressivity of the proposed model. For the remainder of this work, we will assume the platform or middleware hosting a field computation to provide the following set of features, which we deem reasonable for any such platform; this is for the sake of practical expressiveness, since even a small set of event triggers could be of benefit. First, T must include changes to any value of S; this allows the computation to be reactive to changes in the device's perception, or, symmetrically speaking, makes such changes the cause of the computation. Second, timers can easily be modeled as special boolean sensors flipping their value from false to true, making the classic time-driven approach a special case of the proposed framework. Third, the specific event trigger that caused the last computation should be available in S, accessible through an appropriate sensor. Fourth, the most recent result of any field computation P that should affect the policy must be available in S; this is crucial for field computations to depend on each other, or, in other words, for a field computation to be the cause of another, possibly more intensive, field computation. For instance, consider the crowd sensing and steering application mentioned in Sect.
1 to be decomposed into two sub-field computations: the former, lightweight, computing the local crowd density under a policy triggering the computation anytime a presence sensor counts a different number of people in the monitored area; the latter, resource-intensive, computing a crowd steering field guiding people out of the over-crowded areas, whose policy can leverage the value of the density field to raise the evaluation frequency when the situation gets potentially dangerous. Fifth, the conclusion of a round of any field program is a valid source of event triggers, namely, T also contains a boolean indicating whether a field program of interest completed its round.

Programming the space-time and propagating causality. As soon as we let the application affect its own execution policy, we are effectively programming the time (instead of in time, as is typically done in field-based coordination): evaluating the field computation at different frequencies actually amounts to modulating the perception of time from the application's standpoint. For instance, sensors' values may be sampled more often or more sparsely, affecting the perception that the application has of its operating environment along the time scale. In turn, stemming from the distributed nature of the communicating system at hand, such an adaptation along time immediately causes adaptation across space too, by affecting the communication rate of devices, and hence the rate at which events and information spread across the network. It is worth emphasizing that this is a consequence of embracing a notion of time founded on causality. In fact, while we are aware of computational models adaptive to the time fabric, as mentioned in Sect. 2, we are not aware of any model allowing the perception of time to be programmed at the application level.

Adapting to causality.
Being able to program the space-time fabric as described above necessarily requires the capability of being aware of the space-time fabric in the first place. When the notion of space-time is crafted upon the notion of causality between events, such a form of awareness translates to awareness of the dynamics of causal relations among events. Under this perspective, the application is no longer adapting to the passage of time and the extent of space, but to the temporal and spatial distribution of causal relations among events. In other words, the application is able to "chase" events not only as they travel across time and space, but also as their "traveling speed" changes. For instance, whenever in a given region of space some event happens more frequently, devices operating in the same area may compute more frequently as well, increasing the rate of communications among devices in that region, thus leading to an overall better recognition of the quickening dynamics of the phenomenon under observation.

Controlling situatedness. The ability to control both of the above-mentioned capabilities at the application level enables unprecedented fine control over the degree of situatedness exhibited by the overall system, along two dimensions: the ability to decide the granularity at which event triggers should be perceived, and the ability to decide how to adapt to changes in event dynamics. In modern distributed and pervasive systems, the ability to quickly react to changes in environment dynamics is of paramount importance [32]. For instance, in the mentioned case of landslide monitoring, as anomalies in measurements increase in frequency, intensity, and geographical coverage, the monitoring application should match the pace of the accelerating dynamics. On the practical side, associating field computations with programmable scheduling policies brings both advantages and risks (as most extensions to expressiveness do).
One important gain in expressiveness is the ability to let a field computation affect the scheduling policy of other field computations, as in the examples of crowd steering and landslide monitoring: the denser some regions get, the faster the steering field will be computed; the more intense the vibrations of the ground get, the more frequently monitoring is performed. On the other hand, this opens the door to circular dependencies among field computations and scheduling policies, which can possibly lead to deadlocks or livelocks. Therefore, it is good practice for time-fluid field coordination systems that at least one field computation depends solely on local event triggers, and that dependencies among diverse field computations are carefully crafted and possibly enriched with local control.

Pure reactivity and its limitations. Technically, replacing a scheduler guided by a fixed clock with one triggering computations as a consequence of events turns the system from time-driven to event-driven. In principle, this makes the system purely reactive: the system is idle unless some event trigger happens. Depending on the application at hand, this may be a blessing or a curse: since pro-activity is lost, the system is chained to the dynamics of event triggers, and cannot act on its own will. Of course, it is easy to overcome such a limitation: assuming a clock is available in the pool of event triggers makes pro-activity a particular case of reactivity, where the tick of the clock dictates the granularity. Furthermore, since policies allow the specification of a set of event triggers causing re-evaluation, the designer can always devise a "fall-back" plan relying on the expiration of a timer: for instance, it is possible (and reasonable) to express a policy such as "trigger as soon as the event of interest happens, or timer τ expires, whichever comes first".

The proposed model has been prototypically reified within the framework of aggregate computing [5].
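The "whichever comes first" fall-back just described can be sketched as a boolean policy; the parameter names below are illustrative:

```python
def fallback_policy(event_happened, now, last_run, timeout):
    """Run when the event of interest happens or when the timer expires,
    whichever comes first; both triggers stay among the declared causes,
    so the policy is re-evaluated on either."""
    timer_expired = (now - last_run) >= timeout
    return event_happened or timer_expired, {"event_of_interest", "timer"}
```

This keeps the system event-driven while bounding the maximum latency between consecutive rounds by the timer τ.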
In particular, we leveraged the Alchemist simulator [26]'s pre-existing support for the Protelis programming language [27] and the Scafi Scala DSL [39], and we produced a modified prototype platform supporting the definition of policies using the same aggregate programming language used for the actual software specification. The framework has been open-sourced and publicly released, and it has been exercised in a paradigmatic experiment. In this section, we first briefly provide details about the Protelis programming language, which we use to showcase the expressive power of the proposed system by examples; then we present an experiment showing how the time-fluid architecture may allow for improved precision as well as reduced resource use.

This Protelis language primer is intended as a quick reference for understanding the subsequent examples. Entering the language details is out of the scope of this work; only the set of features used in this paper will be introduced. Protelis is a purely functional, higher-order, interpreted, and dynamically typed aggregate programming language interoperable with Java. Programs are written in modules, and are composed of any number of function definitions and of an optional main script. module some:namespace creates a new module whose fully qualified name is some:namespace. Modules' functions can be imported locally using the import keyword followed by the fully qualified module name. The same keyword can be used to import Java members, with org.protelis.builtins, java.lang.Math, and java.lang.Double being pre-imported. Similarly to other dynamic languages such as Ruby and Python, in Protelis top-level code outside any function is considered to be the main script. def f(a, b) { code } defines a new function named f with two arguments a and b, which executes all the expressions in code upon invocation, returning the value of the last one.
In case the function has a single expression, a shorter, Scala/Kotlin-style syntax is allowed: def f(a, b) = expression. The rep (v <- initial) { code } expression enables stateful computation by associating v with either the previous result of the rep evaluation, or with the value of the initial expression; the code block is then evaluated, and its result is returned (and used as the value for v in the subsequent round). The if(condition) {then} else {otherwise} expression requires condition to evaluate to a boolean value; if such value is true, the then block is evaluated and the value of its last expression returned, while if the value of condition is false, the otherwise code block gets executed, and the value of its last expression returned. Notably, rep expressions that find themselves in a non-evaluated branch lose their previously computed state, hence restarting the state computation from the initial value. This behavior is peculiar to the field calculus semantics, where the branching construct is lifted to a distributed operator with the meaning of domain segmentation [3]. The let v = expression statement adds a variable named v to the local namespace, associating its value with the result of the expression's evaluation. Square brackets delimit tuple literals: [] evaluates to an empty tuple, [1, 2, "foo"] to a tuple of three elements with two numbers and a string. Methods can be invoked with the same syntax as Java: obj.method(a, b) tries to invoke the method member on the result of the evaluation of expression obj, passing the results of the evaluation of expressions a and b as arguments. Special keywords self and env allow access to contextual information. self exposes sensors via direct method call (typically leveraged for system access), while env allows dynamic access to sensors by name (hence supporting more dynamic contexts).
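As an informal reading of rep's round-by-round semantics, the following Python sketch mimics its local state evolution (single-device view only; the distributed aspects of the field calculus are omitted):

```python
def rep(initial, code, rounds):
    """v starts as 'initial'; each round, 'code' consumes the previous v and
    its result becomes both the round's output and the next v."""
    v = initial
    outputs = []
    for _ in range(rounds):
        v = code(v)
        outputs.append(v)
    return outputs

# A round counter, analogous to rep(v <- 0){ v + 1 } in Protelis:
counts = rep(0, lambda v: v + 1, 3)  # → [1, 2, 3]
```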
Anonymous functions are written with a syntax reminiscent of Kotlin and Groovy: { a, b -> code } evaluates to an anonymous function with two parameters and code as its body. Protelis also shares with Kotlin the trailing-lambda convention: if the last parameter of a function call is an anonymous function, then it can be placed outside the parentheses; if the anonymous function is the only argument to that call, the parentheses can be omitted entirely. The following calls are in fact equivalent: [1, 2].map { a -> a + 1 } // returns [2, 3]

In this section we exemplify how the proposed approach allows a single field-based coordination language to be used for expressing both P and G. In the following discussion, event triggers provided by the platform (i.e., members of T) will be highlighted in green. In our first example, we show a policy recreating the round-based, classic execution model, thus demonstrating how this approach supersedes the previous one. Consider Protelis functions updated(current, condition) and changed(current), which detect changes in a value: current is the current value of the signal being tracked, and condition is a function comparing current with the previously memorized value and returning true if the new value should replace the old one. Function changed is the simplest use of updated, returning true whenever the input signal current changes. In the showcased code, the second argument to updated is provided using the trailing-lambda syntax (see Sect. 4.1). These functions can be leveraged for writing a policy sensitive to platform timeouts: for instance, a policy that gets re-evaluated every second (returning only timer(1) of all the possible event triggers in T), and whose associated program runs if at least one second has passed since the last round. Finally, we articulate a case in which the result of an aggregate computation is the cause for another computation to get triggered. Consider the crowd steering system mentioned in Sect.
1: we would like to update the crowd steering field only when there is a noticeable change in the perceived density of the surroundings. To do so, we first write a Protelis program leveraging the SCR pattern [8] to partition space into regions 300 meters wide and compute the average crowd density within them. Functions S (network partitioning at the desired distance), summarize (aggregation of data over a spanning tree and partition-wide broadcast of the result), and distanceTo (computation of distance) come from the protelis-lang library shipped with Protelis [11]. Its execution policy could be, for instance, reactive to updates from neighbors and to changes in a "people counting sensor" reifying the number of people perceived by the device (e.g., via a camera). Once the density computation is in place, the platform reifies its final result as a local sensor, which can in turn be used to drive the steering field computation with a policy in which a low-pass filter, exponentialbackoff, prevents the program from running in case of spikes (e.g., due to the density computation re-stabilizing). Note that access to the density computation is realized by accessing a sensor with the same name as the module containing the density evaluation program, thus reifying a causal chain between field computations.

We exercise our prototype by simulating a distance computation over a network of situated devices. We consider a 40 × 40 irregular grid of devices, each located randomly in a disc centered on the corresponding position of a regular grid, and a single mobile node positioned at the top left of the network, free to move at a constant speed v from left to right. Once the mobile device leaves the network, exiting on the right side, another identical one enters the network from the left-hand side. The mobile devices and the leftmost device at the bottom are "sources", and the goal for each device is to estimate the distance to the closest source.
Computing the distance from a source without a central coordinator in arbitrary networks is a representative application of aggregate computing, for which several implementations exist [36]. In this work, since the goal is exploring the behavior of the platform rather than the efficiency of the algorithm, we use an adaptive Bellman-Ford [9], even though it is known not to be the most efficient implementation for the task at hand [2]. We choose to compute the distance from a source (a gradient) as our reference algorithm, as it is one of the most common building blocks over which other, more elaborate forms of coordination get built [10, 36]. We expect that an improvement in performance on this simple algorithm may lead to a cascading effect on the plethora [11] of algorithms based on it, hence our choice as a candidate for this experiment. We let devices compute the same aggregate program with diverse policies. The baseline for assessing our proposal is the classic approach to aggregate computing: time-driven, unsynchronized, and fair scheduling of rounds set at 1 Hz. We compare the classic approach with time-fluid versions whose policy is: run if a new message is received or an old message timed out, and the last round was at least f⁻¹ seconds ago. The latter clause sets an upper bound on the number of event triggers a device can react to, preventing well-known limit situations such as the "rising value problem" for the adaptive Bellman-Ford algorithm [2] used in this work. We run several versions of the reactive algorithm with diverse values of f, and we also vary v. For each combination of f and v, we perform 100 simulations with different random seeds, which also alter the irregular grid shape. We measure the overall number of executed rounds, which is a proxy metric for resource consumption (both network and energy), and the root mean square error of each device.
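The time-fluid policy used in this comparison reduces to a boolean condition; the following is a simplification with illustrative names:

```python
def should_run(new_message, message_timed_out, now, last_round, f):
    """Run if a new message arrived or an old one timed out, and at least
    1/f seconds elapsed since the last round (the upper bound that prevents
    a device from reacting to arbitrarily dense trigger bursts)."""
    event = new_message or message_timed_out
    rate_ok = (now - last_round) >= 1.0 / f
    return event and rate_ok
```

With f = 1 Hz, a device reacts at most once per second, and never at all while its neighborhood is silent and no message expires.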
The simulation has been implemented in Alchemist [26], writing the aggregate programs in Protelis [27]. Data has been processed with xarray [14], and charts have been produced via matplotlib [15]. For the sake of reproducibility, the whole experiment has been automated, documented, and open-sourced. Intuitively, devices situated closer to the static source than to the trajectory of the mobile sources should be able to execute less often. Figure 1 confirms this intuition: there is a clear border separating devices always closer to the static source, which execute much less often, from those that at times are instead closer to the mobile source. Figure 2 shows the precision of the computation for diverse values of v and f, compared to the baseline: the performance of the baseline is equivalent to that of the time-fluid version with f = 1 Hz. Figure 3 depicts the cost to be paid for the algorithm execution. The causal version of the computation has a large advantage when there is nothing to recompute: if the mobile device stands still and the gradient value does not need to be recomputed, the computation is fundamentally halted. When v > 0, the resource consumption grows; however, compared to the classic version, we can sustain f = 1.5 Hz with the same resource consumption. Considering that the performance of the classic version gets matched at f = 1 Hz, and cost gets equalized at f = 1.5 Hz, when 1 Hz < f < 1.5 Hz we achieve both better performance and lower cost. In conclusion, the time-fluid version provides a higher performance/cost ratio.

Fig. 3. Root mean squared error for diverse v. When the network is entirely static (top left), raising f has a minimal impact on the overall cost of execution, as the network stabilizes and recomputes only in case of timeouts. In dynamic cases, instead, higher f values come with a cost to pay.
however, in the proposed experiment, the cost for the baseline algorithm matches the cost of the time-fluid version with f = 1.5 hz, which in turn has lower error (as shown in fig. 2 ). in this work we introduced a different concept of time for field-based coordination systems. inspired by causal models of space-time in physics, we introduced the concept of field of causality for field computations, intertwining the usual coordination specification with its own actual evaluation schedule. we introduced a model that allows expressing the field of causality with the coordination language itself, and discussed the impact of its application. a model prototype was then implemented in the alchemist simulation platform, supporting the execution of the aggregate-computing field-based coordination language protelis and demonstrating the feasibility of the approach. finally, the prototype was exercised in a paradigmatic experiment, highlighting the practical relevance of the approach by showing how it can improve efficiency, intended as precision in field evaluation over resource consumption. future work will be devoted to providing more in-depth insights by evaluating the impact of the approach in realistic setups, both in terms of scenarios (e.g. using real-world data) and evaluation precision (e.g. by leveraging network simulators such as omnet++ or ns3). moreover, further work is required both for the current prototype to become a full-fledged implementation, and for the model to be implemented in practical field-based coordination middlewares.
references
towards an adaptive synchronization policy for wireless sensor networks
optimal single-path information propagation in gradient-based algorithms
a higher-order calculus of computational fields
consistent global states of distributed systems: fundamental concepts and mechanisms
aggregate programming for the internet of things
process calculi for coordination: from linda to javaspaces
modelling and simulation of opportunistic iot services with aggregate computing
self-organising coordination regions: a pattern for edge computing
a lyapunov analysis for the robust stability of an adaptive bellman-ford algorithm
description and composition of bio-inspired design patterns: a complete overview
towards a foundational api for resilient distributed systems design
javaspaces: principles, patterns, and practice
adaptive sensing scheme using naive bayes classification for environment monitoring with drone
xarray: n-d labeled arrays and datasets in python
matplotlib: a 2d graphics environment
decentralized control of adaptive sampling in wireless sensor networks
time, clocks, and the ordering of events in a distributed system
monitoring of iot data for reducing network traffic
software engineering for self-adaptive systems: research challenges in the provision of assurances
on the expressiveness of timed coordination via shared dataspaces
asynchronous distributed execution of fixpoint-based computational fields
nature of time and causality in physics
field-based coordination for pervasive multiagent systems
programming pervasive and mobile computing applications: the tota approach
the fading concept in tuple-space systems
chemical-oriented simulation of computational systems with
protelis: practical aggregate programming
landslide monitoring with sensor networks: experiences and lessons learnt from a real-world deployment
quantum mechanics without time: a model
relational quantum mechanics
loop quantum gravity
pervasive social context: taxonomy and survey
clock synchronization for wireless sensor networks: a survey
optimized on-demand data streaming from sensor nodes
low-cost adaptive monitoring techniques for the internet of things
engineering resilient collective adaptive systems by self-stabilisation
from field-based coordination to aggregate computing
biochemical tuple spaces for self-organising coordination
simulating large-scale aggregate mass with alchemist and scala
linda in space-time: an adaptive coordination model for mobile ad-hoc environments

acknowledgements. this work has been supported by the miur prin 2017 project "fluidware". the authors wish to thank dr. lorenzo monti for the fruitful discussion on causality, the shape and fabric of space and time, and physical models independent of time.

key: cord-123103-pnjt9aa4 authors: ordun, catherine; purushotham, sanjay; raff, edward title: exploratory analysis of covid-19 tweets using topic modeling, umap, and digraphs date: 2020-05-06 journal: nan doi: nan sha: doc_id: 123103 cord_uid: pnjt9aa4

this paper illustrates five different techniques to assess the distinctiveness of topics, key terms and features, speed of information dissemination, and network behaviors for covid19 tweets. first, we use pattern matching, and second, topic modeling through latent dirichlet allocation (lda) to generate twenty different topics that discuss case spread, healthcare workers, and personal protective equipment (ppe). one topic specific to u.s. cases would start to uptick immediately after live white house coronavirus task force briefings, implying that many twitter users are paying attention to government announcements. we contribute machine learning methods not previously reported in the covid19 twitter literature. this includes our third method, uniform manifold approximation and projection (umap), which identifies unique clustering behavior of distinct topics to improve our understanding of important themes in the corpus and help assess the quality of generated topics.
fourth, we calculated retweeting times to understand how fast information about covid19 propagates on twitter. our analysis indicates that the median retweeting time of covid19 for a sample corpus in march 2020 was 2.87 hours, approximately 50 minutes faster than repostings from chinese social media about h7n9 in march 2013. lastly, we sought to understand retweet cascades by visualizing the connections of users over time from fast to slow retweeting. as the time to retweet increases, the density of connections also increases; in our sample, we found distinct users dominating the attention of covid19 retweeters. one of the simplest highlights of this analysis is that early-stage descriptive methods like regular expressions can successfully identify high-level themes which were consistently verified as important through every subsequent analysis. monitoring public conversations on twitter about healthcare and policy issues provides one barometer of american and global sentiment about covid19. this is particularly valuable as the situation with covid19 changes every day and is unpredictable during these unprecedented times. twitter has been used as an early-warning notifier, emergency communication channel, public perception monitor, and proxy public health surveillance data source in a variety of disaster and disease outbreaks, from hurricanes [1] , terrorist bombings [2] , tsunamis [3] , earthquakes [4] , seasonal influenza [5] , swine flu [6] , and ebola [7] . in this paper, we conduct an exploratory analysis of topics and network dynamics of covid19 tweets. since january 2020, there have been a growing number of papers that analyze twitter activity during the covid19 pandemic in the united states. we provide a sample of papers published since january 1, 2020 in table i . chen, et al.
analyzed the frequency of 22 different keywords such as "coronavirus", "corona", "cdc", "wuhan", "sinophobia", and "covid-19" across 50 million tweets from january 22, 2020 to march 16, 2020 [8] . thelwall also published an analysis of topics for english-language tweets from march 10-29, 2020 [9] . singh et al. [10] analyzed the distribution of languages and propagation of myths, sharma et al. [11] implemented sentiment modeling to understand perception of public policy, and cinelli et al. [12] compared twitter against other social media platforms to model information spread. our contributions are applying machine learning methods not previously analyzed on covid19 twitter data, mainly uniform manifold approximation and projection (umap) to visualize lda-generated topics, and directed graph visualizations of covid19 retweet cascades. topics generated by lda can be difficult to interpret; while there exist coherence values [22] intended to score the interpretability of topics, they remain difficult to interpret and subjective. as a result, we apply umap, a dimensionality reduction algorithm and visualization tool that "clusters" documents by topic. vectorizing the tweets using term-frequency inverse-document-frequency (tf-idf) and plotting a umap visualization with the assigned topics from lda allowed us to identify strongly localized and distinct topics. we then visualized "retweet cascades", which describe how a social media network propagates information [23] , through the use of graph models to understand how dense networks become over time and which users dominate the covid19 conversations. in our retweeting time analysis, we found that the median time for covid19 messages to be retweeted is approximately 50 minutes faster than for h7n9 messages during a march 2013 outbreak in china, possibly indicating the global nature, volume, and intensity of the covid19 pandemic.
our keyword analysis and topic modeling were also rigorously explored, where we found that specific topics were triggered to uptick by live white house briefings, implying that covid19 twitter users are highly attuned to government broadcasts.

table i. sample of papers analyzing covid19 twitter data (x = analysis performed):

study | tweets | period | analyses
[14] | 30,990,645 | jan. 1 - apr. 4, 2020 | x
medford, et al. [15] | 126,049 | jan. 14 - jan. 28, 2020 | x x x x
singh, et al. [10] | 2,792,513 | jan. 16 - mar. 15, 2020 | x x x x
lopez, et al. [16] | 6,468,526 | jan. 22 - mar. 13, 2020 | x x x
cinelli, et al. [12] | 1,187,482 | jan. 27 - feb. 14, 2020 | x x x
kouzy, et al. [17] | 673 | feb. 27, 2020 | x x
alshaabi, et al. [18] | unknown | mar. 1 - mar. 21, 2020 | x x
sharma, et al. [11] | 30,800,000 | mar. 1 - mar. 30, 2020 | x x x x x x x
chen, et al. [8] | 8,919,411 | mar. 5 - mar. 12, 2020 | x
schild [19] | 222,212,841 | nov. 1, 2019 - mar. 22, 2020 | x x x x
yang, et al. [20] | unknown | mar. 9 - mar. 29, 2020 | x x
ours | 23,830,322 | mar. 24 - apr. 9, 2020 | x x x x x
yasin-kabir, et al. [21] | 100,000,000 | mar. 5 - apr. 24, 2020 | x x x x

we think this is important because it highlights how other researchers have identified that government agencies play a critical role in sharing information via twitter to improve situational awareness and disaster response [24] . our lda models confirm that topics detected by thelwall et al. [9] and sharma et al. [11] , who analyzed twitter during a similar period of time, were also identified in our dataset, which emphasized healthcare providers, personal protective equipment such as masks and ventilators, and cases of death. this paper studies five research questions:
1) what high-level trends can be inferred from covid19 tweets?
2) are there any events that lead to spikes in covid19 twitter activity?
3) which topics are distinct from each other?
4) how does the speed of retweeting in covid19 compare to other emergencies, and especially similar infectious disease outbreaks?
5) how do covid19 networks behave as information spreads?
the paper begins with data collection, followed by the five stages of our analysis: keyword trend analysis, topic modeling, umap, time-to-retweet analysis, and network analysis. our methods and results are explained in each section. the paper concludes with limitations of our analysis. the appendix provides additional graphs as supporting evidence.

ii. data collection

similar to the researchers in table i , we collected twitter data by leveraging the free streaming api. from march 24, 2020 to april 9, 2020, we collected 23,830,322 (173 gb) tweets. note, in this paper we refer to the twitter data interchangeably as both "dataset" and "corpora" and refer to the posts as "tweets". our dataset is a collection of tweets from different time periods, shown in table v . using the twitter api through tweepy, a python twitter mining and authentication api, we first queried the twitter track on twelve query terms to capture a healthcare-focused dataset: 'icu beds', 'ppe', 'masks', 'long hours', 'deaths', 'hospitalized', 'cases', 'ventilators', 'respiratory', 'hospitals', '#covid', and '#coronavirus'. for the keyword analysis, topic modeling, and umap tasks, we analyzed non-retweets, which brought the corpus down to 5,506,223 tweets. in the time-to-retweet and network analysis, we included retweets but selected a sample of 736,561 tweets out of the larger 23.8 million corpus. our preprocessing steps are described in the data analysis section that follows. prior to applying keyword analysis, we first had to preprocess the corpus on the "text" field. first, we removed retweets using regular expressions, in order to focus the text on original tweets and authorship, as opposed to retweets that can inflate the number of messages in the corpus. we use the no-retweet corpora for both the keyword trend analysis and the topic modeling and umap analyses.
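the retweet filtering and text cleaning described here can be sketched with python's re module. the paper does not give its exact patterns, so the ones below are illustrative assumptions.

```python
import re

def is_retweet(text):
    # Retweets conventionally begin with "RT @username:" in the text field
    # (an assumed pattern; the paper's actual regex is not given).
    return bool(re.match(r"^RT @\w+", text))

def clean_tweet(text):
    """Rough cleaning pipeline: drop URLs, @usernames, digits and
    punctuation, lowercase, and keep tokens of 3+ characters."""
    text = re.sub(r"https?://\S+", "", text)   # remove "https:" hyperlinks
    text = re.sub(r"@\w+", "", text)           # remove @usernames
    text = re.sub(r"[^A-Za-z\s]", " ", text)   # keep latin letters only
    text = text.lower()
    return [t for t in text.split() if len(t) >= 3]
```

stemming and nltk stopword removal, also described above, would follow tokenization and are omitted here for brevity.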
further, we formatted datetimes to utc, removed digits and short words of less than 3 characters, extended the nltk stopwords list to also exclude "coronavirus", "covid19", "19", and "covid", removed "https:" hyperlinks, removed "@" signs for usernames, removed non-latin characters such as arabic or chinese characters, and implemented lower-casing, stemming, and tokenization. finally, using regular expressions, we extracted tweets matching the keyword groups listed in table vi ; the frequencies of tweets per minute are shown in table ii . the greatest rate of tweets occurred for the tweets consisting of the term "mask" (mean 55.044 tweets per minute) in table ii , followed by "hospital" (mean 32.370) and "vent" (mean 24.811). tweets of less than 1.0 mean tweets per minute came from groups about testing positive, being in serious condition, exposure, cough, and fever. this may indicate that people are discussing the issues around covid19 more frequently than symptoms and health conditions in this dataset. we will later find that several themes consistent with these keyword findings recur in topic modeling, including personal protective equipment (ppe) like ventilators and masks, and healthcare workers like nurses and doctors. lda is a mixture model, meaning that documents can belong to multiple topics and membership is fractional [25] . further, each topic is a mixture of words, where words can be shared among topics. this allows for a "fuzzy" form of unsupervised clustering where a single document can belong to multiple topics, each with an associated probability. lda is a bag-of-words model where each vector is a count of terms, and it requires the number of topics to be specified. similar to methods described by syed et al. [26] , we ran 15 different lda experiments varying the number of topics from 2 to 30, and selected the model with the highest coherence value score. we selected the lda model that generated 20 topics, with a medium coherence value score of 0.344. roder et al.
[22] developed the coherence value as a metric that combines the agreement of a set of word pairs and word subsets with their associated word probabilities into a single score. in general, topics are interpreted as being coherent if all or most of the terms are related. our final model generated 20 topics using the default parameters; the topics are shown in figure 2 and include the terms generated and each topic's coherence score measuring interpretability. similar to the high-level trends inferred from extracting keywords, themes about ppe and healthcare workers dominate the nature of topics. the terms generated also indicate emerging words in public conversation, including "hydroxychloroquine" and "asymptomatic". our results also show four topics that are in non-english languages. in our preprocessing, we removed non-latin characters in order to filter out a high volume of arabic and chinese characters. in twitter there exists a tweet object metadata field of "lang" for language, to filter tweets by a specific language like english ("eng"). however, we decided not to filter on the "lang" element because, upon observation, approximately 2.5% of the dataset consisted of an "undefined" language tag, meaning that no language was indicated. although it appears to be a small fraction, removing even the "undefined" tweets would have removed several thousand tweets. some of these tweets that are tagged as "undefined" are in english but contain hashtags, emojis, and arabic characters. as a result, we did not filter for english language, leading our topics to be a mix of english, spanish, italian, french, and portuguese. although this introduced challenges in interpretation, we feel it demonstrates the global nature of worldwide conversations about covid19 occurring on twitter. this is consistent with the variety of languages singh et al. [10] reported in covid19 tweets upon analyzing over 2 million tweets.
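as a rough sketch of this workflow, scikit-learn's LatentDirichletAllocation fits the same bag-of-words mixture model. the paper does not name its lda library, and scikit-learn does not compute the coherence score used for model selection, so this sketch covers only the fitting step; the toy corpus is invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny invented corpus standing in for the 5.5M preprocessed tweets.
docs = [
    "masks ppe ventilators hospital",
    "ppe masks shortage hospital nurses",
    "cases deaths new york cases",
    "deaths cases testing new york",
]

# LDA is a bag-of-words model: vectorize to raw term counts first.
counts = CountVectorizer().fit_transform(docs)

# n_components is the number of topics and must be fixed up front;
# the paper sweeps 2..30 and keeps the most coherent model (20 topics).
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Membership is fractional: each document gets a distribution over topics.
doc_topics = lda.transform(counts)  # shape (4, 2); each row sums to ~1
```

assigning each tweet the argmax of its row in `doc_topics` (or "n/a" when the row is uniform, as in the paper's "100: n/a" category) yields the topic labels used later for the umap plot.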
as a result, we labeled the four topics by the language of the terms in the respective topics: "spanish" (topic 1), "portuguese" (topic 14), "italian" (topic 16) and "french" (topic 19). we used google translate to infer the language of the terms. when examining the distribution of the 20 topics across the corpora in figure 2 , topics 18 ("potus"), 12 ("case.death.new"), 13 ("mask.ppe.ventil"), and 2 ("like.look.work") were among the top five in the entire corpora. for each plot, we labeled each topic with its first three terms for interpretability. in our trend analysis, we summed the number of tweets per minute, and then applied a moving weighted average of 60 minutes for topics march 24 - march 28, and 60 minutes for topics march 30 to april 8. we provide two different plots in order to visualize smaller time frames, such as the 44 minutes of march 24, compared to longer ones. figure 3 and figure 4 show similar trends on a time-series basis per minute across the entire corpora of 5,506,223 tweets. these plots are in a style of "broken axes" (https://github.com/bendichter/brokenaxes) to indicate that the corpora are not continuous periods of time, but discrete time frames, which we selected to plot on one axis for convenience and legibility. we direct the reader to table v for reference on the start and end datetimes, which are in utc format, so please adjust accordingly for time zone. the x-axis denotes the number of minutes, where the entire corpora is 8463 total minutes of tweets. figure 3 shows that for the corpora of march 24, 25, and 28, topic 18 "potus" and topic 13 "mask.ppe.ventil" (denoted in hash-marked lines) trended greatest. for the later time periods of march 30, march 31, april 4, 5 and 8 in figure 4 , topic 18 "potus" and topic 13 "mask.ppe.ventil" (also in hash-marked lines) continued to trend high.
it is also interesting that topic 18 was never replaced as the top trending topic across a span of 17 days (april 8, 2020 also includes the early hours of april 9, 2020 est), potentially as this may have been a proxy for active government listening. the time series would temporally decrease in frequency during overnight hours. we applied change point detection on the time series of tweets per minute for topic 18 in the datasets march 24, 2020, april 3-4, 2020, april 5-6, 2020, and april 8, 2020, to identify whether the live press briefings coincided with inflections in time. using the ruptures python package [27] , which contains a variety of change point detection methods, we used binary segmentation [28] , a standard method for change point detection. given a sequence of data y_{1:n} = (y_1, ..., y_n), the model will have m changepoints with positions τ_{1:m} = (τ_1, ..., τ_m). each changepoint position is an integer between 1 and n-1. the m changepoints split the time series data into m+1 segments, with the ith segment containing y_{(τ_{i-1}+1):τ_i}. changepoints are identified by minimizing the sum of a cost function c over the segments plus a penalty βf(m) to prevent overfitting:

sum_{i=1}^{m+1} c(y_{(τ_{i-1}+1):τ_i}) + βf(m),

where twice the negative log-likelihood is a commonly used cost function. binary segmentation detects multiple changepoints across the time series by repeatedly testing on different subsets of the sequence. it checks whether a τ exists that satisfies:

c(y_{1:τ}) + c(y_{(τ+1):n}) + β < c(y_{1:n}).

if not, then no changepoint is detected and the method stops. but if a changepoint is detected, the data are split into two segments consisting of the time series before (figure 7, blue) and after (figure 7, pink) the changepoint. we can clearly see in figure 7 that the timing of the white house briefing indicates a changepoint in time, giving us the intuition that this briefing influenced an uptick in the number of tweets. we provide additional examples in the appendix.
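the single-split test at the heart of binary segmentation can be sketched in numpy with a squared-error (gaussian mean) cost, rather than via the ruptures api used in the paper; the data below are invented.

```python
import numpy as np

def sse_cost(y):
    """Segment cost C(y): sum of squared deviations from the segment mean
    (proportional to twice the negative Gaussian log-likelihood)."""
    return float(np.sum((y - y.mean()) ** 2)) if len(y) else 0.0

def best_split(y, beta=0.0):
    """Return the changepoint tau minimizing C(y[:tau]) + C(y[tau:]) + beta,
    or None if no split beats the unsplit cost C(y). Binary segmentation
    applies this test recursively to each resulting segment."""
    n = len(y)
    best_tau, best_cost = None, sse_cost(y)
    for tau in range(1, n):
        c = sse_cost(y[:tau]) + sse_cost(y[tau:]) + beta
        if c < best_cost:
            best_tau, best_cost = tau, c
    return best_tau
```

on a tweet-rate series that jumps from roughly 5 to roughly 50 tweets per minute, the detected split falls at the jump, mirroring the uptick after a briefing.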
our topic findings are consistent with the published analyses on covid19 and twitter, such as [10] , who found major themes of healthcare and illness and international dialogue, as we noticed in our four non-english topics. they are also similar to thelwall et al. [9] , who manually reviewed tweets from a corpus of 12 million tweets occurring earlier and overlapping our dataset (march 10-29). similar topics from their findings to ours include "lockdown life", "politics", "safety messages", "people with covid-19", "support for key workers", "work", and "covid-19 facts/news". further, our dataset of covid19 tweets from march 24 to april 8, 2020 occurred during a month of exponential case growth: by the end of our data collection period, the number of cases had increased sevenfold, to 427,460 cases on april 8, 2020 [29] . the key topics we identified using our multiple methods were representative of the public conversations being had in news outlets during march and april. term-frequency inverse-document-frequency (tf-idf) [34] is a weight that signifies how valuable a term is within a document in a corpus, and can be calculated at the n-gram level. tf-idf has been widely applied for feature extraction on tweets used for text classification [35] , [36] , for analyzing sentiment [37] , and for text matching in political rumor detection [23] . with tf-idf, unique words carry greater information and value than common, high-frequency words across the corpus. tf-idf can be calculated as:

tfidf_{i,j} = tf_{i,j} × log(n / df_i),

where i is the term, j is the document, and n is the total number of documents in the corpus. the term frequency tf_{i,j} is the frequency of i in j divided by the total number of terms in j. the inverse document frequency is the log of the total number of documents in the corpus divided by df_i, the number of documents containing term i.
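a worked numeric sketch of the formula above, using raw counts and the natural log (note that scikit-learn's TfidfVectorizer, used next, adds smoothing and normalization on top of this); the toy documents are invented.

```python
import math

# Three toy "documents" (tokenized tweets).
docs = [["mask", "mask", "ppe"], ["mask", "hospital"], ["cases", "deaths"]]
N = len(docs)  # total number of documents in the corpus

def tfidf(term, doc):
    tf = doc.count(term) / len(doc)       # term frequency tf_{i,j}
    df = sum(term in d for d in docs)     # document frequency df_i
    return tf * math.log(N / df)          # tf_{i,j} * log(N / df_i)
```

"ppe" appears in one of three documents, so tfidf("ppe", docs[0]) = (1/3)·log(3); "mask" appears in two, so despite its higher count it is down-weighted to (2/3)·log(3/2), illustrating how common words carry less value.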
using the scikit-learn implementation of tfidfvectorizer and setting max_features to 10000, we transformed our corpus of 5,506,223 tweets into a sparse matrix in r^{n×k} of shape (5506223, 10000). note, prior to fitting the vectorizer, our corpus of tweets was pre-processed during the keyword analysis stage. we chose to visualize how the 20 topics grouped together using uniform manifold approximation and projection (umap) [38] . umap is a dimension reduction algorithm that finds a low-dimensional representation of data with similar topological properties as the high-dimensional space. it measures the local distance of points across a neighborhood graph of the high-dimensional data, capturing what is called a fuzzy topological representation of the data. optimization is then used to find the closest fuzzy topological structure by first approximating nearest neighbors using the nearest-neighbor-descent algorithm and then minimizing local distances of the approximate topology using stochastic gradient descent [39] . when compared to t-distributed stochastic neighbor embedding (t-sne), umap has been observed to be faster [40] with clearer separation of groups. due to compute limitations in fitting the entire high-dimensional matrix of nearly 5.5m records, we randomly sampled one million records. we created an embedding of the vectors along two components and fit the umap model with the hellinger metric, which compares distances between probability distributions:

h(p, q) = (1/√2) · sqrt( sum_i (sqrt(p_i) - sqrt(q_i))^2 ).

we visualized the word vectors with their respective labels, which were the assigned topics generated from the lda model. we used the default parameters of n_neighbors = 15 and min_dist = 0.1. figure 6 presents the visualization of the tf-idf word vectors for each of the 1 million tweets with their labeled topics. umap is intended to preserve both local and global structure of the data, unlike t-sne, which separates groups but does not preserve global structure.
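the hellinger metric can be sketched directly in numpy for two discrete probability vectors:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between probability vectors p and q:
    H(p, q) = (1/sqrt(2)) * ||sqrt(p) - sqrt(q)||_2, bounded in [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2.0))
```

identical distributions are at distance 0, and distributions with disjoint support are at the maximum distance 1, which is why it is a natural metric for comparing (normalized) tf-idf rows treated as distributions over terms.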
as a result, umap visualizations intend to allow the reader to interpret distances between groups as meaningful. in figure 6 , each point is color-coded by its respective topic. the umap plots appear to provide further evidence of the quality and number of topics generated. our observation is that many of these topic "clusters" appear to have a single dominant color, indicating distinct grouping. there is strong local clustering for topics that were also prominent in the keyword analysis and topic modeling time series plots. a very distinct and separated mass of purple tweets represents the "100: n/a" topic, which is an undefined topic. this means that the lda model output equal scores across all 20 topics for any single tweet; as a result, we could not assign a topic to these tweets because they all had uniform scores. but this visualization informs us that the contents of these tweets were uniquely distinct from the others. examples of tweets in this "100: n/a" category include "see, #democrats are always guilty of whatever", "why are people still getting in cruise ships?!?", "thank you mike you are always helping others and sponsoring anchors media shows.", "we cannot let this woman's brave and courageous actions go to waste! #chinaliedpeopledied #chinaneedstopay", and "i wish people in this country would just stay the hell home instead of going to the beach". other observations reveal that the mask-related topic 10 in purple, and potentially a combination of 8 and 9 in red, are distinct from the mass of noisy topics in the center of the plot. we can also see distinct separation of aqua-colored topic 18 "potus" and potentially topics 5 and 6 in yellow. we refer the reader to other examples where umap has been leveraged for twitter analysis, including darwish et al. [41] for identifying clusters of twitter users with controversial topic similarity, vargas [42] for event detection, political polarization by darwish et al.
[41] and estimating political leaning of users by [43] . retweeting is a special activity reserved for twitter where any user can "retweet" messages, which allows them to disseminate messages rapidly to their followers. further, a highly retweeted tweet might signal that an issue has attracted attention in the highly competitive twitter environment, and may give insight about issues that resonate with the public [44] . whereas in the first three analyses we used no retweets, in the time-series and network modeling that follows, we exclusively use retweets. we began by measuring time-to-retweet. wang et al. [1] call this "response time" and used it to measure response efficiency and speed of information dissemination during hurricane sandy; analyzing 986,579 tweets, they found that 67% of retweets occur within 1 h [1] . we researched how fast users retweet in other emergency situations, such as what spiro [45] reported for natural disasters, and the 19 seconds earle [46] reported for retweeting about an earthquake. we extracted metadata from our corpora for the tweet, user, and entities objects. for reference, we direct the reader to the twitter developer guide, which provides a detailed overview of each object [47] . due to compute limitations, we selected a sample of 736,561 tweets that included retweets from the corpora of march 24-28, 2020. however, since we were only focused on retweets, we reduced this to the 567,909 (77%) that were retweets. the metadata we used for both our time-to-retweet and directed graph analyses in the next section included:
1) created_at (string) - utc time when this tweet was created.
2) text (string) - the actual utf-8 text of the status update. see twitter-text for details on what characters are currently considered valid.
3) from the user object, the id_str (string) - the string representation of the unique identifier for this user.
4) from the retweeted_status object (tweet) - the created_at utc time when the retweeted message was originally created.
5) from the retweeted_status object (tweet) - the id_str, which is the unique identifier for the retweeted user.
we used the corpus of retweets and analyzed the time between the tweet created_at and the retweeted created_at. here, the rt_object is the datetime in utc format for when the message that was retweeted was originally posted, and the tw_object is the datetime in utc format when the current tweet was posted. as a result, the datetime for the rt_object is older than the datetime for the current tweet, and the difference measures the time it took for the author of the current tweet to retweet the originating message. this is similar to kuang et al. [48] , who defined the response time of the retweet to be the time difference between the time of the first retweet and that of the origin tweet; spiro et al. [45] call these "waiting times". the median time-to-retweet for our corpus was 2.87 hours, meaning that half of the retweets occurred within this time (less than what wang reported as 1.0 hour), and the mean was 12.3 hours. figure 9 shows the histogram of the number of tweets by their time to retweet in seconds, and figure 10 shows it in hours. further, we found that compared to the 2013 avian influenza (h7n9) outbreak in china described by zhang et al. [49] , covid19 retweeters sent more messages earlier than h7n9. zhang analyzed the log distribution of 61,024 h7n9-related posts during april 2013 and plotted reposting times of messages on sina weibo, a chinese twitter-like platform and one of the largest microblogging sites in china (figure 12). zhang found that h7n9 reposting occurred with a median time of 222 minutes (i.e. 3.7 hours) and a mean of 8520 minutes (i.e. 142 hours). compared to zhang's study, we found our median retweet time to be 2.87 hours, about 50 minutes faster than the reposting time during h7n9 of 3.7 hours.
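the time-to-retweet computation can be sketched with python's standard library. twitter's created_at uses a different timestamp format (e.g. "Wed Mar 25 01:30:00 +0000 2020"); plain iso-like strings are assumed here for brevity.

```python
from datetime import datetime
from statistics import median

FMT = "%Y-%m-%d %H:%M:%S"  # simplified stand-in for Twitter's created_at format

def retweet_delay_seconds(rt_created_at, tw_created_at):
    """Seconds between the origin tweet (rt_object, older) and the
    retweet that quotes it (tw_object, newer)."""
    t0 = datetime.strptime(rt_created_at, FMT)  # original message
    t1 = datetime.strptime(tw_created_at, FMT)  # the retweet
    return (t1 - t0).total_seconds()

# Invented sample: three retweets of the same origin tweet.
delays = [
    retweet_delay_seconds("2020-03-25 01:00:00", "2020-03-25 01:05:00"),
    retweet_delay_seconds("2020-03-25 01:00:00", "2020-03-25 02:00:00"),
    retweet_delay_seconds("2020-03-25 01:00:00", "2020-03-26 01:00:00"),
]
median_delay = median(delays)  # the statistic reported as 2.87 h in the paper
```

applied over all 567,909 retweets, the median of these delays gives the reported 2.87-hour figure.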
when comparing figure 11 and figure 12 , it appears that covid19 retweeting does not completely slow down until 2.78 hours later (10^4 seconds); for h7n9, reposting appears to slow down much earlier, by 10 seconds. unfortunately, few studies document retweeting times during infectious disease outbreaks, which made it hard to compare covid19 retweeting behavior against similar situations. further, the h7n9 outbreak in china occurred seven years ago and may not be a comparable set of data for numerous reasons: chinese social media may not exhibit behaviors similar to american twitter, and this analysis does not take into account multiple factors that influence retweeting behavior, including the context, the user's position, and the time the tweet was posted [44] . we also analyzed what rapid retweeters, those retweeting messages even faster than the median, in less than 10,000 seconds, were saying. in figure 21 we plotted the top 50 tf-idf features by their scores for the text of the retweets. it is intuitive to see that urls are being retweeted quickly, given the presence of "https" in the body of the retweeted text. this is also consistent with studies by suh et al. [50] , who indicated that tweets with urls were a significant factor impacting retweetability. we found terms that were frequently mentioned during the early-stage keyword analysis and topic modeling mentioned again: "cases", "ventilators", "hospitals", "deaths", "masks", "test", "american", "cuomo", "york", "president", "china", and "news". when analyzing the descriptions of the users who were retweeted in figure 21 , we ran the tf-idf vectorizer on bigrams in order to elicit more interpretable terms. user accounts whose tweets were rapidly retweeted appeared to describe themselves as political, news-related, or some form of social media account, all of which are difficult to verify as real or fake. vii.
network modeling we analyzed the network dynamics of nine different time periods within the march 24-28, 2020 covid19 dataset and visualized them based on their speed of retweeting. these types of graphs have been referred to as "retweet cascades", which describe how a social media network propagates information [23]. similar methods have been applied for visualizing rumor propagation by jin et al. [23]. we wanted to analyze how covid19 retweeting behaves at different time points, and we used published disaster retweeting times to serve as benchmarks for selecting time periods. as a result, the graphs in figure 8 are plotted by the retweeting time of known benchmarks: from the median time to retweet after an earthquake, which implies rapid notification, through the median time to retweet after a funnel cloud has been seen, all the way to a one-day or 24-hour time period. we did this to visualize a retweet cascade of fast to slow information propagation. we used median retweeting times published by spiro et al. [45] for the time it took users to retweet messages based on hazardous keywords like "funnel cloud", "aftershock", and "mudslide". we also used the h7n9 reposting time of 3.7 hours published by zhang et al. [49]. we generated a directed graph for each of the nine time periods, where the network consisted of a source, which was the author of the tweet (user object, the id_str), and a target, which was the original retweeter, as shown in table iv. the goal was to analyze how connections change as the retweeting speed increases. the nine networks are visualized in figure 8. graphs were plotted using networkx and drawn using the kamada kawai layout [51], a force-directed algorithm. we modeled 700 users for each graph; we found that graphs with more nodes became too difficult to interpret. the size of a node indicates its number of degrees, or the number of users it is connected to. it can mean that the node has retweeted others several times.
or, it can also mean that the node itself has been retweeted by others several times. the density of each network increases over time, as shown in figure 8 and figure 13. very rapid retweeters, in the time it takes to retweet after an earthquake, start off with a sparse network, with a few nodes in the center being the focus of retweets (figure 8a). by the time we reach figure 8d, the retweeted users are much more clustered in the center and there are more connections and activity. the top retweeted user in our median-time network (figure 8g) was a news network and tweeted "the team took less than a week to take the ventilator from the drawing board to working prototype, so that it can". by 24 hours out (figure 8h), we see a concentrated set of users being retweeted, and by figure 8i, one account appears to dominate the space, being retweeted 92 times. this account was retweeting the following message several times: "she was doing #chemotherapy couldn't leave the house because of the threat of #coronavirus so her line sisters...". in addition, the number of nodes generally decreased, from 1278 in "earthquake" time to 1067 in one week, and the density also generally increased, as shown in table iv. these retweet cascade graphs provide only an exploratory analysis. network structures like these have been used to predict the virality of messages, for example memes over time as the message is diffused across networks [52]. but analyzing them further could enable 1) an improved understanding of how covid19 information diffusion differs from other outbreaks or global events, 2) insight into how information is transmitted differently from region to region across the world, and 3) identification of which users and messages are being concentrated on over time. this would support strategies to improve government communications, emergency messaging, dispelling medical rumors, and tailoring public health announcements. there are several limitations with this study.
first, our dataset is discontinuous, and trends seen in figure 3 and figure 4 where there is an interruption in time should be taken with caution. although there appears to be a trend between one discrete time and another, without the missing data it is impossible to confirm it as a trend. as a result, it would be valuable to apply these techniques on a larger and continuous corpus without any time breaks. we aim to repeat the methods in this study on a longer continuous stream of twitter data in the near future. next, the corpus we analyzed was already pre-filtered with thirteen "track" terms from the twitter streaming api that focused the dataset towards healthcare-related concerns. this may be the reason why the high-level keywords extracted in the first round of analysis were consistently mentioned throughout the different stages of modeling. however, after reviewing the similar papers indicated in table i, we found that despite having filtered the corpus on healthcare-related terms, topics still appear to be consistent with analyses where corpora were filtered on limited terms like "#coronavirus". third, the users and conversations on twitter are not a direct representation of the u.s. or global population. the pew research center found that only 22% of american adults use twitter [53] and that this group is different from the majority of u.s. adults, because they are on average younger, more likely to identify as democrats, more highly educated, and have higher incomes [54]. the users were also not verified and should be considered a possible mixture of human and bot accounts. fourth, we reduced our corpus by removing retweets for the keyword and topic modeling analyses, since retweets can obscure the message by introducing virality and altering the perception of the information [55]. as a result, this reduced the size of our corpus by nearly 77%, from 23,820,322 tweets to 5,506,223 tweets.
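the retweet-removal step above can be reproduced with a simple filter: in the twitter v1.1 json schema, a tweet is a retweet exactly when it carries a retweeted_status object. the records below are hypothetical:

```python
# hypothetical records: a tweet is a retweet iff it carries a retweeted_status
tweets = [
    {"id_str": "1", "text": "original post"},
    {"id_str": "2", "text": "rt content", "retweeted_status": {"id_str": "1"}},
    {"id_str": "3", "text": "another original"},
]

# keep only original tweets for keyword and topic modeling analyses
originals = [t for t in tweets if "retweeted_status" not in t]
print(len(originals))  # 2
```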
however, there appears to be variability in corpus sizes across the twitter analysis literature, as shown in table i. fifth, our compute limitations prohibited us from analyzing a larger corpus for the umap, time-series, and network modeling. for the lda models we leveraged the gensim ldamulticore model, which allowed us to use multiprocessing across 20 workers, but for umap and the network modeling we were constrained to a cpu. however, as stated above, visualizing more than 700 nodes in our graph models was uninterpretable. applying our methods across the entire 23.8 million tweet corpus for umap and the network models may yield more meaningful results. sixth, we were only able to iterate over 15 different lda models based on changing the number of topics, whereas syed et al. [26] iterated over 480 models to select coherent models. we believe that applying a manual grid search of the lda parameters, such as iterations, alpha, gamma threshold, chunksize, and number of passes, would lead to a more diverse representation of lda models and possibly more coherent topics. seventh, it was challenging to identify papers that analyzed twitter networks according to their speed of retweets for public health emergencies and disease outbreaks. zhang et al. [49] point out that there are not enough studies of temporal measurement of public response to health emergencies. fortunately, we found papers by zhang et al. [49] and spiro et al. [45] who published on disaster waiting times. chew et al. [62] and szomszor et al. [6] have published twitter analyses of h1n1 and the swine flu, respectively. chew analyzed the volume of h1n1 tweets and categorized different types of messages, such as humor and concern. szomszor correlated tweets with uk national surveillance data, and tang et al. [63] generated a semantic network of tweets on measles during the 2015 measles outbreak to understand keywords mentioned about news updates, public health, vaccines, and politics.
however, it was difficult to compare our findings against other disease outbreaks due to the lack of similar modeling and published retweet cascade times and network models. we answered five research questions about covid19 tweets during march 24, 2020 - april 8, 2020. first, we found high-level trends that could be inferred from keyword analysis. second, we found that live white house coronavirus briefings led to spikes in topic 18 ("potus"). third, using umap, we found strong local "clustering" of topics representing ppe, healthcare workers, and government concerns. umap allowed for an improved understanding of distinct topics generated by lda. fourth, we used retweets to calculate the speed of retweeting; we found that the median retweeting time was 2.87 hours. fifth, using directed graphs, we plotted the networks of covid19 retweeting communities from rapid to longer retweeting times. the density of each network increased over time while the number of nodes generally decreased. lastly, we recommend trying all techniques indicated in table i to gain an overall understanding of covid19 twitter data. while applying multiple methods is a useful exploratory strategy, there is no technical guarantee that the same combination of five methods analyzed in this paper will yield insights on a different time period of data. as a result, researchers should attempt multiple techniques and draw on existing literature. change-point models were calculated using the ruptures python package. we also applied an exponentially weighted moving average using the ewm pandas function, with a span of 5 for the march 24, 2020 dataset and a span of 20 for the april 3-4, april 5-6, and april 8-9 datasets. our parameters for binary segmentation included selecting the "l2" model to fit the points for topic 18, using 10 n_bkps (breakpoints).
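the smoothing step can be reproduced without pandas; the sketch below implements an exponentially weighted moving average with alpha = 2/(span+1), which matches pandas' ewm(span=...).mean() when adjust=False (the series values are made up):

```python
def ewma(series, span):
    """exponentially weighted moving average with alpha = 2 / (span + 1),
    matching pandas Series.ewm(span=span, adjust=False).mean()."""
    alpha = 2.0 / (span + 1)
    out, prev = [], None
    for x in series:
        # recursive form: new value blended into the running average
        prev = x if prev is None else alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

# hypothetical daily topic-18 counts smoothed with span=5 (alpha = 1/3)
print(ewma([1.0, 2.0, 3.0], span=5))  # [1.0, 1.333..., 1.888...]
```

note that pandas defaults to adjust=True, which reweights early observations; the recursive form above is the simpler variant and converges to the same behavior as the series grows.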
references:
crisis information distribution on twitter: a content analysis of tweets during hurricane sandy
evaluating public response to the boston marathon bombing and other acts of terrorism through twitter
twitter tsunami early warning network: a social network analysis of twitter information flows
twitter earthquake detection: earthquake monitoring in a social world
a case study of the new york city 2012-2013 influenza season with daily geocoded twitter data from temporal and spatiotemporal perspectives
what can we learn about the ebola outbreak from tweets?
covid-19: the first public coronavirus twitter dataset
retweeting for covid-19: consensus building, information sharing, dissent, and lockdown life
a first look at covid-19 information and misinformation sharing on twitter
coronavirus on social media: analyzing misinformation in twitter conversations
the covid-19 social media infodemic
using twitter and web news mining to predict covid-19 outbreak
a large-scale covid-19 twitter chatter dataset for open scientific research - an international collaboration
an "infodemic": leveraging high-volume twitter data to understand public sentiment for the covid-19 outbreak
understanding the perception of covid-19 policies by mining a multilanguage twitter dataset
coronavirus goes viral: quantifying the covid-19 misinformation epidemic on twitter
how the world's collective attention is being paid to a pandemic: covid-19 related 1-gram time series for 24 languages on twitter
an early look on the emergence of sinophobic behavior on web communities in the face of covid-19
prevalence of low-credibility information on twitter during the covid-19 outbreak
coronavis: a real-time covid-19 tweets analyzer
exploring the space of topic coherence measures
detection and analysis of 2016 us presidential election related rumors on twitter
analysis of twitter users' sharing of official new york storm response messages
latent dirichlet allocation
full-text or abstract? examining topic coherence scores using latent dirichlet allocation
selective review of offline change point detection methods
optimal detection of changepoints with a linear computational cost
get your mass gatherings or large community events ready
trump says fda will fast-track treatments for novel coronavirus, but there are still months of research ahead
the white house. presidential memoranda
using tf-idf to determine word relevance in document queries
twitter trending topic classification
predicting popular messages in twitter
opinion mining and sentiment polarity on twitter and correlation between events and sentiment
umap: uniform manifold approximation and projection for dimension reduction
how umap works
understanding umap
unsupervised user stance detection on twitter
event detection in colombian security twitter news using fine-grained latent topic analysis
predicting the topical stance of media and popular twitter users
bad news travel fast: a content-based analysis of interestingness on twitter
waiting for a retweet: modeling waiting times in information propagation
omg earthquake! can twitter improve earthquake response?
introduction to tweet json (twitter developers)
predicting the times of retweeting in microblogs
social media as amplification station: factors that influence the speed of online public response to health emergencies
want to be retweeted? large scale analytics on factors impacting retweet in twitter network
an algorithm for drawing general undirected graphs
virality prediction and community structure in social networks
share of u.s. adults using social media, including facebook, is mostly unchanged since
how twitter users compare to the general public
retweets are trash
characterizing diabetes, diet, exercise, and obesity comments on twitter
comparing twitter and traditional media using topic models
empirical study of topic modeling in twitter
characterizing twitter discussions about hpv vaccines using topic modeling and community detection
topic modeling in twitter: aggregating tweets by conversations
twitter-network topic model: a full bayesian treatment for social network and text modeling
pandemics in the age of twitter: content analysis of tweets during the 2009 h1n1 outbreak
tweeting about measles during stages of an outbreak: a semantic network approach to the framing of an emerging infectious disease
software framework for topic modelling with large corpora

lda model parameters
topic labels: 1 - spanish, 2 - like.look.work, 3 - hospit.realli.patient, 4 - china.thank.lockdown, 5 - case.spread.slow, 6 - day.case.week, 7 - test.case.hosp, 8 - die.world.peopl, 9 - mask.face.wear, 10 - make.home.stay, 11 - hospit.nurs.le, 12 - case.death.new, 13 - mask.ppe.ventil, 14 - portuguese, 15 - case.death.number, 16 - italian, 17 - great.god.news, 18 - potus

the authors would like to acknowledge john larson from booz allen hamilton for his support and review of this article.

it provides four different coherence metrics [64, 65]. we used the "c_v" coherence metric developed by roder [22]. coherence metrics are used to rate the quality and human interpretability of the topics generated.
all models were run with default parameters using an ldamulticore model with parallel computing across 20 workers, a default gamma threshold of 0.001, a chunksize of 10,000, 100 iterations, and 2 passes. note: sudden decreases in the figure 14 signal may be due to temporary internet disconnection.

key: cord-030335-esa9154w
authors: pinzón, carlos; rocha, camilo; finke, jorge
title: algorithmic analysis of blockchain efficiency with communication delay
date: 2020-03-13
journal: fundamental approaches to software engineering
doi: 10.1007/978-3-030-45234-6_20
sha:
doc_id: 30335
cord_uid: esa9154w

a blockchain is a distributed hierarchical data structure. widely-used applications of blockchain include digital currencies such as bitcoin and ethereum. this paper proposes an algorithmic approach to analyzing the efficiency of a blockchain as a function of the number of blocks and the average synchronization delay. the proposed algorithms consider a random network model that characterizes the growth of a tree of blocks by adhering to a standard protocol. the model is parametric on two probability distribution functions governing block production and communication delay. both distributions determine the synchronization efficiency of the distributed copies of the blockchain among the so-called workers and, therefore, are key for capturing the overall stochastic growth. moreover, the algorithms consider scenarios with a fixed or an unbounded number of workers in the network. the main result illustrates how the algorithms can be used to evaluate different types of blockchain designs, e.g., systems in which the average time of block production can match the average time of message broadcasting required for synchronization. in particular, this algorithmic approach provides insight into efficiency criteria for identifying conditions under which increasing block production has a negative impact on the stability of a blockchain.
the model and algorithms are agnostic of the blockchain's final use, and they serve as a formal framework for specifying and analyzing a variety of non-functional properties of current and future blockchains. a blockchain is a distributed hierarchical data structure that cannot be modified (retroactively) without alteration of all subsequent blocks and the consensus of a majority. it was invented to serve as the public transaction ledger of bitcoin [22]. instead of relying on a trusted third party, this digital currency is based on the concept of 'proof-of-work', which allows users to execute payments by signing transactions using hashes through a distributed time-stamping service. resistance to modifications, decentralized consensus, and robustness for supporting cryptocurrency transactions unleash the potential of blockchain technology for uses in various industries, including financial services [12, 26, 3], distributed data models [5], markets [25], government systems [15, 23], healthcare [13, 1, 18], iot [16], and video games [21]. technically, a blockchain is a distributed append-only data structure comprising a linear collection of blocks, shared among so-called workers, often also referred to as miners. these miners generally represent computational nodes responsible for working on extending the blockchain with new blocks. since the blockchain is decentralized, each worker possesses a local copy of the blockchain, meaning that two workers can build blocks at the same time on unsynchronized local copies of the blockchain. in the typical peer-to-peer network implementation of blockchain systems, workers adhere to a consensus protocol for inter-node communication and validation of new blocks. specifically, workers build on top of the largest blockchain. if they encounter two blockchains of equal length, then workers select the chain whose last produced block was first observed.
this protocol generally guarantees an effective synchronization mechanism, provided that the task of producing new blocks is hard to achieve in comparison to the time it takes for inter-node communication. the effort of producing a block relative to that of communicating among nodes is known in the literature as 'proof of work'. if several workers extend different versions of the blockchain, the consensus mechanism enables the network to eventually select only one of them, while the others are discarded (including the data they carry) when local copies are synchronized. the synchronization process persistently carries on upon the creation of new blocks. the scenario of discarding blocks massively, which can be seen as an efficiency issue in a blockchain implementation, is rarely present in "slow" block-producing blockchains. the reason is that the time it takes to produce a new block is long enough for workers to synchronize their local copies of the blockchain. slow blockchain systems prevent workers from wasting resources and time producing blocks that are likely to be discarded in an upcoming synchronization. in bitcoin, for example, it takes on average 10 minutes to produce a block and only 12.6 seconds to communicate an update [8]. the theoretical fork-rate of bitcoin in 2013 was approximately 1.78% [8]. however, as blockchain technology finds new uses, it is being argued that block production needs to be faster [6, 7]. broadly speaking, understanding how speed-ups in block production can negatively impact blockchains, in terms of the number of blocks discarded due to race conditions among the workers, is important for designing new, fast, and yet efficient blockchains. this paper introduces a framework to formally study blockchains as a particular class of random networks, with emphasis on two key aspects: the speed of block production and the network synchronization delays.
as such, it is parametric on the number of workers under consideration (possibly infinite), the probability distribution function that specifies the time for producing new blocks, and the probability distribution function that specifies the communication delay between any pair of randomly selected workers. the model is equipped with probabilistic algorithms to simulate and formally analyze blockchains concurrently producing blocks over a network with varying communication delays. these algorithms focus on the analysis of the continuous process of block production in fast and highly distributed systems, in which inter-node communication delays are crucial. the framework enables the study of scenarios with fast block production, in which blocks tend to be discarded at a high rate. in particular, it captures the trade-off between speed and efficiency. experiments are presented to understand how this trade-off can be analyzed for different scenarios. as fast blockchain systems tend to spread to novel applications, the algorithmic approach provides mathematical tools for specifying, simulating, and analyzing blockchain systems. it is important to highlight that the proposed model and algorithms are agnostic of the concrete implementation and final use of the blockchain system. for instance, the 'rewards' for mining blocks, such as the ones present in the bitcoin network, are not part of the model and are not considered in the analysis algorithms. on the one hand, these sorts of features can be seen as particular mechanisms of a blockchain implementation that are not explicitly required for the system to evolve as a blockchain. thus, including them as part of the framework could narrow its intended aim as a general specification, design, and analysis tool.
on the other hand, such features may be abstracted away into the proposed model by tuning the probability distribution functions that are parameters of the model, or by considering a more refined base of choices among the many probability distribution functions at hand for a specific analysis. therefore, the proposed model and algorithms are general enough to encompass a wide variety of blockchain systems and their analysis. the contribution of this work is threefold. first, a random network model is introduced (in the spirit of, e.g., erdős-rényi [9]) for specifying blockchains in terms of the speed of block production and communication delays for synchronization among workers. second, exact and approximation algorithms for the analysis of blockchain efficiency are made available. third, based on the proposed model and algorithms, empirical observations about the tensions between production speed and synchronization delay are provided. the remaining sections of the paper are organized as follows. section 2 summarizes basic notions of proof-of-work blockchains. sections 3 and 4 introduce the proposed network model and algorithms. section 5 presents experimental results on the analysis of fast blockchains. section 6 relates these results to existing research and draws some concluding remarks and future research directions. this section overviews the concept of proof-of-work distributed blockchain systems and introduces basic definitions, which are illustrated with the help of an example. a blockchain is a distributed hierarchical data structure of blocks that cannot be modified (retroactively) without alteration of all subsequent blocks and the consensus of the network majority. the nodes in the network, called workers, use their computational power to generate blocks with the goal of extending the blockchain.
the adjective 'proof-of-work' comes from the fact that producing a single block for the blockchain tends to be a computationally hard task for the workers, e.g., a partial hash inversion. definition 1. a block is a digital document containing: (i) a digital signature of the worker who produced it; (ii) an easy-to-verify proof-of-work witness in the form of a nonce; and (iii) a hash pointer to the previous block in the sequence (except for the first block, called the origin, which has no previous block and is unique). technical definitions of blockchain as a data structure have been proposed by different authors (see, e.g., [27]). most of them coincide on it being an immutable, transparent, and decentralized data structure shared by all workers in the network. for the purpose of this paper, it is important to distinguish between the local copy, independently owned by each worker, and the abstract global blockchain, shared by all workers. the latter holds the complete history of the blockchain. definition 2. the local blockchain of a worker w is a non-empty sequence of blocks stored in the local memory of w. the global blockchain (or, blockchain) is the minimal rooted tree containing all workers' local blockchains as branches. under the assumption that the origin is unique (definition 1), the (global) blockchain is well-defined for any number of workers present in the network. if there is at least one worker, then the blockchain is non-empty. definition 2 allows for local blockchains to be either synchronized or unsynchronized. the latter is common in systems with long communication delays or in the presence of anomalous situations (e.g., if a malicious group of workers is holding a fork intentionally). as a consequence, the global blockchain cannot simply be defined as a unique sequence of blocks, but rather as a distributed data structure with which workers are assumed to be partly synchronized.
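definition 2 can be made concrete with a toy tree: one parent pointer per block plus a worker-to-position map, which is also the o(n + m) representation discussed below. the specific blocks and worker positions are illustrative, not those of any figure in the paper:

```python
# o(n + m) representation: a parent pointer per block plus a worker->position map.
# the tree is illustrative (blocks are natural numbers, 0 is the origin).
parent = {0: None, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 5}
position = {"w0": 4, "w1": 6, "w2": 5, "w3": 3, "w4": 6}

def local_chain(worker):
    """recover a worker's local blockchain (origin..tip) as a branch of the tree."""
    chain, b = [], position[worker]
    while b is not None:           # walk parent pointers up to the origin
        chain.append(b)
        b = parent[b]
    return chain[::-1]

print(local_chain("w1"))  # [0, 1, 2, 5, 6]
```

note how two workers (w1 and w4 here) can share a position, and how unsynchronized workers (w3) simply sit on shorter branches of the same tree.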
figure 1 presents an example of a blockchain with five workers, where blocks are represented by natural numbers. on the left, the local blockchains are depicted as linked lists; on the right, the corresponding global blockchain is depicted as a rooted tree. some of the blocks in the rooted tree representation in figure 1 are labeled with the identifier of a worker, which indicates the position of each worker in the global blockchain. for modeling, the rooted tree representation of a blockchain is preferred. on the one hand, it can reduce the amount of memory needed for storage and, on the other hand, it visually simplifies the inspection of the data structure. furthermore, storing a global blockchain with m workers containing n unique blocks as a collection of lists requires o(mn) memory in the worst-case scenario (i.e., with perfect synchronization). in contrast, the rooted tree representation of the same blockchain with m workers and n unique blocks requires o(n) memory for the rooted tree (e.g., using parent pointers) and an o(m) map for assigning each worker its position in the tree, totaling o(n + m) memory. a blockchain tends to achieve synchronization among the workers for the following reasons. first, workers follow a standard protocol in which they are constantly trying to produce new blocks and broadcasting their achievements to the entire network. in the case of cryptocurrencies, for instance, this behavior is motivated by a reward. second, workers can easily verify (i.e., with a fast algorithm) the authenticity of any block. if a malicious worker (i.e., an attacker) changes the information of one block, that worker is forced to repeat the extensive proof-of-work process for that block and all its subsequent blocks in the blockchain. otherwise, its malicious modification cannot become part of the global blockchain.
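the proof-of-work scheme underlying this argument, a partial hash inversion over the fields of definition 1, can be illustrated with a toy miner. the sha-256 choice and the two-leading-zeros difficulty are assumptions for illustration, not any real system's parameters:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Block:
    signature: str   # (i) digital signature of the producing worker
    nonce: int       # (ii) easy-to-verify proof-of-work witness
    prev_hash: str   # (iii) hash pointer to the previous block ("" for the origin)

    def digest(self) -> str:
        data = f"{self.signature}|{self.nonce}|{self.prev_hash}".encode()
        return hashlib.sha256(data).hexdigest()

def mine(signature: str, prev_hash: str, difficulty: int = 2) -> Block:
    """toy partial hash inversion: search for a nonce whose digest starts with
    `difficulty` zeros; finding it is costly, verifying it is one hash call."""
    nonce = 0
    while True:
        block = Block(signature, nonce, prev_hash)
        if block.digest().startswith("0" * difficulty):
            return block
        nonce += 1

origin = Block("origin", 0, "")
b1 = mine("worker-3", origin.digest())
print(b1.digest()[:2])  # "00" -- the cheap verification
```

tampering with any field of origin changes its digest, invalidating b1's prev_hash and forcing the attacker to redo the search for b1 and every later block.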
since repeating the proof-of-work process requires the attacker to spend a prohibitively high amount of resources (e.g., electricity, time, and/or machine rental), such a situation is unlikely to occur. third, the standard protocol forces any malicious worker to confront the computational power of the whole network, assumed to have mostly honest nodes. algorithm 1 presents a definition of the above-mentioned standard protocol, which is followed by each worker in the network. when a worker produces a new block, it appends it to the block it is standing on, moves to it, and notifies the network about its current position and new distance to the root. upon reception of a notification, a worker compares its current distance to the root with the incoming position. such a worker switches to the incoming position whenever it represents a greater distance. to illustrate the use of the standard protocol with a simple example, consider the blockchains depicted in figures 1 and 2. in the former, either w_1 or w_4 produced block 6, but the other workers are not yet aware of its existence. in the latter, most of the workers are synchronized with the longest branch, which is typical of a slow blockchain system and results in a tree with few and short branches. some final remarks on inter-node communication and implementations for enforcing the standard protocol are due. note that message communication in the standard protocol is required to include enough information about the position of a worker to be located in the tree. the degree of detail of this information depends, generally, on the design of the particular blockchain system. on the one hand, sending the complete sequence from root to end as part of such a message is an accurate, but also expensive, approach in terms of bandwidth, computation, and time.
on the other hand, sending only the last block as part of the message is modest on resources, but can represent a communication conundrum whenever the worker being notified about a new block x is not yet aware of the parent block of x. in contrast to slow systems, this situation may frequently occur in fast systems. the workaround is to use subsequent messages to query the previous blocks of x, as needed, thus extending the average duration of inter-worker communication. the network model generates a rooted tree representing a global blockchain from a collection of linked lists representing local blockchains (see definition 2). it consists of three mechanisms, namely, growth, attachment, and broadcast. by growth it is meant that the number of blocks in the network increases by one at each time step. attachment refers to the fact that new blocks connect to an existing block, while broadcast refers to the fact that the newly connected block is announced to the entire network. the model is parametric in a natural number m specifying the number of workers, and two probability distributions α and β governing the growth, attachment, and broadcast mechanisms. internally, the growth mechanism creates a new block to be assigned at random among the m workers by taking a sample from α (the time it takes to produce such a block) and broadcasts a synchronization message, whose reception time is sampled from β (the time it takes the other workers to update their local blockchains with the new block). a network at a given discrete step n is represented as a rooted tree t_n = (v_n, e_n), with nodes v_n ⊆ ℕ and edges e_n ⊆ v_n × v_n, and a map w_n : {0, 1, ..., m−1} → v_n. a node u ∈ v_n represents a block u in the network, and an edge (u, v) ∈ e_n represents a directed edge from block u to its parent block v. the assignment w_n(w) denotes the position (i.e., the last block in the local blockchain) of worker w in t_n. definition 3.
(growth model) let α and β be positive and non-negative probability distributions, respectively. the algorithm used in the network model starts with v 0 = {b 0 }, e 0 = {} and w 0 (w) = b 0 for all workers w, where b 0 = 0 is the root block (origin). at each step n > 0, t n evolves as follows: growth. uniformly at random, a worker w ∈ {0, 1, . . . , m − 1} is chosen for the new block to extend its local blockchain. attachment. a new edge appears so that e n = e n−1 ∪ {(w n−1 (w), n)}, and w n−1 is updated to form w n with the new assignment w → n, that is, w n (w) = n and w n (z) = w n−1 (z) for any z ≠ w. broadcast. worker w broadcasts the extension of its local blockchain with the new block n to any other worker z with time β n,z sampled from β. the rooted tree generated by the model in definition 3 begins with block 0 (the root) and adds new blocks n = 1, 2, . . . to some of the workers. at each step n > 0, a worker w is selected at random and its local blockchain, 0 ← · · · ← w n−1 (w), is extended to 0 ← · · · ← w n−1 (w) ← n = w n (w). this results in a concurrent random global behavior, inherent to distributed blockchain systems, not only because the workers are chosen randomly due to the proof-of-work scheme, but also because the communication delays bring some workers out of sync. it is important to note that the steps n = 0, 1, 2, . . . are logical time steps, not to be confused with the time units sampled from the variables α and β. more precisely, although the model does not mention time advancement explicitly, it implicitly assumes that workers are synchronized at the corresponding point in the logical future.
for instance, if w sends a synchronization message about a newly created block n to another worker z at the end of logical step n, taking β n,z time, the message will be received by z during the logical step n′ ≥ n whose accumulated production time first exceeds the arrival time of the message. another two reasonable assumptions are implicitly made in the model, namely: (i) the computational power of all workers is similar; and (ii) any broadcast message includes enough information about the new and previous blocks, so that no re-transmission is required to fill block gaps (or, equivalently, these re-transmission times are included in the delay sampled from β). assumption (i) justifies why the worker producing the new block is chosen uniformly at random. thus, instead of simulating the proof-of-work of the workers to know who will produce the next block and at what time, it is enough to select a worker uniformly and take a sample time from α. assumption (ii) helps in keeping the model description simple. without assumption (ii), it would be mandatory to define explicitly how to proceed when a worker is severely out of date and requires several messages to get synchronized. in practice, the distribution α that governs the time it takes for the network, as a single entity, to produce a block is exponential with mean ᾱ. since proof-of-work is based on finding a nonce that makes a hashing function fall into a specific set of targets, the process of producing a block is statistically equivalent to waiting for a success in a sequence of bernoulli trials. such waiting times would correspond, at first, to a discrete geometric distribution. however, because the time between trials is very small compared to the average time between successes (usually fractions of microseconds against several seconds or minutes), the discrete geometric distribution can be approximated by a continuous exponential distribution function.
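this geometric-to-exponential approximation can be checked numerically. a minimal sketch in python (the parameter values and function name are illustrative, not from the paper): a bernoulli trial runs every `trial_dt` seconds with success probability `p_success`, and the waiting time is sampled by inverse transform of the geometric distribution.

```python
import math
import random

def pow_waiting_time(p_success, trial_dt, rng):
    """time until the first success in a sequence of bernoulli trials
    (one hash attempt every trial_dt seconds, success prob p_success),
    sampled via the inverse transform of the geometric distribution."""
    u = rng.random()
    trials = max(1, math.ceil(math.log1p(-u) / math.log1p(-p_success)))
    return trials * trial_dt

rng = random.Random(7)
p, dt = 1e-6, 6e-4            # tiny per-trial time, so the mean is dt / p = 600 s
samples = [pow_waiting_time(p, dt, rng) for _ in range(5000)]
mean = sum(samples) / len(samples)
# with trial times this small, the sample mean and overall shape are close
# to an exponential distribution with the same 600 s mean
```

because the per-trial time is orders of magnitude below the mean waiting time, the discreteness of the geometric distribution becomes invisible, which is exactly why the exponential approximation is adequate for proof-of-work block production.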
finally, note that the choice of the distribution function β that governs the communication delay, and whose mean is denoted by β̄, heavily depends on the system under consideration and its communication details (e.g., its hardware and protocol). this section presents an algorithmic approach to the analysis of blockchain efficiency. the algorithms are used to estimate the proportion of valid blocks that are produced during a fixed number of growth steps, based on the network model introduced in section 3, for blockchains with a fixed and an unbounded number of workers. in general, although presented in this section for the specific purpose of measuring blockchain efficiency, these algorithms can easily be adapted to compute other metrics of interest, such as the speed of growth of the longest branch, the relation between confirmations of a block and the probability of its being valid in the long term, or the average length of forks. definition 4. let t n = (v n , e n ) be a blockchain that satisfies definition 3. the proportion of valid blocks p n in t n is defined as the random variable given by the number of blocks in the longest branch of t n (excluding the root) divided by n, the total number of blocks produced. the proportion of valid blocks p produced for a blockchain (in the limit) is defined as the random variable p = lim n→∞ p n . their expected values are denoted by p̄ n and p̄, respectively. note that p̄ n and p̄ are particularly useful quantities for determining some important properties of blockchains. for instance, the probability that a newly produced block becomes valid in the long run is p̄. the average rate at which the longest branch grows is approximated by p̄/ᾱ. moreover, the rate at which invalid blocks are produced is approximately (1 − p̄)/ᾱ, and the expected time for a block to receive a confirmation is ᾱ/p̄. although p n and p are random for any single simulation, their expected values p̄ n and p̄ can be approximated by averaging several monte carlo simulations.
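definition 4 depends on the length of the longest branch of the rooted tree. a minimal sketch in python (block ids, the parent map, and function names are illustrative, not the paper's notation) of how the distance to the root and the longest ("valid") branch can be computed from the parent relation of definition 3:

```python
def depth(parent, u):
    """distance (number of edges) from block u to the root block 0."""
    d = 0
    while parent[u] is not None:
        u, d = parent[u], d + 1
    return d

def valid_branch(parent):
    """blocks on a longest root-to-leaf branch, i.e. the 'valid' chain."""
    tip = max(parent, key=lambda u: depth(parent, u))
    chain = []
    while tip is not None:
        chain.append(tip)
        tip = parent[tip]
    return chain[::-1]

# a small forked tree: block 2 is a fork off the root that never grows
tree = {0: None, 1: 0, 2: 0, 3: 1, 4: 3}
```

here `valid_branch(tree)` yields `[0, 1, 3, 4]`, so blocks 1, 3, and 4 are valid while the forked block 2 is not, giving p n = 3/4 for this toy tree.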
the three algorithms presented in the following subsections are sequential and single-threaded, designed to compute the value of p n under the standard protocol (algorithm 1). they can be used for computing p̄ n and, thus, for approximating p̄ for large values of n. the first and second algorithms compute the exact value of p n for a bounded number of workers. while the first algorithm simulates the three mechanisms present in the network model (i.e., growth, attachment, and broadcast; see definition 3), the second one takes a more time-efficient approach to computing p n . the third algorithm is a fast approximation algorithm for p n , useful in the context of an unbounded number of workers. it is of special interest for studying the efficiency of large and fast blockchain systems because its time complexity does not depend on the number of workers in the network. algorithm 2 simulates the model with m workers running concurrently under the standard protocol for up to n logical steps. it uses a list b of m block sequences that reflect the local copy of each worker. the sequences initially contain only the origin block 0 and can be randomly extended during the simulation. each iteration of the main loop consists of four stages: (i) the wait for a new block to be produced, (ii) the reception of messages within the given waiting period, (iii) the addition of a block to the blockchain of a randomly selected worker, and (iv) the broadcast of the new position of the selected worker in the shared blockchain to the other workers. the priority queue pq is used to queue messages for future delivery, thus simulating the communication delays. messages have the form (t′, i, b′), where t′ represents the arrival time of the message, i is the recipient worker, and b′ is the content informing that a (non-specified) worker holds the sequence of blocks b′. the statements α() and β() draw samples from α and β, respectively.
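the four stages of the main loop can be condensed into a short python sketch (a hypothetical rendering of algorithm 2, not the authors' exact pseudocode; `alpha` and `beta` are caller-supplied samplers):

```python
import heapq
import random

def simulate_p_n(m, n_blocks, alpha, beta, rng):
    """simulate m workers under the standard protocol and return p_n,
    the proportion of produced blocks lying on the longest branch.
    alpha(rng) samples block production times, beta(rng) delivery delays."""
    chains = [(0,)] * m        # local blockchain of each worker (block ids)
    pq = []                    # pending messages: (arrival_time, recipient, chain)
    t = 0.0
    for n in range(1, n_blocks + 1):
        t += alpha(rng)                      # (i) wait for the next block
        while pq and pq[0][0] <= t:          # (ii) deliver pending messages
            _, i, c = heapq.heappop(pq)
            if len(c) > len(chains[i]):      # switch to the longer branch
                chains[i] = c
        w = rng.randrange(m)                 # (iii) a random worker extends
        chains[w] = chains[w] + (n,)
        for z in range(m):                   # (iv) broadcast the new position
            if z != w:
                heapq.heappush(pq, (t + beta(rng), z, chains[w]))
    longest = max(len(c) for c in chains)
    return (longest - 1) / n_blocks          # exclude the root block
```

for a slow system (e.g., `alpha = lambda r: r.expovariate(1.0)` and a delivery delay far below ᾱ) the returned p n is close to 1, while very large delays leave each worker on its own branch and drive p n towards 1/m.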
the overall complexity of algorithm 2 depends, as usual, on specific assumptions about its concrete implementation. first, let the time complexity to query α() and β() be o(1), which is a reasonable assumption in most programming languages. second, note that the following time complexity estimates may be higher depending on the specific implementation (e.g., if a histogram is used instead of a continuous function for sampling these variables). in particular, consider two implementation variants. for both variants, the average length of the priority queue for arbitrarily large n is expected to be o(m), more precisely, mβ̄/ᾱ. consider a scenario in which the statement b i ← b′ is implemented by creating a copy in o(n) time and the append statement is o(1) time. the overall time complexity of the algorithm is then o(mn 2 ). now consider a scenario in which b i ← b′ merely copies the list reference in o(1) time and the append statement creates a copy in o(n) time. for the case where n ≫ m, under the assumption that the priority queue has log-time insertion and removal, the time complexity is brought down to o(n 2 ). in either case, the spatial complexity is o(mn). a key advantage of algorithm 2 is that with a slight modification it can return the blockchain itself instead of the proportion p n , which enables a richer analysis in the form of additional metrics beyond p. (the pseudocode listings of algorithm 2, simulation of m workers using a priority queue, and algorithm 3, simulation of m workers using a matrix d, are omitted here.) algorithm 3 is a faster alternative to algorithm 2. it uses a different encoding for the collection of local blockchains: it stores the lengths of the blockchains instead of the sequences themselves, thereby suppressing the need for a priority queue. algorithm 4 offers an optimized routine that can be called from algorithm 3.
let t k represent the (absolute) time at which block k is created, h k the length of the local blockchain after being extended with block k, and z k the cumulative maximum z k = max{z k−1 , h k }. the spatial complexity of algorithm 3 is o(mn), due to the computation of the matrix d, and its time complexity is o(nm + n 2 ) when algorithm 4 is not used. note that there are n iterations, each requiring o(n) and o(m) time for computing h k and d k , respectively. however, if algorithm 4 is used for computing h k , the average overall complexity is reduced. in the worst-case scenario, the complexity of algorithm 4 is o(k). however, the experimental evaluations suggest an average below o(β̄/ᾱ) (constant with respect to k). thus, the average runtime complexity of algorithm 3 is bounded by o(nm + min{n 2 , n + nβ̄/ᾱ}), which corresponds to o(nm) unless the blockchain system is extremely fast (β̄ ≫ ᾱ). algorithms 2 and 3 compute the value of p n for a fixed number m of workers. both algorithms can be used to compute p n for different values of m. however, the time complexity of these two algorithms heavily depends on the value of m, which presents a practical limitation when faced with the task of analyzing large blockchain systems. this section introduces an algorithm for approximating p n for an unbounded number of workers. it also presents formal observations that support the proposed approximation. recall that p n can be used as a measure of efficiency in terms of the proportion of valid blocks that have been produced up to step n in the blockchain t n = (v n , e n ). this definition assumes a fixed number of workers; that is, p n can be written as p m,n to represent the proportion of valid blocks in t n with m workers. for the analysis of large blockchains, the challenge is to find an efficient way to estimate p m,n for large values of m and n.
in other words, the goal is an efficient algorithm for approximating the random variables p * n := lim m→∞ p m,n and p * := lim n→∞ p * n . the proposed approach modifies algorithm 3 by suppressing the matrix d. the idea is to replace the need for computing d i,j by an approximation based on the random variable β and the length of the blockchain h k in each iteration of the main loop. note that the first row can be assumed to be 0 wherever it appears, because d 0,j = 0 for all j. for the remaining rows, an approximation is introduced by observing that if an element x m is chosen at random from the matrix d of size (n − 1) × m (i.e., matrix d without the first row), then the cumulative distribution function of x m is given by p(x m ≤ r) = 1/m + (1 − 1/m) p(β() ≤ r), for r ≥ 0. this is because the elements x m of d are either samples from β, whose domain is ℝ ≥0 , or 0 with probability 1/m, since there is one zero per row. therefore, given that the following functional limit converges uniformly (see theorem 1 below), each d i,j can be approximated by directly sampling the distribution β. as a result, algorithm 4 can be used for computing h k by replacing d i,j with β(). theorem 1. let f k (r) := p(x k ≤ r) and g(r) := p(β() ≤ r). the functional sequence {f k } ∞ k=1 converges uniformly to g. proof. let ε > 0, define n := ⌈1/ε⌉, and let k be any integer k > n. then |f k (r) − g(r)| = (1 − g(r))/k ≤ 1/k < ε for all r, which proves the uniform convergence. using theorem 1, the need for the bookkeeping matrix d and the selection of a random worker j are discarded from algorithm 3, resulting in algorithm 5. the proposed algorithm computes p * n , an approximation of lim m→∞ p m,n in which the matrix entries d i,j are replaced by samples from β each time they are needed, thus ignoring the arguably negligible hysteresis effects.
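the mixture c.d.f. behind theorem 1 is simple to state in code. a sketch (with an exponential β chosen purely for illustration) showing that the gap to g never exceeds 1/m, which is the substance of the uniform convergence:

```python
import math

def g(r):
    """c.d.f. of an illustrative exponential beta with mean 1."""
    return 1.0 - math.exp(-r)

def f_m(r, m):
    """c.d.f. of an element chosen at random from the (n-1) x m matrix d:
    the element is 0 with probability 1/m, otherwise a sample from beta."""
    return 1.0 / m + (1.0 - 1.0 / m) * g(r)

# the pointwise gap is (1 - g(r))/m <= 1/m, so f_m converges to g uniformly
gaps = [f_m(r, 100) - g(r) for r in (0.0, 0.5, 1.0, 5.0)]
```

with m = 100 every gap is at most 0.01; picking a larger m shrinks the bound proportionally, regardless of r.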
(the pseudocode listing of algorithm 5, the approximation of lim m→∞ p m,n , is omitted here; it calls algorithm 4 with β() in place of d i,j .) the time complexity of algorithm 5, implemented by using algorithm 4 with β() instead of d i,j , is o(n 2 ), and its space complexity is o(n). if the pruning algorithm is used, the time complexity drops below o(n + nβ̄/ᾱ) according to experimentation. this complexity can be considered o(n) as long as β̄ does not greatly exceed ᾱ. this section presents an experimental evaluation of blockchain efficiency in terms of the proportion of valid blocks produced by the workers for the global blockchain. the model in section 3 is used as the mathematical framework, while the algorithms in section 4 are used for experimental evaluation on that framework. the main claim is that, under certain conditions, the efficiency of a blockchain can be expressed as a ratio between ᾱ and β̄. experimental evaluations provide evidence on why algorithm 5 (the approximation algorithm for computing the proportion of valid blocks in a blockchain system with an unbounded number of workers) is an accurate tool for computing the measure of efficiency p * . note that the speed of a blockchain can be characterized by the relationship between the expected values of α and β. definition 5. let α and β be the distributions according to definition 3. a blockchain is classified as: slow if ᾱ ≫ β̄, chaotic if ᾱ ≪ β̄, and fast if ᾱ ≈ β̄. definition 5 captures the intuition about the behavior of a global blockchain in terms of how alike the times required for producing a block and for local block synchronization are. note that the bitcoin implementation is classified as a slow blockchain system because the time between the creation of two consecutive blocks is much larger than the time it takes for local blockchains to synchronize.
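before examining the three speed classes, the approximation procedure itself can be sketched in python (a hypothetical rendering that follows the t k , h k , z k bookkeeping described earlier, without the pruning of algorithm 4; `alpha` and `beta` are caller-supplied samplers):

```python
import random

def approx_p_star_n(n_blocks, alpha, beta, rng):
    """approximate p*_n for an unbounded number of workers: each matrix
    entry d[i][j] is replaced by a fresh sample from beta (theorem 1)."""
    t = [0.0]   # absolute creation time of each block (t[0] is the root)
    h = [0]     # height (distance to the root) of each block
    z = 0       # length of the longest branch so far
    for k in range(1, n_blocks + 1):
        tk = t[-1] + alpha(rng)
        # the new block extends the highest block whose synchronization
        # message (delay: a fresh beta() sample) has arrived by time tk
        best = max((h[j] for j in range(k) if t[j] + beta(rng) <= tk),
                   default=0)
        t.append(tk)
        h.append(best + 1)
        z = max(z, best + 1)
    return z / n_blocks
```

with α and β both exponential with mean 1 (a fast system), the result fluctuates around 1/2, in line with the ratio between ᾱ and β̄ claimed as the main result of this section; with β̄ far below ᾱ (a slow system) it approaches 1.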
in chaotic blockchains, a dwarfing synchronization time means that essentially no (or relatively little) synchronization is possible, resulting in a blockchain in which hardly any block would be part of "the" valid chain of blocks. a fast blockchain, however, is one in which the times for producing a block and broadcasting a message are similar. the two-fold goal of this section is, first, to analyze the behavior of p̄ * for the three classes of blockchains and, second, to understand, by means of a formula, how the trade-off between production speed and communication time affects the efficiency of the data structure. in favor of readability, the experiments presented next identify algorithms 3 and 5 as a m and a ∞ , respectively. furthermore, the claims and experiments assume that the distribution α is exponential, which holds true for proof-of-work systems. claim 1. unless the system is chaotic, the hysteresis effect of the matrix entries d i,j is negligible. note that theorem 1 implies that if the hysteresis effect of the random variables d i,j is negligible, then algorithm 5 is a good enough approximation of algorithm 3; however, it does not prove that this assertion holds in general. experimental evaluation suggests that this is indeed the case, as stated in claim 1. figure 3 summarizes the average output of a m and the region that contains half of these outputs, for several values of m. all outputs seem to approach that of a ∞ , not only for the expected value (figure 3(a)), but also in terms of the generated p.d.f. (figure 3(b)). similar results were obtained with several distribution functions for β. in particular, the exponential, chi-squared, and gamma probability distribution functions were used (with k ∈ {1, 1.5, 2, 3, 5, 10}), all with different mean values. the resulting plots are similar to the ones depicted in figure 3. as the quotient β̄/ᾱ grows beyond 1, the convergence of a m becomes much slower and the approximation error is noticeable.
an example is depicted in figure 4, where a blockchain system produces on average 10 blocks during the transmission of a synchronization message (i.e., the system is classified as chaotic). even after considering 1000 workers, the shape of the p.d.f. is shifted considerably. the error can be due to: (i) the hysteresis effect that is ignored by a ∞ ; or (ii) the slow rate of convergence. in any case, the efficiency of this class of systems is very low, making them unstable and useless in practice. an intuitive conclusion about blockchain efficiency and speed of block production is that slower systems tend to be more efficient than faster ones; that is, faster blockchain systems have a tendency to overproduce blocks that will not become valid. claim 2. if the system is either slow or fast, then p̄ * = ᾱ/(ᾱ + β̄). figure 5 presents an experimental evaluation of the proportion of valid blocks in a blockchain in terms of the ratio β̄/ᾱ. for the left and right plots, the horizontal axis represents how fast blocks are produced in comparison with how slowly synchronization is achieved. if the system is slow, then efficiency is high because most newly produced blocks tend to be valid. if the system is fast, however, then efficiency is balanced because a newly produced block is equally likely to become valid or invalid. finally, note that for fast and chaotic blockchains, say for 10 −1 ≤ β̄/ᾱ, there is still a region in which efficiency is arguably high. as a matter of fact, even if synchronization of local blockchains takes on average a tenth of the time it takes to produce a block, the proportion of blocks that become valid is, in general, almost 90%. in practice, this observation can bridge the gap between the current use of blockchains as slow systems and the need for faster blockchains. a comprehensive account of the vast literature on complex networks is beyond the scope of this work.
the aim here is more modest, namely, the focus is on related work proposing and using formal and semi-formal algorithmic approaches to evaluate properties of blockchain systems. there are a number of recent studies that focus on the analysis of blockchain properties with respect to meta-parameters. some of them are based on network and node simulators. other studies conceptualize different metrics and models that aim to reduce the analysis to the essential parts of the system. in [10], a. gervais et al. introduce a quantitative framework to analyze the security and performance implications of various consensus and network parameters of proof-of-work blockchains. they devise optimal adversarial strategies for several attack scenarios while taking into account network propagation. ultimately, their approach can be used to compare the trade-offs between blockchain performance and its security provisions. y. aoki et al. [2] propose simblock, a blockchain network simulator in which blocks, nodes, and the network itself can be instantiated by using a comprehensive collection of parameters, including the propagation delay between nodes. towards a similar goal, j. kreku et al. [19] show how to use the absolut simulation tool [28] for prototyping blockchains in different environments and finding optimal performance, given some parameters, on constrained platforms such as raspberry pi and nvidia jetson tk1. r. zhang and b. preneel [29] introduce a multi-metric evaluation framework to quantitatively analyze proof-of-work protocols. their systematic security analysis of seven of the most representative and influential alternative blockchain designs concludes that none of them outperforms the so-called nakamoto consensus in terms of either chain quality or attack resistance. all these efforts have in common that simulation-based analysis is used to understand non-functional requirements of blockchain designs, such as performance and security, up to a high degree of confidence.
however, in most cases the concluding results are tied to a specific implementation of the blockchain architecture. the model and algorithms presented in this work can be used to analyze each of these scenarios in a more abstract fashion by using appropriate parameters for simulating blockchain growth and synchronization. an alternative approach for studying blockchains is through formal semantics. g. rosu [24] takes a novel approach to the analysis of blockchain systems by focusing on the formal design, implementation, and verification of blockchain languages and virtual machines. his approach uses continuation-based formal semantics to later analyze reachability properties of the blockchain evolution with different degrees of abstraction. in this direction of research, e. hildenbrandt et al. [14] present kevm, an executable formal specification of ethereum's virtual machine that can be used for rapid prototyping, as well as a formal interpreter of ethereum's programming languages. c. kaligotla and c. macal [17] present an agent-based model of a blockchain system in which the behavior of, and the decisions made by, agents are detailed. they are able to implement a generalized simulation and a measure of blockchain efficiency from an agent-choice and energy-cost perspective. finally, j. göbel et al. [11] use markov models to establish that some attack strategies, such as selfish-mine, cause the rate of production of orphan blocks to increase. the research presented in this manuscript uses random networks to model the behavior of blockchain systems. as future work, the proposed model and algorithms can be specified in a rewrite-based framework such as rewriting logic [20], so that the rule-based approach in [24, 14] and the agent-based approach in [17] can both be extended to the automatic analysis of (probabilistic) temporal properties of blockchains.
moreover, as is usual in a random-network approach, topological properties of blockchain systems can be studied with the help of the model proposed in this manuscript. in general, this paper differs from the above studies in the following aspects. the proposed analysis is not based on an explicit low-level simulation of a network or protocol, and it does not explore the behavior of blockchain systems under the presence of attackers. instead, this work simulates the behavior of blockchain efficiency from a meta-level perspective and investigates the strength of the system with respect to shortcomings inherent in its design. therefore, the proposed analysis differs from [10, 2, 19, 29] and is rather closely related to studies which consider the core properties of blockchain systems prior to attacks [17, 29]. the bounds obtained for the meta-parameters are more conservative and less secure, compared to scenarios in which the presence of attackers is taken into account. finally, with respect to studying blockchains through formal semantics, the proposed analysis is able to consider the artificial but convenient scenario of having an infinite number of concurrent workers. formal semantics, as well as other related simulation tools, cannot currently handle such scenarios. this paper presented a network model for blockchains and showed how the proposed simulation algorithms can be used to analyze the efficiency (in terms of production of valid blocks) of blockchain systems. the model is parametric on: (i) the number of workers (or nodes); and (ii) two probability distributions governing the time it takes to produce a new block and the time it takes the workers to synchronize their local copies of the blockchain. the simulation algorithms are probabilistic in nature and can be used to compute the expected value of several metrics of interest, both for a fixed and an unbounded number of workers, via monte carlo simulations.
it is proven, under reasonable assumptions, that the fast approximation algorithm for an unbounded number of workers yields accurate estimates in relation to the other two exact (but much slower) algorithms. claims, supported by extensive experimentation, have been proposed, including a formula to measure the proportion of valid blocks produced in a blockchain in terms of the two probability distributions of the model. the model, algorithms, and experiments provide insights and useful mathematical tools for specifying, simulating, and analyzing the design of fast blockchain systems in the years to come. future work on the analytical treatment of the experimental observations contributed in this work should be pursued. this includes proving the two claims in section 5: first, that hysteresis effects are negligible unless the system is extremely fast; second, that the expected proportion of valid blocks in a blockchain system is given by ᾱ/(ᾱ + β̄), where ᾱ and β̄ are the means of the probability distributions governing block production and communication times, respectively. furthermore, the generalization of the claims to non-proof-of-work schemes, i.e., to different probability distribution functions for specifying the time it takes to produce a new block, may also be considered. finally, the study of different forms of attack on blockchain systems can be pursued with the help of the proposed model.
references
- introducing blockchains for healthcare
- simblock: a blockchain network simulator
- blockchain technologies: the foreseeable impact on society and industry
- emergence of scaling in random networks
- application of public ledgers to revocation in distributed access control
- the limits to blockchain? scaling vs. decentralization
- on scaling decentralized blockchains
- information propagation in the bitcoin network
- on random graphs
- on the security and performance of proof of work blockchains
- bitcoin blockchain dynamics: the selfish-mine strategy in the presence of propagation delay
(in performance evaluation)
- blockchain application and outlook in the banking industry
- bc-med: plataforma de registros médicos electrónicos sobre tecnología blockchain (an electronic medical records platform on blockchain technology)
- kevm: a complete formal semantics of the ethereum virtual machine
- the application of blockchain technology in e-government in china
- managing iot devices using blockchain platform
- a generalized agent based framework for modeling a blockchain system
- blockchain solutions for big data challenges: a literature review
- evaluating the efficiency of blockchains in iot with simulations
- conditional rewriting logic as a unified model of concurrency
- challenges and security aspects of blockchain based on online multiplayer games
- bitcoin: a peer-to-peer electronic cash system
- blockchain in government: benefits and implications of distributed ledger technology for information sharing
- formal design, implementation and verification of blockchain languages
- blockchain technology in the chemical industry: machine-to-machine electricity market
- how blockchain is changing finance
- toward more rigorous blockchain research: recommendations for writing blockchain case studies
- early-phase performance exploration of embedded systems with absolut framework
- lay down the common metrics: evaluating proof-of-work consensus protocols' security
if material is not included in the chapter's creative commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, permission must be obtained directly from the copyright holder.

analysing the behaviour of doubling rates in 8 major countries affected by covid-19 virus
authors: devanshu mishra, abid haleem, mohd javaid
journal of oral biology and craniofacial research, 2020-08-14. doi: 10.1016/j.jobcr.2020.08.007

abstract
background and aims: sars-cov-2 is a novel coronavirus that is transmitted to humans through zoonosis and characterised by mild to moderate pneumonia-like symptoms. the outbreak began in wuhan, china, and has now spread on a global scale. doubling time is the period taken for a particular entity (one that tends to grow over time) to double its size or value. this study's prime target is to develop relationships between the variation in the doubling time of the number of covid-19 cases and the various socio-economic factors responsible for it. the frameworks used here focus on relationships rather than relational data, so patterns of doubling rates are generated in graph form and inferences are drawn from them.
methods: only the major countries affected by the covid-19 virus are studied, and datasets of the growth of cases are collected accordingly in the form of spreadsheets. the doubling rate is determined by calculating the doubling time for each day and then plotting these datasets in graphical form.
results: the doubling time of various countries is vastly affected by the preventive measures taken and the success of lockdown implementation. higher testing rates helped identify the hosts of the virus; thus, countries with mass testing have lower doubling rates. countries where the virus spread started earlier had less time to prepare, and while they were in the initial stages, the doubling time suffered.
a sudden dip in doubling time is due to a large gathering of people or an ineffective lockdown; thus, people's attitude plays an essential role in affecting the doubling time.
conclusion: the relationships between the spread of the virus and various factors such as dissimilarities in ethnic values, demographics, governing bodies, human resources, economy, and tourism of major countries are examined to understand the differences in the virus's behaviour. this fast-moving pandemic has exposed various defects and weaknesses in our healthcare systems, political organisations and economic stability, and offers numerous lessons on how to enhance the ways global societies address similar epidemics. one common denominator is the necessity of requisite healthcare systems and medical staff; still, a shortage of these does not necessarily mean that taking the necessary steps would be ineffective. the transmission of covid-19 to humans by zoonosis reveals that the global community is required to be observant concerning similar pandemics in the future.
in december, an outbreak with pneumonia-like symptoms broke out in wuhan, china. the natural hosts of this virus are considered to be bats, yet other species are also regarded as possible sources. epidemiologists have not yet accumulated enough information to conclude how the virus spreads and affects patients' bodies on a cellular level, but the figures indicate that the disease's reproduction number lies between 2 and 3. covid-19 was the name announced for this new disease on 11th february 2020, and it is caused by severe acute respiratory syndrome coronavirus 2 (sars-cov-2). although similar relations were seen with sars-cov and mers-cov after genomic characterisation was done, the novel virus is more aggressive than other coronaviruses 1 . so far, confirmed cases are approaching 20 million, with more than 0.7 million deaths worldwide.
preliminary data from the eu/eea show that around 20-30% of confirmed covid-19 patients are hospitalised, and 4% are in severe condition. for patients aged above 60 or those having other medical conditions, there is an increase in hospitalisation rates 2,3 . the incubation period for the disease is around 14 days, with symptoms most likely to show within 11.5 days 4 . the covid-19 outbreak is putting a massive strain on societies due to the considerable mortality and morbidity, the profound impact on healthcare, and the societal and economic harm associated with physical distancing measures 5 . countries vary in geography, economy, culture, tourism, healthcare, education and leadership; several such factors are responsible for altering doubling rates, which can explain why the outbreak in a few countries has progressed at an alarming rate 6 . developing countries have an immense amount of air traffic (easing the spread of foreign diseases inside the country) as well as overpopulated cities and underfunded healthcare systems. thus, in the long term, these countries may observe a slight increase in doubling rates and show an exploding number of cases [7-10]. the measures taken by the governing bodies are also an essential factor in the coronavirus's behaviour across countries. we intend to add value to this discussion by analysing the doubling rates of 8 major countries and drawing inferences that can act as a resource for containing the outbreak of the covid-19 virus.
problem definition and research objectives of the paper
the main focus of this paper is identifying the doubling rate of the number of covid-19 positive cases. this analysis aims to assess the relationship between the variations in the doubling rates in various countries and factors such as geography, culture, government, economy and tourism. our objective is to identify these relations, draw patterns and accumulate the ones which seem effective worldwide.
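the per-day doubling time used throughout the analysis can be computed from a cumulative case series with the standard formula t_d = ln 2 / ln(c_t / c_{t−1}). a minimal sketch in python (the paper's own plots were produced in r, and it does not state the exact formula used, so this rendering is an assumption):

```python
import math

def doubling_times(cases):
    """per-day epidemic doubling time from a cumulative case series:
    t_d = ln(2) / ln(c[t] / c[t-1]); infinite when cases do not grow."""
    out = []
    for prev, cur in zip(cases, cases[1:]):
        growth = cur / prev
        out.append(math.inf if growth <= 1.0 else math.log(2) / math.log(growth))
    return out

# cases doubling daily give a doubling time of 1 day, then growth stops:
series = [100, 200, 400, 400]
```

here `doubling_times(series)` yields `[1.0, 1.0, inf]`; a rising doubling time, as in the country figures discussed below, indicates a slowing spread.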
scope of the paper: the study focuses on drawing insights from publicly available datasets and statistics. the data considered for the study is the number of covid-19 positive cases for each country, and spreadsheets are used to accumulate the data on a single platform. the study's depth is limited to exploratory data analysis, analysis of correlation and cause-and-effect relationships, bivariate analysis, and data visualisation of the current factors responsible for altering doubling rates; it will not cover any sort of data forecasting. the datasets used in the study are publicly available and taken for eight major countries. microsoft excel is used to gather the data on a single platform, and the programming language used for converting the data into graphical form is r (used explicitly for statistical computing and graphics). data used: www.worldometers.info/coronavirus is used as the data source for accumulating the datasets for each country. in our study, epidemic doubling time refers to the sequence of intervals in which the number of covid-19 cases doubles in value. it is an important factor in determining the rate at which the virus is multiplying in various countries. figure 1 shows that 20 days after the first infected patient was registered, the doubling time was still at an alarming 54 hours. on 21st march, the state of são paulo issued a complete lockdown, and all sorts of non-essential services were closed down for two weeks starting from 23rd march. this lockdown somewhat helped to improve the doubling rate from 2 days to 4.5 days, as the maximum cases were in the state of são paulo only. however, as the cases spread, the doubling time suffered because the country's health system is underfunded.
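the doubling-time measure defined above can be computed directly from a cumulative case series. a minimal sketch, assuming exponential growth between observations (the case numbers below are hypothetical, not the paper's data):

```python
import math

def doubling_time(c_prev: float, c_curr: float, dt_days: float = 1.0) -> float:
    """Doubling time in days implied by growth from c_prev to c_curr over dt_days,
    assuming exponential growth between the two observations."""
    if c_curr <= c_prev:
        return math.inf  # flat or falling counts: no finite doubling time
    return dt_days * math.log(2) / math.log(c_curr / c_prev)

cases = [100, 150, 225]  # hypothetical cumulative counts, one per day
print([round(doubling_time(a, b), 2) for a, b in zip(cases, cases[1:])])  # [1.71, 1.71]
```

a higher value means slower spread, which is why the country-by-country discussion reads rising doubling times as improvement.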
president jair bolsonaro is strictly against lockdown, which is seen on the graph as numerous rises and dips. the increase in the doubling time is due simply to the citizens' own precautions, but the country may suffer in the long term. the country's test positivity rate is 33%, which suggests that the apparent increase in doubling time may be an artifact of limited testing. in china, the government initially did not act quickly enough and even punished doctors who sounded the alarm, which caused initially low doubling times and allowed the virus to spread. figure 2 shows the rapid improvement in the doubling time in china from day 30. this increase can be attributed to the government's vigorous measures to reallocate a vast chunk of its healthcare system to the outbreak's centre and to build new facilities specifically for the patients. the government also made testing free of charge even if the results were negative. it took quick action to alert the public about infection symptoms, isolate confirmed cases and track their close contacts to find the origins of infection clusters. on day 70, china updated its data with 14,108 new cases, which can be seen as a sharp dip in the graph. being the epicentre, china has the most knowledge and experience in dealing with the virus, which also acts as an essential factor in the doubling time's steady rise. china has now successfully contained the epidemic, with the doubling time reaching more than 100 days. germany recorded its first case earlier than italy, but its doubling time was far better than that of other european countries, as seen in figure 3. the initially low value of doubling time is due to the german carnival, a hotspot for the virus's early spread. however, comprehensive mass testing and an associated quality healthcare system are the main reasons behind the success in improving the doubling rate.
germany has successfully kept the doubling rate low through a well-thought-out strategy and an adequately funded healthcare system with strong top-leadership support. the country has tested far more people than other countries, which has allowed authorities to slow transmission of the disease by isolating known cases while they are infectious. therefore, health officials understood the situation early on and took the required measures, which can be seen as a steady increase in the graph from day 40. germany has a building block of strong public trust and smoothly functioning leadership. the improvement in the doubling time of cases also displays the significance of governing bodies and transparent data in controlling the virus's extent. thus, the country managed to increase its doubling time at a significant rate. india, by contrast, was earlier only testing citizens with travel history, which then broadened to only symptomatic cases. thus, not adopting extensive testing (which helps find mild and asymptomatic cases) allowed the virus to spread across vast parts of the country. on day 32, the government of india issued a nationwide lockdown. as we can see from figure 4, starting from day 38, i.e. from 25th march, there has been an improvement in the doubling time. it is directly related to the lockdown issued by the government, adding the incubation period. the lockdown was put in place when the number of positive covid-19 cases in india was around 500. the lockdown slowed the pandemic by 6th april to a rate of doubling every six days, and by 18th april, to a rate of doubling every eight days. another factor responsible for mass spreading is massive economic migration in the country: migrants in lockdown were forced to return to their homes (many on foot). although the lockdown was implemented early on, the governing bodies were slower in developing an effective strategy to contain the virus's spread.
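the practical weight of a shift from a six-day to an eight-day doubling time can be illustrated with a toy projection, assuming growth simply continued at a fixed doubling time from the roughly 500 cases at lockdown (an illustration only, not a forecast):

```python
def project_cases(c0: float, doubling_days: float, horizon_days: float) -> float:
    """Cumulative cases after horizon_days of exponential growth
    at a fixed doubling time: c0 * 2**(t / T_d)."""
    return c0 * 2 ** (horizon_days / doubling_days)

c0 = 500  # approximate confirmed count when the lockdown began (from the text)
for td in (6, 8):
    print(td, int(project_cases(c0, td, 30)))  # 6 -> 16000, 8 -> 6727
```

over just 30 days, the two-day difference in doubling time more than halves the projected count, which is the sense in which even small improvements in doubling time matter.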
the slow response can worsen the situation, as mass testing and contact tracing are not being adopted on a mass scale. poor health infrastructure has also contributed to the lowering of the doubling rate. as seen in figure 5, the first 15 days of the coronavirus spread in iran show no improvement in the doubling time, mostly due to the government's light response; the country even held nationwide parliamentary elections on 21st february. the president called the virus a conspiracy by enemy countries to shut down the nation, showing no sign of declaring lockdown until conditions got out of hand. after the cases hit more than 11,000, the country went under lockdown, which showed a slight improvement in the doubling rate from day 20 to day 30. the country then suffered a second wave of corona cases due to people disobeying the nowruz holiday restrictions. starting from day 45, however, there was a steady improvement, with the doubling time reaching 45.5 days. the initial doubling rates also display the significance of governing bodies and transparent data in controlling the virus's extent. for italy, the initial 30 days show exponential growth of corona cases, as the doubling time alters from 1 day to 6 days, as shown in figure 6. a large number of people in northern italy showed pneumonia-like symptoms, and several areas thus became sites of infection. there is a high possibility that there were already people infected with the virus long before the first registered case appeared. the country had less time to prepare for the massive explosion that took place in various cities. in mid-february, the doubling time was on the lower side; still, no norms or rules limiting the population's movement were adopted. on 10th march, the whole country went under lockdown, strict measures were taken, and movement in and out of affected areas was prohibited. thus, the graph shows a steady upward trend starting on day 30.
the majority of cases in the united kingdom came from people travelling from spain, france and italy. the government initially adopted a mitigation approach, in which it tried not to react and let the outbreak continue with only minor measures: eventually, enough people were supposed to get infected to create herd immunity and permanently reduce r (the reproduction number) below 1. for this reason, the cases multiplied quickly until day 30, as seen in figure 7. on 16th march, the government changed its approach and started promoting social distancing and self-isolation. later in march, the country went into lockdown, and slowly there was an upward trend in the doubling time of the number of cases. the country earlier adopted reactive testing, i.e. prioritising testing of people showing severe symptoms; thus, people not actively seeking out tests and those with mild or no symptoms were left out. for the united states, the initial 40 days show the country's conditions worsening, with the doubling rate altering between 3 and 7 days, as seen in figure 8. this reflects the slow response towards the pandemic and the 400,000 people travelling to the country from china with no virus testing done on them. social distancing precautions were taken in mid-march when the cases overflowed, and mass testing was carried out. the lack of improvement in the doubling rate was also due to cultural issues among the people and the top leadership's poor role. the country has one of the best healthcare systems, but a large number of residents have not been following social distancing. the country has shown the highest number of covid-19 positive cases, but the governing bodies have shown improvement in the doubling rate. after 40 days, the doubling time started to increase as mass testing was done and social distancing was implemented. as of 2nd april, 90% of the population was under lockdown, which is reflected from day 40 till day 80.
the doubling time further increased as more testing was done and preventive measures were taken. three different analyses are carried out for eight major countries, and results are drawn. the doubling time of various countries is vastly affected by the preventive measures taken, the top leadership's role in lockdown implementation, and the attitude of citizens. in countries such as brazil, where lockdown has still not been enacted, cases have risen steadily. higher testing rates helped identify hosts of the virus; thus, countries with mass testing have higher doubling times. the doubling-time graph can also help countries where the doubling time is low to address the fact that increased medical staff and healthcare facilities are needed, including extra testing centres and ppe kits in countries with high case counts. in terms of doubling rates, the worst-affected countries are the developing countries, due to their weak healthcare systems covering an overpopulated expanse; if conditions are not controlled, this may pose a severe threat. countries with weak economies suffered more due to underfunded health systems and economic migration (causing the virus to spread to more regions). unemployment, recession and unstable jobs also cause individuals who cannot cope to disobey the lockdown in order to meet their daily survival needs. the concerned government plays the most crucial role; thus, early and rapid action is needed from all governments to control the virus's outreach. countries such as china that acted quickly to contain the virus's spread did not have an explosion of cases, whereas iran and brazil failed to act swiftly. countries such as brazil, where the political assemblies did not take necessary actions and ignored the importance of lockdown and social norms, suffered heavily.
the strategy of contact tracing and aggressive testing is not easy to replicate in countries with large populations; thus, countries with high population density will usually show lower doubling rates (china being the outlier). limitations of the study, tool and data: the barrier to increasing the number of people getting tested is the limited number of testing facilities, medical staff, and healthcare facilities. the test positivity rate of countries such as brazil and the united kingdom is high, which shows that many other people suffering from the virus are not getting tested; thus, inadequate information may be available for a large extent of areas. some governments' failure to provide transparent, up-to-date information about the spread of the disease poses a barrier to precise results. there are patterns or sequences where the length of the path is unknown upfront, so it is hard to express with absolute certainty the outcome of the growth in the doubling time of the number of coronavirus-positive cases. analysis of the covid-19 pandemic requires multilayered parameters; here we have chosen an elementary model that includes only the fundamental aspects of the dissimilarities in the doubling rates of covid-19 cases. another factor for possible bias is that the data used does not cover all countries from the period when the first case was recorded, making it tough to study the outcomes homogeneously. understanding this study of the covid-19 outbreak can help the authorities take new healthcare measures and adapt other systems to act more successfully on other diseases lurking at the current time, and to prepare more efficiently for any future outbreaks. the datasets can be used in conjunction with other systems such as analytics cloud or machine learning; the patterns in these datasets give insights into what further measures can be taken by the governing bodies to combat the deadly virus.
horizontal scalability is also possible, such that no matter the amount of data, one can add more resources to the infrastructure and carry out further analysis. the covid-19 outbreak reveals the significance of rapid actions and strategies in containing diseases to prevent further pandemics. the lessons of this study can serve as learning for others in dealing with multiple outbreaks. the evolution of healthcare systems, scientific research and medical institutions with strong government support over the past years are important factors that could prove significant in containing any future diseases that may spread on a global scale. this fast-moving pandemic has shown various defects and weaknesses in our healthcare systems, political organisations and economic stability, and gives numerous lessons on how to enhance the ways that global societies address similar epidemics. another common denominator is the necessity for requisite healthcare systems and medical staff; still, a shortage of this component does not necessarily mean that taking necessary steps would be ineffective. transmission of covid-19 to humans by zoonosis reveals that the global community must be observant concerning similar pandemics in future. from what we have observed and the inferences we have drawn, we can say that government response to the pandemic plays a vital role in affecting the virus's doubling time. mass testing can help identify hosts of the virus and prevent it from spreading to other regions. countries where the virus spread early had less time to prepare, and thus in the initial stages the doubling time suffered, and vice versa. the people's attitude towards the government and the lockdown also alters the rate at which the doubling time increases; thus, countries such as germany and south korea did far better than the united states of america and iran.
the healthcare system and economic conditions also affect the doubling time, with countries such as peru and brazil immensely affected. the developing countries are the worst hit due to overpopulation and underfunded healthcare systems and must take the strictest measures to contain the virus's spread.
references:
naming the coronavirus disease (covid-19) and the virus that causes it
statement on the second meeting of the international health regulations
remdesivir in adults with severe covid-19: a randomised, double-blind, placebo-controlled, multicentre trial. the lancet
a systematic review of covid-19 epidemiology based on current evidence
ostwald growth rate in controlled covid-19 epidemic spreading as in arrested growth in quantum complex matter
effect of changing case definitions for covid-19 on the epidemic curve and transmission parameters
key: cord-031409-7cs1z6x6 authors: baraitser, lisa title: the maternal death drive: greta thunberg and the question of the future date: 2020-09-04 journal: psychoanal cult soc doi: 10.1057/s41282-020-00197-y sha: doc_id: 31409 cord_uid: 7cs1z6x6
the centenary of freud's beyond the pleasure principle (freud, 1920a/1955) falls in 2020, a year dominated globally by the covid-19 pandemic. one of the effects of the pandemic has been to reveal the increasingly fragile interconnectedness of human and non-human life, as well as the ongoing effects of social inequalities, particularly racism, on the valuing of life and its flourishing. drawing on earlier work, this paper develops the notion of a 'maternal death drive' that supplements freud's death drive by accounting for repetition that retains a relation to the developmental time of 'life' but remains 'otherwise' to a life drive. the temporal form of this 'life in death' is that of 'dynamic chronicity', analogous to late modern narratives that describe the present as 'thin' and the time of human futurity as running out.
i argue that the urgency to act on the present in the name of the future is simultaneously ‘suspended’ by the repetitions of late capitalism, leading to a temporal hiatus that must be embraced rather than simply lamented. the maternal (death drive) alerts us to a new figure of a child whose task is to carry expectations and anxieties about the future and bind them into a reproductive present. rather than seeing the child as a figure of normativity, i turn to greta thunberg to signal a way to go on in suspended ‘grey’ time. and why should i be studying for a future that soon will be no more, when no one is doing anything whatsoever to save that future? (greta thunberg) this paper is late. not just a little late but seriously forestalled. there is some pressure -an urgency produced by the centenary of freud's beyond the pleasure principle falling in 2020 -and the desire and pleasure in partaking in a collaborative, timely celebration of the work. there are the ordinary repetitions that are holding this up: a chronic relation to my own thoughts, veering towards and away from the satisfactions and disturbances of ideas connecting or linking; the chronic overwhelm produced by the difficulty of saying 'no' and resisting the temptations of an overloaded life; and the realities of overload brought on not by a chronic relation to limits but by their obliteration by the institutions and systems that govern our lives. then, of course, as 2020 has deepened, there have been the temporalities of illness, care and grief; of the suspension of time under conditions of lockdown; the stop-start of uncertainty and helplessness. for some, it has been a time of permanent and dangerous work; of intolerable waiting for others; and of the fault-lines of inequality and racial injustice urgently rupturing the otherwise monotonous rhythm of a global pandemic. in 2020, everything and nothing went on hold. 
during this time, i continued to work with patients, albeit 'remotely', in the strange temporality of a five times per week psychoanalysis. even with so much time, the wait between sessions can be felt to be intolerable. to be in an analysis is to be held in suspension from one session to the next. one of my patients describes the wait as an agonizing 'blank time', like the crackling of an old-fashioned tv. it is not dead time as such but the incessant noise of nothing happening. to be in the session, however, produces a different kind of disturbance: an utterly absorbing kind of time that they liken to the colour blue. we move between the absorbing blue time of the sessions to the blank, crackling, maddening time between them. there is a 'session-time' analyst, who is blue, and a 'between-session-time' analyst, who maddens with a blank, crackling absence. time is both interminable -a wait between the sessions that feels like it goes on forever -and chronic: the repetition of blue, blank, blue, blank, blue, blank… beyond the pleasure principle is freud's meditation on the temporalities of repetition and return as species-time articulates with the time of the subject. in many ways, the death drive is a temporal concept, holding together the paradoxical time in which repetition contains within it a backwards pull towards the no-time of the living organism, even as the shape of this relation describes 'a life'. 100 years later, time in the early decades of the 21st century appears oddly analogous: it seems to loop or repeat but is undercut by a pull towards no-time, since the human and planetary future is not just foreshortened but now 'foreclosed' by the immanent twin disasters of capitalist and (neo)colonial expansion (baraitser, 2017a, p. 8).
franco 'bifo' berardi has long argued that our collective human future has come and gone and that the future has outlived its usefulness as a concept (berardi, 2011). time after the present will come, but it will not bring the promises of bettering the conditions of the now for most, this having been a central aspect of european and north american future narratives in the post-war period (toffler, 1970; lee, 2004; luhmann, 1976). in fact, as naomi klein (2007) argues, the very folding of disaster into capitalist discourses, governmental policies and institutional practices does not stave off disaster but profits further from it, pushing the relations between the human and non-human world to the brink of sustainability. what this implies is that disaster is not a future horizon we must urgently draw back from but a condition we have already incorporated, profited from and continue to sustain in the present. in these conditions of 'crisis capitalism', whole populations are kept in a 'chronic state of near-collapse' (invisible committee, 2009, p. 31), 1 a kind of temporal hiatus in which one goes on but without a future. amy elias (2016) has noted the intensive discussions about the 'presentism' of post-wwii globalized societies that have revolved around the idea of the loss of history (p. 35). in these narratives, a sense of a saturated, elongated, thin present is a product of a traumatized western collective consciousness confronting the unprecedented 'event' of wwii. however, these narratives, she argues, have given way in the 21st century, as humankind 'has created its own version of durational time inside (rather than outside) the box of historicity' (p. 36). this durational time is not bergson's duration that teems with experience (bergson, 1889/1994, 1896/2004) but the empty, timeless time of a 'marketplace duration' (elias, 2016, p. 35), closer to the maddening crackling of nothing happening that my patient describes.
in addition, as time is increasingly synchronized in the post-war period in terms of economic, cultural, technological, ecological and planetary registers, the 'present' itself becomes the management of a tension between time that is felt to be synced or simultaneous and time that is multiple or heterogeneous to simultaneity (burges and elias, 2016, p. 3). we could think of this tension as produced by the dominating effects of european models of time (mills, 2014, 2020). european time is constantly imposed by the west on 'the rest' through the temporal structures of empire and enacted through colonization, exploitation, extraction and enslavement. european time comes to mediate representations of the world through the imposition of a particular account of the world-historical present on other temporal organizations -cosmic time, geological time, earth time, soil time, indigenous time, women's time, queer time, to name a few (chakrabarty, 2009; freeman, 2010; kristeva, 1981/1986; nanni, 2012; puig de la bellacasa, 2017). another way to put this is that, although freud proposes that repetition leads to the ultimate suspension of time -the return to non-being -the state of non-being produced by temporal suspension in the early 21st century is radically unequally distributed. writing under conditions of lockdown during the covid-19 pandemic, achille mbembe (2020) states: for we have never learned to live with all living species, have never really worried about the damage we as humans wreak on the lungs of the earth and on its body. thus, we have never learned how to die. with the advent of the new world and, several centuries later, the appearance of the 'industrialized races,' we essentially chose to delegate our death to others, to make a great sacrificial repast of existence itself via a kind of ontological vicariate.
non-being, or death, is a luxury that hasn't yet been learnt by the 'human', non-being having been delegated to slaves -those humans who are denied status as humans, against which the category of 'human' is both founded and flounders -as well as to non-human others. unless we recognize the 'universal right to breath' (emphasis added) for all organic matter, mbembe argues, we will continue to fail to die for ourselves, the death drive being projected, that is, into the body of that which is deemed non-human. if we go on collectively refusing to die for ourselves, we could say that the temporality of the current human predicament is closer to what martin o'brien calls 'zombie time' (o'brien, 2020). as an artist and writer living with cystic fibrosis, which gives rise to symptoms very similar to covid-19 (coughing, shortness of breath, exhaustion), o'brien has now outlived his own life expectancy. he writes: zombie time insists on a different temporal proximity to death. like the hollywood zombie, which holds within it a paradox, in that it is both dead and alive, those of us living in zombie time experience death as embodied in life […]. we had come to terms with the fact that we are about to die, and then we didn't. freud's movement towards death is circular: a repetitive arc that leads us back to the inorganic, so that in some sense it too describes zombie time, the fact that we have always already surpassed our death date, whereby a life is an act of return. each organism follows its own path, he tells us, to death, and that deviation is a life. a path, however, is not quite what o'brien is suggesting. here the presence of death is sutured to every aspect of life, closer perhaps to melanie klein's insistence on the death drive as a permanent unconscious phantasy that must be managed as a life-long psychic struggle (klein, 1946/1975). two questions arise from this. firstly, does recognizing 'death as embodied in life' lead us to begin to die for ourselves?
in this 'hour of autophagy', as mbembe (2020) puts it, we will no longer be able to delegate death to an other. we do, indeed, have to die not just in our own fashion but on our own behalf. in one reading of freud's death drive, it is associated with the freedom to do one's own thing, follow one's own path and stands as a marker of an independent life in many ways free from others -even if, as lacan would have it, not free from the big other. 2 but, as so many feminist, queer, disability, and black studies scholars have attested, living an independent life is a fantasy; it is always premised on dependency or interdependency, which so often requires the temporary or permanent tethering of the life of an other, or, more profoundly, the harnessing of 'life' itself. 3 judith butler (2020) writes in the force of non-violence that we are all born into a condition of 'radical dependency' (p. 41), that no-one stands on their own, that we are all at some level propped up by others. freud's suggestion of 'eternal return' requires practices of maintenance that have largely been accorded to women, people of colour, animals, and other non-human others. these practices of maintenance entail the temporalities of often mind-numbing repetition: reproductive and other forms of labour that support, sustain, and maintain all living systems. in order to 'deviate', someone or something else needs to preserve, maintain, protect, sustain, and repeat. those 'others' stay on the side of life, not as progression or even deviation towards death but as a permanent sustaining of life-processes. death in life requires a simultaneous articulation, in other words, of life in death, in which the temporalities of progression, regression, and repetition can be understood as supported and supplemented by another temporal element within the death drive that operates through 'dynamic chronicity': an element that animates 'life' in such a way as to allow the subject to die in its own fashion. 
i call this life in death the 'maternal death drive' (baraitser, 2017a) to distinguish it from the pleasure principle or the 'life' drive. secondly, if the time of the 'now', as i've elaborated above, takes the form of dynamic chronicity, a suspended yet chronically animated time that pushes out temporal multiplicity, what work needs to be done in order that this form of time retains some connection to a futurity for all? do the repetitions of 'blue blank' in their own circular fashion retain within them a relation to futurity, even if they don't exactly lead us somewhere else? i would hope, after all, that my patient may eventually, with time, come to experience the 'blue-session' analyst and the 'blank-absent' analyst as one and the same analyst, even as the agonies of having and losing may continue to be difficult. from a kleinian perspective, the time that this requires is the time in which what is hated and what is loved come to have a relation to one another, which klein calls 'depression' (klein, 1946/1975) and which may entail 'depressing time'. we could say that it is the time in which we come to be concerned about the damage done to what is loved, the time whereby what is loved and what is hated can come to matter to one another, making the time of working through that of 'mattering' itself. furthermore, mbembe (2020) writes: community -or rather the in-common -is not based solely on the possibility of saying goodbye, that is, of having a unique encounter with others and honoring this meeting time and again. the in-common is based also on the possibility of sharing unconditionally, each time drawing from it something absolutely intrinsic, a thing uncountable, incalculable, priceless. (emphases in original) this would suggest that, supplementary to the time of blue-blank (saying goodbye again and again), there is another time: that of the 'in-common'. this is a time of permanent mattering, which also takes time to recognize.
it is, if you like, the time in which depressive guilt survives and hence the time it takes for a future to be recognized within the present, rather than being the outward edge, the longed-for time that is yet to come. in what follows, and taking my cue from beyond the pleasure principle itself, i attempt to rework freud's death drive by drawing attention to a particular form of developmental time that lies inside the time of repetition, which i link to 'life in death'. in chapter ii of freud's essay, in the midst of his struggle with the meaning of repetition, pleasure and unpleasure, he turns to a child. the function of the child at this point in the text is to provide the case of 'normalcy' -the play of children -in order to help him understand the 'dark and dismal topic of traumatic neurosis' (freud, 1920b/2003). the child will be 'light' (read white) and playful but turns out to be deeply troubled. instead of dragging the cotton reel along the floor as the adults intended, so it could turn and check its existence at any point, the child, standing outside the cot, throws the reel into the cot, accompanied by an o-o-o-o sound, so it cannot be seen, and then pulls it out with a 'da!' that freud describes as 'joyful' (p. 52). the pleasure of refinding, however, is postponed -in the time between 'gone' and 'found', the child plays at waiting, as it attempts to remaster the experience, freud tells us, of its 'gone' mother. this is of course also an attempt to deal with its own goneness from the imagined place of the mother; the child is standing outside the cot, after all. the passivity of being left is repeated but transformed through an act of 'revenge', a repetitive act of aggression in which, through psychic substitution, something essentially unpleasurable is turned into something 'to be remembered and to be processed in the psyche' (p. 52). the child does this by identifying with the mother, waiting in her place.
my aim is to repeat freud's impulse, re-inserting a mother and child into the scene of the death drive 'proper' as a way to signal how to die on our own behalf and therefore how to go on in the suspended hiatus we appear to be living through. the maternal, as i will elaborate, appears as a non-normative developmental temporality within the death drive. in my account, the child reappears, however, in the figure of the child-activist greta thunberg. she is the child who has been invested in symbolically to carry hope for the future, a hope that she is decidedly pushing back towards those of the generation who came before her, calling on them to take action now, before it is too late. although thunberg names her vision of the world in terms of 'black and white' thinking, i draw on laura salisbury's notion of 'grey time' (salisbury, in press) in order to understand what to do with the time that remains in which action can still take place. it is always an uncomfortable thing to do, to insert a mother and child into a scene where they are ostensibly not wanted. it carries the sour smells of heteronormativity and essentialism that still cling to discussions of the maternal and cast mother-child configurations as the counterpoint to those who are 'not fighting for the children', as lee edelman (2004) suggested in his famous polemic no future. for edelman, the death drive is a queer refusal of futurity that allows negativity to operate as a 'pulsive force' that would otherwise trap queer as a determinate stable position (p. 3). the child and mother come to represent the ultimate trap, that of development itself - the unfolding of the normative temporalities of birth, growth, development, maturation, reproduction, wealth generation and death. in some ways, this is what makes the insertion of mother-child back into discourses about the death drive rather 'queer'. 
in doing so, i deliberately refuse the association between motherhood and normativity and suggest that motherhood is the name for any temporal relation of 'unfurling' whereby the unfurling of one life occurs in relation to the unfurling of another, albeit out of sync. in fact, as i will elaborate below, for a life to unfurl there needs to be the presence of another life that is prepared to wait whilst life and death can come to have a relation to one another. this suspended time of waiting for life to unfurl is a non-teleological, crystalline form of developmental time based on the principle of life in death (baraitser, 2017a, p. 92). whilst motherhood is always in danger of being squeezed out of this kind of queer theory, it is also in danger of being squeezed out of feminist theories that purport to make space for the maternal. julia kristeva's essay 'women's time' (1981/1986), for instance, conceptualized female subjectivity as occupying two forms of time: cyclical time (repetition) and monumental time (eternity without cleavage or escape). these two 'feminine' forms of time, she argued, work to conceal the inherent logic of teleological, historical, 'masculine' time, which is linear, progressive, unfolding and yet constantly rupturing, an 'anguished' time (p. 192). masculine time rests on its own stumbling block, which is death. cyclical time and 'monumental' or eternal time, kristeva argued, are both accessed through the feminine, so that the feminine signifies a less 'anguished' time because it is uncoupled from the death of the subject and more concerned with suturing the subject to extrasubjective time. 
although this has been rightly critiqued for essentializing 'the feminine' through the normative positioning of the female subject on the side of the biological, as well as mobilizing a nonpolitical appeal to 'nature', i have argued elsewhere that, in attempting to separate the feminine from cyclical and monumental time, feminist theory designates the maternal as the keeper of species-time, in which the mother becomes a biologistic and romanticized subject attached to the rhythms of nature (baraitser, 2009, p. 5). toril moi (1986) writes of kristeva's essay that the question for kristeva was not so much how to valorize the feminine but how to reconcile maternal time with linear (political and historical) time (p. 187). without a theory of the desire to have children (a desire that can permeate any gender configuration and that i name as maternal regardless of the gendered body that desires it), the maternal remains untheorized and falls out of signification, time and history. moreover, motherhood is not just the desire for children but a particular form of repetitive labour relegated largely to women and particularly, in the global north, to women of colour and women from the south. although the concept of 'social reproduction' has been expanded to incorporate a much broader array of activities than caring for children, maternal labour remains distinct from other forms of domestic labour. joy james (2016) argues that the ongoing trauma and theft involved in slavery, for instance, produces not only western democracy but a repudiated 'twin' within western theory that she names 'the black matrix' (p. 256). 
where mothers in captivity and slavery have always provided the reproductive and productive labour that underpin wealth and culture, they are systematically erased - not just in culture but in what she calls 'womb theory' (theory, for instance, that accommodates feminism, intersectionality and antiracism, whilst still denying the maternal captive). despite this, she claims, the black matrix can act as a 'fulcrum' that leverages power against captivity (p. 257). i would argue that this power comes, in part, from the impossibility of the maternal captive remaining indifferent to her labour. subsistence farming, cooking, cleaning, household maintenance, support work and the production of status are forms of repetition from which it remains possible to detach emotionally. but the 'labour' of maternity is 'affective, invested, intersubjective' (sandford, 2011, p. 6) and retains an ethical dimension that is distinct. here the maternal emerges as a figuration of the subject that is deeply attached to its labouring, whose labouring is a matter of attachment to that labour, as well as providing the general conditions for attachment (the infant's psychic struggle to become connected to the world) to take place. we could say, then, that the time of repetition under the condition that is maternity becomes the time of mattering, as opposed to the 'meaningless' time of reproduction: the time, that is, in which repetition may come to matter. this time can be felt as obdurate, distinctively uncertain in its outcome, both intensive and 'empty', and bound to the pace of the unfurling other. what is at play is a kind of crystalline developmental time within the time of history. it takes the form of repetition, but this repetition holds open the possibility of something coming to matter, rather than the death drive understood only as a return to non-being. a maternal death drive? what might this conjunction mean? 
freud always maintained that the two elements of psychic life that couldn't be worked through were the repudiation of femininity in both men and women, by which he meant the repudiation of passivity; and the death drive, the repetitive return again and again to our psychic dissolution or unbinding. in 'analysis terminable and interminable', written in the last years of his life, freud (1937/1964) named these the 'bedrocks' of psychic life, evoking an immoveable geological time. the permanent fixtures of psychic life that an analysis cannot shift are the hatred of passivity and the simultaneous impulse to return to an ultimate passive state, suturing the feminine to death in psychoanalysis. earlier, in beyond the pleasure principle, freud had offered a hypothesis in which, despite his conception of drives as exerting a pressure for change, they are constrained by a conservatism, meaning they do not operate according to one singular temporality. this double temporality within the death drive is drawn out by adrian johnston (2005), who has noted freud's (1905/1955) developmental account of the drive in three essays on the theory of sexuality and later in 'instincts and their vicissitudes' (1915/1957), where the drive is articulated as maturing over time. johnston (2005) maintains that freud's drive is simultaneously timeless and temporal, both interminable (it repeats) and containing an internal tendency to deviate, to change its object and its aim (it develops or alters) (p. 228). after all, something happens, according to freud, that shifts the human organism from one that dies easily to one that diverges ever more widely from the original course of life (that is, death) and therefore makes ever more complicated detours before reaching death. for johnston, alteration can be understood as an intra-temporal resistance to the time of iteration, a negation of time transpiring within time. 
this means that the death drive includes rather than negates developmental time. this is not a developmental tendency separated off and located within the self-preservative drives or a 'life' drive but a death drive that contains within it its own resistance to negation. i would want to reclaim this doubled death drive as 'maternal', the drive that includes within it the capacity for development, for what johnston calls 'alteration', which always mediates the axis of repetition or 'iteration' (p. 344). the maternal death drive would describe the unfolding of another life in relation to one's own path towards death and marks the point at which alteration and iteration cross one another. if we move from freud to klein, we see how this double temporality plays out between the maternal and child subject. i have described elsewhere how, in love, guilt and reparation, klein (1937/1998) tells us that anxiety about maternal care and dependency on the maternal body in very early life - the relationship, that is, with a feeding-object of some kind that could be loosely termed 'breast' - is a result of both the frustrations of that breast (its capacities to feed but also to withhold or disappear at whim) and what the infant does with the hatred and aggressive feelings stirred up by those experiences of frustration that rebound on it in the form of terrifying persecutory fantasies of being attacked by the breast itself (pp. 306-43; see also baraitser, 2017b, p. 4). klein's conceptual infant swings in and out of psychic states that are full of envious rage and makes phantasized aggressive raids on the maternal body in an attempt to manage the treacherous initial experiences of psychical and physical survival. 
klein (1937/1998) moves us closer to a more thing-like internal world permeated less with representations and more with dynamic aggressive phantasies of biting, hacking at and tearing the mother and her breasts into bits, and attempts to destroy her body and everything it might be phantasized to contain (p. 308). in klein's thinking, libido gives way to aggression, so that the defences themselves are violent in their redoubling on the infant in the form of persecutory anxiety. one's own greed and aggressiveness themselves become threatening, along with the maternal object that evokes them, and have to be split off from conscious thought. coupled with this are feelings of temporary relief from these painful states of mind (p. 307), and these 'good' experiences form the basis for what we could think of as love. it is only as the infant moves towards a tolerance of knowing that good and bad 'things' and experiences are bound up in the same person (that is, both (m)other and self) that guilt arises as an awareness that we have tried to destroy what we also love. whilst this can overwhelm the infant with depressive anxiety that also needs to be warded off, there is a chance that this guilt can be borne and a temporary state of ambivalence can be achieved that includes the desire to make good the damage done. 'unfurling', then, arises out of the capacity to tolerate the proximity of love and hate towards the mother, but the mother also needs to tolerate the time this takes - to be prepared to go back 'again and again' to the site of mattering without becoming too overwhelmed or rejecting. it is here that futurity emerges, not as that which is carried forward by the child but as this element within the death drive that i am naming as maternal, which is a capacity to tolerate repetition within the present. 
to return to a lacanian formulation, chenyang wang (2019), in his work on differentiating real, imaginary and symbolic time in lacan, shows how lacan's death drive is not so much the reinsertion of the bodily or biological into the human subject but the traumatic intrusion of the symbolic into the organism at the expense of the imaginary, which evokes the real body. wang describes how what he calls the 'real future' (p. 69) does not involve the human subject. where the ego may continue to imagine a future of fulfilled wishes, hopes and expectations, in which the present is characterized as a mode of 'waiting' until the future unfolds, the death drive in fact interrupts the fantasy of the future as something unreachable or unattainable and instead returns the future to the subject as something that has already structured it. for wang, real time opens the subject to the real present that is neither instantaneous nor immediate but the freedom of returning to the same place in one's own way. he sees this as the offer of the possibility of freedom that transcends the isolated, egoic individual, otherwise trapped in its established temporal order (p. 79). we could say, then, that the death drive includes rather than negates developmental time and holds out the possibility of a time that breaks free of the ego's imaginary sense of past, present and future. developmental time, from this perspective, is precisely a suspension of the flow of time, a capacity to wait for the other to unfold. maternity, in its failure to be indifferent to the specificity of its labour, implies a return, again and again, to a scene that matters, a kind of repetition that is not quite captured by the death drive as excessive access to jouissance, nor by the death drive as a deviation towards a unique form of death, but that might after all have something to do with generativity, indeed with freedom, not of the self, but of the other. 
the return to a scene that matters is not a kind of flowing time (anyone who has spent time with small children will know this) nor the stultifying time of indifferent labour, but living in a suspended or crystalline time, which is the time it takes for mattering to take place. finally, we can link the maternal death drive to elizabeth freeman's (2019) concept of 'chronothanatopolitics' (p. 57) that extends mattering beyond the mother-child relation to the politics of mattering in the contemporary moment. in her discussion of 'playing dead' in 19th-century african-american literature, freeman notes that many african-american stories involve 'fictive rebirths' (p. 55). these are stagings of death and rebirth, not just once but multiple times, so that in these stories slaves and their descendants are constantly moving towards and away from death. feigning death, she argues, does not solve the problem of having not been 'born' as human - a position well established within afropessimist thought - but allows an engagement through repetitive staged dying with what jared sexton (2011) has called 'the social life of social death' (quoted in freeman, 2019, p. 55). freeman therefore builds on freud's death drive to develop a concept of 'chronothanatopolitics' in which life is not simply the opposite of death but the opposite of the 'presence' of death (p. 57), a temporary 'disappearing' of death within life, the counterpart to the maternal death drive as life in death. staging one's death again and again, she states, is a way of managing the life/death binary, rather than simply a commitment to life or an acceptance of unchanging black deathliness. where freud's death drive does refuse any simple opposition between life and death, freeman notes, it nevertheless proposes a universal and purely psychic drive. 
she calls instead for recognition of a socio-political death drive enacted by white supremacy: chronothanatopolitics is the 'production of deathliness and nonbeing by historical forces external to the subjectivity it creates for nonblack people, and forecloses for people of african descent' (p. 55). in the 21st century, we see 'playing dead' resurfacing in the 'die-ins' revived by the protest movement black lives matter. time becomes central, creating what freeman terms 'temporal conjoinments' with death (p. 85) through counting 'i can't breathe' 11 times, as eric garner did. we saw this repeated in 2020, when protesters held a silence or took the knee for 8 minutes and 46 seconds, the time that george floyd had his neck knelt on by the police officer who killed him on 25 may. 'mattering', in the sense of black life coming to matter, freeman notes, captures the double meaning of coming to importance and becoming-inert substance or matter, giving the phrase an ambivalent valence. mattering refuses the afropessimist insight that black life is structurally foreclosed and instead implies a more open stance towards non-being. by miming death rather than life, black lives matter activists 'commit to an (a)social life within death even as they fight for an end to the annihilation of blackness' (p. 86). here, life in death is the 'social' work of activism that counts the time that is left within black life even as it is extinguished, just as it is the social work of mothering that waits for life to unfurl towards its death without knowing when or how this will take place. miming death, again and again, is analogous to returning to the scene of mattering again and again, the hiatus within the path towards death that i have described as the maternal death drive. however, freeman's work provides the corrective to an easy universalizing of the drive, pointing us towards the way that black lives matter politicizes repetition in the name of life in death. 
recently i've seen many rumours circulating about me and enormous amounts of hate. (greta thunberg) in the child to come: life after the human catastrophe, rebekah sheldon (2016) charts a recent shift in the use of the child to suture the image of the future. the child, metonymic with the fragility of the planetary system and therefore in need of protection, has become 'the child as resource' (p. 16). as resource, the child is used to carry both expectations and anxieties about the future. unlike earlier iterations, the child as resource is premised on a future that cannot be taken for granted. much of the affect around ecological disaster - anxiety, fear, terror, hopelessness, despair, guilt, determination, protectiveness - comes not so much from an awareness of the current effects of global climate change as they play out in the present but from the projected harm to the future that it portends. and the future, sheldon reminds us, is the provenance of the child. sheldon describes the history of this relationship between child and future as emanating from the 19th century at the same point as modern theories of 'life' begin to proliferate in darwin and of course in freud. 'the link forged between the child and the species', she writes, 'helped to shape eugenic historiography, focalized reproduction as a matter of concern for racial nationalism, and made the child a mode of time-keeping' (p. 3). in the face of anxious concerns about the deep biological past of the human species, the child held open a future through a coordination of the trio 'life, reproduction and species' with that of 'race, history and nation'. freud's child, for instance, caught both in the relentless unfolding of developmental time and the timelessness of unconscious life, is also the site of the regulation of 'life' itself. 
whilst these two axes of temporality (development and timelessness), as we saw above, cross one another, the figure of the child is nevertheless a 'retronaut, a bit of the future lodged in the present' (p. 4). yet, at the same time, sheldon's child is already melancholic. it knows its childness can't be preserved; it will be lost; just as the future is felt also to be something constantly slipping away. sheldon suggests that, as a melancholic figure, the child as resource has a very specific task right now: to cover over the complex systems at work in biological materiality. as non-human animacy becomes more visible in conditions of planetary crisis, with it comes the terrifying potential (at least for the human world) of nature to slip its bonds. the child stands in for life itself at a time of vibrant and virulent reassertion of materialisms in all their forms. the child's new task, according to sheldon, becomes one of binding nonhuman vibrancy back into the human, into something safer, and into the frame of human reproduction. this perhaps helps us modulate how we might respond to the figure of greta thunberg, the climate activist who describes herself as both 'autistic' and living with asperger's, and to her work as a 'cry for help' (thunberg, 2019, p. 3). during 2018, when she was 15 years old, thunberg started to skip school to sit outside the swedish parliament with a sign reading 'skolstrejk för klimatet' [school strike for climate]. 4 as a result of the school climate change movement that grew around thunberg's 'fridays for future' actions during 2019, there has been an intensive, rapid sanctification of the plain-speaking, white, plaited-haired child now simply known as 'greta'. 
although she herself acknowledges that she is not unique and is part of a network of youth movements in the global south who bear the brunt in the present for the effects of climate disaster largely produced by the global north, she has nevertheless become an enormously influential figure through whom climate discussions now pass. some describe her influence as simply the 'greta effect' (watts, 2019). there is a specific and careful simplicity to the way thunberg talks. in a speech entitled 'almost everything is black and white', she states, 'i have asperger's syndrome, and to me, almost everything is black or white' (thunberg, 2019, p. 7). utilizing what others may see as a disability, a difficulty in seeing shades of grey, she speaks against the need for more complexity, more reflection, more science; in short, a more 'grown up' approach to climate chaos: 'we already have all the facts and solutions. all we have to do is to wake up and change […] everything needs to change. and it has to start today' (p. 11). it is this rhetorical insistence that there is no more time and that the future of her generation has been stolen by the inaction of the generation that has come before that positions her as not so much future-orientated as backed up against a closing future, looking back towards those who came before her as they continue to gaze ahead towards what they imagine is her future. as she states, 'we children are doing this to wake the adults up. we children are doing this for you to put your differences aside and start acting as you would in a crisis. we children are doing this because we want our hopes and dreams back' (p. 68). in many ways, we could see thunberg as performing a call, in the name of a human reproductive future, for the binding of nonhuman vibrancy back into the human, into something safe and stable, the child's new task that sheldon describes. 
we could also make a critical reading of the ways thunberg - as a contemporary incarnation of maisie in henry james' (1897/1969) what maisie knew, where the child-protagonist is sacrificed to save a negligent and damaged society - re-mobilizes a discourse that re-stabilizes the differences between the generations in the name of the reproduction of the white heteronormative social bond. however, i want to read thunberg's 'black and white' thinking as metonymic with my patient's blank and blue: the oscillation between the absorbing blue of the analytic session and the suspended time of nothing happening between the sessions; the time of no-analyst and the agonies of waiting. thunberg (2019) states: 'there are no grey areas when it comes to survival. either we go on as a civilization or we don't. we have to change' (p. 8). in many ways, she refuses 'development' in the sense of klein's depressive position functioning, where blue and blank come to be understood as having a relation to one another, and insists instead on their separation, on what klein would call 'paranoid-schizoid' thinking, in which blue and blank are radically split apart, as a viable place to speak from. indeed, she goes on insisting she is a child and that development is precisely what has got us into so much trouble. she warns us that, from the perspective of blank time (the time of nothing happening), blue time is absorbing for sure, but it is short, cannot last, and time itself needs to urgently come to matter if we are to find a way out of the current predicament. if we want to repair a relationship with monumental time, there is only action or no action, blue or blank, as we have now run out of time. 
despite the obvious occlusion of the many brown and black children who have protested, spoken out, organized school strikes and presented to the un over the years and gained no coverage, what is striking is that the white child claims that it is her unusual perspective, in which black and white remain separate, that is our only way out. in describing what she calls 'grey time', laura salisbury (in press) reminds us that grey is not, strictly speaking, a colour at all; rather, it is a shade. as such, it is achromatic, composed of black and white in various shades of intensity, rather than hues. moving from colour to time, salisbury claims that grey time can similarly be thought of as a time that contains intensities of affect, naming grey time as 'anachromistic', a form of intensive temporality that belongs to and traverses the perceiving subject and the aesthetic object. to speak of grey time as anachromistic is to evoke an aesthetic experience that is against colour or hue, but, with its echo of anachronism, also produces a slub in the fabric of time as it is usually thought. the double gesture of the term anachromism is the attempt to speak to time's intensity rather than, as is more usual, concentrating on its flow or movement, while trying to capture an atmosphere where there is a weaving or binding in of blank, uncertain, colourless 'colour', and affect into what is felt of time. (emphasis in original) grey time, then, is an intensity of time that moves us beyond the impasse of action and no action, or blue and blank, by acting as a slub or thickening in the oscillation between the two. this thickening, if we follow salisbury, both reveals time's stuck oscillation between black and white at the same point as it acts to bind greyness into what is felt of time. grey inhabits black and white without resolving the oscillation, both intensifying the sense of time's stuckness but also drawing attention to the affect of greyness, of uncertainty. 
whilst the time for grey thinking, as thunberg states, may have passed, perhaps salisbury's attention to grey time is important. as the existential dangers facing humanity deepen - by mbembe's description, the destruction of the biosphere, the criminalization of resistance and the rise of determinisms, whether genetic, neuronal, biological or environmental - so perhaps greta thunberg's urgency cannot be heard until we bind the blank, uncertain, colourless affect of the grey 'now' into what is felt of time. mbembe (2020) writes of the covid-19 virus: of all these dangers, the greatest is that all forms of life will be rendered impossible. […] at this juncture, this sudden arrest arrives, an interruption not of history but of something that still eludes our grasp. since it was imposed upon us, this cessation derives not from our will. in many respects, it is simultaneously unforeseen and unpredictable. yet what we need is a voluntary cessation, a conscious and fully consensual interruption. without which there will be no tomorrow. without which nothing will exist but an endless series of unforeseen events. (emphasis in original) this is, indeed, grey time - a voluntary cessation, a conscious and fully consensual interruption to business as usual as a response to the profound uncertainty that is the reality of the interdependencies of all forms of life. although i know that there is no way for 'couch time' to have an effect without a 'session-time' analyst and a 'between-session-time' analyst eventually coming together in the time that is an analysis, it may be that we have simply run out of time. 
then a new psychoanalytic temporality may be needed, one that understands the simultaneous need for and suspension of development in the name of really knowing about the death drive; one in which action would no longer be simply understood as acting out but in which the mutative interpretation, the one that brings about change, can be grey, ill-timed, coming too soon and too late, before it is too late. 

references 
baraitser, l. (2009) maternal encounters: the ethics of interruption. 
baraitser, l. (2017b) postmaternal, postwork and the maternal death drive. special issue: the postmaternal. 
berardi, f. (2011) after the future. 
bergson, h. (1889/1994) time and free will: an essay on the immediate data of consciousness. 
bergson, h. (1896/2004) matter and memory. 
bloom (2010) the 1911 schoolchildren strikes. 
bloom (2011) when the kids are united. 
introduction: time studies today. 
butler, j. (2020) the force of non-violence. 
chakrabarty, d. (2009) the climate of history: four theses. 
edelman, l. (2004) no future: queer theory and the death drive. 
past/future. 
freeman, e. (2010) time binds: queer temporalities, queer histories. durham and london. 
freeman, e. (2019) beside you in time: sense methods and queer sociabilities in the american nineteenth century. 
freud, s. (1905/1955) three essays on the theory of sexuality. 
freud, s. (1915/1957) instincts and their vicissitudes. 
freud, s. (1920a/1955) beyond the pleasure principle. 
freud, s. (1920b/2003) beyond the pleasure principle. 
freud, s. (1937/1964) analysis terminable and interminable. 
halberstam, j. (2005) in a queer time and place. 
hartman, s. (2007) lose your mother: a journey along the atlantic slave route. 
invisible committee, the (2014) to our friends. new york: semiotext(e). 
invisible committee, the (2017) now. new york: semiotext(e). 
james, h. (1897/1969) what maisie knew. 
james, j. (2016) the womb of western theory: trauma, time theft and the captive maternal. 
johnston, a. (2005) time driven: metapsychology and the splitting of the drive. 
klein, m. (1946/1975) notes on some schizoid mechanisms. 
klein, m. (1937/1998) love, guilt and reparation. 
klein, n. (2007) the shock doctrine: the rise of disaster capitalism. 
kristeva, j. (1981/1986) women's time. 
lee, p. (2004) chronophobia: on time in the art of the 1960s. 
luhmann, n. (1976) the future cannot begin: temporal structures in modern society. 
mbembe, a. (2020) the universal right to breathe. translated by c. shread. critical inquiry. 
mills, c. (2014) white time: the chronic injustice of ideal theory. 
the chronopolitics of racial time. 
moi, t. (1986) introduction to women's time. 
nanni, g. (2012) the colonisation of time: ritual, routine and resistance in the british empire. 
you are my death: the shattered temporalities of zombie time. 
puig de la bellacasa, m. (2017) matters of care: speculative ethics in more than human worlds. 
salisbury, l. (in press) grey time: anachromism and waiting for beckett. 
sandford, s. (2011) what is maternal labour? 
sexton, j. (2011) the social life of social death: on afro-pessimism and black optimism. 
sheldon, r. (2016) the child to come: life after the human catastrophe. 
thunberg, g. (2019) no one is too small to make a difference. london: penguin random house. 
toffler, a. (1970) future shock. 
wang, c. (2019) subjectivity in-between times: exploring the notion of time in lacan's work. 
watts, j. (2019) the greta thunberg effect: at last, mps focus on climate change. the guardian. 

publisher's note: springer nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. © 2020 springer nature limited. psychoanalysis, culture & society, issn 1088-0763. the research in this paper was funded by a wellcome trust collaborative award, 'waiting times', grant number [205400/a/16/z] (see waitingtimes.exeter.ac.uk). data sharing is not applicable as no datasets were generated and/or analysed for this study. 

notes 
(2007) and hedva (2016). 
4. in doing so, she was perhaps unwittingly building on a long history of school strikes, certainly dating back at least 100 years in the uk, in which schoolchildren mobilised against caning in 1889 and later came out on strike as part of a number of localised general strikes in 1911. see bloom (2010, 2011). 
key: cord-027133-kiyix3qd authors: grzesik, piotr; mrozek, dariusz title: comparative analysis of time series databases in the context of edge computing for low power sensor networks date: 2020-05-25 journal: computational science iccs 2020 doi: 10.1007/978-3-030-50426-7_28 sha: doc_id: 27133 cord_uid: kiyix3qd selection of an appropriate database system for edge iot devices is one of the essential elements that determine efficient edge-based data analysis in low power wireless sensor networks. this paper presents a comparative analysis of time series databases in the context of edge computing for iot and smart systems. the research focuses on the performance comparison between three time-series databases: timescaledb, influxdb, and riak ts, as well as two relational databases, postgresql and sqlite. all selected solutions were tested while deployed on a single-board computer, raspberry pi. for each of them, a database schema was designed, based on a data model representing sensor readings and their corresponding timestamps. for performance testing, we developed a small application that was able to simulate insertion and querying operations. the results of the experiments showed that for the presented scenarios of reading data, postgresql and influxdb emerged as the most performant solutions. for the tested insertion scenarios, postgresql turned out to be the fastest. the experiments also proved that low-cost, single-board computers such as raspberry pi can be used as small-scale data aggregation nodes on the edge in low power wireless sensor networks, which often serve as a base for iot-based smart systems. in recent years, we have observed iot systems being applied to multiple use cases such as water monitoring [20], air quality monitoring [24], and health monitoring [25], generating a massive amount of data that is sent to the cloud for storing and further processing.
this is becoming a more significant challenge due to the need for sending the data over the internet. because of that, a new computing paradigm called edge computing started to emerge [28]. the main idea behind edge computing is to move data processing from the cloud to devices that are closer to the source of data in order to reduce the volume of data that needs to be sent to the cloud, improve reaction time to the changing state of the system, provide resilience, and prevent data loss in situations where the internet connection is unreliable or even unavailable most of the time. to achieve that, edge computing devices need to be able to ingest data from sensors, analyze it, aggregate metrics, and send them to the cloud for further processing if required. for example, while collecting and processing environmental data on air quality, the edge device can be responsible for aggregating data and computing the air quality index (aqi) [22], instead of sending raw sensor readings to the environmental monitoring center. in systems with multiple sensors generating data at a fast rate, an efficient storage and analytics system running on the edge device becomes a crucial component. due to the time-series nature of sensor data, dedicated time series databases seem like a natural fit for this type of workload. this paper aims to evaluate several time series databases in the context of running them on a low-cost, constrained edge device in the form of a raspberry pi that processes data from environmental sensors. the paper is organized as follows. in sect. 2, we review the related works. in sect. 3, we describe the databases selected for comparison. section 4 describes the testing environment, the data model used, as well as the testing methodology. section 5 contains a description of the performance experiments that we carried out. finally, sect. 6 concludes the paper. in the literature, there is little research concerning the comparison of various time-series databases.
in the paper [27], tulasi priyanka sanaboyina compared two time-series databases, influxdb and opentsdb, based on the energy consumption of the physical servers on which the databases were running under several reading and writing scenarios. the author concludes the research with the claim that influxdb consumes less energy than opentsdb in comparable situations. bader et al. [17] focused on open source time-series databases, examined 83 different solutions during their research, and compared twelve selected databases in detail, including influxdb, postgresql, and opentsdb, among others. all selected solutions were compared based on their scalability, supported functions, granularity, available interfaces, and extensions, as well as licensing and support. in their research [21], goldschmidt et al. benchmarked three open-source time-series databases, opentsdb, kairosdb, and databus, in a cloud environment with up to 36 nodes in the context of industrial workloads. the main objective of the research was to evaluate the selected databases to determine their scalability and reliability features. out of the three technologies, kairosdb emerged as the one that met the initial hypotheses about scalability and reliability. wlodarczyk, in his article [29], provides an overview and comparison of four offerings, chukwa, opentsdb, tempodb, and squwk. the analysis focused on feature differences between the selected technologies, without any performance benchmarks. the author identified opentsdb as the most popular choice for time series storage. pungilă et al. [26] compared databases for use in a system that stores large volumes of sensor data from smart meters. during the research, they compared three relational databases, sqlite3, mysql, and postgresql, one time-series database, ibm informix with the datablade module, as well as three nosql databases, monetdb, hypertable, and oracle berkeleydb.
during the experiments, it was determined that hypertable offers the highest number of insert operations per second but is slower when it comes to scanning operations. the authors suggested that berkeleydb offers a compromise when a workload with a balanced number of both insert and scan operations is needed. fadhel et al. presented research [20] concerning the evaluation of databases for a low-cost water quality sensing system. the authors identified influxdb as the most suitable solution, listing the ease of installation and maintenance, support for multiple interface formats, and the http gui as the deciding factors. in the second part of the research, they conducted performance experiments and determined that influxdb can handle the load from 450 sensors. in his article [23], kiefer provided a performance comparison between postgresql and timescaledb for storage and analytics of large-scale time-series data. the author showed that at the scale of millions of rows, timescaledb offers up to 20× higher ingest rates than postgresql, while at the same time making time-based queries up to 14,000× faster. the author also mentions that for simple queries, e.g., indexed lookups, timescaledb will be slower than postgresql due to greater planning time. boule, in his work [19], described a performance comparison for insert and read operations between influxdb and timescaledb. it is based on a simulated dataset of metrics for a fleet of trucks. according to the results obtained during the experiments, timescaledb offers better read performance than influxdb in the tested scenarios. based on the above, it can be concluded that most of the current research focuses on the use of time-series databases for large-scale systems running in cloud environments. one exception is the research [20], where the authors evaluate several databases in the context of a low-cost system while presenting performance tests for only one of them, influxdb.
in contrast to the mentioned works, this paper focuses on comparing the performance of several database systems for storing sensor data on edge devices that have limited storage and compute capabilities. a time series database (tsdb) is a database type designed and optimized to handle timestamped or time-series data, which is characterized by a low number of relationships between data and a temporal ordering of records. most time series workloads consist of a high number of insert operations, often in batches. query patterns include some form of aggregation over time. it is also important to note that in such workloads, data usually does not require updating after being inserted. to accommodate these requirements, time-series databases store data in the form of events, metrics, or measurements, typically numerical, together with their corresponding timestamps and additional labels or tags. data is very often chunked based on the timestamp, which in turn allows for fast and efficient time-based queries and aggregations. most tsdbs offer advanced data processing capabilities such as window functions, automatic aggregation functions, time bucketing, and advanced data retention policies. there are currently a few approaches to building a time-series database. some of them, like opentsdb or timescaledb, depend on already existing databases, such as hbase or postgresql, respectively, while others are standalone, independent systems such as influxdb. in recent years, according to the db-engines ranking, as seen in fig. 1, the growth rate of the popularity of time series databases has been the highest of all classified database types. for the experiments, databases were selected based on their popularity, offered aggregation functionalities, support for the arm architecture, sql or sql-like query language support, as well as their availability without a commercial license.
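the time bucketing and aggregation idea described above can be illustrated with a short, self-contained python sketch. the function names and the bucket width are illustrative assumptions, not part of any of the discussed databases:

```python
from collections import defaultdict

def bucket(ts: float, width_s: int = 300) -> int:
    # Assign a timestamp (in seconds) to the start of its fixed-width bucket.
    return int(ts // width_s) * width_s

def aggregate_by_bucket(readings, width_s: int = 300):
    # Average the readings per time bucket, mimicking a time-bucketed TSDB query.
    sums = defaultdict(lambda: [0.0, 0])
    for ts, value in readings:
        b = bucket(ts, width_s)
        sums[b][0] += value
        sums[b][1] += 1
    return {b: total / count for b, (total, count) in sorted(sums.items())}

readings = [(0, 10.0), (60, 20.0), (300, 30.0), (301, 50.0)]
print(aggregate_by_bucket(readings))  # {0: 15.0, 300: 40.0}
```

dedicated tsdbs perform this kind of grouping natively (and usually on pre-sorted, time-partitioned chunks), which is what makes their time-based aggregations fast.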
timescaledb is an open-source time-series database written in the c programming language and distributed as an extension of the relational database postgresql. it is developed by timescale inc., which also offers enterprise support and cloud hosting in the form of the timescale cloud offering. timescaledb is optimized for fast ingest and complex queries [14]. thanks to the support for all sql operations available in postgresql, it can be used as a drop-in replacement for a traditional relational database, while also offering significant performance improvements for storing and processing time-series data. by taking advantage of automatic space-time partitioning, it enables horizontal scaling, which in turn can further improve the ingestion capabilities of the system. it stores data in structures called hypertables, which serve as an abstraction for a single, continuous table. internally, timescaledb splits hypertables into chunks that correspond to a specific time interval and partition keys. chunks are implemented using regular postgresql tables [16]. being an extension of the postgresql dbms, it supports the same client libraries that support postgresql. according to the db-engines ranking [15], it is the 8th most popular time-series database. influxdb is an open-source time-series database written in the go programming language, developed and maintained by influxdb inc., which also offers enterprise support and a cloud-hosted version of the database. internally, it uses a custom-built storage engine called the time-structured merge (tsm) tree, which is optimized for time series data. it has no external dependencies and is distributed as a single binary, which in turn allows for an easy deployment process on all major operating systems and platforms. influxdb supports influxql, a custom, sql-like query language with support for aggregation functions over time series data.
it supports advanced data retention policies as well as continuous queries, which allow for automatic computation of aggregate data to speed up frequently used queries [5]. it uses shards to partition data and organizes them into shard groups based on the retention policy and timestamps. influxdb is also a part of the tick stack [4], a data processing platform that consists of a time-series database in the form of influxdb, kapacitor, a real-time streaming data processing engine, telegraf, the data collection agent, and chronograf, a graphical user interface to the platform. client libraries in programming languages like go, python, java, ruby, and others are available, as well as the command-line client "influx". according to the db-engines ranking [3], it is the most popular time-series database management system. riak ts is an open-source, distributed nosql database optimized for time series data and built on top of the riak kv database [9], created and maintained by basho technologies. riak ts is written in the erlang programming language and supports a masterless, multi-node architecture to ensure resiliency to network and hardware failures. this type of architecture also allows for efficient scalability with a near-linear performance increase [10]. it supports a sql-like query language with aggregation operations over time series data. it offers both http and pbc apis as well as dedicated client libraries in java, python, ruby, erlang, and node.js. in addition, it has a native apache spark [1] connector for in-memory analytics. according to the db-engines ranking [11], it is the 15th most popular time-series database. postgresql is an open-source relational database management system written in the c language and currently maintained by the postgresql global development group. postgresql runs on all major operating systems, is acid [30] compliant, and supports various extensions, such as timescaledb.
it supports a major part of the sql standard and offers many features, including but not limited to triggers, views, transactions, and streaming replication. it uses multi-version concurrency control, mvcc [18]. in addition to being a relational database, it also offers support for storing and querying document data thanks to the json, jsonb, xml, and key-value data types [6]. there are client libraries available in programming languages like python, c, c++, java, go, erlang, rust, and others. according to the db-engines ranking [7], it is the 4th most popular database overall. it does not offer any dedicated support or optimizations for time-series data. sqlite is an open-source relational database written in the c language. the sqlite source code is in the public domain. it is lightweight, stores the whole database in a single file, and, unlike most databases, is implemented as a library and does not require a separate server process. sqlite provides all functionality directly through function calls. its simplicity makes it one of the most widely used databases, especially popular in embedded systems. sqlite has a full-featured sql standard implementation with support for functionalities such as triggers, views, indexes, and many more [12]. similar to postgresql, it does not offer any specific support for time series data. moreover, it does not provide a data type for storing time, requiring users to save timestamps as numbers or strings. according to the db-engines ranking [13], it is the 7th most popular relational database and the 10th most popular database overall. the testing environment was based on a 6lowpan sensor network that is part of an environment monitoring system, which consists of an edge router device that additionally serves as a database and analytical engine. the edge router is also responsible for sending aggregated metrics to the analytic system in the cloud for further processing.
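because sqlite lacks a dedicated time data type, a common approach is to store unix timestamps as integers and run time-range queries with plain integer comparisons. the sketch below illustrates this; the table and column names are hypothetical, loosely following the paper's data model:

```python
import sqlite3

# In-memory database; the layout is an illustrative assumption, not the
# paper's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE readings (
        ts          INTEGER NOT NULL,  -- unix timestamp in seconds
        sensor_id   TEXT    NOT NULL,
        location    TEXT    NOT NULL,
        temperature REAL
    )
""")
now = 1_600_000_000
conn.execute("INSERT INTO readings VALUES (?, ?, ?, ?)", (now, "s1", "hall", 21.5))
conn.commit()

# Time-range filtering works on the integer column directly.
row = conn.execute(
    "SELECT sensor_id, temperature FROM readings WHERE ts BETWEEN ? AND ?",
    (now - 60, now + 60),
).fetchone()
print(row)  # ('s1', 21.5)
```

an index on `ts` would be needed for efficient range scans on larger tables; dedicated tsdbs provide this kind of time partitioning out of the box.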
another part of the network is composed of ten sensor nodes that send measurements such as air quality and weather condition metrics to the edge router device. figure 2 presents the network diagram of the described system. in this research, we focused on the performance evaluation of the edge database functionality of the presented system. to simplify the testing environment and allow for running tests multiple times in a reasonable amount of time, we developed a small python application to serve as a generator of sensor readings instead of using data generated by the physical network. as the edge device, we decided to use a raspberry pi single-board computer. each data point sent by a sensor consists of air quality metrics in the form of no2 and dust particle size metrics, pm2.5 and pm10. it also carries information about weather conditions such as ambient temperature, pressure, and humidity. each reading is timestamped and tagged with the location of the sensor and the unique sensor identifier. table 1 shows the structure of a single data point with the corresponding data types. for the experiments, we generated data from 10 simulated sensors, where each sensor sends a reading every 15 s over 24 h. this resulted in 28,800 data points used for performance testing. for testing, a small python application was developed separately for each of the selected databases. the application was responsible for reading simulated time-series data, inserting that data into the database, and reading the data back from the database, while measuring the time it took to execute all of the described operations. table 2 presents the list of the databases along with their corresponding client libraries. it also shows the versions of the software used during the experiments. to evaluate the insertion and querying performance, we conducted several experiments.
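a generator of simulated readings along the lines described above could be sketched as follows. the field names follow the data-point description; the value ranges and the scaled-down parameters in the usage example are illustrative assumptions, not the paper's actual code:

```python
import random
from datetime import datetime, timedelta, timezone

def generate_readings(n_sensors=10, interval_s=15, duration_s=24 * 3600, seed=42):
    """Yield simulated data points with the fields from the described data model."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    start = datetime(2020, 1, 1, tzinfo=timezone.utc)
    for i in range(duration_s // interval_s):
        ts = start + timedelta(seconds=i * interval_s)
        for s in range(n_sensors):
            yield {
                "timestamp": ts.isoformat(),
                "sensor_id": f"sensor-{s}",
                "location": f"site-{s % 3}",
                "no2": rng.uniform(5, 80),
                "pm25": rng.uniform(2, 60),
                "pm10": rng.uniform(5, 100),
                "temperature": rng.uniform(-5, 30),
                "pressure": rng.uniform(990, 1030),
                "humidity": rng.uniform(20, 95),
            }

# scaled-down run: 2 sensors, one reading per minute, over one hour
points = list(generate_readings(n_sensors=2, interval_s=60, duration_s=3600))
print(len(points))  # 60 intervals * 2 sensors = 120
```

feeding such a generator into each database driver, while timing the insert and read paths, matches the role of the testing application described in the text.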
firstly, we ran a test to assess the writing capabilities of all selected databases by simulating the insertion of data points in two ways: one-by-one and in batches of 10 points. the reason for that was to accommodate the fact that databases can offer better performance for batch insertions, and it is possible to buffer data before saving it to the database. in this step, for each database, we ran the simulation 50 times (except for sqlite, where simulations were run 20 times due to the relatively long simulation time). secondly, we ran experiments to evaluate the query performance of all selected solutions in three scenarios. in the first scenario, we evaluated a query for the average temperature in a chosen period, grouped by location. in the second scenario, we tested a query for the minimum and maximum values of no2, pm2.5, and pm10 in the selected period, once again grouped by location. in the last, third scenario, we evaluated the performance of a query that counts data points grouped by sensor id in the selected period for which no2 was larger than a selected value and the location was equal to a specific one. each query was executed 5000 times. the query scenarios were selected in order to test the performance of the databases for the most common aggregation queries, which can be used in scenarios where the analysis has to be performed directly on the edge device or when the data needs to be aggregated before sending it to the cloud in order to reduce the volume of transferred data. in the first simulation, we evaluated the insertion performance in two different scenarios. figure 3 presents the obtained results in the form of the average number of data points inserted per second in both scenarios. for one-by-one insertion, we observe postgresql and timescaledb as the best performing solutions, with 260 and 230 points inserted per second, respectively. next was riak ts. in the following experiments, we tested the reading performance for three different queries.
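the two insertion modes can be sketched with sqlite, issuing one transaction per point versus one per batch of 10. this is an illustrative approximation of the described testing application, not its actual code:

```python
import sqlite3
import time

def insert_one_by_one(conn, rows):
    # One INSERT and one commit per data point.
    for r in rows:
        conn.execute("INSERT INTO m(ts, v) VALUES (?, ?)", r)
        conn.commit()

def insert_batched(conn, rows, batch=10):
    # One executemany() and one commit per batch of 10 points.
    for i in range(0, len(rows), batch):
        conn.executemany("INSERT INTO m(ts, v) VALUES (?, ?)", rows[i:i + batch])
        conn.commit()

rows = [(i, float(i)) for i in range(1000)]
for fn in (insert_one_by_one, insert_batched):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE m(ts INTEGER, v REAL)")
    t0 = time.perf_counter()
    fn(conn, rows)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.4f}s")
```

on an in-memory database the gap is modest; with a file-backed database, where each commit can force a sync to storage, per-point commits are typically far slower, which is the effect the batched scenario is meant to expose.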
results are presented in the form of the average query execution time in milliseconds for each database. because query execution for riak ts was in all cases 20-40 times slower than for all other solutions, the results for riak ts were removed from further comparison to improve the readability of the presented charts. figure 4 shows both the query used in the first scenario and the obtained results. in this scenario, influxdb emerged as the fastest solution with an average query execution time of 24 ms, followed by postgresql and timescaledb with 41 and 52 ms, respectively. sqlite was the slowest, recording an average query execution time of 66 ms. next, a comparison was made for the results obtained during the evaluation of the second query, computing minimum and maximum aggregations of air quality metrics. the recorded results and queries are shown in fig. 5. in this example, postgresql turned out to be the fastest solution with an average query execution time of 48 ms; next was influxdb with 70 ms and timescaledb with 72 ms. the tested query took the longest time to execute on sqlite, taking 81 ms on average. we can observe a general trend of increased query execution time as more aggregations are performed, in comparison to the first testing scenario. the last experiment was performed for the third tested query, evaluating the number of times no2 was higher than the predefined threshold. figure 6 presents the query used and the results obtained during that simulation. once again, postgresql was the fastest solution with an average query execution time of 15 ms, followed by influxdb with 29 ms. the two slowest databases were timescaledb and sqlite, with 39 and 40 ms per execution on average. considering the results of all presented simulations, we can observe that in almost all cases, postgresql is the best performing solution for the evaluated workloads, except for influxdb, which turned out to be faster for the first aggregation query.
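the three query scenarios can be expressed in plain sql. the sketch below runs them against a tiny in-memory sqlite table; the schema and sample data are hypothetical, and the production queries shown in figs. 4-6 may differ in detail:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE readings(
    ts INTEGER, sensor_id TEXT, location TEXT,
    no2 REAL, pm25 REAL, pm10 REAL, temperature REAL)""")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        (10, "s1", "north", 30.0, 5.0, 9.0, 20.0),
        (20, "s1", "north", 50.0, 6.0, 11.0, 22.0),
        (30, "s2", "south", 70.0, 7.0, 13.0, 18.0),
    ],
)

# scenario 1: average temperature in a period, grouped by location
avg_t = conn.execute("""
    SELECT location, AVG(temperature) FROM readings
    WHERE ts BETWEEN 0 AND 100 GROUP BY location""").fetchall()

# scenario 2: min/max of the air quality metrics, grouped by location
min_max = conn.execute("""
    SELECT location, MIN(no2), MAX(no2), MIN(pm25), MAX(pm25), MIN(pm10), MAX(pm10)
    FROM readings WHERE ts BETWEEN 0 AND 100 GROUP BY location""").fetchall()

# scenario 3: count of points per sensor where no2 exceeded a threshold
# at one specific location
counts = conn.execute("""
    SELECT sensor_id, COUNT(*) FROM readings
    WHERE ts BETWEEN 0 AND 100 AND no2 > 40 AND location = 'north'
    GROUP BY sensor_id""").fetchall()

print(sorted(avg_t))  # [('north', 21.0), ('south', 18.0)]
print(counts)         # [('s1', 1)]
```

the same shapes of aggregation (time-range filter plus group-by) map naturally onto influxql and the sql dialects of timescaledb and riak ts.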
it was validated that batching data points for insertion yields performance gains, as high as 8.65 times more data points ingested per second for influxdb. with the exception of riak ts, all databases executed the tested queries on average in less than 80 ms, and the relative differences in query performance are not as high as in the case of insertion. the selection of a proper storage system with declarative querying capabilities is an essential element of building efficient systems with edge-based analytics. this research aimed to compare the performance of several databases in the context of edge computing in wireless sensor networks for iot-based smart systems. we believe that the experiments and the analysis of the results presented in the paper complement the performance evaluation of influxdb presented in [20] by showcasing performance results for multiple databases, and they can serve as a reference when selecting an appropriate database for low-cost edge analytics applications. as it turned out, at a smaller scale it might make sense to choose a more traditional relational database like postgresql, which offers the best performance in all but one tested case. however, when features such as data retention policies, time bucketing, and automatic aggregations are crucial for the developed solution, dedicated time-series databases such as timescaledb and influxdb become a better choice.
dbms popularity broken down by database model
influxdb on db-engines ranking
postgresql on db-engines ranking
riak ts on db-engines ranking
sqlite on db-engines ranking
timescaledb on db-engines ranking
timescaledb: sql made scalable for time-series data
survey and comparison of open source time series databases
concurrency control in distributed database systems
how to benchmark iot time-series workloads in a production environment
a comparison of time series databases for storing water quality data
scalability and robustness of time-series databases for cloud-native monitoring of industrial processes
a review on air quality indexing system
postgresql for time-series: 20x higher inserts, 2000x faster deletes, 1.2x-14,000x faster queries
air quality monitoring system and benchmarking
fog computing-based iot for health monitoring system
benchmarking database systems for the requirements of sensor readings
performance evaluation of time series databases based on energy consumption
optimize cloud computations using edge computing
overview of time series storage and processing in a cloud environment
advanced ebusiness transactions for b2b-collaborations

key: cord-292850-6mf4jmqp authors: rosen, claire b.; joffe, steven; kelz, rachel r. title: covid-19 moves medicine into a virtual space: a paradigm shift from touch to talk to establish trust date: 2020-05-20 journal: ann surg doi: 10.1097/sla.0000000000004098 sha: doc_id: 292850 cord_uid: 6mf4jmqp counterbalance the value of an in-person exam. during the pandemic, the added risk of exposure to sars-cov-2 tips the scales further in favor of virtual visits. the doctor-patient relationship hinges on mutual respect and trust. in a world where online dating dominates the singles scene and video chatting with licensed therapists allows patients critical access to mental health care, surgeons should believe that their ability to establish a relationship based on trust does not require physical contact.
shouldn't it be possible for a surgeon to inspire her patients to believe in her ability during virtual visits where she faces the patient, his caregivers, and the electronic health record simultaneously? might it, in fact, be easier to make meaningful connections with patients when one can see them on time, in the convenience of their home or place of work? wouldn't it be more respectful and financially responsible to offer a visit free from wasted travel and waiting-room time? prior to the covid pandemic, with diminishing reimbursements and the advent of the electronic health record, physicians were already spending less face-to-face time with patients in favor of more face-to-screen time. 1 in addition, expensive parking fees or transportation costs, coupled with crowded waiting rooms, were the norm. 2 to make time for medical visits, patients often needed to take time off from work, and some have faced the threat of unemployment in order to meet the demands of their medical needs. 3 telehealth dramatically reduces the time and economic burden of routine medical care 2, 4 and, in times of contagion, eliminates the risk of transmission of infectious diseases in overcrowded waiting rooms.

more screen time, less face time - implications for her design
improving value and access to specialty medical care for families: a pediatric surgery telehealth program
association of paid sick leave with job retention and financial burden among working patients with colorectal cancer
patient preference for time-saving telehealth postoperative visits after routine surgery in an urban setting
development of a telehealth monitoring service after colorectal surgery: a feasibility study
influence of an early recovery telehealth intervention on physical activity and functioning following coronary artery bypass surgery (cabs) among older adults with high disease burden
medicare telemedicine health care

key: cord-028972-1athnjkh authors: etemad, hamid title: managing uncertain consequences of a
global crisis: smes encountering adversities, losses, and new opportunities date: 2020-07-10 journal: j int entrep doi: 10.1007/s10843-020-00279-z sha: doc_id: 28972 cord_uid: 1athnjkh more and faster, they faced uncertainties concerning the length of time that the higher demand would continue to justify the additional investments. the phenomenon of newly found (or lost) opportunities and associated uncertainties occupied most smes. generally, smes depend intensely on their buyers, suppliers, employees, and resource providers without much slack in their optimally tuned value creation system. a fault, disruption, slow-down, strike, or the like anywhere in the system would travel rapidly upstream and downstream within a value creation cycle, with minor differences in its adverse impact on nearly every member. when a crisis strikes somewhere in the value creation stream, all members soon suffer the pains. consider, for example, the impact of national border closures on international supplies and sales. generally, disruptions in logistics, including international closures, could stop ismes' flow of international supplies and, after the depletion of inventories, shipping and international deliveries would be forced to stop, which in turn would expose nearly all other members of the value-net to slow-downs and stoppages, indirectly if not directly, sooner rather than later. in spite of the many advantages of smes relying on collaborative international networks, the covid-19 crisis pointed out that all members will need to devise alternative contingency plans for disruptions that may affect them more severely than otherwise.
the rapidly emerging evidence suggests that capable, far-sighted, and innovative enterprises perceived the slow-downs, or stoppages in some cases, as an opportunity for starting, or increasing, alternative ways of sustaining activities, including on-line and remote activities and involvements, in order to compensate for the shrinkage in their pre-covid demand, while short-sighted or severely resource-constrained smes faced the difficult decision of closure in favor of a "survival or self-preservation" strategy, thus losing expansion opportunities. 3 the silver lining of the covid darkness is that we have collectively learned invaluable lessons that deserve review and re-examination from entrepreneurial and internationalization perspectives in order to prepare for the next unexpected crisis, regardless of its cause, location, magnitude, and timing. in a few words, the world experienced a crisis of massive scale for which it was unprepared, and even after some 6 months there is no effective remedial strategy (or solution) in sight for crises caused by the covid-19 pandemic. the inevitable lesson of the above exposition is that even the most prepared institutions of the society's last resort nearly collapsed. given such societal experiences, the sufferings of industries and enterprises, especially smaller ones, are understandable, and the scholarly challenge before us is what should be researched and learned about, or from, this crisis to avoid the near collapse of smaller enterprises and industries, on which the society depends and which may not easily absorb another crisis of similar, or even smaller, scale. 4 the main intention of this introduction is not to review the emergence and unfolding of a major global crisis that inflicted massive damage on smes in general and on ismes in particular, but to search for pathways out of covid-19's darkness to brighter horizons.
accordingly, the logical questions in need of answers are: 1. were there strategies that could reduce, if not minimize, the adverse impact of the crisis? 2. could (or should) smes or ismes have prepared alternative plans to protect themselves or possibly avoid the crippling impact of the crisis? 3. why were smes affected so badly? 4. are there lessons to be learned to fight the next one, regardless of location, time, and scale? in spite of the dominating context of the ongoing and still unfolding covid-19 crisis, there is a need to learn about both the world's effective and difficult experiences at this point in time, which are beyond the aims and scope of this journal. rather, it aims to analyze and learn about the bright rays of light that can potentially enlighten entrepreneurial and human innovative ingenuity to find pathways from the darker to the brighter side of this global and non-discriminatory crisis, within the scope of international entrepreneurship. naturally, in seeking those pathways, one is expected to encounter barriers, obstacles, and institutional rigidities that could still pose nearly insurmountable challenges, similar to those that the society, and especially smes and ismes, have experienced in the past, which were partially due to endemic rigidities (aparicio et al. 2016; north 1990). on the positive side of the ledger, many of the above adverse factors are among the host of smaller crisis-like challenges that entrepreneurial enterprises face regularly and manage to bridge across to realize fertile and promising opportunities. learning how such bridges are built and maintained is not only entrepreneurial but may also help the causes of humanity by showing the way out of this, and other similar, crises. this would be a noble objective if it could be accomplished, which should motivate many to take up the corresponding challenges in spite of the low chances of success. we will return to this topic at the end of this article.
A cautionary note: it is important to note that the next four articles appearing in this issue were neither invited for it nor given any special consideration. They are included because they offer concepts, contents, contexts, and issues relevant to the overriding theme of this issue and may assist SMEs trying to manage a crisis facing them, as well as scholars interested in investigating related issues. Without exception, they were subjected to the journal's rigorous and routine double-blind peer-review process prior to acceptance. They were then published through the journal website's "Online First" option while awaiting placement in an issue with a coherent theme drawing on the research of each of the selected articles. The highlights of the four articles that follow are presented in the next section of this article. They offer promising arguments and plausible pathways, based on scholarly research, relevant to an emerging or unfolding crisis. Structurally, this article comprises five parts. A developmental discussion of uncertainties and their types, causes, and remedies, as well as enabling topics relating to crisis-like challenges, follows this brief introduction in "Developmental Arguments." A brief highlight of each of the four articles appearing in this issue, and their interrelationships, is presented in "The Summary Highlight of Articles in This Issue." "Discussions" provides discussions related to the overriding theme of this article. Conclusions and implications for further research, management, and conducive public policy appear in "Conclusion and Implications." The extraordinary socio-economic pains and the added stress of the COVID-19 crisis exposed entrepreneurs, SMEs, larger enterprises, and even national governments to unprecedented conditions and issues.
As stated earlier, there is a need to understand how and why COVID-19 became such a major world crisis and what factors contributed to expanding and amplifying its impact in the early quarters of 2020. Although the primary aim of this issue is not to review the crippling impact of the crisis, as that is done elsewhere (e.g., Etemad 2020; Surico and Galeotti 2020), some of its influential factors emerged and stood out in the early days of 2020, affecting international business and entrepreneurial institutions from the very beginning; and yet it took some time to enact defensive private and public actions against it. Although COVID-19 was not the first worldwide health crisis, many enterprises, one after another, were defenselessly affected by it, even after a few months. While we have learned about some of the factors contributing to this expansive crisis, we are still in the dark as to why the broader community, and even resourceful enterprises, failed to foresee the emergence and unfolding of such a crisis (Surico and Galeotti 2020) or to prepare potential defenses against it. The crisis's high magnitude and broad scope left nearly everyone worrying and learning about its impact first-hand as it unfolded; but it appears that top management teams (TMTs) had not fully learned from the past or taken timely precautions against the emergence of potential crises, including this one. 5 However, the literature on managing major past crises, mainly in large enterprises, has pointed to a few known forces, potential factors, and issues that contributed to those crises; they are briefly reviewed below. Uncertainties. As similarly broad, worldwide, crisis-like challenges involving nearly all institutions had not been experienced in the recent past, enterprises, and especially smaller firms, found themselves unprepared and encountered high levels of discomfort and taxing uncertainty in their daily lives.
Generally, such effects are more disabling when enterprises are in the earlier stages of their life cycle, when they lack the rich experience to guide them forward and do not have access to the capabilities and resources necessary to support them through (Etemad 2018a, b, 2019). Entrepreneurial enterprises that have already internationalized, or aspire to internationalize, encounter the customary risks and uncertainties of "foreignness" (e.g., Hymer 1967; Zaheer 1995; Zaheer and Mosakowski 1997), lack of adequate "experiential knowledge" (e.g., Eriksson et al. 1997; Johanson and Vahlne 1977), "outsidership" (Johanson and Vahlne 2009), and the liability of "newness" (Stinchcombe 1965). The COVID crisis added risks and uncertainties arising from national lockdowns, unprecedented regulatory restrictions, closures of international borders not experienced since the Second World War, and the near collapse of international supply chains and logistics, among many others. Most of these became effective without much prior notice, and each alone could push smaller enterprises to the edge of demise through the consequent shortages, operational dysfunctions, closures, and potential bankruptcies. Survival during the height of the COVID-19 crisis required rapid strategic adaptations, mostly technological, and the use of alternative online facilities, capabilities, and novel strategies to reach stakeholders (customers, suppliers, employees, investors, and the like) in order to quickly compensate, if not substitute, for previous arrangements that had become dysfunctional. Smaller firms that had prepared alternative contingency plans, supported with reserved dynamic capabilities and resources (Eisenhardt and Martin 2000; Jantunen et al.
2005), viewed the dysfunctionality of rivals as opening opportunities and rapidly managed a successful transition to exploit them in a timely fashion, either through their own or through others' established, functional "platform-type operations" (e.g., Amazon, Alibaba, Shopify, Spotify, and many similar multi-sided platforms). They were viewed by others as exceptional, progressive, even somewhat "disruptive" (Utterback and Acee 2005) and, in some cases, creatively destructive to others (Chiles et al. 2007), as internationalized firms re-strategized and refocused on their domestic markets in reaction to border closures and international logistics dysfunctions. 6 However, such adaptations, whether deploying innovative Industry 4.0 technologies (e.g., additive manufacturing, artificial intelligence, the internet of things (IoT), robotics, 3-D printing) (Hannibal 2020, forthcoming) or collaborating with established online or offline establishments, faced their own unexpected operational difficulties nationally, while their counterparts experienced them internationally, including "cross-cultural communication and misunderstandings" (Noguera et al. 2013; Mitchell et al. 2000; McGrath et al. 1992), national and international logistics problems, and supply chain disruptions, among many others, mostly attributable to COVID-related restrictions. Among such unexpected international factors were forced rapid changes in consumer behavior and national preferences in exporting countries (verging on implicit discriminatory practices 7), worsening diplomatic relations, rising international disputes, regulatory restrictions, and a host of other well-documented causes, exposing firms to unforeseen risks and uncertainties not experienced for decades.
Therefore, the concepts of risk and uncertainty, and the ways of mitigating, or getting over, true or perceived crises, deserve discussion, as they are pertinent to resolving the crisis-like challenges facing smaller firms, regardless of their particular timing and situation. Similarly, the factors contributing to, or mitigating, the experienced levels of ex-ante unknowns, or "unknowables" (Huang and Pearce 2015), that feed uncertainty merit equal consideration.

Footnote 6: Reportedly, rapidly growing internationalized medium-sized enterprises quickly reconfigured and redeployed parts of their facilities to fabricate and provide goods locally, reducing shortages in products previously imported from international markets. For example, Canada Goose, a manufacturer of luxury winter clothing, began making personal protective garments for hospital staff (see the article "Canada Goose Holdings Inc. is moving to increase its domestic production of personal protective equipment for health-care workers across Canada" at https://globalnews.ca/news/6798844/canada-goose-production-medicalppe-coronavirus/, visited on April 19, 2020). Similarly, many other companies, including CCM (sporting equipment) and Yoga Jeans, began producing protective visors, glasses, and gowns for essential workers and hospital staff (see the article "Quebec companies answer the call to provide protective equipment" at https://montrealgazette.com/business/local-business/masks-and-ppes..., visited on June 8, 2020). For all of these companies, sales required very different distribution channels, such as pharmacies and hospital supply companies, far removed from clothing and sporting equipment.

Footnote 7: The US-based 3M was ordered not to ship N95 face masks to Canada in March-April 2020. Similarly, some Chinese suppliers refused to ship previously placed and paid-for orders.
Nearly all articles appearing in this issue relate to such contributing factors and offer different bridging pathways, if not causeways, over the sea of scholarly challenges faced by international entrepreneurs in their quest for success in entrepreneurial internationalization. In the context of the ongoing crisis, the pertinent discussion of uncertainty is extensive (Bylund and McCaffrey 2017; Coff 1999; Dimov 2018; Dow 2012; Huff et al. 2016; Liesch et al. 2014; Matanda and Freeman 2009; McKelvie et al. 2011; McMullen and Shepherd 2006) and ranges from one classic view to another: from Akerlofian cross-sectional uncertainty (Akerlof 1970) to Knightian longitudinal uncertainty (Knight 1921). At the root of both is the absence of objective, or reliable, information and knowledge, with very different density distributions. Akerlof's cross-sectional uncertainty relates to the relatively short term and to discrepancies in information and knowledge (Erikson and Korsgaard 2016) between or among agents, favoring those who have more of them and exposing those who have less, or none. Consider, for example, the case of buying a used car (or second-hand equipment). Generally, the seller has more reliable, if not near perfect, knowledge about the condition of his offering, in terms of its age, performance, repairs, faults, and the like, than a potential buyer, who has to assume, predict, or perceive the offer's condition without reliable information in order to justify his decision to buy the car (or the equipment) or not. The potential buyer may ask for more detailed information about the offer's condition, or seek assurances (or even guarantees) against its dysfunction, in deciding whether to pursue the transaction when he is in doubt.
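The used-car example above is the setting of Akerlof's (1970) "market for lemons," whose unraveling logic can be sketched numerically. The following is a minimal illustration with hypothetical numbers (uniform quality, a fixed buyer premium), not drawn from the article, showing how asymmetric information alone can collapse a market:

```python
# Illustrative sketch (hypothetical parameters): Akerlof's "market for lemons"
# shows how cross-sectional (asymmetric-information) uncertainty can unravel
# a market. Sellers know the quality of their good; buyers know only the
# distribution of qualities on offer.

def equilibrium_price(max_quality=2000.0, buyer_premium=1.5, rounds=50):
    """Iterate the buyer's willingness to pay under adverse selection.

    Sellers value their good at its quality q (uniform on [0, max_quality])
    and sell only if price >= q. Buyers value quality at buyer_premium * q
    but can observe only the average quality of goods actually offered.
    """
    price = max_quality  # the buyer starts optimistic
    for _ in range(rounds):
        avg_quality_offered = price / 2.0        # only sellers with q <= price sell
        price = buyer_premium * avg_quality_offered  # buyer pays expected value only
    return price

# With buyer_premium = 1.5 < 2, the price spirals toward zero: trade collapses
# even though, under full information, every sale would have created value.
print(equilibrium_price())
```

The spiral stops only when the buyer's premium is large enough (here, at least 2) to offset the adverse selection; otherwise each price cut drives out the best remaining sellers, exactly the asymmetry-driven failure the paragraph above describes.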
The noteworthy point is that the objective information (or knowledge) is available, but the buyer cannot access it to assess the offer objectively; thus, cross-sectional uncertainty is due to the asymmetric state of information and knowledge among the parties to a transaction (Townsend et al. 2018), and it clears soon after the transaction is consummated. In Williamson's transaction cost approach (Williamson 1981), such discrepancies are viewed as transaction frictions between the parties, where at least one party acts opportunistically to maximize self-interest at a cost to the other(s), while the other party(ies) is incapable of securing the objective information needed to form the knowledge required for a prudent decision. Within the uncertain state of the COVID-19 crisis, both of the above phenomena (asymmetry and opportunistic behavior) were clearly observable and contributed to creating subsidiary crises of different magnitudes: relatively larger ones for smaller enterprises and smaller ones for larger companies, some of which were unduly amplified by the lack of objective information and by opportunistic behavior at the time. Retrospectively, we collectively learned, for example, that there was no worldwide shortage of health-care equipment and supplies; rather, major suppliers, or intermediaries, withheld their supplies and did not ship orders on time as they usually would have, creating the perception of acute shortages and forcing prices higher. They knew well that buyers were incapable of assessing the true availability of inventoried supplies and so could not demand lower prices, especially when richer buyers (e.g., national governments) were willing to bid up prices given the urgency of their situations. This is not far from a discriminating monopolist taking advantage of its uninformed buyers.
A similar situation arises when a small company fails to plan for contingencies that would cover emerging uncertainties, ordering just enough of its regularly needed supplies (e.g., the minimum order quantity) to minimize the short-term cost of holding inventory. The longer-term overall cost of over-ordering to build contingency supplies is the cumulative cost of holding excess inventory over time, which can be viewed as an insurance premium for avoiding supply shortages, or stock-outs; the true cost of such imprudent internal strategies becomes much higher when, for example, potential customers switch to other available brands, or when there are uncertain and adverse external conditions, including artificially created shortages, as discussed earlier. Generally, the top management of resource-constrained smaller companies aims to ensure the efficient use of their resources, including supplies, and to preserve adequate cash flow in order to avoid the short-term uncertainty of insolvency, akin to the Akerlofian type (Akerlof 1970). In contrast, the absence of reliable information (or assurances) about steady supplies may contribute to, if not cause, a change in potential buyers' behavior and further contribute to suppliers' over-estimation of buyers' demand trajectory over a longer period, for fear of facing acute adverse conditions such as those discussed in the previous case. However, such internal (e.g., management oversights) or external (e.g., suppliers withholding shipments, or changes in consumer behavior) causes, rooted in the absence of required information, reliable forecasts or estimates, and perfect knowledge over time, begin to pertain more to Knightian uncertainties than to Akerlofian ones (i.e., uncertainties across transacting agents, which are comparatively shorter-term and more frequently encountered). The impact of resources and capabilities. Naturally, a firm's level of resources (Wernerfelt 1984, 1995; Barney et al.
1991) or institutional inadequacies and restrictions (Bruton et al. 2010; North 1990; Kostova 1997; Yousafzai et al. 2015) may play an influential role in mitigating encountered uncertainties. Consider, for example, the difference in ability between smaller resource-constrained enterprises, in continual need of minimizing fixed costs, and larger, richer institutions (such as national governments), whose priority is to enforce performance contracts rather than to minimize costs. The richer resources of larger institutions pose a more credible threat of later suing suppliers for the potential damages of higher costs or delayed shipments than those of smaller firms, thus reducing the temptation for opportunistic behavior (Williamson 2000) over time. As transaction cost theory suggests (Williamson 1981), the ever-presence of such threats may dissuade suppliers from delaying and withholding shipments in the hope of higher revenues. Furthermore, opportunists may even be exposed when lower-cost suppliers recognize the opportunity and respond with lower prices. 8 Time, timing, and longer-term uncertainties. The above demonstrative discussion points to the critical role of the timely, planned acquisition of capabilities and resources before emergencies, or shortages, become acute. The time dimension of this discussion relates to Knight's (1921) longitudinal uncertainties. The future is inherently uncertain; but one's needs and their corresponding transaction costs are more predictable at the time, as, for example, transactions can be consummated at prevailing prices. Delaying a transaction in the hope of buying at lower cost exposes it to longitudinal uncertainty, as the ex-post costs and true prices are revealed only in the due course of time. Similarly, the longer-term costs of preparedness and security can minimize the short-term costs to individual employees and other corporate persons.
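The "insurance premium" framing of contingency inventory discussed above can be made concrete with a minimal numerical sketch. The figures below (holding cost, disruption probability, stock-out loss) are hypothetical and serve only to show the expected-cost comparison, not any result from the article:

```python
# Illustrative sketch (hypothetical figures): the certain cost of carrying
# contingency stock weighed against the expected loss from a stock-out
# during a supply disruption.

def expected_annual_cost(buffer_units, holding_cost_per_unit=4.0,
                         disruption_prob=0.10, stockout_loss=5000.0,
                         units_to_survive_disruption=100):
    """Expected cost = certain holding cost + expected stock-out loss."""
    covered = buffer_units >= units_to_survive_disruption
    expected_loss = 0.0 if covered else disruption_prob * stockout_loss
    return buffer_units * holding_cost_per_unit + expected_loss

lean = expected_annual_cost(buffer_units=0)       # minimum-order strategy
insured = expected_annual_cost(buffer_units=100)  # contingency-stock strategy
print(lean, insured)  # prints: 500.0 400.0
```

Under these assumed numbers the "premium" (400 in holding cost) is cheaper than the expected stock-out loss (500), so the contingency stock pays for itself; with a lower disruption probability or loss, the lean strategy would win, which is exactly the short-term versus longer-term trade-off described in the text.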
Accordingly, the intensity of a crisis, and its cumulative costs, may force national and local authorities to bid up prices and absorb much higher short-term costs in order to ensure the acquisition of essential supplies and avoid the difficult-to-predict costs of longitudinal uncertainties. 9 For smaller enterprises, however, the state of their resources and the extent of their prior experience may influence their decisions at the time, as discussed below. Past experience and the firm's stage of life cycle. Generally, smaller and younger companies are short of excess resources and lack the rich experience that would give them the longer, far-sighted outlook needed to avoid longitudinal uncertainties of the Knightian type by, for example, keeping a level of contingency inventory for difficult conditions and rainy days. However, even smaller start-ups with experienced serial entrepreneurs at the helm can benefit from the past experience of their founding entrepreneur(s) through what Etemad calls "the carry-over of paternal heritage" (Etemad 2018a), enabling them to plan and provide for their necessary resources. The state of competition. At one extreme, a monopolist can control supplies and create artificial shortages to force prices up even in normal conditions; under distress and unusual conditions, customers may bid up prices to gain priority access to the available supplies. At the other extreme, perfect competition, many suppliers compete to attract demand, and prices remain relatively competitive due to highly elastic demand. Practically, however, the state of global competition is likely to be closer to a combination of regional oligopolistic (or monopolistic) competition and globally competitive conditions, where suppliers perceive themselves to have certain monopolistic powers to manipulate prices (e.g., due to their brand equity, location, or product quality), while needing to compete in a nearly hyper-competitive state (Chen et al.
2010) with other competitors who provide similar offerings. Knowledge of the competitive and institutional structures (Jepperson 1991; Yousafzai et al. 2015; Welter and Smallbone 2011) is therefore essential, especially for SMEs deciding as optimally as possible, and it depends on both the buyers' and the suppliers' state of information, communication, and knowledge, discussed further below. The state of communication and information. The advanced state of a firm's information and communication technology (ICT) is highly likely to enable it to decide prudently. As discussed earlier, uncertainty depends on one's state of reliable information, which affects the achievement of optimality, which in turn depends on the state of information at the time. In short, a small firm's potential exposure to cross-sectional and longitudinal risks and uncertainties is also likely to depend on information about a combination of influential factors, some of which are discussed above; prominent among them is reliable information about the firm's operating context at the time and its probable trajectory in the near future. Furthermore, nearly all of the emerging advances in management and production, including additive technologies, depend heavily on information (Hannibal 2020, forthcoming). Finally, the next section discusses the above elements within the articles that follow.

Footnote 9: Similar arguments apply to national preparedness and national security over time, which shield individual and corporate citizens from bearing high short-term or long-term costs; the national cost per capita may pale relative to the immeasurable costs of human mortality borne by the deceased and their families, massive unemployment, or the high costs related to shortages in major crises, such as the COVID-19 pandemic.
This part consists of summary highlights of the contributions of the four double-blind, peer-reviewed articles, each with material relevant to an emerging or unfolding crisis. The second article in this issue is entitled "Muddling through Akerlofian and Knightian uncertainty: The role of socio-behavioral integration, positive affective tone, and polychronicity" and is co-authored by Daniel Leunbach, Truls Erikson, and Max Rapp-Ricciard. As discussed earlier, uncertainty and risk-taking propensity have long been recognized as integral parts of general entrepreneurship (e.g., Gartner and Liao 2012), and this article studies them as they relate to individual entrepreneurs' affective socio-behavioral characteristics and to the way entrepreneurs function, including how they perceive their situation, manage, progress, and adjust their outlook within environments that expose them to perceived risks and uncertainties. From an entrepreneurial perspective, entrepreneurial decisions depend on the combined interaction of time and the flow of information, or lack thereof, forming a knowledge base. When entrepreneurs need to make decisions without perfect cognition (based on their information and knowledge about the state of affairs at the time) within a relatively short time period, they and their decisions are exposed to an uncertain state of the world. Such uncertainty within a relatively short time span is termed cross-sectional uncertainty. Generally, it is difficult to acquire nearly perfect information, due to the shortage of time or the cost of searching for the information. George Akerlof (1970) suggested that such perceived uncertainty is due not so much to a lack of pertinent information as to the asymmetric distribution of information (Brown 2016) and the corresponding knowledge among agents, i.e., between those who hold more potent information and those who do not but need it.
Consider a typical entrepreneur who needs to acquire a good or service from a supplier or service provider who has nearly perfect information about, or knowledge of, the good or service he offers but, in self-interest, does not fully disclose it to the entrepreneur; this gives rise to the asymmetric distribution of information (or knowledge) between the supplier and the potential buyer. This is also generally termed "Akerlofian uncertainty." Time is an important factor in entrepreneurial decisions, and its influence is as significant as the state of information. For example, the urgency of a decision deprives the entrepreneur of sufficient time to conduct informative research to enrich his state of information, and forces him to decide earlier rather than later, with some discomfort and reservation due to insufficient information. 10 With more time to acquire sufficient information to form a supportive knowledge base, he can comfortably decide whether or not to consummate a particular transaction. The time cost of switching to another supplier or of conducting research may increase transaction costs and expose the transaction to longitudinal uncertainties as well. In contrast to the asymmetric distribution of information across individuals in the short term (Brown 2016), the time required to acquire, or develop, the information about the relevant state of affairs needed to form the knowledge for portraying the future, or the near future, gives rise to longitudinal uncertainty, as suggested by Frank Knight's 1921 work (Knight 1921); this is viewed as "Knightian uncertainty." Generally, the future is uncertain, and it is not prudent to assume that it will be a linear extension of the current state of affairs or, alternatively, that it will be perfectly predictable. Again, both information and time are influential factors, as more pertinent information is revealed over time.
Entrepreneurial start-ups and new ventures, for example, suffer from shortages of both time and information and thus offer a fertile context for exploring not only the interaction of time and information but also how new venture teams (NVTs) perceive the gravity of the risks and uncertainties facing them. Therefore, exploring uncertainty within new venture teams, especially those based on new science and technology, which usually encounter higher uncertainties and commercialization risks, 11 enables a deeper understanding of how uncertainties are perceived and managed by such teams. As discussed at some length in the paper, the authors' research methodology enabled them to observe the impact of NVTs' socio-behavioral and psychological characteristics and to explore their reactions and responses to both the shorter- and longer-term uncertainties facing them. In the context of a major crisis, SMEs' top management teams (TMTs) in most cases suffer from both inadequate information and a shortage of time, neither of which they can control or extend into the future. A major crisis brings complex uncertainties with unclear prospects and without prior warning: What will have an immediate adverse impact? Will it increase or subside? How long will it last? And what will be the magnitude of the accumulated damage when the crisis is nearly over? These are among many other questions that have no certain answers at the time. Similar to young start-ups, whose founder-entrepreneurs suffer from shortages of time, knowledge (or reliable information), and resources, in addition to uncertainties associated with consumer behavior, market reactions, and regulatory restrictions, SMEs, and especially iSMEs, suffer from complex uncertainties for which they were not prepared, and they have neither sufficient time nor the resources to deal with an unfolding crisis satisfactorily.
Furthermore, their normal sources of help and advice, including their social networks and support agencies such as lenders, service providers, and suppliers, would be facing even larger problems of their own and would be incapable of assisting them in a timely fashion; this calls for adequate alternative contingency plans for rainy days, as discussed earlier and reviewed further in a later section. This discussion points to the need to examine the potential role of the environmental context in increasing uncertainties or mitigating them, which the next article takes up. The third article of this issue examines the context within which entrepreneurial decisions are made. It is entitled "Home country institutional context and entrepreneurial internationalization: The significance of human capital attributes" and is co-authored by the team of Vahid Jafari-Sadeghi, Jean-Marie Nkongolo-Bakenda, Léo-Paul Dana, Robert B. Anderson, and Paolo Pietro Biancone. Nearly all decisions are embedded in a context, and the context of most international entrepreneurship decisions is perceived as more complex than that of the home market, as extensively discussed in the internationalization literature. Internationalization involves at least two contexts: one characterized by the formal institutional structures and informal socio-cultural values at home (Hofstede 1983, 2001, 2011; Hofstede et al. 2010), both helpful and restrictive, and the other in the host country environment, where each country's institutional structures differ from the others' (Chen et al. 2017; Li and Zahra 2012).
Even in the European Union's (EU) single market, where the EU has increasingly harmonized country-wide regulatory and institutional requirements ever since 1993, 12 different local socio-cultural and behavioral forces influence decisions differently, especially those affecting consumer behavior and market-sensitive aspects, which are more deeply embedded in a country's institutional structures than others, encouraging or restricting certain entrepreneurial actions. Generally, international entrepreneurs and their corresponding entrepreneurial actions are deeply embedded in these more complex contexts (Barrett, Jones, and McEvoy; Granovetter 1985; Wang and Altinay 2012; Yousafzai et al. 2015), and highly internationalized SMEs (iSMEs), and even larger firms, need to respond sensitively to the various local (i.e., contextual) facets and adapt their practices accordingly (Welter 2011). This in turn adds incremental complexity and exposes a firm's early entrepreneurial, and especially marketing, activities to a higher degree of risk and cross-sectional uncertainty than at home, which is more familiar than elsewhere. However, firms learn from their host environment, and also from their competitors, how to mitigate their risks and remove the information (and knowledge) asymmetries over time in order to operate successfully after their early days in the host country environment. Naturally, the entrepreneurial activities of innovative start-ups face higher risks and uncertainties at the outset, as discussed earlier.
Although the cross-sectional methodology of this article's research across 28 European country-environments, using structural equation modeling (SEM), could not examine the specific impact of various environmental characteristics on entrepreneurial orientation and practice at the local level in each country context over time, the overall indicators pointed to contextual influences strongly affecting various facets of internationalization and international entrepreneurship. It is noteworthy that the entrepreneurial intentions and orientations of "non-entrepreneurs," which portray the firms' context, had a significant positive influence on the creation of entrepreneurial businesses and their internationalization. In summary, the findings of this research strongly support the notion that the true, or perceived, state of firms' environment influences their strategic management of regular affairs as well as their management of an emerging or unfolding crisis, regardless of its magnitude and timing. The fourth article in this issue complements the previous one through a deeper examination of institutional impacts from a women entrepreneurs' perspective. It is entitled "The neglected role of formal and informal institutions in women's entrepreneurship: A multi-level analysis" and is co-authored by Daniela Gimenez-Jimenez, Andrea Calabrò, and David Urbano. This article draws on, and extends, the impact of institutional context, discussed above, to include what the authors term informal institutions' impact on women entrepreneurs. As discussed earlier in the context of European countries, the socio-cultural and behavioral aspects of societies vary and influence different entrepreneurship initiatives differently. In contrast to the tangible influences and effects of formal institutions, the socio-cultural values of a society remain nearly invisible, yet quite influential.
What the authors call neglected "informal institutions" are widely portrayed as a society's "software" by cultural anthropologists such as Geert Hofstede, among others (Hofstede 1983, 2001, 2011; Hofstede et al. 2010). In contrast to the "hardware," which is structural and tangible, the "software" remains hidden, if not intangible, neglected, and ignored, yet it functions consistent with its socio-cultural values and daily behavioral routines, which act as design parameters woven into the software's programs, controlling social functions quietly. The article's multi-level research methodology, analyzing the entrepreneurial experience of more than 27,000 women in 20 countries, suggests that both formal and informal societal institutions impact entrepreneurship, and impact women entrepreneurs especially significantly and profoundly, yet they have remained "neglected." In the context of a major crisis facing society in general, and entrepreneurial SMEs in particular, the question of how the formal and informal institutions of a society assist or hamper effective crisis management, especially by women executives, assumes high importance. Casual observation of the conditions imposed by the COVID-19 crisis over the past 6 months or so suggests that both the formal and informal institutions of the affected environments imposed higher expectations, if not more responsibilities, on women than before, as their previous family setting was transformed into home and office at the same time; this adversely affected women's time, effort, and attention in effectively managing their firm's crisis while also attending to their family as they did previously. Assuming that crisis management requires more intensive managerial attention and effort than normal times, the important question for women executives is: how should the additional efforts required of busier women executives be assisted?
and if they cannot be, who should bear the additional costs and the consequent damages to both the women's families and their firms? specifically, what should be the uncodified, but understood, societal expectations of women executives? are they expected to sacrifice their family's wellbeing or not concentrate fully on managing their enterprise's crisis effectively? naturally, the preliminary answer lies in what is consistent with the society's informal socio-cultural value systems as well as those formally codified in the society's laws, regulations, and broadly accepted behaviors. this discussion provides a socio-cultural bridge to the next article. the fifth article of this issue is entitled "market orientation and strategic decisions on immigrant and ethnic small firms" and is co-authored by eduardo picanço cruz, roberto pessoa de queiroz falcão, and rafael cuba mancebo. as the title suggests, this research is about entrepreneurs facing a new, and possibly different, environmental context than the familiar one of their previous home, which exposes them to the fear, if not the uncertainty, of unknowns, including hidden and intangible socio-cultural value systems. they need to decide about their overall strategy, including marketing orientation, in their newly adopted environment. immigrant entrepreneurs face the challenges of belonging to two environments: one at home, which they left behind, and the other in their unfamiliar new home (the host country), in which they aim to succeed (etemad 2018b). when there are significant differences between the two, they face a minor crisis of uncertainty over whether their innate home strategic orientation or that of their host can serve them best. either of the two strategic choices exposes them to costs and benefits that are uncertain and not clear at the time.
naturally, their familiarity with their previous home's socio-cultural environment, within which they feel comfortable and may need nearly no new learning and adaptation, pushes them to operate in an environment similar to home, giving them certain advantages and possibly lower risks and uncertainties. this orientation attracts them towards their ethnic and immigrant community, or enclave, based primarily on the perception that their ethnic communities, enclaves, and market segments in their adopted home still resemble their home environment's context, which in turn suggests that they can capitalize on them by relying on their common ethnic social capital (davidsson and honig 2003), using their home language, culture, and routine practices with minimal cognitive and psychic pain of adapting to the new context. however, that perception, or assumption, may not be valid or functional where the society's socio-cultural values encourage rapid adaptation and change so that immigrants become like other native citizens. although a market orientation of concentrating on their ethnic community in their adopted country has its advantages, including a lower perceived short-term uncertainty (e.g., of the akerlofian type), it may not work, or may prove restrictive in the longer term, because, for example, the community may be small, decreasing in size, or gradually adapting to the host country's prevailing socio-cultural values, thus posing an uncertainty of the knightian type, where the future state is difficult to predict. in contrast, adopting a strategic and market orientation towards attractive market segments reflecting their new home's socio-cultural values and routine practices may expose the young entrepreneurial firm to the other well-documented risks and uncertainties, similar to the difficulties encountered by a firm starting a new operation in a foreign country (hymer 1967; johanson and vahlne 1977, 2009; zaheer 1995; zaheer and mosakowski 1997, among many others).
this strategy may also force the nascent firm to compete with entrenched competition, of both immigrant and indigenous origins, unless it can offer innovative, or unique, products (or services), similar to other native innovative start-ups. the noteworthy point, as discussed earlier (in the "introductions" and "developmental arguments" sections), is that the state of the firm's resources and the extent of the entrepreneur's (or the firm's top management team's) experience, information, and knowledge may make the difference between ultimate success and mere survival in either of the above strategies. the rich multi-method and longitudinal research methodology of this article, carried out over a 6-year period and involving interviews, ethnographic observation, and regular data collection among ethnic and immigrant entrepreneurs in brazilian enclaves world-wide, enabled the authors to offer a conceptual framework and complementary insights based on their findings and experiential knowledge. in summary, the research supporting the articles in this part is both consistent with, and supportive of, the arguments presented in "introductions" and "developmental arguments." they will also serve as a basis for the arguments in the following "discussions." as stated in the "introductions" section, this issue's release coincides with the world in the midst of the coronavirus pandemic. initially, and on the face of it, the pandemic was perceived as a health-care problem in china, followed by other countries in east and south-east asia; but it soon turned into a world-wide crisis reaching far beyond health care, quickly affecting nearly all aspects of life in other countries before inflicting them with unfolding crises of their own.
generally, health-care institutions are viewed as society's institutions of last resort and are expected to deal with the potential crises of others, rather than becoming the epicenters of a crisis, posing challenges to others and to their respective societies as a whole. the health-care system in publicly financed countries is given resources; is held in high regard because of its highly capable human resources; is assumed to be well managed and ready to resolve health-care-related problems, if not crises; and is consequently expected to effectively solve all health-related challenges as they arise. however, and regardless of their orientation-privately held or publicly supported-health-care systems traveled to the brink of breakdown and collapse, although they had dealt with similar, but smaller, outbreaks of regional and seasonal flu or other epidemics previously-e.g., the hiv/aids outbreak of the late 1970s, now endemic world-wide; the sars epidemic of 2003; and the h1n1 flu of 2009 that became a pandemic, among others, in near recent memory-but the covid-19 pandemic overwhelmed them. retrospectively, the health-care institutions, and the system as a whole, were not the only sector experiencing high levels of systematic fatigue, stress, and strains nearing breakdown, suggesting that some countries were not prepared to deal with a major crisis. naturally, institutions less prominent than the country's health-care system, and subsequently the government alike, were not spared; many ad-hoc experimental procedures had to be used, and valuable lessons had to be learned in a hurry, in various institutions and on many occasions, in the hope of saving lives and livelihoods in a precarious moment. such rapidly developing phenomena, seemingly beyond control initially, influenced the overall theme of this issue, although the already accepted articles waiting to be placed in a regular publication were not written on the topic of crisis management.
given the gravity of the covid-19 pandemic pushing many institutions into crises of survival, this issue adopted the overriding thematic topic of a crisis-management perspective to enable a richer discussion of the different components of crisis management, with a focus on smes and ismes, based on the specific research of each of the articles accepted through the journal's rigorous double-blind review process. expectedly, the resource-constrained small- and medium-sized enterprises (smes) and their internationalized counterparts (ismes) suffered deeply due to the lockdown of their customers, employees, and service providers. similarly, the sudden stoppage of major national and international economic activities in many advanced countries, including those on the european and american continents, paralyzed them initially, as the early impacts were totally unexpected. the health-care system was not the only sector experiencing dangerous levels of stress and strains. the entertainment, hotel and lodging industries, performing and creative arts, hospitality and restaurant industries, and tourism, with their complementary goods and service providers, mostly smaller enterprises, among many other smes, were caught off-guard and suffered deeply from lack of demand due to the rapid economic slow-down, fears of infection, and enforcement of lockdowns in many affected countries. similarly, integrated manufacturing systems, such as the automobile industry, where parts had to arrive from different international sources on time, if not just in time, came to a halt because of the near collapse of international supply chains, in addition to the national protectionism of the past showing its ugly face after some time. such conditions had not been seen for some seven decades, since the second world war triggered the multi-national conferences of 1941 to 1947, including bretton woods, that created the world's enduring institutions such as gatt (later replaced by the wto), the imf, and the world bank.
at the socio-cultural and economic levels, the imposed self-isolation and lockdown of cities and communities, to avoid further transmission of the coronavirus to unsuspecting others, entailed immobility, and the imposition of social distancing disrupted all normal routine behaviors. many industries could no longer operate, as safe social distances could not be provided. international, national, and even regional travel was shut down as national borders were nearly closed. as a direct result, small- and medium-sized enterprises in the affected industries, which depend on others intensively, suffered a massive double whammy: their demand had collapsed, and their supplies had stopped. some were ordered closed, and others had no reason to operate due to the immobility of employees, customers, and clients alike, as well as severe shortages of parts and supplies. consequently, they had to shut down to minimize unproductive fixed costs. in short, the world has been, and in some cases still is, struggling with the covid-19 crisis at the time of this writing in june 2020. only 6 months earlier, in december 2019, not many people imagined the emergence of the crisis in their locale, let alone a massive global pandemic crippling community after community, which revealed deep, unattended socio-cultural, economic, and institutional faults. collectively, they pointed to the unpreparedness of many unsuspecting productive enterprises and institutions alike. in contrast, more far-sighted and conservative institutions with alternative contingency plans, based on their previous relatively minor crisis-like experiences, such as transportation and labor strikes, not comparable to covid-19, activated their relevant contingency plans. consider, for example, that the alternative of online marketing and sales could compensate for the immobility of customers and in-person sales transactions.
naturally, enterprises with on-line capabilities either gained market share or suffered less severely. in short, the overriding lesson of this crisis, as discussed at some length in "developmental arguments" and in "the summary highlight of articles in this issue" and in response to the queries raised in "introductions", is that institutional under- and unpreparedness, regardless of level, location, and size, inflicted far higher harms than the incremental costs of carrying alternative contingency plans for rainy days, as evidenced by the considerable success of on-line expansion and the quick reconfiguration of flexible manufacturing to accommodate the unexpected oddities of the unfolding crisis. aside from the global scale of the covid crisis, similar, if not more severe, crises had happened in different locations, some repeatedly, and humanity had suffered and should have learned. consider, for example, the torrential rains and subsequent flooding and mudslides in the temperate regions; massive snowfalls in the northern hemisphere shutting down activities for days, if not weeks, at a time; massive earthquakes destroying residential and office buildings without warning (e.g., in christchurch, new zealand, and the ancient trading city of bam in southeastern iran); the heavy ice storm in eastern canada destroying electrical transmission lines and shutting down cities for many days; and the massive and widespread indian ocean tsunami destroying coastal areas in about 19 countries with a quarter-million casualties, among many others; all should have served as wake-up calls. while many of the past stricken areas still remain exposed and vulnerable to recurrence, reinforcement and warning systems are in place for only a minority of them. for example, the earthquake detection systems in the deep seas neighboring tsunami-prone areas have provided ample warning to the vulnerable regions and have avoided major damage.
conversely, the massive tsunami of eastern japan that destroyed the fukushima daiichi nuclear power plant, in addition to large financial, property, and human losses as well as untold missing persons, could have been avoided. in an earthquake-prone country such as japan, the protective barrier walls should have been designed without fault to protect the fukushima nuclear power plant and avoid the release of toxic nuclear emissions. the above discussions point to a few noteworthy lessons and implications, as follows:
1. the possibility of recurrence, possibly with higher striking probabilities than before, is not out of the realm of reality, thus calling for planned precautions and, if those are not adequate, preparations for preparedness.
2. the organs, institutions, and systems weakened by a crisis, regardless of its magnitude and gravity, are in need of rebuilding and re-enforcement to endure the next adverse impacts; this includes smes that reached near demise and whose management systems were nearly compromised by the emerging covid crisis and its still uncertain unfolding and post-covid aftermath.
3. the primary vital support systems, especially the support systems of last resort, including first responders, emergency systems, and warning and rescue systems, among others, need to develop alternative, functional contingencies and stay near readiness, as the timing of the next crisis may remain a true uncertainty (of the knightian type discussed earlier).
4. the immediate support systems, agencies, or persons need to have planned redundancies and be ready to act as backups should their clients be affected by unforeseen events.
5. sustainability and resilience need to become an integral part of all contingency plans, as the strength of the collectivity depends on the strength and resilience of the weakest link(s) (blatt 2009).
6. the prevention of a natural disaster of covid scale possibly engulfing humanity requires supra-national institutions with effective plans, incentives, and sanctions to prevent self-interest at a cost to the larger collectivity, if not to humankind.
the immediate implication of the above discussions, in the post-mortem analysis of a crisis regardless of its scale and magnitude, is to learn about the causes and the reasons for failure in order to stop, and possibly reverse, their effects in a timely fashion. in the context of smes and ismes, management training, simulation to test the efficacy and reliability of crisis scenarios for alternative contingency plans, and their feasibility and functionality, among others, are of critical importance, which points to four equally important efforts:
1. crisis management needs to become an indispensable part of education at all professional levels to enable individuals to protect themselves and assist others in need, as well as to reduce the burden and gravity of the collective harms.
2. the societal backbone institutions and institutional infra-structures on which others depend must be strengthened so that they can withstand the impact of the next crisis, regardless of its timing and origin, and support their dependents.
3. the widespread lessons learned from the covid-19 crisis should be utilized to prepare for a more massive crisis in the not-so-distant future.
4. the smes, and especially ismes, as socio-economic institutions with societal impact, need to re-examine their dependencies on others and take steps to avoid their recurrence in ways consistent with their long-term aims and objectives.
in the final analysis, the experience of the covid-19 pandemic indicates that humanity is fragile and only collective action can provide the necessary capabilities and resources for dealing with the next potential disaster.
similarly, the smaller institutions that provide the basic ingredients, parts, and support for the full functioning of their networks and the livelihood of their respective members need the assurance of mutual support in order to survive and to deliver the vital support needed from them. on a final note, it is an opportune privilege for the journal of international entrepreneurship to reflect on the ongoing crisis, which is still able to inflict further harm and damage nearly beyond the control of national governments. similarly, and on behalf of the journal, i invite the scholarly community to take up the challenge of educating and preparing us for the next crisis, regardless of its nature, location, and timing. the journal is prepared to offer thematic and special issue(s) covering the management of crisis in smes and ismes alike.

references (titles as extracted):
the market for "lemons": quality uncertainty and the market mechanism
institutional factors, opportunity entrepreneurship and economic growth: panel data evidence
the resource-based view of the firm: ten years after 1991
resilience in entrepreneurial teams: developing the capacity to pull through
the palgrave encyclopedia of strategic management
institutional theory and entrepreneurship: where are we now and where do we need to move in the future?
a theory of entrepreneurship and institutional uncertainty
navigating hypercompetitive environment: the role of action aggressiveness and tmt integration
home country institutions, social value orientation, and the internationalization of ventures
beyond creative destruction and entrepreneurial discovery: a radical austrian approach to entrepreneurship
how buyers cope with uncertainty when acquiring firms in knowledge-intensive industries: caveat emptor
the role of social and human capital among nascent entrepreneurs
uncertainty about uncertainty. in: foundations for new economic thinking
experiential knowledge and cost in the internationalization process
knowledge as the source of opportunity
early strategic heritage: the carryover effect on entrepreneurial firm's life cycle
advances and challenges in the evolving field of international entrepreneurship: the case of migrant and diaspora entrepreneurs
actions, actors, strategies and growth trajectories in international entrepreneurship
management of crisis by smes around the world (2020)
risk-takers and taking risks
economic action and social structure: the problem of embeddedness
the influence of additive manufacturing on early internationalization: considerations into potential avenues of ie research
the cultural relativity of organizational practices and theories
culture's consequences: comparing values, behaviors, institutions and organizations across nations
cultures and organizations: software of the mind
managing the unknowable: the effectiveness of early-stage investor gut feel in entrepreneurial investment decisions
a conversation on uncertainty in managerial and organizational cognition. in: uncertainty and strategic decision making
the international operations of national firms: a study of direct foreign investment
entrepreneurial orientation, dynamic capabilities and international performance
institutions, institutional effects, and institutionalism
the internationalization process of the firm: a model of knowledge development and increasing foreign market commitments
the uppsala internationalization process model revisited: from liability of foreignness to liability of outsidership
country institutional profiles: concept and measurement
formal institutions, culture, and venture capital activity: a cross-country analysis
risk and uncertainty in internationalisation and international entrepreneurship studies
the multinational enterprise and the emergence of the global factory
effect of perceived environmental uncertainty on exporter-importer interorganizational relationships and export performance improvement
elitists, risk-takers, and rugged individualists? an exploratory analysis of cultural differences between entrepreneurs and non-entrepreneurs
unpacking the uncertainty construct: implications for entrepreneurial action
entrepreneurial action and the role of uncertainty in the theory of the entrepreneur
cross-cultural cognitions and the venture creation decision
socio-cultural factors and female entrepreneurship
institutions, institutional change and economic performance
social structure and organizations
the economics of a pandemic: the case of covid-19
uncertainty, knowledge problems, and entrepreneurial action
disruptive technologies: an expanded view
social embeddedness, entrepreneurial orientation and firm growth in ethnic minority small businesses in the uk
contextualizing entrepreneurship: conceptual challenges and ways forward
institutional perspectives on entrepreneurial behavior in challenging environments
resource-based view of the firm
the resource-based view of the firm: ten years after
the economics of organization: the transaction cost approach
the new institutional economics: taking stock, looking ahead
institutional theory and contextual embeddedness of women's entrepreneurial leadership: evidence from 92 countries
overcoming the liability of foreignness
the dynamics of the liability of foreignness: a global study of survival in financial services

key: cord-214822-pfx1eh5b title: a fractal viewpoint to covid-19 infection authors: sotolongo-costa, oscar; weberszpil, josé; sotolongo-grau, oscar date: 2020-07-14 cord_uid: pfx1eh5b

one of the central tools to control the
covid-19 pandemic is the knowledge of its spreading dynamics. here we develop a fractal model capable of describing this dynamics in terms of daily new cases, and provide quantitative criteria for some predictions. we propose a fractal dynamical model using the conformable derivative and a fractal time scale. a burr-xii-shaped solution of the fractal-like equation is obtained. the model is tested using data from several countries, showing that a single function is able to describe very different shapes of the outbreak. the diverse behavior of the outbreak in those countries is presented and discussed. moreover, a criterion to determine the existence of the pandemic peak and an expression to find the time to reach herd immunity are also obtained. the worldwide pandemic provoked by the sars-cov-2 coronavirus outbreak has attracted the attention of the scientific community due to, among other features, its fast spread. its strong contamination capacity has created a fast-growing population of people enduring covid-19, its related disease, and a non-negligible peak of mortality. the temporal evolution of contagion over different countries and worldwide exhibits a common dynamic characteristic, in particular a fast rise to a maximum followed by a slow decrease (incidentally, very similar to other epidemic processes), suggesting some kind of relaxation process, which we try to deal with, since relaxation is, essentially, a process in which the parameters characterizing a system are altered, followed by a tendency toward equilibrium values. in physics, clear examples include dielectric and mechanical relaxation. in other fields (psychology, economics, etc.) there are also phenomena in which an analogy with "common" relaxation can be established. in relaxation, the temporal behavior of the parameters is of central methodological interest. that is why a pandemic can be conceived as a process in which this behavior is also present.
for this reason we are interested, despite the existence of statistical and dynamical-systems methods, in introducing a phenomenological equation containing parameters that reflect the system's behavior, from which its dynamics emerges. we are interested in studying the daily new cases, not the current cases by day. this must be noted to avoid confusion in the interpretation: we study not the cumulative number of infected patients reported in databases, but its derivative. this relaxation process is, for us, a scenario that, by analogy, will serve to model the dynamics of the pandemic. this is not an ordinary process. due to the concurrence of many factors that make its study very complex, its description must turn to a non-classical one. so, we will consider that the dynamics of this pandemic is described by a "fractal" or internal time [1]. the network formed by people in their daily activity forms a complex field of links that is very difficult, if not impossible, to describe. however, we can take a simplified model where all the nodes belong to a small-world network, but the time of transmission from one node to another differs for each link. so, in order to study this process, let us assume that the spread occurs in "fractal time" or internal time [1, 2]. this is not a new tool in physics. in refs. [3-5] this concept has been successfully introduced, and here we keep in mind the possibility of a fractal-like kinetics [6], but generalize it to a nonlinear kinetic process. here we follow what we refer to as a "relaxation-like" approach to model the dynamics of the pandemic, which justifies the fractal time. by analogy with relaxation, an anomalous relaxation, we build up a simple nonlinear equation with fractal time. we also regain the analytical results using a deformed-derivative approach, using the conformable derivative (cd) [7]. in ref. [8] one of the authors (j.w.)
has shown an intimate relation of this derivative with complex systems and nonadditive statistical mechanics. this was done without resorting to the details of any specific entropy definition. our article is outlined as follows: in section 2, we present the fractal model, formulated in terms of conformable derivatives, and develop the relevant expressions to adjust the covid-19 data. in section 3, we show the results and figures referring to the data fitting, along with discussions. in section 4, we finally cast our general conclusions and possible paths for further investigation. let us denote by f(t) the number of contagions up to time t. the cd is defined as [7]

d^α f(t) = lim_{ε→0} [ f(t + ε t^(1−α)) − f(t) ] / ε.    (1)

note that the deformation is placed in the independent variable. for differentiable functions, the cd can be written as

d^α f(t) = t^(1−α) df/dt.    (2)

an important point to be noticed here is that the deformations affect different functional spaces, depending on the problem under consideration. for the conformable derivative [8-12], the deformations are put in the independent variable, which can be a space coordinate, in the case of, e.g., position-dependent-mass problems, or even time or spacetime variables, for temporally dependent parameters or relativistic problems. since we are dealing with a complex system, a search for a mathematical approach that could take into account some fractality or hidden variables seems adequate. this idea is also based on the fact that we do not have full information about the system under study. in this case, deformed derivatives with fractal time seem to be a good option to deal with this kind of system. deformed derivatives are present and connected in the context of generalized statistical mechanics [8]. there, the authors have also shown that the q-deformed derivative has a dual derivative and a related q-exponential function [13].
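as a quick numerical sanity check (ours, not part of the paper), the limit definition of the conformable derivative can be compared against its closed form for differentiable functions, d^α f(t) = t^(1−α) f'(t); function names below are illustrative:

```python
def conformable_derivative(f, t, alpha, eps=1e-6):
    """Conformable derivative of f at t via the limit definition:
    D^alpha f(t) = lim_{eps->0} [f(t + eps * t**(1-alpha)) - f(t)] / eps."""
    return (f(t + eps * t ** (1.0 - alpha)) - f(t)) / eps

def conformable_derivative_closed(df, t, alpha):
    """Closed form for differentiable f: D^alpha f(t) = t**(1-alpha) * f'(t)."""
    return t ** (1.0 - alpha) * df(t)

f = lambda t: t ** 3          # a simple differentiable test function
df = lambda t: 3.0 * t ** 2   # its ordinary derivative

alpha, t0 = 0.7, 2.0
num = conformable_derivative(f, t0, alpha)
exact = conformable_derivative_closed(df, t0, alpha)
```

for smooth functions the two values agree up to the finite-difference error, which is the property the model exploits when converting the fractal-time equation into an ordinary one.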
here, in the case under study, the deformation is considered for the solution space or dependent variable, that is, the number f(t) of contagions up to time t. one should also consider that the justification for the use of deformed derivatives finds its physical basis in the mapping into the fractal continuum [8, 14-16]. that is, one considers a mapping from a fractal coarse-grained (fractal porous) space, which is essentially discontinuous in the embedding euclidean space, to a continuous one [9]. in our case the fractality lies in the temporal variable, so the cd is taken with respect to time. a nonlinear relaxation model can be proposed here, again based on a generalization of the brouers-sotolongo fractal kinetic model (bsf) [3, 4, 17], but here represented by a nonlinear equation written in terms of the cd:

d^α_t f = − f^q / τ^α,    (3)

where τ is our "relaxation time" and q and α here are real parameters. we do not impose any limit on the parameters. equation (3) has as a well-known solution a function with the shape of burr xii [18], with

f(t) = f_0 [ 1 + (q − 1) f_0^(q−1) (t − t_0)^α / (α τ^α) ]^(1/(1−q)).    (4)

the density (in a similar form to a pdf, but here it is not a pdf) is, then,

|df/dt| ∝ t^(α−1) [ 1 + B t^α ]^(−q/(q−1)), with B = (q − 1) f_0^(q−1) / (α τ^α),    (5)

which can be expressed as a burr xii-type density,

(c k / λ) (t/λ)^(c−1) [ 1 + (t/λ)^c ]^(−(k+1)),    (6)

where the parameters are c = α, k = 1/(q − 1), and λ a characteristic time; or, in a simpler form for data-adjustment purposes,

f(t) = A t^(a−1) / (1 + B t^c)^b,    (7)

with A, B, a, b, and c as adjustable constants. this is very similar, though not equal, to the function proposed by tsallis [19, 20] in an ad hoc way. here, however, a physical representation by the method of analogy is proposed to describe the evolution of the pandemic. though we have introduced a, b, c, B, and A as parameters to simplify the fitting, the true adjustment constants are, clearly, q, τ and α. note that we do not impose any restrictive values on the parameters. there is no need to demand that the solution always converge. the derivation of burr xii as a probability distribution has to impose restrictions, but this is not the case here. in burr xii the function was used as a probability distribution; here, instead, the function describes a dynamic, which can be explosive, as will be shown for the curves of brazil and mexico.
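the algebra behind the burr-xii shape can be sketched; we assume here that the kinetic equation has the separable form d^α f = −f^q/τ^α (the exact normalization of τ is our assumption), and use the closed form of the cd:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
With $D^{\alpha}f = t^{1-\alpha}\,df/dt$, the kinetic equation separates:
\[
t^{1-\alpha}\frac{df}{dt} = -\frac{f^{q}}{\tau^{\alpha}}
\quad\Longrightarrow\quad
f^{-q}\,df = -\frac{t^{\alpha-1}}{\tau^{\alpha}}\,dt .
\]
Integrating from $(t_0=0,\,f_0)$ to $(t,\,f)$,
\[
\frac{f^{1-q}-f_0^{1-q}}{1-q} = -\frac{t^{\alpha}}{\alpha\,\tau^{\alpha}}
\quad\Longrightarrow\quad
f(t) = f_0\left[\,1+(q-1)\,f_0^{\,q-1}\,
\frac{t^{\alpha}}{\alpha\,\tau^{\alpha}}\right]^{\frac{1}{1-q}} ,
\]
which is the Burr-XII shape. Differentiating gives the daily-case density
\[
\left|\frac{df}{dt}\right| \propto
t^{\alpha-1}\left[1+B\,t^{\alpha}\right]^{-\frac{q}{q-1}},
\qquad B=(q-1)\,f_0^{\,q-1}/(\alpha\,\tau^{\alpha}),
\]
matching the fitting form $A\,t^{a-1}(1+B\,t^{c})^{-b}$
with $a=c=\alpha$ and $b=q/(q-1)$.
\end{document}
```

the identification $a=c=\alpha$, $b=q/(q-1)$ is what ties the phenomenological fitting constants back to the physical parameters $q$, $\tau$, $\alpha$.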
therefore, if we consider an infinite population, a peak will never be reached unless the circumstances change (treatments, vaccines, isolation, etc.). our model does not impose finiteness on the solution. the possibility of a decay of the pandemic in a given region requires, in this model, the fulfillment of the condition

b c > a − 1,    (8)

which expresses the property that

lim_{t→∞} f(t) = 0,    (9)

meaning that the function has a local maximum. if this condition is not accomplished, the pandemic does not have a peak and, therefore, the number of cases increases forever in this model. in this case there is, apart from a change of propagation and development conditions, the possibility for a given country that does not satisfy condition (8) to reach "herd immunity", i.e., the state in which the number of contagions has reached about 60% of the population, in which case we may calculate the time to reach such a state using (4), assuming t_0 = 0:

t_hi such that the accumulated cases satisfy f(t_hi) = 0.6 p.    (10)

we will also work with what we will call t_1000 ahead, which seems to make more sense and bring more information. with eq. (7) let us fit the data of the epidemic worldwide. the data were extracted from johns hopkins university [21] and the website [22] was used to process the data for several countries. we covered the infected cases taking jan 22 as day 1, up to june 13. the behavior of new infected cases by day is shown in figure 1. the fitting was made with gnuplot 5.2. as it seems, the pandemic shows some sort of "plateau", so the present measures of prevention are not able to eliminate the infection propagation in the short term, but it can be seen that condition (8) is weakly fulfilled. (figure 1: see fitting parameters in table i; condition (8) is satisfied.) in the particular case of mexico, the fitting is shown in figure 2. in this case condition (8) is not fulfilled. in terms of our model this means that the peak is not predictable within the present dynamics. something similar occurs with brazil, as shown in figure 3. the data for brazil do not fulfill condition (8) either.
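assuming the burr-xii-type fitting form f(t) = A t^(a−1)/(1 + B t^c)^b discussed in the text, the peak condition and the peak location follow from setting d(log f)/dt = 0; the sketch below (our function names, illustrative parameter values) makes that check executable:

```python
def peak_time(A, B, a, b, c):
    """Peak of g(t) = A * t**(a-1) / (1 + B*t**c)**b, if it exists.
    d(log g)/dt = 0 gives (a-1)*(1 + B*t**c) = b*c*B*t**c, hence
    t* = [(a-1) / (B*(b*c - a + 1))]**(1/c).
    A maximum at t > 0 requires a > 1 and b*c > a - 1; the second
    inequality is also what makes g -> 0 as t -> infinity."""
    if a <= 1 or b * c <= a - 1:
        return None  # no turnover: daily cases keep growing in this model
    return ((a - 1) / (B * (b * c - a + 1))) ** (1.0 / c)

def g(t, A, B, a, b, c):
    """Daily-case curve in the simplified fitting form."""
    return A * t ** (a - 1) / (1 + B * t ** c) ** b

params = dict(A=100.0, B=1e-4, a=3.0, b=2.0, c=3.0)  # illustrative values
tp = peak_time(**params)  # closed-form peak day for these parameters
```

countries whose fitted parameters return `None` here correspond to the "no predictable peak" cases discussed for mexico and brazil.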
In these cases there is no forecast of a peak, and we can say that the data for Mexico and Brazil reveal a dynamics in which the peak, if it exists, seems to be quite far away. There are, however, illustrative cases where the peak is reached: the progression of the outbreak in Cuba and Iceland is shown in Figures 4 and 5, respectively. Condition (8) is satisfied for both countries, and the infection-rate curve descends at a good speed once past the peak. Now let us look at the United States data, shown in Figure 6, where the peak already looks surpassed. The USA outbreak is characterized by very fast growth up to the peak and then a very slow decay of the infection rate; as discussed above, in this dynamics the outbreak takes an almost infinite time to be controlled. There are also intermediate cases such as Spain and Italy, shown in Figures 7 and 8: their data exhibit the same behavior as the USA, a fast initial growth and a very slow decay after the peak, but the outbreak is controlled in a finite amount of time. In Table I we present the relevant fitting parameters, including the herd-immunity time t_hi and t_1000, the time to reach a rate of 1000 daily infections, for the countries that have not reached the epidemic peak, Mexico and Brazil (for Mexico, t_hi = 778 days and condition (8) is not satisfied); we also include the population P of each country. The behavior of the pandemic in each of these countries looks well described by Eq. (7). Finally, let us briefly comment on herd immunity: countries that have managed to stop the outbreak, even with relatively high mortality such as Spain and Italy, will not reach herd immunity; as a matter of fact, it cannot even be calculated for those countries.
Then we can see countries like Brazil where, if the way of dealing with the outbreak does not change, herd immunity will be reached. Even where that seems desirable, the ability to reach herd immunity carries a heavy payload: for a country like Brazil, herd immunity would mean more than 100 million infected people, much the same as if a sizeable war had devastated the country. There is a similar scenario in Mexico, with the difference that its value of t_hi is so high that SARS-CoV-2 could even turn into a seasonal virus, at least for some years; we can expect roughly the same mortality, but scattered over a few years. The USA deserves special mention, since there t_hi tends to infinity: we can expect a continuous infection rate for a very long time. The outbreak is controlled, but not enough to eradicate the virus; the virus will not disappear for several years, though the healthcare system may be able to manage it. The virus will become endemic and herd immunity will never be reached, yet the associated infection and mortality rates could, hypothetically, be small compared with Mexico and Brazil. We can also compare the speed of the outbreak in different countries: as already noted, Table I reports t_1000 for several of them. It should be noticed, however, that this time is not counted from day 0 (which is always January 22) but from the approximate day the outbreak began in the corresponding country. For example, Brazil had no cases on January 22; the first cases were detected around March 10, so both the data fitting and t_1000 were calculated from March 10. In this work we presented, for the first time, a model built by the method of analogy, in this case with a nonlinear relaxation-like behavior, obtaining a good fit to the observed daily number of cases over time.
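The t_1000 diagnostic can be computed directly from any fitted cumulative curve. The sketch below again uses a hypothetical Burr XII-shaped f(t) as a stand-in for the paper's Eq. (7); all parameter and function names are illustrative assumptions.

```python
def cumulative_cases(t, C, tau, alpha, gamma):
    # Hypothetical Burr XII-shaped stand-in for the paper's Eq. (7).
    return C * (1.0 - (1.0 + (t / tau) ** alpha) ** (-gamma))

def t_1000(C, tau, alpha, gamma, t_start=0.0, rate=1000.0,
           t_max=3650.0, dt=1.0):
    """First day, counted from t_start (the local outbreak onset,
    e.g. March 10 for Brazil), on which the fitted daily rate
    f(t+1) - f(t) reaches `rate`. Returns None if the rate never
    gets there within t_max days."""
    t = t_start
    while t < t_max:
        daily = (cumulative_cases(t + dt, C, tau, alpha, gamma)
                 - cumulative_cases(t, C, tau, alpha, gamma))
        if daily >= rate:
            return t - t_start
        t += dt
    return None
```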
The explicit expressions obtained may be used as a tool to approximately forecast the development of the COVID-19 pandemic in different countries and worldwide. In principle, this model can help in elaborating or changing actions. The model does not incorporate any property particular to this pandemic, so we think it could be used to study pandemics of different sources: with data collected at early times, it can predict the possibility of a peak, indefinite growth, the time to herd immunity, and so on. What seems clear from the COVID-19 data, the fits and the values shown in Table I is that SARS-CoV-2 is far from being controlled at the world level. Even where some countries appear to control the outbreak, the virus is still a menace for their health systems. Furthermore, in today's interconnected world it is impossible for any country to keep its borders closed and pay attention only to what happens inside. All isolation measures must be halted at some time, and we can then expect either new outbreaks in countries like Spain or herd immunity. Indeed, the model made it possible to make an approximate forecast of the time to reach herd immunity, which may be useful in the design of actions and policies about the pandemic. We have introduced t_1000, which gives information about the early infection behavior in populous countries. A possible improvement of this model is the formal inclusion of a formulation based on the dual conformable derivative [12, 13]; this will be published elsewhere. We acknowledge Dr. Carlos Trallero-Giner for helpful comments and suggestions. The authors declare that they have no conflict of interest.

Key: cord-271810-7uzk4pi9. Author: Joan B. Soriano.
Title: Humanistic epidemiology: love in the time of cholera, COVID-19 and other outbreaks. Date: 2020-04-25. Journal: Eur J Epidemiol. DOI: 10.1007/s10654-020-00639-y.

Colombian Nobel Prize-winning author Gabriel García Márquez suffered cholera and many bouts of malaria during his life. In Love in the Time of Cholera, one of his many masterpieces, he wrote that persistence (and handwashing!) were rewarded with love after a life lived through countless cholera outbreaks. I am a respiratory epidemiologist, and literally at the peak of the COVID-19 pandemic we are now being bombarded with descriptive epidemiology statistics and standard, cold figures: "as of today the death toll of COVID-19 worldwide is 114,290"; "the peak resource use of respirators and ICU rooms in the USA is expected on April 10, 2020", and counting. In the distant past there were devastating epidemics of infectious disease, such as cholera, the 1918 flu (wrongly called Spanish), and the plague's Black Death; other, more recent outbreaks like SARS, MERS or Ebola were considered exotic, faraway occurrences. Yet we were not ready for this one, and for at least the last four generations we are now living unprecedented times. No one, even in the wildest nightmares of any Hollywood-based science-fiction screenwriter, would have anticipated that 2020 would start with such drama and suffering. When we were raising our glasses and toasting on New Year's Eve to a happy 2020, few were aware of a safety alert reported that morning in Wuhan, Hubei province, China, due to a cluster of pneumonia cases of unknown etiology [1]. It took only 10 days, until January 9, 2020, for China's CDC to report that a novel coronavirus was the causative agent of that local outbreak.
For good and for bad, everything is globally interconnected: that minute incident in China is the reason why we live in lockdown, why basic civil liberties are limited, why there have been so many deaths and so much suffering, and why, locally, my hospital is near collapse. Hospital de la Princesa, an old 450-bed tertiary hospital in downtown Madrid, Spain, had its D-Day on March 30, 2020, when a total of 552 COVID-19 patients were admitted and 120+ more patients were in the emergency room, impatiently waiting to be admitted [2]. Many two-patient rooms already held three, even four occupants. Our petite, modern ICU with 17 beds had to be stretched to 73 beds, by invading two surgical theatres turned over to critical care as well as the entire psychiatric ward. Mirroring ancient times, all mentally ill patients, including those with active, severe paranoid schizophrenia or major depression, were sent home with their relatives to make room for others requiring invasive mechanical ventilation, mostly with improvised ventilators, reused disposable ones, or machines duplicated with home-made technology. Even friends who have been veteran volunteers with Médecins Sans Frontières in Syria's civil war, or in Sierra Leone's Ebola zone, were not ready. In military terminology, La Princesa was a wartime hospital in the front line; my respiratory department, with thirteen staff plus eight residents, suffered eleven "casualties", counting quarantines plus infections plus one admission with severe bilateral pneumonia. Other Madrid hospitals were hit even harder: colleagues at Hospital La Paz or Gregorio Marañón suffered an even worse avalanche of patients to care for. It was all like a modern hecatomb, literally from the ancient Greek ἑκατόν, hekatón, "one hundred" and βοῦς, boũs, "ox": a religious sacrifice of a hundred oxen marking a great catastrophe with great mortality, or the end of the world. We are still facing a cruel disease and a global epidemic, both of biblical proportions [3].
It is still severely and seriously affecting our old ones and others with heart, lung and other chronic diseases. But not only them. Several colleagues of mine, young, completely healthy, even athletic, have been admitted to the same ward where, the previous day, they were caring for patients. We still don't know whether an immunological factor, a genetic factor, a combination of risk factors, or serendipity makes this little RNA virus collapse your bronchi and lungs with a thick "snail snot or slime", accompanied by an inflammatory outburst killing some perfectly healthy lungs. As Dr Landete explained to junior residents on the morning clinical round: "This is the first time I have seen acute sudden respiratory distress syndrome (ARDS) occur in front of my eyes. In the emergency room I was examining a walk-in 52-year-old female patient with a temperature, malaise and a dry cough, and within 20 minutes I had to call an ENT colleague to intubate her, as she had developed the fastest, quickest ARDS I've ever seen." Even after all their greatest efforts, and in the best hands, they could not save her. It is indeed a nasty little bug [4]. However, there is always hope and, as seen in great literature, times of crisis bring out the best in us. To date, I have seen residents choosing to stay on after finishing a 24-h duty to try to save one more critically ill patient; auxiliary nurses improvising aprons and boots from trash bags who, on finally receiving their space suits, posed for posterity like a football team, always with a ready smile (Fig.
1); residents in neurology, immunology or pathology becoming chest medicine residents; medical students volunteering to learn the practicalities of lung mechanics and gas exchange; a department head creating a blog aimed at praising individuals for outstanding bravery and commitment; and I have been privileged to lead a small think tank of nurses, doctors, physicists, engineers and other friends who, from Saturday March 14, have met daily by videoconference at 7 am to brainstorm initiatives, just before seeing patients or waking their families. Many of the above have been living for two weeks in hotels next to our hospital, extremely and severely sleep-deprived for a month already (Fig. 2). Our hospital administrators recommended that all staff take no weekends off until further notice. No one disagreed, trade unions included, out of a 2000+ headcount. And this has been going on for nearly a month; again, always with a ready smile. This is the so-called espíritu de la Princesa. I myself, a humble respiratory epidemiologist who has dedicated his professional life to research on COPD, asthma and tobacco, had to go back to the textbooks and online resources for a fast-track, hands-on crash course in outbreak research: counting the number of deaths, infected cases, R0 infectivity, and the like [5]. That was the easy part. Realizing that behind every case there was a personal tragedy, a family loss, slowly broke my heart and my lungs. So many people dying alone in elderly homes and residences, without medical care, without any care; I imagine no one even holding their hands. For the sake of hygiene and competing priorities, no one was available to say a prayer while they were buried or cremated, alone. It will take time to accept this sad passing away, a cruel ending for many. We must live on this planet; there is no other Earth, no planet/plan B.
And we have observed that air pollution and planetary health can be improved within weeks through concerted individual and societal efforts [6]. Children confined at home for three weeks already have come to appreciate playing with their brothers and sisters, talking with neighbors across the balconies, or even connecting remotely with their school friends and teachers. They should be the first to end confinement. And we need to learn the lessons of history: this is not our first epidemic. It is the toll we pay for living in society and in cities; if we were still collectors and hunters in the wild, no such thing would have happened. Yet humans are emotional, social animals, and beyond our species Homo sapiens, scholars say we are of an emotionalis subspecies. As human animals we are not meant to live alone, or die alone, or in solitude. I have no doubt that when this crisis is over, and I am positive it will be over soon, music, theatre, movies, literature and the arts in general will help restore balance and make us all wiser, better persons: the so-called move from omics to humanomics [7]. Beyond modern, ever more technical and robotized medicine, medical humanism in the twenty-first century is to be more important than ever [8]. As Pangloss, Candide's optimistic teacher in Voltaire's masterpiece, said: "everything happens for a reason." Pangloss chants over and over that "all is for the best in the best of all possible worlds", while Candide leads an outrageous life illustrating that this is patently false. But we have no room for pessimism. I remember reading Essay on Blindness by the Portuguese author José Saramago; happily, the panic and selfishness of his outbreak of sudden blindness occurred only in his literature. Let us only imagine if Gabo's inspiration had been today's COVID-19 pandemic, and his Love in the Time of Cholera were rewritten. Or La Peste by the French novelist Albert Camus who, at the premature age of 46, died in a car accident near Sens.
Camus, not wearing a safety belt in the passenger seat, died instantly. But what a life! La Peste tells the story of a plague sweeping the French Algerian city of Oran. Nevertheless, it is not a medical book but one about human passions during and after an outbreak. I can't wait to re-read it. In all of these books, and others, health personnel have been rightfully characterized and praised as heroes and martyrs. Yet, last but certainly not least, I wish to make a call to remember the crucial role played by our non-health-related hospital staff. Nurses and doctors are thoroughly and well-deservedly credited, since they must frequently and harshly endure the pains of COVID-19. However, their work would all be a lost effort without the cleaning personnel, wardens, cooks and cafeteria caterers, administrative workers, security forces, lab technicians, and other hospital-based job groups. They suffer this modern plague equally, often without protection, mostly without recognition, but always proudly, working 24/7, weekends included, and again always with a ready smile. These critical workers should be praised and acknowledged equally since, with no cleaners and cooks, our hospitals would instantly collapse. As this is neither the last outbreak nor, in all likelihood, "the" last big one, we need to learn one more lesson from the past. In the future, let us never again take for granted those simple things that during confinement we suddenly came to see as precious: a bear hug, a slap on the back and, of course, a ready smile without a face mask. I have no doubt that medical humanism and the arts are already helping, and that they will help us learn to take better care of our patients, our loved ones, and ourselves. JB Soriano, MD. Madrid, April 14, 2020.

References
1. Emergencies preparedness, response:
Pneumonia of unknown cause, China.
2. Situación de COVID-19 en España.
3. Offline: COVID-19, what countries must do now.
4. A novel coronavirus from patients with pneumonia in China.
5. In Snow's footsteps: commentary on shoe-leather and applied epidemiology.
6. The 2019 report of the Lancet Countdown on health and climate change: ensuring that the health of a child [title truncated in source].
7. The need for humanomics in the era of genomics and the challenge of chronic disease management.
8. Opening editorial: the importance of the humanities in medical education.

Conflict of interest: there are no conflicts of interest or competing interests to report.

Key: cord-265348-hnu8gw6w. Authors: Kirsty L. Buising, Karin A. Thursky, James F. Black, Lachlan MacGregor, Alan C. Street, Marcus P. Kennedy, Graham V. Brown. Title: Improving antibiotic prescribing for adults with community acquired pneumonia: does a computerised decision support system achieve more than academic detailing alone? A time series analysis. Date: 2008-07-31. Journal: BMC Med Inform Decis Mak. DOI: 10.1186/1472-6947-8-35.

Background: The ideal method to encourage uptake of clinical guidelines in hospitals is not known; several strategies have been suggested. This study evaluates the impact of academic detailing and a computerised decision support system (CDSS) on clinicians' prescribing behaviour for patients with community acquired pneumonia (CAP). Methods: The management of all patients presenting to the emergency department over three successive time periods was evaluated: the baseline, academic detailing and CDSS periods. The rate of empiric antibiotic prescribing that was concordant with recommendations was studied over time, comparing pre and post periods and using an interrupted time series analysis.
Results: The odds ratio for concordant therapy in the academic detailing period compared with the baseline period, after adjustment for age, illness severity and suspicion of aspiration, was OR = 2.79 [1.88, 4.14], p < 0.01; for the computerised decision support period compared with the academic detailing period it was OR = 1.99 [1.07, 3.69], p = 0.02. During the first months of the computerised decision support period, an improvement in the appropriateness of antibiotic prescribing was demonstrated that was greater than that expected to have occurred with time and academic detailing alone, based on predictions from a binary logistic model. Conclusion: Deployment of a computerised decision support system was associated with an early improvement in antibiotic prescribing practices greater than the changes seen with academic detailing; the sustainability of this intervention requires further evaluation. With the rapidly expanding body of medical knowledge, clinicians need access to appropriate, relevant information to guide their clinical decision making. For many conditions, clinical experts have used the available evidence and experience to generate guidelines that endeavour to assist clinicians and improve patient outcomes. A major problem, however, has been finding the best strategies to implement these guidelines in a busy hospital environment [1-3]. Group lectures, one-to-one academic detailing, laminated cards and advertising material such as posters have all been tried with variable success [4-7]. With the increasing role played by computers as a source of information in the hospital setting, computerised decision support may provide a useful alternative strategy [8-11]. At the Royal Melbourne Hospital, a transferable web-based computerised decision support system was developed, with the capacity to present any guideline or algorithm.
[12] We chose in the first instance to deploy a guideline for the management of patients with community acquired pneumonia (CAP), as this is one of the most common conditions presenting to hospital emergency departments. International and national guidelines have been produced to guide the management of CAP [13-15], but uptake has been poor [16]. The general aim of this study was to describe the impact of different methods of guideline promotion on clinician prescribing behaviour; more specifically, we compared the impact of academic detailing (AD) and a computerised decision support system (CDSS) on the management of patients with CAP in an emergency department (ED). The outcomes of interest included the prescription of antibiotics concordant with guideline recommendations, the early identification of severely ill patients and adjustment of antibiotics to meet recommendations for prescribing in the severely ill group, and adjustment of antibiotics to accommodate known patient allergies. The design was a two-stage pre- and post-intervention cohort study with a time series analysis. The study was performed at the Royal Melbourne Hospital, an urban adult tertiary teaching hospital with 350 beds, including 14 intensive care unit (ICU) beds. The emergency department assesses 50,000 patients per year, leading to 16,000 admissions to hospital. The hospital did not have an electronic medical record or a computerised order entry system. More than 30 different doctors were working in the ED at any point in time over the study periods, and the allocation of doctors to patients was not structured. A computerised antibiotic approval system restricting access to ceftriaxone was also in operation over all three time periods of this study; its implementation predated the commencement of the study. It approved ceftriaxone use for all patients with severe pneumonia, and its content agreed with the CAP guideline content.
This study described the prescribing behaviour of doctors (both senior and junior medical staff) managing patients in the ED, focusing specifically on antibiotic prescribing for all patients initially diagnosed with CAP by the treating clinician in the ED. The study extended over three distinct time periods. During the first ('baseline') period, electronic and paper copies of national antibiotic prescribing guidelines were available to staff in the ED [13], but no particular additional efforts were made to encourage uptake of the guideline. At the start of the second ('academic detailing') period, a program of academic detailing was initiated at the hospital: two senior ED clinicians, a pharmacist and a nurse were trained to provide academic detailing to their colleagues, spending one-on-one time educating colleagues (doctors and pharmacists) about antibiotic prescribing recommendations. These activities were opportunistic and occurred during the usual rostered hours; interactions were not scheduled, and no formal documentation of AD encounters was made. Posters and laminated cards with information about severity assessments and appropriate antibiotic choices for patients with CAP were distributed and actively promoted throughout the ED during the academic detailing period. These personnel and advertising materials remained available throughout the following ('computerised decision support') period, but were not specifically promoted. At the commencement of the computerised decision support period, the guideline for the management of patients with CAP was deployed on an existing decision support tool: a web-based, transferable system designed at the hospital using a .NET framework and implemented in January 2005. The CAP algorithm used the pneumonia severity index (PSI) to guide site-of-management decisions (inpatient vs.
outpatient care) and the modified British Thoracic Society severity score (CURB) to highlight patients with severe pneumonia who were likely to need review by intensive care unit (ICU) staff [17, 18]. The program was integrated with hospital databases containing patient demographics and pathology results to facilitate rapid calculation of the scores required for these prediction rules. Use of these scores was not, however, mandated: users could skip the score and obtain antibiotic advice alone. Antibiotic allergy reminders were included: if a user had previously registered an allergy for a patient this was presented; otherwise a reminder was given to check with the patient. Detailed information was included about unusual pathogens to consider, the most appropriate choice of empiric antibiotics, the duration of therapy, and the timing of the change from intravenous to oral antibiotic therapy. Users had access to the medical literature via the internet, along with local interpretation of this literature within the CDSS. Users could browse the CDSS content without logging a patient in, so it could be used as an educational tool as well as providing patient-specific advice. There was general agreement between the empiric antibiotic recommendations made in the national guideline, the AD directives and the content of the CDSS. The CDSS was available hospital-wide and its use was entirely voluntary: all hospital clinicians could access it via a shortcut on the desktop of any hospital computer. No specific incentives were provided to encourage its use; it was not triggered by any other computer system and resided alongside other electronic hospital guidelines. An introductory demonstration was provided to ED staff and to all staff at a hospital grand round; thereafter, infectious diseases registrars or pharmacists provided demonstrations informally. All patient presentations to the ED were available for inclusion in the study.
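The CURB rule referenced above is simple enough to sketch: one point each for Confusion, raised Urea, high Respiratory rate and low Blood pressure. The thresholds below follow the published modified BTS rule; the cutoff at which the hospital's CDSS flagged patients for ICU review is an assumption here.

```python
def curb_score(confusion, urea_mmol_l, resp_rate, sys_bp, dia_bp):
    """Modified BTS (CURB) severity score: one point each for
    Confusion, Urea > 7 mmol/L, Respiratory rate >= 30/min, and
    low Blood pressure (systolic < 90 or diastolic <= 60 mmHg)."""
    score = 0
    if confusion:
        score += 1
    if urea_mmol_l > 7.0:
        score += 1
    if resp_rate >= 30:
        score += 1
    if sys_bp < 90 or dia_bp <= 60:
        score += 1
    return score

def flag_severe(score, cutoff=2):
    # A score of >= 2 is commonly taken to indicate severe pneumonia;
    # the exact cutoff used by the CDSS is assumed, not stated above.
    return score >= cutoff
```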
Patients were prospectively identified from a database in the ED in which the treating doctor already routinely recorded the patient's diagnosis. All patients with a diagnosis of pneumonia, chest infection, lower respiratory tract infection, pleuritic chest pain, cough, shortness of breath, and/or aspiration were identified. Patients were included in the study if they had a new respiratory symptom, a new chest X-ray infiltrate consistent with pneumonia, and an initial assessment by the treating doctor of pneumonia. Exclusion criteria were: age < 18 years; immunocompromise (corticosteroids ≥ 15 mg prednisolone/day for ≥ 2 weeks, HIV-positive with CD4 < 200 cells/µL, or transplant recipients on immunosuppressive therapy); suspected or known severe acute respiratory syndrome (SARS); nosocomial pneumonia (discharged from hospital in the previous 2 weeks after an admission longer than 48 hours); and/or known suppurative lung disease such as cystic fibrosis or bronchiectasis. Data were prospectively collected from the medical history by a single trained research nurse, according to a set of specified rules. A single clinician was assigned to adjudicate any difficult issues, and a random sample of these cases was cross-checked with a second infectious diseases physician; this group comprised 5% of the total patient cohort (40 patients). Specific clinical, pathological and radiological data available within the first 24 hours were sought to allow calculation of severity scores [17, 18]. Clinicians' comments about suspicion of aspiration, and documentation of known antibiotic allergies, were recorded. The time to antibiotic therapy was calculated using the time of presentation, documented electronically by the ED triage nurse, and the time of antibiotic administration, as documented on the medication chart by the nurse in the ED or on the ward. Information regarding ongoing antibiotic use was collected.
Any antibiotics that were clearly being used to treat a separate infection (as described in the patient's medical record) were not included. Where the duration of treatment after discharge was not recorded, it was assumed to be 5 days. Antibiotic costs were calculated using pharmacy purchasing data; no actual changes in the cost of drugs commonly prescribed for pneumonia occurred over the study period. The admission criteria for the ICU were based entirely on the treating clinician's assessment in all time periods; no protocols or guidelines were enforced. Clinicians were not aware that the study was being conducted, and the researchers had no clinical role in the ED over the study period. There were no major changes in the number or composition of ED staff, or in their responsibilities, over the study period. The study was approved by the ethics committee of Melbourne Health; individual consent from the clinicians or patients involved was not required. The primary outcome assessed was the prescription of empiric antibiotic therapy that adequately covered the likely pathogens (both typical and atypical) and was concordant with recommendations: the combination of a recommended beta-lactam (amoxicillin, ampicillin, benzylpenicillin, ceftriaxone, cefotaxime or cefuroxime) plus either a macrolide (erythromycin, roxithromycin, clarithromycin or azithromycin) or doxycycline. The use of moxifloxacin alone was also classed as appropriate. Patients who received additional antibiotics were still classed as appropriate, so long as their antibiotic regimen included the recommended drugs (reflecting that they at least received appropriate cover). The possibility that antibiotics were required for other concomitant problems was appreciated; without detailed clinical information, it was not possible to determine whether this additional antibiotic use was unnecessary. A number of secondary outcomes were also examined.
For patients who required ICU intervention at any time during their admission, the proportion admitted directly from the ED to the ICU was evaluated as a marker of early recognition of severe disease. Similarly, the proportion of patients requiring ICU management at any time during their admission who were initially prescribed the recommended empiric broad-spectrum antibiotics for severe pneumonia in the ED was compared; appropriate therapy for this group was defined as ceftriaxone (or benzylpenicillin plus gentamicin) in combination with either intravenous azithromycin or erythromycin, with moxifloxacin alone also deemed appropriate. The number of patients prescribed an antibiotic to which they had a documented allergy was examined. The overall pattern of antibiotics prescribed, and the average antibiotic cost per patient, were assessed in each time period. Finally, the time between presentation to the ED and the administration of antibiotics was recorded. Baseline characteristics of subjects were compared between the three periods using a chi-squared test of homogeneity for categorical variables and analysis of variance for continuous variables, with an a priori level of statistical significance of 0.05. The baseline period extended over one year to give an indication of the baseline trend in the rate of concordant prescribing in the absence of any intervention. The academic detailing period included enough patients to detect an improvement in mean concordance from 65% to 75% (188 patients, power = 0.8, p = 0.05), and the computerised decision support period enough patients to detect an expected further improvement from 75% to 85% (120 patients, power = 0.8, p = 0.05). Multivariable logistic models were used to compare the mean proportions of concordance across the three periods, while adjusting for disease severity, age, and suspected aspiration.
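The stated sample sizes can be roughly checked with a textbook one-sample proportion formula. Whether the authors used exactly this calculation (rather than a two-group formula or one with a continuity correction) is not stated, so the numbers it produces are close to, but not identical with, the reported 188 and 120.

```python
import math

def n_one_sample_prop(p0, p1, z_alpha=1.96, z_beta=0.8416):
    """Sample size to detect a shift in a single proportion from p0
    to p1 (two-sided alpha = 0.05, power = 0.8). A standard textbook
    formula, used here only as an approximate cross-check."""
    num = (z_alpha * math.sqrt(p0 * (1 - p0))
           + z_beta * math.sqrt(p1 * (1 - p1))) ** 2
    return math.ceil(num / (p1 - p0) ** 2)
```

For 65% to 75% this gives 169 patients, and for 75% to 85% it gives 133, in the same range as the 188 and 120 reported; the residual gap presumably reflects different assumptions in the original power calculation.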
secondary outcome measures were assessed in the same way. specifically, among the patients who required icu admission, the proportion directly admitted from ed to the icu, and the proportion administered appropriate broad-spectrum empiric antibiotic therapy, were compared. this was specifically recorded as a measure of the degree of recognition of markers of severe illness, which were a key focus of the guideline content. the proportion of patients with a known antibiotic allergy who received that antibiotic was also compared. time to antibiotic administration was recorded as a measure of whether the cdss delayed decision making to any extent. a time series analysis was performed to evaluate changes in concordance of prescribing over time, covering all three time periods. the rate of concordant prescribing was expected to improve over time. change in concordance over time was assessed with a binary logistic model, incorporating month of treatment as a continuous variable. the 'expected' proportion of concordant treatment at any given time then plausibly corresponds to a regression line fitted through the data. we hypothesized that the rate of concordant prescribing after the intervention (in the third time period) would be greater than that expected given the observed trend before the intervention (the first and second time periods). statistical analysis was performed using stata version 9.0. [19] the demographic details of the patients in each of the three time periods are presented in table 1 . during the computerised decision support period (cdss), patients were generally older than those in the other two time periods (a greater proportion were aged >85 years), and less likely to have received antibiotic therapy prior to presentation. 
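the time-series comparison described above (fit a binary logistic model with month of treatment as a continuous predictor, then read the 'expected' proportion of concordant prescribing off the fitted curve) can be sketched as follows. the intercept, slope, and month range below are hypothetical placeholders, not values from the study.

```python
import math

def inv_logit(x):
    """Inverse of the logit link: maps a linear predictor to a proportion."""
    return 1.0 / (1.0 + math.exp(-x))

# hypothetical coefficients from a logistic fit of concordance on month
# (intercept b0, slope b1 per month), fitted on the first two periods only
b0, b1 = 0.2, 0.04

def expected_concordance(month):
    """'expected' proportion of concordant prescribing in a given month,
    extrapolating the pre-intervention trend through later months."""
    return inv_logit(b0 + b1 * month)

# extrapolate through a hypothetical 6-month third period, months 37-42;
# observed monthly concordance is then compared against this expected curve
expected = [expected_concordance(m) for m in range(37, 43)]
```

the design choice here mirrors the paper's logic: the intervention's effect is judged not against a flat baseline but against the improvement that the pre-existing trend alone would have predicted.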
the observed death rate during the cdss period appeared to be higher than for the other two periods, but this was largely explained by differences in the proportion of patients aged over 85 years, and differences in the number of patients who died in the ed for whom supportive therapy was not thought appropriate ([… 3.69], p = 0.02). the estimated effect over time within each cohort did not appear to be substantially altered by the inclusion of these covariates. the effect of change over time was examined in more detail. figure 1 illustrates the percentage of empiric antibiotic prescriptions that were concordant with recommendations per month over the entire period. prescribing patterns improved slowly over time. one year after release of the guideline, in the absence of any promotional efforts (that is, at the end of the baseline period), the concordance rate was around 60%. the change in the proportion of concordant prescribing between the last month of the baseline period and the first month of the academic detailing period was +10.8% over 12 months. the change in the proportion of concordant prescribing between the last month of the academic detailing period and the first month of the computerised decision support period was +21.5% over 5 months. at the end of the study period, the rate of concordant prescribing was high. the first month after the cdss intervention had a very high concordance rate (100%), and thereafter the rate remained around 90%, although the study was not long enough to demonstrate whether this level was maintained beyond 6 months. further analysis was performed to compare the observed results with what would be expected based upon an underlying trend in improvement over time [11]. the observed behaviour in the preceding time periods (over 3 years) was used to predict the expected prescribing behaviour in the latter 6-month period of the study. 
figure 2 shows the three regression lines that best fit the observed rate of concordance over the three separate time periods, and the concordance predicted from a logistic regression model based upon the first and second time periods extrapolated forward through the third time period (the 'expected' concordance). while such a regression line may be sensitive to outliers, there were in fact few outliers in these data, so any such effect is likely to be small. during the first six months of the cdss period, the proportion of patients who were prescribed concordant therapy was greater than would be expected based on the observed trend. a confidence interval around the trend line was determined; based on the existing trend alone, the observed result in the first month of the cdss period corresponded to a p value of 0.06. secondary outcomes were analysed as a measure of the impact of the changes in prescribing on key areas of interest. regarding those patients who required icu support, the likelihood that recommended broad-spectrum empiric antibiotics were received in the ed increased over time. the time from presentation to antibiotic administration did not increase, and was actually found to progressively fall over the three time periods, from 171 to 158 and then 142 minutes, p < 0.01. this study demonstrates the pattern of behavioural change in emergency department clinicians over three and a half years, and describes the changes surrounding different interventions to promote a particular prescribing strategy. in particular, it demonstrates that the implementation of a computerised decision support system was associated with greater improvement in prescribing practices than would have been expected based upon the predictions made from actual prescribing observed over the preceding 3 years. the baseline period provides an example of the rate of change of prescribing behaviour with passive, informal means of information transfer. 
it shows that change is slow, and that the rate of change falls with time. this is consistent with the suggestion that while some clinicians respond to recommendations early, others may be more difficult to access, or more resistant to change, and change may be harder to achieve in the later time periods. the improvement in concordance of prescribing was not dramatic with academic detailing, but appeared to be greatest immediately after the cdss was deployed. it is likely that the interest generated by a novel system, and the attention it received during early education sessions contributed to the high initial concordance. junior staff in this ed rotated on average every three months, which means that the impact of ad may not be sustained as new staff enter the unit. it is important to note that 100% concordance should not be expected in this context. the cap guideline represents a basic recommendation, and individual patients vary from the average. in the case of cap, experienced clinicians would be expected to vary from the guidelines for valid clinical reasons. it is impossible to separate the effect of the computerised decision support system itself, from the effect of the education sessions, which would have increased awareness of the cap guideline and its recommendations. a longer duration of follow up after deployment of the cdss would be required to comment upon the sustainability of any change. the cdss was associated with changes in many of the secondary outcomes of interest that were not demonstrated with academic detailing. in particular, better recognition of patients with severe pneumonia, suggested by increased use of recommended broad-spectrum empiric antibiotics in those requiring icu care was noted. this change occurred without a major increase in the overall rate of cephalosporin use or the average antibiotic costs per patient. 
this may be because the content of the decision support system highlighted this perceived problem, and the advice was consistent for all users. in contrast, with passive transfer and academic detailing, advice might be less consistent. one of the strengths of this paper is that our statistical analysis has taken into account the expectation that prescribing practices would improve over time, in the absence of intervention [11]. this improvement is presumably due to a 'learning effect' as information is disseminated. it demonstrates that trends in prescribing practices were already present before any specific intervention, and these should be acknowledged. this is one of the first papers to compare the impact of a cdss with academic detailing alone in the same clinical setting. to date, academic detailing has been one of the more common strategies used to promote guidelines, but it can be a labour-intensive exercise. the staff members who provided academic detailing attended a two-day training session, and thereafter dedicated a portion of their clinical time to training purposes. the information provided to different staff members may have varied due to time constraints or the interest of the trainer, and particular areas may not have been discussed. the cdss, in contrast, provided consistent advice, and could be accessed whenever required by the clinicians. it required an initial investment of clinicians' time to develop and test the algorithm, but thereafter did not consume any additional staff resources. to date, most evaluations of cdss in hospitals have described large purpose-built systems, often in academic centres in the usa with a specific interest in computerisation [8, 20]. this paper, in contrast, describes a transferable web-based computerised decision support system which can be integrated with many existing clinical databases in other hospitals. 
this study describes a clinical setting that would be familiar to most tertiary australian hospitals. previous reviewers have noted the lack of reports of systems outside of the usa, and this paper therefore provides an important contribution. [21] the major limitation of this study is that the changes were not compared with a separate control group. this study used the same group of clinicians at different time points as controls. in order to do this, the effect of time needed to be taken into account. the predictions of prescribing patterns that we have described are extrapolations beyond the actual data, and make assumptions about patterns of practice remaining similar over time. in this hospital, it would not have been practical to separate control and intervention groups without cross contamination. in addition, such a study might increase clinician awareness and introduce bias affecting prescribing practices.
figure 2: proportion of concordant therapy prescribed over time. the solid lines indicate regression lines that best fit the observed data in each of the three time periods, demonstrating the percentage of empiric antibiotic therapy that was concordant with recommendations per month over time. the broken line is a regression line that best fits the observed data in just the first and second time periods. this line is projected forward over the third time period to demonstrate the 'predicted' concordance if the underlying trend from the first two time periods was to continue. the horizontal arrows demonstrate the timing of the two interventions (academic detailing and cdss). the vertical arrow represents the difference between the 'predicted' concordance and the observed concordance after the computerised decision support system (cdss) intervention. 
although multiple testing issues are a concern where several hypothesis tests are performed, in this study the findings comparing time periods were relatively consistent across different variables and the statistical significance of the effect was generally better than the 0.05 level. it is also important to recognize that the successful implementation of cdss depends heavily on the personnel and the setting, hence separate hospitals or wards do not necessarily provide accurate control groups for comparison. the 'culture' within an institution has important effects on guideline implementation strategies. exploration of the effect of a computerised decision support system on the prescribing practices in other institutions would, therefore, be of interest. this study has demonstrated improved antibiotic prescribing practices in a hospital setting associated with two different strategies for implementation of guidelines. the improvement in prescribing practices was initially greater with the computerised decision support system than with academic detailing alone, although this may represent the effect of increased attention being given to a novel system. further exploration of the role of computerised decision support systems in hospitals is warranted, particularly to assess the sustainability of the effect on clinician decision-making at the point of care.
references:
antibiotic guidelines: improved implementation is the challenge
what has evidence based medicine done for us? bmj
what's the evidence that nice guidance has been implemented? 
results from a national evaluation using time series analysis, audit of patients' notes, and interviews
a simple intervention to improve hospital antibiotic prescribing
improving compliance with hospital antibiotic guidelines: a time-series intervention analysis
evaluating the impact of education by a clinical pharmacist on antibiotic prescribing and administration in an acute care state psychiatric hospital
printed educational materials: effects on professional practice and health care outcomes
a computer-assisted management program for antibiotics and other antiinfective agents
reduction of broad-spectrum antibiotic use with computerized decision support in an intensive care unit
improving empirical antibiotic treatment using treat, a computerized decision support system: cluster randomized trial
interventions to improve antibiotic prescribing practices for hospital inpatients
the experience with web-based computerised decision support systems at the royal melbourne hospital: the search for transferability and maintainability. icaac; washington
therapeutic guidelines: antibiotic. version 12 ed
bts guidelines for the management of community acquired pneumonia in adults
guidelines for the initial management of adults with community-acquired pneumonia: diagnosis, assessment of severity, and initial antimicrobial therapy
empiric management of community-acquired pneumonia in australian emergency departments
a prediction rule to identify low-risk patients with community-acquired pneumonia
community acquired pneumonia: aetiology and usefulness of severity criteria on admission
release 7.0 version. college station, tx: stata corporation
use of computerized decision support systems to improve antibiotic prescribing
clinical decision support systems and antibiotic use
the authors thank ms thao nguyen and ms annmarie sherman for their assistance with data collection and data management. the authors declare no financial conflict of interest. 
all authors have been employed by melbourne health, which now holds the rights to the computerised decision support system evaluated in this study. melbourne health had no influence over the findings described in this study. the authors have no other personal financial interests in the cdss. kb and kt designed the study, and carried out data collection and data analysis. jb and lm provided specific advice regarding statistical evaluation at the study design and analysis stages. as, gb and mk participated in study design and analysis. all authors contributed to the final manuscript. the pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6947/8/35/prepub
key: cord-269197-o9xb30vx authors: osserman, jordan; lê, aimée title: waiting for other people: a psychoanalytic interpretation of the time for action date: 2020-06-10 journal: wellcome open res doi: 10.12688/wellcomeopenres.15959.1 sha: doc_id: 269197 cord_uid: o9xb30vx
typical responses to a confrontation with failures in authority, or what lacanians term 'the lack in the other', involve attempts to shore it up. a patient undergoing psychoanalysis eventually faces the impossibility of doing this successfully; the other will always be lacking. this creates a space through which she can reimagine how she might intervene in her suffering. similarly, when coronavirus forces us to confront the brute fact of the lack in the other at the socio-political level, we have the opportunity to discover a space for acting rather than continuing symptomatic behaviour that increasingly fails to work.
'a strike is precisely that kind of rapport that connects a group to work.' (lacan, 2006, p. 266) we were on the picket lines when the uk woke up to the reality that responding to covid-19 was going to require mass shut-downs. 
we had been thinking and speaking, in university and college union 'teach outs', about how participation in industrial action opens up a particular and generative kind of temporal space. withdrawing one's labour dramatically disrupts the 'on-go' of daily life. one is thrown into a situation where time takes on a different quality: our relationship to the past is called into question ('what has brought me to this point? where have i been placed within the economic structure?'), and we gain a new sense of agency over the future through a rearticulation of the self. we thought this had something in common with the scenario of a patient undergoing psychoanalytic therapy, and we were attempting to tease out relevant parallels. this was the beginning of theorising an aspect of the psychic life of time rooted in a joyful form of collective struggle. it came to a dramatic halt with covid-19, which suspended and indefinitely postponed strike action, while simultaneously throwing the causes of the dispute into sharp relief. what will happen to precarious staff employed on hourly and temporary contracts about to expire, accustomed to regularly moving across the country (or indeed the world) for insecure academic work, in the context of a pandemic and economic crash? how will university pensions, held in investment portfolios, endure a stock market freefall? will we be told, yet again, that 'now is not the time' for rectifying the bame and gender wage gaps, and that taking on unsustainable workloads in the shift to online teaching is simply part of being a team player during a 'chaotic time'? neoliberal economics has shaped our healthcare provision (and indeed our health) for decades, ever since the introduction of 'internal markets' to the nhs, but the extent to which health has been deprioritised in order to create an 'efficient' and profitable health service is now showing its true face. 
prior to the outbreak, hospital occupancy had repeatedly hit all-time record highs, routinely exceeding 95% of capacity, leading 92% of doctors in a bma survey to say that the nhs is 'in a state of year-round crisis' (bma, 2020) 1 . the doctrine of profitability means no margin of 'waste', which means no ability to cope with everyday volumes of patients, much less an actual crisis. it has become increasingly clear that our physical health relies not only on epidemiology but on the questions of politics, economics and analyses of social life more traditionally associated with the humanities and social sciences. the boundary between the physical and the social body has fallen. here, we attempt to offer some suggestions with regard to these extraordinary times. concomitant with widespread fear of illness and economic ruin associated with covid-19, we have observed the emergence of an unusual form of optimism. as governments around the world begin to implement stimulus and rescue packages designed to mitigate the economic effects of the disease (associated in the popular imaginary with wartime spending measures), some are beginning to hope that if we simply 'wait' (or 'hang tight') under quarantine, the government will ensure that things will be 'okay'. things will 'return to normal' eventually (as if returning to the state of affairs that gave rise to this crisis would be desirable), or even (in its more left-wing formulation), with the advent of socialist spending, a new and more equitable social order will arrive. keeping the racial implications of waiting in mind, we might remember colonial injunctions that the time was never right (so colonial subjects had to wait) for independence (chakrabarty, 2009), or in the us context, for emancipation and subsequently civil rights, about which langston hughes wrote a lyric to the 'hesitation blues': 'how long/ have i got to wait?/ can i get it now-/ or must i hesitate?' (hughes, 2001, p. 91). 
so do we wait for these conclusions to sink in? this is a question of time, and also, clearly, a question of power. it is evocative of an early experiment in behavioural psychology, the stanford 'marshmallow experiment', meant to explore the connection between delayed gratification and later successful life outcomes (mischel & ebbesen, 1970) . in the experiment, children were given the choice between an immediate reward (a marshmallow or pretzel), or two rewards if they were willing to wait for 15 minutes. the study, and subsequent others like it, linked children who waited with better test scores, better jobs, even better bodies (casey et al., 2011; mischel et al., 1972; shoda et al., 1990) . in mass media, the results of the study were promoted as a kind of neo-calvinist doctrine of the persevering rich, as well as providing a handy economic allegory about the importance of obedience and trust when facing apparent deprivation. if you follow the rules (and don't, for example, hoard toilet paper), the second marshmallow will be coming along any second now… the researcher who dispenses the marshmallows is playing a role known psychoanalytically as the 'big other' 2 . as theorised by lacan, the big other stands for the place from which people imagine that authority ultimately emanates, a kind of 'necessary illusion' that grounds the otherwise potentially infinite uncertainty of subjective speech and behaviour. ('the other must first of all be considered a locus, the locus in which speech is constituted' [lacan, 1997, p. 274 ].) individuals take on the mantle of the big other insofar as they successfully appear to be a guarantor of futurity: my hands hold the keys to your fate. this is a structural relation between parent and child which, although eventually surmounted to varying degrees, becomes 'transferred' onto figures of authority actual and spectral. 
however, as derek hook clarifies, 'we should not fix the other in any one personage, or view it in a static way as embodied in certain lofty or powerful figures. … we as subjects constantly call upon, reiterate and thus reinstate the other … [it] is a (trans)subjective presupposition which exists only insofar as we act as if it exists' (hook, 2017, p. 23 ). consider the way investors are speaking about 'the market': 'the market right now is really shellshocked'; 'until the market sees some evidence that we've got the virus under control ... there isn't going to be a lot of confidence to buy'. this anthropomorphic creature we call 'the market' is, of course, the sum total of individual investors' financial behaviour. yet, these investors do not decide whether to buy or sell stocks based directly on what they think other investors will do, but through the mechanism of a presupposed, transubjective third: what i think other people think 'the market' is going to do (see tuckett, 2011). in his late teaching, lacan made a crucial emphasis on the notion of a lack in the big other. at certain pivotal moments, we begin to realise that nobody is actually behind the curtain. the 'glue' that holds together a social order starts to melt. the covid-19 crisis is, of course, a prime example of such a moment. it is difficult to overstate just how incompetent and incoherent our political leaders have made themselves out to be. 
from boris johnson boasting that he was shaking hands with covid-19 patients before contracting the virus (the guardian, 2020); to the government denying that it promoted 'herd immunity' (walker, 2020); to cabinet ministers openly contradicting who guidance in order to obscure the government's failure to procure adequate testing, hospital equipment, and ppe (itv news, 2020), it has become clear that there no longer exists a stable authority upon whose pronouncements we can rely (see especially recent exposes in the guardian [conn et al., 2020] and sunday times [calvert et al., 2020]). one of the ways lacanian psychoanalysts approach the question of diagnosis is to consider how a patient responds when he is confronted with a lack in the big other. similarly, with the void in power that has emerged as a consequence of covid-19, we are witnessing a variety of what we might call 'symptomatic' responses that index the coordinates of individuals' psychic structures:
• denial: the big other is perfectly intact. the novel coronavirus isn't any worse than the ordinary flu; people are needlessly panicking due to social media and liberal commentators intent on discrediting our political leaders.
• conspiracy: we are being duped, a malevolent big other is pulling the strings. china designed covid-19 as a biological weapon to destroy the west.
• deferral: give the big other some time, and it will reconstitute itself. things are messy now, but if we just wait it out, they will return to normal. once the government secures enough antibody tests, we can go back to work, the pubs will reopen, our holidays abroad will resume.
• panicked incapacitation: without the big other, we are doomed. the government is sending us all to our deaths and nothing can be done.
in different ways, each of these responses indicates an attempt or wish to shore up the big other, to retrieve some kind of guarantor of the body politic in the midst of its apparent breakdown 3 . 
here we might also consider how a depoliticised portrayal of 'science' itself constitutes a kind of […]. as the clinician thomas svolos notes, 'if psychoanalysis has something to offer here, it is to recognize ... the proper place of the lack in the other, and the very personal nature of the fantasies we make to cover over it, so that people can soberly address the unknown' (svolos, 2020). in other words, there is another approach: proceeding with the understanding that the lack in the other was there from the beginning. in a sense, we all knew this was coming.
3 as feminist and critical race studies engagement with psychoanalysis has highlighted, the way one imagines and relates to the big other and its inconsistencies is mediated through history, symbolic inheritance, and structural positioning along multiple axes of difference including race and gender (e.g. christopher & lane, 1998; fanon, 2008; mitchell, 2015; spillers, 1996). likewise, the fallout from covid-19 has differential impacts; while it is beyond the scope of this piece to explore, it is important to emphasise that the consequences of this disease will exacerbate existing inequalities and forms of oppression.
people were already perceiving that nobody was properly in charge. regularly we received dire warnings about the nhs: waiting times at record highs, hospitals operating beyond capacity. yet our transference towards the nhs as a safe parental figure (or 'brick mother') seemed to persist: people continued to believe that when they fell ill the nhs would provide adequate care (see baraitser & salisbury, 2020; moore, 2020, waiting in pandemic times). similarly, as fixed-term academics, we've long known that universities are simply not offering enough permanent posts for the majority of academics to do their work securely in the sector. yet as a group we nevertheless persist as if we'll all eventually find the right job. 
(ucu's qualitative study on casualisation found an 'inability to project into the future' to be one of the significant mental health consequences of precarious academic work [megoran & mason, 2020, p. 20].) psychoanalytically, the practice of simultaneously accepting and rejecting a traumatic truth (continuing to behave as if it isn't true) is called disavowal, summarised in the phrase: 'i know very well, but nevertheless' (mannoni, 1969). in our daily life before covid-19, we were already constantly surrounded by pronouncements of apocalypse, post-history, crisis and collapse, but these were always warnings, as it were, from 'within' the current coordinates, as society as a whole appeared to continue as normal (see flexer, 2020, this collection). we were both present during the california wildfires of 2018, and despite the massive loss of life and environmental destruction, economic activity continued as usual, with the occasional addition of masks, respirators and so on. this seems to be a model for the way our government initially hoped we would respond to coronavirus. before covid-19, appeals for redistributive policies were easily defused with the familiar language of technocratic neoliberalism: 'the numbers don't add up', 'this is not how it works', etc. the message was: 'your material suffering, while regrettable, does not have any bearing on the immutable laws of the economy'. with the sudden emergence of massive government spending (as we were writing this, the government cancelled £13.3 billion of nhs debt), we're witnessing this logic disappear before our very eyes. this suspension of daily economic activity and the seemingly iron-clad principles that upheld it, alongside the threat of the virus, has interrupted the circuitry that forced us to act as if the big other existed, even when all available evidence indicated otherwise. 
we began from the transformative potential of suspended time in strike activity, which relies on the conscious decision of workers to withhold our labour. now we have entered a different kind of suspended time. from the collectivity of the strike, we have gone into self-isolation, imposed by the current crisis. these are also not mutually exclusive; workers as well as renters have seized this time to strike. in both cases, however, different kinds of suspended time produce an opportunity for the subject to consider her own agency in relation to the lack in the big other. it's common for a patient to seek out analysis because a feeling of enjoyment, or what lacanians call 'jouissance', is somehow no longer available. this instability provides an opportunity to reconsider the relation to the other. in the current moment, we have arrived at a kind of analytic situation through simply suspending the function of enjoyment. the stock market is crashing, but of course in neoliberal capitalism what is also crashing is our jouissance. our typical release valves (going to the pubs, shopping) are gone. amazon is deprioritising shipping anything but 'essentials', with only 'key workers' and urgent tasks allowed 4 . we actually have to live in a time that is supposed to be a 'waiting time', and subjectively experience it as our reality in the here and now. lacan, in 1968, famously criticised student activists for posing what he took to be their hysterical demands to the powers that be: 'you want a master. you will get one' (see frosh, 2009). the protests of '68 were an explosion of activity, which we could counterpose to today's means of reinstating a powerful other through passivity. the act, as theorised in lacanian psychoanalysis, has to be distinguished from 'acting out', or everyday action. the true act has such stakes that it simultaneously abolishes and transforms (in hegelian terms, sublates) the symbolic coordinates of a given social order. so, how and when do we act? 
first, we have to find a way of acting within the context of there being no big other. this means our actions cannot be verified or guaranteed to succeed from the outset. nor, however, can we rely on an authority to predictably stop or punish us in the way transgression is often intended. acts will always appear to us as risks -serious ones. this is even true when they are the self-evidently 'right things to do' in retrospect. the corollary to this lack of divine verification is that the time to act never arrives. even as people fall ill with coronavirus, and are no longer waiting to potentially contract it, the question of what to do is not resolved, it is even intensified. we can say that an act never emerges from nothing, but only appears to in retrospect. we must be careful not to fetishize a moment of rupture for its own sake, or, as baraitser (2017) reminds us, to fail to account for the preexisting context of endurance within an impossible situation upon which any significant rupture depends (the pre-existing 'state of year-round crisis' in the nhs, for example, which has led to this point). these would be further forms of acting out. lastly, an act must be collective but each of us cannot wait for another to start it. those of us advocating for radical emancipatory change cannot simply make our individual appeals to 'socialism' as a self-evident intellectual solution to the problems we face, but must directly intervene to build it and create our own vehicles of mass struggle. only through action can we instate a new symbolic situation. we can envision the collapse of neoliberal capitalism -a system that literally cannot function in the present situation -but without an alternative we will remain in the same symbolic coordinates. people are already beginning to figure out ways of coordinating activity during lockdown without risking their health, as technology creates an opportunity for greater international solidarity. 
The emergence of 'mutual aid' groups across the country is an example of people coordinating responses to the crisis in the absence of adequate government provision. It is a first step but, at present, relies on the voluntary goodwill of people able to share what little they have with each other. The next step would be recognising the production and planning of resources in society (those zones where our intervention was once strictly forbidden) and seizing our right to provide directly for people's material needs rather than obeying market logic. (It is a consequence of attempting to act that one may come to embody the big Other. This is a very interesting problem and should be dealt with in a subsequent essay.) We need to push our governments to value human life over economic gain, but we must also recognise that our own activity is what will make this possible, not the benevolence of a prime minister. Revisiting the period of post-war reforms that delivered the NHS should make this clear. While claiming to support the principle of a health service in theory, Churchill's opposition voted against the establishment of the NHS over a dozen times, including at second and third reading. The NHS was founded despite strong opposition from the Tories and the right-wing press, both of whom now praise it as a national achievement.⁵ None of the institutions we rely on now, especially during this crisis, came about because they were handed down from above. They were formed through processes of social antagonism. This poses the question: why do people today view themselves as outside of the historical process? Attempting to pose these questions to ourselves as well, we decided to act, to directly engage with universities to demand two years' extension of employment for all casualised staff: a #CoronaContract (https://coronacontract.org/). We have reached a point where continuing within the existing framework of society is no longer possible.
The question is: will we desperately search for another way to shore up the big Other, relying on symptomatic behaviour even as it fails to work, or can we find a way to act?

All data underlying the results are available as part of the article and no additional source data are required.

School of Political Science, Aristotle University of Thessaloniki, Thessaloniki, Greece

Obviously, the effects of COVID-19 extend far beyond the biological domain. They encompass many biopolitical, psychosocial and (psycho)political aspects in addition to health and welfare stricto sensu. This paper attempts to map and illuminate in an innovative way some of these effects. In particular, special emphasis is placed on the collapse of guarantees and on the confrontation with failures in authority that this crisis involves (as crises often do), a rubric that merits broader discussion. Such failures are traced and framed on a variety of levels (economics, power, time, psychic life, etc.) and are then cogently theorized, through Lacanian theory, as encounters with the so-called 'lack in the Other', meaning the various instances in which one is bound to feel and, perhaps, have the opportunity to register the cracks in the fantasmatic consistency and the ultimately arbitrary (contingent) foundations of our socio-symbolic order. How are such encounters usually dealt with? And how were they negotiated within the context of the COVID-19 crisis? In other words, are we doomed to reproduce a Sisyphean struggle to cover over this lack, which continuously reappears? Perhaps psychoanalysis can point to an alternative type of agency beyond this vicious circle, thus enabling a different ethos of political acting. All in all, the paper deals with a highly original, topical and timely theme. It performs an analysis which is simultaneously accessible and rigorous, straightforward and conceptually sophisticated (drawing on a very pertinent Lacanian apparatus).
The argument is indeed challenging, ambitious, witty and to the point. Thus, the paper does contribute significantly to the state of the art in this field and is bound to influence the ongoing public debate in revealing ways. What is particularly suggestive is the axis of temporality, which is highlighted at various turns of the argumentation.

Is the study design appropriate and is the work technically sound?

References:
Baraitser L: Enduring Time. Bloomsbury Publishing. 2017.
'Containment, delay, mitigation': waiting and care in the time of a pandemic.
The Seminar of Jacques Lacan. Book III, The Psychoses.
Je sais bien, mais quand même ('I know very well, but all the same').
Second class academic citizens: the dehumanising effects of casualisation in higher education.
Mischel W, Ebbesen EB, Zeiss AR: Cognitive and attentional mechanisms in delay of gratification.
Debating sexual difference, politics, and the unconscious: with discussant section by Jacqueline Rose.
'Containment and delay': COVID-19, the NHS and high-risk patients.
Bloomberg once suggested farming, factory work don't require much 'gray matter'.
Spillers HJ: 'All the things you could be by now if Sigmund Freud's wife was your mother': psychoanalysis and race. Boundary 2.
Svolos T: Coronavirus and the hole in the big Other. The Lacanian Review. 2020.
The Guardian: 'I shook hands with everybody'.
Revisiting the marshmallow test: a conceptual replication investigating links between early delay of gratification and later outcomes.
Webster C: Conflict and consensus: explaining the British health service.

If applicable, is the statistical analysis and its interpretation appropriate? Not applicable
Are all the source data underlying the results available to ensure full reproducibility? No source data required
Are the conclusions drawn adequately supported by the results?
Yes

Competing interests: No competing interests were disclosed.
Reviewer expertise: psychoanalysis, Freud, Lacan, discourse theory, political theory, populism
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Reviewer report, 05 August 2020. https://doi.org/10.21956/wellcomeopenres.17503.r39809
© 2020 McGowan T. This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

University of Vermont, Burlington, VT, USA

'Waiting for Other People' outlines the effect of the coronavirus pandemic on the contemporary political situation. It points out that one of the main effects of the outbreak is that it exposes the lack in the Other, or the failure of the big Other. Social authority is unable to deal with the disease, and as a result, subjects' investment in the figure of the big Other comes into question. The most widespread response, the authors claim, is the attempt to shore up the big Other, to obscure its lack. But at the same time, the virus presents us with another opportunity: the possibility of the genuine political act that occurs through the Other's failure.

This essay represents an outstanding intervention in the psychoanalysis of the effects of the pandemic. I have read several psychoanalytic accounts of our political situation today, and this is the best. I don't view any changes as necessary.

If applicable, is the statistical analysis and its interpretation appropriate? Not applicable
Are all the source data underlying the results available to ensure full reproducibility? Yes
Competing interests: No competing interests were disclosed.
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

key: cord-121935-uilzmmxu
authors: Mo, Baichuan; Feng, Kairui; Shen, Yu; Tam, Clarence; Li, Daqing; Yin, Yafeng; Zhao, Jinhua
title: Modeling epidemic spreading through public transit using time-varying encounter network
date: 2020-04-09

Passenger contact in public transit (PT) networks can be a key mediator in the spreading of infectious diseases. This paper proposes a time-varying weighted PT encounter network to model the spreading of infectious diseases through PT systems. Social activity contacts at both local and global levels are also considered. We select the epidemiological characteristics of coronavirus disease 2019 (COVID-19) as a case study, along with smart card data from Singapore, to illustrate the model at the metropolitan level. A scalable and lightweight theoretical framework is derived to capture the time-varying and heterogeneous network structures, which makes it possible to solve the problem at the whole-population level with low computational costs. Different control policies from both the public health side and the transportation side are evaluated. We find that people's preventative behavior is one of the most effective measures to control the spreading of epidemics. From the transportation side, partial closure of bus routes helps to slow down, but cannot fully contain, the spreading of epidemics. Identifying 'influential passengers' using the smart card data and isolating them at an early stage can also effectively reduce the epidemic spreading.

Infectious diseases spread through social contacts, such as at schools (Salathé et al., 2010; Litvinova et al., 2019) or conferences (Stehlé et al., 2011).
Past studies proved that human mobility networks, such as air transportation (Colizza et al., 2006; Balcan et al., 2009) or waterways (Gatto et al., 2012), can transport pathogens or contagious individuals to widespread locations, leading to the outbreak of epidemics. Recently, the outbreak of coronavirus disease 2019 confirmed the strong connection between human mobility networks and disease dynamics. The first case of COVID-19 was reported in Wuhan, China, at the beginning of December 2019, and the disease then quickly spread to the rest of China through airline and high-speed rail networks during the Spring Festival travel season (Wu et al., 2020). Beyond the transmission of pathogens to destination communities via the intercity mobility network, the densely populated urban public transit (PT) network may also become a key mediator in the spreading of influenza-like epidemics, with public transport carriers being the locations of transmission (Sun et al., 2013). The PT system in large metropolitan areas plays a key role in serving the majority of urban commuting demand between highly frequented locations, as human trajectories present a high degree of spatiotemporal regularity following simple, reproducible patterns (Gonzalez et al., 2008). By the end of 2017, the annual patronage of urban metro systems worldwide had increased from 44 billion in 2013 to 53 billion, and in Asia the systems carry more than 26 billion passengers a year (International Association of Public Transport (UITP), 2018). The urban PT system is often framed as a key solution for building sustainable cities, with benefits for the environment, the economy, and social effectiveness (Miller et al., 2016). But the indoor environment created by crowded metro carriages or buses can also let an infected individual easily transmit the pathogen to others via droplet or airborne routes (Xie et al., 2007; Yang et al., 2009).
In recent years, scholars began to turn their attention to the spreading of epidemics through the urban PT network. Rooted in the regularity of people's daily behavior, individuals with repeated encounters in the PT network are found to be strongly connected over time, resulting in a dense contact network across the city (Sun et al., 2013). Such mobility features create great risks for the outbreak of infectious diseases spreading through bus and metro networks to the whole metropolitan area (Sun et al., 2014; Liu et al., 2019). Based on the contact network developed by Sun et al. (2013), a variety of human contact network structures have been proposed to characterize the movement and encounters of passengers in the PT system, which are then used to model the epidemic spreading among passengers (Bóta et al., 2017a,b; Hajdu et al., 2019; El Shoghri et al., 2019). However, previous studies of human contacts in PT systems often use a static passenger contact network, discarding the time-varying nature of encounters. The aggregation of the time-varying edges into a static version of the network offers useful insights but can also introduce bias (Perra et al., 2012; Coviello et al., 2016). Besides, most previous studies focused on understanding epidemic spreading and identifying the risks in the PT system; few have discussed PT operation-related epidemic control strategies. PT operation plays an important role in controlling epidemics. Recently, a variety of epidemic control strategies in PT systems have been implemented in response to the outbreak of COVID-19 since late January 2020. For example, in Wuhan, almost all PT services have been shut down since Jan. 24th. In Wuxi, another large Chinese city, apart from 22 arterial bus routes that kept running with shortened operating hours, all other PT services (roughly 92% of bus routes) were suspended from Feb. 1st.
In Milan, Italy, PT services were still in operation, but the suspension of PT has been officially proposed with the rapid surge of COVID-19 cases in the Lombardy area. The impacts of these strategies and of other possible PT operation strategies (e.g., distributing passengers' departure times, limiting maximum bus load), however, have seldom been carefully explored. To fill this gap, this study proposes a time-varying weighted PT encounter network (PEN) to model the spreading of epidemics through urban PT systems. Social activity contacts at both local and global levels are also considered. We select the epidemiological characteristics of COVID-19 as the case study, along with high-resolution smart card data from Singapore, to illustrate the model at the metropolitan level. Different control policies from both the public health side and the transportation side are evaluated. In this work, we do not attempt to reproduce or predict the patterns of COVID-19 spreading in Singapore, where a variety of outbreak prevention and control measures have been implemented (Ministry of Health (MOH), 2020) that render most epidemic prediction models invalid. Instead, since the PT systems in many cities share a similar contact network structure despite differences in urban structure, PT network layout and individual mobility patterns (Qian et al., 2020), this study employs the smart card data and the PT network of Singapore as an approximation of the universal PEN, to better understand the general spatiotemporal dynamics of epidemic spreading over the PT system, and to evaluate the potential effects of various measures for epidemic prevention in PT systems, especially from the PT operation angle. The main contribution of this paper is threefold:
• Propose a PT system-based epidemic spreading model using smart card data, where the time-varying contacts among passengers at the individual level are captured.
• Propose a novel theoretical solving framework for the epidemic dynamics with time-varying and heterogeneous network structures, which enables solving the problem at the whole-population level with low computational costs.
• Evaluate various potential epidemic control policies from both the public health side (e.g., reducing the infectious rate) and the transportation side (e.g., distributing departure times, closing bus routes).
The rest of the paper is organized as follows. In section 2, we elaborate on the methodology for establishing the contact networks and solving the epidemic transmission model. Section 3 presents a case study using the smart card data in Singapore to illustrate the general spatiotemporal dynamics of epidemic spreading through the PT system. In section 4, conclusions are drawn and policy implications are offered. The majority of previous studies investigated the epidemic process on a static network, where the spreading of the disease is virtually frozen on the time scale of the contagion process. However, static networks are only approximations of the real interplay between time scales. Considering daily mobility patterns, no individual is in contact with all of their friends simultaneously at all times. On the contrary, contacts change in time, often on a time scale shorter than that of the whole spreading process. Real contact networks are thus inherently dynamic, with connections appearing, disappearing, and being rewired on different characteristic time scales, and are better represented in terms of a temporal or time-varying network. Therefore, modeling the epidemic process on PT should be based on a time-varying contact network. Although we focus on the contagion process through PT, passengers' social-activity (SA) contacts beyond riding the same vehicles are not negligible.
In this study, two components of the contact network are considered: 1) a PEN that is designed to capture the interaction of passengers on PT, and 2) an SA contact network that captures all other interactions among people. PT passengers' encounter patterns have been studied by Sun et al. (2013) through an encounter network, an undirected graph with each node representing a passenger and each edge indicating a pair of passengers who have stayed in the same vehicle. The network is constructed by analyzing smart card data, which include passengers' tap-in/tap-out times, locations, and corresponding bus IDs. Since the PEN provides direct contact information between passengers, it is an ideal tool to investigate epidemic spreading through PT. Extending the work by Sun et al. (2013), we propose a time-varying weighted PEN to model the epidemic process. We first evenly divide the whole study period into time intervals $t = 1, ..., T$, each of length $\tau$. For a specific time interval $t$, consider a weighted graph $G_t(\mathcal{N}, E_t, W_t)$, where $\mathcal{N} = \{i : i = 1, ..., N\}$ is the node set with each node representing an individual and $N$ the total number of passengers in the system; $E_t$ is the edge set and $W_t$ is the weight set. The edge between $i$ and $j$ ($i, j \in \mathcal{N}$), denoted $e^t_{ij}$, exists if $i$ and $j$ have stayed in the same vehicle during time interval $t$. The weight of $e^t_{ij}$, denoted $w^t_{ij}$, is defined as $w^t_{ij} = d^t_{ij}/\tau$, where $d^t_{ij}$ is the duration for which $i$ and $j$ stay in the same vehicle during time interval $t$. By definition, $0 \le w^t_{ij} \le 1$. The weight captures the fact that epidemic transmission is related to the duration of contact. In addition to contacts during rides on PT, passengers may also contact each other during their daily social activities. Given the heterogeneity of passengers' spatial distributions, people may have various possibilities of contact with different people.
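The PEN snapshot for one interval can be sketched directly from tap-in/tap-out records. The record layout, identifiers, and function name below are illustrative assumptions rather than the paper's implementation; the weight computation follows the definition $w^t_{ij} = d^t_{ij}/\tau$, with each ride clipped to the interval before co-presence durations are accumulated.

```python
from collections import defaultdict
from itertools import combinations

TAU = 3600.0  # interval length in seconds (tau = 1 h, as in the worked example)

# hypothetical smart-card records: (passenger_id, vehicle_id, board_s, alight_s)
records = [
    ("p1", "bus7", 27000, 30600),   # 07:30-08:30
    ("p2", "bus7", 27000, 29400),   # 07:30-08:10
    ("p3", "bus7", 28800, 32400),   # 08:00-09:00
]

def pen_snapshot(records, t):
    """Edge weights w^t_ij = d^t_ij / tau for the interval [t*tau, (t+1)*tau)."""
    lo, hi = t * TAU, (t + 1) * TAU
    rides = defaultdict(list)  # vehicle -> [(passenger, start, end)] clipped to interval
    for pid, vid, board, alight in records:
        s, e = max(board, lo), min(alight, hi)
        if s < e:
            rides[vid].append((pid, s, e))
    weights = {}
    for onboard in rides.values():
        for (pi, si, ei), (pj, sj, ej) in combinations(onboard, 2):
            d = max(0.0, min(ei, ej) - max(si, sj))  # co-presence duration d^t_ij
            if d > 0:
                key = tuple(sorted((pi, pj)))
                weights[key] = weights.get(key, 0.0) + d / TAU
    return weights

w = pen_snapshot(records, t=7)  # interval 07:00-08:00
```

Here p1 and p2 share the bus for the last half hour of the 07:00-08:00 interval, so their edge weight is 0.5; p3 boards exactly at the interval boundary and contributes no edge in this snapshot.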
However, capturing the real connectivity of passengers in social activities requires a richer dataset (e.g., mobile phone or GPS data), which is beyond the scope of this research. In this study, we made the following assumptions to build the SA contact network.
• Global interaction: passengers may interact with any other individual in the system during a time interval $t$ with a uniform probability $\theta_g$.
• Local interaction: passengers with the same origins or destinations of PT trips may interact with each other during time interval $t$ with a uniform probability $\theta_l$.
Since local interaction is more intense than global interaction, we have $\theta_l > \theta_g$. For the global interaction, we assume that the contact time for all connected individuals in a specific time interval is $\tau$ if there are no PT and local contacts between them; otherwise, the contact time is reduced by the PT and local contact duration (CD) in that time interval. For the local interaction, the contact time calculation is illustrated by the following example. Consider passenger $i$ with PT trip sequence $\{(o^i_{t_1}, d^i_{t_2}), (o^i_{t_3}, d^i_{t_4}), \dots\}$, where $t_k$ is the time when the passenger boards or alights a vehicle, and $o^i_{t_k}$ and $d^i_{t_k}$ are the trip origin and destination, respectively. The trip sequence is defined as a sequence of consecutive PT trips in which every adjacent trip pair has an interval of fewer than 24 h (e.g., $t_3 - t_2 < 24$ h). We call the interval between two consecutive PT trips (e.g., $[t_2, t_3]$) the activity time hereafter. Since passengers may not stay in the same place between two consecutive trips, we may have $d^i_{t_2} \ne o^i_{t_3}$. We further assume that from time $t_2$ to $t_3$, the passenger spends half of the activity time at $d^i_{t_2}$ and half of the activity time at $o^i_{t_3}$. Suppose passenger $j$ has a trip sequence $\{(o^j_{t'_1}, d^j_{t'_2}), (o^j_{t'_3}, d^j_{t'_4}), \dots\}$, with $d^j_{t'_2} = o^i_{t_3}$, and the overlapping time between intervals $[t_2, t_3]$ and $[t'_2, t'_3]$ is not zero.
This means passengers $i$ and $j$ may have local contact because they have stayed in the same place $d^j_{t'_2} = o^i_{t_3}$ (by definition, the probability of having local contact is $\theta_l$). Recall that we assume passengers spend half of the activity time at a specific origin or destination. If they have a local contact, the CD between passengers $i$ and $j$ is calculated as half of the overlapping time between interval $[t_2, t_3]$ and interval $[t'_2, t'_3]$. This calculation gives us the total CD of $i$ and $j$ at the local interaction level. For example, if $t_2 < t'_2 < t_3 < t'_3$, the total local CD between $i$ and $j$ is $\frac{1}{2}(t_3 - t'_2)$. Analogously to the PEN, the total local CD can be mapped to each time interval. For example, suppose $t^*$ is the time boundary between interval $t$ and interval $t+1$, and $t^* - \tau < t'_2 < t^* < t_3 < t^* + \tau$. Denote the local CD between $i$ and $j$ for time interval $t$ as $\tilde{d}^{l,t}_{ij}$ ($0 \le \tilde{d}^{l,t}_{ij} \le \tau$); then $\tilde{d}^{l,t}_{ij} = \frac{1}{2}(t^* - t'_2)$ and $\tilde{d}^{l,t+1}_{ij} = \frac{1}{2}(t_3 - t^*)$. We denote the SA contact network as $\tilde{G}_t(\mathcal{N}, \tilde{E}_{g,t}, \tilde{E}_{l,t}, \tilde{W}_{g,t}, \tilde{W}_{l,t})$, where $\tilde{E}_{g,t}$ is the edge set of global interaction and $\tilde{E}_{l,t}$ is the edge set of local interaction. The edge of global interaction between any $i$ and $j$, denoted $\tilde{e}^{g,t}_{ij}$, exists with probability $\theta_g$ for all $i, j \in \mathcal{N}$. When $i$ and $j$ share the same PT trip origins or destinations during time interval $t$, the edge of local interaction between $i$ and $j$ ($\tilde{e}^{l,t}_{ij}$) exists with probability $\theta_l$. $\tilde{W}_{g,t}$ and $\tilde{W}_{l,t}$ are the weight sets for global and local interaction edges, respectively. From the discussion above, we have $\tilde{w}^{l,t}_{ij} = \tilde{d}^{l,t}_{ij}/\tau$ for all $\tilde{w}^{l,t}_{ij} \in \tilde{W}_{l,t}$, and $\tilde{w}^{g,t}_{ij} = 1 - \tilde{w}^{l,t}_{ij} - w^t_{ij}$ for all $\tilde{w}^{g,t}_{ij} \in \tilde{W}_{g,t}$. By definition, the contacts from the three sub-networks (local, global, and PT) are mutually exclusive. To illustrate the proposed epidemic contact network, we present a five-passenger system with a single bus route ($N = 5$) in figure 1. We consider the time period from 7:00 to 10:00 with $\tau = 1$ h.
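The half-activity-time rule for the local CD can be sketched as a small helper. The tuple layout and function name are hypothetical; the helper implements only the simplified case discussed above, in which both passengers dwell at the same stop and the local CD is half the overlap of their activity windows.

```python
def local_cd(act_i, act_j):
    """Local contact duration between two activity windows.

    act = (location, start, end). Each passenger is assumed to spend half of
    the activity time at the shared location, so the local CD is half of the
    overlap of the two windows when the locations coincide, else zero."""
    loc_i, s_i, e_i = act_i
    loc_j, s_j, e_j = act_j
    if loc_i != loc_j:
        return 0.0
    overlap = max(0.0, min(e_i, e_j) - max(s_i, s_j))
    return 0.5 * overlap
```

For the text's case $t_2 < t'_2 < t_3 < t'_3$, the overlap is $t_3 - t'_2$ and the helper returns $\frac{1}{2}(t_3 - t'_2)$, matching the formula above.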
For illustrative purposes, we neglect the global interaction and set the local interaction probability $\theta_l = 1$. The bottom of the figure shows the passengers' trajectories along the bus route. In time interval $t = 1$, at 7:30, passengers 1, 2, and 5 board the bus; since they share the same origin and are also in the same bus during $t = 1$, we have edges of the PEN (colored green) and edges of the local interaction of the SA contact network (colored orange). Accordingly, $d^1_{12} = d^1_{25} = d^1_{15} = 0.5$ h, and the weights are $w^1_{12} = w^1_{25} = w^1_{15} = 0.5/1 = 0.5$. Meanwhile, from the trajectories at $t = 2$, we notice that passengers 3 and 4 also share the same origin; thus we also have an SA contact edge between 3 and 4 at $t = 1$. The local CD for passengers 1, 2, and 5 at time interval $t = 1$ is $\tilde{d}^{l,1}_{12} = \tilde{d}^{l,1}_{25} = \tilde{d}^{l,1}_{15} = \frac{1}{2} \times 0.5 = 0.25$ h. The factor $\frac{1}{2}$ comes from the assumption that these passengers spend only half of their time around this bus station (see section 2.1.2). Hence, the corresponding weights for the SA contact network are $\tilde{w}^{l,1}_{12} = \tilde{w}^{l,1}_{25} = \tilde{w}^{l,1}_{15} = 0.25/1 = 0.25$. Similarly, $\tilde{d}^{l,1}_{34} = \frac{1}{2} \times 1 = 0.5$ and $\tilde{w}^{l,1}_{34} = 0.5$. The weights for $t = 2$ and $t = 3$ are calculated in the same way. The epidemic transition model is independent of the network representation: we can model various infectious diseases based on the proposed PEN using different epidemic transition frameworks. For the case study considered here (COVID-19), we employ the susceptible-exposed-infectious-removed (SEIR) diagram. The SEIR model is generally used to model influenza-like illness and other respiratory infections. For example, Small and Tse (2005) used this model to numerically study the evolution of the severe acute respiratory syndrome (SARS), which shares significant similarities with COVID-19.
We first divide the population into four classes/compartments depending on the stage of the disease (Anderson et al., 1992; Diekmann and Heesterbeek, 2000; Keeling and Rohani, 2007): susceptible (denoted by S, those who can contract the infection), exposed (E, those who have been infected by the disease but cannot yet transmit it, or can transmit it only with a low probability), infectious (I, those who have contracted the infection and are contagious), and removed (R, those who are removed from the propagation process, either because they have recovered from the disease with immunization or because they have died). By definition, $\mathcal{N} = S \cup E \cup I \cup R$, where $\mathcal{N}$ is the set of the whole population. The diagram of the SEIR model is shown in figure 2; it shows how individuals move through each compartment. The infectious rate, $\beta$, controls the rate of spread and is associated with the probability of transmitting the disease between a susceptible (S) and an exposed (E) individual. The incubation rate, $\gamma$, is the rate at which exposed individuals (E) become infectious (I). The removed rate, $\mu$, is the combination of the recovery and death rates. The SEIR model typically assumes that recovered individuals will not be infected again, given the immunization obtained. It is worth noting that this study focuses on the early stage of an epidemic process, where the impact of outside factors on $\mathcal{N}$ (e.g., births and natural deaths) is not considered. For epidemic process models, the key quantities of interest are the steady state, the epidemic threshold, and the reproduction number. According to Pastor-Satorras et al. (2015), the number of infected individuals in the SEIR model always tends to zero in the long run (see figure 3a). This is obvious from the SEIR diagram (figure 2), where R is the only recurrent state.
The basic reproduction number, denoted $R_0$, is defined as the average number of secondary infections caused by a primary case introduced into a fully susceptible population (Anderson et al., 1992). In the standard SEIR model, $R_0 = \beta/\mu$. The epidemic threshold is in many cases defined based on the value of $R_0$: when $R_0 < 1$, the number of infectious individuals tends to decay exponentially, so there is no epidemic; when $R_0 > 1$, the number of infectious individuals can grow exponentially, with an outbreak of the epidemic (see figure 3b). Typical epidemic modeling falls into two categories: the individual-based approach and the degree-based approach. Generally, the individual-based approach models epidemic transmission at the individual level, while the degree-based approach captures the infection process at the group level, where each group comprises the nodes (individuals) with the same degree. Since the PEN characterizes human interaction behaviors at the individual level, the individual-based framework is used in this study. We denote by $S_{i,t}$, $E_{i,t}$, $I_{i,t}$, and $R_{i,t}$ the Bernoulli random variables describing whether individual $i$ is in class S, E, I, or R at time interval $t$ (yes = 1). By definition, $S_{i,t} + E_{i,t} + I_{i,t} + R_{i,t} = 1$ for all $i$ and $t$. Let $P(X_{i,t} = 1) = p^X_{i,t}$, where $X \in \{S, E, I, R\}$ and $\sum_X p^X_{i,t} = 1$. Since the contact network is defined in discrete time, we can describe the epidemic process of the SEIR model as a discrete Markov process with specific transition probabilities. To match the epidemiological characteristics of COVID-19, we assume that exposed individuals can also infect others, based on a recent finding (Rothe et al., 2020), which is not the common assumption in the SEIR model.
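The threshold behaviour around $R_0 = \beta/\mu$ can be illustrated with a minimal population-level discrete SEIR iteration. This is a sketch, not the paper's individual-based model: parameter values are arbitrary, and exposed individuals are treated as non-infectious here, unlike in the full model.

```python
def seir(beta, gamma, mu, i0=1e-4, steps=2000):
    """Discrete-time SEIR on population fractions; returns (peak infectious
    fraction, final removed fraction). R0 = beta / mu."""
    s, e, i, r = 1.0 - i0, 0.0, i0, 0.0
    peak = i
    for _ in range(steps):
        new_e = beta * s * i                 # S -> E via contact with I
        s, e, i, r = (s - new_e,
                      e + new_e - gamma * e,  # E -> I at rate gamma
                      i + gamma * e - mu * i, # I -> R at rate mu
                      r + mu * i)
        peak = max(peak, i)
    return peak, r

# outbreak when R0 > 1, exponential decay when R0 < 1
peak_hi, final_hi = seir(beta=0.4, gamma=0.2, mu=0.1)    # R0 = 4
peak_lo, final_lo = seir(beta=0.05, gamma=0.2, mu=0.1)   # R0 = 0.5
```

With $R_0 = 4$ the infectious fraction rises to a visible peak before decaying to zero (R is the only recurrent state); with $R_0 = 0.5$ the seed infection simply dies out.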
Let $\beta_I$ be the probability that a susceptible individual $i \in S$ is infected by an infectious individual $j \in I$ in time interval $t$ if $i$ and $j$ contact each other (either by PT or SA) for the entire time interval. Since the actual transmission probability is related to the interaction duration, we can write the actual probability of $i$ being infected by $j$ as
$\beta^I_{i,j,t} = h(w^t_{ij}, \beta_I)\, a^t_{ij} + h(\tilde{w}^{l,t}_{ij}, \beta_I)\, \tilde{a}^{l,t}_{ij} + h(\tilde{w}^{g,t}_{ij}, \beta_I)\, \tilde{a}^{g,t}_{ij}$,
where $h(\cdot, \cdot)$ is a function describing the actual transmission probability with respect to CD. It can take the form of a survival function (e.g., exponential, Weibull) or a linear function (i.e., $h(w, \beta) = w\beta$, which is used in the case study). $a^t_{ij}$ ($\tilde{a}^{l,t}_{ij}$, $\tilde{a}^{g,t}_{ij}$) is an indicator variable showing whether $e^t_{ij}$ ($\tilde{e}^{l,t}_{ij}$, $\tilde{e}^{g,t}_{ij}$) exists. It is worth noting that $a^t_{ij}$ is a known constant, but $\tilde{a}^{l,t}_{ij}$ and $\tilde{a}^{g,t}_{ij}$ are Bernoulli random variables: $\tilde{a}^{l,t}_{ij} \sim B(l^t_{ij}\theta_l)$ and $\tilde{a}^{g,t}_{ij} \sim B(\theta_g)$, where $l^t_{ij} = 1$ if $i$ and $j$ share the same origin or destination at time interval $t$ and $l^t_{ij} = 0$ otherwise (see section 2.1.2 for details). Therefore, taking expectations over the random edges,
$\mathbb{E}[\beta^I_{i,j,t}] = h(w^t_{ij}, \beta_I)\, a^t_{ij} + h(\tilde{w}^{l,t}_{ij}, \beta_I)\, l^t_{ij}\theta_l + h(\tilde{w}^{g,t}_{ij}, \beta_I)\, \theta_g$.
Similarly, we define $\beta_E$ as the probability that a susceptible individual $i \in S$ is infected by an exposed individual $j \in E$ in time interval $t$ if $i$ and $j$ contact each other for the entire time interval ($\beta_E \le \beta_I$). The actual transmission probability considering interaction duration is
$\beta^E_{i,j,t} = h(w^t_{ij}, \beta_E)\, a^t_{ij} + h(\tilde{w}^{l,t}_{ij}, \beta_E)\, \tilde{a}^{l,t}_{ij} + h(\tilde{w}^{g,t}_{ij}, \beta_E)\, \tilde{a}^{g,t}_{ij}$.
Note that if $i$ and $j$ have been in contact, we assume the transmission probability depends only on the CD; the variation of transmission probability due to spatial distribution is neglected. Capturing spatial factors requires dedicated transmission models (e.g., the Wells-Riley model (Wells et al., 1955)) and an assumption about passengers' spatial distribution in a vehicle, which can be addressed in future work. Let $\gamma$ be the probability of E → I, which is unrelated to the network.
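Under the linear choice $h(w, \beta) = w\beta$, the expected pairwise transmission probability over the random SA edges can be sketched as below. The function names and argument layout are assumptions for illustration; the three terms correspond to the mutually exclusive PT, local, and global contacts.

```python
def linear_h(w, beta):
    """Linear dose-response h(w, beta) = w * beta used in the case study."""
    return w * beta

def expected_infection_prob(w_pt, a_pt, w_loc, l_ij, theta_l, w_glob, theta_g, beta):
    """E[beta_{i,j,t}]: the PT edge indicator a_pt is known from smart card
    data, while local/global SA edges are Bernoulli with success rates
    l_ij * theta_l and theta_g respectively."""
    return (linear_h(w_pt, beta) * a_pt
            + linear_h(w_loc, beta) * l_ij * theta_l
            + linear_h(w_glob, beta) * theta_g)

# shared ride for half the interval, shared stop, small global mixing rate
p = expected_infection_prob(w_pt=0.5, a_pt=1, w_loc=0.25, l_ij=1,
                            theta_l=0.2, w_glob=0.25, theta_g=0.01, beta=0.1)
```

The same helper serves for $\beta_E$ by passing the smaller rate $\beta_E$ in place of $\beta_I$.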
$\mu$ is the probability of I → R. These notations and the epidemic transmission mechanism allow us to write the following system equations (eqs. 4-7):
$p^S_{i,t+1} = p^S_{i,t} - \sum_{j \in \mathcal{N}} \big[\beta^I_{i,j,t}\, P(S_{i,t}=1, I_{j,t}=1) + \beta^E_{i,j,t}\, P(S_{i,t}=1, E_{j,t}=1)\big]$ (4)
$p^E_{i,t+1} = p^E_{i,t} + \sum_{j \in \mathcal{N}} \big[\beta^I_{i,j,t}\, P(S_{i,t}=1, I_{j,t}=1) + \beta^E_{i,j,t}\, P(S_{i,t}=1, E_{j,t}=1)\big] - \gamma\, p^E_{i,t}$ (5)
$p^I_{i,t+1} = p^I_{i,t} + \gamma\, p^E_{i,t} - \mu\, p^I_{i,t}$ (6)
$p^R_{i,t+1} = p^R_{i,t} + \mu\, p^I_{i,t}$ (7)
Calculating $P(S_{i,t} = 1, X_{j,t} = 1)$ requires the joint distribution of $S_{i,t}$ and $X_{j,t}$, which is usually unavailable. According to the individual-based mean-field approximation, we can assume that the states of neighbors are independent (Hethcote and Yorke, 2014; Chakrabarti et al., 2008; Sharkey, 2008, 2011). Hence, this leads to (eqs. 8-9)
$P(S_{i,t}=1, I_{j,t}=1) \approx p^S_{i,t}\, p^I_{j,t}$ (8)
$P(S_{i,t}=1, E_{j,t}=1) \approx p^S_{i,t}\, p^E_{j,t}$ (9)
By plugging eqs. 8 and 9 into eqs. 4 and 5, we obtain a new group of solvable system equations. Unlike the typical SEIR model, the proposed epidemic model with the individual-based PEN poses two challenges. First, the infection rate in a typical SEIR model is defined at the population level (i.e., under a homogeneous network assumption), whereas in the proposed framework we consider one-to-one contagion behaviors at the individual level with heterogeneous contact networks. The heterogeneity is difficult to characterize by probabilistic models (e.g., degree distributions) because the contact structures are known from the smart card data. Second, the proposed framework rests on a time-varying network, in which the contagion behaviors and interacting individuals vary over time. One solution method for eqs. 4-7 is simulation. As with many other complex stochastic processes, simulation can output approximate values of $p^X_{i,t}$ for all $X \in \{S, E, I, R\}$ and $t$. The simulation process is described in Algorithm 1, where $\tilde{p}^X_t$ ($X \in \{S, E, I, R\}$) is the proportion of people in class $X$ during time interval $t$. Initialization assigns some seed infectious people to the system. At each time step $t$, we calculate the one-to-one transmission probabilities $\beta^I_{i,j,t}$ and $\beta^E_{i,j,t}$ for each person in class S, where the time complexity is $O(N^2)$. Therefore, the total time complexity of the simulation is $O(N^2 T)$, where $T$ is the total number of time intervals considered.
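A Monte-Carlo sketch in the spirit of the simulation just described (names and data layout are assumptions, not the paper's code): contacts are supplied per interval as weighted pairs, infection events are sampled with probability $h(w, \beta) = w\beta$, and the E → I and I → R transitions with probabilities $\gamma$ and $\mu$. Iterating over the edge list costs $O(|E_t|)$ per step, which reaches the stated $O(N^2 T)$ bound only when the network is dense.

```python
import random

def simulate(n, contacts, beta_i, beta_e, gamma, mu, seeds, T,
             rng=random.Random(1)):
    """Individual-based SEIR on a time-varying contact network.

    contacts[t] is a dict {(i, j): weight in [0, 1]} for interval t.
    Returns the fraction of individuals ever infected after T steps."""
    state = ["S"] * n
    for s in seeds:
        state[s] = "I"
    for t in range(T):
        nxt = state[:]
        # infection events along this interval's weighted edges
        for (i, j), w in contacts[t].items():
            for a, b in ((i, j), (j, i)):
                if state[a] == "S":
                    rate = (beta_i if state[b] == "I"
                            else beta_e if state[b] == "E" else 0.0)
                    if rate and rng.random() < w * rate and nxt[a] == "S":
                        nxt[a] = "E"
        # compartment transitions, based on the state at the start of the step
        for i in range(n):
            if state[i] == "E" and rng.random() < gamma:
                nxt[i] = "I"
            elif state[i] == "I" and rng.random() < mu:
                nxt[i] = "R"
        state = nxt
    return sum(s != "S" for s in state) / n
```

With degenerate probabilities (all rates 0 or 1) the sketch is deterministic, which makes the chain of transmission easy to trace by hand.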
The model also requires storing the network structure and the individual states at each time step, so the space complexity is likewise O(N²T). (Algorithm 1, the simulation-based solving algorithm, proceeds by sampling each transition: an exposed individual becomes infectious with probability γ, and an infectious individual is removed with probability µ = µ_r + µ_d; the output is p̂^X_t for t ∈ {1, 2, ..., T} and X ∈ {S, I, E, R}.) Given the O(N²T) time and space complexity, the simulation-based solving framework is hard to scale up to the whole-population level (4.7 million in our case study); in our numerical experiments, N ≥ 300k caused memory errors on a 32 GB RAM personal computer. Considering the computational cost, a scalable and lightweight theoretical model is proposed to handle the epidemic simulations. The theoretical model, though simplified from agent-level simulation, retains the flexibility to capture the behavioral, mechanical, networked, and dynamical features of the simulation-based model. The framework consists of three steps. 1) We first build a multi-particle dynamics model of the epidemic process to represent the individual-based model. 2) Based on the properties of the contact network and multi-particle dynamics (Gao et al., 2016), an effective model is employed to reduce the multi-dimensional (individual-based) dynamics to one dimension (mean-based). 3) Previous effective models were developed for static network structures; to fit the time-varying contact network, we combine the effective model with a temporal network model by adding an energy-flow term to the equations, through which we capture the impact of the time-varying contact networks on the dynamical system as a whole. For the multi-particle dynamics model, we focus on the early stage of the epidemic process, where the proportion of susceptible people is almost 100% and that of recovered people is 0%.
Hence, we can use a Taylor expansion to simplify the four-dimensional (S, E, I, and R) individual-level epidemic dynamics described in Eqs. 4-7 to two dimensions (E and I). In this formulation, the dynamical network structure is embedded in two tensors; both tensors are non-negative, and each temporal slice (i.e., the contact matrix for a given time interval) is symmetric because infection is mutual along a contact. According to previous studies (Gao et al., 2016; Tu et al., 2017), the infectious burst in this canonical system can be captured by a one-dimensional simplification of the individual-based model. This simplification rests on the fact that, in a network environment, the state of each node is driven by the states of its immediate neighbors; more details can be found in Gao et al. (2016) and Tu et al. (2017). We can therefore characterize the effective state of the system using the average nearest-neighbor activity, giving p^E_{eff,t} and p^I_{eff,t}, the effective proportions of exposed and infectious people in the system at time interval t. If we assume that all individuals have a uniform probability of coming into contact with each other, p^E_{eff,t} and p^I_{eff,t} are good proxies for p̂^E_t and p̂^I_t, the actual proportions of the exposed and infectious population. However, this assumption may not hold in reality; its relaxation is described below. p^E_{eff,t} and p^I_{eff,t} allow us to reduce the individual-based equations (Eqs. 10 and 11) to effective mean-based equations. Because people's interaction probabilities are in fact heterogeneous, to relax the uniform-contact assumption in practice we further consider the dynamics of the mobility network on the multi-particle system following Li et al. (2017), which recommends adding the energy flow f^X_t = (1/N²) Σ_{i,j∈N} (β^X_{i,j,t})² (X ∈ {E, I}) into the general dynamical process, with associated parameters to be estimated.
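The average nearest-neighbor activity can be sketched as a degree-weighted mean of node states over the contact matrix. This is a minimal illustration in the spirit of the Gao et al. reduction, with hypothetical names; the paper's exact weighting may differ.

```python
def effective_state(A, x):
    # Average nearest-neighbour activity: each node's state x[j] is weighted
    # by the total contact weight A[i][j] connecting it to its neighbours.
    n = len(A)
    num = sum(A[i][j] * x[j] for i in range(n) for j in range(n))
    den = sum(A[i][j] for i in range(n) for j in range(n))
    return num / den if den else 0.0
```

For a uniform state vector the effective state equals the population proportion, which is why p^E_{eff,t} and p^I_{eff,t} are good proxies under the uniform-contact assumption.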
The energy flow and its parameters are expected to capture the heterogeneous contacts in the network. The theoretical model is calibrated with a two-layer regression method. In the first layer, given a trajectory of the epidemic process, Eqs. 17 and 18 lead to a linear regression problem with a total of T samples, in which the only unknown parameters are k; β^X_{eff,t} and f^X_t are calculated from the constructed contact network, and p̂^X_t is given (X ∈ {E, I}). Therefore, k can be obtained for every given epidemic trajectory. The epidemic trajectories are generated using the simulation-based solving framework on a small sample (e.g., 100k passengers). In the second layer, we aim to obtain the relationship between k and the epidemic/mobility parameters θ: for every combination of θ, we use the simulation model to generate a trajectory and thus estimate k (the first-layer regression described above). Based on different values of θ, we can therefore estimate a series of k. We then assume a linear relationship between θ and k and fit it with a linear regression model on the generated (θ, k) pairs. After the two-layer regression, the theoretical model can flexibly predict the epidemic process under different policy conditions (i.e., different θ, β_E, or β_I). The theoretical model can smoothly accommodate different sizes of the studied population by scaling the effective exposed proportion p^E_{eff,t}, the effective infectious proportion p^I_{eff,t}, and the network energy flows f^E_t and f^I_t; these variables can be extracted directly once the contact network is constructed. The model can also test different policy combinations efficiently at low computational cost: according to our numerical tests, it can evaluate one million policy combinations for the full population (4.7 million) within seconds, with memory and computational complexity both O(T). This allows us to search for the optimal policy to control the contagion.
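The first-layer regression reduces to an ordinary least-squares fit. The sketch below assumes the per-interval regressors (effective states and energy flows) have already been stacked into a feature matrix; the function name and feature layout are illustrative, not the paper's implementation.

```python
import numpy as np

def fit_k(features, targets):
    # First-layer regression: solve targets ~ features @ k by least squares.
    # Rows of `features` hold per-interval regressors (e.g., beta_eff and
    # energy flows f_t); `targets` holds the observed increments of the
    # simulated epidemic trajectory.
    k, *_ = np.linalg.lstsq(np.asarray(features, float),
                            np.asarray(targets, float), rcond=None)
    return k
```

The second layer simply repeats this fit over many (θ, k) pairs and regresses k on θ, so the same routine can be reused there.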
In epidemiology, the basic reproduction number R_0 of an infection is the expected number of cases directly generated by one case in a population in which all individuals are susceptible (Fraser et al., 2009). Its most important use is to determine whether an emerging infectious disease will spread through the population: in a common infection model, the infection spreads if R_0 > 1 but not if R_0 < 1 (see Figure 3b), and in general, the larger the value of R_0, the more difficult it is to control the epidemic (Fine et al., 2011). In the ideal SEIR model, where disease spreads uniformly over time and people have a uniform contact probability, R_0 is easy to define. To match the discrete-time formulation of this study, let β̄ be the average number of people infected by one infectious person within one time interval in the ideal SEIR system, and let µ̄ be the probability that an infectious person (I) is removed (R) within one time interval (in a continuous-time context, β̄ and µ̄ represent the infection rate and removal rate, respectively). The basic reproduction number for the ideal SEIR model is then R_0 = β̄/µ̄; with I_t the number of infectious people at time interval t, the relation (I_{t+1} − I_t)/I_t = β̄ for all t holds only under the ideal SEIR system. In heterogeneous populations, the definition of R_0 is more subtle: it must take into account that contact between people is not uniform. One person may contact only a small group of friends and be isolated from the rest of the population; on the temporal side, people's mobility patterns may vary from day to day, resulting in time-varying contact networks. This defeats the assumptions of the ideal SEIR system. To account for network heterogeneity (Damgaard et al., 1995), we define R_0 as "the expected number of secondary cases of a typical infected person in the early stages of an epidemic", focusing on the expected number of directly infected people at each time step of the early stage. Let E_t and I_t be the numbers of exposed and infectious people at time interval t. Given a trajectory of the epidemic process [(E_t, I_t)]_{t=1,...,T} (from either the simulation model or the theoretical model), we define an equivalent reproduction number R_0(T) for time period T based on the new exposures generated per infectious person at each time step. We assume the incubation period 1/γ is much longer than one time interval, which holds for most diseases (e.g., 4 days versus 1 h in our case study); therefore, E_{t+1} − E_t is a good proxy for the number of new infections generated in interval t. The equivalent R_0 enables flexible and fair comparison of different epidemic processes with heterogeneous contacts and time-varying networks. In the following sections, we use the defined equivalent R_0 as the main epidemic measure in the policy discussions. We use the Singapore bus system as a proxy to demonstrate the dynamics of epidemic spreading through a PT network based on the developed time-varying PEN approach. The time-varying PEN is constructed from daily mobility patterns in the bus system and is then calibrated according to the epidemiological characteristics of COVID-19. A series of disease control policies is evaluated to exhibit the sensitivity of the developed PEN. Singapore is a city-state in which inter-city land transportation is relatively small. This provides an ideal testbed for focusing on epidemic spreading through intra-city transportation, especially the bus system, which accounts for a high share of trips in Singapore (Mo et al., 2018; Shen et al., 2019). According to the Singapore Land Transport Authority (LTA) (2018), the average daily ridership of buses is around 3.93 million, accounting for almost half of all travel modes.
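One plausible formalization of the equivalent R_0 from a trajectory is sketched below. This is a hedged proxy rather than the paper's exact formula: it assumes that averaging the per-interval new exposures per infectious person (using E_{t+1} − E_t as the proxy for new infections) and scaling by the mean removal time 1/µ̄, as in the ideal-SEIR relation R_0 = β̄/µ̄, gives the equivalent reproduction number.

```python
def equivalent_r0(E, I, mu_bar):
    # E[t+1] - E[t] approximates the new infections generated in interval t
    # (valid when the incubation period 1/gamma spans many intervals), so the
    # mean of (E[t+1] - E[t]) / I[t] is the per-interval infection rate;
    # dividing by mu_bar converts it to a reproduction number.
    rates = [(E[t + 1] - E[t]) / I[t] for t in range(len(E) - 1) if I[t] > 0]
    return (sum(rates) / len(rates)) / mu_bar if rates else 0.0
```

The same function applies to trajectories from either the simulation model or the theoretical model, which is what makes the comparison across policies fair.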
There are more than 368 scheduled bus routes operated by four different operators, and approximately 5,800 buses are currently in operation. In the case study, the mass rapid transit (MRT) system is neglected because a) passengers' contacts in a bus are more conducive to epidemic transmission than in the MRT system, given the limited space in a bus; and b) smart card data provide the exact bus ID, which identifies direct contact between passengers, whereas direct contact in trains is difficult to obtain from smart card data because transactions are recorded at the station level. The smart card data used in this study cover four weeks, from August 4th (Monday) to August 31st (Sunday), 2014. The dataset contains 109.2 million bus trip transaction records from 4.7 million individual smart cardholders. Given that the population of Singapore in 2014 was around 5.5 million, the smart card data cover about 84% of the population and can thus be used to model epidemic spreading for the whole city. Figure 5 shows the hourly ridership distribution for one week (averaged over the four weeks). Weekday ridership shows highly regular and recurrent patterns, with morning (8:00-9:00) and evening (18:00-19:00) peaks. Ridership distributions on weekends differ from those on weekdays, with no prominent peaks observed. The usage of the bus system reflects daily activities, which represent the mobility patterns of bus travelers in Singapore and can influence epidemic spreading. Figure 6a shows the distribution of trip duration P(TD) over the four weeks, where TD denotes trip duration. Most trips last fewer than 40 min. From the inset of Figure 6a, we find that the tail of P(TD) is well characterized by an exponential function: for TD ≥ 10 min, P(TD) ∼ e^{−TD/λ_TD}, where λ_TD = 11.94 min is obtained by regression.
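The paper estimates λ_TD by regression on the log tail; an alternative sketch, shown here purely for illustration, is the maximum-likelihood estimate for an exponential tail, which is simply the mean excess duration over the cutoff.

```python
def fit_exponential_tail(durations, cutoff=10.0):
    # Mean-excess (MLE) estimate of lambda for an exponential tail
    # P(TD) ~ exp(-TD / lambda) over trips with TD >= cutoff minutes.
    tail = [d - cutoff for d in durations if d >= cutoff]
    return sum(tail) / len(tail) if tail else float("nan")
```

Applied to the full trip-duration sample, this estimator and the log-tail regression should agree closely when the tail is genuinely exponential.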
Since people tend to use the MRT for long-distance travel, bus trips are relatively short: on average, bus trip duration is 14.55 ± 12.51 min (mean ± standard deviation). In summary, Singapore has intense bus usage with high ridership and user frequency, even though trip durations are relatively short. This implies that, for highly infectious diseases that can be transmitted by short-term exposure, the bus system may serve as a crucial platform for epidemic spreading. As discussed in Section 2.1, the PEN and the local interaction network (LIN) depend strongly on passengers' mobility patterns and exhibit time-varying properties. Figure 7 shows example networks of 100 passengers extracted from real-world data (7:00-10:00). A time interval length of τ = 1 h is used in this study. For better visualization, these passengers are chosen from the same bus, and θ_L = 1 is used for the LIN. The properties of both contact networks are essential for analyzing epidemic spreading. Figure 8 summarizes the degree and CD distributions of the PEN and LIN; the global interaction network is, by definition, a simple random graph with a homogeneous structure, so we do not plot it. Given the time-varying nature of the networks, we consider three time intervals: morning peak (8:00-9:00), noon off-peak (12:00-13:00), and evening peak (18:00-19:00). Figures 8a and 8b show that the degree distribution of the PEN displays a power-law tail, P(k) ∼ k^{−λ_k} (where k is the degree), implying significant degree heterogeneity. Most nodes have low or medium degree; the number of super-nodes with high degree is limited, and the maximum degree is bounded, which is reasonable given the limited capacity of buses. These properties are consistent with the findings of Qian et al. (2020) and indicate that PENs are a type of small-world network (Telesford et al., 2011).
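A power-law tail exponent such as λ_k can be estimated from the observed degrees. The sketch below uses the Clauset-style continuous-approximation maximum-likelihood estimator; the paper does not state its estimation method, so this is an assumed, illustrative choice.

```python
import math

def powerlaw_exponent_mle(degrees, k_min=1.0):
    # Continuous-approximation MLE for the exponent of a power-law tail
    # P(k) ~ k^(-lambda_k): lambda_k = 1 + n / sum(ln(k_i / k_min)),
    # taken over the degrees k_i >= k_min.
    tail = [k for k in degrees if k >= k_min]
    s = sum(math.log(k / k_min) for k in tail)
    return 1.0 + len(tail) / s if s > 0 else float("nan")
```

Fitting separately per time interval (morning peak, off-peak, evening peak) would quantify the time dependence of the degree distribution noted in the text.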
Although the shapes of P(k) at different times are similar, the exact values remain time-dependent. On weekdays, P(k) for the morning and evening peaks are similar to each other but differ from the off-peak curve, and PENs in peak hours have nodes of larger degree. On weekends, the degree distributions in the three time intervals are similar. Figures 8c and 8d show the degree distribution of the LIN (θ_L = 1 × 10⁻³ is used, corresponding to the case study in the following sections). As with the PEN, a power-law tail is observed; however, the degree distribution of the LIN is less concentrated and shows noisy patterns at high degrees. We also find that the other local CD values are nearly uniformly distributed. Since a local interaction duration other than 30 or 60 min indicates that the trip starts or ends within the time interval, the uniform distribution implies Poisson-like start and end times of bus trips within the interval. COVID-19, also known as 2019-nCoV, is an infectious disease caused by SARS-CoV-2, a virus closely related to the one that causes SARS. Figure 9 shows the numbers of confirmed (infectious), cured, and dead people from Jan 24 to Feb 20, 2020, in Wuhan. Up to Feb 20, there were more than 40 thousand confirmed COVID-19 cases, and the total numbers of healed and dead patients were around 6,000 and 2,000, respectively. The sudden increase in confirmed cases on Feb 12 is due to a revision of the diagnosis criteria (adding cases of clinical diagnosis); the inset plot zooms in on the numbers of cured and dead people. We select COVID-19 as the case study for the following reasons. a) COVID-19 is extremely contagious: it is primarily spread between people via respiratory droplets from infected individuals when they cough or sneeze, and according to CDC (2020), anyone who has been within approximately 2 m of a person with COVID-19 infection for a prolonged period (more than 1 or 2 min) is considered at risk of infection.
Therefore, PT can act as a significant intermediary for such a highly contagious disease. b) Even as this article is being written, COVID-19 remains a major threat to global public health, and Singapore is experiencing its impact (Ministry of Health (MOH), 2020); a COVID-19 case study can provide disease control suggestions from the transportation side, adding timely value to this research. The SEIR model parameters are chosen based on the epidemiological characteristics of COVID-19. The time from exposure to onset of symptoms (the latent or incubation period) is generally between 2 and 14 days for COVID-19; Read et al. (2020) suggested setting the latent period to 4 days. We therefore have γ = 1/(24 × 4) = 0.0104 (probability of E → I per hour). According to Read et al. (2020), the transmission rate of COVID-19 in the static SEIR model is 1.96 day⁻¹, which can be interpreted as the number of people that one infectious person infects per day in a well-mixed network. Assuming that one person, on average, has close contact with 100 others per day, we can calculate the hourly one-to-one infection probability as β_I = 1.96/(24 × 100) = 8.17 × 10⁻⁴. Although recent studies indicate β_E > 0 for COVID-19 (Rothe et al., 2020), calibrating the exact value of β_E is difficult due to lack of data; since people in the latent period (group E) usually have a much lower probability of transmission, we arbitrarily set β_E = 0.01 β_I. We calculate µ_r and µ_d using data from Wuhan. Figure 10 shows the daily cure and death rates (the number of cured/dead people per day divided by the total number of confirmed cases on that day) in Wuhan; the high value on the first day is probably due to inaccurate data. From Figure 10, the average daily cure and death rates are both approximately 1% in the early stage, so we can calculate the hourly cure and death probabilities as µ_r = µ_d = 0.01/24 = 4.17 × 10⁻⁴.
Figure 10: Daily cure and death rates in Wuhan (data source: Ding Xiang Yuan (2020)). For the status quo analysis, θ_L = 1 × 10⁻³ is used, calculated as follows: consider a community of 10,000 people and assume each person has close contact with 10 others on average per hour locally, giving θ_L = 10/10,000 = 1 × 10⁻³. The global interaction captures an individual's probability of close contact with people outside his or her community. Given that the population of Singapore is around 5.6 million, we assume one person can closely contact 10 people per day globally on average; then θ_G = 10/(5.6 × 10⁶ × 24) = 7.44 × 10⁻⁸, where the 24 in the denominator converts to an hourly probability. Table 1 summarizes all parameters of the status quo analysis, which serves as the reference scenario; the sensitivity analysis column indicates whether a value is varied in the policy analysis sections that follow. We first calibrate the theoretical model using epidemic dynamics generated by the simulation model: 320 cases with different combinations of parameters (Table 2) are simulated for 100k sample passengers, and these cases are fed into the regression model to obtain the parameters of the theoretical model. Figure 11 compares the numbers of infectious and exposed people between the simulation and the calibrated theoretical model. We observe a high goodness-of-fit for the theoretical model, which implies that the proposed theoretical framework captures epidemic spreading through PT and SA contacts. Figure 12 compares the numbers of infectious and exposed people over time for five selected cases. In general, the simulation and theoretical models show similar numbers of infectious people over time; for the number of exposed people, the two models show similar dynamic fluctuations with only slight differences in some periods.
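The reference-scenario parameter derivations above can be collected in a short script; the variable names are illustrative, and the 100-contacts-per-day and 10-contacts assumptions are those stated in the text.

```python
# Reference-scenario parameters, derived as in the text (hourly probabilities).
latent_days = 4                      # Read et al. (2020): latent period
gamma = 1 / (24 * latent_days)       # E -> I per hour, ~0.0104
daily_transmission = 1.96            # infections per day per case (well-mixed)
contacts_per_day = 100               # assumed close contacts per person per day
beta_i = daily_transmission / (24 * contacts_per_day)  # ~8.17e-4
beta_e = 0.01 * beta_i               # latent-period transmissibility (assumed)
mu_r = mu_d = 0.01 / 24              # hourly cure/death probability, ~4.17e-4
theta_l = 10 / 10_000                # local contact probability per hour
theta_g = 10 / (5.6e6 * 24)          # global contact probability per hour
```

Keeping the unit conversions explicit (per day → per hour) makes it easy to re-derive the table when any of the assumed contact rates changes.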
(Table 2, human mobility parameter ranges: θ_L ∈ {1 × 10⁻³, 1 × 10⁻², 1 × 10⁻⁴, 0}; θ_G ∈ {7.44 × 10⁻⁸, 1 × 10⁻⁷, 1 × 10⁻⁹, 0}.)
Figure 11: Comparison between the simulation model and the calibrated theoretical model (100k sample passengers); each dot represents the number of infectious/exposed people in a specific time interval. (a) Comparison on infectious people. (b) Comparison on exposed people.
Figure 12: Comparison of infectious/exposed people over time for 5 cases (100k sample passengers).
Based on the parameters in Table 1, we evaluate the epidemic process in Singapore for all smart cardholders (4.7 million) using the calibrated theoretical model. Figure 13 shows the dynamics of the numbers of infectious and exposed people. We randomly assign 30 initial infectious passengers to the system. The results show that, absent any disease control policies, the number of infectious people would grow to more than 3,000 (100 times the initial value) after four weeks, which is consistent with Liu et al. (2020)'s results on early human-to-human transmission of COVID-19 in Wuhan. The equivalent R_0 is 2.67, consistent with many previous estimates: 2.0-3.3 (Majumder and Mandl, 2020), 2.6 (Imai et al., 2020), and 2.92 (Liu et al., 2020). The inset plot shows the intra-day dynamics of the number of exposed people in week 4, with a high intensity of infections from 7:00 to 22:00. The sudden increases in the morning and evening peaks on weekdays (days 22-26) highlight transmissibility through PT, while during the weekend (days 27 and 28) the number of exposed people shows a decreasing trend, implying lower transmission rates on weekends.
Figure 13: Epidemic process in the status quo scenario (whole population); the inset plot zooms in on the number of exposed people from day 22 (Monday) to day 28 (Sunday).
Motivated by current epidemic control strategies in PT systems worldwide, especially the COVID-19 control policies exemplified in Appendix A, we find there are hardly any general criteria for determining whether, when, and where to suspend urban PT services. The time-varying weighted PEN developed in this work contributes to a better understanding of the spatiotemporal impacts of various PT operation strategies for epidemic control, facilitating decision-making on PT operation adjustments. We first evaluate the impact of β_I and µ. β_I is related to people's preventive behavior, such as wearing masks and sanitizing hands, which decreases β_I; µ is related to medical interventions, such as increasing the cure rate or developing vaccines, which increase µ. Figure 14 shows the impact of β_I and µ on the equivalent R_0, with β_I scaled from 10⁻² to 10¹ and µ scaled from 10⁻¹ to 10², fixing β_E = 0.01 β_I throughout. Figure 14b shows that the epidemic would fade out (equivalent R_0 < 1) if transmissibility were reduced to less than one-tenth of its current value. However, Figure 14c suggests that even if the cure rate were increased 100-fold, the epidemic would still occur, though its progress would be delayed (with smaller R_0). This implies that reducing transmissibility is more effective than increasing the cure rate for COVID-19. Figure 14a shows the joint impact of β_I and µ and the critical bound of the equivalent R_0: if β_I were decreased to 32% of its current value and µ enlarged tenfold, the equivalent R_0 would fall below 1. Given the cost of controlling each parameter, this graph can help optimize control strategies under limited budgets. One typical epidemic control strategy is decreasing the trip occurrence rate in a city; at the average level, this is equivalent to reducing the total contact time and the total squared contact time.
Figure 15 shows the impact of different percentages of trip reduction (i.e., reducing the percentage of total contact time and squared contact time). We observe that controlling any single trip type (PT, local, or global) cannot eliminate the epidemic. Under the current parameter settings, reducing PT trips contributes more to controlling the epidemic process than reducing the other two. The impact of trip reduction on R_0 is generally linear unless the reduction percentage is sufficiently large: when all trips are reduced by more than 80%, the reduction rate of R_0 starts to accelerate, and when all trips are reduced by 98%, the spreading process fades out. This implies that travel restriction can only be effective at an extreme level, corresponding to Wu et al. (2020)'s statement that a 50% reduction in inter-city mobility in Wuhan had a negligible effect on COVID-19 epidemic dynamics. Figure 16 shows the influence of distributing departure times with different flexibility (from 0 to ±110 min). Note that the benchmark equivalent R_0 is 1.763 for the sample passengers, rather than the 2.67 of the whole-population scenario, because fewer passengers in the system reduce human contacts and limit the epidemic process. We observe a decline in the equivalent R_0 as departure time flexibility increases, because higher flexibility disperses riding across buses and thus reduces the number of contacted passengers. However, the effectiveness of distributing passengers is very limited: with ±110 min flexibility, R_0 decreases by only 1.6% (from 1.763 to 1.740). As summarized in Section 1, the closure of bus routes is a strategy implemented in practice on the transportation side to reduce people's close contacts during the COVID-19 outbreak. We assume that suspension of a bus service acts as a travel restriction on the corresponding bus routes.
Keeping the SA contact network unchanged, we evaluate four strategies for closing various percentages of bus routes (from 10% to 90%): a) close routes from high demand to low demand (H-L); b) close routes from low demand to high demand (L-H); c) close randomly picked bus routes (random); d) close routes by local planning area. We assume that passengers who originally took the closed routes switch to alternative routes when available and otherwise cancel their trips. Again, given the computational burden, we evaluate this policy on the sample passengers. Closing high-demand bus routes can reduce the equivalent R_0 by 9.2%. It is also important to consider how many passengers each strategy affects; affected passengers are defined as those who cannot find alternative bus routes when their original routes are closed. As expected, for the same closure percentage, the H-L strategy affects more passengers than the other policies. However, the H-L strategy is more effective in terms of R_0 reduction per affected passenger: if we fix the percentage of affected passengers at approximately 22% (black dashed line), the H-L strategy reduces R_0 to 1.63, while the random and L-H strategies only reduce R_0 to 1.68 and 1.69, respectively. This may be because passengers on high-demand bus routes are more influential in the system (e.g., they have higher degree in the PEN). Therefore, to control the epidemic while affecting fewer people, PT agencies should close bus routes from high demand to low demand. Figure 18a shows the percentage reduction in R_0 from closing bus routes by planning area, and Figure 18b shows the corresponding percentage of affected PT passengers. In general, a large reduction in R_0 comes at the cost of a large number of affected passengers.
Closing bus routes in the main business and residential areas in the southern part of Singapore island has a stronger epidemic-control effect than closing routes in other areas. However, to minimize the impact on passengers' daily travel, PT agencies should first close bus routes in regions with a relatively high R_0 reduction and a low number of affected passengers, such as the core central business district (CBD). Passengers on buses crossing the core CBD can easily find alternative routes, so the concentrated demand in CBD areas can be redistributed to less crowded routes, yielding an R_0 reduction with fewer affected passengers. Although closing bus routes can postpone epidemic spreading, it also imposes substantial inconvenience on society, for example by decreasing people's access to hospitals. A more moderate alternative is to preserve PT supply but limit the maximum bus load so as to reduce passenger interaction. Figure 19 shows the impact of this policy. Since we use the 100k sample data, the maximum bus load is relatively small; we therefore test maximum bus loads from 10 down to 2. Passengers who cannot board a bus because of this policy are assumed to cancel their trips (these are the affected passengers). To compare with the route-closure strategies, the x-axis is set to the percentage of affected passengers, and only the H-L strategy is plotted, as it is the most effective route-closure strategy. We find that limiting the maximum bus load only takes effect when the allowed maximum load is very small; at the same percentage of affected passengers, it is not as effective as closing bus routes from high demand to low demand. However, since this policy preserves the city's mobility capacity, it can be seen as a more moderate way to control the epidemic than directly closing bus routes. Figure 19: Impact of limiting the maximum bus load (100k sample passengers).
MBL: maximum bus load; CP: closure percentage of the H-L strategy.

3.5.6. Impact of isolating critical passengers

Ideally, a more precise pandemic policy operates at the individual level: the government could identify influential passengers with the potential to spread the virus widely and isolate them at an early stage. Individual-based policies can outperform region-based or population-level policies in effectiveness and flexibility. However, due to the computational cost, it is hard to optimize the isolation decision for each individual directly. Hence, we employ an isolation method based on k-core decomposition, which, unlike traditional degree-based methods, has a stronger impact on the dynamics of multi-particle systems (Kitsak et al., 2010; Morone et al., 2019; Borge-Holthoefer and Moreno, 2012; Yang et al., 2017). A k-core of a graph G is a maximal connected subgraph of G in which all vertices have degree at least k; each vertex in a k-core is connected to at least k other nodes. A high k indicates a highly concentrated local network structure, i.e., the most clustered part of the whole network, and nodes in a core with larger k usually have larger degrees on average (Dorogovtsev et al., 2006). In the context of infectious diseases, if a node in a high k-core is infected, it has, in expectation, more than k/2 times the chance of spreading the disease to other nodes in that core in one time step compared with an arbitrary node (if the network follows a scaling law) (Serrano and Boguñá, 2006). We therefore design the policy by first restricting the nodes in high k-cores, i.e., the more influential nodes. Since the population of a higher k-core is always smaller than that of a lower k-core, this policy isolates a small portion of people to limit the spread of the disease.
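The k-core decomposition underlying this policy can be sketched with the standard iterative-peeling algorithm; the adjacency-dict representation and function name below are illustrative, not the paper's implementation.

```python
def k_core_nodes(adj, k):
    # k-core via iterative peeling: repeatedly remove nodes of degree < k.
    # `adj` is an undirected graph given as {node: set_of_neighbours}.
    deg = {v: len(ns) for v, ns in adj.items()}
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if deg[v] < k:
                alive.discard(v)
                for u in adj[v]:
                    if u in alive:
                        deg[u] -= 1
                changed = True
    return alive
```

Applying this to each interval's PEN and isolating the nodes that survive peeling at a high k targets exactly the most clustered part of the contact network.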
Figure 20 shows the impact of isolating passengers in different k-cores and the corresponding numbers of isolated passengers. For comparison, we also evaluate a random policy: for each k-core, the random policy isolates the same number of randomly picked passengers in the system, corresponding to isolation at the population level. Since few passengers have a core number greater than 5 (in the 100k sample passengers case), the reduction in R_0 there is not significant. However, isolating all passengers in the 4-core, who account for 5% of the whole population, reduces the equivalent R_0 from 1.76 to 1.66 (5.7%), which is more effective than any of the region-based or route-based policies in Section 3.5.4. We also observe that the k-core isolation method outperforms the benchmark random isolation method. Figure 20: Impact of isolating critical passengers (100k sample passengers); the "base" and "k-core" scenarios indicate no isolation and isolation of people with core number ≥ k in the PEN, respectively.

This paper proposed a general time-varying weighted PEN to model the spreading of infectious diseases over a PT system, with social-activity contacts considered at both the local and global levels. The network is constructed from smart card data as an undirected graph in which a node is a PT passenger, an edge is a pair of passengers staying in the same vehicle, and the weight of an edge captures the contact duration. We employ the SEIR framework, a general model for influenza-like diseases, to describe the disease dynamics, using the recent global outbreak of COVID-19 as a case study. A scalable and lightweight theoretical framework is derived to capture the time-varying and heterogeneous network structure, enabling the problem to be solved at the whole-population level at low computational cost.
we use pt smart card data from singapore as a proxy to understand the general spatiotemporal dynamics of epidemic spreading over pt networks in one month. the status-quo analysis shows that the covid-19 infected population is expected to increase to 100 times its initial value by the end of the month without any disease control enforcement. a series of disease control and prevention scenarios are envisioned from both the public health policy and pt operations sides. from the public health side, the model sheds light on people's preventative behavior: wearing face masks and sanitizing hands are considered the most effective measures to control the spreading of the epidemic, whereas an increased cure rate can only postpone the outbreak of the disease. from the perspective of pt operation adjustments, several policies are evaluated, including reducing trip occurrences, enlarging departure time flexibility, closing bus routes, limiting maximum bus loads, and isolating critical passengers. in general, control of the epidemic process only starts to take effect when over 80% of all trips are canceled, and the equivalent r 0 can be reduced to 1 only if over 98% of trips are banned. in terms of bus operation policies, distributing departure times and limiting maximum bus loads can slightly decelerate the spreading process. closing high-demand bus routes, especially in the main business areas, is more effective than closing low-demand bus routes. the most effective approach is isolating influential passengers at an early stage, by which the epidemic process can be significantly slowed with only a small proportion of people being affected. many policy implications can be derived from the case study. on the public health side, the government should encourage people to adopt preventative behaviors, such as wearing face masks and sanitizing hands, to reduce the transmission probability.
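the seir dynamics underlying these scenarios can be illustrated with a minimal, population-level sketch (forward-euler stepping; the paper's actual model is network-based and time-varying, and the latent period 1/sigma = 5.2 days used here is our illustrative assumption, not the paper's calibrated value):

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt=1.0):
    """One forward-Euler step of the classic SEIR compartments.
    beta: transmission rate, sigma: 1/latent period, gamma: removal (cure) rate."""
    n = s + e + i + r
    new_exposed = beta * s * i / n * dt
    new_infectious = sigma * e * dt
    new_removed = gamma * i * dt
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_removed,
            r + new_removed)

# illustrative run with beta/gamma = 0.44/0.25 = 1.76, matching the
# status-quo equivalent r0 quoted above (all other numbers are ours)
s, e, i, r = 99990.0, 0.0, 10.0, 0.0
for _ in range(30):
    s, e, i, r = seir_step(s, e, i, r, beta=0.44, sigma=1 / 5.2, gamma=0.25)
```

policies such as trip cancellation act by lowering the effective beta, which is why the equivalent r 0 falls as trips are banned.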
the travel restriction policy can bring the equivalent r 0 below 1 only at the extreme, such as in hubei province, china, where all travel was banned during the outbreak of covid-19 in feb 2020. on the pt operation side, according to our models, partial closure of bus routes and limiting the maximum bus load can postpone the spreading of epidemics. the most effective option is closing bus routes with high demand, especially those crossing the cbd areas. in practice, a (partial) shutdown of pt services is a serious decision for authorities; many related issues, such as equity and accessibility, should be considered in designing the suspension of pt services during a pandemic. for prevention purposes, if possible, the government could identify influential passengers with large core numbers based on smart card data and advise them to self-isolate or reduce travel. similarly, all entities should cancel events with a high number of participants to avoid generating large k-core contact networks. several limitations of this study are as follows: 1) some parameters of the model (e.g., θ l , θ g ) are determined by the authors' assumptions, which weakens the credibility of the results. although we end up with a reasonable r 0 , suggesting the values of these parameters are plausible, more parameter calibration should be done in the future. 2) policy evaluations are based on some ideal assumptions; in the real world, many unexpected outcomes can occur. for example, closing bus routes may decrease people's access to hospitals and thus decrease the cure rate. the government should think cautiously from multiple perspectives before applying any control strategies. 3) this study did not model the contacts of passengers on trains due to difficulties in identifying the vehicles that passengers belong to.
future research can incorporate a transit assignment model (e.g., zhu et al., 2017; mo et al., 2020) to infer passengers' boarding trains and construct the pen for trains. meanwhile, due to the large space in a train, the variation of transmission probability due to passengers' spatial distribution should also be captured. 4) modes of transmission other than contact transmission are not considered (e.g., infectious passengers contaminating surfaces), which may result in under-estimation of the transmissibility of pt systems. 5) heterogeneity of the infected population is neglected in this study; in reality, the infection probability may depend on age, gender, and health conditions. demographic distributions can be incorporated in the future. future work includes the following: 1) elaborate on the sa contacts based on other data sources (e.g., mobile phone data) and extend n to the whole population. due to a lack of data, the contacts of social activities are simplified and n is assumed to be pt users in this paper. though pt users in singapore account for 84% of the population, future research can combine different data sources to model the sa contacts for the whole population in more detail (wang et al., 2018). 2) incorporate spatial effects and model transmission probability more finely. the current transmission probability between two individuals depends only on the contact duration; the contact distance, passenger density, and distribution within a vehicle can be considered in future research. 3) conduct case studies in cities with covid-19 outbreaks (e.g., wuhan, new york city) to validate the model. these case studies can calibrate the model based on ground truth data, quantify the contribution of disease transmission by pt systems, predict the epidemic spreading, and evaluate the effects of different policies. 4) incorporate time-varying epidemic and mobility parameters to better predict reality.
although this study does not attempt to predict and reproduce the covid-19 spreading, the proposed model can potentially better fit the epidemic process, given its more fine-grained framework. however, the complexity in the real world lies in the time-varying mobility and epidemic patterns. future research can make the epidemic parameters (θ) time-dependent (θ(t)), instead of constant, to better fit reality. the authors confirm contribution to the paper as follows: study conception: b. shen. all authors reviewed the results and approved the final version of the manuscript. the research is sponsored by the natural science foundation of china (71901164) and the natural science

in practice, since the outbreak of covid-19 in late january 2020, a variety of epidemic control strategies in pt systems, such as requiring pt riders to wear face masks, sterilizing bus and metro carriages, adjusting pt operation schedules, and closing bus routes, have been implemented. in china, the requirement of wearing face masks in pt systems has been successively implemented in many provinces since late january 2020. in addition, a variety of pt operation control strategies have also been enforced in many cities. in cities of hubei province, especially in wuhan, along with the lockdown policies to control the spreading of covid-19, almost all pt services have been shut down since jan. 23rd and 24th, with patients with severe symptoms transported by ambulance. after the lockdown and travel restrictions in hubei province, the pt operation adjustment strategies implemented in other chinese cities were largely diverse. different from wuxi, where only 8% of arterial bus routes kept running, in nanjing, another major city of jiangsu province, pt services were still in operation but with shortened operation hours and reduced dispatching frequencies.
in shanghai, both inter-provincial pt services and bus services between rural districts such as qingpu and jinshan were closed from jan. 27th, but most urban pt services remained in operation with limits on maximum passenger loads. outside china, in italy, where reported covid-19 cases increased dramatically in march 2020, the suspension of pt was officially proposed in the lombardy area. in london, transport for london (tfl) is running reduced service across the network, closing up to 40 stations; no service on the waterloo and city line has been provided since march 30, 2020. many us cities have seen reduced services, including boston and washington d.c., as have cities in other countries with fewer reported covid-19 cases.

references:
infectious diseases of humans: dynamics and control
multiscale mobility networks and the spatial spreading of infectious diseases
absence of influential spreaders in rumor dynamics
identifying critical components of a public transit system for outbreak control
modeling the spread of infection in public transit networks: a decision-support tool for outbreak planning and control
guidance for risk assessment and public health management of healthcare personnel with potential exposure in a healthcare setting to patients with
epidemic thresholds in real networks
the role of the airline transportation network in the prediction and predictability of global epidemics
predicting and containing epidemic risk using friendship networks
social and sexual function following ileal pouch-anal anastomosis
mathematical epidemiology of infectious diseases: model building, analysis and interpretation
covid-19 real-time data
k-core organization of complex networks
how mobility patterns drive disease spread: a case study using public transit passenger card travel data
herd immunity: a rough guide
pandemic potential of a strain of influenza a (h1n1): early findings
universal resilience patterns in complex networks
generalized reproduction numbers and the prediction of patterns in waterborne disease
understanding individual human mobility patterns
discovering the hidden community structure of public transportation networks
gonorrhea transmission dynamics and control
statistics brief - world metro
comparing different approaches of epidemiological modeling
stochastic dynamics. modeling infectious diseases in humans and animals
identification of influential spreaders in complex networks
public transport utilisation: average daily public transport ridership
the fundamental advantages of temporal networks
reactive school closure weakens the network of social interactions and reduces the spread of influenza
investigating physical encounters of individuals in urban metro systems with large-scale smart card data
time-varying transmission dynamics of novel coronavirus pneumonia in china
early transmissibility assessment of a novel coronavirus in wuhan, china
modelling cholera epidemics: the role of waterways, human mobility and sanitation
public transportation and sustainability: a review 2020.
past update on covid-19 local situation
capacity-constrained network performance model for urban rail systems
impact of built environment on first- and last-mile travel mode choice
the k-core as a predictor of structural collapse in mutualistic ecosystems
epidemic processes in complex networks
random walks and search in time-varying networks
scaling of contact networks for epidemic spreading in urban transit systems
novel coronavirus 2019-ncov: early estimation of epidemiological parameters and epidemic predictions
transmission of 2019-ncov infection from an asymptomatic contact in germany
a high-resolution human contact network for infectious disease transmission
percolation and epidemic thresholds in clustered networks
deterministic epidemiological models at the individual level
deterministic epidemic models on contact networks: correlations and unbiological terms
built environment and autonomous vehicle mode choice: a first-mile scenario in singapore
small world and scale free model of transmission of sars
simulation of an seir infectious disease model on the dynamic contact network of conference attendees
efficient detection of contagious outbreaks in massive metropolitan encounter networks
understanding metropolitan patterns of daily encounters
the ubiquity of small-world networks
collapse of resilience patterns in generalized lotka-volterra dynamics and beyond
inferring metapopulation propagation network for intra-city epidemic control and prevention
airborne contagion and air hygiene. an ecological study of droplet infections
nowcasting and forecasting the potential domestic and international spread of the 2019-ncov outbreak originating in wuhan, china: a modelling study
how far droplets can move in indoor environments - revisiting the wells evaporation-falling curve
small vulnerable sets determine large network cascades in power grids
the transmissibility and control of pandemic influenza a (h1n1) virus
a probabilistic passenger-to-train assignment model based on automated data

key: cord-274083-6vln3erl authors: bhardwaj, rajneesh; agrawal, amit title: likelihood of survival of coronavirus in a respiratory droplet deposited on a solid surface date: 2020-06-01 journal: phys fluids (1994) doi: 10.1063/5.0012009 sha: doc_id: 274083 cord_uid: 6vln3erl

we predict and analyze the drying time of respiratory droplets from a covid-19 infected subject, a crucial window during which the droplet can infect another subject. drying of the droplet is predicted using a diffusion-limited evaporation model for a sessile droplet placed on a partially wetted surface with a pinned contact line. variations in droplet volume, contact angle, ambient temperature, and humidity are considered. we analyze the chances of survival of the virus present in the droplet based on the lifetime of the droplets under several conditions and find that they are strongly affected by each of these parameters. the magnitude of the shear stress inside the droplet computed using the model is not large enough to obliterate the virus. we also explore the relationship between the drying time of a droplet and the growth rate of the spread of covid-19 in five different cities and find that they are weakly correlated. previous studies have reported that infectious diseases such as influenza spread through respiratory droplets, which can transmit the virus from one subject to another through the air. these droplets can be produced by sneezing and coughing.
han et al. 1 measured the size distribution of sneeze droplets expelled from the mouth. they reported that the geometric mean of the droplet size for 44 sneezes from 20 healthy subjects is around 360 μm for a unimodal distribution and 74 μm for a bimodal distribution. liu et al. 2 reported around 20% longer drying times for saliva droplets compared to water droplets deposited on a teflon-printed slide. they also predicted and compared these times with a model that considered the solute effect (raoult's effect) due to the presence of salt/electrolytes in saliva; the slower evaporation of the saliva droplet is attributed to the presence of this solute. 2 xie et al. 3 developed a model for estimating the droplet diameter, temperature, and falling distance as a function of time as droplets are expelled during various respiratory activities. they reported that large droplets expelled horizontally can travel a long distance before hitting the ground. in a recent study, bourouiba 4 has provided evidence that droplets expelled during sneezing are carried to a much larger distance (7-8 m) than previously thought; the warm and moist air surrounding the droplets helps carry them to such a large distance. while the role of virus-laden droplets in spreading infectious diseases is well known, the drying time of such droplets after falling on a surface has not been well studied. in this context, buckland and tyrrell 5 experimentally studied the loss in infectivity of different viruses upon drying of virus-laden droplets on a glass slide. at room temperature and 20% relative humidity, the mean log reduction in titer was reported to be in the range of 0.5-3.7 for the 19 viruses they considered. the need for studying the evaporation dynamics of virus-laden droplets has also been recognized in the recent article by mittal et al.
6 furthermore, to reduce the transmission of the covid-19 pandemic caused by sars-cov-2, the use of a face mask has been recommended by the who. 7 infected droplets could be found on a face mask or a surface inside a room, which necessitates regular cleaning of surfaces exposed to droplets. therefore, the present study examines the drying times of such droplets, which correlate with the time during which the chances of transmissibility of the virus are high. 5, 8 first, we present the different components of the model used to estimate the drying time and shear stress. we consider aqueous respiratory droplets on the order of 1-10 nl on a solid surface; this volume range is consistent with previous measurements. 1 the corresponding diameters of the droplets in air are around 125 μm and 270 μm, and the probability density function (pdf) of the normal distribution of the droplet diameter in air is plotted in fig. 1. the mean diameter and standard deviation are 188 μm and 42 μm, respectively. droplets smaller than 100 μm are not considered in this study because such droplets are expected to remain airborne, while larger droplets, being heavier, settle down. 9 the droplet is assumed to be deposited as a spherical cap on the substrate. since the wetted diameter of the droplet is less than the capillary length (2.7 mm for water), the droplet maintains a spherical-cap shape throughout evaporation. the volume (v) and contact angle (θ) of a spherical-cap droplet are given by v = πh(3r 2 + h 2 )/6 and θ = 2 tan −1 (h/r), where h and r are droplet height and wetted radius, respectively. we consider diffusion-limited, quasi-steady evaporation of a sessile droplet with a pinned contact line on a partially wetted surface (fig. 2). the assumption of quasi-steady evaporation is valid for t h /tf < 0.1, as suggested by larson, 10 where t h and tf are the heat equilibrium time in the droplet and the drying time, respectively.
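the standard spherical-cap relations for v and θ (v = πh(3r² + h²)/6 and θ = 2 arctan(h/r), consistent with the paper's h/r = 1 ↔ θ = 90° case) can be sketched and sanity-checked:

```python
import math

def cap_volume(h, R):
    """Spherical-cap volume: V = pi*h*(3R^2 + h^2)/6."""
    return math.pi * h * (3.0 * R**2 + h**2) / 6.0

def cap_contact_angle(h, R):
    """Static contact angle of a spherical cap: theta = 2*atan(h/R), in radians."""
    return 2.0 * math.atan(h / R)

# hemispherical check: h = R gives theta = pi/2 and V = (2/3)*pi*R^3 (half a sphere)
R = 1.0
assert abs(cap_volume(R, R) - 2.0 * math.pi / 3.0) < 1e-12
assert abs(cap_contact_angle(R, R) - math.pi / 2.0) < 1e-12
```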
t h /tf scales as follows: 10 where d, α, h, r, csat, and ρ are the diffusion coefficient of liquid vapor in air, the thermal diffusivity of the droplet, droplet height, wetted radius, saturation liquid vapor concentration, and droplet density, respectively. in the present work, the maximum value of t h /tf is estimated to be around 0.05 at 40 °c, the maximum water droplet temperature considered here, and a contact angle of 90° (h/r = 1). the values of d, α, and ρ are set as 2.5 × 10 −5 m 2 /s, 1.45 × 10 −7 m 2 /s, and 997 kg/m 3 , respectively. 11 therefore, the assumption of quasi-steady evaporation is justified. the mass loss rate (kg/s) of an evaporating sessile droplet is expressed as 12 ṁ = πrd(1 − h)csat(1.3 + 0.27θ 2 ), where h and θ are the relative humidity and static contact angle, respectively. the saturated concentration (kg/m 3 ) of water vapor at a given temperature is obtained using a third-order polynomial in temperature, 13,14 valid for 20 °c ≤ t < 100 °c, and the dependence of the diffusion coefficient (m 2 /s) of water vapor on temperature (°c) is taken from the same references. 13, 14 assuming a linear rate of change of volume for a sessile droplet pinned on the surface, 12,15 the drying time of the droplet is given by tf = ρv 0 /ṁ, where v 0 and ρ are the initial volume and density of the droplet, respectively. the properties of pure water have been employed in the present calculations to determine the drying time and shear stress; since the thermo-physical properties of saliva are not very different from those of water, the present results provide a good estimate of the evaporation time and shear stress under different scenarios. furthermore, we obtain an expression for the maximum shear stress (τ) on the 125 nm diameter sars-cov-2 virion suspended in the sessile water droplet and estimate its range for the droplet sizes considered. the shear stress on the virus is maximum for a virus adhered to the substrate surface (fig. 2).
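a back-of-the-envelope sketch of the drying-time estimate follows. the displayed equations did not survive extraction, so the hu-larson rate ṁ = πRD(1−H)csat(1.3 + 0.27θ²) is our assumed reconstruction of the paper's rate expression (ref. 12), and csat ≈ 0.023 kg/m³ at 25 °c is our assumed value; d and ρ are the paper's stated values:

```python
import math

RHO = 997.0   # droplet density, kg/m^3 (paper's value)
D = 2.5e-5    # vapor diffusion coefficient in air, m^2/s (paper's value)

def drying_time(V0, theta_deg, H, c_sat=0.023):
    """Drying time of a pinned sessile droplet under diffusion-limited,
    quasi-steady evaporation, assuming a linear volume-loss rate:
    t_f = rho * V0 / mdot, with the (assumed) Hu-Larson rate
    mdot = pi*R*D*(1-H)*c_sat*(1.3 + 0.27*theta^2), theta in radians.
    V0 in m^3, H relative humidity in [0, 1]."""
    theta = math.radians(theta_deg)
    # wetted radius R from the spherical-cap volume at contact angle theta:
    # V = (pi*R^3/3) * (1-cos t)^2 * (2+cos t) / sin(t)^3
    f = (1.0 - math.cos(theta))**2 * (2.0 + math.cos(theta)) / math.sin(theta)**3
    R = (3.0 * V0 / (math.pi * f)) ** (1.0 / 3.0)
    mdot = math.pi * R * D * (1.0 - H) * c_sat * (1.3 + 0.27 * theta**2)
    return RHO * V0 / mdot
```

for a 1 nl droplet at 25 °c, θ = 30°, h = 50%, this gives roughly 6 s, consistent with the ~6 s reported for small droplets below, and it reproduces the tf ∝ v^(2/3) scaling noted in the text.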
assuming a linear velocity profile across the cross section of the virus, τ is given by τ = μu/dv, where μ, u, and dv are the viscosity of the droplet, the flow velocity at the virus apex (fig. 2), and the virus diameter, respectively. the flow inside the droplet is driven by the loss of liquid vapor by diffusion. we neglect the flow caused by marangoni stress, since an evaporating water droplet in ambient conditions does not exhibit this stress. 13, 16, 17 the non-uniform evaporative mass flux on the liquid-gas interface, j (kg m −2 s −1 ), varies with the radial coordinate r (fig. 2) as 12 j(r) ∝ (1 − (r/r) 2 ) −λ(θ) , where λ(θ) = 0.5 − θ/π. this expression exhibits a singularity at r = r, and the maximum value of j (say, jmax) occurs near the contact line region (say, at r = 0.99r). the magnitude of the evaporation-driven flow velocity (m s −1 ) is estimated as u ≈ jmax/ρ. 17 combining eqs. (8) and (10) then yields the maximum shear stress τ = μjmax/(ρdv). second, we present the effect of ambient temperature, surface wettability, and relative humidity on the drying time of the droplet. in this context, we examine the drying time of a deposited droplet at two ambient temperatures, 25 °c and 40 °c, representative of a room with air-conditioning and of outdoors in summer. figure 3 shows the variation in evaporation time with droplet volume at the two ambient temperatures considered. the contact angle and humidity for these simulations are set as 30° and 50%, respectively. at 25 °c, the evaporation time is about 6 s for small droplets, increasing to 27 s for large droplets. the evaporation time increases as the square of the droplet radius, or the 2/3 power of volume. an increase in ambient temperature reduces the evaporation time substantially (by about 50% for a 15 °c rise in temperature).
therefore, an increase in the ambient temperature is expected to drastically reduce the chance of infection through contact with an infected droplet. the effect of the surface on which the droplet falls is modeled here through an appropriate value of the contact angle: 10° corresponds to a water droplet on glass, while 90° corresponds to a water droplet on the touch screen of a smartphone (table i). the results of simulations for these two contact angles are plotted in fig. 3; the ambient temperature and humidity are set as 25 °c and 50%, respectively. figure 3 shows that the effect of the surface can be quite profound: the evaporation time can increase by 60% for a more hydrophobic surface. the droplet spreads more on the surface as the contact angle decreases, thereby enhancing the mass loss rate of liquid from the droplet to the ambient; therefore, for a surface with a smaller contact angle, the evaporation time of the droplet is smaller. the effect of the surface can be further manifested by temperature differences in different parts of the surface. such inhomogeneity in surface temperature can be brought about by differences in surface material (leading to differences in emissivity) or differential cooling (for example, due to the corner effect); even a slight difference in surface temperature can further aggravate the surface effect by influencing the evaporation time. sars-cov-2 has a lipid envelope, and in general, the survival tendency of such viruses, when suspended in air, is greater at a lower relative humidity of 20%-30%, 23 as compared to several other viruses that do not have a protective lipid layer. here, we examine the effect of relative humidity on the survival of the virus inside a droplet deposited on a surface. figure 3 shows that relative humidity has a strong effect on the evaporation time.
the contact angle and ambient temperature for these calculations are set as 30° and 25 °c, respectively. the evaporation time of a droplet increases almost sevenfold as humidity increases from 10% to 90%, and exceeds 2 min for large droplets at high humidity. with the increase in humidity in coastal areas in summer, and later in other parts of asia in july-september with the advent of the monsoon, this may become an issue, as there will be sufficient time for the virus to spread from the droplet to new hosts upon contact with the infected droplet. therefore, higher humidity increases the survival of the virus when it is inside a droplet; however, it decreases its chances of survival when the virus is airborne. finally, we discuss the relevance of the present results in the context of the covid-19 pandemic. the evaporation time of a droplet is a critical parameter, as it determines the duration over which infection can spread from the droplet to another person coming in contact with it. the virus needs a medium to stay alive; 5 therefore, once the droplet has evaporated, the virus is not expected to survive, and the evaporation time can be taken as an indicator of the survival time of the virus. in general, it is regarded that a temperature of 60 °c maintained for more than 60 min inactivates most viruses; 23 however, contradictory findings about the effect of temperature on the survivability of sars-cov-2 have been reported. 24, 25 our results indicate that the survival time of the virus depends on the surface on which the droplet has fallen, along with the temperature and humidity of the ambient air. the present results are expected to be relevant in two different scenarios: when droplets are generated by an infected person coughing or sneezing (in the absence of a protective mask), or when fine droplets are sprayed on a surface for cleaning/disinfecting it.
a wide range of droplet sizes is expected to be produced in these cases. mutual interaction of the droplets, such that they interfere in each other's evaporation dynamics, is however expected to be weak, because the distance between droplets is large compared to their diameter. the virus inside a droplet is subjected to shear stress due to the evaporation-induced flow inside the droplet; the magnitude of this shear stress has, however, been estimated to be small, and the virus is unlikely to be disrupted by it. to determine the likely lifetime of the droplet and the virus on the surface, we find the mean and standard deviation of the probability density function (pdf) of the normal distribution of droplet drying times for different cases of ambient temperature, contact angle, and relative humidity. the means and standard deviations are plotted using bars and error bars, respectively, in fig. 4. the likely lifetime is in the range of 5-20 s for h ≤ 50%, while it is in the range of 40-100 s at higher humidity. this result shows that the drying time is likely to be around five times larger at large relative humidity values, thereby increasing the chances of the survival of the virus. furthermore, we examine the connection between the drying time of a droplet and the growth of the infection; a similar approach was recently tested for droplets suspended in air in ref. 27. we hypothesize that since the drying time of a respiratory droplet on a surface is linked to the survival of the virus, it is correlated with the growth of the pandemic. since the drying time is a function of weather, we compare the growth of infection with the drying time in different cities, selected for cold/warm and dry/humid weather. the growth of the total number of infections during the pandemic is plotted for cities with different weather conditions in fig. 5.
the infection data were obtained from public repositories. 28, 29 the data were fitted with linear curves using the least-squares method; the slope of each fit represents the growth rate (the number of infections per day) of the respective city. the growth rates of new york city and singapore are the highest and the lowest, respectively. for the different cities, we compute the drying time of a droplet of 5 nl volume, the mean volume obtained from the pdf of the distribution (fig. 1). the ambient temperature and relative humidity are taken as the means of the respective ranges listed in table ii. as discussed earlier, the drying time increases with an increase in humidity but decreases with an increase in ambient temperature; thus, the combined effect of humidity and temperature dictates the final drying time. this can be illustrated by comparing the drying times of singapore and new york city plotted in fig. 6: the time is shorter for the former despite its larger humidity (70%-80%) compared to the latter (50%-60%). finally, fig. 6 compares the growth rate and drying time in different cities using vertical bars and symbols, respectively. the growth rate appears to be weakly correlated with the drying time, i.e., a larger (smaller) growth rate corresponds to a larger (smaller) drying time. qualitatively, these data suggest that when a droplet evaporates slowly, the chance of survival of the virus is enhanced and the growth rate is augmented. we recognize that our model has limitations, which can be addressed in subsequent studies. in particular, air has been assumed to be stationary; the evaporation time is expected to decrease in the presence of convective currents. therefore, the predicted evaporation times are on the conservative side, and actual evaporation times will be smaller than those obtained here.
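the least-squares slope used as the growth rate can be computed directly (the counts below are hypothetical; the paper fits real city data from public repositories):

```python
def growth_rate(days, cases):
    """Slope of the ordinary least-squares line through (day, cumulative cases),
    i.e. the average number of new infections per day over the window."""
    n = len(days)
    mx = sum(days) / n
    my = sum(cases) / n
    sxx = sum((x - mx) ** 2 for x in days)
    sxy = sum((x - mx) * (y - my) for x, y in zip(days, cases))
    return sxy / sxx

# hypothetical daily cumulative counts for one city
rate = growth_rate([0, 1, 2, 3, 4], [10, 22, 29, 41, 50])  # = 9.9 infections/day
```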
the effect of the solute present in saliva/mucus (i.e., raoult's law) has not been modeled, and the contact angle and drying behavior of these biological fluids could be slightly different from those of pure water on a solid surface; however, the impact of these effects on the drying time is expected to be small. furthermore, the model does not consider droplet interactions. it is likely that respiratory droplets expelled from the mouth and/or nose deposit adjacent to each other on a surface and could interact while evaporating. 30 they may interact while falling, 31 and a falling droplet may coalesce with an already deposited droplet on a surface. 32 in addition, receding of the contact line may influence the drying time, 33 which is not considered in the present work. in sum, we have examined the likelihood of the survival of sars-cov-2 suspended in respiratory droplets originating from a covid-19 infected subject. the droplet is considered to be evaporating under ambient conditions on different surfaces, with a volume in the range of 1-10 nl. the datasets of drying times presented here for different ambient conditions and surfaces will be helpful for future studies. the likelihood of the survival of the virus increases roughly fivefold under humid conditions as compared to dry conditions, and the growth rate of covid-19 was found to be weakly correlated with the outdoor weather. while the present letter discusses the results in the context of covid-19, the model is also valid for respiratory droplets of other transmissible diseases, such as influenza a. the data that support the findings of this study are available from the corresponding author upon reasonable request.
references:
characterizations of particle size distribution of the droplets exhaled by sneeze
evaporation and dispersion of respiratory droplets from coughing
how far droplets can move in indoor environments - revisiting the wells evaporation-falling curve
turbulent gas clouds and respiratory pathogen emissions: potential implications for reducing transmission of covid-19
loss of infectivity on drying various viruses
the flow physics of covid-19
a review of coronavirus disease-2019 (covid-19)
inactivation of influenza a viruses in the environment and modes of transmission: a critical review
aerobiology and its role in the transmission of infectious diseases
transport and deposition patterns in drying sessile droplets
crc handbook of chemistry and physics
evaporation of a sessile droplet on a substrate
pattern formation during the evaporation of a colloidal nanoliter drop: a numerical and experimental study
a combined computational and experimental investigation on evaporation of a sessile water droplet on a heated hydrophilic substrate
evaporative deposition patterns: spatial dimensions of the deposit
analysis of the microfluid flow in an evaporating sessile droplet
self-assembly of colloidal particles from evaporating droplets: role of dlvo interactions and proposition of a phase diagram
dynamics of water spreading on a glass surface
wetting of wood
on the collision of a droplet with a solid surface
water wetting and retention of cotton assemblies as affected by alkaline and bleaching treatments
preparation and adhesion performance of transparent acrylic pressure sensitive adhesives for touch screen panel
the effect of environmental parameters on the survival of airborne infectious agents
high temperature and high humidity reduce the transmission of covid-19
no association of covid-19 transmission with temperature or uv radiation in chinese cities
modeling ambient temperature and relative humidity sensitivity of respiratory droplets and their role in determining growth rate of covid-19 outbreaks
evaporation-induced transport of a pure aqueous droplet by an aqueous mixture droplet
effect of viscosity on droplet-droplet collisional interaction
coalescence dynamics of a droplet on a sessile droplet
on the lifetimes of evaporating droplets with related initial and receding contact angles

key: cord-257813-2ij3fkrh authors: walsh, froma title: loss and resilience in the time of covid‐19: meaning making, hope, and transcendence date: 2020-07-17 journal: fam process doi: 10.1111/famp.12588 sha: doc_id: 257813 cord_uid: 2ij3fkrh

this article addresses the many complex and traumatic losses wrought by the covid‐19 pandemic. in contrast to individually‐based, symptom‐focused grief work, a resilience‐oriented, systemic approach to complex losses contextualizes the distress and mobilizes relational resources to support positive adaptation. applying a family resilience framework to pandemic‐related losses, the discussion focuses on the importance of shared belief systems in (1) meaning‐making processes; (2) a positive, hopeful outlook and active agency; and (3) transcendent values and spiritual moorings for inspiration, transformation, and positive growth. practice guidelines are offered to facilitate adaptation and resilience.

matriarch. a death is often experienced as a hole in the heart of a family that will never again feel intact. sudden deaths, most common in rapidly progressing, severe cases of covid-19, are jolting experiences for families. a recovering loved one may suddenly take a turn for the worse. there is often extreme physical suffering before death, which is agonizing for loved ones, helpless on the sidelines and lacking treatment options. with quarantine restrictions, family members are unable to be at the bedside to provide comfort and say their good-byes.
Additional heartache ensues when gatherings are prohibited for funeral and burial rituals that help families and their communities to honor the deceased, share grief, and provide mutual support (Imber-Black, 2020). My extended family experienced a heartbreaking death to coronavirus. In March, I received an anguished email from my cousin: she had been informed by her mother's nursing home that her mother had contracted COVID-19, was in isolation, and was declining rapidly, but could receive no visitors. Family members hovered outside the building, unable to be with her as she declined and died. They weren't allowed to see her body or to hold a funeral gathering. A week later, her daughter, who had visited her mother just before symptoms appeared, contracted the virus herself, was in quarantine, and worried about having spread it to other grieving family members. I was relieved to hear, a month later, that she was recovering from a mild case. But she and her siblings were deeply distressed over their mother's death and furious that the facility had not informed them that other residents had tested positive before their mother's diagnosis. They were wracked with remorse that they had let her go to a care facility and had not insisted on taking her in to live with them. Such heart-wrenching situations are all too common for families losing a loved one in this time of high contagion. The elderly and others with underlying medical conditions face heightened risk. With an unexpected loss, family members lack time to prepare emotionally or practically, to deal with unfinished business, or to say their goodbyes. Grief can be complicated by regrets that it is too late to repair wounded bonds. In some cases, families and emergency care providers must make agonizing end-of-life decisions to forego or end life support efforts. Strong disagreements or religious concerns can lead to long-lasting family distress. This article is protected by copyright. All rights reserved.

The isolating constraints of social distancing heighten awareness that our connections with others are vital to thrive. In traumatic experiences like a pandemic, when helplessness and confusion are common, we have an urgent need to turn to one another for support, comfort, and safety. Separations are keenly felt. With high risks of severe illness and death for elders and those with chronic medical conditions, loved ones are fearful of bringing the virus to them. Travel safety concerns limit visits by those living at a distance. Elders miss out on the rapid developments of grandchildren and yearn for a hug, a kiss, the scent of a baby's breath. Individuals in prolonged isolation, living alone or in care facilities, can suffer a sense of disconnection and loneliness, which increases risks for physical and mental decline, substance use, emotional despair, and death (Cacioppo, Cacioppo, & Capitanio, 2015; Killgore, Cloonan, Taylor, & Dailey, 2020). Families need to sustain connections across distance: phone and internet contact, cards and letters, and children's drawings all offer vital lifelines. The severe economic shockwaves of the COVID-19 pandemic have far-reaching impact on financial security and wellbeing in families. Job loss and the looming threat of prolonged unemployment, business closures, and uncertain economic recovery can be devastating, especially for lower-income families who lack savings and barely scrape by, paycheck to paycheck. The loss of essential income can have cascading effects, with loss of homes, disruptive relocations, and persistent housing and food insecurity. An untimely death in the pandemic is especially heartbreaking for families. The loss of a child, even one in early adulthood, upends life cycle expectations and shatters hopes and dreams for all that might have been.
In the rapid spread of the coronavirus, anticipatory loss (Rolland, 2018) is a constant concern, with worry about one's own safety and the threatened loss of loved ones. Dire forecasts of a prolonged economic recession generate deep anxieties about future livelihoods and retirement security. Young adults, facing the loss of educational and job plans, fear the loss of life dreams: pursuing careers, gaining financial independence, finding life partners, and starting a family.

The loss of a sense of normalcy is widespread. Life as we have known it has been derailed. Life forward is on hold, the future uncertain, and the road ahead unclear. There is much talk about the "old normal" and the "new normal." Yet, like the aftershocks of an earthquake, the ground keeps shifting, and nothing feels normal. These harrowing times take a mental, physical, and emotional toll. Daily news reports increase a sense of overwhelm, with confusing and conflicting information and changing forecasts of what lies ahead. A cartoon depicts a couple in their living room, with flames rising up around them. As one partner sits on the sofa, trying to read a book, the other stands transfixed in front of the large-screen TV watching the breaking news bulletin: "Hell still on fire." In this unprecedented pandemic, there is a collective experience of shattered assumptions in our worldview: our taken-for-granted beliefs and expectations about our lives and our connections to our world (Janoff-Bulman, 1992). The invisibility of the virus, its lethal potential, and the possible spread by non-symptomatic persons heighten fears of infection. The death of a loved one, and the loss of physical contacts, life structures, and future life visions, can shatter core beliefs and make our world seem unpredictable and unjust. As one father lamented, "Everything I thought I knew is shaken."
One global mental health specialist coined the term "COVID cognitive cloud" to describe the disorganizing impact of the pandemic. Ambiguities cloud our thinking and decision-making. Who is trustworthy for leadership, information, and guidance? Where and with whom are we safe? We feel trapped and angry at a loss of freedoms with lockdown and restrictions. Paradoxically, we also feel unmoored and adrift, swept by strong currents in a perfect storm of extreme events beyond our comprehension and control. The impact of loss is compounded by situational risks, larger systemic/structural forces, and/or complex family dynamics.

High-risk situations and socio-economic disparities. The risk and pain of loss are intensified when loved ones are working on the front lines and in jobs with repeated exposure to the virus. It is heartbreaking for families of healthcare emergency workers who contract coronavirus while providing critical care, often lacking protective equipment, without respite from the overload of cases, and suffering emotionally when lives can't be saved. Those who self-isolate to protect their own family members miss their support. Socio-economic and racial disparities render disadvantaged and marginalized communities at higher risk for multiple losses in major disasters worldwide (Norris, 2002). In a pandemic, crowded living conditions, job and environmental hazards, chronic medical conditions, and discrimination in disaster response heighten risks. Blacks and Latinx have been disproportionately affected by coronavirus across the United States and all age groups (Oppel, Gabeloff, et al., 2020). Stark disparities are seen in the highest death rates, particularly among low-paid workers and their family members. Many employees are caught between troubling options: going to work for a needed paycheck, or losing their jobs and income if they stay home to keep themselves and loved ones safe.
Prolonged unemployment and financial insecurities have long-term effects.

Ambiguous loss. Ambiguity surrounding risk and loss generates anxiety, depression, and conflict, interfering with adaptation (Boss, 1999). With COVID-19, ambiguities persist about how the virus is spread and whether a death was due to coronavirus. Unclarity about the diagnosis, symptoms, and severity can be an impediment to getting emergency care. Family members may fault themselves for not having understood risks or acted to prevent a death, and may remain unclear about their own future risks.

Unacknowledged and stigmatized losses. When losses are unacknowledged, hidden, or minimized, they leave families unsupported (Doka, 2002). The denial of the human tragedy of illness and deaths in the spread of COVID-19 by national authorities renders suffering invisible. The stigma of possible contagion surrounding a COVID-related death fosters misinformation, secrecy, and estrangement, impairing social support as well as critical health and mental health care. Reports are also emerging of a spike in suicides and addiction-related deaths, with concerns about further increases with long-term effects on the economy and vulnerable groups (Gunnell, Appleby, et al., 2020). Deaths by suicide or overdose are tormenting for families, who struggle to comprehend them and may need help with anger, blame, shame, or guilt over how they might have made a difference (Walsh, in press). As the first wave of the pandemic surges in many places, with a second wave expected, most families experience a roller-coaster course in efforts to cope and adapt. Families can be overwhelmed by the emotional, relational, and functional impact of the many stresses in their lives. Adaptation can be further complicated in highly conflicted, abusive, or estranged relationships, or with reactivation of painful emotions around past trauma or loss (Walsh & McGoldrick, 2013).

The dominant Anglo-American culture has fostered avoidance in facing death and loss, minimizing their impact and encouraging people to quickly get "closure" and move on from losses and painful emotions (Walsh & McGoldrick, 2004). Some seek reassurance that death happens to others who are unfortunate or at fault, to assuage anxieties about their own risks. Many are uncomfortable in responding to others' loss experiences and may distract attention or avoid contact. Reflecting the cultural aversion, many therapists working with families have been hesitant in addressing significant losses, leaving grief to bereavement specialists and pastoral counselors. Moreover, there's no safe professional boundary from emotional spill-over: therapists, as well as clients, are impacted by the pandemic and are dealing with losses, disruptions, and anxieties in both work and family spheres of life. Like our clients, we are trying to hold it all together. In a larger cultural context and mental health field that favors brief solution-oriented approaches, therapists need to appreciate that loss is not a problem to solve. We can't bring back a deceased loved one or a livelihood or way of life that is gone. We can listen openheartedly to pain and suffering in families, facilitate their mutual support, and encourage active efforts for positive adaptation. The cultural ethos of the "rugged individual" fosters expectations of self-reliance and fierce independence in dealing with serious life challenges. Vulnerability and dependence on others are shame-laden, viewed as weakness and deficiency. Associated cultural images of masculinity constrain many men's emotional expression and strain relational bonds. In couples, a distraught spouse may feel abandoned by an emotionally unavailable partner when mutual support is needed most. This ethos also encourages individuals to tough it out on their own: "I should be able to manage it all myself."
"I don't want to ask for help or burden others." Such expectations lead to burnout, especially for single parents, and leave no time to attend to emotional needs or find respite from pandemic-related stresses. Vulnerability is part of the human condition. Distress is normal in abnormal times. Although some families are more vulnerable in this pandemic, most face losses and upheaval. False assurances of invulnerability are foolhardy. Acknowledgment of grief, suffering, and hardship is a strength that can rally mutual support and collective efforts for recovery. We are relational beings. Recognition of our essential interdependence is vital for our wellbeing and resilience. In turning to others for help, we can pay it back and pay it forward. Mobilizing kin and social support, while challenging with social distancing restrictions, is crucial to build family and community resource teams. As a society, we are all going through this pandemic together. We need and depend on each other for our lives and our future. In this time of pandemic, there is much talk about widespread grief. It's important to clarify current research-based understandings of loss, and common misconceptions from earlier theories positing a single, universal model of "normal" or "healthy" grief. Epidemiological and cross-cultural studies have found wide diversity in responses to loss, with variation in the timing, expression, and intensity of normal grief responses (Walsh, in press). In families, members may not be in sync, requiring respect for differences. Grief and recovery processes do not follow an orderly stage sequence or timetable as proposed by Kübler-Ross and Kessler (2005). Common reactions of shock and disbelief, anger, bargaining, sorrow, and acceptance are better seen as facets of grief, which ebb and flow over time. While usually decreasing in intensity, various facets can surface unexpectedly, particularly around nodal events.

In the COVID-19 pandemic, initial shock and disbelief are common, but unshakable denial becomes detrimental in not facing the reality that must be dealt with. In families, tolerance is needed for different reactions: one member may be consumed by sadness and yearning while another is enraged by the unfairness of a loss. A breadwinner may need to keep emotions under wraps to function at work. Small children may show anxious clinging or need constant contact, while adolescents may distance (Walsh & McGoldrick, 2013). Adaptation involves a dynamic oscillation of attention alternating between loss and restoration, focused at times on grief and at other times on emerging challenges. With pressing demands, many don't have the time and space to process complicated losses, which may find expression in substance use, relational conflict, or child-focused problems. Many only seek counseling much later, after initial social support wanes and the full impact of loss-related challenges is felt. This requires pacing of interventions attuned to each family, weaving back and forth in attention to grief, coping efforts, and future directions. Adaptation to loss does not mean full recovery or resolution in the sense of some complete, once-and-for-all getting over it. Recovery is best seen in terms of adaptation over time, rather than a final outcome. Many recover from coronavirus, yet some suffer long-term sequelae not yet understood. Recovery from the economic effects of the pandemic may be partial, as will be recovery of aspects of past ways of life. Efforts will be needed for both continuity and adaptive change. Likewise, resilience in response to loss and other major disruptions does not mean "just bounce back," quickly rallying and moving on unscathed (Walsh, 2016b). Healing and resilience occur gradually over time. Grief is a healing process: we don't get over grief--we go through it.
Resilience is forged through suffering and setbacks; it involves struggling well and integrating painful loss experiences into our life passage. The concept of resilience--the capacity to overcome adversity--is finding valuable application in situations of widespread disaster, collective trauma, and loss (Landau, 2007; Masten & Motti-Stefanidi, 2020; Saul, 2013; Walsh, 2007, 2016b). With advances in research, resilience is now understood as involving dynamic, multilevel systemic processes over time. The response to a disaster by communities and larger systems can make the difference for individual and family wellbeing and resilience. For instance, abysmal failures in the government response to Hurricane Katrina compounded widespread suffering and loss. In contrast, the coordinated response to the Oklahoma City bombing tragedy by community leaders and agencies provided immediate support and fostered long-term positive adaptation (Walsh, 2007, 2016b). Family resilience refers to capacities in family functioning to withstand and rebound from disruptive life challenges in adversity. More than surviving loss and coping with disruptions, resilience involves positive adaptation: regaining the ability to thrive, with the potential for transformation and positive growth forged through the searing experience. A family resilience orientation is finding broad application in strengths-based, collaborative, systemic training, practice, and research (Walsh, 2016a, 2016b). A resilience-oriented approach with loss (a) contextualizes the distress; (b) attends to the challenges, suffering, and struggles of families; and (c) strengthens relational processes that support coping, adaptation, and growth. With a multisystemic lens, this approach draws on extended kin, social, community, sociocultural, and spiritual resources, and strengthens larger systemic/structural supports.
To help families forge resilience in response to pandemic-related losses and the myriad challenges they face, therapists can usefully apply this author's family resilience framework. Designed as a practice map to guide intervention with families facing extreme adversity, it has been applied to traumatic and complicated losses in communities and with widespread disaster (Walsh, 2007, 2016b). The COVID-19 pandemic is a perfect storm of stressors, involving acute crisis and loss events, disruptions in many aspects of life, and ongoing multistress challenges with evolving conditions. The situation is so extreme that families are experiencing the strains of grief and sadness over so much loss, fears for loved ones, and anxieties about the future. How a family deals with stress and loss is crucial; therapists can help families strengthen key transactional processes for mutual support and mobilize active efforts to overcome challenges. In gaining resilience, families strengthen bonds and resourcefulness for meeting future challenges.

The Walsh family resilience framework identified nine key processes--facilitative beliefs and practices--in three domains of family functioning: family belief systems, family organizational processes, and communication/problem-solving processes (Walsh, 2003, 2016b). Discussion in this paper focuses on the powerful influence of family belief systems in the COVID-19 pandemic. Shared facilitative beliefs are the heart and soul of family resilience. Each family's belief system, rooted in multi-generational and sociocultural influences, comes to the fore in times of crisis and loss, shaping members' experience and their pathways in adaptation. Family resilience is fostered by shared beliefs (1) to make meaning of the crisis and challenges; (2) to (re)gain a positive, hopeful outlook that supports active agency; and (3) for transcendence: to rise above suffering and hardship through larger values, spiritual beliefs and practices, and experiencing transformations in new priorities, a sense of purpose, and deeper bonds.

Core beliefs ground and orient families, providing a sense of reality, normalcy, meaning, or purpose in life. Well-being is fostered by expectations that others can be trusted; that communities are safe; that life is orderly and events predictable; and that society is just. When the losses and upheavals of this pandemic shatter such assumptions, as noted above, there is a deep need to restore order, meaning, and purpose (Janoff-Bulman, 1992). Meaning making and recovery involve a struggle to understand what has been lost, how to build new lives, and how to prevent future tragedy. Meaning reconstruction is a central process in healing in response to trauma involving both death and non-death losses (Neimeyer & Sands, 2011). It involves sense-making efforts over time, not simply a final stage in resolving grief, an "aha" moment when everything makes sense. In this pandemic, at first it is hard to understand what is happening, without previous experience to relate it to. As we grapple with the implications, we gradually try to come to terms with the situation, with what can be known and the uncertainties that persist. In families, meaning-making processes involve shared attempts to make sense of the loss, put it in perspective to make it more bearable, and, over time, integrate it into personal and relational life passage (Nadeau, 2008). Resilience is strengthened in helping families gradually forge a sense of coherence through shared efforts to make loss-related challenges comprehensible, manageable, and meaningful to tackle (Antonovsky & Sourani, 1988). This requires dealing with ongoing negative implications, including the loss of hopes and dreams. Contextualizing members' distress as common and understandable in their situation--normal in abnormal times--can depathologize intense reactions and reduce blame, shame, and guilt. In the context of COVID-19, therapists need to explore both the factual circumstances of losses and the implications they hold for family members in their social and developmental contexts. Commonly, families grapple with painful questions: "How did this happen?" "Could it have been prevented?" "What will happen to us?" "What does it mean for our lives?" Such concerns persist when, for instance, the source of viral transmission, the development of vaccines and treatments, or the future of the economy remains unclear. Causal attributions concerning blame, self-blame, and guilt can be strong when questions of failed responsibility or negligence arise, such as not following public health guidelines. Meaning-making efforts and future planning are hampered by repeated unclear and inconsistent information from government authorities. Frustrations may boil over in anger that more should have been done to prevent widespread viral contagion and economic losses. Systemic therapists can help family members to voice such concerns, come to terms with the reasonable limits of control in the situation, and seek greater accountability and leadership from those in charge at local and national levels. Families may struggle to envision a new sense of normality, identity, and relatedness to adapt to altered conditions. They can become trapped in helplessly waiting to hear what will happen next. A sense of active agency is vital for resilience: What can we do about it? What are our options? Clinicians can support efforts to gain and share helpful information and to become involved in community efforts.
Helping professionals are cautioned not to ascribe meaning to a family's unique experience. Our role is not to provide meaning for those who are struggling, but to facilitate their meaning-making process (Frankl, 1946). The multiple meanings of a particular loss evolve over time as they find expression in continuing patterns of interaction and are integrated with other life experiences. Over time, adaptation involves weaving the painful experience, and the resilience forged, into the fabric of individual and collective identity and life passage.

Abundant research has found the importance of a positive outlook for resilience (Walsh, 2016b). Yet this should not be seen as relentless optimism and good cheer. In confronting the significant challenges of COVID-19, it is common to experience discouraging setbacks. Sadness and nostalgic yearning are intensified when former lives can't be restored. Many persons report that there were times when they didn't know if they could face another day, or felt that life no longer had meaning--but with the support, or needs, of others, they vowed to carry on. Family members' mutual encouragement bolsters active efforts to take initiative and to persevere. Affirming individual and family strengths in the midst of difficulties can counter a sense of helplessness, failure, and despair as it reinforces shared pride, confidence, and a "can do" spirit. Hope is most essential in times of overwhelm and despair, fueling energies and efforts to cope and rebuild lives. We hold onto hope in the midst of uncertainty. Weingarten (2010) cautions us to practice reasonable hope and to avert false hopes. Wishful thinking does not make a pandemic go away. Flaskas (2007) notes the complex dynamics of hope and hopelessness within intimate relationships, embedded in family history, community, and social processes, which can support or undermine hope.
In a couple, one partner may lose hope while the other holds hope for both. Therapists can witness the coexistence of hope and hopelessness in a way that nurtures hope and yet emotionally holds both. In working with COVID-related loss, we can help families reorient hope: as some hopes are lost, what can realistically be hoped for and worked toward? Support may be needed to tolerate prolonged uncertainties and lengthy recovery processes, while holding hope in future possibilities with sustained efforts. As studies have found, resilience is fostered by focusing efforts to master the possible, accepting that which is beyond control, and coming to terms with what cannot be changed (Walsh, 2016a, 2016b).

Transcendent values and connections enable families to view losses and suffering beyond their immediate plight. Cultural and spiritual connections are valuable resources to support adaptation, providing assistance to honor and grieve all that was lost and move forward with life (Rosenblatt, 2013). In the time of COVID-19, transcendent values and practices help families to endure and rise above losses and disruptions by fostering meaning, harmony, connection, and purpose. They offer opportunities to reaffirm identity, relatedness, and core social values of caring and compassion for others. In times of loss and deep suffering, spiritual matters commonly come to the fore, whether based in religious or existential concerns (Wright & Bell, 2009). Clinicians are encouraged to attend to the spiritual dimension of experience, to explore issues that constrain adaptation, and to draw on spiritual resources that fit clients' preferences within and/or outside organized religion (Walsh, 2009b). Research has documented the positive effects of deep faith, belief in a higher power, prayer and meditative practices, and congregational support in times of crisis (Koenig, 2012). In this time of sequestering and social restrictions, connections with nature are important to nourish spirits: from soothing bonds with companion animals (Walsh, 2007), to the rhythm of waves on the shore, the songs of birds, and the hopeful renewal of life with new birth. The transcendent power of music and the creative arts fosters resilience, expressing unbearable sorrow and restoring the spirit to rise above adversity. In this prolonged period of angst, I find music most uplifting; it also connects me with my mother, a gifted musician, whom I lost too soon. Times of great tragedy can bring out the best in the human spirit: ordinary people show extraordinary courage, compassion, and generosity in helping kin, neighbors, and strangers to recover and rebuild lives. For many, their spirituality is connected to a purposeful dedication to social justice or climate change activism. Creativity is vital in our lives, as we need to invent new ways to overcome pandemic-related challenges. In some communities, individuals and multi-generational families are exploring safe ways to come together by creating "social pods": contact clusters for interpersonal connection and practical support. Mental health professionals are needing to transform ways of providing therapy when social distancing and face coverings constrain in-person "face-to-face" office sessions. Therapists and clients are gaining new skills and comfort with telehealth therapy, despite the limitations. Many notice a silver lining: increased access to therapy for those whose stress overload, incompatible schedules, disabilities, or distances from offices prevented in-person sessions.

Finding creative ways to celebrate important events, such as birthdays and graduations, can revitalize spirits and reconnect all with the rhythms of life. One young couple, saddened when plans for an elaborate wedding had to be cancelled, instead held a simple yet deeply meaningful backyard ceremony, under a homemade wooden arch covered with a trellis of white blossoms. We witnessed the couple's joyful union via Zoom, along with family members across two continents, snapping memorable photos with a screen-saving click. They look forward to a festive post-pandemic party. Whatever our adverse situation, we can make the most of it, practicing the "art of the possible": "Do all you can, with what you have, in the time you have, in the place you are." More than surviving loss or managing stressful conditions, family processes in resilience can yield personal and relational transformation and positive growth. In struggling through loss and hardship, in active coping efforts, and in reaching out to others, families tap resources that they may not have drawn on otherwise, and gain new perspective on life (Walsh, 2016b). Similarly, studies of posttraumatic growth have found that individuals often emerge from life-shattering losses with remarkable transformations: gaining appreciation of life and new priorities; warmer, closer relationships; enhanced personal strengths; recognition of new possibilities or paths in life; and deepened spirituality (Tedeschi & Calhoun, 2008). The experience of suffering and loss often sparks compassionate actions to benefit others or address harmful conditions. Clinicians can encourage families to find pride, dignity, and purpose from their darkest times through altruistic initiatives. Many report stronger bonds forged through shared dedication, such as mobilizing community action coalitions or medical research funding (Walsh, 2016b). Bereaved families can find strength to surmount heartbreaking loss and go on in meaningful lives by bringing benefit to others from their own suffering. One African-American family lost their beloved matriarch to COVID-19.
she had worked tirelessly as a home healthcare provider but never had the healthcare she herself needed. when the family sought testing and care for her symptoms of coronavirus, their community lacked essential resources that might have saved her life. her children were devastated by her loss, but agreed that she wouldn't want them to become consumed by anger and grief. they vowed to do something meaningful to honor her life and her memory. as her son related, "we want something good to come out of our tragedy. we're taking up fierce advocacy for changes in our healthcare system so everyone gets quality care, no community is left behind, and no family will suffer as ours has. she will be smiling down on us with pride." in the wake of loss, families cannot bring back a deceased loved one or recover all that was valued and lost, yet their suffering and struggle can yield new purpose and life priorities. many report that a major life challenge spurred them to reappraise their priorities and stimulated greater investment in meaningful pursuits. in the peak of covid-19 hospitalizations, as neighborhoods put up lawn signs thanking healthcare workers, a teenager in one family expressed outrage: "thanking them is nice, but we should value them by making sure they have the equipment they need and by paying them what they are worth!" she and her parents mobilized community members to lobby for changes. many become more keenly aware of urgent needs for children and families. in the pandemic, parents are juggling incompatible demands of jobs, housework, childcare, and home schooling, with planned school openings precarious and daycare resources unavailable. gender disparities are starkly revealed for women who provide essential income and carry most responsibilities for homemaking and childrearing.
with the economy reopening, many parents are between a rock and a hard place: forced to give up jobs to attend to children's needs. the difficulties experienced in home-based learning sharpened awareness of the vital importance of quality education and the undervalued and underpaid role of teachers in our society. it also exposed the striking lack of access for remote learning in under-resourced, low-income and largely minority neighborhoods, setting children back from achieving their potential. transforming new insights into meaningful actions requires initiative, persistence, and creative solutions. we are living through time out of the ordinary. with our life course seemingly on hold and future forecasts cloudy, we cope by trying to "be here now," focused on getting through each day and week. while we are restricted in our social space, we need not be trapped in time in the "here and now." time out: as the initial overwhelm with covid-related loss and disruption eases and we contemplate a long haul, it affords the opportunity to reflect on our personal and collective lives and to re-appraise our values and aspirations (bruner, 1986). a crisis can be a wake-up call, heightening attention to what matters and who matters. in thinking more deeply about the "old normal" and "new normal" we realize that many aspects of our pre-covid lives that were normalized need to be changed for the better. as we expand our vision beyond our personal struggles, we see needs for broader systemic changes with more urgency. reconnecting with the past: we can learn and grow stronger from the past. this is the time to deepen understandings and connections to our past, to the joys and sorrows experienced. we can encourage family members to share sweet memories to revive spirits and bonds in these hard times. we can reminisce together over photos, make scrapbooks and pass on keepsakes to cherish.
using technology or old-school pen and paper, adults and their children can interview family elders and record their life stories: how grandparents fell in love; what their lives were like in their times. in hearing about experiences of crisis and challenge, it's important to draw out accounts of resilient responses alongside the difficulties. how did they and their families get through the great depression and world war ii? the courage, tenacity, and ingenuity in dealing with past loss and hardships can inspire current efforts. moreover, gaining perspective on elders' lived experience can increase compassion for their shortcomings and deepen bonds (walsh, 2016b). re-envisioning the future. in a pandemic that is novel, complex, and changing, long-term forecasts are hazy. we must learn to live with considerable uncertainty with flexibility to adapt to new developments. many joke about making plans a, b, c and beyond. we can't control everything that will happen, but we can dream and direct our energies toward our preferred vision. when future hopes and dreams are lost, therapists can help family members to reorient hope and envision new possibilities. linking the past, present, and future. when death ends a life, it does not end relationships. research finds that healthy adaptation to loss involves not a "letting go" or detachment, but rather a transformation from physical presence to continuing bonds (klass, 2009; stroebe, schut, & boerner, 2010). these bonds can be sustained through spiritual connections, memories, stories, keepsakes, deeds, and legacies. in this time of covid-19, families will need to find innovative ways to honor and sustain connections: through meaningful memorial events, whether virtual or postponed; in websites to share tributes and remembrances. in varied ways, family members can find meaning and resilience in "saying 'hullo' again" (white, 1988) and re-membering those who have been lost.
where bonds were frayed, they can be healed. therapists can foster an evolutionary perspective that integrates painful experiences and yields meaning and hope for the future. present time / precious time. the pandemic sharpens our awareness of the fragility of life and jolts us not to take future time for granted. the inevitability of losing others becomes more salient: what would we regret--things unsaid, undone--if a loved one died, or as we faced our own impending death? loss and threatened loss can heighten appreciation of loved ones taken for granted and spur efforts to repair grievances in wounded bonds. time does not heal all wounds, but offers new perspectives, experiences, and connections that can help people forge new meaning and purpose in their lives. over time, we will need to integrate the pandemic experience into the chapters of our individual and shared lives, strengthening the relational connections that matter to us: with the families we were born into, those we choose, and our wider communities. there is no love--or life--without loss. we are all mourners now, trying to guide one another as we navigate our way forward and strive to make a better world out of tragedy. our resilience is relationally-based, nurtured and fortified through our interconnections. by facing our vulnerability and by supporting one another through the worst of times we are better able to overcome daunting challenges to live and love fully. resilience is commonly thought of as "bouncing back," like a spring, to our pre-crisis norm. however, when events of this magnitude occur, we cannot return to "normal" life as we knew it. as our world changes, we must change with it. in the wake of the 9/11 terrorist attacks, i suggested that a more apt metaphor for resilience might be "bouncing forward" to face an uncertain future (walsh, 2002). this involves constructing a new sense of normalcy as we recalibrate our lives to face unanticipated challenges ahead.
over the ages, individuals, families, and communities have shown that, in coming together, they could endure the worst forms of suffering and loss and, with time and concerted effort, rebuild and grow stronger. the painful experiences in this pandemic will require time and shared reflection for meaning making, questioning old assumptions and grappling with a fundamentally altered conception of ourselves and our interconnections with all others in our shared world. taking a systemic view, the pandemic and our response will generate reverberations we can't foresee or control. mastering these challenges will require great wisdom and humanity in the months and years ahead.

references
family sense of coherence and family adaptation
the denial of death
loss, trauma, and human resilience: have we underestimated the human capacity to thrive after extremely aversive events?
ambiguous loss
actual minds / possible worlds
the neuroendocrinology of social isolation
disenfranchised grief
holding hope and hopelessness: therapeutic engagements with the balance of hope
man's search for meaning
covid-19 pandemic, unemployment, and civil unrest: underlying deep racial and socioeconomic divides
risk and prevention during the covid-19 pandemic
rituals in the time of covid-19. imagination, responsiveness and the human spirit
shattered assumptions: towards a new psychology of trauma
on grief and grieving
loneliness: a signature mental health concern in the era of covid-19
enhancing resilience: families and communities as agents for change
multisystemic resilience for children and youth in disaster: reflections in the context of covid-19. adversity and resilience science
meaning-making in bereaved families: assessment, intervention, and future research
meaning reconstruction in bereavement: from principles to practice
60,000 disaster victims speak: part 1. an empirical review of the empirical literature
the fullest look yet at the racial inequality of coronavirus
family grief in cross-cultural perspective
collective trauma, collective healing: promoting community healing in the aftermath of disaster
the dual process model of coping with bereavement: a decade on
continuing bonds in adaptation to bereavement: toward theoretical integration
beyond the concept of recovery: growth and the experience of loss
bouncing forward: resilience in the aftermath of traumatic loss and major disasters: strengthening family and community resilience
human-animal bonds: i. the relational significance of companion animals
spiritual resources in family therapy
applying a family resilience framework in training, practice, and research: mastering the art of the possible
strengthening family resilience (3rd ed.)
complicated loss: fostering healing & resilience
loss and the family: a systemic perspective
living beyond loss: death in the family
bereavement: a family life cycle perspective
reasonable hope: construct, clinical applications, and supports
saying hullo again: the incorporation of the lost relationship in the resolution of grief
beliefs and illness: a model for healing
the myths of coping with loss

key: cord-128991-mb91j2zs
authors: agapiou, sergios; anastasiou, andreas; baxevani, anastassia; christofides, tasos; constantinou, elisavet; hadjigeorgiou, georgios; nicolaides, christos; nikolopoulos, georgios; fokianos, konstantinos
title: modeling of covid-19 pandemic in cyprus
date: 2020-10-05
journal: nan
doi: nan
sha:
doc_id: 128991
cord_uid: mb91j2zs

the republic of cyprus is a small island in the southeast of europe and a member of the european union. the first wave of covid-19 in cyprus started in early march, 2020 (imported cases) and peaked in late march-early april.
the health authorities responded rapidly and rigorously to the covid-19 pandemic by scaling up testing, increasing efforts to trace and isolate contacts of cases, and implementing measures such as closures of educational institutions, and travel and movement restrictions. the pandemic was also a unique opportunity that brought together experts from various disciplines including epidemiologists, clinicians, mathematicians, and statisticians. the aim of this paper is to present the efforts of this new, multidisciplinary research team in modelling the covid-19 pandemic in the republic of cyprus.

coronavirus disease 2019 (covid-19), an infection caused by the novel coronavirus sars-cov-2 (coronaviridae study group of the international committee on taxonomy of viruses (2020)) that first emerged in wuhan, china (zhu et al. (2020)), now counts more than 25 million cases and has claimed nearly 850,000 lives (world health organization (2020)). despite some advances in therapy (beigel et al. (2020)) and considerable progress in vaccine development, with some vaccine candidates reaching phase iii trials (jackson et al. (2020)), there are still many gaps in our understanding of the new pandemic disease, including some epidemiological parameters. epidemic modelling is a fundamental component of epidemiology, especially with regard to infectious diseases. following the pioneering work of r. ross, w. kermack, and mckendrick in the early twentieth century (kermack and mckendrick (1927)), the discipline has established itself and comprises a major source of information for decision makers. for instance, in the united kingdom, the scientific advisory group for emergencies (sage) is a major body that collects evidence from multiple sources, including inputs from mathematical modelling, to advise the british government on its response to the complex covid-19 situation; for more information see this link.
in the context of the covid-19 pandemic, expert opinions can help decision makers comprehend the status of the pandemic by collecting, analyzing, and interpreting relevant data and developing scientifically sound methods and models. a model that describes the data perfectly is usually not feasible and would be of limited scope; hence, scientists usually aim for models that allow statistical simulation of synthetic data. at the same time, models can also approximate the dynamics of the disease and discover important patterns in the data. in this way, researchers can study various scenarios and understand the likely consequences of government interventions. finally, the proposed models could motivate the conduct of further studies about the evolution of both infectious and non-infectious diseases of public interest. here we report our work, including results from statistical and mathematical models used to understand the epidemiology of covid-19 in cyprus during the period from the beginning of march till the end of may 2020. we propose a range of different models that capture different aspects of the covid-19 pandemic. the analysis consists of several methods applied to understand the evolution of the pandemic in the short and long run. we use change-point detection and count time series methods for short-term projections, and compartmental models for long-term projections. we estimate the effective reproduction number by using three different methods and obtain consistent results irrespective of the method used. results are cross-validated against observed data with considerable consistency. besides providing a comprehensive data analysis, we illustrate the importance of mathematical models to epidemiology. in this section, after a brief introduction to the testing protocol, we introduce the different techniques and models that have been used for the modelling and analysis of the covid-19 infections in cyprus.
the unit for surveillance and control of communicable diseases (usccd) of the ministry of health operates covid-19 surveillance. the lab-based surveillance system consists of 19 laboratories (7 public and 12 private) that carry out molecular diagnostic testing for sars-cov-2. sociodemographic, epidemiological, and clinical data of individuals with sars-cov-2 infection are routinely collected from laboratories and clinics, and reported to an electronic platform of the usccd. a confirmed covid-19 case is a person, symptomatic or asymptomatic, with a respiratory swab (nasopharynx and/or pharynx) positive for sars-cov-2 by a real-time reverse-transcription polymerase chain reaction (rrt-pcr) assay. cases are considered imported if they have travel history from an affected area within 14 days of the disease onset. locally-acquired cases are individuals who test positive for sars-cov-2 and have the earliest onset date in cyprus without travel history from affected areas. people with symptomatic covid-19 are considered recovered after the resolution of symptoms and two negative tests for sars-cov-2 at least 24 hours apart from each other. for asymptomatic cases, the negative tests to document virus clearance are obtained at least 14 days after the initial positive test. a person with a positive test at 14 days is further isolated for one week and finally released 21 days after the initial diagnosis without further laboratory tests.
testing approaches in the republic of cyprus included: a) targeted testing of suspect cases and their contacts; of repatriates at the airport and during their 14-day quarantine; of teachers and students when schools re-opened in mid-may; of employees in essential services that continued their operation throughout the first pandemic wave (e.g., customer services, public domain); and of health-care workers in public hospitals, and b) population screenings following random sampling in the general population of most districts and in two municipalities with increased disease burden. by june 2nd 2020, 120,298 pcr tests had been performed (13,734.2 per 100,000 population). public health measures were taken in four phases: period 1 (10-14 march, 2020) included closures of educational institutions and cancellation of public gatherings (>75 persons); period 2 (15-23 march, 2020) involved closure of entertainment areas (for instance, malls, theatres, etc.), allowance of 1 person per 8 square meters in public service areas, and restrictions to international travel (for example, access to the republic of cyprus was permitted only for specific persons and after sars-cov-2 testing); period 3 (24-30 march, 2020) included closure of most retail services; and period 4 (31 march-3 may) included the suspension of incoming flights with few exceptions (for instance, repatriated cypriot citizens), a stay-at-home order, and a night curfew.

change-point detection is an active area of statistical research that has attracted a lot of interest in recent years and plays an essential role in the development of the mathematical sciences. a non-exhaustive list of application areas includes financial econometrics (schröder and fryzlewicz, 2013), credit scoring (bolton and hand, 2002), and bioinformatics (olshen et al., 2004).
the focus is on the so-called a posteriori change-point detection, where the aim is to estimate the number and locations of certain changes in the behaviour of a given data sequence. for a review of methods of inference for single and multiple change-points (especially in the context of time series) under the a posteriori framework, see jandhyala et al. (2013). detecting these change-points enables us to separate the data sequence into homogeneous segments, leading to a more flexible modeling approach. advantages of discovering such heterogeneous segments include interpretation and forecasting. interpretation naturally associates the detected change-points to real-life events and/or political decisions. in this way, a better description of the observed process and the impact of any intervention can be communicated. forecasting is based on the final detected segment, which is important as it allows for more accurate prediction of future values of the data sequence at hand. methods developed in this context are based on a given model. for the purpose of this paper, we work with the following signal-plus-noise model

x_t = f_t + σ ε_t,  t = 1, 2, . . . , T,   (1)

where x_t denotes the daily incidence of covid-19 cases and f_t is a deterministic signal with structural changes at certain time points. details about f_t are given below. the sequence {ε_t} consists of independent and identically distributed (iid) random variables with mean zero and variance equal to one, and σ > 0. we denote the number of change-points by K and their respective locations by r_1, r_2, . . . , r_K. the locations are unknown and the aim is to estimate them based on (1). the daily incidence cases of the covid-19 outbreak in cyprus are investigated by using the following two models for f_t of (1):

1. continuous, piecewise-linear signals: f_t = µ_{j,1} + µ_{j,2} t for t = r_{j−1} + 1, r_{j−1} + 2, . . . , r_j, with the additional constraint µ_{k,1} + µ_{k,2} r_k = µ_{k+1,1} + µ_{k+1,2} r_k for k = 1, 2, . . . , K. the change-points, r_k, satisfy f_{r_k−1} + f_{r_k+1} ≠ 2 f_{r_k}.
2. piecewise-constant signals: f_t = µ_j for t = r_{j−1} + 1, r_{j−1} + 2, . . . , r_j, and f_{r_j} ≠ f_{r_j+1}.

in this work, we are using the isolate-detect (id) methodology of anastasiou and fryzlewicz (2019) to detect changes based on (1) by using linear and constant signals, as described above; see appendix a-1 for a description of the method. the analysis of count time series data (like the daily incidence data we consider in this work) has attracted considerable attention; see kedem and fokianos (2002, sec. 4 & 5) for several references and fokianos (2015) for a more recent review of this research area. in what follows, we take the point of view of generalized linear modelling as advanced by mccullagh and nelder (1989). this framework naturally generalizes the traditional arma methodology and includes several complicated data generating processes besides count data, such as binary and categorical data. in addition, fitting of such models can be carried out by likelihood methods; therefore testing, diagnostics and all types of likelihood arguments are available to the data analyst. the logarithmic function is the most popular link function for modeling count data. in fact, this choice corresponds to the canonical link of generalized linear models. suppose that {x_t} denotes a daily incidence time series and assume that, given the past, x_t is conditionally poisson distributed with mean λ_t. define ν_t ≡ log λ_t. a log-linear model with feedback for the analysis of count time series (fokianos and tjøstheim (2011)) is defined as

ν_t = d + a_1 ν_{t−1} + b_1 log(1 + x_{t−1}).   (2)

in general, the parameters d, a_1, b_1 can be positive or negative, but they need to satisfy certain conditions to obtain stability of the model. the inclusion of the hidden process makes the mean of the process depend on the long-term past values of the observed data.
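the piecewise-constant case above can be illustrated with a much simpler detection scheme than isolate-detect: standard binary segmentation with a cusum contrast. this is a sketch for intuition only; the contrast, threshold and stopping rule below are simplifying assumptions and not the id algorithm of anastasiou and fryzlewicz (2019):

```python
import numpy as np

def cusum_stat(x):
    """return the largest scaled mean-difference contrast over all split points b,
    together with the maximizing split; (0.0, None) if no split is possible."""
    n = len(x)
    total = np.cumsum(x)
    best, best_b = 0.0, None
    for b in range(1, n):
        left_mean = total[b - 1] / b
        right_mean = (total[-1] - total[b - 1]) / (n - b)
        # scaling makes the contrast ~ |N(0, sigma^2)| under a constant signal
        contrast = abs(left_mean - right_mean) * np.sqrt(b * (n - b) / n)
        if contrast > best:
            best, best_b = contrast, b
    return best, best_b

def binary_segmentation(x, threshold, start=0):
    """recursively split x wherever the cusum contrast exceeds the threshold;
    returns estimated change-point locations (first index of each new segment)."""
    stat, b = cusum_stat(x)
    if b is None or stat < threshold:
        return []
    left = binary_segmentation(x[:b], threshold, start)
    right = binary_segmentation(x[b:], threshold, start + b)
    return sorted(left + [start + b] + right)
```

for standard-normal noise a threshold of about 4-5 avoids spurious splits on sequences of a few hundred points; on a mean shift of several noise standard deviations the detected location is typically within a point or two of the truth.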
further discussion on model (2) can be found in appendix a-2, which also includes some discussion about interventions. an intervention is an unusual event that has a temporary or a permanent impact on the observed process. computational methods for discovering interventions, in the context of (2), under a general mixed poisson framework have been discussed by liboschik et al. (2017). in this work, we will consider additive outliers (ao) defined by

ν_t = d + a_1 ν_{t−1} + b_1 log(1 + x_{t−1}) + Σ_{k=1}^{K} γ_k I(t = r_k),   (3)

where the notation follows closely that of sec. 2.2 and I(·) denotes the indicator function. inclusion of the indicator function shows that at the time point r_k the mean process has a temporary shift whose effect, on the log-scale, is measured by the parameter γ_k. other types of interventions can be included (see appendix a-2) whose effect can be permanent; in this sense, intervention analysis and change-point detection methodologies address similar problems but from a different point of view. model fitting is based on maximum likelihood estimation and its implementation has been described in detail by liboschik et al. (2017).

compartmental models in epidemiology, like the susceptible-infectious-recovered (sir) and susceptible-exposed-infectious-recovered (seir) models and their modifications, have been used to model infectious diseases since the early 1920s (see keeling and rohani (2008) and nicolaides et al. (2020), among others). the basic assumptions for these models are the existence of a closed community, i.e. without influx of new susceptibles or mortality due to other causes, with a fixed population, say n, and also that individuals who recover from the illness are immune and do not become susceptible again. in the basic seir model, at any point in time t, each individual is either susceptible (s(t)), exposed (e(t)), infectious (i(t)) or recovered (r(t), including death).
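a minimal simulation of the log-linear feedback model with an additive outlier makes the intervention effect concrete. the parameter values and the outlier time below are hypothetical; informally, stability requires the feedback coefficients to be suitably bounded (e.g. |a1 + b1| < 1):

```python
import numpy as np

def simulate_log_linear(n, d, a1, b1, outliers=None, seed=1):
    """simulate nu_t = d + a1*nu_{t-1} + b1*log(1 + x_{t-1}) (+ gamma_k at
    outlier times r_k), with x_t | past ~ Poisson(exp(nu_t))."""
    outliers = outliers or {}          # {time r_k: effect gamma_k}, hypothetical
    rng = np.random.default_rng(seed)
    nu = np.zeros(n)
    x = np.zeros(n, dtype=np.int64)
    nu[0] = d
    x[0] = rng.poisson(np.exp(nu[0]))
    for t in range(1, n):
        nu[t] = d + a1 * nu[t - 1] + b1 * np.log(1 + x[t - 1])
        nu[t] += outliers.get(t, 0.0)  # temporary shift on the log-scale
        x[t] = rng.poisson(np.exp(nu[t]))
    return x, nu
```

running the same seed with and without an outlier at t = 100 shows the paths agree up to t = 99 and the log-mean jumps by exactly γ at t = 100, which is the temporary-shift interpretation of (3).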
the epidemic starts at time t = 0 with one infectious individual, usually thought of as being externally infected, and the rest of the population being susceptible. people progress between the different compartments, and this motion is usually described through a system of ordinary differential equations that can be put in a stochastic framework. a variety of seir modifications and extensions exist in the literature, and a multitude of them emerged recently because of the covid-19 epidemic. in this work, we consider four such modifications, based on the models proposed in peng et al. (2020) and li et al. (2020) for the analysis of the covid-19 epidemic in wuhan and the rest of the chinese provinces. initially, we employ the seir model based on the meta-population model of li et al. (2020), but simplified to take into account only a single population. the novelty compared to the standard seir model is that this model takes into account the existence of undocumented/asymptomatic infections, which transmit the virus at a potentially reduced rate. the model tracks the evolution of four state variables at each day t, representing the number of susceptible, exposed, infected-reported and infected-unreported individuals, s(t), e(t), i_r(t), i_u(t), respectively. the parameters of the model are the transmission rate β (days^−1), the relative transmission rate µ representing the reduction in transmission for asymptomatic individuals, the average latency/incubation period z (days), the average infectious period d (days) and the reporting rate α representing the proportion of infected individuals which are reported. for a graphic description of the model see figure 16. the time evolution of the system is defined by the following set of differential equations (recall n denotes the population size):

ds/dt = −β s i_r/n − µ β s i_u/n
de/dt = β s i_r/n + µ β s i_u/n − e/z        (4)
di_r/dt = α e/z − i_r/d
di_u/dt = (1 − α) e/z − i_u/d

following li et al. (2020), we use a stochastic version of this model with a delay mechanism.
each term, say u, on the right-hand side of (4) is replaced by a poisson random variable with mean u. at each day, we use the 4th-order runge-kutta numerical scheme to integrate the resulting equations and obtain the values of the four state variables on the next day. for each new reported infection, we draw a gamma random variable with mean τ_d days to determine when this infection will be recorded. for the main analysis we use τ_d = 6 days as the average reporting delay between the onset of symptoms and the recording of an infection; see also li et al. (2020). note that the results are robust with respect to the value of the reporting delay. the final output of this model is the number of recorded infections on each day t, y = y(t). we also use the meta-population model of li et al. (2020). it models the transmission dynamics in a set of populations, indexed by i, connected through human mobility patterns, say m_ij. this is implemented by incorporating information on human movement between the 5 main districts of cyprus: nicosia, limassol, larnaca, paphos and ammochostos. in this case, i = 1, 2, 3, 4, 5 and m_ij denotes the daily number of people traveling from district i to district j, i ≠ j. such information is based on the 2011 census data obtained from the cyprus statistical service. the time evolution of the four compartmental states in each district i is defined by the following set of differential equations:

ds_i/dt = −β s_i i_i^r/n_i − µ β s_i i_i^u/n_i + θ Σ_j m_ij s_j/(n_j − i_j^r) − θ Σ_j m_ji s_i/(n_i − i_i^r)
de_i/dt = β s_i i_i^r/n_i + µ β s_i i_i^u/n_i − e_i/z + θ Σ_j m_ij e_j/(n_j − i_j^r) − θ Σ_j m_ji e_i/(n_i − i_i^r)        (5)
di_i^r/dt = α e_i/z − i_i^r/d
di_i^u/dt = (1 − α) e_i/z − i_i^u/d + θ Σ_j m_ij i_j^u/(n_j − i_j^r) − θ Σ_j m_ji i_i^u/(n_i − i_i^r)

where the notation follows that given in sec. 2.4.1. in addition to the four state variables, this model also updates at each time step the population of each area i, say n_i, by

n_i ← n_i + θ Σ_j m_ij − θ Σ_j m_ji,

where the multiplicative factor θ is assumed to be greater than 1 to reflect under-reporting of human movement. like model (4), model (5) is simulated in a stochastic framework. further, we consider the meta-population model of peng et al. (2020). this is a generalisation of the classical seir model, consisting of seven states: (s(t), p(t), e(t), i(t), q(t), r(t), d(t)).
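the single-population dynamics of model (4) with poisson-randomized flows can be sketched as follows. for simplicity this sketch uses a daily euler step instead of the paper's 4th-order runge-kutta scheme and omits the gamma reporting delay; the parameter and initial values used in the example are hypothetical:

```python
import numpy as np

def seir_day(state, params, n, rng):
    """one daily step: each flow on the right-hand side of the model is
    replaced by a Poisson draw with that flow as its mean."""
    s, e, i_r, i_u = state
    beta, mu, z, d_inf, alpha = params
    exp_r = rng.poisson(beta * s * i_r / n)       # infections caused by reported cases
    exp_u = rng.poisson(mu * beta * s * i_u / n)  # infections caused by unreported cases
    new_r = rng.poisson(alpha * e / z)            # exposed -> reported infectious
    new_u = rng.poisson((1 - alpha) * e / z)      # exposed -> unreported infectious
    rec_r = rng.poisson(i_r / d_inf)              # reported recoveries
    rec_u = rng.poisson(i_u / d_inf)              # unreported recoveries
    s = max(s - exp_r - exp_u, 0)
    e = max(e + exp_r + exp_u - new_r - new_u, 0)
    i_r = max(i_r + new_r - rec_r, 0)
    i_u = max(i_u + new_u - rec_u, 0)
    return (s, e, i_r, i_u), new_r                # new_r is the day's reported incidence

def simulate(days, state0, params, n, seed=0):
    rng = np.random.default_rng(seed)
    state, reported = state0, []
    for _ in range(days):
        state, new_r = seir_day(state, params, n, rng)
        reported.append(int(new_r))
    return state, reported
```

the daily `new_r` stream is what would be fed through the reporting-delay mechanism to produce the recorded infections y(t).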
at time t, the susceptible cases s(t) become, with rate ζ, insusceptible p(t), or, with rate β, exposed e(t), that is, infected but not yet infectious, i.e. in a latent state. some of the exposed cases eventually become infected with rate γ. infected means they have the capacity of infecting but are not yet quarantined q(t). the introduction of the new quarantined state q(t) into the classical seir model, formed by the infected cases with a constant rate δ, allows us to consider the effect of preventive measures. finally, the quarantined cases are split into cured cases, r(t), with rate λ(t), and closed (deceased) cases, d(t), with mortality rate κ(t). the model's parameters are the transmission rate β, the protection rate ζ, the average latent time γ^−1 (days), the average quarantine time δ^−1 (days), as well as the time-dependent cure rate λ(t) and mortality rate κ(t). the relations are characterized by the following system of difference equations:

s(t + 1) − s(t) = −β s(t) i(t)/n − ζ s(t)
p(t + 1) − p(t) = ζ s(t)
e(t + 1) − e(t) = β s(t) i(t)/n − γ e(t)
i(t + 1) − i(t) = γ e(t) − δ i(t)        (6)
q(t + 1) − q(t) = δ i(t) − λ(t) q(t) − κ(t) q(t)
r(t + 1) − r(t) = λ(t) q(t)
d(t + 1) − d(t) = κ(t) q(t)

the total population size is assumed to be constant and equal to n = s(t) + p(t) + e(t) + i(t) + q(t) + r(t) + d(t). according to the official reports, the numbers of quarantined, recovered and death cases due to covid-19 are available. however, the recovered and death cases are directly related to the number of quarantined cases, which plays an important role in the analysis, especially since the numbers of exposed (e) and infectious (i) cases are very hard to determine. the latter two are therefore treated as hidden variables. this implies that we need to estimate the four parameters ζ, β, γ^−1, δ^−1 and both the time-dependent cure rate λ(t) and mortality rate κ(t). notice here that while the rest of the parameters are considered fixed during the pandemic, we allow the cure and mortality rates to vary with time. we expect that the former will increase with time, given that social distancing measures have been put in place, while the latter will decrease.
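the bookkeeping of the quarantined seir model can be sketched as one step of the difference system, with the time-varying cure and mortality rates passed in per day; the rate values in the usage example are hypothetical:

```python
def step_seirq(state, n, beta, zeta, gamma, delta, lam_t, kap_t):
    """advance (s, p, e, i, q, r, d) by one day; lam_t and kap_t are the
    cure and mortality rates for this particular day."""
    s, p, e, i, q, r, d = state
    new_exposed = beta * s * i / n
    return (
        s - new_exposed - zeta * s,           # susceptible: lose exposed and protected
        p + zeta * s,                         # insusceptible (protected)
        e + new_exposed - gamma * e,          # exposed (latent)
        i + gamma * e - delta * i,            # infectious, not yet quarantined
        q + delta * i - (lam_t + kap_t) * q,  # quarantined
        r + lam_t * q,                        # recovered (cured)
        d + kap_t * q,                        # deceased (closed)
    )
```

because every outflow of one compartment is an inflow of another, the sum of the seven states is preserved exactly at each step, matching the constant total population assumption.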
finally, this is an optimization problem, and the methodology we have followed in order to address it can be found in appendix a-3. the last model we consider is a modified version of a solution created by bettencourt and ribeiro (2008) to estimate the real-time effective reproduction number r_t using a bayesian approach on a simple susceptible-infected (si) compartmental model. we use the bayes rule to update the beliefs about the true value of r_t based on our predictions and on how many new cases have been reported each day. having seen k new cases on day t, the posterior distribution of r_t is proportional to (denoted by ∝) the prior beliefs of the value of r_t, p(r_t), times the likelihood of r_t given that we have recorded k new cases, i.e., p(r_t | k) ∝ p(r_t) × l(r_t | k). to make this iterative every day that passes by, we use last day's posterior p(r_{t−1} | k_{t−1}) to be today's prior p(r_t). therefore, in general,

p(r_t | k) ∝ ∏_{s=1}^{t} l(r_t | k_s).   (7)

however, in the above model the posterior is influenced equally by all previous days. thus, we propose a modification suggested in systrom (2020) that shortens the memory and incorporates only the last m days of the likelihood function, p(r_t | k) ∝ ∏_{s=t−m+1}^{t} l(r_t | k_s). the likelihood function is modelled with a poisson distribution. recall the compartmental models discussed in sec. 2.4.1 and 2.4.2. then the effective reproduction number is given by

r_e = α β d + (1 − α) µ β d;   (8)

see the supplement of li et al. (2020). we estimate r_t in (8) during consecutive fortnight periods for which its value is considered to be constant. to achieve this we estimate the parameters of each model, also assumed to be constant for each fortnight, using daily incidence data for cyprus. to estimate the parameters we employ bayesian statistics, that is, we postulate prior distributions on the parameters and incorporate the data and the model (through the likelihood) to obtain the posterior distributions on the parameters.
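a grid-based version of the short-memory bayesian update, in the spirit of systrom (2020), combines the poisson likelihood of each day's cases over only the last m days. the poisson mean k_{t−1} exp(γ(r − 1)), the serial-interval parameter γ and the window length are assumptions for illustration, not values taken from the paper:

```python
import numpy as np

def rt_posterior(cases, gamma=1/7, m=7, r_grid=None):
    """posterior over a grid of R values for each day, using a Poisson
    likelihood with mean k_{t-1} * exp(gamma * (R - 1)) and the last m days."""
    if r_grid is None:
        r_grid = np.linspace(0, 8, 401)
    log_lik = []
    for t in range(1, len(cases)):
        lam = max(int(cases[t - 1]), 1) * np.exp(gamma * (r_grid - 1))
        # Poisson log-likelihood, dropping terms constant in R
        log_lik.append(cases[t] * np.log(lam) - lam)
    log_lik = np.array(log_lik)
    posteriors = []
    for t in range(len(log_lik)):
        window = log_lik[max(0, t - m + 1): t + 1].sum(axis=0)  # last m days only
        post = np.exp(window - window.max())                    # stabilize before exp
        posteriors.append(post / post.sum())
    return r_grid, np.array(posteriors)
```

on a series growing at rate g per day, the posterior mode settles near 1 + g/γ, which is the value that makes the poisson mean track the observed growth.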
The posterior distributions capture our updated beliefs about the parameters after combining the prior with the observed data; see, for example, Bernardo and Smith (1994). For the model defined by (4), we consider the whole area of Cyprus as a single uniform population. In this case the observations are not sufficiently informative to identify all five parameters of the model. One solution would be to enforce identifiability by postulating strongly informative prior distributions on the parameters. Instead, we choose to assume that the parameters Z, D and µ take globally constant values, fixed over time. In particular we set D = 3.5 and µ = 0.5, as estimated in Li et al. (2020), and Z = 5.1, which appears to be the globally accepted mean incubation period. We thus only need to infer the reporting rate α and the transmission rate β, which vary both between different fortnights and between different countries, because of the amount of testing and the degree of adherence to social distancing policies. For the model defined by (5), on the other hand, the data are sufficiently informative to infer all six model parameters. All computational methods, prior modelling and assumptions in relation to both compartmental models discussed in Sec. 2.4.1 and 2.4.2 are given in Appendix A-4. In addition to the above methods, we consider the method of Cori et al. (2013) as a benchmark against which to compare all methodologies for estimating the effective reproduction number. By the end of May 2020, 952 cases of COVID-19 had been diagnosed in the Republic of Cyprus. Of these, 50.2% were male (n = 478) and the median age was 45 years (IQR: 31-59 years). The setting of potential exposure was available for 807 cases (84.8%). Of these, 17.4% (n = 140) had a history of travel or residence abroad during the 14-day period before the onset of symptoms.
Locally acquired infections numbered 667 (82.7%), with 8.6% (n = 57) related to a health-care facility in one geographical setting (cluster A) and 12.4% (n = 83) clustered in another setting (cluster B). The epidemic curve by date of sampling and date of symptom onset is shown in Figure 1. The number of cases started to decline in April, reaching very low levels in late May. In this section, we investigate the long-term impact of COVID-19 on Cyprus. Towards this, we give long-term projections for the daily incidence and death rates. We fit system (6) to COVID-19 data collected during the period from the 1st of March 2020 until the 31st of May 2020 in Cyprus. We treat all reported cases without distinguishing between local and imported. The model parameters are estimated using the methodology described in Appendix A-3. Once the model is fitted to the data, it can be used to forecast the epidemic. In order to study the evolution of the model as new data are added, and the quality of the respective forecasts, we fitted model (6) using different time periods. Specifically, four datasets were formed using the daily reported incidences from the beginning of the observation period until and including the 2/4/2020, 17/4/2020, 15/5/2020 and 24/5/2020, respectively. The dates were chosen according to the change points detected using the methodology described in Section 2.2; see also Section 3.3. The fitted model in each case was used to predict the pandemic's evolution until the 30/6/2020. In Figure 2, we show the number of predicted exposed plus infectious cases (green solid lines) and the number of predicted recovered cases (blue solid lines) for the duration of the prediction period, and compare them to the observed cases, which are indicated by circles and triangles. We use circles for data that were used in the prediction and triangles for the observed data used for validation.
Visual inspection shows that, after a period of about two months during which the model overestimates the number of active cases and underestimates the number of recovered, see Figure 2 (top), model (6) was able to capture accurately the evolution of the pandemic, Figure 2 (bottom). The performance of the predictions can also be evaluated by means of the relative error (RE), |x_t − y_t|/x_t, where x_t denotes the datum for day t and y_t the model prediction for the same day. The REs for the recovered cases equal 0.4%, 0.2%, 0.3% and 0.3% for the four time periods, respectively, with the corresponding REs for the active cases being high in the beginning (18%, 5.8%) but then dropping considerably (0.16% and 0.1%), reflecting the fact that the model caught up with the evolution of the pandemic. Overall, system (6) gives adequate predictions, especially for the active cases, when data from longer time periods are used. Figure 3 shows the number of deaths and their respective predictions using subsets of data as described above. For the duration of the first dataset there were no deaths registered, and therefore the prediction was identically zero, also giving an RE equal to 100%; see Figure 3 (top left). As more deaths are registered, the model's ability to predict the correct number of deaths improves, see Figure 3. The recovery rate λ(t) is modelled by the exponential function given in (9), with parameters λi ≥ 0, i = 1, 2, 3; the idea is that the recovery rate, as time increases, should converge towards a constant. In Figure 4 (left), the fitted recovery rate (solid line) is plotted against the observed number of recovered cases (stars). Finally, model 3 can be used to estimate the unobserved numbers of exposed, E(t), and infectious, I(t), cases during the development of the pandemic. The maximum number of exposed cases occurs on the 21st of March 2020 and is estimated at 173 cases, Figure 4 (right, blue line), with the maximum of infectious individuals (136) being attained on the 26th of March 2020.
We can observe a delay in the transition from exposed to infectious of the order of 5 days, which suggests a 5-day latent time for COVID-19. We first consider the change-point detection method of Sec. 2.2 for the case of a piecewise-linear signal plus noise model. Figure 5 illustrates the results obtained by this analysis on daily incidence data. We then fit model (2) with interventions. Note again that the sum 0.779 + 0.211 ≈ 1, which shows that the non-stationarity persists even after including additive outliers (in the log-scale). Furthermore, the positive sign of both interventions shows the sudden explosion of the daily number of people infected. The BIC value obtained after fitting this model equals 576.643, which improves on the BIC of the model without interventions, equal to 615.766. Figure 6 shows the fit of the model to the data and gives 95% prediction intervals for the week ahead. Comparing the change-point analysis (see Fig. 5) with the result obtained using the above intervention analysis, we observe that both approaches give similar prediction intervals that include the future observed incidence data. Indeed, the observed data for the week ahead (01/06/2020-07/06/2020) were 4, 6, 1, 0, 5, 5 and 1 cases. Recall the effective reproduction number R_t defined by (8). We perform the Bayesian analysis using (4) (see Appendix A-4). For the data concerning all incidents, the first recorded incident was on 07/03/2020; hence, as detailed in Appendix A-4, we initialize our analysis of the outbreak 3 days earlier, on 04/03/2020. The corresponding figure shows the posterior probabilities of the event R_t < 1 for the analysis using the full data. Next, we consider the estimation model described in Li et al. (2020), where Cyprus is divided into 5 subpopulations (Nicosia, Limassol, Larnaca, Paphos, Ammochostos) and the mobility patterns between them are taken into account (as described in metapopulation compartmental model 2). The effective reproduction number is given by (8).
The compartmental model 2 structure was integrated stochastically using a 4th-order Runge-Kutta (RK4) scheme. We use uniform prior distributions on the parameters of the model, with ranges similar to Li et al. (2020), as follows: relative transmissibility 0.2 ≤ µ ≤ 1; movement factor 1 ≤ θ ≤ 1.75; latency period 3.5 ≤ Z ≤ 5.5; infectious period 3 ≤ D ≤ 4. For the infection rate we choose 0.1 ≤ β ≤ 1.5 before the lockdown and 0 ≤ β ≤ 0.8 after the lockdown, and for the reporting rate we choose 0.3 ≤ α ≤ 1. Note that the ensemble adjustment Kalman filter (EAKF, described in Appendix A-4) is not constrained by the initial priors and can migrate outside these ranges to obtain system solutions. For initialization purposes we assume that all 5 districts are potential origins, with undocumented infected and exposed populations drawn from a uniform distribution on [0, 5] a week before the first documented case. The initial condition does not affect the outcome of the inference. Transmission model 2 does not explicitly represent the process of infection confirmation. Thus, we mapped simulated documented infections to confirmed cases using a separate observational delay model. In this delay model, we account for the time interval between a person transitioning from latent to contagious and the observational confirmation of that individual infection through a delay of t_d. We assume that t_d follows a gamma distribution G(a, τ_d/a), where τ_d = 6 days and a = 1.85, as derived by Li et al. (2020) using data from China. The inference is robust with respect to the choice of τ_d. For the inference we use incidents from local transmission in Cyprus, as reported by the Ministry of Health. In Figure 10 we plot the time evolution of the weekly effective reproduction number R_t. While at the beginning of the outbreak the effective reproduction number was close to 2.5, after the lockdown measures it dropped below 1 and stayed consistently there until the end of June 2020.
We then use the methodology proposed by Bettencourt and Ribeiro (2008) and recently modified by Systrom (2020), as described in detail in Section 2.4.4. For that method we also use the incidents from local transmission in Cyprus, as reported by the Ministry of Health. Figure 11 shows the daily median value as well as the 95% credible intervals for the effective reproduction number obtained with that method. The work presented in this report is the result of the intensive collaboration of an interdisciplinary team which was formed shortly after the pandemic started. The main motivation was to give guidance to the Cypriot government for controlling this major infectious disease outbreak. Accordingly, we developed models and methods that are of critical importance in appreciating how this disease is developing, what its next stage will be, and on what time frame. This is valuable information for outbreak control, for resource utilization and for restarting normal daily life. We followed diverse paths to accomplish this by appealing to different modelling approaches and methods. We have shown that the government interventions were successful in containing COVID-19 in Cyprus by the end of May, even though the outbreak started with a high value of R_t. As the data show across the different methodologies applied, the government lockdown helped reduce the reproduction number. In addition, we have shown, by change-point methodology and time series analysis, the effect of the various measures taken, and we have developed short-term predictions. The models we applied are based on simple surveillance data, seem to work well, give similar results, and can certainly help epidemiologists and public health officials quantify and understand changes in the transmission intensity of future epidemics and the drivers of these changes.
Finally, we feel that our approach of bringing together experts from various fields avoids misunderstandings and gaps in communication between scientists, and maximizes the effectiveness of efforts to deal with public health emergencies. The existing change-point detection techniques for the scenarios mentioned in Section 2.2 are mainly split into two categories, based on whether the change-points are detected all at once or one at a time. The former category mainly includes optimization-based methods, in which the estimated signal is chosen based on its least-squares or log-likelihood criterion, penalized by a complexity rule in order to avoid overfitting. The most common example of a penalty function is the Bayesian information criterion (BIC); see Schwarz (1978) and Yao (1988) for details. In the latter category, in which change-points are detected one at a time, a popular method is binary segmentation, which performs an iterative binary splitting of the data on intervals determined by the previously obtained splits. Even though binary segmentation is conceptually simple, it has the disadvantage that at each step of the algorithm it looks for a single change-point, which leads to suboptimal accuracy, especially for signals with frequent change-points. One method that works towards solving this issue is the Isolate-Detect (ID) methodology of Anastasiou and Fryzlewicz (2019); it is the method used for the analysis carried out in this paper. The concept behind ID is simple and is split into two stages: firstly, the isolation of each of the true change-points within subintervals of the domain [1, 2, . . . , T], and secondly their detection. The basic idea is that, for an observed data sequence of length T and a positive constant λ_T, ID first creates two ordered sets of K = ⌈T/λ_T⌉ right- and left-expanding intervals: the j-th right-expanding interval is [s, s + jλ_T], while the j-th left-expanding interval is [e − jλ_T, e]. For clarity of exposition, we give below a simple example.
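Isolate-Detect itself relies on the expanding-interval scheme above; as a minimal illustration of the simpler "one change-point at a time" family it improves upon, here is a CUSUM-based binary segmentation sketch. The threshold, noise level and test signal are illustrative choices, not values from the paper.

```python
import numpy as np

def cusum_stat(x):
    """Standardized CUSUM statistics |C_b| for every candidate split b of x."""
    n = len(x)
    b = np.arange(1, n)
    left = np.cumsum(x)[:-1]                 # partial sums S_b
    total = x.sum()                          # S_n
    scale = np.sqrt(n / (b * (n - b)))
    return np.abs(scale * (left - b * total / n)), b

def binary_segmentation(x, threshold, s=0, out=None):
    """Recursively split x at the strongest mean shift: one change-point
    at a time, on intervals determined by the previous splits."""
    if out is None:
        out = []
    if len(x) < 3:
        return out
    stats, b = cusum_stat(x)
    k = stats.argmax()
    if stats[k] > threshold:
        cp = b[k]
        out.append(s + cp)
        binary_segmentation(x[:cp], threshold, s, out)
        binary_segmentation(x[cp:], threshold, s + cp, out)
    return sorted(out)

rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(40), 5 * np.ones(40), np.ones(40)])
data = truth + 0.5 * rng.standard_normal(120)
cps = binary_segmentation(data, threshold=3.0)   # true jumps at 40 and 80
```

With jump sizes of 10 and 8 noise standard deviations, both change-points are recovered to within a few samples, illustrating why the one-at-a-time family works well for large, well-separated jumps.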
Figure 13 covers a specific case of two change-points, r1 = 38 and r2 = 77. We will be referring to phases 1 and 2, involving six and four intervals, respectively. These are clearly indicated in the plot and relate only to this specific example; cases with more change-points would involve more such phases. At the beginning, s = 1, e = T = 100, and we take the expansion parameter λ_T = 10. Then r2 gets detected in {x_{s*}, x_{s*+1}, . . . , x_e}, where s* = 71. Recall (2), and that the parameters d, a1, b1 can be positive or negative, but they need to satisfy certain conditions for the process to be stable. Note that the lagged observations of the response x_t are fed into the autoregressive equation for ν_t via the term log(x_{t−1} + 1). This is a one-to-one transformation of x_{t−1} which avoids zero data values; moreover, both λ_t and x_t are thereby transformed to the same scale. Covariates can easily be accommodated by model (2). When a1 = 0, we obtain an AR(1)-type model in terms of log(x_{t−1} + 1). In addition, after repeated substitution, the log-intensity process of (2) can be rewritten as ν_t = d Σ_{i=0}^{t−1} a1^i + b1 Σ_{i=0}^{t−1} a1^i log(x_{t−1−i} + 1) + a1^t ν_0. Hence, we obtain again that the hidden process {ν_t} is determined by past functions of the lagged responses, i.e. (2) belongs to the class of observation-driven models; see Cox (1981). In models like (2), the mean, and hence the intensity of the process, is determined by a latent process. Therefore a formal linear structure, as in the case of the Gaussian linear time series model, no longer holds, and the interpretation of interventions becomes a more complicated issue. Hence, a method which allows the detection of interventions and the estimation of their size is needed, so that structural changes can be identified successfully. Important steps towards this goal are the following; see Chen and Liu (1993): 1. A suitable model for accommodating interventions in count time series data. 2. Derivation of test procedures for their successful detection. 3.
Implementation of joint maximum likelihood estimation of the model parameters and the outlier sizes. 4. Correction of the observed series for the detected interventions. All these issues, and possible directions for further development of the methodology, have been addressed by Liboschik et al. (2017) under the Poisson and mixed Poisson distributional framework. We now return to model (6). As noted in Section 2.4.3, the numbers of quarantined cases (Q), recovered (R) and deaths (D) due to COVID-19 are available from the official reports, whereas the numbers of exposed (E) and infectious (I) cases are very hard to determine and are therefore treated as hidden variables. This implies that we need to estimate the four parameters ζ, β, γ−1, δ−1 and both the time-dependent cure rate λ(t) and mortality rate κ(t). This is an optimization problem that we solve as follows: first we allow the latent time γ−1 to vary between 1 and 7 days and, for each fixed γ−1, we explore its influence on the rest of the parameters. The system of differential equations (6) is solved numerically using the Runge-Kutta 45 scheme. The left plot of Figure 14 shows that the protection rate ζ and the transmission rate β both attain their maximum values when γ−1 is equal to 3 days. Note that ζ takes values between 0.08 and 0.2, while β converges very quickly to 1. The average quarantine time δ−1 increases with the latent time γ−1. One would suspect that a longer latent time results in a higher transmission rate, and as the latent time increases almost every unprotected person will be infected after a direct contact with a COVID-19 patient. The right plot of Figure 14 shows the effect of the latent time on the total number of infected cases (exposed and infectious, E(t) + I(t)) not yet quarantined.
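Stepping back to the count model (2) from the intervention-detection discussion above, the log-linear Poisson autoregression is straightforward to simulate. The sketch below is illustrative only; the parameter values are arbitrary choices satisfying a stability-type condition of the kind alluded to in the text (here |a1| + |b1| < 1), not estimates from the data.

```python
import numpy as np

def simulate_loglinear_pois(n, d=0.5, a1=0.3, b1=0.4, seed=0):
    """Simulate a log-linear Poisson autoregression of the form of model (2):
    nu_t = d + a1 * nu_{t-1} + b1 * log(x_{t-1} + 1),
    with X_t | past ~ Poisson(exp(nu_t))."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n, dtype=int)       # observed counts
    nu = np.zeros(n)                 # hidden log-intensity process
    for t in range(1, n):
        nu[t] = d + a1 * nu[t - 1] + b1 * np.log(x[t - 1] + 1)
        x[t] = rng.poisson(np.exp(nu[t]))
    return x, nu

x, nu = simulate_loglinear_pois(500)
```

Note that the log(x + 1) feedback keeps the recursion well defined at zero counts, which is exactly the point made in the text about this one-to-one transformation.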
The peak of the infection was attained between the 21st and the 24th of March, depending on the latent time, with the estimated number of infected people ranging between 338 and 526, again depending on the latent time considered. Hence, once the latent time γ−1 is fixed, the fitting performance depends on the values of ζ, β and δ−1. After a brief sensitivity analysis, the latent time was finally fixed at 3 days. The mortality rate κ(t) is consistently very small and almost equal to zero, therefore we have not attempted to fit any function to it. For the cure rate λ(t) we have fitted the exponential function given in (9), the idea being that with time the recovery rate should converge to a constant. For the parameter estimation we have used a modified version of the MATLAB code provided by Cheynet (2020), since Cyprus is a small country and this fact needs to be taken properly into account. Figure 14: Sensitivity analysis of the parameters of the model defined by (6); the influence of the latent time γ−1 on the protection rate ζ, the transmission rate β and the quarantine time δ−1 (left plot), and on the sum of exposed and infectious cases E(t) + I(t) (right plot). We present the Bayesian analysis for the model defined by (4) in each time period under consideration. In particular, in the first period (when the number of tests was relatively low) we employ a symmetric prior around the value α = 0.5, while for later periods (when the number of targeted and random tests increased) we let the prior become progressively skewed towards 1. For the transmission rate β > 0, in the first period we use a Gamma(3/2, 3/2) prior, which puts high probability around 2, while for later periods we use an Exponential(1) prior, which puts more mass closer to zero. This choice reflects the higher probability of super-spreaders in the early stages of the outbreak compared to later on. In each time period under consideration we also need to initialize the outbreak in Cyprus.
For the first period, in both datasets, we use a uniform prior supported on {0, 1, . . . , 10} for the number of exposed and for the number of undocumented infected 3 days before the first recorded incident. The two priors are independent, while the number of susceptible individuals is set equal to Cyprus' population and the number of infected-reported equal to zero. For later periods, we use as priors on the four state variables their posterior distributions at the end of the previous period (corrected appropriately based on the observation at the end of the previous period). Following Li et al. (2020), we assume that the daily numbers of reported cases are independent Gaussian random variables, with an empirical variance given as a function of y(t), the number of reported infected cases at day t. This allows us to build a Gaussian likelihood for the parameters α and β. Combining this likelihood with the prior distributions, we can deduce a formula for the posterior distribution of α and β. This distribution is not available in closed form; hence, in order to compute posterior estimates and their respective uncertainty quantification, we need to sample from it. In the relatively simple setting of model 1, it is feasible to employ Markov chain Monte Carlo methods (see Robert and Casella (2013)) in order to sample the posterior; namely, we use an independence sampler. This is in contrast to the model defined by (5) in Sec. 2.4.2, see Li et al. (2020), where one has to use the ensemble adjustment Kalman filter (EAKF), which introduces some approximations to the posterior distribution due to the more complex metapopulation structure. Originally developed for use in weather prediction, the EAKF assumes a Gaussian distribution for both the prior and the likelihood, and deterministically adjusts the prior distribution to a posterior using Bayes' rule.
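The independence sampler mentioned above draws each proposal from the prior, so the Metropolis-Hastings acceptance ratio collapses to a likelihood ratio. The sketch below demonstrates this on a deliberately simple toy posterior for the reporting rate α: a Beta(2, 2) prior with a binomial observation, not the Gaussian likelihood of the paper, chosen so that the exact answer, a Beta(8, 6) posterior, is known and the sampler can be checked against it.

```python
import numpy as np
from math import log

rng = np.random.default_rng(0)

# Toy stand-in: prior Beta(2, 2) on the reporting rate alpha; likelihood
# Binomial(n=10, k=6), i.e. "6 of 10 infections reported".
# The exact posterior is Beta(2+6, 2+4) = Beta(8, 6).
K, N_OBS = 6, 10

def log_like(alpha):
    return K * log(alpha) + (N_OBS - K) * log(1 - alpha)

def independence_sampler(n_iter=50_000):
    """Independence sampler: proposals come from the prior, so the
    Metropolis-Hastings ratio reduces to a likelihood ratio."""
    alpha = 0.5
    ll = log_like(alpha)
    out = np.empty(n_iter)
    for i in range(n_iter):
        prop = rng.beta(2, 2)              # propose from the prior
        ll_prop = log_like(prop)
        if log(rng.uniform()) < ll_prop - ll:
            alpha, ll = prop, ll_prop      # accept
        out[i] = alpha
    return out

chain = independence_sampler()
alpha_hat = chain[1000:].mean()            # exact posterior mean is 8/14
```

The chain mean lands within Monte Carlo error of the analytical posterior mean 8/14 ≈ 0.571, which is the kind of sanity check worth running before trusting the sampler on the real, non-conjugate posterior.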
In particular, the EAKF assumes that both the prior distribution and the likelihood are Gaussian, and thus can be fully characterized by their first two moments (mean and variance). The update scheme for the ensemble members is computed using Bayes' rule (posterior ∝ prior × likelihood) via the product of the two Gaussian distributions (see Li et al. (2020) for the implementation). We report the results obtained after fitting a piecewise-constant signal plus noise model, as described in Sec. 2.2. The scenario here is that at each change-point we have a sudden jump in the mean level of the signal. Data and code are available at GitHub (https://github.com/chrisnic12/covid_cyprus).

References:
Detecting multiple generalized change-points by isolating single ones
Remdesivir for the treatment of COVID-19: preliminary report
Bayesian theory
Real time Bayesian estimation of the epidemic potential of emerging infectious diseases
Statistical fraud detection: a review
Joint estimation of model parameters and outlier effects in time series
Generalized SEIR epidemic model (fitting and computation)
A new framework and software to estimate time-varying reproduction numbers during epidemics
The species severe acute respiratory syndrome-related coronavirus: classifying 2019-nCoV and naming it SARS-CoV-2
Statistical analysis of time series: some recent developments
Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand
Statistical analysis of count time series models: a GLM perspective
Log-linear Poisson autoregression
An mRNA vaccine against SARS-CoV-2: preliminary report
Inference for single and multiple change-points in time series
Regression models for time series analysis
Modeling infectious diseases in humans and animals
A contribution to the mathematical theory of epidemics
Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV-2)
tscount: an R package for analysis of count time series following generalized linear models
Generalized linear models
Hand-hygiene mitigation strategies against global disease spreading through the air transportation network
Circular binary segmentation for the analysis of array-based DNA copy number data
Epidemic analysis of COVID-19 in China by dynamical modeling
Monte Carlo statistical methods
Adaptive trend estimation in financial time series via multiscale change-point-induced basis recovery
Estimating the dimension of a model
The metric we need to manage COVID-19: Rt, the effective reproduction number
Coronavirus disease (COVID-19): situation report 197
Estimating the number of change-points via Schwarz' criterion
A novel coronavirus from patients with pneumonia in China

key: cord-103781-bycskjtr
authors: Mönke, Gregor; Sorgenfrei, Frieda A.; Schmal, Christoph; Granada, Adrián E.
title: Optimal time frequency analysis for biological data: pyBOAT
date: 2020-06-04
journal: bioRxiv
doi: 10.1101/2020.04.29.067744
doc_id: 103781
cord_uid: bycskjtr

Methods for the quantification of rhythmic biological signals have been essential for the discovery of function and the design of biological oscillators. Advances in live measurements have allowed recordings of unprecedented resolution, revealing a new world of complex heterogeneous oscillations with multiple noisy, non-stationary features. However, our understanding of the underlying mechanisms regulating these oscillations has been lagging behind, partially due to the lack of simple tools to reliably quantify these complex non-stationary features. With this challenge in mind, we have developed pyBOAT, a Python-based, fully automatic, stand-alone software that integrates multiple steps of non-stationary oscillatory time series analysis into an easy-to-use graphical user interface. pyBOAT implements continuous wavelet analysis, which is specifically designed to reveal time-dependent features.
In this work we illustrate the advantages of our tool by analyzing complex non-stationary time-series profiles. Our approach integrates data visualization, optimized sinc-filter detrending, amplitude envelope removal and a subsequent continuous-wavelet-based time-frequency analysis. Finally, using analytical considerations and numerical simulations, we discuss unexpected pitfalls in commonly used smoothing and detrending operations. Oscillatory dynamics are ubiquitous in biological systems. From the transcriptional to the behavioral level, these oscillations can range from milliseconds in the case of neuronal firing patterns up to years for the seasonal growth of trees or the migration of birds (Goldbeter et al. [2012], Gwinner [2003], Rohde and Bhalerao [2007]). To gain biological insight from these rhythms, it is often necessary to apply time-series analysis methods to detect and accurately measure key features of the oscillatory signal. Computational methods that enable the analysis of periods, amplitudes and phases of rhythmic time series data have been essential to unravel the function and design principles of biological clocks (Lauschke et al. [2013], Ono et al. [2017], Soroldoni et al. [2014]). Here we present pyBOAT, a framework and software package with a focus on the usability and generality of such analysis. Many time series analysis methods readily available to the practitioner rely on the assumption of stationary oscillatory features, i.e. that oscillation properties such as the period remain stable over time. A plethora of methods based on the assumption of stationarity have been proposed. They can be divided into those working in the frequency domain, such as fast Fourier transforms (FFT) or Lomb-Scargle periodograms (Lomb [1976], Ruf [1999]), and those working in the time domain, such as autocorrelations (Westermark et al. [2009]), peak picking (Abraham et al. [2018]) or harmonic regressions (Edwards et al. [2010], Halberg et al. [1967], Naitoh et al. [1985], Straume et al. [1991]). In low-noise systems with robust and stable oscillations, these stationary methods suffice to reliably characterize oscillatory signals. Recordings of biological oscillations, however, frequently exhibit noisy and time-dependent features such as a drifting period, fluctuating amplitude and trend. Animal vocalization (Fitch et al. [2002]), temporal changes in the activatory pathways of somitogenesis (Tsiairis and Aulehla [2016]) or reversible and irreversible labilities of properties of the circadian system due to aging or environmental factors (Pittendrigh and Daan [1974], Scheer et al. [2007]) are typical examples where systematic, often non-linear changes in oscillation periods occur. In such cases the assumption of stationarity is unclear and often not valid, hence the need for non-stationary methods that capture time-dependent oscillatory features. Recently, the biological data analysis community has developed tools that implement powerful methods tailored to specific steps of time-series analysis, such as rhythmicity detection (Hughes et al. [2010], Thaben and Westermark [2014]), de-noising and detrending, and the characterization of non-stationary oscillatory components (Leise [2013], Price et al. [2008]). To extract time-dependent features of non-stationary oscillatory signals, methods can be broadly divided into those that rely on operations using a moving time window (e.g. the wavelet transform) and those that embed the whole time series into a phase-space representation (e.g. the Hilbert transform). These two families are complementary, having application-specific advantages and disadvantages, and in many cases both are able to provide equivalent information about the signal (Quiroga et al. [2002]). Due to its inherent robustness in handling noisy oscillatory data and its interpretability advantages, we implemented a continuous-wavelet-transform approach at the core of pyBOAT.
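For a stationary signal, the frequency-domain route mentioned above is simple; a minimal FFT-based period readout (with illustrative numbers, not data from the paper) looks like this:

```python
import numpy as np

# Recover the period of a clean, stationary oscillation with an FFT --
# the classical approach that works well while the period is stable.
dt = 0.5                               # sampling interval in hours (illustrative)
t = np.arange(0, 96, dt)               # four days of half-hourly samples
rng = np.random.default_rng(0)
signal = 2.0 * np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(t.size)

# Mean subtraction removes the DC bin; the peak of the power spectrum
# then sits at the oscillation frequency.
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)
dominant_period = 1 / freqs[power.argmax()]    # in hours
```

For this 24-hour rhythm the peak lands on the 24 h bin exactly; the shortcomings only appear once the period drifts in time, which is what motivates the wavelet approach of the following sections.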
As a software package, pyBOAT combines multiple steps in the analysis of oscillatory time series in an easy-to-use graphical user interface that requires no prior programming knowledge. With only two user-defined parameters, pyBOAT is able to proceed without further intervention with optimized detrending, amplitude envelope removal, spectral analysis, detection of the main oscillatory components (ridge detection), oscillatory parameter readout and visualization plots (Figure 1A). pyBOAT is developed under an open-source license, is freely available for download and can be installed on multiple operating systems. In the first section of this work we lay out the mathematical foundations at the core of pyBOAT. In the subsequent Section 3 we describe artifacts generated by widely used smoothing and detrending techniques, and how they are resolved within pyBOAT. In Section 4 we describe the theory behind spectral readouts in the special case of complex amplitude envelopes. We finalize this manuscript with a short description of the user interface and software capabilities. Figure 1 (caption, from panel B): top panel: a Morlet wavelet of scale s = 1, shown together with a sweeping signal whose instantaneous period coincides with the Morlet of scale s = 1 exactly at τ2; bottom panel: result of the convolution of the sliding Morlet ψ1,τ(t) along the signal f(t), with the power quickly decreasing away from τ2. C) Synthetic signal with periods sweeping from T1 = 30 s (f1 ≈ 0.033 Hz) to T2 = 70 s (f2 ≈ 0.014 Hz). D) The wavelet power spectrum shows time-resolved (instantaneous) periods. In this section we aim to lay down the basic principles of wavelet analysis as employed in our signal analysis tool; the more mathematical subtleties are deferred to the appendix. The classic approach to the frequency analysis of periodic signals is the well-known Fourier analysis. Its working principle is the decomposition of a signal f(t) into sines and cosines, known as basis functions.
These harmonic components have no localization in time but are sharply localized in frequency: each harmonic component carries exactly one frequency, effective everywhere in time. Thus, straightforward Fourier analysis underperforms in cases of time-dependent oscillatory features, such as when the period of the oscillation changes in time (Figure 1C). The goal behind wavelets is to reach an optimal compromise between time and frequency localization (Gabor [1946]). Gabor introduced a Gaussian-modulated harmonic component, also known as the Morlet wavelet: ψ(t) = π^{−1/4} e^{iω0 t} e^{−t²/2}. The harmonic basis functions for time-frequency analysis are then generated from this mother wavelet by scaling and translation: ψ_{s,τ}(t) = s^{−1/2} ψ((t − τ)/s). Varying the time localization τ slides the wavelet left and right on the time axis. The scale s changes the center frequency of the Morlet wavelet according to ω_center(s) = ω0/s (see also appendix equation (8)). Higher scales therefore generate wavelets with lower center frequency. The Gaussian envelope suppresses the harmonic component with frequency ω_center farther away from τ, thereby localizing the wavelet in time (Figure 1B, top panel). The frequency ω_center(s) is conventionally taken as the Fourier-equivalent (or pseudo-) frequency of a Morlet wavelet with scale s. It is noteworthy that wavelets are in general not as sharply localized in frequency as their harmonic counterparts (Figure S1); this is a trade-off imposed by the uncertainty principle to gain localization in time (Gröchenig [2013]). The wavelet transform of a signal f(t) is given by the integral expression W(s, τ) = ∫ f(t) ψ̄_{s,τ}(t) dt. For a fixed scale, this equation has the form of a convolution, denoted by the '∗' operator, where ψ̄ denotes the complex conjugate of ψ. For an intuitive understanding, it is helpful to view the above expression as the cross-correlation between the signal and the wavelet of scale s (or center frequency ω_center(s)). The translation variable τ slides the wavelet along the signal.
Since the wavelet decays rapidly away from τ, only the instantaneous correlation of the wavelet with center frequency ω_center and the signal around τ contributes significantly to the integral (Figure 1b, middle and lower panel). Using an array of wavelets with different frequencies (or periods) therefore makes it possible to scan for multiple frequencies in the signal in a time-resolved manner. The result of the transform, W: f(t) → f̃(τ, ω), is a complex-valued function of two variables: frequency ω and time localization τ. In the following, we implicitly convert scales to frequencies via the corresponding center frequencies ω_center(s) of the Morlet wavelets. To obtain a physically meaningful quantity, one defines the wavelet power spectrum: P^W_f(τ, ω) = |f̃(τ, ω)|²/σ². We adopted the normalization with the variance σ² of the signal from Torrence and Compo [1998], as it allows for a natural statistical interpretation of the wavelet power. By stacking the transformations in frequency order, one constructs a two-dimensional time-frequency representation of the signal, in which the power itself is usually color-coded (Figure 1d), using a dense set of frequencies to approximate the continuous wavelet transform. It is important to note that the time-averaged wavelet power spectrum is an unbiased estimator of the true Fourier power spectrum P^F_f of a signal f (Percival [1995]). This allows Fourier power spectra to be compared directly to the wavelet power. White noise is the simplest noise process and may serve as a null hypothesis. Normalized by variance, white noise has a flat mean Fourier power of one for all frequencies. Hence, a variance-normalized wavelet power of one also corresponds to the mean expected power for white noise: P^F_WN(ω) = 1 (Figure 2c). This serves as a universal unit for comparing different empirical power spectra. Extending these arguments to random fluctuations of the Fourier spectrum allows for the calculation of confidence intervals on wavelet spectra.
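The transform-as-convolution and the variance-normalized power spectrum described above can be sketched as follows. This is a minimal, illustrative implementation; pyBOAT's actual normalization follows Torrence and Compo [1998] and differs in detail, so the absolute power values here should not be compared against the white-noise unit:

```python
import numpy as np

def morlet(x, omega0=2 * np.pi):
    # Gaussian-modulated harmonic (Morlet mother wavelet)
    return np.pi ** -0.25 * np.exp(1j * omega0 * x - x ** 2 / 2)

def wavelet_power(signal, dt, periods, omega0=2 * np.pi):
    """Sketch of a variance-normalized Morlet power spectrum.

    Row i holds |W(tau, s_i)|^2 / sigma^2 over time; the scale is chosen
    so the Fourier-equivalent frequency omega0 / s matches each period.
    """
    signal = np.asarray(signal, float)
    n = len(signal)
    var = signal.var()
    t_k = (np.arange(n) - n // 2) * dt          # kernel time axis
    spec = np.empty((len(periods), n))
    for i, T in enumerate(periods):
        s = omega0 * T / (2 * np.pi)            # scale with pseudo-period T
        psi = morlet(t_k / s) / np.sqrt(s)      # scaled wavelet
        # convolving with psi equals cross-correlating with conj(psi),
        # because the Morlet satisfies psi(-x) = conj(psi(x))
        w = np.convolve(signal, psi, mode='same') * dt
        spec[i] = np.abs(w) ** 2 / var
    return spec

dt = 1.0
t = np.arange(1000) * dt
sig = np.cos(2 * np.pi * t / 50.0)              # pure oscillation, period 50
periods = np.linspace(20, 100, 81)
spec = wavelet_power(sig, dt, periods)
```

For this stationary test signal, the time-averaged power peaks at the true period of 50 samples, mirroring the "unbiased estimator of the Fourier spectrum" property quoted above.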
If a background spectrum P0(ω) is available, the confidence power levels can be easily calculated as C_q(ω) = χ²_2(q)/2 · P0(ω). Assuming normality for the distribution of the complex Fourier components of the background spectrum, one can derive that the power itself is chi-square distributed (Chatfield [1995]). Thus, picking a desired confidence q (e.g. χ²(95%) ≈ 6) gives the scaling factor for the background spectrum. Only wavelet powers greater than this confidence level are then considered to indicate oscillations with the chosen confidence. The interested reader may find more details in Section 4 of Torrence and Compo [1998]. A wavelet power of C = 3 corresponds to the 95% confidence interval in the case of white noise (Figure 2b), which is frequency-independent. For the practitioner, this should be considered the absolute minimum power required to report 'oscillations' in a signal. It should be noted that, especially for biological time series, white noise is often a poor choice of null model because of the correlations present also in non-oscillatory signals (see also Supplementary Information A.3). A possible solution is to estimate the background spectrum from the data itself; this, however, is beyond the scope of this work.

3 Optimal filtering - do's and don'ts

A biological recording can be decomposed into components of interest and those elements which blur and challenge their analysis, most commonly noise and trends. Various techniques for smoothing and detrending have been developed to deal with these issues. Often overlooked is the fact that both smoothing and detrending operations can introduce spectral biases, i.e. attenuation and amplification of certain frequencies. In this section we lay out a mathematical framework to understand and compare the effects of these two operations, showing examples of the potential pitfalls and at the same time providing a practical guide to avoid them. Finally, we discuss how pyBOAT minimizes most of these common artifacts.
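Returning briefly to the confidence levels above: for two degrees of freedom the chi-square quantile has the closed form χ²_2(q)/2 = −ln(1 − q), which reproduces the white-noise 95% level of about 3. A numerical sketch (illustrative, not pyBOAT code):

```python
import numpy as np

def confidence_level(confidence=0.95, background_power=1.0):
    """Wavelet-power threshold above a given background spectrum.

    The power of complex wavelet coefficients of a Gaussian background is
    chi-square distributed with 2 degrees of freedom; the quantile has the
    closed form chi2_2(q) / 2 = -ln(1 - q), giving about 3 for the 95%
    level over unit-mean white noise.
    """
    return background_power * -np.log(1.0 - confidence)

c95 = confidence_level(0.95)

# Monte Carlo check: unit-mean power of complex Gaussian coefficients
rng = np.random.default_rng(0)
power = (rng.standard_normal(200_000) ** 2
         + rng.standard_normal(200_000) ** 2) / 2
exceed = float(np.mean(power > c95))   # fraction of pure noise above the level
```

By construction, roughly 5% of pure-noise power values exceed `c95`, which is exactly the false-positive rate the confidence level is meant to control.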
The operation which removes the fast, high-frequency (low-period) components of a signal is colloquially called smoothing. It is most commonly implemented as a sliding-time-window operation (a convolution). In general terms, given a window function w(t), the smoothed signal is f_s(t) = (f * w)(t). By the convolution theorem, the Fourier transform of the smoothed signal is simply the product of the individual Fourier transforms. It follows that the Fourier power spectrum of the smoothed signal reads P^F_fs(ω) = σ²_f/σ²_fs · |ŵ(ω)|² P^F_f(ω): applying a few steps of Fourier algebra shows that the original power spectrum P^F_f gets modified by the low-pass response |ŵ|² of the window function, scaled by the ratio of variances σ²_f/σ²_fs. Even without resorting to mathematical formulas, smoothing and its effect on time-frequency analysis can easily be grasped visually. A broad class of filtering methods falls into the category of convolutional filtering, meaning that some operation is applied to the data in a sliding window, e.g. for moving-average, LOESS or Savitzky-Golay filtering (Savitzky and Golay [1964]). The moving-average filter is a widely used smoothing technique, defined simply by a box-shaped window that slides in the time domain. In Figure 2 we summarize the spurious effects that this filter can have on noisy biological signals. White noise, commonly used as a descriptor of fluctuations in biological systems, is a random signal with no dominant period. The lack of a dominant period can be seen from a raw white noise signal (Figure 2a) and, more clearly, from the almost flat landscape of its power spectrum (Figure 2b). Applying a moving-average filter of size five times the signal's sampling interval (m = 5∆t) to the raw white noise signal leads to a smoothed noise signal (Figure 2c) that now has multiple dominant periods, as seen by the emergence of high-power islands in Figure 2d.
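The m = 5∆t moving-average example can be reproduced numerically. The sketch below (illustrative values) shows both the variance ratio σ²_f/σ²_fs ≈ m and the resulting amplification of the normalized spectrum at low frequencies, i.e. the raw material for "spurious oscillations":

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.standard_normal(100_000)      # variance-normalized white noise

m = 5                                      # window of 5 sampling intervals
smoothed = np.convolve(noise, np.ones(m) / m, mode='valid')

# the gain sigma_f^2 / sigma_fs^2 that rescales the normalized spectrum;
# for an m-point moving average it is close to m
gain = noise.var() / smoothed.var()

# normalized Fourier power of the smoothed noise: unit mean by construction,
# but boosted roughly m-fold at the lowest frequencies
spec = np.abs(np.fft.rfft(smoothed - smoothed.mean())) ** 2
spec /= spec.mean()
low = spec[1:len(spec) // 20].mean()       # mean power in the lowest 5% band
```

`gain` comes out near 5 and `low` near 5 as well, matching the up-to-5-fold amplification for m = 5∆t discussed in the text.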
Comparing the original spectrum (Figure 2b) with the spectrum of the smoothed white noise (Figure 2d), it becomes evident that smoothing introduces a strong increase in wavelet power at longer periods. In other words, smoothing perturbs the original signal by creating multiple high-power islands at long periods, also referred to as spurious oscillations. To better capture the statistics behind these smoothing-induced spurious oscillations, it is best to look at the time-averaged wavelet spectrum. Figure 2e shows the mean expected power after smoothing white noise with a moving-average filter. A zone of attenuated short periods becomes visible: the sloppy stop-band for periods around 7∆t. These are the fast, high-frequency components which get removed from the signal. For larger periods, however, the wavelet power gets amplified up to 5-fold. It is this gain, given by σ²_f/σ²_fs, which leads to spurious results in the analysis. As stated before, variance-normalized white noise has a mean power of 1 for all frequencies or periods (P^F_WN(ω) = 1). This allows for a straightforward numerical method to estimate a filter response |ŵ(ω)|²: applying the smoothing operation to simulated white noise and time-averaging the wavelet spectra. This Monte Carlo approach works for every (also non-convolutional) smoothing method. Results for the Savitzky-Golay filter applied to white noise signals can be found in Supplementary Figure S2. Convolutional filters will in general produce more gain, and hence more spurious oscillations, with increasing window size in the time domain. If smoothing even with a rather small window (m = 5∆t) already potentially introduces false positive oscillations, what does that mean for practical time-frequency analysis? For wavelet analysis the answer is plain and clear: smoothing is simply not needed at all.
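The Monte Carlo recipe just described (feed simulated white noise through the smoother and average the resulting power spectra) works for any black-box filter. The sketch below uses the Fourier power instead of the wavelet power, which has the same unit-mean property for white noise; filter and parameters are illustrative:

```python
import numpy as np

def filter_response(smoother, n=4096, realizations=200, seed=2):
    """Monte Carlo estimate of a smoother's power response |w_hat(f)|^2.

    Apply the smoother to unit-variance white noise and average the Fourier
    power of the output. With numpy's FFT convention E|X_hat|^2 = n for
    white noise, so dividing by n leaves the filter's power response.
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros(n // 2 + 1)
    for _ in range(realizations):
        x = rng.standard_normal(n)
        acc += np.abs(np.fft.rfft(smoother(x))) ** 2 / n
    return acc / realizations

def moving_average(x, m=5):
    return np.convolve(x, np.ones(m) / m, mode='same')

resp = filter_response(moving_average)
freqs = np.fft.rfftfreq(4096)   # frequency axis in cycles per sample
```

The estimated response is flat near zero frequency and drops to (nearly) zero at the box filter's first null, f = 1/m; because the smoother is treated as a black box, the same function works for LOESS, Savitzky-Golay, or any other method.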
A close inspection of the unaltered white noise wavelet spectrum shown in Figure 2b reveals the same structures at higher periods as in the spectrum of the smoothed signal (Figure 2d). The key difference is that, even though these random apparent oscillations get picked up by the wavelets, their low power directly indicates their low significance. As wavelet analysis (see previous section) is based on convolutions, it already has power-preserving smoothing built in. As an illustration, Figure 2f shows a raw noisy signal with lengthening period (a noisy chirp) and the corresponding power spectrum (Figure 2f, lower panel). Without any smoothing, the main periodic signal can be clearly identified in the power spectrum. Thus, wavelet analysis does not require smoothing for the detection of oscillations even in very noisy signals. For all other spectral analysis methods which rely on explicit smoothing, the characteristics of the background noise and the signal-to-noise ratio are crucial to avoid detecting spurious oscillations; both of these quantities are usually not readily available a priori or in practice. Complementary to smoothing, an operation which removes the slow, low-frequency components of a signal is generally called detrending. Strong trends can dominate a signal by effectively carrying most of the variance and power. There are at least two broad classes of detrending techniques: parametric fitting and convolution-based methods. Both aim to estimate the trend as a function over time, to be subtracted from the original signal. A parametric fit is always the best choice if the deterministic processes leading to the trend are known and well understood. An example is the so-called photobleaching encountered in time-lapse fluorescence imaging experiments; here an exponential trend can often be well fitted to the data based on first-principles considerations (Song et al. [1995]).
However, there are often other slow processes, like cell viability or cells drifting in and out of focus, which usually cannot be readily described parametrically. For all these cases, convolutional detrending with a window function w(t) is a good option and can be written as f_d(t) = f(t) − (f * w)(t). The trend here is nothing less than the smoothed original signal, i.e. f(t) * w(t), now however with the signal itself falling into the stop-band of the low-pass filter, with the aim of not capturing and subtracting any signal components. Using basic algebra in the frequency domain, we obtain an expression relating the window w(t) to the power spectrum of the original signal f(t): P^F_fd(ω) = σ²_f/σ²_fd · (1 − |ŵ(ω)|)² P^F_f(ω). As in the case of smoothing, the so-called high-pass response of the window function is given by (1 − |ŵ|)² and scaled by the ratio of variances σ²_f/σ²_fd. In strong contrast to smoothing, there is no overall gain in power in the range of the periods passing through the filter (called the passband). However, in the case of the moving average and other time-domain filters (see also Figure S2), there is no simple passband region. Instead, there are rippling artifacts in the frequency domain, meaning that some periods get amplified by up to 150% and others attenuated by up to 25%. To showcase why this can be problematic, we constructed a synthetic chirp signal sweeping through a range of periods T1 to T2, this time modified by a linear and an oscillatory trend (Figure 3a). The oscillatory component of the trend was chosen, for clarity, with a specific time scale given by its period T_trend, which is three times the longest period found in the chirp signal. Depending strongly on the specific window size chosen for the moving-average filter, there are various effects on both the time and the frequency domain (shaded area in Figure 3b), such as the introduction of amplitude envelopes and/or incomplete trend removal (Figure 3c). A larger window size better reduces the effect of the ripples inside the passband.
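The window-size trade-off described above can be made concrete with a small numerical experiment (all signal parameters below are illustrative, not those of Figure 3): a moving-average detrending with a moderate window removes a synthetic linear-plus-oscillatory trend reasonably well, while a much larger window leaves most of the oscillatory trend in place:

```python
import numpy as np

def movavg_detrend(signal, window):
    """Convolutional detrending: subtract the moving-average trend estimate.

    `window` is assumed odd; edge-value padding reduces (but does not
    eliminate) boundary artifacts of the trend estimate.
    """
    w = np.ones(window) / window
    pad = window // 2
    padded = np.pad(signal, pad, mode='edge')
    trend = np.convolve(padded, w, mode='same')[pad:pad + len(signal)]
    return signal - trend, trend

# chirp (periods sweeping 30 -> 70 samples) plus a linear and an
# oscillatory trend
t = np.arange(2000, dtype=float)
chirp = np.cos(2 * np.pi * np.cumsum(1.0 / np.linspace(30, 70, t.size)))
trend_true = 0.002 * t + 2.0 * np.sin(2 * np.pi * t / 600.0)
signal = chirp + trend_true

detrended, trend_est = movavg_detrend(signal, window=151)
```

With `window=151` the residual trend is small; rerunning with `window=1001` (slow roll-off towards long periods) leaves a large fraction of the period-600 trend component in the "detrended" signal, exactly the incomplete trend removal discussed in the text.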
However, the filter decay (roll-off) towards larger periods then becomes very slow, which in turn means that trends cannot be fully eliminated. Smaller window sizes perform better in detrending, but their passband can be dominated by ripples (see also Supplementary Figure S3). In practice, sticking to filters originally designed for the time domain, without oscillatory signals in mind, can easily lead to biased results in a time-frequency analysis. However, given the moderate gains of the detrending filter response, there is a much smaller chance of mistakenly detecting spurious oscillations compared to the case of smoothing. The sinc filter, also known as the optimal filter in the frequency domain, is a function with a constant value of one in the passband. In other words, frequencies which pass through are neither amplified nor attenuated. Accordingly, this filter should also be constantly zero in the stop-band, the frequencies (or periods) which should be filtered out. This optimal low-pass response can be formulated in the frequency domain simply as ŵ(ω) = 1 for |ω| ≤ ω_c and ŵ(ω) = 0 otherwise. Here ω_c is the cut-off frequency, an infinitely sharp transition dividing the frequency range into pass- and stop-band; it is effectively a box in the frequency domain (dashed lines in Figure 3d). Note that the optimal high-pass, or detrending, response simply and exactly swaps the pass- and stop-band. In the time domain, via the inverse Fourier transform, this can be written as w(t) = sin(ω_c t)/(π t). This function is known as the sinc function, hence the name sinc filter; an alternative name used in electrical engineering is brick-wall filter. In practice, this optimal filter has a nonzero roll-off, as shown for two different cut-off periods (T_c = 2π/ω_c) in Figure 3d. The sinc function mathematically requires the signal to be of infinite length; therefore, every practical implementation uses windowed sinc filters (Smith et al. [1997]; see also Supplementary Information S3 about possible implementations).
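A windowed-sinc detrending can be sketched as below. A Hamming-tapered kernel is one common practical choice for the windowing; pyBOAT's exact implementation may differ (see Supplementary Information S3), and the signal parameters are again illustrative:

```python
import numpy as np

def sinc_kernel(cutoff_period, dt=1.0, length=None):
    """Windowed-sinc low-pass kernel (truncated 'brick-wall' filter).

    The ideal sinc impulse response is tapered with a Hamming window;
    the resulting passband is essentially ripple-free, unlike the
    moving average.
    """
    fc = dt / cutoff_period                       # cut-off in cycles/sample
    if length is None:
        length = int(10 * cutoff_period / dt) | 1 # odd kernel length
    k = np.arange(length) - (length - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * k) * np.hamming(length)
    return h / h.sum()                            # unit gain at zero frequency

def sinc_detrend(signal, cutoff_period, dt=1.0):
    trend = np.convolve(signal, sinc_kernel(cutoff_period, dt), mode='same')
    return signal - trend

# chirp (periods 30 -> 70 samples) on a linear trend plus a slow
# oscillation of period 210 (three times the longest chirp period)
t = np.arange(4000, dtype=float)
chirp = np.cos(2 * np.pi * np.cumsum(1.0 / np.linspace(30, 70, t.size)))
trend = 0.001 * t + 1.5 * np.sin(2 * np.pi * t / 210.0)
signal = chirp + trend
detrended = sinc_detrend(signal, cutoff_period=120.0)
```

Away from the signal edges (where any convolutional filter degrades, cf. the cone of influence later on), the chirp is recovered almost exactly: the trend falls fully into the stop-band and the chirp fully into the flat passband.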
Strikingly, there are still no ripples or other artifacts in the frequency response of the windowed sinc filter, and hence also the 'real-world' version allows for a bias-free time-frequency analysis. As shown in Figure 3e, the original signal can be exactly recovered via detrending. To showcase the performance of the sinc filter, we numerically compared it against two other common methods, the Hodrick-Prescott and moving-average filters (Figure 3f). The stop- and passband separation of the sinc filter is clearly the best, although the Hodrick-Prescott filter with a parameterization as given by Ravn and Uhlig [2002] also gives acceptable results (see also Supplementary Figure S5). The moving average is generally inadvisable, owing to its amplification right at the start of the passband. In addition to its advantages in practical signal analysis, the sinc filter also makes it possible to analytically calculate the gains from filtering pure noise (see also Supplementary Information A.3). The gain, and therefore the probability of detecting spurious oscillations, introduced by smoothing is typically much larger than that introduced by detrending. However, if a lot of the energy of the noise is concentrated in the slow, low-frequency bands, detrending alone with small cut-off periods can also yield substantial gains (see Figures S6 and S7). Importantly, when using the sinc filter, the background spectrum of the noise will always be uniformly scaled by a constant factor in the passband. There is no mixing of attenuation and amplification as for time-domain filters like the moving average (Figure 3b and c). If the spectrum of the noise can be estimated, or an empirical background spectrum is available, the theory presented in A.3 allows the correct confidence intervals to be calculated directly. Extraction of the instantaneous period, amplitude and phase of the main oscillatory component of a signal is of prime interest for the practitioner.
In this section we show how to obtain these important features using wavelet transforms as implemented in pyBOAT. From the perspective of wavelet power spectra, such main oscillatory components are characterized by concentrated and time-connected regions of high power. Wavelet ridges are a means of tracing these regions in the time-period plane. For the vast majority of practical applications, a simple maximum ridge extraction is sufficient. This maximum ridge can be defined as r(t_k) = argmax_ω P^W_f(t_k, ω), with k = 1, ..., N and N being the number of sample points in the signal. Thus, the ridge r(t_k) maps every time point t_k to a row of the power spectrum, and therewith to a specific instantaneous period T_k (Figure 4c and d). Evaluating the power spectrum along a ridge gives a time series of powers: P^W_f(t_k, r(t_k)). Setting a power threshold value is recommended, to avoid evaluating the ridge in regions of the spectrum where the noise dominates (in Figure 4c the threshold is set to 5). As an alternative to simple maximum ridge detection, more elaborate strategies for ridge extraction have been proposed (Carmona et al. [1995]). A problem often encountered when dealing with biological data is a general time-dependent amplitude envelope (Figure 4a). Under our wavelet approach, the power spectrum is normalized with the overall variance of the signal. Consequently, regions with low signal amplitude but robust oscillations are nevertheless represented as very low power, blurring them into the spectral floor (Figure 4c). This leads to the impractical situation where even a noise-free signal with an amplitude decay will show very low power at its end (Figure 4e, f and S8), defeating the statistical purpose of the normalization. A practical solution in this case is to estimate an amplitude envelope and subsequently normalize the signal with this envelope (Figure 4a and b). We specifically show here non-periodic envelopes, estimated by a sliding window (see also Methods).
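The maximum-ridge readout defined above amounts to an argmax over the period axis followed by power thresholding. A sketch, applied to a toy power spectrum (the Gaussian band and threshold value below are illustrative):

```python
import numpy as np

def max_ridge(power_spectrum, periods, threshold=0.0):
    """Maximum ridge: per time point, the period of highest wavelet power.

    power_spectrum has shape (len(periods), n_time). Time points whose
    maximal power falls below `threshold` are masked with NaN, so the
    ridge is only evaluated where signal dominates noise.
    """
    idx = np.argmax(power_spectrum, axis=0)          # r(t_k)
    cols = np.arange(power_spectrum.shape[1])
    ridge_power = power_spectrum[idx, cols]          # P(t_k, r(t_k))
    ridge_periods = periods[idx].astype(float)
    ridge_periods[ridge_power < threshold] = np.nan
    return ridge_periods, ridge_power

# toy spectrum: a band of power 10 around a slowly drifting period,
# mimicking a clean oscillation whose period lengthens over time
periods = np.arange(10, 101)
true_period = np.linspace(30, 70, 200)
power = 10.0 * np.exp(-(periods[:, None] - true_period[None, :]) ** 2 / 50.0)
ridge_periods, ridge_power = max_ridge(power, periods, threshold=5.0)
```

The recovered ridge tracks the drifting period; scaling part of the spectrum down below the threshold masks exactly those time points, which is the behaviour used to suppress noise-dominated readouts.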
After normalization, lower amplitudes are no longer penalized and an effective power-thresholding of the ridge is possible (Figure 4d and f). A limitation of convolutional methods, including wavelet-based approaches, is edge effects. At the edges of the signal, the wavelets only partially overlap with the signal, leading to a so-called cone of influence (COI) (Figure 4c and d). Even though the periods are still close to the actual values, phases and especially the power should not be trusted inside the COI (see Discussion and Supplementary Figure S9). Once the trace of consecutive wavelet power maxima has been determined and thresholded, evaluating the transform along it yields the instantaneous envelope amplitude, normalized amplitude and phases (see Figure 4e, f and g).

Applications

After introducing the different time series analysis steps using synthetic data for clarity, in this section we discuss examples of pyBOAT applications to real data. To showcase the versatility of our approach, we chose datasets obtained from different scientific fields. In Figure 5a we display COVID-19 infections in Italy as reported by the European Centre for Disease Prevention and Control (ECDC). A sinc-filter trend identification with a cut-off period of 14 days reveals a steep increase in newly reported infections at the beginning of March and a steady decline after the beginning of April. Subtracting this non-linear trend clearly exposes oscillations with a stable period of one week (Figure 5b; see Supplementary Figure S10a for the power spectrum analysis). Similar findings were recently reported in Ricon-Becker et al. [2020]. The signals shown in Figure 5c show cycles in hare-lynx population sizes, as inferred from the yearly number of pelts trapped by the Hudson's Bay Company; the data have been taken from Odum and Barrett [1971]. The corresponding power spectra are shown in the Supplement (Figure S10b) and reveal a fairly stable 10-year periodicity.
After extracting the instantaneous phases with pyBOAT, we calculated the time-dependent phase differences, as shown in Figure 5b. Interestingly, the phase difference slowly varies between being almost perfectly out of phase and, for a few years around 1885, being in phase. The next example signal is a single-cell trajectory of a U2OS cell carrying a Geminin-CFP fluorescent reporter (Granada et al. [2020]). Geminin is a cell cycle reporter, accumulating in the G2 phase and then steeply declining during mitosis. Applying pyBOAT to these non-sinusoidal oscillations reveals the cell-cycle length over time (Figure 5f), showing a slowing down of cell cycle progression for this cell. Ensemble dynamics for a control and a cisplatin-treated population are shown in Supplementary Figure S10c. The final example data set is taken from Mönke et al. [2017]; here, populations of MCF7 cells were treated with different dosages of the DNA-damaging agent NCS. This in turn elicits a dose-dependent and heterogeneous p53 response, tracked in the individual cells for each condition (Figure 5g). pyBOAT also features readouts of the ensemble dynamics: Figure 5h shows the time-dependent period distribution in each population, and Figure 5i the phase coherence over time. The latter is calculated as R(t) = |⟨e^(iφ_j(t))⟩_j|, the modulus of the ensemble-averaged phase vectors. It ranges from zero to one and is a classical measure of synchronicity in an ensemble of oscillators (Kuramoto [1984]). The strongly stimulated cells (400 ng NCS) show stable oscillations with a period of around 240 min and remain more phase-coherent after an initial drop in synchronicity. The medium-stimulated cells (100 ng NCS) start to slow down on average already after the first pulse; both the spread of the period distribution and the low phase coherence indicate a much more heterogeneous response. Two individual cells and their wavelet analysis are shown in Supplementary Figure S10d.
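The phase-coherence measure R(t) above is the Kuramoto order parameter; it can be sketched directly from an array of instantaneous phases (the ensemble sizes and noise levels below are arbitrary, purely for illustration):

```python
import numpy as np

def phase_coherence(phases):
    """Kuramoto order parameter R(t) = |mean_j exp(i * phi_j(t))|.

    `phases` has shape (n_oscillators, n_time); R(t) is 1 for perfect
    synchrony and fluctuates near 1/sqrt(n_oscillators) for random phases.
    """
    return np.abs(np.exp(1j * np.asarray(phases)).mean(axis=0))

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 500)
common = 2 * np.pi * t                                # shared rhythm
sync = common + 0.1 * rng.standard_normal((50, 1))    # tight phase spread
desync = rng.uniform(0.0, 2 * np.pi, size=(50, 500))  # scattered phases

R_sync = phase_coherence(sync)
R_desync = phase_coherence(desync)
```

A tightly phase-locked ensemble stays near R = 1, while random phases hover near zero, which is how the coherence drop of the weakly stimulated populations shows up in Figure 5i.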
Graphical interface

The extraction of period, amplitude and phase is the final step of our proposed analysis workflow, which is outlined in the screen captures of Figure 6. The user interface is separated into several sections. First, the 'DataViewer' allows the user to visualize the individual raw signals, the trend determined by the sinc filter, the detrended time series and the amplitude envelope. Once satisfactory parameters have been found, the actual wavelet transform together with the ridge are shown in the 'Wavelet Spectrum' window. After ridge extraction, the instantaneous observables can be plotted in a 'Readout' window. Each plot produced from the interface can be panned, zoomed in and saved separately if needed. Once the fidelity of the analysis has been checked for individual signals, it is also possible to run the entire analysis as a 'batch process' for all imported signals. One aggregated result which we found quite useful is to determine the 'rhythmicity' of a population by creating a histogram of the time-averaged powers of the individual ridges. A classification of signals into 'oscillatory' and 'non-oscillatory' based on this distribution, e.g. by using classical thresholds (Otsu [1979]), is a potential application. Examples of the provided ensemble readouts are shown in Figure 5h and i and Supplementary Figure S10c. Finally, pyBOAT also features a synthetic signal generator, allowing its capabilities to be explored quickly even without a suitable dataset at hand. A synthetic signal can be composed of up to two different chirps, and AR1 noise and an exponential envelope can be added to simulate challenges often present in real data (see also Materials and Methods and Supplementary Figure S11). Installation and user guidelines for pyBOAT can be found in the GitHub repository. [Figure 6 caption fragment: the signal gets detrended with a cut-off period of 90 h, and an amplitude envelope is estimated via a window of size 50 h; see labels and main text for further explanations.]
[Figure 6 caption, continued: the example trajectory displays a circadian rhythm of 24 h and is taken from the data set published in Abel et al. [2016].]

Recordings of biological oscillatory signals can be conceptualized as an aggregate of multiple components: those coming from the underlying system of interest, and additional confounding factors such as noise, modulations and trends that can disguise the underlying oscillations. In cases of variable period with noisy amplitude modulation and non-stationary trends, the detection and analysis of oscillatory processes is a non-trivial endeavour. Here we introduced pyBOAT, a novel software package that uses a statistically rigorous method to handle non-stationary rhythmic data. pyBOAT integrates pre- and post-processing steps without making a priori assumptions about the sources of noise or the periodicity of the underlying oscillations. We showed how the signal processing steps of smoothing, detrending, amplitude envelope removal, signal detection and spectral analysis can be resolved by our hands-off standalone software (Figures 1 and 5). Artifacts introduced by the time series analysis methods themselves are a common problem that inadvertently disturbs the results of the time-frequency analysis of periodic components (Wilden et al. [1998]). Here we first analyzed the effects of data smoothing on a rhythmic noisy signal and showed how common smoothing approaches disturb the original recordings by introducing non-linear attenuations and gains to the signal (Figures 2, S6 and S7). These gains easily lead to spurious oscillations that were not present in the original raw data. Such artifacts have long been characterized for the commonly used moving-average smoothing method, where they are known as the Slutzky-Yule effect (Slutzky [1937]). Using an analytical framework, we describe the smoothing process as a filter operation in the frequency domain. This allows us to quantify and directly compare the effects of diverse smoothing methods by means of response curves.
Importantly, we show here how any choice of smoothing unavoidably transforms the original signal in a non-trivial manner. One potential reason for the prevalence of smoothing is that practitioners often implement a smoothing algorithm without quantitatively comparing the spectral components before versus after smoothing. pyBOAT avoids this problem by implementing a wavelet-based approach that per se evades the need to smooth the signal. Another source of artifacts is detrending operations. Thus, we next studied the spectral effects that signal detrending has on rhythmic components. Our analytical and numerical approaches allowed us to compare the spectral effects of different detrending methods in terms of their response curves (see Figure 3). Our results show that detrending also introduces non-trivial boosts and attenuations to the oscillatory components of the signal, depending strongly on the background noise (Figures S6 and S7). In general there is no universal approach; optimally, a detrending model is based on information about the sources generating the trend. In cases without prior information from which to formulate a parametric detrending in the time domain, we suggest that the safest method is the convolution-based sinc filter, as it is an "ideal" (step-function) filter in the frequency domain (Figures 3c and S3). Furthermore, we compared the performance of the sinc filter with two other commonly applied methods for removing non-linear trends in data (Figure 3f), i.e. the moving-average (Díez-Noguera [2013]) and Hodrick-Prescott (Myung et al. [2012], Schmal et al. [2018], St. John and Doyle [2015]) filters. In addition to smoothing and detrending, amplitude normalization by means of amplitude envelope removal is another commonly used data processing step that pyBOAT is able to perform.
We further showed that, for decaying signals, amplitude normalization ensures that the main oscillatory component of interest can be properly identified in the power spectrum (Figure 4a to d). This main component is identified by a ridge-tracking approach that can then be used to extract instantaneous signal parameters such as amplitude, power and phase (Figure 4e to g). Rhythmic time series can be categorized into those showing stationary oscillatory properties and non-stationary ones, where periods, amplitudes and phases change over time. Many currently available tools for the analysis of biological rhythms rely on methods aimed at stationary oscillatory data, using either standalone software environments such as BRASS (Edwards et al. [2010], Locke et al. [2005]), ChronoStar (Klemz et al. [2017]) and Circada (Cenek et al. [2020]), or online interfaces such as BioDare (Zielinski et al. [2014]). Continuous wavelet analysis makes it possible to reveal non-stationary period, amplitude and phase dynamics and to identify multiple frequency components across different scales within a single oscillatory signal (Leise [2013], Leise et al. [2012], Rojas et al. [2019]), and is thus complementary to approaches designed to analyze stationary data. In contrast to the R-based waveclock package (Price et al. [2008]), pyBOAT can be operated as a standalone software tool that requires no prior programming knowledge, as it can be fully operated through its graphical user interface (GUI). An integrated batch-processing option allows the analysis of large data sets within a few "clicks". For the user interested in programming, pyBOAT can also easily be scripted without using the GUI, making it simple to integrate into individual analysis pipelines. pyBOAT also distinguishes itself from other wavelet-based packages (e.g. Harang et al.
[2012]) by adding robust sinc-filter-based detrending and a statistically rigorous framework, supporting the interpretation of results through statistical confidence considerations. pyBOAT is not specifically designed to analyze oscillations in high-throughput "omics" data; for that purpose, specialized algorithms such as ARSER (Yang and Su [2010]), JTK_CYCLE (Hughes et al. [2010]), MetaCycle (Wu et al. [2016]) or RAIN (Thaben and Westermark [2014]) are more appropriate. Its analysis reveals basic oscillatory properties such as the time-dependent (instantaneous) rhythmicity, period, amplitude and phase, but is not aimed at more specific statistical tests such as, e.g., tests for differential rhythmicity as implemented in DODR (Thaben and Westermark [2016]). The continuous wavelet analysis underlying pyBOAT requires equidistant time series sampling with no gaps. Methods such as Lomb-Scargle periodograms or harmonic regressions are more robust with respect to, or even specifically designed for, unevenly sampled data (Lomb [1976], Ruf [1999]). Although beyond the scope of this manuscript, it will be interesting in future work to integrate the ability to analyze unevenly sampled data into the pyBOAT software, either by the imputation of missing values (e.g. by linear interpolation) or by the use of wavelet functions specifically designed for this purpose (Thiebaut and Roques [2005]). pyBOAT is a fast, easy-to-use and statistically robust analysis routine designed to complement existing methods and advance efficient time series analysis in biological rhythms research. In order to make it publicly available, pyBOAT is free and open-source, multi-platform software based on the popular Python programming language (van Rossum and Drake [2009]). It can be downloaded using the following link: https://github.com/tensionhead/pyboat, and it is available on the Anaconda distribution (via the conda-forge channel).
Software

pyBOAT is written in the Python programming language (van Rossum and Drake [2009]). It makes extensive use of Python's core scientific libraries NumPy and SciPy (Virtanen et al. [2020]) for the numerics. Additionally, we use Matplotlib (Hunter [2007]) for visualization and pandas (McKinney [2010]) for data management. pyBOAT is released under the open-source GPL-3.0 license, and its code is freely available from https://github.com/tensionhead/pyboat. The README in this repository contains further information and installation instructions. pyBOAT is also hosted on the popular Anaconda distribution, as part of the conda-forge community (https://conda-forge.org/). To estimate the amplitude envelope in the time domain, we employ a moving window of size L and determine the minimum and maximum of the signal inside the window for each time point t. The amplitude at that time point is then given by a(t) = 1/2 (max(t) − min(t)). This works very well for envelopes with no periodic components, like an exponential decay; however, this simple method is not suited for oscillatory amplitude modulations. It is also recommended to sinc-detrend the signal before estimating the amplitude envelope. Note that L should always be larger than the maximal expected period in the signal, as otherwise the signal itself gets distorted. A noisy chirp signal can be written as f(t_i) = a cos(φ(t_i)) + d x(t_i), where φ(t_i) is the instantaneous phase and the x(t_i) are samples from a stationary stochastic process (the background noise). The increments of the t_i are the sampling interval: t_(i+1) − t_i = ∆t, with i = 0, 1, ..., N samples. Starting from a linear sweep through angular frequencies, ω(0) = ω1 and ω(t_N) = ω2, we have ω(t) = (ω2 − ω1)/t_N · t + ω1. The instantaneous phase is then given by the integral φ(t) = (ω2 − ω1) t²/(2 t_N) + ω1 t. Sampling N times from a Gaussian distribution with standard deviation equal to one yields Gaussian white noise ξ(t_i).
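The noisy chirp just defined can be generated directly from the closed-form phase. A sketch (the amplitude `A` plays the role of the text's a; amplitude, noise level and seed are arbitrary choices for illustration):

```python
import numpy as np

def noisy_chirp(n, dt, T1, T2, A=1.0, d=0.0, seed=None):
    """Noisy chirp f(t_i) = A * cos(phi(t_i)) + d * xi(t_i).

    The angular frequency sweeps linearly from omega1 = 2*pi/T1 to
    omega2 = 2*pi/T2, so the instantaneous phase is the integral
    phi(t) = (omega2 - omega1) * t**2 / (2 * t_N) + omega1 * t.
    xi is Gaussian white noise.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n) * dt
    w1, w2 = 2 * np.pi / T1, 2 * np.pi / T2
    phi = (w2 - w1) * t ** 2 / (2 * t[-1]) + w1 * t
    return t, A * np.cos(phi) + d * rng.standard_normal(n)

t, sig = noisy_chirp(1000, 1.0, T1=30, T2=70, d=0.5, seed=4)  # noisy version
_, clean = noisy_chirp(1000, 1.0, T1=30, T2=70, d=0.0)        # noise-free
```

The period visibly lengthens along the trace: the oscillation completes many more cycles at the beginning (period near 30 samples) than at the end (period near 70 samples).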
with x(t_i) = ξ(t_i), the signal-to-noise ratio (snr) then is A²/d². a realization of an ar1 process can be simulated by a simple generative procedure: the initial x(t_0) is a sample from the standard normal distribution. then the next sample is given by x(t_i) = α x(t_{i−1}) + ξ(t_i), with α < 1. simulating pink noise is less straightforward, and we use the python package colorednoise from https://pypi.org/project/colorednoise for the simulations. its implementation is based on timmer and koenig [1995].
references:
functional network inference of the suprachiasmatic nucleus
quantitative analysis of circadian single cell oscillations in response to temperature
identification of chirps with continuous wavelet transform
circada: shiny apps for exploration of experimental and synthetic circadian time series with an educational emphasis
problem solving: a statistician's guide
methods for serial analysis of long time series in the study of biological rhythms
quantitative analysis of regulatory flexibility under changing environmental conditions
calls out of chaos: the adaptive significance of nonlinear phenomena in mammalian vocal production
theory of communication. part 1: the analysis of information
systems biology of cellular rhythms
the effects of proliferation status and cell cycle phase on the responses of single cells to chemotherapy
foundations of time-frequency analysis
circannual rhythms in birds
circadian system phase: an aspect of temporal morphology; procedures and illustrative examples
wavos: a matlab toolkit for wavelet analysis and visualization of oscillatory systems
jtk_cycle: an efficient nonparametric algorithm for detecting rhythmic components in genome-scale data sets
matplotlib: a 2d graphics environment
hilbert transformer and time delay: statistical comparison in the presence of gaussian noise
reciprocal regulation of carbon monoxide metabolism and the circadian clock
chemical turbulence
scaling of embryonic patterning based on phase-gradient encoding
wavelet analysis of circadian and ultradian behavioral rhythms
persistent cell-autonomous circadian oscillations in fibroblasts revealed by six-week single-cell imaging of per2::luc bioluminescence
extension of a genetic network model by iterative experimentation and mathematical analysis
least-squares frequency analysis of unequally spaced data
mckinney, wes: data structures for statistical computing in python
excitability in the p53 network mediates robust signaling with tunable activation thresholds in single cells
online period estimation and determination of rhythmicity in circadian data, using the biodare data infrastructure
period coding of bmal1 oscillators in the suprachiasmatic nucleus
circadian rhythms determined by cosine curve fitting: analysis of continuous work and sleep-loss data
fundamentals of ecology
dissociation of per1 and bmal1 circadian rhythms in the suprachiasmatic nucleus in parallel with behavioral outputs
a threshold selection method from gray-level histograms
on estimation of the wavelet variance
circadian oscillations in rodents: a systematic increase of their frequency with age
waveclock: wavelet analysis of circadian oscillation
performance of different synchronization measures in real data: a case study on electroencephalographic signals
on adjusting the hodrick-prescott filter for the frequency of observations
a seven-day cycle in covid-19 infection and mortality rates: are inter-generational social interactions on the weekends killing susceptible people? medrxiv
plant dormancy in the perennial context
beyond spikes: multiscale computational analysis of in vivo long-term recordings in the cockroach circadian clock
the lomb-scargle periodogram in biological rhythm research: analysis of incomplete and unequally spaced time-series
smoothing and differentiation of data by simplified least squares procedures
plasticity of the intrinsic period of the human circadian timing system
measuring relative coupling strength in circadian systems
the summation of random causes as the source of cyclic processes
the scientist and engineer's guide to digital signal processing
photobleaching kinetics of fluorescein in quantitative fluorescence microscopy
a doppler effect in embryonic pattern formation
quantifying stochastic noise in cultured circadian reporter cells
least squares analysis of fluorescence data
detecting rhythms in time series with rain
differential rhythmicity: detecting altered rhythmicity in biological data
time-scale and time-frequency analyses of irregularly sampled astronomical time series
on generating power law noise
a practical guide to wavelet analysis
self-organization of embryonic genetic oscillators into spatiotemporal wave patterns
python 3 reference manual. createspace
quantification of circadian rhythms in single cells
subharmonics, biphonation, and deterministic chaos in mammal vocalization
metacycle: an integrated r package to evaluate periodicity in large scale data
analyzing circadian expression data by harmonic regression based on autoregressive spectral estimation
strengths and limitations of period estimation methods for circadian data
we gratefully thank bharath ananthasubramaniam, hanspeter herzel, pedro pablo rojas and shaon chakrabarti for fruitful discussions and comments on the manuscript. we further thank jelle scholtalbers and the gbcs unit at the embl in heidelberg for technical support. we thank members of the aulehla and leptin labs for comments, support and helpful advice.
key: cord-321492-u2jm6y25 authors: catty, jocelyn title: lockdown and adolescent mental health: reflections from a child and adolescent psychotherapist date: 2020-06-10 journal: wellcome open res doi: 10.12688/wellcomeopenres.15961.1 sha: doc_id: 321492 cord_uid: u2jm6y25 the author, a child and adolescent psychoanalytic psychotherapist working in the uk nhs, ponders the varied impacts of ‘lockdown’ on adolescents, their parents and the psychotherapists who work with them, during the covid-19 pandemic. she asks, particularly, how psychological therapies are positioned during such a crisis, and whether the pressures of triage and emergency can leave time and space for sustained emotional and psychological care. she wonders how psychoanalytic time with its sustaining rhythm can be held onto in the face of the need for triage on the one hand and the flight to online and telephone delivery on the other. above all, the author questions how the apparent suspension of time during lockdown is belied by the onward pressure of adolescent time, and how this can be understood by, and alongside, troubled adolescents. the time of the covid-19 virus brings a strange shifting of priorities to my professional life as a child and adolescent psychoanalytic psychotherapist working in a child and adolescent mental health service (camhs). covid-19: the name itself encapsulates delay (flexer, 2020, waiting in pandemic times) . building into the term the origins of the virus in 2019, it provides a stark reminder that, having ignored warnings from the medical world and then the evidence before our eyes, we are now always already trying to catch up (horton, 2020) . the world is in crisis, but it is hard to position the acute and chronic crises of mental health work in the nhs against the unfolding crisis we see on our screens. are we high priority or low? frontline or routine? 
do we, like primary care staff, rush to 'man the barricades' (davies, 2020, waiting in pandemic times) -anxiety about the possibility of redeployment is spreading among mental health staff even where they are entirely untrained for physical health care -or do we hunker down at home to conduct therapy online for the foreseeable future? (what is foreseeable about the future, now, for the young patients, depressed, anxious or enduring the turbulence of adolescence, for whom the future was only hazily in view in the first place?) mental health has traditionally been lamented as the poor relation within the national health service (nhs), with psychiatry under-valued and repeated cries to achieve parity between mental and physical health ignored. how, then, are we to consider the seriousness of psychological and emotional labour conducted in services such as camhs during a national crisis? talking to young people and children about their anxieties, or even their considerable distress, appears low priority when compared to doctors and nurses battling covid; yet an adolescent death by suicide remains one of the most catastrophic events imaginable, for family, friends and professionals alike. in the time of the virus, we are thus adrift in the prevailing geo-spatial metaphors of the age: nowhere near the 'front line', we may find ourselves thrust suddenly towards it if a teenager attempts to harm him- or herself. the world gives the impression of having halted adolescent time. exams are cancelled; school is out, or virtual; universities have sent their students home. for those in their teens, the covid-19 pandemic arrives at a crucial time in development, as they transition from childhood to adulthood. yet the time of adolescence itself often feels both chronic and acute, its difficulties regarded as perennial, even predictable, yet often plunging the young into crisis.
disturbed adolescents may try to arrest a march of time that feels relentless by retreating into depression, or into their bedrooms: to halt their progress towards a future that is perceived as bleak, or simply unimaginable. what can we learn about time -now, in the time of covid-19 -from this sudden suspension of time which is not actually a suspension at all? this questioning of the future which is, curiously, so familiar to many of the young people whose mental health elicits our care? the decision to award gcse and a level results, rather than postpone the exams, could be seen as a shocking pronouncement: that time waits for no one, that adolescent progress cannot, must not, be halted -even if, for those awarded a grade less than that which they might have achieved, progress is thwarted. like their younger counterparts at the top of primary school, they must, even from their bedrooms, be ushered forwards to the brink at which they bid their school lives farewell. those struggling with the pressure of work and exams may be relieved, but their world has also crashed down upon them and many are disappointed. some lament a lack of control: the final academic effort, for which they were preparing, is denied them, and teachers, or government, will decide upon their grades. yet for some, for whom the pressure of external life has been unbearable, perhaps there is the possibility of respite, and the lockdown may provide them with much-needed time for recovery. adolescent development 'runs unevenly' (waddell, 2018, p. 26): how the time of covid-19 intersects with each individual trajectory will vary hugely. while the media portray the young as oblivious -gathering in parks, spitting defiantly in the faces of police or the elderly -we hear our young patients report their varying responses, almost always ambivalent, anxious.
for those with depression, existential despair, sometimes born of inter-generational trauma and loss, is known to dominate (catty ed., 2016): how are they to believe that the future holds any promise when it appears to have been cancelled, or at least indefinitely postponed? for some, this will confirm a pre-existing belief, a bleakness. meanwhile, they worry about grandparents, parents and, increasingly, each other. there is an idea that psychoanalytic work with adults involves the recollection and processing of remembered trauma -that it is, as wordsworth wrote of poetry, 'emotion recollected in tranquillity' (1805/1987, p. 42) -while therapy with children and adolescents is conducted during and alongside the unfolding of their key emotional dramas. theory and clinical practice afford many contradictions of this dichotomy; yet it remains meaningful to conceptualise adolescent therapy as a 'being alongside' a teenager as they live through their most turbulent of times. how does lockdown impact on this sense of immediacy? during lockdown, young people are suffering a crisis that we appear to share with them, at least in this basic way: we too sit in our homes as we engage them in their therapy. keeping a focus on the particularity of their experience -the extent to which the national crisis may or may not be impacting on their internal dramas -will need close attention. yet perhaps they have something to tell us about uncertainty -about the future, about the passing of time -that they have long feared we did not understand. for some, we have finally entered into their world. there are implications here, too, for our work with their parents, now that we feel ourselves to share their most immediate circumstances: we are all in lockdown; we are all worried about our ageing parents; we are all, increasingly, worried about the young. crisis time in adolescent mental health services relies on a red-amber-green system of case-flagging.
now only the reddest of the red cases can be seen in person, anxiously diverted from accident and emergency departments to the community clinic to avoid contamination. while those on duty manage these most critical of crises in person, the rest of the team connect to their patients via telephone and video-conferencing. fears that mental health work will be deemed such low priority as to justify sending therapists into the medical settings for which they would be entirely, shockingly, unprepared, seem to abate as authorities determine that mental health emergencies are themselves 'priority'. at the same time, the urgency of attending to an unfolding mental health crisis is becoming clearer: articulated in a recent 'call for action' to include data collection on the psychological, social and neuroscientific effects of the pandemic on both the general population, vulnerable groups and those with the virus (holmes et al., 2020) . what, then, are the implications of mental health triage in this new world? in the early weeks of the lockdown, we wonder whether to activate a crisis response by focusing only on emergencies, keeping in touch with our regular patients for more frequent, but briefer, telephone updates. implicitly, we are invoking ideas of triage (focusing only on emergencies in any detail or depth) and support (finding out how our patients are managing, rather than working with them). yet it is clear that such a model will not serve us well in the longer-term: if nearly the whole camhs population is provided with brief, intermittent support rather than treatment, logic dictates that their mental health will deteriorate. yet does such a distinction between support and treatment hold in a time of crisis? 
it is a distinction that has always been uncomfortable where it privileges the activity of psychological therapists over other mental health specialisms, such as nursing, occupational therapy or social work (deemed to be providing 'support' or 'risk management'); yet it has enabled us to retain an emphasis on the 'work' that is involved in psychological treatment and the process that unfolds between the participants in psychotherapy, patient and therapist. what the nature of such work may be during lockdown remains to be seen. meanwhile, mental health emergencies among the teenage population seem to have plummeted: we wonder, where are they? have they too been suspended? there is anxiety about when the dam may break; increases in anxiety, depression and self-harm are expected in the population as a whole (holmes et al., 2020).[1] for those that come in, we find ourselves contorting the familiar nhs language of 'risk': do we mean suicide risk or covid-19 risk? where is 'safe' for a 16-year-old determined to kill herself, or a 13-year-old who has taken an overdose? a mother asks whether, were her teenage son to harm himself, she would be allowed to be with him in hospital; we cannot advise her. the focused maternal care that a teenager may specifically crave in such desperate moments becomes the one thing he would deprive himself of; the choices facing those with suicidal thoughts become starker now. we ask ourselves, can we provide a reassuring presence dressed in protective mask and goggles? or should we retreat behind a computer or smartphone screen, through which we can, at least, be seen as ourselves? how do we keep time in such a crisis? there is a rhythm that psychotherapists and their patients come to live and breathe: the regular pulse of the psychoanalytic session, whether weekly or more frequent; the predictability of the starting time; the inevitability of the session end or the week's wait.
this rhythm underpins the duration of a therapy as it unfolds in time and is the bedrock of the 'containment' (bion, 1962 ) that psychoanalysis offers (baraitser & salisbury, 2020, waiting in pandemic times) . can this rhythm, based on the fifty-minute hour, be maintained over the telephone or protected with the same boundaries as in the clinic? in the rush of psychotherapists to online platforms and the telephone, can we maintain this steady pulse? for a teenage patient, does it still feel like his session time if he knows his therapist is going to ring? will it still feel like time to stop if we are wrapped in the cocoon of sound provided by a telephone call in a quiet room; or if we have been trying to focus on each other's faces in a shaky video call? despite the fact that most teenagers are more familiar with online discourse than we are, this shift raises issues of space too. is it intrusive to conduct therapy online with an adolescent, looking into that most private of spaces, their bedroom? alternatives are unlikely when families are crammed together conducting school and home lives under one roof. what is it like for a depressed adolescent to know that his therapist is telephoning from her own home? or for a troubled teenage girl, reliant on self-harm to embody her misery, to bring her therapist into her home on a smartphone screen? decisions continue to need making: despite the impression that time has been suspended, in fact it waits for nobody. an offer of time-limited psychotherapy for a girl of seventeen-and-a-half is paused: can it still be done? the time-frame provided by the therapy model was to fit neatly into the time that remains for her as a camhs patient: upon her eighteenth birthday, she will be discharged. despite the impression that the world has stopped turning, time is marching on. nothing sums up better the paradox of the crisis for adolescents or gives the lie more obviously to the notion of shutdown, suspension or postponement. 
time is still passing. all data underlying the results are available as part of the article and no additional source data are required. [1] this paper was written in the first two weeks after lockdown, when emergency presentations nationally were hugely reduced (bmj, 2020); by the time of publication, it could be anecdotally observed that emergency presentations of adolescents in a state of mental health crisis had increased. the child and adolescent psychoanalytic psychotherapist jocelyn catty reflects on how psychological therapies are positioned during a crisis such as the covid-19 pandemic. the author questions how the psychoanalytic session is maintained over online platforms and telephone consultations. furthermore, catty addresses essential reflections on how the crisis leaves time for emotional and psychological care in a time characterized by the pressure of triage and emergency. the introduction outlines the immediate and dark consequences for young people, their school achievements, mental health, social development, and the opportunity to get adequate treatment. she points to the specific developmental challenges of covid-19 putting young people at high risk of lagging behind in this important transitional phase into adulthood. the manuscript is well-written and contains highly needed questions regarding adolescent mental health during a crisis such as the pandemic. it emphasizes the uncertainty in therapy rooms and improvised therapy rooms at home worldwide. the paper raises the urgency for young people and the need for society to take their situation seriously. jocelyn catty fears that only mental health triage will be offered to young people and a generation will be deprived of the opportunity for treatment. the paper is formulated as a warning, bringing into the discussion some problematic sides of video consultations. the paper is submitted as a research article; however, we read it to be an opinion article.
thus, evaluation according to research standards is not applicable. the paper does not provide sufficient details of method and analysis to allow replication by others, nor are any conclusions drawn -as there are no results and the manuscript is lacking both qualitative and quantitative data. as an opinion article, however, the paper is a structured and well-written article. the manuscript might be more useful to a broad clinical readership if the author moved beyond the rather pessimistic undertone regarding psychotherapy during a crisis and explored alternative perspectives, as there is evidence, as well as clinical experience from the ongoing pandemic, that some young people in some periods of therapy might profit even more from video consultations. in addition, video consultations give the therapist an opportunity to follow the young person and offer treatment even when they move elsewhere owing to a change in school or study situation. some questions the author may consider: how does the therapist's attitude towards online therapy or telephone consulting affect the therapy delivered on these platforms? how is the therapist marked by the current crisis and shaped by being forced to deliver therapy on alternative platforms? might it be that the physical distance to the therapist for some adolescents facilitates a greater emotional closeness to the psychotherapy -and therapist? the manuscript ends rather abruptly with many important questions to reflect on without stating a clear take-home message. overall, this is a timely and much-needed essay. it is written nicely with rich metaphor and moving examples of how these last months have changed the entire field of psychotherapy -both for patients and most therapists. the paper might be improved by modifications addressing the detailed comments below and perhaps a mention of alternative perspectives to better recognize the complexity of this important matter.
are all the source data underlying the results available to ensure full reproducibility? no source data required. are the conclusions drawn adequately supported by the results? partly. 'containment, delay, mitigation': waiting and care in the time of a pandemic. this paper was developed in collaboration with colleagues working on the waiting times research project (see waitingtimes.exeter.ac.uk). we confirm that we have read this submission and believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. this is a provocative and timely article. catty has used the remarkable interruption of normal camhs psychotherapy practice by the covid-19 crisis to explore and illuminate the importance of time and 'the future' for adolescents and the, often overlooked, impact of rhythm in the psychotherapeutic process. the importance of the article would seem to lie in its engagement with a rapidly changing (and novel) challenge to practice. the trade-offs between (masked) face-to-face sessions and digital but unmasked consultations are well noted. like many writing about covid, catty appears to accept the 'looming mental health epidemic' it will cause while observing that initially referrals fell. it might be worthwhile to revisit previous national crises (e.g. wwii) when predictions of mass psychological casualties were found to be baseless. catty acknowledges that her thinking is based on the first few weeks of lockdown, and a follow-up article after a couple of months to compare her thoughts with what transpires would be of considerable interest. are all the source data underlying the results available to ensure full reproducibility? no source data required. are the conclusions drawn adequately supported by the results?
yes. competing interests: no competing interests were disclosed. reviewer expertise: social psychiatry and the application of psychotherapeutic principles in adult disorders. i confirm that i have read this submission and believe that i have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. key: cord-022551-qlhkwbp9 authors: fisher, peter g. title: ferret behavior date: 2009-05-15 journal: exotic pet behavior doi: 10.1016/b978-1-4160-0009-9.50011-6 sha: doc_id: 22551 cord_uid: qlhkwbp9 nan polecats tend to be solitary and very territorial, with fighting between males having been observed, presumably over territory and sexual domain. the domestic ferret on the other hand is very social and gregarious, enjoying play activities with conspecifics and preferring to sleep with other ferrets of the same or opposite sex. the polecat is quick, nervous, and easily frightened and will show fear of people if left with the mother during the critical period of 7 1/2 to 8 1/2 weeks of age. 34 the domestic ferret, however, was initially kept as a pest destroyer, normally raised in confinement and liberated to the field in order to hunt the intended prey. therefore these ferrets were raised to be easily handled and could not be nervous or fearful of humans. further comparisons between the domestic ferret and the wild polecat will be made in other sections of this chapter as we investigate the behavior of today's pet ferret. being domesticated from a crepuscular species, ferrets possess a tapetum lucidum, which allows for more effective vision at low levels of light. they do not see well in pitch dark and have difficulty adjusting to bright light. this means that a ferret must be allowed to adjust to the light and become fully awake before it is removed from under a blanket or from a cozy spot where it is sleeping, or the handler risks being bitten.
ferrets have binocular vision, and although they can swivel their eyes to look at different objects, most ferrets look forward and turn their heads to see things to the side. the pupil is horizontally slit, which is common in species that chase prey with gaits characterized by a hopping motion 39 and explains the ferret's fascination with a bouncing ball. ferrets have very good visual acuity at close range, which is important because the ferret uses varying body language and visual displays to communicate. 28 they see less detail at greater distances and as a result pay more attention to complex visual stimuli such as moving objects. the ear canals of a ferret do not open until approximately 32 days postnatally (as compared with 6 days in a cat), which coincides with the appearance of a startle response to loud hand claps and the recording of acoustically activated neurons in the midbrain (figure 4-2). 31 this late onset of hearing may explain why kits produce exceptionally loud, piercing sounds during the first 4 weeks of life. lactating jills are tuned in to kit vocalizations and will respond to high-frequency (greater than 16 khz) sounds in a maze test, whereas males and nonlactating females will ignore these sounds. 39 adult ferrets hear best when sounds are within a range of 8 to 12 khz. 39 this may explain why ferrets love squeaky toys, which produce sounds in this range. kits of wild polecats have a critical period of learning the scent of prey (olfactory imprinting), which according to apfelbach (1986) is between 60 and 90 days of age. 1 except under duress, polecats will refuse to eat any prey whose smell they have not learned by that time. 39 as adults, they will actively search for prey with which they were familiarized during this critical time period and will ignore other prey or food smells.
this may explain why certain ferrets will eat only one type of diet and why kits exposed to only one brand of food at 60 to 90 days of age may be opposed to dietary changes later in life. it is therefore recommended that young kits be offered a variety of foods during their first 6 months of life in order to prevent dietary selectivity or olfactory imprinting. the sense of smell in ferrets is particularly keen. wild mustelidae hunt down their prey using their sense of smell to home in on the quarry. during exploratory behavior the ferret spends a great deal of time with its nose to the ground investigating its environment. objects placed directly in front of a ferret will be examined first by smelling, followed by visual or tactile inspection. polecats are a solitary species and leave marks throughout their home range by performing a repertoire of scent-marking actions that include wiping, body rubbing, and the anal drag (figures 4-3 and 4-4). 10 observations of ferrets in an outside enclosure revealed that anal drags were performed at latrines near den sites and at an equal frequency by males and females throughout the year. 10 mustelidae also use urine for scent-marking and produce skin oils that are profoundly affected by circulating hormone levels. hobs (male ferrets) in particular will produce intense seasonal skin oils that correspond to the increased testosterone levels associated with longer day lengths. ferret anal scent gland odors are sexually dimorphic, and studies have demonstrated that ferrets can use these variations as a communication tool. 11 ferrets can distinguish between male and female anal sac odors, among strange, familiar, and their own odors, and between fresh and 1-day-old odors. 11 these results are consistent with both a sex attraction role and a territorial defense role for anal sac odors. the anal drag.
the domestic ferret defines its territory by marking behavior such as backing into corners to defecate and following with the anal drag, as illustrated here. (illustration courtesy barb lynch.) different messages are conveyed by the various marks. kelliher and baum 22 showed that in the ferret, olfactory detection and processing of volatile odors from conspecifics is required for heterosexual mate choice. 22 males perform more body rubbing than females (jills), especially during the breeding season. anal drags leave an olfactory signature of anal sac secretion for intersexual and intrasexual communication. olfactory marking behavior also communicates territoriality and gives other ferrets knowledge of the marking ferret's sex and hormonal activity. wiping and rubbing actions release the ferret's general body odor and may act as a threat signal in agonistic encounters. 10 the response to olfactory stimuli and the scent-marking behavior of domestic ferrets is much less pronounced than that of their undomesticated counterparts. domestic ferrets retain the actions of marking that are so important to their wild relatives. 39 the ferret thrives in the company of other ferrets; readily sharing living quarters, hammocks in which they sleep, food bowls, and water bottles. despite this harmony, ferrets are still instinctively territorial and lay claim to smaller, albeit significant territories within their home environment. like the wild polecat, domestic ferrets back up and defecate on objects or certain areas (and some even anal drag after defecation) in order to mark their territory. the domestic ferret tends to choose corners in which to defecate that may represent territory perimeters. wiping behavior. domestic hobs possess preputial sebaceous glands that produce oils that they will wipe or mark on household items to communicate sexuality and territoriality. this behavior corresponds to the increased testosterone levels associated with longer day lengths.
(illustration courtesy barb lynch.) when it comes to the postdefecation anal drag, operators of ferret shelters will note that this behavior will increase in some ferrets when a new ferret is introduced to the household or when ferrets become more seasonally hormonal. 25 this innate behavior occurs even in ferrets that are surgically descented (anal sacculectomy), as the ferret is unaware of its missing anatomy. ferrets also possess perianal sebaceous glands that secrete oils used in scent-marking. the strength of the scent from these glands is reduced in neutered males. worth mentioning is the way in which ferrets use their sense of smell in meet-and-greet behavior. when ferrets are introduced they will often sniff each other's anal area and neck and shoulder region (figure 4 -5) . this behavior may give a domestic ferret information about the other ferret's sex and hormonal status. this activity may be the domestic ferret's equivalent of the behavior in the wild counterpart by which sexual receptivity is assessed. although quiet most of the time, domestic ferrets do make a variety of vocalizations with which they communicate. in order to determine the meaning of ferret sounds, shimbo 39 recorded waveforms and sound spectrographs of various domestic ferret vocalizations. interpretation of these meet-and-greet behavior. when ferrets are introduced, they will often sniff each other's anal area and the neck or shoulder region. this behavior may give the domestic ferret information about the other ferret's sex and hormonal status. (courtesy laura powers.) auditory studies led to several generalizations such as an increase in tonality on a basic signal indicates heightened excitement, a rising infl ection indicates urgency, and a rising pitch of a string of sounds indicates displeasure. 39 any one or more of these alterations in infl ection can be superimposed on any vocalization to alter its meaning. 
The following are descriptions, and the interpretive significance, of the most common ferret vocalizations as recognized by many ferret owners. Also known as chuckling or "the buck," the dook is the most commonly used ferret vocalization. This vocal signal can be low- or high-pitched and is usually strung together in a series of chortles or chucks in undulating pitches. The dook usually signifies happiness or excitement and is commonly expressed during play and exploratory behavior; the greater the excitement, the louder the intensity and volume. The ferret and most other Mustelidae use a hissing sound to convey anger and frustration, but it can also denote fear or be used as a warning signal. It can be a short burst that warns a playing partner, "hey, that hurt, back off a little," or it can serve as a fear response, forewarning that "my guard is up, be careful." Prolonged hissing usually indicates frustration. A high-pitched screech is used when a ferret is startled, frightened, or in pain. When cornered by another animal, ferrets may scream to startle their opponent and thereby gain escape. Prolonged screaming is an indication that something is seriously wrong and may occur when a ferret is in intense pain; such screaming has also been reported to occur during seizures. 6 All cases of continual or recurrent screaming warrant a medical workup. An unusual loud chirp may occur as a defensive vocalization when a ferret is frightened or very excited. Some ferrets bark when they are angry. It is usually easy to discern a happy, curious ferret vocalization from one indicating anger, fear, or extreme pain. Be aware that an apprehensive or distressed ferret may bite, so use appropriate caution with ferrets that are using these verbal signals. Ferrets also use body language and a variety of visual displays to communicate moods and feelings. They prefer to follow and attack prey moving at a velocity close to the escape speed of a mouse.
45 This may help to explain their fascination with bouncing balls, toys pulled along the ground in front of them, and, in general, anything that moves. During exploration the inquisitive ferret will periodically demonstrate scouting behavior in the form of erect or alert posturing. This attention response is similar to (and probably stems from) actions shown by the European polecat while investigating unfamiliar surroundings. During this response the neck is raised, the head is held at 90 degrees to the body, the ears are pricked, and the vibrissae are extended. 39 Piloerection in the form of a frizzed-out tail may be a sign of anger or excitement, either fearful or joyous (Figure 4-6). During a display of anger, the puffed tail is usually accompanied by an arched back and a vocal hiss or screech. If the display represents excitement and joy, the tail may fuzz out and flick back and forth. Piloerection of the tail may also be noted during an anaphylactic reaction, such as that seen with a vaccine reaction. Normal locomotion in a ferret consists of alternating movements of all four feet, although a ferret can be seen to hop or gallop with the rear legs when running or at play.

Figure caption (Figure 4-6): Bottle brush tail. Piloerection in the form of a frizzed-out tail may be a sign of anger or excitement, either fearful or joyous. During a display of anger the puffed tail is usually accompanied by an arched back and a vocal hiss or screech. If the display represents excitement and joy, the tail may fuzz out and flick back and forth. (Courtesy Lisa Leidig.)

Many repeatable locomotor patterns can be noted that tell us the ferret is a happy, playful pet; these activities have been described and nicknamed by ferret owners. 37 For example, the "dance of joy" or the "weasel war dance" is exhibited by the ferret that is happy and excited.
The animated ferret tries to go in several directions at once: dancing from side to side, hopping forward, twisting back, flipping and rolling on the floor, all at an energetic pace. There seems to be no reason for this dance other than pure joy and happiness. The "alligator roll" is a form of intense play or wrestling between two ferrets in which one ferret grabs the other by the back of the neck and flips him upside down. Some feel this is a way for one ferret to show dominance. 6 Because wild ferrets are solitary, any form of social hierarchy would be a reflection of domestication and the housing of multiple unrelated ferrets in close captivity. It is obvious that ferrets are energetic, fun-loving animals. As a result of this high energy, ferrets need ample play time (preferably up to 2 hours per day) and benefit greatly from environmental enrichment. In addition to the "dance of joy" and the "alligator roll" already discussed, play behavior may include other visual displays. During periods of intense play ferrets may suddenly stop, fall to the ground, and slump, with body flattened, eyes open, and back legs splayed. This usually indicates the ferret is worn out and is taking a short break. In a few minutes the ferret will usually engage the rear legs and inch forward by pushing with the hind feet only. Once rested, or if teased by a playmate into resuming the fun, the ferret will jump up and again engage in full-blown play behavior. This slumping may stem from the silent stalking of polecat predatory-attack behavior, in which the body is held close to the ground. The actual predatory attack, in which the ferret springs forward, may be elicited by any rapid movement, which initiates a preprogrammed electrical brain stimulation. Therefore further romping on the part of the domestic ferret play partner initiates a return-to-play assault by the "slumping" ferret.
Because of their high metabolic rate, short gastrointestinal tract, and gastrointestinal transit time of about 3 hours, ferrets defecate frequently and can mark the corners of their cages, much to the aggravation of the conscientious owner who keeps a clean litter box available at all times. It should be stressed that clean to the ferret often means unused, as many ferrets will avoid a litter box that has been soiled only once. Before defecating and urinating, ferrets will usually briefly explore their cage environment in order to find a suitable location in which to void. Most ferrets choose one or two corners within the cage as the favorite location. Once satisfied with the spot, the ferret will turn around, back into the corner, and, with back slightly arched and tail raised directly over the back, defecate using slight pulsing contractions of the abdomen. Ferrets do not bury their stool but will at times perform a postdefecation anal drag, in which they scoot their anus along the floor for a few seconds. When urinating, the ferret behaves similarly to find the appropriate site; it then squats with the rear legs spread slightly apart. The urination posture of males and females is similar, the only difference being that females squat slightly lower. Ferrets have an innate love of digging, and a clean litter box is a perfect setting for digging and play behavior, often resulting in an unused, tipped-over litter box. See Box 4-1 for litter box tips. It is not uncommon for ferret owners to report that their ferret licks or drinks its own or a cagemate's urine. Physical examinations and health workups, including complete blood count, chemistry panel, and urinalysis, are usually unremarkable. It is possible that this behavior stems from the behavior of polecat hobs, which sometimes groom themselves with their own urine to make themselves more desirable to jills. Ferrets usually reach sexual maturity at 8 to 12 months of age.
Most reproductive behavior in the pet ferret is suppressed because of surgical sterilization and exposure to artificial indoor lighting for consistent periods averaging 15 hours per day. Knowledge of normal reproductive behavior is important when interpreting certain ferret play and aggressive behaviors, as well as for understanding the behavioral and physiologic changes associated with adrenal disease. Researchers have shown that both estrogen and testosterone contribute to masculine sexual behavior in male ferrets 8 and female ferrets. 41 Ferret hormonal activity is strongly influenced by endogenous circadian rhythms, which persist under conditions of constant light and constant darkness. However, these circadian rhythms are usually influenced by external factors such as light, temperature, barometric pressure, and hormones. 20 Of these factors the most important is light, and ferret sexual behavior becomes more evident as natural day lengths increase. As day lengths increase, circulating melatonin levels diminish and hypothalamic gonadotropin-releasing hormone (GnRH) is released in a pulsatile fashion, in turn resulting in the release of pituitary luteinizing hormone (LH) and follicle-stimulating hormone (FSH), which stimulate the release of estrogen and testosterone from the gonads. 38 This results in an increase in sexual activity and interest. The onset of puberty in hobs is denoted by the development of male sexual behaviors, such as showing more interest in jills and the introduction of neck gripping and pelvic thrusting into their play behavior. If exposed only to natural lighting, the hob will become reproductively active a full 1 to 2 months before the jill. 18 A testosterone surge will result in reproductive behaviors associated with attracting the opposite sex and protecting territory.

Box 4-1 Litter Box Tips
- Spend some time observing your ferret's habits in the cage. When it backs into a cage corner to relieve itself, pick it up and place it in the corner of the litter box.
- Provide a large litter box that takes up most of the bottom of the cage. This is more likely to encourage use of the box.
- Punch holes in the litter box and wire it to the cage walls so that it can't be tipped over.
- Offer praise and food treats when your ferret uses the box.
- To discourage digging, use newspaper strips in the litter box and slowly add a little bit of litter. Over a week or more, gradually add more litter and less newspaper. Most ferrets learn not to play in the litter fairly quickly. Newspaper doesn't deter odors, so it needs to be changed often.
- Buy a ferret-friendly litter box with one low side and a guard on the higher sides to prevent the ferret from backing up far enough to miss the box.
- Clean soiled corners, inside or outside the cage, with an appropriate pet odor neutralizer such as Urine-Off or Eliminodor.
- Provide litter boxes in corners of rooms ferrets are allowed to explore. More than one litter box is ideal. If your ferret seems to prefer a certain corner, place the litter box there.
- Ferrets do not bury their stool as cats do; therefore only a shallow layer of litter covering the bottom of the litter box is needed. Avoid fine clumping litters, as they are messy and dusty, potentially resulting in respiratory problems.
- Recycled newspaper litter or plain clay litter are good choices. Avoid scented litters, as ferrets may avoid them.
- Change the litter box(es) often to encourage use.
- Most ferrets won't soil their beds or food bowls. Place bedding or food dishes in all non-litter-box corners of the cage. Bedding that has been slept in and retains the ferret's body scent works best.
- Before out-of-the-cage play, place your ferret in the cage's clean litter box. Continue to place it in the box until it urinates or defecates, then reward it with play.
Hobs in rut will be more aggressive and will scent-mark in order to get the message across to potential breeding partners that they are ready to mate. Male ferrets have preputial gland secretions that they wipe on objects by dragging their bellies across the ground. Perianal scent glands are also used for scent-marking, by dragging the anus or scooting across the ground (the anal drag). Numerous dermal sebaceous glands, most prominent at the nape of the neck, are used by rubbing and rolling onto inanimate objects the hob wishes to mark. Males have more sebaceous glands than females, and glandular production appears to be under androgenic control. 32 In a natural setting all these reproductive behaviors would allow multiple polecat hobs to stake their territory and fight off any potential competitive male suitors, so that by the time the jill becomes sexually receptive they can get down to the business of breeding. During the mount, the male grabs the nape of the jill's neck with his teeth and grips her body by wrapping his forelegs around her ribcage. Pelvic thrusts last variable lengths of time, up to 3 minutes. Between bursts of pelvic thrusts are periods of rest during which the male simply lies over the female and holds on with the neck grip. At the point of penetration the male will increase the arch of his back anteriorly, causing his foreleg grip to slip behind the female's rib cage. 30 Holding this position for a variable but usually prolonged period of time is the best indication of penetration, at which time pelvic thrusting ceases. Occasionally the male will tense his pelvis, causing the tail to rise for short periods of time. At this time the female will occasionally flinch or remain flaccid. 30 Variable mating times from 120 minutes to 3 hours have been reported, but in one study mating times recorded in 10 pairs of ferrets lasted from 34 to 172 minutes. 30 These prolonged intromissions appear necessary to ensure fertilization.
Whether this is to allow for increased sperm deposition as a result of the male's multiple ejaculations, or whether it is necessary to stimulate the LH surge and subsequent ovulation in the female, is open to debate. Neutered males with adrenal gland disease may display sexual behavior because of production of testosterone by the abnormal glands (see the discussion of adrenal disease). Behavioral changes associated with rising estrogen levels and puberty in the female ferret are less pronounced. Some jills may show evidence of being more excitable and nervous, whereas most show no behavioral changes at all. Wheel-running activity was shown to increase during estrus, with the number of wheel revolutions doubled or tripled compared with totals in ovariohysterectomized or anestrus ferrets. 14 With the onset of full estrus, food intake may decrease, and jills may sleep less and become irritable. 18 Before the onset of full estrus, jills will be unresponsive to the advances of a hob in rut. There will be a good bit of anal, genital, and neck sniffing, nose poking, and attempts by the male to grab the female by the neck, but the jill will ignore this behavioral foreplay or, when tired of it, hiss and nip or attack the male. Dramatic edematous vulvar swelling in response to estrogen secretion by the ovaries is a clear signal that full estrus has occurred. At this time the jill will demonstrate the above behaviors but with more noise and intensity. These reproductive behaviors are very similar to those in other mammalian species, with much sniffing, genitalia display, and play fighting. When ready to breed, the female becomes flaccid and submissive, and mounting by the hob is allowed. Being an induced ovulator, the jill will remain in estrus for extended periods if not bred.
If breeding does not occur, the vulvar tissue will remain swollen, and hyperestrinism can cause severe anemia that will not abate until ovariohysterectomy is performed or hormonal treatment is instituted. Adrenal disease may also cause a swollen vulva as a result of androgens being oversecreted by the adrenals. Remnants of ovarian tissue may also cause hyperestrinism. As a general rule, patterns of behavior and social relationships are developed through learning as well as heredity, and social animal groups are organized by social status, territoriality, and reproductive activities. The interplay of experience and innate* factors in the development of behavior is very subtle and can be difficult to separate. 16 European polecat kits are dependent on their mothers to bring them meat meals from the time they are weaned at 6 to 8 weeks until they begin to hunt on their own at 10 weeks of age. During this preweaning time kits have been observed to interact socially and play. 24 However, by the time they are 13 weeks old, an age when kits may leave their nests permanently and go out on their own, kits show various degrees of independence from one another. 24 Adult polecats are essentially solitary, with one study finding ferrets to share dens simultaneously with other ferrets in only 7.4% of 706 radio-tracking events. 36 Adult ferrets also demonstrate intrasexual territoriality, with dominant males showing more spatial overlap with females than with subordinate males. 29 The domesticated ferret, on the other hand, shows much more diurnal activity, and many can be kept in pairs or groups without conflict. The best explanation for the difference in socialization patterns is that familiarization and habituation* play a significant role in the ferret's social response to both man and conspecifics.

*Innate behaviors are those that do not seem to require specific experiences (learning) for their expression.
Familiarization in the form of imprinting may be involved, as young polecats removed from their mothers during this critical phase in their development (4 to 10 weeks) become imprinted on their human caretakers. Evidence to support this belief is the fact that young polecats follow their mother on foraging expeditions and that hand-reared ferrets readily follow a human being. It has also been shown that the presence of the mother appears to facilitate the development of fear of humans in the young. 34 In captivity, however, fear of humans does not develop in wild polecats if they are removed from their mother at any time before the second day after their eyes have opened (typically 28 to 34 days). 34 Socialized ferrets are also more likely to show habituation than isolated ferrets, 9 demonstrating that socialization and domestication go hand in hand. Pet ferrets acclimate to their environment and will rise to the occasion when given an opportunity to play, explore, or interact with others. In other words, they become diurnal as their periods of activity coincide with those of their human household. Most ferrets enter the pet trade between 6 and 10 weeks of age. In the United States most pet ferrets come from a few large commercial breeding farms and therefore are exposed to other ferrets and humans from the time their eyes open. From the above data, and from observing domestic ferrets and their obvious agreeable nature with both humans and cagemates, it seems safe to assume that ferrets, like dogs, have a critical period of socialization. This period occurs between the time their eyes open, at 4 weeks of age, and 10 weeks. Through their observations, most ferret researchers and many ferret owners believe that ferrets do not form any kind of social hierarchy and that positioning for dominance does not occur. Nevertheless, ferrets will fight occasionally, especially when exposed to an unfamiliar ferret.
Some ferret rescue workers have recommended placing Ferretone on the necks and scruffs of all ferrets being introduced to an unfamiliar ferret. Ferrets consistently like this oily supplement and will likely lick each other in an appropriate manner while being less likely to demonstrate aggression.

*Habituation is a decreased response to new objects and environments resulting from prolonged or repeated exposure.

Pet ferrets readily show affection for their human owners through gleeful greeting behavior and a willingness to shower owners with ferret kisses. Young ferrets, on the other hand, are not likely to enjoy quiet cuddle time; their exploratory behavior creates too strong an urge to get off an owner's lap and move on to investigate the environment around them. As ferrets mature, a combination of age, improved socialization, and a decrease in exploratory behavior results in a more staid ferret that enjoys periods of quiet snuggling and petting. Ferrets have been domesticated for over 2000 years; therefore it seems likely that, given the right environment, poorly socialized ferrets can become more affable and gregarious. This suggestion is supported by the fact that intact hobs kept together in a colony situation with minimal human handling can live in harmony outside the breeding season. Ferrets groom their fur through licking and gentle nibbling motions. They normally maintain a smooth and shiny hair coat as long as they are kept on a balanced diet made up primarily of high-quality animal protein and fat. Ferrets have also been known to groom other ferrets to which they are bonded. This grooming is usually around a cagemate's ears and head as the ferrets lie side by side. Normal ferret skin is smooth and pale, without evidence of flaking, scabs, or inflammation. A dry, dull fur coat and evidence of flaking may be a reflection of poor diet or low environmental humidity.
In the wild, ferrets spend a good part of the day in underground burrows, in which the humidity is high and the temperature a consistent 55°F (13°C). The dry warmth of many homes during the winter months may cause the skin to dehydrate, with subsequent flaking and itching. Pruritus may also be a sign of external parasites or adrenal disease. A ferret's hair coat has a thick, cream-colored undercoat covered by longer, coarse guard hairs. It is the color of these guard hairs that defines the various ferret coat colorations, from dark and light sable to cinnamon, silver, or white. Both intact and neutered ferrets undergo a hormonally influenced molt, usually twice a year, in which the hair coat thins in response to photoperiod. As daylight hours and environmental temperatures increase, corresponding with late spring in the northern hemisphere as well as the ferret breeding season, ferrets may lose most of their guard hairs over a period of several weeks. Ferrets can develop trichobezoars from grooming, and these can become large enough to cause gastric obstruction or irritation (Figure 4-7). Therefore the use of a hairball remedy in the form of a feline petroleum-based laxative is especially important during this seasonal molt. The skin of the ferret contains numerous sebaceous glands, the secretions of which give the ferret its characteristic musky odor. 32 These secretions are also strongly influenced by seasonal hormonal changes, especially in the male, and may give the coat a greasy feel and an obvious yellow to orange appearance, most noticeable over the dorsal shoulder area. In the northern hemisphere these secretions are observable in the late spring to early summer, corresponding with the ferret's natural breeding season. If this coat discoloration and increased odor are particularly evident and do not diminish with time, they may be signs of adrenal disease, especially in the male ferret.
If associated with adrenal disease, loss of guard hairs may occur concurrently, and areas of obvious alopecia may develop, as well as other systemic signs discussed elsewhere. During this time ferrets may also show increased scent-marking behavior and will rub their backs and shoulders along carpet and furniture, to the dismay of odor-conscious owners. Cage walls and bedding readily take on the yellow color and musky odor of these sebaceous secretions. The ferret is an obligate carnivore with a short intestinal tract that lacks a cecum and an ileocolic valve. The small intestine is approximately five times the length of the ferret's body, and the mean gastrointestinal transit time of food from stomach to rectum is 182 minutes. 3 This rapid transit time, along with the ferret's lack of intestinal brush border enzymes, especially lactase, contributes to inefficiency in absorption. As a result, ferrets are less able than cats to absorb sufficient calories from carbohydrates. To compensate for the inefficiency of its digestive tract, the ferret requires a concentrated diet, high in protein and fat and low in fiber. Ferrets snack and eat multiple small meals throughout the day, and unless regularly fed foods with a very high fat content they generally eat as much as they want without becoming obese. Ferrets normally increase food intake by approximately 30% in the winter and gain weight by depositing subcutaneous fat. This reverses as daylight lengthens in the spring. For maintenance, ferrets may consume 200 to 300 kcal/kg body weight daily. Daily food consumption averages 42 g (1.5 oz) and 49 g (1.7 oz) dry matter per kg body weight for male and female ferrets, respectively. 4 Ferrets are solitary feeders that, when allowed free access to food, will eat 9 or 10 meals per day, which is true of most species when food is available ad libitum.
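The per-kilogram maintenance figures above lend themselves to a quick worked calculation. The sketch below is a minimal illustration only: the 200 to 300 kcal/kg/day and 42/49 g dry matter/kg/day values come from the text, while the 1.0 kg example body weight and the function names are hypothetical.

```python
# Scale the maintenance figures quoted in the text by body weight.
# From the text: 200-300 kcal/kg/day for energy; average dry-matter intake
# of 42 g/kg/day for males and 49 g/kg/day for females.
# The 1.0 kg example weight is illustrative, not from the source.

def daily_energy_kcal(weight_kg):
    """Return the (low, high) daily maintenance energy range in kcal."""
    return (200 * weight_kg, 300 * weight_kg)

def daily_dry_matter_g(weight_kg, sex):
    """Approximate average daily dry-matter intake in grams."""
    per_kg = 42 if sex == "male" else 49
    return per_kg * weight_kg

print(daily_energy_kcal(1.0))             # (200.0, 300.0) kcal/day
print(daily_dry_matter_g(1.0, "female"))  # 49.0 g dry matter/day
```

For a typical 1.0 kg jill, this works out to roughly 200 to 300 kcal and about 49 g of dry matter per day, consistent with the averages quoted above.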
In laboratory studies in which ferrets had to perform a task (a bar press) to gain access to food, it was shown that meal frequency declined. 21 There was a corresponding increase in meal size, allowing the ferrets to maintain a relatively constant total daily food intake sufficient to maintain normal growth and body weight. These shifts in feeding patterns in response to the increased work needed to procure a meal are similar to those in other species and are consistent with an ecologic analysis of foraging behavior. 21 Generally, socially feeding animals increase procurement and consumption rates as food availability decreases, whereas solitary feeders, such as cats and ferrets, do not. 27 This study demonstrates that ferrets can be maintained on meals fed once or twice daily rather than in a free-feeding situation. Many ferret owners like to offer raisins or other simple carbohydrates as treats, but these high-sugar treats are difficult for the ferret's gastrointestinal tract to digest and may be contraindicated because of the prevalence of insulinoma in ferrets. Instead, small pieces of cooked chicken, Totally Ferret, or N-Bones treats may be offered. The predatory behavior of ferrets consists mainly of instinctive behavior patterns that are elicited by external stimuli. In all higher animals a sudden change or stimulation will usually elicit a movement toward the source, a response described as the orientation response. The learning of an orientational component plays an important role in the development of functional sequences of behavior. Eibl-Eibesfeldt observed the prey-catching techniques of polecats and found that the normal movements of pursuit, grasping the neck of the prey, shaking it, and turning it over on its back occur the first time an appropriate object is presented. 19 After several experiences the neck bite becomes properly oriented for quick killing of prey.
More recently, similar behavior was demonstrated in the black-footed ferret, in which maturation, experience, and greater environmental complexity (an enriched cage, including encouragement of food-searching behaviors) all increased the likelihood of the ferret's making a successful kill. 44 This behavior becomes important when the domestic ferret is kept in a household with other exotic pets that it may perceive as prey. It is then that the ferret may cross the line from play behavior to prey-catching behavior. Both behaviors can look similar initially, as ferrets at play also demonstrate neck biting, but if stimulated by a perceived prey species (a pet bird, lizard, or rodent) the ferret may instinctively go beyond play and inflict harmful or fatal bite wounds on the other pet. Therefore, ferrets should not be left unsupervised with other small exotic pets. Apfelbach showed that the time a ferret needs to catch and kill rats depends on the size of the rats relative to the ferret: killing success decreases with a relative increase in prey size. 2 This may explain why the domestic ferret tends to live harmoniously in a home with dogs and cats. These larger species do not stimulate instinctive prey-catching and killing behavior; instead, the response to the larger animal takes the form of the less intense, albeit similar, play behavior. A number of investigations have demonstrated the existence in mammals of behavior not motivated by fear, thirst, or hunger and unrelated to general activity level. These have led to the postulation of another drive, the exploratory drive. 15 Characteristically, such behavior is aroused by novel external stimulation. It involves either locomotor activity or manipulation of objects, and it declines as a function of time.
15 In general, higher animals will approach and examine strange objects with whatever sensory equipment is available to them, and in a strange environment they usually move around and examine all of their surroundings. When an animal is exposed to a novel object or environment, it will first familiarize itself with the new stimulus or situation. With a strange object, exploration must precede play. As familiarity increases, exploratory behavior decreases, and the animal's curiosity about the novelty may with time lead to play behavior. In other new situations, fear may lead to rejection instead of exploration. Whether fear or curiosity (and subsequent play behavior) is elicited depends on various factors, including the physiologic state of the individual animal and the magnitude, intensity, or strangeness of the eliciting stimulus. In general, a small change in the environment elicits investigation, whereas a major change may elicit fear. Degree of domestication can also affect exploration behavior. In his work identifying behavioral differences between the domesticated ferret and its wild counterpart, the European polecat, Poole (1972) showed that wild ferrets are less likely to examine, and more likely to avoid, strange objects than tame ferrets. 34 In this study, attention responsiveness to new auditory stimuli was measured and exploration of new surroundings was observed. The attention response is essentially a method of scanning for stimuli. The European polecat shows extreme caution in exploring an unfamiliar environment; it takes frequent cover, uses definite pathways in the immediate vicinity of its den, and regularly returns to its home area after making forays into unfamiliar territory. The polecat also shows a more rapidly diminishing attention response to auditory stimuli. The domestic ferret, on the other hand, can be moved to a strange cage or placed in an unfamiliar area without showing signs of fear or disorientation.
the ferret also shows persistent response to repeated auditory stimuli, a feature that might also be related to the reactivity typical of a juvenile animal. these results again appear to support lorenz's view that domesticated animals show more juvenile behavior as compared with their wild counterparts. 19 ehrlich's work with the black-footed ferret 15 demonstrated similar findings: increased handling (equivalent to a greater degree of domestication) leads to increased exploration behavior. those findings may explain why today's domestic ferret shows an intense curiosity and little fear of its surroundings. when allowed to roam, the domestic ferret shows fearless exploratory behavior: the domestication process has been thorough at removing the ferret's fear response when it comes to exploratory behavior, and the ferret's inquisitive nature and love of exploration are boundless, sometimes to its detriment. as a result of being continuously handled and carried securely without being dropped, or perhaps because of its reportedly poor eyesight, domestic ferrets show little fear of heights. this is opposite to what shimbo found in her personal experience with undomesticated polecats, which appeared very frightened and uneasy when exposed to heights. 39 this apparent lack of fear of heights in the domestic ferret can result in injury or death if it climbs out of an upstairs or apartment window. an open door may also be approached with fearless curiosity, and the urge to explore can lead the ferret to the outdoors, where it is limited in its ability to find food and to protect itself from predators or from extremes of weather. reflecting on the 1953 lorenz hypothesis that the behavior of domestic animals resembles that of juvenile individuals of their wild counterparts, 19 it can certainly be said that this holds true when it comes to play behavior.
in general, motor patterns used in intraspecies play behavior are characterized by actions that occur frequently in other functional contexts (e.g., aggression, sexual, and predatory behaviors). in 1966 poole observed polecats (putorius putorius) at play and demonstrated the incomplete sequence of conflict behavior. four of the agonistic patterns were absent from intense play behavior: two extreme forms of attack ("sustained neck biting" and "sideways attack") and two extreme fear patterns ("defensive threat" and "screaming"). 19 the play behavior imitated patterns of aggression, but in a less-serious and less-threatening manner. adolescent ferrets' play behavior also imitates sexual behavior, with juvenile male ferrets exhibiting higher levels of neck biting and "stand-over" behavior than females (figure 4-8, mounting play behavior; courtesy peter fisher). sex differences in the expression of prepubertal play behavior of ferrets apparently result from the differential exposure of males and females to androgen during the postnatal period. 42 the same holds true for the domestic ferret. the ferret demonstrates an obvious love of play in a variety of forms, and we can imagine their merriment stemming from predatory, sexual, exploratory, and digging behaviors. the typical sequence for two ferrets at play would begin with the chase, followed by an exaggerated approach or ambush, veering off, and reciprocal chasing, followed by mounting, rolling, and wrestling with inhibited neck biting (figure 4-9).
these mock sexual and predatory behaviors are accompanied by vocalizations that signal both excitement (dooking) and anger (hissing). the solitary ferret at play also demonstrates various behaviors that stem from normal behaviors seen in its wild counterpart. predators stalk and chase quarry. observe the ferret playing with a hard rubber ball or squeaky toy, and you will see the same type of behaviors. hard rubber balls, such as super balls, really stimulate hunt and capture behavior. a rolling or bouncing ball captivates the ferret, which immediately begins the hunt. the ball is aggressively pursued until the ferret "captures" it by grabbing it, then bites hard and shakes it as if it were prey. keep in mind that ferrets love soft-rubber items and will readily ingest torn pieces of soft rubber, creating potential for gastrointestinal foreign bodies (figure 4-10). therefore any ferret play ball needs to be hard enough and large enough that the ferret cannot readily tear off chunks that it might ingest (figure 4-11). playing ferrets also love to dig. this digging behavior comes naturally, as the sharp claws and streamlined body of the polecat were designed for digging and tunneling deep underground in pursuit of game or for making below-ground burrows. this ancestral behavior may explain why ferrets love to dig at the carpet, the floor, and their litter box and enjoy digging in the soil of potted plants.
some ferret owners satisfy this desire to dig by allowing the ferrets to dig away in a large plastic play box (such as a large cat litter box) filled two-thirds full with rice or potting soil. a neater option is to provide the ferret with tubing to explore. both flexible (ribbed plastic similar to clothes dryer vent tubing) and rigid (pvc pipe) tubing make for great exploratory amusements. because of its keen olfactory sense the ferret explores with its nose. the ferret can be observed searching back and forth across a room with its nose to the ground. when it finds an object of interest, often the ferret will drag it off to its "lair," also known as "ferreting it away" (figure 4-12). this ferret burrow is usually the most inaccessible location the ferret can find: a small cubby or a hole discovered under the kitchen cabinets or in the back of a closet. the ferret is instinctively seeking out a tight, dark, enclosed space, which mimics its native ancestor's underground burrow. it is amazing to see the variety of objects the ferret has "stolen and stashed." this hoarding behavior probably stems, once again, from behaviors seen in polecats, which have a very high metabolic rate and energy need; therefore having a readily available food supply is a must. instead of toys and objects of interest, polecats would build a cache of leftover food items and prey on which to feed while resting in their burrows. studies suggest that environmental impoverishment, whether in the form of physical or social restriction or limitation of play objects, has wide-ranging effects on the overall well-being of ferrets.
chivers and einon 9 found that some of the isolation-induced effects on behavior seen in rats also occurred in ferrets, with deprivation of rough-and-tumble social play causing hyperactivity that persisted into adulthood. 23 work done by korhonen (1992) showed that overall health, reflected by optimum weight and fur coat quality, occurred when ferrets were provided with increased housing floor space and compatible cagemates and when offered balls and bite cups with which to play. 23 although adult ferrets may appear perfectly content sleeping in their hammocks 20 hours a day, this certainly is not mentally and physically stimulating. unsupervised free time in a "ferret proof" room is always recommended. keep in mind that ferrets love human interaction, like to explore new places and objects, have a keen olfactory sense, and enjoy digging. a ferret that jumps back and forth in front of you and nips at your feet is telling you it wants to play. simply getting down on your hands and knees and chasing a ferret will stimulate more ferret dancing and happy vocalizations, chuckling, or dooking. if the ferret is not prone to biting, try playing tug-of-war games with an old washcloth or favorite plush toy (figure 4-13). if the ferret is other-ferret friendly, try taking it to a fellow ferret fancier's ferret-proofed home for some exposure to a whole new environment complete with sights, smells, and ferret friends. transmissible diseases, especially the gastrointestinal infection likely caused by a coronavirus and commonly referred to as epizootic catarrhal enteritis (ece), should be considered when allowing initial contact between ferrets. digging can be encouraged by hiding toys in a children's sandbox or in a litter box (figure 4-14). remember, however, to never leave ferrets unsupervised outdoors, as they tend to wander and may get lost. they are also relatively intolerant of extreme heat or cold.
inactive ferrets are prone to weight gain and its subsequent effects on overall health. constant captivity in an enclosed space may also lead to behavioral problems such as biting and conspecific aggression. it is the ferret owner's responsibility to ensure that this active, energetic pet's mental, physical, and sensory well-being is routinely stimulated so that it may lead a full and robust life. not all activities require human interaction, nor do they require a big monetary investment. many just require a little time, creativity, and imagination. box 4-2 describes some activities created by ferret owners: suggestions for inexpensively creating a fun and stimulating ferret household environment. (figure 4-14, environmental enrichment: filling a large plastic litter box with recycled newspaper pellets or rice and hiding objects for the ferret to find helps satisfy their innate digging behavior. ferrets that enjoy playing in their water bowls will also enjoy recreation in a small wading pool with added ping-pong balls coated with ferretone. courtesy peter fisher.) the primary function of aggressive behavior between conspecifics is to determine and maintain rank or territory. aggressive actions are among the most prominent social activities of animals, with patterns of aggressive behavior differing from species to species. although such actions often appear antisocial, the fighting, bluffing, and threatening serve to promote survival of the species. it appears that a species' disposition to aggression is innate, but many details of the aggressive behavior are learned or perfected through experience. 12 in most animals early social experience greatly affects subsequent aggressive behavior. true fighting behavior between domestic ferrets is similar to that described by poole in his study of european polecats 33: an incident during which each animal attempted to bite the back of its opponent's neck with a sustained, immobilizing hold.
successful bites (i.e., those during which the opponent was unable to break free) were sometimes accompanied by shaking or dragging of the immobilized animal. when the attacked animal was able to break free, it sometimes displayed evidence of intimidation, including screaming, defensive biting, hissing, fleeing, urinating, or defecating. however, serious injury did not usually occur. 40 staton and crowell-davis 40 reported on the results of an experimental protocol to evaluate the effects of four factors on the fighting behavior between pairs of domestic ferrets: familiarity (pairings of cagemates versus strangers); time of year (pairings during winter versus spring); sex (male-male, male-female, and female-female pairings); and neutering status (intact-intact, neutered-intact, and neutered-neutered pairings). 40

box 4-2: inexpensive activities for a fun and stimulating ferret household environment
• use food as treats, with the following caveats. keep in mind that ferrets are strict carnivores with high protein requirements. they use fats more than carbohydrates for energy needs. however, excessive high-fat treats will result in the ferret's caloric needs being met with minimal food intake, and as a result protein requirements may not be met. so if treating with high-fat oils such as ferretone, remember to use them in small quantities.
  - try rubbing a little ferretone on ping-pong balls and floating them in a shallow pan of water.
  - place a few pieces of food or a desirable treat in an egg carton, tape the lid shut, and cut a small hole in the top. make the ferret work for the treat. the same idea can be used with a milk carton.
  - place a few pieces of good-quality ferret food (try a different variety from its normal everyday food) in an 8-ounce (237 ml) plastic soft-drink bottle, leave the top off, and let the ferret roll and play with it, trying to make the treats come out.
• create handmade toys and amusement centers.
  - make tunnels from pvc pipe or empty oatmeal containers with the bottoms cut out and taped end to end.
  - tape cardboard boxes together, and cut holes in various locations for exploration.
  - glue a small bell inside a plastic easter egg.
  - make a ferret maze out of a large appliance box. fill the box with scrap cardboard rolled and taped into round or triangular tubes. hide food items at various spots within the box.
• fill a box with potting soil, rice, hay, plastic balls, or crumpled paper balls, and let the ferret fulfill its instinctive digging needs.
• use old towels to give a ferret a "magic carpet ride," or just twirl the towel around and over the ferret.
• use dryer hose to satisfy instinctive tunneling behavior. some owners like to stretch the hose out, using a beanbag chair to hold one end in place.
• obtain a bottle of deer or boar scent from the hunting section of a sporting goods store, and rub a drop or two on a favorite toy.
• tie a plastic or ping-pong ball to a piece of sturdy string and hang it from the ceiling to 2 in (5 cm) above the ground.
• put empty paper grocery bags on the floor. some of the bags can be filled with crumpled paper, ping-pong balls, or food treats.

awareness of factors that might affect the potential for aggression between unfamiliar ferrets may predict the likelihood of a fight. results of the staton and crowell-davis study suggest that familiarity, sex, and neutering status are all important determinants of aggression between ferrets. sixty percent of the attempts at pairing strangers resulted in combative behavior, whereas none of the familiar cagemates fought. based on previous information on aggressive behavior in intact male ferrets 33 and that shown in studies of other species, it was thought that intact male ferrets would, in general, be more aggressive than neutered animals. however, the study showed that intact male ferrets were not indiscriminately aggressive and that pairs of neutered males were just as likely as pairs of intact males to fight.
in addition, the study showed that females were in general not less aggressive than males, with pairings of unfamiliar neutered female ferrets likely to result in aggression. the study also showed that if unfamiliar neutered ferrets are introduced, the pairing of two males or of a male and female would result in the lowest levels of aggression. it is also interesting to note that time of year (winter versus spring) did not affect the incidence of fighting behavior, even for intact animals, in which circulating hormone concentrations are likely to change with seasons. this may have been a result of the fact that animals in this study were housed under artificial lighting and that this amount of light was not altered to mimic the increase in daylight that stimulates the breeding season in ferrets. the fact that 60% of unfamiliar pairings did fight illustrates the difficulties faced by pet owners attempting to introduce a new ferret into the household or in ferret shelters in which new additions and limited space result in frequent pairings of strange ferrets. studies with kangaroo rats, pigs, and mice have shown that olfactory exposure, visual exposure, and sharing a common substrate may all play a key role in establishing familiarity between strangers and thus reduce fighting behavior. 40 a second part of the staton and crowell-davis study showed that this was not the case with ferrets. housing strange ferrets next to each other for 2 weeks, where they shared visual and olfactory stimuli, did not reduce fighting when the ferrets were later introduced. however, ferret owners claim that housing a new ferret next to an existing ferret or ferrets for a period of time before introduction does help. if introducing a female to a bonded male-female pair, experienced ferret owners advocate housing the new female with the male for a few days, as they are more likely to get along.
if they get along, then putting all three together inside the original cage with additional sleeping arrangements can be tried. make sure close supervision is provided during these introductory periods. in addition, ferret shelter managers have found that a 4- to 5-day introductory period works best when familiarizing a new ferret to a multiple-ferret household. a small open room with a minimum of hiding areas and one that does not house other ferrets works best as a neutral meeting site. the new ferret member is introduced to the most congenial ferret in the household, usually an older, easygoing male, for 30 minutes of chaperoned meet-and-greet time. if the ferrets seem to get along, then the time together is extended. once the new ferret has accepted the introductory ferret, other ferrets are slowly introduced to see if the new ferret is capable of cohabiting with the group. ferrets are individuals, and this bonding procedure does not always work; some ferrets just prefer living alone. ferrets use their mouths in many behaviors, including play, attention seeking, defense, "hunting," fear, and response to pain. watch young kits wrestle and play and you will see them bite each other's necks and drag each other around while grasping any loose skin with their mouths. mother ferrets use their mouths to pick up and move their kits if they have wandered too far and to discipline them with gentle nips. ferrets playing with a toy will usually pick it up, grab it, and drag it around with their mouths. inappropriate nipping or biting may occur when ferrets perceive people as playmates, as an attention-getting device, or when ferrets are in pain or hungry. depending on the message they are trying to convey, ferrets may give a friendly nip or grab a human's hand or foot, bite down, hold on, and shake their heads. this is how they would respond to another ferret, whose naturally thick skin and fur would lessen the intensity of the bite.
to humans, however, the bite can be both painful and alarming. in ferrets with a history of consistent biting behavior, it is ideal to try to determine the cause. this begins with collecting a behavioral history and, in problem cases, having the owners fill out a behavioral questionnaire (box 4-3).

box 4-3: behavioral questionnaire for the biting ferret
• how long have you owned your ferret?
• are there certain situations that initiate the biting behavior?
• is this your first pet ferret?
• describe your ferret's environment.
• is there a new pet in the household?
• has the amount of "free time" and exercise your ferret gets recently changed?
• have there been any lifestyle changes that may be reflected in the ferret's behavior?
• have you experienced a recent move or change in living arrangements?
• have any new ferrets been brought into the household?
• do young children routinely handle your ferret?
• what do you do when your ferret bites?
• what forms of behavior modification have you tried? what is the response?
• are there situations or objects that stimulate biting behavior?
• do you smoke?
• do you apply hand cream to which the ferret may be attracted?

once the type(s) of aggression and most probable causes for the aggression have been identified, the goal is to avoid situations that elicit the biting behavior and to diminish biting behavior if it occurs. box 4-4 summarizes the various causes of ferret aggression and potential situations in which biting behavior may occur. recently purchased or adopted ferrets may be especially problematic, as they come with limited socialization, training, or handling. these (often young) ferrets may bite as a fear response to sudden movements and noise. this is probably a reflex response that stems from the ferret's wild counterparts, which were preyed on by larger mammals or birds of prey. if frightened or startled, the ferret may show a defense reaction much like the frightened submissive dog: it will arch the back and fluff out the tail and body fur (piloerection) to look larger and stronger, open the mouth in a threatening way, and hiss or screech in order to frighten the perceived attacker or alarm other ferrets in the area. if not descented, the ferret may empty or express its anal sacs, again much as a frightened dog would do. depending on the level of socialization of the ferret, this fear response may also lead to biting. it is best to try to prevent this defense response by letting the fearful ferret know your whereabouts and intentions when handling. to ensure that the ferret will not be startled, make noise outside the cage or rattle the cage door so it is aware of your presence, and when approaching talk in a soothing manner. if the ferret continues to appear fearful, give it time to adjust to your presence before handling, then use a towel to pick it up while talking to it quietly. if a ferret is possessive of a particular toy, take away the guarded object, and discourage the ferret from obtaining objects that are off limits. if the problem persists, try redirecting the ferret's attention with an alternative activity such as ball chasing, or try desensitizing the possessive ferret with repeated relaxed exposures to the object or toy and reward with gentle praise or a treat when the ferret is not possessive. be careful not to reinforce this behavior by offering another, possibly more acceptable toy while the ferret is nipping. with fear-related or maternal aggression, avoid circumstances that might elicit aggression. it is important not to startle or grab fearful ferrets, especially when they are sleeping, and to respect a nursing jill's privacy and innate protective behavior. ferrets are rarely fearful once they are awake and aware of your presence; however, if they continue to be cautious and nippy, try to replace the fear response with a counteraction such as anticipation of play or food.
extra care should be taken with deaf ferrets, which may startle more readily. congenital deafness occurs in ferrets, and anecdotal reports indicate an increased incidence of deafness in albino and/or black-eyed ferrets. if a ferret stalks and nips at young children, it may be best to change your ferret's free time to a time when the child is napping or away from home. owners should be made aware that other household pets (e.g., birds, rabbits, or rodents) may be perceived as prey, and unsupervised contact time with such a pet should be discouraged. it is also a good idea to put the pet dog or cat in another room during the ferret's free time until the behavioral interaction of these pets is known.

box 4-4: causes of ferret aggression
• play aggression: the most common underlying cause for biting in ferrets. this is a normal behavior, especially in young ferrets, that needs to be mitigated.
• possessive aggression: aggressive behavior directed at humans or other pets that approach the ferret when it is in possession of something it values, usually a favorite toy. may be exacerbated by restricting the ferret's free time and space.
• fear-related aggression: occurs when the ferret is startled or poorly socialized and not used to handling. this type of aggression can occur as a result of punishment, traumatic experiences, or genetic factors.
• predatory aggression: a normal innate behavior of the polecat from which the ferret was domesticated. when directed toward people or other pets, it results in a behavioral problem that may involve stalking, chasing, grasping, and biting.
• redirected aggression: occurs when the harmful behavior is directed toward a person or pet that is not the original stimulus for the aggressive behavior, as when a person or pet interferes with two ferrets that are playing hard.
• maternal aggression: aggressive behavior directed toward humans or other pets that approach a jill with her kits.
• pain-induced or irritable aggression: caused by an underlying medical condition, such as gastrointestinal foreign body, hairball, or gastric ulcers; inflammatory bowel disease; hormonal changes associated with adrenal disease; or any painful disease process.
• sexual aggression: may explain conspecific ferret aggression in which mating behavior may be accompanied by intense biting.

overly exuberant play behavior or play aggression is the most common situation in which ferret frolicking can lead to biting. gentle nips are normal and natural to ferrets, which often bite at other ferrets to encourage play. therefore it is not uncommon for ferrets to nip their owners gently in order to gain their attention. play biting in ferrets is similar to the same misbehavior in puppies. box 4-5 outlines ideas for controlling this behavior problem.

box 4-5: preventing and managing ferret play biting behavior
• avoid aggressive play (pushing, rolling, wrestling).
• avoid tug-of-war games.
• keep fingers curled when playing with an easily excited ferret.
• use time outs: 10 to 15 minutes in a small room or pet carrier (do not use the ferret's cage for time outs) with no toys, towels, or anything else with which the ferret could play.
• when the ferret bites, make a high-pitched sound (yip, ouch). this will mimic the sound another ferret makes when play behavior gets out of hand.
• when the ferret starts biting during play, redirect it to appropriate toys such as hard rubber balls or plush toys.
• try a gentle scruffing, and wiggle the ferret in the air while making a hissing sound (this is how a mother ferret disciplines a kit), but be mindful that the ferret may get even more excited.
• display gentle but firm dominance over the ferret by holding it on its back for a few minutes.
• wrap the ferret snugly in a towel or baby blanket so that it cannot get out and bite. walk around while gently cuddling and talking to the ferret, petting it when it is calm. offer a food treat when the ferret remains calm.

ferret play can escalate to the point where its frenzied commotion borders on aggressive behavior. one ferret may arch its neck and back and shove itself sidelong into the other in a very characteristic way. this fake challenge is an example of subdued or "domesticated" aggressive behavior turned to play. nose poking, ramming another ferret with mouth open, and defensive threats in which the ferret stands very erect with back arched and tail possibly brushed up are other examples of playful behaviors that originate in polecat aggressive behavior. the magnitude of the actions and vocalizations differentiates play from aggressive behavior. young kits at play are particularly mouthy, as seen with continual biting, mouthing, tugging, and dragging each other. these behaviors are believed to reflect innate dominance behavior learning. 6 similar to other mammals, a mother ferret will tug and pull at her young as a means of discipline and control. if kits are sold to pet retailers at a young age (some kits are placed in pet shops as early as 6 weeks of age), mother-kit socialization patterns may be disrupted. this lack of maternal nurturing can lead to overly stormy play behavior, which may be perceived as aggression by new owners. keep in mind that the solitary pet ferret may perceive its human owner as its playmate, and nips, pokes, and attempted drags of arms or feet may be directed at him or her. frequent handling in a quiet, subdued environment, along with behavior modification in the form of positive reinforcements, time-outs, and counterconditioning, will all go a long way toward properly socializing these belligerent kits. remember to never incorporate physical punishment into your behavioral modification routine.
this may cause a frightened or excited ferret to become even more frenzied, resulting in more intense and perhaps vicious biting. also, ferrets may be sensitive to certain odors. ferrets may react to sweet-smelling hand lotion or soaps by licking or nipping the wearer. some ferrets love the smell or taste of nicotine and may react by biting the smoker's fingers. finally, an adult ferret that suddenly becomes more aggressive and nippy should be assessed for underlying health issues such as pain or hormonal imbalances associated with adrenal disease. most wild mustelidae are considered nocturnal, but wild polecats have been observed hunting during the day. sleeping habits probably reflect habitat, territorial competition, and availability of food. under laboratory conditions, ferrets spend over 60% of the time sleeping, with approximately 40% of total sleep time in rapid eye movement (rem) sleep. 26 this large amount of rem sleep is achieved by having a high number of rem sleep episodes rather than longer rem periods. 26 domestic ferrets show diurnal activity in captivity and normally sleep from 12 to 16 hours in a typical day, 6 with varying sleeping patterns. as a general rule, older ferrets demonstrate shorter, more frequent periods of activity and spend a greater part of their day sleeping. younger ferrets, on the other hand, tend to display longer periods of activity interspersed with sleep. regardless of age, the duration and timing of active wakefulness reflect the owner's schedule and how often the ferrets are given the opportunity to interact. most ferrets are ready to explore and play at any time; the duration of these activities is a reflection of age. domestic ferrets often sleep very soundly, during which time respirations and heart rate decrease. the depth of sleep is so profound that many ferret owners mistake this deep phase of sleep for severe illness or death.
ferrets, especially older ferrets, can take several moments to awaken from sleep even with vigilant attempts at arousal. ferret owners need to be aware of this and be patient in awakening their ferret with gentle prodding and soothing vocalizations. if deep sleep behavior becomes increasingly pronounced in duration and depth, clinical evaluation for illness, particularly hypoglycemia associated with insulinoma, is warranted. 6 ferrets sleep in a variety of positions, and bonded pairs or groups will pile on one another to sleep (figure 4-15). ferrets may sleep curled up like dogs, on their backs with all four legs sprawled out, or even hanging upside down halfway out of their hammocks. quiet respirations are usually audible, and periodic soft whimpering sounds may be heard from the sleeping ferret. yawning is a normal behavior of most ferrets and is usually not of clinical concern. ferrets just waking up from a nap will begin their wakefulness with a stretch and a yawn. it is interesting to note that scruffing the ferret (restraining the ferret by holding the skin at the nape of the neck) often elicits a yawning reflex. this may facilitate a brief oral examination. the action of scruffing causes a relaxation response and is used as a form of restraint when necessary to calm an excited ferret. the relaxation elicited by scruffing is usually consistent and is similar to the method used by the jill when disciplining young kits or moving them from one location to another. 6 learning to understand the behavior of animals is a very important aspect of diagnosis in veterinary medicine. the nonemergency physical examination begins with hands-off observation of the animal for any sensory signals that give an impression of overall health status. a few minutes of observing the ferret for behavioral signs of health or illness can disclose valuable information about overall patient well-being.
a healthy ferret is alert and curious about its surroundings, demonstrating attentive and exploratory behaviors, and has bright, clear eyes and a smooth, shiny hair coat. before initiating the physical examination in any ferret, healthy or sick, it is also important to observe for temperament: behavioral signs of friendliness, fear, or potential aggression. a ferret that leans forward with interest, using its sense of smell to explore an outstretched hand, is usually a normal, friendly, inquisitive ferret. poorly socialized ferrets may show signs of fear such as backing up with the ears laid back and flat, and some may even give a vocal hiss. the aggressive ferret may signal its displeasure by trying to get away and/or hissing vocally, but it will not give other warning signs such as a snarl or showing of the teeth; it will bite without warning. this tends to be a fairly intense bite that breaks the skin, and the ferret may hold on despite attempts by the holder to break free. during quiet observation the clinician can take the patient history. listen to the client for behavioral clues that might aid in making a differential diagnosis. if the ferret is just lying on the examination table with a dull look in its eyes, the clinician immediately gets the impression that this ferret does not feel well. pawing at the mouth or salivating may be a sign of nausea potentially caused by gastrointestinal discomfort secondary to gastric obstruction, gastric ulcers, or hypoglycemia secondary to insulinoma. if clients state that their normally passive and friendly ferret has become intermittently aggressive toward cagemates, adrenal disease or pain should be ruled out. if the owner reports lethargy and difficulty in arousing the ferret from sleep, then hypoglycemia resulting from insulinoma should be considered.
if the owner notices that the ferret has been standing in a hunched position with an arched back and wiggling its ears while grinding its teeth, then abdominal pain with secondary bruxism should be ruled out. pollakiuria and stranguria are abnormal urinary behaviors and common signs of cystitis or prostatitis, both of which may occur secondary to adrenal disease. male ferrets with urethral calculi, severe prostatitis, or periprostatic cysts associated with adrenal disease may become obstructed and be unable to urinate. these ferrets will usually display repeated attempts to urinate, with urgency demonstrated by intense arching of the back, straining, and evidence of abdominal pain with or without vocalization. observe the respirations to see if the ferret is showing any signs of increased respiratory rate and distress. scratching may indicate external parasites or underlying adrenal disease. the ferret's normal physical changes in response to seasonal changes and photoperiod, such as weight gain in the winter and weight loss in late spring, as well as seasonal shedding patterns, need to be taken into consideration if the client is concerned about weight loss or a sudden onset of increased shedding in an otherwise healthy ferret. an understanding of ferret behaviors, both normal and abnormal, therefore serves as a great aid in assessing ferret health. hormonal abnormalities including elevations in plasma estrogens and androgens can occur secondary to adrenal disease. these imbalances may lead to increased sexual behavior even in neutered and spayed ferrets. these behaviors include neck gripping, mounting, and pelvic thrusting, which may be interpreted by owners of pet ferrets as an aggressive behavioral change for which they will seek veterinary counseling (figure 4-16). as a result, any healthy ferret that is presented because of a recent onset of conspecific aggressive or sexual behavior should be assessed for adrenal disease.
other signs of adrenal disease include bilaterally symmetric alopecia (usually beginning on the tail and then extending up over the dorsum), pruritus, vulvar swelling in ovariohysterectomized female ferrets, and prostatic enlargement and cysts in neutered male ferrets (figure 4-17). another less commonly reported behavioral change is the increased mothering behavior of jills associated with increased circulating levels of progesterone secondary to adrenal disease. for example, we have seen a jill with adrenal disease that showed nesting behavior by taking favorite stuffed animals and mothering them. the underlying cause(s) of these behavioral manifestations may be clarified with a review of ferret adrenal disease physiology. in the intact ferret, gonadal estradiol or testosterone exerts a negative feedback on the hypothalamus and pituitary gland, thereby preventing excessive secretion of gnrh, lh, and fsh. it has been shown that lack of negative gonadal hormonal feedback on hypothalamic gnrh in neutered ferrets results in persistently elevated gonadotropic lh, which may induce nonneoplastic mounting behavior and neoplastic adrenocortical enlargement. 38 the ensuing hyperadrenocorticism may result in increases in plasma levels of estradiol, androstenedione, 17-alpha-hydroxyprogesterone, and dehydroepiandrosterone sulfate, which result in physical and behavioral changes dominated by features consistent with excessive production of these sex hormones. 38
(figure 4-16 caption: ferret hyperadrenocorticism results in increases in plasma levels of one or more of the following sex steroids: estradiol, androstenedione, 17-alpha-hydroxyprogesterone, and dehydroepiandrosterone sulfate (dheas). 38 this hormonal imbalance may lead to increased sexual behavior including neck gripping, mounting, and pelvic thrusting, which may be interpreted by owners of pet ferrets as an aggressive behavioral change for which they will seek veterinary counseling. courtesy peter fisher.)
(figure 4-17 caption: pruritus. pruritus can be a behavioral sign of external parasites or adrenal disease. some ferrets with adrenal disease may manifest itching behavior as the only outward sign of this common endocrine disease. courtesy peter fisher.)
a diagnosis of adrenal disease based on history and clinical signs can be more definitively confirmed with ultrasonography of the adrenal glands and measurement of plasma levels of androstenedione, 17-alpha-hydroxyprogesterone, and estradiol (clinical endocrinology service, university of tennessee). insulin-secreting pancreatic islet cell tumors are among the most common neoplastic diseases affecting ferrets. synonyms include functional islet cell tumor, pancreatic β-cell tumor, pancreatic endocrine tumor, and insulinoma. the disease affects both male and female ferrets between the ages of 2 and 8 years but is most commonly diagnosed in ferrets 4 to 5 years of age. on histopathologic examination beta cell carcinoma is most often found, sometimes in combination with beta cell adenoma or hyperplasia. continuous hyperinsulinemia sustains the metabolic effect of insulin; therefore hepatic gluconeogenesis and glycogenolysis are inhibited, and peripheral uptake of glucose by tissue cells is increased. 35 as the disease progresses, hypoglycemia ensues. insulinoma is another endocrine disease in which a history of certain behavioral changes will help the clinician narrow the differential diagnosis. the rate of development, magnitude, and duration of hypoglycemia are factors determining the severity of clinical signs. many ferrets are presented with a history of behavioral changes including intermittent weakness and lethargy, a decrease in play and exploratory behavior, and an increase in length and depth of sleep. ferret owners may report that the pet is no longer animated and seems dull and confused.
signs usually progress slowly over a period of weeks to months; many owners are slow to pick up on the changes in the pet's behavior or attribute the quiet, less-responsive behavior to old age. pawing at the mouth, teeth grinding, and hypersalivation, a result of hypoglycemia-induced nausea, are other behavioral signs that may be associated with insulinoma. left untreated, hypoglycemia may result in seizures, coma, and death. the definitive diagnosis of insulinoma depends on the histopathologic examination of pancreatic tissue. however, in most ferrets a diagnosis of insulinoma is made before surgery, by demonstration of hypoglycemia in association with history and clinical signs. other causes of hypoglycemia should be ruled out, including anorexia or starvation, severe gastrointestinal disease, sepsis, neoplasia, and hepatic disease. pain assessment in ferrets is often more difficult than in dogs and cats because, in general, veterinarians are less familiar with the normal behavior of ferrets. 17 changes in behavior associated with pain can be subtle, but careful observation of the undisturbed ferret will allow the clinician to pick up on the various indicators of pain. an uncomfortable ferret will be reluctant to curl into its normal, relaxed sleeping position, may have a tucked appearance to the abdomen and a strained facial expression, and may have increased frequency and depth of respirations. the gait may be stiff, with the head elevated and extended forward. most ferrets in pain are lethargic and anorexic. a painful abdomen is a common sequela to ferret gastrointestinal diseases including gastric ulcers, gastrointestinal foreign bodies or trichobezoars, ece, and helicobacter infections. owners often report that the ferret is hunched up with an arched back, immobile, or walking with a stilted gait and is grinding its teeth, all common signs of abdominal pain.
a less astute owner may not recognize spasmodic teeth grinding behavior manifested in a ferret that holds its head down and rhythmically moves its facial muscles back and forth and wriggles its ears in response to painful stimuli. postoperative and traumatic pain are usually manifested as a reluctance to move and a facial expression demonstrating dull, half-open, noninquisitive eyes, which are overall expressions of tension. it is amazing to watch the change in behavioral attitude and facial relaxation once pain medication is administered. if possible, analgesics should be provided before a painful stimulus occurs. administering preemptive analgesia as part of the preanesthetic protocol or administering analgesics intraoperatively before discontinuing general anesthesia diminishes the wind-up effect of pain and decreases the postoperative pain caused by neuropathic and inflammatory pain. 7 each patient must be evaluated individually when analgesic protocols are chosen, with the frequency, duration, and type of analgesic based on clinical judgment, hematologic and biochemical values, and patient response. a return to normal attentive behavior, curling up under a towel to sleep, and a good appetite are all behavioral signs that postoperative analgesia is adequate. imprinting on prey odours in ferrets (mustela putorius f. furo l.)
and its neural correlates
instinctive predatory behavior of the ferret (putorius putorius furo) modified by chlordiazepoxide hydrochloride (librium)
ferret nutrition
feed consumption and food passage in mink (mustela vison) and european ferret (mustela putorius furo)
available at: www.practical-pet-care
behavior of mustela putorius furo (the domestic ferret)
recognizing pain in exotic mammals
evidence implicating aromatization of testosterone in the regulation of male ferret sexual behavior
effects of early social experience on activity and object investigation in the ferret
scent marking behavior of the ferret
an olfactory recognition system in the ferret mustela furo l. (carnivora: mustelidae)
the physiological analysis of aggressive behavior
hybridization of the phylogenetic relationship between polecats and domestic ferrets in britain
wheel-running during anoestrus and oestrus in the ferret
exploratory behaviour of the black-footed ferret
etkin w: theories of socialization and communication
analgesia of small mammals
growth, reproduction and breeding
animal behaviour: a synthesis of ethology and comparative psychology
domestic animal behavior for veterinarians and animal scientists
foraging cost and meal patterns in ferrets
nares occlusion eliminates heterosexual partner selection without disrupting coitus in ferrets of both sexes
the effects of environmental enrichment in ferrets
social play or the development of social behavior in ferrets (mustela putorius)?
mustela putorius furo: a carnivore with extremely high proportion of rem sleep
mechanisms of animal behavior
seeing is believing: ferrets' eyes and vision
assessing spatial activity in captive ferrets, mustela furo l. (carnivora: mustelidae), nz
failure of fertilization following abbreviated copulation in the ferret (mustela putorius furo)
late onset of hearing in the ferret
dermatologic diseases
the aggressive behavior of individual male polecats (mustela putorius, m. furo and hybrids) towards familiar and unfamiliar opponents
some behavioural differences between the european polecat, mustela putorius, the ferret, m. furo, and their hybrids
endocrine diseases
the denning behavior of feral ferrets (mustela furo) in a pastoral habitat
ferrets for dummies
hyperadrenocorticism in ferrets
a tao full of detours, the behavior of the domestic ferret
factors associated with aggression between pairs of domestic ferrets
effects of neonatal castration and testosterone on sexual partner preference in the ferret
sexual differentiation of play behavior in the ferret
a history of the ferret
effects of experience and cage enrichment on predatory skills of black-footed ferrets (mustela nigripes)
physiology of the ferret

key: cord-342785-55r01n0x
authors: lemmon, gordon h; gardner, shea n
title: predicting the sensitivity and specificity of published real-time pcr assays
date: 2008-09-25
journal: ann clin microbiol antimicrob
doi: 10.1186/1476-0711-7-18
sha:
doc_id: 342785
cord_uid: 55r01n0x

background: in recent years real-time pcr has become a leading technique for nucleic acid detection and quantification. these assays have the potential to greatly enhance efficiency in the clinical laboratory. choice of primer and probe sequences is critical for accurate diagnosis in the clinic, yet current primer/probe signature design strategies are limited, and signature evaluation methods are lacking.
methods: we assessed the quality of a signature by predicting the number of true positive, false positive and false negative hits against all available public sequence data. we found real-time pcr signatures described in recent literature and used a blast search based approach to collect all hits to the primer-probe combinations that should be amplified by real-time pcr chemistry. we then compared our hits with the sequences in the ncbi taxonomy tree that the signature was designed to detect.
results: we found that many published signatures have high specificity (almost no false positives) but low sensitivity (high false negative rate). where high sensitivity is needed, we offer a revised methodology for signature design which may designate that multiple signatures are required to detect all sequenced strains. we use this methodology to produce new signatures that are predicted to have higher sensitivity and specificity.
conclusion: we show that current methods for real-time pcr assay design have unacceptably low sensitivities for most clinical applications. additionally, as new sequence data becomes available, old assays must be reassessed and redesigned. a standard protocol for both generating and assessing the quality of these assays is therefore of great value. real-time pcr has the capacity to greatly improve clinical diagnostics. the improved assay design and evaluation methods presented herein will expedite adoption of this technique in the clinical lab.

real-time pcr assays are gaining popularity as a clinical tool for detecting and quantifying the presence of both viral and bacterial pathogens, as reviewed in [1]. compared to traditional culturing methods used in identification, real-time pcr is fast and cost effective. in addition, it can be quantitative and sensitive, in some cases greatly exceeding the sensitivity of conventional testing methods. commercially distributed kits are available for pcr-based pathogen diagnostics, and pcr is no longer thought of merely as confirmatory to culture. however, real-time pcr assays are limited by the quality of the primers and probes chosen. these primers and probes must be sensitive enough to match all target organisms yet specific enough to exclude all others. a common approach to developing a primer/probe combination is by using commercial software such as primerexpress® (applied biosystems, foster city, ca, usa).
this software asks the user to upload a dna sequence file, and then finds possible primer/probe sets that meet the assay criteria. generally a researcher will provide as input a gene region conserved throughout the taxa that the assay is being designed to detect. the software then provides possible primer/probe sets, from which the researcher chooses a representative signature. if there are single nucleotide polymorphisms (snps) within the chosen conserved region, a signature with consensus primers and probes is often chosen. next a blast [2] search is performed to ensure that the primers are not hitting other targets. finally the signature is verified in vitro with laboratory strains. while this design approach may work acceptably well in the research laboratory, the clinical laboratory calls for a more thorough analysis to ensure detection of novel, diverse, and uncommon strains. these may appear, for example, as a result of spread by foreign travel or migration. whole-genome-based automated signature design [3] represents a great improvement over the common method. however, in addition to better design strategies, methods for automated signature evaluation are needed. as additional sequence data becomes available, it is necessary to regularly reassess the predicted efficacy of a given signature. this analysis must include the predicted false negative and false positive rates for the developed signatures, and consider all available public sequence data. we have analyzed a number of real-time pcr assays found in the literature based on public sequence data. herein we report how well these signatures performed, offer a revised approach to pcr assay design, and use this approach to produce new assays predicted to have higher sensitivity and specificity. the literature was combed for recently published articles reporting real-time pcr assays for the clinical detection of bacterial and viral taxa. the primer and probe sequences were accumulated, with a preference for taqman assays.
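the consensus step described above can be sketched in code. the following is an illustrative python sketch, not the commercial software's algorithm; the function name and iupac table are ours. it collapses each snp column of an aligned, conserved region into the iupac degenerate base covering every observed variant:

```python
# Hypothetical sketch: collapse SNP columns of an aligned, conserved
# gene region into IUPAC degenerate bases for a consensus primer.
IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C",
    frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AG"): "R", frozenset("CT"): "Y",
    frozenset("CG"): "S", frozenset("AT"): "W",
    frozenset("GT"): "K", frozenset("AC"): "M",
    frozenset("CGT"): "B", frozenset("AGT"): "D",
    frozenset("ACT"): "H", frozenset("ACG"): "V",
    frozenset("ACGT"): "N",
}

def degenerate_consensus(aligned_seqs):
    """Consensus covering every base observed at each alignment column."""
    return "".join(IUPAC[frozenset(col)] for col in zip(*aligned_seqs))

# e.g. two strains differing at the last position:
# degenerate_consensus(["ACGT", "ACGA"]) -> "ACGW"
```

note that each degenerate position multiplies the primer pool and dilutes the exactly matching variant, which is one reason a consensus signature may still miss divergent strains.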
however, 3 intercalating dye assays were also selected. papers reporting nucleotide sequences that could not easily be copied from an online source were avoided. in total, 112 signatures from 32 papers were analyzed. local oracle databases have been constructed from the complete genome sequence data available at ncbi genbank, tigr, embl, img (jgi), and baylor hgsc. we used our "all_virus" and "all_bacteria" databases to find signature matches and predict false negatives and false positives. these databases were designed to contain only whole genomes and whole segments from segmented genomes. however, the heuristics used to separate whole genomes from partial sequences are not fail-proof due to inconsistency in sequence annotation within the public databases. consequently many sequences in these databases may show up as false negatives when they are actually just a section or segment of a genome that is not expected to contain the signature, and we manually sorted these sequences into true or false negatives. a freely available real-time pcr analysis tool called taqsim [4] was used to find public sequences that would match the primer/probe assay in question. taqsim uses blast searches to find sequences that match both forward and reverse primers and probe. to be reported as a "hit" the primers and probe must match in the required orientations relative to one another and the primers must be in sufficiently close proximity. the forward/reverse primers may fall on either the plus or minus strand, so long as the orientation relative to one another is appropriate. there may not be mismatches at the 3' end of either primer. for each hit taqsim calculates the primer and probe melting temperatures as bound to the candidate hit sequence (accounting for mismatches) based on reaction conditions (reagent concentrations and hybridization temperature), and returns sequences predicted to be amplified.
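taqsim's hit criteria can be approximated with a naive in-silico pcr sketch. this is a simplification written for illustration, not taqsim itself: it scans one strand only, allows at most 2 mismatches per oligo (mirroring the threshold used in our analyses), forbids a mismatch at a primer's 3'-terminal base, requires the probe to fall inside the amplicon, and omits the melting-temperature screen entirely:

```python
def revcomp(seq):
    """Reverse complement of an uppercase DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def mismatches(a, b):
    return sum(x != y for x, y in zip(a, b))

def find_matches(template, oligo, max_mm=2, anchor=None):
    """Windows where oligo binds with <= max_mm mismatches.
    anchor="end": last base must match (plus-strand primer 3' end);
    anchor="start": first base must match (the 3' end of the reverse
    primer after reverse complementing); None: no anchoring (probe)."""
    hits = []
    for i in range(len(template) - len(oligo) + 1):
        w = template[i:i + len(oligo)]
        if mismatches(w, oligo) > max_mm:
            continue
        if anchor == "end" and w[-1] != oligo[-1]:
            continue
        if anchor == "start" and w[0] != oligo[0]:
            continue
        hits.append(i)
    return hits

def is_hit(template, fwd, rev, probe, max_amplicon=500):
    """True if fwd ... probe ... revcomp(rev) occur in order, close enough."""
    rc_rev = revcomp(rev)
    for f in find_matches(template, fwd, anchor="end"):
        for r in find_matches(template, rc_rev, anchor="start"):
            end = r + len(rc_rev)
            if f < r and end - f <= max_amplicon:
                if find_matches(template[f:end], probe):
                    return True
    return False
```

a production tool additionally screens predicted primer/probe melting temperatures against the reaction conditions; this sketch does not.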
instead of replicating the various exact reaction conditions reported in each paper, very lenient settings were applied in all cases, essentially removing the screen for primer/probe vs. candidate hit tm by setting this threshold to 0 k, and instead checking for specificity by requiring that hits have fewer than 3 mismatches per primer or probe. taqsim's predicted sequence hits were compared with sequences listed under a given set of ncbi taxonomy tree nodes. for instance, if a signature was reported to detect hepatitis b, then its set of taqsim hits would be compared with the set of sequences under node 10407, corresponding to hepatitis b virus. sequences in both sets were considered true positives, sequences in the taqsim output that were missing from the chosen taxonomy nodes were considered false positives, and sequences that were in the taxonomy tree but missing from the taqsim output were considered false negatives. test statistics such as specificity and sensitivity (power) were then calculated. in this paper we define sensitivity and specificity as follows: the primary research articles were read carefully to determine what the authors had designed their primers/probes to detect. ncbi taxonomy nodes were chosen to represent these target organisms. this was not a trivial task, since many articles lack clarity as to which taxa, specifically, their assay should detect. for instance, the cytomegalovirus assay did not detect all sequences in the cytomegalovirus genus (taxonomy node 10358), but rather all sequences in the human herpesvirus 5 species (taxonomy node 10359). none of the articles specified a taxonomy node for their signatures. perl scripting [5] was used to help compare blast hits and taxonomy node sequences, and count false negative, false positive and true positive sequence matches. however some sequences required hand sorting due to the wide array of sequence types and annotations.
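the comparison of taqsim hits with taxonomy-node sequence sets reduces to set operations. a minimal python sketch (the authors used perl; the function names are ours), assuming the standard definition sensitivity = tp/(tp + fn) since the formulas themselves were not recoverable from the text:

```python
def classify(predicted_hits, target_node_seqs):
    """Split sequence IDs into true positives, false positives, and
    false negatives by comparing the set of sequences predicted to be
    amplified with the set filed under the target taxonomy node."""
    tp = predicted_hits & target_node_seqs   # hit and in target node
    fp = predicted_hits - target_node_seqs   # hit but outside target node
    fn = target_node_seqs - predicted_hits   # target sequence not hit
    return tp, fp, fn

def sensitivity(tp, fn):
    """Fraction of target sequences detected: tp / (tp + fn)."""
    return len(tp) / (len(tp) + len(fn))
```

for example, a hypothetical hepatitis b signature would pass its taqsim hits and the node 10407 sequence set to `classify`, and a signature missing one of three target genomes would score a sensitivity of 2/3.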
these often represented segmented genomes, in which case many of the would-be false negative sequences simply represent a different segment than that on which the signature lands, so we manually tabulated them as true negatives. they may also represent plasmids. in these situations, a careful review of the genbank entry, and sometimes of the primary article cited by genbank, was necessary to determine if the sequence of interest was truly a false negative. although we attempt to include only complete genomes in our sequence database, because of inconsistencies in the annotation of sequence data some partial sequences nevertheless make it into our databases. any of these partial cds's documented as containing the target gene on which the signature was supposed to land were counted as false negatives, but those partial cds's not documented to contain the target gene were eliminated from the false negative pool because it is possible the signature could land on the unsequenced section with the target gene. our database also contains "glued fragments", which represent draft genomes "glued" together with hundreds of "n"s as a simple way to keep the separate contigs associated as part of the same genome. while we report false negatives from these draft genomes, it is possible that the signatures could land on gaps between the contigs, and that finished sequencing could result in re-classification as a true positive. tables 1, 2, 3, 4, 5 summarize our analysis of various dna signatures. details of all true positive, false positive and false negative sequences are available from the authors. note that these results are in silico results; no laboratory testing was performed for verification, so that by stating that an organism is "detected" we mean that this is our prediction based on sequence data. a few notes of interest concerning the data in the tables are described below: the two human corona virus strains, 229e and oc43 are a frequent cause of the common cold [6] . 
a taqman assay for 229e was predicted to perform perfectly, while an assay for oc43 turned up a number of false positives, all of which were animal corona viruses. a coxsackie b3 virus assay [7] performed well, but a coxsackie b4 assay [8] hit many other human coxsackie, echo, and entero viruses. four out of 5 false negatives for a marburg virus assay [9] were of the lake victoria variety. false negatives associated with a yellow fever signature [10] included trinidad, french neurotropic, french viscerotropic, and vaccine strains. the filoviridae (ebola/marburg) assay [10] detected only ebola viruses. staphylococcus aureus [11] and enterobacteriaceae assays [12] had low sensitivity. an escherichia coli assay [12] hit shigella and vibrio sequences. many of these signatures [13] had high sensitivity. combining several of them into a multiplex assay would probably improve sensitivity further. these signatures were designed using a minimal set clustering approach [14]. while individual signatures have decent sensitivity, combining several signatures in one assay, as advocated in the publication, greatly improved sensitivity. the signatures for hepatitis a are currently undergoing laboratory screening by the fda, and are performing well (g. hartman, personal communication). several reported signatures produced no predicted hits. these include assays for several flaviviruses [10, 15, 16], and 16s rrna assays [12] for several bacteria. examination of blast output showed that in these cases either a primer or internal oligo (probe) did not have blast hits to target, there were too many mismatches per primer or probe sequence above the threshold specified in our analyses, or there were mismatches at the 3' end of a primer relative to target.
(table note: all but three are taqman signatures; the three intercalating dye type assays are bolded.)
it is possible that if the sequences of the samples used in the laboratory differ from available genomic data, or if the pcr reaction conditions are performed at low stringency (e.g. low annealing temperatures or high salt concentrations), these assays could in fact work in the laboratory. however, according to the genomic data available, a better match of primers and probes to target is possible and is usually desired for high sensitivity detection. targeting a number of the organisms for which currently published signatures were predicted to perform poorly, as well as some for which additional signatures may be desired (even though published signatures may perform well), we generated new signatures using minimal set clustering (msc) according to methods previously described [14, 17]. msc begins by removing non-unique regions from consideration as primers or probes from each of the target sequences relative to a database of nontarget bacterial and viral sequences. the remaining unique regions of each target sequence are mined for all or many candidate signatures, without regard for conservation among other targets, yet satisfying user specifications for primer and probe length, tm, gc%, avoidance of long homopolymer runs, and amplicon length. all candidate signatures are compared to all targets and clustered by the subset of targets they are predicted to detect. signatures within a given cluster are equivalent, in that they are predicted to detect the same subset of targets, so by clustering we reduce the redundancy and size of the problem to finding a small set of signatures that detect all targets. nevertheless, finding the optimal solution of the fewest clusters to detect all targets is an np-complete problem, so for large data sets we use a greedy algorithm to find a small number of clusters that together should pick up all targets.
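the greedy step can be sketched as a generic set-cover approximation (our illustrative code, not the authors' implementation): each signature cluster maps to the set of target genomes it is predicted to detect, and we repeatedly take the cluster covering the most still-undetected targets:

```python
def greedy_signature_cover(clusters, targets):
    """Greedy set-cover approximation: repeatedly choose the signature
    cluster that detects the most still-uncovered target genomes.
    clusters: dict mapping cluster name -> set of detected targets.
    Returns (chosen cluster names, targets no candidate can detect)."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        best = max(clusters, key=lambda s: len(clusters[s] & uncovered))
        gained = clusters[best] & uncovered
        if not gained:  # remaining targets detected by no candidate signature
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered
```

the greedy choice does not guarantee the minimal set, but for set cover it is within a logarithmic factor of optimal, which is adequate when the goal is a small multiplexable panel rather than a provably minimal one.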
in the supplementary table, we often provide more than one alternative signature to detect a given equivalence group of genomes to serve as a backup should a signature perform poorly in laboratory testing. some of the signatures may have mismatches to some of their intended targets, although these mismatches are not predicted to reduce the tm of primer/probe hybridizing to target below typical taqman reaction conditions. none of these computationally predicted signatures have been screened in the laboratory, as this is beyond the scope of this paper.
(table residue: columns year and target; targets include pseudomonas aeruginosa, escherichia coli, and neisseria meningitidis.)
as expected, we found that false negatives were much more common than false positives. though signatures are generally based on conserved gene regions, they often fail to take into account all of the variation within a target set of organisms. this may be because the signatures were developed using sequence data from a handful of strains, rather than a thorough study of all strains publicly available. these false negatives may also represent sequences that have become available since the publication of the given signature. since new sequence data is made available at an ever-increasing rate, there is great benefit in re-evaluating clinically used dna signatures regularly. when new sequence data leads to false negative predictions for a signature, one of two explanations can be given. the new sequences either represent recently recognized variation that has been around since the time the signature was published, or new variation, the result of mutation and natural selection. in either case, an improved or additional signature should be designed. high false positive or false negative rates do not necessarily indicate a "bad" dna assay. the quality of an assay must be considered in light of the milieu in which the testing will take place.
in the clinical laboratory, a signature with high sensitivity but perhaps low specificity may be preferred over a test with lower sensitivity in cases where the putative pathogen requires immediate treatment or may spread quickly. the case of antibiotic-resistant bacteria probably falls in this category. on the other hand, the nation's basis and biowatch programs insist on zero false positives, so as to avoid public disturbances due to false alarms, while still aiming for zero false negatives [18]. one must also consider the type of false negative and false positive results to determine their relevance. for instance, in this article an assay for human corona virus oc43 [6] cross-reacted with animal corona viruses; but what about such a match in a clinical lab in africa? on the other hand, the echovirus sequences that the coxsackie b4 assay [8] can detect could produce misleading results in any clinical lab. the false negative and false positive rates presented in this study may vary substantially from those seen empirically. this is because the strains available in a laboratory may differ significantly from the sequence data available, or because the empirical protocol is more or less stringent than the sequence-based requirements we imposed, which allowed no more than 2 mismatches per primer or probe for detection. we believe that as more target sequences become available, our predicted false negative rates will tend to increase for a given published signature both as a result of better sampling of diversity and as a result of failure to detect newly evolved variants. it has been estimated that a minimum of 3-4 genomes are needed in order to computationally design taqman pcr signatures likely to detect most strains, with the isolates chosen for sequencing selected to span gradients of geographic, phenotypic, and temporal variation [19]. even more than 4 genomes are needed for particularly diverse organisms.
thus, older signatures may not perform as well as signatures newly developed from the most up-to-date sequence data. a future study of interest would be a longitudinal look at how these rates continue to change over time as additional sequences become available. this study could be performed retrospectively, since sequence submission dates are easily obtained from public databases. we also hypothesized that the wider the intended scope of a signature, the lower its sensitivity would tend to be. the point is illustrated loosely in our data tables: twenty-six of the 28 signatures with fewer than 10 publicly available target sequences had sensitivities of 1 (i.e., zero false negatives), while signatures with 10 or more targets had an average sensitivity of 0.710116. however, this approach only considers scope in the context of the sequence data available. we tried to demonstrate the relationship between specificity and scope at a more fundamental level by grouping signatures by the taxonomic level of their target, as shown in figure 1; however, the results are misleading. in virology, taxonomic level is not a good indicator of nucleotide diversity. for instance, there is more diversity in the influenza a species than there is in the entire filoviridae family, which consists of only two known genera: ebola-like viruses and marburg viruses. a better approach might be to calculate nucleotide diversity as a function of phylogenetic branch length or shared k-mer clusters within a target taxonomy node. finally, we averaged the sensitivities of microbes by genome type, as shown in table 6. note that the ssrna-rt category includes only hiv-1. this table demonstrates that creating signatures with high sensitivity becomes more difficult for target organisms with high mutation rates. current real-time pcr assay design approaches produce signatures with sensitivities generally too low for clinical use.
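the shared k-mer idea suggested above can be made concrete with a small sketch. the jaccard-distance scoring and the toy sequences here are our own illustration of one possible diversity measure, not a method from the paper.

```python
# One way to score nucleotide diversity by shared k-mers: the mean pairwise
# Jaccard distance between k-mer sets (0 = identical sequences, 1 = no
# shared k-mers). Sequences below are invented.

from itertools import combinations

def kmers(seq, k=4):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_diversity(seqs, k=4):
    """Mean pairwise Jaccard distance between k-mer sets."""
    pairs = list(combinations(seqs, 2))
    dists = [1 - len(kmers(a, k) & kmers(b, k)) / len(kmers(a, k) | kmers(b, k))
             for a, b in pairs]
    return sum(dists) / len(dists)
```

a low score suggests a single signature may cover the target set; a high score argues for the minimal-set-of-signatures approach discussed below.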
we suggest that a rigorous approach involving false positive and false negative analysis should be the standard by which an initial assessment of signature quality is made. signatures must also be reassessed regularly as sequence data becomes available. for targets with wide nucleotide diversity, it becomes necessary to develop a set of signatures, for which we suggest a minimal set clustering approach that may also include signatures with degenerate/inosine bases.

additional file - newrealtimepcrsignatures: fifty-seven taqman pcr primer/probe combinations we predict to have higher sensitivity/specificity than currently published assays. [http://www.biomedcentral.com/content/supplementary/1476-0711-7-18-s1.doc]

figure 1 - sensitivity by taxonomy level: each colored diamond represents a real-time pcr assay examined in this paper. black bars indicate the mean, grey bars indicate the median. the top and bottom of each box indicate the 75th and 25th percentiles, and grey lines at whisker ends denote min and max values. the wide-ranging sensitivities demonstrate both inconsistency in genetic diversity at a given taxonomy level and inconsistency in signature design approaches.

reasons selected published signatures were predicted to miss their targets (target, publication year, notes):
- japanese encephalitis virus: too many mismatches in either the forward or reverse primer; several strains have 3 mismatches at the 3' end of the forward primer in addition to internal mismatches
- (target not recoverable): reverse primer only has a blast hit to one strain (angola71)
- saint louis encephalitis virus (2007): too many mismatches in the reverse primer, with 3 mismatches at the 3' end as well as at other locations
- dengue virus 1 (2006): reverse primer does not have any blast hits to the target
- dengue virus 2 (2006): forward primer has 3 or 12 mismatches at the 3' end for most strains; the probe has blast hits to only 7 of the 57 genomes available, and the reverse primer only has a blast hit to 1 genome
- dengue virus 4 (2006): too many mismatches in the forward primer and, in some cases, the probe
- (target not recoverable): too many mismatches in the forward primer; however, they are at the 5' end, so the assay could still work for some strains
- pseudomonas aeruginosa (2004): no blast hits of the probe to pseudomonas aeruginosa; the probe is not between, or even in close proximity to, the forward and reverse primers
- stenotrophomonas maltophilia (2004): no blast hits of the probe to stenotrophomonas maltophilia; the probe matches in only 17 of 22 bases, which is unlikely to give a strong signal, since the probe is unlikely to bind before the primers as required for real-time taqman chemistry

references:
- basic local alignment search tool
- comprehensive dna signature discovery and validation
- the perl directory
- frequent detection of human coronaviruses in clinical specimens from patients with respiratory tract infection by use of a novel real-time reverse-transcriptase polymerase chain reaction
- the interferon inducer ampligen markedly protects mice against coxsackie b3 virus-induced myocarditis
- coxsackievirus b4 infection of human fetal thymus cells
- rapid detection protocol for filoviruses
- rapid detection and quantification of rna of ebola and marburg viruses, lassa virus, crimean-congo hemorrhagic fever virus, rift valley fever virus, dengue virus, and yellow fever virus by real-time reverse transcription-pcr
- a 5' nuclease pcr (taqman) high-throughput assay for detection of the meca gene in staphylococci
- algorithm for the identification of bacterial pathogens in positive blood cultures by real-time lightcycler polymerase chain reaction (pcr) with sequence-specific probes. diagnostic microbiology and infectious disease
- development of quantitative gene-specific real-time rt-pcr assays for the detection of measles virus in clinical specimens
- limitations of taqman pcr for detecting divergent viral pathogens illustrated by hepatitis a, b, c, and e viruses and human immunodeficiency virus
- development of multiplex real-time reverse transcriptase pcr assays for detecting eight medically important flaviviruses in mosquitoes
- development of real-time reverse transcriptase pcr assays to detect and serotype dengue viruses
- comparative genomics tools applied to bioterrorism defense
- rapid development of nucleic acid diagnostics
- sequencing needs for viral diagnostics
- design and validation of an h5 taqman real-time one-step reverse transcription-pcr and confirmatory assays for diagnosis and verification of influenza a virus h5 infections in humans
- real-time quantitative pcr assays for detection and monitoring of pathogenic human viruses in immunosuppressed pediatric patients (lion t et al.)
- rapid reverse transcription-pcr detection of hepatitis c virus rna in serum by using the taqman fluorogenic detection system
- rapid detection of west nile virus from human clinical specimens, field-collected mosquitoes, and avian samples by a taqman reverse transcriptase-pcr assay
- development of a quantitative real-time detection assay for hepatitis b virus dna and comparison with two commercial assays
- sensitive and accurate quantitation of hepatitis b virus dna using a kinetic fluorescence detection system (taqman pcr)
- comparison of two quantitative cmv pcr tests, cobas amplicor cmv monitor and taqman assay, and pp65-antigenemia assay in the determination of viral loads from peripheral blood of organ transplant patients
- differentiation of herpes simplex virus types 1 and 2 in clinical samples by a real-time taqman pcr assay
- development of a fluorogenic polymerase chain reaction assay (taqman) for the detection and quantitation of varicella zoster virus
- rapid and sensitive detection of mumps virus rna directly from clinical samples by real-time pcr
- development of a real-time reverse-transcription pcr for detection of newcastle disease virus rna in clinical samples
- transfer and evaluation of an automated, low-cost real-time reverse transcription-pcr test for diagnosis and monitoring of human immunodeficiency virus type 1 infection in a west african resource-limited setting
- rapid detection of enterovirus rna in cerebrospinal fluid specimens with a novel single-tube real-time reverse transcription-pcr assay
- use of applied biosystems 7900ht sequence detection system and taqman assay for detection of quinolone-resistant neisseria gonorrhoeae
- comparison of a new quantitative ompa-based real-time pcr taqman assay for detection of chlamydia pneumoniae dna in respiratory specimens with four conventional pcr assays
- a lightcycler taqman assay for detection of borrelia burgdorferi sensu lato in clinical samples
- detection of medically important ehrlichia by quantitative multicolor taqman real-time polymerase chain reaction of the dsb gene

acknowledgements: we thank beth vitalis, jason smith, and tom slezak for helpful discussion and for encouraging this work, and kari allmon for entering the references. we gratefully acknowledge financial support from the intelligence technology innovation center. lawrence livermore national laboratory is operated by lawrence livermore national security, llc, for the u.s. department of energy, national nuclear security administration under contract de-ac52-07na27344.

competing interests: the authors declare that they have no competing interests.

authors' contributions: gl found real-time pcr signatures in the literature, wrote perl scripts, and performed the analysis of published signatures. sg conceived of the research, designed new signatures, and provided guidance throughout the study.
key: cord-289389-xailjga5
authors: wang, xiaoli; zeng, daniel; seale, holly; li, su; cheng, he; luan, rongsheng; he, xiong; pang, xinghuo; dou, xiangfeng; wang, quanyi
title: comparing early outbreak detection algorithms based on their optimized parameter values
date: 2009-08-13
journal: j biomed inform
doi: 10.1016/j.jbi.2009.08.003
doc_id: 289389 cord_uid: xailjga5

background: many researchers have evaluated the performance of outbreak detection algorithms with recommended parameter values. however, the influence of parameter values on algorithm performance is often ignored.
methods: based on reported case counts of bacillary dysentery from 2005 to 2007 in beijing, semi-synthetic datasets containing outbreak signals were simulated to evaluate the performance of five outbreak detection algorithms. parameter values were optimized prior to the evaluation.
results: differences in performance were observed as parameter values changed. of the five algorithms, the space-time permutation scan statistic had a specificity of 99.9% and a detection time of less than half a day. the exponential weighted moving average exhibited the shortest detection time of 0.1 day, while the modified c1, c2 and c3 exhibited detection times of close to one day.
conclusion: the performance of these algorithms correlates with their parameter values, which may affect performance evaluation.

following the outbreak of severe acute respiratory syndrome [1] in 2003, there has been growing recognition of the necessity and urgency of early outbreak detection of infectious diseases. in january 2004, the national disease surveillance, reporting and management system was launched in china. the system, which covers 37 infectious diseases, has the potential to provide timely analysis and early detection of outbreaks.
however, as the passive surveillance system relies on accumulated case and laboratory reports, which are often delayed and sometimes incomplete, the opportunity to contain the spread of the disease is often missed. as increasing numbers of early outbreak detection algorithms are now being used in public health surveillance [2][3][4][5][6][7][8][9], there is a need to evaluate their performance. due to a lack of complete and real data pertaining to historical outbreaks, the performance of these systems has previously been difficult to evaluate [10]. adding to these difficulties is the fact that the information obtained from historical outbreaks may be heterogeneous, due to changes in outbreak surveillance criteria over time. in order to compensate for missing or heterogeneous information, semi-synthetic datasets containing outbreak signals can be created using a software tool. with such a tool, the parameters of the outbreak, including the desired duration, temporal pattern and magnitude (based on predefined criteria), can be set explicitly. this approach has been documented in a number of previous studies, which have compared the performance of early outbreak detection algorithms using simulated outbreaks [11][12][13][14][15][16][17][18]. the simulation enables performance assessment and provides much-needed comparative findings about outbreak detection algorithms. however, there are still limited studies examining how performance varies with the values of algorithm parameters. our study aimed to observe the relationship between the algorithms' performance and their parameter values. the outcomes of this study may help improve the accuracy and objectivity of the evaluation of these algorithms and provide guidelines for future research and implementation. bacillary dysentery is one of the key diseases with epidemic potential in beijing. it commonly occurs in summer and in regions with high population densities.
with economic development and improvements in sanitary conditions in china, the incidence of bacillary dysentery decreased substantially from 1990 to 2003 [19]. between 2004 and 2007, data from the national disease surveillance reporting and management system showed that the average incidence rate of bacillary dysentery was 235.9 cases per 100,000 in beijing. whilst there has been a substantial decline in the disease burden, bacillary dysentery continues to be a major public health problem in beijing. the observed daily case counts of bacillary dysentery from 2005 to 2007 in beijing were extracted from the national disease surveillance reporting and management system [20] for this study. the onset date of illness and the area code at the sub-district level were extracted for each reported case. these data were used as the baseline for the outbreak simulation. data from 2005 to 2006 were used to adjust and optimize the parameter values of the algorithms, while data from 2007 were used to evaluate the algorithms. the outbreak criterion was defined on the basis of the bacillary dysentery reporting criteria specified in the national protocol for information reporting and management of public health emergencies (trial) [21]. this protocol was issued by the health emergency office of the ministry of health (moh) at the end of 2005. in the protocol, a bacillary dysentery outbreak was defined as the occurrence of 10 or more bacillary dysentery cases in the same school, natural village or community within 1-3 days. based on this definition, there was only one actual outbreak, in the summer of 2007. during this outbreak, 10 children from a middle school were clinically diagnosed as having bacillary dysentery and four were culture positive for shigella sonnei. the first case became ill on the evening of the 21st of july and was taken to hospital the next day. two cases were reported on the 22nd of july, a further four on the 23rd of july, and two on the 24th of july.
as insufficient documentation was collected during the outbreak, a simulated outbreak signal had to be produced. before the simulation, the actual outbreak was excluded by replacing the actual data with a 7-day moving average, to avoid contaminating the baseline. our simulation approach used semi-synthetic data, that is, authentic baseline data injected with artificial signals [9]. the aegis-cluster creation tool (aegis-cct) was used to generate outbreak signals [22]. first, the duration was fixed at three days and the outbreak magnitude was varied from 10 to 20 cases; then the outbreak magnitude was fixed at 10 cases and the duration was varied from one to three days. the temporal progression of these outbreaks included a random, a linear, and an exponential growth spread (12 signals for each temporal progression pattern). a total of 36 different outbreak signals were finally simulated. considering the spatial distribution and seasonal variability of bacillary dysentery, we randomly selected 30 sub-districts (10 for each pattern) from the 100 sub-districts (townships) where the incidence was higher than the average incidence in beijing, and then randomly selected one day from the high-incidence seasons as the starting date of each outbreak. the remaining six outbreak signals were randomly added to the low-incidence seasons and areas. simulations injected into the baseline data from the selected sub-districts (2005-2006) were used to observe the relationship between algorithm performance and parameter values; this allowed us to select the optimal combination of parameter values. simulations added to the baseline data from 2007 were used to evaluate the algorithms. in order to reduce sampling error, means were calculated by repeating the sampling 50 times. evaluation indices included sensitivity, specificity and time to detection [14].
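the injection step above can be sketched in code. aegis-cct was the actual tool used; this hedged sketch only illustrates the idea of spreading a fixed magnitude of extra cases over a fixed duration with a linear, exponential, or random progression (function names and the rounding scheme are our assumptions).

```python
# Sketch of semi-synthetic outbreak injection: distribute `magnitude` extra
# cases over `duration` days according to a temporal pattern, then add them
# on top of authentic baseline counts.

import random

def outbreak_signal(magnitude, duration, pattern, rng=None):
    """Daily extra-case counts for one simulated outbreak."""
    rng = rng or random.Random(0)
    if pattern == "linear":
        weights = [i + 1 for i in range(duration)]       # 1, 2, 3, ...
    elif pattern == "exponential":
        weights = [2 ** i for i in range(duration)]      # 1, 2, 4, ...
    elif pattern == "random":
        weights = [rng.random() for _ in range(duration)]
    else:
        raise ValueError(pattern)
    total = sum(weights)
    daily = [round(magnitude * w / total) for w in weights]
    daily[-1] += magnitude - sum(daily)                  # absorb rounding drift
    return daily

def inject(baseline, start, signal):
    """Return a semi-synthetic series: baseline plus the outbreak signal."""
    series = list(baseline)
    for offset, extra in enumerate(signal):
        series[start + offset] += extra
    return series
```

for example, outbreak_signal(10, 3, "exponential") spreads 10 cases as [1, 3, 6] over three days.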
an outbreak was considered to be detected when a signal was triggered: (1) within the same period as the start and end dates of the particular simulated outbreak; and (2) within the same sub-district in which the simulation was geographically located. in our study, sensitivity was defined as the number of outbreaks in which at least one day was flagged, divided by the number of simulated outbreaks. specificity was defined as the number of days that were not flagged divided by the number of non-outbreak days. time to detection was defined as the interval between the beginning of the simulated outbreak and the first day flagged by the algorithm, averaged over the number of simulated outbreaks. time to detection was zero if the algorithm flagged a simulated outbreak on the first day. time to detection was three if the algorithm did not produce a flag on any of the days during the period of the simulated outbreak. time to detection is an integrated index that reflects both the timeliness and the sensitivity of an algorithm. we intended to find a simple and practical criterion for evaluating the performance of these algorithms. generally, the parameter values with the shortest time to detection were considered preferable. the disparity in specificity between parameter values was also taken into consideration: priority was given to the value with the higher specificity if the times to detection were equal or differed by less than half a day and the difference between the specificities was >5.0%. we compared the performance of five outbreak detection algorithms: the exponential weighted moving average (ewma), c1-mild (c1), c2-medium (c2), c3-ultra (c3) and the space-time permutation scan statistic model. we calculated the ewma using a 28-day baseline based on day t − 30 through day t − 3 within each sub-region [15].
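the three indices defined above can be computed as in this illustrative sketch; the flag and outbreak representations are assumptions made for the example, not the study's actual data structures.

```python
# Evaluation indices for simulated outbreaks. `flags` is a set of
# (day, subdistrict) alarms produced by an algorithm; each outbreak is a
# (start_day, duration, subdistrict) tuple.

def flagged_days(outbreak, flags):
    start, duration, zone = outbreak
    return sorted(d for d, z in flags if z == zone and start <= d < start + duration)

def sensitivity(outbreaks, flags):
    """Fraction of simulated outbreaks with at least one flagged day."""
    return sum(bool(flagged_days(o, flags)) for o in outbreaks) / len(outbreaks)

def specificity(non_outbreak_days, flags):
    """Fraction of non-outbreak days that were not flagged."""
    flagged = sum(any(fd == d for fd, _ in flags) for d in non_outbreak_days)
    return 1 - flagged / len(non_outbreak_days)

def time_to_detection(outbreaks, flags):
    """Mean delay: 0 if flagged on day one, full duration if never flagged."""
    total = 0
    for start, duration, zone in outbreaks:
        hits = flagged_days((start, duration, zone), flags)
        total += (hits[0] - start) if hits else duration
    return total / len(outbreaks)
```

with two 3-day outbreaks and a single alarm on day two of the first, sensitivity is 0.5 and time to detection is (1 + 3)/2 = 2.0 days, matching the missed-outbreak penalty described above.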
if the observed values were x_i ~ n(μ, σ²), the weighted daily counts for each sub-district were calculated as z_t = λ·x_t + (1 − λ)·z_(t−1), with an aberration flagged when z_t exceeded the control limit μ + k·σ·√(λ/(2 − λ)). in the algorithm, λ (0 < λ < 1) was the weighting factor and k was the control limit coefficient [15, 23]; they are the adjustable parameters. based on the range of k values found in previous literature [23], k was set as 0 < k ≤ 3. the adjustment intervals for λ and k were set as 0.1 and 0.5, respectively. the moving standard deviation (s) was used as the estimate of σ, and the moving average (ma) was used as the estimate of μ. the cumulative sum (cusum) algorithm keeps track of the accumulated deviation between observed and expected values. for cusum, the accumulated deviation s_t was defined as s_t = max(0, s_(t−1) + (x_t − (μ_0 + k·σ_xt))/σ_xt), with s_0 = 0. here k·σ_xt is the allowed shift from the mean to be detected, s_t is the current cusum calculation, and s_(t−1) is the previous cusum calculation; an aberration is present when the mean μ_0 shifts to μ_0 + k·σ_x, and h is the decision value. in ears, k was set as 1, and when s_t > h = 2, an alarm would be triggered [24]. when the denominator σ_xt equals zero, 0.2 is used in its place in ears. however, as both sides of the inequality s_t = max(0, s_(t−1) + (x_t − (μ_0 + k·σ_xt))/σ_xt) > 2 can be multiplied by σ_xt, the decision value was changed to h·σ_xt (referred to as h). biosense originally implemented the c1, c2 and c3 methods but has since modified the c2 method (referred to as w2). in our study, we did not use the threshold, k, or decision values set in ears; rather, we adjusted these values to achieve a preferable efficiency for aberration detection. additionally, we did not use 0.2 when σ_xt was 0, but the actual value. based on previous literature [13, 25, 26], we determined the value ranges of h and k as 3σ ≤ h ≤ 5σ and 0 < k ≤ 1.5, respectively. the adjustment intervals for k and h were set as 0.1 and 0.5σ, respectively.
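both charts can be sketched in a few lines. this is a hedged illustration of the formulas above, not the study's java implementation; the warm-up handling and the tiny floor on σ for constant baselines are our own assumptions.

```python
# Hedged sketches of the EWMA and sigma-scaled CUSUM detectors. mu and sigma
# are estimated from a trailing baseline window (28 days at lag 3 for EWMA,
# 7 days for CUSUM); 1e-9 stands in for sigma when the baseline is constant.

import statistics

def ewma_alarm(counts, lam=0.7, k=3.0, baseline=28, lag=3):
    """Flag day t when z_t > mu + k*sigma*sqrt(lam/(2-lam))."""
    alarms, z = [], 0.0
    for t, x in enumerate(counts):
        z = lam * x + (1 - lam) * z          # z_t = lam*x_t + (1-lam)*z_{t-1}
        end = t - lag + 1                    # baseline window ends `lag` days back
        if end < baseline:
            continue                          # not enough history yet
        window = counts[end - baseline: end]
        mu = statistics.mean(window)
        sigma = statistics.pstdev(window) or 1e-9
        if z > mu + k * sigma * (lam / (2 - lam)) ** 0.5:
            alarms.append(t)
    return alarms

def cusum_alarm(counts, k=0.4, h_mult=5.0, baseline=7):
    """Sigma-scaled CUSUM: alarm when S_t exceeds h = h_mult*sigma."""
    alarms, s = [], 0.0
    for t, x in enumerate(counts):
        if t < baseline:
            continue
        window = counts[t - baseline: t]
        mu = statistics.mean(window)
        sigma = statistics.pstdev(window) or 1e-9
        s = max(0.0, s + (x - (mu + k * sigma)))   # both sides scaled by sigma
        if s > h_mult * sigma:
            alarms.append(t)
            s = 0.0                                # restart accumulation
    return alarms
```

with a flat baseline of one case per day, both charts flag a sudden jump on the day it occurs.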
we modified the three original cusum methods referred to as c1, c2 and c3 to c1′, c2′ and c3′ in the reporting of the results in this study. the statistic is written as c_t = (x_t − ma)/s, and c3′ was the sum of c_t, c_(t−1) and c_(t−2) derived from c2′. ma_1 was the moving sample average and s_1 was the moving standard deviation of the case counts reported during the baseline period. ma_2 and s_2 were the moving sample average and moving standard deviation of the case counts reported during the baseline period with a 2-day lag. the moving standard deviation (s) was used as the estimate of σ, and the moving average (ma) was used as the estimate of μ. the length of the baseline comparison period for all three methods was 7 days, in order to account for the day-of-the-week effect [13, 14]. the space-time permutation scan statistic model utilizes thousands to millions of overlapping cylinders to define the scanning window, each of which is a possible candidate for an outbreak. the circular base represents the geographical area of the potential outbreak, from zero to some designated maximum value. the height of the cylinder represents the time period of a potential cluster. the expected number of cases μ_a for any given window a is proportional to the product of the zone and day marginal totals [27, 28]: μ_a = (1/c)·Σ_((z,d) in a) (Σ_z c_zd)·(Σ_d c_zd), where c_zd was the observed number of cases in subzone z during day d, c was the total number of observed cases during the whole study phase t for the whole study region, and c_a was the observed case count scanned in cylinder a. the generalized likelihood ratio (glr), calculated as glr = (c_a/μ_a)^(c_a) · ((c − c_a)/(c − μ_a))^(c − c_a), was used as a measure of the evidence that cylinder a contains an outbreak. among the many cylinders evaluated, the one with the maximum glr constitutes the space-time cluster of cases that is least likely to be a chance occurrence and, hence, is the primary candidate for a true outbreak. the size and location of the scanning window are under dynamic change [28]. the maximum temporal cluster size was determined by considering the incubation period of the disease studied.
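the modified c1′/c2′/c3′ statistics can be sketched as follows. this is illustrative only: the σ floor for flat baselines is our assumption, and the alarm thresholding via h is omitted.

```python
# C1'/C2'/C3' statistics as described above: a 7-day baseline, with C2'
# lagging it by 2 days and C3' summing three consecutive C2' values.

import statistics

def c_stat(counts, t, lag):
    """(x_t - MA)/S over the 7 baseline days ending `lag` days before t."""
    start = t - lag - 7
    if start < 0:
        raise ValueError("not enough baseline history")
    window = counts[start: t - lag]
    mu = statistics.mean(window)
    sigma = statistics.pstdev(window) or 1e-9   # floor for flat baselines
    return (counts[t] - mu) / sigma

def c1_prime(counts, t):
    return c_stat(counts, t, lag=0)

def c2_prime(counts, t):
    return c_stat(counts, t, lag=2)

def c3_prime(counts, t):
    # sum of C_t, C_{t-1}, C_{t-2} derived from C2'
    return c2_prime(counts, t) + c2_prime(counts, t - 1) + c2_prime(counts, t - 2)
```

on a flat baseline of 2 cases/day, a jump to 9 on the last day yields a very large standardized excess for all three statistics.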
for bacillary dysentery, the average incubation period is 1-3 days. therefore, the maximum temporal cluster size in this study was set as (1d, 3d, 5d and 7d). the maximum spatial cluster size can be determined by virtue of the geographical area or the proportion of the whole population. since data on the proportion of the population in each sub-district were unavailable, the maximum spatial cluster size in this study was set as (2, 5, 8 and 10 km), referring to the geographical area of each sub-district. statistical significance was assessed using p values of 0.05. analyses were undertaken using excel, spss software (version 13.0 for windows; spss inc., chicago, il), aegis-cct (available from http://sourceforge.net/projects/chipcluster/), java programming (available from http://java.com/zh_cn/) and satscan (available from www.satscan.org). spss was used for data processing, descriptive statistics and the chi-square test. the bonferroni correction was applied for multiple comparisons to control the family-wise error rate. the significance level α for an individual test was calculated by dividing the family-wise error rate (0.05) by the number of tests [29]. ewma and the cumulative sum were coded in java to determine whether the incidence level was abnormal. satscan was used to analyze the clustering of cases in different sub-districts in beijing based on the space-time permutation scan statistic and to determine whether the incidence level was abnormal. the correlation coefficients between the three evaluation indices (sensitivity, specificity and time to detection) and the parameter values were calculated. table 1 shows the correlation coefficients, with pearson's r and p values. all algorithms showed a strong relationship between the evaluation indices and the parameters' values, except the space-time permutation scan statistic. the great majority of the correlations were statistically significant, with p values less than 0.05 (two-tailed).
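the bonferroni step above reads directly as code; the p-values in the example are invented.

```python
# Bonferroni correction: with a family-wise error rate of 0.05 spread over m
# tests, an individual test is significant only when its p-value < 0.05/m.

def bonferroni_significant(p_values, family_wise_alpha=0.05):
    alpha = family_wise_alpha / len(p_values)     # e.g. 0.05/4 = 0.0125
    return [p < alpha for p in p_values]
```

with four tests, the per-test threshold is 0.0125, so a p-value of 0.02 that would pass an uncorrected 0.05 cutoff is rejected.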
however, for the space-time permutation scan statistic, specificity showed no relation to the spatial cluster size. only when the maximum temporal cluster size was set as 3d did both sensitivity and time to detection exhibit a significant correlation with the spatial cluster size (p < 0.05). figs. 1-4 describe the average sensitivity, specificity and time to detection of the five algorithms. the top plot of fig. 1 shows the sensitivity versus λ values for the three control limit coefficients (k). in all of the combinations of λ and k values, the sensitivities were greater than 90%. as λ increased from 0 to 0.9, the sensitivity also increased. the middle plot of fig. 1 shows the specificity for the three k values. the specificity for the three k values followed a similar trend with λ, increasing until λ = 0.3 and then declining gradually. the bottom plot of fig. 1 shows the effect of λ values on the detection timeliness of ewma. time to detection declined gradually with increasing λ values. among these combinations of different λ and k values, λ = 0.9, k = 1.0 showed the shortest detection time, with a specificity of 89.5%. there were only two combinations of λ and k values that had a detection time longer than half a day (λ = 0.1, k = 2.0 and λ = 0.1, k = 3.0). out of the remaining combinations, there were 11 which had specificity greater than 89.5%. within these 11 combinations, λ = 0.7, k = 3.0 showed the greatest specificity (97.2%). according to the evaluation criteria, we concluded that λ = 0.7, k = 3.0 was the optimal parameter combination for ewma. fig. 2 shows the influence of different h and k values on sensitivity, time to detection and specificity. the sensitivity was shown to decrease as k increased. as the sensitivity decreased, time to detection increased. among the combinations of h and k values, (h = 3σ, k = 0.1) had the shortest time to detection of 0.3 day (specificity: 55%).
there were 14 combinations with a detection time within half a day of (h = 3σ, k = 0.1). all of these 14 combinations had specificities greater than 55%, with the highest being 95.6% when h = 5σ, k = 0.4. according to the evaluation criteria, (h = 5σ, k = 0.4) was found to be the optimal combination for c1′. the relationship between performance and the combination of h and k values for c2′ is shown in fig. 3. we found that sensitivity declined as k increased from 0.1 to 1.5. in comparison, specificity and time to detection increased as sensitivity declined. the combination (h = 3σ, k = 0.1) showed the shortest detection time (0.2d), with a specificity of 46.2%. similarly, 14 combinations had a detection time within half a day of (h = 3σ, k = 0.1). the specificities for all of these 14 combinations were greater than 46.2%, with the highest recorded at 88.6% when h = 4σ, k = 0.5. accordingly, (h = 4σ, k = 0.5) was considered the optimal combination for c2′. fig. 4 shows the influence of h and k values on the sensitivity, time to detection and specificity of c3′. specificity and time to detection grew overall with the k value, while sensitivity declined gradually as k increased. among the combinations of h and k values, (h = 3σ, k = 0.1) had the shortest time to detection (0.1d), with a specificity of 18.3%. likewise, there were 14 combinations with a detection time within half a day of (h = 3σ, k = 0.1). 13 of these 14 combinations had specificities greater than 18.3%, the highest being 73.9% when h = 3σ and k = 0.8. consequently, (h = 3σ, k = 0.8) was considered the optimal combination for c3′. we found that the space-time permutation scan statistic exhibited no real difference in specificity when the parameter combinations were changed (table 2). when the maximum temporal cluster size was set as 3d and the maximum spatial cluster size as 2 km, the detection time was found to be the shortest.
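the selection rule applied in these comparisons (shortest detection time, overridden by a markedly more specific combination within half a day) can be sketched as follows; the grid values are illustrative, loosely echoing the c1′ numbers above.

```python
# Parameter selection per the evaluation criteria: take the fastest
# combination, but prefer one whose detection time is within half a day of
# it and whose specificity is more than 5 points higher.

def pick_optimal(results):
    """`results` maps a parameter combo to (time_to_detection_days, specificity)."""
    fastest = min(results, key=lambda p: results[p][0])
    ttd0, spec0 = results[fastest]
    better = {p: (t, s) for p, (t, s) in results.items()
              if t - ttd0 <= 0.5 and s - spec0 > 0.05}
    if not better:
        return fastest
    return max(better, key=lambda p: better[p][1])   # highest specificity wins

grid = {
    "h=3sigma,k=0.1": (0.3, 0.550),   # fastest, but unspecific
    "h=5sigma,k=0.4": (0.8, 0.956),   # within half a day, far more specific
    "h=4sigma,k=1.0": (1.4, 0.970),   # too slow to qualify
}
```

on this invented grid, pick_optimal selects the high-specificity combination rather than the fastest one.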
this combination also resulted in the highest specificity and sensitivity. thus the optimal parameters were taken as 3d (maximum temporal cluster size) and 2 km (maximum spatial cluster size). five commonly used algorithms were evaluated by comparing their performance with optimized parameter values. the performance of these algorithms is shown in table 3, with p values. according to bonferroni's procedure, the significance level α for an individual test was calculated by dividing the family-wise error rate (0.05) by four, giving 0.0125. of the algorithms evaluated, the space-time permutation scan statistic had a higher average specificity than any other algorithm (p < 0.001), followed by ewma (95.2%), while c3′ showed the lowest specificity (73.7%). ewma had the shortest time to detection (0.1d), while c1′ showed the longest time to detection of one day. the space-time permutation scan statistic had a relatively longer time to detection than ewma (0.2d), but this difference was not statistically significant (p = 0.081 > 0.0125). according to the evaluation criteria and the statistical tests, we concluded that the space-time permutation scan statistic was the optimal algorithm, followed by ewma. the space-time permutation scan statistic had a specificity of 99.9%, which means only one false alarm per 1000 days, whereas ewma was evaluated to trigger one false alarm every 21 days. the burden of bacillary dysentery has long been thought to be substantial in many developing countries [30]. detecting outbreaks in their early stages may prevent secondary infections, and subsequently an epidemic, from occurring. the benefits of this extend not only to the individual, but also to the community, in terms of morbidity prevented and costs saved. in the case study from 2007, the outbreak was detected only when the accumulated number of cases reached the threshold (10 cases in 3 days within the same geographic area).
the problem with this method of detection is that the optimal opportunity to curb an outbreak is often missed. in the event of pandemic influenza or another emerging infection, missing this opportunity may have national or global implications. we observed that the effectiveness of the same algorithm varied significantly with different parameter values. for example, the specificity and time to detection were 73.9% and 0.6d for c3′ (h = 3σ, k = 0.8) versus 61.8% and 0.6d for c2′ (h = 3σ, k = 0.2). if the performance of c3′ and c2′ were compared with these values, c3′ (h = 3σ, k = 0.8) would seem better than c2′ (h = 3σ, k = 0.2) according to the evaluation criteria, which might lead to the conclusion that c3′ was more effective than c2′. in fact, c2′ (h = 4σ and k = 0.4) had a detection time of 0.6d and a specificity of 85.8%, 11.9% higher than the 73.9% of c3′ (h = 3σ, k = 0.8). in this case, c2′ (h = 4σ and k = 0.4) was better than c3′ (h = 3σ and k = 0.8). the difference in the apparent performance of the two algorithms is largely caused by the difference between their parameter values. therefore, parameter values should be optimized prior to the performance evaluation of algorithms. a wide range of outbreak detection algorithms are available, including temporal, spatial and spatial-temporal [31]. in this study, we used both the temporal and spatial information of the reported cases. the temporal information refers to the onset date of the illness, and the spatial information refers to the sub-district where the case resides. cusum and ewma are commonly used to analyze temporal data, as they can be adjusted to identify a meaningful change from the expected range of data values. we calculated the daily case counts reported for each sub-district, and then judged whether the change from the expected value was significant within each sub-district. so in our study, cusum and ewma can also give us both the temporal and spatial information of a signal.
our study focused on the correlation between algorithm parameter values and performance. by calculating the correlation coefficient and comparing the performance of different algorithms with various parameter values, we observed a strong correlation between them. differences in parameter values may have resulted in differences in performance among these algorithms. consequently, we recommend that, before evaluating the effectiveness of an outbreak detection algorithm, parameter values should be optimized for the given disease to remove the noise resulting from the influence of parameter values. in our study we found that space-time permutation scan statistics and ewma outperformed the other algorithms in both timeliness and accuracy for detecting bacillary dysentery outbreaks. ewma applies weighting factors which decrease exponentially. the choice of the weighting factor k is the key to successful outbreak detection. with a proper k value, the ewma control procedure can be adjusted to be sensitive to a small or gradual drift in the process. adjusting the k value should therefore be an imperative step before applying ewma in practice. space-time permutation scan statistics consider both temporal and spatial factors. the scanning window changes dynamically to avoid selection bias. however, space-time scan statistics do not consider population movements. in addition, space-time scan statistics can only identify clusters with simple, regular shapes. if a cluster does not conform to a regular shape, the algorithm may perform poorly. therefore, when space-time permutation scan statistics are used to detect outbreaks, it is imperative to understand the cluster shape; only with the right cluster shape can space-time permutation scan statistics demonstrate high detection efficacy. despite these limitations, the use of space-time permutation scan statistics allowed early outbreak detection for bacillary dysentery.
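the role of the weighting factor k can be illustrated with a standard ewma control chart, using the asymptotic upper control limit mu + L·sigma·sqrt(k/(2−k)). the data and parameter values below are invented for illustration, not taken from the study:

```python
import math


def ewma_alarms(counts, mu, sigma, k=0.4, L=3.0):
    """EWMA control chart: smaller k weights history more heavily and is
    more sensitive to small, gradual drifts; larger k reacts faster to
    sharp spikes. Illustrative sketch, not the paper's exact setup."""
    z = mu                                   # start at the in-control mean
    limit = mu + L * sigma * math.sqrt(k / (2 - k))   # asymptotic upper limit
    alarms = []
    for t, x in enumerate(counts):
        z = k * x + (1 - k) * z              # exponentially weighted average
        if z > limit:
            alarms.append(t)
    return alarms


# a slow upward drift that a plain threshold on daily counts would miss
drift = [3, 3, 4, 3, 4, 4, 5, 5, 6, 6, 7, 7]
# with k = 0.3 the drift is flagged from index 8 onward
print(ewma_alarms(drift, mu=3.5, sigma=1.0, k=0.3))
```

varying k against the same drifting series shows concretely why tuning the weighting value is "an imperative step" before deploying ewma.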
previously, hutwagner et al. [14] compared the time to detection using simulations based on influenza-like illness and pneumonia data. in her study, c1, c2 and c3 were found to have an increasing time to detection. in comparison, we found a declining detection time for our modified c1′, c2′ and c3′. differences in how the time to detection was calculated may explain the discrepancy between the two studies. in our study, when an algorithm failed to detect a simulated outbreak, the time to detection was set to the largest value (3 days). c1, c2 and c3 have increasing sensitivities; as the sensitivity increased from c1 to c3, the number of missed outbreaks decreased and consequently the time to detection declined accordingly. an integrated time to detection might be recommended in order to address this limitation [14]. theoretically, the optimal parameter value can maximize an algorithm's ability to detect aberrations in disease incidence and minimize the probability of producing a false alarm. the balance between accuracy and timeliness is still a matter of debate. in our study, we set simple and practical evaluation criteria. because the time to detection integrates the effect of sensitivity, we simplified the three evaluation indices to two: time to detection and specificity. the former reflects both the timeliness and sensitivity, and the latter reflects the accuracy of outbreak detection. we gave timeliness priority over accuracy due to bacillary dysentery's short incubation period and the fact that it can be both food-borne and water-borne. when deciding which index should be given priority, practitioners should take the length of the incubation period, the mode of transmission and the current situation (climatic, social, demographic, economic factors, etc.) into consideration.
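the convention described here — missed outbreaks assigned the maximum delay, so that time to detection also integrates sensitivity — can be written as a small scoring function. the names are ours and the example data are invented:

```python
def mean_time_to_detection(detection_days, max_delay=3):
    """detection_days: for each simulated outbreak, the delay (in days)
    from outbreak start to first alarm, or None if it was never detected.
    Missed outbreaks are penalized with the maximum delay, as in the study,
    so a more sensitive algorithm also earns a shorter mean delay."""
    delays = [max_delay if d is None else d for d in detection_days]
    return sum(delays) / len(delays)


# a sensitive algorithm: catches every outbreak, a little late on average
print(mean_time_to_detection([1, 0, 2, 1]))        # 1.0
# a less sensitive one: faster when it fires, but misses two outbreaks
print(mean_time_to_detection([0, 0, None, None]))  # 1.5
```

this shows why, under this penalized metric, the more sensitive c3 can score a shorter mean time to detection than c1 even if its individual detections are no faster.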
the variation patterns of the evaluation indices with changing parameter values observed in our study were consistent with previous related studies [9, 12, 14, 15, 32, 33]. for example, hutwagner et al. [14] observed that c1, c2 and c3 had increasing sensitivity, but decreasing specificity as the sensitivity increased. in our study, we also observed this trade-off between sensitivity and specificity in our modified c1′, c2′ and c3′. we further observed a growth in sensitivity and specificity as weighting values increased from 0 to 0.3, and the range of weighting values from 0.4 to 0.9 seemed to enable a better performance. this recommendation was also made by jackson et al. [15], who suggested weighting values of 0.4 and 0.9 for ewma. there are several factors which may limit the generalization of our findings. to apply these five algorithms, information on the specific setting (workplaces, schools, etc.) is often required. this information is usually not available in the current national disease surveillance reporting and management system in china. consequently, the sensitivity of the five algorithms may be lower when a bacillary dysentery outbreak occurs in a school, as the cases may be scattered across different sub-districts. it is therefore important to collect extra information on workplaces, schools and other units. due to a lack of actual outbreaks, we injected simulated outbreaks into the baseline so we could undertake a performance assessment of these outbreak detection algorithms. we varied the size, magnitude, temporal progression pattern, season and spatial distribution of bacillary dysentery in order to have a variety of outbreak conditions to test. as these are approximations, it is difficult to evaluate how close our simulations came to actual outbreaks. consequently, further research is needed on predicting the actual performance of these algorithms.
advisors of expert sgoha, yung rwh, peiris jsm.
- effectiveness of precautions against droplets and contact in prevention of nosocomial transmission of severe acute respiratory syndrome (sars)
- a model-adjusted space-time scan statistic with an application to syndromic surveillance
- the bioterrorism preparedness and response early aberration reporting system (ears)
- national bioterrorism syndromic surveillance demonstration program
- ambulatory-care diagnoses as potential indicators of outbreaks of gastrointestinal illness - minnesota
- wsare: what's strange about recent events?
- time series modeling for syndromic surveillance
- disease outbreak detection system using syndromic data in the greater washington dc area
- measuring outbreak-detection performance by using controlled feature set simulations
- bio-alirt biosurveillance detection algorithm evaluation
- evaluating detection of an inhalational anthrax outbreak
- an evaluation model for syndromic surveillance: assessing the performance of a temporal algorithm
- comparing syndromic surveillance detection methods: ears' versus a cusum-based methodology
- comparing aberration detection methods with simulated data
- a simulation study comparing aberration detection algorithms for syndromic surveillance
- simulation for assessing statistical methods of biologic terrorism surveillance
- an open source environment for the statistical evaluation of outbreak detection methods
- approaches to the evaluation of outbreak detection methods
- analysis about epidemic situation of dysentery in near upon fourteen years in beijing
- conceptual model for automatic early warning information system of infectious diseases based on internet reporting surveillance system
- national protocol for information reporting and management of public health emergencies (trial)
- a software tool for creating simulated outbreaks to benchmark surveillance systems
- the exponentially weighted moving average (ewma) rule compared with traditionally used quality control rules
- statistical quality control methods in infection control and
hospital epidemiology, part i: introduction and basic theory
- evaluation and extension of the cusum technique with an application to salmonella surveillance
- the cusum chart method as a tool for continuous monitoring of clinical outcomes using routinely collected data
- evaluating cluster alarms: a space-time scan statistic and brain cancer in
- a space-time permutation scan statistic for disease outbreak detection
- multiple comparison procedures updated
- a multicentre study of shigella diarrhoea in six asian countries: disease burden, clinical manifestations, and microbiology
- algorithms for rapid outbreak detection: a research synthesis
- a simulation model for assessing aberration detection methods used in public health surveillance for systems with limited baselines
- evaluation of school absenteeism data for early outbreak detection
key: cord-030957-45tc5ksf authors: schaap, andrew; weeks, kathi; maiguascha, bice; barvosa, edwina; bassel, leah; apostolidis, paul title: the politics of precarity date: 2020-08-28 journal: contemp polit theory doi: 10.1057/s41296-020-00435-z sha: doc_id: 30957 cord_uid: 45tc5ksf
precarity leads to social isolation as workers find themselves segregated and alienated by work processes while the capacity to sustain community is undermined (pp. 8-10). moreover, precarity leads to temporal displacement, with precarious workers finding they have time for little other than work: they must constantly make time to find and prepare for work and, in doing so, become out of sync with the normal rhythms of social life (pp. 5-8). precarity involves social dislocation as people are forced to relocate to adapt to precarious situations at the same time as their movements are constrained and policed (pp. 10-12). importantly, precarity is distributed unequally, with people of colour, women, low-status workers and many in the global south experiencing its most devastating effects. at the same time, however, some of its aspects penetrate all social strata. as apostolidis (2019, p. 2) puts it, 'if precarity names the special plight of the world's most virulently oppressed human beings, it also denotes a near-universal complex of unfreedom'. recognizing that anti-capitalist struggle has always been a fight for time, apostolidis (2019, p. 15) reflects on how this fight should be adapted to our present political conjuncture. to develop this vision of radical democratic politics, he turns to the experience of migrant day labourers to both diagnose contemporary social pathologies and envision alternative social possibilities. the research for the book is based on apostolidis's involvement in the activities of two worker centres located in seattle, washington, and portland, oregon. in addition to participating in various activities of the centres (such as staffing phones and running occupational health and safety sessions), the research team conducted 78 interviews with migrant day labourers. through interpreting the interviews, apostolidis practices a kind of political theory inspired by paulo freire, which he characterises as 'critical-popular analysis' (p. 30).
by attending to the self-interpretations of the research participants, apostolidis characterises precarity and considers the possibility of its transformation in terms of four generative themes around which the book is structured. the first three themes speak to the experience of precarity: 'desperate responsibility', 'fighting for the job' and 'risk on all sides, eyes wide open'. the fourth theme envisions an anti-precarity politics in terms of a 'convivial politics'. as apostolidis acknowledges, there is an ethnographic dimension to this project since it provides a thick description of the everyday experiences and practices of migrant day labourers. however, it also entails critical-popular analysis since apostolidis aims to co-create political theory with the research participants. he does so by staging a constructive dialogue between the self-interpretations and practical insights of day labourers and the systematic and defamiliarized perspective afforded by critical theory. the fight for time not only provides insight into how some of the most vulnerable people in society experience, negotiate and resist precarity: from this social perspective, it aims to generate a wider understanding of what agency all working (and 'out of work') people have to challenge the precaritisation of social life. as such, the book pivots on a fundamental distinction between day labour as exception and day labour as synecdoche. as kathi weeks explains below, this paradigmatic understanding of the precarity of day labouring enables a perspectival shift from the singular experiences and ideas of migrant day labourers to the more general social condition of precarity and the possibility of its transformation. on the one hand, apostolidis considers those exceptionalising forms of precarity that dominate day labourers' lives, differentiating them from other members of society.
on the other hand, however, apostolidis considers the significance of day labour as synecdoche for how precarity permeates social relations on a much broader social scale. a synecdoche is a figure of speech in which a part represents the whole. an often-remarked-on synecdoche in political language is that of the people, whereby the poor (those who do not participate in politics) speak in the name of the citizenry (the people as a whole). similarly, apostolidis treats day labour as synecdoche, according to which the exceptional forms of precarity experienced by labourers might make visible the precarity that increasingly conditions all social relations. in the final chapters, apostolidis explores how worker centres might also function synecdochally insofar as the purpose of association is construed not only instrumentally, as protection against the risks associated with precarity, but in terms of their constitutive potential to sustain convivial networks of political possibility for more mutually supportive, creative and pluralistic forms of solidarity than those afforded by traditional unionised spaces. it is in these spaces, which are both mundane and potentially extraordinary, that apostolidis discerns a nascent form of radical democratic politics that consists in a struggle against precarity. this entails three key elements: first, the refusal of work, i.e. the refusal to allow one's life to be consumed according to one's role as worker within capitalist social relations; second, the constitution of spaces for egalitarian social interaction that resist the imperatives of neoliberal governance; and, third, the reclamation of people's time from capitalist and state powers (p. 34). this recuperation of time (the time robbed from people's lives, which is symptomatic of alienated labour) is fundamental to understanding how day labour might function as synecdoche both of the wider social condition of precarity and the possibility of its transformation.
as apostolidis explains, 'working people are running out of time and living out of time' (p. 8; emphasis in original). in this context, he suggests, day labourers' socialized activities within the 'time-gaps' of the precarious work economy indicate how the 'time of everyday precarity' might be remade into 'novel, unpredictable, and politically generative temporalities' (p. 29). the contributors to this critical exchange engage with two key aspects of the politics of precarity. the first relates to the subject of an anti-precarity politics and the extent to which the exceptional but inevitably partial experiences of day labourers can function as a synecdoche for the precarity of all. edwina barvosa questions whether identification with precarity provides an adequate basis for an emancipatory politics, given that it may condition unreflexive modes of action. bice maiguashca suggests that an intersectional politics would require attending to multiple exceptions, each with their own set of experiences and aspirations, as the basis for a coalitional anti-precarity politics. leah bassel similarly advocates building a politics of migrant justice from the knowledge experiences that are generated by a matrix of oppression, which requires acknowledging struggles against patriarchy and racism as well as capitalist domination. in this context, she emphasises the political imperative of making settler colonialism visible in any analysis of migrant justice, including acknowledging the social position of migrants as settlers. in contrast, kathi weeks highlights how certain appropriations of the marxian category of lumpenproletariat resonate with apostolidis's synecdochal interpretation of day labour. as such, it can be interpreted as a conceptual articulation of a heterogeneous - rather than a homogenizing - political subject.
indeed, in his response, apostolidis clarifies that the use of the term synecdoche indicates that the perspectival shift from the experience of day labour to the general social condition of precarity is intended as a contingent act of representation rather than a reductive empirical truth. the second issue relates to the mode and site of political organizing against precarity, encapsulated in apostolidis's demand of 'workers' centres for all'. weeks emphasises the urgency of politicizing workplace death and injury, which is obscured by the managerial appropriation of discourses of health and well-being in the service of increased worker productivity. yet, she is concerned that worker centres might be susceptible to co-optation. moreover, she wonders whether worker centres require embodied social interaction to be effective or might also be realised in virtual spaces. bassel highlights how such anti-precarity spaces are both sustained by the affective labour of women and may reproduce other forms of oppression. maiguashca wonders what the visionary pragmatism that apostolidis ascribes to day laborers has in common with the principled pragmatism that she and catherine eschle observed among feminist activists involved in the global justice movement. barvosa questions the assumption that global inequality is most effectively redressed through the mobilization of oppressed groups according to a salt-of-the-earth script. she invokes instead an alternative keep-only-a-competency script, according to which social inequality might be more effectively reduced by the voluntary giving of the wealthy. in response, apostolidis elaborates on the benefits of the critical-popular approach he adopts in the book.
while the practical focus of the fight for time supports a coalitional politics as a key mode of struggle, apostolidis highlights the limits of a 'coalitional epistemology', which would require a cumulative assemblage of particularised knowledges prior to envisioning a desirable form of mass solidarity. lois mcnay (2014) has rightly highlighted how radical democratic theory risks becoming 'socially weightless' to the extent that it treats the social world as contingent, devoid of any significance of its own and able to be reshaped in limitless ways through political action. radical democrats tend to over-estimate the agency of members of oppressed groups when they neglect the mundane experiences of social suffering, which undermine individuals' capacity to participate in politics (mcnay, 2014, pp. 11, 14-15). as this critical exchange demonstrates, the fight for time challenges theorists of radical democracy to recognise the weight of the world while reflecting on how political agency is shaped, constrained and enabled by the conditions that it seeks to transform. moreover, it challenges us to reflect on how political solidarity is possible across the differences and inequalities that are currently being exacerbated and intensified by the social production of precarity in response to the covid-19 pandemic.
andrew schaap
the future of anti-precarity politics
the discussion that follows is constructed around three insights gleaned from the fight for time about how to formulate an anti-precarity politics in the u.s. today. the first concerns one target for such a politics, the second its political subject, and the third considers one of its organizational sites. all three draw on apostolidis's approach to day labouring as both singular and paradigmatic, as at once an exceptional case and an exemplar of precarious work in the contemporary economy.
i will begin with one of the targets of an anti-precarity politics apostolidis identifies that seems critically important today: publicizing and politicizing the incidents of work-related death and injury. this is one aspect of day labouring that might be distinctive insofar as it is more hazardous than many other jobs, but it is also appallingly common to precarious work under postfordism more generally. (if we include the household as a site of unwaged work as well, the rate of workplace injury and death increases dramatically.) apostolidis mentions briefly an encounter with a nurse who talked about the dangers of working intimately with bodies in need, and this certainly squares with the literature on other forms of care work, especially of home health aides (one of the fastest growing jobs in the u.s.), whose privatized places of work, and complex as well as under-regulated employment relations, can easily render workers unsafe. publicizing this issue is difficult because, as apostolidis notes, the problem of workplace death and injury is strangely absent from popular consciousness. public awareness is only occasionally piqued when massive disasters are reported: 'intervallic evocations of shock enable an overall scheme of normalization' (p. 3). the anarchist polemicist bob black, in his 1985 essay 'the abolition of work', speaks to this normalization - using his own inimitable brand of sarcasm in a bid for attention to the issue - by claiming that we have made homicide a way of life: 'we kill people in the six-figure range (at least) in order to sell big macs and cadillacs to the survivors' (black, 1996, p. 245). i was struck by the effort with which, in her book on emma goldman, ferguson (2011) attempts to make visible the violence that capital and the state used against workplace organizing in the late 19th and early 20th centuries, which was rarely reported at the time and remains largely absent from our history books. ferguson (2011, p.
22 ) even offered, to powerful effect, a visual aid in the form of a six-page list, a 'bloody ledger', of what she could find of the documented instances of violence levied by public and private armies against striking or resistant workers. for the most part, this spectacular, overt wielding of force and violence over workers by the state and capital has been replaced by brutality meted out through the tools and within the routines of the labour process, such that the perpetrators are typically less directly involved or clearly identifiable. i agree with apostolidis when he argues that anti-precarity political activism requires 'a self-conscious, strategically eclectic, affectively inventive politics of the body' (p. 145). the trick, as i see it, is how not only to publicize but also to politicize the issue of bodily harm, given how extensively the idiom of health has been rendered amenable to the logics and aims of biopolitical management. what vocabulary can be used when the seemingly most obvious and most legible candidate, the language of health, has become so tightly sutured to measures of productivity and complicit with the 'workplace wellness' programs dedicated to its restoration and maximization? although it may still be a language through which the problem of work-related death and injury can be publicized, particularly in light of the ways it is currently deployed to pathologize various modes of indiscipline, i am less certain that the individualizing and biologizing vocabulary of health can be used as a tool of work's politicization. the second aspect of the analysis that i want to consider once again draws on the day labourer as both a specific figure and an archetype of precarious work in order to think further about how to conceptualize a political subject adequate to a broad anti-precarity politics. 
the case of day labourer activism would seem to lend support to the proposition that the marxist category of the lumpenproletariat is once again resonant. the concept is not offered as a form of self-identification, but rather as a mechanism of conceptual articulation, particularly across lines of gender, race, and citizenship, that might serve as an alternative to the analytical and political categories of proletariat and working class. famously disparaged by marx and engels as the sub-working class, or, more precisely, a de-classed and disparate collection that includes vagabonds, former prisoners, pickpockets, brothel keepers, porters, tinkers, and beggars (marx, 1981, p. 75), the lumpenproletariat was negatively contrasted to the upstanding 'labouring nation' exemplified by the economically and socially integrated - hence, powerful and politically reliable - industrial proletariat. (although it should be noted that marx and engels include some discards from other classes as well, including the bourgeoisie.) even the unemployed members of the industrial reserve army were posited as fully inside capitalist relations, as opposed to the surplus population relegated to the outside: that subaltern, disorganized, and politically untrustworthy non-class of people 'without a definite occupation and a stable domicile' (engels cited in draper, 1972, p. 2287). engels included day-laborers in his list of the lumpenproletariat, and those who have since tried to reclaim and revalue the category - most notably, bakunin, fanon, and the black panthers - have added as well various modes of petty criminality, maids, sex workers, and 'the millions of black domestics and porters, nurses' aides and maintenance men, laundresses and cooks, sharecroppers, unpropertied ghetto dwellers, welfare mothers, and street hustlers' with 'no stake in industrial america' (brown, 1992, p. 136).
while i am interested in the category as a way to make particular connections among prison workers, domestic workers, day laborers, sex workers, laborers in various underground economies, and undocumented migrants, it has also been used to identify linkages among a host of precariously employed people (see, for example, bradley and lee, 2018). indeed, refusing the original distinction between proletariat and lumpenproletariat, the latter category could serve as the general designation that links the lumpen to the proletariat through the hinge category of the precariat. engels once criticized kautsky for using the label proletariat as inclusive of what engels sought to set apart as the lumpen class; kautsky's proletariat was a 'squinty-eyed' concept because it looks in both directions, thereby blurring an important distinction (draper, 1972, p. 2288). perhaps today the lumpenproletariat could serve as a squinty-eyed, broad category, more adequate to a u.s. political economy where the distinctions between formal and informal employment, employment and unemployment, work and nonwork are breaking down. the specific advantages of this formulation of the lumpen category include its breadth. stallybrass (1990, p. 72) notes how the lumpenproletariat is often described in terms of the 'spectacle of multiplicity' it evokes in contrast to the unified sameness of the conception of the proletariat. this heterogeneous breadth would seem especially appropriate to a political economy in which, as apostolidis notes, rather than determine who exactly counts as a precarious worker, 'the better question might be: who does not belong to the vast population of the precaritised?' (apostolidis 2019, p. 4; emphasis in original).
another attraction of the concept is how marx and engels's pejorative characterization of the lumpen class betrays some of the ways that the moralized understanding of work and family -recall the description of the lumpen as lacking or marginal to the stabilizing force of both occupation and family -haunts their analyses. for this reason, some, myself included, are interested in how the lumpenproletariat can, as thoburn (2002, p. 435 ) notes, be figured as the 'class of the refusal of work'-and, i would add, the refusal of family. finally, i am interested in how it was conceived as politically unreliable in a way that seems more realistic than the tendency for some to posit some kind of special 'wokeness' to the working class, only to be disappointed when they turn out to be politically erratic, sometimes acting against what are taken to be their class interests. the third and last point of particular interest for me in apostolidis's theorizing about the politics of work today was the argument about the worker centre as a mode of labour organizing for precarious workers. in thinking about analogous organizational innovations two examples come to mind. both share some resemblances with the worker centre even if they are associated with more privileged workers. the first is what might be characterized as a dystopian version of the worker centre that goes by the label coworking. interestingly, coworking originated from below as activist projects to create spaces of community and collaboration among elements of the white-collar precariat, but as de peuter et al. (2017, p. 692) note: 'inside a decade, an innovation from below was drawn out of the margins, harnessed by capital and imprinted with corporate power relations'. today, by way of these global real estate ventures, capital can both appropriate the value waged workers create and charge them rent, just as we pay for the households where so much of our free reproductive labour is enacted. 
but what might seem quite distant from the worker centres apostolidis describes comes a little closer if we take seriously the contradictory (merkel, 2019) or ambivalent (de peuter et al., 2017) status of coworking, which may provide opportunities for the convivial mutualism that apostolidis finds in the worker centre while also interpellating members as entrepreneurial individuals, and which 'is animated by a tension between accommodating precarity and commoning against it' (de peuter et al., 2017, p. 689) . i am left with a question that i think might be worth pursuing: is coworking best understood as a specular image against which we can recognize the progressive potential of the worker centre, or is it a cautionary tale about its potential to be co-opted? the second comparison is to a very different model of labour organizing for precarious workers. this is a project based in new york city called wage, an acronym for working artists in the greater economy. it started in 2008 as a project committed to help artists to be remunerated for all the work they do with non-profit arts organizations and museums. their 'womanifesto' says they demand payment 'for making the world more interesting' (wage, 2020) . among other initiatives, wage's efforts involve knowledge production about various arts organizations and the contracts they make with independent artistic workers, the development of a platform that helped artists negotiate fair compensation, and a certification for which arts institutions can apply. this approach to organizing precarious workers is comparable to the model of the worker centre in the sense that each of the projects seeks at once to facilitate work and to acknowledge anti-work critical languages and agendas. one of the questions that the comparison with this project raises is whether the forms of convivial mutualism and politicization apostolidis found in the worker centre require the kind of 'embodied social interaction' (p. 
34) and face-to-face encounters that platform models of organizing do not necessarily prioritize.

kathi weeks

in 2010, i co-authored a book with catherine eschle entitled making feminist sense of the global justice movement, which sought to make visible, audible and intelligible a strand of feminist anti-capitalist activism that was being consistently ignored in the international relations and social movement literatures (eschle and maiguashca, 2010). driven by the conviction that taking seriously the words and deeds of the women engaged in these struggles would not only yield a more intricate and complete empirical map of the movement but also prompt a re-conceptualisation of its meaning and trajectory, we embarked on fieldwork in several countries as well as interviews with 80 activists over a period of several years. by seeking to expose the gendered power relations that marginalise women within the world social forum process, as well as in the academic literature about this movement, and by choosing to speak to and from the feminist struggles that emerged to confront them, the book was written in solidarity with feminist anti-capitalist activists. paul apostolidis' book the fight for time encapsulates a very similar kind of intellectual-political project, as it also seeks to capture the self-understandings of migrant day labourers in their everyday struggles, to reflect on how they resonate with contemporary critical theoretical concepts, and to learn how, taken together, these empirical and conceptual insights may lead us to a renewed vision of what a left politics might look like for our age. like our book, paul's is unashamedly political in intent and, as such, it embodies a form of 'militant research', which 'activates enlivening moments of contact between the popular conceptions of day labourers and scholars' attempts to describe and account for precarity in sociostructural terms' (p. 21).
like our project, paul's research wants to bring what has been rendered marginal, both politically and academically, to the centre of our scholarship and theorising. and like my own work more generally, paul's is driven by a commitment to revitalising both the theory and practice of left politics. in my contribution to this critical exchange i will draw out the points of contact between our respective approaches as well as tease out what i take to be our differences. in doing so, i aim to underline not only what is distinctive about paul's efforts, but also the shared challenges that we face as critical theory scholars attempting to chart a path for the theory and practice of a collective, transformative politics. more specifically, i want to highlight two broad lines of inquiry that emerge when undertaking this kind of politicised scholarship. the first seeks to open up a dialogue about the challenges that implicitly accompany the quest to construct a critical theory that can simultaneously speak to and from 'the exception' and 'the synecdoche', or, to put it otherwise, that can light a path from the particular to the universal. the second concerns the role of utopian thinking in galvanising and giving direction to a radical left politics that is inclusive and fit for purpose in the 21st century. turning first to the task of critical theory, understood in marx's terms as the self-clarification of the wishes and struggles of the age, it is imperative that one grounds one's analysis in the practices and aspirations of a particular marginalised subject. elaborating on this point, leonard (1990, p. 14) states, 'without the recognition of a class of persons who suffer oppression, conditions from which they must be freed, critical theory is nothing more than an empty intellectual enterprise'.
now, while apostolidis and i agree on this, and both of us have chosen 'addressees' who are subjected to oppressive power relations that undermine their life chances and denigrate their ways of knowing and feeling, the conditions and experiences which give rise to and shape their respective ideas and practices are significantly different. indeed, despite some important overlaps, the radical politics and utopian imagination that emerge from each constituency -precarious labourers, on the one hand, and feminist activists, on the other -diverge considerably. so, what are these differences, and what lessons might be drawn from this comparative analysis for those of us seeking to develop a comprehensive critical theory that moves seamlessly from the exception to the synecdoche? apostolidis' chosen addressee is the migrant day labourer living precariously from day to day in a hostile environment in the us. framed as an exploited class, apostolidis' chosen subject wages his struggle for survival and dignity on the terrain of labour relations. while paul rightly recognises that day labourers, as a group, are also gendered and racialised subjects, his study remains primarily focused on the collective efforts of male labourers to resist forms of denigration and harm that mark their lives as workers and to overturn the destructive and exploitative practices of an unregulated capitalist economy, more generally. by contrast, my feminist interlocutors were relatively privileged economically in comparison to other women in their respective societies -and certainly to the day labourers of apostolidis's book. moreover, most of these women were well educated and, although many lived precarious professional lives (e.g. their ngo funding had to be secured year on year), the women themselves were, in the main, leading comparatively secure lives both materially and socially (they had families and belonged to social movement networks).
finally, all of our activists were already politicised and involved in consciousness-raising activities (e.g. our fieldwork in brazil exposed popular education as a common practice) and, to this extent, were engaged in a form of feminist praxis that quite self-consciously and explicitly sought to transform the world they lived in. in sum, pace apostolidis' claim that precarity is a 'near universal complex of unfreedom' (p. 2), it is not the obvious starting point for conceptualising the challenges faced by these women. given these different starting points, what kind of politics emerges from each constituency, what utopian visions accompany them, and to whom are they directed? for apostolidis, an anti-precarity politics demands a 'post-work' future, one in which we all refuse to assume the responsibility for facing up to and accepting the consequences of precarity as an inevitable condition of life. instead, we are entreated to engage in a 'politics of demand' that seeks to reclaim our wages and our time ('for what we will') from predatory capitalist powers. more concretely, apostolidis outlines several attendant policies, including the introduction of a universal basic income and the creation of affective spaces of embodied social interaction, including multiple worker centres. as he puts it, 'if all working people could gain access to workers centres like those that are inspiring such utopian effulgence … such a politics could well find masses of adherents and assume more fully developed form in our common precarious world' (p. 34). this is a resolutely anti-capitalist vision of a transformed world demanded by and imagined for all workers. or, to put it in fraser's (1994) terms, this is a bold call for a social politics of redistribution. turning to the feminist activists of my project, we find an alternative vision of what a better, more just future looks like.
and while it is also anti-capitalist in orientation, it refuses to centralise either the realm of 'work' or 'workers' as its central axis of liberation. instead, the politics of demand that emerges from this politicised subject targets not only capitalism as a system of power relations, but also patriarchy and racism. in this context, all three systems of power are understood as interlinked and pervasive to the extent that they cut across all social realms (economic, social, political, cultural) and are reproduced in both the public and private sphere. each, however, is also sui generis and therefore requires specific strategies if it is to be overturned. moreover, on the affirmative side, our feminist interlocutors articulated their vision for the future in terms of two sets of demands. the first took the form of multiple proposals for policy change that seek to address context-specific problems, such as violence against women, reproductive health, labour rights (including women's right to work) and environmental degradation. the second was normative and universal in nature and revolved around the identification and defence of a set of ethical values -bodily integrity, equality, fulfilment of basic needs, peace and respect for the environment -that go beyond the concrete wish lists of different groups and pertain to all human beings. thus, the feminist anti-capitalist activism that i explored embodied a self-consciously intersectional politics in which demands for material redistribution and social justice were combined with equally important claims for cultural recognition. here, then, we have different struggles, different self-understandings and different visions of a progressive left politics. but if, as apostolidis suggests, 'we need a politics that merges universalist ambitions to change history, which are indispensable to structural change, with responsiveness to group differences that matter because minimizing them means leaving some people out' (2019, p.
14; emphasis in original), then how do we knit together these connected and yet distinct visions of emancipation? how do we move from the exception to the synecdoche if we have multiple exceptions, each with its own set of experiences, analyses and aspirations? after all, linking 'universal ambitions' to radical social change requires that we have a shared understanding not only of which structures of power most need to be transformed or challenged, but also of how we go about building a common struggle. and whatever the intellectual synergies, programmatic overlaps and emotional affinities between the struggles of day labourers in the us and those of women worldwide, their utopian dreams would take us along very different, perhaps even incommensurable, paths. given this challenge, the question becomes whether we need multiple critical theories running parallel to each other, animated by different kinds of oppression and degrees of marginality, or whether we are still looking for a singular revolutionary subject: the one catalyst for change who is able to be both an exception and a universal exemplar, thereby embodying all the demands of the oppressed. this is not just a quibble about who gets to lead the charge: it is about what radical, progressive change should actually look like. as a feminist scholar seeking to find and defend space for an intersectional politics that refuses to be contained and streamlined in any way, i think it is imperative that critical theorists resist the temptation of elevating one concrete subject to the status of a universal one. instead, we must engage in the far more patient, painstaking ethnographic work of the kind that apostolidis has undertaken with male migrant day labourers, extended to a range of other addressees or marginalised subjects (e.g. the experiences of female day labourers are, as apostolidis suggests, one good place to start).
it is only once these varied, complex mappings of power and resistance are drawn, with the recognition that they cannot be easily merged, that we can begin to look for connections across them and identify possible sites of bridge building which may lead to a convivial politics of the left and to the emergence of a collective dream. whatever it ends up being, my sense is that it will have to take the form of a coalitional politics, one in which sui generis struggles fight alone and together for radical change. the second theme is the role of utopian thinking in galvanising and giving direction to a radical left politics. although day labourers are burdened by a 'relentless presentism' that does not allow them to think about, let alone strive for, a better future, apostolidis clearly believes that their 'demand' politics is suffused with utopian aspirations (p. 68). drawing on coles (2016), apostolidis describes these aspirations in terms of a 'visionary pragmatism' (p. 221) that combines an overtly disruptive politics, which makes workers visible and audible to the wider public, with more mundane, everyday practices of solidarity, mutual aid and self-government. interestingly, this view of utopian thinking as granular, incremental and cumulative, as well as eventful, unruly and confrontational, resonates very strongly with the dreams and impulses of feminist anti-capitalist activists. in fact, we deployed the notion of 'principled pragmatism' as a way of capturing their mode of action in general, and its pre-figurative orientation in particular. for what became clear to us as researchers is that our feminist activists were concerned with articulating not only the political substance of their alternative future and the values that underpin it, but also an ethos by which this future should be brought into being.
in this way, the 'principled' part of principled pragmatism sought to underline the highly ethical nature of both the goals/ends of their mode of action and the means designed to achieve them. moreover, we found that this normative mode of action embodied a specific temporality, one that was open-ended and processual as well as nonlinear. it is open-ended, in part, because of the commitment of feminist activists to enabling women to speak and act for themselves, a project which, by its very nature, is unpredictable. it is nonlinear because its pre-figurative orientation demands that the future be lived out in the present. in this way, principled pragmatism is anchored by the imperative of getting things done in the 'here and now' of everyday life, without giving up the goal of radical change in the future. as a mode of praxis that pursues incremental, context-specific change, feminist anti-capitalist activism presents us with an inspiring alternative to the clichéd dualism of reformism and revolution. the question here is whether the 'visionary pragmatism' of day labourers is generalizable to other forms of contestation and, if not, in what ways it might differ from the 'principled pragmatism' of the feminist activists outlined above and what might be at stake in these differences. whatever our different starting points, what all the contributors to this exchange share is an abiding interest in generating explicitly normative, politicised scholarship, or what apostolidis refers to as 'emancipatory scripts'. in other words, we all resist the path of what mcnay calls 'socially weightless' theorising, referred to by andrew schaap in his introduction to this critical exchange, opting instead to grapple with the messy world of politics, the material social conditions that hold it in place, and the suffering it engenders. to this extent, we all believe that what we write about and how we conceptualise it matters, not just intellectually but also politically.
for in the end, the stories we tell about the world, and about the 'politics of resistance' that bubble up within it, can contribute to opening up (or closing down) the spaces of possibility for its realisation. pursuing this intuition is becoming harder, however, not only because academia continues to extol the virtues of scientific knowledge, but also because of changes in the political landscape. with 'populism' now elevated as the threat du jour, all resistance against the status quo is in danger of being discursively contained by politicians and academics alike. moreover, the increasingly trenchant calls to drop the left-right distinction in favour of other political cleavages (e.g. 'people vs elites', 'people from somewhere' vs 'people from nowhere') are making it harder to reclaim a politics for and by the left. in this context, critical theorists of all ilks need to stick together, learn from each other and engage in a form of 'epistemological coalition building'. while it may not be the only route to progressive change, as paul rightly points out, it is one worth sustaining, in my view, and critical exchanges of this sort provide one step in this direction.

fighting from fear or creating collaboration across economic divides?

in the fight for time paul apostolidis offers readers a powerful meditation on the problem and politics of precarity. he contends that precarity is a global problem shared by virtually all who toil in the global economy. through his study of latino day laborers in the us, apostolidis argues that day laborers present a proxy for the precarity of laborers worldwide (pp. 4-5). through his portrait of the cruel trials faced by day laborers, apostolidis wisely proposes that work centers for all, popular education practices and consciousness raising, as well as a 'demand politics' for better and safer labor conditions, fair pay, and flexible time are necessary to improve the lot of all laborers everywhere.
his valuable work thus provides a vision of collective practices that might, if we are persistent and lucky, ease the plight of billions of precariously placed workers across all walks of life worldwide. along with my admiration, this book's fine and yet familiar tones raise for me two questions, which i pose here in the spirit of conversation and in sharing paul's quest for the best ways to realize a global prosperity and peace that recoup the time all human beings need to explore and express their best qualities and capacities. my first question is whether inviting widespread personal identification with precarity -as opposed to identifying with peace, justice, or other motivating concepts -is a necessary step to ignite awareness and action for economic change that recoups time for all (pp. 4-5). a recent national public radio/harvard university poll shows that in the us, the majority of both the wealthy (62%) and the poor (75%) already share the view that extreme economic inequality is a widespread and serious problem that presents risks to everyone in the global economy (harvard, 2020). while wealth and poverty are facts of a balance sheet, precarity is experienced as a feeling or state of mind. this is acknowledged implicitly by apostolidis in his application of lauren berlant's concept of 'cruel optimism', in which precarity is not considered as economic hardship alone but as an 'affective syndrome' (p. 5). thus while wealth and poverty shape experience in material ways, the feeling of precarity is a choice to embrace and/or identify emotionally with a fearful state of dangerous insecurity. but is the choice to identify oneself with the feelings and fears of precarity wise or helpful? dangerous insecurities may arise for anyone, and even the comparatively well off may feel fear of sudden destitution.
yet as frankl (2006) observed in man's search for meaning, the response that we choose to a threat -particularly one's capacity to choose not to succumb to fear -is a central factor in securing human freedom under any conditions. as frankl himself demonstrated, even in the life-threatening conditions of a nazi concentration camp his humanity and true freedom could not be extracted from him, because freedom lies in our capacity to choose our own responses to violent and destructive conditions, even unfathomable extremes. thus, in contrast to berlant's cruel optimism, frankl's observation is that even amid the vicissitudes of illness, exposure, and hunger, those who faced the concentration camps with dignity, self-worth, and courage were far more likely to survive, and eventually escape those conditions, than those who surrendered to a mindset of fear-based terror and precarity. in short, our chosen mindsets under hardship also shape our prospects for resolution and escape from extremity, for better or worse. thus, to choose to embrace affective fear and precarity may ultimately undermine the strength and survivability of the self. if fear of precarity is widely embraced, this may in turn subvert the capacity for collective action in pursuit of economic justice and the reclaimed time that all workers, as apostolidis deftly shows, so desperately need. beyond frankl's philosophy and experience, neuroscience also illuminates the possible hazards of self-identifying with a precarity mindset. in ledoux's (1996) influential work on the interface of emotion and human physiology, the emotion of fear, particularly mortal fear, triggers neurological subsystems of the body that enable rapid responses by bypassing and making temporarily inaccessible the neocortex -the brain's centers of conscious reflection -which are too slow to address risks to mortal safety.
in other words, when humans are in fear, we cannot physically access our capacity for conscious reflection until our fear subsides (ledoux, 1996, p. 128). instead, when in fear, the human body defaults to operating on autopilot through whatever neurologically encoded scripts the emergency systems of a given body happen to have for its fear responses, typically including fight, flight or freeze. arguably, this can be seen in chapter three of the fight for time, in which paul shows day laborers -fearful of missing out on even an extractive job in their precarious conditions -inflicting violent harm on one another in a 'surly wrestling match' as a car approaches (p. 118). does such fear-based reaction help? not as much as it endangers people, fosters increasing fear and dissension among laborers, and drives away would-be employers. yet this kind of scrum is not a poor conscious choice. instead it is a scripted, embodied impulse that is the anticipated neurological consequence of adopting a fearful approach to experience and thereby hobbling conscious response. on this analysis, choosing a precarity mindset risks disabling physical access to conscious, thoughtful reasoning and response in fearful moments, in favor of the fear-based impulses and reactions that attend such moments. these risks of identifying with precarity raise my second question: what blind spots might exist in the familiar narrative of economic reforms championed in the fight for time? the proposed path to reform invites readers to embrace work centers for all and collective action based in common experiences of deprivation that address intra-group biases and divisions along the way. this is an inherited social script that is long treasured and often invoked. as a common social inheritance among scholars and activists alike, it has been portrayed eloquently before in such powerful retellings as salt of the earth, the once-blacklisted film narrating a famous 1951 new mexico labor strike.
in this valuable and familiar approach, echoed here by paul, laborers come together to confront and overcome their mutual biases, and then pursue together demands for better wages and benefits. paul's recruitment into one work center's 'theatre of the oppressed', intended to help workers address their biases, is an example of this longstanding approach in action (p. 29). in this script, rich capitalists appear as universally greedy and cruel hoarders whose victims, the long-suffering poor, must now muster the courage to see their commonalities across divisions of race and gender in order to demand a fair shake from capitalists. this story is rewarding. and it is true that workers everywhere would be better off if this familiar scenario were consistently fulfilled. yet the gains of this approach over time have been slow, sporadic, labor intensive, and often hobbled by the stubbornly persistent biases, suspicions, and enmities of many laborers -as well as owners -weaknesses to which all of humanity is still often prone. in contrast, from a chicana feminist perspective, such as that of gloria anzaldúa, the enduring problem of economic inequality does not call only for looking within workers' groups for sources of intra-group conflict and dissension. it also calls for searching across polarized social divides -of workers and owners, of the haves and the have-nots -to explore and create the conditions for peaceful resolution of economic inequality. although venerated in death, anzaldúa was at times scorned in her lifetime for proposing that true peace and justice required people to eventually come together to work across entrenched social divides: people of color working with whites, women with men, immigrants with non-immigrants, and so on (anzaldúa, 2002).
this anzaldúan chicana feminist perspective urges us not to overlook the possibility of working generatively across the divides between workers and owners, a possibility in the blind spot of the salt of the earth narrative, in which economic benefits must always be fought for and hard won rather than produced through collaborative vision and effort. following this traditional script, the fight for time's focus on work centers and the fight of traditional labor activism implies that attempts to collaboratively bridge the worker-owner divide may be futile, naïve, or at best irrelevant. yet among the ultra-rich, practices of large-scale philanthropy are emerging which suggest that there is more transformative common ground between laborers and some owners than the traditional salt of the earth viewpoint can yet acknowledge. if so, then attending to this common ground may help remedy the lack of time, economic freedom, and financial stability needed by everyone more quickly and effectively than the fights and struggles of work centers, strikes, and direct actions have historically achieved. specifically, in recent years carnegie's (1889) assertion that successful capitalists should ideally end their financial careers by giving away all of their wealth, retaining only a personal competency -defined by carnegie as enough wealth to meet one's own life needs and those of one's family -has been gaining a following. reflecting this view, in 2010 two of the world's wealthiest billionaires, bill gates and warren buffett, created an organizational structure called the giving pledge (2020), in which ultra-wealthy people across the world pledge to give away the majority, or at least half, of their wealth in their lifetime or upon their death. to date, over 210 ultra-wealthy individuals and families have made this pledge, including five of the top thirteen billionaires on earth (i.e. bill gates, warren buffett, elon musk, mark zuckerberg and mackenzie scott).
in july 2020, these five pledgers commanded a combined total net worth of $410 billion usd (bloomberg bi, 2020), representing an estimated philanthropic giving over time of at least $205 billion usd by those five pledgers alone. if a growing number of the ultra-rich are voluntarily committed to giving away their wealth for the benefit of others, then -by adopting an anzaldúan perspective on working across economic and other social divides -it becomes valid to explore beyond the familiar salt of the earth script hailed in the fight for time. doing this would involve considering how engagement across the social divides of workers and owners may help direct emerging philanthropy into social justice philanthropy that could potentially ease global financial inequities more quickly and resoundingly than the efforts of work centers and traditional labor actions have done to date. such a move could potentially recoup both time and transformative possibilities for the benefit of laborers as well as owners, and provide sustainability benefits for the planet from a revised economy. by shining an anzaldúan chicana feminist perspective into the blind spots of the fight for time, apostolidis's project is not abandoned but augmented by bringing unforeseen possibilities into view. new possibilities might arise from organizing with willing and openhearted owners, rather than fighting against them as a class to retrieve the time and financial freedoms precious to all. in moving beyond the view that labor and owners are always divided (rather than only often so), it becomes possible to imagine efforts in large-scale social justice philanthropy that could, for example, provide everyone on earth with a carnegiesque financial competency. for the sake of discussion, let's imagine that such a personal competency would be $2 million usd per person worldwide.
with 7.7 billion people now on earth, the core funding for a $2 million safety-trust for each person at present on earth would require 15.4 billion usd. that sum seems large, yet it is less than 7% of the combined minimum pledge of the five of the 210 signatories to the giving pledge named above. of those five givers, mackenzie scott herself is committed to giving away all of her $59.5 billion, a sum that alone could handily endow a universal personal competency worldwide. thus at least in terms of core capital resources (even accounting for the illiquidity of many assets of the ultra-wealthy), a universal competency could be funded by a small fraction of the funds already pledged for giving by the world's ultra-rich. in this context, self-identifying with fearful precarity and fighting for traditional reforms through work centers and labor actions for the changes so urgently needed in the (now pandemic-stricken) world may be worthy within our traditional, socially inherited script of salt of the earth-style social change. yet this accustomed approach arguably may now be less wise and expeditious than other emerging options. if so, it is worthwhile to explore the limitations of our commonplace labor-related scripts and to confront as needed our own potential blind spots regarding the diversity among the ultra-rich, which could -in an anzaldúan manner -help us to better see new possibilities for bridging economic divides and to open ourselves to collaboratively producing transformations that can benefit all people and the planet upon which we reside together. is resolving the pain of global poverty through philanthropic giving so far-fetched? it is not as implausible as so often thought. alongside the kinds of labor actions hailed in the fight for time, in recent months one us billionaire chose to pay the college debt of an entire graduating class of morehouse college, totaling over $35 million usd.
another man paid the college debt of his uber driver, a single mother, thereby enabling her to finish her college degree. the latter giver happens to be a well-off white man and the recent graduate an african american woman. meeting as strangers, the two have since become friends, and their story has gained popular attention. if giving to strangers in need is not merely feasible but also appealing, why is it perhaps emerging more visibly now? it may be because many humans are learning that beyond a meaningful competency, wealth does not necessarily create happiness, but that human connection and giving often do. if so, then a season of transformational giving may be on the near horizon. even if these events reveal a nascent turning of the tide, there are still many obstacles on the path of philanthropic giving-for-global-prosperity. if a pathway to funding a universal competency could be created through social justice philanthropy, for instance, this would also need to involve further measures for healing the poverty-related traumas so aptly described in the fight for time. beyond a basic endowment, provisions would be needed for new learning, safeguards, and other supports for recipients in order to truly solve the lingering problems of precarity. why? because those who come into sudden wealth from poverty and lack often risk experiencing poverty once again through missteps, fraud, or other hazards arising from a rapid change in economic conditions.
thus even if furnished with a financial competency, in the context of hazardous grafts, frauds and other pitfalls that remain mainstays of us culture (young, 2017), latino day laborers -like the vast majority of other workers alluded to in the fight for time -would need additional training to cultivate the skill sets and mindsets needed for living with meaningful wealth after having had little or no prior knowledge or instruction in how to hold, manage, or grow the would-be competency that could furnish them at last with time and freedom from extractive labor. is the idea of philanthropic solutions to global economic inequalities simply another example of 'cruel optimism'? by berlant's (2007, p. 33) definition, optimism is cruel only if the desired change is truly 'impossible or too possible and toxic'. clearly, however, changes are emerging that make meaningful large-scale social justice philanthropy possible, even if those changes are growing in the shadow of predatory economic practices. with these changes in view, it is worth asking whether paul apostolidis's fine call to 'fight' to retrieve time across all laborers might be best served by extending our willingness to seek common cause not only among diverse workers, but also among those openhearted wealthy owners who are willing to give back their wealth to benefit the well-being of all humanity. if so, it may be worth our time not to fight for time, but instead to work collaboratively and creatively for time and wealth to become equitably available to everyone in unexpected ways. edwina barvosa whose politics? whose time? traditionally, political theory has not co-theorised. it has spoken from on high among 'male, pale, stale' companions. hence my defection from these ranks. in this dialogue with paul apostolidis's the fight for time, i would like to recognise the attempt to co-theorise. in this work some migrant day labourers' voices, described as latino, are represented through ethnographic moments. 
bodies, presumably cis-male, are portrayed in struggle. this day labour is proposed as 'synecdoche' -the part that stands for the whole -by which is meant precarity on the grand social scale (p. 248). thus, the collective fight for time is staged. demands include: a politics that goes beyond seeking marginal relief from overwork and instead fundamental alternatives; a repudiation of the work ethic that prescribes personal responsibility in the face of desperation; the demand to restore time as well as wages to the people; a refusal of work 'as the axial concept that constricts working people's social and political imaginaries' (p. 243). i can only respond from outside of the social and political world the book portrays. i am not latinx/latin@ (hence the unsatisfactory use of terms that are, themselves, the site of struggle), but white, cis female, and belonging to many other privileged social locations. from my vantage point i explore struggles for migrant justice and against austerity and precarity at the intersections, drawing on lessons from black feminism and indigenous scholars writing in the context of the ongoing violence of settler colonialism. i ask: whose politics? whose time? whose politics? whose knowledge counts as the basis for politics? i cannot accept proposals, as in this book, to radiate outwards from some bodies and experiences -people presented as cis-gender latino men, workers -as the part that stands as the whole, the synecdoche. this is a project of inclusion: generative themes are based primarily on these experiences, to which others must then align. this story has been told before. it is of a linear, sequential march toward 'justice'. some are at the centre, in the lead, and others need to wait their turn to then be included. add and stir. who must wait their turn? 
in this work, this sounds like (presumably cis) women domestic workers who are mentioned but peripheral to this study, as well as those who experience misogyny and harassment at the worker centres (pp. 87, 124, 226) that are to be the incubators of progressive alternatives and the collective fight for time. we could add here the women who founded and run the worker centres in this book, who are barely visible but are also key protagonists of anti-precarity and anti-deportation struggles. those who must wait also surely encompass male-presenting others who do not identify with what are referred to in the book as the 'normative' masculinities deployed in the worker centres (p. 161). what happens when the political knowledge of queer, non-conforming, differently gendered actors is parked for consideration later on? what politics is generated when these experiences and these intersections are named at the end of a book (pp. 249-250), after the contours of struggle have been determined against precaritisation 'as the array of social dynamics that structure these settings' (p. 243)? it becomes possible to call for 'workers centres for all workers'. and thus a space for the resistance of some is built on the oppression of others. theorising this as synecdoche does not name the problem or open up the space for resistance to multiple, intersecting oppressions. it does not centre as part of the theory the messy and vital struggles of workers' centres to change representation on governing boards, to reconfigure resistance to border control in recognition of the specific brutality experienced by lgbtq migrants (p. 249) and to bring into focus all forms of work (p. 250). this call, 'workers centres for all workers', chills me without scrutiny of all gender relations and all gendered labour -and i mean all, beyond gender binaries, at multiple intersections. 
what can the 'repudiation of work' mean without naming cis heteropatriarchal relationships of domination, in ableist and racialized capitalist systems that pervade all 'public' and 'private' realms? this book asks how various groups of workers articulate terms of their consent, how regimens and discontinuities of body-time on the job vary between different groups. but this undertaking is impossible without articulating at the same time the terms of consent to cis heteropatriarchal relations in and outside of the workplace. oppressors are not only employers. they are also other workers, community and family members, who are cis men and women embedded in hierarchies that include gender, class, race and legal status. what would it look like to build a politics for migrant justice, against austerity and precarity starting with the knowledge of experiences of a matrix of oppression (hill collins, 2000)? this is no synecdoche. it is the challenge of forging justice at the intersections. these are not new lessons to learn and there is no way to do justice here to all the illustrations of this kind of politics in practice. from my past work, one example from france in the 1990s may provide purchase on us-based challenges. in paris, madjiguène cissé led movements for the regularisation of 'sans papiers' -people 'without papers'. she described the 'struggle within the struggle' by women 'sans papières' (the feminised version of 'sans papiers') for gender equality within the movement, as well as regularisation of immigration status. this was a struggle against patriarchy as well as the racism of the french mainstream. the knowledge that sans papières women imparted in the struggle meant that they were in charge of their own thought and politics but without excluding others (hill collins, 2000, p. 18), and they did not project separatist solutions to oppression because they were sensitive to how these same systems oppress others (hill collins, 1986, p. s21). 
women revitalised the movement and kept it together: 'a role of cement' (cissé and quiminal, 2000). cissé explains how women kept the group together particularly when the government attempted to divide them, by offering to regularise 'good files' of some families, but not of single men. sans papières very firmly opposed this proposal, arguing that if single men were abandoned, they would never get their papers. migrant justice, anti-austerity and precarity politics look different when built at these intersections. the difference lies in who is present and also in what results. care and self-care are centred as 'an act of political warfare' in a system in which some were never meant to survive (lorde, 1988). self-help, self-care and self-organising are alternative, sometimes complementary spaces, and an important source of personal support, resilience, information and community, beyond white-dominated, politically raceless, misogynistic anti-austerity/precarity spaces (emejulu and bassel, 2017). no part can stand for any whole when other spaces are unsafe and sites of violence rather than a collective fight for time. whose time? in our work exploring the activism of women of colour across europe, akwugo emejulu and i have argued that epistemic justice is about women of colour producing counter-hegemonic knowledges for and about themselves to counter the epistemic violence that defines white supremacy (emejulu and bassel, 2017, p. 30). epistemic justice is not a correction or adjustment to 'include' unheard voices, but a break away from destructive hierarchical binaries of european modernity. it is a break away from the 'persistent epistemic exclusion that hinders one's contribution to knowledge production' (dotson, 2014, p. 116) and renders women of colour invisible, inaudible and illegitimate to both policymakers and ostensible social movement 'allies'. 
epistemic justice at the intersections makes settler colonialism visible, whether in the united states of this study or so-called canada, where i grew up. this means going much further than the possibilities briefly flagged in the book: kindling a critical sense of historical time and orientation to the future that is fuelled by an awakened sense of historical injustice (pp. 216-217). it is necessary to go much further because the fight for time cannot be founded on indigenous erasure. erasure does not create a path toward solidarity 'with other colonised populations who understand their past experiences in somewhat parallel ways' (p. 217). this book discusses workers turning a day-labour corner where jobs are fought for in portland into a space of musical performance. these are important moments to explore and co-theorise. but when they are described as transforming the space into a 'site of freedom' (p. 217), indigenous struggles are erased. these performances are taking place on stolen land in what is now referred to as 'portland'. tuck and yang's (2012) key work 'decolonisation is not a metaphor' rattles the kind of settler logic that allows for this erasure. they discuss the occupy movement and argue that claiming land for the commons and asserting consensus as the rule of the commons, erases existing, prior, and future native land rights, decolonial leadership, and forms of self-government. occupation is a move towards innocence that hides behind the numerical superiority of the settler nation, which elides democracy with justice and the logic that what became property under the 1% rightfully belongs to the other 99%. 
in contrast to the settler labour of occupying the commons, homesteading, and possession, some scholars have begun to consider the labour of de-occupation in the undercommons, permanent fugitivity, and dispossession as possibilities for a radical black praxis … [that] includes both the refusal of acquiring property and of being property (tuck and yang, 2012, p. 28). the fight against precarity and for migrant justice must be reconfigured, if it is to be in solidarity with indigenous struggles. this means changing whose understandings of time and labour are at the centre of analysis. the land where this study took place is not an 'immigrant-receiving country' but a settler colony, founded on indigenous genocide, dispossession and slavery. when time is decolonised, the refusal of work is recast in relation to the refusal of the settler colonial state (simpson, 2014) and the formations of race, class, gender that it engenders. these formations, rooted in settler colonialism, shape the lives of the migrant day labourers who are 'here' because the united states was 'there' (sivanandan, n.d.) and must contend with entangled colonial legacies from different social locations. this requires a shift in vocabulary, when 'migrants' are in fact settlers. but with this comes also a shift in politics. in undoing border imperialism, walia (2013) shows how movements such as no one is illegal (noii) in what is now called canada have reconsidered their understandings of migrant justice. this has required recognizing the ways in which their actions have been premised on an understanding of sovereignty and territory that perpetuates the colonial legacy that has dispossessed and disenfranchised indigenous peoples (walia, 2013). noii activists consequently re-centre ongoing colonialism and reconfigure understandings of land, movement, and sovereignty when claiming that 'no one is illegal'. 
specifically, activists have tried to consider how their calls for 'no borders' undermine indigenous struggles for title and against land loss, to reclaim land and nation. solidarity means reshaping the political agenda of noii beyond token acknowledgements, to move from a politics of 'no borders, no nation' to 'no one is illegal, canada is illegal' (fortier, 2015) . and now? i asked two questions here: whose politics? whose time? they remain unanswered. but they are a path to solidarity rather than solutions. so it goes in the messy world of politics, not political theory. leah bassel representing precarity: health, social solidarity, and the limits of coalitional epistemology in her contribution to this critical exchange, kathi weeks poses an unexpectedly timely question about how to politicise precaritisation in the form of heightened bodily risk at work. writing prior to the coronavirus outbreak, weeks echoes my observation in the book that, apart from the temporary rush of reporting when an occupational safety and health (osh) disaster strikes somewhere in the world, 'the problem of workplace death and injury is strangely absent from public consciousness'. how quickly things can change. i am writing this response in april 2020 in london, now in its fifth week of 'lockdown'. in this context, weeks's reflections prompt two questions: first, in what specific ways has the covid-19 crisis made workplace threats to life and health newly legible? second, what ramifications do state and employer responses to the pandemic have for the pressing issue of how 'to politicise the issue of bodily harm given how extensively the idiom of health has been rendered amenable to the logics and aims of biopolitical management', as weeks aptly puts it? i still see the outlines of an answer to the second question in the politics of solidarity around osh matters that day labourers have developed through worker centres. 
today's work-culture construes the task of sustaining the worker's health as the worker's personal responsibility, which the worker also exercises as a productivity-oriented social duty. many day labourers abet this tendency through their own themes of meeting the 'risk on all sides' by individually keeping their 'eyes wide open'. yet day labourers also demonstrate how health-related language, desires and practices can be cathected with a different figuration of social and individual conscientiousness: responsibility as autonomously collective solidarity. day labourers pose this alternative in three main ways. first, through convivial relations at worker centres, day labourers bolster one another to stand up to abusive employers, to refuse dangerous jobs and to de-throne work and income from their primacy in everyday affairs. second, day labourers contest biopolitical power-knowledge by fusing their own analyses of work-hazards to responsive practices of their own devising, as they teach one another about risky work processes, materials and employer conduct through popular education. third, day labourers are hatching visionary ideas about how distinct working populations can recognise their common stakes in ending the bodily precaritising dimensions of work, such as by organising with, not just against, their middle-class employers. in all these ways, at day labour centres, the talk of putting 'health' first mobilises a complexly social vernacular. one's 'own' health is always a concern, but the worker's understanding of 'health' does not stop with the individual. instead, this idiom positions health as stemming from social interactions that are contingent on power-differences, which are amenable to workers' collective re-formulations, which, in turn, need not be determined by the ideal of productivity. 
politically, these initiatives by day labourers imply that disentangling health-talk from the corporate wellness apparatus depends on autonomous action from below in tandem with cross-class organising. the role of the wizened welfare state in such efforts, however, is not clear -and that brings us back to the coronavirus. talk about 'biopolitical management'. the crisis has precipitated massive deployments of state resources to expand public health knowledge-systems and to use statistical probability calculations to foster mass populations' biological vigour and protection from disease, albeit in racially selective and gender-unequal ways. must this tidal wave of emergency mobilisation re-sediment personal responsibility and productivism as the norms that regulate occupational safety and health? or, as this surge recedes, could it leave behind institutional beachheads for fighting precarity on the level, and within the sinews, of the working body? even as the present apotheosis of biopolitics applies itself globally and to entire nations, it targets micro-practices in the workplace and affects precarity's configuration of work as a zone of bodily hazard. overall, the covid-19 crisis reduces to the point of vanishing the already quite faint and episodic awareness of how mounting osh threats have made the workplace increasingly dangerous to workers' health for decades, across occupations. the fight for time discusses how these threats principally entail work-environmental hazards, especially poor air quality as more work is done indoors, ergonomically dysfunctional work-processes, and debilitating stress due to corporate downsizing and rising job insecurity. ironically, the pandemic's sudden re-framing of the workplace as replete with health dangers focuses on the work environment. it does so, however, in terms that reproduce the moral individualism of the precaritised osh culture, while occluding the work-environmental systems that generate endemic hazards. 
thus the exhaled breath of a single co-worker becomes the respiratory threat, rather than the air circulation machinery in the office or warehouse. health-conscious bodily comportment means obeying the individual remonstrance to keep six feet away from any colleague rather than ensuring that the ergonomics of work-procedures avoid forcing workers to contort their bodies and overstrain their tendons. the stress of losing one's job, having work hours reduced, or fearing these things because of the virus's immediate economic effects, normalises the ongoing anxiety that is baked into precarious work-life and linked to heart disease. the hyper-individualisation of osh hazards in the covid-19 crisis and the fingering of co-workers as those who pose lethal hazards to us also clearly discourage building safer and healthier workplaces through solidarity among workers. such miscasting of fellow workers as the culprits whose irresponsible conduct explains why everyone's health is in jeopardy bedevils many day labourers' attempts to rationalise the contradiction between expectations of personal responsibility and the power-relations governing their work. the pandemic further embeds this thought-habit of precarity. meanwhile, consigning 'essential' workers in some occupations to higher risk exposures while others 'shelter at home' and assemble via zoom aggravates the difficulties of organising across class lines. in all these ways, the pandemic has made it harder to dislodge health discourses from their current ensnarement in norms of productivity and individual responsibility. yet the sheer size and weight of institutional responses to covid-19 also presents an opportunity to argue that, if states and employers can so speedily muster these titanic responses to this virus, then the capabilities are there, more obviously than ever, to tackle the endemic osh challenges that constitute the bodily mortifying facets of precarity even in 'normal' times. 
this will only happen, however, if working people redouble their organising efforts. and that makes the project of founding worker centres for all workers even more vital: extending the scaffolding for leadership development and autonomously collective organisation-building along with new ventures in state-sponsored redistribution, such as a universal basic income. bice maiguashca correctly observes that she and i share aspirations to pursue critical theory in ways informed by the ideas she cites from marx, leonard and militant research, and i am glad she sees in my book the work of a fellow traveller. for us both, this means doing theoretically evocative social research from positions of active engagement within political struggles against oppression and with the aim of contributing something tangible to those struggles. maiguashca and eschle's research with feminist anti-capitalist activists also illuminates how political agents quite different from those who occupy centre stage in my book can pinpoint 'systemic power relations', including gender, that are fundamental in their own right and need to be contested both as such and via the demands these women raise. in response to maiguashca, let me also underscore that, notwithstanding the near-exclusive focus of my fieldwork on male, latino day labourers, the fight for time affirms, explicitly and in its intellectual practice, the need to theorise political-economic power and contestation in ways that attend to the complex gendered and racialised aspects of work. maiguashca allows that my book 'recognises that day labourers … are gendered and racialised subjects', but the book does more than this. 
it probes the masculine ideals woven into these workers' themes, explores how the racial state constitutes precarity through policing migrants, distinguishes day labourers' varied renderings of latino identity, and draws on my own supplementary field work and secondary literature to suggest how domestic workers' conceptions would likely both differ from and align with those of day labourers. maiguashca also implies that the book searches 'for a singular revolutionary subject' and anoints the day labourer as 'the one catalyst for change', but the fight for time does neither. if my statements in the book to the contrary do not suffice to show this, then it should still be apparent from the book's premise of basing a critique of capitalism on research with workers who, as weeks notes, resemble marx's disparaged and heterogeneous lumpenproletariat, rather than the traditional proletariat. i stand firmly in sympathy with the efforts of weeks and other theorists influenced by autonomism to widen and complicate the notion of 'the working class', as weeks does by training our attention on women's reproductive labour in households, and as studying day labourers does by foregrounding a liminal and ambiguously gendered realm between productive and reproductive labour. the analytical rubric that positions day labour as both exception and synecdoche in relation to precarity writ large appears to lie at the heart of what most troubles maiguashca and leah bassel. let me thus address further what this interpretive framework means, going somewhat beyond what is already in the book. the exception/synecdoche formulation is intended as a strategy of provocation: a prod to imagine how the critical language of one especially benighted group, which has done a remarkable job of building itself up politically, could shake loose new ways of construing overarching forms of power and domination. 
such general structures, systems and flows of power and domination exist, and they need to be named in order to be engaged politically. this does not obviate the fact that any act of naming by a situated subject is also bound to yield misnomers because of that person's or group's particularised social location. moreover, as mezzadra and neilson (2019) argue, capital itself regenerates, accumulates and dominates both through systemic processes that integrate the globe and through localised 'operations' that proliferate heterogeneities of experience, identity and activity (including work-activity). this, however, makes it imperative to theorise capital on both levels at the same time, through critical procedures that juxtapose the general and the particular, teasing out their resonances and tensions. one models the whole with the help of closely scrutinising an always-insufficient particular, then re-envisions the systemic through considering other concrete-particulars, and so forth. a synecdoche is a part that stands in for the whole, but this notion's origin in literary theory bespeaks self-awareness that this figuration is a contingent act of representation -rather than a straightforward declaration of truth. furthermore, critical-popular analysis does not simply infer the whole from a part but rather effects mutual mediations between self-expressions of the part and conceptions of general dynamics. the fight for time pursues this path by reading day labourers' themes together with allied concepts from critical and political theory about broad formations of precarity. this is certainly a different way of reaching a provisional sense of society-wide power than that preferred by maiguashca, but it has its virtues. one virtue has to do with the temporality and affectivity of collective action that seeks to confront thoroughly pervasive forms of social, political and economic power. 
having exhorted readers to pursue with other groups more of the fine-grained ethnographic analysis that my book provides, maiguashca then cautions: it is only once these varied, complex mappings of power and resistance are drawn, with the recognition that they cannot be easily merged, that we can begin to look for connections across them and identify possible sites of bridge building which may lead to a convivial politics of the left and to the emergence of a collective dream. this statement conveys a political temporality of postponement as well as an ascetic tinge, and i question both. if capital and other systemic forms of power are perpetually in motion, always mutating, and never ceasing to employ both universalising and particularising modes of operation, then it makes little sense for theory to hold its own generalising capacities in reserve until it has amassed some critical mass of analyses of situated perspectives (and how could a non-arbitrary threshold be specified?). strategically, this appears unwise. affectively, something also seems awry with the gesture of renunciation one must make to defer the invigoration that comes from battling broad-scale domination, while also letting systemically generated suffering endure without being called out as such. the critical-popular approach, in contrast, partakes in the affective spirit of weeks's 'politics of the demand'. this means taking seriously both the re-constituting of desiring subjects in the midst of utopian struggle and the value of fighting for a 'collective dream' that is massive and radical -like 'worker centres for all workers' or 'wages for housework' -but neither totalising, nor conclusive. another virtue of the critical-popular approach to theorising the whole, in comparison to mapping specific differences and then building localised bridges, is that the former offers not just an alternative to the latter, but also a prelude to it. 
my book not only juxtaposes day labourers' popular themes with academic concepts to theorise precarity writ large and anti-precarity struggle, but also shows how worker centres, the day labour movement and a broader anti-precarity politics all depend on developing popular consciousness and political action-plans through molecular processes and alliance formation. the book's practical contribution to day labour centres' popular education programming, through workshops i conducted, as well as a report i wrote with additional dialogue options, further shows this project's commitment to fostering intersectional interactions of the kind that maiguashca and bassel endorse. the fight for time thus supports coalitional politics as one key mode of struggle needed to define and confront precarity. it takes issue, however, with what we might call a 'coalitional epistemology', or the idea that understanding power on the broadest levels, and identifying desirable forms of mass solidarity, can only occur through the cumulative, piece-by-piece assembling of particularised knowledges into progressively larger composites. along these lines, it bears emphasis that the fight for time is one of two inaugural books in my publisher's series 'subaltern studies in latina/o politics', edited by alfonso gonzales and raymond rocco. i am honoured to have my book involved in this effort to support work that brings together latino studies and political theory. the series is also promoting research on latino/latin-american transnationalism (félix, 2019), contentious citizenship and gender among salvadorans in the us, and religion, gender and local agency in mexican shelters for central american migrants. colleagues interested in how my book contributes to more wide-ranging discussions of race, ethnicity, migration and gender, and to coalitional politics, should be aware of this context. 
for the most part, my responses to maiguashca, and defence of the critical-popular method above, comprise my answer to leah bassel as well. bassel shares with maiguashca a similar orientation toward critique and political action, which bassel describes as embracing 'the challenge of forging justice at the intersections'. bassel argues, however, that rather than either encouraging consideration of other oppressed groups' experiences or incorporating such analysis into the book, the fight for time suppresses and erases such experiences. i strongly disagree. as i have explained, there are good reasons for understanding the logic of the synecdoche as evoking provisional renderings of broad power dynamics in ways that invite -rather than discourage -contestation. readers hoping to join a 'linear, sequential march toward ''justice''' will search in vain for marching orders in my book. bassel also does not mention how the book frames day labour as both exception and synecdoche in relation to precarity writ large. this dual optic makes basic to the book an appreciation for the specificity of day labourers' social experiences. it thus signals clearly that attentiveness to situated subjectivity is a sine qua non -though not the sole legitimate basis -of critique. in this way, my book underscores how the forms of precarity thematised by day labourers reflect, for instance, their particular position in the urban construction economy and their specific vulnerability to the racialized and gendered homeland security state. this implicitly affirms the value of hearing what other groups of workers, situated distinctly, would say about precarity. at the same time, bassel's commentary neglects a different problem with which my book grapples: the need to challenge the invidious naturalisation of assumed group differences. white middle-class americans, for instance, certainly need to understand better what makes the lives of working-class migrants in the us both different and harder. 
but the former also need a better grasp of how their own economic, political and bodily fortunes resemble those of the latter much more closely than most would like to admit. anderson (2019) calls for 'migrantizing citizenship' as a tactic for waking britons up to how the shrill demand to save 'british jobs for british workers' has precaritised work for everyone. in a similar spirit, the fight for time appeals for precaritised workers throughout society to recognise their shared stakes in a common struggle, even while observing how the stakes are graver, and different, for some than for others. i do see it as a limitation of my research that, although it delved into the complexities of day labourers' commentaries and traced their interactions with an eclectically convened set of theoretical interlocutors, it did not include substantial fieldwork with other precaritised workers. thus, i could not critically compare such workers' generative themes with the themes spotlighted in the book. the conception of critical-popular research is in its formative stages, and maiguashca's and bassel's comments have fuelled my interest in exploring how a future project could bring such critical moves into the heart of the inquiry. planning such work with migrant and indigenous subjects (including indigenous migrants) would offer one attractive pathway for doing this, especially given the anti-capitalist trajectories of leading critiques of settler colonialism, which prioritise spatial and temporal politics that may both align and conflict with migrant endeavours (coulthard, 2014). in the meantime, i appreciate maiguashca's and weeks's invitations to speculate about how day labourers' themes and organisational spaces might relate to those of other groups.
i see an affinity between feminist wsf activists' embrace of an 'ethos' whereby organising processes 'prefigure' radically altered social relations and the day labourers' anticipatory enactment of the 'refusal of work' - even as they desperately pursue jobs, and even though the day labour network takes no stand for such a refusal. as these lines suggest, however, day labourers pursue social change by generating transformation from within, and by virtue of acutely contradictory circumstances. i wonder whether a similar catalysis of power-from-contradiction plays a role in the wsf activists' undertakings, or whether perhaps these women's class privileges permit a more confident sense that an ethically consistent programme of action is possible in ways that are precluded for day labourers. that said, it would be intriguing to know if the activists in maiguashca's research feel subjected to class-transcending temporal contradictions of precarity, such as the clash between oppressively continuous and jarringly discontinuous patterns of work. even if precarity does not furnish the express 'starting point' for these women's advocacy, it might still provide a basis for solidarity with the day labour movement in the broad fight against capital. barvosa asks whether encouraging people to identify with the timorous mind-state of precarity might be politically counter-productive, given how fear induces corporeal responses that shut down complex thinking, induce self-preserving automatism and impede cooperation. as the book shows, however, the emotions that pervade precarity include not just fear but also guilt, hopefulness, self-satisfaction, resentment, boredom, numbness, compassion and more. precisely because precarity is so emotionally plural, it both acquires compelling force and spawns opportunities from within itself for its own contestation. in addition, precarity is more than a 'state of mind'.
it is also a socially and politically constituted condition that stems from the convergence of protracted welfare-state austerity with the transformation of employment norms and institutions. precarity, moreover, is a hegemonic formation that relies on working people's consent, which day labourers provide, for instance, through the individualism of their generative themes. yet precisely for this reason and because it is structured in contradiction, especially temporally, precarity can be transformed from within. as my book argues, many workers prefer to see the worker centre community as just a 'workforce' and in this way 'identify emotionally with a fearful state of dangerous insecurity', as barvosa fittingly puts it. yet more day labourers respond to fear - along with confusion, rash self-confidence, impatience and loneliness - by acknowledging these tangled emotions and converting their affective energy into bonds of solidarity. as to gates and buffett, i am glad they are giving away mounds of money and have updated philanthropy's ethical framework, but relying on a programme to broaden beneficent actions does not strike me as a viable response to precarity. as azmanova (2020) argues, in ways complementary to the fight for time, the systemic roots of precarity lie in the competitive pursuit of profit, and precarity's structural foundations abide in the re-organisation of work and de-funding of the welfare state. absent a coordinated and democratic (anti-oligarchic) movement by masses of working people to tackle power on these levels, precarity will persist. the emancipatory script proposed by my book, far from simply pitting poor downtrodden workers against greedy bosses, casts working people at all levels of the economic hierarchy as potential collaborators in the fight against precarity, which must also be a struggle against gargantuan wealth - and a fight for time.
paul apostolidis

references:
- new directions in migration studies: towards methodological de-nationalism
- now let us shift…the path of conocimiento…inner work, public acts
- the fight for time: migrant day laborers and the politics of precarity
- capitalism on edge: how fighting precarity can achieve radical change without crisis or utopia
- cruel optimism: on marx, loss and the senses
- the abolition of work
- a taste of power: a black woman's story
- (1889) the gospel of wealth. www.carnegie.org/about/our-history/gospelofwealth
- visionary pragmatism: radical and ecological democracy in neoliberal times
- red skin, white masks: rejecting the colonial politics of recognition
- the ambivalence of coworking: on the politics of an emerging work practice
- conceptualizing epistemic oppression
- the concept of the 'lumpenproletariat' in marx and engels
- the politics of survival: minority women, activism and austerity in france and britain
- making feminist sense of the global justice movement
- spectres of belonging: the political life cycle of mexican migrants
- emma goldman: political thinking in the streets
- no one is illegal, canada is illegal! negotiating the relationships between settler colonialism and border imperialism through political slogans
- man's search for meaning
- justice interruptus: from redistribution to recognition
- school of public health (2020) life experiences and income inequality in the united states
- learning from the outsider within: the sociological significance of black feminist thought
- black feminist thought: knowledge, consciousness and the politics of empowerment
- the emotional brain: the mysterious underpinnings of emotional life
- critical theory as political practice
- the misguided search for the political
- freelance isn't free: co-working as a critical urban practice to cope with informality in creative labour markets
- the politics of operations: excavating contemporary capitalism
- mohawk interruptus: political life across the borders of settler states
- marx and heterogeneity: thinking the lumpenproletariat
- difference in marx: the lumpenproletariat and the proletarian unnameable
- decolonization is not a metaphor
- undoing border imperialism
- the a. sivanandan collection. race & class
- bunk: the rise of hoaxes, humbug, plagiarists, phonies, post-facts, and fake news

key: cord-290637-3tgtstd4
authors: ferranti, erin p.; wands, lisamarie; yeager, katherine a.; baker, brenda; higgins, melinda k.; wold, judith lupo; dunbar, sandra b.
title: implementation of an educational program for nursing students amidst the ebola virus disease epidemic
date: 2016-12-31
journal: nursing outlook
doi: 10.1016/j.outlook.2016.04.002
doc_id: 290637
cord_uid: 3tgtstd4

abstract:
background: the global ebola virus disease (evd) epidemic of 2014/2015 prompted faculty at emory university to develop an educational program for nursing students to increase evd knowledge and confidence and decrease concerns about exposure risk.
purpose: the purpose of this article is to describe the development, implementation, and evaluation of the evd just-in-time teaching (jitt) educational program.
methods: informational sessions, online course links, and a targeted, self-directed slide presentation were developed and implemented for the evd educational program.
three student surveys administered at different time points were used to evaluate the program and change in students' evd knowledge, confidence in knowledge, and risk concern.
discussion: implementation of a jitt educational program effectively achieved our goals to increase evd knowledge, decrease fear, and enhance student confidence in the ability to discuss evd risk. these achievements were sustained over time.
conclusion: jitt methodology is an effective strategy for schools of nursing to respond quickly and comprehensively during an unanticipated infectious disease outbreak.

the ebola virus disease (evd) epidemic of 2014/2015 presented atlanta-area health care providers, health care professions schools, and students a unique challenge to quickly prepare for the care of evd-infected aid workers from african countries affected by this disease. the decision to accept these patients resulted in the activation and expansion of the serious communicable diseases unit (scdu) at emory university hospital (feistritzer, hill, vanairsdale, & gentry, 2014). intense public interest followed the decision and resulted in tremendous media coverage. between july 31 and september 22, 2014, >42,000 stories went out on broadcast and >18,000 print stories were written mentioning emory and ebola ("telling the story," 2014). some of the attention heightened the fear and anxiety associated with caring for individuals in our community because of the highly infectious nature of evd. people spoke out on social media, fearing that our caring for these patients put our larger community at risk. in response to the public outcry, susan grant, the chief nurse for emory healthcare, wrote in the washington post, "we can either let our actions be guided by misunderstanding, fear, and self-interest or we can lead by knowledge, science and compassion. we can fear, or we can care." (grant, 2014).
the emory university nell hodgson woodruff school of nursing (nhwsn) is located on the same campus as emory university hospital and is also adjacent to the centers for disease control and prevention (cdc). both the cdc and emory healthcare are key partners for the clinical and public health education of our student nurses. the treatment of patients with evd at emory university hospital, combined with our cdc colleagues' response to the evd epidemic in africa and the status of atlanta being a major international transportation hub, necessitated a swift response by key public health faculty and administration of the nhwsn to educate our students and fellow faculty colleagues and staff members about evd. evd education needed to include modes of transmission, risk for exposure and transmission, signs and symptoms of infection, therapy, and counseling techniques to allay fear and anxiety associated with living in atlanta and working or training within the health care facilities treating evd-infected patients. it was our goal to increase evd knowledge, decrease fear, and enhance students' confidence in their ability to discuss evd risk with family and friends. just-in-time teaching (jitt) is an online educational approach that can be used to rapidly disseminate important information in an efficient and effective way to address learning needs during a crisis (chotani et al., 2003) . jitt approaches have been used to quickly disseminate information after large-scale disasters and public health epidemics, such as the global outbreak of severe acute respiratory syndrome (sars) that occurred in the early 2000s (o'connor et al., 2009; yang et al., 2010) . 
providing information expeditiously during complex humanitarian emergencies, such as a disease outbreak, is essential to quelling the fears of nursing students, who may encounter affected patients during clinical rotations, and communities who are uncertain about essential facts and who might be influenced by media coverage that at times dwells on unpleasant details and fuels the public's apprehensions (stirling, harmston, & alsobayel, 2015; "teaching in a time," 2015) . to respond to the emergent evd epidemic, we designed a comprehensive and targeted approach to educate our students. the purpose of this article is to describe the development, implementation, and evaluation of this educational effort. early in the fall semester of 2014, we arranged for lunch-and-learn presentations, inviting all community members to learn more about the evd outbreak in africa. we invited colleagues from the cdc to present information about their experiences in sierra leone, one of the evd-affected countries. interested students and faculty attended other educational events at our university's school of public health. specific to information about the evd patients being cared for at emory's scdu, many attended a town hall meeting held jointly with the medical school where attendees heard directly from the scdu team that was caring for the individuals with evd. in addition to the opportunities provided to learn more about evd across our campus, the faculty decided that because our undergraduate nursing students were engaging in clinical training within the health care facility caring for patients with evd, a more comprehensive and targeted approach to educate our students was needed. additional goals for providing education were to increase student knowledge of evd risks and ways to mitigate exposure, decrease fear of evd, and enhance students' confidence in discussing evd with others, including family, friends, and patients. 
faculty course coordinators of classes addressing professional role content for each cohort of undergraduate students created an ebola information page on their electronic course sites. the ebola information content included links to cdc, emory healthcare, and other atlanta-area health care evd policies and guidelines. in addition, a 23-slide powerpoint presentation was developed using cdc guidelines and the newly developed emory healthcare ebola preparedness protocols. the presentation, posted on the course sites, included an overview of the evd outbreak, evd facts, modes of transmission, signs and symptoms of early and later stage infection, emory university's evd-specific travel policies, emory healthcare's publically available ebola preparedness protocols, and cdc's published "frequently asked questions" and answers. the presentation was designed for students' self-directed viewing and learning. participants targeted for this educational program were all undergraduate student nurses enrolled in our prelicensure bachelor of science in nursing (bsn) program at nhwsn in fall 2014 and spring 2015. inclusion criteria included all enrolled undergraduate students; there were no exclusion criteria. sample size was determined by the size of the enrolled undergraduate student population. this target group consisted of a total of 320 undergraduate students who were 93% women, 36% white, 24% black, 11% asian, 7% hispanic, and 20% multiracial/ethnic or undeclared. consultation with the emory university institutional review board confirmed that this project met exemption criteria. early in december 2014, the project's data manager (b.b.) invited all students in the bsn program to participate in the evd self-directed education program described previously, via e-mail. the data manager did not serve in a faculty capacity to any of the students at the time. 
the e-mail stated that faculty were interested in students' perceptions about educational information that had already been provided to them about evd and their experiences and level of comfort with discussing evd with others, such as family members. the e-mail invitation described that the new education program included completing a pretest, viewing a powerpoint slideshow, and completing two post-tests (immediately after the training and five weeks later) and that participation was completely voluntary. the pre- and post-tests were identical. students were enrolled in one of three classes, and faculty members teaching those classes agreed to offer a small incentive in the form of bonus points to students who participated in the study. application of the bonus points was determined by the individual faculty member for each course. the invitation e-mail included a link to the pretest, which was hosted on the research electronic data capture (redcap) platform. redcap allowed for tracking participants for comparison on pre- and post-test results by linking student identification numbers that were loaded into the system. time to complete the pretest was estimated to be about 15 min. the pretest link remained active for three days after the initial e-mail was sent. the pre-test survey consisted of three demographic questions, one item related to who they may have already provided any evd information to, two questions related to the student's confidence level providing education to others about evd, one item asking if they felt they needed additional evd training, 13 knowledge questions, two questions related to the student's level of concern about their risk of evd, and one question about attendance at recent campus educational programs about evd. a few of the students either did not receive the initial e-mail or did not receive a valid link to the pre-test; these issues were resolved by sending e-mails to these students individually.
the survey link was e-mailed again to students who experienced technological difficulties; no duplicate surveys occurred as a feature of the redcap system. three days after the pre-test survey closed, faculty of the students' classes posted the powerpoint slideshow on their online course sites, and participants were directed where to find the slideshow for viewing. participants did not have access to the powerpoint slideshow before completing the pre-test. time to completely view the slideshow was estimated to be about 15 minutes. the powerpoint slideshow was available to view for three days. students who participated in the pretest were invited via e-mail to complete the post-test. the post-test was also hosted on the redcap platform, and survey responses were linked by student identification numbers. the post-test link remained active for three days. after the post-test, students were on an academic break for approximately five weeks. when classes resumed in january 2015, students who had participated in the pre- and post-test surveys were invited to participate in a follow-up post-test. the purpose of this follow-up post-test was to assess the retention of knowledge and any changes in confidence in addressing evd concerns after a major school break. the redcap system linked survey results across the three tests with student identification numbers. the data manager provided student identification numbers to faculty for the purpose of awarding bonus credit. bonus credit was awarded to students who completed all three surveys. statistical analyses were performed using the statistical package for the social sciences, version 22 (spss, chicago, il). statistical significance was set at p < .05 a priori. data were reviewed for completeness. any skipped or missing items were summarized and reported for items not related to the knowledge test. missing items on the knowledge test portion were treated as incorrect responses.
most of the data collected were categorical and ordinal in nature; thus, most items were summarized using percentages and frequencies. age was normally distributed, so the mean and standard deviation were reported. descriptive statistics were compiled for all student characteristics, demographics, knowledge test scores, training needs, comfort, and confidence items. knowledge scores were computed as the percentage correct out of the 13 evd content items. the two concern items and three confidence items were scaled with four response ordinal categories (not at all, somewhat, very, and extremely). the two concern items were averaged together, and the three confidence items were averaged together. reliability was assessed for each of the averaged items using the standardized cronbach alpha and the split-half spearman-brown formula (eisinga, grotenhuis, & pelzer, 2013). average concern was significantly right skewed and still ordinal in nature and was dichotomized into subjects who were "not at all" concerned (score = 1) vs. those who were somewhat to extremely concerned (scores > 1). average confidence was also right skewed and still ordinal in nature and was also dichotomized into subjects who were "not at all" or "somewhat" confident (scores ≤ 2) and those who were "very" or "extremely" confident (scores > 2). multilevel modeling (mlm) instead of repeated measures analysis of variance was used to test for changes over time for the three time points for the continuous knowledge scores because mlm uses all available data and adjusts for missing data over time (hedeker & gibbons, 2006). for the dichotomized items for concern, confidence, and needing additional training, generalized multilevel modeling (gzmlm) was used for these binary response variables with a logit link function (e.g., logistic regression) to test for changes over time. age was also included as a covariate since older students may have had more confidence or comfort levels.
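To make the two reliability measures concrete, here is a minimal sketch, not the authors' SPSS analysis, of the standardized Cronbach alpha and the two-item split-half Spearman-Brown coefficient. The Likert responses below are hypothetical; for exactly two items the two formulas coincide, which is a useful sanity check.

```python
# Illustrative sketch (not the authors' code): reliability of two 4-point
# Likert items, as described in the methods above. Responses are hypothetical.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def standardized_cronbach_alpha(items):
    """alpha = k * r_bar / (1 + (k - 1) * r_bar), where r_bar is the mean
    inter-item correlation over all item pairs."""
    k = len(items)
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    r_bar = sum(pearson_r(items[i], items[j]) for i, j in pairs) / len(pairs)
    return k * r_bar / (1 + (k - 1) * r_bar)

def spearman_brown_two_items(x, y):
    """Two-item split-half Spearman-Brown: 2r / (1 + r)."""
    r = pearson_r(x, y)
    return 2 * r / (1 + r)

# hypothetical responses (1 = not at all ... 4 = extremely)
concern_hcp      = [1, 2, 3, 2, 4, 1, 3, 2]   # concern as a health care provider
concern_resident = [1, 2, 2, 2, 4, 1, 3, 3]   # concern as an atlanta resident

alpha = standardized_cronbach_alpha([concern_hcp, concern_resident])
sb = spearman_brown_two_items(concern_hcp, concern_resident)
```

For a two-item scale the standardized alpha reduces algebraically to the Spearman-Brown coefficient, which is why the paper (following Eisinga et al., 2013) reports both side by side.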
for consistency, age was included in all the mlm/gzmlm models. for all models, pairwise comparisons were made between responses at all three time points using sidak type i error rate-adjusted p values ("ibm spss statistics for windows, version 22.0," 2013). pairwise differences between t1 and t2 evaluated the initial improvements immediately after training, between t1 and t3 evaluated the longer term improvements from baseline, and between t2 and t3 evaluated the sustained or retained effects from the training.

baseline surveys were completed by 233 (73%) of 320 eligible undergraduate students. the age of students participating in this study ranged from 19 to 52 years with an average age of 26.2 (standard deviation = 7.2) years. the majority were female (93.1%) with 39.5% juniors, 34.8% seniors, and 25.7% accelerated undergraduates. when asked who the students had previously provided any information about the ebola outbreak to, the majority (>83%) said friends and family, and slightly less than half (41%) said fellow students (table 1). of the 233 who completed the baseline surveys, 192 (82.4%) completed the immediate post-test survey and 145 (62.2%) completed the final post-test survey. the 88 who did not complete all three surveys were not significantly different from the 145 who completed all three in age, gender, baseline knowledge, concern, confidence, or wanting additional training. initially, the students scored 75.9 (12.7) on the knowledge test and improved immediately after training with scores averaging 90.7 (12.4), which was significantly higher than baseline (p < .001; table 2, figure 1). their knowledge scores were well retained by the third time point with average scores of 89.8 (11.6), with no significant loss in knowledge scores from time 2 (p = .514).

[table 2 notes: * evd knowledge test scores analyzed using multilevel modeling (mlm). † dichotomous outcomes analyzed using generalized multilevel modeling (gzmlm) with binary responses with logit link functions. the categories indicated by the counts and percents reported were the target category for the binary response logit link gzmlm. ‡ one subject skipped answering the concern items at time 3.]

nurs outlook 64 (2016) 597-603

at baseline, only half (52.4%) were not at all concerned about their risk (averaged from concern as a health care provider and as an atlanta city resident). these two items showed good internal consistency and reliability with a standardized cronbach alpha and split-half spearman-brown coefficient of 0.865. this percentage of students not at all concerned did increase significantly over time with improvements from baseline to time 3 (p = .001) with 68.1% not at all concerned by time 3 (table 2, figure 1). when looking at the two individual concern items (concern as a health care provider and concern as a resident), the levels of not at all concerned were consistently lower for risk as a health care provider, but both showed steady increases over all three time points (table 2). at baseline, slightly more than half (52.8%) stated they did need additional training about evd, but this decreased significantly over time with significant decreases from time 1 to time 2 (p = .010) and from time 2 to time 3 (p = .048) with overall decreases from time 1 to time 3 (p < .001) down to only 25.5% wanting additional training by time 3 (table 2, figure 1). at baseline, only 43.8% of the students were very or extremely confident in their average ability to discuss evd with family/friends, answer questions about evd transmission, and convey a calm message about the general public risk for evd. these three items showed high internal consistency and reliability with a standardized cronbach alpha of 0.905.

[figure 1 - estimated percentages (means and 95% confidence intervals) from multilevel models. note: all estimated means and 95% confidence intervals were adjusted for age as a covariate in the multilevel models. test scores were analyzed using multilevel models (mlms) and the other three outcomes (additional training [% yes], average concern [% not at all], and confidence average [% very or extremely]) were analyzed using generalized multilevel models (gzmlms) with binary responses and logit link functions. p values are provided for each pairwise comparison between the three time points and were adjusted using sidak pairwise error rate correction.]

the average confidence increased significantly from baseline to immediate post-test to 70.3% (p < .001), but this confidence level decreased slightly by the final post-test at time 3 down to 62.8%, which was not significantly less than time 2 (p = .135) and was still significantly higher than baseline (p < .001; table 2, figure 1). when looking at the individual confidence level items, the lowest confidence levels were for discussing evd with family/friends and answering questions about evd transmission. the confidence levels for conveying a calm message about the general public's risk for evd were consistently higher across time (table 2). a final detailed summary of the percentage of correct answers to the 13 individual knowledge test items at all three time points is provided in table 3. reviewing this table shows that the weakest knowledge areas were for knowing how the ebola virus infection is diagnosed (item 12 with baseline knowledge at 31.8%), knowing how long protection lasts for people who recover from ebola (item 14 with baseline knowledge at 33.5%), and knowing whether you can still contract the virus from a person not showing symptoms (item 13 with baseline knowledge at 49.8%).
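The Sidak adjustment applied to the pairwise comparisons reported above has a simple closed form: with m comparisons, an adjusted p-value is 1 - (1 - p)^m. The sketch below illustrates it for the three time-point contrasts (t1 vs. t2, t1 vs. t3, t2 vs. t3); the raw p-values are hypothetical, not the study's actual output.

```python
# Illustrative Sidak correction for m = 3 pairwise comparisons.
# Raw p-values below are hypothetical examples, not values from the paper.

def sidak_adjust(p_values):
    """Adjusted p = 1 - (1 - p)^m for m comparisons; caps family-wise error."""
    m = len(p_values)
    return [1.0 - (1.0 - p) ** m for p in p_values]

raw = {"t1_vs_t2": 0.0004, "t1_vs_t3": 0.0010, "t2_vs_t3": 0.1800}
adjusted = dict(zip(raw, sidak_adjust(list(raw.values()))))
```

Note that the adjustment is slightly less conservative than Bonferroni (which would multiply each p by m), and adjusted values are always at least as large as the raw ones.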
these three items all showed improvement from baseline to immediate post-test at time 2, but items 13 and 14 showed the poorest retention by the third time point at the final post-test. two additional items with lower levels of knowledge at baseline were item 18 with only 62.7% knowing if supportive care was currently the only treatment available for ebola patients and item 20 with only 62.7% knowing how health care responders returning from other countries should be monitored. however, after training, both of these items showed significant and sustained improvement with knowledge levels above 80%. the remaining knowledge test items showed reasonable levels of knowledge at baseline above 80% that improved to 90% and higher over time. the west african evd epidemic of 2014/2015 that brought ebola-infected patients to the metro-atlanta area and to a hospital in which our student nurses were completing clinical education rotations provided a unique opportunity for the faculty at the nhwsn to prepare our student nurses for a major, fear-provoking public health event and to test the effectiveness of an educational program. there has been little study devoted to the response of health professional schools, particularly schools of nursing, in the event of an unforeseen infectious disease outbreak such as evd (stirling et al., 2015). furthermore, there is little guidance for how to swiftly and effectively prepare nursing students for such events, both in their roles as patient providers and community educators. the implementation of a jitt educational program effectively achieved our goals to increase evd knowledge, decrease fear, and enhance students' confidence in their ability to discuss evd risk. furthermore, these achievements were sustained over time. this demonstration educational program highlights the effectiveness of self-directed learning, especially in times of a threatening disease outbreak.
limitations to this educational program included a substantial decrease in the number of student participants who completed the final survey from the baseline measurement time point. this decrease aligned with the level of course credit or bonus points provided to students, indicating greater student motivation to complete the full program when credit was awarded in meaningful ways to students. giving extra credit points could also be a limitation of the program findings as it may not be representative of students who did not need extra credit (i.e., students with better course grades). the challenge with implementing consistent bonus points was having differing courses over two separate semesters. greater coordination among faculty and throughout the courses might have helped to encourage student participation. the student nurse population at emory university is primarily female, reflecting common gender norms of the nursing profession. this may, however, limit the generalizability of these findings to other more gender-balanced student groups. the jitt methodology and self-directed learning are effective means of increasing knowledge and confidence and decreasing risk concern among student nurses. in this era of globalization, when any communicable illness is "only a plane ride away" and intense media coverage can increase fear and anxiety, jitt is a successful method of delivering evidence-based information to students in a timely manner. schools of nursing must have the tools and resources to respond quickly and comprehensively during an unanticipated infectious disease outbreak to protect their students and staff, to prevent disease, and to be empowered advocates of accurate information in the midst of an epidemic.

references:
- just-in-time lectures
- the reliability of a two-item scale: pearson, cronbach or spearman-brown?
- care of patients with ebola virus disease
- i'm the head nurse at emory. this is why we wanted to bring the ebola patients to the u.s. the washington post
- longitudinal data analysis
- statistics for windows, version 22.0. armonk
- risk communication with nurses during infectious disease outbreaks: learning from sars
- an education programme for nursing college staff and students during a mers-coronavirus outbreak in saudi arabia
- telling the story
- chinese disasters and just-in-time education

key: cord-219817-dqmztvo4
authors: oghaz, toktam a.; mutlu, ece c.; jasser, jasser; yousefi, niloofar; garibay, ivan
title: probabilistic model of narratives over topical trends in social media: a discrete time model
date: 2020-04-14
journal: nan
doi: nan
doc_id: 219817
cord_uid: dqmztvo4

abstract: online social media platforms are turning into the prime source of news and narratives about worldwide events. however, a systematic summarization-based narrative extraction that can facilitate communicating the main underlying events is lacking. to address this issue, we propose a novel event-based narrative summary extraction framework. our proposed framework is designed as a probabilistic topic model, with categorical time distribution, followed by extractive text summarization. our topic model identifies topics' recurrence over time with a varying time resolution. this framework not only captures the topic distributions from the data, but also approximates the user activity fluctuations over time. furthermore, we define significance-dispersity trade-off (sdt) as a comparison measure to identify the topic with the highest lifetime attractiveness in a timestamped corpus. we evaluate our model on a large corpus of twitter data, including more than one million tweets in the domain of the disinformation campaigns conducted against the white helmets of syria. our results indicate that the proposed framework is effective in identifying topical trends, as well as extracting narrative summaries from a text corpus with timestamped data.
social media and microblogging platforms, such as twitter and facebook, are becoming the primary sources of real-time content on ongoing socio-political events, such as the united states presidential election in 2016 [11] , and natural and man-made emergencies, such as the covid-19 pandemic in 2020 [9] . however, without the appropriate tools, the massive textual data from these platforms makes it extremely challenging to obtain relevant information on significant events, distinguish between high-quality and unreliable content [17] , or identify the opinions within a polarized domain [13] . the challenges mentioned above have been studied from different aspects related to topic detection and tracking within the field of natural language processing (nlp); in this study, we use the terms narrative and story interchangeably. researchers have developed automatic document summarization tools and techniques, which intend to provide concise and fluent summaries over a large corpus of textual data [20] . preserving the key information in the summary and producing summaries that are comparable to human-created narratives are the primary goals of the extractive and abstractive approaches to automatic text summarization [2] . news websites are a prime example of such techniques, where automatic text summarization algorithms are applied to generate news headlines and titles from the news content [31] . the shortage of labeled data for text analysis has encouraged researchers to develop novel unsupervised algorithms that consider the co-occurrence of words in documents, as well as new techniques that exploit additional sources of information, such as wikipedia knowledge-based topic models [37, 38] . additionally, unsupervised learning enables training general-purpose systems that can be used for a variety of tasks and applications as strong classifiers [7] .
in this regard, statistical models of co-occurrence, such as latent dirichlet allocation (lda) [6] , discover the relevant structure and co-occurrence dependencies of words within a collection of documents to capture the distribution of the latent topic variable from the data. although abundant timestamped textual data, particularly from social media platforms and news reports, is available for analysis, the changes in the distribution of data over time have been neglected in most of the topic mining algorithms proposed in the literature [35] . for instance, time-series analysis on datasets covering the events of the 2012 us presidential election suggests that modeling topics and extracting summaries without considering the text-time relationship leads to missing the rise and fall of topics over time, the changes in term correlations, and the emergence of new topics [12] . although continuous-time topic models such as [35] have been proposed in the literature, topic models with a continuous-time distribution cannot model many modes in time, which makes them deficient at modeling activity fluctuations. additionally, continuous-time models suffer from instability problems when analyzing a multimodal dataset that is sparse in time. in this paper, we propose a probabilistic model of topics over time with a categorical time distribution to detect topical recurrence, designed as an lda-based generative model. to achieve probabilistic modeling of narratives over topical trends, we incorporate the components of narratives, including named entities and temporal-causal coherence between events, into our topical model. we believe that what differentiates a narrative model from topic analysis and summarization approaches is the ability to extract relevant sequences of text relative to the corresponding series of events associated with the same topic over time.
accordingly, our proposed narrative framework integrates unsupervised topic mining with extractive text summarization for narrative identification and summary extraction. we compare the narratives identified by our model with the topics identified by latent dirichlet allocation (lda) [6] and topics over time (tot) [35] . this comparison includes numerical results and analysis for a large corpus of more than one million tweets in the domain of disinformation campaigns conducted against the white helmets of syria. the collected dataset contains tweets spanning 13 months within the years 2018 and 2019. our results provide evidence that our proposed method is effective in identifying topical trends within timestamped data. furthermore, we define a novel metric called the significance-dispersity trade-off (sdt) in order to compare and identify topics with higher lifetime attractiveness in timestamped data. finally, we demonstrate that our proposed model discovers time-localized topics over events that approximate the distribution of user activities on social media platforms. the remainder of this paper is organized as follows: first, an overview of the related works is provided in section 2. in section 3, we provide a detailed explanation of our proposed method, followed by the experimental setup and results. finally, in section 5, we conclude the paper and discuss future directions. in this section, we first provide a background on narrative analysis and how the literature has investigated stories in social media. then, we present an overview of topic modeling and text summarization. narratives can be found in all day-to-day activities. the fields of research on narrative analysis include narrative representation, the coherence and structure of narratives, and the strategies, aims, and functionality of storytelling [22] . from a computational perspective, narratives may relate to topic mining, text summarization, machine translation [33] , and graph visualization.
the latter can be achieved by using directed acyclic graphs (dags) to demonstrate relationships over the network of entities [15] . narrative summaries can be constructed from an ordered chain of individual events with causality relationships among events, appearing within a specific topic [18] . the narrative sequence may report fluctuations over time relative to the underlying events. additionally, a story-like interpretation of the text is required for it to constitute a narrative [25] . since social media has become an established component of today's society, many studies have investigated narratives in social media content [14, 25, 34] . these narratives contain small autobiographies that have been developed in personal profiles and cover trivial everyday life events. other types of narratives appearing on social media platforms consist of breaking news and long stories of past events [25] . some types of narratives, such as breaking news, result in the emergence of other narratives related to predictions or projections of events in the near future [14] . this body of literature views social media conversation cascades as stories that are co-constructed by the tellers and their audience, and that circulate among the public within and across social media platforms. moreover, events have been considered as the causes of online user activity that can be identified via activity fluctuations over time [3, 25] . developing appropriate tools for social media narrative analysis can facilitate communicating the main ideas regarding the events in large data. as social media activities generate abundant timestamped multimodal data, many studies such as [8] have presented algorithms to discover the topics and develop descriptive summaries over social media events. topic models are probabilistic models that discover word patterns reflecting the underlying topics in a collection of documents [1] . the most commonly used approach to topic modeling is latent dirichlet allocation (lda) [19] .
lda is a generative probabilistic model with a hierarchical bayesian network structure that can be used for a variety of applications with discrete data, including text corpora. when lda is used for topic mining, a document is a bag-of-words that has a mixture of latent topics [6] . many advanced topic modeling approaches have been derived from lda, including hierarchical topic models [15, 16] that learn and organize the topics into a hierarchy to address super-sub topic relationships. this approach is well suited for analyzing social media and news stories that contain rich data over a series of real-world events [30] . topic models over time with a continuous-time distribution [35] and dynamic topic models [5] intend to capture the rise and fall of topics within a time range. however, continuous-time topic models, with, for example, a beta or normal time distribution, cannot model many modes in time. furthermore, a smooth time distribution over topics does not allow recognizing distinct topical events in the timestamped dataset, where topical events reflect the event-based topic activity fluctuations over time. topic modeling and summarization of social media data are challenging as a result of certain restrictions, such as the maximum number of characters allowed on the twitter platform. as short-text or microblog data has low word co-occurrence and little contextual information, models designed for short-text topic analysis and summarization may obtain context information with short-text aggregation to enrich the relevant context before further analysis [27] . document summarization techniques are generally categorized into abstractive and extractive text summarization models. herein, we consider extractive text summarization methods. several algorithms for extractive text summarization have been proposed in the literature that assign a salience score to sentences [10] .
to summarize a text corpus with short text, [29] presents an automatic summarization algorithm with topic clustering, cluster ranking with scores assigned to intermediate features, and sentence extraction. some other approaches, particularly for twitter data, include aggregating tweets by hashtags or conversation cascades [27, 32] , and obtaining summaries for a targeted event of interest as one or a set of tweets that are representative of the topics [8] . additionally, neural network-based summarization models [23, 28] , commonly with an encoder-decoder architecture, leverage an attention mechanism for contextual information among sentences, or the rouge evaluation metric, to identify discriminative features for sentence ranking and summarization. however, these architectures require labeled datasets and might not apply to short text. text summarization with compression using neural networks is proposed by [36] , which applies joint extraction and syntactic compression to rank compressed summaries with a neural network. our focus in the present work is on probabilistic topic modeling and extractive text summarization to provide descriptive narratives for the underlying events that occurred over a period of time. in this section, we explain our narrative framework. the framework comprises two steps: i. narrative modeling based on topic identification over time, and ii. extractive summarization from the identified narratives. to discover the narratives over topical events, we first use our discrete-time generative narrative model as an unsupervised learning algorithm to learn the distribution of textual contents from daily conversation cascades. then, we extract narrative summaries over topical events from the sentences in the time categories. this is achieved by sampling from the identified distribution of narratives and performing sentence ranking. the narrative modeling and summarization steps are explained below in separate subsections.
to model narratives, we design our topic model such that the discovered topics present a series of temporally ordered topical events. accordingly, the topical events deliver a narrative covering distinct events over the same topic. in this regard, we present narratives over categorical time (noc), a novel probabilistic topic model that discovers topics based on both word co-occurrence and temporal information to present a narrative of events. according to the topic-time relationship explained above, we refer to the discovered topics as narratives, to topical events as events, and to the extracted, temporally ordered sentences of documents with a high probability of belonging to each event as the extracted narrative summary. to fully comply with the definition of a narrative, we assume a causality relation between the conversation cascades in social media. however, we do not investigate the causality relation across the conversation cascades or named entities. the differences between our narrative model and dynamic topic models [5] , topic models with a continuous time distribution [35] , and hierarchical topic models [16, 26] include: not filtering the data for a specific event, imposing sharp transitions for topic-time changes with time slicing, discovering topical events without scalability and sparsity issues, allowing a multimodal topic distribution in time as a result of the categorical time distribution, and selecting an appropriate slicing size such that distinct topical events are recognizable. additionally, the categorical time distribution enables discovering topical events with a varying time resolution, for instance, weekly, biweekly, or monthly. time discretization raises the question of selecting the appropriate slicing size, or number of categories, which depends on the characteristics of the dataset under study. on the contrary, topic models with a continuous time distribution cannot model many modes in time.
additionally, continuous-time models such as [35] suffer from instability problems if the dataset is multimodal and sparse in time. furthermore, categorical time enables discovering topic recurrence, which results in identifying topical events related to distinct narrative activities, which is of interest in this paper. narrative activities in social media refer to the amount of textual content circulating in online platforms over time, corresponding to a specific topic. the generative process in noc models timestamps and words per document; for inference, we use gibbs sampling, a markov chain monte carlo (mcmc) algorithm. the graphical model of noc is illustrated in figure 1. as can be seen from the graphical model, the posterior distribution of topics depends on both the text and time modalities. the generative procedure can be described as follows: i. for each of the t topics z, draw a multinomial ϕ_z from a dirichlet prior β; ii. for each document d, draw a multinomial θ_d from a dirichlet prior α; iii. for each word w_di in d: (a) draw a topic z_di from the multinomial θ_d; (b) draw the word w_di from the multinomial ϕ_{z_di}; (c) draw the timestamp t_di from the categorical distribution ψ_{z_di}. in this model, gibbs sampling provides approximate inference instead of exact inference. to calculate the probability of a topic assignment to word w_di, we first need to calculate the joint probability of the dataset, p(z_di, w_di, t_di | w_-di, t_-di, z_-di, α, β, ψ), and use the chain rule to derive p(z_di | w, t, z_-di, α, β, ψ) as below, where the -di subscript refers to all tokens except w_di: p(z_di = z | w, t, z_-di, α, β, ψ) ∝ (m_dz + α) · (n_zv + β) / (Σ_v (n_zv + β)) · ψ_z(b_k), where n_zv refers to the number of times word v is assigned to topic z, m_dz refers to the number of word tokens in document d that are assigned to topic z, and b_k represents the k-th time slice, which contains t_di. the details of the gibbs sampling derivation can be found in the appendix section. after each iteration of gibbs sampling, we update the probability p(t_{z_di} ∈ b_k) as ψ_z(b_k) = Σ_di i(z_di = z) · i(t_di ∈ b_k) / Σ_di i(z_di = z), where i(·) is equal to 1 when its condition holds, and 0 otherwise.
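the sampling sweep described above can be sketched in a few lines. this is a simplified, pure-python illustration of a collapsed gibbs step with a categorical time factor, not the authors' implementation; the function names, the small smoothing constant on ψ, and the per-sweep ψ update schedule are our own assumptions:

```python
import random

def init_state(docs, n_topics, vocab_size, n_slices):
    """randomly assign a topic to every token and build the count tables."""
    n_zv = [[0] * vocab_size for _ in range(n_topics)]   # word-topic counts
    n_z = [0] * n_topics                                 # tokens per topic
    m_dz = [[0] * n_topics for _ in range(len(docs))]    # doc-topic counts
    z = []
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:
            t = random.randrange(n_topics)
            zd.append(t)
            n_zv[t][w] += 1; n_z[t] += 1; m_dz[d][t] += 1
        z.append(zd)
    psi = [[1.0 / n_slices] * n_slices for _ in range(n_topics)]
    return z, (n_zv, n_z, m_dz), psi

def gibbs_sweep(docs, slices, z, counts, psi, alpha=1.0, beta=0.5):
    """one collapsed gibbs sweep: resample every token's topic, then
    re-estimate psi as normalized (topic, time-slice) token counts."""
    n_zv, n_z, m_dz = counts
    n_topics, vocab_size = len(n_z), len(n_zv[0])
    for d, doc in enumerate(docs):
        b = slices[d]                                    # time slice of doc d
        for i, w in enumerate(doc):
            old = z[d][i]                                # remove the token
            n_zv[old][w] -= 1; n_z[old] -= 1; m_dz[d][old] -= 1
            # p(z | rest) ∝ (m_dz + α)(n_zv + β)/(n_z + Vβ) · ψ[z][b]
            weights = [(m_dz[d][t] + alpha)
                       * (n_zv[t][w] + beta) / (n_z[t] + vocab_size * beta)
                       * psi[t][b] for t in range(n_topics)]
            new = random.choices(range(n_topics), weights=weights)[0]
            z[d][i] = new                                # add it back
            n_zv[new][w] += 1; n_z[new] += 1; m_dz[d][new] += 1
    slice_counts = [[1e-9] * len(psi[0]) for _ in range(n_topics)]
    for d, doc in enumerate(docs):
        for zi in z[d]:
            slice_counts[zi][slices[d]] += 1
    for t in range(n_topics):
        total = sum(slice_counts[t])
        psi[t] = [c / total for c in slice_counts[t]]
    return z, (n_zv, n_z, m_dz), psi
```

running several sweeps over a toy corpus converges the count tables; note how ψ couples the topic assignments to the time slices, which is what lets the model place a topic's mass on multiple, disjoint time categories.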
in this paper, we report results with a bi-weekly categorical time resolution. to determine the values of the hyper-parameters α and β, and to investigate the sensitivity of the model to these values, we repeated our experiment with symmetric dirichlet distributions using values α ∈ [0.1, 0.5, 1] and β ∈ [0.01, 0.1, 0.5, 0.8, 1]. we observed that the model did not show significant sensitivity to the values of these hyper-parameters. thus, we fix α = 1 and β = 0.5, both as symmetric dirichlet distributions. we initialize the hyper-parameter ψ in two ways for comparison: i. random initialization (model referred to as noc_r); and ii. initialization based on the probability of user activity per time category, illustrated in figure 3c (model referred to as noc_a). to estimate the number of topics for our experiments, we first visualize the tweets' hashtag co-occurrence graph. we measure the graph modularity to examine the structure of the communities in this graph. we observe the highest modularity score of 0.41 using a modularity resolution equal to 0.85. figure 2 illustrates a downsampled version of this graph, where each color represents a modularity class. the edges of the graph are weighted according to the number of hashtag co-occurrences in the document collection. our modularity analysis suggests that a few distinct hashtag communities exist. additionally, the dataset under study contains tweets associated with a single domain. as a result, we assume the number of topics to be relatively low. to choose an appropriate number of topics, we repeated lda with the number of topics t ∈ {4, . . . , 20} with increments of size 1. we evaluated the c_v coherence of the topics identified by lda and observed the highest coherence score at t = 5. thus, we report our experimental results using this value.
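for reference, a coherence score of this kind is easy to compute from document frequencies. the paper uses c_v for this selection step (and a pmi-based coherence later for evaluation); the sketch below implements the simpler pmi variant, with a hypothetical function name and a small smoothing constant of our choosing:

```python
import math
from itertools import combinations

def pmi_coherence(top_words, docs, eps=1e-12):
    """average pairwise pmi, log p(wj, wk) / (p(wj) p(wk)), over a topic's
    top words, with probabilities estimated as document frequencies."""
    doc_sets = [set(d) for d in docs]
    n = len(doc_sets)
    def p(*words):
        # fraction of documents containing all the given words
        return sum(all(w in s for w in words) for s in doc_sets) / n
    pairs = list(combinations(top_words, 2))
    return sum(math.log((p(a, b) + eps) / (p(a) * p(b) + eps))
               for a, b in pairs) / len(pairs)
```

sweeping t over {4, …, 20}, fitting a model for each t, and keeping the t whose topics have the highest average coherence reproduces the selection loop described above.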
we employ the discovered probabilities of topics over documents, θ, the probabilities of words per topic, ϕ, and the probabilities of topics per time category, ψ, to perform sentence ranking. this ranking allows extracting the sentences with the highest scores of belonging to each topic. this is achieved by performing weighted sampling on the collection of documents based on the probabilities of topics per time category, ψ, and drawing d documents according to θ. the weighted sampling leads to drawing more documents from the time categories b_k with a higher ψ, as these time slices contain more documents related to the topic z. each document contains a sequence of sentences (s_1, s_2, . . . , s_j) ∈ d from the aggregated conversation cascades per day. information on the aggregation of conversation cascades and document preparation can be found in section 4.2. since the social media narrative activity over a topic evolves from the circulation of identical or similar textual content in the platform, the content involves significant similarity. for instance, twitter conversation cascades include replies, quotes, and comments, where replies and quotes duplicate the textual content. therefore, we applied the jaro-winkler distance over the temporally ordered sentences and dismissed sentences with a similarity above 70%, while keeping the longest sentence. after removing redundant text as described earlier, we calculate the probability of each sentence s_j by measuring the sum of the probabilities of topics for the words w_di ∈ s_j. then, we select the sentences with the highest accumulated probability of words w per topic z. summary coherence was induced as suggested in [4] by ordering the extracted sentences according to their timestamps, such that the oldest sentences appear first. table 4 in the appendix section contains the extracted narrative summaries for 5 topics for a sample run. as mentioned earlier, the topics discovered by noc present a series of temporally ordered topical events.
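the redundancy-removal step can be reproduced directly; below is a self-contained jaro-winkler implementation and a greedy deduplication that keeps the longest sentence among near-duplicates, using the 70% threshold from the text (the greedy longest-first ordering is our own simplification):

```python
def jaro_winkler(s1, s2, p=0.1):
    """jaro-winkler similarity in [0, 1]."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if not len1 or not len2:
        return 0.0
    window = max(max(len1, len2) // 2 - 1, 0)
    m1, m2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):                       # count matches
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if not matches:
        return 0.0
    t, j = 0, 0
    for i in range(len1):                            # count transpositions
        if m1[i]:
            while not m2[j]:
                j += 1
            if s1[i] != s2[j]:
                t += 1
            j += 1
    jaro = (matches / len1 + matches / len2
            + (matches - t / 2) / matches) / 3
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):                 # common prefix, max 4
        if a != b:
            break
        prefix += 1
    return jaro + prefix * p * (1 - jaro)

def dedup_sentences(sentences, threshold=0.7):
    """drop near-duplicates (similarity > threshold), keeping the longest."""
    kept = []
    for s in sorted(sentences, key=len, reverse=True):
        if all(jaro_winkler(s, k) <= threshold for k in kept):
            kept.append(s)
    return kept
```

after deduplication, each surviving sentence can be scored by summing p(w | z) over its words and the top-scoring, timestamp-ordered sentences form the summary, as described above.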
thus, the topical events deliver a narrative covering distinct social media events over the same topic. figure 3 demonstrates the narrative distributions generated with noc, where the hyper-parameter ψ was randomly initialized (referred to as noc_r). this figure shows that the narratives identified by our model are distinct from each other, and that the collapsed distribution of all narratives approximates the distribution of social media user activity over time. the identified narratives can be evaluated using effective evaluation metrics for topic models. accordingly, we calculate pointwise mutual information [24] to measure the coherence of a topic z as the average pairwise score pmi(w_j, w_k) = log( p(w_j, w_k) / (p(w_j) p(w_k)) ) over the k most probable words of the topic, where p(w_j) and p(w_k) refer to the probabilities of occurrence of words w_j and w_k, and p(w_j, w_k) represents the probability of co-occurrence of the two words in the collection of documents. we compare our model with lda and tot [35] , where tot is a probabilistic topic model over time with a beta distribution for time. table 2 displays the average coherence score measured across the topics discovered by lda, tot, and noc. for noc, we investigate initializing the parameter ψ with random and user activity-based initialization, referred to as noc_r and noc_a, respectively. we considered the k = 500 most probable words from each topic. this comparison suggests that the narratives identified by noc are more coherent than the topics identified by lda, with an improvement in coherence of about 35%. the observed improvement compared with tot was about 27%. additionally, initializing the hyper-parameter ψ in noc using the distribution of user activity improves the narrative coherence by about 3%. the topic attractiveness to social media users can be investigated as a measure of the length of conversation cascades, the number of initiated textual contents, and the number of unique users performing an activity relative to the underlying topic.
the user activity fluctuations in timestamped data may contain activity bursts that are illustrative of significant events. similarly, the generation and propagation of textual content within an online platform can illustrate the narrative activity relative to the events over time, where a burst represents a significant narrative activity. additionally, the recurrence of a topic can be considered an attractiveness measure for the associated topic. in this regard, we propose the significance-dispersity trade-off (sdt) metric to compare the identified narratives against each other. sdt measures the lifetime attractiveness of the identified narratives based on the distribution of narratives over topical events. the proposed metric quantifies the significance of the narrative activities and the recurrence of a topic by employing the shannon entropy of the discovered narrative distributions. the intuition behind the sdt score is that the value of the entropy is maximum when the probability distribution is uniform; on the contrary, this value is minimum if the distribution is a delta function. this is visualized in figure 4 in the appendix section. we define the dispersity of a categorical-time topic distribution as a measure of the dispersion over the time categories. based on this definition, the sdt score of topic z can be obtained as the weighted geometric mean sdt(z) = h^α · (h_max − h)^(1−α), where h is the shannon entropy of the categorical distribution of time for topic z, h_max = log_2(k), and k refers to the number of time slices in the distribution. we assume that social media topics with high lifetime attractiveness are both significant and recurrent; however, the probability distribution imposes a trade-off between the two. the parameter α weights the geometric mean of h and h_max − h, which enables promoting either significance or recurrence, depending on the application under study. a larger value of the parameter α promotes dispersity in the sdt score, and a smaller value promotes significance.
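a minimal sketch of the sdt score, assuming the form sdt(z) = h^α (h_max − h)^(1−α) implied by the "weighted geometric mean of h and h_max − h" described above (the exact normalization in the original paper may differ):

```python
import math

def sdt(p, alpha=0.5):
    """significance-dispersity trade-off of a categorical time distribution
    p over k slices, under the assumed form h^alpha * (h_max - h)^(1 - alpha)."""
    h = -sum(pi * math.log2(pi) for pi in p if pi > 0)   # shannon entropy
    h_max = math.log2(len(p))
    spread = max(h_max - h, 0.0)       # guard against float round-off
    return (h ** alpha) * (spread ** (1 - alpha))
```

under this form, the score vanishes for both a delta distribution (h = 0) and a uniform one (h = h_max), and peaks for distributions that are both concentrated enough to be significant and spread enough to be recurrent.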
the bounds of the sdt score are 0 ≤ sdt(z) ≤ h_max, where sdt(z) = 0 occurs when h = h_max (a uniform distribution) or when h = 0 (a delta distribution). since the categorical time distribution of our narrative model allows many modes in time, recurrent narratives can be identified. additionally, the narrative activity fluctuations can be modeled using a categorical time distribution in topic analysis. table 3 provides a comparison of the sdt scores measured for the 5 identified narratives, using varying values of α. an illustration of the distribution of the extracted narratives can be seen in figure 3a. we can clearly see in this figure that narratives 1 and 3 have the highest dispersity; on the contrary, narratives 4 and 2 have the highest significance. we compare sdt_i for narrative i with the user activity count associated with that narrative. the results suggest that the sdt score can be used to identify the narrative with the highest lifetime attractiveness in a timestamped dataset. in our experiments, this is achieved for topic 1 when the value of α is greater than or equal to 0.7. as can be seen, this topic is associated with the highest user activity count, reported in the same table. to analyze topical events and provide narratives, we investigate the twitter dataset on the domain of the white helmets of syria over a period of 13 months, from april 2018 to april 2019. this dataset was provided to us by leidos inc. as part of the computational simulation of online social behavior (socialsim) program initiated by the defense advanced research projects agency (darpa). we analyze more than 1,052,000 tweets from april 2018 to april 2019. to prepare the model inputs, we filter out non-english tweets. then, we clean up the data by removing usernames, short urls, and emoticons. additionally, we remove stopwords and perform part-of-speech (pos) tagging and named entity recognition (ner) on each tweet using the stanford named entity recognizer model.
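the cleaning steps listed above (minus pos tagging and ner, which rely on the external stanford tool) can be sketched with a few regular expressions; the function name and the tiny stopword list here are illustrative stand-ins for a real pipeline:

```python
import re

STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "and", "on", "for"}

def clean_tweet(text):
    """drop usernames, short urls and emoticons, lowercase, remove
    stopwords; return None for tweets shorter than 3 words."""
    text = re.sub(r"@\w+", " ", text)          # usernames
    text = re.sub(r"https?://\S+", " ", text)  # short urls
    text = re.sub(r"[^\w\s#]", " ", text)      # emoticons and punctuation
    words = [w for w in text.lower().split() if w not in STOPWORDS]
    return " ".join(words) if len(words) >= 3 else None
```

applying this per tweet, then concatenating the cleaned root, parent, and reply/quote/retweet texts of each daily conversation cascade, yields the kind of context-rich pseudo-documents the model consumes.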
using the ner tool, we extract persons, locations, and organizations, and remove all pseudo-documents that do not contain named entities, similar to [21] . furthermore, we remove tweets shorter than 3 words. as twitter enforces a maximum limit of 280 characters, the collected tweets lack context information and have very low word co-occurrence. we tackle the challenge of topic modeling on short-text tweets, and include richer context information, by preparing pseudo-documents as our model inputs via aggregating the daily root, parent, and reply/quote/retweet comments in each conversation cascade. we maintain the order of the conversation according to the timestamps associated with each tweet. this text aggregation method results in pseudo-documents rich in context and related words, with a daily time resolution. we use the output of the pre-processing phase as the model input pseudo-documents, referred to as documents in this paper. in this paper, we addressed the problem of narrative modeling and narrative summary extraction for social media content. we presented a narrative framework consisting of i. narratives over categorical time (noc), a probabilistic topic model with a categorical time distribution; and ii. extractive text summarization. the proposed narrative framework identifies narrative activities associated with social media events. identifying topics' recurrence and significance over time categories with our model allowed us to propose the significance-dispersity trade-off (sdt) metric. sdt can be employed as a comparison measure to identify the topic with the highest lifetime attractiveness in a timestamped corpus. results on real-world timestamped data suggest that the narrative framework is effective in identifying distinct and coherent topics from the data. additionally, the results illustrate that the identified narrative distributions approximate the user activity fluctuations over time.
moreover, informative and concise narrative summaries for timestamped data are produced. further improvement of the narrative framework can be achieved by taking the causality relations across social media conversation cascades and social media events into account. other future directions include identifying topical hierarchies and extracting summaries associated with each hierarchy. starting with the joint distribution p(w, t, z | α, β, ψ), we can use conjugate priors to simplify the equations as below: p(w, t, z | α, β, ψ) = p(w | z, β) · p(t | ψ, z) · p(z | α), where the factors denote the probability mass functions of the words, the timestamps, and the topic assignments, respectively. the conditional probability p(z_di | w, t, z_-di, α, β, ψ) can be found using the chain rule as p(z_di | w, t, z_-di, α, β, ψ) = p(w, t, z | α, β, ψ) / p(w, t, z_-di | α, β, ψ). the probability p(t_di ∈ b_k) can be measured as ψ_z(b_k) = Σ_di i(z_di = z) · i(t_di ∈ b_k) / Σ_di i(z_di = z), where i(·) is equal to 1 when its condition holds, and 0 otherwise. table 4 (excerpt): sample sentences from the extracted narrative summaries, with the top words of the corresponding topic.
- remember first they said the video including the pics of the chlorine cylinder was fake.
- whitehelmets one america news pearson sharp visits hospital in douma where white helmets filmed chemical attack hoax; multiple eyewitness doctors say no chemical attack took place, syria.
- this is the video evidence of the airstrike on zardana, an idlib town, captured by a very expensive camera on the helmet of the whitehelmets rescuer.
- white helmets making films of chemical attacks with children in idlib.
top words: chemical, attack, douma, terrorist, fake, child, propaganda, video, russian, russia
- from the fabrication of the plays of the chemist and coverage of the crimes of terrorism to the public cooperation with the israeli army, the white helmets.
- they are holding children! another chemical attack is imminent, its all they've got left!
- 4 dead including two children and more than 50 wounded, mostly women and children.
- love the white helmets propaganda, almost as untruthful as the bbc.
- trumps usa has built a rationale for its public that it will need to support rebels in holding on to a large chunk of syria.
i wonder how it is possible that criminal associations such as whitehelmets and the syrian human rights observatory can make the world go round as they want by influencing the policies of world leaders. u.s. freezes funding for syrias white helmets. white helmets are terrorists. former head of royal navy lord west on bbc white helmets aren't neutral they're on the side of the terrorists. the summaries provided here are the results for a sample run of the proposed narrative framework and do not reflect authors' personal opinions. a survey of topic modeling in text mining text summarization techniques: a brief survey leveraging burst in twitter network communities for event detection sentence ordering in multidocument summarization dynamic topic models latent dirichlet allocation a survey of multi-label topic models automatic summarization of events from social media the covid-19 social media infodemic summarizing microblogs during emergency events: a comparison of extractive summarization algorithms twitter as arena for the authentic outsider: exploring the social media campaigns of trump and clinton in the 2016 us presidential election themedelta: dynamic segmentations over temporal topic models polarization in social media assists influencers to become more influential: analysis and two inoculation strategies 17 small stories research: a narrative paradigm for the analysis of social media. 
the sage handbook of social media research methods hieve: a corpus for extracting event hierarchies from news stories hierarchical topic models and the nested chinese restaurant process claimbuster: the first-ever end-to-end factchecking system skip n-grams and ranking functions for predicting script events latent dirichlet allocation (lda) and topic modeling: models, applications, a survey twitter based event summarization real-time entity-based event detection for twitter models of narrative analysis: a typology ranking sentences for extractive summarization with reinforcement learning automatic evaluation of topic coherence seriality and storytelling in social media large-scale hierarchical topic models short and sparse text topic modeling via self-aggregation leveraging contextual sentence relations for extractive summarization using a neural attention model sumblr: continuous summarization of evolving tweet streams sub-story detection in twitter with hierarchical dirichlet processes from neural sentence summarization to headline generation: a coarse-to-fine approach seq2seq models for recommending short text conversations narrative information extraction with non-linear natural language processing pipelines make data sing: the automation of storytelling topics over time: a non-markov continuous-time model of topical trends neural extractive text summarization with syntactic compression incorporating wikipedia concepts and categories as prior knowledge into topic models. 
key: cord-204835-1yay69kq authors: sun, chenxi; hong, shenda; song, moxian; li, hongyan title: a review of deep learning methods for irregularly sampled medical time series data date: 2020-10-23 journal: nan doi: nan sha: doc_id: 204835 cord_uid: 1yay69kq irregularly sampled time series (ists) data has irregular temporal intervals between observations and different sampling rates between sequences. ists commonly appears in healthcare, economics, and geoscience. especially in the medical environment, the widely used electronic health records (ehrs) contain abundant typical irregularly sampled medical time series (ismts) data. developing deep learning methods on ehr data is critical for personalized treatment, precise diagnosis and medical management. however, it is challenging to use deep learning models directly on ismts data. on the one hand, ismts data has both intra-series and inter-series relations, so both the local and the global structures should be considered. on the other hand, methods should balance task accuracy against model complexity while remaining general and interpretable. so far, many existing works have tried to solve these problems and have achieved good results. in this paper, we review these deep learning methods from the perspectives of technology and task. under the technology-driven perspective, we summarize them into two categories: missing data-based methods and raw data-based methods. under the task-driven perspective, we also summarize them into two categories: data imputation-oriented and downstream task-oriented. for each category, we point out its advantages and disadvantages. moreover, we implement some representative methods and compare them on four medical datasets with two tasks. finally, we discuss the challenges and opportunities in this area. 
time series data have been widely used in practical applications, such as health [1] , geoscience [2] , sales [3] , and traffic [4] . the popularity of time series prediction, classification, and representation has attracted increasing attention, and many efforts have been made to address the problem in the past few years [4, 5, 6, 7] . the majority of these models assume that the time series data is evenly spaced and complete. however, in the real world, time series observations usually have non-uniform time intervals between successive measurements. three reasons can cause this characteristic: 1) missing data exists in the time series due to broken sensors, failed data transmissions or damaged storage; 2) the sampling machine itself does not have a constant sampling rate; 3) different time series usually come from different sources that have various sampling rates. we call such data irregularly sampled time series (ists) data. ists data naturally occurs in many real-world domains, such as weather/climate [2] , traffic [8] , and economics [3] . in the medical environment, irregularly sampled medical time series (ismts) is abundant. the widely used electronic health records (ehrs) contain a large amount of ismts data. ehrs are the real-time, patient-centered digital version of patients' paper charts. ehrs provide opportunities to develop advanced deep learning methods that improve healthcare services and save lives by assisting clinicians with diagnosis, prognosis, and treatment [9] . many works based on ehr data have achieved good results, such as mortality risk prediction [10, 11] , disease prediction [12, 13, 14] , concept representation [15, 16] and patient typing [17, 16, 18] . owing to the special characteristics of ismts, the most important step is establishing suitable models for it, which is especially challenging in medical settings. different tasks need different adaptation methods. data imputation and prediction are the two main tasks. 
the data imputation task is a preprocessing task performed when modeling the data, while the prediction task is a downstream task serving the final goal. the two types of tasks may be intertwined. standard techniques, such as mean imputation [19] , singular value decomposition (svd) [20] and k-nearest neighbours (knn) [21] , can impute data, but they leave a large gap between the imputed and the true data distributions and offer no ability to perform a downstream task such as mortality prediction. linear regression (lr) [22] , random forest (rf) [23] and support vector machines (svm) [24] can predict, but fail on ists data. state-of-the-art deep learning architectures have been developed to perform not only supervised tasks but also unsupervised ones that relate to both imputation and prediction. recurrent neural networks (rnns) [25, 26, 27] , auto-encoders (aes) [28, 29] and generative adversarial networks (gans) [30, 31] have achieved good performance in medical data imputation and medical prediction thanks to the learning and generalization abilities obtained from their complex nonlinearity. they can carry out the prediction task or the imputation task separately, or carry out both tasks at the same time by splicing neural network structures together. existing deep learning methods embody different understandings of the characteristics of ismts data. we summarize them as the missing data-based perspective and the raw data-based perspective. the first perspective [1, 32, 33, 34, 35] treats an irregular series as a series with missing data and solves the problem through more accurate data calculation. the second perspective [17, 36, 37, 38, 39] works on the structure of the raw data itself, modeling ismts directly by utilizing the irregular time information. neither view dominates the other; either way, it is necessary to grasp the data relations comprehensively in order to model the data effectively. 
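to illustrate why such two-step baselines fall short, per-variable mean imputation can be sketched in a few lines. this is a hypothetical numpy example (function name `mean_impute` is ours, not from any cited method): it ignores temporal order entirely, which is exactly the weakness discussed above.

```python
import numpy as np

def mean_impute(series):
    """Replace NaN entries with the mean of the observed values.

    A baseline imputation that ignores temporal order entirely,
    which is why it cannot serve downstream prediction well.
    """
    series = np.asarray(series, dtype=float)
    observed = series[~np.isnan(series)]
    if observed.size == 0:
        return series  # nothing observed, nothing to impute
    filled = series.copy()
    filled[np.isnan(filled)] = observed.mean()
    return filled

# heart-rate-like toy series with two missing readings
hr = [72.0, np.nan, 80.0, np.nan, 76.0]
print(mean_impute(hr))  # missing slots become 76.0, the observed mean
```

note how both missing slots receive the same value regardless of when they occur, flattening any temporal dynamics.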
we distinguish two relations in ismts: intra-series relations (data relations within a time series) and inter-series relations (data relations between different time series). all the existing works model one or both of them. they correspond to the local and global structures of the data, which we introduce in section 3. besides, different ehr datasets may lead to different performance for the same method. for example, the real-world mimic-iii [40] and cinc [41] datasets record multiple different diseases. the records of different diseases have distinct data characteristics, and the prediction results of the general methods [1, 17, 33, 35] vary between the disease datasets. thus, many existing methods model the records of a specific disease, like sepsis [42] , atrial fibrillation [43, 44] and kidney disease [45] , and have improved the prediction accuracy. the rest of the paper is organized as follows. section 2 gives the basic definitions and abbreviations. section 3 describes the features of ismts from two viewpoints: intra-series and inter-series. sections 4 and 5 introduce the related works from the technology-driven and task-driven perspectives. in each perspective, we summarize the methods into specific categories and analyze their merits and demerits. section 6 compares experiments with some of the methods on four medical datasets with two tasks. in sections 7 and 8, we raise the challenges and opportunities in modeling ismts data and then conclude. the summary of abbreviations is in table 1 . a typical ehr dataset consists of a number of patient records which include demographic information and in-hospital information. in-hospital information has a hierarchical patient-admission-code form, shown in figure 1 . each patient has a number of admission records, as he/she could be in hospital several times. the codes comprise diagnoses, lab values and vital sign measurements. 
each record r_i consists of many codes, including a static diagnosis code set d_i and a dynamic vital sign code set x_i . each code has a time stamp t. ehrs contain many ismts for two reasons: 1) multiple admissions of one patient and 2) multiple time series records in one admission. the admission records of each patient have different time stamps. because of health status dynamics and unpredictable circumstances, a patient will visit the hospital at varying intervals [46] . for example, in figure 1 , march 23, 2006 , july 11, 2006 and february 14, 2011 are patient admission times. the time interval between the 1st and 2nd admissions is a couple of months, while the time interval between admissions 2 and 3 is 5 years. each time series within one admission, like blood pressure, also has varying time intervals. as shown for admission 2 in figure 1 , the sampling times are not fixed. different physiological variables are examined at different times according to the changes in symptoms, and not every possible test is regularly measured during an admission. when a certain symptom worsens, the corresponding variables are examined more frequently; when the symptom disappears, the corresponding variables are no longer examined. without loss of generality, we only discuss univariate time series; multivariate time series can be modeled in the same way. definition 2 illustrates the three important quantities of ismts: the value x, the time t and the time interval δ. in some missing value-based works (introduced in section 4), a masking vector m ∈ {0, 1} represents whether a value is missing. 3 characteristics of irregularly sampled medical time series medical measurements are frequently correlated both within streams and across streams. for example, the blood pressure of a patient at a given time could be correlated with the blood pressure at other times, and it could also be related to the heart rate at that given time. 
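the (value, time, interval) form just described can be built directly from raw observations. the sketch below is a hypothetical numpy helper (the name `to_triplet` is ours) producing the x, t and δ quantities of definition 2, with the first interval taken as 0 by convention:

```python
import numpy as np

def to_triplet(times, values):
    """Turn an irregular univariate series into (x, t, delta).

    delta[i] is the gap since the previous observation; the
    first gap is defined as 0 by convention.
    """
    t = np.asarray(times, dtype=float)
    x = np.asarray(values, dtype=float)
    delta = np.diff(t, prepend=t[0])  # delta[0] == 0
    return x, t, delta

# blood-pressure-like readings at hours 0, 1, 3, 27 (irregular gaps)
x, t, delta = to_triplet([0, 1, 3, 27], [120, 118, 125, 110])
print(delta)  # [ 0.  1.  2. 24.]
```

the δ vector is what time-aware models consume directly, instead of forcing the series onto a uniform grid.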
thus, we introduce the irregularity of ismts in two aspects: 1) intra-series and 2) inter-series. intra-series irregularity refers to the irregular time intervals between neighboring observations within a stream. for example, as shown in figure 1 , the blood pressure time series has varying time intervals, such as 1 hour, 2 hours, and even 24 hours. the time intervals add a time sparsity factor when the intervals between observations are large [46] . two existing approaches can handle the irregular time interval problem: 1) determining a fixed interval and treating the time points without data as missing data; 2) modeling the time series directly, treating the irregular time intervals as information. the first approach requires a function to impute the missing data [47, 48] . for example, some rnns [1, 49, 34, 50, 51, 52] can impute the sequence data effectively by considering the order dependency. the second approach usually uses the irregular time intervals as inputs. for example, some rnns [17, 36] apply a time decay to the order dependency, which weakens the relation between neighbors with long time intervals. inter-series irregularity is mainly reflected in the multiple sampling rates among different time series. for example, as shown in figure 1 , vital signs such as heart rate (ecg data) have a high sampling rate (in seconds), while lab results such as ph are measured infrequently (in days) [53, 54] . two existing approaches can handle the multi-sampling-rate problem: 1) considering the data as one multivariate time series; 2) processing the multiple univariate time series separately. the first approach aligns the variables of different series in the same dimension and then solves the resulting missing data problem [55] . the second approach models the different time series simultaneously and then designs fusion methods [38] . numerous related works are capable of modeling ismts data; we categorize them from two perspectives: 1) technology-driven and 2) task-driven. we describe each category in detail below. 
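the first approach above, fixing an interval and letting missing data appear, can be sketched as simple binning. this is a hypothetical helper (name `discretize` and the last-observation-per-bin rule are our assumptions for illustration):

```python
import numpy as np

def discretize(times, values, bin_width, n_bins):
    """Align an irregular series on a fixed grid.

    Each bin keeps the last observation falling into it; empty bins
    become NaN with mask 0 -- this is where 'missing data' appears.
    """
    grid = np.full(n_bins, np.nan)
    mask = np.zeros(n_bins, dtype=int)
    for t, v in zip(times, values):
        idx = int(t // bin_width)
        if idx < n_bins:
            grid[idx] = v
            mask[idx] = 1
    return grid, mask

grid, mask = discretize([0.2, 2.5, 2.9, 5.1], [1.0, 2.0, 3.0, 4.0],
                        bin_width=1.0, n_bins=6)
print(mask)  # [1 0 1 0 0 1]
```

note the trade-off the text describes: two observations fall into bin 2 and one is lost, while bins 1, 3 and 4 become artificial missing points.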
based on the technology-driven view, we divide the existing works into two categories: 1) a missing data-based perspective and 2) a raw data-based perspective. the specific categories are shown in figure 2 . the missing data-based perspective regards every time series as having uniform time intervals; the time points without data are considered to be missing data points. as shown in figure 5a , when converting irregular time intervals to regular time intervals, missing data shows up. the missing rate r_missing measures the degree of missingness at a given sampling rate r_sampling:

r_missing = (# of time points with missing data) / (# of time points)    (3)

the ismts in real-world ehrs have a severe missing data problem. for example, luo et al. [56] gathered statistics on the cinc2012 dataset [10, 57] . the results show that, as time goes by, the maximum missing rate at each timestamp is always higher than 95%. most variables' missing rates are above 80%, and the mean missing rate is 80.67%, as shown in figure 3a . the other three real-world ehr datasets, the mimic-iii dataset [40] , the cinc2019 dataset [58, 41] , and the covid-19 dataset [59] , are also affected by missing data, as shown in figures 3b, 3c, and 3d. in this viewpoint, existing methods either impute the missing data or model the missingness information directly. the raw data-based perspective uses the irregular data directly. these methods do not fill in missing data to make the irregular sampling regular; on the contrary, they consider the irregular time itself to be valuable information. as shown in figure 5b , the times remain irregular and the time intervals are recorded. irregular time intervals and multiple sampling rates are the intra-series and inter-series characteristics introduced in section 3, respectively. they are very common phenomena in ehr databases. for example, the cinc2019 dataset is relatively clean but still has more than 60% of samples with irregular time intervals. 
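the missing rate of equation (3) generalizes naturally to one rate per variable on an aligned (time × variables) grid. a small numpy sketch (the helper name `missing_rates` is ours):

```python
import numpy as np

def missing_rates(mask_matrix):
    """Per-variable missing rate on a (time x variables) mask matrix,
    matching r_missing = (# missing time points) / (# time points)."""
    m = np.asarray(mask_matrix, dtype=float)
    return 1.0 - m.mean(axis=0)

# 4 time steps x 2 variables; variable 0 missing once, variable 1 three times
mask = [[1, 0],
        [1, 0],
        [0, 1],
        [1, 0]]
print(missing_rates(mask))  # [0.25 0.75]
```

on real ehr data, these per-variable rates are exactly the quantities reported in the 80%+ statistics cited above.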
only 1.28% of samples have the same sampling rate in the mimic-iii dataset. in this viewpoint, methods usually integrate features of the varied time intervals into the inputs of the model, or design models which can process samples with different sampling rates. the methods of the missing data-based perspective convert ismts into equally spaced data. they [60, 61, 62] discretize the time axis into non-overlapping intervals with hand-designed widths; then the missing data shows up. the missing values damage the temporal dependencies of sequences [56] and make directly applying many existing models infeasible, such as linear regression [63] and recurrent neural networks (rnns) [64] . as shown in figure 4 , because of missing values, the second valley of the blue signal is not observed and cannot be inferred by simply relying on existing basic models [63, 64] . but the valley values of blood pressure are significant indicators of sepsis in icu patients [65] , a leading cause of patient mortality in the icu [66] . thus, missing values have an enormous impact on data quality, resulting in unstable predictions and other unpredictable effects [67] . many prior efforts have been dedicated to models that can handle missing values in time series. they can be divided into two categories: 1) two-step approaches and 2) end-to-end approaches. two-step approaches ignore or impute missing values and then process downstream tasks based on the preprocessed data. a simple solution is to omit the missing data and perform analysis only on the observed data, but this can result in a large amount of useful data not being available [1] . the core of these methods is how to impute the missing data. some basic methods are dedicated to filling in the values, such as smoothing, interpolation [68] , and splines [69] , but they cannot capture the correlations between variables or complex patterns. 
other methods estimate the missing values by spectral analysis [70] , kernel methods [71] , and the expectation-maximization (em) algorithm [72] . however, simple reasoning designs and the necessary model assumptions make the imputed data inaccurate. recently, with the vigorous development of deep learning, deep methods have achieved higher accuracy than traditional methods. deep learning-based data imputation is mainly realized by rnns and gans. a substantial literature uses rnns to impute the missing data in ismts. rnns take sequence data as input, recursion occurs in the direction of sequence evolution, and all units are chained together. their special structure endows them with the ability to process sequence data by learning order dynamics. in an rnn, the current state h_t is affected by the previous state h_{t-1} and the current input x_t, and is described as h_t = σ(w_x x_t + w_h h_{t-1} + b), where σ is a nonlinear activation function. an rnn can be integrated with basic methods, such as em [73] and a linear model (lr) [74] : such methods first estimate the missing values and then use the reconstructed data streams as inputs to a standard rnn. however, em imputes the missing values by using only the synchronous relationships across data streams (inter-series relations) but not the temporal relationships within streams (intra-series relations). lr interpolates the missing values by using only the temporal relationships within each stream (intra-series relations) but ignores the relationships across streams (inter-series relations). meanwhile, most of the rnn-based imputation methods, like the simple recurrent network (srn) and lstm, which have been proved effective for imputing medical data by kim et al. [35] , also learn an incomplete relation by considering intra-series relations only. chu et al. [49] noticed the difference between these two relations in ismts data and designed the multi-directional recurrent neural network (m-rnn) for both imputation and interpolation. 
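the recurrence h_t = σ(w_x x_t + w_h h_{t-1} + b) can be written as a toy numpy cell. this is a minimal vanilla-rnn sketch for illustration (random weights, tanh activation), not the code of any cited model:

```python
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, b):
    """One step of a vanilla RNN: h_t depends on h_{t-1} and x_t."""
    return np.tanh(W_h @ h_prev + W_x @ x_t + b)

rng = np.random.default_rng(0)
hidden, inputs = 4, 3
W_h = rng.standard_normal((hidden, hidden)) * 0.1
W_x = rng.standard_normal((hidden, inputs)) * 0.1
b = np.zeros(hidden)

h = np.zeros(hidden)
for x_t in rng.standard_normal((5, inputs)):  # a length-5 sequence
    h = rnn_step(h, x_t, W_h, W_x, b)
print(h.shape)  # (4,)
```

the key property the text relies on is that every step folds the whole history into h, which is what lets rnn-based imputers exploit order dynamics.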
m-rnn operates forward and backward in the intra-series direction in an interpolation block, and operates across the inter-series direction in an imputation block. the authors implemented the interpolation with a bi-rnn structure, recorded as the function φ, and the imputation with fully connected layers, the function ψ. the final objective function is the mean squared error between the real data and the calculated data, where x, m and δ represent the data value, masking and time interval defined in section 2 (we will not repeat this below). a bi-rnn is a bidirectional rnn [75] , an advanced rnn structure with forward and backward rnn chains. it has two hidden states for one time point, one per direction; the two hidden states are concatenated or summed into the final value at this time point. unlike the basic bi-rnn, the timing of the inputs into the hidden layers of m-rnn is lagged in the forward direction and advanced in the backward direction. however, in m-rnn, the relations between missing variables are dropped, and the estimated values are treated as constants which cannot be sufficiently updated. to solve this problem, cao et al. [34] proposed bidirectional recurrent imputation for time series (brits) to predict missing values with bidirectional recurrent dynamics. in this model, the missing values are regarded as variables in the model graph and receive delayed gradients in both the forward and backward directions with consistency constraints, which makes the estimation of missing values more accurate. brits updates the predicted missing data with an objective function l combining three errors: the historical-based estimation x̂, the feature-based estimation ẑ and the combined estimation ĉ. it thereby not only considers the relations between missing data and known data, but also models the relations among missing values, which m-rnn ignores. but brits did not take both inter-series and intra-series relations into account, which m-rnn did. 
gans are a type of deep learning model which trains a generative model through an adversarial process [76] . from the perspective of game theory, gan training can be seen as a minimax two-player game [77] between a generator g and a discriminator d with the objective function

min_g max_d v(d, g) = E_{x∼p_data}[log d(x)] + E_{z∼p_z}[log(1 − d(g(z)))]

however, typical gans require fully observed data during training. in response to this, yoon et al. [78] proposed the generative adversarial imputation nets (gain) model. different from the standard gan, its generator receives both the noise z and the mask m as input data; the masking mechanism makes it possible to use data with missing entries as input. gain's discriminator outputs both real and fake components. meanwhile, a hint mechanism h gives the discriminator additional information in the form of a hint vector. gain changes the objective min_g max_d v(d, g) of the basic gan accordingly. to improve gain, camino et al. [79] used multiple inputs and multiple outputs for the generator and the discriminator. the method splits the variables by using dense layers connected in parallel for each variable. zhang et al. [80] designed a stackelberg gan based on gain to impute medical missing data with computational efficiency. the stackelberg gan can generate more diverse imputed values by using multiple generators instead of a single one and applying the ensemble of all pairs of standard gan losses. the main goal of the above two-step methods is to estimate the missing values in the converted time series of ismts (converting irregularly sampled features to missing data features). however, in a medical setting, the ultimate goal is to carry out medical tasks such as mortality prediction [10, 11] and patient subtyping [17, 16, 18] . two separated steps may lead to suboptimal analyses and predictions [81] , as the missing patterns are not effectively explored for the final tasks. thus, some researchers proposed solving the downstream tasks directly, rather than filling in missing values. 
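the generator input and the hint mechanism of gain can be sketched as below. the mixing rule (observed entries pass through, missing entries get noise) and the 0.5 fill for unrevealed hint entries follow the published gain formulation, but treat this as an illustrative sketch with our own helper names, not gain's code:

```python
import numpy as np

def gain_generator_input(x, m, rng):
    """Assemble the generator input of GAIN-style imputation:
    observed entries keep their value, missing entries get noise z.
    The mask m is also fed to the generator alongside this tensor."""
    z = rng.uniform(0.0, 0.01, size=x.shape)
    return m * x + (1.0 - m) * z

def hint_vector(m, hint_rate, rng):
    """Hint mechanism: reveal a random subset of the true mask to the
    discriminator; unrevealed positions are set to 0.5."""
    b = (rng.uniform(size=m.shape) < hint_rate).astype(float)
    return b * m + (1.0 - b) * 0.5

rng = np.random.default_rng(0)
x = np.array([[1.0, 0.0, 3.0]])
m = np.array([[1.0, 0.0, 1.0]])
x_tilde = gain_generator_input(x, m, rng)
print(x_tilde[0, 0], x_tilde[0, 2])  # observed entries pass through: 1.0 3.0
```

feeding m (and the hint) is what lets the adversarial game operate on partially observed rows instead of requiring complete data.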
end-to-end approaches process the downstream tasks directly by modeling the time series with missing data. the core objective is to predict, classify, or cluster; data imputation is an auxiliary task, or not a task at all, in this type of method. lipton et al. [13] demonstrated a simple strategy: using a basic rnn model to cope with missing data in sequential inputs, with the output of the rnn being the final features for prediction. then, to improve this basic idea, they addressed the task of multilabel classification of diagnoses given clinical time series and found that rnns can make remarkable use of binary indicators for missing data, improving auc and f1 significantly. thus, instead of heuristic imputation, they model missingness directly as a first-class feature in their newer work [33] . similarly, che et al. [1] also use the rnn idea to predict medical issues directly. to address the missing data problem, they designed a kind of masking vector as an indicator of missing data. in this approach, the value x, the time interval δ and the masking m impute the missing data x* together: the model first replaces missing data with the mean values, and then uses a feedback loop to update the imputed values, which are the input of a standard rnn for prediction. meanwhile, they proposed gru-decay (gru-d) to model ehr data for medical predictions with trainable decays. the decay rate γ weighs the correlation between the missing data x_t and other data (the last observed value and the empirical mean x̄_t). in this research, the authors also plotted the pearson correlation coefficients between the variable missing rates of the mimic-iii dataset and the labels. they observed that the missing rate is correlated with the labels, demonstrating the usefulness of missingness patterns in solving a prediction task. however, the above models [1, 63, 34, 37, 33] are limited to using the local information (the empirical mean or the nearest observation) of ismts. 
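the decay-based imputation idea behind gru-d can be sketched as follows. this is a simplified illustration, not gru-d itself: δ is taken as the gap since the last observed value, and the scalars w and b stand in for the trainable decay parameters (in gru-d the decay is learned per variable, γ = exp(−max(0, wδ + b))):

```python
import numpy as np

def decay(delta, w, b):
    """GRU-D style decay rate: gamma in (0, 1], shrinking as the
    time interval delta grows."""
    return np.exp(-np.maximum(0.0, w * delta + b))

def grud_impute(x, m, delta, x_mean, w=0.5, b=0.0):
    """Decayed imputation: an observed value passes through; a missing
    value slides from the last observation toward the empirical mean
    as the gap since that observation grows."""
    x_hat = np.empty_like(x)
    x_last = x_mean  # before any observation, fall back to the mean
    for i in range(len(x)):
        if m[i] == 1:
            x_hat[i] = x[i]
            x_last = x[i]
        else:
            g = decay(delta[i], w, b)
            x_hat[i] = g * x_last + (1.0 - g) * x_mean
    return x_hat

x = np.array([80.0, 0.0, 0.0])
m = np.array([1, 0, 0])
delta = np.array([0.0, 1.0, 10.0])
print(grud_impute(x, m, delta, x_mean=70.0))
# the longer the gap, the closer the estimate gets to the mean 70.0
```

this makes concrete the sentence above: γ interpolates between the last observation and the mean, which is exactly the "local information" limitation discussed next.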
for example, gru-d assumes that a missing variable can be represented as a combination of its corresponding last observed value and the mean value. the global structure and statistics are not directly considered, and the local statistics are unreliable when a run of continuous data is missing (shown in figure 4 ) or when the missing rate rises. tang et al. [32] recognized this problem and designed lgnet, which explores the global and local dependencies simultaneously. lgnet uses gru-d to model the local structure, grasping intra-series relations, and uses a memory module to model the global structure, learning inter-series relations. the memory module g has l rows; it captures the global temporal dynamics for missing values together with the variable correlations a. meanwhile, an adversarial training process enhances the modeling of the global temporal distribution. the alternative to processing sequences with missing data by pre-discretizing ismts is constructing models which can receive ismts directly as input. the intuition of the raw data-based perspective comes from the characteristics of the raw data itself: the intra-series relation and the inter-series relation. the intra-series relation of ismts is reflected in the irregular time intervals between two neighboring observations within one series; the inter-series relation is reflected in the different sampling rates of different time series. thus, the two subcategories are 1) irregular time interval-based approaches and 2) multi-sampling rate-based approaches. in the ehr setting, the time lapse between successive elements in patient records can vary from days to months, which is the irregular time interval characteristic of ismts. a better way to handle it is to model the unequally spaced data using the time information directly. basic rnns assume that sequences have an equal distribution of time differences and can therefore only process uniformly distributed longitudinal data, so the design of traditional rnns may lead to suboptimal performance. 
the time-aware lstm (t-lstm) [17] addresses this by applying a memory discount in coordination with the elapsed time to capture the irregular temporal dynamics, adjusting the cell state c_{t-1} of a basic lstm to a new state c*_{t-1}. however, t-lstm is a completely irregular time interval-based method only for univariate ismts; for multivariate ismts, it has to align the multiple time series and fill in missing data first, at which point the missing data problem arises again. the research did not mention a specific filling strategy and used simple interpolation, like mean values, during data preprocessing. for multivariate ismts and the alignment problem, tan et al. [36] gave an end-to-end dual-attention time-aware gated recurrent unit (data-gru) to predict patients' mortality risk. data-gru uses a time-aware gru structure, t-gru, similar to t-lstm. besides, the authors give a strategy for the multivariate data alignment problem. when aligning different time series into multiple dimensions, previous missing data approaches, such as gru-d [1] and lgnet [32] , assigned equal weights to observed data and imputed data, ignoring the relatively larger unreliability of imputation compared with actual observations. data-gru tackles this difficulty with a novel dual-attention structure: an unreliability-aware attention α_u with a reliability score c, and a symptom-aware attention α_s. the dual-attention structure jointly considers data quality and medical knowledge. further, the attention-like structure makes data-gru explainable through its interpretable embedding, an urgently needed property in medical tasks. instead of using rnns to learn the order dynamics in ismts, bahadori et al. [37] proposed methods for analyzing multivariate clinical time series that are invariant to temporal clustering. the events in ehrs may appear together in a single admission or may be dispersed over multiple admissions. 
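the memory discount idea can be sketched as below. the discount g(δ) = 1/log(e + δ) is one form used in the t-lstm paper; the decomposition into short- and long-term components follows its description, but `c_short` here simply stands in for the short-term memory that a learned subnetwork would produce, so this is an illustrative sketch rather than t-lstm itself:

```python
import numpy as np

def elapsed_time_discount(delta):
    """A monotonically decreasing discount, e.g. g(delta) = 1/log(e + delta);
    the exact form is a design choice."""
    return 1.0 / np.log(np.e + delta)

def adjust_cell_state(c_prev, c_short, delta):
    """T-LSTM style adjustment: only the short-term component of the
    memory is discounted by the elapsed time; the long-term component
    (the remainder) is kept intact."""
    c_long = c_prev - c_short
    return c_long + elapsed_time_discount(delta) * c_short

c_prev = np.array([1.0, 2.0])
c_short = np.array([0.5, 1.0])  # assume a learned subnetwork produced this
no_gap = adjust_cell_state(c_prev, c_short, delta=0.0)
long_gap = adjust_cell_state(c_prev, c_short, delta=100.0)
print(no_gap, long_gap)  # a long gap shrinks the short-term contribution
```

with δ = 0 the cell state is unchanged; as δ grows, recent memory fades while long-term memory survives, which is the "time decay weakens the relation between distant neighbors" behavior described earlier.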
for example, the authors postulated that whether a series of blood tests is completed at once or in rapid succession should not alter predictions. thus, they designed a data augmentation technique, temporal coarsening, that exploits temporal-clustering invariance to regularize deep neural networks optimized for clinical prediction tasks. moreover, they proposed a multi-resolution ensemble (mre) model over the coarsening-transformed inputs to improve predictive accuracy. modeling only the irregular time intervals of the intra-series relation would ignore the multi-sampling-rate inter-series relation. furthermore, modeling the inter-series relation also reflects consideration of the global structure of ismts. the above rnn-based methods of the irregular time interval-based category only consider the local order dynamics information. although lgnet [32] integrates the global structures, it incorporates the information from all time points into an interpolation model, which is redundant and poorly adaptive. some models can also learn the global structures of time series, like the basic kalman filter [82] and deep markov models [83] . however, these models mainly process time series with a stable sampling rate. che et al. [39] focused on the problem of modeling multi-rate multivariate time series and proposed a multi-rate hierarchical deep markov model (mr-hdmm) for healthcare forecasting and interpolation tasks. mr-hdmm learns a generation model and an inference network with auxiliary connections and learnable switches. the latent hierarchical structure reflected in the states/switches s factorizes the joint probability p with layers z:

p(x_1, z_1, s_1 | z_0) = p(x_1 | z_1) p(z_1, s_1 | z_0)

these structures can capture the temporal dependencies and the data generation process. similarly, binkowski et al. [84] presented an autoregressive framework for regression tasks by modeling ismts data. 
the core idea of their implementation is roughly similar to mr-hdmm. however, these methods consider the different sampling rates between series but ignore the irregular time intervals within each series: they process the data with a stable sampling rate (uniform time intervals) for each time series. to obtain the stable sampling rate, they have to use forward or linear interpolation, whereby the global structures are again lost in getting uniform intervals. the gaussian process can build global interpolation layers to process multi-sampling-rate data; li et al. [85] and futoma et al. [86] used this technique. but for multivariate time series, the covariance functions are challenging due to their complicated and expensive computation. satya et al. [38] designed a fully modular interpolation-prediction network (ipn). ipn has an interpolation network that accommodates the complexity of ismts data and provides a multi-channel output by modeling three kinds of information: broad trends χ, transients τ and local observation frequencies λ. the three channels are calculated by a low-pass interpolation θ, a high-pass interpolation γ and an intensity function λ. ipn also has a prediction network that operates on the regularly partitioned inputs from the former interpolation module. in addition to handling the data relations from multiple perspectives, ipn makes up for the lack of modularity in [1] and addresses the complexity of the gaussian process interpolation layers in [85, 86] . modeling ists data aims to achieve two main tasks: 1) missing data imputation and 2) downstream tasks. the specific categories are shown in figure 7 . missing data imputation is of practical significance: as work on machine learning has become active, obtaining large amounts of complete data has become an important issue. however, in the real world it is almost impossible to get complete data, for many reasons such as lost records. 
in many cases, a time series with missing values becomes useless and is then thrown away, which results in a large amount of data loss. incomplete data has adverse effects when learning a model [76] . basic methods, such as interpolation [68] , kernel methods [71] and the em algorithm [72, 73] , were proposed long ago. with the popularity of deep learning in recent years, most new methods are implemented with artificial neural networks (anns). one of the most popular models is the rnn [64] . rnns can capture long-term temporal dependencies and use them to estimate the missing values. existing works [67, 34, 35, 33, 87, 88] have designed several special rnn structures to adapt to the missingness and achieved good results. another popular model is the gan [89] , which generates plausible fake data through adversarial training. gans have been successfully applied to face completion and sentence generation [90, 88, 91, 92] . based on their data generation abilities, some research [56, 76, 78, 93] has applied gans to time series data generation, incorporating sequence information into the process. downstream tasks generally include prediction, classification, and clustering. for ismts data, medical prediction (such as mortality prediction, disease classification and image classification) [10, 12, 13, 14, 11] , concept representation [15, 16] and patient typing [17, 16, 18] are the three main tasks. the downstream task-oriented methods calculate missing values and perform the downstream tasks simultaneously, which is expected to avoid the suboptimal analyses and predictions caused by missing patterns not being effectively explored when imputation and the final task are separated [81] . most methods [1, 32, 17, 36, 38, 39, 37, 33] use deep learning technology to achieve higher accuracy on the tasks. in this section, we apply the above methods to four datasets and two tasks, and analyze the methods through the experimental results. 
Four datasets were used to evaluate the performance of the baselines. The CinC 2012 dataset [10] consists of records from 12,000 ICU stays and has 4,000 multivariate clinical time series. All patients were adults admitted for a wide variety of reasons to cardiac, medical, surgical, and trauma ICUs. Each record is a multivariate time series of roughly 48 hours containing 41 variables such as albumin, heart rate, and glucose. The CinC 2019 dataset [41] is publicly available and comes from two hospitals; it contains 30,336 patient admission records, 2,359 of which are records of diagnosed sepsis cases. It is a set of multivariate time series with 40 related features: 8 kinds of vital signs, 26 kinds of laboratory values, and 6 kinds of demographics. The time interval is 1 hour, the sequence length is between 8 and 336, and 29,414 records have lengths below 60. The COVID-19 dataset [59] was collected between 10 January and 18 February 2020 at Tongji Hospital of Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China. It contains 375 patients with 6,120 blood-sample records as the training set, 110 patients with 757 records as the test set, and 80 characteristics. The experiments comprise two tasks: 1) mortality prediction and 2) data imputation. The mortality-prediction task uses the time series of the 48 hours before onset time from the above four datasets. The imputation task uses 8 features (selected with the method in [39]) from which 10% of the observed measurements are eliminated; the eliminated data serve as the new ground truth. For RNN-based methods, we fix the dimension of the hidden state at 64. For GAN-based methods, the series inputs also use an RNN structure. For the final prediction, all methods use one 128-dimensional FC layer and one 64-dimensional FC layer. All methods apply the Adam optimizer [94] with α = 0.001, β1 = 0.9, and β2 = 0.999.
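The imputation protocol above, which hides 10% of the observed measurements to serve as ground truth, can be sketched as follows. The function name, seed handling, and 2-D mask layout are our own choices:

```python
import numpy as np

def hold_out_observed(mask, frac=0.10, seed=0):
    """Hide `frac` of the *observed* entries of an observation mask
    (1 = observed, 0 = missing) to create imputation ground truth.

    Returns the reduced mask and the (row, col) positions of the hidden
    entries; the values at those positions become the evaluation target.
    """
    rng = np.random.default_rng(seed)
    obs = np.argwhere(mask == 1)
    k = int(round(frac * len(obs)))
    pick = obs[rng.choice(len(obs), size=k, replace=False)]
    new_mask = mask.copy()
    new_mask[pick[:, 0], pick[:, 1]] = 0
    return new_mask, pick
```

Only already-observed entries are ever hidden, so every hidden position has a known true value to compare against.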
We use the learning-rate decay α_current = α_initial · γ^(global_step / decay_steps) with decay rate γ = 0.98 and decay step 2000. Five-fold cross-validation is used for both tasks.

Table 1: AUC-ROC (mean ± std) of the baselines on the mortality-prediction task; the four result columns correspond to the four datasets.
  [33]           0.809 ± 0.014   0.800 ± 0.016   0.825 ± 0.024   0.945 ± 0.004
  LSTM [35]      0.812 ± 0.009   0.805 ± 0.010   0.829 ± 0.019   0.945 ± 0.005
  GRU-D [1]      0.829 ± 0.003   0.818 ± 0.009   0.835 ± 0.013   0.965 ± 0.003
  M-RNN [49]     0.827 ± 0.005   0.820 ± 0.011   0.842 ± 0.010   0.959 ± 0.003
  BRITS [34]     0.833 ± 0.002   0.819 ± 0.012   0.839 ± 0.013   0.959 ± 0.002
  T-LSTM [17]    0.817 ± 0.004   0.804 ± 0.010   0.831 ± 0.014   0.963 ± 0.003
  DATA-GRU [36]  0.832 ± 0.006   0.822 ± 0.012   0.851 ± 0.012   0.961 ± 0.003
  LGnet [32]     0.833 ± 0.003   0.822 ± 0.013   0.843 ± 0.013   0.956 ± 0.002
  IPN [38]       0.831 ± 0.003   0.824 ± 0.009   0.844 ± 0.015   0.960 ± 0.003

The prediction results were evaluated with the area under the receiver-operating-characteristic curve (AUC-ROC). The ROC is a curve of the true-positive rate (TPR) against the false-positive rate (FPR), where TP, TN, FP, and FN stand for true positives, true negatives, false positives, and false negatives:

TPR = TP / (TP + FN),   FPR = FP / (TN + FP).   (19)

We evaluate the imputation performance in terms of the mean squared error (MSE). For the i-th item, x̂_i is the real value and x_i the predicted value, and N is the number of missing values:

MSE = (1/N) · Σ_{i=1}^{N} (x̂_i − x_i)².

Table 1 shows the performance of the baselines on the mortality-prediction task. Each of the two categories of technology-driven methods has its own merits, but irregularity-based methods work relatively well: missing-data-based methods have 2/4 top-1 and 2/5 top-2 results, while irregularity-based methods have 2/4 top-1 and 3/5 top-2 results. Regarding whether the two kinds of series relations are considered, the methods that take both inter-series and intra-series relations (both global and local structures) into account perform better; IPN, LGnet, and DATA-GRU obtain relatively good results. The methods also show different effects on different datasets.
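The decay schedule and the ROC coordinates defined above can be written directly from the formulas; the function names below are ours:

```python
def decayed_lr(alpha0, global_step, decay_steps=2000, gamma=0.98):
    """Exponential learning-rate decay used in the experiments:
    alpha_current = alpha_initial * gamma ** (global_step / decay_steps)."""
    return alpha0 * gamma ** (global_step / decay_steps)

def tpr_fpr(tp, fn, fp, tn):
    """True- and false-positive rates, the coordinates of one ROC point:
    TPR = TP / (TP + FN), FPR = FP / (TN + FP)."""
    return tp / (tp + fn), fp / (tn + fp)
```

With the reported settings, the learning rate shrinks by a factor of 0.98 every 2000 optimizer steps.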
For example, as COVID-19 is a small dataset, unlike the other three, relatively simple methods perform better on it, such as T-LSTM, which does not perform very well on the other three datasets. Table 2 shows that imputation is better on the sepsis and COVID-19 datasets, perhaps because the time series in these two datasets come from patients who suffered from the same disease; that is probably also why they obtain relatively better results on the prediction task. Table 3 shows the performance of a basic RNN model for the mortality-prediction task trained on the baselines' imputed data. Unlike the results in Table 2, the RNN-based methods perform better here: RNN-based methods have 4/5 top-1 results, while GAN-based methods have 1/5. The reason may be that the RNN-based approaches integrate the downstream task while imputing, so the data they generate are more suitable for the final prediction task. Based on the analysis of the technologies and the experimental results, in this section we discuss the ISMTS modeling task from three perspectives: 1) the imputation task versus the prediction task, 2) intra-series versus inter-series relations (local versus global structure), and 3) missing data versus raw data. The conclusions for the approaches in this survey are given in Table 5. Based on the above perspectives, we summarize the challenges as follows. How to balance imputation with prediction? Different kinds of methods suit different tasks: GANs prefer imputation while RNNs prefer prediction. In the medical setting, however, this conclusion does not hold for every dataset. For example, missing data is generated better by RNNs than by GANs on the COVID-19 dataset, and two-step GAN-based methods are no worse for mortality prediction than using RNNs directly. It therefore seems difficult to achieve a general and effective modeling method in medical settings.
The method should be specified according to the specific task and the characteristics of the dataset. How to handle the intra-series relations versus the inter-series relations of ISMTS? In other words, how to trade off the local structure against the global structure. In the ISMTS format, a patient has several time series of vital signs connected to disease diagnoses or the probability of death. Seeing these time series as one whole multivariate data sample, intra-series relations are reflected in longitudinal and horizontal dependencies: the longitudinal dependencies comprise the sequence order and context, time intervals, and decay dynamics, while the horizontal dependencies are the relations between different dimensions; the inter-series relations are reflected in the patterns across time series of different samples. However, when seeing these time series as separate samples of one patient, the relations change: intra-series relations become the dependencies between values observed at different time steps of a univariate ISMTS, where the features of different time intervals must be taken care of, and inter-series relations become the pattern relations between different patients' samples and between different time series of the same vital sign. At the structural level, modeling intra-series relations is basically local, while modeling inter-series relations is global. It is not clear which consideration and which structure makes the results better: modeling both local and global structures seems to perform better for mortality prediction, but such methods are more complex and not universal across datasets. How to choose the modeling perspective, missing-data-based or irregularity-based? Both kinds of methods have advantages and disadvantages. Most existing works are missing-data-based, and methods for estimating missing data have existed for a long time [95].
From the missing-data-based perspective, the discretization interval length is a hyper-parameter that needs to be determined. If the interval size is large, there is less missing data, but several values will fall within the same interval; if the interval size is small, the missing data increases. An interval with no values will hamper the performance, while an interval with too many values requires an ad-hoc selection method. Meanwhile, missing-data-based methods have to interpolate new values, which may artificially introduce dependencies that do not occur naturally; over-imputation may result in an explosion in size, and the pursuit of multivariate data alignment may lead to the loss of raw-data dependencies. Thus, of particular interest are irregularity-based methods that can learn directly from multivariate sparse and irregularly sampled time series as input, without imputation. However, although raw-data-based methods have the merit of introducing no artificial dependencies, they suffer from not achieving the desired results, complex designs, and large numbers of parameters. Irregular-time-intervals-based methods are not complex, as they can be realized by simply injecting time-decay information, but on specific tasks such as mortality prediction they do not perform as well as expected (as concluded in the experiments section); meanwhile, for multivariate time series, these methods have to align values across dimensions, which leads back to the missing-data problem (low applicability for multivariate data; incomplete data relations). Multi-sampling-rates-based methods [38, 39, 110, 111] do not cause missing data: they introduce no artificial dependency and require no imputation, at the cost of implementation complexity and assumptions about data-generation patterns. However, processing multiple univariate time series at the same time requires more parameters and is not friendly to batch learning.
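The interval-size trade-off discussed above can be made concrete by bucketing irregular observation times into fixed-width bins and counting the observations per bin (zero means a value to impute, more than one forces an ad-hoc aggregation choice). The helper below is our own illustration:

```python
import numpy as np

def discretize(times, bin_size):
    """Bucket irregular observation times into fixed-width intervals
    starting at t = 0 and return the number of observations per bin."""
    edges = np.arange(0.0, times.max() + bin_size, bin_size)
    counts, _ = np.histogram(times, bins=edges)
    return counts
```

With a small bin, most bins are empty (much missing data); with a large bin, several observations collide in one interval.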
Meanwhile, modeling an entire univariate series may require assumptions about the data-generation model. Considering the complexity of patient states, the amount of interventions, and the real-time requirement, data-driven approaches that learn from EHRs are the desiderata for helping clinicians. Although some difficulties have not yet been solved, deep learning methods do show a better ability to model medical ISMTS data than the basic methods. Basic methods cannot model ISMTS completely: interpolation-based methods [68, 69] exploit only the correlation within each series, imputation-based methods [72, 96] exploit only the correlation among different series, and matrix-completion-based methods [97, 98] assume that the data are static and ignore their temporal component. Deep learning methods learn data structures through parameter training, and many basic methods can be integrated into the designs of neural networks. The deep learning methods introduced in this survey largely solve the problems of the common methods and have achieved state-of-the-art results in medical prediction tasks, including mortality prediction, disease prediction, and admission-stay prediction. Therefore, deep learning models based on ISMTS data have broad prospects in medical tasks. Both the RNN-based and GAN-based methods mentioned in this survey, however, are troubled by poor interpretability [99, 100], whereas clinical settings prefer interpretable models. Although this defect is difficult to overcome because of the models' characteristics, some researchers have made breakthroughs and progress; for example, the attention-like structures used in [12, 14] can provide explanations for medical predictions. This survey introduced a kind of data, irregularly sampled medical time series (ISMTS), and, combined with medical settings, described the characteristics of ISMTS.
Then, we investigated the relevant methods for modeling ISMTS data and classified them from a technology-driven perspective and a task-driven perspective. For each category, we divided the subcategories in detail and described each specific model's implementation. Meanwhile, according to the imputation and prediction experiments, we analyzed the advantages and disadvantages of some methods and drew conclusions. Finally, we summarized the challenges and opportunities of the ISMTS modeling task.

References (titles as recovered from the extracted text, in order of appearance):
- Recurrent neural networks for multivariate time series with missing values
- Convolutional LSTM network: a machine learning approach for precipitation nowcasting
- RESTFul: resolution-aware forecasting of behavioral time series data
- Tensorized LSTM with adaptive shared memory for learning trends in multivariate time series
- Clustering and classification for time series data in visual analytics: a survey
- Time2Graph: revisiting time series modeling with dynamic shapelets
- Adversarial unsupervised representation learning for activity time-series
- Revisiting spatial-temporal similarity: a deep learning framework for traffic prediction
- Deep EHR: a survey of recent advances in deep learning techniques for electronic health record (EHR) analysis
- Predicting in-hospital mortality of ICU patients: the PhysioNet/Computing in Cardiology Challenge 2012
- HOLMES: health online model ensemble serving for deep learning models in intensive care units
- Dipole: diagnosis prediction in healthcare via attention-based bidirectional recurrent neural networks
- Learning to diagnose with LSTM recurrent neural networks
- RETAIN: an interpretable predictive model for healthcare using reverse time attention mechanism
- Multi-layer representation learning for medical concepts
- MiME: multilevel medical embedding of electronic health records for predictive healthcare
- Patient subtyping via time-aware LSTM networks
- Deep computational phenotyping
- A survey of methodologies for the treatment of missing values within datasets: limitations and benefits
- Singular value decomposition and least squares solutions
- An efficient nearest neighbor classifier algorithm based on pre-classify (Computer Science)
- Simple linear regression in medical research
- Predicting disease risks from highly imbalanced data using random forest
- A modified SVM classifier based on RS in medical disease prediction
- RNN-based longitudinal analysis for diagnosis of Alzheimer's disease (Alzheimer's Disease Neuroimaging Initiative)
- Estimating brain connectivity with varying-length time lags using a recurrent neural network
- On clinical event prediction in patient treatment trajectory using longitudinal electronic health records
- Bidirectional recurrent auto-encoder for photoplethysmogram denoising
- A deep learning method based on hybrid auto-encoder model
- Research and application progress of generative adversarial networks
- An accurate saliency prediction method based on generative adversarial networks
- Joint modeling of local and global temporal dynamics for multivariate time series forecasting with missing values
- Directly modeling missing data in sequences with RNNs: improved classification of clinical time series
- BRITS: bidirectional recurrent imputation for time series
- Recurrent neural networks with missing information imputation for medical examination data prediction
- DATA-GRU: dual-attention time-aware gated recurrent unit for irregular multivariate time series
- Temporal-clustering invariance in irregular healthcare time series (CoRR)
- Interpolation-prediction networks for irregularly sampled time series
- Hierarchical deep generative models for multi-rate multivariate time series
- MIMIC-III, a freely accessible critical care database (Sci. Data)
- Early prediction of sepsis from clinical data: the PhysioNet/Computing in Cardiology Challenge
- An intelligent warning model for early prediction of cardiac arrest in sepsis patients
- K-margin-based residual-convolution-recurrent neural network for atrial fibrillation detection
- Opportunities and challenges of deep learning methods for electrocardiogram data: a systematic review
- Risk prediction for chronic kidney disease progression using heterogeneous electronic health record data and time series analysis
- Learning from irregularly-sampled time series: a missing data perspective (CoRR)
- Time series analysis: forecasting and control
- Forecasting in multivariate irregularly sampled time series with missing values (CoRR)
- Estimating missing data in temporal data streams using multi-directional recurrent neural networks
- Long short-term memory
- Empirical evaluation of gated recurrent neural networks on sequence modeling
- Temporal belief memory: imputing missing data during RNN training
- Survey of clinical data mining applications on big data in health informatics
- Analysis of incomplete and inconsistent clinical survey data
- Modeling irregularly sampled clinical time series
- Multivariate time series imputation with generative adversarial networks
- PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals
- Early prediction of sepsis from clinical data: the PhysioNet Computing in Cardiology Challenge
- An interpretable mortality prediction model for COVID-19 patients
- UA-CRNN: uncertainty-aware convolutional recurrent neural network for mortality risk prediction
- A hybrid residual network and long short-term memory method for peptic ulcer bleeding mortality prediction
- RAIM: recurrent attentive and intensive model of multimodal patient monitoring data
- Linear regression with censored data
- A learning algorithm for continually running fully recurrent neural networks
- Arterial blood pressure during early sepsis and outcome
- Hospital deaths in patients with sepsis from 2 independent cohorts
- Data cleaning: overview and emerging challenges
- The effects of the irregular sample and missing data in time series analysis
- Wavelet methods for time series analysis (book reviews)
- Comparison of correlation analysis techniques for irregularly sampled time series
- Multiple imputation using chained equations: issues and guidance for practice
- Pattern classification with missing data: a review (Neural Computing and Applications)
- A solution for missing data in recurrent neural networks with an application to blood glucose prediction
- Speech recognition with missing data using recurrent neural nets
- Framewise phoneme classification with bidirectional LSTM and other neural network architectures
- A survey of missing data imputation using generative adversarial networks
- Stable and improved generative adversarial nets (GANs): a constructive survey
- GAIN: missing data imputation using generative adversarial nets
- Improving missing data imputation with deep generative models (CoRR)
- Medical missing data imputation by Stackelberg GAN
- Strategies for handling missing data in electronic health record derived data
- Kalman filtering and neural networks
- Hidden Markov and other models for discrete-valued time series
- Autoregressive convolutional neural networks for asynchronous time series
- A scalable end-to-end Gaussian process adapter for irregularly sampled time series classification
- Learning to detect sepsis with a multitask Gaussian process RNN classifier
- Doctor AI: predicting clinical events via recurrent neural networks
- Generative face completion
- Generative adversarial nets
- AmbientGAN: generative models from lossy measurements
- Approximation and convergence properties of generative adversarial learning
- SeqGAN: sequence generative adversarial nets with policy gradient
- Learning from incomplete data with generative adversarial networks
- Adam: a method for stochastic optimization
- A study of handling missing data methods for big data
- Multiple imputation for nonresponse in surveys
- Spectral regularization algorithms for learning large incomplete matrices
- Temporal regularized matrix factorization for high-dimensional time series prediction
- Interpretable machine learning: a guide for making black box models explainable (online)
- Interpretability of machine learning-based prediction models in healthcare
- Iterative robust semi-supervised missing data imputation
- Medical time-series data generation using generative adversarial networks
- Unsupervised online anomaly detection on irregularly sampled or missing valued time-series data using LSTM networks (CoRR)
- Kernels for time series with irregularly-spaced multivariate observations (CoRR)
- TimeAutoML: autonomous representation learning for multivariate irregularly sampled time series
- A distributed descriptor characterizing structural irregularity of EEG time series for epileptic seizure detection
- A bio-statistical mining approach for classifying multivariate clinical time series data observed at irregular intervals
- Automatic classification of irregularly sampled time series with unequal lengths: a case study on estimated glomerular filtration rate
- MCPL-based FT-LSTM: medical representation learning-based clinical prediction model for time series events
- A comparison between discrete and continuous time Bayesian networks in learning from clinical time series data with irregularity
- Multi-resolution networks for flexible irregular time series modeling (Multi-FIT)

key: cord-188465-wwi8uydi
authors: Spadon, Gabriel; Hong, Shenda; Brandoli, Bruno; Matwin, Stan; Rodrigues-Jr, Jose F.; Sun, Jimeng
title: Pay Attention to Evolution: Time Series Forecasting with Deep Graph-Evolution Learning
date: 2020-08-28
journal: nan
doi: nan
doc_id: 188465
cord_uid: wwi8uydi

Time-series forecasting is one of the most active research topics in predictive analysis. A still open gap in that literature is that statistical and ensemble learning approaches systematically present lower predictive performance than deep learning methods, as they generally disregard the data-sequence aspect entangled with multivariate data represented by more than one time series. Conversely, this work presents a novel neural network architecture for time-series forecasting that combines the power of graph evolution with deep recurrent learning on distinct data distributions; we named our method Recurrent Graph Evolution Neural Network (REGENN).
The idea is to infer multiple multivariate relationships between co-occurring time series by assuming that the temporal data depend not only on inner variables and intra-temporal relationships (i.e., observations from itself) but also on outer variables and inter-temporal relationships (i.e., observations from other-selves). An extensive set of experiments was conducted comparing REGENN with dozens of ensemble methods and classical statistical ones, showing sound improvement of up to 64.87% over the competing algorithms. Furthermore, we present an analysis of the intermediate weights arising from REGENN, showing that by looking at inter- and intra-temporal relationships simultaneously, time-series forecasting is majorly improved by paying attention to how multiple multivariate data synchronously evolve. A time series refers to the persistent recording of a phenomenon along time, a continuous or intermittent unfolding of chronological events subdivided into past, present, and future. In the last decades, time-series analysis has been vital to predicting dynamical phenomena in a wide range of applications, varying from climate change [1]-[3], the financial market [4], [5], land-use monitoring [6], [7], anomaly detection [8], [9], energy consumption and price forecasting [10], besides epidemiology and healthcare studies [11]-[15]. In such applications, an effective data-driven decision frequently requires precise forecasting based on time series [16]. A prime example is the SARS-CoV-2, COVID-19, or coronavirus pandemic [17], which is known for being highly contagious and causing increased pressure on healthcare systems worldwide. In this case, time-series modeling Fig. 1: An example of a multiple multivariate time-series forecasting problem, where each multivariate time series (i.e., sample) shares the same domain, timestream, and variables.
When stacking the time series together, we assemble a tridimensional tensor with the axes describing samples, timestamps, and variables. The multiple samples have the same variables recorded during the same timestamps, meaning that samples are unique, but every sample is observed in the same way. By tackling the problem altogether, we leverage inner and outer variables, besides intra- and inter-temporal relationships, to improve the forecasting. networks (GCN) [31], the latter with significant applications in traffic forecasting [32], [33]. Meanwhile, others used unsupervised auto-encoders for time-series forecasting tasks [34], [35]. Notwithstanding, all former approaches are bounded to a bidimensional space in which forecasting time series can be summarized by a non-linear function between time and variables. From a different perspective, we hypothesize that time series are dependent not only on their inner variables, which are observations from themselves, but also on outer variables provided by different time series that share the same timestream. For instance, the evolution of a biological species is related not solely to observations of itself, but also of other species that share the same environment, as they are all part of the same food chain. By considering the variables and the dependency aspect during the analysis, the time series gains an increased dimensionality: a previously bidimensional problem, in which the forecasting ability of a model comes from observing relationships of variables over time, now becomes tridimensional, where forecasting means understanding the entanglement between variables of different time series that co-occur in time. Accordingly, the time series define an event that is not a consequence of a single chain of observations, but of a set of synchronous observations of many time series. For example, during the coronavirus pandemic, it is paramount to understand the disease's time-aware behavior in every country.
Despite progressing at different moments and locations, the underlying mechanisms behind the pandemic are supposed to follow similar (and probably interconnected) patterns. Along these lines, looking individually at the development of the pandemic in each country, one can describe the problem in terms of multiple variables, such as the numbers of confirmed cases, recovered people, and deaths. However, when looking at all countries at once, the problem yields an additional data dimension, and each country becomes a multivariate sample of a broader problem, as depicted in Fig. 1. In linguistic terms, we refer to such a problem as multiple multivariate time-series forecasting. Following these premises, in this study we contribute an unprecedented neural network that emerges from a graph-based, time-aware auto-encoder with linear and non-linear components working in parallel to forecast multiple multivariate time series simultaneously, named Recurrent Graph Evolution Neural Network (REGENN). We refer to evolution as the natural progression of a process in which the neural network iteratively optimizes a graph representing observations from the past until it reaches an evolved version of itself that generalizes to future data still to be observed. Accordingly, the underlying network structure of REGENN is powered by two Graph Soft Evolution (GSE) layers, a further contribution of this study. The GSE is a graph-based representation-learning layer that enhances the encoding and decoding processes by learning a graph shared across different time series and timestamps. The results we present are based on an extensive set of experiments, in which REGENN surpassed a set of 49 competing algorithms from the fields of deep learning, machine learning, and time-series analysis, among which are single-target, multi-output, and multitask regression algorithms, in addition to univariate and multivariate time-series forecasting algorithms.
Aside from surpassing the state of the art, REGENN remained effective after three rounds of 30 ablation tests with distinct hyperparameters. All experiments were carried out on the SARS-CoV-2, Brazilian Weather, and PhysioNet datasets, detailed in the Methods. In the task of epidemiology modeling on the SARS-CoV-2 dataset, we had improvements of at least 64.87%. We outperformed the task of climate forecasting on the Brazilian Weather dataset by at least 11.96%, and patient monitoring in intensive care units on the PhysioNet dataset by 7.33%. Furthermore, we analyzed the results using the cosine similarity on the evolution weights from the GSE layers, which are the intermediate hidden adjacency matrices that arise from the graph-evolution process, showing that graphs shed new light on the understanding of non-linear black-box models.

Tab. 1: Summary of notations.
  ω ∈ N+           sliding window size
  w, z ∈ N+        number of training and testing (i.e., stride) timestamps
  s, t, v ∈ N+     number of samples, timestamps, and variables
  T ∈ R^(s×t×v)    tensor of multiple multivariate time-series
  Y ∈ R^(s×ω×v)    batched input of the first GSE and the autoregression layers
  Yα ∈ R^(s×ω×v)   output of the first GSE and input of the encoder layers
  Yε ∈ R^(s×ω×v)   output of the encoder and input of the decoder layers
  Ŷε ∈ R^(s×z×v)   output from the first recurrent unit and input to the second one
  Ŷ ∈ R^(s×z×v)    output of the second recurrent unit and input of the second GSE layer
  Yψ ∈ R^(s×z×v)   non-linear output yielded by the second GSE layer
  Yλ ∈ R^(s×z×v)   linear output provided by the autoregression layer
  Ỹ ∈ R^(s×z×v)    final result from the merging of the linear and non-linear outputs
  G = ⟨V, E⟩       graph in which V is the set of nodes and E the set of edges
  A ∈ R^(v×v)      adjacency matrix of co-occurring variables
  Aμ ∈ R^(v×v)     adjacency matrix shared between GSE layers
  Aφ ∈ R^(v×v)     evolved adjacency matrix produced by the second GSE layer
  U ∘ V            batch-wise Hadamard product between matrices U and V
  U · V            batch-wise scalar product between matrices U and V
  ‖·‖F             Frobenius norm of a given vector or matrix
  ϕ(·)             dropout regularization function
  σg(·)            sigmoid activation function
  σh(·)            hyperbolic tangent activation function
  cos θ(·)         cosine matrix-similarity
  ReLU(·)          rectified linear unit
  softmax(·)       normalized exponential function

Since higher-dimensional time-series forecasting is a topic yet to be further explored, we understand that REGENN has implications in other research areas, such as economics, social sciences, and biology, in which different time series share the same timestream and co-occur in time, mutually influencing one another. In order to present our contributions, this paper is further divided into four sections. We begin by proposing a layer and a neural network architecture, besides detailing the methods used throughout the study. Subsequently, we display the experimental results compared with previous literature. Next, we provide an overall discussion of our proposal and the achieved results. Finally, we present the conclusions and final remarks. The supplementary material exhibits extended results and additional methods. Hereinafter, we use bold uppercase letters to denote multidimensional matrices (e.g., X), bold lowercase letters for vectors (e.g., x), and calligraphic letters for sets (e.g., X). Matrices, vectors, and sets can be used with subscripts; for example, the element in the i-th row and j-th column of a matrix is x_ij, the i-th element of a vector is x_i, and the j-th element of a set is X_j. The transposed matrix of X ∈ R^(m×n) is X^T ∈ R^(n×m), and the transposed vector of x ∈ R^(m×1) is x^T ∈ R^(1×m), where m and n are arbitrary dimensions; further symbols are defined as needed, but Tab. 1 presents a summary of the notations. Graph Soft Evolution (GSE) stands for a representation-learning layer that, given a training dataset, builds a graph in the form of an adjacency matrix, as in Fig. 2. The GSE layer receives no graph as input, but a set of multiple multivariate time series.
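The co-occurrence graph that the GSE layer derives from the training tensor can be sketched as follows. This is our own loose reading of the construction (the edge weight sums the two variables' values at every sample and timestamp where both are nonzero); the function name is ours:

```python
import numpy as np

def cooccurrence_graph(T):
    """Project a tensor T of shape (samples, timestamps, variables) into
    a symmetric adjacency matrix A of shape (variables, variables):
    f(u, v) sums T[i, j, u] + T[i, j, v] over every (sample, timestamp)
    where both variables are nonzero; pairs that never co-occur get 0."""
    s, w, v = T.shape
    A = np.zeros((v, v))
    nz = T != 0
    for u in range(v):
        for q in range(u + 1, v):
            co = nz[:, :, u] & nz[:, :, q]        # co-occurrence mask
            weight = (T[:, :, u][co] + T[:, :, q][co]).sum()
            A[u, q] = A[q, u] = weight
    return A
```

The resulting matrix is symmetric with a zero diagonal, and its nonzero pattern depends only on the training portion of the tensor.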
The graph is built by tracking pairs of co-occurring variables, one sample at a time, and merging the results into a single co-occurrence graph shared among samples and timestamps. We define co-occurring variables as two variables from a multivariate time series with a nonzero value in the same timestamp; in such a case, we say that one variable influences the other and is influenced back. The co-occurrence graph is the projection of a tridimensional tensor, T ∈ R^(s×w×v), into a bidimensional one, A ∈ R^(v×v), describing pairwise time-invariant relationships among variables. The co-occurrence graph A = ⟨V, E⟩ is symmetric and weighted. It is composed of a set V of |V| nodes equal to the number of variables, and another set E of |E| non-directed edges equal to the number of co-occurring variables. A node v ∈ V corresponds to a variable from the multivariate time-series domain, and an edge e ∈ E is an ordered pair ⟨u, v⟩ ≡ ⟨v, u⟩ of co-occurring variables u, v ∈ V. The weight f of an edge corresponds to the summation of the values of the variables u, v ∈ V whenever they co-occur in time, such that f(u, v) = Σ_{i=0}^{s−1} Σ_{j=0}^{w−1} T_{i,j,u} + T_{i,j,v}. Notice that the whole graph is bounded to w, the number of timestamps existing in the training portion of the input tensor; if a pair of variables never co-occur in that subset of the data, no edge is assigned to the graph, which means that ⟨u, v⟩ ∉ E and f(u, v) = 0. Given an adjacency matrix A ∈ R^(v×v), we formulate a GSE layer through the following equations, where Wα, Wη, Wμ ∈ R^(v×v) are the weights and bα, bη, bμ ∈ R^v the biases to be learned. Fig. 2: Illustration of the Graph Soft Evolution layer's representation learning, in which the set of multiple multivariate time series is mapped into adjacency matrices of co-occurring variables.
the matrices are element-wise summed to generate a shared graph among samples, which, after a linear transformation, goes through a similarity activation-like function and is scaled by an element-wise multiplication to produce an intermediate adjacency matrix holding the similarity properties inherent to the shared graph. in eq. 1.1, the layer starts by employing a linear transformation to the shared adjacency matrix, which, after multiple iterations, provides a more generic version of the matrix across the samples. subsequently, in eq. 1.2, it uses the cosine similarity on the output of eq. 1.1, which works as an intermediate activation-like function that provides a similarity index for each pair of variables; see the supplementary material. the resulting matrix goes through an element-wise matrix multiplication to transform it back into an adjacency matrix while holding the similarity properties inherent to the shared graph. then, in eq. 1.3, it performs a batch-wise matrix-by-matrix multiplication between the adjacency matrix from eq. 1.2 and the batched input tensor (i.e., y) so as to combine the information from the graph, which generalizes samples and timestamps, with the time-series. the result is followed by a dropout regularizer [36] and a batch-wise matrix-by-matrix multiplication, where the final features from joining both tensors are produced. the evolution concept comes from the cooperation between two gse layers, one at the beginning (i.e., right after the input) and the other at the end (i.e., right before the output) of a neural network, such as in the example shown in fig. 3. as evolution arises from sharing hidden weights between a pair of non-sequential layers, we named this process soft evolution. accordingly, the first layer (i.e., source) aims to learn the weights that scale the matrix and produce a_µ.
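a shape-level sketch of the forward pass just described (eqs. 1.1-1.3). weight names, the zero-row guard, and the inference-style dropout handling are our assumptions; the paper's PyTorch layer may differ:

```python
import numpy as np

def cosine_similarity_matrix(M):
    """Row-wise cosine similarity of a (V, V) matrix: entry (i, j) is
    cos(theta) between rows i and j (the similarity activation-like step)."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    norms = np.where(norms == 0, 1.0, norms)   # guard against zero rows (our addition)
    U = M / norms
    return U @ U.T

def gse_forward(A, Y, W_alpha, b_alpha, rng=None, p_drop=0.0):
    """Sketch of a GSE layer.

    A : (V, V) shared adjacency matrix
    Y : (S, W, V) batched input tensor
    Returns features of shape (S, W, V).
    """
    A_lin = W_alpha @ A + b_alpha              # eq. 1.1: linear transform of the shared graph
    A_sim = cosine_similarity_matrix(A_lin)    # eq. 1.2: similarity index per variable pair
    A_mu = A_sim * A_lin                       # element-wise rescale back to an adjacency matrix
    if rng is not None and p_drop > 0:         # dropout regularizer (training only)
        A_mu = A_mu * (rng.random(A_mu.shape) > p_drop) / (1 - p_drop)
    return Y @ A_mu                            # eq. 1.3: batch-wise matmul with the series
```

the final `Y @ A_mu` broadcasts over the sample axis, so the graph information is combined with every sample and timestamp at once.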
such a result is the input of the second gse layer (i.e., target), and it will be used for learning the evolved version of the adjacency matrix, referred to as a_φ and produced as in eq. 1.1. notice that in fig. 3 the source layer is different from the target one, as we disregard the regularizer ϕ, trainable weights w_α, and bias b_α from eq. 1.3. this is because they aim to enhance the feature-learning process when multiple layers are stacked together. as the last layer, gse provides the output from already-learned features through a scalar product between the data propagated throughout the network, i.e., y, and the intermediate evolved adjacency matrix, i.e., a_ψ.

fig. 3: graph soft evolution layers assembled for evolution-based learning. in such a case, the output of the first gse layer (i.e., source) feeds further layers of the neural network, whose result goes through the second gse layer (i.e., target). the gse, as the last layer, does not use regularizers or linear transformations before the output. instead, it provides the final predictions by the scalar product between the output of the representation-learning process and the data propagated throughout the network. one can see that the source gse layer has two constant inputs, the graph and the input tensor. by contrast, the target gse layer has two dynamic inputs, the shared graph from the source gse layer and the input propagated throughout the network.

in the scope of this work, we use an auto-encoder in between gse layers to learn data codings from the output of the source layer, which will be decoded into a representation closest to the expected output and later re-scaled by the target layer. in this sense, while the first layer learns a graph from the training data (i.e., past data), working as a pre-encoding feature-extraction layer, the second layer re-learns (i.e., evolves) a graph at the end of the forecasting process based on future data, working as a post-decoding output-scaling layer.
when joining the gse layers with the auto-encoder, we assemble the recurrent graph evolution neural network (regenn), introduced in detail as follows. regenn is a graph-based time-aware auto-encoder with linear and non-linear components, whose parallel data-flows work together to provide future predictions based on observations from the past. the linear component is the autoregression implemented as a feed-forward layer, and the non-linear component is made of an encoder and a decoder module powered by a pair of gse layers. fig. 4 shows how these components communicate from the input to the output, and, in the following, we detail their operation. the non-periodical changes and constant progressions of the series across time usually decrease the performance of the network. that is because the scale of the output loses significance compared to the input, which comes from the complexity and non-linear nature of neural networks in time-series forecasting tasks [21]. following a systematic strategy to deal with such a problem [37], [38], regenn leverages an autoregressive (ar) layer working as a linear feed-forward shortcut between the input and output, which, for a tridimensional input, is algebraically defined as: where w ∈ r^{ω×z} are the weights and b ∈ r^z the bias to be learned. the output of the linear component, i.e., y_λ ∈ r^{s×z×v} as in eq. 2, is element-wise added to the non-linear component's output, i.e., y_ψ ∈ r^{s×z×v}, so as to produce the final predictions of the network y ∈ r^{s×z×v}, formally given as y = y_λ + y_ψ. subsequently, we describe the functioning of the auto-encoder that produces the non-linear output of regenn. we use a non-conventional transformer encoder [25], which employs self-attention, to learn an encoding from the features forwarded by the gse layer.
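the autoregressive shortcut of eq. 2 amounts to a per-variable linear map over the time axis (window length ω to stride length z). a minimal NumPy sketch, with hypothetical names:

```python
import numpy as np

def ar_shortcut(X, W, b):
    """Linear autoregressive shortcut (sketch of eq. 2).

    X : (S, omega, V) input window
    W : (omega, Z) weights mapping the window length to the stride length
    b : (Z,) bias
    Returns y_lambda of shape (S, Z, V), later element-wise added to the
    non-linear output y_psi: y = y_lambda + y_psi.
    """
    # einsum contracts the time axis: for each sample s and variable v,
    # y[s, z, v] = sum_w X[s, w, v] * W[w, z] + b[z]
    return np.einsum('swv,wz->szv', X, W) + b[None, :, None]
```

note the same weights are shared across samples and variables, which is what makes the shortcut a cheap linear baseline inside the network.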
(the non-linear component, by contrast, has an auto-encoder and two gse layers; the last gse layer, although equal to the first, yields an early output as it is not stacked with another gse layer.) the self-attention consists of multiple encoders joined through the scaled dot-product attention into a single set of encodings through multi-head attention. the number of features expected by the transformer encoder must be a multiple of the number of heads in the multi-head attention. our encoder's non-conventionality comes from the fact that the first gse layer's output goes through a single scaled dot-product attention on a single-head attention task. that is because the number of features produced by the encoder is equal to the length of the sliding window, and through single-head attention, the window can assume any length. the encoder module is defined as follows: where the self-attention in eq. 3a is a particular case of the multi-head attention, in which the input query q, key k, and value v of the scaled dot-product attention, i.e., softmax(q · kᵀ ÷ √d_k) · v, are equal; and d_k is the dimension of the keys. the attention results are followed by a dropout regularization [36], a residual connection [39], and a layer normalization [40] as in eq. 3b, which ensure generalization. the first two layers work to avoid overfitting and gradient vanishing, while the last one normalizes the output such that the samples among the input have zero mean and unit variance, i.e., γ · ∆(y_ε + ϕ(y_ε)) + β, where ∆ is the normalization function, and γ and β are parameters to be learned. after, in eq. 3c, the intermediate encoding goes through a double linear layer, a point-wise feed-forward layer, which, in this case, consists of two linear transformations in sequence with a relu activation in between, having the weights w_ε, w_ι and bias b_ε, b_ι as optimizable parameters. finally, the transformed encoding goes through one last set of generalizing operations, as shown in eq. 3d.
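the single-head scaled dot-product self-attention of eq. 3a can be sketched directly, since q = k = v. this is an inference-only illustration that omits the dropout, residual, layer-norm, and feed-forward steps of eqs. 3b-3d:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax (the normalized exponential function)."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(Y):
    """Single-head scaled dot-product self-attention over a batched
    input Y of shape (S, W, V): Q = K = V = Y, so the window length W
    can be arbitrary, as the text argues."""
    Q = K = V = Y
    d_k = K.shape[-1]                                  # dimension of the keys
    scores = Q @ np.swapaxes(K, -1, -2) / np.sqrt(d_k) # (S, W, W) attention scores
    return softmax(scores, axis=-1) @ V                # weighted sum of values
```

a full encoder block would wrap this with dropout, a residual connection, layer normalization, and the two-layer point-wise feed-forward network described above.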
the resulting encoding y_ε ∈ r^{s×ω×v} is a tensor with the time-axis length matching the size of the sliding window ω, the same dimensions as the input tensor. the previous encoding will be decoded by two sequence-to-sequence layers, which in this case are long short-term memory (lstm) [28] units. the decoder operates on two of the tridimensional axes of the encoding, the time axis and the variable axis, one at a time. during the time-axis decoding, the first recurrent unit translates the window-sized input into a stride-sized output as in the following: where v is the v-th variable of the t-th time-series group, and the weights w_o ∈ r^z are parameters to be learned. along with eq. 4, we refer to f as the forget gate's activation vector, i as the input and update gate's activation vector, o as the output gate's activation vector, c̃ as the cell input activation vector, c as the cell state vector, and h as the hidden state vector. the last hidden state vector goes through a dropout regularization ϕ before the next lstm in the sequence. the next recurrent unit decodes the variable axis from the partially-decoded encoding without changing the input dimension. the set of variables within the time-series does not necessarily imply a sequence; that does not interfere with the decoding process as long as the variables are kept in the same position. the second lstm in the sequence, in which t is the t-th timestamp of the v-th variable group, works as follows: where y_ε is the partially-decoded encoding, and the weights w_o ∈ r^z are parameters to be learned. the description of the notations within eq. 4 holds for eq. 5. the difference, besides the decoding target, is the residual connection with the partially-decoded encoding y_ε at the last hidden state vector after the dropout regularization ϕ. finally, the output of the last recurrent unit y goes through the last gse layer, so as to produce the non-linear output y_ψ of regenn.
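the two-stage decoding is mostly axis bookkeeping: recur over time, then swap axes and recur over variables, then add the residual. a sketch with placeholder recurrent units (`time_rnn`, `var_rnn` stand in for the paper's LSTMs; any sequence-to-sequence map fits):

```python
import numpy as np

def decode(encoding, time_rnn, var_rnn):
    """Sketch of the two-stage decoder.

    encoding : (S, omega, V) output of the transformer encoder
    time_rnn : maps (S, omega, V) -> (S, Z, V), window to stride
    var_rnn  : sequence-to-sequence map that preserves its input shape
    Returns the decoded tensor of shape (S, Z, V).
    """
    # stage 1: time-axis decoding, window-sized input to stride-sized output
    y = time_rnn(encoding)                # (S, Z, V)
    # stage 2: variable-axis decoding; swap axes so the recurrence walks variables
    y2 = np.swapaxes(var_rnn(np.swapaxes(y, 1, 2)), 1, 2)
    # residual connection with the partially-decoded encoding
    return y2 + y
```

the residual term mirrors the connection described above between the second LSTM's output and the partially-decoded encoding.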
regenn operates on a tridimensional space shared between samples, timestamps, and variables. in such a space, it carries out a time-based optimization strategy. the training process iterates over the time-axis of the dataset, showing the network how the variables within a subset of time-series behave as time goes by, and later repeating the process through subsets of different samples. the network's weights are shared among the entire dataset and optimized toward the best generalization simultaneously across samples, timestamps, and variables. in this work, we used adam [41], a gradient descent-based algorithm, to optimize the model. as the optimization criterion, we used the mean absolute error (mae), which is a generalization of the support vector regression [42] with soft-margin criterion, where ω is the set of internal parameters of regenn, y is the network's output, and t the ground truth:

because sars-cov-2 behaves as a streaming time-series, we adopted a transfer-learning approach to train the network on that dataset only. transfer learning shares knowledge across different domains by using the pre-trained weights of another neural network. the approach we adopted, although different, resembles online deep learning [43]. the main idea is to train the network on incremental slices of the time-axis, such that the pre-trained weights of a previous slice are used to initialize the weights of the network in the next slice. the purpose of this technique is not only to achieve better forecasting performance but also to show that regenn remains superior to other algorithms throughout the pandemic. the results are based on three datasets, all of which are multi-sample, multivariate, and vary over time. the first dataset describes the coronavirus pandemic, referred to as sars-cov-2, made available by johns hopkins university [44].
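the incremental-slice transfer learning just described reduces to a simple schedule: train on an expanding window, seeding each round with the previous round's weights. a sketch, where `train_fn` is a hypothetical placeholder for one training run:

```python
def incremental_training(series_length, slice_step, train_fn):
    """Sketch of the self-to-self transfer-learning schedule.

    series_length : total number of timestamps available
    slice_step    : how many new timestamps arrive per round (e.g., 15 days)
    train_fn      : placeholder, train_fn(end, init_weights) -> new weights,
                    training on data up to timestamp `end`
    Returns a list of (end, weights) snapshots, one per slice.
    """
    weights = None                 # the first slice trains from scratch
    snapshots = []
    for end in range(slice_step, series_length + 1, slice_step):
        weights = train_fn(end, weights)   # warm-start from the previous slice
        snapshots.append((end, weights))
    return snapshots
```

each snapshot corresponds to one re-training round of the streaming experiment.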
it describes 3 variables through 120 days for 188 countries and varies from the first day of the pandemic to the day it completed four months of duration. the second one is the brazilian weather dataset (available together with the implementation source-code), collected from 253 sensors during 1,095 days and covering 4 variables. the third dataset is from the 2012 physionet computing in cardiology challenge [45], from which we are using 9 variables across 48 hours recorded from 11,988 icu patients. the variables within the datasets are:

data pre-processing. all datasets were normalized between zero and one along the variable axis. such a pre-processing task is conventional to all types of learning algorithms but crucial to deep learning. by doing so, we speed up training and avoid gradient spikes during the training phase. however, the neural network output is transformed back to the initial scale before computing any of the evaluation metrics. a simplistic yet effective approach to train time-series algorithms is the sliding window technique [46], also referred to as rolling window. the window size is well known to be a highly sensitive hyperparameter [47], [48]. consequently, we followed a non-tunable approach, in which we set the window size before the experiments, taking into consideration only the context and domain of the datasets. these values were used across all window-based experiments, including the baselines and ablation tests. it is noteworthy that most machine-learning algorithms are not meant to handle time-variant data, such that no sliding window was used in those cases. conversely, we considered training timestamps as features and those reserved for testing as tasks of a multitask regression. on the deep learning algorithms, we used a window size of 7 days for training and reserved 7 days for validation (between the training and test sets) to predict the last 14 days of the sars-cov-2 dataset.
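the normalization step and its inversion before scoring can be sketched as follows (a minimal illustration; function names are ours, and the paper's exact implementation may differ):

```python
import numpy as np

def minmax_normalize(T, eps=1e-12):
    """Scale a (S, W, V) tensor to [0, 1] along the variable axis,
    keeping the statistics needed to undo the transform before
    computing evaluation metrics."""
    lo = T.min(axis=(0, 1), keepdims=True)   # per-variable minimum
    hi = T.max(axis=(0, 1), keepdims=True)   # per-variable maximum
    scaled = (T - lo) / (hi - lo + eps)      # eps guards constant variables
    return scaled, lo, hi

def minmax_invert(scaled, lo, hi, eps=1e-12):
    """Transform network outputs back to the initial scale."""
    return scaled * (hi - lo + eps) + lo
```

keeping `lo` and `hi` per variable is what allows metrics to be reported on the original scale, as the text requires.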
the 7-7-14 split idea comes from the disease incubation period, which is 14 days. on the other hand, we used a window size of 84 days and reserved 28 days for validation to predict the last 56 days in the brazilian weather dataset. the 84-28-56 split is based on the seasonality of the weather data, such that we look at the previous 3 months (a weather-season window) to predict the last 2 months of the upcoming season. finally, we used a window size of 12 hours for training and 6 hours for validation to predict the last 6 hours of the physionet dataset. the 12-6-6 split comes from the fact that patients in icus are in a critical state, such that predictions within 24 hours are more useful than long-term predictions. many existing algorithms are limited because they support neither multitask nor multi-output regression, making them even more limited when data is tridimensional. the most straightforward yet effective approach we followed to compare them to regenn was to create a chain of ensembles 2. in such a case, each estimator makes predictions in the order specified by the chain, using all of the available features provided to the model plus the predictions of models that are earlier in the chain. the number of estimators in each experiment varies according to the type of the ensemble and the type of the algorithms, and the final performance is the average of each estimator's performance. for simplicity's sake, we grouped the algorithms into five categories, as follows: (i) tridimensional-compliant algorithms, with single estimators; (ii) multivariate algorithms, with s estimators, one estimator for each sample; (iii) multi-output and multitask algorithms, with v estimators, one estimator per variable; (iv) single-target algorithms, with v×z estimators, one estimator per variable and stride; and (v) univariate algorithms, with s×v estimators, one estimator for each sample and variable.
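the window/stride splits above rest on the rolling-window technique; enumerating the (input, target) index pairs is a one-loop sketch (our illustration, not the paper's loader):

```python
def sliding_windows(n_timestamps, window, stride):
    """Enumerate (input, target) index ranges for rolling-window
    training: each pair sees `window` past steps and predicts the
    next `stride` steps, advancing one timestamp at a time."""
    pairs = []
    start = 0
    while start + window + stride <= n_timestamps:
        x = range(start, start + window)                     # input window
        y = range(start + window, start + window + stride)   # prediction target
        pairs.append((x, y))
        start += 1
    return pairs
```

for the sars-cov-2 setup, `window=7` with a 14-day test horizon corresponds to the 7-7-14 split described above.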
as time-series forecasting poses as a time-aware regression problem, our goal remains to predict values that resemble the ground truth the most.

hyperparameter tuning. unlike many neural networks, regenn has only two hyperparameters that change the size of the weights' tensors, which are the window size (i.e., input size) and the stride size (i.e., output size). as already discussed, both were set before the experiments, and neither was tuned toward any performance improvement. the trade-off of having fewer hyperparameters is to spend more energy on training the network toward a better performance. we focus on the network optimizer, gradient clipping, learning rate scheduler, and dropout regularization when we refer to tunable hyperparameters. along these lines, we followed a controlled and limited exploratory approach similar to a random grid-search, starting with pytorch's defaults. the tuning process was carried out on the validation set, intentionally reserved for measuring the network's improvement. the tuning proceeds by updating the hyperparameters whenever better results are observed on the validation set, leading us to a set of optimized but not optimal hyperparameters. we used the set of optimized hyperparameters to evaluate the network on the test set. we used the default values for all the other algorithms [49]-[51] unless explicitly required for working with a particular dataset, as was the case of lstnet [21], dsanet [23], and mlcnn [26]. the list of hyperparameters of regenn and further deep-learning algorithms is in the supplementary material. the experiments related to machine-learning and time-series algorithms were carried out on a linux-based system with 64 cpus and 750 gb of ram for all the datasets. the experiments related to deep learning on the sars-cov-2 dataset were carried out on a linux-based system with 56 cpus, 256 gb of ram, and 8 gpus (titan x pascal).
the brazilian weather and physionet datasets were tested on a different system with 32 cpus, 512 gb of ram, and 8 gpus (titan x maxwell). while cpu-based experiments behave evenly across all cpu architectures, the same does not hold for gpus, such that the gpu model and architecture must match to guarantee reproducibility. aiming at complete reproducibility, we disclose not only the source code of regenn on github 3, but also the scripts, pre-processed data, and snapshots of all the trained networks on a public folder at google drive 4. 3. available at https://bit.ly/2ydbroo. 4. available at https://bit.ly/31x6cwn. subsequently, we go through the experimental results on each one of the benchmarking datasets. because not all the algorithms performed evenly across the datasets, we display the 34 most prominent ones out of the 49 tested algorithms; for the extended results, see the supplementary material. additionally, we also discuss the ablation experiments, which were carried out with regenn's hyperparameters; in the supplementary material, we provide two other rounds of this same experiment using as hyperparameters the pytorch defaults 5 and others recurrently employed in the literature. at the end of each experiment, we draw explanations about the evolution weights, i.e., the intermediate adjacency matrices from the gse layers, by using the cosine similarity on pairs of co-occurring variables. the sars-cov-2 dataset has been updated daily since the beginning of the coronavirus pandemic. we used a self-to-self transfer-learning approach to train the network in slices of time due to the dataset's streaming nature.
in short, the network was re-trained every 15 days with new incoming data, using as starting weights the pre-trained weights from the network trained on the previous 15 days. as a result of the analysis of the dataset in time-slices, we were able to notice that, as time goes by and more information is available on the sars-cov-2 dataset, the problem becomes more challenging to solve by looking individually at each country, and easier when looking at all of them together. although countries have their particularities, which make the disease spread in different ways, the main goal is to decrease the spreading, such that similarities between the historical data of different countries provide for finer predictions. furthermore, we also observed that not all the estimators within an ensemble perform in the same way in the face of different countries. due to the regenn capability of observing inter- and intra-relationships between time-series, it performs better on highly uncertain cases like this one. subsequently, we present the ablation results, in which we utilized the same data-flow as regenn but no gse layer, while systematically changing the decoder architecture. 5. available at https://bit.ly/2quwrsd.

fig. 5: results presented in descending order of the mae metric (i.e., from worst to best performance); the algorithms are symbol-encoded according to their number of estimators. we use gray arrows to describe the standard deviation of the results; the negative deviation, which is equal to the positive one, was suppressed for better visualization. the results confirmed regenn's superior performance, as it is the algorithm with the lowest error and standard deviation compared to the others, such that the improvement in the experiment was no lower than 64.87%.

we provide results using different recurrent units (ru), which are the elman rnn [52], lstm [28], and gru [22]. we also varied the directional flag of the recurrent unit between unidirectional (u) and bidirectional (b).
that is because a unidirectional recurrent unit tracks only forward dependencies, while a bidirectional one tracks both forward and backward dependencies. additionally, the network architecture of each test is described by a summarized tag. for example, given the architecture (e → u ru + b ru)+ar, the model has a transformer encoder as the encoder, a unidirectional recurrent unit as the time-axis decoder, and a bidirectional recurrent unit as the variable-axis decoder. besides, the output of the decoder is element-wise added to the autoregression (ar) output. furthermore, the table shows results with and without the encoder and the autoregression component, as well as cases using a single recurrent unit only for time-axis decoding. according to the ablation results detailed in tab. 2, one can observe that the improvement of regenn is slightly smaller than previously reported. that is because its performance comes not only from the gse layer but also from how the network handles the multiple multivariate time-series data. consequently, the ablation experiments reveal that some models without gse layers are enough to surpass all the competing algorithms. however, when using regenn, we can improve them further and achieve 20.81% of additional reduction on the mae, 19.77% on the rmse, and 35.72% on the msle. fig. 6 shows the evolution weights originated from applying the cosine similarity on the hidden adjacency matrices of regenn. when comparing the input and evolved graphs, the number of cases and deaths has a mild similarity. that might come from the fact that, at the beginning of the pandemic, diagnosing infected people was already a broad concern. the problem did not go away, but more infected people were discovered as more tests were made, and also because the disease spread worldwide. a similar scenario can be drawn from the number of recovered and the number of cases, as infected people with mild or no symptoms were unaware of being infected.
conversely, we can see that the similarity between the recovered and deaths decreases over time, which comes from the fact that, as more tests are made, the mortality rate drops to a stable threshold due to the increased number of recovered people. the brazilian weather dataset is a highly seasonal dataset with a higher number of samples, variables, and timestamps than the previous one. for simplicity's sake, in this experiment, regenn was trained on the whole training set at once. the results are in fig. 7, in which regenn was the first-placed algorithm, followed by the elman rnn in second. regenn overcame the elman rnn by 11.95% on the mae, 11.96% on the rmse, and 25.84% on the msle. we noticed that all the algorithms perform quite similarly on this dataset. the major downside for most algorithms comes from predicting small values that are close to zero, as noted by the msle results. in such cases, the ensembles showed a high variance when compared to regenn. we believe this is why the elman rnn shows performance closer to regenn rather than to exponential smoothing, the third-placed algorithm, as regenn has a single estimator, while the exponential smoothing is an ensemble of estimators. another explanation of why some algorithms underperform on the msle might be related to their difficulty in tracking temporal dependencies, which embraces the weather seasonality. the ablation results are in tab. 3, in which we observed again that the network without the gse layers already surpasses the baselines. when decommissioning the gse layers of regenn and using gru instead of lstm on the decoder, we observed a 3.86% improvement on the mae, 10.02% on the rmse, and 25.34% on the msle when compared to the elman rnn results. using regenn instead, we achieve a further performance gain of 8.42% on the mae and 2.16% on the rmse over the ablation experiment. fig.
8 depicts the evolution weights for the current dataset, in which we can observe a consistent similarity between pairs of variables in the input graph, which does not repeat in the evolved graph, implying different relationships. on the evolved graph, we observe that the similarity between all pairs of variables increased as the graph evolved. the pairs solar radiation and rain, maximum temperature and rain, and solar radiation and minimum temperature stood out.

fig. 7: autoregressive baselines results for the brazilian weather dataset ordered from worst to best mae performance. along with the image, the algorithms are symbol-encoded based on their type and number of estimators, and we use gray arrows to report the standard deviation of the results; the negative deviation, which is equal to the positive one, was suppressed for improved readability. in such an experiment, regenn once more outperformed all the competing algorithms, demonstrating versatility by performing well even on a highly-seasonal dataset, with an improvement no lower than 11.95%. in the face of seasonality, the elman rnn surpassed the exponential smoothing, the previously second-placed algorithm.

those pairs are mutually related, which comes from solar radiation interfering in both maximum and minimum temperature and also in the precipitation factors, where the opposite relation holds. what can be extracted from the evolution weights, in this case, is the notion of importance between pairs of variables, so that the pairs that stood out are more relevant and provide better information to predict the forthcoming values for the variables in the dataset. the physionet dataset presents a large number of samples and an increased number of variables, but little information on the time axis, a setting in which ensembles still struggle to perform accurate predictions, as depicted in fig. 9.
once again, regenn stays steady as the first-placed algorithm in performance, showing solid improvement over the linear svr, the second-placed algorithm. the improvement was 7.33% on the mae and 35.13% on the msle, while the rmse achieved by regenn laid within the standard deviation of the linear svr, pointing out an equivalent performance between them. the linear svr is an ensemble with multiple estimators, while regenn uses a single one, which makes it more accurate and more straightforward for dealing with the current dataset. as in tab. 4, the ablation results reveal that a neural-network architecture without the gse layers can achieve better performance than the baseline algorithms. in this specific case, we see that by using a bidirectional lstm instead of a unidirectional one on the decoder module of the neural network, we can achieve a performance almost as good as regenn, but not enough to surpass it, as regenn still shows an improvement of 1.05% on the mae and 0.98% on the rmse over the ablation experiment with the bidirectional lstm. in this dataset, regenn learns by observing multiple icu patients. however, one cannot say that an icu patient's state is somehow connected to another patient's state. rather, the idea holds as in the first experiment: although the samples are different, they share the same domain, variables, and timestream, such that the information from one sample might help enhance future forecasting for another one. that means regenn learns both from the past of the patient and from the past of other patients. nevertheless, we must be careful about drawing any understanding from these results, as the reason each patient is in the icu is different, and while some explanations might be suited for a small set of patients, they tend not to generalize to a significant number of patients. when analyzing the evolution weights in fig.
10 aided by a physician, we can say that there is a relationship between the amount of urine excreted by a patient and the arterial blood pressure, and also that there is a relation between the systolic and diastolic blood pressure. however, even aided by the evolution weights, we cannot further describe these relations, since there are variables of the biological domain that are not being taken into consideration. the evolution weights are intermediate weights of the representation-learning process (see fig. 4), which are optimized throughout the network's training. such weights are time-invariant and are a requirement for the feature-learning behind the gse layer. although time does not flow through the adjacency matrix, the network is optimized as a whole, such that every operation influences the gradients resulting from the backward-propagation process. that means that the optimizer, influenced by the gradients of both time-variant and time-invariant data, will optimize all the weights toward a better forecasting ability. such a process depends not only on the network architecture but also on the reliability of the optimization process. that increases uncertainty, which is the downside of regenn, demanding more time to train the neural network and causing the improvement not to be strictly increasing.

fig. 9: baseline results for the physionet dataset arranged from the worst to the best mae performance, in which regenn was the first-placed algorithm, followed by the linear svr in second. the improvement from one to another was no lower than 7.33%, but, in this case, regenn yielded an rmse compatible with the linear svr. along with the image, the algorithms are symbol-encoded based on type and number of estimators, and gray arrows depict the standard deviation of the results; the negative deviation, which is equal to the positive one, was suppressed to provide better visualization of the results.

fig.
10: evolution weights extracted from regenn after training on the physionet dataset, in which we use cosine similarity to compare the relationship between pairs of variables. we use "abp" as shorthand for arterial blood pressure, "ni" for non-invasive, "dias" for diastolic, and "sys" for systolic. consequently, training might take long sessions, even with consistently reduced learning rates on plateaus or simulated-annealing techniques; this is influenced by the fact that the second gse layer has two dynamic inputs, which arise from the graph-evolution process. however, we observed that, through the epochs, the evolution weights reach a stable point with no major updates, and as a result, the network demonstrates a remarkable improvement in its last iterations, when the remaining weights converge more intensely to a near-optimal configuration. even though regenn has this particular drawback, it shows excellent versatility, which comes from its superior performance in the tasks of epidemiology modeling on the sars-cov-2 dataset, climate forecasting on the brazilian weather dataset, and patient monitoring in intensive care units on the physionet dataset. consequently, we see regenn as a tool to be used in data-driven decision-making tasks, helping prevent, for instance, natural disasters, or during the preparation for an upcoming pandemic. as a precursor in multiple multivariate time-series forecasting, there is still much to be improved. for example, reducing the uncertainty that harms regenn without decreasing its performance should be the first step, followed by extending the proposal to handle problems in the spatio-temporal field, of great interest to traffic forecasting and environmental monitoring. another possibility would be to remove the recurrent layers within the decoder while tracking the temporal dependencies through multiple graphs, which would provide a whole new way of temporal modeling.
notwithstanding, in some cases where extensive generalization is not required, the analysis of singular multivariate time-series may be preferred to multiple multivariate time-series. that is because, when focusing on a single series at a time, some but not all samples might yield a lower forecasting error, as the model will be driven to a single multivariate sample. however, both approaches to time-series forecasting can coexist in the state-of-the-art, and, as a consequence, the decision to work on a higher or lower dimensionality must relate to which problem is being solved and how much data is available to solve it. this paper tackles multiple multivariate time-series forecasting tasks by proposing the recurrent graph evolution neural network (regenn), a graph-based time-aware auto-encoder powered by a pair of graph soft evolution (gse) layers, a further contribution of this study that stands for a graph-based representation-learning layer. the literature handles multivariate time-series forecasting with outstanding performance, but up to this point we lacked a technique with increased generalization over multiple multivariate time-series with sound performance. previous research might have avoided tackling such a problem, as a neural network for this task is challenging to train and usually yields poor results. that is because one aims to achieve good generalization on future observations for multivariate time-series that do not necessarily hold the same data distribution. because of that, regenn is a precursor in multiple multivariate time-series forecasting, and even though this is a challenging problem, regenn surpassed all baselines and remained effective after three rounds of 30 ablation tests through distinct hyperparameters. the experiments were carried out over the sars-cov-2, brazilian weather, and physionet datasets, showing improvements, respectively, of at least 64.87%, 11.96%, and 7.33%.
as a consequence of the results, regenn shows a new range of possibilities in time-series forecasting, starting by demonstrating that ensembles perform more poorly than a single model that understands the entanglement between different variables by looking at how variables interact as time goes by and multiple multivariate time-series evolve. this work was partially supported by the coordenação de aperfeiçoamento de pessoal de nível superior - brazil (capes) - finance code 001; fundação de amparo à pesquisa do estado de são paulo (fapesp), through grants 2014/25337-0, 2016/17078-0, 2017/08376-0, 2019/04461-9, and 2020/07200-9; conselho nacional de desenvolvimento científico e tecnológico (cnpq), through grants 167967/2017-7 and 305580/2017-5; national science foundation awards iis-1418511, ccf-1533768, and iis-1838042; and the national institutes of health awards nih r01 1r01ns107291-01 and r56hl138415. we thank jeffrey valdez for his aid with sunlab's computer infrastructure, lucas scabora for his careful review of the paper, and gustavo merchan, m.d., for his analysis of the evolution weights on the physionet dataset. baselines notes. table 1 lists the acronym and full name of all algorithms we tested during the baselines computation. tables 2 to 6 present detailed information from the experiments discussed in the main manuscript. the following tables regard the tests using transfer learning on the sars-cov-2 dataset, in which a new network was trained every 15 days, starting 45 days after the pandemic started and up to 120 days of its duration. cosine similarity. the cosine similarity, which has been widely applied in learning approaches, accounts for the similarity between two non-zero vectors based on their orientation in an inner product space [1] .
the underlying idea is that the similarity is a function of the cosine of the angle θ between the vectors; hence, when cos θ = 1, the two vectors in the inner product space have the same orientation; when cos θ = 0, the vectors are oriented at 90° relative to each other; and when cos θ = −1, the vectors are diametrically opposed. the cosine similarity between the vectors u and v is defined as follows:

cos(u, v) = (u · v) / (‖u‖ ‖v‖),

where u · v is the dot product between u and v, ‖u‖ = √(u · u) represents the norm of the vector u, and u_i is the i-th variable of the object represented by the vector. in the scope of this work, the cosine similarity is used to build similarity adjacency matrices, which measure the similarity between all nodes in a variables' co-occurrence graph. the similarity between two nodes in the graph describes how likely those two variables are to co-occur at the same time for a time-series. in this case, the similarity ends up acting as an intermediate activation function, enabling the graph-evolution process by maintaining the similarity of the relationships between pairs of nodes. in such a particular case, we define the cosine-matrix similarity as follows:

cos(a) = (a · aᵀ) / (‖a‖ ‖aᵀ‖),

where a · aᵀ denotes the dot product between the matrix a and the transposed aᵀ, while ‖a‖ represents the norm of that same matrix with respect to any of its ranks, as we consider a to be a squared adjacency matrix. horizon forecasting. horizon forecasting stands for an approach used for making non-continuous predictions by accounting for a future gap in the data. it is useful in a range of applications, considering, for instance, that recent data may be unavailable or too costly to collect. thereby, it is possible to optimize a model that disregards the near future and focuses on the far-away future. however, such an approach forgoes additional information that could be learned from continuous-timestamp predictions [2] .
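as a concrete reference, the vector and matrix forms above can be sketched in a few lines of plain python; the function names are illustrative and are not taken from the regenn codebase:

```python
import math

def cosine_similarity(u, v):
    """cos(u, v) = (u . v) / (||u|| ||v||) for non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cosine_similarity_matrix(rows):
    """pairwise cosine similarity between the rows of a matrix, as used
    to build a similarity adjacency matrix over a variables' graph."""
    return [[cosine_similarity(r, s) for s in rows] for r in rows]

# identical orientation -> 1, orthogonal -> 0, opposed -> -1
assert abs(cosine_similarity([1, 0], [2, 0]) - 1.0) < 1e-9
assert abs(cosine_similarity([1, 0], [0, 3])) < 1e-9
assert abs(cosine_similarity([1, 0], [-1, 0]) + 1.0) < 1e-9
```

the diagonal of the resulting adjacency matrix is always 1, since every variable is maximally similar to itself.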
by not considering the near past as a variable that influences the near future, we might end up with a non-stochastic view of time, meaning that the algorithm focuses on long-term dependencies rather than both long- and short-term dependencies. along these lines, both lstnet [3] and dsanet [4] comply with horizon forecasting, and, to make our results comparable, we set the horizon to one on both of them. thus, we started assessing the test results right after the algorithms' last validation step because, the closer to the horizon, the more accurate these models should be. a simplistic yet effective approach to train time-series algorithms is through the sliding-window technique [5] , which is also referred to as rolling window (see fig. 1 ). such a technique fixes a window size, which slides over the time axis, predicting a predefined number of future steps, referred to as the stride. some studies on time-series have been using a variant technique known as the expanding sliding window [6, 7] . this variant starts with a prefixed window size, which grows as it slides, showing more information to the algorithm as time goes by. regenn adheres to the traditional technique, as it is bound to the dimensions of its tensor weights. those dimensions are of a preset size and cannot be effortlessly changed during training; continuously changing the number of internal parameters comes with increased uncertainty, such that a conventional neural-network optimizer cannot handle it properly. nevertheless, the window size of the sliding window is well known to be a highly sensitive hyperparameter [8, 9] ; to avoid an increased number of parameters, we followed a non-tunable approach, in which we set the window size before the experiments, taking into consideration the context of each dataset; the same values were used across all window-based trials, including the baselines and ablation. optimization strategy. regenn operates on a three-dimensional space shared between samples, time, and variables.
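a minimal sketch of the two windowing schemes just described, assuming a univariate list-based series and illustrative function names:

```python
def sliding_windows(series, window, stride=1):
    """fixed-size sliding (rolling) window: each input window of length
    `window` is paired with the next `stride` observations as the target."""
    pairs = []
    for start in range(len(series) - window - stride + 1):
        x = series[start:start + window]
        y = series[start + window:start + window + stride]
        pairs.append((x, y))
    return pairs

def expanding_windows(series, initial, stride=1):
    """expanding variant: the window starts at length `initial` and grows
    as it slides, exposing more history to the algorithm as time goes by."""
    pairs = []
    for end in range(initial, len(series) - stride + 1):
        pairs.append((series[:end], series[end:end + stride]))
    return pairs

ts = list(range(10))
fixed = sliding_windows(ts, window=4, stride=1)
grown = expanding_windows(ts, initial=4, stride=1)
assert fixed[0] == ([0, 1, 2, 3], [4])
assert grown[-1][1] == [9]
```

in the fixed scheme every input has the same shape, which is what allows preset tensor-weight dimensions; the expanding scheme would require the input dimension to grow during training.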
in such a space, it carries out a time-based optimization strategy. the training process iterates over the time axis of the dataset, showing the network how the variables within a subset of time-series behave as time goes by, and later repeating the process through subsets of different samples. the network's weights are shared among the entire dataset and optimized towards best generalization simultaneously across samples, time, and variables. the dataset t ∈ ℝ^(s×t×v) is sliced along the time axis into training ∈ ℝ^(s×w×v), holding the first w timestamps, and testing ∈ ℝ^(s×z×v), holding the remaining z timestamps, with w + z = t. once the data is sliced, we follow by using a gradient-descent-based algorithm to optimize the model. in the scope of this work, we used adam [10] as the optimizer, as it is the most common optimizer for time-series forecasting problems. as the optimization criterion, we used the mean absolute error (mae), which is a generalization of the support vector regression [11] with soft-margin criterion, formalized as follows:

minimize  ½ ‖w‖²_f + c Σ_{i=1}^{n} (ξ_i + ξ_i*)
subject to  y_i − ⟨w, x_i⟩ ≤ ρ + ξ_i,  ⟨w, x_i⟩ − y_i ≤ ρ + ξ_i*,  ξ_i, ξ_i* ≥ 0,

where w is the set of optimizable parameters, ‖·‖_f is the frobenius norm, and both c and ρ are hyperparameters. the idea, then, is to find the w that better fits y_i, x_i ∀ i ∈ [1, n], so that all values are within [ρ + ξ_i, ρ + ξ_i*], where ξ_i and ξ_i* are the two farthest opposite points in the dataset. a similar formulation of the linear svr implementation for horizon forecasting was presented by lai et al. [3] . due to the higher dimensionality among the multiple multivariate time-series used in this study, in which we consider time to be continuous, the problem becomes:

minimize  ½ ‖ω‖²_f + c Σ_{i=1}^{n} (ξ_i + ξ_i*)
subject to  t_i − ŷ_i ≤ ρ + ξ_i,  ŷ_i − t_i ≤ ρ + ξ_i*,  ξ_i, ξ_i* ≥ 0,

where ω is the set of internal parameters of regenn, ŷ is the output of the network, and t the ground truth. when disregarding c and setting ρ to zero, we can reduce the problem to the mae loss formulation:

ℓ(ŷ, t) = (1/n) Σ_{i=1}^{n} |ŷ_i − t_i|.

square- and logarithm-based criteria can also be used with regenn. we avoid doing so, as this is a decision that should be made based on each dataset.
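the reduction above can be checked numerically: with ρ = 0 and the regularization and c terms disregarded, the sum of the svr slack variables coincides with the unnormalized mean absolute error. a small pure-python sketch, with illustrative names:

```python
def epsilon_insensitive_loss(y_pred, y_true, rho=0.0):
    """sum of the slack variables of the soft-margin svr criterion: each
    point pays only for the part of its error that exceeds the margin rho."""
    return sum(max(abs(p - t) - rho, 0.0) for p, t in zip(y_pred, y_true))

def mae(y_pred, y_true):
    """mean absolute error, the criterion used to optimize the network."""
    return sum(abs(p - t) for p, t in zip(y_pred, y_true)) / len(y_true)

pred, true = [1.0, 2.5, 4.0], [1.5, 2.0, 3.0]
# with rho = 0 the svr slack term equals the (unnormalized) mae
assert abs(epsilon_insensitive_loss(pred, true) - mae(pred, true) * len(true)) < 1e-9
```

with a positive ρ, small errors inside the margin cost nothing, which is precisely the slack the mae formulation gives up by setting ρ to zero.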
by contrast, we follow the svr path towards the evaluation of absolute values, which is less sensitive to outliers and enables regenn to be applied to a range of applications. transfer-learning approach. we adopted a transfer-learning approach to train the network on the sars-cov-2 dataset that, although different, resembles online deep learning [12] . the idea is to train the network on incremental slices of the time axis, such that the pre-trained weights of a previous slice are used to initialize the weights of the network in the next slice (see fig. 2 ). the purpose of this technique is not only to achieve better performance towards the network but also to show that regenn is useful throughout the pandemic. hyperparameter adjustment is usually required when transferring the weights from one network to another, mainly of the learning rate; for the list of hyperparameters we used, see tab. 3. besides, we deliberately applied a 20% dropout to all tensor weights outside the network architecture and before starting the training. the aim behind that decision was to insert randomness into the pipeline and avoid local optima. it is worth mentioning that we did not observe any decrease in performance, but in some cases the optimizer's convergence was slower. baselines algorithms. open-source python libraries provided the time-series and machine-learning algorithms used along with the experiments. time-series algorithms came from statsmodels 1 , while the machine-learning ones came mainly from scikit-learn 2 . further algorithms, such as xgboost 3 , lgbm 4 , and catboost 5 , have a proprietary, open-source implementation, which was preferred over the others. we used the default hyperparameters over all the experiments, performing no fine-tuning. however, because all the datasets we tested are strictly positive, we forced all negative outputs to become zero, as a relu activation function would do.
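the slice-wise procedure can be sketched as follows; `train_slice` is a placeholder standing in for an actual training run, and all names are illustrative rather than taken from the regenn implementation:

```python
import random

def train_slice(weights, data_slice):
    """placeholder for one training run on a slice; returns updated
    weights (a real run would perform gradient descent here)."""
    return {name: w * 0.9 for name, w in weights.items()}

def dropout_weights(weights, p=0.2, seed=0):
    """zero out a fraction p of the carried-over weights, injecting
    randomness into the pipeline to help escape local optima."""
    rng = random.Random(seed)
    return {name: (0.0 if rng.random() < p else w)
            for name, w in weights.items()}

def transfer_learning(slices, init_weights):
    """train one network per incremental slice, initializing each
    from the (dropout-perturbed) weights of the previous slice."""
    weights = dict(init_weights)
    history = []
    for data_slice in slices:
        weights = dropout_weights(weights, p=0.2)
        weights = train_slice(weights, data_slice)
        history.append(dict(weights))
    return history

slices = ["days 0-45", "days 0-60", "days 0-75"]  # incremental time slices
runs = transfer_learning(slices, {"w1": 1.0, "w2": -2.0})
assert len(runs) == len(slices)
```

in a real pipeline the carried weights would be a framework checkpoint and the per-slice learning rate would be adjusted, as noted above.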
a list with the names of all algorithms tested along with the experiments is provided in tab. 1, which contains more algorithms than we reported in the main paper. that is because we are listing all algorithms, even the ones that were removed from the pipeline due to being incapable of working with the input data and yielding exceptions.

table caption: hyperparameters for the ablation tests.
legend: algorithms with the best performance are in bold, the ones noted as "-" yielded exceptions, and others marked "***" were suppressed due to poor performance.
table 9: detailed results for the first three slices of the sars-cov-2 dataset (same legend as above).

references:
- effects of climate and land-use changes on fish catches across lakes at a global scale
- rising river flows throughout the twenty-first century in two himalayan glacierized watersheds
- the impact of climate change and glacier mass loss on the hydrology in the mont-blanc massif
- stock market prediction using optimized deep-convlstm model
- stock market analysis using candlestick regression and market trend prediction (ckrm)
- robust landsat-based crop time series modelling
- continuous monitoring of land disturbance based on landsat time series
- generic and scalable framework for automated time-series anomaly detection
- multivariate time series anomaly detection: a framework of hidden markov models
- a review on time series forecasting techniques for building energy consumption
- multi-layer representation learning for medical concepts
- temporal phenotyping of medically complex children via parafac2 tensor factorization
- patient trajectory prediction in the mimic-iii dataset, challenges and pitfalls
- taste: temporal and static tensor factorization for phenotyping electronic health records
- an interpretable mortality prediction model for covid-19 patients
- on the responsible use of digital data to tackle the covid-19 pandemic
- temporal dynamics in viral shedding and transmissibility of covid-19
- temporal aggregation of univariate and multivariate time series models: a survey
- artificial neural networks: a tutorial
- a simulation study of artificial neural networks for nonlinear time-series forecasting
- modeling long- and short-term temporal patterns with deep neural networks
- empirical evaluation of gated recurrent neural networks on sequence modeling
- dsanet: dual self-attention network for multivariate time series forecasting
- dstp-rnn: a dual-stage two-phase attention-based recurrent neural network for long-term and multivariate time series prediction
- attention is all you need
- towards better forecasting by fusing near and distant future visions
- gradient-based learning applied to document recognition
- long short-term memory
- recurrent neural networks for multivariate time series with missing values
- geoman: multi-level attention networks for geo-sensory time series prediction (ijcai-18, international joint conferences on artificial intelligence organization)
- semi-supervised classification with graph convolutional networks
- t-gcn: a temporal graph convolutional network for traffic prediction
- spatio-temporal graph convolutional networks: a deep learning framework for traffic forecasting
- a dual-stage attention-based recurrent neural network for time series prediction
- multivariate time series forecasting via attention-based encoder-decoder framework
- dropout: a simple way to prevent neural networks from overfitting
- highway networks
- deep residual learning for image recognition
- layer normalization
- adam: a method for stochastic optimization
- support vector method for function approximation, regression estimation and signal processing
- online deep learning: learning deep neural networks on the fly
- an interactive web-based dashboard to track covid-19 in real time
- predicting in-hospital mortality of icu patients: the physionet/computing in cardiology challenge 2012
- segmenting time series: a survey and novel approach
- input window size and neural network predictors
- time series prediction and neural networks
- scikit-learn: machine learning in python
- xgboost: a scalable tree boosting system
- catboost: unbiased boosting with categorical features
- finding structure in time
- data mining introduction

supplementary references:
- towards better forecasting by fusing near and distant future visions
- modeling long- and short-term temporal patterns with deep neural networks
- dsanet: dual self-attention network for multivariate time series forecasting
- segmenting time series: a survey and novel approach
- picture fuzzy time series: defining, modeling and creating a new forecasting method
- machine learning time series regressions with an application to nowcasting
- input window size and neural network predictors
- time series prediction and neural networks
- adam: a method for stochastic optimization
- support vector method for function approximation, regression estimation and signal processing
- online deep learning: learning deep neural networks on the fly

key: cord-280929-4aa20cut authors: clavijo, nathalie title: reflecting upon vulnerable and dependent bodies during the covid‐19 crisis date: 2020-05-07 journal: gend work organ doi: 10.1111/gwao.12460 sha: doc_id: 280929 cord_uid: 4aa20cut

this paper is a short narrative on how feminism helped me find a balance in my life and how this balance has been disrupted by the covid-19 crisis. i reflect on how this crisis is showing our vulnerabilities as human beings. this crisis reflects how our bodies depend on each other, moving away from the dominant patriarchal ontology that perceives bodies as being independent (butler, 2016). i reflect on how this crisis is leaving the most vulnerable in situations of survival because the infrastructures (butler, 2016) that support our bodies are not functioning. at the same time, this crisis is providing visibility to certain occupations that are dominated by issues of race, class and gender. these occupations are being at least temporarily rehabilitated to their central position in society. we are living a time where we could show, through our teaching, possible resistance to the neoliberal ontology that captured humanity. keywords: vulnerability; gender; covid-19; dominated occupations

before embracing an academic career, i worked for several years in a company where i was a management accountant. at the age of thirty, i ticked all the boxes that helped maintain masculine domination in my professional and, worse, deep in my personal life. i had two small sons, had taken the decision to change my contract to a part-time basis, and was married to a man who always believed his work was more important than mine. i was in charge of taking my sons to day-care and school in the morning and in the evening.
whenever there was a problem at school or at day-care, i was the one receiving the calls and running to find a solution. when i would have to go on business travel, i would leave my sons with the nanny, take the plane at 6 am and come back running at 8 pm to pick them up. i remember crying a lot, feeling so much guilt and so much anger. if my sons' father did not do his part of the parental work, it was simply because he was too tired and had to concentrate on his career. i felt so much pain at the time; so much guilt; so much oppression. in 2014, my manager told me at an annual evaluation: "nathalie, you are a confirmed management accountant but you will not go up the ladder because you have children. but you know it's normal, my wife is living the same situation". stronger and more beautiful than i had ever been. most of all, i felt free and had liberated myself from the guilt that comes with the different identities one can embody as a mother, a partner, a researcher, a professor… of course, nothing is easy. of course, i am still vulnerable to gendered norms, but at least feminist theories have helped me construct strategies to resist these norms (butler, 2016) . the problem is that my partner and the people who know me see me as someone extremely strong, who can cope with anything and always stays up at times when many would fall down. whenever i am facing an issue, my partner and parents just say: "don't worry, you'll find a way out". that is about the only thing they will say. they do not believe me when i tell them that i am weaker than they think and that sometimes i would need more support than they can imagine. yes, i am still seen like a sponge ready to absorb the family, economic and social
this article is protected by copyright. all rights reserved.
everything collapsed around me. everything i had taken so many years to build, to find the right personal balance, went to dust.
many parents are experiencing right now the same difficult days i am going through: organizing my work, working sometimes at 5 am because i really cannot think of any other timeslot for work, holding my zoom conferences while my sons are playing in the room next door, homeschooling a 12-year-old boy, a 9-year-old boy and a 4-year-old boy at the same time, thinking about meals, laundry, calling family to make sure everyone is fine, etc. when my mother asks me how i am doing, i tell her it is really difficult to handle everything. she simply answers: "you'll be fine, you're used to multi-tasking". you know what came back to my face like a boomerang during this crisis? guilt, that horrible guilt. although i am with my boys all day, i do not feel like i am appreciating that time with them because my mind is thinking of all the work i have to cover. i also feel guilty because i feel useless being home during the crisis we are living. i feel that i am taking dirty care upon myself again. two weeks ago, my 9-year-old hurt himself while playing with his older brother. when it happened, i was heading a zoom conference. my boy came into my office screaming. i was so angry at him because he was disturbing the conference and he "knew the rules". i muted my microphone, asked him to "shut up" because i still had one hour of zoom. when i ended my conference, i went to see him. he was still crying. his wrist was hurting. i told him he would be fine. deep inside of me, i hoped he was ok because i needed to work for at least half a day the next day and i did not want to spend time going to see a doctor or going to the hospital's emergency unit, especially during pandemic times: i had to work. well, the next day, i went with him to the emergency unit. he had cried all night; i was told his wrist was broken. guilt. i had put my work before my son's well-being. guilt. he must have been in so much pain during the last 24 hours.
when we were at the hospital, a nurse was taking care of him when my 9-year-old asked her: "have you seen your children? are they ok?" the nurse looked at me. i saw the pain in her eyes. she smiled at my son and said she would see her little girl tonight. i think i will never forget that nurse's pain in her eyes. her eyes were clearly saying: "no, i have not seen my children". i had been telling my sons how much sacrifice care workers are making for the common good. i had told them care workers are working at least 12 hours a day and many of them were not seeing their children very often. it is one thing to be conscious of such a situation and to tell your children about it because you want them to understand others' sacrifices for the common good. it is a completely different thing to see the pain in a nurse's eyes. when we left the hospital, my son said: "i don't think her daughter is going to see her". my family and i have been all together at home for 5 weeks now. i still feel frustration and guilt but i have also tried to look at what this crisis is showing us. if many of us have felt that our lives have collapsed, part of the reason is because some of the infrastructures (associations, schools, day care, stores, offices…) that support our bodies (butler, 2016) are not functioning during this crisis. we are living a situation where bodies need each other, where we depend on each other, but the access to infrastructure is reduced or impossible. this situation is a real-life case where we, as scholars, will be able to show our different audiences that the dominant patriarchal ontology that thinks the body as independent is over. during this crisis, i have thought of feminism as the act of putting myself aside for a while because bodies that are in a much more dominated position than i am are in real pain. women and men with violent partners are stuck home with very little means to escape.
children with a violent parent are left on their own. the government has offered alternatives in these difficult times to help those who suffer. for example, the government has declared in the media that because it is difficult for women to call the police when they are home with their aggressor, women can now seek help when they go to a pharmacy. it strikes me how the government keeps conceiving violence within a heterosexual matrix where it is systematically a woman who suffers from the violence of a man. this type of discourse might be blocking persons who are in a non-heterosexual relationship from seeking help. all of us are over-consuming media networks; therefore, violent partners know that pharmacies have become an alternative to calling the police. what about children? social workers are still working but are not allowed to go to people's homes. at a time when the vulnerable would need the support of the infrastructure even more, they are left on their own. this crisis has also brought to the front the tremendous inequalities that exist in terms of education. according to the government, schools have completely lost contact with about 10% of children; the most vulnerable ones. what do you need to be able to do homeschooling? a computer, a phone, the internet, a printer and paper. some of us may take this equipment for granted, but the problem is that not all families possess this equipment. what else do you need? at least one parent who will be able to help children in organizing their work and help them understand lessons. what about those parents who left school too early to be able to accomplish these tasks? what about those parents who simply do not know how to teach (and there are many!)? in normal times, the poorest children in france can eat at their school canteen for 1 euro per meal.
french school canteens offer healthy meals with a starter, main course, cheese, bread and a dessert. for these children it is sometimes the only proper meal they are able to eat during the day. what meals are they having during this crisis? the infrastructures supporting our bodies are not functioning. vulnerable bodies are trying to literally survive through the crisis. i have also been thinking about bodies who are fighting for the common good. i am thinking of dominated occupations where race, class and gender play a significant role in rendering them invisible in normal times. just like the nurse i was mentioning earlier, care workers are not home like i am; they are not even able to see their children as much as i am. in fact, in france, their children are being taken care of by other women who are working in the day cares and schools that remain open for the needs of what the government has called "essential" occupations. most of all, these occupations are exposing themselves to unimaginable risk for the common good. for the time being, these occupations are more vulnerable than i am; therefore, it is fine for me to put myself aside for a little while and reflect on what is happening. at 8 pm at night, i go to my balcony with my sons and applaud for a minute all these occupations that were invisible before the crisis. i have mentioned care workers but we also applaud cashiers, garbage collectors, truck drivers, teachers… at 8 pm at night, my neighbors and i also hit our saucepans with a spoon to protest against the last 15 years of neoliberal decisions that have weakened our health system. society should not be paying for the decisions of a so-called elite. at the same time, society is learning what « essential » occupations are. for feminist researchers, the essential role of these occupations seems obvious, but for french society, it is not always the case.
before the covid-19 crisis, these occupations were considered peripheral; now they seem to have become central. society is temporarily providing recognition to these workers, but i surely hope this recognition will be more than symbolic in the future. a debate is rising in france regarding the low levels of remuneration that these "essential" occupations have accepted for so many years. society seems to be struck by the strong decorrelation that exists between an occupation's salary and its central role for the common good. on april 13th 2020, president macron's speech on television mentioned the following: "we will also have to remember that our country, today, stands entirely on women and men whom our economies recognize and pay so poorly. social distinctions can only be based on common utility. the french wrote these words more than 200 years ago. today, we must take up the torch and give full force to this principle" (https://www.elysee.fr/emmanuel-macron/2020/04/13/adresse-aux-francais13-avril-2020). i did not know if i had to laugh or cry at this comment. i find so much hypocrisy in such words because it is the neoliberal system that president macron supports which has worsened such misrecognition. at least, french society might remember president macron's words to act in the future. feminist research has been debating this crucial point for so many years and has a lot to offer in these debates to understand the structural norms that have led to such misrecognition. feminist research can also contribute to finding ways of reconciling these occupations with their own power to act. if we want recognition to run in the long term, for those of us who teach, one possible way to start is by educating our audiences, starting with our students. i personally teach accounting in a business school, providing knowledge to future managers in big corporations.
this audience has been educated in a context where they have been taught to become entrepreneurs of the self and to constantly maximize their individual performances (brown, 2003; cooper, 2015) . this crisis illustrates how vulnerable we are; how taking care of the other is central. an entrepreneur of the self cannot survive without support, without infrastructure (butler, 2016) . an entrepreneur of the self fully depends on others. how many entrepreneurs have had to stop their activity? how many are struggling to pay their bills, to survive? before teaching what financial performance is, we should start teaching what social justice is. my colleagues and i are building up a course called "accounting for the common good" that will start during the fall semester. we are mobilizing feminist theories to educate future managers. these difficult times act as a reflection of what feminist research has been exposing for so many years. our goal is to put social justice at the heart of accounting. our goal is to teach our future managers what this crisis has taught us. this is a difficult time where we might feel scared and lost. it is also a time of hope. a time where we can show resistance to the neoliberal ontology that has captured humanity. many of our countries are paying the consequences of neoliberalism during this crisis because finance was at the heart of it all. we are vulnerable and dependent human beings. without social good, everything will collapse again and again.

bibliography:
- neo-liberalism and the end of liberal democracy
- rethinking vulnerability in resistance
- entrepreneurs of the self: the development of management control since 1976 (accounting)
- se défendre: une philosophie de la violence (zones)

key: cord-298563-346lwjr8 authors: kaplan, edward h.
title: containing 2019-ncov (wuhan) coronavirus date: 2020-03-07 journal: health care manag sci doi: 10.1007/s10729-020-09504-6 sha: doc_id: 298563 cord_uid: 346lwjr8 the novel coronavirus 2019-ncov first appeared in december 2019 in wuhan, china. while most of the initial cases were linked to the huanan seafood wholesale market, person-to-person transmission has been verified. given that a vaccine cannot be developed and deployed for at least a year, preventing further transmission relies upon standard principles of containment, two of which are the isolation of known cases and the quarantine of persons believed at high risk of exposure. this note presents probability models for assessing the effectiveness of case isolation and quarantine within a community during the initial phase of an outbreak with illustrations based on early observations from wuhan. the novel coronavirus 2019-ncov first appeared in december 2019 in wuhan, china [1] . most of the initial cases were linked to the huanan seafood wholesale market, but person-to-person transmission was established quickly while viral transmission prior to the appearance of symptoms remains controversial [2, 3] . from the same family as the sars and mers coronaviruses (10% and 35% fatality rates respectively [4, 5] ), 2019-ncov has also led to serious cases of pneumonia, albeit with a lower estimated fatality rate of 2-3% at the present time [6] . given that a vaccine cannot be developed and deployed for at least a year, preventing further transmission relies upon standard principles of containment, two of which are the isolation of known cases and the quarantine of persons believed at high risk of exposure (with the latter extended inside china to prevent travel to or from wuhan, and globally via the cancellation of air travel to and from china). 
what follows are some probability models for assessing the effectiveness of case isolation of infected individuals and quarantine of exposed individuals within a community during the initial phase of an outbreak, with illustrations based on early observations from wuhan. the good news is that in principle, case isolation alone is sufficient to end community outbreaks of 2019-ncov transmission provided that cases are detected efficiently. quarantining persons identified via tracing backwards from known cases is also beneficial, but less efficient than isolation. to begin, suppose someone has just become infected. absent intervention, assume that this infected person will transmit new infections in accord with a time-varying poisson process with intensity function λ(t) denoting the transmission rate at time t following infection. the expected total number of infections this person will transmit over all time (the reproductive number r_0) equals 
r_0 = ∫_0^∞ λ(t) dt, (1) 
and as is well known, an epidemic cannot be self-sustaining unless r_0 > 1 [7, 8]. it follows that a good way to assess isolation and quarantine is to examine their effect on r_0. but first, we take advantage of another epidemic principle, which is that early in an outbreak, the incidence of infection grows exponentially. so, suppose that the rate of new infections grows as ke^{rt}, where r is the exponential growth rate, and let i_0 denote the initial number of infections introduced at time 0. it follows that 
ke^{rt} = i_0 λ(t) + ∫_0^t ke^{ru} λ(t − u) du, (2) 
which is to say that the rate of new infections at chronological time t is the cumulation of all past infections times the chronological time t transmission rate associated with those past infections. simplifying and recognizing that e^{−rt}λ(t) goes to zero (r_0 is finite) yields the euler-lotka equation 
1 = ∫_0^∞ e^{−rt} λ(t) dt. (3) 
in the disease outbreak context, eq. 3 can be understood as the composite of all sources of current infections. 
among all persons newly infected, the fraction whose infectors were infected between t and t + Δt time units ago equals b(t)Δt, where 
b(t) = e^{−rt} λ(t) / ∫_0^∞ e^{−ru} λ(u) du = e^{−rt} λ(t) (4) 
is thus the probability density for the duration of time an infector has been infected, as sampled from the infectors of those just infected. back to wuhan, where detailed study of the first 425 confirmed 2019-ncov cases was reported in [1]. using only case data up to january 4, the exponential growth rate r was directly estimated to equal 0.1/day [1]. contact tracing from identified index cases was able to establish links to their presumed infectors. while it was not possible to pinpoint exact dates of infection, the dates at which symptoms in both infectees and (presumed) infectors occurred were determined, and the difference in these dates taken as a proxy for the elapsed time since infection of the infector (see [7] for technical issues that arise from this approach). the resulting frequency distribution was then used to estimate b(t), which was fit as a gamma distribution with mean (standard deviation) of 7.5 (3.4) days [1]. given these estimates of r and b(t), λ(t) = e^{rt} b(t), consistent with what was reported in [1] as well as other studies employing different methods [9, 10]. we can now model containment. starting with case isolation, suppose that an infected person is detected at time t_d days following infection, and is isolated for τ_i days. the effect of doing this is to erase all infections that would have been transmitted between times t_d and t_d + τ_i. following the poisson model, the expected number of transmissions blocked equals 
∫_{t_d}^{t_d+τ_i} λ(t) dt. (5) 
clearly the sooner an infected person is detected (the smaller t_d) and the longer a person is isolated (the larger τ_i), the greater the number of infections that can be prevented. suppose that newly infected persons self-recognize their infection at the time when symptoms appear. 
this optimistic scenario equates the detection time to the incubation time for 2019-ncov, and this incubation time distribution was reported to follow a lognormal distribution with a mean of 5.2 days and a 95th percentile of 12.5 days (which implies a standard deviation of 3.9 days) [1]. denoting the incubation time density by f_{t_d}(t), the expected number of transmissions blocked by case isolation of duration τ_i upon the appearance of symptoms, β_i, is given by 
β_i = ∫_0^∞ f_{t_d}(t) [ ∫_t^{t+τ_i} λ(u) du ] dt. (6) 
substituting λ(t) and f_{t_d}(t) as previously described yields β_i's of 1…; however, assuming that the time to detection is equal to the incubation time is very optimistic. indeed, the wuhan study revealed that the average time from onset of illness to a medical visit was 5.8 days [1], comparable to the incubation time. to obtain a more sobering view of isolation, suppose that an individual's time to detection is twice the incubation time. using the lognormal incubation density cited above, the new detection time distribution will also be lognormal but now with a mean (standard deviation) of 10.4 (7.8) days. applying eq. 6 yields β_i's of 0.84, 1.07 and 1.1 for isolations of 7, 14 and unlimited days. even lifetime isolation fails to reduce transmission below threshold if the time to detection takes too long. given the amount of attention generated by news coverage and public service announcements, this second scenario is overly pessimistic. the real message is the importance of rapid (self) detection. what of quarantine? screening and quarantining individuals potentially exposed elsewhere upon entry to a community (as has been the case at airports) certainly can prevent the importation of new infections and their subsequent transmission chains, though at the cost of containing uninfected persons. beyond this, quarantine (typically at home, where it is recommended that the exposed person not share immediate space, utensils, towels etc. 
with others) is meant for apparently healthy individuals discovered to be at risk of exposure via contact tracing, with the idea that should they in fact have become infected, they would become ill without transmitting the virus and then report for isolation. however, quarantining uninfected contacts offers no benefit presuming the potential infector has already been identified and isolated, so the key question is whether such tracing would reach already infected but previously unidentified contacts in time to make a meaningful reduction in disease transmission. to present an optimistic view of tracing-driven quarantine, suppose that a newly infected person (referred to as the index from the standpoint of contact tracing) is immediately identified. instantaneous interview and tracing leads to the quarantine of our index's prior contacts, one of whom happens to be the infector (who is immediately isolated upon discovery). said infector, however, has already been infectious for some time before being identified via the index case. indeed, the probability density for the duration of time the infector has already been infected is given by eq. 4. suppose that the infector is placed in quarantine for τ_q days. the expected number of transmissions that would be blocked, β_q, is given by 
β_q = ∫_0^∞ b(t) [ ∫_t^{t+τ_q} λ(u) du ] dt. (7) 
while the equations for β_i and β_q have the same structure, there is a key difference. the elapsed time from infection until an infected person enters isolation directly depends upon the time to recognize symptoms, which is related fundamentally to the incubation time distribution. the elapsed time from infection until an infected person enters quarantine/isolation via contact tracing, however, depends upon sampling from those newly infected and looking backwards to estimate the infector's elapsed duration of infection. using the previously estimated models for b(t) and λ(t), eq. 7 yields β_q's of 1.05, 1.33 and 1.36 for τ_q's of 7, 14, and unlimited days. 
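these integrals are easy to reproduce numerically. the sketch below is my own illustration, not the paper's code (the integral forms assumed for eqs. 6 and 7, the grid size and the 120-day cut-off are my choices); it uses the fitted gamma b(t) (mean 7.5 d, sd 3.4 d), r = 0.1/day, and the pessimistic lognormal detection time (mean 10.4 d, sd 7.8 d):

```python
import math

# numerical sketch of the isolation and quarantine calculations (assumed
# integral forms): beta(tau) = E[Lambda(T + tau) - Lambda(T)], where T is the
# elapsed infection time at detection -- lognormal for self-detection
# (isolation), the backward density b(t) for contact tracing (quarantine).
r, g_mean, g_sd = 0.1, 7.5, 3.4
theta = g_sd ** 2 / g_mean              # gamma scale
k = g_mean / theta                      # gamma shape

def b(t):                               # backward density: gamma(7.5, 3.4) fit
    return 0.0 if t <= 0 else t ** (k - 1) * math.exp(-t / theta) / (math.gamma(k) * theta ** k)

def lam(t):                             # transmission intensity e^{rt} b(t)
    return math.exp(r * t) * b(t)

dt, n = 0.02, 6000                      # grid out to 120 days (my choice)
cum = [0.0]                             # Lambda(t) via the trapezoid rule
for i in range(1, n + 1):
    cum.append(cum[-1] + 0.5 * (lam((i - 1) * dt) + lam(i * dt)) * dt)

def Lam(t):
    return cum[min(int(t / dt), n)]

sig2 = math.log(1 + (7.8 / 10.4) ** 2)  # lognormal detection time: mean 10.4 d,
mu = math.log(10.4) - sig2 / 2          # sd 7.8 d ("twice the incubation time")

def f_td(t):
    return 0.0 if t <= 0 else math.exp(-(math.log(t) - mu) ** 2 / (2 * sig2)) / (t * math.sqrt(2 * math.pi * sig2))

def blocked(density, tau):              # E[Lambda(T + tau) - Lambda(T)]
    return sum(density(i * dt) * (Lam(i * dt + tau) - Lam(i * dt)) * dt for i in range(1, n + 1))

print(round(Lam(120.0), 2))             # r_0: 2.26
print(round(blocked(f_td, 14), 2))      # isolation, tau_i = 14 d (text quotes 1.07)
print(round(blocked(b, 14), 2))         # quarantine, tau_q = 14 d (text quotes 1.33)
```

with these inputs the cumulative intensity recovers r_0 ≈ 2.26 (the gamma moment generating function at r), and the 14-day β values should land near the 1.07 and 1.33 figures quoted in the text.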
the 14-day quarantine proposed in [1] would reduce the effective reproductive number to 2.26 − 1.33 = 0.93, which is just under threshold. again, this is an optimistic view of contact tracing, for identification of the infector is presumed instantaneous at the index's time of infection. taking into account the detection delay in recognizing the index case would similarly delay the identification of the infector via contact tracing, reducing the number of transmissions that could be prevented as a result. there is no either/or choice between quarantine and isolation. using both leads to an infected person being detected at the minimum of the time a person self-detects due to symptoms and the time a person would be identified via contact tracing. the expected number of infections prevented then follows from eq. 6 after substituting the probability density for the minimum of the two detection times. to illustrate, assume independence between self-identification and contact-tracing detection times, that self-identification occurs at twice the incubation time, contact identification times follow b(t) as previously, and quarantine/isolation is unlimited in duration. the associated β_iq, denoting expected infections averted via isolation and quarantine, now equals 1.64, which reduces the reproductive number from 2.26 to 0.62, well below the epidemic threshold. the preceding analysis has focused on reducing the reproductive number below 1, yet doing so can still lead to a large total number of infections. for example, reducing the reproductive number to 0.9 would lead to ten times as many infections in total as the extant number at the start of containment, as total infections in such a "minor" outbreak scale as 1/(1 − r_0) [11]. the modeling above is meant to be illustrative and surely could be improved in many ways. 
appropriate characterization of underlying statistical uncertainty, better operational modeling of how actual isolation, quarantine and contact tracing operate [12] (including voluntary self-quarantine by untraced persons who might have been exposed), consideration of the costs of intervention as well as the public health benefits, and characterizing the appropriate level of resources to devote to this outbreak relative to other arguably more pressing public health concerns are all subjects deserving careful study. additional common-sense precautions such as regular handwashing, the use of facemasks, and other measures not considered here should help make such outbreaks even more manageable. one important suggestion is that people should receive flu shots, for in addition to protecting against influenza, vaccination would reduce the number of false positive 2019-ncov cases reported since fewer people would have the common symptoms of both flu and coronavirus, and if a vaccinated person did get sick, it would raise the probability that the case is coronavirus as opposed to flu and make it more likely said person would seek care [13]. there are other practical aspects to explore, including the development of a less-precise but more rapid diagnostic mechanism, determining how long one can safely delay ill patients with symptoms from coming to the hospital to help alleviate congestion, and figuring out how quickly airborne infection isolation rooms (negative pressure units) can be created by hacking the ventilation system in ordinary wards to increase isolation capacity [14]. nonetheless, the modeling results obtained are reassuring. containment via isolation and quarantine has the capacity to control a community 2019-ncov outbreak. 
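the 1/(1 − r_0) scaling for the total size of a subcritical outbreak noted above can be checked with a tiny branching-process simulation. this sketch is mine, with poisson offspring assumed (consistent with the note's poisson transmission model):

```python
import math
import random

# sketch: with mean offspring m < 1 per case, the expected total outbreak
# size per initial infection is 1/(1 - m); m = 0.9 therefore implies ten
# infections in total for every infection extant when containment starts.
random.seed(1)

def poisson(mean):                 # knuth's multiplication method (small means)
    limit, count, prod = math.exp(-mean), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return count
        count += 1

def outbreak_size(m=0.9):          # total progeny of one index case
    total = current = 1
    while current:
        current = sum(poisson(m) for _ in range(current))
        total += current
    return total

sizes = [outbreak_size() for _ in range(50000)]
print(round(sum(sizes) / len(sizes), 1))  # monte carlo mean, close to 1/(1 - 0.9) = 10
```

the simulated mean hovers around 10 (sampling noise aside), matching the analytic 1/(1 − r_0) claim.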
references: 
- early transmission dynamics in wuhan, china, of novel coronavirus-infected pneumonia 
- transmission of 2019-ncov infection from an asymptomatic contact in germany 
- study claiming new coronavirus can be transmitted by people without symptoms was flawed 
- middle east respiratory syndrome coronavirus transmission 
- a novel coronavirus outbreak of global health concern 
- estimation in emerging epidemics: biases and remedies 
- infectious diseases of humans: dynamics and control 
- nowcasting and forecasting the potential domestic and international spread of the 2019-ncov outbreak originating in wuhan, china: a modelling study 
- report 3: transmissibility of 2019-ncov. mrc centre for global infectious disease analysis, imperial college 
- modeling to inform infectious disease control 
- analyzing bioterror response logistics: the case of smallpox 
- act now to prevent an american epidemic: quarantines, flu vaccines and other steps to take before the wuhan virus becomes widespread 
- implementing a negative-pressure isolation ward for a surge in airborne infectious patients 
acknowledgments: i thank ron brookmeyer, forrest crawford, gregg gonsalves, robert heimer, albert ko, barry nalebuff, david paltiel, greg zaric and an anonymous referee for comments; any errors are my own. 
key: cord-277909-rn1dow26 authors: gunson, r.n.; collins, t.c.; carman, w.f. title: practical experience of high throughput real time pcr in the routine diagnostic virology setting date: 2006-02-07 journal: j clin virol doi: 10.1016/j.jcv.2005.12.006 sha: doc_id: 277909 cord_uid: rn1dow26 the advent of pcr has transformed the utility of the virus diagnostic laboratory. in comparison to traditional gel based pcr assays, real time pcr offers increased sensitivity and specificity in a rapid format. over the past 4 years, we have introduced a number of qualitative and quantitative real time pcr assays into our routine testing service. 
during this period, we have gained substantial experience relating to the development and implementation of real-time assays. furthermore, we have developed strategies that have allowed us to increase our sample throughput while maintaining or even reducing turn around times. the issues resulting from this experience (some of it bad) are discussed in detail with the aim of informing laboratories that are only just beginning to investigate the potential of this technology. the advent of polymerase chain reaction (pcr) has transformed the utility of the virus diagnostic laboratory. in comparison with traditional methods, pcr offers a highly sensitive and specific result within 24-48 h. the routine use of this test in diagnostic laboratories has led to many benefits including improved patient management, and increased ascertainment of previously under-diagnosed and undetectable viruses. the advent of real time pcr technologies has further improved upon these already significant benefits (arya et al., 2005; aslanzadeh, 2004; bustin and nolan, 2004; mackay, 2004; tan et al., 2004) . in comparison to traditional gel-based pcr assays, real time pcr offers increased sensitivity and specificity in a rapid format (turn around time from sample receipt to result <5 h). unlike traditional systems, which rely upon endpoint analysis, real time pcr assays visualise the reaction as it is taking place allowing quantification and reaction analysis (e.g., pcr efficiency). since real time pcr reactions are performed in a closed system (no gel analysis needed) the risk of contamination has been substantially reduced. this has also reduced the requirement for a stringent laboratory structure. the increasing number of chemistries and platforms available for real time pcr have reduced its overall cost significantly making this an increasingly attractive technique. 
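the quantification and reaction-analysis capabilities mentioned above rest on the standard curve: c_t falls linearly with log10 of the input copy number, and the slope of that line yields the amplification efficiency (e = 10^(−1/slope) − 1, with a slope near −3.32 corresponding to ~100% efficiency, i.e. perfect doubling each cycle). the sketch below uses invented c_t values purely for illustration, not data from this laboratory:

```python
import math

# sketch of quantification from a ct standard curve (illustrative numbers):
# ct is roughly linear in log10(copies); the slope gives the pcr efficiency.
standards = {1e7: 14.8, 1e6: 18.1, 1e5: 21.5, 1e4: 24.8, 1e3: 28.2, 1e2: 31.5}

xs = [math.log10(c) for c in standards]            # log10 input copies
ys = list(standards.values())                      # observed ct values
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
efficiency = 10 ** (-1 / slope) - 1                # ~0.99, i.e. ~99% efficient

unknown_ct = 26.0                                  # quantify an unknown sample
copies = 10 ** ((unknown_ct - intercept) / slope)
print(round(slope, 2), round(efficiency, 2), round(copies))
```

for these example standards the fitted slope is about −3.35 and the estimated efficiency about 99%, with the unknown quantified at a few thousand copies.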
over the past 4 years we have introduced a number of qualitative and quantitative real time pcr assays into our routine testing service. these include assays for the detection of influenza a, b and c, human metapneumovirus, respiratory syncytial viruses (rsv) a and b, rhinovirus, parainfluenza viruses 1-4, coronaviruses nl63, oc43 and 229e, chlamydia pneumoniae, mycoplasma pneumoniae, pneumocystis jiroveci, varicella zoster virus (vzv), herpes simplex virus (hsv) 1 and 2, cytomegalovirus (cmv), epstein-barr virus (ebv), hhv-6, hhv-7, norovirus, adenovirus, rotavirus, astrovirus, sapovirus, erythrovirus b19, mumps, chlamydia trachomatis, mycoplasma genitalium, neisseria gonorrhoeae and enterovirus. each year we carry out more than 84,000 pcr tests. during this period we have gained substantial experience relating to the development and implementation of real time assays. furthermore we have developed strategies that have allowed us to increase our sample throughput while maintaining or even reducing turn around times. the issues resulting from this experience (some of it bad) are discussed in detail below with the aim of informing laboratories that are only just beginning to investigate the potential of this technology. there are numerous chemistries available to carry out real time pcr. these include dual labelled probes (often known as taqman tm probes), minor groove binding (mgb) probes, molecular beacons, fluorescence resonance energy transfer (fret) probes, intercalating dyes (such as sybr green) and more recently developed fluorescent labelled primers such as sunrise tm, lux tm or scorpion primers tm. the advantages and disadvantages of each chemistry are discussed (arya et al., 2005; aslanzadeh, 2004; bustin and nolan, 2004; mackay, 2004; tan et al., 2004) (table 1). 
most of the published real time probe based pcr assays for viral diagnosis utilise either molecular beacons or dual labelled probes, although more recent publications tend to favour the use of dual labelled probes. currently all our real time pcr tests are dual labelled probe assays. however, we do have experience of both methods and have noted that there are several important differences between these two systems, which should be considered before developing or implementing a diagnostic virology test. molecular beacons are very specific (tyagi and kramer, 1996). the specificity is a direct result of their structure. in free solution a molecular beacon adopts a hairpin-loop conformation in which the reporter fluorescence is quenched by its proximity to the quencher molecule (fig. 1). this is a very stable state, and a molecular beacon will only bind to a target sequence, become linear and fluoresce if it is highly complementary. any nucleotide differences between the beacon and the target sequence will greatly reduce the target binding efficiency of the probe. as a result molecular beacons have an increased propensity for false negative results. we encountered this problem during the implementation of a previously published molecular beacon based test for the detection of parainfluenza viruses. during the initial assessment, this assay detected all culture/direct immunofluorescence parainfluenza 3 positive samples detected between 2001 and 2002. however, all parainfluenza 3 positive samples detected by dif or culture in 2003 were negative when tested with this assay. to assess whether the primers were amplifying the parainfluenza 3 viral rna, sybr green was added to the pcr reaction in place of the molecular beacon. the formation of pcr product was observed. using melt curve analysis, identical melting peaks were observed in all parainfluenza 3 samples and controls (fig. 2). 
running the pcr products on an agarose gel and observing a band of the expected size confirmed the successful amplification of parainfluenza 3 rna by the primers. based on these results we deduced that the molecular beacon was no longer complementary to the amplified target sequence. consequently, following analysis of more recently available sequences in the database, a new molecular beacon was designed, which detected all 2003 parainfluenza 3 samples. 
fig. 2. assessment of whether the real time molecular beacon primers were amplifying parainfluenza 3 viral rna. in this reaction, sybr green was added to the pcr reaction in place of the molecular beacon (all samples negative by the molecular beacon assay). the formation of pcr product was observed. using melt curve analysis, identical melting peaks were observed in all parainfluenza 3 samples and controls (http://probes.invitrogen.com/handbook/figures/0710.html). 
unlike molecular beacons, dual labelled probes are in a permanent linear conformation (lee et al., 1993) (fig. 3). like primers, they can tolerate a small number of mismatches between the probe and target and still bind to the target. consequently, dual labelled probes are less likely to result in false negative reactions and may, in comparison with molecular beacons, be of greater use in viral diagnosis where occasional changes in even the most conserved target sequence may be expected to occur (although it should be noted that mismatches can also lead to false negative reactions with dual labelled probes). however, either method will be useful if targeting a highly conserved region. 
fig. 3. dual labelled probes (also known as taqman tm probes) are oligonucleotides that contain a fluorescent dye on the 5' base and a quenching dye located on the 3' base. when excited, the fluorescent dye transfers energy to the nearby quenching dye molecule rather than fluorescing, resulting in a non-fluorescent probe. dual labelled probes are designed to hybridize to an internal region of a pcr product. during pcr, when the polymerase replicates a template on which the probe is bound, the 5'-exonuclease activity of the polymerase cleaves the probe. this separates the fluorescent and quenching dyes and fret no longer occurs, allowing detection of the signal from the reporter dye. fluorescence increases in each cycle, proportional to the rate of probe cleavage. 
the second difference between molecular beacons and dual labelled probe chemistries is related to the normalised change in fluorescence (Δr_n) produced during successful real time pcr. we have found in most, but not all cases, that dual labelled probes produce a greater fluorescent change than molecular beacons (fig. 4). a larger Δr_n allows easier interpretation of results, as low positive results may be more easily differentiated from the variable background. dual labelled probes provide a greater fluorescent change as the reporter dye is irreversibly released from the quencher during the extension stage of each pcr cycle. consequently there is a cumulative and permanent record of successful amplification, which is added to during subsequent pcr cycles. molecular beacons are not destroyed at the end of each cycle, but return to free solution during the denaturation phase and revert back to their hairpin-loop structure. consequently there is no accumulation of free reporter dye, and the extra fluorescence produced after each cycle is less than that of dual labelled probes. since real time pcr is a relatively new technique, published assays may not be available for all viral pathogens. as a result many laboratories may wish to develop novel in-house real time pcr assays. the initial stages in developing a real time pcr assay are the same as those required for designing traditional gel based pcr tests. 
the first step is to identify a conserved region of the viral (or other pathogen) genome in which to design the assay. a literature review will often reveal which genes are conserved, and most often these will be genes encoding non-immunogenic proteins. once a gene is identified, a blast search (http://www.ncbi.nlm.nih.gov/blast/) is performed to locate the most conserved regions within this gene. as real time amplicons are short and contain a third oligonucleotide (i.e., the probe), the ideal region to design an assay would be 100-150 nucleotides long with 3 regions of 30 bases devoid of all base degeneracies. it is best to find a conserved region of 400-500 bases to allow the software to identify a number of potential assays. several software programs are available to design real time assays, and often software is provided by the instrument supplier. beacon designer (premier biosoft) can be used to design either molecular beacon assays or dual labelled probe based assays, and has additional functionality such as blast and secondary structure searching. 
fig. 4. comparison of the Δr_n produced when using dual labelled probes (a) vs. molecular beacons (b). 
table 2. main factors to consider when developing a dual labelled probe pcr assay: 
- identify a conserved region of the viral (or other pathogen) genome 
- identify a region within this area of ∼400-500 bases in length 
- check that the probe sequence contains more c residues than g residues 
- ensure that the probe does not begin with a g 
- the optimal primer t_m values are 58-60 °c; the optimal probe t_m should be ∼10 °c higher 
- the amplicon should not exceed 150 bp in length 
- primers should not contain more than 2 of 5 g or c residues at the 3' end 
- check the amplicon for secondary structure, and for specificity 
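the table 2 rules also lend themselves to a quick automated pre-screen before sequences go into a design package. the toy checker below is my own sketch: the t_m formula is the basic gc-count approximation, and real software such as beacon designer or primer express applies far more sophisticated thermodynamic models.

```python
# toy screen for the dual labelled probe design rules listed above
# (illustrative only; not a substitute for dedicated design software).
def gc_count(seq):
    return sum(base in "gc" for base in seq.lower())

def tm_rough(seq):
    # basic gc-count approximation for oligos longer than ~13 nt
    return 64.9 + 41 * (gc_count(seq) - 16.4) / len(seq)

def check_probe(seq):
    s, issues = seq.lower(), []
    if s.startswith("g"):
        issues.append("probe begins with a g")
    if s.count("c") <= s.count("g"):
        issues.append("probe should contain more c than g residues")
    return issues

def check_primer(seq):
    s, issues = seq.lower(), []
    if gc_count(s[-5:]) > 2:
        issues.append("more than 2 g/c among the last 5 bases at the 3' end")
    if not 58 <= tm_rough(s) <= 60:
        issues.append("rough tm estimate outside 58-60 c")
    return issues

def check_amplicon(length):
    return ["amplicon exceeds 150 bp"] if length > 150 else []

print(check_probe("ggtcactgaatgtg"))  # flags both probe rules for this sequence
```

the sequences used above are made up for demonstration; in use, every candidate primer/probe/amplicon from the design software would be passed through these checks.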
primer express (applied biosystems) is another useful tool for designing taqman based assays, and is the only software currently available for designing assays based on applied biosystems (abi) mgb probes. once the software has suggested a primer and probe set, it is important to ensure that they meet the criteria (table 2). assuming the primers and probe meet these criteria, it is advisable to check the amplicon for secondary structure, and for specificity. secondary structure prediction software is available on the internet; for example, michael zuker's m-fold server (http://www.bioinfo.rpi.edu/∼zukerm/rna/) is particularly useful. a highly structured amplicon (higher −Δg) may reduce the efficiency of reverse transcription or primer annealing (fig. 5). this may reduce the overall sensitivity of the assay. 
fig. 5. structure within pcr amplicons may affect the sensitivity of an assay. the respiratory syncytial virus (rsv) a detection limit is 10^2 copies/reaction, while rsv b, which is more structured, has a detection limit of 10^4-10^5 copies per reaction. 
the final stage of the design process is to check the amplicon for specificity using the blast algorithm (http://www.ncbi.nlm.nih.gov/blast/). the assay should be specific for the sequence/organism of interest, and should not detect other sequences. non-specific matches may be picked up, but closer analysis of the primer and probe binding sites often confirms that these sequences will not be amplified or detected due to multiple base changes. when performing gel based pcr it is essential to fully optimise primer concentrations to achieve the best sensitivity of the assay and best end-point signal (gunson et al., 2003). in real time pcr, the signal is detected early in the amplification process, and therefore the end-point variation seen in gel-based assays does not affect the result. also, careful design of the assay can reduce primer-dimer formation 
and increase the efficiency of the specific amplification reaction. consequently, many manufacturers of real time pcr equipment and oligonucleotide primers and probes no longer recommend optimising primer and probe concentrations for real time taqman assays. despite this we still perform an initial optimisation of both primer and probe concentrations to ensure we are running our real time pcr assays at their most sensitive and efficient. although for the majority of our assays the optimal concentrations are 1000:1000:300 nm (forward:reverse:probe), we have observed on several occasions that the optimal primer and probe concentrations were different to the values recommended. our method for primer and probe optimisation is available online (www.clinical-virology.org). however, other methods will also be available. optimisation of a real time pcr requires positive control material. where positive control is not available (examples being a virus, which cannot be cultured or a highly pathogenic virus such as h5 influenza or sars coronavirus), dna or rna oligonucleotide targets may be ordered. these are also useful as alternatives to plasmids as standards in quantitative assays. it should be noted that these oligonucleotide controls must be ordered from a separate supplier to prevent contamination of the primer-probe set, and should be diluted in a separate laboratory prior to use as they may contain up to 10 17 target copies per ml, and are therefore a considerable source of potential contamination. we have observed contamination in erythrovirus b19 primers purchased several months after a full length oligonucleotide control was ordered from the same supplier (fig. 6 ). once the assay is optimised, it is essential to check the sensitivity and specificity of a new pcr assay by using a selection of sample 'panels'. there is much debate about what is an acceptable validation process. 
these should minimally include clinical samples known to be positive by the current standard assay and should consist of the sample types commonly submitted to be examined for the virus in question. clinical samples tested negative by the previous method should also be examined to determine if the new assay is more sensitive than the current test, and samples known to be positive for other agents should be tested to confirm assay specificity. 
fig. 6. contamination of primers and probes with assay target produced at the same facility: 
- label 1: supplier a salt free negative control, mean c_t value 21.08 
- label 2: supplier a salt free positive control (−7), mean c_t value 21.10 
- label 3: supplier a hplc negative control, mean c_t value 33.28 
- label 4: supplier a hplc positive control (−7), mean c_t value 30.68 
- label 5: supplier b negative control, mean c_t value 40.00 
- label 6: supplier b positive control (−7), mean c_t value 29.93 
a full length dna oligonucleotide representing the amplicon of a b19 real time pcr assay was synthesised by supplier a. during a later investigation into assay contamination following a reagent change, primers and probes were again purchased from supplier a (salt free and hplc purified), and from an alternative supplier b. the reagents purchased from supplier b (5 and 6) were clean, whilst those from supplier a (1 + 2) were contaminated with the previously synthesised positive control, even after hplc purification (3 + 4). 
serial dilution series of known positive samples may also be prepared, and tested in parallel in the new and previous assay systems to determine which assay is more sensitive. ideally these dilution panels should represent all subgroups of the target virus to ensure the test is sensitive for all types. 
a new test must be at least as sensitive as the assay in current use, and should ideally be able to detect a wider range of virus subtypes/variants. an additional method to validate a new assay is to test the assay using samples from an external quality assurance program. panels may be obtained from various sources, including national external quality assessment service (neqas) and quality control for molecular diagnostics (qcmd), and the expected results may be compared with those obtained from the old and new assays in parallel. the use of such panels also allows the comparison of assays currently in use by different laboratories. when implementing a newly designed or previously published assay a number of changes can be made in order to reduce the turn around time of the assay and increase laboratory throughput. multiplex real time pcr assays allow the detection of multiple pathogens within a single tube. the utilisation of such assays reduces overall testing costs and turn around times, enabling a high throughput. there are a number of multiplex real time pcr assays described in the literature (draganov and kulvachev, 2004; gunson et al., 2005; hindiyeh et al., 2001; richards et al., 2004; templeton et al., 2004) . we recently described 4 triplex assays designed to detect 12 respiratory viral pathogens. designing a multiplex real time pcr is a complicated process often requiring a great deal of trial and error. below are outlined some general criteria that may prove useful when designing such assays. in order to design appropriate primers and probes users should follow the development protocols outlined above. however, care should be taken to ensure that there is no primer or probe interaction that may reduce the sensitivity or efficiency of the pcr reaction. most primer design software will allow primer-probe interactions to be examined. 
in order to optimise the multiplex assay, each individual pcr should be optimised separately before being assessed in combination (see above section for details). further experiments should include assessing the sensitivity of the multiplex assay for the simultaneous detection of mixed infections (real or spiked) and low copy targets in high copy backgrounds. ideally, no loss in sensitivity should be observed when additional primers are added. however, if the multiplex assay is less sensitive, altering the ratio of primer/probe concentrations may prove useful. alternatively, changing the concentration of pcr reagents (enzyme, mg2+, dntps, etc.) may also be beneficial. some manufacturers are now producing real time reaction mixes specifically designed for use with multiplex assays, and provide guidelines on the optimal primer and probe concentrations to use. however, if all these factors fail to improve the sensitivity of the multiplex assay then some or all of the primers and probes may have to be redesigned. the number of targets detected in one assay is limited by the number of detection channels available on the real time platform and the number of fluorescent-labelled dyes available. newer machines tend to have five channels. although there are many fluorescent dyes available, many of the excitation/emission spectra overlap and thus only certain combinations can be used. at present we are using fam, vic, and cy5 detectors as these are optimal for the filter set utilised in the abi 7500 (please note that this may differ when using other platforms). syndrome based testing policies are ideal for rapid, high throughput testing. in our laboratory we offer a number of such "menus", which negate the need for clinical coding and allow samples to be tested immediately upon receipt (table 3).
for example, all csf samples from patients with neurological illnesses such as encephalitis or meningitis are tested for enterovirus, hsv (1 and 2), vzv, ebv, cmv and hhv-6 regardless of patient or clinical details. similar testing protocols are in place for urethritis, gastroenteritis, respiratory illness and eye infections. however, although such policies aid high throughput and reduce turn around times (from sample receipt until the result is ready), it should be noted that they may be more expensive and will occasionally produce results that are difficult to interpret, e.g. herpes viruses in respiratory samples (see below). automation of the extraction and liquid handling process has led to significant improvements in turn around times and allows high throughput with a reduced risk of user error. many manufacturers now supply automated equipment for the extraction of nucleic acid from diagnostic samples (table 4). some manufacturers provide open platforms, which can be used with other suppliers' kits and reagents, while others provide complete extraction solutions. although universal extraction kits (for dna and rna pathogens and most specimen types) are available, it should also be noted that different kits can be used for particular sample types and pathogens (e.g., rna or dna) and may be more sensitive for a particular application. although automated extraction has many advantages, laboratories should also consider complementing this service with a manual extraction system. this can be used for testing emergency samples that arrive in the laboratory after an automated extraction has begun, or for samples requiring special processing not suited for automation, e.g. tissue. many suppliers also supply automated liquid handling equipment, which can facilitate the set up of large numbers of pcr reactions.
traditionally most published and in-house developed real time pcr methods consist of the following standard parameters: a taq dna polymerase activation step (usually 95 °c for 2-15 min depending upon pcr kit manufacturer) followed by 40-50 cycles of 95 °c denaturation for 15-30 s and an annealing/extension cycle of 60 °c for 60 s. if an rna virus is to be detected, an additional 30 min reverse transcription step is required before the taq dna polymerase activation. overall the reaction run time for a real time pcr is between 80 and 100 min. we have repeatedly shown, using dilution series of a number of dna and rna viral pathogens, that reducing the duration of the reverse transcription, denaturation and annealing/extension steps by 50% can reduce the reaction run time of the assay significantly without any concurrent loss in sensitivity (table 5). overall our reaction run time has been reduced to approximately 60 min (70 min for rt-pcr), freeing up pcr machines for further tests and allowing more testing within the working day. most dual labelled probe real time pcr assays are designed to utilise the same pcr parameters (i.e., a denaturation step of 95 °c for 30 s and an annealing and extension step of 60 °c for 60 s). theoretically, multiple different real time pcr assays can be carried out at the same time on the same plate. we have also shown that, where dna and rna reagents are purchased from the same supplier, and therefore have identical taq activation requirements, dna assays do not suffer any loss in performance when run through rt-pcr cycling conditions. this allows laboratories greater flexibility and provides a rapid service. pre-prepared, frozen real time pcr reagents are user friendly and lead to reduced trt and improved quality control (qc) when compared to the preparation of pcr mixes from separate reagents.
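the run-time arithmetic above can be sketched with a small helper. the 10 min activation time below is an illustrative value within the 2-15 min range quoted (the exact figure depends on the kit), and the halved cycle durations mirror the 50% reduction described in the text:

```python
def run_time_minutes(cycles, denature_s, anneal_extend_s,
                     activation_min=10.0, rt_min=0.0):
    """Total run time for a two-step real time (RT-)PCR protocol:
    optional reverse transcription, Taq activation, then the cycling."""
    return rt_min + activation_min + cycles * (denature_s + anneal_extend_s) / 60.0

standard = run_time_minutes(45, 30, 60)                  # classic parameters
halved = run_time_minutes(45, 15, 30)                    # steps cut by 50%
standard_rt = run_time_minutes(45, 30, 60, rt_min=30.0)  # with RT step
halved_rt = run_time_minutes(45, 15, 30, rt_min=15.0)    # RT step also halved
```

with these illustrative parameters the standard protocol takes 77.5 min and the halved protocol 43.75 min, consistent with the 80-100 min versus roughly 60 min (70 min for rt-pcr) figures reported.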
we have assessed two different methods of pre-preparing real time pcr reagents: frozen aliquots of pooled primers and probes, and frozen aliquots containing all real time pcr reagents. both systems have been assessed over a relatively short time period (up to a maximum of 10 weeks, which corresponds to the maximum period of time a pool would last before running out); ideally they would have been assessed over a longer period. we find that the pooled primer and probe approach best suits seasonal assays such as those for respiratory pathogens, whereas the latter approach is more suited to assays which are performed regularly throughout the year on a standard number of samples. the advantages and disadvantages are listed in table 6. we have introduced pooled primers and probes for the majority of our routine dna and rna tests. this has proved especially useful for our high throughput assays such as the 'respiratory screen', which consists of five triplex real time rt-pcr assays. for each multiplex assay, the operator needs only to mix three tubes containing pre-aliquoted reagents: an aliquot of mastermix containing rox reference dye, one containing enzyme mix (rt + taq), and an aliquot of primer-probe pool (containing three sets of primers and probes, and sufficient water). in this way, mastermix can be prepared rapidly. the reagents have been carefully quality controlled and the possibility of pipetting or calculation errors at the time of preparation is reduced. the production of a large number of aliquots at the same time (sufficient for approximately 3000 tests) also facilitates inter-run reproducibility and assists in maintaining the quality of the results. while some mix is unavoidably wasted, the time saved and the reduced number of failed runs compensate for this cost, and during the summer months when sample numbers are much reduced, smaller aliquots can be prepared.
table 7 shows the ct values obtained from the coronavirus triplex assay using pooled primers and probes stored for up to 6 weeks, demonstrating the stability of the reagents when stored at −20 °c. we have now been using the same lot number of pooled primer-probe for the coronavirus assay for in excess of 10 weeks without loss of performance. with this system and pooled controls (see below) in use, we are now able to provide an efficient and reliable service.

3.6.2. frozen pools of primers, probes, mastermix and enzyme

the use of aliquots of frozen mastermix (containing all pcr reagents except template) is an alternative to the frozen primer and probe aliquots described above. the laboratory user need only remove the desired number of aliquots (or plates if frozen in this format), defrost and then add the template to be tested. frozen aliquots are easier to use than the pooled primer and probe method, facilitate rigorous quality control and reduce the overall turn around times. however, they are less flexible than the primer-probe aliquot system and wastage will be more expensive as it includes enzyme. furthermore, any mistakes in the making up of the aliquots will result in the loss of primers and probe and expensive mastermix. we have shown, using positive controls, that both rna and dna mastermix from a number of companies (applied biosystems, invitrogen, and qiagen) can be frozen for at least 1 month with no loss of sensitivity. positive and negative controls are an essential part of any diagnostic pcr service. until recently, we, and many other laboratories, utilised two dilutions of a positive control for each virus to be tested: the end-point of a dilution series of cultured virus tested in the relevant assay (acting as a sensitivity control) and the dilution 1 log less dilute. as a result, for each robot extraction run of 96 wells, a substantial number of wells were required for the positive controls alone.
the inclusion of negative extraction controls further reduces the possible number of extractions available for samples. the use of numerous controls increases the cost per sample and the turnaround times of the service. pooled controls are a significant improvement on the previous method and we now use separate pools containing 12 respiratory viruses, and 6 gastrointestinal pathogens. in order to develop a pooled respiratory or gastrointestinal control, each virus culture or stool extract was serially diluted and an end point established. for the respiratory virus control a 'high' positive control pool was prepared by adding an equal volume of the dilution 2 logs above the end point for each of the 12 culture fluids. a further 1 in 10 dilution of this 'high' positive control was prepared to produce the 'low' positive control. we now include just two respiratory controls on our robot extractions, freeing up an additional 22 wells for other samples. the preparation of a large volume of control at the one time allowed better qc and reproducibility to be achieved. aliquots of control are stored at −70 °c and have been found to be stable for up to 3 months so far.

table 7. ct of positive control on frozen primer/probe pools at 4, 6 and 8 weeks.

we have previously experienced lot-to-lot variation of both primers and probes resulting in reductions in test sensitivity. when a new batch of reagents is purchased we now run a performance test (using the new reagents at the same concentrations previously determined as optimal) by testing the 'old' and 'new' primer-probe sets in parallel with the same positive control on the same pcr run. if the ct and rn values observed are comparable (newly prepared reagents must produce ct values falling within two standard deviations of the mean value determined for the reagents previously in use when testing identical positive controls), the new reagents are released for routine use.
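the lot-acceptance rule described above (release only if the new lot's ct falls within two standard deviations of the mean ct of the lot currently in use, on identical positive controls) can be sketched directly. the ct values below are hypothetical:

```python
from statistics import mean, stdev

def release_new_lot(current_lot_cts, new_lot_ct):
    """Apply the acceptance rule: a new primer/probe lot is released
    only if its ct on the identical positive control falls within two
    standard deviations of the mean ct of the lot currently in use."""
    m = mean(current_lot_cts)
    s = stdev(current_lot_cts)
    return abs(new_lot_ct - m) <= 2 * s

current_lot = [24.9, 25.1, 25.0, 24.8, 25.2]  # hypothetical historical cts
release_new_lot(current_lot, 25.1)  # comparable performance -> release
release_new_lot(current_lot, 27.0)  # apparent loss of sensitivity -> re-optimise
```

note that `stdev` here is the sample standard deviation; a laboratory tracking a long history of runs might equally use fixed, pre-computed limits.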
if the assay is less sensitive than the previous assay then primer and/or probe optimisation should take place. ideally this should be done several weeks before the next batch is required for routine use, as new probes or primers may need to be re-ordered. our experience over the winter of 2004-2005 is that re-optimisation has not been required for any of the respiratory assays. for validation of each real time pcr run we recommend the following. the ct of the positive control should be documented with each run and compared to the value derived from previous runs. this should help identify any loss in sensitivity due to user error or degradation of pcr reagents. if the ct falls significantly outside the expected value (outwith two standard deviations of the mean value determined by previous runs when testing identical positive controls), the run should be repeated. if the ct remains out of range or deteriorates further, new controls and pcr reagents may be required. in addition, the overall fluorescence change should also be monitored with each run. reductions in fluorescence may cause interpretation difficulties and may also highlight a problem in the pcr reaction. as with changes in ct, large reductions in fluorescence may result in the need to repeat the pcr or introduce a new batch of controls and pcr reagents. ideally real time pcr tests should include an internal control in order to ensure confidence in negative results. there are many internal control pcr tests available targeting animal viruses (added to the sample before extraction), synthetic controls (added to the mastermix), and human genes. however, the inclusion of such controls can be expensive as they may have to be carried out separately from the diagnostic assay. as a result many laboratories (including ourselves) do not use such controls on all tests.
any laboratory performing real time pcr assays can perform quantitative assays with the addition of suitable standard quantitative controls to the assay, although a uniform sample type is required to obtain meaningful results (e.g., blood, urine). attempting to quantify virus in nonuniform sample types (such as respiratory samples or stool) is not recommended without thorough assessment of sampling reproducibility. in common with many laboratories, we prepare our quantitative standards (oligos or plasmids) in bulk, test these for acceptable linearity and slope (−3.33) for a good 10-fold dilution series (we allow a range of −3.18 to −3.45, which equates to a variation of ±5% in the efficiency of the reaction), and then aliquot this into volumes sufficient to last 1 week at 4 °c. aliquots are stored frozen at −20 °c until required. it is essential to track the ct values of the controls to check that the assay is performing satisfactorily, and to enable a smooth transition to a new control set when required (fig. 7). we record our ct values in the form of a shewhart control chart (davies, 2003). newly prepared standards (produced annually) must have ct values falling within two standard deviations of the mean value determined for the standards previously in use. a second issue with quantitative assays that do not use extracted material as quantitative controls is that these assays are sensitive to changes in extraction methodology or efficiency. we have recently moved to a more efficient extraction kit (qiaamp virus robot kit), but as our standards are plasmid or cellular dna based and are not extracted alongside the specimens, we are now reporting higher viral loads for the same sample than with previous extraction procedures. this observed change in viral loads is only a problem during the crossover period from one extraction procedure to another, as subsequent samples will be analysed in the light of the new baseline level.
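the slope and efficiency criteria above follow from the standard-curve relation e = 10^(−1/slope) − 1: a slope of −3.33 corresponds to roughly 100% efficiency (perfect doubling per cycle), and slopes of −3.18 to −3.45 bound efficiency at about ±5% of that. a minimal sketch, using an idealised dilution series as input:

```python
def standard_curve_slope(log10_copies, cts):
    """Least-squares slope of ct against log10(input copies)."""
    n = len(cts)
    mx = sum(log10_copies) / n
    my = sum(cts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(log10_copies, cts))
    den = sum((x - mx) ** 2 for x in log10_copies)
    return num / den

def efficiency(slope):
    """Amplification efficiency implied by a standard-curve slope:
    E = 10**(-1/slope) - 1, so slope -3.33 gives ~100% efficiency."""
    return 10 ** (-1.0 / slope) - 1.0

# an idealised 10-fold dilution series: ct rises by 3.33 per log dilution
dilution_logs = [7, 6, 5, 4, 3]
cts = [15.00, 18.33, 21.66, 24.99, 28.32]
slope = standard_curve_slope(dilution_logs, cts)
```

with this input the fitted slope is −3.33; efficiency(−3.45) is about 0.95 and efficiency(−3.18) about 1.06, which is where the quoted ±5% efficiency window comes from.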
to ensure intra-run extraction consistency, a positive or an internal control (of known quantity) should be extracted and run at the same time as the samples to be tested. this control should be monitored in the same way as outlined above (see routine validation of each real time pcr run). once an assay (or a number of assays) has been introduced into routine service it is important to re-assess the sensitivity of the assay in relation to currently circulating viruses. this can be carried out using positive samples detected by an alternative test or by comparing primers and probe to new sequences stored in surveillance databases. although most assays target a conserved region of the viral genome, small changes in the target can result in false negative reactions due to primer and/or probe mismatches. if a loss in sensitivity occurs, primer or probe sequences may need to be updated or a new assay may have to be developed.

fig. 7. application of a shewhart control chart to track potential changes in assay performance. the mean ct values obtained for the 1 × 10^7 copies per ml standard were plotted over time. the average of these ct values was calculated and plotted (red line) for each data set, along with two standard deviations above (pink line) and below (blue line) the average value. two standard deviations are generally accepted as the warning level in such analyses. the first 'jump' (a) represents a change in the set of standards used, and while this is not ideal, results in a much more reproducible assay. the second jump (b) is caused by a change in the primer-probe pool in use, and shows a significant change in the sensitivity of the assay. as a result of this analysis another batch of primer-probe pool was prepared and the results obtained returned to the acceptable range. for interpretation of the references to colour in this figure legend, the reader is referred to the web version of the article.
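the shewhart-style tracking described above (mean of earlier control cts with warning limits at two standard deviations) can be sketched as follows; the baseline ct values are hypothetical:

```python
from statistics import mean, stdev

def shewhart_limits(baseline_cts, n_sd=2.0):
    """Lower limit, centre line, and upper limit for a control chart,
    using mean +/- n_sd standard deviations of earlier control cts."""
    m = mean(baseline_cts)
    s = stdev(baseline_cts)
    return m - n_sd * s, m, m + n_sd * s

def out_of_control(baseline_cts, new_cts):
    """Flag any new control ct falling outside the warning limits."""
    lower, _, upper = shewhart_limits(baseline_cts)
    return [not (lower <= ct <= upper) for ct in new_cts]

baseline = [20.0, 20.2, 19.8, 20.1, 19.9]  # hypothetical control cts
flags = out_of_control(baseline, [20.1, 21.0])
```

in this toy run the second new ct (21.0) would be flagged, corresponding to the kind of 'jump' the figure describes when a standard set or primer-probe pool changes.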
interpreting real time pcr results is a relatively straightforward process. in a fully optimised assay all positive results should show increases in fluorescence in a characteristic exponential curve. however, there are still pitfalls that we feel users should be made aware of when interpreting data. occasionally samples may show "signal drift" (traces that increase in fluorescence as the pcr progresses but are not exponential) (fig. 8). signal drift can be produced for a number of reasons. true positive samples may show signal drift because of suboptimal pcr conditions, inhibition or primer mismatches. occasionally negative samples may also show signal drift. this may be due to probe breakdown resulting in a fluorescence increase. signal drift often occurs towards the end of the pcr reaction. some platforms allow multicomponent analysis of weak positive traces. this allows users to assess the changes in each fluorescent label in the reaction. genuine positive traces will show an exponential increase in the fluorescent signal, whereas signal drift is often due to a change in the normalisation dye (e.g., rox). we currently repeat all positive samples with ct values greater than 35 cycles as we feel these may be either low copy number positive samples or non-specific reactions. some 96 well real time plates require sealing with optically clear plate seals before pcr can take place. on occasion these may not seal properly and pcr reagents evaporate during cycling. as a result a curve may be produced mimicking a positive pcr reaction. the correct placing of the threshold line is essential to allow accurate ct measurement. some of the computer software available with current real time pcr formats can automatically place the threshold line during result analysis. any sample with fluorescence above this line will be regarded as positive by the computer.
always check the automatic placement of the threshold line, as we have found that the computer will sometimes place it wrongly, resulting in both false positive and false negative results (fig. 9). an alternative is to use a fixed threshold line. the use of such a system will ensure the real time assay is directly comparable to previous runs, although this should not preclude careful analysis of the data. the increased sensitivity of real time pcr means that, like nested pcr, occasionally positive results will be obtained that are not in keeping with accepted knowledge. for example, herpes viruses in throat swabs should be interpreted with care. we often detect low positive ebv, hhv-6, hsv or cmv in throat swabs in cases of respiratory infection. whether these are the cause of disease or unrelated re-activations is unclear. although these findings may be irrelevant to the clinical illness it is important that these results are not ignored, for as we gain more experience with these sensitive assays we may identify new, previously unrecognised syndromes attributed to particular viral pathogens.

fig. 8. example of negative samples showing signal drift. the two samples shown in blue show an increase in fluorescence when examined using the quantification option (shown above left). analysis of the raw cycling data (shown above right) shows that there is no increase in fluorescence usually associated with a positive sample.

there is no doubt that in the coming years an increasing number of virology laboratories will utilise real time pcr assays. as a result virology laboratories will be able to offer more tests and process more samples while reducing turn around times. this can be highlighted by our winter respiratory surveillance service (servis), which currently tests for 12 pathogens.

fig. 10. the turn around time of respiratory samples, 2002-2003 to 2004-2005.
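the repeat-testing rules discussed above (non-exponential drift traces and positives with ct greater than 35 cycles are repeated before reporting) can be expressed as a small triage helper; the 35-cycle cutoff is the local choice described in the text, not a universal standard:

```python
def triage_trace(ct, exponential, repeat_cutoff=35.0):
    """Triage a real time PCR trace: only exponential curves with an
    early enough ct are reported positive outright; late or
    non-exponential signals are repeated before reporting."""
    if not exponential:
        return "repeat"      # possible signal drift or evaporation artefact
    if ct is None:
        return "negative"    # no amplification signal detected
    return "positive" if ct <= repeat_cutoff else "repeat"
```

for example, an exponential trace at ct 28 is reported positive, while a trace at ct 36 or a drifting, non-exponential trace is queued for repeat testing.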
during the 2004-2005 season 509 samples were tested with 80% of results reported within 3 days of the samples arriving in the laboratory (fig. 10). for 2002-2003 when gel based pcr was used to detect influenza a and b, rsv and picornavirus, 554 samples were tested in total. only 3.6% of results were available in 3 days with most results returned to users within 14 days. with a slightly extended working day, real time pcr results ought to be reported within 36 h of receipt. the routine use of real time pcr will have several benefits. first it will aid patient management (prognosis, treatment guidance and infection control) and may assist in the development of new antiviral therapies. real time pcr will also improve the sensitivity of the surveillance of viral pathogens, increasing our understanding of these important infections, providing accurate assessments of the morbidity and economic cost of disease and facilitating the implementation of public health prevention measures.
references:
- automated extraction and liquid handling equipment
- continual assessment of the sensitivity of real time pcr primers and probe
- basic principles of real time quantitative pcr
- preventing pcr amplification carryover contamination in a clinical laboratory
- pitfalls of quantitative real time reverse-transcription polymerase chain reaction
- a coloured version of the j chart or the amc-d j-chart
- molecular techniques for detection, identification and analysis of human papillomaviruses (hpvs)
- my favourite primers: real time rt-pcr detection of 12 respiratory viral infections in four triplex reactions
- optimisation of pcr reactions using primer chessboarding
- evaluation of a multiplex real time reverse transcriptase pcr assay for detection and differentiation of influenza viruses a and b during the 2001-2002 influenza season in israel
- allelic discrimination by nick-translation pcr with fluorogenic probes
- real time pcr in the microbiology laboratory
- genogroup i and ii noroviruses detected in stool samples by real time reverse transcription-pcr using highly degenerate universal primers
- diagnostic value of real time capillary thermal cycler in virus detection
- rapid and sensitive method using multiplex real time pcr for diagnosis of infections by influenza a and influenza b viruses, respiratory syncytial virus, and parainfluenza viruses 1, 2, 3 and 4
- molecular beacons: probes that fluoresce upon hybridization

title: nowcasting of covid-19 confirmed cases: foundations, trends, and challenges
authors: chakraborty, tanujit; ghosh, indrajit; mahajan, tirna; arora, tejasvi
date: 2020-10-10

the coronavirus disease 2019 (covid-19) has become a public health emergency of international concern affecting more than 200 countries and territories worldwide.
as of september 30, 2020, it has caused a pandemic outbreak with more than 33 million confirmed infections and more than 1 million reported deaths worldwide. several statistical, machine learning, and hybrid models have previously tried to forecast covid-19 confirmed cases for profoundly affected countries. due to extreme uncertainty and nonstationarity in the time series data, forecasting of covid-19 confirmed cases has become a very challenging job. for univariate time series forecasting, there are various statistical and machine learning models available in the literature. but epidemic forecasting has a dubious track record. its failures became more prominent due to insufficient data input, flaws in modeling assumptions, high sensitivity of estimates, lack of incorporation of epidemiological features, inadequate past evidence on effects of available interventions, lack of transparency, errors, lack of determinacy, and lack of expertise in crucial disciplines. this chapter focuses on assessing different short-term forecasting models that can forecast the daily covid-19 cases for various countries. in the form of an empirical study on forecasting accuracy, this chapter provides evidence to show that there is no universal method available that can accurately forecast pandemic data. still, forecasters' predictions are useful for the effective allocation of healthcare resources and will act as an early-warning system for government policymakers. in december 2019, clusters of pneumonia cases caused by the novel coronavirus were identified in wuhan, hubei province, china [58, 48], almost a hundred years after the 1918 spanish flu [118]. soon after the emergence of the novel beta coronavirus, the world health organization (who) characterized this contagious disease as a "global pandemic" due to its rapid spread worldwide [99]. many scientists have attempted to make forecasts about its impact.
however, despite involving many excellent modelers, best intentions, and highly sophisticated tools, forecasting covid-19 pandemics is harder [64], primarily due to the following major factors:
- very little data is available;
- there is limited understanding of the factors that contribute to transmission;
- model accuracy is constrained by our knowledge of the virus; with an emerging disease such as covid-19, many transmission-related biologic features are hard to measure and remain unknown;
- the most obvious source of uncertainty affecting all models is that we don't know how many people are or have been infected;
- ongoing issues with virologic testing mean that we are certainly missing a substantial number of cases, so models fitted to confirmed cases are likely to be highly uncertain [55];
- the problem of using confirmed cases to fit models is further complicated because the fraction of confirmed cases is spatially heterogeneous and time-varying [123];
- finally, many parameters associated with covid-19 transmission are poorly understood.

amid enormous uncertainty about the future of the covid-19 pandemic, statistical, machine learning, and epidemiological models are critical forecasting tools for policymakers, clinicians, and public health practitioners [26, 76, 126, 36, 73, 130]. covid-19 modeling studies generally follow one of two general approaches that we will refer to as forecasting models and mechanistic models. although there are hybrid approaches, these two model types tend to address different questions on different time scales, and they deal differently with uncertainty [25]. compartmental epidemiological models have been developed over nearly a century and are well tested on data from past epidemics. these models are based on modeling the actual infection process and are useful for predicting long-term trajectories of the epidemic curves [25].
short-term forecasting models are often statistical, fitting a line or curve to data and extrapolating from there -like seeing a pattern in a sequence of numbers and guessing the next number, without incorporating the process that produces the pattern [23, 24, 26] . well constructed statistical frameworks can be used for short-term forecasts, using machine learning or regression. in statistical models, the uncertainty of the prediction is generally presented as statistically computed prediction intervals around an estimate [51, 65] . given that what happens a month from now will depend on what happens in the interim, the estimated uncertainty should increase as you look further into the future. these models yield quantitative projections that policymakers may need to allocate resources or plan interventions in the short-term. forecasting time series datasets have been a traditional research topic for decades, and various models have been developed to improve forecasting accuracy [27, 10, 49] . there are numerous methods available to forecast time series, including traditional statistical models and machine learning algorithms, providing many options for modelers working on epidemiological forecasting [24, 25, 20, 26, 80, 22, 97] . many research efforts have focused on developing a universal forecasting model but failed, which is also evident from the "no free lunch theorem" [125] . this chapter focuses on assessing popularly used short-term forecasting (nowcasting) models for covid-19 from an empirical perspective. the findings of this chapter will fill the gap in the literature of nowcasting of covid-19 by comparing various forecasting methods, understanding global characteristics of pandemic data, and discovering real challenges for pandemic forecasters. the upcoming sections present a collection of recent findings on covid-19 forecasting. 
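as an illustration of the statistical short-term forecasting approach described above, a minimal simple exponential smoothing sketch is shown below. this is a generic textbook method, not any specific model assessed in the chapter, and the daily case counts are toy numbers:

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: the level is updated as
    level = alpha * y + (1 - alpha) * level for each observation y,
    and the one-step-ahead forecast is the final level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

daily_cases = [100, 120, 150, 160, 180]  # toy daily confirmed counts
next_day = ses_forecast(daily_cases, alpha=0.5)
```

with alpha = 0.5 the forecast for the next day is 162.5; a production forecaster would also estimate alpha from the data and attach a prediction interval that widens with the horizon, as the text emphasises.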
additionally, twenty nowcasting (statistical, machine learning, and hybrid) models are assessed for five countries: the united states of america (usa), india, brazil, russia, and peru. finally, some recommendations for policy-making decisions and limitations of these forecasting tools are discussed. researchers face unprecedented challenges during this global pandemic in forecasting future real-time cases with traditional mathematical, statistical, forecasting, and machine learning tools [76, 126, 36, 73, 130]. studies in march, using simple yet powerful forecasting methods like the exponential smoothing model, predicted cases ten days ahead and, despite a positive bias, had reasonable forecast error [91]. early linear and exponential model forecasts for better preparation regarding hospital beds, icu admission estimation, resource allocation, emergency funding, and proposing strong containment measures were conducted [46], projecting about 869 icu and 14542 icu admissions in italy for march 20, 2020. health-care workers had to go through immense mental stress, left with the formidable choice of prioritizing young and healthy adults over the elderly for allocation of life support and, most unwanted, ignoring those who are extremely unlikely to survive [35, 100]. real estimates of mortality with a 14-day delay demonstrated underestimation of the covid-19 outbreak and indicated a grave future, with a global case fatality rate (cfr) of 5.7% in march [13]. the contact tracing, quarantine, and isolation efforts have a differential effect on the mortality due to covid-19 among countries. even though the cfr of covid-19 seems lower than that of other deadly epidemics, there are concerns that it may eventually return like the seasonal flu, cause a second wave, or lead to a future pandemic [89, 95].
mechanistic models, like the susceptible-exposed-infectious-recovered (seir) frameworks, try to mimic the way covid-19 spreads and are used to forecast or simulate future transmission scenarios under various assumptions about the parameters governing transmission, disease, and immunity [56, 52, 8, 29, 77] . mechanistic modeling is one of the few ways to explore possible long-term epidemiologic outcomes [7] . for example, the model from ferguson et al. [40] that has been used to guide policy responses in the united states and britain examines how many covid-19 deaths may occur over the next two years under various social distancing measures. kissler et al. [70] ask whether we can expect seasonal, recurrent epidemics if immunity against the novel coronavirus functions similarly to immunity against the milder coronaviruses that circulate seasonally. in a detailed mechanistic model of boston-area transmission, aleta et al. [4] simulate various lockdown "exit strategies". these models are a way to formalize what we know about viral transmission and to explore possible futures of a system that involves nonlinear interactions, which is almost impossible to do using intuition alone [54, 85] . although these epidemiological models are useful for estimating the dynamics of transmission, targeting resources, and evaluating the impact of intervention strategies, they require parameters and depend on many assumptions. several statistical and machine learning methods for real-time forecasting of the new and cumulative confirmed cases of covid-19 have been developed to overcome the limitations of epidemiological model approaches and to assist public health planning and policy-making [25, 91, 6, 26, 23] . real-time forecasting with reliable predictions is required to reach statistically validated conclusions in the current health crisis. 
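as a toy illustration of the mechanistic approach (a generic seir sketch, not the model of ferguson et al. or any other specific study; the parameter values in the test are made up for demonstration), the four compartmental equations can be integrated with a simple euler scheme:

```python
import numpy as np

def simulate_seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """euler integration of ds/dt = -beta*s*i/n, de/dt = beta*s*i/n - sigma*e,
    di/dt = sigma*e - gamma*i, dr/dt = gamma*i; returns one (s, e, i, r) row
    per integration step."""
    n = s0 + e0 + i0 + r0
    s, e, i, r = float(s0), float(e0), float(i0), float(r0)
    path = []
    for _ in range(int(days / dt)):
        new_exposed = beta * s * i / n
        ds, de = -new_exposed, new_exposed - sigma * e
        di, dr = sigma * e - gamma * i, gamma * i
        s += ds * dt; e += de * dt; i += di * dt; r += dr * dt
        path.append((s, e, i, r))
    return np.array(path)
```

with beta > gamma the infectious curve rises and then falls as the susceptible pool is depleted, the classic epidemic wave these models formalize.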
some of the leading-edge research concerning real-time projections of covid-19 confirmed cases, recovered cases, and mortality using statistical, machine learning, and mathematical time series modeling is given in table 1 . a univariate time series is the simplest form of temporal data: a sequence of real numbers collected regularly over time, where each number represents a value [28, 18] . there are broadly two major steps involved in univariate time series forecasting [60] : (a) studying the global characteristics of the time series data; and (b) analyzing the data with the 'best-fitted' forecasting model. understanding the global characteristics of pandemic confirmed-cases data can help forecasters determine what kind of forecasting method will be appropriate for the given situation [120] . as such, we aim to perform a meaningful data analysis, including the study of time series characteristics, to provide a suitable and comprehensive knowledge foundation for the subsequent step of selecting an apt forecasting method. thus, we take the path of using statistical measures to understand pandemic time series characteristics to assist method selection and data analysis. these characteristics carry summarized information about the time series, capturing the 'global picture' of the datasets. based on the recommendations of [32, 122, 75, 74] , we study several classical and advanced time series characteristics of covid-19 data. this study considers eight global characteristics of the time series: periodicity, stationarity, serial correlation, skewness, kurtosis, nonlinearity, long-term dependence, and chaos. this collection of measures provides quantified descriptions and gives a rich portrait of the nature of the pandemic time series. a brief description of these statistical and advanced time-series measures is given below. a seasonal pattern exists when a time series is influenced by seasonal factors, such as the month of the year or the day of the week. 
the seasonality of a time series is defined as a pattern that repeats itself over fixed intervals of time [18] . in general, seasonality can be found by identifying a large autocorrelation coefficient or a large partial autocorrelation coefficient at the seasonal lag. since periodicity is very important for determining the seasonality and examining the cyclic pattern of the time series, periodicity feature extraction becomes a necessity. unfortunately, many time series in different domains do not have a known frequency or a regular periodicity. seasonal time series are sometimes also called cyclic series, although there is a significant distinction between them: cyclic data have varying frequency lengths, whereas seasonality has a fixed length over each period. for time series with no seasonal pattern, the frequency is set to 1. the seasonality is tested using the 'stl' function within the "stats" package in r statistical software [60] . stationarity is the most fundamental statistical property tested for in time series analysis because most statistical models require that the underlying generating process be stationary [27] . stationarity means that a time series (or rather the process rendering it) does not change its statistical properties over time. in statistics, a unit root test tests whether a time series variable is non-stationary and possesses a unit root [93] . the null hypothesis is generally defined as the presence of a unit root, and the alternative hypothesis is stationarity, trend stationarity, or an explosive root, depending on the test used. in econometrics, the kwiatkowski-phillips-schmidt-shin (kpss) test is used for testing the null hypothesis that an observable time series is stationary around a deterministic trend (that is, trend-stationary) against the alternative of a unit root [108] . the kpss test is done using the 'kpss.test' function within the "tseries" package in r statistical software [117] . 
serial correlation is the relationship between a variable and a lagged version of itself over various time intervals. serial correlation occurs in time-series studies when the errors associated with a given time period carry over into future time periods [18] . we have used the box-pierce statistic [19] in our approach to estimate the serial correlation measure and extract the measure from covid-19 data. the box-pierce statistic was designed by box and pierce in 1970 for testing residuals from a forecast model [122] . it is a common portmanteau test for computing the measure. the box-pierce statistic is q = n ∑_{k=1}^{h} r_k^2, where n is the length of the time series, h is the maximum lag being considered (usually h is chosen as 20), and r_k is the autocorrelation function at lag k. the portmanteau test is done using the 'box.test' function within the "stats" package in r statistical software [63] . nonlinear time series models have been used extensively to model complex dynamics not adequately represented by linear models [67] . nonlinearity is one important time series characteristic for determining the selection of an appropriate forecasting method [115] . there are many approaches to testing nonlinearity in time series models, including a nonparametric kernel test and a neural network test [119] . in comparative studies between these two approaches, the neural network test has been reported to be more reliable [122] . in this research, we used teräsvirta's neural network test [112] for measuring time series nonlinearity. it has been widely accepted and reported to correctly detect the nonlinear structure of the data [113] . it is a test for neglected nonlinearity, likely to have power against a range of alternatives, based on the nn model (augmented single-hidden-layer feedforward neural network model). the statistic takes large values when the series is nonlinear and values near zero when the series is linear. 
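the box-pierce statistic defined above is simple enough to compute directly; the sketch below (our own helper, mirroring what r's 'box.test' reports) implements q = n ∑ r_k² and compares it against a chi-square distribution with h degrees of freedom:

```python
import numpy as np
from scipy.stats import chi2

def box_pierce(y, h=20):
    """box-pierce portmanteau test for serial correlation up to lag h;
    returns (q statistic, p-value)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    d = y - y.mean()
    denom = np.sum(d ** 2)
    # sample autocorrelations r_1 .. r_h
    r = np.array([np.sum(d[:-k] * d[k:]) / denom for k in range(1, h + 1)])
    q = n * np.sum(r ** 2)
    return q, chi2.sf(q, df=h)
```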
the test is done using the 'nonlinearitytest' function within the "nonlineartseries" package in r statistical software [42] . skewness is a measure of symmetry, or more precisely, the lack of symmetry. a distribution, or dataset, is symmetric if it looks the same to the left and the right of the center point [122] . a skewness measure is used to characterize the degree of asymmetry of values around the mean value [83] . for univariate data y_t, the skewness coefficient is s = (1/(n σ^3)) ∑_{t=1}^{n} (y_t − ȳ)^3, where ȳ is the mean, σ is the standard deviation, and n is the number of data points. the skewness for a normal distribution is zero, and any symmetric data should have skewness near zero. negative values for the skewness indicate data that are skewed left, and positive values indicate data that are skewed right. in other words, left skewness means that the left tail is heavier than the right tail; similarly, right skewness means the right tail is heavier than the left tail [69] . skewness is calculated using the 'skewness' function within the "e1071" package in r statistical software [81] . kurtosis is a measure of whether the data are peaked or flat relative to a normal distribution [83] . a dataset with high kurtosis tends to have a distinct peak near the mean, decline rather rapidly, and have heavy tails. datasets with low kurtosis tend to have a flat top near the mean rather than a sharp peak. for a univariate time series y_t, the kurtosis coefficient is k = (1/(n σ^4)) ∑_{t=1}^{n} (y_t − ȳ)^4. the kurtosis for a standard normal distribution is 3; therefore, the excess kurtosis is defined as k − 3, so the standard normal distribution has an excess kurtosis of zero. positive excess kurtosis indicates a 'peaked' distribution and negative excess kurtosis indicates a 'flat' distribution [47] . kurtosis is calculated using the 'kurtosis' function within the "performanceanalytics" package in r statistical software [90] . 
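the two moment-based measures above can be written in a few lines; this sketch follows the population formulas given in the text (the r packages cited offer additional variants and bias corrections):

```python
import numpy as np

def skewness(y):
    """third standardized moment: 0 for symmetric data."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    sigma = y.std()                  # population standard deviation
    return np.mean(d ** 3) / sigma ** 3

def excess_kurtosis(y):
    """fourth standardized moment minus 3: 0 for a normal distribution."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    sigma = y.std()
    return np.mean(d ** 4) / sigma ** 4 - 3.0
```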
processes with long-range dependence have attracted a good deal of attention from a probabilistic perspective in time series analysis [98] . with the increasing importance of 'self-similarity' or 'long-range dependence' as a time series characteristic, we include this feature in the group of pandemic data characteristics. the definition of self-similarity is most related to the self-similarity parameter, also called the hurst exponent (h) [15] . the class of autoregressive fractionally integrated moving average (arfima) processes [44] provides a good estimation method for computing h. in an arima(p, d, q) model, p is the order of the ar part, d is the degree of first differencing involved, and q is the order of the ma part. if the time series is suspected of exhibiting long-range dependence, the parameter d may be replaced by certain non-integer values in the arfima model [21] . we fit an arfima(0, d, 0) by maximum likelihood, which is approximated using the fast and accurate method of haslett and raftery [50] . we then estimate the hurst parameter using the relation h = d + 0.5. the self-similarity feature can only be detected in the raw data of the time series. the value of h can be obtained using the 'hurstexp' function within the "pracma" package in r statistical software [16] . many systems in nature that were previously considered random processes are now categorized as chaotic systems. for several years, lyapunov characteristic exponents have been of interest in the study of dynamical systems as a way to characterize quantitatively their stochasticity properties, related essentially to the exponential divergence of nearby orbits [39] . nonlinear dynamical systems often exhibit chaos, characterized by sensitive dependence on initial values, or more precisely by a positive lyapunov exponent (le) [38] . 
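computing h via arfima maximum likelihood (the haslett-raftery route used above) needs specialized libraries; as a self-contained stand-in, the classical rescaled-range (r/s) estimator below recovers the same qualitative picture: h near 0.5 for noise, h near 1 for strongly trending series. this is our own simplified sketch, not the 'hurstexp' implementation.

```python
import numpy as np

def hurst_rs(y, min_window=8):
    """rescaled-range estimate of the hurst exponent."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    sizes, rs_means = [], []
    size = min_window
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            w = y[start:start + size]
            z = np.cumsum(w - w.mean())          # cumulative deviation from the mean
            r, s = z.max() - z.min(), w.std()
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_means.append(np.mean(rs))
        size *= 2
    # slope of log(r/s) against log(window size) estimates h
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope
```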
recognizing and quantifying chaos in time series are essential steps toward understanding the nature of random behavior and revealing the extent to which short-term forecasts may be improved [53] . le, as a measure of the divergence of nearby trajectories, has been used to quantify chaos [14] . the algorithm for computing le from a time series is applied to continuous dynamical systems in an n-dimensional phase space [101] . le is calculated using the 'lyapunov exponent' function within the "tserieschaos" package in r statistical software [9] . time series forecasting models work by taking a series of historical observations and extrapolating future patterns. they work well when the data are accurate and the future resembles the past. forecasting tools are designed to predict possible future alternatives and to help current planning and decision making [10] . there are essentially three general approaches to forecasting a time series [82] : 1. generating forecasts from an individual model; 2. combining forecasts from many models (forecast model averaging); 3. hybrid experts for time series forecasting. single (individual) forecasting models are either traditional statistical methods or modern machine learning tools. we study ten popularly used single forecasting models from the classical time series, advanced statistics, and machine learning literature. there has been a vast literature on forecast combinations, motivated by the seminal work of bates & granger [12] and followed by a plethora of empirical applications showing that combination forecasts are often superior to their counterparts (see, for example, [17, 114] ). combining forecasts using a weighted average is considered a successful way of hedging against the risk of selecting a misspecified model [31] . 
a significant challenge is in choosing an appropriate set of weights, and many attempts to do this have performed worse than simply using equal weights, something that has become known as the "forecast combination puzzle" (see, for example, [109] ). to overcome this, hybrid models became popular with the seminal work of zhang [127] and were further extended for epidemic forecasting in [24, 26, 23] . the forecasting methods can be briefly reviewed and organized in the architecture shown in figure 1 . the autoregressive integrated moving average (arima) is one of the well-known linear models in time-series forecasting, developed in the early 1970s [18] . it is widely used to track linear tendencies in stationary time-series data. it is denoted by arima(p, d, q), where the three components have significant meanings: the parameters p and q represent the orders of the ar and ma models, respectively, and d denotes the level of differencing needed to convert nonstationary data into a stationary time series [78] . the arima model can be mathematically expressed as y_t = β_1 y_{t−1} + β_2 y_{t−2} + · · · + β_p y_{t−p} + ǫ_t + α_1 ǫ_{t−1} + α_2 ǫ_{t−2} + · · · + α_q ǫ_{t−q}, where y_t denotes the actual value of the variable at time t, ǫ_t denotes the random error at time t, and β_i and α_j are the coefficients of the model. some necessary steps to be followed for any given time-series dataset to build an arima model are: (i) identification of the model (achieving stationarity); (ii) use of the autocorrelation function (acf) and partial acf plots to select the ar and ma model parameters, respectively, and estimation of the model parameters for the arima model; (iii) selection of the 'best-fitted' forecasting model using the akaike information criterion (aic) or the bayesian information criterion (bic). finally, one checks the model diagnostics to measure its performance. an implementation in r statistical software is available using the 'auto.arima' function under the "forecast" package, which returns the 'best' arima model according to either aic or bic values [61] . 
wavelet analysis is a mathematical tool that can reveal information within signals in both the time and scale (frequency) domains. this property overcomes the primary drawback of fourier analysis: the wavelet transform maps the original signal data (especially in the time domain) into a different domain for data analysis and processing. wavelet-based models are most suitable for nonstationary data, unlike the standard arima. most epidemic time-series datasets are nonstationary; therefore, wavelet transforms are used as a forecasting model for these datasets [26] . when conducting wavelet analysis in the context of time series analysis [5] , the selection of the optimal number of decomposition levels is vital to the performance of the model in the wavelet domain. the formula w_l = int[log(n)] is used to select the number of decomposition levels, where n is the time-series length. the wavelet-based arima (warima) model transforms the time series data by using a hybrid maximal overlap discrete wavelet transform (modwt) algorithm with a 'haar' filter [88] . daubechies wavelets can capture events in the observed time series that most other time series prediction models cannot recognize. the necessary steps of a wavelet-based forecasting model, defined by [5] , are as follows. firstly, the daubechies wavelet transformation with a chosen decomposition level is applied to the nonstationary time series data. secondly, the series is reconstructed by removing the high-frequency component, using the wavelet denoising method. lastly, an appropriate arima model is applied to the reconstructed series to generate out-of-sample forecasts of the given time series data. wavelets were first considered as a family of functions by morlet [121] , constructed from the translations and dilations of a single function, called the "mother wavelet". 
these wavelets are defined as φ_{m,n}(t) = |m|^{−1/2} φ((t − n)/m), where the parameter m (≠ 0) is denoted as the scaling parameter or scale, and it measures the degree of compression. the parameter n determines the time location of the wavelet and is called the translation parameter. if |m| < 1, then the wavelet in m is a compressed version (smaller support in the time domain) of the mother wavelet and corresponds primarily to higher frequencies, and when |m| > 1, then φ_{m,n}(t) has a larger time width than φ(t) and corresponds to lower frequencies. hence wavelets have a time width adapted to their frequencies, which is the main reason behind the success of the morlet wavelets in signal processing and time-frequency signal analysis [86] . an implementation of the warima model is available using the 'waveletfittingarma' function under the "waveletarima" package in r statistical software [87] . fractionally autoregressive integrated moving average or autoregressive fractionally integrated moving average (arfima) models are generalized versions of the arima model in time series forecasting that allow non-integer values of the differencing parameter [44] . it may sometimes happen that our time-series data are not stationary, but differencing with an integer value of the parameter d over-differences them. to overcome this problem, it is necessary to difference the time series using a fractional value. these models are useful for modeling time series in which deviations from the long-run mean decay more slowly than exponentially; such models can deal with time-series data having long memory [94] . arfima models can be mathematically expressed as β(b)(1 − b)^d y_t = α(b) ǫ_t, where b is the backshift operator, β and α are polynomials of orders p and q (the arima parameters), and d is the differencing term (allowed to take non-integer values). an r implementation of the arfima model is available with the 'arfima' function under the "forecast" package [61] . 
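the fractional differencing operator (1 − b)^d at the heart of arfima can be expanded into an infinite sum of lag weights; the sketch below (our own helper, truncating the expansion) shows both the weights and their application to a series. note how integer d recovers ordinary differencing, while fractional d produces slowly decaying weights, the signature of long memory:

```python
import numpy as np

def frac_diff_weights(d, k):
    """first k coefficients of (1 - b)^d = sum_j pi_j b^j, via the
    recursion pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j."""
    w = [1.0]
    for j in range(1, k):
        w.append(w[-1] * (j - 1 - d) / j)
    return np.array(w)

def frac_diff(y, d, k=30):
    """apply (1 - b)^d to a series, truncating the expansion at k terms."""
    y = np.asarray(y, dtype=float)
    w = frac_diff_weights(d, k)
    out = np.zeros(len(y))
    for t in range(len(y)):
        for j in range(min(t + 1, k)):
            out[t] += w[j] * y[t - j]
    return out
```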
an arfima(p, d, q) model is selected and estimated automatically using the hyndman-khandakar (2008) [59] algorithm to select p and q and the haslett and raftery (1989) [50] algorithm to estimate the parameters, including d. exponential smoothing state space (ets) methods are very effective for time series forecasting. exponential smoothing was proposed in the late 1950s [124] and has motivated some of the most successful forecasting methods. forecasts produced using exponential smoothing methods are weighted averages of past observations, with the weights decaying exponentially as the observations get older. the ets models belong to the family of state-space models, consisting of three components: an error component (e), a trend component (t), and a seasonal component (s). this method is used to forecast univariate time series data. each model consists of a measurement equation that describes the observed data and some state equations that describe how the unobserved components or states (level, trend, seasonal) change over time [60] . hence, these are referred to as state-space models [59] . an r implementation of the model is available in the 'ets' function under the "forecast" package [61] . as an extension of the autoregressive model, the self-exciting threshold autoregressive (setar) model is used to model time series data in order to allow a higher degree of flexibility in the model parameters through a regime-switching behaviour [116] . given time-series data y_t, the setar model is used to predict future values, assuming that the behavior of the time series changes once the series enters a different regime. the switch from one regime to another depends on the past values of the series. the model consists of k + 1 autoregressive (ar) parts, one for each regime. the model is usually denoted as setar(k, p), where k is the number of thresholds, so there are k + 1 regimes in the model, and p is the order of the autoregressive part. 
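the exponentially decaying weights at the heart of the ets family described above are easiest to see in simple exponential smoothing, its most basic member (a from-scratch sketch; r's 'ets' additionally selects the error/trend/seasonal components automatically):

```python
import numpy as np

def simple_exp_smoothing(y, alpha):
    """level_t = alpha * y_t + (1 - alpha) * level_{t-1}; the final level
    serves as the flat h-step-ahead forecast."""
    y = np.asarray(y, dtype=float)
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1.0 - alpha) * level
    return level
```

unrolling the recursion shows the forecast is a weighted sum of past observations with weights alpha, alpha(1 − alpha), alpha(1 − alpha)², ..., i.e. geometrically (exponentially) decaying, as the text states.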
for example, if an ar(1) model is assumed in both regimes, then a 2-regime setar model is given by [41] : y_t = φ_1 y_{t−1} + ǫ_t if y_{t−1} ≤ c, and y_t = φ_2 y_{t−1} + ǫ_t if y_{t−1} > c, where for the moment the ǫ_t are assumed to be an i.i.d. white noise sequence conditional upon the history of the time series and c is the threshold value. the setar model assumes that the border between the two regimes is given by a specific value of the threshold variable y_{t−1}. the model can be implemented using the 'setar' function under the "tsdyn" package in r [34] . bayesian statistics has many applications in statistical techniques such as regression, classification, clustering, and time series analysis. scott and varian [105] used structural time series models to show how google search data can be used to improve short-term forecasts of economic time series. in the structural time series model, the observation at time t, y_t, is defined as y_t = x_t^⊤ β_t + ǫ_t, where β_t is the vector of latent variables, x_t is the vector of model parameters, and the ǫ_t are assumed to follow normal distributions with zero mean and variance h_t. in addition, the latent states evolve over time as β_{t+1} = a_t β_t + δ_t, with a_t the transition matrix, where the δ_t are assumed to follow normal distributions with zero mean and variance q_t. a gaussian distribution is selected as the prior of the bsts model since the observed frequency values range from 0 to ∞ [66] . an r implementation is available under the "bsts" package [103] , where one can add local linear trend and seasonal components as required. the state specification is passed as an argument to the 'bsts' function, along with the data and the desired number of markov chain monte carlo (mcmc) iterations, and the model is fit using an mcmc algorithm [104] . the 'theta method' or 'theta model' is a univariate time series forecasting technique that performed particularly well in the m3 forecasting competition and is of interest to forecasters [11] . 
the method decomposes the original data into two or more lines, called theta lines, and extrapolates them using forecasting models. finally, the predictions are combined to obtain the final forecasts. the theta lines can be estimated by simply modifying the 'curvatures' of the original time series [110] . this change is obtained through a coefficient, called the θ coefficient, which is applied directly to the second differences of the time series: y″_new(θ) = θ y″_t, where y″_t = y_t − 2y_{t−1} + y_{t−2} at time t for t = 3, 4, · · · , n, and {y_1, y_2, · · · , y_n} denotes the observed univariate time series. in practice, the coefficient θ can be considered a transformation parameter which creates a series with the same mean and slope as the original data but a different variance. now, eqn. (2) is a second-order difference equation and has a solution of the form [62] : y_new(θ)_t = θ y_t + a_θ + b_θ (t − 1), where a_θ and b_θ are constants and t = 1, 2, · · · , n. thus, y_new(θ) is equivalent to a linear function of y_t with a linear trend added. the values of a_θ and b_θ are computed by minimizing the sum of squared differences ∑_{t=1}^{n} [y_t − y_new(θ)_t]^2. forecasts from the theta model are obtained as a weighted average of forecasts of y_new(θ) for different values of θ. also, the prediction intervals and likelihood-based estimation of the parameters can be obtained based on a state-space model, as demonstrated in [62] . an r implementation of the theta model is available with the 'thetaf' function in the "forecast" package [61] . the main objective of the tbats model is to deal with complex seasonal patterns using exponential smoothing [33] . the name is an acronym for the key features of the model: trigonometric seasonality (t), box-cox transformation (b), arma errors (a), trend (t), and seasonal (s) components. tbats makes it easy for users to handle data with multiple seasonal patterns. this model is preferable when the seasonality changes over time [60] . 
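a compact sketch of the classical two-line theta procedure just described (our own simplification: the θ = 0 line is the fitted linear trend, the θ = 2 line is extrapolated by simple exponential smoothing, and the two forecasts are averaged; r's 'thetaf' adds drift and interval corrections):

```python
import numpy as np

def theta_forecast(y, horizon, alpha=0.5):
    """two-line theta forecast: average the extrapolated linear trend
    (theta = 0) and an ses extrapolation of the theta = 2 line."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    trend = intercept + slope * t          # theta = 0 line (zero curvature)
    theta2 = 2.0 * y - trend               # theta = 2 line (doubled curvature)
    level = theta2[0]                      # ses extrapolation of the theta-2 line
    for obs in theta2[1:]:
        level = alpha * obs + (1.0 - alpha) * level
    fut = np.arange(n, n + horizon, dtype=float)
    trend_fc = intercept + slope * fut
    return 0.5 * (trend_fc + level)        # combine the two theta lines
```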
in the tbats formulation, y^(μ)_t is the box-cox transformed time series at time point t, s^(i)_t is the i-th seasonal component, l_t is the local level, b_t is the trend with damping, d_t is an arma(p, q) process for the residuals, and e_t is gaussian white noise. the tbats model can be implemented using the 'tbats' function under the "forecast" package in r statistical software [61] . forecasting with artificial neural networks (ann) has received increasing interest in various research and applied domains since the late 1990s, and it has been given special attention in epidemiological forecasting [92] . multi-layered feed-forward neural networks with back-propagation learning rules are the most widely used models, with applications in classification and prediction problems [129] . in a simple feed-forward neural net there is a single hidden layer between the input and output layers, with weights connecting the layers. denote by ω_{ji} the weights between the input layer and the hidden layer and by ν_{kj} the weights between the hidden and output layers. based on the given inputs x_i, the neuron's net input is calculated as the weighted sum of its inputs, and the output of the neuron, y_j, is based on a sigmoidal function of this net input [43] . for the j-th hidden neuron, the net input and output are net^h_j = ∑_i ω_{ji} x_i and y_j = f(net^h_j); for the k-th output neuron they are net^o_k = ∑_{j=1}^{J} ν_{kj} y_j and y_k = f(net^o_k), with f(net) = 1/(1 + e^{−λ net}), where λ ∈ (0, 1) is a parameter used to control the gradient of the function and J is the number of neurons in the hidden layer. the back-propagation [102] learning algorithm is the most commonly used technique in ann. in the error back-propagation step, the weights in the ann are updated by minimizing e = (1/2) ∑_p ∑_k (d_{pk} − y_{pk})^2, where d_{pk} is the desired output of neuron k for input pattern p. 
a common formula for the number of hidden neurons is h = (i + j)/2 + √d, where i is the number of inputs x_i, j is the number of outputs y_j, and d denotes the number of training patterns [128] . the application of ann to time series data is possible with the 'mlp' function under the "nnfor" package in r [72] . the autoregressive neural network (arnn) received attention in the time series literature in the late 1990s [37] . the architecture of a simple feedforward neural network can be described as a network of neurons arranged in an input layer, a hidden layer, and an output layer in a prescribed order, each layer passing information to the next using weights that are obtained via a learning algorithm [128] . the arnn model is a modification of the simple ann model especially designed for prediction problems on time series datasets [37] . the arnn model uses a pre-specified number of lagged values of the time series as inputs, and the number of hidden neurons in its architecture is also fixed [60] . the arnn(p, k) model uses p lagged inputs of the time series data in a one-hidden-layer feedforward neural network with k hidden units. let x denote the p-lagged inputs and f a neural network of the architecture f(x) = c_0 + ∑_{j=1}^{k} w_j φ(a_j + b_j^⊤ x), where c_0, a_j, w_j are connecting weights, the b_j are p-dimensional weight vectors, and φ is a bounded nonlinear sigmoidal function (e.g., the logistic squasher function or the hyperbolic tangent activation function). these weights are trained using gradient descent backpropagation [102] . a standard ann faces the dilemma of choosing the number of hidden neurons, for which the optimal choice is unknown; for the arnn model, however, we adopt the formula k = [(p + 1)/2] for non-seasonal time series data, where p is the number of lagged inputs in an autoregressive model [60] . the arnn model can be applied using the 'nnetar' function available in the r statistical package "forecast" [61] . 
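to make the arnn(p, k) architecture concrete, the sketch below trains a one-hidden-layer network on p lagged inputs by plain full-batch gradient descent (a from-scratch toy, not r's 'nnetar'; the learning rate and epoch count are arbitrary choices):

```python
import numpy as np

def fit_arnn(y, p=2, k=3, epochs=5000, lr=0.1, seed=0):
    """train f(x) = c0 + sum_j w_j * tanh(a_j + b_j . x) on p-lagged inputs;
    returns a one-step prediction function."""
    y = np.asarray(y, dtype=float)
    X = np.array([y[i:i + p] for i in range(len(y) - p)])
    target = y[p:]
    rng = np.random.default_rng(seed)
    B = rng.normal(0, 0.5, (p, k))       # b_j vectors, one column per hidden unit
    a = np.zeros(k)                      # hidden biases a_j
    w = rng.normal(0, 0.5, k)            # hidden-to-output weights w_j
    c0 = 0.0
    m = len(target)
    for _ in range(epochs):
        H = np.tanh(X @ B + a)
        err = H @ w + c0 - target
        grad_H = np.outer(err, w) * (1.0 - H ** 2)   # backprop through tanh
        w -= lr * (H.T @ err) / m
        c0 -= lr * err.mean()
        B -= lr * (X.T @ grad_H) / m
        a -= lr * grad_H.mean(axis=0)
    return lambda x: float(np.tanh(np.asarray(x, dtype=float) @ B + a) @ w + c0)
```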
the idea of ensemble time series forecasts was given by bates and granger (1969) in their seminal work [12] . forecasts generated from arima, ets, theta, arnn, and warima can be combined with equal weights, weights based on in-sample errors, or cross-validated weights. in the ensemble framework, cross-validation for time series data with user-supplied models and forecasting functions is also possible to evaluate model accuracy [106] . combining several candidate models can hedge against an incorrect model specification. bates and granger (1969) [12] suggested such an approach and observed, somewhat surprisingly, that the combined forecast can even outperform the single best component forecast. while combination weights selected equally or proportionally to past model errors are possible approaches, many more sophisticated combination schemes have been suggested. for example, rather than normalizing weights to sum to unity, unconstrained and even negative weights could be possible [45] . the simple equal-weights combination might appear woefully obsolete and probably non-competitive compared to the multitude of sophisticated combination approaches or advanced machine learning and neural network forecasting models, especially in the age of big data. however, such simple combinations can still be competitive, particularly for pandemic time series [106] . a flow diagram of the ensemble method is presented in figure 2 . the ensemble method of [12] produces forecasts out to a horizon h by applying a weight w_m to each model m of the n model forecasts in the ensemble. the ensemble forecast f(i) for time horizon 1 ≤ i ≤ h, with individual component model forecasts f_m(i), is then f(i) = ∑_{m=1}^{n} w_m f_m(i). the weights can be determined in several ways (for example, supplied by the user, set equally, determined by in-sample errors, or determined by cross-validation). 
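the weighted combination itself is a one-liner; the helpers below (our own naming) implement f(i) = ∑ w_m f_m(i) and also derive weights inversely proportional to each model's in-sample error, one of the schemes mentioned above:

```python
import numpy as np

def combine_forecasts(forecasts, weights=None):
    """f(i) = sum_m w_m * f_m(i); weights default to equal and are normalized."""
    F = np.asarray(forecasts, dtype=float)          # shape: (n_models, horizon)
    if weights is None:
        weights = np.ones(F.shape[0])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ F

def inverse_error_weights(in_sample_mse):
    """models with smaller in-sample mse receive larger weights."""
    inv = 1.0 / np.asarray(in_sample_mse, dtype=float)
    return inv / inv.sum()
```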
the "forecasthybrid" package in r includes these component models in order to enhance the "forecast" package base models with easy ensembling (e.g., the 'hybridmodel' function in r statistical software) [107] . the idea of hybridizing time series models and combining different forecasts was first introduced by zhang [127] and further extended by [68, 24, 26, 23] . the hybrid forecasting models are based on an error re-modeling approach, and there are broadly two types of error calculations popular in the literature [84, 30] . in the additive error model, the forecaster treats the expert's estimate as a variable, ŷ_t, and thinks of it as the sum of two terms, ŷ_t = y_t + e_t, where y_t is the true value and e_t is the additive error term. in the multiplicative error model, the forecaster treats the expert's estimate ŷ_t as the product of two terms, ŷ_t = y_t × e_t, where y_t is the true value and e_t is the multiplicative error term. now, even if the relationship is of product type, in the log-log scale it becomes additive. hence, without loss of generality, we may assume the relationship to be additive and expect the (additive) errors of a forecasting model to be random shocks [23] . these hybrid models are useful for complex correlation structures where only a limited amount of knowledge is available about the data generating process. a simple example is the daily confirmed covid-19 cases for various countries, where very little is known about the structural properties of the current pandemic. the mathematical formulation of the proposed hybrid model is z_t = l_t + n_t, where l_t is the linear part and n_t is the nonlinear part of the hybrid model. we can estimate both l_t and n_t from the available time series data. let l̂_t be the forecast value of the linear model (e.g., arima) at time t and ǫ_t represent the error residuals at time t, obtained from the linear model. 
then, we write

ǫ_t = z_t − l̂_t.

these left-out residuals are further modeled by a nonlinear model (e.g., ann or arnn) and can be represented as follows:

ǫ_t = f(ǫ_{t−1}, ǫ_{t−2}, ...) + ε_t,

where f is a nonlinear function, the modeling is done by the nonlinear ann or arnn model as defined in eqn. (5), and ε_t is supposed to be the random shocks. therefore, the combined forecast can be obtained as follows:

ẑ_t = l̂_t + n̂_t,

where n̂_t is the forecasted value of the nonlinear time series model. an overall flow diagram of the proposed hybrid model is given in figure 3. in the hybrid model, a nonlinear model is applied in the second stage to re-model the left-over autocorrelations in the residuals, which the linear model could not capture. thus, this can be considered an error re-modeling approach. this is important because, due to model misspecification and disturbances in the pandemic rate time series, the linear models may fail to generate white noise behavior in the forecast residuals. thus, hybrid approaches can eventually improve the predictions for epidemiological forecasting problems, as shown in [24, 26, 23] . these hybrid models only assume that the linear and nonlinear components of the epidemic time series can be separated individually. the implementation of the hybrid models used in this study is available in [1] . five covid-19 time series datasets for the usa, india, russia, brazil, and peru are considered for assessing twenty forecasting models (individual, ensemble, and hybrid). the datasets are mostly nonlinear, nonstationary, and non-gaussian in nature. we have used root mean square error (rmse), mean absolute error (mae), mean absolute percentage error (mape), and symmetric mape (smape) to evaluate the predictive performance of the models used in this study. since the number of data points in these datasets is limited, advanced deep learning techniques would over-fit the datasets [51] . we use publicly available datasets to compare various forecasting frameworks.
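as a toy illustration of this two-stage idea: the sketch below substitutes a least-squares ar(1) fit for the linear stage (the paper uses arima) and a caller-supplied residual model for the nonlinear stage (the paper uses ann/arnn). all names are illustrative assumptions of mine, not part of the paper's r implementation.

```python
def fit_ar1(series):
    """least-squares ar(1) fit y_t = a + b * y_{t-1}; a stand-in for the
    linear stage (arima in the text)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b


def hybrid_forecast(series, steps, residual_model):
    """two-stage hybrid z_t = l_t + n_t: fit a linear model, then let
    `residual_model` (the ann/arnn stage in the text) re-model its
    residuals and add the correction back to the linear forecasts."""
    a, b = fit_ar1(series)
    fitted = [series[0]] + [a + b * y for y in series[:-1]]
    residuals = [z - l for z, l in zip(series, fitted)]  # eps_t = z_t - l_hat_t
    n_hat = residual_model(residuals)                    # one-step n_hat
    forecasts, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last + n_hat                      # l_hat + n_hat
        forecasts.append(last)
    return forecasts
```

with a perfectly linear series the residuals vanish, so any residual model returning their mean leaves the ar(1) extrapolation unchanged; on real epidemic counts the residual correction carries the nonlinear signal.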
covid-19 cases of the five countries with the highest number of cases were collected [2, 3] . the datasets and their description are presented in table 2. characteristics of these five time series were examined using the hurst exponent, the kpss test, the terasvirta test, and other measures as described in section 3. the hurst exponent (denoted by h), which ranges between zero and one, is calculated to measure the long-range dependency in a time series and provides a measure of long-term nonlinearity. for values of h near zero, the time series under consideration is mean-reverting: an increase in the value will be followed by a decrease in the series, and vice versa. when h is close to 0.5, the series has no autocorrelation with past values; such series are often called brownian motion. when h is near one, an increase or decrease in the value is most likely to be followed by a similar movement in the future. all five covid-19 datasets in this study possess a hurst exponent value near one, which indicates that these time series are strongly persistent: an increase tends to be followed by a further increase, and a decrease by a further decrease. kpss tests are performed to examine the stationarity of a given time series. the null hypothesis of the kpss test is that the time series is stationary; thus, the series is nonstationary when the p-value is less than a chosen threshold. from table 3, all five datasets can be characterized as nonstationary, as the p-value < 0.01 in each instance. the terasvirta test examines the linearity of a time series against the alternative that a nonlinear process has generated the series. it is observed that the usa, russia, and peru covid-19 datasets are likely to follow a nonlinear trend, whereas the india and brazil datasets show some linear trends. further, we examine serial correlation, skewness, kurtosis, and the maximum lyapunov exponent for the five covid-19 datasets. the results are reported in table 4 .
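returning to the hurst exponent used above: it is typically estimated via rescaled-range (r/s) analysis. the paper's r tooling (e.g., the 'pracma' package listed among its references) provides this; the pure-python version below is my own minimal simplification of the classical estimator, with the chunking scheme and function name chosen for illustration.

```python
import math


def hurst_rs(series, min_chunk=8):
    """estimate the hurst exponent via simplified rescaled-range (r/s)
    analysis: slope of log(r/s) against log(window size)."""
    n = len(series)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs_per_chunk = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            cum, r_min, r_max, dev = 0.0, 0.0, 0.0, 0.0
            for x in chunk:
                cum += x - mean                      # cumulative deviation
                r_min, r_max = min(r_min, cum), max(r_max, cum)
                dev += (x - mean) ** 2
            s = math.sqrt(dev / size)                # chunk std deviation
            if s > 0:
                rs_per_chunk.append((r_max - r_min) / s)
        if rs_per_chunk:
            sizes.append(size)
            rs_vals.append(sum(rs_per_chunk) / len(rs_per_chunk))
        size *= 2
    xs = [math.log(s) for s in sizes]
    ys = [math.log(r) for r in rs_vals]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

a strongly trending series yields an estimate near one (persistent, as reported for the covid-19 datasets), while a mean-reverting series yields an estimate near zero, matching the interpretation in the text.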
the serial correlation of the datasets is computed using the box-pierce test statistic for the null hypothesis of independence in a given time series. the p-values related to each of the datasets were found to be below the significance level (see table 4 ), so the null hypothesis of independence is rejected, i.e., these covid-19 datasets exhibit serial correlation at lag one. skewness for the russia covid-19 dataset is found to be negative, whereas the other four datasets are positively skewed. this means that for the russia dataset the left tail is heavier than the right tail; for the other four datasets, the right tail is heavier than the left tail. the kurtosis value for the india dataset is found to be positive, while the other four datasets have negative kurtosis values. therefore, the covid-19 dataset of india tends to have a peaked distribution, and the other four datasets may have flat distributions. we observe that each of the five datasets is non-chaotic in nature, i.e., the maximum lyapunov exponents are less than unity. a summary of the implementation tools is presented in table 5. we used four popular accuracy metrics to evaluate the performance of the different time series forecasting models. the expressions of these metrics are given below:

rmse = sqrt( (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² ),
mae = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|,
mape = (100/n) Σ_{i=1}^{n} |(y_i − ŷ_i) / y_i|,
smape = (100/n) Σ_{i=1}^{n} 2 |y_i − ŷ_i| / (|y_i| + |ŷ_i|),

where y_i are the actual series values, ŷ_i are the predictions by the different models, and n represents the number of data points of the time series. the model with the smallest accuracy metrics is the best forecasting model. this subsection is devoted to the experimental analysis of confirmed covid-19 cases using different time series forecasting models. the test period is chosen to be 15 days and 30 days, whereas the rest of the data is used as training data (see table 2 ). in the first columns of tables 6 and 7, we present the training data and test data for the usa, india, brazil, russia, and peru. the autocorrelation function (acf) and partial autocorrelation function (pacf) plots are also depicted for the training period of each of the five countries in tables 6 and 7 .
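the four accuracy metrics defined above translate directly into code. this is a straightforward sketch of the standard formulas; the function name and return convention are my own.

```python
import math


def forecast_accuracy(actual, predicted):
    """rmse, mae, mape and smape for paired actual/predicted series.

    mape assumes no actual value is zero; smape uses the 2|a-p|/(|a|+|p|)
    convention, matching the expressions in the text.
    """
    n = len(actual)
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    mape = 100.0 / n * sum(abs((a - p) / a) for a, p in zip(actual, predicted))
    smape = 100.0 / n * sum(2 * abs(a - p) / (abs(a) + abs(p))
                            for a, p in zip(actual, predicted))
    return {"rmse": rmse, "mae": mae, "mape": mape, "smape": smape}
```

lower values of every metric indicate a better forecasting model, which is the selection rule applied throughout the comparisons that follow.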
acf and pacf plots are generated after applying the required number of differences to each training dataset using the r function 'diff'. the required order of differencing is obtained using the r function 'ndiffs', which estimates the number of differences required to make a given time series stationary. the integer-valued order of differencing is then used as the value of 'd' in the arima(p, d, q) model. the other two parameters of the model, 'p' and 'q', are obtained from the acf and pacf plots, respectively (see tables 6 and 7) . however, we choose the 'best' fitted arima model using the aic value for each training dataset. table 6 presents the training data (black colored) and test data (red colored) and the corresponding acf and pacf plots for the five time series datasets. further, we checked twenty different forecasting models as competitors for the short-term forecasting of covid-19 confirmed cases in five countries. 15-days-ahead and 30-days-ahead forecasts were generated for each model, and accuracy metrics were computed to determine the best predictive models. from the ten popular single models, we choose the best one based on the accuracy metrics; one hybrid/ensemble model is selected from the remaining ten models. the best-fitted arima parameters and the ets, arnn, and arfima models for each country are reported in the respective tables. table 7 presents the training data (black colored) and test data (red colored) and the corresponding plots for the five datasets. twenty forecasting models are implemented on these pandemic time series datasets. table 5 gives the essential details about the functions and packages required for implementation. results for usa covid-19 data: among the single models, arima(2,1,4) performs best in terms of accuracy metrics for 15-days-ahead forecasts. tbats and arnn(16,8) also have competitive accuracy metrics.
the hybrid arima-arnn model improves the earlier arima forecasts and has the best accuracy among all hybrid/ensemble models (see table 8 ). hybrid arima-warima also does a good job and improves the arima model forecasts. in-sample and out-of-sample forecasts obtained from the arima and hybrid arima-arnn models are depicted in fig. 4(a). out-of-sample forecasts are generated using the whole dataset as training data. arfima(2,0,0) is found to have the best accuracy metrics for 30-days-ahead forecasts among single forecasting models. bsts and setar also have good agreement with the test data in terms of accuracy metrics. the hybrid arima-warima model has the best accuracy among all hybrid/ensemble models (see table 9 ). in-sample and out-of-sample forecasts obtained from the arfima and hybrid arima-warima models are depicted in fig. 4(b). results for india covid-19 data: among the single models, ann performs best in terms of accuracy metrics for 15-days-ahead forecasts. arima(1,2,5) also has competitive accuracy metrics in the test period. the hybrid arima-arnn model improves the arima(1,2,5) forecasts and has the best accuracy among all hybrid/ensemble models (see table 10 ). hybrid arima-ann and hybrid arima-warima also do a good job and improve the arima model forecasts. in-sample and out-of-sample forecasts obtained from the ann and hybrid arima-arnn models are depicted in fig. 5(a). out-of-sample forecasts are generated using the whole dataset as training data (see fig. 5 ). ann is found to have the best accuracy metrics for 30-days-ahead forecasts among single forecasting models for india covid-19 data. the ensemble ann-arnn-warima model has the best accuracy among all hybrid/ensemble models (see table 11 ). in-sample and out-of-sample forecasts obtained from the ann and ensemble ann-arnn-warima models are depicted in fig. 5(b). results for brazil covid-19 data: among the single models, setar performs best in terms of accuracy metrics for 15-days-ahead forecasts.
the ensemble ets-theta-arnn (efn) model has the best accuracy among all hybrid/ensemble models (see table 12 ). in-sample and out-of-sample forecasts obtained from the setar and ensemble efn models are depicted in fig. 6(a). warima is found to have the best accuracy metrics for 30-days-ahead forecasts among single forecasting models for brazil covid-19 data. the hybrid warima-ann model has the best accuracy among all hybrid/ensemble models (see table 13 ). in-sample and out-of-sample forecasts obtained from the warima and hybrid warima-ann models are depicted in fig. 6(b). results for russia covid-19 data: bsts performs best in terms of accuracy metrics for 15-days-ahead forecasts among the single models for russia covid-19 data. theta and arnn(3,2) also show competitive accuracy measures. the ensemble ets-theta-arnn (efn) model has the best accuracy among all hybrid/ensemble models (see table 14 ). the ensemble arima-ets-arnn and ensemble arima-theta-arnn models also perform well in the test period. in-sample and out-of-sample forecasts obtained from the bsts and ensemble efn models are depicted in fig. 7(a). setar is found to have the best accuracy metrics for 30-days-ahead forecasts among single forecasting models for russia covid-19 data. the ensemble arima-theta-arnn (afn) model has the best accuracy among all hybrid/ensemble models (see table 15 ). all five ensemble models show promising results for this dataset. in-sample and out-of-sample forecasts obtained from the setar and ensemble afn models are depicted in fig. 7(b). results for peru covid-19 data: warima and arfima(2,0.09,1) perform better than the other single models for 15-days-ahead forecasts in peru. the hybrid warima-arnn model improves the warima forecasts and has the best accuracy among all hybrid/ensemble models (see table 16 ). in-sample and out-of-sample forecasts obtained from the warima and hybrid warima-arnn models are depicted in fig. 8(a).
arfima(2,0,0) and ann show competitive accuracy metrics for 30-days-ahead forecasts among single forecasting models for peru covid-19 data. the ensemble ann-arnn-warima (aaw) model has the best accuracy among all hybrid/ensemble models (see table 17 ). in-sample and out-of-sample forecasts obtained from the arfima(2,0,0) and ensemble aaw models are depicted in fig. 8(b). results from all five datasets reveal that none of the forecasting models performs uniformly best; therefore, one should carefully select the appropriate forecasting model when dealing with covid-19 datasets. in this study, we assessed several statistical, machine learning, and composite models on the confirmed covid-19 cases of the five countries with the highest number of cases. thus, covid-19 cases in the usa, followed by india, brazil, russia, and peru, are considered. the datasets mostly exhibit nonlinear and nonstationary behavior. twenty forecasting models were applied to the five datasets, and an empirical study is presented here. the empirical findings suggest no universal method exists that can outperform every other model for all the datasets in covid-19 nowcasting. still, the future forecasts obtained from the models with the best accuracy will be useful in decision- and policy-making for government officials and policymakers to allocate adequate health care resources for the coming days in responding to the crisis. however, we recommend updating the datasets regularly and comparing the accuracy metrics to obtain the best model. since it is evident from this empirical study that no model can perform consistently as the best forecasting model, one must update the datasets regularly to generate useful forecasts. time series of epidemics can oscillate heavily due to various epidemiological factors, and these fluctuations are challenging to capture adequately for precise forecasting.
of the five countries considered, all except brazil and peru will likely face a diminishing trend in the number of new confirmed cases of the covid-19 pandemic. based on the short-term out-of-sample forecasts reported in this study, the lockdown and shutdown periods can be adjusted accordingly to handle the uncertain and vulnerable situations of the covid-19 pandemic. authorities and health care services can modify their planning of stockpiles and hospital beds, depending on these covid-19 pandemic forecasts. models are constrained by what we know and what we assume, but, used appropriately and with an understanding of these limitations, they can and should help guide us through this pandemic. since purely statistical approaches do not account for how transmission occurs, they are generally not well suited for long-term predictions about epidemiological dynamics (such as when the peak will occur and whether resurgence will happen) or for inference about intervention efficacy. several forecasting models, therefore, limit their projections to two weeks or a month ahead. in this research, we have focused on analyzing the nature of the covid-19 time series data and understanding the data characteristics of the time series. this empirical work studied a wide range of statistical forecasting methods and machine learning algorithms. we have also presented more systematic representations of the single, ensemble, and hybrid approaches available for epidemic forecasting. this quantitative study could be used to assess and forecast covid-19 confirmed cases, which will benefit epidemiologists and modelers in their real-world applications. considering the scope of the study, we can present a list of challenges of (short-term) pandemic forecasting with the forecasting tools presented in this chapter:
- collect more data on the factors that contribute to daily confirmed cases of covid-19.
- model the entire predictive distribution, with particular focus on accurately quantifying uncertainty [55].
- there is no universal model that can generate 'best' short-term forecasts of covid-19 confirmed cases.
- continuously monitor the performance of any model against real data and either re-adjust or discard models based on accruing evidence.
- developing models in real time for a novel virus, with poor-quality data, is a formidable task and a real challenge for epidemic forecasters.
- epidemiological estimates and compartmental models can be useful for long-term pandemic trajectory prediction, but they often make some unrealistic assumptions [64].
- future research is needed to collect, clean, and curate data and to develop a coherent approach to evaluate the suitability of models with regard to covid-19 predictions and forecast uncertainties.
for the sake of repeatability and reproducibility of this study, all codes and data sets are made available at https://github.com/indrajitg-r/forecasting-covid-19-cases.
- github repository
- our world in data
- worldometers data repository
- modeling the impact of social distancing, testing, contact tracing and household quarantine on second-wave scenarios of the covid-19 epidemic. medrxiv
- forecasting time series using wavelets
- athanasios tsakris, and constantinos siettos. data-based analysis, modelling and forecasting of the covid-19 outbreak
- infectious diseases of humans: dynamics and control
- stability analysis and numerical simulation of seir model for pandemic covid-19 spread in indonesia
- principles of forecasting: a handbook for researchers and practitioners
- the theta model: a decomposition approach to forecasting
- the combination of forecasts
- real estimates of mortality following covid-19 infection.
  the lancet infectious diseases
- lyapunov characteristic exponents for smooth dynamical systems and for hamiltonian systems; a method for computing all of them
- long-term storage: an experimental study
- package 'pracma'
- the combination of forecasts: a bayesian approach
- time series analysis: forecasting and control
- distribution of residual autocorrelations in autoregressive-integrated moving average time series models
- refining the global spatial limits of dengue virus transmission by evidence-based consensus
- time series: theory and methods
- ensemble method for dengue prediction
- theta autoregressive neural network model for covid-19 outbreak predictions. medrxiv
- forecasting dengue epidemics using a hybrid methodology
- an integrated deterministic-stochastic approach for predicting the long-term trajectories of covid-19. medrxiv
- real-time forecasts and risk assessment of novel coronavirus (covid-19) cases: a data-driven analysis
- time-series forecasting
- the analysis of time series: an introduction
- a time-dependent sir model for covid-19 with undetectable infected persons
- multiplicative error modeling approach for time series forecasting
- combining forecasts: a review and annotated bibliography
- 25 years of time series forecasting
- forecasting time series with complex seasonal patterns using exponential smoothing
- fair allocation of scarce medical resources in the time of covid-19
- analysis and forecast of covid-19 spreading in china, italy and france
- time series forecasting with neural networks: a comparative study using the air line data
- chaotic attractors of an infinite-dimensional dynamical system
- predicting chaotic time series.
  physical review letters
- impact of non-pharmaceutical interventions (npis) to reduce covid-19 mortality and healthcare demand
- non-linear time series models in empirical finance
- nonlineartseries: nonlinear time series analysis
- deep learning
- an introduction to long-memory time series models and fractional differencing
- improved methods of combining forecasts
- critical care utilization for the covid-19 outbreak in lombardy, italy: early experience and forecast during an emergency response
- measuring skewness and kurtosis
- clinical characteristics of coronavirus disease 2019 in china
- business forecasting
- space-time modelling with long-memory dependence: assessing ireland's wind power resource
- the elements of statistical learning: data mining, inference, and prediction
- seir modeling of the covid-19 and its dynamics
- practical implementation of nonlinear time series methods: the tisean package. chaos: an interdisciplinary
- feasibility of controlling covid-19 outbreaks by isolation of cases and contacts. the lancet global health
- wrong but useful: what covid-19 epidemiologic models can and cannot tell us
- the effectiveness of quarantine of wuhan city against the corona virus disease 2019 (covid-19): a well-mixed seir model analysis
- artificial intelligence forecasting of covid-19 in china
- clinical features of patients infected with 2019 novel coronavirus in wuhan, china. the lancet
- forecasting with exponential smoothing: the state space approach
- forecasting: principles and practice
- unmasking the theta method
- automatic time series for forecasting: the forecast package for r. number 6/07. monash university, department of econometrics and business statistics
- forecasting for covid-19 has failed
- an introduction to statistical learning
- multivariate bayesian structural time series model
- nonlinear time series analysis
- an artificial neural network (p, d, q) model for time series forecasting.
  expert systems with applications
- statistical notes for clinical researchers: assessing normal distribution (2) using skewness and kurtosis
- projecting the transmission dynamics of sars-cov-2 through the postpandemic period
- nnfor: time series forecasting with neural networks
- early dynamics of transmission and control of covid-19: a mathematical modelling study. the lancet infectious diseases
- metalearning: a survey of trends and technologies
- meta-learning for time series forecasting and forecast combination
- trend and forecasting of the covid-19 outbreak in china
- the end of social confinement and covid-19 re-emergence risk
- arma models and the box-jenkins methodology
- time series modelling to forecast the confirmed and recovered cases of covid-19
- global spread of dengue virus types: mapping the 70 year history
- fforma: feature-based forecast model averaging
- introduction to the theory of statistics
- the assessment of probability distributions from expert opinions with an application to seismic fragility curves
- social contacts and mixing patterns relevant to the spread of infectious diseases
- comparative study of wavelet-arima and wavelet-ann models for temperature time series data in northeastern bangladesh
- wavelet methods for time series analysis
- comparing sars-cov-2 with sars-cov and influenza pandemics. the lancet infectious diseases
- forecasting the novel coronavirus covid-19
- a review of epidemic forecasting using artificial neural networks
- testing for a unit root in time series regression
- beta autoregressive fractionally integrated moving average models
- the many estimates of the covid-19 case fatality rate. the lancet infectious diseases
- predictions, role of interventions and effects of a historic national lockdown in india's response to the covid-19 pandemic: data science call to arms.
  harvard data science review
- short-term forecasting covid-19 cumulative confirmed cases: perspectives for brazil
- log-periodogram regression of time series with long range dependence. the annals of statistics
- real-time forecasts of the covid-19 epidemic in china from february 5th to february 24th
- facing covid-19 in italy: ethics, logistics, and therapeutics on the epidemic's front line
- a practical method for calculating largest lyapunov exponents from small data sets. physica d: nonlinear phenomena
- learning internal representations by error propagation
- package 'bsts'. 2020 (depends boom-spikeslab, and linking to boom)
- bayesian variable selection for nowcasting economic time series
- predicting the present with bayesian structural time series
- fast and accurate yearly time series forecasting with forecast combinations
- forecastHybrid: convenient functions for ensemble time series forecasts
- the kpss stationarity test as a unit root test
- a simple explanation of the forecast combination puzzle
- generalizing the theta method for automatic forecasting
- a machine learning forecasting model for covid-19 pandemic in india
- power of the neural network linearity test
- linear models, smooth transition autoregressions, and neural networks for forecasting macroeconomic time series: a re-examination
- forecast combinations. handbook of economic forecasting
- nonlinear time series analysis since 1990: some personal reflections
- non-linear time series: a dynamical system approach
- tseries: time series analysis and computational finance.
  r package version 0
- the 1918 "spanish flu" in spain
- nonlinearity tests for time series
- time series and forecasting: brief history and future research
- multiple time scales analysis of hydrological time series with wavelet transform
- rule induction for forecasting method selection: meta-learning the characteristics of univariate time series
- forecasting sales by exponentially weighted moving averages
- no free lunch theorems for optimization
- nowcasting and forecasting the potential domestic and international spread of the 2019-ncov outbreak originating in wuhan, china: a modelling study
- time series forecasting using a hybrid arima and neural network model
- neural network forecasting for seasonal and trend time series
- forecasting with artificial neural networks: the state of the art
- estimation of local novel coronavirus (covid-19) cases in wuhan, china from off-site reported cases and population flow data from different sources. medrxiv

key: cord-311957-3rmm1hfb authors: faes, c.; abrams, s.; van beckhoven, d.; meyfroidt, g.; vlieghe, e.; hens, n. title: time between symptom onset, hospitalisation and recovery or death: a statistical analysis of different time-delay distributions in belgian covid-19 patients date: 2020-07-21 journal: nan doi: 10.1101/2020.07.18.20156307 sha: doc_id: 311957 cord_uid: 3rmm1hfb

background: there are different patterns in the covid-19 outbreak in the general population and amongst nursing home patients. different age groups are also impacted differently. however, it remains unclear whether the time from symptom onset to diagnosis and hospitalization, or the length of stay in the hospital, differs by age group, gender, or place of residence, or whether it is time dependent.
methods: sciensano, the belgian scientific institute of public health, collected information on hospitalized patients with covid-19 hospital admissions from 114 participating hospitals in belgium.
between march 14, 2020 and june 12, 2020, a total of 14,618 covid-19 patients were registered. the time of symptom onset, time of covid-19 diagnosis, time of hospitalization, time of recovery or death, and length of stay in intensive care are recorded. the distributions of these different event times for different age groups are estimated accounting for interval censoring and right truncation in the observed data.
results: the truncated and interval-censored weibull regression model is the best model for the time between symptom onset and diagnosis/hospitalization, whereas the length of stay in hospital is best described by a truncated and interval-censored lognormal regression model.
conclusions: the time between symptom onset and hospitalization and between symptom onset and diagnosis are very similar, with the median length between symptom onset and hospitalization ranging between 3 and 10.4 days, depending on the age of the patient and whether or not the patient lives in a nursing home. patients coming from a nursing home facility have a slightly prolonged time between symptom onset and hospitalization (i.e., 2 days). the longest delay time is observed in the age group 20-60 years old. the time from symptom onset to diagnosis follows the same trend, but on average is one day longer as compared to the time to hospitalization. the median length of stay in hospital varies between 3 and 10.4 days, with the length of stay increasing with age. however, a difference is observed between patients that recover and patients that die. while the hospital length of stay for patients that recover increases with age, we observe the longest time between hospitalization and death in the age group 20-60. and, while the hospital length of stay for patients that recover is shorter for patients living in a nursing home, the time from hospitalization to death is longer for these patients.
but, over the course of the first wave, the length of stay has decreased, with a decrease in median length of stay of around 2 days.
the world is currently faced with an ongoing coronavirus disease 2019 (covid-19) pandemic. the disease is caused by the severe acute respiratory syndrome coronavirus 2 (sars-cov-2), a new strain of the coronavirus never detected in humans before, and is a highly contagious infectious disease. the first outbreak of covid-19 occurred in wuhan, hubei province, china, in december 2019. since then, several outbreaks have been observed throughout the world. on february 21, 2020, a cluster of covid-19 cases was confirmed in italy, the first european country affected by the virus. one week later, several imported cases were reported in belgium, after a week of school holidays. as from march 7, the first generation of infected individuals as a result of local transmission was confirmed in belgium. there is currently little detailed knowledge on the time interval between symptom onset and hospital admission, nor on the length of stay in hospital.
however, information about the length of stay in hospital is important to predict the number of required hospital beds, both beds in the general hospital and beds in the intensive care unit (icu), and to track the burden on hospitals (vekaria et al., 2020) . the time delay from illness onset to death is important for the estimation of the case fatality ratio (donnelly et al., 2003) . individual-specific characteristics, such as, e.g., the gender, age and co-morbidity of the individual, could potentially explain differences in the length of stay in the hospital and are therefore important to correct for. therefore, in the present study, we investigate the time from symptom onset to hospitalization and the time from symptom onset to diagnosis, as well as the length of stay in hospital. more specifically, we consider and compare parametric distributions for these event times, enabling us to appropriately take care of truncation and interval censoring. in section 2, we introduce the epidemiological data and the statistical methodology used for the estimation of the parameters associated with the aforementioned delay distributions. the results are presented in section 3 and avenues of further research are discussed in section 4. the hospitalized patients clinical database is an ongoing multicenter registry that collects information on hospital admissions related to covid-19 infection. the data are regularly updated as more information from the hospitals is sent in. at the time of writing this manuscript, the data were available until june 12, 2020. the individual patients' data are collected through 2 online questionnaires: one with data on admission and one with data on discharge. in the survey, there is information about 14,618 patients, hospitalized between march 1, 2020 and june 12, 2020, including age and gender. from these, 6,866 of the hospitalized patients are females and 7,752 are males.
258 hospitalized patients are less than 20 years old, 4,338 individuals are between 20 and 59 years of age, 5,480 are between 60 and 79 years of age and 4,542 are above 80 years of age. of these patients, it is known that 2,337 live in a nursing home and 6,812 do not. table 1 shows that a large proportion of the hospitalized 60+ patients are known to live in a nursing home facility (about 12% for patients aged 60-79 and 35% for patients aged 80+). as expected, below the age of 60 years, only a very small proportion of patients come from a nursing home facility. the survey contains information on 1,831 patients hospitalized during the initial phase of the outbreak (between march 1 and march 20); 4,998 patients in the increasing phase of the outbreak (between march 21 and march 31); 5,094 in the descending phase (between april 1 and april 18); and 2,695 individuals at the end of the first wave of the covid-19 epidemic (between april 19 and june 12). the time trend in the number of hospitalizations is presented in figure 1. black dots represent the number of patients included in the national surveillance survey and red dots show the reported number of confirmed hospitalizations in the population. the time trend in the survey matches well with the time trend of the outbreak in the whole population, though with some under-reporting in april and may. the date variables were checked for consistency. observations identified as inconsistent were excluded from analyses related to the inconsistent dates. a flow diagram of the exclusion criteria is displayed in figure 2. the time of symptom onset and time of hospitalization are available for 13,321 patients. the date of symptom onset is determined based on the patient anamnesis taken by the clinicians. patients that were hospitalized before the start of symptoms (i.e., 715 patients) were not included.
these include patients with nosocomial infections who were admitted prior to covid-19 infection for other long-term pathologies, got infected in hospital, and developed covid-19-related symptoms long after admission. patients reporting a delay between symptoms and hospitalization of more than 31 days (i.e., 121 patients) were also not included, because it is unclear for these patients whether the reason for hospital admission was covid-19 infection. a sensitivity analysis including patients with event times above 31 days is conducted. patients with missing information on age (i.e., 12 patients) or gender (i.e., 109 patients) were not included in the statistical analysis. this resulted in a total of 12,364 patients used to estimate the distribution of the time between symptom onset and hospitalization. the time of symptom onset and time of diagnosis are available for 13,156 patients. some of these were diagnosed prior to having symptoms (321) or experienced symptoms more than 31 days before diagnosis (136), and are excluded as these might be errors in reporting dates. similarly, the delay between symptoms and diagnosis is truncated at 31 days, but a sensitivity analysis including these patients is performed. in total, 125 patients were removed because of missing information on age and/or gender, resulting in 12,574 patients used in the analysis of the time from symptom onset to diagnosis. the time between hospitalization and discharge from hospital is available for 12,013 patients, either discharged alive or dead. for patients that were hospitalized before the start of symptoms (i.e., 528 patients), we use the time between the start of symptoms and discharge. patients with negative time intervals (54 patients) are excluded from further analysis. another 134 patients were discarded because of missing covariate information with regard to their age or gender. of the remaining patients, we know that 6,054 recovered from covid-19, while 2,401 died.
among the hospitalized patients, there is information about the length of stay in icu for 1,534 patients. note that we analyzed an anonymized subset of data from the hospital covid-19 clinical surveillance database of the belgian public health institute sciensano. data from sciensano were shared with the first author through a secured data transfer platform. as there are large differences between healthcare systems in different countries, the reporting lag and time to hospitalization can differ substantially amongst countries (who, 2020). in this section, we describe the observed delay from symptom onset to hospitalization, from symptom onset to diagnosis and the length of stay in hospital during the first wave of covid-19 infections in belgium. statistical analysis results thereof are presented in section 3. the observed distribution of the delay from symptom onset to hospitalization (left panel) and to diagnosis (right panel) is presented in figure 3. the observed length of stay in hospital and in icu is presented in figure 4 for all patients, as well as separately for those that died or recovered from the disease. summary information about these distributions is presented in tables a1 and a2 in the appendix. note that the empirical distributions shown in figure 4 do not explicitly account for truncation of the event times at the end of the study. more specifically, the relative frequencies of short-term stays in the hospital are inflated by the absence of patients with longer lengths of stay that are still in hospital at the end of the study period, and therefore missing from the data. consequently, these graphs should be interpreted with care. while the observed delay between symptom onset and hospitalization is between 0 and 31 days, 75% of the hospitalizations occur within 8 days after symptom onset. this is, however, shorter in the youngest age group (<20 years) and in the eldest group (>90 years).
also, patients coming from nursing homes seem to be hospitalized faster than the general population. over the course of the first wave, the observed time between symptom onset and hospitalization was largest in the increasing phase of the epidemic (between march 21 and march 31). the time between symptom onset and diagnosis is very similar, ranging between 0 and 31 days, with 75% of the diagnoses occurring within 8 days after symptom onset. it should be noted that these observations are based on hospitalized patients, and non-hospitalized patients might have a quite different evolution in terms of their symptoms. as non-hospitalized patients were rarely tested in the initial phase of the epidemic, no conclusions can be made for this group of patients. the observed median length of stay in hospital is 8 days, with 95% of the patients having values ranging between 1 and 40 days. 25% of the patients stay longer than 14 days in the hospital. the median length of stay seems to increase with age (from 3 days in age group <20 to 6 in age group 20-80, 9 in age group 80-90 and 10 days in age group >90). on the other hand, with time since introduction of the disease in the population, the length of stay seems to decrease, though this might be biased due to incomplete reporting of the length of stay in patients who are actually still admitted at the time of writing. therefore, these observed statistics should be interpreted with care. similar results are observed for the length of stay in icu. different flexible parametric non-negative distributions can be used to describe the delay distributions, such as the exponential, weibull, lognormal and gamma distributions (held et al., 2020). however, as the reported event times are expressed in days, the discrete nature of the data should be accounted for in the estimation of the parameters of the respective delay distributions. different techniques are used in the literature to take this into account.
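the distribution-comparison step described here (candidate densities assessed via maximum likelihood and bic, as in section 3) can be sketched in python. the delays and the parameter values below are hypothetical placeholders, not estimates from the paper:

```python
import math

def weibull_logpdf(x, shape, scale):
    # log density of a weibull(shape, scale) distribution, x > 0
    z = x / scale
    return math.log(shape / scale) + (shape - 1) * math.log(z) - z ** shape

def lognormal_logpdf(x, mu, sigma):
    # log density of a lognormal with log-mean mu and log-sd sigma, x > 0
    return (-math.log(x * sigma * math.sqrt(2 * math.pi))
            - (math.log(x) - mu) ** 2 / (2 * sigma ** 2))

def bic(loglik, n_params, n_obs):
    # bayesian information criterion: lower values indicate a better fit
    return n_params * math.log(n_obs) - 2 * loglik

# hypothetical day-valued delays from symptom onset to hospitalization
delays = [1, 2, 2, 3, 4, 5, 5, 6, 8, 10, 12, 15]
# hypothetical parameter values standing in for maximum likelihood estimates
ll_weib = sum(weibull_logpdf(x, 1.1, 6.0) for x in delays)
ll_logn = sum(lognormal_logpdf(x, 1.5, 0.8) for x in delays)
print("weibull bic:  ", bic(ll_weib, 2, len(delays)))
print("lognormal bic:", bic(ll_logn, 2, len(delays)))
```

with fitted rather than fixed parameters, the model with the lower bic would be retained, as done for the weibull versus lognormal comparisons reported later in the text.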
we use interval-censoring methods originating from survival analysis to deal with the discrete nature of the data, acknowledging that the observed time is not the exact event time (sun, 2006). let x i be the recorded event time. instead of assuming that x i is observed exactly, it is assumed that the delay lies in the interval (l i , r i ), with l i = x i − 0.5 and r i = x i + 0.5 for x i ≥ 1, and l i = 10^-3 and r i = 0.5 for x i = 0. as a sensitivity analysis, we compare this assumption with the wider interval (x i − 1, x i + 1). in addition, the delay distribution is often truncated, either because there is a maximal clinical delay period (e.g., the time between symptom onset and hospitalization is at most 31 days) or because the hospitalization is close to the end of the study (e.g., if hospitalization is 14 days before the end of the study, the observed length of stay cannot exceed 14 days) and partial information about patients still being hospitalized is not part of the database. we therefore use a likelihood function accommodating the right-truncated and interval-censored nature of the observed data to estimate the parameters of the distributions (cowling et al., 2014). the likelihood function is given by

L(θ) = ∏ i [F(r i ; θ) − F(l i ; θ)] / F(T i ; θ),

in which T i is the (individual-specific) truncation time and F(·) is the cumulative distribution function corresponding to the density function f(·). we truncate the time from symptom onset to diagnosis and the time from symptom onset to hospitalisation at 31 days (T i ≡ 31). the length of stay in hospital is truncated at T i = E − t i , in which t i is the time of hospitalization and E denotes the end of the study period (june 6, 2020).
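a minimal sketch of the right-truncated, interval-censored weibull log-likelihood described above: each observation contributes the interval probability F(r i) − F(l i) divided by the truncation probability F(T i). the five example delays are hypothetical; the shape and scale values are the overall onset-to-hospitalization estimates reported in the results:

```python
import math

def weibull_cdf(x, shape, scale):
    # F(x) for a weibull(shape, scale) distribution
    return 1.0 - math.exp(-((x / scale) ** shape)) if x > 0 else 0.0

def censoring_interval(x):
    # recorded whole-day delay x -> interval (l, r), as defined in the text
    return (1e-3, 0.5) if x == 0 else (x - 0.5, x + 0.5)

def trunc_ic_loglik(delays, truncation_times, shape, scale):
    # sum_i log{ [F(r_i) - F(l_i)] / F(T_i) }
    ll = 0.0
    for x, T in zip(delays, truncation_times):
        l, r = censoring_interval(x)
        ll += math.log(weibull_cdf(r, shape, scale) - weibull_cdf(l, shape, scale))
        ll -= math.log(weibull_cdf(T, shape, scale))
    return ll

delays = [0, 1, 3, 7, 12]          # hypothetical recorded delays in days
T = [31] * len(delays)             # onset-to-hospitalization truncated at 31 days
print(trunc_ic_loglik(delays, T, 0.845, 5.506))
```

in practice this log-likelihood would be maximized over (shape, scale), or over regression coefficients entering those parameters, rather than evaluated at fixed values.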
in addition, to account for possible underreporting in the survey, each contribution is weighted by the post-stratification weight w i ≡ w t defined as

w t = (N t / n t ) · (∑ t n t / ∑ t N t ),

where t is the day of hospitalization of patient i, N t is the number of hospitalizations in the population on day t and n t is the number of reported hospitalizations in the survey on day t. we assume a weibull and a lognormal distribution for the delay time distribution. the two parameters of each distribution are regressed on age, gender, nursing home status and time period. a maximum likelihood approach is used for parameter estimation. the bayesian information criterion (bic) is used to select the best fitting parametric distribution and the best regression model among the candidate distributions/models. only significant parameters are included in the final model. in addition, the delay distributions are summarized by their estimated mean, median, 5th, 25th, 75th and 95th quantiles, as these can be helpful in guiding policy decision making and future covid-19 modeling approaches. overall, the delay between symptom onset and hospitalization can be described by a truncated weibull distribution with shape parameter 0.845 and scale parameter 5.506. the overall average delay is very similar to the one obtained by abrams et al. (2020) using an erlang delay distribution.

table 2: summary of the regression of the scale (λ) and shape (γ) parameters for the reported delay time between symptom onset and hospitalization and between symptom onset and diagnosis, based on a truncated weibull distribution: parameter estimate, standard error and significance (* corresponds to p-value < 0.05; ** to p-value < 0.01 and *** to p-value < 0.001). the reference group is females of age > 80 living in a nursing home that are hospitalized in the period 01-03 to 20-03.
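the post-stratification weighting can be sketched as follows. the exact normalization of w t is not fully legible in the source, so this sketch assumes the weights are scaled so that they average to one over the survey; the daily counts are hypothetical:

```python
def poststrat_weights(pop_counts, survey_counts):
    """per-day post-stratification weights: w_t = (N_t / n_t) * (sum n_t / sum N_t).

    survey records on under-reported days are up-weighted; the normalizing
    factor makes the weights average to one over the survey records.
    """
    norm = sum(survey_counts.values()) / sum(pop_counts.values())
    return {t: (pop_counts[t] / survey_counts[t]) * norm for t in survey_counts}

# hypothetical daily counts: population hospitalizations (N) vs survey reports (n)
N = {"2020-03-21": 400, "2020-03-22": 500}
n = {"2020-03-21": 200, "2020-03-22": 400}
w = poststrat_weights(N, n)
print(w)
```

each patient's log-likelihood contribution would then be multiplied by the weight of their admission day before summing.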
however, there are significant differences in the time between symptom onset and hospitalization between males and females, among different age groups, between living statuses (nursing home, general population or unknown) and between different reporting periods. as the truncated weibull distribution has a lower bic than the lognormal distribution (bic of 66,923 and 68,657 for the weibull and lognormal distributions, respectively), results for the weibull distribution are presented. in table 2, the regression coefficients of the scale (λ) and shape (γ) parameters of the weibull distribution are presented. the impact on the time between symptom onset and hospitalization is visualized in figure 5, showing the model-based 5%, 25%, 50%, 75% and 95% quantiles of the delay times. age has a major impact on the delay between symptom onset and hospitalization, with the youngest age group having the shortest delay (median of 1 day, but with a quarter of the patients having a delay longer than 2.6 days). the time from symptom onset to hospitalization is more than doubled in the age groups 20-60 and 60-80 as compared to the age group <20 (median close to 4 days and a delay of more than 6.7 days for a quarter of the patients). in contrast, the increase in time between symptom onset and hospitalization is 50% in the age group 80+ as compared to the youngest age group <20 (median delay of 1.6 days, with a quarter of the patients having a delay longer than 4.3 days). after correcting for age, it is observed that the time delay is somewhat higher when patients come from a nursing home facility, with an increase of approximately 2 days. note that in the descriptive statistics, we observed shorter delay times for patients coming from nursing homes. this stems from the fact that 80+ year olds have shorter delay times than patients of age 20-79, and the population size of the 80+ group is much larger than that of the 20-79 group in nursing homes.
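model-based quantiles such as those shown in figure 5 follow from the weibull inverse cdf, Q(p) = scale · (−ln(1 − p))^(1/shape). the sketch below uses the overall estimates (shape 0.845, scale 5.506) and ignores the 31-day truncation, so it is only an approximation of the fitted quantiles:

```python
import math

def weibull_quantile(p, shape, scale):
    # inverse CDF of a weibull(shape, scale): Q(p) = scale * (-ln(1-p))**(1/shape)
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

shape, scale = 0.845, 5.506   # overall onset-to-hospitalization estimates
for p in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(f"{int(p * 100)}% quantile: {weibull_quantile(p, shape, scale):.1f} days")
```

for covariate-specific quantiles, the shape and scale would first be computed from the regression coefficients in table 2 for the group of interest.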
this is known as simpson's paradox. and although statistically significant differences were found for gender and period, we observe very similar time delays between males and females and in the different time periods (see figure a4 in the appendix). note, however, that there are indeed differences, but mainly in the tails of the distribution; with, e.g., the 5% longest delay times between symptoms and hospitalization observed for males. the time between symptom onset and diagnosis is also best described by a truncated weibull distribution (shape parameter 0.900, scale parameter 5.657). as again the truncated weibull distribution has a lower bic value than the lognormal distribution (bic values of 68,106 and 69,652 for weibull and lognormal, respectively), results for the weibull distribution are presented. parameter estimates are very similar to those of the distribution for symptom onset to hospitalization, and are presented in table 2. the median delay between symptom onset and diagnosis is approximately one day longer than the median delay between symptom onset and hospitalization. the diagnosis was typically made upon hospital admission to confirm covid-19 infection, which is why the date of admission is very close to the date of diagnosis. the same effects of age and nursing home are found for the time between symptom onset and diagnosis as for the time to hospitalization. especially in the increasing phase of the epidemic, the time between symptom onset and diagnosis was longer than the time between symptom onset and hospitalization (see figure a4), but this delay has shortened over time. as a sensitivity analysis, a comparison is made with an analysis without truncating the time between symptom onset and hospitalisation or diagnosis. results are presented in figure a5, and are very similar to the ones presented here.
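the nursing-home composition effect described above (simpson's paradox) can be illustrated with entirely hypothetical numbers: within each age group, nursing-home patients have the longer mean delay, yet the pooled nursing-home mean is shorter because the short-delay 80+ group dominates the nursing-home population:

```python
# (age group, nursing home?) -> (mean delay in days, number of patients)
# all values are hypothetical and chosen only to exhibit the paradox
data = {
    ("20-79", True):  (6.0, 100),
    ("20-79", False): (4.0, 5000),
    ("80+",  True):   (3.0, 2000),
    ("80+",  False):  (1.5, 500),
}

def pooled_mean(nursing_home):
    # patient-count-weighted mean delay, pooled over age groups
    rows = [(d, k) for (g, nh), (d, k) in data.items() if nh == nursing_home]
    return sum(d * k for d, k in rows) / sum(k for d, k in rows)

# nursing-home pooled mean is lower despite being higher within each age group
print(pooled_mean(True), pooled_mean(False))
```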
in addition, a sensitivity analysis assuming that the time delay is interval censored with time intervals defined as (x i − 1, x i + 1) is presented in figure a3, yielding very similar results. it was also investigated whether or not there is a difference between neonates (with virtually no symptoms, but diagnosed at the time of birth or at the time of the mother's testing prior to labour) and other children. for all children <20 years of age, we found a median time from symptom onset to hospitalization and diagnosis of 1 and 1.6 days, respectively. if we only consider children >0 years of age, a small increase is found (1.5 (0.5-3.4) days for time to hospitalization and 1.8 (0.7-3.7) days for time to diagnosis). a summary of the estimated length of stay in hospital and icu is presented in table 3 and figure 6, based on the lognormal distribution. the lognormal distribution has a slightly smaller bic value than the weibull distribution for the length of stay in hospital (bic value of 76,928 for weibull and 76,865 for lognormal) and for the length of stay in icu (bic value of 7,341 for weibull and 7,312 for lognormal). the median length of stay in hospital is close to 3 days in the age group less than 20 years old, but 25% of these patients stay longer than 5.5 days in hospital for females and more than 8.6 days for males, and 5% stay longer than 13 days for females and 14 days for males. the length of stay increases with age, with a median length of stay of around 5.4 days for females aged 20-60 and 5.9 days for males aged 20-60. a quarter of the patients in age group 20-60 stay longer than 10 days and 5% stay longer than 24 days. this further increases for patients above 60 years of age, with a median length of stay of around 8.6 and 9.4 days for female and male patients aged 60-80 years, and 9.4 and 10.3 days for female and male patients above 80 years of age. a large proportion of the elderly patients stay much longer in hospital.
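the lognormal summaries of the kind reported in table 3 can be reproduced from a log-mean (µ) and log-sd (σ) via the inverse cdf, Q(p) = exp(µ + σ·Φ⁻¹(p)). the µ and σ values below are illustrative placeholders, not the paper's group-specific estimates:

```python
import math
from statistics import NormalDist

def lognormal_quantile(p, mu, sigma):
    # inverse CDF of a lognormal: Q(p) = exp(mu + sigma * Phi^{-1}(p))
    return math.exp(mu + sigma * NormalDist().inv_cdf(p))

# hypothetical log-mean/log-sd for hospital length of stay
mu, sigma = 2.1, 0.8
for p in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(f"{int(p * 100)}% quantile: {lognormal_quantile(p, mu, sigma):.1f} days")
```

note that the lognormal median is simply exp(µ), which makes the µ regression coefficients in table 3 directly interpretable as multiplicative effects on the median length of stay.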
a quarter of these patients stay longer than 15.7-17.4 days for patients of age 60-80 years and longer than 17.3-19 days for patients of age above 80. some very long hospital stays are observed in this age group, with 5% of the stays being longer than 38 and 41 days for females and males in the age group 60-80 years, and 42 and 46 days in the age group 80+. no significant difference is found for patients coming from nursing homes. over the course of the first wave, the length of stay has slightly decreased, with a decrease in median length of stay of around 2 days from the first period to later periods. note that this result is corrected for the possible bias of prolonged lengths of stay being less probable for more recently admitted patients. it might therefore be related to better clinical experience and improved treatments. the length of stay in icu (based on the lognormal distribution) is on average 3.8 days for patients below 20 years of age, with a quarter of the patients staying longer than 7.6 days in icu. similar to the length of stay in hospital, the length of stay in icu also increases with age. the median length of stay in icu is 6.4 days in the age group 20-60 years and 7.6 days in age group 60-80, while in age group 80+ it is slightly shorter (5.9 days). again, it is observed that a quarter of the patients stay longer than 13 days in icu in age group 20-60, 15.6 days in age group 60-80 and 12 days in age group 80+. patients living in nursing homes stay approximately 2 days longer in intensive care. no major difference is observed in the length of stay in icu between males and females, though some prolonged stays are observed in males as compared to females. similar to the overall length of stay in hospital, the length of stay in icu has decreased over time (with a decrease of 1 day from the first period to later periods, and an additional 2 days in the last period). table 4 summarizes the length of stay in hospital for patients that recovered or passed away.
the lognormal distribution has the smallest bic value for the time from hospitalization to recovery and the weibull distribution for the time from hospitalization to death. figure 6 also displays these distributions.

table 4: summary of the regression of the log-mean (µ) and log-standard deviation (σ) parameters for length of stay in hospital for recovered patients and patients that died, based on the lognormal distribution and weibull distribution: parameter estimate, standard error and significance (* corresponds to p-value < 0.05; ** to p-value < 0.01 and *** to p-value < 0.001). the reference group is females of age > 80 living in a nursing home that are hospitalized in the period 01-03 to 20-03.

for patients that recovered, the length of stay in hospital increased with age (the median length of stay in age group <20 is 5 days, which increases to 8 days in age group 20-60 years, 12 days in age group 60-80 years and 15 days in age group 80+). in contrast to previous results, we observe that patients living in nursing homes leave hospital approximately 1 day faster than the general population. however, the 5% longest stays in hospital before recovery are longer for patients living in nursing homes. but, while the length of stay in hospital for patients that recover increases with age for all age groups, the survival time of hospitalized patients that died is lower for the age groups 60-80 years (median time of 6.7 days) and 80+ (median time of 5.7 days) than for the age group 20-60 years (median time of 12.1 days). also, large differences are observed between patients coming from nursing homes and those who are not, with the time between hospitalization and death being approximately 3 days longer for patients living in a nursing home. no significant differences are found between males and females. as a sensitivity analysis, an analysis assuming that the time delay is interval censored by (x i − 1, x i + 1) is presented in figure a3. results are almost identical to those presented previously.
it was also investigated whether the shorter duration of hospitalization for patients <20 years could be due to neonates, for whom the duration of stay is often determined by the duration of post-delivery recovery of the mother. and indeed, the length of stay in hospital for the youngest age group increases slightly if we take out the children of 0 years of age, to 4.1 (2.2, 7.6) days for males and 3.7 (2, 6.9) days for females. the length of stay in hospital for recovered patients increases to 6.4 (3.7, 11) days for males and 5.9 (3.4, 10.2) days for females between 1 and 19 years of age, making it very similar to that of the 20-60 year old patients that recovered. no impact was observed on the length of stay in icu. previous studies in other countries reported a mean time from symptom onset to hospitalization of 2.62 days in singapore, 4.41 days in hong kong and 5.14 days in the uk (pellis et al., 2020). other studies report mean values of time to hospitalization ranging from 5 to 9.7 days (linton et al., 2020; kraemer et al., 2020; ferguson et al., 2020). in belgium, the overall mean time from symptom onset to hospitalization is 5.74 days, which is slightly longer than the reported delay in other countries, but depending on the patient population, estimates range between 3 and 10.4 days in belgium. the time from symptom onset to hospitalization is largest in the age group 20-60 years old, followed by the 60-80 years old. if we compare patients within the same age group, it is observed that the time delay is somewhat higher when patients come from a nursing home facility, with an increase of approximately 2 days. the time from symptom onset to diagnosis shows the same behaviour, with a slightly longer delay than the time from symptom onset to hospitalization. to investigate the length of stay in hospital, we should make a distinction between patients that recover and patients that die.
while the median length of stay for patients that recover varies between 5 days (in the age group <20) and 15.7 (in the age group 80+), the median length of stay for patients that die varies between 5.7 days (in the age group 80+) and 12.2 days (in the age group 20-60). over all patients, the median length of stay varies between 3.1 days (in the age group <20) and 10.4 (in the age group 80+). in general, it is observed that the length of stay in hospital for patients that recover increases with age, and males need a slightly longer time to recover than females. but, patients living in nursing homes leave hospital sooner than patients in the same age group from the general population. in contrast, the time between hospitalization and death is longest for the age group 20-60 years, with shorter survival times for the age groups 60-80 years and 80+. the length of stay in hospital for patients that die is longer for patients coming from nursing homes than for patients from the same age group from the general population. a similar trend is observed for the length of stay in icu. the length of stay in belgian hospitals is within the range of those observed in other countries, though especially the length of stay in icu seems short in belgian hospitals (rees et al., 2020). different sensitivity analyses indicated that the results are robust to some of the assumptions made in the modeling. however, alternative methods could still be investigated to improve the estimation of the delay distributions. first, alternative distributions can be used, having more than two parameters and thus more flexibility, e.g., generalized gamma distributions (for which the gamma, exponential and weibull distributions are special cases). second, a truncated doubly-interval-censored method could be considered to account for the uncertainty in both time points determining the observed delays (and their intervals).
finally, the impact of severity of illness and co-morbidity on the length of stay in hospital is very important. this was not investigated in this study, as this information was not made available, but it is an important factor to investigate in future analyses.

references (titles as recorded in the source):
- epidemiological determinants of spread of causal agent of severe acute respiratory syndrome in hong kong
- estimating incubation period distributions with coarse data
- incubation period and other epidemiological characteristics of 2019 novel coronavirus infections with right truncation: a statistical analysis of publicly available case data
- robust reconstruction and analysis of outbreak data: influenza a (h1n1)v transmission in a school-based population
- estimation of the serial interval of influenza
- rapid establishment of a national surveillance of covid-19 hospitalizations in belgium
- modeling the early phase of the belgian covid-19 epidemic using a stochastic compartmental model and studying its implied future trajectories (simid covid-19 team; beutels, p.; hens, n., 2020)
- hospital length of stay for covid-19 patients: data-driven methods for forward planning
- covid-19 length of hospital stay: a systematic review and data synthesis

key: cord-327396-lshp0u5w authors: radoykov, s. title: in times of crisis, anticipate mourning date: 2020-04-02 journal: encephale doi: 10.1016/j.encep.2020.03.002 doc_id: 327396 cord_uid: lshp0u5w

last year, i graduated as a young fellow in psychiatry. around the same time, i lost three close relatives. two of my colleagues also lost cherished people, and their suffering deepened my own. i didn't think at the time that going through this painful grieving process would provide me with strength and prepare me for the coronavirus pandemic ahead.
healthcare professionals are currently striving to save as many lives as possible, as we face a new global viral threat. given the improvements in medical care in the last century, some patients are indeed saved every day. for other patients, that positive outcome is turning out to be impossible. with over 100,000 deaths worldwide [1], many people are now grieving loved ones. grief is a process that has evolved over centuries to help humankind overcome anxiety around death and dying. its natural processes involve culture- and tradition-based rituals that serve the purpose of overcoming the suffering while maintaining a healthy psychological bond and distance with the dead [2]. unfortunately, due to confinement restrictions, there are reports of citizens neither being able to say goodbye to their loved ones, nor participating in essential mortuary rituals [3, 4]. the surviving population will need their mental health in order to rebuild worldwide peace of mind and regain a sense of hope and prosperity. current safety regulations notwithstanding, we need to remember that most people will probably overcome the covid-19 infection, and many of them will be mourning other people. careful planning and attention should therefore be devoted to supporting patients and families in these challenging times and arranging for some form of last human contact, either in person or via remote technology. people deserve the right to actively engage in the death process of their closest loved ones, to participate in the mortuary rituals, and to know where their loved one's body is located or buried. furthermore, and because they have no choice in the matter, caregivers will also experience grief, as a result of witnessing many passings in a brief period of time. they will need time and the possibility to recognize, validate and share their own feelings of sadness, fear and helplessness. sometimes, as a team. helping mourning families will ultimately help caregivers mourn, as well.
now more than ever, special care and consideration should be given to the end of life, in a sincere and straightforward way. dignity over fear. the author declares that he has no competing interest. other conflicts of interest: teaching psychotherapy and clinical hypnosis for several universities and teaching institutions. reference: rituels de deuil, travail du deuil, 3rd ed. france: la pensée sauvage. hôpital cochin, 89, rue d'assas, 75006 paris, france. e-mail address: dr@radoykov

key: cord-289498-6hf3axps authors: tull, matthew t.; barbano, anna c.; scamaldo, kayla m.; richmond, julia r.; edmonds, keith a.; rose, jason p.; gratz, kim l. title: the prospective influence of covid-19 affective risk assessments and intolerance of uncertainty on later dimensions of health anxiety date: 2020-08-12 journal: j anxiety disord doi: 10.1016/j.janxdis.2020.102290 doc_id: 289498 cord_uid: 6hf3axps

the covid-19 pandemic is likely to increase risk for the development of health anxiety. given that elevated health anxiety can contribute to maladaptive health behaviors, there is a need to identify individual difference factors that may increase health anxiety risk. this study examined the unique and interactive relations of covid-19 affective risk assessments (worry about risk for contracting/dying from covid-19) and intolerance of uncertainty to later health anxiety dimensions. a u.s. community sample of 364 participants completed online self-report measures at a baseline assessment (time 1) and one month later (time 2). time 1 intolerance of uncertainty was uniquely associated with the time 2 health anxiety dimension of body vigilance. time 1 affective risk assessments and intolerance of uncertainty were uniquely associated with later perceived likelihood that an illness would be acquired and anticipated negative consequences of an illness.
the latter finding was qualified by a significant interaction, such that affective risk assessments were positively associated with anticipated negative consequences of having an illness only among participants with mean and low levels of intolerance of uncertainty. results speak to the relevance of different risk factors for health anxiety during the covid-19 pandemic and highlight targets for reducing health anxiety risk. beginning in late 2019, a severe acute respiratory syndrome coronavirus began to rapidly spread across the globe, becoming an unprecedented public health event (centers for disease control and prevention [cdc], 2020; world health organization [who], 2020b). on january 30, 2020, the who announced that covid-19 was a public health emergency of international concern, and in march 2020, pandemic status was reached. currently, over 14 million confirmed cases of covid-19 have been reported worldwide, and over 600,000 people have died from the disease (cdc, 2020; who, 2020b). within the u.s. alone, there have been over 3.7 million confirmed cases of covid-19, with over 140,000 mortalities attributed to the virus (cdc, 2020). due to covid-19's long incubation period, ease of transmission, high mortality rate (relative to the seasonal flu), and lack of pharmacological interventions (linton et al., 2020; shereen, khan, kazmi, bashir, & siddique, 2020), governments worldwide have had to implement extraordinary physical distancing interventions in an attempt to slow the spread of the virus, reduce covid-19 mortality rates, and minimize the burden on the health care system. within the u.s., implementation of stay-at-home orders began in mid-march 2020, with most states having such orders in place by early april 2020 (mervosh, lu, & swales, 2020). although no vaccine or established treatments for covid-19 are currently available, strict stay-at-home orders within the u.s. are beginning to ease.
specifically, all 50 states have taken steps to reopen businesses throughout may 2020, with most moving to rescind stringent stay-at-home orders. health anxiety refers to the tendency to interpret bodily sensations (e.g., muscle soreness, shortness of breath, sore throat) as an indication of illness, infection, or some other threat to physical health (taylor & asmundson, 2004). at high levels, health anxiety may contribute to increased body vigilance, catastrophic misinterpretation of bodily sensations, and illness behavior (e.g., reassurance seeking on the internet, frequent and unnecessary visits to a doctor or emergency room, excessive collection of personal protective equipment; asmundson et al., 2010; asmundson & taylor, 2002b; taylor & asmundson, 2004). in the context of a pandemic, individuals with elevated health anxiety may be particularly likely to experience an increase in awareness and catastrophic misinterpretation of bodily sensations that result in maladaptive safety-seeking behavior (asmundson & taylor, 2020b; taylor, 2019). for example, a recent study found that health anxiety was associated with covid-19 related anxiety and cyberchondria (i.e., the repeated carrying out of health-related internet searches in an attempt to obtain reassurance or reduce health-related anxiety; jungmann & witthöft, 2020). given the potential negative consequences associated with health anxiety-related behaviors in the context of a pandemic (e.g., increased doctor visits may overwhelm the health care system, stockpiling of personal protective equipment may decrease or eliminate its availability to others in need), there is a need to identify individual difference factors that may increase risk for health anxiety in the context of the current covid-19 pandemic. one such risk factor for health anxiety may be an individual's perceived likelihood of becoming infected with or dying from covid-19. 
past research has found that individuals with elevated health anxiety are more likely to cognitively overestimate their risk for illness (hadjistavropoulos, craig, & hadjistavropoulos, 1998; marcus & church, 2003). however, health behavior models are increasingly highlighting the relevance of affect-laden risk or vulnerability assessments (vs. more cognitively-based assessments where individuals deliberately estimate the probability or likelihood of a particular health event) to psychological outcomes, emphasizing the relative importance of the extent to which individuals feel that they are at risk for or worry about certain health events (i.e., affective risk assessments; loewenstein, weber, hsee, & welch, 2001; janssen, van osch, lechner, candel, & de vries, 2012; janssen, waters, van osch, lechner, & de vries, 2012). for example, janssen, van osch, et al. (2012) found that affective risk assessments about cancer risk were more strongly related to cancer-specific health anxiety than cognitive risk assessments. likewise, affective risk assessments have been found to be more highly related to behavioral intentions and health behaviors than cognitive risk assessments (janssen, van osch, et al., 2012; janssen, waters, et al., 2012). given evidence that worry states may increase attentional bias to threatening stimuli (mogg & bradley, 2005; mogg, mathews, & eysenck, 1992), individuals who experience greater worry about their perceived risk for covid-19 infection and mortality may be more likely to notice and attend to bodily sensations that could be indicative of covid-19 infection (e.g., muscle pain, shortness of breath, cough, chills), resulting in increased health anxiety over time. 
given the unpredictability and variability associated with covid-19 symptom presentations, as well as the potentially long incubation period associated with this virus (i.e., symptoms may present themselves anywhere from 2 to 14 days following exposure), the association between covid-19 affective risk assessments and health anxiety may be particularly strong for individuals with high intolerance of uncertainty. intolerance of uncertainty is broadly defined as a cognitive and emotional tendency to react negatively to uncertain situations or unpredictable future events (freeston, rhéaume, letarte, dugas, & ladouceur, 1994), and has been identified as a key factor in the development and maintenance of problematic worry (buhr & dugas, 2006; dugas, freeston, & ladouceur, 1997; freeston et al., 1994). in addition to demonstrating a relationship with numerous anxiety disorders (boelen & reijntjes, 2009; carleton et al., 2012; gentes & ruscio, 2011; holaway, heimberg, & coles, 2006), intolerance of uncertainty has been associated with increased health anxiety and catastrophic health appraisals. inhibitory facets of intolerance of uncertainty (e.g., diminished functioning in the face of uncertainty) have been shown to predict health anxiety among medically healthy community-dwelling adults (fergus & bardeen, 2013). further, intolerance of uncertainty has been found to moderate the relationship between the frequency of internet searches for health information and health anxiety among medically healthy adults in the community (fergus, 2013). research has also found that intolerance of uncertainty moderates the relationship between catastrophic health appraisals and health anxiety among medically healthy college students, with this relationship emerging as significant only among individuals with high intolerance of uncertainty (fergus & valentiner, 2011). 
more recently, asmundson and taylor (2020a) identified intolerance of uncertainty as a potential individual difference factor that may increase risk for covid-19 related anxiety. in the context of the covid-19 pandemic, high intolerance of uncertainty may further exacerbate worry and negative affect associated with perceived risk for covid-19 infection and mortality, contributing to heightened health anxiety. moreover, given that intolerance of uncertainty may increase the likelihood that ambiguous experiences are perceived as threatening (byrne, hunt, & chang, 2015), high covid-19 affective risk perceptions may be more likely to prompt catastrophic misinterpretations of benign bodily sensations as an indication of illness. the goals of the present study were to examine the prospective relations of covid-19 affective risk assessments and intolerance of uncertainty to health anxiety dimensions one month later, as well as the moderating role of intolerance of uncertainty in the relations of covid-19 affective risk perceptions to later health anxiety. we predicted that both covid-19 affective risk perceptions and intolerance of uncertainty would predict later health anxiety dimensions, controlling for health anxiety at baseline. further, we predicted that the relationship between covid-19 affective risk assessments and health anxiety would be strongest among individuals with high (vs. mean or low) levels of intolerance of uncertainty. participants were a nationwide community sample of 364 adults from 44 states in the u.s. who completed a prospective online study of health and coping in response to covid-19 through an internet-based platform (amazon's mechanical turk; mturk). participants completed an initial assessment (time 1) from march 27, 2020 through april 5, 2020, and a follow-up assessment (time 2) approximately one month later between april 27, 2020 and may 21, 2020. 
the study was posted to mturk via cloudresearch (cloudresearch.com), an online crowdsourcing platform connected to mturk that allows additional data collection features (e.g., creating selection criteria; chandler, rosenzweig, moss, robinson, & litman, 2019; litman, robinson, & abberbock, 2017). mturk is an online labor market that provides "workers" with the opportunity to complete different tasks in exchange for monetary compensation, such as completing questionnaires for research. data provided by mturk-recruited participants have been found to be as reliable as data collected through more traditional methods (buhrmester, kwang, & gosling, 2011). likewise, mturk-recruited participants have been found to perform better on attention check items than college student samples (hauser & schwarz, 2016) and comparably to participants completing the same tasks in a laboratory setting (casler, bickel, & hackett, 2013). studies also show that mturk samples have the advantage of being more diverse than other internet-recruited or college student samples (buhrmester et al., 2011; casler et al., 2013). for the present study, inclusion criteria included: (1) u.s. resident and (2) at least a 95% approval rating as an mturk worker. participants (51.4% women; 47.5% men; 0.5% non-binary; 0.3% transgender, 0.3% other) ranged in age from 20 to 74 years (m = 41.45, sd = 12.02) at the initial assessment. all states in the u.s. were represented, with the exception of delaware, nebraska, new hampshire, north dakota, vermont, and west virginia. the most frequently endorsed states of residence were florida (11.3%), california (9.1%), pennsylvania (6.3%), texas (6.0%), and new york (5.2%). most participants identified as white (84.9%), followed by black/african-american (9.1%), asian/asian-american (6.3%), latinx (3.8%), and native american (1.4%). 
with regard to other participant demographic characteristics at the time 1 assessment, 11% of participants had completed high school or received a ged, 38.2% had attended some college or technical school, 41.5% had graduated from college, and 9.3% had advanced graduate/professional degrees. most participants were employed full-time (68.1%), followed by employed part-time (16.5%) and unemployed (15.3%). annual household income varied, with 31.3% of participants reporting an income of < $35,000, 31.6% reporting an income of $35,000 to $64,999, and 37.1% reporting an income of > $65,000. finally, 19% of participants reported having a current medical condition (e.g., diabetes, hypertension, asthma) that would increase risk of complications from a covid-19 infection and 20.9% reported living alone. across both assessments, few participants reported having sought out testing for covid-19 (3%) or having a confirmed covid-19 infection (0.3%). a demographic form was completed by all participants at the time 1 and time 2 assessments. information collected from the demographic form included age, sex, gender, racial/ethnic background, income level, highest level of education attained, employment status, the number of people in the household, state of residence, current medical conditions that could increase risk for susceptibility to and/or complications from covid-19, whether participants had sought out testing for covid-19, and whether participants had been infected with covid-19. covid-19 affective risk was assessed at time 1 using a 3-item self-report measure specifically created for this study. participants responded to questions about covid-19-related worry about risk (i.e., "how worried are you about your level of risk…") in three domains: (a) contracting covid-19, (b) dying from covid-19, and (c) spreading covid-19 to others (should they have it). 
participants responded to each item using a 5-point likert-type scale ranging from 1 (not at all worried) to 5 (extremely worried). research using similar self-report items (e.g., klein, 2002; rose, 2010) has shown that affective risk assessments are highly correlated with behavioral intentions and health behaviors. given that few participants in this sample reported having a confirmed covid-19 infection, as well as our interest in evaluating personal affective risk assessments (vs. assessments of others' risks), only the items pertaining to contracting and dying from covid-19 were used. these items were summed to create a covid-19 affective risk index. internal consistency was acceptable in this sample (α = .87). the intolerance of uncertainty scale-short form (ius-12; carleton, norton, & asmundson, 2007) was used to assess intolerance of uncertainty at the time 1 assessment. the ius-12 is a 12-item measure that assesses prospective and inhibitory anxiety. this scale was adapted from the 27-item intolerance of uncertainty scale (freeston et al., 1994) that was originally designed to measure six elements related to the inability to withstand uncertainty (i.e., emotional and behavioral consequences of being uncertain, beliefs that uncertainty reflects one's character, expectations that the future is predictable, frustration when the future is not predictable, efforts aimed at controlling the future, and inflexible responses during uncertain situations). example items include, "a small unforeseen event can spoil everything, even with the best of planning," and "i can't stand being taken by surprise." participants rate the extent to which they agree with each item on a 5-point likert-type scale (1 = "not at all characteristic of me;" 2 = "a little characteristic of me;" 3 = "somewhat characteristic of me;" 4 = "very characteristic of me;" 5 = "entirely characteristic of me"). 
for the present study, responses to each item were summed to create an overall index of intolerance of uncertainty, with possible scores ranging from 12-60 and higher scores reflecting greater intolerance of uncertainty. although carleton et al. (2007) found that the ius-12 has a stable two-factor structure, recent studies have demonstrated that the majority of the measure's variance is accounted for by a single latent variable; consequently, it is recommended that a single, overall ius-12 score is used (hale et al., 2016; lauriola, mosca, & carleton, 2016; shihata, mcevoy, & mullan, 2018). there is evidence for the reliability and construct validity of the ius-12 within non-clinical and community samples (carleton et al., 2007; carleton, collimore, & asmundson, 2010; lauriola et al., 2016). internal consistency for this measure in this sample was acceptable (α = .95). the short health anxiety inventory (shai; salkovskis, rimes, warwick, & clark, 2002) is an 18-item measure that was used to assess different dimensions of health anxiety at the time 1 and time 2 assessments. the shai was modified to assess health anxiety symptoms over the past week (vs. the past 6-months on the original measure). prior research found that the shai assesses three dimensions of health anxiety: (a) illness likelihood (i.e., the perceived likelihood that a serious illness will be acquired, as well as intrusive thoughts about one's health; 10 items); (b) body vigilance (i.e., attention to bodily sensations or changes in bodily sensations; 3 items); and (c) illness severity (i.e., anticipated burden, impairment, or negative consequences associated with having a serious illness; 4 items). depression and anxiety symptom severity at time 1 were assessed using the 21-item version of the depression anxiety stress scales (dass-21; lovibond & lovibond, 1995). 
the current study utilized the depression and anxiety symptom severity subscales as covariates. the dass-21 is a self-report measure that assesses the unique symptoms of depression, anxiety, and stress. participants rate the items on a 4-point likert-type scale indicating how much each item applied to them in the past week (0 = "did not apply to me at all;" 1 = "applied to me some of the time;" 2 = "applied to me a good part of the time;" 3 = "applied to me most of the time"). this measure has demonstrated good reliability and validity (antony, bieling, cox, enns, & swinson, 1998; roemer, 2001). internal consistency of the depression (α = .93) and anxiety (α = .89) symptom severity subscales in this sample were acceptable. all procedures received prior approval from the university of toledo's institutional review board. to ensure that the study was not being completed by a bot (i.e., an automated computer program used to complete simple tasks), participants responded to a completely automatic public turing test to tell computers and humans apart (captcha) at the time 1 assessment prior to providing informed consent. participants were also informed on the consent form that "…we have put in place a number of safeguards to ensure that participants provide valid and accurate data for this study. if we have strong reason to believe your data are invalid, your responses will not be approved or paid and your data will be discarded." initial data were collected in blocks of nine participants at a time and all data, including attention check items and geolocations, were examined by researchers before compensation was provided. 
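the scale scores and internal-consistency values reported for these measures (likert items summed into a total score; α values of .87-.95) follow from the standard cronbach's alpha formula. a minimal sketch of both computations on synthetic likert data (everything below is simulated for illustration; none of it is the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """cronbach's alpha for an (n_respondents, n_items) matrix of likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# synthetic 12-item, 1-5 likert responses driven by one latent factor
rng = np.random.default_rng(0)
latent = rng.normal(size=(364, 1))             # one latent score per "participant"
items = np.clip(np.rint(3 + latent + 0.8 * rng.normal(size=(364, 12))), 1, 5)

scale_score = items.sum(axis=1)                # summed 12-60 index, as in the text
alpha = cronbach_alpha(items)
print(round(float(alpha), 2))
```

because the simulated items share one latent factor, alpha comes out high, mirroring the pattern reported for the ius-12; with uncorrelated items it would fall toward zero.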
attention check items included three explicit requests embedded within the questionnaires (e.g., "if you are paying attention, choose '2' for this question"), two multiple-choice questions (e.g., "how many words are in this sentence?"), a math problem (e.g., "what is 4 plus 2?"), and a free-response item (e.g., "please briefly describe in a few sentences what you did in this study"). participants who failed one or more attention check items were removed from the study (n = 53 of 553 completers of the time 1 assessment). workers who completed the initial assessment and whose data were considered valid (based on attention check items and geolocations; n = 500) were compensated $3.00 for their participation and invited to participate in the one-month follow-up assessment. one month following completion of the time 1 assessment, participants were contacted via cloudresearch (litman et al., 2017) to complete the time 2 assessment. this online platform allows researchers to email participants a link to follow-up assessments while maintaining anonymity (i.e., study personnel never see email addresses) by using their amazon worker id numbers (provided by mturk). of the 500 participants who completed the initial assessment, 77% (n = 386) completed the follow-up assessment. there were no significant differences in time 1 intolerance of uncertainty or health anxiety dimensions between participants who completed (vs. did not complete) the follow-up assessment (ps > .11). procedures for assessing the validity of the time 2 data (i.e., examining attention check items and geolocations) were similar to those used for the time 1 assessment. participants who failed two or more attention check items at the time 2 assessment were removed from the study (n = 3); the remainder were compensated $3.00 for their participation. 
in addition, two participants were excluded for non-reconcilable differences in demographic data between the time 1 and time 2 assessments, and 17 additional participants were excluded for incomplete data on the primary variables of interest, resulting in a final sample size of 364. results of the hierarchical linear regression analyses examining the main and interactive effects of time 1 covid-19 affective risk and intolerance of uncertainty on time 2 health anxiety dimensions are presented in table 2. the overall model was significant, accounting for 61% of the variance in the time 2 illness likelihood dimension of health anxiety, f(4, 359) = 141.14, p < .001, f = 1.24. the addition of time 1 covid-19 affective risk and intolerance of uncertainty in the second step of the model accounted for additional significant variance in time 2 illness likelihood above and beyond time 1 illness likelihood, δr² = .02, f(2, 360) = 11.04, p < .001, f = .24, with both variables demonstrating a significant unique positive association with time 2 illness likelihood. the addition of the interaction term did not significantly improve the model, δr² = .001, f(1, 359) = 1.38, p = .242, f = .03. the addition of the interaction term did not significantly improve the model, δr² = .00, f(1, 359) = .15, p = .697, f = .00. the overall model was significant, accounting for 55% of the variance in the time 2 outcome. to ensure that the significant interaction could not be attributed to other demographic or psychiatric variables, the regression analysis was rerun with covariates included. this study sought to examine the unique and interactive prospective relations of covid-19 affective risk assessments (i.e., worry about risk for contracting or dying from covid-19) and intolerance of uncertainty to health anxiety one month later. hypotheses were partially supported. 
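the hierarchical strategy described above (step 1: the time 1 outcome; step 2: affective risk and intolerance of uncertainty; step 3: their product term) tests each step with an F-test on the R² increment between nested models. a sketch of that model-comparison arithmetic with plain least squares, on simulated data with arbitrary effect sizes (none of the numbers below are the study's results; only the degrees of freedom mirror the reported f(1, 359) test):

```python
import numpy as np

def r_squared(X, y):
    """ols fit via least squares; returns model R^2."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

def delta_r2_ftest(X_small, X_full, y):
    """F-test for the R^2 increment of the full over the reduced (nested) model."""
    n = len(y)
    r2_s, r2_f = r_squared(X_small, y), r_squared(X_full, y)
    df1 = X_full.shape[1] - X_small.shape[1]
    df2 = n - X_full.shape[1]
    F = ((r2_f - r2_s) / df1) / ((1 - r2_f) / df2)
    return r2_f - r2_s, F, df1, df2

rng = np.random.default_rng(1)
n = 364
risk = rng.normal(size=n)        # centered covid-19 affective risk (illustrative)
iu = rng.normal(size=n)          # centered intolerance of uncertainty (illustrative)
t1_anx = rng.normal(size=n)      # time 1 health anxiety dimension (illustrative)
# simulate a time 2 outcome with main effects and a small interaction
t2_anx = 0.7 * t1_anx + 0.2 * risk + 0.2 * iu - 0.1 * risk * iu + rng.normal(size=n)

ones = np.ones((n, 1))
step1 = np.column_stack([ones, t1_anx])          # baseline outcome only
step2 = np.column_stack([step1, risk, iu])       # add the two predictors
step3 = np.column_stack([step2, risk * iu])      # add the interaction term

d_r2, F, df1, df2 = delta_r2_ftest(step2, step3, t2_anx)
print(f"ΔR² = {d_r2:.3f}, F({df1}, {df2}) = {F:.2f}")
```

with n = 364 and five columns in the full model, the interaction step is tested on (1, 359) degrees of freedom, matching the denominators reported in the text.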
first, as predicted, covid-19 affective risk assessments and intolerance of uncertainty at time 1 were uniquely associated with later perceived likelihood that a serious illness would be acquired (i.e., illness likelihood subscale on the shai) and anticipated negative consequences of having a serious illness (i.e., illness severity subscale on the shai). these findings are consistent with past research demonstrating relationships between health anxiety and both intolerance of uncertainty and concerns regarding perceived vulnerability to disease (e.g., duncan, schaller, & park, 2009). however, only intolerance of uncertainty at time 1 was found to be uniquely associated with time 2 body vigilance. the items assessing body vigilance on the shai focus on bodily sensations in general or aches and pains. although worry and anxiety regarding risk for contracting or dying from covid-19 would be expected to amplify sensitivity to bodily sensations (consistent with a seek to avoid process; barlow, 2002), it is possible that this process might be more evident for bodily sensations that are specifically associated with covid-19 infection (e.g., fever, shortness of breath, headache). however, as an individual difference factor that is not unique to covid-19, intolerance of uncertainty may be more likely to increase awareness of bodily sensations in general to identify any potential sources of health threat, thus increasing a sense of certainty, control, or predictability. contrary to hypotheses, intolerance of uncertainty was not found to moderate the association between time 1 covid-19 affective risk assessments and time 2 illness likelihood or body vigilance. in addition, although intolerance of uncertainty was found to moderate the association between covid-19 affective risk assessments and time 2 illness severity, the nature of this interaction was different than what was predicted. 
specifically, time 1 covid-19 affective risk assessments were significantly positively associated with time 2 illness severity only at mean and low levels of intolerance of uncertainty. at high levels of intolerance of uncertainty, no significant association was found between covid-19 affective risk assessments and health anxiety. this finding highlights the multiple ways in which individuals may develop anxiety surrounding the potential negative consequences associated with illness. even in the absence of an established vulnerability for the development of health anxiety (i.e., intolerance of uncertainty), elevated worry about risk for contracting or dying from covid-19 appears to be sufficient for the greater anticipation of negative consequences associated with having an illness. the experience of frequent worry thoughts surrounding risk for covid-19 infection or mortality may increase health anxiety by contributing to the increased generation of potential catastrophic outcomes that could occur if one were infected with the virus. indeed, in other health conditions (e.g., irritable bowel syndrome), worry has been found to contribute to increased suffering through catastrophizing (lackner & quigley, 2005). however, among individuals high in intolerance of uncertainty, covid-19 affective risk assessments seem less relevant to later health anxiety, providing further evidence that intolerance of uncertainty may be a strong risk factor for the development or exacerbation of health anxiety. such a finding is consistent with previous research showing that intolerance of uncertainty predicts health anxiety above and beyond other established anxiety risk factors (e.g., anxiety sensitivity, negative affect; fergus & bardeen, 2013). study limitations warrant consideration. first, all outcomes were assessed using self-report questionnaires, which have the potential to be influenced by social desirability biases or recall difficulties. 
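probing a significant interaction at low, mean, and high moderator values, as described above, is conventionally done pick-a-point style: compute the simple slope of the predictor at the moderator mean and at ±1 sd. a minimal sketch on simulated data (variable names and effect sizes are illustrative, not the study's; the simulated pattern merely mimics the reported attenuation at high moderator values):

```python
import numpy as np

def simple_slope(x, m, y, at_m):
    """slope of y on x when moderator m is fixed at at_m, from the moderated ols
    y = b0 + b1*x + b2*m + b3*x*m: slope(x | m = at_m) = b1 + b3*at_m."""
    X = np.column_stack([np.ones_like(x), x, m, x * m])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b[1] + b[3] * at_m

rng = np.random.default_rng(2)
n = 364
risk = rng.normal(size=n)      # centered affective risk (illustrative)
iu = rng.normal(size=n)        # centered intolerance of uncertainty (illustrative)
# simulate the reported pattern: risk matters less as iu increases
y = 0.4 * risk + 0.3 * iu - 0.3 * risk * iu + rng.normal(size=n)

for label, at in [("-1 sd", -iu.std()), ("mean", 0.0), ("+1 sd", iu.std())]:
    print(label, round(float(simple_slope(risk, iu, y, at)), 2))
```

centering both variables before forming the product term keeps the first-order coefficients interpretable as simple slopes at the moderator mean, which is why the "mean level" slope falls directly out of the model.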
in addition, we used an unpublished, two-item measure developed specifically for the purposes of this study to assess covid-19 affective risk assessments. although this measure demonstrated associations with our other variables in the expected direction, it is possible that our measure did not provide a comprehensive evaluation of covid-19 affective risk assessments. at the time this study began, other validated measures of covid-19 affective risk assessments were not available. however, since that time, several measures have been published that may provide a better assessment of covid-19 affective risk assessments or the stress and anxiety associated with covid-19 more generally, such as the covid stress scales (taylor et al., 2020) and the coronavirus anxiety scale. in addition, our measures of intolerance of uncertainty and health anxiety were not specific to covid-19; thus, findings cannot speak to the extent to which intolerance of uncertainty surrounding the covid-19 pandemic in particular influences anxiety surrounding the experience and consequences of covid-19 related bodily sensations. in addition, given our recruitment methods and sample (i.e., self-selected mturk workers), results may also not generalize to the larger u.s. population, adults in other countries, or particularly vulnerable populations (e.g., individuals with chronic medical conditions; health care workers; hospitalized patients). replication of our findings is needed within other samples. in addition, although covid-19 affective risk assessments and intolerance of uncertainty were found to predict later health anxiety, it is important to note that average health anxiety levels at time 2 were not at clinical levels (mean shai scores among individuals with hypochondriasis = 32.58; alberts et al., 2013). 
moreover, it is not clear if the levels of health anxiety observed in this study are associated with engagement in adaptive or maladaptive health behaviors. health anxiety is conceptualized as a dimensional variable (taylor & asmundson, 2004) , and moderate levels of health anxiety may be functional in the context of a pandemic, increasing motivation to engage in protective behaviors such as social distancing, hand washing, and wearing a mask when outside of the home. studies employing multiple follow-up assessments are needed to determine whether the health anxiety stemming from covid-19 affective risk assessments and intolerance of uncertainty predicts later engagement in adaptive or maladaptive health behaviors. likewise, research is needed to examine the impact of the covid-19 pandemic on health anxiety within more vulnerable populations, such as individuals with pre-existing illness anxiety disorder or generalized anxiety disorder. despite limitations, findings lend support to the hypothesis that the covid-19 pandemic will result in elevated health anxiety (asmundson & taylor, 2020b) , and add to the growing body of literature on the mental health consequences of this pandemic (cao et al., 2020; gonzález-sanguino et al., 2020; harper et al., 2020; huang & zhao, 2020; jungmann & witthöft, 2020; lee et al., 2020; mckay et al., 2020; moghanibashi-mansourieh, 2020; zhang et al., 2020). specifically, our findings demonstrate that covid-19 affective risk assessments and intolerance of uncertainty are uniquely associated with various dimensions of health anxiety one month later. moreover, in addition to providing further evidence that high levels of intolerance of uncertainty may increase risk for later health anxiety, results highlight one pathway (i.e., affective-based risk assessments) through which individuals without high levels of intolerance of uncertainty may still be susceptible to later health anxiety during this time. 
specifically, the extent to which individuals feel that they are at risk for covid-19 infection and death was associated with elevated health anxiety one month later among individuals with mean and low levels of intolerance of uncertainty. as such, findings highlight a number of potential targets for preventing the development of severe health anxiety that could lead to maladaptive behaviors during the current pandemic. for example, acceptance- and mindfulness-based behavioral interventions (e.g., acceptance-based behavioral therapy for generalized anxiety disorder; roemer, orsillo, & salters-pedneault, 2008) may be particularly useful for addressing worry about risk for contracting or dying from covid-19. psychoeducation on effective behaviors for mitigating risk for covid-19 infection may also reduce worry, and ultimately health anxiety, by modifying risk assessments and increasing a sense of control. cognitive-behavioral interventions that specifically target intolerance of uncertainty (e.g., hebert & dugas, 2019; ladouceur et al., 2000) may also have utility in reducing risk for future health anxiety during this particularly stressful and indeed uncertain time. 
the short health anxiety inventory: psychometric properties and construct validity in a non-clinical sample
health anxiety, hypochondriasis, and the anxiety disorders
the short health anxiety inventory: a systematic review and meta-analysis
psychometric properties of the 42-item and 21-item versions of the depression anxiety stress scales in clinical groups and a community sample
health anxiety: current perspectives and future directions
coronaphobia: fear and the 2019-ncov outbreak
how health anxiety influences responses to viral outbreaks like covid-19: what all decision-makers, health authorities, and health care professionals need to know
anxiety and its disorders
intolerance of uncertainty and social anxiety
investigating the construct validity of intolerance of uncertainty and its unique relationship with worry
amazon's mechanical turk: a new source of inexpensive, yet high-quality, data?
comparing the roles of ambiguity and unpredictability in intolerance of uncertainty
the psychological impact of the covid-19 epidemic on college students in china
"it's not just the judgements-it's that i don't know": intolerance of uncertainty as a predictor of social anxiety
increasingly certain about uncertainty: intolerance of uncertainty across anxiety and depression
fearing the unknown: a short version of the intolerance of uncertainty scale
separate but equal? a comparison of participants and data gathered via amazon's mturk, social media, and face-to-face behavioral testing
coronavirus (covid-19)
online panels in social science research: expanding sampling methods beyond mechanical turk
intolerance of uncertainty and problem orientation in worry
perceived vulnerability to disease: development and validation of a 15-item self-report instrument
cyberchondria and intolerance of uncertainty: examining when individuals experience health anxiety in response to internet searches for medical information
anxiety sensitivity and intolerance of uncertainty: evidence of incremental specificity in relation to health anxiety
intolerance of uncertainty moderates the relationship between catastrophic health appraisals and health anxiety
the consequences of covid-19 pandemic on mental health and implications for clinical practice
why do people worry
a meta-analysis of the relation of intolerance of uncertainty to symptoms of generalized anxiety disorder, major depressive disorder, and obsessive-compulsive disorder
mental health problems and social media exposure during the covid-19 outbreak
cognitive and behavioral responses to illness information: the role of health anxiety
resolving uncertainty about the intolerance of uncertainty scale-12: application of modern psychometric strategies
functional fear predicts public health compliance in the covid-19 pandemic
attentive turkers: mturk participants perform better on online attention checks than do subject pool participants
introduction to mediation, moderation, and conditional process analysis: a regression-based approach
behavioral experiments for intolerance of uncertainty: challenging the unknown in the treatment of generalized anxiety disorder
a comparison of intolerance of uncertainty in analogue obsessive-compulsive disorder and generalized anxiety disorder
generalized anxiety disorder, depressive symptoms and sleep quality during covid-19 outbreak in china: a web-based cross-sectional survey
thinking versus feeling: differentiating between cognitive and affective components of perceived cancer risk
the importance of affectively-laden beliefs about health risks: the case of tobacco use and sun protection
health anxiety, cyberchondria, and coping in the current covid-19 pandemic: which factors are related to coronavirus anxiety?
the shape of and solutions to the mturk quality crisis
comparative risk estimates relative to the average peer predict behavioral intentions and concern about absolute risk
pain catastrophizing mediates the relationship between worry and pain suffering in patients with irritable bowel syndrome
efficacy of a cognitive-behavioral treatment for generalized anxiety disorder: evaluation in a controlled clinical trial
hierarchical factor structure of the intolerance of uncertainty scale short form (ius-12) in the italian version
clinically significant fear and anxiety of covid-19: a psychometric examination of the coronavirus anxiety scale
incubation period and other epidemiological characteristics of 2019 novel coronavirus infections with right truncation: a statistical analysis of publicly available case data
turkprime.com: a versatile crowdsourcing data acquisition platform for the behavioral sciences
risk as feelings
manual for the depression anxiety stress scales
are dysfunctional beliefs about illness unique to hypochondriasis?
anxiety regarding contracting covid-19 related to interoceptive anxiety sensations: the moderating role of disgust propensity and sensitivity
see how all 50 states are reopening
see which states and cities have told residents to stay at home
attentional bias in generalized anxiety disorder versus depressive disorder
attentional bias to threat in clinical anxiety states
assessing the anxiety level of iranian general population during covid-19 outbreak
suicide mortality and coronavirus disease 2019-a perfect storm
practitioner's guide to empirically based measures of anxiety
efficacy of an acceptance-based behavior therapy for generalized anxiety disorder: evaluation in a randomized controlled trial
are direct or indirect measures of comparative risk better predictors of concern and behavioural intentions?
the health anxiety inventory: development and validation of scales for the measurement of health anxiety and hypochondriasis
covid-19 infection: origin, transmission, and characteristics of human coronaviruses
a bifactor model of intolerance of uncertainty in undergraduate and clinical samples: do we need to reconsider the two-factor model? psychological assessment
the psychology of pandemics: preparing for the next global outbreak of infectious disease
treating health anxiety: a cognitive-behavioral approach
development and initial validation of the covid stress scales
mental health and psychosocial considerations during the covid-19 outbreak
rolling updates on coronavirus disease (covid-19)
use of hydroxychloroquine and chloroquine during the covid-19 pandemic: what every clinician should know
the differential psychological distress of populations affected by the covid-19 pandemic
key: cord-136421-hcj8jmbm authors: myers, kyle r.; tham, wei yang; yin, yian; cohodes, nina; thursby, jerry g.; thursby, marie c.; schiffer, peter e.; walsh, joseph t.; lakhani, karim r.; wang, dashun title: quantifying the immediate effects of the covid-19 pandemic on scientists date: 2020-05-22 journal: nan doi: nan sha: doc_id: 136421 cord_uid: hcj8jmbm

the covid-19 pandemic has undoubtedly disrupted the scientific enterprise, but we lack empirical evidence on the nature and magnitude of these disruptions. here we report the results of a survey of approximately 4,500 principal investigators (pis) at u.s.- and europe-based research institutions. distributed in mid-april 2020, the survey solicited information about how scientists' work changed from the onset of the pandemic, how their research output might be affected in the near future, and a wide range of individuals' characteristics. scientists report a sharp decline in time spent on research on average, but there is substantial heterogeneity, with a significant share reporting no change or even increases. some of this heterogeneity is due to field-specific differences, with laboratory-based fields being the most negatively affected, and some is due to gender, with female scientists reporting larger declines. however, among the individuals' characteristics examined, the largest disruptions are connected to a usually unobserved dimension: childcare.
reporting a young dependent is associated with declines similar in magnitude to those reported by the laboratory-based fields and can account for a significant fraction of gender differences. amidst scarce evidence about the role of parenting in scientists' work, these results highlight the fundamental and heterogeneous ways this pandemic is affecting the scientific workforce, and may have broad relevance for shaping responses to the pandemic's effect on science and beyond.

by mid-april 2020, the cumulative number of deaths due to covid-19 had reached approximately 115,000, with nearly 1,800 deaths per day in the u.s. and 3,000 deaths per day in europe 1. throughout the u.s. and europe, schools and workplaces were typically required to be closed, and restrictions on gatherings of more than 10 people were in place in most countries 2. for scientists, not only did this drastically change their daily lives, it severely limited the possibilities of using traditional workspaces, as most institutions had suspended "non-essential" activities on campus [3][4][5][6][7][8][9][10]. to collect timely data on how the pandemic affected scientists' work, we disseminated a survey to u.s.- and europe-based scientists across a wide range of institutions, career stages, and demographic backgrounds. we identified the corresponding authors for all journal articles indexed by the web of science in the past decade, and then randomly sampled 400,000 u.s.- and europe-based email addresses (see si s1 for more). we distributed the survey on monday, april 13th, 2020, about one month after the world health organization declared the covid-19 pandemic. within one week, the survey received full responses from 4,535 individuals who self-identified as faculty or pis from academic or non-profit research institutions. respondents were located in all 50 states in the u.s.
(63.7% of the sample, figure s1a), 35 countries in europe (36.3% of the sample, figure s1b), and were affiliated with the full spectrum of research fields listed in the survey. for more on the response rate, sampling method, and a comparison to a national survey of doctorate-level researchers, see si s3. motivated by prior research on scientific productivity [11][12][13][14][15], the survey solicited information about scientists' working hours, how this time is allocated across different tasks, and how these time allocations have changed since the onset of the pandemic. we asked scientists to estimate changes to their research output-the quantity and impact of their publications-in coming years relative to prior years. we also solicited a wide range of characteristics, including field of study, career stage (e.g., tenure status), demographics (e.g., age, gender, number and age of dependents in the household), and other features (e.g., institution closure and whether the respondent was exempt from any closures). details on the survey instrument are included in si s2, and table s1 reports summary statistics for all the respondents used in the analyses. to understand the immediate impacts of the pandemic, we compare the reported level and allocation of work hours pre-pandemic and at the time of the survey. figures 1a and 1b illustrate two primary findings. first, there is a sharp decline in total work hours, with the average dropping from 61.4 hours per week pre-pandemic to 54.4 at the time of the survey (diff.=-6.9, s.e.=0.20). in particular, 5.01% of scientists reported that they worked 42 hours or less before the pandemic, but this share increased nearly six-fold to 29.7% by the time of the survey (diff.=24.7, s.e.=0.67). second, there is large heterogeneity in changes across respondents. although 55.0% reported a decline in total work hours, 27.3% reported no change, and 17.7% reported an increase in time devoted to work.
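a paired pre/post comparison of this kind can be sketched in a few lines; the example below uses synthetic responses whose distributions are illustrative stand-ins, not the survey data, and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4535  # number of faculty/PI respondents

# Synthetic stand-ins for reported weekly work hours before the pandemic
# and at the time of the survey (distributions are illustrative only).
hours_pre = rng.normal(61.4, 12.0, n)
hours_now = hours_pre - rng.normal(6.9, 13.0, n)

# Paired difference: mean change in total hours and its standard error.
diff = hours_now - hours_pre
mean_change = diff.mean()
se_change = diff.std(ddof=1) / np.sqrt(n)

# Shares reporting a decline, or an increase (beyond a 1-hour band).
share_decline = (diff < -1.0).mean()
share_increase = (diff > 1.0).mean()
```

because the same respondent reports both numbers, the standard error comes from the distribution of within-person differences rather than from the two cross-sectional distributions.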
this significant fraction of scientists reporting no change or increases in their work hours is notable given that 91.0% of respondents reported their institution was closed for non-essential personnel. to decompose these changes, we compare scientists' reported time allocations across four broad categories of work: research (e.g., planning experiments, collecting or analyzing data, writing), fundraising (e.g., writing grant proposals), teaching, and all other tasks (e.g., administrative, editorial, or clinical duties). we find that among the four categories, research activities have seen the largest negative changes. whereas total work hours decrease by 11.3% on average, research hours have declined by 24.4% (teaching, fundraising, and "all other tasks" decrease by 1.9%, 9.3%, and 0.7%, respectively). comparing the share of time allocated across the tasks (figure 1c-f), we find that research is the only category that sees an overall decline in the share of time committed (median changes: -16.2% for research, 0% for fundraising, +2.7% for teaching, and +2.0% for all other tasks). overall, these results indicate that scientists' research time has been disrupted the most, and the declines in time spent on the other three categories are mainly due to the decline in total work hours. furthermore, correlations suggest that research may be a substitute for each of the three other tasks (see si s5.1 and figure s4). still, despite the large negative changes in research time, substantial heterogeneity remains, as 9.4% reported no change and 21.2% reported spending more time on research. this sizable heterogeneity raises the question of which factors are most responsible for the varied effects among scientists. to unpack the varied effects of the pandemic, we first examine across-field differences. figure 2a depicts the average change in reported research time across the 20 different fields we surveyed.
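field-level averages of this kind, and the comparison of between-field versus within-field variation discussed later in the text, can be sketched as follows; the synthetic data below are only loosely calibrated to the reported moments, and every name is an illustrative stand-in.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
fields = [f"field_{i:02d}" for i in range(20)]

# Synthetic respondents: each belongs to one field and reports a % change
# in research time; the field-effect s.d. and individual s.d. are chosen
# to loosely mimic the magnitudes reported in the text.
field_effect = dict(zip(fields, rng.normal(-25.0, 7.4, len(fields))))
df = pd.DataFrame({"field": rng.choice(fields, size=4000)})
df["change"] = df["field"].map(field_effect) + rng.normal(0.0, 50.5, len(df))

# Field-level average changes (a figure-2a-style summary).
field_means = df.groupby("field")["change"].mean()
between_sd = field_means.std(ddof=1)

# Individual deviations from their own field's average.
resid = df["change"] - df["field"].map(field_means)
within_sd = resid.std(ddof=1)
```

the within-field spread dwarfing the between-field spread is exactly the pattern the paper reports: individuals differ from their field average far more than field averages differ from one another.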
fields that tend to rely on physical laboratories and time-sensitive experiments -such as biochemistry, biological sciences, chemistry, and chemical engineering -report the largest declines in research time, in the range of 30-40% below pre-pandemic levels. conversely, fields that are less equipment-intensive -such as mathematics, statistics, computer science, and economics -report the lowest average declines in research time. the difference between fields can be as large as four-fold, again highlighting the heterogeneity in how certain scientists are being affected. these field-level differences may be due to the nature of work specific to each field, but may also be due to differences in the characteristics of individuals who work in each field. to untangle these factors, we use a lasso regression approach to select among (1) a vector of field indicator variables, and (2) a vector of flexible transformations of demographic controls and pre-pandemic features (e.g., research funding level, time allocations before the pandemic). the lasso is a data-driven approach to feature selection that minimizes overfitting by selecting only variables with significant explanatory power 16,17. we then regress the reported change in research time on the lasso-selected variables in a post-lasso regression, allowing us to estimate conditional associations for each variable selected (see si s4). comparing figures 2a and 2b, we find that the contrast between the "laboratory" or "bench science" fields versus the more computational or theoretical fields is still significant in the post-lasso regression, indicating that differences inherent to these fields are likely important mediators of how the pandemic is affecting scientists. although we cannot reject a null hypothesis of no change, there is also suggestive evidence of an increase in research time for the health sciences, possibly due to work related to covid-19.
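a minimal sketch of this lasso-then-post-lasso procedure, on synthetic data, follows; the coordinate-descent implementation, penalty value, and data-generating process are all illustrative assumptions (the authors' actual specification is in their si), not the paper's code.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j, then soft-threshold.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / (X[:, j] @ X[:, j] / n)
    return beta

rng = np.random.default_rng(1)
n, p = 500, 8
X = rng.standard_normal((n, p))            # stand-ins for field dummies + controls
beta_true = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0])
y = X @ beta_true + rng.normal(0, 0.5, n)  # stand-in for change in research time

# Step 1: the lasso selects the features with real explanatory power.
selected = np.flatnonzero(np.abs(lasso_cd(X, y, lam=0.2)) > 1e-8)

# Step 2: post-lasso OLS on the selected features gives unshrunk coefficients.
coef, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
```

refitting by ordinary least squares in step 2 is the point of "post-lasso": the lasso's coefficients are biased toward zero by the penalty, so the selected set is kept but the shrinkage is discarded.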
importantly, we also find that most of the variation across fields is diminished once we condition on the individual-level features selected by the lasso, which suggests a large amount of heterogeneity is due to these individual-level differences. indeed, the standard deviation of the twenty field-level averages of reported changes in research time is 7.4%. by contrast, the standard deviation of the individual-level residuals from these field-level averages-that is, how much each individual's response differs from the average in their field-is 50.5%, indicating there is substantial variation across individuals even within the same field. to illustrate the raw individual-level variation, we measure the average change in reported research time across demographic and other group features (figure 2c). given the persistent gender gap in science 18-28, we include interactions with the female indicator to explore potential gender-specific differences. we find that there are indeed widespread changes across the range of individual-level features we examined. yet, when we use the lasso and regression to control for the field differences documented in figure 2a, we find marked changes in the relevance of certain individual-level features. figure 2d plots the post-lasso regression coefficients associated with the demographic and career-stage characteristics and reveals four main results. first, career stage appears to be a poor predictor of the impacts of the pandemic, as conditional changes in research time for older versus younger and tenured versus untenured faculty are statistically indistinguishable. second, scientists who report being subject to a facility closure also report only minor unconditional differences in their research time (figure 2c), and this feature is not selected by the lasso as a relevant predictor for changes in research time. third, there is a clear gender difference.
holding field and all other observable features fixed, female scientists report a 4.2% larger decline in research time (s.e.=1.5). fourth, care for child dependents is associated with the largest effect. reporting a dependent under 5 years old is associated with a 15.8% (s.e.=2.1) larger decline in research time, a substantially larger effect than any other individual-level feature. reporting a dependent 6 to 11 years old is also associated with a negative impact, ceteris paribus, but that decline is smaller than the decline associated with dependents under 5 years old. this is consistent with shifts in the demands of childcare as children age. having multiple dependents is associated with an additional 4.5% decline (s.e.=1.6) in research time. overall, these results are consistent with preliminary reports of differential declines in female scientists' productivity during the pandemic 29,30. our findings further indicate that some of the gender discrepancy can be attributed to female scientists being more likely to have young children as dependents (21.2% of female scientists in our sample report having dependents under the age of 5, compared to 17.7% of male and other scientists, s.e. of diff.=3.6). for further results related to the other three task categories, see si s5.2. to estimate the potential downstream impact of the pandemic, we also asked respondents to forecast how their research publication output in 2020 and 2021-in terms of the quantity and impact of their publications-will compare to their output in 2018 and 2019. we randomly assigned respondents to make a forecast for one of six possible scenarios, where they were to take as given that the duration of the pandemic would be 1, 2, 3, 4, 6, or 8 months from the time of the survey. for more on how we use this introduced random variation and adjust scientists' forecasts to account for underlying trends in publication output, see si s4.2.
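because the assumed pandemic duration is randomized across respondents, a simple linear fit of forecasted output change on the assigned duration identifies its marginal effect. the sketch below uses synthetic forecasts with an illustrative slope and noise level; none of the numbers are the survey data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000

# Each respondent is shown one randomized pandemic duration (in months
# past the survey date), drawn from the six scenarios.
duration = rng.choice([1, 2, 3, 4, 6, 8], size=n).astype(float)

# Synthetic forecasted % change in publication quantity: an illustrative
# slope per month of pandemic plus idiosyncratic noise.
forecast = -10.0 - 0.63 * duration + rng.normal(0.0, 15.0, n)

# A linear fit recovers the marginal effect of one additional month.
slope, intercept = np.polyfit(duration, forecast, 1)

# Classical standard error of the slope: residual variance over S_xx.
resid = forecast - (slope * duration + intercept)
se_slope = np.sqrt((resid @ resid / (n - 2)) / ((duration - duration.mean()) ** 2).sum())
```

randomization is what makes this a clean design: since the assigned duration is independent of respondent characteristics, the slope is not confounded by who happens to expect a longer pandemic.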
figure 3a plots the distribution of the estimated changes in publication quantity and impact due to the pandemic. we find that, on average, quantity is projected to decline 13.0% (s.d.=37.7). for comparison, prior estimates show that in the biomedical sciences, receiving a grant of approximately one million dollars from the national institutes of health raises a pi's short-run publication output by 7-12% 31,32, suggesting that a projected decline of 13% is not negligible. moreover, the decline in output is not limited to quantity, as impact is projected to decline by 7.9% on average (s.d.=31.0). to understand which scientists are most likely to forecast larger declines in their output due to the pandemic, we repeat the lasso-based regression approach using these forecasts as dependent variables. these analyses uncover two notable findings (figure 3b). first, all of the features selected as relevant are related to caring for dependents. as in the case of research time, reporting a dependent under 5 years old is associated with the largest declines. second, gender differences in these forecasts appear attributable to differential changes associated with dependents. reporting a 6- to 11-year-old dependent is associated with a 6.6% (s.e.=1.9) and 5.4% (s.e.=1.8) lower forecast of publication quantity and impact, respectively, but only for female scientists (see si s5.3 for the field-level results). we find that most of the same groups currently reporting the largest disruptions to research time also report the worst outlook for future publications. the correlations between reported change in research time and forecasted publication output are 0.337 for quantity (p-value < 0.001) and 0.214 for impact (p-value < 0.001). while understanding the relationships between time input and research output is beyond the scope of this study, we repeat the analysis including the changes in reported time allocations, to test if they moderate the effects we observe.
we find that, while the post-lasso regression coefficients associated with the selected demographic features generally become smaller, a statistically significant relationship remains in most cases even when conditioning on the (lasso-selected) change in research time. this suggests the forecasted declines associated with reporting young dependents are not simply explained by the direct change in time spent on research (figure s7). we further investigate how these publication forecasts may depend on the expected duration of the covid-19 pandemic by plotting the (randomized) expectation shown to the survey respondent against the estimated net effect of the pandemic (figure 3c). a linear fit indicates that, for every month that the pandemic continues past april 2020, scientists expect a 0.63% decrease in publication quantity (s.e.=0.23) and a 0.48% decrease in impact (s.e.=0.19) due to the pandemic. these marginal effects may appear small relative to the others documented in this paper, but it is important to note that they are on a similar scale to economic forecasts for the u.s. and europe, which (as of may 2020) project economic declines in the range of 0.4-0.6% per month (5-7% for 2020) 33. still, these results could also reflect uncertainties or errors inherent to these forecasts, or strong personal beliefs about the timeline for the pandemic that are not easily swayed by the survey's suggestion. our results shed light on several important considerations for research institutions as they consider reopening plans and develop policies to address the pandemic's disruptions. the findings regarding the impact of childcare reveal a specific way in which the pandemic is impacting the scientific workforce. indeed, "shelter-at-home" is not the same as "work-from-home" when dependents are also at home and need care.
because childcare is often difficult to observe and rarely considered in science policies (aside from parental leave immediately following birth or adoption), addressing this issue may be an uncharted but important new territory for science policy and decision makers. furthermore, it suggests that unless adequate childcare services are available, researchers with young children may continue to be affected regardless of the reopening plans of institutions. and since the need to care for dependents is by no means unique to the scientific workforce, these results may also be relevant for other labor categories. more broadly, many institutions have announced policy responses such as tenure-clock extensions for junior faculty. of 34 u.s. university policies we identified that provided some form of tenure extension due to the pandemic, 30 appeared to guarantee the extension for all faculty (see si s5.5 for more). institutions may favor such uniform policies for several reasons, such as avoiding legal challenges. but given the heterogeneous effects of covid-19 we identify, this raises further questions about whether these uniform policies, while welcome, may have unintended consequences and could exacerbate pre-existing inequalities 34. while this paper focuses on quantifying the immediate impacts of the pandemic, circumstances will continue to evolve and there will likely be other notable impacts on the research enterprise. the heterogeneities we observe in our data may not converge, but instead may diverge further. for example, when research institutions begin the process of reopening, there may be different priorities for "bench sciences" versus work that involves human subjects or that requires travel to field sites. and research requiring international travel could be particularly delayed, all of which could lead to new productivity differences across certain groups of scientists.
furthermore, individuals with potential vulnerabilities to covid-19 may prolong their social distancing beyond official guidelines. in particular, senior researchers may have incentives to continue avoiding in-person interactions 35, which historically facilitate mentoring and hands-on training of junior researchers. the possibility of a resurgence of infections 36 suggests that institutions may anticipate a reinstatement of preventative measures such as social distancing. this possibility could direct focus toward research projects that can be more easily stopped and restarted. funders seeking to support high-impact programs may have similar considerations, favoring proposals that appear more resilient to uncertain future scenarios. lastly, although we have focused on two of the denser geographic regions of scientific output in this study, the pandemic is having a substantial impact on research worldwide. in the coming years, researchers may be less willing or able to pursue positions outside of their home nation, which may deepen or alter global differences in scientific capacity. future work expanding our understanding of how the pandemic is affecting researchers across different countries, at different institutions, and at different points in their lives and careers could provide valuable insights to more effectively protect and nurture the scientific enterprise. the strong heterogeneities we observe, and the likely development of new impacts in the coming months and years, both argue for a targeted and nuanced approach as the world-wide research enterprise rebuilds. 30. kitchener, c. women academics seem to be submitting fewer papers during coronavirus. 'never seen anything like it,' says one editor.
https://www.thelily.com/women-academics-seem-to-be-submitting-fewer-papers-duringcoronavirus-never-seen-anything-like-it-says-one-editor/ (2020). the study protocol was approved by the institutional review boards (irbs) of harvard university and northwestern university. informed consent was obtained from all participants. figure s6 reports the results from a similar exercise focusing on field-level differences. we find the same three fields associated with the largest declines in research time -biochemistry, biology, and chemistry -also forecast the largest pandemic-induced declines in their publication output quantity, ceteris paribus. c. average estimated changes in publication outputs per the randomized pandemic duration respondents were asked to assume for their forecasts (either 1, 2, 3, 4, 6, or 8 months from the time of the survey, mid-april 2020).
to compile a large, plausibly random list of active scientists, we leverage the web of science (wos) publication database. the wos database is useful for two reasons: (1) it is one of the most authoritative citation corpora available 1 and has been widely used in recent science of science studies 2-4; (2) among other large-scale publication datasets, wos is the only one, to our knowledge, with systematic coverage of corresponding-author email addresses. we are primarily interested in active scientists residing in the u.s. and europe. we start from 21 million wos papers published in the last decade (2010-2019). in an attempt to focus on scientists likely to still be active and in a more stable research position, we link the data to journal impact factor information (wos journal citation reports), and exclude papers published in journals in the bottom 25% of the impact factor distribution for its wos-designated category. we use the journal impact factor calculated for the year of publication, and for papers published in 2019, we use the latest version (2018). we then extract all author email addresses associated with these papers. for each email address in this list, we consider it as a potential participant if: (1) it is associated with at least two papers in the ten-year period, and (2) the most recent country of residence, defined by the first affiliation of the most recent paper, is in the u.s. or europe.
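these two filtering steps can be sketched with a toy table; the column names, the records, and the use of "FR" as a stand-in for the full set of european country codes are all illustrative assumptions, not the actual wos schema.

```python
import pandas as pd

# Toy stand-in for the WoS author-paper records described above.
papers = pd.DataFrame({
    "email":    ["a@x.edu", "a@x.edu", "c@z.org", "e@q.edu",
                 "b@y.eu",  "b@y.eu",  "d@w.edu"],
    "country":  ["US", "US", "BR", "US", "FR", "FR", "US"],
    "year":     [2018, 2019, 2019, 2016, 2017, 2019, 2015],
    "category": ["bio", "bio", "bio", "bio", "chem", "chem", "chem"],
    "jif":      [5.0, 5.5, 6.0, 1.0, 3.0, 3.5, 0.2],
})

# Step 1: drop papers in the bottom 25% of the journal impact factor
# distribution within each WoS-designated category.
cutoff = papers.groupby("category")["jif"].transform(lambda s: s.quantile(0.25))
papers = papers[papers["jif"] >= cutoff]

# Step 2: keep an email if it has at least two remaining papers and its
# most recent paper places the author in the US or Europe.
latest = papers.sort_values("year").groupby("email").last()
counts = papers.groupby("email").size()
eligible = latest[(counts >= 2) & latest["country"].isin({"US", "FR"})].index
```

computing the impact-factor cutoff with a per-category `transform` keeps the threshold field-specific, mirroring the within-category percentile rule described in the text.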
we have approximately 2.5 million unique email addresses after filtering, with about 521,000 in the u.s. and 938,000 in europe. we then randomly shuffled the two lists separately and sampled roughly 280,000 email addresses from the u.s. and 200,000 from europe. we oversampled the u.s. as a part of a broader outreach strategy underlying this and other research projects. we recruited participants by sending them email invitations. we build on field classifications used in national surveys such as the u.s. survey of doctorate recipients (sdr) to categorize fields in our survey, aggregating to ensure sufficient sample sizes within each field. the notable additions we make to the fields used in these other surveys are business management, education, communication, and clinical sciences. these fields reflect major schools at most universities and/or did not immediately map to some of the default fields used in the sdr (i.e., the "health sciences" field in the sdr does not include medical specialties). out of a total of 480,000 emails sent, approximately 64,000 were directly bounced, either due to incorrect spellings in the wos data or to terminated email accounts. in hopes of soliciting a larger sample, we also undertook snowball sampling by encouraging respondents to share the survey with their colleagues. overall, 9,968 individuals entered the survey and 8,447 continued past the consent stage. of those that did not, 412 were not an active scientist, post-doc, or graduate student and thus not within our population of interest, 81 did not consent, and 1,028 did not make any consent choice. when a respondent continued past the consent stage, we asked them to report the type of role they were in.
out of the 8,447 consenting responses, there were 5,728 responses from faculty or principal investigators (pis), 1,023 from post-doctoral researchers, 701 from graduate students in a doctoral program, and 52 from retired scientists. 551 of the remaining respondents held some other type of position, and another 392 did not report their position. this yields an estimated response rate of approximately 1.6%. first, our low response rate may reflect the disruptive nature of the pandemic, but it also raises concerns about the generalizability of our results. however, after we received feedback from the initial distribution that many individuals had received the email in their "junk" folder, we became concerned that our distribution was being automatically flagged as spam. spot-checking five individuals whom we ex-post identified as having been randomly selected into our sample, and with whom we had professional relationships, we found that in four of the five cases the recruitment email had been flagged as spam. we know of no systematic way of estimating the true spam-flagging rate (nor how to avoid these spam filters when using email distributions at this scale) without using high-end, commercial-grade products. additionally, as with any opt-in survey, there may be correlations between which scientists opt in and the experiences they want to report. for example, scientists who felt strongly about sharing their situation, whether they experienced large positive or negative changes, may be more likely to respond, which would increase the heterogeneity of the sample. furthermore, there may also be non-negligible gender differences that arise not from actual differences in outcomes but from differences in reporting known to occur across genders [5][6][7][8][9]. for our analyses, we focus entirely on responses from the sample of faculty/pis.
from the full sample of pis, we retain respondents who reported working for a "university or college", "non-profit research organization", "government or public agency", or "other", excluding 87 responses from individuals who reported working for a "for-profit firm". we also restrict the sample to respondents whose ip address originated from the united states or europe (dropping 1,049 responses from elsewhere). we then drop observations that have missing data for any of the variables used in our analyses: 26 responses do not report their time allocations, 74 do not report their age, 10 do not report the type of institution they work at, and 114 do not report their field of study. altogether, this amounts to dropping 187 observations. given the relatively small subset of our sample dropped due to missing data, we do not impute missing variables, as this introduces unnecessary noise 10. the summary statistics for the final sample used in the analyses are reported in figure s1, and the geographic distribution of respondents is shown in figure s2. to estimate the generalizability of our respondent sample, we use the public microdata from the survey of doctorate recipients (sdr) as the best available estimates of the population of principal investigators in the u.s. the sdr is conducted by the national center for science and engineering statistics within the national science foundation, sampling from individuals who have earned a science, engineering, or health doctorate degree from a u.s. academic institution and are less than 76 years of age. the survey is conducted every two years, and we use the latest data available (the 2017 cycle). for this comparison, we focus only on university faculty in both our survey and the sdr. we also constrain our sample to only include fields of study with a clear mapping to the sdr categories.
the sdr covers only researchers with ph.d.-type degrees, and so it does not capture researchers with other degrees who are still actively engaged in research (e.g., researchers with only m.d.s). this means we exclude "architecture and design," "business management," "medicine," "education," "humanities," and "law and legal studies." figure s2 compares respondents between our sample and the sdr sample. figure s2a illustrates differences in demographics and career-stage features, including raw differences as well as differences adjusted by field. we find only a small difference in age and no difference in partner status. our survey oversamples female scientists, those with children, and untenured faculty. these differences persist after conditioning on the scientist's reported field. that we ultimately find female scientists and those with young dependents to report the largest disruptions suggests that these individuals may have been more likely to respond to the survey in order to report their circumstances. the geographic distributions are relatively similar, with slight oversampling of the west and undersampling of the south. lastly, we find a significant but small oversampling of u.s. citizens. we also compare the distribution of research fields (fig. s2b) . overall the distributions are relatively similar. we appear to oversample most significantly on "atmospheric, earth, and ocean sciences" and "other social sciences," while we undersample most significantly on the biological sciences, "mathematics and statistics," and "electrical and mechanical engineering". there does not seem to be a clear pattern in these field-level differences, as we undersample fields that ultimately report being across the spectrum of disruptions (i.e., mathematics and statistics reports some of the smallest disruptions, while the biological sciences are amongst the most disrupted). 
the unconditional changes reported by each group of scientists are informative of how the pandemic affected researchers overall. but they do not allow us to infer whether groups reporting larger or smaller disruptions do so for reasons inherent to that group (i.e., the nature of work in certain fields, or the demands of home life unique to certain individuals) or because the individuals who select into that group tend also to be disrupted for unrelated reasons. this motivates a multivariate regression analysis to explore whether the changes associated with a group of individuals change after conditioning on other observables. however, selecting which of an available set of covariates (or transformations thereof) to include in a regression is notoriously challenging. the lasso method provides a data-driven approach to this selection problem by excluding covariates that do not improve the fit of the model 11, 12 . when using the lasso, our general approach is to include a vector of indicator variables for the fields or demographic/career groups of interest, along with an additional set of controls. when focusing on differences across fields, we include the demographic/career variables in the control set, and vice versa. the control variables common to all lasso-based analyses are: pre-pandemic levels and totals of time allocations, pre-pandemic shares of time allocations, the pre-pandemic funding estimate, and indicators for the type of institution (academic, non-profit, government, or other) and the location (state if in the u.s., country if in europe). 
to make minimal assumptions about the functional form of the control variables, we conduct the following transformations to expand the set of controls: for all continuous variables we use the inverse hyperbolic sine (which approximates a logarithmic transformation while allowing zeros), square, and cubic transformations, and we interact all indicator variables with the linear versions of the continuous variables. we perform the lasso using the lasso linear package in stata 16 software. we use the defaults for constructing initial guesses, tuning parameters, number of folds (ten), and stopping criteria. we use the two-step cross-validated "adaptive" lasso model, where an initial instance of the algorithm makes a first selection of variables, and a second instance then runs using only the variables selected in the first instance. the variables selected after this second run are then used in a standard post-lasso ols regression with heteroskedasticity-robust standard errors. we are interested in the effect of the covid-19 pandemic on research output. as an initial estimate of this effect, we asked respondents to forecast how their research output in 2020 and 2021 will compare to their prior output in 2018 and 2019. this framing was chosen for its simplicity; however, it does not provide a direct estimate of the pandemic effect. for that effect, we could instead have asked how the respondent expects their output in 2020 and 2021 to compare to what they would have expected their output to be in 2020 and 2021 had the pandemic not occurred. clearly, this is more complicated. but since we chose the simpler framing, we must account for some underlying factors before arriving at figures closer to what scientists think the effect of the pandemic will be (or our estimates thereof). 
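the control expansion and two-step "adaptive" lasso with a post-lasso ols refit described above can be sketched in numpy. this is a minimal illustration, not stata's implementation: the fixed penalty `alpha` stands in for the cross-validated choice, and the simple coordinate-descent solver assumes roughly standardized regressors.

```python
import numpy as np

def expand_controls(C, D):
    """asinh/square/cube transforms of continuous controls C plus
    interactions of each indicator in D with the linear continuous terms."""
    parts = [C, np.arcsinh(C), C**2, C**3]
    parts += [D[:, [j]] * C for j in range(D.shape[1])]
    return np.column_stack(parts)

def lasso_cd(X, y, alpha, iters=500):
    """plain coordinate-descent lasso (assumes standardized columns)."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]                      # partial residual
            rho = X[:, j] @ r / n
            z = (X[:, j] @ X[:, j]) / n
            b[j] = np.sign(rho) * max(abs(rho) - alpha, 0) / z  # soft-threshold
    return b

def adaptive_post_lasso(X, y, alpha=0.1):
    """two-step selection: a second lasso runs only on variables the first
    selected; survivors enter a post-lasso ols with an intercept."""
    s1 = np.flatnonzero(lasso_cd(X, y, alpha))                  # first selection
    s2 = s1[np.flatnonzero(lasso_cd(X[:, s1], y, alpha))]       # second pass
    Z = np.column_stack([np.ones(len(y)), X[:, s2]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)                # post-lasso ols
    return s2, beta
```

in practice the post-lasso refit matters: the lasso's own coefficients are shrunk toward zero, while the ols refit on the selected set recovers unbiased magnitudes.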
these raw year-to-year forecasted changes in publication outputs will be influenced by four major factors: (1) changes due to the pandemic to date; (2) anticipated future changes due to the pandemic; (3) the respondent's expectations about how long the pandemic will last; and (4) regular trends in the evolution of publication output across different individuals and fields (e.g., if female scientists have continually been increasing their number of publications produced each year, then in the absence of the pandemic we might expect this trend to continue into the near future). again, we are primarily interested in (1) and (2). to address (3), we randomly assign respondents to make forecasts under one of 6 possible scenarios, in which they were to take as given that the pandemic would last either 1, 2, 3, 4, 6, or 8 months from the time of the survey. in some analyses, we condition on this variable to control for variation due to perceptions about the length of the pandemic. in others, we explore the effect of these different perceptions directly, to infer how scientists perceive disruptions may evolve as the pandemic does or does not continue to persist. with respect to (4), the issue of differential trends across individuals and fields, we first note that the time scale we are concerned with (approx. 2 years) is small enough that we expect the majority of individuals not to change in terms of their observables, because all of the time-dependent observables used in the analyses are based on groupings of 5 years. still, to address this issue more quantitatively, we use historical data and another lasso-based regression model to project scientists' publication output in 2020 and 2021, using their observable features from the survey and publication data since 2010. our assumption is that these projections can approximate what scientists would have forecasted in the absence of the pandemic: they provide a crude counterfactual. 
given the short timeframes involved, and the rich observable data we possess, we hypothesize that the room for significant biases or deviations is small relative to the across-individual variation. due to data quality limitations, we are only able to connect 56% of respondents to their publication records, but a comparison of observables indicates that there are no meaningful differences between the scientists connected to their publication records and those who are not (see figure s4 ). since we observe the variables used in these projections for all respondents, we can project out trends for all scientists in our sample. while the measurement of publication quantity is straightforward, the measurement of quality, or, as it was asked in the survey, "impact", is not. following a long line of science-of-science research 13 , we use citation counts as the best available proxy for quality. we follow the state of the art in adjusting and counting these citations in a manner that does not conflate across-field differences 14 . the lasso-based projection proceeds as follows. first, we demean the publication measures at the year level, because we do not want to attribute aggregate year-to-year variations across the entire sample to actual changes in net output; these fluctuations can very plausibly be linked to changes in web of science (wos) coverage over time, and we are much more concerned with differential trends amongst different fields and/or different individuals. next, we use the lasso to select which of the observables are the best predictors of publication counts and citations. the major difference between this lasso-based approach and the others used in this paper is that, here, we interact all observables with flexible time trends (i.e., squared, cubic, and inverse hyperbolic sine transformations of the year variable) to allow for differential trends across groups. 
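the two preprocessing steps of the projection, year-level demeaning and the interaction of observables with flexible time trends, can be sketched as follows. the exact set of trend transforms and the variable layout are assumptions for illustration.

```python
import numpy as np

def demean_by_year(y, years):
    """subtract year means so aggregate wos-coverage fluctuations are not
    attributed to changes in individual output."""
    out = y.astype(float).copy()
    for yr in np.unique(years):
        m = years == yr
        out[m] -= out[m].mean()
    return out

def with_time_trends(X, years):
    """interact each observable with flexible time trends (linear, squared,
    cubic, and asinh of a year index) so the lasso can select differential
    trends across groups."""
    t = (years - years.min()).astype(float)
    trends = np.column_stack([t, t**2, t**3, np.arcsinh(t)])
    parts = [X] + [X * trends[:, [k]] for k in range(trends.shape[1])]
    return np.column_stack(parts)
```

after selection, the fitted post-lasso model is evaluated two years out of sample (2020 and 2021) to obtain the counterfactual trend for each respondent.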
finally, we project out these expected output measures as a function of the selected covariates and their corresponding coefficients from a post-lasso ols regression. importantly, we project out of sample just two years, so that we have estimates of the counterfactual trends for 2020 and 2021. with these estimates of respondents' counterfactual forecasts in hand, we then simply subtract them from scientists' actual reported forecasts to arrive at our estimate of scientists' forecast of the "net effect" of the pandemic. figure s3 plots the distributions of the unadjusted forecasts and these net effects for both the quantity and impact measures. the adjustment does not substantially change the distribution, but we are more confident in these estimates as "pandemic effects" for the aforementioned reasons. figure s5 plots the reported changes in research time (y-axes) against the reported changes in time allocated to the other three task categories (x-axes). the figures are binned scatterplots, and linear fits of the data suggest that research may be a substitute for the other categories. a 10% increase in fundraising, teaching, or all other tasks is associated with a decline in research of 1.4% (s.e.=0.14), 4.6% (s.e.=0.16), and 3.2% (s.e.=0.13), respectively. we lack exogenous variation in the data that could clearly shift the time allocated to one (or a subset) of the tasks, so we cannot identify the extent to which these correlations reflect actual substitution patterns or unobserved factors, though the magnitudes and precision of these relationships suggest that further investigation is certainly warranted to better understand how scientists allocate their time. figures s6a and s6b replicate figures 2b and 2d from the main text, respectively, instead using each of the other three task categories as the dependent variable. for the analysis focused on fields (fig. s6a) , no clear patterns emerge with respect to changes in time spent fundraising or teaching. 
reported time changes in teaching may be due to a combination of reasons. first, during the pandemic, the demand for these activities is likely relatively stable (e.g., most academic institutions have moved classes online, but there are few reports of classes being suspended); and second, impacts due to the transition to online teaching may have taken place earlier, and hence were not captured by our survey. there is evidence that clinical scientists and biochemists are spending an increasing amount of time on the "all other tasks" category, which could plausibly be due to a redirection of effort toward pandemic-related (non-research) work. for the analysis focused on demographic groups (fig. s6b) , we find that scientists reporting a dependent under 5 years old tend also to report larger declines across all task categories. this result is consistent with the unsurprising hypothesis that these dependents require care that leads scientists to decrease their total work hours. the fact that there does not appear to be any substitution away from research toward these other categories for individuals with young dependents suggests that the association is driven by factors inherent to having a dependent at home, and not that these individuals also tend to select alternative work structures that have them performing less research and more of other tasks. figure s7 recreates figure 3b from the main text, but using the field-level lasso approach. forecasted changes in output are almost entirely confined to publication quantity (as opposed to impact), with the same fields of biology and chemistry that reported the largest declines in research time also forecasting the largest declines in publication output, here in the range of 4-10% relative to what would have been expected otherwise. notably, some fields expect to publish more because of the pandemic, again highlighting the heterogeneous experiences scientists are having due to the pandemic. 
figure s8 recreates figure 3b from the main text, but now includes the reported changes in time allocated to each of the four task categories (in addition to the pre-pandemic reported time allocations, as before). again, we find a similar set of dependent-related variables to be most predictive of forecasted publication changes, even though the reported change in research time is also selected as relevant by the lasso. for comparison, the forecasted disruption associated with a dependent under 5 years old (a 7.04% decline in expected publication count) is approximately the same magnitude as the implied effect associated with a 26% decrease in research time. using internet searches, we attempted to identify university-level tenure clock extension policies put in place as a result of the covid-19 pandemic. while not a comprehensive list, we identified policies for 34 universities, encompassing public and private, small and large institutions. of the 34 universities, 17 have automatically applied a tenure clock extension to all faculty, with individuals having the ability to opt out 15-31 ; 13 require applications that are automatically approved [32] [33] [34] [35] [36] [37] [38] [39] [40] [41] [42] [43] [44] . four universities have not established unilateral policies [45] [46] [47] [48] ; instead, they have either created a separate application process or added covid-19-related impact to the list of reasons a faculty member may apply for an extension. table s1 . summary statistics. summary statistics for the main survey sample. "mean, with pubs." and "mean, miss. pubs." report the averages for the sub-samples that can and cannot be connected to their publication record in wos, respectively. the "t-stat" column reports the t-statistic from a test of mean differences between these two sub-samples. the two wos-based variables are "pub. quantity (number) since 2010" (the sum of the author's number of publications in the wos record) and "pub. impact (eucl. citations) since 2010" (the field-demeaned euclidean sum of citations to the author's publications in the wos record 14 ). figure s1 . geographic distribution. plots respondent locations in the u.s. (a.) and europe (b.), aggregated to preserve anonymity. figure s2 . comparison to u.s. university-based sdr respondents. summary statistics for demographic variables and fields common to both our survey and the u.s. survey of doctorate recipients (sdr). all comparisons are based on u.s.-located faculty or pis at universities or colleges who report affiliation with a field of study present in both surveys (note: all fields present in the sdr are present in our survey, but not vice versa). a. describes the sample averages for both samples and the mean differences, both in the raw data ("diff.") and after adjusting for the different composition of fields in each sample ("diff., field adjusted"). b. plots the share of respondents in each sample that affiliate with each of the fields common to both surveys. (*** p<0.01; **p<0.05; *p<0.1) figure s3 . publication changes, raw and inferred pandemic effects. plots the distribution of changes to publication output. blue lines indicate publication quantity, red lines indicate impact. solid lines indicate the raw responses from the survey (which asked only about changes in publication output from 2018-19 to 2020-21), and dashed lines indicate our estimates of the implied effect due to the covid-19 pandemic, based on the removal of group-specific trends in publication output. see methodology section 2 for more. figure s8 . recreates figure 3b from the main text, also including the scientists' reported changes in time committed to each of the four task categories. the error bars indicate 95% confidence intervals, and only variables selected in the corresponding lasso selection exercises are included in the post-lasso regression. 
the coefficient corresponding to the "10% change time, research" variable indicates the percent change in the scientist's forecasted quantity or impact associated with a 10% increase in the change in reported research time. for example, we estimate that scientists who reported a 10% larger decline in their research time forecast that the pandemic will cause them to produce 2.67% fewer publications in 2020-2021.
references:
- covid-19) deaths. our world in data
- policy responses to the coronavirus pandemic
- science-ing from home
- how research funders are tackling coronavirus disruption
- safeguard research in the time of covid-19
- coronavirus outbreak changes how scientists communicate
- the pandemic and the female academic
- early-career scientists at critical career junctures brace for impact of covid-19
- how early-career scientists are coping with covid-19 challenges and fears
- how covid-19 could ruin weather forecasts and climate records
- productivity differences among scientists: evidence for accumulative advantage
- research productivity over the life cycle: evidence for academic scientists
- the economics of science
- faculty time allocation: a study of change over twenty years
- incentives and creativity: evidence from the academic life sciences
- regression shrinkage and selection via the lasso
- regression shrinkage and selection via the lasso: a retrospective
- web of science as a data source for research on scientific and scholarly activity
- increasing dominance of teams in production of knowledge
- quantifying long-term scientific impact
- atypical combinations and scientific impact
- highly confident but wrong: gender differences and similarities in confidence judgments
- boys will be boys: gender, overconfidence, and common stock investment
- trouble in the tails? what we know about earnings nonresponse 30 years after lillard, smith, and welch
- measurement error in survey data
- response error in earnings: an analysis of the survey of income and program participation matched with administrative data
- flexible imputation of missing data
- regression shrinkage and selection via the lasso
- regression shrinkage and selection via the lasso: a retrospective
- how to count citations if you must
- extension of the probationary period for tenure-track faculty due to covid-19 disruptions
- harvard offers many tenure-track faculty one-year appointment extensions due to covid-19
- extending the reappointment/promotion/tenure review timeline
- covid-19 and tenure review
- response to covid-19 disruption: extension of the tenure clock. the university of alabama in huntsville
- memo on tenure-track probationary period extensions due to covid-19. university of virginia office of the executive vice president and provost
- extension of tenure clock in response to covid-19
- rule waivers: tenure clock extensions, leaves of absence, conversions, dual roles
- extension of the tenure-clock
- guidelines for contract extension and renewal. iowa state university office of the senior vice president and provost
- tenure clock extension due to covid-19 disruption
- faculty promotion and tenure
- tenure-track faculty: extension of tenure clock due to covid-19. the ohio state university office of academic affairs
- promotion/tenure clock extensions due to covid-19 - faculty
- one-year opt-in tenure clock extension
- covid-19 guidance for faculty: extensions of tenure clock
- probationary period extensions for tenure-track faculty. the university of texas at austin office of the executive vice president and provost
- tenure rollback policy for covid-19
acknowledgments: we thank alexandra kesick for invaluable help. this work is supported by the air force office of scientific research under award number fa9550-19-1-0354, national science foundation sbe 1829344, and the alfred p. 
sloan foundation g-2019-12485 and g-2020-13873. key: cord-342890-2k5ttvfq authors: dabachine, yassine; taheri, hamza; biniz, mohamed; bouikhalene, belaid; balouki, abdessamad title: strategic design of precautionary measures for airport passengers in times of global health crisis covid 19: parametric modelling and processing algorithms date: 2020-09-04 journal: j air transp manag doi: 10.1016/j.jairtraman.2020.101917 sha: doc_id: 342890 cord_uid: 2k5ttvfq presently, the negative consequences of a pandemic loom threateningly on an international scale. facilities such as airports have contributed significantly to the global spread of the covid-19 virus. therefore, in order to address this challenge, studies on sanitary risk management and the proper application of countermeasures should be carried out. to measure the consequences for passenger flow, a simulation model has been set up at casablanca mohammed v international airport. several scenarios using daily traffic data were run under different circumstances. this allowed the development of some assumptions regarding the overall capacity of the airport. the proposed simulations make it possible to calculate the number of passengers that can be processed by the available check-in counters under the proposed sanitary measures. the aviation sector has been experiencing an unprecedented crisis since march 2020. indeed, almost all airports have been paralyzed following the outbreak of the covid-19 pandemic. eurocontrol had announced a significant 88% reduction in the number of flights by 1 may 2020 [1, 2] . the flow of international traffic contributed significantly to the spread of the virus worldwide [3] . in europe, for example, it seems that the areas least affected by the virus are those where no international airport is located. 
one of the main characteristics of covid-19 is its long incubation period, which currently averages 5.2 days [4] . contagiousness during the incubation period is one of the reasons why covid-19 spreads so widely compared to other viruses, making it extremely difficult to exclude the possibility of asymptomatic passengers passing through the airport [5, 6] . according to the international civil aviation organization (icao) [7] , international air traffic will experience a significant decline, in the order of 44% to 80% in the number of international passengers in 2020 compared to 2019. the airports council international (aci) [8] estimates that airports will lose two fifths of their passenger traffic, or more than $76 billion in revenues, in 2020 compared to the pre-pandemic baseline. the international air transport association (iata) [9] , which represents the airlines, estimates that revenue passenger-kilometers will decrease by 48% in 2020 compared to 2019. over time, economic activity will resume as governments strive to restore economic growth, which will require a resumption of airport activity. however, airports need to be reopened gradually while remaining aware of the potential risk generated by the hypermobility experienced so far, in order to avoid a second wave of the pandemic. to date, there is no published solution for a passenger flow management system within the framework of the new health constraints. nevertheless, certain rules and standards are defined by iata to guarantee the quality requirements of passenger assistance services in the terminal area, on which we have based our proposals for additional measures in line with the sanitary requirements of such a pandemic. social distancing is one of the main measures agreed upon, and it will affect the capacity of the airport, although the distances recently adopted by some airports make this issue a subject of debate. 
accordingly, this study seeks to determine the parameters necessary for passenger distancing in order to minimize the potential spread of the virus without compromising the airport's ability to manage the flow of passengers. therefore, this study proposes simulations of the possible effects of these measures, discusses them, and tests their applicability. the proposed solution differs from other passenger flow management systems in that it introduces the preventive measures mandated by the world health organization (who), as well as health precautionary measures at airports. this has led us to study the movement of passengers in this context in order to develop a parametric model capable of adjusting the health measures to the expected flow of passengers, and to verify its usefulness. it was decided to carry out the study in the country's busiest and best-equipped terminal, based on statistical data relative to times of congestion. this makes it possible to evaluate the actions taken and to determine whether they produce viable results. this document proposes a simulation tool to better manage the flow of passengers, as part of an approach that integrates quality-of-service standards and the new requirements of health regulations within airports. this paper is divided into eight sections: the first section highlights the immediate and lasting impact on the aviation sector in the wake of the covid-19 pandemic crisis. the second section outlines context information for the simulation process. in the third section, the design is presented. the fourth section covers the mathematical modelling of the variable parameters. the simulation is presented in the fifth section, and the validation model and the simulation analysis are discussed in the sixth section. results and discussion are presented in the seventh section. in the last section, some conclusions are drawn. 
casablanca mohammed v international airport has three terminals with a total capacity of 14 million passengers per year and is the hub of the moroccan airport network. the surface areas of the terminals are 76,000 m² for t1, 66,000 m² for t2 and 4,000 m² for t3. the airport is connected to more than 96 international destinations by 840 weekly frequencies operated by 24 airlines. the t1 terminal is used in this study because of its capacity and its state-of-the-art infrastructure and equipment, which meet current international standards for safety, security and quality of service [10] . under the international health regulations (ihr), airport authorities are required to establish effective contingency plans and arrangements to deal with events that may constitute a public health emergency of international concern. the current outbreak of a new coronavirus disease (covid-19) has spread across several borders, resulting in demands for the detection and management of suspected cases. in order to implement the sanitary passenger flow management model proposed in this article, it is assumed that the guidelines for the detection and management of sick travelers suspected of being infected are applied in accordance with the interim guidance for the management of ill travelers published on 16 february 2020 [11] , and that the prevention rules defined by the world health organization (who) are implemented within airport facilities [12] . in addition, we have proposed additional precautionary measures, such as physical separators between passengers and users, barriers between boarding gates, and signage indicating itineraries, so that departing and arriving passengers do not cross each other. attention to the level of service (los) is essential here, especially in times of crisis. indeed, it is an indicator of the fluidity of passenger flow processing. in times of crisis, it is easy to observe overflows, passengers who get angry because of the conditions, and so on. 
compliance with the los partly makes it possible to avoid having to manage overflows due, for example, to long waiting times. this is why it remains an indicator that must be watched even in times of crisis. international standards have been established for terminals. the study's approach advocates that measures relating to waiting time, queue size and passenger handling rates should follow the iata quality-of-service standards [13] . table 1 presents the quality-of-service scale defined by iata [14, 15] . the occupancy of a waiting area varies considerably according to the time spent by a passenger in that area. waiting time at the various modules is a key factor in the quality of service and an essential parameter in the sizing and capacity study of a terminal. it is extremely difficult to establish a precise relationship between waiting time, level of service and available space per passenger. a first indicator of the quality of service is the space available per passenger in waiting and circulation areas, translated on the level-of-service scale into a space allocation ratio expressed in m² per passenger. table 2 presents the ratios recommended by iata for each passenger as a function of quality of service. a simplified way to approach the problem is to set maximum acceptable waiting times. table 3 shows the maximum waiting times, in minutes, recommended by iata for each processing module based on quality of service [14, 16] . these maximum acceptable waiting times are to be adapted according to the context, the airport's service quality objectives and the type of traffic [15] . generally speaking, for a passenger, a waiting time is unacceptable as soon as it exceeds 20 minutes [17] . 
regarding passenger flow management, it is based on the concept of a faucet filling a leaking bucket, as shown in figure 1 , where the faucet represents the flow of incoming passengers and the leakage represents the flow of processed passengers, while the filling of the bucket results from the difference between incoming and processed passengers [18, 19] . passengers are required to proceed to the check-in area in accordance with the rules in force and to follow the process laid down for departing passengers [13] , as illustrated in figure 2 . the simulation model contains a central processing unit and a display unit. the input data is divided into two groups, as shown in figure 3 . the first part, in blue, corresponds to flight data, passenger data, check-in surface and resources. the second part, in green, represents the variable parameters, which include the speed of processing, passengers' movement models, social force and deviations, and the distribution of pre-departure time [20] . the input flight data are retrieved from the open data provided by the air navigation service provider. a distinction is made between national and international passengers, as they do not share the same process within the airport structure. in order to make the simulation of passenger flow management as realistic and accurate as possible, the data used are based on the airport's 2019 summer period, which makes it possible to check the model's applicability. on the basis of the number of movements per hour, taken from the actual timetables, the passenger flow can be obtained as illustrated in figure 4 , which presents the evolution of passenger flows over one day during disembarking and boarding. the flight slots are such that the largest numbers of departing passengers occur between 7:00 am and 9:00 am and between 2:00 pm and 5:00 pm. casablanca mohammed v international airport handles an average of 43,000 passengers a day. 
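the faucet-and-bucket flow model described above can be made concrete with a minimal discrete-time queue sketch: arrivals are the faucet, counter throughput is the leak, and the queue length is the bucket level. the numbers below are illustrative, not the airport's.

```python
def simulate_queue(arrivals_per_step, capacity_per_step):
    """track queue length when per-step arrivals meet a fixed
    per-step processing capacity (the leaking-bucket model)."""
    queue, history = 0, []
    for arriving in arrivals_per_step:
        queue += arriving                         # faucet: incoming passengers
        processed = min(queue, capacity_per_step)
        queue -= processed                        # leak: passengers processed
        history.append(queue)                     # bucket level after this step
    return history
```

whenever arrivals exceed capacity the backlog grows, which is exactly the congestion that the los waiting-time limits are meant to bound.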
the number of embarking and disembarking passengers is roughly balanced, with 22,000 passengers embarking for 21,400 passengers disembarking. the morning peak hour counts 2059 passengers boarding between 7:00 am and 8:00 am, while 2626 passengers board between 4:00 pm and 5:00 pm. the random motion model is a unidirectional walk in which a passenger may either move one position forward or remain stationary. the steps are assumed to be independent, so subsequent steps depend only on the current position x, where x is a random variable. the probability p′(x, t) that position x is occupied at time t (probability of stay) results from the transition probabilities of leaving the position, p(x → x′), and of entering it from outside, p(x′ → x). a forward transition is defined by equation 1 and immobility by equation 2. the residence probabilities p(x, t − 1) and p(x′, t − 1) indicate the probabilities with which the respective positions were occupied at the previous point in time. if the transition probabilities are weighted by the residence probabilities, the probability of occupation p(x, t) at position x and time t results according to equation 3. for the spatial mapping of the random walk, a fixed initial position x₀ = 0 is assumed, and the following position x_{t+1} is determined by the addition of a random variable, as in equation 4. in order not to induce a preferred direction of movement, the step probability of the random walk is taken as f = 1/2 (equation 5). since the same decision has to be made at each time step t, the probability of arriving at a point k after a number n of steps is given by a binomial distribution (equation 6).
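a minimal sketch of this forward-or-stay random walk, assuming the unbiased step probability f = 1/2: the position after n steps is the number of forward moves, and its distribution is the binomial of equation 6.

```python
import random
from math import comb

# sketch of the random walk above: at each time step a passenger moves one cell
# forward with probability f or stays put, so the position after n steps is
# binomially distributed, b(n, f) (equation 6).
def walk_position(n_steps, f=0.5, rng=None):
    rng = rng or random.Random(0)
    return sum(1 for _ in range(n_steps) if rng.random() < f)

def prob_at(k, n_steps, f=0.5):
    """p(position == k after n_steps): the binomial probability of equation 6."""
    return comb(n_steps, k) * f**k * (1 - f)**(n_steps - k)
```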
in general, the standard deviation increases with the number n of steps, according to equations 7 and 8. by the central limit theorem, as n → ∞ the binomial distribution b(n, f) approaches a normal distribution n(μ, σ²). departing passengers are distributed according to their arrival time at the airport, and all departing passengers are generated according to a flight number. this generation is based on the probability density function of a horizontally shifted normal distribution (equation 9), given by the time in minutes prior to departure (t), the average arrival time (μ) before departure and the standard deviation (σ). the normal distribution has been shown to be well suited to passenger arrivals at airports [21]; [22] detected a rightward asymmetry and therefore used johnson's distribution to obtain a better fit than with a normal distribution. as passenger arrival time is a critical point in determining the percentage of travellers missing their flight due to sanitary barriers, as well as in determining the level of quality of service, five strategies with different arrival distributions are studied, presented in figure 5. the abscissa values represent the time remaining until the departure of the aircraft; the origin thus represents the departure of the aircraft. a buffer time (t_p) is included in the horizontal offset of the distribution function. this guarantees that all passengers are generated in the airport's boarding lounge at least 45 minutes before departure time. the buffer time assumption is based on information provided by airlines. possible waiting times at security checks are not taken into account in the first instance. table 4 gives an overview of the strategies studied, with their varying average arrival times (μ), standard deviations (σ) and buffer times (t_p).
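generating arrival times from the shifted normal distribution of equation 9 can be sketched as below. mu, sigma and the buffer time tp are example values, not the strategy parameters of table 4, and the buffer is applied here as a simple clamp rather than the horizontal shift of the density function.

```python
import random

# illustrative sketch: draw each passenger an arrival time, in minutes before
# departure, from a normal distribution (mean mu, sd sigma), and enforce the
# buffer time tp so that everyone is present at least tp minutes before departure.
# values are examples, not the strategies of table 4.
def arrival_times(n_passengers, mu=120.0, sigma=30.0, tp=45.0, seed=0):
    rng = random.Random(seed)
    return [max(rng.gauss(mu, sigma), tp) for _ in range(n_passengers)]
```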
from strategy 1 to strategy 4, the mean arrival time and the standard deviation were gradually increased by 10 minutes and 5 minutes, respectively, which means that the time range within which passengers arrive at the airport was extended. strategy 5 is a variant of strategy 1 with a 5-minute shift towards the departure time. the processing times for security checks are based on iata recommended measures, with an average processing time of 35 seconds. the processing delays during the various operations in the check-in area are based on the operators' speed of processing passengers. the accumulation of passengers in the check-in zone results from the arrival of passengers according to their presentation profile and the occupancy rate of the check-in zone: the difference between arrivals at the airport and processed passengers gives the fill rate of the check-in zone. the speed of passenger processing (equation 10) depends on resources and increases with the number of operators (n) multiplied by the absorption rate (v). given the differences that exist according to the age and gender of passengers, an average obstacle-free walking speed was suggested by [23]: the mean value is 4.9 km/h with a standard deviation of 0.918 km/h, which corresponds to approximately 1.36 m/s and 0.255 m/s. this value distribution was accepted as an input to generate the preferred walking speed for the simulation; similar data on preferred walking speeds have already been successfully used by [22]. furthermore, both minimum (2.47 km/h) and maximum (6.18 km/h) walking speeds were established, the latter approaching the transition speed to running [24, 25]. during the simulation, actual walking speeds are influenced by the presence of social forces [26]. as a result, obstacles can slow down the speed within a certain period of time; on the other hand, thrust forces coming from behind can increase speed or even cause the walk to deviate [26].
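the walking-speed input described above can be sketched as a bounded normal draw: a preferred speed with mean 1.36 m/s and standard deviation 0.255 m/s, redrawn until it falls between the stated minimum (2.47 km/h) and maximum (6.18 km/h) speeds.

```python
import random

# sketch of the preferred walking-speed generation: normal draw with the mean
# and standard deviation quoted above, kept within the stated min/max speeds
# by rejection sampling. speeds are in m/s (km/h values divided by 3.6).
def preferred_speed(seed=0, mean=1.36, sd=0.255, lo=2.47 / 3.6, hi=6.18 / 3.6):
    rng = random.Random(seed)
    while True:
        v = rng.gauss(mean, sd)
        if lo <= v <= hi:
            return v
```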
mathematically, the basis of the social force model, formulated in [26, 27, 28] and recently given an interpretation in terms of optimal control and differential games [29], is now presented. the position of a passenger α can be represented by a point x_α(t) in space, which changes continuously over time t, so the velocity v_α(t) is given by equation 11: v_α(t) = dx_α/dt. if the global social force f_α(t) represents the sum of the systematic influences of different environmental factors on the passenger's behaviour, and the fluctuation term ξ_α(t) reflects random behavioural variations resulting from voluntary or involuntary deviations from optimal behaviour, the following equation for passenger acceleration or deceleration and change of direction is obtained (equation 12): dv_α/dt = f_α(t) + ξ_α(t). according to the description of f_α(t) (equation 13), it comprises an acceleration force f⁰_α(t), repulsive effects f_αb(t) due to boundaries, repulsive interactions f_αβ(t) with other passengers β, and attraction effects f_αi(t). in equation 14, the single-force terms are discussed: each passenger has his or her own desired speed v⁰_α in the direction e_α of the next destination, and deviations of the actual velocity v_α from the desired velocity v⁰_α e_α due to disturbances (obstacles or avoidance manoeuvres) are corrected within the so-called relaxation time τ_α ≃ 1 s, giving f⁰_α(t) = (v⁰_α e_α − v_α)/τ_α. under normal circumstances, the desired speed v⁰_α (equation 15) is approximately gaussian distributed with a mean value of 1.3 m/s, possibly smaller, and a standard deviation of around 0.3 m/s. in order to make up for delays, the desired speed v⁰_α(t) is often increased over time. it can be described, for instance, by equation 16 [29], in which v^max_α is the maximum desired velocity and v⁰_α(0) the initial one, corresponding to the planned departure speed. this time-dependent parameter reflects the nervousness or impatience of passengers.
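the driving term of equation 14 can be sketched directly: the actual velocity relaxes toward the desired speed times the desired direction within the relaxation time τ. the numeric values in the test are illustrative.

```python
# minimal sketch of the acceleration (driving) force of equation 14:
# f0 = (v0 * e - v) / tau, with v and e as 2-d vectors and tau ~ 1 s.
def driving_force(v, v0, e, tau=1.0):
    """2-d acceleration that steers velocity v toward speed v0 in direction e."""
    return ((v0 * e[0] - v[0]) / tau, (v0 * e[1] - v[1]) / tau)
```

for a passenger at rest whose desired speed is 1.3 m/s along the x axis, the driving force is (1.3, 0.0) m/s²; once the desired velocity is reached, the term vanishes.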
long waiting times decrease the actual average speed in the desired direction of motion, while the desired speed increases. at high pressures, crowding effects can occur and people may find themselves at risk of losing the social distance separating them [27]. to avoid this type of situation, passengers must keep a certain distance from the barriers at all times: the closer the barrier is, the more uncomfortable a passenger feels [30, 31]. this effect can be described (equation 17) by a repulsive force f_αb, which decreases monotonically with the distance ‖x_α − x_b‖ between the place x_α of the passenger and the nearest point x_b of the barrier; in the simplest case, this force can be expressed in terms of a repulsive potential u_αb. similar repulsive force terms f_αβ(t) describe the fact that each passenger keeps a distance from other passengers according to the situation. the simulations performed in this paper define the repulsive interaction force according to equation 18. using the distribution of equation 9 and the iata airport terminal manual [6], we present the pattern of arrival earliness at the check-in area of terminal 1 of casablanca mohammed v international airport for domestic (figure 6) and international flights (figure 7). the program is aimed at obtaining the appropriate passenger distribution. it is developed using python functions and consists of five worksheets, including arrival distribution, input data, daily distribution and chart. figures 6 and 7 show the passenger flow rate at check-in at intervals of ten minutes before departure time. they also both show that the pattern differs depending on the time of day. three different periods are applied: from 06:00 to 10:00, 10:00 to 18:00, and 18:00 to 24:00.
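the monotonically decreasing repulsive force of equation 17 can be sketched with an exponential potential u(d) = a·exp(−d/b) of the distance d to the nearest barrier point; the functional form and the parameters a and b are illustrative assumptions, not the fitted values of the paper.

```python
import math

# hedged sketch of the boundary repulsion (equation 17): force magnitude as the
# negative gradient of an assumed exponential repulsive potential
# u(d) = a * exp(-d / b), so it decreases monotonically with distance d.
def boundary_repulsion(d, a=10.0, b=0.2):
    """magnitude of -du/dd at distance d (metres) from the nearest barrier point."""
    return (a / b) * math.exp(-d / b)
```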
the time slot between 00:00 and 06:00 is not taken into account, as it is a low-traffic period. the comparison of passenger arrival earliness is shown in figure 8, which highlights the difference in arrival earliness between international and domestic flights [14]. for international flights, the last passengers should arrive 70 minutes before departure time; for domestic flights, passengers may arrive much later. however, passengers on both international and domestic flights now have to arrive earlier, as they have to undergo additional formalities and respect the sanitary measures described in the next section; there may also be mandatory quarantine and testing. the flight and passenger data used in the simulations correspond to a typical summer peak departure period (7:00 am). around this hour, there are about 14 flights for a total of 2050 passengers, with 62.5% travelling in boeing b-737s, 18.75% in embraer e-190s, 12.5% in b-787s and 6.25% in b-767s. the flights operate out of terminal t1, reserved essentially for the national airline royal air maroc. as the terminal complies with iata design standards [14, 15], the check-in area measures no more than 1462 m² for a total of 52 check-in counters. check-in counters as they are organised today (figure 9) do not provide any physical separation between check-in operators and passengers; it is therefore necessary to establish a procedure for managing passengers during check-in formalities in compliance with the required sanitary rules [15]. the simulations consider two possible scenarios. the first scenario considers the closure of one out of every two counters in the absence of plexiglass separation panels. the second scenario assumes the installation of separation panels between the queues and the operators as well as between side-by-side counters, thus bringing all the counters into operation.
each scenario runs three simulations in order to adjust the social distancing between passengers: the first simulation applies a distance of 2 m, the second a distance of 1.50 m and the third a distance of 1 m. normally, passengers must be present at the airport 3 hours before departure for international flights. the simulation program is developed to manage the flow of passengers in the check-in areas considering the new sanitary measures. display screens of the simulator are shown in figure 10 and figure 11. according to the input parameters (distance between two passengers, capacity of the check-in area, average processing time) illustrated in figure 11, the simulation program estimates, in steps of 10 minutes and according to the open check-in counters, the cumulative percentage and number of passengers present in the check-in area, the passengers registered and those in the process of being checked in, as illustrated in figure 10. in order to validate the proposed model, a turing test [32, 33] was carried out to observe the behaviour of the system under extreme conditions, allowing data to be collected and compared with real data from terminal 1 of casablanca mohammed v international airport. figure 12 represents, for each of the three simulations performed in the first scenario, the queue length observed, in numbers of people, before the aircraft departure. with a social distance of 2 m (red curve), the accumulation of passengers is very fast. a high point is observed 1 h 10 min before departure, with 1154 people still waiting. overall, at the departure of the aircraft, there are still 306 passengers who have not yet been processed. based on the closing time for check-in, which usually occurs 40 minutes before departure, 826 passengers will not have been processed. with a social distance of 1.50 m (orange curve), the accumulation of passengers is fast.
a high point is also observed 1 h 10 min before departure, with 894 passengers still waiting. at the end, 20 minutes before departure, there are still 46 people left in the queue; given that check-in closes 40 minutes before the departure of the aircraft, 410 people will not have been processed by that time. with a social distance of 1 m (green curve), the accumulation of passengers is slower and the processing of passengers is smoother. as in the other two simulations, there is a high point in the queue 1 h 10 min before departure, with 663 people still waiting. 40 minutes before departure, there are still 23 passengers in the queue as check-in is about to close. in all three simulations of this scenario, the departing passengers, despite the variation in the social distance measure, cannot all be processed in the time available: even with a distance of 1 m, there are still passengers in the queue 40 minutes before departure, when the check-in counters close. the solution of opening only every other check-in counter to ensure the application of health measures is therefore not an optimal solution in terms of passenger flow management. figure 13 represents, for each of the three simulations in the second scenario, the observed queue length, in numbers of people, before the aircraft's departure. with a social distance of 2 m (red curve), the observed accumulation of passengers is rapid. a peak is observed 1 h 20 min before departure, with 573 people still waiting. however, contrary to what was observed previously, 40 minutes before departure all the passengers could be processed. with a social distance of 1.50 m (orange curve), the accumulation of passengers is slower. a peak is also observed 1 h 20 min before departure, with 261 people still waiting; contrary to what was observed previously, 60 minutes before departure all passengers were processed.
with a social distance of 1 m (green curve), the queue is almost non-existent and the processing of passengers is fluid. a slight high point is observed, as in the two other simulations, 1 h 30 min before departure, with 65 people waiting; however, contrary to what was observed previously, all passengers could be processed 1 h 10 min before departure. with more check-in counters open, it is easier to comply with social distancing measures while at the same time ensuring an efficient passenger processing flow. indeed, in all simulations of the first scenario there are still passengers who are unable to board the aircraft because they are still in the queue, whereas in all simulations of the second scenario all passengers manage to finish checking in. an effective measure to maintain proper management of the passenger processing flow would therefore be a system separating the check-in counters and the associated queues, in order to be able to open as many counters as possible and thus contain the flow of passengers arriving at the departure point. in the case of a large influx of passengers, the requested capacity may exceed the possible processing capacity, which creates a queue build-up. figure 14 shows the waiting-time variation for a distance measure of 2 m in the case of the first scenario. the grey line represents the number of passengers arriving at the check-in counters; the high point of passenger presentation occurs 1 h 40 min before departure, with 656 people. the red line represents the number of passengers processed in time: with only 26 check-in counters open, it is not possible to exceed a maximum of 130 passengers processed in a 10-minute interval, which limits capacity and creates a wait as passengers arrive. the blue line represents the passengers remaining in the queue.
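the counter-capacity figure above is a simple product: with 26 counters and 130 passengers per 10-minute interval, each counter handles five passengers per interval, about two minutes each. a minimal sketch:

```python
# back-of-envelope sketch of the processing capacity described above: total
# throughput per 10-minute interval is the number of open counters times the
# passengers each counter can process in that interval (5 here, i.e. 2 min each).
def interval_capacity(n_counters, pax_per_counter=5):
    return n_counters * pax_per_counter
```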
a high point in the total number of remaining passengers can be observed 1 h 20 min before the flight departure, with 990 passengers waiting. it can be observed that the accumulated number of passengers cannot be absorbed, because even after departure there are still passengers waiting. the capacity to handle the flow of departing passengers was therefore exceeded: the limited number of check-in counters opened because of the distancing measures did not allow the flow to be processed. in the second scenario, by contrast, the total number of passengers is absorbed: 30 minutes before departure there are no passengers left in the queue. there was therefore good management of the capacity to handle the flow of departing passengers, as the higher number of check-in counters made it possible to process the flow. the simulation program provides a fairly accurate representation of the passenger flow and complies with iata measures for passenger processing. however, despite its effectiveness, the simulation program has limitations. the surface area of the check-in counters is not allocated according to the distribution of flights, which means that flights are treated uniformly without distinction: passengers of all flights are treated together, and the treatment is assumed to be homogeneous. in addition, since the iata table distributing the volume of space per passenger is very extensive, the simulation was carried out for one particular parameter present at check-in and not for all the parameters. also, the surface area of the check-in hall was calculated according to dgac standards and not from a real measurement carried out within the terminal in question. persons with reduced mobility and those requiring assistance are not included in the simulation; they were considered to pass through a separate lane specifically assigned to them.
similarly, the time required for potential additional health measures before the check-in area is not taken into account in the calculation. the simulation models used are stochastic and dynamic; they represent a close approximation of the real system and incorporate most of its main features. the analysis of the results obtained attests to the proper use of the proposed passenger flow management solution. in times of health crisis, the parameter-setting tool makes it possible to ensure the application of the required distances while anticipating the saturation of the check-in area. the simulations showed that it would be possible to support the processing of passengers at the airport using considerable resources (indeed, it would be necessary here to open all check-in positions). only under this condition is it possible to maintain acceptable performance in terms of schedule adherence and an acceptable los in terms of waiting times. however, the tool's use remains non-exhaustive and limited to a case study of terminal t1, where the treatment of flights is considered to be uniform, as the terminal is dedicated exclusively to the national airline; this does not necessarily apply to the other terminals.
while this study reveals the weaknesses of traditional passenger flow management and underlines the need for a fully integrated management system to meet service quality requirements, the proposed tool will help to keep ahead of unforeseen events, avoiding bottlenecks and long waiting times, while ensuring that sanitary measures such as social distancing are maintained during an international health crisis. the question arises as to what extent such an allocation of resources (here we only consider check-in, but other crossing points would also require intervention) would be economically bearable, and whether an adjustment of the schedule or a decrease in the frequency of flights would reduce the resource requirements necessary to maintain performance. in addition, this research could also be extended to other areas of the airport facing similar challenges.

references
- the effect of human mobility and control measures on the covid-19 epidemic in china
- clinical characteristics of coronavirus disease 2019 in china
- a methodological framework to evaluate the impact of disruptions on airport turnaround operations: a case study
- novel coronavirus (2019-ncov) early-stage importation risk to europe
- effects of novel coronavirus (covid-19) on civil aviation: economic impact analysis
- guide to airport performance measures
- puts over half of 2020 passenger revenues at risk
- visite du nouveau terminal 1 de l'aéroport mohammed v au profit de la presse nationale
- world health organization, management of ill travellers at points of entry - international airports, ports and ground crossings - in the context of the covid-19 outbreak
- world health organization, coronavirus disease 2019 (covid-19) situation report - 82, tech. rep., world health organization
- antecedents and consequences of passenger satisfaction with the airport
- airport terminal reference manual
- capacité des aérogares passagers - guide technique, france, paris, tac/sina groupe documentation et diffusion des connaissances (ddc) edition
- competitiveness vis-à-vis service quality as drivers of customer loyalty mediated by perceptions of regulation and stability in steady and volatile markets
- building an integrated model of future complaint intentions: the case of taoyuan international airport
- transfer passengers' perceptions of airport service quality: a case study of incheon international airport
- the determinants of air passenger traffic at turkish airports
- evaluating passenger services of asia-pacific international airports
- entwicklung eines individuenbasierten modells zur abbildung des bewegungsverhaltens von passagieren im flughafenterminal
- social force model for pedestrian dynamics
- evaluation of pedestrian walking speeds in airport terminals
- are transitions in human gait determined by mechanical, kinetic or energetic factors?
- experimental research of pedestrian walking behavior
- freezing by heating in a driven mesoscopic system
- pedestrian and evacuation dynamics
- self-organized pedestrian crowd dynamics: experiments, simulations, and design solutions
- pedestrian and evacuation dynamics
- simulating dynamical features of escape panic
- validating expert system prototypes using the turing test
- fundamentals of a turing test approach to validation of ai systems

highlights
- the aviation sector has been experiencing an unprecedented crisis since march 2020 due to the covid-19 pandemic, and a solution needs to be found for gradual reopening and risk minimisation
- this article proposes simulations and discussions on the possible effects of social distancing measures and tests their applicability in an international airport
- a bibliographic study of recent work in the same field as the article
- distribution of passengers according to various scenarios
- mathematical modelling of the problem

key: cord-318727-93486y6e
title: population-based study showed that necrotising enterocolitis occurred in space-time clusters with a decreasing secular trend in sweden
authors: magnusson, amanda; ahle, margareta; swolin-eide, diana; elfvin, anders; andersson, roland e.
date: 2017-04-24
journal: acta paediatr
doi: 10.1111/apa.13851
sha:
doc_id: 318727
cord_uid: 93486y6e

aim: this study investigated space-time clustering of neonatal necrotising enterocolitis over three decades.
methods: space-time clustering analyses objects that are grouped by a specific place and time. the knox test and kulldorff's scan statistic were used to analyse space-time clusters in 808 children diagnosed with necrotising enterocolitis in a national cohort of 2 389 681 children born between 1987 and 2009 in sweden. the municipality the mother lived in and the delivery hospital defined closeness in space, and the intervals between the cases' dates of birth (seven, 14 and 21 days) defined closeness in time.
results: the knox test showed no indication of space-time clustering at the residential level, but clear indications at the hospital level in all the time windows: seven days (p = 0.026), 14 days (p = 0.010) and 21 days (p = 0.004). significant clustering at the hospital level was found during 1987-1997, but not during 1998-2009. kulldorff's scan statistic found seven significant clusters at the hospital level.
conclusion: space-time clustering was found at the hospital but not the residential level, suggesting a contagious environmental effect after delivery, but not in the prenatal period. the decrease in clustering over time may reflect improved routines to minimise the risk of contagion between patients receiving neonatal care.
necrotising enterocolitis (nec) is the most common gastrointestinal emergency among neonates, and it mainly affects preterm infants, with mortality rates ranging from 10% to 50%; the highest mortality rate is found among infants requiring surgery (1-5). the overall incidence of nec varies between studies, from 0.3 to 1.0 per 1000 live births (1, 6, 7). however, in extremely preterm and very low birth weight infants, the incidence is approximately 7% (5, 8). the pathogenesis of nec is multifactorial, and some factors remain unknown (4, 5). most cases of nec occur sporadically; nevertheless, reports of clusters or outbreaks suggest that an infectious element could be a causal factor in nec (3, 9-13). this hypothesis is supported by the fact that improvements in infection-control procedures have stopped outbreaks of nec (11, 14). seasonal variations in the incidence of nec have been described, which also indicate that an infectious agent may contribute to the clustering of the disease (6, 12, 15). several microbial organisms have been proposed as possible causes of nec, for example klebsiella pneumoniae, staphylococcus aureus, escherichia coli, clostridium difficile, norovirus and rotavirus, but no specific causative organism was identified in some outbreaks (9, 11, 14, 16, 17). it has also been suggested that overcrowding in neonatal intensive care units (nicus) has contributed to clusters of nec (14). nevertheless, the majority of reports describing outbreaks of nec are retrospective and based on observed suspected outbreaks that could simply be random. abbreviations: nec, necrotising enterocolitis; nicu, neonatal intensive care unit. this study investigated space-time clustering of necrotising enterocolitis from 1987 to 2009 using national swedish data on nearly 2.4 million births. clustering was found at the hospital level during 1987-1997, but not during 1998-2009, and not at the residential level.
the decrease in clustering over time could be related to enhanced routines to minimise the spread of any potential nec-inducing contagion between patients in the neonatal intensive care unit. furthermore, most of the described outbreaks have been at the hospital level, while clustering based on the mother's residential municipality has not been addressed (9, 11-13). in reports on nec outbreaks, the cluster concept tends to be used subjectively, without a standard definition (18). our group previously presented a national, population-based study on nec epidemiology and trends in sweden, which described an increase in the incidence of nec between 1987 and 2009 (6). the same cohort was used in the present study to investigate space-time clusters of nec at two levels of closeness in space: the mother's residential municipality and the delivery hospital. furthermore, the present study was designed to examine whether there had been any change in the occurrence of space-time clusters over time, by studying two subperiods: 1987-1997 and 1998-2009. a cohort of newborn infants with a diagnosis of nec was identified from the following registers held by the swedish national board of health and welfare: the national patient register, the swedish medical birth register and the national cause of death register. all children born between 1987 and 2009 in sweden with a discharge diagnosis of nec according to the 9th or 10th revision of the international classification of diseases (icd-9 code 777f or icd-10 code p77) were identified. the nec diagnosis was introduced in icd-9 in 1987 and is based on the modified bell nec staging criteria (19, 20). as it was not possible to identify the exact date of the nec diagnosis, the date of birth of the study subjects was used for time comparisons in the cluster analysis. further details about the identification process were previously described (6).
an anonymised extract covering the background population of all children born in sweden during the same time period as the nec cases was also obtained from the birth register. this extract contained perinatal information and demographic data, including the municipality the mother lived in and the delivery hospital. sweden has a highly centralised care policy for very preterm and extremely preterm infants, based on the intention to transfer mothers with a high risk of preterm delivery to a regional level three hospital before they give birth. as a result, most of the infants diagnosed with nec are admitted to the nicu at the hospital in which they were born. two methods were used to analyse space-time interactions between nec cases: the knox space-time cluster analysis and kulldorff's space-time permutation scan statistic (21, 22). the knox test is based on an analysis of the proximity in space and time of all possible n(n − 1)/2 distinct pairs of cases (23). each individual pair is classified into one of four cells in a 2 × 2 table, with distance (close/not close) and time (close/not close) on the two axes, according to whether the two cases are close to each other in terms of geographical distance and time. a pair of cases is regarded as being in close proximity if their dates of birth are close and if their geographical locations at the time of birth are close. closeness in the date of birth was defined by time windows of seven, 14 and 21 days. two geographical levels were used to define closeness in space: the mother's residential municipality and the delivery hospital. the number of pairs of cases observed in close proximity was compared with the expected number of pairs, which was obtained from the cross-products of the column and row totals. if the observed number of pairs exceeded the expected number, there was evidence of space-time clustering.
the magnitude of the excess, or deficit, was estimated by calculating the strength of clustering using the equation s = [(o − e)/e] × 100, where s was the strength, o was the number of pairs of cases observed and e was the expected number of pairs. to study any changes over time in nec clustering, the population was divided into two cohorts according to the subjects' year of birth: 1987-1997 and 1998-2009. the knox test was used to compare the two time periods, and the binomial test was used to compare the change in incidence of nec between the two time periods. in addition, kulldorff's scan statistic, based on a space-time permutation model, was used to identify the presence of space-time and purely temporal clusters of cases (21). kulldorff's scan statistic is based on the number of observed cases among all births that have taken place within a circle of varying radius in space in one dimension and in a time window of varying duration in the other dimension. the statistic is centred at all geographical locations to look for possible clusters. thus, the circular window is flexible in location, size and time. for the analyses of clustering on the residential level, we used the geographical coordinates of the centre of the mothers' residential municipality. for the analyses of clustering at each delivery hospital, we used a purely temporal scan statistic, with a time window of varying duration. the number of observed cases in a cluster was compared to what would have been expected if the spatial and temporal locations of all cases were independent of each other, so that there was no space-time interaction. as described by kulldorff et al., the scan statistic makes minimal assumptions about the time, geographical location or size of the cluster and can be adjusted for both purely spatial and purely temporal variations (21).
the poisson distribution was used for testing the statistical significance of the difference between the observed and expected number of pairs in the knox test. kulldorff's scan statistic was assessed by monte carlo hypothesis testing with 999 simulations, which meant that the smallest p value we could obtain was 0.001 (24). statistical significance was set at p < 0.05. the study used stata statistical software, version 13 (statacorp lp, college station, tx, usa) and satscan, version 9.4.2 (kulldorff m. and information management services inc., ma, usa) for the statistical analyses (25). the study was approved by the regional ethical review board of linköping (dnr 2010/405-32). the study was based on a total of 2 389 681 births from 1987 to 2009, and the patient characteristics are described in table 1. information about the delivery hospital and the mothers' residential municipality was missing for 5,621 and 19,130 children, respectively. we identified 808 cases of nec, including 27 pairs of twins. each twin pair with nec was counted as one instance of nec for the cluster analyses. information about the mother's residential municipality and delivery hospital was missing for 12 and seven of the 808 cases, respectively. after we excluded the 27 second twins and the births with missing information on municipality or delivery hospital, there were 769 cases for the analyses based on municipality and 774 cases for the analyses based on delivery hospital. due to the centralised care of preterm infants in sweden, 422 of the 774 nec cases (54%) occurred at a hospital that did not match the residential municipality of the mother. to be specific, 58% of all the nec cases among extremely preterm births, with a gestational age under 28 weeks, and 22% of all the nec cases among term births, with a gestational age over 36 weeks, occurred at a hospital that was not the closest to the mother's municipality.
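the monte carlo testing procedure mentioned above (999 simulations, minimum attainable p value of 0.001) can be sketched generically: permute the case dates while holding the locations fixed, recompute the cluster statistic, and count how many permutations are at least as extreme as the observed value. this is an illustrative sketch, not satscan's implementation, and the function and variable names are our own:

```python
import random

def monte_carlo_p(observed_stat, cases, stat_fn, n_sim=999, seed=42):
    """Permutation-based Monte Carlo p value.

    Shuffles the time component of each (day, place) case while leaving
    the spatial component fixed, so any space-time interaction is broken
    under the null. With n_sim = 999 the smallest attainable p value is
    (0 + 1) / (999 + 1) = 0.001.
    """
    rng = random.Random(seed)
    days = [d for d, _ in cases]
    places = [p for _, p in cases]
    at_least_as_extreme = 0
    for _ in range(n_sim):
        rng.shuffle(days)
        if stat_fn(list(zip(days, places))) >= observed_stat:
            at_least_as_extreme += 1
    return (at_least_as_extreme + 1) / (n_sim + 1)
```

the "+1" in numerator and denominator counts the observed arrangement itself among the permutations, which is what caps the smallest p value at 1/(n_sim + 1).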
the cohort in the first time period, 1987-1997, consisted of 1 113 946 infants and 289 cases of nec, resulting in an nec incidence of 0.26 per 1000 live births. during the second time period, 1998-2009, the cohort consisted of 1 275 735 infants and 519 cases of nec, giving an nec incidence of 0.41 per 1000 live births. there was a significant increase in the incidence of nec in the second time period compared to the first time period (p < 0.001) (table 1). the knox test did not indicate any space-time clustering at a residential level in any of the studied time windows of seven, 14 or 21 days. there was significant space-time clustering at a hospital level, with the strongest clustering at a time window of seven days (s = 36.3, p = 0.026) (table 2). the knox test is sensitive to time-related shifts in the background population, which can give biased results. we therefore performed separate analyses for each of the two time periods. the first time period showed significant space-time clustering of nec in the time windows of seven and 14 days, with the strongest clustering at seven days (s = 122.2, p = 0.003) (table 2). during the second time period, there was no significant clustering at a hospital level in any of the studied time windows. kulldorff's scan statistic: at a residential level, kulldorff's scan statistic only identified one single space-time cluster of four cases during 17 days in january 1990. the four cases came from four different municipalities within a radius of 34 kilometres. at a hospital level, the purely temporal cluster analysis identified seven instances of temporal clusters at seven different hospitals (table 3). in four of the seven clusters identified by kulldorff's scan statistic, the cluster consisted of only two patients. however, several of these clusters occurred in hospitals with a low number of deliveries and few expected cases of nec in the given time interval.
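the incidence figures quoted above are easy to verify, and the comparison of the two periods can be approximated with a normal-approximation test for two proportions (the paper used a binomial test; this sketch, with helper names of our own choosing, only approximates that comparison):

```python
from math import erf, sqrt

def incidence_per_1000(cases, births):
    """Incidence expressed per 1000 live births."""
    return 1000 * cases / births

def two_proportion_z(c1, n1, c2, n2):
    """z statistic and two-sided p value for the difference between two
    incidence proportions, using the pooled-variance normal approximation."""
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    return z, 2 * (1 - phi)

# 1987-1997: 289 cases among 1 113 946 births; 1998-2009: 519 among 1 275 735
early = incidence_per_1000(289, 1113946)   # ~0.26 per 1000 live births
late = incidence_per_1000(519, 1275735)    # ~0.41 per 1000 live births
z, p = two_proportion_z(289, 1113946, 519, 1275735)  # p < 0.001
```

the computed incidences round to the 0.26 and 0.41 reported in the text, and the large-sample test agrees with the reported p < 0.001.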
of the seven statistically significant clusters, five occurred during november to april and only two occurred during may to october. the present study showed that nec occurred in clusters at a hospital level, as found with both the knox test and kulldorff's scan statistic. when we compared two different time periods (1987-1997 and 1998-2009) using the knox test, significant clustering was only found in the early time period, and the strongest clustering was found using seven days as the time window. our results showed no signs of space-time clustering related to the mother's residential municipality with the knox test and only one single cluster with kulldorff's scan statistic. several possible explanations for clustering on a hospital level have previously been described. one explanation is that nec is associated with a nosocomial infection spread from one child to another in a nicu. hill et al. described an outbreak of nec associated with klebsiella pneumoniae in all cases at one nicu (26). han et al. and alfa et al. described outbreaks of nec associated with clostridium species (27, 28). a second possible mechanism for clustering on a hospital level could be transmission from healthcare workers to the infants, as suggested by harbarth et al., who described an outbreak of enterobacter cloacae during a period of overcrowding and understaffing in the nicu (29). as the present study was a retrospective register study, no investigations could be carried out into whether bacteria in the infants and among the staff contributed to the clusters. contamination of human milk fortifier or formula is a third possibility for clustering on a hospital level. van acker et al. described an outbreak of nec where the same bacteria were isolated from both the neonates with nec and the powdered milk formula (10).
in sweden, most infants receive human breast milk in nicus, either from their mother or from a milk bank, but this milk is frequently enriched with human milk fortifier. a fourth possible explanation for clustering on a hospital level may be an accumulation of preterm births at referral hospitals due to referrals of at-risk pregnancies. this could theoretically lead to an overestimation of the number of clusters. the results from the knox test showed significant clustering during 1987-1997, but not during 1998-2009, which did not support an overestimation of clusters due to centralised care, as the centralisation of neonatal intensive care in sweden has increased over the last few decades. the finding of a decrease in clustering over time could be related to improvements in the neonatal intensive care of preterm infants. in this study, it was not possible to analyse whether the decrease in clusters was related to improved control of infection in the nicu, less overcrowding, better routines in the nicu or other reasons for reduced transmission of nec between patients. even though the knox test showed no significant clustering of nec during the last decade, kulldorff's scan statistic indicated that clusters of nec do still occur. clustering on a residential level would, as described above, indicate that nec is associated with causative agents, such as infections in the community. stuart et al. described a strong association with the norovirus in an outbreak of nec (11). chany et al. showed a significant association between coronavirus infections and nec (30) . these findings could indicate that the virus had its origin in the community and was then transmitted to the infants. the findings in the present study, in which the knox test found no clustering on a residential level and kulldorff's scan statistic found only one cluster, are strong indications against the theory that there is a connection between nec and infections spread in the community. 
however, when studying the clusters on a hospital level with kulldorff's scan statistic, it was noticed that the majority of the clusters occurred during november to april, which is also the season when most infections in the community occur. our group has previously described this seasonal variation, with a peak in the incidence of all cases of nec in november and a decrease in may (6). the present study showed indications of space-time clustering of nec on a hospital level in sweden, but not at the level of the mother's residential municipality, suggesting a contagious environmental effect after delivery. the decrease in clustering on a hospital level over the last few decades may indicate that improved routines in modern neonatal care are effective in minimising the transfer of agents involved in the development of nec between patients in the nicu. however, continued awareness of signs of clusters is still warranted to further minimise the risk of environmental factors for nec being transferred from one patient to another.
references:
1. necrotising enterocolitis hospitalisations among neonates in the united states
2. low birthweight, gestational age, need for surgical intervention and gram-negative bacteraemia predict intestinal failure following necrotising enterocolitis
3. a cluster of necrotizing enterocolitis in term infants undergoing open heart surgery
4. necrotising enterocolitis
5. necrotizing enterocolitis
6. epidemiology and trends of necrotizing enterocolitis in sweden
7. epidemiology of neonatal necrotising enterocolitis: a population-based study
8. variations in incidence of necrotizing enterocolitis in canadian neonatal intensive care units
9. cluster of necrotizing enterocolitis in a neonatal intensive care unit: new mexico
10. outbreak of necrotizing enterocolitis associated with enterobacter sakazakii in powdered milk formula
11. an outbreak of necrotizing enterocolitis associated with norovirus genotype gii.3
12. epidemic occurrence of neonatal necrotizing enterocolitis
13. cluster of late preterm and term neonates with necrotizing enterocolitis symptomatology: descriptive and case-control study
14. a decrease in the number of cases of necrotizing enterocolitis associated with the enhancement of infection prevention and control measures during a staphylococcus aureus outbreak in a neonatal intensive care unit
15. seasonal variation in the incidence of necrotizing enterocolitis
16. nosocomial necrotising enterocolitis outbreaks: epidemiology and control measures
17. necrotising enterocolitis: is there a relationship to specific pathogens?
18. epidemiology of necrotizing enterocolitis temporal clustering in two neonatology practices
19. neonatal necrotizing enterocolitis. therapeutic decisions based upon clinical staging
20. necrotizing enterocolitis in neonates fed human milk
21. a space-time permutation scan statistic for disease outbreak detection
22. the knox method and other tests for space-time interaction
23. the detection of space-time interactions
24. modified randomization tests for nonparametric hypotheses
25. satscan is a trademark of martin kulldorff. the satscan software was developed under the joint auspices of martin kulldorff, the national cancer institute, and farzad
26. nosocomial colonization with klebsiella, type 26, in a neonatal intensive-care unit associated with an outbreak of sepsis, meningitis, and necrotizing enterocolitis
27. an outbreak of necrotizing enterocolitis associated with a novel clostridium species in a neonatal intensive care unit
28. an outbreak of clostridium difficile necrotizing enterocolitis: a case for oral vancomycin therapy?
29. outbreak of enterobacter cloacae related to understaffing, overcrowding, and poor hygiene practices
30. association of coronavirus infection with neonatal necrotizing enterocolitis
acknowledgements: we would like to thank nils-gunnar pehrsson and henrik eriksson at statistiska konsultbyrån for their statistical expertise. this study was financed by grants from the alf agreement between the swedish government and county councils to sahlgrenska university hospital. the authors have no conflict of interests to declare.
key: cord-347550-ai48wq61 authors: sheridan, gerard a.; boran, sinead; taylor, colm; o'loughlin, padhraig f.; harty, james a. title: pandemic adaptive measures in a major trauma center: coping with covid-19 date: 2020-05-20 journal: j patient saf doi: 10.1097/pts.0000000000000729 doc_id: 347550 cord_uid: ai48wq61
in light of the current global crisis due to covid-19, communication among the scientific community is both time sensitive and imperative to curtail the projected strain that is predicted to overwhelm our global healthcare services. "social distancing" is now considered a vital measure in controlling this pandemic spread. 1 we also know that healthcare workers with increased exposure times to the virus are more likely to contract the infection.
2 we therefore describe some pragmatic "pandemic adaptive measures" (pams) that have been implemented by the orthopedic department in our level 1 trauma center to reduce viral exposure times for patients and doctors. the most significant doctor-patient contact time occurs during the daily fracture clinic. typically, in our institution, 60 patients are reviewed by 3 doctors over a 4-hour period. with the announcement of the covid-19 pandemic, we immediately implemented virtual fracture clinics (vfcs). we already know that greater than two-thirds of patients may be managed virtually without ever needing to attend the fracture clinic in person. 3 this reduces the patient-doctor interaction time and doctor-doctor interaction time, significantly reduces the financial burden, and is met with satisfaction in up to 97% of patients. 3, 4 since the introduction of the vfc, the total number of patients attending has dropped on average from 60 to 20, and we expect it to fall further as the service develops. each doctor now sees on average 7 patients instead of 20 per clinic, and the average patient-doctor interaction time has been reduced from 200 to 70 minutes in a single clinic. to reduce the time spent by patients in the emergency department, another pam introduced is an online clinical communication platform for the on-call trauma team. this mobile device application is general data protection regulation compliant and involves emergency department staff, house officers, residents, and the attending surgeon on call that day. immediate decision making allows for the accelerated discharge of patients not requiring emergency surgical intervention. this in turn reduces both patient-patient and doctor-patient contact times in the emergency department of our level 1 trauma center. reducing intradepartmental contact time is possibly the most important behavioral change that we could instigate at an institutional level at this time.
we know that healthcare workers are particularly vulnerable to viral contraction. 2 if one member of staff becomes infectious, the risk of transmission within the department, leading to significant numbers of surgical staff in self-isolation, can quickly become overwhelming and lead to the decimation of trauma service provision in the level 1 center. in this respect, we reduced the number of staff attending the daily post-take trauma round to essential staff only (attending on call, resident on call, trauma coordinator). this significantly reduced the number of staff in close confines every morning from 15 to 3 and has cut the weekly doctor-doctor interaction time by 120 minutes per week. before the pandemic outbreak, a weekly mdt review meeting was held where all postoperative cases would be discussed and critiqued by the entire orthopedic department with ancillary staff, including physiotherapists, nurse specialists, and radiographers. this meeting is essential in maintaining a high standard of care, with up to 40 staff members in attendance at any one time. by transferring this meeting to an online multiuser platform, all staff members can log in and review the cases presented from a remote location. this eliminates 45 minutes of unnecessary intradepartmental exposure time for each staff member. the introduction of the vfc also has ramifications for intradepartmental exposure. for example, each resident would typically staff 3 clinic sessions per week (240 minutes total). since the implementation of the vfc, this weekly exposure time has dropped to 80 minutes and is likely to decrease further as the system evolves. to quantify the impact that these pams have had in our institution, consider the following histogram demonstrating a significant reduction in both doctor-patient interaction time (in clinic) and doctor-doctor interaction time (in general) for a standard orthopedic resident on a weekly basis in our level 1 trauma center (fig. 1).
in summary, these pragmatic pams may be implemented by any surgical specialty facing the challenges of the covid-19 pandemic to reduce patient and doctor exposure times while simultaneously maintaining a high standard of trauma care at this challenging time.
references:
1. covid-19: uk starts social distancing after new model points to 260 000 potential deaths
2. clinical characteristics of 30 medical workers infected with new coronavirus pneumonia
3. trauma assessment clinic: virtually a safe and smarter way of managing trauma care in ireland
4. cost comparison of orthopaedic fracture pathways using discrete event simulation in a glasgow hospital
correspondence: cork, ireland; sheridga@tcd.ie. the authors disclose no conflict of interest.
key: cord-346973-muemte3p authors: lai, francisco tsz tsun title: association between time from sars-cov-2 onset to case confirmation and time to recovery across sociodemographic strata in singapore date: 2020-08-01 journal: j epidemiol community health doi: 10.1136/jech-2020-214516 doc_id: 346973 cord_uid: muemte3p
amid the coronavirus disease-2019 (covid-19) pandemic, one of the most important indices of healthcare systems' performance in addressing the drastically increased burden is the average time to recovery of patients, the minimization of which indicates a strong capacity for handling the crisis and avoiding a total collapse of the systems. previous research has suggested the importance of early detection amid epidemic outbreaks to facilitate better management of the disease. 1 nevertheless, seldom has any research examined the relationship between time from the onset of severe acute respiratory syndrome coronavirus 2 (sars-cov-2) to case confirmation and time to recovery, or how this relationship varies across sociodemographic strata.
from the singaporean official website on covid-19, 2 i extracted the records of 221 recovered patients with symptomatic presentation in singapore, where the mortality rate from sars-cov-2 was estimated at only 0.09% as of may 2020. 3 although a large fraction of the patient data was pending further update, the currently available data suffice for preliminary purposes. using these data, a poisson regression analysis was implemented to examine the aforementioned relationship, with age, sex and nationality specified as potential moderators, each interacting with time from onset to case confirmation in relation to time to recovery. as only secondary analysis of publicly available data was involved, no ethics approval was required. results showed that being 10 years older was associated with 8% more time to recovery, and that one additional week from onset to case confirmation was associated with 50.0% less time to recovery among singaporean females. this inverse association was 17% weaker among males, 5% weaker per additional 10 years of age and 69% weaker among other south east asian nationalities. full numeric results are tabulated in table 1. the observed inverse relationship between time from onset to case confirmation and time to recovery is possibly due to a lower severity of the condition among patients with only mild symptoms, which took longer to arouse medical attention but eventually less time to treat. the increased complexity among male and older patients suggested in previous research 4 may explain the observed weaker negative association, because these patients may be more likely to develop severe symptoms regardless of the time from onset to case confirmation. last, the weaker association among south east asian patients was possibly because of the systematic testing of foreign workers living in dormitories where notable outbreaks took place, 5 such that time from onset to case confirmation no longer depended mainly on symptomatic presentation.
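under the log-link poisson model used here, coefficients translate into percentage changes in expected time to recovery, and interaction terms add on the log scale (so rate ratios multiply). the helper below illustrates this reading of the results; it is not the author's code, and the coefficient values and the exact parameterization of the reported percentages are hypothetical assumptions chosen only to reproduce the reported pattern:

```python
from math import exp, log

def pct_change(beta):
    """Percentage change in the expected outcome (time to recovery) per
    one-unit increase in a predictor under a log-link Poisson model."""
    return 100 * (exp(beta) - 1)

def subgroup_effect(beta_main, beta_interaction):
    """Exposure effect within the moderated subgroup: log-scale
    coefficients add, so the corresponding rate ratios multiply."""
    return pct_change(beta_main + beta_interaction)

# Hypothetical coefficients (assumptions, not the paper's estimates):
beta_age = log(1.08)        # 10-year age increase -> 8% more time to recovery
beta_week = log(0.50)       # +1 week onset-to-confirmation -> 50% less time (females)
beta_week_male = log(1.17)  # one possible reading of "17% weaker among males"

female_effect = pct_change(beta_week)                     # -50.0%
male_effect = subgroup_effect(beta_week, beta_week_male)  # weaker (closer to 0)
```

the point of the sketch is only the mechanics: an interaction coefficient above zero shrinks a negative main effect toward zero, which is what "weaker inverse association" means here.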
funding: the author declares no specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors. competing interests: none declared. patient consent for publication: not required. provenance and peer review: not commissioned; internally peer reviewed. this article is made freely available for use in accordance with bmj's website terms and conditions for the duration of the covid-19 pandemic or until otherwise determined by bmj. you may use, download and print the article for any lawful, noncommercial purpose (including text and data mining) provided that all copyright notices and trade marks are retained.
references:
1. intensive care of patients with severe influenza during the epidemic in 2016
2. covid-19 singapore dashboard
3. as virus deaths grow, two rich nations keep fatality below 0.1%. bloomberg
4. risk factors for severity and mortality in adult covid-19 inpatients in wuhan
5. singapore spike reveals scale of migrant worker infections. bbc news
key: cord-270818-hi4rkp9l authors: zhang, shu-ning; li, yong-quan; liu, chih-hsing; ruan, wen-qi title: a study on china's time-honored catering brands: achieving new inheritance of traditional brands date: 2021-01-31 journal: journal of retailing and consumer services doi: 10.1016/j.jretconser.2020.102290 doc_id: 270818 cord_uid: hi4rkp9l
the time-honored brand is the best brand retained from centuries of business and handicraft competition, representing inestimable brand, economic and cultural value. however, it has encountered the issue of heritage in the new era. to address this issue, in view of the critical role of customer word-of-mouth (wom) in brand inheritance and reputation, this study constructed and examined the wom path of time-honored catering brands by investigating 606 customers. its conclusions highlight the positive antecedent of brand authenticity for customers' in-person wom and ewom.
the path is influenced by the mediation mechanisms of response (awakening of interest), cognition (brand experience) and affection (brand identification). moreover, the interaction between creative performance and brand authenticity can positively promote customers' brand experience. however, cultural proximity plays different roles in the stages of customers' brand attitudes and behaviors. the results provide managerial implications for how to promote the sustainable inheritance of traditional brands. around the world, many traditional brands are struggling with their own decline (li et al., 2019). similarly, in china, the time-honored brand is the best brand to survive centuries of business and handicraft competition after generations of inheritance, showing irreplaceable value. unfortunately, most traditional brands won in history but have lost today. a survey by the ministry of commerce of the people's republic of china showed that the number of china time-honored brands dropped from approximately 16,000 in 1949 to 1128 in 2016 (li et al., 2019). only 10% are thriving, and many time-honored brands' survival and growth are at serious risk. especially with the explosive growth of modern restaurants, traditional catering brands are experiencing unprecedentedly fierce market competition and challenges (cheng et al., 2018; troiville et al., 2019; phung et al., 2019; koh et al., 2009). how to ensure the survival and inheritance of traditional catering brands today is an urgent issue to be solved, one related to the sustainable development of traditional excellent brands and the common treasure of mankind. the customer perspective is critical for evaluating the development of catering companies. traditional catering references have explored consumer decision, intention, satisfaction, service experience, brand image, etc. (sürücü et al., 2019; halim and hamed, 2005; stanujkic et al., 2019; tan and chang, 2010).
however, previous conclusions mainly highlight the customers' response to the consumption process and pay little attention to the future survival of traditional catering (chen and huang, 2016). importantly, brand inheritance is mainly influenced by consumer word-of-mouth (wom). the connotation and development of the time-honored catering brand indicate that it has historically relied on customer in-person wom to earn social reputation and loyalty (he, 2008; tian et al., 2018). today, information technology and social media are advancing rapidly. electronic word-of-mouth (ewom) is not limited by time and space, showing a wider promotion effect. accordingly, customer wom contributes greater influence than traditional marketing tools, such as enterprise publicity (steven podoshen, 2006); customer wom can change consumer brand attitudes, brand behaviors, and brand choices (chu and kim, 2011; east et al., 2008; séraphin et al., 2019). therefore, an excellent means of exploring the time-honored catering brand inheritance path is examining how to improve the customer wom path. this new attempt can fill the theoretical gaps of traditional catering inheritance and provide managerial implications for its sustainable development in the new era. exploring the inheritance of time-honored brands actually means constructing the formation path of customer wom. several issues have been raised and need to be addressed further. first, what is the guiding factor for customers in customer wom about time-honored brands? unlike fast food and creative restaurants, the essence and evaluation standards of the time-honored brand depend on recipe originality, craftsmanship, historical culture, brand spirit and so forth (li, 2018). inheritance points to brand authenticity as the core element of time-honored catering brands.
in other words, brand authenticity represents a symbol of the unique chinese catering culture (tsai and lu, 2012; modya and hanksb, 2019), which is also regarded as a brand symbol and an attraction that determines customers' consumption motivation (moulard et al., 2016). therefore, this study asserts that brand authenticity is the critical leading factor in the inheritance of time-honored chinese catering brands. second, how does the authenticity of time-honored brands promote customers' wom? few scholars have attempted to examine the relationship between brand authenticity and customer wom (dipietro and levitt, 2019), especially in the context of time-honored catering brands. however, the stimulus-organism-response (sor) theory provides a theoretical foundation for investigating the wom path of time-honored catering brands (chang et al., 2011). to this end, this study proposes that brand authenticity, as a brand stimulus, may promote customers' wom in 3 stages: awaken, experience and identify (moulard et al., 2016; tsai and wang, 2017; kim et al., 2019). recently, culture and creativity have come to be considered important development factors (zhang et al., 2019a,b). in the new era, can traditional catering improve the customer experience through creative performance? moreover, time-honored brands show profound cultural heritage. what role do customers' cultural backgrounds play in brand heritage? this study will systematically answer these questions to clarify the inheritance path of traditional catering brands. the perspectives of wom and creativity provide a new research scheme for the inheritance path of time-honored catering brands.
our research not only addresses urgent issues, providing a theoretical path for the brand inheritance of time-honored catering brands and clarifying the specific roles of the influencing factors, but also expands consumer behavior theory (liu and jang, 2009), brand management theory (hyun, 2009) and cultural theory (arnould and thompson, 2005). more importantly, our results are of great practical value and make the significant contribution of specific managerial guidance for the sustainable development path of traditional catering brands. the phrase "time-honored brands" refers to well-recognized brands with unique regional cultural and historical value: traditional well-known brands that have been passed down through several generations and were established before 1956. these brands have a long history of recipes, crafts or services and have won a wide range of social praise (he, 2008; tian et al., 2018). because of the combination of unique traditional historical value and modern marketing brand concepts, the time-honored brand has become a popular research topic in academic circles (shang and chen, 2016). related studies focus on the development of traditional catering brands, including traditional food technology, cooking (li and hsieh, 2004), protection, innovation (lee, 2018; koh et al., 2009) and business models of traditional restaurants (indrawan et al., 2016). some references reflect consumers' attitudes toward traditional food, including consumer brand image evaluation (almli et al., 2011; sürücü et al., 2019), perception (guerrero et al., 2010), motivation (wang et al., 2015), and preferences (balogh et al., 2016). moreover, china time-honored catering brands are the most unique and important branch of traditional catering. however, a large number of time-honored catering brands have gradually disappeared from the market (mu, 2017).
few studies have focused on the realistic and severe problems of inheritance and development that time-honored catering brands face. the current research on time-honored catering brands mainly explores two aspects. the first is the importance of brand value and realistic problems, such as the relationship between the brand equity of time-honored brands and generational transfer (he, 2008), and discussions of intangible value and brand value (li, 2018; grace and o'cass, 2005; sarker et al., 2019). the second is the exploration of time-honored brand development strategies, including micromarketing strategies (leng, 2004), brand activation and revitalization (forêt and mazzalovo, 2014), and brand image design (guo and kwon, 2018). these studies mainly discuss the development of time-honored brands from the perspective of business management, and few scholars have conducted quantitative research from the perspective of customers. for example, mu (2017) found that consumers' nostalgic psychology played a key positive role in cognition and purchase intention by constructing a purchase intention model of time-honored brands. huang (2017) analyzed the difference in chinese and foreign customers' experiences with time-honored catering brands from a cross-cultural perspective. accordingly, previous studies show the following gaps: (1) previous studies mainly considered the perspective of enterprise management, ignoring the fact that customer wom is a critical factor in time-honored brands' ability to earn a reputation and in their historical heritage (tian et al., 2018). therefore, this study is innovative in its research perspective on time-honored catering brands and explores brand inheritance from the perspective of customer wom. (2) current studies do not pay enough attention to the core element of brand authenticity or clearly explore the inheritance path of time-honored brands from the customer perspective.
therefore, this study constructs a brand inheritance model of time-honored catering brands. (3) although some scholars have paid attention to the impact of traditional catering on customer experience from a cross-cultural perspective or the perspective of innovation (huang, 2017; koh et al., 2009), fewer studies have answered whether cultural factors and creativity can improve customers' cognitive attitudes and behaviors regarding time-honored catering brands. to address these unresolved issues, we introduce the moderator variables of creative performance and cultural proximity. stimulus-organism-response (sor) theory was first proposed by mehrabian and russell (1974) and was later modified by jacoby (2002). sor theory emphasizes that certain external influences provoke and change the individual's emotional and cognitive condition, leading to certain behavioral outcomes (kamboj et al., 2018). the s-o-r framework consists of three components: stimulus, organism and response. the first component, the stimulus, refers to the external cues that influence an individual. in the restaurant experience, stimulation is an expression of the core features provided by the restaurant. undoubtedly, brand authenticity, as brand packaging attraction (moulard et al., 2016), is an external stimulus condition through which time-honored catering brands awaken customers' interest and passion. creative performance, as a means of innovation, also belongs to some extent to the external stimulus. the organism is the second component of sor theory and refers to the customers' cognition and affection. the organism exists in the process from stimuli to customer responses (kamboj et al., 2018). cognition represents people's understanding of things or phenomena, which specifically includes psychological experience processes such as feeling, perception, imagination, and thinking. further, affection is the attitude that people form about whether objective things meet their own needs. 
after interest is aroused by brand authenticity, customers are more likely to engage with the brand, resulting in brand experience. brand experience is a critical attribute of the direct and lasting links connecting customers with time-honored brands (tsai and wang, 2017; ong et al., 2018). when the experience notably meets customers' expectations of the brand, it will stimulate a high-level affection of brand identification (kempf, 1999). behavior, as the final stage, is the response of customers to external stimuli. positive brand experience and brand identification encourage customers to form cognition of and emotion toward traditional brands, ultimately leading to wom behavior (kim et al., 2019). in other words, to realize wom behavior regarding time-honored catering brands, customers may need to pass through the intermediary mechanism of awakened interest, brand experience and brand identification. in addition, we need to consider some key accelerators. some time-honored chinese catering brands perform poorly and are being eliminated (li, 2018) because they fail to match changing consumer trends or lack innovation (leng, 2004; munthree et al., 2013). creativity is the manifestation of the originality and genuineness of time-honored brands, which may be a great approach for customers to better understand the brands (horng et al., 2013). further, cultural background, such as cultural proximity, is considered another significant factor affecting customers' experiences and value evaluations (chang et al., 2011). the role of cultural background in customer attitude and behavior is still unknown. however, the time-honored brand has a distinctive cultural characteristic. under this scenario, it is of great significance to explore the influence of cultural proximity on the customer's dining process. based on these viewpoints, this study explores the moderating effects of creative performance and cultural proximity. 
consequently, based on sor theory, this study constructs the two-stage moderated mediation model shown in fig. 1. brand authenticity refers to the degree to which a brand is perceived to be original and authentic, meaning that it is unique rather than derivative (akbar and wymer, 2017). if food is produced by traditional or manual methods, the product may be considered authentic (cinelli and leboeuf, 2020). the connotations and historical value of time-honored catering brands all point to the high level of uniqueness and originality of the brands, and these characteristics are the core of the brands' attraction and stimulation of consumer demand. experiencing authentic food is a primary motivation for customers to become interested and make decisions, which highlights the influence of authenticity on consumer decisions and behaviors (jiménez-barreto et al., 2020). guèvremont and grohmann (2018) assert that the nature of brand authenticity can awaken great interest from consumers. the awakening of interest reflects the stimulation of a customer's potential interest in something, which serves as an important factor in predicting customer behavior (machleit et al., 1993). if customers' sensory and affective interest is awakened, they will have a strong need to participate in the brand experience and be satisfied through it. in short, the brand authenticity of time-honored catering brands will awaken customers' interest and enhance their in-depth brand experience, because customer interest can arouse positive emotions. when customer interest is aroused, the impression of the authenticity of time-honored catering brands will be deepened, thereby affecting the customer brand experience. thus, this study concludes that the authenticity of time-honored catering brands can enhance customers' experience by awakening their interest. hypothesis 1. brand authenticity will affect customers' brand experience through awakening their interest. 
in an ever more competitive market, brands must offer memorable experiences to their customers if they want to differentiate themselves and build a solid competitive position (iglesias et al., 2019). brand-related stimuli (i.e., product, design, atmosphere, packaging and publicity) are the main sources of consumers' subjective responses and feelings, which are called brand experience (brakus et al., 2009; carlson et al., 2019; liljander et al., 2009; ong et al., 2018). the concept of brand experience is of great interest to marketers because brand experience is crucial in determining consumers' brand attitudes and behaviors. scholars have confirmed that brand experience is the leading factor driving customers' positive brand attitudes (shamim and mohsin butt, 2013) and driving brand identification (rahman, 2014). customers' understanding and identification of time-honored brands will be generated and enhanced easily when they have deep experiences, such as the mobilization of the senses, affection, behavior, and even intelligence (füller et al., 2008). further, some scholars assert that consumer brand experience is often closely associated with the authenticity represented by traditional culture (alexander, 2009). brand authenticity is often used by companies to stimulate customers (laub et al., 2018), because authenticity has become an important indicator of a brand, enriching the brand experience (mody and hanks, 2019). if time-honored catering brands can show the characteristics of being original and genuine, their customers' brand experience will be enhanced. therefore, the authenticity of time-honored catering brands can enhance consumer brand experience and ultimately lead to positive brand identification. hypothesis 2a. brand authenticity will affect customers' brand identification through brand experience. 
another viewpoint on the effects of brand experience emphasizes that when a brand awakens the interest of consumers, internally consistent consumption desires will enhance the depth of the brand experience (coelho et al., 2018). the rationale for hypothesis 2a is that brand experience is a subjective, internal consumer response. over time, brand experiences may produce changes in affective bonds, participation behaviors, senses and intelligence (prentice et al., 2019; brakus et al., 2009). in this study, the sensory experience comprises the tactile, visual, auditory, olfactory, and gustatory stimulation generated in customers by brands (iglesias et al., 2019). because of the sense of history and cultural heritage of time-honored brands, the affective dimension captures the degree to which customers perceive such a brand as an emotional brand (iglesias et al., 2019). the intellectual level and behavioral level are, respectively, the imagination and curiosity and the customer attitudes and behaviors initiated by the brand (das et al., 2019). these aspects compose the internal results of the stimulus that awakens the experience; in particular, contact with senses and behaviors greatly enhances the experience (pinker, 1997). in the early stage of customer visits, marketers try to enhance the experience by awakening visitors' interest with promotional materials (kim and ko, 2012). some evidence in other research fields also shows that commercial websites enhance customers' online experiences by attracting their interest (nah et al., 2011). in this study, the original attraction of time-honored catering brands can successfully awaken customer interest, which will trigger customers' motivation to participate in the experience and enhance their brand experience level (jones and runyan, 2013; chhabra, 2010). when customers' interests are aroused, they will identify with time-honored catering brands through an in-depth brand experience. therefore, the hypothesis is as follows: hypothesis 2b. 
awakening customers' interest will affect their brand identification through brand experience. customers' brand identification is regarded as a high degree of brand understanding and recognition (popp and woratschek, 2017), which affects additional customer behavior toward the brand, including positive wom and other supportive behaviors (zhu et al., 2016). wom has become one of the most effective marketing tools (prentice et al., 2019). arnett, german, and hunt (2003) assert that brand wom is a way of expressing and improving self-identity and a behavioral response to customer identification. individuals achieve identification with a specific brand after experience, which ultimately leads to a positive behavioral outcome (alexander, 2009). therefore, brand experience is often regarded as a predictor of consumer behavior (zarantonello and schmitt, 2010). further, brand identification, as a link establishing consistency between brand and customer, promotes customers' intention to recommend (berrozpe et al., 2019). more importantly, brand identification mainly occurs after a memorable positive brand experience (merk and michel, 2019), because a positive brand experience forms an identity between the customer and the brand (han et al., 2020). when the degree of customer brand experience with time-honored catering brands is high, customers will show positive brand identification psychology and support the brands through wom (han et al., 2020). further, according to sor theory, brand experience is the cognitive process of interacting with time-honored catering brands, identification represents the customer's brand attitude, and wom is the display of the customer's brand behavior. in the process of brand experience accumulation, customers gain a comprehensive understanding of the brand and take action (wom) (hanna and rowley, 2011; saleem et al., 2018). 
in other words, after obtaining a higher level of brand experience, customers will enhance their recognition of time-honored brands and generate brand wom to support their identification psychology. additionally, wom includes two important forms of communication: in-person wom and ewom (eelen et al., 2017). in-person wom is the real-time interaction between time-honored brand fans and other potential customers after the former's brand recognition (klesse et al., 2015). in-person wom, with its strong credibility, has a high success rate in influencing other customers to visit. in the process of ewom, customers have more time to think and reconstruct the communication content regarding time-honored brands (sijoria et al., 2019). both in-person wom in the traditional era and ewom in the internet era represent important brand communication behavior following the customer's brand identification. therefore, this study asserts that customer brand experience will influence brand identification, resulting in both positive in-person wom behavior and ewom behavior. hypothesis 3a. customers' brand identification mediates the relationship between their brand experience and in-person wom. hypothesis 3b. customers' brand identification mediates the relationship between their brand experience and ewom. creative performance is described as the ability to generate new ideas, behaviors, concepts, designs and service programs; it refers to updating old ideas into new and unique ones (wang and netemeyer, 2004). previous studies indicate that creative performance can present innovative service forms and improve product quality (sternberg, 2012), which are key factors in improving brand experience (füller et al., 2011). in particular, the combination of authenticity and creativity has become a key means for the development of tourism and leisure experiences in the new period (zhang et al., 2019). 
the authenticity of time-honored brands contributes a key feature for customers in choosing to visit a restaurant (ponnam and balaji, 2014). however, customers evaluate the brand experience not only on the brand's authenticity but also on its creativity (darvishmotevali et al., 2018). in the construction of time-honored catering brands, different creative ways can be adopted to improve customers' brand experience, such as the presentation of unique ideas, new marketing strategies and new services (darvishmotevali et al., 2018). the creative development of innovative products and services by brands can satisfy the changing needs of customers (chang and teng, 2017). moreover, creativity is regarded as the best means of expressing and transmitting brand authenticity conservation and inheritance. it not only helps customers enhance their cognition and understanding of the original nature of the brand but also influences customers' brand experience and enhances the emotional connection between customers and brands (schmitt, 2010). therefore, this study concludes that the interaction between the authenticity and creative performance of time-honored catering brands can enhance customers' brand experience. hypothesis 4. creative performance will positively moderate the relationship between brand authenticity and brand experience. unlike other brands, time-honored brands are well known for their splendid culture (mu, 2017) and the characteristics of regional cultures in china (forêt and mazzalovo, 2014). some scholars argue that cultural background factors (e.g., cultural differences, cultural proximity, and cultural distance) can explain a variety of customer dining behaviors (sheldon and fox, 1988). for example, chang et al. (2011) found that customers evaluated local cuisine based on their own culinary culture and habits. 
in the context of time-honored catering brands, the process of customers' cognition (brand experience), attitude (brand identification) and behavior (wom) may be affected by cultural proximity. although many studies have proven that cultural proximity influences customers' visit motivation, interest, and familiarity (weiermair, 2000; huang et al., 2013), no direct evidence has confirmed that cultural proximity plays an important role in determining customers' brand attitudes and subsequent behavior toward time-honored catering brands. kivela and crotts (2006) believe that restaurants have become the main channel for customers to experience local cultures, and the degree of cultural proximity is likely to affect customers' attitudes toward a destination. time-honored catering brands represent unique regional cultures, and the degree of cultural proximity can affect the degree of customers' recognition of these brands (forêt and mazzalovo, 2014), because previous research has shown that a unique food culture determines which customer groups are targeted (kivela and johns, 2003). therefore, customers with similar cultural backgrounds are likely to show more positive brand identification after an in-depth brand experience. therefore, another hypothesis is as follows: hypothesis 5. cultural proximity will positively moderate the relationship between brand experience and brand identification. cultural background may also influence the conversion of customer attitude into positive behavior. culture includes factors such as common values, beliefs, attitudes, behavior norms, customs, rituals, ceremonies and perceptions (warner and joynt, 2002). especially in china, there are differences in the cultures of different ethnic groups, geographical locations, and regions, etc. 
(zhang et al., 2019), causing differences in customer preferences, tastes, and habits; cultural characteristics are deeply rooted and influence many individual behaviors, including responses to the distinct cultural background of time-honored catering brands (atkins and bowler, 2001). for example, barreto (2014) asserts that cultural differences can affect the expression of customers' wom. when the cultural background is similar, customers are more likely to express their opinions and recommendations through wom after understanding and recognizing time-honored catering brands (yaveroglu and donthu, 2002). as part of a place's intangible cultural heritage, time-honored catering brands reflect local cultural characteristics and create a sense of place (gordin and trabskaya, 2013). customers with stronger cultural proximity can find cultural resonance with such brands and prefer to recommend the catering brand to others. if they have cultural backgrounds similar to the brands', customers can easily understand time-honored catering brands. additionally, cultural proximity in a region plays social and other comprehensive roles as a background factor (sahin and baloglu, 2011), affecting customer wom (in-person wom and ewom). therefore, this study concludes that customers with higher cultural proximity are more likely to produce positive in-person wom and ewom after identification. hypothesis 6a. cultural proximity will positively moderate the relationship between customers' brand identification and in-person wom. hypothesis 6b. cultural proximity will positively moderate the relationship between customers' brand identification and ewom. this study used a questionnaire survey to collect data and structural equation modeling (sem) for data analysis. the latter is a quantitative method in the positivist tradition, often used in restaurant research (chou et al., 2020; han et al., 2020; chen et al., 2014). 
questionnaire surveys can reduce the interference of the surveyor with the respondents. furthermore, sem can test the relationships between multiple variables at the same time; it provides a more complete test of the entire proposed theoretical model, avoiding inaccurate standard error estimates or evaluation biases due to nonindependent observations (liu, 2018). therefore, the method is well suited to testing the hypothesized relationships and the formation of the customer wom path in this study. to make the investigation more accurate, data were collected using questionnaires completed by customers; an intercept approach was utilized in fujian province (zhang and xu, 2019). we followed several lines of reasoning and consulted several resources to select the samples. first, according to the definition and industry standards for china's time-honored brands, a brand must have been established before 1956 and must possess unique products, skills or services. further, the brand must show bright regional cultural characteristics and historical and cultural value and have a good reputation. second, as the conditions for becoming a time-honored brand are relatively strict, we selected 11 time-honored brands with broad social awareness distributed throughout china (shown in fig. 2). for example, quanjude (全聚德) was founded in 1864, during the qing dynasty, giving it a history of 157 years. its founder, yang quanren, famous for making beijing roast duck, pioneered the brand's style of roast duck. at present, the roast duck technology has been selected as a national intangible heritage project. moreover, quanjude's "all-duck feast (全鸭席)" and more than 400 special dishes are known as "the first chinese food". zhou enlai, the former premier of the people's republic of china, repeatedly chose quanjude's "all-duck feast" as the state banquet. overall, fig. 2 shows the typicality and representativeness of our research objects. 
third, before the survey, we asked the respondents to choose the time-honored catering brand with which they had the deepest impression or experience and to complete the questionnaire for that brand. therefore, people who had no consumption experience with time-honored catering brands were not asked to participate. to efficiently gather quality research data, the data were collected using the following steps. first, we conducted survey training for the investigators, covering the questionnaire distribution process, anonymous surveying and similar matters. second, several trained assistants were instructed to intercept customers who passed through the research area near the restaurants and to distribute the questionnaire (zhang et al., 2019). third, we clarified the purpose of the questionnaire and the procedure for completing it and answered any doubts. fourth, the respondents were asked to fill in the items one by one. then, the research assistants checked each questionnaire before it was collected and gave restaurant coupons as a token of appreciation. finally, the questionnaires were completed from january to april 2019. a total of 700 questionnaires were distributed, and 689 questionnaires (98.42%) were recovered. after checking the validity of the collected questionnaires, 83 invalid questionnaires (those with identical answers to every item or with incomplete data) were excluded, leaving 606 questionnaires (86.57%) for the data analysis. table 1 summarizes the detailed statistics of the respondents. the study scales consisted of the following constructs: brand authenticity, awakening of interest, brand experience, brand identification, creative performance, cultural proximity, in-person wom and ewom. the scales were originally developed in english and then translated into chinese. thus, a back-translation procedure (brislin, 1976) was conducted by four professors and researchers in the tourism management research field to retain the original meanings of the items and obtain the chinese version of the scale. 
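the screening rule described above (dropping questionnaires that are incomplete or answered identically throughout) can be sketched in code. this is only an illustration of the idea, not the authors' actual procedure; the function name `screen_responses` and the item-column names are hypothetical.

```python
import pandas as pd

def screen_responses(df: pd.DataFrame, item_cols: list) -> pd.DataFrame:
    """Drop invalid questionnaires: rows with missing item answers, and
    'straight-lined' rows in which every item received the same answer."""
    complete = df.dropna(subset=item_cols)
    # nunique(axis=1) == 1 means one identical answer across all items,
    # a common sign of careless responding
    varied = complete[complete[item_cols].nunique(axis=1) > 1]
    return varied

# hypothetical 7-point likert responses from three respondents
df = pd.DataFrame({"q1": [7, 4, None], "q2": [7, 5, 3], "q3": [7, 6, 2]})
print(screen_responses(df, ["q1", "q2", "q3"]))  # only the second row survives
```

applied to the 689 returned questionnaires, a filter of this kind would flag the straight-lined and incomplete cases among the 83 exclusions.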
moreover, to make the measurement more accurate, we adopted a 7-point likert scale (1 = totally disagree to 7 = totally agree) (stylidis et al., 2017). mature scales based on the previous literature were used in this study. specifically, (1) brand authenticity was measured using the 8-item scale of mody and hanks (2019), which comprised the two sub-dimensions of originality (3 items) and genuineness (5 items). (2) we assessed awakening of interest using 4 items from tercia et al. (2020), measuring the degree of customers' interest in time-honored catering brands. (3) we used 11 items from brakus et al. (2009) to measure brand experience (sensory, 2 items; intellectual, 3 items; affective, 3 items; behavioral, 3 items). (4) five items adapted from popp and woratschek (2017) were used to measure customers' brand identification, reflecting the degree to which customers identified with the time-honored brands. (5) we used 6 items adopted from darvishmotevali et al. (2018) to measure creative performance, showing customers' perception of the creative ideas, behaviors or measures conveyed by time-honored brands. (6) cultural proximity was also measured by 6 items (huang et al., 2013), which reflected the degree of proximity between the culture of the customers' permanent residence and the culture of the time-honored brands they visited. (7) the variables of in-person wom (3 items) and ewom (7 items) were measured by items from eelen et al. (2017). (8) some demographic variables were controlled, including gender, age, education background, monthly income and number of experiences with the relevant brands (fu et al., 2016; pan et al., 2017). the mean, standard deviation, factor loading, composite reliability (cr), average variance extracted (ave) and cronbach's alpha value of the measurement variables are shown in table 2. 
all of the cronbach's alpha values were above 0.797, indicating a high level of reliability for each construct (ryu et al., 2019). the standardized factor loading of each item was significant and above 0.700, satisfying the threshold of 0.60 (gieling and ong, 2016). moreover, the values of cr were within the range of 0.796-0.944 (above 0.70), and the values of ave ranged from 0.640 to 0.725 (above 0.50), which demonstrated the reliability and convergent validity of each construct (bagozzi and yi, 1989). further, each cronbach's alpha value was higher than the 0.70 cut-off recommended by chow and chan (2008), showing the high level of internal consistency of each construct. to confirm the construct validity, we used confirmatory factor analysis (cfa) to examine the first- and second-order factor structures. the cfa examination results indicated that the first- and second-order factor structure models were acceptable for further study. as shown in table 3, there was a high level of correlation between the constructs. therefore, the variance inflation factor (vif) needed to be examined to determine whether there was a high level of collinearity between the constructs. the results showed that all the vifs of the constructs were less than 5.51 (liu, 2018), indicating that collinearity was not a serious issue. further, this study confirmed discriminant validity between each pair of constructs because the square roots of the ave values were greater than the correlation coefficients (fornell and larcker, 1981). in addition, this study used similar steps to check for serious correlations between the variables (craighead et al., 2011). following previous suggestions, the extent of common method variation (cmv) needed to be assessed (podsakoff et al., 2003). we applied satorra-bentler's scaled chi-square difference method using spss 21.0. 
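the cr and ave thresholds above are easy to audit, since both statistics can be recomputed directly from the standardized loadings. the sketch below uses the five brand-identification loadings reported in table 2; the function names are ours, and the formulas are the standard fornell-larcker ones.

```python
def composite_reliability(loadings):
    """CR = (sum λ)^2 / ((sum λ)^2 + sum(1 - λ^2)) for standardized loadings λ."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = sum(λ^2) / n; its square root goes on the diagonal when
    checking discriminant validity against inter-construct correlations."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# standardized loadings of the five brand-identification items (table 2)
bi = [0.845, 0.822, 0.845, 0.818, 0.844]
print(round(composite_reliability(bi), 3))       # 0.92, matching the reported CR
print(round(average_variance_extracted(bi), 3))  # 0.697, matching the reported AVE
```

the same two functions reproduce the reported cr and ave for the other constructs when fed their table 2 loadings.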
finally, the results of the common factor model showed that the first extracted factor explained only 48.5% of the variance, below the 50% threshold (podsakoff, 1986). therefore, there was no concern about potential common method variation in this study. to test the research hypotheses, a two-step procedure was followed. first, sem was applied in amos 20.0 to test the overall model structure. further, we used a bootstrap confidence interval approach (monte carlo approach, 2,000 resamples, 95% bias-corrected confidence intervals (ci) and p-values; liu, 2018) to examine the mediating effects. second, this study employed hierarchical regressions in stata 13.0 to test the moderating effects (zhang et al., 2019). fig. 3 shows the standardized path coefficients; each direct path was significant at p < .001. further, the overall model fit the data well (χ2 = 1929.127, p < .001; χ2/df = 2.954; cfi = 0.940; ifi = 0.940; tli = 0.935; nfi = 0.912; agfi = 0.823; rfi = 0.905 and rmsea = 0.057). this study tested the mediators of awakening of interest, brand experience and customers' brand identification using sem. first, hypothesis 1 proposed that awakening of interest mediated the relationship between brand authenticity and brand experience. as illustrated in fig. 3, the two sub-dimensions of brand authenticity (originality, β = 0.970; genuineness, β = 0.944; all p < .001) directly affected the mediator of awakening of interest (β = 0.779; p < .001). moreover, awakening of interest was positively associated with brand experience (β = 0.598; p < .001). the average indirect effect of brand authenticity on brand experience through awakening of interest was significant (β = 0.466, p < .001). therefore, hypothesis 1 was supported. second, the mediating effect of brand experience was examined in terms of the predictions of hypotheses 2a and 2b. the direct effects of brand authenticity on brand experience (sensory, β = 0.957; intellectual, β = 0.904; affective, β = 0.978; behavioral, β = 0.946; all p < .001) were positive and significant (β = 0.403, p < .001). additionally, brand experience had a positive effect on customers' brand identification (β = 0.904; p < .001). 
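the bootstrap mediation test described above (resampling the data, re-estimating the a and b paths, and reading a 95% ci for the indirect effect a·b) can be sketched as follows. this is an illustrative percentile-bootstrap version on simulated data, not the authors' amos bias-corrected procedure; all names are ours.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=42):
    """95% percentile-bootstrap CI for the indirect effect a*b in the
    simple mediation model x -> m -> y (x is controlled in the y equation)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)               # resample cases with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]              # path a: m regressed on x
        design = np.column_stack([ms, xs, np.ones(n)])
        b = np.linalg.lstsq(design, ys, rcond=None)[0][0]  # path b: y on m, controlling x
        effects.append(a * b)
    return np.percentile(effects, [2.5, 97.5])
```

if the resulting interval excludes zero, the mediation is judged significant, mirroring the "no confidence interval contained 0" criterion applied in table 4.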
thus, brand experience played a mediating role in the relationship between brand authenticity and customers' brand identification (β = 0.785; p < .001) and the relationship between awakening of interest and customers' brand identification (β = 0.541; p < .001). as such, hypotheses 2a and 2b were supported. hypotheses 3a and 3b predicted that customers' brand identification played a mediating role in the relationships among brand experience, in-person wom and ewom. customers' brand identification affected in-person wom (β = 0.935; p < .001) and ewom (β = 0.925; p < .001) positively and significantly. the results showed that customers' brand identification mediated the relationship between brand experience and in-person wom (β = 0.845; p < .001). further, brand experience had a positive and significant effect on ewom through customers' brand identification (β = 0.837; p < .001), which demonstrated that hypotheses 3a and 3b were supported. as seen in table 4, no confidence intervals of the two-tailed tests contained 0 (saleem et al., 2018), confirming that the mediating roles of awakening of interest, brand experience and customers' brand identification in hypotheses 1 to 3 were fully supported. the relationships between brand authenticity and customer wom were also moderated by creative performance and cultural proximity. as shown in table 5, models 1 and 3 were the baseline models, including the control variables and the independent variables of brand authenticity and brand experience. models 2 and 4 added the interaction effects, which examined hypotheses 4 and 5. the results indicated that the coefficient for the interaction term brand authenticity × creative performance was positive and significant for brand experience (β = 0.034; p < .05). further, a slope test was used, and a two-dimensional diagram was drawn to further confirm the specific pattern of the interaction effect. fig. 
4 demonstrates that when the customers perceived that the time-honored catering brands had a high level of creative performance, the effects of brand authenticity on brand experience were enhanced. moreover, the results showed that the interaction effect of brand experience and cultural proximity on customers' brand identification was not significant (β = 0.021; p = .204), which did not support hypothesis 5. *p < .05; **p < .01; ***p < .001. correlation values above 0.608 were significant at *p < .001. square root of average variance extraction are shown on the diagonal in bold. a similar procedure method was used to examine hypotheses 6a and 6b. table 6 summarized the moderating effects of cultural proximity. model 6 showed that the moderator of cultural proximity associated with customers' brand identification and in-person wom was positive and significant (β = 0.045; p < .01). further, there was also a positive interaction effect of customers' brand identification and cultural proximity on ewom (β = 0.056; p < .001). as shown in fig. 5 and fig. 6 , the simple slope analysis showed that at a higher level of cultural proximity, there were two more significant positive correlations between customers' brand identification, in-person wom and ewom. thus, hypotheses 6a and hypotheses 6b were supported. to examine whether the results of this study have strong stability, the same procedure including sem and regression analyses was used to test the mediating moderating models (tsai et al., 2015; liu, 2017) . the independent variable of brand authenticity was separated into two dimensions of originality and genuineness in the alternative model. fig. 7 summarized the output path estimates, which showed that all direct paths were significant. 
the overall fit of the alternative model was worse than that of the proposed model (χ² = 2529.291, p < .001). specifically, the mediating effects of awakening of interest on the relationship between originality and brand experience (β = 0.381; p < .001) and on the relationship between genuineness and brand experience (β = 0.253; p < .001) were significant, which provided further evidence regarding hypothesis 1. further, originality (β = 0.465; p < .001) and genuineness (β = 0.499; p < .001) affected brand experience through awakening of interest. additionally, brand experience mediated the effect of awakening of interest on customers' brand identification (β = 0.579; p < .001), which still supported hypotheses 2a and 2b. moreover, customers' brand identification still positively and significantly mediated the relationships among brand experience, in-person wom and ewom (h3a: β = 0.799; h3b: β = 0.788; all p < .001). therefore, hypotheses 3a and 3b were fully supported. next, we evaluated the two moderators. the interaction effect of originality × creative performance was not significant for brand experience (β = 0.029; p = .087), but genuineness × creative performance (β = 0.051; p < .01) was positive and significant for brand experience, which partially supported hypothesis 4. similarly, the moderating test results for cultural proximity were identical to the proposed model's validation results. the structural model proposed in this study was therefore robust. the inheritance path of traditional catering brands has always been an issue in need of an urgent theoretical response (tian et al., 2018). this study analyzes the development characteristics of time-honored catering brands and establishes the significance of customer wom for brand heritage. based on sor theory, we construct and test the wom path.
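the mediation results above rest on percentile-bootstrap confidence intervals for indirect (a × b) effects, checking that no interval contains 0. a minimal sketch of that procedure on simulated data is below; the variable names (stimulus, mediator, response) and the effect sizes are illustrative assumptions, not the study's actual data or estimates.

```python
import numpy as np

# simulate a simple mediation chain: stimulus -> mediator -> response
rng = np.random.default_rng(1)
n = 500
stimulus = rng.normal(size=n)
mediator = 0.6 * stimulus + rng.normal(size=n)                    # a-path
response = 0.5 * mediator + 0.1 * stimulus + rng.normal(size=n)   # b-path + direct effect

def indirect(idx):
    """Estimate the indirect effect a*b on a bootstrap resample."""
    x, m, y = stimulus[idx], mediator[idx], response[idx]
    a = np.polyfit(x, m, 1)[0]                       # slope of mediator on stimulus
    Xd = np.column_stack([np.ones(len(idx)), m, x])
    b = np.linalg.lstsq(Xd, y, rcond=None)[0][1]     # slope of response on mediator, controlling stimulus
    return a * b

# percentile bootstrap of the indirect effect
boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo, hi)  # an interval excluding 0 indicates a significant indirect effect
```

the same resampling logic extends to the moderated paths by adding interaction terms to the design matrix.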
first, this study has shown that brand authenticity is a critical leading factor and that the interaction between brand authenticity and creative performance can promote traditional brand inheritance. brand authenticity reflects the originality and genuineness of time-honored catering brands, which is one of the characteristics that distinguish these brands from other catering brands. however, the heritage of time-honored catering brands needs creative elements to improve the customer experience; creative performance reflects the combination of historical and innovative elements that enhances the customer experience, leading to customer brand wom behavior. second, customers' responses to the brand (awakening of interest), cognition (brand experience) and attitudes (brand identification) are important mediating factors for their wom. sor theory consolidates the theoretical foundation of the inheritance path model. as the stimulus, brand authenticity can successfully awaken customers' latent interest in the experience. the brand experience process mobilizes customers' senses, behaviors, affections, intellect and other comprehensive feelings (brakus et al., 2009), leading to identification with traditional brands. consequently, customers manifest desired behaviors such as in-person wom and ewom, showing positive results for brand recognition (dimitriadis and papista, 2010). this study highlights the positive effects of cognition (brand experience) and affection (identification) on customers' wom in the context of time-honored catering brands. third, an interesting finding is that cultural proximity can strengthen customers' behavior (wom) after brand identification, but it cannot strengthen the formation process of customers' brand identification. customers with higher cultural proximity are willing to express common cultural emotions and generate the desired behavior, and it is easier to convey the cultural connotation and essence of time-honored brands to them.
brand identification is a direct consequence of affective mobilization in the process of brand experience (stryker, 2004). further, customers' brand identification is essentially an evaluation of intuitive feelings in a high-level experience process (lin and sung, 2014). this evaluation is mainly aimed at the experiential level of catering products, especially food, rather than cultural backgrounds. it is therefore difficult to enhance or weaken the influence of customer brand experience on brand identification through cultural proximity. given the decline and failures stemming from serious issues with time-honored catering brands (li, 2018), the study results contribute significant theoretical value and breakthroughs to the research field. first, this study is the first to provide a specific path for traditional catering brand inheritance. the theoretical process highlights the core driver of brand authenticity and the accelerator of creative performance. although many scholars have emphasized the unique historical and cultural value of traditional catering brands (tian et al., 2018), to date, few studies have focused on addressing brand decline and constructing a theoretical path of traditional catering brand inheritance, especially by remodeling from a customer perspective rather than an enterprise perspective (li et al., 2019; he, 2008). importantly, the critical leading factor of brand authenticity is situated in the present and the future, not just in the historical period (lu et al., 2015; sims, 2009). only the combination of old originality and new creativity can generate high levels of customer wom (lu et al., 2015; wang and netemeyer, 2004). this study is also a new attempt to explore the creative performance of time-honored catering brands in terms of products and services, because this variable has usually been used to represent the influence of employee service behavior (darvishmotevali et al., 2018).
therefore, creativity grounded in brand authenticity needs to be valued in the future. second, the findings clarify the different influences of cultural background factors on traditional catering customers in the attitude and behavior processes. the conclusions respond to previous arguments about the influence of cultural distance on individuals (huang et al., 2013) and help us to re-understand the different roles of cultural proximity in the three stages before, during and after a visit. many studies have shown that customers with greater cultural differences have more novelty interest and motivation before they visit (huang, 2017; huang et al., 2013). however, the findings in this study provide direct evidence that cultural proximity strengthens or weakens customer attitudes and behaviors after visits rather than before visits, as indicated in a previous study (sims, 2009). moreover, building on the influence of cultural differences on customer motivation, we distinguish and expand the different roles of cultural proximity in different stages of the customer experience, especially in the context of traditional cultural brands. the results complement research on cross-cultural customer psychology and behavior and cultural theory (huang, 2017). third, this study provides a new perspective from which to address and confirm the relationship between brand authenticity and customer behavior. although previous studies have highlighted that authenticity, as an attraction, plays an important role in customer satisfaction, evaluation and motivation (zeng et al., 2012; dipietro and levitt, 2019), this study expands the research on the influence of brand authenticity on customer wom.
more importantly, the research identifies a three-stage organism process through customers' responses (awakening of interest), cognitive processes (brand experience) and attitude formation (brand identification) (lu et al., 2015; dipietro and levitt, 2019). this conclusion highlights that customer cognition and attitude are significant bridges in the process of forming wom (moulard et al., 2016). in addition, the paths of the mediating mechanisms broaden sor theory and facilitate theorizing about how to realize traditional brand inheritance in the new period. the research results provide an important development path for realizing time-honored catering brand inheritance. first, time-honored catering enterprises should protect their unique traditional secret formulas and manual skills to ensure the original flavor, for example by providing sufficient funds to train restaurant artisans and by using original recipes. in addition, the management of process inheritance and brand expansion needs to learn from the modern enterprise development model while building on the brand's own traditional characteristics, which helps it adapt to changing customer demands. on the other hand, it is necessary to ensure the consistency of brand culture in services, products, and marketing; the core position of the traditional elements of time-honored catering brands, including brand logo, brand style and brand design, must be ensured. for example, authentic food ingredients and decoration features should be used, and staff should wear cultural costumes, which will enhance the aesthetic and create a more unique and authentic service experience (lu et al., 2015). in addition, unlike other forms of catering such as takeout, authenticity requires a sense of presence, and live production can enhance the sensory and emotional experience.
however, it is important to maintain adequate spacing between dining tables so that customers can enjoy the authentic experience in an uncrowded space. at each touchpoint, time-honored catering enterprises should pay attention to cultural storytelling to highlight their historical competitiveness, such as the origin of each dish, the technological process and the founder's experience. second, authenticity alone may no longer be enough to ensure the sustainable development of a traditional restaurant (lu et al., 2015). time-honored catering brands must realize that creative products and services are the driving factors of competitive advantage (liang and james, 2014). these may include product windows and creative feature films and projects that enhance customers' intuitive perceptions of the brand (bogicevic et al., 2019). the presentation of time-honored products can be more diversified and creative, including the modeling of dishes, plate presentation, tableware, etc. further, the combination of traditional and modern elements needs to be emphasized. time-honored brands can also use creative derivatives to stimulate a diverse consumption experience, such as creative tableware souvenirs, time-honored seasonings, etc. (ryu and zhong, 2012). further, personalized service should be available when necessary. a transparent window can be used to show the cooking process, so that customers not only have a taste experience but can also understand traditional brands visually. moreover, because of differences in regional tastes, managers should make appropriate improvements to adapt dishes to local preferences, such as reducing spiciness. third, customers' wom is spontaneous behavior, but providing online and offline wom platforms for customers is still an initiative that time-honored catering brands must take seriously. therefore, creating social media platforms and online communities is essential (bernritter et al., 2016).
for example, managers can design community activities with a strong cultural tone and style and a sense of identity, maintaining emotional bonds through social media interaction. further, festivals and special events should be actively leveraged to provide customers with promotional material about time-honored brands, pushing them to share brand information (collins-dodd and lindley, 2003; eelen et al., 2017). moreover, establishing an internal connection between time-honored brands and customers is the key to maintaining wom, as it can highlight customers' identification. for example, providing time-honored brand membership, benefits and feedback systems can enhance the offline one-to-one connection between customers and brands, acting as a channel for highlighting customers' unique identities. although this study creates a new path model for addressing the issue of time-honored catering brand inheritance and the results make significant contributions, some limitations can provide insights for future research. this study collected data from chinese customers of time-honored catering brands. however, research on cultural proximity needs to involve cross-cultural samples, and it is better to use a multinational sample to highlight the influence of cultural distance on customer attitudes and behaviors. future research may examine different groups in transnational cultures to ensure the universality and external validity of the hypothesized model proposed in this study (hwang and ok, 2013). further, an examination of the results through a comparative study of transnational samples, such as in different countries, is needed. in addition, sem was used in this study to discuss the inheritance of time-honored catering brands from the perspective of customers' wom. a multilevel model may be the best way to examine the influence of brand authenticity on customers' wom.
unfortunately, there were a total of 1128 authorized time-honored enterprises in 2016, and only one in ten of them is thriving. further, time-honored catering brands are few: there may be fewer than 30 typically successful time-honored catering brands that satisfy our study sample needs (li et al., 2019), which is insufficient to meet the data standards for the multilevel method. future research may not only be limited to samples of time-honored catering brands but also attempt to study traditional catering enterprises more broadly. the robustness of the model may be further examined through a multilevel analysis method. finally, future research can use qualitative interviews to investigate the relationship between spatial distance, crowding degree and the customer brand authenticity experience in the post-covid-19 world.
references
refining the conceptualization of brand authenticity
brand authentication: creating and maintaining brand auras
general image and attribute perceptions of traditional food in six european countries
the identity salience model of relationship marketing success: the case of nonprofit marketing
consumer culture theory (cct): twenty years of research
food in society
on the use of structural equation models in experimental designs
consumer willingness to pay for traditional food products
the word-of-mouth phenomenon in the social media era
why nonprofits are easier to endorse on social media: the roles of warmth and brand symbolism
am i ibiza? measuring brand identification in the tourism context
virtual reality presence as a preamble of tourism experience: the role of mental imagery
brand experience: what is it? how is it measured? does it affect loyalty?
back-translation for cross-cultural research
enhancing brand relationship performance through customer participation and value creation in social media brand communities
intrinsic or extrinsic motivations for hospitality employees' creativity: the moderating role of organization-level regulatory focus
attributes that influence the evaluation of travel dining experience: when east meets west
understanding the importance of food tourism to chongqing
nostalgic emotion, experiential value, brand image, and consumption intentions of customers of nostalgic-themed restaurants
an empirical study on culinary tourism destination brand personality and its impact in the context of confucian culture
back to the past: a sub-segment of generation y's perceptions of authenticity
the critical criteria for innovation entrepreneurship of restaurants: considering the interrelationship effect of human capital and competitive strategy: a case study in taiwan
social network, social trust and shared goals in organizational knowledge sharing
keeping it real: how perceived brand authenticity affects product perceptions
on the relationship between consumer-brand identification, brand community, and brand loyalty
store brands and retail differentiation: the influence of store image and store brand attitude on store own brand perceptions
addressing common method variance: guidelines for survey research on information technology, operations, and supply chain management
emotional intelligence and creative performance: looking through the lens of environmental uncertainty and cultural intelligence
does brand experience translate into brand commitment?: a mediated-moderation model of brand passion and perceived brand ethicality
integrating relationship quality and consumer-brand identification in building brand relationships: proposition of a conceptual model
restaurant authenticity: factors that influence perception, satisfaction and return intentions at regional american-style restaurants
measuring the impact of positive and negative word of mouth on brand purchase probability
the differential impact of brand loyalty on traditional and online word of mouth: the moderating roles of self-brand connection and the desire to help the brand
the long march of the chinese luxury industry towards globalization: questioning the relevance of the "china time-honored brand"
evaluating structural equation models with unobservable variables and measurement error
reality tv, audience travel intentions, and destination image
why co-creation experience matters? creative experience and its impact on the quantity and quality of creative contributions (r d manag)
brand community members as a source of innovation
warfare tourism experiences and national identity: the case of airborne museum 'hartenstein' in oosterbeek
the role of gastronomic brands in customer destination promotion: the case of st. petersburg
service branding: consumer verdicts on service brands
perception of traditional food products in six european regions using free word association
does brand authenticity alleviate the effect of brand scandals?
the remodeling of brand image of the time-honored restaurant brand of wuhan based on emotional design
consumer purchase intention at traditional restaurant and fast food restaurant
antecedents and the mediating effect of customer-restaurant brand identification
towards a strategic place brand-management model
transference or severance: an exploratory study on brand relationship quality of china's time-honored brands based on intergenerational influence
creativity as a critical criterion for future restaurant space design: developing a novel model with dematel application
the dining experience of beijing roast duck: a comparative study of the chinese and english online consumer reviews
cultural proximity and intention to visit: destination image of taiwan as perceived by mainland chinese visitors
the antecedents and consequence of consumer attitudes toward restaurant brands: a comparative study between casual and fine dining restaurants
creating a model of customer equity for chain restaurant brand formation
how does sensory brand experience influence brand equity? considering the roles of customer satisfaction, customer affective commitment, and employee empathy
a business model canvas: traditional restaurant "melayu"
stimulus-organism-response reconsidered: an evolutionary step in modeling (consumer) behavior
destination brand authenticity: what an experiential simulacrum! a multigroup analysis of its antecedents and outcomes through official online platforms
brand experience and brand implications in a multichannel setting (the international review of retail)
examining branding co-creation in brand communities on social media: applying the paradigm of stimulus-organism-response
attitude formation from product trial: distinct roles of cognition and affect for hedonic and functional products
do social media marketing activities enhance customer equity? an empirical study of luxury fashion brand
experience, brand prestige, perceived value (functional, hedonic, social, and financial), and loyalty among grocerant customers
tourism and gastronomy: gastronomy's influence on how customers experience a destination
restaurants, gastronomy and customers: a novel method for investigating customers' dining out experiences
the effect of preference expression modality on self-control
impact of brand recognition and brand reputation on firm performance: us-based multinational restaurant companies' perspective
how archetypal brands leverage consumers' perception: a qualitative investigation of brand loyalty and storytelling
how to protect traditional food and foodways effectively in terms of intangible cultural heritage and intellectual property laws in the republic of korea
research on the reasons and countermeasures for the lagging development of china's traditional brand
brand revitalization of heritage enterprises for cultural sustainability in the digital era: a case study in china
intangible assets are more valuable than the tangible: study on the innovation and development of the traditional time-honored brands
traditional chinese food technology and cuisine
the low-cost carrier model in china: the adoption of a strategic innovation
nothing can tear us apart: the effect of brand identity fusion in consumer-brand relationships
examining social capital, organizational learning and knowledge transfer in cultural and creative industries of practice
the relationships among intellectual capital, social capital, and performance: the moderating role of business ties and environmental uncertainty
perceptions of chinese restaurants in the us: what affects customer satisfaction and behavioral intentions?
modelling consumer responses to an apparel store brand: store image as a risk reducer
authenticity perceptions, brand equity and brand choice intention: the case of ethnic restaurants
the mature brand and brand interest: an alternative consequence of ad-evoked affect
an approach to environmental psychology
the dark side of salesperson brand identification in the luxury sector: when brand orientation generates management issues and negative customer perception
parallel pathways to brand loyalty: mapping the consequences of authentic consumption experiences for hotels and airbnb (tarik dogru)
brand authenticity: testing the antecedents and outcomes of brand management's passion for its products
the study on activation strategy of time-honored brand
a framework for brand revitalization through an upscale line extension
enhancing brand equity through flow and telepresence: a comparison of 2d and 3d virtual worlds
impact of brand experience on loyalty
development and validation of a destination personality scale for mainland chinese travelers
the effect of authenticity perceptions and brand equity on brand choice intention
how the mind works
self-reports in organizational research: problems and prospects
common method biases in behavioral research: a critical review of the literature and recommended remedies
matching visitation-motives and restaurant attributes in casual dining restaurants
consumers' relationships with brands and brand communities: the multifaceted roles of identification and satisfaction
the influence of brand experience and service quality on customer engagement
differentiated brand experience in brand parity through branded branding strategy
effect of a brand story structure on narrative transportation and perceived brand image of luxury hotels
antecedents and consequences of customers' menu choice in an authentic chinese restaurant context
the effects of brand experiences, trust and satisfaction on building brand loyalty: an empirical research on global brands
brand personality and destination image of istanbul
drivers of customer loyalty and word of mouth intentions: moderating role of interactional justice
conceptualising consumer-based service brand equity (cbsbe) and direct service experience in the airline sector
customer experience management: a revolutionary approach to connecting with your customers
destination branding and overtourism
a critical model of brand experience consequences
a study on development strategies of time-honored catering brand from the perspective of food tourism
the role of foodservice in vacation choice and experience: a cross-cultural analysis
impact of the antecedents of electronic word of mouth on consumer based brand equity: a study on the hotel industry
food, place and authenticity: local food and the sustainable tourism experience
an approach to determining customer satisfaction in traditional serbian restaurants
the assessment of creativity: an investment-based approach
word of mouth, brand loyalty, acculturation and the american jewish consumer
integrating emotion into identity theory
testing an integrated destination image model across residents and customers
brand awareness, image, physical quality and employee behavior as building blocks of customer-based brand equity: consequences in the hotel context
development and evaluation of an rfid-based e-restaurant system for customer-centric service
conveying pre-visit experiences through travel advertisements and their effects on destination decisions
old names meet the new market: an ethnographic study of classic brands in the foodservice industry in shantou
definition, conceptualization and measurement of consumer-based retailer brand equity
authentic dining experiences in ethnic theme restaurants
experiential value in branding food tourism
work environment and atmosphere: the role of organizational support in the creativity performance of tourism and hospitality organizations
salesperson creative performance: conceptualization, measurement, and nomological validity
motives for consumer choice of traditional food and european food in mainland china
customers' perceptions towards and satisfaction with service quality in the cross-cultural service encounter: implications for hospitality and tourism management
cultural influences on the diffusion of new products
using the brand experience scale to profile consumers and predict consumer behaviour
paradox of authenticity versus standardization: expansion strategies of restaurant groups in china
a structural model of liminal experience in tourism
critical factors in the identification of word-of-mouth enhanced with travel apps: the moderating roles of confucian culture and the switching cost view
how does authenticity enhance flow experience through perceived value and involvement: the moderating roles of innovation and cultural identity
effect of social support on customer satisfaction and citizenship behavior in online brand communities: the moderating role of support source

title: time between symptom onset, hospitalisation and recovery or death: statistical analysis of belgian covid-19 patients
authors: faes, christel; abrams, steven; van beckhoven, dominique; meyfroidt, geert; vlieghe, erika; hens, niel
date: 2020-10-17. journal: int j environ res public health. doi: 10.3390/ijerph17207560

there are different patterns in the covid-19 outbreak in the general population and amongst nursing home patients. we investigate the time from symptom onset to diagnosis and hospitalization, and the length of stay (los) in the hospital, and whether there are differences in the population. sciensano collected information on 14,618 hospitalized patients with covid-19, admitted to 114 belgian hospitals between 14 march and 12 june 2020.
the distributions of the different event times for different patient groups are estimated accounting for interval censoring and right truncation of the time intervals. the times between symptom onset and hospitalization or diagnosis are similar, with the median time between symptom onset and hospitalization ranging between 3 and 10.4 days, depending on the age of the patient (longest delay in the age group 20–60 years) and on whether the patient lives in a nursing home (an additional 2 days for patients from nursing homes). the median los in hospital varies between 3 and 10.4 days, with the los increasing with age. the hospital los for patients who recover is shorter for patients living in a nursing home, but the time to death is longer for these patients. over the course of the first wave, the los decreased. the world is currently faced with an ongoing coronavirus disease 2019 (covid-19) pandemic. the disease is caused by the severe acute respiratory syndrome coronavirus 2, a new strain of coronavirus never before detected in humans, and is highly contagious. the first outbreak of covid-19 occurred in wuhan, hubei province, china, in december 2019. since then, several outbreaks have been observed throughout the world. from 7 march, the first generation of infected individuals resulting from local transmission was confirmed in belgium. there is currently little detailed knowledge on the time interval between symptom onset and hospital admission, nor on the length of stay (los) in hospital in belgium. however, information about the los in hospital is important to predict the number of required hospital beds, both general hospital beds and beds in the intensive care unit (icu), and to track the burden on hospitals [1]. the time delay from illness onset to death is important for the estimation of the case fatality ratio [2].
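the link between the los distribution and required hospital beds can be made concrete: the expected number of occupied beds on day t is the sum over admission days s of admissions on day s times the probability that a stay exceeds t − s. a minimal sketch follows, with invented admission counts and an assumed exponential los survival function (mean roughly 7 days), not the belgian registry data.

```python
import numpy as np

# illustrative daily admissions (invented numbers)
admissions = np.array([5, 8, 12, 20, 25, 30, 28, 22, 15, 10])

# assumed survival function of los: P(stay longer than d days)
p_stay = lambda d: np.exp(-d / 7.0)

# expected beds occupied on day t: sum_s admissions[s] * P(los > t - s)
days = len(admissions)
occupied = np.array([
    sum(admissions[s] * p_stay(t - s) for s in range(t + 1))
    for t in range(days)
])
print(np.round(occupied, 1))
```

with a heavier-tailed los distribution (e.g., weibull or lognormal, as considered later in the paper), the same convolution yields substantially higher peak occupancy for identical admission counts, which is why the fitted los distribution matters for capacity planning.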
individual-specific characteristics, such as gender, age and co-morbidity, could potentially explain differences in los in the hospital. therefore, we investigate the time from symptom onset to hospitalization and the time from symptom onset to diagnosis, as well as the los in hospital. we consider and compare parametric distributions for these event times that appropriately account for truncation and interval censoring. in section 2, we introduce the epidemiological data and the statistical methodology used for the estimation of the parameters associated with the aforementioned delay distributions. the results are presented in section 3, and avenues for further research are discussed in section 4. the hospitalized patients clinical database is an ongoing multicenter registry in belgium that collects information on hospital admissions related to covid-19 infection. the data are regularly updated as more information from the hospitals is sent in. the individual patients' data are collected through 2 online questionnaires: one with data on admission and one with data on discharge. data are reported for all hospitalized patients with a confirmed covid-19 infection. the reporting is strongly recommended by the belgian risk management group, and the reporting coverage is therefore high (>70% of all hospitalized covid-19 cases) [3]. at the time of writing this manuscript, there is information about 14,618 patients hospitalized between 1 march 2020 and 12 june 2020, including age and gender. table a1 (appendix b) summarizes the age and living status (living in a nursing home or not) of the patients. age is categorized into 4 age groups: the young population (0-20 years), the working-age population (20-60 years), the senior population (60-80 years) and the elderly (80+ years). it shows that a large proportion of the hospitalized 60+ patients live in a nursing home facility (about 12% for patients aged 60-79 and 35% for patients aged 80+).
the survey contains information on 1831 patients hospitalized during the initial phase of the outbreak (between 1 march and 20 march); 4998 patients in the increasing phase of the outbreak (between 21 march and 31 march); 5094 in the descending phase (between 1 april and 18 april); and 2695 individuals at the end of the first wave of the covid-19 epidemic (between 19 april and 12 june). the time trend in the number of hospitalizations is presented in figure a2 (appendix b). the time trend in the survey matches well with the time trend of the outbreak in the whole population, though with some under-reporting in april and may. the time variables (time of symptom onset, hospitalization, diagnosis, and recovery or death) were checked for consistency. observations identified as inconsistent were excluded from the analyses. details of the inclusion and exclusion criteria are provided in appendix a. some descriptive analyses of the event times are provided in appendix c. different flexible parametric non-negative distributions can be used to describe the delay distributions, such as the exponential, weibull, lognormal and gamma distributions [4]. however, as the reported event times are expressed in days, the discrete nature of the data should be accounted for. references [2,5] assume a discrete probability distribution parameterized by a continuous distribution. alternatively, reference [6] estimates the serial interval using interval-censoring techniques from survival analysis. references [7,8] use doubly interval-censored methods for estimation of the incubation distribution. we use interval-censoring methods originating from survival analysis to deal with the discrete nature of the data, acknowledging that the observed time is not the exact event time [9]. let x_i be the recorded event time (e.g., los in hospital).
instead of assuming that x_i is observed exactly, it is assumed that the event time lies in the interval (l_i, r_i), with l_i = x_i − 0.5 and r_i = x_i + 0.5 for x_i ≥ 1, and l_i = 10^−3 and r_i = 0.5 for x_i = 0. as a sensitivity analysis, we compare this assumption with the wider interval (x_i − 1, x_i + 1). an additional complexity is that the delay distributions are truncated, either because there is a maximal clinical delay period or because the hospitalization is close to the end of the study. first, only patients reporting a delay between symptoms and hospitalization (or diagnosis) of at most 31 days were included in the study, because it is unclear for the other patients whether the reason for hospital admission was covid-19 infection. in the literature, times from onset of symptoms to hospital admission have been reported between 4 and 15 days (e.g., references [10-13]), with no mention of observed delay times above 31 days. second, if hospitalization is, e.g., 14 days before the end of the study, the observed los cannot exceed 14 days. note, however, that only patients that have left the hospital are included in the survey; as a result, it will not include patients that are hospitalized near the end of the survey and have a long length of stay. this is a clear example of right truncation (as opposed to right censoring, under which patients are still part of the study/data and only partial information is available on their length of stay). we therefore use a likelihood function accommodating the right-truncated and interval-censored nature of the observed data to estimate the parameters of the distributions [6]. the likelihood function is given by

L = ∏_i [F(r_i) − F(l_i)] / F(T_i),

in which T_i is the (individual-specific) truncation time and F(·) is the cumulative distribution function corresponding to the density function f(·). we truncate the time from symptom onset to diagnosis and the time from symptom onset to hospitalization to 31 days (T_i ≡ 31).
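the interval construction and the truncated, interval-censored likelihood can be sketched in a few lines of code. this is a minimal illustration rather than the authors' implementation: it assumes a weibull distribution for the delay, uses the interval (x_i − 0.5, x_i + 0.5) (with (10^−3, 0.5) for x_i = 0), and divides each interval-censored contribution by the cdf at the truncation time.

```python
import math

def weibull_cdf(x, shape, scale):
    # F(x) for a Weibull(shape, scale) distribution, x >= 0
    return 1.0 - math.exp(-((x / scale) ** shape))

def censoring_interval(x):
    # day-valued event time x -> interval (l, r) assumed to contain the true time
    if x == 0:
        return (1e-3, 0.5)
    return (x - 0.5, x + 0.5)

def log_likelihood(event_times, shape, scale, truncation=31.0):
    # sum of log[(F(r_i) - F(l_i)) / F(T_i)] over all observations,
    # with a common truncation time T_i = truncation
    ll = 0.0
    for x in event_times:
        l, r = censoring_interval(x)
        contrib = weibull_cdf(r, shape, scale) - weibull_cdf(l, shape, scale)
        ll += math.log(contrib) - math.log(weibull_cdf(truncation, shape, scale))
    return ll
```

maximizing this function over (shape, scale), e.g., with a bfgs routine as the paper does, yields the parameter estimates; the truncation term matters because F(31) < 1 under the fitted parameters, so ignoring it would bias the fit.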
the los in hospital is truncated at T_i = E − t_i, in which t_i is the time of hospitalization and E denotes the end of the study period (6 june 2020). in addition, to account for possible under-reporting in the survey, each likelihood contribution is weighted by the post-stratification weight w_i ≡ w_t, taken proportional to the inverse reporting fraction, w_t ∝ N_t / n_t (suitably normalized), where t is the day of hospitalization for patient i, N_t is the number of hospitalizations in the population on day t and n_t is the number of reported hospitalizations in the survey on day t. this weighted likelihood is also called a pseudo-likelihood in the context of complex survey data, for which consistency and asymptotic normality have been shown [14]. we assume weibull and lognormal distributions for the delay distributions. the two parameters of each distribution are regressed on age, gender, nursing home status and time period (as well as interactions of these). by assuming both parameters to be covariate-dependent, we allow both the mean and the range of the time-to-event variable to vary between population groups. the bfgs optimization algorithm is used to maximize the likelihood. convergence is reached for all considered models. the bayesian information criterion (bic) is used to select the best fitting parametric distribution and the best regression model among the candidate distributions/models. only significant covariates are included in the final model. overall, the delay between symptom onset and hospitalization can be described by a truncated weibull distribution with shape parameter 0.845 and scale parameter 5.506. the overall average delay is very similar to the one obtained by [15], based on a stochastic discrete time model relying on an erlang delay distribution. however, there are significant differences in the time between symptom onset and hospitalization amongst different gender groups, age groups, living statuses and time periods of hospitalization.
as the truncated weibull distribution has a lower bic than the lognormal distribution (66,923 and 68,657 for the weibull and lognormal distributions, respectively), results for the weibull distribution are presented. in table 1, the regression coefficients of the scale (λ) and shape (γ) parameters of the weibull distribution are presented. the impact on the time between symptom onset and hospitalization is visualized in figure 1, showing the model-based 5%, 25%, 50%, 75% and 95% quantiles of the delay times. table 1. summary of the regression of the scale (λ) and shape (γ) parameters for the reported delay time between symptom onset and hospitalization and between symptom onset and diagnosis, based on a truncated weibull distribution: parameter estimate, standard error and significance (* corresponds to p-value < 0.05; ** to p-value < 0.01 and *** to p-value < 0.001). the reference group is females of age > 80 living in a nursing home that are hospitalized in the period 1 march to 20 march. age has a major impact on the delay between symptom onset and hospitalization, with the youngest age group having the shortest delay (median of 1 day, but with a quarter of the patients having a delay longer than 2.6 days). the time from symptom onset to hospitalization is more than doubled in the working age (20-60 years) and senior (60-80 years) populations as compared to this young population (median close to 4 days, and a delay of more than 6.7 days for a quarter of the patients). in contrast, the increase is only 50% in the elderly (80+ years) as compared to the youngest age group (median delay of 1.6 days, with a quarter of the patients having a delay longer than 4.3 days). after correcting for age, the time delay is somewhat longer when patients come from a nursing home facility, with an increase of approximately 2 days. note that in the descriptive statistics we observed shorter delay times for patients coming from nursing homes.
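the model-based quantiles shown in figure 1 follow directly from the fitted weibull parameters via the closed-form inverse cdf. a small sketch, using the overall fit reported in the text (shape 0.845, scale 5.506) and ignoring the truncation at 31 days, which has little effect this far below the truncation point:

```python
import math

def weibull_quantile(p, shape, scale):
    # inverse CDF: solve 1 - exp(-(x/scale)^shape) = p for x
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

# overall fit reported in the text: shape 0.845, scale 5.506
quantiles = {p: weibull_quantile(p, 0.845, 5.506)
             for p in (0.05, 0.25, 0.50, 0.75, 0.95)}
```

this gives an overall median delay of roughly 3.6 days, consistent with the reported range of 3 to 10.4 days across patient groups.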
this stems from the fact that 80+ year olds have shorter delay times than patients of age 20-79, and the population size of the 80+ group is much larger than that of the 20-79 group in nursing homes. although statistically significant differences were found for gender and period, we observe very similar time delays between males and females and between the different time periods (see figure a7). the differences occur in the tails of the distribution; e.g., the 5% longest delay times between symptoms and hospitalization are observed for males. the time between symptom onset and diagnosis is also best described by a truncated weibull distribution (shape parameter 0.900, scale parameter 5.657). as again the truncated weibull distribution has a lower bic value than the lognormal distribution (68,106 and 69,652 for weibull and lognormal, respectively), results for the weibull distribution are presented. parameter estimates are very similar to those of the distribution for symptom onset to hospitalization (table 1). the median delay between symptom onset and diagnosis is approximately one day longer than the median delay between symptom onset and hospitalization. the time from symptom onset to diagnosis in males has a much wider range than in females. this is observed in the tails of the distribution, with the 5% longest delay times being 5 days longer for males than for females. especially in the increasing phase of the epidemic, the time between symptom onset and diagnosis was longer than the time between symptom onset and hospitalization (see figure a7), but this delay has shortened over time. to test the impact of some of the model assumptions, a comparison is made with an analysis without truncating the time between symptom onset and hospitalization or diagnosis, and with wider time intervals (x_i − 1, x_i + 1). results are presented in figures a6 and a8, and are very similar to the ones presented here.
it was also investigated whether there is a difference between neonates (with virtually no symptoms, but diagnosed at the time of birth or at the time of the mother's testing prior to labour) and other children. for all children <20 years of age, we found a median time from symptom onset to hospitalization and diagnosis of 1 and 1.6 days, respectively. if we only consider children >0 years of age, a small increase is found (1.5 (0.5-3.4) days for time to hospitalization and 1.8 (0.7-3.7) days for time to diagnosis). a summary of the estimated los in hospital and icu is presented in table 2 and figure 1, based on the lognormal distribution. the lognormal distribution has a slightly smaller bic value than the weibull distribution for the los in hospital (76,928 for weibull and 76,865 for lognormal) and for the los in icu (7341 for weibull and 7312 for lognormal). table 2. summary of the regression of the log-mean (µ) and log-standard deviation (σ) parameters for the length of stay in hospital and icu, based on the lognormal distribution: parameter estimate, standard error and significance (* corresponds to p-value < 0.05; ** to p-value < 0.01 and *** to p-value < 0.001). the reference group is females of age > 80 living in a nursing home that are hospitalized in the period 1 march to 20 march. a '/' indicates that this variable was not included in the final model. the median los in hospital is close to 3 days in the youngest age group, but 25% of these patients stay longer than 5.5 (8.6) days in hospital for females (males), and 5% stay longer than 13 (14) days for females (males). the los increases with age, with a median los of around 5.4 (5.9) days for females (males) in the working age group. a quarter of the patients in age group 20-60 stay longer than 10 days and 5% stay longer than 24 days.
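the model selection step can be made concrete: with each candidate model fitted by maximum likelihood, the bic penalizes the maximized log-likelihood by the number of parameters, and the model with the smaller value is retained, as done here where 76,865 < 76,928 favours the lognormal for the hospital los. a minimal helper (the candidate names and log-likelihood values below are illustrative, not taken from the paper):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    # Bayesian information criterion: k*ln(n) - 2*loglik; smaller is better
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

def select_model(candidates, n_obs):
    # candidates: dict name -> (maximized log-likelihood, number of parameters)
    return min(candidates,
               key=lambda name: bic(candidates[name][0], candidates[name][1], n_obs))
```

with equal parameter counts, as for the two-parameter weibull and lognormal here, bic selection reduces to picking the higher maximized log-likelihood; the ln(n) penalty only matters when comparing models of different complexity, e.g., regressions with different covariate sets.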
this increases for patients above 60 years of age, with a median los of around 8.6 (9.4) days for female (male) patients in the senior population group and 9.4 (10.3) days for female (male) patients in the elderly group. a large proportion of the elderly patients stay much longer in hospital. a quarter of these patients stay longer than 15.7-17.4 days in the senior group and longer than 17.3-19 days in the elderly group. some very long hospital stays are observed in these age groups, with 5% of the los being longer than 38 (41) days for females (males) in the senior group, and 42 (46) days in the elderly. no significant difference is found for patients coming from nursing homes. over the course of the first wave, the los has slightly decreased, with a decrease in median los of around 2 days from the first period to later periods. note that this result is corrected for the possible bias of prolonged lengths of stay being less probable for more recently admitted patients. the los in icu (based on the lognormal distribution) is on average 3.8 days for young patients, with a quarter of the patients staying longer than 7.6 days in icu. similar to the los in hospital, the los in icu also increases with age. the median los in icu is 6.4 days in the working age population and 7.6 days in the senior population, while in the elderly it is slightly shorter (5.9 days). again, a quarter of the patients stay longer than 13 days in icu in age group 20-60, longer than 15.6 days in age group 60-80, and longer than 12 days in the 80+ group. patients living in nursing homes stay approximately 2 days longer in icu. no major difference is observed in the los in icu between males and females, though some prolonged stays are observed in males as compared to females. similar to the overall los in hospital, the los in icu has decreased over time (with a decrease of 1 day from the first period to the later periods, and an additional 2 days in the last period).
table 3 summarizes the los in hospital for patients that recovered or passed away. the lognormal distribution has the smallest bic value for the time from hospitalization to recovery, and the weibull distribution for the time from hospitalization to death. for patients that recovered, the los in hospital increases with age (the median los is 5 days for the young population, which increases to 8 days in the working age population, 12 days in the senior population and 15 days in the elderly). in contrast to previous results, we observe that patients living in nursing homes leave hospital approximately 1 day sooner than the general population. however, the 5% longest stays in hospital before recovery are longer for patients living in nursing homes. while the los in hospital for patients that recover increases with age across all age groups, the survival time of hospitalized patients that died is lower for the seniors (median time of 6.7 days) and the elderly (median time of 5.7 days) than for the working age group (median time of 12.1 days). large differences are also observed between patients coming from nursing homes and those who do not, with the time between hospitalization and death being approximately 3 days longer for patients living in a nursing home. no significant differences are found between males and females. a sensitivity analysis assuming that the time delay is interval censored by (x_i − 1, x_i + 1) is presented in figure a6. results are almost identical to the previously presented results. it was also investigated whether the shorter duration of hospitalization for the <20 years group can be attributed to neonates, for whom the duration of stay is often determined by the duration of post-delivery recovery of the mother. indeed, the los in hospital for the youngest age group increases slightly if we take out the children of 0 years, to 4.1 (2.2, 7.6) days for males and 3.7 (2, 6.9) days for females.
the los in hospital for recovered patients increases to 6.4 (3.7, 11) days for males and 5.9 (3.4, 10.2) days for females between 1 and 19 years of age, making it very similar to that of the 20-60 year old patients that recovered. no impact was observed on the los in icu. table 3. summary of the regression of the log-mean (µ) and log-standard deviation (σ) parameters for the length of stay in hospital for recovered patients and patients that died, based on the lognormal distribution and weibull distribution, respectively: parameter estimate, standard error and significance (* corresponds to p-value < 0.05; ** to p-value < 0.01 and *** to p-value < 0.001). the reference group is females of age > 80 living in a nursing home that are hospitalized in the period 1 march to 20 march. a '/' indicates that this variable was not included in the final model. previous studies in other countries reported a mean time from symptom onset to hospitalization of 2.62 days in singapore, 4.41 days in hong kong and 5.14 days in the uk [16]. other studies report mean values of time to hospitalization ranging from 5 to 9.7 days [8,17,18]. in belgium, the overall mean time from symptom onset to hospitalization is 5.74 days, which is slightly longer than the reported delay in other countries, but depending on the patient population, estimates range between 3 and 10.4 days in belgium. the time from symptom onset to hospitalization is largest in the working age population (20-60 years), followed by the senior population (60-80 years). if we compare patients within the same age group, the time delay is somewhat longer when patients come from a nursing home facility, with an increase of approximately 2 days. the time from symptom onset to diagnosis behaves similarly, with a slightly longer delay than the time from symptom onset to hospitalization.
the diagnosis was typically made upon hospital admission to confirm covid-19 infection during the first wave, explaining why the time from symptom onset to hospitalization is very close to the time to diagnosis. to investigate the length of stay in hospital, we should make a distinction between patients that recover and patients that die. while the median length of stay for patients that recover varies between 5 days (in the young population) and 15.7 days (in the elderly), the median length of stay for patients that die varies between 5.7 days (in the elderly) and 12.2 days (in the working age population). in general, the length of stay in hospital for patients that recover increases with age, and males need a slightly longer time to recover than females. however, patients living in nursing homes leave hospital sooner than patients in the same age group from the general population. patients living in nursing homes might be discharged from hospital more rapidly to continue their convalescence in the nursing home, whereas this is probably less the case for isolated elderly patients. in contrast, the time between hospitalization and death is longest for the working age population, with shorter survival times for the seniors and the elderly. the length of stay in hospital for patients that die is longer for patients coming from nursing homes than for patients from the same age group in the general population. a similar trend is observed for the length of stay in icu. over the course of the first wave, the los has slightly decreased. this result is corrected for the possible bias of prolonged lengths of stay being less probable for more recently admitted patients. therefore, the decrease might be related to improved clinical experience and improved treatments over the course of the epidemic.
note, however, that varying patient profiles in terms of comorbidities or severity of disease over time could also explain this trend, and it would therefore be interesting to correct for the patient's profile in a future study. the length of stay in belgian hospitals is within the range of the ones observed in other countries, though especially the length of stay in icu seems shorter in belgian hospitals. reference [19] reports a median length of stay in hospital of 14 days in china, and of 5 days outside of china. the median length of stay in icu is 8 days in china and 7 days outside of china [20]. reference [1] reports an estimated length of stay in england of 8.4 days for covid-19 patients not admitted to icu, and an icu length of stay of 12.4 days. it should, however, be noted that the criteria for hospital (and icu) admission and release might differ between countries. different sensitivity analyses indicated that the results are robust to some of the assumptions made in the modeling. however, alternative methods could still be investigated to improve the estimation of the delay distributions. first, alternative distributions could be used, having more than two parameters and thus more flexibility, e.g., generalized gamma distributions (for which the gamma, exponential and weibull distributions are special cases). second, a truncated doubly interval-censored method could be considered to account for the uncertainty in both time points determining the observed delays (and their intervals). third, there is possible reporting bias in the time of symptom onset, which can influence the results. finally, the impact of severity of illness and co-morbidity on the length of stay in hospital is very important. this was not investigated in this study, as this information was not made available, but it is an important factor to investigate in future analyses.
funding: this work is funded by the epipose project from the european union's sc1-phe-coronavirus-2020 programme, project number 101003688. the authors declare no conflict of interest. the funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. a flow diagram of the exclusion criteria is displayed in figure a1. the time of symptom onset and time of hospitalization is available for 13,321 patients. the date of symptom onset is determined based on the patient anamnesis history taken by the clinicians. patients that were hospitalized before the start of symptoms (i.e., 715 patients) were not included. these include patients with nosocomial infections admitted prior to covid-19 infection for other long-term pathologies, who then got infected at the hospital and developed covid-19-related symptoms long after admission. patients reporting a delay between symptoms and hospitalization of more than 31 days (i.e., 121 patients) were also not included, because it is unclear for these patients whether the reason for hospital admission was covid-19 infection. a sensitivity analysis including patients with event times above 31 days was conducted. patients with missing information on age (i.e., 12 patients) or gender (i.e., 109 patients) were not included in the statistical analysis. this resulted in a total of 12,364 patients used to estimate the distribution of the time between symptom onset and hospitalization.
the time of symptom onset and time of diagnosis is available for 13,156 patients. some of these were diagnosed prior to having symptoms (321) or experienced symptoms more than 31 days before diagnosis (136), and are excluded, as these might be errors in reporting dates. similarly, the delay between symptoms and detection time is truncated at 31 days, but a sensitivity analysis including these patients is performed. in total, 125 patients were removed because of missing information on age and/or gender, resulting in 12,574 patients used in the analysis of the time from symptom onset to diagnosis. the time between hospitalization and discharge from hospital is available for 12,013 patients, either discharged alive or dead. for patients that were hospitalized before the start of symptoms (i.e., 528 patients), we use the time between the start of symptoms and discharge. patients with negative time intervals (54 patients) are excluded from further analysis. another 134 patients were discarded because of missing covariate information with regard to their age or gender. from these patients, we know that 6054 recovered from covid-19, while 2401 died.
for the hospitalized patients, there is information about the length of stay in icu for 1534 patients. note that we analyzed an anonymized subset of data from the hospital covid-19 clinical surveillance database of the belgian public health institute sciensano. data from sciensano were shared with the first author through a secured data transfer platform. the observed distributions of the delay from symptom onset to hospitalization and of the los in hospital are presented in figure a3. summary information about these distributions is presented in tables a2 and a3. while the observed delay between symptom onset and hospitalization is between 0 and 31 days, 75% of the hospitalizations occur within 8 days after symptom onset. this delay is, however, shorter in the youngest age group (<20 years) and in the oldest group (>90 years). patients coming from nursing homes also seem to be hospitalized faster than the general population. over the course of the first wave, the observed time between symptom onset and hospitalization was largest in the increasing phase of the epidemic (between 21 march and 30 march). the time between symptom onset and diagnosis is very similar, ranging between 0 and 31 days, with 75% of the diagnoses occurring within 8 days after symptom onset. it should be noted that these observations are based on hospitalized patients, and non-hospitalized patients might have a quite different evolution in terms of their symptoms. as non-hospitalized patients were rarely tested in the initial phase of the epidemic, no conclusions can be drawn for this group of patients. the observed median length of stay in hospital is 8 days, with 95% of the patients having values ranging between 1 and 40 days. 25% of the patients stay longer than 14 days in hospital. the median length of stay seems to increase with age (from 3 days in age group <20, to 6 in age group 20-80, 9 in age group 80-90 and 10 days in age group >90).
on the other hand, with time since the introduction of the disease in the population, the length of stay seems to decrease, though this might be biased due to incomplete reporting of the los of patients who are actually still admitted at the time of writing. therefore, these observed statistics should be interpreted with care. similar results are observed for the length of stay in icu (figures a4 and a5).

references (titles as recovered from the source):
- hospital length of stay for covid-19 patients: data-driven methods for forward planning
- epidemiological determinants of spread of causal agent of severe acute respiratory syndrome in hong kong
- rapid establishment of a national surveillance of covid-19 hospitalizations in belgium
- handbook of infectious diseases data analysis
- robust reconstruction and analysis of outbreak data: influenza a (h1n1)v transmission in a school-based population
- estimation of the serial interval of influenza
- estimating incubation period distributions with coarse data
- incubation period and other epidemiological characteristics of 2019 novel coronavirus infections with right truncation: a statistical analysis of publicly available case data
- statistical analysis of interval-censored failure time data
- clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus-infected pneumonia in
- clinical features of patients infected with 2019 novel coronavirus in
- clinical course and risk factors for mortality of adult inpatients with covid-19 in wuhan, china: a retrospective cohort study
- interim clinical guidance for management of patients with confirmed coronavirus disease (covid-19)
- modeling the early phase of the belgian covid-19 epidemic using a stochastic compartmental model and studying its implied future trajectories
- short doubling time and long delay to effect of interventions. arxiv 2020
- the effect of human mobility and control measures on the covid-19 epidemic in china
- impact of non-pharmaceutical interventions (npis) to reduce covid-19 mortality and healthcare demand
- covid-19 length of hospital stay: a systematic review and data synthesis
- clinical course and outcomes of critically ill patients with sars-cov-2 pneumonia in wuhan, china: a single-centered, retrospective, observational study

key: cord-272923-5ekgb0zx authors: hjálmsdóttir, andrea; bjarnadóttir, valgerður s. title: "i have turned into a foreman here at home." families and work-life balance in times of covid-19 in a gender equality paradise. date: 2020-09-19 journal: gend work organ doi: 10.1111/gwao.12552 sha: doc_id: 272923 cord_uid: 5ekgb0zx

this article explores the gendered realities of work-life balance in iceland during the covid-19 pandemic, in particular how these societal changes reflect and affect the gendered division of unpaid labor, such as childcare and household chores. the study draws on open-ended real-time diary entries, collected over two weeks during the peak of the pandemic in iceland. the entries represent the voices of 37 mothers in heteronormative relationships. the findings imply that, during the pandemic, the mothers took on greater mental work than before. they also described intense emotional labor, as they tried to keep everyone calm and safe. the division of tasks at home lay on their shoulders, causing them stress and frustration. the findings suggest that, even in a country that has been at the top of the gender gap index for several years, an unprecedented situation like covid-19 can reveal and exaggerate strong gender norms and expectations towards mothers. this article is protected by copyright. all rights reserved.
the covid-19 pandemic is not only a health emergency and economic hazard but has also resulted in dramatic changes in people's personal lives, and roles within families have been disrupted. during the pandemic, many countries have taken drastic measures to reduce the spread of the virus, such as social distancing, lockdowns, and closing schools, public institutions, and workplaces. children and adults alike have been forced to stay at home for shorter or longer periods and upturn their lives as the home became the school, the workplace, the playground, the sports facility, and the family sanctuary. unesco has estimated that more than 70% of the world's student population, or around 1.2 billion students, has been affected by either temporary school closings or restricted services (unesco, 2020). this entails increased care responsibilities for parents across the world. even though the number of dual-earner households has been increasing over the last decades, the findings of several studies indicate that women still bear the burden of childrearing and household labor in industrialized countries (alon, doepke, olmstead-rumsey, & tertilt, 2020; carlson, petts, & pepin, 2020; friedman, 2015; knight & brinton, 2017; t. miller, 2018; schwanen, 2007). it can therefore be assumed that they are more affected by the closing of schools than their male partners. in fact, several studies (alon et al., 2020; andrew et al., 2020; carlson et al., 2020) and media coverage (see e.g. ascher, 2020; c. c. miller, 2020; topping, 2020) on the impact of covid-19 on families have indicated complications and challenges, as this unprecedented situation appears to have revealed or exaggerated existing gender inequalities and divisions within families. some have even referred to this strange situation as the 1950s revisiting home life (ferguson, 2020), indicating a backlash in terms of gender equality and power positions in the home under these circumstances.
during previous crises, women have been more likely to either reduce their working hours or temporarily step down from work (alon et al., 2020; andrew et al., 2020). early evidence suggests that families have been under pressure for the last few months and that mothers have spent less time on paid work and more time on household responsibilities as compared to fathers during the pandemic (andrew et al., 2020; carlson et al., 2020; collins, landivar, ruppanner & scarborough, 2020; craig & churchill, 2020; hennekam & shymko, 2020; manzo & minello, 2020; qian & fuller, 2020). studies have indicated that young children tend to seek help and attention by interrupting their mothers, and that the mothers in turn experience time as more fragmented (collins, 2020; collins et al., 2020; sullivan & gershuny, 2018), which can become an even bigger challenge in a lockdown such as the one during covid-19. since the lockdown, more mothers participating in andrew et al.'s (2020) research have reduced their working hours, and those who have stopped working do twice as much childcare and household work as their male partners who are still working. conversely, in families where the male partner has stopped working but not the female, the parents share childcare and household duties equally, even though the mother works at least five hours of paid work a day. qian and fuller (2020) argue that the pandemic is far from being an equalizer when it comes to gender equality, as their research indicates a widening gender employment gap among canadian parents with young children. the pandemic has not only affected schools: many companies and businesses have been forced to adapt to the circumstances with more working-from-home and telecommuting opportunities for their workers (alon et al., 2020).
juggling childcare and paid work has been very challenging for parents, but then again, this has meant increased flexibility for many employees, a flexibility that has often been discussed as the solution to a better work-life balance, especially for women (gatrell, burnett, cooper, & sparrow, 2014; sullivan, 2015; wheatley, 2012). however, there are various intricacies around the interactions of gender equality and work-life balance in normal times, which seem to have intensified during the pandemic as the pressure on parents' time increases (e.g. andrew et al., 2020; carlson et al., 2020). iceland has been considered a frontrunner, even among the other nordic countries, in gender equality (the world economic forum, 2020), which makes it a particularly interesting setting in this regard. we believe that times like the covid-19 pandemic provide a unique opportunity to explore and shed light on deeply entrenched and gendered social structures within the organization of the family. in fact, research has already pointed in that direction (auðardóttir & rúdólfsdóttir, 2020). thus, the focus of this study is to look at how the societal changes reflect and affect the gendered division of labor, especially concerning the unpaid labor of childcare and household chores, from the perspectives of mothers in heterosexual relationships. this was done by collecting daily real-time diary entries from almost 40 mothers for two weeks during the peak of the pandemic in iceland, while severe restrictions were in place. important steps towards gender equality have been taken in the western world over the years, not least in the nordic countries. these include improved legal frameworks, rising female employment and educational levels, and greater involvement of fathers in childrearing (evertsson, 2014; eydal & gíslason, 2015; gíslason & símonardóttir, 2018; jóhannsdóttir & gíslason, 2018).
despite these steps, the gender pay gap remains unbridged, reflecting the persistent idea of the male provider role (petersen, penner, & høgsnes, 2014; snaevarr, 2015). iceland's reputation as the most gender-equal country in the world has been quite prominent in public discourse and in the media, both in iceland and around the world. this media discourse has portrayed iceland as a paradise for women, implying that gender equality has more or less been achieved in iceland (hertz, 2016; jakobsdóttir, 2018; kilpatrick, 2017; tuttle, 2017), which has even been used for international branding purposes (einarsdóttir, 2020). while it is important to recognize that rankings of gender equality such as the global gender gap index have their limitations and overlook important institutional variables such as social norms and values (einarsdóttir, 2020), iceland is certainly doing well in international comparisons. women's educational attainment in iceland has steadily increased over the last few decades (bjarnason & edvardsson, 2017), and in 2018 icelandic women had the highest labor force participation rate among the oecd countries, at 84.5%; the same applies to icelandic men, whose participation rate was 89.9% (oecd, 2020). despite this active participation in the labor force, icelandic women have established families at relatively young ages, and the average birthrate has been rather high until very recently in comparison with other northern european countries (hognert et al., 2017; jónsson, 2017). in iceland, as elsewhere, women work part-time jobs in higher numbers, and mothers reduce their labor participation following childbirth more often than fathers do (gíslason & símonardóttir, 2018).
regardless of international trends towards increased active female participation in the workforce, the labor market is still very gender divided, and the rates of gender segregation both in line of work and educational choices are striking (dinella, fulcher, & weisgram, 2014). the same applies to iceland (snaevarr, 2015). over the last few decades, the government of iceland has taken some important steps in making laws and policies to facilitate fathers' involvement in childrearing responsibilities. the most substantial step is probably an act on shared parental leave passed in 2000, which gave parents nine months in total, "dividing the nine months so that three are sharable while each parent has three that are strictly non-transferable" (gíslason & símonardóttir, 2018, p. 4), and which was lengthened by a month on january 1, 2020 (act on maternity/paternity leave and parental leave no. 95/2000 with amendments). in iceland, research has indicated that discourses on motherhood in relation to breastfeeding imply more intensive mothering that starts when the children are very young. this is somewhat in opposition to the governmental emphasis on gender equality that aims to get fathers more involved in parenting (gíslason & símonardóttir, 2018). despite all these advancements, there are some signs that they have been achieved at a cost, and there are some cracks in iceland's glossy image as the frontrunner of gender equality (einarsdóttir, 2020). in recent years, media coverage about people experiencing burnout has become more common, especially among professions like nursing and elementary school teaching (halldórsdóttir, skúladóttir, sigursteinsdóttir, & agnarsdóttir, 2016; the icelandic nurses' association, 2017; the icelandic teachers union, n.d.), which in iceland are typically female professions.
it appears that people are increasingly experiencing stress in their everyday lives, which, if prolonged, can result in both poor physical and mental health (jónsdóttir, 2017). over the last few years, research results from iceland have indicated that conflicts between work and family are quite frequent among icelandic parents, even though they do not consider housework alone to be a great burden (þórsdóttir, 2012). family obligations and issues related to the care of children are more likely to be woven into mothers' working hours than fathers' (hjálmsdóttir & einarsdóttir, 2019). there are also indications that parents are more likely to express difficulties with everyday chores than are workers without children, and that parents experience conflict in balancing work and family (eyjólfsdóttir, 2013; hjálmsdóttir & einarsdóttir, 2019; þórsdóttir, 2012). work-life balance refers to the ability of every individual, regardless of gender, to coordinate work and family obligations successfully. work, in this context, refers to paid labor performed outside the home (wheatley, 2012). studies have found that, when parents manage to balance family and working life, they are more satisfied with their lives, which positively impacts their mental and physical health (haar et al., 2014). successful work-life balance can, therefore, be considered an important public health issue (lunau, bambra, eikemo, van der wel, & dragano, 2014). a growing number of people describe increased time pressure in their daily lives and experience time as a scarce resource for all the tasks in their daily schedules (fyhri & hjorthol, 2009). time is gendered, and bryson and deery (2010) have claimed that gender inequalities are sustained by differences in the use and experience of time among men and women and "that 'time cultures' are bound up with power and control" (p. 91).
research has indicated that men have, on average, more control over their time outside work than women. more claims are laid on women's time by family members. women feel more rushed in their daily lives, are more likely to be expected to attend to household work, and are also more inclined to multitask than men (bryson, 2016; craig & brown, 2017; friedman, 2015; rafnsdóttir & heijstra, 2013; sullivan & gershuny, 2018). for the last few decades, some countries have been changing their policies to improve the opportunities parents have to balance work and family (gatrell, burnett, cooper, & sparrow, 2013; sullivan, 2015; wheatley, 2012). such policies are often based on greater access to subsidized childcare or to flexibility. work flexibility has been argued to be desirable and a step towards gender equality, since it facilitates people's work-life balance (gatrell et al., 2013; haar et al., 2014; sullivan, 2015; wheatley, 2012). alon et al. (2020) predict that the somewhat forced flexibility of many workplaces caused by covid-19 might last after the pandemic has run its course and be beneficial for both mothers and fathers. nevertheless, work-related flexibility has both pros and cons and can even cause stress. the division between work and home can become more blurred when employees bring their work home and take care of family matters during working hours (hjálmsdóttir & einarsdóttir, 2019; wheatley, 2012). it has also been argued that not all professions offer the opportunity to take work home or to have different working hours. such flexibility often depends on educational level, as well as being related to the gendered division of the labor market (pedulla & thébaud, 2015). female-dominated professions, like teaching and nursing, often have strict attendance obligations in their workplaces and less opportunity for work flexibility (pétursdóttir, 2009; wheatley, 2012).
men enjoy the opportunity to have flexible working hours or work from home more often, and flexibility can be more likely to have a negative effect on women's careers (friedman, 2015). as such, seemingly supportive policies can have different consequences for men and women (pedulla & thébaud, 2015). the structure of the family as an institution has changed in recent years, including the composition of families and the roles of the genders, and each family member now has more complex roles (júlíusdóttir, 2012). starting a family and having children has turned out to have different effects on the lives of men and women, and it seems to be less beneficial for mothers. more families now rely on dual earnings, and although the number of women in paid labor has been increasing, there is still a lack of active participation among men in the home. this applies to iceland and many other countries (gíslason, 2007; petersen, penner, & høgsnes, 2014). having children and family relations maintains and supports gendered positions and divisions of labor in public and private lives. petersen et al. (2014) underline how important it is to take such aspects into consideration when it comes to the positions of men and women in the labor market. t. miller (2018) claims that the reasons behind caring practices and their gendered performances "can be multiple and are interrelated, operating at the interpersonal and broader structural, political, policy and cultural levels" (p. 32). research has indicated that social structures and prevailing attitudes can influence the gendered division of labor in relationships (dotti sani, 2014; evertsson, 2014).
household labor has often been referred to as invisible work (hochschild & machung, 1989), and the conceptualization of family work can be ambiguous, since scholars often use different explanations of what such work actually entails (robertson, anderson, hall, & kim, 2019). here, we follow these lines of thought and the three constructs of family work commonly referred to in family work studies: housework, childcare, and emotional labor. emotional labor relates to activities relevant to the emotional wellbeing of other family members and giving them emotional support (curran, mcdaniel, pollitt, & totenhagen, 2015). in an attempt to distinguish between emotional labor and mental work, robertson et al. (2019, p. 185) suggest mental work as a fourth construct of family work, one that "includes the invisible mental work related to managerial and family caregiving responsibilities", such as managing, monitoring, scheduling, knowing, and organizing family life. mental work cannot be delegated to someone who does not belong to the family, and within families, mothers are much more likely to be the household managers (ciciolla & luthar, 2019; curran et al., 2015; hjálmsdóttir & einarsdóttir, 2019; robertson et al., 2019). this type of work often goes unnoticed by other family members, along with the mental burden that such responsibilities entail, but it impacts mothers' wellbeing through feelings of being rushed and strained in everyday life (ciciolla & luthar, 2019; craig & brown, 2017). it has also been pointed out that mental work can be difficult to detect, since it is quite often closely connected with other activities related to the family (robertson et al., 2019). in addition, many parents, especially mothers, experience work-family guilt when combining work and family, experiencing conflict between the tasks in the public and private spheres (borelli, nelson, river, birken, & moss-racusin, 2017), which can add to the mental load of everyday life.
people also had to ensure that they kept a distance of at least two meters between individuals. this entailed the closing of swimming pools, gyms, pubs, and museums. however, no changes were made to the organization of schools (government of iceland, 2020b) from the previous measures. due to these actions, those who possibly could work from home were encouraged to do so (sveinsdóttir, 2020). […] health, 2020), including no more than 20 children in the same group and groups not being allowed to interact. it was common for students to attend school every other day, for school days to be shorter, and for meals to be available only to a small part of the student body. parents were, in some cases, encouraged to let their children stay at home if they possibly could, while parents in occupations such as doctors, nurses, and police were identified as priority groups. this meant that they were somewhat less affected by school closures and restrictions. students in 8th to 10th grade (14- to 16-year-olds) had to study from home via distance education. after-school care was closed, sports and other extracurricular activities were cancelled, and children were encouraged to meet only with the kids in their small groups outside school (icelandic association of local authorities, 2020). as in other countries, all these measures had a severe impact on families with children, even though the schools technically never closed and lockdowns were not imposed. this is the context in which this study was conducted in march and april of 2020. on may 4, 2020, social distancing restrictions were eased, meaning that all children's activities were more or less back to normal (government of iceland, 2020a), at least for the time being. this article draws from a real-time diary study conducted during the ban on public gatherings in iceland.
the first week of the diary study started on march 26th, and the second week […] (kan, 2008; kitterød & lyngstad, 2005). for the purpose of this study, we only analyze and present findings from the open diary entries. according to bolger et al. (2003), diary studies are well suited to capturing the experiences and particulars of the lives of participants. since this is a real-time study with a minimum of time lapse between experience and reflection, the likelihood of retrospection is minimized. one of the benefits of real-time diary studies like this one is that events are reported in a natural, spontaneous context. by doing so, the data becomes richer, and important contextual information and meanings are pieced together for inclusion in the study. the sample is self-selected, as it consists of individuals who responded to an advertisement that we posted in various large and active icelandic facebook groups, such as brask & brall (a sales group with around 150,000 members), 1 and through our own extended networks. facebook is the most popular social medium in iceland, used regularly by nearly all icelanders (facebook nation, 2018), which makes it a good forum for reaching a considerable part of the population. in all, 47 parents participated in the study, seven men and 40 women. in an effort to shed light on the everyday lives of mothers during covid-19, we analyzed the open diary entries from the female participants in heteronormative relationships, or 37 mothers. about half of them lived in the reykjavík metropolitan area (n = 18), while the others were spread around the country. the number of children in the homes of these 37 mothers varied from one to six, but the majority (n = 21) of the mothers had two children. the educational level of the participants was rather high, as a majority held a university degree, 14 with bachelor's degrees and 18 with master's degrees.
twenty-eight were in paid labor, four were on parental leave, one was an independent laborer, one was a student, one was both studying and working, one was on sick leave, and one was on disability. in most cases, both parents primarily or solely worked from home during the time of the study, and most of them were working full time the whole period, even though some worked reduced hours due to the pandemic. in all cases, the children could attend school to some extent, but with severe restrictions of many sorts. after providing informed consent, participants were asked to answer a questionnaire consisting of background questions. then, they received a daily questionnaire via microsoft forms for two weeks. the purpose of the questionnaire was twofold: to collect structured time-use data (fisher et al., 2000), and open-ended diary entries in which participants would write an "old style" diary, reflecting on everyday life during covid-19. in the diary entries, participants were asked to reflect on their day, the impact of covid-19 on their lives, the division of household duties and responsibilities, and other issues they wanted to share. it is important to consider the risk of failing to distinguish participants' reports of atypical experiences related to or caused by a major event from their general experiences (bolger et al., 2003). therefore, participants were asked to reflect specifically on their experiences in the context of covid-19. the total word count of the written reflections was around 28,000 words, which provided us with rich qualitative data. we analyzed the written reflections drawing on braun and clarke's (2013) phases of thematic analysis. the text was sorted by date and participant before we read it several times, added notes, and discussed the content together. then, we coded the text, applying an inductive approach.
this means that the initial coding of the diary entries was open and emphasized understanding the participants' experiences without engaging too much with existing literature and theories. similar codes and text segments were then collated in order to identify repeated patterns of meaning across the data: stress, work-life balance, and division of household duties. participants were promised confidentiality and that measures would be taken to prevent identification. we provided participants with a random personal participant number to ensure their anonymity. information that could link participants' names to the numbers was deleted right after the data collection period. participants were able to withdraw from the study at any time, and some did, for unknown reasons. due to the limited time for the study, we decided to use the most convenient way possible to share information about the research and recruit participants: facebook. that probably affected both the number of participants, as the window of time to recruit participants was limited, and how homogeneous the group became, particularly in terms of educational level. analysis of the data generated two themes, presented in two sections. the first concerns the complexities of work-life balance in covid-19 times, particularly the gendered interactions of stress, work-life balance, and mental work. the second section specifically draws on the emotional labor performed by the women in the study, some of which is represented by how conscious the women were of the wellbeing of their family members. the diary entries quite clearly described complications and stressful situations as the women were trying to juggle their time between work duties and childcare. they described how strained they were and how their stress levels were increasing, using words like overwhelmed, frustrated, tired, annoyed, and angry to describe their situations.
below are a few diary entries from mothers who were all working 70-100% that reflect this. in the following example, a mother of a 2-year-old working in mass media, who worked entirely from home as did her husband, described one of her days like this: "i'm a little anxious because of all this, the situation in society. then, i do not have the energy to do much, only the necessary things. the child wore pajamas the whole day." she mentioned how the whole situation made her feel anxious and drained her energy. this was true of many of the other women, as this mother of three (6, 8 and 13) who worked in a nursing home explained: "now we have spent more than a month in quarantine and home-schooling. it has started to take its toll mentally, and the day today was difficult. i was almost in tears." her husband was still working in his workplace, while she had taken a leave for the first weeks of covid-19. juggling home-schooling, childcare, and work created a lot of pressure on the mothers, and some of them described the guilt they were experiencing from feeling that they could not keep up with everything. the next example is from a mother who worked full time at her workplace. she had two children, 2 and 7 years old, and wrote about her experience in the following way. i experienced a slight panic attack on the way home over juggling all these different duties, and i cried a little. i went to the grocery store to get some time for myself and shopped for my sister who is in quarantine . . . no one has energy to start putting the kids to bed, so they went to sleep too late. . . jesus, how the parental fuse is short, and i feel guilty about that. as these examples show, the mothers experienced stress, a lack of energy, and even guilt.
as during 'normal' times, mothers are more likely to experience work-family guilt, as they feel guilty about not doing their best and not spending enough time with their kids, despite being on the run all the time (borelli et al., 2017; hjálmsdóttir & einarsdóttir, 2019). during covid-19, this pattern seems to have intensified, as supported by research from other contexts as well (e.g. hennekam & shymko, 2020). the level of guilt and how it affected them was addressed by more participants. this mother had two children (4 and 8 years old) and was working full time. she and her husband were both working from home. i feel as if i should be able to organize my time better. the day passes, and i have not had time to enjoy one cup of coffee in peace. i do not sit down, but still the apartment is in chaos, the children neglected, and work unfinished. these examples show how much time pressure these mothers were under, and how they experienced guilt over not being able to complete their tasks, neither work nor family related. studies have shown that parents are under significant time pressure in their daily lives (fyhri & hjorthol, 2009), especially women (sullivan & gershuny, 2018). this pressure seemingly increased greatly during the pandemic, as other research has indicated as well (alon et al., 2020; andrew et al., 2020). the above example also indicates a level of multitasking, as did entries from several other mothers in the study. according to previous studies (e.g. bryson, 2016; craig & brown, 2017; friedman, 2015; rafnsdóttir & heijstra, 2013; sullivan & gershuny, 2018), women multitask more often than men. the experiences of these women indicate that perceived time pressure and the increased need for multitasking lay heavily on their shoulders.
towards the end of the study, when restrictions because of covid-19 were somewhat lifted, some mothers mentioned that they had just realized how much constraint was caused by having to erase the boundaries between work and family life. in the following diary entry, a mother with a six-month-old child, who worked as a manager in a half-time job, explained how. i went to my workplace for the first time in weeks. it was so different. i do not think that i realized until yesterday how much constraint comes from working from home with a child at home. i cannot wait until i can return to my workplace every day and create these boundaries between private life and work. this description is interesting in light of how flexibility and working from home have often been portrayed as the solution to work-life balance, especially for women, to improve parents' opportunities to better balance work with home life (gatrell et al., 2014; wheatley, 2012). some of the other mothers also described how the boundaries between home and work had been blurred. these experiences indicate that working from home can be difficult for parents, particularly mothers, since they find their work time being interrupted by other duties. this has been documented in previous research (e.g. wheatley, 2012). alon et al. (2020) predicted that changes in working practices adopted during covid-19 might be permanent, but we argue that working from home and flexible working hours must be considered very carefully in favor of working parents, bearing in mind gendered social structures. it was clear from the diaries that these unprecedented times revealed or intensified unequal divisions of duties at home, which made the mothers realize and reflect on their positions at home.
a mother of two (5 and 9 years old), who was a teacher working full time but had started working from home, as had her husband, said that: today, there was a little clash at home. i have noticed that i usually write the diary before dinner, and a lot of work awaits me afterwards. i usually put the kids to bed, bathe them, tidy up endlessly (usually in the evenings when they are asleep), read, and tuck them in. today, i threw a tantrum over this, . . . but we had a good conversation, and everyone agreed to contribute more . . . [my husband] agreed with me that he could be more present in these daily routines around the kids and home. this example shows how being responsible for the kids and home rested on her shoulders, as well as being the one responsible for taking action to change the balance. a few days later, the same woman explained how she was starting to realize how the situation affected the division of tasks, partially because her husband prioritized differently, e.g., around work or exercise, and also because the children asked her for help even though their father was also at home. we knew that the division of tasks is rather equal in our everyday life, but now that we are both working from home, it is obvious that he takes his space when he needs to attend to 'his' things, and i run, and i sprint from my work much more than he does. this example shows how the mother was easily interrupted by household responsibilities, which is in accord with other research findings suggesting that mothers' time is more often fragmented (collins, 2020; collins et al., 2020; sullivan & gershuny, 2018). according to andrew et al.'s (2020) study, mothers more frequently combined their paid work with other activities during the pandemic.
this illustration also supports the notion of time being gendered (bryson & deery, 2010), as she perceived that her husband had more control over his time to tend to matters unrelated to work or family. this is in accordance with previous studies on gendered control of time among parents (bryson, 2016; friedman, 2015) and new research conducted during covid-19 indicating that unpaid work performed by mothers has increased during the pandemic (craig & churchill, 2020; manzo & minello, 2020). the responsibility to divide duties at home lay on the mothers' shoulders, as they explained in several diary entries. this shows how mental work (robertson et al., 2019) was central to their gendered realities. as one said, "everyone has to have certain duties in the home if domesticity is supposed to work without me losing my mind." this mother had two teenagers and was working full time from home while her husband worked in his workplace. another one, who had two children (2 and 7 years old) and was working full time, explained her situation in this way. it is not easy working from home with a two-year-old. i had to make sure that his father took him to his parents' home, who were away, so that i could get some peace. then, i put him down to nap after lunch and had to make sure that father and son woke up at the right time . . . usually, i must make sure that things work . . . how are you supposed to be an employee, parent, leisure worker, cook, and a teacher all at once? this outlines quite well how she experiences the responsibility of managing the household. the father is a participant, but she is the manager and carries responsibilities that add to the mental burden of everyday life (ciciolla & luthar, 2019), exacerbating the mental drain women have felt during covid-19 (hennekam & shymko, 2020). another
mother, with a two-year-old child, who worked full time from home along with her husband, similarly wrote that: i have turned into a foreman here at home. i am trying to get clearer oversight over what has to be done and activate my husband to prevent everything from becoming a mess, and i do not want to take care of it all by myself. so, i had a family meeting and put up a clear division of duties. this mother also wrote that, on an everyday basis, they did not have a clear division of tasks, but during covid-19, it became necessary. this indicates that times of crisis can reveal deeply rooted norms and structures of gender roles within the home. the experience of another mother, who had three children (6, 8 and 13 years old), further supports this. she was a care worker, and she and her husband were both working in their workplaces. i became tired today and reprimanded my husband. i take care of the management, the division of tasks, and the responsibility for the children's education and practices. i feel like we are dangerously close to the gender division as it was before the middle of the last century. also, it is my responsibility to remind [him] that this is not supposed to be like this, so that also adds to my basket of duties. all of these examples show how the situation during the pandemic revealed and exaggerated the mothers' roles as household managers (ciciolla & luthar, 2019; curran et al., 2015). they planned and organized family life to make sure that everything worked. this is consistent with research from australia, where mothers felt unsatisfied with the division of labor in their homes during covid-19 (craig & churchill, 2020). drawing on previous studies (e.g., craig & brown, 2017), this invisible mental work became a burden for the women and clearly affected their everyday wellbeing. interestingly, this also added to their
duties, as they became somewhat responsible for getting other people in the household, particularly the fathers, to take on more responsibility to even the load. some of the women in the study described how they made an effort to hide their stress and anxiety from their children and other family members in order to ease the atmosphere and keep the family calm. in accordance with studies and theories of gendered aspects of emotional labor (ciciolla & luthar, 2019; craig & brown, 2017; robertson et al., 2019), the women performed that kind of labor in addition to other duties. this is reflected in the words of a mother of two children, nine and ten, working full time mostly from home, with a husband who mostly worked away from home. the days are getting really difficult, and i will take my first summer holiday tomorrow. the younger child is not happy about [the situation] and cries over everything that seems like adversity, even little things, like when she is asked to read or tidy up. the little patience i have is running out, but i try my best not to let her see it. the day after, the situation became worse, as the family was facing possible quarantine and they were waiting for further directions from a national team of contact tracers. she wrote this in her diary: now we possibly have to start 14 days of quarantine. we will know tomorrow. at least we have to remain in quarantine for 24 hours until the test results come. i am pained by this situation, but i try to stay positive, especially with my husband and children. they may not see [my] anxiety because then they become afraid. i continue to meditate and do yoga; everything will be ok. as these diary entries show, this mother found it important to keep her anxiety to herself in order to keep the family calm. another mother, with a five- and eight-year-old, who worked in an elementary school, was working full time from home, as did her husband.
she described how difficult her day was, as one of her children cried a lot because she missed her friends so dearly. the day "was spent tending emotionally to the children." the women in the study had to devote time to emotional labor instead of work. another reflected on how she tried to calm the people around her. i am really focused on being well informed so that i can answer [questions] and calm elderly people and children around me. i am very cautious and try to follow up with my children on how to be careful without frightening them. one of the women explained how her husband was irritated because of the situation and tired because he was working shifts, so much so that he "exploded" at times. therefore, she made an effort to make sure that his irritation did not affect the children (10, 14 and 17 years old) too much. she was working 70% from home while he was working away from home. she explained: i take care of the children and the home every day, since he is asleep until he has to go to work or loafs around on the computer. everyone has a short fuse, but i make sure that i intervene and suggest a break, or that everyone goes out and plays, when the children are starting to nag. it is difficult to be able to concentrate on work. another example of the women's emotional labor included dealing with difficult thoughts and decisions related to the pandemic. a mother of two (5 and 8 years old) wrote that: despite a lot of physical resting lately, my mind has been spinning around worries and difficult decisions. should the children attend school or not? can i meet my father [who has heart problems] if i keep a 2 m distance? is it necessary to disinfect all the groceries? according to curran et al. (2015), this kind of work can be called emotional labor, as these women emphasize how they tend to the emotional wellbeing of other family members.
this kind of labor was not limited to their children; it also applied to other relatives. for example, the emotional labor involved phone calls to parents or other relatives, sometimes several times a day. other studies have shown that this is often part of women's routines (ciciolla & luthar, 2019; robertson et al., 2019). the months of covid-19 have been and are quite challenging for many families, and the drastic measures that have been taken to prevent its spread have meant severe changes to people's participation in everyday life and social contact (brooks et al., 2020). in accordance with new research on the effect of covid-19 on everyday life (andrew et al., 2020; carlson et al., 2020; collins et al., 2020; craig & churchill, 2020; manzo & minello, 2020; qian & fuller, 2020), the time during the social restrictions was not easy. it is apparent from the diary entries of our participants that the period with the tightest restrictions was challenging for the mothers and their families, and they expressed feelings of frustration and being overwhelmed. despite advances in gender equality over the last decades, drastic events such as the covid-19 pandemic can elicit situations that we do not necessarily pay attention to in our busy daily lives, or even resist recognizing. in iceland, which has been portrayed as a "paradise for women" (jakobsdóttir, 2018) and which is considered a global frontrunner when it comes to gender equality (the world economic forum, 2020), parents face challenges related to gendered realities, and gender equality has not been achieved regardless of what the dominant discourse may say. despite remarkably high labor participation, there are indications that women in iceland shoulder the greater burden of childcare and household labor (hjálmsdóttir & einarsdóttir, 2019; þórsdóttir, 2012), as elsewhere around the world (alon et al., 2020; knight & brinton, 2017; t.
miller, 2018). the diary entries of the mothers in the study demonstrate a gendered reality in which they experience burdens that seem to have escalated during the pandemic. media coverage stated that the covid-19 pandemic had brought back the 1950s regarding the gendered division of labor (ferguson, 2020). the same phrase was used by one of our participants. some of the women wrote about how surprised they were at how much of the household chores and the childcare remained on their shoulders. despite some steps towards gender equality in the last few decades, there are few signs of a revolution, especially within the home. the focus of the struggle for gender equality has somehow been more on the public sphere, as reflected in the measures used for gender equality indexes, which overlook the gendered division of labor in the home along with social norms and values (einarsdóttir, 2020). one of the patterns identified in the reflections of the women in our study was how they seemed to be stunned by how uneven the division of labor turned out to be during the pandemic and how much time and energy they devoted to household chores and the management of the household, carrying out the mental work within the family. their experiences support the idea of time being gendered (bryson, 2016), as they described how their time was more constrained by childcare and household chores and how they prioritized their children's needs over work. when the families were pushed into the home due to lockdowns and social restrictions, women faced an uneven division of labor that they might have been too busy in their daily lives to observe or might have found difficult to acknowledge. we argue, based on this study as well as emerging findings from larger studies from different countries (andrew et al., 2020; collins et
al., 2020; craig & churchill, 2020; manzo & minello, 2020; qian & fuller, 2020), that the situation caused by the pandemic brought to light pre-existing gendered performances and social structures, more than it caused a drastic gendered division of labor in the home. in iceland, where the dominant discourses have centered on the country as a global leader in gender equality, the existing inequalities have been overlooked. our findings suggest that there is an uneven division of labor within icelandic homes, as the mothers in the study bore the burdens of housework, childcare, emotional labor, and household mental work. if the aim is to close the gender gap both in the public and the private sphere, a focus on the gendered division of labor within the home is essential.
the impact of covid-19 on gender equality. crc tr 224 discussion paper series
how are mothers and fathers balancing work and family under lockdown? the institute for fiscal studies
coronavirus: 'mums do most childcare and chores in lockdown'. bbc news
chaos ruined the children's sleep, diet and behaviour: gendered discourses on family life in pandemic times. gender, work & organization. online advance publication
university pathways of urban and rural migration in iceland
diary methods: capturing life as it is lived
household time allocation. theoretical and empirical results from denmark
gender differences in work-family guilt in parents of young children
successful qualitative research. a practical guide for beginners
the psychological impact of quarantine and how to reduce it: rapid review of the evidence
time, power and inequalities
public policy, 'men's time' and power: the work of community midwives in the british national health service
us couples' divisions of housework and childcare during covid-19 pandemic
invisible household labor and ramifications for adjustment: mothers as captains of households
is maternal guilt a cross-national experience? qualitative sociology, 1-29.
advance online publication
covid-19 and the gender gap in working hours. gender, work & organization, 1-12. advance online publication
feeling rushed: gendered time quality, work hours, nonstandard work schedules, and spousal crossover
dual-earner parents' couples work and care during covid-19. gender, work & organization, 1-14. advance online publication
gender, emotion work, and relationship quality: a daily diary study
sex-typed personality traits and gender identity as predictors of young adults' career interests
men's employment hours and time on domestic chores in european countries
all that glitters is not gold: shrinking and bending gender equality in rankings and nation branding
gender ideology and the sharing of housework and child care in sweden
samraeming fjölskyldulífs og atvinnu: hvernig gengur starfsfólki á íslenskum vinnumarkaði að samraema fjölskyldulíf og atvinnu? (m.sc. dissertation)
iceland magazine
'i feel like a 1950s housewife': how lockdown has exposed the gender divide. the guardian
exploring new ground for using the multinational time use study. iser working paper series
still a "stalled revolution"? work/family experiences, hegemonic masculinity, and moving toward gender equality
children's independent mobility to school, friends and leisure activities
work-life balance and parenthood: a comparative review of definitions, equity and enrichment
parents, perceptions and belonging: exploring flexible working among uk fathers and mothers
faeðingar-og foreldraorlof á íslandi: þróun eftir lagasetninguna árið
mothering and gender equality in iceland: irreconcilable opposites?
iceland eases restrictions - all children's activities back to normal
stricter measures enforced in iceland: ban on gatherings of more than 20 people
outcomes of work-life balance on job satisfaction, life satisfaction and mental health: a study across seven cultures
tengsl streituvaldandi þátta í starfsumhverfi, svefns og stoðkerfisverkja hjá millistjórnendum í heilbrigðisþjónustu [correlation between stressful factors in the working environment
coping with the covid-19 crisis: force majeure and gender performativity. gender, work & organization
why iceland is the best place in the world to be a woman. the guardian
mér finnst ég stundum eins og hamstur í hjóli
the second shift: working parents and the revolution at home
high birth rates despite easy access to contraception and abortion: a cross-sectional study
icelandic association of local authorities
how to build a paradise for women. a lesson from iceland
vinnutengd streita. orsakir, úrraeði og ranghugmyndir [work related stress. causes, resources, and misbeliefs
ársrit um
childbearing trends in iceland, 1982-2013: fertility timing, quantum, and gender preferences for children in a nordic context
fjölskyldur - umbreytingar, samskipti og skilnaðarmál. reykjavík: félagsvísindastofnun háskóla íslands
measuring housework participation: the gap between "stylised" questionnaire estimates and diary-based estimates
iceland has become the first country to officially require gender pay equality
diary versus questionnaire information on time spent on housework - the case of norway
one egalitarianism or several? two decades of gender-role attitude change in europe
a balancing act?
work-life balance, health and well-being in european welfare states
(2020) mothers, childcare duties, and remote working under covid-19 lockdown in italy: cultivating communities of care
nearly half of men say they do most of the home schooling. 3 percent of women agree. the new york times
paternal and maternal gatekeeping? choreographing care
auglýsing um takmörkun á skólastarfi vegna farsóttar
labour force statistics
can we finish the revolution? gender, work-family ideals, and institutional constraint
from motherhood penalties to husband premia: the new challenge for gender equality and family policy, lessons from norway
within the aura of gender equality: icelandic work cultures, gender relations and family responsibility
covid-19 and the gender employment gap among parents of young children
balancing work-family life in academia: the power of time
mothers and mental labor: a phenomenological focus group study of family-related thinking work
gender differences in chauffeuring children among dual-earner families
launamunur karla og kvenna [the pay gap between men and women]
key figures, statistics
'bad mum guilt': the representation of 'work-life balance' in uk women's magazines
speed-up society? evidence from the uk 2000 and 2015 time use diary surveys
kórónuveiran: fyrirtaeki hvött til að þjálfa fólk í fjarvinnu [the coronavirus: companies encouraged to train workers for distance work]
the directorate of health and the department of civil protection and emergency management
the icelandic teachers union. (n.d.). streita og kulnun
the global gender gap report
working mothers interrupted more often than fathers in lockdown - study. the guardian
one country is making sure all employers offer equal pay to women
covid-19 educational disruption and response
good to be home? time-use and satisfaction levels among home-based teleworkers
vinna og heimilislíf. reykjavík: félagsvísindastofnun háskóla íslands
key: cord-324006-y4bd38zz authors: rishu, asgar h.; marinoff, nicole; julien, lisa; dumitrascu, mariana; marten, nicole; eggertson, shauna; willems, su; ruddell, stacy; lane, dan; light, bruce; stelfox, henry t.; jouvet, philippe; hall, richard; reynolds, steven; daneman, nick; fowler, robert a. title: time required to initiate outbreak and pandemic observational research date: 2017-03-01 journal: j crit care doi: 10.1016/j.jcrc.2017.02.009 sha: doc_id: 324006 cord_uid: y4bd38zz purpose: observational research focused upon emerging infectious diseases such as ebola virus, middle east respiratory syndrome, and zika virus has been challenging to quickly initiate. we aimed to determine the duration of start-up procedures and barriers encountered for an observational study focused upon such infectious outbreaks. materials and methods: at 1 pediatric and 5 adult intensive care units, we measured durations from protocol receipt to a variety of outbreak research milestones, including research ethics board (reb) approval, data sharing agreement (dsa) execution, and patient study screening initiation. results: the median (interquartile range) time from site receipt of the protocol to reb submission was 73 (30-126) days; to reb approval, 158 (42-188) days; to dsa completion, 276 (186-312) days; and to study screening initiation, 293 (269-391) days. the median time from reb submission to reb approval was 43 (13-85) days. the median time for all start-up procedures was 335 (188-335) days. conclusions: there is a lengthy start-up period required for outbreak-focused research. completing dsas was the most time-consuming step. a reactive approach to newly emerging threats such as ebola virus, middle east respiratory syndrome, and zika virus will likely not allow sufficient time to initiate research before most outbreaks are advanced.
new emerging and reemerging infections such as ebola virus, middle east respiratory syndrome (mers-cov), and zika virus are a concern for the public, clinicians, health systems, and public health agencies. outbreaks and pandemics are perceived to occur at increasing frequency; however, they remain unpredictable in their time and location of onset [1] . outbreaks increase patient morbidity and mortality, and cause additional burden on health care workers, facilities, and health agencies [2] [3] [4] . surveillance can identify cases at an early stage and lead to prevention of broader spread. severe acute respiratory syndrome [5] ; pandemic influenza a (h1n1) 2009-2010 [6] ; and, more recently, ebola virus [7] , mers-cov [8] , and zika virus have been characterized by challenges initiating observational research and a near inability to rapidly undertake interventional trials necessary to inform best practice and improve care of patients [9] [10] [11] . this has prompted calls from patients, clinicians, funders, and policy makers to improve preparedness, including the capacity to undertake real-time research during such events. however, conducting studies and trials involves time-consuming start-up steps such as development of study protocol, establishing a budget and obtaining funding, research ethics board (reb) approval, organizing multisite collaboration, and data sharing agreements. the objective of this study was to determine the delay from protocol completion to study initiation and determine time spent in each of the necessary steps to identify and collect data in real time for new and emerging infection-related critical illness. this is a time-in-motion study accompanying a prospective surveillance project to assess the feasibility of screening and real-time data collection for severe acute respiratory infection (sari)-and outbreak-related critical illness. 
the parent prospective study aimed to screen all hospitalized critically ill patients on a daily basis for up to 72 hours after admission to detect all cases of sari, the details of which are published elsewhere [12]. the study included 1 pediatric and 5 adult intensive care units (icus) across 6 canadian provinces. paper and electronic case report forms and daily and weekly screening log sheets were made available to all the sites to be used for data collection (appendix). the study was approved by each participating site's reb and was funded by the public health agency of canada, canadian critical care trials group, and heart and stroke foundation (ontario office). for the purpose of this study, the following data were collected: time required from protocol receipt by the site to reb submission, time required from reb submission to reb approval, time required from reb approval to data sharing agreement execution, time required from data sharing agreement execution to screening initiation, time required from protocol receipt to data sharing agreement execution, time required from protocol receipt to screening initiation, and overall time required for start-up procedures. categorical variables are presented as numbers and proportions. durations are presented as median, interquartile range (iqr), and ranges. all statistical tests were 2-tailed, and the significance level was set at p < .05. table 1 shows the median time required in each step along the pathway to initiate an observational study of outbreak surveillance in icus. overall start-up procedures required a median (iqr) of 335 (188-335) days (range, 128-335). median (iqr) duration from protocol receipt to reb submission was 73 (30-126) days (range, and protocol receipt to reb approval was 158 (42-188) days (range, 31-218 days).
time from protocol receipt to data sharing agreement receipt was 92 (92-104) days (range, 92-104), protocol receipt to signed data sharing agreement was 276 (186-312) days (range, 177-335), and protocol receipt to screening initiation was 293 (269-391) days (range, 258-412). time from reb submission to reb approval was 43 (13-85) days (range, , reb approval to data sharing agreement completion was 118 (58-139) days (range, , and reb approval to screening initiation was 123 (92-237) days (range, 71-238). time from data sharing agreement receipt to data sharing agreement completion was 185 (89-215) days (range, 74-244), and data sharing agreement completion to screening initiation was 78 (35-99) days (range, 6-103) (fig. 1 ). in this multicenter study of severe acute respiratory infections, we observed that it took nearly 1 year to complete all necessary start-up procedures before enrolment in the study could begin at all sites. obtaining an interinstitutional legal data sharing agreement required approximately 9 months from protocol receipt to completion-the most time-consuming process. it took sites approximately 2½ months after protocol receipt to be ready to submit to their reb yet only approximately 1½ months for reb approval. our findings indicate that despite an existing in-icu infrastructure and capability for real-time data collection and reporting, observational research during an outbreak or pandemic is at risk of failing because of the time required for start-up procedures. seasonal influenza outbreaks provide a compelling annual example. if we do not initiate the study start-up process immediately after influenza season, we will not be ready for screening at the next. the time necessary for appropriate and necessary reb vetting and approvals has been reported previously for various clinical trials [13] [14] [15] [16] [17] [18] . 
however, none of the studies have identified the actual time required in initiating outbreak-related research at multiple sites. efficient research initiation during an outbreak or pandemic is critical considering the potential for outbreak expansion and greater morbidity and mortality without better understanding of risk factors for illness and transmission, clinical course, outcomes, and responses to treatment. although we studied timelines to initiate observational research, it is possible and in fact likely that start-up time for a clinical experimental trial would be even longer. this has been the experience during severe acute respiratory syndrome, pandemic influenza, mers-cov, ebola virus, and now zika virus [9] [10] [11] . there are various reasons for delays in initiating outbreak-focused observational research both at the investigator level and at the administrative level. some of these reasons include (1) developing the study protocol and case report forms in a short span of time [13] , (2) preparing reb applications, (3) fixed meeting dates of institutional ethics boards followed by important and necessary back-and-forth communications [16] , (4) drafting and finalizing the data sharing agreements, (5) lack of parallel reviewing of reb applications and data sharing agreements across institutions, and (6) finalizing budget and arranging funding. there may be several possible ways to overcome these delays and be prepared ahead of time to conduct an outbreak-related study or trial. first, there is a need to have research-ready protocols-inwaiting for periods when seasonal or outbreak-related infections increase. 
this can be achieved through research-ready outbreak-related observational studies and trials using national and international networks [19], undertaking preemptive reb review of generic outbreak-related observational study case report forms and protocols, establishing data sharing agreements where necessary ahead of time, and helping other centers similarly prepare. although ethical approval is mandatory for research involving human subjects, there are provisions in many jurisdictions for exempt reviews for studies involving public health emergencies, typically consisting of observational studies collecting already available and anonymized data [20, 21]. similarly, collecting data as "quality assurance" or "quality improvement" does not require reb approval in some provinces.
table 1. median time (in days) spent from receipt of protocol, reb submission, and finalization of data sharing agreements to task completion at study sites.
if a multicenter observational study intends to collect nonidentifiable data from available information collected as a part of routine clinical care, which can be rapidly and efficiently used to generate new evidence, mechanisms are often in place to grant rapid assessments, and there exist guidelines to exempt certain studies from certain aspects of the review process [22]. another approach will be to identify certain steps that take the longest duration among start-up procedures. in this study, we identified that data sharing agreements took 6 to 9 months to be fully executed. we found that some sites had limited research administration and regulatory staff, that some sites were busy with other ongoing research-related activities, and that starting an unplanned research project introduced substantial demand on a system with already stretched capacity. hospitals often have unique research administrative structures.
some university-affiliated hospitals were required to obtain reb approval from a university authority first and then from the local hospital to proceed, whereas others required data sharing agreements to be finalized before issuing the final reb approval. improving efficiency and parallel administrative activities for certain types of low-risk observational studies are a potential mechanism to mitigate delays in start-up procedures. centralized ethics approvals for pandemic research at provincial, state, and national levels may also help to improve efficiency and lessen workload for individual sites [23] . having durable (5-10 years) protocols and generic approvals, to include anticipated ranges of pathogens and/or outbreaks meeting prespecified criteria, may also be more appropriate for outbreak and pandemic-related research as opposed to annual reapproval. having a tiered case report form that seeks to collect either a minimal amount of core clinical information or more detailed data, depending upon the clinical research resources of individual sites, might assist in both start-up and actual study-related workload and translate to greater enthusiasm, capacity, and shorter start-up times. the world health organization-international severe acute respiratory and emerging infection consortium clinical characterization protocol provides one such example [19] . planning and preparedness before the next outbreak or pandemic strikes, during interpandemic periods, are essential for an effective research and subsequent clinical and health system response. because of previous experiences in delays, there is a need to have a strategic plan for the surveillance of these emerging infections, if not at all times then during times of increased local or national risk, and to develop a mechanism to augment existing public health reporting with richer clinical data. 
recent examples of research responses to new infectious disease events include funding and initiation of interpandemic clinical trials by groups within the platform for european preparedness against (re)emerging epidemics [24] and coordination of funding efforts through the formation of the global research collaboration for infectious disease preparedness [25]. models of informed consent are one other important consideration for outbreak- and pandemic-related research [26]. obtaining truly informed consent for research involving time-sensitive interventions, during critical illness, in the midst of an outbreak or pandemic is challenging. it can sometimes be difficult to locate and fully inform substitute decision-makers of critically ill patients in a timely manner for interventions targeted at prehospital care or during the period of primary resuscitation. deferred consent may be appropriate for select emergency and time-sensitive interventions [27]. waived consent may, occasionally, be appropriate when evaluating select interventions that fall firmly within the standard of care. the strengths of this study include prospective data collection; use of internationally employed case definitions and eligibility criteria for, in this case, sari-related outbreak activity; a fully operational web-accessible case reporting system [28]; and an experienced research team with expertise in outbreak- and pandemic-specific research. this is the first study to report the actual duration of time spent in each step to initiate multisite outbreak-related research. limitations to this study include lack of qualitative data from participating site research staff to better understand their perspectives regarding delays. future studies may focus upon this complementary aspect.
also, the study was limited to major hospitals already carrying out critical care research, and therefore, we may be underestimating required timelines among centers without staff already familiar with the processes necessary for study start-up. finally, although this study was focused upon surveillance of sari during a period of global concern for many outbreak-causing pathogens, including influenza a (h7n9, h1n1, h5n1) and mers-cov, it was initiated during an interoutbreak period in canada; start-up time may be shorter or longer during an actual outbreak, and the study has generalizable lessons for nonrespiratory outbreaks such as ebola and zika virus. in this study, we found that there is substantial start-up time required to initiate outbreak-related observational research, which may impede our ability to conduct research, generate knowledge to help care for patients, and prepare for future threats. our study stresses the need to have a nationally and internationally coordinated approach, with context-appropriate, tiered case report forms and preparatory work (protocol and case report form generation, data sharing agreements, and reb submissions) completed during the pre- and interoutbreak periods. to have the research mechanisms functional for real-time data collection and reporting when they are required, durable administrative and ethical approvals and data sharing agreements must be planned and executed before outbreaks and pandemics occur.
fig. 1. diagrammatic representation of median time (in days) spent from receipt of protocol, reb submission, and finalization of data sharing agreements to task completion at study sites.
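the milestone durations above are reported as median (interquartile range) in days. a minimal sketch of that summary computation using python's standard library; the per-site values below are illustrative placeholders, not the study's data:

```python
import statistics

def summarize_days(durations):
    """Return (median, q1, q3) in days for a list of milestone durations."""
    med = statistics.median(durations)
    # "inclusive" quartiles interpolate linearly between sorted data points
    q1, _, q3 = statistics.quantiles(durations, n=4, method="inclusive")
    return med, q1, q3

# illustrative per-site days from protocol receipt to REB submission (made up)
site_days = [30, 45, 73, 110, 126, 128]
med, q1, q3 = summarize_days(site_days)
print(f"median {med} (IQR {q1}-{q3})")  # median 91.5 (IQR 52.0-122.0)
```

the same helper applies to any of the milestones (reb approval, dsa execution, screening initiation) given per-site start and end dates.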
influenza pandemics of the 20th century
critical care capacity in canada: results of a national cross-sectional study
development of a triage protocol for critical care during an influenza pandemic
triaging for adult critical care in the event of overwhelming need
critically ill patients with severe acute respiratory syndrome
canadian critical care trials group h1n1 collaborative. critically ill patients with 2009 influenza a(h1n1) infection in canada
kgh lassa fever program, viral hemorrhagic fever consortium, who clinical response team. clinical illness and outcomes in patients with ebola in sierra leone
ksa mers-cov investigation team. hospital outbreak of middle east respiratory syndrome coronavirus
early observational research and registries during the 2009-2010 influenza a pandemic
clinical issues and research in respiratory failure from severe acute respiratory syndrome
the challenges of treating ebola virus disease with experimental therapies
influenza a (h1n1pdm09)-related critical illness and mortality in mexico and canada
time required for institutional review board review at one veterans affairs medical center
time required to start multicentre clinical trials within the italian medicine agency programme of support for independent research
time to activate lung cancer clinical trials and patient enrollment: a representative comparison study between two academic centers across the atlantic
impact of institutional review board practice variation on observational health services research
variations among institutional review board reviews in a multisite health services research study
obtaining regulatory approval for multicentre randomised controlled trials: experiences in the stich ii trial
international severe acute respiratory and emerging infection consortium (isaric)
for debate: should observational clinical studies require ethics committee approval?
should observational clinical studies require ethics committee approval?
institutional review board consideration of chart reviews, case reports, and observational studies
central institutional review board review for an academic trial network
platform for european preparedness against (re-)emerging epidemics (prepare)
global research collaboration for infectious disease preparedness (glopid-r)
clinical research ethics for critically ill patients: a pandemic proposal
key stakeholder perceptions about consent to participate in acute illness research: a rapid, systematic review to inform epi/pandemic research preparedness

supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.jcrc.2017.02.009.

nonlinearity + networks: a 2020 vision. porter, mason a. 2019-11-09.

i briefly survey several fascinating topics in networks and nonlinearity. i highlight a few methods and ideas, including several of personal interest, that i anticipate to be especially important during the next several years. these topics include temporal networks (in which the entities and/or their interactions change in time), stochastic and deterministic dynamical processes on networks, adaptive networks (in which a dynamical process on a network is coupled to dynamics of network structure), and network structure and dynamics that include "higher-order" interactions (which involve three or more entities in a network). i draw examples from a variety of scenarios, including contagion dynamics, opinion models, waves, and coupled oscillators. in its broadest form, a network consists of the connectivity patterns and connection strengths in a complex system of interacting entities [121] . the most traditional type of network is a graph g = (v, e) (see fig. 1a), where v is a set of "nodes" (i.e., "vertices") that encode entities and e ⊆ v × v is a set of "edges" (i.e., "links" or "ties") that encode the interactions between those entities.
however, recent uses of the term "network" have focused increasingly on connectivity patterns that are more general than graphs [98] : a network's nodes and/or edges (or their associated weights) can change in time [70, 72] (see section 3), nodes and edges can include annotations [26] , a network can include multiple types of edges and/or multiple types of nodes [90, 140] , it can have associated dynamical processes [142] (see sections 3, 4, and 5), it can include memory [152] , connections can occur between an arbitrary number of entities [127, 131] (see section 6), and so on. associated with a graph is an adjacency matrix a with entries a_ij. in the simplest scenario, edges either exist or they don't. if edges have directions, a_ij = 1 when there is an edge from entity j to entity i and a_ij = 0 when there is no such edge. when a_ij = 1, node i is "adjacent" to node j (because we can reach i directly from j), and the associated edge is "incident" from node j and to node i. the edge from j to i is an "out-edge" of j and an "in-edge" of i. the number of out-edges of a node is its "out-degree", and the number of in-edges of a node is its "in-degree". for an undirected network, a_ij = a_ji, and the number of edges that are attached to a node is the node's "degree". one can assign weights to edges to represent connections with different strengths (e.g., stronger friendships or larger transportation capacity) by defining a function w : e −→ r. in many applications, the weights are nonnegative, although several applications [180] (such as in international relations) incorporate positive, negative, and zero weights. in some applications, nodes can also have self-edges and multi-edges. the spectral properties of adjacency (and other) matrices give important information about their associated graphs [121, 187] . for undirected networks, it is common to exploit the beneficent property that all eigenvalues of symmetric matrices are real.
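the adjacency-matrix conventions above can be sketched in a few lines of code. this is a minimal illustration in plain python; the small directed graph is hypothetical.

```python
# adjacency-matrix conventions: a[i][j] = 1 when there is an edge from
# entity j to entity i (the edge is incident from j and to i).
n = 4
a = [[0] * n for _ in range(n)]

directed_edges = [(0, 1), (0, 2), (1, 2), (3, 0)]  # (source j, target i) pairs
for j, i in directed_edges:
    a[i][j] = 1

def in_degree(a, i):
    """number of in-edges of node i (a row sum)."""
    return sum(a[i])

def out_degree(a, j):
    """number of out-edges of node j (a column sum)."""
    return sum(row[j] for row in a)
```

for an undirected network, one would additionally set a[j][i] = 1 for each edge, which makes the matrix symmetric (and hence its eigenvalues real).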
traditional studies of networks consider time-independent structures, but most networks evolve in time. for example, social networks of people and animals change based on their interactions, roads are occasionally closed for repairs and new roads are built, and airline routes change with the seasons and over the years. to study such time-dependent structures, one can analyze "temporal networks". see [70, 72] for reviews and [73, 74] for edited collections. the key idea of a temporal network is that networks change in time, but there are many ways to model such changes, and the time scales of interactions and other changes play a crucial role in the modeling process. [fig. 1 caption: an example of a multilayer network with three layers. each layer is labeled using different colours for its state nodes and edges: black nodes and brown edges (three of which are unidirectional) for layer 1, purple nodes and green edges for layer 2, and pink nodes and grey edges for layer 3. each state node (i.e., node-layer tuple) has a corresponding physical node and layer, so the tuple (a, 3) denotes physical node a on layer 3, the tuple (d, 1) denotes physical node d on layer 1, and so on. intralayer edges are drawn as solid arcs and interlayer edges as broken arcs; an interlayer edge is dashed (and magenta) if it connects corresponding entities and dotted (and blue) if it connects distinct ones. arrowheads represent unidirectional edges. the networks were drawn using tikz-network (jürgen hackl, https://github.com/hackl/tikz-network), which allows one to draw networks (including multilayer networks) directly in a latex file. panel (b) is inspired by fig. 1 of [72].] there are also other
[fig. 1 caption, continued: panel (d), which is in the public domain, was drawn by wikipedia user cflm001 and is available at https://en.wikipedia.org/wiki/simplicial_complex.] important modeling considerations. to illustrate potential complications, suppose that an edge in a temporal network represents close physical proximity between two people in a short time window (e.g., with a duration of two minutes). it is relevant to consider whether there is an underlying social network (e.g., the friendship network of mathematics ph.d. students at ucla) or if the people in the network do not in general have any other relationships with each other (e.g., two people who happen to be visiting a particular museum on the same day). in both scenarios, edges that represent close physical proximity still appear and disappear over time, but indirect connections (i.e., between people who are on the same connected component, but without an edge between them) in a time window may play different roles in the spread of information. moreover, network structure itself is often influenced by a spreading process or other dynamics, as perhaps one arranges a meeting to discuss a topic (e.g., to give me comments on a draft of this chapter). see my discussion of adaptive networks in section 5. for convenience, most work on temporal networks employs discrete time (see fig. 1(b)). discrete time can arise from the natural discreteness of a setting, discretization of continuous activity over different time windows, data measurement that occurs at discrete times, and so on. one way to represent a discrete-time (or discretized-time) temporal network is to use the formalism of "multilayer networks" [90, 140] . one can also use multilayer networks to study networks with multiple types of relations, networks with multiple subsystems, and other complicated networked structures. a multilayer network m (see fig.
1(c)) has a set v of nodes (these are sometimes called "physical nodes", and each of them corresponds to an entity, such as a person) that have instantiations as "state nodes" (i.e., node-layer tuples, which are elements of the set v_m) on layers in l. one layer in the set l is a combination, through the cartesian product l_1 × · · · × l_d, of elementary layers. the number d indicates the number of types of layering; these are called "aspects". a temporal network with one type of relationship has one type of layering, a time-independent network with multiple types of social relationships also has one type of layering, a multirelational network that changes in time has two types of layering, and so on. the set of state nodes in m is v_m ⊆ v × l_1 × · · · × l_d, and the set of edges is e_m ⊆ v_m × v_m, where an edge ((i, α), (j, β)) ∈ e_m indicates that there is an edge from node j on layer β to node i on layer α (and vice versa, if m is undirected). for example, in fig. 1(c), there is a directed intralayer edge from (a, 1) to (b, 1) and an undirected interlayer edge between (a, 1) and (a, 2). the multilayer network in fig. 1(c) has three layers, |v| = 5 physical nodes, d = 1 aspect, |v_m| = 13 state nodes, and |e_m| = 20 edges. to consider weighted edges, one proceeds as in ordinary graphs by defining a function w : e_m −→ r. as in ordinary graphs, one can also incorporate self-edges and multi-edges. multilayer networks can include both intralayer edges (which have the same meaning as in graphs) and interlayer edges. the multilayer network in fig. 1(c) has 4 directed intralayer edges, 10 undirected intralayer edges, and 6 undirected interlayer edges. in most studies thus far of multilayer representations of temporal networks, researchers have included interlayer edges only between state nodes in consecutive layers and only between state nodes that are associated with the same entity (see fig. 1(c)).
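as a concrete illustration of state nodes and the two edge types, here is a minimal sketch in plain python. the toy network and its names are hypothetical (it is not the network of fig. 1(c)), and its interlayer edges follow the convention just described: they join copies of the same entity in consecutive layers.

```python
# state nodes are (physical node, layer) tuples; edges join state nodes.
state_nodes = {("a", 1), ("b", 1), ("a", 2), ("b", 2)}
edges = [
    (("a", 1), ("b", 1)),  # intralayer edge on layer 1
    (("a", 2), ("b", 2)),  # intralayer edge on layer 2
    (("a", 1), ("a", 2)),  # interlayer edge: entity a, layers 1 and 2
    (("b", 1), ("b", 2)),  # interlayer edge: entity b, layers 1 and 2
]

def is_intralayer(edge):
    """an edge is intralayer when both endpoints lie on the same layer."""
    (_, alpha), (_, beta) = edge
    return alpha == beta

intralayer = [e for e in edges if is_intralayer(e)]
interlayer = [e for e in edges if not is_intralayer(e)]
```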
however, this restriction is not always desirable (see [184] for an example), and one can envision interlayer couplings that incorporate ideas like time horizons and interlayer edge weights that decay over time. for convenience, many researchers have used undirected interlayer edges in multilayer analyses of temporal networks, but it is often desirable for such edges to be directed to reflect the arrow of time [176] . the sequence of network layers, which constitute time layers, can represent a discrete-time temporal network at different time instances or a continuous-time network in which one bins (i.e., aggregates) the network's edges to form a sequence of time windows with interactions in each window. each d-aspect multilayer network with the same number of nodes in each layer has an associated adjacency tensor a of order 2(d + 1). for unweighted multilayer networks, each edge in e_m is associated with a 1 entry of a, and the other entries (the "missing" edges) are 0. if a multilayer network does not have the same number of nodes in each layer, one can add empty nodes so that it does, but the edges that are attached to such nodes are "forbidden". there has been some research on tensorial properties of a [35] (and it is worthwhile to undertake further studies of them), but the most common approach for computations is to flatten a into a "supra-adjacency matrix" a_m [90, 140] , which is the adjacency matrix of the graph g_m that is associated with m. the entries of diagonal blocks of a_m correspond to intralayer edges, and the entries of off-diagonal blocks correspond to interlayer edges. following a long line of research in sociology [37] , two important ingredients in the study of networks are examining (1) the importances ("centralities") of nodes, edges, and other small network structures and the relationship of measures of importance to dynamical processes on networks and (2) the large-scale organization of networks [121, 193] .
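the flattening into a supra-adjacency matrix described above can be sketched as follows. this is a minimal python illustration with d = 1 aspect and two toy layers; the layers and the uniform interlayer coupling weight ω are assumptions for the example.

```python
# flatten a two-layer temporal network into a supra-adjacency matrix:
# diagonal blocks hold intralayer edges; off-diagonal blocks hold
# interlayer edges that couple each node to its own copy in the other layer.
n = 3  # physical nodes per layer
layer1 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # path a-b-c
layer2 = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]  # star centred on a
omega = 1  # interlayer coupling weight

size = 2 * n
supra = [[0] * size for _ in range(size)]
for i in range(n):
    for j in range(n):
        supra[i][j] = layer1[i][j]          # top-left diagonal block
        supra[n + i][n + j] = layer2[i][j]  # bottom-right diagonal block
    supra[i][n + i] = omega  # interlayer edge (i, layer 1) -- (i, layer 2)
    supra[n + i][i] = omega
```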
studying central nodes in networks is useful for numerous applications, such as ranking web pages, football teams, or physicists [56] . it can also help reveal the roles of nodes in networks, such as those that experience high traffic or help bridge different parts of a network [121, 193] . mesoscale features can impact network function and dynamics in important ways. small subgraphs called "motifs" may appear frequently in some networks [111] , perhaps indicating fundamental structures such as feedback loops and other building blocks of global behavior [59] . various types of larger-scale network structures, such as dense "communities" of nodes [47, 145] and core-periphery structures [33, 150] , are also sometimes related to dynamical modules (e.g., a set of synchronized neurons) or functional modules (e.g., a set of proteins that are important for a certain regulatory process) [164] . a common way to study large-scale structures is inference using statistical models of random networks, such as through stochastic block models (sbms) [134] . much recent research has generalized the study of large-scale network structure to temporal and multilayer networks [3, 74, 90] . various types of centrality -including betweenness centrality [88, 173] , bonacich and katz centrality [65, 102] , communicability [64] , pagerank [151, 191] , and eigenvector centrality [46, 146] -have been generalized to temporal networks using a variety of approaches. such generalizations make it possible to examine how node importances change over time as network structure evolves.
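as a baseline for the centralities just listed, eigenvector centrality for a time-independent network can be computed with power iteration. this is a minimal sketch; the small graph (a triangle with a pendant node) is illustrative.

```python
# power iteration for eigenvector centrality: repeatedly apply the
# adjacency matrix and normalize; the iterate converges to the dominant
# eigenvector for a connected, non-bipartite undirected network.
def eigenvector_centrality(a, iterations=200):
    n = len(a)
    v = [1.0] * n
    for _ in range(iterations):
        w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(abs(x) for x in w)
        v = [x / norm for x in w]
    return v

# triangle 0-1-2 with a pendant node 3 attached to node 0
a = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]
c = eigenvector_centrality(a)
```

node 0 has the largest centrality (it has the highest degree and well-connected neighbors), nodes 1 and 2 are tied by symmetry, and the pendant node 3 has the smallest value.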
in recent work, my collaborators and i used multilayer representations of temporal networks to generalize eigenvector-based centralities to temporal networks [175, 176] . one computes the eigenvector-based centralities of nodes for a time-independent network as the entries of the "dominant" eigenvector, which is associated with the largest positive eigenvalue (by the perron-frobenius theorem, the eigenvalue with the largest magnitude is guaranteed to be positive in these situations) of a centrality matrix c(a). examples include eigenvector centrality (by using c(a) = a) [17] , hub and authority scores (by using c(a) = a a^t for hubs and a^t a for authorities) [91] , and pagerank [56] . given a discrete-time temporal network in the form of a sequence of adjacency matrices a^(t), where a^(t)_ij denotes a directed edge from entity i to entity j in time layer t, we construct a "supracentrality matrix" c(ω), which couples centrality matrices c(a^(t)) of the individual time layers. we then compute the dominant eigenvector of c(ω), where ω is an interlayer coupling strength. in [175, 176] , a key example was the ranking of doctoral programs in the mathematical sciences (using data from the mathematics genealogy project [147] ), where an edge from one institution to another arises when someone with a ph.d. from the first institution supervises a ph.d. student at the second institution. by calculating time-dependent centralities, we can study how the rankings of mathematical-sciences doctoral programs change over time and the dependence of such rankings on the value of ω. larger values of ω impose more ranking consistency across time, so centrality trajectories are less volatile for larger ω [175, 176] . multilayer representations of temporal networks have been very insightful in the detection of communities and how they split, merge, and otherwise evolve over time.
numerous methods for community detection -including inference via sbms [135] , maximization of objective functions (especially "modularity") [117] , and methods based on random walks and bottlenecks to their traversal of a network [38, 80] -have been generalized from graphs to multilayer networks. they have yielded insights in a diverse variety of applications, including brain networks [183] , granular materials [129] , political voting networks [113, 117] , disease spreading [158] , and ecology and animal behavior [45, 139] . to assist with such applications, there are efforts to develop and analyze multilayer random-network models that incorporate rich and flexible structures [11] , such as diverse types of interlayer correlations. activity-driven (ad) models of temporal networks [136] are a popular family of generative models that encode instantaneous time-dependent descriptions of network dynamics through a function called an "activity potential", which encodes the mechanism to generate connections and characterizes the interactions between entities in a network. an activity potential encapsulates all of the information about the temporal network dynamics of an ad model, making it tractable to study dynamical processes (such as ones from section 4) on networks that are generated by such a model. it is also common to compare the properties of networks that are generated by ad models to those of empirical temporal networks [74] . in the original ad model of perra et al. [136] , one considers a network with n entities, which we encode by the nodes. we suppose that node i has an activity rate a_i = η x_i, which gives the probability per unit time to create new interactions with other nodes.
the scaling factor η ensures that the mean number of active nodes per unit time is η⟨x⟩n. we define the activity rates such that x_i ∈ [ε, 1], where ε > 0, and we assign each x_i from a probability distribution f(x) that can either take a desired functional form or be constructed from empirical data. the model uses the following generative process:
• at each discrete time step (of length ∆t), start with a network g_t that consists of n isolated nodes.
• with a probability a_i ∆t that is independent of other nodes, node i is active and generates m edges, each of which attaches to other nodes uniformly (i.e., with the same probability for each node) and independently at random (without replacement). nodes that are not active can still receive edges from active nodes.
• at the next time step t + ∆t, we delete all edges from g_t, so all interactions have a constant duration of ∆t. we then generate new interactions from scratch.
this is convenient, as it allows one to apply techniques from markov chains. because entities in time step t do not have any memory of previous time steps, f(x) encodes the network structure and dynamics. the ad model of perra et al. [136] is overly simplistic, but it is amenable to analysis and has provided a foundation for many more general ad models, including ones that incorporate memory [200] . in section 6.4, i discuss a generalization of ad models to simplicial complexes [137] that allows one to study instantaneous interactions that involve three or more entities in a network. many networked systems evolve continuously in time, but most investigations of time-dependent networks rely on discrete or discretized time. it is important to undertake more analysis of continuous-time temporal networks. researchers have examined continuous-time networks in a variety of scenarios.
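the generative process above can be sketched as follows. this is a minimal python simulation; the parameter values and the uniform choice for f(x) are illustrative assumptions, not those of any particular study.

```python
# one time layer of an activity-driven model: node i activates with
# probability a_i * dt and sends m edges to uniformly random other nodes.
import random

random.seed(0)
n, m, eta, dt = 100, 2, 10.0, 0.1
x = [random.uniform(0.01, 1.0) for _ in range(n)]  # activity potentials x_i
activity = [eta * xi for xi in x]                   # activity rates a_i

def step():
    """generate one time layer g_t; edges last a single step."""
    edges = set()
    for i in range(n):
        if random.random() < activity[i] * dt:  # node i is active
            targets = random.sample([j for j in range(n) if j != i], m)
            for j in targets:
                edges.add((i, j))
    return edges

g_t = step()
```

because each layer is generated from scratch, a long simulation is just a sequence of independent calls to step(), which is what makes the model amenable to markov-chain techniques.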
examples include a compartmental model of biological contagions [185] , a generalization of katz centrality to continuous time [65] , generalizations of ad models (see section 3.1.3) to continuous time [198, 199] , and rankings in competitive sports [115] . in a recent paper [2] , my collaborators and i formulated a notion of "tie-decay networks" for studying networks that evolve in continuous time. they distinguished between interactions, which they modeled as discrete contacts, and ties, which encode relationships and their strength as a function of time. for example, perhaps the strength of a tie decays exponentially after the most recent interaction. more realistically, perhaps the decay rate depends on the weight of a tie, with strong ties decaying more slowly than weak ones. one can also use point-process models like hawkes processes [99] to examine similar ideas using a node-centric perspective. suppose that there are n interacting entities, and let b(t) be the n × n time-dependent, real, non-negative matrix whose entries b_ij(t) encode the tie strength between agents i and j at time t. in [2] , we made the following simplifying assumptions:
1. as in [81] , ties decay exponentially when there are no interactions: db_ij/dt = −α b_ij, where α ≥ 0 is the decay rate.
2. if two entities interact at time t = τ, the strength of the tie between them grows instantaneously by 1.
see [201] for a comparison of various choices, including those in [2] and [81] , for tie evolution over time. in practice (e.g., in data-driven applications), one obtains b(t) by discretizing time, so let's suppose that there is at most one interaction during each time step of length ∆t. this occurs, for example, in a poisson process. such time discretization is common in the simulation of stochastic dynamical systems, such as in gillespie algorithms [41, 142, 189] . consider an n × n matrix a(t) in which a_ij(t) = 1 if node i interacts with node j at time t and a_ij(t) = 0 otherwise.
for a directed network, a(t) has exactly one nonzero entry during each time step when there is an interaction and no nonzero entries when there isn't one. for an undirected network, because of the symmetric nature of interactions, there are exactly two nonzero entries in time steps that include an interaction. we write b(t + ∆t) = e^(−α ∆t) b(t) + a(t + ∆t). equivalently, if interactions between entities occur at times τ^(ℓ) such that 0 ≤ τ^(0) < τ^(1) < . . . < τ^(T), then at time t ≥ τ^(T), we have b(t) = Σ_{ℓ=0}^{T} e^(−α(t − τ^(ℓ))) a(τ^(ℓ)). in [2] , my coauthors and i generalized pagerank [20, 56] to tie-decay networks. one nice feature of their tie-decay pagerank is that it is applicable not just to data sets, but also to data streams, as one updates the pagerank values as new data arrives. by contrast, one problematic feature of many methods that rely on multilayer representations of temporal networks is that one needs to recompute everything for an entire data set upon acquiring new data, rather than updating prior results in a computationally efficient way. a dynamical process can be discrete, continuous, or some mixture of the two; it can also be either deterministic or stochastic. it can take the form of one or several coupled ordinary differential equations (odes), partial differential equations (pdes), maps, stochastic differential equations, and so on. a dynamical process requires a rule for updating the states of its dependent variables with respect to one or more independent variables (e.g., time), and one also has (one or a variety of) initial conditions and/or boundary conditions. to formalize a dynamical process on a network, one needs a rule for how to update the states of the nodes and/or edges. the nodes (of one or more types) of a network are connected to each other in nontrivial ways by one or more types of edges. this leads to a natural question: how does nontrivial connectivity between nodes affect dynamical processes on a network [142] ?
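returning to tie-decay bookkeeping: under the two assumptions above (exponential decay, plus a jump of 1 upon an interaction), the discretized update b(t + ∆t) = e^(−α ∆t) b(t) + a(t + ∆t) can be sketched as follows. this is a minimal python illustration; the parameter values are arbitrary.

```python
# discretized tie-decay update: ties decay by a factor e^{-alpha * dt}
# each step and jump by 1 when an interaction occurs.
import math

n, alpha, dt = 3, 0.5, 1.0
b = [[0.0] * n for _ in range(n)]

def update(b, interaction=None):
    """one time step; `interaction` is an optional (i, j) contact."""
    decay = math.exp(-alpha * dt)
    new_b = [[decay * b[i][j] for j in range(n)] for i in range(n)]
    if interaction is not None:
        i, j = interaction
        new_b[i][j] += 1.0  # tie strength grows instantaneously by 1
    return new_b

b = update(b, interaction=(0, 1))  # contact between entities 0 and 1
b = update(b)                      # no contact: pure exponential decay
```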
when studying a dynamical process on a network, the network structure encodes which entities (i.e., nodes) of a system interact with each other and which do not. if desired, one can ignore the network structure entirely and just write out a dynamical system. however, keeping track of network structure is often a very useful and insightful form of bookkeeping, which one can exploit to systematically explore how particular structures affect the dynamics of particular dynamical processes. prominent examples of dynamical processes on networks include coupled oscillators [6, 149] , games [78] , and the spread of diseases [89, 130] and opinions [23, 100] . there is also a large body of research on the control of dynamical processes on networks [103, 116] . most studies of dynamics on networks have focused on extending familiar models -such as compartmental models of biological contagions [89] or kuramoto phase oscillators [149] -by coupling entities using various types of network structures, but it is also important to formulate new dynamical processes from scratch, rather than only studying more complicated generalizations of our favorite models. when trying to illuminate the effects of network structure on a dynamical process, it is often insightful to provide a baseline comparison by examining the process on a convenient ensemble of random networks [142] . a simple, but illustrative, dynamical process on a network is the watts threshold model (wtm) of a social contagion [100, 142] . it provides a framework for illustrating how network structure can affect state changes, such as the adoption of a product or a behavior, and for exploring which scenarios lead to "virality" (in the form of state changes of a large number of nodes in a network). the original wtm [194] , a binary-state threshold model that resembles bootstrap percolation [24] , has a deterministic update rule, so stochasticity can come only from other sources (see section 4.2). 
in a binary state model, each node is in one of two states; see [55] for a tabulation of well-known binary-state dynamics on networks. the wtm is a modification of mark granovetter's threshold model for social influence in a fully-mixed population [62] . see [86, 186] for early work on threshold models on networks that developed independently from investigations of the wtm. threshold contagion models have been developed for many scenarios, including contagions with multiple stages [109] , models with adoption latency [124] , models with synergistic interactions [83] , and situations with hipsters (who may prefer to adopt a minority state) [84] . in a binary-state threshold model such as the wtm, each node i has a threshold r_i that one draws from some distribution. suppose that r_i is constant in time, although one can generalize it to be time-dependent. at any time, each node can be in one of two states: 0 (which represents being inactive, not adopted, not infected, and so on) or 1 (active, adopted, infected, and so on). a binary-state model is a drastic oversimplification of reality, but the wtm is able to capture two crucial features of social systems [125] : interdependence (an entity's behavior depends on the behavior of other entities) and heterogeneity (as nodes with different threshold values behave differently). one can assign a seed number or seed fraction of nodes to the active state, and one can choose the initially active nodes either deterministically or randomly. the states of the nodes change in time according to an update rule, which can either be synchronous (such that it is a map) or asynchronous (e.g., as a discretization of continuous time) [142] . in the wtm, the update rule is deterministic, so this choice affects only how long it takes to reach a steady state; it does not affect the steady state itself.
with a stochastic update rule, the synchronous and asynchronous versions of ostensibly the "same" model can behave in drastically different ways [43] . in the wtm on an undirected network, to update the state of a node, one compares its fraction s_i/k_i of active neighbors (where s_i is the number of active neighbors and k_i is the degree of node i) to the node's threshold r_i. an inactive node i becomes active (i.e., it switches from state 0 to state 1) if s_i/k_i ≥ r_i; otherwise, it stays inactive. the states of nodes in the wtm are monotonic, in the sense that a node that becomes active remains active forever. this feature is convenient for deriving accurate approximations for the global behavior of the wtm using branching-process approximations [55, 142] or when analyzing the behavior of the wtm using tools such as persistent homology [174] . a dynamical process on a network can take the form of a stochastic process [121, 142] . there are several possible sources of stochasticity: (1) choice of initial condition, (2) choice of which nodes or edges to update (when considering asynchronous updating), (3) the rule for updating nodes or edges, (4) the values of parameters in an update rule, and (5) selection of particular networks from a random-graph ensemble (i.e., a probability distribution on graphs). some or all of these sources of randomness can be present when studying dynamical processes on networks. it is desirable to compare the sample mean of a stochastic process on a network to an ensemble average (i.e., to an expectation over a suitable probability distribution). prominent examples of stochastic processes on networks include percolation [153] , random walks [107] , compartment models of biological contagions [89, 130] , bounded-confidence models with continuous-valued opinions [110] , and other opinion and voter models [23, 100, 142, 148] .
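the wtm update rule above can be sketched as follows. this is a minimal python implementation with synchronous updating; the path graph, the single seed, and the uniform thresholds are illustrative.

```python
# watts threshold model: an inactive node activates when its fraction of
# active neighbors s_i / k_i reaches its threshold r_i; active nodes stay
# active forever (monotonicity).
def wtm_step(neighbors, thresholds, active):
    """one synchronous update based on the current active set."""
    new_active = set(active)
    for i, nbrs in neighbors.items():
        if i in active or not nbrs:
            continue
        s = sum(1 for j in nbrs if j in active)
        if s / len(nbrs) >= thresholds[i]:
            new_active.add(i)
    return new_active

# path graph 0-1-2-3 with a single seed at node 0
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
thresholds = {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5}
active = {0}
while True:
    new_active = wtm_step(neighbors, thresholds, active)
    if new_active == active:  # steady state reached
        break
    active = new_active
```

here the activation spreads down the path one node per step until every node is active; because the update rule is deterministic, asynchronous updating would reach the same steady state.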
compartmental models of biological contagions are a topic of intense interest in network science [89, 121, 130, 142] . a compartment represents a possible state of a node; examples include susceptible, infected, zombified, vaccinated, and recovered. an update rule determines how a node changes its state from one compartment to another. one can formulate models with as many compartments as desired [18] , but investigations of how network structure affects dynamics typically have employed examples with only two or three compartments [89, 130] . researchers have studied various extensions of compartmental models, contagions on multilayer and temporal networks [4, 34, 90] , metapopulation models on networks [30] for simultaneously studying network connectivity and subpopulations with different characteristics, non-markovian contagions on networks for exploring memory effects [188] , and explicit incorporation of individuals with essential societal roles (e.g., health-care workers) [161] . as i discuss in section 4.4, one can also examine coupling between biological contagions and the spread of information (e.g., "awareness") [50, 192] . one can also use compartmental models to study phenomena, such as dissemination of ideas on social media [58] and forecasting of political elections [190] , that are much different from the spread of diseases. one of the most prominent examples of a compartmental model is a susceptible-infected-recovered (sir) model, which has three compartments. susceptible nodes are healthy and can become infected, and infected nodes can eventually recover. the steady state of the basic sir model on a network is related to a type of bond percolation [63, 68, 87, 181] . there are many variants of sir models and other compartmental models on networks [89] . see [114] for an illustration using susceptible-infected-susceptible (sis) models. suppose that an infection is transmitted from an infected node to a susceptible neighbor at a rate of λ.
the probability of a transmission event on one edge between an infected node and a susceptible node in an infinitesimal time interval dt is λ dt. assuming that all infection events are independent, the probability that a susceptible node with s infected neighbors becomes infected (i.e., for a node to transition from the s compartment to the i compartment, which represents both being infected and being infective) during dt is 1 − (1 − λ dt)^s. if an infected node recovers at a constant rate of µ, the probability that it switches from state i to state r in an infinitesimal time interval dt is µ dt. when there is no source of stochasticity, a dynamical process on a network is "deterministic". a deterministic dynamical system can take the form of a system of coupled maps, odes, pdes, or something else. as with stochastic systems, the network structure encodes which entities of a system interact with each other and which do not. there are numerous interesting deterministic dynamical systems on networks - just incorporate nontrivial connectivity between entities into your favorite deterministic model - although it is worth noting that some stochastic features (e.g., choosing parameter values from a probability distribution or sampling choices of initial conditions) can arise in these models. for concreteness, let's consider the popular setting of coupled oscillators. each node in a network is associated with an oscillator, and we want to examine how network structure affects the collective behavior of the coupled oscillators. it is common to investigate various forms of synchronization (a type of coherent behavior), such that the rhythms of the oscillators adjust to match each other (or to match a subset of the oscillators) because of their interactions [138]. a variety of methods, such as "master stability functions" [132], have been developed to study the local stability of synchronized states and their generalizations [6, 142], such as cluster synchrony [133].
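the stochastic sir update rule described above (infection with probability 1 − (1 − λ dt)^s per step, recovery with probability µ dt) can be sketched as a discrete-time simulation; the state encoding, the triangle network, and all parameter values here are illustrative choices:

```python
import random

# hedged sketch of a discrete-time SIR process on a network; the step
# size dt and the rates lam (infection) and mu (recovery) are illustrative
def sir_step(adj, state, lam, mu, dt, rng):
    """state[i] in {'S', 'I', 'R'}. A susceptible node with s infected
    neighbors becomes infected with probability 1 - (1 - lam*dt)**s,
    assuming independent transmission events on each S-I edge."""
    new_state = dict(state)
    for i, neighbors in adj.items():
        if state[i] == 'S':
            s = sum(1 for j in neighbors if state[j] == 'I')
            if s > 0 and rng.random() < 1 - (1 - lam * dt) ** s:
                new_state[i] = 'I'
        elif state[i] == 'I':
            if rng.random() < mu * dt:  # recovery is I -> R, never back to S
                new_state[i] = 'R'
    return new_state

rng = random.Random(0)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}     # toy triangle
state = {0: 'I', 1: 'S', 2: 'S'}            # one initially infected node
for _ in range(200):
    state = sir_step(adj, state, lam=0.8, mu=0.1, dt=0.1, rng=rng)
```

note that updating all nodes synchronously from the previous step's state (via the copied dictionary) keeps the per-step infection probability consistent with the formula above.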
cluster synchrony, which is related to work on "coupled-cell networks" [59], uses ideas from computational group theory to find synchronized sets of oscillators that are not synchronized with other sets of synchronized oscillators. many studies have also examined other types of states, such as "chimera states" [128], in which some oscillators behave coherently but others behave incoherently. (analogous phenomena sometimes occur in mathematics departments.) a ubiquitous example is coupled kuramoto oscillators on a network [6, 39, 149], which is perhaps the most common setting for exploring and developing new methods for studying coupled oscillators. (in principle, one can then build on these insights in studies of other oscillatory systems, such as in applications in neuroscience [7].) coupled kuramoto oscillators have been used for modeling numerous phenomena, including jetlag [104] and singing in frogs [126]. indeed, a "snowbird" (siam) conference on applied dynamical systems would not be complete without at least several dozen talks on the kuramoto model. in the kuramoto model, each node i has an associated phase θ_i(t) ∈ [0, 2π). in the case of "diffusive" coupling between the nodes, the dynamics of the ith node is governed by the equation dθ_i/dt = ω_i + ∑_{j=1}^{n} b_ij a_ij f_ij(θ_j − θ_i), (4) where one typically draws the natural frequency ω_i of node i from some distribution g(ω), the scalar a_ij is an adjacency-matrix entry of an unweighted network, b_ij is the coupling strength on oscillator i from oscillator j (so b_ij a_ij is an element of an adjacency matrix w of a weighted network), and f_ij(y) = sin(y) is the coupling function, which depends only on the phase difference between oscillators i and j because of the diffusive nature of the coupling. once one knows the natural frequencies ω_i, the model (4) is a deterministic dynamical system, although there have been studies of coupled kuramoto oscillators with additional stochastic terms [60].
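a forward-euler integration of the kuramoto model (4) is a few lines of code. the following sketch uses identical natural frequencies and an unweighted complete graph - illustrative choices under which the oscillators synchronize, so that the global order parameter r approaches 1:

```python
import math, cmath, random

# Euler-integration sketch of the Kuramoto model with diffusive coupling
# f(y) = sin(y); the network, frequencies, and step size are illustrative
def kuramoto_step(theta, omega, W, dt):
    """theta, omega: lists of length n; W[i][j] = b_ij * a_ij
    (entries of the weighted adjacency matrix)."""
    n = len(theta)
    dtheta = [
        omega[i] + sum(W[i][j] * math.sin(theta[j] - theta[i]) for j in range(n))
        for i in range(n)
    ]
    return [(theta[i] + dt * dtheta[i]) % (2 * math.pi) for i in range(n)]

rng = random.Random(1)
n = 5
theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
omega = [0.0] * n                                            # identical frequencies
W = [[0.0 if i == j else 1.0 for j in range(n)] for i in range(n)]  # complete graph
for _ in range(2000):
    theta = kuramoto_step(theta, omega, W, dt=0.01)

# global order parameter |(1/n) sum_j e^{i theta_j}| is near 1 at synchrony
r = abs(sum(cmath.exp(1j * t) for t in theta)) / n
```

drawing the ω_i from a distribution g(ω) with nonzero spread instead (as is standard) produces the familiar competition between coupling and frequency heterogeneity.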
traditional studies of (4) and its generalizations draw the natural frequencies from some distribution (e.g., a gaussian or a compactly supported distribution), but some studies of so-called "explosive synchronization" (in which there is an abrupt phase transition from incoherent oscillators to synchronized oscillators) have employed deterministic natural frequencies [16, 39]. the properties of the frequency distribution g(ω) have a significant effect on the dynamics of (4). important features of g(ω) include whether it has compact support or not, whether it is symmetric or asymmetric, and whether it is unimodal or not [149, 170]. the model (4) has been generalized in numerous ways. for example, researchers have considered a large variety of coupling functions f_ij (including ones that are not diffusive) and have incorporated an inertia term θ̈_i to yield a second-order kuramoto oscillator at each node [149]. the latter generalization is important for studies of coupled oscillators and synchronized dynamics in electric power grids [196]. another noteworthy direction is the analysis of the kuramoto model on "graphons" (see, e.g., [108]), an important type of structure that arises in a suitable limit of large networks. an increasingly prominent topic in network analysis is the examination of how multilayer network structures - multiple system components, multiple types of edges, co-occurrence and coupling of multiple dynamical processes, and so on - affect qualitative and quantitative dynamics [3, 34, 90]. for example, can certain types of multilayer structures induce unexpected instabilities or phase transitions in certain types of dynamical processes? there are two categories of dynamical processes on multilayer networks: (1) a single process can occur on a multilayer network; or (2) processes on different layers of a multilayer network can interact with each other [34].
an important example of the first category is a random walk, where the relative speeds and probabilities of steps within layers versus steps between layers affect the qualitative nature of the dynamics. this, in turn, affects methods (such as community detection [38, 80] ) that are based on random walks, as well as anything else in which the diffusion is relevant [22, 36] . two other examples of the first category are the spread of information on social media (for which there are multiple communication channels, such as facebook and twitter) and multimodal transportation systems [51] . for instance, a multilayer network structure can induce congestion even when a system without coupling between layers is decongested in each layer independently [1] . examples of the second category of dynamical process are interactions between multiple strains of a disease and interactions between the spread of disease and the spread of information [49, 50, 192] . many other examples have been studied [3] , including coupling between oscillator dynamics on one layer and a biased random walk on another layer (as a model for neuronal oscillations coupled to blood flow) [122] . numerous interesting phenomena can occur when dynamical systems, such as spreading processes, are coupled to each other [192] . for example, the spreading of one disease can facilitate infection by another [157] , and the spread of awareness about a disease can inhibit spread of the disease itself (e.g., if people stay home when they are sick) [61] . interacting spreading processes can also exhibit other fascinating dynamics, such as oscillations that are induced by multilayer network structures in a biological contagion with multiple modes of transmission [79] and novel types of phase transitions [34] . a major simplification in most work thus far on dynamical processes on multilayer networks is a tendency to focus on toy models. 
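the first category above - a single process on a multilayer network - is easy to illustrate with a random walk on a two-layer multiplex network. in the following sketch, the walker takes an intra-layer step with probability 1 − p or hops to its replica node in the other layer with probability p; the two toy layers and the value of p are illustrative choices:

```python
import random

# illustrative random walk on a two-layer multiplex network; the relative
# probability p of inter-layer steps shapes the walk's qualitative behavior
def multiplex_walk(layers, p, start, steps, rng):
    """layers[l] is an adjacency dict for layer l; the walker's state is
    the pair (layer, node). Returns visit counts per state."""
    layer, node = start
    visits = {}
    for _ in range(steps):
        if rng.random() < p:
            layer = 1 - layer                       # inter-layer step (to replica)
        else:
            node = rng.choice(layers[layer][node])  # intra-layer step
        visits[(layer, node)] = visits.get((layer, node), 0) + 1
    return visits

rng = random.Random(2)
layer0 = {0: [1], 1: [0, 2], 2: [1]}        # layer 0: a path graph
layer1 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # layer 1: a triangle
visits = multiplex_walk([layer0, layer1], p=0.2, start=(0, 0), steps=5000, rng=rng)
```

varying p interpolates between two decoupled walks (p → 0) and a walk that mixes rapidly across layers, which is the kind of dependence that random-walk-based methods (e.g., community detection) inherit.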
for example, a typical study of coupled spreading processes may consider a standard (e.g., sir) model on each layer, and it may draw the connectivity pattern of each layer from the same standard random-graph model (e.g., an erdős-rényi model or a configuration model). however, when studying dynamics on multilayer networks, it is particularly important in future work to incorporate heterogeneity in network structure and/or dynamical processes. for instance, diseases spread offline but information spreads both offline and online, so investigations of coupled information and disease spread ought to consider fundamentally different types of network structures for the two processes. network structures also affect the dynamics of pdes on networks [8, 31, 57, 77, 112]. interesting examples include a study of a burgers equation on graphs to investigate how network structure affects the propagation of shocks [112] and investigations of reaction-diffusion equations and turing patterns on networks [8, 94]. the latter studies exploit the rich theory of laplacian dynamics on graphs (and concomitant ideas from spectral graph theory) [107, 187] and examine the addition of nonlinear terms to laplacians on various types of networks (including multilayer ones). a mathematically oriented thread of research on pdes on networks has built on ideas from so-called "quantum graphs" [57, 96] to study wave propagation on networks through the analysis of "metric graphs". metric graphs differ from the usual "combinatorial graphs", which in other contexts are usually called simply "graphs". in metric graphs, in addition to nodes and edges, each edge e has a positive length l_e ∈ (0, ∞]. for many experimentally relevant scenarios (e.g., in models of circuits of quantum wires [195]), there is a natural embedding into space, but metric graphs that are not embedded in space are also appropriate for some applications.
as the nomenclature suggests, one can equip a metric graph with a natural metric. if a sequence {e_j}_{j=1}^{m} of edges forms a path, the length of the path is ∑_{j=1}^{m} l_{e_j}. the distance ρ(v_1, v_2) between two nodes, v_1 and v_2, is the minimum path length between them. we place coordinates along each edge, so we can compute a distance between points x_1 and x_2 on a metric graph even when those points are not located at nodes. traditionally, one assumes that the infinite ends (which one can construe as "leads" at infinity, as in scattering theory) of infinite edges have degree 1. it is also traditional to assume that there is always a positive distance between distinct nodes and that there are no finite-length paths with infinitely many edges. see [96] for further discussion. to study waves on metric graphs, one needs to define operators, such as the negative second derivative or more general schrödinger operators. this exploits the fact that there are coordinates for all points on the edges - not only at the nodes themselves, as in combinatorial graphs. when studying waves on metric graphs, it is also necessary to impose boundary conditions at the nodes [96]. many studies of wave propagation on metric graphs have considered generalizations of nonlinear wave equations, such as the cubic nonlinear schrödinger (nls) equation [123] and a nonlinear dirac equation [154]. the overwhelming majority of studies of metric graphs (with both linear and nonlinear waves) have focused on networks with a very small number of nodes, as even small networks yield very interesting dynamics. for example, marzuola and pelinovsky [106] analyzed symmetry-breaking and symmetry-preserving bifurcations of standing waves of the cubic nls on a dumbbell graph (with two rings attached to a central line segment and kirchhoff boundary conditions at the nodes). kairzhan et al. [85] studied the spectral stability of half-soliton standing waves of the cubic nls equation on balanced star graphs.
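for finite edge lengths, the node-to-node distance ρ(v_1, v_2) defined above is an ordinary weighted shortest-path problem, which dijkstra's algorithm solves directly; the small triangle graph below is an illustrative choice:

```python
import heapq

# sketch of computing the distance rho(v1, v2) on a metric graph with
# finite edge lengths via Dijkstra's algorithm; the toy graph is illustrative
def metric_distance(edges, source, target):
    """edges: dict mapping node -> list of (neighbor, edge_length)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, length in edges.get(u, []):
            nd = d + length
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float('inf')

edges = {
    'a': [('b', 1.5), ('c', 4.0)],
    'b': [('a', 1.5), ('c', 1.0)],
    'c': [('a', 4.0), ('b', 1.0)],
}
d = metric_distance(edges, 'a', 'c')  # min(4.0, 1.5 + 1.0) = 2.5
```

distances between points in the interiors of edges follow from the same computation after splitting the relevant edges at those points.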
sobirov et al. [168] studied scattering and transmission at nodes of sine-gordon solitons on networks (e.g., on a star graph and a small tree). a particularly interesting direction for future work is to study wave dynamics on large metric graphs. this will help extend investigations, as in odes and maps, of how network structures affect dynamics on networks to the realm of linear and nonlinear waves. one can readily formulate wave equations on large metric graphs by specifying relevant boundary conditions and rules at each junction. for example, joly et al. [82] recently examined wave propagation of the standard linear wave equation on fractal trees. because many natural real-life settings are spatially embedded (e.g., wave propagation in granular materials [101, 129] and traffic-flow patterns in cities), it will be particularly valuable to examine wave dynamics on (both synthetic and empirical) spatially-embedded networks [9] . therefore, i anticipate that it will be very insightful to undertake studies of wave dynamics on networks such as random geometric graphs, random neighborhood graphs, and other spatial structures. a key question in network analysis is how different types of network structure affect different types of dynamical processes [142] , and the ability to take a limit as model synthetic networks become infinitely large (i.e., a thermodynamic limit) is crucial for obtaining many key theoretical insights. dynamics of networks and dynamics on networks do not occur in isolation; instead, they are coupled to each other. researchers have studied the coevolution of network structure and the states of nodes and/or edges in the context of "adaptive networks" (which are also known as "coevolving networks") [66, 159] . 
whether it is sensible to study a dynamical process on a time-independent network, a temporal network with frozen (or no) node or edge states, or an adaptive network depends on the relative time scales of the dynamics of network structure and the states of nodes and/or edges of a network. see [142] for a brief discussion. models in the form of adaptive networks provide a promising mechanistic approach to simultaneously explain both structural features (e.g., degree distributions) and temporal features (e.g., burstiness) of empirical data [5]. incorporating adaptation into conventional models can produce extremely interesting and rich dynamics, such as the spontaneous development of extreme states in opinion models [160]. most studies of adaptive networks that include some analysis (i.e., that go beyond numerical computations) have employed rather artificial adaptation rules for adding, removing, and rewiring edges. this is helpful for mathematical tractability, but it is important to go beyond these limitations by considering more realistic types of adaptation and coupling between network structure (including multilayer structures, as in [12]) and the states of nodes and edges. when people are sick, they stay home from work or school. people also form and remove social connections (both online and offline) based on observed opinions and behaviors. to study these ideas using adaptive networks, researchers have coupled models of biological and social contagions with time-dependent networks [100, 142]. an early example of an adaptive network of disease spreading is the susceptible-infected (si) model in gross et al. [67]. in this model, susceptible nodes sometimes rewire their incident edges to "protect themselves". suppose that we have an n-node network with a constant number of undirected edges. each node is either susceptible (i.e., of type s) or infected (i.e., of type i).
at each time step, for each edge between nodes of different types (a so-called "discordant edge"), the susceptible node becomes infected with probability λ. for each discordant edge, with some probability κ, the incident susceptible node breaks the edge and rewires to some other susceptible node. this is a "rewire-to-same" mechanism, to use the language from some adaptive opinion models [40, 97]. (in this model, multi-edges and self-edges are not allowed.) during each time step, infected nodes can also recover to become susceptible again. gross et al. [67] studied how the rewiring probability affects the "basic reproductive number", which measures how many secondary infections on average occur for each primary infection [18, 89, 130]. this scalar quantity determines the size of a critical infection probability λ* to maintain a stable epidemic (as determined traditionally using linear stability analysis of an endemic state). a high rewiring rate can significantly increase λ* and thereby significantly reduce the prevalence of a contagion. although results like these are perhaps intuitively clear, other studies of contagions on adaptive networks have yielded potentially actionable (and arguably nonintuitive) insights. for example, scarpino et al. [161] demonstrated using an adaptive compartmental model (along with some empirical evidence) that the spread of a disease can accelerate when individuals with essential societal roles (e.g., health-care workers) become ill and are replaced with healthy individuals. another type of model with many interesting adaptive variants is opinion models [23, 142], especially in the form of generalizations of classical voter models [148]. voter dynamics were first considered in the 1970s by clifford and sudbury [29] as a model for species competition, and the dynamical process that they introduced was dubbed "the voter model" by holley and liggett shortly thereafter [69].
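the adaptive rewiring model of gross et al. described above can be sketched compactly. the sketch below preserves the model's constant edge count and its rewire-to-same rule; the data structures, the toy 4-cycle, and all parameter values are illustrative choices:

```python
import random

# toy sketch of the adaptive model of Gross et al.: on each discordant
# (S-I) edge, infection occurs with probability lam, or the susceptible
# endpoint rewires to another susceptible node with probability kap;
# infected nodes recover to susceptible with probability mu per step.
# All parameter values here are illustrative.
def adaptive_step(edges, state, lam, kap, mu, rng):
    edges = set(edges)
    for u, v in list(edges):
        if state[u] == state[v]:
            continue                       # edge is not discordant
        s, i = (u, v) if state[u] == 'S' else (v, u)
        if rng.random() < lam:
            state[s] = 'I'                 # infection along the edge
        elif rng.random() < kap:
            # rewire-to-same: s drops the edge and attaches to a random
            # susceptible node (no self- or multi-edges)
            candidates = [w for w in state
                          if state[w] == 'S' and w != s
                          and (s, w) not in edges and (w, s) not in edges]
            if candidates:
                edges.discard((u, v))
                edges.add((s, rng.choice(candidates)))
    for w in state:
        if state[w] == 'I' and rng.random() < mu:
            state[w] = 'S'                 # recovery back to susceptible
    return edges

rng = random.Random(3)
state = {0: 'I', 1: 'S', 2: 'S', 3: 'S'}
edges = {(0, 1), (1, 2), (2, 3), (3, 0)}
for _ in range(50):
    edges = adaptive_step(edges, state, lam=0.1, kap=0.3, mu=0.05, rng=rng)
```

increasing the rewiring probability κ in such simulations is the numerical counterpart of the stabilizing effect on λ* discussed above.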
voter dynamics are fun and are popular to study [148] , although it is questionable whether it is ever possible to genuinely construe voter models as models of voters [44] . holme and newman [71] undertook an early study of a rewire-to-same adaptive voter model. inspired by their research, durrett et al. [40] compared the dynamics from two different types of rewiring in an adaptive voter model. in each variant of their model, one considers an n-node network and supposes that each node is in one of two states. the network structure and the node states coevolve. pick an edge uniformly at random. if this edge is discordant, then with probability 1 − κ, one of its incident nodes adopts the opinion state of the other. otherwise, with complementary probability κ, a rewiring action occurs: one removes the discordant edge, and one of the associated nodes attaches to a new node either through a rewire-to-same mechanism (choosing uniformly at random among the nodes with the same opinion state) or through a "rewire-to-random" mechanism (choosing uniformly at random among all nodes). as with the adaptive si model in [67] , self-edges and multi-edges are not allowed. the models in [40] evolve until there are no discordant edges. there are several key questions. does the system reach a consensus (in which all nodes are in the same state)? if so, how long does it take to converge to consensus? if not, how many opinion clusters (each of which is a connected component, perhaps interpretable as an "echo chamber", of the final network) are there at steady state? how long does it take to reach this state? the answers and analysis are subtle; they depend on the initial network topology, the initial conditions, and the specific choice of rewiring rule. 
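a minimal simulation of the adaptive voter model with a rewire-to-random mechanism makes the absorbing condition (no discordant edges) concrete. the toy network, the parameter values, and the convention that the adopting endpoint is chosen uniformly are illustrative choices:

```python
import random

# minimal sketch of an adaptive voter model with rewire-to-random, in the
# spirit of Durrett et al.; the toy network and parameters are illustrative
def run_adaptive_voter(edges, opinion, kap, rng, max_iter=100000):
    edges = set(edges)
    for _ in range(max_iter):
        discordant = [e for e in edges if opinion[e[0]] != opinion[e[1]]]
        if not discordant:
            break                          # frozen state: no discordant edges
        u, v = rng.choice(discordant)
        if rng.random() < 1 - kap:
            # adoption: one endpoint (chosen uniformly) copies the other
            if rng.random() < 0.5:
                opinion[u] = opinion[v]
            else:
                opinion[v] = opinion[u]
        else:
            # rewire-to-random: u drops the edge and attaches to a
            # uniformly random node (no self- or multi-edges)
            candidates = [w for w in opinion if w != u
                          and (u, w) not in edges and (w, u) not in edges]
            if candidates:
                edges.discard((u, v))
                edges.add((u, rng.choice(candidates)))
    return edges, opinion

rng = random.Random(4)
opinion = {0: 0, 1: 0, 2: 1, 3: 1}
edges = {(0, 1), (1, 2), (2, 3), (3, 0)}
edges, opinion = run_adaptive_voter(edges, opinion, kap=0.3, rng=rng)
```

at the frozen state, each connected component is an opinion cluster; counting components of the final network answers the "how many echo chambers?" question numerically.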
as with other adaptive network models, researchers have developed some nonrigorous theory (e.g., using mean-field approximations and their generalizations) on adaptive voter models with simplistic rewiring schemes, but they have struggled to extend these ideas to models with more realistic rewiring schemes. there are very few mathematically rigorous results on adaptive voter models, although there do exist some, under various assumptions on initial network structure and edge density [10]. researchers have generalized adaptive voter models to consider more than two opinion states [163] and more general types of rewiring schemes [105]. as with other adaptive networks, analyzing adaptive opinion models with increasingly diverse types of rewiring schemes (ideally with a move towards increasing realism) is particularly important. in [97], yacoub kureh and i studied a variant of a voter model with nonlinear rewiring (where the probability that a node rewires or adopts is a function of how well it "fits in" within its neighborhood), including a "rewire-to-none" scheme to model unfriending and unfollowing in online social networks. it is also important to study adaptive opinion models with more realistic types of opinion dynamics. a promising example is adaptive generalizations of bounded-confidence models (see the introduction of [110] for a brief review of bounded-confidence models), which have continuous opinion states, with nodes interacting either with other nodes or with other entities (such as media [21]) whose opinion is sufficiently close to theirs. a recent numerical study examined an adaptive bounded-confidence model [19]; this is an important direction for future investigations. it is also interesting to examine how the adaptation of oscillators - including their intrinsic frequencies and/or the network structure that couples them to each other - affects the collective behavior (e.g., synchronization) of a network of oscillators [149].
such ideas are useful for exploring mechanistic models of learning in the brain (e.g., through adaptation of coupling between oscillators to produce a desired limit cycle [171]). one nice example is by skardal et al. [167], who examined an adaptive model of coupled kuramoto oscillators as a toy model of learning. first, we write the kuramoto system as dθ_i/dt = ω_i + ∑_{j=1}^{n} b_ij a_ij f_ij(θ_j − θ_i), (5) where f_ij is a 2π-periodic function of the phase difference between oscillators i and j. one way to incorporate adaptation is to define an "order parameter" r_i (which, in its traditional form, quantifies the amount of coherence of the coupled kuramoto oscillators [149]) for the ith oscillator by r_i(t) = ∑_{j=1}^{n} a_ij e^{iθ_j(t)} and to consider a dynamical system, (6), in which the coupling strengths b_ij adapt in response to the delayed order parameters z_i and their complex conjugates, where re(ζ) denotes the real part of a quantity ζ and im(ζ) denotes its imaginary part. in the model (6), λ_d denotes the largest positive eigenvalue of the adjacency matrix a, the variable z_i(t) is a time-delayed version of r_i with time parameter τ (with τ → 0 implying that z_i → r_i), and z_i* denotes the complex conjugate of z_i. one draws the frequencies ω_i from some distribution (e.g., a lorentz distribution, as in [167]), and we recall that b_ij is the coupling strength on oscillator i from oscillator j. the parameter T gives an adaptation time scale, and α ∈ r and β ∈ r are parameters (which one can adjust to study bifurcations). skardal et al. [167] interpreted scenarios with β > 0 as "hebbian" adaptation (see [27]) and scenarios with β < 0 as anti-hebbian adaptation, as they observed that oscillator synchrony is promoted when β > 0 and inhibited when β < 0. most studies of networks have focused on networks with pairwise connections, in which each edge (unless it is a self-edge, which connects a node to itself) connects exactly two nodes to each other. however, many interactions - such as playing games, coauthoring papers and other forms of collaboration, and horse races - often occur between three or more entities of a network.
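the node-level order parameter that drives such adaptive kuramoto models is cheap to compute. the sketch below assumes the standard local form r_i = ∑_j a_ij e^{iθ_j} (an assumption here; normalizations vary across papers) on a toy triangle with fully synchronized phases:

```python
import cmath

# sketch of the local order parameter r_i = sum_j a_ij * e^{i*theta_j}
# used in adaptive Kuramoto models; the small network is illustrative,
# and this unnormalized form is one common convention among several
def local_order_parameters(A, theta):
    n = len(theta)
    return [sum(A[i][j] * cmath.exp(1j * theta[j]) for j in range(n))
            for i in range(n)]

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # triangle (unweighted adjacency)
theta = [0.0, 0.0, 0.0]                # fully synchronized phases
r = local_order_parameters(A, theta)
# when all neighbors are in phase, |r_i| equals the degree of node i
```

the magnitude |r_i| measures how coherent node i's neighborhood is, which is exactly the signal that the adaptation of the coupling strengths b_ij responds to.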
to examine such situations, researchers have increasingly studied "higher-order" structures in networks, as they can exert a major influence on dynamical processes. perhaps the simplest way to account for higher-order structures in networks is to generalize from graphs to "hypergraphs" [121]. hypergraphs possess "hyperedges" that encode a connection among an arbitrary number of nodes, such as between all coauthors of a paper. this allows one to make important distinctions, such as between a k-clique (in which there are pairwise connections between each pair of nodes in a set of k nodes) and a hyperedge that connects all k of those nodes to each other, without the need for any pairwise connections. one way to study a hypergraph is as a "bipartite network", in which nodes of a given type can be adjacent only to nodes of another type. for example, a scientist can be adjacent to a paper that they have written [119], and a legislator can be adjacent to a committee on which they sit [144]. it is important to generalize ideas from graph theory to hypergraphs, such as by developing models of random hypergraphs [25, 26, 52]. another way to study higher-order structures in networks is to use "simplicial complexes" [53, 54, 127]. a simplicial complex is a space that is built from a union of points, edges, triangles, tetrahedra, and higher-dimensional polytopes (see fig. 1d). simplicial complexes approximate topological spaces and thereby capture some of their properties. a p-dimensional simplex (i.e., a p-simplex) is a p-dimensional polytope that is the convex hull of its p + 1 vertices (i.e., nodes). a simplicial complex k is a set of simplices such that (1) every face of a simplex from k is also in k and (2) the intersection of any two simplices σ_1, σ_2 ∈ k is a face of both σ_1 and σ_2. an increasing sequence k_1 ⊂ k_2 ⊂ · · · ⊂ k_l of simplicial complexes forms a filtered simplicial complex; each k_i is a subcomplex.
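the bipartite representation of a hypergraph mentioned above is straightforward to construct in code; the coauthorship toy data, the function name, and the pair-set encoding are illustrative choices:

```python
# sketch of representing a hypergraph as a bipartite network: nodes of one
# type (e.g., authors) are adjacent only to nodes of the other type (e.g.,
# papers). The toy data are illustrative.
def hypergraph_to_bipartite(hyperedges):
    """hyperedges: dict mapping hyperedge id -> set of node ids.
    Returns the bipartite adjacency as a set of (node, hyperedge) pairs."""
    return {(v, e) for e, members in hyperedges.items() for v in members}

# three coauthored papers; 'p3' is a genuine 3-node hyperedge, which is
# distinct from the 3-clique of pairwise links among its three authors
papers = {'p1': {'alice', 'bob'}, 'p2': {'bob', 'carol'},
          'p3': {'alice', 'bob', 'carol'}}
bip = hypergraph_to_bipartite(papers)
```

projecting this bipartite network onto the author side recovers only pairwise co-occurrence information, which is precisely the distinction (hyperedge versus clique) emphasized above.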
as discussed in [127] and references therein, one can examine the homology of each subcomplex. in studying the homology of a topological space, one computes topological invariants that quantify features of different dimensions [53] . one studies "persistent homology" (ph) of a filtered simplicial complex to quantify the topological structure of a data set (e.g., a point cloud) across multiple scales of such data. the goal of such "topological data analysis" (tda) is to measure the "shape" of data in the form of connected components, "holes" of various dimensionality, and so on [127] . from the perspective of network analysis, this yields insight into types of large-scale structure that complement traditional ones (such as community structure). see [178] for a friendly, nontechnical introduction to tda. a natural goal is to generalize ideas from network analysis to simplicial complexes. important efforts include generalizing configuration models of random graphs [48] to random simplicial complexes [15, 32] ; generalizing well-known network growth mechanisms, such as preferential attachment [13] ; and developing geometric notions, like curvature, for networks [156] . an important modeling issue when studying higher-order network data is the question of when it is more appropriate (or convenient) to use the formalisms of hypergraphs or simplicial complexes. the computation of ph has yielded insights on a diverse set of models and applications in network science and complex systems. examples include granular materials [95, 129] , functional brain networks [54, 165] , quantification of "political islands" in voting data [42] , percolation theory [169] , contagion dynamics [174] , swarming and collective behavior [179] , chaotic flows in odes and pdes [197] , diurnal cycles in tropical cyclones [182] , and mathematics education [28] . see the introduction to [127] for pointers to numerous other applications. 
most uses of simplicial complexes in network science and complex systems have focused on tda (especially the computation of ph) and its applications [127, 131, 155]. in this chapter, however, i focus instead on a somewhat different (and increasingly popular) topic: the generalization of dynamical processes on and of networks to simplicial complexes to study the effects of higher-order interactions on network dynamics. simplicial structures influence the collective behavior of the dynamics of coupled entities on networks (e.g., they can lead to novel bifurcations and phase transitions), and they provide a natural approach to analyze p-entity interaction terms, including for p ≥ 3, in dynamical systems. existing work includes research on linear diffusion dynamics (in the form of hodge laplacians, such as in [162]) and generalizations of a variety of other popular types of dynamical processes on networks. given the ubiquitous study of coupled kuramoto oscillators [149], a sensible starting point for exploring the impact of simultaneous coupling of three or more oscillators on a system's qualitative dynamics is to study a generalized kuramoto model. for example, to include both two-entity ("two-body") and three-entity interactions in a model of coupled oscillators on networks, we write [172] dx_i/dt = f_i(x_i) + ∑_{j=1}^{n} ∑_{k=1}^{n} w_ijk(x_i, x_j, x_k), where f_i describes the dynamics of oscillator i and the three-oscillator interaction term w_ijk includes two-oscillator interaction terms w_ij(x_i, x_j) as a special case. an example of n coupled kuramoto oscillators with three-term interactions is a system [172] with pairwise sine couplings (with coefficients a_ij and b_ij and phase lags α_1ij and α_2ij) and triplet sine couplings (with coefficients c_ijk and phase lags α_3ijk and α_4ijk), where we draw the coefficients a_ij, b_ij, c_ijk, α_1ij, α_2ij, α_3ijk, α_4ijk from various probability distributions. including three-body interactions leads to a large variety of intricate dynamics, and i anticipate that incorporating the formalism of simplicial complexes will be very helpful for categorizing the possible dynamics.
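as a concrete (and deliberately simplified) illustration of adding a three-body term to kuramoto dynamics, the sketch below uses the commonly studied triplet coupling sin(θ_j + θ_k − 2θ_i) with all-to-all structure, identical frequencies, and global coupling strengths K2 and K3; these are illustrative choices rather than the specific model of any one paper:

```python
import math, random

# hedged sketch of Kuramoto dynamics with an added three-body coupling
# term of the commonly studied form sin(theta_j + theta_k - 2*theta_i);
# the all-to-all structure and the strengths K2, K3 are illustrative
def step_three_body(theta, omega, K2, K3, dt):
    n = len(theta)
    new = []
    for i in range(n):
        pair = sum(math.sin(theta[j] - theta[i]) for j in range(n))
        triple = sum(math.sin(theta[j] + theta[k] - 2 * theta[i])
                     for j in range(n) for k in range(n))
        # normalize each interaction order by its number of terms
        new.append(theta[i] + dt * (omega[i] + K2 * pair / n + K3 * triple / n**2))
    return new

rng = random.Random(5)
n = 4
theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
omega = [0.0] * n
for _ in range(1000):
    theta = step_three_body(theta, omega, K2=1.0, K3=0.5, dt=0.01)
```

even this small system hints at the richness noted above: depending on the sign and size of K3, triplet couplings of this form are known to support multistable clustered states rather than a single synchronized one.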
in the last few years, several other researchers have also studied kuramoto models with three-body interactions [92, 93, 166] . a recent study [166] , for example, discovered a continuum of abrupt desynchronization transitions with no counterpart in abrupt synchronization transitions. there have been mathematical studies of coupled oscillators with interactions of three or more entities using methods such as normal-form theory [14] and coupled-cell networks [59] . an important point, as one can see in the above discussion (which does not employ the mathematical formalism of simplicial complexes), is that one does not necessarily need to explicitly use the language of simplicial complexes to study interactions between three or more entities in dynamical systems. nevertheless, i anticipate that explicitly incorporating the formalism of simplicial complexes will be useful both for studying coupled oscillators on networks and for other dynamical systems. in upcoming studies, it will be important to determine when this formalism helps illuminate the dynamics of multi-entity interactions in dynamical systems and when simpler approaches suffice. several recent papers have generalized models of social dynamics by incorporating higher-order interactions [75, 76, 118, 137] . for example, perhaps somebody's opinion is influenced by a group discussion of three or more people, so it is relevant to consider opinion updates that are based on higher-order interactions. some of these papers use some of the terminology of simplicial complexes, but it is mostly unclear (except perhaps for [75] ) how the models in them take advantage of the associated mathematical formalism, so arguably it often may be unnecessary to use such language. nevertheless, these models are very interesting and provide promising avenues for further research. petri and barrat [137] generalized activity-driven models to simplicial complexes. 
such a simplicial activity-driven (sad) model generates time-dependent simplicial complexes, on which it is desirable to study dynamical processes (see section 4), such as opinion dynamics, social contagions, and biological contagions. the simplest version of the sad model is defined as follows.
• each node i has an activity rate a_i that we draw independently from a distribution f(x).
• at each discrete time step (of length ∆t), we start with n isolated nodes. each node i is active with a probability of a_i ∆t, independently of all other nodes. if it is active, it creates a (p − 1)-simplex (forming, in network terms, a clique of p nodes) with p − 1 other nodes that we choose uniformly and independently at random (without replacement). one can either use a fixed value of p or draw p from some probability distribution.
• at the next time step, we delete all edges, so all interactions have a constant duration. we then generate new interactions from scratch.
this version of the sad model is markovian, and it is desirable to generalize it in various ways (e.g., by incorporating memory or community structure). iacopini et al. [76] recently developed a simplicial contagion model that generalizes an si process on graphs. consider a simplicial complex k with n nodes, and associate each node i with a state x_i(t) ∈ {0, 1} at time t. if x_i(t) = 0, node i is part of the susceptible class s; if x_i(t) = 1, it is part of the infected class i. the density of infected nodes at time t is ρ(t) = (1/n) ∑_{i=1}^{n} x_i(t). suppose that there are D parameters β_1, . . . , β_D (with D ∈ {1, . . . , n − 1}), where β_d represents the probability per unit time that a susceptible node i that participates in a d-dimensional simplex σ is infected from each of the faces of σ, under the condition that all of the other nodes of the face are infected. that is, β_1 is the probability per unit time that node i is infected by an adjacent node j via the edge (i, j).
similarly, β_2 is the probability per unit time that node i is infected via the 2-simplex (i, j, k) in which both j and k are infected, and so on. the recovery dynamics, in which an infected node i becomes susceptible again, proceed as in the sis model that i discussed in section 4.2. one can envision numerous interesting generalizations of this model (e.g., ones that are inspired by ideas that have been investigated in contagion models on graphs). the study of networks is one of the most exciting and rapidly expanding areas of mathematics, and it touches on myriad other disciplines in both its methodology and its applications. network analysis is increasingly prominent in numerous fields of scholarship (both theoretical and applied), it interacts very closely with data science, and it is important for a wealth of applications. my focus in this chapter has been a forward-looking presentation of ideas in network analysis. my choices of which ideas to discuss reflect their connections to dynamics and nonlinearity, although i have also mentioned a few other burgeoning areas of network analysis in passing. through its exciting combination of graph theory, dynamical systems, statistical mechanics, probability, linear algebra, scientific computation, data analysis, and many other subjects - and through a comparable diversity of applications across the sciences, engineering, and the humanities - the mathematics and science of networks have plenty to offer researchers for many years.
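the simplicial contagion dynamics described above can be sketched in a few lines of code. this is a minimal illustration, not the reference implementation of [76]: the synchronous update order, the restriction to d ≤ 2, and all parameter names (beta1 for edges, beta2 for 2-simplices, a recovery probability mu) are illustrative assumptions.

```python
import random

def simplicial_contagion(n, edges, triangles, beta1, beta2, mu, seeds, T, rng):
    """Synchronous-update sketch of a simplicial contagion (SIS-type).

    x[i] = 1 means node i is infected, 0 means susceptible.
    beta1: infection probability per step along an edge (1-simplex).
    beta2: infection probability per step via a 2-simplex whose other
           two members are both infected.
    mu:    probability per step that an infected node becomes susceptible.
    Returns the trajectory of the infected density rho(t)."""
    x = [0] * n
    for s in seeds:
        x[s] = 1
    rho = [sum(x) / n]
    for _ in range(T):
        new_x = list(x)
        # edge (1-simplex) infections
        for i, j in edges:
            if x[i] == 0 and x[j] == 1 and rng.random() < beta1:
                new_x[i] = 1
            if x[j] == 0 and x[i] == 1 and rng.random() < beta1:
                new_x[j] = 1
        # 2-simplex infections: a susceptible member is exposed only when
        # both other members of the triangle are infected
        for tri in triangles:
            for a, b, c in ((tri[0], tri[1], tri[2]),
                            (tri[1], tri[0], tri[2]),
                            (tri[2], tri[0], tri[1])):
                if x[a] == 0 and x[b] == 1 and x[c] == 1 and rng.random() < beta2:
                    new_x[a] = 1
        # recovery back to the susceptible class
        for i in range(n):
            if x[i] == 1 and rng.random() < mu:
                new_x[i] = 0
        x = new_x
        rho.append(sum(x) / n)
    return rho
```

running this on a small complex (a triangle plus a pendant node, with the triangle also acting as a 2-simplex) shows how the β_2 channel accelerates spread relative to a purely pairwise sis process with the same β_1.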
congestion induced by the structure of multiplex networks tie-decay temporal networks in continuous time and eigenvector-based centralities multilayer networks in a nutshell temporal and structural heterogeneities emerging in adaptive temporal networks synchronization in complex networks mathematical frameworks for oscillatory network dynamics in neuroscience turing patterns in multiplex networks morphogenesis of spatial networks evolving voter model on dense random graphs generative benchmark models for mesoscale structure in multilayer networks birth and stabilization of phase clusters by multiplexing of adaptive networks network geometry with flavor: from complexity to quantum geometry chaos in generically coupled phase oscillator networks with nonpairwise interactions topology of random geometric complexes: a survey explosive transitions in complex networks' structure and dynamics: percolation and synchronization factoring and weighting approaches to clique identification mathematical models in population biology and epidemiology how does active participation affect consensus: adaptive network model of opinion dynamics and influence maximizing rewiring anatomy of a large-scale hypertextual web search engine a model for the influence of media on the ideology of content in online social networks frequency-based brain networks: from a multiplex network to a full multilayer description statistical physics of social dynamics bootstrap percolation on a bethe lattice configuration models of random hypergraphs annotated hypergraphs: models and applications hebbian learning architecture and evolution of semantic networks in mathematics texts a model for spatial conflict reaction-diffusion processes and metapopulation models in heterogeneous networks multiple-scale theory of topology-driven patterns on directed networks generalized network structures: the configuration model and the canonical ensemble of simplicial complexes structure
and dynamics of core/periphery networks the physics of spreading processes in multilayer networks mathematical formulation of multilayer networks navigability of interconnected networks under random failures identifying modular flows on multilayer networks reveals highly overlapping organization in interconnected systems explosive phenomena in complex networks graph fission in an evolving voter model a practical guide to stochastic simulations of reaction-diffusion processes persistent homology of geospatial data: a case study with voting limitations of discrete-time approaches to continuous-time contagion dynamics is the voter model a model for voters? the use of multilayer network analysis in animal behaviour on eigenvector-like centralities for temporal networks: discrete vs. continuous time scales community detection in networks: a user guide configuring random graph models with fixed degree sequences nine challenges in incorporating the dynamics of behaviour in infectious diseases models modelling the influence of human behaviour on the spread of infectious diseases: a review anatomy and efficiency of urban multimodal mobility random hypergraphs and their applications elementary applied topology two's company, three (or more) is a simplex binary-state dynamics on complex networks: pair approximation and beyond quantum graphs: applications to quantum chaos and universal spectral statistics the structural virality of online diffusion patterns of synchrony in coupled cell networks with multiple arrows finite-size effects in a stochastic kuramoto model dynamical interplay between awareness and epidemic spreading in multiplex networks threshold models of collective behavior on the critical behavior of the general epidemic process and dynamical percolation a matrix iteration for dynamic network summaries a dynamical systems view of network centrality adaptive coevolutionary networks: a review epidemic dynamics on an adaptive network pathogen mutation modeled by 
competition between site and bond percolation ergodic theorems for weakly interacting infinite systems and the voter model modern temporal network theory: a colloquium nonequilibrium phase transition in the coevolution of networks and opinions temporal networks temporal networks temporal network theory an adaptive voter model on simplicial complexes simplicial models of social contagion turing instability in reaction-diffusion models on complex networks games on networks the large graph limit of a stochastic epidemic model on a dynamic multilayer network a local perspective on community structure in multilayer networks structure of growing social networks wave propagation in fractal trees synergistic effects in threshold models on networks hipsters on networks: how a minority group of individuals can lead to an antiestablishment majority drift of spectrally stable shifted states on star graphs maximizing the spread of influence through a social network second look at the spread of epidemics on networks centrality prediction in dynamic human contact networks mathematics of epidemics on networks multilayer networks authoritative sources in a hyperlinked environment dynamics of multifrequency oscillator communities finite-size-induced transitions to synchrony in oscillator ensembles with nonlinear global coupling pattern formation in multiplex networks quantifying force networks in particulate systems quantum graphs: i.
some basic structures fitting in and breaking up: a nonlinear version of coevolving voter models from networks to optimal higher-order models of complex systems hawkes processes complex spreading phenomena in social systems: influence and contagion in real-world social networks wave mitigation in ordered networks of granular chains centrality metric for dynamic networks control principles of complex networks resynchronization of circadian oscillators and the east-west asymmetry of jet-lag transitivity reinforcement in the coevolving voter model ground state on the dumbbell graph random walks and diffusion on networks the nonlinear heat equation on dense graphs and graph limits multi-stage complex contagions opinion formation and distribution in a bounded-confidence model on various networks network motifs: simple building blocks of complex networks portrait of political polarization six susceptible-infected-susceptible models on scale-free networks a network-based dynamical ranking system for competitive sports community structure in time-dependent, multiscale, and multiplex networks multi-body interactions and non-linear consensus dynamics on networked systems scientific collaboration networks. i. network construction and fundamental results network structure from rich but noisy data collective phenomena emerging from the interactions between dynamical processes in multiplex networks nonlinear schrödinger equation on graphs: recent results and open problems complex contagions with timers a theory of the critical mass. i. 
interdependence, group heterogeneity, and the production of collective action interaction mechanisms quantified from dynamical features of frog choruses a roadmap for the computation of persistent homology chimera states: coexistence of coherence and incoherence in networks of coupled oscillators network analysis of particles and grains epidemic processes in complex networks topological analysis of data master stability functions for synchronized coupled systems cluster synchronization and isolated desynchronization in complex networks with symmetries bayesian stochastic blockmodeling modelling sequences and temporal networks with dynamic community structures activity driven modeling of time varying networks simplicial activity driven model the multilayer nature of ecological networks network analysis and modelling: special issue of dynamical systems on networks: a tutorial the role of network analysis in industrial and applied mathematics a network analysis of committees in the u.s. house of representatives communities in networks spectral centrality measures in temporal networks reality inspired voter models: a mini-review the kuramoto model in complex networks core-periphery structure in networks (revisited) dynamic pagerank using evolving teleportation memory in network flows and its effects on spreading dynamics and community detection recent advances in percolation theory and its applications dynamics of dirac solitons in networks simplicial complexes and complex systems comparative analysis of two discretizations of ricci curvature for complex networks dynamics of interacting diseases null models for community detection in spatially embedded, temporal networks modeling complex systems with adaptive networks social diffusion and global drift on networks the effect of a prudent adaptive behaviour on disease transmission random walks on simplicial complexes and the normalized hodge 1-laplacian multiopinion coevolving voter model with infinitely many phase 
transitions the architecture of complexity the importance of the whole: topological data analysis for the network neuroscientist abrupt desynchronization and extensive multistability in globally coupled oscillator simplexes complex macroscopic behavior in systems of phase oscillators with adaptive coupling sine-gordon solitons in networks: scattering and transmission at vertices topological data analysis of continuum percolation with disks from kuramoto to crawford: exploring the onset of synchronization in populations of coupled oscillators motor primitives in space and time via targeted gain modulation in recurrent cortical networks multistable attractors in a network of phase oscillators with three-body interactions analysing information flows and key mediators through temporal centrality metrics topological data analysis of contagion maps for examining spreading processes on networks eigenvector-based centrality measures for temporal networks supracentrality analysis of temporal networks with directed interlayer coupling tunable eigenvector-based centralities for multiplex and temporal networks topological data analysis: one applied mathematician's heartwarming story of struggle, triumph, and ultimately, more struggle topological data analysis of biological aggregation models partitioning signed networks on analytical approaches to epidemics on networks using persistent homology to quantify a diurnal cycle in hurricane felix resolution limits for detecting community changes in multilayer networks analytical computation of the epidemic threshold on temporal networks epidemic threshold in continuous-time evolving networks network models of the diffusion of innovations graph spectra for complex networks non-markovian infection spread dramatically alters the susceptible-infected-susceptible epidemic threshold in networks temporal gillespie algorithm: fast simulation of contagion processes on time-varying networks forecasting elections using compartmental models of
infection ranking scientific publications using a model of network traffic coupled disease-behavior dynamics on complex networks: a review social network analysis: methods and applications a simple model of global cascades on random networks braess's paradox in oscillator networks, desynchronization and power outage inferring symbolic dynamics of chaotic flows from persistence continuous-time discrete-distribution theory for activity-driven networks an analytical framework for the study of epidemic models on activity-driven networks modeling memory effects in activity-driven networks

key: cord-281177-2eycqf8o
authors: robertson, colin; nelson, trisalyn a.; macnab, ying c.; lawson, andrew b.
title: review of methods for space–time disease surveillance
date: 2010-02-20
journal: spat spatiotemporal epidemiol
doi: 10.1016/j.sste.2009.12.001
sha:
doc_id: 281177
cord_uid: 2eycqf8o

a review of some methods for analysis of space–time disease surveillance data is presented. increasingly, surveillance systems are capturing spatial and temporal data on disease and health outcomes in a variety of public health contexts. a vast and growing suite of methods exists for detection of outbreaks and trends in surveillance data, and the selection of appropriate methods in a given surveillance context is not always clear. while most reviews of methods focus on algorithm performance, in practice, a variety of factors determine what methods are appropriate for surveillance. in this review, we focus on the role of contextual factors such as scale, scope, surveillance objective, disease characteristics, and technical issues in relation to commonly used approaches to surveillance. methods are classified as testing-based or model-based approaches.
reviewing methods in the context of factors other than algorithm performance highlights important aspects of implementing and selecting appropriate disease surveillance methods. early detection of unusual health events can enable coordinated response and control activities such as travel restrictions, movement bans on animals, and distribution of prophylactics to susceptible members of the population. our experience with severe acute respiratory syndrome (sars), which emerged in southern china in late 2002 and spread to over 30 countries in 8 months, indicates the importance of early detection (banos and lacasa, 2007) . disease surveillance is the principal tool used by the public health community to understand and manage the spread of diseases, and is defined by the world health organization as the ongoing systematic collection, collation, analysis and interpretation of data and dissemination of information in order for action to be taken (world health organization, 2007) . surveillance systems serve a variety of public health functions (e.g., outbreak detection, control planning) by integrating data representing human and/or animal health with statistical methods (diggle, 2003) , visualization tools (moore et al., 2008) , and increasingly, linkage with other geographic datasets within a gis (odiit et al., 2006) . surveillance systems can be designed to meet a number of public health objectives and each system has different requirements in terms of data, methodology and implementation. outbreak detection is the intended function of many surveillance systems. in syndromic surveillance systems, early-warning signals are provided by analysis of pre-diagnostic data that may be indicative of people's care-seeking behaviour during the early stages of an outbreak. in contrast, systems designed to monitor food and water-borne (e.g., cholera) pathogens are designed for case detection, where one case may trigger a response from public health workers. 
similarly, where eradication of a disease in an area is a public health objective, surveillance may be designed primarily for case detection. alternatively, where a target disease is endemic to an area, perhaps with seasonal variation in incidence, such as rabies, monitoring space-time trends may be the primary surveillance objective (childs et al., 2000). surveillance systems differ with respect to a number of qualities which we term contextual factors. for evaluation of surveillance systems, this is well known, as the evaluative framework set out by the centers for disease control and prevention (cdc) encompasses assessment of simplicity, flexibility, data quality, acceptability, sensitivity, predictive value positive, representativeness, timeliness, and stability (buehler et al., 2004). selection of appropriate methods for space-time disease surveillance should consider system-specific factors indicative of the context in which they will be used (table 1). these factors are summarized in table 1, and are the axes along which we will review methods for space-time disease surveillance. there has been rapid expansion in the development of automated disease surveillance systems. following the 2001 bioterrorism attacks in the united states, there was expanded interest and funding for the development of electronic surveillance networks capable of detecting a bioterrorist attack. many of these were designed to monitor data that precede diagnoses of a disease (i.e., syndromic surveillance). by may 2003 there were an estimated 100 syndromic surveillance systems in development throughout the us (buehler et al., 2003). due to the noisy nature of syndromic data, these systems rely heavily on advanced statistical methods for anomaly detection.
as data being monitored in syndromic systems precede diagnoses, they contain a signal that is further removed from the pathogen than in traditional disease surveillance, so in addition to having potential for early warning, there is also greater risk of false alarms (i.e., mistakenly signaling an outbreak) (stoto et al., 2004). one example is a national surveillance system called biosense developed by the cdc in the united states. biosense is designed to support early detection and situational awareness for bioterrorism attacks and other events of public health concern (bradley et al., 2005). data sources used in biosense include veterans affairs and department of defense facilities, private hospitals, national laboratories, and state surveillance and healthcare systems. the broad mandate and national scope of the system necessitated the use of general statistical methods insensitive to widely varying types, quality, consistency and volume of data. biosense uses two statistical methods. the first is a generalized linear mixed model, which estimates counts of syndrome cases based on location, day of the week, and effects due to seasonal variation and holidays; counts are estimated weekly for each syndrome-location combination. the second is a temporal approach computed for each syndrome under surveillance: a cumulative sum of counts in which events are flagged as unusual if the observed count is more than two standard deviations above the moving average. the selection of surveillance methods in biosense considered factors associated with heterogeneity of data sources and data volume, among others. another example is provided by a state-level disease surveillance system developed for massachusetts called the automated epidemiological geotemporal integrated surveillance (aegis) system, where both time-series modelling and spatial and space-time scan statistics are used (reis et al., 2007).
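the moving-average rule used for temporal surveillance in biosense (flagging a count more than two standard deviations above the moving average) can be sketched as follows. the window length and the exact baseline convention (trailing window, sample standard deviation) are illustrative assumptions rather than biosense's documented configuration.

```python
from statistics import mean, stdev

def flag_anomalies(counts, window=7, z=2.0):
    """Flag day t if its count exceeds the trailing moving average of the
    previous `window` days by more than z standard deviations (computed
    over that same window)."""
    alarms = []
    for t in range(window, len(counts)):
        baseline = counts[t - window:t]
        mu, sigma = mean(baseline), stdev(baseline)
        if counts[t] > mu + z * sigma:
            alarms.append(t)
    return alarms
```

for example, a week of counts near 10 followed by a count of 30 yields one alarm on the spike day, while a flat series yields none.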
the modular design of the system allowed for 'plug-in' capacity so that functionality already implemented in other software (i.e., satscan) could be leveraged. in aegis, daily visit data from 12 emergency department facilities are collected and analyzed. the reduced data volume and greater standardization enable more advanced space-time methods to be used as well as tighter integration with the system's communication and alerting functions (reis et al., 2007). decisions on method selection and utilization are based on a variety of factors, yet most reviews of statistical methods for surveillance data compare and describe algorithms from a purely statistical or computational perspective (e.g., buckeridge et al., 2005; sonesson and bock, 2003). the selection of statistical approaches to surveillance for implementation as part of a national surveillance system is greatly impacted by design constraints due to scalability, data quality and data volume, whereas the use of surveillance data for a standalone analysis by a local public health worker may be more impacted by software availability, learning curve, and interpretability. selection of appropriate statistical methods is key to enabling a surveillance system to meet its objectives. a frequently cited concern of surveillance systems is how to evaluate whether they are meeting their objectives (reingold, 2003; sosin and dethomasis, 2004). a framework for evaluation developed by the cdc considers outbreak detection a function of timeliness, validity, and data quality (buehler et al., 2004). the degree to which these factors contribute to system effectiveness may vary for different surveillance systems, especially where objectives and system experiences differ. for example, newly developed systems in developing countries may place a greater emphasis on evaluating data quality and representativeness, as little is known about the features of the data streams at early stages of implementation (lescano et al., 2008).

table 1. contextual factors for evaluation of methods for space-time disease surveillance.
scale: the spatial and temporal extent of the system (e.g., local/regional/national/international).
scope: the intended target of the system (e.g., single disease/multiple diseases, single host/multiple hosts, known pathogens/unknown pathogens).
function: the objective(s) of the system (outbreak detection, outbreak characterization, outbreak control, case detection, situational awareness (mandl et al., 2004; buehler et al., 2004), biosecurity and preparedness (fearnley, 2008)).
disease characteristics: is the pathogen infectious? is this a chronic disease? how does it spread? what is known about the epidemiology of the pathogen?
technical: the level of technological sophistication in the design of the system and its users (data type and quality, algorithm performance, computing infrastructure and/or reliability, user expertise).

algorithm performance is usually measured by sensitivity, specificity and timeliness. sensitivity is the probability of an alarm given an outbreak, and specificity is the probability of no alarm when there is no outbreak. timeliness is measured in number of time units to detection, and has been a focus of systems developed for early outbreak detection (wagner et al., 2001). the importance of each of these measures of performance needs to be evaluated in light of the system's contextual factors outlined in table 1. our goal in this review of approaches to space-time disease surveillance is to synthesize major surveillance methods in a way that will focus on the feasibility of implementation and highlight contrasts between different methods. first, we aim to place methods in the context of some key aspects of practical implementation. second, we aim to highlight how methods of space-time disease surveillance relate to different surveillance contexts.
disease surveillance serves a number of public health functions under varying scenarios, and methods need to be tailored to particular contexts. finally, we provide guidance to public health practitioners in understanding methods of space-time disease surveillance. we limit our focus to methods that use data encoded with both spatial and temporal information. this paper is organized as follows. the next section describes space-time disease surveillance. following is a description of different statistical approaches to space-time disease surveillance with respect to the contextual factors outlined in table 1. we conclude with a summary and brief discussion of our review. methods for space-time disease surveillance can address a surveillance objective in a variety of ways. most methods assume a study area made up of smaller, nonoverlapping sub-regions where cases of disease are being monitored. the variable under surveillance is the count of the number of cases. in retrospective analysis, the data are fixed and methods are used to determine whether an outbreak occurred during the study period, or to characterize the spatial-temporal trends in disease over the course of the study period (marshall, 1991). in the prospective scenario, the objective is to determine whether any single sub-region or collection of sub-regions is currently undergoing an outbreak, and analysis occurs in an automated, sequential fashion as data accumulate over time. prospective methods require special consideration, as the data do not form a fixed sample from which to make inferences (sonesson and bock, 2003). parallel surveillance methodologies compute a test statistic separately for each sub-region and signal an alarm if any of the sub-regions is significantly anomalous (fig. 1a). in vector accumulation methods, the test statistics from a parallel surveillance setting are combined to form one general alarm statistic (fig. 1b).
conversely, a scalar accumulation approach computes one statistic over all sub-regions for each time period (frisen and sonesson, 2005) (fig. 1c). for example, rogerson (1997) used the tango (1995) statistic to monitor changes in spatial point patterns. statistical tests in space-time disease surveillance generally seek to determine whether disease incidence in a spatially and temporally defined subset is unusual compared to the incidence in the study region as a whole. thus, this class of methods is designed to detect clusters of disease in space and time, and suits surveillance systems designed for outbreak detection. most spatial cluster detection methods, such as the geographical analysis machine (openshaw et al., 1987), density estimation (bithell, 1990; lawson and williams, 1993), turnbull's method (turnbull et al., 1990), the besag and newell (1991) test, spatial autocorrelation methods such as the gi* (getis and ord, 1992) and lisas (anselin, 1995), and the spatial scan statistic (kulldorff and nagarwalla, 1995), are types of statistical tests. the development of methods for space-time cluster detection naturally evolved from these purely spatial methods. we can stratify methods in the statistical test class into three types: tests for space-time interaction, cumulative sum methods, and scan statistics. space-time interaction of disease indicates that the cases cluster such that nearby cases in space occur at about the same time. the form of the null hypothesis is usually conditioned on population, and can factor in risk covariates such as age, occupation, and ethnicity. detecting the presence of space-time interaction can be a step towards determining a possible infectious etiology for new or poorly understood diseases (aldstadt, 2007).
additionally, non-infectious diseases exhibiting space-time interaction may suggest the presence of an additional causative agent, such as a point source of contamination and/or pollution or an underlying environmental variable. these methods require fixed samples of space-time data representing cases of disease. all tests for space-time interaction consider the number of cases of disease that are related in space-time, and compare this to an expectation under a null hypothesis of no interaction (kulldorff and hjalmars, 1999). the knox test (1964) uses a simple test statistic which is the number of case pairs close both in space and in time. this count is compared to the null expectation conditional on the number of pairs close only in space, and the number of pairs close only in time; i.e., the times of occurrence of the cases are independent of case location. a major shortcoming of the knox (1964) method is that the definition of "closeness" is arbitrary. mantel's (1967) test addresses this by summing across all possible space-time pairs, while diggle et al. (1995) identify clustering at discrete distance bands in the space-time k function. for infectious diseases, it is likely that near space-time pairs are of greater importance, so mantel suggests a reciprocal transformation such that distant pairs are weighted less than near pairs. the mantel test can in fact be used to test for association between any two distance matrices, and is often used by ecologists to test for interaction between space and another distance variable such as genetic similarity (legendre and fortin, 1989). the reciprocal transformation used in the mantel statistic assumes a distance decay effect. while this may be appropriate for infectious diseases, for non-infectious diseases or diseases about which little is known, this assumed functional form of disease clustering may be inappropriate.
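the knox statistic described above, the number of case pairs close in both space and time, can be sketched with a permutation test. the critical distances, the permutation scheme (shuffling case times over fixed locations), and the p-value convention are illustrative choices, not the only ones in use.

```python
import random

def knox_test(cases, s_crit, t_crit, n_perm=999, rng=None):
    """Knox statistic for a list of cases [((x, y), t), ...]: count pairs
    within s_crit in space AND within t_crit in time. Significance is
    assessed by permuting case times over the fixed locations."""
    rng = rng or random.Random(0)
    n = len(cases)

    def stat(times):
        count = 0
        for i in range(n):
            for j in range(i + 1, n):
                (xi, yi), (xj, yj) = cases[i][0], cases[j][0]
                close_s = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= s_crit
                close_t = abs(times[i] - times[j]) <= t_crit
                if close_s and close_t:
                    count += 1
        return count

    times = [t for _, t in cases]
    observed = stat(times)
    # Monte Carlo p-value: proportion of permuted datasets at least as extreme
    exceed = sum(stat(rng.sample(times, n)) >= observed for _ in range(n_perm))
    p_value = (1 + exceed) / (1 + n_perm)
    return observed, p_value
```

note that the choice of s_crit and t_crit is exactly the arbitrary "closeness" decision criticized in the text.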
a different approach is taken by jacquez (1996), where relations in space and time are defined by a nearest neighbour relation rather than by distance. here, the test statistic is defined by the number of case pairs that are k nearest neighbours in both space and time. when space-time interaction is present, the test statistic is large. another method for testing an infectious etiology hypothesis, given by pike and smith (1974), assesses clustering of cases relative to a control disease, though selection of appropriate controls can be difficult. the scale of the disease surveillance context can impact the selection of space-time interaction tests because these tests are sensitive to changes in the underlying population at risk (population shift bias). therefore, large temporal scales will be more likely to exhibit changes in population structure and introduce population shift bias. an unbiased version of the knox test given by kulldorff and hjalmars (1999) accounts for this by adjusting the statistic by the space-time interaction inherent in the background population. changes in background population over time can be incorporated into all space-time interaction tests using a significance test based on permutations conditioned on population changes. however, this obviously requires data on the population over time, which may not always be easy to obtain. space-time interaction tests are univariate and therefore only suitable for testing cases of a single disease. consideration of multiple-host diseases is possible, though there is no mechanism to test for interaction or relationships between different host species. another major consideration is the function of the surveillance system or analytic objective. interaction tests can only report the presence or absence of space-time interaction. they give no information about the spatial and temporal trends in cases, nor do they consider naturally occurring background heterogeneity. a final point is that these tests use case data, and therefore require geo-coded individual event data, making these methods unsuitable when disease data are aggregated to administrative units.
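jacquez's k-nearest-neighbour statistic can be sketched similarly. the tie-breaking rule (by index) and the counting of ordered pairs are illustrative assumptions; published versions differ in such details and in how significance is assessed.

```python
def jacquez_jk(cases, k):
    """Jacquez-style J_k for cases [((x, y), t), ...]: count ordered pairs
    (i, j) in which j is among i's k nearest neighbours in BOTH space and
    time. Large values suggest space-time interaction."""
    n = len(cases)

    def knn_sets(dist):
        # For each case, the set of its k nearest neighbours under `dist`
        out = []
        for i in range(n):
            order = sorted((j for j in range(n) if j != i),
                           key=lambda j: (dist(i, j), j))  # ties broken by index
            out.append(set(order[:k]))
        return out

    space = knn_sets(lambda i, j: (cases[i][0][0] - cases[j][0][0]) ** 2
                                  + (cases[i][0][1] - cases[j][0][1]) ** 2)
    time = knn_sets(lambda i, j: abs(cases[i][1] - cases[j][1]))
    return sum(1 for i in range(n) for j in space[i] if j in time[i])
```

with two tight space-time clusters of two cases each and k = 1, every case's spatial neighbour is also its temporal neighbour, so all four ordered pairs count.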
a final point is that these tests use case data, and therefore require geo-coded singular event data, making these methods unsuitable when disease data are aggregated to administrative units. cumulative sum methods for space-time surveillance developed out of traditional statistical surveillance applications such as quality control monitoring of industrial manufacturing processes. in cusum analysis, the objective is to detect a change in an underlying process. in application to disease surveillance, the data are in the form of case counts for sub-regions of a larger study area. a running sum of deviations is recalculated at each time period. for a given sub-region, a count y t of cases at time t is monitored as follows where s t is the cumulative sum alarm statistic, k is a parameter which represents the expected count, so that observed counts in exceedence of k are accumulated. at each time period, an alarm is signalled if s t is greater than a threshold parameter h. if a cusum is run long enough, false alarms will occur as exceedences are incrementally accumulated. the false-positive rate is controlled by the expected time it takes for a false alarm to be signalled, termed the in-control average run length, denoted arl 0 . the arl 0 is directly related to the threshold value for h, which can be difficult to specify in practice. high values of h yield long arl 0 and vice versa. in practice, approximations are used to estimate a value for h for a chosen arl 0 (siegmund, 1985) , though this remains a key issue in cu-sum methods. the basic univariate cusum in (1) can be extended to incorporate the spatial aspect of surveillance data. in this sense, cusum is a temporal statistical framework around which a space-time statistical test can be built. in an initial spatial extension, rogerson (1997) coupled the (global) tango statistic (1995) for spatial clustering in a cusum framework. 
for a point pattern of cases of disease, compute the spatial statistic, and use this value of the statistic to condition the expected value at the next time period. observed and expected values are used to derive a z-score, which is then monitored as a cusum (rogerson, 2005a). one scalar approach taken by rogerson (2005b) is to monitor only the most unexpected value, or peak, of each time period as a gumbel variate (the gumbel distribution is used as a statistical distribution for extreme values). an additional approach is to compute a univariate cusum in a parallel surveillance framework (woodall and ncube, 1985). here the threshold parameter h must be adjusted to account for the multiple tests occurring across the study area. yet this approach takes no account of spatial relationships between sub-regions (i.e., spatial autocorrelation). cusum surveillance of multiple sub-regions can be considered a multivariate problem where a vector of differences between the observed and expected counts for each sub-region is accumulated. spatial relationships between sub-regions can be incorporated by explicitly modelling the variance-covariance matrix. rogerson and yamada (2004) demonstrate this approach by monitoring a scalar variable representing the multivariate distance of the accumulated differences between observed and expected over all sub-regions. the monitored scalar is a mahalanobis-type distance of the accumulated differences, computed under a variance-covariance matrix capturing spatial dependence, where s_t is the vector of differences between observed and expected cases of disease at time t over the p sub-regions (rogerson and yamada, 2004). cusum methods are attractive for prospective disease surveillance because they offer a temporal statistical framework within which spatial statistics can be integrated. they therefore overcome one of the limitations of traditional spatial analysis applied to surveillance in that repeated testing over time (and space) can be corrected for.
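a minimal numeric sketch of this multivariate scheme is below; it uses a crosier-style shrinkage of the accumulated vector as the multivariate analogue of the univariate max(0, .) operation, which is an assumption of ours rather than rogerson and yamada's exact formulation, and all names are illustrative:

```python
import numpy as np

def mcusum_distance(obs, exp, cov, k):
    """Multivariate CUSUM sketch: accumulate the vector of
    observed-minus-expected counts over the p sub-regions and monitor
    its Mahalanobis length under a spatial covariance matrix, shrinking
    the accumulated vector toward zero by the reference value k.
    obs, exp: (T, p) arrays of counts; cov: (p, p) covariance matrix."""
    cov_inv = np.linalg.inv(cov)
    s = np.zeros(np.asarray(obs).shape[1])
    dists = []
    for y, e in zip(np.asarray(obs, float), np.asarray(exp, float)):
        s = s + (y - e)
        d = float(np.sqrt(s @ cov_inv @ s))  # Mahalanobis length of the sum
        dists.append(d)
        # Crosier-style shrinkage toward zero, analogous to max(0, .)
        s = s * (1 - k / d) if d > k else np.zeros_like(s)
    return dists
```

an alarm rule would compare each distance against a threshold h chosen for a desired in-control run length, just as in the univariate case.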
a full description of the inferential properties of the cusum framework is given by rogerson (2005a). these methods are therefore most appropriate for long temporal scales, especially when historical data are used to estimate the baseline. the multivariate cusum given by rogerson and yamada (2004) is for a single disease over multiple sub-regions, but could be used to monitor multiple diseases over multiple sub-regions. this may be most applicable in a syndromic surveillance application. the simplicity of univariate cusum makes training and technical expertise less of a factor than in the multivariate case. multivariate cusum is also more difficult to interpret, and specification of the threshold parameter requires simulation experimentation or a large temporal extent from which to establish a baseline. scan statistics, developed originally for temporal clustering by naus (1965), test whether cases of disease in a temporally defined subset exceed the expectation given a null hypothesis of no outbreak. the length of the temporal window is varied systematically in order to detect outbreaks of different lengths. this approach was first extended to spatial cluster detection in the geographical analysis machine (openshaw et al., 1987). the spatial approach looks for clusters by scanning over a map of cases of disease using circular search areas of varying radii. kulldorff and nagarwalla (1995) refined spatial scanning with the development of the spatial scan statistic, which adjusts for the multiple testing of many circular search areas. the spatial scan statistic overcomes the multiple-testing problem (common to many local spatial analysis methods) by taking the most likely cluster, defined by maximizing the likelihood that the cases within the search area are part of a cluster compared to the rest of the study area. significance testing for this one cluster is then assessed via monte carlo randomization.
secondary clusters can be assessed in the same way and ranked by p-value. in kulldorff (2001) , the spatial scan statistic is extended to space-time, such that cylindrical search areas are used where the spatial search area is defined by cylinder radius, and the temporal search area is defined by cylinder height. in prospective analysis, candidate cylinders are limited to those that start at any time during the study period and end at the current time period (i.e., alive clusters). significance is determined through randomization and comparing random permutations to the likelihood ratio maximizing cylinder in the observed data. an additional consideration to take account of multiple hypothesis testing over time (correlated sequential tests) is given by including previously tested cylinders (which may be currently 'dead') in the randomization procedure (kulldorff, 2001) . the space-time scan statistic (kulldorff, 2001) approaches the surveillance problem in a novel way and aptly handles some key shortcomings of other local methods (multiple testing, locating clusters, pre-specifying cluster size). however, a limitation is that the expectation is conditional on an accurate representation of the underlying population at risk, data which may be hard to obtain. in long term space-time surveillance scenarios, accurate population estimates between decennial censuses are rare or must be interpolated. in syndromic applications, where cases are affected by unknown variations in care-seeking behaviours, the raw population numbers may not accurately reflect the at-risk population. in kulldorff et al. (2005) , the expected value for each unit under surveillance is estimated from historical case data rather than population data. generating the expected value from the history of the process under surveillance is most suitable for real-time prospective surveillance contexts where the current state of the process is of interest. 
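a toy version of the space-time scan statistic conveys the mechanics: spatial zones built from a centre region plus its nearest neighbours, 'alive' cylinders ending at the latest period, a poisson likelihood ratio for each cylinder, and monte carlo significance. the zone construction, the null of population-proportional constant risk, and all names are simplifying assumptions of this sketch, not the full kulldorff procedure:

```python
import numpy as np

def llr(c, e, C):
    """Poisson log-likelihood ratio for a cylinder with c observed and
    e expected cases out of C total (0 when there is no excess)."""
    if c <= e:
        return 0.0
    out = c * np.log(c / e)
    if C - c > 0:
        out += (C - c) * np.log((C - c) / (C - e))
    return float(out)

def st_scan(counts, pop, centroids, max_k=3, n_sim=99, seed=0):
    """Simplified prospective space-time scan over 'alive' cylinders.
    counts: (T, p) case counts; pop: (p,) populations; centroids: (p, 2).
    Returns the most likely cluster (zone, start time), its LLR, and a
    Monte Carlo p-value."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, float)
    T, p = counts.shape
    d = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=2)
    # circular zones: each centre plus its k nearest neighbours
    zones = [np.argsort(d[i])[:k + 1] for i in range(p) for k in range(max_k)]
    ecell_share = np.outer(np.full(T, 1.0 / T), pop / pop.sum())

    def best(cnt):
        C = cnt.sum()
        ecell = C * ecell_share           # expected cases per cell under the null
        top = (0.0, None)
        for z in zones:
            for start in range(T):        # windows all end at the latest period
                v = llr(cnt[start:, z].sum(), ecell[start:, z].sum(), C)
                if v > top[0]:
                    top = (v, (tuple(int(i) for i in z), start))
        return top

    obs, cluster = best(counts)
    probs = ecell_share.ravel()
    probs = probs / probs.sum()
    sims = [best(rng.multinomial(int(counts.sum()), probs)
                 .reshape(T, p).astype(float))[0] for _ in range(n_sim)]
    pval = (1 + sum(s >= obs for s in sims)) / (n_sim + 1)
    return cluster, obs, pval
```

the p-value is the rank of the observed maximum likelihood ratio among the randomized replicates, which is what corrects for scanning over many zones and windows.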
this extension allows the application of the space-time scan statistic in a wider range of surveillance applications. a remaining limitation of the cylindrical space-time scan statistic is the use of circular search areas over the map. the power of scan statistics that use circular-based search areas declines as clusters become more irregular in shape, for example, for cases clustered along a river valley or where disease transmission is linked to the road network. the spatial scan statistic has been extended to detect irregularly-shaped clusters in patil and taillie (2004) and tango and takahashi (2005). extensions of these approaches to space-time are active areas of research. a space-time version of the tango and takahashi (2005) method uses spatial adjacency of areal units, added incrementally up to k nearest neighbour units, which are connected through time to form 3-dimensional prism search areas (takahashi et al., 2008). a similar approach is given by costa et al. (2007). however, these methods are very computationally intensive. scan statistics are one of the most widely used statistical methods for outbreak detection in surveillance systems. space-time scan statistics are able to detect and locate clusters of disease, and can condition expected counts for individual sub-regions on population data or on previous case data, making these methods suitable for implementation where data volume is large. the scope of scan statistics, like most statistical tests, is limited to monitoring case data, either case event point data or counts by sub-region. scan statistics are best suited to detecting and locating discrete localized outbreaks. secondary clusters can be identified by ranking candidate clusters by their likelihood ratio. yet region-wide outbreaks cannot be detected with scan statistics because of the assumed form of a cluster as a compact geographical region where cases are greater than expected.
novel space-time methods that search for raised incidence via graph-based connectivity may model spatial relationships of disease processes more accurately than circular search areas. however, the computational burden and complexity of these approaches limit their use to expert analysts and researchers. at the root of the problem is a conceptual discrepancy between the definition of a disease outbreak (which disease surveillance systems are often interested in detecting) and a disease cluster (defined by spatial proximity), a discrepancy common to all statistical testing methods for space-time surveillance (lawson, 2005). model-based approaches to surveillance developed recently as the need emerged to include other variables in the specification of our expectation of disease incidence. for example, we often expect disease prevalence to vary with age, gender, and workplace of the population under surveillance. statistical models allow for these influences to adjust the disease risk through space and time. a second impetus for the development of statistical models for disease surveillance is that the large part of epidemiology concerned with estimating relationships between environmental variables and disease risk (i.e., ecological analysis) provided a methodological basis from which to draw. modelling for space-time disease surveillance is relatively recent, and this is a very active area of statistical surveillance research. again we stratify statistical models into three broad classes: generalized linear mixed models, bayesian models, and models of specific space-time processes. generalized linear mixed models (glmm) offer a regression-based framework to model disease counts or rates using any of the exponential family of statistical distributions. this allows flexibility in the expected distribution of the response variable, as well as flexibility in the relationship between the response and the covariate variables (the link function).
one application of this approach to prospective disease surveillance for detection of bioterrorist attacks is given by kleinman et al. (2004). here, the number of cases of lower respiratory infection syndromes in small geographic areas acts as a proxy for possible anthrax inhalation. a glmm approach is used to combine fixed effects for covariate variables (i.e., season, day of the week) with a random effect that accounts for varying baseline risks in different geographic areas. in kleinman et al. (2004), the logit link function is used in a binomial logistic model to estimate the expected number of cases y_it in area i at time t. this is a function of the probability of an individual being a case in area i at time t and the number of people n_it in area i at time t. this expectation is conditional on a location-specific random effect b_i; the observed count is then converted to a z-score and evaluated to determine whether it is unusual (i.e., an emerging cluster). this approach was extended to a model using poisson random effects in kleinman (2005). the use of glmm in prospective surveillance has also been suggested for use in west nile virus surveillance due to the ease with which covariates can be included and flexibility in model specification (johnson, 2008). the glmm approach has attractive advantages as a flexible modelling tool. particularly, relaxation of distributional assumptions, flexibility in link functions, and the ability to model spatial relationships (at multiple spatial scales) as random effects make glmms useful for prospective space-time disease surveillance. the scale and scope of the surveillance context do not limit a model-based approach, and models may be even more useful when data abnormalities such as time lags occur (as estimates can be based on covariates alone). one feature of glmms that is important for many disease surveillance contexts is the ease with which spatial hierarchies can be incorporated.
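the evaluation step of this scheme can be sketched as follows, assuming the per-person probabilities have already been fitted elsewhere (e.g., by a logistic glmm with area random effects); the function and argument names are ours, not kleinman et al.'s:

```python
import math

def cluster_zscores(cases, n, p_fit):
    """Evaluation step of a Kleinman-style GLMM surveillance scheme:
    compare today's count in each area with its binomial expectation
    n * p via a z-score; large positive values flag emerging clusters.
    cases, n, p_fit: dicts keyed by area (count, population, fitted
    per-person probability)."""
    out = {}
    for area in cases:
        mu = n[area] * p_fit[area]                           # expected count
        sd = math.sqrt(n[area] * p_fit[area] * (1 - p_fit[area]))
        out[area] = (cases[area] - mu) / sd
    return out
```

areas whose z-score exceeds a chosen cut-off (after any multiple-testing adjustment) would be flagged for investigation.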
ecological relationships that are structured hierarchically and that impact disease emergence (e.g., climate, vegetation, vector life-cycle development) can be represented and accounted for. further, human drivers of disease emergence (e.g., land-use policies, travel patterns, demographics) are often organized hierarchically through administrative units. in the social sciences, glmms (i.e., multi-level models) are often used to incorporate these 'contextual effects' on an outcome variable. a further advantage of glmms is their ability to incorporate spatial variation in the underlying population at risk by conditioning the expected value on the random effect component (b_i in eq. (3)). where fewer people are present, the expected value is adjusted toward the mean. this can somewhat account for the small-numbers problem of standardized mortality ratios (smrs) in epidemiology, reducing the likelihood of estimating extremely low expected values in rural areas. bayesian models have been used extensively in disease mapping studies (best et al., 2005; lawson, 2009). analysis of disease in a bayesian framework centers on inference on unknown area-specific relative risks. inference on this unknown risk distribution is based on the observed data y and a prior distribution. these are combined via a likelihood function to create a posterior distribution for model parameters, which can be sampled for prediction. bayesian models have been applied for retrospective space-time surveillance (e.g., macnab, 2003) and are now being developed for prospective space-time disease surveillance. the basic bayesian model can incorporate space and time dependencies. in abellan et al. (2008) a model is described where the counts of disease are taken to be binomially distributed, and the next level of the model is composed of a decomposition of the unknown risks into model parameters for general risk, spatial effects, temporal effects, and space-time interaction.
estimation requires specifying prior distributions for each of the model components and sampling the posterior distribution via markov chain monte carlo (mcmc) methods. here, the authors describe space-time bayesian models for explanation of overall patterns of disease, speculating on their use in disease surveillance contexts. rodeiro and lawson (2006a) offer a similar model based on a poisson distributed disease count. specifically, the counts y_ij are poisson with mean a function of the expected number of cases e_ij in location i at time j and the area-specific relative risk rr_ij. similar to abellan et al. (2008), the log(rr_ij) are decomposed into spatial effects u_i, uncorrelated heterogeneity v_i, temporal trend t_j, and space-time interaction c_ij: log(rr_ij) = u_i + v_i + t_j + c_ij. again, these components need prior distributions specified. for the spatial correlation term, a conditional autoregressive model (car) is suggested for modelling spatial autocorrelation. residuals are then extracted from model predictions for incoming data and can be used to assess how well the data fit the existing model. as discussed in rodeiro and lawson (2006a), monitoring residuals in this way makes the detection of specific types of disease process change feasible by adjusting how residuals are evaluated. while adding to the complexity of the analysis, this may be of great use in a surveillance application. alternative proposals such as bayesian cluster models with an 'a priori' cluster component for spatiotemporal disease counts were developed by yan and clayton (2006). more recently, bayesian and empirical bayes semi-parametric spatiotemporal models with temporal spline smoothing were developed for the analysis of univariate spatiotemporal small area disease and health outcome rates (macnab, 2007a; macnab and gustafson, 2007; ugarte et al., 2009) and multivariate spatiotemporal disease and health outcome rates (macnab, 2007b).
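written out in full, the poisson formulation of rodeiro and lawson (2006a) described above is, in the usual notation:

```latex
\begin{aligned}
y_{ij} &\sim \mathrm{Poisson}(e_{ij}\, rr_{ij}),\\
\log(rr_{ij}) &= u_i + v_i + t_j + c_{ij},
\end{aligned}
```

where u_i carries the spatially correlated effects (given a car prior), v_i the uncorrelated heterogeneity, t_j the temporal trend, and c_ij the space-time interaction, each with its own prior distribution.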
tzala and best (2008) also proposed bayesian hierarchical latent factor models for the modelling of multivariate spatiotemporal cancer rates. these spatiotemporal models, with related bayesian and empirical bayes methods of inference, may also be considered for disease surveillance applications. the statistical methodology for applying bayesian models to surveillance in space-time is still being developed, and as such these approaches are suited primarily to researchers. bayesian models are attractive because they allow expert and local knowledge of disease processes to be incorporated via the specification of prior distributions on model parameters. however, this can also be a drawback, as a subjective element is introduced into the model. it is generally recommended that sensitivity analysis be conducted on a variety of candidate priors for model parameters (e.g., macnab and gustafson, 2007; macnab, 2007a). these technical aspects of model-fitting require advanced statistical training. a further complexity of bayesian models is estimation. mcmc methods are required for generating the posterior distributions for these types of models and are computationally very demanding (although see rodeiro and lawson, 2006b). this might negate the use of these approaches in surveillance contexts that require daily refitting of models (i.e., fine temporal resolution); however, monthly or annual model refitting may be possible. as with glmms, bayesian models lend themselves to modelling hierarchical spatial relationships, and this can be important for both ecological and human-mediated drivers of disease emergence. some modelling approaches to surveillance have been designed to model specific types of spatial processes, generally represented as a realization from a statistical distribution. while all models require some distributional assumptions, those considered here purport to associate specific statistical processes with disease processes in the context of surveillance.
in held et al. (2005), a model is based on a poisson branching process whereby outcomes are dependent on both model parameters describing a particular property (e.g., periodicity) and past observed data. spatial and space-time effects can also be included as an ordinary multivariate extension. a useful aspect of this formulation for disease surveillance is the separation of the disease process at time t into two parts: an endemic part with rate ν and an epidemic part with conditional rate λy_{t-1}, so that y_t has mean ν + λy_{t-1}. the endemic component can also be adjusted for seasonality, day-of-the-week effects and other temporal trends. extended to the multivariate case, the model has mean n_it ν_t + λ_i y_{i,t-1}, with the endemic rate adjusted by the number of people n_it in area i at time t, and area-specific parameters for the epidemic part. spatial dependence can be incorporated by adding a spatial effects term that accounts for correlated estimates in λ_i y_{i,t-1} via a weights matrix. however, this type of model yields separate parameters for each geographical unit. a point process methodology for prospective disease surveillance is presented in diggle et al. (2005). point data representing cases are modelled with separate terms for spatial variation, temporal variation, and residual space-time variation. the method is local, in the sense that recent cases are used for prediction, producing continuously varying risk surfaces. however, there are also global model parameters which estimate the background variation in space and time estimated from historical data. outbreaks are defined when variation in the residual space-time process exceeds a threshold value c. different values for the threshold parameter are evaluated and exceedance probabilities are mapped. model parameters are fixed, allowing the model to be run daily on new data. however, as noted in diggle et al. (2005), this may fail to capture unknown temporal trends, and periodic refitting may be required.
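the endemic/epidemic separation in the held et al. (2005) formulation can be illustrated with a sketch of the simplest univariate case, y_t poisson with mean ν + λy_{t-1}; the grid-search maximum likelihood fit and all names below are our illustrative choices, not held et al.'s actual inference machinery:

```python
import numpy as np

def fit_endemic_epidemic(y, nus=None, lams=None):
    """Grid-search maximum likelihood for the simplest branching model
    y_t ~ Poisson(nu + lam * y_{t-1}), separating an endemic rate nu
    from an epidemic (autoregressive) rate lam."""
    y = np.asarray(y, float)
    nus = np.linspace(0.1, 10.0, 100) if nus is None else nus
    lams = np.linspace(0.0, 0.95, 96) if lams is None else lams
    best = (np.inf, None, None)
    for nu in nus:
        mu = nu + np.outer(lams, y[:-1])                 # one row of means per lam
        nll = np.sum(mu - y[1:] * np.log(mu), axis=1)    # Poisson neg. log-likelihood
        i = int(np.argmin(nll))
        if nll[i] < best[0]:
            best = (float(nll[i]), float(nu), float(lams[i]))
    return best[1], best[2]      # (nu_hat, lam_hat)
```

a fitted λ close to 1 indicates strongly self-exciting, epidemic-dominated dynamics, while λ near 0 indicates a mostly endemic process.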
a different approach is given by järpe (1999) , which instead of decomposing the process into separate components, monitors a single parameter of spatial relationships in a surveillance setting. this is similar in spirit to rogerson's work (rogerson, 1997) monitoring point patterns with spatial statistics, though here a specific underlying process is assumed: the ising model. the ising model represents a binary-state two dimensional lattice (sites coded 0 or 1). there are two parameters for the ising model; one governs the overall intensity (probability of a site being a 1), and another the spatial interaction (probability of nearby sites being alike). in järpe (1999) , the intensity parameter is assumed equal and unchanging, and the surveillance is performed on the interaction parameter under different lattice sizes and types of change. the interaction parameter is essentially a global measure of spatial autocorrelation. this can then be monitored using temporal surveillance statistics such as cusum. since the properties of the underlying model are known, järpe is able to detect very small changes in spatial autocorrelation which could indicate the shift of a disease from endemic to epidemic. while significant spatial autocorrelation is often present at both endemic and epidemic states, changes in clustering can reveal threshold dynamics of the process in a surveillance setting. this is a common feature of forest insect epidemics (peltonen et al., 2002) . further, the effect of the lattice size can easily be estimated, and as lattice size is increased, sensitivity to changes in the interaction parameter increases as well. while most methods discussed thus far have been developed with the analysis of aggregated counts of disease in mind, analysis of sites on a lattice may have applicability in certain disease surveillance contexts. 
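the flavour of this scheme can be sketched by monitoring the like-neighbour join count (the sufficient statistic for the ising interaction parameter) with a cusum on z-scores; the baseline-estimation step and all thresholds here are simplifying assumptions of ours, not järpe's exact derivation:

```python
import numpy as np

def like_pairs(grid):
    """Number of horizontally/vertically adjacent lattice sites in the
    same state -- the sufficient statistic for the interaction parameter."""
    g = np.asarray(grid)
    return int(np.sum(g[:, :-1] == g[:, 1:]) + np.sum(g[:-1, :] == g[1:, :]))

def monitor_interaction(grids, baseline, k=0.5, h=4.0):
    """One-sided CUSUM on z-scores of the interaction statistic,
    standardized against a baseline (in-control) set of lattices.
    Returns the time indices at which an alarm is signalled."""
    base = np.array([like_pairs(g) for g in baseline], float)
    mu, sd = base.mean(), base.std(ddof=1)
    s, alarms = 0.0, []
    for t, g in enumerate(grids):
        z = (like_pairs(g) - mu) / sd
        s = max(0.0, s + z - k)
        if s > h:
            alarms.append(t)
            s = 0.0
    return alarms
```

a shift from spatially random occurrence toward clustered occurrence raises the join count well above its baseline and triggers repeated alarms.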
for example, square lattices are used for remotely sensed image processing, and surveillance of the presence or absence of a disease in these sampling units using an ising model-based approach could incorporate remotely sensed environmental covariates (e.g., normalized differential wetness index) as is commonly done for zoonotic disease risk mapping and forecasting (kitron et al., 1996; rogers et al., 1996; wilson, 2002). however, it is unclear how covariates are included in the ising model. this highlights an important point with model-based approaches to prospective surveillance: the main advantage of models is to incorporate extra information and to estimate smooth relative risks, yet as models grow in complexity they become more difficult to re-fit. this has implications for how suitable models are in different surveillance contexts. where the temporal scale is large, expected counts can be based on observed data rather than using census or other data sources. this is particularly important where diseases follow seasonal trends. with limited temporal data available, estimation of model parameters may be impacted by regular variation in disease occurrence. for surveillance systems monitoring many small areas (i.e., large spatial scale), the held et al. (2005) model would be of limited value as separate parameters need to be estimated for every sampling unit. broad scale patterns over large areas might be better captured by the point process approach of diggle et al. (2005), although case event data with fine spatial resolution are then required. for all modelling approaches, complex decisions are required, such as what covariates to include, how often to re-fit the model, and how to test incoming data for fit against the existing model, all of which require advanced statistical knowledge. this limits the applicability of modelling approaches to advanced analysts and researchers, except for use in a black-box sense by analysts and public health practitioners.
surveillance models can be tailored to detect specific types of disease process changes, such as a region-wide increase, or small changes in spatial autocorrelation suggesting a shift from endemic to epidemic states. however, models also require additional tests to determine if incoming data differ from the expected (i.e., modelled) pattern of cases. thus, in practice surveillance models are best utilized to estimate a realistic relative risk, and can then be combined with statistical tests such as cusum (järpe, 1999) and scan statistics. research into space-time disease surveillance methods has increased dramatically over the last two decades. many new methods are designed for specific surveillance systems, or are in experimental/developmental stages and not used in practical surveillance. here, we report on some newly developed approaches for public health surveillance to alert readers to the most recent developments in these emerging research areas. while test and model-based approaches to surveillance build on classical statistical methods, many recent space-time disease surveillance methods have been developed specifically to take advantage of advanced computing power and data sources. these approaches include networks (reis et al., 2007; wong and moore, 2006), simulation-based methods such as agent-based models (eubank et al., 2004) and bootstrap models (kim and o'kelly, 2008), and hidden markov models (madigan, 2005; sun and cai, 2009; watkins et al., 2009). other new methods are designed to address limitations of existing surveillance methods. one problem for most methods of surveillance is the specification of the null hypothesis, or expected disease prevalence. while expected rates are generally conditional on population data, spatial heterogeneity in the background rates is rarely accounted for. that is, complete spatial randomness (csr) is the underlying null model.
goovaerts and jacquez (2004) have used geostatistical approaches, estimating spatial dependence of background rates via the semivariogram, to develop more realistic null models for disease cluster detection. the geostatistical framework has the advantage of estimating spatial dependence from the data, rather than defining it a priori via a spatial weights matrix as is common in disease mapping models. another problem common to most surveillance methods is that maps of disease represent either home address (case events) or small areas (tract counts). unusual clusters on the map imply heightened risk is associated with those locations. however, movement of animals and people decouples the location of diagnosis from disease risk by modifying exposure histories. methods that account for mobility may be an important area for future surveillance, especially in the context of real-time, prospective outbreak detection. the relationship between case, location, and exposure is further complicated by disease latency periods, which gives rise to space-time lags in diagnoses (schaerstrom, 1999) . this may be most important in the context of retrospective cluster analysis and investigation of possible environmental risk factors. statistical tests have been developed to account for exposure history and mobility for case-control data (jacquez and meliker, 2009 ) and case-only data (jacquez et al., 2007) . kernel-based approaches to risk estimation that incorporate duration at each location have been utilized for amyotrophic lateral sclerosis (sabel et al., 2003) . the general approach is to model and analyze the space-time path of individuals in the sense of hägerstrand (1967) . as personal location data continues to become ubiquitous due to new technology such as gps-enabled cell phones, surveillance methods that account for individual space-time histories may see more application in public health surveillance. 
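the geostatistical idea rests on the empirical semivariogram, the basic tool for estimating spatial dependence from the data themselves rather than assuming it a priori via a weights matrix; a generic sketch, with function and bin choices of our own, is:

```python
import numpy as np

def semivariogram(coords, values, bins):
    """Empirical semivariogram: half the average squared difference
    between values at pairs of locations, grouped into bins by
    separation distance. Rising values with distance indicate
    positive spatial dependence at short range."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    i, j = np.triu_indices(len(values), k=1)     # each pair once
    d = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq = 0.5 * (values[i] - values[j]) ** 2
    gamma = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (d >= lo) & (d < hi)
        gamma.append(float(sq[m].mean()) if m.any() else np.nan)
    return gamma
```

fitting a model to these binned estimates then provides the spatially structured null against which apparent clusters can be judged, as in goovaerts and jacquez's neutral models.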
the development of space-time disease surveillance systems holds great potential for improving public health via early warning and ongoing monitoring of population health. the selection of which method(s) to implement in a given context is dependent on a variety of factors (table 2). this review has demonstrated that there is no best method for all systems. there are many aspects to consider when thinking about methods for space-time disease surveillance. many of the methods described in this review are active areas of research and new methods are constantly being developed. as more data sources become available, this trend is expected to continue, and the methods described here provide a snapshot of options available to public health analysts and researchers. a brief outline of some of the factors reviewed and how they relate to surveillance methods is given below. the spatial scale of the surveillance context is an important factor for selecting appropriate methods. spatial effects (i.e., clustering) are likely only of interest when cases/counts collected over a relatively large, heterogeneous area are being analyzed. over smaller, more homogeneous areas, where spatial effects are negligible, temporal surveillance is optimal. when space-time surveillance is warranted, choice of which surveillance approach to use may be impacted by how spatial effects can be incorporated. where spatial scale is small, one would likely focus on either process models or statistical tests which use an underlying distribution for the null hypothesis (i.e., poisson model). the temporal scale of surveillance is also important. large temporal scales can use either testing or modelling methods, and best suit methods where baselines are estimated from previous cases, such as with the space-time permutation scan statistic. short temporal scales are not appropriate for models when diseases have complex day-of-the-week effects or seasonal variation in incidence.
scale will also affect the computational burden placed on the system. many approaches reviewed here, particularly statistical tests such as scan statistics, use approximate randomization to generate a distribution of a test statistic under the null hypothesis. methods that utilize randomization procedures, while powerful, impose constraints when applied with large spatial-temporal datasets. most methods are designed for a single disease, and all methods are suitable for single-host diseases, but finer detail in case distribution may be important for multiple-host zoonotic diseases. stratification into separate diseases by host type will result in a loss of information, as associations between host types will be lost. as zoonotic diseases make up the majority of emerging infectious diseases (greger, 2007), multiple-host surveillance methods are required. multivariate tests such as multivariate cusum can be used to monitor multiple signals. modelling approaches can also be used by creating a generalized risk index as the variable under surveillance. multivariate extensions to existing methods can be used to monitor associations between two diseases, for example, human and animal strains of the same pathogen. the objective of surveillance is one of the main drivers of method selection. all statistical tests are commonly used for outbreak detection. in general, modelling approaches are better suited to monitoring space-time trends. for what has been termed situational awareness, multiple signals are usually monitored. this is often the case in large syndromic applications such as biosense and essence. these contexts are best suited to a modelling approach, as often heterogeneity needs to be modelled with covariates. consideration of technical expertise is required for practical disease surveillance.
broadly speaking, greater statistical expertise is required for model-based methods than for testing methods (understanding model assumptions, parameterizing models, preparing covariate data, and interpreting output), while testing concepts are generally easier to grasp. however, for epidemiologists already familiar with generalized linear mixed models, some model approaches that incorporate space and time may be quickly attainable, such as that of kleinman et al. (2004). yet for analysts from a health geography or spatial analysis background, testing methods might be more familiar. in any case, the use of space-time surveillance methods in public health will only increase in the future, and it is important that training and education keep pace with the changing methods available for surveillance data analysis.
references
use of space-time models to investigate the stability of patterns of disease
an incremental knox test for the determination of the serial interval between successive cases of an infectious disease
local indicators of spatial association - lisa
spatio-temporal exploration of sars epidemic
the detection of clusters in rare diseases
a comparison of bayesian spatial models for disease mapping
an application of density estimation to geographical epidemiology
biosense: implementation of a national early event detection and situational awareness system
algorithms for rapid outbreak detection: a research synthesis
syndromic surveillance and bioterrorism-related epidemics
framework for evaluating public health surveillance systems for early detection of outbreaks
predicting the local dynamics of epizootic rabies among raccoons in the united states
a space-time permutation scan statistic with irregular shape for disease outbreak detection
second-order analysis of space-time clustering
statistical analysis of spatial point patterns
point process methodology for on-line spatio-temporal disease surveillance
modelling disease outbreaks in realistic urban social networks
signals come and go: syndromic surveillance and styles of biosecurity
spatial and syndromic surveillance for public health
the analysis of spatial association by use of distance statistics
accounting for regional background and population size in the detection of spatial clusters and outliers using geostatistical filtering and spatial neutral models: the case of lung cancer in long island
the human/animal interface: emergence and resurgence of zoonotic infectious diseases
innovation diffusion as a spatial process
a statistical framework for the analysis of multivariate infectious disease surveillance counts
a k nearest neighbour test for space-time interaction
in search of induction and latency periods: space-time interaction accounting for residential mobility, risk factors and covariates
case-control clustering for mobile populations
surveillance of the interaction parameter of the ising model
prospective spatial prediction of infectious disease: experience of new york state (usa) with west nile virus and proposed directions for improved surveillance
a bootstrap based space-time surveillance model with an application to crime occurrences
spatial analysis of the distribution of tsetse flies in the lambwe valley, kenya, using landsat tm satellite imagery and gis
generalized linear models and generalized linear mixed models for small-area surveillance
a generalized linear mixed models approach for detecting incident clusters of disease in small areas, with an application to biological terrorism
a model-adjusted space-time scan statistic with an application to syndromic surveillance
the detection of space-time interactions
prospective time periodic geographical disease surveillance using a scan statistic
a space-time permutation scan statistic for the early detection of disease outbreaks
the knox method and other tests for space-time interaction
spatial disease clusters: detection and inference
spatial and syndromic surveillance for public health
bayesian disease mapping: hierarchical modeling in spatial epidemiology
applications of extraction mapping in environmental epidemiology
spatial pattern and ecological analysis
statistical analyses in disease surveillance systems
a bayesian hierarchical model for accident and injury surveillance
spline smoothing in bayesian disease mapping
mapping disability-adjusted life years: a bayesian hierarchical model framework for burden of disease and injury assessment
regression b-spline smoothing in bayesian disease mapping: with an application to patient safety surveillance
spatial and syndromic surveillance for public health
implementing syndromic surveillance: a practical guide informed by the early experience
the detection of disease clustering and a generalized regression approach
a review of methods for the statistical analysis of spatial patterns of disease
visualization techniques and graphical user interfaces in syndromic surveillance systems. summary from the disease surveillance workshop
the distribution of the size of the maximum cluster of points on a line
using remote sensing and geographic information systems to identify villages at high risk for rhodesiense sleeping sickness in uganda
a mark 1 geographical analysis machine for the automated analysis of point data sets
upper level set scan statistics for detecting arbitrarily shaped hotspots
spatial synchrony in forest insect outbreaks: roles of regional stochasticity and dispersal
a case-control approach to examine diseases for evidence of contagion, including diseases with long latent periods
if syndromic surveillance is the answer, what is the question?
aegis: a robust and scalable real-time public health surveillance system
monitoring changes in spatio-temporal maps of disease
online updating of space-time disease surveillance models via particle filters
predicting the distribution of tsetse flies in west africa using temporal fourier processed meteorological satellite data
surveillance systems for monitoring the development of spatial patterns
a set of associated statistical tests for spatial clustering
monitoring spatial maxima
monitoring change in spatial patterns of disease: comparing univariate and multivariate cumulative sum approaches
spatial clustering of amyotrophic lateral sclerosis in finland at place of birth and place of death
apparent and actual disease landscapes. some reflections on the geographical definition of health and disease
sequential analysis: tests and confidence intervals
a review and discussion of prospective statistical surveillance in public health
evaluation challenges for syndromic surveillance - making incremental progress
syndromic surveillance: is it worth the effort?
large-scale multiple testing under dependence
a flexibly shaped space-time scan statistic for disease outbreak detection and monitoring
a class of tests for detecting 'general' and 'focused' clustering of rare diseases
a flexibly shaped spatial scan statistic for detecting clusters
monitoring for clusters in disease: application to leukemia incidence in upstate new york
bayesian latent variable modelling of multivariate spatiotemporal variation in cancer mortality
spatio-temporal modeling of mortality risks using penalized splines
the emerging science of very early detection of disease outbreaks
disease surveillance using a hidden markov model
emerging and vector-borne diseases: role of high spatial resolution and hyperspectral images in analyses and forecasts
multivariate cusum quality-control procedures
classical time-series methods for biosurveillance
world health organization. global early warning system for major animal diseases, including zoonoses (glews)
a review of public health syndromic surveillance systems
a cluster model for space-time disease counts
this project was supported in part by the teasdale-corti global health research partnership program, the natural sciences and engineering research council of canada, and geoconnections canada. the authors would like to thank dr. barry boots for direction and suggestions during the starting phase of this research.
key: cord-339789-151d1j4n authors: hong, hyokyoung g.; li, yi title: estimation of time-varying reproduction numbers underlying epidemiological processes: a new statistical tool for the covid-19 pandemic date: 2020-07-21 journal: plos one doi: 10.1371/journal.pone.0236464 sha: doc_id: 339789 cord_uid: 151d1j4n
the coronavirus pandemic has rapidly evolved into an unprecedented crisis. the susceptible-infectious-removed (sir) model and its variants have been used for modeling the pandemic. however, time-independent parameters in the classical models may not capture the dynamic transmission and removal processes, governed by virus containment strategies taken at various phases of the epidemic. moreover, few models account for possible inaccuracies of the reported cases. we propose a poisson model with time-dependent transmission and removal rates to account for possible random errors in reporting, and estimate a time-dependent disease reproduction number, which may reflect the effectiveness of virus control strategies. we apply our method to study the pandemic in several severely impacted countries, and analyze and forecast the evolving spread of the coronavirus. we have developed an interactive web application to facilitate readers' use of our method.
a recent work [35] demonstrated that R0 is likely to vary "due to the impact of the performed intervention strategies and behavioral changes in the population". the merits of our work are summarized as follows.
first, unlike the deterministic ode-based sir models, our method does not require the transmission and removal rates to be known, but estimates them from the data. second, we allow these rates to be time-varying. some time-varying sir approaches [27] directly integrate into the model information on when governments enforced, for example, quarantine, social distancing, compulsory mask-wearing and city lockdowns. our method differs by computing a time-varying R0(t), which gauges the status of coronavirus containment and assesses the effectiveness of virus control strategies. third, our poisson model accounts for possible random errors in reporting, and quantifies the uncertainty of the predicted numbers of susceptible, infectious and removed. finally, we apply our method to analyze the data collected from the aforementioned github time-series data repository. we have created an interactive web application (https://younghhk.shinyapps.io/tvsirforcovid19/) to facilitate users' application of the proposed method. we introduce a poisson model with time-varying transmission and removal rates, denoted by β(t) and γ(t). consider a population with N individuals, and denote by S(t), I(t), R(t) the true but unknown numbers of susceptible, infectious and removed, respectively, at time t, and by s(t) = S(t)/N, i(t) = I(t)/N, r(t) = R(t)/N the fractions of these compartments. the following ordinary differential equations (ode) describe the change rates of s(t), i(t) and r(t):

ds/dt = -β(t) s(t) i(t),    (1)
di/dt = β(t) s(t) i(t) - γ(t) i(t),    (2)
dr/dt = γ(t) i(t),    (3)

with an initial condition: i(0) = i_0 and r(0) = r_0, where i_0 > 0 in order to let the epidemic develop [36]. here, β(t) > 0 is the time-varying transmission rate of an infection at time t, which is the number of infectious contacts that result in infections per unit time, and γ(t) > 0 is the time-varying removal rate at t, at which infectious subjects are removed from being infectious due to death or recovery [33].
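the ode system above can be integrated numerically once candidate rate functions are supplied. a minimal forward-euler sketch (the rate functions, step size, and initial values below are illustrative assumptions, not the authors' implementation):

```python
def simulate_sir(beta, gamma, i0, r0, T, dt=0.01):
    """Forward-Euler integration of the time-varying SIR system
    ds/dt = -beta(t) s i,  di/dt = beta(t) s i - gamma(t) i,  dr/dt = gamma(t) i.
    beta and gamma are callables returning the rates at time t;
    s, i, r are population fractions, so s + i + r stays equal to 1."""
    s, i, r = 1.0 - i0 - r0, i0, r0
    traj = [(0.0, s, i, r)]
    for k in range(int(T / dt)):
        t = k * dt
        b, g = beta(t), gamma(t)
        ds = -b * s * i
        di = b * s * i - g * i
        dr = g * i
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        traj.append(((k + 1) * dt, s, i, r))
    return traj
```

for example, constant rates beta = 0.3 and gamma = 0.1 (a hypothetical reproduction number of 3) produce the familiar single-wave epidemic in which most of the population ends up removed.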
moreover, γ(t)^(-1) can be interpreted as the infectious duration of an infection caught at time t [37]. from (1)-(3), we derive an important quantity, which is the time-dependent reproduction number

R0(t) = β(t)/γ(t).    (4)

to see this, dividing (2) by (3) leads to (di/dr)(t) = β(t)s(t)/γ(t) - 1, which is approximately R0(t) - 1 when s(t) ≈ 1, where (di/dr)(t) is the ratio of the change rate of i(t) to that of r(t). therefore, compared to its time-independent counterpart, R0(t) is an instantaneous reproduction number and provides a real-time picture of an outbreak. for example, at the onset of the outbreak and in the absence of any containment actions, we may see a rapid ramp-up of cases compared to those removed, leading to a large (di/dr)(t) in (4), and hence a large R0(t). with the implemented policies for disease mitigation, we will see a drastically decreasing (di/dr)(t) and, therefore, a declining R0(t) over time. the turning point is t_0 such that R0(t_0) = 1; the outbreak is controlled when (di/dr)(t_0) < 0. under the fixed population size assumption, i.e. s(t) + i(t) + r(t) = 1, we only need to study i(t) and r(t), and re-express (1)-(3) as

di/dt = β(t){1 - i(t) - r(t)}i(t) - γ(t)i(t),
dr/dt = γ(t)i(t),    (5)

with the same initial condition. as the numbers of cases and removed are reported on a daily basis, t is measured in days, e.g. t = 1, . . ., T. replacing derivatives in (5) with finite differences, we can consider a discrete version of (5):

i(t + 1) - i(t) = β(t)i(t){1 - i(t) - r(t)} - γ(t)i(t),
r(t + 1) - r(t) = γ(t)i(t),    (6)

where β(t) and γ(t) are positive functions of t. we set i(0) = i_0 and r(0) = r_0 with t = 0 being the starting date. model (6) admits a recursive way to compute i(t) and r(t):

i(t + 1) = {1 + β(t) - γ(t)}i(t) - β(t)i(t){i(t) + r(t)},
r(t + 1) = r(t) + γ(t)i(t),    (7)

for t = 0, . . ., T - 1. the first equation of (7) implies that β(t) < γ(t), or R0(t) = β(t)γ(t)^(-1) < 1, leads to i(t + 1) < i(t), that is, the number of infectious cases drops and the spread of the virus is controlled; otherwise, the number of infectious cases will keep increasing.
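the recursion (7) translates directly into code; a minimal sketch (the daily rate sequences passed in below are illustrative, not estimated from data):

```python
def discrete_sir(beta, gamma, i0, r0, steps):
    """Discrete-time SIR recursion on population fractions:
    i(t+1) = {1 + beta(t) - gamma(t)} i(t) - beta(t) i(t) {i(t) + r(t)}
    r(t+1) = r(t) + gamma(t) i(t)
    beta and gamma are sequences of daily rates with length >= steps."""
    i, r = [i0], [r0]
    for t in range(steps):
        b, g = beta[t], gamma[t]
        i.append((1.0 + b - g) * i[t] - b * i[t] * (i[t] + r[t]))
        r.append(r[t] + g * i[t])
    return i, r
```

as the text notes, whenever β(t)/γ(t) < 1 the sequence i(t) shrinks, and whenever the ratio exceeds 1 (with few already infected or removed) it grows.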
to fit the model and estimate the time-dependent parameters, we can use nonparametric techniques, such as splines [38-43], local polynomial regression [44] and reproducing kernel hilbert space methods [45]. in particular, we consider a cubic b-spline approximation [46]. denote by B(t) = {B_1(t), . . ., B_q(t)}^T the q cubic b-spline basis functions over [0, T] associated with the knots 0 = w_0 < w_1 < . . . < w_(q-2) < w_(q-1) = T. for added flexibility, we allow the number of knots to differ between β(t) and γ(t) and specify

log β(t) = Σ_(j=1)^(q_1) b_j B_j(t),   log γ(t) = Σ_(j=1)^(q_2) g_j B_j(t).    (8)

when b_1 = · · · = b_(q_1) and g_1 = · · · = g_(q_2), the model reduces to a constant sir model [46]. we use cross-validation to choose q_1 and q_2 in our numerical experiments. denote by β = (b_1, . . ., b_(q_1)) and γ = (g_1, . . ., g_(q_2)) the unknown parameters, by Z_I(t) and Z_R(t) the reported numbers of infectious and removed, respectively, and by z_I(t) = Z_I(t)/N and z_R(t) = Z_R(t)/N the reported proportions. also, denote by I(t) and R(t) the true numbers of infectious and removed, respectively, at time t. we propose a poisson model to link Z_I(t) and Z_R(t) to I(t) and R(t) as follows:

Z_I(t) ~ poisson{N i(t)},   Z_R(t) ~ poisson{N r(t)}.    (9)

we also assume that, given I(t) and R(t), the observed daily numbers {Z_I(t), Z_R(t)} are independent across t = 1, . . ., T, meaning the random reporting errors are "white" noise. we note that (9) is directly based on the "true" numbers of infectious and removed cases derived from the discrete sir model (6). this differs from the markov process approach, which is based on the past observations. with (6), (7) and (8), r(t) and i(t) are functions of β and γ. given the data (Z_I(t), Z_R(t)), t = 1, . . ., T, we obtain (β̂, γ̂), the estimates of (β, γ), by maximizing the likelihood or, equivalently, the log likelihood function

ℓ(β, γ) = Σ_(t=1)^(T) [Z_I(t) log{N i(t)} - N i(t) + Z_R(t) log{N r(t)} - N r(t)] + c,    (10)

where c is a constant free of β and γ. see the s1 appendix for additional details of optimization.
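the likelihood construction can be sketched in a few lines. the sketch below uses a low-order polynomial basis as a stand-in for the paper's cubic b-splines, purely to keep it self-contained; the function names and defaults are illustrative assumptions, not the authors' code:

```python
import math

def rates(params, T, q=1):
    """log beta(t) and log gamma(t) are linear in a basis, as in eq. (8);
    here a low-order polynomial basis stands in for cubic b-splines."""
    b, g = params[:q], params[q:]
    def val(coef, t):
        x = t / T
        return math.exp(sum(c * x ** j for j, c in enumerate(coef)))
    return [val(b, t) for t in range(T)], [val(g, t) for t in range(T)]

def neg_log_lik(params, z_i, z_r, n, i0, r0, q=1):
    """Poisson negative log-likelihood of model (9):
    Z_I(t) ~ Poisson(N i(t)), Z_R(t) ~ Poisson(N r(t)),
    with i(t), r(t) generated by the discrete recursion (7)."""
    T = len(z_i)
    beta, gamma = rates(params, T, q)
    i, r = [i0], [r0]
    for t in range(T - 1):
        i.append((1 + beta[t] - gamma[t]) * i[t] - beta[t] * i[t] * (i[t] + r[t]))
        r.append(r[t] + gamma[t] * i[t])
    nll = 0.0
    for t in range(T):
        for z, frac in ((z_i[t], i[t]), (z_r[t], r[t])):
            mu = max(n * frac, 1e-12)      # guard against log(0)
            nll -= z * math.log(mu) - mu   # log pmf up to a constant free of params
    return nll
```

the estimates (β̂, γ̂) would then come from handing neg_log_lik to a general-purpose optimizer (the paper uses r's nlm; in python, scipy.optimize.minimize plays the same role).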
we then estimate the variance-covariance matrix of (β̂, γ̂) by inverting the second derivative of -ℓ(β, γ) evaluated at (β̂, γ̂). finally, for t = 1, . . ., T, we estimate I(t) and R(t) by Î(t) = N î(t) and R̂(t) = N r̂(t), where î(t) and r̂(t) are obtained from (7) with all unknown quantities replaced by their estimates; estimate β(t) and γ(t) by β̂(t) and γ̂(t), obtained by using (8) with (β, γ) replaced by (β̂, γ̂); and estimate R0(t) by R̂0(t) = β̂(t)/γ̂(t). estimation: let N be the size of the population of a given country. the date when the first case was reported is set to be the starting date, with t = 1, i_0 = Z_I(1)/N and r_0 = Z_R(1)/N. the observed data are {Z_I(t), Z_R(t), t = 1, . . ., T}, obtained from the github data repository website mentioned in the introduction. we maximize (10) to obtain β̂ = (b̂_1, . . ., b̂_(q_1)) and γ̂ = (ĝ_1, . . ., ĝ_(q_2)). the optimal q_1 and q_2 are obtained via cross-validation. since the first case of covid-19 was detected in china, it quickly spread to nearly every part of the world [6]. covid-19, conjectured to be more contagious than the previous sars and h1n1 [48], has put great strain on healthcare systems worldwide, especially among the severely affected countries [49]. we apply our method to assess the epidemiological processes of covid-19 in some severely impacted countries. the country-specific time-series data of confirmed, recovered, and death cases were obtained from a github data repository website (https://github.com/ulklc/covid19-timeseries). this site collects information from various sources on a daily basis at gmt 0:00, converts the data to the csv format, and conducts data normalization and harmonization if inconsistencies are found. in particular, the current population size of each country, N, came from the website of worldometer. our analyses covered the periods between the date of the first reported coronavirus case in each nation and june 30, 2020.
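once β̂(t) and γ̂(t) are available as daily sequences, the estimated reproduction number and the turning point t_0 defined earlier can be read off directly; a minimal sketch (function names are illustrative):

```python
def reproduction_number(beta_hat, gamma_hat):
    # pointwise estimate R0(t) = beta(t) / gamma(t)
    return [b / g for b, g in zip(beta_hat, gamma_hat)]

def turning_point(beta_hat, gamma_hat):
    """First day t0 with R0(t0) < 1, i.e. removals begin to outpace new infections."""
    for t, r0t in enumerate(reproduction_number(beta_hat, gamma_hat)):
        if r0t < 1.0:
            return t
    return None  # containment not yet reached over the observed window
```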
in the beginning of the outbreak, assessment of i_0 and r_0 was problematic, as infectious but asymptomatic cases tended to go undetected due to lack of awareness and testing. to investigate how our method depends on the correct specification of the initial values r_0 and i_0, we conducted monte carlo simulations. as a comparison, we also studied the performance of the deterministic sir model in the same settings. fig 1 shows that, when the initial value i_0 was mis-specified to be five times the truth, the curves of i(t) and r(t) obtained by the deterministic sir model (6) were considerably biased. on the other hand, our proposed model (9), by accounting for the randomness of the observed data, was robust toward the mis-specification of i_0 and r_0: the estimates of r(t) and i(t) had negligible biases even with mis-specified initial values. in an omitted analysis, we mis-specified i_0 and r_0 to be only twice the truth, and obtained similar results. our numerical experiments also suggested that using the time series starting from the date when both cases and removed were reported may generate more reasonable estimates. using the cubic b-splines (8), we estimated the time-dependent transmission rate β(t) and removal rate γ(t), based on which we further estimated R0(t), i(t) and r(t). to choose the optimal number of knots for each country when implementing the spline approach, we used 5-fold cross-validation, minimizing the combined mean squared error for the estimated infectious and removed cases. fig 2 shows sharp variations in transmission rates and removal rates across different time periods, indicating the time-varying nature of these rates. the estimated i(t) and r(t) overlapped well with the observed numbers of infectious and removed cases, indicating the reasonableness of the method. the pointwise 95% confidence intervals (in yellow) represent the uncertainty of the estimates, which may be due to error in reporting.
fig 3 presents the estimated time-varying reproduction number, β̂(t)γ̂(t)^(-1), for several countries. the curves capture the evolving trends of the epidemic for each country. in the us, though the first confirmed case was reported on january 20, 2020, lack of immediate action in the early stage let the epidemic spread widely. as a result, the us saw soaring infectious cases, and R0(t) reached its peak around mid-march. from mid-march to early april, the us tightened its virus control policy by suspending foreign travel and closing borders, and the federal government and most states issued mandatory or advisory stay-home orders, which seemed to have substantially contained the virus. the high reproduction numbers of china, italy, and sweden at the onset of the pandemic imply that the spread of the infectious disease was not well controlled in its early phases. with the extremely stringent mitigation policies, such as city lockdown and mandatory mask-wearing, implemented at the end of january, china was reported to have brought its epidemic under control, with a quickly dropping R0(t) in february. this indicates that china might have contained the epidemic, with more people removed from infectious status than became infectious. sweden is among the few countries that imposed more relaxed measures to control coronavirus and advocated herd immunity. the swedish approach has initiated much debate. while some criticized that this may endanger the general population in a reckless way, others felt this might terminate the pandemic more effectively in the absence of vaccines [50]. fig 3 demonstrates that sweden has a large reproduction number, which however keeps decreasing. the "big v" shape of the reproduction number around may 1 might be due to reporting errors or lags. our investigation found that the reported number of infectious cases in that period suddenly dropped and then quickly rose back, which was unusual.
around february 18, a surge in south korea was linked to a massive cluster of more than 5,000 cases [51]. the outbreak was clearly depicted in the time-varying R0(t) curve. since then, south korea appeared to have slowed its epidemic, likely due to expansive testing programs and extensive efforts to trace and isolate patients and their contacts [52].
[figure caption] estimated i(t), r(t), β(t), γ(t), and R0(t). the us (left) and china (right) are shown based on the data up to june 30, 2020. the blue dots and the red dashed curves represent the observed data and the model-based predictions, respectively, with 95% confidence intervals.
more broadly, fig 3 categorizes countries into two groups. one group features the countries which have contained coronavirus. countries such as china and south korea took aggressive actions after the outbreak and presented sharper downward slopes. some european countries such as italy and spain, and mideastern countries such as iran, which were hit later than the east asian countries, share a similar pattern, though with much flatter slopes. on the other hand, the us, brazil, and sweden are still struggling to contain the virus, with their R0(t) curves hovering above 1. we also caution that, among the countries whose R0(t) dropped below 1, the curves of the reproduction numbers are beginning to uptick, possibly due to resumed economic activities. we have developed a web application (https://younghhk.shinyapps.io/tvsirforcovid19/) to facilitate users' application of the proposed method to compute the time-varying reproduction number, and to estimate and predict the daily numbers of active and removed cases for the presented countries and other countries; see fig 4 for an illustration.
our code was written in r [53], using the bs function in the splines package for cubic b-spline approximation, the nlm function in the stats package for nonlinear minimization, and the jacobian function in the numderiv package for computation of gradients and hessian matrices. graphs were made using the ggplot2 package. our code can be found on the aforementioned shiny website. the rampaging pandemic of covid-19 has called for the development of proper computational and statistical tools to understand the trend of the spread of the disease and evaluate the efficacy of mitigation measures [54-57]. we propose a poisson model with time-dependent transmission and removal rates. our model accommodates possible random errors and estimates a time-dependent disease reproduction number, R0(t), which can serve as a metric for timely evaluation of the effects of health policies. there have been substantial issues, such as biases and lags, in reporting infectious cases, recoveries, and deaths, especially at the early stage of the outbreak. as opposed to the deterministic sir models that rely heavily on accurate reporting of initial infectious and removed cases, our model is more robust toward mis-specification of such initial conditions. applications of our method to study the epidemics in selected countries illustrate the results of the virus containment policies implemented in these countries, and may serve as epidemiological benchmarks for future preventive measures. several methodological questions need to be addressed. first, we analyzed each country separately, without considering the traffic flows among countries. we will develop a joint model for the global epidemic, which accounts for the geographic locations of and the connectivity among the countries. second, incorporating the timing of public health interventions, such as shelter-in-place orders, into the model might be interesting.
however, we opted not to follow this approach, as no such information exists for the majority of countries. on the other hand, the impact of interventions, or the change point, can be embedded into our nonparametric time-dependent estimates. third, the validity of the results of statistical models eventually hinges on data transparency and accuracy. for example, the results of chinazzi et al. [58] suggested that in china only one in four cases was detected and confirmed. also, asymptomatic cases might have gone undetected in many countries. all of this might have led to underestimation of the actual number of cases. moreover, the collected data could be biased toward patients with severe infection and with insurance, as these patients were more likely to seek care or get tested. more in-depth research is warranted to address the issue of selection bias. finally, our present work is within the sir framework, where removed individuals include recoveries and deaths, who hypothetically are unlikely to infect others. although this makes the model simpler and widely adopted, the interpretation of the γ parameter is not straightforward. our subsequent work is to develop a susceptible-infectious-recovered-deceased (sird) model, in which the number of deaths and the number of recovered are separately considered. we will report this elsewhere. containment of covid-19 requires the concerted effort of health care workers, health policy makers and citizens. measures, e.g. self-quarantine, social distancing, and shelter in place, have been executed at various phases by each country to prevent community transmission. timely and effective assessment of these actions constitutes a critical component of the effort. sir models have been widely used to model this pandemic. however, constant transmission and removal rates may not capture the timely influences of these policies.
we propose a time-varying sir poisson model to assess the dynamic transmission patterns of covid-19. with the virus containment measures taken at various time points, R0 may vary substantially over time. our model provides a systematic and daily updatable tool to evaluate the immediate outcomes of these actions. it is likely that the pandemic is ending and many countries are now shifting gear to reopen the economy, while preparing to battle a second wave of the virus [59, 60]. our tool may shed light on and aid the implementation of future containment strategies.
coronaviruses: an overview of their replication and pathogenesis
bats are natural reservoirs of sars-like coronaviruses
discovery of seven novel mammalian and avian coronaviruses in the genus deltacoronavirus supports bat coronaviruses as the gene source of alphacoronavirus and betacoronavirus and avian coronaviruses as the gene source of gammacoronavirus and deltacoronavirus
human coronavirus and severe acute respiratory infection in southern brazil. pathogens and global health
evolving epidemiology and impact of non-pharmaceutical interventions on the outbreak of coronavirus disease
johns hopkins coronavirus resource center
real-time epidemic forecasting for pandemic influenza
mathematical models of infectious disease transmission
modelling transmission and control of the covid-19 pandemic in australia
challenges in control of covid-19: short doubling time and long delay to effect of interventions
individual vaccination as nash equilibrium in a sir model with application to the 2009-2010 influenza a (h1n1) epidemic in france
estimating epidemic parameters: application to h1n1 pandemic data
bayesian estimation of the dynamics of pandemic (h1n1) 2009 influenza transmission in queensland: a space-time sir-based model. environmental research
modeling super-spreading events for infectious diseases: case study sars
deterministic sir (susceptible-infected-removed) models applied to varicella outbreaks
an introduction to compartmental modeling for the budding infectious disease modeler
risk analysis foundations, models, and methods
a contribution to the mathematical theory of epidemics
statistics based predictions of coronavirus 2019-ncov spreading in mainland china. medrxiv
a time delay dynamical model for outbreak of 2019-ncov and the parameter identification
epidemic analysis of covid-19 in china by dynamical modeling
preliminary prediction of the basic reproduction number of the wuhan novel coronavirus 2019-ncov
effective containment explains sub-exponential growth in confirmed cases of recent covid-19 outbreak in mainland china
lessons from the history of quarantine, from plague to influenza a. emerging infectious diseases
a time-dependent sir model for covid-19 with undetectable infected persons
sir model with time dependent infectivity parameter: approximating the epidemic attractor and the importance of the initial phase
an epidemiological forecast model and software assessing interventions on covid-19 epidemic in china. medrxiv
modeling count data
methods for estimating disease transmission rates: evaluating the precision of poisson regression and two novel methods
fitting outbreak models to data from many small norovirus outbreaks
multi-species sir models from a dynamical bayesian perspective
the estimation of the basic reproduction number for infectious diseases
mathematical epidemiology of infectious diseases: model building
transmission potential of smallpox: estimates based on detailed data from an outbreak
measurability of the epidemic reproduction number in data-driven contact networks
a time-dependent sir model for covid-19 with undetectable infected persons
notes on r0
a practical guide to splines
parameter estimation for differential equations: a generalized smoothing approach
modelling transcriptional regulation using gaussian processes
linear latent force models using gaussian processes
latent force models
mechanistic hierarchical gaussian processes
empirical-bias bandwidths for local polynomial nonparametric regression and density estimation
new reproducing kernel functions. mathematical problems in engineering
a review of spline function procedures in r. bmc medical research methodology
a note on the jackknife, the bootstrap and the delta method estimators of bias and variance
covid-19, chronicle of an expected pandemic
covid-19: how doctors and healthcare systems are tackling coronavirus worldwide
'closing borders is ridiculous': the epidemiologist behind sweden's controversial coronavirus strategy
why a south korean church was the perfect petri dish for coronavirus
coronavirus cases have dropped sharply in south korea
r: a language and environment for statistical computing
current status of global research on novel coronavirus disease (covid-19): a bibliometric analysis and knowledge mapping. available at ssrn 3547824
investigating the cases of novel coronavirus disease (covid-19) in china using dynamic statistical techniques
the impact of social distancing and epicenter lockdown on the covid-19 epidemic in mainland china: a data-driven seiqr model study. medrxiv
covid-19 italian and europe epidemic evolution: a seir model with lockdown-dependent transmission rate based on chinese data
the effect of travel restrictions on the spread of the 2019 novel coronavirus (covid-19) outbreak
as china's virus cases reach zero, experts warn of second wave
asian nations face second wave of imported cases
key: cord-329388-defbarkz authors: keane, martin g.; wiegers, susan e. title: time (f)or competency date: 2020-08-03 journal: j am soc echocardiogr doi: 10.1016/j.echo.2020.05.029 sha: doc_id: 329388 cord_uid: defbarkz nan
martin g. keane, md, fase, and susan e. wiegers, md, fase, philadelphia, pennsylvania
seventy years ago, susan's 92-year-old father's first teaching job was in a one-room schoolhouse in fayetteville, maine. he recently came across the "register" that he was required to keep of his students' daily attendance. he explained that attending an adequate proportion of school days was the sole determinant of whether or not a child was promoted to the next grade. "apparently, actual accomplishment was not considered," he laughed. indeed, time spent in training is an essential component of the development of skill and expertise: time in rank, time on service, and time devoted to learning and performing the skill in question. linked with time spent in training, appropriately robust experience to develop expertise requires repeated exposure to and performance of tasks essential to the skill over that time: numbers of consults and evaluations, accumulation of procedures, numbers of echocardiograms. it is important to recognize, however, that competency in the skill is the outcome of interest. time and numbers are merely surrogate markers.
the core cardiovascular training statement (cocats 4), task force 5, outlined the expected behaviors and work product for echocardiographers. 1 levels of training, from the most basic echocardiographic knowledge (level i) to the most advanced knowledge suitable for an echocardiography lab director (level iii), are clearly defined by duration of echo-specific training as well as specified numbers of procedures (transthoracic, transesophageal, and stress echocardiography) performed by the trainee. the task force clearly recognized that competency-based evaluations and assessments of the echocardiographic knowledge base are essential elements in the certification of skill. however, as lab/program directors responsible for providing certification letters over many years, it has been our experience that the "focus" of the fellowship trainees (and sometimes of their mentors as well) is frequently geared toward meticulous documentation of "time served" and "procedures performed," as evidence for the proverbial notches in their belts. suitable evaluation of the individual candidate's competency is potentially at risk of being overlooked. necessity is the mother of invention, however, and the ongoing covid-19 crisis may prove instrumental in shifting the focus of echocardiography training evaluation from time and numbers to consideration of alternative measures of skill. dr. jose madrazo rightly and extensively illustrates this important point in his letter in this issue of the journal of the american society of echocardiography. 2 as with the johns hopkins program, training programs across the country are faced with a significant decline in the volume of all forms of echocardiographic evaluation as clinical focus shifts toward the care of an overwhelming number of patients with covid-19. dr.
madrazo notes that social distancing measures, so crucial to thwarting the spread of sars-cov2, additionally hamper opportunities for hands-on training as well as face-to-face mentoring and supervision between expert echocardiographer and trainee. he makes numerous worthwhile recommendations for alternative experiential and evaluative tactics. these must be considered for implementation in the certification process, especially during a pandemic that is here to stay for the foreseeable future. we applaud his recommendations and reiterate a need for a shift toward competency-based assessment. the recent 2019 american college of cardiology/american heart association/american society of echocardiography advanced training statement 3 focused on select competencies and echo procedure volumes for level iii advanced training. the document is unique in its greater focus on delineating strategies for the evaluation of competency, in addition to recommended numbers of advanced echo techniques and procedures performed. it recognized that the endorsed volumes for specific advanced echo techniques and procedural guidance to achieve level iii have been developed by the expert committee consensus, in consultation with echocardiography training authorities across the country. in all instances, these procedure volumes are noted to be recommendations only. they serve as recognition that diverse trainees develop competency at different levels of experience-some quickly, others requiring more procedural practice. perhaps it is an appropriate time for a similar shift toward competency-based assessment when certifying level i and level ii training as well. to that end, the advanced training document delineates several evaluation tools that can be utilized for robust competency documentation 3 : 1. examination 2. direct observation 3. procedure logbooks 4. simulation 5. conference presentation 6. multisource evaluation 7. 
echo lab quality improvement and quality assurance projects to supplement the procedural logbooks and decrease in face-toface direct observation of skills, alternative evaluation methods are readily available. we admit that faced with weaker trainees, it can be easier to recommend ''more studies'' rather than giving uncomfortable and negative feedback regarding their current level of accomplishment. as noted, 2 ''distant'' overreading of fellow-interpreted studies can be as valuable, or even more so, than direct observation. more conscious effort must be expended on the part of the expert mentor to virtually review all aspects of each study overread in order to provide the trainee with as comprehensive an assessment and education as occurs in side-by-side reading. whenever distancing norms permit, every effort should be expended to maintain direct supervision, with at least one or two trainees in direct contact with the mentor using proper personal protective equipment. in addition to evaluation of interpretive skills, it remains possible to evaluate the breadth and depth of trainee knowledge through participation in and presentation of didactic conferences. today's sophisticated video conferencing mechanisms allow for a remarkable level of interaction under difficult circumstances. we endorse additional novel video conference applications, including interactive case reviews and case series presented to the fellowship group. 3 finally, while formal national board of echocardiography board examination remains a final tool to assess knowledge base, allowing trainees more time and access to board review courses, board review questions, and seminars in both print and online format will only serve to enhance the comprehension and critical thinking of echocardiography-focused fellows. 
in terms of the performance of echocardiography techniquestransthoracic, transesophageal, and stress modalities-it is undeniable that frequent access to individual scanning of patients with a multitude of pathologies is essential. in the face of a relative dearth of clinical subjects, as well as concerns regarding prolonged interpersonal exposure and possible coronavirus transmission, programs (and certification authorities) must adapt, with utilization of simulators and other techniques focused on recognition of technical adequacy and pitfalls in acquisition of previously acquired study images. several simulation systems are available for purchase. these systems can often analyze probe position and angling in three-dimensional space far more effectively than with a human mentor. although the views and the simulator ''patient'' are idealized, fellows benefit tremendously from exposure to repeated simulator scanning to perfect their technique in all echocardiographic windows. pathologic cases can also be programmed, with appropriate clinical scenario and the opportunity for the operator to evaluate diverse pathologies in multiple views. both transthoracic and transesophageal performance are included on most simulators, and the trainee may be evaluated directly by a mentor or more remotely using extensive recording of probe motion and images obtained by the simulator. stress echo cases-using both practice sets 2 or via simulator-can be virtually ''performed'' and/or reviewed in similar fashion. ongoing participation in echo laboratory quality assurance projects, even when done on a remote basis, increases the sophistication of understanding of proper performance and application of echocardiographic techniques-essential for both level ii and level iii training. remote evaluation of clinical requests for echo examina-tions and application of appropriate use criteria principles further broaden a trainee's knowledge base. 
recognition of the appropriate application and mentored interpretation of an increased number of point-of-care ultrasound studies is an additional and unique skill that has been enhanced in the covid-19 pandemic. 2 furthermore, exposure to the echocardiographic findings of covid-19 patients and the unique clinical scenarios (such as elevation in biomarkers) that mandate at least a limited echocardiographic evaluation of the covid-19 patient will be an essential part of overall competency in the future. the covid-19 pandemic and the clinical exigencies that accompany it have merely magnified the difficulties with a time-based and numbers-/volume-based documentation of echocardiographic skill. the pandemic has conversely provided extensive opportunities for innovation and expansion of traditional educational and assessment strategies. most importantly, the desired outcome of true echocardiographic competence at all levels of training can be achieved despite the change in the training paradigms. references: 1. cocats 4 task force 5: training in echocardiography. 2. new challenges and opportunities for echocardiographic education during the covid-19 pandemic: a call to focus on competency and pathology. 3. ase advanced training statement on echocardiography (revision of the 2003 acc/aha clinical competence statement on echocardiography). key: cord-340260-z13aa1wk authors: farewell, v. t.; herzberg, a. m.; james, k. w.; ho, l. m.; leung, g. m. title: sars incubation and quarantine times: when is an exposed individual known to be disease free? date: 2005-10-19 journal: stat med doi: 10.1002/sim.2206 sha: doc_id: 340260 cord_uid: 340260-z13aa1wk the setting of a quarantine time for an emerging infectious disease will depend on current knowledge concerning incubation times. methods for the analysis of information on incubation times are investigated with a particular focus on inference regarding a possible maximum incubation time, after which an exposed individual would be known to be disease free.
data from the hong kong sars epidemic are used for illustration. the incorporation of interval‐censored data is considered and comparison is made with percentile estimation. results suggest that a wide class of models for incubation times should be considered because the apparent informativeness of a likelihood depends on the choice and generalizability of a model. there will usually remain a probability of releasing from quarantine some infected individuals and the impact of early release will depend on the size of the epidemic. copyright © 2005 john wiley & sons, ltd. control of infectious diseases is a major public health concern. after an individual's exposure to infection, opposing biological processes take place both in the infecting organism and in the host and these result in either that individual's development of clinical evidence of the disease or in an imperceptible host-victory. during this variable period of time, the individual may in turn become infectious to others and thus play a part in generating or perpetuating an epidemic. historically, attempts have been made to prevent and control epidemics by isolating, for an arbitrary period of time after which the biological struggle could be assumed complete, any individuals who might be incubating the disease. the word 'quarantine', derived from the latin word quaresma, means 40 and reflects the origin of the practice in the 40-day period of compulsory isolation of ships arriving in venice in the 14th century. as more has been learned about the different infections, quarantine periods have varied, but when a hitherto unknown disease appears it is extremely difficult to decide what arbitrary period should be applied. and yet this is especially important if there should be no effective treatment for the disease or its infectious state. controlling or preventing an epidemic then depends solely on releasing no infectious individuals into the general community.
but, as was noted earlier, the period of unperceived changes in the individual is variable. quarantine was one of the key aspects of infection control introduced during the recent severe acute respiratory syndrome (sars) epidemic. individuals who may have been exposed to the sars virus were quarantined for a fixed period of time, most commonly 10 days. the premise was that those who may have been exposed, but who showed no signs of illness after 10 days, were unlikely to come down with the disease. since sars was previously unknown, a quarantine policy offered the only control. an important paper on epidemiological aspects of sars was that of donnelly et al. [1] which made use of data from the hong kong experience with sars. the estimation of the incubation period in this paper was based on only '57 patients with only one exposure to sars over a limited time scale with recorded start and end dates'. donnelly et al. [1] assumed a gamma distribution for the incubation times, implicitly therefore assuming the possibility of very long incubation periods. the work reported here arose from a question related to the confidence a community should have that an individual who has passed through the sars quarantine period is disease-free and how long the quarantine period should be to make the probability of this very high. the concept of a maximum incubation time could be relevant to these considerations. there are many issues to be considered in setting a quarantine time, for example the extent of disruption to individuals' lives. also, and quite sensibly, it can be argued that there is unlikely to be a 'true' maximum incubation time. however, one motivation for a quarantine policy is the assumption that there is a reasonably well-behaved distribution of incubation times and some maximum time beyond which it is biologically quite implausible that symptoms may arise. this time could be the basis for setting a quarantine time.
whether it is helpful to think about quarantine in this way is debatable. to inform this debate, we investigated what might reasonably be inferred about such a maximum incubation time based on the moderately sized samples that would typically be available in the early course of an epidemic. for comparison, brief consideration is also given to the estimation of tail behaviour in untruncated distributions. our general premise is that careful specification of the available knowledge concerning the incubation distribution must be central to public health decisions to control epidemics. the work reported here should be viewed primarily as an exploration of statistical methodology that might be useful for this purpose, not as a critique of other approaches or specific estimates, such as those for sars. to illustrate the general principles involved, we follow donnelly et al. [1] and consider a gamma distribution for incubation times. thus if t is the random variable representing an incubation time, with an observed value of t = t, then a gamma distribution for t is specified by the probability density function g(t) = t^(a-1) exp(-t/s) / (Γ(a) s^a), where t > 0, a > 0 and s > 0. the expectation of this distribution is as and the variance is as^2. however, we now introduce the assumption that this distribution is truncated at some time m, so that 0 < t < m, and the density function for t now becomes f(t) = g(t)/G(m), where G is the cumulative distribution function corresponding to g. assume that data are available on n incubation times t_1, t_2, ..., t_n. maximum likelihood estimation of the parameters a, s and m can then be based on the likelihood function l(a, s, m) = Π_{i=1}^{n} g(t_i)/G(m), defined for m at least as large as the largest observed time. standard asymptotic distributional results for mles will not be applicable for the parameter m. in the consideration of inferential statements concerning m, there are parallels with jeffreys' 'bus problem' or, more accurately, 'tramcar problem', raised in a letter to fisher on 10 april 1934 [2, p. 163].
a brief summary is that in a town it is known that tramcars are numbered consecutively and that a new arrival in the town observes a tramcar numbered 100. can the new arrival infer anything about the number of tramcars, say n, in the town? the problem can be extended by allowing the observation of more than one tramcar. jeffreys considered the use of a prior proportional to 1/n, after showing that a constant prior leads to no useful inferential statements. a very similar problem is the estimation of n in binomial (n, p) models. in both situations, the choice of the prior can be shown to be highly influential inferentially. in the tramcar problem, the maximum observed number is the mle for n and is sufficient for its estimation if a uniform distribution is assumed for the observed numbers. it is, however, a biased estimate. a unique unbiased estimate can be derived but the question of optimal interval estimation remains. for the binomial problem, it has been shown that no unbiased estimator of n exists [3]. for the purpose of this paper, we simply define the mle of m and make no claims for its optimality in any sense. for public health purposes, the upper end-point of some interval of plausible values is more likely to be useful for decision making than a point estimate of the parameter. we consider the likelihood function simply as representing the information available from the data for inference concerning the unknown parameters. comparison of the shape of the likelihoods is sufficient for the issues considered here and the likelihood function, particularly through providing ratios of likelihoods, is simply regarded as giving the relative plausibilities of parameter values [4, p. 50]. the profile likelihood for m, l_p(m) = max_{a,s} l(a, s, m), can be defined for t_(n) ≤ m < ∞, where t_(n) is the largest observed incubation time; it will thus provide some indication of the values of m which are plausible given the observed data.
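the profile likelihood for the truncation point m can be computed numerically by maximizing the truncated-gamma likelihood over (a, s) at each candidate value of m. the following is a minimal sketch in python with numpy and scipy; the incubation times and starting values are made up for illustration and are not the hong kong data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

def neg_log_lik(params, t, m):
    """negative log-likelihood for a gamma(a, scale=s) truncated at m:
    each observation contributes g(t)/G(m)."""
    a, s = params
    if a <= 0 or s <= 0:
        return np.inf
    return -(gamma.logpdf(t, a, scale=s).sum()
             - len(t) * gamma.logcdf(m, a, scale=s))

def profile_lik(t, m_grid):
    """profile likelihood L_p(m), maximized over (a, s) and
    standardized so the maximum over the grid is one."""
    logL = []
    for m in m_grid:
        res = minimize(neg_log_lik, x0=[2.0, 2.0], args=(t, m),
                       method="Nelder-Mead")
        logL.append(-res.fun)
    logL = np.array(logL)
    return np.exp(logL - logL.max())

# illustrative incubation times (days); m must exceed the largest time
t = np.array([2, 3, 3, 4, 4, 5, 5, 6, 7, 8, 10, 14], dtype=float)
m_grid = np.linspace(t.max() + 0.01, 30, 60)
rel_lik = profile_lik(t, m_grid)
```

plotting `rel_lik` against `m_grid` reproduces the kind of display in figure 1: a flat curve indicates the data carry little information about m, while a curve dropping below, say, 10 per cent rules out the corresponding values of m as implausible.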
it is frequently convenient to standardize this function so that the maximum value is one by dividing by the value of the likelihood function at the mles. this function can then be defined as l*_p(m) = l_p(m)/l_p(m̂), where m̂ is the mle of m. while the mle of m will be the same irrespective of the distributional assumption made concerning t, the shape of the profile likelihood for m, and therefore the range of plausible values for m, will depend on the assumption and, in particular, on assumptions about the tails of the truncated distribution. while the gamma model is well known in epidemic theory, motivated by regarding the incubation period as a fixed number of independent and successive stages of infection, each exponentially distributed, alternatives to the gamma distribution should be considered from a model fitting perspective at least. for illustration, we consider the log-normal distribution. a log-normal regression model can be written as a location scale model y = log(t) = μ + σe, where e follows a standard normal distribution f(e) = (2π)^(-0.5) exp(-0.5 e^2) and where μ ∈ r and σ > 0. the development of a truncated log-normal model follows the development for the truncated gamma given in section 2.1, as does the likelihood development, with (μ, σ, m) replacing (a, s, m) as the set of model parameters. the use of this model is also considered in section 3. more general distributions than the truncated gamma and the truncated log-normal can also be considered. a convenient choice is the so-called log-gamma distribution of farewell and prentice [5] which represents a reparameterization and extension of a generalized gamma distribution. with μ, q ∈ r and σ > 0, the log-gamma model can be written as the location scale model y = log(t) = μ + σw, where the density f(w; q) for w is f(w; q) = |q| (q^(-2))^(q^(-2)) exp{q^(-2)(qw − exp(qw))} / Γ(q^(-2)) if q ≠ 0 and, when q = 0, is the standard normal distribution.
the cumulative distribution function of w can be written in terms of the incomplete gamma integral. the log-gamma distribution includes the weibull (q = 1) and exponential (q = σ = 1) distributions as special cases as well as the gamma (q = 1/σ) and log-normal (q = 0). the distribution of w is negatively skewed for q > 0 and positively skewed for q < 0. another alternative truncated distribution for incubation times is, therefore, the truncated log-gamma. the development of a profile likelihood for m will follow as in sections 2.1 and 2.2 with maximization over μ, σ and q. the use of this more general distribution is also illustrated in section 3. we consider data from 128 sars cases, a subset of 1755 cases in a hong kong hospital authority database, for which some information was available on time of infection. the data consist of the date of the appearance of the symptoms of sars and an earliest and latest possible date of exposure. initially, we restrict attention to 67 cases whose interval of possible exposure times is less than 5 days and also exclude 10 cases recorded as having first symptoms on the date of exposure. these may represent questionable records or cases related to an unusually high level of exposure, possibly hospital acquired, not of general relevance for setting quarantine times for controlling community outbreaks. relatively short intervals of exposure times are used to provide some reasonably precise information concerning incubation, as is done in aids seroconverter cohorts [6]. table i provides some comparison of the 67 cases with infection intervals less than 5 days with the cases with longer intervals. the variables examined were age, sex, health care worker status, vital status on hospital discharge and lactate dehydrogenase (ldh) level, where higher values of ldh reflect more severe disease.
it can be seen that while the cases are similar in age, sex and worker status, there is a higher death rate and some evidence of more severe disease in the cases with the longer possible infection intervals. this may reflect the fact that more severe cases arriving at a hospital might well have had a longer period with the disease and be less able to characterize precisely their possible time of infection. the impact of extending the allowed interval size is examined later. there remains, of course, the implicit assumption that the cases with some information on infection time are a random sample of the entire distribution of cases. however, the possibility of biases in reporting, heterogeneity in routes of transmission or varying infectious doses of the sars coronavirus remains. table ii presents the longest and shortest possible incubation times for these patients as well as the average of these two times, rounded to the nearest day since that is how the data would normally be recorded. we consider first the averaged times. for the data set of averaged times, figure 1 presents the profile likelihoods, l*_p(m), based on the gamma, log-normal and log-gamma models discussed in section 2. figure 1 allows the comparison of the apparent information in the data set under the different modelling assumptions. for the truncated gamma model, the profile likelihood never drops below 60 per cent, suggesting that any value of m greater than the maximum time observed, 14, is plausible. thus it appears that the data is uninformative with respect to the maximum incubation time if a gamma distribution is assumed. however, the situation is different for the truncated log-normal model. while the mle for m is again 14 days for this model, any value for m greater than 19.5 days makes the data more than 10 times less plausible than does the mle of 14 days.
for public health purposes, it could therefore be argued, based only on such data and an assumed log-normal model, that a quarantine time of 20 days might be necessary to ensure that sars cases were not released 'too early'. recall that the focus here is on the upper limit of an interval of plausible values rather than any specific estimator for the maximum incubation time. a possible reason for the widely different behaviour of the profile likelihood for the two models is a difference in model fit. if we consider the more general log-gamma model that includes both of the other models as special cases, the profile likelihood for m is more informative than that based on a gamma model, but it never falls below a value of 20 per cent. thus the apparent ability to rule out larger values of m under the log-normal model is not present if a less restrictive model assumption is made. this is true even though the maximum likelihood estimate of q is −0.13, a value close to the value q = 0 corresponding to the log-normal model. the hypothesis of a truncated gamma distribution would not be supported within this class of models. the maximum likelihood estimates of the various models are given in figure 2 along with a histogram of the data. the estimated log-normal and log-gamma distributions are quite similar while the truncated gamma does not appear to fit the data very well. all the models fail to some extent in reflecting the preponderance of short incubation times. since the use of the log-gamma model suggests there is little information for the estimation of a maximum incubation time, this may raise doubts about the assumption of a truncated distribution. to illustrate the different behaviour of the profile likelihoods for the log-normal and log-gamma models, figure 3 plots the estimated log-gamma and log-normal models fitted when the truncation time is taken to be m = 18 days.
this shows that the lack of fit is much more pronounced for the log-normal distribution than for the log-gamma, thus reducing the plausibility of m = 18 under the log-normal model. in general, as for the sars cases in hong kong, it will be difficult to specify precisely when the exposure leading to a case occurred. as with many other diseases therefore, the usual data on incubation times will derive from cases in which the exposure is known to be within a small window of time. this will generate interval-censored incubation time information for each case. assume that such information leads to a set of data {(t_li, t_ui); i = 1, ..., n} where, for individual i, the incubation time is known to lie between a lower limit, t_li, and an upper limit, t_ui. if we assume that f(t) = g(t)/G(m) is the probability density function for the actual incubation time, as in section 2 but where g(t) is not restricted to be a gamma distribution, then, with only minor modifications, the development given there can be followed for interval-censored data. the assumption of a maximum possible incubation time m creates some complication because it will limit the intervals of possible incubation times. it is also convenient to assume that all incubation times are interval-censored and that m is only allowed to take values greater than max(t_li). this avoids any possibility of a case contributing to the likelihood via its probability density function for the smallest value of m and via a probability value otherwise. in principle, other cases could be taken to have a known incubation time if such times were below any plausible values for m, but in practice such accuracy does not exist in any event. this type of consideration arises in other non-standard likelihood inference problems [7]. the likelihood function for the estimation of the parameters of g(t) and m can then be written l = Π_{i=1}^{n} [G(min(t_ui, m)) − G(t_li)] / G(m), and a profile likelihood for m can be defined in the usual manner.
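the interval-censored likelihood above, with each case contributing [G(min(t_ui, m)) − G(t_li)] / G(m), can be sketched as follows for the gamma case. this is an illustrative python/scipy sketch only; the exposure intervals are hypothetical and the optimizer, starting values and fixed m = 20 are arbitrary choices, not the paper's analysis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

def neg_log_lik_ic(params, lo, hi, m):
    """negative log-likelihood for interval-censored incubation times
    under a gamma(a, scale=s) distribution truncated at m: each case
    contributes [G(min(hi, m)) - G(lo)] / G(m)."""
    a, s = params
    if a <= 0 or s <= 0:
        return np.inf
    num = (gamma.cdf(np.minimum(hi, m), a, scale=s)
           - gamma.cdf(lo, a, scale=s))
    if np.any(num <= 0):
        return np.inf
    return -(np.log(num).sum() - len(lo) * gamma.logcdf(m, a, scale=s))

# hypothetical (t_li, t_ui) intervals in days; m must exceed max(t_li)
lo = np.array([1.5, 2.5, 3.5, 4.5, 6.5, 9.5])
hi = np.array([3.5, 4.5, 6.5, 7.5, 9.5, 14.5])
res = minimize(neg_log_lik_ic, x0=[2.0, 2.0], args=(lo, hi, 20.0),
               method="Nelder-Mead")
a_hat, s_hat = res.x
```

repeating the minimization over a grid of m values, as in the earlier sketch, yields the interval-censored profile likelihood for m.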
however, it is not possible to determine immediately the mle, m̂, which will lie somewhere between the lowest allowed value, max(t_li) + ε for small ε > 0, and max(t_ui). to illustrate the effect of interval-censoring, we consider the data in table ii which show the lower and upper limits of the incubation times for the 67 sars cases, for which average rounded times were used in section 3. we have subtracted 0.5 from the lowest time in days and added 0.5 to the highest to give appropriate intervals in continuous time and to make all observations interval-censored. as outlined earlier, it is convenient mathematically to make all observations interval-censored. observations with a single day of presumed exposure are given an interval of width 1 day in our analysis but, in principle, a much narrower interval could be used if the precision could be justified. a brief exploration suggests that this would have little impact on the likelihood. figure 4 presents profile likelihoods for m based on the gamma, log-normal and log-gamma models of section 2. these plots are based on calculations of the likelihoods for values of m at intervals of 0.25 and beginning at max(t_li) + 0.5 = 13. for convenience, the mle of m has been taken to be the value among these which gives the largest likelihood. further precision could be achieved but is not likely to be important. it can be seen that while the general pattern of the likelihoods is similar to that in figure 1, with interval-censoring not even the log-normal likelihood drops to less than the 10 per cent level. this is, of course, reasonable in the sense that much less precise information is being assumed about the incubation times and this must impact the precision of inferences. in spite of this slight, but perhaps important, change in the likelihoods, the fitted distributions are not much altered by the interval-censoring. for example, with the log-gamma model, the parameter estimates based on the interval-censored data in table ii, (1.33, 0.81, −0.10), are close to those based on the averaged times in table ii.
finally, to show the effect of more extreme interval-censoring, we consider extending the set of data in table ii by including additional sars cases from hong kong whose period of possible exposure, which defines the width of the interval within which their incubation time lies, is thought to be less than 10 days rather than 5 days. this produces a set of data of 86 cases and figure 5 presents the relative likelihoods for the three models based on these data. the profile likelihoods are seen to be substantially less informative, with the gamma likelihood being virtually flat for m values greater than 16. note that one of the additional cases has an interval of (13.5, 19.5) for their incubation time in days. the use of censoring intervals of width 10 days is quite large in the context of sars and could not be recommended in practice. consideration of models for incubation times which incorporate truncation may provide valuable information for public health purposes. nevertheless, as is illustrated in the earlier sections, there might often be insufficient evidence to be very confident about a maximum incubation time, even within the context of a particular model. in this situation, an alternative approach is to set a quarantine time on the basis of percentile estimation, i.e. a quarantine time might be set as the time below which 95 per cent of cases are expected to develop. for comparison with the analyses presented earlier, the use of parametric models for this purpose is considered here. model choice will be important since the behaviour of a distribution in the tail is very model dependent. thus, the log-gamma model, which incorporates a significant component of model choice through the parameter q, might be recommended. a more ad hoc approach to model choice could be adopted although the uncertainty involved in the choice might be more difficult to incorporate into inferences.
figure 6 illustrates the best fitting log-gamma and log-normal distributions, not involving truncation, to the average incubation time data in table ii. the slightly better fit of the log-gamma at shorter times can be seen and there is some difference in the tails. for the log-gamma, the probability of an incubation time greater than 14 days is 0.013 while, for the log-normal, it is 0.032. the mle for the log-gamma, in contrast to the case with truncated distributions, is further from the log-normal model, with q̂ = 0.61. essentially this reflects the need for the distribution to drop more quickly at larger values of t. the estimated 95th percentiles for the log-gamma and log-normal distributions are 10.66 and 12.09. confidence intervals for these values can be derived by simulating from the estimated asymptotic distribution of the mles to produce an interval within which 95 per cent, say, of the corresponding simulated percentiles lie. this methodology has been compared with a delta method and a non-parametric bootstrap and performed well for the estimation of a complicated function of mles [8]. based on a simulation of 1000 values, the corresponding 95 per cent intervals are (9.24, 13.68) and (9.95, 15.34). interestingly, these values suggest the commonly adopted quarantine time for sars of 10 days is associated with the possibility of 'releasing' approximately 5 per cent of patients 'too early'. in fact, to ensure that this is the maximum fraction released, consideration should be given to longer quarantine times reflecting the upper endpoint of the estimation intervals. note that if the interval-censored data in table ii is used to fit the log-gamma model, then the estimated 95th percentile is 10.2 with a confidence interval of (8.64, 13.68), an interval 14 per cent longer than that for the average data. the present paper explores methodology to characterize the available knowledge on incubation times early in an infectious epidemic.
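the percentile-based approach can be sketched numerically. the paper simulates from the estimated asymptotic distribution of the mles; the sketch below instead uses a closely related parametric bootstrap (refitting on samples simulated from the fitted model and collecting the 95th percentile from each refit), with a gamma model rather than the log-gamma and made-up incubation times, so the numbers it produces are illustrative only.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)

# hypothetical incubation times (days), not the hong kong data
t = np.array([2, 3, 3, 4, 4, 5, 5, 6, 7, 8, 10, 14], dtype=float)

# fit an (untruncated) gamma by maximum likelihood, location fixed at 0
a_hat, _, s_hat = gamma.fit(t, floc=0)
p95_hat = gamma.ppf(0.95, a_hat, scale=s_hat)  # point estimate of the 95th percentile

# parametric bootstrap: simulate from the fitted model, refit,
# and collect the 95th percentile from each refit
boot = []
for _ in range(200):
    sim = gamma.rvs(a_hat, scale=s_hat, size=len(t), random_state=rng)
    a_b, _, s_b = gamma.fit(sim, floc=0)
    boot.append(gamma.ppf(0.95, a_b, scale=s_b))
ci = np.percentile(boot, [2.5, 97.5])  # 95 per cent interval for the percentile
```

as the paper argues, a quarantine time chosen from the upper endpoint of such an interval, rather than from the point estimate, guards against releasing more than the target fraction of infected individuals.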
issues such as different routes of infection or different subsets of infectious individuals have not been discussed. in principle, the models used could be extended to incorporate explanatory variables defined by such factors. preliminary investigations of possible explanatory variables in the hong kong data did not reveal any strong relationships. we have made pragmatic decisions as to which data to include for model fitting. these might warrant revisiting in a more comprehensive analysis. also, since infection events cannot be observed, some data on incubation times will inevitably be 'guesses'. many aspects of the comparison of methodologies will not be altered by this but such data will naturally give rise to interval-censoring, which the methodologies discussed here do allow. a further extension is to consider individuals with more than one period of possible exposure prior to the development of symptoms. meltzer [9] considers a simple simulation approach to this. definitive conclusions about the choice of statistical methodology are not warranted based on the investigations reported here. in the early days of an epidemic this will usually be the case. thus, the range of inferences based on different methodologies will often be the basis of decisions. nevertheless, some comments can be made. inference concerning a truncation parameter is apparently more informative the stronger the assumptions made about the form of the incubation distribution. in the absence of independent reasons to make such an assumption, however, the use of a general model, such as the log-gamma, for inference should be considered, at least as part of a sensitivity analysis. the key aspect to such inferences will be the shape of the tails encompassed in the model for the incubation times. in the absence of precise information on a truncation time, estimation of percentiles provides a natural way to fix quarantine times.
it can also be argued that this approach is less risky, and more realistic, than making the assumption of a truncated distribution. because of its flexibility in the tails, the log-gamma can also be recommended for percentile estimation. investigation of other methods is warranted. possibilities would include the use of sample quantiles to define non-parametric confidence intervals for population quantiles [10, chapter xi, section 3.1] or the asymptotic distribution of sample quantiles [4, appendix a.2.3]. whatever method is adopted, the uncertainty involved in any estimation of percentiles should be incorporated into public health decisions. in the setting of quarantine times, other factors must also be considered. meltzer [9] presents evidence for some sars incubation times greater than 10 days. it appears, based on the data presented here, that a quarantine time of 10 days for sars might release one infectious patient in twenty. for a quarantined population of 200, this would correspond to 10 individuals, and the larger the quarantined population, the larger the number of released infectious individuals. thus the length of a quarantine period might well be set in light of the expected number of quarantined individuals. also, consideration of the psychological and economic impact of quarantine on individuals and the population as a whole must be balanced against the risks associated with early release of infected individuals. finally, note that the implicit assumption in setting a quarantine time is that quarantine is isolation of x days from the supposed day of contact, whereas it is often implemented as isolation of x days from the first day on which an individual is identified as having been exposed to the disease. this may build in an additional margin of safety from the public health perspective.
epidemiological determinants of spread of causal agent of severe acute respiratory syndrome in hong kong
statistical inference and analysis
selected correspondence of r.a
on maximum likelihood estimation of the binomial parameter n
theoretical statistics
a study of distributional shape in life testing
extending public health surveillance of hiv disease
on a singularity in the likelihood for a change-point hazard rate model
a markov model for hiv disease progression including the effect of hiv diagnosis and treatment: application to aids prediction in england and wales
multiple contact dates and sars incubation periods
mcgraw-hill kogakusha: tokyo, 1974.
we thank the referees for their comments that led to an improved presentation. this work was supported by the medical research council (u.k.), the national science and engineering research council (canada) and the research fund for the control of infectious diseases of the health, welfare and food bureau of the hong kong sar government.
key: cord-299048-92j3p8e5 authors: suomi, aino; schofield, timothy p.; butterworth, peter title: unemployment, employability and covid19: how the global socioeconomic shock challenged negative perceptions toward the less fortunate in the australian context date: 2020-10-15 journal: front psychol doi: 10.3389/fpsyg.2020.594837 sha: doc_id: 299048 cord_uid: 92j3p8e5
unemployed benefit recipients are stigmatized and generally perceived negatively in terms of their personality characteristics and employability. the covid19 economic shock led to rapid public policy responses across the globe to lessen the impact of mass unemployment, potentially shifting community perceptions of individuals who are out of work and rely on government income support. we used a repeated cross-sections design to study change in stigma tied to unemployment and benefit receipt in a pre-existing pre-covid19 sample (n = 260) and a sample collected during the covid19 pandemic (n = 670) using a vignette-based experiment.
participants rated attributes of characters who were described as being employed, working poor, unemployed, or receiving unemployment benefits. the results show that, compared to employed characters, unemployed characters were rated substantially less favorably at both time points on their employability and personality traits. the difference in perceptions of the employed and unemployed was, however, attenuated during covid19, with benefit recipients perceived as more employable and more conscientious than pre-pandemic. these results add to knowledge about the determinants of welfare stigma, highlighting the impact of the global economic and health crisis on perception of others. the onset of the covid19 pandemic saw unemployment climb to the highest rate since the great depression in many regions globally 1 . over just one month, from march to april 2020, the unemployment rate in the united states increased from 4.4% to over 14.7% and in australia the effective rate of unemployment increased from 5.4 to 11.7% (australian bureau of statistics, 2020) 2 . in australia, a number of economic responses were rapidly introduced, including a wage subsidy scheme (jobkeeper) to enable employers to keep their employees connected to the workforce, one-off payments to many welfare recipients, and a doubling of the usual rate of the unemployment benefits (jobseeker payment) through a new coronavirus supplement payment. at the time of writing in july 2020, many countries, including australia, remain in the depths of a health and economic crisis. a rich research literature from a range of disciplines has documented the pervasive negative community views toward those who are unemployed and receiving unemployment benefits, with the extent of this "welfare stigma" being particularly pronounced in countries with highly targeted benefit systems such as the united states and australia (fiske et al., 2002; baumberg, 2012; contini and richiardi, 2012; schofield and butterworth, 2015).
the stigma and potential discrimination associated with unemployment and benefit receipt are known to have negative impacts on health, employability and equality (for meta-analyses, see shahidi et al., 2016). in addition, the receipt of unemployment benefits co-occurs with other stigmatized characteristics such as poverty and unemployment (schofield and butterworth, 2018a). the changing context related to the covid19 crisis provides a novel opportunity to better understand the determinants of stigmatizing perceptions of unemployment and benefit receipt. negative community attitudes and perceptions of benefit recipients are commonly explained by the concept of "deservingness" (van oorschot and roosma, 2017). the unemployed are typically seen as less deserving of government support than other groups because they are more likely to be seen as responsible for their own plight, ungrateful for support, not in genuine need (petersen et al., 2011; van oorschot and roosma, 2017), and lacking reciprocity (i.e., seen as taking more than they have given - or will give - back to society; van oorschot, 2000; larsen, 2008; petersen et al., 2011; aarøe and petersen, 2014). given the economic shock associated with covid19, unemployment and reliance on income support are less likely to be seen as an outcome within the individual's control, which may therefore amplify perceptions of deservingness. prior work has shown that experimentally manipulating perceived control over circumstances does indeed change negative stereotypes (aarøe and petersen, 2014). a number of experimental paradigms have been used to investigate perceptions of "welfare recipients" and the "unemployed." the stereotype content model (scm; fiske et al., 2002), for example, represents the stereotypes of social groups on two dimensions: warmth, relating to being friendly and well-intentioned (rather than ill-intentioned); and competence, relating to one's capacity to pursue intentions (fiske et al., 2002).
using this model, the "unemployed" have been evaluated as low in warmth and competence across a variety of welfare regime types (fiske et al., 2002; bye et al., 2014). the structure of stereotypes has also been studied using the big five personality dimensions (schofield and butterworth, 2018b; schofield et al., 2019): openness, conscientiousness, extraversion, agreeableness, and emotional stability (for background on the big five see: goldberg, 1993; hogan et al., 1996; saucier and goldberg, 1996; mccrae and terracciano, 2005; srivastava, 2010; chan et al., 2012; löckenhoff et al., 2014). there are parallels between the big five and the scm: warmth relating to the dimension of agreeableness, and competence relating to conscientiousness (digman, 1997; ward et al., 2006; cuddy et al., 2008; abele et al., 2016), and these constructs have been found to predict employability and career success (barrick et al., 2001; cuesta and budría, 2017). warmth and agreeableness have also been linked to the welfare-specific characteristics of deservingness (aarøe and petersen, 2014). the term "employability" has been previously defined as a set of achievements, skills and personal attributes that make a person more likely to gain employment and lead to success in their chosen career pathway (pegg et al., 2012; o'leary, 2017, 2019). while there are few studies examining perceptions of others, perceptions of one's own employability have been recently studied in university students, jobseekers (atitsogbe et al., 2019) and currently employed workers (plomp et al., 2019; yeves et al., 2019), consistently showing higher levels of perceived employability being linked to personal and job-related wellbeing as well as career success. examining others' perceptions of employability may be more relevant to understanding factors impacting on actual employment outcomes.
a majority of studies examining others' perceptions of employability have focused on job-specific skills (lowden et al., 2011; dhiman, 2012; saad and majid, 2014). building on this previous work, our own research has focused on the effects of unemployment by drawing on frameworks of the big five, scm and employability in pre-covid19 samples (schofield and butterworth, 2018b; schofield et al., 2019). our studies consistently show that unemployed individuals receiving government payments are perceived as less employable (poorer "quality" workers and less desirable for employment) and less conscientious. we found a similar but weaker pattern related to agreeableness, emotional stability, and the extent that a person is perceived as "uniquely human" (schofield et al., 2019). further, we found that vignette characters described as currently employed but with a history of welfare receipt were indistinguishable from those described as employed and with no reference to benefit receipt (schofield et al., 2019). findings such as this provide experimental evidence that welfare stigma is malleable and can be challenged by information inconsistent with the negative stereotype (schofield and butterworth, 2018b; schofield et al., 2019; see also petersen et al., 2011). the broad aim of the current study was to extend this previous work by examining the impact of covid19 on person perceptions tied to employment and benefit recipient status. it repeats a pre-covid19 study of an australian general population sample in the covid19 context, drawing on the same sampling frame, materials and study design to maximize comparability. the study design recognizes that the negative perceptions of benefit recipients may reflect a combination of different sources of stigma: poverty, lack of work, and benefit receipt.
therefore, the original study used four different conditions to seek to differentiate these different sources: (1) employed; (2) working poor; (3) unemployed; and (4) unemployed benefit recipient. finally, for the covid19 sample we added a novel fifth condition: (5) unemployment benefit recipient also receiving the "coronavirus" supplement. we expect that the reference to a payment specifically applicable to the covid19 context may lead to more favorable perceptions (more deserving) than the other unemployed and benefit receipt characters. the study capitalizes on a major exogenous event, the covid19 crisis, which we hypothesize will alter perceptions of deservingness by fundamentally challenging social identities and perceptions of one's own vulnerability to unemployment. the study tests three hypotheses, and in doing so makes an important empirical and theoretical contribution to understanding how deservingness influences person perception, and understanding of the potential "real world" barriers experienced by people seeking employment in the covid19 context. the pre-covid19 assessment uses a subset of data from a pre-registered study, but this reuse of the data was not preregistered 3 . we hypothesize that, at time 1 (pre-covid19 assessment), we will find that employed characters will be rated more favorably than characters described as unemployed and receiving unemployment benefits, particularly on dimensions of conscientiousness, worker and boss suitability. moreover, we expect a gradient in perceptions across the four experimental conditions, from employed, to working poor, to unemployed, to unemployed receiving benefits, and expect a similar trend for the other outcome measures included in the study. we hypothesize that the character in the unemployed condition(s) would be rated less negatively relative to the employed condition(s) at time 2, compared to time 1.
we predict a two-way interaction between time and condition for the key measures (conscientiousness, worker and boss suitability) and a similar trend on other outcomes. we expect that explicit reference to the unemployed benefit character receiving the "coronavirus supplement" payment will increase the salience of the covid19 context and lead to more positive ratings of this character relative to the standard unemployed benefit condition in the pre-covid19 and covid19 occasions. two general population samples (pre-covid19 and covid19) were recruited from the same source: the australian online research unit (oru) panel. the oru is an online survey platform that provides access to a cohort of members of the general public who are interested in contributing to research. the oru randomly selects potential participants who meet study eligibility criteria, and provides participants with an incentive for their participation. the sample for the time 1 (pre-covid19) occasion was part of a larger study (768 participants) collected in november 2018. from this initial dataset, we were able to use data from 260 (50.1% female, m age = 42.1 [16.7] years, range: 18-82) participants who were presented with the one vignette scenario that we could replicate at the time of the social restrictions applicable in the covid19 context (i.e., the vignette character was not described as going out and visiting friends, as these behaviors were illegal at time 2). the sample for time 2 (covid19) was collected in may-june 2020, at the height of the lockdown measures in australia, and included 670 participants (40.5% female, m age = 51.0 [15.8] years, range: 18-85). the two samples were broadly similar (see below), though the proportion of male participants at time 2 was greater than at time 1. the pre-covid19 assessment at time 1 was restricted to those participants who completed the social-distancing-consistent vignette in the first position, to avoid potential order/context effects.
this provided, on average, 65 respondents in each of the four experimental conditions. using the results from our previously published studies as indicators of effect size (schofield and butterworth, 2018b; schofield et al., 2019), monte carlo simulation was used to identify the time 2 sample size that would provide 90% power to detect an interaction effect that represented a 50% decline in the difference between the two employment and two unemployment conditions on the three key measures at the covid occasion relative to the pre-covid difference. this sample size of 135 per condition also provided between 60 and 90% power to detect a difference of a similar magnitude between the employed and unemployment benefit conditions across the two measurement occasions. given previous evidence that the differences between employed and unemployed/welfare conditions are robust and large for conscientiousness and worker suitability (schofield and butterworth, 2018b), the current study is also adequately powered to detect the most replicable effects of unemployment and welfare on perceptions of a person's character (even in the absence of the hypothesized interaction effect). the procedures were identical on both study occasions. participants read a brief vignette that described a fictional character, and then rated the character on measures reflecting personality dimensions, their suitability as a worker or boss, morality, warmth, and competence, as well as the participant's beliefs that the character should feel guilt and shame, or makes them feel angry and disgusted. at time 1 (pre-covid19 context) participants then repeated this process with a second vignette, but we do not consider data from the second vignette.
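a stripped-down version of such a monte carlo power calculation might look as follows; the gap sizes, residual sd, and the reduction of the design to two conditions by two occasions are illustrative assumptions, not the values or the multilevel model used for the actual study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_power(n_per_cell=135, gap_pre=0.6, gap_covid=0.3, sd=1.5, n_sims=500):
    """estimate power to detect a condition-by-time interaction, i.e. a
    decline in the employed-unemployed rating gap from gap_pre to gap_covid."""
    hits = 0
    for _ in range(n_sims):
        emp_t1 = rng.normal(0.0, sd, n_per_cell)
        une_t1 = rng.normal(-gap_pre, sd, n_per_cell)
        emp_t2 = rng.normal(0.0, sd, n_per_cell)
        une_t2 = rng.normal(-gap_covid, sd, n_per_cell)
        # interaction estimate: change in the employed-unemployed gap over time
        est = (emp_t2.mean() - une_t2.mean()) - (emp_t1.mean() - une_t1.mean())
        se = sd * np.sqrt(4.0 / n_per_cell)  # known-variance normal approximation
        hits += abs(est / se) > 1.96
    return hits / n_sims

power = simulate_power()
```

the required sample size is then found by re-running the simulation over a grid of n_per_cell values until the estimated power reaches the target.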
the key experimental conditions were operationalized by a single sentence embedded within the vignette that was randomly allocated to different participants (employed: "s/he is currently working as a sales assistant in a large department store"; working poor: "s/he is currently working as a sales assistant, on a minimum-wage, in a large department store"; unemployed: "s/he is currently unemployed"; and receipt of unemployment benefits: "s/he is currently unemployed, and is receiving government benefits due to his/her unemployment"). the four experimental conditions were identical at both time points. at time 2, an additional covid19-specific condition was included (to maximize the salience of the covid19 context): "s/he is currently unemployed and is receiving government benefits, including the coronavirus supplement, due to his/her unemployment." the working poor, unemployed, and benefit receipt conditions all imply poverty/low income. in australia, few minimum-wage jobs are supplemented by tips, and so a minimum-wage job indicates a level of relative poverty. a full-time worker in a minimum-wage job is in the bottom quartile of income earners (australian bureau of statistics, 2017). prior to the covid19 crisis and the increase in payment level, a single person with no dependents receiving unemployment benefits received approximately 75% of the minimum-wage in cash assistance. during covid19, and at the time of the data collection, the rate of pay exceeded the minimum-wage. several characteristics of the vignette character, including age and relationship status, were balanced across study participants. age was specified as either 27 or 35 years, relationship status was either "single" or "lives with his/her partner." the character's gender was also varied and names were stereotypically white.
for time 1, manipulated characteristics yielded 32 unique vignettes, comprised of four key experimental conditions (employed, working poor, unemployed, and unemployment benefits) × 2 ages × 2 genders × 2 relationship statuses. for time 2, manipulated characteristics yielded 40 unique vignettes, comprised of five key experimental conditions (employed, working poor, unemployed, unemployment benefits, and unemployed + coronavirus supplement) × 2 ages × 2 genders × 2 relationship statuses. the vignette template construction is presented in figure 1, including each component of the vignette that was randomly varied.
figure 1 | outline of vignette construction in 4 parts. bullet pointed options replace the underlined text, with gendered pronouns in each option selected to match character name.
in both studies, participants were required to affirm consent after debriefing or had their data deleted. participant comprehension of the vignettes was checked via three free-response comprehension questions about the character's age and weekend activities. participants who did not answer any questions correctly were not able to continue the study. personality, employability (suitability as a worker or boss), communion and agency, cognitive and emotional moral judgments, and dehumanization were included as the study outcomes. while not all personality or character dimension measures can be considered as negative or positive, higher scores were used in the study to indicate more "favorable" perceptions by the participants of the characters. the ten item personality inventory was used to measure the big five (gosling et al., 2003) and adapted to other-oriented wording (i.e., "i felt like the person in the story was. . .") (schofield et al., 2019). two items measured each trait via two paired attributes. one item contained positive attributes and one contained negative attributes.
participants indicated the extent to which "i think [name] is [attributes]" from 1 (strongly disagree) to 7 (strongly agree). the order of these 10 items was randomized. agreeableness (α = 0.54) was assessed from "sympathetic, warm" and "critical, quarrelsome" (reversed); extraversion (α = 0.50) was assessed from "extraverted, enthusiastic" and "reserved, quiet" (reversed); conscientiousness (α = 0.76) was assessed from "dependable, self-disciplined" and "disorganized, careless" (reversed); openness to experience (α = 0.36) was assessed from "open to new experiences, complex" and "conventional, uncreative" (reversed); emotional stability (α = 0.65) was assessed from "calm, emotionally stable" and "anxious, easily upset" (reversed). single item measures: "i think [name] would be a good worker" (worker suitability) and "i think [name] would be a good boss" (boss suitability) were rated on the same scale as the personality measure. the order of these two items was randomized. higher scores indicated better employability. communion and agency was assessed using bocian et al.'s (2018) adaptation of abele et al.'s (2016) scale that measures the fundamental dimensions of communion and agency using two subscales for each dimension. the morality and warmth subscales are seen as measures of communion (referred to as warmth in scm; fiske, 2018), while the competence and assertiveness subscales measure agency (what fiske refers to as competence in scm; fiske, 2018). this subscale structure has been identified in multiple samples. participants indicated the extent to which "i think [name] [attributes]" from 1 (not at all) to 5 (very much so).
morality (α = 0.92) was measured with six items, e.g., "is just," "is fair"; warmth (α = 0.96) with six items, e.g., "is caring," "is empathetic"; competence (α = 0.90) with five items, e.g., "is efficient," "is capable"; and assertiveness (α = 0.83) with six items, e.g., "is self-confident," "stands up well under pressure." these items were presented in a random order. dehumanization was measured with a composite scale of two items drawn from bastian et al. (2013): "i think [name] is mechanical and cold, like a robot" and "i think [name] lacked self-restraint, like an animal." the order of these two items was randomized. we reverse coded the two items for the analyses for consistency with the other variables, so that higher scores were indicative of more favorable perceptions. moral emotions were measured by four items that asked about emotional responses to the character that were framed as self-condemning or other-condemning (haidt, 2003; giner-sorolla and espinosa, 2011). two other-condemning items asked the participant about their own emotional response to the character in the vignette (anger: "[name]'s behavior makes me angry"; disgust: "i think [name] is someone who makes me feel disgusted," α = 0.92). the two self-condemning items asked about the character's emotional response (guilt: "[name] should feel guilty about [his/her] behavior"; shame: "i think [name] should feel ashamed of [him/her]self," α = 0.95). we reverse coded the two scales to ensure consistency with other variables, with higher scores indicative of more favorable perceptions.
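the internal-consistency coefficients (α) reported for these scales follow the standard cronbach's alpha formula, which can be computed as below; the ratings are fabricated for illustration only.

```python
import numpy as np

def cronbach_alpha(items):
    """cronbach's alpha for an (n_respondents x n_items) array:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# fabricated two-item ratings on a 1-7 scale (e.g., two reverse-coded items)
ratings = np.array([[6, 5], [7, 6], [4, 5], [3, 2], [5, 6], [6, 6], [2, 3]])
alpha = cronbach_alpha(ratings)
```

with only two items, alpha is driven entirely by the inter-item correlation, which is why short scales such as openness (α = 0.36) can show low values even when each item is sensible on its own.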
with the exception of the moral emotion and the communion and agency scales, which are new to this study, and the previously tested openness to experience, our previous research has demonstrated differences between the ratings of employed and unemployed characters on the included outcome measures (schofield and butterworth, 2018b; schofield et al., 2019). we undertake the analysis using a four-step process. we use mixed-effects multi-level models, with the 14 outcome measures nested within participants, and predicted by fixed (between-person) terms representing the experimental "condition," "time" (pre-/covid19) and their interaction, and controlling for measure differences and allowing for random effects at the participant level: i) we initially assessed the effect of condition in the pre-covid19 occasion to establish the baseline pattern of results; ii) we then evaluated the interaction term and, specifically, the extent to which the baseline difference observed between employment and unemployment conditions is attenuated at time 2 (covid19 occasion); iii) we tested the three-way interaction between condition, occasion and measure to assess whether this two-way interaction varies across the outcome measures; and, if significant, iv) repeated the modeling approach using separate linear regression models for each outcome measure. our initial model contrasts the two employed (employed and working poor) and unemployed (unemployed and benefit receipt) conditions. the second model examines the four separate vignette conditions separately, differentiating between unemployed and unemployed benefit conditions. finally, we contrast the three unemployment benefit conditions: (1) unemployment benefit recipients at time 1; (2) unemployment benefit recipients at time 2; and (3) unemployment benefit recipients receiving the coronavirus payment at time 2. for all models, we consider unadjusted and adjusted results (controlling for participant demographics).
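a sketch of this multilevel specification on synthetic data, using statsmodels' mixedlm (the variable names, effect sizes, and two-measure simplification are assumptions for illustration, not the study's actual data or code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400  # synthetic participants

# between-person design: condition (employed vs unemployed) crossed with time,
# with (here) two outcome measures nested within each participant
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), 2),
    "measure": np.tile(["conscientiousness", "worker"], n),
    "unemployed": np.repeat(rng.integers(0, 2, n), 2),
    "covid": np.repeat(rng.integers(0, 2, n), 2),
})
person_re = np.repeat(rng.normal(0, 0.5, n), 2)  # random intercept per participant
df["rating"] = (4.5 - 0.6 * df["unemployed"]
                + 0.3 * df["unemployed"] * df["covid"]
                + person_re + rng.normal(0, 1.0, 2 * n))

# fixed effects for condition, time and their interaction, plus a measure term;
# random intercepts grouped by participant
model = smf.mixedlm("rating ~ unemployed * covid + measure", df, groups=df["pid"])
result = model.fit()
```

the coefficient on the unemployed:covid term plays the role of the attenuation effect tested in step ii) above.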
to address a potential bias from gender differences between samples, post-stratification weights were calculated for the covid19 sample to reflect the gender by age distribution of the pre-covid19 sample. all models were weighted. the two samples from time 1 (pre-covid19) and time 2 (covid19) were comparable on all demographic variables, except for gender (χ2[1, 923] = 7.04, p < 0.001) and employment (χ2[1, 910] = 27.66, p < 0.001): the gender distribution was more balanced at time 1, with 49.8% males, compared to 59.5% males at time 2. there was also a significant increase in unemployment, with 20.9% of time 1 participants out of work compared to 39.3% of the time 2 participants. this was likely reflective of the unemployment rate nearly doubling in australia during the covid19 crisis. bivariate correlations showed significant positive correlations between all 14 outcomes (p's < 0.001), except for extraversion, which was only positively correlated with emotional stability, boss suitability, warmth, assertiveness, and competence (p's < 0.05). the results, both adjusted and unadjusted, from the initial overall multilevel model using a binary indicator of whether vignette characters were employed (those in the employed or working poor conditions) or unemployed (unemployed or welfare) and testing the interaction between vignette condition and time (pre-covid19 vs covid19) are presented in supplementary table s1. the adjusted results (holding participant age, gender, employment, and education constant) indicated that the unemployed characters were rated lower than the employed characters at time 1 (b = −0.57). this difference in the ratings of employed and unemployed characters was reduced in the covid19 assessment at time 2, declining from 0.57 to 0.26, across all the outcome measures.
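the post-stratification weighting described at the start of this section reduces to a cell-by-cell ratio of target to sample proportions; a pandas sketch with made-up gender-by-age cells (totals chosen to match the two sample sizes) might be:

```python
import pandas as pd

# made-up gender-by-age-band counts for the target (pre-covid19) sample and
# the sample to be weighted (covid19); real cells would come from the data
target = pd.DataFrame({
    "gender": ["f", "f", "m", "m"],
    "age_band": ["18-44", "45+", "18-44", "45+"],
    "count": [70, 60, 65, 65],
})
sample = pd.DataFrame({
    "gender": ["f", "f", "m", "m"],
    "age_band": ["18-44", "45+", "18-44", "45+"],
    "count": [120, 150, 180, 220],
})

target["p_target"] = target["count"] / target["count"].sum()
sample["p_sample"] = sample["count"] / sample["count"].sum()

# weight for each cell: target proportion divided by sample proportion
cells = sample.merge(target[["gender", "age_band", "p_target"]],
                     on=["gender", "age_band"])
cells["weight"] = cells["p_target"] / cells["p_sample"]
```

applying each respondent's cell weight makes the weighted covid19 sample reproduce the pre-covid19 gender-by-age distribution, which is the bias correction the models rely on.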
the addition of the three-way interaction between condition, time and outcome measure significantly improved overall model fit, χ2(52) = 482.94, p < 0.001, indicating the interaction between condition and time varied over measures. a series of separate regression models considering each outcome separately (see supplementary table s2) showed a significant effect of condition (employment rated higher than unemployment) at time 1 (pre-covid) for all outcomes except openness and extraversion. the lower ratings for unemployed relative to employed characters were significantly moderated at time 2 on the competence, worker and boss suitability, and guilt/shame outcomes (p's < 0.05). the next set of analyses considers the four separate vignette conditions, differentiating between the unemployed and unemployed benefit recipient conditions. the overall mixed-effects multilevel model incorporating the four distinct vignette conditions provided evidence of significant effects for condition and condition by time in both adjusted and unadjusted models. the result for the adjusted model (table 1), averaged across the various outcomes, replicated the previous finding of a difference in ratings of employed and unemployed characters at time 1 (pre-covid19): relative to the employed condition, there was no difference in ratings of the working poor, but the unemployed and the unemployed benefit recipient characters were rated less favorably. there was some evidence of a gradient across the unemployed characters: the average rating of the unemployed condition was higher than the unemployed benefit condition, though this difference was not statistically significant. in the presence of the interaction effect, the non-significant effect of time shows that, averaged across all the outcome measures, there was no difference in the rating of the characters in the employed condition on the pre-covid19 and covid19 occasions.
we tested for the effect of sociodemographic characteristics as covariates in the adjusted models (employment and benefit receipt status, education, age, and gender) but found no main effects of any of the covariates except for gender: females tended to rate characters higher (b = 0.13, 95% ci [0.04, 0.21]) compared to males. testing the heterogeneity of these patterns across outcomes via the inclusion of a three-way interaction between vignette condition, occasion and measure significantly improved overall model fit, χ2(104) = 533.40, p < 0.001, prompting analysis of each outcome separately. the separate linear regressions for each outcome measure (supplementary table s3) show that ratings of unemployed benefit recipients at time 1 (pre-covid19) were significantly lower than the employed characters for all outcomes except openness and extraversion. statistically significant condition by time terms indicated that the unemployed benefit effect was moderated at time 2 (covid19) for the three key outcome measures identified in previous research (conscientiousness, worker and boss suitability) and for the measure of guilt and shame. figure 2 depicts this interaction for these four outcomes. these occurred in two profiles. for conscientiousness, worker and boss suitability, covid19 attenuated the negative perceptions of unemployed relative to employed characters, providing support for hypothesis 2. by contrast, covid19 induced a new difference, such that participants thought employed characters should feel higher levels of guilt and shame at time 2, compared to time 1. while the "working poor" condition was not central to the covid19 hypotheses, we note that we found no evidence that ratings of these characters on any outcome differed from the standard employed character, or that this difference was changed in assessment at time 2 (covid19 occasion).
the inclusion of the fifth covid19-specific unemployment benefit condition did not generate more positive (or different) ratings than the standard unemployment benefit condition. overall mixed-effects multilevel models, both adjusted and unadjusted, indicated that characters in the coronavirus supplement condition (adjusted model: b = 0.26, 95% ci [0.06, 0.45]) and the general unemployed benefit recipient condition at time 2 (adjusted model: b = 0.28, 95% ci [0.08, 0.48]) were both rated more favorably in comparison to unemployed benefit recipients at time 1. there was no difference between these two time 2 benefit recipient groups (b = 0.03, 95% ci [−0.12, 0.19]). these results did not support hypothesis 3. previous research has demonstrated that people who are unemployed, and particularly those receiving unemployment benefits, are perceived more negatively and as less employable than those who are employed. however, the economic shock associated with the covid19 crisis is likely to have challenged people's sense of their own vulnerability and risk of unemployment, and altered their perceptions of those who are unemployed and receiving government support. the broad aim of the current study was to examine the potential effect of this crisis on person perceptions tied to employment and benefit recipient status. we did this by presenting brief vignettes describing fictional characters, manipulating key experimental conditions related to employment status, and asking study participants to rate the characters' personality and capability. we contrasted results from two cross-sectional general population samples collected before and during the covid19 crisis. the pre-covid19 assessment replicated our previous findings (e.g., schofield and butterworth, 2018b) showing that employed characters are perceived more favorably than those who were unemployed and receiving government benefits on measures of conscientiousness and suitability as a worker.
these findings supported hypothesis 1. in comparison, the assessment conducted during the covid19 crisis showed that unemployed and employed characters were viewed more similarly on these same key measures, with a significant interaction effect providing support for hypothesis 2. our third hypothesis, suggesting that reference to the coronavirus supplement (an additional form of income support introduced during the pandemic) would enhance ratings of unemployed benefit recipients at the second assessment occasion, was not supported. we found that benefit recipients at time 2 were rated more favorably than the benefit group at time 1, irrespective of whether this covid19-specific payment was referenced. this suggests the broader context in which the study was conducted was responsible for the change in perceptions. we sampled participants from the same population, used identical experimental procedures, and found no difference over time in the ratings of employed characters on the key outcome measures of employability (worker and boss suitability) and conscientiousness. the more favorable ratings of unemployed and benefit-receiving characters at time 2 are likely to reflect how the exogenous economic shock brought about by the covid19 crisis challenged social identities and the stereotypes held of others (see footnote 4). the widespread impact and uncontrollable nature of this event are inconsistent with pre-covid19 views that attribute ill-intent to those receiving unemployment benefits (fiske et al., 2002; baumberg, 2012; contini and richiardi, 2012; bye et al., 2014). we suggest the changing context altered perceptions of the "deservingness" of people who are unemployed, as unemployment in the context of covid19 is less indicative of personal failings or one's "own doing" (petersen et al., 2011; van oorschot and roosma, 2017). 
it is important to recognize, however, that although the negative perceptions of unemployed benefit recipients were attenuated in the covid19 assessment, these characters continued to be rated less favorably than those who were employed on the key outcome measures. in contrast to our findings on the key measures of employability and conscientiousness, the previous and current research is less conclusive for the other outcome measures. the current study showed a broadly consistent gradient in the perception of employed and unemployed characters for all outcome measures apart from openness and extraversion. findings on these other measures have been weaker and inconsistent across previous studies (schofield and butterworth, 2018b; schofield et al., 2019), and the current experiment was not designed with sufficient power to demonstrate interaction effects for these measures. there was, however, one measure that showed significant divergence from the expected profile of results. a significant interaction term suggested that study participants at the time 2 (covid19) assessment reported that employed characters should feel greater levels of guilt and shame than did participants in the pre-covid19 assessment. in contrast, there was consistency in the ratings of unemployed characters on this measure across the two assessment occasions. while not predicted, these results are also interpretable in the context of the pervasive job loss that accompanied the covid19 crisis. haller et al. (2020), for example, argue that the highly distressing, morally difficult, and cumulative nature of covid19-related stressors presents a perfect storm for producing guilt and shame responses. the context of mass job losses may leave "surviving" workers feeling increasingly guilty. the main findings of the current study are consistent with previous experimental studies that show that the stereotypes of unemployed benefit recipients are malleable (aarøe, 2011; schofield et al., 2019). 
these previous studies, however, have demonstrated malleability by providing additional information about unemployed individuals that was inconsistent with the unemployed benefit recipient stereotype (e.g., the external causes of their unemployment). in contrast, the current study did not change how the vignette characters were presented or the experimental procedures. rather, we assessed how the changing context in which study participants were living had altered their perceptions: this suggests the experience of covid19 altered stereotypical views held by study participants, rather than presenting information about the character that would challenge the applicability of the benefit recipient stereotype in this instance. perceptions and stereotypes of benefit recipients can be reinforced (and potentially generated) by government actions and policies. structural stigma can be used as a policy tool to stigmatize benefit receipt as a strategy to reduce dependence on income support and encourage workforce participation (moffitt, 1983; stuber and schlesinger, 2006; baumberg, 2012; contini and richiardi, 2012; garthwaite, 2013). in the current instance, however, the australian government acted quickly to provide greater support to australians who lost their jobs (e.g., doubling the rate of payment, removing mandatory reporting to the welfare services), and this may have reduced the stigmatizing structural features of the income support system and contributed to the changed perceptions of benefit recipients identified in this study. the current study took advantage of a natural experimental design and replicated a pre-covid19 study during the covid19 crisis. the study is limited by the relatively small sample size at time 1, which was drawn from a study designed for other purposes.
footnote 4: https://pursuit.unimelb.edu.au/articles/our-changing-identities-under-covid-19
we were not able to include most of the participants from the original time 1 study, as most of the experimental conditions described activities that were illegal or inconsistent with recommended activity at the time of the covid19 lockdown and social restriction measures. finally, the data collection for the current study occurred very quickly after the initial and sudden covid19 lockdowns and economic shock, which is both a strength and a limitation for the generalizability of the results. the pattern of results using the same sampling frame offers compelling support for our hypothesis that the shared economic shock and increase in unemployment attenuated stigmatizing community attitudes toward those who need to receive benefits. our current conclusions would be further strengthened by a subsequent replication when the public health and economic crises stabilize, to test whether pre-covid perceptions return. the current study provides novel information about the impact of the covid19 health and economic crisis, and the impact of the corresponding policy responses, on community perceptions. this novel study shows how community perceptions of employment and benefit recipient status have been altered by the covid19 pandemic. these results add to knowledge about the determinants of welfare stigma, particularly relating to employability, highlighting societal-level contextual factors. the raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. the studies involving human participants were reviewed and approved by the melbourne university human research ethics committee. the patients/participants provided their written informed consent to participate in this study. as led the review, which was conceptualized by ts and pb. as and pb conducted the analyses and wrote up the review. ts led the data collection, reviewed and edited the manuscript, and provided data management support. 
this manuscript is based on previous extensive work by ts and pb on stereotypes toward the unemployed and welfare benefit recipients. all authors contributed to the article and approved the submitted version. this study was funded by the australian research council (arc) grant # dp16014178.

references (titles only):
- investigating frame strength: the case of episodic and thematic frames
- crowding out culture: scandinavians and americans agree on social welfare in the face of deservingness cues
- facets of the fundamental content dimensions: agency with competence and assertiveness-communion with warmth and morality
- perceived employability and entrepreneurial intentions across university students and job seekers in togo: the effect of career adaptability and self-efficacy
- employment and unemployment: international perspective. australia: labour force
- personality and performance at the beginning of the new millennium: what do we know and where do we go next?
- the roles of dehumanization and moral outrage in retributive justice
- three ways to defend welfare in britain
- the mere liking effect: attitudinal influences on attributions of moral character
- stereotypes of norwegian social groups
- stereotypes of age differences in personality traits: universal and accurate?
- reconsidering the effect of welfare stigma on unemployment
- warmth and competence as universal dimensions of social perception: the stereotype content model and the bias map
- unemployment persistence: how important are non-cognitive skills?
- employers' perceptions about tourism management employability skills
- higher-order factors of the big five
- stereotype content: warmth and competence endure
- a model of (often mixed) stereotype content: competence and warmth respectively follow from perceived status and competition
- fear of the brown envelope: exploring welfare reform with long-term sickness benefits recipients
- social cuing of guilt by anger and of shame by disgust
- the structure of phenotypic personality traits
- a very brief measure of the big-five personality domains
- the moral emotions
- a model for treating covid-19-related guilt, shame, and moral injury
- personality measurement and employment decisions: questions and answers
- knowledge network hubs and measures of research impact, science structure, and publication output in nanostructured solar cell research
- gender stereotypes of personality: universal and accurate?
- employers' perceptions of the employability skills of new graduates. london: edge foundation
- universal features of personality traits from the observer's perspective: data from 50 cultures
- an economic model of welfare stigma
- graduates' experiences of, and attitudes towards, the inclusion of employability-related support in undergraduate degree programmes; trends and variations by subject discipline and gender
- gender and management implications from clearer signposting of employability attributes developed across graduate disciplines
- pedagogy for employability
- deservingness versus values in public opinion on welfare: the automaticity of the deservingness heuristic
- psychological safety, job crafting, and employability: a comparison between permanent and temporary workers
- employers' perceptions of important employability skills required from malaysian engineering and information and communication technology (ict) graduates
- the language of personality: lexical perspectives
- patterns of welfare attitudes in the australian population
- are negative community attitudes toward welfare recipients associated with unemployment? evidence from an australian cross-sectional sample and longitudinal cohort
- community attitudes toward people receiving unemployment benefits: does volunteering change perceptions?
- the persistence of welfare stigma: does the passing of time and subsequent employment moderate the negative perceptions associated with unemployment benefit receipt?
- does social policy moderate the impact of unemployment on health? a multilevel analysis of 23 welfare states
- the five-factor model describes the structure of social perceptions
- sources of stigma for means-tested government programs
- who should get what, and why? on deservingness criteria and the conditionality of solidarity among the public
- the social legitimacy of targeted welfare
- measurement of agency, communion, and emotional vulnerability with the personal attributes questionnaire
- age and perceived employability as moderators of job insecurity and job satisfaction: a moderated moderation model

the supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2020.594837/full#supplementary-material

key: cord-313777-eydkfqi2 authors: feng, mingxiang; shaw, shih-lung; fang, zhixiang; cheng, hao title: relative space-based gis data model to analyze the group dynamics of moving objects date: 2019-05-15 journal: isprs j photogramm remote sens doi: 10.1016/j.isprsjprs.2019.05.002 sha: doc_id: 313777 cord_uid: eydkfqi2

the relative motion of moving objects is an essential research topic in geographical information science (giscience), which supports innovation in geodatabases, spatial indexing, and geospatial services. this analysis is very popular in the domains of urban governance, transportation engineering, logistics and geospatial information services for individuals and industries. 
importantly, data models of moving objects are one of the most crucial approaches to support the analysis of dynamic relative motion between moving objects, even in the age of big data and cloud computing. traditional geographic information systems (gis) usually organize moving objects as point objects in an absolute coordinate space. the derivation of relative motions among moving objects is not efficient because of the additional geo-computation of transformation between absolute space and relative space. therefore, current giss require an innovative approach to directly store, analyze and interpret the relative relationships of moving objects to support their efficient analysis. this paper proposes a relative space-based gis data model of moving objects (rsmo) to construct, operate and analyze moving objects' relationships and introduces two algorithms (relationship querying and relative relationship dynamic pattern matching) to derive and analyze the dynamic relationships of moving objects. three scenarios (epidemic spreading, tracker finding, and motion-trend derivation of nearby crowds) are implemented to demonstrate the feasibility of the proposed model. the experimental results indicate that the execution times of the proposed model are approximately 5–50% of those of the absolute gis method for the same functions in these three scenarios. based on these results, the proposed model offers better computational performance than the absolute methods of a well-known commercial gis software package when analyzing the relative relationships of moving objects. the proposed approach fills a gap in traditional gis and shows promise for relative space-based geo-computation, analysis and service. 
moving objects are the most common and important component in a diverse range of phenomena, such as human mobility (fang et al., 2017; jiang et al., 2017; almuhisen et al., 2018), urban transportation (tang et al., 2015; tu et al., 2017), ship logistics in the ocean (yu et al., 2017b; fang et al., 2018) and even animal migrations (bastille-rousseau et al., 2017). many research projects have been driven and improved by moving-object data analysis, such as individual/group behavior analysis, path discovery and behavior prediction. because of the large number of moving objects in real applications, these analyses require a powerful gis data model to store, analyze and interpret physical and contextual information (e.g., absolute topology and relative motion) of moving objects. current gis data models usually record essential information for moving objects within an absolute space. in absolute space, geocoded locations are bound to previously existing geometry and topology relationships among the corresponding points in the space (meentemeyer, 1989; couclelis, 1999). therefore, moving objects are always represented as a series of observations that consist of an id, location and time (hornsby and egenhofer, 2002; spaccapietra et al., 2008), alongside additional information such as activities (wang and cheng, 2001; shaw and yu, 2009) and semantics (vandecasteele et al., 2014). furthermore, these models describe their movement processes and interactions alongside basic geographical information (such as land use, poi, and events). in fact, these models require a large amount of computation to support analysis from the perspectives of individuals or groups of moving objects (which could be called relative space). (received 30 november 2018; received in revised form 3 april 2019; accepted 7 may 2019)
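the absolute-space representation described above can be sketched as a series of (id, location, time) observations, with every relative metric computed from coordinates on demand. this is a minimal illustrative sketch (the record layout and names are assumptions, not the paper's implementation):

```python
# A minimal sketch of the absolute-space representation described in the
# text: each moving object is a series of (id, x, y, t) observations, and
# any relative metric (e.g., the distance between two objects) must be
# derived from coordinates on demand -- the extra geo-computation that a
# relative space-based model aims to avoid by storing it directly.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    obj_id: str
    x: float
    y: float
    t: int  # timestamp

def relative_distance(a: Observation, b: Observation) -> float:
    """Derive a relative metric from two absolute-space observations."""
    assert a.t == b.t, "observations must share a timestamp"
    return math.hypot(a.x - b.x, a.y - b.y)

p = Observation("A", 0.0, 0.0, 10)
q = Observation("B", 3.0, 4.0, 10)
print(relative_distance(p, q))  # 5.0
```

repeating this derivation for every pair of objects at every time stamp is the computational burden the rsmo model is designed to remove.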
moreover, several analyses, such as the surrounding dynamics and motion trends of nearby crowds, are critical to moving objects in highly complex and dynamic environments according to the personal requirements of decision-making (e.g., a relaxed lifestyle, a feeling of safety). current gis data models must be improved in terms of the geocomputation of these analyses. in fact, relative space is an intuitive frame of reference for moving objects and a powerful theoretical framework to represent the surrounding dynamics and motion trends of moving objects in nearby crowds. traditional relative space is often studied in the research communities of mathematical theory (einstein, 1921), physics (veltman, 1994; ruggiero, 2003), astronomy (rots et al., 2015) and aerospace science (sinclair et al., 2015). currently, very few gis data models have been built for relative space, which requires additional space-transformation computations to implement this intuitive approach in applications. in relative space, relative dynamic relationships between moving objects are easy to build independently of whether they can be geocoded by coordinates. importantly, the analysis of moving objects in relative space can easily follow intuitive requirements. therefore, the motivation of this paper is to create a relative space-based gis data model of moving objects and propose some basic gis operators for analyzing moving objects, which changes the analysis paradigm of current absolute space-based gis models and facilitates the efficient computation of real-time relative relationship dynamics, such as the surrounding dynamics and motion trends of crowds near moving objects. the contributions of this paper are summarized as follows: • a relative space-based gis data model of moving objects (rsmo) is introduced to construct, operate and analyze moving objects' relative relationships. 
• a relationship-querying algorithm and a relative dynamic pattern-matching algorithm are introduced to analyze the dynamic relationships of moving objects. • three scenarios (epidemic spreading, tracker finding, and derivation of the motion trends of nearby crowds) are implemented to demonstrate the feasibility of the proposed model, which also shows better performance compared to absolute methods. this paper is structured as follows. related work is discussed in section 2. section 3 proposes the modeling process, structure and operation of this model. section 4 illustrates the implementation of rsmo and the two algorithms. experiments with three case studies and performance testing are reported in section 5. section 6 discusses the proposed model. finally, conclusions are presented in section 7. moving objects are a very common representation requirement in current gis applications because of the large volume of their tracked trajectories, e.g., cars in cities, vessels at sea, and even people around the world. therefore, many research projects have been conducted to store, manage and query moving-object data in the communities of computer science and geographical information science. in computer science, several data structures have been defined to support the storage and querying of moving objects. the first structure is the storage unit in dbmss (database management systems). güting and his group abstracted moving objects into moving points and moving regions and developed a discrete model in a dbms to store, manage and query moving objects' data (forlizzi et al., 2000; güting et al., 2000, 2004; güting and schneider, 2005). based on abstracted data types, moving objects could be managed and queried by using sql. these works provided a solid foundation to support research on moving objects. the second is tree structures. 
moving objects were usually abstracted as points (ranu et al., 2015), single continuous polynomials (ni and ravishankar, 2007), or non-regulated sequences of roads in transportation networks (sandu popa et al., 2011). then, these objects were indexed by tree derivatives, for example, b+-trees (sandu popa et al., 2011), r-trees (yue et al., 2018), 3d r-trees (xu et al., 2018), fn (fixed network) r-trees, pa-trees (ni and ravishankar, 2007), and grid trees (yan et al., 2015). these tree-based indices could facilitate efficient queries on moving objects. the third structure is moving object-oriented databases. to satisfy the demand for faster processing of big data, postgresql, mongodb, monetdb (boncz et al., 2005), and cassandra (hospital et al., 2015) were adopted to store and analyze large volumes of moving objects' information. in the community of geographical information science, traditional gis data models, such as field-based models (e.g., raster-based models) (cova and goodchild, 2002) and feature models (e.g., object- and vector-based models) (tang et al., 1996), usually depend on absolute coordinates in their reference frames to describe a spatial object's motion or relationships. these models were often used to represent some key concepts related to moving objects, i.e., place, activity and semantics. the concept of "place" was used to indicate the location of activity, which usually complied with a certain upper-bound distance and lower-bound duration (kang et al., 2004). human daily activities were always represented as a sequence of place visits (do and gatica-perez, 2014). additionally, the space-time path was a key concept in the classical time-geography framework that was used to represent and analyze activities and interactions in a hybrid physical-virtual space (shaw and yu, 2009) and social space (yin and shaw, 2015). 
several basic semantics of moving objects were derived from the trajectories of moving objects, for example, episodes, stops, and moves (ilarri et al., 2015). based on these basic semantics, researchers (parent et al., 2013; bogorny et al., 2014; jiang et al., 2015; wan et al., 2018) built semantic trajectories from semantic point sequences. semantic trajectories are easy for urban planners and migration or transportation agencies to use to mine spatial-temporal mobility patterns. moving object-oriented operators (i.e., range queries, nearest-neighbor queries) were derived to support the analysis of moving objects' relationships. the first common operator is the range query, which specifies values of moving objects that fall within a lower and upper boundary (zhan et al., 2015; xu et al., 2018; yue et al., 2018), e.g., finding all the objects of a specific traveler between 7 am and 9 am. filtering and refining are two important steps of range queries. filtering determines candidate locations (such as a node in a tree structure) that contain specified attribute values and overlap the query area in space and time, while refining retrieves well-matched objects under the querying conditions through some advanced filtering techniques, such as statistics-based approaches (zhan et al., 2015) and time-based partitioning techniques (yue et al., 2018). the second common operator is the nearest-neighbor (nn) query, which finds the nearest neighbors to a given object in a database for a specific time interval. two similar phases of nn queries include searching for candidate units based on the idea of node pruning with large coverage and determining the well-matched results according to various conditions, such as time limits (güting et al., 2010), obstacles (gao et al., 2011), spatial-network structures (cheema et al., 2012), reverse nearest neighbors (cheema et al., 2012) and movement uncertainties (niedermayer et al., 2013). 
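the semantics of these two query types can be sketched with brute-force implementations. this is only an illustration of what a range query and an nn query return; real systems use the tree indices cited above for the filtering step, and the tuple layout here is an assumption:

```python
# Brute-force sketches of the two common query types described above:
# a range query (objects within a space-time window) and a nearest-neighbor
# query. A linear scan stands in for the index-based filtering/refining
# phases of production systems.

# (obj_id, x, y, t) tuples -- illustrative sample data
points = [
    ("a", 1.0, 1.0, 7), ("b", 2.0, 2.0, 8),
    ("c", 9.0, 9.0, 8), ("d", 2.5, 1.5, 9),
]

def range_query(pts, x_rng, y_rng, t_rng):
    """Return ids of objects falling inside the given space-time window."""
    return [p[0] for p in pts
            if x_rng[0] <= p[1] <= x_rng[1]
            and y_rng[0] <= p[2] <= y_rng[1]
            and t_rng[0] <= p[3] <= t_rng[1]]

def nearest_neighbor(pts, x, y, t):
    """Return the id of the object closest to (x, y) at time t."""
    candidates = [p for p in pts if p[3] == t]
    return min(candidates, key=lambda p: (p[1] - x) ** 2 + (p[2] - y) ** 2)[0]

print(range_query(points, (0, 3), (0, 3), (7, 9)))  # ['a', 'b', 'd']
print(nearest_neighbor(points, 2.0, 2.0, 8))        # 'b'
```

note that both queries reduce to coordinate arithmetic, which is exactly the repeated computation the next paragraph criticizes.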
these operators are usually applied by calculating the distance between the coordinates of moving objects to derive relative metrics such as distance, direction and time. without a geocoding technique, this situation could produce repeated computations and misrepresent the relative relationships of moving objects. a wide spectrum of applications can be conducted for individuals, groups or the public. the first application is the movement-behavior analysis of moving objects, for example, individuals (gonzález et al., 2008; ando and suzuki, 2011; gao et al., 2013; renso et al., 2013; song et al., 2014) and groups (zheng et al., 2014; li et al., 2013; gupta et al., 2013; mcguire et al., 2014; liu et al., 2016). the second application is path recommendation from historical trajectories (luo et al., 2013; dai et al., 2015; yang et al., 2017; zheng et al., 2018). the third application is location prediction for tourists (lee et al., 2016; yu et al., 2017a), navigators (li et al., 2016; besse et al., 2018), and driverless vehicles. these applications require that the dynamic relationships of moving objects in space over time be inferred. however, very few models can directly organize the dynamic relationships of moving objects, so these applications depend greatly on computationally intensive infrastructures. studies of relative space mainly appear in the fields of maritime vessels and aviation. in the maritime domain, the moving-object data model can also be applied to assess collision risk (bye and aalberg, 2018; fang et al., 2018), predict vessel behavior (zissis et al., 2016; xiao et al., 2017) and plan paths (cummings et al., 2010; hornauer et al., 2015). 
in the aviation and aerospace field, motion guidance and control (yu et al., 2016; sun et al., 2017; li and zhu, 2018; zhu et al., 2018) for spacecraft rendezvous and position and attitude estimation (philip and ananthasayanam, 2003; qiao et al., 2013) between chaser and target satellites also draw on moving-object data models. these studies aim to control the moving object in real time to avoid risk and complete an established movement in the ocean or in aerospace, where no appropriate global reference frame exists. such studies focus on real-time motion control to respond to surrounding moving objects in highly dynamic scenarios; these missions need an efficient data model to handle the relationships of moving objects as well. in short, current research on moving objects rarely organizes their dynamic relationships directly because of a lack of relative space-based data models and analytic frameworks in the communities of computer science and geographical information science. this paper attempts to fill this gap by introducing a relative space-based gis data model to analyze the group dynamics of moving objects, which should reduce the intensive computation that occurs when deriving the relationships of moving objects and provide better analytic performance than traditional absolute coordinate-based gis data models. this section introduces a relative space-based gis data model of moving objects (rsmo). rsmo is extended from the space-time cube (stc) structure in arcgis. an stc is a three-dimensional cube that includes space-time bins in the x and y dimensions and time in the t dimension and represents limited space coverage over a fixed period. the basic idea of rsmo is to directly record moving objects' relationships and relative dynamics in a three-dimensional space-time cube and to facilitate transfer to absolute space by maintaining the current locations of moving objects via additional space-time bins. 
this data model reduces the calculation of moving objects' relationships and relative dynamics in traditional gis data models and thus provides efficient relative space-based queries between moving objects for real applications with cars, vessels, people, etc. rsmo is designed to support basic functions such as the storage and organization of moving objects' motion, queries based on relative relationships, relative motion-pattern analysis and mining, and transformation between absolute space and relative space. the following section will first introduce rsmo and then the basic functions that depend on it. before introducing the proposed rsmo, this section defines some basic concepts as follows: definition 1: a space-time cube (stc) is a three-dimensional (3d) euclidean space consisting of a 2d geographical space (x and y) plus time (t) for the visualization and analysis of objects' behavior and interactions across space and time (bach et al., 2017). definition 2: a space-time bin (stb) is a basic unit of a space-time cube, which lies at a fixed position based on its x and y dimensions in 2d geographical space and its t dimension in time to represent a limited location within a certain time. definition 3: a relative relationship bin (rrb) is a derived stb with a substituted reference framework of reference object, target object and time (fig. 1). it contains a quadruple < t, ref_obj, tar_obj, relationship >, which indicates the relationship between the corresponding reference object (ref_obj) and target object (tar_obj) during a specific time t. definition 4: a relative relationship matrix is a matrix of all moving objects' rrbs at time t and within a specific environment. definition 5: a relative relationship cube is a time series of relative relationship matrices, which is organized in three-dimensional space (two dimensions in space and one dimension in time) according to time. 
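definitions 3-5 can be sketched directly as a small data structure. the dict-based layout, the polar (distance, angle) encoding of the relationship, and the `to_absolute` helper are illustrative assumptions made here for clarity, not the paper's implementation:

```python
# A sketch of definitions 3-5: a relative relationship bin (rrb) is the
# quadruple <t, ref_obj, tar_obj, relationship>; all bins at one time form
# a relative relationship matrix; the time series of matrices forms the
# relative relationship cube.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class RRB:  # definition 3: relative relationship bin
    t: int
    ref_obj: str
    tar_obj: str
    distance: float   # relationship stored directly, no coordinates needed
    angle_deg: float  # bearing of the target as seen from the reference

class RelativeRelationshipCube:  # definitions 4 and 5
    def __init__(self):
        self._cube = {}  # t -> {(ref, tar): RRB}; each inner dict is one matrix

    def add(self, b: RRB):
        self._cube.setdefault(b.t, {})[(b.ref_obj, b.tar_obj)] = b

    def matrix(self, t):
        """Definition 4: all bins for one time stamp."""
        return self._cube.get(t, {})

    def to_absolute(self, b: RRB, ref_xy):
        """Recover the target's coordinates given the reference's absolute
        position (e.g., anchored by the fixed initial reference object r)."""
        rad = math.radians(b.angle_deg)
        return (ref_xy[0] + b.distance * math.cos(rad),
                ref_xy[1] + b.distance * math.sin(rad))

cube = RelativeRelationshipCube()
cube.add(RRB(1, "A", "B", distance=5.0, angle_deg=90.0))
b = cube.matrix(1)[("A", "B")]
print(cube.to_absolute(b, (2.0, 3.0)))  # b is 5 units "above" a
```

because the relationship (distance and angle) is stored in the bin itself, relative-space queries read it directly; coordinates are only needed when transforming back to absolute space.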
the proposed rsmo model extends the structural organization of space-time bins in arcgis to model the relative relationships among moving objects and facilitate their relative space-based analysis. here, the motion in the stc is represented by the relationships of moving objects, which is a natural way to represent the actual cognitive processes of moving objects in an environment. fig. 2 illustrates the extended-entity-relationship (eer) diagram of the rsmo model, which shows the elements (entities, relationships and attributes) and the hierarchical relationships among these elements. this model defines six basic entity types, i.e., "object", "relative location", "relationship", "relative relationship bin", "relative relationship matrix" and "relative relationship cube". here, an object represents a moving object in a real scenario, such as the people in fig. 2, vehicles/unmanned ground vehicles, unmanned aerial vehicles, vessels, planes, and so on. a relative location is the position of any object relative to another object (fig. 2), which can be represented by the relative distance and angle. a relationship indicates the mutual spatial or social relationships among objects, for example, closeness, friendship, or colleague relationships. this design does not require much additional computation to build relationships among objects from the coordinates in traditional stcs and provides a natural approach to analyze a moving object's behavior. fig. 3 provides an example of rsmo's organization. fig. 3(a) illustrates the spatial trajectories of five moving objects (labelled a, b, c, d and e). fig. 3(b) illustrates a right-handed coordinate system for detecting objects' relative locations from each object's view. fig. 3(c) shows a geometric transformation method from relative locations to coordinates. fig. 3(d) shows a relative relationship matrix in which the ids of moving objects are organized akin to x and y coordinates. in this matrix, rows represent "reference" objects and columns represent their detected "target" objects. fig. 
3(e) shows a relative relationship cube that includes all relative relationship matrices sorted by time. the proposed model replaces the orthogonal coordinate axes (x, y) in the stc with two object axes that represent reference and target objects to store their relationships. recording only the changing relationships between moving objects is insufficient because this allows relationship comparison between two different moving objects but cannot support transformation to absolute space to match the computational tasks in current gis tools. to solve this problem, an additional object r (called the initial reference object), which is a fixed coordinate in absolute space, was added to rsmo (the object r in fig. 3(b)). this object represents a fixed location acting as a local reference system. by comparing the location change in this local reference system, each object's motion relative to itself can be derived. after its coordinates in absolute space are determined, this object also facilitates transformation between relative space and absolute space. the details of this procedure will be introduced in section 3.2.3. this relative relationship cube forms the basic structure of rsmo. this section introduces a set of basic operators for the implemented gis functions. these operators are organized into six classes according to the type of relationship outcome. (1) query processing is an operator to get the direct relative relationship in terms of relative distance and angle between objects over time. (2) group relative dynamic calculation is an operator to derive the relative dynamic characteristics of a group. (3) transformation between absolute space and relative space is an operator to transform data between absolute space (e.g., coordinates) and relative space (e.g., relative distance and angle). 
(4) initial reference object transformation changes the reference object of the relative relationship cube and updates the relative relationships of all objects. (5) attribute transformation derives relationship attributes, such as closeness, based on the relative relationship cube. (6) relative relationship dynamic pattern transformation derives the relative dynamic patterns between objects, such as their moving trends. these operators are expanded and developed from stcs to adapt to the features of a moving object's motion in relative space. the following subsections describe these classes in detail. query processing searches for the relative mutual relationships of moving objects and their changes. the six operators of query processing are explained as follows: (1) point extraction retrieves the relationships between designated objects at specific times. this operator is a basic function for querying and analysis. (2) time drilling retrieves the dynamic processes of the relationships between designated objects. (3) target drilling extracts a specific reference object's relationships with other target objects at a designated time. (4) relationship curvilinear drilling retrieves the target objects that present a specific relationship with the designated reference object. this operator can be used to extract the reference object's behavior relative to other objects, such as the interactions between two moving objects if "distance < 2 m". the reference object's interactions can be expressed as a planar 3d curve that consists of target objects for each time. (5) time cutting retrieves all the objects' mutual relationships at any designated time. (6) reference cutting retrieves the movement of all objects relative to the reference object. detailed descriptions of these query-processing operators are listed in table 1. the process of time flattening (see table 2) is: 1. obtain relative relationship bins with different reference and target objects by time drilling; 2.
obtain the results from fun(), whose input is the relative space-time lines from the previous step; 3. fill the results into the corresponding positions of the two-dimensional matrix built by the reference and target axes. the process of target flattening is: 1. obtain relative relationship bins with different reference and target objects by target drilling; 2. obtain the results from fun(), whose input is the relative space-time lines from the previous step; 3. fill the results into the corresponding positions of the two-dimensional matrix built by the reference and time axes. note: in the group relative dynamic calculations, min in the example is used to find the minimum in the dataset. the process column details the procedures in the illustration and provides text to help readers understand each operator. flattening(axis, fun()) is used to express a uniform form. two parameters are required: the axis indicates the input of the operator, and fun() is a function to process the values along that axis. additionally, fun() can be set by the user for different purposes. group relative dynamic calculation computes the group's relative dynamic characteristics (e.g., average distance, interaction frequency in social media, and the closeness of social relationships) based on the relationship changes of all objects. the two operators, time flattening and target flattening, are explained as follows: (1) time flattening computes the relative dynamic statistical characteristics between each pair of objects during the entire period and retrieves a matrix that contains the results. for instance, the average distance is calculated through a statistics function, and each unit in the resulting matrix shows the average distance between a pair of objects, which indicates the closeness of their relationship during the entire period. (2) target flattening computes the relative dynamic statistical characteristics of all the target objects relative to each reference object at each time stamp and retrieves a matrix that contains the results.
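the two flattening operators are, in effect, reductions along one axis of the relationship cube. a minimal numpy sketch (the cube layout, toy values and function names here are illustrative assumptions, not the paper's c++ implementation):

```python
import numpy as np

# Hypothetical relative relationship cube: dist_cube[ref, tar, t] holds the
# relative distance between reference object `ref` and target object `tar`
# at time stamp `t` (3 objects, 4 time stamps in this toy example).
dist_cube = np.array([
    [[0.0, 0.0, 0.0, 0.0], [2.0, 3.0, 4.0, 5.0], [6.0, 6.0, 6.0, 6.0]],
    [[2.0, 3.0, 4.0, 5.0], [0.0, 0.0, 0.0, 0.0], [1.0, 2.0, 3.0, 4.0]],
    [[6.0, 6.0, 6.0, 6.0], [1.0, 2.0, 3.0, 4.0], [0.0, 0.0, 0.0, 0.0]],
])

def time_flattening(cube, fun=np.mean):
    """Reduce along the time axis: one value per (reference, target) pair,
    e.g. the average distance over the whole period."""
    return fun(cube, axis=2)

def target_flattening(cube, fun=np.min):
    """Reduce along the target axis: one value per (reference, time) pair,
    e.g. the smallest distance at each time stamp (note that an object's
    zero distance to itself is included in this toy cube)."""
    return fun(cube, axis=1)

avg_dist = time_flattening(dist_cube)   # shape (3, 3), reference x target
nearest = target_flattening(dist_cube)  # shape (3, 4), reference x time
print(avg_dist[0, 1])                   # average distance between 0 and 1
```

here fun() plays the same role as in the flattening(axis, fun()) form above: any user-supplied reduction (mean, min, an interaction count, and so on) can be passed in.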
for example, if we want to calculate the interaction frequency, we can obtain a result matrix that indicates the interaction changes of each reference object relative to other objects over time. detailed descriptions of these group relative dynamic calculations are listed in table 2. the purpose of the next operator is to build a transformation method between absolute space and relative space. this operator is used to adapt current gis modules because most gis software analyzes moving objects only in absolute space. if we know the coordinates (x_r, y_r) of the reference object r in absolute space, and the relative distance dis and relative angle ang of object a with respect to r (eq. (2)), the increments in the x and y directions (Δx, Δy) in absolute space can be calculated with eq. (3) by distance decomposition: Δx = dis · sin(ang), Δy = dis · cos(ang). based on (Δx, Δy), the coordinates of a in absolute space can be expressed with eq. (4): x_a = x_r + Δx, y_a = y_r + Δy. initial reference transformation transforms each object's relationships so that they refer to a different initial reference object. this process changes the analysis view between objects, which supports the relative space-based analysis of any individual object in parallel. a detailed description of initial reference transformation is listed in table 3. below, we present an example to explain this process (fig. 4(a)). a0 is the initial reference object of object a at t0. if we want to shift the initial reference object from a0 to b0 (object b at t0), where b0 is the new initial reference object, we compute all the objects' unknown relationships with b0. for example, fig. 4(a) shows the unknown mutual relationships b0-a1 and b0-c1 in the new cube's matrix. the mutual relationship between b0 and c1 (fig. 4(b)) can be derived by the following steps. first, the coordinates in absolute space are calculated with the transformation operator between absolute space and relative space from section 3.2.3. second, the distance between b0 and c1 is calculated by the law of cosines.
the angle between a0b0 and a0c1 can be derived by computing the difference between ang(a0,c1) and ang(a0,b0); together with the distances dis(a0,b0) and dis(a0,c1), the distance between b0 and c1 (dis(b0,c1)) then follows from the law of cosines. finally, the angle between b0 and c1 is calculated by the following rotation method to ensure uniqueness: the rotations of b0 and c1 relative to object a0 (yaw(a0,b0) and yaw(a0,c1)) are calculated with eq. (6), from which the relative angle of c1 as seen from b0 is obtained. thus, we set shift(a0, x) as the transform function that conveys the initial reference object from a0 to x. the purpose of attribute transformation is to label new attributes on each relative relationship bin and to filter bins under any specific conditions. the two operators (labeling and filtering) are explained as follows: (1) labeling adds new attributes to each relative relationship bin based on the conditions or classifier set by fun(). (2) filtering removes bins from the results of the labeling step. note: label(fun()) and filter(fun()) are used to express uniform forms. fun() in labeling is a function that acquires new attributes based on the relative relationships of the bins, and fun() in filtering selects relative relationship bins based on the new attribute added by labeling (e.g. closeness is intimate). additionally, fun() in both operators can be set by the user for different purposes. detailed descriptions of these attribute transformations are listed in table 4. the goal of relative relationship dynamic pattern transformation is to analyze the motion features of moving objects in relative space. two common operators are explained as follows: (1) trending computes motion features (such as distance changes, angle changes, relative speeds and relative accelerations) by comparing relationship changes between each pair of objects. (2) matching discovers relative motion patterns under the condition of a movement pattern, such as accompaniments and trackers.
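the trending and matching operators can be sketched over a single relationship series; the threshold and pattern definition below are illustrative assumptions, not the paper's exact criteria:

```python
# A relationship series is the per-time relative distance between one
# reference object and one target object.

def trending(dist_series, dt=1.0):
    """Trending: distance change per time step, i.e. the relative
    (radial) speed between the pair of objects."""
    return [(b - a) / dt for a, b in zip(dist_series, dist_series[1:])]

def matches_accompaniment(dist_series, max_dist=2.0):
    """Matching against a crude 'accompaniment' pattern: the target stays
    within max_dist of the reference for the whole period (max_dist is an
    assumed parameter, not the paper's)."""
    return all(d < max_dist for d in dist_series)

dists = [1.25, 1.5, 1.0, 1.75]      # toy distance series, metres
speeds = trending(dists)            # [0.25, -0.5, 0.75]
print(matches_accompaniment(dists)) # the pair stays closer than 2 m
```

relative acceleration would follow the same shape by applying trending() a second time to the speed series.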
the detailed descriptions of these relative relationship dynamic pattern transformations are listed in table 5. a prototype (fig. 6) was developed to implement the proposed rsmo model. all the functions that store, manage and analyze relative-space data were encoded into a dynamic link library in a c++ environment. the visualization of this prototype was developed with the qgis (quantum gis) framework in a python environment. two typical analysis modules were implemented, namely, query processing and relative relationship dynamic pattern transformation. query processing contained all six operators (point extraction, time drilling, target drilling, time curvilinear drilling, time cutting and target cutting). two relative relationship dynamic pattern matching algorithms (accompaniments and trackers) were implemented in this prototype. a query algorithm, called query processing, is described here for retrieving relative relationships in relative space. the main idea of this algorithm is to organize the relative relationships along three dimensions, namely, reference object, target object and time, based on the model's structure, and then to index each relative relationship by its corresponding objects and time. by using a composite index consisting of reference object, target object and time, this algorithm can quickly find the relative relationships completely covered by the given restricted objects and times. in a query, this algorithm retrieves relationships by accessing the relative relationships according to the composite index of the given objects and time, instead of extracting the relative relationship bins of each reference or target object one by one from the cube. therefore, this algorithm can be integrated with all the operators shown in section 3.2.1 and is constrained by six parameters, namely, cube, query_type, c_ref, c_tar, c_t, and rel, which are explained in algorithm 1.
cube is a map structure whose key consists of a reference and a target object, and whose value is itself a map from time to the relationship. the details of the query processing algorithm are described in algorithm 1. compared with traditional gis querying in absolute space, this algorithm avoids many redundant computations on coordinates, which improves the efficiency of relative relationship querying. we test this approach in section 5.5. algorithm 2, called pattern matching, illustrates the process of searching for a specific relative pattern in relative space. in absolute space, a dynamic pattern is represented as a space-time trajectory (bao et al., 2017) that consists of specific locations (shown in fig. 7). given a specific pattern (pattern_s), each object's distance to pattern_s must be calculated to measure its similarity to pattern_s. finally, the trajectory whose distances all lie within the matching threshold (fig. 7) is obtained as the outcome of pattern matching in absolute space. in relative space, a dynamic pattern is represented as a relative relationship bin series composed of bins with specific relative distances. in this algorithm, the bins with each object's relative distance to pattern_s are first obtained directly by reference cutting. then, they are labeled according to whether their relative distance lies within the matching threshold. finally, the relative relationship bin series that match pattern_s at all times are obtained. this algorithm provides a more efficient way to perform relative relationship dynamic pattern matching: it simplifies the spatial computation needed to measure similarity, on the basis of reference cutting and labeling in the proposed model. the details of the pattern matching algorithm are described in algorithm 2.
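the cube structure and composite-index lookup used by algorithm 1 can be sketched as a map of maps; the toy values and the operator subset shown are illustrative assumptions:

```python
# Cube keyed by the (reference, target) pair; each value maps a time stamp
# to the relationship (relative distance, relative angle).
cube = {
    ("a", "b"): {0: (2.0, 90.0), 1: (1.5, 85.0)},
    ("a", "c"): {0: (5.0, 10.0), 1: (6.0, 12.0)},
    ("b", "c"): {0: (4.0, 45.0)},
}

def point_extraction(cube, ref, tar, t):
    """Direct lookup by the composite (reference, target, time) index --
    no per-bin scan over the whole cube is needed."""
    return cube.get((ref, tar), {}).get(t)

def time_drilling(cube, ref, tar):
    """All time stamps of one (reference, target) pair, sorted by time."""
    return sorted(cube.get((ref, tar), {}).items())

def reference_cutting(cube, ref):
    """Every target's relationship series relative to one reference."""
    return {tar: series for (r, tar), series in cube.items() if r == ref}

print(point_extraction(cube, "a", "b", 1))  # one bin's relationship
print(list(reference_cutting(cube, "a")))   # targets seen from "a"
```

pattern matching as in algorithm 2 then builds on reference_cutting: the returned series are labeled against the matching threshold instead of recomputing distances from coordinates.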
algorithm 2: pattern matching (cube, c_ref, pattern_s)
input: cube - the relationships between all objects in relative space-time cube form; c_ref - the reference object, an index on ref of a relative relationship bin in cube; pattern_s - a specific relative relationship pattern to be found, expressed as a time series of (time, rel) tuples.
output: the objects in cube that produce a trend of pattern_s relative to c_ref.
1 let q be a sequence of bins in the cube. q contains pairs of the form <objects_pair, relptr>, where objects_pair is the index on attributes ref and tar in pair form and relptr is a map structure whose key is the time and whose value is the relationship;
2 let objectslist be the set that contains the codes of all the objects in the scenario;
3 let timestamps be the total number of time stamps in the data;
4 let * be any value;
5 let labeling be the function that gets the relative relationship bins that match pattern_s;
this algorithm allows users to search for objects with specific motion patterns relative to dynamic reference objects. this advantage can help users discover meaningful behaviors of objects that are hidden in a moving group, for example, being followed. the proposed rsmo model was tested in three application scenarios, namely, epidemic spreading, tracker finding, and the motion-trend derivation of nearby crowds. each case study was used to demonstrate the feasibility and advantages of the proposed model. this study collected pedestrians' (experimental subjects') walking trajectories on the street in the three experimental scenarios. a lenovo phab 2 pro phablet was used to record their trajectories in both absolute and relative space. this device is openly available and equipped with sensors for motion tracking, and can record the position and rotation angle relative to the starting pose of the device. the gps receiver recorded the motion in absolute space, while the motion-tracking sensors recorded it in relative space. fig.
8 shows all 34 trajectories of the experimental subjects walking along a road in wuhan city, china. the detailed information includes the following fields (table 6):
• "user id" is the unique identifier for each object.
• "time stamp" is the time when the location was recorded.
• "longitude" and "latitude" record each pedestrian's location in absolute space from the gps receiver.
• "x increment" and "y increment" give the relative location in the reference frame built from the user's starting pose.
• "rotation angle" records the change in angle relative to the initial pose in a counterclockwise direction.
• "initial azimuth" is the angle of the user's face relative to north.
the collected data needed to be pre-processed before being saved into the proposed model, as follows. (1) derive the relative distance and angle between all objects. first, the relative distance was computed by the operator of transformation between absolute space and relative space from the coordinates in the longitude and latitude fields of the collected data. then, the relative angle was computed based on the movement-direction azimuth, which was derived from the rotation angle and the initial azimuth according to eq. (13). (2) set the initial reference object for the model. the initial reference object was set as object 2 at t0 in this experiment by computing all objects' relationships to object 2 at t0. the operator of transformation between absolute space and relative space was implemented for this task, given that each object's coordinates (longitude and latitude), rotation angle and initial azimuth are known in the collected data. the first scenario involved finding the people who came into contact with an epidemic carrier.
in the research community of epidemiology, the spread of disease is closely related to the spatial and social activities of patients, and the propagation of diseases greatly depends on the trajectory of a patient's social activities. thus, tracing the back-propagation path of a virus and finding close contacts are meaningful for preventing the spread of diseases. this experiment assumed a severe acute respiratory syndrome (sars) carrier's walking path in order to find the carrier's close contacts. sars is a viral respiratory disease of zoonotic origin caused by the sars coronavirus (sars-cov). the primary route of transmission for sars is the contact of mucous membranes with respiratory droplets or fomites (world health organization, 2003). research on respiratory droplet transmission shows that the largest droplets will totally evaporate before falling 2 m away (xie et al., 2007). therefore, a person is identified as a close contact if the distance to the carrier (whose id is 2 in fig. 9) is less than 2 m, and this paper accordingly uses 2 m as the parameter for querying close contacts. this query was implemented by combining the reference cutting and labeling operators (shown in fig. 10(a) and (b)). first, the relative relationships (distances) of all the experimental subjects relative to the carrier were derived by reference cutting. a matrix (shown in fig. 9) displays the change in each experimental subject's distance to the carrier; blue means a closer distance to the carrier, and red means a farther distance. then, the relative relationship bins indicating a close contact (in green) were determined by labeling distances below 2 m. thus, subjects 19 and 32 were found to be close contacts. reference cutting (a) and labeling (b) are used to search the relative relationship bins containing close contacts in the data model, which are labeled in red. the close contacts' relative locations to the carrier are marked in (c).
and their locations in absolute space are shown in (d) via the operator of transformation between relative space and absolute space. by using the transformation between absolute space and relative space, this study could also locate the exact positions of the close contacts. fig. 10(c) and (d) show the two locations of subjects 32 and 19 in relative space and absolute space: one came within 1.94 m of the carrier at 16:17:20, and the other within 1.44 m at 16:18:30. according to these query results, public health agencies should take action to control these two people. thus, this query would be very helpful for public health agencies in controlling epidemic spreading if embedded into a real-time gis analytical system. the second scenario was to find any tracker of a subject in a crowd. in crime research, tracking is a basic behavior preceding further crime (beauregard et al., 2010), and finding trackers helps in taking precautionary actions to prevent crime. usually, a tracker's behavior is similar to that of the other normal pedestrians in the crowd; therefore, victims and public security organizations may have difficulty discovering a tracker in absolute space. in this experiment, we assumed that subject 2 carried a large amount of cash and had to be aware of potential trackers. to remain unnoticed, trackers usually stay within a certain range of the target, so as to neither be discovered nor lose the target. this distance range is a critical parameter for building specific patterns for trackers. according to sociological theory (moore et al., 2010), trackers are easily spotted if their distance to the victim is less than 3 m; therefore, this study set 3 m as the lower distance limit for this query. on the other hand, the upper distance limit must guarantee that the target remains within the tracker's sight. this study set the upper distance limit to 42 m to ensure that the tracker did not lose the target.
this parameter was calculated according to the waiting time at intersections in the study area (30 s): 30 s × 1.4 m/s = 42 m, where 1.4 m/s is the normal average walking speed (fitzpatrick et al., 2006). sometimes the distance may grow beyond 42 m, which could cause the tracker to lose the target; the tracker will then speed up to follow the target. in this case, the tracker must find the target within 30 s because of the constraints of the traffic lights. the conditions for tracker finding can therefore be expressed as follows: (1) the distance between the tracker and the target always lies within [3 m, 42 m); (2) the time during which the distance falls outside [3 m, 42 m) is below 30 s, and this occurs only once, while speeding up, because of the traffic-light locations in the study area. three main operators (reference cutting, labeling and matching) were used to find trackers meeting the above conditions. similar to the close-contact querying in the epidemic-spreading scenario, this study used the reference cutting operator to obtain a matrix of all subjects' relationships relative to subject 2. then, the labeling operator was used to label the relative distance of every relative relationship bin. fig. 11 shows the labeled results: bins labeled in red and yellow had a relative distance of less than 42 m, while bins labeled in blue had a relative distance larger than 42 m. fig. 11 thus illustrates the change in the relative distances between all the subjects and the target. two related patterns could be derived through matching: (1) the subject was close to the target most of the time (subjects 4, 8 and 21), or (2) the tracker lost the target for a very short time (subject 6). fig. 12 shows the speed pattern of subject 6: obvious acceleration was observed between t1 and t3, meeting the second condition for identifying a tracker. based on the above results, these four subjects were viewed as trackers.
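the two tracker conditions can be sketched as a check on one subject's labeled distance series; the 5 s sampling interval and the run-counting logic are illustrative assumptions:

```python
STEP_S = 5.0  # assumed sampling interval between time stamps, seconds

def is_tracker(dist_series, lo=3.0, hi=42.0, max_gap_s=30.0):
    """Condition (1): the distance stays within [lo, hi).
    Condition (2): at most one out-of-band excursion, shorter than
    max_gap_s (the 30 s traffic-light constraint)."""
    outside = [not (lo <= d < hi) for d in dist_series]
    runs, current = [], 0           # lengths of consecutive out-of-band runs
    for out in outside:
        if out:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return len(runs) <= 1 and all(r * STEP_S < max_gap_s for r in runs)

steady = [10.0] * 12                          # always inside the band
burst = [10.0] * 5 + [50.0] * 3 + [10.0] * 4  # one 15 s excursion
gone = [10.0] * 3 + [50.0] * 9                # out of band for 45 s
print(is_tracker(steady), is_tracker(burst), is_tracker(gone))
```

applied to the reference-cutting output, this check flags subjects such as 4, 8 and 21 (steady) and subject 6 (one short excursion) while rejecting pedestrians who drift away for good.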
in order to demonstrate the advantages of the proposed approach, this study also shows the trajectories of subjects 4 and 6. after applying the operator of transformation between absolute space and relative space, the trajectories of these individuals and the target did not present obvious abnormal features compared to the other trajectories in absolute space (fig. 13(a)); however, their trajectories relative to the target presented an obvious accompaniment feature in relative space (fig. 13(b) and (c)). therefore, the proposed approach can mine hidden patterns that are missed by traditional gis data models.
fig. 11. distances of all the subjects relative to subject 2. fig. 12. acceleration of subject 6, which was positive when the target was lost. (m. feng, et al., isprs journal of photogrammetry and remote sensing 153 (2019) 74-95)
the third scenario was to derive the motion trends of nearby crowds. the motion trend of a nearby crowd is an important feature of the surrounding dynamics and has a large influence on decision-making processes (fang et al., 2003); individuals need to understand nearby motion trends to avoid heavy congestion or stampedes. this experiment assumed subject 2 to be a walker who hoped to be aware of his surroundings, for example, how many people were moving closer to or farther from him and whether they were accelerating towards him. five main operators (reference cutting, trending, labeling, filtering and time flattening) were used to implement this application. first, the relative relationships of all the experimental subjects relative to the walker were derived by using reference cutting. then, a matrix that contained the motion trend (relative distance change and relative acceleration) was calculated through the trending operator. this matrix was labeled with a legend of colors in fig.
14 through the labeling operator, which illustrates changes in the relative motion trends of the other subjects compared to the walker. in fig. 14, red bins in the matrix mean that the subjects are accelerating towards the walker, orange bins mean that the subjects are decelerating towards the walker, gray bins mean that the subjects are accelerating away from the walker, blue bins mean that the subjects are decelerating away from the walker, and yellow bins mean no change in distance and speed. the gray, blue and yellow relative relationship bins were removed with the filtering operator; fig. 15(a) shows the results after this operation. then, the number of bins in which subjects moved towards the walker was counted by time flattening, which counted the red and orange bins. the updated result is shown in fig. 15(b), indicating that more than half of the individuals were moving closer to the walker from t1 to t27 (16:14:05-16:16:15). similarly, fig. 15(c) shows the number of subjects that were accelerating towards the walker at each time, indicating that more than half of the subjects were accelerating towards the walker at several times (t2, t4, t9-t11, t12, t15, t17, t20-t23 and t26-t27). this study transformed these bins from relative space to absolute space by the operator of transformation between absolute space and relative space, and the locations of the surrounding people are shown in fig. 16(a) and (b) at 16:14:05 and 16:14:40, respectively. by combining all the locations of surrounding people close to the walker, we could find the concentrated area of surrounding people, which is labeled in red in fig. 16(c). in this case, the walker may have felt uncomfortable and potentially at risk on the crowded road. setting the walker as the initial reference object, this model derived the crowd's motion trends for all subjects by the operator of initial reference transformation. fig.
17 shows the results of initial reference transformation from object 2 at t0 to object 14 at t16: those who were moving closer to subject 2 in fig. 17(a) did not show a clear motion trend relative to the new walker (subject 14 at t16) in fig. 17(b). this method is helpful for people-oriented routing services, such as pedestrian navigation or tourism. this study tested the performance of the proposed model in two respects: one was the time cost of the querying functions, and the other was a comparison of the time costs of the proposed model and the absolute method in the arcgis geodatabase. this study used a dataset (gramaglia et al., 2014) of vehicle trajectories on two highways around madrid, spain. the detailed information in these trajectories contains five fields (time, id, x position (m), y position (m) and speed (m/s)). before the comparison, this study derived 14 datasets from this open-access dataset, each containing m subjects for n time stamps, where m = 10, 50, 100, 500 or 1000 and n = 50, 100 or 200. table 7 lists these datasets and their storage sizes. all the computational tasks were implemented single-threaded on a dell desktop (four intel(r) xeon(r) cpu e3-1220 v5 cores @ 3.00 ghz, 16 gb of ram and a 64-bit operating system). six main functions (point extraction, time drilling, reference cutting, querying close contacts, finding trackers and deriving the motion trends of nearby crowds (deriving trend)) were selected to test the time efficiency. table 7 lists the execution times for the 14 datasets, and fig. 18 illustrates the computation time of each function among these datasets. the results show that the computation time of all the selected functions was less than 3 s when the data volume of the dataset was smaller than 1 gb. the execution times significantly increased when the dataset contained more than 5.625×10^7 records.
among the six functions, point extraction and finding trackers took significantly less time than the other four functions when the number of records increased to 7×10^7. this result shows that the proposed model can work effectively with more than 3 gb of data and meet behavior-analysis requirements in scenarios with 1000 subjects. the second comparison examined the time efficiency of the proposed model against the absolute model (the geodatabase in arcgis). the proposed model presents an obvious advantage when conducting three of the functions (point extraction, time drilling, reference cutting), because these functions require only a simple query in the proposed model, while the absolute model requires an additional relative relationship transformation. the other three functions (querying close contacts, finding trackers and deriving the motion trend of the nearby crowd) are meaningful in real situations and require a relatively complicated implementation; therefore, this study compared this second set of three functions as implemented by the proposed model and by the absolute model. in arcgis, "buffer" and "overlay analysis" were used to implement querying close contacts: the point features of the reference subject's trajectory were the input features of buffer analysis in arctoolbox, the outputs of this operation were polygon features representing the close-contact area, and the intersect tool in overlay analysis was then used to check whether the subjects' trajectories fell inside or outside the close-contact area. however, no appropriate tools exist in arcgis to directly implement finding trackers or deriving the motion trend of the nearby crowd. for finding trackers, we computed each subject's distance relative to the reference subject at each time from the coordinates in the geodatabase and filtered the subjects according to the conditions in section 5.3.
for deriving the motion trend of the nearby crowd, we calculated all the subjects' relative speeds and relative accelerations with respect to the reference subject and statistically counted the five types of motion trends described in section 5.4. table 8 lists the execution times of the three selected functions for the proposed model and the absolute gis method on 12 of the 14 datasets; the last two datasets were omitted because the absolute gis method stopped responding when the volume of the dataset increased to 3.5 gb. fig. 19 illustrates the computation time of each function among these datasets. the execution times of the proposed model were approximately 5-50% of those of the absolute gis method for the same function. the computation time of querying close contacts for the proposed model was approximately 5% of that for the absolute gis method, the most evident improvement from the proposed model. this improvement increased considerably with increasing dataset volume (fig. 19(a)), and the execution-time difference reached 331 s when the volume was 3.46 gb. the second function was deriving the motion trend of the nearby crowd, whose time-efficiency improvement was similar to that of querying close contacts: the execution time of the proposed model was approximately 10% of that of the absolute gis method, and the improvement became significant when the dataset contained more than 3×10^7 records (fig. 19(c)); the execution-time difference was 171 s when the volume was 3.46 gb. the execution time of the third function (finding trackers) with the proposed model was 10%-50% of that of the absolute gis method, an improvement not as evident as those of the previous two functions. in fig. 19(b), the execution times of the proposed model and the absolute gis method both steadily increased as the number of dataset records increased.
however, the execution-time difference only reached 8.146 s when the dataset's volume was 3.46 gb, an insignificant increase compared to the previous two functions. overall, these results indicate that the proposed model is more efficient at implementing these functions than the absolute gis method. in this study, the execution times of the three selected functions (querying close contacts, finding trackers and deriving the motion trend of the nearby crowd) for the proposed model and the absolute gis method on 12 of the 14 datasets are used to indicate the efficiency improvement of the proposed model. this section therefore discusses the improvement with respect to the model's structure and operators for the three case studies. (1) in the case of epidemic spreading and close-contact querying, this paper analyzed the processes of the proposed model and the absolute gis method for querying the close contacts of the carrier. in absolute gis, this task can be achieved by buffer analysis in arcgis software, but corresponding buffer areas related to the carrier have to be built continuously while it moves through absolute space. in the proposed model, there is no need to build many buffer areas to recognize relationships over time: a single reference cutting retrieves all objects' relationships to the carrier, no matter how many objects are in the scenario. this is the fundamental reason for the better performance of our model in this case study. table 8 and fig. 19(a) show that more relationships had to be recognized in absolute gis as the number of objects in the scenario increased, which causes its execution time to increase more quickly than that of the proposed model.
(2) in the case of tracker finding, the pattern matching approach in absolute space needs to compute the relative relationships of the moving objects first, then use a data structure to store the time series of relative relationships between the target object and the others, and finally filter those time series with distance constraints to match the pattern of a tracker. this is a complicated computation process in the absolute gis method. the proposed model directly extracts each object's relationship to the target in relative space by reference cutting; the trending and matching operators are then used to find the tracker. these computations are simple matrix-based operations, so the model achieves better performance than the pattern matching approach in absolute space. the results in table 8 and fig. 19(b) demonstrate that the execution time of the proposed model is much shorter than that of the absolute gis method. (3) in the case of deriving the motion trends of nearby crowds, the absolute gis method needs to compute each object's distance to the walker from coordinates at every time step, then compare the time series of relative distances to find the trend of moving closer or farther away, and finally plot the result as a distribution map in absolute space to support the choice of walking routes. this is also a complicated computation process. the proposed model can instead analyze each object's distance-change characteristic to the walker with the trending and matching operators, which are likewise matrix-based computations well suited to a computing environment. table 8 and fig. 19(c) show that the proposed model saved approximately 90% of the execution time compared to the absolute space-based gis method. in short, the proposed model outperforms the absolute space-based approach in relative motion analysis by simplifying the computation and querying of relationships.
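a minimal sketch of such a matrix-based trending operator (our own simplification of the idea, not the paper's implementation):

```python
import numpy as np

def motion_trend(dist):
    """dist: (n_objects, n_times) relative distances to the walker.
    Returns +1 for receding, -1 for approaching, 0 for roughly constant,
    based on the net change over the window."""
    delta = dist[:, -1] - dist[:, 0]
    return np.sign(delta).astype(int)

dist = np.array([
    [4.0, 3.0, 2.0, 1.0],   # approaching the walker
    [1.0, 2.0, 3.0, 4.0],   # receding from the walker
    [2.0, 2.0, 2.0, 2.0],   # roughly constant distance
])
print(motion_trend(dist))  # → [-1  1  0]
```

a finer classification (e.g. the five trend types counted in section 5.4) would bucket the per-step differences instead of only the net change, but the operation stays a vectorized comparison over the distance matrix.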
by extending the stc in arcgis, this paper presents a novel model for storing, managing and analyzing moving objects based on their relationships in relative space. in this model, the x and y dimensions of the stc are replaced by the reference and target objects, and the stb in the stc is extended to store the data structure describing the relative relationships among moving objects. to support the analysis of moving objects, this paper introduces six classes of operators according to the type of relationship outcome, namely query processing, group relative dynamic calculation, transformation between absolute space and relative space, initial reference object transformation, attribute transformation, and relative relationship dynamic pattern transformation. two commonly used algorithms, relative relationship querying and relative relationship dynamic pattern matching, are then introduced on the basis of these operators. the former organizes the relative relationships along three dimensions (reference object, target object and time) based on the model's structure, and queries relative relationships by the corresponding objects and time. the latter implements reference cutting and labeling to match each object's relative motion against a specific pattern, which simplifies the similarity measurement required in absolute space. this study collected the trajectories of walking pedestrians and used these data to demonstrate the feasibility of the proposed model. finally, the proposed model was successfully tested in three real-life scenarios to analyze dynamic and complex relationships between moving objects. the results validated the capabilities of the designed functions and indicated that the proposed model saved 50%-95% of the execution time compared with traditional absolute gis methods. the proposed model could be widely applied in the domains of public health, security, and individual-based services.
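the three-dimensional organisation (reference object, target object, time) used by the querying algorithm can be sketched as follows (the array layout and names are our illustration, not the paper's api):

```python
import numpy as np

# Toy positions: 4 objects observed at 3 time steps in 2-D.
n_obj, n_times = 4, 3
positions = np.random.default_rng(0).random((n_obj, n_times, 2))

# rel[r, t, k] = displacement of target t relative to reference r at time k,
# i.e. the cube is indexed by (reference object, target object, time).
rel = positions[None, :, :, :] - positions[:, None, :, :]

def query(reference, target, time):
    """Relative relationship (here: a displacement vector) for one (r, t, time) cell."""
    return rel[reference, target, time]

# An object's displacement relative to itself is always the zero vector:
print(np.allclose(query(2, 2, 1), 0.0))  # → True
```

a query by objects and time is then a constant-time array lookup, which matches the described advantage over recomputing relationships in absolute space at query time.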
therefore, the contributions of this paper can be summarized as follows: (1) a relative space-based gis data model of moving objects (rsmo) is proposed to construct, operate and analyze moving objects' relative relationships in relative space. (2) a relative space-based relationship-querying algorithm and a relative dynamic pattern-matching algorithm are proposed on the basis of the proposed model. (3) three real-life scenarios are implemented to demonstrate the feasibility and usefulness of the proposed model in discovering interesting behavior hidden in crowds (e.g. querying close contacts, finding trackers and deriving a crowd's motion trends); the computational time of these real-life scenario analyses is reduced by 50-95% compared with absolute space-based methods. (fig. 19. curves of the execution times for "querying close contacts" (a), "finding tracker" (b) and "deriving the motion trends of nearby crowds" (c) with the proposed model (blue line) and geodatabase (red line). for interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.) the proposed model still has a significant disadvantage: the data volume of the proposed data model is much larger than that of the traditional absolute space-based data organization, and the compression of these relative data is a challenge. future research should study efficient compression algorithms to simplify data pre-processing and reduce the volume of relative data. the handling of multiple types of relationships, e.g. social and mental relationships, could also be examined in this model. powerful operators (e.g., closeness analysis in groups) in the social domain could be integrated into the model to support the analysis of social phenomena.
moreover, high-performance computing technologies such as cloud computing could be incorporated with the proposed model to design more efficient algorithms for relative relationship querying and pattern mining.
the research was supported in part by the national natural science foundation of china (grants 41771473, 41231171), and the fundamental research funds for the central university. key: cord-351940-cg0bewqb authors: ngwira, a.; kumwenda, f.; munthali, e.; nkolokosa, d. title: a snap shot of space and time dynamics of covid-19 risk in malawi. an application of spatial temporal model date: 2020-09-14 journal: nan doi: 10.1101/2020.09.12.20192914 sha: doc_id: 351940 cord_uid: cg0bewqb background: covid-19 has been the greatest challenge the world has faced since the second world war. the aim of this study was to investigate the distribution of covid-19 in both space and time in malawi. methods: the study used publicly available data of covid-19 cases for the period from 24th june to 20th august, 2020. semiparametric spatial temporal models were fitted to the number of weekly confirmed cases as the outcome, with time and location as independent variables.
results: the study found significant main effects of location and time, with the two interacting. the spatial distribution of covid-19 showed major cities being at greater risk than rural areas. over time, the covid-19 risk increased and then decreased in most districts, with the rural districts being consistently at lower risk. conclusion: future or present strategies to avert the spread of covid-19 should target major cities by limiting international exposure. in addition, the focus should be on time points that have shown high risk. covid-19 is a coronavirus disease that was first reported in wuhan, china in 2019. it is characterized by severe acute respiratory syndrome (sars), hence the causative virus is also known as sars-cov-2 (who, 2020). since its onset, covid-19 has been one of the greatest disease pandemics of all time. from its discovery in december in china, 19 718 030 people worldwide have been confirmed with the disease and over 722 285 people have died as of 9 august 2020 (who, 2020). the spatial trend worldwide has shown the americas (10 590 929), europe (3 582 911) and south east asia (2 632 773) being the hardest hit as of 10th august, 2020. africa had a slow progression of the disease at the beginning of the pandemic in early 2020, but the continent saw rising cases in the middle of the year, with 895 696 confirmed cases and 16 713 deaths as of 10th august, 2020. malawi at this time had recorded 5193 confirmed cases and 163 deaths (unicef malawi, 2020b). the first three cases of covid-19 were recorded on 2nd april, 2020 (unicef malawi, 2020a). understanding disease space and time dynamics is important for epidemiologists: with the space distribution, hot spot areas are marked for intervention, and possible drivers of the epidemic in those hot spots are suggested for further scientific investigation.
regarding the temporal distribution, times with high disease risk are also identified, which gives clues to possible causes including, in particular, seasonal changes. a number of studies on the spatial temporal distribution of covid-19 have been conducted (chen et al, 2020; ye and hu, 2020). the majority of these, though, have used geographical information system (gis) technology rather than statistical modelling with spatial temporal models. the few studies that have used the statistical approach to spatial temporal analysis, to our knowledge, are gayawan et al (2020), who used the poisson hurdle model to take into account excess zero counts of covid-19 cases; briz-redon and serrano aroca (2020), who used the separable random effects model with structured and unstructured area and time effects; and chen et al (2020), who used the inseparable spatial temporal model. in addition, in africa, spatial temporal analysis of covid-19 cases has been limited (gayawan et al, 2020; arashi et al, 2020; adeneknle et al, 2020) as of 9th august. in malawi, at the time of this study, no study on the spatial temporal distribution of covid-19 cases had been identified; only one study, which focused on prediction of covid-19 cases using mathematical models, was found (kuunika, 2020). the aim of this study was to determine the spatial temporal trends of covid-19 confirmed cases in malawi using spatial temporal statistical models. the objectives of the study were:
• to establish the estimated or predicted risk trend by geographical location
• to estimate the temporal risk trend of covid-19 by geographical location.
(medrxiv preprint, not certified by peer review; the author/funder has granted medrxiv a license to display the preprint in perpetuity; it is made available under a cc-by-nc-nd 4.0 international license; this version was posted september 14, 2020.) the article has been organized as follows. first, the study methods are described in terms of data collection and statistical analysis.
thereafter, the results and a discussion of the results are presented. finally, conclusions and implications regarding the findings of the study are made. the study used the publicly available districts' covid-19 confirmed daily cases data for malawi, extracted from the malawi data portal website (https://malawi.opendataforafrica.org) after registering with the portal. the total population, population density, and percentage of people with running water in each district were also extracted from the same data portal. the population size of each district was used as the expected number of people to be infected in each district, while population density and percentage of people with running water in each district were taken as covariates. though the cases started to be recorded on 2nd april, 2020, the extracted data used for spatial temporal modelling in this study only covered the period from 24th june to 20th august, as covid-19 daily cases for the districts were only available from 24th june on the portal. the study period was divided into six weeks. descriptive analysis involved the time series plot of the cumulative confirmed cases and deaths from covid-19 for the whole country, from the beginning of the epidemic to the time this study was conducted, that is, 20th august, 2020. it also involved bivariate correlation of the daily confirmed cases of covid-19 with their potential covariates, that is, population size, population density and percentage of people with running water in each district. a potential covariate would be selected for further multiple variable modelling if its p-value was less than 0.20. after the descriptive analysis, multiple variable spatial temporal models were fitted in r using the bayesian approach with integrated nested laplace approximations (inla), where η_it is the linear predictor. for the linear time trend model, the predictor is specified as η_it = μ + u_i + v_i + (φ + φ_i)t. (1)
the model with predictor (1) will be denoted by a. in (1), μ is the overall disease relative risk, u_i is the area level unstructured random effect, v_i is the area level structured random effect, φ is the overall time trend and φ_i is the area specific time trend. the unstructured area level effects were modelled by the independent normal distribution with zero mean, that is, u_i ~ n(0, σ²_u), and the structured random effects were assigned the intrinsic conditional autoregressive (icar) prior according to besag (1991), that is, v_i | v_{j≠i} ~ n((1/n_i) Σ_{j∈∂i} v_j, σ²_v/n_i), where ∂i is the set of neighbours of area i and n_i their number. the weakness of model a is the linearity assumption on the effect of time on the relative risk of the disease. to take a more flexible approach to the effect of time, nonlinear spatial temporal models were also explored. the predictor η_it for the nonlinear spatial temporal model for the time effect is specified as η_it = μ + u_i + v_i + γ_t + β_t + δ_it, (2) and this model is denoted by b. the u_i and v_i in the model are the area level unstructured and structured effects respectively, as defined in (1), and γ_t and β_t are the unstructured and structured temporal effects. the unstructured time effects were modelled by the independent normal distribution with zero mean, that is, γ_t ~ n(0, σ²_γ), and the structured temporal effects were assigned the first order random walk prior distribution defined as β_t | β_{t-1} ~ n(β_{t-1}, σ²_β). a second order random walk was also explored in case the data showed a more pronounced linear trend; the second order random walk is defined as β_t | β_{t-1}, β_{t-2} ~ n(2β_{t-1} − β_{t-2}, σ²_β). the last term in (2), δ_it, represents the interaction between area and time. four forms of interaction between space and time are possible according to knorr-held (2000).
the first form of interaction assumes interaction between the unstructured area effect (u_i) and the unstructured temporal effect (γ_t) (denote it by model b1), in which case the interaction effect is assigned the independent normal distribution, that is, δ_it ~ n(0, σ²_δ). the second type is the interaction of the structured area effect (v_i) and the unstructured temporal effect (γ_t) (denote it by model b2). this form of interaction assumes an intrinsic conditional autoregressive (car) distribution over the areas for each time, independently of all other times. the third is the interaction between the unstructured area effect (u_i) and the structured temporal effect (β_t) (denote it by model b3); the prior distribution for each area is assumed to be a second order random walk across time. the last possible space-time interaction is that between the structured area effect (v_i) and the structured time effect (β_t) (call it model b4); in this case, a second order random walk prior that depends on the neighbouring areas was assigned for each area. the prior for the variance parameter of the independent normal distributions (i.i.d.) and that for the spatial effect (besag) was the gamma(1, 0.001). the prior for the random walk variance was the gamma(1, 0.0005). the intercept was assigned the default normal prior with zero mean and zero precision, n(0, 0). model choice was by the deviance information criterion (dic) as proposed by spiegelhalter et al (2002), where a smaller dic means a better model in terms of fit and complexity. it is the sum of the measure of model fit, the posterior mean deviance denoted by d̄, and the effective number of parameters denoted by p_d, that is, dic = d̄ + p_d. the selected model was then used to estimate the relative risk, r_it.
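the second order random walk prior described in the methods penalizes second differences, so its structure (precision) matrix is built from the second-difference operator. a standard construction, sketched here in python rather than the r/inla code used in the study:

```python
import numpy as np

def rw2_structure(n):
    """Structure matrix of an RW2 prior of length n: Q = D2' D2,
    where D2 is the (n-2) x n second-difference operator."""
    d2 = np.diff(np.eye(n), n=2, axis=0)   # rows like [1, -2, 1, 0, ...]
    return d2.T @ d2                       # rank n-2 precision structure

q = rw2_structure(6)

# Any linear trend lies in the null space of Q, i.e. linear trends are
# unpenalized, which is why RW2 suits data with a pronounced linear trend:
trend = np.arange(6.0)
print(np.allclose(q @ trend, 0.0))  # → True
```

the rw1 structure is analogous with first differences (`n=1` in `np.diff`), penalizing level changes instead of curvature.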
figure 1 shows the graph of cumulative confirmed and deceased cases of covid-19 from the time the first case was reported to 20th august, 2020. there were 5282 confirmed cases and 165 deceased cases as of 20th august, 2020. generally, the cumulative number of deaths from covid-19 was low compared to the number of confirmed cases. the correlation between daily confirmed cases and population density (p-value = 0.380) revealed a lack of association. there was also no significant correlation between confirmed cases and the percentage of people with running water in each district (ρ = 0.000, p-value = 0.998). since the p-values of the correlation coefficients were more than 0.20, the significance level set to select potential covariates, the two covariates, population density and proportion of those with running water, were dropped when fitting the spatial temporal models of the weekly confirmed cases of covid-19. table 1 shows the dic of the fitted spatial temporal models defined in the methods section. models b11 and b12 had smaller dic than the rest of the models, and the dic difference between the two was not significant as it was not greater than 3 (spiegelhalter et al, 2002). the results of the model with rw2, that is, model b11, are therefore presented and discussed. table 2 presents the variance parameters of the random effects. all model terms, including the interaction effect of location and time, were significant predictors of the risk of contracting covid-19, as the estimated variances were significantly greater than zero (their confidence intervals excluded zero). the area level spatial effects and the effects of time modelled by the second order random walk prior (rw2) were highly significant, as evidenced by their larger variances.
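the dic bookkeeping behind table 1 (dic = mean posterior deviance plus effective number of parameters) can be sketched numerically; the deviance draws below are invented toy values, not the study's output:

```python
import numpy as np

def dic(deviance_samples, deviance_at_mean):
    """DIC = D-bar + p_D, with p_D = D-bar - D(theta-bar):
    mean posterior deviance plus the effective number of parameters."""
    d_bar = float(np.mean(deviance_samples))
    p_d = d_bar - deviance_at_mean
    return d_bar + p_d, p_d

samples = np.array([102.0, 98.0, 101.0, 99.0])   # toy posterior deviance draws
value, p_d = dic(samples, deviance_at_mean=96.0)
print(value, p_d)  # → 104.0 4.0
```

this makes explicit why a smaller dic indicates a better trade-off: a model can lower d̄ (fit) or p_d (complexity), and differences below about 3 are conventionally treated as not meaningful.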
the spatial temporal distribution of the overall fitted risk (figure 2, figure 3) shows that, in space, mzimba, mzuzu and nkhata bay in the north; lilongwe, lilongwe city and mchinji in the center; and blantyre, mwanza, zomba and mangochi in the south were at increased risk of being confirmed with covid-19. over time, from week 1 to week 6, the risk in mzimba and mzuzu decreased, while the risk in blantyre was consistently high. most of the districts in the rural areas were consistently at low risk of contracting covid-19 over time. figure 4 presents the spatial risk of contracting covid-19. the spatial risk represents the residual risk due to unobserved or unmeasured factors of covid-19. in general, the spatial risk looks randomly distributed in weeks 1, 3, 4 and 5 and non-random in weeks 2 and 6. more precisely, in the first week, the risk was higher in the south than in the center and north. in the second week, the spatial risk was high in the north, and to a lesser extent in the center close to the western border. in the third week, the spatial risk was high in dowa (center) and south of the lake, that is, machinga and mangochi. in the fourth week, the spatial risk was high in the areas surrounding the major cities in all three regions. in the fifth week, the spatial risk shifted to the south and to one district in the northern region. most areas in the last week had reduced spatial risk of covid-19, with the exception of two central districts.
the study looked at a snapshot of the spatial temporal distribution of covid-19 in malawi, focusing on the period from 24th june to 20th august, using an inseparable statistical spatial temporal model. the use of an inseparable model allowed investigation of the joint or interaction effect of time and location on covid-19 cases, and the use of a nonparametric model for the time effect (rw2) enabled the capturing of subtle influences of time on the risk of contracting covid-19. the space distribution of covid-19 risk in malawi in the given time period shows the cities and their surrounding areas being at increased risk. the explanation of the observed spatial gradient is a matter of conjecture. one possible factor driving the observed spatial pattern would be population size and population density: the cities have higher population density than the rural areas, and covid-19 is therefore more likely to spread fast through the movement of, and frequent contact between, people. case comparison investigations have found a positive correlation between population density and covid-19 (penerliev and petkov, 2020); for example in italy, the population density in lombardy is three times higher than in piedmont, and the incidence rate was also over three times higher than in piedmont. evidence of high population density as a risk factor for disease transmission has been seen in india, where influenza transmission rates have been found to increase above a population density of 282 people per square kilometer (african centre of strategic studies, 2020). the other possible contributor to the observed rural-city spatial gradient of covid-19 risk in malawi would be international exposure.
in this case, cities have higher international exposure than rural areas, through international flights among other channels, which would mean more imported cases. evidence of international exposure as a risk factor for covid-19 transmission has been observed in africa as a whole, where countries with high international exposure like south africa, nigeria, morocco, egypt, and algeria have had higher covid-19 cases than their counterparts. international exposure as a fuel of covid-19 transmission has also been documented in brazil, where cases were found to increase with the number of international flights entering the country (pequeno et al, 2020). regarding the temporal distribution, the disease risk increased gently from week 1 to week 4 and sharply from week 4 to week 5, when it started to decline in most regions. the relatively sharp increase in risk in weeks 4 and 5 may be attributed to the after-effects of the presidential general election held on 23rd june, 2020: the unrestricted political rallies before the election might have caused a spike in covid-19 risk thereafter. in addition, the rise in covid-19 cases during this time could be attributed to the decreasing temperatures, as this is the cold season. the decline in risk from week 5 to the last week may be due to increasing temperatures, as this time marks the beginning of the hot season. a negative correlation between covid-19 cases and temperature has been documented (pequeno et al, 2020). the study did not go without weaknesses. the first was that, in the absence of population sizes for each area at each time point, the base population at risk for each area was assumed to be constant across time, which is not strictly valid. the second was that the study did not make predictions of covid-19 risk beyond the specified study period to give an idea of how the disease would progress thereafter.
this would have important implications, particularly for planning activities that had been brought to a halt by covid-19, like education and football games. nonetheless, the study gave an overview of the disease dynamics in both space and time in the specified time frame, so as to identify hot spots in both space and time for further epidemiological investigation or intervention. the study found a significant effect of both location and time on covid-19 risk, with the effect of either of the two depending on the other, that is, interaction. the risk of covid-19 for the major cities was high compared to the rural districts, and over time the risk for rural areas remained relatively lower than in the cities. the risk of getting covid-19 in almost all districts started to decline in the last week, which was in august. the implications of the study are that future interventions to halt disease transmission, in case the disease repeats itself, should target the major cities like blantyre, zomba, mangochi, lilongwe and mzuzu, and that, in time, attention should be paid to the months of june and july, when it is very cold.
who. coronavirus disease (covid-19): situation report -203
distribution of the covid-19 epidemic and correlation with population emigration from wuhan
spatio-temporal patterns of the 2019-ncov epidemic at the county level in hubei province
the spatio-temporal epidemic dynamics of covid-19 outbreak in africa. medrxiv preprint
• changes in the spatial distribution of covid-19 incidence in italy using gis-based maps
• modelling spatial variations of coronavirus disease (covid-19) in africa
• spatial analysis and prediction of covid-19 spread in south africa after lockdown
• spatial distribution and impact assessment of covid-19 on human health using geospatial technologies in india
• a spatio-temporal analysis for exploring the effect of temperature on covid-19 early evolution in spain
• pougué biyong jn. the relatively young and rural population may limit the spread and severity of covid-19 in africa: a modelling study
• population flow drives spatio-temporal distribution of covid-19 in china
• the spatial and temporal pattern of covid-19 and its effect on human development in china
• the impacts of covariates on spatial distribution of corona virus 2019 (covid-19): what do the data show through ancova and mancova
• application of geospatial technologies in the covid-19 fight of ghana
• spatial distribution and geographic mapping of covid-19 in northern african countries; a preliminary study
• spatiotemporal distribution and trend of covid-19 in the yangtze river delta region of the people's republic of china
• quantifying the potential burden of novel coronavirus
• bayesian analysis of space-time variation in disease risk
• bayesian image restoration with two applications in spatial statistics
• bayesian modelling of inseparable space-time variation in disease risk
• bayesian measures of model complexity and fit (with discussion)
• geodemographic aspects of covid-19
• mapping risk factors for the spread of covid-19 in africa

key: cord-292475-jrl1fowa authors: abry, patrice; pustelnik, nelly; roux, stéphane; jensen, pablo; flandrin, patrick; gribonval, rémi; lucas, charles-gérard; guichard, éric; borgnat, pierre; garnier, nicolas title: spatial and temporal regularization to estimate covid-19 reproduction number r(t): promoting piecewise smoothness via convex optimization date: 2020-08-20 journal: plos one
doi: 10.1371/journal.pone.0237901 sha: doc_id: 292475 cord_uid: jrl1fowa among the different indicators that quantify the spread of an epidemic such as the on-going covid-19, stands first the reproduction number, which measures how many people can be contaminated by an infected person. in order to permit the monitoring of the evolution of this number, a new estimation procedure is proposed here, assuming a well-accepted model for current incidence data, based on past observations. the novelty of the proposed approach is twofold: 1) the estimation of the reproduction number is achieved by convex optimization within a proximal-based inverse problem formulation, with constraints aimed at promoting piecewise smoothness; 2) the approach is developed in a multivariate setting, allowing for the simultaneous handling of multiple time series attached to different geographical regions, together with a spatial (graph-based) regularization of their evolutions in time. the effectiveness of the approach is first supported by simulations, and two main applications to real covid-19 data are then discussed. the first one refers to the comparative evolution of the reproduction number for a number of countries, while the second one focuses on french departments and their joint analysis, leading to dynamic maps revealing the temporal co-evolution of their reproduction numbers. the ongoing covid-19 pandemic has produced an unprecedented health and economic crisis, urging for the development of adapted actions aimed at monitoring the spread of the new coronavirus. no country remained untouched, thus emphasizing the need for models and tools to perform quantitative predictions, enabling effective management of patients or an optimized allocation of medical resources.
for instance, the outbreak of this unprecedented pandemic was characterized by a critical lack of tools able to perform predictions related to the pressure on hospital resources (number of patients, masks, gloves, intensive care unit needs, . . .) [1, 2] . as a first step toward such an ambitious goal, the present work focuses on the assessment of the pandemic time evolution. indeed, all countries experienced a propagation mechanism that is basically universal in the onset phase: each infected person happened to infect on average more than one other person, leading to an initial exponential growth. the strength of the spread is quantified by the so-called reproduction number, which measures how many people can be contaminated by an infected person. in the early phase where the growth is exponential, this is referred to as r 0 (for covid-19, r 0 ≈ 3 [3, 4] ). as the pandemic develops and because more people get infected, the effective reproduction number evolves, hence becoming a function of time hereafter labeled r(t). this can indeed end up with the extinction of the pandemic, r(t) → 0, at the expense though of the contamination of a very large percentage of the total population, and of potentially dramatic consequences. rather than letting the pandemic develop until the reproduction number would eventually decrease below unity (in which case the spread would cease by itself), an active strategy amounts to taking actions so as to limit contacts between individuals. this path has been followed by several countries which adopted effective lockdown policies, with the consequence that the reproduction number decreased significantly and rapidly, further remaining below unity as long as social distancing measures were enforced (see for example [4, 5] ).
however, when lifting the lockdown is at stake, the situation may change with an expected increase in the number of inter-individual contacts, and monitoring in real time the evolution of the instantaneous reproduction number r(t) becomes of the utmost importance: this is the core of the present work. monitoring and estimating r(t) raises however a series of issues related to pandemic data modeling, to parameter estimation techniques and to data availability. concerning the mathematical modeling of infectious diseases, the most celebrated approaches refer to compartmental models such as sir ("susceptible-infectious-recovered"), with variants such as seir ("susceptible-exposed-infectious-recovered"). because such global models do not account well for spatial heterogeneity, clustering of human contact patterns, or variability in the typical number of contacts (cf. [6] ), further refinements were proposed [7] . in such frameworks, the effective reproduction number at time t can be inferred from a fit of the model to the data that leads to an estimated knowledge of the average number of infecting contacts per unit time, of the mean infectious period, and of the fraction of the population that is still susceptible. these are powerful approaches that are descriptive and potentially predictive, yet at the expense of being fully parametric and thus requiring the use of dedicated and robust estimation procedures. parameter estimation becomes all the more involved when the number of parameters grows and/or when the amount and quality of available data are low, as is the case for real-time, emergency monitoring of the covid-19 pandemic. rather than resorting to fully parametric models and seeing r(t) as the by-product of their identification, a more phenomenological, semi-parametric approach can be followed [8] [9] [10] .
this approach has been reported as robust and potentially leading to relevant estimates of r(t), even for epidemics spreading on realistic contact networks, where it is not possible to define a steady exponential growth phase and a basic reproduction number [6] . the underlying idea is to model incidence data z(t) at time t as resulting from a poisson distribution with a time-evolving parameter adjusted to account for the data evolution, which depends on a function f(s) standing for the distribution of the serial interval. this function models the time between the onset of symptoms in a primary case and the onset of symptoms in secondary cases, or equivalently the probability that a person confirmed infected today was actually infected s days earlier by another infected person. the serial interval function is thus an important ingredient of the model, accounting for the biological mechanisms in the epidemic evolution. assuming the distribution f to be known, the whole challenge in the actual use of the semi-parametric poisson-based model thus consists in devising estimates r̂(t) of r(t) with satisfactory statistical performance. this has been classically addressed by approaches aimed at maximizing the likelihood attached to the model. this can be achieved, e.g., within several variants of bayesian frameworks [5, 6, 8, 10] , with even dedicated software packages (cf. e.g., https://shiny.dide.imperial.ac.uk/epiestim/). instead, we promote here an alternative approach based on inverse problem formulations and proximal-operator based nonsmooth convex optimisation [11] [12] [13] [14] [15] . the questions of modeling and estimation, be they fully parametric or semi-parametric, are intimately intertwined with that of data availability. this will be further discussed, but one can however remark at this point that many options are open, with a conditioning of the results to the choices that are made.
there is first the nature of the incidence data used in the analysis (reported infected cases, hospitalizations, deaths) and the database they are extracted from. next, there is the granularity of the data (whole country, regions, smaller units) and the specificities that can be attached to a specific choice as well as the comparisons that can be envisioned. in this respect, it is worth remarking that most analyses reported in the literature are based on (possibly multiple) univariate time series, whereas genuinely multivariate analyses (e.g., a joint analysis of the same type of data in different countries in order to compare health policies) might prove more informative. for that category of research work motivated by contributing in emergency to the societal stake of monitoring the pandemic evolution in real-time, or at least, on a daily basis, there are two classes of challenges: ensuring a robust and regular access to relevant data; rapidly developing analysis/estimation tools that are theoretically sound, practically usable on data actually available, and that may contribute to improving current monitoring strategies. in that spirit, the overarching goal of the present work is twofold: (1) proposing a new, more versatile framework for the estimation of r(t) within the semi-parametric model of [8, 10] , reformulating its estimation as an inverse problem whose functional is minimized by using non smooth proximal-based convex optimization; (2) inserting this approach in an extended multivariate framework, with applications to various complementary datasets corresponding to different geographical regions. the paper is organized as follows. it first discusses data, as collected from different databases, with heterogeneity and uneven quality calling for some preprocessing that is detailed. 
in the present work, incidence data (thereafter labelled z(t)) refers to the number of daily new infections, either as reported in databases, or as recomputed from other available data such as hospitalization counts. based on a semi-parametric model for r(t), it is then discussed how its estimation can be phrased within a nonsmooth proximal-based convex optimization framework, intentionally designed to enforce piecewise linearity in the estimation of r(t) via temporal regularization, as well as piecewise constancy in spatial variations of r(t) by graph-based regularization. the effectiveness of these estimation tools is first illustrated on synthetic data, constructed from different models and simulating several scenarii, before being applied to several real pandemic datasets. first, the numbers of daily new infections for many different countries across the world are analyzed independently. second, focusing on france only, the numbers of daily new infections per continental france département (départements constitute usual entities organizing the administrative life in france) are analyzed both independently and in a multivariate setting, illustrating the benefit of this latter formulation. discussions, perspectives and potential improvements are finally discussed. datasets. in the present study, three sources of data were systematically used: • source1(jhu): johns hopkins university provides access to the cumulated daily reports of the number of infected, deceased and recovered persons, on a per country basis, for a large number of countries worldwide, essentially since the inception of the covid-19 crisis (january 1st, 2020). time series. the data available on the different data repositories used here are strongly affected by outliers, which may stem from inaccuracy or misreporting in per-country reporting procedures, or from changes in the way counts are collected, aggregated, and reported.
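such outliers can be filtered with a sliding-median rule of the kind used in this work (described next in the text: a 7-day window, with points deviating by more than ±2.5 local standard deviations replaced by the window median). a minimal sketch; the function name and exact windowing details are our assumption:

```python
import numpy as np

def preprocess_counts(z, window=7, n_std=2.5):
    """Replace outliers in a daily-count series by the sliding-window median.

    A point is flagged as an outlier when it deviates from the window median
    by more than `n_std` local standard deviations (a sketch of the +/-2.5 sd
    rule described in the text; exact edge handling is our assumption).
    """
    z = np.asarray(z, dtype=float)
    out = z.copy()
    half = window // 2
    for t in range(len(z)):
        lo, hi = max(0, t - half), min(len(z), t + half + 1)
        win = z[lo:hi]
        med, sd = np.median(win), np.std(win)
        if sd > 0 and abs(z[t] - med) > n_std * sd:
            out[t] = med  # outlier: replace by the local median
    return out
```

the filter is nonlinear, so a single massive misreport is replaced without smearing the surrounding days, unlike a moving average.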
in the present work, it has been chosen to preprocess data for outlier removal by applying to the raw time series a nonlinear filtering, consisting of a sliding median over a 7-day window: outliers, defined as deviations beyond ±2.5 standard deviations, are replaced by the window median to yield the pre-processed time series z(t), from which the reproduction number r(t) is estimated. an example of raw and pre-processed time series is illustrated in fig 3. when countries are studied independently, the estimation procedure is applied separately to each time series z(t) of size t, the number of days available for analysis. when considering continental france départements, we are given d time series z_d(t) of size t each, where 1 ≤ d ≤ d = 94 indexes the départements. these time series are collected and stacked in a matrix of size d × t, and they are analyzed both independently and jointly. model. although they can be used for envisioning the impact of possible scenarii in the future development of an on-going epidemic [3] , sir models, because they require the full estimation of numerous parameters, are often used a posteriori (e.g., long after the epidemic) with consolidated and accurate datasets. during the spread phase, and in order to account for the on-line/on-the-fly need to monitor the pandemic and to offer some robustness to partial/incomplete/noisy data, less detailed semi-parametric models focusing only on the estimation of the time-dependent reproduction number can be preferred [8, 9, 16] . let r(t) denote the instantaneous reproduction number to be estimated and z(t) be the number of daily new infections. it has been proposed in [8, 10] that {z(t), t = 1, . .
., t} can be modeled as a nonstationary time series consisting of a collection of random variables, each drawn from a poisson distribution with parameter p_t that depends on the past observations of z(t), on the current value of r(t), and on the serial interval function f(·): p_t = r(t) Σ_{s≥1} f(s) z(t − s) (eq 1). the serial interval function f(·) constitutes a key ingredient of the model, whose importance and role in pandemic evolution have been mentioned in the introduction. it is assumed to be independent of calendar time (i.e., constant across the epidemic outbreak) and, importantly, independent of r(t), whose role is to account for the time dependencies in pandemic propagation mechanisms. for the covid-19 pandemic, several studies have empirically estimated the serial interval function f(·) [17, 18] . for convenience, f(·) has been modeled as a gamma distribution, with shape and rate parameters 1.87 and 0.28, respectively (corresponding to a mean and standard deviation of 6.6 and 3.5 days, see [5] and references therein). these choices and assumptions have been followed and used here, and the corresponding function is illustrated in fig 1. in essence, the model in eq (1) is univariate (only one time series is modeled at a time), and based on a poisson marginal distribution. it is also nonstationary, as the poisson rate evolves along time. the key ingredient of this model consists of the poisson rate evolving as a weighted moving average of past observations, which suggests the naive estimate r̂(t) = z(t) / Σ_{s≥1} f(s) z(t − s) (eq 2), qualitatively based on the following rationale: when this ratio is above 1, the epidemic is growing and, conversely, when it is below 1, it decreases and eventually vanishes. non-smooth convex optimisation. the whole challenge in the actual use of the semi-parametric poisson-based model described above thus consists in devising estimates r̂(t) of r(t) that have better statistical performance (more robust, reliable, and hence usable) than the direct brute-force and naive form defined in eq 2.
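the serial-interval weighting and the naive estimate of eq (2) can be sketched as follows, using the gamma parameters quoted above (shape 1.87, rate 0.28); the simple pointwise discretization and truncation length are our assumptions:

```python
import math
import numpy as np

def serial_interval(n_days=25, shape=1.87, rate=0.28):
    """Discretized Gamma serial interval f(1..n_days), normalized to sum to 1.

    Shape/rate values are those quoted in the text; the pointwise
    discretization is our simplification.
    """
    s = np.arange(1, n_days + 1, dtype=float)
    pdf = rate**shape / math.gamma(shape) * s**(shape - 1) * np.exp(-rate * s)
    return pdf / pdf.sum()

def naive_R(z, f):
    """Brute-force estimate R(t) = z(t) / sum_s f(s) z(t-s)  (Eq. (2))."""
    z = np.asarray(z, dtype=float)
    R = np.full_like(z, np.nan)
    for t in range(1, len(z)):
        denom = sum(f[s - 1] * z[t - s] for s in range(1, min(t, len(f)) + 1))
        if denom > 0:
            R[t] = z[t] / denom
    return R
```

on noiseless data generated from the model with constant r, this ratio recovers r exactly; on real counts it is very irregular, which is what motivates the regularized estimates below.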
to estimate r(t), and instead of using bayesian frameworks that are considered state-of-the-art tools for epidemic evolution analysis, we propose and promote here an alternative approach based on an inverse problem formulation. its main principle is to assume some form of temporal regularity in the evolution of r(t) (we use a piecewise linear model in the following). in the case of a joint estimation of r(t) across several continental france départements, we further assume some form of spatial regularity, i.e., that the values of r(t) for neighboring départements are similar. univariate setting. for a single country, or a single département, the observed (possibly preprocessed) data {z(t), 1 ≤ t ≤ t} is represented by a t-dimensional vector z ∈ ℝ^t. recalling that the poisson law is p(z = n | p) = (p^n / n!) e^(−p) for each integer n ≥ 0, the negative log-likelihood of observing z given a vector p ∈ ℝ^t of poisson parameters p_t is, up to terms independent of p, Σ_t (p_t − z_t log p_t), where the dependence on the (unknown) vector r ∈ ℝ^t of values of r(t) enters through p. up to an additive term independent of p, this is equal to the kl-divergence d_kl(z | p) = Σ_t (z_t log(z_t / p_t) + p_t − z_t) (cf. section 5.4 in [15] ). given the vector of observed values z, the serial interval function f(·), and the number of days t, the vector p given by (1) reads p = r ⊙ fz, with ⊙ the entrywise product and f ∈ ℝ^{t×t} the matrix with entries f_ij = f(i − j). maximum likelihood estimation of r (i.e., minimization of the negative log-likelihood) leads to an optimization problem min_r d_kl(z | r ⊙ fz), which does not ensure any regularity of r(t). to ensure temporal regularity, we propose a penalized approach using r̂ = argmin_r d_kl(z | r ⊙ fz) + Ω(r), where Ω denotes a penalty function.
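the kl data-fidelity term just defined can be sketched directly from its formula; the function name is ours, and the convention for z_t = 0 entries (where z log z is taken as 0) is our assumption:

```python
import numpy as np

def kl_fidelity(z, p):
    """D_KL(z | p) = sum_t  z_t log(z_t / p_t) + p_t - z_t   (z_t >= 0, p_t > 0).

    Equals the Poisson negative log-likelihood up to terms independent of p;
    it is zero iff p = z entrywise, and grows as p drifts away from z.
    """
    z, p = np.asarray(z, dtype=float), np.asarray(p, dtype=float)
    zlog = np.where(z > 0, z * np.log(np.where(z > 0, z, 1.0) / p), 0.0)
    return float(np.sum(zlog + p - z))
```

unlike a quadratic fidelity, this divergence matches the poisson noise model, penalizing relative rather than absolute deviations of the fitted rate p from the counts z.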
here we wish to promote a piecewise affine and continuous behavior, which may be accomplished [19, 20] using Ω(r) = λ_time ‖d₂ r‖₁, where d₂ is the matrix associated with a laplacian filter (second-order discrete temporal derivatives), ‖·‖₁ denotes the ℓ₁-norm (i.e., the sum of the absolute values of all entries), and λ_time is a penalty factor to be tuned. this leads to the following optimization problem: r̂ = argmin_r d_kl(z | r ⊙ fz) + λ_time ‖d₂ r‖₁ (eq 5). spatially regularized setting. in the case of multiple départements, we consider multiple vectors (z_d ∈ ℝ^t, 1 ≤ d ≤ d) associated to the d time series, and multiple vectors of unknowns (r_d ∈ ℝ^t, 1 ≤ d ≤ d), which can be gathered into matrices: a data matrix z ∈ ℝ^{t×d} whose columns are z_d, and a matrix of unknowns r ∈ ℝ^{t×d} whose columns are the quantities to be estimated r_d. a first possibility is to proceed to independent estimations of the (r_d ∈ ℝ^t, 1 ≤ d ≤ d) by addressing the separate optimization problems r̂_d = argmin_{r_d} d_kl(z_d | r_d ⊙ fz_d) + λ_time ‖d₂ r_d‖₁ (eq 6), which can be equivalently rewritten into a matrix form in which ‖d₂ r‖₁ is the entrywise ℓ₁ norm of d₂ r, i.e., the sum of the absolute values of all its entries. an alternative is to estimate jointly the (r_d ∈ ℝ^t, 1 ≤ d ≤ d) using a penalty function promoting spatial regularity. to account for spatial regularity, we use a spatial analogue of d₂ promoting spatially piecewise-constant solutions. the d continental france départements can be considered as the vertices of a graph, where edges are present between adjacent départements. from the adjacency matrix a ∈ ℝ^{d×d} of this graph (a_ij = 1 if there is an edge e = (i, j) in the graph, a_ij = 0 otherwise), the global variation of the function on the graph can be computed as Σ_ij a_ij (r_ti − r_tj)², and it is known that this can be accessed through the so-called (combinatorial) laplacian of the graph [21] .
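the temporal penalty λ_time ‖d₂ r‖₁ can be sketched as follows; the matrix construction (interior second differences only, boundary handling omitted) is our assumption:

```python
import numpy as np

def second_difference_matrix(T):
    """(T-2) x T second-order difference (Laplacian-filter) matrix:
    (D2 r)_t = r_{t-1} - 2 r_t + r_{t+1}, for interior points only."""
    D2 = np.zeros((T - 2, T))
    for i in range(T - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D2

def temporal_penalty(r, lam_time):
    """lambda_time * ||D2 r||_1 : exactly zero when r is affine in t,
    so the l1 norm promotes piecewise-affine estimates with sparse kinks."""
    D2 = second_difference_matrix(len(r))
    return lam_time * float(np.abs(D2 @ r).sum())
```

an affine r(t) costs nothing, while each slope change adds a term proportional to its magnitude, which is what drives the estimate toward piecewise linearity.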
however, in order to promote smoothness over the graph while keeping some sparse discontinuities on some edges, it is preferable to regularize using a total variation on the graph, which amounts to taking the ℓ₁-norm of these gradients (r_ti − r_tj) on all existing edges. for that, let us introduce the incidence matrix b ∈ ℝ^{e×d} such that l = bᵀb, where e is the number of edges and, on each line representing an existing edge e = (i, j), we set b_{e,i} = 1 and b_{e,j} = −1. then, the ℓ₁-norm ‖rbᵀ‖₁ = ‖brᵀ‖₁ is equal to Σ_{t=1}^{t} Σ_{(i,j): a_ij = 1} |r_ti − r_tj|. alternatively, it can be computed as ‖rbᵀ‖₁ = Σ_{t=1}^{t} ‖b r(t)‖₁, where r(t) ∈ ℝ^d is the t-th row of r, which gathers the values across all départements at a given time t. from that, we can define the regularized optimization problem: r̂ = argmin_r d_kl(z | r ⊙ fz) + λ_time ‖d₂ r‖₁ + λ_space ‖r bᵀ‖₁ (eq 7). optimization problems (6) and (7) involve convex, lower semi-continuous, proper and non-negative functions, hence their set of minimizers is non-empty and convex [11] . we will discuss right after how to compute these using proximal algorithms. by the known sparsity-promoting properties of ℓ₁ regularizers and their variants, the corresponding solutions are such that d₂ r and/or rbᵀ are sparse matrices, in the sense that these matrices of (second-order temporal or first-order spatial) derivatives have many zero entries. the higher the penalty factors λ_time and λ_space, the more zeroes in these matrices. in particular, when λ_space = 0, no spatial regularization is performed and (7) is equivalent to (6) . when λ_space is large enough, rbᵀ is exactly zero, which implies that r(t) is constant at each time, since the graph of départements is connected. optimization using a proximal algorithm. the considered optimization problems are of the form min_r f(r) + Σ_m g_m(k_m r), where f and the g_m are proper lower semi-continuous convex functions, and the k_m are bounded linear operators.
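the incidence matrix b and the graph total variation ‖rbᵀ‖₁ can be sketched as follows; the edge ordering convention is ours:

```python
import numpy as np

def incidence_matrix(A):
    """Edge-vertex incidence matrix B (one row per edge (i, j), entries +1/-1),
    built from a symmetric 0/1 adjacency matrix A, satisfying L = B^T B with
    L the combinatorial Laplacian diag(degrees) - A."""
    D = A.shape[0]
    edges = [(i, j) for i in range(D) for j in range(i + 1, D) if A[i, j]]
    B = np.zeros((len(edges), D))
    for e, (i, j) in enumerate(edges):
        B[e, i], B[e, j] = 1.0, -1.0
    return B

def graph_tv(R, B):
    """Spatial penalty ||R B^T||_1 = sum_t sum_edges |r_{t,i} - r_{t,j}|."""
    return float(np.abs(R @ B.T).sum())
```

the ℓ₁ norm of the edge differences leaves most of them exactly zero at the optimum, yielding spatially piecewise-constant maps with sparse jumps across some borders, as the text describes.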
a classical case for m = 1 is typically addressed with the chambolle-pock algorithm [22] , which has been recently adapted for multiple regularization terms as in eq. 8 of [23] . to handle the lack of smoothness or lipschitz differentiability of the considered functions f and g_m, these approaches rely on their proximity operators. we recall that the proximity operator of a convex, lower semi-continuous function φ is defined as [24] prox_φ(y) = argmin_x φ(x) + ½ ‖x − y‖². in our case, we consider a separable data fidelity term f(p) = d_kl(z | p); as this is a separable function of the entries of its input, its associated proximity operator can be computed component by component [25] : (prox_{τf}(y))_t = ½ (y_t − τ + √((y_t − τ)² + 4 τ z_t)), where τ > 0. we further consider g_m(·) = ‖·‖₁, m = 1, 2, and k₁(r) ≔ λ_time d₂ r, k₂(r) ≔ λ_space r bᵀ. the proximity operators associated to the g_m read prox_{γ‖·‖₁}(y) = sign(y) (|y| − γ)₊, where (·)₊ = max(0, ·). in algorithm 1, we express explicitly algorithm 161 of [23] for our setting, considering the moreau identity that provides the relation between the proximity operator of a function and the proximity operator of its conjugate (cf. eq. (8) of [23] ). the choice of the parameters τ and σ_m impacts the convergence guarantees. in this work, we adapt a standard choice provided by [22] to this extended framework. the adjoint of k_m is denoted k*_m. the sequence (r^(k+1))_{k∈ℕ} produced by the algorithm converges to a minimizer of (7) (cf. thm 8.2 of [23] ). (algorithm 1 takes as input the data z and a tolerance ε > 0; its step sizes are set from the operator norms ‖k_m‖, m = 1, 2.) to assess the relevance and performance of the proposed estimation procedure detailed above, it is first applied to two different synthetic time series z(t). the first one is synthesized using directly the model in eq (1), with the same serial interval function f(t) as that used for the estimation, and using an a priori prescribed function r(t). the second one is produced from solving a compartmental (sir type) model.
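the two proximity operators used in the algorithm admit closed forms and can be sketched as follows; the kl prox is the standard positive root of the componentwise quadratic, and the function names are ours:

```python
import numpy as np

def prox_kl(y, z, tau):
    """Componentwise prox of tau * (p - z log p), i.e. of the KL data fidelity
    up to terms independent of p: the unique positive root of
    p^2 + (tau - y) p - tau z = 0 (standard closed form, tau > 0)."""
    z = np.asarray(z, dtype=float)
    return 0.5 * (y - tau + np.sqrt((y - tau) ** 2 + 4.0 * tau * z))

def prox_l1(y, gamma):
    """Soft-thresholding: prox of gamma * ||.||_1, applied entrywise."""
    return np.sign(y) * np.maximum(np.abs(y) - gamma, 0.0)
```

both operators are entrywise, so each primal-dual iteration costs only matrix-vector products with d₂ and b plus these cheap pointwise updates.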
for such models, r(t) can be theoretically related to the time-scale parameters entering their definition, as the ratio between the infection time scale and the quitting-infection (be it by death or recovery) time scale [26, 27] . the theoretical serial function f associated to that model and to its parameters is computed analytically (cf., e.g., [28] ) and used in the estimation procedure. for both cases, the same a priori prescribed function r(t), to be estimated, is chosen as constant (r = 2.2) over the first 45 days to model the epidemic outbreak, followed by a linear decrease (till below 1) over the next 45 days to model lockdown benefits, with finally an abrupt linear increase for the last 10 days, modeling a possible outbreak when lockdown is lifted. additive gaussian noise is superimposed on the data produced by the models to account for outliers and misreporting. for both cases, the proposed estimation procedure (obtained with λ_time set to the same values as those used to analyze real data in the next section) outperforms the naive estimates (2), which turn out to be very irregular (cf. fig 2) . the proposed estimates notably capture well the three different phases of r(t) (stable, decreasing and increasing), with notably a rapid and accurate reaction to the increasing change in the last 10 days. the present section aims to apply the model and estimation tools proposed above to actual covid-19 data. first, specific methodological issues are addressed, related to tuning the hyperparameter(s) λ_time or (λ_time, λ_space) in univariate and multivariate settings, and to comparing the consistency between different estimates of r(t) obtained from the same incidence data, yet downloaded from different repositories. then, the estimation tools are applied to the estimation of r(t), both independently for numerous countries and jointly for the 94 continental france départements.
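the synthetic scenario described above (constant r = 2.2, linear decrease, abrupt re-increase) and the poisson draws of eq (1) can be sketched as follows; the exact ramp endpoints and serial-interval truncation are our assumptions:

```python
import numpy as np

def prescribed_R(T=100):
    """Piecewise scenario from the text: R = 2.2 for 45 days, linear decrease
    below 1 over the next 45 days, abrupt linear re-increase over the last 10.
    The exact end values of the ramps (0.7, 1.3) are our assumption."""
    r = np.empty(T)
    r[:45] = 2.2
    r[45:90] = np.linspace(2.2, 0.7, 45)
    r[90:] = np.linspace(0.7, 1.3, T - 90)
    return r

def simulate_incidence(r, f, z0=100, seed=0):
    """Draw z(t) ~ Poisson( R(t) * sum_s f(s) z(t-s) ), the model of Eq. (1)."""
    rng = np.random.default_rng(seed)
    z = [float(z0)]
    for t in range(1, len(r)):
        p = r[t] * sum(f[s - 1] * z[t - s] for s in range(1, min(t, len(f)) + 1))
        z.append(float(rng.poisson(max(p, 0.0))))
    return np.array(z)
```

running the naive ratio of eq (2) on such draws reproduces the irregular behavior reported in fig 2, while the prescribed r(t) serves as ground truth for the regularized estimates.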
estimation of r(t) is performed daily, with t thus increasing every day, and updated results are uploaded on a regular basis to a dedicated webpage (cf. http://perso.ens-lyon.fr/patrice.abry). regularization hyperparameter tuning. a critical issue associated with the practical use of the estimates based on the optimization problems (5) and (7) lies in the tuning of the hyperparameters balancing the data fidelity and penalization terms. while automated and data-driven procedures can be devised, following works such as [29] and references therein, let us analyze the form of the functional to be minimized, so as to compute relevant orders of magnitude for these hyperparameters. let us start with the univariate estimation (5). using λ_time = 0 implies no regularization, and the achieved estimate turns out to be as noisy as the one obtained with the naive estimator (cf. eq (2)). conversely, for large enough λ_time, the proposed estimate becomes exactly a constant, missing any time evolution. tuning λ_time is thus critical but can become tedious, especially because differences across countries (or across départements in france) are likely to require different choices for λ_time. however, a careful analysis of the functional to minimize shows that the data fidelity term (9), based on a kullback-leibler divergence, scales proportionally to the input incidence data z, while the penalization term, based on the regularization of r(t), is independent of the actual values of z. therefore, the same estimate for r(t) is obtained if we replace z with α × z and λ with α × λ.
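this homogeneity of the kl fidelity is easy to check numerically; a minimal sketch (values chosen arbitrarily for illustration):

```python
import numpy as np

def kl(z, p):
    """KL data fidelity sum_t z_t log(z_t / p_t) + p_t - z_t, as used in (5)."""
    z, p = np.asarray(z, dtype=float), np.asarray(p, dtype=float)
    return float(np.sum(z * np.log(z / p) + p - z))

# Homogeneity behind the z / std(z) normalization: scaling the data by alpha
# scales the fidelity by alpha while the penalty ||D2 r||_1 is unchanged, so
# (alpha * z, alpha * lambda) and (z, lambda) yield the same estimate of R(t).
z = np.array([120.0, 150.0, 90.0])
p = np.array([100.0, 160.0, 95.0])
alpha = 7.0
```

this is why a single λ_time can be shared across countries once each series is divided by its standard deviation.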
because orders of magnitude of z are different amongst countries (either because of differences in population size, or of pandemic impact), this critical observation leads us to apply the estimate not to the raw data z but to a normalized version z/std(z), alleviating the burden of selecting one λ_time per country, instead enabling the selection of one same λ_time for all countries, and further permitting comparison of the estimated r(t)'s across countries at equivalent levels of regularization. considering now the graph-based, spatially regularized estimates (7) while keeping λ_time fixed, the different r(t) are analyzed independently for each département when λ_space = 0. conversely, choosing a large enough λ_space yields exactly identical estimates across départements that are, satisfactorily, very close to what is obtained from data aggregated over france prior to estimation. further, the connectivity graph amongst the 94 continental france départements leads to an adjacency matrix with 475 non-zero off-diagonal entries (set to the value 1), associated to as many edges as exist in the graph. therefore, a careful examination of (7) shows that the spatial and temporal regularizations have equivalent weights when λ_time and λ_space are chosen according to (10); the use of z/std(z) and of (10) above gives a relevant first-order guess for the tuning of λ_time and of (λ_time, λ_space). estimate consistency using different repository sources. when undertaking such work dedicated to on-going events, to daily evolutions, and to a real stake in forecasting future trends, solid access to reliable data is critical. as previously mentioned, three sources of data are used, each including data for france, which are thus now used to assess the impact of data sources on estimated r(t). source1(jhu) and source2(ecdpc) provide cumulated numbers of confirmed cases counted at national levels and (in principle) including all reported cases from any source (hospital, death at home or in care homes.
. .). source3(spf) does not report that same number, but a collection of other figures related to hospital counts only, from which a daily number of new hospitalizations can be reconstructed and used as a proxy for daily new infections. the corresponding raw and (sliding-median) preprocessed data, illustrated in fig 3, show overall comparable shapes and evolutions, yet with clearly visible discrepancies of two kinds. first, source1(jhu) and source2(ecdpc), consisting of crude reports of numbers of confirmed cases, are prone to outliers. those can result from miscounts, from pointwise incorporations of new figures, such as the progressive inclusion of cases from ehpad (care homes) in france, or from corrections of previous erroneous reports. conversely, data from source3(spf), based on hospital reports, suffer from far fewer outliers, yet at the cost of providing only partial figures. second, in france, as in numerous other countries worldwide, the procedure on which confirmed case counts are based changed several times during the pandemic period, possibly yielding some artificial increase in the local average number of daily new confirmed cases. this has notably been the case for france prior to the end of the lockdown period (mid-may), when the number of tests performed increased regularly for about two weeks, and again early june, when the counting procedure changed once more, likely because of the massive use of serology tests. because the estimate of r(t) essentially relies on comparing a daily number against a past moving average, these changes lead to significant biases that cannot easily be accounted for, but which vanish after some duration controlled by the typical width of the serial distribution f (of the order of ten days). confirmed infection cases across the world.
to report estimated r(t)'s for different countries, data from source2(ecdpc) are used, as they are of better quality than data from source1(jhu), and because hospital-based data (as in source3(spf)) are not easily available for numerous different countries. visual inspection led us to choose, uniformly for all countries, two values of the temporal regularization parameter: λ_time = 50 to produce a strongly regularized, hence slowly varying, estimate, and λ_time = 3.5 for a milder regularization, and hence a more reactive estimate. these estimates being by construction designed to favor piecewise linear behaviors, local trends can be estimated by computing (robust) estimates of the derivatives β̂(t) of r̂(t). the slow and less slow estimates of r̂(t) thus provide a slow and less slow estimate of the local trends. intuitively, these local trends can be seen as predictors for the forthcoming value of r: r̂(t + n) = r̂(t) + n β̂(t). let us start by inspecting again the data for france, further comparing estimates stemming from data in source2(ecdpc) or in source3(spf) (cf. fig 4) . as discussed earlier, data from source2(ecdpc) show far more outliers than data from source3(spf), thus impacting the estimation of r and β. as expected, the strongly regularized estimates (λ_time = 50) are less sensitive than the less regularized ones (λ_time = 3.5), yet discrepancies in estimates are significant, as data from source2(ecdpc) yield, for june 9th, estimates of r slightly above 1, while those from source3(spf) remain steadily around 0.7, with no or mild local trends. again, this might be because, in late may, france started massive serology testing, mostly performed outside hospitals. this yielded an abrupt increase in the number of new confirmed cases, biasing upward the estimates of r(t).
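the short-term predictor r̂(t + n) = r̂(t) + n β̂(t) can be sketched as follows; the paper's robust derivative estimate is not detailed here, so a plain least-squares slope over the last few days is our stand-in:

```python
import numpy as np

def local_trend_forecast(r_hat, n_ahead=7, fit_window=7):
    """Extrapolate R(t + n) = R(t) + n * beta(t), with the local slope beta(t)
    estimated by a least-squares line over the last `fit_window` days
    (a simple stand-in for the robust derivative estimate used in the text)."""
    t = np.arange(fit_window, dtype=float)
    beta, _ = np.polyfit(t, np.asarray(r_hat, dtype=float)[-fit_window:], 1)
    return r_hat[-1] + np.arange(1, n_ahead + 1) * beta
```

because the regularized estimates are piecewise linear by construction, such a local linear extrapolation is consistent with the estimation model itself.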
however, the short-term local trend for june 9th also points downward, suggesting that the model is incorporating these irregularities and that the estimates will become unbiased again after an estimation time controlled by the typical width of the serial distribution f (of the order of ten days). this recent increase is not seen in the source3(spf)-based estimates, which remain very stable, potentially suggesting that hospital-based data are much less affected by changes in testing policies. this local analysis at the current date can be complemented by a more global view of what has happened since the lifting of the lockdown. considering the whole period starting from may 11th, we end up with triplets [5th percentile; median; 95th percentile] that read as given in table 1. source2(ecdpc) provides data for several tens of countries. figs 5 to 8 report r̂(t) and β̂(t) for several selected countries. more figures are available at perso.ens-lyon.fr/patrice.abry. as of june 9th (time of writing), fig 5 shows that, for most european countries, the pandemic seems to remain under control despite the lifting of the lockdown, with (slowly varying) estimates of r remaining stable below 1, ranging from 0.7 to 0.8 depending on the country, and (slowly varying) trends around 0. sweden and portugal (not shown here) display less favorable patterns, as does, to a lesser extent, the netherlands, raising the question of whether this might be a potential consequence of less stringent lockdown rules compared to neighboring european countries. fig 6 shows that, while r̂ for canada has been clearly below 1 since early may, with a negative local trend, the usa are still bouncing back and forth around 1. south america is in the above-1 phase but starts to show negative local trends. fig 7 indicates that iran, india and indonesia are in the critical phase, with r̂(t) > 1. fig 8 shows that data for african countries are difficult to analyze, and that several countries, such as egypt or south africa, are in pandemic growing phases.
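the short-term predictor r̂(t + n) = r̂(t) + n β̂(t) introduced above can be illustrated with a minimal sketch; the median-of-increments slope estimator and the 7-day width below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def local_trend(r_hat, width=7):
    # robust local slope beta_hat(t): median of the one-day increments
    # over the last `width` days (width is an illustrative choice)
    return float(np.median(np.diff(r_hat[-width:])))

def forecast(r_hat, n, width=7):
    # short-term predictor: r_hat(t + n) = r_hat(t) + n * beta_hat(t)
    return float(r_hat[-1] + n * local_trend(r_hat, width))

# a piecewise-linear estimate decreasing by 0.02 per day
r_hat = np.array([1.10, 1.08, 1.06, 1.04, 1.02, 1.00, 0.98, 0.96])
r_in_3_days = forecast(r_hat, n=3)   # 0.96 + 3 * (-0.02) = 0.90
```

because the regularized estimates are designed to be piecewise linear, a robust slope over the recent past is a natural local-trend estimate, and linear extrapolation gives the short-term forecast.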
phase-space representation. to complement figs 5 to 8, fig 9 displays a phase-space representation of the time evolution of the pandemic, constructed by plotting one against the other the local averages (over a week) of the slowly varying estimated reproduction number and local trend, (r̄(t), β̄(t)), for a period ranging from mid-april to june 9th. country names are written at the end (last day) of the trajectories. interestingly, european countries display a c-shaped trajectory, starting with r > 1 and negative trends (lockdown effects), thus reaching the safe zone (r < 1), but eventually performing a u-turn with a slow increase of local trends until they become positive. this results in a mild but clear re-increase of r, yet with most values below 1 today, except for france (see comments above) and sweden. the usa display a similar c-shape, though almost concentrated on the edge point r(t) = 1, β = 0, while canada does return to the safe zone with a specific pattern. south american countries, obviously at an earlier stage of the pandemic, show an inverted c-shaped pattern, with trajectories evolving from the bad top-right corner to the controlling phase (negative local trend, with decreasing r still above 1, though). the phase spaces of asian and african countries essentially confirm these c-shaped trajectories. envisioning these phase-space plots as pertaining to different stages of the pandemic (rather than to different countries) suggests that the covid-19 pandemic trajectory resembles a clockwise circle, starting from the bad top-right corner (r above 1 and positive trends), evolving, likely under lockdown impact, towards the bottom-right corner (r still above 1 but negative trends) and finally to the safe bottom-left corner (r below 1 and negative then null trend). the lifting of the lockdown may explain the continuation of the trajectory in the still safe but. . . corner (r below 1 and again positive trend).
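the phase-space coordinates are weekly local averages of the slowly varying reproduction-number estimate and its trend; a minimal sketch (the corner labels are ours, paraphrasing the corners described above, and the 7-day window is an assumption):

```python
import numpy as np

def phase_point(r_hat, beta_hat, window=7):
    # weekly-averaged phase-space coordinates (r_bar, beta_bar)
    return (float(np.mean(r_hat[-window:])),
            float(np.mean(beta_hat[-window:])))

def corner(r_bar, beta_bar):
    # the four corners of the (r, beta) phase space described in the text
    if r_bar > 1:
        return "top right (growing)" if beta_bar > 0 else "bottom right (controlling)"
    return "top left (rebounding)" if beta_bar > 0 else "bottom left (safe)"

# a country steadily below 1 with a mild negative trend
r_bar, b_bar = phase_point(np.full(10, 0.75), np.full(10, -0.005))
```

plotting successive (r̄, β̄) points then traces out the clockwise trajectories discussed above.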
as of june 9th, it can only be hoped that trajectories will not close the loop and reach back to the bad top-right corner and the r = 1 limit. continental france départements: regularized joint estimates. there is further interest in focusing the analysis on the potential heterogeneity of the epidemic propagation across a given territory, governed by the same sanitary rules and health care system. this can be achieved by estimating a set of local r̂(t)'s for different provinces and regions [5]. such a study is made possible by the data from source3(spf), which provides hospital-based data for each of the continental france départements. fig 4 (right) already reported the slowly and rapidly varying estimates of r and local trends computed from data aggregated over the whole of france. to further study the variability across the continental france territory, the graph-based, joint spatial and temporal regularization described in eq 7 is applied to the number of confirmed cases, consisting of a matrix of size d × t, with d = 94 continental france départements and t the number of available daily data points (e.g., t = 78 on june 9th, data being available only after march 18th). the choice λ time = 3.5, leading to fast estimates, was used for this joint study. using (10) as a guideline, empirical analyses led us to set λ space = 0.025, thus selecting the spatial regularization to weigh one-fourth of the temporal regularization. first, fig 10 (top row) maps and compares, for june 9th (chosen arbitrarily as the day of writing), per-département estimates obtained when départements are analyzed either independently (r̂ indep, using eq 6, left plot) or jointly (r̂ joint, using eq 7, right plot). while the means of r̂ indep and r̂ joint are of the same order (≃0.58 and ≃0.63, respectively), the standard deviation drops from ≃0.40 to ≃0.14, indicating a significant decrease in the variability across départements.
this is further complemented by visual inspection of the maps, which reveals reduced discrepancies across neighboring départements, as induced by the estimation procedure. in a second step, short- and long-term trends are automatically extracted from r̂ indep and r̂ joint, and the short-term trends are displayed in the bottom row of fig 10 (left and right, respectively). this evidences again a reduced variability across neighboring départements, though much less than that observed for r̂ indep and r̂ joint, likely suggesting that trends on r are per se more robust quantities to estimate than single r's. for june 9th, fig 10 also indicates reproduction numbers that are essentially stable everywhere across france, thus confirming the trend estimated on data aggregated over all of france (cf. fig 4, right plot). video animations, available at perso.ens-lyon.fr/patrice.abry/deptregul.mp4 and at barthes.enssib.fr/coronavirus/ixxi-sisyphe/, updated on a daily basis, report further comparisons between r̂ indep and r̂ joint and their evolution over time for the whole period of data availability. maps for selected days are displayed in fig 11 (with identical colormaps and colorbars across time). fig 11 shows that until late march (the lockdown took effect in france on march 17th), r̂ joint was uniformly above 1.5 (chosen as the upper limit of the colorbar so that variations during the lockdown and post-lockdown periods remain visible), indicating a rapid evolution of the epidemic across the whole of france. a slowdown of the epidemic evolution is visible as early as the first days of april (with overall decreases of r̂ joint, and a clear north vs. south gradient). during april, this gradient rotated slightly and aligned on a north-east vs. south-west direction, globally decreasing in amplitude. interestingly, in may, this gradient reversed direction, from south-west to north-east, though with very mild amplitude.
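the joint estimate balances a data-fidelity term against a spatio-temporal penalty of the form o(r) = λ time ‖d₂ r‖₁ + λ space ‖r bᵀ‖₁ (cf. eq 7). a minimal sketch of that penalty, with time along rows of r and an assumed toy chain graph of three départements:

```python
import numpy as np

def d2(r):
    # second-order temporal differences (time along axis 0 of r, T x K)
    return r[2:] - 2 * r[1:-1] + r[:-2]

def penalty(r, b, lam_time=3.5, lam_space=0.025):
    # O(r) = lam_time * ||D2 r||_1 + lam_space * ||r B^T||_1,
    # with b the edge-node incidence matrix of the proximity graph
    return lam_time * np.abs(d2(r)).sum() + lam_space * np.abs(r @ b.T).sum()

# toy example: 3 departments on a chain graph 0-1-2, r constant in time
b = np.array([[1., -1., 0.], [0., 1., -1.]])   # 2 edges x 3 nodes
r = np.tile([0.7, 0.7, 1.0], (4, 1))           # T=4 days, K=3
# temporal term is zero (r constant in time); spatial term is
# 0.025 * 4 * |0.7 - 1.0| = 0.03
```

the ℓ₁ norms promote sparse second differences in time (piecewise-linear r̂(t)) and sparse differences across graph edges (neighboring départements sharing similar values), which is exactly the reduced cross-département variability observed above.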
as of today (june 9th), the pandemic, viewed through the hospital-based data from source3(spf), seems under control across the whole of continental france. estimation of the reproduction number constitutes a classical task in assessing the status of a pandemic. classically, this is done a posteriori (after the pandemic) and from consolidated data, often relying on detailed and accurate sir-based models and on bayesian frameworks for estimation. however, on-the-fly monitoring of the time evolution of the reproduction number constitutes a critical societal stake in situations such as that of covid-19, when decisions need to be taken and actions need to be made under emergency. this calls for a triplet of constraints: i) robust access to fast-collected data; ii) semi-parametric models for such data that focus on a subset of critical parameters; iii) estimation procedures that are both elaborate enough to yield robust estimates, and versatile enough to be used on a daily basis and applied to available data (often limited in quality and quantity). in that spirit, making use of a nonstationary poisson-distribution-based semiparametric model proven robust in the literature for epidemic analysis, we developed an original estimation procedure favoring piecewise-regular estimation of the evolution of the reproduction number, both along time and across space. this was based on an inverse-problem formulation balancing data fidelity against time and space regularization, and used proximal operators and nonsmooth convex optimization. this tool can be applied to time series of incidence data reported, e.g., for a given country. whenever the data make it possible, the estimation can benefit from a graph of spatial proximity between subdivisions of a given territory. the tool also provides local trends that allow forecasting short-term future values of r.
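the poisson semiparametric model referred to above writes the expected daily count as r_t times a weighted sum of past counts, with a kullback-leibler data-fidelity term. a minimal sketch, in which the 3-day serial interval weights are purely illustrative, not the paper's φ:

```python
import numpy as np

def convolve_past(z, phi):
    # (Phi z)_t = sum_{s>=1} phi[s] * z[t-s]: past counts weighted by
    # the serial interval distribution phi
    out = np.zeros(len(z))
    for t in range(len(z)):
        for s in range(1, len(phi) + 1):
            if t - s >= 0:
                out[t] += phi[s - 1] * z[t - s]
    return out

def kl_fidelity(z, r, phi_z, eps=1e-12):
    # generalized Kullback-Leibler divergence for the Poisson model:
    # sum_t [ r_t*(Phi z)_t - z_t + z_t * log(z_t / (r_t*(Phi z)_t)) ]
    z = np.asarray(z, dtype=float)
    p = r * phi_z + eps
    return float(np.sum(p - z + np.where(z > 0, z * np.log((z + eps) / p), 0.0)))

phi = [0.5, 0.3, 0.2]              # toy serial interval weights
z = np.array([1., 2., 3., 4.])     # toy daily counts
phi_z = convolve_past(z, phi)      # weighted past incidence
```

the divergence vanishes when z_t = r_t (φz)_t, so minimizing it plus the regularization terms yields reproduction-number estimates faithful to the counts while remaining piecewise regular.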
the proposed tools were applied to pandemic incidence data consisting of daily counts of new infections, from several databases providing data either worldwide on an aggregated per-country basis or, for france only, based on the sole hospital counts, spread across the french territory. they permitted revealing interesting patterns on the state of the pandemic across the world, as well as assessing variability across a single territory governed by the same (health care and political) rules. more importantly, these tools can easily be used every day as an on-the-fly monitoring procedure for assessing the current state of the pandemic and predicting its short-term future evolution. updated estimations are published online every day at perso.ens-lyon.fr/patrice.abry and at barthes.enssib.fr/coronavirus/ixxi-sisyphe/. data were (and still are) automatically downloaded on a daily basis using routines written by ourselves. all tools have been developed in matlab™ and can be made available from the corresponding author upon motivated request. at the methodological level, the tool can be further improved in several ways. instead of using o(r) ≔ λ time ‖d₂ r‖₁ + λ space ‖r bᵀ‖₁ for the joint time and space regularization, another possible choice is to directly consider the matrix d₂ r bᵀ of joint spatio-temporal derivatives, and to promote sparsity with an ℓ₁ norm, or structured sparsity with a mixed norm ℓ₁,₂, e.g., ‖d₂ r bᵀ‖₁,₂ = ∑ₜ ‖(d₂ r bᵀ)(t)‖₂. as previously discussed, data collected in the course of a pandemic are prone to several causes of outliers. here, outlier preprocessing and reproduction-number estimation were conducted in two independent steps, which can turn out suboptimal.
they can be combined into a single step, at the cost of enlarging the representation space so as to split the observations into true data and outliers, by adding an extra regularization term to the functional to be minimized and devising the corresponding optimization procedure, which becomes nonconvex and hence far more complicated to address. finally, when an epidemic model suggests a way to make use of several time series (such as, e.g., infected and deceased) for one same territory, the tool can straightforwardly be extended into a multivariate setting by a mild adaptation of optimization problems (6) and (7), replacing the kullback-leibler divergence d_kl(z | r ⊙ φz) by ∑_{i=1}^{i} d_kl(z^i | r ⊙ φz^i). finally, automating a data-driven tuning of the regularization hyperparameters constitutes another important research track.

references (titles as cited in this work):
- factors determining the diffusion of covid-19 and suggested strategy to prevent future accelerated viral infectivity similar to covid
- pooling data from individual clinical trials in the covid-19 era
- expected impact of lockdown in ile-de-france and possible exit strategies
- estimating the burden of sars-cov-2 in france
- the impact of a nation-wide lockdown on covid-19 transmissibility in italy
- measurability of the epidemic reproduction number in data-driven contact networks
- mathematical models in epidemiology
- a new framework and software to estimate time-varying reproduction numbers during epidemics
- the r0 package: a toolbox to estimate reproduction numbers for epidemic outbreaks
- improved inference of time-varying reproduction numbers during infectious disease outbreaks
- convex analysis and monotone operator theory in hilbert spaces
- image restoration: total variation, wavelet frames, and beyond
- proximal splitting methods in signal processing
- proximal algorithms. foundations and trends® in optimization
- wavelet-based image deconvolution and reconstruction
- different epidemic curves for severe acute respiratory syndrome reveal similar impacts of control measures
- epidemiological parameters of coronavirus disease 2019: a pooled analysis of publicly reported individual data of 1155 cases from seven countries
- epidemiological characteristics of covid-19 cases in italy and estimates of the reproductive numbers one month into the epidemic
- nonlinear denoising for solid friction dynamics characterization
- sparsest continuous piecewise-linear representation of data
- the emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains
- a first-order primal-dual algorithm for convex problems with applications to imaging
- proximal splitting algorithms: relax them all!
- fonctions convexes duales et points proximaux dans un espace hilbertien. comptes rendus de l'académie des sciences de paris
- a douglas-rachford splitting approach to nonsmooth convex variational signal recovery
- on the definition and the computation of the basic reproduction ratio r0 in models for infectious diseases in heterogeneous populations
- reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission

acknowledgments: figs 10 and 11 are produced using open resources from the openstreetmap foundation, whose contributors are here gratefully acknowledged. map data © openstreetmap contributors. conceptualization: patrice abry, pablo jensen, patrick flandrin.

key: cord-254894-ta7hebbg
authors: balachandar, s.; zaleski, s.; soldati, a.; ahmadi, g.; bourouiba, l.
title: host-to-host airborne transmission as a multiphase flow problem for science-based social distance guidelines
date: 2020-09-04
journal: nan
doi: 10.1016/j.ijmultiphaseflow.2020.103439
sha:
doc_id: 254894
cord_uid: ta7hebbg

the covid-19 pandemic has strikingly demonstrated how important it is to develop fundamental knowledge related to the generation, transport and inhalation of pathogen-laden droplets and their subsequent possible fate as airborne particles, or aerosols, in the context of human-to-human transmission. it is also increasingly clear that airborne transmission is an important contributor to the rapid spreading of the disease. in this paper, we discuss the processes of droplet generation by exhalation, their potential transformation into airborne particles by evaporation, their transport over long distances by the exhaled puff and by ambient air turbulence, and their final inhalation by the receiving host, as interconnected multiphase flow processes. a simple model for the time evolution of droplet/aerosol concentration is presented, based on a theoretical analysis of the relevant physical processes. the modeling framework, along with detailed experiments and simulations, can be used to study a wide variety of scenarios involving breathing, talking, coughing and sneezing, and in a number of environmental conditions, such as humid or dry atmospheres and confined or open environments. although a number of questions remain open on the physics of evaporation and its coupling with the persistence of the virus, it is clear that with a more reliable understanding of the underlying flow physics of virus transmission one can set the foundation for an improved methodology for designing case-specific social distancing and infection control guidelines. the covid-19 pandemic has made clear the fundamental role of airborne droplets and aerosols as potential virus carriers.
the importance of studying the fluid dynamics of exhalations, starting from the formation of droplets in the respiratory tract to their evolution and transport as a turbulent cloud, can now be recognized as the key step towards understanding sars-cov-2 transmission. respiratory droplets are formed and emitted at high speed during a sneeze or cough [106], and at a lower speed while talking or breathing. the virus-laden droplets are then initially transported as part of the coherent gas puff of buoyant fluid ejected by the infected host [15]. the very large drops, of o(mm) in size, which are visible to the naked eye, are minimally affected by the puff. they travel semi-ballistically with only minimal drag adjustment, but rapidly fall down due to gravitational pull. they can exit the puff either by overshooting or by falling out of the puff at the early stage of emission (fig. 1). smaller droplets (≲ o(100 µm)) that remain suspended within the puff are advected forward. as the suspended droplets steadily evaporate within the cloud, the virus takes the form of potentially inhalable droplet nuclei once the evaporation of water is complete. meanwhile, the velocity of the turbulent puff continues to decay, both due to entrainment and due to drag. once the puff slows down sufficiently, and its coherence is lost, the eventual spreading of the virus-laden droplet nuclei becomes dependent on the ambient air currents and turbulence. the isolated respiratory droplet emission framework was introduced by wells [128] in the 1930s and remains the framework used for guidelines by public health agencies, such as the who, cdc and others. however, it does not consider the role of the turbulent gas puff within which the droplets are embedded. regardless of their size and initial velocity, the ejected droplets are subject to both gravitational settling and evaporation [15].
although droplets of all sizes undergo continuous settling, droplets with a settling speed smaller than the fluctuating velocity of the surrounding puff can remain trapped longer within the puff (fig. 2). furthermore, the water content of the droplets continuously decreases due to evaporation. when conditions are appropriate for near-complete evaporation, the ejected droplets quickly become droplet nuclei of non-volatile biological material. the settling velocity of these droplet nuclei is sufficiently small that they can remain trapped as a cloud and get advected by ambient air currents and dispersed by ambient turbulence. based on the above discussion, we introduce the following terminology that will be used consistently in this paper:
• puff: warm, moist air exhaled during breathing, talking, coughing or sneezing, which remains coherent and moves forward during early times after exhalation
• cloud: the distribution of ejected droplets that remain suspended even after the puff has lost its coherence. the cloud is advected by the air currents and is dispersed by ambient turbulence
• exited droplets: droplets that have either overshot the puff/cloud or settled down due to gravity
• airborne (evaporating) droplets: droplets which have not completed evaporation and are retained within the puff/cloud
• (airborne) droplet nuclei: droplets that remain airborne within the puff/cloud and that have fully evaporated, which will also be termed aerosols
host-to-host transmission of virus-laden droplets and droplet nuclei generally occurs through direct and indirect routes [5, 14, 94]. the direct route of transmission involves the larger droplets that may ballistically reach the recipient's mucosa. this route is currently thought to involve either the airborne route or drops that have settled on surfaces.
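whether a droplet stays trapped can be gauged by comparing its terminal settling speed with the puff's fluctuating velocity; a minimal sketch using the stokes-drag settling law (the property values below are typical illustrative numbers, not values from the paper):

```python
def stokes_settling_velocity(d, rho_d=997.0, rho_a=1.2, mu_a=1.8e-5, g=9.81):
    # terminal settling speed (m/s) of a small droplet in air under
    # Stokes drag: v_s = (rho_d - rho_a) * g * d**2 / (18 * mu_a)
    # valid only for droplet Reynolds numbers << 1 (roughly d below ~50 um)
    # rho_d: droplet density, rho_a: air density, mu_a: air viscosity (SI)
    return (rho_d - rho_a) * g * d ** 2 / (18.0 * mu_a)

v_10um = stokes_settling_velocity(10e-6)   # on the order of mm/s
```

a 10 µm droplet settles at roughly 3 mm/s, far below typical puff velocity fluctuations, which is why such droplets remain trapped while o(mm) drops quickly fall out.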
the settled drops remain infectious, to be later picked up by the recipient, and are generally thought to be localized to the vicinity, or at close range, of the original infectious emitter. with increased awareness and modified physical distancing norms, it is possible to minimize the spreading of the virus by such a direct route. the indirect route of transmission is one that does not necessarily involve a direct or close interaction between the infectious individual and the recipient, or for the two to be synchronously present in the same contaminated space at the same time. (figure 1: image reproduction showing the semi-ballistic largest drops, visible to the naked eye and on the order of mm, which can overshoot the puff at its early stage of emission [16, 17]. the puff continues to propagate and entrain ambient air as it moves forward, carrying its payload of a continuum of drops [15], over distances up to 8 meters for violent exhalations such as sneezes [19].) thus, the indirect route involves respiratory droplets and fully evaporated droplet nuclei that are released to the surroundings by the infected individual and remain airborne as the cloud carries them over longer distances [22, 84, 85, 107]. the settling speeds of the airborne droplets and droplet nuclei are so small that they remain afloat for longer times [112], while being carried by the background turbulent airflow over distances that can span the entire room, or even multiple rooms within the building (o(10 − 100) feet). a schematic of the two routes of transmission is shown in fig. 2, and in this paper we will focus on the indirect airborne transmission. another factor of great importance is the possibility of updraft in the region of contamination, due to the buoyancy of the virus-laden warm ejected air mass. these slight updrafts can keep the virus-laden droplets suspended in the air and enhance the inhalability of airborne droplets and droplet nuclei by recipients who are located farther away.
the advection of airborne droplets and nuclei by the puff, and subsequently as a cloud, may represent a transmission risk over times and distances much longer than previously estimated, and this is a cause of great concern [109, 110]. note that if we ignore the motion of the puff of air carrying the droplets, as in the analysis of wells, the airborne droplets and nuclei would be subjected to such high drag that they could not propagate more than a few cm away from the exhaler, even under conditions of fast ejection, such as in a sneeze. this illustrates the importance of incorporating the correct multiphase flow physics in the modeling of respiratory emissions [18], which we shall discuss further here. it has recently been reported that the covid-19 virus remains viable in droplets and aerosols for many hours in laboratory experiments [29]. at the receiving end, an increased concentration of virus-laden airborne droplets and nuclei near the breathing zone increases the probability of them settling on the body or, more importantly, being inhaled. depending on its material and sealing properties, the use of a mask by the infected host can help reduce the number of virus-laden droplets ejected into the air. the use of a mask or other protective devices by the receiving host may also reduce the probability of inhalation of the virus-laden airborne droplets and nuclei, though in a less effective way. the above description provides a clear sketch of the sequence of processes by which the virus is transferred host-to-host. this simplistic scenario, though pictorially evocative, is far from sufficient to provide science-based social distancing guidelines and recommendations.
there is substantial variability (i) in the quantity and quality of contaminated droplets and aerosols generated by an infected person, (ii) in the manner in which the contaminated droplets and droplet nuclei remain afloat over longer distances and times, (iii) in the possibility of the contaminant being inhaled by a recipient and (iv) in the effectiveness of masks and other protection devices. violent exhalations, such as sneezing and coughing, yield many more virus-laden droplets and aerosols than breathing and talking [15, 81]. not all coughing and sneezing events are alike - the formation of droplets by break-up of mucus and saliva varies substantially between individuals. significant variation in initial droplet size and velocity distributions has been reported in [4, 15, 49, 94]. the measured droplet size distribution, particularly for transient biological emissions such as respiratory exhalations, also depends on ambient temperature and humidity and on the methodology and instrumentation used to characterize the size distribution [5, 18, 81]. furthermore, it is important to consider the volume of air, and the pathogen load, being inhaled during breathing by the receiving host. thus, there is great variability in how much of the virus-laden aerosols reaches the receiving host from the infected host. although less violent, it has been suggested that breathing can also be a significant source of contagion, since it occurs with great regularity and thus much more frequently [2, 39, 59, 107]. furthermore, these works suggest different possible mechanisms of droplet generation in the lower respiratory tract for these less violent periodic ejection events, and as a result the ejected droplets and aerosols are typically much smaller. consequently, the effectiveness of ordinary cotton and gauze masks has been questioned [83].
though the general mathematical framework presented in this paper applies to all forms of exhalation, our particular focus of demonstration will be on more violent ejections in the form of coughing and sneezing. the cdc guideline of social distancing of 2 meters (6 feet) is based on the disease transmission theory originally developed in the 1930s and later improved by others [100, 127, 130]. the current recommendation of 6 feet as the safe distance is somewhat outdated, being based on the assumption that the direct route is the main mechanism of transmission. it can therefore be improved in several ways: (i) by accurately accounting for the distance traveled by the puff and the droplets contained within it, some of which continuously settle out of the puff, (ii) by accurately evaluating the evaporation of droplets and the subsequent advection and dispersal of droplet nuclei as a cloud [8], (iii) by incorporating the effect of adverse flow conditions that prevail in confined indoor environments, including elevators, aircraft cabins and public transit, or of favorable conditions of open spaces with a good breeze or cross ventilation, and (iv) by correctly assessing the effectiveness of masks and other protective devices [23]. thus, a mechanistic, evidence-based understanding of the exhalation and dispersal of expelled respiratory droplets, and of their subsequent fate as droplet nuclei in varying scenarios and environments, is important. we must therefore revisit the safety guidelines and update them to modern understanding. in particular, a multi-layered guideline that differentiates crowded classrooms, auditoriums, buses, elevators and aircraft cabins from open outdoor cafes is desired. only through a reliable understanding of the underlying flow physics of virus transmission can one arrive at such nuanced guidance in designing case-specific social distancing guidelines.
the objective of this paper is to aid in the development of a comprehensive scientific guideline for social distancing that (i) considers airborne transmission via state-of-the-art understanding of respiratory ejections and (ii) substantially improves upon the older models of [127, 130]. towards this objective, we present a coherent analytic and quantitative description of the droplet generation, transport, conversion to droplet nuclei, and eventual inhalation processes. we will examine the available quantitative relationships that describe the above processes and adapt them to the present problem. the key outcomes that we desire are (i) a simple universal description of the initial droplet size spectrum generated by sneezing, coughing, talking and breathing activities; such a description must recognize the current limitations of measurements of droplet size distributions under the highly transient conditions of respiratory events; (ii) a first-order mathematical framework that describes the evolution of the cloud of respiratory droplets and their conversion to droplet nuclei as a function of time; and (iii) a simple description of the inhalability of the aerosols, along with the corresponding evaluation of the effectiveness of different masks based on existing data reported to date. the physical picture and the quantitative results to be presented can then be used to study a statistical sample of different scenarios and derive case-specific guidelines. we anticipate the present paper will spawn future research in the context of host-to-host airborne transmission. after presenting the mathematical framework in section 2, the three stages of transmission, namely droplet generation, transport and inhalation, will be independently analyzed in sections 3, 4 and 5. these sections will consider the evolution of the puff of exhaled air and the droplets contained within.
section 6 will put together the different models of the puff and droplet evolution described in the previous sections, underline their simplifications, and demonstrate their ability to make useful predictions. finally, conclusions and future perspectives are offered in section 7. we wish to describe the three main stages involved in the host-to-host transmission of the virus: droplet generation during exhalation, airborne transport, and inhalation by the receiving host. in the generation stage, virus-laden drops are generated throughout the respiratory tract by the exhalation air flow, which carries them through the upper airway toward the mouth, where they are ejected along with the turbulent puff of air from the lungs. the ejected puff of air can be characterized by the following four parameters: the volume q_pe, the momentum m_pe, and the buoyancy b_pe of the ejected puff, along with the angle θ_e to the horizontal at which the puff is initially ejected. the initial momentum and buoyancy of the puff are given by m_pe = ρ_pe q_pe v_pe and b_pe = (ρ_a − ρ_pe) q_pe g, where v_pe is the initial velocity of the ejected puff, ρ_pe and ρ_a are the initial densities of the puff and the ambient, respectively, and g is the gravitational acceleration. the ejected droplets are characterized by their total number n_e, size distribution n_e(d), droplet velocity distribution v_de(d) and droplet temperature distribution t_de(d), where d is the diameter of the droplet. to simplify the theoretical formulation, we here assume that the velocity and temperature of the ejected droplets depend only on the diameter and show no other variation. as we shall see in section 4, this assumption is not very restrictive, since the velocity and temperature of the droplets that remain within the puff very quickly adjust to those of the puff.
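the initial puff parameters defined above can be computed directly; the numbers in the example below are illustrative orders of magnitude for a cough, not values from the paper:

```python
def puff_initial_state(q_pe, v_pe, rho_pe, rho_a=1.2, g=9.81):
    # initial momentum and buoyancy of the exhaled puff:
    #   m_pe = rho_pe * q_pe * v_pe          (kg m/s)
    #   b_pe = (rho_a - rho_pe) * q_pe * g   (N)
    # q_pe: ejected volume (m^3), v_pe: ejection speed (m/s),
    # rho_pe / rho_a: puff and ambient air densities (kg/m^3)
    m_pe = rho_pe * q_pe * v_pe
    b_pe = (rho_a - rho_pe) * q_pe * g
    return m_pe, b_pe

# ~1 liter of warm exhaled air (slightly lighter than ambient) at 10 m/s
m_pe, b_pe = puff_initial_state(q_pe=1.0e-3, v_pe=10.0, rho_pe=1.12)
```

since the warm exhaled puff is lighter than ambient air (ρ_pe < ρ_a), b_pe is positive and the buoyancy force is directed upward, bending the trajectory away from a straight line.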
both the ejected puff of air and the detailed distribution of droplets depend on the nature of the exhalation event (i.e., breathing, talking, coughing or sneezing), and also on the individual. this is followed by the transport stage, where the initially ejected puff of air and droplets are transported away from the source. the volume of the puff of air increases due to entrainment of ambient air. the puff velocity decreases due to both entrainment of ambient air and drag. since the temperature and moisture content of the ejected puff of air are typically higher than the ambient, the puff is also subjected to a vertical buoyancy force, which alters its trajectory from a rectilinear motion. the exhaled puff is turbulent, and both the turbulent velocity fluctuations within the puff and the mean forward velocity of the puff decay over time. the time evolution of the puff during the transport stage can then be characterized by the following quantities: the volume q_p(t), the momentum m_p(t) and the buoyancy b_p(t) of the ejected puff, and the density ρ_p(t) of air within the puff, which changes over time due to entrainment and evaporation. the trajectory of the puff is defined in terms of the distance traveled s(t) and the angle to the horizontal θ(t) of its current direction. following the work of bourouiba et al. [15], we have chosen to describe the puff trajectory in terms of s(t) and θ(t). this information can be converted to horizontal and vertical positions of the centroid of the puff as a function of time. if we ignore the effects of thermal diffusion and ambient stratification between the puff and the surrounding air, then the buoyancy of the puff remains constant, b_p(t) = b_pe. furthermore, as will be seen below, buoyancy effects are quite weak in the early stages when the puff remains coherent, and thus the puff can, to good approximation, be taken to travel along a straight-line path, as long as other external flow effects are unimportant.
to characterize the time evolution of the virus-laden droplets during the transport stage, we distinguish the droplets that remain within the puff, whose diameter is less than a cutoff (i.e., d < d exit ), from the droplets (i.e., d > d exit ) that escape out of the puff. as will be discussed subsequently in §4, the cutoff droplet size d exit decreases with time. thus, the total number of droplets that remain within the puff can be estimated as n(t) = ∫_0^{d_exit} n(d, t) dd. however, the size distribution of droplets at any later time, denoted as n(d, t), is not the same as that at ejection. due to evaporation, the size distribution shifts to smaller diameters over time. we introduce the mapping d(d e , t), which gives the current diameter of a droplet initially ejected as a droplet of diameter d e . then, assuming a well-mixed condition within the puff, the airborne droplet and nuclei concentration (number per volume) distribution can be expressed by the change-of-variables relation c(d, t) = n_e(d⁻¹(d, t)) (∂d⁻¹/∂d)/q_p(t), where the inverse mapping d⁻¹ gives the original ejected diameter of a droplet whose current size is d. the prefactor 1/q p (t) accounts for the decrease in concentration due to the enlargement of the puff over time. in this model, the airborne droplets and nuclei that remain within the coherent puff are assumed to be in equilibrium with the turbulent flow within the puff. under this assumption, the velocity v d (d, t) and temperature t d (d, t) of the droplets can be estimated with the equilibrium eulerian approximation [36, 38] . when the puff's mean and fluctuating velocities fall below those of the ambient, the puff can be taken to lose its coherence. thus, the puff remains coherent and travels farther in a confined, relatively quiescent environment, such as an elevator, classroom or aircraft cabin, than in an open outdoor environment with cross-wind or in a room with strong ventilation.
we define a transition time t tr , below which the puff is taken to be coherent and the above described puff-based transport model applies. for t > t tr , we take the aerosol transport and dilution to be dominated by ambient turbulent dispersion. accordingly, the late-time behavior of the total number of airborne droplets and nuclei and of their number density distribution is given by the theory of turbulent dispersion. it should be noted that the value of the transition time will depend on both the puff properties and the level of ambient turbulence (see section 4.4). we now consider the final inhalation stage. depending on the location of the recipient host relative to that of the infected host, the recipient may be subjected either to the puff that still remains coherent, carrying a relatively high concentration of virus-laden droplets or nuclei, or to the more dilute dispersion of droplet nuclei, or aerosols. these factors determine the number and size distribution of virus-laden airborne droplets and nuclei the recipient host will be subjected to. the inhalation cycle of the recipient, along with the use of masks and other protective devices, will then dictate the aerosols that reach sensitive areas of the respiratory tract where infection can occur. following the above outlined mathematical framework, we will now consider the three stages of generation, transport and inhalation. knowing the droplet sizes, velocities and ejection angles resulting from an exhalation is the key first step in the development of a predictive ability for droplet dispersion and evolution. respiratory droplet size distributions have been the object of a large number of studies, as reviewed in [44] , and among them, those of duguid [30] and loudon & roberts [76] have received particular scrutiny as a basis for studies of disease transmission by nicas, nazaroff & hubbard [90] . there are substantial differences in the methodologies used for quantification of respiratory emission sprays.
few studies have used common instrumentation with enough overlap to reconstruct the full distribution of sizes. for example, there are important gaps in reporting the total volume or duration of air sampling; in addition, there are issues in reporting the effective evaporation rates used to back-compute the initial distribution and in the documentation of assumptions about optical or shape properties of the droplets being sampled. moreover, sensitivity analyses are often missing regarding the role of orientation or calibration of sensing instruments with respect to highly variable emissions from human subjects. finally, regarding direct high-speed imaging methods [7, 106] , the tools for precise quantification of complex unsteady fragmentation and atomization processes are only now being developed [18, 123, 125] . there are far fewer studies on the velocities and angles of the droplets produced by atomizing flows. the studies of duguid and loudon & roberts have been performed by allowing the exhaled droplets to impact various sheets or slides, with different procedures being used for droplets smaller than 20 µm. the size of the stains on the sheets was observed and the original droplet size was inferred from the size of the stains. to account for the difference between the droplet and the stain sizes, an arbitrary factor is applied, and droplets smaller than 10 or 20 microns are processed differently than larger droplets. the whole process makes the determination of the number of droplets smaller than 10 microns less reliable. the data are replotted in fig. 3 . many authors have attempted to fit the data with a log-normal probability distribution function.
in that case, the number of droplets between diameter d and d + dd is n e (d) dd, and the frequency of the ejected droplet size distribution is given by the log-normal form n_e(d) = (b/d) exp[−(ln d − μ̂)²/(2σ̂²)], where dd is a relatively small diameter increment or bin width, b is a normalization constant, μ̂ is the expected value of ln d, also called the geometric mean, and σ̂ is the standard deviation of ln d, also called the geometric standard deviation (gsd). on the other hand, there have also been numerous studies of the fragmentation of liquid masses in various physical configurations other than the exhalation of mucosalivary fluid [43, 106, 120, 124] . these configurations include spray formation on wave crests [118] , droplet impacts on solids and liquids [86] , wave impacts on vertical or finite walls/surfaces [66, 123, 126] , and jet atomization [33] . these studies reveal a number of qualitative similarities between the various processes, which can be best described as a sequence of events. those events include a primary instability of sheared layers in high-speed air flows [40] , and then the nonlinear growth of the perturbation into thin liquid sheets. the sheets themselves may be destabilized by two routes, one involving the formation of taylor-culick end rims [24, 115] and their subsequent deformation into detaching droplets [123] . the other route to the formation of droplets is the formation of holes in the thin sheets [69, 93, 106] . the holes then expand and form free-hanging ligaments, which fragment into droplets through the rayleigh-plateau instability [33] . considering the apparent universality of the process, one may infer that a universal distribution of droplet sizes may exist. indeed, the log-normal distribution has often been fitted to experimental [79] and numerical data on jet formation [52, 72] , for droplet impacts on solid surfaces [129] , and for wave impacts on solid walls [129] . the log-normal distribution is frequently suggested for exhalations [90, 128] .
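the log-normal form can be sketched numerically. a minimal example, assuming the illustrative parameters μ̂ = ln(12) and σ̂ = 0.7 (used here purely as inputs for synthetic samples, not as a fit to the cited measurements):

```python
# Sketch: log-normal droplet-size frequency n_e(d)/N_e and recovery of its
# parameters from synthetic samples. mu_hat is the mean of ln d (geometric
# mean exponent) and sigma_hat the gsd exponent; values are illustrative.
import math
import random

def lognormal_pdf(d, mu_hat, sigma_hat):
    """Normalized log-normal frequency of droplet diameters d."""
    return (1.0 / (d * sigma_hat * math.sqrt(2.0 * math.pi))
            * math.exp(-((math.log(d) - mu_hat) ** 2) / (2.0 * sigma_hat ** 2)))

random.seed(0)
mu_hat, sigma_hat = math.log(12.0), 0.7   # e.g., a 12-micron geometric mean
samples = [random.lognormvariate(mu_hat, sigma_hat) for _ in range(200000)]

# Moment estimates of the underlying parameters from the log of the samples.
logs = [math.log(s) for s in samples]
mu_est = sum(logs) / len(logs)
sig_est = math.sqrt(sum((x - mu_est) ** 2 for x in logs) / len(logs))
print(mu_est, sig_est)  # close to ln(12) and 0.7
```

the moment estimates recover the input parameters, which is the same procedure one would apply to binned measurement data before judging the quality of a log-normal fit.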
the fit of the numerical results of [72] is shown in fig. 4 . however, this apparent universality of the log-normal distribution is questionable for several reasons. first, many other distributions, such as the exponential, poisson, weibull-rosin-rammler, beta, or families of gamma or compound gamma distributions [67, 119] , capture to some extent the complexity of atomization physics. second, the geometric standard deviation (gsd) of the log-normal fits to the many numerical and experimental measurements is relatively small (of the order of 1.2 [72] or 1.8 [79] ), while the wide range of scales in fig. 3 seems to indicate a much larger gsd. indeed, nicas, nazaroff & hubbard [90] obtain σ̂ ≈ 8-9. one explanation for the smaller gsd in jet atomization studies, both numerical and experimental, is that the numerical or optical resolution is limited at the small scales. indeed, as grid resolution is increased, the observed gsd also increases [72] . third, many authors [49, 112] observe multimodal or bimodal distributions, which can be obtained for example by the superposition of several physical processes. this would arise in a very simple manner if the taylor-culick rim route produced drops of a markedly different size than the holes-in-film route. the non-newtonian nature of the fluid will also influence the instabilities and thereby the droplet generation process. other, less violent processes could lead to the formation of small droplets, such as the breakup of small films and menisci described in [77] , without going through the sequence of events described above. [figure 3: frequency of droplet size distribution n_e(d) (1/µm), replotted from duguid [30] and loudon & roberts [76] ; the duguid and loudon & roberts cough data are shown together with a pareto b/d² fit.] in order to elucidate this discrepancy, we take another look at the fit of the duguid data in fig. 5 , replotting the data provided in table 3 of duguid.
since the data are given as counts n_i in bins defined by the intervals (d_i, d_{i+1}), we approximate n_e(d) at collocation points (e.g., the bin centers) as n_e ≈ n_i/(d_{i+1} − d_i). the fit is shown in log-log coordinates in fig. 5 , since if plotted in the variables x = ln d and y = ln[d n_e(d)] the distribution (2) appears as a parabola. when one attempts to fit a parabola between 2 and 50 µm, one obtains a log-normal distribution with σ̂ = 0.7 and μ̂ = ln(12) (for diameters in microns). however, the data above 50 µm are completely outside this distribution. if instead the whole range from 1 to 1000 µm is fit to a log-normal distribution, one obtains a very wide log-normal, or alternatively a pareto distribution of power 2, as seen in figs. 3 and 5 for both the duguid and the loudon & roberts data. it is especially clear from fig. 3 that if one does not trust either data set at d < 50 µm, then both data sets are well described by the pareto distribution. this, however, does not eliminate the possibility that more data with more statistical power could show deviations from pareto, in particular as multimodal distributions. nevertheless, the multimodal deviation from the pareto distribution is difficult to characterize and will not be pursued in what follows for the sake of simplicity. it is clear that the pareto distribution cannot be valid at diameters that are either too large or too small. the equivalent diameter of the total mass of liquid being atomized is an obvious upper bound, but it is also very unlikely that droplets with d > h, where h is the initial film thickness, will be observed. it is reasonable to put this film thickness on the scale of 1 mm, which corresponds to the upper bound on diameters in the data of figs. 3 and 5. the lower bound on droplet diameter is much harder to determine. exhalations are highly transient, or unsteady, processes involving complex multiscale geometry [106] , and thread breakup is a fractal multiscale process with satellite droplets [32, 116] . going down in scale, the fractal process repeats itself as long as continuum mechanics remains valid, to around 1 nm.
this would not be relevant for viral disease propagation, as many of the relevant viruses have sizes of o(10-100) nm, with an estimated size for sars-cov-2 ranging from 60-120 nm, for example. if the smallest length scale is the thickness at which the thin liquid sheets break, then experimental observations in water [93] suggest a scale of o(100) nm. other fluids, including biological fluids or biologically contaminated fluids such as those investigated in [97, 98, 124] , may yield different length scales. based on the above considerations, we take a histogram of droplet sizes that reads n_e(d) = b/d² for d_1 ≤ d ≤ d_2, where d_1 is set to o(100 nm) and d_2 to o(1 mm) for simplicity. the total volume of the droplets follows by integrating (π/6) d³ n_e(d) over this range. [figure caption: the log-normal fit is adequate only up to 50 µm, so that only a fraction of the reliable data fits the log-normal; the pareto distribution is a reasonable capture of the data in the 10 to 1000 µm range. in log-log coordinates, the log-normal distribution appears as a parabola while the pareto distribution is a straight line.] since d_1 is four orders of magnitude smaller than d_2, the total number of droplets is well approximated by n_e ≈ b/d_1, and the cumulative number of droplets f(x) = n_e(d_1 ≤ d ≤ x), i.e., the number of droplets with diameter smaller than x, is very well approximated by f(x) ≈ n_e(1 − d_1/x), so that f(10d_1)/n_e = 90% of the droplets are of size less than 10 d_1 ≈ 1 µm. in other words, a numerical majority of the droplets are near the lower diameter bound. on the other hand, a majority of the volume of fluid is in the larger droplet diameters. the distribution of velocities and ejection angles has been investigated in the atomization experiments of [27] , which follow approximately the geometry of a high-speed stream peeling by a gas layer. these experiments were qualitatively reproduced in the numerical simulations of [73] . to cite ref.
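the claim that a numerical majority of droplets lies near the lower bound, while most of the volume is in the large droplets, can be checked directly for a pareto b/d² distribution on [d_1, d_2]:

```python
# Sketch: for a Pareto size distribution n_e(d) = b / d**2 on [d1, d2],
# the total count is N = b*(1/d1 - 1/d2) ~ b/d1 (since d1 << d2), and the
# cumulative count is F(x) = b*(1/d1 - 1/x), so F(10*d1)/N ~ 90%.
d1 = 1e-7   # lower cutoff, ~100 nm
d2 = 1e-3   # upper cutoff, ~1 mm
b = 1.0     # normalization constant (irrelevant for ratios)

def total_count():
    return b * (1.0 / d1 - 1.0 / d2)

def cumulative(x):
    """Number of droplets with diameter between d1 and x."""
    return b * (1.0 / d1 - 1.0 / x)

def volume_between(a, c):
    """Integral of (d**3) * b/d**2 over [a, c], up to the pi/6 prefactor."""
    return b * (c ** 2 - a ** 2) / 2.0

frac_small = cumulative(10 * d1) / total_count()
frac_vol_large = volume_between(d2 / 10, d2) / volume_between(d1, d2)
print(frac_small)      # ~0.9: 90% of droplets below 10*d1 ~ 1 micron
print(frac_vol_large)  # ~0.99: almost all the liquid volume in the top decade
```

the count is dominated by the smallest diameters while the volume integral, weighted by d³, is dominated by the largest, exactly as stated in the text.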
[27] , "most of the ejection angles are in the range 0° to 40°, however, it occurs occasionally that the drops are ejected with angles as high as 60°". on the other hand, there are to our knowledge no experimental data on the velocity of droplets, as they are formed in an atomizing jet, that could be used directly to estimate the ejection speed of droplets in exhalation. there are however numerical studies [13, 58] in the limit of very large reynolds and weber numbers. the group velocity of waves formed on a liquid layer below a gas stream has been estimated by dimotakis [28] to scale as √(ρ_p/ρ_d) v_pe , where ρ d is the droplet density. in [13, 58] it was shown that this was also the vertical velocity of the interface perturbation. it is thus likely that this velocity plays a role at the end of the first instability stage of atomization. after this stage, droplets are detached and immersed in a gas stream of initial ejection velocity v pe . since the density ratio ρ p /ρ d is o(10 −3 ), we expect the initial velocity of the ejected droplets at the point of their formation to be small. as we show below, it is interesting to note that the large reynolds number limit may apply at the initial injection stage to a wide range of droplets in the spectrum of sizes found above. indeed, the ejection reynolds number of a droplet ejected at a velocity v de into a surrounding air flow of velocity v pe is re_e = |v_pe − v_de| d/ν_a , where ν a is the kinematic viscosity of the ejected puff of air (here taken to be the same as that of the ambient air). the largest reynolds number is obtained for the upper bound of d = 1 mm. for example, if the droplet's initial velocity is set to v de ≈ 0, and the air flow velocity in some experiments [19] is as high as 30 m/s, we can estimate the largest ejection reynolds number to be re e ≈ 2000, and the reynolds number stays above unity for droplets down to micron size.
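the quoted estimate of the ejection reynolds number is easy to reproduce. a minimal sketch, assuming a nominal air kinematic viscosity of 1.5 × 10⁻⁵ m²/s:

```python
# Sketch: ejection Reynolds number Re_e = |v_pe - v_de| * d / nu_a for a
# droplet released nearly at rest (v_de ~ 0) into a fast exhaled air stream.
NU_AIR = 1.5e-5  # kinematic viscosity of air, m^2/s (assumed nominal value)

def ejection_reynolds(d, v_pe, v_de=0.0):
    """Reynolds number based on the droplet-air slip velocity at ejection."""
    return abs(v_pe - v_de) * d / NU_AIR

re_mm = ejection_reynolds(d=1e-3, v_pe=30.0)   # 1 mm droplet, 30 m/s stream
re_um = ejection_reynolds(d=1e-6, v_pe=30.0)   # 1 micron droplet
print(re_mm)  # ~2000
print(re_um)  # ~2: still above unity even at micron size
```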
but as the puff of air and the droplets move forward, the droplet reynolds number rapidly decreases for the following reasons: (i) as will be seen in section 4.1, the puff velocity decreases due to entrainment and drag; (ii) as will be seen in section 4.2.1, the droplet diameter decreases rapidly due to evaporation; (iii) as will be seen in section 4.2.2, the time scale τ v on which the droplet accelerates to the surrounding fluid velocity of the puff is quite small; and (iv) very large droplets quickly fall out of the puff and do not form part of the airborne droplets. thus, it can be established that droplets smaller than 100 µm quickly equilibrate with the puff within the first few cm after exhalation. this section will consider the evolution of the puff of hot moist air with the droplets after their initial ejection. first, in section 4.1, we will present a simple modified model for the evolution of the puff of exhaled air, evaluating the effects of drag and the inertia of the droplets within it. this will enable us, in section 4.2, to discuss the evolution of the droplet size spectrum, velocity and temperature distributions with simple first-order models. additionally, section 4.3 will discuss the effect of non-volatiles on the droplet evolution and the formation of fully evaporated droplet nuclei or aerosol particles. late-time turbulent dispersion of the virus-laden droplet nuclei, when the puff of air within which they are contained stops being a coherent entity, is then addressed in section 4.4. for the puff model, we follow the approach of bourouiba et al. [15] , but include the added effects of drag and the mass of the injected droplets. in addition, a perturbation approach is pursued to obtain a simple solution with all the added effects included. fig. 6 shows the evolution of the puff along with the quantities that define it [15] . we define t to be the time elapsed from exhalation and s(t) to be the distance traveled by the puff since exhalation.
for analytical considerations we define the virtual origin to be at a distance s e from the real source in the backward direction and t e to be the time it takes for the puff to travel from the virtual origin to the real source. we define t′ = t + t e to be the time from the virtual origin and s′ = s + s e to be the distance traveled from the virtual origin; their introduction simplifies the analysis. from the theory of jets, plumes, puffs and thermals [117] , the volume of the puff exhaled grows by entrainment. bourouiba et al. [15] defined the puff to be spheroidal in shape, with the transverse dimension evolving in a self-similar manner as r(t′) = αs′(t′), where α is related to the entrainment coefficient. the volume of the puff is then q p (t′) = ηr³(t′) = ηα³s′³(t′) and the projected, or cross-sectional, area of the puff is a(t′) = βr²(t′) = βα²s′²(t′), where the constants η and β depend on the shape of the spheroid. for a spherical puff η = 4π/3 and β = π. as defined earlier, the ejected puff at the real source (i.e., at t′ = t e ) is characterized by the volume q pe = ηα³s e ³, momentum m pe = ρ pe q pe v pe , buoyancy b pe = q pe (ρ a − ρ pe )g and ejection angle θ e . from the assumption of self-similar growth, we obtain the virtual origin to be defined by s_e = (q_pe/(ηα³))^{1/3} and t_e = c s_e/v_pe , where the constant c depends on the drag coefficient of the puff and will be defined below. if we assume a spherical puff with an entrainment factor α = 0.1 [117] , the distance s e depends only on the ejected volume. experimental measurements suggest q pe to vary over the range 0.00025 to 0.0025 m³. accordingly, s e can vary from 0.39 to 0.84 m. similar estimates of t e can be obtained for a spherical puff: as q pe varies from 0.00025 to 0.0025 m³ and as the ejected velocity varies from 1 to 10 m/s, the value of t e varies over the range 0.01 to 0.21 s.
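the quoted ranges for s_e and t_e can be reproduced from the self-similar growth relation. a minimal sketch; the relation t_e = s_e/(4 v_pe) below assumes the drag-free similarity exponent (an assumption made here only for the estimate):

```python
# Sketch: virtual-origin estimates for a spherical puff (eta = 4*pi/3) with
# entrainment factor alpha = 0.1: s_e = (q_pe / (eta * alpha**3))**(1/3).
# t_e = s_e / (4 * v_pe) assumes the drag-free self-similar law s ~ t**(1/4).
import math

ALPHA = 0.1                 # entrainment factor
ETA = 4.0 * math.pi / 3.0   # shape constant for a spherical puff

def virtual_origin_distance(q_pe):
    """Distance s_e of the virtual origin behind the source, m."""
    return (q_pe / (ETA * ALPHA ** 3)) ** (1.0 / 3.0)

def virtual_origin_time(q_pe, v_pe):
    """Time t_e for the puff to travel from virtual origin to source, s."""
    return virtual_origin_distance(q_pe) / (4.0 * v_pe)

s_lo = virtual_origin_distance(0.00025)   # ~0.39 m
s_hi = virtual_origin_distance(0.0025)    # ~0.84 m
t_lo = virtual_origin_time(0.00025, 10.0) # ~0.01 s
t_hi = virtual_origin_time(0.0025, 1.0)   # ~0.21 s
print(s_lo, s_hi, t_lo, t_hi)
```

the computed ranges match the 0.39-0.84 m and 0.01-0.21 s quoted in the text for the experimentally measured range of ejected volumes and velocities.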
the horizontal and vertical momentum balances in dimensional terms equate the rate of change of the combined momentum of the puff and of the droplets within it to the drag force on the puff, with the buoyancy force additionally entering the vertical balance; in these balances c d is the drag coefficient of the puff and m d is the momentum of droplets within the puff. while the puff velocity decreases rapidly over time, the velocity of the larger droplets will change slowly. note that in the analysis to follow, we take the velocity of those droplets that remain within the puff to be the same as the puff velocity. [figure 6: evolution of a typical cloud of respiratory multiphase turbulent droplet-laden air following breathing, talking, coughing and sneezing activities; image adapted from [15] .] we use s e and t e as the length and time scales to define the nondimensional quantities s̃ = s′/s e and t̃ = t′/t e . with this definition the virtual origin becomes t̃ = 0 and s̃ = 0, and the real source becomes t̃ = 1 and s̃ = 1. in terms of the non-dimensional quantities, the governing momentum equations can be rewritten so that there are three nondimensional parameters: the mass ratio of the initial ejected droplets to the initial air puff, r m = ρ d q de /(ρ p q pe ); the scaled drag coefficient, c = c d β/(2ηα); and the buoyancy parameter, a = b pe t e ²/(ρ pe q pe s e ). in the above equations, r m is defined in terms of the mass of the initial ejected droplets. this is an approximation, since some of the droplets exit the puff over time. even though the droplet mass decreases due to evaporation, the associated momentum is not lost from the system since it remains within the puff. in any case, it will soon be shown that the value of r m is small and the role of ejected droplets in the momentum balance is negligible. it should also be noted that under the boussinesq approximation the small difference in density between the puff and the ambient is important only in the buoyancy term. for all other purposes, the two will be taken to be the same and as a result the time variation of puff density is not of importance (i.e., ρ p = ρ pe = ρ a ).
the importance of the inertia of the ejected droplets, of the drag on the puff and of buoyancy effects can now be evaluated in terms of the magnitude of the nondimensional parameters. typical experimental measurements of breathing, talking, coughing and sneezing indicate that the value of r m is smaller than 0.1 and often much smaller. furthermore, as droplets fall out continuously [15] from the turbulent puff, this ratio changes over time. here we will obtain an upper bound on the inertial effect of injected droplets by taking the value of r m to be 0.1. the drag coefficient of a spherical puff of air is also typically small; again as an upper bound we take c d = 0.1, which yields c = 0.375 for a spherical puff. the value of the buoyancy parameter a depends on the density difference between the ejected puff of air and the ambient, which in turn depends on the temperature difference. for the entire range of ejected volumes and velocities, the value of a comes to be smaller than 0.01 for temperature differences of the order of ten to twenty degrees between the exhaled puff and the ambient. since all three parameters r m , c and a can be considered as small perturbations, the governing equations can be readily solved in their absence to obtain the following classical expressions for the nondimensional puff location and puff velocity: s̃ = t̃^{1/4} and ṽ = (1/4) t̃^{−3/4}. with the inclusion of the drag term the governing equations become nonlinear. nevertheless, they allow a simple exact solution, which can be expressed as s̃ = [((4 + c)t̃ − c)/4]^{1/(4+c)}. thus, as is to be expected, the forward propagation of the puff slows down with increasing nondimensional drag parameter c. for small values of c the above can be expanded in a taylor series; a comparison of the exact solution with this asymptotic expansion shows its adequacy for small values of c. for small non-zero values of r m , c and a, the governing equations can be solved using regular perturbation theory.
the resulting perturbation expansion is accurate to o(c³, r m ², a²). although buoyancy eventually curves the trajectory of the puff, its leading-order effect is only to alter the speed of the rectilinear motion. also, as expected, the effect of non-zero r m is to add to the total inertia and thereby slow down the motion of the puff. the effect of buoyancy, on the other hand, is to slow the puff down if the initial ejection is angled down (i.e., if θ e < 0) and to speed it up if the ejection is angled up, provided the ejected puff is warmer than the ambient. the time evolution of the puff as predicted by the above analytical expression is shown in fig. 7 . note that the point of ejection is given by t̃ = 1, s̃ = 1, and the initial non-dimensional velocity ṽ(t̃ = 1) = 1/4. the results for four different combinations of c and r m are shown. the buoyancy parameter has very little effect on the results and, therefore, is not shown. it should be noted that at late stages, when the puff velocity slows down, the effect of buoyancy can start to play a role, as indicated in experiments and simulations. it can be seen that the inertia of the ejected droplets, even with the upper bound of holding their mass constant at the initial value, has a negligible effect. only the drag on the puff has a significant effect in reducing the distance traveled by the puff. it can then be taken that the puff evolution to good accuracy can be represented by (16). over a time span of 10 nondimensional units the puff has traveled about 0.7 s e and the velocity has dropped to about 15% of the initial velocity. by 100 nondimensional units the puff has traveled about 1.75 s e and the velocity has dropped to about 2.5% of the initial velocity. the ejected droplets are made of a complex fluid that is essentially a mixture of oral fluids, including secretions from both the major and minor salivary glands.
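the slowdown due to drag can be checked numerically. a minimal sketch, assuming (consistent with the self-similar entrainment q_p ∝ s̃³ and drag ∝ s̃²ṽ² described above, and ignoring r_m and a) that the nondimensional momentum balance takes the form d(s̃³ṽ)/dt̃ = −c s̃²ṽ²:

```python
# Sketch: integrate the assumed nondimensional puff momentum balance
#   d/dt ( s**3 * v ) = -C * s**2 * v**2,  with  v = ds/dt,
# from the ejection point s(1) = 1, v(1) = 0.25, drag parameter C = 0.375.
# Expanding the derivative gives dv/dt = -(3 + C) * v**2 / s.
def integrate_puff(c_drag, t_end, dt=1e-4):
    t, s, v = 1.0, 1.0, 0.25
    while t < t_end:
        dvdt = -(3.0 + c_drag) * v * v / s
        s += v * dt      # forward-Euler update of position...
        v += dvdt * dt   # ...and of velocity
        t += dt
    return s, v

s10, v10 = integrate_puff(c_drag=0.375, t_end=10.0)
print(s10 - 1.0)   # distance traveled since ejection, ~0.7 (units of s_e)
print(v10 / 0.25)  # velocity ratio, ~0.16 of the initial value
```

the integration reproduces the text's estimates of roughly 0.7 s_e traveled and roughly 15% residual velocity after 10 nondimensional units, supporting the statement that drag alone controls the slowdown when r_m and a are small.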
in addition, it is supplemented by several constituents of non-salivary origin, such as gingival crevicular fluid, exhaled bronchial and nasal secretions, serum and blood derivatives from oral wounds, bacteria and bacterial products, viruses and fungi, desquamated epithelial cells, other cellular components, and food debris [60] . therefore, it is not easy to determine precisely the transport properties of the droplet fluid. although its measured surface tension is similar to that of water, its viscosity can be one or two orders of magnitude larger [42] , making drops less prone to coalescence [102, 111] . in the present context, viscosity and surface tension might be of importance because they can influence the droplet size distribution, specifically by controlling coalescence and breakage. these processes are important only during the ejection stage; once droplets are in the range below 50 µm, coalescence and breakup processes are impeded. due to the dilute dispersed nature of the flow, droplet-droplet interactions can be ignored. the ejected swarm of droplets is characterized by its initial size spectrum as given in (4). the time evolution of the spectrum of droplets that remain within the puff, in terms of droplet size, velocity and temperature, is the object of interest in this section. the evolution of the ejected droplets depends on the following four important parameters: the time scale τ v on which the droplet velocity relaxes to the puff fluid velocity (in the absence of other forcings), the time scale τ t on which the droplet temperature relaxes to the puff fluid temperature, the settling velocity w of the droplet within the puff fluid, and the reynolds number re based on the settling velocity. these quantities are given by [11, 70, 71] τ_v = ρ̂ d²/(18 ν p φ), τ_t = ρ̂ ĉ_r d²/(6 κ p nu), w = τ_v g and re = w d/ν p , where ρ̂ ≈ 1000 is the droplet-to-air density ratio, ĉ_r ≈ 4.16 is the droplet-to-air specific heat ratio, g is the acceleration due to gravity, and ν p and κ p are the kinematic viscosity and thermal diffusivity of the puff.
in the above, φ = 1 + 0.15 re^{0.687} and nu = 2 + 0.6 re^{1/2} pr^{1/3} are the finite reynolds number drag and heat transfer correction factors, where the latter is the well-known ranz-marshall nusselt (or sherwood) number correlation. both corrections simplify in the stokes regime for drops smaller than about 50 µm. here we take the prandtl number of air to be pr = 0.72. in the stokes limit, the velocity and thermal time scales and the settling velocity of the droplet increase as d², while the reynolds number scales as d³. the values of these four parameters for varying droplet sizes are presented in fig. 8 , where it is clear that the effect of finite re becomes important only for droplets larger than 50 µm. for smaller droplets τ v , τ t ≪ 1 s, w ≪ 1 m/s, and re ≪ 1. the droplets under investigation are sufficiently small, and the swarm sufficiently dilute, to prevent their coalescence. furthermore, the droplet weber number we = ρ p w² d/σ can be estimated to be quite small even for droplets of size 50 µm, where σ is the surface tension of the droplet and the relative velocity will be shown in the next section to be well approximated by the settling velocity. therefore, secondary breakup of droplets within the puff can be ignored and the only way in which droplets change their size is via evaporation. according to the analysis of langmuir [64] , the rate of mass loss due to evaporation of a small sphere depends on the diffusion of the vapor layer away from the sphere surface, and under reasonable hypotheses [20, 64, 96, 104] , it can be expressed as dm/dt = −π d nu ρ p 𝒟 ln(1 + b m ), where m is the mass of a droplet of diameter d, 𝒟 is the diffusion coefficient of the vapor, ρ p is the density of puff air, and b m = (y d − y p )/(1 − y d ) is the spalding mass number, where y d is the mass fraction of water vapor at the droplet surface and y p is the mass fraction of water vapor in the surrounding puff.
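the quoted scalings can be evaluated numerically. a minimal sketch, assuming the standard stokes-regime prefactor τ_v = ρ̂ d²/(18 ν φ) with the schiller-naumann correction φ (the prefactors are the usual particle-laden-flow forms, assumed here rather than taken from the text):

```python
# Sketch: droplet velocity response time, settling velocity and Reynolds
# number with the finite-Re drag correction phi = 1 + 0.15*Re**0.687,
# solved by fixed-point iteration since phi itself depends on Re.
NU_AIR = 1.5e-5    # kinematic viscosity of air, m^2/s (assumed nominal)
RHO_HAT = 1000.0   # droplet-to-air density ratio
G = 9.81           # m/s^2

def settling(d):
    """Return (tau_v, w, Re) for droplet diameter d in meters."""
    re = 0.0
    for _ in range(50):  # fixed-point iteration on the drag correction
        phi = 1.0 + 0.15 * re ** 0.687
        tau_v = RHO_HAT * d * d / (18.0 * NU_AIR * phi)
        w = tau_v * G
        re = w * d / NU_AIR
    return tau_v, w, re

tau10, w10, re10 = settling(1e-5)   # 10 micron droplet
tau50, w50, re50 = settling(5e-5)   # 50 micron droplet
print(tau10, w10, re10)
print(tau50, w50, re50)  # still tau_v << 1 s, w << 1 m/s, Re < 1
```

even at 50 µm all three quantities stay below unity in their respective units, consistent with the statement that finite-re effects matter only for larger droplets.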
under the assumption that nu and b m are nearly constant for small droplets, the above equation can be integrated [26] to obtain the following law (mapping) for the evolution of the droplet: d²(t) = d e ² − k t, where d e is the initial droplet diameter at ejection and k = 4𝒟 nu ln(1 + b m )/ρ̂ (with ρ̂ the droplet-to-air density ratio) has units of m²/s and thus represents an effective evaporative diffusivity. it is important to observe that (20) would predict a loss of mass per unit area tending to infinity as the diameter of the drop tends to zero. this implies that the droplet diameter goes to zero in a finite time, and we establish the result d e,evap (t) = √(k t), which for any time t yields a critical value of droplet diameter: all droplets that were smaller or equal at exhalation (i.e., d e ≤ d e,evap ) would have fully evaporated by t. the only parameter is k. assuming nu = 2 and a vapor diffusion coefficient 𝒟 = 2.8 × 10⁻⁵ m²/s, even for very small values of b m we obtain the evaporation time for a 10 µm droplet to be less than a second. however, it appears that below a certain critical size, the loss of mass due to evaporation slows down [20] . this could partly be due to the presence of non-volatiles and other particulate matter within the droplet, whose effects were ignored in the above analysis and will be addressed in section 4.3. it seems that (20) can give reliable predictions for droplet diameter down to a few µm, with much slower evaporation rates for smaller sizes. irrespective of whether water completely evaporates, leaving only the non-volatile droplet nuclei, or the droplet evaporation slows down, the important consequence for the evolution of the droplet size distribution is that it becomes narrower and potentially centered around micron size. we now consider the motion of the ejected droplets while they rapidly evaporate.
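the d²-law estimate can be evaluated directly. a minimal sketch, with the spalding number b_m = 0.01 chosen as an assumed small example value:

```python
# Sketch: d^2-law of evaporation, d(t)**2 = d_e**2 - K*t, with the effective
# evaporative diffusivity K = 4 * D * Nu * ln(1 + B_M) / rho_hat, giving a
# full-evaporation time t = d_e**2 / K. B_M = 0.01 is an assumed example.
import math

D_VAPOR = 2.8e-5    # vapor diffusion coefficient, m^2/s
RHO_HAT = 1000.0    # droplet-to-air density ratio
NUSSELT = 2.0       # Nusselt number of a small sphere (Stokes regime)

def evap_k(b_m):
    """Effective evaporative diffusivity, m^2/s."""
    return 4.0 * D_VAPOR * NUSSELT * math.log(1.0 + b_m) / RHO_HAT

def evaporation_time(d_e, b_m):
    """Time for a droplet of initial diameter d_e to fully evaporate, s."""
    return d_e ** 2 / evap_k(b_m)

def critical_diameter(t, b_m):
    """Droplets ejected at or below this diameter have evaporated by time t."""
    return math.sqrt(evap_k(b_m) * t)

t_10um = evaporation_time(1e-5, b_m=0.01)
print(t_10um)  # well below one second even for this small B_M
```

for a 10 µm droplet the predicted evaporation time is a few hundredths of a second, consistent with the sub-second estimate in the text.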
the equation of motion of the droplet is newton's law, balancing the droplet inertia m d dv_d/dt against the quasi-steady drag (m d /τ v )(v_p − v_d) and the net gravitational force −(m d − m p ) g e_z, where e_z is the unit vector along the vertical direction, m d is the mass of the droplet, m p is the mass of puff air displaced by the droplet, and v_d and v_p are the vector velocities of the droplet and the surrounding puff. provided the droplet time scale τ v is smaller than the time scale of the surrounding flow, which is the case for droplets of diameter smaller than 50 µm, the above ode can be perturbatively solved to obtain the leading-order solution v_d ≈ v_p − w e_z − τ v dv_p/dt [10, 36, 37] . according to this equation, the equilibrium eulerian velocity of the droplet depends on the local fluid velocity, plus the still-fluid settling velocity w of the droplet, plus the third term, which arises due to the inertia of the droplet. though at ejection the droplet speed is smaller than the surrounding gas velocity, as argued in section 3.2, the droplets quickly accelerate to approach the puff velocity. in fact, since the puff is decelerating (i.e., d|v_p|/dt < 0), the droplet velocity will soon be larger than the local fluid velocity. as long as the droplet stays within the puff, the velocity and acceleration of the surrounding fluid can be approximated by those of the puff as |v_p| = ds/dt and |dv_p/dt| = d²s/dt². this allows evaluation of the relative importance of the third term (the inertial slip velocity) in terms of the puff motion given in (16) [70] . this ratio takes its largest value at the initial time of injection and then decays as 1/t. using the range of possible values of t e given earlier, this ratio is small for a wide range of initial droplet sizes. we thus confirm that for the most part droplet inertia can be ignored in the droplet's motion, and the droplet velocity can be taken to be simply the sum of the local fluid velocity and the still-fluid settling velocity of the droplet. while the effect of buoyancy on the puff was shown to be small, the same cannot be said of the droplets.
The vertical motion of a droplet with respect to the surrounding puff, due to its higher density, depends only on the fall velocity w, which scales as d², which in turn decreases as given in (21) due to evaporation. The droplet's gravitational settling velocity can be integrated over time to obtain the distance over which it falls as a function of time. We now set this fall distance (left-hand side) equal to the puff radius (right-hand side) to obtain an implicit relation in which we set the droplet diameter at exhalation to be d_e,exit, indicating that a droplet of initial diameter equal to d_e,exit has fallen by a distance equal to the puff size at time t. Thus all larger droplets of size d_e > d_e,exit have fallen out of the puff by t, and we have been referring to these as the exited droplets. It should be pointed out that this simple analysis of the vertical motion of the particle ignores the vertical component of fluid velocity, both from turbulent fluctuations and from the entrainment process. The two critical initial droplet diameters, d_e,evap and d_e,exit, are plotted in Fig. 9a as a function of t. The only other key parameter of importance is k, whose value is varied from 10⁻¹² to 10⁻⁶ m²/s. In evaluating d_e,exit using (26), apart from the property values of water and air, we have used the nominal values of α = 0.1, s_e = 0.5 m and t_e = 0.05 s (as an example). The solid lines correspond to d_e,exit, which decreases with increasing t; for each value of k there exists a minimum d_e below which there is no solution to (26), since the droplet fully evaporates before falling out of the puff. The dotted lines correspond to d_e,evap, which increases with t. The intersection of the two curves is marked by a solid square, which corresponds to the limiting time t_lim(k), beyond which the puff contains only fully-evaporated droplet nuclei containing the viruses. Correspondingly, we can define a limiting droplet diameter d_e,lim(k).
Given sufficient time, all initially ejected larger droplets (i.e., d_e > d_e,lim) would have fallen out of the puff, and all smaller droplets (i.e., d_e ≤ d_e,lim) would have evaporated to become droplet nuclei. At times smaller than the limiting time (i.e., for t < t_lim) we have the interesting situation of some droplets falling out of the puff (exited droplets), some still remaining as partially evaporated airborne droplets, and some fully evaporated to become droplet nuclei. This scenario is depicted in Fig. 9a with an example of t = 0.1 s for k = 10⁻⁸ m²/s, plotted as a dashed line. There can be a significant presence of non-volatile material such as mucus, bacteria and bacterial products, viruses and fungi, and food debris in the ejected droplets [60]. However, the fraction of ejected droplet volume that is made up of these non-volatiles varies substantially from person to person. The presence of non-volatiles alters the analysis of the previous sections in two significant ways. First, each ejected droplet, as it evaporates, will reach a final size that is dictated by the amount of non-volatiles that were initially in it. The larger the droplet size at initial ejection, the larger will be its final size after evaporation, since it contains a larger amount of non-volatiles. If ψ is the volume fraction of non-volatiles in the initial droplet, the final diameter of the droplet nucleus after complete evaporation of the volatile matter (i.e., water) will be d_dr = ψ^(1/3) d_e. This size depends on the initial droplet size and composition. Note that even a small, for example 1%, non-volatile composition results in d_dr being around 20% of the initial ejected droplet size. It has also been noted that the evaporation of water can be partial, depending on local conditions in the cloud or environment. We simply assume the fraction ψ to also account for any residual water retained within the droplet nuclei.
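The cube-root relation between the non-volatile fraction and the final nucleus size is easy to verify with a one-function sketch; it reproduces the "1% non-volatiles gives a nucleus around 20% of the ejected diameter" statement above (0.01^(1/3) ≈ 0.215):

```python
def nuclei_diameter(d_e, psi):
    """Final droplet-nucleus diameter after complete evaporation of the
    volatile fraction: d_dr = psi**(1/3) * d_e, psi = non-volatile
    volume fraction of the initially ejected droplet."""
    return psi ** (1.0 / 3.0) * d_e

ratio_1pct = nuclei_diameter(1.0, 0.01)  # ~0.215: about 20% of ejected size
ratio_3pct = nuclei_diameter(1.0, 0.03)  # ~0.31: about 30% of ejected size
```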
The second important effect of non-volatiles is to reduce the rate of evaporation. As evaporation occurs at the droplet surface, a fraction of the surface will be occupied by the non-volatiles, reducing the rate of evaporation. For small values of ψ, the effect of non-volatiles is quite small, but only at the beginning: the effect grows over time, since the volume fraction of non-volatiles increases as the volatile matter evaporates. Because of this ever-decreasing evaporation rate, it may take longer for a droplet to decrease from its ejection diameter d_e to its final droplet-nuclei diameter d_dr than what is predicted by (21). It should be noted that intermittency of turbulence and heterogeneity of vapor concentration and droplet distribution within the puff will also influence the evaporation rate [31, 34, 121]. Nevertheless, for simplicity, and for the purposes of the present first-order mathematical framework, we use the d²-law given in (21), but with a smaller value of effective k to account for the effect of non-volatiles and turbulence intermittency. This approximation is likely to be quite accurate in describing the early evolution of the droplet. Only at late stages, as the droplet approaches its final diameter d_dr, will the d²-law be in significant error. Applying the analysis of the previous sections while taking into account the presence of non-volatiles, we separate the two time regimes of t ≤ t_lim and t ≥ t_lim. In the case when t ≤ t_lim, we have three types of droplets: (i) exited droplets whose initial size at injection is greater than d_e,exit; (ii) droplets of size at ejection smaller than d_e,evap that have completely evaporated to become droplet nuclei of size d_dr; and (iii) intermediate-size airborne droplets that are within the puff and still undergoing evaporation. We assume an equation of the form (26) to apply approximately even in the presence of non-volatiles.
With this balance between the fall distance of a droplet and the puff radius we obtain an expression for d_e,exit. The corresponding limiting diameter of complete evaporation can be obtained by setting d = d_e,evap ψ^(1/3) and d_e = d_e,evap in (21). While the above two estimates are in terms of the droplet diameter at injection, the current diameter at t can be expressed through the mapping (21). From the above expressions, we define t_lim to be the time when d_e,exit = d_e,evap, which in terms of the current droplet diameter becomes d_exit = d_evap. Beyond this limiting time (i.e., for t > t_lim) the droplets can be separated into only two types: (i) exited droplets whose initial size at injection is greater than d_e,exit = d_e,evap, and (ii) droplets of smaller size at ejection that have become droplet nuclei. The variation of t_lim and d_e,lim as a function of k is presented in Fig. 9b. It is clear that as k varies over a wide range, t_lim ranges from 0.01 s to 450 s, and correspondingly d_e,lim varies from 415 to 7 µm. We now put together all the above arguments to present a predictive model of the droplet concentration within the puff. The initial condition for the size distribution is set by the ejection process discussed in Section 3, and the simple Pareto distribution given in (4) provides an accurate description. Based on the analysis of the previous sections, we separate the two time regimes of t ≤ t_lim and t ≥ t_lim. In the case when t ≤ t_lim, the droplet/aerosol concentration (the number per unit volume of the puff) can be expressed through the mapping (21) between the current droplet size and its size at injection. Due to the turbulent nature of the puff, the distribution of airborne droplets and nuclei is taken to be uniform within the puff. Quantities such as s, d_evap and d_exit are as defined above, and the pre-factor 1/Q(t) accounts for the expansion of the puff volume.
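The limiting time can be estimated numerically by intersecting the evaporation and fall-out criteria. The sketch below is an assumption-laden illustration, not the paper's exact expression: it assumes Stokes settling w = C_s d², the d²-law, ψ → 0, and a puff radius r(t) = α s_e ((t + t_e)/t_e)^(1/(4+C)) with the example parameters quoted in the text. The closed-form fall distance C_s d_e⁴/(2k) comes from integrating w over the droplet lifetime d_e²/k.

```python
import math

# Assumed nominal parameters from the worked example in the text.
alpha, s_e, t_e, C = 0.1, 0.5, 0.05, 0.0    # puff entrainment model
g, rho_l, mu = 9.81, 1000.0, 1.8e-5          # water droplet in air
k = 1e-8                                      # effective evaporative diffusivity, m^2/s
C_s = g * rho_l / (18 * mu)                   # Stokes prefactor: w = C_s * d^2

def puff_radius(t):
    """Puff radius r(t) = alpha * s_e * ((t + t_e)/t_e)**(1/(4+C))."""
    return alpha * s_e * ((t + t_e) / t_e) ** (1.0 / (4.0 + C))

def fall_minus_radius(t):
    """At t_lim the droplet with d_e = sqrt(k*t) evaporates exactly when its
    cumulative fall distance C_s*d_e^4/(2k) equals the puff radius."""
    d_e2 = k * t
    return C_s * d_e2**2 / (2 * k) - puff_radius(t)

# Bisection for t_lim (fall distance grows ~t^2, radius ~t^(1/4))
lo, hi = 1e-3, 100.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if fall_minus_radius(mid) > 0:
        hi = mid
    else:
        lo = mid
t_lim = 0.5 * (lo + hi)
d_e_lim = math.sqrt(k * t_lim)   # limiting ejection diameter
```

For k = 10⁻⁸ m²/s this gives t_lim of order 1 s and d_e,lim of order 100 µm, inside the ranges (0.01–450 s, 415–7 µm) quoted above for the full k sweep.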
In the case of t ≥ t_lim, the droplet number density spectrum contains only droplet nuclei. Here, the size of the largest droplet nucleus within the puff is related to its initial unevaporated droplet size as d_lim = d_e,lim ψ^(1/3), and the plot of d_e,lim as a function of k for a specific example of puff and droplet ejection was shown in Fig. 9b. In this subsection we briefly consider droplet temperature, since it plays a role in determining the saturation vapor pressure and the value of k. Following Pirhadi et al. [96], we write the thermal equation of the droplet, where c_pw is the specific heat of water, k_p is the thermal conductivity of the puff air, L is the latent heat of vaporization, and T_d and T_p are the temperatures of the droplet and the surrounding puff. The first term on the right accounts for convective heat transfer from the surrounding air and the second term accounts for the heat needed for phase change during evaporation. It can be readily established that the major portion of the heat required for droplet evaporation must come from the surrounding air through convective heat transfer. The equilibrium Eulerian approach [36] can again be used to obtain the asymptotic solution of this thermal equation, and the droplet temperature can be explicitly written in terms of τ_T, the thermal time scale of the droplet that was introduced earlier. The second term on the right is negative and thus contributes to the droplet temperature being lower than the surrounding puff. A simple calculation with typical values shows that the contribution of the third term is quite small and can be ignored. As a result, the temperature difference between the droplet and the surroundings is largely controlled by the evaporation rate dm/dt, which decreases over time. Again, using the properties of water and air, and typical values for Nu and B_M, we can evaluate the temperature difference T_p − T_d to be typically a few degrees.
Thus, the evaporating droplets need to be only a few degrees cooler than the surrounding puff for evaporation to continue. When the puff equilibrates with the surroundings and its velocity falls below the ambient turbulent velocity fluctuation, the subsequent dynamics of the droplet cloud is governed by turbulent dispersion. This late-time evolution of the droplet cloud depends on many factors that characterize the surrounding air. This is where the difference between a small enclosed environment, such as an elevator or an aircraft cabin, and an open field matters, along with factors such as cross breeze and ventilation. A universal analysis of the late-time evolution of the droplet nuclei cloud is thus not possible, due to problem-specific details. The purpose of this brief discussion is to establish a simple scaling relation to guide when the puff evolution model presented in the above sections gives way to advection and dispersion by ambient turbulence. It should again be emphasized that the temperature difference between the puff fluid containing the droplet nuclei cloud and the ambient air may induce buoyancy effects, which for model simplicity will be taken into account as part of turbulent dispersion. We adopt the classical scaling analysis of Richardson [101], according to which the radius of a droplet cloud in the inertial range will increase as the 3/2 power of time, R_lt(t) = C [ε (t + t_0)³]^(1/2), where C is a constant, ε is the dissipation rate, which will be taken to be a constant property of the ambient turbulence, and t_0 is the time shift required to match the cloud size at the transition time between this simple late-time model and the puff model. The subscript lt stands for the late-time behavior of the radius of the droplet-laden cloud. We now make the simple proposal that there exists a transition time t_tr, below which the rate of expansion of the puff as given by the puff model is larger than dR_lt/dt computed from the above expression.
During this early time, ambient dispersion effects can be ignored in favor of the puff model, but for t > t_tr the droplet-laden cloud's ambient dispersion becomes the dominant effect. The constants t_0 and t_tr can be obtained by satisfying two conditions: (i) the size of the droplet-laden cloud given by (35) at t_tr matches the puff radius at that time, given by α s_e ((t_tr + t_e)/t_e)^(1/(4+C)); and (ii) the rate of expansion of the droplet-laden cloud by turbulent dispersion matches the rate of puff growth given by the puff model. From these two simple conditions, we obtain the final expression for the transition time. Given a puff, characterized by its initial ejection length and time scales s_e and t_e, and the ambient level of turbulence characterized by ε, the value of the transition time can be estimated. If we take the entrainment coefficient α = 0.1, the puff-model constant C = 0, and typical values of s_e = 0.5 m and t_e = 0.05 s, we can estimate t_tr = 1.88 s for a dissipation rate of ε = 10⁻⁵ m²/s³. The transition time t_tr increases (or decreases) slowly with decreasing (or increasing) dissipation rate. Thus, the early phase of droplet evaporation described by the puff model is valid for O(1) s, before being taken over by ambient turbulent dispersion. However, it must be stressed that the Richardson scaling relation is likely an over-estimation of ambient dispersion, as there is experimental and computational evidence suggesting that the power-law exponent in (35) is lower than 3 [92]. But it must be remarked that even with corresponding changes to late-time turbulent dispersion, the impact on the transition time can be estimated to be not very large. Also, it must be cautioned that, according to classical turbulent dispersion theory, during this late-time dispersal the concentration of virus-laden droplet nuclei within the cloud will not be uniform, but will tend to decay from the central region to the periphery.
Nevertheless, for the sake of simplicity, here we assume (35) to apply and we take the droplet nuclei distribution to be uniform. According to the above simple hypothesis, the effect of late-time turbulent dispersion on the number density spectrum is primarily due to the expansion of the cloud, while the total number of droplet nuclei within the cloud remains the same. Thus, the expressions (31) and (32) still apply; however, the expression for the volume of the cloud must be appropriately modified. The location of the center of the expanding cloud of droplets is still given by the puff trajectory s(t), which has considerably slowed down during late-time dispersal. The strength of the above model is in its theoretical foundation and analytical simplicity. But the validity of the approximations and simplifications must be verified in applications to the specific scenarios being considered. For example, considering variability in composition, turbulence intermittency, initial conditions of emission, and the state of the ambient, direct observations show that the transition between the puff-dominated and ambient-flow-dominated fate of respiratory droplets varies from O(1–100) s [19]. This section will mainly survey the existing literature on what fraction of the droplets and aerosols at any location gets inhaled by the receiving host, and how this is modified by the use of masks. These effects, modeled as inhalation (aspiration) and filtration efficiencies, will then be incorporated into the puff-cloud model. Pulmonary ventilation (breathing) has a cyclic variation that varies markedly with age and metabolic activity. The intensity of breathing (minute ventilation) is expressed in L/min of inhaled and exhaled air. For the rest condition, the ventilation rate is about 5–8 L/min, increasing to about 10–15 L/min for mild activities. During exercise, ventilation increases significantly depending on age and the metabolic needs of the activity.
In the majority of earlier studies on airflow and particle transport and deposition in human airways, the transient nature of breathing was ignored for simplification and to reduce computational cost. Haubermann et al. [50] performed experiments on a nasal cast and found that particle deposition for constant airflow is higher than for cyclic breathing. Shi et al. [108] performed simulations of nanoparticle deposition in the nasal cavity under cyclic airflow and found that the effects of transient flow are important. Grgic et al. [45] and Horschler et al. [53] performed experimental and numerical studies, respectively, of flow and particle deposition in a human mouth-throat model and the human nasal cavity. Particle deposition in a nasal cavity under cyclic breathing conditions was investigated by Bahmanzadeh et al. [9], Naseri et al. [88], and Kiasadegh et al. [62], where unsteady Lagrangian particle tracking was used. They found differences in the predicted local deposition between unsteady and equivalent steady flow simulations. In many of these studies, a sinusoidal variation of the inhaled air volume is used, Q(t) = Q_max sin(2πt/T). Here Q_max is the maximum flow rate, and T = 4 s is the period of the breathing cycle for an adult during rest or mild activity. The period of breathing also changes with age and the level of activity. Haghnegahdar et al. [47] investigated the transport, deposition, and immune-system response of droplets laden with low-strain influenza A virus (IAV). They noted that the shape of the breathing cycle is subject-dependent and also changes between nose and mouth breathing, and they provided an eight-term Fourier series for a more accurate description of the breathing cycle. The hygroscopic growth of droplets was also included in their study. Aspiration of particles through the human nose was studied by Ogden and Birkett [91] and Armbruster and Breuer [3].
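The sinusoidal breathing model above can be sketched directly; the tidal volume over one inhalation half-cycle is Q_max T/π, so the minute ventilation is 60 Q_max/π if Q_max is in L/s. A Q_max near 0.31 L/s then reproduces the ~6 L/min resting ventilation quoted earlier (an illustrative value, not one fixed by the text):

```python
import math

T = 4.0  # breathing period (s) for an adult at rest or in mild activity

def flow_rate(t, q_max):
    """Sinusoidal breathing flow Q(t) = Q_max * sin(2*pi*t/T);
    positive = inhalation, negative = exhalation."""
    return q_max * math.sin(2 * math.pi * t / T)

def minute_ventilation(q_max):
    """Inhaled volume per minute: tidal volume (Q_max*T/pi, liters if
    q_max is in L/s) times 60/T breaths per minute."""
    tidal = q_max * T / math.pi
    return tidal * (60.0 / T)
```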
Accordingly, the aspiration efficiency η_a is defined as the ratio of the concentration of inhaled particles to the ambient concentration. Using the results of earlier studies as well as his own work, Vincent [122] proposed a correlation for evaluating the inhalability of particles: the aspiration efficiency of particles smaller than 100 µm is given as η_a(d) = 0.5 [1 + exp(−0.06 d)] for d < 100 µm (40), where d is the aerodynamic diameter of the particle in microns. While this correlation captures the general trend that larger particles are more difficult to inhale, it has a number of limitations: it was developed for mouth breathing with the head oriented toward the airflow direction, with speeds in the range of 1 m/s to 4 m/s. Experimental investigations of aerosol inhalability were reported by Hsu and Swift [54], Su and Vincent [113, 114], Aitken et al. [1], and Kennedy and Hinds [61]. Dai et al. [25] performed in-vivo measurements of the inhalability of large aerosol particles in calm air and fitted their data to several correlations for the calm-air condition, with d again in microns. Computational modeling of the inhalability of aerosol particles has been reported by many researchers [21, 56, 57, 63, 82, 88], and interpersonal exposure was studied in [41, 51]. The influence of the thermal plume was studied by Salmanzadeh et al. [103]. Naseri et al. [87] performed a series of computational simulations and analyzed the influence of the thermal plume on particle aspiration efficiency when the body temperature is higher or lower than the ambient; their results are reproduced in Figure 10 (influence of the thermal plume on aspiration efficiency [87]). There, the case of body temperature T_b = 26.6 °C with ambient temperature T_a = 21.3 °C (upward thermal plume) and the case of T_b = 32.2 °C with T_a = 40.0 °C (downward thermal plume) are compared with the isothermal case studied by Dai et al. [25].
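Vincent's correlation (40) is a one-liner to implement and check: it gives perfect inhalability in the fine-particle limit and falls toward 0.5 as d approaches 100 µm.

```python
import math

def aspiration_efficiency(d_um):
    """Vincent's inhalability correlation for particles smaller than
    100 microns: eta_a(d) = 0.5 * (1 + exp(-0.06*d)), d in microns."""
    if not 0 <= d_um < 100:
        raise ValueError("correlation valid for 0 <= d < 100 microns")
    return 0.5 * (1.0 + math.exp(-0.06 * d_um))
```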
It is seen that when the body is warmer than the surroundings, the aspiration ratio increases; when the ambient air is at a higher temperature than the body, inhalability decreases compared to the isothermal case. In light of the results of the previous section, it can be concluded that at a distance of O(1) m the ejected, mostly water, droplets have sufficiently reduced in size that these O(1) µm aerosols have near-perfect inhalability. Using a respiratory face mask is a practical protection against exposure to airborne viruses and other pollutants. Among the available facepiece respirators, N95 and surgical masks are considered to be highly effective [46, 75]. The N95 mask has a filtration efficiency of more than 95% in the absence of face leakage [89, 99]. Surgical masks are used extensively in hospitals and operating rooms [74]. Nevertheless, there have been concerns regarding their effective filtration of airborne bacteria and viruses [12, 65, 75]. There is often discomfort in wearing respiratory masks for extended durations, which increases the risk of spread of infection. The breathing resistance of a mask is directly related to the pressure drop of the filtering material. The efficiency of respiratory masks varies with several factors, including the intensity and frequency of breathing as well as the particle size [132]. The filtration efficiencies of different masks under normal breathing conditions, as reported by Zhang et al. [131] and Feng et al. [35], in the absence of leakage, are shown in Figure 11 (filtration efficiency of different respiratory masks under normal breathing conditions [35, 131]). As an example, the measured filtration efficiency of the surgical mask can be fit by a correlation in which the droplet nuclei diameter d must be in microns. It is seen that the filtration efficiencies of different masks vary significantly, with N95 having the best performance, followed by the surgical mask.
It is also seen that all masks capture large particles. The N95, surgical, and procedure masks remove aerosols larger than a couple of microns. Cotton and gauze masks capture a major fraction of particles larger than 10 µm. The capture efficiency of all masks also shows an increasing trend as particle size drops below 30 nm, due to the Brownian motion of nanoparticles. Figure 11 also shows that the filtration efficiencies of all respiratory masks drop for particle sizes in the range of 80 nm to about 1 µm; in this size range both inertial impaction and Brownian diffusion are weak, and the capture efficiency is therefore reduced. Based on these results, and the earlier finding that most ejected droplets within the cloud reduce their size substantially and may become sub-micron aerosol particles by about O(1–10) m distance, it can be stated that only professional masks such as N95, surgical, and procedure masks provide reliable reduction of the inhaled particles. Hence, it is important for healthcare workers to have access to high-grade respirators upon entering a room or space with infectious patients [19]. Another benefit of a mask is that it eliminates the momentum of the expelled puff during sneezing, coughing, speaking, and breathing, reducing the distance over which the droplet cloud is transported; wearing a mask therefore also reduces the chance of transmission of infectious viruses by an infected wearer. It should be emphasized that the concentration that a receiving host will inhale (φ_inhaled) depends on the local concentration in the breathing zone adjusted by the aspiration efficiency given by equations (40) and (41) (plotted in Figure 10). When the receiving host wears a mask, an additional correction is needed: multiplication by the factor (1 − η_f), where η_f is the filtration efficiency plotted in Figure 11.
That is, φ_inhaled(d, t) = φ(d, t) η_a(d) [1 − η_f(d)], where φ(d, t) is the droplet nuclei concentration in the breathing zone, given in (31) or (32). It is seen that the concentration of inhaled droplets larger than 10 microns decreases significantly when a mask is used, while the exposure to smaller droplets, particularly in the size range of 100 nm to 1 µm, varies with the kind of mask used. The object of this section is to put together the different models of puff and droplet evolution described in the previous sections, underline their simplifications, and demonstrate their ability to make useful predictions. Such results under varying scenarios can then potentially be used for science-based policy making, such as establishing multi-layered social distancing guidelines and other safety measures. In particular, we aim to model the evolution of the puff and the concentration of airborne droplets and nuclei that remain within the cloud, so that the probability of potential transmission can be estimated. As discussed in Section 4.2, the virus-laden droplets exhaled by an infected host undergo a number of transformations before reaching the next potential host. To prevent transmission, current safety measures impose a safety distance of two meters. Furthermore, cloth masks are widely used by the public, and their effectiveness has been shown to be questionable for droplets and aerosols of size around a micron. The adequacy of these common recommendations and practices can be evaluated by investigating the concentration of airborne droplets and nuclei at distances larger than one meter and the probability of them being around a micron in diameter, since such an outcome substantially increases the chances of transmission.
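The inhaled-concentration correction can be composed directly from the aspiration and filtration efficiencies. A minimal sketch, using Vincent's correlation (40) for η_a and treating the filtration efficiency η_f as a supplied number (the 0.95 used in the test is an illustrative N95-like value, not a fit from the text):

```python
import math

def aspiration_efficiency(d_um):
    """Vincent correlation (40), d in microns, valid below 100 microns."""
    return 0.5 * (1.0 + math.exp(-0.06 * d_um))

def inhaled_concentration(phi, d_um, eta_filter):
    """phi_inhaled = phi * eta_a(d) * (1 - eta_f): breathing-zone
    concentration reduced by aspiration and mask filtration."""
    return phi * aspiration_efficiency(d_um) * (1.0 - eta_filter)
```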
In the following, we will examine two effects: the presence of small quantities of non-volatile matter in the ejected drops, which remain as droplet nuclei after evaporation, and the adequacy of the log-normal and Pareto distributions for quantifying the number of droplets in the lower diameter classes. First, in Section 6.1, we consider predictions based on a currently used model, where the droplets are allowed to fully evaporate. Then, in Section 6.2, we consider improved predictions based on the present model, where the effect of non-volatiles and the motion of the puff are accurately modeled. Let us consider the situation of speaking or coughing, whose initial puff volume and momentum are such that they yield s_e ≈ 0.5 m and t_e ≈ 0.05 s. Under this specific condition, as shown in Figure 7, the puff travels about 1 m in about 5 s. For this simple example scenario, we will examine our ability to predict airborne droplet and nuclei concentration, as an important step towards estimating the potential for airborne transmission in commonly encountered situations. In most countries, current guidelines are based on the work of Xie et al. [130], who revisited previous guidelines [127] with improved evaporation and settling models. They identified the possibility that, due to evaporation, the droplets quickly become vanishingly small before reaching a significant distance and thus may represent a minor danger for transmission due to their minimal virus loading. This scenario is shown in Figure 12 (evolution of the drop size distribution spectra according to the currently used evaporation models [127, 130]), where we present the evolution of the drop size spectrum while droplets are transported by the ejected puff.
The initial droplet size distribution is taken to be that measured by Duguid [30], modeled with a log-normal distribution, which in the Monte-Carlo approach is randomly sampled with one million droplets divided into one thousand diameter classes. Each droplet is then followed while evaporating and falling. The evaporation model is taken to be (21), with the effective diffusion coefficient estimated as k ≈ 1 × 10⁻⁸ m²/s. This value is computed under the assumption that drops are made of either pure water or a saline solution [130] and that the air has about 98% humidity. This is therefore an environment unfavorable to evaporation, and consequently drop size reduction happens relatively slowly. However, from the figure it is clear that, even in this extreme case, after a few tens of centimeters, and within a second, all droplets have evaporated down to a size below 10 µm. This is in line with the predictions of Xie et al. [130]. Naturally, if the air is drier, the effective evaporation coefficient will be larger (even as large as k ≈ 10⁻⁵ m²/s) and the droplet size spectrum will evolve even faster, leaving virtually all droplets in the puff smaller than 1 µm. In the model, we set the minimum diameter that all drops can achieve to 1 µm (shown by the single point indicated in the figure) so as to emphasize this effect of the model. Recall that intermittency of turbulence within the puff can create clusters of droplets and concentrations of vapor, and thereby significantly alter the evaporation rate [31, 34, 121]. Hence, our estimate of the evaporation time, as governed by the d²-law (21), is a lower bound. As discussed in Section 4.3, there is current consensus that droplets ejected during sneezing or coughing contain, in addition to water, other biological and particulate non-volatile matter; viruses themselves are of size about 0.1 µm. Here we will examine the evolution of the droplet size distribution in the presence of non-volatile matter.
It will become clear in the following that, in this case, even a small amount of non-volatile matter plays an important role, with the evaporation coefficient being a minor factor in deciding how fast the final state is reached. In Figure 13, we show the final distribution of droplets under two scenarios, where the initially ejected droplets contain 0.1% and 3.0% non-volatile matter. In Figure 13a, the initial drop size distribution is modeled as a log-normal distribution (i.e., as in Fig. 12), whereas in Figure 13b the initial drop size distribution is modeled by the Pareto distribution, with initial droplet sizes varying between 1 and 100 µm. This range is smaller than that suggested earlier in Section 3; however, drops larger than 100 µm fall out of the cloud and therefore are not important for airborne transmission, and droplets initially smaller than 1 µm have a much smaller viral load. Here, "final droplet size distribution" indicates the number of droplets that remain within the puff after all the larger droplets have fallen out and all others have completed their evaporation to become droplet nuclei. This final number of droplet nuclei as a function of size does not vary with time or distance. The size distribution is computed here as in Figure 12, with random sampling from the initial log-normal or Pareto distribution. As before, these computations used an evaporation coefficient of k = 10⁻⁸ m²/s. However, there are two important differences. First, each droplet is allowed to fall vertically according to its time-dependent settling velocity w, which decreases over time as the droplet evaporates. Integration of the fall velocity over time provides the distance traveled by the droplet relative to the puff, and droplets whose fall distance exceeds the size of the puff are removed from consideration.
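The Monte-Carlo procedure can be sketched in miniature: sample ejection diameters, drop those that settle out, and rigidly shift the survivors to their nucleus size. This is a toy version under loud assumptions: the Pareto exponent b = 2 and the fall-out cutoff d_exit = 30 µm are illustrative placeholders, not the paper's fitted values, and evaporation is collapsed to its end state.

```python
import random

random.seed(0)
psi = 0.001                  # 0.1% non-volatile fraction (example from the text)
d_min, d_max = 1e-6, 100e-6  # ejection-size range used for the Pareto case
b = 2.0                      # ASSUMED Pareto exponent, for illustration only
d_exit = 30e-6               # ASSUMED fall-out cutoff at the time considered

def sample_pareto():
    """Inverse-CDF sample from n(d) ~ d**-(b+1), truncated to [d_min, d_max]."""
    u = random.random()
    a, c = d_min**-b, d_max**-b
    return (a - u * (a - c)) ** (-1.0 / b)

droplets = [sample_pareto() for _ in range(100000)]
airborne = [d for d in droplets if d <= d_exit]   # larger drops settle out of the puff
nuclei = [psi ** (1 / 3) * d for d in airborne]   # rigid shift to droplet-nuclei size
```

As in Figure 13, the surviving spectrum is the initial one shifted by the factor ψ^(1/3) with an upper cutoff set by settling.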
Second, each droplet that remains within the puff evaporates to its limiting droplet nuclei size, which is dictated by the initial amount of non-volatile matter contained within the droplet. For ψ = 0.1% non-volatile matter, the final aerosol size cannot decrease below 10% of the initial droplet diameter, whereas for 3.0% non-volatile matter the final droplet size cannot decrease below about 30% of the initial diameter. From Fig. 13, it is clear that when evaporation is complete, the drop size distribution rigidly shifts towards smaller diameters, with a cut-off upper diameter due to the settling of large drops (these cut-offs are the upper limits of the blue and red curves). Essentially, the initial number of viruses that were in droplets of size smaller than d_e,exit still remains within the cloud almost unchanged, representing a more dangerous source of transmission than predicted under the conventional assumption of near-full evaporation. Again, it is important to note that the final droplet size distribution is established rapidly, even with the somewhat lower effective evaporation diffusivity of k = 10⁻⁸ m²/s, and without accounting for the effect of localized moisture of the cloud in further reducing the rate. Figure 13 also illustrates the important difference between the drop size distributions: the Pareto distribution predicts a much larger number of drops in the micron and sub-micron range, possibly the most dangerous range in view of both aspiration efficiency and filtration inefficiency. In this section we demonstrate the efficacy of the simple model presented in (31) and (32) for the prediction of droplet/aerosol concentration. In contrast to the Monte-Carlo approach of the previous subsection, where the evolution of each droplet was accurately integrated, here we use the analytical prediction along with its simplifying assumptions. The cases considered are identical to those presented in Figure 13 for ψ = 0.1% and k = 10⁻⁸ m²/s.
the initial droplet size distributions considered are again the log-normal and pareto distributions. in this case, however, we underline that the quantity of importance in airborne transmission is not the total number of droplet nuclei, but rather their concentration in the proximity of a susceptible host. accordingly, we plot in figure 14 the airborne droplet and nuclei concentration (per liter of volume) as a function of droplet size. these results do not take into account the aspiration and filtration efficiencies given in (43). here the area under the curve between any two diameters yields the number of droplets within this size range per liter of volume within the cloud. at the early times of t = 0.025 and 0.2 s, we see that larger droplets above a certain size have fallen out of the cloud, while droplet nuclei smaller than d_evap have fully evaporated and their distribution is a rigidly-shifted version of the original distribution. the distribution of intermediate-size airborne droplets reflects the fact that they are still undergoing evaporation. unlike in figure 13, the concentration continues to fall even after t_lim ≈ 0.68 s, when the number and size of droplets within the cloud have reached their limiting values. this is simply because the volume of the puff continues to increase, which continuously dilutes the aerosol concentration. most importantly, the results of the simple model presented in (31) and (32) are in excellent agreement with those obtained from the monte-carlo simulation. the increasing size of the contaminated cloud with time can be predicted with (38), and the centroid is given by the scaling law (16). as the final step, we include the effect of aspiration and filtration efficiencies to compute the concentration of droplet nuclei that gets into the receiving host. in computing φ_inhaled using (43), we take the droplet/nuclei concentration at the location of the receiving host to be that computed and presented in figure 14.
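the composition of efficiencies in computing φ_inhaled can be illustrated with a minimal sketch. the efficiency curves below are hypothetical placeholders standing in for (40) and (42), chosen only so that nuclei at or above 2 µm are fully blocked, as reported for the surgical mask; they are not the paper's fitted curves:

```python
# hypothetical efficiency curves standing in for eqs. (40) and (42)
def aspiration_efficiency(d_um):
    # assumed: near 1 for small particles, decreasing with size
    return 1.0 / (1.0 + (d_um / 10.0) ** 2)

def surgical_mask_penetration(d_um):
    # assumed: partial penetration below 2 um, none at or above 2 um
    return max(0.0, 0.5 * (1.0 - d_um / 2.0))

def inhaled_concentration(c_ambient, d_um):
    """phi_inhaled-style composition: ambient concentration x aspiration
    efficiency x mask penetration, per droplet-nuclei size."""
    return c_ambient * aspiration_efficiency(d_um) * surgical_mask_penetration(d_um)

for d in (0.5, 1.0, 2.0, 5.0):
    print(f"d = {d} um: inhaled = {inhaled_concentration(100.0, d):.2f} per liter")
```

the point of the sketch is the multiplicative structure: the inhaled concentration at each size is the ambient concentration scaled down by both efficiencies, which is why the inhaled spectrum in figure 15 is both narrower and lower than the ambient one in figure 14.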
we consider the receiving host to be using a surgical mask, whose efficiency was shown in figure 11 and given in (42). the aspiration efficiency of the receiving host is taken to be that given in (40). the results are presented in figure 15, which includes the initial log-normal and pareto distributions (green lines). it is clear that, owing to the filtration efficiency of the surgical mask, no droplet nuclei of size greater than 2 µm get into the receiving host. for smaller droplet nuclei, the inhaled concentration is substantially lower due to both the aspiration and the filtration efficiencies. clearly, with the use of cotton or gauze masks, the inhaled concentration will be higher and the size range will be wider, approaching those shown in figure 14.

figure 14: droplet/aerosol concentration evolution as predicted by the analytical model presented in (31) and (32). the left frame shows the evolution starting from the log-normal distribution; the right frame shows the evolution starting from the pareto distribution. both cases use k = 10⁻⁸ m²/s.

the primary goal of this paper is to provide a unified theoretical framework that accounts for all the physical processes of importance, from the ejection of droplets by breathing, talking, coughing and sneezing to the inhalation of the resulting aerosols by the receiving host. these processes include: (i) forward advection of the exhaled droplets with the puff of air initially ejected; (ii) growth of the puff by entrainment of ambient air and its deceleration due to drag; (iii) gravitational settling of some of the droplets out of the puff; (iv) modeling of droplet evaporation, assuming that the d²-law prevails; (v) presence of non-volatile compounds which form the droplet nuclei left behind after evaporation; (vi) late-time dispersal of the droplet nuclei-laden cloud due to ambient air turbulent dispersion.
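process (ii), puff growth by entrainment, is also what ultimately dilutes the airborne concentration once the nuclei count is fixed after t_lim. a minimal sketch of this dilution, using an assumed self-similar growth law rather than eq. (38), and an assumed nuclei count and puff radius:

```python
# illustrative dilution of a fixed nuclei count by puff growth. the growth law
# below is an assumed self-similar scaling, not eq. (38) of the paper.
N_NUCLEI = 5000        # nuclei remaining after t_lim (assumed count)
R0, TAU = 0.1, 0.1     # initial puff radius (m) and time scale (s), assumed

def puff_radius(t):
    return R0 * (1.0 + t / TAU) ** 0.25   # assumed entraining-puff growth

def concentration_per_liter(t):
    vol_m3 = 4.0 / 3.0 * 3.141592653589793 * puff_radius(t) ** 3
    return N_NUCLEI / (vol_m3 * 1000.0)   # 1 m^3 = 1000 liters

for t in (1.0, 10.0, 100.0):
    print(f"t = {t:5.1f} s: {concentration_per_liter(t):10.2f} nuclei per liter")
```

whatever the precise growth exponent, the structure is the same: a fixed number of nuclei divided by a growing puff volume, so the exposure risk decays monotonically with time even though no nuclei are lost.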
despite the complex nature of the physical processes involved, the theoretical framework results in a simple model for the airborne droplet and nuclei concentration within the cloud as a function of droplet diameter and time, which is summarized in equations (31), (32) and (38). this framework can be used to calculate the concentration of virus-laden nuclei at the location of any receiving host as a function of time. as additional processes, the paper also considers (vii) the efficiency of aspiration of the droplet nuclei by the receiving host; and (viii) the effectiveness of different kinds of masks in filtering nuclei of varying size. it must be emphasized that the theoretical framework has been designed to be simple and therefore involves a number of simplifying assumptions. hence, it must be considered a starting point: by relaxing the approximations and adding further physical processes of relevance, more complex theoretical models can be developed. one of the primary advantages of such a simple theoretical framework is that varying scenarios can be considered quite easily: these different scenarios include varying initial puff volume, puff velocity, number of droplets ejected, their size distribution, non-volatile content, ambient temperature, humidity, and ambient turbulence.

figure 15: droplet nuclei concentration inhaled by the receiving host wearing a surgical mask, as predicted by the analytical model presented in (31) and (32) with the aspiration and filtration efficiencies given in (40) and (42). the left and right frames show the results for initial log-normal and pareto distributions. both cases use k = 10⁻⁸ m²/s.

the present theoretical framework can be, and perhaps must be, improved in several significant ways in order for it to become an important tool for reliable prediction of transmission. (i) accurate quantification of the initially ejected droplets still remains a major challenge.
further high-quality experimental measurements and high-fidelity simulations [22] are required, especially ones mimicking the actual processes of breathing, talking, coughing and sneezing, to fully understand the entire range of droplet sizes produced during exhalation. (ii) as demonstrated above, the rate at which an ejected droplet evaporates plays an important role in determining how fast it reaches its fully-evaporated state. it is thus important to calculate more precisely the evaporation rate of realistic droplets containing non-volatile matter resulting from human exhalation. the precise value of the evaporation rate may not be important when droplets evaporate fast, since all droplets remaining within the puff will have completed their evaporation; but under slow evaporation conditions, accurate evaluation of the evaporation rate is important. (iii) the assumption of a uniform spatial distribution of droplets within the puff, and later within the dispersing cloud, is a serious approximation [105]. the intermittency of turbulence within the initial puff, and later within the droplet cloud, is important to understand and to couple with the evaporation dynamics of the droplets. in addition to the role of intermittency, even the mean concentration of airborne droplets and nuclei may decay from the center to the outer periphery of the puff/cloud. characterization of this inhomogeneous distribution will improve the predictive capability of the model. (iv) the presence of significant ambient mean flow and turbulence, whether from indoor ventilation or outdoor cross-flow, will greatly influence the dispersion of the virus-laden droplets, but accounting for their effects can be challenging even in experimental and computational approaches. detailed experiments and highly-resolved simulations of specific scenarios should be pursued, but it will not be possible to cover all possible scenarios with such an approach.
a simpler approach, in which the above theoretical framework is extended with additional models such as a random-flight model (similar to those used in the calculation of atmospheric dispersion of pollutants [48]), may be promising.

references:
- aerosol inhalability in low air movement environments
- effect of airway opening on production of exhaled particles
- investigations into defining inhalable dust
- aerosol emission and superemission during human speech increase with voice loudness
- natural ventilation for infection control in health-care settings (edited by the world health organization)
- effectiveness of surgical and cotton masks in blocking sars-cov-2: a controlled comparison in 4 patients
- an experimental framework to capture the flow dynamics of droplets expelled by a sneeze
- airborne or droplet precautions for health workers treating coronavirus disease 2019. the journal of infectious diseases
- unsteady particle tracking of micro-particle deposition in the human nasal cavity under cyclic inspiratory flow
- turbulent dispersed multiphase flow
- a scaling analysis for point-particle approaches to turbulent multiphase flows
- manikin-based performance evaluation of n95 filtering-facepiece respirators challenged with nanoparticles
- self-similar wave produced by local perturbation of the kelvin-helmholtz shear-layer instability
- turbulent gas clouds and respiratory pathogen emissions: potential implications for reducing transmission of covid-19
- violent expiratory events: on coughing and sneezing
- anatomy of a sneeze. howard hughes medical institute image of the week
- the fluid dynamics of disease transmission
- the rate of evaporation of droplets. evaporation and diffusion coefficients, and vapour pressures of dibutyl phthalate and butyl stearate
- prediction of particle transport in enclosed environment
- extended lifetime of respiratory droplets in a turbulent vapour puff and its implications on airborne disease transmission
- a systematic review of the science and engineering of masks and respiratory protection: need for standardized evaluation and testing
- comments on a ruptured soap film
- in vivo measurements of inhalability of ultralarge aerosol particles in calm air by humans
- dense spray evaporation as a mixing process
- gas-liquid atomisation: gas phase characteristics by piv measurements and spatial evolution of the spray
- entrainment and growth of a fully developed, two-dimensional shear layer
- aerosol and surface stability of sars-cov-2 as compared with sars-cov-1
- the size and the duration of air-carriage of respiratory droplets and droplet nuclei
- preferential concentration of particles by turbulence
- nonlinear dynamics and breakup of free-surface flows
- quantification of preferential concentration of colliding particles in a homogeneous isotropic turbulent flow
- influence of wind and relative humidity on the social distancing effectiveness to prevent covid-19 airborne transmission: a numerical study
- a fast eulerian method for disperse two-phase flow
- a locally implicit improvement of the equilibrium eulerian method
- equilibrium eulerian approach for predicting the thermal field of a dispersion of small particles
- airborne infectious disease and the suppression of pulmonary bioaerosols
- instability regimes in the primary breakup region of planar coflowing sheets
- transient cfd simulation of the respiration process and interperson exposure assessment
- characterisation of human saliva as a platform for oral dissolution medium development
- modeling primary atomization
- the role of particle size in aerosolised pathogen transmission: a review
- the effect of unsteady flow rate increase on in vitro mouth-throat deposition of inhaled boluses
- performance of an n95 filtering facepiece particulate respirator and a surgical mask during human breathing: two pathways for particle penetration
- lung aerosol dynamics of airborne influenza a virus-laden droplets and the resultant immune system responses: an in silico study
- a novel approach to atmospheric dispersion modelling: the puff-particle model
- characterizations of particle size distribution of the droplets exhaled by sneeze
- the influence of breathing patterns on particle deposition in a nasal replicate cast
- cfd study of exhaled droplet transmission between occupants under different ventilation strategies in a typical office room
- on simulating primary atomization using the refined level set grid method
- on the assumption of steadiness of nasal cavity flow
- the measurements of human inhalability of ultralarge aerosols in calm air using mannikins
- evolution of raindrop size distribution by coalescence, breakup, and evaporation: theory and observations
- detailed predictions of particle aspiration affected by respiratory inhalation and airflow
- source and trajectories of inhaled particles from a surrounding environment and its deposition in the respiratory airway
- vortices catapult droplets in atomization
- the mechanism of breath aerosol formation
- the diagnostic applications of saliva - a review
- inhalability of large solid particles
- transient numerical simulation of airflow and fibrous particles in a human upper airway model
- inhalability of micron particles through the nose and mouth
- the evaporation of small spheres
- respiratory performance offered by n95 respirators and surgical masks: human subject evaluation with nacl aerosol representing bacterial and viral particle size range
- edge-effect: liquid sheet and droplets formed by drop impact close to an edge
- atomization and sprays
- respiratory virus shedding in exhaled breath and efficacy of face masks
- effervescent atomization in two dimensions
- a scaling analysis of added-mass and history forces and their coupling in dispersed multiphase flows
- inter-phase heat transfer and energy coupling in turbulent dispersed multiphase flows
- spray formation in a quasiplanar gas-liquid mixing layer at moderate density ratios: a numerical closeup
- multiscale simulation of atomization with small droplets represented by a lagrangian point-particle model
- disposable surgical face masks for preventing surgical wound infection in clean surgery
- surgical mask vs n95 respirator for preventing influenza among health care workers: a randomized trial
- relation between the airborne diameters of respiratory droplets and the diameter of the stains left after recovery
- propagation and breakup of liquid menisci and aerosol generation in small airways
- density contrast matters for drop fragmentation thresholds at low ohnesorge number
- contribution to the study of assisted atomisation of a liquid: shear instability and spray generation [translated from french]
- experimental and analytical study of the shear instability of a gas-liquid mixing layer
- improved strategy to control aerosol-transmitted infections in a hospital suite
- a review of inhalability fraction models: discussion and recommendations
- influenza virus aerosols in human exhaled breath: particle size, culturability, and effect of surgical masks
- it is time to address airborne transmission of covid-19
- airborne transmission of sars-cov-2: the world should face the reality
- droplet-wall collisions: experimental studies of the deformation and breakup process
- effect of turbulent thermal plume on aspiration efficiency of microparticles
- numerical investigation of transient transport and deposition of microparticles under unsteady inspiratory flow in human upper airways
- 42 cfr 84 respiratory protective devices: final rules and notice
- toward understanding the risk of secondary airborne infection: emission of respirable pathogens
- the human head as a dust sampler
- oceanic diffusion diagrams
- droplet-air collision dynamics: evolution of the film thickness
- collection, particle sizing and detection of airborne viruses
- use of breakup time data and velocity history data to predict the maximum size of stable fragments for acceleration-induced breakup of a single drop
- phase change and deposition of inhaled droplets in the human nasal cavity under cyclic inspiratory airflow
- ageing and burst of surface bubbles
- biosurfactants change the thinning of contaminated bubbles at bacteria-laden water interfaces
- performance of n95 respirators: filtration efficiency for airborne microbial and inert particles
- oxford-mit evidence review: what is the evidence to support the 2-metre social distancing rule to reduce covid-19 transmission?
- atmospheric diffusion shown on a distance-neighbour graph
- viscosity-modulated breakup and coalescence of large drops in bounded turbulence
- effect of thermal plume adjacent to the body on the movement of indoor air aerosol particles
- advanced models of fuel droplet heating and evaporation
- mechanisms for selective radial dispersion of microparticles in the transitional region of a confined turbulent round jet
- visualization of sneeze ejecta: steps of fluid fragmentation leading to respiratory droplets
- breathing is enough: for the spread of influenza virus and sars-cov-2 by breathing only
- laminar airflow and nanoparticle or vapor deposition in a human nasal cavity model
- controversy around airborne versus droplet transmission of respiratory viruses: implication for infection prevention. current opinion in infectious diseases
- assessing the dynamics and control of droplet- and aerosol-transmitted influenza using an indoor positioning system
- coalescence and size distribution of surfactant laden droplets in turbulent flow
- small droplet aerosols in poorly ventilated spaces and sars-cov-2 transmission
- new experimental studies to directly measure aspiration efficiencies of aerosol samplers in calm air
- experimental measurements of aspiration efficiency for idealized spherical aerosol samplers in calm air
- the dynamics of thin sheets of fluid. iii. disintegration of fluid sheets
- satellite and subsatellite formation in capillary breakup
- buoyancy effects in fluids
- ocean spray
- drop fragmentation on impact
- fine structure of the vapor field in evaporating dense sprays
- aerosol sampling: science and practice
- unsteady sheet fragmentation: droplet sizes and speeds
- universal rim thickness in unsteady sheet fragmentation
- non-galilean taylor-culick law governs sheet dynamics in unsteady fragmentation
- transverse instabilities of ascending planar jets formed by wave impacts on vertical walls
- on air-borne infection: study ii. droplets and droplet nuclei
- airborne contagion and air hygiene. an ecological study of droplet infections
- prediction of the size distribution of secondary ejected droplets by crown splashing of droplets impinging on a solid wall
- how far droplets can move in indoor environments - revisiting the wells evaporation-falling curve
- investigation of the flow-field in the upper respiratory system when wearing n95 filtering facepiece respirator
- airflow resistance and bio-filtering performance of carbon nanotube filters and current facepiece respirators

key: cord-353246-q9qpec7t authors: nijhuis, r. h. t.; guerendiain, d.; claas, e. c. j.; templeton, k. e.
title: comparison of eplex respiratory pathogen panel with laboratory-developed real-time pcr assays for detection of respiratory pathogens date: 2017-05-23 journal: j clin microbiol doi: 10.1128/jcm.00221-17 sha: doc_id: 353246 cord_uid: q9qpec7t

infections of the respiratory tract can be caused by a diversity of pathogens, both viral and bacterial. rapid microbiological diagnosis ensures appropriate antimicrobial therapy as well as effective implementation of isolation precautions. the eplex respiratory pathogen panel (rp panel) is a novel molecular assay, developed by genmark diagnostics, inc. (carlsbad, ca), performed within a single cartridge for the diagnosis of 25 respiratory pathogens (viral and bacterial). the objective of this study was to compare the performance of the rp panel with those of laboratory-developed real-time pcr assays, using a variety of previously collected clinical respiratory specimens. a total of 343 clinical specimens as well as 29 external quality assessment (eqa) specimens and 2 different middle east respiratory syndrome coronavirus isolates have been assessed in this study. the rp panel showed an agreement of 97.4% with the real-time pcr assays regarding the 464 pathogens found in the clinical specimens. all pathogens present in clinical samples and eqa samples with a threshold cycle (ct) value of <30 were detected correctly using the rp panel. the rp panel detected 17 additional pathogens, 7 of which could be confirmed by discrepant testing. in conclusion, this study shows excellent performance of the rp panel in comparison to real-time pcr assays for the detection of respiratory pathogens. the eplex system provided a large amount of useful diagnostic data within a short time frame, with minimal hands-on time, and can therefore potentially be used for rapid diagnostic sample-to-answer testing, in either a laboratory or a decentralized setting.
infections of the upper and lower respiratory tract can be caused by a diversity of pathogens, both viral and bacterial. community-acquired respiratory tract infections are a leading cause of hospitalization and are responsible for substantial morbidity and mortality, especially in infants, the elderly, and immunocompromised patients. the etiological agent in such infections differs greatly according to season and age of the patient, with the highest prevalences being those of respiratory syncytial virus (rsv) in children and influenza virus in adults. rapid microbiological diagnosis of a respiratory infection is important to ensure appropriate antimicrobial therapy and for the effective implementation of isolation precautions (1). in the last decade, many conventional diagnostic methods such as culture and antigen detection assays have been replaced by molecular assays for diagnosing respiratory tract infections. multiplex real-time pcr assays have been developed and implemented for routine diagnostic application, detecting a wide variety of pathogens (2-7). these assays have shown high sensitivity and specificity, but the limited number of fluorophores that can be used per reaction results in the need to run several real-time pcr assays to cover a broad range of relevant pathogens. commercial assays using multiplex ligation-dependent probe amplification (mlpa), a dual priming oligonucleotide system (dpo), or microarray technology were developed to overcome this problem and are able to detect up to 19 viruses simultaneously (8, 9). all the applications mentioned require nucleic acid extraction prior to amplification. for routine diagnostics, these methods are most suited to batch-wise testing, with a turnaround time of ~6 to 8 h. to decrease the time to result and enable random-access testing, syndromic diagnostic assays have been developed.
these assays combine nucleic acid extraction, amplification, and detection in a single cartridge per sample and are suitable for decentralized or even point-of-care testing (poct), with a time to result of <2 h. a novel rapid, cartridge-based diagnostic assay for the detection of respiratory tract pathogens using the eplex system (fig. 1) was developed by genmark diagnostics, inc. (carlsbad, ca). the eplex respiratory pathogen panel (rp panel) is based on electrowetting technology, a digital microfluidic technology by which droplets of sample and reagents can be moved efficiently within a network of contiguous electrodes in the eplex cartridge, enabling rapid thermal cycling for a short time to result. following nucleic acid extraction and amplification, detection and identification are performed using the esensor detection technology (fig. 2), as previously applied in the xt-8 system (10). in the current study, the performance of the syndromic rp panel was compared to those of laboratory-developed real-time pcr assays, using clinical specimens previously submitted for diagnosis of respiratory pathogens. the 323 positive clinical specimens contained a total of 464 respiratory pathogens as detected by laboratory-developed real-time pcr assays (table 1). as shown in table 2, the 57 non-nasopharyngeal (non-nps) specimens comprised 69 of the total 464 respiratory pathogens. testing all samples with the rp panel resulted in an overall agreement for 452 (97.4%) targets from 311 specimens, prior to discrepant analysis. of the specimens containing a single pathogen, the detected targets were concordant in 209/217 specimens. for samples with coinfections, the same pathogens could be identified in 77/81, 22/22, and 3/3 specimens in the case of 2, 3, and 4 pathogens present, respectively. eight of the 12 discordant targets (pcr+/rp−) had a positive result with threshold cycle (ct) values of >35 (fig. 3).
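the overall agreement figure can be reproduced from the raw counts given above (452 concordant targets out of 464):

```python
# reproduce the reported overall agreement from the raw counts
def percent_agreement(concordant, total):
    return 100.0 * concordant / total

print(f"{percent_agreement(452, 464):.1f}%")  # 452/464 -> 97.4%
```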
retesting with a third assay confirmed 10 of the 12 real-time pcr-positive targets: human bocavirus (hbov; n = 3), rhinovirus (rv; n = 2), parainfluenza virus type 2 (piv2; n = 1), human coronavirus (hcov) oc43 (n = 1), hcov 229e (n = 1), hcov hku1 (n = 1), and human metapneumovirus (hmpv; n = 1). the two unresolved pcr+/rp− results were two hmpv-positive samples (ct values of 33.2 and 38.3). the rp panel yielded a positive result in 17 specimens where the laboratory-developed test (ldt) remained negative (pcr−/rp+), including 15 additional pathogens previously undetected by ldt in the 323 positive specimens and one influenza a h1n1 2009 virus that had been detected only as influenza a virus by ldt (table 1). seven of these 15 additional targets could be confirmed, including three rv/enterovirus (ev) targets (all confirmed as rv), two piv4, and one each of hbov and hcov nl63. one of the selected negative samples tested positive for human adenovirus (hadv) in the rp panel but could not be confirmed by discrepant testing. all other negative specimens tested negative in the rp panel as well. both middle east respiratory syndrome coronavirus (mers-cov) isolates could be detected by the rp panel. by testing a 10-fold dilution series of both isolates, it was shown that mers-cov with a ct value of <30 in the laboratory-developed real-time pcr assay could be detected using the rp panel, while detection at a ct value of >30 was achievable but not reproducible in every instance. of the 12 specimens from the quality control for molecular diagnostics (qcmd) 2016 respiratory ii pilot external quality assessment (eqa) study panel, 10 were detected in full agreement with the content as reported by qcmd (table 3). the 2 false-negative specimens both contained hcov nl63, of which one was a coinfection in an hmpv-positive sample.
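the link between the 10-fold dilution series mentioned above and the ct ~30 detection limit follows from standard qpcr kinetics: at 100% amplification efficiency, each 10-fold dilution raises the ct by log₂(10) ≈ 3.32 cycles. a small sketch (the starting ct of 24 is an assumed illustration, not a value measured in the study):

```python
import math

def ct_after_dilutions(ct_start, n_tenfold, efficiency=1.0):
    """expected ct after n 10-fold dilutions; each dilution adds
    log(10)/log(1 + efficiency) cycles (~3.32 at 100% efficiency)."""
    return ct_start + n_tenfold * math.log(10) / math.log(1 + efficiency)

# assumed starting ct of 24 for the undiluted isolate (illustrative only):
# two 10-fold dilutions already push the expected ct past the ~30 limit
for n in range(4):
    print(f"{n} dilutions: ct ~ {ct_after_dilutions(24.0, n):.1f}")
```

this is why only the first few dilutions of each isolate fall in the reproducibly detectable ct < 30 range.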
both specimens had been tested with the laboratory-developed real-time pcr assay as well and were found positive for hcov nl63, both with ct values of 37.4. the qnostics evaluation panel consisted of 17 samples, including 15 different respiratory pathogens and one negative sample (table 3). the rp panel detected 15 of the specimens in agreement with the content, whereas hadv type 1 and chlamydophila pneumoniae were not detected. real-time pcr detection was performed to confirm the presence of the respective pathogens in these specimens, which were found positive for hadv (ct value of 31.4) and c. pneumoniae (ct value of 35.4).

figure 2: the hybridized molecule is exposed to another sequence-specific probe that is bound to a solid phase, a gold electrode (a). upon binding of the two molecules, the ferrocene comes into close proximity to the gold electrode, where an electron transfer occurs that can be measured using genmark's esensor technology on the eplex system (b).

the performance of the eplex rp panel was assessed by retrospective testing of 343 clinical respiratory specimens (obtained in 2009 to 2016) comprising five different specimen types. although the rp panel had been ce in vitro diagnostic (ce-ivd) cleared for detection of respiratory pathogens from nps swabs only, we included a range of alternative sample types that can be obtained and tested for respiratory pathogens in the diagnostic setting. by including a total of 57 non-nps respiratory specimens with different pathogens (table 2), it was shown that the rp panel was able to accurately detect the pathogen(s) in the different specimen types, as the assay showed 100% concordance with the ldt for these specimens. for sputum samples, preprocessing with sputasol was introduced after the initial 6 tested specimens, since 1 false-negative result was found, which was resolved on retesting with sputasol pretreatment.
further studies are needed to determine the required frequency of preprocessing of sputum samples before running the rp panel. specimens for inclusion in this study had previously been tested at two different sites, each using its own systems and validated assays. although the initial setups of the ldt assays were the same (11, 12), minor adjustments of the assays and the use of different pcr platforms may affect the performance of the ldts and are therefore a limitation of this study. comparison of the results from the rp panel with the results from the routine multiplex real-time pcr showed an agreement of 97.4% for the 464 pathogens tested. even for targets with a ct value of >40, the rp panel showed good detection rates with regard to lower viral or bacterial loads as well (fig. 3). although the performance of the rp panel appeared to be excellent using the specimens tested in this study, for piv4 (n = 2) and c. pneumoniae (n = 0) the number of clinical specimens that could be analyzed was too low for a proper assessment of the assay, which was a limitation of this study. in 14 different specimens, the rp panel identified 15 pathogens that had not been detected by routine testing (pcr−/rp+). in addition, one influenza a virus detected by ldt could be typed as influenza a h1n1 2009 virus by the rp panel. one of the selected negative samples was shown to contain hadv, while all other pcr−/rp+ targets were detected as copathogens alongside other positive targets in the samples. all the pcr−/rp+ targets were found in samples obtained from one institute and were subjected to discrepant analysis. a small number of ldt-negative specimens (n = 20) was included in this study, since the main objective was to determine the performance of the rp panel in detecting respiratory pathogens. although this is a limitation of the current study, we believe that this issue will be addressed extensively in upcoming prospective clinical studies.
owing to the lack of clinical specimens containing mers-cov, dilutions of two different culture isolates were tested in this study; dilutions with ct values of <30 in the laboratory-developed real-time pcr assay could be detected consistently. it should be noted that this real-time pcr assay has been developed for research use and has not yet been validated for clinical use. assessment of the rp panel using eqa samples from qcmd and qnostics showed results in line with those obtained from clinical specimens. a total of 4 targets included in the eqa samples could not be detected using the rp panel, showing ct values of >35 (n = 3) and 31.4 (n = 1) when tested by real-time pcr. the rp panel on the eplex system enables rapid testing and can be used as a diagnostic system in either a laboratory or a decentralized setting closer to the patient. the assay turned out to be rapid and straightforward to perform. compared to routine testing, the hands-on time of the rp panel was very low (<2 min), whereas the hands-on time of routine testing was about 30 to 45 min, depending on the nature and number of samples tested. the overall run time of the platforms was also in favor of the eplex system: it takes approximately 90 min for nucleic acid extraction, amplification, hybridization, and detection, whereas routine testing takes up to 2 h and 45 min using different systems and multiple real-time pcr assays in multiplex. an important advantage of the eplex system is the possibility of random-access testing, compared to the batch-wise testing of the current diagnostic real-time pcr approach. with a relatively short turnaround time and the potential to randomly load and run up to 24 specimens, the eplex system is very suitable for testing stat samples, which require immediate testing. in contrast to ldts, where ct values provide a quantitative indicator, the eplex system generates qualitative results only.
the ct value is dependent on many different factors such as sample type and course of infection and can therefore differ greatly, even within a single patient. hence, a qualitative result, i.e., identification of the pathogen, is the major factor for patient management. the costs of reagents per sample are relatively high for eplex compared to ldt. however, when taking into account the hands-on time of technicians and the clinical benefit of more rapid results, the assay will most likely be more cost-effective. studies evaluating a rapid diagnostic assay for respiratory pathogens, such as the filmarray respiratory panel (biofire diagnostics, salt lake city, ut), have already shown the impact of rapid diagnostics for respiratory pathogens, since it decreased the duration of antibiotic use, the length of hospitalization, and the time of isolation, delivering financial savings (13, 14). although the rp panel on the eplex system has the same potential, clinical studies remain to be conducted to confirm it. in conclusion, this study shows excellent performance of the genmark eplex rp panel in comparison to laboratory-developed real-time pcr assays for the detection of respiratory pathogens from multiple types of clinical specimens and eqa samples. the system provides a large amount of useful diagnostic data within a short time frame, with minimal hands-on time, helping to reduce laboratory costs for labor and deliver a faster result to the clinician in order to aid in appropriate antimicrobial therapy. therefore, this syndrome-based diagnostic assay could be used as rapid diagnostic testing in many different settings. clinical specimens selected for this study have previously been submitted and tested prospectively for diagnosis of respiratory infections at either the specialist virology center at the royal infirmary of edinburgh (rie) or the medical microbiology laboratory at the leiden university medical center (lumc).
specimens were selected using the laboratory information management system of the corresponding institute, without prior selection based on ct value. ethical approval for this study was granted by the medical ethical committee provided that anonymized samples were used. diagnostic testing by lab-developed tests. in short, the routine testing method consisted of total nucleic acid extraction by the nuclisens easymag system (~45 min; biomérieux, basingstoke, united kingdom) or the magna pure lc system (~45 min to 1 h 30 min depending on the number of samples; roche diagnostics, almere, the netherlands), at the rie and the lumc, respectively. an input volume of 200 µl per specimen and an elution volume of 100 µl were used for all specimen types. amplification and detection were performed by real-time pcr using the abi 7500 fast thermocycler (1 h; applied biosystems, paisley, united kingdom) or the bio-rad cfx96 thermocycler (~1 h 40 min; bio-rad, veenendaal, the netherlands), at the rie and the lumc, respectively. real-time pcr assays were tested with updated versions (where needed) of primers and probes as described previously (11, 12). rp panel. original clinical specimens were retrieved from storage at -70°c and thawed at room temperature. after vortexing, 200 µl of the specimen was pipetted into the sample delivery device with a buffer provided by the manufacturer. for 16 out of 21 sputum samples, preprocessing was done using sputasol (oxoid, basingstoke, united kingdom) according to the manufacturer's procedures (with the exception of washing the sputum) and incubation at 37°c for 15 min on a shaker at 500 rpm. after gentle mixing of the specimen and buffer in the sample delivery device, the mixture was dispensed into the cartridge using the sample delivery port, which was subsequently closed by sealing with a cap.
after scanning of the barcode of the eplex rp panel cartridge and the barcode of the corresponding sample, the cartridge was inserted into an available bay of the eplex system. the test then started automatically and ran for approximately 90 min. a single cartridge of the rp panel is able to detect 25 respiratory pathogens, including differentiation of subtypes of influenza a virus, parainfluenza virus, and respiratory syncytial virus (rsv) (table 1). internal controls for extraction, bead delivery, and movement within the cartridge are present, as well as those for amplification, digestion, and hybridization of dna and rna targets. for every specimen tested, a sample detection report was created, comprising the results for all targets and internal controls. results of the targets are reported as positive or not detected. if an internal control fails, this will be noted on the detection report and samples should be retested with a new cartridge. discrepant testing. in the case of discrepant results, the discordant sample was retested either with a new eplex cartridge if the real-time pcr was positive and the rp panel was negative (pcr+/rp-) or with the laboratory-developed real-time pcr assay in the case of pcr-/rp+ results. for unresolved discrepancies, additional testing with a third pcr assay (different primers and probe) was performed for final resolution.
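the retest routing in the discrepant-testing paragraph is a simple two-step decision rule. a hedged sketch of that logic follows; the function names and return strings are mine for illustration, only the routing itself comes from the text:

```python
# sketch of the discrepant-testing decision rule described above.
# function names and message strings are illustrative, not the authors' api.

def discrepancy_action(pcr_positive: bool, rp_positive: bool) -> str:
    """choose the first retest route for a pcr / rp panel result pair."""
    if pcr_positive == rp_positive:
        return "concordant: no retest needed"
    if pcr_positive and not rp_positive:          # pcr+/rp-
        return "retest with a new eplex cartridge"
    return "retest with the laboratory-developed real-time pcr"  # pcr-/rp+

def resolve(pcr_positive: bool, rp_positive: bool, retest_agrees: bool) -> str:
    """escalate to a third pcr assay (different primers and probe)
    when the first retest does not resolve the discrepancy."""
    first = discrepancy_action(pcr_positive, rp_positive)
    if first.startswith("concordant") or retest_agrees:
        return first
    return "unresolved: run third pcr assay (different primers/probe)"

print(discrepancy_action(True, False))
```

this mirrors the paper's workflow: discordant pcr+/rp- samples get a fresh cartridge, pcr-/rp+ samples go back to the ldt, and anything still discordant goes to an independent third assay.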
laboratory diagnosis of pneumonia in the molecular age
epidemiology and clinical presentations of the four human coronaviruses 229e, hku1, nl63, and oc43 detected over 3 years using a novel multiplex real-time pcr method
diagnosis of human metapneumovirus and rhinovirus in patients with respiratory tract infections by an internally controlled multiplex real-time rna pcr
rapid and sensitive method using multiplex real-time pcr for diagnosis of infections by influenza a and influenza b viruses, respiratory syncytial virus, and parainfluenza viruses 1, 2, 3, and 4
comparison and evaluation of real-time pcr, real-time nucleic acid sequence-based amplification, conventional pcr, and serology for diagnosis of mycoplasma pneumoniae
development and clinical evaluation of an internally controlled, single-tube multiplex real-time pcr assay for detection of legionella pneumophila and other legionella species
improved diagnosis of the etiology of community-acquired pneumonia with real-time polymerase chain reaction
comparison of the luminex respiratory virus panel fast assay with in-house real-time pcr for respiratory viral infection diagnosis
comparison of two commercial molecular assays for simultaneous detection of respiratory viruses in clinical samples using two automatic electrophoresis detection systems
comparison of the genmark diagnostics esensor respiratory viral panel to real-time pcr for detection of respiratory viruses in children
performance of different mono- and multiplex nucleic acid amplification tests on a multipathogen external quality assessment panel
evaluation of real-time pcr for detection of and discrimination between bordetella pertussis, bordetella parapertussis, and bordetella holmesii for clinical diagnosis
point-of-impact testing in the emergency department: rapid diagnostics for respiratory viral infections
impact of a rapid respiratory panel test on patient outcomes

we thank yvette van aarle, mario bussel, wilfred rijnsburger, and tom vreeswijk
of the lumc and laura mackenzie of the edinburgh rie for performing the assays including (discrepant) diagnostics. also, we thank colleagues from the global emerging infections surveillance and response system (geis) of the u.s. naval medical research unit 3 (namru-3; cairo, egypt) for providing us the jordan/n3 isolate and l. enjuanes from the centro nacional de biotecnologia (cnb-csic; madrid, spain) for providing the recombinant middle east respiratory syndrome coronavirus isolate emc/2012. none of the authors have conflicts of interest to declare. genmark diagnostics, inc., provided kits and reagents to perform this study and was responsible for study design. genmark diagnostics, inc., did not have any influence on the content of the submitted manuscript.

key: cord-302185-pnw3xiun
authors: bodecka, marta; nowakowska, iwona; zajenkowska, anna; rajchert, joanna; kaźmierczak, izabela; jelonkiewicz, irena
title: gender as a moderator between present-hedonistic time perspective and depressive symptoms or stress during covid-19 lock-down
date: 2021-01-01
journal: pers individ dif
doi: 10.1016/j.paid.2020.110395
sha:
doc_id: 302185
cord_uid: pnw3xiun

although numerous studies have addressed the impact of the covid-19 lock-downs on psychological distress, scarce data is available relating to the role of present-hedonistic (ph) time perspective and gender differences in the development of depressive symptoms and stress during the period of strict social distancing. we hypothesized that gender would moderate the relationship between ph and depressiveness or stress levels, such that ph would negatively correlate with psychological distress in women but correlate positively in men. the present study was online and questionnaire-based. n = 230 participants aged 15–73 from the general population took part in the study.
the results of moderation analysis allowed for full acceptance of the hypothesis for depression as a factor, but for stress the hypothesis was only partially confirmed, since the relationship between ph time perspective and stress was not significant for men (although it was positive, as expected). the findings are pioneering in terms of including ph time perspective in predicting psychological distress during the covid-19 lock-down and have potentially significant implications for practicing clinicians, who could include the development of more adaptive time perspectives and balance them in their therapeutic work with people experiencing lock-down-related distress. in march 2020, the world health organization announced that covid-19 constituted a global pandemic (mahase, 2020). the virus then proliferated worldwide and government actions to mitigate its spread have significantly affected various areas of life, such as healthcare, transportation, freedom of movement and daily activity (simpson & katsanis, 2020; zajenkowski, jonason, leniarska, & kozakiewicz, 2020). in poland, public health safety measures were initiated in january 2020, followed by declaration of a state of epidemic emergency and imposition of lock-down measures on march 14th and the declaration of a state of epidemic from march 20th (pinkas et al., 2020). lock-down and social isolation, although quite effective in slowing down the pace of the epidemic, have been shown to impact emotional and mental health (de quervain et al., 2020; li et al., 2020; shigemura & kurosawa, 2020). according to these reports, one of the most significant adverse consequences of the changes in everyday life due to the epidemic is an elevation of stress and depressive symptoms in the population. for instance, initial results of the swiss corona stress study (de quervain et al., 2020) suggested that there was a 50% increase in stress levels during the lock-down compared to the period preceding it.
changes in stress levels were strongly associated with changes in depressive symptoms, as 57% of participants reported an increase in depressive symptoms, which is not unexpected considering the strong link between stressful life events and depression (hammen, 2005). interestingly, approximately 25% of the participants reported lower stress levels during lock-down than before. the authors of the report suggest that, in this group, the decrease might have been due to a reduction of stressors or having more time for recovery from stress during lock-down than under non-lock-down circumstances. accordingly, the level of experienced stress during lock-down and the impact it may have on mental health may be an individual matter. one promising avenue for investigation is found in gender differences and their potential associations with perceived stress and depressive symptoms during lockdown. it is worth noting that a greater number of depression diagnoses are observed in women than men (essau, lewinsohn, seeley, & sasagawa, 2010; van de velde, huijts, bracke, & bambra, 2013). women have higher incidence rates of clinical diagnosis of dysthymia, recurrent brief depression and minor depression (for a review see angst et al., 2002), as well as major depressive disorder and its chronic course (essau et al., 2010). women were also found to report twice as many depressive symptoms as men (girgus & yang, 2015), were more likely to admit being under stress and were more likely to develop depressive symptoms after a stressful event (sherrill et al., 1997). ruminative tendencies, chronic strain and low mastery were also found to be more common in women and mediate the gender difference in depressive symptoms (nolen-hoeksema, larson, & grayson, 1999). this gender difference might also stem from hormonal fluctuations (for a review of psychosocial factors in depression across genders see leach, christensen, mackinnon, windsor, & butterworth, 2008).
additionally, social roles, among other determinants, have been acknowledged as potential risk factors for developing depression in both genders (piccinelli & wilkinson, 2000) . gender schemas (martin & halverson jr, 1981) may be connected to how women and men attribute the causes of their depression onset. for instance, physical illnesses or problems were the most important precipitants of depression for both genders but especially for men (angst et al., 2002) . for women, problems in relationships and illness or death in the family were identified as other significant causes, whereas, for men, additional causes included problems at work and unemployment. furthermore, the question of whether elevated levels of depressive symptoms in women might be a consequence of gender inequality has been a topic of wide discussion (salk et al., 2017) . such an idea is supported by the association of female social roles with lower role overload and lack of choice (szpitalak & prochwicz, 2013; van de velde et al., 2013) and the well-established linkage between feelings of powerlessness, lack of control in one's own life and depression (mirowsky & ross, 2003) . despite a climate of social change in gender roles (eagly, nater, miller, kaufmann, & sczesny, 2020) , a number of cross-cultural similarities in the gender division of labor has been observed in advanced industrial societies (pérez & tavits, 2019) . women were found to typically invest more time in raising children, preparing food and caring for home. in contrast, men were found to typically invest more time in extra-domestic tasks. the context of lock-down creates the situation of needing to remain at home, the constant presence of all family members at the home, an increased importance of female gender schema-related activities, and either a shifting of extra-domestic activities to the home space or reduction of these activities. 
therefore, typically, women during lock-down might be encouraged to play more gender schema-congruent roles in the course of everyday lock-down life, in contrast to men. the remote work lifestyle, as well as fear of job loss due to the economic crisis resulting from the epidemic, might be especially gender schema-threatening for men and contribute to depressive symptoms. additionally, a lock-down situation shifts attention to everyday activities and the uncertainty of the present moment (versluis, van asselt, & kim, 2019). as no one could predict the duration of lock-down and the covid-19 epidemic, time perspective (at an individual consideration) may be a particularly noteworthy factor in explaining adaptations to the adverse situation. time perspective is generally defined as an "often unconscious process whereby the continual flows of personal and social experiences are assigned to temporal categories or time frames that help to give order, coherence and meaning to those events" (zimbardo & boyd, 1999, p. 1271). a habitual bias to process time in a certain manner might become a relatively stable individual difference, formed through learning processes and cultural influences (jochemczyk, pietrzak, buczkowski, stolarski, & markiewicz, 2017). zimbardo and boyd (1999, 2008) in their seminal works distinguished five time perspectives: past-negative, past-positive, present-hedonistic (ph), present-fatalistic and future. a tendency to focus on particular time perspectives, especially past-negative and present-fatalistic, might be predictive of a higher level of depressive symptoms, whereas past-positive (anagnostopoulos & griva, 2012; zimbardo & boyd, 1999) appeared to protect individuals from elevated levels of depressive symptoms. in general, people rating high on past-positive and ph time perspectives also exhibit increased well-being and life satisfaction (stolarski, bitner, & zimbardo, 2011; zhang & howell, 2011).
additionally, they are happier, in contrast with those scoring higher in the past-negative time perspective, who experienced less happiness (drake, duncan, sutherland, abernethy, & henry, 2008). however, compared to other time perspectives, ph time perspective was the most robust predictor of current emotional states (stolarski, matthews, postek, zimbardo, & bitner, 2014). hedonism, from which the name for the ph time perspective is taken, is defined as openness to pleasurable experience (veenhoven, 2003), and is associated with lower levels of depressive symptoms (disabato, kashdan, short, & jarden, 2017), as well as with mania in bipolar disorder (gruber, cunningham, kirkland, & hay, 2012). therefore, the ph time perspective is especially interesting for investigating depressive and stress symptoms during covid-19 lockdown. the main aim of the current study is to contribute to the knowledge about potential gender differences in the linkages between ph time perspective and depressive symptoms or perceived stress during covid-19 lock-down. personal characteristics, including time perspectives, are related to how people experience social events. people high on ph are habitually oriented to pleasures of the present and excitement, with little consideration of future consequences (zimbardo & boyd, 1999). strong social situations "providing salient cues to guide behavior and having a high degree of structure and definition" (snyder & ickes, 1985, p. 904) can be more important in predicting certain behaviors or experiences than personality traits (sherman, nave, & funder, 2012). an epidemic, considered to be a strong social situation, can increase psychological distress, especially depressiveness and stress level. it is possible that, due to strict social distancing, the impossibility of realizing most needs outside of home and, hence, the blockage of pleasant stimuli could predict depressiveness and stress experience.
moreover, lock-down compels the discounting of immediate rewards for the sake of one's own health and that of others, which might be difficult for ph-oriented people in general (jochemczyk et al., 2017; stolarski et al., 2011). therefore, one might suppose that people who tend to fulfill their hedonistic needs outside of their homes might experience greater lock-down distress than people who tend to take pleasure from home- and family-oriented activities. considering the gender schema theories, it is possible that the lockdown situation could prove more depressing for men. according to such theories, men might be inclined toward valuing hedonistic extra-domestic activities (compared to typical domestic activities), which were significantly limited due to the lock-down. moreover, although in general women tend to present higher levels of depression, the factors leading to this discrepancy are distinct for women and men. for instance, men more frequently attributed the onset of their depression to current life events, such as unemployment or problems at work, than women did (angst et al., 2002). the lock-down was not only linked to shifting work life to homes but sometimes caused employment uncertainty and financial insecurity. based on the above-mentioned theoretical assumptions, our hypothesis is that gender would moderate the relationship between ph and depressiveness or stress levels, such that ph would be negatively related with psychological distress in women but positively correlated with psychological distress in men. we recruited 230 participants (141 women, 89 men) online. power analysis conducted in g*power 3.1 (faul, erdfelder, buchner, & lang, 2009; faul, erdfelder, lang, & buchner, 2007) indicated that this sample size would allow for the detection of a small effect of partial r² increase of 0.05 (alpha = 0.05) with a power of 0.81. the participants were not reimbursed.
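the g*power computation reported above can be approximated by hand. the sketch below is a large-sample normal approximation for detecting an r² increase from a single added term (e.g., the ph × gender interaction), not g*power's exact noncentral-f routine, and the assumed model details may differ from the authors', so the result need not match the reported 0.81 exactly:

```python
# illustrative post hoc power approximation for an r-squared increase of 0.05
# when one term is added to a regression model. this is a stdlib-only normal
# approximation under assumed settings, not g*power's exact computation.
from math import sqrt
from statistics import NormalDist

def approx_power_r2_increase(n, delta_r2, alpha=0.05):
    f2 = delta_r2 / (1.0 - delta_r2)      # cohen's f-squared effect size
    lam = f2 * n                           # noncentrality parameter
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)      # two-sided critical value (df1 = 1)
    # power of the equivalent large-sample z test with noncentrality sqrt(lam)
    return z.cdf(sqrt(lam) - z_crit) + z.cdf(-sqrt(lam) - z_crit)

power = approx_power_r2_increase(n=230, delta_r2=0.05)
print(round(power, 2))
```

under these assumptions the approximation yields a somewhat higher power than the 0.81 quoted in the paper, which plausibly reflects different numerator degrees of freedom or other settings in the original g*power run.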
all participants were between the ages of 15 and 73 years (m = 30.37, sd = 10.21). only 4 participants had not graduated high school, 31 participants (13.5%) declared secondary education, 63 participants (27.4%) were students and 130 individuals reported higher education (56.5%). the majority of participants lived in cities with either less than 100,000 inhabitants (n = 45, 19.6%) or more than 100,000 (n = 124, 53.9%), while the other participants lived in the countryside (n = 61, 26.5%). participants were married (n = 71, 30.9%), in a partnership (n = 53, 23%), single (n = 97, 42.2%), divorced (n = 8) or widowed (n = 1). participants mostly lived with other people (n = 204, 88.7%), including with family (children, spouse, parents and other family members), with romantic partners or with friends. 24 individuals (10%) declared that they were currently in psychotherapy. participants were recruited through social media, primarily facebook, through paid advertisement, a post about the study on the lab profile and on private profiles using the snowball method. the study conformed to the declaration of helsinki (world medical association, 2001), and all participants provided informed consent to take part in the study. the respondents were informed that the purpose of the study was to examine "how people deal with the current situation, how they feel, what they think", that the survey was fully anonymous and that they could discontinue at any time. the average time for survey completion was approximately 15 min. depressive symptoms. a 9-item patient health questionnaire (phq-9) was used to assess severity of depressive symptoms. its items correspond to criteria for diagnosis of dsm-iv and dsm-v depression symptoms (kroenke & spitzer, 2002; mitchell, frayne, wyatt, goller, & mccord, 2019) and enabled grading of depressive symptom severity.
it contains questions about psychological well-being within the last two weeks (e.g., how often have you been bothered by little interest or pleasure in doing things?), including a question related to hurting oneself (i.e., how often have you been bothered by thoughts that you would be better off dead or of hurting yourself in some way?). in the current study, the phq-9 provided a severity measure with scores ranging from 0 to 27; each of the nine items can be scored from 0 ("not at all") to 3 ("nearly every day"). depression severity was defined by the scale's authors as: 1-4 none, 5-9 mild, 10-14 moderate, 15-19 moderately severe and 20-27 severe. the phq-9 was found to be a reliable measure in our study (α = 0.86). perceived stress. the perceived stress scale (pss) was applied to measure levels of stress (cohen et al., 1983); it measures the degree to which situations in one's life are considered stressful. the scale consists of 10 items (four positively stated, e.g., in the last month, how often have you felt that things were going your way? and six negatively stated, e.g., in the last month, how often have you been upset because of something that happened unexpectedly?) that can be scored from 0 ("never") to 4 ("very often"). the pss scores are obtained by reversing the responses to the positively stated items and then summing across all the scale items. the cronbach's alpha coefficient in the present research was α = 0.91. time perspectives. the zimbardo time perspective inventory (ztpi) was used to measure the ph time perspective (zimbardo & boyd, 1999), alongside the other ztpi subscales, including future (13 items, e.g., when i want to achieve something, i set goals and consider specific means for reaching those goals). the participants were asked to score on a five-point likert scale the degree to which each statement referred to him/her (1 = very untrue, 5 = very true), and some items were reverse coded.
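the two scoring schemes described above are straightforward to sketch. the severity cut-offs follow the bands quoted in the text; the pss item indices used for reversal are an assumption for illustration, not taken from the paper:

```python
# sketch of phq-9 and pss-10 scoring as described in the text.
# responses below are illustrative; only the band cut-offs come from the text.

def phq9_severity(item_scores):
    """sum nine 0-3 items and map the total to a severity band."""
    total = sum(item_scores)
    if total <= 4:
        return total, "none"
    if total <= 9:
        return total, "mild"
    if total <= 14:
        return total, "moderate"
    if total <= 19:
        return total, "moderately severe"
    return total, "severe"

def pss_score(item_scores, positive_items=(3, 4, 6, 7)):
    """pss-10 total: reverse the four positively stated 0-4 items, then sum.
    the zero-based indices of the positive items are an assumption here."""
    return sum(4 - s if i in positive_items else s
               for i, s in enumerate(item_scores))

print(phq9_severity([1, 1, 2, 1, 0, 2, 1, 2, 1]))  # (11, 'moderate')
```

a respondent answering "several days" to most phq-9 items would thus land in the moderate band, matching the severity intervals listed above.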
the level of a specific time perspective was obtained by summing the item results for each scale. in the current study, the cronbach alpha for the ztpi ph subscale was α = 0.77. reliability of the ztpi subscales which were not of interest in the current study is presented in appendix table a.1. all analyses were conducted using ibm spss 25.0.0.2 for windows. our main hypotheses were tested employing regression analysis with bootstrapping methods using andrew f. hayes' process 3.2.01 macro (hayes, 2018). frequency analysis of the results from the phq-9 indicated that 40 (17.4%) participants had no depressive symptoms, whereas the rest of the sample displayed mild (n = 85, 37.0%), moderate (n = 53, 23.0%), moderately severe (n = 37, 16.1%) or severe (n = 15, 6.5%) depressive symptoms. next, we investigated descriptive statistics, performed correlation analysis and tested for gender differences in time perspectives, stress and depression scores. the results of these analyses for the variables of interest of the current study are presented in table 1. correlations and descriptive statistics for all study variables, including ztpi subscales other than ph, are presented in the appendix table a.2. it should also be noted that depression scores were not significantly associated with ph. stress was negatively, although weakly, correlated with ph. women and men did not differ in ph. results also indicated that women declared higher perceived stress and more intensive depression symptoms. mean depression scores for women fell into the interval for a moderate level of depressive symptoms, while the mean for men fell within the mild depressive symptoms level. next, we tested our main hypothesis using regression models with a bootstrapping method, with depressive symptoms and stress as dependent variables in two separate models.
ph was included as the predictor and gender was included as a moderator in both models. coefficients with 95% ci for both models are presented in table 2. data in table 2 suggest that both models predicted a significant amount of variance in the dependent variables. the results also showed that ph was negatively related to depression and to stress. women were coded 1 and men were coded 2; thus, a negative relationship indicated that women were higher on stress and depression scores. in both models, the interactions were also significant. the interpretation of the ph and gender interaction with simple slopes showed that the relationship between ph perspective and depression scores was significant and negative for women, whereas for men it was significant and positive. the relationship between ph and stress was significant and negative in women, while this relationship was not significant in men. the results allow us to accept the hypothesis in the case of depression but, in the case of stress, the hypothesis was only partially confirmed, since the relationship between ph and stress was not significant for men (although it was positive, as expected). the relationship between ph and depression scores in men and women is presented in fig. 1 and the relationship between ph and stress in men and women is presented in fig. 2. the aim of the study was to explore gender differences in the relationships between ph and depressive symptoms or perceived stress during the specific context of lock-down due to the covid-19 epidemic. we tested two independent models predicting depression and stress. both of these variables were strongly related, which is in line with previous studies (see hammen, 2005). the majority of participants displayed at least mild depressive symptoms (82.6%). the study was performed during the strict social distancing period in poland, when the risk of distress connected to being apart from other people might have been heightened.
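the moderation model reported above is, at its core, an ols regression with a ph × gender interaction term, from which simple slopes per gender are derived. a toy sketch on simulated data follows; the data are not the study's, and plain numpy least squares stands in for the process macro the authors used:

```python
# toy moderation sketch: distress regressed on ph, gender, and their
# interaction, with simple slopes per gender. simulated data, not the study's.
import numpy as np

rng = np.random.default_rng(0)
n = 500
ph = rng.normal(0, 1, n)
gender = rng.integers(1, 3, n).astype(float)  # 1 = women, 2 = men (paper's coding)

# simulate the reported pattern: negative ph slope for women, positive for men
true_slope = np.where(gender == 1, -0.5, 0.4)
distress = true_slope * ph + rng.normal(0, 1, n)

# design matrix: intercept, ph, gender, ph x gender interaction
X = np.column_stack([np.ones(n), ph, gender, ph * gender])
b, *_ = np.linalg.lstsq(X, distress, rcond=None)
b0, b_ph, b_gender, b_int = b

# simple slope of ph at each gender code: b_ph + b_int * gender
slope_women = b_ph + b_int * 1
slope_men = b_ph + b_int * 2
print(round(slope_women, 2), round(slope_men, 2))
```

with a significant interaction coefficient, probing it at each moderator value (here the two gender codes) recovers the opposite-sign simple slopes, which is exactly the pattern the authors describe for depression.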
a study conducted on a representative sample during the covid lock-down suggested that depressive symptoms were twice as high as before the measure was introduced (gambin et al., 2020). in our study, gender moderated the relationship between ph and depressiveness, such that women who scored higher on ph presented with fewer depressive symptoms than women scoring lower on this time perspective. for men, the relationship was inverse: men scoring higher on ph displayed more depressive symptoms than men with lower ph scores. interestingly, this was observed even though men and women did not differ in their levels of ph. although a hedonistic view of the present was found to be related to high positive affect (desmyter & de raedt, 2012), other research suggests a significant positive association between ph and depression and anxiety (davies & filippopoulos, 2015). based on these inconsistencies, we can assume that under stress, ph might lead to the development of both adaptive and maladaptive forms of coping, especially emotion-focused forms (blomgren, svahn, åström, & rönnlund, 2016). our results suggest that the way in which men and women actualize this time perspective may be different. as a consequence, ph-oriented women were able to succeed in the lock-down circumstances, while ph-oriented men were not (blomgren et al., 2016). vandello and cohen (2008) conducted five studies to show that masculinity, as opposed to femininity, is a much more uncertain and vulnerable state, dependent on constant external stimulation and social acknowledgement in interactions with others. it is possible that ph-oriented men are likely to meet their hedonistic needs in contact with other people outside of their homes, and lockdown might have been a circumstance that restricted opportunities to maintain such contacts. ph was also found to be negatively related to stress levels only in women.
in men, the relationship between these variables was not significant. one of the crucial stressors during lock-down might have been a fear of viral infection. women, although generally found to be more concerned about their health than men (thompson et al., 2016), when high on ph might have been concentrated on the present and oriented at pleasure, so that they found pathways for reducing their stress levels. it should be noted that negative life events, such as an epidemic, may not always result in a decrease in well-being or deterioration in mental health but can lead to effective coping with the adversities and to sustained health (luhmann & eid, 2009). despite inconsistent findings (see eisenbarth, 2019), some data has shown that men use avoidance (e.g., sigmon, stanton, & snyder, 1995) and drugs or alcohol to cope (e.g., kieffer et al., 2006) more often than women. women are more likely than men to seek emotional support across a range of stressors (tamres, janicki, & helgeson, 2002). it is possible that men high on ph are particularly willing to distract themselves from thinking about the danger of viral infection, in contrast to women, who might seek more social contact and support from close others. however, these are just speculations and further studies are needed to investigate coping strategies during the recent pandemic in both men and women with high ph. these findings have potentially significant implications for practicing clinicians. given that time perspectives are based on learning processes, clinicians can utilize them to enhance the development of adaptive time perspectives and balance them, in order to enhance well-being and reduce lock-down-related distress. several limitations necessitate a degree of care when interpreting these findings. the sample consisted mainly of caucasian participants from a developed country.
it is possible that in more embedded cultures, where people live with several generations of relatives in the same household, our result would not be valid. it should also be noted that the forms depressive symptoms take on can differ between genders (martin et al., 2013). including a wider variety of measures of distress might result in a more accurate estimation of depression prevalence, especially in men. the study was cross-sectional, which makes it impossible to draw causal conclusions about the links between variables. furthermore, it is advisable to continue searching for other indicators of depression and stress during lock-down, such as feelings of loneliness or perceived social support.

the involvement of mb and az in preparation of the manuscript was supported by the national science centre of poland, grant no. umo-2017/26/d/hs6/00258 awarded to az.

references:
exploring time perspective in greek young adults: validation of the zimbardo time perspective inventory and relationships with mental health indicators
gender differences in depression. epidemiological findings from the european depres i and ii studies
coping strategies in late adolescence: relationships to parental attachment and time perspective
a global measure of perceived stress
changes in psychological time perspective during residential addiction treatment: a mixed-methods study
the swiss corona stress study
the relationship between time perspective and subjective well-being of older adults
what predicts positive life events that influence the course of depression? a longitudinal examination of gratitude and meaning in life
time perspective and correlates of wellbeing
gender stereotypes have changed: a cross-temporal meta-analysis of u.s. public opinion polls from
coping with stress: gender differences among college students
gender differences in the developmental course of depression
statistical power analyses using g*power 3.1: tests for correlation and regression analyses
g*power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences
generalized anxiety and depressive symptoms in various age groups during the covid-19 lockdown. specific predictors and differences in symptoms severity
gender and depression
feeling stuck in the present? mania proneness and history associated with present-oriented time perspective
stress and depression
introduction to mediation, moderation, and conditional process analysis: a regression-based approach
you only live once: present-hedonistic time perspective predicts risk propensity
test and study worry and emotionality in the prediction of college students' reasons for drinking: an exploratory investigation
the phq-9: a new depression diagnostic and severity measure
gender differences in depression and anxiety across the adult lifespan: the role of psychosocial mediators
vicarious traumatization in the general public, members, and non-members of medical teams aiding in covid-19 control
does it really feel the same? changes in life satisfaction following repeated life events
covid-19: who declares pandemic because of "alarming levels" of spread, severity, and inaction
a schematic processing model of sex typing and stereotyping in children
the experience of symptoms of depression in men vs women: analysis of the national comorbidity survey replication
social causes of psychological distress
comparing the phq-9 to the multidimensional behavioral health screen in predicting depression-related symptomatology in a primary medical care sample
explaining the gender difference in depressive symptoms
language influences public attitudes toward gender equality
gender differences in depression: critical review
public health interventions to mitigate early spread of sars-cov-2 in poland
gender differences in depression in representative national samples: meta-analyses of diagnoses and symptoms
properties of persons and situations related to overall and distinctive personality-behavior congruence
is life stress more likely to provoke depressive episodes in women than in men?
mental health impact of the covid-19 pandemic in japan
gender differences in coping: a further test of socialization and role constraint theories
the immunological case for staying active during the covid-19 pandemic
personality and social behavior
time perspective, emotional intelligence and discounting of delayed awards
how we feel is a matter of time: relationships between time perspective and mood
psychological gender in clinical depression. preliminary study
sex differences in coping behavior: a meta-analytic review and an examination of relative coping
the influence of gender and other patient characteristics on health care-seeking behaviour: a qualicopc study
macro-level gender equality and depression in men and women in europe
culture, gender, and men's intimate partner violence. social and personality psychology compass
hedonism and happiness
the multilevel regulation of complex policy problems: uncertainty and the swine flu pandemic
world medical association declaration of helsinki. ethical principles for medical research involving human subjects
who complies with the restrictions to reduce the spread of covid-19?: personality and perceptions of the covid-19 situation
do time perspectives predict unique variance in life satisfaction beyond personality traits?
putting time in perspective: a valid reliable individual differences metric
the time paradox

key: cord-268826-m3ikl4da
authors: goh, hoe-han; bourne, philip e.
title: ten simple rules for researchers while in isolation from a pandemic
date: 2020-06-25
journal: plos comput biol
doi: 10.1371/journal.pcbi.1007946
sha:
doc_id: 268826
cord_uid: m3ikl4da

the scale and intensity of the coronavirus disease 2019 (covid-19) worldwide pandemic is unprecedented in all our lifetimes. it has changed our lifestyles and our workstyles, in a manner and to a degree that is likely to persist for some time. here we offer some guidance, in the familiar ten simple rules format, for how to navigate a stressful situation, considering it realistically as both a curse and an opportunity. this is written for all of us involved in scientific research: graduate student, postdoc, academic, staff scientist, in academia, government, or industry. each such person has so much to contribute in a time of need but is simultaneously also a member of a worldwide population under threat. what can one do in a time like this? this is so fundamental that we have taken the liberty of calling it rule 0. as one reviewer put it, "put on your own oxygen mask before taking care of others." what use are you to your loved ones and our society at large if your mental and physical state is less than optimum or, worse still, if you do not survive?
you may be a researcher who studies the basic biochemistry of infectious diseases or who performs pandemic modeling; we need all your dedication and expertise as we collectively work through this threat. more likely, your scientific training is in a variety of other fields. use that training. review the literature and the public data being produced, understand, and explain to others the need for seemingly onerous measures such as social distancing; you can use past pandemics as your guide [1]. take every opportunity to use your scientific understanding to explain the importance of protective clothing, isolation, and appropriate hygiene. such information can be read anywhere, but so can false or misleading claims. use your scientific knowledge to influence others as best you can with facts and solid arguments, never shying away from clearly indicating what we do not know (that's just as much a part of science as what we do know). science, seriously undervalued in many parts of the world in recent years, has almost literally overnight become what the general population craves to understand, so as to grasp their predicament. use your training and knowledge wisely and effectively, allowing all to benefit. do so via social media, such as facebook, twitter, instagram, and linkedin, to disseminate useful information to the general public and reinforce #stayathome.

you do not need to be a leader to show leadership. we can all lead in our own way. whether it is as a group leader providing strength, comfort, compassion, flexibility, and direction to your team or as a graduate student providing the same to fellow graduate students or undergraduates, it all counts. it counts for compassion shown to those less fortunate. researchers are one of the least likely groups to be seriously disadvantaged. remember and act on that. leadership comes in many forms, but one that might help here is the setting of new goals in times of unsettled circumstances. labs and individuals have goals.
working towards those goals in uncertain times provides a sense of purpose and accomplishment, which seems especially important now. goals can be in various time spans, from daily to the time span of the pandemic and beyond (months, years). goal setting is particularly difficult for experimental work when the lab is closed and equally difficult for young researchers when a thesis or paper needs to be completed. try to be creative with goals that are achievable and that offset the sense of loss.

another form of leadership is to appropriately compliment those who support your research, in whatever way. this is important at the best of times, but particularly so at the worst of times. show leadership in understanding. as researchers, for all that is lost, it is mostly trivial relative to what many have lost by way of family members, jobs, and a way of life. look to lead efforts that address this imbalance, either through university or other programs that reach out to the community. you have skills that can help others; employ them. if you can't find ways, then at least do more for your profession: tutor others remotely, review more, etc.

by institution, we mean everything from the government (federal, state, and local) to your workplace to your individual laboratory. enormous effort has likely gone into contingency planning, albeit not exactly for the scenario that we now face. follow those plans, even if they have been made late relative to the spread of the virus. those plans include how best to work remotely and knowing your classification tier (your level of "essentiality" within your organization; are you central or not to the operational capacity and wellbeing of the organization?). essentiality can relate to the care and maintenance of laboratory animals, equipment, or materials for which safety is paramount (e.g., radioisotopes), your students, etc. act according to that essentiality. each of us occupies a unique niche.
if a contingency of operation plan (coop), guideline, or something similar exists, study it and follow directives from your organization, as your safety and wellbeing are their top priority. (if you're unsure if one exists, ask your immediate supervisor.) be aware of support groups and other mechanisms to help you through these difficult times. finally, provide feedback on the guidance you are receiving, good or bad-providing such guidance is part of an agile process. this is particularly important when the plan is not clear or does not help you as a stakeholder. slack channels are a good option for you to try and get involved in contingency planning. how science is conducted has changed, almost overnight. some of what you did before will likely not be possible, but as you will see in the rules that follow, new opportunities arise. likely, most of your computational work can continue-essential staff (rule 3) will keep servers, high-performance computing facilities, networks, etc. running, albeit under greater strain than usual. compensate for the loss of other forms of work and try and recreate yet others as best you can while being remote. set up your home office to be comfortable and functional if it is not already. you will have likely gained time that was previously spent on commuting; use that time wisely. embrace the new opportunities that exist but also consider the physical and mental wellbeing that comes from keeping a regular schedule and associated calendar, diet, exercise regime, etc. we are social animals in both personal and professional lives. most of us have the technology to create some level of virtual normality in our otherwise physical communications. use all the synchronous and asynchronous tools you have at your disposal on both computer and cellular networks to recreate as much as possible of the normal. you can still have your regular group meetings virtually via zoom, google meet, microsoft teams, etc. 
do turn on your camera during video calls, as seeing familiar faces helps to alleviate the sense of loneliness. it also forces you to dress accordingly and to keep an organized workspace, a reminder that you are still working even in your cozy home. keep the team spirit going by communicating regularly and checking in with your colleagues to see whether they are coping well or need assistance. slack channels are good for this; physical presence is certainly irreplaceable, but a silver lining is that we will be in a better place when the pandemic subsides because of the extreme stress-testing of technology. to illustrate this point from our own experience, we have put classes online that would have taken years to achieve otherwise. our faculty have quickly embraced technologies they would otherwise ignore; we have records of lab meetings, lectures, workshops, etc. that we would not otherwise have. in short, our scientific digital footprint has expanded significantly out of necessity; the most advantageous and beneficial of our new habits will persist in our post-pandemic world. recognize this and prepare for it: do not be shy or reticent about the new formats and opportunities to communicate your science.

it is amazing and extremely heartening to see scientists come together in the face of adversity. we all have a part to play in the face of this pandemic: to return us to a sense of normality and to make sure that, as a society, we learn from these tragic events. although open science (sharing knowledge, data, software, etc.) should be the norm always, it has become much more imperative now. use this opportunity to both give and take. make your data, software, and papers immediately available through public resources and preprint servers. donate anything from your lab that is not being used when it may benefit others. you can seize this opportunity to develop online courses [2] or workshops [3] based on the various open learning platforms available nowadays.
this is not only useful for sharing your knowledge and skills but also helps build your scientific reputation, even in times of adversity. this is especially true for data scientists in teaching various bioinformatics and programming skills. it is important to keep up to date with the latest developments in the pandemic situation. what can you contribute both to society and to science, as a researcher helping combat the pandemic? helping those on the front line could demand more than "thinking outside of the box"; it may require throwing away the box altogether! remember that researchers are disadvantaged in different ways. the more experimental your work, the more you are likely disadvantaged. do what you can for the most disadvantaged. continue to support scientific publications by accepting invitations to review manuscripts (an example of an activity that is pandemic-compatible). perhaps it is a good time for you to widen your public outreach and write for public media? again (see rule 1), fully utilize the potential of social media to disseminate useful information to the general public and reinforce #stayathome. you can also get involved with webinars [4] or online conferences [5], either as a participant or organizer. if you are a bioinformatician, you will be familiar with the online sharing of computational analyses [6] or software tools [7]; for others, this will be a key time to develop skills for multisite collaborations [8]. you can even seek help from online scientific communities [9]. but also keep in mind rule 1: all you contribute can be undone by overstating your case at a time when exactly what we know and don't know is made abundantly clear.

as a researcher, there are likely some things you simply cannot do right now. on the other hand, there may be time for either scholarly or professional pursuits that you've always had on your to-do list but for which you never had time.
there is always that review or paper to be written that is constantly on the back burner, the exploration of a new area of research with never the time to read the background literature, that software that needs to be written or rewritten, a grant you have always meant to write, and so on. it is never too late for experimental biologists to sharpen programming skills [10]. refer to rule 6 in thinking about these opportunities in terms of what you can contribute scientifically in this time of need. an example would be online materials for a broad audience that impart your knowledge and skill for the common good. for us, this article embraces rules 6 and 7.

personal adversity changes everything, including family relationships. when the adversity is population-wide, as in the case of a pandemic, the potential for positive collective impacts at the societal level is simply unparalleled, almost by definition. conducting research can be an all-consuming challenge to work-life balance. confinement provides time to reflect and begin to act if you feel your balance may be off. don't just use time saved to work, but use it to make a difference to others in nonscientific ways. for many, this will come out of necessity when the daycare center is closed or aging parents need help. make those who need you in ways they did not before a priority. evaluate priorities (see rule 10) and share the responsibilities fairly and equitably. reassessment of work-family balance is especially hard with both parents working and your children at home with no childcare. likewise, if you are on a tenure clock, there is added pressure to perform in impossible circumstances. look to what your institution offers in help, and discuss the situation with those to whom you report.

exploring interests not related to your research is important at the best of times and especially important at the worst of times. it is all too easy to wander over to the computer and just keep working. embrace distractions.
think about your physical and mental wellbeing. other interests are critical here. is there a way those interests can benefit others less fortunate in this time of need? perhaps do something that works on emotions in need of bolstering right now. take humor, for example (https://slydor.com/comics-explained-working-from-home-vs-office/). perhaps tap into your creativity to create your own comics [11]. do what works for you that explores your artistic (or really any nonscientific) side. this is also a unique opportunity for researchers with busy schedules that are often fragmented with meetings and such to improve our sedentary lifestyles by getting outside to exercise, hike in nature, and so on (minding the 2-meter interpersonal distance criterion, of course).

suddenly, what used to feel so important is less so in the face of a crisis. think about what is indeed important now, and consider what you will do differently when it is over. consider also that, like a new year's resolution, you may not keep to what you commit to. writing down what is important during this stressful time and referring back to it in both the near and distant future is, at least, a start. doing so has nothing to do with doing the research; but, as a researcher, part of that evaluation should be to consider the value of the various aspects of your research. perhaps in the light of recent events, other avenues of inquiry that use your skillset are more or less important than you would have imagined before the pandemic?

inspiration for this article came from hhg, who provided a first draft, and peb while (virtually) staring into the eyes of the graduate students who make up his school of data science. what can we do next to help? what can you do? tell us by commenting on this article, using social media, or writing to us directly.
references:
soper ga. the lessons of the pandemic
ten simple rules for developing a mooc
ten simple rules for developing a short bioinformatics training course
ten simple rules for organizing a webinar series
ten simple rules for organizing a virtual conference-anywhere
ten simple rules for writing and sharing computational analyses in jupyter notebooks
ten simple rules for taking advantage of git and github
ten simple rules to enable multi-site collaborations through data sharing
ten simple rules for getting help from online scientific communities
ten simple rules for biologists learning to program
ten simple rules for drawing scientific comics

we never imagined writing such an article. this article is dedicated to all those on the front line of the pandemic. thanks to cameron mura, claudia scholz, and mark borodovsky for their valuable input into this article. an earlier version of this article was made public on march 24, 2020 as a google document (major preprint servers would not take it as it is not a research article) with authors from central virginia, usa, and selangor, malaysia. neither had yet been hit hard by covid-19 at that time.

key: cord-289372-bk348l32
authors: lin, chung-ying; imani, vida; majd, nilofar rajabi; ghasemi, zahra; griffiths, mark d.; hamilton, kyra; hagger, martin s.; pakpour, amir h.
title: using an integrated social cognition model to predict covid-19 preventive behaviours
date: 2020-08-11
journal: br j health psychol
doi: 10.1111/bjhp.12465
sha:
doc_id: 289372
cord_uid: bk348l32

objectives: rates of novel coronavirus disease 2019 (covid-19) infections have rapidly increased worldwide and reached pandemic proportions. a suite of preventive behaviours has been recommended to minimize the risk of covid-19 infection in the general population. the present study utilized an integrated social cognition model to explain covid-19 preventive behaviours in a sample from the iranian general population.
design: the study adopted a three-wave prospective correlational design.

methods: members of the general public (n = 1,718; mean age = 33.34, sd = 15.77; 796 men, 922 women) agreed to participate in the study. participants completed self-report measures of demographic characteristics, intention, attitude, subjective norm, perceived behavioural control, and action self-efficacy at an initial data collection occasion. one week later, participants completed self-report measures of maintenance self-efficacy, action planning, and coping planning, and, a further week later, measures of covid-19 preventive behaviours. hypothesized relationships among social cognition constructs and covid-19 preventive behaviours according to the proposed integrated model were estimated using structural equation modelling.

results: the proposed model fitted the data well according to multiple goodness-of-fit criteria. all proposed relationships among model constructs were statistically significant. the social cognition constructs with the largest effects on covid-19 preventive behaviours were coping planning (β = .575, p < .001) and action planning (β = .267, p < .001).

conclusions: current findings may inform the development of behavioural interventions in health care contexts by identifying intervention targets. in particular, findings suggest targeting change in coping planning and action planning may be most effective in promoting participation in covid-19 preventive behaviours.

statement of contribution. what is already known on this subject? curbing covid-19 infections globally is vital to reduce severe cases and deaths in at-risk groups. preventive behaviours like handwashing and social distancing can stem contagion of the coronavirus. identifying modifiable correlates of covid-19 preventive behaviours is needed to inform intervention. what does this study add? an integrated model identified predictors of covid-19 preventive behaviours in iranian residents. prominent predictors were intentions, planning, self-efficacy, and perceived behavioural control. findings provide insight into potentially modifiable constructs that interventions can target. research should examine whether targeting these factors leads to changes in covid-19 behaviours over time.
novel coronavirus disease 2019 infections, declared by the world health organization (who) as a pandemic (world health organization, 2020a), have had unprecedented global effects on people's daily activities and way of life (heymann & shindo, 2020; kobayashi et al., 2020; lin, 2020; pakpour, griffiths, chang, et al., 2020; pakpour, griffiths, & lin, 2020a; tang et al., 2020). despite government actions such as enforced self-isolation, travel bans, and national lockdowns of non-essential services, schools, and universities, infection and mortality rates continue to rise (baud et al., 2020; heymann & shindo, 2020; wu & mcgoogan, 2020). iran, as of 9 june 2020, is the tenth leading country in total reported cases of covid-19 and is continuing to experience a sharp rise in reported new cases of infections and deaths related to the infection: 175,927 total cases (+2,095 new cases) and 8,425 total deaths (+74 new deaths; worldometer, 2020). to date, there is no vaccine to protect against covid-19 infection and, therefore, nonpharmacological interventions are the only currently available means to reduce the spread of infection and 'flatten the curve' of infection rates (kim, kim, peck, & jung, 2020). in response, the who has proposed a global action plan aimed at reducing the spread of covid-19 infections (world health organization, 2020b). the plan highlights the importance of adopting a range of health protection behaviours including, for example, washing hands frequently, maintaining social distancing, practising respiratory hygiene, and self-isolating if feeling unwell (world health organization, 2020b). however, the who guidance is limited by the fact that it does not focus on understanding the mechanisms of action that underpin these preventive behaviours, or on strengthening individuals' capacity to adopt them.
application of theories of social cognition has demonstrated promise in providing an understanding of the determinants of preventive behaviours (hagger, cameron, hamilton, hankonen, & lintunen, 2020) . such theories help identify potentially modifiable factors that have been shown to be reliably related to behaviour. once identified, these modifiable factors can inform the content and design of behavioural interventions aimed at promoting increased adherence to preventive behaviours in health contexts (hagger, cameron, et al., 2020; kok et al., 2016) . in the current study, we aimed to identify the key social psychological factors that underpin uptake and maintenance of the covid-19 preventive behaviours advocated by the who (world health organization, 2020b). we therefore focused on identifying the motivational and volitional determinants of covid-19 preventive behaviours among iranians based on an integrated model of behaviour that combined social psychological constructs from the theory of planned behavior (tpb; ajzen, 1991; ajzen & schmidt, 2020) and the health action process approach (hapa; schwarzer, 2008; schwarzer & hamilton, 2020) . the tpb is a prominent social cognition theory that has been frequently applied to predict multiple health behaviours (mcdermott et al., 2015; rich, brandes, mullan, & hagger, 2015) . intention is a focal construct of the theory and considered the most proximal predictor of behaviour. intention is a function of three belief-based constructs: attitudes (evaluation of the positive and negative consequences of the behaviour), subjective norms (perceived expectations of important others approving the intended behaviour), and perceived behavioural control (perceived capacity to carry out the behaviour). in addition, perceived behavioural control is proposed to directly predict behaviour when it closely approximates actual control. 
although the extant literature applying the tpb has shown that intentions consistently predict health behaviour and mediate effects of the social cognition constructs on behaviour (hagger, chan, protogerou, & chatzisarantis, 2016; hamilton, van dongen, & hagger, 2020; mceachan, conner, taylor, & lawton, 2011; rich et al., 2015), the intention-behaviour relationship is imperfect (orbell & sheeran, 1998; rhodes & de bruijn, 2013). therefore, dual-phase models of behaviour, such as the hapa (schwarzer, 2008; schwarzer & hamilton, 2020), propose a post-intentional volitional phase in which individuals may employ a range of self-regulatory strategies to enact their intentions. one self-regulatory strategy that may lead individuals to effectively enact their intentions is planning. according to the hapa, there are two types of planning: action planning and coping planning (schwarzer, 2008; schwarzer & hamilton, 2020; sniehotta, schwarzer, scholz, & schüz, 2005). action planning is a task-facilitating strategy and relates to how individuals prepare themselves to perform a behaviour. this includes making plans of when, where, and how to perform the specific behaviour. such plans connect the individual with good opportunities to act. coping planning is a strategy that relates to how individuals prepare themselves to avoid foreseen barriers and obstacles that may arise when performing a specific behaviour, and potentially competing behaviours that may derail the behaviour. such plans protect good intentions from anticipated obstacles and competing behaviours. another important behavioural determinant proposed by the hapa is self-efficacy. in the hapa, self-efficacy is proposed to be important at all stages (i.e., motivational and volitional) of the health behaviour change process and is considered phase-specific (schwarzer & hamilton, 2020; zhang, fang, zhang, hagger, & hamilton, 2020; zhang, zhang, schwarzer, & hagger, 2019).
accordingly, several types of self-efficacy can be distinguished: action self-efficacy (an optimistic belief about personal agency during the pre-actional, motivational phase) and maintenance self-efficacy (an optimistic belief about personal agency during the post-actional, volitional phase). action self-efficacy reflects individuals' perceived capacity and confidence to engage in a behaviour that they have not yet adopted or initiated (schwarzer & hamilton, 2020; zhang et al., 2019, 2020). maintenance self-efficacy refers to individuals' perceived confidence and ability in maintaining the behaviours they have already adopted and performed (schwarzer & hamilton, 2020; zhang et al., 2019, 2020). meta-analytic research has provided support for the hapa constructs of planning and self-efficacy in predicting health behaviours. previous research has also shown intention, planning, and self-efficacy to predict health preventive behaviours more specifically (caudwell, keech, hamilton, mullan, & hagger, 2019; cheng et al., 2019; fung et al., 2019; hamilton, kirkpatrick, rebar, & hagger, 2017; hou, lin, wang, tseng, & shu, 2020; lin, scheerma, yaseri, pakpour, & webb, 2017; lin et al., 2018, 2020; lin, updegraff, & pakpour, 2016; reyes fernández, knoll, hamilton, & schwarzer, 2016; strong et al., 2018; zhang et al., 2020). given the high rates of covid-19 infections worldwide, it is imperative that people engage in covid-19 preventive behaviours to 'flatten the curve' on rates of increase in new cases and, ultimately, reduce mortality rates from covid-19 infection. identifying the key theory-based determinants of key preventive behaviours (regular handwashing; respiratory hygiene practices; maintaining social distancing; self-isolating) will help to inform effective interventions to promote participation in these behaviours.
The purpose of the current study was to examine the efficacy of an integrated theoretical model of behaviour that incorporated constructs representing motivational and volitional processes from the TPB and HAPA in predicting engagement in COVID-19 preventive behaviours by Iranian individuals. The TPB and HAPA constructs of attitudes, subjective norms, perceived behavioural control, action self-efficacy, and intention represented effects in the motivational phase of behavioural decision-making. The HAPA constructs of maintenance self-efficacy, action planning, and coping planning represented effects in the volitional phase of decision-making. The study adopted a three-wave correlational design, with measures of constructs from the motivational phase taken at an initial data collection occasion (Time 1), constructs from the volitional phase taken at a first follow-up occasion (Time 2), and measures of COVID-19 preventive behaviours taken at a second follow-up occasion (Time 3). Study hypotheses are outlined in the next section and illustrated in Figure 1. The target behaviour selected in the current study was COVID-19 preventive behaviours, which comprised four specific actions: regular handwashing, respiratory hygiene practices, maintaining social distancing, and self-isolating. These behaviours all have in common the goal of preventing infection and spread of the virus and, therefore, have utility in attaining that goal. The proposed behavioural outcome, therefore, represents a behavioural category servicing a common goal. This is consistent with previous research examining the determinants of target behaviours that comprise multiple actions servicing a particular goal. For example, researchers frequently aim to predict physical activity, which encompasses multiple actions (e.g., walking, cycling, swimming, running, going to the gym, playing various sports; Cheng et al., 2019; Fung et al., 2019; Rhodes & de Bruijn, 2013).
We adopted a behavioural outcome comprising multiple actions in the current study because these behaviours have a common goal and may, therefore, have common determinants. Evidence for this comes from research examining the clustering of similar health behaviours, which demonstrates considerable consistency in the behaviours themselves and their determinants (e.g., Kremers, de Bruijn, Schaalma, & Brug, 2004). Similarly, recent research has demonstrated that specific COVID-19 preventive behaviours such as social distancing cluster with other health-related behaviours such as physical activity (Bourassa, Sbarra, Caspi, & Moffitt, 2020). It is also important to note that although the determinants of the individual behaviours may differ at the level of the specific sets of beliefs that underpin the model constructs, when measuring the determinants at the global level we expected the determinants to be consistent. Finally, we also expected consistency among the selected preventive behaviours and aimed to ensure this was the case by examining whether measures of the behaviours indicated a latent behavioural variable in our analyses. In terms of specific model predictions, in the motivational phase of the proposed model, we expected that Time 1 attitudes, subjective norms, perceived behavioural control, and action self-efficacy would be associated with Time 1 intentions. In addition, Time 1 intentions and perceived behavioural control were expected to predict Time 3 behaviour. It was also expected that Time 1 action self-efficacy would predict Time 2 maintenance self-efficacy. With respect to model relationships in the volitional phase, it was expected that Time 1 intentions would predict Time 2 action planning and coping planning, and Time 3 behaviour. Moreover, Time 2 maintenance self-efficacy was expected to be associated with Time 2 action planning and coping planning.
Finally, Time 2 maintenance self-efficacy, action planning, and coping planning were expected to predict Time 3 behaviour. A set of indirect effects consistent with theory was also specified. It was expected that Time 1 attitudes, subjective norms, and perceived behavioural control would predict Time 2 action planning and coping planning mediated by Time 1 intentions. In addition, attitudes, subjective norms, and perceived behavioural control were expected to predict Time 3 behaviour mediated by Time 1 intentions and Time 2 action planning and coping planning. We also expected Time 1 action self-efficacy to predict Time 2 action planning and coping planning mediated by Time 1 intentions and Time 2 maintenance self-efficacy. Additionally, we expected that Time 1 action self-efficacy would predict Time 3 behaviour mediated by Time 1 intentions and Time 2 maintenance self-efficacy, action planning, and coping planning. Finally, it was expected that Time 1 intentions and Time 2 maintenance self-efficacy would predict Time 3 behaviour mediated by Time 2 action and coping planning.

Participants and procedure

Participants were Iranian adults aged 18 years and older recruited via online social media platforms. We posted the web link to the survey on three popular social media sites in Iran: Instagram, Telegram, and WhatsApp. We also posted the link to several email listservs with many subscribers nationally. To be eligible for inclusion, participants had to be aged 18 years and older, had to provide consent to participate in the study, and had to have access to the internet. The link directed respondents to an initial page describing study aims and requirements, followed by the consent form and, finally, the survey measures. Participants were prompted to provide their telephone number, email address, or social media contact details in order to receive a link to the follow-up survey by SMS, email, or social media.
Data were collected between 21 February 2020 and 17 March 2020. This period is critical to the immediacy of the current data, as the first confirmed cases of COVID-19 infection in Iran were reported on 19 February 2020 in Qom. By 21 February, 18 cases had been confirmed, with a total death toll of four. Total confirmed cases had increased to 16,169, with 988 deaths, by 17 March 2020. Media coverage of the pandemic was widely broadcast by state and private media during the period, with state broadcasters providing information on guidelines to prevent the spread of infection and on social distancing rules. COVID-19 hotlines were set up at the time to provide help and guidelines on COVID-19 issues. The study adopted a three-wave correlational design with 1-week intervals between each wave. Participants (N = 1,718; male = 796, female = 922) completed a survey at an initial data collection occasion (Time 1) comprising self-report measures of action self-efficacy, attitudes, subjective norms, perceived behavioural control, and intention. The survey also included self-report measures of demographic factors including age, sex, education level, and employment status. At a second data collection occasion (Time 2), participants (n = 1,627; male = 760, female = 867; attrition rate = 5.30% from Time 1) completed self-report measures of maintenance self-efficacy, action planning, and coping planning. At a third data collection occasion (Time 3), participants (n = 1,569; male = 747, female = 849; attrition rate = 8.67% from Time 1) self-reported their participation in COVID-19 preventive behaviours over the past week. We conducted a statistical power analysis based on MacCallum, Browne, and Sugawara's (1996) model fit criterion to establish the required sample size to detect effects.
The analysis suggested that a sample size of 1,456 was required for a well-fitting model with an RMSEA of 0.05 against a null model with an RMSEA set at 0.00, 3 degrees of freedom, alpha set at 0.05, and power set at 0.80. Data across the time points were matched using a code assigned to each participant. The study was conducted in accordance with the Declaration of Helsinki and was approved by the Research Ethics Committee of Qazvin University of Medical Sciences (IR.QUMS.REC.1398.375). All participants provided informed consent prior to the first data collection occasion. Participants completing measures at all three data collection occasions received points valued at IRR 30,000 that were exchangeable for rewards. The points could be used to purchase health-related mobile phone apps, such as cognitive behavioural therapy, mindfulness, yoga, and weight management apps. Only those participants who completed all three surveys were rewarded. Psychological constructs were assessed with multi-item psychometric instruments developed using standardized guidelines and adapted to make reference to the target behaviour in the current study, participation in COVID-19 preventive behaviours. We collected data on different constructs across the three time points to allay common method variance and to provide prospective prediction of key outcomes in the integrated model over time. Brief details of the measures are provided below, and the full set of measures is available in Table 1. Questions were presented in Persian, a language commonly used and widely spoken in Iran. Current measures were adapted from those used in previous studies to tap TPB (Lin et al., 2016), phase-specific self-efficacy (Zhang et al., 2020), and planning constructs.
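The RMSEA-based power analysis of MacCallum, Browne, and Sugawara (1996) can be reproduced with a short script. The function below is an illustrative sketch (not the authors' code) that uses SciPy's noncentral chi-square distribution; the parameter values are those reported above (df = 3, alpha = .05, alternative RMSEA = 0.05 against a null RMSEA of 0.00).

```python
from scipy.stats import chi2, ncx2

def rmsea_power(n, df, rmsea0=0.00, rmsea_a=0.05, alpha=0.05):
    """Power of the RMSEA-based model fit test (MacCallum et al., 1996).

    Under H0 (population RMSEA = rmsea0) the fit statistic follows a
    noncentral chi-square with noncentrality (n - 1) * df * rmsea0**2;
    under the alternative the noncentrality uses rmsea_a instead.
    """
    nc0 = (n - 1) * df * rmsea0 ** 2
    nc_a = (n - 1) * df * rmsea_a ** 2
    # Critical value under H0 (central chi-square when rmsea0 == 0).
    crit = ncx2.ppf(1 - alpha, df, nc0) if nc0 > 0 else chi2.ppf(1 - alpha, df)
    # Power = probability of exceeding the critical value under H1.
    return ncx2.sf(crit, df, nc_a)

# Parameters reported in the text: N = 1,456, df = 3, alpha = .05.
power = rmsea_power(1456, df=3)
print(f"power at N = 1,456: {power:.3f}")  # close to the 0.80 target
```

Power increases monotonically in N for fixed df and RMSEA values, so the required sample size is the smallest N at which this function reaches the desired power.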
Intention to perform the COVID-19 preventive behaviours in the coming week was assessed using three items (e.g., 'In the coming week, I am willing to perform the COVID-19 preventive behaviors every day'), scored 1 = strongly disagree to 5 = strongly agree.

Attitude was assessed using six semantic differential items in response to a common stem: 'For me, following the recommendation of the WHO on engaging in COVID-19 preventive behaviors every day in the coming week is. . .'. This was followed by a series of bipolar adjectives (e.g., extremely bad–extremely good). Responses were scored on five-point scales.

Subjective norm was assessed using two items measuring participants' perceptions of their important others' approval of performing the target behaviour (e.g., 'Most people who are important to me would want me to perform the COVID-19 preventive behaviors every day in the coming week'), scored 1 = strongly disagree to 5 = strongly agree.

Perceived behavioural control was assessed using three items measuring participants' perceptions of their control and confidence in performing the target behaviour (e.g., 'Whether or not I perform the COVID-19 preventive behaviors every day in the coming week is completely up to me'), scored 1 = strongly disagree to 5 = strongly agree.

Action self-efficacy was assessed using three items measuring participants' perceived confidence in initiating the target behaviours immediately (e.g., 'If you have not followed the recommendation of the WHO on the COVID-19 preventive behaviors every day yet, do you have the confidence to start to follow the recommendation even if you have to force yourself doing so at the current stage'), scored 1 = totally disagree to 5 = totally agree.
Maintenance self-efficacy was assessed using four items measuring participants' confidence in maintaining the target behaviour in the long term (e.g., 'If you are able to follow the recommendation of the WHO on the COVID-19 preventive behaviors every day, do you have the confidence to maintain it in the long term even if you are stressed out'), scored 1 = totally disagree to 5 = totally agree.

Action planning was assessed using three items measuring the extent to which participants had made a plan in terms of how, when, and with whom to perform the target behaviour (e.g., 'I have made a detailed plan regarding where to perform the COVID-19 preventive behaviors every day'), scored 1 = totally disagree to 5 = totally agree.

Coping planning was assessed using three items measuring how much participants planned to overcome the obstacles preventing them from performing preventive behaviours (e.g., 'I have made a detailed plan regarding what to do if something interferes with my plans'), scored 1 = totally disagree to 5 = totally agree.

Participants self-reported their age (in years), sex (coded male = 1, female = 2), educational level (in years), and employment status (retired, homemaker, student, employed; coded retired and homemaker = 1, student and employed = 2).

Participants' COVID-19 preventive behaviour was assessed over the last week of the study. Participants reported their frequency of participation in four preventive behaviours recommended by the WHO: washing hands frequently, maintaining social distancing, practising respiratory hygiene, and staying home if feeling unwell (World Health Organization, 2020b; e.g., 'Regularly and thoroughly clean your hands with an alcohol-based hand rub or wash them with soap and water'). Before responding to the behavioural measure, participants were provided with a clear definition of the COVID-19 preventive behaviours and recommendations for how and when they should be performed based on the WHO guidelines.
Moreover, these guidelines corresponded with those provided by state media released by the Iranian Ministry of Health. Therefore, participants were fully aware of the definitions of the preventive behaviours and the guidelines. Responses to each behaviour were scored on five-point scales (1 = almost never to 5 = almost always) and were used to indicate a latent COVID-19 preventive behaviour variable in subsequent analyses. Higher scores indicated greater adherence to the WHO recommendations on engaging in COVID-19 preventive behaviours. Hypothesized relationships in the proposed integrated social cognition model were analysed using structural equation modelling (SEM). The model was estimated using the Amos software v24.0 with a maximum-likelihood estimator and a bias-corrected bootstrapped standard errors approach with 5,000 resamples. Less than 10% of the data were missing, and data were missing completely at random based on Little's (1988) MCAR test (χ² = 1.068, df = 4, p = .899). Missing data were imputed using the full information maximum-likelihood method. Psychological and behavioural constructs were latent variables indicated by their respective sets of items. Hypotheses of the proposed integrated model were tested by specifying structural relationships between latent variables (see Figure 1), with each latent variable indicated by its set of scale items, including the behaviour factor. Age, sex, educational status, and employment status were included as non-latent control variables in the model. Overall model fit with the data was assessed using multiple fit indices: the goodness-of-fit chi-square test, the comparative fit index (CFI), the Tucker–Lewis index (TLI), the standardized root-mean-square residual (SRMR), and the root-mean-square error of approximation (RMSEA).
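The bias-corrected bootstrap used to test indirect effects can be illustrated on a simple two-path mediation model (X → M → Y). The sketch below uses simulated data and ordinary least squares rather than the authors' latent-variable Amos analysis; all variable names and coefficient values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulate a simple mediation model X -> M -> Y with true a = b = 0.4,
# so the true indirect effect a*b = 0.16.
n = 500
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

def indirect(x, m, y):
    """OLS estimates of a (X->M) and b (M->Y, controlling X); returns a*b."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return a * coef[2]

theta_hat = indirect(x, m, y)

# Bias-corrected bootstrap: resample cases, then adjust the percentile
# endpoints by the bias-correction constant z0 (Efron's BC method).
B = 5000
boot = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, n)
    boot[i] = indirect(x[idx], m[idx], y[idx])

z0 = norm.ppf((boot < theta_hat).mean())     # bias-correction constant
lo = norm.cdf(2 * z0 + norm.ppf(0.025))      # adjusted lower percentile
hi = norm.cdf(2 * z0 + norm.ppf(0.975))      # adjusted upper percentile
ci = (np.quantile(boot, lo), np.quantile(boot, hi))
print(f"indirect effect = {theta_hat:.3f}, 95% BC CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

An indirect effect is judged significant when the bias-corrected interval excludes zero, which is the criterion applied to the mediated paths reported in the results.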
As the chi-square test is highly sensitive to even minor misspecification, especially in large, complex models, values for the CFI and TLI that exceeded 0.95, and SRMR and RMSEA values that did not exceed 0.08 and 0.06, respectively, were considered indicative of satisfactory fit of the model with the data (Hu & Bentler, 1999). Reliability of the study measures (intentions, attitudes, subjective norm, perceived behavioural control, action self-efficacy, maintenance self-efficacy, action planning, coping planning, and COVID-19 preventive behaviours) was examined using either Cronbach's α or McDonald's ω coefficients and the composite reliability (CR) coefficient. Values for α and ω exceeding 0.70, and CR values exceeding 0.60, were considered indicative of adequate internal consistency. In addition, we examined the average variance extracted (AVE) for each latent variable to ensure that items contributed adequately to the construct they indicated, with values in excess of 0.50 considered satisfactory. The large sample size in the current study meant that most estimates of effects among study constructs in the SEM were likely to exceed conventional criteria for statistical significance (Ory & Mokhtarian, 2010; Wu, Chang, Chen, Wang, & Lin, 2015). As a consequence, assessment of the effect sizes of parameter estimates among constructs in the proposed SEM was imperative. Effect sizes were evaluated using standardized path coefficients, which allowed interpretation of the absolute and relative effect sizes of the coefficients against Cohen's suggested rules of thumb. Effect sizes of standardized path coefficients for indirect effects were less easily interpretable, as they comprise multiplicative composites of multiple effects.
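The CR and AVE criteria described above can be computed directly from standardized factor loadings. The sketch below uses the standard formulas with hypothetical loadings for a three-item scale; the values are illustrative, not taken from the study.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances).

    Assumes standardized loadings, so each item's error variance is
    1 - loading**2.
    """
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical standardized loadings for a three-item scale.
loadings = [0.82, 0.78, 0.85]
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(f"CR = {cr:.3f} (criterion > 0.60), AVE = {ave:.3f} (criterion > 0.50)")
```

With these loadings the scale clears both thresholds; lower loadings shrink both statistics, with AVE dropping below 0.50 before CR drops below 0.60.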
Based on previously suggested rules of thumb, we judged standardized path coefficients for indirect effects equal to or exceeding .075 as non-trivial and effect sizes below this value as trivial (Hagger, Koch, Chatzisarantis, & Orbell, 2017; Seaton, Marsh, & Craven, 2010). Demographic characteristics of participants who completed measures at each time point are presented in Table 2. Attrition analyses indicated that there were no significant differences in age (F(3, 1714) = 1.35, p = .26), gender distribution (χ²(3) = 2.77, p = .43), educational level (F(3, 1714) = 1.69, p = .17), employment status (χ²(3) = 3.23, p = .36), psychological variables (Wilks' Λ = 1.00, F(8, 1618) = 0.68, p = .71), or preventive behaviours (t(1594) = 0.20, p = .84) between participants who remained in the study at Time 3 and those who dropped out of the study at Time 1 or Time 2. Descriptive statistics, reliability coefficients, factor loadings, and average variance extracted for study measures are presented in Table 1. Cronbach's α and McDonald's ω coefficients all exceeded 0.70, CR values were above 0.60, and all AVE values were above 0.50, supporting the internal consistency and reliability of the measures. Consistent with the acceptable AVE values, the factor loadings of each item on its respective latent factor were satisfactory. Zero-order factor correlations among study constructs are presented in Table 3. Most of the correlations were small to medium in effect size (i.e., r range .30 to .50), and all were statistically significant. Although each item representing a separate COVID-19 preventive behaviour effectively indicated the latent behaviour variable, it was prudent to check the mean scores for each behaviour item to verify the consistency with which they were performed by participants. Mean scores were highly consistent (M range = 2.05 to 2.39), with high consistency in their variability (SD range = 0.85 to 0.89).
A one-way within-participants ANOVA showed significant differences between the behaviours, which was unsurprising considering the large sample size. However, the small effect sizes for the differences (Cohen's d range = 0.09 to 0.39) pointed to the consistency with which participants performed each behaviour, providing further justification for adopting a single COVID-19 behaviour factor. Mean scores, standard deviations, mean differences, and tests of difference for each preventive behaviour item are presented in Supplementary Table S1. The integrated social cognition model proposed in the present study had good fit with the data (χ² = 1,948.06, df = 497, p < .001; CFI = 0.957; TLI = 0.949; SRMR = .0822; RMSEA = .041, 90% CI [0.039, 0.043]). Path coefficients for the direct effects among study constructs in the model are summarized in Figure 1, and path coefficients for the direct, indirect, and total effects are presented in Table 4. All proposed direct and indirect effects were statistically significant, although most effect sizes were small, with most of the standardized path coefficients less than .30. Perceived behavioural control had the largest effect on intention (β = .226, p < .001), with much smaller effects for action self-efficacy, attitude, and subjective norms (βs < .178, ps < .001). The largest direct effects on COVID-19 preventive behaviours were for coping planning (β = .575, p < .001), action planning (β = .267, p < .001), and maintenance self-efficacy (β = .227, p < .001), while the effects of intentions and perceived behavioural control were much smaller (βs < .143, ps < .001). Importantly, the effects of intentions were mediated by both action planning and coping planning, consistent with the HAPA (total indirect effect, β = .194, p < .001), with a non-trivial effect size, although there was a small residual effect of intention on behaviour (β = .143, p < .001).
Along with the direct effect and the mediated effects through the planning constructs, there was also a total effect of intentions (β = .327, p < .001), again with a non-trivial effect size. In addition, there were indirect effects of action self-efficacy on behaviour through intentions, maintenance self-efficacy, and the coping planning and action planning constructs (β = .194, p < .001). Furthermore, perceived behavioural control (β = .330, p < .001) and maintenance self-efficacy (β = .411, p < .001) had the largest total effects on behaviour. The total effect of perceived behavioural control comprised a direct effect (β = .087, p < .001) and indirect effects through intention (β = .045, p < .001) and through action planning and coping planning (β = .243, p < .001). The total effect of maintenance self-efficacy comprised a direct effect (β = .227, p < .001) and indirect effects through the planning constructs (β = .184, p < .001). Although the items of our COVID-19 preventive behaviour measure effectively indicated the latent behaviour variable, for completeness we also explored whether the effects in our model differed according to the specific preventive behaviour adopted as the target behaviour. We therefore re-estimated our structural equation model with each of the four individual behaviours as the dependent variable, represented by single-indicator latent variables. Results indicated high consistency in the pattern and size of the parameter estimates in each of the four models, and these were virtually unchanged from the estimates in the overall model. On the basis of these findings, our conclusions with respect to model effects remained unchanged (the analyses are summarized in Tables S2 to S4 and Figures S1–S4 in the supplemental materials). The present study applied an integrated social cognition model to predict participation in COVID-19 preventive behaviours among members of the Iranian general public.
Findings lend support to the proposed relationships in the integrated social cognition model in identifying the determinants of COVID-19 preventive behaviours. In particular, the research is consistent with previous studies applying the TPB and HAPA to identify the determinants of health behaviours and the processes involved (Hagger et al., 2016; McEachan et al., 2011; Rich et al., 2015). The current model suggests that perceived behavioural control, intentions, forms of planning, and maintenance self-efficacy are prominent behavioural determinants, as they showed non-trivial indirect and total effects on COVID-19 preventive behaviours. Current findings also support the importance of constructs representing both the motivational and volitional phases of action, again consistent with previous research and syntheses of research applying the constituent theories (McEachan et al., 2011). In particular, current findings support previous research applying these constructs to predict similar behaviours in other health-related contexts, such as hand hygiene behaviours and face mask wearing (Contzen & Mosler, 2015; Zomer et al., 2013), although the previous research was not conducted in the presence of a pandemic, whereas the current research was conducted at the peak of the ongoing COVID-19 pandemic.

Note to Table 4. Age, sex, educational status, and occupational status were included as control variables in the structural equation model. AP = action planning; ASE = action self-efficacy; B = unstandardized path coefficient; CP = coping planning; LL = lower limit of 95% CI; MSE = maintenance self-efficacy; PBC = perceived behavioural control; SN = subjective norm; SE = standard error; β = standardized path coefficient; 95% CI = 95% confidence interval of the unstandardized path coefficient; UL = upper limit of 95% CI. *p < .05; **p < .01; ***p < .001.
While the pattern of effects among model constructs in the current study was consistent with theory and identified salient determinants of COVID-19 preventive behaviours, the majority of effects were small in magnitude. Even though the total effect of intentions on behaviour was non-trivial, substantive variance in behaviour remained unexplained. Although shortfalls in the link between intention and behaviour are not uncommon in social cognition models (Orbell & Sheeran, 1998; Rhodes & de Bruijn, 2013), the link in the current study is particularly modest and suggests that individuals were not following through on their intentions to perform these preventive behaviours. This is aptly illustrated by the average levels of both variables in the current study, with the value for intentions (M = 3.73, SD = 1.10) exceeding the hypothetical midpoint of the five-point scale and being larger than the value for behaviour (M = 2.19, SD = 0.73), which was substantially below the midpoint. While it seems that coping planning and action planning accounted for a substantive proportion of the intention–behaviour relationship in the current study, the results do not provide a sufficient explanation for the shortfall in the intention–behaviour relationship. The apparent reluctance to engage in these preventive behaviours is surprising given the high level of threat posed by the COVID-19 outbreak in Iran and the widespread media coverage of the pandemic (Tuite et al., 2020). Furthermore, a recent study identified elevated levels of fear of COVID-19 in the general Iranian population and, although we did not assess risk perceptions in the current study, theory suggests that risk perceptions may translate into increased intentions to perform preventive behaviours to minimize risk (Rogers, 1975; Schwarzer, 2008; Schwarzer & Hamilton, 2020).
However, one possible mitigating factor is that excessively heightened fear may be counterproductive in motivating individuals to engage in preventive behaviours (Lin, 2020). In fact, theory on illness beliefs and perceptions suggests that fear, and beliefs reflecting high seriousness and consequences, may motivate emotion-focused coping responses aimed at mitigating fear, such as avoidance or denial, neither of which may be focused on behaviours to manage the risk itself (Leventhal, Leventhal, & Contrada, 1998). This is also consistent with research demonstrating that heightened risk perceptions may not translate into performance of preventive behaviours when self-efficacy is low (Peters, Ruiter, & Kok, 2013). However, these ideas remain speculative given that we did not assess risk perceptions in the current study, and assessing risk perceptions and their interaction with self-efficacy in predicting preventive behaviours may be an important avenue for future research. It is also important to consider possible contextual influences on the low COVID-19-related behavioural response and the modest intention–behaviour relationship in the current study. The study was conducted in the run-up to the Persian New Year on 20 March 2020. Consequently, many Iranians may have been reluctant to follow COVID-19 preventive behaviours and may have resisted government and WHO recommendations. Traditional New Year celebrations in Iran involve large family gatherings and social events, festive behaviours that are ingrained and habitual and form a strong part of Persian culture. Given the cultural significance of this celebration, it is possible that the traditional festive behaviours may have taken precedence over performing COVID-19 preventive behaviours, particularly the social distancing aspect, as they are incompatible.
Modest effect sizes among model constructs notwithstanding, the current study is among the first to provide preliminary evidence of the potentially modifiable constructs that relate to preventive behaviours known to be critical in minimizing the spread of COVID-19 infections. Current findings may contribute to efforts to increase population-level participation in preventive behaviours by signposting the constructs that should be targeted in behavioural interventions. Research that identifies constructs that are reliably related to behaviour forms an important part of the process by which interventionists develop behavioural interventions (Hagger, Moyers, McAnally, & McKinley, 2020; Rothman, Klein, & Sheeran, 2020). This can be coupled with recent research that has linked these constructs with sets of methods or techniques purported to change them based on theory and previous evidence. Interventionists can therefore identify appropriate techniques that may be effective in producing change in the behaviour of interest by targeting change in the target constructs, a mechanism of action (Connell et al., 2018). The current study, therefore, may provide part of the chain of evidence necessary to develop effective behaviour change interventions for COVID-19 preventive behaviours. Based on current evidence, interventionists should consider strategies that target change in perceived behavioural control, action and maintenance self-efficacy, and coping planning, as these constructs had the largest direct and indirect effects on COVID-19 preventive behaviour. Strategies known to promote self-efficacy include providing opportunities to experience success with the behaviour through, for example, demonstration, modelling, and positive feedback (Warner & French, 2020).
These strategies could be tailored to focus on uptake of the behaviour in the motivational phase (e.g., demonstrating an appropriate social distance when waiting in line at a grocery store; showing effective handwashing technique and prompting practice) or on maintenance (e.g., prompting individuals to identify an appropriate rule of thumb for keeping a social distance every time they are in a store; showing how to incorporate handwashing into a daily routine). Similarly, promoting effective coping planning entails prompting individuals to identify potential barriers to the target behaviour and the actions that can be put in place to mitigate them (e.g., for the barrier of not having access to handwashing facilities, an individual could plan to carry a personal supply of alcohol-based hand sanitizer; Rhodes, Grant, & de Bruijn, 2020). These strategies would form the content of communications delivered through various media (e.g., television, leaflets, posters, web-based messages) to the affected population. The current research has a number of strengths: (1) identifying the determinants of a set of appropriate behaviours aimed at preventing the spread of COVID-19, an infection that poses a substantive global health threat and a priority area for behavioural intervention; (2) adoption of an appropriate integrated theoretical model that provides a set of a priori predictions on the motivational and volitional determinants of COVID-19 preventive behaviours; (3) recruitment of a large sample of participants from a population subjected to substantive threat of infection; and (4) use of an appropriate longitudinal study design, previously validated measures, data collection techniques, and analytic methods. However, a number of limitations to the current data should be noted.
First, although the prospective design provides some basis for the temporal order of relationships among constructs, the current data are correlational, so inferences of causality were drawn from theory alone and not from the data. Furthermore, the prospective design did not model the covariance stability of, or change in, constructs over time. This is an important caveat to consider when making recommendations for practice. While correlations between constructs and behavioural outcomes may provide some indication of potential targets for intervention, these data do not provide a sufficient basis for concluding that inducing change in a construct will lead to change in a behavioural outcome. Future research adopting panel designs that model change in constructs over time, and intervention or experimental designs that induce change in constructs and observe the effects on behavioural outcomes, is needed. It is also important to note that the study was conducted over a 2-week period, a relatively brief follow-up. The short time period is appropriate given the high speed of transmission of the coronavirus, which creates an imperative for immediate mass adoption of COVID-19 preventive behaviours in the population to prevent widespread infection. However, the current study does not provide evidence on the extent to which model constructs predict COVID-19 preventive behaviours over a longer period, and long-term follow-up would provide important data on the long-term maintenance of these behaviours. Moreover, it is important to note that the current study relied exclusively on self-report measures. Although we adopted previously validated measures that demonstrated good reliability and construct validity, such measures have the potential to introduce error variance through recall bias and socially desirable responding. Future studies may consider verification of behavioural data with non-self-report data, such as data on infection rates.
another important limitation is the aggregation of multiple covid-19 preventive behaviours into a single behavioural score representing covid-19 preventive behaviours, with corresponding social cognition measures that made reference to those specific behaviours rather than to the general category of covid-19 preventive behaviours. our original rationale for this was that these behaviours all serve the same goal and, therefore, we would expect them to be closely aligned and to have the same determinants and the same strength of effects within the proposed model. evidence for this comes from the high factor loadings of each behavioural measure on the latent covid-19 preventive behaviours variable, suggesting relative consistency in the way participants performed these behaviours. in addition, estimation of the model with each of the behavioural items as the target behaviour demonstrated substantive consistency in model effects. taken together, these findings provide evidence that the pattern and size of model effects observed in the current study are consistent across the behaviours. nevertheless, we cannot unequivocally rule out idiosyncratic variation in the determinants of each specific preventive behaviour, or in the strength of their effects on the model constructs. this could only be addressed by examining the corresponding determinants of each specific behaviour separately and then testing the invariance of the model effects across behaviours. this remains an imperative for future research. in addition, the stem phrases used in the items might have made the item meaning difficult for some participants to interpret. for example, the self-efficacy items were prefixed with the phrase: 'if you are doing. . ..', and other items included the prefix: 'if you have not followed the recommendation of the who. . .'. 
participants without such experience, or those who had followed the recommendations, might have had difficulty understanding the item content. finally, some of the items aimed at assessing covid-19 preventive behaviours might have been difficult for people to answer. problems with interpreting these items may have introduced additional error variance to the measures and, therefore, affected the strength of model relationships involving these variables. it is also important to acknowledge that we cannot confirm that the current model was tested in a context where individuals were adopting the covid-19 preventive behaviours for the first time. the current research was conducted during a period when participants were likely just starting to introduce these new behaviours, given that very few cases of covid-19 had been detected in iran at the time. nevertheless, we cannot rule out that a proportion of the participants were already enacting these behaviours, or that some participants already had substantive experience with them, albeit directed toward a different goal. so, while it is likely that the current research captured individuals as they were adopting the behaviours for the first time, we cannot rule out the possibility of past experience with the behaviours. future research should consider the inclusion of past behaviour as an additional predictor in the model, consistent with previous research applying social cognition theories (e.g., brown, hagger, & hamilton, 2020; chatzisarantis, hagger, smith, & phoenix, 2004; hagger et al., 2016; hagger, polet, & lintunen, 2018). urgent action is required to stem the spread of covid-19 in order to 'flatten the curve' of infection rates, minimize stress on available resources and health care facilities, and, importantly, reduce mortality. 
the current study identified a number of important social psychological determinants of participation in covid-19 preventive behaviours, particularly forms of self-efficacy, perceived behavioural control, and planning. assuming these determinants are modifiable through intervention, the current research provides important formative data that may assist the development of optimally effective behavioural interventions. however, the relatively low levels of participation in these preventive behaviours observed in the current population are a concern. future research should consider testing the efficacy of behavioural interventions that target change in the constructs identified in the current study using appropriately matched behaviour change techniques. in addition, longitudinal studies adopting panel designs are also a priority to identify directional effects among theory constructs in this high-priority context. martin s. hagger's contribution was supported by a finland distinguished professor (fidipro) award (#1801/31/2105) from business finland. the authors declare that they have no competing interests. the study was approved by the ethics committee of the qazvin university of medical sciences (ir.qums.rec.1398.375). participants completed an online informed consent form before beginning the survey. the following supporting information may be found in the online edition of the article: figure s1 . standardized path coefficients among constructs from the integrated social cognition model with hand hygiene as the target behaviour. figure s2 . standardized path coefficients among constructs from the integrated social cognition model with practicing respiratory hygiene as the target behaviour. figure s3 . standardized path coefficients among constructs from the integrated social cognition model with maintaining a one-meter distance as the target behaviour. figure s4 . 
standardized path coefficients among constructs from the integrated social cognition model with staying at home if unwell as the target behaviour.

references
• associations between fear of covid-19, mental health, and preventive behaviours across pregnant women and husbands: an actor-partner interdependence modelling
• fear of covid-19 scale: development and initial validation
• the theory of planned behavior
• changing behaviour using the theory of planned behavior
• real estimates of mortality following covid-19 infection. the lancet infectious diseases
• social distancing as a health behavior: county-level movement in the united states during the covid-19 pandemic is associated with conventional health behaviors
• the mediating role of constructs representing reasoned-action and automatic processes on the past behavior-future behavior relationship
• reducing alcohol consumption during pre-drinking sessions: testing an integrated behaviour-change model
• the influences of continuation intentions on the execution of social behaviour within the theory of planned behaviour
• extended theory of planned behavior on eating and physical activity
• links between behavior change techniques and mechanisms of action: an expert consensus study
• identifying the psychological determinants of handwashing: results from two cross-sectional questionnaire studies in haiti and ethiopia
• psychosocial variables related to weight-related self-stigma in physical activity among young adults across weight status
• the handbook of behavior change
• using meta-analytic path analysis to test theoretical predictions in health behavior: an illustration based on meta-analyses of the theory of planned behavior
• the common-sense model of self-regulation: meta-analysis and test of a process model
• known knowns and known unknowns on behavior change interventions and mechanisms of action
• the reasoned action approach applied to health behavior: role of past behavior and test of some key moderators using meta-analytic structural equation modeling
• child sun safety: application of an integrated behavior change model
• an extended theory of planned behavior for parent-for-child health behaviors: a meta-analysis. health psychology
• covid-19: what is next for public health? the lancet
• assessing related factors of intention to perpetrate dating violence among university students using the theory of planned behavior
• cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives
• school opening delay effect on transmission dynamics of coronavirus disease 2019 in korea: based on mathematical modeling and simulation study
• communicating the risk of death from novel coronavirus disease (covid-19)
• a taxonomy of behavior change methods: an intervention mapping approach
• clustering of energy balance related behaviours and their intrapersonal determinants
• self-regulation, health, and behavior: a perceptual-cognitive approach
• social reaction toward the 2019 novel coronavirus (covid-19)
• can a modified theory of planned behavior explain the effects of empowerment education for people with type 2 diabetes?
• a cluster randomised controlled trial of an intervention based on the health action process approach for increasing fruit and vegetable consumption in iranian adolescents
• a cluster randomized controlled trial of a theory-based sleep hygiene intervention for adolescents
• the relationship between the theory of planned behavior and medication adherence in patients with epilepsy
• a test of missing completely at random for multivariate data with missing values
• power analysis and determination of sample size for covariance structure modeling
• the theory of planned behaviour and dietary patterns: a systematic review and meta-analysis
• prospective prediction of health-related behaviors with the theory of planned behavior: a meta-analysis
• 'inclined abstainers': a problem for predicting health related behaviour
• the impact of non-normality, sample size and estimation technique on goodness-of-fit measures in structural equation modeling: evidence from ten empirical models of travel behavior
• assessing the fear of covid-19 among different populations: a response to ransing et al (2020)
• assessing the psychological response to the covid-19: a response to bitan et al "fear of covid-19 scale: psychometric characteristics, reliability and validity in the israeli population"
• assessing psychological response to the covid-19: the fear of covid-19 scale and the covid stress scales
• threatening communication: a critical re-analysis and a revised meta-analytic test of fear appeal theory
• social-cognitive antecedents of hand washing: action control bridges the planning-behavior gap
• how big is the physical activity intention-behaviour gap? a meta-analysis using the action control framework
• planning and implementation intention interventions
• theory of planned behavior and adherence in chronic illness: a meta-analysis
• a protection motivation theory of fear appeals and attitude change
• moving from theoretical principles to intervention strategies: applying the experimental medicine approach
• modeling health behavior change: how to predict and modify the adoption and maintenance of health behaviors
• changing behavior using the health action process approach
• big-fish-little-pond effect: generalizability and moderation - two sides of the same coin
• action planning and coping planning for long-term lifestyle change: theory and assessment
• sleep hygiene behaviors in iranian adolescents: an application of the theory of planned behavior
• estimation of the transmission risk of the 2019-ncov and its implication for public health interventions
• estimation of coronavirus disease 2019 (covid-19) burden and potential for international dissemination of
• predicting confidence and self-efficacy interventions
• q&a on coronaviruses (covid-19)
• covid-19 coronavirus pandemic
• further psychometric evaluation of the self-stigma scale-short: measurement invariance across mental illness and gender
• characteristics of and important lessons from the coronavirus disease 2019 (covid-19) outbreak in china: summary of a report of 72 314 cases from the chinese center for disease control and prevention
• health beliefs of wearing facemasks for influenza a/h1n1 prevention: a qualitative investigation of hong kong older adults
• predicting hand washing and sleep hygiene behaviors among college students: test of an integrated social-cognition model
• a meta-analysis of the health action process approach
• sociocognitive determinants of observed and self-reported compliance to hand hygiene guidelines in child day care centers

we are most grateful to all participants. the study was funded by qazvin university of medical sciences. 
this study was supported by grants from the qazvin university of medical sciences. the funders were not involved in the study design, data collection, data analysis, data interpretation, or writing of the report. all authors contributed to the study design, data interpretation, editing, and critical review of the manuscript. vi, nrm, and zgh collected data. ahp and cyl performed the data handling and data analysis and drafted the first manuscript. mdg interpreted the data and its analysis. kh and msh helped draft and revise the manuscript. all authors read and approved the final manuscript.

key: cord-321966-q0if8li9 authors: simpson, ryan b.; zhou, bingjie; alarcon falconi, tania m.; naumova, elena n. title: an analecta of visualizations for foodborne illness trends and seasonality date: 2020-10-13 journal: sci data doi: 10.1038/s41597-020-00677-x sha: doc_id: 321966 cord_uid: q0if8li9

disease surveillance systems worldwide face increasing pressure to maintain and distribute data in usable formats supplemented with effective visualizations to enable actionable policy and programming responses. annual reports and interactive portals provide access to surveillance data and visualizations depicting temporal trends and seasonal patterns of diseases. analyses and visuals are typically limited to reporting the annual time series and the month with the highest number of cases per year. yet, detecting potential disease outbreaks and supporting public health interventions requires detailed spatiotemporal comparisons to characterize patterns of illness across diseases and locations. the centers for disease control and prevention's (cdc) foodnet fast provides population-based foodborne-disease surveillance records and visualizations for select counties across the us. we offer suggestions on how current foodnet fast data organization and visual analytics can be improved to facilitate data interpretation, decision-making, and communication of features related to trend and seasonality. 
the resulting compilation, or analecta, of 436 visualizations of records and codes is openly available online. disease surveillance systems worldwide face increasing pressure to maintain and distribute data in usable formats with clearly communicated visualizations to promote actionable policy and programming responses 1 . decade-long efforts to sustain surveillance systems improve early outbreak detection, infection containment, and mobilization of health resources [1] [2] [3] [4] and create adaptive, near-time forecasts for disease outbreaks 5, 6 . web-based platforms provide access to more accurate, timely, and frequent surveillance data. the world health organization's (who) flunet, for example, provides time-referenced data on worldwide influenza 7 . publicly available downloads increase the flexibility of analyses and enable adaptive research through frequent and timely reporting. the pandemic of 2019 novel coronavirus disease (covid-19) serves as a vivid demonstration of how limited access to publicly available high-quality data can stymie research. as the quantity and diversity of data available for processing, synthesizing, and communicating increase, new visual analytics, including complex multi-panel plots, must be considered to monitor trends, investigate seasonality, and support public health planning 8 . these visualizations, and the methodologies used to generate them, must be standardized to enable comparability across time periods, locations, at-risk populations, and pathogens. however, current surveillance systems, including foodborne disease surveillance in the united states, often compress time series records to simplistic annual trends [9] [10] [11] [12] [13] and describe seasonality by the month(s) with the highest cases per year or the first month of outbreak onset [14] [15] [16] [17] [18] [19] . 
visualizations using these annual trends or broad assessments of seasonality fail to utilize the full complexity of surveillance data and in some cases may be misleading. more specifically, these visualizations fail to show how long-term trends change over time, how seasonality estimates vary by year or across locations, or how peak timing and amplitude estimates could change over time. the cdc foodborne disease active surveillance network (foodnet) provides preprocessed population-based foodborne-disease surveillance records and visualizations via foodnet fast, a publicly available data portal 20, 21 . the foodnet fast platform contains rich demographic data, including age group, gender, and ethnic group, valuable for a broad spectrum of analyses. the visualizations aim to aid users in identifying trends of nine laboratory-confirmed foodborne diseases in select counties from ten us states and nationally. however, in their present form and due to substantial data compression, the available data and visualizations are limited in scope. foodnet fast allows data download and visualization of these diseases for a user-specified time period. data downloads include information on the incidence of confirmed cases, monthly percentage of confirmed cases, distribution of cases by pathogen, and totals of cases, hospitalizations, and deaths. for multi-year periods, the portal aggregates totals and monthly percentages into single statistics for the full time period selected rather than showing individual years. this aggregation ensures case anonymity, but monthly time units limit the granularity of trend and seasonality analyses. to calculate monthly percentages of confirmed cases for all diseases in one year and one location, we had to download each state-year combination individually, for a total of 221 files in ms excel format. 
to create a time series of total monthly cases by pathogen and location, we used data from two tables in each data download: annual counts of confirmed cases (long format) and monthly percentages of confirmed cases (wide format). we transposed the monthly percentages of confirmed cases from wide to long format and then multiplied them by the annual counts of confirmed cases (supplementary figure s1 ). since the provided monthly percentages are rounded to one digit in the data download, calculated counts slightly under- or over-estimate annual totals. we did not round non-integer cases in our calculated time series in order to best preserve the monthly distribution of cases from the original data download. a monthly time series of confirmed hospitalizations or deaths could not be reconstructed as described because no information is provided on their monthly percentages. we next calculated disease rates using confirmed monthly cases and annual population data. rates are preferred over counts since changes in counts could be a direct result of changes in the population catchment area of a surveillance system. the number of counties and states monitored in foodnet increased between 1996 and 2004 and has remained constant since (supplementary table s1 ). we downloaded county-level population estimates from the 1990, 2000, and 2010 us census bureau intercensal reports, which provide annual population estimates [26] [27] [28] . we then estimated the state-level foodnet population catchment area by adding the mid-year (july 1 st ) populations of all surveyed counties monitored in each year. next, we calculated the united states population catchment area by adding all state-level estimates for all surveyed counties for each year. finally, we developed a time series of monthly rates per 1,000,000 persons for each pathogen and location by dividing monthly counts by annual population estimates and multiplying this quotient by 1,000,000. 
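the reconstruction steps above can be sketched in a few lines of pandas. the table layouts, column names, and population figures below are hypothetical stand-ins for the foodnet fast download schema, not the actual file format:

```python
import pandas as pd

# hypothetical stand-ins for the two FoodNet Fast tables:
# annual confirmed-case counts (long format) and monthly percentages (wide format)
months = ["jan", "feb", "mar", "apr", "may", "jun",
          "jul", "aug", "sep", "oct", "nov", "dec"]
annual = pd.DataFrame({"year": [2016, 2017], "cases": [1200, 1000]})
pct = pd.DataFrame(
    [[2016, 5.0, 4.0, 6.0, 7.0, 9.0, 10.0, 12.0, 13.0, 10.0, 9.0, 8.0, 7.0],
     [2017, 6.0, 5.0, 6.0, 7.0, 8.0, 10.0, 12.0, 12.0, 10.0, 9.0, 8.0, 7.0]],
    columns=["year"] + months)

# wide -> long, then monthly counts = annual total x monthly share
long = pct.melt(id_vars="year", var_name="month", value_name="pct")
long = long.merge(annual, on="year")
long["count"] = long["cases"] * long["pct"] / 100.0  # left unrounded, as in the text

# rate per 1,000,000 persons using a hypothetical catchment population
population = {2016: 4_500_000, 2017: 4_550_000}
long["rate"] = long["count"] / long["year"].map(population) * 1_000_000
```

because the published percentages are rounded, the recovered monthly counts will not sum exactly to the annual totals in practice; keeping them unrounded, as above, preserves the monthly distribution.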
in addition to monthly rates, we calculated yearly rates by adding all monthly counts in each year, dividing by the annual population, and multiplying this quotient by 1,000,000. modeling trends and seasonality. we estimated trend and seasonality characteristics using negative binomial harmonic regression (nbhr) models, which are commonly used to analyse count-based time series records with periodic fluctuations [29] [30] [31] . these models include harmonic terms representing sine and cosine functions, which allow us to fit periodic oscillations. the regression parameters for these harmonic terms serve as a base for estimating two important characteristics of seasonality: when the maximum rate occurs (peak timing) and the magnitude at that peak (amplitude). we calculated peak timing, amplitude, and their confidence intervals from nbhr model coefficients using the δ-method, which allows us to transform the regression coefficients of the model into seasonality characteristics based on the properties of the basic trigonometric functions (supplementary table s2) 29, 30 . to estimate annualized seasonality characteristics, we applied an nbhr model for each study year and location, with the length of the time series set to 12 to represent the months of the year. we also estimated seasonality characteristics for the full time period. to show average trends across the entire 22-year period, we fit an nbhr model with three trend terms (linear, quadratic, and cubic), where the length of the time series varied from 168 to 264 months according to when foodnet began surveying that location. the selection of three polynomial terms was driven by the clarity of interpretation as a monthly increase with potential overall acceleration or deceleration, although other ways of assessing the trend, such as moving averages and spline functions, could also be explored. plot terminology. 
we develop multi-panel visualization techniques using the best practices of current data visualization resources 32,33 and our own research 8, 34, 35 . a multi-panel plot, as defined by our earlier work, "involves the strategic positioning of two or more graphs sharing at least one common axis on a single canvas" 8 . these plots can effectively illustrate multiple dimensions of information, including different time units (e.g. yearly, monthly), disease statistics (e.g. pathogens, rates, counts), seasonality characteristics (e.g. peak timing, amplitude), and locations (e.g. state-level, national). we use the following common, standardized terminology across visualizations to ensure comprehension:
• disease - each of the nine reported foodnet infections, including campylobacteriosis (camp), listeriosis (list), salmonellosis (salm), shigellosis (shig), infection due to shiga toxin-producing escherichia coli o157 and non-o157 (ecol), vibriosis (vibr), infection due to yersinia enterocolitica (yers), cryptosporidiosis (cryp) and cyclosporiasis (cycl)
• monthly rate - monthly confirmed cases per 1,000,000 persons
• yearly rate - total confirmed cases in a year divided by the mid-year population of all surveyed counties in that location (cases per 1,000,000 persons)
• frequency - the number of months reporting disease rates in the same range
• peak timing - the time of year according to the gregorian calendar at which a disease reaches its maximal rate; for monthly time series, peak timing ranges in [1, 13), i.e. from 1.0 (beginning of january) to 12.9 (end of december)
• amplitude - the mathematical amplitude, or the midpoint of relative intensity; for nbhr models, the amplitude estimate reflects the ratio between the disease rate at the peak (maximum rate) and the disease rate at the midpoint (median rate)
• foodnet surveyed county - a county under foodnet surveillance as of 2017
• non-surveyed county - any remaining county within a surveillance state as of 2017. 
we present our analecta of visualizations, allowing readers to describe trends, examine seasonal signatures (curves depicting characteristic variations in disease incidence over the course of one year), and understand features of seasonality, such as peak timing and amplitude, across locations and diseases. we illustrate all visualizations using salmonellosis for the united states from 1996-2017. the full analecta with time series data and code are available on our website (https://sites.tufts.edu/naumovalabs/analecta/), with data and code also available on figshare 36 . describing trend. the interpretability of trends in a time series plot is greatly affected by the length and units of the time series. foodnet fast aggregates data annually, as shown in supplementary figure s2 , which provides clear, concise information on annual rates. in this example, the rate of salmonellosis remains largely unchanged over time, with distinct outbreaks seen in 1999 and 2010. as expected, by compressing data to annual rates, supplementary figure s2 masks within-year trends of disease rates. foodnet reports and publications similarly tend to show only inter-annual changes in disease counts or rates [9] [10] [11] [12] [13] 37, 38 . without more granular within-year variations, the viewer cannot determine whether increased yearly rates are driven by erratic outbreaks in a specific month or by higher rates across all months of the year. to capture within-year trends, we propose a multi-panel plot that combines information on monthly rates, inter-annual trends, and the frequency distribution of rates by utilizing the shared axes of individual plots (fig. 1) . the right panel of fig. 1 provides a time series of monthly rates with an nbhr model fit with three trend terms (linear, quadratic, and cubic). the inclusion of polynomial terms allows us to capture long-term trends (linear term) and their acceleration and deceleration over time (quadratic and cubic terms). 
the predicted trend line is shown in blue and its 95% confidence interval in grey shades. the estimated median monthly rate is shown in red. the left panel depicts a rotated histogram of rate frequencies indicating the right-skewness of the monthly rate distribution. the histogram shares the vertical monthly rate axis with the time series plot and is essential for connecting two concepts: the distribution of monthly counts on the basis of their frequency and the distribution of monthly counts over time. two pictograms refer to the selected pathogen and location. figure 1 shows the stability of seasonal oscillation in salmonellosis over the time series, with increased rates from 1998-2010 followed by a gradual decrease in rates through 2017. while preserving the within-year seasonal fluctuations, the plot provides additional information. alternating background colours help distinguish differences in the shape of seasonal curves between adjacent years. an increasingly darker hue for the monthly rate values distinguishes more recent data from more historic data. contrasting background colours mixed with a gradual intensity of line hues, saturation, brightness, and transparency allow for greater focus and attention to trends in the data [32] [33] [34] . the rotated histogram in the left panel of fig. 1 shows the distribution of monthly rates and its degree of skewness due to months with high counts. we include the red median line to provide the most appropriate measure of central tendency for the skewed distribution. the shared vertical axis helps readers track those high values to a specific month in the time series. the distribution also justifies the use of negative binomial regression models to evaluate temporal patterns. 
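the shared-axis layout of fig. 1 (a rotated histogram next to the time series, with a common red median line) can be sketched in matplotlib. the series below is synthetic, right-skewed stand-in data, not the actual salmonellosis record:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
t = np.arange(264)  # 22 years of monthly observations
# synthetic seasonal, right-skewed monthly rates
rates = rng.gamma(shape=2.0,
                  scale=20.0 + 8.0 * np.sin(2 * np.pi * t / 12),
                  size=t.size)

# left panel: rotated histogram; right panel: time series; shared rate axis
fig, (ax_hist, ax_ts) = plt.subplots(
    1, 2, sharey=True, gridspec_kw={"width_ratios": [1, 4]}, figsize=(10, 4))
ax_hist.hist(rates, bins=25, orientation="horizontal", color="steelblue")
ax_hist.invert_xaxis()  # frequencies grow toward the time series panel
ax_hist.set_xlabel("frequency")
ax_hist.set_ylabel("monthly rate per 1,000,000")
ax_ts.plot(t, rates, color="navy", lw=0.8)
for ax in (ax_hist, ax_ts):
    ax.axhline(np.median(rates), color="red", lw=1)  # median monthly rate
ax_ts.set_xlabel("month index (1996-2017)")
fig.savefig("fig1_sketch.png", dpi=150)
```

the `sharey=True` argument is what makes the two panels comparable: any skewness visible in the histogram can be traced horizontally to the months producing it.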
by supplementing the time series plot with the distribution of monthly rates, we show a visual rationale for using appropriate analytical tools (negative binomial model, in this case) for calculating inter-annual trends. to better understand annual differences in seasonal behaviors, we propose a multi-panel plot that incorporates annual seasonal signatures, summary statistics of monthly rates, and radar plots (fig. 2 ). given varying visual perceptions of these three ways of presenting seasonal patterns, we offer side-by-side comparisons that aim to increase comprehension. the top-left panel provides an overlay of all annual seasonal signatures, a set of curves depicting characteristic variations in disease incidence over the course of one year, where line hues become increasingly darker with more recent data and a red line indicates median monthly rates, as in fig. 1 . the bottom-left panel provides a set of box plots for each month that aggregates information over the study period and provides essential summary statistics, including the median rate values and the measures of spread. the shared horizontal axis allows the two plots to be compared across the years using identical scales. to provide visual context, background colours were used to indicate the four seasons (winter, spring, summer and autumn). the right panel provides overlaying monthly rates using a radar plot where time is indicated on the rotational axis and rates are indicated on the radial axis. the radar plot emphasizes the periodic nature of seasonal variations in one continuous line with graduating colours. the colour hue of the lines, background colour, median line colour and the axis scales are uniform across all three panels. we also repeat the pictograms to refer to the selected pathogen and location. for salmonellosis, disease rates are highest in the summertime (with peaks in july and august) and lowest during the wintertime (with a well-defined february nadir). 
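the radar panel described above can be approximated with matplotlib's polar projection. the 12 monthly rates here are a synthetic seasonal signature (an assumption, not the salmonellosis data); closing the line back to january reproduces the continuous annual flow:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# synthetic average seasonal signature (12 monthly rates), peaking mid-summer
months = np.arange(12)
rates = 20 + 10 * np.sin(2 * np.pi * (months - 3.5) / 12)

# close the loop so december joins back to january
theta = 2 * np.pi * months / 12
theta_closed = np.append(theta, theta[0])
rates_closed = np.append(rates, rates[0])

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(theta_closed, rates_closed, color="navy")
ax.set_theta_zero_location("N")   # january at the top
ax.set_theta_direction(-1)        # months run clockwise
ax.set_xticks(theta)
ax.set_xticklabels(["jan", "feb", "mar", "apr", "may", "jun",
                    "jul", "aug", "sep", "oct", "nov", "dec"])
```

the key design choice matches the text: the rotational axis carries time and the radial axis carries rate, so the nadir-to-peak transition appears as one unbroken curve rather than a line that restarts each january.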
rate increases and decreases during equinox periods reflect bacterial growth rates under more and less favourable climate conditions, respectively. the box plots depict the confidence interval (whiskers) and outliers or potentially influential observations (markers) over the 22-year period. measures of distribution spread provide an insight into the dispersion of rates in each month: the variability of salmonellosis rates decreases in winter months closer to the february nadir but increases in the summer months of july and august closer to the seasonal peak. unusually high values are indicative of erratic behavior characterized by spikes in specific months and years. the right-hand panel of fig. 2 further emphasizes the periodic nature and the positioning of the seasonal peaks and nadirs. radar or spider plots describe time using a rotational axis where the radial distance from the centre of the plot depicts rate magnitude [39] [40] [41] [42] . radial axes, compared to perpendicular axes, show annual fluctuations as a continuous flow. this more clearly demonstrates declines of salmonellosis rates during nadir months (november to march) without the visual discontinuity of the left-panel visuals. to capture the advantage of a multi-panel plot (fig. 3) , we incorporate the boxplot from fig. 2 (lower left panel) with a calendar heatmap containing 264 monthly rate values. in the heatmap, information for each individual year is shown as stacked rows of width 12 (one cell for each month of the year), where cell colour intensity represents the magnitude of monthly rates. like fig. 2 , the heatmap illustrates that the highest rates (shown as the darker cells) occur in july and august. compared to stacked line plots, however, fig. 3 provides an individual row for each year of the time series, allowing for greater decomposition, differentiation, and comparison of seasonal signatures across years. 
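a minimal sketch of this calendar-heatmap layout, pairing a years-by-months grid with a shared-axis panel of yearly totals; the rates are simulated and the colour map is our choice, not the paper's:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
years = np.arange(1996, 2018)            # 22 surveillance years
t = np.arange(years.size * 12)
mu = np.exp(1.0 + 0.5 * np.sin(2 * np.pi * (t - 4) / 12))
rates = rng.poisson(lam=10 * mu) / 4.5   # synthetic monthly rates
grid = rates.reshape(years.size, 12)     # one row per year, one column per month

fig, (ax_heat, ax_bar) = plt.subplots(
    1, 2, sharey=True, gridspec_kw={"width_ratios": [3, 1]}, figsize=(9, 6))
im = ax_heat.imshow(grid, aspect="auto", origin="lower", cmap="Reds",
                    extent=[0.5, 12.5, years[0] - 0.5, years[-1] + 0.5])
ax_heat.set_xticks(np.arange(1, 13))
ax_heat.set_xlabel("month")
ax_heat.set_ylabel("year")
fig.colorbar(im, ax=ax_heat, label="monthly rate")

yearly = grid.sum(axis=1)                # yearly totals drive the bar panel
ax_bar.barh(years, yearly, color="grey")
ax_bar.set_xlabel("yearly rate")
```

`origin="lower"` puts 1996 at the bottom so the trend reads bottom-to-top, and the shared year axis lets a reader connect a high yearly bar directly to the month cells responsible for it.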
in this plot, seasonal changes are shown horizontally from left to right, from january to december, and the yearly trend transition can be observed vertically from bottom to top, from 1996 to 2017, in the right panel. while fig. 2 provides the annual variability of seasonal patterns, monthly rate values for each year are difficult to ascertain. instead, the emphasis is placed on similarities and differences of the seasonal curvature over time. in fig. 3 , the attention shifts to comparing the intensity of rates per month of the year across years. here, we evaluate which months of the year are most intense across years, using the intensity of each cell's colour hue to describe the intensity of rates. the fig. 3 panel integrates information on both trends and seasonality along with the individual monthly values, unlike any of the previously shown visualizations. yearly rates provide a bar graph for comparing fluctuations in inter-annual rates, while the adjacent heatmap indicates the month(s) driving these fluctuations. in doing so, the calendar heatmap identifies whether inter-annual changes are driven by sporadic outbreaks or by increased seasonal magnitude of rates. at the same time, the shared-axis box plot provides an overview of the average seasonal signature for the entire time series, as emphasized in fig. 2. understanding seasonal features. detailed characterization of the timing and intensity of seasonal peaks requires a standardized estimation of peak timing and amplitude. this standardization improves upon current techniques of comparing months with the highest cases in a given year by applying the δ-method to 
to depict point estimates and confidence intervals of seasonality characteristics, we use forest plots - a technique commonly used in meta-analyses 18,43,44. we develop a multi-panel forest plot to depict annual peak timing, annual amplitude, and their joint distribution, to better understand the relationship among the seasonal features and how it changes over time (fig. 4). figure 4 is a multi-panel plot that incorporates two forest plots (one each for annual peak timing and amplitude estimates) and one scatterplot (for peak timing and amplitude) to describe seasonality features. the top-left panel shows peak timing estimates (as month of the year, ranging from 1.0 (beginning of january) to 12.9 (end of december) on the horizontal axis) for each study year (vertical axis). the bottom-right panel shows amplitude estimates, where the horizontal axis indicates the study year and the vertical axis shows the amplitude (the ratio between the disease rate at peak and the median rate). the bottom-left corner shows the scatterplot of peak timing (horizontal axis) and amplitude (vertical axis), with markers representing each pair of annual estimates. measures of uncertainty (95% confidence intervals) are reflected in the error bars of each marker; dashed red lines show median peak timing and amplitude estimates. forest plots in fig. 4 provide a compact, clear, and comprehensive visual describing the stability of peak timing and amplitude, even without showing the entire seasonal signature. for example, salmonellosis peak timing and amplitude vary little each year, indicating strong, stable seasonal peaks in july and august. consistent peak timing means practitioners could time preventive strategies, increase awareness of foodborne illnesses to prevent transmission, and inform food retailers of when food safety inspections should be in higher demand within their supply chains.
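the error bars in these forest plots are plain 95% confidence intervals around each annual estimate. as an illustration only (the numbers below are hypothetical, and the paper derives its intervals via the δ-method on nbhr parameters rather than this simple wald form):

```python
def ci95(estimate, std_error):
    """wald-type 95% confidence interval (estimate +/- 1.96 * standard error)."""
    half_width = 1.96 * std_error
    return estimate - half_width, estimate + half_width

# hypothetical july peak-timing estimate with its standard error
low, high = ci95(7.2, 0.15)
```

such a pair would be drawn as a marker at 7.2 on the month axis with a bar spanning roughly 6.91 to 7.49.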
consistent amplitude estimates show that the intensity of salmonellosis varies little over time, suggesting that federal food safety regulations have not greatly influenced the number of salmonellosis cases annually. this type of information is likely to benefit foodnet fast users. supplementary figure s6 provides an example of how sporadic outbreak behavior can be depicted by forest plots of peak timing and amplitude estimates for shigellosis in ny. the lack of seasonality for shigellosis is shown by the broad confidence intervals for peak timing, spanning the entire year and beyond. figure s7 provides an example bar chart showing differences in the average annual incidence of salmonellosis for the ten foodnet-surveyed states. as with other foodnet visualizations, the data have been compressed to show only average annual estimates. as in fig. 1, annual rates mask within-year seasonal variations, calling into question whether differences between states are driven by single-year outbreaks. the alphabetical organization of the horizontal axis makes ranking and comparing states more difficult than if they were ordered from highest to lowest rates. to ease comparisons of a single disease across geographic locations, we generated two multi-panel plots (figs. 5 and 6). these plots mirror the same techniques shown above but include multiple shared axes and multiple locations to draw spatial comparisons. supplementary figure s8 follows the same design as fig. 3; we replicate this design for salmonellosis in all foodnet-surveyed states. we present all states in one plot in descending order by the sum of yearly rates in each state and display all available data so that state-level patterns can be compared. the box plot in the top panel provides an overview of the seasonal signature for the entire us. the bottom panel disaggregates the entire us by state.
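the descending ordering used for the state panels (sum of yearly rates per state) is a one-line sort. a sketch with hypothetical numbers:

```python
def order_states_by_burden(yearly_rates):
    """order state panels by descending total rate, as in the shared-axis plots."""
    return sorted(yearly_rates, key=lambda state: sum(yearly_rates[state]), reverse=True)

# hypothetical annual salmonellosis rates per state
yearly = {"ga": [20, 22, 25], "ny": [10, 9, 11], "ca": [15, 14, 16]}
order_states_by_burden(yearly)  # -> ['ga', 'ca', 'ny']
```

ordering by burden rather than alphabetically puts the heaviest panels first, which is what makes the ranking readable at a glance.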
as shown, all states share similar peak timing in july and august for almost every surveillance year from 1996-2017. for some states, like ga and ca, rates are densely concentrated from july to september, with a rapid decline from september to february and a gradual incline from february to july. for other states, like ny and or, seasonal peaks are much less pronounced and rate differences between months are smaller. clear indication of missing data provides additional information on differences in reporting completeness not captured by previous figures. while heatmaps provide information on seasonal signatures, yearly rate bar graphs (right panel) capture state-level trends over time. states are stacked in order of total cases from 1996-2017, showing differences in the intensity of salmonellosis infection across states. comparisons within states between years help identify inter-annual rate changes over time. for example, while md and ca have generally declined in annual rates over the 22-year period, ga rates increased from 1996-2012 and then steadily declined from 2012-2017. in combination with heatmaps, yearly rates also allow for detailed assessment of sporadic outbreaks. for example, erratic outbreaks arose from two monthly spikes, in april and june, for ct in 1997, while for nm in 2000 a multi-month outbreak lasted from may to july. by using shared horizontal and vertical axes, this plot eases the comparison of disease rates across months, years and states. it also helps to identify hotspots and detect potential co-occurrences of infection in different states. moreover, the plot can be periodically updated by adding new information, offering a sustainable approach for making consistent comparisons between historical data and data captured in the future. to compare seasonality features across locations, we designed a multi-panel plot similar to fig. 4 to show average peak timing and amplitude estimates over the 22-year period for each state. in fig.
6, the top-left panel plots peak timing estimates ordered from the earliest (or) to the latest (ga) peak timing, while the bottom-right panel plots amplitude estimates in order of magnitude. marker and line colours differentiate the seasonality feature estimates and their measures of uncertainty between states. the bottom-left panel shows the relationship between peak timing and amplitude across states. figure s8 provides an example of a foodnet fast bar chart showing differences in the total confirmed infections for each of the nine surveyed pathogens in the us from 1996-2017. the visual shows that infections due to campylobacter and salmonella have the highest cumulative counts while cyclospora has the lowest. while depicting these differences clearly, this visual lacks sufficient specificity for drawing more intricate comparisons between infections. how are counts or rates distributed by year? what are the within-year variations of rates by pathogen? how do seasonal signatures and their variability differ by pathogen? can axes be reordered or recalculated for easier comparisons between pathogen counts or rates? we propose two multi-panel plots (figs. 7 and 8) that improve comparisons of multiple diseases for a given geographic location. figure 7 replicates the plot design of fig. 5 but emphasizes comparisons between pathogens for a single location. instead of a seasonal signature box plot, the top panel provides a scatterplot to illustrate the peak timing and amplitude of each pathogen. in combination with the heatmap in the bottom panel, these plots illustrate the strong seasonality of salmonellosis, campylobacteriosis, and stec in july and august and of cryptosporidiosis in august. these seasonal peaks are consistent across almost all years, suggesting a stable seasonal periodicity and strong alignment between infections.
in contrast, infections caused by yersinia enterocolitica, vibriosis, listeriosis, and cyclosporiasis have much less pronounced seasonality, and their monthly rates are much lower than those of salmonellosis or campylobacteriosis. yearly rates, shown in the right panel, indicate erratic outbreak behavior for cyclosporiasis. given the sizable differences in rates across diseases, we applied a high-order calibration colour scheme. we also provide the same multi-panel, shared-axis visualization design seen in fig. 6 for comparisons across pathogens. figure 8 includes a forest plot of peak timing by disease pathogen (top-left panel), a forest plot of amplitude by pathogen (bottom-right panel), and a scatterplot of peak timing against amplitude estimates (bottom-left panel). as in fig. 6, average peak timing and amplitude estimates are calculated using nbhr models for the entire 22-year time series. comparisons between diseases allow for understanding the alignment of seasonal processes across pathogens, as well as shared relative magnitudes in a specific location. in our case, most of the pathogens peak during the summertime, except cyclosporiasis. however, if the selected diseases peak during winter months, we recommend adjusting the starting and ending months to center these peaks in the figure.

in this study we offered ways of thinking about how public data platforms can be improved by using visual analytics to provide a comprehensive description of trends and seasonality features in reported infectious diseases. we emphasize the utility of multi-panel graphs by showing side by side different methods of depicting trends over time and features of seasonality, including disease peak timing and amplitude. we provided visual tools to show trends (fig. 1), examine seasonal signatures (figs. 2 and 3) and their characteristics (fig. 4), compare diseases across locations for trends (fig. 5) and seasonal signatures (fig. 6), and draw comparisons across pathogens for trends (fig.
7) and seasonal signatures (fig. 8). we also provide guides on how to explore and compare trends and seasonality between multiple diseases and geographic locations using foodnet fast data. given varying visual perceptions, we offer side-by-side comparisons of different tools, aiming to increase comprehension and speed the adoption of efficient graphical depictions.

we developed a time series of monthly rates by reconstructing a time series of monthly counts (see fig. 1) and then dividing counts by the sum of all foodnet-surveyed counties' mid-year populations per state, expressed per 1,000,000 persons. in this calculation, we recognize that average monthly percentages are rounded in the raw data file and do not sum to 100% annually for the downloaded years. this rounding resulted in non-integer counts within our time series. to avoid modifying the raw data files, we did not round counts to integers before or after calculating rates. no information is provided on the foodnet fast website for the definition of confirmed cases, and data downloads provide no metadata for distinguishing cases from hospitalizations and deaths. although the case definition is provided on the cdc website as "laboratory-confirmed cases (defined as isolation for bacteria or identification for parasites of an organism from a clinical specimen) and cases diagnosed using culture-independent methods" 45, it forces the user to assume that a confirmed case is any person with laboratory-confirmed cultures of a specific pathogen who may or may not have been hospitalized or died from infection. foodnet also collects information on hospitalizations and deaths, but does not provide the monthly percentage of hospitalizations or deaths, so users are unable to reconstruct a monthly time series for deaths or hospitalizations. the foodnet fast platform presents all confirmed diseases as "incidence" calculations.
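the rate construction above, and the alternative route noted in the surrounding text (annual incidence times the monthly percentage of confirmed cases), can be sketched as follows; the populations and counts are hypothetical:

```python
def monthly_rate(monthly_count, catchment_population, per=1_000_000):
    """monthly rate per `per` persons; counts may be non-integer because the
    downloaded monthly percentages are rounded and do not sum to 100%."""
    return monthly_count / catchment_population * per

def monthly_rate_alt(annual_incidence_per_100k, monthly_pct_of_cases):
    """alternative route: annual incidence (per 100,000) times the monthly share
    of confirmed cases, rescaled to per 1,000,000 persons."""
    return annual_incidence_per_100k * (monthly_pct_of_cases / 100) * 10

# hypothetical reconstructed count in a catchment of 4.2 million persons
direct = monthly_rate(123.4, 4_200_000)
```

in the paper's data the two routes agree to within about ±2%, the residual being attributed to rounding of monthly percentages and shifting catchment populations.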
technical documentation on the foodnet website shows that the term incidence reflects cases per 100,000 persons (used interchangeably with a disease rate), with no distinction of whether these are cases newly introduced within the population (i.e. incidence) or the total persons diagnosed with a disease (i.e. prevalence) 46. we found that monthly rates can similarly be calculated by multiplying annual incidence rates by the monthly percentage of confirmed cases for each disease-state pair. differences between our calculations and this alternative method are no more than ±2%. we suspect that rounding errors in the average monthly percentages and differential population catchment areas for rate calculations cause these differences. as shown in supplemental table s1, the population of the surveillance catchment area changes over time. oftentimes, publicly available surveillance datasets, including foodnet fast, do not include location- and year-specific population catchment area estimates, which are needed for calculating rates from disease counts. as foodnet does not provide population catchment areas for calculating rates, it forces the user to assume that foodnet surveillance reaches the total population of a surveyed county (likely an overestimate), yet such an oversight is easy to fix. three collaborators confirmed our monthly rate calculations for quality control.

we applied negative binomial harmonic regression (nbhr) models, commonly used in the time series analysis of counts. while the use of nbhr models, specifically the inclusion of trigonometric harmonic oscillators, is similar to existing work on foodborne illnesses, these studies often incorporate harmonic oscillators only to adjust for or remove seasonal oscillations [47][48][49][50][51][52][53][54][55].
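the core of the peak-timing and amplitude estimation can be illustrated with a single-harmonic least-squares fit. this stdlib sketch is a simplification: the paper fits nbhr models and derives confidence intervals via the δ-method, neither of which is reproduced here, and the series below is synthetic. it does, however, follow the paper's amplitude definition (rate at peak divided by the median rate):

```python
import math
from statistics import median

def harmonic_fit(y, period=12):
    """closed-form least-squares fit of y_t = b0 + b1*cos(w*t) + b2*sin(w*t)
    over complete cycles, where the cosine and sine regressors are orthogonal."""
    n, w = len(y), 2 * math.pi / period
    b0 = sum(y) / n
    b1 = 2 / n * sum(v * math.cos(w * (t + 1)) for t, v in enumerate(y))
    b2 = 2 / n * sum(v * math.sin(w * (t + 1)) for t, v in enumerate(y))
    return b0, b1, b2

def peak_timing_and_amplitude(y, period=12):
    """peak month (0-12 scale, july ~ 7) and amplitude = peak rate / median rate."""
    b0, b1, b2 = harmonic_fit(y, period)
    phase = math.atan2(b2, b1) % (2 * math.pi)
    peak_month = phase * period / (2 * math.pi)
    peak_rate = b0 + math.hypot(b1, b2)
    return peak_month, peak_rate / median(y)

# synthetic 2-year series peaking in month 7 with a 1.5x peak-to-median ratio
w = 2 * math.pi / 12
y = [10 + 5 * math.cos(w * (t - 7)) for t in range(1, 25)]
peak_timing_and_amplitude(y)  # -> (~7.0, ~1.5)
```

with real count data the negative binomial likelihood replaces this linear fit, but the conversion from phase to peak month and the peak-to-median ratio are the same quantities reported in figs. 4, 6 and 8.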
we extended the use of harmonic terms and developed tools to estimate peak timing and amplitude 8,29,30. the δ-method we developed provides a systematic calculation of confidence intervals for peak timing and amplitude estimates based on the results of harmonic regression models. in the proposed approach, we present the amplitude as the ratio of the seasonal peak to the seasonal median, which offers robust estimation even for rare or highly sporadic infections. these features are not available when traditional models, like the auto-regressive integrated moving average (arima), are applied 56. measures of uncertainty enable formal testing and comparisons across diseases in the same location, or across locations for the same disease. in our previous work, we have demonstrated the broad utility of the δ-method and applications of peak timing and amplitude estimation in the context of epidemiological studies 6,29,[56][57][58][59][60][61][62]. we evaluate each state's cases individually as well as national cases as the sum of all states' cases. our analysis evaluated all cases reported to foodnet fast irrespective of demographic factors such as age group, sex, or ethnic group. future analyses could consider demographic factors available on the foodnet fast platform, such as age group (<5, 5-9, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70+ years), sex (male and female), and ethnic group (american indian and alaskan native, asian and pacific islander, black, multiple, white). to incorporate this information, our methodology for data extraction would need to be repeated for each subcategory or combination of categories desired (e.g. downloading 221 files for males and 221 files for females). future analyses could also consider differences in pathogen strain, which can only be obtained by extracting data for each pathogen-location-year combination (e.g.
221 files for each of the 9 diseases for each of 11 locations, or 21,879 files). foodnet fast, like many global disease surveillance databases, has no metadata describing missing data. foodnet fast reports missing counts using "n/a" for years when pathogens or locations were not under surveillance. however, there are also years when foodnet surveillance was live in a state but a pathogen is missing from the data download. we believe this missingness arises when a pathogen has 0 total cases for a given year; however, we cannot determine whether the absence of surveillance reporting is due to a breakdown in reporting or to 0 annual counts. without such specification, we set any year with "n/a" as missing due to no reported case information. when calculating peak timing and amplitude using the δ-method, we applied nbhr models adjusted for harmonic seasonal oscillators and three trend terms (linear, quadratic, and cubic). we selected the polynomial terms as an example, yet researchers can consider alternative techniques for measuring seasonality, such as splines, nonparametric regression, arima models, or their extensions. additionally, the cdc recommends using a mixed effects model when conducting time series analyses on foodnet fast data to account for differential population catchment areas and laboratory culture confirmation techniques pre- and post-2004 1,2,46. we focus on the analysis of individual states and diseases and adjust for population catchment variations by calculating monthly rates using county-level population estimates. future analyses could include detailed assessments of peak timing and amplitude across diseases, locations, and time periods. such analyses will help determine whether a synchronization of outbreak peaks occurs or whether social, economic, or environmental factors influence peak timing and amplitude.

future applications.
this analecta of visualizations is intended to communicate detailed information on foodborne outbreak trends and seasonality suitable for a general audience, public health professionals, stakeholders, and policymakers. future applications would involve the development of an interactive web-based platform allowing users to select the outcome, timeframe, and location of interest for educational training and research purposes. for example, public health researchers and practitioners could use this tool to generate insights related to long-term trends, changes in disease dynamics, or changes in populations at risk 62. information on when and where outbreaks are most common enables producers, distributors, and retailers to improve food safety practices to prevent these outbreaks. finally, this platform could aid policymakers in shaping public understanding of outbreak dynamics and in using scientific evidence to refine public health policies.

the analecta of our time series of monthly rates, 436 data visualizations, and code used for all calculations and visualizations are available on our website (https://sites.tufts.edu/naumovalabs/analecta/). data and code can be directly downloaded from the website, while visualizations are linked on the website to an external visualization repository. time series data and code are also available on figshare 36. visualizations on our website are provided in the same order as presented here: describing trends (fig. 1), examining seasonal signatures with the three standard techniques of line graphs, boxplots, and radar plots (fig. 2) and heatmaps (fig. 3), characterizing features of seasonality (fig. 4), drawing comparisons across locations for trends (fig. 5) and seasonal signatures (fig. 6), and drawing comparisons across pathogens for trends (fig. 7) and seasonal signatures (fig. 8). file downloads are available for trend, seasonal signature, and annual time series visualizations.
for images examining a single disease in a single location, downloads are named with a prefix abbreviating the location and a suffix abbreviating the pathogen (see supplementary table s3). for visualizations comparing multiple locations or diseases, the prefix "loc" indicates comparisons across locations while the prefix "dis" indicates comparisons across pathogens (see supplementary tables s4, s5). all statistical analyses were conducted using stata (se 15.1) software. all visualizations were created using r version 3.6.2 and tableau professional 2019.1 software. all software code is open access on our website (https://sites.tufts.edu/naumovalabs/analecta/) and figshare, and is available for public reuse with proper citation of this manuscript 36.

references:
- web-based infectious disease surveillance systems and public health perspectives: a systematic review
- detecting influenza epidemics using search engine query data
- innovation in observation: a vision for early outbreak detection
- use of unstructured event-based reports for global infectious disease surveillance
- algorithms for rapid outbreak detection: a research synthesis
- influenza seasonality: underlying causes and modeling theories
- flunet. global influenza surveillance and response system (gisrs)
- visual analytics for epidemiologists: understanding the interactions between age, time, and disease with multi-panel graphs
- incidence and trends of disease with pathogens transmitted commonly through food - foodborne diseases active surveillance network
- preliminary incidence and trends of disease with pathogens transmitted commonly through food - foodborne diseases active surveillance network, 10 us sites
- incidence and trends of diseases with pathogens transmitted commonly through food and the effect of increasing use of culture-independent diagnostic tests on surveillance - foodborne diseases active surveillance network
- disease with pathogens transmitted commonly through food and the effect of increasing use of culture-independent diagnostic tests on surveillance - foodborne diseases active surveillance network
- centers for disease control and prevention (cdc). foodborne diseases active surveillance network: foodnet 2013 surveillance report. national center for emerging and zoonotic diseases
- produce-associated foodborne disease outbreaks
- comparing characteristics of sporadic and outbreak-associated foodborne illnesses
- increasing campylobacter infections, outbreaks, and antimicrobial resistance in the united states
- climate change, extreme events and increased risk of salmonellosis in maryland, usa: evidence for coastal vulnerability
- systematic review and meta-analysis of the proportion of campylobacter cases that develop chronic sequelae
- the campylobacteriosis conundrum - examining the incidence of infection with campylobacter sp.
- foodnet fast: pathogen surveillance tool
- bacterial enteric diseases among older adults in the united states: foodborne diseases active surveillance network
- common source outbreaks of campylobacter disease in the usa
- temporal patterns of campylobacter contamination on chicken and their relationship to campylobacteriosis cases in the united states
- united states department of commerce
- united states department of commerce annual estimates of the resident population
- incorporating calendar effects to predict influenza seasonality
- the shift in seasonality of legionellosis in the usa
- a negative binomial model for time series of counts
- visualization analysis and design
- information dashboard design: the effective visual communication of data
- dynamic maps: a visual-analytic methodology for exploring spatio-temporal disease patterns
- effects of data aggregation on time series analysis of seasonal infections
- an analecta of visualizations for foodborne illness trends and seasonality
- foodborne diseases active surveillance network - 2 decades of achievements
- activities, achievements, and lessons learned during the first 10 years of the foodborne diseases active surveillance network
- the spatial structure of epidemic emergence: geographical aspects of poliomyelitis in northeastern usa
- prime mover or fellow traveller: 25-hydroxy vitamin d's seasonal variation, cardiovascular disease and death in the scottish heart health extended cohort (shhec)
- season and outdoor temperature in relation to detection and control of hypertension in a large rural chinese population
- climate change impact assessment of food- and waterborne diseases
- a systematic review and meta-analysis of the campylobacter spp. prevalence and concentration in household pets and petting zoo animals for use in exposure assessments
- global prevalence of asymptomatic norovirus disease: a meta-analysis
- foodnet fast: pathogen surveillance tool faq
- assessing the impact of environmental exposures and cryptosporidium disease in cattle on human incidence of cryptosporidiosis in southwestern ontario
- do contamination of and exposure to chicken meat and water drive the temporal dynamics of campylobacter cases?
- review of epidemiological studies of drinking-water turbidity in relation to acute gastrointestinal illness
- seasonality and the effects of weather on campylobacter diseases
- increase in reported cholera cases in haiti following hurricane matthew: an interrupted time series model
- complex temporal climate signals drive the emergence of human water-borne disease
- association between community socioeconomic factors, animal feeding operations, and campylobacteriosis incidence rates: foodborne diseases active surveillance network (foodnet)
- climate, human behaviour or environment: individual-based modelling of campylobacter seasonality and strategies to reduce disease burden
- temperature-driven campylobacter seasonality in england and wales
- rotavirus seasonality: an application of singular spectrum analysis and polyharmonic modeling
- mystery of seasonality: getting the rhythm of nature
- seasonal synchronization of influenza in the united states older adult population
- geographic variations and temporal trends of salmonella-associated hospitalization in the u.s. elderly, 1991-2004: a time series analysis of the impact of haccp regulation
- hospitalization of the elderly in the united states for nonspecific gastrointestinal diseases: a search for etiological clues
- assessing seasonality variation with harmonic regression: accommodations for sharp peaks

acknowledgements. this research was supported by the intelligence advanced research projects activity (iarpa), via 2017-17072100002. the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of odni, iarpa, or the u.s. government. the u.s. government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. the research was in part supported by the national science foundation (nsf) innovations in graduate education (ige) program, via grant award 1855886, and by the united states department of agriculture (usda) national institute of food and agriculture (nifa) cooperative state research, education, and extension service fellowship. we thank meghan hartwick for editorial and technical assistance.

author contributions. r.s. contributed to data extraction, formal analysis, and writing. b.z. contributed to data validation, conceptualization of visual aids, and visualization creation. t.m.a.f. contributed to data validation, review, and editing. e.n.n. contributed to methodology development, review and editing, supervision, project administration, and funding acquisition. the authors declare no competing interests.

supplementary information is available for this paper at https://doi.org/10.1038/s41597-020-00677-x. correspondence and requests for materials should be addressed to e.n.n. reprints and permissions information is available at www.nature.com/reprints. publisher's note: springer nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
key: cord-348436-mwitcseq
authors: bu, f.; steptoe, a.; mak, h. w.; fancourt, d.
title: time-use and mental health during the covid-19 pandemic: a panel analysis of 55,204 adults followed across 11 weeks of lockdown in the uk
date: 2020-08-21
doi: 10.1101/2020.08.18.20177345
doc_id: 348436
cord_uid: mwitcseq

there is currently major concern about the impact of the global covid-19 outbreak on mental health, but it remains unclear how individual behaviors could exacerbate or protect against adverse changes in mental health. this study aimed to examine the associations between specific activities (or time use) and mental health and wellbeing amongst people during the covid-19 pandemic. data were from the ucl covid-19 social study, a panel study collecting data weekly during the covid-19 pandemic. the analytical sample consisted of 55,204 adults living in the uk who were followed up for the strict 11-week lockdown period from 21st march to 31st may 2020. data were analyzed using fixed effects and arellano-bond models. we found that changes in time spent on a range of activities were associated with changes in mental health and wellbeing.
after controlling for bidirectionality, behaviors involving outdoor activities, including gardening and exercising, predicted subsequent improvements in mental health and wellbeing, while increased time spent following news about covid-19 predicted declines in mental health and wellbeing. these results are relevant to the formulation of guidance for people obliged to spend extended periods in isolation during health emergencies, and may help the public to maintain wellbeing during future pandemics.

a number of studies have demonstrated the negative psychological effects of quarantine, lockdowns and stay-at-home orders during epidemics including sars, h1n1 influenza, ebola, and covid-19 [1-6]. these effects include increases in stress, anxiety, insomnia, irritability, confusion, fear and guilt [4-6]. to date, much of the research on the mental health impact of enforced isolation during the pandemic has focused on the mass behavior of "staying at home" as the catalyst for these negative psychological effects, but there has been little exploration of how specific behaviors within the home might have differentially affected mental health, either exacerbating or protecting against adverse psychological experiences. re-allocation of time use has been observed after other social shocks in which people were suddenly forced to spend a significant amount of time at home, with individuals quickly having to adapt behaviorally to new circumstances and develop new routines. for example, during the 2008-2010 recession, adults in the us who lost their jobs reallocated 30% of their usual working time to "non-market work", such as home production activities (e.g. cleaning, washing), childcare, diy, shopping, and care of others, and spent 70% of the time on leisure activities, including socializing, watching television, reading, sleeping, and going out 7.
similarly, during the covid-19 pandemic, research suggests that while many individuals were able to continue working from home, others experienced furloughs or loss of employment, and many had to take on increased childcare responsibilities 8. further, individuals globally experienced a sharp curtailing of leisure activities, with shopping, day trips, visits to entertainment venues, face-to-face social interactions, and most activities in public spaces prohibited. analyses of google trends have suggested negative effects of these limitations on behaviors, showing a rise in search intensity for boredom and loneliness alongside searches for worry and sadness during the early weeks of lockdown in europe and the us 9. but it is not yet clear what effect these changes in behaviors had on mental health.

there is a substantial literature on the relationship between the ways people spend their time and mental health. certain behaviors have been proposed to exert protective effects on mental health. for instance, studies on leisure-time use show that taking up a hobby can have beneficial effects on alleviating depressive symptoms 10, engaging in physical activity can reduce levels of depression and anxiety and enhance quality of life [11-14], and broader leisure activities such as reading, listening to music, and volunteering can reduce depression and anxiety, increase personal empowerment and optimism, foster social connectedness, and improve life satisfaction [15-19]. however, other behaviors may have a negative influence on mental health. engaging in productive activities (e.g. work, housework, caregiving) has been found in certain circumstances to be associated with higher levels of depression 20, and sedentary screen time can increase the risk of depression 21, especially when watching news or browsing the internet in relation to stressful events.
this relationship between time use and mental health is bidirectional, as mental ill health has been shown to predict lower physical activity 22, lower motivation to engage in leisure activities 23, and increased engagement in screen time 24. however, there have been few data on the association between daily activities and mental health amongst people staying at home during the covid-19 pandemic. further, it is unclear whether activities that are usually beneficial for mental health had similar psychological benefits during the pandemic. this topic is pivotal, as understanding time use will help in formulating healthcare guidelines for individuals continuing to stay at home due to quarantine, shielding, or virus resurgences during the current global crisis and in potential future pandemics. therefore, this study involved analyses of longitudinal data from over 50,000 adults captured during the first two months of 'lockdown' due to the covid-19 pandemic in the uk. it explored the time-varying relationship between a wide range of activities and mental health, including productive activities, exercising, gardening, reading for pleasure, hobbies, communicating with others, following news on covid-19, and sedentary screen time. specifically, given research showing the inter-relationship yet conceptual distinction between different aspects of mental health, we focused on three different outcomes. anxiety combines negative mood states with physiological hyperarousal, depression combines negative mood states with anhedonia (loss of pleasure), and life satisfaction is an evaluation of how favorably one regards one's life overall 25,26. crucially, symptoms of anxiety and depression can coexist with positive feelings of subjective wellbeing such as life satisfaction, and even in the absence of any specific symptoms of mental illness, individuals can experience low levels of wellbeing 27.
this study therefore sought to disentangle differential associations between time use and multiple aspects of mental health. as these relationships can be complex and are likely bidirectional, the study explored (a) concurrent changes in behaviors and mental health, to identify associations over time, and (b) whether changes in behaviors temporally predicted changes in mental health, accounting for the possibility of reverse causality by using dynamic panel methods. participants: data were drawn from the ucl covid-19 social study, a large panel study of the psychological and social experiences of over 50,000 adults (aged 18+) in the uk during the covid-19 pandemic. the study commenced on 21st march 2020, involving online weekly data collection from participants for the duration of the covid-19 pandemic in the uk. whilst not random, the study has a well-stratified sample that was recruited using three primary approaches. first, snowballing was used, including promoting the study through existing networks and mailing lists (including large databases of adults who had previously consented to be involved in health research across the uk), print and digital media coverage, and social media. second, more targeted recruitment was undertaken focusing on (i) individuals from a low-income background, (ii) individuals with no or few educational qualifications, and (iii) individuals who were unemployed. third, the study was promoted to vulnerable groups, including adults with pre-existing mental illness, older adults, and carers, via partnerships with third-sector organisations. the study was approved by the ucl research ethics committee (12467/005) and all participants gave informed consent.
the full study protocol, including details on recruitment, retention, and weighting, is available at www.covidsocialstudy.org. in this study, we focused on participants who had at least two repeated measures between 21st march and 31st may 2020; the uk went into strict lockdown on 23rd march and remained largely in that situation until 1st june (although the lockdown measures started to be eased earlier in different uk nations). this provided us with data from 55,204 participants (total observations 338,083; mean observations per person 6.1, range 2-11). depression during the past week was measured using the patient health questionnaire (phq-9), a standard instrument for diagnosing depression in primary care 28. the questionnaire involves nine items, with responses ranging from "not at all" to "nearly every day"; higher overall scores indicate more depressive symptoms. anxiety during the past week was measured using the generalized anxiety disorder assessment (gad-7), a well-validated tool used to screen for and diagnose generalised anxiety disorder in clinical practice and research 29. there are 7 items with 4-point responses ranging from "not at all" to "nearly every day", with higher overall scores indicating more symptoms of anxiety. life satisfaction was measured by a single question on a scale of 0 to 10: "overall, in the past week, how satisfied have you been with your life?" thirteen measures of time use/activities were considered. these included (i) working (remotely or outside of the house), (ii) volunteering, (iii) household chores (e.g. cooking, cleaning, tidying, ironing, online shopping etc.) or caring for others, including friends, relatives or children, (iv) looking after children (e.g.
bathing, feeding, doing homework or playing with children), (v) gardening, (vi) exercising outside (going out for a walk or other gentle physical activity, or going out for moderate- or high-intensity activity such as running, cycling or swimming) or inside the home or garden (e.g. doing yoga, weights or indoor exercise), (vii) reading for pleasure, (viii) engaging in home-based arts or crafts activities (e.g. painting, creative writing, sewing, playing music etc.), engaging in digital arts activities (e.g. streaming a concert, virtual tour of a museum etc.), or doing diy, woodwork, metal work, model making or similar, (ix) communicating with family or friends (including phoning, video talking, or communicating via email, whatsapp, text or other messaging service), (x) following information on covid-19 (e.g. watching, listening, or reading news, or tweeting, blogging or posting about covid-19), (xi) watching tv, films, netflix etc. (not for information on covid-19), (xii) listening to the radio or music, and (xiii) browsing the internet, tweeting, blogging or posting content (not for information on covid-19). each measure was coded as rarely (<30 mins), low (30 mins-2 hrs) or high (>2 hrs), except for low-intensity activities such as volunteering, gardening, exercising, reading, and arts/crafts, which were coded as none, low (<30 mins) or high (>30 mins). we used a 'stylized questions' approach in which participants were asked to focus on a single day and consider how much time they spent on each activity on the list.
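the coding scheme described above can be sketched as a small helper function; the activity labels and the handling of exact boundary values (e.g. treating exactly 30 minutes as 'high' for low-intensity activities) are illustrative assumptions, not taken from the study's materials.

```python
# Map minutes of daily activity to the three-level coding described in the
# text. Most activities: rarely (<30 min), low (30 min - 2 h), high (>2 h).
# Low-intensity activities (volunteering, gardening, exercising, reading,
# arts/crafts): none (0 min), low (<30 min), high (>=30 min, an assumed
# boundary since the text only says ">30mins").

LOW_INTENSITY = {"volunteering", "gardening", "exercising", "reading", "arts_crafts"}

def code_time_use(activity, minutes):
    """Return the categorical code for 'minutes' spent on 'activity'."""
    if activity in LOW_INTENSITY:
        if minutes == 0:
            return "none"
        return "low" if minutes < 30 else "high"
    if minutes < 30:
        return "rarely"
    return "low" if minutes <= 120 else "high"

print(code_time_use("gardening", 45), code_time_use("working", 90))
```

the categorical codes, rather than raw minutes, would then enter the panel models as time-varying regressors.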
however, given concerns about the cognitive burden of focusing on a 'typical' day (which involves aggregating information from multiple days and averaging), we asked participants to focus just on the last weekday (either the day before, or the last day prior to the weekend if participants answered on a saturday or sunday). this approach follows aspects of the 'time diary' approach, but we chose a weekday to remove variation in responses due to whether participants took part on weekends 30. data analyses started with standard fixed-effects (fe) models. fe analysis has the advantage of controlling for unobserved individual heterogeneity and therefore eliminating potential biases in the estimates of time-variant variables in panel data. it uses only within-individual variation, which can be used to examine how change in time use is related to change in mental health within individuals over time. as individuals are compared with themselves over time, all time-invariant factors (such as gender, age, income, education, area of living etc.) are accounted for automatically, even if unobserved. compared with standard regression methods, it allows causal inference to be made under weaker assumptions in observational studies. however, fe analysis does not address the direction of causality. given this limitation, we further employed the arellano-bond (ab) approach 31, which uses lags of the outcome variable (and regressors) as instruments in a first-difference model (eq. 1). the ab model uses y_{it-2} and deeper lags as instruments for the lagged difference y_{it-1} - y_{it-2}. the rationale is that these lagged outcomes are unrelated to the error term in first differences, e_{it} - e_{it-1}, under the testable assumption that the e_{it} are serially uncorrelated. further, we treated the regressors x_{it} as endogenous (E(x_{it} e_{is}) != 0 for s <= t, E(x_{it} e_{is}) = 0 for s > t), so x_{it} should be instrumented by x_{it-2}, x_{it-3} and potentially deeper lags. the ab models were estimated using optimal generalized method of moments (gmm).
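the within-individual logic of the fe models can be illustrated with a minimal demeaning estimator; this is a generic numpy sketch of the technique, not the authors' estimation code (the paper's models were fitted in stata, with the ab models via xtabond2), and the toy data are hypothetical.

```python
import numpy as np

def within_estimator(y, x):
    """Fixed-effects (within) estimator for a balanced panel.

    y, x: arrays of shape (n_individuals, n_periods).
    Demeaning each individual's series removes all time-invariant
    characteristics, observed or not, so the slope is identified only
    from within-person change, as described in the text.
    """
    y_dm = y - y.mean(axis=1, keepdims=True)
    x_dm = x - x.mean(axis=1, keepdims=True)
    return (x_dm * y_dm).sum() / (x_dm ** 2).sum()

# Toy check: outcome = individual-specific level + 2 * x (no noise), so the
# unobserved heterogeneity cancels and the within slope recovers exactly 2.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 6))
alpha = 10 * rng.normal(size=(100, 1))   # unobserved individual effects
y = alpha + 2.0 * x
print(round(within_estimator(y, x), 6))  # -> 2.0
```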
to account for the non-random nature of the sample, all data were weighted to the proportions of gender, age, ethnicity, education and country of living obtained from the office for national statistics 32. to address multiple testing, we provide adjusted p values (q values) controlling the positive false discovery rate, generated using the 'qqvalue' package 33. all analyses were carried out using stata v15, and the ab models were fitted using the user-written command xtabond2 34. demographic characteristics of participants are shown in table s1 in the supplement. as shown in table 1, the within variation accounted for about 15% of the overall variation for depression and 16% for anxiety. anxiety explained 56% of the variance in depression (r=0.75, p<.001) and 27% of the variance in life satisfaction (r=-0.52, p<.001), while depression explained 32% of the variance in life satisfaction (r=-0.57, p<.001). there were also substantial changes in the time-use/activity variables (figure 1): over 60% of participants changed status in all activities, except for volunteering (23%) and childcare (21%). increases in time spent working, doing housework, gardening, exercising, reading, engaging in hobbies, and listening to the radio/music were all associated with decreases in depressive symptoms (table 2, model i-i). the largest decreases in depression were seen for participants who increased their exercise levels to more than 30 minutes per day, who increased their time gardening to more than 30 minutes per day, or who increased their work to more than 2 hours per day.
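the multiple-testing correction can be illustrated with a benjamini-hochberg-style computation of adjusted p values; note that the stata package cited in the text implements positive-fdr q values, so this numpy sketch is a closely related approximation rather than that package's exact method.

```python
import numpy as np

def bh_qvalues(pvals):
    """Benjamini-Hochberg adjusted p-values (a common q-value variant).

    Sort the p-values, scale each by m/rank, enforce monotonicity from the
    largest downwards, then map the results back to the original order.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    q_sorted = np.minimum.accumulate(scaled[::-1])[::-1]
    q = np.empty(m)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q

print(bh_qvalues([0.01, 0.04, 0.03, 0.20]))
```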
conversely, increasing time spent following covid-19 news or doing other screen-based activities (either watching tv or internet use/social media) was associated with an increase in depressive symptoms. when examining the direction of the relationship (table 3, model i-ii), increases in gardening, exercising, reading, and listening to the radio/music predicted subsequent decreases in depressive symptoms. however, increases in time spent following news on covid-19 predicted increases in depressive symptoms, as did increases in time spent looking after children and moderate increases in communicating with others via video, calls or messaging. increases in time spent gardening, exercising, reading and other hobbies were all associated with decreases in anxiety, while increasing time spent following covid-19 news and communicating remotely with family/friends was associated with increases in anxiety (table 2, model ii-i). the largest decrease in anxiety was seen for participants who increased their time on gardening, exercising or reading to 30 minutes or more per day. when looking at the direction of the relationship (table 3, model ii-ii), increases in gardening predicted a subsequent decrease in symptoms of anxiety, but increasing time spent following news on covid-19 predicted an increase in anxiety. turning to life satisfaction, increases in time spent working, volunteering, doing housework, gardening, exercising, reading, engaging in hobbies, communicating remotely with family/friends, and listening to the radio/music were all associated with an increase in life satisfaction, while increasing
time spent following covid-19 news was associated with a decrease in life satisfaction (table 2, model iii-i). when looking at the direction of the relationship (table 3, model iii-ii), increases in volunteering, gardening and exercising predicted a subsequent increase in life satisfaction, but increasing time spent following news on covid-19, working, and looking after children predicted a decrease in life satisfaction. we carried out sensitivity analyses excluding keyworkers, who might not have been isolated at home in the same way and therefore might have had different patterns of behavior during lockdown. the results were materially consistent with the main analysis (see the supplementary material). this is the first study to examine the impact of time use on mental health amongst people during the covid-19 pandemic. time spent on work, housework, gardening, exercising, reading, hobbies, communicating with friends/family, and listening to music was associated with improvements in mental health and wellbeing, while following the news on covid-19 (even for only half an hour a day) and watching television excessively were associated with declines in mental health and wellbeing. whilst the relationship between time use and mental health is bidirectional, when exploring the direction of the relationship using lagged models, behaviors involving outdoor activities, including gardening and exercising, predicted subsequent improvements in mental health and wellbeing, while time spent following the news about covid-19 predicted declines in mental health and wellbeing. our findings of negative associations between following the news on covid-19 and mental health echo a cross-sectional study from china showing that social media exposure during the pandemic is associated with depression and anxiety 1.
the fact that exposure to covid-19 news is largely screen-based, and that high levels of television watching or social media engagement unrelated to covid-19 were also found to be associated with depression, could suggest that this finding is more about screens than about the news specifically 35. however, the association with following the news on covid-19 was independent of these other screen behaviors and was found even at relatively low levels of exposure (30 mins-2 hours). further, there have been wider discussions of the negative impact of news during the pandemic, including concerns about the proliferation of misinformation and sensationalised stories on social media 36, and about information overload, whereby the amount of information exceeds people's ability to process it 37. it is notable that these associations were found for all measures of mental ill-health and wellbeing, and even in lagged models that attempted to remove the effects of reverse causality, suggesting the strength of the relationship with mental health. however, other activities were shown to have protective associations with mental health. in particular, outdoor activities such as gardening and exercise were associated with better levels of mental health and wellbeing across all measures, with many of these results maintained in lagged models. these results echo many previous studies into the benefits of outdoor activities 10-13. exercise (including gentle activities such as gardening) can
affect mental health via physiological mechanisms (such as reducing blood pressure), neuroendocrine mechanisms (such as reducing levels of cortisol involved in the stress response), neuroimmune mechanisms (including reducing levels of inflammation associated with depressive symptoms and increasing the synthesis and release of neurotransmitters and neurotrophic factors associated with neurogenesis and neuroplasticity), and psychological mechanisms (including improving self-esteem, autonomy and mood) 38. particularly during lockdown, such activities (which provided opportunities to leave the home) may have helped by providing physical and mental separation from fatiguing or stressful situations at home, offering a change of scenery, and providing a feeling of being connected to something larger 39. hobbies such as listening to music, reading, and engaging in arts and other projects were also associated with better mental health across all measures. this builds on a substantial literature showing the benefits of such activities in reducing depression and anxiety, building a sense of self-worth and self-esteem, fostering self-empowerment, and supporting resilience 16. the associations presented here show that these activities remained beneficial to mental health during lockdown. however, these associations were not retained as consistently across lagged models, suggesting that they may be linked more bidirectionally with mental health, with changes in mental health also driving individuals' motivations to engage with these activities. there are several other noteworthy findings from these analyses. first, volunteering was associated with higher levels of life satisfaction, including across the lagged models that explored the direction of association, but not with other aspects of mental health.
previous studies have suggested psychological benefits of volunteering, but our findings suggest that it played a specific role in supporting evaluative wellbeing during the pandemic 17,19. second, both work and housework had some protective associations when looking at parallel changes with mental health over time. however, in the lagged models, housework does not appear to have been a precursor to changes in mental health, whilst frequent working was associated with lower life satisfaction, independent of other types of predictors. this echoes research highlighting working from home as a cause of stress for many people during the covid-19 pandemic 8. similarly, looking after children was not associated with changes in mental health in our main models, but increases to high volumes of childcare were associated with higher levels of depression and lower life satisfaction over time. this could reflect strain from spending substantial amounts of time on childcare or, as such increases may reflect changes in other aspects of home life (such as a partner having to reduce childcare to return to work), it could also reflect other stressors that may in fact have been driving changes in mental health. finally, communicating with family/friends had mixed effects in our main models, but when exploring the direction of association, it was in fact associated with higher levels of depression. this could be explained by data from previous studies showing that while face-to-face interactions can decrease loneliness (which is associated with mental health, including depression), communication over the telephone (or other digital means) can in certain circumstances increase loneliness, perhaps because it is perceived as a less emotionally rewarding experience 40. this study has a number of strengths, including its large sample size, repeated weekly follow-up over the 11 weeks of uk lockdown, and the robust statistical approaches applied.
however, the ucl covid-19 social study did not use a random sample. nevertheless, the study does have a large sample size with wide heterogeneity, including good stratification across all major socio-demographic groups, and analyses were weighted on the basis of population estimates of core demographics, with the weighted data showing good alignment with national population statistics and another large-scale nationally representative social survey. but we cannot rule out the possibility that the study inadvertently attracted individuals experiencing more extreme psychological experiences, with subsequent weighting for demographic factors failing to fully compensate for these differences. this study looked at adults in the uk in general, but it is likely that "lockdown" or "stay at home" orders had different impacts on time use for people with different socio-demographic characteristics, for example age and gender. while our analyses statistically took account of all stable participant characteristics (even if unobserved) by comparing participants against themselves, future studies could examine how the relationship between time use and mental health differs by individuals' characteristics and backgrounds. we also lack data on how behaviors during lockdown compared to behaviors prior to covid-19, so it remains unknown whether changes such as increasing time spent on childcare or leisure activities were unusual for participants and therefore not part of their usual coping strategies for their mental health. finally, we asked individuals to focus on the last available weekday in answering the questions on time use.
whilst this has been shown to improve the quality and accuracy of recollection, it does mean that variations in time use across the entire week are not captured. further, whilst we standardised our questions to the last weekday and applied the same approach to all participants consistently across lockdown (a well-recognised approach to tracking time use, as discussed in the methods section), it is nevertheless possible that behaviors across weekends may also have influenced mental health independent of weekday behaviors. overall, our analyses provide the first comprehensive exploration of the relationship between time use and mental health during lockdowns due to the covid-19 pandemic. many behaviors commonly identified as important for good mental health, such as hobbies, listening to music, and reading for pleasure, were found to be associated with lower symptoms of mental illness and higher wellbeing. these results were seen when exploring parallel changes in time use and mental health, attesting to the importance both of encouraging health-promoting behaviors to support mental health, and of understanding mental health when setting guidelines on healthy behaviors during a pandemic. we also explored the direction of the relationship, finding that changes in outdoor activities including exercise and gardening were strongly associated with subsequent changes in mental health, whereas increasing exposure to news on covid-19 was strongly associated with declines in mental health. these results are important in formulating guidance for people likely to experience enforced isolation for months to come (whether due to quarantine, self-isolation or shielding) and are also key in preparing for future pandemics so that more targeted advice can be given to individuals to help them stay well at home.
medrxiv preprint (not certified by peer review); this version posted august 21, 2020; doi: https://doi.org/10.1101/2020.08.18.20177345; made available under a cc-by 4.0 international license.
the impact of covid-19 epidemic declaration on psychological consequences: a study on active weibo users
the depressive state of denmark during the covid-19 pandemic
anxiety and depression among general population in china at the peak of the covid-19 epidemic
the experience of quarantine for individuals affected by sars in toronto
survey of stress reactions among health care workers involved with the sars outbreak
the psychological impact of quarantine and how to reduce it: rapid review of the evidence
time use during the great recession
lockdown in the uk: why women and especially single mothers are disadvantaged
assessing the impact of the coronavirus lockdown on unhappiness, loneliness, and boredom using google trends
fixed-effects analyses of time-varying associations between hobbies and depression in a longitudinal cohort study: support for social prescribing?
gardening is beneficial for health: a meta-analysis
physical activity for cognitive and mental health in youth: a systematic review of mechanisms
growing minds: evaluating the effect of gardening on quality of life and physical activity level of older adults
self-esteem, self-efficacy, and social connectedness as mediators of the relationship between volunteering and well-being
who health evidence synthesis report. cultural contexts of health: the role of the arts in improving health and wellbeing in the who european region
volunteering and well-being: do self-esteem, optimism, and perceived control mediate the relationship?
why fiction may be twice as true as fact: fiction as cognitive and emotional simulation

key: cord-132843-ilxt4b6g
authors: zhao, liang
title: event prediction in the big data era: a systematic survey
date: 2020-07-19
journal: nan
doi: nan
sha:
doc_id: 132843
cord_uid: ilxt4b6g

events are occurrences in specific locations, time, and semantics that nontrivially impact either our society or nature, such as civil unrest, system failures, and epidemics.
it is highly desirable to be able to anticipate the occurrence of such events in advance in order to reduce the potential social upheaval and damage caused. event prediction, which has traditionally been prohibitively challenging, is now becoming a viable option in the big data era and is thus experiencing rapid growth. there is a large amount of existing work that focuses on addressing the challenges involved, including heterogeneous multi-faceted outputs, complex dependencies, and streaming data feeds. most existing event prediction methods were initially designed to deal with specific application domains, though the techniques and evaluation procedures utilized are usually generalizable across different domains. however, it is imperative yet difficult to cross-reference the techniques across different domains, given the absence of a comprehensive literature survey for event prediction. this paper aims to provide a systematic and comprehensive survey of the technologies, applications, and evaluations of event prediction in the big data era. first, systematic categorization and summary of existing techniques are presented, which facilitate domain experts' searches for suitable techniques and help model developers consolidate their research at the frontiers. then, comprehensive categorization and summary of major application domains are provided. evaluation metrics and procedures are summarized and standardized to unify the understanding of model performance among stakeholders, model developers, and domain experts in various application domains. finally, open problems and future directions for this promising and important domain are elucidated and discussed.
accurate anticipation of future events enables one to maximize the benefits and minimize the losses associated with some event in the future, bringing huge benefits for both society as a whole and individual members of society in key domains such as disease prevention [167] , disaster management [140] , business intelligence [226] , and economics stability [24] . "prediction is very difficult, especially if it's about the future. " -niels bohr, 1970 event prediction has traditionally been prohibitively challenging across different domains, due to the lack or incompleteness of our knowledge regarding the true causes and mechanisms driving event occurrences in most domains. with the advent of the big data era, however, we now enjoy unprecedented opportunities that open up many alternative approaches for dealing with event prediction problems, sidestepping the need to develop a complete understanding of the underlying mechanisms of event occurrence. based on large amounts of data on historical events and their potential precursors, event prediction methods typically strive to apply predictive mapping to build on these observations to predict future events, utilizing predictive analysis techniques from domains such as machine learning, data mining, pattern recognition, statistics, and other computational models [16, 26, 92] . event prediction is currently experiencing extremely rapid growth, thanks to advances in sensing techniques (physical sensors and social sensors), prediction techniques (artificial intelligence, especially machine learning), and high performance computing hardware [78] . event prediction in big data is a difficult problem that requires the invention and integration of related techniques to address the serious challenges caused by its unique characteristics, including: 1) heterogeneous multi-output predictions. 
event prediction methods usually need to predict multiple facets of events, including their time, location, topic, intensity, and duration, each of which may utilize a different data structure [171]. this creates unique challenges, including how to jointly predict these heterogeneous yet correlated facets of outputs. due to the rich information in the outputs, label preparation is usually a highly labor-intensive task performed by human annotators, with automatic methods introducing numerous errors in items such as event coding. so, how can we improve the label quality as well as the model robustness under corrupted labels? the multi-faceted nature of events makes event prediction a multi-objective problem, which raises the question of how to properly unify the prediction performance on different facets. it is also challenging to verify whether a predicted event "matches" a real event, given that the various facets are seldom, if ever, 100% accurately predicted. so, how can we set up the criteria needed to discriminate between a correct prediction ("true positive") and a wrong one ("false positive")? 2) complex dependencies among the prediction outputs. beyond conventional isolated tasks in machine learning and predictive analysis, in event prediction the predicted events can correlate to and influence each other [142]. for example, an ongoing traffic incident event could cause congestion on the current road segment in the first 5 minutes but then lead to congestion on other contiguous road segments 10 minutes later. global climate data might indicate a drought in one location, which could then cause famine in the area and lead to a mass exodus of refugees moving to another location. so, how should we consider the correlations among future events? 3) real-time stream of prediction tasks. event prediction usually requires continuous monitoring of the observed input data in order to trigger timely alerts of future potential events [182].
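one simple way to operationalize the "true positive" criterion raised above is tolerance-based matching on each predicted facet; the tolerances, planar coordinates, and event fields below are illustrative assumptions rather than a standard drawn from the surveyed literature.

```python
from math import hypot

def matches(pred, actual, time_tol=24.0, dist_tol=50.0):
    """Count a predicted event as a true positive if it shares the actual
    event's semantic type and falls within a temporal tolerance (hours)
    and spatial tolerance (km) of it. Events are dicts with 't' (hours),
    'xy' (planar km coordinates, a simplification), and 'type'.
    """
    same_type = pred["type"] == actual["type"]
    close_in_time = abs(pred["t"] - actual["t"]) <= time_tol
    close_in_space = hypot(pred["xy"][0] - actual["xy"][0],
                           pred["xy"][1] - actual["xy"][1]) <= dist_tol
    return same_type and close_in_time and close_in_space

pred = {"t": 100.0, "xy": (10.0, 20.0), "type": "protest"}
real = {"t": 110.0, "xy": (30.0, 40.0), "type": "protest"}
print(matches(pred, real))  # 10 h apart, ~28.3 km apart -> True
```

precision and recall can then be computed by pairing each real event with at most one matching prediction.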
however, during this process the trained prediction model gradually becomes outdated, as real-world events change dynamically and concept and distribution drift are inevitable. for example, in september 2008, 21% of the united states population were social media users, including 2% of those over 65. however, by may 2018, 72% of the united states population were social media users, including 40% of those over 65 [37] . not only the data distribution but also the number of features and input data sources can vary in real time. hence, it is imperative to periodically upgrade the models, which raises further questions concerning how to train models on non-stationary distributions while balancing the cost (such as computation cost and data annotation cost) and timeliness. in addition, event prediction involves many other common yet open challenges, such as imbalanced data (for example, data that lacks positive labels in rare event prediction) [206] , data corruption in inputs [248] , the uncertainty of predictions [25] , longer-term predictions (including how to trade off prediction accuracy and lead time) [27] , trade-offs between precision and recall [171] , and how to deal with high dimensionality [247] and sparse data involving many unrelated features [208] . event prediction problems provide unique testbeds for jointly handling such challenges. in recent years, a considerable amount of research has been devoted to event prediction technique development and applications in order to address the aforementioned challenges [157] . recently, there has been a surge of research that both proposes and applies new approaches in numerous domains, though event prediction techniques are generally still in their infancy. most existing event prediction methods have been designed for a specific application domain, but their approaches are usually general enough to handle problems in other application domains.
unfortunately, it is difficult to cross-reference these techniques across different application domains serving totally different communities. moreover, the quality of event prediction results requires sophisticated and specially designed evaluation strategies due to the subject matter's unique characteristics, for example its multi-objective nature (e.g., accuracy, resolution, efficiency, and lead time) and its heterogeneous, multi-output prediction results. as yet, however, we lack a systematic standardization and comprehensive summarization approach with which to evaluate the various event prediction methodologies that have been proposed. this absence of a systematic summary and taxonomy of existing techniques and applications in event prediction causes major problems for those working in the field, who lack clear information on the existing bottlenecks, traps, open problems, and potentially fruitful future research directions. to overcome these hurdles and facilitate the development of better event prediction methodologies and applications, this survey paper aims to provide a comprehensive and systematic review of the current state of the art for event prediction in the big data era. the paper's major contributions include: • a systematic categorization and summarization of existing techniques. existing event prediction methods are categorized according to their event aspects (time, location, and semantics), problem formulation, and corresponding techniques to create the taxonomy of a generic framework. relationships, advantages, and disadvantages among different subcategories are discussed, along with details of the techniques under each subcategory. the proposed taxonomy is designed to help domain experts locate the most useful techniques for their targeted problem settings. • a comprehensive categorization and summarization of major application domains. the first taxonomy of event prediction application domains is provided.
the practical significance and problem formulation are elucidated for each application domain or subdomain, enabling it to be easily mapped to the proposed technique taxonomy. this will help data scientists and model developers to search for additional application domains and datasets that they can use to evaluate their newly proposed methods, and at the same time expand their advanced techniques to encompass new application domains. • standardized evaluation metrics and procedures. due to the nontrivial structure of event prediction outputs, which can contain multiple fields such as time, location, intensity, duration, and topic, this paper proposes a set of standard metrics with which to standardize existing ways to pair predicted events with true events. then additional metrics are introduced and standardized to evaluate the overall accuracy and quality of the predictions, to assess how close the predicted events are to the real ones. • an insightful discussion of the current status of research in this area and future trends. based on the comprehensive and systematic survey and investigation of existing event prediction techniques and applications presented here, an overall picture and the shape of the current research frontiers are outlined. the paper concludes by presenting fresh insights into the bottlenecks, traps, and open problems, as well as a discussion of possible future directions. this section briefly outlines previous surveys in various domains that have some relevance to event prediction in big data in three categories, namely: 1. event detection, 2. predictive analytics, and 3. domain-specific event prediction. event detection has been an extensively explored domain over many years. its main purpose is to detect historical or ongoing events rather than to predict as yet unseen events in the future [181, 222] .
event detection typically focuses on pattern recognition [26] , anomaly detection [92] , and clustering [92] , tasks that are very different from those in event prediction. there have been several surveys of research in this domain in the last decade [9, 15, 63, 146] . for example, deng et al. [63] and atefeh and khreich [15] provided overviews of event extraction techniques in social media, while michelioudakis et al. [146] presented a survey of event recognition with uncertainty. alevizos et al. [9] provided a comprehensive literature review of event recognition methods using probabilistic methods. predictive analysis covers the prediction of target variables given a set of dependent variables. these target variables are typically homogeneous scalar or vector data for describing items such as economic indices, housing prices, or sentiments. the target variables may not necessarily be values in the future. larose [120] provides a good tutorial and survey for this domain. predictive analysis can be broken down into subdomains such as structured prediction [26] , spatial prediction [105] , and sequence prediction [88] , enabling users to handle different types of structure for the target variable. fülöp et al. [81] provided a survey and categorization of applications that utilize predictive analytics techniques to perform event processing and detection, while jiang [105] focused on spatial prediction methods that predict indices that have spatial dependency. bakır et al. [17] summarized the literature on predicting structural data such as geometric objects and networks, and arias et al. [12] , phillips et al. [163] , and yu and kak [232] all proposed techniques for predictive analysis using social data.
as event prediction methods are typically motivated by specific application domains, there are a number of surveys of event prediction for domains such as flood events [47] , social unrest [29] , wind power ramp forecasting [76] , tornado events [68] , temporal events without location information [87] , online failures [182] , and business failures [6] . however, in spite of its promise and its rapid growth in recent years, the domain of event prediction in big data still suffers from the lack of a comprehensive and systematic literature survey covering all its various aspects, including relevant techniques, applications, evaluations, and open problems. the remainder of this article is organized as follows. section 2 presents generic problem formulations for event prediction and the evaluation of event prediction results. section 3 then presents a taxonomy and comprehensive description of event prediction techniques, after which section 4 categorizes and summarizes the various applications of event prediction. section 5 lists the open problems and suggests future research directions, and the survey concludes with a brief summary in section 6. this section begins by examining the generic denotation and formulation of the event prediction problem (section 2.1) and then considers ways to standardize event prediction evaluations (section 2.2). an event refers to a real-world occurrence that happens at a specific time and location with a specific semantic topic [222] . we can use y = (t, l, s) to denote an event, where its time t ∈ T, its location l ∈ L, and its semantic meaning s ∈ S. here, T, L, and S represent the time domain, location domain, and semantic domain, respectively. note that these domains have very general meanings that cover a wide range of types of entities.
for example, the location l can include any features that can be used to locate the place of an event in terms of a point or a neighborhood in either euclidean space (e.g., a coordinate or geospatial region) or non-euclidean space (e.g., a vertex or subgraph in a network). similarly, the semantic domain S can contain any type of semantic features that are useful when elaborating the semantics of an event's various aspects, including its actors, objects, actions, magnitude, textual descriptions, and other profiling information. for example, ("11am, june 1, 2019", "hermosillo, sonora, mexico", "student protests") and ("june 1, 2010", "berlin, germany", "red cross helps pandemics control") denote the time, location, and semantics for two events, respectively. an event prediction system requires inputs that could indicate future events, called event indicators; these could contain both critical information on events that precede the future event, known as precursors, and irrelevant information [86, 171] . event indicator data can be denoted as X ⊆ T × L × F, where F is the domain of the features other than location and time. if we denote the current time as t_now and define the past time and future time as T− ≡ {t | t ≤ t_now, t ∈ T} and T+ ≡ {t | t > t_now, t ∈ T}, respectively, the event prediction problem can now be formulated as follows: definition 2.1 (event prediction). given the event indicator data X ⊆ T− × L × F and historical event data Y_0 ⊆ T− × L × S, event prediction is a process that outputs a set of predicted future events Ŷ ⊆ T+ × L × S, such that each predicted future event ŷ = (t, l, s) ∈ Ŷ satisfies t > t_now. not every event prediction method necessarily focuses on predicting all three domains of time, location, and semantics simultaneously, but may instead predict any part of them.
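as a concrete illustration, the tuple denotation y = (t, l, s) and the constraint t > t_now from definition 2.1 can be sketched in a few lines of python; the concrete field types and names (`time`, `location`, `semantics`) are illustrative choices, not part of the formal definition:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """one event y = (t, l, s): time, location, and semantics."""
    time: float       # t, e.g., a timestamp (illustrative type choice)
    location: tuple   # l, e.g., (lat, lon) or a graph-node identifier
    semantics: str    # s, e.g., an event-type label

def future_events(candidates, t_now):
    """keep only candidate events satisfying t > t_now."""
    return [e for e in candidates if e.time > t_now]
```

a prediction set Ŷ is then simply a list of such events filtered against the current time.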
for example, when predicting a clinical event such as the recurrence of disease in a patient, the event location might not always be meaningful [167] ; when predicting outbreaks of seasonal flu, the semantic meaning is already known and the focus is the location and time [27] ; and when predicting political events, sometimes the location, time, and semantics (e.g., event type, participant population type, and event scale) are all necessary [171] . moreover, due to the intrinsic nature of time, location, and semantic data, their prediction techniques and evaluation metrics are necessarily different, as described in the following. event prediction evaluation essentially investigates the goodness of fit of a set of predicted events Ŷ against real events Y. unlike the outputs of conventional machine learning models, such as the simple scalar values used to indicate class types in classification or numerical values in regression, the outputs of event prediction are entities with rich information. before we evaluate the quality of prediction, we need to first determine the pairs of predictions and labels that will be used for the comparison. hence, we must first optimize the process of matching predictions and real events (section 2.2.1) before evaluating the prediction error and accuracy (section 2.2.2). 2.2.1 matching predicted events and real events. the following two types of matching are typically used: • prefixed matching: the predicted events will be matched with the corresponding ground-truth real events if they share some key attributes. for example, for event prediction at a particular time and location point, we can evaluate the prediction against the ground truth for that time and location.
this type of matching is most common when each of the prediction results can be uniquely distinguished along the predefined attributes (for example, location and time) that have a limited number of possible values, so that one-to-one matching between the predicted and real events is easily achieved [1, 244] . for example, to evaluate the quality of a predicted event on june 1, 2019 in san francisco, usa, the true event occurrence on that date in san francisco can be used for the evaluation. • optimized matching: in situations where one-to-one matching is not easily achieved for any event attribute, the quality of the match between the set of predicted events and the set of real events may need to be assessed via an optimized matching strategy [168, 171] . for example, consider two predictions, prediction 1: ("9am, june 4, 2019", "nogales, sonora, mexico", "worker strike"), and prediction 2: ("11am, june 1, 2019", "hermosillo, sonora, mexico", "student protests"). the two ground-truth events that these can usefully be compared with are real event 1: ("9am, june 1, 2019", "hermosillo, sonora, mexico", "teacher protests"), and real event 2: ("june 4, 2019", "navojoa, sonora, mexico", "general-population protest"). none of the predictions is an exact match for the attributes of a real event, so we will need to find a "best" matching among them, which in this case is between prediction 1 and real event 2 and between prediction 2 and real event 1. this type of matching allows some degree of inaccuracy in the matching process by quantifying the distance between the predicted and real events along all the attribute dimensions. the distance metrics are typically either euclidean distance [105] or some other distance metric [92] .
some researchers have hired referees to manually check the similarity of semantic meanings [168] , but another way is to use event coding to code the events into an event type taxonomy and then consider a match to have been achieved if the event type matches [49] . based on the distance between each pair of predicted and real events, the optimal matching will be the one that results in the smallest average distance [157] . however, if there are m predicted events and n real events, there can be as many as 2^(m·n) possible ways of matching, making it prohibitively difficult to find the optimal solution by enumeration. moreover, there could be different rules for matching. for example, the "multiple-to-multiple" rule shown in figure 2(a) allows one predicted (real) event to match multiple real (predicted) events [170] , while "bipartite matching" only allows one-to-one matching between predicted and real events (figure 2(b)). "non-crossing matching" requires that the real events matched by the predicted events follow the same chronological order (figure 2(c)). in order to utilize any of these types of matching, researchers have suggested using event matching optimization to learn the optimal set of "(real event, predicted event)" pairs [151] . the effectiveness of the event predictions is evaluated in terms of two indicators: 1) goodness of matching, which evaluates performance metrics such as the number and percentage of matched events [26] , and 2) quality of matched predictions, which evaluates how close the predicted event is to the real event for each pair of matched events [171] . • goodness of matching. a true positive means a real event has been successfully matched by a predicted event; if a real event has not been matched by any predicted event, then it is called a false negative; and a false positive means a predicted event has failed to match any real event, which is referred to as a false alarm.
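for small event sets, the optimized bipartite matching described above can be sketched by brute-force enumeration of one-to-one assignments; the attribute encoding and the unit-weighted distance below are simplifying assumptions (real systems weight time, location, and semantic distances differently, and use polynomial-time assignment algorithms rather than enumeration):

```python
from itertools import permutations

def event_distance(pred, real):
    """toy distance across (time, city, type) attributes; the keys,
    encodings, and equal weights are illustrative assumptions."""
    t_d = abs(pred["time"] - real["time"])            # e.g., days apart
    l_d = 0.0 if pred["city"] == real["city"] else 1.0
    s_d = 0.0 if pred["type"] == real["type"] else 1.0
    return t_d + l_d + s_d

def best_bipartite_matching(preds, reals):
    """brute-force one-to-one matching minimizing total distance.
    feasible only for small sets; assumes len(preds) <= len(reals),
    otherwise only the first k predictions are matched."""
    k = min(len(preds), len(reals))
    best_cost, best_pairs = float("inf"), []
    for perm in permutations(range(len(reals)), k):
        pairs = list(zip(range(k), perm))
        cost = sum(event_distance(preds[i], reals[j]) for i, j in pairs)
        if cost < best_cost:
            best_cost, best_pairs = cost, pairs
    return best_pairs, best_cost
```

on the sonora example above, this recovers the pairing (prediction 1, real event 2) and (prediction 2, real event 1).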
assume the total number of predictions is N, the number of real events is N̂, the number of true positives is N_TP, the number of false negatives is N_FN, and the number of false positives is N_FP. then the following key evaluation metrics can be calculated: precision = N_TP/(N_TP + N_FP), recall = N_TP/(N_TP + N_FN), and f-measure = 2 · precision · recall/(precision + recall). other measurements such as the area under the roc curve are also commonly used [26] . this approach can be extended to include other items such as multi-class precision/recall and precision/recall at top k [2, 103, 123, 252] . • quality of matched predictions. if a predicted event matches a real one, it is common to go on to evaluate how close they are. this reflects the quality of the matched predictions, in terms of different aspects of the events. event time is typically a numerical value and hence can be easily measured in terms of metrics such as mean squared error, root mean squared error, and mean absolute error [26] . this is also the case for location in euclidean space, which can be measured in terms of the euclidean distance between the predicted point (or region) and the real point (or region). some researchers consider the administrative unit resolution. for example, a predicted location ("new york city", "new york state", "usa") has a distance of 2 from the real location ("los angeles", "california", "usa") [240] . others prefer to measure multi-resolution location prediction quality as follows: (1/3)(l_country + l_country · l_state + l_country · l_state · l_city), where l_city, l_state, and l_country can only be either 0 (i.e., no match to the truth) or 1 (i.e., completely matches the truth) [171] .
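the goodness-of-matching metrics above reduce to a few lines once the matched pairs have been counted; a minimal sketch:

```python
def matching_scores(n_pred, n_real, n_matched):
    """precision, recall, and f-measure from event-matching counts.
    n_matched = true positives; unmatched predictions are false
    positives; unmatched real events are false negatives."""
    tp = n_matched
    fp = n_pred - n_matched
    fn = n_real - n_matched
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

for example, 10 predictions against 8 real events with 6 matches yields precision 0.6 and recall 0.75.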
for a location in non-euclidean space such as a network [187] , the quality can be measured in terms of the shortest path length between the predicted node (or subgraph) and the real node (or subgraph), or by the f-measure between the detected subsets of nodes and the real ones, which is similar to the approach used for evaluating community detection [92] . for event topics, in addition to conventional ways of evaluating continuous values such as population size, ordinal values such as event scale, and categorical values such as event type, actors, and actions, more complex semantic values such as texts can be evaluated using natural language processing measures such as edit distance, bleu score, top-k precision, and rouge [10] . this section focuses on the taxonomy and representative techniques utilized for each category and subcategory. due to the heterogeneity of the prediction output, the technique types depend on the type of output to be predicted, such as time, location, and semantics. as shown in figure 3, all the event prediction methods are classified in terms of their goals, including time, location, semantics, and the various combinations of these three. these are then further categorized in terms of the output forms of the goals of each and the corresponding techniques normally used, as elaborated in the following. fig. 3. taxonomy of event prediction problems and techniques. event time prediction focuses on predicting when future events will occur.
based on their time granularity, time prediction methods can be categorized into three types: 1) event occurrence: binary-valued prediction of whether an event does or does not occur in a future time period; 2) discrete-time prediction: in which future time slot will the event occur; and 3) continuous-time prediction: at which precise time point will the future event occur. occurrence prediction is arguably the most extensively studied, classical, and generally simplest type of event time prediction task [12] . it focuses on identifying whether there will be an event occurrence (positive class) or not (negative class) in a future time period [244] . this problem is usually formulated as a binary classification problem, although a handful of other methods instead leverage anomaly detection or regression-based techniques. 1. binary classification. binary classification methods have been extensively explored for event occurrence prediction. the goal here is essentially to estimate and compare the values of f(y = "yes"|X) and f(y = "no"|X), where the former denotes the score or likelihood of event occurrence given observation X while the latter corresponds to no event occurrence. if the value of the former is larger than the latter, then a future event occurrence is predicted; if not, there is no event predicted. to implement f, the methods typically used rely on discriminative models, where dedicated feature engineering is leveraged to manually extract potential event precursor features to feed into the models. over the years, researchers have leveraged various binary classification techniques, ranging from the simplest threshold-based methods [121, 251] to more sophisticated methods such as logistic regression [18, 248] , support vector machines [102] , (convolutional) neural networks [35, 135] , and decision trees [58, 185] .
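the simplest threshold-based variant of this binary classification scheme can be sketched as follows; the feature names and weights are hypothetical, and f(y = "no"|X) is folded into the threshold for brevity:

```python
def predict_occurrence(features, weights, threshold=0.0):
    """linear score standing in for f(y="yes"|x); the score is
    compared against a threshold that absorbs f(y="no"|x).
    feature names and weight values are illustrative assumptions."""
    score = sum(weights.get(name, 0.0) * value
                for name, value in features.items())
    return "yes" if score > threshold else "no"
```

in practice the weights would come from a trained discriminative model (e.g., logistic regression) rather than being set by hand.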
in addition to discriminative models, generative models [11, 239] have also been used to embed human knowledge for classifying event occurrences using bayesian decision techniques. specifically, instead of assuming that the input features are independent, prior knowledge can also be directly leveraged to establish bayesian networks among the observed features and variables based on graphical models such as (semi-)hidden markov models [54, 183, 239] and autoregressive logit models [199] . the joint probabilities p(y = "yes", X) and p(y = "no", X) can thus be estimated using graphical models, and then utilized to estimate f(y = "yes"|X) = p(y = "yes"|X) and f(y = "no"|X) = p(y = "no"|X) using bayesian rules [26] . 2. anomaly detection. alternatively, anomaly detection can also be utilized to learn a "prototype" of normal samples (typical values corresponding to the situation of no event occurrence) and then identify whether any newly arriving sample is close to or distant from the normal samples, with distant ones being identified as future event occurrences. such methods are typically utilized to handle "rare event" occurrences, especially when the training data is highly imbalanced, with little to no data for "positive" samples. anomaly detection techniques such as one-class classification [189] and hypothesis testing [100, 187] are often utilized here. 3. regression. in addition to simply predicting the occurrence or not, some researchers have sought to extend the binary prediction problem to deal with ordinal and numerical prediction problems, including event count prediction based on (auto)regression [73] , event size prediction using linear regression [229] , and event scale prediction using ordinal regression [84] . 3.1.2 discrete-time prediction. in many applications, practitioners want to know the approximate time (i.e., the date, week, or month) of future events in addition to just their occurrence.
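a minimal stand-in for the anomaly detection approach above, using a mean prototype and a distance threshold in place of one-class classification or hypothesis testing (the radius is an assumed hyperparameter, not something prescribed by the cited methods):

```python
import math

def fit_prototype(normal_samples):
    """learn the mean of event-free feature vectors as a "prototype"
    of normality (a deliberate simplification of one-class methods)."""
    d = len(normal_samples[0])
    n = len(normal_samples)
    return [sum(s[i] for s in normal_samples) / n for i in range(d)]

def is_future_event(sample, prototype, radius):
    """flag a sample as a potential event occurrence if it lies
    farther than `radius` from the prototype."""
    return math.dist(sample, prototype) > radius
```

this captures the core idea: distance from the learned normal regime, rather than a supervised positive class, signals a rare event.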
to do this, the time is typically first partitioned into different time slots and the various methods focus on identifying which time slot future events are likely to occur in. existing research on this problem can be classified into either direct or indirect approaches. 1. direct approaches. these methods discretize the future time into discrete values, which can take the form of some number of time windows or time scales such as near future, medium future, or distant future. these are then used to directly predict the integer-valued index of future time windows of the event occurrence using (auto)regression methods [147, 154] , or to predict the ordinal values of future time scales using ordinal regression or classification [201] . 2. indirect approaches. these methods adopt a two-step approach, with the first step being to place the data into a series of time bins and then perform time series forecasting using techniques such as autoregressive models [26] , based on the historical time series X = {x_1, · · · , x_t}, to obtain the future time series X̂ = {x̂_{t+1}, · · · , x̂_T}. the second step is to identify events in the predicted future time series X̂ using either unsupervised methods, such as burstiness detection [31] and change detection [109] , or supervised techniques based on a learned event characterization function. for example, existing works [165, 173] first represent the predicted future time series X̂ ∈ R^{T×D} using time-delayed embedding as X̂′ ∈ R^{T×D′}, where each observation at time τ is represented by a time-delayed embedding vector x̂′_τ for τ = t+1, · · · , T. then an event characterization function f_c(x̂′_τ) is established to map x̂′_τ to the likelihood of an event, which can be fitted based on the event labels provided in the training set. overall, the unsupervised methods require users to assume the type of patterns (e.g., burstiness and change) of future events based on prior knowledge but do not require event label data.
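the indirect two-step approach can be sketched with a deliberately naive ar(1) forecaster and a threshold-style burstiness detector; both are simplifications of the autoregressive and burstiness-detection techniques cited above, and the threshold is an assumed hyperparameter:

```python
def ar1_forecast(series, horizon):
    """step 1: naive ar(1) forecast x_{t+1} = a * x_t, with the
    coefficient a estimated by least squares on consecutive pairs."""
    num = sum(series[i] * series[i + 1] for i in range(len(series) - 1))
    den = sum(x * x for x in series[:-1])
    a = num / den
    out, x = [], series[-1]
    for _ in range(horizon):
        x = a * x
        out.append(x)
    return out

def burst_slots(forecast, threshold):
    """step 2: unsupervised event identification, flagging forecast
    slots whose value exceeds a burstiness threshold."""
    return [i for i, x in enumerate(forecast) if x > threshold]
```

a learned event characterization function f_c would replace the fixed threshold in the supervised variant.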
however, in cases where the event time series pattern is difficult to assume but label data is available, supervised learning-based methods are usually used. discrete-time prediction methods, although usually simple to establish, suffer from several issues. first, their time resolution is limited to the discretization granularity; increasing this granularity significantly increases the computational resources required, which means the resolution cannot be arbitrarily high. moreover, the granularity is itself a hyperparameter to which prediction accuracy is sensitive, rendering it difficult and time-consuming to tune during training. to address these issues, a number of techniques work around the problem by directly predicting the continuous-valued event time [191] , usually by leveraging one of three techniques. 1. simple regression. the simplest methods directly formalize continuous-event-time prediction as a regression problem [26] , where the output is the numerical-valued future event time [212] and/or its duration [80, 129] . common regressors such as linear regression and recurrent neural networks have been utilized for this. despite their apparent simplicity, this is not straightforward, as simple regression typically assumes a gaussian distribution [129] , which does not usually reflect the true distribution of event times. for example, the future event time needs to be left-bounded (i.e., larger than the current time), as well as being typically non-symmetric and usually periodic, with recurrent events having multiple peaks in the probability density function along the time dimension. 2. point processes. as they allow more flexibility in fitting true event time distributions, point process methods [167, 219] are widely leveraged and have demonstrated their effectiveness for continuous-time event prediction tasks.
they require a conditional intensity function, defined as follows: λ(t|X) = E[N(t, t + dt)|X]/dt = g(t|X)/(1 − G(t|X)), (1) where g(t|X) is the conditional density function of the event occurrence probability at time t given an observation X, G(t|X) is its corresponding cumulative distribution function, and N(t, t + dt) denotes the count of events during the time period between t and t + dt, where dt is an infinitely small time period. hence, by leveraging the relation between the density and cumulative functions and then rearranging equation (1), the following conditional density function is obtained: g(t|X) = λ(t|X) · exp(−∫_{t_now}^{t} λ(u|X) du). (2) once the above model has been trained using a technique such as maximum likelihood [26] , the time of the next event in the future is predicted as: t̂ = ∫_{t_now}^{∞} t · g(t|X) dt. (3) although existing methods typically share the same workflow as that shown above, they vary in the way they define the conditional intensity function λ(t|X). traditional models typically utilize prescribed distributions such as the poisson distribution [191] , gamma distribution [53] , hawkes process [69] , weibull process [56] , and other distributions [219] . for example, damaschke et al. [56] utilized a weibull distribution to model volcano eruption events, while ertekin et al. [72] instead proposed the use of a non-homogeneous poisson process to fit the conditional intensity function for power system failure events. however, in many other situations where there is no information regarding appropriate prescribed distributions, researchers must start by leveraging nonparametric approaches to learn sophisticated distributions from the data using expressive models such as neural networks. for example, simma and jordan [191] utilized an rnn to learn a highly nonlinear function of λ(t|X). 3. survival analysis.
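for the simplest prescribed intensity, a homogeneous poisson process with constant conditional intensity λ, the waiting time to the next event is exponential(λ) and the next-event time has a closed form; this sketch predicts a quantile of that waiting time (the median by default), which is one possible design choice rather than the prescription of any cited method:

```python
import math

def next_event_time(lam, t_now=0.0, quantile=0.5):
    """median (or other quantile) next-event time under a
    homogeneous poisson process with constant intensity lam:
    the waiting time is exponential(lam), whose q-quantile
    is -ln(1 - q) / lam."""
    return t_now - math.log(1.0 - quantile) / lam
```

learned intensities (hawkes, weibull, rnn-based) replace the constant lam but follow the same predict-from-the-density workflow.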
survival analysis [62, 204] is related to point processes in that it also defines an event intensity or hazard function, but in this case based on survival probability considerations, as follows: h(t|X) = −(dξ(t|X)/dt)/ξ(t|X), (4) where h(t|X) is the so-called hazard function denoting the hazard of event occurrence between time t − dt and t for a given observation X, and ξ(t|X) is the survival probability. either h(t|X) or ξ(t|X) can be utilized for predicting the time of future events. for example, the event occurrence time can be estimated as the time when ξ(t|X) drops below a specific value. also, one can obtain ξ(t|X) = exp(−∫_0^t h(u|X) du) according to equation (4) [132] . here h(t|X) can adopt any one of several prescribed models, such as the well-known cox hazard model [61, 132] . to learn the model directly from the data, some researchers have recommended enhancing it using deep neural networks [119] . vahedian et al. [204] suggest learning the survival probability ξ(t|X) and then applying the function h(·|X) to indicate an event at time t if h(t|X) is larger than a predefined threshold value. a classifier can also be utilized. instead of using the raw sequence data, the conditional intensity function can also be projected onto additional continuous-time latent state layers that eventually map to the observations [62, 205] . these latent states can then be extracted using techniques such as hidden semi-markov models [26] , which ensure the elicitation of the continuous-time patterns. event location prediction focuses on predicting the location of future events. location information can be formulated as one of two types: 1. raster-based. here, a continuous space is partitioned into a grid of cells, each of which represents a spatial region, as shown in figure 4(a). this type of representation is suitable for situations where the spatial size of the event is non-negligible. 2. point-based. in this case, each location is represented by an abstract point with infinitely small size, as shown in figure 4(b).
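for a constant hazard h(t|X) = h, the survival relation ξ(t|X) = exp(−∫_0^t h(u|X) du) described above simplifies to exp(−h·t), and the threshold-on-survival prediction rule can be sketched directly (the constant hazard and the threshold value are illustrative assumptions):

```python
import math

def survival(t, hazard):
    """survival probability for a constant hazard:
    xi(t|x) = exp(-hazard * t)."""
    return math.exp(-hazard * t)

def predicted_event_time(hazard, xi_threshold):
    """earliest t at which the survival probability drops below
    the threshold, i.e., the event is predicted once
    xi(t|x) < xi_threshold; solved in closed form here."""
    return -math.log(xi_threshold) / hazard
```

cox-style models make the hazard depend on covariates, but the survival-threshold logic is the same.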
this type of representation is most suitable for situations where the spatial size of the event can be neglected, or where the location regions of the events can only be in discrete spaces such as network nodes. the techniques used for raster-based event location prediction include spatial clustering, spatial interpolation, and spatial convolution. 1. spatial clustering. in raster-based representations, each location unit is usually a regular grid cell with the same size and shape. however, regions with similar spatial characteristics typically have irregular shapes and sizes, which can be approximated as composite representations of a number of grids [105] . the purpose of spatial clustering here is to group the contiguous regions that collectively exhibit significant patterns. the methods are typically agglomerative in style: they start from the original finest-grained spatial raster units and proceed by merging the spatial neighborhood of a specific unit in each iteration, though different research works define different criteria for instantiating the merging operation. for example, wang and ding [211] merge neighborhoods if the unified region after merging can maintain the spatially frequent patterns. xiong et al. [220] chose an alternative approach, merging spatial neighbor locations into the current locations sequentially until the merged region possesses event data that is sufficiently statistically significant. these methods usually run in a greedy style to ensure their time complexity remains smaller than quadratic. after the spatial clustering is completed, each spatial cluster is input into a classifier to determine whether or not there is an event corresponding to it. 2. spatial interpolation. unlike spatial clustering-based methods, spatial interpolation-based methods maintain the original fine granularity of the event location information.
The estimated event occurrence probability can then be interpolated for locations with no historical events, thereby achieving spatial smoothness. This can be accomplished using commonly used methods such as kernel density estimation [5, 93] and spatial kriging [105, 114]. Kernel density estimation is a popular way to model the geo-statistics of numerous types of events such as crimes [5] and terrorism [93]:

K(s) = (1/(nγ)) Σᵢ₌₁ⁿ k((s − sᵢ)/γ),

where K(s) denotes the kernel estimate at the location point s, n is the number of historical event locations, each sᵢ is a historical event location, γ is a tunable bandwidth parameter, and k(·) is a kernel function such as the Gaussian kernel [85]. More recently, Ristea et al. [176] further extended KDE-based techniques by leveraging localized KDE and then applying spatial interpolation techniques to estimate spatial feature values for the cells in the grid. Since each cell is an area rather than a point, the center of each cell is usually leveraged as the representative of that cell. Finally, a classifier takes this as its input to predict the event occurrence for each grid cell [5, 176].

3. Spatial convolution. In the last few years, convolutional neural networks (CNNs) have demonstrated significant success in learning and representing sophisticated spatial patterns from image and spatial data [88]. A CNN contains multiple convolutional layers that extract the hierarchical spatial semantics of images. In each convolutional layer, a convolution operation is executed by scanning a feature map with a filter, which results in another, smaller feature map with higher-level semantics. Since raster-based spatial data and images share a similar mathematical form, it is natural to leverage CNNs to process them. Existing methods [19, 150, 164, 209] in this category typically formulate a spatial map as input and predict another spatial map that denotes future event hotspots.
Such a formulation is analogous to the "image translation" problem popular in recent years in the computer vision domain [46]. Specifically, researchers typically leverage an encoder-decoder architecture, where the input image (or spatial map) is processed by multiple convolutional layers into a higher-level representation, which is then decoded back into an output image of the same size through a reverse convolutional process known as transposed convolution [88].

4. Trajectory destination prediction. This type of method typically focuses on population-based events whose patterns can be interpreted as the collective behaviors of individuals, such as "gathering events" and "dispersal events". These methods share a unified procedure that typically consists of two steps: 1) predict future locations based on the observed trajectories of individuals, and 2) detect the occurrence of "future" events based on the future spatial patterns obtained in step 1. The specific methodologies for each step are as follows:

• Step 1: Here, the aim is to predict each location an individual will visit in the future, given the historical sequence of locations visited. This can be formulated as a sequence prediction problem. For example, Wang and Gerber [214] sought to predict the probability of the location s_{t+1} at the next time point t+1 based on all the preceding time points, p(s_{t+1}|s_{≤t}) = p(s_{t+1}|s_t, s_{t−1}, ..., s_0), based on various strategies including a historical volume-based prior model, Markov models, and multi-class classification models. Vahedian et al. [203] adopted Bayes' theorem, p(s_{t+1}|s_{≤t}) = p(s_{≤t}|s_{t+1}) · p(s_{t+1}) / p(s_{≤t}), which requires the conditional probability p(s_{≤t}|s_{t+1}) to be stored. However, in many situations there is a huge number of possible trajectories for each destination; for example, with a 128 × 64 grid, one would need to store (128 × 64)³ ≈ 5.5 × 10¹¹ options.
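As a minimal illustration of the sequence prediction in step 1, the sketch below fits a first-order Markov model over discrete grid cells from toy trajectories, a simplification of the strategies above; the cell labels and trajectories are illustrative, not from any of the cited works.

```python
from collections import Counter, defaultdict

def fit_markov(trajectories):
    """First-order Markov model over discrete grid cells:
    p(s_{t+1} | s_t) estimated from observed transition counts."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for cur, nxt in zip(traj, traj[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, current):
    """Most likely next cell given the current cell (None if unseen)."""
    if current not in counts:
        return None
    return counts[current].most_common(1)[0][0]

# toy trajectories over hypothetical grid cells "a", "b", "c", "d", "x"
trajs = [["a", "b", "c"], ["a", "b", "d"], ["x", "b", "c"]]
nxt = predict_next(fit_markov(trajs), "b")  # "b" -> "c" twice, "b" -> "d" once
```

A first-order model only needs one transition table rather than the full trajectory history, which is precisely the memory trade-off the text discusses next.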
To improve memory efficiency, this can be limited to a consideration of just the source and current locations, leveraging a quad-tree-style architecture to store the historical information. To achieve more efficient storage and speed up queries of p(s_{≤t}|s_{t+1}), Vahedian et al. [203] further extended the quad-tree into a new technique called VIGO, which removes duplicate destination locations in different leaves.

• Step 2: The aim in this step is to forecast future event locations based on the future visiting patterns predicted in step 1. The most basic strategy here is to consider each grid cell independently; for example, Wang and Gerber [214] adopted supervised learning strategies to build a predictive mapping between the visiting patterns and the event occurrence. A more sophisticated approach is to consider spatial outbreaks composed of multiple grid cells. Scalable algorithms have been proposed to identify regions containing statistically significant hotspots [110], such as spatial scan statistics [116]. Khezerlou et al. [110] proposed a greedy heuristic tailored to the grid-based data formulation, which extends an original "seed" grid cell containing a statistically large future event density in four directions until the extended region is no longer a statistically significant outbreak.

Point-based prediction. Unlike the raster-based formulation, which covers the prediction of a contiguous spatial region, point-based prediction focuses specifically on locations of interest, which can be distributed sparsely in a Euclidean space (e.g., a spatial region) or a non-Euclidean space (e.g., a graph topology). These methods can be categorized into supervised and unsupervised approaches.

1. Supervised approaches. In supervised methods, each location is classified as either "positive" or "negative" with regard to a future event occurrence. The simplest setting is based on the independent and identically distributed (i.i.d.)
assumption among the locations, where each location is predicted independently by a classifier using its respective input features. However, given that different locations usually exhibit strong spatial heterogeneity and dependency, further research has been proposed to tackle these properties across different locations' predictors and outputs, resulting in two research directions: 1) spatial multi-task learning, and 2) spatial auto-regressive methods.

• Spatial multi-task learning. Multi-task learning is a popular learning strategy that can jointly learn the models for different tasks such that the learned models not only share their knowledge but also preserve some exclusive characteristics of the individual tasks [244]. This notion coincides very well with spatial event prediction tasks, where combining the outputs of models from different locations needs to consider both their spatial dependency and heterogeneity. Zhao et al. [244] proposed a spatial multi-task learning framework as follows:

min_{w₁,...,w_m} Σᵢ₌₁ᵐ l(f(wᵢ, xᵢ), yᵢ) + r({wᵢ}, M),  s.t.  c({wᵢ}) ∈ C,

where m is the total number of locations (i.e., tasks), and wᵢ and yᵢ are the model parameters and true labels (event occurrences for all time points), respectively, of task i. l(·) is the empirical loss, f(wᵢ, xᵢ) is the predictor for task i, and r(·) is the spatial regularization term based on the spatial dependency information M ∈ R^{m×m}, where M_{i,j} records the spatial dependency between locations i and j. c(·) represents the spatial constraints imposed over the corresponding models, enforcing that they remain within the valid space C. Over recent years, multiple studies have proposed different strategies for r(·) and c(·). For example, Zhao et al. [245] assumed that all the locations are evenly correlated and enforced similar sparsity patterns for feature selection, while Gao et al. [85] further extended this to differentiate the strength of the correlation between different locations' tasks according to the spatial distance between them.
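The spatial multi-task learning framework of Zhao et al. [244] described above can be sketched as follows, with a graph-Laplacian instantiation of the regularizer r(·) and plain gradient descent; the regularizer choice, toy data, and hyper-parameters are assumptions for illustration, not the cited paper's exact setup.

```python
import numpy as np

def spatial_multitask_fit(X, Y, M, lam=0.01, lr=0.05, epochs=1000):
    """Jointly fit one linear model per location (task) with the spatial
    regularizer r(W, M) = lam * tr(W^T L W), where L is the graph Laplacian
    of the dependency matrix M; its gradient is 2 * lam * L @ W."""
    m, n, d = X.shape                       # locations, samples, features
    W = np.zeros((m, d))
    L = np.diag(M.sum(axis=1)) - M          # graph Laplacian of M
    for _ in range(epochs):
        err = np.einsum('mnd,md->mn', X, W) - Y          # per-task residuals
        grad = np.einsum('mn,mnd->md', err, X) / n + 2 * lam * L @ W
        W -= lr * grad
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 50, 3))                          # 2 locations, 50 samples
w_true = np.array([[1.0, -2.0, 0.5], [1.1, -1.9, 0.4]])  # similar neighbor tasks
Y = np.einsum('mnd,md->mn', X, w_true)
M = np.array([[0.0, 1.0], [1.0, 0.0]])                   # the two locations are neighbors
W = spatial_multitask_fit(X, Y, M)
```

The Laplacian term pulls neighboring locations' parameter vectors toward each other, which is one concrete way of encoding the spatial dependency M while each task keeps its own model.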
This approach has been further extended to tree-structured multi-task learning to handle the hierarchical relationships among locations at different administrative levels (e.g., cities, states, and countries) [246], in a model that also considers the logical constraints over the predictions from different locations that have hierarchical relationships. Instead of assuming even correlation, Zhao et al. [243] estimated the spatial dependency utilizing inverse-distance Gaussian kernels, while Ning et al. [156] proposed estimating the spatial dependency based on the event co-occurrence frequency between each pair of locations.

• Spatial auto-regressive methods. Spatial auto-regressive models have been extensively explored in domains such as geography and econometrics, where they are applied to perform predictions in which the i.i.d. assumption is violated due to the strong dependencies among neighboring locations. The generic framework is as follows:

ŷ_{t+1} = ρ · M · ŷ_{t+1} + X_t · β,

where X_t ∈ R^{m×d} and ŷ_{t+1} ∈ R^m are the observations at time t and the event predictions at time t+1 over all m locations, and M ∈ R^{m×m} is the spatial dependency matrix with zero-valued diagonal. This means the prediction for each location, ŷ_{t+1,i} ∈ ŷ_{t+1}, is jointly determined by its input x_{t,i} and its neighbors {j | M_{i,j} ≠ 0}, and ρ is a positive value balancing these two factors. Since event occurrence requires discrete predictions, simple threshold-based strategies can be used to discretize ŷᵢ into ŷ′ᵢ ∈ {0, 1} [32]. Moreover, due to the complexity of event prediction tasks and the large number of locations, it is sometimes difficult to define the whole matrix M manually. Zhao et al. [243] proposed jointly learning the prediction model and the spatial dependency from the data using graphical lasso techniques. Yi et al. [228] took a different approach, leveraging conditional random fields to instantiate the spatial autoregression, where the spatial dependency is measured by Gaussian-kernel-based metrics. Yi et al.
[227] then went on to propose leveraging a neural network model to learn the locations' dependency.

2. Unsupervised approaches. Without supervision from labels, unsupervised methods must first identify potential precursors and determinant features in different locations. They can then detect anomalies that are characterized by specific feature selection and location combinatorial patterns (e.g., spatial outbreaks and connected subgraphs) as indicators of future events [41]. The generic formulation is as follows:

max_{f ∈ m(g,β), r ∈ c} q(f, r),    (8)

where q(·) denotes a scan statistic that scores the significance of each candidate pattern, represented by both a candidate location combinatorial pattern r and a feature selection pattern f. Specifically, f ∈ {0,1}^{d′×n} denotes the feature selection results (where "1" means selected and "0" otherwise) and r ∈ {0,1}^{m×n} denotes the m involved locations for the n events. m(g, β) and c are the sets of all feasible solutions of f and r, respectively. q(·) can be instantiated by scan statistics such as Kulldorff's scan statistic [116] and the Berk-Jones statistic [41], which can be applied to detect and forecast events such as epidemic outbreaks and civil unrest events [171]. Depending on whether the embedding space is a Euclidean region (e.g., a geographical region) or a non-Euclidean region (e.g., a network topology), the pattern constraint c can be either a predefined geometric shape, such as a circle, a rectangle, or an irregular shape, or a subgraph pattern, such as connected subgraphs, cliques, and k-cliques. The problem in equation (8) is nonconvex and sometimes even discrete, and hence difficult to solve. A generic approach is to optimize f using sparse feature selection (a useful survey is provided in [127]), while r can be defined using the two-step graph-structured matching method detailed in [42]. More recently, new techniques have been developed that are capable of jointly learning both feature and location selection [42, 187].
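As a sketch of instantiating q(·) with Kulldorff's Poisson scan statistic, the snippet below scores a handful of candidate location subsets against per-location baselines and returns the most anomalous one; the counts, baselines, and candidate regions are toy assumptions, and a real system would enumerate regions under the constraint c and assess significance by randomization testing.

```python
import numpy as np

def kulldorff_score(c_in, b_in, c_tot, b_tot):
    """Kulldorff's Poisson scan statistic (log-likelihood ratio) for a region
    with observed count c_in and baseline b_in, given totals c_tot and b_tot.
    Returns 0 when the region shows no excess over its baseline."""
    if c_in == 0 or c_in * b_tot <= b_in * c_tot:
        return 0.0
    c_out, b_out = c_tot - c_in, b_tot - b_in
    llr = c_in * np.log(c_in / b_in)
    if c_out > 0:
        llr += c_out * np.log(c_out / b_out)
    return llr - c_tot * np.log(c_tot / b_tot)

def best_region(counts, baselines, candidate_regions):
    """Scan the candidate location subsets and return the highest-scoring one."""
    c_tot, b_tot = counts.sum(), baselines.sum()
    scored = [(kulldorff_score(counts[list(r)].sum(), baselines[list(r)].sum(),
                               c_tot, b_tot), r) for r in candidate_regions]
    return max(scored)[1]

counts = np.array([1, 9, 10, 1])              # observed events per location
baselines = np.array([5.0, 5.0, 5.0, 5.0])    # expected events per location
regions = [(0,), (1, 2), (3,), (0, 3)]        # toy candidate patterns r
hot = best_region(counts, baselines, regions)
```

Here locations 1 and 2 jointly show a large excess over their baseline, so the subset (1, 2) wins the scan.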
Event semantics prediction addresses the problem of forecasting topics, descriptions, or other meta-attributes of future events, in addition to their times and locations. Unlike time and location prediction, the data in event semantics prediction usually involve symbols and natural language in addition to numerical quantities, which means different types of techniques may be utilized. The approaches can be categorized into three types based on how the historical data are organized and utilized to infer future events. The first of these categories covers rule-based methods, where future event precursors are extracted by mining association or logical patterns in historical data. The second type is sequence-based, considering event occurrence to be a consequence of temporal event chains. The third type further generalizes event chains into event graphs, where additional cross-chain context needs to be modeled. These are discussed in turn below.

Association rule-based methods are amongst the most classic approaches in the data mining domain for event prediction, typically consisting of two steps: 1) learn the associations between precursors and target events, and then 2) utilize the learned associations to predict future events. For the first step, an association could be, for example, x = {"election", "fraud"} → y = "protest event", which indicates that serious fraud occurring during an election could lead to future protest events. To discover all the significant associations from the ocean of candidate rules efficiently, frequent itemset mining [92] can be leveraged. Each discovered rule needs to come with both sufficient support and confidence. Here, support is defined as the number of cases in which both x and y co-occur, while confidence is the fraction of occurrences of x in which y also occurs.
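The support and confidence computation above can be sketched as follows; the transaction sets, event names, and thresholds are toy assumptions, and a real system would use a proper frequent itemset miner rather than brute-force enumeration.

```python
from itertools import combinations

def mine_rules(transactions, target, min_support=2, min_confidence=0.6):
    """Mine precursor -> target association rules. Support is the count of
    transactions containing both the precursor set and the target event;
    confidence is support / count(precursor set)."""
    rules = []
    items = sorted({i for t in transactions for i in t if i != target})
    for size in (1, 2):                       # brute-force small antecedents
        for lhs in combinations(items, size):
            n_lhs = sum(set(lhs) <= t for t in transactions)
            n_both = sum(set(lhs) <= t and target in t for t in transactions)
            if n_both >= min_support and n_lhs and n_both / n_lhs >= min_confidence:
                rules.append((lhs, n_both, n_both / n_lhs))
    return rules

# toy event transactions; each set is one observed episode
history = [
    {"election", "fraud", "protest"},
    {"election", "fraud", "protest"},
    {"election"},
    {"fraud", "protest"},
]
rules = mine_rules(history, target="protest")
```

In this toy history, {"fraud"} → "protest" has support 3 and confidence 1.0, so it survives both thresholds.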
To better estimate these rules, further temporal constraints can be added, requiring the occurrence times of x and y to be sufficiently close for the pair to count as a "co-occurrence". Once the frequent-set rules have been discovered, pruning strategies may be applied to retain the most accurate and specific ones, with various strategies for generating the final predictions [92]. Specifically, given a new observation x′, one of the simplest strategies is to output the events triggered by any of the association rules whose antecedent matches x′ [206]. Other strategies first rank the predicted results based on their confidence and then predict just the top r events [252]. More sophisticated and rigorous strategies build a decision list in which each element is an association rule mapping; once a generative model has been built for the decision process, maximum likelihood can be leveraged to optimize the order of the decision list [124].

This type of research leverages the causality inferred among historical events to achieve future event prediction. The methods typically share a generic framework consisting of the following procedures: 1) event representation, 2) event graph construction, and 3) future event inference.

Step 1: Event semantic representation. This approach typically begins by extracting the events from the target texts using natural language processing techniques such as sanitization, tokenization, part-of-speech (POS) tagging, and named entity recognition.
Several types of objects can be extracted to represent the events: i) noun phrase-based [39, 94, 111], where a noun phrase corresponds to each event (for example, "2008 Sichuan earthquake"); ii) verb- and noun-based [112, 168], where an event is represented as a set of noun-verb pairs extracted from news headlines; and iii) tuple-based [249], where each event is represented by a tuple consisting of objects (such as actors, instruments, or receptors), a relationship (or property), and time. An RDF-based format has also been leveraged in some works [57].

Step 2: Event causality inference. The goal here is to infer the cause-effect pairs among historical events. Due to the combinatorial nature of the problem, narrowing down the number of candidate pairs is crucial. Existing works usually begin by clustering the events into event chains, each of which consists of a sequence of time-ordered events under the relevant semantics, typically the same topics, actors, and/or objects [2]. The causal relations among the event pairs can then be inferred in various ways. The simplest approach is just to consider the likelihood that y occurs after x has occurred throughout the training data. Other methods utilize NLP techniques to identify causal mentions such as causal connectives, prepositions, and verbs [168]. Some formulate cause-effect relationship identification as a classification task where the inputs are the candidate cause and effect events, often incorporating contextual information including related background knowledge from web texts; here, the classifier is built on a multi-column CNN that outputs either "1" or "0" to indicate whether or not the candidate pair exhibits a causal effect [115]. In many situations, the cause-effect rules learned directly using the above methods can be too specific and sparse, with low generalizability, so a typical next step is to generalize the learned rules.
For example, "earthquake hits China" → "Red Cross help sent to Beijing" is a specific rule that can be generalized to "earthquake hits [a country]" → "Red Cross help sent to [the capital of this country]". To achieve this, an external ontology or knowledge base is typically needed in order to establish the underlying relationships among items or provide the necessary information on their properties, such as Wikipedia (https://www.wikipedia.org/), YAGO [196], WordNet [75], or ConceptNet [137]. Based on these resources, the similarity between two cause-effect pairs (cᵢ, εᵢ) and (cⱼ, εⱼ) can be computed by jointly considering the respective similarities of the putative causes and effects: σ((cᵢ, εᵢ), (cⱼ, εⱼ)) = (σ(cᵢ, cⱼ) + σ(εᵢ, εⱼ))/2. An appropriate hierarchical agglomerative clustering algorithm can then be utilized to group the pairs and hence generate a data structure that can efficiently manage the task of storing and querying them to identify cause-effect pairs. For example, [168, 169, 190] leverage an abstraction tree, where each leaf is an original specific cause-effect pair and each intermediate node is the centroid of a cluster. Instead of using hierarchical clustering, [249] directly uses a word ontology to simultaneously generalize cause and effect (e.g., the noun "violet" is generalized to "purple", and the verb "kill" is generalized to "murder-42.1") and then leverages a hierarchical causal network to organize the generalized rules.

Step 3: Future event inference. Given an arbitrary query event, two steps are needed to infer the future events it causes, based on the event causality learned above. First, we need to retrieve the historical events similar to the query event from the historical event pool. This requires the similarity between the query event and all the historical events to be calculated. To achieve this, Lei et al.
[123] utilized context information, including event time, location, and other environmental and descriptive information. For methods requiring event generalization, the first step is to traverse the abstraction tree starting from the root, which corresponds to the most general event rule. The search frontier then moves down the tree as long as a child node is more similar, culminating in the retrieval of the nodes that are the least general but still similar to the new event [168]. Similarly, [45] proposed another tree structure, referred to as a "circular binary search tree", to manage event occurrence patterns. We can then apply the learned predicate rules, starting from the retrieved event, to obtain the prediction results. Since each cause event can lead to multiple effect events, a convenient way to determine the final prediction is to calculate the support [168] or conditional probability [226] of the rules. Radinsky et al. [168] took a different approach, instead ranking the potential future events by a similarity defined through the length of their minimal generalization path; for example, "London" and "Paris" are connected by a short generalization path through their common abstraction as capital cities. Alternatively, Zhao et al. [249] proposed embedding the event causality network into a continuous vector space and then applying an energy function designed to rank potential events, where true cause-effect pairs are assumed to have low energies.

These methods share a very straightforward problem formulation: given a temporal sequence corresponding to a historical event chain, the goal is to predict the semantics of the next event using sequence prediction [26]. The existing methods can be classified into four major categories: 1) classical sequence prediction; 2) recurrent neural networks; 3) Markov chains; and 4) time series prediction.

Sequence classification-based methods.
These methods formulate event semantic prediction as a multi-class classification problem, where a finite number of candidate events are ranked and the top-ranked event is treated as the future event semantic. The objective is ĉ = arg max_{cᵢ} u(s_{t+1} = cᵢ | s₁, ..., s_t), where s_{t+1} denotes the event semantic in time slot t+1 and ĉ is the optimal semantic among all the semantic candidates cᵢ (i = 1, ...), with each class corresponding to an event topic or semantic meaning. Three types of sequence classification methods have been utilized for this purpose, namely feature-based methods, prototype-based methods, and model-based methods such as Markov models.

• Feature-based. One of the simplest methods is to ignore the temporal relationships among the events in the chain, by either aggregating the inputs or the outputs. Tama and Comuzzi [198] formulated historical event sequences with multiple attributes for event prediction, testing multiple conventional classifiers. Another type of approach based on this notion utilizes compositional methods [89], which typically leverage an independence assumption among the historical input events to simplify the original problem u(s_{t+1}|s₁, s₂, ..., s_t) = u(s_{t+1}|s_{≤t}) into v(u(s_{t+1}|s₁), u(s_{t+1}|s₂), ..., u(s_{t+1}|s_t)), where v(·) is simply an aggregation function, such as a summation over all the components. Each component function u(s_{t+1}|sᵢ) can then be calculated by estimating how likely it is that the event semantics s_{t+1} and sᵢ (i ≤ t) co-occur in the same event chain. Granroth-Wilding and Clark [89] investigated various models, ranging from straightforward similarity scoring functions, through bigram models and word embeddings combined with similarity scoring functions, to newly developed composition neural networks that jointly learn the representations of s_{t+1} and sᵢ and then calculate their coherence.
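The compositional scoring v(·) described above can be sketched with simple co-occurrence counts; the toy event chains, the add-one smoothing, and the event names are illustrative assumptions, standing in for the learned scoring functions of the cited works.

```python
from collections import Counter
from itertools import permutations

def fit_cooccurrence(chains):
    """Count how often two event semantics co-occur in the same event chain."""
    pair_counts, event_counts = Counter(), Counter()
    for chain in chains:
        event_counts.update(set(chain))
        pair_counts.update(permutations(set(chain), 2))
    return pair_counts, event_counts

def score_candidate(candidate, observed, pair_counts, event_counts):
    """Compositional score: v(.) is a plain sum of per-event components
    u(s_{t+1} | s_i), here a smoothed co-occurrence frequency."""
    return sum(pair_counts[(s_i, candidate)] / (event_counts[s_i] + 1)
               for s_i in observed)

chains = [["quake", "rescue", "donation"],
          ["quake", "rescue", "donation"],
          ["storm", "flood", "rescue"]]
pc, ec = fit_cooccurrence(chains)
cands = ["donation", "flood"]
best = max(cands, key=lambda c: score_candidate(c, ["quake", "rescue"], pc, ec))
```

Because "donation" co-occurs with both observed events in the toy chains while "flood" co-occurs with only one, the summed component scores rank "donation" first.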
Some other researchers have gone further and considered the dependency among the historical events. For example, Letham et al. [125] proposed optimizing the correct ordering among the candidate events, based on the following relation:

Σ_{i∈I, j∈J} 1_[u(sᵢ; w) ≥ u(sⱼ; w)] ≤ Σ_{i∈I, j∈J} e^{u(sᵢ; w) − u(sⱼ; w)},

where the semantic candidates in the set I should be ranked strictly lower than those in J, the goal being to penalize "incorrect orderings". Here, 1_[·] is an indicator function, which is discrete; since 1_[b≥a] ≤ e^{b−a}, the exponential can be utilized as an upper bound for minimization, as can be seen on the right-hand side of the above relation, and w is the set of parameters of the function u(·). The problem can thus be relaxed to an exponential-based approximation amenable to effective optimization using gradient-based algorithms [88]. Other methods focus on first transforming the sequential data into sequence embeddings that encode the latent sequential context. For example, Fronza et al. [79] apply random indexing to represent the words in terms of their vector representations, embedding the information from neighboring words into each word, before utilizing conventional classifiers such as support vector machines (SVMs) to identify the future events.

• Model-based. Markov-based models have also been leveraged to characterize temporal patterns [224]. These typically use eᵢ to denote an event of a specific type, with E denoting the set of event types; the goal is to predict the type of the next event to occur in the future. In [7], the event types are modeled using a Markov model, so given the current event type, the next event type can be inferred simply by looking up the state with the highest probability in the transition matrix; a tool called Wayeb [8] has been developed based on this method. Laxman et al. [121] developed a more complicated model, based on a mixture of hidden Markov models and introducing new assumptions and the concept of episodes composed of subsequences of event types.
They assumed that different event episodes have different transition patterns, so they started by discovering the frequent episodes in the event data, each of which was modeled by a specific hidden Markov model over the various event types. This made it possible to establish a generative process for each future event type based on the mixture of the above episode Markov models. When predicting, the likelihood of the currently observed event sequence under each possible generative process, p(X|λ_Y), is evaluated, after which a future event type can be predicted either when the likelihood exceeds some threshold (as in [121]) or as the largest among all the different candidates Y (as in [239, 241]).

• Prototype-based. Adhikari et al. [3] took a different approach, utilizing a prototype-based strategy that first clusters the event sequences according to their temporal patterns. When a new event sequence is observed, the centroid of its closest cluster is leveraged as a "reference event sequence" whose subsequent events are referred to when predicting future events for the new sequence.

Recurrent neural network (RNN)-based methods. Approaches in this category can be classified into two types: 1) attribute-based models and 2) descriptive-based models. Attribute-based models ingest feature representations of events as input, while descriptive-based models typically ingest unstructured information such as text to directly predict future events.

• Attribute-based methods. Here, each event y = (t, l, s) at time t is recast and represented as e_t = (e_{t,1}, e_{t,2}, ..., e_{t,k}), where e_{t,i} is the i-th feature of the event at time t. The features can include location and other information such as event topic and semantics. Each sequence e = (e₁, ..., e_t) is then input into a standard RNN architecture for predicting the next event e_{t+1} in the sequence at time point t+1 [134].
Various types of RNN components and architectures have been utilized for this purpose [33, 34], but a vanilla RNN [70, 88] for sequence-based event prediction can be written in the following form:

aᵢ = W·hᵢ₋₁ + U·eᵢ,  hᵢ = tanh(aᵢ),  oᵢ = V·hᵢ,

where hᵢ, oᵢ, and aᵢ are the latent state, output, and activation for the i-th event, respectively, and W, U, and V are the model parameters fitting the corresponding mappings. The prediction e_{t+1} := ψ(t+1) can then be calculated in a feedforward pass starting from the first event, and model training can be done by back-propagating the error from the layer of ψ(t). Existing work typically utilizes variants of the vanilla RNN to handle the vanishing gradient problem, especially when the event chain is not short; the most commonly used methods for event prediction are the LSTM and the GRU [88]. In the LSTM, an additional cell state cᵢ₋₁ is carried over from the previous step to keep track of the earlier "history", and gating variables ζᵢ control which information is forgotten and which is passed on, in order to handle longer sequences. For example, some researchers opt to leverage a simple LSTM architecture to extend RNN-based sequential event prediction [33, 97], while others leverage variants of the LSTM, such as bi-directional LSTMs [113, 155], and yet others prefer to leverage gated recurrent units (GRUs) [70]. Moving beyond chain relationships among events, Li et al. [131] generalized this into graph-structured relationships to better incorporate the event contextual information via the narrative event evolutionary graph (NEEG). An NEEG is a knowledge graph in which each node is an event and each edge denotes the association between a pair of events, enabling the NEEG to be represented by a weighted adjacency matrix A.
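The vanilla RNN recurrence for event sequences described above can be sketched directly in NumPy; the dimensions are toy values, and the randomly initialized parameters stand in for trained weights, so this shows only the forward pass, not the back-propagation training.

```python
import numpy as np

def rnn_forward(events, W, U, V, h0=None):
    """Vanilla RNN forward pass for sequence-based event prediction:
    a_i = W @ h_{i-1} + U @ e_i,  h_i = tanh(a_i),  o_i = V @ h_i.
    Returns the per-step outputs and the final latent state."""
    h = np.zeros(W.shape[0]) if h0 is None else h0
    outputs = []
    for e in events:
        a = W @ h + U @ e          # activation from previous state and input
        h = np.tanh(a)             # new latent state
        outputs.append(V @ h)      # per-step output (e.g., event-type scores)
    return np.array(outputs), h

rng = np.random.default_rng(1)
d_in, d_h, d_out, T = 4, 8, 3, 5   # toy feature, hidden, and output sizes
W = rng.normal(scale=0.5, size=(d_h, d_h))
U = rng.normal(scale=0.5, size=(d_h, d_in))
V = rng.normal(scale=0.5, size=(d_out, d_h))
events = rng.normal(size=(T, d_in))          # a chain of T featurized events
outputs, h_last = rnn_forward(events, W, U, V)
```

The last row of `outputs` plays the role of ψ(t+1): scores over candidate next-event semantics. LSTM and GRU variants replace the single tanh update with gated updates but keep the same sequential interface.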
The basic architecture can be denoted as follows, as detailed in the paper [131]:

aᵢ = A^⊤·hᵢ₋₁ + b,  hᵢ = GRU(aᵢ, hᵢ₋₁),

where the current activation aᵢ depends not only on the previous time point but is also influenced by the node's neighbors in the NEEG.

• Descriptive-based methods. Attribute-based methods require extra effort during preprocessing in order to convert the unstructured raw data into feature vectors, a process that is not only labor intensive but also not always feasible. Therefore, multiple architectures have been proposed to directly process the raw (textual) event descriptions and use them to predict future event semantics or descriptions. These models share a similar generic framework [96, 97, 139, 195, 221, 231], which begins by encoding each sequence of words into an event representation utilizing an RNN architecture, as shown in Figure 5. The sequence of events is then characterized by another, higher-level RNN to predict future events. Under this framework, some works begin by decoding the predicted future candidate events into event embeddings, which are then compared with each other, and the one with the largest confidence score is selected as the predicted event. These methods are usually constrained to a known list of event types, but sometimes we are interested in open-set prediction, where the predicted event type may be of a type that has not previously been seen in the training set. To achieve this, other methods focus on directly generating descriptions of future events, characterizing event semantics that may or may not have appeared before, by designing an additional sequence decoder that decodes the latent representation of future events into word sequences. More recent research has enhanced the utility and interpretability of the relationships between words and relevant events, and between all previous events and the relevant future event, by adding hierarchical attention mechanisms. For example, Yu et al.
[231] and Su and Jiang [195] both proposed word-level and event-level attention, while Hu [96] leveraged word-level attention in both the event encoder and the event decoder.

This section discusses the research into ways to jointly predict the time, location, and semantics of future events. Existing work in this area can be categorized into three types: 1) joint time and semantics prediction, 2) joint time and location prediction, and 3) joint time, location, and semantics prediction.

Association rule-based methods. For example, Vilalta and Ma [206] defined the LHS as a tuple (e_l, τ), where τ is a user-predefined time window before the target in the RHS; only the events occurring within this time window before the event in the RHS satisfy the LHS. Similar techniques have also been leveraged by other researchers [45, 194]. However, τ is difficult to define beforehand, and it is preferable to let it flexibly suit different target events. To handle this challenge, Yang et al. [225] proposed a way to automatically identify a continuous time interval from the data. Here, each transaction is composed of not only items but also continuous time duration information; the LHS is a set of items (e.g., previous events), while the RHS is a tuple (e_r, [t₁, t₂]) consisting of a future event's semantic representation and its time interval of occurrence. To automatically learn the time interval in the RHS, [225] proposed the use of two different methods. The first is the confidence-interval-based method, which leverages a statistical distribution (e.g., Gaussian or Student's t [26]) to fit all the observed occurrence times of events in the RHS, and then treats the statistical confidence interval as the time interval. The second method, known as minimal temporal region selection, aims to find the temporal region with the smallest interval that covers all historical occurrences of the event in the RHS.

Time expression extraction.
in contrast to the above statistical methods, another way to achieve joint prediction of event time and semantics comes from the pattern recognition domain, which aims to directly discover time expressions that mention (planned) future events. as this type of technique can simultaneously identify time, semantics, and other information such as locations, it is widely used and will be discussed in more detail later as part of the discussion of "planned future event detection methods" in section 3.4.3. time series forecasting-based methods. the methods based on time series forecasting can be separated into direct methods and indirect methods. direct methods typically formulate the event semantic prediction problem as a multivariate time series forecasting problem, where each variable corresponds to an event type c_i (i = 1, · · · ) and hence the predicted event type at a future time t̂ is calculated as ŝ_t̂ = arg max_{c_i} f(s_t̂ = c_i | X). for example, in [128], a longitudinal support vector regressor is utilized to predict multi-attribute events, where n support vector regressors, each corresponding to one attribute, are built to predict the next time point's attribute values. weiss and page [219] took a different approach, leveraging multiple point process models to predict multiple event types. to further estimate the confidence of their predictions, biloš et al. [25] first leveraged an rnn to learn the historical event representation and then input the result into a gaussian process model to predict future event types. to better capture the joint dynamics across the multiple variables in the time series, brandt et al. [30] extended this to bayesian vector autoregression. indirect-style methods, in contrast, focus on learning a mapping from the observed event semantics down to a low-dimensional latent-topic space using tensor decomposition-based techniques. for example, matsubara et al.
[142] proposed a 3-way topic analysis of the original observed event tensor Y_0 ∈ R^{d_o × d_a × d_c}, which consists of three factors, namely objects, actors, and time. they then decomposed this tensor into latent variables via three corresponding low-rank matrices P_o ∈ R^{d_k × d_o}, P_a ∈ R^{d_k × d_a}, and P_c ∈ R^{d_k × d_c}, respectively, as shown in figure 6, where d_k is the number of latent topics. for the prediction, the time matrix P_c is extrapolated into the future as P̂_c via multivariate time series forecasting, after which a future event tensor Ŷ is estimated by multiplying the predicted time matrix P̂_c with the known actor matrix P_a and object matrix P_o. raster-based. these methods usually formulate the data into temporal sequences of spatial snapshots. over the last few years, various techniques have been proposed to characterize the spatial and temporal information for event prediction. the simplest way to consider spatial information is to directly treat location information as one of the input features and feed it into predictive models such as linear regression [250], lstm [174], and gaussian processes [118]. during model training, zhao and tang [250] leveraged the spatiotemporal dependency to regularize their model parameters. most of the methods in this domain aim to jointly consider the spatial and temporal dependency for prediction [64]. at present, the most popular framework is the cnn+rnn architecture, which implements sequence-to-sequence learning problems such as the one illustrated in figure 7. here, the multi-attributed spatial information for each time point can be organized as a series of multi-channel images, which can be encoded using convolution-based operations. for example, huang et al. [99] proposed the addition of convolutional layers to process the input into vector representations.
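the extrapolate-then-reconstruct procedure just described can be sketched with numpy; the factor matrices below are synthetic stand-ins for the output of a real tensor decomposition, and a simple linear trend stands in for a proper multivariate time-series forecaster:

```python
import numpy as np

rng = np.random.default_rng(1)
d_k, d_o, d_a, d_c = 2, 4, 3, 6   # latent topics, objects, actors, time steps

# assume the three low-rank factor matrices were already obtained from a
# decomposition of the observed event tensor (synthetic here for illustration)
P_o = rng.random((d_k, d_o))
P_a = rng.random((d_k, d_a))
P_c = rng.random((d_k, d_c))      # time factors, one column per time step

# forecast the time factors one step ahead; a linear trend on the last two
# columns stands in for a real time-series forecasting model
p_c_next = P_c[:, -1] + (P_c[:, -1] - P_c[:, -2])

# recover the future event slice by combining the predicted time factor with
# the known object and actor factors: Y_hat[o, a] = sum_k P_o[k,o] P_a[k,a] p[k]
Y_next = np.einsum("ko,ka,k->oa", P_o, P_a, p_c_next)
print("future event slice shape:", Y_next.shape)
```

only the time dimension is forecast; the object and actor factors are assumed stationary, which is exactly the simplification that makes this family of methods efficient.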
other researchers have leveraged variational autoencoders [215] and cnn autoencoders [104] to learn a low-dimensional embedding of the raw spatial input data, which allows the learned representation of the input to be fed into the temporal sequence learning architecture. different recurrent units have been investigated, including rnn, lstm, convlstm, and stacked convlstm [88]. the resulting representation of the input sequence is then passed as input to the output sequence, where another recurrent architecture is established. the output of the unit for each time point is input into a spatial decoder component, which can be implemented using transposed convolutional layers [233], transposed convlstm [104], or the spatial decoder of a variational autoencoder [215]. a conditional random field is another popular technique often used to model the spatial dependency [105]. point-based. the spatiotemporal point process is an important technique for spatiotemporal event prediction as it models the rate of event occurrence in terms of both spatial and time points. it is defined by a conditional intensity λ(t, l | X), which gives the expected instantaneous rate of events at time t and location l given the observed data X (equation (11)). various models have been proposed to instantiate this framework. for example, liu and brown [136] began by assuming conditional independence among spatial and temporal factors and hence achieved the decomposition λ(t, l | X) = λ_1(l | L, F) · λ_2(t | T, F), where X, L, T, and F denote the whole input indicator data as well as its different facets, namely location, time, and other semantic features, respectively. the term λ_1(·) can then be modeled using a markov spatial point process, while λ_2(·) can be characterized using temporal autoregressive models. to handle situations where explicit assumptions about model distributions are difficult to make, several methods have been proposed that involve deep architectures in the point process. most recently, okawa et al.
[159] have proposed a kernel-based intensity of the form λ(t, l) = Σ_{(t′, l′)} k((t, l), (t′, l′)) · g_θ(f(t′, l′)), where k(·, ·) is a kernel function, such as a gaussian kernel [26], that measures the similarity in the time and location dimensions, f(t′, l′) ⊆ F denotes the feature values (e.g., event semantics) for the data at location l′ and time t′, and g_θ(·) can be a deep neural network that is parameterized by θ and returns a nonnegative scalar. the model selection for g_θ(·) depends on the specific data types. for example, these authors constructed an image attention network by combining a cnn with the spatial attention model proposed by lu et al. [138]. in this section, we introduce the strategies that jointly predict the time, location, and semantics of future events, which can be grouped into either system-based or model-based strategies. system-based. the first type of system-based method considered here is the model-fusion system. the most intuitive approach is to leverage and integrate the aforementioned techniques for time, location, and semantics prediction into a single event prediction system. for example, embers [171] is an online warning system for future events that can jointly predict the time, location, and semantics of future events, including their type and participant population. this system also provides information on the confidence of the predictions obtained. using an ensemble of predictive models for time [160], location, and semantics prediction, this system achieves a significant performance boost in terms of both precision and recall. the trick here is to first prioritize the precision of each individual prediction model by suppressing its recall. then, thanks to the diversity and complementary nature of the different models, the fusion of their predictions eventually results in a high recall. a bayesian fusion-based strategy has also been investigated [95]. another system, carbon [108], leverages a similar strategy.
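returning to the kernel-weighted intensity described at the start of this passage, a minimal numpy sketch is shown below; a gaussian kernel and a softplus link stand in for the deep network g_θ, and all event data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical historical events: occurrence times, 2-d locations, and a
# small semantic feature vector per event
times = np.array([1.0, 2.5, 4.0, 5.5])
locs = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.2], [1.2, 0.8]])
feats = rng.random((4, 3))
theta = rng.random(3)

def softplus(z):
    # nonnegative link standing in for the deep network g_theta
    return np.log1p(np.exp(z))

def gaussian_kernel(t, l, t2, l2, bw_t=1.0, bw_l=1.0):
    """similarity in the time and location dimensions."""
    return np.exp(-((t - t2) ** 2) / (2 * bw_t**2)
                  - np.sum((l - l2) ** 2) / (2 * bw_l**2))

def intensity(t, l):
    """lambda(t, l) = sum over past events of k(., .) * g_theta(features)."""
    return sum(gaussian_kernel(t, l, ti, li) * softplus(theta @ fi)
               for ti, li, fi in zip(times, locs, feats))

lam = intensity(3.0, np.array([0.6, 0.4]))
print("intensity at query point:", lam)
```

points near many past events receive a high intensity, while remote space-time points receive an intensity close to zero, mirroring the smoothing behavior of the kernel formulation.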
the second type of model involves crowd-sourced systems that implement fusion strategies to aggregate the event predictions made by human predictors. for example, in order to handle the heterogeneity and diversity of the human predictors' skill sets and background knowledge under limited human resources, rostami et al. [177] proposed a recommender system for matching event forecasting tasks to human predictors with suitable skills in order to maximize the accuracy of their fused predictions. li et al. [126] took a different approach, designing a prediction market system that operates like a futures market, integrating information from different human predictors to forecast future events. in this system, the predictors can decide whether to buy or sell "tokens" (using virtual dollars, for example) for each specific prediction they have made, according to their confidence in it. they typically make careful decisions, as they receive corresponding rewards (for correct predictions) or penalties (for erroneous predictions). planned future event detection methods. these methods focus on detecting planned future events, usually from media sources such as social media and news, and typically rely on nlp techniques and linguistic principles. existing methods typically follow a workflow similar to the one shown in figure 8, consisting of four main steps: 1) content filtering. content filtering methods are typically leveraged to retain only the texts relevant to the topic of interest. existing works utilize either supervised methods (e.g., textual classifiers [117]) or unsupervised methods (e.g., querying techniques [152, 238]); 2) time expression identification, which is utilized to identify future reference expressions and determine the time to event.
these methods either leverage existing tools such as the rosetta text analyzer [55] or propose dedicated strategies based on linguistic rules [101]; 3) future reference sentence extraction, the core of planned event detection, which is implemented either by designing regular expression-based rules [153] or by textual classification [117]; and 4) location identification. the expression of locations is typically highly heterogeneous and noisy, so existing works have relied heavily on geocoding techniques that can resolve the event location accurately. in order to infer the event locations, various types of locations are considered by different researchers, such as article locations [152], authors' profile locations [49], locations mentioned in the articles [22], and authors' neighbors' locations [107]. multiple locations have been selected using a geometric median [49] or fused using logical rules such as probabilistic soft logic [152]. tensor-based methods. some methods formulate the data into tensor form, with dimensions including location, time, and semantics. tensor decomposition is then applied to approximate the original tensor as the product of multiple low-rank matrices, each of which is a mapping from latent topics to one dimension. finally, the tensor is extrapolated toward future time periods using various strategies. for example, mirtaheri [148] extrapolated only the time-dimension matrix, which they then multiplied with the other dimensions' matrices to recover the estimated tensor extrapolated into the future. zhou et al. [253] took a different approach, choosing instead to add "empty values" for the entries at future times in the original tensor, and then using tensor completion techniques to infer the missing values corresponding to future events.
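steps 2) and 3) of the planned event detection workflow above can, at their simplest, be approximated with regular-expression rules; the patterns below are illustrative assumptions, not a production rule set:

```python
import re

# hypothetical future-reference patterns; real systems rely on far richer
# linguistic rules or trained classifiers
FUTURE_PATTERNS = [
    r"\bon (?:jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)\w* \d{1,2}\b",
    r"\bnext (?:week|month|year|monday|tuesday|wednesday|thursday|friday)\b",
    r"\bin \d+ days?\b",
]

def extract_future_references(sentence):
    """return all future time expressions mentioned in a sentence."""
    hits = []
    for pat in FUTURE_PATTERNS:
        hits += re.findall(pat, sentence.lower())
    return hits

posts = [
    "the union announced a march on may 12 near the capitol",
    "great weather today in the park",
    "protest planned next friday at city hall",
]
# keep only posts containing a future reference (step 3 of the workflow)
flagged = [p for p in posts if extract_future_references(p)]
print(flagged)
```

in practice the matched expressions are then normalized to calendar dates to compute the time to event, and the remaining steps resolve the event location.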
this category generally consists of two types of event prediction: 1) population-level, which includes disease epidemics and outbreaks, and 2) individual-level, which relates to clinical longitudinal events. there has been extensive research on disease outbreaks for many different types of diseases and epidemics, including seasonal flu [3], zika [200], h1n1 [158], ebola [13], and covid-19 [162]. these predictions target both the location and time of future events, while the disease type is usually fixed to a specific type for each model. compartmental models such as sir models are among the classical mathematical tools used to analyze, model, and simulate epidemic dynamics [186, 237]. more recently, individual-based computational models have begun to be used to perform network-based epidemiology grounded in network science and graph-theoretical models, where an epidemic is modeled as a stochastic propagation over an explicit interaction network among people [52]. thanks to the availability of high-performance computing resources, another option is to construct a "digital twin" of the real world, by considering a realistic representation of a population, including members' demographic, geographic, behavioral, and social contextual information, and then using individual-based simulations to study the spread of epidemics within each network [27]. the above techniques rely heavily on model assumptions regarding how the disease progresses in individuals and is transmitted from person to person [27]. the rapid growth of large surveillance data and social media data sets such as twitter and google flu trends in recent years has led to a massive increase in interest in using data-driven approaches to directly learn the predictive mapping [3].
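the sir compartmental dynamics mentioned above can be sketched in a few lines; the transmission and recovery rates here are illustrative assumptions, not fitted to any real disease:

```python
# minimal discrete-time sir compartmental model integrated with euler steps
def simulate_sir(s0, i0, r0, beta, gamma, days, dt=0.1):
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # susceptible -> infected flow
        new_rec = gamma * i * dt          # infected -> recovered flow
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return history

# illustrative parameters: basic reproduction number beta/gamma = 3
traj = simulate_sir(s0=990, i0=10, r0=0, beta=0.3, gamma=0.1, days=160)
peak_step = max(range(len(traj)), key=lambda k: traj[k][1])
print(f"epidemic peaks around day {peak_step * 0.1:.0f}; "
      f"final susceptible: {traj[-1][0]:.0f}")
```

the predicted quantities of interest (peak time, attack rate) fall out of the trajectory directly, which is why compartmental models remain a standard baseline for outbreak forecasting.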
these methods are usually both more time-efficient and less dependent on assumptions, while the aforementioned computational models are more powerful for longer-term prediction due to their ability to take into account the specific disease mechanisms [242]. finally, there have also been reports of synergistic research that combines both techniques to benefit from their complementary strengths [98, 242]. this research thread focuses on the longitudinal predictive analysis of individual health-related events, including death [62], adverse drug events [185], sudden illnesses such as strokes [124] and cardiovascular events [23], as well as other clinical events [62] and life events [59] for different groups of people, including the elderly and people with mental disease. the goal here is usually to predict the time before an event occurs, although some researchers have attempted to predict the type of event. the data sources are essentially the electronic health records of individual patients [172, 185]. recently, social media, forum, and mobile data have also been utilized for predicting adverse drug events [185] and treatment events that arise during chronic disease care (e.g., chemotherapy, radiation, and surgery) [62]. this category focuses on predicting events based on information held in various types of media, including video-based, audio-based, and text-based formats. the core issue is to retrieve key information related to future events by applying semantic pattern recognition to the data. video- and audio-based. while event detection has been extensively researched for video data [129] and audio mining [192], event prediction is more challenging and has been attracting increasing attention in recent years. the goal here is usually to predict the future status of the objects in the video, such as the next action of soccer players [60] or basketball players [154], or the movement of vehicles [235]. text- and script-based.
a huge amount of news data has accumulated in recent decades, much of which can be used for big data predictive analytics of news events. a number of researchers have focused on predicting the location, time, and semantics of various events. to achieve this, they usually leverage immense historical news corpora and knowledge bases in order to learn the association and causality among events, which are then applied to forecast events given the current events. some studies have even directly generated textual descriptions of future events by leveraging nlp techniques such as sequence-to-sequence learning [57, 97, 123, 153, 168, 170, 190, 195, 221]. this category can be classified into: 1) population-based events, including dispersal events, gathering events, and congestion; and 2) individual-level events, which focus on fine-grained patterns such as human mobility behavior prediction. 4.3.1 group transportation patterns. here, researchers typically focus on transportation events such as congestion [43, 104], large gatherings [203], and dispersal events [204]. the goal is thus to forecast the future time period [80] and location [203] of such events. data from traffic meters, gps, and mobile devices are usually used to sense real-time human mobility patterns. transportation and geographical theories are usually considered to determine the spatial and temporal dependencies for predicting these events. another research thread focuses on individual-level prediction, such as predicting an individual's next location [130, 223] or the likelihood or time duration of car accidents [19, 174, 233]. sequential and trajectory analyses are usually used to process trajectory and traffic flow data. different types of engineering systems have begun to routinely apply event forecasting methods, including: 1) civil engineering, 2) electrical engineering, 3) energy engineering, and 4) other engineering domains.
despite the variety of systems in these widely different domains, the goal is essentially to predict future abnormal or failure events in order to support the system's sustainability and robustness. both the location and time of future events are key factors for these predictions. the input features usually consist of sensing data relevant to the specific engineering system. • civil engineering. this covers a wide range of problems in diverse urban systems, such as fault and adverse event prediction in smart buildings [21], emergency management equipment failure prediction [66], manhole event prediction [179], and other events [99]. • electrical engineering. this includes teleservice system failures [61] and unexpected events in wire electrical discharge machining operations [184]. • energy engineering. event prediction is also a hot topic in energy engineering, as such systems usually require strong robustness to handle disturbances from the natural environment. active research domains here include wind power ramp prediction [83], solar power ramp prediction [1], and adverse events in low-carbon energy production [50]. • other engineering domains. there is also active research on event prediction in other domains, such as irrigation event prediction in agricultural engineering [161] and mechanical fault prediction in mechanical engineering [197]. here, the prediction models proposed generally focus on either network-level or device-level events. for both types, the general goal is essentially to predict the likelihood of future system failures or attacks based on various indicators of system vulnerability.
so far these two categories have essentially differed only in their inputs: the former relies on network features, including system specifications, web access logs and search queries, mismanagement symptoms, and spam, phishing, and scamming activity, although some researchers are investigating the use of social media text streams to identify semantics indicating potential future ddos attack targets [142, 217]. for device-level events, the features of interest are usually the binary file appearance logs of machines [160, 210]. some work has also been done on micro-architectural attacks [90] by proactively analyzing observations of speculative branches, out-of-order execution, and shared last-level caches [188]. political event prediction has become a very active research area in recent years, largely thanks to the popularity of social media. the most common research topics can be categorized as: 1) offline events, and 2) online activism. 4.6.1 offline events. these include civil unrest [171], conflicts [218], violence [28], and riots [67]. this type of research usually targets the future events' geo-location, time, and topics by leveraging social sensors that indicate public opinions and intentions. utilizing social media has become a popular approach for these endeavors, as social media is a source of vital information during the event development stage [171]. specifically, many aspects are clearly visible in social media, including complaints from the public (e.g., toward the government), discussions about intentions regarding specific political events and targets, as well as advertisements for planned events. due to the richness of this information, further details of future events, such as the type of event [85], the anticipated participant population [171], and the event scale [84], can also be discovered in advance. 4.6.2 online events.
due to the major impact of online media such as online forums and social media, many events on such platforms, such as online activism, petitions, and hoaxes, also involve strong motivations for achieving some political purpose [213]. beyond simple detection, the prediction of various types of events has been studied in order to enable proactive intervention to sidetrack events such as hoaxes and rumor propagation [106]. other researchers have sought to foresee the results of future political events in order to benefit a particular group of practitioners, for example by predicting the outcome of online petitions or presidential elections [213]. different types of natural disasters have been the focus of a great deal of research. typically, these are rare events, but mechanistic models, long historical records (often extending back dozens or hundreds of years), and domain knowledge are usually available. the input data are typically collected by sensors or sensor networks, and the output is the risk or hazard of future potential events. since these events are typically rare but very high-stakes, many researchers strive to cover all event occurrences and hence aim to ensure high recall. 4.7.1 geophysics-related. earthquakes. predictions here typically focus on whether there will be an earthquake with a magnitude larger than a specified threshold in a certain area during a future period of time. to achieve this, the original sensor data are usually processed using geophysical models such as the gutenberg-richter inverse law, the distribution of characteristic earthquake magnitudes, and seismic quiescence [14, 175]. the processed data are then input into machine learning models that treat them as input features for predicting the output, which can be either binary values of event occurrence or time-to-event values.
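as a concrete example of the gutenberg-richter processing mentioned above, aki's maximum-likelihood estimator recovers the b-value from a magnitude catalog; the catalog below is hypothetical:

```python
import math

# hypothetical catalog of earthquake magnitudes above the completeness level mc
mags = [2.1, 2.3, 2.2, 2.8, 3.1, 2.4, 2.6, 2.2, 2.9, 3.4, 2.5, 2.3]
mc = 2.0  # magnitude of completeness

# aki's maximum-likelihood estimate of the gutenberg-richter b-value:
# b = log10(e) / (mean(m) - mc)
mean_m = sum(mags) / len(mags)
b_value = math.log10(math.e) / (mean_m - mc)

# gutenberg-richter law n(m) ~ 10^(a - b*m): expected count ratio per
# unit of magnitude, i.e., how many more m>=2 events than m>=3 events
ratio = 10 ** (b_value * 1.0)
print(f"b-value: {b_value:.2f}, ~{ratio:.1f}x more m>=2 than m>=3 events")
```

the resulting b-value (and its temporal variation) is a typical engineered feature fed to the downstream machine learning predictors described in the text.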
some studies are devoted to identifying the time of future earthquakes and their precursors, based on an ensemble of regressors and feature selection techniques [178], while others focus on aftershock prediction and the consequences of the earthquake, such as fire prevention [140]. it is worth noting that social media data have also been used for such tasks, as they often support early detection of the first-wave earthquake, which can then be used to predict the aftershocks or earthquakes in other locations [181]. fire events. research in this category can be grouped into urban fires and wildfires. this type of research often focuses on the time at which a fire will affect a specific location, such as a building. the goal here is to predict the risk of future fire events. to achieve this, both the external environment and the intrinsic properties of the location of interest are important. therefore, both static input data (e.g., natural conditions and demographics) and time-varying data (e.g., weather, climate, and crowd flow) are usually involved. shin and kim [189] focus on building fire risk prediction, where the input is the building's profile. others have studied wildfires, where weather data and satellite data are important inputs. this type of research focuses primarily on predicting both the time and location of future fires [216, 234]. other researchers have focused on rarer events such as volcanic eruptions. for example, some leverage chemical prior knowledge to build a bayesian network for prediction [44], while others adopt point processes to predict the hazard of future events [56]. 4.7.2 atmospheric science-related. flood events. floods may be caused by many different factors, including atmospheric (e.g., snow and rain), hydrological (e.g., ice melting, wind-generated waves, and river flow), and geophysical (e.g., terrain) conditions. this makes the forecasting of floods a highly complicated task that requires multiple diverse predictors [212].
flood event prediction has a long history, with the latest research focusing especially on computational and simulation models based on domain knowledge. this usually involves using ensemble prediction systems as inputs for hydrological and/or hydraulic models to produce river discharge predictions. for a detailed survey of flood computational models, please refer to [47]. however, it is prohibitively difficult to comprehensively consider and model all the factors correctly while avoiding all the accumulated errors from upstream predictions (e.g., precipitation prediction). another direction, based on data-driven models such as statistical and machine learning models for flood prediction, is deemed promising and is expected to be complementary to existing computational models. these newly developed machine learning models are often based solely on historical data, requiring no knowledge of the underlying physical processes. representative models are svms, random forests, and neural networks, along with their variants and hybrids. a detailed recent survey is provided in [149]. tornado forecasting. tornadoes usually develop within thunderstorms, and hence most tornado warning systems are based on the prediction of thunderstorms. for a comprehensive survey, please refer to [68]. machine learning models, when applied to tornado forecasting tasks, usually suffer from high-dimensionality issues, which are very common in meteorological data. some methods have leveraged dimensionality reduction strategies to preprocess the data [230] before prediction. research on other atmosphere-related events, such as droughts and ozone events, has also been conducted [77]. there is also a large body of prediction research focusing on events outside the earth, especially those affecting the star closest to us, the sun. methods have been proposed to predict various solar events that could impact life on earth, including solar flares [20], solar eruptions [4], and high-energy particle storms [141].
the goal here is typically to use satellite imagery of the sun to predict the time and location of future solar events and their activity strength [74]. business intelligence can be grouped into company-based events and customer-based events. 4.8.1 customer activity prediction. the most important customer activities in business are whether a customer will continue doing business with a company and how long a customer will be willing to wait before receiving service. a great deal of research has been devoted to these topics, which can be categorized based on the type of business entity, namely enterprises, social media, and education, which are primarily interested in churn prediction, site migration, and student dropout, respectively. the first of these focuses on predicting whether and when a customer is likely to stop doing business with a profitable enterprise [71]. the second aims to predict whether a social media user will move from one site, such as flickr, to another, such as instagram, a movement known as site migration [236]. while site migration is not common, attention migration might actually be much more frequent, as a user may "move" their major activities from one social media site to another. the third type, student dropout, is a critical domain for education data mining, where the goal is to predict the occurrence of unexcused absenteeism from school over a number of consecutive days; a comprehensive survey is available in [143]. for all three types, the procedure is first to collect features of a customer's profile and activities over a period of time, and then conventional or sequential classifiers or regressors are generally used to predict the occurrence or time-to-event of the future targeted activity. financial event prediction has been attracting a huge amount of attention for risk management, marketing, investment prediction, and fraud prevention.
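the churn/dropout workflow described above (collect profile and activity features over a period, then apply a conventional classifier) can be sketched with a logistic regression; the feature names and their distributions are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical customer features: [months_active, support_tickets, weekly_logins]
# churned customers (label 1) tend to have more tickets and fewer logins
n = 200
stay = np.column_stack([rng.normal(24, 6, n), rng.normal(1, 1, n), rng.normal(5, 1, n)])
churn = np.column_stack([rng.normal(8, 4, n), rng.normal(4, 1, n), rng.normal(1, 1, n)])
X = np.vstack([stay, churn])
y = np.array([0] * n + [1] * n)

# standardize features and fit a logistic regression by gradient descent
X = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted churn probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

acc = ((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

real deployments replace the synthetic features with aggregated behavioral logs and often use sequential models or survival analysis when the time-to-churn, rather than the binary outcome, is the target.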
multiple information resources, including news, company announcements, and social media data, can be utilized as the input, often taking the form of time series or temporal sequences. these sequential inputs are used for the prediction of the time and occurrence of future high-stakes events such as company distress, suspension, mergers, dividends, layoffs, bankruptcy, and market trends (rises and falls in the company's stock price) [36, 40, 65, 91, 122, 144, 226]. it is difficult to deduce the precise location and time of individual crime incidents. therefore, the focus is instead on estimating the risk and probability of the location, time, and types of future crimes. this field can be naturally categorized based on the various crime types: 4.9.1 political crimes and terrorism. this type of crime is typically highly destructive, and hence its anticipation and prevention attract huge attention. terrorist activities are usually aimed at religious, political, iconic, economic, or social targets. the attackers typically target large numbers of people, and the evidence related to such attacks is retained in the long run. though it is extremely challenging to predict the precise location and time of individual terrorism incidents, numerous studies have shown the potential utility of predicting the regional risks of terrorist attacks based on information gathered from many data sources such as geopolitical data, weather data, and economic data. the global terrorism database is the most widely recognized dataset that records descriptions of worldwide terrorism events of recent decades. in addition to terrorism events, other similar events such as mass killings [202] and armed-conflict events [193] have also been studied using similar problem formulations. 4.9.2 urban crimes. most studies on this topic focus on predicting the types, intensity, count, and probability of crime events across defined geospatial regions.
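a minimal sketch of estimating such a regional crime-risk surface with kernel density estimation, one of the spatial techniques commonly applied here; the incident coordinates are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical past crime incident coordinates clustered around (2, 2)
incidents = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))

def kde_risk(grid_pts, events, bw=0.5):
    """gaussian kernel density estimate of crime risk at each grid point."""
    diffs = grid_pts[:, None, :] - events[None, :, :]
    sq = (diffs ** 2).sum(-1)
    return np.exp(-sq / (2 * bw**2)).sum(axis=1) / (len(events) * 2 * np.pi * bw**2)

# evaluate the risk surface on a 20x20 grid over the city extent [0, 4]^2
xs = np.linspace(0, 4, 20)
grid = np.array([(x, y) for x in xs for y in xs])
risk = kde_risk(grid, incidents)
hotspot = grid[np.argmax(risk)]
print("highest-risk cell:", hotspot)
```

the resulting surface ranks grid cells by risk rather than predicting individual incidents, which matches the regional framing used throughout this subsection.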
to date, urban crime has been the most common research topic due to data availability. the geospatial characteristics of the urban areas, their demographics, and temporal data such as news, weather, economic, and social media data are usually used as inputs. the geospatial dependency and correlation of crime patterns are usually leveraged during the prediction process using techniques originally developed for spatial predictions, such as kernel density estimation and conditional random fields. some works simplify the task by focusing only on specific types of crimes such as theft [180], robbery, and burglary [51]. 4.9.3 organized and serial crimes. unlike the above research on regional crime risks, some recent studies strive to predict the next incidents of criminal individuals or groups. this is because different offenders may demonstrate different behavioral patterns, such as targeting specific regions (e.g., wealthy neighborhoods) or victims (e.g., women) for specific benefits (e.g., money). the goal here is thus to predict the next crime site and/or time, based on the historical crime event sequence of the targeted criminal individual or group. models such as point processes [130] or bayesian networks [133] are usually used to address such problems. despite the major advances in event prediction in recent years, there are still a number of open problems and potentially fruitful directions for future research, as follows: increasingly sophisticated forecasting models have been proposed to improve prediction accuracy, including those utilizing approaches such as ensemble models, neural networks, and the other complex systems mentioned above. however, although the accuracy can be improved, the event prediction models are rapidly becoming too complex to be interpreted by human operators.
The need for better model accountability and interpretability is becoming an important issue: as big data and artificial intelligence techniques are applied to ever more domains, opaque models can lead to serious consequences in applications such as healthcare and disaster management. Models that are not interpretable by humans will find it hard to earn the trust needed to be fully integrated into practitioners' workflows. A closely related concern is the accountability of the event prediction system. For example, disaster managers need to thoroughly understand a model's recommendations if they are to explain the reasons for a decision to displace people in a court of law. Moreover, an ever increasing number of laws around the world are beginning to require adequate explanations of decisions reached on the basis of model recommendations. For example, Articles 13-15 of the European Union's General Data Protection Regulation (GDPR) [207] require algorithms that make decisions that "significantly affect" individuals to provide explanations (the "right to explanation") as of May 25, 2018. Similar laws have also been established in countries such as the United States [48] and China [166].

The massive popularity of the proposal, development, and deployment of event prediction is stimulating a surge of interest in developing ways to attack these systems, so it will be no surprise when techniques designed to mislead event prediction methods begin to appear in the near future. As with many state-of-the-art AI techniques applied in other domains such as object recognition, event prediction methods can be very vulnerable to noise and adversarial attacks. The famous failure of Google Flu Trends, which overestimated the peak of the 2013 flu season by 140 percent due to low relevance and high disturbance in the input signal, remains a vivid memory for practitioners in the field [82].
Many predictions relying on social media data can also be easily influenced or flipped by injecting scam messages. Event prediction models tend to over-rely on low-quality input data that can be easily disturbed or manipulated, and they lack sufficient robustness to survive noisy signals and adversarial attacks. Similar problems threaten other application domains such as business intelligence, crime, and cyber systems.

Over the years, many domains have accumulated a significant amount of knowledge and experience about the mechanisms of event development and occurrence, which can provide important clues for anticipating future events; examples include epidemiological models, socio-political models, and earthquake models. All of these models focus on distilling real-world phenomena into concise principles in order to grasp the core mechanism, discarding many details in the process. In contrast, data-driven models strive to fit large historical data sets accurately, relying on sufficient model expressiveness, but cannot guarantee that the true underlying principles and causality of event occurrence are modeled accurately. There is thus a clear motivation to combine their complementary strengths, and although this has already attracted a great deal of interest [98, 242], most of the models proposed so far are merely ensemble-learning based and simply merge the final predictions of each model. A more thorough integration is needed that can directly embed the core principles to regularize and guide the training of data-driven event prediction methods. Moreover, existing attempts are typically specific to particular domains and are thus difficult to generalize, as they require in-depth collaboration between data scientists and domain experts. A generic framework encompassing multiple domains is imperative and would be highly beneficial for the various domain experts.
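The integration of mechanistic and data-driven models argued for here can be sketched, under strong simplifying assumptions, by correcting a discrete-time SIR epidemic curve with a data-driven residual term; the "learner" below is just a mean offset standing in for a real model:

```python
import numpy as np

def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One discrete step of the mechanistic SIR epidemic model."""
    new_inf = beta * s * i
    new_rec = gamma * i
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def hybrid_forecast(observed, horizon, s0=0.99, i0=0.01):
    """Blend a mechanistic SIR trajectory with a data-driven residual.
    The residual model here is deliberately minimal (mean offset between
    the observations and the SIR curve); real systems would use a learner."""
    s, i, r = s0, i0, 0.0
    sir_curve = []
    for _ in range(len(observed) + horizon):
        sir_curve.append(i)
        s, i, r = sir_step(s, i, r)
    sir_curve = np.array(sir_curve)
    offset = np.mean(np.array(observed) - sir_curve[:len(observed)])
    return sir_curve[len(observed):] + offset  # corrected forecast

# Demo: observations equal the SIR curve shifted up by a constant 0.005;
# the hybrid recovers the shift that the pure mechanistic model misses.
s, i, r = 0.99, 0.01, 0.0
true_curve = []
for _ in range(8):
    true_curve.append(i)
    s, i, r = sir_step(s, i, r)
observed = [v + 0.005 for v in true_curve[:5]]
forecast = hybrid_forecast(observed, horizon=3)
```

A deeper integration would instead use the mechanistic curve as a regularizer inside the learner's loss, rather than as a post-hoc correction.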
The ultimate purpose of event prediction is usually not just to anticipate the future but to change it, for example by avoiding a system failure or flattening the curve of a disease outbreak. However, it is difficult for practitioners to determine how to act appropriately and implement effective policies in order to achieve the desired future results. This requires a capability that goes beyond simply predicting future events from the current situation: models must also take into account the actions being taken in real time and predict how they will influence the future. One promising direction is counterfactual event prediction [145], which models what would have happened had different circumstances occurred. Another related direction is prescriptive analysis, where different actions can be merged into the prediction system and future results anticipated or optimized. Related work has been carried out in a few domains such as epidemiology, but many other domains remain under-researched, and generic frameworks that can benefit multiple domains are still needed.

Existing event prediction methods focus primarily on accuracy. However, decision makers who use predicted events usually need much more, including event resolution (e.g., time resolution, location resolution, description details), confidence (e.g., the probability that a predicted event will occur), efficiency (e.g., whether the model can predict per day or per second), lead time (how many days before the event the prediction can be made), and event intensity (how serious the event is). This calls for multi-objective optimization over accuracy, confidence, resolution, and the other metrics. There are typically trade-offs among these metrics and accuracy, so merely optimizing accuracy during training will inevitably cause the results to drift away from the overall optimal event-prediction-based decision.
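One simple way to balance such metrics is a weighted scalarization reflecting the decision maker's priorities. The candidate models, metric values, and weights below are hypothetical:

```python
def decision_score(metrics, weights):
    """Scalarize multiple event-prediction metrics (accuracy, lead time,
    resolution, ...) into one score reflecting a decision maker's
    priorities. All metrics are assumed normalized to [0, 1]."""
    return sum(weights[k] * metrics[k] for k in weights)

# Two hypothetical candidate models with opposite strengths
candidates = {
    "accurate_but_late": {"accuracy": 0.95, "lead_time": 0.10, "resolution": 0.6},
    "coarse_but_early":  {"accuracy": 0.75, "lead_time": 0.90, "resolution": 0.4},
}
# A disaster manager who values early warning over raw accuracy
weights = {"accuracy": 0.3, "lead_time": 0.6, "resolution": 0.1}
best = max(candidates, key=lambda m: decision_score(candidates[m], weights))
```

Under these weights the early-but-coarse model wins; a different stakeholder (e.g., one preparing legal evidence) would weight accuracy and resolution higher and select the other model, which is exactly the flexibility the text calls for.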
A system that can flexibly balance the trade-offs among these metrics according to decision makers' needs, achieving a multi-objective optimization, is the ultimate goal for these models.

This paper has presented a comprehensive survey of existing methodologies developed for event prediction in the big data era. It provides an extensive overview of event prediction challenges, techniques, applications, evaluation procedures, and future outlook, summarizing the research presented in over 200 publications, most of which were published in the last five years. Event prediction challenges, opportunities, and formulations have been discussed in terms of the event elements to be predicted, including event location, time, and semantics, after which we proposed a systematic taxonomy of existing event prediction techniques according to the formulated problems and the types of methodologies designed for them. We have also analyzed the relationships, differences, advantages, and disadvantages of these techniques across various domains, including machine learning, data mining, pattern recognition, natural language processing, information retrieval, statistics, and other computational models. In addition, a comprehensive and hierarchical categorization of popular event prediction applications has been provided, covering domains ranging from the natural sciences to the social sciences. Based upon the numerous historical and state-of-the-art works discussed in this survey, the paper concludes by discussing open problems and future trends in this fast-growing field.
forecasting of solar power ramp events: a post-processing approach causal prediction of top-k event types over real-time event streams epideep: exploiting embeddings for epidemic forecasting prediction of solar eruptions using filament metadata area-specific crime prediction models methodological approach of construction business failure prediction studies: a review event forecasting with pattern markov chains wayeb: a tool for complex event forecasting probabilistic complex event recognition: a survey on-line new event detection and tracking a bayesian approach to event prediction forecasting with twitter data forecasting ebola with a regression transmission model earthquake magnitude prediction in hindukush region using machine learning techniques a survey of techniques for event detection in twitter modern information retrieval predicting structured data customer event history for churn prediction: how long is long enough? a spatiotemporal deep learning approach for citywide short-term crash risk prediction with multi-source data a comparison of flare forecasting methods. i. 
results from the "all-clear" workshop scalable causal learning for predicting adverse events in smart buildings identifying content for planned events across social media sites comparison of machine learning algorithms for clinical event prediction data-driven prediction and prevention of extreme events in a spatially extended excitable system uncertainty on asynchronous time event prediction pattern recognition and machine learning epifast: a fast algorithm for large scale realistic epidemic simulations on distributed memory systems predicting local violence: evidence from a panel survey in liberia forecasting civil wars: theory and structure in an age of "big data" and machine learning real time, time series forecasting of inter-and intra-state political conflict forecasting social unrest using activity cascades estimating binary spatial autoregressive models for rare events sensor event prediction using recurrent neural network in smart homes for older adults prediction of next sensor event and its time of occurrence using transfer learning across homes temporal convolutional networks allow early prediction of events in critical care making words work: using financial text as a predictor of financial events social media fact sheet event summarization using tweets extracting causation knowledge from natural language texts a text-based decision support system for financial sequence prediction non-parametric scan statistics for event detection and forecasting in heterogeneous social media graphs a generic framework for interesting subspace cluster detection in multi-attributed networks pcnn: deep convolutional networks for short-term traffic congestion prediction bayesian networks based rare event prediction with sensor data a tree-based approach for event prediction using episode rules over event streams stargan: unified generative adversarial networks for multi-domain image-to-image translation ensemble flood forecasting: a review regulating by robot:
administrative decision making in the machine-learning era using publicly visible social media to build detailed forecasts of civil unrest infrequent adverse event prediction in low carbon energy production using machine learning an architecture for emergency event prediction using lstm recurrent neural networks disease transmission in territorial populations: the small-world network of serengeti lions bayes predictive analysis of a fundamental software reliability model hidden markov models as a support for diagnosis: formalization of the problem and synthesis of the solution text analytics apis, part 2: the smaller players a volcanic event forecasting model for multiple tephra records, demonstrated on mt news events prediction using markov logic networks a new hybrid classification algorithm for customer churn prediction based on logistic regression and decision trees leveraging fine-grained transaction data for customer life event predictions predicting soccer highlights from spatiotemporal match event streams event prediction for individual unit based on recurrent event data collected in teleservice systems isurvive: an interpretable, event-time prediction model for mhealth an overview of event extraction from twitter traffic congestion prediction by spatiotemporal propagation patterns deep learning for event-driven stock prediction online failure prediction for railway transportation systems based on fuzzy rules and data analysis forecasting location-based events with spatio-temporal storytelling tornado forecasting: a review recurrent marked temporal point processes: embedding event history to vector on clinical event prediction in patient treatment trajectory using longitudinal electronic health records systematic review of customer churn prediction in the telecom sector reactive point processes: a new approach to predicting power failures in underground electrical systems forecasting heroin overdose occurrences from crime incidents mag4 versus alternative 
techniques for forecasting active region flare productivity christiane fellbaum. 2012. wordnet. the encyclopedia of applied linguistics a survey on wind power ramp forecasting managing the risks of extreme events and disasters to advance climate change adaptation: special report of the intergovernmental panel on climate change issues in complex event processing: status and prospects in the big data era failure prediction based on log files using random indexing and support vector machines titan: a spatiotemporal feature learning framework for traffic incident duration prediction survey on complex event processing and predictive analytics google flu trends' failure shows good data> big data a review on the recent history of wind power ramp forecasting incomplete label multi-task ordinal regression for spatial event scale forecasting incomplete label multi-task deep learning for spatio-temporal event subtype forecasting extreme events: dynamics, statistics and prediction a taxonomy of event prediction methods deep learning what happens next? event prediction using a compositional neural network model fortuneteller: predicting microarchitectural attacks via unsupervised deep learning automated news reading: stock price prediction based on financial news using context-specific features data mining: concepts and techniques simulating spatio-temporal patterns of terrorism incidents on the indochina peninsula with gis and the random forest method toward future scenario generation: extracting event causality exploiting semantic relation, context, and association features bayesian model fusion for forecasting civil unrest integrating hierarchical attentions for future subevent prediction what happens next? 
future subevent prediction using contextual hierarchical lstm social media based simulation models for understanding disease dynamics mist: a multiview and multimodal spatial-temporal learning framework for citywide abnormal event forecasting improved disk-drive failure warnings estimating time to event of future events based on linguistic cues on twitter using machine learning methods to forecast if solar flares will be associated with cmes and seps skip n-grams and ranking functions for predicting script events deepurbanevent: a system for predicting citywide crowd dynamics at big events a survey on spatial prediction methods epidemiological modeling of news and rumors on twitter that's what friends are for: inferring location in online social media platforms based on social relationships carbon: forecasting civil unrest events by monitoring news and social media time-series event-based prediction: an unsupervised learning framework based on genetic programming forecasting gathering events through trajectory destination prediction: a dynamic hybrid model extracting causal knowledge from a medical database using graphical patterns supervenience and mind: selected philosophical essays diversityaware event prediction based on a conditional variational autoencoder with reconstruction prediction for big data through kriging: small sequential and one-shot designs improving event causality recognition with multiple background knowledge sources using multi-column convolutional neural networks a spatial scan statistic leveraging unscheduled event prediction through mining scheduled event tweets spatio-temporal violent event prediction using gaussian process regression time-to-event prediction with neural networks and cox regression data mining and predictive analytics stream prediction using a generative model based on frequent episodes in event sequences a hybrid model for business process event prediction event prediction based on causality reasoning interpretable 
classifiers using rules and bayesian analysis: building a better stroke prediction model sequential event prediction the wisdom of crowds in action: forecasting epidemic diseases with a web-based prediction market system feature selection: a data perspective multi-attribute event modeling and prediction over event streams from sensors time-dependent representation for neural event sequence prediction next hit predictor-self-exciting risk modeling for predicting next locations of serial crimes constructing narrative event evolutionary graph for script event prediction failure event prediction using the cox proportional hazard model driven by frequent failure signatures a novel serial crime prediction model based on bayesian learning theory mm-pred: a deep predictive model for multi-attribute event sequence grid-based crime prediction using geographical features a new point process transition density model for space-time event prediction conceptnet: a practical commonsense reasoning tool-kit knowing when to look: adaptive attention via a visual sentinel for image captioning sam-net: integrating event-level and chain-level attentions to predict what happens next major earthquake event prediction using various machine learning algorithms data handling and assimilation for solar event prediction fast mining and forecasting of complex time-stamped events a survey of machine learning approaches and techniques for student dropout prediction a multi-stage deep learning approach for business process event prediction counterfactual theories of causation event recognition and forecasting technology forecasting occurrences of activities tensor-based method for temporal geopolitical event forecasting flood prediction using machine learning models: literature review urban events prediction via convolutional neural networks and instagram data embers at 4 years: experiences operating an open source indicators forecasting system capturing planned protests from open source
indicators a prototype method for future event prediction based on future reference sentence extraction future event prediction: if and when sequence to sequence learning for event prediction staple: spatio-temporal precursor learning for event forecasting spatio-temporal event forecasting and precursor identification real-time forecasting of an epidemic using a discrete time stochastic model: a case study of pandemic influenza (h1n1-2009) deep mixture point processes: spatio-temporal event prediction with rich contextual information mobile network failure event detection and forecasting with multiple user activity data sets prediction of irrigation event occurrence at farm level using optimal decision trees forecasting the novel coronavirus covid-19 using social media to predict the future: a systematic literature review towards a deep learning approach for urban crime forecasting a new temporal pattern identification method for characterization and prediction of complex time series events assessing china's cybersecurity law pairwise-ranking based collaborative recurrent neural networks for clinical event prediction learning causality for news events prediction learning to predict from textual data mining the web to predict future events beating the news' with embers: forecasting civil unrest using open source indicators an investigation of interpretable deep learning for adverse drug event prediction forecasting natural events using axonal delay a deep learning approach to the citywide traffic accident risk prediction neural networks to predict earthquakes in chile spatial crime distribution and prediction for sporting events using social media a crowdsourcing triage algorithm for geopolitical event forecasting machine learning predicts laboratory earthquakes a process for predicting manhole events in manhattan theft prediction with individual risk factor of visitors earthquake shakes twitter users: real-time event detection by social sensors a survey of online 
failure prediction methods using hidden semi-markov models for effective online failure prediction unexpected event prediction in wire electrical discharge machining using deep learning techniques adverse drug event prediction combining shallow analysis and machine learning forecasting seasonal outbreaks of influenza an efficient approach to event detection and forecasting in dynamic multivariate social media networks tiresias: predicting security events through deep learning autoencoder-based one-class classification technique for event prediction predicting an effect event from a new cause event using a semantic web based abstraction tree of past cause-effect event pairs modeling events with cascades of poisson processes neural speech recognizer: acoustic-to-word lstm model for large vocabulary speech recognition fundamental patterns and predictions of event size distributions in modern wars and terrorist campaigns high-impact event prediction by temporal data mining through genetic algorithms hierarchical gated recurrent unit with semantic attention for event prediction yago: a core of semantic knowledge machine learning for predictive maintenance: a multiple classifier approach an empirical comparison of classification techniques for next event prediction using business process event logs probabilistic forecasting of wind power ramp events using autoregressive logit models dynamic forecasting of zika epidemics using google trends predicting time-to-event from twitter messages a multimodel ensemble to forecast onsets of state-sponsored mass killing forecasting gathering events through continuous destination prediction on big trajectory data predicting urban dispersal events: a two-stage framework through deep survival analysis on mobility data a measurement-based model for estimation of resource exhaustion in operational software systems predicting rare events in temporal domains the eu general data protection regulation (gdpr). 
a practical guide graph-based deep modeling and real time forecasting of sparse spatio-temporal data deep learning for real-time crime forecasting and its ternarization an iot application for fault diagnosis and prediction a hierarchical pattern learning framework for forecasting extreme weather events towards long-lead forecasting of extreme flood events: a data mining framework for precipitation cluster precursors identification incomplete label uncertainty estimation for petition victory prediction with dynamic features using twitter for next-place prediction, with an application to crime prediction csan: a neural network benchmark model for crime forecasting in spatio-temporal scale cityguard: citywide fire risk forecasting using a machine learning approach ddos event forecasting using twitter data the perils of policy by p-value: predicting civil conflicts forest-based point process for event prediction from electronic health records on predicting crime with heterogeneous spatial patterns: methods and evaluation a miml-lstm neural network for integrated fine-grained event forecasting event history analysis spatio-temporal check-in time prediction with recurrent neural network based survival analysis finding progression stages in time-evolving event sequences web-log mining for quantitative temporal-event prediction using external knowledge for financial event prediction based on graph neural networks neural network based continuous conditional random field for fine-grained crime prediction an integrated model for crime prediction using temporal and spatial factors predicting future levels of violence in afghanistan districts using gdelt tornado forecasting with multiple markov boundaries dram: a deep reinforced intra-attentive model for event prediction a survey of prediction using social media hetero-convlstm: a deep learning approach to traffic accident prediction on heterogeneous spatio-temporal data blending forest fire smoke forecasts with observed data 
can improve their utility for public health applications a data-driven approach for event prediction social media mining: an introduction forecasting seasonal influenza fusing digital indicators and a mechanistic disease model unsupervised spatial event detection in targeted domains with applications to civil unrest modeling spatiotemporal event forecasting in social media multi-resolution spatial event forecasting in social media online spatial event forecasting in microblogs simnest: social media nested epidemic simulation via online semi-supervised deep learning spatial auto-regressive dependency interpretable learning based on spatial topological constraints multi-task learning for spatio-temporal event forecasting feature constrained multi-task learning models for spatiotemporal event forecasting spatial event forecasting in social media with geographically hierarchical regularization distant-supervision of heterogeneous multitask learning for social event forecasting with multilingual indicators hierarchical incomplete multi-source feature learning for spatiotemporal event forecasting constructing and embedding abstract event causality networks from text snippets modeling temporal-spatial correlations for crime prediction prediction model for solar energetic proton events: analysis and verification a pattern based predictor for event streams a tensor framework for geosensor data forecasting of significant societal events

key: cord-285484-owpnhplk
authors: salfi, f.; amicucci, g.; corigliano, d.; d'atri, a.; viselli, l.; tempesta, d.; ferrara, m.
title: changes of evening exposure to electronic devices during the covid-19 lockdown affect the time course of sleep disturbances
date: 2020-10-21
journal: nan
doi: 10.1101/2020.10.20.20215756
sha: doc_id: 285484 cord_uid: owpnhplk

Study objectives: During the COVID-19 lockdown, there was a worldwide increase in electronic devices' daily usage.
Exposure to backlit screens before falling asleep has negative consequences for sleep health through its influence on the circadian system. We investigated the relationship between changes in evening screen exposure and the time course of sleep disturbances during the home confinement period due to COVID-19.

Methods: 2123 Italians were longitudinally tested during the third and the seventh week of lockdown. The web-based survey evaluated sleep quality and insomnia symptoms through the Pittsburgh Sleep Quality Index (PSQI) and the Insomnia Severity Index (ISI). During the second assessment, respondents reported changes in backlit screen exposure in the two hours before falling asleep.

Results: Participants who increased electronic device usage showed decreased sleep quality, exacerbated insomnia symptoms, reduced sleep duration, higher sleep onset latency, and delayed bedtime and rising time. In this subgroup, the prevalence of poor sleepers and clinical insomniacs increased. Conversely, respondents reporting decreased screen exposure exhibited improved sleep quality and insomnia symptoms, and in this subgroup the prevalence of poor sleepers and clinical insomniacs decreased. Respondents preserving their screen-time habits did not show any change in the sleep parameters.

Conclusions: Our investigation demonstrates a strong relationship between modifications of evening electronic device usage and the time course of sleep disturbances during the lockdown period. Interventions to raise public awareness about the risks of excessive exposure to backlit screens are necessary to prevent sleep disturbances and foster well-being during home confinement due to COVID-19.

The rapid worldwide spread of the COVID-19 pandemic marked the first months of 2020. During this unprecedented situation, governments across the globe implemented extraordinary measures to reduce the spread of the contagion and the pressure on healthcare systems.
From 9 March to 4 May 2020, a total lockdown was imposed in Italy, involving the large-scale closure of most work activities, social distancing, and home-based quarantine for the general population. The home confinement measures had a substantial negative impact on global mental health and psychological well-being. 1, 2 The considerable disruption of daily routines had consistent repercussions on sleep health and circadian rhythms, as documented by several studies. [3] [4] [5] [6] However, evidence on the time course of sleep disturbances during the extended period of restraining measures is scarce. 7

The forced social isolation and the limitation of outdoor activities led to a worldwide increase in web-based social communication. [8] [9] [10] Electronic devices' daily usage increased [11] [12] [13] to compensate for the limited social interactions, fill free time, and ward off boredom. The adoption of these habits may have helped people cope with the challenging and stressful isolation period. Nevertheless, the increase in screen exposure in the hours before bedtime could have had adverse consequences for sleep health.

Sleep rhythms are intimately linked with ambient light, which is a crucial regulator of the biological clock. Human eyes comprise non-visual photoreceptors that are primarily responsive to ~450-480 nm light within the blue portion of the spectrum. 14, 15 The activation of this system leads to suppression of melatonin release, a key sleep-related pineal gland hormone. 16 Evening exposure to short-wavelength-enriched light has alerting effects and detrimental consequences for sleep. 17, 18 Nowadays, most screens of modern electronic devices (computers, smartphones, tablets, televisions) are equipped with light-emitting diodes (LEDs) having a peak wavelength in the blue range of ~460 nm. 19
The copyright holder for this preprint (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC-ND 4.0 International license. This version posted October 21, 2020. https://doi.org/10.1101/2020.10.20.20215756

Therefore, exposure to backlit screens during the hours preceding habitual bedtime can interfere with circadian physiology. Epidemiological and cross-sectional studies have indeed shown a strong relationship between electronic device usage after sundown and alterations of sleep patterns. [20] [21] [22] [23] [24] [25] [26] [27] The impact of screen exposure before falling asleep on melatonin secretion was confirmed by several studies that experimentally manipulated evening exposure to tablet, 28 e-reader, 29 and computer screens. 19, 30 These investigations also reported decreased objective and self-reported sleepiness, higher sleep onset latency, and altered sleep architecture. Conversely, other studies showed protective effects of blocking blue-light emissions on melatonin regulation 31 and sleep quality, both in healthy 32 and clinically insomniac subjects. 33, 34

Based on this evidence, the present study aimed to shed light on the relationship between the longitudinal changes in sleep disturbances between the third and the seventh week of home confinement in Italy and the retrospectively reported modifications of exposure to electronic devices before falling asleep during the same lockdown period. We hypothesized that changes in electronic device usage could be a crucial mediator of the lockdown-related sleep alterations over time. We expected that individuals who increased their screen exposure would show the largest sleep impairments and the most marked alterations of the sleep/wake schedule. On the other hand, subjects who reduced screen time should have exhibited a positive time course of sleep disturbances.
The present investigation is part of a larger research project aimed at understanding the consequences of the COVID-19 lockdown on the Italian population [6]. A total of 7107 Italian citizens were recruited in a web-based survey through snowball sampling during the third week of lockdown. The survey comprised a battery of questionnaires (including the PSQI, ISI, MEQr, BDI-II, 10-PSS, and STAI-X1), with the option to stop after each of them. This optionality was aimed at ensuring higher reliability of the collected data, avoiding careless answers in the last questionnaires. The BDI-II is a validated questionnaire used to assess symptoms of clinical depression (range 0-63). The 10-PSS is a 10-item questionnaire evaluating perceived stress following stressful events (range 0-50). The STAI-X1 is a well-established 20-item scale measuring state anxiety (range 20-80). For all these questionnaires, higher scores indicate more severe conditions. After four weeks, the website link of the follow-up survey was provided to the participants via email address/telephone number. A total of 2701 subjects completed the second assessment within a seven-day period (21-27 April 2020). From this large follow-up sample, we included in the reported analyses only the 2123 respondents (mean age ± standard deviation, 33.1 ± 11.6 years; range 18-82; 401 men; see Table 1) who completed the first survey during the four days preceding the daylight-saving transition (25-28 March; Time 1). This allowed us to avoid interfering and confounding effects on the baseline measurement due to the beginning of summertime (for a review, see [41]). During the follow-up survey (Time 2), participants completed the same questionnaires as at Time 1.
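The baseline inclusion rule described above (completion of the first survey in the four days before the daylight-saving transition) amounts to a simple date-window filter. A minimal sketch in Python, with the window dates taken from the text; the function name is ours, not the authors':

```python
from datetime import date

def in_baseline_window(completed_on,
                       start=date(2020, 3, 25),
                       end=date(2020, 3, 28)):
    """True if a respondent finished the first survey between
    25 and 28 March 2020 (inclusive), i.e. during the four days
    preceding the daylight-saving transition."""
    return start <= completed_on <= end
```

Applying such a filter to the 2701 follow-up completers is what yields the 2123 respondents analysed here.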
Moreover, they were asked to retrospectively evaluate changes (increase, maintenance, reduction) since the first assessment in the usage duration of electronic devices (smartphone, computer, tablet, television, e-reader) in the two hours before falling asleep. The study was approved by the Institutional Review Board of the University of L'Aquila (protocol no. 43066) and was carried out according to the principles established by the Declaration of Helsinki. Online informed consent to participate in the whole research project was obtained from all respondents during the first assessment. To control for potential selection bias among the follow-up participants, we performed preliminary mixed-model analyses comparing the Time 1 questionnaire scores of respondents who participated only in the first assessment with those of respondents who attended both measurements (Time 1 and Time 2). These control analyses did not highlight significant differences (all p > 0.10). According to the purpose of the present study, the main variables were the PSQI and ISI scores. Additionally, from the PSQI questionnaire we extracted total sleep time (TST, min), sleep onset latency (SOL, min), bedtime (BT, hh:mm), and rise time (RT, hh:mm). To evaluate the time course of the sleep dimensions as a function of the reported changes in exposure to electronic devices, all the above variables were submitted to mixed-model analyses with a random intercept per participant, accounting for the expected intra-individual variability. The models comprised "time" (Time 1, Time 2), "screen exposure" (increased, unchanged, reduced), and their interaction as predictors.
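The random-intercept model just described can be written out explicitly. For a sleep outcome $Y$ (PSQI, ISI, TST, SOL, BT, or RT) of participant $i$ at assessment $j$, a sketch in our notation (not the authors'):

```latex
Y_{ij} = \beta_0 + \beta_1\,\mathrm{Time}_{j}
       + \beta_2\,\mathrm{Screen}_{i}
       + \beta_3\,(\mathrm{Time}\times\mathrm{Screen})_{ij}
       + u_i + \varepsilon_{ij},
\qquad u_i \sim \mathcal{N}(0,\sigma_u^2),\quad
\varepsilon_{ij} \sim \mathcal{N}(0,\sigma_\varepsilon^2)
```

where Screen is the three-level factor (increased, unchanged, reduced) and $u_i$ is the per-participant random intercept that absorbs intra-individual correlation between the two assessments.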
Additionally, "gender" (man, woman) was included as a factor and age as a covariate, to control for putative effects of these demographic variables on the main outcomes of the present study. Subsequently, explorative analyses were carried out adding the scores of the MEQr, BDI-II, 10-PSS, and STAI-X1 to the models as covariates; these further analyses aimed to control for the effects of chronotype, depression, stress, and anxiety on the sleep measures. Mixed-model analyses were performed using the "lme4" R package [42]. Models were fitted using REML, with the Satterthwaite approximation used to compute p-values. Bonferroni post hoc tests were obtained using the "emmeans" R package [43]. Finally, the validated cut-off scores of the PSQI and ISI were used to determine the prevalence of poor sleepers and of moderate/severe insomnia in the groups defined by the reported changes in screen exposure before falling asleep. For all analyses, statistical significance was set at p < 0.05, and all tests were two-tailed. At Time 1, there were no differences in PSQI and ISI scores between respondents who later reported an increase or a reduction of screen exposure (both p = 1.00). Participants maintaining their device-use habits showed lower PSQI scores at Time 1 compared with those who increased or reduced their exposure to backlit screens (both p < 0.001). ISI scores at Time 1 were likewise lower for subjects who did not change their screen exposure than for participants who increased or reduced it (p < 0.001 and p = 0.04, respectively).
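The cut-off classification mentioned above can be sketched as follows. The paper does not restate the thresholds, so the conventional published cut-offs are assumed here (PSQI global score > 5 for poor sleep; ISI 0-7 none, 8-14 subthreshold, 15-21 moderate, 22-28 severe):

```python
def classify_sleep(psqi_global, isi_total):
    """Classify one respondent using conventional questionnaire
    cut-offs (assumed, not restated in the paper):
      PSQI global > 5                 -> poor sleeper
      ISI 0-7 / 8-14 / 15-21 / 22-28 -> none / subthreshold /
                                         moderate / severe insomnia."""
    poor_sleeper = psqi_global > 5
    if isi_total >= 22:
        insomnia = "severe"
    elif isi_total >= 15:
        insomnia = "moderate"
    elif isi_total >= 8:
        insomnia = "subthreshold"
    else:
        insomnia = "none"
    return poor_sleeper, insomnia
```

The prevalence of poor sleepers and of moderate/severe insomnia in each screen-exposure group then follows by counting respondents for whom `poor_sleeper` is true or `insomnia` is "moderate"/"severe".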
The three groups did not differ at Time 1 on TST, SOL, BT, or RT (all p > 0.85). Participants who reported an increase of screen exposure also showed higher PSQI and ISI scores at Time 2, and later BT and RT, compared with the other two groups (all p < 0.01), as well as shorter TST and longer SOL compared with the group that did not change its device-usage habits (both p < 0.001; see Figure 2). No differences in any of the variables were found at Time 2 between subjects who reduced or maintained the device-usage duration before falling asleep (all p > 0.32). […] during the seventh week of lockdown, also controlling for the differences in the baseline scores. In the present study, we showed a strong relationship between changes in evening screen exposure and the time course of sleep parameters during the COVID-19 lockdown. In line with our initial assumption, individuals declaring increased electronic device usage before falling asleep showed a general sleep impairment over time (from the third to the seventh week of home confinement).
This outcome comprised decreased sleep quality, an exacerbation of insomnia symptoms, reduced sleep duration, and longer sleep onset latency. Consistently, we found an increased prevalence of poor sleepers and of moderate/severe insomnia only within this group of respondents. The increased screen exposure was also linked to delayed bedtime and rise time, outlining a delayed sleep phase across the home confinement period. Remarkably, the present findings were obtained controlling for the effects of gender and age, and they were confirmed when additionally controlling for the covariance of chronotype, depression, stress, and anxiety. Therefore, our results indicate a direct relationship between evening screen exposure and sleep disturbances. The COVID-19 pandemic has affected the whole world, and home confinement has been the most widely used measure to counter the spread of contagion. In modern societies, increased use of screen-based devices may be an unavoidable consequence of pandemic-related home confinement periods; indeed, more than one-third of our sample reported an increase in electronic device usage in the two hours before falling asleep. Consequently, our findings have substantial large-scale implications when contextualized to the current unprecedented situation. Adequate sleep quantity/quality is essential to deal with stressful events [47] and to preserve mental health [48,49], and it plays a crucial role in emotional processing [50,51] and mood regulation [52]. Indeed, aberrant light exposure and excessive screen time have been associated with sleep and mental health problems
[53,54]. Consistently, blocking screen-emitted blue light has proved effective in promoting both sleep quality and mood [32,34], and it has been proposed as a useful approach to treat both clinical insomnia [33,34] and mood disorders [53,55]. Finally, sleep and the circadian system support the proper functioning of the immune system [56,57], never as important as during the current historical period. In light of these considerations, the relationship between screen time and sleep outcomes has a broad spectrum of implications, making it a major public health concern during the COVID-19 outbreak. The present results were obtained in an Italian sample, but they may be generalizable to other modern societies, since the putative underlying mechanisms involve a disruption of circadian physiology and a direct arousal-inducing effect of evening light exposure. However, we cannot infer the causality of this relationship, since this is an observational study and the changes in screen exposure were reported retrospectively during the second assessment. Although a comprehensive literature supports the detrimental effect of the blue light emitted by electronic devices on melatonin secretion and sleep patterns [19,28-30], we cannot exclude reverse causation. Nevertheless, the two interpretations are not mutually exclusive, and a bidirectional model of causation has been suggested [58]. We propose that a vicious circle was established during the confinement period, in which the increased screen exposure before falling asleep negatively affected the sleep parameters, which in turn supported the overuse of electronic devices after sunset.
In conclusion, our findings support the view that governments should pursue policies to raise public awareness of healthy sleep behaviors during confinement due to the COVID-19 pandemic, discouraging excessive use of electronic devices before falling asleep [59,60]. The evening use of blue-light-blocking glasses and the application of blue-wavelength light filters ("night shift" settings) on electronic screens should be encouraged to mitigate the well-known detrimental consequences of electronic device usage. To date, the feared risk of a second wave of contagion is increasingly becoming a concrete reality, and hundreds of thousands of people are subjected to home confinement measures worldwide. In light of our results, the above-mentioned interventions focused on sleep hygiene are fundamental to counteracting the occurrence and exacerbation of sleep disturbances and to fostering general well-being during home confinement due to the COVID-19 pandemic. To the best of our knowledge, the present investigation is the first to provide insights into the relationship between electronic device usage and the time course of sleep disturbances during the COVID-19 lockdown. However, it should be acknowledged that we used a non-probabilistic sampling technique, and the sample comprised a higher prevalence of women and young people. Moreover, individuals under eighteen years old were not included; however, since the relationship between evening screen time and sleep disturbances has been widely shown in adolescents [61-63], we hypothesize that our results could be generalizable to younger people.
Additionally, the electronic-device category of our survey included a broad set of devices, and we cannot discern the relationship between each single device's usage and the time course of the sleep outcomes. Finally, our survey did not assess the use of protective approaches that reduce the emission or perception of screen light, and thus we cannot estimate their contribution to the present findings. We are grateful to Professor Marco Lauriola for his valuable support in the statistical analysis and to Jasmin Cascioli for her help in data collection.

References:
- COVID-19 and mental health: a review of the existing literature
- COVID-19 pandemic and mental health consequences: systematic review of the current evidence
- Effects of the COVID-19 lockdown on human sleep and rest-activity rhythms
- Sleep in university students prior to and during COVID-19 stay-at-home orders
- Changes in sleep pattern, sense of time and digital media use during COVID-19 lockdown in Italy
- The impact of home confinement due to COVID-19 pandemic on sleep quality and insomnia symptoms among the Italian population
- A longitudinal study on the mental health of general population during the COVID-19 epidemic in China
- Keeping our services stable and reliable during the COVID-19 outbreak
- The New York Times: The virus changed the way we internet
- Increased media device usage due to the coronavirus outbreak among internet users worldwide as of
- Lockdown leads to surge in TV screen time and streaming
- Our iPhone weekly screen time reports are through the roof, and people are 'horrified'
- Action spectrum for melatonin regulation in humans: evidence for a novel circadian photoreceptor
- An action spectrum for melatonin suppression: evidence for a novel non-rod, non-cone photoreceptor system in humans
- Light, melatonin and the sleep-wake cycle
- Wavelength-dependent effects of evening light exposure on sleep architecture and sleep EEG power density in men
- Alerting effects of light
- Evening exposure to a light-emitting diodes (LED)-backlit computer screen affects circadian physiology and cognitive performance
- Electronic device use in bed reduces sleep duration and quality in adults
- Bedtime mobile phone use and sleep in adults
- Direct measurements of smartphone screen-time: relationships with demographics and sleep
- Effects of mobile use on subjective sleep quality
- Evening and night exposure to screens of media devices and its association with subjectively perceived sleep: should "light hygiene" be given more attention? Sleep Health
- The sleep and technology use of Americans: findings from the National Sleep Foundation's 2011 Sleep in America poll
- The association between use of electronic media in bed before going to sleep and insomnia symptoms, daytime sleepiness, morningness, and chronotype
- Association between television viewing and sleep problems during adolescence and early adulthood
- Light level and duration of exposure determine the impact of self-luminous tablets on melatonin suppression
- Evening use of light-emitting eReaders negatively affects sleep, circadian timing, and next-morning alertness
- Evening light exposure to computer screens disrupts human sleep, biological rhythms, and attention abilities
- Blue blocker glasses impede the capacity of bright light to suppress melatonin production
- Amber lenses to block blue light and improve sleep: a randomized trial
- Blocking nocturnal blue light for insomnia: a randomized controlled trial
- Block the light and sleep well: evening blue light filtration as a part of cognitive behavioral therapy for insomnia
- Validity of the Italian version of the Pittsburgh Sleep Quality Index (PSQI)
- Validation study of the Italian version of the Insomnia Severity Index (ISI)
- Validity of the reduced version of the Morningness-Eveningness Questionnaire
- Beck Depression Inventory-II: edizione italiana. Firenze, IT: Giunti Editore
- Psychometric evaluation of three versions of the Italian Perceived Stress Scale
- The State-Trait Anxiety Inventory (STAI) test manual for Form X
- The impact of daylight saving time on sleep and related behaviours
- Fitting linear mixed-effects models using lme4
- Package "emmeans": estimated marginal means, aka least-squares means
- "To sleep, perchance to tweet": in-bed electronic social media use and its associations with insomnia, daytime sleepiness, mood, and sleep duration in adults
- Adolescent sleep patterns and night-time technology use: results of the Australian Broadcasting Corporation's Big Sleep Survey
- Sleep and use of electronic devices in adolescence: results from a large population-based study
- The impact of sleep disturbance on the association between stressful life events and depressive symptoms
- Insomnia as a precipitating factor in new onset mental illness: a systematic review of recent findings
- The effects of improving sleep on mental health (OASIS): a randomised controlled trial with mediation analysis
- Sleep and emotional processing
- The impact of five nights of sleep restriction on emotional reactivity
- Sleep, emotions, and emotion regulation: an overview. In: Sleep and Affect: Assessment, Theory, and Clinical Implications
- Timing of light exposure affects mood and brain circuits
- Low physical activity and high screen time can increase the risks of mental health problems and poor sleep quality among Chinese college students
- Dark therapy for bipolar disorder using amber lenses for blue light blockade
- Sick and tired: does sleep have a vital role in the immune system?
- Sleep and immune function
- Bidirectional relationships between sleep duration and screen time in early childhood
- Johns Hopkins Medicine: Managing sleep problems during COVID-19
- Electronic media use and sleep in school-aged children and adolescents: a review
- Screen time and sleep among school-aged children and adolescents: a systematic literature review
- Association between portable screen-based media device access or use and sleep outcomes: a systematic review and meta-analysis

key: cord-006226-fn7zlutj · title: Abstracts of the 4th Annual Meeting of the German Society of Clinical Pharmacology and Therapy: Hannover, 14-17 September 1994 · journal: Eur J Clin Pharmacol · doi: 10.1007/bf00193489 · doc_id: 6226 · cord_uid: fn7zlutj

Grapefruit juice may considerably increase the systemic bioavailability of drugs such as felodipine and nifedipine. This food-drug interaction is of potential practical importance because citrus juices are often consumed at breakfast time, when drugs are often taken. It is likely that a plant flavonoid in grapefruit juice, naringenin, is responsible for this effect (inhibition of cytochrome P-450 enzymes in the liver or in the small intestinal wall). Ethinylestradiol (EE2), the estrogen of oral contraceptive steroids, shows a high first-pass metabolism in vivo. The purpose of this study was therefore to test the interaction between grapefruit juice and EE2. The area under the serum concentration-time curve (AUC0-24h) of EE2 was determined in a group of young healthy women (n = 13) on day 4 ± 1 of the menstrual cycle. For intraindividual comparison, the volunteers were randomly allocated to two test days. The women took 50 µg EE2 together with either 200 ml of herb tea or the same amount of grapefruit juice (naringenin content 887 mg/l). Furthermore, on the day of testing the women drank a further 200 ml of the corresponding fluid every three hours, up to four times.
The AUC0-24h of EE2 was 1110.5 ± 637.7 pg·ml⁻¹·h after administration of the drug with grapefruit juice, i.e. 28% higher than the 868.0 ± 490.0 pg·ml⁻¹·h measured after concomitant intake of tea. The mean Cmax also increased, by 37% (117.5 ± 53.2 pg·ml⁻¹ vs. 85.5 ± 32.9 pg·ml⁻¹; p ≤ 0.01). This result shows that the systemic bioavailability of EE2 increases after intake of the drug with grapefruit juice; the extent of this effect is smaller than the known interindividual variability. Procarbazine is a tumourstatic agent widely used in Hodgkin's disease, non-Hodgkin's lymphomas, and tumours of brain and lung. Procarbazine is an inactive prodrug that is converted by a cytochrome P450-mediated reaction to its active metabolites, in the first step to azoprocarbazine. The kinetics of procarbazine and azoprocarbazine had not previously been described in humans. In 10 tumour patients we investigated the plasma kinetics of both procarbazine and azoprocarbazine after oral administration of 300 mg procarbazine in the form of capsules and drink solution, respectively. An HPLC method with UV detection (254 nm) and detection limits of 50 and 150 ng/ml was developed for procarbazine and azoprocarbazine, respectively. After both the capsules and the drink solution, the parent drug could be detected in plasma for only 1 h. In contrast, the terminal elimination half-life of azoprocarbazine ranged from 0.9 to 6.5 h, with a mean of 2.96 ± 1.46 h. The AUC of procarbazine was less than 5% of that of azoprocarbazine. Cmax values of azoprocarbazine were in the range of 1.3 to 6.1 µg/ml. On the basis of the azoprocarbazine plasma levels, we determined a bioavailability of the therapeutically used procarbazine capsules, relative to the drink solution, of 93.1 ± 26.3%.
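Both abstracts above rest on simple AUC ratios. A minimal sketch (helper names are ours) that reproduces the reported percentages, under the assumption that equal doses were compared:

```python
def percent_change(treated, control):
    # Relative change of a pharmacokinetic parameter, in percent.
    return (treated - control) / control * 100.0

def relative_bioavailability(auc_test, auc_reference):
    # F(%) = AUC_test / AUC_reference * 100, assuming equal doses.
    return auc_test / auc_reference * 100.0

# EE2 with grapefruit juice vs. herb tea:
auc_increase = percent_change(1110.5, 868.0)   # AUC(0-24h), pg·ml^-1·h -> ~28 %
cmax_increase = percent_change(117.5, 85.5)    # Cmax, pg·ml^-1        -> ~37 %
```

For procarbazine, the 93.1% figure is the same ratio applied to the azoprocarbazine AUCs after the capsule and the drink solution.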
Prostaglandin E1 (PGE1) is used for the treatment of patients with peripheral arterial disease and is probably effective due to its vasodilator and antiplatelet effects. L-arginine is the precursor of endogenously synthesized nitric oxide (NO). In healthy human subjects, L-arginine also induces peripheral vasodilation and inhibits platelet aggregation via an increased NO production. In the present study, the influence of a single intravenous dose of L-arginine (30 g over 60 min) or PGE1 (40 µg over 60 min) on blood pressure, peripheral hemodynamics (femoral artery duplex sonography), and urinary NO3⁻ and cGMP excretion rates was assessed in ten patients with peripheral arterial disease (Fontaine stage III-IV). Blood flow in the femoral artery was significantly increased by L-arginine, by 68% (p < 0.01), and by PGE1, by 25% (p < 0.05). L-arginine decreased systolic and diastolic blood pressure more strongly than PGE1. The plasma arginine concentration was increased 4-fold by L-arginine but was unaffected by PGE1. Urinary excretion of NO3⁻ increased by 118% after L-arginine (p < 0.05) and by 78% after PGE1 (n.s.). Urinary cGMP excretion increased by 76% after L-arginine and by 43% after PGE1 (each n.s.). We conclude that intravenous L-arginine decreases peripheral arterial resistance, resulting in enhanced blood flow and decreased blood pressure in patients with peripheral arterial disease. These effects were paralleled by increased urinary NO3⁻ excretion, indicating that systemic NO production was enhanced by the infusion. The increased NO3⁻ excretion may be the combined effect of NO synthase substrate provision (L-arginine) and increased shear stress (PGE1 and L-arginine). It is well established that the endothelial EDRF/NO-mediated relaxing mechanism is impaired in atherosclerotic and in hypertensive arteries. Recently it was suggested that primary pulmonary hypertension might be another disease in which the endothelial EDRF/NO pathway is disturbed.
We tested the hypothesis that intravenous administration of L-arginine (L-Arg), the physiological precursor of EDRF/NO, stimulates the production of NO, subsequently increasing plasma cGMP levels and reducing systemic and/or pulmonary vascular resistance, in patients with coronary heart disease (CHD; n = 15) and with primary pulmonary hypertension (PPH; n = 5). L-Arg (30 g over 15 min) or placebo (NaCl) was infused in CHD patients, and L-Arg was infused in PPH patients undergoing cardiac catheterization. Mean aortic (Pao) and pulmonary (Ppul) arterial pressures were continuously monitored. Cardiac output (CO; by thermodilution) and total peripheral resistance (TPR) were measured before and during the infusions. Plasma cGMP was determined by RIA. In CHD patients, Pao decreased from 87.2 ± 4.9 to 81.8 ± 5. mm Hg during L-Arg (p < 0.05), whereas Ppul was unchanged. TPR decreased from 1008.9 ± 87.9 to 845.0 ± 81.7 dyn·s·cm⁻⁵ during L-Arg administration (p < 0.01). CO significantly increased during L-Arg (from 7.3 ± 2.8 to 8.1 ± 0.9 l/min, p < 0.05). Placebo did not significantly influence any of the haemodynamic parameters. cGMP slightly increased, by 12.2 ± 9.6%, during L-Arg, but slightly decreased during placebo (-12.3 ± 9.2%) (p < 0.05 for L-Arg vs. placebo). In PPH patients, L-Arg induced no significant change in Pao, TPR, or CO. Mean Ppul was 63.4 ± 8.8 mm Hg at the beginning of the study and was only slightly reduced by L-Arg, to 55.0 ± 12.3 mm Hg (n.s.). Plasma cGMP was not affected by L-Arg in these patients. We conclude that L-Arg stimulates NO production and induces vasorelaxation in CHD patients, but not in patients with primary pulmonary hypertension. Thus, the molecular defects underlying the impaired NO formation may be different in the two diseases. Institutes of Clinical Pharmacology, *Cardiology, and **Pneumology, Medical School, Hannover, Germany.
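TPR as reported in this abstract is conventionally derived from mean arterial pressure and thermodilution cardiac output. A sketch of the standard formula; subtracting right atrial pressure is our assumption, and with RAP set to 0 the abstract's values are reproduced only approximately, since the published inputs are rounded:

```python
def total_peripheral_resistance(map_mmhg, co_l_min, rap_mmhg=0.0):
    """Total peripheral resistance in dyn·s·cm^-5 from mean arterial
    pressure (mm Hg), cardiac output (l/min), and right atrial
    pressure (mm Hg): TPR = 80 * (MAP - RAP) / CO."""
    return 80.0 * (map_mmhg - rap_mmhg) / co_l_min
```

The factor 80 converts mm Hg per l/min into the CGS resistance units (dyn·s·cm⁻⁵) used in the abstract.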
The influence of submaximal exercise on the urinary excretion of 2,3-dinor-6-keto-PGF1α (the major urinary prostacyclin metabolite), 2,3-dinor-TXB2 (the major urinary thromboxane A2 metabolite), and PGE2 (originating from the kidney), and on platelet aggregation, was assessed in 6 untrained and 10 endurance-trained male subjects before and after 7 days of 50 mg/day of aspirin. Urinary 2,3-dinor-TXB2 excretion was significantly higher in the athletes at rest (p < 0.05). Submaximal exercise increased urinary 2,3-dinor-6-keto-PGF1α excretion without affecting 2,3-dinor-TXB2 or PGE2 excretion or platelet aggregation. Aspirin treatment induced an ~80% inhibition of platelet aggregation and 2,3-dinor-TXB2 excretion in both groups. However, urinary 2,3-dinor-6-keto-PGF1α was inhibited by only 24% in the untrained group, but by 51% in the trained group (p < 0.05). Urinary PGE2 was unaffected by aspirin in both groups, indicating that renal cyclooxygenase activity was not impaired by a systemic aspirin effect. After low-dose aspirin administration, the same selective stimulatory effect of submaximal exercise on urinary 2,3-dinor-6-keto-PGF1α excretion was noted in both groups as before. The ratio of 2,3-dinor-6-keto-PGF1α to 2,3-dinor-TXB2 was increased by exercise; this effect was potentiated by aspirin (p < 0.05). Our results suggest that the stimulatory effect of submaximal exercise on prostacyclin production is not due to an enhanced prostacyclin endoperoxide shift from activated platelets to the endothelium, but is rather the result of activation of endothelial prostacyclin synthesis from endogenous precursors. Aspirin at 50 mg/day potentiates the favorable effect of submaximal exercise on endothelial prostacyclin production by selectively blocking platelet cyclooxygenase activity. Institute of Clinical Pharmacology, Medical School, Hannover, Germany. Soluble guanylyl cyclases (GC-S) are heterodimeric hemeproteins consisting of two protein subunits (70 kDa and 82 kDa).
The enzyme is activated by nitric oxide (NO) and catalyzes the formation of the signal molecule cGMP (cyclic guanosine-3',5'-monophosphate) from GTP. Numerous physiological effects of cGMP are already well characterized; however, detailed insights into the NO-activation mechanism of this enzyme have to date been described only in a hypothetical model (1). Recently, this concept was supported by experimental data using site-directed mutagenesis to create a NO-insensitive soluble guanylyl cyclase mutant (2). It is generally accepted that the prosthetic heme group plays a crucial role in the activation mechanism of this protein. Nonetheless, some interesting questions regarding the structure and regulation of soluble guanylyl cyclases remain open (e.g. activation with other free radicals, such as carbon monoxide). Since such studies have so far been limited by the difficulty of isolating large quantities of biologically active enzyme with conventional purification techniques, the recombinant protein was expressed in the baculovirus/insect cell system. We describe here the construction and characterization of recombinant baculoviruses harboring the genes that encode both protein subunits of the soluble guanylyl cyclase. Insect cells infected with these recombinant baculoviruses produce functional soluble guanylyl cyclase at 20-30% of total cell protein. Productive infection was monitored as a change in cell morphology and by detection of the respective recombinant viruses by polymerase chain reaction (PCR). As far as examined, the recombinant enzyme exhibits physicochemical characteristics similar to those of the native protein. Exogenous addition of several heme analogues to the infected cells can either stimulate or inhibit the enzymatic activity of GC-S. We are confident that milligram amounts of the recombinant protein can be purified in the near future.
PET studies of myocardial pharmacology have principally concerned the sympathetic nervous system, and tracers have been developed to probe the integrity of both pre- and post-synaptic sites. The sympathetic nervous system plays a crucial role in the control of heart rate and myocardial contractility, as well as in the control of the coronary circulation. Alterations of this system have been implicated in the pathophysiology of a number of cardiac disorders, in particular heart failure, ventricular arrhythmogenesis, coronary artery disease, and idiopathic dilated and hypertrophic cardiomyopathy. Several beta-blockers have been labelled with carbon-11 for imaging by PET. The most promising of these is CGP 12177, a non-selective beta-adrenoceptor antagonist particularly suited for PET studies due to its high affinity and low lipophilicity, which enables the functional receptor pool on the cell surface to be studied. Studies in our institution in a group of young healthy subjects have yielded Bmax values of 10.4 ± 1.7 pmol/g myocardium. These data are consistent with literature values of Bmax for beta-adrenoceptors in human ventricular myocardium determined by a variety of in vitro assays. A recent study in patients with hypertrophic cardiomyopathy has shown that myocardial beta-adrenoceptor density is decreased by approximately 25-30% relative to values in normal subjects. The decrease in receptor density occurs in both hypertrophied and non-hypertrophied portions of the left ventricle. These data are consistent with the hypothesis that sympathetic overdrive might be involved in the phenotypic expression of hypertrophic cardiomyopathy. A further decrease of myocardial beta-adrenoceptor density (to levels well below 76_5-7.7 pmol/g) has been observed in those patients with hypertrophic cardiomyopathy who proceed to ventricular dilatation and heart failure. CYP1A1 hydroxylates polycyclic aromatic hydrocarbons, such as the benzo(a)pyrene occurring e.g. in cigarette smoke.
two hereditary mutations have been described: m1, a t to c transition 1,194 bp downstream of exon 7; and m2, located at position 4,889 in exon 7, an a to g transition resulting in an isoleucine to valine substitution in the heme-binding region. recently we demonstrated in caucasians that carriers of the m2 mutation have an increased risk of lung cancer (drakoulis et al., clin. investig. 72:240, 1994), whereas the m1 mutation shows no such association. the phase-ii enzyme gstm1 catalyses the conjugation of glutathione to electrophilic compounds such as products of cyp1a1. gstm1 is absent in 50.9% of the caucasian population due to base deletions in exons 4 and 5 of the gene. we found no difference in the gstm1 distribution, including frequencies of type a (p.) and type b (v), among lung cancer patients (odds ratio = 1.01, n = 117; cancer res. 53:1004, 1993). 149 lung cancer patients and 408 reference patients were investigated for mutations of cyp1a1 and gstm1 by allele-specific pcr and rflp. a statistically significant higher risk for lung cancer among carriers of the m2 trait was found (odds ratio = 2.63, p = 0.001). interestingly, among lung cancer patients, m2 alleles were less often linked to m1 than in controls (odds ratio = 9.50, 95% confidence limits = 1.50-99.7, p = 0.0054). however, the frequency of cyp1a1 mutations did not differ between active and defective gstm1 types. consequently, we could not confirm in the caucasian population the synergistic effect of cyp1a1 mutations (especially m2) and deficient gstm1 as combined susceptibility factors for lung cancer described among the japanese (cancer res. 55:2994, 1993). in healthy subjects the effect of gastrointestinal hormones like somatostatin and glucagon on splanchnic hemodynamics is not well defined due to the invasiveness of the direct measurement of e.g. portal vein (pv) wedged pressure. 
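the odds ratios and 95% confidence limits quoted above (e.g. or = 9.50, 95% cl = 1.50-99.7) follow from the standard 2×2-table calculation with a log-normal (woolf) confidence interval; a minimal sketch, using hypothetical cell counts rather than the study's raw data (which the abstract does not give):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf (log-normal) CI.

    a = exposed cases,    b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    z = 1.96 for a 95% interval.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(30, 119, 40, 368)
```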
methods: we applied duplex sonography (3.5 mhz) and color-coded flow mapping to compare the effects of octreotide (100 µg sc), a long-acting somatostatin agonist, and glucagon (1 mg iv) on the hemodynamics of the pv, superior mesenteric artery (sma) and common hepatic artery (ha) in 14 healthy volunteers (13 m, 1 f; 25 ± 2 y; mean ± sem). basal values of pv flow (12.9 ± 0.8 cm/s), pv flow volume (549 ± 50 ml/min), sma systolic (sf: 195 ± 13 cm/s) and diastolic flow (df: 28 ± 4 cm/s), sma pourcelot index (pi) (0.86 ± 0.01), ha sf (80 ± 8 cm/s) and df (19 ± 1 cm/s) and ha pi (0.75 ± 0.01) agreed well with previously reported results. within 15 min octreotide resulted in a decrease of sma sf (-32 ± 4%), sma df (-31 ± 4%), ha sf (-18 ± 5%) and ha df (-24 ± 7%). the maximum drop of pv flow (-33 ± 3%) and flow volume (-34 ± 7%) occurred at 30 min. all effects diminished at 60 min. no significant change of vessel diameter and pi was seen. 5 min after its application glucagon caused a highly variable, only short-lasting increase of pv flow volume (+51 ± 18%) and sma df (+49 ± 17%). ha df (+14 ± 4%) showed a tendency to rise (ns). we conclude that in clinical pharmacology duplex sonography is a valuable aid for measuring effects of hormones and drugs on splanchnic hemodynamics. anginal pain and signs of silent myocardial ischemia frequently occur in hypertensives, even in the absence of coronary artery disease (cad) and/or left ventricular hypertrophy, probably due to a reduced coronary flow reserve. 
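the pourcelot (resistive) index reported above is simply (peak systolic − end diastolic) / peak systolic flow velocity; a minimal sketch, checked against the quoted basal sma velocities:

```python
def pourcelot_index(systolic: float, diastolic: float) -> float:
    """Pourcelot (resistive) index: (peak systolic - end diastolic) / peak systolic."""
    return (systolic - diastolic) / systolic

# basal SMA velocities from the abstract: sf = 195 cm/s, df = 28 cm/s
sma_pi = pourcelot_index(195, 28)  # ~0.86, matching the reported pi of 0.86
```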
since the oxygen extraction of the heart is nearly maximal at rest, increases in oxygen demand cannot be balanced by increases in myocardial perfusion. to assess the frequency of ischemic-type st-segment depressions in these patients and to determine the influence of heart rate (hr) and blood pressure (bp), simultaneous 24 h holter and 24 h ambulatory bp monitoring were performed in 18 hypertensives (age 43-71 years, 9 f, 9 m) without cad before and after four weeks on therapy with the β-blocker betaxolol. 25 episodes of significant st-segment depression (>0.1 mv, >1 min) with a total length of 470 min could be demonstrated in 9/18 patients (50%) without antihypertensive therapy. systolic bp significantly increased from 139 ± 13.9 mmhg (mean ± sd, p < 0.05) 60 min before to a maximum of 191 ± 44.5 mmhg during the ischemic episodes; hr and rate-pressure product (rpp) increased from 76 ± 6.3 min⁻¹ and 100.6 ± 20.9 mmhg × min⁻¹ × 10² to 138 ± 16.8 min⁻¹ and 230.4 ± 88.9 mmhg × min⁻¹ × 10² (p < 0.05). the extent of st-segment depression significantly correlated with hr and rpp (p < 0.05). drug therapy with 10-20 mg/d betaxolol for 4 weeks significantly decreased mean hr, systolic and diastolic bp (p < 0.05). 6 ischemic episodes with a total length of 38 min were recorded in only 4 of 15 hypertensives (26.7%; p < 0.05; χ²-test). in conclusion, increases of hr and systolic bp seem to be the most important factors inducing myocardial ischemia in hypertensives without cad. as silent ischemia is an independent risk factor for sudden cardiac death and other cardiac events, specific antihypertensive therapy should not only aim to normalize blood pressure but should also address the reduction of ischemic episodes as demonstrated here. phosphodiesterase inhibitors exert their positive inotropic effects by inhibiting camp degradation and increasing the intracellular calcium concentration in cardiomyocytes. 
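the rate-pressure product used above is the product of heart rate and systolic blood pressure, conventionally reported here in units of mmhg × min⁻¹ × 10²; a minimal sketch (note the abstract reports means of per-patient products, so multiplying the quoted mean hr and bp only approximates the quoted rpp):

```python
def rate_pressure_product(hr_per_min: float, sbp_mmhg: float) -> float:
    """Rate-pressure product (RPP) = heart rate x systolic BP, in mmHg/min."""
    return hr_per_min * sbp_mmhg

def rpp_hundreds(hr_per_min: float, sbp_mmhg: float) -> float:
    """RPP expressed as mmHg x min^-1 x 10^2, the convention in the abstract."""
    return rate_pressure_product(hr_per_min, sbp_mmhg) / 100.0

# mean pre-episode values from the abstract: HR 76 min^-1, SBP 139 mmHg
baseline = rpp_hundreds(76, 139)  # ~105.6 x 10^2 mmHg/min
```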
an identical phosphodiesterase type iii has been demonstrated in platelets and vascular smooth muscle cells. we studied the influence of piroximone on platelet function in vitro and ex vivo and the hemodynamic effects of a bolus application of piroximone in patients with severe heart failure (nyha iii-iv) using a swan-ganz catheter. to study the influence of piroximone on platelet function in vitro, platelet-rich plasma from healthy volunteers was incubated with piroximone (10-100 µmol/l) for 1 minute to 2 hours and aggregation was induced by addition of adp. for the ex vivo experiments platelet-rich plasma was obtained from patients who received piroximone in doses of 0.25, 0.5, 1.0 or 2.0 mg/kg bw. blood samples were drawn immediately before and 30, 60, 90, 120 and 240 minutes after bolus application. the adp-induced platelet aggregation was inhibited time- and dose-dependently. the ic50 value for piroximone in vitro amounted to 67 ± 14 µmol/l. in the ex vivo experiments the maximal inhibition of adp-induced aggregation was obtained in prp from patients who had received 2 mg/kg bw piroximone 60 minutes before. the administration of piroximone resulted in a marked hemodynamic improvement with a dose-dependent increase in cardiac index and decreases in pulmonary artery pressure and resistance. drugs are used in the icu to treat conditions associated with acute and chronic multiorgan dysfunction. studies indicate patients receive approximately ten drugs, on average, during their icu stay, from several drug classes. commonly prescribed drugs include narcotics, sedatives, antibiotics, antiarrhythmics, antihypertensives, drugs for stress ulcer prophylaxis, diuretics, vasopressors, and inotropes. reports suggest surgical icu patients cost the hospital an average of $18,000/patient in unreimbursed costs under fixed-price reimbursement. furthermore, patients with the greatest drain on revenue received catecholamines, triple antibiotics, or antifungal agents. 
thrombolytics, antibiotics, plasma expanders, and benzodiazepines account for nearly two-thirds of the cost of drugs prescribed in medical and surgical icus. agents with considerable economic impact include biotechnology drugs for sepsis. pharmacoeconomic data in icu patients suggest increased attention should be directed towards several areas, including patients with pneumonia, intraabdominal sepsis, and nosocomial bloodstream infections, optimizing sedation and analgesic therapy, preventing persistent paralysis from neuromuscular blockers, preventing stress ulcers, treating hypotension, and providing optimal nutritional support. studies are needed to assess the impact of strategies to improve icu drug prescribing on length of stay and quality of life. if expensive drugs are shown to decrease the length of icu stay, then their added costs can have positive economic benefits for the health care system. the responses to 10 min iv infusions of the β1- and β2-adrenoceptor agonist isoprenaline (iso) and the β2- (and α-) adrenoceptor agonist adrenaline (adr) at constant rates of 1 µg/min were evaluated noninvasively after pretreatment (pre-tr) with placebo (pl), 100 mg of the β1-selective adrenoceptor antagonist talinolol (tal) and 80 mg of the non-selective β-antagonist propranolol (pro) in 12 healthy subjects. 
the following were analysed: heart rate (hr, bpm), pre-ejection period (pep, ms), ejection time (vet, ms), hr-corrected electromechanical systole (qs2c, ms), impedance-cardiographic estimates of stroke volume (sv, ml), cardiac output (co, l/min) and peripheral resistance (tpr, dyn·s·cm⁻⁵) calculated from co and mean blood pressure (sbp and dbp according to auscultatory korotkoff-i and -iv sounds). this indicates that 1) about half the rise of hr and co and half the shortening of pep is β1- respectively β2-determined, 2) predominant β2-adrenergic responses, whilst not affecting vet, take optimal benefit from the inodilatory enhancement of pump performance, 3) an additional β1-adrenergic stimulation is proportionally less efficient, as vet is dramatically shortened, thus blunting the gain in sv so that the rise in co relies substantially on the amplified increase of hr, 4) vet is more sensitive than qs2c in expressing additional β1-adrenoceptor agonism, and 5) prime systolic time intervals provide a less speculative and physiologically more meaningful representation of cardiac pump dynamics than hr-corrected ones. zentrum für kardiovaskuläre pharmakologie, mathildenstraße 8, 55116 mainz, brd. a20: regression between blunting of the ergometric rise of heart rate and β1-adrenoceptor occupancies in healthy man. c. de mey, d. palm, k. breithaupt-grögler, g.g. belz. the hr responses to supine bicycle ergometry (4 min at approx. 200 watt) were investigated at several time points after the administration of propranolol (pro: 40, 80, 160 mg), carvedilol (car: 12.5, 25, 50, 100 mg), talinolol (tal: 25, 50, 100, 400 mg), metoprolol (met: 600 mg) and celiprolol (cel: 1200 mg) to healthy man. the effects of the agents (= difference of the ergometric response for active drug and placebo) were analysed for both the end values (end) and the increments (inc) from resting values immediately before ergometry up to end. 
the effects were correlated with the %-β1-adrenoceptor occupancies estimated using a standard emax model (sigmoidicity = 1) from the concentrations of active substrate in plasma determined by β1-adrenoceptor-specific radioreceptor assay. the respective intercepts (i), slopes (s) and correlation coefficients (r) are detailed below. inhibition of leukotrienes is a promising approach to the treatment of several diseases because excess formation of these lipid mediators has been shown to play an important role in a wide range of pathophysiological conditions. since until recently we were not able to obtain specific drugs suppressing leukotriene biosynthesis or action for clinical practice, we started investigating the effects of putative natural modulators of leukotriene biosynthesis such as fish oil. 10 healthy male volunteers were supplemented for 7 days with fish oil providing 40 mg eicosapentaenoic and docosahexaenoic acid per kg body weight per day. the urinary concentration of leukotriene e4 plus n-acetyl leukotriene e4 served as a measure of endogenous leukotriene production. treatment resulted in a significant increase in the eicosapentaenoate concentration in red blood cell membranes. fish oil reduced endogenous leukotriene generation in 8 of the 10 volunteers. the effect was associated with a decrease in urinary prostaglandin metabolites, determined as tetranorprostanedioic acid. in contrast to what was expected from published in vitro and ex vivo experiments, no endogenously generated cysteinyl leukotrienes of the 5 series could be identified. the inhibitory effect of fish oil on endogenous leukotriene generation was not synergistic with the effect of vitamin e, which also exhibited some suppressive activity. early clinical data on the effects of fish oil on leukotriene production in patients with allergy or rheumatoid arthritis are not yet conclusive. 
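the standard emax model with sigmoidicity (hill coefficient) 1 used above predicts receptor occupancy from plasma concentration as occ = emax · c / (ec50 + c); a minimal sketch, with an arbitrary ec50 for illustration:

```python
def emax_occupancy(conc: float, ec50: float, emax: float = 100.0, hill: float = 1.0) -> float:
    """Sigmoid Emax model; with hill = 1 this reduces to the hyperbolic Emax
    model used for %-beta1-adrenoceptor occupancy in the abstract."""
    if conc == 0:
        return 0.0
    return emax * conc ** hill / (ec50 ** hill + conc ** hill)

# at conc == ec50, occupancy is half-maximal regardless of the hill coefficient
half = emax_occupancy(10.0, 10.0)  # 50.0
```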
we conclude that fish oil exhibits some inhibitory activity on leukotriene production in vivo. the efficacy of fish oil may be attenuated by concomitant modulation of other mediator systems, e.g. up-regulation of tumor necrosis factor production. the number and affinity of platelet thromboxane (txa2) and prostacyclin (pgi2) receptors are regulated by several factors. we studied the influence of oral intake of acetylsalicylic acid (asa), using ex-vivo binding studies with human platelet membranes, on the binding of the specific thromboxane a2 antagonist 3h-sq-29548 and the pgi2 agonist 3h-iloprost. the number of receptors (bmax) and the binding affinity (kd) were calculated using scatchard plot analysis. in healthy male volunteers no significant difference was seen following intake of 500 mg/d of asa for 14 days (mean ± sem). the potency of meloxicam (mel), a new anti-inflammatory drug (nsaid), in the rat is higher than that of well-known nsaids; in adjuvant-arthritis rats, mel is a potent inhibitor of the local and the systemic signs of the disease. mel is also a potent inhibitor of pg biosynthesis by leukocytes found in pleuritic exudate in rats. conversely, the effect of mel on pg biosynthesis in isolated enzyme preparations from bull seminal vesicle in vitro, the effect on intragastric and intrarenal pg biosynthesis and the influence on the txb2 level in rat serum is weak. in spite of its high anti-inflammatory potency in the rat, mel shows low gastrointestinal toxicity and nephrotoxicity in rats. cyclooxygenase-2 (cox-2) has recently been identified as an isoenzyme of cyclooxygenase. nsaids are anti-inflammatory through inhibition of pg biosynthesis by inducible cox-2 and are ulcerogenic and nephrotoxic through inhibition of the constitutive cox-1. we have investigated the effects of mel and other nsaids on cox-1 of non-stimulated and on cox-2 of lps-stimulated guinea pig peritoneal macrophages. 
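scatchard plot analysis, as used above to obtain bmax and kd, linearizes one-site saturation binding: plotting bound/free against bound gives a line with slope −1/kd and intercept bmax/kd. a minimal sketch with synthetic, noise-free binding data (not the study's measurements):

```python
def scatchard_fit(free, bound):
    """Least-squares line through the Scatchard plot (x = bound, y = bound/free);
    slope = -1/Kd, intercept = Bmax/Kd."""
    xs = list(bound)
    ys = [b / f for b, f in zip(bound, free)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    kd = -1.0 / slope
    bmax = intercept * kd
    return bmax, kd

# synthetic one-site binding data: Bmax = 10.4, Kd = 0.5 (arbitrary units)
free = [0.1, 0.2, 0.5, 1.0, 2.0]
bound = [10.4 * f / (0.5 + f) for f in free]
bmax_est, kd_est = scatchard_fit(free, bound)  # recovers 10.4 and 0.5
```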
cells were cultured with and without lps for 6 hrs together with the nsaid. arachidonic acid was then added for a further 20 mins, the medium removed and pge2 measured by ria. bimakalim, emd 52692, is a new investigational k+-channel activator with vasodilating properties. single peroral doses of 0.2 mg bimakalim and 60 mg diltiazem, either alone or in combination, were investigated in 13 healthy male supine volunteers (20 to 28 years of age) in a placebo-controlled, period-balanced, randomised, double-blind, 4-way cross-over design. point estimates of the global effects of bimakalim [k], diltiazem [d] and their interaction [k×d, = 0 in case of mere additivity] incl. 95% confidence intervals (ci) were analysed for systolic and diastolic blood pressure (sbp, dbp; mmhg), heart rate (hr; bpm), pq (ms), systolic time intervals (pep, qs2c, lvetc; ms), cardiac output (co; l·min⁻¹), total peripheral resistance (tpr; dyn·s·cm⁻⁵) and heather index (hi; ω·s⁻²) 1.5 h after dosing (*statistically significant at α = 0.05): -7 to -1, -4 to 3, -8 to 0, -3 to 5, 0.5 to 5.9, -3.4 to 1.9, 4.9 to 11.8*, -3.9 to 3.1, -8.2 to -1.2*, -5.3 to 1.6, -5.1 to 2.0, -4.8 to 2.3, -2.9 to 6.6, -3.9 to 5.6, -0.25 to 0.75, -0.66 to 0.35, -161 to 3, -74 to 90, -0.07 to 1.14, -0.50 to 0.71. afterload reduction and a drop in dbp occurred with bimakalim, associated with a rise in hr and a mild increase in cardiac performance; diltiazem (slightly) decreased afterload and bp with little (reflex) accompanying change and had a negative dromotropic effect. the combination caused additive effects. center for cardiovascular pharmacology, zekapha gmbh, mathildenstr. 8, 55116 mainz, germany. rheumatoid arthritis (ra) is characterized by an immunologically mediated inflammatory reaction in affected joints. infiltration of granulocytes and monocytes is the pathophysiological hallmark of the initial phase of inflammation. these cells are able to synthesize leukotrienes. 
ltb4 is a potent chemotactic factor and could therefore be responsible for the influx of granulocytes from the circulation. the cysteinyl leukotrienes ltc4, ltd4 and lte4 augment vascular permeability and are potent vasoconstrictors. ltb4 and cysteinyl leukotrienes have been detected in the synovial fluid of patients with ra. however, these results are difficult to interpret, because the procedure is invasive and artificial synthesis cannot be excluded. we used a different, noninvasive approach by assessing the excretion of lte4 into urine. studies with 3h-ltc4 have demonstrated that lte4 is excreted unchanged into urine and is the major urinary metabolite of cysteinyl leukotrienes in man. urinary lte4 was isolated from an aliquot of a 24-hour urine collection by solid-phase extraction followed by hplc and quantitated by ria. nine patients were enrolled in the present study. all met the american college of rheumatology criteria for ra. patients were treated with nonsteroidal anti-inflammatory drugs and disease-modifying drugs. therapy with prednisolone was started after collection of the initial 24-hour urine sample. disease activity was assessed by crp (mean 59 ± 22 mg/l) and esr (mean 57 ± 37 mm/hour). platelet aggregation is mediated by the binding of an adhesive protein, fibrinogen, to a surface receptor, the platelet glycoprotein iib/iiia. gpiib/iiia is one of a family of adhesion receptors, the integrins, which consist of a ca²⁺-dependent complex of two distinct protein subunits. under resting conditions, gpiib/iiia has a low affinity for fibrinogen in solution. however, activation of platelets by most agonists, including thrombin, adp and thromboxane, results in a conformational change in the receptor and the expression of a high-affinity site for fibrinogen. binding of fibrinogen to platelets is a common end-point for all agonists and is therefore a potential target for the development of antiplatelet drugs. 
these have included chimeric, partially humanised antibodies (7e3), peptides and peptidomimetics that bind to the receptor and prevent fibrinogen binding. the peptides often include the sequence rgd, a sequence that is present in fibrinogen and is one of the ligand's binding sites. when administered in vivo, antagonists of gpiib/iiia markedly suppress platelet aggregation in response to all known agonists, without altering platelet shape change, a marker of platelet activation. they also prolong the bleeding time in a dose- and perhaps drug-dependent manner, often to more than 30 min. in experimental models of arterial thrombosis, gpiib/iiia antagonists have proved highly effective and are more potent than aspirin. studies in man have focused on coronary angioplasty, unstable angina and coronary thrombolysis and have given promising results. 7e3 given as a bolus and infusion combined with aspirin and heparin reduced the need for urgent revascularisation in patients undergoing high-risk angioplasty, although bleeding was more common. some compounds have shown oral bioavailability, raising the possibility that these agents could be administered chronically. antagonists of the platelet gpiib/iiia provide a novel and potent approach to antithrombotic therapy. drug databases on computers are commonly text files or consist of tables of generic names or prices, for example. until now pharmacokinetic data have not been easily available for regular use, because searching for parameters in a text file is time-consuming and personnel-intensive. on the other hand, these pharmacokinetic data are the fundamental background of every dosage regimen and individual dosage adjustment. for many drugs elimination depends on the patient's renal function. renal failure leads to accumulation, possibly up to toxic plasma concentrations. therefore, the decision was made to build up a pharmacokinetic database. the aim is to achieve simplicity and effectiveness by using the basic rules. 
only three parameters are needed to describe the pharmacokinetics: clearance (cl), volume of distribution (vd) and half-life (t½). moreover, with two parameters the third can be calculated and controlled by the equation: cl = vd × 0.693 / t½. according to the dettli equation and bayes' theorem, estimation of individual pharmacokinetic parameters will be done by a computer program. the advantage is that the impact of therapeutic drug monitoring can be increased. using population data and the bayesian approach, only one measurement of the serum drug concentration might be enough to achieve an individual dosage regimen (el desoky et al., ther drug monit 1993, 15:281), and higher therapeutic security for the patient can be achieved. there is also a major pharmacoeconomic aspect: adapting drug dosage reduces costs (susanka et al., am j hosp pharm 1993, 50:909). the basic database for future pharmacokinetic clinical decisions is going to be built up. the pharmacokinetic interactions with grapefruit juice reported for many drugs are attributed to the inhibition of cytochrome p450 enzymes by naringenin, which is the aglycone of the bitter juice component naringin. however, only circumstantial evidence exists that naringenin is indeed formed when grapefruit juice is ingested, and the lack of drug interaction when naringin solution is given instead of the juice is still unexplained. we investigated the pharmacokinetics of naringin, naringenin and its conjugated metabolites following ingestion of 20 ml grapefruit juice per kg body weight, containing 621 µm naringin, in 3 male and 3 female healthy adults. urine was collected 0-2, 2-4, 4-6, 6-8, 8-10, 10-12, 12-16 and 16-24 hours after juice intake. naringin and naringenin concentrations were measured by reversed-phase hplc following extraction with ethyl acetate, with a limit of quantitation of 300 nm. 
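the relation cl = vd × 0.693 / t½ (0.693 ≈ ln 2) lets any one of the three parameters be computed, and cross-checked, from the other two; a minimal sketch with illustrative values only:

```python
import math

LN2 = math.log(2)  # 0.693...

def clearance(vd_l: float, t_half_h: float) -> float:
    """CL (L/h) from volume of distribution (L) and half-life (h): CL = Vd * ln2 / t1/2."""
    return vd_l * LN2 / t_half_h

def half_life(vd_l: float, cl_l_per_h: float) -> float:
    """t1/2 (h) from volume of distribution (L) and clearance (L/h)."""
    return vd_l * LN2 / cl_l_per_h

# illustrative values, not drawn from the abstract: Vd = 42 L, t1/2 = 4 h
cl = clearance(42, 4)  # ~7.3 L/h
```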
conjugated metabolites in urine were transformed by incubation with glucuronidase (28000 u/ml) / sulfatase (733 u/ml) from abalone entrails for 4 h at ph 3.8 and determined as parent compounds. additionally, naringin and naringenin concentrations were measured in plasma samples from grapefruit juice interaction studies conducted previously. neither naringin nor its conjugated products were detected in any of the samples. naringenin was not found in plasma. small amounts of naringenin appeared in urine after a median lag time of 2 hours and reached up to 0.365% of the dose (measured as naringin). after treatment with glucuronidase / sulfatase, up to 57% of the dose was recovered in urine. the absence of naringin and its conjugates and the lag time observed for naringenin to appear in urine suggest that cleavage of the sugar moiety may be required before the flavonoid can be absorbed as the aglycone. naringenin itself undergoes rapid phase ii metabolism. whether the conjugated metabolite is a potent cytochrome p450 inhibitor is unknown but not probable. the pronounced variability of naringenin excretion provides a possible explanation for apparently contradictory results in grapefruit and/or naringin interaction studies. grapefruit juice increases the oral bioavailability of almost any dihydropyridine tested, presumably due to inhibition of first-pass metabolism mediated by the cytochrome p450 isoform cyp3a3/4. the mean extent of increase was up to threefold, observed for felodipine, and more pronounced drug effects were also reported. thus, such an interaction may be of considerable clinical relevance. no data are yet available for nimodipine. we conducted a randomized cross-over interaction study on the effects of concomitant intake of grapefruit juice on the pharmacokinetics of nimodipine and its metabolites m11 (pyridine analogue), m10 (demethylated) and m8 (pyridine analogue, demethylated). 
8 healthy young men (4 smokers / 4 nonsmokers) were included in the investigation. nimodipine was given as a single 30 mg tablet (nimotop®) with either 250 ml of water or 250 ml of grapefruit juice (döhler gmbh, darmstadt, 751 mg/l naringin). concentrations of nimodipine and its metabolites in plasma withdrawn up to 24 hours postdose were measured by gc-ecd, and model-independent pharmacokinetic parameters were estimated. the study was handled as an equivalence problem, and anova-based 90% confidence intervals were calculated for the test (= grapefruit period) to reference (= water period) ratios. the absence of a relevant interaction was assumed if the ci were within the 0.67 to 1.50 range. grapefruit juice has been reported to inhibit the metabolism of a variety of drugs, including dihydropyridines, verapamil, terfenadine, cyclosporine, and caffeine. these drugs are metabolized mainly by the cytochrome p450 isoforms cyp1a2 (caffeine and, in part, verapamil) and cyp3a (the others). theophylline has a therapeutic range of 5-20 mg/l and is also in part metabolized by cyp1a2. therefore, we conducted a randomized change-over interaction study on the effects of concomitant intake of grapefruit juice on the pharmacokinetics of theophylline. 12 healthy young male nonsmokers were included. theophylline was given as a single dose of 200 mg in solution (euphyllin® 200), diluted with either 200 ml of water or 200 ml of grapefruit juice (döhler gmbh, darmstadt, 751 mg/l naringin). subsequently, additional fractionated 0.8 l of either juice or water were administered until 16 hours postdose. theophylline concentrations in plasma withdrawn up to 24 hours postdose were measured by hplc, and pharmacokinetics were estimated using compartment-model-independent methods. 
the study was handled as an equivalence problem, and anova-based 90% confidence intervals were calculated for the test (= grapefruit period) to reference (= water period) ratios (tmax: differences). thus, no inhibitory effect of grapefruit juice on theophylline pharmacokinetics was observed. the lower contribution of cyp1a2 to primary theophylline metabolism or differences in naringin and/or naringenin kinetics are possible explanations for the apparent contradiction between the effects of grapefruit juice on caffeine and on theophylline metabolism. the physical stability of erythromycin stearate film tablets was studied according to a 2³ factorial design with the experimental variables temperature, relative humidity, and storage time. after half a year of storage at 40 °c and 20% relative humidity, the fraction of dose released within 30 min in a usp xxi paddle apparatus under standard conditions decreased from 93% for the reference stored at ambient temperature in intact blister packages to 13% for the stress-tested specimens. chemical degradation of the active ingredient did not become apparent before 12 months of storage. under all other storage conditions, no effects of physical aging upon drug release were found. the bioequivalence of reference and stress-tested samples was studied in six healthy volunteers. the extent of relative bioavailability of the test product was markedly reduced (mean: 8.5%, range: 6-12%), and mean absorption times of the test product were significantly prolonged. the results indicate that the product tested can undergo physical alterations upon storage under unfavourable conditions and lose its therapeutic efficacy. it can be expected that this phenomenon is reduced by suitable packaging, but the magnitude of deterioration may cause concern. on the other hand, incomplete drug release is in this case easily detected by dissolution testing. whether similar correlations exist for other erythromycin formulations remains to be demonstrated. 
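the equivalence analyses in the grapefruit-juice studies above rest on a 90% confidence interval for the test/reference ratio, computed on log-transformed data and compared with the 0.67-1.50 range; a minimal sketch, with invented within-subject log-ratios (not the studies' data) and the t-quantile supplied by the caller:

```python
import math
import statistics

def ratio_90ci(log_ratios, t_crit):
    """90% CI for the geometric-mean test/reference ratio.

    log_ratios: within-subject ln(test) - ln(reference) values.
    t_crit: 95th-percentile t quantile for n - 1 df (supplied by the caller)."""
    n = len(log_ratios)
    mean = statistics.mean(log_ratios)
    se = statistics.stdev(log_ratios) / math.sqrt(n)
    return math.exp(mean - t_crit * se), math.exp(mean + t_crit * se)

def no_relevant_interaction(ci, lower=0.67, upper=1.50):
    """Equivalence criterion used in the abstracts: CI entirely inside [0.67, 1.50]."""
    return lower <= ci[0] and ci[1] <= upper

# invented example: 8 subjects, ratios scattered around 1; t(0.95, 7 df) ~ 1.895
logs = [0.05, -0.10, 0.02, 0.08, -0.04, 0.00, 0.06, -0.07]
ci = ratio_90ci(logs, t_crit=1.895)
```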
the efficacy of a drug therapy is considerably influenced by patient compliance. within clinical trials the effect of poor compliance on the interpretation of study results frequently leads to underestimating the efficacy of the treatment. in the evaluation of the "lipid research clinics primary coronary prevention trial" and the "helsinki heart study" special attention was focused on compliance with medication. the strong influence of compliance on clinical outcome and the dilutional effect of poor compliance on the efficacy of the respective drugs occurred in both these trials. there are indirect (e.g. pill count, patient interview) and direct methods (e.g. measurement of drugs, metabolites or chemical markers in body fluids) used to assess compliance with drug therapy. the indirect methods mentioned are commonly considered unreliable. the direct methods can prove dose ingestion a short time before the sample is taken; however, they cannot show the time history of drug use. an advanced method of measuring compliance is to use electronic devices. the integration of time/date-recording microcircuitry into pharmaceutical packaging, so as to compile a time history of package use, provides real-time data indicative of the time when dosing occurred. this method supports a precise, quantitative definition of "patient compliance" as: the extent to which the actual time history of dosing corresponds to the prescribed drug regimen. by taking real-time compliance data into account, the results from clinical trials show not only clearer evaluations of drug efficacy and dose-response relationships but also a better understanding of dose-dependent adverse drug reactions. in the present study, we examined the usefulness of eroderm-1 and eroderm-2. seventy-five impotent men, 24 to 65 years old, participated in the present trial. the patients were classified into 3 groups of 25 patients each. 
the first group was treated with a cream containing only co-dergocrine mesilate (eroderm-1), the second received a cream containing isosorbide dinitrate, isoxsuprine hcl and co-dergocrine mesilate (eroderm-2), while the third used a cream containing placebo. the cream was applied to the penile shaft and glans 1/2-1 hr before sexual stimulation and intercourse. the patients were asked to report their experience via questionnaire after one week. the results of treatment are as follows: seven patients (28%) who applied eroderm-1 indicated a full erection and successful intercourse. the use of eroderm-2 restored potency in 14 patients (56%) of the second group. three men (12%) of the psychogenic type reported overall satisfaction with the placebo cream. treatment of impotence with eroderm cream was most successful in patients with psychogenic disorders, which are often coincident with minor vascular or neurological disorders. fair results were reported by patients afflicted by moderate neurological disorders. except for one case of drug allergy following the use of eroderm-2, no side effects were reported. we believe that eroderm cream has obvious advantages and may be a suitable treatment before the use of non-safe methods such as intracavernous medication. a new type of topically applied drug (eroderm creams) for impotence is presented. eroderm creams contain vasoactive drugs. these drugs have the ability to penetrate the penile cutaneous tissue and facilitate erection. in the present study, we examine the usefulness of eroderm-3 in the treatment of erectile dysfunction. eroderm-3 contains tiemonium methylsulfate, a.f. piperazine and isosorbide dinitrate. a randomized, double-blinded controlled trial in 36 patients was performed. the etiology of impotence was investigated. all patients received eroderm-3 and placebo cream. the patients were randomized into 2 groups of 18. the first group received eroderm-3 on day 1 and placebo cream on day 2, whereas group two received placebo on day 1. 
the patients were advised to apply the cream on the penile shaft 1/2-1 hr before sexual stimulation and intercourse. the patients reported their experience via questionnaire. overall, 70 percent of patients demonstrated a response with eroderm-3. the other responders reported partial erection and tumescence. three men (8%) reported a full erection and satisfactory intercourse with either cream; these patients had psychogenic impotence. neither eroderm-3 nor placebo cream produced a marked response in 11 patients. four patients had venous leakage and were advised to use a tourniquet at the base of the penis after 1/2 hr of cream application. only one of them indicated a good response. the highest activity proved to occur in psychogenic impotence. a lower rate of success was observed in patients with minor to moderate neurological and/or arterial disorders. no marked side effects were recorded. for these reasons eroderm-3 may be proposed as a first-line therapy of erectile dysfunction. control of cell proliferation is a basic homeostatic function in multicellular organisms. we studied the effects of some prostaglandins and leukotrienes and of their pharmacological inhibitors on cell proliferation in murine mast cells and mast cell lines, in a human promyelocytic cell line (hl-60 cells) and in burkitt's lymphoma cell lines. in addition, prostaglandin and leukotriene production was investigated in mast cells, representing putative endogenous sources of these lipid mediators. murine mast cells were derived from bone marrow of balb/c mice. proliferation of cells was estimated using a colorimetric assay (mtt test). production of prostaglandin d2 (pgd2), pgj2, delta-12-pgj2, leukotriene c4 (ltc4) and ltb4 by mast cells was determined by combined use of high-performance liquid chromatography and radioimmunoassay. 
pgd2 and its metabolites pgj2 and delta-12-pgj2 exhibited significant antiproliferative effects in the micromolar range in mast cells, mast cell lines, hl-60 and burkitt's lymphoma cell lines, whereas inhibition of cyclooxygenase by indomethacin was without major effects. ltc4 and ltb4 had a small stimulatory effect on cell proliferation in hl-60 cells. degradation and possibly induction of cell differentiation may have attenuated the actions of leukotrienes. the leukotriene biosynthesis inhibitors aa-861 and mk-886 reduced proliferation of hl-60 and lymphoma cells significantly but had no major effects on mast cell growth. on the other hand, mast cells stimulated with calcium ionophore produced pgd2 and its metabolites, as well as ltb4 and ltc4, in significant amounts. from our data we conclude that prostaglandins and leukotrienes may play an important role in the control of cell proliferation. we compared the pattern of drug expenditures of several hospitals in 1993 (size: 1000 to 1400 beds). a and b are university hospitals in the "old" and c, d and e are university hospitals in the "new" german countries; f is a community based institution in an "old" german country. the main data source were lists comprising all drugs according to their expenditures in rank order up to 100%. items were classified into i) pharmaceutical products including immunoglobulines, ii) blood and blood-derived products (cell concentrates, human albumin, clotting factors) and iii) contrast media (x-ray). with regard to group i), the highest expenditures occurred in hospitals a and b, whereas drug costs in c-e were 1/3 less and came to only 20% in hospital f. the main groups of drugs which together account for >50% of these expenditures are shown in the table. expenditures for group ii) products were about 20% up to 40% of group i and highest in hospitals a, b and e, but about 1/3 lower in hospitals c and d.
these results suggest meaningful differences in drug utilization between the old and new countries as well as between university institutions and community based hospitals. however, although all hospitals provide oncology and traumatology services and all university hospitals offer ntx, differences in other subspecialities, e.g. bone marrow and liver transplantation and the treatment of patients with haemophilia, must be considered, too. dr. med. sebastian harder, dept. clinical pharmacology, university hospital frankfurt, theodor-stern-kai 7, 60590 frankfurt/main, frg. m. hgnicka, r. spahr, m. feelisch, and r. gerzer: organic nitrates like glyceryl trinitrate (gtn) act as prodrugs and release nitric oxide (no), which corresponds to the endogenously produced endothelium-derived relaxing factor. in vascular tissue, no induces relaxation of smooth muscle cells, whereas in platelets it shows an antiaggregatory effect. both activities are mainly mediated via stimulation of soluble guanylyl cyclase (sgc) by no. in contrast to compounds which release no spontaneously, a membrane-associated biotransformation step is thought to be required for no release from organic nitrates. glutathione s-transferases and cytochrome p-450 enzymes have been shown to metabolize organic nitrates in the liver, but little is known as to whether these enzymes are involved in the metabolic conversion of organic nitrates in the vasculature. furthermore, it is still unclear whether or not platelets are capable of metabolizing organic nitrates to no. we isolated the microsomal fraction of bovine aorta in order to characterize its activities towards organic nitrates, using the guanylyl cyclase reaction as an indirect and the oxyhemoglobin technique as a direct measure of no liberation. gtn was metabolized to no by the microsomal fraction under aerobic conditions, already in the absence of added cofactors. this activity was not influenced by the cytochrome p-450 inhibitors cimetidine and metyrapone.
in contrast, the glutathione s-transferase substrate 1-chloro-2,4-dinitrobenzene and the glutathione s-transferase inhibitors sulfobromophthalein and ethacrynic acid did not affect no release, but potently inhibited sgc activity. blocking of microsomal thiol groups resulted in a decreased no release from gtn. homogenates of human platelets isolated by thrombapheresis and stabilized by addition of 5 mm n-acetylcysteine did not show no release from gtn, as determined by stimulation of the platelet sgc, even after addition of the possible cosubstrates glutathione and nadph. these data demonstrate (1) that bovine aortic microsomes exhibit an organic nitrate metabolizing and no-releasing activity whose properties are clearly different from classical cytochrome p-450 enzymes and from glutathione s-transferases, and (2) that human platelets themselves are not capable of bioactivating organic nitrates and therefore require organic nitrate metabolism in the vessel wall for antiaggregation to occur. bioavailability of acesal®, acesal® extra, micristin® (all 500 mg acetylsalicylic acid, asa) and miniasal® (30 mg asa), opw oranienburg, relative to the respective listed references was studied in female and male healthy volunteers (age 18-35 y, weight 48-90 kg, height 161-198 cm). asa and salicylic acid (sa) were measured using an hplc method validated from 50 ng/ml to 60 µg/ml. extent of absorption was assessed by auc (bioequivalence range 0.8-1.25), rate by cmax/auc (bioequivalence range 0.7-1.43). geometric means and 90%-confidence limits of the ratios test/reference (multiplicative model) are shown in the table. acesal® and micristin® were bioequivalent in rate and extent of absorption with the reference formulations. the fast-liberating acesal® extra was bioequivalent with respect to extent only. asa from miniasal® was absorbed more slowly than from an asa solution (cmax (68%-range): 264-536 ng/ml and 387-726 ng/ml; tmax (min-max): 0.33-2.0 h and 0.17-0.5 h).
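the multiplicative-model bioequivalence assessment used in the asa study above (geometric means and 90% confidence limits of the test/reference ratios) can be sketched as follows; this is a minimal illustration assuming a simple paired crossover layout, and the function name and the externally supplied t-quantile `t_crit` are our own choices, not from the abstract:

```python
import math

def geometric_mean_ratio_ci(test, reference, t_crit):
    """geometric mean of the within-subject test/reference ratios with a
    confidence interval computed on the log scale (multiplicative model);
    t_crit is the two-sided 90% t-quantile for n-1 degrees of freedom,
    looked up externally."""
    n = len(test)
    logs = [math.log(t / r) for t, r in zip(test, reference)]
    mean = sum(logs) / n
    se = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1) / n)
    return (math.exp(mean),
            math.exp(mean - t_crit * se),
            math.exp(mean + t_crit * se))
```

bioequivalence in extent would then require the interval for auc to lie within 0.8-1.25, and in rate the interval for cmax/auc to lie within 0.7-1.43, matching the acceptance ranges quoted above.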
asa from micristin® and the corresponding reference was absorbed more slowly than from acesal® and acesal® extra. this was accompanied by a decreased auc of asa (increase of first-pass metabolism) and an increased apparent t1/2 (absorption being rate-limiting). all ratios of auc(sa)/auc(asa) after administration of 500 mg asa were markedly higher than after 30 mg asa. thus, the formation of salicyluric acid from sa might be capacity-limited at doses of 500 mg asa. in the study "physicians' assessment of internal practice-conditions and regional health-services-conditions in accordance with ambulatory patient-management", a sample of 130 primary care physicians, comprising gps and internists, provided data for continuous analyses of ambulatory health care quality and structure. focussing on the physicians' drug prescription, the impacts of the 1993 reform law (gesundheitsstrukturgesetz, gsg) upon primary care providers and their therapeutic decisions were examined in 1993. four different surveys were carried out during the year, dealing with frequent patients' reasons for encounter in gps' offices. after a pretest was carried out, physicians reported on 3728 patient-physician encounters, based on mailed questionnaires. for every therapeutic change patients received, the reasons for the change were recorded (e.g. reform law, medical indication), and in addition the physicians' expectations towards three criteria to measure quality: 1) physicians' assessment of the patients' satisfaction, 2) adverse drug effects, 3) therapeutic benefit. regarding therapeutic changes due to the 1993 reform law (drug budgets, blacklist) it can be stated: 1) therapeutic changes due to the reform law were carried out with relevant frequency.
2) the reform law was of different concern regarding the different reasons for encounter we investigated. 3) the strength of the impact of the legal control mechanisms differed among several groups of physicians: those who had already been liable to recourse before 1993 more often carried out therapeutic changes according to the fixed drug budget. different multivariate logistic regression models yield an estimated odds ratio of about 3. 4) therapeutic changes in accordance with the 1993 reform law carried out at the beginning of the year more often suffered from negative expectations towards therapeutic quality than changes during the actual encounter, e.g. "joint pains" … 5.0 ku/l to 253 ± 14 min in those with a che ≤ 1.0 ku/l; the metabolic clearance rate (mcr) decreased from 226 ± 29 ml/min to 111 ± 180 ml/min. in patients on phenytoin the t½-β was reduced to … 90% of the platelet mass) was much more strongly affected by the dt-tx 30 treatment: the mean area was reduced by 61±7% after 25 mg, 69±20% after 50 mg, 78±9% after 100 mg, 53±25% after 200 mg and 71±8% after 400 mg dt-tx 30, versus -16±40% after placebo. in the presence of cells of the vessel wall (smc), overall thrombus formation was reduced by up to 42±21% after only 25 mg, 36±31% after 50 mg, 59±20% after 100 mg, 72±5% after 200 mg and 81±6% after 400 mg dt-tx 30, versus 2±12% after placebo. dt-tx 30, a molecule combining potent and specific thromboxane synthetase inhibition with prostaglandin endoperoxide/thromboxane a2 receptor antagonism, has been examined in healthy male subjects. collagen-induced platelet aggregation in platelet-rich plasma prepared from venous blood was measured photometrically before and up to 24 hours after a single oral dose of 25, 50, 100, 200 or 400 mg dt-tx 30 in a placebo-controlled, double-blind study.
platelet aggregation was induced in the ex vivo samples by collagen in concentrations between 0.5 and 10 µg/ml to evaluate platelet aggregation in relation to the strength of the proaggregatory stimulus. the ec50, i.e. the concentration of collagen required for a half-maximal aggregatory response (defined as the maximal change of optical density), was determined. in the placebo-treated control group, the mean ec50 was 365±55 ng/ml collagen (± se; n=10) before treatment. it then varied between 362±41 and 417±83 ng/ml collagen after treatment. the ratio of the post- to the individual pre-treatment ec50 values was 1.08±0.10 (n=10) at 0.5 h, 1.05±0.11 at 1 h, 1.13±0.14 at 2 h, 1.14±0.20 at 4 h, 1.07±0.07 at 8 h and 1.02±0.06 at 24 h. this indicates that the sensitivity of the platelets to collagen was not affected by the placebo treatment. oral treatment with dt-tx 30, however, strongly inhibited the aggregatory response of the platelets to collagen stimulation. the ec50 ratio was increased to a maximum of 4. the detection of endogenous opioids suggested the opinion that, in case of the presence in the organism of a receptor for an exogenous substance, there is probably a similar endogenous substance. the occurrence in the blood of persons who were not treated with cardiac glycosides of endogenous digoxin-like or ouabain-like factors confirms that opinion. in our study we took up the search for other drug-like factors in the blood serum of healthy people. in two hundred and twenty-five healthy volunteers (110 m, 115 f), non-smokers, not receiving any treatment before or during the test and aged between 18 and 49 y (mean age 36 y), the occurrence of drug-like factors in blood serum was studied. the examinations were carried out with the use of the fluorescence polarization immunoassay (fpia, tdx abbott). the presence of the following endogenous drug-like factors in the blood serum was evaluated: quinidine, phenytoin, carbamazepine, theophylline, cyclosporine and gentamicin.
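the ec50 determination in the dt-tx 30 study above (the collagen concentration giving a half-maximal aggregatory response) can be illustrated with a simple log-linear interpolation between the two bracketing concentration-response points; this is only a sketch under the assumption of an ascending, monotonically increasing data set, and the function name is ours:

```python
import math

def ec50_interpolated(concs, responses):
    """concentration producing a half-maximal response, estimated by
    log-linear interpolation between the two data points that bracket
    half of the maximal response; concs ascending, responses
    monotonically increasing."""
    half = max(responses) / 2.0
    for i in range(1, len(concs)):
        r0, r1 = responses[i - 1], responses[i]
        if r0 <= half <= r1:
            frac = (half - r0) / (r1 - r0)
            log_c = math.log(concs[i - 1]) + frac * (
                math.log(concs[i]) - math.log(concs[i - 1]))
            return math.exp(log_c)
    raise ValueError("half-maximal response not bracketed by the data")
```

the post/pre-treatment ec50 ratio reported in the abstract would then simply be the ratio of two such estimates for the same subject.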
the presence of endogenous phenytoin-like, theophylline-like and cyclosporine-like factors has been demonstrated. drug-like factors were not found in the case of quinidine, carbamazepine and gentamicin. the phenytoin-like factor was found in 91.1%, the theophylline-like factor in 95.1% and the cyclosporine-like factor in 96.9% of the examined volunteers. the mean values of the drug-like factors were as follows: phenytoin 0.18 ± 0.05 µg/ml, theophylline 0.16 ± 0.11 µg/ml and cyclosporine 12.41 ± 4.24 ng/ml. the supposition may be propounded that the organism produces drug-like substances according to its needs. the acetylation and oxidation phenotypes were studied in 448 healthy volunteers (235 m, 213 f) aged between 18 and 46 years (mean 36 y) in the wielkopolska region in poland. the acetylation phenotype was studied with the use of sulphadimidine, which was given in a dose of 44 mg/kg b.w. per os. sulphadimidine was determined by a spectrophotometric method. the border value of the metabolic ratio was 75% in urine. the oxidation phenotype was studied with the use of sparteine, which was given in a dose of 1.5 mg/kg b.w. per os. sparteine was determined by a gas chromatographic method in urine. if the mr was 20 … (p>0.05). cpb induced a significant decrease of pche (-37%) (p<0.05) and protein concentration (-24%) (p<0.05) and a less pronounced numerical reduction of the specific pche (-15%) (p>0.05). the reduction of pche and protein concentration was not significantly affected by ending cpb (p>0.05), and the values remained low over the remaining operation time. there was no significant difference in pche, measured at 37°c in vitro, or protein concentration between the normothermic and hypothermic groups (p>0.05). furthermore, there was no correlation between serum heparin activity and pche reduction. pche in the plasma of healthy volunteers was not significantly affected by either heparin up to 100 u/ml or aprotinin up to 10,000 u/ml (p>0.05).
conclusion: (1) … the concentration of the antitumor antibiotic mitomycin c (mmc), used in ophthalmic surgery for its antiproliferative effects, was measured in the aqueous humor of 7 glaucoma patients undergoing trabeculectomy. sponges soaked with mmc solution (100 µl of mmc solution 0.2 mg/ml: 20 µg) were applied intraoperatively under the scleral flap for 5 min. 100 to 200 µl of aqueous humor were drawn with a needle 10 min following the end of topical mmc treatment. samples were assayed for mmc using a reverse-phase hplc system with ultraviolet detection (c18 column; elution: phosphate buffer (0.01 m, ph 6.5):methanol, v:v = 70:30; 365 nm). swabs were extracted in phosphate buffer (0.1 m, ph 7.0) before hplc analysis. external calibration was used for mmc quantitation. the quantitation limit was 10 ng/ml. in all aqueous humor samples the mmc concentration was below 10 ng/ml. mmc in the swabs amounted to 37% of the mmc amount applied. conclusion: after intraoperative topical application, the mmc concentration in the aqueous humor of patients is very low. the substantial loss of mmc from the swabs used for the topical mmc treatment suggests (1) rapid systemic absorption of mmc and/or (2) a loss through irrigation of the operative field following topical mmc application. institut für pharmakologie und klinik für augenheilkunde, universität köln, gleuelerstrasse 24, 5093 köln. due to runaway costs of the national health service, which are reflected as well in growing expenditures for drugs at the university hospital of jena, investigation of indication-related drug administration patterns becomes more and more interesting. this holds especially true for intensive care units (itus), which are characterized by similarly high costs for technical equipment as for drugs (1), although any economical considerations seem to be questionable due to ethical reasons (2).
over a 4-month period, indication-related drug administrations of 2 surgical itus of the university hospital jena have been recorded and analyzed using a pc notebook. total expenditures for all 465 included patients add up to dm 1,144,773, regarding those drugs and blood products which caused 80% of total costs in 1993. the 10 leading substances (antithrombin iii, human albumin 20%, prothrombin complex, ...) represent 67% of total costs, including blood products, antibiotics and igm-enriched intravenous immunoglobulin. therefore the indication for particularly these drugs became more interesting for further investigations. already during the study, discussions with the treating medical staff were held, leading to newly developed therapy recommendations. while providing the same high standard of medical treatment, a remarkable cost saving for some drugs through more critical and purposeful use could already be achieved as a first result. however, the results of the study underline impressively the benefit of such investigations for the improvement of drug treatment. the simple replacement of expensive drugs (e.g. prothrombin complex) by higher quantities of cheaper ones of the same indication group (e.g. fresh frozen plasma (3)) does not necessarily mean less expenditure in all cases but may cause considerable side effects. ketoconazole is known to decrease pituitary acth secretion in vitro and inhibits adrenal 11-hydroxylase activity. to work out the clinical significance of both effects, analysis of episodic secretion of acth, cortisol (f) and 11-deoxycortisol (df) was performed in patients with cushing's syndrome (cs) requiring adrenostatic therapy. methods: ketoconazole was started in 11 patients with cs (9 acth-secreting pituitary adenomas [cd], 2 adrenal adenomas [aa]). in 9 of them (8 cd, 1 aa), blood samples were obtained for 24 hours at 10-min intervals (144 samples/patient) before and again under treatment (mean dose 1000 mg/d, >6 weeks).
hormone levels were measured by ria and secretion patterns analysed by means of pulsar, cluster and desade. 2 patients were investigated only once because treatment was stopped due to side effects. results: … we conclude that the observed 34% increase of plasma acth and the 58% decrease of the f/df ratio demonstrate that inhibition of adrenal 11β-hydroxylase activity is the primary mode of action of ketoconazole in vivo. even at high doses, acth and f secretion patterns could not be normalized. the improvement of pain and swelling conditions by means of drugs is an important method of achieving an enhanced perioperative quality of life in cases of dentoalveolar surgery. in 5 prospective, randomised, double-blind studies the influence of various concentrations of local anaesthetics and accompanying analgesic and antioedematous drugs was investigated in the case of osteotomies. all of the studies were carried out according to a standardised study procedure. a comparison of the local anaesthetics articaine 4% and articaine 2% (study 1) demonstrated the superior effect of articaine 4% with respect to onset of pain relief, period of effectiveness and ischaemia. recordings of the cheek swelling in the remaining studies were made both sonographically and with tape measurement, while the documentation of the pain was carried out by means of visual analogue scales on the day of operation and on the first and third post-operative days. the perioperative, exclusive administration of 2x6 mg dexamethasone (study 2) resulted in a significant reduction in swelling (58%), while the exclusive administration of 3x400 mg ibuprofen (study 3) was accompanied by a marked decrease in pain (64%) but no significant reduction of swelling in comparison to the placebo group. the combination of 3x400 mg ibuprofen and 32 mg methylprednisolone (study 4) yielded a decrease in pain of 67.7% and a reduction in swelling of 58%.
a comparison between the mono-drug ibuprofen and a combination drug asa/paracetamol (study 5) resulted in no significant difference in the reduction of swelling and pain and therefore highlighted no advantages for the combined drug. a mono-drug should therefore be given priority as an analgesic. the combination of ibuprofen and methylprednisolone offers the greatest reduction in pain and swelling. using the results of the randomised studies, a phased plan for a patient-oriented, anti-inflammatory therapy to accompany dento-alveolar surgery is presented. in a placebo-controlled study, 20 patients with congestive heart failure (nyha class ii) were treated orally for seven days with 100 mg ibopamine t.i.d.; 10 subjects had normal renal function (mean inulin clearance (gfr) 91 ± 3.4 ml/min), 10 patients suffered from chronic renal insufficiency (gfr 36 ± 3.9 ml/min; mean ± sem). the pharmacokinetic parameters of epinine (the maximum plasma concentration, the time to reach maximum plasma concentration and the area under the curve from 0 to 6 hours) were unaltered in impaired renal function when measured on the first or on the seventh treatment day. however, plasma concentrations in both groups were significantly higher on the first treatment day than after one week of ibopamine administration. in this context, antipyrine clearance as a parameter of oxidative liver metabolism, which might have been induced by ibopamine, revealed no differences between placebo and ibopamine values. in conclusion, the kinetic and dynamic behaviour of ibopamine was not altered by impaired renal function. human protein c (hpc) is a vitamin k-dependent glycoprotein with anticoagulant properties, produced in the liver. activated protein c splits the coagulation factors va and viiia by means of limited proteolysis (kisiel et al. 1977).
its concentration in normal plasma is 2-6 µg/ml. hpc's biological importance became evident when a congenital protein c deficiency, which results in severe recurrent thromboembolic diseases, was discovered (griffin et al. 1981). the recognition of a congenital hpc deficiency, as well as of the connection between acquired protein c deficiency and the appearance of thromboembolic complications, by means of highly accurate and sensitive methods is therefore of great practical importance for the clinic. murine monoclonal antibodies (moabs) against hpc were generated. antibody-producing hybridomas were tested by an "indirect elisa" against soluble antigens. the plates were coated with purified hpc up to 50 ng/100 µl. the peroxidase system was used to identify antibodies. the antibodies were tested against the remaining vitamin k-dependent proteins for cross-reactivity, as well as with hpc-deficient plasma for disturbances by other plasma proteins. the experiment described above represents a sensitive and specific method for measuring the hpc concentration with moabs. assessment of local drug absorption differences ("absorption window") in the human gastrointestinal tract is relevant for the development of prolonged-release preparations and for the prediction of possible absorption changes by modification of gastrointestinal motility. current methods are either invasive and expensive (catheterization of the intestine, hf-capsule method) or do not deliver the drug to a precisely defined localization. we evaluated the delay of drug release from tablets coated with methacrylic acid copolymer dissolving at different ph values as an alternative method. three coated preparations of caffeine tablets (onset of drug release in in vitro tests at ph 5.5, 6.0 and 7.0) and an uncoated tablet (control) were given to six healthy male volunteers in randomized order. caffeine was used because of its rapid and complete absorption and good tolerability.
blood samples were drawn up to 24 h postdose (coating ph 7.0: up to 30 h postdose), and caffeine concentrations were measured by hplc. auc, time to reach measurable caffeine concentrations (tlag), tmax, cmax and mean absorption time (mat) values for the coated preparations were compared to the reference tablet (mean ± sd of n=6). the relative bioavailability of the coated preparations did not differ from the reference, suggesting complete release of caffeine. all coatings delayed the onset of caffeine absorption. the tlag for the ph 5.5 preparation suggests that release started immediately after the tablet had left the stomach. the mean delay of 2.1 h for the ph 6.0 coating was highly reproducible and should reflect small intestine release. the ph 7.0 coating delayed absorption to the highest extent; however, the drug was probably released before the colon was reached. there is evidence that nitric oxide (no) plays a role in cardiovascular diseases like hypertension, myocardial ischemia and septic cardiomyopathy. no stimulates the guanylyl cyclase, leading to an increase in cgmp content. we investigated by immunoblotting the expression of the inducible nitric oxide synthase (inos) in left ventricular myocardium from failing human hearts due to idiopathic dilative cardiomyopathy (idc, n=10), ischemic cardiomyopathy (icm, n=7), becker muscular dystrophy (n=2) and sepsis (sh, n=3), compared to non-failing human hearts (nf, n=4). cytokine-stimulated mouse macrophages were used as positive controls. sds-polyacrylamide gel electrophoresis (7.5%) was performed with homogenates of left ventricular myocardium and mouse macrophages, respectively. proteins were detected by enhanced chemiluminescence using a mouse monoclonal antibody raised against inos. furthermore, we measured the cgmp content in these hearts by radioimmunoassay.
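the non-compartmental summary parameters compared in the caffeine coated-tablet study above (auc, cmax, tmax, mean residence/absorption times) can be sketched from a sampled concentration-time profile; this is a minimal linear-trapezoidal illustration without extrapolation to infinity, and the function name is our own:

```python
def nca_profile(times, concs):
    """non-compartmental summary of a concentration-time profile:
    linear trapezoidal auc and aumc up to the last sampling time,
    mean residence time mrt = aumc/auc, plus cmax and tmax."""
    auc = aumc = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        # trapezoidal areas under c(t) and t*c(t)
        auc += dt * (concs[i] + concs[i - 1]) / 2.0
        aumc += dt * (times[i] * concs[i] + times[i - 1] * concs[i - 1]) / 2.0
    cmax = max(concs)
    return {"auc": auc, "mrt": aumc / auc,
            "cmax": cmax, "tmax": times[concs.index(cmax)]}
```

a delay such as the 2.8 h mrt difference quoted for the enteric-coated asa tablets further below would correspond to the difference of two such mrt estimates (coated minus plain formulation).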
a band at about 130 kda was observed in two out of three hearts from patients with sepsis and in stimulated mouse macrophages. no inos protein expression was detected in either non-failing human hearts (n=4) or failing human hearts due to idc, ihd or bmd. in ventricular tissue from patients with sepsis, cgmp content was increased to 230% (72 ± 17 fmol/mg ww, n=3) compared to non-failing hearts (100% or 31 ± 6.9 fmol/mg ww, n=4). in left ventricular tissue from patients with heart failure due to idc, ihd and bmd, cgmp content did not differ from that in non-failing hearts. it is concluded that an enhanced inos protein expression may play a role in endotoxin shock but is unlikely to be involved in the pathophysiology of end-stage heart failure due to idc, ihd and bmd. (supported by the dfg.) nitric oxide (no) has been shown to be a major messenger molecule regulating blood vessel dilatation and platelet aggregation and serving as a central and peripheral neurotransmitter; furthermore, no is a crucial mediator of macrophage cytotoxicity. no production can be assessed reliably by determination of its main metabolites nitrite and nitrate in serum, reflecting no synthesis at the time of sampling, or in 24 h urine, reflecting daily no synthesis. farrell et al. (ann rheum dis 1992; 51:1219) recently reported elevated serum levels of nitrite in patients with rheumatoid arthritis (ra). we report here total body nitrate production and the effect of prednisolone in patients with ra. nitrate excretion in 24 h urines of 10 patients with ra, as defined by the 1987 revised criteria of the american rheumatism association, was measured by gas chromatography at 2 time points: first before the start of anti-inflammatory therapy with prednisolone, when the patients had high inflammatory activity as indicated by mean crp serum concentrations of 71 ± sd 61 mg/l and an elevated esr with a mean of 62 ± 28 after 1 hour.
secondly, 2-4 weeks after the start of prednisolone therapy in a dosage of 0.5 mg/kg body weight, when the patients showed clinical and biochemical improvement (crp 6 ± 5 mg/l, p<0.05; esr 32 ± 17, p<0.001; two-tailed, paired t-test). for comparison, 24 h urines from 18 healthy volunteers were obtained. before the start of prednisolone therapy, the urinary nitrate excretion in patients with ra (mean 223 ± sd 126 µmol/mmol creatinine) was more than twofold higher (p<0.001, two-tailed, unpaired t-test) than in healthy volunteers (83 ± 63 µmol/mmol creatinine). the urinary nitrate excretion decreased significantly (p<0.001, two-tailed, paired t-test) to 162 ± 83 µmol/mmol creatinine under therapy with prednisolone, when inflammatory activity was reduced considerably. despite the decrease, the urinary nitrate excretion was still twofold higher (p<0.05, two-tailed, unpaired t-test) in patients with ra than in the control group. our data suggest that endogenous no production is enhanced in patients with ra. furthermore, the results indicate that this elevated no synthesis can be reduced, in accordance with suppression of systemic inflammation, by prednisolone therapy. now as ever, physicians are entitled to prescribe drugs which have to be prepared in a pharmacy for a particular patient. little information is available on the frequency and patterns of these prescriptions. we had occasion to analyse the prescriptions of drugs which were prepared in 6 pharmacies in north thuringia (east germany) from october to december 1993 at the expense of a large health insurance company (allgemeine ortskrankenkasse). the selected pharmacies are localised in 6 cities. we found 2172 prescriptions of drugs made up in pharmacies among a total number of 58472 reviewed drug prescriptions. this is 3.7% of the total. most of these prescriptions were issued by dermatologists (56.6%), general practitioners (21.9%), paediatricians (7.9%) and otolaryngologists (4.1%).
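the two-tailed paired t-test used above to compare urinary nitrate excretion before and under prednisolone can be sketched as follows; a minimal illustration for same-subject before/after measurements, with the function name chosen by us (the resulting |t| would be compared against the t-quantile for n-1 degrees of freedom):

```python
import math

def paired_t_statistic(before, after):
    """paired t statistic for before/after measurements on the same
    subjects: mean difference divided by its standard error."""
    n = len(before)
    diffs = [b - a for b, a in zip(before, after)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)
```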
according to this, the most frequently prescribed groups of drugs were dermatics … enteric-coated tablets with 100 mg and 300 mg acetylsalicylic acid (asa) have been developed which should avoid the known gastrointestinal adverse events by a controlled drug release mainly in the duodenum, after having passed the stomach. a 4-way crossover study in 24 healthy male subjects, aged 19-39 years, was conducted to investigate the pharmacokinetics, bioavailability, safety and tolerance of asa and its metabolites salicylic acid and salicyluric acid following enteric-coated tablets in comparison with plain tablets. asa and its metabolites were determined by a sensitive, specific and validated hplc method. pharmacokinetic parameters were determined by non-compartmental analysis. bioequivalence was assessed by 90% confidence intervals. following the administration of enteric-coated tablets, a delayed absorption can be observed for both the 100 mg dose and the 300 mg dose. this is likely due to a delayed release of the active substance from the enteric-coated tablets in the small intestine after gastric passage. considering the mean residence times (mrt), there is a difference of at least 2.8 h following the enteric-coated tablets compared to the plain tablets for asa and the two metabolites measured. this difference represents the sum of the residence time in the stomach plus the time needed to destroy the coating of the tablet once it has left the stomach. in general, the maximum observed concentrations of both enteric-coated formulations occurred 3-6 h post dose. the pharmacokinetics of a novel immunoglobulin g (igg) preparation (bt507, biotest, dreieich, frg) have been determined in 12 healthy, male anti-hbs-negative volunteers. for this preparation only plasma from hiv-, hbv- and hcv-negative donors was used; the quality control for the product was in accordance with the ec guideline for virus removal and inactivation procedures.
each volunteer received a single intravenous infusion of 100 ml bt507 containing 5 g igg and anti-hbs > 5,000 iu. anti-hbs was used as a simply measurable and representative marker for the igg. blood samples for determination of anti-hbs (ausab eia, abbott, frg) were drawn before and directly after the infusion, after 1, 3, 6, 12 and 24 hours, and on days 3, 5, 8, 15, 22, 29, 43, 57, 71, 85 and 99. additionally, total protein, igg, iga, igm and c3/c4 complement were measured and blood hematology and clinical chemistry parameters determined. the pharmacokinetic parameters of anti-hbs were calculated using the topfit® pc program assuming a 2-compartment model. pharmacoeconomic evaluations (pe) describe the relationship between a certain health care input (costs) for a defined treatment and the clinical outcome of patients, measured in common natural units (e.g. blood pressure reduction in mmhg), quality of life (qol) gained, lives saved or even money saved due to the improvement in patients' functional status. this implies that the efficacy of a treatment has been measured and proven in clinical trials. in addition, in order to transfer data obtained in clinical trials to the clinical setting, an epidemiological database for diseases and eventually drug utilization may be required. the evaluation of efficacy depends on the disease to be treated or prevented and the mode of treatment. for acute, e.g. infectious, diseases the endpoint can be defined easily by the cure rate, but for pe the time (length of hospital stay) and other factors (e.g. number of daily drug administrations) have to be considered. in the case of chronic diseases, e.g. hypertension or hypercholesterolaemia, surrogate endpoints (blood pressure or serum cholesterol reduction) and information on side effects may be acceptable for the approval, but cannot be used for a meaningful pe. the latter should include the endpoints of the disease, i.e.
cardiovascular events (requiring hospitalisation and additional treatment) and mortality. furthermore, the qol has to be measured and considered for chronic treatment. several questionnaires have been developed to measure the overall qol or the health-related qol. especially the latter may be a more useful tool to detect and quantify the impact of a treatment on qol. combining the clinical endpoint mortality and qol by using qalys (quality-adjusted life-years) may be a useful tool to determine the value and costs of a given drug treatment, but cannot be applied to all treatments under all circumstances. sorbitol was used as a model substance to investigate the dynamics of the initial distribution process following bolus intravenous injection of drugs. to avoid a priori assumptions on the existence of well-mixed compartments, data analysis was based upon the concept of residence time density in a recirculatory system, regarding the pulmonary and systemic circulation as subsystems. the inverse gaussian distribution was used as an empirical model for the transit time distribution of sorbitol across the subsystems; distribution kinetics was evaluated by the relative dispersion of transit (circulation) times. the distribution volumes calculated from the mean transit times were compared with the model-independent estimate of the steady-state volume of distribution. kinetic data and estimates of cardiac output were obtained from 10 patients after percutaneous transluminal coronary angioplasty. each received a single 0.8 g iv bolus dose of sorbitol. arterial blood samples were collected over 2 hours. while the disposition curve could be well fitted by a tri-exponential function, the results indicate that distribution kinetics is also influenced by the transit time through the lungs, in contrast to the assumption of a well-mixed plasma pool underlying compartmental modelling.
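the recirculatory analysis above rests on two quantities: the mean transit time (mtt) of the inverse gaussian transit-time density, and a distribution volume computed as cardiac output times mtt. a numerical sketch with invented parameters (mean 3 min, shape 6 min, cardiac output 5 l/min), recovering the analytic moments by integration:

```python
import math

def inv_gauss_pdf(t, mu, lam):
    """Inverse Gaussian transit-time density with mean mu and shape lam."""
    return math.sqrt(lam / (2 * math.pi * t ** 3)) * \
           math.exp(-lam * (t - mu) ** 2 / (2 * mu ** 2 * t))

MU, LAM = 3.0, 6.0   # invented: mean transit time 3 min, shape parameter 6 min
CO = 5.0             # invented cardiac output, L/min

# Numerically recover MTT = integral of t*f(t) dt (should equal MU) and the
# relative dispersion CV^2 = var/mean^2 (analytically MU/LAM for this density).
dt = 0.001
mtt = 0.0
m2 = 0.0
for i in range(1, 60000):          # integrate out to 60 min
    t = i * dt
    f = inv_gauss_pdf(t, MU, LAM)
    mtt += t * f * dt
    m2 += t * t * f * dt
cv2 = (m2 - mtt ** 2) / mtt ** 2

v = CO * mtt   # distribution volume from the mean transit time
print(round(mtt, 2), round(cv2, 2), round(v, 1))
```

the relative dispersion cv^2 is exactly the quantity the abstract uses to characterize distribution kinetics; for the inverse gaussian it reduces to mu/lam.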
karité "butter" is used traditionally in west african manding culture as a cosmetic to protect the skin against the sun. gas chromatography was used to analyze the ingredients of karité butter from guinea. we found 3% palmitic acid, 42% stearic acid, 42% oleic acid, 8% linoleic acid and 0.1% of other fatty acids with higher chain lengths like arachidonic acid. some of these are essential fatty acids (vitamin f). furthermore, karité contains vitamins a and d as well as triterpene alcohols and phytosterols. an original extract was used to prepare a skin cream. this preparation was tested in 25 volunteers (18 women, 7 men; age 20-55 y.). the cream contained at least 50% karité, glycerol, emulsifiers and no preservative agent except for sorbic acid. 24 of the volunteers tolerated the cream very well and thought it effective. the skin became more tender and elastic. good results were obtained when the volunteers suffered from very dry skin. two of them, who were known to be allergic to most available skin creams, had no problems in using our karité cream. pure karité butter was used for four months to treat an african infant with neurodermatitis. after this time the symptoms had markedly improved, whereas previous therapy trials with other usual topical medicaments had been unsuccessful. these pre-studies have shown that dermatologic preparations containing karité may be a good alternative in the treatment of therapy-resistant skin diseases and may in some cases be able to replace corticoid treatment. ) and a low molecular weight heparin preparation (fragmin, 75 iu/kg bodyweight s.c.) on coagulation and platelet activation in vivo by measuring specific coagulation activation peptides [prothrombin fragment 1+2 (f1+2), thrombin-antithrombin iii complex (tat), β-thromboglobulin (β-tg)] in bleeding time blood (activated state) and in venous blood (basal state).
in bleeding time blood, r-hirudin and the heparin preparations significantly inhibited the formation of both tat and f1+2. however, the inhibitory effect of r-hirudin on f1+2 generation was short-lived and weaker compared to ufh and lmwh, and the tat/f1+2 ratio was significantly lower after r-hirudin than after both ufh and lmwh. thus, in vivo, when the coagulation system is in an activated state, r-hirudin exerts its anticoagulant effects predominantly by inhibiting thrombin (iia), whereas ufh and lmwh are directed against both xa and iia. a different mode of action of ufh and lmwh was not detectable. in venous blood, r-hirudin caused a moderate reduction of tat formation and an increase (at 1 hour) rather than decrease of f1+2 generation. formation of tat and f1+2 was suppressed at various time points following both ufh and lmwh. there was no difference in the tat/f1+2 ratio after r-hirudin and heparin. thus, a predominant effect of r-hirudin on iia (as found in bleeding time blood) was not detectable in venous blood. in bleeding time blood, r-hirudin (but neither ufh nor lmwh) significantly inhibited β-tg release. in contrast, both ufh and lmwh caused an increase of β-tg 10 hours after heparin application. our observation of reduction of platelet function after r-hirudin, compared to delayed platelet activation following ufh and lmwh, suggests an advantage of r-hirudin over heparin, especially in those clinical situations (such as arterial thromboembolism) where enhanced platelet activity has been shown to be of particular importance. the human cytochrome p450 isoform cyp1a2 determines the level of a variety of drugs metabolized by the enzyme, including caffeine (ca) and theophylline (th). more than 50 compounds are potential or proven inhibitors of this enzyme. some of them were reported to be substrates or inhibitors of cyp1a2 in vitro; others caused pharmacokinetic interactions with drugs metabolised by cyp1a2.
we characterized a series of these compounds with respect to their effect on cyp1a2 in human liver microsomes in relation to published pharmacokinetic interactions in vivo. cyp1a2 activity in vitro was measured as ca 3-demethylation at the high affinity site in human liver microsomes, using 15 min incubation at 37 °c with 125-2000 µm caffeine, an nadph generating system, and inhibitor concentrations covering 1.5 orders of magnitude. apparent ki values were estimated using nonlinear regression analysis. for inhibitory effects on cyp1a2 activity in vivo, the absorbed oral dose causing 50 % reduction in ca or th clearance (ed50) was estimated from all published interaction studies using the emax model. [...] followed by disinfectants (14.1 %); ointments (54.8 %) and solutions (17.8 %) were the most frequent drug forms [...] or german (17.4 %). our results show that even now drugs prepared [...]. trend analysis of the expenses at the various departments may be a basis for a rational and economic use of the drug budget. total drug expenses amounted to 30 mill. dm in 1993. 10 mill. dm (33 %) were used in surgical departments with intensive care units (icu) (general surgery, cardiovascular surgery, neurosurgery, gynecology, anaesthesiology), of which 40 % are needed by the icu and 25 % in the operating rooms. surgical departments without icu but with similar patient numbers (ophthalmology, ent, orthopedics and urology) get only 10 % of the budget (30 % needed for the operating rooms). the medical departments spent 10 mill. dm, of which the icu needs only 10 %, whereas the oncology (oncu) and antiinfective units use more than 50 %. a similar relation could be seen in the child hospital (2.4 mill. dm, 8 %), where 25 % were spent for the icu and 40 % for the oncu. the departments of dermatology and neurology get 10 %, the departments of radiology, nuclear medicine and radiation therapy only 8 % of the budget.
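the apparent ki values above come from fitting a competitive-inhibition model, v = vmax·s / (km·(1 + i/ki) + s). a toy sketch of that fit: noise-free synthetic rates, invented vmax/km/ki and inhibitor concentrations, and a simple grid search in place of the study's nonlinear regression software:

```python
# Competitive inhibition: v = Vmax*S / (Km*(1 + I/Ki) + S).
# We simulate rates with a known Ki, then recover it by least-squares grid search.

VMAX, KM = 100.0, 300.0   # invented kinetic constants (Km in µM)
TRUE_KI = 2.0             # invented inhibitor constant, µM

def rate(s, i, ki):
    return VMAX * s / (KM * (1.0 + i / ki) + s)

substrate = [125.0, 500.0, 1000.0, 2000.0]   # caffeine range as in the abstract, µM
inhibitor = [0.0, 1.0, 5.0, 10.0]            # invented inhibitor concentrations, µM
data = [(s, i, rate(s, i, TRUE_KI)) for s in substrate for i in inhibitor]

def sse(ki):
    """Sum of squared errors of the model against the synthetic rates."""
    return sum((v - rate(s, i, ki)) ** 2 for s, i, v in data)

candidates = [round(0.5 + 0.05 * k, 2) for k in range(91)]   # grid 0.5 ... 5.0 µM
best_ki = min(candidates, key=sse)
print(best_ki)  # 2.0 (exact recovery on noise-free data)
```

with real, noisy rates one would use proper nonlinear regression, but the objective being minimized is the same.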
antiinfective drugs (antibiotics, antimycotics, virustatics) are the most expensive (21 % of the budget), followed by drugs used for radiological procedures (8 %). increasing the knowledge about the costs of medical items and their rational and economical use may stop the overproportional increase of the drug budget. the mostly used [...] and a 100-fold higher efficiency than the r-form. the elimination of the talinolol enantiomers was studied in 12 healthy volunteers (age: 23-32 years, body weight: 55-84 kg) given a single oral dose (50 mg) or an intravenous infusion (30 mg) of the racemic drug. three volunteers were phenotypically poor metabolisers and nine were extensive metabolisers of the debrisoquine type of hydroxylation. the r- and s-enantiomers of talinolol were analysed in urine by an hplc method after enantioselective derivatisation. the concentrations of the enantiomers within every sampling period as well as the amounts of s- and r-enantiomer were determined; this corresponds to an s/r ratio of 1.00 ± 0.02. the mean total amount (s- + r-enantiomer) eliminated was on average 50 % of the administered dose. after oral administration, 26 ± 7 % of the dose were eliminated within 36 h. the amounts of talinolol enantiomers recovered were equal (s-enantiomer: 6416 ± 1624 µg [...]). the ratios of s- to r-concentrations at every sampling interval and of every volunteer were between 0.82 and 1.11 (mean: 1.00 after infusion and 1.01 after oral administration, respectively). medizinische fakultät carl gustav carus, technische universität, fiedlerstr. nitric oxide (no), synthesized by the inducible form of no synthase, has been implicated as an important mediator of specific and non-specific immune response; little is known about the in vivo synthesis of no in inflammatory joint diseases.
therefore we have studied the excretion of the major urinary metabolite of no, nitrate, in rats with adjuvant arthritis, a well established model of polyarthritis. in addition we assessed the urinary excretion of cyclic gmp, which is known to serve as second messenger for the vascular effects of no synthesized by the constitutive form of no synthase, affecting blood vessels, platelet aggregation and neurotransmission. in 24 h urines of 12 male sprague dawley rats at day 20 after induction of adjuvant arthritis, we measured nitrate excretion by gas chromatography and cyclic gmp by radioimmunoassay. for control, we determined the same parameters in 24 h urines of non-arthritic rats of the same strain and age. we found a significant (p < 0.001, two-tailed, unpaired t-test), more than 3-fold increase of urinary nitrate excretion in arthritic rats (mean 541 ± sd 273 µmol/mmol creatinine) as compared to non-arthritic rats (169 ± 39 µmol/mmol creatinine). urinary cyclic gmp excretion was slightly, but not significantly, lower in arthritic rats (510 ± 44 nmol/mmol creatinine) than in controls (747 ± 33 nmol/mmol creatinine). there were no major differences in food or water intake which could account for these results. the increased urinary nitrate excretion accompanied by normal cyclic gmp excretion suggests that no production by the inducible form of no synthase is enhanced in rats with adjuvant arthritis. institute of clinical pharmacology, hannover medical school, d-30623 hannover, germany, and *research center grünenthal gmbh, zieglerstr. 6, d-52078 aachen, germany. background: pge1 has been shown to be efficacious in the treatment of critical leg ischemia. despite an almost complete first-pass metabolism in the lung, the clinical effects of intraarterial and intravenous pge1 do not differ significantly.
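the group comparison above is an unpaired two-tailed t-test with n = 12 per group. the same statistic can be computed directly from the reported summary values (a pooled student's t sketch; the quoted critical value is from standard tables, not from the abstract):

```python
import math

def t_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Unpaired Student's t statistic with pooled variance."""
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m1 - m2) / se

# summary statistics from the abstract (µmol/mmol creatinine)
t_stat = t_from_summary(541, 273, 12, 169, 39, 12)
df = 12 + 12 - 2
print(round(t_stat, 2), df)  # 4.67 22
# t ~ 4.67 with 22 df exceeds the ~3.8 critical value for
# two-tailed p < 0.001, consistent with the reported significance.
```

note that the very unequal standard deviations (273 vs 39) would argue for a welch correction in practice; the pooled version is shown because that is the classical unpaired test named in the abstract.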
in addition, it is not fully understood which of the various pharmacological actions of pge1 is the main factor; by most authors, however, it is thought to be the increase of cutaneous and muscular blood flow. by means of [15-o]-h2o-pet, we studied muscular blood flow (mbf) of the leg in patients with peripheral arterial disease, comparing intraarterial and intravenous pge1. patients and methods: 8 patients (3 f, 5 m; mean age 59 y) with pad were studied (5 atherosclerosis, 3 thromboangiitis obliterans). on the first day, 5 µg pge1 were infused intraarterially within 50 minutes; pet scanning of the lower leg was performed at minutes 0, 25 and 50. on the following day, 40 µg pge1 were infused intravenously within 2 hours; pet scanning was performed at minutes 0, 30, 60 and 120. results: in the infused leg, the increase of mbf caused by intraarterial pge1 averaged 79 ± 59 % at minute 25 and 100 ± 85 % at minute 50; in the non-infused leg there was no effect. the increase rate in the infused leg was highly variable but did not correlate with sex, age, disease or clinical outcome. for intravenous pge1, the change of mbf at any time averaged almost 0 %. conclusion: unlike intraarterial pge1, intravenous pge1 does not increase the muscular blood flow of the leg. a comparable clinical effect provided, increase of muscular blood flow may not be considered the main mode of action of pge1 in critical leg ischemia. estrogen (er) and progesterone (pr) receptor status as well as lymph node involvement are important factors in predicting prognosis and sensitivity to hormone and chemotherapy in patients with breast cancer. the prognostic relevance of ps2 protein, egfr and cathepsin d is currently under debate. especially ps2 and egfr expression appears to provide additional information regarding the responsiveness of the tumour tissue to tamoxifen. the aim of the present study was to investigate the relationships between these parameters and established prognostic factors in breast cancer.
in a prospective study, ps2 and cathepsin d were assayed immunoradiometrically in the tumour cytosol of 122 patients; egfr was measured by elisa. relating the level of these factors to lymph node involvement, menopausal status as well as tumour size, no significant association could be established. in our findings, er and pr are significantly correlated with the expression of ps2, but neither is correlated with the cathepsin d status. egfr was shown to be inversely correlated with the content of er. a significant association between cathepsin d and ps2 could be established in patients with early recurrence. at a median follow-up of 15-24 months, recurrence was more common in patients with tumours having negative status for ps2, independent of receptor status. in conclusion, because of their relative independence from the er and pr status and other prognostic factors, the influence on the recurrence behaviour demonstrated here, and their role in promoting tumour dissemination and changing hormone therapy sensitivity, all three factors represent markers of prognostic relevance. departments of clinical pharmacology 1, nuclear medicine 2 and surgery 3. pharmacoeconomic studies, conducted either separately from or together with clinical trials, are increasing in both number and meaning. in a period of limited health care budgets, political and medical decision makers alike run the risk of accepting the results of such studies without critical reflection. careful evaluation of those studies by state-of-the-art methods is one way out of the trap. another could be to refer to ethical considerations. the problem in this context is that the discussion concerning ethical aspects of pharmacoeconomic research, at least in europe, is just in its beginning. therefore, no widely accepted standards are available. but they are essential to answer four main questions: 1. who should perform a pharmacoeconomic study? 2. which objectives should be considered? 3.
what kind of study should be performed (e.g. cost-effectiveness, cost-utility, cost-benefit analysis)? 4. which consequences will be drawn from the results? based on the case-study-orientated "moral cost-benefit model" (r. wilson, sci. tech. human values 9: 11-22, 1984), a three-step decision and evaluation model is proposed to handle bioethical problems in pharmacoeconomic studies: 1. moral risk analysis, 2. moral risk assessment, 3. moral risk management. possible practical consequences for decision making in research policy, study design and assessment of results are discussed. hirudin is the most potent known natural inhibitor of thrombin and is presently gaining popularity as an anticoagulant since recombinant forms have become available. the aim of the present study was to compare platelet aggregation, sensitivity to prostaglandin e1 (pge1) and thromboxane a2 (txa2) release in r-hirudinized and heparinized blood. platelet aggregation was measured turbidimetrically using a dual channel aggregometer (labor, germany) in blood samples of healthy volunteers anticoagulated with r-hirudin w015 (behring) and heparin (20 µg/ml blood each). aggregation was induced by arachidonic acid (aa; 0.5, 1.0 and 2.0 mm) and adp (1.0 µm). pge1 in concentrations of 10, 20 and 40 ng/ml was used. plasma txb2 content was measured by gas chromatography/mass spectrometry. this study showed a significantly lower aa-induced platelet aggregation in r-hirudinized plasma. three minutes after the induction of aggregation by 0.5 mm aa, the plasma txb2 concentration was 23.0 ng/ml in blood anticoagulated with r-hirudin and 108.4 ng/ml in heparin-anticoagulated blood. the extent of adp-induced aggregation was nearly the same in r-hirudinized and heparinized plasma. platelet sensitivity to pge1 was significantly higher in r-hirudinized blood.
thus, aa-induced platelet aggregation is significantly lower, and sensitivity to pge1 higher, in r-hirudin-anticoagulated blood in comparison with heparin-anticoagulated blood. university of tartu, puusepa str. 8, tartu ee 2400, estonia. anaemia has been reported in renal transplant (ntx) recipients treated with azathioprine (aza) and angiotensin converting enzyme inhibitors (ace-i). an abnormal aza metabolism with increased 6-thioguanine nucleotide (tgn) levels in erythrocytes is a possible cause of severe megaloblastic anaemia (lennard et al, br j clin pharmacol 1984). methods: 15 ntx patients receiving aza (1.9 ± 0.54 mg/kg/d), prednisolone (0.12 ± 0.06 mg/kg/d) and enalapril (ena) (0.16 ± 0.07 mg/kg/d) for more than 6 months were studied prospectively. blood samples were taken before and 3 h after administration of aza on 2 visits during ena treatment and 10 weeks after ena had been replaced by other antihypertensives (x). tgn in erythrocytes, 6-mercaptopurine (mp) and 6-thiouric acid (tua) in 3 h post-dose plasma (p.) and 24 h urine (u.) samples were analyzed by hplc using a mercurial cellulose resin for selective absorption of the thiol compounds. pharmacodynamic variables were hemoglobin (hb), erythropoietin (epo) and creatinine clearance. acetylcholine plays an important role in regulating various functions in the airways. in human lung, less is known about regional differences in cholinergic innervation and about receptor-mediated regulation of acetylcholine release. in the present study, the tissue content of endogenous acetylcholine and the release of newly-synthesized [3h]acetylcholine were measured in human lung; human tissue was obtained at thoracotomy from patients with lung cancer. moreover, in isolated rat tracheae with intact extrinsic vagal innervation, possible effects of β-adrenoceptor agonists on evoked [3h]acetylcholine release were studied.
endogenous acetylcholine was measured by hplc with ec-detection; evoked [3h]acetylcholine release was measured after a preceding incubation of the tissue with [3h]choline. human large (main bronchi) and small (subsegmental bronchi) airways contained similar amounts of acetylcholine (300 pmol/100 mg), whereas significantly less acetylcholine was found in lung parenchyma (60 pmol/100 mg). release of [3h]acetylcholine was evoked in human bronchi by transmural electrical stimulation (four 20 s trains at 15 hz). oxotremorine, an agonist at muscarine receptors, inhibited evoked [3h]acetylcholine release, indicating the existence of neuronal inhibitory receptors on pulmonary parasympathetic neurones. scopolamine shifted the oxotremorine curve to the right, suggesting a competitive interaction (pa2 value: 8.8; slope of the schild plot not different from unity); however, a rather shallow schild plot was obtained for pirenzepine. scopolamine but not pirenzepine enhanced evoked [3h]acetylcholine release. the present experiments indicate a dense cholinergic innervation in human bronchi; release of acetylcholine appears to be controlled by facilitatory and inhibitory muscarinic receptors. in isolated, mucosa-intact rat tracheae, isoprenaline (100 nm) inhibited [3h]acetylcholine release evoked by preganglionic nerve stimulation. isoprenaline was ineffective in mucosa-denuded tracheae or in the presence of indomethacin; thus, β-adrenoceptor agonists appear to inhibit acetylcholine release in the airways by the liberation of inhibitory prostanoids from the mucosa. the occurrence of non-enzymatic reactions between glucose and structural proteins is well known (vlassara h et al. (1994) lab invest 70: 138-151). the reaction between proteins and fructose (i.e. fructation), however, can also occur. like glucose-protein adducts, the fructose analogues are able to form so-called advanced glycation endproducts (age).
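the pa2 value quoted above comes from schild regression: log(dr − 1) is plotted against log[b], the slope should be near unity for competitive antagonism, and pa2 is the negative x-intercept. a synthetic sketch, seeding the antagonist dissociation constant so that pa2 = 8.8 as reported (the concentrations are invented):

```python
import math

# Schild analysis sketch: dose ratio DR = 1 + [B]/KB for a competitive
# antagonist; regression of log10(DR - 1) on log10[B] gives slope ~1 and
# x-intercept -pA2. KB chosen so that pA2 = 8.8, the value in the abstract.
KB = 10 ** -8.8
conc_b = [1e-8, 1e-7, 1e-6]   # invented antagonist concentrations (M)

x = [math.log10(b) for b in conc_b]
y = [math.log10(b / KB) for b in conc_b]   # = log10(DR - 1), since DR = 1 + [B]/KB

# simple least-squares line through the Schild points
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
intercept = ybar - slope * xbar
pa2 = intercept / slope        # -(x-intercept) of the regression line
print(round(slope, 3), round(pa2, 3))  # 1.0 8.8
```

a slope significantly below unity, as the shallow pirenzepine plot suggests, is exactly what disqualifies a simple one-site competitive interpretation.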
the inhibition of early and advanced products of fructation may be important for the prevention of diabetic late complications (mcpherson jd et al. (1988) biochemistry 27: 1901-1907). we investigated the in vitro fructation of human serum albumin (hsa) and its inhibition by selected drugs. hsa was fructated by incubation with 20 mmol/l fructose in 0.1 mol/l phosphate buffer, ph 7.4, at 37 °c for 21 days. the rate of fructation was measured by the following methods: a colorimetric method based on deglycation of glycated proteins by hydrazine (kobayashi k et al. (1994) biol pharm bull 17: 365-369), affinity chromatography with aminophenyl-boronate-agarose, and fluorescence measurement for the determination of age. we used aminoguanidine, penicillamine, captopril and alpha-lipoic acid (20 mmol/l) to study the inhibition of hsa fructation. after three weeks of incubation, the formation of early glycation products was inhibited by aminoguanidine (15%) and captopril (3%), whereas penicillamine and alpha-lipoic acid showed minimal inhibition. aminoguanidine inhibited the formation of age by 42%, penicillamine by 37%, alpha-lipoic acid by 11% and captopril by 37%. these results may suggest a potential use of the investigated drugs in the prevention of the formation of protein-fructose adducts. key: cord-314295-itr3b63z authors: cori, anne; ferguson, neil m.; fraser, christophe; cauchemez, simon title: a new framework and software to estimate time-varying reproduction numbers during epidemics date: 2013-09-15 journal: american journal of epidemiology doi: 10.1093/aje/kwt133 sha: doc_id: 314295 cord_uid: itr3b63z the quantification of transmissibility during epidemics is essential to designing and adjusting public health responses. transmissibility can be measured by the reproduction number r, the average number of secondary cases caused by an infected individual.
several methods have been proposed to estimate r over the course of an epidemic; however, they are usually difficult to implement for people without a strong background in statistical modeling. here, we present a ready-to-use tool for estimating r from incidence time series, which is implemented in popular software including microsoft excel (microsoft corporation, redmond, washington). this tool produces novel, statistically robust analytical estimates of r and incorporates uncertainty in the distribution of the serial interval (the time between the onset of symptoms in a primary case and the onset of symptoms in secondary cases). we applied the method to 5 historical outbreaks; the resulting estimates of r are consistent with those presented in the literature. this tool should help epidemiologists quantify temporal changes in the transmission intensity of future epidemics by using surveillance data. initially submitted november 26, 2012; accepted for publication may 23, 2013.
keywords: incidence; influenza; measles; reproduction number; sars; smallpox; software. abbreviations: ci, credible interval; sars, severe acute respiratory syndrome. the reproduction number, r, is the average number of secondary cases of disease caused by a single infected individual over his or her infectious period. this statistic, which is time and situation specific, is commonly used to characterize pathogen transmissibility during an epidemic. the monitoring of r over time provides feedback on the effectiveness of interventions and on the need to intensify control efforts (1-4), given that the goal of control efforts is to reduce r below the threshold value of 1 and as close to 0 as possible, thus bringing an epidemic under control. a wide range of methods have been proposed to estimate r from surveillance data (5-12). however, methods based on fitting mechanistic transmission models to incidence data are often difficult to generalize because of the context-specific assumptions often made (e.g., presence/absence of a latency period or size of the population studied). recently, a simpler statistical approach was proposed, which addressed this issue. the wallinga and teunis method (13) is generic and requires only case incidence data and the distribution of the serial interval (the time between the onset of symptoms in a primary case and the onset of symptoms of secondary cases) to estimate r over the course of an epidemic. it is based on the probabilistic reconstruction of transmission trees and on counting the number of secondary cases per infected individual. the method estimates 1 value of r per time step of incidence (typically, per day).
however, the approach has several drawbacks. first, estimates are right-censored, because the estimate of r at time t requires incidence data from times later than t. approaches to correct for this issue have been developed (14). second, when the data aggregation time step is small (e.g., daily data), estimates of r can vary considerably over short time periods, producing substantial negative autocorrelation. other studies have developed methods to achieve smoother estimates, but the results can be sensitive to the selected time step or to smoothing parameters (15-18). the implementation of these methods requires time and expertise, especially to produce confidence or credible intervals for r. hence, although there are many methods to quantify transmissibility during an epidemic, none currently comes as a ready-to-use tool for nonmodelers. the aim of our study was to develop a generic and robust tool for estimating the time-varying reproduction number, similar in spirit to earlier methods, but implemented with ready-to-use software and without the drawbacks mentioned above. we provide microsoft excel (microsoft corporation, redmond, washington) and r software (r foundation for statistical computing, vienna, austria) versions of this tool, and a user-friendly web interface will soon be available as well. (the use of the letter r to denote both the reproduction number and the software package is coincidental.) after describing our approach, we apply it to data from selected historical outbreaks of pandemic influenza, severe acute respiratory syndrome (sars), measles, and smallpox. a more detailed description of our methods can be found in web appendices 1-13, available at http://aje.oxfordjournals.org/. we assume that, once infected, individuals have an infectivity profile given by a probability distribution w_s, dependent on the time since infection of the case, s, but independent of calendar time, t.
for example, an individual will be most infectious at the time s when w_s is largest. the distribution w_s typically depends on individual biological factors such as pathogen shedding or symptom severity. the instantaneous reproduction number (19), r_t, can be estimated by the ratio of the number of new infections generated at time step t, i_t, to the total infectiousness of infected individuals at time t, given by sum_{s=1}^{t} i_{t-s} w_s, the sum of infection incidence up to time step t − 1 weighted by the infectivity function w_s. r_t is the average number of secondary cases that each infected individual would infect if the conditions remained as they were at time t. in practice, contact rates and transmissibility can change over time, particularly when control measures are initiated. this affects the number of secondary cases that a given individual infected at time step t will actually infect. the case reproduction number at time step t, r^c_t, takes those changes into account. it is the average number of secondary cases that a case infected at time step t will eventually infect (19). it is sometimes called the cohort reproduction number because it counts the average number of secondary transmissions caused by a cohort infected at time step t. however, estimation of r^c_t can be undertaken only in retrospect, once the secondary cases generated by cases infected at t have been infected. r^c_t is the quantity estimated in wallinga and teunis-type approaches (although the wallinga and teunis method considers cohorts of individuals with symptom onset at time t rather than infection at time t) (13). the distinction between r^c_t and r_t is similar to the distinction between the actual life span of individuals born in 2013, which we can measure only retrospectively after all individuals have died (i.e., in a century), and life expectancy in 2013, estimated now by assuming that death rates in the future will be similar to those in 2013.
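the instantaneous reproduction number defined above is just incidence divided by total infectiousness. a minimal sketch of that computation (the incidence series and infectivity profile below are toy numbers, not data from the paper):

```python
# R_t = I_t / Lambda_t, with Lambda_t = sum_{s=1..t} I_{t-s} * w_s
# (the "total infectiousness"). w[k] stores w_{k+1}.

def total_infectiousness(incidence, w, t):
    return sum(incidence[t - s] * w[s - 1]
               for s in range(1, t + 1) if s <= len(w))

def instantaneous_r(incidence, w, t):
    lam = total_infectiousness(incidence, w, t)
    return incidence[t] / lam

w = [0.5, 0.3, 0.2]           # toy infectivity profile, sums to 1
incidence = [10, 20, 30, 40]  # toy daily case counts, days 0..3

lam3 = total_infectiousness(incidence, w, 3)
r3 = instantaneous_r(incidence, w, 3)
print(lam3, round(r3, 3))  # 23.0 1.739
```

here day 3's 40 cases, against a weighted history of 23 expected "infection opportunities", give r_t of about 1.74, i.e. ongoing growth.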
r_t is the only reproduction number easily estimated in real time. moreover, effective control measures undertaken at time t are expected to result in a sudden decrease in r_t and a smoother decrease in r^c_t (19). hence, assessing the efficiency of control measures is easier using estimates of r_t. for these reasons, we focus on estimating the instantaneous reproduction number r_t in this article. (see wallinga and teunis (13) and cauchemez et al. (14, 15) for methods used to estimate the case reproduction number.) given the definition of r_t stated above, the incidence of cases at time step t is, on average, e[i_t] = r_t sum_{s=1}^{t} i_{t-s} w_s, where e[x] denotes the expectation of a random variable x and i_{t-s} is the incidence at time step t − s (19). bayesian statistical inference based on this transmission model leads to a simple analytical expression for the posterior distribution of r_t if we assume a gamma prior distribution for r_t. this makes obtaining any desired characteristic of this posterior distribution (e.g., the median, the variance, or the 95% credible interval) straightforward (web appendix 1). however, the resulting r_t estimates can be highly variable and hence difficult to interpret when the time step of the data is small (20). we therefore calculate estimates over longer time windows, under the assumption that the instantaneous reproduction number is constant within that window. at each time step t, we calculate the reproduction number over a time window of size τ ending at time t. these estimates, denoted r_{t,τ}, quantify the average transmissibility over a time window of length τ ending at time t. they are expected to be less variable as the window size τ increases, because 2 successive time windows will then have increasing overlap. as τ increases, the estimates of r_{t,τ} will also be more precise.
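with a gamma prior of shape a and scale b, the windowed posterior described above is again gamma, with shape a + sum of incidence and scale 1/(1/b + sum of total infectiousness), sums taken over the window [t − τ + 1, t]. a sketch with toy inputs and an assumed gamma(1, 5) prior (prior mean 5, a common default; the exact prior is a choice, not prescribed by this passage):

```python
import math

# Windowed posterior for R_{t,tau}: gamma prior (shape a, scale b) combined
# with the renewal-model likelihood gives posterior shape = a + sum(I) and
# posterior scale = 1 / (1/b + sum(Lambda)) over the window. Toy inputs.

def total_infectiousness(incidence, w, t):
    return sum(incidence[t - s] * w[s - 1]
               for s in range(1, t + 1) if s <= len(w))

def r_posterior(incidence, w, t, tau, a=1.0, b=5.0):
    window = range(t - tau + 1, t + 1)
    sum_i = sum(incidence[u] for u in window)
    sum_lam = sum(total_infectiousness(incidence, w, u) for u in window)
    shape = a + sum_i
    scale = 1.0 / (1.0 / b + sum_lam)
    mean = shape * scale                 # analytic gamma posterior mean
    sd = math.sqrt(shape) * scale        # analytic gamma posterior sd
    return mean, sd

w = [0.5, 0.3, 0.2]              # toy infectivity profile
incidence = [10, 20, 30, 40, 50]  # toy daily case counts
post_mean, post_sd = r_posterior(incidence, w, t=4, tau=2)
print(round(post_mean, 3), round(post_sd, 3))
```

the coefficient of variation of this posterior is 1/sqrt(shape), which is why, as the text notes, precision is governed directly by the number of incident cases falling in the window.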
in fact, we show in web appendix 2 that the precision of these estimates depends directly on the number of incident cases in the time window [t − τ + 1; t]. this allows us to control the precision by adjusting the window size. we also provide estimates of r t,τ that take into account the uncertainty in the serial interval distribution parameters by integrating over a range of means and standard deviations of the serial interval (web appendix 4). the estimation method presented above is developed for the ideal situation in which times of infection are known and the infectivity profile w s may be approximated by the distribution of the generation time (i.e., time from the infection of a primary case to infection of the cases he/she generates) (19) . however, times of infection are rarely observed, and the generation time distribution is therefore difficult to measure. on the other hand, the timing of onset of symptoms is usually known, and such data collected in closed settings where transmission can reliably be ascertained (e.g., households) can be used to estimate the distribution of the serial interval (time between onset of symptoms of a case and onset of symptoms of his/her secondary cases). therefore, in practice, we apply our method to data consisting of daily counts of onset of symptoms where the infectivity profile w s is approximated by the distribution of the serial interval. for many diseases, including influenza (21), sars (22), measles (23) , and smallpox (24), it is expected that infectiousness starts only around the time of symptom onset. in such diseases, and when the infectiousness profile after symptoms is independent of the incubation period, the distributions of the serial interval and the generation time are identical (web appendix 9), and our estimates are exact (albeit with t defined as the time of symptom onset of a primary case and a time lag in our estimates of r t equal to the incubation period). 
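the precision result mentioned at the start of this passage (web appendix 2) can be sketched as follows. because the posterior for r is gamma with shape equal to the prior shape plus the cases in the window, and a gamma distribution's coefficient of variation is 1/sqrt(shape), a target coefficient of variation translates directly into a minimum case count. the helper below is our own illustrative sketch, not the paper's code, and the prior shape a = 1 is an assumption.

```python
# sketch: posterior for r is gamma with shape a + (cases in window); a gamma
# distribution's coefficient of variation (sd/mean) is 1/sqrt(shape), so a
# target cv requires roughly 1/cv**2 - a cases. a = 1 is illustrative.

def cases_needed(target_cv, a=1.0):
    return max(0.0, 1.0 / target_cv ** 2 - a)

def smallest_window(incidence, t, target_cv, a=1.0):
    """smallest tau such that [t - tau + 1, t] contains enough cases."""
    needed = cases_needed(target_cv, a)
    total = 0.0
    for tau in range(1, t + 2):
        total += incidence[t - tau + 1]
        if total >= needed:
            return tau
    return None  # not enough cases even using the whole series

tau_min = smallest_window([5, 5, 5, 5], t=3, target_cv=0.3)
```

this is how one can "control the precision by adjusting the window size": widen the window until the cumulative incidence reaches the required count.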
we provide a microsoft excel spreadsheet (available at http://tools.epidemiology.net/epiestim.xls) that implements the estimation method described above. documentation on how to use the microsoft excel file is provided in web appendix 14. we have also developed an r package, epiestim, which can be downloaded at http://cran.r-project.org/web/packages/epiestim/index.html, in which both our method and the wallinga and teunis method (13) are implemented to facilitate comparison. we also developed a user-friendly web interface that will soon be available at http://shiny.epidemiology.net/epiestim. to illustrate the insights that our method can provide, we applied it to 5 historical epidemics that varied in terms of transmissibility, serial interval, and population size. for each epidemic, we retrieved the epidemic curve, as well as the mean and standard deviation of the serial interval, from the literature (table 1). the discrete distribution of the serial interval, w s , was then obtained by assuming a gamma distribution (web appendix 11). for each day t of each epidemic, we estimated the reproduction number for the weekly window ending on that day (r t,τ=7 , now denoted r for simplicity). the 5 epidemic curves, serial interval distributions, and r estimates are presented in figure 1. estimates are not shown from the very beginning of each epidemic because precise estimation is not possible in this period (web appendix 3). the estimated case reproduction numbers for those 5 epidemics are also shown in figure 1 for comparison. for the measles epidemic in hagelloch, germany, r initially decreased from a median value of 4.3 (95% credible interval (ci): 2.0-8.2) in the middle of the third week to 3.0 (95% ci: 1.3-5.9) at the end of the same week, and then increased to 11.5 (95% ci: 8.3-15.3) in the middle of week 4, and finally decreased again until the end of the epidemic, falling below 1 at the beginning of week 7. the increase in r from weeks 3 to 4 suggests increasing transmissibility.
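the gamma discretisation step can be sketched as below. this is our own naive numerical-integration version (the paper's exact discretisation is in web appendix 11): the density is integrated over each one-day interval to yield daily weights w_s . the sars-like mean of 8.4 days is taken from later in the text; the standard deviation of 3.8 days is purely illustrative.

```python
import math

# sketch: discretize a gamma-distributed serial interval (given mean and sd)
# into daily weights w_s by integrating the density over [s - 1, s].
# the sd value is illustrative, not from table 1.

def gamma_pdf(x, shape, scale):
    if x <= 0:
        return 0.0
    return (x ** (shape - 1) * math.exp(-x / scale)
            / (math.gamma(shape) * scale ** shape))

def discretize_serial_interval(mean, sd, max_days=20, steps=1000):
    shape = (mean / sd) ** 2   # gamma shape from mean and sd
    scale = sd ** 2 / mean     # gamma scale
    w = [0.0]                  # w_0 = 0: no same-day transmission
    h = 1.0 / steps
    for s in range(1, max_days + 1):
        # trapezoidal integration of the density over [s - 1, s]
        area = sum(h * 0.5 * (gamma_pdf(s - 1 + i * h, shape, scale)
                              + gamma_pdf(s - 1 + (i + 1) * h, shape, scale))
                   for i in range(steps))
        w.append(area)
    return w

w = discretize_serial_interval(mean=8.4, sd=3.8)
```

the resulting weights sum to just under 1 because a little probability mass lies beyond max_days; extending max_days recovers it.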
previous studies have highlighted the importance of the structure (by classroom and household) of the contact network in this epidemic and have suggested the existence of early "superspreaders" (25). these characteristics could explain the increase in r. interestingly, just after the first peak of incidence, r was still above 1, indicating that the epidemic was not yet over; and indeed, a second peak was still to come.

pandemic influenza in baltimore, maryland, 1918

this epidemic curve was characterized by 2 days with unusually high incidence, on the 1st and the 15th of october 1918 (days 31 and 45). this might be related to a recollection bias, because the data were collected after the epidemic. although we used a 1-week (τ = 7) time window to calculate r, estimates still fluctuated. we found an initial median estimate of r of 1.4 (95% ci: 1.0-1.9) at the end of week 2. estimates were then quite stable until r peaked at 2.4 (95% ci: 2.2-2.6) in the middle of week 5 (coincident with the second highest peak in incidence). estimates then decreased, with r falling below 1 early in week 7, before the largest peak in incidence (though around that peak, r estimates just exceeded 1 for a few days). at the very end of the epidemic, the credible intervals widened because of low case numbers. fraser et al. (20) found a similar temporal trend for r and attributed the decrease in r to social distancing measures that were undertaken around october 10 (day 40). this is consistent with our analysis (given that we were looking at r estimates over the past week) in which r fell below 1 on day 43. because the serial interval distribution for the 1918 pandemic is poorly documented, we explored a range of means and standard deviations for the serial interval and derived estimates of r by integrating over all these values (web appendix 4). the results are shown in figure 2.
the median estimated r in the presence of this uncertainty differed by 3% or less from the estimates obtained with a fixed serial interval distribution. however, the credible intervals were wider, reflecting the increased uncertainty. we also examined the choice of the assumed time window width used to estimate r for this data set. figure 3 shows daily estimates of r for 1-day, 1-week, 2-week, and 4-week windows, assuming a known serial interval distribution (as in table 1 ). the estimates varied substantially according to the window size chosen. the 1-day window estimates were so variable that it was hard to derive any trend from them. as the window size grew, the median estimates were smoother, and the credible intervals were narrower, as expected. for 4-week windows, the upper credible interval was below 1 at the end of the epidemic. however, longer intervals delayed the time at which the median estimated r fell below 1. overall, for this data set, a 1-week window represents a good compromise. the analysis of the smallpox outbreak in kosovo in 1972 illustrates the potentially long delay between the first case and the time when it is reasonable to start estimating r. here, for a small epidemic with a long mean serial interval, that delay was as long as 4 weeks. we found that r increased from a median value of 3.4 (95% ci: 0.8-9.3) early in the fourth week to 23.9 (95% ci: 19.0-29.5) in the middle of week 6 and then decreased again until the end of the epidemic, falling below 1 only in the beginning of week 8. the initial increase in r is consistent with a report that identified that transmission during the "second generation of cases" was unusually high, which the authors assumed to be "associated with inadequate protection from vaccination" (26) . estimates stayed above 1 until very late in the epidemic, indicating the limited success of control measures. 
vaccination, which started on march 16 (day 31), was slow (95% coverage was achieved only by the end of april, around day 70) and sometimes ineffective (26).

sars in hong kong, 2003

for the sars outbreak in hong kong in 2003, we find 2 successive peaks in r. the first occurred in the middle of week 3 with a median estimate of 12.2 (95% ci: 10.0-14.7), and the second occurred at the end of week 6 with a median estimate of 2.6 (95% ci: 2.4-2.9). r then fell below 1 by the end of week 7. these 2 peaks coincide with the occurrence of known superspreading events, the first occurring in weeks 3 and 4, and the second occurring between weeks 5 and 6 (16, 27, 28). it is notable that r falls below 1 very quickly after the epidemic peak, while incidence is still quite high. similar trends were found in previous analyses of this epidemic (13, 14). for the outbreak of 2009 pandemic influenza in a school in pennsylvania, we estimated that r was relatively constant over the whole second week of the epidemic, with a median around 1.7 (early in the week, 95% ci: 1.0-2.6; late in the week, 95% ci: 1.2-2.2). r then decreased, falling below 1 early in week 4. this could reflect the impact of control measures or could be due to the depletion of susceptibles in the school population. in the last days of the epidemic, 2 new cases appeared, probably as a result of reintroduction of infection from outside the school, resulting in estimates of r increasing again from a minimum value of 0.2 (95% ci: 0.1-0.5) to 0.9 (95% ci: 0.3-2.0).

comparison between instantaneous reproduction number r and case reproduction number r c

figure 1 shows the instantaneous (r) and case (r c ) reproduction numbers estimated for the 5 epidemics. r c was estimated by using the wallinga and teunis method (13), but on weekly windows (web appendix 5). the estimates of r c on weekly windows are smoother than the estimates of r on weekly windows. moreover, they are ahead of the estimates of r by a mean serial interval.
when the serial interval is short (e.g., for influenza or sars), this delay is small, and the smoothing effect is not very strong. however, when the serial interval is long (e.g., for measles or smallpox), both effects are more dramatic, and the curves have very different interpretations. for the measles outbreak in hagelloch, germany, the highest estimate of r c is early in the epidemic, on the week ending on day 11, which reflects high transmissibility 2 weeks later (i.e., on the week ending on day 25), coinciding exactly with the peak in the estimated r. this means that the high transmissibility around day 25 is due to cases who have shown symptoms, on average, 2 weeks earlier. similarly, for the smallpox epidemic in kosovo, the peak in r c is on the week ending on day 15, and the peak in r is on the week ending on day 38 (i.e., 23 days later), which is the mean serial interval we have assumed. transmissibility, measured by the instantaneous reproduction number, was very high during the second generation of cases, around day 38, but this was caused by the first generation of cases, who had symptoms around day 15. we have developed a simple and generic method to estimate time-varying instantaneous reproduction numbers from incidence time series. a simulation study presented in web appendix 6 shows that our method is able to detect changes in the reproduction number, for instance, following a control measure. we applied this method to analyze the time course of transmissibility for 5 historical outbreaks. our estimates of the instantaneous reproduction number are consistent with estimates of the case reproduction number despite considerable differences in interpretation. our analyses are also in agreement with previously published results obtained with generally more complicated and less general methods.
for instance, although our estimates of the reproduction number for the 1918 influenza pandemic in baltimore, maryland, are similar to the maximum likelihood estimates obtained by fraser et al. (20), it is much easier to produce credible intervals with our method than to produce confidence intervals with the previously used maximum likelihood approach. our method is also easier to implement and more flexible than the parametric estimation used by white and pagano (17) on the same data set. similarly, for the 2003 sars epidemic in hong kong, wallinga and teunis (13), as well as cauchemez et al. (14), found temporal trends for the case reproduction number similar to our estimates of the instantaneous reproduction number but with lower peak values. the case reproduction number (which is the quantity derived in those studies) is estimated over a generation of infection (i.e., over 8.4 days on average for sars). when the instantaneous reproduction number is estimated on time windows shorter than the average generation time (which is the case for our weekly windows), we expect the case reproduction number to be smoother than the instantaneous reproduction number, which could explain why we find higher peaks. again, it is more straightforward to produce credible intervals with our method than it is with those approaches. robust estimation of r provides important insights into temporal changes in transmission during an epidemic. however, interpreting the temporal trends is not always straightforward. changes in r can be due to changes in underlying transmissibility (e.g., due to seasonality), changes in contact patterns in the population affected, the impact of control measures, or the depletion of the size of the susceptible population. for instance, we found that r decreased very early during the sars epidemic in hong kong.
however, because many control measures were put in place at different times during the sars epidemic (29, 30), it is difficult to relate the decrease seen directly to a specific control measure from this analysis alone. likewise, we estimated that, for the outbreak of 2009 pandemic influenza in a school in pennsylvania, there was again an early decrease in r. but just estimating r does not allow us to determine whether this reflects a true reduction in transmissibility, possibly due to the school closure between may 14 and may 20 (days 17-23), or the depletion of susceptibles. by using a more complex analysis, cauchemez et al. (31) showed that the second of these explanations was more likely. the method developed here relies on knowledge of the serial interval distribution but is able to directly incorporate uncertainty in serial interval distribution estimates. allowing the mean and variance of the serial interval distribution to vary around average values affects median r estimates to a limited extent but increases the credible intervals around those estimates. the estimates of r obtained with our method are quite sensitive to the size of the sliding window over which the estimates are calculated. small windows can lead to highly variable estimates with wide credible intervals, whereas longer windows lead to smoothed estimates with narrower credible intervals. in web appendix 2, we discuss an interesting result on the minimum number of cases that need to be included in a time window to achieve a given precision in the estimate of r. because the beginning of an epidemic has few incident cases, we used this result to provide guidance on when it is reasonable to start estimating r. finally, our method makes several assumptions that would benefit from reiteration. first, we applied our method to time series of onset of symptoms, and we used the serial interval distribution as an approximation for the infectivity profile w s .
we showed in web appendix 12 that for diseases for which infectiousness starts only at the time of symptom onset, with an infectiousness profile after symptom onset independent of the incubation period, this leads to exact, but time-lagged, estimates of the reproduction number. although for many diseases, including those studied here (21-24), this assumption is sensible, it does not hold for pathogens such as human immunodeficiency virus, for which infectiousness precedes symptoms (32). in such cases, and if data on the incubation period (delay between infection and symptom onset) are available, a possible strategy would be to use the incubation period distribution to back-calculate the incidence of infections from the incidence of symptoms and then apply our method to estimate the reproduction number from those inferred data (19). however, this approach may lead to oversmoothed incidence time series compared with the true infection incidence (19). second, we assumed that all cases were detected. we showed in a simulation study that this should not dramatically affect estimates as long as the proportion of asymptomatic cases and the reporting rate are constant through time (web appendix 8). as an example, an early precursor of the method applied here was used to analyze time series of polio disease incidence, in which approximately 1 in 200 infections was symptomatic (33). in some situations, the reporting rate is likely to change over the course of an epidemic, for instance, as a result of improved case ascertainment or case definitions or changes in health care-seeking behavior over time. if data on reporting are available, it is possible to extend our method to take variable reporting rates into account (6). we assumed that there were no imported cases, so that each incident case could be attributed to a previous case in the incidence time series.
however, if imported cases were identified as such, our method could easily be adapted to account for them, as was done in previous studies by using other estimation methods (12, 34-36). finally, we assumed that the serial interval distribution was constant throughout the epidemic. however, if there were independent data (e.g., from contact tracing studies) suggesting evidence of changes in the serial interval distribution, our method could be applied with different serial interval distributions for different time periods of the epidemic. despite these assumptions, we feel the simplicity of the method we have presented here outweighs the limitations highlighted above. we hope our method will be adopted by epidemiologists and public health organizations. this will be facilitated by the r package and, more importantly, the simple microsoft excel tool and the web interface we have developed and released with this paper. these software programs should allow rapid analysis of incidence time series of any infectious disease within the scope described above and should be valuable tools for future outbreak investigations.
infectious diseases of humans: dynamics and control
strategies for mitigating an influenza pandemic
factors that make an infectious disease outbreak controllable
directly transmitted infectious diseases: control by vaccination
transmission dynamics of the etiological agent of sars in hong kong: impact of public health interventions
pandemic potential of a strain of influenza a (h1n1): early findings
transmission intensity and impact of control policies on the foot and mouth epidemic in great britain
definition and estimation of an actual reproduction number describing past infectious disease transmission: application to hiv epidemics among homosexual men in denmark
real time bayesian estimation of the epidemic potential of emerging infectious diseases
the estimation of the effective reproductive number from disease outbreak data
estimation of a time-varying force of infection and basic reproduction number with application to an outbreak of classical swine fever
pandemic (h1n1) 2009 influenza community transmission was established in one australian state when the virus was first identified in north america
different epidemic curves for severe acute respiratory syndrome reveal similar impacts of control measures
real-time estimates in early detection of sars
estimating in real time the efficacy of measures to control emerging communicable diseases
temporal variability and social heterogeneity in disease transmission: the case of sars in hong kong
transmissibility of the influenza virus in the 1918 pandemic
estimating the effective reproduction number for pandemic influenza from notification data made publicly available in real time: a multi-country analysis for influenza a/h1n1v
estimating individual and household reproduction numbers in an emerging epidemic
influenza transmission in households during the 1918 pandemic
viral shedding and clinical illness in naturally acquired influenza virus infections
clinical progression and viral load in a community outbreak of coronavirus-associated sars pneumonia: a prospective study
infectiousness of communicable diseases in the household (measles, chickenpox, and mumps)
transmission potential of smallpox: estimates based on detailed data from an outbreak
a network-based analysis of the 1861 hagelloch measles data
smallpox and its eradication
a major outbreak of severe acute respiratory syndrome in hong kong
the epidemiology of severe acute respiratory syndrome in the 2003 hong kong epidemic: an analysis of all 1755 patients
sars expert committee of hksar government. chronology of the sars epidemic in hong kong
sars expert committee. sars: chronology of a serial killer
role of social networks in shaping disease transmission during a community outbreak of 2009 h1n1 pandemic influenza
time from hiv-1 seroconversion to aids and death before widespread use of highly-active antiretroviral therapy: a collaborative re-analysis
new strategies for the elimination of polio from india
the effective reproduction number of pandemic influenza: prospective estimation
estimation of the reproduction number for 2009 pandemic influenza a(h1n1) in the presence of imported cases
transmissibility of 2009 pandemic influenza a(h1n1) in new zealand: effective reproduction number and influence of age, ethnicity and importations
bayesian inference for contact networks given epidemic data
influenza in maryland: preliminary statistics of certain localities
estimates of the reproduction numbers of spanish influenza using morbidity data
strategies for containing an emerging influenza pandemic in southeast asia
transmission parameters of the a/h1n1 (2009) influenza virus pandemic: a review
transmission potential of smallpox in contemporary populations
smallpox transmission and control: spatial dynamics in great britain
transmission dynamics and control of severe acute respiratory syndrome

we thank dr. thibaut jombart for his help in developing the r package epiestim, and dr.
david aanensen for his help in setting up the web interface. professor christophe fraser and dr. simon cauchemez contributed equally to this work. conflict of interest: none declared.

key: cord-301000-ozm5f5dy authors: naqvi, zainab batul; russell, yvette title: a wench's guide to surviving a 'global' pandemic crisis: feminist publishing in a time of covid-19 date: 2020-09-04 journal: fem leg stud doi: 10.1007/s10691-020-09435-1 sha: doc_id: 301000 cord_uid: ozm5f5dy

it has been quite a year so far(!) and as the wenches we are, we have been taking our time to collect our thoughts and reflections before sharing them at the start of this issue of the journal. in this editorial we think through the covid-19 pandemic and its devastating effects on the world, on our lives and on our editorial processes. we renew our commitment to improving our operations as a journal and its health along with our own as we deploy wench tactics to restore, sustain and slow down to negotiate this new reality, this new world. we conclude with an introduction to the fascinating contents of this issue along with a collaborative statement of values on open access as part of a collective of intersectional feminist and social justice editors. through all of the pain and suffering we focus our gaze on hope: hope that we can come through this global crisis together engaging in critical conversations about how we can be better and do better as editors, academics and individuals for ourselves, our colleagues and our journal. in these strange times…in these uncertain times…in these unprecedented times. how quickly our conversations and communications have become prefixed with a constant reminder of our current situation. our concern, our sympathies and our connectedness have all increased for one another as we 'check in' with those we interact with regularly, and crucially, those we don't.
as pandemic-related lockdowns in the uk and many parts of the world continue, causing many to experience restrictions in their movement and routines that they have never encountered before along with the enforced closure of businesses and places of work, we have been reflecting on the spaces and positions we inhabit as feminist individuals, academics and editors. in this editorial we think through some of the consequences of the covid-19 pandemic and state responses to the spread of the virus in the context of our ongoing efforts to employ decolonising techniques and deploy wench tactics (fletcher et al. 2017; naqvi et al. 2019) . in doing so, we seek to make sense of our new lived realities, although in many ways, just this attempt to make sense of the effects on our existence is both bewildering and revealing. one way this lack of sense is most starkly manifest is in the way it plays with and disrupts time. we experience time as both exponentially sped up and painfully slowed down. over the three months during which we have tried to draft this editorial there have been political, social and economic changes and events too numerous to detail; literally thousands of people have died. but somehow this flux is accompanied by a nagging sense of stasis; we're mulling the same issues, many of us are 'stuck' in our homes with or away from family, and we are still beholden to the virus. this discombobulating confrontation with the contingency of linear time leads us here to feminist work that contemplates time and timeliness and to a necessary reflection on the nature of scholarly work and publishing and the temporal imperatives driving them. in mulling our work and the time in and according to which it occurs, we are also led inevitably to a rumination on health: what is 'health', and who possesses it? 
the oversimplified answer is that human health refers to our state of physical, mental and emotional wellbeing; and that surely everyone has health, which makes it a global concern. unfortunately, we have seen that it really isn't that simple or tidy. fassin (2012) warns us that concern for global health is not something we can take for granted. both 'global' and 'health' are contested concepts (bonhomme 2018). global health is neither universal nor worldwide; it is not free from the politics of life and the value-giving processes that lead to lives being weighed against each other. the term 'global' is not only a geographical signifier but a "political work in progress that calls on us to remain ever mindful of the imperial durabilities of our time" (biehl 2016, 128, referring to stoler 2016). in what follows we reflect further on the lessons of the pandemic and how we view health: as a global public good that we should all be working together to improve and maintain for everyone? or as a privilege that in this patriarchal capitalist society is only available to those who can afford it? we return to the question of time and the timely and try to think collectively with our feminist colleagues about publishing, 'slow scholarship' and wench tactics. we consider what feminist leadership looks like in a time of covid-19 and renew our calls for a firm commitment to decolonising academic publishing and the university. for us, this has recently manifested in a collective statement on publishing and open access, which we have jointly produced and signed with several other intersectional feminist and social justice journal editorial boards. the editorial concludes with an acknowledgement of recently retired editorial board members and an introduction to the copy included in this issue of fls.
they are dirty, they are unsuited for life, they are unable, they are incapable, they are disposable, they are non-believers, they are unworthy, they are made to benefit us, they hate our freedom, they are undocumented, they are queer, they are black, they are indigenous, they are less than, they are against us, until finally, they are no more. (indigenous action 2020)

a pandemic is the worldwide spread of a new disease (who 2010)

in her discussion of affliction, disease and poverty, das (2014) describes the way in which definitions of global health centre on the control of communicable diseases. controlling the spread of infectious disease between us, then, is how we measure the success of global health. along the same vein, the world health organization (who) has defined a pandemic as "the worldwide spread of a disease" (2010). these two statements may seem innocuous but contain historical and contemporary oppressions trapped within their layers of meaning. it would be naive to claim that the covid-19 crisis is the first phenomenon to lay bare the structural injustices and inequalities which already plague us. and that is the key takeaway for us: we are already plagued and have always been plagued with communicable diseases including war, poverty, racism, colonialism, sexism (bonhomme 2018; siyada 2020). all around the world, lives are lost unnecessarily because of these diseases, and now that the privileged among us are personally at risk we urgently realise that the status quo is a problem: we are all fragile (msimang 2020). these existing diseases have spread worldwide and mutate into new forms all the time: we have been living through multiple, simultaneous pandemics our entire lives and for many of us, this is only now being thrown into stark relief. if we broaden this out further, the editors in us start to inquire into health as a broader concept.
health is not only relevant to the human condition but can be applied to systems, processes and institutions such as the academy and the academic publishing industry. the crisis has highlighted the urgent need for us to reflect on the health of our academic lives and spaces along with the ways in which social plagues have infected our ways of writing, editing, working and being. in doing so, we plan and strategise the best ways to be 'wenches in the works' (franklin 2015) taking advantage of this period to deploy wench tactics and rest, restore and sustain. we aim then, to take an all-encompassing approach to health to interrogate the sicknesses and weaknesses that afflict our spaces and worlds and then try to act for change. whilst the who states that a pandemic involves a new disease spreading throughout the world, the current situation shows us that it is not just the spread of this viral pathogen that is causing the pandemic. it is the combination of intersecting oppressions with the spread of the covid-19 viral molecules that make up the pandemic. we always benefit from employing decolonising techniques, by looking back to the past to better understand the present. the history of global health is a neo-colonial project (biehl 2016; magenya 2020 ) mired in imperialist and eurocentric attitudes towards disease control. disease control has been repeatedly used "to bolster the moral case for colonialism" (flint and hewitt 2015, 297) with colonialist administrators considering their ability to control the spread of infectious diseases in the colonies as an important skill. this speaks to their civilising missionary attitude that in purportedly tackling the spread of infectious disease, they were benefitting the natives and their presence was therefore positive (flint and hewitt 2015, 297) . 
this is further reflected in the international regulations for containing infectious disease spread, which have historically emphasised controls to protect european and north american interests. the us' endorsement of the who prior to its inception in 1969 was predicated on concerns for trading relationships, which were central to us economic growth (white 2020). the us was only dedicated to "wiping out disease everywhere" when there was a risk disease would enter its borders and affect its economy (white 2020, 1251). these imperialist attitudes around prioritising the health and economies of majority white countries in europe, north america and australia display the lack of regard for the deaths of those outside of these territories. the spread of cholera in haiti in 2010, the ebola outbreaks (one of which is still ongoing in drc), avian flu, swine flu, and bse have all been deemed epidemics which are not seen as serious or widespread enough to count as pandemics despite the alarming number of sufferers and fatalities. these disease outbreaks all have one thing in common: they affected racialised people in "exotic, far-away and (made-to-be) poor lands". these are lands that have deficient healthcare systems and resources because of imperial exploitation. this perception of communicable disease outbreaks as afflicting unhealthy and dirty others in far-away places underpins global health approaches and policy along with responses to the outbreaks in the west. even the labelling of the covid-19 crisis as a 'global pandemic' is loaded with meaning: it represents that the disease has spread to and is also killing white people in the west on a mass scale. if it were not affecting this subset of the population in a meaningful way, it would 'just' be an epidemic. this is clearly demonstrated by the uk government's woeful response to the disease and the increasing mortality rate we are experiencing.
Aaltola states that "diseases exist, flourish and die wider than physical environments where they adapt to local memories, practices and cultures" (2012, 2). By thinking that the UK is invincible because of imperial arrogance, the government has wilfully ignored the memories, practices and cultures of other countries with experience of managing such crises. Instead of rushing to save lives, there was a rush to save the economy, telling us that there has been no progress in mindset since the 1940s. Who suffers the most as a result of this? The marginalised, the poor and the underpaid key workers, who are disproportionately not white, because diseases are "embedded in and violently react with the fabric of political power", making them "signifiers of the underlying patterns of power" (Aaltola 2012, 2; Asia Pacific Forum on Women, Law and Development 2020). The virus may not discriminate, but the systems and structures by which the imperial state dictates who lives and survives do (Anumo 2020; Rutazibwa 2020). This discrimination has spilled onto our streets and into our living rooms with the racist and xenophobic discourses that have led to the unjust treatment of vulnerable minorities in society. From the US President deliberately calling Covid-19 the 'Chinese virus' to the abuse shouted at East Asian people on the streets, reactions to the virus have been rooted in intolerance and ignorance.
These unsurprising (and imperialist) responses are further manifested in policy responses which disadvantage the most vulnerable of us, including minorities with greater representation in lower socioeconomic groups; victims of abuse who are now told to stay locked up in the house with their abuser; elderly people in care homes or congregate living arrangements; those with 'pre-existing' conditions or disabilities; and immigrants who are being subjected to yet more nationalist rhetoric around border control and surveillance (see also Step Up Migrant Women UK 2020). Amidst these waves of pain and suffering we reel as we witness the continual devaluing of life with the deaths of Breonna Taylor on 13 March; Belly Mujinga on 5 April; George Floyd on 25 May; Bibaa Henry and Nicole Smallman on 6 June; Dominique Rem'mie Fells on 8 June; and Riah Milton on 9 June. These are just a few of the Black women and men whose lives have been cruelly and callously taken this year. The list is endless: bleeding and weeping like an open wound. The Black Lives Matter movement was given the attention it deserves by the national and international media as people took to the streets in solidarity, but the wheels of justice move slowly, grinding regularly to a halt and leaving us feeling helpless and hopeless at times. We wanted to express our support and decided to share, in a Twitter thread, links to free downloads of our articles written by Black authors and papers that adopt critical race approaches. We have now provided a period of open access to some key contributions made by Black scholars and activists to FLS over the last 27 years. What might have always been an inadequate response/intervention quickly revealed some undeniable shortcomings in the journal, its processes and the academic publishing context we negotiate every day. The lack of contributions published in the journal by Black scholars is undeniable and something we intend to reflect further on as a board.
We know that this needs to be better and we need to have more critical conversations around realising this. We are not looking for an immediate fix or cure. Much like the coronavirus, there is none forthcoming yet. But health, for humans and for journals, is in a state of constant flux; it is an ongoing journey, and we can only keep trying to take steps in the right direction to be the best we can: to affirm the value of Black scholarship and Black lives and counter the appalling racism of the institutions and operations that has heightened in visibility throughout this pandemic. So, yes, we are afflicted, and have been since before this pandemic spread, but amongst the fear and the trauma, this situation has revealed hope in humanity and offered an opportunity to step back and re-evaluate. If what they say is true, things will never be the same again, and that's exactly what we need: for things to change for the better; to remind us that our health is our wealth and that we need a (genuinely) global effort towards achieving this, not just for certain parts of the world. As feminist academics, writers, dreamers and, above all, wenches, we have been thinking about how best to deploy our tactics, our resources and our energies to change the 'global' health of academic publishing for the better. An approach to health which is not geared towards saving the economy of the industry first, but the people, their ideas and creativity, their knowledges from all over this suffering world. This leads us to try and make sense of the gendered and racialised impacts of the virus and this 'new reality' on the workforce and our work as part of an industry: the academic industry. As is to be expected, the recent crisis has laid bare and exacerbated existing socioeconomic inequalities.
In the United Kingdom, for example, British Black Africans and British Pakistanis are over two and a half times more likely to die in hospital of Covid-19 than the white population (Platt and Warwick 2020). Researchers speculate that the reasons for the higher death rates among Black, Asian and minority ethnic (BAME) populations in the UK include the fact that a third of all working-age Black Africans are employed in key worker roles, 50% more than the share of the white British population. Pakistani, Indian and Black African men are respectively 90%, 150% and 310% more likely to work in healthcare, where they are particularly at risk, than white British men (Siddique 2020). Underlying health conditions which render people more vulnerable to risk from infection are also overrepresented among older British Bangladeshi men and older people of a Pakistani or Black Caribbean background. Over 60% of those National Health Service workers who have died due to Covid-19 thus far were from BAME backgrounds (Cook et al. 2020). In addition to these stark ethnic and racial mortality disparities, research continues to emerge attesting to the disproportionate health, social protection and security, care, and economic burdens shouldered by women as the pandemic progresses (United Nations 2020). While men appear to carry a higher risk of mortality from the virus, the differential impacts of Covid-19 on men and women remain largely ignored by governments and global health institutions, perpetuating gender and health inequities (Wenham et al. 2020). In terms of labour politics, a noticeable shift in working patterns during the pandemic has, perhaps unsurprisingly, disproportionately impacted women.
As Joanne Conaghan observes, the effects of the pandemic are compounded for women due variously to "…their weak labour market position in low paid, highly precarious, and socially unprotected sectors of employment, their greater propensity to be living in poverty, along with the practical constraints which a significant increase in unpaid care work is likely to place on women's ability to pursue paid work" (2020). Conaghan points out that the gender division of labour manifests itself in different ways throughout history, affecting the social and economic status of women. But what the Covid-19 crisis reveals in this historical time period is the extent to which labouring practices for many have been 'feminised', "not just in the sense that the proportion of women participating in paid work has exponentially increased but also because the working conditions traditionally associated with women's work (low-paid, precarious, and service-based rather than manufacturing) have become the norm" (Conaghan 2020). Thus women workers, but also vulnerable young people, migrants and low-paid precarious workers, are further exposed by the "perfect storm of poverty, destitution, sickness and death" generated by Covid-19 (Conaghan 2020). How, then, are these labour realities relevant to us in the academic publishing sector? Some editors are reporting a noticeable downturn in submissions by women authors and, in some cases, an upturn in submissions by men (Fazackerley 2020), which would be consistent with Conaghan's thesis. The current paradigm, however, provides us with another opportunity to look at the mode of production operating in journal publishing, one that we at FLS are implicated in and have long been critical of (Fletcher et al. 2016, 2017).
Our insistence that academic publishing, and feminist publishing in particular, be seen as a political endeavour drives a lot of our editorial policies, including an emphasis on the importance of global South scholarship, employing decolonising techniques in our editorial practice, our involvement in the recent global South writing workshops (Naqvi et al. 2019) and our continuing support for early career researchers (ECRs), particularly those from marginalised or minoritised communities. (Zainab Naqvi and Kay Lalor have recently secured a grant from the Feminist Review Trust to run a workshop for 'global South' feminist ECRs based in the UK who work in the social sciences and humanities: see https://www.feminist-review-trust.com/awards/.) We remain troubled, however, by the insidious ambivalence of the neoliberal university as it lumbers on, undeterred and uninterested in the new lives we are all trying to adjust to. It was of serious concern to us, for example, that the REF publication deadline remained unaltered well into the onset of the pandemic, with associated impacts on journal editors and boards, reviewers and authors. We joined with many colleagues in signing an open letter to demand an immediate cancellation of the publication deadline (https://femrev.wordpress.com/2020/04/30/call-for-the-immediate-cancelation-of-the-ref-2021-publication-period/, 1 May 2020). In another appalling example of how structural disadvantages for Black researchers are embedded in the academy, we are currently watching the unfolding saga of none of the £4.3 million worth of funding allocated by UKRI and NIHR to investigate the disproportionate impacts of Covid-19 on 'BAME' communities being awarded to Black academics. This is compounded by the revelation that of the six grants awarded, three had a member of the awards assessment panel as a named co-investigator.
Many of those in our feminist community have come together over the last four months in various fora to share ideas and to support one another as we both adjust to this new paradigm and resist the continued imposition of the old one (see, for example, Graham et al. 2020). We took part in a collective discussion in July with colleagues on the editorial boards of Feminist Theory, Feminist Review, European Journal of Cultural Studies, European Journal of Women's Studies and The Sociological Review about academic publishing in the context of Covid-19. That discussion enabled us to share resources and build morale with a view to envisioning a future for feminist and critical academic publishing: a future in which, as a starting point, we challenge existing models of open access in publishing. The issues with open access are manifold and we have reflected on these previously (Fletcher et al. 2017). We aim to build and strengthen the links between the board and our fellow social justice and feminist journals to address this, along with the other problems that we have identified, experienced and maybe even fed into as editors. As a first step, we have written a collaborative statement on our joint reflections concerning open access, which sets out a non-exhaustive list of some of the values we wish to embody as journals and imbue our editorial work and processes with going forward. You can read the collective statement at the end of this editorial, and we encourage other journals to join and sign it. Reflecting on what has changed in this time of Covid-19 and what has stayed the same has led us back in many ways to where we started with wench tactics (Fletcher et al. 2017). How do we engender our own time and space when what little time and space there is isn't really for us?
Returning to a conscious consideration of timeliness and to the promise of the decolonial public university might be a way to carve out time and space anew, or to resist the pull back to 'normality'. One way of undoing time in the institutional contexts in which we find ourselves is through attempting to articulate and practise an ethics of slowness. Mountz and colleagues deploy a feminist ethics of care in trying to reimagine working conditions that challenge the imperatives of the neoliberal university (2015). The authors emphasise the need to prioritise "slow-moving conversation[s] on ways to slow down and claim time for slow scholarship and collective action informed by feminist politics" (Mountz et al. 2015, 1236). (We support the open letter produced by ten Black women colleagues calling on UKRI for transparency and accountability regarding this funding decision: https://knowledgeispower.live/about/, 18 August 2020. The letter points out that, according to HESA data, only 1.3% of full-time research positions in the UK are awarded to Black and mixed-heritage women, exposing the seriousness of this marginalisation, where Black researchers cannot even get grants to do research with their own communities.) This understanding of slow scholarship is predicated, of course, on a thoroughgoing critique of neoliberal governance and its drivers, which have fundamentally transformed the university in the UK (and elsewhere) over the last 10 years. Karin van Marle points out how neoliberal epistemologies crowd out other ways of knowing and being, such that they become common sense. This has a chilling effect on the university, which "instead of being a space where multiple views and knowledges are celebrated… becomes a very specific place of exclusion and limitation" (2019).
Van Marle insists that we try and think of the university by reference to a different set of aesthetics: "at least it should be one that acknowledges bodily-presence, sensory experiences, complexity and the need to slow down, to step aside from counting, competitiveness and suffocation" (2019). Amid Covid-19 we are on the precipice of an economic catastrophe for higher education, in which many of our colleagues will lose their jobs and the futures of early career researchers and those without permanent jobs look more precarious than ever. We are also concerned about those vulnerable and disabled colleagues, pregnant people and others who cannot acclimatise to the changes that are going to be demanded of us. We need to combine our ethos of slow scholarship with a sustainable collective labour politics that prioritises the most vulnerable among us, one that is particularly attentive to the disadvantages that devolve in line with the socio-economic/class, race and gender disparities discussed above. That the sector has long been poised on a knife-edge is something that many of our colleagues and unions have been warning about, and in the UK this has become more and more acute as austerity politics ravage the state sector, of which the university used to be a part. The notion of the public university often feels like a concept that is fast fading in our collective consciousness, but publishing, teaching and living in a time of Covid-19 makes it pressing once again. As Corey Robin puts it: "public spending, for public universities, is a bequest of permanence from one generation to the next. It is a promise to the future that it will enjoy the learning of the present and the literature of the past. It is what we need, more than ever, today" (2020). That imagining how we want our world(s) and universities to be is also a profoundly decolonial imperative is something that we must reckon with and take responsibility for (see also Otto and Grear 2018).
As many institutions of higher learning in the global North have been forced to confront their complicity in the global slave trade and in other forms of imperialism in the wake of #BlackLivesMatter, we have to insist on meaningful accountability and not, as Foluke Adebisi warns, PR stunts or marketing sops to 'diversity' politics (2019). Adebisi makes clear the importance of locating ourselves as researchers and teachers as a continuing part of the university's legacy, and the need to acknowledge racism and colonialism as ongoing processes: "my constant fear is that in the process of universities 'coming to terms', our proposals can turn out to be non-contextualised recommendations that do not take into account the embedded and extended nature of slavery and the slave trade" (2019). What if we showed our students, asks Adebisi, "in very concrete terms, exactly how the past bleeds into the present, how we walk side by side with history's ghosts, how we breathe coloniality every day, how our collective history is literally present in every single thing we do?" (2019). In other words, asks Olivia Rutazibwa, how can we effectively distinguish between teaching and learning that foregrounds the will to power and teaching and learning that foregrounds the will to live? By this she means that in our attempts at decolonising we "go beyond the merely representational" by engaging with and understanding the very materiality of being and the systems that determine and produce our lives (and deaths) (Rutazibwa 2018, 172). That such pedagogical and activist praxis necessarily requires time, space and slow conversations is immediately clear. Trying to think through slowness in the context of feminist decolonial editorial praxis is also a key aspect of wench tactics (Fletcher et al. 2017; Fletcher 2020). Being a wench in the works entails deploying tactics to influence how our journal is used, accessed and circulated. We add to this by now utilising wench tactics to influence how our journal is produced.
Intrinsic in this is the timeline around production, use, access and circulation. As we work from home, experiencing lockdowns, shielding and distancing, time simultaneously runs away from us and stretches out before us. To ground ourselves, then, we take a step back: we step out of the rat race that life has become and prioritise health, for ourselves, for others and for the journal. We first set out to rest and restore. We break out of the increasingly frantic rhythms and deadlines that are being fired at us by our institutions and do something else: we aim to remind ourselves of who we are and what we do. In practical terms, this has reminded us that the production and success of our journal are not dependent on us alone but on others, including our amazing authors, reviewers, copyediting team and, of course, our readers. In recognition of the hard work, commitment and engagement of all these people, we have given extended periods for the different steps in the issue production process, from reviews to revisions and even writing this editorial. As we do this, we remain defiant and difficult in the face of a publishing industry environment which requires constant, enthusiastic engagement. This is mirrored in higher education more broadly, as we are inundated with email after email about all the changes we must effect to our teaching, research and general working practices in the upcoming year. We need to rest and take restoration measures; we need time and resources to return to ourselves, and using wench tactics is an important way to achieve this. To support our rest and restoration, we have also been guided by slow scholarship principles, which sideline the measures of productivity, competition and finances underpinning the current institutional and structural approaches to this crisis. Instead we emphasise slower conversations and work to sustain ourselves, our health and the journal.
We withdraw from institutional priorities which value automation-level speeds so that we can sustain critical engagement with ourselves, our editorial practices and one another. We place worth and value, then, on ways of being and working which sustain us, nourish us and keep us grounded, reminding ourselves and one another that it is completely understandable that things will take time, need more time and deserve more time than the industry wants us to believe. Again, this requires us to be difficult and defiant: a decolonial feminist technique which reconfigures what is seen as valuable and worthy. Here, we critically question what is currently being positioned as valuable and worthy in industrial terms and then re-order the list to move our rest, restoration and sustenance to the top. Getting things done is valuable and worthy, but ensuring that we are rested and restored so that we can sustain ourselves, our engagement with the work we do, and our health is more so. FLS is a community, and we have been taking time in our meetings to make the space to check in with one another, hear how each member is doing and practise building care and solidarity with and for one another. This is not limited to our meetings but extends to the spaces and platforms outside of our 'formal', scheduled interactions. We aim to be there for each other on social media and in collective and individual ways. In doing so, we seek to model best practices of feminist leadership. Inspired by Leila Billing's writings (2020), we first make the invisible visible and cultivate cultures of mutual care. We make ourselves visible to one another, and to others; we want to be accessible to all of you and remind our colleagues and readers that, like you, we are human beings struggling with our lives, health and commitments during this crisis. We are there for each other in an ongoing state of mutual care.
In response to the terrible impacts this crisis and the already toxic aspects of the academy are having on minoritised ECRs, and to make us more visible and model these mutual care principles, Zainab and Kay have secured funding to run a writing and mentoring workshop for 'global South' feminist ECRs in the social sciences and humanities based in the UK. More information will be released on our social media channels, so please look out for it and apply if you are eligible and interested. Finally, we want to model feminist leadership by imagining and celebrating alternatives. This is exhibited in our recent work to imagine, with our colleagues from other feminist and social justice journals, what life after existing models of open access could and should look like (see below). The dreamers inside us envisage alternative ways to share our research and celebrate forms and productions of knowledges that are not given enough attention by us or the academy. In her work on complaint, Sara Ahmed advances the formation of the 'complaint collective' (2019). When we complain, we object to something that should not be happening, but we also complain because we are hopeful about how things could be different (Ahmed 2017). As we speak out against existing publishing models, we are optimistic about how things can change, and we become connected with others who share the same complaints and the same hope. This leads us to form a complaint collective with our fellow editors and those who are also concerned about the status quo, giving us the necessary space, time and opportunity to collaboratively imagine, celebrate and speak out in hope for an alternative model of publishing that is healthier, more equitable and representative. Change and movement are inevitable, and as we face the challenges of the present and dream about how we can make things better for the future, we now celebrate several of our cherished colleagues who are moving on to new and exciting things.
Before we introduce the papers that make up this issue of the journal, we want to acknowledge the work of our colleagues who have recently retired from their roles on the editorial board of FLS. Julie McCandless, Nicola Barker and Diamond Ashiagbor are irreplaceable members of our collective, and we already miss their sage wisdom, warmth and dedication to FLS. All three joined the FLS editorial board in 2014, when the journal became independent of the University of Kent, and were instrumental in guiding the journal as it has grown over the last six years. Julie McCandless is a powerhouse whose commitment to and influence on FLS cannot be overstated. She was a co-ordinating editor for the journal for most of her tenure, and authors will remember her thoroughness, care and generosity as an editor. Nicola Barker was a book reviews editor during her time on the board, and her invaluable contributions to our lively discussions and decision-making processes filled our time together with warmth and laughter. Finally, Diamond Ashiagbor, as well as serving as a book reviews editor for a significant period of her tenure, brought such vast experience and rigour to her role on the editorial board that we will dearly miss her wise counsel. We send our love and solidarity to these wonderful colleagues (and fellow wenches) and wish them well as they continue to blaze a feminist trail for us. And so, we are on the lookout for some more wonderful colleagues to join our editorial board. We have released a call for members, aiming to recruit colleagues from the UK and Ireland through an application process. If you are interested in joining the board, please do apply. We want the board to be as representative as possible and especially encourage colleagues with a feminist background, at any career stage, from minoritised groups to apply. If you have any questions about applying, please do get in touch with us; we would be delighted to tell you just how much fun it is to be a wench in the works.
This issue of the journal includes some remarkable feminist legal scholarship, notable for its breadth, both scholarly and geographical. Caroline Dick's article, entitled 'Sex, Sexism, and Judicial Misconduct: How the Canadian Judicial Council Perpetuates Sexism in the Legal Realm', is a fascinating and sobering look at bias in decisions of the Canadian Judicial Council. Dick considers two separate judicial misconduct complaints adjudicated by the Council: one in which a male judge exhibited bias against women while adjudicating a sexual assault trial, and a second in which graphic, sexual pictures of a female judge were posted on the internet without her knowledge or consent. Dick concludes that the decisions of the Council indicate that it is itself perpetuating gendered stereotypes informed by the notional ideal victim, further perpetuating sexism both in Canadian courtrooms and among the judiciary. In our second article of this issue, Maame Efua Addadzi-Koom carefully examines the history and effectiveness of the 2003 Maputo Protocol, a uniquely African instrument on women's rights that was established with the promise of addressing the regional peculiarities of African women. Analysing what little case law there is invoking the Protocol and concerning gender-based violence against women, Addadzi-Koom takes stock of the potential of the Protocol and the burgeoning due diligence principle in the women's rights jurisprudence of the ECOWAS Community Court of Justice (ECCJ). Addadzi-Koom concludes her discussion with some recommendations, arguing that the Protocol and the due diligence principle should be more widely applied by the ECCJ to centre women's rights in the sub-region and beyond.
in '"is this a time of beautiful chaos?": reflecting on international feminist legal methods' faye bird delves deep into feminist jurisprudence with an intriguing interrogation of margaret radin's work, and in particular, her distinction between 'ideal' and 'non-ideal' to evaluate different methodologies for critiquing international law and institutions. bird asserts that (re)viewing radin's framework in this context presages a new and more fruitful feminist pluralism through which we might better navigate institutional strategising. having featured heavily in faye bird's foregoing article, in our next paper dianne otto reflects artfully on the latest iteration of the feminist judgments project in her review essay: "feminist judging in action: reflecting on the feminist judgments in international law project". otto observes aspects of the feminist judgments that were transformative, before turning to the contributors' 'reflections', which highlight some of the obstructions encountered and compromises made in the processes of judging. otto concludes that the new collection makes a useful and compelling contribution to concretising feminist methods and highlighting the role of international jurisprudence as a feminist endeavour, while contributing to the insight of the feminist judgments project more broadly by exposing the scope and limits of justice delivered by the legal form of judging. the issue is completed by book reviews of three exciting new titles, all of which speak to issues of immediate concern to feminist legal scholars: eva nanopoulos reviews honor brabazon's wonderful edited collection neoliberal legality: understanding the role of law in the neoliberal project; lynsey mitchell considers the research handbook on feminist engagement with international law, edited by susan harris rimmer and kate ogg and; felicity adams reviews emma k russell on queer histories and the politics of policing. 
We are, as always, eternally grateful for the generosity and collegiality of our reviewers, without whom the journal could not function. We conclude this editorial with the recently written feminist and social justice editors' collaborative statement of intent on the values and principles we wish to adopt and embody in our work and in our efforts to survive, thrive and maybe even dismantle parts of the academic publishing machine. Our journey and vital conversations around and towards health continue as we try to become better editors, academics and women: taking the time and resources to be our best (and healthiest) wench selves. We are a collective of intersectional feminist and social justice journal editors. We reject the narrow values of efficiency, transparency and compliance that inform current developments and policies in open access and platform publishing. Together, we seek further collaboration in the development of alternative publishing processes, practices and infrastructures imbued with the values of social and environmental justice. The dominant model of open access is dominated by commercial values. Commercial licences, such as CC BY, are mandated or preferred by governments, funders and policy makers who are effectively seeking more public subsidy for the private sector's use of university research, with no reciprocal financial arrangement (Berry 2017). Open access platforms such as Academia.edu are extractive and exploitative. They defer the costs of publishing to publishers, universities and independent scholars, while selling the data derived from the uses of publicly funded research. As such, they represent the next stage in the capitalisation of knowledge. Commercial platforms are emphatically not open source and tend towards monopoly ownership. Presenting themselves as mere intermediaries between users, they obtain privileged access to surveil and record user activity, and they benefit from network effects.
A major irony of open access policy is that it aims to break up the giants of commercial journal publishing but facilitates existing or emerging platform monopolies. The tech industry, now dominating publishing and seeking to dominate the academy through publishing, having offered open access as a solution to the ills of scholarly publishing, is currently offering solutions to the problems caused by open access, including discoverability, distribution, digital preservation and the development and networking of institutional repositories that stand little to no chance of competing with Academia.edu. Platforms are not only extractive but have material effects on research, helping to effect a movement upstream in the research cycle whereby knowledge is redesigned, automatically pre-fitted for an economy based on efficiency, competition, performance, engagement or impact. Alongside the transformative agreements currently being made between (mainly) commercial journal publishers and consortia led by powerful universities, publishers such as Elsevier are gaining greater access to the research cycle and to the data currently owned by universities. Open access benefits commercial interests. The current model also serves to sideline research and scholarship produced outside of universities altogether, creating financial barriers to publishing for scholars outside of the global North/West and for independent scholars, as well as for early career researchers and others whose institutional affiliation is, like their employment status, highly precarious and contingent, and for authors who do not have the support of well-funded institutions and/or whose research is not funded by research councils. Moreover, STEM fields and preprint platforms are determining the development of open access publishing cultures. These are forms of content management that offer cost reduction and other efficiencies by erasing the publisher and minimising the editorial function.
they raise questions of quality assurance and further the technologisation, standardisation and systematisation of scholarly research, such as the automation of peer review, and the disaggregation of journals and books into article or article-based units that can be easily monitored and tracked. therefore, the underlying values of widening participation, public knowledge and the fair sharing of resources need to be reclaimed. platforms should be refitted for ahss scholarship (where speed, for example, is not an indicator of the importance of research) and integrated with more conventional modes of dissemination and distribution more suited to the field and its preference for print monographs. platform development should be distributed and institutionally owned and, instead of replacing the publisher-as-problem, it should recognise and represent a more diverse set of publishing interests, stemming from scholar-led and university press publishers that are mission-driven and not-for-profit. it should enable and sustain the innovation generated through intellectual kinship across diaspora spaces. open access reaches into, and disrupts, the academy through policy mandates that are, at present, unfunded or underfunded and that defer more of the costs of publishing onto a sector that could not support them even before the covid-19 pandemic and its catastrophic effect on institutional finances as well as individual lives and wellbeing. as a collective of feminist and social justice journal editors we believe that journal publishing during and after the pandemic should seek to end the exploitation of scholarly labour and foreground a new ecological economics of scholarly publishing based on cooperation and collaboration instead of competition; responsible householding, or careful management of the environment rather than extraction; and the fair sharing of finite resources (such as time and materials).
rather than extracting more resource (including free labour) from an already depleted and uneven sector, thereby further entrenching inequalities within and between universities globally, and sidelining scholarship produced outside of universities altogether, journal publishing after open access should be responsive and responsible toward the wellbeing, values and ambitions of diverse scholars and institutions across ahss and stem in the global south and the global north. we will learn from, and engage with, other collaborative ventures such as amelica in latin america, 8 coko in the us 9 and copim in the uk. 10 building on these initiatives, which are primarily concerned with implementing open science or open humanities agendas, we are inaugurating a more radical project of reevaluating and reorganising journal publishing:

• replacing the values of efficiency, transparency and compliance with those of equality, diversity, solidarity, care and inclusion
• providing a more sustainable and equitable ecological economics of scholarly publishing in tune with social and environmental justice
• working collectively and collaboratively rather than competitively
• thinking and acting internationally, rather than through parochial national or regional policies
• working across publishing and the academy with a view to responsible householding and accountability in both sectors
• seeking to work across funding and institutional barriers, including between stem and ahss scholars
• seeking further collaborations and partnerships in order to build new structures (disciplines, ethics, processes and practices of scholarship including peer review, citation, impact, engagement and metrics) and infrastructures to support a more healthy and diverse publishing ecology
• challenging the technologisation and systematisation of research by working to increase our visibility as editors and academics, making us and our publications more accessible and approachable for those who are
minoritised in academic publishing.

publishing after open access does not have a resolution (let alone a technological solution) or endpoint, but rather is a continual process of discussion, controversy-making and opening up to possibilities. we do not know what journal publishing after open access is, but we do know that we must work together in order to create a just alternative to the existing extractive and predatory model, an alternative that operates according to a different set of values and priorities from those that dominate scholarly publishing at the moment. these values and priorities need to inform or constitute new publishing systems committed to the public ownership rather than the continued privatisation of knowledge. we recognise that the choice we face is not between open and closed access, since these are coterminous, but between publishing practices that either threaten or promote justice. we fully recognise the scale of the challenge in promoting justice against the global trend of entrenched populism, nationalism and neoliberalism. collective action and intervention are a starting point, and we take inspiration from the recent statement issued by the black writers' guild. 11 our open exploration of the future of journal publishing will be informed by the history of radical and social justice publishing and by intersectional feminist knowledge and communication practices that are non-binary, non-hierarchical, situated, embodied and affective. against the instrumentalisation and operationalisation of knowledge, we will foreground both validation and experimentation, authority and ethics. we will ask, against a narrow implementation of impact and metrics, what really counts as scholarship, who gets to decide, who gets counted within its remit, and what it can still do in the world. we believe that knowledge operates in, rather than on, the world, co-constituting it, rather than serving as a form of mastery and control.
the re-evaluation of knowledge and its dissemination is, therefore, we believe, a necessary and urgent form of re-worlding. we are open to other journals joining this collective. if you are interested please get in touch with any of the signatories below:

european journal of cultural studies
european journal of women's studies
feminist legal studies
feminist theory
the sociological review

references
understanding the politics of pandemic scares: an introduction to global politosomatics
complaint as diversity work
why complain? feministkilljoys
unchecked corporate power paved the way for covid-19 and globally, women are at the frontlines. cambridge core
covid-19 highlights the failure of neoliberal capitalism: we need feminist global solidarity
the uses of open access. stunlaw, philosophy and critique for a digital age
theorizing global health
what does feminist leadership look like in a pandemic? medium
epidemics and global history: the power of medicine in the middle east
coronavirus, colonization, and capitalism. common dreams
exclusive: deaths of nhs staff from covid-19 analysed
covid-19 and inequalities at work: a gender lens. futures of work
affliction: disease health and poverty (forms of living)
that obscure object of global health
women's research plummets during lockdown - but articles from men increase. the guardian
on being uncomfortable
wench tactics? openings in conditions of closure
playing with the slow university? thinking about rhythm, routine and rest in decelerating life. presentation at qmul 18
sexism as a means of reproduction: some reflections on the politics of academic practice
colonial tropes and hiv/aids in africa: sex, disease and race
dialogue on the impact of coronavirus on research and publishing
indigenous action. 2020.
rethinking the apocalypse: an indigenous anti-futurist manifesto
making a feminist internet in africa: why the internet needs african feminists and feminisms
for slow scholarship: a feminist politics of resistance through collective action in the neoliberal university
homesick: notes on a lockdown
back at the kitchen table: reflections on decolonising and internationalising with the global south sociolegal writing workshops
international law, social change and resistance: a conversation between professor anna grear (cardiff) and professorial fellow dianne otto (melbourne)
are some ethnic groups more vulnerable to covid-19 than others? the institute for fiscal studies
the pandemic is the time to resurrect the public university. the new yorker
on babies and bathwater: decolonizing international development studies
the corona pandemic blows the lid off the idea of western superiority. https://oliviarutazibwa.wordpress.com/2020/04/12/the-corona-pandemic-blows-the-lid-off-the-idea-ofwestern-superiorit
what they did yesterday afternoon
british bame covid-19 death rate 'more than twice that of whites'. the guardian
coronavirus pandemic in the shadow of capitalist exploitation and imperialist domination of people and nature - statement by the regional secretariat for the north african network for food sovereignty. committee for the abolition of illegitimate debt
migrant women: failed by the state, locked in abuse
duress: imperial durabilities in our times. durham: duke up
united nations. 2020. policy brief: the impact of covid-19 on women. 9 april
"life is not simply fact" - aesthetics, atmosphere & the neoliberal university
covid-19: the gendered impacts of the outbreak
the art of medicine - historical linkages: epidemic threat, economic risk, and xenophobia

key: cord-348584-j3r2veou
authors: sipetas, charalampos; keklikoglou, andronikos; gonzales, eric j.
title: estimation of left behind subway passengers through archived data and video image processing
date: 2020-07-30
journal: transp res part c emerg technol
doi: 10.1016/j.trc.2020.102727
sha: doc_id: 348584 cord_uid: j3r2veou

crowding is one of the most common problems for public transportation systems worldwide, and extreme crowding can lead to passengers being left behind when they are unable to board the first arriving bus or train. this paper combines existing data sources with an emerging technology for object detection to estimate the number of passengers that are left behind on subway platforms. the methodology proposed in this study has been developed and applied to the subway in boston, massachusetts. trains are not currently equipped with automated passenger counters, and farecard data is only collected on entry to the system. an analysis of crowding from inferred origin-destination data was used to identify stations with a high likelihood of passengers being left behind during peak hours. results from north station during afternoon peak hours are presented here. image processing and object detection software was used to count the number of passengers that were left behind on station platforms from surveillance video feeds. automatically counted passengers and train operations data were used to develop logistic regression models that were calibrated to manual counts of left behind passengers on a typical weekday with normal operating conditions. the models were validated against manual counts of left behind passengers on a separate day with normal operations. the results show that by fusing passenger counts from video with train operations data, the number of passengers left behind during a day's rush period can be estimated within [formula: see text] of their actual number.

public transportation serves an important role in moving large numbers of commuters, especially in large cities.
transit performance is an important determinant of ridership, and transit services that offer short and reliable waiting times for commuters offer a competitive alternative to driving, which contributes to reduced congestion and improved quality of life. crowding is a major challenge for public transit systems all over the world, because it increases waiting times and travel times and decreases operating speeds, reliability, and passenger comfort (tirachini et al., 2013). studies show that crowding in public transit increases anxiety, stress, and feelings of invasion of privacy for passengers (lundberg, 1976). the covid-19 pandemic has also highlighted the public health risks associated with passenger crowding in transit vehicles. although transit ridership dropped precipitously during the pandemic in cities around the world, concerns about crowding on transit continue as economies re-open, commuters return to work, and agencies plan for the future. when trains or buses are overcrowded, commuters may not be able to board the first vehicle that arrives. these commuters are left behind by the vehicle they wished to board, and their number is directly related to various basic performance measures of public transportation. there are a number of technologies that can be used to observe, count, and track pedestrians and pedestrian movements in an area. digital image processing for object detection is an appealing approach for transit systems because surveillance videos are already being recorded in transit stations for safety and security purposes. the video feed records passenger positions and movements in the same way that a person would observe them, as opposed to infrared or wireless signal detectors that merely detect the movement of a person past a point or their proximity to a detector. the detection of objects in surveillance videos is an invaluable tool for passenger counting and has numerous applications.
for example, object detection can be used for passenger counting and tracking, crowd recognition, and hazardous object recognition. in a relevant application, velastin et al. (2006) use image processing techniques to detect potentially dangerous situations in railway systems. computer vision aims to replicate human vision, electronically perceiving, understanding and storing information extracted from one or more images (sonka et al., 2014). there are various techniques for using computers to process an image for object detection by extracting useful information. recent methods use feature-based techniques rather than segmentation of a moving foreground from a static background, which was used in the past. the detected features are then extracted and classified, typically using either boosted classifiers or support vector machine (svm) methods (viola, 1993; cheng et al., 2015). svm is one of the most popular methods used in object detection algorithms and especially passenger counting, because it offers a method to estimate a hyperplane that splits feature vectors extracted from pedestrians and other samples (cheng et al., 2015), differentiating pedestrians from other unwanted features. boosting uses a sequence of algorithms to weight weak classifiers and combine them to form a strong hypothesis when training the algorithm to attain accurate detection (zhou, 2012). current methods for object detection take a classifier for an object and evaluate it at several locations and scales in a test image, which is time-consuming and creates numerous computational instabilities at large scales (deng et al., 2010). the most recent methods, such as the region-based convolutional neural network (r-cnn), reduce the region over which the classifier runs and incorporate the svm. first, category-independent regions are proposed to generate potential bounding boxes.
second, the classifier runs and extracts a fixed-length feature vector for each of the proposed regions. finally, the bounding boxes are refined by the elimination of duplicate detections and rescoring of the boxes based on other objects in the scene using svms (girshick et al., 2014). the bounding box is a rectangular box located around an object to represent its detection (coniglio et al., 2017; lézoray and grady, 2012). the resulting object detection datasets are images with tags used to classify different categories (deng et al., 2009; everingham et al., 2010). an open-source software tool called you only look once (yolo) uses a different method than the above-mentioned techniques for object detection. it casts detection as a single regression problem, estimating bounding box coordinates and class probabilities simultaneously by using a single convolutional network that predicts multiple bounding boxes and class probabilities for those boxes (redmon, 2016; redmon et al., 2016). another advantage of yolo is that, unlike other techniques such as svms, it sees the entire image globally instead of sections of the image. this enables yolo to implicitly encode contextual information about classes and their appearance, and at the same time makes yolo more accurate, making fewer than half the number of errors compared to fast r-cnn. yolo uses parameters for object detection that are acquired from a training dataset. yolo can learn and detect generalizable representations of objects, outperforming other detection methods, including r-cnn. the ability to train yolo on images has the potential to directly optimize detection performance and increase the bounding box probabilities. the calibration of parameters for object detection using an algorithm like yolo requires training datasets with a large number of tagged images.
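as a concrete illustration of how detections from a yolo-style network translate into platform counts, the sketch below filters already-decoded detections by class label and confidence threshold. this is not the study's code: the detection tuples are invented, and a real pipeline would also apply non-maximum suppression before counting.

```python
# hedged sketch (not the study's implementation): given detections already
# decoded from a yolo-style network as (label, confidence, box) tuples,
# count the people visible on the platform above a confidence threshold.

def count_persons(detections, threshold=0.5):
    """count detections labeled 'person' at or above the threshold."""
    return sum(1 for label, conf, box in detections
               if label == "person" and conf >= threshold)

# invented example frame; boxes are (x, y, width, height) in pixels
frame = [
    ("person", 0.91, (10, 40, 60, 180)),
    ("person", 0.48, (200, 35, 55, 170)),   # below threshold, ignored
    ("backpack", 0.77, (15, 120, 30, 40)),  # not a person, ignored
    ("person", 0.66, (320, 42, 58, 175)),
]
print(count_persons(frame, threshold=0.5))  # 2
```

since "person" is one of the object types in the coco label set behind the default yolo parameters, this counting step needs no transit-specific training data.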
although a custom training set that is specific to the context of application (e.g., mbta transit stations) would be desirable for achieving the most accurate object detection outcomes, it is very costly to create a large tagged training set from scratch. the common objects in context (coco) dataset is a large-scale object detection, segmentation, and captioning dataset that is freely available to provide default parameter values for yolo. the coco dataset is not specific to passengers or transit stations, but it is a general dataset that includes 328,000 images, 2.5 million tagged objects and 91 object types, including "person" (lin et al., 2014) . nevertheless, the tool is effective for identifying individual people in camera feeds, and the use of general training data allows the same tool to be applied in other contexts without requiring additional training data. the proposed methodology aims to estimate the number of left behind passengers at a transit station when trains are too crowded to board. fig. 1 presents a flowchart of the data and methods used in this study in order to provide a roadmap for the analysis described in this paper. the methods rely heavily on two data sources that are automatically collected and recorded (shown in blue): train tracking records that indicate train locations over time, and surveillance video feeds. additional archived data on inferred travel patterns from farecard records is used only to identify the most crowded parts of the system (shown in purple), and manual counts are used to estimate and validate models (shown in red). for model implementation, the proposed models require only the automatically collected input data. the first step of the analysis presented in this paper is to identify the stations and times of day when crowding is most likely to cause passengers to be left behind on the platform. this analysis is used only for determining where to collect data to demonstrate the implementation of the proposed model. 
this step could be skipped for cases in which the locations for implementation are already known. the identification of study sites involves a crowding analysis that makes use of two data sources: train tracking records, which denote the locations of trains over time; and origin-destination-transfer (odx) passenger flows, which are inferred from passenger farecard data. peaks in train occupancy and numbers of boarding passengers show where and when passengers are most likely to be left behind, as described in section 4.1. then, section 4.2 describes an analysis of surveillance camera views to determine which stations have unobstructed platform views and station geometry that allows the automated video analysis techniques to be used to count passengers. train tracking data, which includes the time each train enters a track circuit, is automatically recorded into the mbta research database. by comparing this data against manual observations of the times that train doors open and close in the station, a linear regression model is estimated to predict dwell time from the train tracking records, as described in section 5.1. this model is used to obtain automated dwell time estimates as inputs to the model of left behind passengers. automated counts of the number of passengers on each station platform are obtained using yolo, an automated image detection algorithm. the parameters of the algorithm are associated with the freely-available coco training dataset, as described in section 2. the threshold for object identification is calibrated, as described in section 6.1, by applying the algorithm to the surveillance video feed and comparing with manual counts of the passengers remaining on the platform after the doors have closed (section 5.2) and the passengers entering and exiting the platform (section 5.3).
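the threshold calibration step described above can be sketched as a simple search: sweep candidate confidence thresholds, count detections above each, and keep the threshold whose automated counts best match the manual platform counts. the detection confidences and manual counts below are invented for illustration, not data from the study.

```python
# hypothetical calibration sketch: pick the detection confidence threshold
# that minimizes mean absolute error against manual platform counts.

frames = [  # invented per-frame detection confidences and manual counts
    {"confidences": [0.9, 0.8, 0.55, 0.4, 0.3], "manual": 3},
    {"confidences": [0.85, 0.6, 0.52, 0.2],     "manual": 3},
    {"confidences": [0.95, 0.7, 0.45],          "manual": 2},
]

def mean_abs_error(threshold):
    """average |automated count - manual count| over all frames."""
    errors = []
    for f in frames:
        auto = sum(1 for c in f["confidences"] if c >= threshold)
        errors.append(abs(auto - f["manual"]))
    return sum(errors) / len(errors)

# grid search over candidate thresholds from 0.30 to 0.95
best = min((t / 100 for t in range(30, 96, 5)), key=mean_abs_error)
print(best)  # 0.5
```

in practice the calibration would run over many video frames from the station, but the structure, counting against a manual ground truth and minimizing the disagreement, is the same.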
with the parameter values and calibrated threshold, yolo produces estimates of the number of passengers on the platform as a time series. the number of passengers that remain on the platform after the doors close is a raw automated passenger count, as shown in section 6.2. these raw counts are not very accurate as a direct measure (section 6.3), but they provide a useful input for modeling the number of left behind passengers. a logistic regression is used to predict the probability that a passenger is left behind on the station platform based on automated dwell time estimates and/or automated passenger counts from video. the model parameters are estimated using the manually observed counts of passengers left behind on the station platforms as the observed outcome. in this study, data collected on november 15, 2017, were used for model estimation. the diagnostics, parameters, and fit statistics are presented for three models in section 7.1. the quality of the proposed models is evaluated through validation against manually collected counts on a different day. in this study, the estimated models are used to predict the number of left behind passengers using automated dwell time estimates and automated passenger counts on january 31, 2018. the accuracy of the model predictions is then calculated relative to manually observed passenger counts on the same day, as shown in section 7.2. implementation of the model to make ongoing estimates of the numbers of passengers left behind each departing train requires only train tracking data and surveillance video feeds as model inputs. the manual observations of door opening/closing times and the number of passengers on the platforms are used only for estimating model parameters. the models then produce predictions of the number of passengers left behind each departing train based only on data that is automatically collected.
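the logistic-regression step can be illustrated with the model's functional form. the coefficients below are invented placeholders; the paper estimates the actual coefficients by fitting to the manually observed counts.

```python
import math

# illustrative sketch of a logistic model for the probability that a
# waiting passenger is left behind, as a function of the train's dwell
# time and the video-based platform count. coefficients b0, b1, b2 are
# made-up values for demonstration, not the estimated parameters.

def p_left_behind(dwell_s, platform_count, b0=-4.0, b1=0.02, b2=0.05):
    """logistic function of dwell time (s) and platform passenger count."""
    z = b0 + b1 * dwell_s + b2 * platform_count
    return 1.0 / (1.0 + math.exp(-z))

# short dwell, sparse platform -> low probability
print(round(p_left_behind(30, 20), 3))
# long dwell, crowded platform -> high probability
print(round(p_left_behind(90, 80), 3))
```

multiplying the predicted probability by the number of passengers waiting on the platform then gives an estimate of the number left behind by a departing train.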
therefore, the numbers of left behind passengers and the associated impact on the distribution of wait times experienced by passengers can be tracked as a performance measure over time. if data feeds were processed as they are recorded, it would also be possible to implement the models to make real-time predictions of the left behind passengers. to test the implementation of object detection with video in transit stations, a first step is to identify locations and times to collect video feeds as well as direct manual observations of left-behind passengers. for this study, stations were selected based on a crowding analysis and evaluation of station geometry and camera view characteristics. the goal was to identify stations with the greatest likelihood of passengers being left behind during a typical morning or afternoon rush and where object detection techniques would be most successful. the analysis focused on the orange line, which is 11 miles long with 20 stations. oak grove and forest hills are the northern and southern end stations, respectively. there are two main reasons for choosing this specific line. first and most important, it has no branch lines, so all travelers can reach their destination by boarding the next available train. this simplifies the identification of left-behind passengers. second, it passes through several transfer stations in the center of boston, which highlights its significance for passengers' daily commuting. a crowding analysis is a necessary step to identify the times and stations where crowding is observed and left behinds have the highest probability of occurring. the data used in this part of the analysis have been extracted from the rail flow database in the mbta research and analytics platform. the rail flow dataset includes aggregated boarding and alighting counts by time of day with 15-min temporal resolution averaged across all days in a calendar quarter. an example is given in fig. 2 for 5:15-5:30 pm in winter 2017. 
these data are derived from the origin-destination-transfer (odx) model, which makes use of afc and avl systems to infer the flow of passengers within the subway (sánchez-martínez, 2017). the odx model identifies records from afc that can be linked in order to infer transfers or return trip patterns. for example, a passenger using a charlie card (mbta's farecard) to enter a rail station and later board a bus near a different rail station can be assumed to have used the rail system and then transferred to the bus. another passenger who enters one rail station in the morning and enters a different rail station in the afternoon may be completing a round-trip commute, so the destinations of the morning and afternoon trips can be inferred by linking the two trips. some trip origins and/or destinations cannot be inferred, for example if the fare is paid with cash or the trip has only one farecard transaction. for more details about the odx model, the reader is referred to sánchez-martínez (2017), where the model's application inferred the origins of 98% and the destinations of 73% of the total number of fare transactions. for the crowding analysis in this paper, cumulative counts of passengers boarding and alighting at each station have been created along the direction of train travel using the aggregated railflow data. for a 15-min time period, b(n, t) is the cumulative count of all passengers that board trains in the direction of interest at stations preceding and including station n during time interval t. similarly, a(n, t) is the cumulative count of passengers that are assumed to have exited trains traveling in the direction of interest at stations preceding and including station n during time interval t. it should always be true that a(n, t) ≤ b(n, t), because passengers can only alight a train after boarding it.
the difference between the cumulative boardings, b(n, t), and alightings, a(n, t), is the estimated passenger flow, q(n, t) = b(n, t) - a(n, t) (1), between station n and n+1 during each 15-min time period. this calculation is approximate, because cumulative counts are calculated for a single 15-min time period, and real trains take more than 15 min to traverse the length of a line. to calculate the number of passengers per train, the passenger flow per time period must be converted to passenger occupancy, o(n, t) (passengers/train), which is calculated by multiplying the passenger flow by the scheduled headway of trains, h(t) (minutes), at time t: o(n, t) = q(n, t) × h(t) / 15 (2). the headway is divided by 15 min to account for the fact that the passenger flow is per 15-min time period. this measure is an approximation of the number of passengers onboard each train that is based on the assumptions that headways are uniform and passengers are always able to board the next arriving train. in reality, variations in headways may lead to increased crowding after longer headways, increasing the likelihood that some passengers will be left behind. the 2017 mbta service delivery policy (sdp) (mbta, 2017) provides guidelines for reliability and vehicle loads. in the 2010 mbta sdp (mbta, 2010), the maximum vehicle load was explicitly defined as 225% of seating capacity in the peak hours (start of service to 9:00 am; 1:30 pm - 6:30 pm) and 140% of the seating capacity in other hours. the 2017 sdp notes that accurately monitoring the passenger occupancy of heavy rail transit is not yet feasible on the mbta system. nevertheless, the guidelines from table b2 in the 2017 sdp are used to identify general crowding levels, recognizing that each orange line train is six cars long and has a total of 348 seats. a visualization of average train occupancy for the winter 2017 rail flow data is shown in the color plot in fig. 3a.
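a minimal sketch of the two calculations: cumulative boardings and alightings give the flow q(n, t) = b(n, t) - a(n, t), and scaling by h(t)/15 converts the 15-min flow into passengers per train. the station counts and headway below are invented for illustration.

```python
def occupancy(boardings, alightings, headway_min):
    """return o(n, t) = q(n, t) * h(t)/15 for each station n in one
    15-min interval, where q(n, t) is cumulative boardings minus
    cumulative alightings through station n."""
    out = []
    b_cum = a_cum = 0
    for b, a in zip(boardings, alightings):
        b_cum += b   # b(n, t): cumulative boardings through station n
        a_cum += a   # a(n, t): cumulative alightings through station n
        out.append((b_cum - a_cum) * headway_min / 15.0)
    return out

# four stations, one 15-min interval, 6-minute scheduled headway
print(occupancy([120, 300, 450, 80], [0, 40, 150, 260], 6.0))
# [48.0, 152.0, 272.0, 200.0]
```

comparing these values to the 348 seats on an orange line train flags the station-interval pairs with large numbers of standing passengers, which is how the crowding color plots are read.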
the color for each station and 15-min time interval corresponds to the value of o(n, t). since the trains have 348 seats, red parts of the plot indicate large numbers of standing passengers, with dark red indicating crowding near vehicle capacity. this figure shows that in the northbound direction, the most severe crowding occurs between downtown crossing and north station shortly before 6:00 pm. note that the crowding appears to decrease before rebounding again at 6:30 pm. this is due to the change in scheduled headway at 6:30 pm from 6 min to 10 min, which increases occupancy, as calculated in eq. (2). a more detailed visualization combines transit vehicle location records and inferred origin-destination trip flows from a specific date. as mentioned already, the odx trip flows are constructed with simplifying assumptions about passenger movements; for example, all passengers entering a station are assumed to board the first arriving train. despite such assumptions, however, the model is valuable for many applications. the trajectories in fig. 3b are associated with the recorded arrival and departure times of trains at each station. the colors are associated with the estimated train occupancy based on the inferred boardings and alightings, assuming that no passengers are left behind. the trajectory plot shows that the headways between trains can vary substantially, especially for the stations north of downtown crossing. longer headways are followed by more crowded trains, because more passengers have arrived to board since the previous train. the occurrence of left-behind passengers would make actual train occupancies slightly lower for the trains following long headways. those left-behind passengers would then be waiting to board the next train, thereby increasing the occupancy on one or more subsequent trains.
tracking the average number of passengers onboard trains provides an indicator for the likelihood of passengers being left behind, because full trains leave little room for additional passengers to board. during the most crowded times of the day, it is also useful to look at the numbers of passengers boarding and alighting trains at each station. passengers are most likely to be left behind at stations where trains arrive with high occupancy, few passengers alight, and many more passengers wait to board. by this measure, north station in the afternoon peak appears to be an ideal candidate for observing left behind passengers. using the same method for the southbound direction, sullivan square station was identified as an ideal candidate location for data collection in the morning peak. other candidate stations include back bay, chinatown and wellington stations. in addition to identifying stations with the greatest likelihood of passengers getting left behind crowded trains, the stations that are selected for detailed analysis should also have characteristics that are amenable to successful testing of video surveillance counting methods. there are a variety of station layouts and architectures that contribute complicating factors to the analysis of left behind passengers, and the goal of this study is to identify the potential for the adopted detection method under the best possible conditions. ideal conditions for the proposed analysis are:

• dedicated platform for line and direction of interest - in this case, all passengers on a platform are waiting for the same train, so any passenger that does not board can be counted as being left behind. in the case of an island platform, observed passengers may be waiting for trains arriving on either track. in the mbta system, more than half of the station platforms for heavy rail rapid transit in the city center (the most crowded part of the system) meet this criterion.
1 • high quality camera views -surveillance cameras vary in age, quality, and placement throughout the mbta system. newer cameras have higher definition video feeds. the quality of the view is also affected by lighting conditions, especially at aboveground station where sunlight and shadows can affect the clarity of the images. • platform coverage of camera views -the surveillance systems are designed to provide views of the entire platform area for security purposes. in some stations, the locations of columns obfuscate the views, requiring more cameras to provide this coverage. surveillance camera views were considered from five stations on the orange line (back bay, chinatown, north station, sullivan square, and wellington) that were identified through crowding analysis as candidate stations. ultimately, north station was selected as the study site for the northbound direction afternoon peak period because the station exhibits consistent crowding and the geometry provided good camera views. samples of the camera views from this station are shown in fig. 4 . manual observations on the platform needed to be collected to establish a ground truth against which to compare alternative methods for measuring and estimating the number of passengers left behind crowded trains. detailed data collection at north station was conducted during afternoon peak hours (3:30-6:30 pm) on midweek days during non-holiday weeks (wednesday, november 15, 2017, and wednesday, january 31, 2018) . three observers worked simultaneously on the station platform to record observations. although train-tracking records (ttr) report the times that each train enters the track circuit associated with a station, there is no automated record of the precise times that doors open and close. 
since passengers can only board and alight trains while the doors are open, recording these times manually is important for identifying when passengers board trains, when they are left behind, the precise dwell time in the station, and the precise headway between trains. each of the three observers recorded the times of doors opening and closing. the average of these observations is considered the true value. a simple linear regression model shows that observed dwell times (time from doors opening to doors closing) can be accurately estimated from automatic records of ttr arrival and departure times associated with each station. fig. 5 shows the data and regression results combining manual counts for november 15, 2017 and january 31, 2018. there is no systematic difference between records from different days, and the r² is greater than 0.9, indicating a good fit. (footnote 1: all stations from tufts medical center through haymarket and the northbound platform at north station on the orange line (11 platforms), three out of four blue line stations in downtown boston (5 platforms), and all northbound platforms for the red line from south station to porter (8 platforms) meet this criterion.) each observer counted the number of passengers left behind on the station platforms after the train doors closed. in order to avoid double-counting, each observer was responsible for observing passengers in a two-car segment of the six-car train (front, middle, and back). some judgement was necessary in determining which passengers to count, because some passengers linger on the platform after alighting from the train and some choose to wait for a later train even when there is clearly space available to board. the goal of the left-behind passenger count is to measure the number of passengers that are left behind due to crowding within ± 2 passengers of the true number.
in addition to counting the number of passengers left behind by crowded trains, it is important for model calibration to get an accurate count of the number of passengers waiting to board each arriving train. given the large number of commuters using the heavy rail system during commuting hours, it is not possible to accurately count this total number of passengers in person. surveillance video feeds of escalators, stairs, and elevators used to access the platform of interest were used to manually count the number of passengers entering and exiting the platform offline. specifically, an open-source software tool was used to track passenger movements by logging keystrokes to the video timestamp during playback (campbell, 2012) . counts were conducted by watching the surveillance video playback of each entry and exit point from the platform and logging the entry and exit of each individual passenger. the resulting data log records the time (to the nearest second) that each passenger entered and exited the platform. since the platforms of interest serve only one train line in one direction, all entering passengers are assumed to wait to board the next train, and all exiting passengers are assumed to have alighted the previous train. combining these counts with the direct observations of the number of passengers left behind each time the doors close provides an accurate estimate of the number of passengers that were successfully able to board each train. fig. 6 illustrates the cumulative numbers of passengers entering the platform (blue curve) and boarding the trains (orange curve). the steps in the orange curve correspond to the times that the train doors close. if passengers are assumed to arrive onto the platform and board trains in first-in-first-out (fifo) order, the red arrow represents the waiting time that is experienced by the respective passenger, which is estimated as the difference between the arrival and the boarding time. 
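under the fifo assumption described above, each passenger's waiting time is the horizontal gap between the cumulative entry curve and the cumulative boarding curve. a minimal sketch of that bookkeeping follows; the arrival times, door-closing times, and per-train boarding counts are invented for illustration:

```python
def fifo_wait_times(arrivals, door_close_times, boarded_per_train):
    """FIFO waiting time for each passenger.

    arrivals: sorted platform-entry times (seconds).
    door_close_times: time each train's doors close.
    boarded_per_train: how many waiting passengers board each train
    (entries so far minus those observed to be left behind).
    Returns waiting times in arrival order.
    """
    waits = []
    i = 0  # index of the next passenger to board (FIFO order)
    for close, n in zip(door_close_times, boarded_per_train):
        for _ in range(n):
            waits.append(close - arrivals[i])
            i += 1
    return waits

# illustrative: 5 entries, two trains; 2 board the first, 3 the second
waits = fifo_wait_times([0, 30, 60, 200, 250], [100, 300], [2, 3])
print(waits)
```

a passenger left behind by one train simply stays at the front of the queue and receives the next train's door-closing time, which lengthens the corresponding horizontal gap in fig. 6.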
a timeseries of the actual number of passengers waiting on the platform is constructed by counting the cumulative arrivals of passengers to the platform over time and assuming that all passengers board departing trains except those that are observed to be left behind. this ground truth for data collected on november 15, 2017, is shown in blue in fig. 7 . the sawtooth pattern shows the growing number of passengers on the platform as time elapses from the previous train. the drops correspond to the times when doors close. at these times, the platform count usually drops to zero. when passengers are left behind, the timeseries drops to the number of left-behind passengers. one such case is illustrated with the red arrow just before 17:30 in fig. 7.
fig. 4. selected camera views from north station, orange line, northbound direction.
6. automated detection of passengers on platforms in video feeds
the yolo algorithm uses pattern recognition to identify objects in an image. the coco training dataset was used to define the object detection parameters in yolo, as described in section 2. a threshold for certainty can also be calibrated to adjust the number of identified objects in a specific frame. if the threshold is set too high, the algorithm will fail to recognize some objects that do not adequately match the training dataset. if the threshold is set too low, the algorithm will falsely identify objects that are not really present. in order to identify the optimal threshold, frames from 14 camera views were analyzed. each frame was analyzed separately for threshold values ranging from 6% to 25% to determine the optimal threshold value in relation to a manual count of passengers visible in the frame. the optimal threshold across all camera views is 7%, which minimizes the mean squared error between yolo and manual counts as shown in table 1 . fig. 8 shows the identified objects at each threshold level for the same frame from a camera installed in north station. the input for yolo is a set of frames, each of which is analyzed independently to detect objects. the algorithm runs quickly enough to analyze each frame in less than one second, so the surveillance video feeds are sampled at one frame per second to allow yolo to run faster than real time. although the analysis for this paper was conducted off line, it would be possible to implement the algorithm in real time. the output from yolo is a text file that lists the objects detected for each frame and the bounding box for each object within the image. a time series count of passengers on the platform is simply the number of "person" objects identified in the corresponding frames from each sampled video feed. fig. 9a shows the raw passenger counts on the platform at north station for the time period from 5:00 pm to 6:30 pm on november 15, 2017. although there are noisy fluctuations, there is a clear pattern of increasing passenger counts until door opening times (green). a surge of passenger counts while doors are open (between green and red) represents the passengers alighting the train and exiting the platform. passenger counts drop off dramatically following the door closing time (red), except in cases where passengers are left behind. for example, the third train in fig. 9a arrives after a long headway and shows roughly nine passengers left behind. to facilitate analysis of the automatic passenger counts from the surveillance videos, it is useful to work with a smoothed time series of passenger counts. using a smoothing window of ± 10 seconds, the smoothed series is shown in fig. 9b . this smoothed time series is more suitable for a local search to identify the minimum passenger count following each door closing time.
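the ± 10 s smoothing and the local-minimum search after each door closing can be sketched as below. the half-window of 10 samples matches the text (one frame per second); the search-window length and the sample counts are assumptions for illustration:

```python
def smooth(counts, half_window=10):
    """Moving average over +/- half_window samples (1 frame per second)."""
    n = len(counts)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        out.append(sum(counts[lo:hi]) / (hi - lo))
    return out

def left_behind_estimate(smoothed, door_close_idx, search_window=30):
    """Minimum smoothed platform count shortly after the doors close.

    search_window (seconds after door closing) is an assumed value,
    not one stated in the paper.
    """
    lo = door_close_idx
    hi = min(len(smoothed), door_close_idx + search_window)
    return min(smoothed[lo:hi])
```

the minimum after the doors close is the estimate of passengers remaining on the platform, i.e. the raw video-based left-behind count for that departure.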
this represents the count of left-behind passengers identified through the automated object detection process. the smoothed video counts from the three surveillance camera feeds used to monitor the northbound orange line platform at north station are shown as the green curve in fig. 10 . the automated passenger counting algorithm clearly undercounts the total number of passengers on the platform. the reason for this large discrepancy is that the algorithm can only identify people in the foreground of the images, where each person appears large in the frame. therefore, the available camera views do not actually provide complete coverage of the platform for automated counting purposes. furthermore, when conditions get very crowded, it becomes more difficult to identify separate bodies within the large mass of people. the problem of undercounting aside, it is clear that the automated counts generate a pattern that is representative of the total number of passengers on the platform. using regression, the smoothed timeseries can be linearly transformed into a scaled timeseries (the orange curve in fig. 10) , which minimizes the squared error compared with the manually counted timeseries. using this scaling method, the data from november 15, 2017, were used to compare estimated counts of left-behind passengers in the peak periods with the directly observed values. this provides a measure of the accuracy of automated video counts. the total number of left-behind passengers estimated by this method is presented in table 2 , where the root mean squared error (rmse) is calculated by comparing the number of passengers left behind each time the train doors close. the scaling process, which makes the blue and orange curves in fig. 10 match as closely as possible, substantially overcounts left-behind passengers, because the scaling factor tends to inflate the counts when there are few passengers on the platform.
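the linear rescaling of the smoothed video counts against the manual ground truth is an ordinary least-squares fit of a slope and intercept. a stdlib-only sketch, with toy data in place of the actual count series:

```python
def fit_scale(video, manual):
    """Least-squares a, b so that a * video + b approximates manual."""
    n = len(video)
    mx = sum(video) / n
    my = sum(manual) / n
    sxx = sum((x - mx) ** 2 for x in video)
    sxy = sum((x - mx) * (y - my) for x, y in zip(video, manual))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# toy data: manual counts are exactly 3 * video + 2
a, b = fit_scale([1, 2, 3, 4], [5, 8, 11, 14])
```

the constant intercept b is what inflates scaled counts when few passengers are present, which is consistent with the overcounting behavior noted above.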
as a direct measurement method, automated video counting is not satisfactory, at least as implemented with yolo. however, fig. 10 shows a clear relationship between the video counts and passengers being left behind on station platforms, so there is potential to use the video feed as an explanatory variable in a model to estimate the likelihood of passengers being unable to board a train. in order to improve the accuracy of estimates of the number of passengers left behind on subway platforms, a logistic regression model is formulated to estimate the probability that each passenger is left behind based on explanatory variables that can be collected automatically. a logistic regression is used to estimate the number of passengers left behind by way of estimating the probability that each waiting passenger is left behind, because the logistic function has properties that are more amenable to this application. since passengers are only left behind when platforms and trains are very crowded, a linear regression has a tendency to provide many negative estimates of left-behind passengers, which are physically impossible. the binary logit model, by contrast, is intended for estimating the probability that one of two possible outcomes is realized (e.g., a passenger is either left behind or not left behind). the estimated probability from a logit model is always between 0 and 1, so the resulting estimate of the number of left-behind passengers is always non-negative and cannot exceed the total number of waiting passengers. for estimation of the logistic regression, each passenger is represented as a separate observation, and all passengers waiting for the same departing train are associated with the same set of explanatory variables. over the course of a 3-h rush period, there are typically about 30 trains serving north station, serving 1,500 to 3,000 passengers per period, and leaving behind well over 100 passengers. logistic regression models are generally expected to give stable estimates when the data set for fitting includes at least 10 observations for each outcome, so there is sufficient data to estimate parameters for a model that is structured this way. the logistic function defines the probability that a passenger is left behind by

p = 1 / (1 + exp(−(β₀ + β·x)))

where x is a vector of explanatory variables, β is a vector of estimated coefficients for the explanatory variables, and β₀ is an estimated alternative-specific constant. the estimation of the model can be thought of as identifying the values of β₀ and β that best fit the observed outcomes: y = 1 corresponds to a passenger being left behind, and y = 0 corresponds to a passenger successfully boarding. the underlying assumption in this formulation is that the likelihood of being left behind can be expressed in terms of a linear combination of explanatory variables and a random error term, ε, which is logistically distributed. the explanatory variables that are considered in this study are as follows:
1. dwell time (time from door opening to door closing) or difference of ttr arrival and departure times
2. video count of passengers on platform following doors closing
these explanatory variables can all be monitored automatically, without manual observations. video counts of passengers on the platform following doors closing are obtained from the object detection process described above. although dwell time is an appropriate explanatory variable because doors stay open longer when trains are crowded, the dwell time is not directly reported in archived databases. as demonstrated in fig. 5 , observed dwell times can be accurately estimated from automatic records of ttr arrival and departure times. this leads to using ttr reported values of the difference between train arrival and departure instead of dwell times for the model development.
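the logistic probability above translates directly into code. the coefficient values below are placeholders to make the sketch runnable; they are not the fitted estimates reported in table 3:

```python
import math

def p_left_behind(x, beta, beta0):
    """Logistic probability that a waiting passenger is left behind.

    x: explanatory variables for the train (e.g. dwell time, video count)
    beta: coefficient vector, beta0: alternative-specific constant.
    """
    z = beta0 + sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

# placeholder coefficients and inputs, not the paper's estimates
p = p_left_behind([40.0, 12.0], beta=[0.05, 0.1], beta0=-6.0)
```

every passenger waiting for the same departing train shares the same x, so the expected number left behind for that train is this probability times the number of waiting passengers.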
since these are essentially the same explanatory variable, we call this difference "dwell time" for the remainder of the paper. initially, three models were estimated, making use of only ttr data (model 1), only video counts (model 2), and then fused ttr and video counts (model 3). the data from november 15, 2017, were used to develop these models. the number of passengers waiting on the platform (as described in section 5.3) is used to determine the number of observations for estimating the parameters of the logit model. in total, 2167 passengers boarded arriving trains at north station during the rush period and 198 of them were left behind. this leads to a sample size of 2365 passengers for the logistic models. models 1 and 2 are simple logistic regressions, each with only one independent variable. neither model has influential values (i.e., observations that would substantially change the fitted model if removed). model 3 uses both ttr data and video counts, so it is important to diagnose the model's fit, especially with respect to the assumptions of the logistic regression. first, multicollinearity of explanatory variables should be low. the correlation between dwell time and video count is 0.643 and the variance inflation factor is 1.7, both indicating that the magnitude of multicollinearity is not too high. second, no influential values were identified. third, the logistic regression is based on the assumption that there is a linear relationship between each explanatory variable and the logit of the response, log(p/(1 − p)), where p represents the probability of the response. fig. 11 shows that dwell time is approximately linear with the logit response, while there is somewhat more variability with respect to the video counts. neither plot suggests that there is a systematic mis-specification of the model. a summary of the estimated model coefficients and fit statistics is presented in table 3 .
the log likelihood is a measure of how well the estimated probability of a passenger being left behind matches the observations. the null log likelihood is associated with no model at all (every passenger is assigned a 50% chance of being left behind), and values closer to zero indicate a better fit. the ρ² value is a related measure of model fit, with values closer to 1 indicating a better model. for all three models, the estimated coefficients have the expected signs and magnitudes. the positive coefficients for dwell time and video counts indicate a positive relationship with the probability of having left-behind passengers, which is intuitive. in order to compare models, the likelihood ratio statistic is used to determine whether the improvement of one model is statistically significant compared to another. the likelihood ratio test statistic is calculated by comparing the log likelihood of the restricted model (with fewer explanatory variables) to the unrestricted model (with more explanatory variables):

d = −2 (ll_restricted − ll_unrestricted)

comparing model 1 (restricted) to model 3 (unrestricted), the one additional variable in model 3 indicates one degree of freedom, which requires d > 3.84 to reject the null hypothesis at the 0.05 significance level. comparison between models 1 and 3 gives d = 75.02, indicating that model 3 provides a significant improvement over model 1 by adding video counts. comparison between models 1 and 2 gives d = 37.7, which is also a significant improvement. the akaike information criterion (aic) is an additional model fit statistic that weighs the log likelihood against the complexity of the model. although model 3 has more parameters, its aic is lower than for model 1 or model 2, indicating that the improved log likelihood justifies the inclusion of both ttr and video count data.
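the likelihood-ratio comparison can be sketched as follows; the log-likelihood values in the example are illustrative, not the fitted values from table 3:

```python
def likelihood_ratio(ll_restricted, ll_unrestricted):
    """D = -2 * (LL_restricted - LL_unrestricted).

    Compare D against the chi-squared critical value for the number of
    extra parameters (3.84 for 1 degree of freedom at alpha = 0.05).
    """
    return -2.0 * (ll_restricted - ll_unrestricted)

def improves_fit(ll_restricted, ll_unrestricted, critical=3.84):
    """True if the unrestricted model is a significant improvement."""
    return likelihood_ratio(ll_restricted, ll_unrestricted) > critical

# illustrative log likelihoods (not the paper's values)
d = likelihood_ratio(-540.0, -502.5)
```

because the restricted model can never fit better than the unrestricted one, d is non-negative whenever the models are properly nested.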
the logistic regression provides an estimate of the probability that passengers are left behind each time the train doors close. in order to translate this probability into a passenger count, the estimated number of passengers waiting on the platform from the scaled video count is used as an estimate of the number of passengers waiting to board. table 4 shows the validation results when the models were applied to data collected on january 31, 2018, for north station. the scaling factor used for the number of passengers waiting on the platform is estimated from the november 15, 2017 data. considering the estimated number of left-behind passengers for each train separately, it is observed that these models achieve higher accuracy when there are few passengers left behind. overall, model 1 exhibits an error of only 3.3% since it estimates that 116 passengers are left behind in total when 120 passengers were observed to be left behind. model 3 gives a lower estimate of 100 passengers being left behind, which leads to an error of approximately 17%. as shown in table 2 and table 4 , direct video counts (unscaled and scaled) do not provide accurate estimates of the total numbers of passengers left behind without some additional modeling. the unscaled video counts underestimate the total, while the scaled video counts overestimate the total. the logistic regression provides much better results. although there are some discrepancies for specific train departures, the estimated numbers of passengers left behind are not significantly biased and the total number of passengers left behind during the three-hour rush period is similar to the manually counted total. the logistic regressions estimate the probability of a passenger being left behind using only the explanatory variables listed in table 3 .
however, the estimated number of left-behind passengers is calculated by multiplying the probability by the scaled video count of passengers on the platform at the time the doors opened, as estimated from the ttr data. therefore, the estimated numbers of passengers left behind with model 1 and model 3 rely only on ttr data that are currently being logged, supplemented by automated counts of passengers in existing surveillance video feeds. the models therefore utilize explanatory variables that are monitored automatically, and they can be deployed for continuous tracking of left-behind passengers without needing additional manual counts. the logistic models could actually perform even better if there were a way to obtain a more accurate count of the number of passengers waiting for a train. during the morning peak period, the count of farecards entering outlying stations can provide a good estimate for the number of passengers waiting to board each inbound train. this is more challenging at a transfer station, like north station, in which many passengers are transferring from other lines. in some cases, strategically placed passenger counters could provide useful data. nevertheless, table 5 presents the performance of the developed logistic regression models if their estimated probabilities are multiplied by the actual number of passengers on the platform instead of the estimated number as in table 4 . this reveals the value of more accurate data, because model 3 decreases its error compared to table 4 . model 3 in table 5 estimates 122 passengers being left behind in the afternoon rush on the observed date when the previous estimate was 100, which is a reduction of error from 17% to 2% for this model compared to the 120 observed left-behind passengers.
another way to evaluate the performance of the developed models is to consider whether or not trains that leave behind passengers can be distinguished from trains that allow all passengers to board. through the course of data collection and analysis, the number of passengers being left behind because of overcrowding can only be reliably observed within approximately ± 2 passengers. the reason for this is that sometimes people choose not to board a train for reasons other than crowding, and one or two passengers left on the platform did not appear to be consistent with problematic crowding conditions. if a train is defined to be leaving behind passengers when more than 2 passengers are left behind, the results presented in table 4 can be reinterpreted to evaluate each method by four measures:
1. the number of trains in a time period that leave behind passengers due to overcrowding.
2. correct identification rate: the percent of trains that are correctly classified as leaving behind passengers or not leaving behind passengers, as compared to the manual count. this value should be as close to 1 as possible.
3. detection rate: the percent of departing trains that were manually observed to leave behind passengers that are also flagged as such by the estimation method. this value should be as close to 1 as possible.
4. false detection rate: the percent of departing trains that are estimated to leave behind passengers but have not, according to manual observations. this value should be as close to 0 as possible.
there is an important distinction to make here, because there are two ways that the model to identify trains leaving behind passengers can be used: (a) to estimate the number of trains that leave behind passengers, in which case we only care about measure 1; or (b) to identify which specific trains are leaving behind passengers, in which case measures 2 through 4 are important. depending on how the data will be used, application (a) or (b) may be more relevant.
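using the more-than-2-passengers threshold, the four measures can be computed from per-train observed and estimated left-behind counts. a sketch with toy counts (the threshold comes from the text; everything else is illustrative):

```python
def train_level_measures(observed, estimated, threshold=2):
    """Per-train evaluation of left-behind detection.

    observed/estimated: left-behind counts for each departing train.
    A train "leaves behind passengers" if its count exceeds threshold.
    Returns (flagged trains, correct id rate, detection rate,
    false detection rate).
    """
    obs = [o > threshold for o in observed]
    est = [e > threshold for e in estimated]
    n = len(obs)
    n_flagged = sum(est)                                   # measure 1
    correct = sum(o == e for o, e in zip(obs, est)) / n    # measure 2
    pos = sum(obs)
    detect = (sum(o and e for o, e in zip(obs, est)) / pos
              if pos else 1.0)                             # measure 3
    neg = n - pos
    false_det = (sum((not o) and e for o, e in zip(obs, est)) / neg
                 if neg else 0.0)                          # measure 4
    return n_flagged, correct, detect, false_det
```

application (a) only needs the first returned value, while application (b), the per-train identification, is judged by the remaining three.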
for example, application (a) provides an aggregate measure of the number of trains leaving behind passengers. application (b), on the other hand, is what would be needed to move toward a real-time system for identifying (even predicting) left-behind passengers. a comparison of the four measures is presented in table 6 for the 30 trains that departed north station between 3:30 pm and 6:30 pm on january 31, 2018. unscaled video counts provide a good estimate of the number of trains that leave behind passengers (measure 1), but suffer from a low detection rate and high false detection rate. scaled video counts are poor estimators for the occurrence of left-behind passengers because they are high enough to trigger too many false detections. the modeled estimates both perform well in approaching the actual number of trains leaving behind passengers. model 3 has the best performance for measures 2 through 4. it never falsely identifies a train as leaving behind passengers, and it correctly detects most occurrences of passengers being left behind. like the count estimates above, both model 1 and model 3 rely on the scaled video counts to estimate the number of passengers waiting on the platform when the train doors open, so a fusion of ttr records and automated video counts provides the most reliable measures. another application of the model is to consider the distribution of waiting times implied by the estimated probabilities that passengers are left behind by each departing train. from the direct manual counts, a cumulative count of passengers arriving onto the platform and of passengers boarding trains provides a timeseries count of the number of passengers on the platform. if passengers are assumed to board trains in the same order that they enter the platform, the system follows a first-in-first-out (fifo) queue discipline.
although it is certainly not true that passengers follow fifo order in all cases, this assumption allows the cumulative count curves to be converted into estimated waiting times for each individual passenger. the fifo assumption yields the minimum possible waiting time that each passenger could experience, and the waiting time for each passenger can be represented graphically by the horizontal distance between the cumulative number of passengers entering the platform and boarding trains (see fig. 6 for data from november 15, 2017). the yellow curve in fig. 12a represents the cumulative distribution of waiting times that are implied by the observed numbers of passengers entering the platform if all passengers on the platform are assumed to be able to board the next departing train. we call this the expected waiting time. the blue curve in fig. 12a is the cumulative distribution of waiting times if the number of left-behind passengers are accounted for when trains are too crowded to board. we call this the observed waiting time, because it reflects direct observation of passengers waiting on the platform using manual counts. the distribution indicates the percentage of passengers that wait less than the published headway for a train departure, which is the reliability metric used by the mbta. for the orange line during peak hours, the published headway is 6 min (360 s). currently, the mbta is only able to track the expected wait time as a performance metric. the difference between the yellow and blue curves indicates that failing to account for left-behind passengers leads to overestimation of the reliability of the system. the models developed in this study provide the estimated probability that a passenger is left behind each time the train doors close. 
in the absence of additional passenger count data, a constant arrival rate is assumed over the course of the rush period; the door closing times from ttr and the probabilities of passengers being left behind from model 3 can then be used to estimate the cumulative passenger boardings onto trains over time. under the same fifo assumptions described above, the distribution of experienced waiting times can be estimated based on train-tracking and video counts. the cumulative distribution of waiting times estimated by this process, using probabilities from model 3, is shown as the red curve in fig. 12b , which we call the uniform arrivals modeled wait time. table 7 includes the values of experienced waiting times for the observed, the expected, and the modeled distributions. this table also shows how the accuracy of estimating waiting times can be improved if we consider the actual arrival rate under the same assumptions used to develop the uniform arrivals modeled wait time. we call this distribution the actual arrivals modeled wait time. the earth mover's distance (emd) is used to measure the difference between the observed distribution and the expected, uniform arrivals and actual arrivals modeled distributions (rubner et al., 2000). as shown in table 7 , the emd for the expected case is much higher than the emd for the modeled cases, which indicates that the proposed model reduces errors. the modeled distributions of waiting times closely approximate the observed distribution. this suggests that the estimated probabilities of passengers being left behind each departing train are consistent with the overall passenger experience. the percentage of passengers experiencing waiting times lower than or equal to the 6 min published headway is 79% for both the observed and uniform arrivals model curves, and 77% for the actual arrivals model curve.
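for one-dimensional waiting-time distributions on a common set of equally spaced bins, the earth mover's distance reduces to the accumulated gap between the two cumulative distributions (rubner et al., 2000, treat the general case). a minimal sketch under that equal-bin assumption, with invented histograms:

```python
def emd_1d(weights_a, weights_b):
    """Earth mover's distance between two histograms on the same
    equally spaced bins (1-D special case: sum of |CDF differences|,
    in units of bin widths)."""
    assert len(weights_a) == len(weights_b)
    ta, tb = sum(weights_a), sum(weights_b)
    cdf_a = cdf_b = 0.0
    dist = 0.0
    for a, b in zip(weights_a, weights_b):
        cdf_a += a / ta          # normalize so both distributions sum to 1
        cdf_b += b / tb
        dist += abs(cdf_a - cdf_b)
    return dist
```

identical distributions give a distance of zero, and shifting all the mass by one bin gives a distance of one bin width, so the metric directly reflects how far waiting-time probability mass must move.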
the automated count of left behind passengers provides a close approximation of the actual service reliability when applied to the independent data collected on january 31, 2018. the expected distribution, which does not account for left-behind passengers produces an estimate of 81% of passengers waiting less than 6 min. the expected distribution overestimates the reliability of the system by failing to account for the waiting time that left-behind passengers experience. this paper presents a method for measuring passengers that are left behind overcrowded trains in transit stations without records of exiting passengers. a study performed by miller et al. (2018) also addresses this challenging case using manual video counts to calibrate the developed models. the methodology proposed in this paper uses archived data with automatic video counts as inputs to estimate the total number of left behind passengers during peak demand periods. the automatic video counts are obtained through the implementation of image processing tools. this paper presents an investigation of the effects of accounting for left behind passengers on the estimation of the current reliability metric used by the mbta, the experienced waiting times. following a preliminary study of crowding conditions on the mbta's orange line, data collection and analysis focused specifically on northbound trains at north station during the afternoon peak hours. data was collected on two typical weekdays and confirmed that overcrowding is a common problem, even on days without disruptions to service. this is an indication that the system is operating very near capacity, and even small fluctuations in headways lead to overcrowded trains that result in left-behind passengers. this study specifically investigated the potential for measuring the number of left-behind passengers using existing data sources c. sipetas, et al. 
and automated passenger counts derived from existing surveillance video feeds. the analysis of automated passenger counts was based on the implementation of a fast, open-source algorithm called you only look once (yolo), using existing training sets that identify people as well as other objects. the performance is fast enough that frames from surveillance video feeds could potentially be analyzed in real time. although video counts were not accurate in isolation, the development of models that combine automated video counts with automated train-tracking records (model 3) demonstrated good results for different applications. in predicting the number of trains leaving behind passengers, the developed models correctly identify whether or not passengers were left behind for 93% of the trains. the number of passengers left behind during the afternoon rush period can be estimated to within 17% of the actual number using only automated video counts and automatically collected train-tracking records; with actual counts of the number of passengers on the station platform at each train arrival, the model can predict the number of left-behind passengers to within 2% of the actual number. furthermore, the modeled distribution of experienced waiting times reduced the total emd error by more than 50% compared to the error of the operator's expected distribution, in which left-behind passengers are not considered. this highlights the need to account for left-behind passengers when tracking the system's reliability metrics. there are a number of ways that this study could be extended. one approach would be to implement and evaluate the developed models over more days. in terms of passenger flow data, the odx model has some known drawbacks given existing limitations, such as the lack of tap-out farecard data or passenger counters on trains.
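the automated platform counts rest on filtering a detector's per-frame output down to the 'person' class above a confidence threshold. a minimal sketch of that step; the (class, confidence, bounding box) tuple format and the example detections are assumptions for illustration, not the yolo library's actual api:

```python
def count_people(detections, conf_threshold=0.5):
    """count 'person' detections above a confidence threshold.
    detections: list of (class_name, confidence, bbox) tuples, as a
    yolo-style detector might yield for one surveillance frame."""
    return sum(1 for cls, conf, _ in detections
               if cls == "person" and conf >= conf_threshold)

# hypothetical detector output for one platform frame
frame = [("person",   0.91, (10, 20, 50, 120)),
         ("person",   0.42, (60, 22, 95, 118)),   # below threshold: dropped
         ("backpack", 0.77, (12, 60, 30, 90)),    # wrong class: dropped
         ("person",   0.66, (100, 18, 140, 125))]
print(count_people(frame))  # → 2
```

the threshold trades missed riders against false positives; the paper's observation that raw video counts are inaccurate in isolation is one reason such counts are fed into model 3 rather than used directly.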
in systems without these limitations, the developed models could achieve higher accuracy. the methodology presented here could also be combined with the previous study by miller et al. (2018) in order to improve the overall process for estimating left-behind passengers in subway systems without tap-out. comparing the two studies, miller et al. (2018) achieves higher accuracy for very crowded conditions, whereas our method performs better when there are few passengers left behind. the automated object detection presented in our study could also be combined with the model proposed by miller et al. (2018) as part of its real-time implementation in case of special events where real-time afc is not available. in the area of image processing, a number of steps could be taken to improve the accuracy of video counts and extend the feasibility to more challenging station environments. suggested approaches include comparing the algorithm with other fast and accurate video detection algorithms and training the algorithm to detect heads rather than whole bodies. although there are limitations to any single data source, the potential for improving performance metrics through data fusion and modeling continues to grow.
references:
- valuing crowding in public transport: implications for cost-benefit analysis
- simple player
- uncovering the influence of commuters' perception on the reliability ratio
- training mixture of weighted svm for object detection using em algorithm
- people silhouette extraction from people detection bounding boxes in images
- what does classifying more than 10,000 image categories tell us?
- imagenet: a large-scale hierarchical image database
- estimating the cost to passengers of station crowding
- waiting time perceptions at transit stops and stations: effects of basic amenities, gender, and security
- rich feature hierarchies for accurate object detection and semantic segmentation
- the distribution of crowding costs in public transport: new evidence from paris
- crowding in public transport: who cares and why?
- crowding cost estimation with large scale smart card and vehicle location data
- does crowding affect the path choice of metro passengers?
- transit service and quality of service manual
- discomfort externalities and marginal cost transit fares
- image processing and analysis with graphs: theory and practice
- crowding and public transport: a review of willingness to pay evidence and its relevance in project appraisal
- microsoft coco: common objects in context
- urban commuting: crowdedness and catecholamine excretion
- mining smart card data for transit riders' travel patterns
- estimation of denied boarding in urban rail systems: alternative formulations and comparative analysis
- massachusetts bay transportation authority
- estimation of passengers left behind by trains in high-frequency transit service operating near capacity
- smart card data use in public transit: a literature review
- a behavioural comparison of route choice on metro networks: time, transfers, crowding, topology and sociodemographics
- darknet: open source neural networks in c
- you only look once: unified, real-time object detection
- the earth mover's distance as a metric for image retrieval
- inference of public transportation trip destinations by using fare transaction and vehicle location data: dynamic programming approach
- image processing, analysis, and machine vision
- crowding in public transport systems: effects on users, operation and implications for the estimation of demand
- a motion-based image processing system for detecting potentially dangerous situations in underground railway stations
- feature-based recognition of objects
- ensemble methods: foundations and algorithms
- inferring left behind passengers in congested metro systems from automated data
acknowledgements: this study was undertaken as part of the massachusetts department of transportation research program. this program is funded with federal highway administration (fhwa) and state planning and research (spr) funds. through this program, applied research is conducted on topics of importance to the commonwealth of massachusetts transportation agencies.
key: cord-301171-1lpd8dh9
authors: davison, robert m.
title: the transformative potential of disruptions: a viewpoint
date: 2020-05-19
journal: int j inf manage
doi: 10.1016/j.ijinfomgt.2020.102149
sha: doc_id: 301171 cord_uid: 1lpd8dh9
abstract: i engage with the impact of disruptions on my work life, and consider the transformative potential that these disruptions offer. i focus on four parts of my life: as a researcher, teacher, administrator and editor. in each, i examine the nature of the disruption and the way i deal with it. i also consider how the present disruption may facilitate a transformation of current practices that lead to a better world at the individual and institutional levels. rather than lamenting the inconvenience of a crisis, i prefer to celebrate the opportunity to do better.
every so often the more-or-less smooth tenor of our lives is disrupted. we are forced to deal with a new set of challenges and circumstances. the exact situation varies: it may be a local situation or a global one. even when it is global, each of us experiences the disruption in different ways and to different degrees. the underlying characteristics of disruptions vary but include political, economic, environmental, medical and/or social factors. disruptions don't always come singly: distinct global and local disruptions may co-exist, further adding to the complexity of a situation.
i suggest that while these disruptions are undoubtedly inconvenient, not to mention potentially life-threatening, they do offer us an opportunity for transformative change. out of the darkness of disruption we may perceive glimmers of hope: the potential to do things better. in 2008, barack obama's soon-to-be-chief of staff, rahm emanuel observed "you never want a serious crisis to go to waste" 1 . a crisis thus presents an opportunity to further disrupt the status quo and bring about radical changes that might otherwise be inconceivable. in chinese, the two characters (危機) for crisis imply danger (危) and opportunity (機) . as researchers, we have the potential to play a significant role in transforming the opportunity and making the world a better place (davison et al., 2019) . but the operative word is 'potential'! there is never a guarantee that the transformation will actually occur, nor indeed that it will necessarily lead to better things. furthermore, overly hasty reactions are more likely to introduce fresh crises. considerable thought and care needs to go into the design of the transformed processes (or artifacts) before their benefits can be fully reaped. i was asked to write about the impact of covid-19 (c19) on information management research and practice. however, i find that the topic is too narrow. instead, i see the opportunity to consider disruptions more generally, hence the title and the current focus. i first offer some contextual details to help you to make sense of my perspective. i live in hong kong, where we have never (yet) had a lockdown that legally confines most people to their homes most or all of the time, whether for the current situation or any other in living memory. public transport operates, shops are mostly open, social distancing is broadly adhered to, and i can choose to work at home or in the office. my work includes a mix of research, teaching, administration and journal editing. i deal with each of these activities below. 
disruptions undoubtedly exert a negative impact on my work as a researcher. as a qualitative researcher, i need to observe and interview people. i can still do this online, but it is less effective. one major project has been delayed indefinitely because i need to be in shanghai to collect data, and that is impractical at the moment, simply because even if i could travel to shanghai, i would need to be quarantined for two weeks on arrival and a further two weeks on return to hong kong. the inconvenience of the quarantining is too disruptive to bear. if the current situation persists or returns, both the topics that i choose to investigate and the way i do research will need to change. remote data collection will become normal as we adapt. this will not be limited to remote interviews, but must also include remote site visits. we will need to develop new data collection protocols for instance. but it seems to me that the real problem is that we are trying to replicate our physical world online. we used to have synchronous, face to face meetings; now we have synchronous, virtual meetings. we used to collect data in person; now we try to mediate data collection through technology. that's not transformation in my view. it's replication. we need to transform the way we do things. we need to find a better way to meet, to collect data, to do research. simple things like turning off the video can help because this reduces the number of cues that we need to process. it also prevents us from noticing the existence of video-audio lags that are annoying at the best of times, though it may also impoverish the richness of the medium as some of the paralinguistic cues disappear. creating a natural yet virtual space where both researcher and researchee feel comfortable to engage in a meaningful and efficient conversation is challenging. 
whether you are more persuaded by media richness theory (daft and lengel, 1986) or media synchronicity theory (dennis et al., 2008) , each of us has to select a medium that balances these various constraints and thus is oriented more toward replication of face to face interaction or transformation into something different. as a regular invited speaker, visiting professor and general globetrotter, i habitually travel extensively. this aspect of my life has also been severely curtailed. from march to june, 2020, i have had to cancel five work trips to eight countries: china, finland, indonesia, morocco, norway, sweden, switzerland, uk (thrice). the objectives to be attained on these trips included a mix of data collection, student recruitment, research seminars and collaboration, and a phd thesis defence. i have been able to hold some seminars online, but the effect is not the same. it is also hard work: you may have read about 'zoom fatigue' 2 . a research visit, for instance, is much more than a seminar. it also involves many one-to-one conversations, brainstorms, insights, and the exchange of ideas, lubricated with laughter, intellectual spice and good cheer. transforming this kind of work is challenging: there is a grave risk that it will be reduced to the functionality of a seminar but without the rich interactions that take place on the side-lines. i have been invited to a seminar + conversation event in late june, 2020, that will see me 'zoom' into umeå university, in northern sweden. it will be interesting to see if the transformation is effective for the audience and productive for the speakers. i fear that a world without the freedom to travel unhindered may be a much less global world. that's probably less of a concern if you live in a big country with extensive domestic travel opportunities. however, while we will miss the global opportunities, we should examine local opportunities more carefully, and appreciate our local contexts. 
as a teacher, all my classes since mid-january, 2020, have been online using a variety of technologies. while some of my students remain in hong kong, many are elsewhere in china as well as further afield. the vast majority are accessing the internet from home, quite often on slow connections. my guest lecturers are doing the same, and for them it is certainly more convenient than travelling to the university campus. i anticipated that this new environment would be very hard to adapt to, as a teacher, but strangely enough i was wrong. i have a number of principles that help me ensure that students receive as high quality an individual education as is possible in the circumstances. some of these are transformative, but they build on earlier work. a key challenge that i encounter in a normal (face to face) class is the low level of student interaction. perhaps this is cultural, but i find that while a small minority may be willing and able to interact, the vast majority is not. one of the drivers appears to be fear of making mistakes in front of others and thus losing face. a second relates to interacting in a second (or third) language. in an online class, the dynamics change and i find that, with a little effort, i can get 90% of the students to interact without disrupting each other or me. those of you familiar with the research into group support systems (gss) will recall the various benefits associated with this technology, notably the lack of air-time fragmentation and the elimination of dominant individuals (davison and briggs, 2000) . in a gss-facilitated meeting, there is more time for discussion, more even participation, and more interaction and feedback. these meetings might also induce distractions and digressions, and suffer from participants flaming or insulting each other. i have certainly seen the positive effects in my classes. 
with 60+ students, it is essentially impossible for each of them to have significant contribution time in a regular classroom, especially if one or two dominate the air time. with parallel conversations, it becomes a reality and the primary problem is restraining their creativity and drawing a conversation to a close, so that the class can move onwards. in an online class, judicious use of the 'chat' feature enables students to type to each other or to me as they wish or need to. they can raise questions, make suggestions, provide links to external sources, and so can i. in order to maintain student attention, a lesson that i learned early on was that i need both to slow down my speech and to break it up with frequent interludes where students can ask questions and create ideas. not all students think equally quickly: some need more time to reflect on the content and thus interjecting regular hiatuses into my content delivery enhances access to that content immensely. thus, every 15 minutes or so, i will stop and seek to provoke them with an issue that has no easy answer, where a range of perspectives can reasonably be identified, and where students are likely to have an opinion. i give the students 10 minutes to brainstorm in the 'chat' feature of whichever software i am using to run the class. another approach is to use the online breakout rooms where students first discuss an idea in small groups before presenting it to the class as a whole. whichever technique is used, both the instructor and other students can provide additional feedback and commentary. over a three-hour class with 50-60 students, i find that these 'chat' interludes generate around 9,000 words of generally high quality comment. i regard this sharing of ideas as essential to individual learning because in order to write sensibly you have to think.
2 https://www.bbc.com/worklife/article/20200421-why-zoom-video-chats-are-so-exhausting
i make sure that i read all the comments. i type my own reactions to some while others i react to verbally. i save all the typed comments to a doc file and email it to all the students after the class. all the comments are identified (there is no anonymous function) but this seems reasonable as in a real class they would be identified anyway. moreover, individual students receive credit for their ideas, both from me and from each other. is this transformative? well, it is a small-scale transformation in the way i teach because of the way i deliberately fracture my episodes of speaking into smaller fragments and punctuate them with chat interludes that provide the opportunity for students to voice out their thoughts in parallel. although they can contribute to the chat at any time, most wait for the invitation from me. i do believe that this new teaching-learning protocol transforms the learning process, because students are more actively engaged in the process of learning. they know (because i tell them) that they will be rewarded for their participation (in my classes, i typically award 20% of marks for participation), and when they see others typing in the 'chat', then this seems to create a gentle peer pressure for them to emulate their peers. while some students type just a few words and then submit, others take considerably more care and write a few tens or even hundreds of words. their communication style is transformed by the technology that exerts no time pressure to complete a communication by a hard deadline. these longer comments often attract attention from other students who comment on them in turn, setting up a viral pattern of inter-student learning. i intervene as necessary to minimise digressions, correct misunderstandings, offer an opinion and further challenge their imagination. 
where assessments are concerned, it is clearly more difficult for students to undertake group work if they lack the luxury of a face to face environment, but virtual teams have been around for a long time and there is no reason why students cannot work virtually. it is a new skill to learn (hardin et al., 2007; davison et al., 2017) that will be of value in the workplace. a blended learning approach might see a mix of synchronous and asynchronous virtual team work. the synchronous events may be psychologically more comfortable and productive, yet simultaneously less convenient because of the need for everyone to be virtually present. the asynchronous states that persist in between the synchronous events will still see work done, even if the intensity is lower (maznevski and chudoba, 2000) . examinations online are also feasible, though there are fears that students will take each and every opportunity to cheat, whether by employing proxies to answer the questions for them, by outsourcing questions to experts on demand, or by some other ingenious means. my preference is to set an exam question that is really much too long (or difficult) for the time available and which requires analysis but not memory, pushing students to the limit of their capabilities and thus rendering cheating that much more complicated. i thus transform the assessment process and provide students with the opportunity to demonstrate what they have learned. it must be said, however, that evaluating these kinds of assessments is challenging in itself. i invest considerable time and mental effort in carefully reading lengthy answers to questions and then justifying the marks awarded. a marking scheme may help, but answers inevitably deviate from the ideal or model answer, and so flexibility is essential. 
the proof of the effectiveness of the transformation of teaching, learning and assessment will be in the proverbial pudding: student evaluations of my teaching, teacher evaluations of student learning, as approximated via exam answer papers, and the eventual employment into which students enter on graduation. initial feedback from students is positive, but a single cohort of students will not provide sufficient evidence; there are lessons to keep learning on both sides. if online learning is here to stay, i am confident that we can transform both ourselves and it to a high level of effectiveness that will in some measure exceed what is possible in face to face environments. i earlier alluded to the problems of zoom fatigue. i know people who are in back-to-back zoom meetings for hours and days at a stretch. here the replication problem is all too evident. we need to evaluate carefully if a synchronous online meeting is really essential. if it is not, and i feel that we should assume that the vast majority are not, then a move to asynchronous interactions is called for. in effect, this means fewer meetings, which is surely a good thing! the gss software that i mentioned earlier provides an excellent basis for this kind of interaction. a meeting can be open for a period of hours or days and members should expect to visit on several occasions so as to add remarks, read those of others, offer comments and engage in a prolonged deliberation. this kind of extended meeting requires both a good work ethic from participants and good facilitation skills from the organiser, in order that a meeting can be drawn to a productive close in a timely manner without cutting people off. the most challenging aspect of this transformation is accepting that asynchronous interactions can work, and that meetings are really not essential most of the time.
thus, instead of forcing the technology to support existing meeting patterns, we should allow the technology to support a transformed meeting arrangement. for myself, i find that the vast majority of administrative tasks can be completed perfectly adequately through email, i.e. asynchronously. response times are generally fast for urgent matters and i see no reduction of effectiveness or efficiency. where group discussion is needed, gss (or a similar technology) may offer a richer environment than zoom, yet not require synchronous presence. as an editor, i see a pair of disparate effects. firstly, many more papers are being submitted. alas, these are not always of the highest quality. many of them relate to the current c19 pandemic and consist of quickly-thrown-together collections of notes with little scientific import or practical value. these are politely rejected. special issues on the impacts of pandemics have also been proposed, equally hurriedly, and they too are rejected. secondly, reviewers of papers tell me that they need more time to complete their reviews and authors currently revising their papers also ask for more time. a month, or three, would be nice! the culture of asking for extensions is rife in our world and it is encouraged by those who grant such extensions, sometimes without even being asked! an extension request that is received well in advance, with some careful argument as to why it is needed, is fine and will be granted. an extension request that is received in the hours (or minutes) before the deadline (for an extra 6 months) will not be entertained. if you wish to work right up to the deadline, that's your choice. but careful time management is always a good idea, and since emergencies will happen, we need to allow time for them. don't assume that deadlines will be extended! any transformation here has to be at the personal level. we can work around disruptions if we want to. 
for me, it is primarily a matter of time management and learning to say no. quality is still going to be the issue. a disruption does not justify sloppy or haphazard work, or knee-jerk research either. bear in mind that your article is going to be in multiple review-revise cycles for several months, if not longer. we want to publish high quality research that will stand the test of time. that hasn't changed and i don't see it changing. my editing work seems least affected by disruptions, though i admit that it is progressively harder to secure good reviewers willing to complete their assignments on time and to a high quality level. technology has great transformative potential, if we want to transform and to be transformed. but do we? my personal suspicion is that while we teach our students about the value of disruptive technology, we are less keen to be disrupted ourselves, unless it is on terms of our own choosing. punctuated equilibrium theory (eldredge and gould, 1972; gersick, 1989) suggests that disrupting the underlying structures of a stable situation (equilibrium) may create the potential for the introduction of radical changes that enhance the status quo. a pandemic virus, or rather the human reaction to it, is certainly disruptive and is punctuating many of our stable states. do we try to go back to the old stable state or do we accept the transformation challenge? disruptions to the research process are the most difficult to resolve, and i see this as a work in progress. disruptions to teaching, learning and assessment, and administration, on the other hand, are more amenable to transformative action. we will need to plan to teach in a different way, our students will need to accept to learn in a different way, and we will have to create new ways to assess that learning. we may even be able to escape the iron grip of meetings! it can be done, if we have the will. 
it has been suggested that the current c19 pandemic will disrupt academia in a way that will permanently change it. apparently the top ranked universities are destined to survive while others may disappear 3 . i'm not so sure: in my view, those that thrive will be those that transform to and profit by the new status quo. the ability to transform is no more than survival of the fittest in a new set of circumstances. but this applies as much at the level of the institution as at the level of the unit or the individual. digitising core activities is a start. reinventing the institution (of everything we do and where and how we do it) is down the road. are we ready to transform? my own institution was the first in hong kong to put all classes online several months ago. transformation is often revolutionary, which is why punctuated equilibrium theory is so pertinent. revolutions are not tea parties, and so it may be some time before we can enjoy the next stable state. finally, we need to be careful not to change too quickly. contact-tracing has emerged as one of the tools that can be used to trace possibly infected individuals. however, early reports suggest significant concern about the privacy implications of contact-tracing apps like australia's covidsafe, which was designed and implemented very quickly with less attention to usability issues than would normally be the case 4,5 . digital surveillance, incorporating the tracking and tracing of individual people, is already rather common in many societies and not a new research topic (clarke, 2001) . under the cover of a pandemic, where fear of infection is the lowest common denominator, it is not hard for governments to ramp up surveillance activities in a way that would be firmly rejected in normal times, yet now is broadly accepted! 
klein 6 underscores the concerns here by pointing out how politicians are working with technology firms "with an emphasis on permanently integrating technology into every aspect of civic life" where "our every move, our every word, our every relationship is trackable, traceable and data-mineable by unprecedented collaborations between government and tech giants". with such a panoptic vision, it seems that revocation of surveillance is not envisaged and that this is the new norm. this is as much a research issue as a philosophical one: as researchers, we should be critical of actions that further diminish our already eroded right to be left alone. since contact tracing and quarantine are enabled and enforced through information systems applications, these topics are firmly within the remit of im researchers.
references:
- person location and person tracking: technologies, risks and policy implications
- organizational information requirements, media richness and structural design
- gss for presentation support: supercharging the audience through simultaneous discussions during presentations
- establishing effective global virtual student teams
- call for papers: responsible is research for a better world
- media, tasks, and communication processes: a theory of media synchronicity
- punctuated equilibria: an alternative to phyletic gradualism
- revolutionary change theories: a multilevel exploration of the punctuated equilibrium paradigm
- i know i can, but can we? culture and efficacy beliefs in global virtual teams
- bridging space over time: global virtual team dynamics and effectiveness
key: cord-308867-mrtf8l4f
authors: heaney, jude; rolfe, kathryn; gleadall, nicholas s.; greatorex, jane s.; curran, martin d.
title: chapter 6 low-density taqman® array cards for the detection of pathogens
date: 2015-12-31
journal: methods in microbiology
doi: 10.1016/bs.mim.2015.06.002
sha: doc_id: 308867 cord_uid: mrtf8l4f
abstract: real-time pcr assays have revolutionised diagnostic microbiology over the past 15 years or more. adaptations and improvements over that time frame have led to the development of multiplex assays. however, limitations in terms of available fluorophores have meant that the number of assays which can be combined has remained in single figures. this latter limitation has led to the focus tending to be on individual pathogens and their detection. this chapter describes the development of taqman® array cards (tacs), technology which allows the detection of multiple pathogens (up to 48 targets) from a single nucleic acid extract, utilising small volumes and real-time pcr. this in turn lends itself to a syndromic approach to infectious disease diagnosis. using the examples of tacs we have developed in our own laboratory, as well as others, we explain the design, optimisation and use of tacs for respiratory, gastrointestinal and liver infections. refinement of individual assays is discussed, as well as the incorporation of appropriate internal and process controls onto the array cards. finally, specific examples are given of instances where the assays have had a direct, positive impact on patient care.
clinicians have a complex task when attempting to identify infectious disease aetiologies, particularly in critically ill patients. current diagnostic practices typically detect and report single pathogens from individual patient samples, with testing often curtailed when a pathogen is detected.
adopting a syndromic approach for the rapid identification of infection(s) would not only change current clinical diagnostic practices but also provide a rapid diagnostic tool to inexpensively supply evidence of specific infections and super-infections, particularly in paediatric, critically ill and immunosuppressed patients. test selection may be based on limited information, and therefore screening for many relevant pathogens may not occur simply because they were not requested by the referring clinician; consequently, infectious aetiologies can remain unidentified. delays may also occur if reference laboratories are required to run tests for less common pathogens, greatly increasing the turn-around time for results. the patient may remain for extended periods on broad-spectrum therapy pending the results of different tests, with the associated increased drug toxicity, unwarranted selective pressure for antibiotic resistance, cost and length of hospital stay. there is, therefore, a great need to simplify complex workflows for pathogen detection and identification to allow the rapid initiation of tailored treatment regimens. over the past 10 years, the application of both qualitative and quantitative nucleic acid detection techniques has dramatically altered clinical diagnostic practices. pcr and, more recently, real-time pcr have revolutionised diagnostic practices in clinical microbiology laboratories (bankowski & anderson, 2004; cockerill, 2003; mackay, 2004), allowing rapid pathogen identification. the combination of excellent sensitivity and specificity, low contamination risk, speed, reduced hands-on time and ease of use has made real-time pcr technology an appealing alternative to conventional culture-based or immunoassay-based testing methods, which are often time consuming and labour intensive.
as a direct result of the introduction of real-time pcr into clinical microbiology, turn-around times have decreased considerably, with obvious benefits at the bedside. the use of molecular diagnostics has proven advantageous, resulting in more accurate results with overall better patient care and outcomes. when molecular diagnostics were first introduced into clinical microbiology, there were few commercial assays available; therefore, in-house pcr and real-time pcr assays were developed for the majority of targets. in more recent years, an expanded portfolio of commercial assays has become available, enabling laboratories for whom extensive assay development and validation is not practicable to still utilise molecular-based methods. all in-house assays should now be developed and validated according to the conformité européenne-in vitro diagnostic (ce-ivd) recommendations and should be considered for ce marking or iso 15189 accreditation. all pcr-based assays, qualitative and quantitative, should meet minimum information for publication of quantitative real-time pcr experiments (miqe) requirements as detailed in the literature (bustin et al., 2010, 2013; johnson, nour, nolan, huggett, & bustin, 2014; taylor, wakem, dijkman, alsarraj, & nguyen, 2010) and comply with the guidelines outlined for the development and validation of diagnostic tests that depend on nucleic acid amplification and detection (saunders et al., 2013). there is also a requirement for the use of standardised materials and participation in external quality control programmes.

2 taqman® array cards

pcr and real-time pcr technology have evolved from monoplex to multiplex assays, with mastermixes comprising several primer/probe sets for multiple pathogen detection.
array technology has revisited monoplex assays but utilises them as 'simultaneous singleplex' reactions to detect many more pathogens from the full spectrum of organisms, thus introducing a syndromic rather than pathogen-focused approach to diagnosis. the taqman® array card (tac) (life technologies, carlsbad, ca), formerly known as the taqman® low-density array (tlda), is a microfluidic card which utilises primers and probes pre-loaded and lyophilised onto wells. the tac platform offers a 384-well, single-plate real-time pcr in an 8-sample by 48-well format, with a single target tested in each of the 48 available wells/pods (including controls). microfluidic technology distributes the sample into individual pcrs, and real-time pcr and detection take place within an analyser (viia7 or quantstudio 7), with results generated in less than 1 h. the process and individual steps involved in setting up a card are schematically outlined in figure 1. to our knowledge, tacs are the only available real-time pcr platform with the inherent ability to utilise in-house-validated pcr assays on a single array and detect and quantify up to 48 targets simultaneously on a single specimen in such a simple, robust and easily transferable application. nucleic acid extract (20-75 µl) is added to a mastermix (containing enzyme, buffer and nucleotides). the 4× taqman® fast virus 1-step mastermix (life technologies) is recommended for use with tac, but others are widely available. the reaction volume is adjusted to 100 µl with nuclease-free water, mixed and transferred to a single reservoir/port (figure 1). once eight specimens have been added into the 8 reservoirs/ports, the plate is centrifuged twice for 2 min at 1200 rpm/300 × g and sealed, the ports are trimmed off with scissors, and the card is loaded into the real-time pcr viia 7 instrument (life technologies); the run time is approximately 53 min (figure 1). in practical terms, the advantages of tac are numerous.
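The per-port set-up described above is simple volume arithmetic, and can be written as a small helper. This is a minimal sketch assuming the 20-75 µl extract range, 100 µl final volume and a 4× mastermix concentrate quoted in the text; the function name and range check are illustrative, not part of any kit protocol.

```python
# Illustrative per-port volume calculation for one TAC specimen.
# Assumed figures from the text: 20-75 µl nucleic acid extract,
# 4x one-step mastermix, 100 µl final volume per reservoir/port.

FINAL_VOLUME_UL = 100.0
MASTERMIX_CONC = 4  # supplied as a 4x concentrate

def port_volumes(extract_ul: float) -> dict:
    """Return illustrative pipetting volumes for one reservoir/port."""
    if not 20.0 <= extract_ul <= 75.0:
        raise ValueError("extract volume outside the 20-75 µl range used here")
    mastermix_ul = FINAL_VOLUME_UL / MASTERMIX_CONC  # 25 µl of 4x mix -> 1x final
    water_ul = FINAL_VOLUME_UL - mastermix_ul - extract_ul
    if water_ul < 0:
        raise ValueError("extract plus mastermix exceed the 100 µl final volume")
    return {"extract_ul": extract_ul,
            "mastermix_ul": mastermix_ul,
            "water_ul": water_ul}

print(port_volumes(50.0))
# {'extract_ul': 50.0, 'mastermix_ul': 25.0, 'water_ul': 25.0}
```

Eight such ports are prepared per card, one per specimen, before centrifugation distributes each mix into its 48 pods.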
handling time and training required are both minimal, substantially reducing the real cost of clinical diagnosis. data interpretation is straightforward, as the data output is a single, user-friendly file. the lyophilisation of the primers and probes onto the card ensures a shelf-life of up to 2 years with refrigeration, allowing less commonly required cards to be stored but readily available, with no loss of sensitivity. tacs can also be shipped at room temperature. an added advantage of the tac is the ability to include confirmatory assays for many of the pathogens, with assays designed to detect multiple targets per pathogen, enhancing confidence in diagnosis (figure 2). the tac format has some disadvantages, and some groups have reported a reduced sensitivity when compared to monoplex assays (kodani et al., 2011). this is usually restricted to a one log10 lower limit of detection (lod) when compared to monoplex assays (kodani, mixson-hayden, drobeniuc, & kamili, 2014; kodani et al., 2011; rachwal et al., 2012).

(figure 1 caption: processing steps involved in running a tac. once the nucleic acid extract/rt-pcr mastermix (100 µl) is added to the port/reservoir for each specimen, centrifugation is used to channel the mix into the 48 wells/pods. these contain the pairs of lyophilised primers and probes for the specific targets to be amplified by real-time pcr on each of the eight clinical specimens applied to the plate.)

(figure 2 caption: taqman array card layouts with all the pathogens included for each sample. for each sample, one internal positive control with bacteriophage ms2 (ms2 ic) and two human dna/rna controls (rnase p/18s rna) are included. some cards include additional internal positive controls, i.e., phocine distemper virus (pdv), bacillus thuringiensis and escherichia coli green fluorescent protein (gfp ic). for several pathogens, more than one genetic target is included (indicated with #number).)
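The one-log10 LOD penalty reported for the card format maps directly onto Ct space: at perfect amplification efficiency, a 10-fold change in template shifts Ct by log2(10) ≈ 3.32 cycles. A sketch of that standard qPCR relationship (generic arithmetic, not code from the chapter):

```python
import math

def ct_shift_per_log10(efficiency: float = 1.0) -> float:
    """Cycles by which Ct moves for a 10-fold change in template,
    given amplification efficiency E (1.0 = perfect doubling per cycle)."""
    return math.log(10, 1 + efficiency)

# at perfect efficiency a one-log10 lower LOD shows up as ~3.32 extra cycles
print(round(ct_shift_per_log10(1.0), 2))  # 3.32
print(round(ct_shift_per_log10(0.9), 2))  # 3.59 at 90% efficiency
```

In practice this means a card assay running one log10 less sensitive than its monoplex parent will call weak positives roughly three cycles later, or miss them entirely near the monoplex LOD.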
however, careful optimisation of tac, such as testing multiple targets for some pathogens within the card (see figure 2), using efficient nucleic acid extraction methods (particularly for samples such as blood which may have a low yield of organism) and increasing the extraction volume (as well as the nucleic acid input volume), may all help to increase the sensitivity to that suitable for use in routine diagnostics laboratories (diaz et al., 2013). additional drawbacks of tac technology include the necessity to screen eight samples in parallel to avoid waste, plus the limited number of samples per plate (maximum of eight), lack of automation and inaccessibility of the cards for downstream analysis of pcr products. the array requires a single annealing temperature for all assays on a card; therefore, individual assays must be optimised to work efficiently at this temperature. this requires a careful validation process so that all assays perform optimally under the universal conditions of the array card. however, once this validation has been undertaken, addition of other assays to the panel or replacement of existing assays with new, improved versions can be carried out without the need for extensive revalidation, allowing for a flexible approach. the reported drop in sensitivity compared to monoplex pcrs (pertaining to the small volume analysed) (kodani et al., 2011) does not have a negative impact because pathogens at clinically relevant levels are detected. lods of single copy number are achievable, and this has been demonstrated for our in-house-developed respiratory tac (figure 3), where three of our independent influenza b real-time array assays were demonstrated to detect down to 3 copies per reaction using a synthetic puc57 plasmid control (www.genscript.com) containing all three target sequences.
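At LODs approaching single copies, detection becomes stochastic: whether the small reaction volume receives any template molecule at all follows Poisson sampling. A sketch of the expected hit rate at a given mean copy input (generic low-copy statistics, not an analysis performed in the chapter):

```python
import math

def p_at_least_one_copy(mean_copies_per_reaction: float) -> float:
    """Poisson probability that a reaction receives >= 1 template copy."""
    return 1.0 - math.exp(-mean_copies_per_reaction)

# hit rate rises steeply with mean input; at 3 copies/reaction, ~95% of
# replicates contain template, which is why repeated detection at this
# level is a credible single-digit-copy LOD claim
for copies in (0.5, 1, 3, 10):
    print(copies, round(p_at_least_one_copy(copies), 3))
```

This is also why replicate targets per pathogen on the card (figure 2) help near the LOD: each extra pod is an independent draw from the same Poisson distribution.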
at present, quantitation is not performed on tacs; however, when attention is focused upon which organism is responsible for infection, a detected or not detected answer is usually sufficient. adaptation of tac to perform quantitation is possible if there is a clinical need for this function. multiple organisms may be detected from a single specimen, requiring careful interpretation by clinicians to determine which of the identified organisms are responsible for disease and which may be considered commensals. this relies on clinical judgement; however, the cycle threshold (ct) value at which a particular organism is detected on the tac (and which represents the relative amount of that organism within the sample) can aid in this interpretation. a good example to illustrate this point is the multiple infections found in a specimen recently processed on our gastro tac, which is currently undergoing validation in our network of public health england (phe) laboratories (figure 4). one of the major attractions of array technology is the ability to design cards with specific syndromes in mind. examples of these, some of which are described within this chapter, include infectious respiratory disease cards, hepatitis/jaundice cards, infectious gastrointestinal disease cards, central nervous system (cns) infection cards and sexually transmitted infection (sti) cards (see figure 2 for the layouts of the cards developed to date in our laboratory). there is the possibility to include assays for antibiotic and anti-viral resistance relevant to a particular organism/syndrome, for example, carbapenemases or oseltamivir resistance. this technique can also be used in surveillance for pathogens which are not sought routinely, for example, toroviruses, kobuviruses and parvoviruses in gastrointestinal disease.
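If quantitation were added to the card, as suggested above, the usual route is a standard curve, Ct = intercept + slope·log10(copies), fitted from a dilution series. A sketch with placeholder slope and intercept values (these are illustrative, not TAC parameters from the chapter; real values must be fitted per assay):

```python
def copies_from_ct(ct: float, slope: float = -3.32, intercept: float = 37.0) -> float:
    """Invert a standard curve Ct = intercept + slope*log10(copies).
    slope/intercept here are placeholders; fit them from a dilution series."""
    return 10 ** ((ct - intercept) / slope)

def efficiency_from_slope(slope: float) -> float:
    """Amplification efficiency implied by a standard-curve slope
    (a slope of -3.32 corresponds to ~100% efficiency)."""
    return 10 ** (-1.0 / slope) - 1.0

print(round(copies_from_ct(30.0)))             # ~128 copies with these placeholders
print(round(efficiency_from_slope(-3.32), 3))  # ~1.0, i.e. ~100% efficiency
```

The same arithmetic underlies the informal use of Ct as a proxy for relative organism load when several pathogens are detected in one specimen.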
array cards can be used in outbreak response, for example, with respiratory outbreaks, whereby outbreak specimens can be rapidly tested for multiple respiratory pathogens, allowing for rapid initiation of an appropriate public health response, implementation of infection control precautions and prophylaxis/treatment where necessary. although this chapter focuses on tac technology, it is important to recognise that there are other types of array technology which are currently in development or diagnostic use. filmarray technology (biofire diagnostics inc., salt lake city, ut) is a 'pouch-format' array which incorporates nucleic acid extraction and multiplex pcr followed by nested pcr in wells of 1 µl volume each to produce a result within 1 h (pierce, elkan, leet, mcgowan, & hodinka, 2012; popowitch, o'neill, & miller, 2013). the single-sample format makes this technology inadequate for high-throughput testing but does make it ideal for point-of-care testing, which could be deployed in critical situations such as field-based needs for the armed forces. this chapter explores how tacs are being developed to aid diagnosis of infection and to individualise the process according to the presenting syndrome, based predominantly on our experiences. in the fight against infectious disease, where emerging and re-emerging pathogens add to the complexity, it is important to apply contemporary knowledge and novel technologies to aid diagnosis and to simplify the complex workflows that are required for pathogen identification.

(figure 4 caption: gastro tac specimen result displaying the numerous pathogens that were detected, with a broad range of ct values.)
both miqe and ivd guidelines (saunders et al., 2013) were followed when designing and optimising assays on tacs, taking into consideration: experimental design, sample properties, nucleic acid extraction and quality assessment, reverse transcription (rt), target information, primer and probe design, real-time pcr protocol optimisation and validation details, and data analysis. existing, previously validated, in-house assays were taken from a variety of formats, including monoplex and multiplex real-time pcr with differing fluorescent probes and quenchers. the initial step was to transform all assays to a single monoplex format, with high specificity and sensitivity, while using identical reaction conditions, chemistry and probes with the same reporter fluorophore and quencher. although each assay had previously been extensively validated, the changes required for transformation to tac assays necessitated re-optimisation and validation. this provided an opportunity to re-evaluate each assay individually: to perform blast analysis (www.ncbi.nlm.gov/blast) of primers and probes to ensure specificity (utilising recently deposited sequences in the genbank database), and to examine primer and probe melting temperatures (tm) and whether any adverse primer/probe dimer and heterodimer interactions existed that required modification, using oligoanalyzer 3.1 (www.idtdna.com). in some instances, the primers needed to be modified at the 5′ end (adding target-matched extensions) to raise their tm to approximately 55-60°c to ensure uniform amplification across 48 different assays at the single defined temperature (60°c). ideally, the amplicon length should be between 60 and 150 nucleotides, but we have had acceptable results with amplicons up to 200 nucleotides in length. a huge benefit is that once validated in the tac format, an assay can be incorporated into any syndromic tac.
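The Tm and amplicon-length constraints above can be pre-screened crudely before moving to proper nearest-neighbour tools such as OligoAnalyzer. A sketch using the simple Wallace rule; the thresholds mirror the 55-60 °C primer window and 60-200 nt amplicon range stated in the text, and the example sequence is hypothetical, not a real assay:

```python
def wallace_tm(seq: str) -> float:
    """Crude Wallace-rule melting temperature for short oligos:
    2 C per A/T plus 4 C per G/C. Real design should use
    nearest-neighbour tools, as described in the text."""
    s = seq.upper()
    at = s.count("A") + s.count("T")
    gc = s.count("G") + s.count("C")
    return 2.0 * at + 4.0 * gc

def screen_primer(seq: str, lo: float = 55.0, hi: float = 60.0) -> bool:
    """Flag primers whose rough Tm falls outside the 55-60 C window
    targeted for uniform cycling at 60 C."""
    return lo <= wallace_tm(seq) <= hi

def screen_amplicon(length_nt: int) -> bool:
    """Prefer 60-150 nt; up to 200 nt was still acceptable in practice."""
    return 60 <= length_nt <= 200

primer = "ATGCATGCATGCATGCATAT"  # hypothetical 20-mer
print(wallace_tm(primer), screen_primer(primer), screen_amplicon(120))
# 56.0 True True
```

A failing screen corresponds to the text's remedy of adding target-matched 5′ extensions to lift the Tm into range.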
although alternatives may be considered, for ease of use and interpretation, probes with a single fluorescent 5′ dye (fam) and a minor groove binder (mgb) incorporated into a non-fluorescent quencher at the 3′ end are favoured for the development of tacs. these probes have proven to generate better precision in quantitation due to a lower background signal. further, the mgb moiety stabilises the hybridised probe, effectively raising the melting temperature (tm) and increasing specificity. mgb probes can therefore be shorter than traditional dual-labelled probes, making them better suited for applications such as allelic discrimination, when designing probes in regions of high at content, for genotyping, or when designing assays to detect mutations conferring drug resistance. the majority of assays using traditional dual-labelled quenchers transfer to the mgb format with minimal re-optimisation required; indeed, in many cases the shorter mgb probes increase the sensitivity and efficiency of the reaction, as well as the amplitude of the fluorescence signal, giving high-rising sigmoidal amplification curves. in our experience, designing the tm of the mgb probe to be between 48 and 55°c works effectively, with probe lengths ranging from 14 to 24 nucleotides. the taqman® fast virus 1-step mastermix (life technologies) is our preferred chemistry for tac assays; therefore, initial development work must ensure all existing primer and probe sets perform adequately using this chemistry and fast ramping and cycling times. if existing primer and probe sets do not perform adequately, modified or new assays must be designed. the fast virus 1-step mastermix amplifies both rna and dna with high sensitivity, allowing for mix-and-match rt-pcr and pcr on any tac. it has the added benefit of retaining high sensitivity even in the presence of rt-pcr inhibitors often found in blood, stool and other typically inhibitory specimens.
although not essential, for ease of use we chose to have one cut-off threshold value for all assays on each developed tac (figure 2). again, this requires careful assay design, ensuring all true amplification curves (with high-rising amplitudes) cross this threshold effectively for each assay. in the initial developmental stages, assay optimisation should utilise the real-time platform routinely used for in-house assays to allow for ease of comparison. assay sensitivity, specificity, efficiency and fluorescence intensity should be determined using both known standards and patient samples. a range of sample types, including plasma, stool, nasal swabs, bronchoalveolar lavage, naso-pharyngeal aspirates (npas), cns, fresh biopsy tissue and formalin-fixed, paraffin-embedded specimens, may be included in assay optimisation. extraction volumes can also be optimised, and increasing the volume can be beneficial when low levels of pathogen are suspected, e.g., very early or late in the infection. an internal control such as bacteriophage ms2 (or bacillus thuringiensis cells) should be included in all extraction protocols to serve as a process control (rolfe et al., 2007). the next step is to transfer the assays to a 384-well pcr plate format using fast reaction conditions (see figure 1) on the viia7 and determine sensitivity, specificity and lods for each assay, again using known standards and patient samples. although a larger volume (20 µl) is used in the 384-well pcr plate format on the viia7, it does provide a good indication of how assays will perform on a tac (1 µl reaction volume) due to the proximity to the fluorescence detectors within the viia7 instrument. when a tac has been spotted with primer and probe sets lyophilised into individual pods, validation procedures must be undertaken.
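The single fixed threshold described above works because true amplification curves are high-rising sigmoids. A toy logistic model illustrates how a Ct is read off at the threshold crossing; the curve parameters here are invented for illustration, not instrument values:

```python
import math

def fluorescence(cycle, f_max=1.0, c_mid=28.0, slope=1.5, baseline=0.02):
    """Toy logistic model of a real-time PCR amplification curve."""
    return baseline + f_max / (1.0 + math.exp(-(cycle - c_mid) / slope))

def ct_at_threshold(threshold, cycles=45, step=0.01, **curve_kwargs):
    """First (fractional) cycle at which the modelled curve crosses the
    single fixed threshold used across the card; None if it never does."""
    c = 0.0
    while c <= cycles:
        if fluorescence(c, **curve_kwargs) >= threshold:
            return round(c, 2)
        c += step
    return None

# a threshold set just above baseline is crossed a few cycles before c_mid
print(ct_at_threshold(0.1))
```

A flat or low-amplitude curve never reaches the threshold and is reported as not detected, which is why each assay must be tuned so genuine positives cross the common cut-off cleanly.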
known standards (such as viruses and diluted nucleic acid extracts), united kingdom national external quality assessment (neqas) and quality control for molecular diagnostics (qcmd) (www.qcmd.org) external quality assurance panels, patient specimens derived from different sample types and engineered synthetic plasmid controls can be used for validation (kodani & winchell, 2012). employing such a combination allows for optimal assay scrutiny, providing information on lod, sensitivity, specificity and any reaction inhibitors from different sample types. neqas and qcmd panels provide a good indication of whether a generated tac is of sufficient sensitivity to be used in an accredited diagnostic service. a panel of engineered plasmid controls can serve multiple purposes: initially to determine the specificity of each assay on a tac and their lods, and subsequently to determine any batch variation. for our tac development, we used a commercial company, genscript (www.genscript.com), to generate a panel of synthetic control plasmids containing all our target sequences combined together (with 20 nucleotides each side of the primer target sites also included). these plasmid panels also serve to quality-check each new batch of tac plates, in a checkerboard fashion, to ensure all the primers and probes have been spotted correctly into their assigned pods. extraction and internal controls (ms2, rnase p gene and 18s) can be used to determine sample quality, with the run failing if any of the control reactions fall outside pre-determined ranges. specific positioning of the control assays throughout the tac also enables the user to determine whether sufficient centrifugation has taken place and the mastermix has reached all pods on the tac. as emphasised in section 1, adopting a syndromic approach to infectious disease diagnoses has many advantages. however, it also requires careful planning and input from both clinicians and diagnostic laboratory teams.
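The checkerboard batch check described above can be expressed as an expected-call plan: alternate two plasmid pools across the eight ports so that every pod is exercised, and any mis-spotted pod then shows up as an off-pattern result. This is a sketch with hypothetical pool names and an arbitrary even/odd split of targets; the real assignment would follow the actual card layout.

```python
# Sketch of a checkerboard batch-QC plan for one 8-port x 48-pod card.
# "pool_a"/"pool_b" are hypothetical synthetic-plasmid pools, each
# covering half of the assay targets.

N_PORTS, N_PODS = 8, 48

def expected_call(port: int, pod: int) -> str:
    """Which result this pod should give under the checkerboard loading."""
    pool = "pool_a" if port % 2 == 0 else "pool_b"
    pod_covered_by_a = pod % 2 == 0
    positive = (pool == "pool_a") == pod_covered_by_a
    return "positive" if positive else "negative"

plan = [[expected_call(p, w) for w in range(N_PODS)] for p in range(N_PORTS)]
# any observed result deviating from `plan` flags the pod for review
print(plan[0][:4], plan[1][:4])  # adjacent ports show inverted patterns
```

Comparing observed calls against such a plan makes mis-spotting, cross-contamination and incomplete centrifugation all visible in a single batch run.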
designing syndromic tacs involves an initial decision as to which pathogens are considered important in the targeted syndromes: those identified most frequently and/or those with serious complications and adverse consequences for the patient. syndromic diagnosis allows not only for rapid turn-around times in routine diagnosis but also rapid response times in outbreak situations. the syndromic approach to infectious disease diagnosis is not new, and both in-house and commercial systems have been described using techniques ranging from multiplex pcr and microarrays to maldi-tof and sequencing, or combinations of these approaches. the first syndromic tac for infectious diseases to be described was a respiratory array card developed by kodani et al. (2011). this was capable of detecting 21 targets (13 viruses and 8 bacteria) plus control, aimed at diagnosing the cause of acute respiratory infection. others, including ourselves, have expanded the range of organisms to include those causing atypical pneumonias, a wider range of specific organisms (e.g. influenza a subtypes) and those causing infections in specific patient groups, e.g., those receiving extracorporeal membrane oxygenation (ecmo) therapy for severe respiratory failure due to a suspected infectious aetiology (see figure 2 for respiratory/ecmo card layouts). one of the inherent advantages of the tac development process is the ability to make changes and improvements to individual assays when ordering the next batch of plates (minimum order is 50 plates/400 specimens), which take 6-10 weeks to manufacture. our current respiratory tac is now in its eighth version and has improved dramatically during this evolutionary process. its performance with a commercially available zeptometrix respiratory verification panel (www.zeptometrix.com), compared to the cepheid genexpert flu/rsv test, the biofire filmarray respiratory panel and our routine multiplex real-time respiratory assays, is clearly demonstrated in table 1.
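Panel comparisons like those in table 1 ultimately reduce to per-target 2×2 counts against the reference result. A sketch of the sensitivity/specificity arithmetic; the counts shown are illustrative only, not the table 1 data:

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Sensitivity and specificity from a 2x2 comparison against a
    reference result (panel member known present/absent)."""
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# illustrative counts only
print(sens_spec(tp=19, fn=1, tn=30, fp=0))  # (0.95, 1.0)
```

Tracking these two figures per target across card versions is what makes the version-to-version improvement quantifiable.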
moreover, performance with all the available qcmd respiratory pathogen panels (www.qcmd.org) over the last few years has been excellent and is equal to, and in most cases now superior to, our current routine molecular real-time respiratory assays. improvements observed in tac performance during development of the plate have correlated with increased analytical sensitivity of the individual assays and reflect the fact that all assays on the eighth version of the respiratory card can detect down to approximately 3 copies per reaction using our panel of synthetic plasmid controls. a recent comparison with our routine multiplex assays on 417 consecutive respiratory specimens demonstrated the tac performance (table 2), with discrepant results seen only in specimens with very late ct values (low viral load), highlighting that the card now outperforms the gold standard test, i.e., it is more sensitive for rsv and adenovirus. a modification of our respiratory tac plate tailored to our ecmo service (adjusted assays on the ecmo card are highlighted in bold font in figure 2) has been in evaluation in our laboratory since november 2013. to date (april 2015), 55 patients (151 specimens) have been processed on the ecmo card in parallel with routine investigations. in addition to confirming all routine investigative findings, the ecmo array card has had a significant beneficial impact, directly influencing clinical management in some patients. notable infections identified include mycoplasma pneumoniae (two cases), aspergillus fumigatus (one case), streptococcus pyogenes (two cases, both dual infections with influenza a), mycobacterium tuberculosis (one case) and streptococcus pneumoniae (six cases). it has long been recognised that mass spectator events represent huge challenges to public health, bringing the possibility of imported infections, mass transmissions and food-borne infection outbreaks.
transmission of gastrointestinal infections can cause major issues at mass spectator events, and identifying causative pathogens can be problematic. repeated studies have shown that the overall positivity of routine microscopy and culture of stool samples from symptomatic individuals is poor compared to conventional pcr. selected references in the reference list demonstrate this finding (morgan, paillart, & thompson, 1998; santos & rivera, 2013; stensvold & nielsen, 2012). the need for comprehensive screening assays in both outbreak settings and routine clinical investigation has therefore been recognised, and the latter have been developed and used in a number of settings (liu et al., 2013, 2014; pholwat et al., 2015). to date, a single screening assay has not been available for mass event settings, but such assays are under development. the tac recently developed by liu et al. (2013), which simultaneously detects 19 enteropathogens, constitutes a significant advance in diagnostics for gastrointestinal pathogens. not only do the cards allow fast, accurate and quantitative detection of a broad spectrum of enteropathogens (bacteria, viruses and parasites), the authors also concluded that they were well suited for surveillance or clinical purposes.

(table 1 footnotes: in-house routine multiplex real-time respiratory pcr assays (clark et al., 2014); b, influenza samples diluted 1:100 in virus transport medium and then processed in all assays to increase the challenge.)
in a follow-up seminal study (liu et al., 2014) assessing the performance of their tac alongside two other molecular platforms (pcr luminex and multiplex real-time pcr; both in-house) against comparator methods (bacterial culture, elisa and pcr) using over 1500 specimens, a molecular quantitative approach was clearly superior: 'of the laboratories participating in this study, the taqman® array card platform was viewed the most favourable for a complete syndromic screen because implementation and procurement was simple, risk of contamination scant, and quantification robust'. we too have developed a gastrointestinal tac incorporating a few of the assays described by liu et al. (2013) but expanding it with additional assays (e.g. enterovirus, norovirus group i, parechovirus and hepatitis a & e viruses), outlined in figure 2. while validation of this plate is currently underway in our network of regional laboratories within phe, the data generated so far (>500 samples processed) are extremely encouraging, and this is clearly illustrated in the performance obtained for the recent norovirus qcmd (table 3) and clostridium difficile qcmd (table 4) panels. rachwal and colleagues published a study in which they described an array for the detection of biothreat organisms (rachwal et al., 2012). these included bacillus anthracis, francisella tularensis, yersinia pestis, burkholderia mallei and burkholderia pseudomallei. as in the initial paper from kodani et al. (2011), these authors pointed out that the array system is around 10-fold less sensitive than singleplex rt-pcr assays performed alongside. this is a factor that has to be borne in mind when considering the clinical application of these assays, but since the relevance of very late ct value (low-level) results is always a point of contention, this issue is not new to molecular diagnostics.
however, as outlined above for our respiratory tac, striving to improve assays during card development and ensuring only sublime assays (with exquisite analytical sensitivity) migrate to the finalised validated diagnostic card should mitigate and address this point. additional tacs have been developed within our laboratory. like the respiratory and gastro tacs shown in figure 2 , a jaundice tac was designed incorporating assays considered important in causing or complicating jaundice in patients. assays on the jaundice tac currently include hepatitis viruses a-g, hcv genotyping (1-4), erythrovirus b19, sen virus (sen-v), sv40, dengue virus, pan-adenovirus, polyomaviruses bk and jc, herpes viruses 1-8, toxoplasma gondii, m. tuberculosis complex, chlamydophila psittaci and internal controls (figure 2) . a similar tac has also been developed at the centers for disease control and prevention (cdc, atlanta) but is limited to the hepatitis viruses (kodani et al., 2014) . including additional pathogens allows for a more rapid and comprehensive diagnosis without the requirement for multiple blood or biopsy samples to be sought from a patient. it also removes or limits the possibility of missing a complicating infection once an initial diagnosis is made. another area in which tacs are being applied is in the diagnosis of infections in immunocompromised patients. the number of transplants per year continues to rise, placing increased demands on the clinical care teams and therefore also on diagnostic laboratories. despite technical innovations, morbidity and mortality rates have not improved in the past 20 years, mainly due to infection, cardiovascular disease and malignancy. significant advances in early diagnosis of infections will have a beneficial impact on morbidity and mortality rates and therefore will continue to be the focus of current diagnostic algorithm improvements. 
a modification of the jaundice tac, replacing the hcv genotyping assays with 12 additional assays, carefully selected to include those considered problematic in transplant and immunocompromised patients has generated the first comprehensive diagnostic algorithm for infections in this setting (figure 2 ). assays on this 'transplant' tac include herpes viruses 1-8, human polyomaviruses (bk/jc/wu/ki/ mc), human papillomaviruses (pan hpv, e6 mrna genotype 16 and 18), erythrovirus b19, pan-adenovirus, hepatitis viruses a-g, sen v, t. gondii, m. tuberculosis complex and fungal species (aspergillus and candida species). there are many other tacs in development, e.g., for cns infections ( figure 2 ) and stis, both in the united kingdom and elsewhere. the versatility of this approach and its adaptability highlights the fact that it can be readily recruited or used in a 'mix-and-match' style depending on local or outbreak requirements. furthermore, due to their ease of use, they can be considered a parachute technology, fast tracked when need arises or in resource poor settings. this chapter has outlined the development and use of tacs in the clinical diagnostic laboratory. the number of different tacs available continues to expand and their versatility has been explained. as a clinical intervention tool, they offer many possibilities. locally, the use of tac assays has provided rapid diagnosis in seriously ill patients requiring ecmo. it has also been instrumental in identifying b19 as the possible causative agent of recurring rhabdomyolysis in a young child, hsv type 1 pneumonia in an immunocompetent patient and a case of bk virus-induced pneumonia in an immunocompromised patient, all of which would have otherwise remained undiagnosed. in addition, the real potential for the use of tac in outbreak situations has been recognised in its ability to identify m. pneumoniae as the causative agent in an outbreak among university students (waller et al., 2014) . 
in that outbreak, the tac exhibited 100% sensitivity and specificity when compared to multiplex real-time pcr and allowed the outbreak to be quickly recognised and appropriately managed. the syndromic approach offers many advantages over a monoplex targeted one. the assays are not without their limitations, and the concomitant loss of sensitivity when using relatively small amounts of nucleic acid extract has to be considered. that aside, the tac heralds a step forward in the development of versatile molecular assays for diagnosis of infectious disease.

references
real-time nucleic acid amplification in clinical microbiology
miqe precis: practical implementation of minimum standard guidelines for fluorescence-based quantitative real-time pcr experiments
the need for transparency and good practices in the qpcr literature
c-reactive protein level and microbial aetiology in patients hospitalised with acute exacerbation of copd
application of rapid-cycle real-time polymerase chain reaction for diagnostic testing in the clinical microbiology laboratory
optimization of multiple pathogen detection using the taqman array card: application for a population-based study of neonatal infection
multiplex pcr: optimization and application in diagnostic virology
the increasing application of multiplex nucleic acid detection tests to the diagnosis of syndromic infections
minimum information necessary for quantitative real-time pcr experiments
rapid and sensitive approach to simultaneous detection of genomes of hepatitis a, b, c, d and e viruses
engineered combined-positive-control template for real-time reverse transcription-pcr in multiple-pathogen-detection assays
application of taqman low-density arrays for simultaneous detection of multiple respiratory pathogens
a laboratory-developed taqman array card for simultaneous detection of 19 enteropathogens
development and assessment of molecular diagnostic tests for 15 enteropathogens causing childhood diarrhoea: a multicentre study
real-time pcr in the microbiology laboratory
a low complexity rapid molecular method for detection of clostridium difficile in stool
comparison of pcr and microscopy for detection of cryptosporidium parvum in human fecal specimens in a clinical trial
integrated microfluidic card with taqman probes and high-resolution melt analysis to detect tuberculosis in 10 genes
comparison of the idaho technology filmarray system to real-time pcr for detection of respiratory pathogens in children
molecular diagnosis of diarrhea: current status and future potential
comparison of the biofire filmarray rp, genmark esensor rvp, luminex xtag rvpv1, and luminex xtag rvp fast multiplex assays for detection of respiratory viruses
the potential of taqman array cards for detection of multiple biological agents by real-time pcr
an internally controlled, one-step, real-time rt-pcr assay for norovirus detection and genogrouping
comparison of direct fecal smear microscopy, culture, and polymerase chain reaction for the detection of blastocystis sp. in human stool samples
guidance on the development and validation of diagnostic tests that depend on nucleic acid amplification and detection
comparison of microscopy and pcr for detection of intestinal parasites in danish patients supports an incentive for molecular screening platforms
a practical approach to rt-qpcr-publishing data that conform to the miqe guidelines
detection and characterization of mycoplasma pneumoniae during an outbreak of respiratory illness at a university
pcr-electrospray ionization mass spectrometry: the potential to change infectious disease diagnostics in clinical and public health laboratories

the authors wish to thank dr. marijke reydners and dr. patrick descheemaeker (az sint-jan hospital, bruges, belgium) for their collaborative contribution on the development of the respiratory tac.
we would also like to thank all our collaborators within the phe network for their efforts in developing and validating the gastro tac, namely, dr

key: cord-236830-0y5yisfk
authors: chan, justin; foster, dean; gollakota, shyam; horvitz, eric; jaeger, joseph; kakade, sham; kohno, tadayoshi; langford, john; larson, jonathan; sharma, puneet; singanamalla, sudheesh; sunshine, jacob; tessaro, stefano
title: pact: privacy sensitive protocols and mechanisms for mobile contact tracing
date: 2020-04-07
journal: nan
doi: nan
sha:
doc_id: 236830
cord_uid: 0y5yisfk

the global health threat from covid-19 has been controlled in a number of instances by large-scale testing and contact tracing efforts. we created this document to suggest three functionalities showing how we might best harness computing technologies to support the goals of public health organizations in minimizing morbidity and mortality associated with the spread of covid-19, while protecting the civil liberties of individuals. in particular, this work advocates a third-party-free approach to assisted mobile contact tracing, because such an approach mitigates the security and privacy risks of requiring a trusted third party. we also explicitly consider the inferential risks involved in any contact tracing system, where any alert to a user could itself give rise to de-anonymizing information. more generally, we hope to participate in bringing together colleagues in industry, academia, and civil society to discuss and converge on ideas around a critical issue rising with attempts to mitigate the covid-19 pandemic. several communities and nations seeking to minimize death tolls from covid-19 are resorting to mobile-based contact tracing technologies as a key tool in mitigating the pandemic. harnessing mobile computing technologies is an obvious means to dramatically scale up conventional epidemic response strategies to do tracking at population scale.
however, straightforward and well-intentioned contact-tracing applications can invade personal privacy and provide governments with justification for data collection and mass surveillance that are inconsistent with the civil liberties that citizens will and should expect, and demand. to be effective, acceptable, and consistent with the need to observe commitments to privacy, we must leverage designs and computing advances in privacy and security. in cases where it is valuable for individuals to share data with others, systems must provide voluntary mechanisms in accordance with ethical principles of personal decision making, including disclosure and consent. we refer to efforts to identify, study, and field such privacy-sensitive technologies, architectures, and protocols in support of mobile tracing as pact (privacy sensitive protocols and mechanisms for mobile contact tracing). the objective of pact is to set forth transparent privacy and anonymity standards which permit adoption of mobile contact tracing efforts while upholding civil liberties.

[figure 1: the basic idea is that users broadcast signals ("pseudonyms"), while also recording the signals they receive. notably, this colocation approach avoids the need to collect and share absolute location information. credit: m. eifler.]

this work specifies a third-party-free set of protocols and mechanisms in order to achieve these objectives. while approaches which rely on trusted third parties can be straightforward, many naturally oppose the aggregation of information and power that they represent, the potential for misuse by a central authority, and the precedent that such an approach would set. it is first helpful to review the conventional contact tracing strategies executed by public health organizations, which operate as follows: positively tested citizens are asked to reveal (voluntarily, or enforced via public health policy or by law, depending on region) their contact history to public health officers.
the public health officers then inform other citizens who have been put at risk of exposure to the infectious agent based on co-location, via some definition of co-location, supported by look-up or inference about locations. the citizens deemed to be at risk are then asked to take appropriate action (often to either seek tests or to quarantine themselves and to be vigilant about symptoms). it is important to emphasize that the current approach already makes a tradeoff between the privacy of a positively tested individual and the benefits to society. we describe mobile contact-tracing functionalities that seek to augment the services provided by public health officers, by enabling the following capabilities via computing and communications technology:
• mobile-assisted contact tracing interviews: a citizen who becomes ill can use this functionality to improve the efficiency and completeness of manual contact tracing interviews. in many situations, the citizen can speed up the interview process by filling in much of a contact interview form before the contact interview process even starts, reducing the burden on public health authorities. privacy-sensitivity here is ensured since all the data remains on the user's device, except for what they voluntarily decide to reveal to health authorities in order to enable contact tracing. before making a decision to share, they are informed about how their data may be used and the potential risks of sharing.
• narrowcast messages: public health authorities can make available custom-tailored messages to specific, relevant subsets of citizens. for example, the following message might be issued: "if you visited the x eldercare center between march 7th and 10th, please email yy@hhhealth.org" or "please refrain from entering playground z until april 6th because it needs to undergo decontamination."
a mobile app can download all of these messages and display those relevant to a citizen based on the app's sensory log or potential future movements. this capability allows public health officials to quickly warn people when new hotspots arise, or to canvas for general information. it enables a citizen to be well informed about extremely local pandemic-relevant events.
• privacy-sensitive mobile tracing: proximity-based signals seem to provide the best available contact sensor from one phone to another; see figure 1 for the basic approach. proximity-based sensing can be done in a privacy-sensitive manner. with this approach, no absolute location information is collected or shared. variants of proximity-based analyses have been employed in the past for privacy-sensitive analyses in healthcare [23]. taking advantage of proximity-based signals can speed the process of contact discovery and enable contact tracing of otherwise undiscoverable people, like the fellow commuter on the train. this can also be done with a third-party-free approach providing privacy tradeoffs similar to those of manual contact tracing. this functionality can enable someone who has become ill with symptoms consistent with covid-19, or who has received confirmation of infection with a positive test for covid-19, to voluntarily and under a pseudonym share information that may be relevant to the wellness of others. in particular, a system can manage, in a privacy-sensitive manner, data about individuals who came in close proximity to them over a period of time (e.g., the last two weeks), even if there is no personal connection between these individuals. individuals who share information do so with disclosure and consent around potential risks of private information being shared. we further discuss disclosure, security concerns, and re-identification risks in section 2.
importantly, these protocols, by default, keep all personal data on a citizen's phone (aside from pseudonymous identifiers broadcast to other local devices), while enabling these key capabilities; information is shared via voluntary disclosure actions, with the implications relayed via careful disclosure. for example, if someone never tests positive for covid-19, or tests positive but decides not to use the system, then *no* data is ever sent from their phone to any remote servers; such individuals would be contacted by standard contact tracing mechanisms arising from reportable disease rules. the data on the phone can be encrypted and can be set up to automatically time out based on end-user-controlled policies. this would prevent the dataset from being accessed or requested via legal subpoena or other governmental programs and policies. we specify protocols for all three separate functionalities above, and each app designer can decide which ones to use. these protocols notably have different value adoption curves: narrowcast and mobile-assisted contact tracing have a value which is linear in the average adoption rate, while privacy-sensitive mobile tracing has value quadratic in the average adoption rate, since it requires both ends of a contact to be running the app. this quadratic dependence implies low initial value, so we expect narrowcast and mobile-assisted contact tracing to provide initial value in adoption, while privacy-sensitive mobile tracing provides substantial additional value once adoption rates are high. we note that there are an increasing number of concurrent contact tracing protocols being developed; see in particular section 5 for a discussion of solutions based on proximity-based tracing (as in figure 1). in particular, there are multiple concurrent approaches using proximity-based signaling; our approach has certain advantageous properties, as it is particularly simple and requires very little data transfer.
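the linear versus quadratic adoption argument can be illustrated with a toy calculation. here "coverage" is simply the chance that a given interaction is served by the feature, an assumption made for illustration and not a figure from the text:

```python
def narrowcast_coverage(p: float) -> float:
    """One-sided features (narrowcast, assisted interviews): a message
    reaches an interaction whenever the one relevant user runs the app,
    so coverage scales linearly with the adoption rate p."""
    return p

def mobile_tracing_coverage(p: float) -> float:
    """Proximity tracing needs *both* parties of a contact event to be
    running the app, so coverage scales with p squared."""
    return p * p

# At low adoption the quadratic feature lags far behind the linear one.
for p in (0.1, 0.5, 0.9):
    print(p, narrowcast_coverage(p), mobile_tracing_coverage(p))
```

at 10% adoption, only 1% of contact events are covered by proximity tracing, which is why the one-sided functionalities are expected to carry the early value.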
one point to emphasize is that, with this large number of emerging solutions, it is often difficult for the user to interpret what "privacy preserving" means in many of these protocols. one additional goal in providing the concrete protocols herein is to have a broader discussion of both privacy-sensitivity and security, along with a transparent discussion of the associated re-identification risks: the act itself of alerting a user to being at risk provides de-anonymizing information, as we discuss shortly. from a civil liberties standpoint, the privacy guarantees these protocols ensure are designed to be consistent with the disclosures already extant in contact tracing methods done by public health services (where some information from a positively tested citizen is revealed to other at-risk citizens). in short, we seek to empower public health services, while maintaining civil liberties. we also note that these contact tracing solutions are not meant to replace conventional contact tracing strategies employed by public health organizations; not everyone has a phone, and not everyone who has a phone will use this app. therefore, it is still critical to leverage conventional approaches, along with the approaches outlined in this paper.

[figure 2: pact tracing protocol. first, a user generates a random seed, which they treat as private information. then all users broadcast random-looking signals to users in their proximity via bluetooth and, concurrently, all users also record all the signals they hear being broadcast by other users in their proximity. each person's broadcasts (their "pseudonyms") are a function of their private seed, and they change these broadcast pseudonyms periodically (e.g. every minute). whenever a user tests positive, they can voluntarily publish, on a public server, information which enables the reconstruction of all the signals they have broadcast to others during the infection window (precisely, they publish their private seed, and, using the seed, any other user can figure out what pseudonyms the positive user has previously broadcast). now, any other user can determine whether they are at risk by checking whether the signals they heard are published on the server. note that the "public lists" can be either lists from hospitals, which have confirmed seeds from positive users, or they can be self-reports (see section 2.3). credit: m. eifler.]

in fact, two of our protocols are designed for assisting public health organizations (and are designed with input from public health organizations). throughout, we refer to an at-risk individual as one who has been in contact with an individual who has tested positive for covid-19 (under criteria as defined by public health programs, e.g., "within 6 feet for over 10 minutes"). before we start this discussion, it is helpful to consider one principle which the proposed protocols respect: "if you do not report as being positive, then no information of yours will leave your phone." from a more technical standpoint, the statement that is consistent with our protocols is: if you do not report as being positive, then only random ("pseudonymized") signals are permitted to be broadcast from your phone. these random broadcasts are what allow proximity-based tracing; see figure 2 for a description of the mobile tracing protocol. it is worthwhile to note that this principle is consistent, in spirit, with conventional contact tracing approaches, where only positively tested individuals reveal information to the public health authorities.
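the pseudonym mechanism of figure 2 can be sketched in a few lines. this is a minimal illustration, assuming (as the protocol section later specifies) that the pseudorandom generator is instantiated with sha-256 and that each output is split into the next seed and the broadcast pseudonym:

```python
import hashlib
import os

N = 16  # identifier length in bytes (n = 128 bits)

def g(seed: bytes) -> tuple[bytes, bytes]:
    """One PRG step: hash the current seed; the first half becomes the
    next seed, the second half is the pseudonym broadcast via Bluetooth."""
    d = hashlib.sha256(seed).digest()  # 32 bytes = 2n bits
    return d[:N], d[N:]

def pseudonyms(seed: bytes, count: int) -> list[bytes]:
    """Pseudonyms a device broadcasts over time; anyone holding the seed
    (e.g. after a voluntary positive report) can reconstruct them."""
    out, s = [], seed
    for _ in range(count):
        s, p = g(s)
        out.append(p)
    return out

seed = os.urandom(N)                 # private seed, never shared unless positive
broadcast = pseudonyms(seed, 8)      # what the device sends over eight time units
heard = {broadcast[3]}               # a nearby phone overheard one pseudonym
# after the seed is published, the contact recomputes the chain and checks overlap
at_risk = bool(heard & set(pseudonyms(seed, 8)))
```

because the chain is deterministic given the seed, publishing only the seed suffices to reveal the whole infection window of pseudonyms, while unreported users never transmit anything but random-looking strings.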
with the above principle, the discussion at hand largely focuses on what can be inferred when a positive disclosure occurs, along with how a malicious party can impact the system. we focus the discussion on the "mobile tracing" protocol for the following reasons: "narrowcasting" allows people to listen for events in their region, so it can be viewed as a one-way messaging system. for "mobile-assisted interviews," all the data remains on the user's device, except for what they voluntarily reveal to public health authorities in order to enable contact tracing. all the claims are consequences of basic security properties that can formally be proved about the protocol, and in particular about the cryptographic mechanism generating these random-looking signals. we start with what private information is protected and what is shared voluntarily, following disclosure and consent. the inferential risk arises because the alert itself is correlated with other information, from which a user could deduce de-anonymizing information.
1. if i tested positive and i voluntarily disclose this information, what does the protocol reveal to others? any other citizen who uses a mobile application following this protocol and who has been at risk is notified. in some versions the time(s) at which the exposure(s) occurred may be shared. in the basic mobile tracing system that we envision, beyond exposure to specific individuals, no information is revealed to any other citizens or entities (authorities, insurance companies, etc.). it is also worth noting that, if you are negative, the protocol does not directly transmit any of your private information to any public database or any other third party; the protocol does transmit the random ("pseudonymized") signals that your phone broadcasts.
2. re-identification and inferential risks. can the identity of a positive citizen, who chooses to report being positive, be inferred by others?
identification is possible and is a risk to volunteers who would prefer to remain de-identified. proximity-based identification of this sort is not possible to avoid in any protocol, even in manual contact tracing as done by public health services, simply because the exposure alert may contain information that is correlated with identifying information. for example, an individual who had been in close proximity to only one person over the last two weeks can infer the identity of this positively tested individual. however, the positive individual's identity is never explicitly broadcast. in fact, identities are not even stored in the dataset: only the positive person's random broadcasts are stored.
3. mitigating re-identification. can the app be designed so as to mitigate re-identification risks to average users? while the protocol itself allows a sophisticated user who is at risk to learn the time at which the exposure occurred, the app itself can be designed to mitigate the risk. for example, in the app design, the re-identification risk could be mitigated by only informing the user that they are at risk, or the app could provide only the rough time of day at which the exposure occurred. this is a mild form of mitigation, which a malicious or sophisticated user could try to circumvent.
we now directly address questions about the potential for malicious hackers, governments, or organizations to compromise the system. in some cases, cryptographically secure procedures can prevent certain attacks, and in other cases malicious disclosure of information is prevented because the protocol stores no data outside of your device by default. only cryptographically secure data from positively confirmed individuals is stored outside of devices.
1. integrity attacks. if you are negative, can a malicious citizen listen to your phone's broadcasts and then report positive, pretending to be you?
no, this is not possible, provided you keep your initial seed private (see figure 2). furthermore, even if the malicious party records all bluetooth signals going into and out of your phone, this is not possible. this attack is important to avoid: suppose a malicious entity observes all bluetooth signals sent from your phone; you would not want this entity to be able to report you as positive. the attack is not possible because the seed uniquely identifies your broadcasts and remains unknown to the attacker, unless the attacker is able to break the underlying cryptographic mechanism, which is unlikely to be possible.
2. inferential attacks. can the location of a positive citizen, who chooses to report being positive, be inferred by others? it is possible for a malicious party to simultaneously record broadcasts at multiple different locations, including those that the positive citizen visited. using these recordings, the malicious party could infer where the positive citizen was. the times at which the citizen visited these locations can also be inferred.
3. replay and reliability attacks. if a citizen is alerted to be at risk, is it possible the citizen was not in the proximity of a positive individual? there are a few unlikely attacks that can trigger a false alert. one is a replay attack. for example, suppose a malicious group of multiple individuals colludes to pretend to be a single individual; precisely, suppose they all use the same private seed (see figure 2). then if only one of these malicious individuals makes a positive report, multiple people can be alerted, even if those people were not in the proximity of the person who made the positive report. the protocol incorporates several measures to make such attacks as difficult as possible.
4. physical attacks. what information is leaked if a citizen's device is compromised by a hacker, stolen, or physically seized by an authority?
generally, existing mechanisms protect access to the storage of a phone. should these mechanisms fail, the device only stores enough information to reconstruct the signals broadcast over a period of time prior to the compromise equal to the length of the infection window (i.e., two weeks), in addition to the collected signals. this enables some additional inference attacks. it is not possible to learn whether the user has ever reported positive. given that we would like the protocol to be of use to different states and countries, we seek an approach which allows for both security in reporting and flexibility for the app designer in regions where it may make sense to consider reports which are self-confirmed positive tests or self-confirmed symptoms.
reporting. does the protocol support both medically confirmed positive tests and self-confirmed positive tests? yes, it supports both. the uploaded files contain signatures from the uploading party (i.e., from a hospital lab or from any app following the protocol). this permits an app designer the freedom to use information from health systems and information from individuals in possibly different manners. in less developed nations, it may be helpful to permit the app designer to allow for reports based on less reliable signatures.
reliability. how will the protocol handle issues of false positives and false negatives with regard to alerting? what about cases when users don't have (or use) their mobile phones? the protocol does not explicitly address this, but a deployment requires both thoughtful app design and responsible communication with the public. with regard to the former, the false positive and false negative rates have to be taken into account when determining how to make at-risk reports.
more generally, estimates of the probabilities can be helpful to a user (or an otherwise interpretable report); such reports can be particularly relevant for those in high-risk categories (such as the elderly and immuno-compromised individuals). furthermore, not everyone has a smartphone, and not everyone with a smartphone will use this app. thus, users of this app, if they have not received any notification of exposure to covid-19-positive cases, should not assume that they have not been around such positive cases. this means, for example, that they should still be cautious and follow all appropriate current public health guidelines, even if the app has not alerted them to possible covid-19 exposure. this is particularly important until there is sufficient penetration of the app in any local population. we now list threats that are outside of the scope of the protocol, yet important to consider. care should be taken to address these concerns:
• trusted communication. communication between users and servers must be protected using standard mechanisms (i.e., the tls protocol [17]).
• spurious entries. self-reporting allows a malicious user to report themselves positive when they are not, and generally may allow several fake reports (i.e., a flooding attack). mitigation techniques should be introduced to reduce the risk of such attacks.
• invalid authentication. positive reports should be validated using digital signatures, e.g., by healthcare providers. this requires appropriate public-key infrastructure to be in place. additional vulnerabilities related to misuse or misconfiguration of this infrastructure can affect the reliability of positive reports.
• implementation issues. implementation aspects may weaken some of our claims, and need to be addressed. for example, signals we send over bluetooth as part of our protocol may be correlated with other signals which de-anonymize the user.
we now provide an overview of the three functionalities of pact.
this section describes and discusses a privacy-sensitive mobile tracing protocol. our protocol follows a pattern wherein users exchange ids via bluetooth communication. if a user is both infected (we refer to such users as positive, and otherwise as negative) and willing to warn others who may have been at risk via proximity to the user, then de-identified information is uploaded to a server to warn other users of potential exposure. the approach has been followed by a number of similar protocols; we describe the differences with some of them in section 5. in appendix b, we discuss an alternative approach which may offer some efficiency and privacy advantages, at the cost of relying on signatures as opposed to hash functions. low-level technical details are omitted, e.g., how values are broadcast. further, it is assumed the communication between users and the server is protected using the transport layer security (tls) protocol. we first describe a variant of the protocol without entry validation, and discuss how to easily extend it to validate entries below.
• parameters. we fix an understood time unit dt and define ∆ such that ∆ · dt equals the infection window. (typically, this would be two weeks.) we also fix the bit length n of the identifiers. (typically, n = 128.) we also use a function g : {0,1}^n → {0,1}^{2n} which is assumed to be a secure cryptographic pseudorandom generator (prg). if n = 128, we can use g(x) = sha-256(x).
• pseudorandom id generation. every user broadcasts a sequence of ids id_1, id_2, . . .. to generate these ids, the user initially samples a random n-bit seed s_0, and then computes (s_i, id_i) ← g(s_{i−1}) for i = 1, 2, . . .. after i time units, the user only stores s* ← s_max{i−∆,0}, the time t* at which s* was generated, the current s_i, and the time t_i at which s_i was generated. note that if the device was powered off or the application disabled, we need to advance to the appropriate s_i.
• pseudorandom id collection.
for every id broadcast by a device in its proximity at time t, a user stores a pair (id, t) in its local storage s.
• reporting. to report a positive test, the user uploads (s*, t_start = t*, t_end = t_i) to the server, which appends it to a public list l. the server checks that t_start and t_end are reasonable before accepting the entry. once reported, the user erases its memory and restarts the pseudorandom id generation procedure.
• checking exposure. a user downloads l from the server (or the latest portion of it). for every entry (s*, t_start, t_end) in l, it generates the sequence of ids id*_1, . . ., id*_∆ starting from s*, as well as estimates t*_i of the time at which each id*_i was initially broadcast. if s contains (id*_i, t) for some i ∈ {1, . . ., ∆} such that t and t*_i are sufficiently close, the user is alerted of potential exposure.
setting delays. to prevent replay attacks, an entry (s*, t_start, t_end) should be published with a slight delay. this is to prevent an id*_∆ generated from s* being recognized as a potential exposure by any user if immediately rebroadcast by a malicious party.
entry validation. entries can (and should) be validated by attaching a signature σ on (s*, t_start, t_end) when reporting, as well as (optionally) a certificate to validate this signature. an entry thus has the form (s*, t_start, t_end, σ, cert). entries can be validated by multiple entities, by simply re-uploading them with a new signature. a range of designs and policies are supported by this approach. upon an initial upload, a (weakly secure) signature with an app-specific key could be attached for self-reporting. this signature does not provide any real security (as we cannot guarantee that an app-specific signing key remains secret), but can be helpful to offer improved functionality. third parties (like healthcare providers) can re-upload an entry with their signature after validation.
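the generation, reporting, and checking steps can be sketched end to end. this is an illustrative reading of the scheme, with sha-256 standing in for g and dt, ∆, and the time-slack threshold fixed for concreteness; it is not a reference implementation:

```python
import hashlib

N = 16                # n = 128 bits
DT = 15 * 60          # one time unit dt: 15 minutes (illustrative choice)
DELTA = 14 * 24 * 4   # Δ units covering a two-week infection window
SLACK = DT            # how close heard/estimated times must be to match

def g(seed: bytes) -> tuple[bytes, bytes]:
    """PRG step: (s_i, id_i) <- G(s_{i-1}), instantiated with SHA-256."""
    d = hashlib.sha256(seed).digest()
    return d[:N], d[N:]

def check_exposure(storage, entry) -> bool:
    """storage: local pairs (id, t) collected over Bluetooth.
    entry: one reported (s_star, t_start, t_end) record from the list L.
    Regenerates id*_1..id*_Δ from s_star and alerts if some heard id
    matches at a sufficiently close time (the replay-attack defence)."""
    heard = {}
    for id_b, t in storage:
        heard.setdefault(id_b, []).append(t)
    s = entry["s_star"]
    for i in range(DELTA):
        s, id_i = g(s)
        t_est = entry["t_start"] + i * DT  # estimated broadcast time of id_i
        if any(abs(t - t_est) <= SLACK for t in heard.get(id_i, ())):
            return True
    return False
```

note that the whole check runs on the user's device against downloaded entries; nothing about a negative user's stored pairs ever leaves the phone.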
an app can adopt different policies on how to display a potential exposure depending on how it is validated. we also do not specify here the infrastructure required to establish the validity of certificates, or how a user interacts with a validating party, as this is outside the scope of this description.
fixed-length sequences of ids. as stated, during the first ∆ − 1 time units a user will have generated a sequence of fewer than ∆ ids. during this time, the number of ids the user has generated from its current s* is determined by how long ago the user started the current pseudorandom id generation procedure (either when they initially started using the protocol or when they last submitted a report). this may be undesirable information to reveal to a party that gains access to the sequence of ids (e.g., if the user submits a report or if the party gains physical access to the user's device). so, to avoid revealing this information, a user may optionally iterate to s_∆ and use id_∆ as the first id they broadcast when starting or restarting the pseudorandom id generation procedure.
synchronized updates. suppose a user updates their seed every dt units of time from whenever they happened to originally start the id generation process. then it may be possible to correlate two ids of a user by noticing that the times at which the ids were initially broadcast were separated by a multiple of dt. to mitigate this, it would be beneficial to have an agreed schedule of when all users update their seed. for example, if dt is 15 minutes, then it might be agreed that everyone should update their seed at midnight utc, followed by 00:15, 00:30, and so forth.
privacy and integrity properties of the protocol follow from the following two propositions. (their proofs are omitted and follow from standard techniques.)
in the following discussion, it is convenient to refer to an id value id_i output by a user as unreported if it is not within the ∆ ids generated by a seed the user has reported to the server.
proposition 1 (pseudorandomness). all unreported ids are pseudorandom, i.e., no observer (other than the user) can distinguish them from random-looking strings (independent from the state of the user) without compromising the security of g.
proposition 2 (one-wayness). no attacker can produce a seed s which generates a sequence of ∆ ids that include an unreported id generated by an honest user (not controlled by the adversary) without compromising the security of g.
to discuss the consequences of these properties on privacy and integrity, let us refer to users as either "positive" or "negative" depending on whether they decided to report as positive, by uploading their seed to the server, or not.
• privacy for negative users. by the pseudorandomness property, a negative user u only broadcasts pseudorandom ids. these ids cannot be linked without knowledge of the internal state of u. this privacy guarantee improves with the frequency of updating the seed s_i; ideally, if a different id_i is broadcast each time, no linking is possible. this, however, results in less efficient checking for exposure by negative users.
• privacy for positive users. upon reporting positive, the last ∆ ids generated by the positive user can be linked. (we discuss what this means below, and possible mitigation approaches.) however, by pseudorandomness, this is only true for the ids generated within the infection window. older ids and newer ids cannot be linked with those in the infection window, or with each other. therefore, a positive user has the same guarantees as a negative user outside of the reported infection window.
• integrity guarantees. it is infeasible for an attacker to upload to the server a value s* which generates an unreported id that equals one generated by another user.
this prevents the attacker from misreporting ids of otherwise negative users and erroneously alerting their contacts.
timing information and replay attacks. the timestamping is necessary to prevent replay attacks. in particular, we are concerned about adversaries rebroadcasting ids of legitimate users (later tested positive) outside the range of their devices. this could cause a high number of false exposures to be reported. an attack we cannot prevent is the following relay attack: an attacker captures an id of an honest user at location a, sends it over the internet to location b, where it is re-broadcast. however, as soon as there is sufficient delay, the attack is prevented by maintaining sufficiently accurate timing information. (one can envision several accuracy compromises in the implementation, which we do not discuss here.)
strong integrity. our integrity property does not prevent a malicious user from reporting a seed s* generating an id which has already been reported. given an entry with seed s*, the attacker can simply choose, for example, the first half of g(s*) (i.e., the next seed in the reported chain) as their own seed. the threat of such attacks does not appear significant. however, they could be prevented with a less lightweight protocol, as we explain next. we refer to the resulting security guarantee as strong integrity. each user generates a signing/verification key pair (sk, vk) along with the initial seed. then, we include vk in the id generation process; in particular, let (s_i, id_i) ← g(s_{i−1}, vk). an entry now consists of (s*, t_start, t_end, vk, σ), where σ is a signature (with signing key sk) on (s*, t_start, t_end, vk). entries with invalid signatures are ignored. (this imposes slightly stronger assumptions on g: pseudorandomness under related seeds sharing part of the input, and binding of vk to s_i.) the cen protocol, discussed in section 5, is the only one that targets strong integrity, though their initial implementation failed to fully achieve it.
(the issue has been fixed after our report.) one explicit compromise we make is that the ids of a positive user can be linked within the infection window, and that the start and end times of the infection window are known. for example, an adversary collecting ids at several locations can detect that the same positive user has visited several locations at which it collects broadcast identifiers. this can be abused for surveillance purposes, but arguably, surveillance itself could be achieved by other methods. the most problematic aspect is the linking of this individual with the fact that they are positive. a natural approach to avoid linking, as in [4] , is for the server to only expose the ids, rather than a seed from which they are computed. however, this does not make them unlinkable. imagine, at an extreme, that the storage on the server is append-only (which is a realistic assumption). then, the ids belonging to the same user are stored sequentially. one can obfuscate this leakage of information in several ways, for example by having the server buffer a certain amount of new ids, and shuffle them before release. nonetheless, the actual privacy improvement is hard to assess without a good statistical model of upload frequency. this also increases the latency of the system, which directly harms its public health value. a user could also learn at which time the exposure took place, and hence infer the identity of the positive user from other available information. we stress that the application can and should refuse to display the time of potential exposure - thus preventing a "casual attacker" from learning timing information. however, a malicious app can always remember at which time an id has been seen. contact tracing interviews are laborious and often miss important events due to the limitations of human memory.
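the buffer-and-shuffle mitigation just described can be sketched as follows; the batch size and queueing policy are illustrative choices, and the latency cost of larger batches is exactly the trade-off noted above.

```python
import random

class ShufflingServer:
    """buffers uploaded ids and publishes them in shuffled batches, so the
    published order no longer reveals which ids arrived together.
    larger batches obfuscate more but delay publication."""
    def __init__(self, batch_size, rng=None):
        self.batch_size = batch_size
        self.buffer = []
        self.published = []
        self.rng = rng or random.Random()

    def upload(self, ids):
        self.buffer.extend(ids)
        # release only full batches, shuffled, holding the remainder back
        while len(self.buffer) >= self.batch_size:
            batch = self.buffer[:self.batch_size]
            del self.buffer[:self.batch_size]
            self.rng.shuffle(batch)
            self.published.extend(batch)
```

as the text notes, how much this actually helps depends on a statistical model of upload frequency: with few uploads per batch window, the batch boundaries still leak which ids likely came from one user.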
our plan to assist here is to provide information to the end user that can (with consent) be shared with a public health organization charged with performing contact tracing interviews. this is not an exposure of the entire observational log, but rather an extract of the information which is requested in a standard contact tracing interview. we have been working with healthcare teams from boston and the university of washington on formats and content of information that are traditionally sought by public health agencies. ideally, such extraction can be done working with the user before a contact tracing interview even occurs to speed the process. healthcare authorities from nyc have informed us that they would love to have the ability to make public service announcements which are highly tailored to a location or to a subset of people who may have been in a certain region during specific periods of time. this capability can be enabled with a public server supporting (area x time,message) pairs. here "area" is a location, a radius (minimum 10 meters), a beginning time and an ending time. only announcements from recognized public health authorities are allowed. anyone can manually query the public server to determine if there are messages potentially relevant to them per their locations and dwells at the locations over a period of time. however, simple automation can be extremely helpful as phones can listen in and alert based on filters that are dynamically set up based on privately-held locations and activities. upon downloading (area x time, message) pairs a phone app (for example) can automatically check whether the message is relevant to the user. if it is relevant, a message is relayed to the device owner. querying the public server provides no information to the server through the protocol itself, because only a simple copy is required. we discuss some alternative approaches to mobile tracing. 
some of these are expected to be adopted in existing and future contact-tracing proposals, and we discuss them here. hart et al. [9] provides a useful high-level understanding of the issues involved in contact tracing. they discuss, among other topics, the value of using digital technology to scale contact tracing and the trade-offs between different classes of solutions. pact users upload their locally generated ids upon a positive report. an alternative is to upload the collected ids of potentially at-risk users. this approach (which we refer to as the dual approach) has at least one clear security disadvantage and one mild privacy advantage over pact. (the latter is only true if the system is carefully implemented, as we explain below.) disadvantages: reliability and integrity attacks. in the dual approach, a malicious user cannot be prevented from very easily reporting a very large number of ids which were not generated by users in physical proximity. these ids could have been collected by colluding parties elsewhere, at any time before the report. such attacks can seriously hurt the reliability of the system. in pact, to achieve a similar effect, the attacker needs to (almost) simultaneously broadcast the same id in direct proximity of all individuals who should be falsely alerted as potentially at risk. pact ensures integrity of positive reporting by exhibiting a seed generating these ids, known only to the reporter. a user u cannot frame another negative user u' as a positive user by including an id generated by u'. in the dual approach, user u' could be framed, for example, by uploading ids that have been broadcast in their surroundings. advantage: improved temporal ambiguity. both in the dual approach and in pact-like designs, a user at risk can de-anonymize a positive user from the time at which the matching id was generated/collected, and other contextual information (e.g., a surveillance video).
the dual approach offers a mitigation to this using re-randomization of ids. we explain one approach [11] . let G be a prime-order cyclic group with generator g (instantiated via a suitable elliptic curve).
1. each user u chooses a secret key s_u as a random element in Z_p.
2. each broadcast id takes the form id_i = (g^{r_i}, g^{r_i·s_u}), where r_1, r_2, . . . are random elements of Z_p.
3. to upload an id of the form id = (x, y) with a report, a positive user uploads instead a re-randomized version id' = (x^r, y^r), where r is a fresh random value from Z_p.
4. to determine whether they are at risk, user u checks whether an id of the form id = (x, y) such that y = x^{s_u} is stored on the server.
under a standard cryptographic assumption - the so-called decisional diffie-hellman (ddh) assumption - the ids are pseudorandom. further, a negative user who learns they are at risk cannot tell which one of the ids they broadcast has been reported, as long as the reporting user re-randomized them and all ids have been generated using the same s_u. note that incorrect randomization only hurts the positive user. crucially, however, the privacy benefit inherently relies on each user u re-using the same s_u, and we cannot force a malicious user to comply. for example, to track movements of positive users, a surveillance entity can generate ids at different locations with form (x, y) where y = x^{s_l} and s_l depends on the location l. identifiers on the server with form (x, x^{s_l}) can then be traced back to location l. a functionally equivalent attack is in fact more expensive against pact, as this would require storing all ids of users broadcast at location l. we discuss an alternative centralized approach here, which relies on a trusted third party (ttp), typically an agency of a government. such a solution requires an initial registration phase with the ttp, where each user subscribes to the service. moreover, the protocol operates as follows: 1.
users broadcast random-looking ids and gather the ids collected in their proximity. 2. upon a positive test, a user reports to the ttp all of the ids collected in their proximity during the relevant infection window. the ttp then alerts the users who generated these ids, who are now at risk. in order for the ttp to alert potentially at-risk users, it needs to be able to identify the owners of these identifiers. there are a few technical solutions to this problem.
• one option is to have the ttp generate all ids which are used by the users - this requires either storing them or (in case only seeds generating them are stored) a very expensive check to identify at-risk users.
• a more efficient alternative for the ttp (but with larger identifiers) goes as follows. the trusted third party generates a public-key/secret-key pair (sk, pk), making pk public. it also gives a unique token τ u to each user u upon registration, which it remembers. then, the i-th id of user u is id i = enc(pk, τ u ). (note that encryption is randomized here, so every id i appears independent from prior ones.) the ttp can then efficiently identify the user who generated id i by decrypting it.
privacy considerations. such a centralized solution offers better privacy against attackers who do not collude with the ttp - in particular, only pseudorandom identifiers are broadcast at all times. moreover, at-risk individuals only learn that one of the ids they collected belongs to a positive individual. at-risk users can still collude, learning some information from the time of being reported at risk, and correlate identifiers belonging to the same positive user, but this is harder. the biggest drawback of this solution, however, is the high degree of trust placed in the ttp. for example:
• the ttp learns the identities of all at-risk users who have been in proximity of the positive subject.
• the ttp can, at any time and independently of any actual report, learn the identity of the user u who broadcasts a particular id, or at least link them to their token τ u . this could be easily exploited for surveillance of users adopting the service. security consideration. as in the dual approaches described above, it is trivial for a malicious party identifying as honest to report valid identifiers of other users (which may have been collected in a distributed fashion) to erroneously alert them as being at risk. replay attacks can be mitigated by encrypting extra meta-data along with τ u (e.g., a timestamp), but this would make ids even longer. if the ttp is malicious it can target specific users to falsely claim they are at risk or to refrain from informing them when they actually are at risk. it is also possible to design protocols based on the sensing of absolute locations (gps, and gps extended with dead reckoning, wifi, other signals per current localization methods) consistent with "if you do not report as being positive, then no information of yours will leave your phone" (see section 2). for example, a system could upload location traces of positives (cryptographically, in a secure manner), and then negative users, whose traces are stored on their phones could intersect their traces with the positive traces to check for exposure. this could potentially be done with stronger cryptographic methods to limit the exposure of information about these traces to negative users; one could think of this as a more general version of private-set intersection (psi) [1, 8, 15] . however, such solutions would still reveal traces of positives to a server. there are two reasons why we do not focus on the details of such an approach here: • current localization technologies are not as accurate as the use of bluetooth-based proximity detection, and may not be accurate enough to be consistent with medically suggested definitions for exposure. 
• approaches employing the sensing and collection of absolute location information would need to rely more heavily on cryptographic protocols to keep the positive users traces secure. however, this is an approach worth keeping in mind as an alternative, per assessments of achievable accuracies and relevance of the latter accuracies for public health applications. there are an increasing number of contact tracing applications being created with different protocols. we will briefly discuss a few of these and how their mobile tracing protocols compare with the approaches described in section 3.1 and 4. the privacy-sensitive mobile tracing protocols proposed by coepi [5] , covidwatch [6], as well as dp 3 t [20] , have a similar structure to our proposed protocol. we briefly describe the technical differences between all of these protocols and discuss the implications of these differences. similar to our proposed protocol, these are based on producing pseudorandom ids by iteratively applying a prg g to a seed. coepi and covidwatch use the contact event numbers (cen) protocol, in which the initial seed is derived from a digital signature signing key rak and g is constructed from two hash functions (which during each iteration incorporate an encoding of the number of iterations done so far and the verification key rvk which matches rak). another proposal is the dp 3 t [20] protocol, in which g is constructed from a hash function, a prf, and another prg. the latter prg is used so that a single iteration of g produces all the ids needed for a day. these ids are used in a random order throughout the day. both of these (under appropriate cryptographic assumptions) achieve the same sort of pseudorandomness and one-wayness properties as our protocol. the incorporation of rvk into g with cen is intended to provide strong integrity and allow a reporting user to include a memo with their report that is cryptographically bound to the report. 
two ideas for what such a memo might include are a summary of the user's self-reported symptoms (coepi) or an attestation from a third party verifying that the user tested positive (covidwatch). because a counter of how many times the seed has been updated is incorporated into g, a report must specify the corresponding counters. this leaks how long ago the user generated the initial seed, which could potentially be correlated with identifying information about the user (e.g., when they initially downloaded the app). an earlier version of cen incorrectly bound the digital signature key to the identifiers in a report. suppose an honest user has submitted a report for id j through id j' (for j < j') with a user-chosen memo. given this report, an attacker could create their own report that verifies as valid, but includes the honest user's id i for some i between j and j' together with a memo of the attacker's choosing. a fix was proposed after we contacted the team behind the cen protocol. the random order of a user's ids for a day by dp 3 t is intended to make it difficult for an at-risk individual to identify specifically when they were at risk (and thus potentially, by whom they were exposed). a protocol cannot hope to hide this sort of timing information from an attacker that chooses to record the time when they receive every id they see; this serves instead as a mitigation against a casual attacker using an app that does not store this sort of timing information. in our protocol and cen, information about the exposure time is not intended to be hidden at the protocol level. in our protocol the time an id was used is even included as part of a report and used to prevent replay attacks, as discussed earlier. cen does not use timing information to prevent replay attacks, but considers that an app may choose to give users precise information about where they were exposed (so the user can reason about how likely this potential exposure was to be an actual exposure).
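two timing-related mechanics from this comparison can be sketched together: dp 3 t-style randomized ordering of a day's ids, and the use of per-id broadcast times in a report to blunt replays. the hash-based expansion and the tolerance value are illustrative assumptions; the real protocols fix concrete algorithms we abstract here.

```python
import hashlib, random

def day_ids(day_seed: bytes, n: int, id_len: int = 16) -> list[bytes]:
    # expand a day seed into all n ids for the day, then return them in a
    # random broadcast order (dp 3 t-style; the prf/prg are abstracted to a
    # counter-mode sha-256 expansion for this sketch)
    stream = b"".join(hashlib.sha256(day_seed + i.to_bytes(4, "big")).digest()
                      for i in range(n * id_len // 32 + 1))
    ids = [stream[i * id_len:(i + 1) * id_len] for i in range(n)]
    random.shuffle(ids)
    return ids

def replay_safe_matches(report, observed, tolerance=900):
    # report: {id: time_in_use}, as when broadcast times accompany a report.
    # observed: [(id, time_seen)] recorded by the receiving phone.
    # a match counts only if the id was seen near the time it was in use,
    # so re-broadcasting a captured id much later raises no alert.
    # (tolerance = 900 s is a hypothetical choice.)
    return [oid for oid, seen in observed
            if oid in report and abs(seen - report[oid]) <= tolerance]
```

note the shuffled order only thwarts a casual observer, as the text says: an app that logs reception times defeats it regardless of order.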
a similar protocol idea was presented in [4] . it differs from the aforementioned proposals in that individual ids are uploaded to the server, rather than a seed generating them (leading to increased bandwidth and storage). alternatives using bloom filters to reduce storage are discussed, but these inherently decrease the reliability of the system. dp 3 t also recently included a similar protocol as an additional option, using cuckoo filters in place of bloom filters. the tracetogether [19] app is currently deployed in singapore. it uses the bluetrace protocol designed by a team at the government technology agency of singapore. this protocol is closely related to the encryption-based technique discussed in section 4.2. the private kit: safe paths app [18, 16] intends to use an absolute-location-centric approach to mobile tracing. they intend to mitigate some of the downsides discussed in section 4.3 by allowing the reported location traces of positive users to be partially redacted. it is unclear what methodology they intend to use for deciding how to redact traces; there is a trade-off in this redaction process between how easily a positive user can be identified from their trace and how much information must be removed from it (decreasing its usefulness). they intend to use cryptographic protocols (likely based on [2] ) to minimize the amount of information revealed about positive users' traces. a group of scientists at the big data institute of oxford university have proposed the use of a mobile contact-tracing app [12, 21] based on their analysis in [7] . the nexttrace [13] project aims to coordinate with covid-19 testing labs and users, providing software to enable contact tracing. the details of these proposals and the privacy protections they intend to provide are not publicly available. the projects we refer to are only a small selection of the mobile contact-tracing efforts currently underway.
a more extensive listing of these projects is being maintained at [22] , along with other information of interest to contact tracing.

6 discussion and further considerations

most protocols like ours store a seed on a server, which is then used to deterministically generate a sequence of identifiers. details differ in how exactly these sequences are generated (including the adopted cryptographic algorithms). however, it appears relatively straightforward for apps to be modified to support all of these different sequence formats. a potential challenge is that data from different protocols may provide different levels of protection (e.g., the lack of timing information may reduce the effectiveness against replay attacks). this difference in reliability may be surfaced via the user interface. in order to support multiple apps accessing servers for different services, it is important to adopt an interoperable format for entries to be stored on a server and, possibly, to develop a common api. we acknowledge that ethical questions arise with contact tracing and in the development and adoption of any new technology. the question of how to balance what is revealed for the good of public health vs. individual freedoms is one that is central to public health law. we reiterate that privacy is already impacted by tracing practices. in some nations, positively tested citizens are required, either by public health policy or by law, to disclose aspects of their history. such actions and laws frame multiple concerns about privacy and freedom, and bring up important questions. the purpose of this document is to lay out some of the technological capabilities, which supports broader discussion and debate about civil liberties and the risks that contact tracing can pose to civil liberties. another concern is accessibility to the service: not everyone has a phone (or will have the service installed).
one consequence of this is that the quality of contact tracing in a certain population inherently depends on factors orthogonal to the technological aspects, which in turn raises important questions about fairness. tracing is one part of a conventional epidemic response strategy, based on tests, tracing, and timeouts (ttt). programs involving all three components proceed as follows:
• test heavily for the virus. south korea ran over 20 tests per person found with the virus.
• trace the recent physical contacts for anyone who tests positive. south korea conducted mobile contact tracing using telecom information.
• timeout the virus by quarantining contacts until their immune system purges the virus, rendering them non-infectious.
the mobile tracing approach allows this strategy to be applied at a dramatically larger scale than relying on human contact tracers alone. this chain is only as strong as its weakest link. widespread testing is required and wide-scale adoption must occur. furthermore, strategies must also be employed so that citizens take steps to self-quarantine or seek testing (as indicated) when they are exposed. we cannot assume 100 percent usage of the application and concomitant enlistment in ttt programs. studies are needed of the effectiveness of the approach at different levels of subscription in a population.
this work was supported in part by nsf (1914873, 1812559). stefano tessaro acknowledges support from a sloan research fellowship and from the nsf under grants cns-1553758 and cns-1719146.
• bluetooth message: a bluetooth message consists of a fixed-length string of bytes. it is used with the bluetooth sensory log to discover if there is a match, which results in a warning that the user may have been in contact with an infected person.
• message: a message is a cryptographically signed string of bytes which is interpreted by the phone app.
this is used for either a public health message (announced to the user if the sensory log matches) or a bluetooth message. with the above defined, there are two common queries that the server supports as well as an announcement mechanism. • getmessages(region, time) returns all of the (area, message) pairs that the server has added since time for the region. the app can then check locally whether the area intersects with the recorded sensory log of (location,time) pairs on the phone, and alert the user with the message if so. • howbig(region, time) returns the (approximate) number of bytes worth of messages that would be downloaded on a getmessages call with the same arguments. howbig allows the phone app to control how much information it reveals to the server about locations/times of interest according to a bandwidth/privacy tradeoff. for example, the phone could start with a very coarse region, specifying higher precision regions until the bandwidth required is acceptable, then invoke getmessages. (this functionality is designed to support controlled anonymity across widely varying population densities.) • announce(area,message) uploads an (area, message) pair for general distribution. to prevent spamming, the signature of the message is checked against a whitelist defined with the server. we propose an alternative to the protocol in section 3.1. one main difference is that the server cannot generate the ids broadcast by a positive user, and only stores a short verification key used to identify ids broadcast by the positive user. while this does not prevent many of the inference scenarios we discussed above, this appears to be a desirable property. as we explain below, this protocol offers a different cost for checking exposure, which may be advantageous in some deployment scenarios. 
this alternative approach inherently introduces risks of replay attacks which cannot be prevented by storing timestamps, because the server obtains no information about the times at which ids have been broadcast. to overcome this, we build on top of a very recent approach of pietrzak [14] for replay-attack protection. (along similar lines, this can also be extended to relay-attack protection by including gps coordinates, but we do not describe this variant here.)
• setup and parameters. we fix an understood time unit dt. we make use of a digital signature scheme specifying algorithms for key generation, signing, and verification, denoted kg, sign, and vrfy, respectively. we also use a hash function h.
• pseudorandom id generation. each day d, the user generates a fresh key pair (sk d , vk d ) ← kg. for the i-th id of the day, they determine the current time t i = t d + dt · (i − 1). finally, the user samples n-bit random strings r i and r i ' and computes the identifier id i = (σ i , r i , h i ), where σ i = sign(sk d , r i ||h i ) and h i = h(r i ', t i ). they broadcast (id i , r i ', t i ). when day d ends the user deletes their signing key sk d . (the verification key vk d is not deleted until an amount of time equal to the infection window has elapsed.)
• pseudorandom id collection. for every id i = ((σ i , r i , h i ), r i ', t i ) broadcast by a device in their proximity, a user first checks if t i is sufficiently close to their current time and if h i = h(r i ', t i ). if so, they store id i in their local storage s.
• reporting. to report a positive test, the user uploads each of their recent vk d to the server, which appends them to a public list l. once reported, the user erases their memory and restarts the pseudorandom id generation procedure.
• checking exposure. a user downloads l from the server (or the latest portion of it). for every entry vk in l and every entry (σ, r, h) in s, they run vrfy(vk, σ, r||h). if this returns true, the user is alerted of potential exposure.
efficiency comparisons. let ∆ be the number of ids broadcast over the infection window. let s = |s| be the size of the local storage.
let l be the number of new verification keys a user downloads. to check exposure, the protocol from section 3.1 roughly runs in time l · ∆ · (t g + log s), where t g is the time needed to evaluate g. in contrast, for the protocol in this section, the time is l · s · t vrfy , where t vrfy is the time to verify a signature. one should note that t vrfy is generally larger than t g , but can still be quite fast. (for example, ed25519 enables fast batch signature verification.) therefore, the usage of this scheme makes particular sense if a user does not collect many ids, i.e., s is small relative to ∆ · log(s).
assumptions. we require the following two standard properties for the hash function h:
• pseudorandomness: for any x and a randomly chosen r ∈ {0, 1} n , the output h(r, x) looks random to anyone that doesn't know r.
• collision resistance: it is hard to find distinct inputs to h that produce the same output.
of our digital signature scheme we require the following three properties. the first is a standard property of digital signature schemes. the latter two are not commonly required of a digital signature scheme, so one needs to be careful when choosing a signature scheme to implement this protocol. we have verified that these properties are achieved by ed25519 under reasonable cryptographic assumptions.
• unforgeability: given vk and examples of σ = sign(sk, m) for attacker-chosen m, an attacker cannot produce a new (σ', m') for which vrfy(vk, σ', m') returns true.
• one-wayness: given examples of σ = sign(sk, m) for attacker-chosen m (but not given vk), an attacker cannot find vk' for which vrfy(vk', σ, m) returns true for any of the examples (σ, m).
• pseudorandomness: the output of sign(sk, ·) looks random to an attacker that does not know vk or sk.
privacy and security properties. we discuss the privacy and integrity properties this protocol has in common with the earlier protocol, as well as some newer properties not achieved by the earlier protocol.
• privacy for negative users.
by the pseudorandomness property, the signatures broadcast by a user u look pseudorandom. beyond that, u broadcasts two random strings and their view of the current time t i which is already known by any device hearing the broadcast. 4 thus these broadcasts cannot be linked without knowledge of the internal state of u. as before, this privacy guarantee improves with the frequency of generating new ids. • privacy for positive users. upon reporting positive, the ids broadcast by a user within a single day can be linked to each other. ids broadcast on different days can be linked if the server does not hide which vk's were reported together. older ids from days before the infection window and newer ids from after the report cannot be linked with those in the infection window or with each other. therefore, a positive user has the same guarantees as a negative user outside of the reported infection window. • integrity guarantees. it is infeasible for an attacker to upload to the server a value vk which verifies an unreported id that was broadcast by another user. this prevents the attacker from misreporting ids of otherwise negative users and erroneously alerting their contacts. • replay protection. the incorporation of t in each id prevents an attacker from performing a replay attack where they gather ids of legitimate users (to be tested positive) and re-broadcast the ids at a later time to cause false beliefs of exposure. a vk reported to the server cannot be used to broadcast further ids that will be recognized by other users as matching that report. • non-sensitive storage. because h(r i , t i ) looks random, the information intentionally stored by the app together with an id does not reveal when the corresponding interaction occurred. (of course, it may be possible to infer information about t i through close examination of how the id was stored, e.g., where it was written in memory as compared to other ids.) 
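the message flow of this signature-based variant can be sketched end to end. as a loud caveat, hmac with vk = sk stands in for the signature scheme purely to show the data flow: it provides none of the one-wayness or public verifiability required above, and a deployment would use ed25519. the constants n and dt are illustrative.

```python
import hashlib, hmac, os

N = 16     # bytes of randomness per id (illustrative)
DT = 300   # time unit dt, in seconds (illustrative)

def make_id(sk_d: bytes, t_i: int):
    # id_i = (sigma_i, r_i, h_i) with h_i = H(r_i', t_i) and
    # sigma_i = sign(sk_d, r_i || h_i); broadcast (id_i, r_i', t_i)
    r, r_prime = os.urandom(N), os.urandom(N)
    h = hashlib.sha256(r_prime + t_i.to_bytes(8, "big")).digest()
    sigma = hmac.new(sk_d, r + h, hashlib.sha256).digest()  # toy "signature"
    return (sigma, r, h), r_prime, t_i

def accept(broadcast, now: int) -> bool:
    # collection step: check t_i is fresh and that h_i opens correctly
    (sigma, r, h), r_prime, t_i = broadcast
    return (abs(now - t_i) <= DT and
            h == hashlib.sha256(r_prime + t_i.to_bytes(8, "big")).digest())

def exposed(vk_list, storage) -> bool:
    # exposure check: run vrfy(vk, sigma, r || h) for every downloaded key
    # against every stored triple (the l * s * t_vrfy cost discussed above)
    return any(hmac.compare_digest(sigma,
                                   hmac.new(vk, r + h, hashlib.sha256).digest())
               for vk in vk_list for (sigma, r, h) in storage)
```

the freshness check in `accept` is what enforces the replay protection: a rebroadcast id carries a stale t_i and is discarded before it ever reaches storage.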
information sharing across private databases
assessing disease exposure risk with location histories and protecting privacy: a cryptographic approach in response to a global pandemic
high-speed high-security signatures
anonymous collocation discovery: taming the coronavirus while preserving privacy
coepi: community epidemiology in action
quantifying sars-cov-2 transmission suggests epidemic control with digital contact tracing
efficient private matching and set intersection
outpacing the virus: digital response to containing the spread of covid-19 while mitigating privacy risks
rfc 8032: edwards-curve digital signature algorithm (eddsa)
delayed authentication: replay and relay attacks on dp-3t
phasing: private set intersection using permutation-based hashing
apps gone rogue: maintaining personal privacy in an epidemic
rfc 8446: the transport layer security (tls) protocol version 1.3. internet engineering task force (ietf)
private kit: safe paths; privacy-by-design contact tracing
decentralized privacy-preserving proximity tracing
sustainable containment of covid-19 using smartphones in china: scientific and ethical underpinnings for implementation of similar approaches in other settings
unified research on privacy-preserving contact tracing and exposure notification for covid-19
from web search to healthcare utilization: privacy-sensitive studies from mobile data

we gratefully acknowledge dean foster for contributions that are central in designing the current protocol, along with contributions throughout the current document. the authors thank yael kalai for numerous helpful discussions, along with suggesting the protocol outlined in section 4.1. we thank edward jezierski, nicolas di tada, vi hart, ivan evtimov, and nirvan tyagi for numerous helpful discussions. we also graciously thank m eifler for designing all the figures.
sham kakade acknowledges funding from the washington research foundation for innovation in data-intensive discovery, the onr award n00014-18-1-2247, and nsf grants #ccf-1637360 and #ccf-1740551. jacob sunshine acknowledges funding from nih (k23da046686).
a number of practical issues and details may arise with implementation.
1. with regards to anonymity, if the protocol is implemented over the internet, then geoip lookups can be used to localize the query-maker to a varying extent. people who really care about this could potentially query through an anonymization service.
2. the narrowcast messages in particular may be best expressed through existing software map technology. for example, we could imagine a map querying the server on behalf of users and displaying public health messages on the map.
3. the bandwidth and compute usage of a phone querying the full database may be too high. to avoid this, it is reasonably easy to augment the protocol to allow users to query within a (still large) region. we mention one such approach below.
4. disjoint authorities. across the world, there may be many testing authorities which do not agree on a common infrastructure but which do want to use the protocol. this can be accommodated by enabling the phone app to connect to multiple servers.
5. the mobile proximity tracing does not directly inform public authorities of who may be a contact. however, it does provide some bulk information, simply due to the number of posted messages.
there are several ways to implement the server. a simple approach, which works fine for not too many messages, just uses a public github repository. a more complex approach supporting regional queries is defined next. anyone can ask for a set of messages relevant to some region r, where r is defined by a latitude/longitude range, with messages after some timestamp.
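the regional query server just described, together with the coordinate coarsening it relies on, can be sketched as follows. the region definition below uses bit prefixes; decimal-digit truncation here is an illustrative simplification, the authority identifier is hypothetical, and signature checking is reduced to a whitelist lookup.

```python
import time

WHITELIST = {"nyc-doh"}   # recognized public health authorities (hypothetical)

def coarsen(lat, lon, digits=0):
    # truncate coordinates to the stated decimal precision, e.g.
    # (40.71455, -74.00712) -> (40.0, -74.0) at zero decimal digits
    f = 10 ** digits
    return int(lat * f) / f, int(lon * f) / f

class NarrowcastServer:
    """sketch of the regional (area, message) server: announce with a
    whitelist check, getmessages scoped by region and timestamp, and
    howbig for the bandwidth/privacy trade-off."""
    def __init__(self):
        self.entries = []                     # (added_at, region, message)

    def announce(self, region, message, signer):
        if signer not in WHITELIST:           # spam control via whitelist
            raise PermissionError("unrecognized public health authority")
        self.entries.append((time.time(), region, message))

    def getmessages(self, region, since):
        # only messages added after `since` and matching the coarse region
        return [m for t, r, m in self.entries if t > since and r == region]

    def howbig(self, region, since):
        # approximate bytes a getmessages call would download, so the app
        # can widen or narrow the region before committing to a download
        return sum(len(m) for m in self.getmessages(region, since))
```

the intended client pattern, as described above, is to call `howbig` with a coarse region first and refine until the download size (and hence what the query reveals) is acceptable, then call `getmessages` and filter locally against the phone's private location log.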
more specific subscriptions can be constructed on the fly based on policies that consider a region r and privately observed periods of time that an individual has spent in a region. such scoped queries and messaging services that relay content based on location, or on location and periods of time, are a convenience to make computation and communication tractable. the reference implementation uses regions greater in size than typical geoip tables. to be specific, let's first define some concepts.

• region: a region consists of a latitude prefix, a longitude prefix, and the precision in each. for example, new york, which is at 40.71455 n, -74.00712 e, can be coarsened to 40 n, -74 e with two digits of precision (the actual implementation would use bits).
• time: a timestamp is specified as the number of seconds (a 64-bit integer) since january 1, 1970.
• location: a location consists of a full-precision latitude and longitude.
• area: an area consists of a location, a radius, a beginning time, and an ending time.

key: cord-349548-loi1vs5y authors: mueller, markus; derlet, peter; mudry, christopher; aeppli, gabriel title: using random testing in a feedback-control loop to manage a safe exit from the covid-19 lockdown date: 2020-04-14 journal: nan doi: 10.1101/2020.04.09.20059360 sha: doc_id: 349548 cord_uid: loi1vs5y

we argue that frequent sampling of the fraction of infected people (either by random testing or by analysis of sewage water) is central to managing the covid-19 pandemic because it both measures in real time the key variable controlled by restrictive measures, and anticipates the load on the healthcare system due to progression of the disease.
knowledge of random testing outcomes will (i) significantly improve the predictability of the pandemic, (ii) allow informed and optimized decisions on how to modify restrictive measures, with much shorter delay times than the present ones, and (iii) enable the real-time assessment of the efficiency of new means to reduce transmission rates. here we suggest, irrespective of the size of a suitably homogeneous population, a conservative estimate of 15'000 for the number of randomly tested people per day, which will suffice to obtain reliable data about the current fraction of infections and its evolution in time, thus enabling close to real-time assessment of the quantitative effect of restrictive measures. still higher testing capacity permits detection of geographical differences in spreading rates. furthermore and most importantly, with daily sampling in place, a reboot could be attempted while the fraction of infected people is still an order of magnitude higher than the level required for a relaxation of restrictions with testing focused on symptomatic individuals. this is demonstrated by considering a feedback and control model of mitigation where the feedback is derived from noisy sampling data. the covid-19 pandemic has led to a worldwide shutdown of a major part of our economic and social activities. this political measure was strongly suggested by epidemiologic studies assessing the cost in human lives depending on different possible strategies (doing nothing, mitigation, suppression) [1-3]. mitigation can be achieved by different strategies, such as physical distancing, contact tracing, restricting public gatherings, and the closing of schools, but also the testing for infections. the quantitative impact of very frequent testing of the entire population for infectiousness has been studied in a recent unpublished work by jenny et al. in ref. [4]. we will estimate in sec.
iii that to fully suppress the covid-19 pandemic by widespread testing for infections, one needs a capacity to test millions of people per day in switzerland. this should be compared to the present number of 7'000 tests per day across switzerland. 1 however, we show that tracking and control of this pandemic is possible by testing a much smaller number of randomly selected people per day. in addition, we will argue that even with currently available testing rates, extremely valuable information on the rates of transmission depending on geographic regions of switzerland can be obtained. figure 1 summarizes the key concept of the paper, namely a feedback and control model for the pandemic. the key output from random testing is the growth rate of the number of infected people, which itself is regulated by measures such as those enforcing physical distances between persons (physical distancing), 2 and whose tolerable values are fixed by the capacity of the healthcare system. a feedback and control approach, [5] familiar from everyday implementations such as for thermostats regulating heaters and air conditioners, should allow policy makers to damp out oscillations in disease incidence which could lead to peaks in stress on the healthcare system as well as the wider economy. a further important benefit of this feedback and control scheme is that it allows a much faster and safer reboot of the economy than with the current feedback through confirmed infection numbers, for the latter is heavily delayed and reflects the state of the pandemic only incompletely. the resulting difference in the ability to control the disease is illustrated in fig. 2 . without feedback and control informed by a key parameter, analogous to the temperature provided by the thermometer in the thermostat example, measurable in (near) real time, there is a huge delay between policy changes and the observable changes in terms of positively tested people. 
to release restrictions safely, the fraction of infected people must decrease to a level i * * such that a subsequent undetected growth during 10-14 days will not move it above the critical fraction i c manageable by the healthcare system. the current situation where we are mainly looking at lagging indicators, namely infection rates among symptomatic individuals or even deaths, is comparable to driving a car from the back seat and with knowledge only of the damage caused by previous collisions. to minimize harm to the occupants of the vehicle, driving very slowly is essential, and oscillations from a straight course are likely to be large. daily random testing reduces the delay between changes in policy and the observation of their effects very significantly. moreover, it directly measures the key quantity of interest, namely the fraction of infected people and its growth rate, information that is very valuable to gauge further interventions. such information is much harder to infer from data about positively tested patients only, by fitting it to specific epidemiological models with their inherent uncertainties. the shortened time delay due to feedback and control allows a reboot to be attempted at much higher levels of infections, i * > i * * , which implies a much shorter time in lockdown. the paper is organized as follows. we summarize and explain the key findings of the paper in simple terms in sec. ii. in sec. iii, we discuss the use of massive testing as a direct means to contain the pandemics, showing that it requires a 100-fold increase of the current testing frequency. in sec. iv, we define the main challenge to be addressed: to measure the quantitative effect of restrictive measures on the transmission rate. section v introduces the idea of randomized testing. section vi constitutes the central part of this paper. it is shown how data from sparse sampling tests can be used to infer essentially instantaneous growth rates, and their regional dependence. 
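the thermostat-like feedback loop described above can be caricatured in a few lines. this is a minimal sketch: the threshold values are placeholders of ours, not recommendations from the paper.

```python
def control_step(k_measured, kappa_plus=0.05, kappa_minus=-0.05):
    """one iteration of the feedback loop: compare the measured growth
    rate of infections with tolerable thresholds and decide whether to
    tighten, release, or hold restrictions (thresholds are placeholders)."""
    if k_measured > kappa_plus:
        return "restrict"   # infections growing too fast
    if k_measured < kappa_minus:
        return "relax"      # infections decaying; measures can be eased
    return "hold"           # near the marginally stable target k = 0
```

the point of random testing is precisely that `k_measured` becomes available within days rather than weeks, so this loop can run with short delays.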
we define a model of a testing-feedback-driven intervention strategy and analyze it theoretically. this model is also analyzed numerically in sec. vii. section viii generalizes the modelling to a regionally refined analysis of the epidemic growth pattern, which becomes the preferred choice if higher testing rates become available. we conclude with sec. ix by summarizing our results and their implication for a safe reboot after the current lockdown. in the appendix we address the use of contact tracing and argue that it can complement, but not substitute for, random testing. the key quantity measured by random testing is the growth rate k of infection numbers. if k exceeds a tolerable upper threshold κ+, restrictions are imposed. for k below a lower threshold κ−, and if infection numbers are below critical, restrictions are released. in the absence of a substantial influx of infected people from outside the country, and provided infection numbers are below a critical value, the optimal target of the growth rate is k = 0, corresponding to a marginally stable state, where infections neither grow nor decrease exponentially with time. if higher testing rates are available, the measured observables and control strategies can be geographically refined. we argue that the moderate number of 15'000 random tests per day yields valuable information on the dynamics of the disease. assuming that at a given time a fraction of about i_0 ≈ 0.07% of the population is infected, the order of 10 infected people will be detected every day. can such a small number of detected infections be useful at all, given that these numbers fluctuate significantly from day to day? the answer is yes. we show that after a few days the acquired signal becomes stronger than the noise level. it is then possible to establish whether the infection number is growing or decreasing and, moreover, to obtain a quantitative estimate of the instantaneous growth rate k(t). one of our central results is eq.
(12c) for the time where the signal becomes clear, which we rewrite in the simplified form

∆t_1 ≈ [c / (k_1^2 r i_0)]^(1/3),

where k_1 is the current growth rate of infections to be detected, r is the number of tests per day, and i_0 is the currently infected fraction of the population. the numerical constant c depends on the required signal-to-noise ratio. a typical value when detecting large values of k_1 is c ≈ 30-40. this result shows that the higher the number of tests r per day, the shorter the time to detect a growth or a decrease of the infected population. the smaller the current growth rate k_1, the longer the time to detect it above the noise inherent to the finite sampling.

fig. 2 (caption): the dynamics of the pandemic with and without a feedback and control scheme in place, as measured by the fraction i of infected people (logarithmic scale). after the limit of the health system, i_c, has been reached, a lockdown brings i down again. the exponential rate of decrease is expected to be very slow, unless extreme measures are imposed. the release of measures upon a reboot is likely to re-induce exponential growth, but with a rate difficult to predict. three possible outcomes are shown as blue curves in the scenario without testing feedback, where the effect of the new measures becomes visible only after a delay of 10-14 days. in the worst case, i grows by a multiplicative factor of order 20 before the growth is detected. a reboot can thus be risked only once i ≤ i** ≡ i_c/20, implying a very long time in lockdown after the initial peak. due to the long delay until policy changes show observable effects, the fluctuations of i will be large. random testing (the red curve) has a major advantage. it measures i instantaneously and detects its growth rate within a few days, whereby the higher the testing rate, the faster the detection. policy adjustments can thus be made much faster, with smaller oscillations of i. a safe reboot is then possible much earlier, at the level of i ≤ i* ≈ i_c/4.
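the detection-time scaling can be checked numerically, assuming the simplified relation ∆t_1 = [c/(k_1^2 r i_0)]^(1/3) with a midrange choice c = 35 inside the quoted range c ≈ 30-40 (the exact prefactor is an assumption of this sketch).

```python
import math

def detection_time(k1, r, i0, c=35.0):
    """days until growth at rate k1 (per day) emerges above sampling
    noise, via dt1 = (c / (k1**2 * r * i0))**(1/3); c ~ 30-40 encodes
    the required signal-to-noise ratio."""
    return (c / (k1 ** 2 * r * i0)) ** (1.0 / 3.0)

# unmitigated growth (doubling every 3 days), r = 15'000 tests/day,
# i0 = 0.07% of the population infected:
dt1 = detection_time(k1=0.23, r=15_000, i0=0.0007)
growth = math.exp(0.23 * dt1)  # factor by which infections rise meanwhile
```

with these inputs the detection delay comes out between 3 and 4 days, during which infections rise by a factor between 2 and 3, consistent with the numbers quoted in the text.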
how long would it take to detect that a release of restrictive measures has resulted in a nearly unmitigated growth rate of the order of k 1 = 0.23 (which corresponds to doubling every 3 days)? even with a moderate number of r = 15 000 per day, we find that within only ∆t 1 ≈ 3 − 4 days such a strong growth will emerge above the noise level, such that countermeasures can be taken (see fig. 6 ). during this short time, the damage remains limited. the infection numbers will have risen by a multiplicative factor between 2 and 3. this degree of control must be compared to a situation where no information on the current growth rate is available, and where the first effects of a new policy are seen in the increased number of symptomatic, sick people only 10-14 days later. over this time span, with a growth rate of k 1 = 0.23, the infection numbers will have grown by a factor of 10-30 before one realizes eventually that an intervention must be made. random testing decreases both the time scale until informed policy adjustments can be taken and the temporal fluctuations of the infection numbers. as in any feedback and control loop, the more frequent the testing is, the shorter are the delay times, and thus the smaller are the fluctuations. the various benefits of increasing the testing frequency are shown in fig. 5 , which are obtained by simulating a specific mitigation strategy, where we built in the uncertainty about the efficacy of political interventions. the shorter delay times and the reduced fluctuations result in decreased strain on the health system, lower economic costs, and a lower number of required interventions. in addition to these benefits, a higher testing rate r also opens the opportunity to analyze geographic differences and refine the mitigation strategy accordingly, as we discuss in sec. viii. if the massive frequency of 1.5 million tests per day becomes available in switzerland, it will be possible to test any swiss resident every 5 to 6 days. 
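the arithmetic in this comparison can be verified directly; all numbers below are those quoted in the text.

```python
import math

k1 = 0.23                                 # unmitigated growth rate per day
with_testing = math.exp(k1 * 3.5)         # ~3-4 day detection delay
without_testing = math.exp(k1 * 12.0)     # 10-14 day delay via symptomatic cases

# at 1.5 million tests/day, every swiss resident could be tested every
# 8'500'000 / 1'500'000 ≈ 5.7 days, i.e., "every 5 to 6 days":
days_between_tests = 8_500_000 / 1_500_000
```

the damage during the delay is a factor of 2-3 with random testing versus 10-30 without, which is the entire case for shortening the feedback loop.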
if the infected people that have been detected are kept in strict quarantine (such that they will not infect anybody anymore with high probability), such massive testing could be sufficient to prevent an exponential growth in the number of cumulated infections without the need of draconian physical distancing measures. we now explain qualitatively our approach to reach this conclusion. a refined analysis has been given in ref. [4]. the required testing rate can be estimated as follows. let ∆t denote the average time until an infected person infects somebody else. the reproduction number r, i.e., the number of new infections transmitted on average by an infected person, falls below 1 (and thus below the threshold for exponential growth) if non-diagnosed people are tested at time intervals of no more than 2∆t. thus, the required number of tests over the time 2∆t, the full testing rate τ_full^(-1), is

τ_full^(-1) = n_ch / (2∆t),

where n_ch ≈ 8'500'000 is the number of inhabitants of switzerland. 3 without social restrictions, it is estimated that [6] ∆t ≈ 3 days, such that

τ_full^(-1) ≈ 8'500'000 / (6 days) ≈ 1.4 × 10^6 per day,

i.e., about 1.4 million tests per day would be required to control the pandemic by testing only. if additional restrictions such as physical distancing, etc., are imposed, ∆t increases by a modest factor and one can get by with proportionally fewer tests per day. nevertheless, on the order of 1 million tests per day is a minimal requirement for massive testing to contain the pandemic without further measures. however, even while the swiss capabilities are still far from reaching 1 million tests per day, testing for infections offers two important benefits in addition to identifying people that need to be quarantined. first, properly randomized testing allows one to monitor and study the efficiency of measures that keep the reproduction number r below 1. this ensures that the growth rate k of case numbers and new infections is negative, k < 0.
second, frequent testing, even if applied to randomly selected people, helps suppress the reproduction number r and thus allows policy to be less restrictive in terms of other measures, such as physical distancing. to quantify the latter benefit, observe that the effect of massive testing on the growth rate k is proportional to the testing rate [4]. let us assume that without testing or social measures one has a growth rate k_0. then, if the testing rate τ_full^(-1) is sufficient to completely suppress the exponential growth in the absence of other measures, a smaller testing rate τ^(-1) decreases the growth rate k_0 by (τ^(-1)/τ_full^(-1)) × k_0, i.e., down to k = [1 − τ^(-1)/τ_full^(-1)] k_0. the remaining reduction of k to zero must then be achieved by a combination of restrictive social measures and contact tracing. it is possible to refine the argument above to take account of the possibility of a spectrum of tests with particular cost/performance trade-offs, i.e., a cheaper test with more false positives and negatives could be used for random testing, whereas those displaying symptoms would be subjected to a "gold standard" (pcr) assay of viral genetic material. a central challenge for establishing reliable predictions for the time evolution of pandemics is the quantification of the effect of social restrictions on the transmission rate [3]. policymakers and epidemiologists urgently need to know by how much specific restrictive measures reduce the growth rate k. without that knowledge, it is essentially impossible to take an informed decision on how to optimally combine such measures to achieve a (marginally) stable situation, defined by the condition of a vanishing growth rate,

k = 0. (4)

indeed, marginal stability is optimal for two reasons. first, it is sustainable in the sense that the burden on the health system does not grow with time. second, it is the least economically and socially restrictive state compatible with the stability requirement. in secs.
v and vi, we show how marginal stability can be achieved, while simultaneously measuring the effects of a particular set of restrictions. we claim that statistically randomized testing can be used in a smart way, so as to keep the dynamics of the pandemic under control as per the feedback loop of fig. 1. we emphasize that this is possible without the current time delays of up to 14 days. the latter arise since we only observe confirmed infections stemming from a highly biased test group that eventually shows symptoms long after the initial infection has occurred. the idea of smart testing is the following. one regularly tests randomized people [footnote 4] for infectiousness [footnote 5]. we stress that randomized testing is essential to obtain information on the current number of infections and its evolution with time. it serves an additional and entirely different purpose from testing people with symptoms, medical staff, or people close to somebody who has been infected, all of whom constitute highly biased groups of people. the first goal of random testing is to obtain a firm test/confirmation of whether the current restrictive measures are sufficient to mitigate or suppress the exponential growth of the covid-19 pandemic [footnote 6], and whether the effectiveness differs from region to region. in case the measures should still be insufficient, one can measure the current growth rates and monitor the effect of additional restrictive measures.

footnote 4: it is important that the set of randomly selected people must change constantly, so that it should happen extremely rarely that a given person is tested twice.

footnote 5: here, we solely focus on a person being infectious, but not on whether the person has developed antibodies. the latter test indicates that the person has been infected any time in the past. testing for antibodies and (potential) immunity has its own virtues, but aims at different goals from the random testing for infections that we advocate here.
by following the fraction of infections as a function of time, we can determine nearly instantaneously the growth rate of infections, k(t), and thus assess and quantify the effectiveness of socio-economic restrictions through the observed changes in k following a change in policy. this monitoring can even be carried out in a regionally resolved way, such that subsequently, restrictive or relaxing measures can be adapted to different regions (urban/rural etc.).

footnote 6: a suppression of the covid-19 pandemic is achieved if, for a sufficiently long time, the number of infections decays exponentially with time. mitigation aims to reduce the exponential rate of growth in the number of infections. stability is achieved when that number tends to a constant. once stability is reached, one may start relaxing the restrictions step by step and monitor the effect on the growth rate k as a function of geographic regions.

[the copyright holder for this preprint (which was not peer-reviewed) is the author/funder, who has granted medrxiv a license to display the preprint in perpetuity. it is made available under a cc-by-nc-nd 4.0 international license. https://doi.org/10.1101/2020.04.09.20059360 (medrxiv preprint)]

we first analyze random testing for the case where we treat the country as a homogeneous entity with a population n. this will allow us to understand how testing frequency affects key characteristics of policy strategies. we consider the following model. let u be the actual undetected number of infected people. (we assume that detected people do not spread the disease.) the spreading of infections is assumed to be governed by the inhomogeneous, linear growth equation

du(t)/dt = k(t) u(t) + φ(t), (5)

where k(t) is the instantaneous growth rate and φ(t) accounts for infections arising from people crossing the national border. we will later set that influx to zero.
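the growth equation for the undetected infections u can be integrated numerically with a simple forward-euler sketch (illustrative only; the parameter values below are arbitrary).

```python
def integrate_u(u0, k_of_t, phi_of_t, t_end, dt=0.01):
    """forward-euler integration of du/dt = k(t)*u + phi(t), the linear
    growth equation for the undetected number of infections u."""
    u, t = u0, 0.0
    while t < t_end:
        u += dt * (k_of_t(t) * u + phi_of_t(t))
        t += dt
    return u

# with constant k and zero influx the exact solution is u0 * exp(k*t);
# euler with a small step should land close to 100 * e ≈ 271.8:
u = integrate_u(100.0, lambda t: 0.1, lambda t: 0.0, t_end=10.0)
```

piecewise-constant k(t), as assumed in the analysis, simply means `k_of_t` is a step function between interventions.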
an equation of the form (5) is usually part of a more refined epidemiological model [7-9] that accounts explicitly for the recovery or death of infected persons. for our purpose, the effect of these has been lumped into an overall time-dependence of the rate k(t). for example, it evolves as the number of immune people grows, restrictive measures change, mobility is affected, new tracking systems are implemented, hospitals reach their capacity, testing is increased, etc. nevertheless, over a short period of time where such conditions remain constant, and the fraction of immune people does not change significantly, we can assume the effective growth rate k(t) to be piecewise constant in time [footnote 7]. we will exploit this below. for t < 0, we assume stability with k(t) = k_0 < 0. such a stable state needs to be reached before a reboot of the economy can be considered. at t = 0 restrictive measures are first relaxed, resulting in an increase of the growth rate k from k_0 to k_1, which we assume positive, k_1 > 0. hence, compensating countermeasures are required at later times in order to avoid another exponential growth of the pandemic. we now want to monitor the performance of policy strategies that relax or re-impose restrictions, step by step. the goal for an optimal policy strategy is to reach a marginally stable state (4) (i.e., with k = 0) as smoothly, safely, and rapidly as possible. in other words, marginal stability is to be reached with the least possible damage to health, economy, and society. this expected outcome is to be optimized while controlling the risk of rare fluctuations.

footnote 7: replacing the function k(t), assumed to be differentiable, by a piecewise constant function is a good approximation provided |k̇(t)| ∆t(k) ≪ |k(t)|, where k̇(t) is the time derivative of k(t) and ∆t(k) is given by eq. (13a) with the replacement k_1 → k(t).
to model the performance of policy strategies we neglect the contributions to the time evolution of k(t) due to the increasing immunity or the evolution in the age distribution of infected people. we also neglect periodic temporal fluctuations of k(t) (e.g., due to alternation between workdays and weekends), which can be addressed in further refinements. instead, we assume that k(t) changes only in response to policy measures which are taken at specific times when certain criteria are met, as defined by a policy strategy. an intervention is made when the sampled testing data indicates that, with high likelihood, k(t) exceeds some upper threshold κ+. likewise, a different intervention is made should k(t) be detected to fall below some negative threshold κ−. note that if there is substantial infection influx φ(t) across the national borders, one may want to choose the threshold κ+ to be negative, to avoid a too large response to the influx. from now on we neglect the influx of infections, and consider a homogeneous growth equation. to reach decisions on policy measures, data is acquired by daily testing of random sets of people for infections. we assume that the tests are carried out at a limited rate r (a finite number of tests divided by a nonvanishing unit of time). let i(t, ∆t) be the fraction of positive infections detected among the r∆t ≫ 1 tests carried out in the time interval [t, t + ∆t]. by the law of large numbers, it is a gaussian random variable with mean i(t) and standard deviation δi = [i(t)(1 − i(t))/(r∆t)]^(1/2) ≈ [i(t)/(r∆t)]^(1/2), since i ≪ 1. the current value of k(t) is estimated as k_fit(t) by fitting these test data to an exponential, where only data since the last policy change should be used. the fitting also yields the statistical uncertainty (standard deviation), which we call δk(t). it will take at least 2-3 days to make a fit that is reasonably trustworthy. if the instability threshold is surpassed by a certain level, i.e., if k_fit(t) − κ+ > α δk(t),
a new restrictive intervention is taken. if instead κ− − k_fit(t) > α δk(t), a new relaxing intervention is taken. here, the parameter α is a key parameter defining the policy strategy. it determines the confidence level that policymakers require, before deciding to declare that a stability threshold has indeed been crossed. this strategy will result in a series of intervention times t_ι, starting with the initial step to reboot at t_1 = 0. in the time window [t_ι, t_ι+1], the growth rate k(t) is constant and takes the value k_ι = k_(ι−1) − ∆k^(ι), where the policy choice ∆k^(ι) > 0 corresponding to a restrictive measure is made to bring back k(t) below the upper threshold κ+, while the policy choice ∆k^(ι) < 0 is made to bring back k(t) above the lower threshold κ−. the difficulty for policymakers is due to the fact that so far the quantitative effect of an intervention is not known. we model this uncertainty by assuming ∆k^(ι) to be random to a certain degree. if at time t, k_fit(t) crosses the upper threshold κ+ with confidence level p, we set t_ι = t and a restrictive measure is taken, i.e., ∆k^(ι) is chosen positive. we take the associated decrement ∆k^(ι) to be uniformly distributed on the interval [0, 2∆k^(ι)_opt,+], where ∆k^(ι)_opt,+ ≡ k_fit(t_ι) − κ+. this describes that, while the policymakers aim to reset the growth factor k to κ+, the result of the measure taken may range from having no effect at all (when ∆k^(ι) = 0) to overshooting by a factor of 2 (when ∆k^(ι) = 2∆k^(ι)_opt,+), with ∆k^(ι) = ∆k^(ι)_opt,+ being optimum. if instead k_fit(t) crosses the lower threshold κ− with confidence level p at time t, we set t_ι = t and a releasing measure is taken, i.e., ∆k^(ι) is chosen negative, again uniformly distributed, now with the optimum choice ∆k^(ι)_opt,− ≡ k_fit(t_ι) − κ−. the process described above is stochastic for two reasons.
first, the sampling comes with the usual uncertainties in the law of large numbers. second, the effect of policy measures is not known beforehand (even though it may be learnt in the course of time, which we do not include here). it should be clear that the faster the testing, the more rapidly one can respond to a super-critical situation. a significant simplification of the model occurs when the two thresholds are chosen to vanish, κ+ = κ− = 0, in which case every intervention aims at resetting k to zero, with |∆k^(ι)| uniformly distributed on the interval [0, 2|k_fit(t_ι)|]. in this case the system will usually tend to a critical steady state with k(t → ∞) → 0, as we will show explicitly below. the policy strategy can then simply be rephrased as follows. as soon as one has sufficient confidence that k has a definite sign, one intervenes, trying to bring k back to zero. the only parameter defining the strategy is α. let us now detail the fitting procedure and analyze the typical time scales involved between subsequent policy interventions when choosing the thresholds (7). after a policy change at time t_ι, data is acquired over a time window ∆t. we then proceed with the following steps to estimate the time t_ι+1 at which the next policy change must be implemented. step 1: measurement. we split the time window of length ∆t after the policy change into the time interval [t_ι, t_ι + ∆t/2] (8b) and the time interval [t_ι + ∆t/2, t_ι + ∆t] (8c). testing delivers the number of infected people n_ι,1(∆t) for the time interval (8b) and
given those two measurements over the time window ∆t/2, we obtain the estimate with the standard deviation as follows from the statistical uncertainty n ι,γ (∆t) of the sampled numbers n ι,γ (∆t) and standard error propagation. step 2: condition for new policy intervention a new policy intervention is taken once the magnitude |k fit ι (∆t)| with k fit ι (∆t) given by eq. (8f) exceeds α δk(∆t) with δk(∆t) given by eq. (8g). here, α controls the accuracy to which the actual k has been estimated at the time of the next intervention. the condition for a new policy intervention thus becomes . (9b) step 3: comparison with modeling we call i(t) the actual fraction of infections (in the entire population) as a function of time, which we assume to follow a simple exponential evolution between two successive policy interventions, i.e., the normalized solution to the growth equation (5) on the interval t ι < t < t ι+1 . the expected number of newly detected infected people in the time interval (8b) is similarly, the predicted number of infected people in the time interval (8c) is step 4: estimated time for a new policy intervention we now approximate n ι,1 and n ι,2 by replacing them with their expectation value eqs. (11a) and (11b), respectively, and anticipating the limit we further anticipate that for safe strategies the fraction of infected people i(t) does not vary strongly over time. more precisely, it hovers around the value i * defined in fig. 2 . we thus insert into eq. (9b) and solve for ∆t. the solution is the time until the next intervention from which we deduce the relative increase of the fraction of infected people over the time window. this relative increase is close to 1 if the argument of the exponential on the right-hand side is small. we will show below that the characteristics and of the first time interval [t 1 , t 2 ] set the relevant scales for the entire process. from eqs. (12c) and (12d), we infer the following important result. 
the higher the testing frequency r, the smaller the typical variations in the fraction of infected people, and thus in the case numbers. the bandwidth of fluctuations decreases as r^(-1/3) with the testing rate. note that, as one should expect, it is always the average rate to detect an infected person, r i*, which enters into the expressions (12c) and (12d). the higher the fraction i*, the more reliable is the sampling, the shorter is the time to converge toward the marginal state (4), and the smaller are the fluctuations of the fraction of infected people. if the fraction i* is too low, the statistical fluctuations become too large and little statistically meaningful information can be obtained. on the other hand, if the fraction of infections drops to much lower values, then policy can be considered to have been successful and can be maintained until further tests show otherwise. we seek an upper bound for a manageable i*. we assume that a fraction p_icu^ch of infected people in switzerland needs to be in intensive care. more precisely, p_icu^ch is the expected time (in switzerland) for an infected person to spend in an intensive care unit (icu) divided by the expected time to be sick. here, we will use the value p_icu^ch = 0.05. let ρ_icu be the number of icu beds per inhabitant that shall be allocated to covid-19 patients. the swiss national average is about [10]

ρ_icu^ch ≈ 1200/8'500'000 ≈ 1.4 × 10^(-4).

for the pandemic not to overwhelm the health system, one thus needs to maintain the infected fraction safely below

i_c ≡ ρ_icu^ch / p_icu^ch ≈ 0.003,

together with similar constraints related to the capacity for hospitalizations, medical care personnel and equipment for specialized treatments.
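the icu bound follows directly from the swiss numbers given above, via i_c = (beds per inhabitant)/p_icu; a one-line check (the helper name `critical_fraction` is ours):

```python
def critical_fraction(icu_beds, population, p_icu):
    """largest infected fraction the icu capacity can absorb:
    i_c = (beds per inhabitant) / (fraction of infected needing icu)."""
    return (icu_beds / population) / p_icu

# swiss numbers from the text: 1200 icu beds, 8.5 million inhabitants,
# p_icu = 0.05; gives i_c of roughly 0.28%, and i* ~ i_c/4 ~ 0.07%
i_c = critical_fraction(1200, 8_500_000, 0.05)
```

note that i_c/4 reproduces the working value i* ≈ 0.07% used throughout the paper.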
we take the constraint from intensive care units to obtain an order of magnitude for the upper limit admissible for the infected fraction of people, i_c. the objective is to mitigate the pandemic so that values of the order of i_c or below are achieved. before that level is reached, restrictions cannot be relaxed. it may prove difficult to push the fraction of infected people significantly below i_c, since the recent experience in most european countries shows that it is very hard to ensure that growth rates k fall well below 0. the main aim would then be to reach at least a stabilization of the number of infected people (k = 0). for the following we thus assume that the fraction of infections i will stagnate around a value i* of the order of i_c. we will discuss below what ratio i*/i_c can be considered safe.

we seek the testing rate that is needed to obtain a strategy with satisfactory outcome. we assume that after the reboot at t_1 = 0, the initial growth rate may turn out to be fairly high, say of the order of the unmitigated growth rate. in many european countries a doubling of cases was observed every three days before restrictive measures were introduced. this corresponds to a growth rate of k_0 = ln(2)/3 ≈ 0.23 day^−1. we assume an initial growth rate of k_1 = 0.1 just after the reboot. we choose the reasonably stable confidence parameter α = 3; in sec. vii we will find that this choice strikes a good balance between several performance criteria. we further assume that the rate of infections initially stagnates at a level of i* = i_c/4 ≈ 0.0007. the level i* should, however, be measured by random testing before a reboot is attempted. we should then ensure that the first relative increase of i(t) does not exceed a factor of 4. from eq. (13b), we thus obtain the requirement for the testing rate r. this yields an estimate of the order of magnitude required.
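plugging in the numbers quoted above gives the order of magnitude directly (a sketch; the values ρ_icu = 1200/8 500 000 and p_icu = 0.05 are those stated in the text):

```python
rho_icu = 1200 / 8_500_000   # icu beds per inhabitant allocated to covid-19 [10]
p_icu = 0.05                 # expected icu time divided by expected sick time
i_c = rho_icu / p_icu        # manageable upper limit for the infected fraction
i_star = i_c / 4             # working level assumed for the infected fraction
print(f"i_c ≈ {i_c:.2e}, i* ≈ {i_star:.1e}")  # i_c ≈ 2.82e-03, i* ≈ 7.1e-04
```

this reproduces the value i* = 0.0007 used as the common choice in appendix b.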
in the next section we simulate a full mitigation strategy and confirm that with additional capacity for just about 15,000 random infection tests per day a nation-wide, safe reboot can be envisioned for switzerland.

we close with two observations. first, this minimal testing frequency is just twice the testing frequency currently available for suspected infections and medical staff in switzerland. second, while the latter tests require a high sensitivity with as few false positives and negatives as possible, random testing can very well be carried out with tests of much lower quality. indeed, a lower sensitivity acts as a systematic error in the estimate of the infection rate, which, however, drops out in the determination of its growth rate k.

after the reboot at time t_1 = 0, further interventions will be necessary, as we assume that the reboot will have resulted in a positive growth rate k_1. in subsequent interventions, the policymakers try to take measures that aim at reducing the growth rate to zero. even if they had perfect knowledge of the current growth rate k(t), they would not succeed immediately, since they do not know the precise quantitative effect of the measures they will take. nevertheless, had they perfect knowledge of k(t), our model assumes that they would at least be able to estimate the effects to an extent such that they would not need to intervene more strongly than twice what would be necessary to reduce k(t) to 0 over time.
this assumption implies that, if α is large, so that k(t) is known with relatively high precision at the time of intervention, the growth rate k_2 is smaller than k_1 in magnitude with high probability (tending rapidly to 1 as α → ∞).⁸ the smaller α, however, the more likely it becomes that k(t) is overestimated and a too strong corrective measure is taken, which may destabilize the system. in this context, we observe that the ratio ρ_ι ≡ |k_ι+1/k_ι| is a random variable with a distribution that is independent of ι in our model. to proceed, we assume that α is sufficiently large, i.e., such that the probability for ρ_ι < 1 is indeed high. the second policy intervention occurs after a time ∆t_2 that can be predicted along the same lines that led to eq. (12c). one finds eq. (19), where ∆t_1 is given by eq. (13a). since the growth rate k_3 is likely to be smaller than k_2 in magnitude, the third intervention takes place after a yet longer time span, etc. if we neglect that the fitted value k^fit_ι(t) differs slightly from k_ι (a difference that is negligible when α ≫ 1), our model ensures that ρ_ι is uniformly distributed in [0, 1]. after the ι-th intervention the growth rate is down in magnitude to |k_ι+1| = k_1 ∏_{j=1}^{ι} ρ_j. to reach a low final growth rate k_final, a typical number n_int(k_final) of interventions is required after the reboot, where n_int(k_final) = ln(k_1/k_final)/⟨−ln ρ⟩ ≈ ln(k_1/k_final), (20) where the last approximation holds in the limit of large enough α. the time to reach this low rate is dominated by the last and first time intervals through the estimate t(k_final) ∼ ∆t_1 (k_1/k_final)^{2/3}. (21) thus, the system converges to the critical state where k = 0, but never quite reaches it. at late times t, the residual growth rate behaves as k_final ∼ t^−3/2.

⁸ one uses eq. (7) to reach this conclusion.

the parameter α encodes the confidence policymakers need about the present state before they take a decision. here we discuss various measures that allow choosing an optimal value for α.
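a quick monte carlo check of the intervention count, assuming, as stated above, that each intervention multiplies the growth rate by an independent ρ_ι uniform on [0, 1]. note that the simulated mean exceeds the leading estimate ln(k_1/k_final) of eq. (20) by roughly one extra intervention, as expected from renewal theory:

```python
import math
import random

def interventions_to_reach(k1, k_final, rng):
    """count interventions until the growth rate, multiplied by a fresh
    uniform ratio rho at each step, first drops below k_final."""
    k, n = k1, 0
    while k > k_final:
        k *= rng.random()  # rho_i uniform in [0, 1]
        n += 1
    return n

rng = random.Random(0)
mean_n = sum(interventions_to_reach(0.1, 0.005, rng) for _ in range(20_000)) / 20_000
print(mean_n, math.log(0.1 / 0.005))  # simulated mean ≈ 4.0 vs ln(20) ≈ 3.0
```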
as α decreases starting from large values, the time between interventions decreases, being proportional to α^{2/3} according to eq. (13a). likewise the fluctuations of infection numbers will initially decrease. however, the logarithmic average ⟨−ln ρ_ι⟩ in the denominator of eq. (20) will also decrease from 1, and thus the necessary number of interventions increases. moreover, when α falls significantly below 1, interventions become more and more ill-informed and erratic. it is not even obvious anymore that the marginally stable state is still approached asymptotically. from these two limiting considerations, we expect an intermediate value to be an optimal choice for α.

let us now discuss a few quantitative measures of the performance of various strategies, which will allow policymakers to make an optimal choice of confidence parameter for the definition of a mitigation strategy. the time to reach a certain level of quiescence (low growth rates, infrequent interventions) is given by the time (21), and thus by the expectation value of ∆t_1. as a measure for the political cost, c_p, we may take the number of interventions that have to be taken to reach quiescence. as we saw in eq. (20), it scales inversely with the logarithmic average of the ratios of growth rates, ρ.

if restrictions are over-relaxed, the infection numbers will grow with time. the maximal fraction of infected people must never be allowed to rise above the manageable threshold of i_c. this means that continuous (random) monitoring of the fraction of infected people is needed, so that, given the knowledge from the time before the reboot about the conditions under which the system can be stabilized, lock-down conditions can always be imposed at a time that is sufficient to prevent reaching the level of i_c. beyond this consideration one may want to keep the expected maximal increase of infection numbers low, which we take as a measure of health costs, c_h = max_t i(t)/i*. note that as defined, c_h is a stochastic number. its mean and tail distribution (for large r) will be of particular importance.

imposing restrictions such that k < 0 implies restrictions beyond what is absolutely necessary to maintain stability. if we assume that the economic cost c_e is proportional to the excess negative growth rate, −k (and a potential gain proportional to k), a measure for the economic cost is the summation over time of −k(t), which converges, since k(t) decays as a sufficiently fast power law. here too, c_e is a stochastic variable that depends on the testing history and the policy measures taken. however, its mean and standard deviation give a good idea of the performance in terms of economic considerations.

we introduced in sec. vi a feedback and control strategy to tune to a marginal state with vanishing growth rate k = 0 after an initial reboot. interventions were only taken based on the measurement of the growth rate. however, in practice, a more refined strategy will be needed. in case the infection rate drops significantly below i*, one can safely afford to have a positive growth rate k. we thus assume that if i(t)/i* falls below some threshold i_low = 0.2, we intervene by relaxing some measures, which we assume to increase k by an amount uniformly distributed in [0, k_1], but without letting k exceed the maximal value of k_high = 0.23. likewise, one should intervene when the fraction i(t) grows too large. we do so when i(t)/i* exceeds i_high = 3. in such a situation we impose restrictions resulting in a decrease of k by a quantity uniformly drawn from [k_high/2, k_high]. the precise algorithm is given in appendix b.
figure 3 shows how our algorithm implements policy releases and restrictions in response to test data. the initial infected fraction and growth rate are i(0) = i_c/4 = 0.0007 and k_1 = 0.1, respectively, with a sampling interval of one day. to more easily demonstrate the feedback protocol, we employ a high value of α = 5 and a number of r = 100 000 tests per day, resulting in a higher confidence in the estimated growth rate and a longer time (> 3 days) until intervention. the unmitigated exponential growth with the initial growth rate k_1 is also plotted as the black line. figure 3a displays the infection fraction, u(t)/n, as a function of time, derived using our simple exponential growth model, which is characterized by a single growth rate that changes stochastically at interventions [eq. (5) without the source term]. in the absence of intervention, the infected population would grow rapidly, representing the uncontrolled runaway of a second epidemic. at each time step (day) the infected fraction of the population is sampled. the result is normally distributed with mean and standard deviation given by eqs. (6e) and (6f) to obtain i(t). the former is represented by small circles, the latter by vertical error bars in fig. 3. if i/i* lies outside the range [i_low, i_high], we intervene as described above. otherwise, on each day k^fit(t) and its standard deviation are estimated using the data since the last intervention. with this, at each time step, eqs. (6m) to (6o) decide whether or not to intervene. in fig. 3, each red circle represents an intervention and therefore either a decrease or increase of the growth rate constant of our model. fig. 3 shows the evolution of the fraction of infected people.
after an initial growth with rate k_1, subsequent interventions reduce the growth rate down to low levels within a few weeks. at the same time the fraction of infected people stabilizes at a scale similar to i*; for the given parameter set this is a general trend independent of the realization. figure 3b displays the instantaneous value of the model rate constant and also the estimated value together with its standard deviation. the estimate follows the model value reasonably well. one sees that the interventions occur when the uncertainty in k is sufficiently small (given the large choice of α = 5).

we now assume that we have the capacity for r = 15 000 tests per day, and assess the performance of our strategy as a function of the confidence parameter α in fig. 4. values of α ≤ 2 lead to rapid, but at the same time erratic interventions, as is reflected by a rapidly growing number of interventions. for larger values of α, the time scale to reach a steady state increases while the economic and health costs remain more or less stable. a reasonable compromise between minimizing the number of interventions and shortening the time to reach a steady state suggests a choice of α ≈ 2−3.

it is intuitive that the higher the number r of tests per day, the better the mitigation strategy will perform. the characteristic time to reach a final steady state decreases as r^{−1/3}, see eq. (13a). other measures of performance improve monotonically upon increasing r. this is confirmed and quantified in fig. 5, where we show how the political, health, and economic costs decrease with increasing test rate.

after a reboot it is likely that the growth rate k_1 jumps back to positive values, as we have always assumed so far. the time it takes until one can distinguish a genuine growth from intrinsic fluctuations due to the finite number of sampled people depends on the growth rate k_1, see eq. (13a).
in the worst case, where the reboot brings back the unmitigated value k_0, one will know within 3-4 days with reasonable confidence that the growth rate is well above zero. this is shown in fig. 6. in such a catastrophic situation, an early intervention can be taken, while the number of infections has at most tripled at worst. this reaction time is 4-5 times faster than without random testing!

fig. 6. time after which a significant positive growth rate is confirmed in the worst case scenario, for which the growth rate k_1 jumps to k_0 = 0.23 after reboot. an intervention will be triggered within 3-4 days. results are shown for a confidence level α = 3 and r = 15 000 tests a day. the circles are the mean values; the vertical lines indicate the standard deviations of the first intervention time.

we have shown that the minimal testing rate r_min (16) is sufficient to obtain statistical information on the growth rate k as applied to switzerland as a whole. this tacitly assumes that the simple growth equation (5) describes the dynamics of infections in the whole country well. that this is not necessarily a good description can be conjectured from recent data on the current rates with which the numbers of confirmed infections in the various cantons grow. these data indeed show a very significant spread by nearly a factor of four, suggesting that a spatially resolved approach is preferable, if possible. similar heterogeneity of the time evolution of infection numbers can even be seen within a single big city, such as london. if the testing capacity is limited by rates of order r_min, the approach can still be used, but caution should be taken to account for spatial fluctuations corresponding to hot spots.
one should preferentially test in areas that are likely to show the largest local growth rates, so as not to miss locally super-critical growth rates by averaging over the entire country. if, however, higher testing frequencies become available, new and better options come into play. valuable information can be gained by analyzing the test data not only for switzerland as a whole, but by distinguishing different regions. it might even prove useful not to lift restrictions homogeneously throughout the country, but instead to vary the set of restrictions to be released, or to adapt their rigor. by way of example, consider that after the spring vacation school starts in different weeks in different cantons. this regional difference could be exploited to probe the relative effect of re-opening schools on the local growth rates k. however, it might obviously prove politically difficult to go beyond such "naturally" occurring differences, as it is without doubt a complex matter to decide which region releases which measures first. a further issue is that the effects might be unclear at the borders between regions with different restrictions. there may also be complications with commuters who cross regional borders. finally, there may be undesired behavioral effects if regionally varying measures are declared an "experiment". such issues demand careful consideration if regionally varying policies are applied. even if policy measures should eventually not be taken in a region-specific manner, it is very useful to study a regionally refined model of epidemic dynamics. indeed, a host of literature exists that studies epidemiological models on lattices and analyzes the spatial heterogeneities [11, 12]. in certain circumstances those have been argued to become even extremely strong [13]. in the present paper, we content ourselves with a few general remarks concerning such refinements.
we reserve a more thorough study of regionally refined testing and mitigation strategies for a later publication. let us thus group the population of switzerland into g sets. the most natural clustering is according to the place where people live, cities or counties.⁹ the more we partition the country, the more spatially refined the acquired data will be, and the better tailored mitigation strategies could potentially become. however, this comes at a price. namely, for a limited national testing rate r_tot, an increased partitioning means that the statistical uncertainty in measuring local growth rates in each region will increase. the minimal test rate r_min that we estimated on the right-hand side of eq. (16) still holds, but now for each region, which can only test at a rate r = r_tot/g. to refine switzerland into g regions, we thus have the constraint that the total testing capacity exceeds r_tot ≥ g r_min. on the other hand, if testing at a high daily rate r_tot becomes available, nothing should stop one from refining the statistical analysis to g ≈ r_tot/r_min to make the best use of the available data.

⁹ one might also consider other distinguishing characteristics of groups (age or commuting habits, etc.), but we will not do so here, since it is not clear whether the increased complexity of the model can be exploited to reach an improved data analysis. in fact we expect that the number of fitting parameters would very quickly become too large by making such further distinctions.

b. spatially resolved growth model

each of the population groups m ∈ {1, · · · , g} is assumed to have roughly the same size, containing n/g people, u_m of whom are infected but yet undetected.
the spreading of infections is again assumed to follow a linear growth equation (where we neglect influx from across the borders from the outset): du_m/dt = Σ_n k_mn(t) u_n(t). (28) here, the growth kernel k(t) is a g × g matrix with matrix elements k_mn(t). the matrix k(t) has g (complex valued) eigenvalues λ_n, n = 1, · · · , g. the largest growth rate is given by κ(t) = max_n re λ_n(t). for the sake of stability criteria, κ(t) now essentially takes the role of k(t) in the model with a single region, g = 1. we note that the number of infections grows exponentially if κ(t) > 0, and decreases if κ(t) < 0. as in the case of a single region, we assume k(t) to be piecewise constant in time, and to change only upon taking policy interventions. in the simplest approximation, one assumes no contact between geographically distinct groups, that is, the off-diagonal matrix elements are set to zero [k_m≠n(t) = 0] and the eigenvalues become equal to the elements of the diagonal: k_m(t) ≡ k_mm(t). as current cantonal data suggest, the local growth rate k_m(t) depends on the region, and thus k_m(t) ≠ k_n(t). it is natural to expect that k_m(t) correlates with the population density, the fraction of the population that commutes, the age distribution, etc. if, on top of the heterogeneity of growth rates, one adds finite but weak inter-regional couplings k_m≠n(t) > 0 (mostly between nearest-neighbor regions), one may still expect the eigenvectors of k(t) to be rather localized (a phenomenon well known as anderson localization [14] in the context of waves propagating in strongly disordered media). by this one means that the eigenvectors have a lot of weight on a few regions only, and little weight everywhere else. that such a phenomenon might occur in the growth pattern of real epidemics is suggested by the significant regional differences in growth rates that we have mentioned above.
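for the two-region case the largest growth rate κ can be written down explicitly (a toy illustration; for general g one would feed the full growth kernel to a numerical eigensolver such as numpy.linalg.eigvals):

```python
import cmath

def kappa_2x2(k11, k12, k21, k22):
    """largest real part among the eigenvalues of a 2x2 growth kernel k(t);
    the epidemic grows if this value is positive and decays if negative."""
    tr = k11 + k22                       # trace
    det = k11 * k22 - k12 * k21          # determinant
    disc = cmath.sqrt(tr * tr - 4 * det) # discriminant (may be complex)
    return max(((tr + disc) / 2).real, ((tr - disc) / 2).real)

# decoupled regions: kappa is just the largest local rate
print(kappa_2x2(0.10, 0.0, 0.0, -0.05))   # 0.1
# a weak positive coupling shifts kappa slightly above the largest diagonal entry
print(kappa_2x2(0.10, 0.01, 0.01, -0.05))
```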
in such a situation it would seem preferable to adapt restrictive measures to localized regions with strong overlap on unstable eigenvectors of k(t), while minimizing their socio-economic impact in other regions with lower k_m(t).

c. mitigation strategies with regionally refined analysis

as mentioned above, in the case with several distinct regions, g > 1, an intervention becomes necessary when the largest eigenvalue κ(t) of k(t) crosses an upper or a lower threshold (with a level of confidence α, again to be specified). if the associated eigenvector is delocalized over all regions, one will most likely respond with a global policy measure. however, it may well happen that the eigenvector corresponding to κ(t) is well localized. in this case one can distinguish two strategies for intervention:

(a) global strategy: one always applies a single policy change to the whole country. this is politically simple to implement, but might incur unnecessary economic cost in regions that are not currently unstable.

(b) local strategy: one applies a policy change only in regions which have significant weight on the unstable eigenvectors. this means that one only adjusts the corresponding diagonal matrix elements of k(t) and those off-diagonals that share an index with the unstable region. likewise, regions that have i_m < i* and negligible overlap with eigenvectors whose eigenvalues are above κ_− could relax some restrictions before others do.

fitting test data to a regionally refined model will allow us to estimate the off-diagonal terms k_mn(t), which are so far poorly characterized parameters. however, the k_mn(t) contain valuable information. for instance, if a hot spot emerges [that is, a region overlapping strongly with a localized eigenvector with positive re λ_n(t)], this part of the matrix will inform us which connections are the most likely to infect neighboring regions.
they can then be addressed by appropriate policy measures and monitored subsequently, with the aim of containing the hot spot and keeping it well localized. this model allows us again to calculate the economic, political, and health impact of various strategies. it is important to assess how the global and the local strategy perform in comparison. obviously this will depend on the variability between the local growth rates k_m(t), which is currently not well known, but will become a measurable quantity in the future. at that point one will be able to decide whether to select the politically simpler route (a) or the heterogeneous route (b), which is likely to be economically favorable. we are currently engaged in developing an analysis tool to quickly process test data for multi-region modelling. we are developing and assessing intervention strategies with the perspective of running it daily with the best available current data and knowledge. we will report on these activities in subsequent memoranda.

we have analyzed a feedback and control model for managing a pandemic such as that caused by covid-19. the crucial output parameters are the infection growth rates in the general population and spatially localized sub-populations. when planning for an upcoming reboot of the economy, it is essential to assess and mitigate the risks of relaxing some of the restrictions that have brought the covid-19 epidemic under control. in particular, the policy strategy chosen must suppress a potential second exponential wave when the economy is rebooted, and so avoid a perpetual stop-and-go oscillation between relaxation and lockdown.
feedback and control models are designed with precisely this goal in mind. with random testing in place, the risk of a second wave can be kept to a minimum. an additional testing capacity of r_min = 7 700 tests day^−1 (on top of the current tests for medical purposes), carried out on randomly selected people, would allow us to follow the course of the pandemic almost in real time, without huge time delays, and without the danger of increasing the number of infected people by more than a factor of two, if our intervention strategy is followed. if testing rates r significantly higher than r_min become available, a regionally refined analysis of the growth dynamics can be carried out, with g ≈ r/r_min distinguishable regions. in the worst case scenario, where releasing certain measures immediately makes the country jump back to the unmitigated growth rate of k_0 = 0.23 day^−1, random testing would detect this within 3-4 days of the change coming into effect. this is in stark contrast to the nearly 14 days of delay required for symptomatic individuals to emerge in statistically significant numbers. after such a time delay a huge increase (a factor of order 20) in infection numbers may already have occurred, which would be catastrophic. daily random testing safely prevents this; the significant reduction of the time delay is absolutely crucial. note that without daily polling of infection numbers and without knowledge of the quantitative effect of restriction measures, a reboot of the economy could not be risked before the number of infections had been suppressed by at least a factor of 10-20 below the current level. given the limits of the suppression rates that can be achieved without the most draconian lockdown measures, this would require a very long time and thus translate into an enormous economic cost. in contrast, daily polling will allow us to carefully reboot the economy and adjust restrictive measures, while closely monitoring their effect.
since the reaction times are so much shorter, one can safely start an attempted reboot already at infection numbers corresponding roughly to the status quo. at some point one might consider the option of starting to release different sets of restrictions in different regions, with the aim to learn faster about their respective effects and thus to optimize response strategies in subsequent steps.

we are grateful to emma slack, giulia brunelli, and thomas van boeckel for helpful discussions, and the erc hero project 810451 for supporting ga.

appendix a: assessment of contact tracing as a means to control the pandemic

let us briefly discuss the strategy of so-called contact tracing as a means to contain the pandemic, as has been discussed in the literature [15]. we argue that contact tracing is a helpful tool to suppress transmission rates, but is susceptible to fail when no other method of control is used. contact tracing means that once an infected person is detected, people in their environment (i.e., known personal contacts, and those identified using mobile-phone-based apps, etc.) are notified and tested, and quarantined if found positive. as a complementary measure to push down the transmission rate, it is definitely useful, and it represents a relatively low-cost and targeted measure, since the probability of detecting infected people is high. however, as a sole measure to contain a pandemic, contact tracing is impractical (especially at the current high numbers of infected people) and even hazardous. the reason is as follows. it is believed that a considerable fraction f_asym of infected people show only weak or no symptoms, so that they would not get tested under the present testing regime. the value of f_asym is not well known, but it might be rather high (30% or even much higher). such asymptomatic people will go undetected if they have not been in contact with a person displaying symptoms.
if on average they infect r people while being infectious, and if r f_asym > 1, there will be an exponential avalanche of undetected cases. they will produce an exponentially growing number of detectable and medically serious cases. the contact tracing of those (upward in the infection tree) is tedious, and cannot fully eliminate the danger of such an avalanche. contact tracing as a main strategy thus only becomes viable once the value of f_asym is well established, and one is certain to be able to control the value of r such that r f_asym < 1.

appendix b: mitigation algorithm

variables and parameters:
• t = 1, 2, · · · : time in days (integer).
• n_int: number of interventions (including the reboot at t = 1).
• t_int(j): first day on which the j'th rate k_j applies. on day t_int(1) ≡ 1 the initial reboot step is taken.
• ∆t(j) = t_int(j + 1) − t_int(j): time span between interventions j and j + 1.
• t_first: first day on which the current rate k = k(t) applied.
• i(t): fraction of infected people on day t.
• k(t): growth rate on day t.
• r: number of tests per day.
• c_h: health cost.
• c_e: economic cost.
• k_min = 0.005: minimal growth rate targeted.
• i_low = 0.2: lower threshold for i/i*. if i/i* < i_low, no intervention is made even if k is above α δk.
• i_high = 3: upper threshold for i/i*. if i/i* > i_high, an intervention is made even if k is still smaller than α δk.
• k_low = −0.1: minimal possible decreasing rate considered.
• k_high = 0.23: maximal possible increasing rate considered.
• t_min = 3: minimal time to wait since the last intervention, for interventions based on the level of i(t).
• b = 0.5: parameter defining the possible range of changes ∆k due to measures taken after estimating k: |∆k/k_est| ∈ [b, 2].
• α: confidence parameter.
• n(t): cardinality of the random sample of infected people on day t. the number n(t) is obtained by sampling from a gaussian distribution of mean i(t) r and standard deviation √(i(t) r) and rounding the obtained real number to the next non-negative integer.

initialization:
• t_first = t_int(1) = 1.
• n_int = 1.
• c_h = 1.
• c_e = 0.
• k(1) = k_1 = 0.1 (initial growth rate).
• i(1) = i*. common choice: i* = i_c/4 = 0.0007.
• draw n(1).
• k(2) = k(1) (no intervention at the end of day 1).
• set t = 2.

daily routine:
• define i(t) = i(t − 1) e^{k(t−1)}.
• define c_h = max{c_h, i(t)/i*}.
• determine what k(t + 1) will be, by assessing whether or not to intervene. if t = t_first, then k(t + 1) = k(t) (no intervention). else distinguish three cases:
  1. if i(t)/i* < i_low and t − t_first ≥ t_min, then k(t + 1) = min{k(t) + x k_1, k_high} with x = unif[0, 1].
  2. if i(t)/i* > i_high and t − t_first ≥ t_min, then k(t + 1) = max{k(t) − (1 + x)/2 k_high, k_low} with x = unif[0, 1].
  3. if i_low < i(t)/i* < i_high, then:
     • set ∆t ≡ t − t_first + 1;
     • compute k_est(t_first, ∆t) and δk_est(t_first, ∆t) using sec. b 4;
     • if |k_est| > k_min and [k_est > α δk_est or k_est < −α δk_est], set k(t + 1) = k(t) − x k_est with x = unif[b, 2]; if k(t + 1) > k_high, put k(t + 1) = k_high; if k(t + 1) < k_low, put k(t + 1) = k_low;
     • else k(t + 1) = k(t).
• t = t + 1.
• if an intervention was taken above: put n_int = n_int + 1, define t_int(n_int) = t + 1, define ∆t(n_int − 1) = t_int(n_int) − t_int(n_int − 1), and set t_first = t + 1.
• if |k_est| < k_min and k(t) < k_min and t − t_first > 10, exit. else return to the daily routine for the next day.
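the daily routine of appendix b can be condensed into a short, self-contained simulation (a sketch under simplifying assumptions: the k_est/δk_est computation is replaced by a two-bin log-ratio fit over the window since the last intervention, and the k_min exit condition is dropped; parameter values are those listed above):

```python
import math
import random

def simulate(days=300, seed=1, r=15_000, alpha=3.0):
    """condensed appendix-b daily routine: grow i(t), sample the number of
    positive tests, re-estimate the growth rate since the last intervention,
    and intervene when |k_est| > alpha * dk_est or i/i* leaves [i_low, i_high]."""
    i_star, i_low, i_high = 0.0007, 0.2, 3.0
    k_low, k_high, k1, b, t_min = -0.1, 0.23, 0.1, 0.5, 3
    rng = random.Random(seed)
    i, k = i_star, k1
    counts, t_first, n_int = [], 1, 1
    for t in range(1, days + 1):
        i *= math.exp(k)                                  # i(t) = i(t-1) e^k
        mean = i * r                                      # expected positives
        n = max(1, round(rng.gauss(mean, math.sqrt(mean))))
        counts.append(n)
        dt = t - t_first + 1
        intervened = False
        if i / i_star < i_low and dt >= t_min:            # relax measures
            k = min(k + rng.random() * k1, k_high)
            intervened = True
        elif i / i_star > i_high and dt >= t_min:         # tighten measures
            k = max(k - (1 + rng.random()) / 2 * k_high, k_low)
            intervened = True
        elif dt >= 2:
            half = dt // 2                                # two-bin log-ratio fit
            n1, n2 = sum(counts[:half]), sum(counts[half:2 * half])
            k_est = math.log(n2 / n1) / half
            dk_est = math.sqrt(1 / n1 + 1 / n2) / half
            if abs(k_est) > alpha * dk_est:               # condition (9b)
                k = min(max(k - rng.uniform(b, 2) * k_est, k_low), k_high)
                intervened = True
        if intervened:
            counts, t_first, n_int = [], t + 1, n_int + 1
    return i / i_star, k, n_int
```

running this reproduces the qualitative behavior described in the main text: the infected fraction hovers around i* while successive interventions push the growth rate toward zero.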
computing k_est(t_first, Δt) and δk_est(t_first, Δt): if Δt is even, use the sample counts n(t_first + Δt/2 + m); else return k_est = 0, δk_est = 1000. if Δt is odd, use the counts n(t_first + (Δt+1)/2 + m); else return k_est = 0, δk_est = 1000.

outputs: time to first intervention Δt(1); health cost c_h; political cost n_int; economic cost c_e.

key: cord-263620-9rvlnqxk
authors: li, zhi-chun; huang, hai-jun; yang, hai
title: fifty years of the bottleneck model: a bibliometric review and future research directions
date: 2020-09-30
journal: transportation research part b: methodological
doi: 10.1016/j.trb.2020.06.009
doc_id: 263620
cord_uid: 9rvlnqxk

abstract: the bottleneck model introduced by vickrey in 1969 has been recognized as a benchmark representation of the peak-period traffic congestion due to its ability to capture the essence of congestion dynamics in a simple and tractable way.
this paper aims to provide a 50th anniversary review of the bottleneck model research since its inception. a bibliometric analysis approach is adopted for identifying the distribution of all journal publications, influential papers, top contributing authors, and leading topics in the past half century. the literature is classified according to recurring themes into travel behavior analysis, demand-side strategies, supply-side strategies, and joint strategies of demand and supply sides. for each theme, typical extended models developed to date are surveyed. some potential directions for further studies are discussed. the bottleneck model was first introduced by vickrey in 1969, aiming at addressing the departure time choices of commuters on a bottleneck-constrained highway during the morning rush hours. in this model, all individuals are assumed to have an identical preferred time to arrive at their destination and incur a schedule delay cost proportional to the amount of time that they arrive early or late. commuters choose their departure time to minimize their own travel cost based on a trade-off between bottleneck congestion delay cost and schedule delay cost of early or late arrival. this model is able to model the formation and dissipation of queuing behind the bottleneck in a simple and tractable way, thus making it a benchmark representation of the dynamics of traffic congestion in peak period. the past 50 years (from 1969 to 2019) have witnessed significant progress in the bottleneck model research since the pioneering work of vickrey (1969) . a lot of insights into understanding the features of traffic congestion in peak period have been obtained via the bottleneck model. 
these insights cover various aspects, such as behavioral analysis (e.g., the nature of shifting peak, inefficiency of unpriced equilibria, behavioral difference of heterogeneous commuters, connection between morning and evening commutes, effects of commuter scheduling preferences), demand management (e.g., congestion / emission / parking pricing and tradable credit schemes, relationship between bottleneck congestion tolling and urban structure), and supply management (e.g., bottleneck / parking capacity expansion). the insights also play an important role in deeply understanding the essence of commuters' travel behavior during morning/evening peak periods, and in evaluating and making reasonable transport policies for alleviating peak-period traffic congestion. to date, there have been a few reviews on the topic of bottleneck models or their variations (e.g., arnott et al., 1998 ; lindsey and verhoef, 2001 ; small, 1992 ; small, 2015 ) . these early reviews appeared in different years, aiming to track the development of the bottleneck models on some selected specific topics. because the research area is still growing and new disruptive trends of automation and sharing in mobility are emerging, it is timely to provide a state-of-the-art review of this area, particularly on the occasion of its 50th anniversary. this paper attempts to provide a systematic and critical review that differs from previous reviews in several aspects. first, it is meaningful to conduct a bibliometric study of the large body of literature to celebrate the 50th anniversary of vickrey's bottleneck model. to do so, we carry out a literature review to analyze the research progress on the bottleneck model research throughout the past half century. the review tries to cover all relevant topics published in journals rather than some specific ones as covered in the previous review papers. 
second, a bibliometric analysis approach is adopted that can trace the footprints underlying the scholarly publications by constructing network connections of the publications, journals, researchers, and keywords. with the aid of visualization technique (e.g., a software called vosviewer), the bibliometric approach can map the landscape of the knowledge domain of the bottleneck model studies, allowing us to clearly identify the distribution of publications by journal, influential papers, top contributing authors, and leading topics. third, based on the bibliometric analysis, a critical review on the previous relevant studies is provided, together with some discussions on the current research gaps and opportunities. it is noted that a bottleneck system consists of the following elements: users, the authority (or the government), and the bottleneck (i.e., transport infrastructure). from the perspectives of these elements, we categorize the literature into four classes: travel behavior analysis, demand-side strategies, supply-side strategies, and joint strategies of demand and supply sides. the travel behavior analysis from the users' perspective focuses on the equilibrium analysis of commuters' travel choice behavior, such as the choices of departure time, route, mode, and/or parking. the demand-side strategies from the government's perspective refer to the travel demand management strategies, such as congestion / emission / parking pricing and tradable credit schemes. the supply-side strategies from the transport infrastructure's perspective include such topics as bottleneck capacity expansion and parking capacity design. the joint strategies of demand and supply sides from both the government's and transport infrastructure's perspectives are a hybrid of demand-side and supply-side strategies. for each theme, typical models proposed in previous studies are reviewed. the remainder of this paper is organized as follows. 
in the next section, a bibliometric study is conducted. section 3 presents a literature review based on the literature categorization. in section 4 , some potential directions for further studies are discussed. finally, section 5 concludes the paper. this section provides a general bibliometric analysis of various bottleneck model studies. the bibliometric analysis uses quantitative methods to classify bibliometric data and build up representative summaries. it has been recognized as a useful approach for analyzing the performances of journals, institutes and authors, as well as the characteristics of research fields or topics. with the aid of visualization technique (e.g., vosviewer software), the bibliometric networks, such as cocitation network, co-authorship network, and keyword co-occurrence network, can be constructed and visually presented. to measure the influences of publications, authors and journals, various bibliometric indicators are considered, including the number of publications, total citations, and citations per paper. in order to collect the publication data since 1969, we scout the three well-recognized journal databases or search engines, namely web of science core collection, scopus, and google scholar, using such topics or keywords as bottleneck, bottleneck model(s), morning commute or commuting, and bottleneck congestion. we further retrieve literature by tracking the references cited by the papers searched from the three databases. in particular, we check all references citing the original work of vickrey (1969) , entitled "congestion theory and transport investment". after repeated sifting and checking, a total of 232 relevant papers during the period of 1969-2019 are finally retrieved. 
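the bibliometric indicators named above (number of publications, total citations, citations per paper) are simple aggregations over the retrieved records. the following is a hypothetical python sketch that computes a journal ranking from a small made-up record list; the field names and sample data are invented for illustration and are not the actual 232-paper dataset.

```python
from collections import defaultdict

# Hypothetical sketch of the bibliometric indicators described above,
# computed per journal from a de-duplicated record list. Field names and
# the sample records are invented for illustration.

def journal_indicators(papers):
    """papers: list of dicts with 'journal' and 'citations' keys."""
    stats = defaultdict(lambda: {"papers": 0, "citations": 0})
    for p in papers:
        s = stats[p["journal"]]
        s["papers"] += 1
        s["citations"] += p["citations"]
    for s in stats.values():
        s["cites_per_paper"] = s["citations"] / s["papers"]
    # rank journals by publication count, as in table 1
    return sorted(stats.items(), key=lambda kv: -kv[1]["papers"])

sample = [
    {"journal": "transportation research part b", "citations": 120},
    {"journal": "transportation research part b", "citations": 80},
    {"journal": "journal of urban economics", "citations": 150},
]
ranking = journal_indicators(sample)
print(ranking[0][0])  # the journal with the most papers comes first
```

the same per-entity aggregation, applied to authors instead of journals, yields the author indicators reported later in table 3.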
during the 20 years of 1990-2009, this topic received growing attention, with a total of 68 relevant papers published, and the number of relevant publications per five years exceeds 10, more than the total number of publications during the first 15 years (1969-1984). during the past 10 years from 2010 to 2019, this topic attracted further increasing interest, and a total of 149 relevant papers were published, accounting for 64.2% of total number of publications in the past 50 years. particularly, the largest amount emerges in the most recent 5 years of 2015-2019, with 82 publications (about 35.3% of total number of publications). this continued growing tendency clearly shows that the bottleneck model is still an important and hot research topic in the field of transportation and this tendency is expected to continue in coming years. table 1 shows the top 15 journals (over a total of 38 journals) by the number of published related papers. it can be seen that these journals mainly belong to "transportation" and "economics" categories in terms of the journal categories in the jcr (journal citation reports) published by thomson reuters. transportation research part b (tr-b for short, a leading journal in transportation field) leads table 1 with 75 papers (accounting for 32.3% of the total number of publications), followed by journal of urban economics (jue, a leading journal in urban economics field) with 27 papers (accounting for 11.6%).
the total percentage of papers published in these two journals (a total of 102 papers) reaches nearly half of the total number of publications (about 44.0%). [table 1, remaining rows (journal, number of papers): transportation research part c, 10; economics of transportation, 9; transportation research record, 9; transportation research part e, 8; regional science and urban economics, 7; journal of transport economics and policy, 6; american economic review, 5; transportmetrica a: transport science, 5; applied economics, 4; journal of public economics, 4; transportmetrica b: transport dynamics, 4.] the number of papers published in each of transportation science (ts), transportation research part a and part c (tr-a, tr-c) reaches 10 or more. notably, as a young journal founded in 2012, economics of transportation published 9 papers on this topic. in order to look at the correlation among journals publishing the 232 publications, a bibliographic coupling of the 38 journals is conducted, as shown in fig. 2. the size of a solid circle (or a vertex) represents the number of publications related to the topic of the bottleneck model in a journal. the line between circles represents the co-citation relationship of journals. the color of the line represents the cluster of journals, such as the journal categories of economics or transportation. the width of the lines between circles represents the co-citation degree or intensity between journals (i.e., the total number of co-citations of the documents in the journals concerned). specifically, a thick line means a strong co-citation degree between journals, and vice versa. it can be seen that the papers published in tr-b have been highly cited by those published in tr-a, tr-c, tr-e, jue, ts, trr (transportation research record), economics of transportation, transportmetrica a, jtep (journal of transport economics and policy), networks and spatial economics, and journal of public economics.
the papers published in jue have a strong citation with those published in tr-b, rsue (regional science and urban economics), and economics of transportation. we now look at the most influential papers about the topic of the bottleneck model during the past five decades, which are determined according to total citations or average citations per year. it should be pointed out that in this paper, citation count is based on the sci/ssci citation databases. here, sci/ssci means science citation index expanded and social science citation index in the web of science core collection. table 2 shows the top 50 most influential papers, each having more than 50 citations. it can be noted that the top 3 most influential papers are vickrey (1969) , small (1982) and adl (1993a) in terms of the total number of citations. here, "adl" respectively refers to the first letters of the surnames of three scholars, i.e., arnott r, de palma a, and lindsey r. all the 3 most cited papers are from american economic review (aer), which is a well-recognized top economics journal. particularly, the pioneering work of vickrey (1969) is the most influential paper with the highest total citations of 944, and the highest average citations of 18.51 per year. there are 18 papers, each having more than 100 citations, and 9 papers, each having an average of no less than 10 citations per year. it should be pointed out that the work of fosgerau and karlstrom (2010) , entitled "the value of reliability", has had a total of 140 citations, and a second ranking in terms of average citations per year, regardless of its short publication history. 15 out of the top 50 papers are from tr-b, 8 from ts, 5 from jue, and 5 from tr-a. in terms of total citations, 5 out of the top 10 most influential papers were written by adl. 
in terms of average citations per year, 5 out of the top 10 most influential papers were published in tr-b, with publication years being between 2010 and 2013 and research focuses on tradable credit schemes ( nie and yin, 2013 ; xiao et al., 2013 ) and values of travel time and its variance ( fosgerau and karlstrom, 2010 ; fosgerau and engelson, 2011 ) . in order to understand the co-citation relationship of authors in the bottleneck model research, fig. 3 shows the bibliographic coupling network of authors, in which a solid circle represents a researcher and an edge represents the co-citation between a pair of researchers. the size of a solid circle represents the number of papers published by a researcher, and the width of an edge represents the co-citation intensity between studies by that pair of authors. it can be noted that as far as the total number of related publications is concerned, the authors, such as de palma a, lindsey r, huang hj, yang h, fosgerau m, arnott r, verhoef et, zhang hm, liu w, and van den berg vac, are the most productive, influential top 10 authors (see also table 3 ) because they are associated with large circles. table 3 further shows the top 23 influential authors in terms of total number of publications (no less than 5 papers), together with total number of citations and average citations per paper. the research institute and country/area of the associated authors have also been indicated in this table. it can be seen that de palma a leads the list in the total number of publications, with 31 publications. it is followed by lindsey r and huang hj, each having 23 publications. 10 authors have more than 10 publications. small ka, adl, and daganzo cf are the top 5 most influential authors in terms of average citations per year. de palma a leads this table in the total number of citations (2185), and small ka leads this list in the average citations per paper, reaching an average of 114 citations per paper. 
the other author, having more than 100 citations per paper, is arnott r, reaching an average of 101 citations per paper. in order to identify the research hotspots in the bottleneck model research, fig. 4 shows the bibliographic coupling network of keywords, in which the size of the solid circle represents the number of occurrences of a keyword, and the width of the line represents the occurrence degree of the two keywords connected by that line. one can find some high-frequency keywords in fig. 4 , such as "traffic congestion", "bottleneck model", "transportation", "commuting", "morning commute", "travel time", "costs", "travel behavior", "traffic control", "numerical model", "traffic management", "scheduling", "departure time choice", "user equilibrium", "parking", "road pricing", "congestion pricing", "travel time variabilities", and "heterogeneity". in order to make the review clearer, a cluster analysis of the 232 papers is conducted. 1 we categorize the 232 papers into four classes in terms of their research focuses: travel behavior analysis, travel demand management (i.e., demand-side strategies), infrastructure operations and management (i.e., supply-side strategies), and joint strategies of demand and supply sides. the travel behavior analysis mainly focuses on the analysis of the trip and/or activity scheduling behavior of travelers through building various travel choice behavior models, such as departure time / route / parking / mode choices, morning vs evening commutes, piecewise constant vs time-varying scheduling preferences, normal congestion vs hypercongestion, homogeneous vs heterogeneous users, individual vs household, deterministic vs stochastic situations, single vs multiple bottlenecks, and analytical approach vs dta (dynamic traffic assignment) approach. 
travel demand management focuses on a set of strategies and policies to reduce travel demand, or to redistribute the demand in space and/or time, including congestion / emission / parking pricing and their effects on urban system. infrastructure operations and management means to determine the optimal capacity or service level of infrastructure elements (e.g., road bottleneck, parking lot, airport, port). joint strategies are a hybrid of both demand-side and supply-side strategies. among these modules, travel behavior analysis is a basis of the travel demand management studies and the infrastructure operations and management studies. the travel demand management strategies and the infrastructure operations and management strategies interplay through demand-supply interaction. the interrelationships among them are shown in fig. 5 . the shaded part in fig. 5 represents the joint strategies of demand and supply management. in the following section, we will provide a systematic review of the bottleneck model studies published in the past half century based on the classification in fig. 5 . the classical vickrey's bottleneck model aims to model the departure time choice behavior of commuters during the morning commute. for the convenience of readers, the detailed formulation of the classical bottleneck model is provided in appendix. in this subsection, some basic assumptions underlying this model are presented. various extensions to relax these assumptions are then reviewed. these extensions include considerations of other travel choice dimensions (e.g., route / parking / mode choices), morning-evening commutes, time-varying scheduling preferences, vehicle physical length in queue and hypercongestion, heterogeneous users, household travel and carpooling, stochastic models and information, multiple bottlenecks, and dta-approach bottlenecks. 
the classical vickrey's bottleneck model, as a stylized representation of the dynamics of traffic congestion, has been widely recognized as an important tool for modeling the formation and dissipation of queuing at a bottleneck in rush hours. in the model, it is assumed that homogeneous commuters travel from a single origin (home) to a single destination (workplace) along a single road that has a bottleneck with a fixed capacity during the morning rush hours. all commuters choose their departure time based on a trade-off between the bottleneck queuing delay and the schedule delay of arriving early or late. equilibrium is reached when no individual has an incentive to alter his/her departure time. the attractiveness of vickrey's bottleneck model lies in its ability to derive closed-form solutions for equilibrium departure interval (i.e., the departure times of the first and last commuters from home), equilibrium departure rate, equilibrium queuing delay at the bottleneck, and equilibrium cumulative departures and arrivals. the derivations of these analytical solutions are built on some strong assumptions, stated as follows. 1) departure time choice for morning commute. the classical bottleneck model involves only the departure time choice dimension for morning commute. the other travel choice dimensions, such as route, mode and parking choices, and the evening commute are not taken into account. in reality, commuters may also decide on their travel route and/or travel mode besides departure time, subject to parking capacity constraint. moreover, their travel decisions are usually based on day-long schedules, but not on morning or evening activity schedule only. some studies, e.g., de palma and lindsey (2002a), zhang et al. (2005), and li et al. (2014), showed that commuters' morning and evening departure-time decisions are interdependent under some conditions, and the morning and evening departure patterns for specific individuals are not symmetric.
it is, thus, necessary to consider multiple travel choice dimensions of commuters on a whole day basis. 2) piecewise constant scheduling preferences. vickrey's bottleneck model assumes that the value of travel time and the value of schedule delay of arriving early or late are constants, usually denoted by three parameters α, β and γ, respectively. however, some empirical studies have confirmed that the marginal utility of time for performing an activity at a certain location changes over time (see, e.g., tseng and verhoef, 2008; jenelius et al., 2011; hjorth et al., 2013; peer and verhoef, 2013; peer et al., 2015). it is therefore meaningful to relax the assumption of piecewise constant scheduling preferences to develop a scheduling model with time-varying marginal activity utilities. 3) normal congestion with point queue (or vertical queue). vickrey's bottleneck model assumes that traffic flow does not fall under heavily congested conditions and flow increases with density, which is called "normal (or ordinary) congestion". however, in reality, a phenomenon of hypercongestion (i.e., flow decreases with density) may occur in the downtown areas of major cities during rush hours. on the other hand, the point queue or vertical queue assumes that any vehicle that has to queue before passing through a bottleneck is stacked in a vertical pile at the bottleneck, i.e., vehicles stack vertically and queues take place at a point. a vertical queue does not occupy any road space and has no influence on upstream approaching vehicles. however, in reality, vehicles have physical lengths, which influence the movements of vehicles at the bottleneck and thus the queuing delays; in particular, queue spillback may block the upstream link. 4) homogeneous individuals. the traditional bottleneck models mainly focus on individuals' travel choice behavior, and assume that the travel choice decision of an individual in a household is independent of that of other individuals.
however, in reality, the interdependencies between household members (e.g., due to limited number of cars) indeed influence the activity schedules of household members. the classical bottleneck model also assumes that all commuters are homogeneous, i.e., they have the same desired arrival time and the same values of travel time and schedule delay. however, some studies have shown that there are big differences in the travel choice behavior of heterogeneous commuters due to their different travel preferences. there is thus a need to consider the heterogeneity of users. 5) deterministic model. a bottleneck system is in general a dynamic and stochastic system. the dynamicity and stochasticity result from various random events, ranging from non-recurrent random incidents, such as traffic accident, vehicle breakdown, signal failure, adverse weather and earthquake, to recurrent fluctuations in travel demand and capacity by time of day, day of week, and season. the travel time or queuing delay at the bottleneck is thus a stochastic variable. furthermore, commuters may not have perfect information about the traffic condition, and thus cannot perceive the travel time accurately, leading them to make travel choice decisions somewhat haphazardly. 6) only one bottleneck. the classical vickrey's bottleneck model assumes that there is only one single bottleneck on the highway connecting commuters' home and workplace. however, in reality one commuter often traverses multiple bottlenecks on his/her way to work, e.g., a y-shaped highway corridor with upstream and downstream bottlenecks. it is thus worthwhile to extend the single-bottleneck model to a multi-bottleneck case. 7) analytical approach. the classical bottleneck model has well-defined analytical solutions, as shown in appendix. this is because it tackles a single bottleneck only. 
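the well-defined analytical solutions just mentioned can be written down compactly. the following python sketch reproduces the standard closed-form equilibrium of the classical bottleneck model (rush-hour start and end, early/late departure rates, and equilibrium cost per trip); the numerical inputs below are illustrative assumptions, not values taken from any of the reviewed papers.

```python
# Minimal sketch of the closed-form equilibrium of the classical bottleneck
# model: N identical commuters, bottleneck capacity s, value of travel time
# alpha, schedule-delay values beta (early) and gamma (late), common desired
# arrival time t_star. These are the standard textbook formulas.

def vickrey_equilibrium(N, s, alpha, beta, gamma, t_star):
    assert beta < alpha, "early-arrival penalty must be below the value of time"
    delta = beta * gamma / (beta + gamma)
    return {
        # departure interval: the rush lasts N/s hours in total
        "t_first": t_star - gamma / (beta + gamma) * N / s,
        "t_last": t_star + beta / (beta + gamma) * N / s,
        # equilibrium departure rates while arriving early / late
        "rate_early": s * alpha / (alpha - beta),
        "rate_late": s * alpha / (alpha + gamma),
        # equilibrium trip cost (net of free-flow time) and total cost
        "cost_per_trip": delta * N / s,
        "total_cost": delta * N * N / s,
    }

# illustrative inputs: 9000 commuters, capacity 3000 veh/h, t* = 9:00
eq = vickrey_equilibrium(N=9000, s=3000, alpha=10.0, beta=5.0, gamma=20.0, t_star=9.0)
print(eq["t_first"], eq["t_last"], eq["cost_per_trip"])
```

with these illustrative inputs the rush lasts N/s = 3 hours, from 6.6 to 9.6, and every commuter bears the same equilibrium cost, which is the defining property of the no-toll equilibrium discussed throughout this review.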
in order to promote the realistic applicability of the model, it is necessary to extend the analytical bottleneck model to a general network with many links and many od (origin-destination) pairs. to do this, a point-queue dta approach (i.e., treating the queues on congested links in the network as a point bottleneck) and a traffic simulation technique may be adopted. these aforementioned assumptions play a significant role in deriving the analytical solutions of the bottleneck model and in revealing the nature of congestion dynamics. however, they also restrict the model's explanatory power and applications for a general case because they ignore many realistic characteristics of the traffic system. to strengthen the realism of the bottleneck model, these assumptions have been relaxed in the literature through various extensions, which are in turn reviewed as follows. the classical vickrey's bottleneck model concerns only the departure time choice of commuters. in the literature, some extensions have been made to incorporate other travel choice dimensions, such as route / parking / mode choice dimensions. in terms of route choice dimension, arnott et al. (1990b) presented a simultaneous departure time and route choice model for a network with one od pair and two parallel routes. it showed that at the no-toll equilibrium, the number of users on each route coincides with that in the social optimum. optimal uniform and step tolls divert users towards longer routes, but only slightly. an optimal time-varying toll eliminates queuing without affecting route usage. arnott et al. (1992) and liu and nie (2011) proposed multi-class departure time and route choice models for identifying the behavioral difference of different user classes. siu and lo (2014) further addressed the simultaneous departure time and route choice problem in a bottleneck system with uncertain route travel time.
recently, kim (2019) empirically estimated the social cost of traffic congestion in the us using a simultaneous departure time and route choice bottleneck model. it was shown that the annual cost of congestion borne by all us commuters is about 29 billion dollars. these aforementioned simultaneous departure time and route choice models have derived some insights into understanding the nature of commuters' route choice behavior during the morning commute. however, these studies usually considered only one od pair and an auto-only bottleneck system. they also assumed that total travel demand was fixed (or inelastic), and ignored parking as a source of congestion at the destination. besides route choice dimension, parking dimension has also been incorporated in the bottleneck model studies, as shown in table 4. it can be seen in table 4 that most of the parking studies considered the morning commuting problems from home to work through a bottleneck-constrained route. some exceptions are zhang et al. (2008, 2019), who studied the integrated morning and evening commutes through one two-way route with one bottleneck each way. some studies, such as arnott et al. (1991b), zhang et al. (2008, 2019), and liu (2018), made a strong assumption about the parking order, i.e., commuters park outwards from (or inwards to) the cbd. qian et al. ( , 2012) divided the parking lots into two discrete classes: a closer and a farther parking cluster. others, like yang et al. (2013) and liu et al. (2014a, ), considered parking reservation issues without/with expiration time and the effects of parking space constraints on the departure time and parking location choices. liu and geroliminis (2016) examined the effects of cruising-for-parking on commuters' departure time choices using the mfd (macroscopic fundamental diagram) approach.
however, none of these studies considered the parking duration issue (lam et al., 2006; li et al., 2008), which directly affects the parking turnover and thus the real-time number of available parking spaces in a parking lot. in addition, the bottleneck model has also been extended to consider the mode choice dimension. for the convenience of readers, we summarize in table 5 some principal contributions to the multi-modal bottleneck problems. it can be noted that most studies considered two physically separated modes (auto and rail), and thus cannot consider the congestion interaction between modes. moreover, some studies, such as tabuchi (1993), danielis and marcucci (2002), and gonzales and daganzo (2012), ignored the effects of passenger crowding discomfort in transit vehicles on commuters' travel choices. however, some studies, such as huang (2000, 2002), huang et al. (2007), and de palma et al. (2017), showed that in-vehicle passenger crowding discomfort has a significant effect on passengers' travel choices. in order to achieve a social-optimum system, the in-vehicle passenger crowding in transit vehicles should be incorporated in the transit service optimization, together with passenger wait time at transit stops due to insufficient vehicle capacity. as previously stated, the standard bottleneck model focuses mainly on morning commuting problems, and little attention has been paid to evening or day-long commuting problems. this may be because the evening commuting is usually seen as a
some studies, such as vickrey (1973) , de palma and lindsey (2002a) , gonzales and daganzo (2013) , and li et al., (2014) , have shown that the morning and evening equilibrium departure patterns are not symmetric under some conditions, e.g., the bottleneck system has multiple alternative travel modes, or commuters are heterogeneous in terms of their preferred work start/end times and/or the values of travel time and schedule delay. although investigation of the morning and evening commuting problems in isolation may provide some important insights, in reality commuters usually make travel decisions based on their day-long activity schedules. to date, only a few published papers have involved the analysis of day-long commuting problems. for example, zhang et al., (2008) presented an integrated day-long commuting model that links the morning and evening commuting trips via parking location choice. gonzales and daganzo (2013) incorporated mode choice dimension in the integrated morning and evening commuting problem. daganzo (2013) further examined the two-mode day-long commuting problem when the wish times of arriving at and departing from workplace follow a continuous distribution. the day-long commuting models mentioned above adopted a trip-based modeling approach, and thus the time allocations of commuters for activities and travel during a day cannot be properly addressed. different from the trip-based morning-evening commuting models, zhang et al., (2005) presented a day-long activitytravel scheduling model to address commuters' time allocations among activities and travel during a day. their model connects the home-to-work commute in the morning and the work-to-home commute in the evening via work duration. li et al., (2014) investigated the properties of the day-long activity-travel scheduling model. 
they presented a sufficient and necessary condition for interdependence between the morning and evening departure-time decisions, i.e., the marginal utility of the work activity is not a constant, but depends on both the clock time of day and the work duration (implying a flexible work-hour scheme). recently, zhang et al. (2019) further investigated autonomous-vehicle-oriented morning-evening commuting and parking problems. these previous studies usually considered a simple activity chain, namely the home-work-home chain. however, in reality commuters may engage in other activities before work (e.g., taking the kid to school) or after work (e.g., shopping or recreation). it is thus meaningful to incorporate other activity participation in the day-long activity-travel scheduling model. vickrey's bottleneck model assumes that the value of travel time and the values of schedule delay of arriving early and late are constants α, β, and γ, respectively (see eq. (a1) in the appendix). this assumption has been widely adopted in various extensions or variations of vickrey's bottleneck model. however, some previous empirical studies have confirmed that the marginal activity utility varies in time and space. vickrey (1973) formulated the departure time choice model for the morning commuting problem, in which the utilities derived from time spent at home and at work are linear functions of time. tseng and verhoef (2008) estimated a scheduling model of the morning commuting problem in which marginal utilities vary nonlinearly over the time of day. jenelius et al. (2011) explored the effects of activity scheduling flexibility and interdependencies between different segments in a daily trip chain on delay cost and value of time.
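to make the α-β-γ setup concrete, the basic no-toll equilibrium of vickrey's model admits a closed-form solution: the congested peak lasts N/s hours, and every commuter bears the same generalized cost δN/s with δ = βγ/(β + γ). the sketch below computes these standard quantities for purely illustrative parameter values (they are not taken from any of the cited studies):

```python
# Closed-form no-toll equilibrium of the basic bottleneck model with
# piecewise constant (alpha-beta-gamma) scheduling preferences.
# All parameter values are illustrative.

def bottleneck_equilibrium(N, s, t_star, beta, gamma):
    """N commuters, capacity s (veh/h), common preferred arrival t_star (h),
    beta = unit cost of arriving early, gamma = unit cost of arriving late."""
    delta = beta * gamma / (beta + gamma)              # composite parameter
    peak = N / s                                       # peak duration (hours)
    t_start = t_star - gamma / (beta + gamma) * peak   # first departure
    t_end = t_star + beta / (beta + gamma) * peak      # last departure
    trip_cost = delta * N / s                          # identical cost per commuter
    total_cost = delta * N ** 2 / s                    # aggregate travel cost
    return {"t_start": t_start, "t_end": t_end,
            "trip_cost": trip_cost, "total_cost": total_cost}

eq = bottleneck_equilibrium(N=6000, s=4000, t_star=9.0, beta=10.0, gamma=30.0)
```

in this illustration the peak runs from 7.875 to 9.375, and every commuter bears the same equilibrium cost regardless of departure time, which is exactly the property the time-varying extensions discussed below relax.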
hjorth et al. (2013) empirically estimated different types of activity scheduling preference functions (including const-step, const-affine and const-exp formulations) and compared them to a more general form (the exp-exp formulation) with regard to model fit, based on stated preference survey data collected from car commuters traveling in the morning peak in the city of stockholm. abegaz et al. (2017) used stated preference data to compare the valuation of travel time variability under a structural model where trip-timing preferences are defined in terms of time-dependent utility rates (i.e., a "slope model") against its reduced-form model where departure time is assumed to be optimally chosen. fosgerau and small (2017) presented a dynamic model of traffic congestion in which scheduling preferences are endogenously determined. this is different from the traditional activity scheduling models, in which the scheduling preferences are assumed to be exogenously given. an activity-based bottleneck model has also been developed for investigating the step tolling problem, in which the activity scheduling utilities of commuters at home and at work vary by the time of day; it showed that ignoring the preference heterogeneity of commuters would underestimate the efficacy of a step toll. recently, li and huang (2018) investigated the user equilibrium problem of a single-entry traffic corridor with continuous scheduling preferences. the results showed that the introduction of continuous scheduling preferences makes the inflow rate of early arrivals first increase and then decrease. even though the introduction of continuous scheduling preferences can smooth the departure rate of commuters and make the user equilibrium flow pattern more stable, a series of shock waves still exists due to discontinuities in departure rates or a sharply decreasing inflow rate at the entry point of the corridor.
these aforementioned studies mainly focused on evaluating or comparing the effects of different forms of the activity scheduling preference functions, while other important factors were ignored. for example, the marginal utilities of commuters may change with gender, travel mode, income level, and so on. it is meaningful to reveal the effects of these heterogeneities on the activity scheduling preferences and to empirically calibrate the scheduling preference functions of various activities through field surveys. the point-queue assumption in vickrey's bottleneck model significantly facilitates the calculation of queuing delay at the bottleneck. however, it cannot account for the influence of vehicle queuing on upstream approaching vehicles, because it ignores the physical lengths of vehicles in the queue. lago and daganzo (2007) investigated two important aspects: queue spillovers caused by insufficient road space, and merging interactions caused by the convergence of trips in a two-origin and single-destination network with limited storage space. they obtained some unexpected findings, e.g., that ramp metering is beneficial, and that providing more freeway storage is counterproductive. chen et al. (2019) explored the impact of queue-length-dependent capacity on travelers' departure time choices in the morning commute problem. it showed that multiple equilibria and even a continuum of equilibria may exist, and the equilibrium cost may be a locally decreasing function of the number of users. the standard model for analyzing traffic congestion with vehicle queue length consideration usually incorporates a relationship between the volume, speed and density of traffic flow. there is a well-defined inverse-u-shaped relationship between traffic volume and density. however, most traffic flow models focus on the situation of "ordinary (or normal) congestion", in which traffic volume increases as traffic density increases (or travel speed decreases as traffic volume increases).
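the point-queue mechanics discussed above can be sketched as a simple recursion: the queue at each instant grows by the excess of inflow over bottleneck capacity and occupies no physical road space. the minimal discrete-time sketch below uses illustrative numbers, not data from any cited paper:

```python
# Point-queue (vertical queue) dynamics: the queue grows whenever the
# inflow rate exceeds bottleneck capacity; vehicle lengths and queue
# spillback are ignored entirely, as in Vickrey's original model.

def point_queue(inflow, capacity, dt=1.0):
    """Return the queue size (vehicles) after each step, given an inflow
    profile (veh per step) and a fixed bottleneck capacity (veh per step)."""
    queue, history = 0.0, []
    for r in inflow:
        # discharge is limited both by capacity and by what is present
        out = min(capacity * dt, queue + r * dt)
        queue = queue + r * dt - out
        history.append(queue)
    return history

# inflow of 6 veh/step against a 4 veh/step bottleneck, then inflow stops
q = point_queue(inflow=[6, 6, 6, 0, 0, 0], capacity=4)
```

the queue builds while inflow exceeds capacity and then drains at the capacity rate; a horizontal-queue model would additionally track where the tail of the queue physically sits.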
this is because it is believed that traffic flow does not fall even under heavily congested conditions. however, in reality, the phenomenon of hypercongestion may occur, especially in the downtown areas of major cities during rush hours. hypercongestion refers to traffic jam situations in which traffic volume decreases as traffic density increases. small and chu (2003) presented tractable models for handling demand fluctuations on a straight uniform highway and in a dense street network located in a central business district (cbd). for the cbd model, they employed an empirical speed-density relationship for dallas, texas, to characterize hypercongested conditions. arnott (2013) presented a bathtub model of downtown rush-hour traffic congestion that captures the hypercongestion phenomenon. it was shown that when demand is high relative to capacity, applying an optimal time-varying toll can generate benefits that may be considerably larger than those obtained from standard models and that exceed the toll revenue collected. fosgerau and small (2013) combined a variable-capacity bottleneck with α-β-γ scheduling preferences for a special case with only two possible levels of capacity. it showed that the marginal cost of adding a traveler is especially sensitive to the low level of capacity, and that under hypercongestion the policies (an optimal toll, a coarse toll, and metering) can be designed so that travelers gain even without considering any toll revenue. fosgerau (2015) extended the bathtub model to assess the effects of road pricing, transit provision and traffic management policies under hypercongestion. it showed that the unregulated nash equilibrium is also the social optimum among a wide range of potential outcomes, and that any reasonable road pricing scheme would be welfare-decreasing, when the speed of the alternative transit mode is high enough such that hypercongestion does not occur in equilibrium.
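the inverse-u relationship and the hypercongested branch can be illustrated with the simplest linear (greenshields-type) speed-density law, which is in the same spirit as the negative linear velocity-density function adopted by arnott et al., (2016); the parameter values below are purely illustrative:

```python
# Greenshields-type relation: v = vf * (1 - k / kj), flow q = k * v.
# Flow rises with density up to the critical density kj/2 ("ordinary
# congestion") and then falls as density keeps growing ("hypercongestion").

def flow(k, vf=60.0, kj=120.0):
    """Traffic flow (veh/h) at density k (veh/km), with free-flow speed
    vf (km/h) and jam density kj (veh/km) -- illustrative values."""
    return k * vf * (1.0 - k / kj)

ordinary = flow(40)    # below the critical density kj/2 = 60
critical = flow(60)    # maximum flow at the top of the inverse-U
hyper = flow(100)      # hypercongested branch: higher density, lower flow
```

the hypercongested point carries more vehicles per kilometer yet discharges fewer vehicles per hour than the ordinary-congestion point, which is precisely why the bathtub-model policies discussed above can be welfare-improving.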
large welfare gains can be achieved through road pricing when there is hypercongestion and travelers are heterogeneous. gonzales (2015) further considered the hypercongestion issue in a multi-modal context. it showed that hypercongestion may arise when modes are not priced, that a stable steady equilibrium state can emerge when cars and high-capacity transit are used simultaneously, and that there always exist fixed coordinated prices (i.e., a fixed difference of prices) for cars and transit that achieve a stable equilibrium state without hypercongestion. in order to derive a closed-form solution for the no-toll equilibrium under hypercongestion, arnott et al. (2016) proposed a special bathtub model by adapting the simplest bottleneck model to an isotropic downtown area where the congestion technology entails velocity being a negative linear function of traffic density. liu and geroliminis (2016) adopted a macroscopic fundamental diagram (mfd) approach to model the hypercongestion effects of cruising-for-parking in a congested downtown network. in addition to the hypercongestion issues, the traffic flow model that describes the relationship between velocity and density has also been extended for the investigation of continuum corridor problems (see, e.g., arnott and de palma, 2011; de palma and arnott, 2012; li and huang, 2017; lamotte and geroliminis, 2018). it should be mentioned that tractability is a major challenge for models with hypercongestion consideration. this is because the travel time of a traveler is determined by the decisions of other travelers throughout the duration of the trip, as pointed out in fosgerau and small (2013). therefore, it is difficult to derive analytical solutions for a general case with general scheduling preferences, heterogeneous users, or travel time uncertainty, and a simulation method thus has to be used.
the standard bottleneck model makes a strong assumption that commuters are homogeneous, i.e., all commuters have the same preference for arriving early or late and an identical value of time. this assumption has been relaxed in the literature to consider the heterogeneity of commuters, such as heterogeneities in travel preferences and work start time (e.g., flexible or staggered work hours). the heterogeneity may be represented in discrete or continuous form. the discrete type of heterogeneity means that all commuters are divided into several groups, and the commuters in one group are assumed to have the same preference and work start time. the continuous type of heterogeneity assumes a continuous distribution for the preference and/or work start time. considering user heterogeneity is important for achieving accurate estimates of the welfare effects of various policy measures, such as congestion pricing, ramp metering, capacity investment, and flexible work schedules. table 6 provides a summary of bottleneck model studies involving heterogeneous users. it can be seen that the existing studies mainly focused on the case of piecewise constant scheduling preferences (i.e., α-β-γ preferences), and considered users' heterogeneities in the following ways: (i) identical preferred arrival time and discretely/continuously distributed scheduling preference parameters; (ii) discretely distributed preferred arrival time and identical/discretely distributed scheduling preference parameters; and (iii) continuously (including uniformly) distributed preferred arrival time and identical/continuously distributed scheduling preference parameters. however, the cases of discretely (continuously) distributed preferred arrival time but continuously (discretely) distributed scheduling preference parameters have not yet been investigated, which provides a research opportunity for further study.
it can also be seen that most studies assumed proportional heterogeneity (i.e., all commuters have the same ratios β/α and γ/α), which helps derive the departure order of different commuter groups and analytical equilibrium solutions. some studies, such as newell (1987), lindsey (2004), ramadurai et al. (2010), doan et al. (2011), and liu et al. (2015c), have relaxed this assumption to consider a general heterogeneity structure (i.e., α, β and γ are allowed to vary independently). some properties of the model (e.g., the existence and uniqueness of the solution) have been discussed. however, it is difficult to derive an analytical solution of the model, which poses the challenge of designing an efficient solution algorithm. in addition, the marginal utility of an activity generally varies over time, as previously stated. it is thus necessary to relax the assumption of α-β-γ scheduling preferences to consider the case of time-varying scheduling preferences. most of the previous bottleneck model studies focused on individual-based trips, and assumed that each household member makes activity-travel scheduling decisions independently. however, in reality, a large number of morning commute trips are household-based travel, i.e., a multi-person trip among household members rather than a single-person trip. the interdependency between household members could influence the activity participation of each household member. therefore, intra-household interaction should be considered in the activity-travel scheduling models. de palma et al. (2015) proposed a variant of vickrey's bottleneck model of the morning commute, in which individuals live as couples and value the time at home more when together than when alone.
the results showed that the cost of congestion is higher for couples than for single individuals, because the cost of arriving early rises proportionally more than the cost of arriving late decreases. the costs can be even higher if spouses collaborate with each other when choosing their departure times. jia et al. (2016) explored the departure time choice problem of household travel (commuter and children) in a home-school-work trip chain with two preferred arrival times (a school start time and a work start time). liu et al. (2017b) further considered a hybrid of household travel (home-school-work trip chain) and individual travel (home-work trip). the findings showed that by appropriately coordinating the schedules of work and school, the traffic congestion at the highway bottleneck and thus the total travel cost can be reduced. zhang et al. (2017) further investigated and compared the morning commuting equilibrium solutions in the "school near workplace" and "school near home" networks. it was shown that the dynamic commuting equilibrium solution is significantly affected by school locations, and that in the "school near home" network, households always arrive at school no later than the desired school arrival time. these abovementioned studies considered the morning trip-timing decisions of couples (de palma et al., 2015) and of a parent and his/her children (jia et al., 2016; liu et al., 2017b; zhang et al., 2017). however, for a family with two workers (husband and wife), the couple must decide who takes the children to school in the morning and when, and who brings them back home in the evening. a parent has to trade off not only his/her own schedule convenience with that of his/her spouse, but also the schedules of the children. it is therefore meaningful to address the morning-evening activity scheduling issues with intra-household interaction consideration.
carpooling or ridesharing refers to the case in which multiple persons travel together in an auto by sharing the cost. with carpooling, the seat capacity of an auto can be utilized more efficiently, and the average individual travel costs, such as fuel cost, toll, and the stress of driving, are reduced. carpooling is also recognized as a more environmentally friendly and sustainable way to commute, since it reduces vehicular carbon emissions as well as the need for parking spaces. recently, xiao et al. (2016) incorporated carpooling behavior into the morning commute problem, considering the parking space constraint at the destination. three modes, namely solo-driving, carpooling, and transit, were considered. it was shown that the departure period of solo drivers covers the departure period of carpoolers, and that as the number of parking spaces decreases, the number of solo drivers decreases gradually, while the number of carpoolers first increases and then decreases. liu and li (2017) examined the morning commute problem in the presence of a ridesharing program, where commuters simultaneously choose their departure time from home and their role in the program (solo driver, ridesharing driver or ridesharing rider). ma and zhang (2017) further explored the dynamic ridesharing problem on a highway with a single bottleneck together with parking, and designed schemes with different ridesharing payments and shared parking prices. recently, a variable-ratio charging-compensation scheme was presented to investigate the dynamic ridesharing problem using the bottleneck model approach, considering different objectives of the platform, including minimization of system disutility, maximization of platform profit, and minimization of system disutility subject to zero platform profit. yu et al. (2019) incorporated users' heterogeneities into the carpooling problem based on the traditional bottleneck model, and revealed the effects of heterogeneities on the efficiency of carpool subsidization.
all the aforementioned studies are based on a corridor with a single bottleneck, and thus cannot consider the interaction between flows on different links in the network (i.e., network effects). extending the single-bottleneck model to a general network with multiple od pairs and multiple bottlenecks could help deepen understanding of the effects of carpooling services on the urban transport system. carpooling services can be implemented through mobile platforms on which passengers can call for riding services and drivers can respond to the service requests. the existing related studies considered a single carpooling platform; it is meaningful to examine the competition and/or collaboration among multiple carpooling platforms. transportation systems are stochastic, dynamic and nonlinear systems due to various disturbance factors on the supply side and/or demand side, such as traffic accidents, bad weather, and within-day and/or day-to-day demand variations. considering the impacts of stochastic factors on transportation systems has important implications for promoting the resilience and reliability of these systems. table 7 lists some major studies incorporating uncertainty effects in the bottleneck problems. it can be noted that most previous studies focused on supply uncertainty caused by travel time variation or capacity randomness. it is somewhat surprising that no studies take into account the effects of uncertainty on the demand side in the bottleneck problems, though there are a few publications involving joint fluctuations on both the supply and demand sides (e.g., arnott et al., 1999; fosgerau, 2010). in order to model the uncertainty effects in the bottleneck system, a probability distribution function needs to be specified for the random variables concerned.
in this regard, most studies adopted a general distribution, and a few studies adopted specific distributions, such as the uniform distribution (e.g., xiao et al., 2014; wang and xu, 2016; zhang et al., 2018), the exponential distribution (tian and huang, 2015), uniform and exponential distributions (noland et al., 1995, 1998), and the gumbel distribution (xiao and fukuda, 2015). in terms of modeling method, the expectation value model is usually adopted in stochastic optimization problems. however, in order to capture the risky attitudes of travelers towards random fluctuations on the demand and/or supply sides, some studies have also incorporated the effects of travel time variation in the objective functions of their models, such as fosgerau (2010), borjesson et al. (2012), engelson and fosgerau (2016), and xiao et al. (2017). it should be mentioned that in the field of travel time variability or reliability, fosgerau and his collaborators have made a number of contributions using a model of time-varying scheduling preferences. for instance, they presented a new measure of travel time variability and explored the relationships among different measures of the cost of travel time variability (engelson and fosgerau, 2016). they also derived the value of travel time variability (fosgerau and fukuda, 2012), and revealed the relationship between the mean and variance of travel delay in dynamic queues with random capacity and demand (fosgerau, 2010). for a systematic review of the values of travel time and travel time reliability, interested readers may refer to small (2012). it should be pointed out that these previous studies are mainly based on the expectation value (risk-neutral) model or the mean-variance (risk-averse) model. it is meaningful to consider other measures of risk, such as var (value at risk) and cvar (conditional value at risk). this could lead to a difference in the value of travel time variability.
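the contrast between the mean-variance measure and a tail-risk measure such as cvar can be made concrete with a small numerical sketch; the travel-time sample, the risk parameters, and the empirical tail rule below are entirely illustrative assumptions, not quantities from the cited studies:

```python
import statistics

# Compare a mean-variance (risk-averse) travel cost with a CVaR-based
# cost for the same sample of travel times. All numbers are illustrative.

def mean_variance_cost(times, alpha=1.0, lam=0.5):
    """alpha * E[T] + lam * std(T): the classical mean-variance form."""
    return alpha * statistics.mean(times) + lam * statistics.pstdev(times)

def cvar(times, level=0.8):
    """Conditional value at risk: mean of the worst (1 - level) share,
    using a simple empirical-tail rule on the sorted sample."""
    srt = sorted(times)
    tail = srt[int(level * len(srt)):]
    return statistics.mean(tail)

# eight ordinary days plus two incident days (minutes)
sample = [20, 22, 21, 23, 20, 45, 22, 21, 24, 50]
mv = mean_variance_cost(sample)
tail_cost = cvar(sample, level=0.8)
```

the two incident days dominate the cvar figure but are heavily diluted in the mean-variance figure, which is why the choice of risk measure can change the implied value of travel time variability.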
moreover, these previous studies assumed that the scheduling preferences are exogenously given. incorporating endogenous scheduling preferences, as presented in fosgerau and small (2017), is also an important direction for further studies. obviously, variability or uncertainty on the supply and/or demand sides involves a lack of information about how a stochastic process is realized. its analysis therefore naturally invites considering the effects of information provision. travelers may have only partial information about traffic conditions before or during a trip. with the aid of various information and communication technologies (e.g., global navigation satellite systems, global positioning systems), real-time traffic information can be collected and disseminated efficiently. travelers can adjust their activity and travel schedules through day-to-day learning and traffic information guidance. in this regard, noland (1997) examined the congestion effects of providing commuters with pre-trip information and found that information provision does not necessarily bring benefits to the commuters using the information. ziegelmeyer et al. (2008) investigated the impact of public information about past departure rates on congestion level and travel cost based on a learning model and the adl's bottleneck model. liu et al. (2017a) considered the effect of travelers' inertia in the day-to-day behavioral adjustment due to traffic information updating. khan and amin (2018) studied the effects of heterogeneous information (market penetration and accuracy) on traffic congestion. the impact of the cost of information provision on the information quality provision strategy has also been explored. zhu et al. (2019) examined the day-to-day departure time adjustment process of travelers with bounded rationality based on long-term historical knowledge (or short-term travel experience) and real-time information provision.
however, all the aforementioned studies only considered the simplified case of one single route, which may not be able to capture the full impact of information on traffic congestion. it is thus necessary to extend them to a general network with multiple routes in a further study. some studies have relaxed the assumption that commuters pass through only one bottleneck during commuting peak periods to consider the case of passing through multiple bottlenecks during a trip. kuwahara (1990) analyzed the equilibrium queuing patterns at a two-tandem bottleneck on one freeway for which some commuters may pass through both bottlenecks. arnott et al. (1993b) studied a y-shaped highway corridor with two upstream bottlenecks and one downstream bottleneck. they found that expanding the capacity of one of the upstream bottlenecks can raise total travel cost (i.e., a paradox occurs), and that metering access to reduce effective upstream capacity can improve efficiency. the optimal capacity for an upstream bottleneck is equal to, or smaller than, the optimal capacity for the downstream bottleneck. kim (1999) further analyzed the dynamic equilibrium queuing patterns for a two-tandem bottleneck with two origins and one destination. it was found that in some cases a queue does not occur at the upstream bottleneck, since the departure rate is always equal to its capacity at equilibrium. in order to avoid traffic congestion at a two-tandem bottleneck, the downstream bottleneck should be enlarged prior to the upstream bottleneck. the bottleneck paradox phenomenon has further been demonstrated by an experimental method in a y-shaped bottleneck network with two groups of commuters, where the commuters in group one pass through only the downstream bottleneck, whereas the commuters in group two must pass through both the upstream and downstream bottlenecks. it was found that the observed departure times at the aggregate level are in close agreement with the equilibrium solution.
akamatsu et al. (2015) discussed the existence and uniqueness of the solution of departure-time choice equilibrium for a corridor with multiple discrete bottlenecks and heterogeneous users. these previous related studies mainly focused on specific settings (e.g., a two-tandem or y-shaped bottleneck structure), and thus the results obtained might not be applicable to a general network. therefore, further investigations on general bottleneck networks are needed. the bottleneck models presented in the aforementioned literature usually adopted analytical approaches because they treated only some simple cases with one or two routes. in order to apply the bottleneck models to real large-scale networks, a dynamic traffic assignment (dta) based bottleneck modeling approach was presented, inspired by vickrey's bottleneck model. in this approach, the usual components of vickrey's bottleneck model are applied separately to the links in the network. in this regard, de palma and his colleagues have developed a dynamic network model, called metropolis, in which the travel mode, departure time and route choices can be endogenously determined. metropolis has been implemented both with a vertical queue for each link (i.e., the physical length of a queue is not considered) and with a horizontal queue, which means the queue on one link can affect other links (i.e., queue spillback effects). the model is solved using microsimulation, and has been applied to evaluate various policies, such as congestion pricing (see de palma and lindsey, 2006; de palma et al., 2005, 2008). for more details on metropolis, please refer to de palma et al. (1997) and de palma and marchal (2002). besides the aforementioned various topics, some other topics related to the bottleneck model have also been studied, summarized as follows. i. properties of the equilibrium solution. smith (1984) showed the existence of the user equilibrium solution for a single-bottleneck model with homogeneous users.
daganzo (1985) proved the uniqueness of the user equilibrium solution. newell (1987) extended the analysis to the case of heterogeneous linear schedule delay functions. an elastic demand version of the bottleneck model was analyzed in arnott et al. (1993a). ii. variations of model formulation and solution algorithm. de palma et al. (1983) proposed a stochastic departure time choice logit model to consider commuters' perception errors of utility. han et al. (2013a, 2013b) reformulated vickrey's bottleneck model as a partial differential equation formulation. otsubo and rapoport (2008) presented a discrete version of vickrey's bottleneck model and a solution algorithm for computing the equilibrium solution. nie and zhang (2009) proposed numerical solution procedures for the morning commute problem. guo and sun (2019) considered personal perception in the travel cost function, aiming to incorporate commuters' psychological tastes towards early arrival at the workplace in the bottleneck model. iii. doubly dynamic adjustments (day-to-day and within-day dynamics). ben-akiva et al. (1984) presented a dynamic simulation model to describe the evolution of queues and delays from day to day. ben-akiva et al. (1991) further presented a framework for evaluating the effects of traffic information systems based on the doubly dynamic adjustment model incorporating drivers' information acquisition and integration. guo et al. (2018) considered the bounded rationality factor, due to individuals' limited cognitive levels and imperfect information, in the doubly dynamic bottleneck model. iv. time-varying bottleneck capacity. the classical bottleneck model usually assumes a constant or invariant bottleneck capacity.
using optimal control theory, yang and huang (1997) presented a bottleneck model with queue-dependent capacity and elastic demand for the design of time-varying toll schemes, and found that queues must not be eliminated in the optimal state of the system. zhang et al. (2010) presented another bottleneck model in which the bottleneck capacity varies exogenously over time, in discrete steps. they derived the user equilibrium and system optimal traffic patterns with (exogenously) time-varying capacities, and the optimal tolls leading to the system optimum pattern. v. ramp metering. arnott et al. (1993b) suggested an optimal metering policy to improve the efficiency of a y-shaped bottleneck system. o'dea (1999) found that in the bottleneck model, metering can produce a sizable benefit and should not be regarded as a substitute for congestion pricing. shen and zhang (2010) designed a pareto-improving metering strategy for a multi-ramp linear freeway based on an analysis of the priority order of the ramps. these studies mainly focused on a simple bottleneck system or a freeway, and can be extended to a general network in further studies. queuing delay is a pure deadweight loss for society and results in inefficient use of transportation infrastructure. in order to make efficient use of transportation resources, congestion pricing has been widely suggested as a viable measure to internalize the externalities caused by queuing at the bottleneck, so as to relieve peak-period traffic congestion. congestion pricing schemes are generally based on the economic theory of marginal cost pricing and constitute a mechanism to improve social benefit. for comprehensive reviews of congestion pricing, readers can refer to lindsey et al. (2012), van den berg (2012), and fosgerau and van dender (2013). a substantial stream of research has been conducted on bottleneck congestion pricing. table 8 provides a summary of the bottleneck congestion pricing studies.
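the marginal-cost logic behind the fine (time-varying) toll can be sketched for the basic bottleneck model: the optimal fine toll replicates the queuing-delay cost of the no-toll equilibrium, rising at rate β before the preferred arrival time and falling at rate γ after it, so the queue is eliminated while each commuter's generalized cost is unchanged (toll payments replace wasted queuing time). the code below is an illustrative sketch with made-up parameters:

```python
# Optimal fine toll for the basic bottleneck model: the toll mimics the
# queuing-delay cost of the no-toll equilibrium, so queuing disappears
# but each commuter's generalized trip cost stays the same.
# All parameter values are illustrative.

def fine_toll(t, N, s, t_star, beta, gamma):
    """Time-varying toll charged at arrival time t; zero outside the peak."""
    peak = N / s
    t0 = t_star - gamma / (beta + gamma) * peak   # peak start (toll = 0)
    t1 = t_star + beta / (beta + gamma) * peak    # peak end (toll = 0)
    if t <= t0 or t >= t1:
        return 0.0
    if t <= t_star:
        return beta * (t - t0)     # rises at rate beta until t_star
    return gamma * (t1 - t)        # falls at rate gamma after t_star

# toll peaks at t_star, where it equals the composite cost delta * N / s
toll_at_tstar = fine_toll(9.0, N=6000, s=4000, t_star=9.0, beta=10.0, gamma=30.0)
```

the toll profile is triangular over the peak, which is why coarse and step tolls, as piecewise constant approximations of this triangle, recover only part of the fine toll's efficiency gain.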
it should be pointed out that multi-modal bottleneck tolling studies are not shown in table 8; readers can refer to table 5 for these. it can be seen in table 8 that many of the existing studies focused on the topics of step tolling, users' heterogeneity, and tradable credit schemes. based on their different assumptions, the step bottleneck tolling studies can be classified into three main categories: the adl model of arnott et al. (1990a, 1993a, 1998), the laih model of laih (1994, 2004), and the braking model of lindsey et al. (2012) and xiao et al. (2012). the laih model implicitly assumed that separate queues exist for tolled users and untolled users who arrive before the toll is turned off. despite this strong assumption, the laih model is useful for estimating the approximate efficiency of a multi-step toll scheme. the adl model assumed that a mass of commuters departs just after the toll is lifted. the braking model considered that, as the end of the tolling period approaches, drivers have an incentive to stop before reaching the tolling point and wait until the toll is switched off. the congestion pricing studies shown in table 8 usually fall into the family of piecewise constant α-β-γ preferences, and focus on the case of normally recurrent traffic congestion. the congestion tolling problems have also been investigated in a framework of time-varying scheduling preferences, comparing the single-step and multi-step toll schemes with linear time-varying and piecewise constant marginal activity utilities. arnott (2013) and fosgerau (2015) incorporated the hypercongestion phenomenon in the congestion tolling problems using the bathtub model. vehicular use also causes environmental externalities, besides the congestion externality. in order to control vehicle pollution emissions and improve air quality, emission tax policies have been suggested.
bulteau (2016) proposed a microeconomic model of an urban toll system to internalize the negative externality effects (congestion and pollution) generated by vehicular use. in the proposed model, two modes of transportation (i.e., cars and public transport) were taken into account, and the vehicle emission rate was implicitly assumed to be a constant. several rows of table 8 were flattened into the text here; the recoverable structure (topic: studies; notes) is:

time-varying toll (topic label lost in extraction; likely this): arnott (1986), bernstein and muller (1993), mun (1999), daganzo and garcia (2000); social optimum toll to totally eliminate the bottleneck queue during the morning peak; pareto-improving time-varying toll in a time window; minimize total social cost or user cost.
elastic travel demand: braid (1989), arnott et al., (1993a), yang and huang (1997); consider the elasticity of travel demand to travel cost; maximize total social surplus.
step tolling: laih (1994, 2004), arnott et al., (1990a, 1993a), fosgerau (2011), lindsey et al., (2012), van den berg (2012), gonzales and christofa (2014), knockaert et al., (2016), ren et al., (2016), bao et al., (2017), xu et al., (2019); single-step or multi-step tolling scheme as a substitute for the time-varying tolling scheme; typical models: adl model, laih model, braking model of lindsey et al.
route or lane substitute: braid (1996), hall (2018); two routes: one tolled and one free; a portion of lanes are tolled.
heterogeneous travelers: cohen (1987), arnott et al., (1992, 1994).
reward scheme: rouwendal et al., (2012), yang and tang (2018); a reward scheme means a subsidy instead of a penalty of tolling; fare-reward scheme for transit users.
regulatory regime of bottleneck: de palma and lindsey (2002b, 2008), fu et al., (2018); private regime (profit maximization); public regime (welfare maximization); mixed regime.
stochastic environments: yao et al., (2010), hall and savage (2019); stochastic toll; stochastic capacity.

based on the bottleneck congestion model of arnott et al., (1990a, 1993a), three alternative toll schemes were compared: a fine toll (time-varying toll), a coarse toll (varying between the peak period and off-peak period), and a uniform toll (constant over time). the policy of redistributing the gains from the urban tax to public transport was also evaluated. liu et al., (2015b) presented a variable speed limit scheme to reduce total traffic emissions and travel costs based on vickrey's bottleneck model and a constant vehicle emission rate assumption, and evaluated its effectiveness in improving the traffic flow efficiency of the bottleneck system. these studies only considered one od pair with one route and assumed a constant vehicle emission rate. it is meaningful to extend these studies to incorporate network effects and vehicle emissions that vary with vehicle type and speed. tradable credit schemes have recently been advocated as a useful tool for regulating the externalities caused by vehicular use and as a promising substitute for congestion pricing. this is because such a scheme does not involve money transfer from the public to the government, which can significantly increase public acceptability towards the scheme (yang and wang, 2011).
studied various parking permit schemes in a many-to-one network, in which each origin is connected to a single destination by a bottleneck-constrained highway and a parallel transit line. they compared three parking permit distribution schemes for commuters living in different origins: uniform, pareto improving, and system optimum schemes. liu et al., (2014a ,b) further developed tradable parking permit schemes to realize parking reservations for homogeneous or heterogeneous commuters in terms of their values of time. nie and yin (2013) proposed a general analytical framework for design of system optimal tradable credit scheme and analysis of the efficiency of tradable credit scheme for a two-route system. their results showed that the tradable credit scheme could provide substantial efficiency gains for a wide range of scenarios. tian et al., (2013) examined the efficiency of a tradable credit scheme in a competitive highway/transit network with continuous heterogeneity in terms of individuals' values of time. xiao et al., (2013) explored the efficiency and effectiveness of a tradable credit system with identical and non-identical commuters. credits are tradable between the commuters, and the credit price is determined by a competitive market. the credit system consists of a time-varying credit charged at the bottleneck and an initial credit distribution to the commuters. nie (2015) proposed a market-based tradable credit scheme for managing traffic congestion at critical bottlenecks (e.g., bridges and tunnels). it was assumed that users who avoid traveling in the peak-time window will be rewarded with mobility credits and those who do not will pay a congestion toll in the form of credits or cash. the travelers may trade their credits with each other. it was shown that the best choice of the rewarding-charging ratio is 1, i.e., each peak-time user is charged one credit and each off-peak user is awarded one credit. 
shirmohammadi and yin (2016) designed a tradable credit scheme to maintain the queue length of the bottleneck to be less than a queue length threshold specified by the authority. sakai et al., (2017) proposed a model for designing pareto-improving pricing scheme with bottleneck permits for a v-shaped two-to-one merging bottleneck. they showed that the first-best pricing scheme for this v-shaped network does not always achieve a pareto improvement, because the cost of one group of drivers is increased due to the permit pricing. xiao et al., (2019) presented two tradable parking permit schemes for a corridor system with three alternative travel modes, i.e., transit, driving alone and carpool, when the parking supply at destination is insufficient. it was found that the prices of parking permits, regardless of whether the trip is completed as a carpool or not, decrease with the parking supply, and the price that a solo driver should pay is higher than that a carpooler should pay. the tradable uniform parking permit scheme is more efficient than the tradable differentiated parking permit scheme for solo-driving and carpooling travelers. it can be noted that these abovementioned studies mainly focused on a single od pair with one or two routes, and the transaction costs for trading the credits were usually ignored. it is thus important to look at the impacts of network effects and transaction costs on the effectiveness of the tradable credit schemes. in reality, markets always create speculators. it is also meaningful to look at the effects of collusive behavior among credit purchasers. in addition, when the parking supply is insufficient at destination, one may first park his/her car at a park-and-ride lot and then transfers to a transit vehicle to reach final destination. it is therefore necessary to extend the existing models to incorporate the park-and-ride services in a further investigation. 
the redistribution of toll revenue is an important factor influencing the public acceptability of the toll schemes and thus practical implementation. adler and cetin (2001) developed an analytical model for a two-node two-route network, aiming to explore a direct redistribution approach in which money collected from the drivers on a more desirable route is directly transferred to the users on a less desirable route. it was shown that this model about toll collection and subsidization would reduce the travel cost for all travelers and totally eliminate the wait time in the queue. compared with the social optimal solution, the direct redistribution model yields almost identical results. mirabel and reymond (2011) analyzed the impact of toll redistribution on total cost and on modal split between railroad and road based on the two-mode model of tabuchi (1993) . in their model, it was assumed that toll revenue from road was redistributed to public transport. two kinds of road toll regimes were considered, i.e., a fine toll and a uniform toll. it was shown that a toll policy is more efficient as long as toll revenue is directed towards public transport when the railroad fare is equal to average cost. these previous studies mainly focused on the redistribution of toll revenue for public transport improvement purpose. it will be meaningful to take into account other use purposes, such as transportation infrastructure investment and fiscal revenue, and to determine the optimal redistribution proportion among different uses. although the first-best time-varying toll may eliminate queuing completely, congestion toll scheme may not be politically feasible. parking charging can be considered as a possible substitute for the congestion tolling because parking charges seem to be much easier to implement than congestion tolls. similar to congestion tolls, parking charges may be used to disperse demand over time so as to reduce congestion and gain efficiency. 
zhang et al., (2008) presented a morning-evening commuting model to determine a location-dependent parking fee scheme that optimizes the commuters' morning/evening commuting pattern. analyzed the regulatory schemes of the parking market: price-ceiling and quantity tax/subsidy schemes. it was shown that both price-ceiling and quantity tax/subsidy regulations can efficiently reduce system cost and commuter cost under certain conditions, and help ensure the stability of the parking market. fosgerau and de palma (2013) determined the optimal parking charges and evaluated the benefits of parking pricing as an alternative to congestion tolls. zhang and van wee (2011) proposed a duration-dependent parking fee scheme, and compared it with three other pricing regimes: no charging, optimal time-varying road tolls, and a combination of optimal time-varying road tolls and location-dependent parking fees. ma and zhang (2017) derived dynamic parking charges for a bottleneck system with ridesharing, in which all travelers were assumed to participate in the ridesharing program, i.e., a traveler was either a driver or a passenger. as a substitute for parking pricing, parking permit schemes have also been studied in the literature (liu et al., 2014b; xiao et al., 2019), as presented in subsection 3.2.3. the aforementioned studies did not consider commuters' time spent on searching for available parking spaces. the search for parking spaces is a wasteful commuting component that contributes to traffic congestion, and thus should be included in commuting cost. on the other hand, parking facilities are usually supplied by both private firms and the public sector. it will be interesting to examine this mixed market and compare it with the extreme cases of either private-only or public-only parking provision. in addition, a mixed market consisting of solo-driving and ridesharing should be investigated for analyzing the effects of parking pricing on the market.
ship queuing and waiting at a general anchorage to enter the berth under port congestion are similar to auto queuing and waiting at the road bottleneck. the congestion pricing concept for a road bottleneck has been extended to address port congestion pricing issues. in this regard, laih and his collaborators have undertaken a number of studies (see laih and hung, 2004; laih et al., 2007, 2015; laih and chen, 2008, 2009; laih and sun, 2013). they derived optimal time-varying and/or step toll schemes to eliminate or decrease port congestion. by levying port congestion tolls, the departure schedules of container ships can be rationally changed, and thus the arrival times of container ships at the busy port can be smoothed or dispersed. as a result, the queuing delays of container ships for port entry decrease. they also derived the resultant changes of container ships' departure schedules after levying port congestion tolls. however, they did not consider the redistribution of port congestion charges, which can help promote the public acceptability of a port congestion charging scheme. in the literature, there are some studies about airport congestion pricing issues. for example, daniel (1995) proposed an airport runway congestion pricing model (i.e., a bottleneck model with time-dependent stochastic queuing) for estimating congestion prices and capacities for large hub airports. the proposed stochastic bottleneck model combines stochastic queuing, time-varying traffic rates, and intertemporal adjustment of traffic in response to queuing delay and fees. daniel and pahwa (2000) showed that the stochastic bottleneck model of daniel (1995) can generate more realistic traffic patterns than earlier models, such as the deterministic bottleneck model of vickrey (1969). daniel and harback (2008, 2009) adopted the stochastic bottleneck model to address the airport congestion pricing issues for 27 major us hub airports.
daniel (2011) further determined the equilibrium congestion pricing schedules, traffic rates, queuing delays, layover times, and connection times by time of day for four canadian airports (toronto, vancouver, calgary, and montreal). daniel (2014) examined the efficiency and practicality of airport slot constraints using a deterministic bottleneck model of landing and takeoff queues. it was shown that slot constraints at us airports would be ineffective, and effective slot constraints require many narrow slot windows. silva et al. (2014) studied airlines' interactions and scheduling behavior, together with airport pricing, using a combination of a deterministic bottleneck model and a vertical structure model that explicitly considers the roles of airlines and passengers. wan et al. (2015) treated terminal congestion and runway congestion separately, and studied its implication for design of optimal airport charges and/or terminal capacity investment. to capture the difference between these two types of congestion, they adopted a deterministic bottleneck model for the terminal and a conventional congestion model for the runways. they showed that the welfare-optimal uniform airfares do not yield the first-best outcome. the first-best fares charged to the business passengers are higher than the leisure passengers' fare if and only if the relative schedule-delay cost of business passengers is higher than that of leisure passengers. these airport pricing studies usually focused on a single airport, and did not consider the effects of airport pricing on the competition and collaboration among regional airports (e.g., the airports of hong kong, guangzhou, shenzhen, zhuhai, and macao in the greater bay area of china), which deserves a further study. queuing delays at the bottleneck during the morning and evening commutes may be an important factor influencing household residential location choice, which shapes urban spatial structure of a city ( mun et al., 2005 ) . 
arnott (1998) incorporated the departure time choice into a model of urban spatial structure by using vickrey's bottleneck model. it was shown that in contrast to the standard static model (without time dimension), congestion tolling in the bottleneck model can cause urban form to become less concentrated, and thus may have less pronounced effects on urban spatial structure than was previously thought. fosgerau and de palma (2012) introduced spatial heterogeneity into the bottleneck model via considering dynamic congestion in an urban setting where trip origins are spatially distributed. it was shown that at equilibrium, travelers sort according to their distances to the destination; the queue is always unimodal regardless of the spatial distribution of trip origins; and the travelers located beyond a critical distance from the cbd tend to gain from tolling, even when toll revenue is not redistributed, while nearby travelers lose. gubins and verhoef (2014) considered a monocentric city with a traffic bottleneck located at the entrance to the cbd. the commuters' departure times, household residential locations, and lot sizes are all endogenously determined. they showed that road pricing may lead to urban sprawl, even when the collected toll revenue is not redistributed back to the city inhabitants. takayama and kuwahara (2017) further developed a model considering commuters' heterogeneity, departure time and residential location choices in a monocentric city with a single bottleneck. the results showed that commuters sort themselves temporally and spatially according to their values of time and schedule delay flexibility. imposing a congestion toll without redistributing toll revenue causes the physical expansion of the city, which is opposite to the results of traditional location models. 
franco (2017) examined the effects of change in downtown parking supply on urban welfare, mode choice and urban spatial structure using a general spatial equilibrium model of a closed monocentric city with two transport modes, endogenous residential parking supply and bottleneck congestion at the cbd. xu et al., (2018) presented an integrated model of urban spatial structure and traffic congestion for a two-zone monocentric city in which the two zones are connected by a congested highway and a crowded railway. the commuters' departure time and mode choices are governed by a bottleneck model, and the endogenous interactions between travel and residential relocation choices are analyzed. fosgerau et al., (2018) presented a unified model of the bottleneck model and the monocentric city model. the model generates a number of new insights regarding the interaction between congestion dynamics and urban spatial equilibrium. unlike the traditional static congested city models, their model leads to an optimal city that is less dense in the center and denser in the suburb than the city at the laissez-faire equilibrium. this result is similar to that in gubins and verhoef (2014) . vandyck and rutherford (2018) developed a spatial general equilibrium model to study economy-wide and distributional implications of congestion pricing in the presence of agglomeration externalities and unemployment. fosgerau and kim (2019) presented a new monocentric city framework that combines a discrete urban space with multiple vickrey-type bottlenecks. they confirmed empirically the relationship between residential location choice and trip-timing choice, i.e., commuters traveling a longer distance tend to arrive at work early or late (i.e., at off-peak times) while commuters with a shorter distance tend to arrive at the peak time. 
these aforementioned studies considered the role of households' residential location decisions in shaping urban spatial structure, but ignored the role of firms' location decisions. in a further study, the effects of both households' and firms' location decisions should be simultaneously considered in the analysis of urban spatial equilibrium. the classical vickrey's bottleneck model has also been employed to address transit passenger travel choice behavior and transit system optimization issues. kraus and yoshida (2002) incorporated the commuter's time-of-use decision into a model of transit pricing and transit service optimization, in which waiting time at a transit stop was treated analogously to queuing time at the highway bottleneck. it was shown that increased ridership leads to higher average user cost, and the relationship between service frequency and ridership does not conform to the well-known square root principle. yoshida (2008) further studied the effects of passengers' queuing rules at transit stops (including the first-in-first-out and the random-access queuing) on the mass-transit policies, such as the number of trains and runs, scheduling, and pricing. the results showed that when the shadow value of a unit of waiting time exceeds that of a unit of time late for work, the passengers' queuing discipline does not have any effect on the optimal or second-best mass-transit policy. otherwise, the aggregate travel cost with random-access queuing is lower than that with first-in-first-out. tian et al., (2007) analyzed the equilibrium properties of commuters' trip timing during the morning commute on a many-to-one linear corridor transit system with considering in-vehicle passenger crowding effect and schedule delay cost. monchambert and de palma (2014) considered a bi-modal competitive system, consisting of a public transport mode (bus), which may be unreliable, and an alternative mode (taxi). 
the results showed that the public transport service reliability at the competitive equilibrium increases with the taxi fare, and the public transport service reliability and thus patronage at equilibrium are lower than those at the first-best social optimum. de investigated trip-timing decisions of rail transit users who trade off in-vehicle passenger crowding costs and disutility from traveling early or late. three fare regimes, namely no fare, an optimal uniform fare, and an optimal time-dependent fare, were studied and compared, together with determination of the optimal long-run number and capacities of trains. wang et al., (2017) designed the policies of transit subsidies (including cost and passenger subsidies) from either government funding or road toll revenue to circumvent the downs-thomson paradox appearing in a competitive highway/transit system. yang and tang (2018) proposed a fare-reward scheme for managing rail transit peak-hour congestion with homogeneous commuters, in which a commuter is rewarded with one free trip during pre-specified shoulder periods after taking a certain number of paid trips during the peak hours. such a fare-reward scheme aims to shift commuters' departure time to reduce their queuing at stations in an incentive-compatible manner while keeping the transit operator's revenue intact. tang et al., (2019) further considered the heterogeneous commuters, in terms of commuters' scheduling flexibility (i.e., arrival time flexibility interval), and proposed an incentive-based hybrid fare scheme, which combines the fare-reward scheme with a non-rewarding uniform fare scheme. it was shown that the hybrid fare scheme can create a revenue-preserving win-win-win situation for the transit operator, flexible commuters and non-flexible commuters. 
these previous studies have provided many insights into understanding the travel choice behavior of transit passengers, the operations and scheduling of transit services, and the effects of various transit policies, such as transit service pricing and subsidies. however, they usually consider the transit mode only or two physically isolated modes (e.g., auto and rail). in reality, auto and bus share the same roadway, and thus the interaction between them cannot be ignored. the congestion externality caused by intermodal interaction should be considered in transit fare pricing, together with the in-vehicle crowding externality in transit vehicles. arnott et al., (1993a) addressed the capacity expansion issue of a road bottleneck with homogeneous commuters. it was shown that the self-financing result (i.e., toll revenue exactly covers the capital cost) holds even when the variation of the toll by time of day is constrained (e.g., a coarse toll). arnott and kraus (1995) investigated under what circumstances the first-best pricing and investment rules (i.e., the first-best self-financing rule, or trip price equals marginal cost) for a congestible bottleneck facility apply when the time variation of the congestion charge is constrained and users differ in unobservable characteristics, such as work start time or value of time, so that the same congestion charge must be applied to heterogeneous users. their findings indicated that the first-best self-financing rule holds if the congestion externality is anonymous, i.e., independent of user type. thereby, marginal cost pricing of a congestible facility is feasible even if users differ in observationally indistinguishable ways, when a completely flexible toll is employed. but when there are constraints on the time variation of the toll (e.g., a uniform toll), marginal cost pricing is infeasible and a variant of ramsey pricing is (second-best) optimal.
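the self-financing logic can be verified with a one-line calculation under constant returns to scale (a sketch, with k an assumed constant unit cost of capacity and δ = βγ/(β+γ) as in the standard bottleneck model):

```latex
% choose capacity s to minimize social cost under the fine toll:
\min_{s}\; SC(s)
  = \underbrace{\frac{\delta N^{2}}{2s}}_{\text{schedule delay cost}}
  + \underbrace{k\,s}_{\text{capacity cost}}
\;\Rightarrow\; s^{*} = N\sqrt{\frac{\delta}{2k}} .
% toll revenue under the fine toll equals the queuing cost it replaces:
R(s^{*}) = \frac{\delta N^{2}}{2 s^{*}} = N\sqrt{\frac{\delta k}{2}} = k\,s^{*},
% i.e., revenue exactly covers the capital cost of the optimal capacity.
```

this is the bottleneck version of the classical self-financing theorem; the constrained-toll and heterogeneous-user cases discussed above ask when this exact cost recovery survives departures from the fully flexible fine toll.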
liu et al., (2015a) designed a highway use reservation system to allocate highway space to potential users at different time intervals. they also evaluated the efficiency of the reservation system. lamotte et al., (2017) addressed the capacity allocation issue of a road between two vehicle types (i.e., conventional and bookable autonomous vehicles), using a variant of the bottleneck model. these studies usually assumed a fixed total travel demand, a single travel mode and a deterministic environment. in further studies, these assumptions can be relaxed to consider elastic demand, multiple travel modes and/or stochastic situation. qian et al., ( , 2012 investigated the design problems of parking capacity, parking fee, and access time when all parking lots in the parking market are operated by multiple profit-driven private operators or by a welfare-driven social planner. franco (2017) examined how the changes in cbd parking supply affect residential land rents, residential parking supply, mode choice, welfare, air pollution, share of auto users, population densities and city size, and whether the self-financing theorem holds in the context of the urban spatial model. liu (2018) presented an equilibrium model of departure time and parking location choices for optimizing the parking supply that minimizes the total system cost (i.e., the sum of travel cost and social cost of parking supply) under either user equilibrium or system optimum pattern. he found that the optimal planning of parking with autonomous vehicles is significantly different from that without autonomous vehicles. zhang et al., (2019) further analyzed the optimal parking supply strategy for autonomous vehicles to minimize the total system cost based on an integrated morning-evening commuting model. 
these previous studies did not concern the competition of different parking types (e.g., on-street and off-street) and the parking facility ownership issues (private and public), which can be considered in further studies. in the literature, there are a few studies involving joint strategies of capacity investment and demand management. for instance, arnott et al., (1994) explored the welfare effects of a toll-financed capacity expansion (i.e., toll revenues are used to finance transport investment) using a bottleneck model with user heterogeneity consideration. it was shown that if initial capacity is sufficiently small, a toll-financed expansion leaves all drivers better off. xiao et al., (2012) studied the feasibility of expanding bottleneck capacity by toll revenue. the results showed that if the revenue generated by the optimal flat toll is used to finance the capacity expansion, the trip cost of each commuter is reduced in the long run. however, the revenue from the optimal flat toll can never cover the capital cost of constructing the optimal capacity for minimizing the total system cost under constant returns. qian et al., (2012) derived the optimal parking capacity, fee, and access time which altogether yield the minimum total social cost. wan et al., (2015) investigated the joint impacts of airport terminal capacity expansion and time-varying terminal fine toll on passenger demand (including business and leisure passengers) and airport system. these previous studies usually assumed a constant returns to scale and piecewise constant scheduling preferences, which can be extended to consider other returns to scale (e.g., increasing) and time-varying scheduling preferences. in the previous subsections, we have reviewed the literature about bottleneck model studies from the perspectives of travel behavior analysis, demand-side strategies, supply-side strategies, and joint strategies of both demand and supply sides. 
in spite of broad extensions conducted since the pioneering work of vickrey (1969) , there are still some limitations in the existing related studies, summarized as follows. (i) as shown in section 3.1 , various strong assumptions are often made in the related studies, aiming to simplify the model and derive analytical solutions. such simplicity may lead to a large deviation of the model results from the actual values, and thus restricts explanatory power and real applications of the model. in order to model more realism, it is necessary to relax these assumptions in further studies. (ii) the existing studies have mainly focused on the topics of travel behavior analysis and demand-side strategies (particularly on congestion tolling). however, only limited attention has been paid to the topics of supply-side strategies (e.g., financing mode for capacity expansion due to fiscal deficit) and joint strategies (e.g., using congestion tolls to finance capacity expansion). the disposition of toll revenue also lacks adequate research. these topics provide potential research opportunities for further studies. (iii) driving effects of information technology innovation on social development, such as sharing economy and smart mobility, are seldom incorporated in the previous related studies. as such, rapid development of new technologies has been bringing about significant social reform, which is changing people's behavior and reshaping urban development. by incorporating these factors causing social changes, the bottleneck model could continue to provide new theoretical insights. according to the literature review and analysis of the limitations of existing related studies presented in the previous section, one can identify some new and important gaps and opportunities for further studies, presented as follows. one solution to the bottleneck congestion is to expand the capacity of the bottleneck. 
such expansion needs a huge capital cost, which imposes a heavy financial burden on local authority. in order to broaden the range of fiscal sources for bottleneck capacity expansion, various franchising programs, such as build-operate-transfer (bot) or public-private partnership (ppp) projects, have been implemented in practice to encourage private sectors to invest in massive transit projects. in a bot contract, the private investor negotiates with the government to finance, design, construct, and operate transportation infrastructure for a certain period (i.e., a concession period). upon the expiration of the concession period, the government will take over the infrastructure. a ppp contract, as another procurement model of public projects or services, implies a collaborative agreement between private sectors and government targeted at financing, designing, implementing and operating infrastructure and services. partnerships between private sectors and government provide advantages to both parties. the technology and innovation of private sectors can help provide better public services through improved operational efficiency. the government provides the private sector with incentives to deliver projects on time and within budget. the ppp contract specifies the rights and obligations of each party, embodying risk and revenue allocations between the parties. it is important to address the bot or ppp contract design issues of the bottleneck capacity expansion, particularly under a situation of the shortage of funds. congestion pricing schemes have been operating for years in a few countries and regions, such as singapore, london, stockholm, and milan. such schemes are not worldwide implemented yet due to low public acceptance, which is caused by the following factors: privacy, equity, complexity, and uncertainty ( gu et al., 2018 ) . the privacy issue means that the itineraries of travelers are recorded by the charging facilities at different locations. 
the equity issue implies that congestion pricing deters the poor from using road facilities and makes road resources a privilege of the rich. the complexity issue concerns the desire for a simple and well-understood proposal for the calculation of congestion charges. the uncertainty issue includes the uncertainty in the effectiveness of the proposed scheme and the uncertainty in revenue allocation. in order to improve public acceptance of a congestion pricing policy, the redistribution of toll revenue from congestion pricing is a critical issue. the government should make a reasonable allocation scheme of toll revenue to improve people's livelihood, such as expanding road capacity, improving public transit services, and reducing taxation. to achieve strong public support, the details of the use of toll revenue should be publicized to society. it is well known that the main economic principle behind congestion pricing is to internalize the congestion externality caused by transportation. transportation also contributes to environmental externality due to vehicular pollution emissions, besides the congestion externality. in order to control the air pollution level and improve air quality, clean air action programs have been launched in some large chinese cities, such as beijing and shanghai. the measures adopted in these programs include subsidizing the use of clean energy (e.g., electric or natural gas vehicles), retrofitting old motorized vehicles, and purifying vehicular pollutant emissions (e.g., free provision of vehicular exhaust purifiers). to achieve financial sustainability, it is proposed to levy emission taxes and redistribute part of the emission taxes to fund the aforementioned programs. therefore, further studies can focus on how to redistribute the emission tax revenue, which will affect the practical implementation of an emission pricing scheme and the public acceptance towards this scheme.
auto sharing or ridesharing, as an emerging hot topic in the field of transportation, may have a significant effect on auto ownership rationing. implementation of ridesharing is expected to reduce the maximum number of autos and parking spaces required in the transportation system, which affects the traffic congestion level, the residential location choice, and thus the spatial distribution of residents in the urban system. in a ridesharing service system, the ridesharing platform (e.g., didi or uber) plays an important role in matching shared autos and passengers, and the fleet size, service price, or subsidy for ridesharing can help adjust the shared-auto utilization rate, balance the modal split, and thus relieve the traffic congestion level of the system. further studies can, therefore, consider the relationships among ridesharing, auto ownership rationing, and urban spatial structure, and investigate the fleet sizing, pricing, or subsidizing problem of ridesharing in a competitive multi-modal transportation system. the competition and collaboration between different ridesharing platforms, and between ridesharing platforms and public transit, are also important directions for future study. it is widely recognized that the rapid development of information and communication technologies has significantly changed people's learning, work, and life styles. for example, telecommuting or teleworking, as an alternative work arrangement, is a growing trend in the information age. telecommuting will drive people away from workplaces, saving office space in urban areas and shifting household residential location choices farther from workplaces, leading to a more spread-out city. it will also reduce the number of work trips and thus the demand for ground transportation, leading to reductions in energy consumption, traffic congestion, and air pollution.
however, telecommuting reduces the chance and time for teamwork and face-to-face communication. as a result, team productivity may actually suffer, which hurts the productivity of individual firms and the urban economy. notwithstanding these two sides, telecommuting has recently become a major working mode across professions due to the outbreak of covid-19 across the globe, making people more aware of its importance. on the other hand, rapid developments of new technologies are also changing the mobility of people and goods. it is believed that the emerging 5g and self-driving technologies will revolutionize the transportation industry. the 5g technology will enable road users and transportation infrastructure to communicate with everything else on the road. the self-driving technology can drive vehicles automatically, so car users do not need to carry out the driving task and can spend the in-vehicle time in autonomous cars on work or leisure activities, yielding extra activity utility. the end-to-end connectivity across the city enabled by 5g allows autonomous vehicles to drive close to each other through cooperation and platooning, leading to increased network capacity and decreased traffic congestion in the peak period. the 5g technology can also alert autonomous vehicles to changes in traffic conditions, such as collisions, weather, and traffic accidents, through direct real-time vehicle-to-vehicle communication, increasing safety and reliability on the road. it is natural, then, to investigate the effects of this new technology revolution on the movement behavior of people and goods and to design an efficient and sustainable urban system. the goal of this paper is to undertake a broad literature review of the bottleneck model research over the past half century.
the review undertaken in this paper uses a bibliometric analysis approach, in which the literature data of a total of 232 relevant papers are extracted from three well-recognized journal databases or search engines, namely web of science core collection, scopus, and google scholar. this analysis identifies the leading topics, top contributing authors, influential papers, and distributions of publications by journal, allowing readers to track how and where the literature has evolved. the literature is classified in terms of recurring themes into four main categories: travel behavior analysis, demand-side strategies, supply-side strategies, and joint strategies of demand and supply sides. for each theme, typical models proposed in previous studies are reviewed. based on this systematic review, we have identified some main gaps and opportunities in the bottleneck model research, which provide potential avenues for future research in this important and exciting area. by incorporating technological progress in the new digital era, the bottleneck model research keeps pace with the times and can thus contribute to new theoretical developments. we are grateful to professor robin lindsey and three anonymous referees for their helpful comments and suggestions on earlier versions of the paper. the work described in this paper was jointly supported by grants from the national key research and development program of china (2018yfb1600900), the national natural science foundation of china (71525003, 71890970/71890974), and the nsfc-eu joint research project (71961137001). the second author made a presentation entitled "the bottleneck and corridor problems" at the international workshop on transport modeling held in auckland, new zealand, on january 8-11, 2019, and had a heated discussion with the other two authors of this paper. this discussion led them to write a 50th anniversary review of the bottleneck models, as presented in this paper.
however, the opinions expressed here are those of the authors themselves. the classical bottleneck model describes the departure time choice of commuters during the morning commute. every morning, n homogeneous commuters travel from home to work along a highway containing a bottleneck with a capacity s. to simplify the analysis, all commuters want to reach the workplace at an identical preferred arrival time t*. 2 without loss of generality, the free-flow travel time from home to work is assumed to be zero. thus, a commuter arrives at the bottleneck immediately after leaving home and arrives at the workplace immediately after leaving the bottleneck. when the arrival rate at the bottleneck exceeds the bottleneck's capacity, a queue develops. those who arrive early or late face a schedule delay cost. commuters choose their departure times based on a trade-off between the bottleneck congestion and the schedule delay cost. let c(t) denote the travel cost of commuters departing from home to work at time t. it is composed of the queuing delay cost at the bottleneck and the schedule delay cost of arriving early or late. let T(t) be the queuing delay time at the bottleneck at time t. c(t) is then given as

c(t) = αT(t) + β·max{0, t* − t − T(t)} + γ·max{0, t + T(t) − t*}, (a1)

where α is the unit cost of travel time, β is the unit cost of arriving early, and γ is the unit cost of arriving late. according to the empirical study of small (1982), the relationship γ > α > β should hold. the queuing delay time T(t) equals the queue length D(t) divided by the bottleneck capacity s, i.e., T(t) = D(t)/s, where D(t) is the difference between the cumulative arrivals at and cumulative departures from the bottleneck by that time, i.e.,

D(t) = ∫_{t_q}^{t} r(u) du − s(t − t_q), (a2)

2 one can also consider a preferred arrival time window [t* − Δ, t* + Δ], where Δ is a measure of work start time flexibility. no schedule delay penalty is incurred if a commuter reaches the destination within the time window; otherwise, a schedule delay penalty takes place.
for example, vickrey (1969) assumed a uniform distribution of t* over an interval, and hendrickson and kocur (1981) generalized it to a general distribution. here r(t) is the departure rate of commuters from home at time t and t_q is the time at which the queue begins. at the equilibrium, all commuters have the same travel cost c(t) regardless of their departure time. this means dc(t)/dt = 0, ∀ t ∈ (t_q, t_q′), where t_q′ is the time when the queue ends. one can thus derive the equilibrium departure rate r(t) as

r(t) = sα/(α − β) for t ∈ (t_q, t̃], and r(t) = sα/(α + γ) for t ∈ (t̃, t_q′), (a3)

where t̃ is the departure time from home at which a commuter can arrive at the workplace punctually, i.e., t̃ + T(t̃) = t*. eq. (a3) shows that the equilibrium departure rate curve is piecewise constant. in the morning peak period (t_q, t_q′), the capacity of the bottleneck is fully utilized, and thus t_q′ − t_q = n/s holds. at the equilibrium, the first and last commuters do not face a queue; their queuing delays are zero, and their schedule delay costs must thus be equal, expressed as

β(t* − t_q) = γ(t_q′ − t*). (a4)

from eq. (a4), t_q′ − t_q = n/s and t̃ + T(t̃) = t*, one obtains

t_q = t* − (γ/(β + γ))(n/s), t_q′ = t* + (β/(β + γ))(n/s). (a5)

the resultant equilibrium travel cost is c = (βγ/(β + γ))(n/s). from the equilibrium condition c(t) = c(t_q) = c(t_q′) and eqs. (a1) and (a4), one can derive the queuing delay time as

T(t) = (β/(α − β))(t − t_q) for t ∈ (t_q, t̃], and T(t) = (γ/(α + γ))(t_q′ − t) for t ∈ (t̃, t_q′). (a6)

eq. (a6) shows that a queue builds up linearly from t_q to t̃ and then dissipates linearly until it disappears at t_q′. this means that the queuing delay curve is piecewise linear.
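the no-toll equilibrium of the classical bottleneck model is easy to verify numerically. the sketch below is a minimal illustration, not part of the original paper: the parameter values are illustrative (unit costs loosely follow small's (1982) estimates) and the variable names are our own. it computes the equilibrium cost, the start and end of the queue, and the piecewise-constant departure rates, then checks the identities implied by eqs. (a3)-(a5):

```python
# no-toll equilibrium of the classical vickrey bottleneck model (sketch).
# illustrative parameters; unit costs satisfy gamma > alpha > beta.
N = 9000.0        # number of commuters
s = 3000.0        # bottleneck capacity (vehicles/hour)
t_star = 9.0      # preferred arrival time (hours, o'clock)
alpha, beta, gamma = 6.4, 3.9, 15.21  # unit costs of travel time, earliness, lateness

# equilibrium travel cost: c = (beta*gamma/(beta+gamma)) * (N/s)
c = beta * gamma / (beta + gamma) * (N / s)

# start and end of the queue, from eq. (a4) together with t_q' - t_q = N/s
t_q = t_star - gamma / (beta + gamma) * (N / s)
t_qp = t_star + beta / (beta + gamma) * (N / s)

# departure time of the punctual commuter: its whole cost is queuing delay,
# so T(t_tilde) = c/alpha and t_tilde = t* - c/alpha
t_tilde = t_star - c / alpha

# piecewise-constant equilibrium departure rates, eq. (a3)
r_early = s * alpha / (alpha - beta)   # on (t_q, t_tilde]
r_late = s * alpha / (alpha + gamma)   # on (t_tilde, t_qp)

# sanity checks implied by the derivation
assert abs((t_qp - t_q) - N / s) < 1e-9         # peak period lasts N/s hours
assert abs(beta * (t_star - t_q) - c) < 1e-9    # first commuter's cost equals c
assert abs(gamma * (t_qp - t_star) - c) < 1e-9  # last commuter's cost equals c
# departures over the early and late intervals must add up to N
total = r_early * (t_tilde - t_q) + r_late * (t_qp - t_tilde)
assert abs(total - N) < 1e-6
```

with these numbers the peak lasts n/s = 3 hours and every commuter bears the same equilibrium cost c, confirming that the trade-off between queuing delay and schedule delay equalizes costs across all departure times.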
- testing the slope model of scheduling preferences on stated preference data
- a direct redistribution model of congestion pricing
- the corridor problem with discrete multiple bottlenecks
- analytical equilibrium of bicriterion choices with heterogeneous user preferences: application to the morning commute problem
- congestion tolling and urban spatial structure
- a bathtub model of downtown traffic congestion
- schedule delay and departure time decisions with heterogeneous commuters
- economics of a bottleneck
- departure time and route choice for the morning commute
- does providing information to drivers reduce traffic congestion?
- a temporal and spatial equilibrium analysis of commuter parking
- route choice with heterogeneous drivers and group-specific congestion costs
- a structural model of peak-period congestion: a traffic bottleneck with elastic demand
- properties of dynamic traffic equilibrium involving bottlenecks, including a paradox and metering
- the welfare effects of congestion tolls with heterogeneous commuters
- road pricing, traffic congestion and the environment: issues of efficiency and social feasibility
- information and time-of-usage decisions in the bottleneck model with stochastic capacity and demand
- the corridor problem: preliminary results on the no-toll equilibrium
- equilibrium traffic dynamics in a bathtub model: a special case
- financing capacity in the bottleneck model
- regulating dynamic congestion externalities with tradable credit schemes: does a unique equilibrium exist? (transportation)
- investigation of the traffic congestion during public holiday and the impact of the toll-exemption policy
- dynamic model of peak period congestion
- dynamic model of peak period traffic congestion with elastic arrival rates
- dynamic network models and driver information systems
- understanding the competing short-run objectives of peak period road pricing
- valuations of travel time variability in scheduling versus mean-variance models
- uniform versus peak-load pricing of a bottleneck with elastic demand
- peak-load pricing of a transportation route with an unpriced substitute
- partial peak-load pricing of a transportation bottleneck with homogeneous and heterogeneous values of time
- revisiting the bottleneck congestion model by considering environmental costs and a modal policy
- solving the step-tolled bottleneck model with general user heterogeneity
- optimal multi-step toll design under general user heterogeneity
- morning commute problem with queue-length-dependent bottleneck capacity
- endogenous trip scheduling: the henderson approach reformulated and compared with the vickrey approach
- commuter welfare under peak-period congestion tolls: who gains and who loses?
- the marginal social cost of travel time variability
- the uniqueness of a time-dependent equilibrium distribution of arrivals at a single bottleneck
- system optimum and pricing for the day-long commute with distributed demand, autos and transit
- a pareto improving strategy for the time-dependent morning commute problem
- congestion pricing and capacity of large hub airports: a bottleneck model with stochastic queues
- congestion pricing of canadian airports
- the untolled problems with airport slot constraints
- (when) do hub airlines internalize their self-imposed congestion delays
- pricing the major us hub airports
- comparison of three empirical models of airport congestion pricing (2000)
- departure times in y-shaped traffic networks with multiple bottlenecks
- bottleneck road congestion pricing with a competing railroad service
- stochastic equilibrium model of peak period traffic congestion
- congestion pricing on a road network: a study using the dynamic equilibrium simulator metropolis
- private toll roads: competition under various ownership regimes
- comparison of morning and evening commutes in the vickrey bottleneck model
- private roads, competition, and incentives to adopt time-based congestion tolling
- modelling and evaluation of road pricing in paris
- the economics of crowding in rail transit
- trip-timing decisions and congestion with household scheduling preferences
- private operators and time-of-day tolling on a congested road network
- real cases applications of the fully dynamic metropolis tool-box: an advocacy for large-scale mesoscopic transportation systems
- metropolis: modular system for dynamic traffic simulation
- morning commute in a single-entry traffic corridor with no late arrivals
- on the existence of pricing strategies in the discrete time heterogeneous single bottleneck model
- additive measures of travel time variability
- the cost of travel time variability: three measures with properties
- on the relation between the mean and variance of delay in dynamic queues with random capacity and demand
- how a fast lane may replace a congestion toll
- congestion in the bathtub
- congestion in a city with a central bottleneck
- the dynamics of urban traffic congestion and the price of parking
- the value of travel time variance
- commuting for meetings
- valuing travel time variability: characteristics of the travel time distribution on an urban road
- travel time variability and rational inattention
- the value of reliability
- commuting and land use in a city with bottlenecks: theory and evidence
- vickrey meets alonso: commute scheduling and congestion in a monocentric city
- trip-timing decisions with traffic incidents
- hypercongestion in downtown metropolis
- endogenous scheduling preferences and congestion
- road pricing with complications
- downtown parking supply, work-trip mode choice and urban spatial structure
- private road supply in networks with heterogeneous users
- coordinated pricing for cars and transit in cities with hypercongestion
- empirical assessment of bottleneck congestion with a constant and peak toll: san francisco-oakland bay bridge
- morning commute with competing modes and distributed demand: user equilibrium, system optimum, and pricing
- the evening commute with cars and transit: duality results and user equilibrium for the combined morning and evening peaks
- congestion pricing practices and public acceptance: a review of evidence
- dynamic bottleneck congestion and residential land use in the monocentric city
- day-to-day departure time choice under bounded rationality in the bottleneck model
- modeling the morning commute problem in a bottleneck model based on personal perception
- pareto improvements from lexus lanes: the effects of pricing a portion of the lanes on congested highways
- tolling roads to improve reliability
- a partial differential equation formulation of vickrey's bottleneck model, part i: methodology and theoretical analysis
- a partial differential equation formulation of vickrey's bottleneck model, part ii: numerical analysis and computation
- schedule delay and departure time decisions in a deterministic model
- estimating exponential scheduling preferences
- fares and tolls in a competitive system with transit and highway: the case with two groups of commuters (2000)
- pricing and logit-based mode choice models of a transit and highway system with elastic demand
- modal split and commuting pattern on a bottleneck-constrained highway
- optimal utilization of a transport system with auto/transit parallel modes
- the value of travel time variability with trip chains, flexible scheduling and correlated travel times
- traveler delay costs and value of time with trip chains, flexible activity scheduling and information
- traffic managements for household travels in congested morning commute
- bottleneck model with heterogeneous information
- estimating the social cost of congestion using the bottleneck model
- bottleneck congestion: differentiating the coarse charge
- the user costs of air travel delay variability
- a new look at the two-mode problem
- the commuter's time-of-use decision and optimal pricing and service in urban mass transit
- equilibrium queueing patterns at a two-tandem bottleneck during the morning peak
- spillovers, merging traffic and the morning commute
- queueing at a bottleneck with single- and multi-step tolls
- effects of the optimal step toll scheme on equilibrium commuter behaviour
- economics on the optimal n-step toll scheme for a queuing port
- economics on the optimal port queuing pricing to bulk ships
- the optimal step toll scheme for heavily congested ports
- effects of the optimal port queuing pricing on arrival decisions for container ships
- effects of the optimal n-step toll scheme on bulk carriers queuing for multiple berths at a busy port
- optimal non-queuing pricing for the suez canal
- modeling time-dependent travel choice problems in road networks with multiple user classes and multiple parking facilities
- on the use of reservation-based autonomous vehicles for demand management
- the morning commute in urban areas with heterogeneous trip lengths
- user equilibrium in a bottleneck under multipeak distribution of preferred arrival time
- morning commute in a single-entry traffic corridor with early and late arrivals
- user equilibrium of a single-entry traffic corridor with continuous scheduling preference
- bottleneck model revisited: an activity-based perspective
- step tolling in an activity-based bottleneck model
- reliability evaluation for stochastic and time-dependent networks with multiple parking facilities
- existence, uniqueness, and trip cost function properties of user equilibrium in the bottleneck model with multiple user classes
- equilibrium in a dynamic model of congestion with large and small users
- step tolling with bottleneck queuing congestion
- handbook of transport systems and traffic control
- optimal information provision at bottleneck equilibrium with risk-averse travelers
- an equilibrium analysis of commuter parking in the era of autonomous vehicles
- modeling the morning commute for urban networks with cruising-for-parking: an mfd approach
- interactive travel choices and traffic forecast in a doubly dynamical system with user inertia and information provision
- expirable parking reservations for managing morning commute with parking space constraints
- efficiency of a highway use reservation system for morning commute
- a novel permit scheme for managing parking competition and bottleneck congestion
- effectiveness of variable speed limits considering commuters' long-term response
- managing morning commute with parking space constraints in the case of a bi-modal many-to-one network
- modeling and managing morning commute with both household and individual travels
- pricing scheme design of ridesharing program in morning commute problem
- departure time and route choices in bottleneck equilibrium under risk and ambiguity
- morning commute problem considering route choice, user heterogeneity and alternative system optima
- a semi-analytical approach for solving the bottleneck model with general user heterogeneity
- the morning commute problem with ridesharing and dynamic parking charges
- bottleneck congestion pricing and modal split: redistribution of toll revenue
- public transport reliability and commuter strategy
- peak-load pricing of a bottleneck with traffic jam
- optimal cordon pricing in a non-monocentric city
- flextime, traffic congestion and urban productivity
- the morning commute for nonidentical travelers
- traffic flow for the morning commute
- a new tradable credit scheme for the morning commute problem
- managing rush hour travel choices with tradable credit scheme
- numerical solution procedures for the morning commute problem
- commuter responses to travel time uncertainty under congested conditions: expected costs and the provision of information
- travel-time uncertainty, departure time choice, and the cost of morning commutes
- simulating travel reliability
- optimal metering in the bottleneck congestion model
- vickrey's model of traffic congestion discretized
- equilibrium at a bottleneck when long-run and short-run scheduling preferences diverge
- long-run versus short-run perspectives on consumer scheduling: evidence from a revealed-preference experiment among peak-hour road commuters
- the economics of parking provision for the morning commute
- managing morning commute traffic with parking
- modeling multi-modal morning commute in a one-to-one corridor network
- the morning commute problem with heterogeneous travellers: the case of continuously distributed parameters
- linear complementarity formulation for single bottleneck model with heterogeneous commuters
- a single-step-toll equilibrium for the bottleneck model with dropped capacity
- give or take? rewards versus charges for a congested bottleneck
- pareto-improving social optimal pricing schemes based on bottleneck permits for managing congestion at a merging section
- pareto-improving ramp metering strategies for reducing congestion in the morning commute
- tradable credit scheme to control bottleneck queue length
- on the existence and uniqueness of equilibrium in the bottleneck model with atomic users
- airlines' strategic interactions and airport pricing in a dynamic bottleneck model of congestion
- punctuality-based departure time scheduling under stochastic bottleneck capacity: formulation and equilibrium
- punctuality-based route and departure time choice
- the scheduling of consumer activities: work trips
- trip scheduling in urban transportation analysis
- valuation of travel time
- the bottleneck model: an assessment and interpretation
- the existence of a time-dependent equilibrium distribution of arrivals at a single bottleneck
- bottleneck congestion and modal split
- bottleneck congestion and distribution of work start times: the economics of staggered work hours revisited
- bottleneck congestion and residential location of heterogeneous commuters
- a pareto-improving and revenue-neutral scheme to manage mass transit congestion with heterogeneous commuters
- modeling the modal split and trip scheduling with commuters' uncertainty expectation
- the morning commute problem with endogenous shared autonomous vehicle penetration and parking space constraint
- tradable credit schemes for managing bottleneck congestion and modal split with heterogeneous users
- equilibrium properties of the morning peak-period commuting in a many-to-one mass transit system
- step-tolling with price-sensitive demand: why more steps in the toll make the consumer better off
- congestion tolling in the bottleneck model with heterogeneous values of time
- winning or losing from dynamic bottleneck congestion pricing? the distributional effects of road pricing with heterogeneity in values of time and schedule delay
- congestion pricing in a road and rail network with heterogeneous values of time and schedule delay
- autonomous cars and dynamic bottleneck congestion: the effects on capacity, value of time and preference heterogeneity
- multiclass continuous-time equilibrium model for departure time choice on single-bottleneck network
- regional labor markets, commuting, and the economic impact of road pricing
- visualizing bibliometric networks
- congestion theory and transport investment
- pricing, metering, and efficiently using urban transportation facilities
- a smart local moving algorithm for large-scale modularity-based community detection
- airport congestion pricing and terminal investment: effects of terminal congestion, passenger types, and concessions
- equilibrium trip scheduling in single bottleneck traffic flows considering multi-class travellers and uncertainty: a complementarity formulation
- e-hailing ride-sourcing systems: a framework and review
- dynamic ridesharing with variable-ratio charging-compensation scheme for morning commute
- overcoming the downs-thomson paradox by transit subsidy policies
- equilibrium and modal split in a competitive highway/transit system under different road-use pricing strategies
- an ordinary differential equation formulation of the bottleneck model with user heterogeneity
- the morning commute problem with coarse toll and nonidentical commuters
- managing bottleneck congestion with tradable credits
- the morning commute under flat toll and tactical waiting
- congestion behavior and tolls in a bottleneck model with stochastic capacity
- stochastic bottleneck capacity, merging traffic and morning commute
- on the morning commute problem with carpooling behavior under parking space constraint
- tradable permit schemes for managing morning commute with carpool under parking space constraint
- the valuation of travel time reliability: does congestion matter?
- on the cost of misperceived travel time variability
- constrained optimization for bottleneck coarse tolling
- pareto-improving policies for an idealized two-zone city served by two congestible modes
- analysis of the time-varying pricing of a bottleneck with elastic demand using optimal control theory
- mathematical and economic theory of road pricing
- on the morning commute problem with bottleneck congestion and parking space constraints
- managing network mobility with tradable credits
- managing rail transit peak-hour congestion with a fare-reward scheme
- congestion derivatives for a traffic bottleneck
- congestion derivatives for a traffic bottleneck with heterogeneous commuters
- commuter arrivals and optimal service in mass transit: does queuing behavior at transit stops matter?
- carpooling with heterogeneous users in the bottleneck model
- a new look at the morning commute with household shared-ride: how does school location play a role?
- impact of capacity drop on commuting systems under uncertainty
- integrated daily commuting patterns and optimal road tolls and parking fees in a linear city
- modelling and managing the integrated morning-evening commuting and parking patterns under the fully autonomous vehicle environment
- efficiency comparison of various parking charge schemes considering daily travel cost in a linear city
- improving travel efficiency by parking permits distribution and trading
- integrated scheduling of daily work activities and morning-evening commutes with bottleneck congestion
- analysis of user equilibrium traffic patterns on bottlenecks with time-varying capacities and their applications
- optimal official work start times in activity-based bottleneck models with staggered work hours
- day-to-day evolution of departure time choice in stochastic capacity bottleneck models with bounded rationality and various information perceptions

key: cord-354941-0ocsf255 authors: amorin‐woods, deisy;
fraenkel, peter; mosconi, andrea; nisse, martine; munoz, susana title: family therapy and covid‐19: international reflections during the pandemic from systemic therapists across the globe date: 2020-06-08 journal: aust n z j fam ther doi: 10.1002/anzf.1416 sha: doc_id: 354941 cord_uid: 0ocsf255 the covid‐19 pandemic has convulsed human communities across the globe like no previous event in history. family therapists, paradoxically, given that the core of their work is with systems, are also experiencing upheaval in their professional and personal lives, trying to work amidst a society in chaos. this paper offers a collection of reflections by systemic and family therapists from diverse cultures and contexts, penned in the midst of the pandemic. the main intention in distilling these narratives is to preserve the 'cultural diversity' and 'ecological position' of the contributors, guided by phenomenology, cultural ecology, and systemic worldviews of 'experiencing.' the second intention is to 'unite,' promoting solidarity in this isolating situation by bringing each story together, creating its own metaphor of a family: united, connected, stronger. as a cross‐cultural family practitioner with a strong mission for collaboration, the lead author acknowledges the importance of context – the nation and location of the experience; culture – the manner in which culture impacts on experience; collaboration – enhancing partnership, enriching knowledge, and mapping the journey's direction; and connectedness – combating isolation while enhancing unity. since the key transmission of culture is through language, raw reflections were sought initially in the practitioners' own languages, which were then translated for an english‐speaking readership. these narratives are honest and rich descriptions of the authors' lived experiences, diverse and distinctive.
the contributors trust colleagues will find these reflections helpful, validating and acknowledging the challenges of this unique period in history. deisy amorin-woods: this compilation of reflections, while not a research project per se, is written in an auto-ethnographic style (ellis & bochner, 2000; rhodes, 2018) by five systemic therapists from varied cultures and contexts about the covid-19 pandemic. their narratives, experiences, perceptions, ponderings, and feelings were composed as they were living the experience; each is unique in its meaning making and in the way it contributes to this collection. as the lead author, i live and practice in perth, australia. reflecting my international and collective soul, my curiosity has given me the impetus to facilitate global exchange and initiate cross-cultural conversations. peter fraenkel, in new york, united states, incorporates his buddhist insight and psycho-musical intervention to provide a message of connection and hope, to help others preserve a capacity for joy. andrea mosconi, from padua, italy, provides students and colleagues with a gentle message of wisdom, and insights to stimulate critical thinking and opportunities for new ways forward. martine nisse, from paris, france, delivers a 'mood note' which introduces a sense of 'lightness' in the midst of her complex and challenging work with domestic violence and incest. susana munoz, from santiago, chile, with a background as a midwife, uses the metaphor of labour and birth to demonstrate survival. this 'pandemic project' initially arose in mid-march as i reached out to colleagues abroad to check how they were navigating through the pandemic chaos. through our 'cross-cultural' conversations, the sense of feeling understood through shared experience and empathy led to strengthened connection.
this gave me the impetus to ask myself: wouldn't it be helpful if we could write something together to find some healing, whilst sharing our stories with others so that they might also find healing? i realised this was a global crisis, a collective narrative unfolding daily which has left no one untouched, and we as family therapists had important and individual stories to tell. time was of the essence and the view from this window was unique, so i drafted a brief. given my strong interest in collaboration and collaborative practice, over the years i had fostered relationships and developed partnerships, including with family therapists abroad (amorin-woods, 2019). thus, i reached out to this wide network of colleagues and invited them to join me in telling our stories. most of those i invited responded enthusiastically to this initiative almost immediately; regrettably, due to the tight deadline, a few were understandably overwhelmed with other priorities. as a cross-cultural family therapist, my mission is to be diverse and inclusive when meeting the varied cultural needs of my clients, developing programs, delivering services, and working with families. this project was no different. as i heard the international collective voices, it was vital to me that contributions were garnered from a cross-section of nations and continents, as i wanted to honour different cultural backgrounds and language representation. i believed that by tackling phase 1 (sharing our experiences), we could capture each of our cultural and ecological worldviews and thus unite in this process. then, we could contemplate stepping into phase 2 (a regenerative process) in order to rebuild our own lives post-pandemic. only after we are given the space to process the raw and organic heaviness of our own experience are we able to support our clients to come up with practical responses, and not before. it was critical to put the stories together before the 'immediacy' of the raw accounts was lost.
We are in the midst of the experience, in situ, in confinement, confused, hypervigilant. We live in a world where we seem to 'rush' to come up with labels, treatments, and solutions to human distress, often bypassing our lived experience of suffering, loss, and isolation. I felt as if these early collections, merged into one, represent the 'holding, healing space' that we as therapists need before we dare consider what life may look like on the 'other side.' Without this, how could we support our clients in putting themselves, their families, and their lives back together, or in even contemplating what a world of 'functionality' or 'normality' may look like? While family therapists are aware of the importance of 'unmasking' to remain authentic to our clients, and of self-disclosing to deepen connection and trust, therapists do not often have the chance to share their accounts as they too travel through challenging life experiences. These reflections provide that opportunity.

Finally, any conversation about culture must include the element of language. Since the key transmission of culture is through language, organic reflections were sought in each practitioner's primary language. As in therapy, authenticity of feelings and vulnerability cannot be truly transmitted except in the primary language (Amorin-Woods, 2020). However, since the main demographic of this journal speaks English as their primary language, these reflections are translated and offered in English. I am deeply appreciative of each contributor for responding as I reached out. From the bottom of my heart, thank you Andrea, Peter, Martine, and Susana.

Deisy Amorin-Woods, Perth, Australia

This pandemic is a collective trauma event and its multi-faceted impact should not be underestimated. This leads us first to acknowledge and respect the interconnectedness and relationship between systems: within our body, between us and our environment, and between one another.
This also causes us to question the notion of permanence versus impermanence and certainty versus uncertainty. As a cross-cultural family and systemic therapist, I have developed a thirst for collaboration within programs, across organisations, within communities, and across the globe (Lee, Thompson, & Amorin-Woods, 2009). As a migrant to Australia from Peru, I have always felt drawn to the idea of collectivism (Amorin-Woods, 2020), a sense of 'being together' and of 'joining with the other,' and that notion has never left me. Whether working in government, non-government organisations (NGOs), private practice, or academia, I have ensured I bring my 'collective soul' into my profession, in the way I practice and in my teaching. I try to connect with the issues that impact the lives of our families regardless of where they are located or where they originate (Amorin-Woods, 2016a, 2016b). The issues are varied and the systems diverse, whether a couple facing grief, families with intergenerational trauma, or refugees who have fled persecution.

When the news broke in early 2020 about a possible epidemic, I had not conceptualised either the extent or the utter devastation that was to follow. I was due to travel to Basel, Switzerland in early March to present at the World Congress of Family Therapy under the International Family Therapy Association (IFTA). Ironically, my presentation was titled, Therapeutic Conversations and Trauma Informed Systemic Practice: Acknowledging Meaning Making in the Backdrop of Relentless Fear and Unpredictability. Two days before my departure, the conference was cancelled due to the pandemic, which was rapidly spreading around the globe. In the weeks that followed, the whole world convulsed through a rapid period of change.
The need to find refuge from this potent, yet invisible, force left us vulnerable, confused, distressed, fearful, and ambivalent. In my own space, due to the need to isolate and the closure of state borders, I have not been able to see my daughters or embrace my little 'grandies.' This has left me profoundly sad and nostalgic. Having family, friends, and colleagues around the world, many located in the countries worst hit, I was driven to connect, to check in on how they were navigating the crisis. I heard directly their accounts of anguish, dread, panic, powerlessness, loss, and grief. This virus was burning, just like the recent ignition of highly combustible dry eucalyptus trees in the Australian bush, spreading rapidly like a wildfire, loud and forceful, the echoed thoughts keeping alive the nagging fear. People wondering: will I be next? Will I be a carrier and infect my loved ones? Families losing parents, grandparents . . . a whole generation seemingly vanishing as in a puff of smoke. Listening to these raw and distressing stories became rather confronting. I found myself holding their collective pain in my arms and with a virtual embrace, attempting to ease their suffering.

Personally, these stories transported me back to a similar experience as a child living in Peru. This was the time of 'Sendero Luminoso' (Shining Path), a guerrilla group, along with the 'Sinchis,' an anti-subversive police sub-group, both of whom were responsible for committing human rights violations, terrorising the population and killing over 70,000 people, most of whom were Indigenous. This led to cases of intergenerational trauma, the stories of fear, sorrow, and confusion passed down as if through the mother's breast milk for generations. These painful memories accumulate in one's body, becoming a historical site which is transported through time (Rueda, 2015). I remember my mother distraught as the situation evolved, given my sibling was studying at the university at the epicentre.
While not directly impacted by the war and the trauma, I felt them indirectly, yet latently, through the suffering of my mother and the stories told by my sibling, who witnessed young students being murdered. This brings me back to the present time and leaves me wondering whether we will see stories of pain and sorrow passed down as a result of the COVID-19 war. Will this generation transmit trauma to those who follow?

Professionally, as the epidemic evolved, so did the need to transition my practice from face-to-face to purely telehealth. While Australia has been fortunate not to have been impacted as heavily as other countries, I have observed among my clients signs of anxiety and hypervigilance related to the fear of transmission, as well as new manifestations of suicidal ideation previously absent. These signs are connected to a number of issues: the stress related to job loss and financial uncertainty; the sadness and loneliness linked to the inability to see family and friends; grandparents' sorrow at not seeing their grandchildren, an embrace appearing so distant; children unsettled; mothers feeling stressed and exhausted juggling extra roles, including home-schooling. While some people see isolation ('iso') as simply confinement, others see it as imprisonment, fearing the threat of imminent danger if they leave their safe and familiar bubble. For others the uncertainty, the not knowing, the unpredictable re-organisation of families and communities, is too much to bear and leads to heightened levels of suicidal ideation. People ask themselves: will there be an end to this? Are we facing the end of the world? There is documented concern that heightened risk of suicide among communities will continue past the pandemic (Gunnell et al., 2020).
Further to this, given my extensive experience working with people with acute mental health issues and with complex trauma, I am aware that people who have experienced trauma are highly impacted by threats to the safety and stability of the world (Brown, 2009; James & MacKinnon, 2012). I have also observed how vulnerable families (such as families with intergenerational trauma and child abuse survivors) are at particularly high risk, and consequently present with clear signs of re-traumatisation.

Throughout this pandemic transition, I have noticed various community stances rapidly forming. One blames, with xenophobic undertones, pointing to race and culture as the original cause of the virus and discriminating against certain cultural groups. Another labels, describing people as 'paranoid' or 'hysterical' as they frantically accumulate food supplies or simply decide voluntarily to self-isolate. A third, with traces of denial and avoidance about the existence or magnitude of the situation, judges and ridicules those taking a more conservative approach. Within this stance, there is an underlying expectation and pressure from strangers or loved ones to 'move on' rapidly through the stages of grief (Kübler-Ross & Kessler, 2009), from shock and denial to acceptance, without recognition that people need to be allowed to 'feel their feelings.' On the other hand, I have also seen demonstrations of kindness, generosity, and solidarity for fellow community members, much of it led by children: teddies perched in trees, messages of encouragement and hope written around neighbourhoods to lighten the tone and brighten people's days, boxes of fresh produce or non-perishables left to help others who are financially disadvantaged. I have observed some people viewing this pandemic as an opportunity to start over and live in a healthier, humbler, more acknowledging and respectful way towards self, family, and their environment.
I have seen occasions when cut-off families have re-connected, and couples considering separation have learned to appreciate and not take each other for granted. I have been mindful of being available for my clients, to provide the holding space to acknowledge and validate their feelings and allow them to process their experiences. The need for self-care and self-compassion takes on a new meaning in its recognition of our shared humanity. I thus ponder: are we able to provide clients with a holding space to process things if we are not connecting with ourselves? . . . even though we too are living amidst the chaos. We do not have to abnegate ourselves or our experiences and feelings to show we are capable practitioners, because when we abnegate ourselves we tell our clients to let go of parts of self in order to function, in order to accept themselves, in order to relate. We need to 'be' before we can 'do.'

Remaining authentic is key in therapy. I often say to my students and supervisees, authenticity is to a therapist what breath is to life. We need to be authentic with the families who come to us with hope and trust, and this includes possessing heightened awareness of ourselves, hence the need to uncover the 'self.' I recall a similar description often heard during my fellowship at the Accademia di Psicoterapia della Famiglia in Rome. Andolfi would often make reference to ridding the self of the mask and being fully self, fully human (Andolfi, 2012). It is only through such a process that we arrive at a place of recognition of what is ours and what is theirs. It is crucial that we are able to name and process our experiences in order to support our clients to do the same, in order to provide the holding space they need, joining in with them (Minuchin, 1974).
This then allows us to welcome the rich exchange between each other in order to develop an empathic connection and trusting relationship with them (Aponte, 1992), because how can we join in with families, and how can trust be elicited, without this important element? This also helps us not to transfer or project our experiences onto our clients (Lum, 2002). As systemic therapists we are in a privileged position as collaborators and partners influencing the family system and the environment and context in which families are placed. This gives us the opportunity to awaken deeper knowledge and understanding within a given system, within the family who puts their trust in us. We become responsible for co-creating and co-constructing the therapeutic reality and for eliciting change (von Foerster, 1981). We influence the environment whether we bring empathy, healing, anxiety, or fear. Understanding the self and understanding each other becomes the impetus for healing.

Around the globe, we are all living the experience of the COVID-19 pandemic as a collective trauma. The chaos is real to us all. We are dealing with challenges that are comparable yet distinct. While we are all impacted, the impact differs depending on where we are located, on our culture, our ecological system, and the politics of our day. A frequently used expression suggests we are all in the same boat; however, I would rather use the metaphor that, while we may be sailing the same storm, we are in different boats. How robust our ship is, and how we manoeuvre it, may determine how we survive the storm. In writing this I am mindful of families as they navigate this pandemic. I am interested in supporting them to acknowledge their being 'human,' while rejecting reductionist ideas of 'experiencing' that rush them to numb or bypass their experience. Instead, it is time to pause and connect with the basics: our relationships.
The lesson from Gregory Bateson was the importance of the interconnectedness of living things to their natural environment (Bateson, 1979). This is so relevant to our current situation. We hear loudly the desire of communities to go back to 'normal' . . . however, do we really want to go back to the normal that we knew? Or do we want to look at this as an opportunity for social change? For a change in direction? In the way we do things, in the way we relate to self, to other people, to other cultures, to our environment? Instead, let's nurture families and preserve relationships. Let's propose a paradigm shift about the way we think of ourselves: not as passive, closed-off beings, but as active authors of our lives, insightful, creative, purposeful, and connecting. Let us use this time to co-create a space for healing and holding the human spirit.

Coping with COVID-19: a time to focus on the simple gifts of life

Peter Fraenkel, New York, United States

As news of the COVID-19 pandemic quickly grew in March of this year, I immediately found myself re-immersed in the traumatic disorientation of 11 September 2001. Terrorist planes crashing into the World Trade Center launched New York City, and the United States more generally, into what I termed 'the new normal' (Fraenkel, 2001b). Yet this is different: an invisible virus with an unpredictable course, unseen yet life threatening. Where to go? What to do? The answers to those questions were quickly determined in March 2020 by the governor of New York, with guidance from federal and state health experts: go home, work at home. On 13 March, I delivered the last in-person workshop at the Ackerman Institute for the Family. Attendees spread out across the room as a safety precaution to decrease the likelihood of transmitting the virus among us. None of the usual handshakes between newly met colleagues, none of the usual hugs between old friends, my former students, and professional acquaintances.
We carried on, focusing on effective techniques in couple therapy. But we all had a nagging sense, mostly at the periphery of our consciousness, that the world outside was growing increasingly fearsome, and that we were better off in here, held by the warm embrace of what Mary Pipher has called the 'shelter of each other.' As I wrote in another article after 9/11, we therapists were fortunate to be helped by the act of helping others, to belong to a community of care (Fraenkel, 2002).

I've specialised for many years in work with families traumatised by incest, domestic violence, and homelessness. In my private practice, I've specialised in what I've called 'last chance couples': those who've often seen one or more couple therapists without much improvement, and who are now on the brink of divorce or its non-married equivalent of relationship dissolution (Fraenkel, 2019). These two areas of my work now overlap: distressed, disengaged partners already strongly considering separating, or already living apart, now stuck together with kids and sometimes their elders and even other families. Precipitous layoffs and loss of income, infection with the virus, and in some cases the known death of relatives, friends, or colleagues have led to the kind of traumatic effects of living through a tsunami, gathering force every day with no end in sight. At the same time, accurate information, guidance, and leadership from the federal government is sorely lacking, alarming in its own way; no comfort there. My clients are experiencing all the usual symptoms of psychological trauma: intrusive thoughts and nightmares, edginess and hypervigilance, sleeplessness, anxiety, and depression, as well as relational trauma and ruptures between couple partners and among family members.
The factor most clearly correlated with resilience in the face of trauma and other disruptive experiences, social support, is in short supply, despite inventive use of social media and online meetings, including teletherapy. I have occasionally ventured into a nearby park with my percussion instruments to record myself playing along with inspiring songs about social justice and love, posting these videos on Facebook and Instagram. The response has been overwhelmingly one of gratitude and unexpected pleasure, and those responses have gratefully been received by me, giving me a slight sense that through my 'out-of-the-box' psycho-musical 'intervention' I've helped others preserve their energy, hope, and capacity for joy. But I then return to my apartment, to be alone once again, as my kids are now attending college in Europe. We're in close touch, as I am with friends and colleagues, but I surely feel isolated; and when I occasionally venture out onto the streets to shop or bank, there's the uneasy, unreal sense of being on another planet, or at least a familiar but altered landscape, with fearful masked neighbours strolling six feet apart.

Coincidentally, just a few weeks prior to news about the coronavirus, I had started writing a book for the general public about lessons to be learned from Taoism about love and other relationships (Fraenkel & Akambe, 2020). As a long-time Buddhist and Taoist, I've practiced noticing the little, familiar things and accustomed events of life that can bring unexpected joy once brought into the gentle gaze of the 'now.' For decades, I've tried to live my life as if inside a haiku poem, taking pleasure just from being sensate, breathing, and alive. Musing on the nature of existence, the creator of Taoism, the 6th-century B.C.
court librarian, philosopher, and social critic Lao Tzu writes in the first passage of his classic text, The Way of Life (6th century B.C.):

Existence is beyond the power of words to define:
Terms may be used
But are none of them absolute . . .
If name be needed, wonder names them both:
From wonder into wonder
Existence opens

I've been advising all my clients, individuals, couples, and families, to use this disorienting time to reflect upon the following Taoist suggestions, and to look for opportunities to experience wonder, just as I have as a fellow traveller through this time. Specifically, I've suggested they:

Look for pleasure and joy in small, everyday experiences. I took a break from writing this article, went to the kitchen window in my fifth-floor apartment, and watched the rain falling upon the black-painted fire escape. It was beautiful. A few hours later, I took another break and saw my two kittens, whom I adopted from a shelter in October, cuddled together in my furry chair, sleeping contentedly. It was beautiful. It calmed me down.

Slow down. Our usual life pace in NYC is frenetic, our lives overstuffed with aspiration, achievement, a never-ending quest for riches and possessions, and, when we've accumulated enough of those, the relentless search for new experiences (which I've termed 'experience greed'), for unique travel destinations and novel experiences to empty our constantly refilling 'bucket list.' Lao Tzu (6th century B.C.) writes:

Gravity is the root of grace,
The mainstay of all speed . . .
What lord of countless chariots would ride them in vain,
Would make himself fool of the realm,
With pace beyond rein,
Speed beyond helm?

It's a time to stop and look closely at what and who we already have in our lives. It's a time to breathe, walk, eat, and talk slowly, compassionately, and patiently with loved ones, those living with us or those whom the virus has separated from us.
And while slowed down, take a good long look at your partner, your child, your parent, your friend, your neighbour, your colleague (online, most likely!): listen to their voices, and enjoy their unique aliveness. In the timeless words of songwriter Cole Porter, in a song made famous by Frank Sinatra ('I Get a Kick Out of You'), let yourselves 'get a kick' out of them (Fraenkel, 2001a). Look at them like there's no tomorrow for them, or for you.

Let go of control. On a related note, we cannot definitively avoid illness, and we cannot ensure through our hand-washing and social distancing and mask wearing that we will survive this pandemic, or at least leave it unscathed health-wise, financially, or emotionally. Nor can we avoid the suffering of learning that a family member, a neighbour, a student, a mentor, or a colleague is sick, or has died. Radical humility is called for in this troubling, unpredictable time. Take precautions, yes, but recognise that this virus is nature, and nature has its way of overcoming even the best-laid plans of mice and men.

Learn from persons who inhabit oppressed social locations. Lao Tzu (6th century B.C.) advocated wisdom over knowledge:

Leave off fine learning! End the nuisance
Of saying yes to this and perhaps to that,
What slightest use are they!
If one man leads, another must follow,
How silly that is and how false!

Communities that have experienced generations of oppression due to race, ethnicity, geographic region/country, culture, and economic status have much wisdom to share about surviving and even thriving under hardship, and about how to make do with few resources. One hysterically funny video that circulated around the internet early in the emergence of the coronavirus came from the Philippines, in response to privileged Westerners' panic about running out of toilet paper.
It was a lively music video, with dancing women behind the male lead singer, demonstrating how to use a little colourful plastic pot called the 'tabo' to wash up after defecating. It's high time we erase the terms 'first world' and 'third world' and the power and knowledge hierarchy between the countries and cultures belonging to each politically constructed group. The world must come together to defeat this virus, and to survive, as a unified, mutually respectful people in harmony with the earth and nature itself. My mother, Mimi Bialostolsky Fraenkel, who died of cancer in 2001 just before 9/11, believed the world might eventually come together in a time she called 'creative chaos.' I think of her now, and wonder whether this worldwide pandemic is enough to unite us in a common effort to live on. I'd like to hope so . . . but I'm not sure.

What we are going through is not an ordinary crisis; this is an epochal crisis! It seems to me that it is the most obvious consideration, one which echoes with many. The question then may be the systemic one: what do we do in each situation for every person who presents to us with a problem? As I often say, a kind of systemic mantra: perfect, what does this allow me to learn? It is now clear that this virus has three aspects that make it scary: 1) it is new and we cannot be immunised; 2) it also passes through healthy carriers who, even if they have a lower viral load, are still capable of contagion and therefore cannot be easily controlled; and 3) it is not a flu virus like most flu viruses. This virus directly attacks the lungs and therefore gives viral pneumonia; it also attacks many other systems in our body. From this point of view, respect for the required government measures is essential as the only defence while there is no effective drug or vaccine. The alternative is to build herd immunity through a selection of the species.
Only those with resistance will survive. On the level of interaction between this virus and the social system, the question is: how long will it last? If we take into account the date of the government measures, the fact that the need to tighten them has often been felt, the fact that it is always difficult both to impose and to accept strict rules, and that the incubation time is 15 days, it is not difficult to hypothesise that the so-called 'peak' of the infections is still to be reached. From there, it takes time for the sick to heal and additional time for the virus to be eliminated by the carriers. Putting all this together, I believe that we must accept that the current situation will last a long time. Epidemiologists seem to be moving increasingly in this direction, obviously barring unforeseen events for the better or for the worse. Let's have patience!

However, let us look wider, at the macrosystem where all this happens. If we take into consideration the relationship between us, 'homo sapiens,' and the natural system into which we are inserted, this story changes some rules of the relationship. It allows us to realise that we are not omnipotent, but that we are part of a system that can destroy us in a short time if we do not take into account the feedback that comes from the interaction with the other elements of the system itself. The virus's 'liberation' is part of this process of inappropriate invasion by us 'sapiens' of natural contexts not coordinated with the human system. From this point of view, this virus is a warning; it is almost a homeostatic mechanism produced in a system that is already at the limit of its range of possible interactions and is reaching a possible 'bifurcation point' (Prigogine, 2001, p. 160). This is a test of a general catastrophe, a global crisis.
Basically, there are other 'COVIDs' around us of which we do not want to be aware and which can equally undermine our survival in a global way: global warming, pollution, deforestation, desertification, water consumption, economic imbalances within societies and between different parts of the world. Just as in the case of COVID, we pretend not to see them, yet they advance in a hidden and creeping way until they explode. Just like COVID-19! The 'good' thing we can say about the virus is that, unlike wars, which can be taken out on someone who is considered guilty, the same cannot happen here, as this virus does not discriminate; it is a common enemy of the whole system. The virus applies equally to everyone, just as in the family systems with which we want to work. Just as the members of a family system at the end of a session should not be able to say who is right or who is wrong (Selvini Palazzoli, Boscolo, Cecchin, & Prata, 1980), so as humans we are driven to look at each other, to confront each other, to even try to understand and help those who, until yesterday, rejected each other. Yes, there are those who, not understanding, continue to wave the flag of personal interest, or hunt for a culprit, or act in their own defence, but for now they are exposed and forced to admit fault or try to make amends. For now, solidarity prevails, and there is admiration for the commitment and generosity of those on the front line, a sense of being a community. And we see increasingly clear signs that, where man takes a step back, the natural system of which we are part takes a breath, pollution decreases, and everything seems to show us what we must consider for development and the future. A great opportunity, don't you think? But several times I have said: for now . . . The real question is: will we be able to take this into account? This brings another consideration regarding the social system.
In recent years, perhaps blinded by the race for well-being, or perhaps because well-being itself had allowed many to take advantage of it incorrectly even in public structures, the focus has been on rebalancing national budgets by destroying the network of structures that had functioned as the skeleton, blood, and neurological system of society, supporting and bringing food throughout the social system. I am talking about healthcare, public education, welfare . . . Let's consider what it means not to have remained aware of the value of these 'elements' of the social system. I say this both for those who have dishonestly taken advantage of them from within and for those who, from their own ideology, have fought them. Here, this crisis can help us rearrange the scale of values regarding what is most important to preserve in a social system, counterbalancing the excessive importance given to the production system, so as to take into due consideration what is at the basis of a possible safe and civil coexistence.

The same applies to the enormous development of the computer network that is becoming more and more the neuronal network of the planet: on the one hand, it allows us to accelerate the feedback between parts of the system, facilitating the possibility of co-building solutions; on the other hand, tomorrow it will confront us with problems such as the enormous possibility of controlling, even more than now, individual lives, and of determining who will manage, and how, the enormous power that all this entails . . . and in whose hands? Of course, it is always the two polarities of the life of a system that must be balanced: the competitive system and the collaborative one. The same goes for balancing natural selection, the survival of the fittest, against the idea that the average good of the many is better than the maximum good of the few. But here the question is: will we remember it later?
And what does all this say about our systems of daily interaction, our closest systems? Of course, stopping puts us in a position to change our position in the systems of which we are a part, and the relationship rules change. We feel distant from those who were close to us, communicating virtually with those we used to touch, look in the face, caress, shake hands with, pat on the shoulder, take by the arm. In contrast, we find ourselves living with those whom we did not have close to us for so many hours, or with whom we were even in conflict, seeing him or her every day and maybe having to talk to each other. We can no longer take advantage of the dissipative structures, as they are called, which diluted, differentiated, and distributed the tensions of the systems: work, travel, school, various activities. What if we choose to use the avoidance mechanism that is so useful for deluding ourselves that there is a balance in relationships? No, we can't back off now; we're in touch! And then? This is an opportunity to stop and listen to ourselves and others, to try to look at them with different eyes, to re-appreciate the small gestures, to let things arise from building together, and to patiently look for solutions, not problems. So, in the infinite space we can perhaps rediscover the value of everyday life, of silence, of simple things, of doing together, of allowing our imagination to bring to mind an idea. Sometimes it's simpler than it seems!

And with our patients . . . 'clients?' Here too our position changes. From time to time we may find ourselves inventing different ways of making ourselves feel close, exploring new and different tools for staying in relationship and offering help, believing that, beyond the tool used, the relationship is what matters. Cecchin's words echo in us: we all need to feel seen, and to feel seen we are willing to invent all the colors! (Cecchin, 1987, p. 410). And then of course: hypothesising, circularity, and neutrality! (Selvini Palazzoli et al., 1980).
so, our patients will feel us present, even if in a different way, and they will appreciate our sincere and attentive commitment. as milton erickson (erickson & rossi, 1979, p. 87) said: there are no difficult or incurable patients, there are only therapists who are able or unable to find a way to communicate with them. another thing comes to mind. let's see how a small virus, which affects the system at a specific point, achieves great effects. this can be a teaching for our therapy. the 'saltology' mentioned by the milan team in the first pages of paradox and counterparadox comes to mind: the results confirmed that, when it is possible to discover and change a fundamental rule of a system, pathological behavior can quickly disappear. this leads us to accept rabkin's proposed idea: that in nature events of radical importance can happen suddenly when a fundamental rule of a system is changed. rabkin proposes the term "saltology" (from the latin saltus) for the discipline that should study these phenomena. this finds its correspondence in the general theory of systems, whose theorists speak of the "p.s." as that point of the system on which the maximum number of functions essential to the system converges, and which, when changed, obtains the maximum change with a minimum expenditure of energy. (selvini palazzoli, boscolo, cecchin, & prata, 1975, p. 12) so, these are my reflections of these days, which i wanted to share. all this 'allows us to learn' and perhaps finally arrive. it is an opportunity to become aware of what this covid virus . . . offers us as systemic therapists. let's go ahead, let's build better times together! small mood note from a french family therapist in times of pandemic (petit billet d'humeur d'une thérapeute familiale française en temps de pandémie) the family therapy sessions i conduct are very rarely by videoconference.
when they are, they involve my patients who, for one reason or another, have gone abroad to settle temporarily or permanently. but since the arrival of the pandemic in france, one of my patients, herself a psychotherapist and familiar with consultations by videoconference, has pushed me towards this mode of therapy. two days after the announcement of the confinement, i was equipped, and i launched the invitations to my patients. after researching what angle of view for my webcam would symbolically give an open message about the future, i decided to turn my back to my patio, so that the view behind me would lead them towards the sun and the plants. the weather is extraordinarily nice, the pollution in paris has dropped dramatically, the sun is shining without any halo of pollution. birds are heckling in the city. i half draw two curtains, in the colour of my therapy room, which has been abandoned since the confinement, and adjust lights towards my face to gently counterbalance the brightness of the outside. it is, finally, a kind of 'chiaroscuro' (light/dark) that makes my first affected and surprised patients say, keep going . . . it reminds us of vermeer . . . the surprises are mutual: i arrive directly in their living room, or their office, even if they have passed through my 'virtual waiting room' on zoom. i note that some have asked their spouse to use their one-hour right of exit during the consultation so as to have quiet, others are gathered on the living room sofa, a cup of tea in hand with their pet. sometimes a decorative detail jumps out at me, so 'connected' with the patient's problem that i cannot avoid referring to it. one of these surprised patients says to me, do you want me to show you where i live? i pause and respond, no thanks. or . . . maybe . . . yes, you could ask all your patients to show you their home, you could learn a lot from that? . . . don't you think so? . . . mmm, no, i don't think so.
holding the structure is one of the signatures of family therapy. most of my patients have been exposed to violence that has broken into their psyche (nisse, 2020; nisse & sabourin, 2004). re-establishing boundaries or maintaining distance from others requires an ongoing effort. it requires constant therapeutic adjustment. i find that they are grateful for the availability of their therapist. i find that after the first few sessions, i feel as if i am regenerated. despite the shock of this epidemic, i have not forgotten anything about my way of being a therapist with each of them, nor their family history. my abilities as a therapist seem to be naturally at my disposal. the new and artificial proximity of the screen requires me to be attentive to maintaining a therapeutic atmosphere (nisse, 2020), in conformity with the pre-existing one: that is to say, intimate and respectful at the same time. i also note that since the beginning of the pandemic their psychological work seems to be more productive between sessions. some patients have refused videoconference sessions. they cannot imagine hearing themselves or me talking at home about the violence, especially sexual violence, for which they have consulted; they don't have housing where it's possible to really isolate themselves, or they are single women raising their children. sometimes, a patient who has not responded to the offer checks in with her therapist with a fear of illness. some of them have tested positive for covid-19. it is nothing too serious, but a great deal of fatigue and fear for the impact on others. i don't have a pre-established bilateral agreement for the therapeutic meeting by videoconference, and this bothers me a little, but i know my patients, i trust them. it will be possible to establish one afterwards for the next sessions.
the french and european family therapy societies provide support, stimulate reflection on this subject, and offer platforms for exchange with family therapists, as well as platforms for helping families stressed by covid-19, affected by the illness, or affected by sudden bereavement. a large part of our patient population is made up of children who have been placed in care by the juvenile judges. they come from all over france, and half of the country is currently in the 'red zone,' which means that children are not allowed to go to school or to travel more than 100 kilometres to go to court. as for each of the people living in this 'red zone,' sometimes, as i know from the supervisions i give by videoconference, a certain number of them paradoxically relax knowing they will not receive any more parental visits, mediated at the same time . . . recently, the status of social workers has changed. during this time of pandemic they are now considered health professionals, and as such they have the right to travel to meet with children. the idea emerges to organise family network therapy sessions by videoconference before the end of the confinement. the centre des buttes-chaumont is again in demand. the conventions established with each of the participants in the network session and the therapeutic tandem usefully frame the new, disturbing context for the most vulnerable among them, by calming fears of abandonment or, on the contrary, of dictatorial control through this means of communication. a spontaneous flow of energy appears as the homage 'at the windows' is paid every evening at 8 pm to the nursing staff. i too hit something: a shell casing from world war i (empty!), which reminds me of the spanish flu pandemic, and, as a drumstick, a saharan jewel offered (well, well, well!) by one of my former patients who went to the sahel to offer solidarity to women. it makes a rather high-pitched bell sound, somewhat close to the bells of the tibetan monks . . .
everything blends into this positive energy. a neighbour who is ill with cancer is the most alert in beating out the call to the neighbours . . . her care continues during this time. the calmness is conducive to reflection. what do i want to change in my life as a therapist? . . . nothing, other than taking more time for myself. sports, baking, tidying up, painting, talking . . . i miss my friends, but family exchanges are nicely intensifying. confinement slogans tirelessly spread their message; one week, two weeks, three weeks, we don't count the weeks anymore, time has changed value, the pace has slowed down. look, why don't i take the time to check my pension rights? no, i'd rather watch a good series . . . humanity in times of pandemic (humanidad en pandemia) therapists and patients living through times of pandemic have evolved in a context of threat and uncertainty. due to the regulations to avoid contagion, family members have been forced to stay in the same space for indefinite times. the personal and group impact of this dynamic unfolds in multiple dimensions and has an unsuspected scope. the experience of physical space has evolved as hours, days, and weeks have passed. although at first it could be perceived as a break from the whirlwind of everyday life, the 'forced' stay has turned into a kind of narrowing of limits: of physical space, of the psychic/corporal world, relational and bonding. no matter how many people inhabit a place, the members resent the isolation and helplessness on many occasions. in some way, previous forms of relationship, ties, and family, group, and social functioning have pointed out the dominant styles of behaviour in this type of closure (muñoz, 2019).
without external compensatory systems, which operate as sedatives of the senses and deniers of finitude, we observe how the primary attachment style appears in a dynamic that moves through different levels depending on the global context and the verbal and non-verbal information that emerges from authority (bowlby, 1993). groups with primarily disorganised attachments generate contexts where friction, punishment, and violence emerge quickly as a way to relieve anxiety, fear, and tension. when the primary attachments are associated with anxiety and ambivalence, the group is submerged in a fear that spreads through networks, infiltrating the psychocorporal world and facilitating extreme care behaviours alternated with reckless risk behaviours that increase fear and anxiety. systems with predominantly anxious-avoidant attachment tend to focus on demand and performance by amplifying effective control systems at the expense of body and emotions. these are subject to the dominance of reason, dissociating the body and its messages while controlling fear, anxiety, and uncertainty. on the contrary, groups with predominantly secure attachments creatively and adaptively go through this turbulence at the pole of action, creating and recreating new realities and ways of living. then, the isolation gradually infiltrates the family system, so fear and emptiness take over the bonds, like a melody that is silenced with consumer products. therefore, an attempt is made to fill an impossible gap associated with the absence of meaning and a deep fear of damage and death. so, time becomes a waiting time. simultaneously, we witness an institutional and organisational crisis with the ensuing collapse of credibility and trust. a massive disconnection of people who in this paralysis have lost their jobs confirms those premises about hope and credibility violated, and although it is cognitively explainable, emotionally and affectively the experience is overwhelming.
uncertainty, fear, and the experience of injustice increase in a context that is impossible to decode. on the other hand, teleworking has been a way of maintaining continuity of work and staying on the move. however, in many organisations, this system has forced workers to be permanently available online. for executive women workers, the demand has increased exponentially due to the exercise of multiple roles that overlap and require time, effort, and dedication. uncertainty in an emotional, relational, social, and global context reduces security for people; groups and institutions, when faced with the threat, exacerbate control and defence mechanisms. they bring solutions that only increase the problem, generating fear and pain expressed in different ways. they deny mourning in the face of loss at all levels: stability, power, status, the lives of loved ones, and their own lives (sluzki, 1993). and the body? often forgotten and uninhabited due to the predominance of the image, it becomes the repository of emotions that, given the context, are impossible to symbolise and integrate. raising these emotions to consciousness reminds us that we hurt ourselves in bonds and heal in them, so that it becomes possible to agree to feel enough fear for self-care and care for others. holding the bond of intimacy that the therapeutic space provides, in this transit from the face-to-face to the virtual that emerges as a new context, implies an opening to newness. in turn, group networks that met in transit rituals, establishing contact, providing support, direction, and meaning when assisting life and death, have also been impacted and injured. however, the voice and the face have been enhanced as contact, image, and company (sluzki, 1993). the question of my being a therapist . . . being also a midwife refers me to processes and learning; my history and its multiple resonances woven into a systemic psycho-dramatic tapestry that includes myself and humanity.
i feel that we are crossing a threshold similar to labour, being delivered and giving birth simultaneously, in a channel whose timelessness is felt in our bodies. we are leaving a womb that could no longer contain us. today, by force of contractions, we remain at times with fear, compressed by narrow walls that adhere to the personal, group, social, and human body, with fears of harm and death. simultaneously, and in another polarity, with an unknown force and with the survival instinct at its maximum, we open the virtual space in search of the exit. © 2020 australian association of family therapy. where are we going? just as before we were born, it is a mystery; however, from another perspective i know that we will look to another territory, with keys, codes, and ways of survival different from what is known. a place where we will need to put into operation new approaches that, probably without much awareness, we have developed in this previous gestation process. these are bodily, personal, group, social, and of humanity. i feel that, as in any process, we may reach the other side crying; it is also possible that we remain detained in a space 'in-between,' without being able to advance. so, as humanity we are at risk; it is the essential trust, the conscience, and the bonding that sustain us to arrive someday at that 'other side.' nowadays, for me to be a therapist is to be a midwife, creating contexts that, in the intimacy of psychotherapy, allow me to accompany in uncertainty, in fear, in the pain of losses, in silence and respect for the expression of grief. i trust that the strength of the bonds will allow us to be born to other unimaginable dimensions from the prisms in which we make contact today. i only miss hugging my son and daughter. i carry with me the nostalgia and smell of their bodies. the loneliness of the therapist . . . my being a therapist cuts across the many roles i carry out.
with maturity and conscience, the person of the therapist talks with and integrates into my entire person. on one level, i feel lonely, but, on another, it is a joy to feel the strength of the bonds of systemic therapists around the world reaching out. this unique practice collection offers readers a glimpse into the professional and personal experiences and reflections of an international group of family and systemic therapists across the globe as they experience the first phase of the covid-19 pandemic. the world is navigating through unpredictable times. therapists need to 'be,' before they can 'do.' systemic therapists are able to heal and in turn support the healing of the families, couples, and individuals they work with through the process of reaching out to fellow therapists in shared experience. only then can they contemplate stepping into a next phase: the 'regenerative process' in which they can rebuild their lives post-pandemic. there needs to be time to pause and consider whether the familiar pre-pandemic 'normal' is the 'normal' that is desperately sought . . . or whether in fact this calls for the creation of new opportunities for social change. the pandemic illustrates the reality that society remains at its essence 'collectivist.' we are all in this together, as a collective humanity. it is evident that humans are inter-dependent on one another. there is an inescapable inter-connectedness and relationship between systems, within the body, and between one another; humanity cannot separate from the environment, just as therapists cannot separate from their families or fellow therapists. this crisis may assist therapists to rearrange the scale of values regarding what is important. beyond the interventions used, preservation of the relationship is what really matters.
it seems ironic that an enforced need to 'stay apart' from one another (in order to stay alive) has birthed an invitation to be more human (in order to stay emotionally and relationally alive), and to be closer to each other than ever before. this collection illustrates now more than ever the importance of looking at the 'macro' issues presenting for people and society from a systemic perspective. the more complex the issues, the more important it is that they be considered and addressed through a systemic lens. approaching these complexities from a sequestered, individual perspective is reductionist, invalidating, and unrealistic, but also disrespectful to other cultures. this present challenge also causes a questioning of the notion of permanence and certainty to give room to impermanence and uncertainty; while distressing and unsettling, this provides opportunities. this is an unpredictable and crucial time. if there has ever been a need for systemic therapists and the 'world of systems' to advocate for systems change, this is the time. humanity is part of a system that can destroy it in a short time if it does not listen to the feedback that comes from the interaction with the other elements of that system. as the pathway to the 'other side' is navigated, there is a need to value context, culture, collaboration, and connectedness in order to combat isolation and trauma while enhancing unity. together we can begin to think about some of the implications and recommendations for family therapy practice and research in relation to covid-19. first, family and systemic therapists are in a key position to advise stakeholders such as governments and health departments in developing and implementing a response to the covid-19 pandemic. second, there are numerous advantages to understanding the effects of the pandemic through a systemic lens. it is unrealistic, illogical, and unscientific to imagine that complex issues like covid-19 can be considered simply by focusing on individuals.
third, this suggests an integrated approach to the management of pandemic trauma and suicide prevention utilising 'systemic thinking' as a foundation. future collaborative research could focus on: the collective nature of trauma, to consider the consequences of traumatic events shared by a 'social collective,' and how this may differ from 'interpersonal trauma'; mental health consequences that take into account the impact of pre-existing and co-existing mental health issues; the relational consequences of covid-19, exploring whether collective traumas create greater resilience given the collective shared experience; and further rigorous qualitative and phenomenological studies to capture the experiences of family therapists, honouring different cultural backgrounds and languages.
contributor notes: associate professor of psychology, subprogram in clinical psychology, department of psychology; director of the paduan center of family therapy, academic of the milanese center of family therapy, email: mosconia1@gmail.com; member, ispcan (international society for prevention of child abuse and neglect).
references:
- il multi-linguaggio e il multi-tempo dell'amore: il lavoro con le coppie interculturali (the multi-language and the multi-time of love: work with intercultural couples). paper presented at the convegno residenziale apf 'il processo terapeutico. tempi e fasi della terapia familiare'
- my story, your story: the role of culture and language in emotion expression of cross-cultural couples. the mi culture model
- reflection on aaft family therapy conference 'getting together with like minded people': a conference edition
- habla mi idioma? an exploratory review of working systemically with people from diverse cultures: an australian perspective
- children in the margins
- training the person of the therapist in structural family therapy
- mind and nature: a necessary unity
- el apego (attachment).
barcelona: paidós
- treating complex traumatic stress disorders: an evidence-based guide
- revisione dei concetti di ipotizzazione, circolarità, neutralità: un invito alla curiosità (hypothesizing, circularity, and neutrality revisited: an invitation to curiosity)
- autoethnography, personal narrative, reflexivity: researcher as subject
- hypnotherapy: an exploratory casebook
- getting a kick out of you: the jazz taoist key to love
- the new normal: living with a transformed reality
- the helpers and the helped: viewing the mental health profession through the lens of love
- in action: an integrative approach to last chance couple therapy
- the tao of love: life lessons learned from laotzu and the way of life
- suicide risk and prevention during the covid-19 pandemic
- integrating a trauma lens into a family therapy framework: ten principles for family therapists
- the five stages of grief
- the way of life
- one service, many voices: enhancing consumer participation in a primary health service for multicultural women
- the use of self of the therapist
- families and family therapy
- vínculo, percepción y conciencia en la coordinación grupal: la persona del coordinador (link, perception and consciousness in group coordination: the person of the coordinator)
- génogramme et inceste: tempo thérapeutique et tempo judiciaire, in: les génogrammes aujourd'hui
- la clinique systémique en mouvement
- quand la famille marche sur la tête: inceste, pédophilie, maltraitance (when the family walks on its head: incest, pedophilia, abuse)
- la fin des certitudes. temps, chaos et lois de la nature (the end of certainties. time, chaos and the laws of nature)
- how family therapy stole my interiority and was then rescued by open dialogue
- memory, trauma, and phantasmagoria in claudia llosa's 'la teta asustada'
- paradosso e controparadosso. un nuovo modello nella terapia della famiglia a transazione schizofrenica (paradox and counterparadox.
a new model in schizophrenic transaction family therapy)
- ipotizzazione, circolarità, neutralità: tre direttive per la conduzione della seduta (hypothesizing, circularity, neutrality: three guidelines for the conductor of the session)
- la red social: frontera de la práctica sistémica
- observing systems
the authors look forward to further systemic themed papers on the family therapy response to covid-19, such as papers focusing on the regenerative phase of the pandemic and the reporting of practice and practical responses. the authors thank lyndon amorin-woods for assistance in preparation of the draft manuscript.
predictors of well-being and productivity among software professionals during the covid-19 pandemic - a longitudinal study (russo, daniel; hanel, paul h. p.; altnickel, seraphina; van berkel, niels; 2020-07-24)
the covid-19 pandemic has forced governments worldwide to impose movement restrictions on their citizens. although critical to reducing the virus' reproduction rate, these restrictions come with far-reaching social and economic consequences. in this paper, we investigate the impact of these restrictions on an individual level among software engineers currently working from home. although software professionals are accustomed to working with digital tools in their day-to-day work, not all of them are used to working remotely, and the abrupt and enforced work-from-home context has resulted in an unprecedented scenario for the software engineering community. in a two-wave longitudinal study (n = 192), we covered over 50 psychological, social, situational, and physiological factors that have previously been associated with well-being or productivity. examples include anxiety, distractions, psychological and physical needs, office set-up, stress, and work motivation. this design allowed us to identify those variables that explain unique variance in well-being and productivity.
results include (1) the quality of social contacts predicted an individual's well-being positively, and stress predicted it negatively, when controlling for other variables, consistently across both waves; (2) boredom and distractions predicted productivity negatively; (3) productivity was less strongly associated with all predictor variables at time two compared to time one, suggesting that software engineers adapted to the lockdown situation over time; and (4) the longitudinal study did not provide evidence that any predictor variable causally explained variance in well-being and productivity. our study can assess the effectiveness of current work-from-home and general well-being and productivity support guidelines and provide tailored insights for software professionals. the mobility restrictions imposed on billions of people during the covid-19 pandemic in the first half of 2020 successfully decreased the reproduction rate of the virus [86, 111]. however, quarantine and isolation also come with tremendous costs to people's well-being [9] and productivity [66]. for example, the psychosocial consequences of covid-19 mitigation strategies have resulted in an estimated average loss of 0.2 years of life [75]. while prior research [9] has identified numerous factors either positively or negatively associated with people's well-being during disastrous events, most of this research was cross-sectional and included a limited set of predictors. further, whether productivity is affected by disastrous events and, if so, why precisely, has not yet been investigated in a peer-reviewed article to the best of our knowledge. this is especially relevant since many companies, including tech companies, have instructed their employees to work from home [32] at an unprecedented scope.
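the phrase 'explain unique variance ... when controlling for other variables' can be made concrete with a small sketch. the snippet below is purely illustrative (it is not the authors' analysis code, and the data are synthetic): it fits an ordinary least squares regression with and without one predictor and reports the change in R², which is that predictor's unique contribution above and beyond the others.

```python
import random

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    bt = transpose(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

def solve(a, b):
    # Gaussian elimination with partial pivoting (a: n x n, b: length n).
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def r_squared(cols, y):
    # Fit y = b0 + b1*x1 + ... by OLS (normal equations) and return R^2.
    n = len(y)
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    xt = transpose(X)
    beta = solve(matmul(xt, X),
                 [sum(row[i] * y[i] for i in range(n)) for row in xt])
    pred = [sum(b * xij for b, xij in zip(beta, xi)) for xi in X]
    ybar = sum(y) / n
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Synthetic data (hypothetical coefficients): well-being driven negatively by
# stress and positively by quality of social contacts, plus noise.
random.seed(1)
stress = [random.gauss(0, 1) for _ in range(40)]
social = [random.gauss(0, 1) for _ in range(40)]
wellbeing = [3.0 - 0.6 * s + 0.4 * q + random.gauss(0, 0.5)
             for s, q in zip(stress, social)]

r2_full = r_squared([stress, social], wellbeing)   # both predictors
r2_reduced = r_squared([social], wellbeing)        # stress dropped
unique_variance_stress = r2_full - r2_reduced      # delta R^2 for stress
```

comparing nested models this way (hierarchical regression) is one standard reading of 'unique variance'; with more than 50 candidate predictors, as in the study above, the same comparison is repeated for each predictor against a model containing the rest.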
thus, it is unclear whether previous research on remote work [31] still holds during a global pandemic, while schools are closed and professionals often have to work in non-work dedicated areas of their homes. it is particularly interesting to study the effect of quarantine on software engineers, as they are often already experienced in working remotely, which might help mitigate the adverse effects of the lockdown on their well-being and productivity. therefore, there is a compelling need for longitudinal applied research that draws on theories and findings from various scientific fields to identify variables that uniquely predict the well-being and productivity of software professionals during the 2020 quarantine, for both the current and potential future lockdowns. the software engineering community has never before faced such a wide-scale lockdown and quarantine scenario as during the global spread of the covid-19 virus. as a result, we cannot build on pre-existing literature to provide tailored recommendations for software professionals. accordingly, in the present research, we integrate theories from the organizational [48] and psychological [72, 90] literature, as well as findings from research on remote work [61, 1, 8] and recommendations by health [78, 25] and work [21] authorities targeted at the general population. this longitudinal investigation provides the following contributions:
- first, by including a range of variables relevant to well-being and productivity, we are able to identify those variables that are uniquely associated with these two dependent variables for software professionals and thus help improve guidelines and tailor recommendations.
- second, a longitudinal design allows us to explore which variables predict (rather than are predicted by) the well-being and productivity of software professionals.
- third, the current mobility restrictions imposed on billions of people provide a unique opportunity to study the effects of working remotely on people's well-being and productivity.
our results are relevant to the software community because the number of knowledge workers who are at least partly working remotely is increasing [38], yet the impact of working remotely on people's health and productivity is not yet well understood [70]. we focus on well-being and productivity as dependent variables because both are crucial to our way of living. well-being is a fundamental human right, according to the universal declaration of human rights, and productivity allows us to maintain a certain standard of living and thus also affects our overall well-being. thus, our research question is: what are relevant predictors of well-being and productivity for software engineers who are working remotely during a pandemic? in the remainder of this paper, we describe the related work on well-being in quarantine and productivity in remote work in section 2, followed by a discussion of the research design of this longitudinal study in section 3. the analysis is described in section 4, and the results are discussed in section 5. implications and recommendations for software engineers, companies, and any parties interested in remote work are then outlined in section 6. finally, we conclude this study by outlining future research directions in section 7. to slow down the spread of pandemics, it is often necessary to quarantine a large number of people [86, 111] and enforce social distancing to limit the spread of the infection [2]. this typically implies that only people working in essential professions such as healthcare, police, pharmacies, or food chains, such as supermarkets, are allowed to leave their homes for work. if possible, people are asked to work remotely from home.
however, such measures are perceived as drastic and can have severe consequences for people's well-being [9, 69]. previous research has found that being quarantined can lead to anger, depression, emotional exhaustion, fear of infecting others or getting infected, insomnia, irritability, loneliness, low mood, post-traumatic stress disorders, and stress [96, 47, 63, 71, 85, 3]. the fear of getting infected and infecting others, in turn, can become a substantial psychological burden [56, 83]. also, a lack of necessary supplies such as food or water [108] and insufficient information from public health authorities add to increased stress levels [13]. the severity of the symptoms correlates positively with the duration of being quarantined, and symptoms can still appear years after quarantine has ended [9]. this makes it essential to understand what differentiates those whose mental health is more negatively affected by being quarantined from those who are less strongly affected. however, a recent review found that no demographic variable was conclusive in predicting whether someone would develop psychological issues while being quarantined [9]. moreover, prior studies investigating such predictors focused solely on demographic factors (e.g., age or number of children [47, 99]). this suggests that additional research is needed to identify psychological and demographic predictors of well-being. for example, prior research suggested that a lack of autonomy, which is an innate psychological need [90], negatively affects people's well-being and motivation [14], yet evidence to support this claim in the context of a quarantine is missing. to ease the intense pressure on people while being quarantined or in isolation, research and guidelines from health authorities provide a range of solutions for how an individual's well-being can be improved.
some of these factors lie outside the control of individuals, such as the duration of the quarantine or the information provided by public authorities [9]. in this study, we therefore focus on those factors that are within the control of individuals. however, investigating such factors independently makes little sense, since they are interlinked. for example, studying the relations of anxiety and stress with well-being in isolation is less informative, as both anxiety and stress are negatively associated with well-being [26, 95]. however, knowing which of the two has a more substantial impact on people's well-being above and beyond the other is crucial, as it allows, inter alia, policymakers, employers, and mental health support organizations to provide more targeted information, create programs aimed at reducing people's anxiety or stress levels, and improve people's well-being, since anxiety and stress are conceptually independent constructs. thus, it is essential to study these variables together rather than separately. the containment measures not only come at a cost to people's well-being but also negatively impact their productivity. for example, the international monetary fund (imf) estimated in june 2020 that world gdp would drop by 4.9% as a result of the containment measures taken to reduce the spread of covid-19, with countries particularly hit by the virus, such as italy, expected to experience a drop of over 12% [51]. this expected drop in gdp would be significantly larger if many people were unable to work remotely from home. however, previous research on the impact of quarantine typically focused on people's mental and physiological health, thus providing little evidence on the effect on the productivity of those who are still working. fortunately, the literature on remote work, also known as telework, allows us to gain a broad understanding of the factors that improve and hinder people's productivity during quarantine.
the number of people working remotely had been growing in most countries already before the covid-19 pandemic [80, 38]. of those working remotely, 57% do so for all of their working time. the vast majority of remote workers, 97%, would recommend others to do the same [11], suggesting that the advantages of remote work outweigh the disadvantages. the majority of people who work remotely do so from their home [11]. working remotely has been associated with a better work-life balance, increased creativity, positive affect, higher productivity, reduced stress, and fewer carbon emissions because remote workers commute less [80, 11, 1, 8, 102, 4, 19]. however, working remotely also comes with its challenges. for example, challenges faced by remote workers include collaboration and communication (named by 20% of 3,500 surveyed remote workers), loneliness (20%), not being able to unplug after work (18%), distractions at home (12%), and staying motivated (7%) [11]. while these findings are informative, it is unclear whether they can be generalized: if mainly those with a long commute, or those who feel comfortable working from home, choose to work remotely, it would not be possible to generalize to the general working population. a pandemic such as the one caused by covid-19 in 2020 forces many people to work remotely from home. being in a frameless and previously unknown work situation without preparation intensifies common difficulties in remote work. adapting to the new environment itself and dealing with additional challenges add to the difficulties already identified and experienced by remote workers, and could intensify an individual's stress and anxiety and negatively affect their ability to work. the advantages of remote work might, therefore, be reduced or even eliminated. substantial research is needed to further understand what enables people to work effectively from home while being quarantined [59].
the current situation underlines how important research in this field already is. forecasts indicate that remote work will grow on an even larger scale than it did over the past years [80, 38]; therefore, research results on predictors of productivity while working remotely will increase in importance. some guidelines have been developed to improve people's productivity, such as those proposed by the chartered institute of personnel and development, an association of human resource management experts [21]. examples include designating a specific work area, wearing work clothes, asking for support when needed, and taking breaks. however, while potentially intuitive, empirical support for those particular recommendations is still missing. adding to the complexity, the measurement of productivity, especially in software engineering, is a debated issue, with some authors suggesting not to consider it at all [57]. nevertheless, individual developer productivity has a long research tradition [91]. prior work on developer productivity primarily focused on developing software tools to improve professionals' productivity [54] or on identifying the most relevant predictors, such as task-specific measurements and years of experience [29]. similarly, understanding which skillsets of developers are relevant for productivity has also been a typical line of research [65]. eventually, as la toza et al. pointed out, measuring productivity in software engineering is not just about using tools; instead, it is about how they are used and what is measured [62]. in the present research, we build on the literature discussed above to identify predictors of well-being and productivity. additionally, we also include variables that were identified as relevant by other lines of research. furthermore, we chose a different setting, sampling strategy, and research design than most of the prior literature. this is important for several reasons.
first, many previous studies included only one or a few variables, thus masking whether the identified effects are primarily driven by other variables. for example, while boredom is negatively associated with well-being [34], it might be that this effect is mainly driven by loneliness, as lonely people report higher levels of boredom [34], or vice versa. only by including a range of relevant variables is it possible to identify the primary variables, which can subsequently be used to write or update guidelines for maintaining one's well-being and productivity while working from home. second, this approach simultaneously allows us to test whether models developed in an organizational context, such as the two-factor theory [48], can also predict people's well-being in general, and whether variables that were associated with well-being for people being quarantined also explain productivity. third, while previous research on the (psychological) impact of being quarantined [9] is relevant, it is unclear whether this research is generalizable and applicable to the covid-19 pandemic. in contrast to previous pandemics, during which only some people were quarantined or isolated, the covid-19 pandemic has strongly impacted billions globally. for example, previous research found that people who were quarantined were stigmatized, shunned, and rejected [63]; this is unlikely to repeat now that the majority of people are quarantined. fourth, research suggests [53] that pandemics become increasingly likely due to a range of factors (e.g., climate change, human population growth) which make it more likely that pathogens such as viruses are transmitted to humans. this implies that it would be beneficial to prepare ourselves for future pandemics that involve lockdowns. fifth, the trend toward remote work has been accelerated by the covid-19 pandemic [74], which makes it timely to investigate which factors predict well-being and productivity while working from home.
the possibility to study this under extreme conditions (i.e., during quarantine) is especially interesting, as it allows us to include more potential stressors and distractors of productivity. this is critical: as outlined above, previous research on the advantages and challenges of remote work can presumably not be generalized to the population, because mainly people from certain professions and with specific living and working conditions might have chosen to work remotely. sixth and finally, a longitudinal design allowed us to test for causal inferences. specifically, in wave 1, we identified variables that explain unique variance in well-being and productivity, which we measured again in wave 2. this is important because it is possible that, for example, the amount of physical activity predicts well-being, or that well-being predicts physical activity. additionally, we are able to test whether well-being predicts productivity or vice versa; previous research found that they are interrelated [60, 15]. the variables we are planning to measure in the present longitudinal study are displayed in figure 1. to facilitate interpretation, we categorized the variables into four broad, partly overlapping sets of predictors. we include all variables related to people's well-being and productivity that we discussed above and that are measured on an individual level. to summarize, while the initial selection of predictors is theory-driven, based on previous research or recent guidelines, the selection of predictors included in the second wave is data-driven. during the covid-19 pandemic, many governments and organizations have called for volunteers to support self-isolation (see, for example, [79, 22]). while also relevant to the community at large, research suggests that acts of kindness have a positive effect on people's well-being [10]. additionally, volunteering has the benefit of leaving one's home for a legitimate reason, reducing cabin fever.
we therefore decided to include volunteering as a potential predictor of well-being. coping strategies, such as making plans or reappraising the situation, are, in general, effective for one's well-being [106, 18]. for example, altruistic acceptance, that is, accepting restrictions because they serve a greater good, while being quarantined was negatively associated with depression rates three years later [67]. conversely, believing that the quarantine measures are redundant because covid-19 is nothing but an ordinary flu or was intentionally released by the chinese government (i.e., beliefs in conspiracy theories) will likely lead to dissatisfaction because of greater feelings of non-autonomy. indeed, beliefs in conspiracy theories are associated with lower well-being [36]. we further propose that three needs are relevant to people's well-being and productivity [12, 90]. specifically, we propose that many people who are quarantined are deprived of the needs for autonomy and competence, which negatively affects well-being and motivation [14]. further, we propose that the need for competence is deprived especially for those people who cannot maintain their productivity level. this might especially be the case for those living with their families. in contrast, the need for relatedness might be over-satisfied for those living with their family. another important factor associated with one's well-being is the quality of one's social relationships [7]. as people have fewer opportunities to engage with others they know less well, such as colleagues in the office or their sports teammates, the quality of existing relationships becomes more important, as having more good friends facilitates social interactions, either in person (e.g., with a partner in the same household) or online (e.g., video chats with friends). moreover, we expect that extraversion is linked to well-being and productivity.
for example, extraverted people prefer more sensory input than introverted people [68], which is why they might struggle more with being quarantined. extraversion correlated negatively with support for social distancing measures [16], which is a proxy of stimulation (e.g., being closer to other people will more likely result in sensory stimulation). finally, research on predictors of productivity while working from home can be theoretically grounded in models of job satisfaction and productivity, such as herzberg's two-factor theory [48]. this theory states that causes of job satisfaction can be clustered into motivators and hygiene factors. motivators are intrinsic and include advancement, recognition, the work itself, growth, and responsibility. hygiene factors are extrinsic and include the relationship with peers and supervisor, supervision, policy and administration, salary, working conditions, status, personal life, and job security. both factors are positively associated with productivity [5]. as there are few differences between remote and on-site workers in terms of motivators and hygiene factors [44], the two-factor theory provides a good theoretical predictor of the productivity of people working remotely. in our two-wave study, we cover an extensive set of 51 predictors, as identified above. based on the literature mentioned earlier, we expected the strength of the association between the predictors and the outcomes well-being and productivity to vary between medium and large. therefore, we assumed for our power analysis a medium-to-large effect size of f² = .20 and a power of .80. a power analysis with g*power 3.1.9.4 [35] revealed that we would need a sample size of 190 participants.
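the sample-size calculation above can be approximated in code. this is a minimal sketch, not the authors' actual g*power run, and it assumes g*power's noncentrality convention for the fixed-model multiple-regression f-test (λ = f² · n); scipy stands in for g*power here.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n, k, f2=0.20, alpha=0.05):
    # power of the overall F-test of a multiple regression with k predictors,
    # assuming the noncentrality parameter lambda = f2 * n (G*Power convention)
    df1, df2 = k, n - k - 1
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, f2 * n)

def required_n(k, f2=0.20, alpha=0.05, target=0.80):
    # smallest sample size reaching the target power (simple linear search)
    n = k + 3  # smallest n with positive error degrees of freedom
    while regression_power(n, k, f2, alpha) < target:
        n += 1
    return n
```

with k = 51 predictors, f² = .20, and power .80, this search lands in the neighborhood of the n = 190 reported in the text.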
to ensure data quality and consistency, and to account for potential dropout between the two waves, we invited almost 500 participants who had been identified as software engineers in a previous study [89] to participate in a screening study in april 2020. to collect our responses, we used prolific, 1 a data collection platform commonly used in computer science (see e.g., [49]). we opted for this solution because of the high reliability, replicability, and data quality of dedicated platforms, especially compared with the use of mailing lists [82, 81]. to administer the surveys, we used qualtrics 2 and shared them on the prolific platform. the screening study was tailored to the covid-19 pandemic and was completed by 305 professionals. here, we aimed to select only participants from countries where lockdown measures were put in place. countries with unclear or mixed policies or early reopening (e.g., denmark, germany, sweden) were excluded. similarly, our participants were required to be actively working from home during the lockdown for more than 20 hours a week. in the first wave of data collection, which took place in the week of april 20-26, 2020, 192 participants completed the first survey. participation in the second wave (may 4-10) was high (96%), with 184 completed surveys. participants were uniquely identified through their prolific id, which was essential to run the longitudinal analysis while allowing participants to remain anonymous. in each survey, we included three test items (e.g., "please select response option 'slightly disagree'"). moreover, we checked whether the participants were still working from home in the reference week and whether lockdown measures were still in place in their respective countries. as no participant failed two or more of the three test items, all participants reported working remotely, and all answered the survey in an appropriate time frame, we did not exclude anyone.
the mean age of the 192 participants was 36.65 years (sd = 10.77, range = 19-63; 154 women, 38 men). participants were compensated in line with the current us minimum wage (average completion time 1202 seconds, sd = 795.41). we employed a longitudinal design, with two waves set two weeks apart towards the end of the lockdown, which allowed us to test for internal replication. also, running this study towards the end of the lockdowns in the vast majority of countries allowed participants to provide a more reliable interpretation of lockdown conditions. we chose a period of two weeks because we wanted to balance change in our variables over time against the end of the stricter lockdowns that were being discussed across many countries when we ran wave 2. many of our variables are thought to be stable over time; that is, a person's scores on x at time 1 are strongly predictive of that person's scores on x at time 2 (indeed, the test-retest reliabilities we found support this assumption, see table 1). the closer the temporal distance between waves 1 and 2, the higher the stability of a variable. in other words, if we had measured the same variables again after only one or two days, there would not have been much variance left to be explained by any other variable, because x measured at time 1 already explains almost all variance of x measured at time 2. in contrast, we aimed to collect data for wave 2 while people were still quarantined: if people had still been in lockdown at time 1 but the lockdown had been eased by time 2, this would have introduced a major confounding factor. thus, to balance these two conflicting design requirements, we opted for a two-week break between the two waves. we describe the measures of the two dependent (or outcome) variables in subsection 3.3. predictors (or independent variables) are explained in subsections 3.4, 3.5, 3.6, and 3.7. wherever possible, we relied on validated scales.
if this was not possible (e.g., covid-19-specific conspiracy beliefs), we created a scale. all items are listed in the supplemental materials. additionally, we also explore whether there are any mean changes in the variables we measured at both time points (e.g., has people's well-being changed?). well-being was measured with an adapted version of the 5-item satisfaction with life scale [28]. we adapted the items to measure satisfaction with life in the past week. example items include "the conditions of my life in the past week were excellent" and "i was satisfied with my life in the past week". responses were given on a 7-point likert scale ranging from 1 (strongly disagree) to 7 (strongly agree; α t1 = .90, α t2 = .90). productivity was measured relative to expected productivity. we contrasted productivity in the past week with the participant's expected productivity (i.e., their productivity level without the lockdown). as we recruited participants working in different positions, including freelancers, we could use neither objective measures of productivity nor supervisor assessments, and thus rely on self-reports. we expect limited effects of socially desirable responding, as the survey was anonymous. there was a general understanding and widespread belief that many people could not be as productive as usual during the lockdown in 2020 (e.g., due to stress or caring responsibilities). we operationalized productivity as a function of time spent working and efficiency per hour, compared to a normal week. specifically, we asked participants: "how many hours have you been working approximately in the past week?" (item p1) and "how many hours were you expecting to work over the past week assuming there would be no global pandemic and lockdown?" (item p2). to measure perceived efficiency, we asked: "if you rate your productivity (i.e., outcome) per hour, has it been more or less over the past week compared to a normal week?" (item p3).
responses to the last item were given on a bipolar slider ranging from '100% less productive' to '0%: as productive as normal' to '≥100% more productive' (coded as -100, 0, and 100). to compute an overall productivity score for each participant, we used the following formula: productivity = (p1/p2) × ((p3 + 100)/100). values between 0 and .99 reflect that people were less productive than normal, and values above 1 indicate that they were more productive than usual. for example, if a person worked only 50% of their normal time in the past week but was twice as efficient, their total productivity was considered the same as in a normal week. we preferred this approach over the use of other self-report instruments, such as the who's health at work performance questionnaire [55], because we were interested in the change of productivity while being quarantined as compared to 'normal' conditions. the who's questionnaire, for example, also assesses productivity in comparison to other workers. we deemed this unfit for our purpose, as it is unclear to what extent software engineers who work remotely are aware of other workers' productivity. also, our measure consists of only three items and showed good test-retest reliability (table 1). test-retest reliability is the agreement or stability of a measure across two or more time points. a coefficient of 0 would indicate that responses at time 1 are not linearly associated with those at time 2, which is typically undesired. higher coefficients are an additional indicator of the reliability of the measures, although they can be influenced by a range of factors such as the internal consistency of the measure itself and external factors. for example, the test-retest reliability for productivity, r = .50, is lower than for most other variables such as needs or well-being, but this is because the latter constructs are operationalized as stable over time.
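the scoring formula above translates directly into code; the function name and parameter names here are our own, but the arithmetic is the paper's:

```python
def productivity_score(p1_hours_worked, p2_hours_expected, p3_efficiency):
    # productivity = (P1 / P2) * ((P3 + 100) / 100), where P3 is the slider
    # value in [-100, 100]; a score of 1.0 means "as productive as normal"
    return (p1_hours_worked / p2_hours_expected) * ((p3_efficiency + 100) / 100)
```

for example, the case described in the text (half the usual hours, but twice as efficient) gives productivity_score(20, 40, 100) = 1.0, i.e., unchanged overall productivity.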
in contrast, productivity can vary more extensively due to external factors such as the number of projects or the reliability of one's internet connection. self-discipline was measured with 3 items of the brief self-control scale [98]. example items include "i am good at resisting temptation" and "i wish i had more self-discipline" (recoded). responses were registered on a 5-point scale ranging from 1 (not at all) to 5 (very; α = .64). coping strategies were measured using the 28-item brief cope scale, which measures 14 coping dimensions [17]. example items include "i've been trying to come up with a strategy about what to do" (planning) and "i've been making fun of the situation" (humor). responses were on a 5-point scale ranging from 0 (i have not been doing this at all) to 4 (i have been doing this a lot). the internal consistencies were satisfactory to very good for two-item scales: self-distraction (α = .65), active coping (α = .61), denial (α = .66), substance use (α = .96), use of emotional support (α = .77), use of instrumental support (α = .75), behavioral disengagement (α t1 = .76, α t2 = .71), venting (α = .65), positive reframing (α = .72), planning (α = .76), humor (α = .83), acceptance (α = .61), religion (α = .83), and self-blame (α t1 = .75, α t2 = .71). loneliness was measured using the 6-item version of the de jong gierveld loneliness scale [40]. the items are equally distributed among two factors, emotional (α t1 = .68, α t2 = .69; e.g., "i often feel rejected") and social (α t1 = .84, α t2 = .87; e.g., "there are plenty of people i can rely on when i have problems"). participants indicated how lonely they felt during the past week. responses were given on a 5-point scale ranging from 1 (not at all) to 5 (every day). compliance with official recommendations was measured using three items of a compliance scale [109].
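the internal consistencies (α) quoted throughout these measure descriptions follow the standard cronbach's alpha formula; a minimal sketch, assuming item responses arranged as a respondents-by-items matrix:

```python
import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: (n_respondents, n_items) array of scale responses.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    sum_item_vars = x.var(axis=0, ddof=1).sum()   # per-item variances
    total_var = x.sum(axis=1).var(ddof=1)         # variance of the scale total
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)
```

perfectly parallel items yield α = 1, while items sharing no variance push α toward 0 (or below), which is why values like the α = .45 reported for volunteering flag a weak scale.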
the items are 'washing hands thoroughly with soap', 'staying at home (except for groceries and 1x exercise per day)', and 'keeping a 2m (6 feet) distance to others when outside.' responses were given on a 7-point scale ranging from 1 (never complying with this guideline) to 7 (always complying with this guideline; α = .71). anxiety was measured using an adapted version of the 7-item generalized anxiety disorder scale [95]. participants indicated how often they had experienced anxiety in different situations over the past week. example items are "feeling nervous, anxious, or on edge" and "not being able to stop or control worrying". responses were given on a 5-point scale ranging from 1 (not at all) to 5 (every day; α t1 = .93, α t2 = .93). additionally, we measured specific covid-19 and future-pandemic-related concerns with two items: "how concerned do you feel about covid-19?" and "how concerned do you feel about future pandemics?" responses were given on a 5-point scale ranging from 1 (not at all concerned) to 5 (extremely concerned; α = .82) [77]. stress was measured using a four-item version of the perceived stress scale [24]. participants indicated how often they experienced stressful situations in the past week. example items include "in the last month how often have you felt you were unable to control the important things in your life?" and "in the last month how often have you felt confident about your ability to handle your personal problems?". responses were registered on a 4-point scale ranging from 1 (never) to 4 (very often; α t1 = .80, α t2 = .77). boredom was measured using the 8-item version [97] of the boredom proneness scale [34]. example items include "it is easy for me to concentrate on my activities" and "many things i have to do are repetitive and monotonous". responses were given on a 7-point likert scale ranging from 1 (strongly disagree) to 7 (strongly agree; α t1 = .87, α t2 = .87).
daily routines were measured with five items: "i am planning a daily schedule and follow it", "i follow certain tasks regularly (such as meditating, going for walks, working in timeslots, etc.)", "i am getting up and going to bed roughly at the same time every day during the past week", "i am exercising roughly at the same time (e.g., going for a walk every day at noon)", and "i am eating roughly at the same time every day". responses were given on a 7-point likert scale ranging from 1 (does not apply at all) to 7 (fully applies; α t1 = .75, α t2 = .78). conspiracy beliefs were measured with a 5-item scale designed by us for this study. the first two items were adapted from the flexible inventory of conspiracy suspicions [110], whereas the latter three are based on more specific conspiracy beliefs: "the real truth about coronavirus is being kept from the public.", "the facts about coronavirus simply do not match what we have been told by 'experts' and the mainstream media", "coronavirus is a bioweapon designed by the chinese government because they are benefiting from the pandemic most", "coronavirus is a bio-weapon designed by environmental activists because the environment is benefiting from the virus most", and "coronavirus is just like a normal flu". responses were collected on a 7-point likert scale ranging from 1 (totally disagree) to 7 (totally agree; α = .83). extraversion was measured using the 4-item extraversion subscale of the brief hexaco inventory [103]. responses were given on a 5-point likert scale ranging from 1 (strongly disagree) to 5 (strongly agree; α t1 = .71, α t2 = .69). low scores on extraversion are an indication of introversion. since we found at wave 1 that extraversion and well-being were positively correlated, contrary to our hypothesis (see below) and, in our view, contrary to widespread expectations, we decided to measure in wave 2 what participants' views are regarding the association between extraversion and well-being.
we measured expectations with one item: "who do you think struggles more with the current pandemic, introverts or extraverts?" response options were 'introverts', 'both around the same', and 'extraverts'. the autonomy, competence, and relatedness needs of self-determination theory [90] were measured using the 18-item balanced measure of psychological needs scale [92]. example items include "i was free to do things my own way" (need for autonomy; α t1 = .72, α t2 = .76), "i did well even at the hard things" (competence; α t1 = .77, α t2 = .77), and "i felt unappreciated by one or more important people" (recoded; relatedness; α t1 = .79, α t2 = .78). participants were asked to report how true each statement was for them in the past week. responses were given on a 5-point scale ranging from 1 (no agreement) to 5 (much agreement). extrinsic and intrinsic work motivation was measured with the 6-item extrinsic regulation and 3-item intrinsic motivation subscales of the multidimensional work motivation scale [37]. the extrinsic regulation subscale measures social and material regulation. specifically, participants were asked to answer some questions about why they put effort into their current job. example items include "to get others' approval (e.g., supervisor, colleagues, family, clients ...)" (social extrinsic regulation; α = .85), "because others will reward me financially only if i put enough effort in my job (e.g., employer, supervisor...)" (material extrinsic regulation; α = .71), and "because i have fun doing my job" (intrinsic motivation; α = .94). responses were given on a 7-point scale ranging from 1 (not at all) to 7 (completely). mental exercise was measured with two items: "i did a lot to keep my brain active" and "i performed mental exercises (e.g., sudokus, riddles, crosswords)". participants indicated the extent to which the items were true for them in the past week on a 7-point scale ranging from 1 (not at all) to 7 (very; α = .56).
technical skills were measured with one item: "how well do your technological skills equip you for working remotely from home?" responses were given on a 7-point scale ranging from 1 (far too little) to 7 (perfectly). diet was measured with two items [33]: "how often do you eat fruit, excluding drinking juice?" and "how often do you eat vegetables or salad, excluding potatoes?". responses were given on a 7-point scale ranging from 1 (never) to 7 (three times or more a day; α = .60). quality of sleep was measured with one item: "how has the quality of your sleep overall been in the past week?" responses were given on a 7-point scale ranging from 1 (very low) to 7 (perfectly). physical activity was measured with an adapted version of the 3-item leisure time exercise questionnaire [43]. participants were asked to report how many hours in the past week they had been mildly, moderately, and strenuously exercising. the overall score was computed as follows [43]: 3 × mild + 5 × moderate + 9 × strenuous. missing responses for one or more of the exercise types were treated as 0. quality and quantity of social contacts outside of work were measured with three items. we adapted two items from the social relationship quality scale [7] and added one item to measure the quantity: "i feel that the people with whom i have been in contact over the past week support me", "i feel that the people with whom i have been in contact over the past week believe in me", and "i am happy with the amount of social contact i had in the past week." responses were given on a 6-point likert scale ranging from 1 (strongly disagree) to 6 (strongly agree; α t1 = .73, α t2 = .77).
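the physical activity scoring rule above, including its treatment of missing responses, can be sketched as follows (the function name is ours):

```python
def leisure_activity_score(mild=None, moderate=None, strenuous=None):
    # weekly hours of exercise at each intensity; missing responses are
    # treated as 0, and the 3/5/9 weights follow the scoring rule quoted above
    hours = [0.0 if h is None else float(h) for h in (mild, moderate, strenuous)]
    return 3 * hours[0] + 5 * hours[1] + 9 * hours[2]
```

e.g., two hours of mild, one hour of moderate, and one hour of strenuous exercise give leisure_activity_score(2, 1, 1) = 20.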
volunteering was measured with three items that measure people's behavior over the past week: "i have been volunteering in my community (e.g., supported elderly or other people in high-risk groups)", "i have been supporting my family (e.g., homeschooling my children)", and "i have been supporting friends and family members (e.g., listened to the worries of my friends)". responses were given on a 7-point scale ranging from 1 (not at all) to 7 (very often; α = .45). quality and quantity of communication with colleagues and line managers was measured with three items: "i feel that my colleagues and line manager have been supporting me over the past week", "i feel that my colleagues and line manager believed in me over the past week", and "overall, i am happy with the interactions with my colleagues and line managers over the past week." responses were given on a 6-point likert scale ranging from 1 (strongly disagree) to 6 (strongly agree; α t1 = .88, α t2 = .92). distractions at home were measured with two items: "i am often distracted from my work (e.g., noisy neighbors, children who need my attention)" and "i am able to focus on my work for longer time periods" (recoded). responses were given on a 5-point scale ranging from 1 (not at all) to 5 (very often; α t1 = .64, α t2 = .63). the participants' living situation was recorded in the following categories: living with babies/infants, toddlers, children, teenagers, and adults, as well as the number of people the participant is currently living with. financial security was measured with two items that reflect the current and the expected financial situation [42]: "using a scale from 0 to 10 where 0 means 'the worst possible financial situation' and 10 means 'the best possible financial situation', how would you rate your financial situation these days?" and "looking ahead six months into the future, what do you expect your financial situation will be like at that time?".
responses were given on an 11-point scale ranging from 0 (the worst possible financial situation) to 10 (the best possible financial situation; α = .81). office set-up was measured with three items: "in my home office, i do have the technical equipment to do the work i need to do (e.g., appropriate pc, printer, stable and fast internet connection)", "on the computer or laptop i use while working from home i do have the software and access rights i need", and "my office chair and desk are comfortable and designed to prevent back pain or other related issues". responses were given on a 7-point likert scale ranging from 1 (strongly disagree) to 7 (strongly agree; α = .65). demographic information was assessed with the following items: "what is your gender?", "how old are you?", "what type of organization do you work in?" (public, private, unsure, other), "what is your yearly gross income?" (<US$20,000, US$20,000-40,000, US$40,001-60,000, US$60,001-80,000, US$80,001-100,000, >US$100,000; converted to the participant's local currency), "in which country are you based?", "have you been working from home or remotely in general before february 2020?" (yes, no, unsure), "what percentage of your time have you been working remotely (i.e., not physically in your office) over the past 12 months?", "in which region/state and country are you living?", and "is there still a lockdown where you are living?". the data analysis consists of two parts. first, we used the data from time 1 to identify the variables that explain variance in participants' well-being and productivity beyond the other variables. second, we used the pearson product-moment correlation coefficient (r) to identify which variables were correlated at least r = .30 with well-being and productivity, to test whether they predict our two outcomes over time. r is an effect size which expresses the strength of the linear relation between two variables.
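the α values reported with the scales above are cronbach's alpha, the standard internal-consistency coefficient for multi-item scales; a minimal sketch of the textbook formula (our illustration, not the authors' analysis code):

```python
def cronbach_alpha(items):
    """cronbach's alpha for a scale: items is a list of item-score columns,
    one list of participant responses per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k, n = len(items), len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

two perfectly redundant items give α = 1, while items that share no variance push α toward 0, which is why low values such as the α = .45 for volunteering flag a heterogeneous scale.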
we used .30 as a threshold as we are interested in identifying variables that are correlated with at least a medium-sized magnitude [23] with one or both of our outcome variables. also, a correlation of ≥ .30 indicates that the effect is among the top 25% in individual difference research [41]. finally, selecting an effect size of this magnitude provides effective type I error control, as in total we performed 103 correlation tests at time 1 alone (51 independent variables correlated with the two dependent variables, which were also correlated among each other). given a sample size of 192, this effectively changes our alpha level to .0001, which is conservative. this means that it is very unlikely that we erroneously find an effect in our sample even though there is no effect in the population (i.e., commit a type I or false-positive error). we did not transform the data for any analysis. unless otherwise indicated above, scales were formed by averaging the items. the collected dataset is publicly available to support other researchers in understanding the impact of enforced work-from-home policies. to test which of the variables listed in figure 1 explain unique variance in well-being and productivity, we performed two multiple regression analyses with all variables that were correlated with the two outcome variables at r ≥ .30. in the first analysis, well-being is the dependent variable; in the second analysis, we use productivity as the dependent variable. this allows us to identify the variables that explain unique variance in the two dependent variables. however, one potential issue of including many partly correlated predictors is multicollinearity, which can lead to skewed results. if the variance inflation factor (vif) is larger than 10, multicollinearity is an issue [20]. therefore, we tested whether the variance inflation factor would exceed 10 before performing any multiple regression analysis.
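the claim that screening at |r| ≥ .30 with n = 192 amounts to a conservative alpha level can be checked with a short sketch (we use the usual t transform of r and, since df = n − 2 is large here, a normal approximation to the t distribution; this is our illustration, not the authors' code):

```python
import math

def p_value_of_r(r, n):
    """approximate two-tailed p-value for a pearson correlation r at sample size n.
    t = r * sqrt(n - 2) / sqrt(1 - r**2); for large df the t distribution is
    close to normal, so the two-tailed p is erfc(|t| / sqrt(2))."""
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
    return math.erfc(abs(t) / math.sqrt(2))

# with n = 192, a correlation of r = .30 is significant well below .0001,
# so using |r| >= .30 as a screen is indeed conservative for 103 tests.
p = p_value_of_r(0.30, 192)
```

in other words, any correlation that survives the |r| ≥ .30 screen at this sample size would also survive a bonferroni-style correction for the 103 tests.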
to analyze the data from both time points, we performed a series of structural equation modeling analyses with one predictor variable and one outcome variable using the r package lavaan [88]. unlike many other types of analyses, structural equation modeling adjusts for reliability [107]. specifically, models were designed with one predictor (e.g., stress) and one outcome (e.g., well-being), both as measured at time 1 and at time 2. we allowed autocorrelations (e.g., between well-being at time 1 and at time 2) and cross-paths (e.g., between stress at time 1 and well-being at time 2). autocorrelations are essential because without them we might erroneously conclude that, for example, stress at time 1 predicts well-being at time 2, when in fact it is the part of stress that overlaps with well-being at time 1 which predicts well-being at time 2 [87]. to put it simply, we can only conclude that x1 predicts y2 if we control for y1. no items or errors were allowed to correlate. allowing them to correlate is usually done to improve model fit but has also been criticized as atheoretical: determining which items and errors should be allowed to correlate to improve model fit can only be done after the initial model is computed, making it a data-driven approach that places too much emphasis on model fit [39]. the regression (or path) coefficients and associated p-values were not affected by the type of estimator: we compared in our analyses the standard maximum likelihood (ml), the robust maximum likelihood (mlr), and the multi-level (mlm) estimator. the pattern of correlations was overall consistent with the literature. at time 1, 16 variables were correlated with well-being at r ≥ .30 (table 1; see footnote 3). stress, r = −.58, quality of social contacts, r = .49, and need for autonomy, r = .48, were most strongly associated with well-being (all p < .0001).
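the point that x1 can only be said to predict y2 if y1 is controlled can be illustrated with a toy cross-lagged regression in plain python (the simulated data and effect sizes are invented for illustration; the authors' actual models are latent-variable sems fitted in lavaan):

```python
import math, random

def ols(cols, y):
    """ordinary least squares via the normal equations (x'x)b = x'y,
    solved with gaussian elimination. cols: predictor columns; an
    intercept column is added automatically. returns [b0, b1, ...]."""
    n = len(y)
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    k = len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    v = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    M = [A[i] + [v[i]] for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(k):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * z for x, z in zip(M[r], M[c])]
    return [M[i][k] / M[i][i] for i in range(k)]

# simulated example: x1 merely overlaps with y1, and only y1 drives y2.
random.seed(1)
n = 500
y1 = [random.gauss(0, 1) for _ in range(n)]
x1 = [0.7 * y + random.gauss(0, 0.7) for y in y1]   # x1 correlated with y1
y2 = [0.8 * y + random.gauss(0, 0.6) for y in y1]   # y2 depends only on y1

b_naive = ols([x1], y2)[1]      # x1 appears to "predict" y2
b_cross = ols([x1, y1], y2)[1]  # near zero once y1 is controlled
```

without the autoregressive control `y1`, the naive coefficient is sizable; with it, the cross-path collapses toward zero, which is exactly the spurious conclusion the autocorrelation paths guard against.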
the pattern of results for the 14 coping strategies was also in line with the literature [18]: self-blame, r = −.36, p < .001, behavioral disengagement, r = −.31, p < .001, and venting, r = −.28, p < .001, were negatively correlated with well-being. interestingly, generalized anxiety was more strongly associated with well-being than covid-19-related anxiety (r = −.46 vs. −.25), which might suggest that specific worries have a less negative impact on well-being (see footnote 4). contrary to our expectations, extraversion was positively correlated with well-being, both at waves 1 and 2. the pattern of the associations was similar at time 2. a reason participants misjudged how intensely introverts struggle with working from home could be that introverts usually try to avoid unwanted social interactions, whereas, being quarantined, they now have to put effort into actively having social interactions. the added challenge of contributing more energy than usual to not becoming too lonely, and of changing their usual behavioral patterns, demands much more from introverts than from extraverts.
footnote 3: the pearson correlation coefficient (r) represents the strength of a linear association between two variables and can range from −1 (perfect negative linear association) through 0 (no linear association) to 1 (perfect positive linear association). the regression coefficient b indicates how much the outcome changes if the predictor increases by one unit. for example, the b of stress predicting well-being is −.60: a person whose stress level is one unit higher is predicted to have a well-being level that is .60 units lower.
footnote 4: a multiple regression with generalized anxiety and covid-19-related anxiety supports this interpretation: only generalized anxiety, b = −.58, se = .10, p < .001, but not covid-19-related anxiety, significantly predicted well-being.
at time 1, four variables were correlated with productivity at r ≥ .30 (table 1): need for competence, r = −.37, distractions, r = −.34, boredom, r = −.33, and communication with colleagues and line managers, r = .30. surprisingly, work motivations were uncorrelated with well-being at α = .001. at time 2, only distraction was still correlated with productivity, r = −.26, p < .001. the strength of association of most variables with productivity dropped between time 1 and 2, which means that those variables associated with productivity at wave 1 were no longer or less strongly associated with productivity at wave 2. the strengths of the correlations remained the same when we computed spearman's rank correlation coefficients rather than pearson's correlations (spearman's coefficient is a non-parametric version of pearson's r and also ranges between −1 and 1). at time 2, we added additional questions to better understand the counterintuitive finding that well-being and extraversion are positively correlated. interestingly, the finding that extraversion is positively correlated with well-being during lockdown is contrary to the expectations of most participants: when asked whether introverts or extraverts struggle more with the covid-19 pandemic, only 2 participants correctly predicted introverts, while 136 stated extraverts and 46 believed that both groups struggle equally. this highlights the value of our research, because people's intuition can be blatantly wrong. an analysis of the participants' statements about their choice (informants are labeled i below) made the explanation more articulated. we now report selected quotes from participants from wave 1, including their level of extraversion. some informants reported direct experience supporting the feeling that extraverts struggle more than introverts: "i'm introverted, and i don't feel the pandemic has affected me at all. rules aren't hard to follow and haven't feel bad.
i feel for extraverts; they would struggle a bit with the rules." nonetheless, a minority of participants provided alternative interpretations. according to those, both introverts and extraverts have difficulties in reaching out to people, although in different ways; the motivation for such answers is that both personality types struggle with different challenges. "both types need company, just that each needs company on their own terms. introverts prefer deeper contact with fewer people and extraverts less deep contact with a greater number of people." [i-80, extraversion score = 3.75] "extraverts miss human contact; introverts find it even harder to mark their presence online (e.g., in meetings)." [i-160, extraversion score = 3.50] interestingly, one informant provided an insightful interpretation, aligned with our results: "introverts usually have more difficulty communicating with others, and confinement worsens the situation because they will not try to talk to others through video conferences." [i-136, extraversion score = 2.75] the lack of a structured working setting, in which introverts are routinely involved, causes further isolation. being 'forced' to work remotely significantly increased the difficulty of engaging with social contacts. this means that introverts have to put much more effort into interacting with others, instead of their typical behavior of reduced interaction in office-based environments. whereas extraverts find it easier to maintain their social contacts in some way, introverts might struggle more. thus, the lockdown had a more negative impact on the well-being of introverts than of extraverts, as shown in table 1. to test which of the predictors had a unique influence on well-being and productivity, we included all variables that were correlated with either outcome at r ≥ .30 at time 1. this is a conservative test because many predictors are correlated among each other and thus take variance from each other.
also, it allowed us to repeat the same analysis at time 2, because all predictors which correlated with either well-being or productivity at time 1 with r ≥ .30 were included at time 2. in a first step, we tested whether multicollinearity was an issue. this was not the case, with vif < 4.1 for all four regression models and thus clearly below the often-used threshold of 10 [20]. sixteen variables correlated with well-being at r ≥ .30 (table 1). together, they explained a substantial amount of variance in well-being at time 1, r² = .44, adj. r² = .39, f(16, 167) = 8.21, p < .0001, and at time 2, r² = .47, adj. r² = .42, f(16, 162) = 8.90, p < .0001. (note to table 1. r: correlation; b: unstandardized regression estimate; r_it: test-retest correlation. signif. codes: *** < .001, ** < .01, * < .05, . < .1.) at time 1, stress (negatively), social contacts, and daily routines uniquely predicted well-being at α = .05 (see table 1, column 3, and table 2). at time 2, need for competence and autonomy, stress, quality of social contacts, and quality of sleep uniquely predicted well-being at α = .05 (see table 1, column 7, and table 4). stress and quality of social contacts significantly predicted well-being at both time points. four variables correlated with productivity at r ≥ .30 (table 1). together, they explained 16% of the variance in productivity at time 1, r² = .18, adj. r² = .16, f(4, 179) = 9.60, p < .0001, and 8% at time 2, r² = .08, adj. r² = .06, f(4, 173) = 4.02, p = .004. at both time points, none of the four variables explained variance in productivity beyond the other three, suggesting that all of them are associated with productivity but we lack statistical power to disentangle the effects (tables 3 and 5). there is an ostensible discrepancy between some correlations and the estimates of the regression analyses which requires further explanation.
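the vif threshold used above can be illustrated in the simplest case of two predictors, where the variance inflation factor reduces to a one-line formula (a sketch of the textbook definition; in the paper's 16-predictor models, vif_j = 1/(1 − R_j²) with R_j² taken from regressing predictor j on all the others):

```python
def vif_two_predictors(r12):
    """variance inflation factor for either predictor in a two-predictor
    regression, where r12 is the correlation between the two predictors:
    vif = 1 / (1 - r12**2). values above 10 signal problematic
    multicollinearity [20]."""
    return 1.0 / (1.0 - r12 ** 2)
```

for r12 = .5 the vif is about 1.33, far below 10; only near-duplicate predictors (|r12| > ~.95 in the two-predictor case) cross the threshold, which is consistent with the reported vif < 4.1 despite many intercorrelated predictors.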
an especially large discrepancy appeared for the variable need for competence, which correlated positively with well-being at times 1 and 2, r = .41, p < .001, and r = .38, p < .001, but was negatively associated with well-being when controlling for other variables in both regression analyses, b = −.20, p = .24, and b = −.33, p = .04. this suggests that including a range of other variables that serve as control variables impacts the results. indeed, exploratory analyses revealed that need for competence was no longer associated with well-being once we included need for autonomy: when we performed a multiple regression with the needs for autonomy and competence as the only predictors, need for competence became non-significant. need for competence also includes an autonomy component, which might explain this; it is easier to fulfill one's need for competence while being at least somewhat autonomous [90]. further, including generalized anxiety and boredom reversed the sign of the association: need for competence became negatively associated with well-being. including those two variables removes the variance that is associated with enthusiasm (boredom reversed) and courage (generalized anxiety reversed), which might explain the shift to a negative association with well-being. together, controlling for need for autonomy, generalized anxiety, and boredom takes away the positive aspects of need for competence, leaving a potentially cold side that might be closely related to materialism, which is negatively associated with well-being [30]. test-retest reliabilities were good for all variables, supporting the quality of our data (table 1, column 10).
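the sign reversal described above is a classic suppression effect, visible already in the two-predictor formula for standardized regression coefficients (the numbers below are invented for illustration and are not the paper's estimates):

```python
def standardized_betas(r_y1, r_y2, r_12):
    """standardized coefficients of a two-predictor regression, computed from
    the zero-order correlations of each predictor with the outcome (r_y1, r_y2)
    and between the two predictors (r_12)."""
    beta_1 = (r_y1 - r_y2 * r_12) / (1 - r_12 ** 2)
    beta_2 = (r_y2 - r_y1 * r_12) / (1 - r_12 ** 2)
    return beta_1, beta_2

# a predictor with a positive zero-order correlation (.41) turns negative once
# a slightly stronger predictor that is highly correlated with it (.90) is
# controlled for: the same mechanism discussed for need for competence.
b1, b2 = standardized_betas(0.41, 0.48, 0.90)
```

with uncorrelated predictors (r_12 = 0) the betas simply equal the zero-order correlations, so no reversal can occur; the reversal requires the strong overlap between the predictors.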
in total, we performed 20 structural equation modeling (sem) analyses to test whether well-being and productivity are predicted by, or predict, the independent variables: 16 models for well-being (including one in which we tested whether well-being predicts productivity or vice versa) and four models for productivity. since the probability of a false positive is very high, due to the high number of models analyzed, we used a conservative error rate of .005. we used a different threshold for the longitudinal analysis than for the correlation analyses because we performed a different number of tests for the latter. one example of our sem analyses is presented in figure 2, where we looked at the predictive-causal relationship between stress and well-being in waves 1 and 2. the boxes represent the items and the circles the variables (e.g., stress). the arrows between the items and the variables represent the loadings, that is, how strongly each item contributes to the overall variable score (e.g., item 3 of the stress scale contributes least and item 4 most to the overall score at both time points). the circular arrows represent errors. the bidirectional arrows between the variables represent the covariances, which are comparable to correlations. the single-headed arrows show causal impacts over time. the arrows between the same variables (e.g., well-being 1 and well-being 2) show how strongly they impact each other and are comparable to the test-retest correlations. the most critical arrows are those between well-being 1 and stress 2, as well as between stress 1 and well-being 2: they show whether one variable causally predicts the other. the most relevant values in figure 2 are presented in table 6. columns 2-4 show that stress and well-being were significantly associated at time 1, b = −0.75, se = .13, p < .001.
this association was mirrored at time 2, b = −0.15, se = .05, p = .001 (columns 5-7). columns 8-10 show that stress at time 1 did not significantly predict well-being at time 2, b = −0.00, se = .16, p = .99. columns 8-10 of the second part of table 6 also show that well-being at time 1 did not predict stress at time 2, b = 0.03, se = .05, p = .55. columns 2-4 of the second part show the autocorrelation of well-being, that is, how strongly well-being at time 1 predicts well-being at time 2, b = 0.71, se = .09, p < .001. autocorrelations can be broadly understood as the unstandardized version of the test-retest correlations (reliability) reported in table 1. finally, columns 5-7 of the second part show the autocorrelation of stress, which is also significant, b = .99, se = .16, p < .001. we conclude that no model revealed any significant associations at α = .005. thus, no variable at time 1 (e.g., stress) is able to explain a significant amount of variance in another variable (e.g., well-being) at time 2. we only found a negative tendency for distraction → productivity, b = −.154, p = .006. furthermore, table 6 shows which variable is more likely to have a stronger impact on the other over time: for example, productivity → distraction has b = .084, p = .602, suggesting that it is much more likely that distraction negatively influences productivity than that productivity influences the level of distraction. additionally, we explored whether there were any mean changes between times 1 and 2, separately for all 18 variables. for example, has well-being increased over time? this would suggest that people adapted further, within a relatively short period of two weeks, to the threat from covid-19. table 7 shows that the arithmetic mean (m) of well-being has indeed slightly increased between time 1 and 2, m = 4.14 vs. m = 4.34.
a closer look revealed that 91 participants reported higher well-being at time 2 compared to time 1, 23 reported the same level of well-being, and 70 a lower level of well-being. further, on average, people's scores for behavioral disengagement and quality of social contacts increased, whereas emotional loneliness and the quality of communication with line managers and coworkers decreased. (note to table 7. t: t-value of a paired-sample t-test; higher: absolute number of people who scored higher on a variable at time 2 compared to time 1; lower: number of people who scored lower at time 2; equal: number of people whose score has not changed over time.) our finding that office set-up is not significantly related to well-being and productivity seems to contradict a recent cross-sectional study by ralph et al. [84] that investigated how the fear of bioevents, disaster preparedness, and home-office ergonomics predict well-being and productivity among software developers. in that study, ergonomics was positively related to both well-being and productivity. to measure ergonomics, the authors created six items concerning distractions, noise, lighting, temperature, chair comfort, and overall ergonomics. the first two items are closely related to our measure of distraction, which was negatively associated with well-being in wave 1 of our sample, r = −.23, and with productivity, r = −.34. in contrast, the following four items are more closely associated with office set-up in our survey, which was positively but not significantly associated with well-being, r = .14, and productivity, r = .10. to better understand this inconsistency with our result, we ran a replication analysis using ralph et al.'s data. to test whether ergonomics' effect is mainly driven by distraction and noise, we combined the first two items into the variable ergonomics-distractions (recoded, so that higher scores indicate less distraction) and the other four items into ergonomics-others.
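the mean-change comparisons in table 7 rest on paired-sample t-tests; a minimal sketch of the statistic (our illustration, not the authors' code):

```python
import math

def paired_t(x_t1, x_t2):
    """paired-sample t statistic for the mean change between two waves:
    t = mean(d) / (sd(d) / sqrt(n)), where d holds each participant's
    time-2 minus time-1 difference and sd uses the n-1 denominator."""
    d = [b - a for a, b in zip(x_t1, x_t2)]
    n = len(d)
    mean_d = sum(d) / n
    sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
    return mean_d / (sd_d / math.sqrt(n))
```

a positive t indicates an increase from time 1 to time 2 (as for well-being here), a negative t a decrease; the paired design matters because the same 192 people were measured at both waves.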
indeed, ergonomics-distractions was more strongly correlated with well-being, r = .25, and productivity, r = .29, than was ergonomics-others, rs = .19 and .19, respectively. this suggests that our findings replicate those of ralph et al. and emphasizes the importance of distinguishing between distraction and office set-up. the covid-19 pandemic and the subsequent lockdown have had a definite impact on software professionals, who were largely forced to work from home. the first significant outcome of this research is that many variables are associated with well-being and productivity. although we could not determine any causal relationships, the effect sizes for several variables are medium to large in both waves and were largely stable over time. also, well-being and productivity were positively associated; in other words, neglecting well-being will likely also negatively impact productivity. therefore, we agree with ralph et al.'s [84] recommendation that pressuring employees to keep the average productivity level without taking care of their well-being will lower productivity. however, we would also like to present an alternative interpretation: having productive employees will strengthen their sense of achievement and improve their well-being. in the following, we focus on practical recommendations based on the most reliable predictors of well-being and productivity identified through our regression analysis: need for autonomy, stress, daily routines, social contacts, need for competence, extraversion, and quality of sleep as predictors of well-being (table 8); distractions and boredom related to productivity are discussed in table 9. persistent high stress levels are related to adverse outcomes in the workplace [6] and to people's well-being. to reduce stress, bazarko et al.
[6] recommend mindfulness-based stress reduction training and practices that can be performed at home; participating in such a program can lead to lower levels of stress and a lower risk of work burnout. grossman et al. [46] recommended other stress-reduction methods. moreover, naik et al. [76] found that mindfulness meditation practices, slow breathing exercises, mindful awareness during yoga postures, and mindfulness during stressful situations and social interactions can reduce stress levels. together, the results of these studies suggest that mindfulness practices, even when performed at home, can reduce stress, which could also improve software engineers' well-being while being quarantined. the quality of social contacts, as part of the overall quality of life, has a significant impact on people's well-being, as found in this study. therefore, employers should be interested in enabling their employees to spend time with people they value and encourage them to build strong, meaningful relationships within their work environment. creating a virtual office (e.g., using an online working environment such as 'wurkr') allows people to work with the impression of sharing a physical workspace online, to communicate more comfortably, and to work together from anywhere. for example, in order to simplify conversations, the slack plugin 'donut' [94] randomly connects employees for coffee breaks, so that they get to know each other better by spending some time chatting virtually. besides, our finding that the quality of social contact, but not living alone, is associated with well-being is in line with the literature: the quality of contact with one's partner and family independently and negatively predicted depression, whereas the frequency of these contacts did not [100]. together, this suggests that findings from the literature can overall be generalized to people being quarantined.
organizing the day at home in a structured way appears to be beneficial for software professionals' well-being. people tend to overwork when working remotely [11]. this could be further magnified during quarantine, where usual daily routines are disrupted and working might thus become the only meaningful activity to do. it is therefore essential to develop new daily routines in order not to be completely absorbed by work and to prevent burnout [9]. scheduling meetings and designating time specifically for hobbies or for spending time with family and friends is helpful while working from home and helps to satisfy employees' need for social contacts. to fulfill people's need for autonomy, it is necessary to allow employees to act on their values and interests [105]. while coordinating collaborative workflows and managing projects remotely comes with its challenges [11], it is crucial for remote workers to have flexibility in how they structure, organize, and perform their tasks [105]. it is therefore helpful to delegate work packages instead of individual tasks; this makes it easier for individuals to work in a self-directed way and thus to fulfill their need for autonomy. to fulfill employees' need for competence, it is necessary to provide them with the opportunity to grow personally and advance their skill set [64]. two of the most highly demanded skills in remote work environments are communication skills and the ability to use virtual tools, such as presentation tools or collaborative project-planning tools [11]. raising awareness of the unique requirements of virtual communication is crucial for a smooth working process; working remotely requires specific communication skills, such as mindful listening [73] or asynchronous communication, which allows people to work more efficiently [52].
collaborative tools such as github, trello, jira, google docs, klaxoon, mural, or slack can simplify work processes and enable interactive workflows. besides the training and development of employees' specific virtual skill set, it is also recommended to invest in employees' personal development within the company. taking action and offering employees the opportunity to grow will not only evolve their role but also strengthen their loyalty to the employer and, therefore, employee retention [58]. introverted software professionals seem to be more affected by the lockdown than their more extraverted peers. this finding is counter-intuitive, since extraverted people prefer more direct contact than introverted people [68]. our interpretation of these results is that introverts face a much higher burden in reaching out to colleagues than extraverts do. also, being introverted does not mean that there is no need for social contact at all. while in the office introverts had chances to be involved with colleagues in both a structured and an unstructured fashion, at home it is much more difficult, as they have to be more proactive and reach out to colleagues in a more formalized setting, such as an online collaboration platform (e.g., ms teams). therefore, software organizations should regularly organize both formal and informal online meeting occasions, where introverted software engineers feel a lower entry barrier to participation. quality of sleep is also a relevant predictor of well-being. although it might sound obvious, there is a robust association between sleep, well-being, and mindfulness [50]; in particular, howell et al. found that mindfulness predicts quality of sleep, and that quality of sleep and mindfulness predict well-being. distractions at home are a challenging obstacle to overcome while working remotely.
designating a specific work area in the home and communicating non-disturbing times to other household members are easy and quick first steps to minimize distractions in the workplace at home. another obstacle that frequently distracts remote workers is cyberslacking, understood as spending time on the internet for non-work-related reasons during working hours [27]. cyberslacking and its contribution to distractions at home for remote workers were not included in this study but would be worth exploring in future research. when people experience boredom, it makes them feel "...unchallenged while they think that the situation and their actions are meaningless" [101, p. 181]. especially people who thrive in a social setting at work are in danger of becoming bored quickly while working in isolation from their homes. the recommendations enumerated above, such as assigning interesting, personally tailored, and challenging work packages, using collaborative tools to hold yourself accountable, and having social interactions while working remotely, also help reduce boredom at work. ideally, employees are intrinsically motivated and feel fulfilled by what they do. if this is not the case over a more extended period, and the experienced boredom is not a negative side effect of being overwhelmed while being quarantined, it might be reasonable to discuss a new field of action and area of responsibility with the employee. to conclude, working from home certainly comes with its challenges, of which we have addressed several in this study. however, software engineers at least appear to adapt to the lockdown over time, as people's well-being increased and the perceived quality of their social contacts improved. similar results have also been confirmed by a survey study of 2,595 remote workers in new zealand [104]. walton et al.
found that productivity was similar to or higher than pre-lockdown, and 89% of professionals would like to continue to work from home at least one day per month. that study also revealed that the most critical challenges were switching off, collaborating with colleagues, and setting up a home office. on the other hand, working from home led to a drastic saving of time otherwise allocated to daily commuting, a higher degree of flexibility, and increased savings. limitations are discussed using gren's five-facets framework [45]. reliability. this study used a two-wave longitudinal design, in which over 90% of the initial participants, identified through a multi-stage selection process, also participated in the second wave. further, the test-retest reliabilities were high, and the internal consistencies (cronbach's α) ranged from satisfactory to very good. construct validity. we identified 51 variables, which were drawn from the literature, and each was measured with a suitable instrument. where possible, we used validated instruments; otherwise, we developed and reported the instruments used. to assess construct validity, we also reported the cronbach's α of all variables across both waves. however, we note that despite the large number of variables in our study, we still might have missed one or more relevant variables that would have turned out significant in our analysis. [table 8/9 fragments: organizations should redesign employees' goals by letting them choose tasks as much as possible and diversify activities. negative predictor in both waves (b_w1 = −.065, b_w2 = −.077). organizations should support software engineers in setting up a dedicated home office; routines and agreements with family members about working times also help to stay focused.] conclusion validity. to draw our conclusions, we used multiple statistical analyses such as correlations, paired t-tests, multiple linear regressions, and structural equation modeling.
to ensure reliable conclusions, we used conservative thresholds to reduce the risk of false-positive results; the threshold depended on the number of comparisons for each test. additionally, we did not include covariates, stop the data collection based on interim results, or perform any other practice associated with inflating the probability of false-positive findings [93]. however, we could not draw any causal-predictive conclusion, since all 20 sem analyses provided non-significant results under a significance threshold chosen to reduce the risk of false-positive findings. finally, we made both the raw data and the r analysis code openly available on zenodo. internal validity. this study did not lead to any causal-predictive conclusion, even though that was the main aim of the present study. we cannot say whether the analyzed variables influence well-being or productivity or vice versa. we are also aware that this study relies on self-reported values, limiting the study's validity. further, we adjusted some measures (i.e., productivity): participants were not asked to report their perceived productivity directly but to make a comparison, which was then computed independently in our analysis. we also applied an extensive screening process, selecting 192 software engineers from the 483 initially suitable subjects. typical problems of longitudinal studies (e.g., attrition of subjects over a long period) do not apply, as the dropout rate between the two waves was low (under 10%). we ran this study towards the end of the lockdown of the covid-19 pandemic in spring 2020; in this way, participants were able to report well-grounded judgments of their conditions.
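the text does not name the exact correction behind its "conservative thresholds," but a bonferroni-style adjustment, dividing the family-wise alpha by the number of comparisons in a test family, is the simplest scheme matching the description; a hedged sketch with hypothetical numbers:

```python
def bonferroni_threshold(alpha: float, n_comparisons: int) -> float:
    """Per-test significance threshold controlling the family-wise error rate."""
    return alpha / n_comparisons

# hypothetical: screening 51 candidate variables at a family-wise alpha of .05
per_test = bonferroni_threshold(0.05, 51)
print(f"{per_test:.5f}")  # 0.00098
# a result counts as significant only if its p-value falls below per_test
```

the design choice is deliberately conservative: it lowers power per test but keeps the overall chance of any false positive at or below the nominal alpha.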
waves were set two weeks apart, which ensured that lockdowns had not yet been lifted during the data collection of wave 2, while still leaving enough time for variability in each of the variables to emerge between the two time points. since this was a pandemic, the surveyed countries' lockdown conditions were similar (due to the who's standardized recommendations). however, we did not consider region-specific conditions (e.g., severity of virus spread) and recommendations, and lockdown timing differed among countries. to control for these potential differences, we asked participants at each of the two waves whether lockdown measures were still in place and whether they were still working from home. since all our participants answered positively to both questions, we did not exclude anyone from the study. external validity. our sample size was determined by an a priori power analysis and is manageable for longitudinal analyses. however, this study was designed to maximize internal validity, focusing on finding significant effects, rather than on working with a representative sample of the software engineering population (with n ≈ 400, as russo and stol [89] did, where the research goal focused on the generalizability of results). the covid-19 pandemic disrupted software engineers in several ways. abruptly, lockdown and quarantine measures changed the way of working and of relating to other people. software engineers, in line with most knowledge workers, started to work from home with unprecedented challenges. most notably, our research shows that high stress levels, the absence of daily routines, and social contacts are some of the variables most related to well-being. similarly, low productivity is related to boredom and distractions at home. we base our results on a longitudinal study, which involved 192 software professionals.
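an a priori power analysis for detecting a correlation (the bibliography cites g*power) can be approximated with fisher's z transformation; the sketch below is an assumption-laden illustration of the arithmetic, not the authors' actual g*power settings:

```python
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """A priori sample size to detect correlation r (two-sided), via Fisher's z."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value of the two-sided test
    z_beta = z.inv_cdf(power)           # quantile matching the desired power
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)

# a medium effect (r = .30) at 80% power needs roughly 85 participants,
# comfortably below the 192 professionals retained across both waves
print(n_for_correlation(0.30))  # 85
```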
after identifying from the literature 51 relevant variables related to well-being or productivity during a quarantine, we ran a correlation study based on the results gathered in our first wave. for the second wave, we retained only the variables correlated with well-being or productivity with at least a medium effect size. afterward, we ran 20 structural equation modeling analyses, testing for causal-predictive relations. we could not find any significant relation, so we cannot conclude whether the dependent variables are caused by the independent ones or vice versa. accordingly, we ran several multiple regression analyses to identify unique predictors of well-being and productivity, where we found several significant results. this paper confirms that, on average, software engineers' well-being increased during the pandemic. also, there is a correlation between well-being and productivity. out of 51 factors, nine were reliably associated with well-being and productivity. correspondingly, based on our findings, we proposed some actionable recommendations that might be useful for dealing with potential future pandemics. software organizations might start to ascertain experimentally whether adopting these recommendations increases professionals' productivity and well-being. our research findings indicate that granting a higher degree of autonomy to employees might be beneficial, on average. however, while extended autonomy might be experienced positively by those with a high need for autonomy, it might be perceived as stressful by those who prefer structure. since it is unlikely that any intervention will have the same effect on all people (there is substantial variation for most variables), it is essential to keep individual differences in mind when exploring the effects of any intervention. thus, adopting incremental interventions based on our findings, where organizations can get feedback from their employees, is the recommended strategy.
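the screening step, keeping only variables whose correlation with the outcome reaches a medium effect size (|r| ≥ .30 under cohen's guidelines) and then regressing the outcome on the survivors, can be sketched as follows (synthetic data and names are mine, not the study's):

```python
import numpy as np

def medium_or_larger(X: np.ndarray, y: np.ndarray, cutoff: float = 0.30):
    """Indices of columns whose Pearson correlation with y reaches the cutoff."""
    return [j for j in range(X.shape[1])
            if abs(np.corrcoef(X[:, j], y)[0, 1]) >= cutoff]

def fit_ols(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares coefficients (intercept first) for the retained predictors."""
    design = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
signal = rng.normal(size=200)    # a variable genuinely tied to the outcome
noise = rng.normal(size=200)     # an unrelated candidate variable
outcome = signal + rng.normal(scale=0.5, size=200)
X = np.column_stack([signal, noise])
keep = medium_or_larger(X, outcome)   # the signal column passes the screen
beta = fit_ols(X[:, keep], outcome)
print(keep, beta.round(2))
```

the two-stage design trades completeness for power: weakly related variables are dropped before the second wave so the remaining tests are fewer and better powered.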
future work will explore several directions. cross-sectional studies with representative samples will be able to test whether our findings are generalizable and to gain a better understanding of the underlying mechanisms between the variables. we will also investigate the effectiveness of specific software tools and their effect on the well-being and productivity of software engineering professionals, with particular regard to the relevant variables. the full survey, raw data, and r analysis code are openly available on zenodo, doi: https://doi.org/10.5281/zenodo.3959131.

references:
the impact of telework on emotional experience: when, and for whom, does telework improve daily affective well-being?
how will country-based mitigation measures influence the course of the covid-19 epidemic?
survey of stress reactions among health care workers involved with the sars outbreak
teleworking: benefits and pitfalls as perceived by professionals and managers
does herzberg's motivation theory have staying power?
the impact of an innovative mindfulness-based stress reduction program on the health and well-being of nurses employed in a corporate setting
relationship quality profiles and well-being among married adults
does working from home work? evidence from a chinese experiment
the psychological impact of quarantine and how to reduce it: rapid review of the evidence
acts of kindness and acts of novelty affect life satisfaction
the need for cognition
the factors affecting household transmission dynamics and community compliance with ebola control measures: a mixed-methods study in a rural village in sierra leone
health surveillance during covid-19 pandemic
improving employee well-being and effectiveness: systematic review and meta-analysis of web-based psychological interventions delivered in the workplace
personality differences and covid-19: are extroversion and conscientiousness personality traits associated with engagement with containment measures?
you want to measure coping but your protocol's too long: consider the brief cope
assessing coping strategies: a theoretically based approach
managing a virtual workplace
cipd: getting the most from remote working
help now nyc
a power primer
perceived stress in a probability sample of the united states
danish health authority: questions and answers on novel coronavirus (coronavirus/spoergsmaal-og-svar/questions-and-answers)
emotion beliefs in social anxiety disorder: associations with stress, anxiety, and well-being
getting the most from remote working
the satisfaction with life scale
empirical evaluation of the effects of experience on code quality and programmer productivity: an exploratory study
the relationship between materialism and personal well-being: a meta-analysis
disrupted work: home-based teleworking (hbtw) in the aftermath of a natural disaster
big tech firms ramp up remote working orders to prevent coronavirus spread
european social survey: ess round 7: european social survey round 7 data
boredom proneness-the development and correlates of a new scale
statistical power analyses using g*power 3.1: tests for correlation and regression analyses
the concomitants of conspiracy concerns
the multidimensional work motivation scale: validation evidence in seven languages and nine countries
structural equation modeling with lavaan
a 6-item scale for overall, emotional, and social loneliness: confirmatory tests on survey data
effect size guidelines for individual differences researchers
a growing socioeconomic divide: effects of the great recession on perceived economic distress in the united states
a simple method to assess exercise behavior in the community
exploring the needs of teleworkers using herzberg's two factor theory
standards of validity and the validity of standards in behavioral software engineering research: the perspective of psychological test theory
mindfulness-based stress reduction and health benefits: a meta-analysis
sars control and psychological effects of quarantine, toronto, canada
motivation to work
crowdsourcing personalized weight loss diets
relations among mindfulness, well-being, and sleep
web-based cases in teaching and learning-the quality of discussions and a stage of perspective taking in asynchronous communication
ecology of zoonoses: natural and unnatural histories
using task context to improve programmer productivity
the world health organization health and work performance questionnaire (hpq)
public risk perceptions and preventive behaviors during the 2009 h1n1 influenza pandemic
why we should not measure productivity
study on determining factors of employee retention
psychological impacts of the new ways of working (nww): a systematic review
employee wellbeing, productivity, and firm performance
monotasking or multitasking: designing for crowdworkers' preferences
explicit programming strategies
the experience of sars-related stigma at amoy gardens
why do high school students lack motivation in the classroom? toward an understanding of academic amotivation and the role of social support
what makes a great software engineer? in: proceedings of the 37th ieee/acm international conference on software engineering
defining the epidemiology of covid-19: studies needed
depression after exposure to stressful events: lessons learned from the severe acute respiratory syndrome epidemic
extraversion and preferred level of sensory stimulation
using behavioral science to help fight the coronavirus
the psychological impact of teleworking: stress, emotions and health
the relevance of psychosocial variables and working conditions in predicting nurses' coping strategies during the sars crisis: an online questionnaire survey
a meta-analysis of interventions to reduce loneliness
transparency, communication and mindfulness
the impact of the coronavirus on hr and the new normal of work
years of life lost due to the psychosocial consequences of covid-19 mitigation strategies based on swiss data
effect of modified slow breathing exercise on perceived stress and basal cardiovascular parameters
psychological and epidemiological predictors of covid-19 concern and health-related behaviors
nhs: mental wellbeing while staying at home
nhs: your nhs needs you - nhs call for volunteer army
prolific.ac-a subject pool for online experiments
beyond the turk: alternative platforms for crowdsourcing behavioral research
a social-cognitive model of pandemic influenza h1n1 risk perception and recommended behaviors in italy
pandemic programming: how covid-19 affects software developers and how their organizations can help
understanding, compliance and psychological impact of the sars quarantine experience
covid-19 outbreak on the diamond princess cruise ship: estimating the epidemic potential and effectiveness of public health countermeasures
a critique of cross-lagged correlation
lavaan: an r package for structural equation modeling and more. version 0.5-12 (beta)
gender differences in personality traits of software engineers
self-determination theory and the facilitation of intrinsic motivation, social development, and well-being
exploratory experimental studies comparing online and offline programming performance
the balanced measure of psychological needs (bmpn) scale: an alternative domain general measure of need satisfaction
false-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant
a brief measure for assessing generalized anxiety disorder: the gad-7
posttraumatic stress disorder in parents and youth after health-related disasters
a short boredom proneness scale: development and psychometric properties
high self-control predicts good adjustment, less pathology, better grades, and interpersonal success
factors influencing psychological distress during a disease epidemic: data from australia's first outbreak of equine influenza
social relationships and depression: ten-year follow-up from a nationally representative study
on boredom: lack of challenge and meaning as distinct boredom experiences
a within-person examination of the effects of telework
the 24-item brief hexaco inventory (bhi)
new zealanders' attitudes towards working from home
building autonomous learners: perspectives from research and practice using self-determination theory
dealing with feeling: a meta-analysis of the effectiveness of strategies derived from the process model of emotion regulation
statistically controlling for confounding constructs is harder than you think
knowledge, attitudes, and practices among members of households actively monitored or quarantined to prevent transmission of ebola virus disease, margibi county
the importance of shared values in societal crises: compliance with covid-19 guidelines
conspiracy suspicions as a proxy for beliefs in conspiracy theories: implications for theory and measurement
world health organization: considerations for quarantine of individuals in the context of containment for coronavirus disease (covid-19): interim guidance

we thank the editors-in-chief for fast-tracking our manuscript. the authors would also like to thank gabriel lins de holanda coelho for initial feedback on this project.

key: cord-301537-uu2aykoy authors: johnston largen, kristin title: two things can be true at once: surviving covid-19 date: 2020-05-27 journal: dialog doi: 10.1111/dial.12571 sha: doc_id: 301537 cord_uid: uu2aykoy

as i sit in california, currently under a "stay at home" order to help stop the spread of the covid-19 pandemic, i have been reflecting on worship, the role of the church in providing healthcare, and our sacramental life. colleagues have asked for my thoughts on liturgy in the midst of a public health crisis, so i have decided to put things on paper so that others may be included in the conversation. many have already published resources on moving worship into the online environment, but a large percentage of those resources have approached it from a practical perspective rather than a theological or historical one. the title of my essay is a riff on luther's 1527 open letter "whether one may flee from a deadly plague," which has made a resurgence during this health crisis. the letter has always been a favorite of mine: i assigned it during the four years i taught introduction to lutheranism, and it served as a conversation partner in my dissertation on liturgical rites of healing. quotes from the letter have circulated around facebook, especially as luther (near the end of the letter) provides very practical advice during a plague. the main point of the letter is that pastors and city officials are to work together to physically and spiritually care for those affected by disease, and that those not bound by such responsibility should be free to leave without burdening their conscience.
but before we use luther's letter as our urtext for the church's response during covid-19, we must remember that it seems unthinkable for luther (at least in the letter itself) that people would not be able to attend worship during a plague. our situation is different, with local, state, and federal governments providing recommendations and orders to stay home and not gather in groups. our understanding of science is also different: being on this side of the scientific revolution, we have a better grasp of how germs spread. luther's science is not our science, and that must be considered when reading his specific recommendations on liturgical practices during health crises. on the other hand, the letter reminds us that the church must work alongside civil authorities in preventing a plague from spreading, which means that congregations must follow the orders not to gather. attempting to spiritualize this pandemic as the will of god, either as punishment or as an opportunity to repent, is dangerous. the theological and historical concerns that i raise below are as i interpret our lutheran traditions, drawing on luther and the book of concord. the practical suggestions i offer that differ from our customary practices are understood to be in extremis, that is, for an emergency like the one we face today. as luther reminds his readers at the beginning of the letter, all christians must "come to their own decision and conclusion" (luther, 1968, p. 119). one thing that must be addressed before reflecting on particular issues is to define worship from a theological perspective or, to use john witvliet's (2006) modes of liturgical discourse, in terms of its "deep meaning and purpose." for lutherans the primary theological understanding of worship is as a dialogue between god and humans, or, as luther says in his torgau sermon, "where our dear [god] may speak to us through [the] holy word and we respond to [god] through prayer and praise" (luther, 1959, p.
333; see also luther, 2000, p. 397.84). distinct from some other christian traditions, lutheran worship is primarily about god coming to us through the means of grace, the concrete and external ways that god makes godself known in the worship event. these are primarily preaching, the sacraments, absolution, and other liturgical practices. at the same time, lutheran worship also includes a participatory response. this is the second half of luther's torgau definition. active participation ("full, active, conscious," as would be articulated four centuries later at the second vatican council) is the other half of the dialogue between god and humans. participation is a diverse thing, just as christ was incarnated into a diverse human reality. the response is that of the entire congregation, not just of worship leaders or a select few. i think this should give us pause when we look at livestreaming (rather than web conferencing), as it calls into question the participatory ability of digital worship. unlike some theological traditions, lutherans are under no obligation to go to worship. yet the question of obligation actually misses the point if we attend to our definition of worship: it is through worship that we know who god is and how god operates in the world. worship contains external forms so that god's word may exert its power more publicly (luther, 2000, p. 399.94). in fact, luther (2000, p. 398.85) believes worship is so important that we should have it daily, but understands that sunday as the chief day of the week is handed down from ancient times. in her book @worship: liturgical practices in digital worlds, liturgical theologian teresa berger (2018, p. 16) cautions us against too quickly succumbing to the false dichotomy of "real" and "virtual." such a distinction can equate "virtual" with "non-real," which automatically privileges face-to-face relationships and practices over technologically mediated ones.
god primarily operates through means (neighbor, preaching, sacraments); thus, technologically mediated communication can certainly be the medium of god's own communication. another theological concept that helps bridge the gap between the so-called "virtual" and "real" is one that was used during the reformation to describe christ's presence in the sacrament. luther argued that, because of the ascension, christ has the attribute of ubiquity, meaning that he is available wherever he has promised to be. for luther and his disagreements with the reformed, this meant that christ is truly present in the elements of the lord's supper on every altar. this same logic can be used in the digital environment: christ has promised to be present among gatherings of christians and in the midst of those who suffer (the theology of the cross). if that is true, then christ can be present online in a way that transcends time and geography, just like christ's presence in the sacrament. theologian deanna thompson (2016; 2020, 26 march) follows this argument, claiming that virtual community is real community, mediating the body of christ. one of the arguments i heard in my previous work of helping faculty teach online is the assumption that online coursework and whole-person formation contradict one another, since the online environment is supposedly all about the mind. from my experience of teaching online, i know this is not true. berger (2018, p. 19) affirms this by stating that the online environment definitely does have a physical effect on users. the types of relationships that can occur online can be described as "low stakes," meaning that they are "not associated with any cost, friction or risk" (simanowski, 2018, p. 9). does such a "low threshold" (berger, 2018, p. 36) for commitment allow one to flee from any sort of responsibility in the relationship, or does it allow for the freedom of openness without the increased possibility of negative consequences?
constructing intentional community, as what does (or should) already exist in our regular congregations, differs from the low-stakes approach one sees on facebook. in our current situation, the online environment is operating parallel to the communities that would occur in person if the health crisis did not exist. thus, such relationships are actually "high stakes," as this moment is temporary, with the assumption that these relationships will continue outside technology. the phrase that health officers are using is "social distancing," the clinical term that encourages people to stay out of public spaces and larger gatherings. while limiting contact and large groups is an important step in reducing the pandemic peak, the term creates another set of problems. it is physical distancing that reduces the spread of germs; we must continue social encounters in this crisis even while limiting physical encounters. social networking and web conferencing technology, which i have been using since spring break to teach the remainder of the semester, can foster and assist us in maintaining our neighborliness in the midst of covid-19. maintaining both physical and social distancing can lead to isolation, which can make the health situation (especially mental health) worse. in some of the discourse i have seen online in recent weeks, i have noticed a discrepancy in terminology. when it comes to using video-based technology to broadcast online, two terms usually appear: livestream and web conference. although both of these practices use webcams and microphones, their level of interactivity is quite different. the livestreaming approach is unidirectional, which is how one currently watches television and youtube. the broadcaster creates the material, and those who watch consume it. participation is at best passive and could be analogous to a pre-reformation understanding of the mass.
the main role of the worshiper is to watch at the important moments while simultaneously engaging in their own devotional practices. livestreaming (and admittedly web conferencing, if the feature is enabled) allows for recording, meaning those unable to participate at the scheduled time can join in when they are able; but this can increase the individualism present in contemporary society. also, taking seriously the role of the holy spirit means that it would be nearly impossible to replicate the live action in a recording (spadaro, 2014, p. 79). the web conferencing approach is bidirectional and multidirectional. it allows for both proclamation and response through the same online tool, which is not the case with livestreaming. the "congregation" is part of the interactivity just as much as the worship leaders. this better simulates the dialogical nature of lutheran worship as i defined it earlier. in his letter, luther (1968, p. 134) encouraged his readers to continue to participate in the weekly proclamation of the word through the sermon. he understood that central to christian life is the preaching and practice of god's word (luther, 2000, p. 398.90). historically, the service of the word has been the primary sunday liturgy for lutherans. some may wish to dispute this because our confessional documents and luther himself assume a weekly celebration of the sacrament (see melanchthon, 2000, p. 258.1; luther, 2000, p. 472.49). but we know that this was the ideal, and various circumstances usually prevented or hampered attaining it: a shortage of pastors in early american efforts, laity still feeling unworthy to receive, the assumption that frequent reception would diminish the sacrament's specialness. the trend toward restoring weekly celebration of the lord's supper came with the early work done alongside the ecumenical liturgical renewal movement of the mid-twentieth century.
and still, among many lutheran congregations, weekly communion is not yet the practiced norm, although it is assumed in the newest worship books. when visiting my family in minnesota, the congregation where we worship still celebrates the sacrament twice a month. this attempt at restoring weekly sacramental celebrations has in some places turned into an overcorrection, with the assumption that every time the congregation, or even a subset of the congregation, gathers, the liturgy is deficient if the sacrament is absent. this practice has led to a phenomenon that looks more like votive masses than regular worship, in which the eucharistic liturgy appears to be for particular intentions ("votums") rather than a means of grace. the language around these votive (and sometimes private) masses becomes about mutual union and friendship, professing the faith we share, which is contrary to our understanding of the sacrament (melanchthon, 2000, p. 270.68). these votive-like celebrations often separate the proclamation of the word from the sacrament, such that the sacramental elements become the primary action rather than maintaining the historic order and balance (see evangelical lutheran church in america, 1997, p. 38). one problem identified by those who advocate for perpetual fasting during this pandemic is that the sacrament must be celebrated within the assembly, as articulated in the use of the means of grace, principle 39 (evangelical lutheran church in america, 1997, p. 44). yet this neglects the fact that the online environment is an assembly gathered for worship, while raising no objections to sacramental celebrations outside the assembly (e.g., church council meetings, retreats, etc.). the best advice would be to fast from the sacrament as long as possible, even in the midst of desiring it.
the season of lent provides a scheduled opportunity to do so, as we prepare ourselves for the annual celebration of christ's death and resurrection (lange, 2020, march 24). as many congregations are already doing, these services of the word can easily take place in the online environment, especially through the communal nature of web conferencing. but the current health crisis may run many months; it already extends into eastertide and beyond. this requires other solutions (see below). because of the centrality of the proclamation of the word, lutheran congregations have not replaced the sunday liturgy with daily prayer; this was the custom in many anglican parishes before the 1979 prayer book. daily prayer in its ideal form is daily worship that does not occur on sundays, as the readings assigned are drawn primarily from other parts of scripture. the three main offices (morning prayer, evening prayer, night prayer) center the gospel in the invariable canticles from luke's gospel rather than in the reading of a gospel lectionary text. the daily prayer offices are also better suited to smaller gatherings than to the presence of the entire worshipping congregation. this may make the most sense in the online environment, as web conferencing technology works best with smaller groups, and these rites are particularly suited as domestic (at-home) rituals rather than rituals for the worship space of the congregation. the music that accompanies daily prayer, especially the historic orders with their assigned chants, can be done with little to no accompaniment and can easily be spread among many people (both rostered and lay) for leadership. it is with the celebration of the lord's supper (holy communion, eucharist) that we encounter the most difficulty in this public health crisis. even many advocates for digitally mediated worship stop short of agreeing to "online communion."
the main argument is that "the christian faith is deeply incarnational, and that means wedded to physicality and matter. … [o]ffering communion online short-circuits the communal, embodied nature of the eucharist" (berger, 2018, p. 84). these critiques are important, as they lift up one of the many layers of meaning of the sacrament, namely its physicality and incarnational nature. this physicality of the means of grace connects with our own physicality to remind us that our human/bodily nature is a god-given gift. the external nature of the sacraments provides the needed certainty to which our faith can cling (luther, 2000, p. 461.37). when the sacramental elements are not possible in any way, the words of institution themselves serve the role of comfort and healing. the 1540 brandenburg church order notes that lay people could use the words of institution without administering the elements (which they could not do) so that the sick could "feed on the word" (rittgers, 2012, p. 171). this is a natural extension of luther's claim that the sole source of comfort for christians is the word (rittgers, 2012, p. 103). the proclamation of the gospel, aural and edible, is in service of consolation (treu, 1986, p. 18). the church's ministry to the sick has usually included bringing communion to those who are unable to attend sunday worship. this tradition dates to the second-century writings of justin martyr in his description of christianity to roman officials. in narrating an outline of the sunday liturgy, he describes the role of the deacons as the ones who bring the lord's supper to those who are absent (chapters 65 and 67). the deacons, since they did not have the role of "consecrating" the sacrament (justin assigns that to the "president of the assembly"), would have brought the already-consecrated elements as an extension of the assembly's sunday worship.
this practice has continued to the present day and is seen in the current lutheran tradition in lbw's "distribution of holy communion to those in special circumstances" and elw's "sending of holy communion." i think the lbw's title better lifts up the issues we face today: we are in "special circumstances" that require us to rethink our customary practices. and this reevaluation makes it necessary that we be creative, especially since our congregations are all dealing with different restrictions and situations. extending our practices for "special circumstances" would be the ideal solution for distributing communion during these times. in contexts that are not in a quarantine-like state, where visits are still allowed, a minister of communion would bring the sacrament to individuals or small groups in their homes or other arranged places. the caveat is that in many places the assembly is not gathering on sunday mornings, so how can distribution be extended from something that is not happening? if allowed under the civil orders, ministers of communion would gather with the pastor for a full eucharistic liturgy (including the proclamation of the gospel), receive the sacrament themselves, and then carry it to those who are not allowed to be present. preaching, which importantly connects to the distribution of the sacrament, could be recorded so that it can be played in the remote locations when the sacrament is distributed. it is important that the minister give communion to the other person (and vice versa) so that the communal nature of communion continues. the more difficult context is when gatherings in general are disbanded by order of civil officials. one might argue that churches could exempt themselves from such a situation (e.g., two kingdoms), but luther (1968, p.
131) reminds us in his letter that part of the responsibility for both body and soul means doing what it takes not to spread infection; in fact, it is considered sinful not to avoid places and persons in the case of possible infection. yet, there are possibilities even here. while maintaining physical distancing, it would be possible to deliver the sacrament to households, much as is permitted for food delivery. even though the sacrament is not mere bread and wine as served at the table, luther (2000, pp. 469.23-24, 474.68) still calls it food (and medicine) for both body and soul, nourishing us in a different mode than regular table food. again, pastors and ministers of communion would need to find a way to maintain the intimate connection between the proclamation of the gospel and the sacrament, so that we do not privilege one over the other. as suggested above, preaching could happen remotely through digital means and people would be able to receive communion. it is the distribution and reception, the "for you," that is the central action of the sacrament, so ideally it would be someone else who distributes communion (luther, 2000, p. 469.21; formula of concord, 2000, p. 602.54). as thomas schattauer (2014, p. 209) notes, the lutheran mass "culminated in the reception of the sacrament." this could occur among family members in a household, roommates in other situations, medical professionals and patients in a healthcare facility, and so on. this is the most difficult part of the discussion on digital worship in a health crisis. i have read essays on both sides of the argument, and most seem to be talking past one another. the ideal response is, as i have stated above, to fast from receiving the sacrament. in the small catechism, luther notes that the benefits of the sacrament are "forgiveness of sins, life and salvation," which makes it different from baptism (and the necessity of that for salvation). yet, in the large catechism, luther (2000, p.
469.24) provides additional benefits: comfort, new strength, and refreshment. the sacrament is also "a pure, wholesome, soothing medicine that aids you and gives life in both soul and body" (luther, 2000, p. 474.68). so while the lord's supper is not salvifically necessary, it certainly could be considered pastorally necessary. before musing on what online communion may look like, i want to offer three caveats. the first is that the sacrament remains unimpaired even if we handle or use it unworthily (luther, 2000, p. 467.5; formula of concord, 2000, p. 597.24). this is not to excuse bad sacramental practice or to justify doing whatever we want with the lord's supper. rather, it does provide some comfort as we attempt to adjust to an unthinkable situation in a public health crisis. the second is that no one should deter someone from receiving the sacrament (luther, 2000, p. 473.62). especially in times of pastoral necessity, the lord's supper should be received. luther (1968, p. 134) saw this as a requirement for pastors to minister to the sick and dying. it is part of the full work of ministry in the midst of a health crisis: preaching, teaching, exhorting, consoling, visiting, and administering the sacrament (luther, 1968, p. 136). but, when visits are not allowed, the last two in this list of work must be rethought using the tools we have in our time. the third is that we should not doubt that christ the word can certainly accomplish what he promises (formula of concord, 2000, p. 601.47). this simple argument was central to luther's disagreements with zwingli at marburg: if jesus promises (as he states in the words of institution) to truly be present, then we should not doubt those words or attempt to construe them to mean something else. so, is it possible to have online communion? i hesitate to answer definitively, but i provide here some theological rationale for doing so in extremis.
by online communion i mean having worshippers gather in community through web conferencing with their own bread and cup from their own pantries. the two main objections raised are: (1) the lord's supper requires contact, and (2) it is akin to self-communion. the first objection should not be taken lightly, as generations of christians have gathered physically to participate as the ecclesial body of christ in the sacramental body of christ. distribution is important in lutheran theology, and it regularly happens from person to person (see luther, 2000, pp. 470.34-35). unfortunately, in many dire situations, that is not possible and can even be dangerous; recall luther's exhortation that ministers are also responsible for not spreading infection (luther, 1968, p. 131). an even stronger objection related to contact is that the sacramental event must happen in person, which would preclude the sacraments happening remotely. prior to the technological revolution, such a remote sacramental event would be unthinkable and thus would not have come up in theological discourse, certainly not during the reformation. to me this is a question of "use" and "action," the two words that the concordists identify as best expressing luther's sacramental theology in his "confession concerning christ's supper." the right use of the sacrament is reception and faith (schattauer, 2014, p. 209). the sacrament cannot be present separate from its intended use (formula of concord, 2000, p. 595.15). this is the closest the lutheran tradition comes to defining the "how" of the sacrament, as that was not the important question in debating the lord's supper (the "who," "what," and "why" were primary). in the regular celebration of the lord's supper, it is the presiding minister who completes steps one through three, and then the recipient of the sacrament completes steps four and five.
yet, the concordists do not appear to say anything about the necessity of the presiding minister doing steps one and three, only step two, speaking (or singing) the words of institution, because of having the proper "ministry or office" (formula of concord, 2000, p. 607.77). christ's body and blood are truly present as the sacramental bread and wine in its use and action, which i would argue can extend over digital means in the midst of the online community. the concordists insist on the language of use and action to prevent misuse of the sacrament through eucharistic adoration/reservation and corpus christi processions (formula of concord, 2000, pp. 611-612.108). any adoration of christ occurs when the community gathers for sacramental worship, not as adoration of the sacrament itself. the distribution extra nos prevents self-communion, which is the second objection. i find it peculiar that such an objection is raised in regard to online communion, when generations of lutheran pastors have communed themselves during the eucharistic liturgy (rather than having an assisting minister commune them) without much objection. in fact, the use of the means of grace, application 46a, permits such a practice (evangelical lutheran church in america, 1997, p. 50). the role of the presiding minister in all of this is to proclaim the words of institution, just as the preacher proclaims the gospel; both of these are understood as "showing forth" christ (melanchthon, 2000, p. 272.80). 4 the presiding minister is not acting in persona christi as a show or example of christ's action at the last supper (melanchthon, 2000, p. 271.72). rather, the presiding minister is to proclaim christ through the audible and edible word (aural and sacramental) because christ is the word. in his liturgical reforms, luther underscores these points ritually by requiring the words of institution to be spoken publicly and by eliminating the manual acts with the elements (schattauer, 2014, pp. 211, 216).
such reforms cause the eucharistic liturgy to focus on "christ's entire life and the meaning of that life for human salvation," rather than on reenacting a particular moment in christ's life (wandel, 2006, p. 100). when this essay is published online, most people will still be under "stay at home" or quarantine orders, and that may also be the case once this essay is published in hardcopy. while community continues to happen through online means, christians will be unable to gather in worship spaces for easter, the culmination of the liturgical year, in order to protect the vulnerable among us. this new life proclaimed in the death and resurrection of christ is a constant reminder that all christians are called to care for the neighbor in both their physical and spiritual life. during this time of being "alone together," i have been reflecting on the lectionary, especially the gospels for the second and third sundays of easter. like thomas, who missed the first appearance of the resurrected jesus in the locked room, we may doubt christ's true presence unless we see and experience what has always happened in the past. yet, christ still comes to us, even when we do not experience church as in previous days. like the two disciples on the road to emmaus, we may not understand these events that have taken place, where our easter expectations have been disrupted by things outside our control. yet, christ still comes to us, even when we cannot gather physically to break bread as we have in previous days. pastors, deacons, and all christians are responsible for the well-being of all people during this time, which may also include adjusting sacramental practice in extremis. the debate over online or virtual communion is not new, but the current health crisis has brought it to the foreground, and the ending of the covid-19 pandemic will not stop the debate.
as we continue to figure out how to be church in the 21st century, we will need to attend to the many layers of meaning inherent in our practices.

luther (2000, 472.49) argues that those who do not desire the sacrament actually despise it.

3. this is what luther means by connecting the elements with the word (luther, 2000, pp. 468.10, 469.30).

4. although the apology does seem to assume the presiding minister is the one who distributes, this does not necessarily align with today's practice as articulated in the use of the means of grace, principle 41 (evangelical lutheran church in america, 1997, p. 46).

kyle kenneth schiefelbein-guerrero (https://orcid.org/0000-0001-9285-6161)

spadaro, a. (2014). cybertheology: thinking christianity in the era of the internet (m. way, trans.). new york: fordham university.
thompson, d. (2016). the virtual body of christ in a suffering world. nashville, tn: abingdon press.
thompson, d. (2020, 26 march). …
lutherjahrbuch, 53, 7-25.
wandel, l. (2006). the eucharist in the reformation: incarnation and liturgy. cambridge: cambridge university.
witvliet, j. (2006). teaching worship as a christian practice: musings on practical theology and pedagogy in seminaries and church-related colleges. reformed journal. https://reformedjournal.com/teaching-worship-as-a-christian-practice-musing-on-practicaltheology-and-pedagogy-in-seminaries-and-church-related-colleges/

doi: 10.1111/dial.12555

should christians practice "virtual communion" in time of a plague?

perhaps surprising to liturgical christians, but surely surprising to the public at large, if they cared, is that during this coronavirus-covid-19 pandemic a parochial debate also has gone viral; well, viral within our subculture. this debate concerns whether holy communion is legitimate when done "virtually" over the internet. this debate is serious.
sometimes it has been viscerally reactive, claiming that some pastors just want to do their "own thing" or even that such centering on the eucharist implies fetishism. i am grateful to observe instead that the conversation has become more civil as the involuntary fasting from the lord's supper extends into many weeks. more often now "both" sides recognize that they argue from a common conviction: that we dearly cherish (rather than fetishize) the eucharist. still, i fear that a higher love has yet been missing from much of the conversation. i would prefer not to join the public conversation insofar as the conversation already seems premised on privileging doctrine over human wholeness. i do not like these terms on which the debate is set: "pro or con, care for holy things is more important than care for human lives." it is a hidden premise with a faulty disjunction. it forgets that lutheran doctrine has always carried within itself the quality of a quatenus: we hold and must hold certain things doctrinally high insofar as they convey the promise of the gospel, and the gospel itself holds highest god's loving intention for the wholeness of human being, what the gospel of john thematizes as abundant life. so i join with a different premise. god's intention that the gospel be proclaimed in word and act to bless human beings with wholeness of life now and forever is the point. this requires that we are freed from preoccupation with ourselves so as to serve others in the same love with which god embraces us all. luther, of course, as in his treatise on why christians should not flee in a time of plague, reminds us that self-care is required for other-care.
that treatise defined in stark relief the ultimacy of the office of ministry's call to loosen and break the bonds of despair and anxiety as zealously as we can, as loving service in itself and as reinforcing god's algorithmic formula in our temporal terms for the health/salvation (salus) of all; you know, "god's work, our hands." so much for prolegomena. but what about the weighty doctrinal loci that we also sincerely do hold dear (me included, if that is not clear to some) and that bear on the presenting question? these include justification, the church, the sacraments, and the office of the ministry. these are the first and primary steps in the augsburg confession and, indeed, display a particular logic or trajectory we sometimes fail to see. further inputs for our thinking include some basic anthropology and some beyond-basic metaphysics. many on both sides have written fine constructive theology on the matter in more general and popular terms. i will explore these lutheran premises with an eye for those whose questions are more dogmatically impelled. i will conclude with a coda in praise of the mystical body of christ, our appreciation of which is regrettably understated, if at all extant. no mere editorial (however longish) such as this can explore these loci fully. but i hope i can add some nuance to a mutually respectful dissensus. i have long sloganized that the vocation of the church is "the objectification of justification." human beings are fickle folk. emotional dis-ease routinely subsumes rational equanimity. anxiety is a chronic condition. when stressors of many kinds set us off, we can be locked into moral and spiritual trauma. all the dis-ease (and diss-ease) is caused in some way by sin, ours or another's. ptsd, moral injury, and spiritual trauma are contemporary names for the manifestation of the ancient general category of sin. depression and despair collude.
one's whole physical and emotional and moral being is inhabited by these demons and only an-other can evict them. in other words, we are captive to our subjectivity and only an objective other can save us. concomitantly, only an-other carrying an-other's word in-with-under other objects objectively brings the saving grace word spoken and acted to us. the primal human need for a good word from outside oneself sets the initial logical ordering of the augsburg confession (ca). it makes sense, of course, for the first article to be about god. we start with the ultimate, however abstract the concept may be. then there is history, that is, sin and alienation. with article ii, in other words, the topic is more empirical, concrete, "objective." not to belabor the point, but the confession gets ever more "objective." jesus is our real and accessible rescue from alienation (iii). justification is the lutheran grammar to speak that (iv). the church (v), then, is the historical and empirically objective "paying forward" of justification as the very body of christ that evokes the life of new obedience (vi) and "is" church only as and when it proclaims the word and administers the sacraments (vii). the trajectory in this ordering underscores that any rescue of humanity from its alienation from god and self must come from a historical palpable "other." oswald bayer (a lutheran's lutheran) states the same. the augsburg confession and the smalcald articles insist that justifying faith "comes in the promise of the gospel and the alien righteousness (iustitia aliena) of christ that comes with it only in this manner must always receive proper emphasis." the gospel is never one's private possession. in other words, subjectivity cannot have the day (martin luther's theology, a contemporary interpretation, 2008, p. 252). external markers constitute the sacramental event. 
in the lord's supper those are (a) "the social and concurrently natural-cultural moment" of shared eating and drinking; (b) the actualization of a "definitive communal relationship between god and humanity" taking place within a physical assembly; (c) convened by and "through the performative word that has been addressed" to the assembly through bread and wine; (d) the whole action of which is empowered by the presence of the resurrected crucified jesus (bayer, pp. 252-253). then comes the perhaps surprising remark: the public external character of this word event and the private freedom of the individual "are correlates and empower and support one another." religion is surely not a private affair. but neither is it a form of heteronomy. alien righteousness must assume priority, but neither is it coercive (bayer, p. 253). hold high the objective otherness, the alien work of god. yet do not dismiss the integrity of the communicant's trust and desire. do it all with and within the objectivity of what is physical. there are two points in this fine summary that bear especially on our subject. the first concerns the relationship of physicality, communication, and location. as much as we emphasize the priority of the objectified grace of god in the elements of water, bread, and wine, it is puzzling to suppose nevertheless that the finite object that conveys grace (finitum capax infiniti) is bounded. bounded by what? well, it has been said forcefully in this debate on "virtual" communion that it is only legitimate when one assembly gathers around one loaf. that assembly must be physical and gathered in one place. but other scenarios that challenge that point are very familiar to us. suppose that the assembly is physically present, but is numbered in the thousands or even tens of thousands, as with churchwide assemblies and youth gatherings.
there, of course, no one questions the subjectivity of adolescents gathered around and given the body and blood in the forms of hundreds of loaves and hundreds of cups. one loaf and one cup are lifted up at the center table and thousands of morsels and sips are consumed by de facto house churches without walls around the stadium, even behind walls and in other rooms by "overflow crowds." also, the distant eyes in the last row of the upper deck would not even be able to see the loaf and cup lifted were it not for the jumbo-tron screens hanging high over the arena. communication happens in multiple modes, most personally at distances of less than six feet. after that, from ear-aids to massive amplifiers, blue-toothed bridges, and towers and satellites convey the same grace in and with sound and light waves; the same intended original intimacy of god's voice in human voice to human ears still says "for you." why start and stop with one loaf and one very local assembly? of course we prefer that. in our subjectivity we prefer that. we respond more readily to the very familiar, to what has become intimate and intuitive. but infinity does not stop at walls; the real incarnate and divine christ shows up in emmaus, closed upper rooms, to roman military converts, under the floorboards with wwii american prisoners in the philippines (true story), and shares his incarnate divinity (communicatio idiomatum); wherever christ pleases. does the power of god, which for fickle human consciousness necessarily begins at the physical, end with the physical? well, yes, actually, but for comfort's sake christ does so even over great distances, metaphorically concomitant with entangled quantum particles. as bayer avers, the sacramental action begins with the physical but addresses (and redresses) subjectivity. the sacrament cannot begin with subjectivity. but enfleshed grace means to go to and through subjectivity so to move and change the receiver the more into christ. 
remember (re-member), christ counters zwingli's astral-projection direction and, as promised always, comes to us. the point of counter-precedents to the norm of one loaf in local assemblies in real time is very clear. what we already do proves agreeable exceptions to the norm. we have already practiced, enthusiastically so, virtual communion. and, of course, we have always found ways to commune those who are sight and hearing impaired. we do not insist on an "ableist" assembly. the objective character of the eucharist is never purely so. it cannot be, because communication is not like that. thank goodness, still, the accent on the "other" is still more than on the receiver. new mass communication software platforms do not change that accent. if we argue that they do, we reveal our bondage to an aristotelian metaphysical stipulation of eucharistic conditions that fogs our memory that christ's promises hold wherever he wants, including the space/time-relative and quantum qualities of postmodernity. perhaps the real error in this debate is not that we lack real communication and presence to each other when communion happens over longer distances, but that we have attached the word "virtual" to it. modalities have changed because metaphysical understandings have changed. we are not talking about donning headsets and entering into an alternate reality, as if being church is like going to the "feelies" predicted by huxley. we are talking about a real objectified message of forgiveness and liberation and re-union into a holy and incarnated comm-union that is just as real as the most localized assembly of two or three people in one space. since the stone was moved, we have always said this; we have always prayed this. our communion at the table happens with the saints and angels of all time and space. "with angels and archangels" we lift and consume bread and cup.
might we not be at that blinking point of awakened insight now of actually converging our metaphysics and communion practices with the mystical poetry we have sung for millennia? let us just say it. the eucharist has always been "virtual," and the communion of saints has been our infinitum capax finiti. we physical and fickle folk are embraced by a boundless communion that graces and feeds our return to the holy and beloved community. indeed, we physical folk are incarnated with christ no matter the visible or invisible walls. "virtual" in this frame does not mean fake. nor does it mean "spiritualist." the subjectivity of human perception does not and is commanded not to place bounds on the precise objects with and through which god gets to us. "virtual" here at least connotes the extension and incarnation of christ's physical body and blood beyond artificial boundaries of time and space, as the risen christ first did with walls of stucco and wood. the ubiquitous christ will be the incarnate christ and vice versa. one other predicate of "objectification" requires attention, at least for now. it is the function of the eucharistic presider, and so bears on the office of ministry. the reformers were clear that all the baptized are of the priesthood of all believers, and that the baptized are thus also servants in the spiritual estate, not only the temporal. but ca v is not thereby collapsed into ca xiv. it is for the sake of good order that the pastoral office is distinct from the service of all the baptized. what all can do by virtue of their baptism cannot be done by all at the same time. the result would be chaos. so lw 44:128: "because we are all priests of equal standing, no one must push himself forward and take it upon himself, without our consent and election, to do that for which we all have equal authority. for no one dare take upon himself what is common to all without the authority and consent of the community."
and so it is that a pastor is one who is rightly called (rite vocatus) by his/her community of faith. the call and the command make one a pastor of the divine word (lw 13:65). this call comes from the faith community and is for the good order of the faith community (1 corinthians 14:40). there are vital presuppositions in the lutheran understanding of the pastoral office. "vital," remember, has to do both with being "central" and with being "life-giving" (vita). we have reviewed already the necessity that the gospel come from outside the receiver's subjectivity. someone, an-other, a selected and called one, must proclaim the gospel in its purity and see to it that the sacraments are ministered rightly. that someone, the pastor, attends not only to the proclamation and the giving. he or she attends to the subjectivity of the receivers too. pastors know their people. knowing their people implies a reciprocal relationship of trust between pastor and people. the shaping of sermons and, i submit, the contextual understanding and framing of a sacramental occasion require such mutual trust. the communicator of alien righteousness, in other words, is herself not at all alien to the receiver. good order does not imply that the pastor "does it all," however. pastoral "control" of a whole parish life, including the manner of its sacramental worship, does not belong to this conception of ministry. one wonders whether a pastor does not exhibit a privileged clericalism when the self-control volume level goes past 11, as if one held a personal sense of ontological difference between the pastoral office and the rest of the baptized priesthood. the life of a congregation managed by such a relationship may happen on time, concordant with a stiff upper lip of reine lehre. but that is not necessarily good order.
good order resonates in a faith community when people are known for what they can do well for each other, including tasks of prayer and lay eucharistic ministry, even lay preaching, and maybe even, on rare occasion, lay sisters and brothers communing each other under the express permission and direction of the pastor. characterizing all of such a congregation's life is a living trust, a deep and respectful loving relationship that shapes the worship community and precedes its gathering. this is why "online communion" can be understood to be as real as a more "concrete" local assembly. the figure on the video monitor is known and trusted. the gospel proclaimed and the word acted in the words of institution, to be sure, are effective no matter the trust level (otherwise there is that donatism matter). but the christian community, already in relationship and rightly ordered, completes the circuit however dispersed in space and time the already palpably related assembly is. let me be clear. it is always "better than good order" to commune together as one particular assembly within the shining affinity and infinity of all the saints and angels. that is the normative good for our subjectivities. but in a time of exigency when a fast from the eucharist is involuntary, it is not pastorally caring, after a surfeit of heteronomously imposed fasting days, to tell the flock to "remember what you ate" as if that is the same as the call to "remember your baptism." we need the manna. god means us to have food for the journey in the wilderness. and when the counsel in such days says that prayer and meditation and listening to the word is "just as effective anyway," does that logic not undercut the very reasons the same counselors once argued for regular celebration of holy communion? does it not in itself betray a favored "spiritualism," if not even a closeted gnosticism?
yes, god comes to us in many ways, and can be seen to do so in many places, but only after christ is revealed to us in the indissoluble nexus of word and sacrament (so wrote luther when writing on the pun of "crystal," christall, christ-in-all). there comes the time in an exigency when god's people, threatened deeply in our subjectivity during just such times, gotta eat. ignatius of antioch's apt synonym, "the medicine of immortality," is meant from faith for faith in such days. much more could and should be said, but it is not necessary here. the evangelical effect of "virtual" communication (though not yet virtual communion) has been so very consequential and beautiful in the life of the congregation i am called to serve. great stories can and will be told. "virtual communion" is a responsible step in in extremis times for the encouragement and continued formation of the faithful, individually and together. i do not mean this as "normative," as if this should regularly replace the side-by-side body language of the local worship assembly. i intend this argument as the exception that proves the rule. it is an interim measure that in and by the holy spirit's power will console and move god's people from the "inside out" further in the way of trust and loving service to this dis-eased world. and when the spirit brings us as a local assembly more palpably back together around the font and table, we will be the more grateful that we were re-membered as christ's body even as we were too long apart.

university of houston
doi: 10.1111/dial.12545

"the voice of one crying out in the wilderness" 1

there is something strange going on in our weather system. for the past 2 months our island, located in the northern part of the atlantic, has been literally closed down more than a dozen times.
this means that there have been no flights, international or domestic, roads have been closed (either in parts of the country or the whole country), schools have been closed, and electricity has been out in certain areas, for hours up to days, all because of the weather. this is a huge concern to all of us who live here in iceland, and even as i write this, we are in the midst of one of these events. there is really nothing "normal" about it, and questions about its relationship to a changing climate are compelling. but it is too soon to draw any conclusions. patterns have to have time to develop. at the same time, our glaciers are melting, right in front of our eyes, because of the increase in temperature, which is also warming up the sea around the island, causing big changes, and real threats, to our fishing practices, as some fish species leave, seeking cooler waters elsewhere, while new ones arrive. it takes time for the fishing industry to adapt, and the uncertainty is challenging for people, especially in the small fishing towns around the country, to say nothing of our whole economy. 2 like everywhere else, icelanders have been slow to wake up to the seriousness of a warming climate, but gradually people are realizing that life cannot go on as usual any longer. the swedish teenage girl, greta thunberg, who started a school strike for the climate in august of 2018, has made a huge impact in our country and elsewhere by directing people's attention to the alarming reports scientists have been writing for years about the serious impact of global warming. because of greta, young people in iceland are starting their own school strike for the climate, and by doing that they have put much-needed pressure on our government to act according to their commitment to the paris agreement of december 2015.
it has been breathtaking to watch what has happened since the greta thunberg school strike for the climate started, less than 2 years ago, outside of the swedish parliament. greta was only fifteen years old, and this was her own initiative. her parents supported her, although reluctantly to begin with, because they worried about her health and how the publicity would affect her. her aim was to remind swedish politicians of the climate crisis and their responsibility to react to the crisis, three weeks before the fall 2018 election. after the election greta decided she would continue her strike until the day the swedish government had fulfilled its promises to meet the conditions of the agreement reached in paris, and reiterated at other climate conferences. so her strike goes on, but she is certainly no longer by herself. 3 what started as a one-person act has gradually developed into a world-wide movement, which, it is safe to say, has made a greater impact than any other climate initiative. by speaking in clear terms, and making radical decisions, like refusing to fly, greta has managed, at her young age, to bring people all over the world out to the streets, demanding responsible action from those in charge, as well as from individuals who contribute to the climate crisis by their daily behavior. there is something about her either/or rhetoric that makes people pay attention. she herself has said that the reason why she tends to see things as black and white is that she has asperger's syndrome. she also has told the story of her childhood, and how she became severely depressed after hearing about climate change at an early age and realizing that people were not doing anything about it. after suffering for years from eating disorders and selective mutism, she was able to overcome her life-threatening condition, and start to eat and talk again, by speaking up and actively fighting for a responsible reaction to the climate crisis.
it is clear that for greta this is about life and death, and that is the message she wants to convey. during the past year and a half, greta has been invited to speak at numerous rallies, as well as exclusive meetings such as the european parliament, the houses of parliament in london, the united states congress, and the united nations. true to her black and white worldview, greta insists that we have to stop our emission of greenhouse gases; "either we do that or we don't," has been her repeated message. 4 there is something profoundly prophetic about her "clear text" rhetoric. it is not simply about actions but also about a change of heart, and mind. speaking to the european economic and social committee in brussels, in february 2019, greta challenged her audience to do their homework, because "once you have done your homework," she insisted, "you realize that we need new politics, we need new economics where everything is based on a rapidly declining and extremely limited remaining carbon budget." but, to greta, "that is not enough." what is needed is "a whole new way of thinking." instead of political systems based on competition, we need to cooperate and work together and to share the resources of the planet in a fair way. we need to start living within the planetary boundaries, focus on equity and take a few steps back for the sake of all living species. we need to protect the biosphere, the air, the oceans, the soil, the forests. 5 for greta there is no compromise, no "lukewarm, and neither cold nor hot" way of thinking (rev. 3:16); you are either for or against, either willing to save the planet, and our future, or not. the burning house is a compelling metaphor, painted in strong colors. "our house is on fire. i am here to say, our house is on fire," greta said in her address to the world economic forum in davos in january 2019.
she concluded her speech with this powerful, no-beating-around-the-bush message: we must change almost everything in our current societies. there is something not right when our kids and teenagers are missing school in order to protest and fight for the future of our planet, our common home, and the future of all of us who live here now, as well as future generations. there is something strange going on, and the young people are getting it. once again civil disobedience is proving to be an important tool against unjust systems, which protect the few and do not care for the rest. this is what climate justice is all about. it reminds us that those who have contributed the least to the climate crisis are suffering the most; the poor, women, and children in the global south. those of us who belong to the privileged part of the world need to start thinking globally; we need to look at the bigger picture. we cannot continue to think just about us, and our economy. greta thunberg argues that what it all boils down to is the choice between money and the environment. there are multiple ways we can respond responsibly to the current crisis we are faced with, not only highly technical and financially costly solutions. a book called drawdown: the most comprehensive plan ever proposed to reverse global warming (2017) lists, for example, education of girls as the sixth most important solution, and family planning as number seven, right after refrigeration, wind turbines, reduced food-waste, plant-rich diet and tropical forests; and before solar farms, and rooftop solar. 7 it is no surprise that people who are paying close attention to the discourse about the climate crisis are worried about the future. eco-anxiety is a growing concern, especially among the youth. melting glaciers, higher temperatures, and severe storms are among the signs of climate change that are raising the awareness, and even anxiety, of the people in iceland.
it is important that people realize that something can still be done. greta thunberg has warned all of us that talk about hope can indeed keep us away from action. at the un climate change conference in katowice, poland, in december 2018, greta gave a powerful talk in front of world leaders, climate scientists, and other participants. she concluded her talk with these words: until you start focusing on what needs to be done rather than what is politically possible, there's no hope. we cannot solve a crisis without treating it as a crisis. we need to keep the fossil fuels in the ground and we need to focus on equity. and if solutions within this system are so impossible to find then maybe we should change the system itself? we have not come here to beg world leaders to care. you have ignored us in the past and you will ignore us again. you've run out of excuses and we're running out of time. we've come here to let you know that change is coming whether you like it or not. the real power belongs to the people. 8 a prophetic voice; a challenging, encouraging, and compelling voice. but will she be able to move us into action? only time will tell. trauma, eco-spirituality, and transformation in frozen 2: guides for the church and climate change i recently became captivated by the film frozen 2. i was in florida for a psychotherapy professional training and one night decided to take myself on a date. nothing fancy; i was intentionally looking for something not too thought provoking or activating-just dinner and a movie. little did i anticipate how disney's new animated film would capture my imagination, heart, and theological intrigue.
while there is enough material in my thoughts and consciousness to fill out a book (keep your eyes out for one in the future-the proposal is already in the works), i wanted to share a few reflections on how disney's frozen 2 can provide a lens for trauma, transformation, and the essential call for our faith communities to step more fully into an eco-spirituality as a means of fully incarnated repair. warning: spoilers ahead! first things first. "trauma," as i am using it, refers to any experience that overwhelms our capacity to respond to the challenges in our environment and results in either an over-constriction or an over-expansion. trauma is less about the event or experience itself and more about the ways in which it impacts us as individuals, communities, or global ecology. when faced with a significant threat that overwhelms our capacity for resiliency, we are at risk of developing symptoms of traumatic response. in its simplest form, traumatic responses cause us to be smaller or less than we truly are in an effort to protect ourselves from further wounding. we either shrink to escape further blows or we build and reside behind walls to project a larger image. trauma and transformation are the beating heart of frozen 2. as olaf wisely queries, "did you know that an enchanted forest is a place of transformation?" just as trauma entrenches us in protective patterns, transformation calls from the beyond, into the unknown, and into the promise of authentic flow. transformation often requires us to enter into liminal places, the spaces betwixt and between, where our familiar habits are tested. these spaces, either geographical or relational, disrupt our habits of constriction or fleeing protection and generate opportunities and wiggle room for the new. they require courage and offer hope for connection, fullness, and completion. the heartbeat pulsing through the film begins in the opening scene in which elsa and anna play with snow toys.
anna explores the narrative that love, in this instance between a distressed damsel and a "fancy" prince, will save the day. elsa, meanwhile, weaves a story of trapped fairies and "the fairy princess who breaks the spell and saves everyone." their play prompts their father to tell a story of a real enchanted forest and how he became king. his story paints a picture of colonialism and subsequent acts of violence that rend the connection among the elemental spirits, the northuldra people (based on the sami people), and arendelle, and begin a cascade that separates anna from elsa, and their family from their community. while initially told from the perspective of king agnarr, the driving quest of the film is to discover the truth, brave the trauma of the truth, and make amends or reparation, thus breaking the spell. at the center of the tale is elsa's quest to follow the lure of the voice that calls her into the unknown and toward the source that holds memory and truth. along the way elsa must show her power to befriend the elemental spirits and witness the pieces of truth they hold. from the wind, she sees her parents as children and meets the northuldra people. the fire spirit shows her that she is not alone in hearing the call. the water spirit challenges her to recognize the limits of her power and to depend on another to go the distance. the earth giants, through the prompting of anna, break the wall that is the origin and symbol of violence and mistrust. it is only through befriending and partnership with the elemental spirits that elsa finds her way home to who she fully is and anna steps into her power. the origins of trauma and separation in frozen 2 are located in the deceptive "gift" to build the dam and stop the flow of water and connection, thus weakening the elemental spirits and leading to violence.
transformation occurs by venturing into the unknown, befriending the elemental spirits and indigenous communities, courageously witnessing the source violence of trauma, and taking concrete actions to break the spell and restore resiliency and vitality. so, what wisdom can the church glean from frozen 2? first, in the midst of our ecological global crisis, we must find the courage to venture into the unknown. what are more sustainable practices? how do we speak with confidence about the limits of our solidified patterns and hope of restored connection? second, we need to find the fortitude to witness the ways in which we have enacted violence against one another, the earth, and non-human beings and the conviction to change, dismantle the dams we have built to enhance our power while limiting the magic of the natural world, and make reparations. humanity's chronic history of violence has profound implications for planetary health and eco-diversity. as we move forward in this critical period of ecological viability, who will we show ourselves to be? will we extend our awareness of the creation narrative in genesis 2 and live into a renewed confession that yhwh created the earth and all of her creatures and they are good? as the fires in california and, more acutely, australia, have made clear, the loss of animal life as a consequence of our unchecked impact on climate is devastating. will we protect the children of the earth from our unfettered goblin of destruction or will we break the spell and save everyone? as communities of faith, we have an opportunity to mend the traumatic wounds of colonial violence, humbly seek forgiveness and understanding for our histories of collective violence toward indigenous peoples, and offer reparations (in whatever form is appropriate) to our intra-and interspecies siblings. 
we must find the courage to befriend those who frighten us in their efforts to protect themselves from our histories of violence and to join with them to heal the traumas that threaten to freeze and drown. transformation is formed through the courage to venture into the unknown, the willingness to listen, witness, and befriend, the moral fortitude to break down the walls that were erected in fear, and connect ever more fully to the elemental spirits of the planet and, through those connections, to who we are meant to be. we are the ones we have been waiting for. can we fully step into our power and show ourselves to be a people made in the image of the divine? grounding flight wellness center doi: 10.1111/dial.12553 from the redwood forests and the cedars of lebanon to the tree of good and evil in the garden of eden, far back into the groundswells of the archaic human imagination, the experience of "treehood" (paul tillich) has claimed human hearts and minds all around this good earth for countless generations. 1 i myself, in my own mundane way, have been captivated by existential encounters with trees, real or imagined, ever since i can remember. but much as i have self-consciously and enthusiastically lived with, thought about, and contemplated trees my whole life, i have never explored that experience itself. i want to make a start at doing that here, with the hope that this might prompt others, particularly members of american christian communities, to go and do likewise, in fresh ways. 2 the first tree i ever fell in love with was a lombardy poplar. i grew up in an exurban setting, near buffalo, new york. one side of the family land was lined with these tall, cylinder-shaped trees, which had already grown to full height, perhaps 60 feet tall, when i was a child. usually without my parents knowing, i would on occasion climb up one of those trees as high as i dared. the branches were fragile, but, for a slim 11-year-old, that climb was safe, or so i thought back then.
on those ascents, i often imagined myself to be a kind of heroic adventurer. i would station myself maybe 40 feet above the ground for a spell, as i surveyed our house below and the fields beyond, and felt the wind bending the tree and brushing my face. it was a boy's dream. for those moments, i lived ecstatically, in another world, thanks to that poplar tree. in retrospect, i can imagine that those tree-climbing adventures must have had an important psychological function for me. i was an unhappy child at times, a condition that i only began to understand some years later when i was in therapy during my college years. high up in one of those trees, i suppose that i was able to leave those familial tensions behind, if only for a short time. in therapy, i came to understand that, among other family dynamics, i had had a conflicted relationship with my father. he was a kind and caring man, but i began to realize that he was also distant at some deeper level. enter the world of trees. perhaps thanks to his german heritage-germans typically cherished their parks, perhaps more than other ethnic groups-my father loved trees. one of his uncles, who was also of german descent, had a top position in the buffalo parks department in the late nineteenth century. that uncle oversaw the implementation of a plan to plant what turned out to be many thousands of sweepingly gracious elm trees, along both sides of many of the city's parkways. in those days, long before the onset of dutch elm disease, buffalo was elm city without the name. that history behind him, my father often found times to take his mind off his busy professional life-he was a dentist-by planting and caring for trees all around our sizeable property. and he often enlisted me to work with him, which was always a joy for me. those were some of the times when i truly felt close to him and when, i believe, he truly felt close to me. 
adventurous joy with those poplar-climbings and warm personal bonding with those tree-plantings and that tree-care with my father-those were some of the deeper experiences of my younger years which i came to cherish as i grew into adulthood. also, during my high school years, my family had the means to travel to many of the nation's great national parks during extended summer vacations. under my father's tutelage on those trips, i came to affectionately know many trees, the majestic redwoods of california, for example, or the effervescent quaking aspens of utah. during the years of my doctoral studies, i found a way to read every volume of the collected works of john muir, even though those works were obviously not immediately germane for my chosen field of academic research, twentieth century german theology. john muir then led me to the much more famous henry david thoreau. i think, in retrospect, that i read muir first, and thoroughly, because he was so deeply imbued with calvin's theology, whether he fully understood that or not, and since, by that time, i had immersed myself in calvin's thought, along with luther's, both of whom, i came to believe and then subsequently to argue, were dedicated champions of the goodness of creation and the glories and the mysteries of the natural world, in particular. 3 i read muir and thoreau, ironically perhaps, at the same time that i was working on my doctoral dissertation on the great karl barth's-highly problematical-theology of nature. 4 barth's theology as a whole, seminal indeed as it was, never helped me to understand, much less to affirm, my longstanding love for trees. muir's and then thoreau's encounters with trees did. the result was a theological proposal, on my part, for a new way to understand my love-or anyone's love-for trees. 
barth had adopted what was, at the time, a more or less conventional theological way to understand human relationships with other creatures, a theme developed by many thinkers in his era, but which was most often associated with the name of the jewish philosopher, martin buber, and his book, i and thou. 5 buber contrasted an i-thou relation, which he thought of in intimate, personalistic terms, with an i-it relation, which he defined as an objectifying relationship between a person and a thing. so, when someone says to his or her partner, authentically, "i love you," that is an i-thou relationship. when he or she picks up a hammer and hits a nail, that is an i-it relationship. buber and others-among them, barth-who gave this way of thinking currency, were eager to protect and then to celebrate the authenticity of genuine human relationships and to reject any kind of objectifying relationships between humans and other humans. humans should always be regarded as ends-in-themselves, according to this way of thinking, and should never be treated as objects to be manipulated. what, then, about my relationship with trees? the i-thou, i-it way of thinking does not account for my love of trees. trees are not persons. you cannot communicate with a tree the way you can communicate with your spouse, as a thou. are all trees, therefore, in truth mere objects? was that lombardy poplar which i adored when i was 11 years old merely an object i used, like a ladder, to climb up into the sky? or was it, in truth, a creature in its own right, worthy of my respect, even adulation? wasn't it the case that i not only clung to that tree, forty feet above ground, for safety's sake, but also to embrace it? that tree, for me, back then was no mere object. it was something else. but what? buber recognized this problem in an appendix to the second edition of i and thou.
he even imagined a relationship to a tree that is somehow akin to an i-thou relationship, but he self-consciously chose not to try to think that through. i decided that i myself would give it a try. in my first scholarly article, i argued that a revision of buber's thought was required. hence my title: "i-thou, i-it, and i-ens." 6 i wanted to be able to talk about the trees that i loved as ends-in-themselves, no longer as mere objects. in that article, to illustrate i-ens relationships, i drew attention not only to the praxis of thinkers like thoreau and muir with regard to nature, but also to luther's and calvin's visions of earthly creatures. both reformers, like thoreau and muir, portrayed those creatures in non-objectifying terms and indeed celebrated those creatures as ends in themselves, as, in some sense, charged with the mystery of god. luther saw miracles in nature everywhere and stood in awe of them. calvin considered the whole of nature to be a theater of divine glory and celebrated that glory enthusiastically. in ensuing publications, i employed the constructs of i-thou, i-it, and i-ens as a kind of silent interpretive key to open up the whole sweep of classical christian theology in a new way. i argued that -notwithstanding lynn white jr.'s then widely hailed critique of the christian tradition as ecologically bankrupt, alleging that christians have almost always treated nature as a mere object, something to be manipulated-we can trace a major christian tradition that richly affirmed the natural world in its own right. that way of thinking i could have called the ens-tradition. in retrospect, i think that my reflections about buber's way of thinking and my historical investigations were existentially dependent on my early encounters with treehood. likewise for my conversion to environmental activism, along the way. 
that happened, emphatically, after i first began to work my way through books like rachel carson's silent spring and stewart udall's the quiet crisis in the early 1960s. 7 it was natural, as it were, for me in those days, and subsequently, not only to love trees in their own right, but also to do all that i could do to protect them, along with the whole world of god's earthly creatures. but my life with trees by no means came to expression just in youthful encounters or in mid-life scholarly writings or even in longstanding commitments to environmental or ecojustice activism. 8 i also have been blessed throughout my life by rich encounters with a range of particular trees. this story has unfolded in several locations, but i want to mention only one here, the old farmhouse at hunts corner, in southwestern maine, which has been a home away from home for me and my family for more than forty years. at hunts corner, notwithstanding the human incursions here and there and the ominous pipeline in particular, i have developed cordial relationships over the years with many of the trees on our land, i-ens relationships as i think about them. i have learned to call many of those trees by name and sometimes greet them, when no other humans are around. our plot was in all likelihood farmland 150 years ago. the west side of our land is marked by one of those famous stone walls that defined the farm fields in historic new england. the oldest trees tend to be near that wall or to be growing from an adjacent, steep and stony incline, which never could have been farmed. one mother oak, in particular, has fascinated me ever since i first noticed it. it is enormous. i cannot put my arms even halfway around its mammoth base. the poor tree has been hammered and seared over its long lifetime by the elements. the top of its central trunk was apparently sheared off, perhaps decades ago. but the tree has lived on.
near that mother oak grow a number of smaller, but nevertheless sizeable descendants. i once walked through that area with a neighbor and he eagerly explained to me that i could make a lot of money if i were to have those oaks cut down for commercial sale. grand old towering mother white pines also grow in that area and elsewhere on our land. my brother, gary, and i once cut down one of those giants after it had died, this, for safety reasons. i did not want it to fall on anyone, particularly on my grandchildren, who sometimes had ventured out near that tree, at the edge of the forest. treehood should not be romanticized. a tearful older father once told me, in a long, quiet conversation, how he had lost his daughter to a tree, in the prime of her life. this was the story that onlookers reported. his daughter and her two toddlers had been picnicking in a park. on their way home, she was watching them run on playfully ahead of her. at one point, she saw a large tree falling down on to the children. she ran desperately to push them out of the way, which she did. but she herself was killed. that story was in my mind, as were my own grandchildren, all the time my brother and i were working to take down that immense, but dead pine tree at the edge of our forest. huge it was. gary and i barely had the strength together to roll pieces from that tree's trunk into the woods to their final resting places. early on in my family's tenure at hunts corner, i began to carve out paths in the back forest, where that mother oak and a number of the great white pines live and where american beeches are now moving in. closer to our house, i have planted a variety of individual trees over the years or occasionally cut away competitors, in order to allow some extant trees to flourish. perhaps the most striking of all the tree planting that laurel and i have done over the years was the operation that she and i once performed on what was, for us at the time, a nameless sapling. 
it was march, early on in our experience with the world of rural maine. what we did was sheer, youthful folly. laurel had decided at that time, that, come the next spring, we would turn over a plot just back of our house, where she would begin to create a perennial garden. but there stood that large sapling right in the middle of that space! without much thought, we decided that we would try to move that tree, right then. the ground was frozen, of course. i had to use an ax to cut out the ball of the roots. once cut free, we could barely drag that ball out of its earthen socket. now what? we decided to roll it maybe forty yards to the western side of our land. there, using the ax again, and a pick-ax, i hollowed out a cavity for that big, frozen root ball. finally, we were able to slide that sapling and the mass of its frozen roots into that hole. it was only then that it dawned on us that we had planted that tree close to the church next door, a pristine, white, wooden building, which easily could have appeared on some new england calendar cover. but that was that. never mind that sapling. the church building appeared to be as picturesque as ever. we hurried on into the house to warm ourselves by the franklin stove. little did we know back then that that nameless sapling, more than 40 years later, would magically turn into a graceful and fulsome red maple whose sumptuous branches would then completely cover our vista of the whole church building! that iconic structure is gone from our angle of vision for much of the year. there may be a parable hidden in this ironic tale, but, if so, i have yet to discover what it is. sadly, the sugar maple i planted at the front of our property many years ago recently died. it was painful for me to observe that large and lovely tree die over the course of several seasons and then to witness it standing there, barren, a skeleton, all by itself. 
true, stories like these sometimes have a blessed ending, according to one of the central themes of the christian faith, from death comes life. over the many years that we have lived at hunts corner, mostly from the early spring through the late fall, we have used our old iron stove in the kitchen steadily, sometimes even on cool summer nights. and we obtain fuel for those fires almost always from standing dead-wood, which we cut down at various places on our land and then drag in, cut up, split, and stack. that was to be the story of that dead sugar maple. we would give thanks for it one more time, so i thought, as it would later warm both our kitchen and our hearts. but that dead sugar maple's transition to firewood was not as smooth as i had anticipated. that project turned out to be an adventure. for many years, my brother and i have helped each other with forest and other chores at our respective rural homes, his in western connecticut. he learned to love trees the same way i did, working with our father on the grounds of our exurban buffalo home. after various childhood and adolescent skirmishes, some of them harsh, gary and i have remained close over the years and have grown even closer in these our golden years, especially by assisting each other outdoors either in connecticut or maine, for days at a time. that towering dead sugar maple had to be cut so that it would fall away from the street, not on to the street, where it might block or even hit some speeding car that was passing by. with some anxiety, i admit, i nevertheless trusted gary to cut that tree just so that it would fall precisely where it was supposed to. i had witnessed gary "place" (his term) falling trees in just the right locations many times. when this tree began to undulate, however, it did not immediately fall away from the street as gary had cut it to fall. the tree just stood there trembling, not falling in any direction! what was going to happen? with some sense of urgency (!) 
and with a long rope tying him to that oh-so-perilously oscillating tree, as it was readying itself to fall in one direction or another, gary dashed to a spot far away from the street and then pulled on the rope, again and again, until the tree finally fell toward him (it crashed down a few feet to his left!) and not on to some unsuspecting car that might have been speeding up or down our road. quite a feat for one who was at that time about to turn eighty! as i am constantly aware, trees are not always our friends. but thankfully, in this case, gary was able to coax that tree in a friendly direction, narrowly escaping injuring himself or anyone else. i have saved for last what is for me the best news about treehood. some years ago, laurel and i purchased a then ten-foot-tall purple beech sapling, and planted it in our hidden garden. long before, we had come to adore the gigantic hundred-year-old purple beeches we had encountered in mt. auburn cemetery, near our massachusetts home. in this finite world, those great trees are, for me, the best natural symbols of eternal life that i can imagine. laurel and i have decided to have our ashes interred at the base of our own purple beech, which now rises high above us in the hidden garden. i have affixed a foot-high celtic cross, made of cement, at the base of our purple beech, which one day will not only mark the place of our buried ashes, but will also announce the truth, for those who have ears to hear, that has claimed my own soul self-consciously since the first days of my theological study to these my octogenarian years, predicated on a reading of colossians 1:15ff.: the crucified and risen lord is the cosmic christ, both now and forever-"…[a]ll things have been created through him and for him. he himself is before all things, and in him all things hold together" (col. 1:16f. nrsv).
i saw that cosmic christology in the figures and designs on the historic celtic crosses that i encountered during a trip to ireland with laurel in 1996, along with throngs of other spiritual seekers [9]. i concluded then that the classical celtic saints were by no means essentially nature mystics, as many who have been fascinated with them in our time have believed. no, their spirituality of nature was consistently an eschatological celebration of the cross and resurrection. for the great celtic saints, the love of the seas and the earth and its creatures and the love of jesus christ, crucified and risen from the dead, is the same love, now and forever. hence i was overjoyed when i found and then was able to buy that cement celtic cross at home depot for $14.98. i eagerly carried it off to implant it in the earth next to the purple beech in our hidden garden. i wanted to announce that someone believes-or that someone, whose ashes are interred there, once did believe-that that tree, marked by that cross, is-or was-for that believer the lignum vitae. i cannot imagine the story i am telling here ending otherwise, for i now realize that my world, from the days of my childhood on, always has been, is, and, i hope, always will be, the world of treehood.

notes:
2. treehood, rightly construed, has a justice dimension. think of the remarkable work of nobel laureate wangari maathai (d. 2011), who started the green belt movement in kenya, which has planted more than 30 million trees in africa, in order to fight erosion, to create firewood, to give work to poor women, and, generally, to reestablish the health of the whole earthly biotic community. wangari's work presupposed that trees have their own standing, that trees, essentially, are not first and foremost objects for capitalist exploitation, whether directly, through commercial development, or indirectly, through the destructions wrought by impoverished peoples. nor, in wangari's perspective, were trees essentially a means for the wealthy temporarily to escape from the contradictions of modern industrial society, under the rubric of "ecotourism."
7. carson, r. (1962). silent spring. new york: houghton mifflin; udall, s. (1963). the quiet crisis. minneapolis, mn: fortress, 2013, ch. 18.
9. for an account of my engagement with celtic spirituality, see santmire, h. p. (2000).

the church is a global network. it has a presence around the world that is almost unsurpassed by any other organization or movement. the church has contact with other religions at all levels, and cooperates with a wide range of humanitarian organizations. it is in dialogue with world leaders, not least via the un system. the church has a presence in many places around the world that are not readily accessible. during crises and disasters, the church is often there before they happen, while they are happening, and long after the immediate relief work has been phased out. this is an obligation in an era in which the world must learn to live with the climate crisis and its consequences. we know that those people who have contributed least to global warming are often those most severely affected by climate change. we know that social challenges such as poverty, migration, and the global health situation are directly linked to environmental and climate issues. there is a need for climate justice. the issue is how we humans interact with the natural environment, of which we are a part. we therefore have to take action based on what feels most meaningful in our lives. we must therefore talk about the sacrifices that we can make together, so that our children and the children of others can have a future. the climate crisis is exacerbated by lifestyles that make greed seem like a virtue. resolving it will be difficult for as long as people and nature are viewed only from the perspective of economics and technology.
only when we actually distinguish between our needs and our desires can we achieve fair and just climate goals. when will we learn to say, "enough is enough!"? what we think about and feel about nature really matters. is it a mechanism that simply keeps on rolling? an unlimited source of raw materials? our recreation area? our enemy? a place of endless harmony and balance? a system involving a constant battle for survival? how we relate to nature as creation reveals how we relate to the very basis of existence, which we call god. the churches in the east and the west have developed somewhat differing points of focus with regard to humankind and creation. put in simple terms, western tradition has developed a deep trust in rationality and science. this has contributed to a demystifying of nature and humankind's role in creation. its secrets were dissolved in measurability. humans came to understand themselves to be rulers of nature, rather than stewards who are responsible for and have to care for something that they do not actually own. the emphasis was put on humankind's function. theologians in the east have talked more about nature as a mystery that cannot be fully described, not even with the most excellent measuring instruments available in the world of science. nature meets us and shows itself to us, but never fully. as humans, we are part of this mystery. each human being is itself a miniature cosmos, a microcosm. here, the relationship is at the forefront. the western view has a tendency to see too little concreteness, and something romantic, in this approach. but the fact is that a full understanding of our role as human beings requires both perspectives: function and relationship, doing and being. it is a characteristic of being human that we can have an in-depth understanding of ourselves based on the relationships in which we are involved: to ourselves, to each other, to the entire creation, and to the ground of being itself.
we can also gain a deep understanding of our mission as human beings, our function: why are we actually here? as we face the climate crisis, we need to focus on rational action inspired by the best science available, while also needing to have an existential understanding of how and why we feel and act as we do. destroying biodiversity; wrecking forests and wetlands; poisoning water, soil, and air-all these are violations of our mission as human beings. theology calls it a sin. this sin arises from our inability to see the earth as our home, a sacrament of community. our natural environment unites all the people on earth with every living thing, in a way that transcends any differences in faiths and convictions that may exist between us humans. experiencing the beauty of nature means a lot to us. but we are also created for another type of beauty: that people have quality of life, live in harmony with nature, meet in peace and help each other. if we want to have an ecologically, socially, economically, and spiritually sustainable approach to the world-which we must have-individual or commercial solutions will never be sufficient. this is why spiritual maturity is now required. such maturity means being able to see the difference between what i want and what the world needs. it can understand that the climate crisis is rooted in human greed and selfishness. it can elevate us above fear, greed, and fundamentally unhealthy ties. if we want technological development, fair and just economic systems, ecological balance, and social cohesion to work together to create a sustainable future on our earth, we also need a conversion, a new state of mind. a renewal of our humanity (in the dual sense of the word). it is not sufficient for us to only address the symptoms if we really want healing and wholeness. 
like pope francis, we are of the opinion that we are in urgent need of a humanism that is able to bring together different areas of knowledge, including economics, to form a more integrated and integrating vision. science, politics, business, culture, and religion-everything that is an expression of humankind's dignity-need to work together to put our earthly home on a more stable footing. real stature among leaders and rulers of various kinds becomes apparent when, in difficult times, we can maintain high moral principles and focus on the long-term common good. in these days, the bishops of the church of sweden will be issuing a bishops' letter about the climate that highlights these issues in more detail. the climate deadline is coming ever closer. indecision and negligence are the language of death. we must choose life. give the earth the opportunity to heal, so that it can continue to provide for us and so that people can live in a world characterized by fairness, justice, and freedom.

archbishop antje jackelén

doi: 10.1111/dial.12557

thoughts while sheltering …

in the midst of a crisis it is easy to make statements that later seem unnecessarily alarmist. i do not think i am the kind of person who normally sounds alarmist (but who really thinks that they are alarmist?), but it is hard to imagine that covid-19 will not remake our lives in ways we never could have envisioned a few months ago. it is hard to imagine that our lives-collectively and individually-will not be forever changed. some thoughts: 1. i do not generally read god's wrath and judgement into current events and i am not prepared to do that now. that said, i find myself wondering if covid-19 will not change our lives in a manner similar to the tower of babel (genesis 11). this story is, among other things, an account of how a united humanity became divided. i wonder if covid-19 will not threaten to (further?) divide us. 2.
others have written and commented about the relationship between covid-19 and climate change. many of these people are much more knowledgeable and smarter than me. i plan to listen even harder to them. for much of my adult life, i have attempted to walk with a light environmental footprint (e.g., i have walked or ridden a bicycle to work for over 25 years). in the past year, my wife and i have doubled down on such practices and we walk with an even lighter environmental footprint. i have a feeling that others might be joining me in the future. 3. i wonder if covid-19 will not accelerate the already fast pace of secularization in western societies. the church-fairly or not-has been associated with the status quo, and thereby with irrelevance, by many people (especially younger people). with "physical distancing" forcing worshipping communities to meet virtually and disembodied, can they matter? what is life together if it is virtual and disembodied? i am not a luddite who wants to destroy technological tools. i am, after all, writing this because of the miracle of modern computer technology. however, i think that social media and virtual life supplement, not supplant, embodied life. 4. i wonder if covid-19 might not reverse-or at least stem the tide of-the pattern of secularization and irrelevancy of the church. the church mattered in the early middle ages because of its commitment to caring for the sick and vulnerable. i am thinking, for example, of gregory the great, who, while pope, used the wealth of the church to feed the hungry and care for the poor. is covid-19 such a moment for the church? 5. i think about hospitals and monasteries in the middle ages. they were beacons and refuges for christians fleeing plague and pestilence, and for christians caring for others who fled it. these hospitals and monasteries were outposts of civilization in wildernesses of savagery and barbarianism.
does the church need to reclaim that part of its history and heritage and make it more central to its mission and identity? 6. i think also of the babylonian exile. israel had to rethink what it meant to be faithful when it had no temple for people to worship in and bring their offerings to. what will it mean for us in the 21st century if we cannot gather together in the ways we have always gathered? i finish here with only six thoughts. god worked for six days and then rested. i will rest also with these six thoughts and observe a kind of sabbath. i will be thinking on god and the ways of god and who we are called to be. my seventh thought is a sabbath thought. it is a thought, such as it is, of worship, prayer, and contemplation.

david c. ratke

doi: 10.1111/dial.12558

the corona crisis unmasks prevailing social ideologies

the current covid-19 pandemic shows that dominant ideologies of our age-from individualism to social constructivism-fall short in meeting reality by disregarding the wider ecological community in which we are situated. human beings, for sure, play an increasing role in cultivating, shaping, and also destroying our shared world. maybe the corona virus experience teaches us to recover the importance of human communities as well as our place in ecological communities? it seems that neither individualism nor social constructivism stands the test of reality. if there is anything the corona crisis teaches us, it is that our lives are interconnected. there is no human being who only inhabits his or her own little world, and who is in charge. we are part of a great human community-for good and evil. we infect each other, yes, but we also live off each other's infectious smiles. what would our lives be like without close eye contact and bodily expressions of welcome? community is the first and most important part of our lives, and during quarantine we experience how much we miss the normal social interaction with each other.
in the meantime, we are thrown back on ourselves, or on those closest to us. there resides some truth in every ideology, otherwise it could not attract our attention, and be infectious. liberalism is the view that every citizen should have as much freedom to live as possible. most of us agree on this value across the political spectrum from left to right. yet the fact is that we are the blacksmiths not only of our own happiness, but also of our misfortune. the trouble is that we cannot know in advance. but more than that, we also share the misfortune of others. self-restraint is necessary precisely because it is a primary fact that my desire for freedom and movement can put others in bondage and immobility. since the 1950s, existentialism has been a very widespread ideology. it still is, under the guise of being against all ideology. since jean-paul sartre, existentialism has argued that you are what you do. it is your free decisions that give you the essence and character of your particular humanity. existentialism is a humanism, as sartre titled his 1946 lecture. true it is that we live every day with small choices about where to go, but the idea that we are "decision-makers" all day long is an extremely forced view. fortunately, most of us do as we usually do, and if we did not, others would not be able to count on us. a human being who constantly decides pro or con would be an incalculable human being, a constantly ticking bomb under enduring relationships. an existentialism without the "humanism of the other person" (levinas) is a monster beyond the possibility of attunement and self-correction. the ideology of existentialism lies in its individualism. fortunately, however, we live in communities where we, as resonating beings, constantly tune in to each other. hopefully, we also live the greater part of our lives in a pre-conscious stream of experience that precedes our small and large decisions.
otherwise, we would quickly become sleepless persons, incarcerated in a hyperactive consciousness, and eventually we would become insane. in short: it is not very often my decisions that determine who i am. rather, it is the sum of the resonance-and-dissonance experiences of my life that determines who i am and what i do. the community exists before the conscious self-awareness of the ego. alongside the over-spiritualized view of existentialism, we find another ideology, which sees a human being as the exclusive owner of a physiological body, curved in around itself. we could call it the skin-and-hair ideology. bodies, however, do not only include skin and hair, bones, and internal organs, for we live as socially and ecologically extended bodies. what we can learn from biology is that our bodies are in constant exchange between one's own body and everyone else's. humans are "holobionts," an organismic space for a variety of lifeforms. in discussing the nature-nurture problem, the controversy has been about how much genes (nature) determine us, and how much our society (nurture). but inside our body we do not only carry our specific human genome, but also a wider microbiome, made up of all the viruses, bacteria, and fungi that have entered our body from the outside into our nose, throat, ears, and not least gut-through food consumption, fluids, and the inhalation of air. overall, we should be grateful for the world of microbiota, for most microorganisms are symbiotic. without bacteria and viruses, we would curl up on the floor with abdominal pain, and we would never be able to "make decisions." overall, we need to be good friends with our bacteria and viruses. only a few are as harmful as covid-19, and here we naturally have to resort to countermeasures such as cleansing, prevention, and quarantine. as far as medical science goes, we do not have a cure at present. hence, the respirators will be running until the corona infection is over.
let me now address what i see as the most widespread ideology in our time, at least in the academy: social constructivism. this ideology has spread from sociology to psychology, politics, and pedagogy, and social constructivism has ended up being a quasi-orthodox consensus ideology within the humanities and substantial parts of theology as well. this movement's first epicenter was the book the social construction of reality, written by sociologists thomas luckmann and peter l. berger in 1966, followed up by berger's the sacred canopy in 1967, focusing on the religious construction of reality. being a circumspect scholar, berger soon after realized the weaknesses of social constructivism, but by then the ideology had already infected wider parts of the social sciences. its thesis was, and still is, that human societies construct reality through language and perception and through the maneuvers of rhetorical, political, and social engineering, and that human societies do so under only a minimal resistance from pre-linguistic reality, including nature. again, there is an aspect of truth in this idea. the ways in which we use language and discursively define the boundaries of society do indeed have impact on the public perception of reality. for example, the political responses to the corona crisis show the considerable impact of our political constructions of reality. state leaders are the ones who have capacity to define states of emergency, and in an exceptional situation governments act as quasi-sovereign powers that determine the social reality for the general population. it cannot be different, but political decisions differ from country to country. do we proceed as in south korea and taiwan (with large screenings of the population and subsequent quarantines)? do we do as in denmark (with an early and strict lockdown but initially without many corona tests)? or do we choose as in sweden (avoiding strong coercion but appealing to the population)?
by comparison, the overarching federal strategy in the united states still (march 29, 2020) seems to be oscillating. this being the case, social constructivism does not sit well with common-sense realism: it is the de facto spread of the covid-19 infection, and the subsequent fatalities, that will determine whether our political measures have worked or not. in an infectious world, politics combines a wait-and-see attitude with post hoc maneuvers. even the most powerful politicians cannot talk away covid-19. either you have it or you do not have it. either you infect others or you do not. either you will see an exponential spread or you will see a flat rising curve. thus, it is the spread of infections that tests politics, not the other way around. social and political constructs are not capable of defining reality. it seems that even the most clumpydumpy politicians are beginning to understand the reality test, after having tried to downplay covid-19 rhetorically. accordingly, what we need in the academy is a thorough revision of the prevailing ideologies of our age: individualism, social constructionism, discourse theory, etc. we need a biocultural and ecological paradigm shift within the social and human sciences, including theology. otherwise we see people of faith as individual faith decision-makers, and we overburden one another with overheated appeals to letting god come to our mind, as if we could conceptually enframe god. yet if god at all is, god is prior to our consciousness and our self-aware pious decisions. if god is, god is present to the child, and present to us when we are using our full energy and attention in solving a problem, when we are falling asleep, when we are aging and entering into states of dementia and no longer in conscious contact with god.
the faith of any individual (each in his or her individual manner) is rather about tuning into a deeper reality-a reality which is already there, as the prime and pervasive source of resonance, present in a divine personal form beyond my own little personhood. faith is about plugging in, moving into the prior reality of the divine self-communication, in words as well as beyond words. similarly, revising the assumptions of the skin-and-hair ideology, jesus is not a bygone entity, a "composite entity" of (a) a divine entity, (b) a skin-and-hair body, and (c) a particular lonely soul, as some analytical theologians redescribe chalcedonian christology in a so-called "compositional christology." for god was not incarnate in a man cave but conjoined the shared flesh of humanity, shared also with non-human creatures beyond the skin of jesus. by becoming incarnate in jesus and in his extended body (also called the reign of god), god is no less present in the compressed respirator tents than in the open sunlight and fresh air. god is radically being there, being there with others and being there for others, not least for us who are gasping for fresh air. now back to us who hope to survive and go on. what kind of a reality do we hope to wake up to after the corona crisis? i guess we are waking up to a deeper sense of how much we miss one another, after we have had to separate ourselves from each other. we are missing the ability to look each other in the eyes (not mediated through a screen), to shake hands and to hug. we are missing the deep meaning of having skin. i hope that we may rediscover that our community is prior to me as individual, and that the interests of others precede my considerations of myself. it seems to be obvious that the corona crisis has unmasked the castles in the air that we have erected in our ruling ideologies, not least within the academy.
individualism and social constructivism-both presuppose a remoteness of human existence from the world of which we are part. both tend to see individuals and communities as isolated islands, ceaselessly at work in imposing a human order onto a presumably blank world slate. yet nature is not a blank slate, but is full of multiple life and regenerative powers. moreover, nature is not just "out there" but also "in here." we carry nature deep within ourselves, and our entire existence and well-being depends on it. this should not come as a surprise to theologians who speak of god as the benevolent creator of all that is, in the fields and in work life, in our houses, and in ourselves. we need to be more than humanists in order to be truly humane. we can no longer pretend not to be deeply connected to circuits larger than ourselves, for we are at once symbolic creatures, living in cultures, and symbiotic creatures that benefit from the rich world of viruses, bacteria, and fungi. at the same time, however, we are also vulnerable beings. this has always been the case, but in a global world this has become even clearer, because we travel as much as we do, and live as close to each other as we do in the big cities. covid-19 does something to us before we do anything about it. every moment, awake or asleep, our immune system trains in capturing the viruses and bacteria that make us sick. let us hope that the self-generative powers of nature, endowed by god the creator, will be strong enough to handle covid-19 in most of us, until we some day can find a vaccine. in the meantime, let us look forward to being able to return to our beloved communities. "into the community" could conveniently become the new mantra after the corona era.

university of copenhagen

doi: 10.1111/dial.12559

the covid cross

pandemic. it is not a word that falls easily from the lips.
in a highly scientific and technological society it may strike one as a bit odd, like something from a more primitive past. that is the power of nature, and a sobering reminder that while we have come to control many things in it, nature still can transcend our power and understanding, even with fatal results. this pandemic has reminded us all too clearly how limited human power is. it also brings into clear focus how thin and vulnerable human society is when the whole world can be turned upside down in a matter of weeks. in such a world being ravaged by an "invisible enemy," where is one to turn? the fact that one cannot see it or easily trace it places in the heart a fear and anxiety not unfamiliar from the middle ages. the existential experience is the same. we are left with a feeling of vulnerability against an unknown power greater than ourselves and for which we as yet do not have any strong defenses. evolutionary biology crashes into human society. to "shelter in place" and "social distance" are pretty basic but limited responses, ones not unfamiliar from centuries ago. we have been driven back to the most elemental of human responses, isolation. where, then, is god in the midst of pandemic? here incarnation meets the deepest of human needs, affirming god's identification with and understanding of our suffering and anxiety on the covid cross. when there is no obvious ultimate cause or reason, perhaps the only possible source is god. but, if god, then why would a good god do such a thing? for divisive theological dualists the next step is natural: god must be mad at us for something we have done and is punishing us. since it cannot be our fault, the search is then on for a scapegoat, whether it be 'gays' with the aids crisis, new orleans' perceived licentiousness for hurricane katrina, or america's secularism for 9/11. for some today the source must be china, the lgbtq community, or environmentalists.
it is theodicy at its most brutal, and it must be challenged. a free creation and human greed combine to make an international disaster, not divine intervention. it is here that the cross confronts the covid-19 virus, not with platitudes or panaceas, nor with naming and blaming, but with the affirmation that god is with us. the first century world of jesus was a time of disease and death, such that much of jesus' ministry was spent in healing from disease and disabilities. it was not unfamiliar to him. such is the nature of enfleshment. if one takes enfleshment with all biological seriousness, as niels gregersen does in his concept of "deep incarnation" (see the cross of christ in an evolutionary world), we can understand that god identifies with human suffering at the most basic of biological levels. the suffering of the covid virus is not foreign to god, and therefore we are not left alone within it. it means that god is with us in all the biological suffering of an evolutionary world. while the source of the virus is not definitively confirmed, currently it is believed to have originated in bats (as a number of other coronaviruses have), which perhaps bit a pangolin (a sort of plated anteater), which, as an endangered species, was illegally captured and sold at an illegal wild animal market in wuhan, china. the source of the pandemic? human greed. to paraphrase winston churchill, "never have so few done such harm to so many." theologically we would call this a result of human sin. it requires human capacity to take something biologically derived and place it on the world market. had the pangolin been left alone, perhaps this would not have happened. at such a time of anxiety and isolation, there is a deep longing for hope, meaning, and perhaps forgiveness.
to understand the enfleshment of god as deep incarnation, connecting throughout all biological creation, means that no creature, including the human, is truly separated from god, especially those who are dying alone from the virus. if god is truly present to us at the most intimate levels of our existence, then so too is the divine promise. this takes immanuel, "god with us," to a whole new level and connects the present suffering from the covid-19 virus to the cross of christ. it affirms that even if our cognitive faculties or awareness are not functioning well (or at all), god is still with us. it is not our awareness of god that makes god's grace effective in our lives but god's awareness of us! that is the ground of our hope, not our own reason or strength, even as we pray that a medical solution may soon be found. as creator to creation, one might metaphorically say that god is "entangled" (non-local, relational holism) with creation, ourselves included, at the foundational levels of material existence, analogous to entangled subatomic particles (see simmons, the entangled trinity: quantum physics and theology). deep incarnation is a way of thinking christologically about the redemptive entanglement of the creator with the whole of creation, giving us hope and release from fear and anxiety as this is carried up into and transformed by god. this foundational relationality then grounds divine presence in a suffering world and provides a connectivity for accompaniment and hope in the midst of decline and loss. it is the covid cross. such accompaniment is also expressed through the medical professionals and others who are working tirelessly, and with some personal risk, to help everyone survive throughout the world. this too is an expression of god's care and love within an entangled creation. transcending one's self-interest for the sake of the ill other can certainly be understood as a gift of the spirit.
pandemic reminds us that we too are part of that same entangled creation and that we are also our brothers' and sisters' keepers, for we are all in this together. perhaps this may be one of the most hopeful outcomes from such a horrible pandemic.

doi: 10.1111/dial.12560

there is the existential angst that comes with self-quarantine and the awareness of why it is necessary-we call it "plague dread." and then there are the various levels of explanation, the micro-meanings, you might say. and then there is the mystery-the big meaning, macro-meaning. each of us will fill in the dread with the facts of our own life. i am approaching age 90, with at least three of what the media call "underlying conditions"-more than enough empirical ground for me to dread the coronavirus. almost hourly, we hear precise scientific descriptions of the virus. these descriptions are crucial, because they enable competent people-physicians, nurses, and researchers-to treat the disease and even prevent its spread. the scientific theory of evolution helps me understand our situation. the coronavirus is an example of an evolutionary process wrapped within larger evolutionary processes. the behavior of the virus follows darwinian expectations. all of the processes that take place within our bodies-from the nano and molecular levels to the cells-follow the same evolutionary pattern. these evolutionary processes within us are fundamentally ambiguous in that they bring us life and they also bring us death. leonard hummel and gayle woloschak describe this ambiguity in their fine 2017 book, chance, necessity, love: an evolutionary theology of cancer (cascade books). this presents us with a dilemma: we are grateful for the life-giving work of our internal body processes, and we dread the deadly work of those processes. like cancer, the presence of coronavirus is fully "natural." nature within us is "naturally" ambiguous.
further, these micro-evolutionary processes take place within a much larger story of evolution with several chapters: the evolution of life, which began millions of years ago, within the larger 4 billion year-long story of planet earth's evolution, within the still larger story of cosmic evolution, 12 billion years in the telling. our response to covid-19 is to resist the flow of evolution and redirect it. that is what our practice of medicine is about, the attempt to redirect evolutionary processes in our favor. the long processes of evolution bend because of our efforts. this reminds me how infinitesimally small we are, and yet how amazingly gifted we are. evolution has brought us life and also the skill to reorder evolution itself. nevertheless, despite our efforts, even when they are successful, the struggle with evolution takes its toll-and that means injury and death. in my case, evolution in my mother's womb caused me to be born with spina bifida, which, though moderate in severity, has radically impacted the last 10 years of my life. even as i write, i am aware of the mystery (note the capital "m") that wraps around us. we-and these incomprehensible processes of evolution-float in a sea of mystery. why is it that our existence is woven on this vast and complex loom of evolution? why has god chosen this particular way of bringing us into life and sustaining us? many thinkers down the millennia have pondered this "why?"-and they have given us no satisfying final answers. we can probe mystery, but we cannot resolve it like a puzzle. the book of job speaks to me at this point. when job raised the question and demanded god's response, the voice from the whirlwind spoke to him: your mind is too small and weak to comprehend the height and depths of mystery-you simply must accept it and trust it. 
the existentialist albert camus acknowledged the mystery, and he believed it is indifferent to human hopes and longings; we cry out for answers for our lives, but in return we hear only silence-he called it ultimate absurdity-absurdity with a capital "a." his novel the plague is the story of life during a plague. the plague was indifferent to human existence, the epitome of absurdity. others have called the mystery enemy, malevolent, intending to destroy us, if it can. christian faith calls the mystery friend, redeemer, suffering god. much like the message of job-death at the hands of the mystery is real; our attempts to understand it are futile; but the same mystery is our redeemer. we can trust it. after all, evolution is a process-faith believes the process is going somewhere, and that "somewhere" is in the life of god. the life of god is love, which is why in the midst of plague we find love, caring for others. medically, for most people our current plague will not have serious consequences. psychologically and economically, it will damage most people, at least to some degree. a small percentage of people will die. all of us will be borne along the same evolutionary process into our future. and for all of us, that future will be god's gift to us. think of the image of a train. some of us will get off the train at this station, everyone will get off sooner or later, at different stops. every station's name will be the same, "god's destination-love." to imagine that my words may speak to you well by the time they reach you seems like magical thinking. no one seems to know exactly where we are. our slow and then sudden awareness of the impacts of coronavirus left us in an existentially halted, almost eschatological space: we were caught in a world incredibly arrested and incredibly new at the same time. 
we're deeply aware of old tensions of injustice and vulnerability pulling taut, and simultaneously many of us feel the grit of the irreducible relationality of our bodies and planet anew. we've picked up familiar embodied routines in vital work and mundane practices, and yet now many of the familiar kin that once nourished us with convivial learning, signs of peace, earthly delights, bread and wine are learning to do so again with virtual creativity or picking up pieces. if we are honest with each other, there have been many world-ending plagues before-many apocalypses "now and then," as catherine keller says. native peoples know well the injustices and radical loss of histories of settler violence and plague; so too do lgbtiq folks know the ways that homophobia shaped responses to hiv/aids crises. even theologians from julian of norwich to martin luther knew the risks of bodied life together. we are "mutually bound" in moments like this one, luther himself wrote in his now much-cited 1527 letter, "whether one may flee from a deadly plague." these moments of crisis won't be the last, as much as we aim to prevent loss. earthly creatures are vulnerable and resilient, enfleshed with possibilities both tragic and felicitous. if the old kingdom of our everydayness met its match in the new kingdom of the present, the coming future that cultivates such anxiety in so much of our theological and ethical communities already only intensifies with unknowns. as life began to shift in ireland, i was waist-deep in a sabbatical, researching the complex emotional, affective, and felt responses to the climate crises of our time. from eco-anxiety to environmental despair to climate grief, the present and anticipated losses aggregate and will continue to do so. 
the affective and emotional energies that mutually bind us in the midst of our planetary crises-including that of pandemic-are just as much part of the crises and ethical responses as the scientific approaches we desperately need. in an interview with the harvard business review, expert on grieving david kessler (known especially for his work with elisabeth kübler-ross), reflected that what the pandemic brings with it is "a number of different griefs." we (in all of the manifold diversity that term names) are grieving our imagined present, a sense of normalcy, our planet, our loved ones, our work, our relationships, our habits of interactions in the world, and more. more particularly, kessler argues that something called "anticipatory grief" is in the air. "anticipatory grief," he says, "is that feeling we get about what the future holds when we're uncertain." 1 in unhealthy ways, anticipatory grief morphs into shifting anxieties and end of the world imaginaries. in richer ways, it acknowledges that our lives undergo transformation into the future and that we must find ways to re-story the present, cultivate resilience, imagine and take action for better future societies. honoring grief is an active process that we undertake together in moments like these. and sometimes that process means we do the long, hard work of actively grieving our loved ones and gentle hopes for our future as they really do change forever. outside of pastoral care, affect, emotions or feeling rarely get much consideration in systematic or constructive theology. theology, in its rational and patriarchal guises, so often belittles affective archives as beneath the intellectual purity of doctrinal thinking. 
yet, theology at its richest and most compelling is felt, is inscribed with emotional depth-not as cheap sentimentality or sensationalism, but as imaginative wondering, grieving, transforming, pacing in awe and praise in the middle of the night, lamenting loss in the middle of the day, and crying in terror or joy. even the driest of systematic theology sometimes can't escape tears when it anticipates our own angst and anticipations. when we do, we human animals make theology and theopoetics with everything we've got. we unleash our manifold imaginations to handle newness, especially in times of immense cultural grief. i want my theology to learn how to grieve better, especially in a time of pandemic. most researchers into climate grief will tell you that learning how to grieve a present moment opens up the possibilities of our relational connection. these psychologists, literary theorists, scholars of environmental humanities, and poets ask society to move beyond feelings of ethical individuality (e.g., if only i made "greener" choices) to ethical collectivity (e.g., if only we organized for structural transformation to a better world). grieving means we are thinking about relationality, shared worlds, and communal possibilities human and more-than-human. deep calls to deep, and the pathos of shared imagination can cultivate attentiveness to those who need care. that connectivity of spirit may lead to collectively questioning and lamenting power structures or unjust relationships in the world. questioning may lead to refiguring expectations to ask what the next possible course of action might be. that's just one possible route. along that route, the most curious feature of the literature of environmental despair is a persistent emphasis on the importance of play for times of transformation and collective grief. the vitality of playfulness may seem counterintuitive when everything is so dour. 
think, however, of the creativity emergent in our moment: churches playing with virtual connection, people taking up sourdough starters and knitting, movie nights with strangers over twitter, students coloring rainbows for their windows to encourage, reenergized hikes and reimagined forms of community, families performing skits and songs. play is how we grow, open our minds to what is next, and learn to create with what materials we have. even in moments of dire need, new creation can begin to emerge to help us connect and feel our way out in new imaginative and physical planetary landscapes. playing and creating joy in the wasteland, making possibilities in the midst of the ruins of dashed hopes is just another name for theology. it seems like a good model for divine creativity: divinity that grieves and transforms in response to our common life; divinity that cocreates out of playfulness with an unfolding creation still called "good." how are you doing? if you ask me that question, i have two very different answers, both of which are true. the first one is that i am fine, and i have much to be thankful for: my health is good, and so is the health of my family; i have a safe home and plenty of food; i have a job and discretionary income to buy hiking poles when i decided that hiking is my new covid-19 passion; and am able to get outside for long runs and long walks. the second one is, i am not doing great. i miss my routine, and i am anxious and disoriented. i feel like i am not very useful right now, and that is extremely painful. i miss my students in particular, and my colleagues and friends as well-i miss being with them in person, and i am sick of zoom. i am still grieving the loss of holy week and easter services, and i wonder what church is going to look like when we can finally gather again. and, i am missing being able to travel and see friends and family. as i said, both of these things are true. 
i share this because i wonder if you are having some of the same feelings, and if you are, i want to encourage you that it is ok. on the one hand, it is important to acknowledge and give thanks for your blessings; on the other hand, it is important to acknowledge your feelings of frustration and anxiety. it is important to both support and nourish others when we can, and also have a good cry and even a little tantrum when we need to-do not go crazy however; presumably there are others in your house who might be startled by your screaming. we are in uncharted territory, all adjusting to a new normal that seems to continually take from us, and we need to give ourselves permission to take time to recalibrate. but even in the midst of it all, we do not lose hope. even if we cannot see it, because the end of the tunnel still seems so far away, there is light there waiting for us. we will get through this, and we will find ourselves on the other side. we will be together once more, and my hope is that we will treasure the daily rhythm of our lives-and the people we share it with-all the more for their absence. in the meantime, care for yourselves as best you can, and care for others. accept mediocrity in some things-now is not the time for perfection. do not lose heart. persevere. breathe. love. and when in doubt, love some more. united lutheran seminary philnevahefner@gmail.com doi: 10.1111/dial.12569 global christianity and theological education: introduction to "dialogue in dialog" the papers published in this issue's "dialogue in dialog" were initially presented in two successive luther colloquies held by united lutheran seminary in 2018 and 2019. the essays by madipoane masenya and elieshi ayo mungure were written for a 2018 colloquy on "theology and exegesis in african contexts," along with the essay by andrea ng'weshemi that appeared in the spring issue of dialog. 
1 the essays by timothy wengert, kristopher norris, and david brondos were written for a 2019 colloquy on "theological education in the lutheran tradition". 2 the purpose of united lutheran seminary's luther colloquy is to explore the legacy of luther and the lutheran reformation for modern, global, and ecumenical christianity. readers may be interested in the logic behind and the connection between these particular topics. the topics are intimately connected-on the one hand, because the future shape of the church in africa will be determined partly by the accessibility of theological education and the appropriateness of curricula and methods to african contexts. the contributions by masenya, mungure, and ng'weshemi richly demonstrate this point. in turn, the vitality of the church in america may depend on our continued willingness to hear voices that remind us of our connectedness to the global church and our embeddedness in a global society-by our willingness to hear voices that remove the blinders we inherit simply by being born into a particular context and by accepting its structures and self-justifications as given and just. faith in the gospel gives us eyes to see the world anew, to see god present and active and redeeming even where chaos and death seem to abound. but faith comes from hearing, and we in north america need to open ourselves to the power of hearing christians from contexts other than our own and to living in mutual care for one another. theological education plays no small part in inculcating and practicing these habits of hearing and caring. as i remarked at the beginning of the 2018 colloquy, many of the luther biographies that rolled off the presses to mark the supposed 500th anniversary of the reformation spoke of the unintended consequences and even the failure of luther's efforts. 
3 in this telling, luther aspired to reform the universal church, but he ended up the leader of a particular church; and the ensuing competition between particular churches and the political authorities aligned with them produced primarily oppression and warfare, before giving way to skepticism and, after a long and weary journey, the separation of church and state that we prize and the pervasive unbelief that we in the church lament. there is much to unpack in this grand narrative stretching "from luther to unbelief"-and this is not the place. but i will say two things. first, judged by luther's own standards, the reformation is not a failure as long as the church lives, the church gathered by the holy spirit through word and sacrament, the church sent into the world to proclaim and serve. luther knew full well that the church is constantly assailed by the false worship of gods less than god. he may not have been so unable to comprehend our world as we sometimes assume! the church today is and can be a force to repudiate the worship of lesser gods and to offer in their place the fullness of god's life and meaning. the second thing to be said is that the story of a straight line from luther to secularization is a story of the northern, western world-a story that readily occludes from view anyone but ourselves. it is a story that is somewhat defensible as an exercise in european and north american self-understanding; it is indefensible as a story that assumes the only meaningful chapter in the story of reformation christianity unfolds between wittenberg and gettysburg, between scandinavia and minnesota. the well-documented shifts in global christian population (including in the lutheran communion) need not be reviewed here. suffice to say: the majority of christians now reside outside of north america and europe, and in due time, the largest body of lutherans will probably be found in sub-saharan africa. 
there is no question that appropriate remembrance of the reformation in the church should recognize that our past, present, and future are global. that global context, in turn, becomes the context for theological education no matter where it occurs. in my introduction to the 2019 colloquy on theological education, i made these remarks: we live in a moment when theological education-in the seminary context, at least-faces massive challenges. on the one hand, there is declining enrollment; on the other hand, the rising costs of doing business, including high property costs for older schools with residential campuses. there is also the challenge of serving new populations of seminary students: many students now come to seminary as second-, third-, or fourth-career students, as mature adults with significant obligations to family and community. whether first or later career, students come as part-time students, as commuter students, as distance-learning students. how are their needs to be met, so that they can meet the needs of christians? and if the church is to proclaim the gospel in every place of need, how do we train students for those contexts? one thing is for certain: when graduates leave seminary, they will find a church that needs them. in fact, they will find a church that needs many times more of them. this fact reminds us that it is not only theological education in the seminary that must be discussed; it is not only the education of pastors that needs to be discussed; the question is: how can church leaders of diverse vocations-pastors, deacons, and others-take their education, go forth, and educate through word and deed as part of their broader vocation? in moments of challenge, we are always in danger of finding ourselves in a reactive state. monumental decisions are suddenly demanded, and one simply does the best one can with faith, acting on principle but on the basis of limited information and limited prior reflection. 
the resulting action is inevitably constrained both by practical limitations-what else can we do?-and by intellectual constraints-what else can we imagine? what can we imagine if we have not had the time to reflect and study? it is urgent that we use the time we now have to study and imagine, that we think about the purposes of theological education and the ways that theological education must respond to changing contexts-a changing church, a changing world-on the basis of our enduring commitments, above all, our commitment to serve christ's church. as i planned this colloquy, i did not invite speakers to weigh in on any particular set of current proposals for seminary education. in order to evaluate this or that current proposal, in order to imagine alternatives faithful to the mission of the church, we need to bring to bear the insights of our tradition, of theology and history, and of our global church body. i thus invited speakers to address changes and innovation in theological education that occurred in moments of great pressure and even crisis-the reformation itself, the rise of nazi germany-and in the complicated history of christian expansion around the globe. such investigations give us insight into how those who came before us responded to the call of theological education in concrete, difficult circumstances. we can learn much from the thoughts and actions of those who have gone before us, from their successes and their failures: christianity does not invent itself ex nihilo with every new generation; rather we carry into the future a vibrant, living, diverse tradition, grounded in the greatest gift handed down to us, the heart and sum of our tradition, the gospel. theological education is not a task for the seminary alone; it is a core task of the entire church. while the term today often refers to seminary education, what we do at seminary is educate educators. 
we educate those who must educate others not only about basic doctrinal teachings but also about the depths of christian theological reflection and insight into scripture. we educate those who must teach others not only about doctrine and theology, but also about how we might worship, live, and work together as the church. shaped by seminary, church leaders in turn shape flocks and publics that will go forth and witness to the gospel (i.e., teach others about the gospel), including through the faithful exercise of vocation. theological education is a broad venture, and among the sixteenth century confessions, lutherans were uniquely concerned with the education of the rural peasantry-no other confession in the sixteenth century produced so much literary material that was aimed at rural and small church ministry, at the "simplest" of pastors. this is a tradition that we ought to be proud of; it is a tradition that we carry forward not only in our concern for rural and small church ministry, but fundamentally in our concern that theological education is for all christian peoples in all contexts. today, we look toward a future marked by big challenges and consequential decisions, and as we survey this future and seek to chart our way through it, we are standing on ground that has already shifted. this leads me to the final point i wish to make: as fallen human beings in a fallen world, we frequently respond to change with trepidation; an uncertain future stirs anxiety. much of the movement that has occurred in theological education in recent decades, however, ought to give us cause for hope: we now have women as well as men engaged in theological study and proclaiming the gospel; our understanding of the gospel is now enriched by diverse voices; we have long been a global church, we are now better aware of and better prepared to listen to witnesses from other parts of the world. 
diverse and global perspectives on scripture and theology and the life of the church challenge us and enrich us in our own contexts. the gospel is a magnificent thing to behold, and the church is stronger for seeing its truth and work from different perspectives. tradition is not a zero-sum thing, as if adding a new voice drowns out the old-indeed, new voices can help us see better the depths of what martin luther and so many others wanted to teach us. the richer our field of study, the better we are prepared to do god's work in the world. in conclusion, i want to underline one further groundshift that i have already mentioned: many seminarians, many church leaders in training, are now second-, third-, or fourth-career students. many, whether first or later career, take on the challenge of higher theological study in diverse life circumstances. their willingness to undertake this work is a gift of god. they bring-all students bring-a great diversity of experiences, insights, talents, and vocational skills that strengthen the ministry of the church-that help the church to proclaim the gospel effectively to more people in more walks of life. this too is not a zero-sum game: we are all one in christ, who uses diverse gifts, who welcomes diverse forms of worship, who alone is our redeemer. theological education does face challenges, but it will go on as long as the church goes on. and, as isaiah 40 holds, the word of the lord endures forever. all of us who are involved in theological education-in other words, every committed believer-are called to undertake the venture of theological learning and theological teaching in the spirit of faith, with joyous confidence in both god's direction of our paths and god's redemption of our failings. and we are called, too, to be learners at the feet of the great cloud of witnesses, the great company of teachers who came before us and who span the globe around us. we are called to be learners first from our divine teacher. 
berger, t. @worship: liturgical practices in digital worlds. london: routledge.
the use of the means of grace: a statement on the practice of word and sacrament.
digital worship and sacramental life in a time of pandemic.
luther, m. sermon at the dedication of the castle church, torgau. minneapolis, mn: fortress. (original work published 1544 ce).
luther, m. whether one may flee from a deadly plague. minneapolis, mn: fortress. (original work published 1527 ce).
book of concord: the confessions of the evangelical lutheran church.
justin martyr. (n.d.). first apology. (original work published ca. 150 ce).
apology of the augsburg confession. in book of concord: the confessions of the evangelical lutheran church.
rittgers, r. the reformation of suffering: pastoral theology and lay piety in late medieval and early modern germany.
from sacrifice to sacrament: eucharistic practice in the lutheran reformation.
simanowski, r. facebook society: losing ourselves in sharing ourselves.
authors: higgins, g.c.; robertson, e.; horsely, c.; mclean, n.; douglas, j. title: ffp3 reusable respirators for covid-19; adequate and suitable in the healthcare setting date: 2020-06-30 journal: j plast reconstr aesthet surg doi: 10.1016/j.bjps.2020.06.002
"please doctor, could you tell him that i love him?": letter from plastic surgeons at the covid-19 warfront dear sir, how many times have we heard these words in this time? too many. the covid-19 pandemic has completely disrupted our normal surgical and clinical routine. in these days, many colleagues of whatever specialty are regularly employed by their hospitals to face the covid-19 emergency in italy, europe and worldwide. we are not plastic surgeons anymore. many of us feel lost, unprepared and inadequate for such an emergency. here in bergamo, the centre of the italian epidemic, we felt small and incompetent at the beginning. 1 however, we must remember that first of all we are doctors, then plastic surgeons. 
in these weeks we are putting our willingness at the service of our patients and colleagues. the numbers of the covid-19 pandemic in bergamo are impressive: 8664 positive patients and over 2000 official deaths in about one month. at the same time, the reaction of our hospital, papa giovanni xxiii, has been impressive too: over 400 doctors and over 900 nurses entirely dedicated to covid-19 positive patients; 88 intensive care beds (one of the largest intensive care units in europe) and over 400 non-intensive care beds are set aside for those patients. this huge wave of covid-19 positive patients forced the hospital management to progressively and rapidly recruit, train and put on the wards over 400 physicians of any discipline and 900 nurses from march 6th. several training programs about covid-19 infection and management have been scheduled in order to prepare the entire staff. two plastic surgeons of our team (out of a total of six) have been fully dedicated to shifts in covid medical areas coordinated by a pulmonologist and an intensivist. main activities focus on patient clinical examination, adjustment of oxygen therapy, regulation of cpap systems, performing blood gas analyses, blood and radiological exam monitoring and consequent therapy modulation, and the paperwork of admissions, discharges and deaths. despite these clinical fields being new for a plastic surgeon, we are learning how isolation of patients, due to public health reasons, is the most devastating aspect of the covid-19 pandemic. 2 , 3 every single day we phone and update the relatives of those who, because of the worsening of their respiratory condition, are unable to speak and call home. we are sometimes those who communicate the death of his or her beloved but also those who bring words of hope, words of love: "please doctor, could you tell him that i love him so much?". some of these patients die without the hug of their families. 
a plastic surgeon is not usually accustomed to facing death because in our surgery it is not so frequent. we would say that the death of a lonely patient also takes a part of us away. it acquires a different hint, touching some inner chord; it makes you feel impotent and lost. as plastic surgeons we often take care of the psychological side of patients and, except for some tumours and traumas, the pathologies we treat -like breast reconstruction -are not fatal diseases. if we compare the contribution of the plastic surgery department in terms of numbers, we are like a drop in the ocean. but as ovid wrote in epistulae ex ponto, "gutta cavat lapidem", i.e. "the drop digs the rock". thanks to our support, a clinical physician is able to evaluate a larger number of patients, focusing on the most critical ones. this is why we keep going on. we want to play our part, working with commitment, dedication and professionalism and assisting all our patients to the best of our continuously updated knowledge. we are proud to help the bergamo community face the covid-19 emergency and to try to make a difference in our wounded city. we hope this letter will help other colleagues not to consider themselves unprepared or unready. the contribution of everyone is crucial to defeat this ongoing pandemic, which has not only upset our clinical routine but has also woken us up from our everyday life. before covid-19 everything was scheduled; now there are no plans and we are not sure about our priorities. only if we behave, as long as necessary, with the awareness of being able to make a difference, will we win this terrible fight against sars-cov-2. only together will we go back to hugging, kissing and loving each other. when the critical phase of this emergency is over, it will be necessary to think deeply about socioeconomic development strategies to discover new horizons and new opportunities for a better future. we will never give up!…and what about you? are you ready to play your part? none. 
this research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. dear sir, covid-19 is a novel coronavirus with increasing outbreaks occurring around the world. 1 , 2 during the past 4 weeks, emergence of new cases has gradually decreased in china with the help of massive efforts from society and the government. in addition to those directly working with covid-19 patients in the respiratory, infectious, cardiology, nephrology, psychology, and icu departments, all members of the general population may encounter the new coronavirus. medical staff in plastics, reconstructive, and other departments also have a responsibility to prevent the disease spreading in our community. in order to protect both patients and medical staff, elective operations and cosmetic treatments were reduced or postponed in the plastic surgery hospital, beijing, china. gloves and medical masks were saved and donated to the doctors and nurses in wuhan as the demand for protective equipment increased significantly. in addition, a standard operating procedure for covid-19 was proposed in local hospitals. our hospital recommended online consultations to replace face-to-face interactions. hospital websites and official social media accounts provided updated practical disease prevention information instead of plastic surgery information. other colleagues also conducted publicity campaigns on disease prevention online via their own social media accounts for relatives and friends, especially for older persons who appeared to have developed a serious illness. at the early stages of the covid-19 outbreak in certain areas, the public may not care much about the new disease. as more information about covid-19 becomes available, people without medical background may be anxious to seek diagnosis, which may result in potential risks of cross infection in the crowded fever clinics. 
thus, proper information and guidance can help reduce their panic and anxiety. moreover, if individuals were exhibiting relevant symptoms with epidemiologic history, they were advised to seek medical care following the directions of local health authority. in general, plastic surgeons are particularly good at introducing novel surgical methods to the public and keeping in touch with a great number of patients. as a result, they may be able to present local health authority advice in the form of straightforward images and accessible videos, as well as promote practical information via personal social media or clinic websites. in addition to local doctors and nurses from other departments helping in fever clinics and isolation wards, 3 42,600 (as of march 8, 2020) members of medical staff from other provinces rushed to help their colleagues in hubei province. 4 plastic surgeons that had completed icu training in beijing and other cities supported wuhan on their own initiative as well. 5 we suggest that measures should be taken by medical staff from all departments to help slow further spread and to protect health systems from becoming overwhelmed. dear sir, as covid-19 spreads quickly from asia via europe to the rest of the world, hospitals are evolving into hot zones for treatment and transmission of this disease. with the increasing acceptance that operating theatres are high risk areas for transmission of respiratory infections for both patients and surgeons, 1 and with our health care systems being generally well-designed to only deal with occasional high-risk cases, there is an obvious need to evolve our practice. 
although social media campaigns via the british association of plastic, reconstructive and aesthetic surgeons (#staysafestayhome) and the british society for surgery of the hand (#playsafestaysafe) are attempting to raise awareness and reduce preventable injuries, we are still seeing a steady stream of patients present to our plastic surgery trauma service. we have had to act immediately so our systems can support essential surgical care while protecting patients and staff and conserving valuable resources. as a department we have developed a set of standard operating procedures which cover the full scope of plastic surgery, from the facilitation of emergent life- and limb-saving surgeries and rationalised oncological management to the management of minor soft tissue and bony injuries. we have been cognisant of the need to reduce footfall to the hospital and of the stratification into "dirty" and "clean" areas, with attempted segregation of non-, suspected and confirmed covid cases within inpatient clinical areas. this has resulted in displacement of assessment and procedure rooms within the unit. the ward itself has been earmarked as an extended intensive care unit due to its layout and facilities. standards of practice have changed, with an emphasis on "see and treat", as operating theatre availability has been reduced due to the reduced availability of nurses and theatre staff and their conversion into intensive care areas for ventilated patients. there is also an emerging assumption that all patients are covid-19 positive until proven otherwise. 2 the combination of unfamiliar environments, lack of accessible equipment, the requirement to reduce time spent with patients and adherence to social distancing has resulted in the need to provide a more mobile and flexible service. 
in order to support our mobile service, we have found that, as in other disaster situations where specialised bags have been deployed, 3 using a simple bag containing essential equipment and consumables has revolutionised our ability to work at the point of referral and avoid unnecessary trips to theatre. despite their simplicity, bags have been fundamental to the development of human civilization, with the word originating from the norse word baggi and comparable to the welsh baich (load, bundle). 4 our portable "pandemic pack" is now being carried by the first on-call in our department. this pack contains a 10 l ultra dry adventurer™ polymer dry bag measuring 36 cm (w) × 70 cm (l), as shown in figure 1. the contents are shown in figure 2. we have found this adequate for managing most common plastic surgery trauma and emergency scenarios. the bag is easily cleaned with 1000 ppm available chlorine (in accordance with public health england guidance) after each patient exposure. we have found it useful to make up two packs in advance so that one is available at handover whilst the other is replenished by the outgoing team. we are sure that this concept has been used elsewhere, but if it is not common practice in your unit, we would advocate implementing such a toolkit to facilitate management of trauma patients and reduce the amount of time frontline staff need to spend in a potentially "dirty" environment during the covid-19 pandemic. teleconsultation-mediated nasoalveolar molding therapy for babies with cleft lip/palate during the covid-19 outbreak: implementing change at pandemic speed. dear sir, cleft lip/palate is among the most common congenital anomalies, requiring multidisciplinary care from birth to adulthood. nasoalveolar molding (nam) revolutionized the care provided to babies with a complete cleft, with proven benefits to patients, parents, clinicians, and society. 
1 this therapeutic modality requires parents' engagement with nam care at home and continuous clinician-patient/parent encounters, commencing at the second week of life and finishing just before the lip repair. the rapidly expanding covid-19 pandemic 2 has challenged clinicians who are dealing with nam therapy either to stop it fully or to adjust it to protect both the patient/parent and the healthcare team. based on the current who recommendation to maintain social distancing, and on the national regulation for the use of telemedicine, 2 , 3 the nam-related clinician-patient/parent relationship has been adjusted in a timely manner by implementing a non-face-to-face care model. babies with clefts are consulted individually by clinicians, who proactively establish the initial and subsequent telemedicine consultations and also provide an open communication channel for parents. based on a shared decision-making process, all parents have the option to stop nam therapy completely or to use only lip taping. given that each patient is at a particular stage within the continuum of nam care, numerous patient- and parent-derived issues are being addressed by video-mediated consultations. overall, this has helped explain the current covid-19-related public health recommendations and precautions to parents, while addressing patients' needs and parents' feelings, fears and expectations, and answering parents' questions. moreover, clinical support is provided to patients and parents by visual inspection (looking for potential nam-derived facial irritation) and by checking parents' hands-on maneuvers, such as feeding and placement of the lip taping and nam device, with immediate feedback for corrections. thus, the use of an audiovisual communication tool has considerably reduced the number of in-person consultations. 
when a face-to-face consultation could not be avoided through telemedicine triage, an additional video-based conversation was implemented, focusing on the key steps established for patient/parent visits to the facility (i.e., frequent hand-cleaning, mask usage, and keeping 1 m social distance) and on covid-19-focused screening. 5 symptom- and exposure-screen-negative parents/babies have been consulted with time-specific scheduling and minimum waiting time to avoid crowded waiting rooms, by a clinician wearing personal protective equipment (cap, face shield, n95 mask, goggles, gloves, and gown) and working in an environment with constant surface/object decontamination. 5 parents who screened positive for symptoms (e.g., fever, cough, sore throat) were directed to the appropriate self-care or triage mechanism stipulated by the who guidelines and local authorities. 2-5 in the covid-19 era, care provision should be aligned with the latest clinical evidence. 4 in response to constantly changing needs, clinicians across the globe could adapt the telemedicine-based possibilities to their own environment of national/hospital regulatory bodies, technology accessibility, and the parents' level of technological literacy. as most of the issues addressed in the video conversations were recurrent reasons for consultations prior to the covid-19 outbreak, future investigations could assist in truly defining the key aspects of the telemedicine-based clinician-patient/parent relationship in delivering nam therapy, and its impact on nam-related proxy-reported and clinician-derived outcome measures. there are no conflicts of interest to disclose. virtual clinics: need of the hour, a way forward in the future. adapting practice during a healthcare crisis. the whole world is gripped by the novel coronavirus pandemic, with huge pressure on health services globally. 
within the coming days, this is only going to increase the pressure on health care services, and it demands robust planning and preparedness for this unprecedented situation, lest the whole system be crippled and we see unimaginable mortality and suffering. 1 the concept of social distancing 2 and keeping people in self-isolation has reduced footfall to hospitals, but this is affecting the delivery of routine care to patients with other illnesses; telehealth is an emerging way to reduce the risk of cross-contamination and close contact without affecting the quality of health care delivered. 3 at the bedford hospital nhs trust, for the past year we have been running a virtual clinic for patients with suspected skin cancer, wherein, if the clinical suspicion of malignancy after a particular biopsy was low, patients were not given a follow-up clinic appointment and were instead informed of the biopsy result through the post, sent both to their gp and to themselves. most patients welcomed this model, as it spared them a return appointment, and it took significant pressure off our clinics. in the event we needed to see a patient, they were informed via a telephone conversation to attend a particular clinic appointment. from an administration standpoint, this resulted in fewer unnecessary follow-up appointments in our skin cancer follow-up clinics, which could then be offered to our regular skin cancer follow-up patients as per the recommended guidelines, without having to struggle for appointments. virtual clinics have previously been shown to be safe and cost-effective alternatives to outpatient visits in surgical departments such as urology 4 and orthopedics. 5 they improved performance as well as economic output. 3 , 4 we have increased the use of these virtual clinics with the onset of the novel coronavirus pandemic, in order to reduce patient footfall to our clinics. 
most patients voluntarily chose not to attend, and with the risk being highest amongst the elderly, it was logical to keep them away from hospitals as far as possible. in order to achieve this, we have started virtual clinics for nearly all patients, in order to triage those who can do without having to come to the hospital for now. the world of telemedicine is the way forward in nearly all aspects of medical practice, 3 and this pandemic situation might just be the right time to establish such methods. we propose setting up more such clinics in as many subspecialties of plastic surgery as possible, which will not only help in the current crisis but will also be useful in the future to take pressure off our health care services. none declared. not required. funding: none. webinars in plastic and reconstructive surgery training - a review of the current landscape during the covid-19 pandemic. dear sir, the covid-19 pandemic has resulted in the cancellation of postgraduate courses and the vast majority of elective surgery. plastic surgery trainees and their trainers have therefore needed to pursue alternative means of training. in the face of cross-speciality cover and redeployment, there is an additional demand for covid-19-specific education. the joint committee on surgical training (jcst) quality indicators for higher surgical training (hst) in plastic surgery state that trainees should have at least 2 h of facilitated formal teaching each week. 1 social distancing requirements have meant that innovative ways of delivering this teaching have needed to be found. a seminar is a form of academic instruction based on the socratic dialogue of asking and answering questions, with the word originating from the latin seminarium, meaning "seed plot". 2 fast, reliable internet and the ubiquitous nature of webcams have led to the evolution of the seminar into the webinar. 
whilst webinars have been commonplace for a number of years, they represent an innovative and indispensable tool for remote learning during the covid-19 pandemic, where trainees can interact and ask questions to facilitate deep and meaningful learning. speciality and trainee associations have traditionally used their websites and email lists to publicise training opportunities. however, the covid-19 pandemic has seen a shift to social media, with people seeking constant updates and information from public figures, brands and organisations alike. surgical education has mirrored this trend, and we have increasingly observed that webinars are being launched through speciality and trainee association social channels to keep up with the fast-paced demand for accessible online content. the aim of this study was to audit cumulative compliance of active, publicly accessible postgraduate plastic surgery training webinar frequency and duration against jcst quality indicators. we used the social listening tool brand24™ (https://brand24.com). this tool monitors social media platforms for selected 'keywords' and provides analysis of search results. we used the search terms "plastic surgery webinar", "reconstructive surgery webinar", "royal college of surgeons", "bapras", "bssh", "british burns association", "plasta" and "bssh". there were 733 mentions of these terms from 6th may 2019 to 5th may 2020, and 727 of these were after 23rd march 2020, the date that lockdown began in the united kingdom (uk). this represents an increase of 12,017% post-lockdown. we supplemented this search strategy by searching google™ and youtube™ for "plastic and reconstructive surgery webinar". these search engines rank results in order of relevance using a relevancy algorithm; we therefore reviewed the first 100 results only. additional webinars were identified through a snowballing technique, whereby the host webinar webpage was searched for webinars advertised at other institutions. 
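as a quick sanity check, the post-lockdown increase quoted above follows from the mention counts alone (a sketch; the 733 and 727 figures are taken from the brand24™ search described here):

```python
# mention counts from the brand24 search described above
total_mentions = 733   # 6 may 2019 to 5 may 2020
post_lockdown = 727    # after 23 march 2020, the start of the uk lockdown
pre_lockdown = total_mentions - post_lockdown  # 6 mentions pre-lockdown

# percentage increase of post-lockdown mentions over the pre-lockdown baseline
percent_increase = (post_lockdown - pre_lockdown) / pre_lockdown * 100
print(round(percent_increase))  # 12017, matching the ~12,017% reported
```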
we included any educational webinar series aimed at trainees that was free to access, mirroring weekly plastic surgery hst teaching. free webinars which required membership registration were also included. we excluded webinars aimed at patient or parent education, webinars with less than one video, any historic webinar that did not have an accessible link, and webinars behind a paywall or requiring paid membership. we systematically reviewed the search results from brand24™, google™ and youtube™ and identified webinar series currently in progress (table 1) and historic webinar series (table 2). seven active and two historic webinar series were identified. all were delivered by consultants or equivalent. of the active webinar series, 3 (43%) related to covid-19, 2 (29%) to aesthetic surgery, 1 (14%) to pan-plastic surgery and 1 (14%) to hand surgery. the weekly total running time for active webinars amounted to 8 h 30 min, of which 4 h 30 min was plastic surgery specific. this is a surplus of 2 h 30 min over the jcst quality indicator. limitations of this study include that we identified only webinars advertised publicly. we are aware of training programmes in the uk running in-house webinar series to supplement training, and therefore the total available for training is likely to be higher than we have identified. we have also not reviewed the quality of the educational content. we acknowledge that there are good-quality webinar series that require paid-for membership, such as those provided by the british association of aesthetic plastic surgeons and the american society of plastic surgeons, but it was not the aim of the study to present them here. innovation flourishes during times of crisis. the education of surgical trainees is of paramount importance and should be maintained, even during the difficult times we currently face. 
while operative skills will be difficult to develop, the use of technology can allow for the remote delivery of expert teaching to a large number of trainees at once. in this study we identify a number of freely available webinar series that provide a greater number of teaching hours than is recommended by the jcst. the training exists, it is up to trainees to make the most of it. none. none. dear sir, salisbury district hospital (sdh) is based in southwest england and provides a plastic surgery trauma service across the south coast, serving six local hospitals and the designated major trauma centre (mtc). prior to the covid-19 pandemic all patients referred to the trauma service, apart from open lower limb trauma, were reviewed in person within the trauma clinic. if surgery was required, it was usual for patients to return on a separate day for their operation and in most instances this was carried out under general anaesthetic in the main operating theatres. after discharge, patients were referred to the hand therapy and plastics dressing services and returned in person for all follow-up visits including dressing changes and therapy. patients with lower limb injuries from the mtc were transferred from southampton general hospital as inpatients to sdh for all complex reconstruction including free tissue transfer. at the start of the covid-19 crisis, it became quickly apparent that reducing patient footfall within our department was necessary to protect both patients and staff from the disease. this included reducing inpatient stays in hospital. we responded to this challenge in the following ways and hope that our experience will be of assistance to other trauma services over the course of the global pandemic. firstly, all patient protocols underwent significant redesign following which changes to the layout of our plastic surgery outpatient facility were made and patient flow through the department was altered and reduced. 
now, when patients are referred to our hand trauma service from peripheral hospitals, the initial patient consultations are carried out remotely using the 'attend anywhere' video platform. we are following the bssh covid-19 hand trauma guidelines 2 for patient management. all patient decisions are discussed with the trauma consultant of the day. we are managing a greater number of patients conservatively and to aid this we have designed comprehensive patient information leaflets that enable our patients to increase understanding of their own management. patients who need to be seen in person at our department are screened for symptoms of covid-19 and their temperature taken at the department entrance. level 2 ppe is worn by staff at all times. for hand trauma patients requiring surgery, this is provided on the same day to maximize efficiency and reduce the need for multiple visits. we have transformed our minor operating theatres, located adjacent to our clinic, into fully functional theatres equipped with a mini c-arm and all instruments for trauma operating. this reduces the need for our patients to be taken into the main hospital theatre suite. operations are carried out either under local anaesthetic, walant or regional block depending on complexity. all theatre staff wear level 3 ppe and staffing is kept to a minimum. all wounds are closed with dissolvable sutures. immediately post operation, our on-site hand therapists review patients. splints are made on the same day and patients are educated about their post-operative management at this time. all follow-up is subsequently carried out virtually by the hand therapy team using 'attend anywhere'. with our hub and spoke service set up for lower limb trauma patients, we have ensured that there is an on-site consultant at the mtc every day. wound coverage is being undertaken for all patients at the mtc. two plastic surgery consultants in conjunction with the orthopaedic team carry out operating for these patients. 
all inter-hospital transfers for this group of patients have been stopped. choice of wound coverage for these patients is being designed to minimise inpatient stay and reduce operative time. the changes that we have made to our service in a short period of time have already been beneficial for patients, streamlining their care and reducing time spent in hospital. figure 1 shows the drop in numbers of trauma patients that we have seen during the first four weeks of the uk lockdown ( n = 213 in january 2020 to n = 75 over the first 4 weeks into lockdown). this is in line with reports from other uk units. this has given us time to refine our protocols for an expected upsurge of patients as the lockdown is lifted. furthermore, during this period where we have had extra capacity, our registrars have been trained to carry out new techniques. they now undertake insertion of both mid-lines and picc lines for medical inpatients under ultrasound guidance to support and reduce the burden placed on our anaesthetic and critical care colleagues who previously would have placed these. it is our expectation that many of the changes we have implemented to our service will be continued in the longterm. we will continue to learn and adapt our protocols as this phase of work continues. whilst many of the outcomes of the covid-19 pandemic will be negative, it has also been the catalyst for significant positive change within the uk nhs. dear sir, the covid-19 pandemic has caused unprecedented disruptions in patient care globally including management of breast and other cancers. 1 however, cancer care should not be compromised unnecessarily by constraints caused by the outbreak. clinic availability and operating lists have been drastically reduced with many hospital staff members reassigned to the "frontline". 
furthermore, all surgical specialties have been advised to undertake emergency surgery or unavoidable procedures only, with the shortest possible operating times, minimal numbers of staff and ventilators left available for covid-19 patients. 2 in consequence, much elective surgery, including immediate breast reconstruction (ibr), has been deferred in accordance with guidance issued by professional organisations such as the association of breast surgery (uk) and the american society of plastic surgeons. 3 , 4 this will inevitably lead to backlogs of women requiring delayed reconstructions, and it is therefore imperative that reconstructive surgeons consider ways to mitigate this and adapt local practice in accordance with national guidelines and operative capacity. in the context of the current "crisis" or the subsequent "recovery period", time-consuming and complex autologous tissue reconstruction (free or pedicled flap) should not be performed. approaches to breast reconstruction might include the following options:
1. a blanket ban on immediate reconstruction, and on all forms of risk-reducing, contralateral balancing and revisional/tertiary procedures.
2. where reconstructive delay is neither feasible nor desirable, opting for simple and expedient surgery, e.g.:
a) expanded use of therapeutic mammaplasty: as a unilateral procedure in selected cases instead of mastectomy and ibr.
b) exploring less technically demanding (albeit "controversial") implant-based forms of ibr:
i. epipectoral breast reconstruction (fixed-volume implants): this adds about 30 minutes to the ablative surgery, as the pre-prepared implant-adm complex is easily secured with minimal sutures.
ii. "babysitter" tissue expander/implant: this acts as a scaffold to preserve the breast skin envelope for subsequent definitive reconstruction.
3. during the restrictive and early recovery phases, having either a solo oncological breast surgeon or a joint ablative and reconstructive team (breast and plastic surgeon) perform surgery without the assistance of trainees or surgical practitioners. for joint procedures, the plastic surgeon acts as assistant during cancer ablation and as primary operator for the reconstruction.
despite relatively high rates of complications for implant-based ibr (risking re-admission, prolonged hospital stays or repeat clinic visits), 5 avoiding all ibr will lead to long waiting lists and have a negative psychological impact, particularly among younger patients. this will also impair aesthetic outcomes due to more extensive scars and the inevitable loss of nipples. whilst appreciating the restrictions imposed by covid-19, there is opportunity to offer some reconstructive options depending on local circumstances, operating capacity and the pandemic phase. we suggest that these proposals, involving greater use of therapeutic mammaplasty as well as epipectoral and "babysitter" prostheses, be considered in efforts to offset some of the disadvantages of covid-19 for breast cancer patients whilst ensuring that their safety and that of healthcare providers comes first. dear sir, the covid-19 pandemic has shifted clinical priorities and resources away from elective and trauma hand surgery under general anaesthesia (ga) in order to treat the growing number of covid patients. at the time of this correspondence, the pandemic has affected over 2 million people, resulting in 129,045 deaths worldwide, including 12,868 uk deaths, with numbers still climbing. this has particularly affected our hand trauma service, which serves north london, a population of more than 2 million. we receive referrals from a network of 8 hospitals in addition to the 3 emergency departments of the royal free group of hospitals and numerous gp practices and urgent care centres. 
in the first week following the british government lockdown, which commenced march 23rd, we experienced a 75% drop in referrals, from 25 to 6 a day. subsequently, numbers have been steadily rising, to 12-14 a day by the 6th of april. the british association of plastic, reconstructive and aesthetic surgeons, the british society for surgery of the hand and the royal college of surgeons of england have all issued guidance: encouraging patients to avoid risky pursuits which could result in accidental injuries, and advising members on how to prioritise and optimise services for trauma and urgent cancer work. we have adapted our hand trauma service into a 'one stop hand trauma and therapy' clinic, where patients are assessed, definitive surgery is performed and immediate post-operative hand therapy is offered; therapists make splints and give specialist advice on wound care and rehabilitation, including an illustrated hand therapy guide. patients are categorised based on the bssh hand injury triage app. we already have a specific 'closed fracture' hand-therapy-led clinic to manage the majority of our closed injuries. we combined this clinic with the plastic-surgeon-led hand trauma clinic and improved its efficiency further by utilising the mini c-arm fluoroscope within the clinic setting. this enabled us to immediately assess fractures and perform fracture manipulation under simple local anaesthesia. we have successfully been able to perform 95% of our operations for hand trauma under wide awake local anaesthesia no tourniquet (walant). 1 prior to the pandemic, we used walant for selected elective and trauma hand surgical cases. in infected cases, where local anaesthesia is known to be less effective, we have used peripheral nerve blocks. previous data showed 50% of our trauma cases were conducted under ga, 33% under la, and 17% under brachial or peripheral nerve blocks. 2 we have specifically modified our wound care information leaflets to minimise patient hospital attendance. 
afterwards, patients receive further therapy phone consultations and encouragement to use the hand therapy exercise app developed by the chelsea and westminster hand therapists. each patient is given details of a designated plastic surgery nhs trust email address for direct contact with the plastic surgery team: for concerns, questions and transfers of images. we have to date received 39 emails, of which 21 have been from patients directly and the remainder from referring healthcare providers. the majority of inquiries are followed up via a telephone consultation, and only complex cases or complications attend face-to-face follow-up. this model has successfully combined assessment, treatment and post-operative therapy into a one-stop session, which has greatly limited patient exposure to other parts of the hospital, such as the radiology and therapy departments. another benefit of such a clinic is an improved outcome through combined decision-making. 3 there is also a cost-saving benefit compared to our traditional model of patient care. we have treated 31 patients based on this model so far, who have been suitable for remote monitoring. on average we have saved a minimum of 2 plastics dressing clinic (pdc) visits for wound checks per patient. we have previously calculated the cost of pdc at our centre at £155 per visit, 4 and for our 31 patients this translates to a saving of approximately £9,000 per month on pdc costs alone. if 30 patients each month could be identified for remote monitoring, this could potentially lead to an annual saving of more than £110,000. in addition, converting the mode of anaesthesia from ga to walant has been shown to produce an estimated 70% cost reduction. 
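the savings arithmetic above can be sketched as follows (figures are the letter's own estimates: £155 per plastics dressing clinic visit and a minimum of two visits saved per remotely monitored patient):

```python
COST_PER_PDC_VISIT = 155      # £ per plastics dressing clinic (pdc) visit
VISITS_SAVED_PER_PATIENT = 2  # minimum pdc visits avoided per remotely monitored patient

# saving for the 31 patients treated under this model so far
monthly_saving = 31 * VISITS_SAVED_PER_PATIENT * COST_PER_PDC_VISIT
print(monthly_saving)  # 9610, i.e. roughly £9,000 per month

# projected annual saving if 30 patients a month are monitored remotely
annual_saving = 30 * VISITS_SAVED_PER_PATIENT * COST_PER_PDC_VISIT * 12
print(annual_saving)  # 111600, i.e. more than £110,000 per year
```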
5 the concept of a one-stop clinic has already been successfully implemented in the treatment of head & neck tumours following the introduction of nice guidelines in 2004, 3 and the covid-19 pandemic has made us redesign a busy metropolitan service for hand injuries along the same lines. we believe this model is a good strategy, and combining it with more widespread use of the walant technique, technology such as apps and telemedicine, and encouragement of greater patient responsibility for post-operative care and rehabilitation is the way forward. we hope sharing this experience will result in improved patient care at this time of crisis. 'this is a saint patrick's day like no other' declared the irish prime minister on march 17th 2020, whilst announcing sweeping social restrictions in response to the worsening covid-19 pandemic. this nationwide lockdown involved major restrictions on work, travel and public gatherings and signified the government's shift from the suppression to the mitigation phase of the outbreak. the national covid-19 task force produced a policy specifying the redeployment of health care workers to essential services such as the emergency department and intensive care. 1 with the introduction of virtual outpatient clinics and the curtailment of elective operating lists, the apparent clinical commitments of a plastic surgeon during this pandemic have lessened. trauma is a continual and major component of our practice 2 ; however, a decline in emergency department presentations has fuelled anecdotal reports of a reduction in the trauma workload. with diminishing resources, the risk of staff redeployment and the consequences of poor patient outcomes, we aimed to assess the effect of the current lockdown due to the covid-19 pandemic on the plastic surgery trauma caseload. 
we performed a retrospective review of a prospectively maintained trauma database at a tertiary referral hospital. during the first 25 days of the lockdown, 48 patients attended the plastic surgery trauma clinic, of whom 41 (85.4%) underwent a surgical procedure. as seen in figure 1, these numbers are comparable over the same time frame to the two previous years. upper limb trauma accounted for the near majority of referrals. the frequency and type of surgery performed during the lockdown were similar to the previous two years, as seen in table 1. the percentage of patients requiring general anaesthesia was 46.3% (19/41) in 2020, 44.2% (19/43) in 2019, and slightly higher in 2018 at 58.9% (23/39). we have refuted the anecdotal evidence proposing a decline in plastic trauma caseload during the covid-19 nationwide lockdown. compared with the same period in previous years, the lockdown has produced an equivalent trauma volume. despite the widespread and necessary restriction of routine elective work, somewhat surprisingly the pattern and volume of trauma remain similar to preceding years. with people confined to their households, 'diy at home' injuries contribute to this trend, along with the exemption of certain industries, such as agriculture and the food preparation chain, from the regulations. whilst not every trauma risk may be mitigated, the potential for these diy injuries to overwhelm the healthcare service has resulted in the british society for surgery of the hand (bssh) cautioning the general public on the safety of domestic machinery. 3 as healthcare systems are stretched further than ever before, we all must recognise the need for adaptation and structural reorganisation to treat those of our patients most in need during this pandemic. staff redeployment is a necessary tool to maintain frontline services; nonetheless, we wish to highlight the outcomes of this study to the clinical directors with the challenging job of allocating resources. 
as our trauma presentations have not reduced during the first 25 days of this pandemic, resources (staff and theatre) should remain accessible to the plastic surgery trauma team, with observance of all the appropriate risk reduction strategies documented by the british association of plastic, reconstructive and aesthetic surgeons. 4 none. none. in light of the ongoing covid-19 pandemic, the american society of plastic surgeons (asps) has released a statement urging the suspension of elective, non-essential procedures. 1 this necessary and rational suspension will have detrimental financial effects on the plastic surgery community. given the simultaneous economic downturn inflicted by public health social-distancing protocols, there will be a bear market for elective surgery lasting well past the lifting of the bans on elective procedures. this effect will largely be due to the elimination of discretionary spending as individuals attempt to recover from weeks to months of lost earnings. as demonstrated during the 2008-2009 recession, economic decline was associated with a decrease in both elective and non-elective surgical volume. 2 private practice settings performing mostly cosmetic procedures were particularly vulnerable to these fluctuations and demonstrated a significant positive correlation with gdp. 3 the surgery community must prepare for the economic impact that this pandemic will have on current and future clinical volumes. these effects are likely to be more severe than in the previous recession, as surgeons are currently unable to perform elective surgeries for an indefinite period, coupled with the immense strain on hospital resources at this time. given this burden, elective surgery cases may be some of the last to be added back to the hospital schedule once adequate resources are restored. while surgeons are temporarily unable to operate, they do have the potential to use telehealth to arrange preoperative consults and postoperative follow-up appointments.
this could be accomplished in private practice settings with the use of telehealth services such as teladoc health, american well, or zoom, which allow for live consultation with patients without unnecessary exposure of patients or providers to potential infection. 4 the main limitation of these types of appointments is the lack of an in-person physical exam, so providers have found that billing based on time spent with the patient is more effective with this tool. 5 this could generate revenue and facilitate future surgical cases after the suspension of in-person elective patient care has been lifted. several strategies should be considered by the elective surgery community to minimize financial losses. many financial entities have changed their policies in order to support small businesses. examples include the small business administration offering expanded disaster impact loans and the deferment of federal income tax payments by three months to july 15. 5 another option employers may leverage is temporarily laying off employees so that they can apply for and collect the expanded unemployment package offered by federal and state governments, thereby reducing the payroll burden on stagnant practices with no cash flow while providing employees a steady source of income during the pandemic. the employer's incentive to do this may be reduced by the potential suspension of the payroll tax on employers and loan forgiveness for employers who continue to pay employees' wages. 5 once elective procedures are again permitted, plastic surgeons who have retained a reconstructive practice should make a strategic business decision to increase reconstructive surgery and emergent hand surgery bookings, as historically these procedures fluctuate less with the economy. 3 other options to maintain aesthetic case volume include price reductions or temporary promotions. however, it is important that these be adopted universally in order to minimize price wars between providers.
as physicians, it is a matter of principle that surgeons practice nonmaleficence and minimize non-essential patient contact for the time being. however, this time of financial standstill should be used constructively to prepare for the financial uncertainty in the months to come. none. guidance during the covid-19 pandemic advises certain groups to stringently follow social distancing measures. inevitably some health care workers fall into these categories, and working in a hospital places them at high risk of exposure to the virus. studies have shown human-to-human transmission from positive covid-19 patients to health care workers, demonstrating that this threat is real 1,2 and, as in other infectious diseases, is worse in certain situations such as aerosol-generating and airway procedures 3,4 . there is therefore a part of our workforce that has been out of action, reducing the available workforce at a time of great need. in our hospital a group of vulnerable surgical trainees ranging from ct2 to st8, and also consultants, have been able to keep working while socially isolating within their usual workplace. in light of covid-19 our hospital, a regional trauma centre for burns, plastic surgery and oral and maxillofacial surgery, was reorganized to increase capacity for both trauma and cancer work. as part of this a virtual hand trauma service has been set up. the primary aim of the new virtual hand trauma clinic was to allow patients to be triaged in a timely manner while adhering to social distancing guidelines by remotely accessing the clinic from home. further aims were to reduce time spent in hospital and the time between referral and treatment. in brief, patients referred to our virtual hand trauma clinic from across the region receive a video or telephone consultation using attend anywhere software, supported by nhs digital. following the virtual consultation patients are then triaged to theatre, further clinic, or discharged.
our group of isolating doctors, plus a pharmacist and trauma coordinator, have been redeployed away from their usual face-to-face roles and are now working solely in the virtual trauma clinic. they provide this service from an isolated part of the hospital named the 'virtual nest.' the nest is not accessible in a face-to-face manner by non-isolating staff or patients. this allows a safe 'clean' environment to be maintained. the virtual team is able to participate in morning handover with other areas of the hospital via video conferencing using webex software. the nest workspace is large enough to allow social distancing between clinicians, and by being on site they benefit from dedicated workspaces with suitable it equipment and bandwidth. it is widely recognised that the reconfiguration of hospitals and redeployment of staff has meant that training is effectively 'on hold' for many trainees. we have found that a benefit of the new virtual hand trauma clinic is that trainees can continue to engage with the intercollegiate surgical curriculum programme through work-based assessments in a surgical field. while direct observation of procedural skills and procedure-based assessments are not feasible, case-based discussions and clinical evaluation exercises have been easily achievable, as trainees manage patients with the involvement of supervising senior colleagues in decision making. this, plus the varied case mix seen, has enhanced the development of knowledge, decision making, leadership and communication skills. as trainees are unable to attend theatre, practical skills may suffer depending on how long clinicians remain non-patient-facing. this has been acknowledged by the gmc in the skill fade review; skills have been shown to decline over 6-18 months 5 . although it can only be postulated at the current time, colleagues who are patient-facing but redeployed may face a similar skill decline.
the structure of the team is akin to the firm structure of days gone by, with the benefits that brings in terms of support and mentorship. patients benefit from having access to a group of knowledgeable trainees, supported by consultants, and a service accessible from their own home. this minimizes footfall within our hospital, and exposure to and spread of covid-19. local assessment of our practice is ongoing, but we have found that this model has enabled a cohort of vulnerable plastic surgery trainees to successfully continue to work whilst reducing the risk of exposure to covid-19 and providing gold standard care for patients. none. nothing to disclose. dear sir, a scottish sarcoma network (glasgow centre) special study day was held on 6th march 2019 at the school of simulation and visualisation, glasgow school of art, with representatives from sarcoma uk, beatson cancer charity and the bbc. traditional patient information leaflets inadequately convey medical information due to poor literacy levels: 16-27% of the uk population have the lowest adult literacy level 1 and 40% the lowest "health literacy" level (the ability to obtain, understand, act on, and communicate health information). 2 it was hypothesised that an entirely visual approach, such as ar, may obviate literacy problems by facilitating comprehension of the complex 3-dimensional concepts integral to reconstructive surgery. we report the first use of augmented reality (ar) in patient information leaflets in plastic surgery. to our knowledge we are among the first in the world to develop, implement, and evaluate an ar patient information leaflet in any speciality. developed for sarcoma surgery, the ar patient leaflet centres around a prototypical leg sarcoma. a storyboard takes patients through tumour resection, reconstruction, and the potential post-operative outcomes.
input from specialist nurses, sarcoma patients, and clinicians during a scottish sarcoma network special study day in march 2019 informed the final content ( figure 1 ). when viewed by smartphone camera (hp reveal studio, hp, palo alto, california, usa), photos in the ar leaflet automatically trigger the display of additional content, such as sequential tumour resection, without the need for qr codes or internet connectivity. a 3d alt flap model was developed using body-parts3d (research organization of information and systems database centre for life science, japan) and custom anatomical data. 4 leaflet evaluation by 14 consecutive lower limb sarcoma patients was exempted from ethics approval by the greater glasgow and clyde nhs research office as part of service evaluation. ar leaflets were compared with pooled data from traditional information sources (sarcoma uk website patient leaflets (6), self-directed internet searches (5), generic sarcoma patient leaflets (5); some patients used > 1 source). the mental effort rating scale evaluated perceived difficulty of comprehension (or extrinsic cognitive load) 3 as a key outcome measure in comparison to traditional information sources. patient satisfaction was assessed by likert scale (1 was very, very satisfied and 9 very, very dissatisfied). statistical analysis was performed with social science statistics, 2019. ar leaflets were rated 1.57 (very, very low mental effort) and traditional information sources 6.36 (high mental effort) [unpaired t-test, p < 0.0001]. likert-scale satisfaction was 1.43, indicating very, very high satisfaction. when asked "do you think the ar leaflet would make you less anxious about surgery?", 12/14 (86%) patients responded 'yes'. when asked "do you think other patients would like to have a similar ar leaflet before surgery?" and "would you like to see further ar leaflets developed in the future?", 100% responded "yes".
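the unpaired t-test comparison above can be sketched in pure python. the raw 9-point mental-effort ratings below are hypothetical (the letter reports only the group means of 1.57 and 6.36 for 14 patients); they are chosen to reproduce those means, and the welch form of the unpaired t statistic is used:

```python
# illustrative unpaired (welch) t-test; raw scores are hypothetical,
# constructed to match the group means reported in the letter
from statistics import mean
from math import sqrt

def welch_t(a, b):
    """welch's unpaired t statistic for two independent samples."""
    ma, mb = mean(a), mean(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / sqrt(va / len(a) + vb / len(b))

# hypothetical 9-point mental-effort ratings for 14 patients per source
ar = [1, 2, 1, 2, 1, 2, 2, 1, 2, 1, 2, 2, 1, 2]      # mean 1.57
trad = [6, 7, 6, 7, 6, 5, 7, 6, 7, 6, 7, 6, 6, 7]    # mean 6.36
t = welch_t(ar, trad)
```

a t statistic this large in magnitude (here around -22 on the hypothetical data) corresponds to p < 0.0001 at 26 degrees of freedom, consistent with the result the letter reports.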
no correlation was found between age or educational level and mental effort rating scale scores for the ar patient leaflet (data not shown). subjective feedback analysis found that self-directed internet searches had too much unfocussed information: "(i) didn't want to google as may end up with all sorts" and "(there is) good and bad stuff on the internet, don't know what you're looking at". all patients felt the visual content in ar leaflets helped their understanding: "incredible…that would have made a flap easier to understand", "tremendous… good way of explaining things to my family", "so much better seeing the pictures, gives an idea in your head", and "helpful for others with dyslexia". traditional patient leaflets were often difficult to comprehend: "(i) didn't fully understand the sarcoma leaflets", "couldn't take information in from leaflets". feedback recommended adding simple instructions to the leaflet; however, the ar leaflet is intended for use by the clinician in clinic, and to be so simple that no instructions are required once the software is downloaded to the patient's smartphone (i.e., point and shoot without technical expertise, menus, or website addresses). all patients desired an actual paper leaflet for reassurance, preferring something physical to show their family rather than direction to a website or video. this study demonstrates a significant reduction in extraneous cognitive load (the mental effort required to understand a topic) with ar patient leaflets compared to traditional information sources ( p < 0.0001). ar visualisation may make inherently difficult topics (intrinsic cognitive load), such as reconstructive surgery, easier to understand and process. significant learning advantages exist over traditional leaflets or web-based videos, including facilitating patient control, interactivity, and game-based learning. all contribute to increased motivation, comprehension, and enthusiasm in the learning process.
5 ar leaflets reduced anxiety (86% of patients) and scored very highly for patient satisfaction with information, which is notable given increasing evidence that satisfaction with information is a strong independent determinant of overall health outcomes. this study provided the impetus for investment in the concurrent development of other ar leaflets across the breadth of plastic surgery, and in non-plastic surgery specialties. chief scientist office (cso, scotland) funding was secured to aid the development of improved, free, fully interactive 3d ar patient information leaflets and a downloadable app. ethical approval is in place for a randomised controlled trial to quantify the perceived benefits of ar in patient education. our belief is that ar leaflets will transform and redefine the future plastic surgery patient information landscape, empowering patients and bridging the health literacy gap. none. dear sir, we investigated whether age has an influence on wound healing. wound healing can result in hypertrophic scars or keloids. from previous studies we know that age has an influence on the different stages of wound healing. 1-4 a general assumption seems to be that adults make better scars than children. knowledge of the influence of age on healing and scarring could give opportunities to intervene in the wound healing process to minimize scarring. it could guide patients in their decision of when to revise a scar. it could also guide patients and physicians in deciding the timing of a surgery, if the kind of surgery allows this. this study is a retrospective cohort study at the department of plastic, reconstructive, and hand surgery of the amsterdam university medical center. all patients underwent cardiothoracic surgery through a median sternotomy incision. all patients had to be at least one year after surgery at the time of investigation. hypertrophic scars were defined as raised 1 mm above skin level while remaining within the borders of the original lesion.
keloid scars were defined as raised 1 mm above skin level and extending beyond the borders of the original lesion. 5 the scars were scored with the patient and observer scar assessment scale (posas) as the primary outcome measure. as secondary outcome measures we looked at wound healing problems and scar measurements. to ensure that the results of this study were influenced as little as possible by the already known risk and protective factors for hypertrophic scarring, the patients were questioned about co-existing diseases, scar treatment, allergies, medication, height, weight, cup size (females) and smoking. their skin type was classified with the fitzpatrick scale i to vi. all calculations were performed using spss and the level of significance was set at p ≤ 0.05. 105 patients were enrolled in this study. group 1 contained 53 children and group 2 contained 52 adults. there is a significant difference between the two groups for the amount of pain in the scar scored by the patient: this item was given higher scores by adults than by children ( p = 0.025). there is no significant difference between the two groups for the other posas items (itchiness, color, stiffness, thickness, and irregularity), the total score of the scar, or the overall opinion of the scar scored by the patient ( table 1 ). there is a significant difference between the two groups in pliability of the scar scored by the observer: the posas item pliability was scored higher, thus stiffer, in children than in adults ( p = 0.022). there is no significant difference between the two groups for the other posas items (vascularization, pigmentation, thickness, relief, and surface), the total score of the scar, or the overall opinion of the scar scored by the observer ( table 1 ). there is no significant difference between children and adults in the occurrence of wound problems post-surgery, nor in scar measurements.
in children we found three hypertrophic scars and two keloid scars. in adults we found seven hypertrophic scars and three keloid scars. for both groups together that is a rate of 14.3% hypertrophic and keloid scars ( table 2 ). patients with fitzpatrick skin type i and iv-vi scored significantly higher, thus worse, in their overall opinion of the scar ( p = 0.024) than patients with skin type ii and iii. observer and patient assessed the overall opinion of the scar significantly higher (worse) in people who had gone through wound problems (respectively p = 0.020 and p = 0.007) than in those who had not. we found no significant differences in the primary outcome measure between men and women, cup size a-c and d-g, smokers and non-smokers, bmi < 25 and bmi > 25, allergies and no allergies, and scar treatment and no scar treatment. age at creation of a sternotomy wound does not seem to influence the scar outcome. this is contrary to the common fear of parents of children who need surgery early in life. comparing scars remains difficult because of the many factors that can influence scar formation. we found that scars have a tendency to change, even years after they are made. a limitation of the study is the retrospective design. the long follow-up period after surgery is a strength of the study. to the best of our knowledge this is the first study that compares scars of children and adults to specifically look at the clinical impact of age on scar tissue. in order to detect more reliable and possibly significant differences between children and adults, more patients should be enrolled in future prospective studies. for now we can conclude that there is no significant difference in the actual scar outcome between children and adults in the sternotomy scar. if we extend these results to other scars, the timing of surgeries should not depend on the age of a patient. none. none. metc. reference number: w18_050 # 18.068.
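the combined 14.3% figure follows directly from the counts reported above (group sizes of 53 children and 52 adults); a trivial check:

```python
# combined hypertrophic + keloid scar rate across both groups,
# using the counts reported in the letter
children = {"hypertrophic": 3, "keloid": 2, "n": 53}
adults = {"hypertrophic": 7, "keloid": 3, "n": 52}

scars = (children["hypertrophic"] + children["keloid"]
         + adults["hypertrophic"] + adults["keloid"])
total = children["n"] + adults["n"]
rate = 100 * scars / total
print(round(rate, 1))  # prints 14.3
```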
we published a systematic review of randomized controlled trials (rcts) on early laser intervention to reduce scar formation in wound healing by primary intention. 1 while comparing our results with two other systematic reviews on the same topic, 2,3 we identified various overt methodological inconsistencies in those reviews. issue 1. including duplicate data ( table 1 ): karmisholt et al. 2 included two rcts that both reported identical data on the same five people. the inclusion of duplicate data can bias the results of a systematic review and should be prevented in the quantitative as well as the qualitative synthesis of evidence. abbreviations. id: identity; n.l.t.: no laser treatment; pcs: prospective cohort study; pmid: pubmed identifier; rct: randomized controlled trial.
a) listed are rcts which were included by at least one of the three identified systematic reviews. the systematic reviews are ordered by search date from left to right.
b) "search date" refers to the searching of bibliographic databases by the authors of the corresponding systematic reviews.
c) "publication date" refers to the publication history status according to medline®/pubmed® data element (field) descriptions.
d) "n.l.t." means that the authors of the rcts compared laser treatment with no treatment or a treatment without laser.
e) "pcs" means that the authors used this term to label the corresponding rct.
f) "-" indicates that an rct could not have been identified because the publication of the corresponding rct happened after the search date.
g) "missing study" means that an rct could have been identified because the publication of a corresponding rct happened before the search date.
h) "excluded" means that the authors of the present review excluded the corresponding rct based on the exclusion criteria provided.
i) "not analyzed" means that an rct was reported within an article but the corresponding data were not included in the meta-analysis.
j) "other laser" means that the authors of the rcts compared various types of laser treatment.
karmisholt et al. 2 attached the label "prospective cohort" to almost all considered studies, including 16 rcts and seven nonrandomized studies. in rcts, subjects are allocated to different interventions by the investigator based on a random allocation mechanism. in cohort studies, subjects are not allocated by the investigator but rather in the course of usual treatment decisions or peoples' choices, based on a nonrandom allocation mechanism. 4 we believe that 'cohort study' is certainly not an appropriate label for rcts. furthermore, it has long been known that the shorthand labeling of a study using the words 'prospective' and 'retrospective' may create confusion, because these words carry contradictory and overlapping meanings. 5 issue 4. mixing data from various study designs: karmisholt et al. 2 did not clearly separate randomized from nonrandomized studies. combinations of different study design features should be expected to differ systematically, and different design features should be analyzed separately. 4 issue 5. unclear definition of outcomes and measures of treatment effect: kent et al. 3 reported, quote: "the primary outcome of the meta-analysis is the summed measure of overall efficacy provided by the pooling of overall treatment outcomes measured within individual studies." we think that the so-called "summed measure" is neither defined nor understandable. the meta-analysis reported in that article included mean and standard deviation values from four rcts. these rcts applied endpoints and time periods for assessment which differed considerably among the included studies. it remains unclear to us which data were transformed, and in what way, to finally arrive in the meta-analysis. we believe that traceability and reproducibility of data analyses are mainstays of systematic reviews. issue 6.
missing an understandable risk of bias assessment: kent et al. 3 reported, quote: "the risk of bias assessment tool provided by revman indicated that all studies had 2-3 categories of bias assessed as high risk." the term "revman" is a short name for the software "review manager 5" provided by cochrane for preparing their reviews. the cochrane risk-of-bias tool for randomized trials is structured into a fixed set of bias domains, including those arising from the randomization process, due to deviations from intended interventions, due to missing outcome data, those in measurement of the outcome, and in selection of the reported result. we believe that the risk of bias assessment reported by kent et al. 3 is not readily understandable and presumably does not match standard requirements. systematic reviews of healthcare interventions aim to evaluate the quality of clinical studies, but they might have quality issues in their own right. the identification of various inconsistencies in two systematic reviews on platelet-rich plasma therapy for pattern hair loss should prompt future authors to consult the cochrane handbook ( https://training.cochrane.org/handbook ) and the equator network ( http://www.equator-network.org/ ). the latter provides information on various reporting standards such as prisma for systematic reviews, consort for rcts, and strobe for observational studies. the authors declare no conflict of interest. dear sir, journal clubs have contributed to medical education since the 19th century. 1 along the way, different models and refinements have been proposed. recently, there has been a shift towards "virtual" journal clubs, often using social media platforms. 2 our team has refined the face-to-face journal club model and successfully deployed it at two independent uk national health service (nhs) trusts in 2019. we believe there are reproducible advantages to this model.
over 6 months at one nhs trust, 8 journal club events were held, with iterative changes made to increase engagement and buy-in of the surgical team. overall, tangible outputs included 3 submissions of letters to editors, of which 2 have been accepted. following this, the refined model was deployed at a second nhs trust, where expanded academic support increased its impact. over 4 months, 6 journal club events were held, with 4 submissions of letters to editors, 3 of which have been accepted. thus, in 10 months of 2019, the two sequential journal clubs generated 7 submissions for publication, with 16 different authors. these tangible outputs are matched by other intangible benefits, such as improving critical appraisal skills. critical appraisal is assessed in uk surgical training entry selection and is also a key skill for evidence-based professional practice. therefore, we feel this helps our team members' career progression and clinical effectiveness. key aspects of the model include: 1. face-to-face meetings continue to have multiple intangible benefits. there is a trend towards social media and online journal clubs. while such initiatives have considerable benefits, maintaining face-to-face contact in a department allows for an efficient discussion and enhances teambuilding. instead of replacing face-to-face meetings with virtual ones, we use social media platforms, such as whatsapp, to support our events. this includes communications to arrange the event in advance, and maintaining momentum on post-event activities, such as authoring letters to journals from the discussion. while some articles describing journal club models highlight the benefit of expert input in article selection, 1 we also view it as a learning opportunity. a surgical trainee is allocated to present each journal club, with one of our three academically appointed consultant surgeons chairing and overseeing.
trainees are encouraged to screen the literature, identify articles beforehand, and make a shared decision with the consultant. the article must be topical and have the potential to impact clinical practice. doing this prior to the session allows the article to be circulated to attendees with adequate time to read it. we routinely use both reporting guidelines (e.g., prisma for systematic reviews) and methodological quality guidance (e.g., amstar-2 for systematic reviews) to guide trainees and structure the journal club presentation. in addition to three consultants with university appointments guiding critical appraisal, a locally based information scientist also joins our meetings. during the journal club discussion, emphasis is placed on relating the article to the clinical experience of team members. this provides context and aids clinical learning for trainees. while undertaking critical appraisal may be a noble endeavour, in busy schedules it is important that it adds value for everyone involved. reviewing contemporary topics can inform clinical practice for all levels of surgeon in the team, presenting the article improves trainees' presentation skills, and publishing the appraisal generates outputs that help trainees to progress. 8. publishing summaries of journal club appraisals can have impact on multiple levels. journal club does not only contribute to our trainees' development and departmental clinical practice. it benefits our own research strategy and quality, and open discussion of literature in plastic surgery contributes to a global culture of improving evidence. scheduling events on a regular basis increases familiarity with reporting and quality guidance and allows for the study of complementary article types (e.g., systematic review, randomised trial, cohort study).
our iterations suggest that the following structure is most effective: joint article selection one week before the event, dissemination to the audience, a set time and location during departmental teaching, chairing by an academic consultant with an information scientist and senior surgeons present, a presentation led by a surgical trainee, open-floor discussion of the article and its implications for our own practice, a summary, and drafting of a letter to the editor if appropriate. as we have used variations of this model successfully at two independent nhs trusts, we believe that these tactics can be readily adapted and deployed by others as well. nil. dear sir, surgical ablation of advanced scalp malignancies requires wide local excision of the lesion, including segmental craniectomies. the free latissimus dorsi (ld) flap is a popular choice for scalp reconstruction due to its potential for resurfacing a large surface area, its ability to conform to the natural convexity of the scalp, reliable vascularity and reasonable pedicle length. 1 one of the disadvantages of ld free flap use is the perceived need for harvest in a lateral position. this necessitates a change in the position of the patient intraoperatively for flap raise and can add to the overall operative time. current literature on microvascular procedures in the elderly demonstrates that a longer operative time is the only predictive factor associated with an increased frequency of post-operative medical and surgical morbidity. as most patients undergoing scalp malignancy resection are elderly, it is important to reduce surgical time in this cohort of patients. 2,3 we present our experience of reconstruction of composite cranial defects with ld flaps using synchronous tumour resection and flap harvest with a supine approach to reduce operative times and potential morbidity.
all patients undergoing segmental craniectomies with prosthetic replacement and ld reconstruction under the care of the senior surgeons were included in the study. patients were positioned supine with a head ring to support the neck; a sandbag is placed between the scapulae, and the arm on the chosen side of flap raise is free draped. a curvilinear incision is made posterior to the midaxillary line ( figure 1 ). the lateral border of the ld muscle is identified, and dissection continued in a subcutaneous plane inferiorly, superiorly and medially until the midline is approached. the muscle is divided at the inferior and medial borders, and the flap lifted towards the pedicle. once the pedicle is identified, the assistant can manipulate the position of the free draped arm to aid access into the axilla; the pedicle is clipped once adequate length has been obtained. the flap is delivered through the wound and detached ( figure 2 ). donor site closure is carried out conventionally. the flap inset is performed using a "vest over pants" technique, placing scalp over muscle by undermining the remaining scalp edges. 5 a non-meshed skin graft is used to enhance the aesthetic outcome. a total of 11 patients underwent 12 free ld muscle flaps. all were muscle flaps combined with split-thickness skin grafts. the study population included ten male patients and one female. the age range was 47-74 years, with a mean age of 69.5 years. the defect area ranged from 99 cm 2 to 360 cm 2 . a titanium mesh was utilised for dural cover in all patients, fixed with self-drilling 5 × 1.5 mm cortical screws. the primary recipient vessels used were the superficial temporal artery and vein. however, in cases where a simultaneous neck dissection and parotidectomy were necessary for regional disease, the facial artery and vein ( n = 1 in this series) or the contralateral superficial temporal vessels were used. the ischaemia time ranged from 48 to 71 min, with a mean of 61.3 min.
there were no take-backs for flap re-exploration. the overall flap success rate was 100%. marginal flap necrosis with secondary infection occurred in one patient with a massive defect (at one week post-op). the area was debrided and a second ld flap was used to cover the resultant defect (30%). a further posterior transposition flap was used to cover a minor area of exposed mesh, and the scalp healed completely. the total operating time ranged from 210 to 410 min, with a mean of 289 min. all patients were followed up at two and then four weeks for wound checks. the ld flap remains a popular choice because of its superior size and ability to conform to the natural convexity of the scalp compared with other flap options. 4 also, unlike composite flaps, which often require postoperative debulking procedures, the ld muscle flap atrophies and contours favourably to the skull. 5 however, the traditional means of access to this flap requires lateral decubitus positioning of the patient, which can hinder simultaneous oncological resection. the supine position facilitates access for neck dissection, especially if bilateral access is required. our approach ensures that tumour ablation and reconstruction are carried out in a time-efficient manner in an attempt to reduce postoperative medical and surgical complications. synchronous ablation and reconstruction is key to reducing overall operative time and complication risk and is practised preferentially at our institute. it is important to maintain a degree of flexibility to achieve this: there may be situations where supine positioning overall is more favourable. likewise, there are situations relating to flap topography where a lateral approach to tumour removal and reconstruction is preferred. the resecting surgeon or reconstructive surgeon may have to compromise to achieve synchronous operating, but this is worthwhile to reduce overall operative time. none. not required. 
once established, lymphorrhea typically persists and can present as an external lymphatic fistula. lymphorrhea occurs in limbs with severe lymphedema, as a complication after lymphatic damage, and in obese patients. some cases are refractory to conservative treatment and require surgical intervention, ideally one that reconstructs a lymphatic drainage route. three patients had primary lymphedema, four had age-related lymphedema (aging of the lymphatic system and its function is thought to be the cause of age-related lymphedema 1 ), three had obesity-related lymphedema, and two had iatrogenic lymphorrhea (table 1; abbreviations: bmi, body mass index; f, female; m, male). in the 2 cases of iatrogenic lymphorrhea, the lesions were located in the groin; the others were in the lower leg. one of the 2 cases of lymphorrhea in the inguinal region was caused by lymph node biopsy and the other by revascularization after resection of a malignant soft tissue sarcoma. compression therapy had been performed preoperatively in 10 cases (using cotton elastic bandages in 6 cases). four patients wore a jobst® compression garment. compression therapy was difficult to apply in 2 patients. the duration of lymphorrhea ranged from 1 to 192 months. the severity of lymphedema 2 ranged from campisi stage 2 to 4 (table 1). the clinical diagnosis of lymphorrhea was confirmed by observation of fluorescent discharge from the wound on lymphography. no signs of venous insufficiency or hypertension were observed in the subcutaneous vein intraoperatively. all anastomoses were performed between distal lymphatics and proximal veins. postoperatively, lymph was observed to be flowing from the lymphatic vessels to the veins. two to 4 lvas were performed in the region distal to the lymphorrhea and 1-4 in the region proximal to the lymphorrhea in patients with lower limb involvement. 
six lvas were performed in patients with lymphorrhea in the inguinal region (table 1). all patients were successfully treated with lvas without perioperative complications. the volume of lymphorrhea decreased within 5 days of the lva surgery in all cases and had resolved by 2 weeks postoperatively. the compression therapy used preoperatively was continued postoperatively. there has been no recurrence of lymphorrhea or cellulitis since the lvas were performed. an 86-year-old woman had gradually developed edema in her lower limbs over a period of 2-3 years. she had also developed erosions on both lower legs (figure 1). compression with cotton bandages failed to stop the percutaneous discharge; about 400 ml of lymphatic discharge through the erosions was noted each day. ultrasonography did not suggest a venous ulcer resulting from venous thrombosis, varix, or reflux. four lvas were performed in each leg (3 distal and 1 proximal to the leak). the lymphorrhea had mostly resolved by 5 days postoperatively, and the erosions healed within 3 weeks of the surgery. no recurrence of lymphorrhea was noted during 12 months of follow-up. iatrogenic lymphorrhea occurs after surgical intervention involving the lymphatic system. it is also known to occur in patients with severe lymphedema; obesity 3 and advancing age 1 are also risk factors for lymphedema. most patients with lymphorrhea respond to conservative measures, but some require surgical treatment, and patients with lymphorrhea are at increased risk of lymphedema. lymphorrhea that occurs after surgery or trauma is caused by damage to lymphatic vessels large enough to sustain the leak, whereas lymphorrhea in association with lipedema or age-related lymphedema reflects accumulation of lymph that has progressed to the point of leakage. lymphorrhea can be treated by other methods, including macroscopic ligation, compression, or negative pressure wound therapy 4 . 
however, it is impossible to reconstruct a lymphatic drainage route using these procedures. we hypothesized that lymphorrhea can be managed by using lva to treat the underlying lymphedema. lva is a microsurgical technique in which an operating microscope is used to perform microscopic anastomoses between lymphatic vessels and veins to re-establish a lymph drainage route. the primary benefits of lva are that it is minimally invasive, can be performed under local anesthesia, and requires only incisions measuring 2-3 cm. one anastomosis is adequate to treat lymphorrhea and serves to divert the flow of the lymphorrhea-causing lymph into the venous circulation. if operative circumstances allow, 5 or more anastomoses are recommended for the treatment of lymphorrhea complicated by lymphedema. lymphedema is a cause of delayed wound healing, and lva procedures are considered to improve wound healing in lymphedema via pathophysiologic and immunologic mechanisms 5 . lva is a promising treatment for lymphorrhea because it can treat both lymphorrhea and lymphedema simultaneously. the focus in treating lymphedema has now shifted to risk reduction and prevention, so it is important to consider the risk of lymphedema when treating lymphorrhea. none. over-meshing a 1:1 meshed skin graft. we were curious to learn whether it is feasible to mesh already-meshed skin grafts. we run our skin bank at the department of plastic surgery 1 and used allograft skin that had tested positive on microbiological screening and was thus not suitable for patient use. grafts were cut into 4 cm × 4.5 cm pieces, meshed to 1:1 using mesh carriers, and then over-meshed to 1:1.5. we used two kinds of mesh carrier for the 1:1.5 meshes. the meshed grafts were maximally expanded and measured again, and the results were expressed as ratios (figure 1). we found that over-meshing results in a 1.25-fold increase in graft area regardless of the mesh carrier used. figure 2 illustrates a close-up picture of the over-meshed graft. 
in the close-up picture the small 1:1 incisions are still visible. in those undesirable "oh no, the graft is too small" or "the graft is too large" situations, this technique has its advantages. we have used an over-meshed graft on a skin graft harvest site (supplemental figure) with an acceptable outcome. the tiny extra incisions in the over-meshed skin graft do not appear to worsen the aesthetic outcome compared with a standard 1:1.5 mesh. we do not know the clinical value of the tiny incisions, but we estimate it to be minimal, if any. to the best of our knowledge, only one previous publication has addressed the over-meshing of skin grafts 2 . henderson et al. showed in porcine split-thickness skin grafts that over-meshing resulted in a 1.5-fold increase, somewhat larger than our result. taken together, the results suggest that meshing an already-meshed graft is feasible and does not destroy the architecture of the original or succeeding mesh. each author declares no financial conflicts of interest with regard to the data presented in this manuscript. supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.bjps.2020.02.048 . numerous autologous techniques for gluteal augmentation flaps have been described. in the well-known currently employed technique for gluteal augmentation, it is noticeable that the added volume is unevenly distributed in the buttock. in fact, morphological analysis makes clear that volume is added to the upper buttock at the expense of the lower buttock. 1 according to wong's ideal buttock criteria, the most prominent posterior portion is fixed at the midpoint on the side view. 2 additionally, mendieta et al. suggest that the ideal buttock has equal volume in all four quadrants, with its point of maximum projection at the level of the pubic bone. 
3 we describe a technique of autologous gluteal augmentation using a para-sacral artery perforator propeller flap (psap). this new technique can fill all the quadrants vertically with a voluminous flap shaped like an anatomic gluteal implant. gluteal examination is done in the standing and prone positions. patients must have a body mass index of less than 30 kg/m², an indication for body-lift contouring surgery, gluteal ptosis with platypygia and substantial steatomery on the lower back; a pinch test greater than 5 cm defines substantial steatomery. preoperative markings: the ten steps. a. standing position. 1. limits of the trunk: the median limit (mlt) and the vertical lateral limit (llt) of the trunk are marked. 2. limits of the buttock: the inferior gluteal fold (igf) is drawn, and the vertical lateral limit of the buttock (llb) is defined at the outer third between the mlt and the llt. 3. lateral key points: points c and c' are located on the vertical lateral limits; point c is 2 to 3 cm below the iliac crest, depending on the type of underwear, and point c' is determined by an inferior strong-tension pinch test performed from point c. mhz. this diagnostic tool is easy to access, non-invasive and, above all, reliable in the identification of perforating arteries, with a sensitivity and positive predictive value of almost 100%. 4 usually, one to three perforators are identified on each side and marked. 9. design of the gluteal pocket: the shape is oval, with dimensions similar to those of the flap; the base is truncated and suspended from the lower resection line. the width of the pocket extends one to two centimetres from the mlt laterally and two centimetres from the llt medially, and its inferior border is no more than two fingers'-breadth above the igf; the pocket therefore lies medially in the gluteal region. 10. design of the flap: the flap is shaped like a "butterfly wing", with its long axis following a horizontal line. 
after a 90° medial rotation, the flap has a shape similar to an anatomical gluteal prosthesis. the medial boundary is two fingers'-breadth from the median limit of the buttock, and the width is defined by the two resection limits. the patient is placed in a prone position, arms in abduction. the flap is harvested in a lateral-to-medial direction, first in a supra-fascial plane and then sub-fascially when approaching the llb. the dissection is complete when the rotation arc of the flap is free of restriction (90-100°); viewing or dissection of the perforators is usually not required. to create the pocket, custom undermining is done in the sub-fascial plane according to the markings. the flap is then rotated and positioned into the pocket. the superficial fascial system is closed with 0 vicryl (ethicon), and the deep and superficial dermis are closed with a buried intradermal suture and a running subcutaneous suture with 3.0 monocryl (ethicon). a compressive garment (medical z lipo-panty elegance coolmax h model, ec/002-h) is worn postoperatively for one month (figure 1). rhinoplasty is one of the most common procedures in plastic surgery, and 5-15% of patients undergo revision. dorsal asymmetry is the leading (65%) nasal flaw in secondary patients. 1 careful management of the dorsum is necessary to achieve a smooth transition from radix to tip. camouflage techniques are well-known maneuvers for correcting dorsal irregularities. cartilage, fascia, cranial bone, and acellular dermal matrix have previously been used for this aim. 2,3 bone dust is an orthotopic option that is easily moldable into a paste. it is especially useful in closed rhinoplasty, where visibility of the dorsum is reduced. we introduce a new tool, a minimally invasive bone collector, as an effective and safe device for harvesting bone dust from the nasal bony pyramid to obtain camouflage on the dorsum while performing ostectomy simultaneously. 
patients were operated on for nasal deformity by the senior author (o.b.) with closed rhinoplasty between february 2018 and november 2018. in all cases, a minimally invasive bone collector was used for ostectomy and the harvest of bone dust. included patients were primary cases with standardized photos, complete medical records, and 1-year follow-up. written informed consent for the operation and for publishing photographs was obtained, and the study was performed in accordance with the standards of the declaration of helsinki. the authors have no financial disclosure or conflict of interest to declare. patient data were obtained from rhinoplasty data sheets, and photographs were used for the analysis of nasal dorsum height, symmetry, and contour. physical examinations were carried out to detect irregularities. micross (geistlich pharma north america inc., princeton, new jersey) is a bone collector that allows easy harvest, especially in narrow areas. micross comes packaged with one sterile disposable scraper; it is externally 5 mm in diameter and has a cutting blade tip. a collection chamber allows a maximum of 0.25 cc of graft to be harvested at a time. the sharp cutting technique improves graft viability. the incisions for lateral osteotomies were used to introduce micross when the planned ostectomy site was the nasomaxillary buttress; an infracartilaginous incision was used when the desired ostectomy site was the dorsal cap or radix. bone dust was collected into the chamber with a rasping movement. the graft is mixed with blood during the harvest, which yields an easily moldable bone paste (the surgical technique is described in the video). after completion of the osteotomies and cartilaginous vault closure, the bone paste was placed on the part of the bony dorsum likely to show irregularities postoperatively. a nasal splint was used to maintain contour. the bone graft was not wrapped in any other graft. eighteen patients underwent primary closed rhinoplasty with 1-year follow-up. 
seventeen of the 18 patients were female and one was male. the harvesting sites were the nasomaxillary buttress in 18 patients, the radix in 7 patients and the dorsal cap in 5 patients. the total graft volume was between 0.25 and 0.5 cc per patient. nasal dorsum height, symmetry, contour, and dorsal esthetic lines were evaluated using standardized preoperative and postoperative photographs. dorsal asymmetry, overcorrection of the dorsal height and residual hump were not observed in 17 of the patients (figures 1-4). only 1 patient had a visible irregularity of the dorsum. physical examination revealed palpable irregularities in 3 patients. none of the patients required surgical revision for residual or iatrogenic dorsum deformity. asymmetries and irregularities of the upper one-third of the nose lead to poor esthetic outcomes and secondary revision surgeries. to treat an open roof after hump resection, lateral osteotomies, spreader grafts, flaps and camouflage grafts are commonly used. warping, resorption and migration, visibility, limited volume, donor-site morbidity, and the risk of infection are the main disadvantages of grafts. öreroglu et al. have presented their technique of using diced cartilage combined with bone dust and blood. 4 tas has reported results with harvesting bone dust with a rasp and using it for dorsal camouflage. 5 the disadvantages of harvesting with a rasp were the difficulty of collecting dust from the teeth of the rasp and the loss of a certain amount of graft material during the harvest. with micross, the harvested graft is collected in the chamber, so the risk of losing graft material is eliminated. the concept of replacing "like with like" tissue is important; the reconstruction of a bone gap can therefore be achieved successfully with bone grafts. to limit donor-site morbidity, we prefer to harvest bone from the dorsal cap, which was preoperatively planned to be resected. 
the preference for the lateral osteotomy lines as the donor site also facilitates the osteotomies by thinning the bone. the device allows us to harvest bone effectively under reduced surgical exposure, and simultaneous harvest and ostectomy contribute to a reduced operative time. the operative cost is relatively low in comparison with alloplastic materials. in this series, we did not experience resorption, migration, visibility problems, or infection with the bone grafts. a new practical, safe, and efficient tool for rhinoplasty was introduced, and the graft material was successfully used for smoothing the bony dorsum without any significant complications. none. not required. the authors have no financial disclosure or conflict of interest to declare in relation to the content of this article. no funding was received for this article. the work is attributed to ozan bitik, m.d. (private practice of plastic, reconstructive and aesthetic surgery in ankara, turkey). dear sir, early diagnosis of wound infections is crucial, as they have been shown to increase patient morbidity and mortality. hence, it is important that such infections are detected early to guide decision-making and management 1 . currently, the most common methods of identifying wound infection are clinical assessment and semi-quantitative analysis using wound swabs. bedside assessment is subjective, and it has been shown that bacterial infection can often occur without any clinical features. swabs, on the other hand, have the disadvantages of missing relevant bacterial infection at the periphery of the wound because of the sampling technique, and of delaying diagnostic confirmation, during which the bioburden of the wound may change. although tissue biopsy is the gold-standard diagnostic tool, it is seldom used as it is invasive, more technically demanding and more expensive. 
a hand-held, portable point-of-care fluorescence imaging device (moleculight i:x imaging device, moleculight, toronto, canada) was introduced to address the limitations of the other diagnostic methods 2 . this device takes advantage of the fluorescent properties of certain by-products of bacterial metabolism, such as porphyrins and pyoverdine. when excited by violet light (wavelength 405 nm), porphyrins emit red fluorescence, whereas pyoverdine emits cyan/blue fluorescence. the bacteria that produce porphyrins include s. aureus, e. coli, coagulase-negative staphylococci, beta-hemolytic streptococci and others, whereas pyoverdine, which emits cyan fluorescence, is specific to pseudomonas aeruginosa. this allows users to localise areas of bacterial colonisation at loads ≥10⁴ amongst healthy tissue, which instead emits green fluorescence 3 . the benefits of this device are that it is portable, non-contact (minimising cross-contamination) and non-invasive, and that it provides real-time localization of bacterial infection. all these features make it a useful tool to aid diagnosis and guide further investigation and management. many previous studies have examined the efficacy of autofluorescence imaging in diagnosing infections in chronic wounds 3-5 . however, identifying infections in acute wounds is equally important and would help guide antimicrobial management as well as surgical debridement. often, broad-spectrum antibiotics are given where clinical assessment remains inconclusive; this, however, may contribute to an increase in antimicrobial resistance. therefore, we evaluated the use of the moleculight i:x to identify infections in acute open wounds in hand trauma. we collected data from patients who attended the hand trauma unit over a 4-week period, prior to irrigation and/or debridement. wounds were inspected for clinical signs of infection, and autofluorescence images were taken using the moleculight i:x device. 
wound swabs were taken, and the results interpreted according to the microbiologist's report. autofluorescence images were interpreted by a clinician blinded to the microbiology results. 31 patients were included, and data were collected from 35 wounds. 3 wounds (8.6%) showed positive clinical signs of infection, 3 (8.6%) were positive on autofluorescence imaging and 2 (5.7%) of the wound swab samples were positive for significant infection. autofluorescence imaging correlated with clinical signs and wound swab results for 34 wounds (97.1%). in one case, clinical assessment and autofluorescence imaging showed positive signs of infection but the wound swabs were negative. to the best of our knowledge, this is the first investigation of autofluorescence imaging in an acute scenario. in this study, autofluorescence imaging correctly identified both of the 2 positive wound swab samples (100%) (fig. 1). one autofluorescence image that showed red fluorescence on a wound clinically identified as infected grew only usual regional flora on microbiological studies. the reason could be the method of sampling from the centre of the wound: on the autofluorescence image, the areas of significant bacterial growth were at the edges of the wound (fig. 2). this example illustrates the potential of autofluorescence imaging to guide more accurate wound sampling, as has also been shown in a non-randomised clinical trial by ottolino-perry et al. 4 . from a surgeon's perspective, autofluorescence imaging can guide surgical debridement by providing real-time information on the infected areas of the wound. furthermore, because of its portability, the device can also be used intraoperatively to provide evidence of sufficient debridement. although the device is easy to use, its requirement for a dark environment poses a logistical problem. 
the manufacturers have recognised this limitation of the device and have created a single-use black polyethylene drape called "darkdrape", which connects to the moleculight i:x using an adapter to provide optimal conditions for fluorescence imaging. while autofluorescence imaging can help clinicians decide whether or not to start antibiotics, it does not provide any information on the sensitivities of the bacteria. another limitation we encountered in our study is the difficulty of imaging acutely bleeding wounds, where blood shows up as black on fluorescence and may therefore mask underlying infection. in conclusion, autofluorescence imaging in acute open wounds may be useful for providing real-time confirmation of wound infection and thereby guiding management. none declared. none received. supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.bjps.2020.03.004 . when compared with the two previously published studies, publication rates have improved since 2007 and have not continued to decline. interestingly, the number of publications in jpras has fallen. this may be explained by a rise in the impact factor of the journal, increasing competition for publication, as well as an expansion in the number of surgical journals. we observed that the journal impact factor for free-paper publications was significantly greater, which likely reflects the stringency of the bapras abstract vetting process. comparison with other specialties is inherently difficult, primarily because of differences in study design and inclusion criteria. the exclusion of posters, the inclusion of abstracts published prior to presentation, and studies not referenced in pubmed all affect the reported publication rates. a large meta-analysis assessing publication of abstracts reported rates of 44%. 3 rates from other specialties are shown in figure 2. 
although our figure of close to 30% may seem low compared with other specialties, including abstracts published prior to presentation would increase the publication rate to 39%, making it more comparable; however, this would not be a direct comparison with the two previous bapras studies. one may debate whether the academic value of a meeting should be judged on its abstract publication ratio. however, the definition of a publication is itself clouded, with an increasing number of journals, including a number of open access journals, not referenced in the previous 'gold standard' of pubmed. 4 most would still argue the importance of stringent peer review as the hallmark of a valuable publication 5 , and perhaps this, along with citability, should remain the benchmark. in an age where publications are key components of national selection and indeed lifelong progression in many specialties, we must ensure that some element of quality control remains so as not to dilute the production of meaningful data. we have been able to reassess the publication rates for the primary meeting of uk plastic surgery. the bapras meeting remains a high-quality conference providing a platform to access the latest advances in the field. significant differences in the methodology of the available literature make comparisons with other specialties challenging; however, when these are accounted for, publication rates are similar. within a wider context, with the increase in open access journals, it has become ever more difficult to define a 'publication'. if publication rate is to be used as a surrogate for meeting quality, then only abstracts published after the date of the meeting should be included. in order to continually assess the quality of papers presented at bapras meetings, the conversion to publication should be regularly re-audited. none. dear sir, global environmental impact and sustainability have been heated topics in recent years. 
plastics and single-use items are widely, and perhaps unnecessarily, used in the healthcare sector. various recent articles 1,2 discuss the negative impacts of this in the surgical world, but can we look at nhs sustainability as a bigger picture? whilst it is a positive step to consider how we can reduce the environmental impact of modern operating practice, this risks being overly narrow and not taking a holistic view of how the health service as a whole can become more environmentally focused and reduce costs. in fact, the operating theatre is one of the more difficult places to make change. single-use medical devices seem like an obvious item to replace with a more environmentally friendly re-usable alternative, but what about patient safety? such a change would require the implementation of new workflows and supervision structures to ensure that patient safety is maintained. these take time to create, will meet resistance in their design and implementation, and may not ultimately be adopted. to overcome these challenges, we must take a holistic view of the hospital environment; doing so reveals numerous opportunities for improvement with minimal impact on patient safety. the nhs incurs significant waste through using energy unnecessarily. some examples are readily visible after working in a hospital for just a few weeks: computers are left on standby through the night and at weekends; lights are left on throughout the night; and empty rooms are heated or cooled when left unoccupied. other sources of energy waste are less visible, but it is likely that some machinery (particularly air conditioning units) would show a rapid return on investment through energy savings if replaced on a more regular basis. 
in the past, saving energy would have required a sustained campaign to educate staff and would still have been subject to the vagaries of human management (forgetting to switch the heating off on a friday night could lead to more than two days of wasted energy if not revisited until monday). today, solutions based on internet of things (iot) technology can use sensors to monitor the environment and take action to reduce consumption. with the use of ai and machine learning, these systems are becoming so advanced that they can monitor and anticipate energy usage, allowing rooms to be heated or cooled so that they are at the ideal temperature when staff arrive. the nhs is starting to use such technology, with wigan hospital the first example to install intelligent lighting. 3 adoption should not be limited to lighting, however, and the nhs needs to adopt best practice from the commercial sector. for example, sensorflow, based in singapore, provides an intelligent system that optimises cooling/heating costs for hotels around south east asia, saving the operators up to 40% in energy costs. 3,4 without doubt, these systems could also be applied to hospital infrastructure and help the nhs further reduce energy consumption. in addition to reducing energy consumption, the reduction of single-use plastics has become a key focus in recent years, and the nhs has started to address this issue. at least 196 million single-use plastic items were purchased by the nhs last year. 5 the target to phase out plastic items used by retailers in the next 12 months is laudable; however, a significant amount of disposable plastic is also used in staff coffee rooms and hospital canteens. eliminating such items completely and encouraging staff to use reusable coffee cups and metal cutlery could compound the cost-saving and environmental benefits. 
the nhs has established an early leadership position in tackling environmental challenges - the first european intelligent lighting installation and ambitious targets to cut disposable plastic items - but more needs to be done. to maximise impact, the nhs needs to be seen as a whole (not department by department), with the most senior executives in the health service driving national-level change. we read with interest the recent article 'healthcare sustainability - the bigger picture'. 1 the wider picture of the nhs's environmental impact and sustainability clearly needs to be addressed. however, large-scale improvement projects to hospital buildings, such as intelligent lighting and heating systems, are likely to require huge investment in infrastructure and modernisation that the nhs in its current form is unfortunately unlikely to be able to make. we believe that the field of medical academia should similarly be contributing to environmental sustainability. firstly, the shelves of hospital libraries and offices internationally are lined with print copies of journals. we reviewed the 20 surgical journals with the highest impact factors and found that all still offered a print subscription option, with 19 printing monthly issues. 2 readers are able to access all journals electronically through institutional subscriptions or via the nhs openathens platform, which in our view is a more time-efficient way to search for, read and reference articles. as such, we commend jpras for its recent move to online-only publication. additionally, the increasing use of social media to discuss research, and of visual abstracts to encourage readership, is likely to accelerate this shift. secondly, the environmental impact of the current academic conferencing culture must be addressed. 
by the end of training, a uk surgical trainee spends an average of £5411 attending academic conferences, but beyond this personal expenditure, what is the environmental cost? 3 for each conference we attend, the printing of poster presentations, conference programmes and certificates all detrimentally impacts our environment. furthermore, consider the conference sponsor bags we receive, filled with further printed material, plastic keyrings, stress-balls and disposable pens, all contributing to the build-up of plastic in our oceans. 4 conferences such as the british association of plastic and reconstructive surgeons scientific meeting have now started using electronic poster submissions, with presentations shown consecutively on large television screens - but further measures are possible. a well-designed conference smartphone app forgoes the need for printed programmes and leaflet advertising from sponsors and could include measures to reduce the carbon footprint, such as promotion of ride-share options for venue travel. the concept of virtual conferences has also been explored. organisers of an international biology meeting recently asked psychologists to assess the success of a parallel virtual meeting, with satellite groups organising local social events afterwards. more than 80% of the delegates joined online and there was an overall 10% increase in those attending the conference; a full analysis of the success of this approach to conferences is awaited. 5 virtual conferences may enable delegates to sign in from multiple time zones and minimise travel, disruption of clinical commitments and time away from family. this option is being pursued by the reconstructive surgery trials network (rstn) in the uk, whereby the annual scientific meeting will be delivered using teleconferencing technology at four research-active hubs across the uk, reducing delegate travel substantially and the conference's carbon footprint in turn.
there is a clear but unmeasurable benefit of networking face-to-face for the formation of personal connections, exchange of knowledge and opportunities for collaboration. the use of social media, instant messaging applications and modern teleconferencing technology is vital to retain this valuable aspect of academic conferencing. equally, perhaps there is a balance to be found, with societies currently holding biannual meetings moving to make one of them virtual, or running a parallel virtual event for those travelling long distances. the academic community must play a role in environmental sustainability by reducing the carbon footprint of our journals and conferences. jcrw is funded by the national institute for health and research (nihr) as an academic clinical fellow. none for completion of submission.
we read with interest the study by sacher et al., who compared body mass index (bmi) and abdominal wall thickness (awt) with the diameter of the respective diea perforator and the siea. 1 they found a significant (p < 0.05) positive correlation between these variables, concluding that this association may mitigate the increased perioperative risk seen in patients with high bmi. their findings disagree with a previous smaller study by scott et al. 2 reconstruction in the high-bmi patient group can be challenging and is associated with higher complication rates. 3 despite this, satisfaction with autologous reconstruction appears similar across bmi categories. 4 as the authors discuss, perfusion, as a function of perforator diameter, is of key relevance to the safety of performing autologous breast reconstruction in patients with higher bmi. larger perforator sizes relative to total flap weight have been suggested to reduce the risk of post-operative flap skin or fat necrosis.
5 while this is likely an oversimplification, as flap survival will also depend on multiple factors including perforator row compared with abdominal zones harvested, it does suggest that if the high-bmi patient group has reliably larger perforators then their risk profile may be reduced. however, we suggest caution regarding reliance on the correlation they found between bmi or awt and perforator size when planning free tissue transfer. while they demonstrate p values suggesting correlation between bmi or awt and perforator diameter, the r (correlation coefficient) values determined through pearson correlation analysis are low, ranging from 0.219 to 0.456. the resulting r² (coefficient of determination) values are therefore in the range 0.048-0.21, suggesting that only 4.8-21% of the variation in perforator diameter can be related to bmi or awt. it is therefore likely that other variables not accounted for in the correlation analysis, such as height and historical abdominal wall thickness, also play roles in determining perforator size, in addition to anatomical variation. furthermore, their analysis and results depend on a linear relationship between the variables, which may not be the case. therefore, although the authors demonstrate a correlation between abdominal wall thickness and perforator size, there is substantial variation between individual patients, and so this relationship cannot be relied upon when planning autologous reconstruction.
we read with interest pescarini et al.'s article entitled 'the diagnostic effectiveness of dermoscopy performed by plastic surgery registrars trained in melanoma diagnosis'. 1 the article is of great interest in highlighting the potential of plastic surgery registrar training in domains such as dermoscopy, especially for those trainees looking to specialise in skin cancer.
training in these experiential skill domains is essential to building a diagnostic framework, and the comparable diagnostic accuracy to dermatologists reflects this. it would be of great benefit to understand further how diagnostic accuracy evolves along the inevitable learning curve experienced with the dermoscope. pescarini et al. comment briefly on the method of training, but we believe the timeline is key, as are mentorship and regular appraisal. terushkin et al. found that during the first year of dermoscopy training, benign-to-malignant ratios in fact increased in trainee dermatologists before going on to decrease, 2 potentially secondary to picking up more anomalies without yet having the skill set to determine whether these are benign. there is no reason to suggest that plastic surgery trainees' learning curves should differ significantly. this, of course, would skew the data presented in terms of accuracy at the end of the three-year study period. more helpful would be a demonstration of how accuracy changes with time and experience, as one would expect, and of course how these rates compare to those of dermatologists. this would have implications for training programmes where specific numbers of skin lesions or defined timeframes for skin exposure during training are set as benchmarks for qualification. this is particularly pertinent for uk trainees; the nice guidelines for melanoma state that dermoscopy should be undertaken for pigmented lesions by 'healthcare professionals trained in this technique'. 3 understanding the number of lesions that trainee plastic surgeons have to assess with a dermoscope before their diagnostic accuracy improves - or the time needed to achieve that accuracy - might be a key factor in placement duration and the numbers required for trainees to become consciously competent dermoscopic practitioners. reproducible training programmes in this regard are therefore vital.
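the benign-to-malignant ratio discussed above is simply the count of benign lesions biopsied per malignancy found; the numbers below are invented purely to illustrate the learning-curve shape described, not taken from any study:

```python
# Invented numbers illustrating the benign-to-malignant (b:m) ratio curve:
# early dermoscopy use flags more anomalies without better discrimination,
# so the ratio rises before training brings it back down.

def benign_malignant_ratio(benign_biopsied, malignant_found):
    return benign_biopsied / malignant_found

print(benign_malignant_ratio(40, 10))  # pre-dermoscopy baseline: 4.0
print(benign_malignant_ratio(90, 10))  # first year of dermoscopy: 9.0
print(benign_malignant_ratio(25, 10))  # after the learning curve: 2.5
```

averaging accuracy over a whole three-year window, as a retrospective study must, mixes all three phases of such a curve together; that is why a demonstration of accuracy over time would be more informative.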
it must be pointed out that the role of the dermoscope for plastic surgeons is likely to be narrower than for our dermatological colleagues. within the uk, the role of the plastic surgeon is primarily reconstructive, with some subspecialty involvement in the diagnosis of melanomas and a range of non-melanomatous skin cancers and skin lesions. for plastic surgeons, the dermoscope is primarily a tool in the diagnosis of in situ or early melanoma where the diagnosis is unclear following a referral for consideration of surgical removal. where doubt remains over a naevus, surgical excision is still the normal safe default. dermatologists use dermoscopes for a broad range of diagnostic purposes on a wide variety of skin conditions. the familiarity and expertise with this instrument that they garner is therefore not surprising. we must be clear, in resource-limited healthcare systems, about what our specific roles are as plastic surgeons and how the burden of patient assessment is shared, so as to deploy our skills appropriately within the context of a broader multidisciplinary framework. accuracy with the dermoscope is essential to safely treating patients in a binary fashion - should the lesion be removed or monitored? comparison with dermatological expertise is helpful as a guide, and dermoscopy has an important diagnostic role for plastic surgeons, but we should not strive to be equivalent in skill to dermatologists with dermoscopes at the expense of the development of vital surgical reconstructive skills and excellence throughout plastic surgery training.
response to the comment made on the article "the diagnostic effectiveness of dermoscopy performed by plastic surgery registrars trained in melanoma diagnosis": we strongly agree that understanding the learning curve experienced by plastic surgery registrars using the dermoscope would be of benefit. as stated in our article, the limit of our study is its retrospective nature.
moreover, the training and the level of competence differed between the three registrars. at the beginning of the data collection, two of them were in their third year of specialist training and had been using the dermoscope for at least a year, while the third was in his first year. all the registrars attended specific but different dermoscopy courses, and all of them completed 10 hours of on-site training with a competent consultant. for this reason, expertise partially differed among the three registrars. nevertheless, we believe a 3-year period should be long enough to estimate homogeneously their accuracy in diagnosing melanoma. in fact, townley et al. 1 demonstrated that attendance at the first international dermoscopy course for plastic surgeons, oxford, improved the accuracy of diagnosing malignant skin lesions by dermoscopy compared with naked-eye examination. we believe a well-planned prospective study would be of great benefit in planning a reproducible, plastic surgery-oriented dermoscopy training programme. this could help to estimate when a clinician can be considered a competent dermoscopic practitioner. it should be underlined that learning how to use the dermoscope is not something that can be done from time to time; it needs effort and self-study. we believe it is important to properly plan formal training in dermoscopy for all plastic surgery registrars who will use this tool in their practice. vahedi et al. 2 stated, as per their survey, that of the 53% of plastic surgery trainees who used the dermoscope in their practice, only one had formal training. as all trainees perform outpatient appointments dealing with skin lesions, and especially for trainees looking to specialise in skin cancer, we believe the expertise gained through specific courses and training is not at the expense of the development of surgical reconstructive skills, but can instead improve the performance of outpatient clinics.
proper use of the dermoscope will make the skin cancer-specialised plastic surgeon more confident, if not in detecting melanoma then at least in leaving evidently benign lesions alone. keeping in mind a multidisciplinary approach, close cooperation between dermatologists and plastic surgeons remains of paramount importance in skin cancer treatment. there is no conflict of interest for any of the authors.
dear sir, as the author mentioned in this publication, the correction of the infra-orbital groove by microfat injection did increase postoperative satisfaction with lower blepharoplasty surgery 1 . in this study, we want to explore whether this procedure can replace fat pad transposition. months after microfat injection, we have observed that fat continues to be present but its volume gradually decreases, and in some patients it totally vanishes. with fat pad transposition, the fat volume does not decrease. it seems that both have their advantages and disadvantages, because the volume of transplanted fat after lower blepharoplasty might gradually disappear over time. survival of fat transposed through fat pad transposition is the best, creating a more natural look at the tear trough; however, the volume of augmentation might not be enough. it would be exceptional if we could combine both advantages, that is, to administer microfat injection after fat transposition. but prior to that, we would like to share the experience of the author. the fat pad is usually transposed to the periosteum in two ways: one is transposition of the medial fat pad to the inner groove, and the other is transposition of the central fat pad to the centre of the infra-orbital groove. as mentioned by the author, we fill the superficial layer (under the skin) and the periosteal (deep) layer. injection into the deeper layer is performed not after lower blepharoplasty but before the musculocutaneous flap is closed.
after fat pad transposition is completed, we first cover the musculocutaneous flap before asking the patient to sit up. then, the surgeon assesses whether further filling of the groove with fat is needed. if necessary, the musculocutaneous flap is opened and more fat is injected into the groove between the fat pads, but definitely not into the fat pads. the reason we do the injection before the flap is closed is to perform the insertion accurately and to avoid entering the intra-orbital fat pad, which may worsen the appearance of eye bags. we inject the superficial fat only after the flap wound is closed. this procedure modifies the groove under the eye more accurately. we share our surgical methods with you in the hope that fat utilisation and fat pad transposition will greatly improve surgical satisfaction.
dear sir, eiben and gilbert are thanked for their comments. they may be correct in the original description of the respective flaps, but in our experience the five-flap z-plasty has always been known colloquially as the jumping man flap. indeed, extra caution is required in secondary burns reconstruction. the skin of these patients is typically thin, often scarred and unforgiving. flaps should never be undermined unless in an area of completely virgin tissue. the modification we presented does result in an apparently thinner base for the 'arm limb' flaps, but traditionally wider-based flaps would have been transferred and then trimmed with the same outcome. the tiny sizes involved in paediatric eyelid surgery would not be the best forum for experimentation, and certainly mustardé's original design would seem safest in that setting. we had uniquely sought also to measure precisely the geometric gain in length, and felt that the result was impressive.
none.
letter to the editor: evaluating the effectiveness of plastic surgery simulation training for undergraduate medical students 1 we read with interest the recent correspondence regarding the effectiveness of plastic surgery simulation for training undergraduate medical students. we are in wholehearted agreement with the statement regarding medical school curricula lacking exposure to plastic surgery and commend the authors for their efforts to pique the interest of medical students in our specialty. we wish, however, to point out some vagueness that, unless clarified, could be misleading to your readership. the correspondence states: "the decrease in competition ratios for plastic surgery". we believe that current data support the opposite view. taking into account published data from health education england over the last 4 years 2 , there has in fact been a 41% rise in competition ratios from 2016 to 2019 (fig. 1), suggesting increasing interest in the specialty. highlighting this increase in demand supports the authors' desire for more undergraduate exposure to plastic surgery. this increased input into the uk curriculum would also help all medical students become aware of the support plastic surgeons can provide to other specialties, as this is a particular feature of the specialty. in an increasingly specialised medical world, we feel it is important that all doctors are equipped with the knowledge to best serve their patients. no funding has been received for this work and the authors have no competing interests.
dear sir/madam, in response to critical personal protective equipment (ppe) shortages during the covid-19 pandemic, medsupplydriveuk was established by ent trainee ms. jasmine ho, and medsupplydriveuk scotland by two plastic surgery trainees (ms. gillian higgins and mrs. eleanor robertson). we applied the principles of creative problem solving and multidisciplinary collaboration instilled by our specialty.
since march 2020, we have recruited over 400 volunteers to mobilise over 200,000 pieces of high-quality ppe donated from industry to the nhs and social care. we have partnered with academics and leaders of industry to manufacture surgical gowns, scrubs and visors using techniques including laser cutting, injection moulding and 3d printing. we have engaged with nhs boards and trusts and with politicians at local, regional and national level to advocate for healthcare worker protection in accordance with health and safety executive and coshh legislation, including engineering controls and ppe that is adequate for the hazard and suitable for the task, user and environment. public health england (phe) currently advises an ffp3 level of protection only in the context of a list of aerosol-generating procedures 1 . a surgical mask confers 6x (63%) protection, ffp2/n95 100x (92-98%) and ffp3 100-10,000x (>99%) protection (figure 1). as sars-cov-2 is a novel pathogen, the evidence is nascent and evolving, and since transmission occurs via aerosols, droplets and fomites from the aerodigestive tract, all 10 uk surgical associations have issued guidance to use higher levels of ppe for procedures that are not included in the phe list 2 . cbs, entuk and baoms have issued statements supporting the use of reusable respirators and powered air-purifying respirators, and their use is approved by phe, health protection scotland, the public health agency, public health wales, the nhs and the academy of medical royal colleges 1 . the first author has experienced the need to quote bapras guidance 2 in defence of their use of ppe. medsupplydrive (uk and scotland) hope to empower all healthcare workers to demand provision of adequate (i.e. will protect from sars-cov-2) and suitable (for the task, user and environment) ppe by engaging with their employers directly or through unions, royal colleges and associations. the authors have no competing interests.
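as a rough aid to reading the "nx" figures quoted above (nominal arithmetic only; real-world performance depends on fit, which is the letter's later argument about seal area), a protection factor of n means the wearer inhales roughly 1/n of the ambient concentration:

```python
# Nominal protection-factor arithmetic (illustrative only): an "Nx" mask
# reduces the wearer's inhaled concentration to roughly 1/N of ambient.

def relative_exposure(protection_factor):
    return 1.0 / protection_factor

for mask, pf in [("surgical mask", 6),
                 ("ffp2/n95", 100),
                 ("ffp3, upper bound", 10_000)]:
    print(f"{mask} ({pf}x): ~{relative_exposure(pf):.4%} of ambient exposure")
```

the step from 6x to 100x is therefore a far larger reduction in inhaled dose than the headline percentages alone suggest.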
as a nation, we must learn from other countries that successfully protected their workforce. data suggest that staff deaths are avoidable with the use of occupational health measures and ffp3-grade ppe 3 , despite which at least 245 uk healthcare workers have died of covid-19 4 . the strain placed on systems by sars-cov-2, with reduced access to operating theatres, beds, equipment and staff, has the potential for serious detrimental consequences for surgical training 5 . ppe shortages and the subsequent necessity for rationing are causing additional harm. due to global demand and supply chain failures, ffp3 disposable masks for people with small faces are in particularly short supply. the majority of these individuals are female, and they are currently provided with no solution apart from avoiding "high-risk" operating if/when this resource runs out, further depriving them of training opportunities. reusable respirators provide superior respiratory protection over disposable ffp3 masks due to their design characteristics. they are more likely to provide a reliable fit due to increased seal surface area (half face 10 mm, full face 20 mm). as they are designed to be decontaminated between patients and after each shift, they are both economically and ecologically advantageous, while also reducing the fit-testing burden and negating reliance upon precarious supply chains. there are factories in the uk that already make reusable respirators, and medsupplydrive have been contacted by uk manufacturers looking to retool to meet this demand. although some nhs trusts remain reluctant to use reusable respirators, others have already adopted them routinely, using manufacturer decontamination and filter-change advice. one nhs trust has supplied every member of its workforce with a reusable respirator as a sustainable plan for ongoing pandemic waves. it is apparent that healthcare workers are unable to access sufficient quantities of high-quality respiratory protection.
reusable respirators provide adequate protection from sars-cov-2 as well as being eminently suitable for a wide range of users, tasks and environments. we call on those reviewing decontamination and filter policy for reusable respirators to appreciate the urgency of the situation and expedite the process to enable all health and social care workers to access the respiratory protection that they need.
references:
at the epicenter of the covid-19 pandemic and humanitarian crises in italy: changing perspectives on preparation and mitigation
love in the time of corona
world health organization
world health organization. who director-general's opening remarks at the mission briefing on covid-19 - 12
plastic and reconstructive medical staffs in front line
national health commission of the people's republic of china. press conference of the joint prevention and control mechanism of the state council
nam therapy - evidence-based results
covid-19: how doctors and healthcare systems are tackling coronavirus worldwide
governmental public health powers during the covid-19 pandemic: stay-at-home orders, business closures, and travel restrictions
a plastic surgery service response to covid-19 in one of the largest teaching hospitals in europe
transmission routes of 2019-ncov and controls in dental practice
who declares covid-19 a pandemic
covid-19: uk starts social distancing after new model points to 260 000 potential deaths
telehealth for global emergencies: implications for coronavirus disease 2019 (covid-19)
prospective evaluation of a virtual urology outpatient clinic
virtual fracture clinic delivers british orthopaedic association compliance
quality indicators for plastic surgery training
available at url: https://en.wikipedia.org/wiki/seminar (accessed
internet resource: the telegraph. the inflexibility of our lumbering nhs is why the country has had to shut down
internet resource: the british society for surgery of the hand.
covid-19 resources for members
caring for patients with cancer in the covid-19 era
maxillofacial trauma management during covid-19: multidisciplinary recommendations
asps statement on breast reconstruction in the face of covid-19 pandemic
statement from the association of breast surgery 15th march 2020: confidential advice for health professionals
blazeby jm; breast reconstruction research collaborative. short-term safety outcomes of mastectomy and immediate implant-based breast reconstruction with and without mesh (ibra): a multicentre, prospective cohort study
how the wide awake tourniquet-free approach is changing hand surgery in most countries of the world. hand clin
hand trauma service: efficiency and quality improvement at the royal free nhs foundation trust
"one-stop" clinics in the investigation and diagnosis of head and neck lumps
the implications of cosmetic tourism on tertiary plastic surgery services
the need for a national reporting database
policy on the redeployment of staff
trauma management within uk plastic surgery units
president of the british society for surgery of the hand. (2020) 24th march
highlights for surgeons from phe covid-19 ipc guidance
american society of plastic surgery website. asps guidance regarding elective and non-essential patient care
the effect of economic downturn on the volume of surgical procedures: a systematic review
an analysis of leading, lagging, and coincident economic indicators in the united states and its relationship to the volume of plastic surgery procedures performed
telemedicine in the era of the covid-19 pandemic: implications in facial plastic surgery
united states chamber of commerce website.
resources to help your small business survive the coronavirus
transmission of covid-19 to health care personnel during exposures to a hospitalized patient
early transmission dynamics in wuhan, china, of novel coronavirus-infected pneumonia
otorhinolaryngologists and coronavirus disease 2019 (covid-19)
quantifying the risk of respiratory infection in healthcare workers performing high-risk procedures
skills fade: a review of the evidence that clinical and professional skills fade during time out of practice, and of how skills fade may be measured or remediated
ad hoc committee on health literacy for the council on scientific affairs
training strategies for attaining transfer of problem-solving skill in statistics: a cognitive-load approach
use of a virtual 3d anterolateral thigh model in medical education: augmentation and not replacement of traditional teaching?
augmenting the learning experience in primary and secondary school education: a systematic review of recent trends in augmented reality game-based learning
aging and wound healing
tissue engineering and regenerative repair in wound healing
duration of surgery and patient age affect wound healing in children
investigating histological aspects of scars in children
formation of hypertrophic scars: evolution and susceptibility
early laser intervention to reduce scar formation in wound healing by primary intention: a systematic review
early laser intervention to reduce scar formation - a systematic review
effectiveness of early laser treatment in surgical scar minimization: a systematic review and meta-analysis
cochrane handbook for systematic reviews of interventions version 6
prospective or retrospective: what's in a name?
how to run an effective journal club: a systematic review
the evolution of the journal club: from osler to twitter
free flap options for reconstruction of complicated scalp and calvarial defects: report of a series of cases and literature review
the effect of age on microsurgical free flap outcomes: an analysis of 5,951 cases
factors affecting outcome in free-tissue transfer in the elderly
reconstruction of postinfected scalp defects using latissimus dorsi perforator and myocutaneous free flaps
long-term superiority of composite versus muscle-only free flaps for skull coverage
indocyanine green lymphography findings in older patients with lower limb lymphedema
microsurgical technique for lymphedema treatment: derivative lymphatic-venous microsurgery
lower-extremity lymphedema and elevated body-mass index
lymphorrhea responds to negative pressure wound therapy
lymphovenous anastomosis aids wound healing in lymphedema: relationship between lymphedema and delayed wound healing from a view of immune mechanisms
evolving practice of the helsinki skin bank
skin graft meshing, overmeshing and cross-meshing
gluteal implants versus autologous flaps in patients with postbariatric surgery weight loss: a prospective comparative study of 3-dimensional gluteal projection after lower body lift
redefining the ideal buttocks: a population analysis
classification system for gluteal evaluation
blondeel and others.
doppler flowmetry in the planning of perforator flaps
frequency of the preoperative flaws and commonly required maneuvers to correct them: a guide to reducing the revision rhinoplasty rate
temporalis fascia grafts in open secondary rhinoplasty
the turkish delight: a pliable graft for rhinoplasty
bone dust and diced cartilage combined with blood glue: a practical technique for dorsum enhancement
the use of bone dust to correct the open roof deformity in rhinoplasty
wound microbiology and associated approaches to wound management
moleculight_ix_user_manual_rev_1.0_english
the use of the moleculight i:x in managing burns: a pilot study
improved detection of clinically relevant wound bacteria using autofluorescence image-guided sampling in diabetic foot ulcers
efficacy of an imaging device at identifying the presence of bacteria in wounds at a plastic surgery outpatients clinic
publication rates for abstracts presented at the british association of plastic surgeons meetings: how do we compare with other specialties?
are we still publishing our presented abstracts from the british association of plastic and reconstructive surgery (bapras)?
full publication of results initially presented in abstracts
the true cost of science publishing
science for sale: the rise of predatory journals
plastics in healthcare: time for a re-evaluation
green theatre
wigan's hospital organisation is first health trust in europe to install intelligent lighting
sensorflow provides smart energy management for hotels in malaysia
nhs bids to cut up to 100 million plastic straws, cups and cutlery from hospitals
healthcare sustainability - the bigger picture
on behalf of the council of the association of surgeons in training
cross-sectional study of the financial cost of training to the surgical trainee in the uk and ireland
plastic waste inputs from land into the ocean
low-carbon, virtual science conference tries to recreate social buzz
body mass index and abdominal wall thickness correlate with perforator caliber in free abdominal tissue transfer for breast reconstruction
patient body mass index and perforator quality in abdomen-based free-tissue transfer for breast reconstruction
increasing body mass index increases complications but not failure rates in microvascular breast reconstruction: a retrospective cohort study
are overweight and obese patients who receive autologous free-flap breast reconstruction satisfied with their postoperative outcome? a single-centre study
predicting results of diep flap reconstruction: the flap viability index
the diagnostic effectiveness of dermoscopy performed by plastic surgery registrars trained in melanoma diagnosis
analysis of the benign to malignant ratio of lesions biopsied by a general dermatologist before and after the adoption of dermoscopy
assessing suspected or diagnosed melanoma
dermoscopy - time for plastic surgeons to embrace a new diagnostic tool?
the use of dermatoscopy amongst plastic surgery trainees in the united kingdom
modification of jumping man flap
combined double z-plasty and v-y advancement for thumb web contracture
plastic surgery in infancy
evaluating the effectiveness of plastic surgery simulation training for undergraduate medical students
united kingdom mr. b.s. dheansa queen victoria hospital
recommended ppe for healthcare workers by secondary care inpatient clinical setting, nhs and independent sector
personal protective equipment (ppe) for surgeons during covid-19 pandemic: a systematic review of availability, usage, and rationing
covid-19: protecting worker health. annals of work exposures and health
memorial of health & social care workers taken by covid-19. nursing notes 2020
covid-19 robertson canniesburn plastic surgery and burns unit
georope geo-technical and rope access solutions, west quarry
none. the authors have no financial interests to declare in relation to the content of this article and have received no external support related to this article. no funding was received for this work. the authors would like to thank catriona graham, sarcoma specialist nurse, who helped in the evaluation of this study. the authors kindly thank the beatson cancer charity, uk (grant application number 19-20-001), the jean brown bequest fund, uk, and the canniesburn research trust, uk, for funding this study. the sponsors had no influence on the design, collection, analysis, write-up or submission of the research. supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.bjps.2020.03.011. none. the authors declare no funding. jeremy rodrigues provided data from the two nhs trust journal clubs and gave invaluable advice. nil. all authors declare that there were no funding sources for this study and they approved the final article. supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.bjps.2020.02.029.
dear sir, long has the term 'publish or perish' been considered medical doctrine, and it has historically been a prerequisite for progression in research-driven specialties such as plastic surgery. national, or indeed international, presentation is pivotal to disseminating information, but it also provides a stepping-stone to future publications. in the uk, bapras meetings have always represented the ideal platform for this. of significant interest is the conversion of accepted abstracts into peer-reviewed publications. previous studies 1, 2 have assessed abstract publication for bapras meetings and have shown a declining conversion rate. we re-assessed this in order to establish whether this reported downtrend is continuing and how plastic surgery compares to other specialties. all abstracts from bapras meetings between winter 2014 and summer 2016 were analysed. later meetings were excluded to allow adequate lag time for publication. abstracts were identified retrospectively from conference programmes accessible via the bapras website (www.bapras.org.uk). pubmed (https://www.ncbi.nlm.nih.gov/pubmed/) and google scholar (https://scholar.google.com/) databases were used to search for full publications.
cross-referencing of published papers with abstracts for content was completed to ensure matched studies. abstracts published prior to the conference date were excluded. two-tailed t-testing was used to assess for statistical significance between variables. dear sir, diver and lewis described a modification of the "jumping man flap". 1 in fact, what they have described is a modification of the 5-flap z-plasty. this was described by hirschowitz et al. 2 it is not a jumping man as it has no body. the true jumping man flap was described by mustarde 3 for the correction of epicanthal folds and telecanthus. we have used the 5-flap z-plasty particularly for the release of first web space contractures following burns, for the modification of raised curved scars of the trunk and limbs following burns, and for the correction of epicanthal folds in small children. using the diver and lewis modification in burn cases results in thin and less vascular flaps. when correcting epicanthal folds in children, the flaps are so small that reducing their size in any way would make it near impossible to suture the flaps correctly. key: cord-017031-i10q2569 authors: brix, gunnar; kolem, heinrich; nitz, wolfgang r.; bock, michael; huppertz, alexander; zech, cristoph j.; dietrich, olaf title: basics of magnetic resonance imaging and magnetic resonance spectroscopy date: 2008 journal: magnetic resonance tomography doi: 10.1007/978-3-540-29355-2_2 sha: doc_id: 17031 cord_uid: i10q2569 in this chapter, the basic principles of magnetic resonance imaging (mri) and magnetic resonance spectroscopy (mrs) (sects. 2.2, 2.3, and 2.4), the technical components of the mri scanner (sect. 2.5), and the basics of contrast agents and the application thereof (sect. 2.6) are described. furthermore, flow phenomena and mr angiography (sect. 2.7) as well as diffusion and tensor imaging (sect. 2.8) are elucidated.
the basic physical principles of nuclear magnetic resonance (nmr; in the medical literature: magnetic resonance [mr]) can be understood in depth and in detail based on quantum mechanics. in sect. 2.2, however, another description is attempted that is almost physically exact and uses only a few simple arguments of quantum mechanics. in turn, the presentation will be more complex, but it can still be understood with only basic knowledge of physics. for this reason, this synopsis should precede the detailed description in the following sections to guide the reader. mr examinations are possible if the atomic nuclei of the tissue of interest possess a nuclear magnetic moment µ. atomic nuclei with odd numbers of nucleons (here: protons, neutrons) possess such magnetic moments. the nucleus of the hydrogen atom, consisting of only one proton, is the simplest atomic nucleus with an odd number of nucleons and thus has the biggest magnetic moment of all nuclei. its natural abundance of almost 100%, its ubiquitous occurrence, and the high mobility of water protons in living matter are further prerequisites for using the low-sensitivity nmr method for imaging in human subjects. this low sensitivity compared with other imaging methods, e.g., positron emission tomography, cannot be emphasized enough. the sensitivity difference between these two methods is several orders of magnitude (~10^5 to 10^6). this fact has to be taken into account when magnetic resonance imaging is envisioned for specific probe imaging, nowadays known as molecular imaging.
in spite of the abovementioned low sensitivity of mr, proton imaging is possible in humans because of the high magnetic moment, ~100% abundance, high concentration, and high mobility of protons in tissue. the following consideration will be restricted to hydrogen nuclei only. the basis of magnetic resonance imaging is a simple resonance phenomenon. in the absence of a magnetic field, the magnetic moments of a specimen are not oriented at all; in an external magnetic field, however, the magnetic moments are no longer randomly oriented: the application of an external magnetic field b0 forces the magnetic moments µ to align along the field. due to basic physics principles, the orientation has two quantum states with respect to the external magnetic field, the parallel and the antiparallel state, which have different magnetic energies em; the energy difference is ∆em = γ · ħ · b0, with γ and ħ being the gyromagnetic ratio and planck's constant, respectively. in thermal equilibrium, both states possess different occupation numbers, with the low-energy parallel state having a higher probability of occupation than the high-energy antiparallel state, resulting in a macroscopic and therefore measurable net magnetization parallel to the orientation of the external magnetic field. this thermal equilibrium state can be disturbed by irradiation with an alternating electromagnetic field whose radiation energy erf is identical to the energy splitting ∆em caused by the magnetic field, the radiation energy being erf = ħ · ω0, with ω0 being the resonance frequency of the spin system-the so-called larmor frequency. due to the resonant irradiation, the spin system takes up additional energy that can be dissipated only if the system is coupled to its microenvironment. this coupling strength is described by the so-called t1 relaxation time (also known as the longitudinal or spin-lattice relaxation time).
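As a quick numerical illustration of the resonance condition ω0 = γ · B0 (a sketch, not from the source text; the value γ/2π ≈ 42.577 MHz/T for ¹H is a standard tabulated constant):

```python
# Larmor frequency of the proton from the resonance condition w0 = gamma * B0.
GAMMA_BAR_H = 42.577e6  # Hz/T, gyromagnetic ratio of 1H divided by 2*pi (tabulated value)

def larmor_frequency_hz(b0_tesla: float) -> float:
    """Return the Larmor (resonance) frequency f0 = (gamma / 2*pi) * B0 in Hz."""
    return GAMMA_BAR_H * b0_tesla

for b0 in (1.5, 3.0, 7.0):
    print(f"B0 = {b0:.1f} T -> f0 = {larmor_frequency_hz(b0) / 1e6:.1f} MHz")
```

At the clinical field strengths mentioned later in the text this gives roughly 64 MHz (1.5 T), 128 MHz (3 T), and 298 MHz (7 T).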
an equivalent for the coupling of the spins to each other is the t2 relaxation time (also known as the transversal or spin-spin relaxation time). for tissues, typical t1 relaxation times are between 100 and 2,000 ms and t2 relaxation times between 10 and 1,000 ms. mr imaging utilizes pulsed nmr: the alternating electromagnetic field, the so-called radiofrequency (rf) field, is applied only for a short period of time (in general, pulses last some milliseconds). the short rf pulse excites the spin system via a transmitter coil. after irradiation of the nuclear spin system, a receiver coil can detect a damped time-dependent signal with a frequency of ω0. this signal is called the free induction decay (fid). the damping of the signal is ruled by the t2 relaxation time, and its frequency by the strength of the external magnetic field (constant magnetic moment assumed). in practical terms, not only does the t2 relaxation time influence the damping of the signal, but also the technically related inhomogeneity of the external magnetic field. the signal decay caused by t2 relaxation plus field inhomogeneity is characterized by the t2* relaxation time and is in general much faster than the decay caused by t2 relaxation alone. only special pulse sequences (e.g., spin-echo sequences) can eliminate the influence of the inhomogeneity of the external magnetic field and thus allow the measurement of the t2 relaxation times specific to the substance/tissue. the influence of the t2 relaxation times is mainly limited to the amplitude of the signal. a prerequisite for image reconstruction (sect. 2.3) is exact information about the origin of the mr signal. this spatial information can be generated by space-dependent magnetic fields additionally applied along the three space coordinates. these space-dependent magnetic fields, called magnetic field gradients, are small compared with the main external field and are generated by special coils mounted in the bore of the magnet.
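The damped FID signal and the combination of T2 decay with field-inhomogeneity decay into T2* can be sketched as follows (a minimal illustration; the specific time constants and frequency are assumed values, not from the text):

```python
import math

def fid(t, s0=1.0, f0=63.9e6, t2_star=0.02):
    """Free induction decay: an oscillation at the Larmor frequency f0,
    damped with the effective time constant T2* (values are illustrative)."""
    return s0 * math.cos(2 * math.pi * f0 * t) * math.exp(-t / t2_star)

def t2_star_from(t2, t2_inhomogeneity):
    """Combine the tissue T2 with the decay due to field inhomogeneity:
    1/T2* = 1/T2 + 1/T2_inhom, so T2* is always shorter than T2."""
    return 1.0 / (1.0 / t2 + 1.0 / t2_inhomogeneity)

print(t2_star_from(0.1, 0.02))  # T2 = 100 ms combined with a 20-ms inhomogeneity term
```

This makes explicit why the observed decay (T2*) is in general much faster than the tissue-specific T2 decay alone.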
due to these additional magnetic field gradients, the total magnetic field is slightly different in each volume element (voxel), and in turn, so is the resonance frequency of the spin system in each voxel. as a result, irradiation with an rf pulse of defined frequency ω′ excites only the nuclei in those voxels where the larmor frequency ω0 given by the local field strength matches the resonance condition. suitable changes of the field gradients allow moving a volume element fulfilling this condition through space. keeping in mind that the signal intensity of a volume element is given by the number of spins in the volume element, the relaxation times of the tissue, and the specific measurement parameters (e.g., pulse repetition time, echo time, etc.), this signal intensity is assigned to the corresponding picture element (pixel). in this manner, the region of interest can be sampled by moving the volume element through space, and successively an image can be constructed pixel by pixel. this method requires a long time to acquire images: assuming every experiment needs about 1 s to measure a voxel and a pixel, respectively, the measurement of an image of 128 × 128 pixels will require more than 16,000 s to complete. nowadays, 2d, 3d, and/or phase-encoding methods as well as half-fourier methods are applied, allowing data acquisition times of minutes or even less. special fast imaging techniques (e.g., flash, rare, epi sequences) allow further reduction of the acquisition time (cf. sect. 2.4). in contrast to x-ray computed tomography, where the attenuation is governed purely by the electron density, in mri the signal intensity is, as mentioned above, a complex function of the proton density and the t1, t2, and t2* relaxation times. additionally, the signal intensity-and hence the image contrast-can be influenced by the measurement parameters (e.g., echo time, repetition time) set at the scanner.
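The acquisition-time argument above can be reproduced with a trivial calculation; the TR value in the phase-encoded case is an assumed illustrative number:

```python
def sequential_scan_time(nx, ny, seconds_per_voxel=1.0):
    """Naive point-by-point acquisition: one ~1-s experiment per voxel."""
    return nx * ny * seconds_per_voxel

def phase_encoded_scan_time(ny, tr_seconds, averages=1):
    """2D Fourier imaging: one excitation per phase-encoding step;
    a whole line of k-space is read out per repetition (TR)."""
    return ny * tr_seconds * averages

print(sequential_scan_time(128, 128))     # 16384 s, the ">16,000 s" from the text
print(phase_encoded_scan_time(128, 0.5))  # 64 s with an assumed TR of 500 ms
```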
the knowledge of these interrelations of the different parameters influencing the signal intensity, and hence the image contrast, is mandatory for interpreting mr images correctly. the mr scanner is a complex system (sect. 2.5). its main components are the magnet, the rf system, and the gradient coils. the entire system is controlled and supervised by a computer. the development of mr imaging was only possible after the development of fourier transform nmr as well as fast computers calculating fast fourier transformations within minutes. the development of large-bore superconducting magnets of ≥ 0.3-1.5 t in the 1980s accelerated the development and the application of mri in clinical practice. nowadays, 3-t scanners are in routine clinical use. scanners with ≥ 7 t are installed and will further accelerate the development of mri and mrs. most of the magnets are made of solenoid coils. other magnet types, like scanners with a helmholtz coil configuration, give better access to the patient; however, they are installed mostly for special purposes, e.g., in an operating suite. mr scanners with conventional resistive magnets and fields smaller than 0.5 t are rarely used, except in countries with short supplies of helium or other restrictions that may not allow installation of a superconducting system. the risk of side effects is assumed to be low if the magnetic fields are ≤ 1.5 t, except for the danger caused by ferromagnetic objects accelerated into the magnet. nevertheless, at fields of 1.5 t and even ≥ 3 t, knowledge about side effects is scarce, especially regarding long-term exposure of organisms to high static magnetic fields, gradient fields, and rf fields. the problems concerning safety are extensively discussed in sect. 2.9. in the early days of mri, the simplicity and wide range with which contrast in mri can be manipulated by changing the imaging parameters led to the conclusion that the development of mr contrast agents was dispensable.
however, experience taught that contrast media significantly improve mr diagnostics, not only in the central nervous system, but also in other diagnostic procedures. in contrast to x-ray contrast agents, where absorption is the dominating physical effect producing the contrast, mr contrast media are based on other principles. the paramagnetic and/or superparamagnetic properties of the contrast media influence the relaxation times of tissue, or change contrast by obliterating the signal of protons. whereas in x-ray imaging the contrast is proportional to the concentration of the contrast medium, in mr the dependency on the concentration is in general much stronger than linear-most often exponential. mr contrast media are described in sect. 2.6. the intrinsic sensitivity of nmr to motion was observed as early as the 1950s. in mr imaging, motion, in particular flow, is often recognized as artifacts. however, these phenomena can be used to measure flow and/or represent the vascular system. two effects are used for these kinds of measurements: the time-of-flight phenomenon (or the wash-in/wash-out effect) and the spin-phase phenomenon. in time-of-flight measurements, moving spins are excited at one location (in the vessel), and detection of the spins is performed downstream at another known location (slice). the delay time between excitation and detection can be used to calculate the flow velocity. several modifications of the method exist (e.g., presaturation, bolus tracking) and are used depending on the setup of the measurement and the sequences used. the spin-phase phenomenon can be used for angiographic imaging as well. the phase of the transverse magnetization of moving spins along a field gradient changes according to the larmor equation. these phase-shift effects are observed for flow in all directions. the phase changes depend on different flow parameters (e.g., velocity, turbulence, acceleration) and on the pulse sequences used.
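The time-of-flight velocity estimate described above (spins tagged in one slice, detected a known distance downstream) is a one-line computation; the distance and delay below are hypothetical values:

```python
def tof_velocity(slice_distance_m, delay_s):
    """Time-of-flight flow measurement: spins excited in one slice are
    detected downstream after a known delay; v = distance / delay."""
    return slice_distance_m / delay_s

print(tof_velocity(0.05, 0.1))  # 5 cm traveled in 100 ms -> 0.5 m/s
```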
the signal variations produced by the two effects can be used to produce images of the vascular structures. using phase-sensitive effects, magnitude subtraction is a common procedure: a dephased and a rephased image are acquired sequentially and are subtracted. using time-of-flight effects, mostly maximum-intensity projection is used to construct images of the vasculature. the angiographic techniques are described in detail in sect. 2.7. diffusion-weighted and diffusion-tensor imaging are methods first applied to clinical problems in the brain, e.g., stroke, characterization of brain tumors, multiple sclerosis, etc. molecules in gases and fluids undergo microscopic random motions due to the thermal energy, which is proportional to the temperature of the gas or fluid. if the molecules-in this context only water molecules are considered-are embedded in a structure, for instance in tissue, the random-walk motion may be restricted by the cellular tissue structure, and hence the diffusion constants are reduced. if the structure of the tissue has a preferred direction, diffusion will no longer be isotropic; the diffusion will have higher components along the preferred direction of the tissue. this kind of diffusion is called anisotropic diffusion. in mathematical terms, anisotropic diffusion can be represented by a tensor. the so-called apparent diffusion coefficient can be measured, and the anisotropy of the diffusion can be determined; it contains information about the structure of the tissue. the basics of diffusion imaging are elucidated in sect. 2.8.
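A minimal sketch of how an apparent diffusion coefficient (ADC) is estimated from the standard mono-exponential model S = S0 · exp(-b · ADC); the signal values and b-values below are hypothetical:

```python
import math

def adc_from_two_b_values(s_low, s_high, b_low, b_high):
    """Apparent diffusion coefficient from S = S0 * exp(-b * ADC),
    estimated from two diffusion weightings (b in s/mm^2)."""
    return math.log(s_low / s_high) / (b_high - b_low)

# Hypothetical signals at b = 0 and b = 1000 s/mm^2:
adc = adc_from_two_b_values(1000.0, 450.0, 0.0, 1000.0)
print(f"ADC = {adc:.2e} mm^2/s")
```

In anisotropic tissue this scalar becomes direction-dependent, which is why a tensor is needed, as the text explains.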
this quantization is described by the following fundamental postulates of quantum physics: • quantization of the magnitude: the magnitude (length) |i | of the angular momentum vector can only take the discrete values |i| = ħ i(i + 1), with ħ being the planck's constant (ħ = 1.05 × 10 -34 js) and i the spin quantum number, which is either integer or half-integer. • quantization of the direction: the component iz of the angular momentum vector i along the direction of an external magnetic field is quantized. for a given value of i, only the discrete values of iz= mħ are admitted, where m is the magnetic quantum number which is limited to the values -i, -i + 1, . . . , i -1, i. in total, there are thus only 2i + 1 orientations of the angular momentum vector i allowed. example: figure 2 .2.1 illustrates spin quantization in form of a vector diagram for a nucleus with the spin quantum number i = 3/2. in this case, there are 2i + 1 = 2 · 3/2 + 1 = 4 orientations of the spin vector i with the magnitude (length) i(i+1) |i| 3/2 · (3/2+1) 15/4 ћ ћ ћ allowed. remark: the spin quantum number i is frequently referred to as "nuclear spin, " which means that the maximum (minimum) component of the vector i along the chosen axes is ħi (-ħi). the angular momentum i of an atomic nucleus is always related with a magnetic moment μ. this nuclear magnetism forms the basis of magnetic resonance. remark: an atomic nucleus can be imagined as a rotating, positively charged sphere ( fig. 2.2 .2). the rotation of the charge results in a circular electric current, inducing a magnetic dipolar field. both the direction and magnitude of the magnetic field are characterized by the magnetic moment μ. in the simple model considered, the vector μ is collinear with the mechanical angular momentum of the sphere. 
surprisingly, in quantum physics this simple relationship is even valid when the angular momentum is an inherent property of a particle (e.g., an electron or a nucleus) which is not associated with a mechanical rotation. as shown by a large number of experiments, there is a linear relationship between the nuclear magnetic moment and the nuclear spin: μ = γ i. (2.2.1) the proportionality constant γ is denoted as the gyromagnetic ratio and is a characteristic property of a nuclide. [fig. 2.2.2: in the classical model, the rotation of a charged particle, described by its angular momentum i, results in an electric current, which induces a magnetic dipolar field. direction and magnitude of this field are described by the magnetic moment μ. the vector μ is directed collinear to the angular momentum i of the sphere (magnetomechanic parallelism).] whereas all nuclei with i ≠ 0 can in principle be used for spectroscopic mr examinations, the nucleus of the hydrogen atom, which has a spin quantum number of i = 1/2, is almost exclusively used in mri for two reasons: • it is the most abundant nucleus in biological systems. • it has the largest gyromagnetic ratio of all stable nuclei. in the absence of a magnetic field, all allowed orientations of the magnetic moment μ = γ i are energetically equal. this corresponds to the well-known fact that a bar magnet can be positioned arbitrarily within field-free space; its potential energy is independent of its orientation. however, if the nucleus is located in a homogeneous static magnetic field with the magnetic flux density b 0 (magnitude b0 = |b 0|) directed along the z-axis of a coordinate system, the nucleus has the additional potential energy em = -μz b0 = -γħmb0. [fig. 2.2.3: splitting of the energy levels of a nucleus with the spin quantum number i = 3/2 in an external magnetic field with the flux density b 0. the energy difference between the four equidistant nuclear zeeman levels is δe = ħω0 = γħb0.] the b field represents the "real" magnetic field that interacts with the magnetic moments of the nuclei. the relation between the two magnetic field quantities is explained in sect. 2.2.8.1. when considering an isolated magnetic moment within a static magnetic field, one will find that transitions between the different energy levels are prohibited due to the law of energy conservation. transitions can exclusively be induced by an additional time-dependent electromagnetic rf field that interacts with the magnetic moment; the effect is known as magnetic resonance (mr). in mr, transitions are induced by a magnetic rf field b1(t) with the angular frequency ωrf, which is irradiated perpendicular to the direction of the static magnetic field b 0. such a time-dependent magnetic field, however, can only induce transitions fulfilling the selection rule ∆m = ±1, i.e., transitions between neighboring energy levels. as a consequence, the energy erf = ħωrf of a photon of the rf field must be identical to the energy difference δe = ħω0 = γħb0 between two neighboring energy levels, which yields the resonance condition ωrf = ω0 = γb0. (2.2.4) remarkably, planck's constant ħ does not occur in this fundamental equation of magnetic resonance. this indicates that the basic principles of magnetic resonance cannot only be described by quantum physics, but also by a classical approach, which is mediated by the intuitive semi-classical model described in the next section. in an external magnetic field, a cylindrical permanent magnet-characterized by a magnetic moment μ-experiences a mechanical torque that tends to align the permanent magnet parallel to the external magnetic field and thus minimize the potential energy of the system.
however, in the case that the permanent magnet rotates around its longitudinal axis and thus possesses an angular momentum ("magnetic gyroscope"), it cannot align parallel to the external field due to the conservation of angular momentum. in this situation, it experiences a torque perpendicular to both the direction of the magnetic field and the angular momentum, which results in a rotation (precession) of the magnet on a cone about the direction of the external b 0 field (see fig. 2.2.4b). the frequency of this precession, the larmor frequency, corresponds to the resonance frequency ω0 given by eq. 2.2.4. the precession of a magnetic moment in a magnetic field can be illustrated by a mechanical analog: when a child's spinning top is deflected so that its axis is not parallel to the direction of the gravitational field, it will continue rotating around its axis, but the axis itself will start rotating-the top precesses on a cone around the direction of the gravitational field (fig. 2.2.4a). it should be mentioned, however, that the child's top and the nucleus differ with regard to the fact that the child's top has to be spun, whereas the nucleus possesses an intrinsic angular momentum. [fig. 2.2.4: precession of a deflected spinning top in the gravitational field (a) and of a nucleus in a magnetic field (b); the difference between the top and the nucleus is that the nucleus possesses an intrinsic angular momentum i, whereas the angular momentum l of the top has to be initiated mechanically.] the quantization of the direction of the nuclear magnetic moment μ can be integrated into this classical description by limiting the angle between the field axis and the precession cone to the discrete values which relate to the 2i + 1 permitted orientations of the angular momentum i. for a spin-1/2 nucleus, this results in a double-precession cone as shown in fig. 2.2.5. however, this semiclassical model is rendered questionable, because the classical concept of a continuous trajectory in space is hardly compatible with the quantization of physical quantities.
for instance, what would the trajectory of the vector μ look like when transitions between the various precession cones, reflecting discrete energy levels, are induced by an rf field, such as, for a spin-1/2 nucleus, the transition from the lower to the upper precession cone (cf. fig. 2.2.5)? is it possible to assign to the vector μ a well-defined direction in space at any point in time, and would this direction change over time? if so, then this negates the postulate of discrete energy and angular momentum levels. this aporia can only be solved by a rigorous quantum mechanical treatment of the system. however, when considering only the mean values of physical quantities averaged over a large ensemble of nuclei-which are all that can be measured in a real mr experiment-it becomes obvious that the models and laws of classical physics are valid. in field-free space, the magnetic moments of nuclei in a macroscopic sample are randomly oriented due to their thermal motion and thus mutually compensate each other. in a homogeneous static magnetic field b 0, however, only 2i + 1 discrete orientations of the magnetic moments with respect to the direction of the external field are permitted, the energy levels of which differ according to eq. 2.2.3. in thermal equilibrium, the population of the 2i + 1 levels (spin states) is described by the boltzmann statistic: the lower the energy em = -γħb0m of a state with the magnetic moment μz = γħm in the z-direction, the greater is the occupation number. example: let us consider an ensemble of hydrogen nuclei in a static magnetic field of flux density b0 = 1 t. according to the boltzmann statistic, more nuclei will occupy the state of the lower energy (m = +1/2, µz parallel to b 0) than the state of the higher energy (m = -1/2, µz antiparallel to b 0) (fig. 2.2.6).
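The classical behavior of the ensemble-averaged magnetization noted above can be illustrated by rotating M step by step around the B0 axis, which is the exact solution of dM/dt = γ M × B for a static field (a sketch; the field value and step count are arbitrary choices):

```python
import math

GAMMA = 2.675e8  # rad/(s*T), gyromagnetic ratio of 1H

def precess(m, b0, dt, steps):
    """Larmor precession about B = (0, 0, b0): per step the transverse
    component rotates by gamma*b0*dt (exact for a static field); Mz and
    the length of M are conserved."""
    mx, my, mz = m
    phi = GAMMA * b0 * dt
    c, s = math.cos(phi), math.sin(phi)
    for _ in range(steps):
        mx, my = c * mx + s * my, -s * mx + c * my
    return (mx, my, mz)

# After one full Larmor period the magnetization returns to its start:
dt = 2 * math.pi / (GAMMA * 1.0) / 100  # 1/100 of a period at B0 = 1 T
print(precess((1.0, 0.0, 0.5), 1.0, dt, 100))
```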
however, as compared with the thermal energy, the difference between the two energy levels is extremely small, so that the difference in the occupation numbers of the two levels is very small: at a body temperature of 37°c, the difference in the occupation numbers with respect to the total number of spins is as low as 0.000003! [fig. 2.2.5: double-precession cone for a nucleus with the nuclear spin quantum number i = 1/2. the two permitted spin states (precession cones) are characterized by the magnetic quantum numbers m = ±1/2.] [fig. 2.2.6: origin of the nuclear magnetization. in thermal equilibrium, the distribution of an ensemble of spin-1/2 nuclei on the two allowed precession cones is described by the boltzmann statistic. the occupation number of the state of the lower energy (m = +1/2, µz parallel to b 0) is somewhat higher than that of the state of the higher energy (m = -1/2, µz antiparallel to b 0), which leads to a macroscopic (bulk) magnetization m0.] although the difference in the occupation numbers is extremely small, it results in a measurable bulk magnetic moment along the direction of the b 0 field due to the large number of nuclei in a macroscopic sample ("nuclear paramagnetism"). the macroscopic magnetization in thermal equilibrium is described by the magnetization vector m0, which is defined as the vector sum of the nuclear magnetic moments per unit volume v. for spin-1/2 nuclei, the magnitude of the equilibrium magnetization m0 is given by m0 = (n/v) · γ²ħ²b0/(4kt), (2.2.5) where n is the total number of nuclei in the sample, t the absolute temperature of the sample, and k boltzmann's constant (k = 1.38 · 10^-23 j/k). the ratio ρ = n/v is called the spin density. as both the body temperature and the spin density cannot be altered in living beings, the equilibrium magnetization m0 can only be increased according to eq. 2.2.5 by increasing the magnetic flux density b0.
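The quoted population difference can be recomputed in the high-temperature limit, (N+ − N−)/N ≈ γħB0/(2kT) (a sketch using standard constants; the approximation step itself is not spelled out in the text):

```python
HBAR = 1.054571817e-34  # J*s, reduced Planck constant
K_B = 1.380649e-23      # J/K, Boltzmann constant
GAMMA = 2.675e8         # rad/(s*T), gyromagnetic ratio of 1H

def polarization(b0_tesla, temp_kelvin):
    """Relative population difference of the two spin-1/2 levels in the
    high-temperature limit: (N+ - N-)/N ~ gamma*hbar*B0 / (2*k*T)."""
    return GAMMA * HBAR * b0_tesla / (2 * K_B * temp_kelvin)

print(polarization(1.0, 310.0))  # ~3.3e-6 at 1 T and 37 degrees C, the "0.000003" in the text
```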
the equilibrium state of a spin system can be disturbed by a magnetic rf field b 1(t) with a frequency ωrf equal to the larmor frequency ω0, which tilts the magnetization m. whereas a nuclear magnetic moment μ can only take 2i + 1 discrete orientations relative to the static magnetic field b 0 (quantization of direction), the macroscopic magnetization m can take any direction in space and change it steadily. the action of a magnetic rf field b 1(t), which rotates with the larmor frequency ω0 around the direction of the static b 0 field, can be analyzed most effectively in a rotating frame, i.e., a coordinate system that rotates with the larmor frequency around the z-axis (fig. 2.2.7). the change to a rotating frame with the axes (x′, y′, z) has two advantages: • as the x′-y′-plane of the rotating frame is synchronized with the rf field, the b 1 vector remains stationary in this frame. in the following analysis, we will assume that the static b 1 field points along the x′-axis (fig. 2.2.7). • as shown in sect. 2.2.2.2, a nuclear magnetic moment μ precesses with the larmor frequency ω0 around the direction of the b 0 field (see fig. 2.2.5). of course, this holds equally for the sum of the nuclear magnetic moments, i.e., for the macroscopic magnetization m. therefore, an observer watching the precession of the magnetization m from the rotating frame will conclude that the position of the magnetization does not change. from his point of view, the magnetization behaves as if the b 0 field were absent (larmor's theorem). summarizing both considerations, it can be concluded that the dynamics of the magnetization m in the rotating frame is determined only by the static b 1 field. if it points toward the x′-axis, then the magnetization m will precess around the x′-axis (fig. 2.2.8a). analogous to eq. 2.2.4, the frequency ω1 of this precession is given by ω1 = γb1.
(2.2.6) when looking at this simple rotation of the magnetization m in the y′-z-plane of the rotating frame from a laboratory frame of reference (x, y, z), the movement is superimposed by a markedly faster rotation (b0 > b1) around the z-axis. thus, within the laboratory frame of reference, the tip of the vector m moves in a helical manner on the surface of a sphere around the b 0 field; the length of the vector m remains constant (fig. 2.2.8b). if the magnetization m points toward the static field b 0 before the rf field b 1(t) is switched on, the magnetization m is rotated from the equilibrium position under the influence of the rf field during the duration tp by the flip angle α = ω1 · tp = γb1tp. (2.2.7) [fig. 2.2.7: the radiofrequency field in a stationary and in a rotating frame of reference. in the stationary frame (x, y, z) the magnetic rf field b 1(t) rotates with the angular frequency ωrf in the x-y-plane around the z-axis. if one observes this rotation from a rotating frame (x′, y′, z), which rotates with the angular frequency ωrf around the z-axis, the vector is stationary. typically, the rotating frame is chosen in such a way that the b 1 field points in the x′-direction.] if the duration tp of the rf field is chosen to rotate the magnetization in the rotating frame by 90°, then this pulse is denoted as a 90° or π/2 pulse (fig. 2.2.9a). accordingly, the magnetization m is rotated by 180° when the duration of the rf pulse is doubled at the same flux density b1. this pulse, which inverts the magnetization from the positive to the negative z-direction, is called a 180° or π pulse (fig. 2.2.9b). remark: precisely speaking, a short rf pulse with the carrier frequency ωrf will excite not only the nuclei that exactly fulfill the resonance condition ωrf = ω0, but also nuclei whose resonance frequency slightly differs from ωrf. this is because the frequency spectrum of an rf pulse of finite duration consists of a continuous frequency band around the nominal frequency ωrf (fig. 2.2.10).
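The flip-angle relation α = γ·B1·tp can be exercised numerically; the B1 amplitude of 10 µT below is an assumed illustrative value, not from the text:

```python
import math

GAMMA = 2.675e8  # rad/(s*T), gyromagnetic ratio of 1H

def flip_angle_rad(b1_tesla, tp_seconds):
    """Flip angle alpha = gamma * B1 * tp for a rectangular pulse."""
    return GAMMA * b1_tesla * tp_seconds

def pulse_duration_for(alpha_rad, b1_tesla):
    """Pulse duration needed to reach a given flip angle at amplitude B1."""
    return alpha_rad / (GAMMA * b1_tesla)

# With an assumed B1 of 10 uT, a 90-degree pulse lasts about 0.59 ms;
# doubling the duration at the same B1 gives the 180-degree pulse.
tp90 = pulse_duration_for(math.pi / 2, 10e-6)
print(f"90-degree pulse: {tp90 * 1e3:.2f} ms")
```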
the width of the frequency distribution is inversely proportional to the duration tp of the pulse: the shorter the pulse, the broader the frequency spectrum distributed around ωrf. if the rf field is irradiated over a very long period (tp → ∞), the spectrum becomes quasi-monochromatic. to simplify the following analysis, the magnetization m is separated into two components: the longitudinal magnetization mz, which is parallel to the direction of the static magnetic field b0, and the transverse magnetization mxy, which is perpendicular to it (fig. 2.2.11). in the laboratory frame the transverse magnetization mxy precesses with the larmor frequency ω0; in the rotating frame it remains stationary. it is instructive to describe the effect of a 90° or 180° pulse on an ensemble of spin-1/2 nuclei within the semiclassical model described in sect. 2.2.2.2. as can be shown, the magnetic rf field induces transitions between the two permitted spin states (precession cones) until the occupation numbers are either identical (90° pulse) or inverted (180° pulse). furthermore, irradiation of a 90° pulse results in a phase synchronization of the nuclear magnetic moments of the sample, which yields a macroscopic transverse magnetization mxy, the magnitude of which is equal to that of the equilibrium magnetization m0. figuratively speaking, this means that the precession of the transverse magnetization mxy can be described as a common (phase-coherent) precession of a "spin packet" (fig. 2.2.12). up to this point, we have assumed that interactions of nuclear spins with one another and with their environment can be neglected. however, this assumption is not valid for real spin systems, as the magnetization returns to its equilibrium state (mxy = 0, mz = m0) after rf excitation. this process is called relaxation.
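the reciprocal relation between pulse duration and excitation bandwidth stated at the beginning of this passage can be checked numerically; the sketch assumes a rectangular pulse envelope, whose magnitude spectrum is the |sinc| function, and purely illustrative pulse durations.

```python
import numpy as np

def spectral_fwhm(tp):
    """full width at half maximum of the magnitude spectrum of a
    rectangular rf pulse of duration tp; the one-sided spectrum is
    |sinc(f * tp)|, with np.sinc(x) = sin(pi x)/(pi x)."""
    f = np.linspace(0.0, 2.0 / tp, 200001)  # one-sided frequency axis (hz)
    s = np.abs(np.sinc(f * tp))             # magnitude spectrum
    f_half = f[s >= 0.5].max()              # half-maximum crossing of the main lobe
    return 2.0 * f_half                     # two-sided width

w1 = spectral_fwhm(1e-3)    # 1 ms pulse
w2 = spectral_fwhm(0.5e-3)  # half the duration ...
print(w1, w2, w2 / w1)      # ... gives twice the bandwidth
```

for the 1 ms pulse the width comes out near 1.2 khz; halving tp doubles it, illustrating the tp → ∞ quasi-monochromatic limit in reverse.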
two different relaxation processes have to be distinguished: • the relaxation of the longitudinal magnetization mz, characterized by the longitudinal or spin-lattice relaxation time t1 • the relaxation of the transverse magnetization mxy, characterized by the transverse or spin-spin relaxation time t2. fig. 2.2.8 resonance excitation. a in a rotating frame of reference, which rotates with the larmor frequency ω0 around the direction of the b0 field, the magnetization m precesses with the frequency ω1 around the stationary b1 field. b in the stationary frame this simple rotation is superimposed by the markedly faster rotation around the z-axis. therefore, the tip of the vector m moves in a helical manner on the surface of a sphere. fig. 2.2.9 90° and 180° pulse. if one chooses the rotating frame so that the rf pulse is irradiated along the x′-axis, the magnetization m will be rotated (a) by a 90° pulse along the y′-direction and (b) by a 180° pulse to the negative z-direction. in real spin systems, every nucleus is surrounded by other intra- and intermolecular magnetic moments, which are in motion due to rotations, translations, and vibrations of molecules as well as exchange processes. these processes induce an additional fluctuating magnetic field blok(t) at the position of a given nucleus, which has to be added to the external field. as the movements and exchange processes are random, the fluctuating fields differ in time from nucleus to nucleus-in contrast to the coherent rf field b1(t) irradiated from the outside. as any other temporal process, the locally fluctuating magnetic fields blok(t) can be decomposed into their frequency components. remark: the decomposition of a function into harmonic (i.e., sinusoidal) basis functions is denoted as fourier analysis; the mathematical operation that gives the intensity (amplitude) of the harmonic basis functions is the fourier transformation.
if the given function is periodic with period t, it can be decomposed into a sum of sine and/or cosine functions with the discrete frequencies ω, 2ω, 3ω, … (ω = 2π/t). in contrast, a nonperiodic function has a continuous spectrum of frequencies. the contribution of the different frequency components to the fluctuating local field blok(t) is described by the spectral density function j(ω). a general feature of this function is that the more rapid the molecular motion, the broader the frequency spectrum (fig. 2.2.13). fig. 2.2.10 rf pulse in the time and frequency domain. a rf pulse with carrier frequency ωrf and duration tp. b fourier transformation of the rf pulse. due to its finite duration, the frequency spectrum of the pulse is not monochromatic, but contains an entire frequency band, which is distributed around the nominal frequency ωrf. fig. 2.2.12 phase synchronization by a 90° pulse. the 90° pulse leads to a synchronization of the phases of the magnetic moments μ of the nuclei in the sample (spin packet), which results in a macroscopic transverse magnetization mxy, the magnitude of which corresponds to that of the longitudinal magnetization before irradiation of the 90° pulse. in the figure, only the part of the magnetic moments of the sample which is distributed in an anisotropic manner on the precession cone is shown. fig. 2.2.11 definition of the longitudinal and transverse magnetization. as the macroscopic magnetization m precesses in the stationary frame around the z-axis, it is beneficial to split it into two components: the rotating transverse magnetization mxy and the longitudinal magnetization mz. in order to understand the effect of the fluctuating local magnetic fields blok(t) on a spin system, the components parallel and perpendicular to b0 have to be discussed separately.
whereas the parallel component exclusively contributes to t2 relaxation, the perpendicular component influences both t1 and t2 relaxation: • the field component perpendicular to the b0 field induces-in analogy to the external rf field b1(t)-transitions between the energy levels (precession cones) of an individual spin. the probability of these transitions depends on the intensity of the frequency component of the fluctuating fields that oscillates at the larmor frequency ω0: the higher the spectral density j(ω0), the more transitions are induced. as fig. 2.2.13 shows, j(ω0) assumes a maximum when the limiting frequency ωg of the spectral density function is comparable to the larmor frequency ω0. the described relaxation process allows the excited spin system to emit and absorb photons of energy ħω0 until the boltzmann distribution of the energy levels is reached. the energy difference between the excited and the equilibrium state is dissipated to the surrounding medium or "lattice". since the change in the occupation numbers of the spin states (precession cones) is associated with a change in the macroscopic longitudinal magnetization mz, the described mechanism contributes to longitudinal relaxation. moreover, it contributes to t2 relaxation, as the locally induced transitions between the precession cones destroy the phase coherence between those spins which form, as a spin packet, the macroscopic transverse magnetization (cf. fig. 2.2.12). • the component of the fluctuating field blok(t) oriented parallel to the z-axis locally modulates the static field b0 at the position of a nucleus and thereby changes the precession frequency ω0 of its nuclear magnetic moment μ. since the local fluctuations seen by the nuclei are spatially uncorrelated, the precessing magnetic moments within a sample lose their phase coherence, which causes the transverse magnetization to decay (see fig. 2.2.12).
given that the effect of the high-frequency components of the fluctuating field vanishes when averaged over time, only the quasi-static frequency components, the intensity of which is approximately given by j(ω = 0), have a measurable effect on the transverse magnetization (see fig. 2.2.13). as no transitions between the energy levels (precession cones) are induced by the described relaxation mechanism, the longitudinal magnetization mz remains unchanged, which means that this mechanism contributes solely to transverse relaxation. the qualitative discussion of the relaxation mechanisms reveals that their effectiveness depends on two different factors, namely the magnitude and the temporal characteristics of the field fluctuations. the dependence on the magnitude is exploited when using paramagnetic contrast agents (see sect. 2.6), which possess unpaired electron spins and consequently a magnetic moment. considering that the magnetic moment of an electron amounts to 658 times the magnetic moment of a proton, one can easily understand why even the slightest amounts of paramagnetic substances can lower the relaxation times considerably. for spin systems with a sufficiently high molecular mobility, relaxation processes can be described by exponential functions with the time constant t1 or t2. the longitudinal magnetization increases exponentially toward its equilibrium value mz = m0; the transverse magnetization decreases exponentially toward mxy = 0. figure 2.2.14 shows the exponential relaxation of both magnetization components after excitation of the spin system by a 90° pulse and gives a simple interpretation of the relaxation times t1 and t2: • the longitudinal relaxation time t1 gives the time required for the longitudinal magnetization after a 90° pulse to grow again to 63% of its equilibrium value m0.
• the transverse relaxation time t2 gives the time required for the transverse magnetization after a 90° pulse to drop to 37% of its original magnitude. fig. 2.2.13 schematic representation of the spectral density function j(ω) for three substances with a different thermal mobility of the constituting atoms or molecules. a if the atoms or molecules move very slowly (such as in solids), the intensity of high-frequency components is very low. b this is different in fluids. in this case, the atoms or molecules move very rapidly, so that the spectral density function contains high-frequency components to a significant degree. c at a given frequency ω0 the intensity j(ω0) will attain a maximum if the cut-off frequency ωg of the spectral density function approximately corresponds to the given frequency ω0. at low frequencies, j(ω) is nearly independent of the frequency, so that the density of the quasi-static frequency components can be approximated by j(ω = 0). fig. 2.2.15 dephasing of the transverse magnetization. the transverse magnetization mxy of the sample is split up into several magnetization components, which precess with slightly differing larmor frequencies around the direction of the b0 field. a immediately after the 90° pulse, all magnetization components are aligned parallel. b-d afterward, the components dephase due to their different larmor frequencies, and thus the macroscopic transverse magnetization decays. the process of transverse relaxation can be described intuitively on the macroscopic level. to this end, the transverse magnetization mxy is split into different magnetization components, or spin packets. whereas the spins of each spin packet precess with the same larmor frequency, the spins in different packets differ slightly in their larmor frequencies. right after excitation, all components of the magnetization point toward the same direction; shortly afterward, however, some parts precess more quickly than others around the direction of the b0 field.
due to this fact, the components fan out (dephasing), and the resulting transverse magnetization decreases (fig. 2.2.15). in real mr experiments, macroscopic samples are always examined, so that not only the fluctuating local magnetic fields but also spatial inhomogeneities of the external field b0, introduced by technical imperfections, contribute to the transverse relaxation. as both effects superpose on one another, the resulting effective relaxation time t2* is always shorter than the real, substance-specific transverse relaxation time t2. relaxation times in solids and fluids differ markedly (fig. 2.2.16). whereas the longitudinal relaxation in solids can take hours or even days, in pure fluids it only takes some seconds. this difference arises because the spectral density function j(ω0) at the larmor frequency is much larger in fluids than in solids, in which the low-frequency components dominate (see fig. 2.2.13). for the same physical reason, the t2 relaxation time in solids usually amounts to only some microseconds, whereas in fluids it is only slightly shorter than the longitudinal relaxation time t1. soft tissues range, based on their consistency, between solids and pure fluids: with regard to their relaxation behavior, they can in general be treated as viscous fluids. table 2.2.2 summarizes representative proton relaxation times for different biological tissues. due to the considerable differences in the tissue relaxation times, it is possible to acquire mr images with an excellent tissue contrast even when the proton densities of the tissues or organs differ only slightly from one another. when interpreting the relaxation times, two aspects have to be taken into account: • the relaxation time t1 of biological tissues strongly depends on the larmor frequency, whereas the relaxation time t2 is nearly independent of the frequency.
when comparing t1 values, one therefore needs to consider the magnetic flux density b0 at which the t1 measurement was done. • relaxation processes often consist of multiple components, so that the description by a mono-exponential function is only a rough approximation. the relaxation times given in table 2.2.2 therefore represent only weighted mean values of an entire spectrum of exponential functions, characterizing the relaxation behavior of protons in different cell and tissue compartments between which the water exchange is slow. however, at the timescale relevant for mri, the relaxation processes of most tissues can be approximated rather well by a single exponential function. an exception is fat-containing tissue (such as subcutaneous fatty tissue or bone marrow), which requires at least two exponential functions for the parameterization of the observed relaxation processes. figure 2.2.17 shows the general setup of an mr experiment; technical details will be presented in sect. 2.5. the sample to be examined is located within a very homogeneous static magnetic field b0, which is created either by a permanent magnet or by a (superconducting) coil. the rf field required for the excitation of the spin system is generated by a transmit coil connected to the rf transmit system. this rf coil is positioned in such a way that the radiofrequency field b1(t) is irradiated perpendicular to the b0 field into the sample volume. remark: whereas atomic nuclei with a nuclear spin quantum number of i ≥ 1 can interact with both the electric and the magnetic component of the electromagnetic rf field, spin-1/2 nuclei are only affected by the magnetic component b1(t) of the rf field. after excitation of the spin system by an rf pulse, the precessing transverse magnetization mxy in turn induces a weak alternating voltage in a receiver coil, which in general is identical to the transmit coil (fig. 2.2.18a).
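the induced signal can be sketched numerically: a damped oscillation whose fourier transform is a resonance line centered on the precession frequency. the parameters below are purely illustrative; a 50 hz offset frequency stands in for the mhz-range larmor frequency, which changes nothing in the shape of the result.

```python
import numpy as np

# illustrative parameters (not from the text): offset frequency of 50 hz
# in the rotating frame and an effective decay constant t2* of 100 ms
f0, t2_star = 50.0, 0.1
fs, duration = 2000.0, 1.0          # sampling rate (hz) and record length (s)
t = np.arange(0.0, duration, 1.0 / fs)

# damped oscillation: exponential decay with t2* times a cosine at f0
fid = np.exp(-t / t2_star) * np.cos(2 * np.pi * f0 * t)

# fourier transformation of the time signal gives the spectrum
spectrum = np.abs(np.fft.rfft(fid))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak_freq = freqs[np.argmax(spectrum)]

print(peak_freq)  # the resonance line is centered at f0 (50 hz here)
```

the time-domain and frequency-domain pictures are two representations of the same data, related by the fourier transformation, as the following paragraphs spell out.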
the measured voltage is amplified, filtered, digitized, and fed to the computer of the mr system. the measured mr signal s(t) has the form of a damped oscillation (fig. 2.2.18b), which is denoted as the free induction decay (fid). the fid signal has the following characteristic features: • it oscillates with the larmor frequency ω0 of the stimulated nuclei. • it decays in time with the time constant t2*. • its initial amplitude is proportional to the number n of the excited spins in the sample (n = ρv ∝ m0v; cf. eq. 2.2.5). if the sample contains nuclei of a certain type whose resonance frequencies differ slightly due to intramolecular interactions (see sect. 2.2.8), the mr signal induced in the receiver coil will consist of several interfering decay curves. however, such a curve is rather complicated to analyze and interpret. therefore, the detected curve is usually split up into its frequency components (fourier analysis, see sect. 2.2.5.1) and presented as a frequency spectrum. both types of description are merely different representations of the same data, which can be transformed into one another mathematically by a fourier transformation. example: figure 2.2.18b,c illustrates the relation between the description of the mr signal in the time or frequency domain by the example of a substance whose mr spectrum shows only one resonance line. fig. 2.2.17 principle setup of an mr experiment. the object to be measured is placed within a homogeneous static magnetic field b0. excitation of the spin system is performed by an rf field b1(t) irradiated perpendicularly to b0 by an rf coil. after excitation, the mr signal of the sample is detected by an rf coil and transferred via a receiver channel to the computer of the mr system (for details, see sect. 2.5). for quantitative analysis of an mr spectrum, the following features are important: • the center of the resonance curve is at the larmor frequency ω0.
• the full width δω at half maximum of the curve is related to the characteristic time constant t2* of the fid by the relation δω = 2/t2*. • the area under the curve is approximately proportional to the number of excited nuclei in the sample. in an mr experiment, only the rf signal induced in the receiver coil by the rotating transverse magnetization mxy can be determined by measurement (cf. sect. 2.2.6). nevertheless, a large variety of mr experiments can be realized that differ in the way in which the spin system is excited and prepared by means of rf pulses before the signal is acquired. a defined sequence of rf pulses, which is usually repeated several times, is called a pulse sequence. in the following, three "classical" pulse sequences are described that are frequently used for mr experiments (imaging sequences are described in sect. 2.4): • the saturation recovery sequence • the inversion recovery sequence • the spin-echo sequence. the saturation recovery (sr) sequence consists of only a single 90° pulse, which rotates the longitudinal magnetization mz into the x-y-plane. the fid signal is acquired immediately after the rf excitation of the spin system. after a delay time, the repetition time tr, the sequence is repeated. the sr sequence is described schematically by the pulse scheme (90°-aq-tr) (aq = signal acquisition period; fig. 2.2.19a). if the repetition time tr is long compared to t1, the magnetization m relaxes back to its equilibrium state (see fig. 2.2.14). in this case, the initial amplitude of the fid, even after repeated excitations, depends only on the equilibrium magnetization m0 and does not show any t1 dependency. however, if the repetition time tr is shortened to a value that is comparable to t1, the longitudinal magnetization mz will not fully relax after excitation, and the following 90° pulse will rotate the reduced longitudinal magnetization mz(tr) = m0[1 - exp(-tr/t1)] into the x-y-plane (fig. 2.2.19b, c).
under the assumption that the transverse magnetization has decayed to zero after the repetition time tr (tr >> t2*), the following expression is obtained for the initial amplitude ssr of the fid signal: ssr ∝ n (1 - exp(-tr/t1)), (2.2.8) which depends exclusively on the relaxation time t1 and the number n of the excited spins in the sample. fig. 2.2.18 free induction decay (fid) and frequency spectrum. a after excitation of the spin system by a 90° pulse the magnetization mxy precesses with the larmor frequency ω0 around the direction of the b0 field and induces an electric voltage in the receiver coil. b the measured fid signal s(t) has the form of a damped oscillation, the frequency of which is given by the larmor frequency ω0. the decay of the signal is defined by the time constant t2*. c a fourier transformation of the fid signal gives the frequency spectrum of the mr signal. the resonance curve has its center at the larmor frequency ω0; its full width at half maximum (fwhm) is related to the characteristic time constant t2* of the fid by the relation δω = 2/t2*. in the inversion recovery (ir) method, the longitudinal magnetization is inverted by a 180° pulse (inversion pulse), which is followed after an inversion time ti by a 90° pulse (readout pulse). immediately after the 90° pulse, which rotates the partially relaxed longitudinal magnetization mz(ti) into the x-y-plane, the fid signal is acquired (fig. 2.2.20). the ir sequence is described by the pulse scheme (180°-ti-90°-aq). the initial amplitude sir of the fid signal is directly proportional to the longitudinal magnetization immediately before irradiation of the readout pulse, just as in the sr method. in contrast to the sr sequence, however, the change in the longitudinal magnetization is twice as high, and thus-in analogy to eq. 2.2.8-the following expression is obtained (compare figs. 2.2.19b and 2.2.20): sir ∝ n (1 - 2 exp(-ti/t1)). (2.2.9) the derivation of this relation is based on the assumption that the spin system is in its equilibrium state before it is excited by the inversion pulse. when repeating the ir sequence, one therefore has to make sure that the repetition time tr is markedly longer than the relaxation time t1. fig. 2.2.19 saturation recovery sequence. a pulse scheme of the sr sequence (aq: signal acquisition). b the 90° pulse rotates the actual longitudinal magnetization into the x-y-plane. during the repetition time tr, the longitudinal magnetization relaxes toward the equilibrium magnetization m0. the speed of this process is described by the longitudinal relaxation time t1. note that by the first 90° pulse the equilibrium magnetization m0 is rotated into the x-y-plane, whereas the subsequent 90° pulses rotate the reduced longitudinal magnetization mz(tr) = m0[1 - exp(-tr/t1)]. c temporal evolution of the transverse magnetization mxy in the rotating frame. d induced mr signal ssr(t). fig. 2.2.20 inversion recovery sequence. a pulse scheme of the ir sequence (aq: signal acquisition). b initially, the longitudinal magnetization is inverted by the 180° pulse (inversion pulse), which is followed after an inversion time ti by a 90° pulse (readout pulse), which rotates the existing longitudinal magnetization mz(ti) into the x-y-plane. after the 90° pulse, the longitudinal magnetization relaxes toward the equilibrium magnetization m0. c temporal evolution of the transverse magnetization mxy in the rotating frame. d induced mr signal sir(t). remark: if the ir sequence is repeated several times with different inversion times ti, it is possible to sample the temporal course of the longitudinal magnetization step by step, since the initial amplitude of the fid signal is directly proportional to the longitudinal magnetization at time ti (see fig. 2.2.20). this procedure is frequently applied in order to determine the relaxation time t1 of a sample according to eq. 2.2.9. as explained in sect.
2.2.5.2, the temporal decay of the transverse magnetization mxy is caused by two effects: fluctuating local magnetic fields and spatial inhomogeneities of the magnetic field b0. the transverse magnetization mxy therefore relaxes not with the substance-specific relaxation time t2 but with the effective time constant t2* (t2* < t2). when determining the relaxation time t2, it is therefore important to compensate for the effect of the field inhomogeneities. this can be done, as e. hahn showed as early as 1950, by using the so-called spin-echo (se) sequence. this sequence utilizes the fact that the dephasing of the transverse magnetization caused by b0 inhomogeneities is reversible, since these do not vary in time, whereas the influence of the fluctuating local magnetic fields is irreversible. in order to understand the principle of the se sequence with the pulse scheme (90°-τ-180°-τ-aq; see fig. 2.2.22a), we initially neglect the influence of the fluctuating local magnetic fields and solely consider the static magnetic field inhomogeneities. immediately after the 90° pulse, all magnetization components composing the transverse magnetization mxy point along the y′-axis (fig. 2.2.21a). shortly afterward, some components precess faster, others more slowly around the direction of the b0 field, so that the initial phase coherence is lost (see fig. 2.2.15). when looking at this situation from a rotating frame, one observes a fanning out of the magnetization components around the y′-axis (fig. 2.2.21b, c). if a 180° pulse is applied after a time delay τ along the x′-axis, the magnetization components will be mirrored with respect to this axis (fig. 2.2.21d). however, the 180° pulse does not change the rotational direction of the magnetization components, but merely inverts their distribution: the faster components now follow the slower ones (fig. 2.2.21e). after the time t = 2τ, all magnetization components again point in the same direction, and the signal reaches a maximum (fig. 2.2.21f). the 180° pulse thus induces a rephasing of the dephased transverse magnetization, which causes the mr signal to increase and to generate a spin echo (fig. 2.2.22). after the spin-echo time te = 2τ, the echo decays again-as the original fid does-with the time constant t2*. due to the rephasing effect of the 180° pulse, the spin-echo signal sse(te) is independent of the inhomogeneities of the static magnetic field: the loss of signal at the time t = te as compared to the initial signal sse(0) is determined exclusively by the substance-specific relaxation time t2. if one irradiates a sequence of k 180° pulses at the times τ, 3τ, 5τ, …, (2k-1)τ, one can detect a spin echo between the subsequent 180° pulses (fig. 2.2.23). fig. 2.2.21 explanation of the spin-echo experiment in the rotating frame. for the sake of simplicity, the substance-specific transverse relaxation is not considered in this figure. a the 90° pulse rotates the longitudinal magnetization into the x′-y′-plane. b, c in the course of time, the magnetization components, which together form the transverse magnetization mxy, dephase, so that the transverse magnetization decays with the characteristic time constant t2* (see fig. 2.2.15). d, e irradiation of the 180° pulse along the x′-axis mirrors the dephased magnetization vectors at the x′-axis. as neither the precession direction nor the precession velocity of the magnetization components is altered by the 180° pulse, the components rephase and thus the transverse magnetization increases. the regeneration of the transverse magnetization is called a spin echo. f at the time te = 2τ, all magnetization components point in the same direction again. due to the rephasing effect of the 180° pulse, the amplitude mxy(te = 2τ) of the spin echo is independent of the static inhomogeneities of the b0 field.
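the refocusing argument can be checked with a small ensemble simulation. the offset-frequency spread, number of isochromats, and delay τ below are hypothetical, and the substance-specific t2 decay is neglected, as in the simplified discussion above.

```python
import numpy as np

rng = np.random.default_rng(0)
# static field inhomogeneity: random but time-independent offset
# frequencies (rad/s) for 1000 spin packets (illustrative values)
d_omega = rng.normal(0.0, 200.0, 1000)
tau = 0.01  # delay between the 90 and 180 pulse (s)

def transverse_signal(t, refocused):
    """net |mxy| of the ensemble at time t after the 90 pulse.
    refocused=True applies a 180 pulse about x' at time tau,
    which mirrors every accumulated phase (phase -> -phase)."""
    phase = d_omega * t
    if refocused and t >= tau:
        # phase mirrored at the 180 pulse, then precession continues
        phase = -d_omega * tau + d_omega * (t - tau)
    return np.abs(np.mean(np.exp(1j * phase)))

s_dephased = transverse_signal(2 * tau, refocused=False)  # decayed fid
s_echo = transverse_signal(2 * tau, refocused=True)       # echo at te = 2*tau
print(s_dephased, s_echo)
```

without the 180° pulse the ensemble has fanned out and the net signal is nearly gone; with it, every packet's phase cancels exactly at te = 2τ, reproducing the full amplitude because only static offsets were assumed.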
the envelope of the echo signals sse(2τk) (k = 1, 2, 3, …, k) decays exponentially with the relaxation time t2: sse ∝ n exp(-2τk/t2). (2.2.10) the major advantage of this multi-echo sequence is that the t2 decay can be detected very effectively by a single measurement (fig. 2.2.23). all the considerations so far have been based on the assumption that the external magnetic field b0 is not altered by the electrons surrounding a nucleus. however, this is not the case, as the electrons interact with the applied external magnetic field. in biological tissues, in which atoms are covalently bound, two related effects need to be considered: diamagnetism and the chemical shift. diamagnetism is a general feature of matter and arises because the electrons attempt to shield the interior of the sample against the external magnetic field. in electrodynamics, this effect is described by lenz's law. it states that the current induced in a circuit by the change of a magnetic field is directed in such a way that the secondary magnetic field induced by the electric current weakens the primary magnetic field (fig. 2.2.24). if a sample is positioned in an external magnetic field, a current is induced in the electron shells of the atoms and molecules, whose magnetic moment is directed against the external magnetic field, following lenz's law. however, in contrast to the electrons within a macroscopic circuit, the electrons in the electron shell are "frictionless", which means that an induced electron current remains constant until the external magnetic field changes or until the sample is removed from the magnetic field. the sum of the induced magnetic moments of the electrons per volume is-similar to the nuclear magnetization m-denoted as the electron magnetization me.
for averaging, the volume has to be chosen in such a way that, on the one hand, a great number of atoms and molecules is contained and, on the other hand, it is small compared to the volume of the sample (for example, 1 µm^3 of water contains about 3.3 · 10^10 water molecules). the magnetization me thus represents a macroscopic quantity by definition. remark: for practical reasons, a distinction is made in electrodynamics between free and bound currents: free currents are experimentally controllable and are linked to macroscopic circuits, whereas bound currents are linked to atomic and molecular magnetic moments in matter. the field related to free currents is denoted as the magnetic field h (unit: ampere/meter), the field created by the total current, i.e., by both the free and the bound currents, as the magnetic flux density b (unit: tesla). at every point in space, the vector quantities h, b, and me are related by b = μ0 (h + me), (2.2.11) me = χ h. (2.2.12) the dimensionless proportionality constant χ is called the magnetic susceptibility. for diamagnetic substances, χ is, according to lenz's law, always negative and has a very small absolute value (e.g., water: χ = -0.72 · 10^-6). when putting a diamagnetic sample into an originally homogeneous magnetic field, a magnetization me is induced according to eq. 2.2.12, which itself creates a magnetic field that counters the primary field. therefore, the field distribution of the magnetic flux density b differs both inside and outside of the sample from the original field distribution. example: figure 2.2.25 shows the field distribution of the magnetic flux density b inside and outside of a homogeneously magnetized sphere (χ = constant), which has been brought into an originally homogeneous field b0. inside the sphere, the magnetic flux density b is given by b = (1 + 2χ/3) b0. it should be noted, however, that the homogeneously magnetized sphere represents an ideal case in which the b field is homogeneous in its interior, whereas in general the b field is also inhomogeneous inside the object. fig. 2.2.23 multi-echo sequence. a pulse scheme (aq: signal acquisition). b decay of the echo amplitudes as a function of time. the signal decay is determined exclusively by the substance-specific transverse relaxation time t2, whereas the decay and the regeneration of the fid are essentially determined by technically conditioned field inhomogeneities. fig. 2.2.24 lenz's law. when a circuit is moved toward a bar magnet with the magnetic flux density b, a current i is induced in the circuit. this current induces a magnetic dipolar field, which is directed in such a way that it weakens the primary magnetic field. magnitude and orientation of the dipolar field are described by the magnetic moment μ. the discussion provides three important aspects with respect to mr examinations of humans: • the distribution of the magnetic flux density b in the human body depends on the position, size, form, and magnetic susceptibility of all tissues and organs of the body. • at the interfaces between tissues with different magnetic susceptibilities, there are local field inhomogeneities. • the distortion of the external magnetic field caused by the body adds to the technical imperfections of the external field b0. in mri, susceptibility-related inhomogeneities of the static magnetic field inside the body are obviously unavoidable and can result in image artifacts. in spectroscopic examinations, however, this problem can be reduced by acquiring only mr signals from small, morphologically homogeneous tissue regions. furthermore, one has the possibility to locally adjust the b field by means of external shim coils generating a weak additional magnetic field, so that the homogeneity within the examined region fulfils the demands. when speaking of the homogeneity of the b field within a given region of the body, this relates to the average macroscopic field; on the microscopic scale, the magnetic field is always inhomogeneous.
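the sphere relation quoted above, b = (1 + 2χ/3) b0, can be evaluated directly; the sketch below uses the water susceptibility given in the text and an illustrative 1.5 t field to show how small the resulting field change is.

```python
# flux density inside a homogeneously magnetized sphere placed in a
# previously homogeneous field b0 (relation quoted in the text)
def b_inside_sphere(b0, chi):
    return (1.0 + 2.0 * chi / 3.0) * b0

b0 = 1.5                 # tesla (illustrative field strength)
chi_water = -0.72e-6     # volume susceptibility of water (value from the text)
b_in = b_inside_sphere(b0, chi_water)

shift_ppm = (b_in - b0) / b0 * 1e6
print(shift_ppm)  # field inside reduced by about half a ppm
```

the diamagnetic sphere lowers the field in its interior by only about 0.5 ppm, which is why susceptibility effects matter mainly as line shifts and local inhomogeneities rather than as gross field changes.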
the larmor frequency of a nucleus is determined by the local magnetic field blok at the position of the nucleus, not by the macroscopic field b, which has been averaged over a small but microscopically large volume surrounding the nucleus. denoting the perturbation of the mean field b at the position of a nucleus caused by the surrounding electrons of the molecule by δb, we get the relation blok = b + δb. (2.2.13) as experimental and theoretical investigations have shown, the small local field perturbation δb is proportional to the macroscopic field, δb = −σb, (2.2.14) which yields the following expression for the resonance frequency of the nucleus: ω = γblok = γ(1 − σ)b. (2.2.15) the dimensionless "shielding constant" σ gives the relative resonance frequency shift, which is independent of the magnitude of the magnetic field. this shift depends on the distribution of the electrons around the nucleus and thus has different values in different molecules. the magnitude of the additional field δb at the position of a nucleus generally depends on the orientation of the molecule relative to the macroscopic field b. in molecules that rotate rapidly, such as in fluids and soft tissues, the chemical shift anisotropy vanishes, so the quantity σ in eq. 2.2.14 can be regarded as a direction-independent constant. this quantity describes the shielding effect of the electron shell averaged over all spatial directions. as the absolute value of the frequency shift cannot easily be measured, it is usually determined relative to the resonance frequency ωr of a reference substance. the difference (ω − ωr) of the resonance frequencies is expressed as a dimensionless constant relative to the frequency ω0 = γb0 of the mr system, in parts per million (ppm): δ = (ω − ωr)/ω0 · 10⁶.
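To make the ppm relation concrete, here is a minimal sketch in Python. The 1h gyromagnetic ratio (γ/2π ≈ 42.58 mhz/t) is a standard constant and the 1.5 t field is an illustrative choice; neither is taken from this passage:

```python
GAMMA_BAR_1H = 42.58e6  # 1H gyromagnetic ratio / 2*pi, in Hz/T (standard value)

def chemical_shift_ppm(f_hz: float, f_ref_hz: float, b0_tesla: float) -> float:
    """Shift of a resonance relative to a reference line, in ppm of the
    system frequency f0 = gamma_bar * B0, i.e. delta = (w - wr)/w0 * 1e6."""
    f0 = GAMMA_BAR_1H * b0_tesla
    return (f_hz - f_ref_hz) / f0 * 1e6

# a separation of 3.5 ppm (the water/fat case discussed later in the text)
# corresponds at 1.5 T to a frequency difference of roughly 224 Hz:
f0 = GAMMA_BAR_1H * 1.5
df = 3.5e-6 * f0
print(df)                                    # about 223.5 Hz
print(chemical_shift_ppm(f0 + df, f0, 1.5))  # recovers 3.5 ppm
```

The smallness of these frequency differences is why resolving chemical shifts requires the strong, highly homogeneous fields mentioned in the next paragraph.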
the chemical shift δ provides information about how the atom with the nucleus under study is bonded in the molecule and thus makes mr spectroscopy a powerful tool for the determination of the structure of molecules as well as for the investigation of biochemical processes. for the 1h nucleus, which is surrounded by only one electron, the range of chemical shifts is about 10 ppm; for atoms with several electrons (e.g., 13c, 19f, and 31p) it can amount to several hundred ppm. to resolve these small differences in frequency, it is necessary to use a strong and homogeneous static magnetic field (b0 ≥ 1.5 t, ∆b/b0 < 0.1–0.5 ppm depending on the nucleus).

figure caption (variation of the magnetic field by a diamagnetic sphere): distribution of the b field inside and outside a homogeneously magnetized sphere, which was positioned in an originally homogeneous magnetic field. it should be noted, however, that field variations caused by biological tissue are markedly weaker than illustrated here.

if the object to be imaged, such as the human body, is divided into small cuboidal volume elements (i.e., voxels), the task in mr imaging is to distinguish the signal contributions of the voxels to the detected summation signal from one another and to present them in the form of sectional images (tomograms). this can be achieved by superimposing on the homogeneous magnetic field b0 an additional magnetic field with a well-defined dependence on the spatial position, so that the larmor frequency of the mr signal becomes a function of space. in practice, image reconstruction is achieved almost exclusively by means of magnetic gradient fields. these are three additional magnetic fields bx, by, and bz, whose field vectors point in the z-direction and whose field strengths depend linearly on the spatial position x, y, or z, respectively (fig. 2.3.1).
if the z-components of the three magnetic gradient fields are denoted by bx, by, and bz, the fields can be expressed as bx = gx · x, by = gy · y, bz = gz · z, (2.3.1) where the proportionality constants gx, gy, and gz describe the magnitude or steepness of the orthogonal gradient fields. remark: the magnetic gradient fields are briefly denoted as x-, y-, or z-gradients. what is meant are magnetic fields bx, by, and bz, the magnitude of which varies linearly along the x-, y-, or z-axis, respectively (see fig. 2.3.1). in order to avoid image distortions, the magnitude of the gradients has to be chosen in such a way that the local field variations are markedly greater than the local inhomogeneities of the main magnetic field b0; typical values are between 1 and 50 mt/m. technically, the gradient fields bx, by, and bz are produced by three coil systems (gradient coils), which can be operated independently of one another. example: assuming a patient diameter in the x-direction of x = 30 cm, a magnetic flux density of the static field of b0 = 1 t, and a gradient strength of gx = 10 mt/m, the magnetic field b = b0 + gx·x will increase within the patient (−x/2 ≤ x ≤ +x/2) linearly from 0.9985 to 1.0015 t. in mr imaging, the magnetic gradient fields are used in two different ways: • for selective excitation of the nuclear spins in a partial body region (e.g., a slice). • for position encoding within an excited partial body region (e.g., a slice).

figure caption (fig. 2.3.1): for image reconstruction, the homogeneous magnetic field b0 shown in a is superimposed by additional magnetic fields bx, by, and bz, so-called gradient fields, the field vector of which points in the z-direction and the magnitude of which (length of the black arrows) depends linearly on the spatial coordinate x, y, or z, respectively. b and c show the field distributions in case the field b0 is superimposed by a gradient field by and bz, respectively.
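The worked example above can be checked with a few lines of Python; the numbers are exactly those given in the text:

```python
# Reproduces the text's worked example: the field along x under a readout
# gradient is B(x) = B0 + Gx*x, across a patient of diameter X centered at x=0.

def field_range(b0_tesla: float, gx_t_per_m: float, extent_m: float):
    """Smallest and largest field within -X/2 <= x <= +X/2."""
    half = gx_t_per_m * extent_m / 2.0
    return b0_tesla - half, b0_tesla + half

lo, hi = field_range(1.0, 10e-3, 0.30)  # B0 = 1 T, Gx = 10 mT/m, X = 30 cm
print(lo, hi)  # about 0.9985 T and 1.0015 T, as stated in the text
```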
the open arrows indicate the direction of the field variation, whereas the constants gy and gz represent the magnitude of the field variation per unit of length. if the z-components of the two magnetic gradient fields are given by by and bz, then the gradient strengths are defined by gy = δby/δy and gz = δbz/δz, respectively.

an mr signal can in principle only be detected from the volume in which the nuclei have been excited beforehand by an rf pulse. this fact is used in planar imaging methods in order to reduce the primarily 3d reconstruction problem to a 2d one by selectively exciting only nuclei in a thin slice of the body. remark: depending on the type of the selectively excited partial body volume, one distinguishes between single-point, line, planar, and volume sampling strategies (fig. 2.3.2). as the intensity of the detected mr signal is proportional to the number of nuclei within the excited volume, the different strategies markedly differ in the time required for the acquisition of qualitatively comparable mr images. due to the long measurement time, single-point and line scanning techniques have not been successful in clinical practice. in order to excite a distinct slice of the body selectively, the homogeneous static magnetic field b0 is superimposed with a gradient field (slice-selection gradient) that varies perpendicular to the slice, i.e., for an axial slice a gradient field in the longitudinal direction of the body. due to this superposition, the larmor frequency ω of the nuclei varies along the direction of the gradient. if we consider, for instance, a z-gradient with magnitude gz, the larmor frequency is given by (see fig. 2.3.1c) ω(z) = γ(b0 + gz·z). (2.3.2) consequently, an object slice z1 ≤ z ≤ z2 is characterized by a narrow frequency interval γ(b0 + gz·z1) ≤ ω ≤ γ(b0 + gz·z2). if one irradiates an rf pulse whose frequency spectrum coincides with this frequency range, only the nuclei within the chosen slice will be excited (fig. 2.3.3).
for the definition of a body slice, this has two implications: • the width d = z2 − z1 of the slice can be varied by changing either the bandwidth of the rf pulse, i.e., the width of the frequency distribution, or the gradient strength gz. • the position of the slice can be altered by shifting the frequency spectrum of the rf pulse. the practical realization of the concept of slice-selective excitation requires not only the shape of the rf pulse but also the switching mode of the slice-selection gradient to be carefully optimized.

figure caption (fig. 2.3.3): the main magnetic field b0 is superimposed with a magnetic gradient field bz = gz·z in the z-direction (slice-selection gradient), so the larmor frequency ω(z) = γ(b0 + gz·z) of the nuclei depends linearly on the spatial coordinate z. the slice z1 ≤ z ≤ z2 within the object is thus unambiguously described by the frequency interval ω(z1) ≤ ω(z) ≤ ω(z2). if one irradiates an rf pulse with a frequency spectrum that corresponds to this frequency interval, only the nuclei within the chosen slice will be excited.

• pulse modulation: as shown in fig. 2.2.10, the frequency spectrum of a rectangular rf pulse consists of several frequency bands with varying intensities. if such a pulse is used for the rf excitation, the profile of the excited slice is insufficiently defined. in order to obtain a uniform distribution of the transverse magnetization over the slice width, the shape of the selective rf pulse is modulated so that the frequency spectrum becomes as rectangular as possible ("sinc pulse," see fig. 2.2.23a). • compensation gradients: if one imaginatively dissects an object slice into several thin subslices, then the magnetization components of all subslices will be deflected by the same angle from the z-direction when an optimized rf pulse is used for slice-selective excitation.
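The first bullet point, that slice width is set by rf bandwidth and gradient strength together, can be sketched numerically. The relation d = ∆f/(γ̄·gz) follows from the frequency interval of the slice given in the text; the gyromagnetic ratio is a standard constant and the pulse/gradient numbers below are illustrative assumptions:

```python
GAMMA_BAR_1H = 42.58e6  # 1H gyromagnetic ratio / 2*pi, in Hz/T (standard value)

def slice_thickness_m(rf_bandwidth_hz: float, gz_t_per_m: float) -> float:
    """Excited slice width d = z2 - z1: the slice spans the frequency
    interval gamma_bar*Gz*d, so d = bandwidth / (gamma_bar * Gz)."""
    return rf_bandwidth_hz / (GAMMA_BAR_1H * gz_t_per_m)

# illustrative numbers (not from the text): a 1 kHz pulse with a 5 mT/m
# slice-selection gradient excites a slice of about 4.7 mm; doubling the
# gradient halves the thickness, as the bullet point implies.
print(slice_thickness_m(1000.0, 5e-3) * 1e3)   # thickness in mm
print(slice_thickness_m(1000.0, 10e-3) * 1e3)  # half as thick
```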
however, the magnetization components will be dephased at the end of the excitation period tp, since the larmor frequencies of the distinct subslices differ from one another while the slice-selection gradient is switched on. this effect can be compensated by reversing the polarity of the gradient field for a well-defined period after rf excitation (fig. 2.3.4b). after selective excitation of a partial body region, the mr signal from each voxel within this volume, e.g., a slice, needs to be spatially encoded. this can be achieved by two techniques: frequency and phase encoding of the mr signal. the principle of both encoding techniques will be explained in the following by the example of an axial slice parallel to the x-y-plane. for the sake of simplicity, relaxation effects are neglected in this connection. if the body to be imaged is placed in a homogeneous magnetic field with the flux density b0, the magnetization components of all voxels in the excited slice will precess with the same frequency around the direction of the b0 field. thus, the frequency spectrum consists of only a single resonance line at the larmor frequency ω0 = γb0; it does not contain any spatial information. however, if a magnetic gradient field, e.g., bx = gx·x, is switched on during the acquisition phase of the mr signal (fig. 2.3.5), the larmor frequency is related to the position x (see fig. 2.3.1) by the resonance condition ω(x) = γ(b0 + gx·x). (2.3.3) in other words, nuclei in parallel strips oriented perpendicular to the direction of the readout (or frequency-encoding) gradient will experience a different magnetic field and thus contribute with different larmor frequencies ω(x) to the detected mr signal of the excited slice; the spatial information is encoded in the resonance frequency. in order to determine the contribution of the distinct frequency components to the summation signal, a fourier transformation of the measured fid has to be performed (cf. sect. 2.2.6). the intensity i(ω) of the resulting spectrum at the frequency ω is proportional to the number of nuclei precessing with this frequency, i.e., to the number of nuclei that are, according to eq. 2.3.3, at the position x = (ω − γb0)/(γgx). the frequency spectrum of the fid signal therefore gives the projection of the spin density distribution in the excited slice onto the direction of the readout gradient (fig. 2.3.6).

figure caption (fig. 2.3.4, pulse modulation and gradient refocusing): a in order to obtain an approximately rectangular slice profile, one uses an rf pulse, the envelope of which is not rectangular but modulated in time. b if one dissects a thicker slice of the object into several thin subslices, an optimized rf pulse will deflect the magnetization of each subslice by the same angle from the z-direction, but the magnetization components are dephased after rf excitation, because the larmor frequencies in the subslices differ. if a 90° pulse with duration tp is used for slice-selective excitation, then the dephasing effect can be compensated in good approximation by inverting the gradient field after excitation for the duration tp/2.

figure caption (fig. 2.3.5, readout or frequency-encoding gradient): in frequency encoding, the main magnetic field b0 is superimposed with a gradient field (here bx = gx·x) during the acquisition of the rf signal, so the precession frequency of the transverse magnetization in the selectively excited slice becomes a function of the coordinate x.

remark: when explaining the concept of frequency encoding, we assumed that there is only one resonance line in the frequency spectrum of the excited body region in the absence of the readout gradient. if this assumption is not fulfilled, i.e., if the spectrum contains several resonance lines due to the chemical shift effects described in sect.
2.2.8.2, then these frequency shifts will be interpreted by the decoding procedure (i.e., the fourier analysis) of the fid signal as position information. consequently, the spin density projections of molecules with different chemical shifts are shifted in space against one another. in 1h imaging, the situation is rather simple, as only two dominant proton components contribute to the mr signal of the organism: the protons of the water molecules and those of the ch2 groups of fatty acids. as the resonance frequencies of the two components differ by about 3.5 ppm (fig. 2.3.7), fat- and water-containing structures of the body are slightly shifted against one another in the readout direction. this chemical-shift artifact becomes apparent predominantly at the interfaces between fat- and water-containing tissue. from a technical point of view, the fid signal s(t) can only be sampled and stored in discrete steps over a limited period of time taq. consequently, there is only a limited number n = taq/δt of data points that can be used for fourier transformation. for this reason, the spatial sampling interval δx of the spin density projection is limited too. the following relations hold between the number n of data points, the maximum object size x, the spatial sampling interval δx, the temporal sampling interval ∆t, and the gradient strength gx (sampling theorem): x = n·∆x, ∆x = 2π/(γ·gx·n·∆t). example: if the fid signal is sampled 256 times at ∆t = 30 µs in the presence of a magnetic gradient field of the strength gx = 1.566 mt/m, then the resolution along the x-axis is ∆x = 1.953 mm, and the maximum object size that can be imaged is x = n∆x = 50 cm. with frequency encoding, the mr signal is sampled at discrete points in time tn = n∆t (1 ≤ n ≤ n). according to eq. 2.3.3, the transverse magnetization at the position x precesses under the influence of the readout gradient until the time tn by the angle φn(x) = γ·gx·x·tn. the spatial information is therefore encoded via the frequency ω(x) in the phase angles φn(x) (1 ≤ n ≤ n). however, the same phase angles can be realized by increasing the gradient strength gx(n) = n·δgx in equidistant steps δgx at a fixed switch-on time of the gradient. this equivalent approach is called phase encoding. the concept of phase encoding can easily be realized by applying a magnetic gradient field, e.g., bx = gx·x, for a fixed time tx before the fid signal is detected (fig. 2.3.8). under the effect of this phase-encoding gradient, the magnetization at the position x precesses by the phase angle φn(x) = γ·gx(n)·x·tx = kn·x. the parameter kn = γ·gx(n)·tx = γ·n·δgx·tx is named the spatial frequency.

figure caption (fig. 2.3.6, principle of frequency encoding): if the fid signal from a slice is measured in the presence of a gradient field bx = gx·x (see fig. 2.3.5), then nuclei in strips oriented perpendicular to the direction of the gradient contribute with different larmor frequencies ω(x) = γ(b0 + gx·x) to the measured mr signal. the contribution i(ω) of the distinct frequency components to the summation signal can be calculated by a fourier transformation of the fid signal. as the intensity i(ω) of the resulting spectrum at the frequency ω is, on the one hand, proportional to the number of nuclei precessing with this frequency, and, on the other hand, the spatial information is encoded in the frequency, the fourier transformation yields the projection of the spin density distribution within the considered object slice onto the direction of the readout gradient.

figure caption (fig. 2.3.7): in vivo 1h spectrum of a human thigh at 1.5 t. the two resonance lines can be attributed to protons in water and in the ch2 groups of fatty acids.
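The sampling relations can be checked against the text's own example. A minimal sketch, assuming the reconstructed form ∆x = 2π/(γ·gx·n·∆t); the 1h gyromagnetic ratio is a standard constant, not from this passage:

```python
import math

# Sampling-theorem relations for frequency encoding:
#     X = N*dx,   dx = 2*pi / (gamma * Gx * N * dt)
# checked against the text's example (N = 256, dt = 30 us, Gx = 1.566 mT/m).

GAMMA_1H = 2.0 * math.pi * 42.58e6  # rad/(s*T), standard 1H value

def pixel_size_m(n: int, dt_s: float, gx_t_per_m: float) -> float:
    """Spatial sampling interval dx of the spin-density projection."""
    return 2.0 * math.pi / (GAMMA_1H * gx_t_per_m * n * dt_s)

dx = pixel_size_m(256, 30e-6, 1.566e-3)
print(dx * 1e3)        # about 1.953 mm, as in the text
print(256 * dx * 1e2)  # about 50 cm maximum object size
```

That the formula reproduces both numbers of the worked example supports the reconstruction.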
after switching off the phase-encoding gradient, the magnetization components of the voxels in the slice precess again with the original, position-independent larmor frequency ω0 = γb0 around the direction of the b0 field, now, however, with position-dependent phase angles φn(x). that is to say, in phase encoding, all magnetization components of the excited voxels contribute to the detected mr signal with the same frequency ω0, but with differing phases φn(x). in order to calculate the projection of the spin density distribution in the slice onto the direction of the phase-encoding gradient, the chosen sequence is repeated n times with different spatial frequencies kn = n(γ·δgx·tx) = n·δk (1 ≤ n ≤ n) (fig. 2.3.9). however, in contrast to frequency encoding, during phase encoding not the entire fid is sampled, but only the mr signal s(kn, t0) at a fixed time t0. after n measurements (sequence cycles), the spin density projection can be calculated by a fourier transformation of the acquired data set s(δk, t0), s(2δk, t0), s(3δk, t0), …, s(nδk, t0).

in practical mri, techniques of image reconstruction have prevailed that differ merely in the way the aforementioned techniques of selective excitation and spatial encoding are combined.

figure caption (fig. 2.3.8, phase-encoding gradient): in phase encoding, a gradient field (here bx = gx·x), the magnitude of which is increased in equidistant steps ∆gx at each sequence cycle, is switched on for a fixed time tx before the fid signal is acquired.

this method, which p. lauterbur used in 1973 to generate the first mr image, is based on a technique of image reconstruction used in computed tomography. its basic idea is easy to understand: if projections of the spin density distribution of an object slice are available for various viewing angles φn (1 ≤ n ≤ n), the spin density distribution in the slice can be reconstructed by "smearing back" the (filtered) profiles over the image plane along their viewing directions (fig. 2.3.11).
this approach can be implemented easily by making use of the frequency-encoding technique: the sequence shown in fig. 2.3.5 is repeated several times while the direction of the readout gradient is rotated step by step in the slice plane. in order to reconstruct a planar image of n × n picture elements (pixels), a minimum of n projections with n data points each is needed. the stepwise rotation of the readout gradient by the angle δφ = 180°/n is performed electronically by a weighted superposition of two orthogonal gradient fields. the projection reconstruction method is easy to understand, but both the mathematical description and the data processing are rather complex. furthermore, it carries the disadvantage that magnetic field inhomogeneities and patient movements result in considerable image artifacts. for these reasons, the fourier techniques described in the following sections are preferred for the reconstruction of mr images.

figure caption (fig. 2.3.9, principle of phase encoding, displayed in the rotating frame): as shown in fig. 2.3.8, a phase-encoding gradient gx is switched on for a fixed time tx before the fid signal is acquired (aq). a–d the sequence is repeated several times at equidistantly increasing gradient strengths. under the influence of the gradient field bx = gx·x, the magnetization components of the different voxels in the slice precess with different larmor frequencies. if the gradient is switched off after the time tx, all components rotate again with the original, position-independent frequency ω0 = γb0 around the direction of the b0 field. however, magnetization components that precess more quickly during the switch-on time tx of the gradient field will maintain their advance compared with the slower ones. this advance is described by the phase angle φ(x) = γ·gx·x·tx of the different magnetization components.
the figure shows the dependence of the phase angle φ(x) on the gradient strength gx and the spatial coordinate x schematically for four different gradient strengths (gx = 0, ∆gx, 2∆gx, and 3∆gx) and three adjacent voxels (with the magnetization vectors m0, m1, and m2). as shown in b, under the influence of the gradient field the magnetization m1 at the position x1 will rotate by the phase angle φ(x1) = 45° and the magnetization m2 at the position x2 by the phase angle φ(x2) = 90°.

figure caption (fig. 2.3.10, comparison between frequency and phase encoding): with both encoding techniques, the transverse magnetizations of all voxels within the excited slice contribute to the detected mr signal; the spatial information is encoded in both cases by the different phases of the magnetization components, which have developed under the influence of a magnetic gradient field (here bx = gx·x) up to the moment of signal detection. in order to calculate the projection of an object onto the direction of the gradient, the mr signal s has to be measured n times (e.g., n = 128 or 256), with the phase difference between the magnetization components of the different voxels varying in a well-defined way from measurement to measurement. the difference between the two encoding techniques merely consists in the way the data set {s1, s2, …, sn} is acquired. a with frequency encoding, the mr signal is sampled at equidistant time steps ∆t in the presence of a constant gradient field (in the figure ∆gx). as the magnetization components of the voxels within the excited slice steadily dephase under the influence of the gradient field, there is a different phase difference between them at every point in time tn = n∆t (1 ≤ n ≤ n), so the whole data set {s1, s2, …, sn} can be detected by a single application of the sequence. the figure shows the first three values of the signal.
it should be noted that only the temporal change of the mr signal caused by the gradient field is shown here, whereas the rapid oscillation of the signal with the larmor frequency ω0 = γb0 as well as the t2* decay of the signal is neglected. b with phase encoding, a phase-encoding gradient is switched on for a fixed duration tx before the fid signal is acquired. the magnitude of this gradient is increased at each sequence repetition by ∆gx. during the switch-on period of the gradient field, the magnetization components of the different voxels precess with different frequencies, so that a phase difference is established between them that is proportional to the magnitude of the applied gradient (see fig. 2.3.9). after switching off the gradient, all components rotate again with the original, position-independent frequency ω0 = γb0 around the direction of the b0 field. in the chosen description, one therefore observes an mr signal sn that is constant over time. in order to acquire the entire data set {s1, s2, …, sn}, the sequence needs to be repeated n times with different gradient strengths gx = n∆gx. the figure shows the dependence of the mr signal on the gradient strength schematically for three different gradient strengths (gx = 0, ∆gx, 2∆gx). if the product of the gradient strength and the switch-on time of the gradient up to the time of signal detection is equal in both encoding techniques, the phase difference between the various magnetization components at the time of signal detection is also identical, and thus the same mr signal is measured. the product of the two quantities is indicated in the figure by the dark areas.

in the planar version of fourier imaging, just as in projection reconstruction, the spins in a slice are selectively excited by an rf pulse in the first step.
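The equivalence stated in the caption, that the phase difference depends only on the gradient-time product, can be sketched in a few lines of Python. All numbers are illustrative; the 1h gyromagnetic ratio is a standard constant:

```python
import math

# Frequency and phase encoding impose the same phase whenever the product of
# gradient strength and switch-on time (the "dark areas" in the figure) is
# equal: phi(x) = gamma * Gx * x * t in the rotating frame.

GAMMA_1H = 2.0 * math.pi * 42.58e6  # rad/(s*T), standard 1H value

def phase_rad(gx_t_per_m: float, t_s: float, x_m: float) -> float:
    """Phase accumulated at position x under gradient Gx applied for time t."""
    return GAMMA_1H * gx_t_per_m * x_m * t_s

x = 0.05  # voxel 5 cm off-center (illustrative)
# frequency encoding: constant gradient dG, signal sampled at time 3*dt ...
phi_freq = phase_rad(1e-3, 3 * 100e-6, x)
# ... phase encoding: gradient 3*dG switched on for the fixed time dt
phi_phase = phase_rad(3 * 1e-3, 100e-6, x)
print(math.isclose(phi_freq, phi_phase))  # True: same gradient-time product
```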
afterwards, however, spatial encoding of the spins in the slice is not done by a successive rotation of a readout gradient, but by a combination of frequency and phase encoding using two orthogonal gradient fields. if we consider an axial slice parallel to the x-y-plane, then these gradients are gx and gy (fig. 2.3.12). the sequence is repeated n times for different values of the phase-encoding gradient gx(n) = n·δgx (1 ≤ n ≤ n), with the mr signal being measured m times during each sequence cycle at the times tm = m·δt (1 ≤ m ≤ m) in the presence of the readout gradient gy. thus, one obtains a measurement value for each combination (kn, tm) of the parameters kn = γ·n·δgx·tx and tm = m·δt, i.e., a matrix of n × m data points. a 2d fourier transformation of this data set, the so-called hologram or k-space matrix (see fig. 2.4.26), yields the mr image of the slice with a resolution of n × m pixels. in order to extend the 2d fourier method to a 3d one, the slice-selection gradient is replaced by a second phase-encoding gradient, as shown in fig. 2.3.13. this means that the rf pulse excites all spins in the sensitive volume of the rf coil and that the spatial information is encoded exclusively by orthogonal gradients: by two phase-encoding gradients and one frequency-encoding gradient. the spatial resolution in the third dimension is defined by the strength of the related phase-encoding gradient and the number k of the phase-encoding steps. depending on the choice of these parameters, the voxels have a cubic or cuboidal shape (isotropic or anisotropic resolution). in order to acquire a 3d k-space matrix with n × m × k independent measurement values, the imaging sequence needs to be repeated n × k times. a 3d fourier transformation of the acquired 3d k-space matrix yields the 3d image data set of the partial body region excited by the rf pulse. based on this image data set, multiplanar images in any orientation can be reformatted, which offers, among others, the possibility to look at an organ or a body structure from various viewing directions. in addition to the described conventional fashion of filling the k-space in fourier imaging, there are a number of alternative strategies. • spiral acquisition. as will be described later, the epi sequence commonly uses an oscillating frequency-encoding gradient. if both the phase-encoding and the frequency-encoding gradients oscillate with increasing gradient amplitudes, the acquired data points will lie along a spiral trajectory through k-space. that is why such an acquisition is called spiral epi. • radial acquisition. if the direction of the frequency-encoding gradient is rotated as described in sect. 2.3.4.1, the k-space trajectories will form a star. • blade, propeller, multivane. these hybrid techniques sample k-space data in blocks (so-called blades), each of which consists of several parallel k-space lines. in order to successively cover the entire k-space, the direction of the blades is rotated with a fixed radial increment. this sampling strategy offers some advantages. since each blade contains data points close to the center of the k-space, patient movements can, for example, be easily detected and corrected.

figure caption (fig. 2.3.11, reconstruction by back projection): the figure shows three different projections of two objects in the field of view. if many projections are acquired at different viewing angles, an image can be reconstructed by (filtered) back projection of the profiles. for the measurement of the various projections, the frequency-encoding technique is used, with the readout gradient rotating step by step.

figure caption (fig. 2.3.12, typical pulse and gradient sequence in 2d fourier imaging): gz is the slice-selection gradient, gx the phase-encoding gradient, and gy the readout gradient.
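The core of fourier imaging, that the k-space matrix and the image are a fourier-transform pair, can be sketched along one axis with a plain discrete fourier transform; in 2d the same transform is applied along the second axis as well. The 8-point "projection" below is made up for illustration:

```python
import cmath

# The encoded mr signal is (up to constants) the discrete fourier transform
# of the spin-density projection, so an inverse dft recovers the projection.

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

rho = [0.0, 0.0, 1.0, 0.0, 0.0, 0.5, 0.0, 0.0]  # spin-density projection
signal = dft(rho)                    # one line of the k-space matrix
recon = [z.real for z in idft(signal)]
print(all(abs(a - b) < 1e-9 for a, b in zip(recon, rho)))  # True
```

In practice the scanner measures `signal` directly, one phase-encoding step per row, and only the inverse transform is computed.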
to reconstruct mr images from alternative k-space trajectories by means of a conventional 2d or 3d fourier transformation, it is necessary to re-grid the sampled k-space data to a rectangular grid. data acquisition by 2d imaging techniques can be carried out very efficiently when considering the fact that the time required for slice-selective excitation, spatial encoding, and acquisition of the mr signal is much shorter than the time needed by the spin system to relax at least partially after rf excitation, before it can be excited once again. the long waiting periods can be used to excite, in a temporally shifted manner, adjacent slices and to detect the spatially encoded mr signals from these slices. thus, mr images from different parallel slices can be acquired simultaneously without prolongation of the total acquisition time (fig. 2.3.14). example: let us assume that 50 ms are needed for excitation, spatial encoding, and data acquisition per sequence cycle and that the sequence is repeated after tr = 1,000 ms. then mr data from 20 adjacent slices can be acquired simultaneously without prolonging the measurement time. however, when using the multiple-slice technique, one has to consider that the distance between slices must not be too small, as the slice profile usually is not rectangular, but bell shaped. in order to avoid repeated excitation of spins in overlapping slice regions, the gap between adjacent slices should correspond approximately to the width of the slice itself. images from adjacent slices can be obtained in an interleaved manner by applying the sequence twice: in the first measurement, data are acquired from the odd slices and in the second from the even slices (fig. 2.3.15).

figure caption (fig. 2.3.13, typical pulse and gradient sequence in 3d fourier imaging): gx and gy are the phase-encoding gradients and gz the readout gradient.

figure caption (fig. 2.3.14, principle of the multiple-slice technique):
in most 2d imaging sequences, the time t required for slice-selective excitation, spatial encoding, and detection of the mr signal is markedly shorter than the repetition time tr. the long waiting periods can be used to subsequently excite spins in parallel slices and to detect the spatially encoded signals from these slices.

figure caption (fig. 2.3.15, consideration of the slice profile in the multiple-slice technique): the profile of a slice is generally not rectangular but rather bell shaped. the thickness (th) of the slice is therefore usually defined by the full-width at half-maximum. in order to prevent overlapping of adjacent slices in the multiple-slice technique, a sufficient gap (g) between two adjacent slices has to be chosen (g ≥ th). often the distance (d = g + th) between the slices is indicated instead of the gap g. images from adjacent slices can be detected without overlap by using in a first step a sequence that acquires data from the even slices and in a second step a sequence that acquires data from the odd slices. in both measurements, the gap g should be identical with the slice thickness th (d = g + th = 2th).

the main advantage of mr imaging, apart from the flexibility in slice orientation, is the excellent soft-tissue contrast of the reconstructed mr images. it is based on the different relaxation times t1 and t2 of the tissues, which depend on the complex interaction between the hydrogen nuclei and their surroundings. compared to that, differences in proton densities (pd) are only of minor relevance, at least when considering soft tissues. the term proton density in mr imaging designates only those hydrogen nuclei whose magnetization contributes to the detectable image signal. essentially, this refers to hydrogen nuclei in the ubiquitous water molecules and in the methylene groups of the mobile fatty acids (see sect. 2.3.3.1 and fig. 2.3.7).
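The multiple-slice bookkeeping from the worked example and the slice-distance rule from the caption can be sketched together; both relations are taken directly from the text:

```python
# Multiple-slice technique: with a cycle time of 50 ms and tr = 1,000 ms,
# 20 slices fit into one repetition interval; the recommended slice distance
# is d = g + th = 2*th (gap equal to slice thickness).

def max_interleaved_slices(tr_ms: float, cycle_ms: float) -> int:
    """Number of slices that can be excited within one repetition time."""
    return int(tr_ms // cycle_ms)

def slice_distance_mm(thickness_mm: float) -> float:
    """Center-to-center distance d = g + th with gap g = th."""
    return 2.0 * thickness_mm

print(max_interleaved_slices(1000, 50))  # 20, as in the text
print(slice_distance_mm(5.0))            # 10.0 mm for an illustrative 5-mm slice
```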
hydrogen atoms, which are included in cellular membranes, proteins, or other relatively immobile macromolecular structures, usually do not contribute to the mr signal; their fid signal has already decayed to zero at the time of data acquisition (t2 << te) (brix 1990). another important contrast factor is the collective flow of the nuclei. the influence of flowing blood on the image signal will be discussed separately in sect. 2.7, in the context of mr angiography. whereas the image contrast of a ct scan depends only on the electron density of the tissues considered (as well as on the tube voltage and beam filtering), the mr signal and thus the character of an mr image are determined by the intrinsic tissue parameters pd, t1, and t2 as well as by the type of the sequence used and by the selected acquisition parameters. this variability offers the opportunity to enhance the image contrast between distinct tissues by cleverly selecting the type of sequence and the corresponding acquisition parameters, and thus to optimize the differentiation between these tissue structures. however, the subtle interplay of the many parameters bears the danger of misinterpretations. in order to prevent these, several mr images are always acquired in clinical routine, with different sequence parameters selected in such a way that the tissue contrast of each image is determined mainly by a single tissue parameter; in this context, one uses the term t1-, t2-, or pd-weighted images. sometimes, one even goes a step further and calculates "pure" t1, t2, and pd parameter maps on the basis of several mr images that were acquired with different acquisition parameters. the advantage of doing so is that the image contrast of the calculated parameter maps is usually more accentuated than in the weighted images. the calculated tissue parameters can furthermore be used to characterize various normal and pathological tissues.
however, experience has shown that a characterization or typing of tissues by means of calculated mr tissue parameters is possible only with reservations (bottomley et al. 1987; higer and bielke 1986; pfannenstiel et al. 1987). this may be due not only to insufficient measurement and analysis techniques, but also to the fact that the morphological information of the mr images as well as the clinical expertise of the radiologist have been left aside in many cases. these considerations indicate that each mr practitioner should be aware of the dependence of the image contrast on the selected type of imaging sequence as well as on the sequence and tissue parameters, in order to fully benefit from the potential of mri and to avoid misinterpretations. the term imaging sequence designates the temporal succession of rf pulses and magnetic gradient fields, which are used to determine the image contrast and for image reconstruction, respectively. the foregoing section has made intuitive use of the term image contrast in order to describe the possibility to distinguish between adjacent tissue structures in an mr image. we will now define this term. if one describes the signal intensities of two adjacent tissue structures a and b by sa and sb, the image contrast between the two tissues can be expressed by the absolute value of the signal difference

cab = |sa − sb| (2.4.1)

or by the normalized difference

cab = |sa − sb| / (sa + sb). (2.4.2)

remark: the delineation of a tissue structure depends, of course, also on the signal-to-noise ratio (s/n), as tiny, weakly contrasted structures can be masked by image noise. some authors therefore proposed to use the contrast-to-noise ratio for evaluating the detectability of a detail. however, the explanatory power of this quantity can hardly be objectified, since the contrast-detail detectability strongly depends on the signal detection in the human retina as well as on the signal processing in the central visual system of the observer.
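the two contrast measures can be sketched numerically (the normalization by the signal sum is an assumed reading of the "normalized difference" of eq. 2.4.2; the signal values below are made up for illustration):

```python
# sketch of the two contrast measures: the absolute contrast
# c_ab = |s_a - s_b| (eq. 2.4.1) and a normalized variant
# |s_a - s_b| / (s_a + s_b) (assumed form of eq. 2.4.2).

def absolute_contrast(s_a, s_b):
    """absolute contrast between two signal intensities."""
    return abs(s_a - s_b)

def normalized_contrast(s_a, s_b):
    """contrast normalized by the summed signal (assumed normalization)."""
    return abs(s_a - s_b) / (s_a + s_b)

c_abs = absolute_contrast(3.0, 1.0)     # 2.0
c_norm = normalized_contrast(3.0, 1.0)  # 0.5
```

the normalized form is dimensionless and independent of a global scaling of both signals, whereas the absolute form is what the following discussion uses.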
in the following, we will therefore use the absolute contrast defined in eq. 2.4.1. example: in order to analyze the influence of the tissue and acquisition parameters on the image contrast by an example, we will consider in the following the contrast between white and gray brain matter. representative tissue parameters, which have been measured for a patient collective at 1.5 t, are summarized in table 2.4.1. in clinical routine, the spin-echo (se) sequence is still a frequently applied imaging sequence, for two reasons:
• it is rather insensitive to static field inhomogeneities and other inaccuracies of the mr system.
• it allows for the acquisition of t1-, t2-, and pd-weighted images by an appropriate choice of the acquisition parameters tr and te.

• t1 dependence: a complete recovery of the longitudinal magnetization requires a repetition time of about 3 t1. usually, however, the sequence is repeated much earlier, so that the longitudinal magnetization at the beginning of the next sequence cycle is reduced compared to the equilibrium magnetization by the t1 factor [1 − exp(−tr/t1)]. accordingly, the t1 contrast of an se image can be varied by the choice of the repetition time tr. in fig. 2.4.2a, the t1 factor is plotted for white and gray brain matter. as this example shows, the t1 contrast reaches a maximum if the repetition time tr lies between the t1 relaxation times of the two tissues considered. if tr is markedly longer than the longer t1 time, the t1 contrast vanishes.
• t2 dependence: the influence of the t2 relaxation process on the signal intensity is described by the t2 factor exp(−te/t2) in the signal equation. for a given t2 time, the signal loss increases with the echo time te. in fig. 2.4.2b, the t2 factor is plotted for white and gray brain matter versus the echo time te. the contrast reaches a maximum when the echo time te lies between the t2 relaxation times of the two tissues considered.
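the two factors just described can be evaluated numerically. the relaxation times below are assumptions standing in for table 2.4.1 (roughly representative of white and gray matter at 1.5 t), not values taken from the text:

```python
import math

# sketch of the se contrast factors. assumed relaxation times (ms):
T1_WM, T1_GM = 600.0, 900.0
T2_WM, T2_GM = 80.0, 100.0

def t1_factor(tr, t1):
    """saturation factor 1 - exp(-tr/t1) of the se signal equation."""
    return 1.0 - math.exp(-tr / t1)

def t2_factor(te, t2):
    """echo-decay factor exp(-te/t2) of the se signal equation."""
    return math.exp(-te / t2)

def t1_contrast(tr):
    """absolute t1 contrast between the two assumed tissues."""
    return abs(t1_factor(tr, T1_WM) - t1_factor(tr, T1_GM))

# the t1 contrast peaks for a tr between the two t1 times and
# vanishes for tr much longer than both:
best_tr = max(range(100, 5000, 10), key=t1_contrast)
```

with these assumed values the maximizing tr comes out at roughly 730 ms, i.e., in between the two t1 times, in line with the statement above.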
for small te values (te << t2), the contrast approximates zero, as the signal intensities are then independent of t2.

(caption, fig. 2.4.2: a the t1 factor for white (wm) and gray (gm) brain matter. the t1 contrast between the two tissues approaches 0 for very long as well as for very short repetition times; the highest t1 contrast is obtained for tr ~ 750 ms, i.e., for a repetition time in between the t1 times of the two tissues considered (see table 2.4.1). b the same considerations hold for the t2 factor exp(−te/t2); the t2 contrast maximum is at te ~ 100 ms.)

(caption, fig. 2.4.3: influence of the acquisition parameters tr and te on the contrast behavior of an se image. the figure shows the interplay of the longitudinal and transversal relaxation for white (wm) and gray (gm) brain matter at a fixed repetition time of tr = 1,000 ms. the left part depicts the temporal evolution of the longitudinal magnetization mz during the recovery period (0 ≤ t ≤ tr). at t = tr, the partially relaxed longitudinal magnetization is flipped into the x-y plane by the 90° excitation pulse. the t2 relaxation of the resulting transversal magnetization mxy is plotted in the right part as a function of the echo time te. as can be seen, there is a reversal behavior of the t1 and t2 contrast: for te = 83 ms the contrast is 0, so that the two types of brain matter cannot be differentiated in the corresponding se image in spite of differing tissue parameters (see fig. 2.4.5d). note that the detected mr signal is directly proportional to the transversal magnetization mxy.)

in general, adjacent tissues differ in all three tissue parameters pd, t1, and t2, so the different factors of the se signal equation, which can partially compensate one another, need to be considered together. this holds all the more as the relaxation times are usually positively correlated, i.e., tissues with longer t1 times usually also have longer t2 times. in order to illustrate this statement, fig.
2.4.3 shows the course of the longitudinal and transverse magnetization for both white and gray brain matter for a repetition time of tr = 1,000 ms. as can be seen, the transverse magnetizations of both tissues, and therefore the signal intensities (sse ∝ mxy), are identical at an echo time of te = 83 ms, so the tissues cannot be distinguished on the related se image in spite of different tissue parameters. fig. 2.4.4 shows the contrast between white and gray matter as a function of both the repetition and the echo time. as expected, there are two regions with a high tissue contrast, which are separated by a low-contrast region (cf. fig. 2). the t1 factor of the ir (inversion recovery) sequence depends on the inversion time ti as well as on the repetition time tr. in order to optimize the t1 contrast, the inversion time ti is usually varied, whereas the parameter td, respectively tr, is chosen as high as possible (td >> t1), to allow the recovery of a considerable longitudinal magnetization after rf excitation. the maximum range of values of the t1 factor is between −1 and +1, thus being double the range of values of the se sequence. there are two types of ir sequences, depending on signal interpretation: if only the absolute values of the signal are considered (magnitude reconstruction, irm), the range of values is de facto limited to the interval between 0 and 1, as in the se sequence. if this mode of data representation is chosen, the t1 factor initially decreases to 0 and then converges toward the equilibrium magnetization m0. figure 2.4.7 shows the dependence of the t1 factor on the inversion time ti for both possible modes of data representation for white and gray brain matter (tr = 3,000 ms). as this example reveals, neglecting the sign of the t1 factor in the absolute value representation leads to a destructive t1 contrast behavior in the region between the zeros of the two t1 curves considered.
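the nulling behavior of the ir sequence can be sketched numerically. the t1 factor below is the standard textbook form, assumed here to correspond to the ir expression discussed above; the fat t1 value is likewise an assumed, roughly representative number:

```python
import math

# sketch of the ir t1 factor 1 - 2*exp(-ti/t1) + exp(-tr/t1)
# (standard form, assumed to match the ir signal discussed above).
# for tr >> t1 the magnitude signal of a tissue is nulled at
# ti = t1 * ln 2.

def t1_factor_ir(ti, t1, tr):
    """longitudinal magnetization factor of an ir sequence."""
    return 1.0 - 2.0 * math.exp(-ti / t1) + math.exp(-tr / t1)

def null_ti(t1):
    """inversion time nulling a tissue with time t1 (limit tr >> t1)."""
    return t1 * math.log(2.0)

# e.g., suppressing fat (assumed t1 ~ 260 ms at 1.5 t) yields the
# short inversion time characteristic of a stir sequence:
ti_fat = null_ti(260.0)   # ~180 ms
```

the same computation with the long t1 of cerebrospinal fluid gives the long inversion time used by flair.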
an ir sequence that differentiates between parallel and antiparallel alignment of the longitudinal magnetization at the time of the excitation pulse is called phase sensitive. for the evaluation of the tissue contrast, the pd and t2 dependence of the image signal sir needs to be included in the considerations, too. figure 2.4.8 demonstrates this with an example: in this figure, the tissue contrast between white and gray brain matter is plotted as a function of the echo time te for ti = 800 ms and tr = 2,400 ms. for the chosen ti value, there is a reversal behavior of the t1 and t2 contrast, as the relaxation times of the two tissues examined are positively correlated, i.e., the substance with the longer t1 time also has the longer t2 time.

(caption, fig. 2.4.7: if the absolute value representation is used, there will be a destructive t1 contrast behavior in the region between the zeros of the two tissue curves. the t1 contrast maximum in the considered case is at ti ~ 760 ms, i.e., in between the t1 times of the two tissues considered (see table 2.4.1).)

in order to fully grasp the complex interplay between the different tissue and acquisition parameters as a whole, the image contrast between white and gray brain matter is plotted in fig. 2. the influence of the acquisition parameters te, ti, and td on the contrast of an ir image is summarized in table 2.4.3. in order to maximize the t1 contrast (t1-weighted image), the ti time should lie between the t1 times of the two tissues considered, and the echo time te should be chosen as short as possible. as relatively long repetition times are needed even for the acquisition of t1-weighted images (td >> t1), the ir sequence requires much more time than the se sequence. advantages arise mainly when the image signal of a given tissue structure is to be suppressed, e.g., the retrobulbar fatty tissue for the evaluation of the optic nerve.
in this case, the acquisition parameter ti needs to be chosen so that the t1 factor of the tissue to be suppressed is approximately 0 (fig. 2). remark: an ir sequence with a very short ti time is called a stir (short-tau inversion recovery) sequence. if the ti time is selected to suppress the signal from liquor, the sequence is called flair (fluid-attenuated inversion recovery).

(caption, fig. 2.4.8: influence of the acquisition parameters ti and te on the contrast behavior of the ir sequence (absolute value representation). the figure shows the interplay of the longitudinal and transversal relaxation for white (wm) and gray (gm) brain matter at a fixed inversion time ti = 800 ms and a fixed repetition time tr = 2,400 ms. the left part shows the temporal evolution of the longitudinal magnetization mz during the inversion phase (0 ≤ t ≤ ti). at t = ti, the partially relaxed longitudinal magnetization is flipped into the x-y plane by the 90° excitation pulse. the t2 relaxation of the resulting transversal magnetization mxy is plotted in the right part as a function of the echo time te. in the case considered, there is a reversal behavior of the t1 and t2 contrast, so that the contrast between the two brain tissues rapidly decreases with prolonged echo time. note that the detected mr signal is directly proportional to the transversal magnetization mxy.)

• in some cases, the imaging sequence is repeated several times (e.g., naq = 2 or 4) in order to improve the signal-to-noise ratio (s/n). this is especially valid for t1-weighted se images, which have a relatively low s/n due to the short repetition time.

example: based on these considerations, the following representative acquisition times are obtained for se images: t = 2.1 min for a t1-weighted image (tr = 500 ms, nph = 256, naq = 1) and t = 10.2 min for a t2- and/or pd-weighted image (tr = 2,400 ms, nph = 256, naq = 1). by using the multiple-slice technique described in sect.
2.3.5, one can simultaneously acquire mr images from multiple parallel slices within the given acquisition times, but the overall acquisition time required for the acquisition of the images is not reduced. in clinical practice, this basic limitation of conventional imaging sequences leads to the following problems:
• depending on the clinical question, the time needed for a patient examination ranges between 15 and 45 min.
• this demands a high degree of cooperation from the patient, who is asked to remain motionless during the examination in order to assure the comparability of differently weighted mr images.
• critically ill patients may not be examined at full scale or may not be fit for examination at all.
• the image quality is impaired by motion artifacts (caused, e.g., by heart beat, blood flow, breathing, or peristaltic movement). this problem is especially acute in patients with thoracic and abdominal diseases, as mr images in general cannot be acquired completely during a breath hold, as is the case in ct.
• dynamic imaging studies are limited.
to overcome these limitations, several methods aiming to shorten the acquisition times have been developed. they can be categorized into two groups, depending on whether the repetition time tr or the number of sequence cycles nph needed for phase encoding is reduced (see eq. 2.4.5). the two strategies will be discussed in the following sections using some selected imaging sequences as examples. an almost complete overview of the clinically used fast imaging sequences will be provided in sect. 2.4.6. the long scan times of conventional imaging sequences are due to the fact that the 90° excitation pulse rotates the entire longitudinal magnetization into the x-y plane, so the pulse sequence can only be repeated once the longitudinal magnetization has been, at least partially, restored by t1 relaxation processes.
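the representative acquisition times quoted above follow from the product of repetition time, number of phase-encoding steps, and number of averages (the form t = tr · nph · naq, assumed here to be the content of eq. 2.4.5):

```python
# sketch of the acquisition-time estimate t = tr * n_ph * n_aq for a
# conventional 2d sequence (one fourier line per repetition).

def acq_time_min(tr_ms, n_ph, n_aq=1):
    """total scan time in minutes."""
    return tr_ms * n_ph * n_aq / 1000.0 / 60.0

t1w = acq_time_min(500.0, 256)    # t1-weighted se image  -> ~2.1 min
t2w = acq_time_min(2400.0, 256)   # t2-/pd-weighted image -> ~10.2 min
```

the two strategies named above correspond to shrinking the first factor (tr) or the second factor (the number of phase-encoding cycles).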
to acquire mr images with an acceptable s/n, the sequence repetition time tr has to be of the order of the t1 relaxation time. this basic problem of conventional imaging can be circumvented, however, by using an rf pulse with a flip angle of α < 90° to excite the spin system: although only a part of the longitudinal magnetization mz is rotated into the x-y plane, one nevertheless obtains a relatively large transverse magnetization. example: if, for instance, a flip angle of α = 20° is used, the longitudinal magnetization mz is reduced by only 6%, whereas the transverse magnetization mxy amounts to 34% of the maximum value (fig. 2.4.11). in order to discuss the principle of low-flip angle excitation, we will initially neglect the gradient fields needed for spatial encoding and consider the simple sequence shown in fig. 2.4.12a. it consists of a single rf pulse with a flip angle α < 90° and a spoiler gradient, which destroys the remaining transverse magnetization after the acquisition of the fid. remark: as an alternative to spoiler gradients, the phase of the rf excitation pulse may be varied with every sequence cycle in order to prevent the buildup of a steady state of the transverse magnetization (rf spoiling). if the considered sequence is repeated several times, the spin system reaches a dynamic equilibrium after just a few sequence cycles. figure 2.4.13 shows the transient behavior of the longitudinal magnetization in white and gray brain matter. the value of the steady-state longitudinal magnetization depends not only on the flip angle α of the excitation pulse, but also on the repetition time tr and the longitudinal relaxation time t1; it becomes smaller as α increases. for α = 90°, the longitudinal magnetization reaches the steady-state value mz,ss = m0 [1 − exp(−tr/t1)] after the first excitation, as expected. however, the mr signal is given not by the longitudinal magnetization but by the transverse magnetization mxy at the time of data acquisition.
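the steady state described above can be sketched numerically. the expressions below are the standard textbook forms for a spoiled gre sequence, assumed here to correspond to the steady-state magnetization and signal of the flash discussion (m0 is set to 1, and the echo decay factor is omitted, i.e., te → 0):

```python
import math

# sketch of the spoiled-gre (flash) steady state.

def mz_ss(alpha_deg, tr, t1):
    """steady-state longitudinal magnetization (standard assumed form)."""
    a = math.radians(alpha_deg)
    e1 = math.exp(-tr / t1)
    return (1.0 - e1) / (1.0 - math.cos(a) * e1)

def flash_signal(alpha_deg, tr, t1):
    """signal amplitude s = mz_ss * sin(alpha), neglecting echo decay."""
    return mz_ss(alpha_deg, tr, t1) * math.sin(math.radians(alpha_deg))

def ernst_angle_deg(tr, t1):
    """flip angle maximizing the signal: cos(alpha_e) = exp(-tr/t1)."""
    return math.degrees(math.acos(math.exp(-tr / t1)))

# for short tr (tr << t1), a low flip angle yields more signal than 90°
# (assumed numbers: tr = 50 ms, t1 = 600 ms):
s_low = flash_signal(20.0, 50.0, 600.0)
s_90 = flash_signal(90.0, 50.0, 600.0)
```

with these assumed numbers the 20° excitation yields roughly two and a half times the 90° signal, illustrating the first of the two statements derived from fig. 2.4.14.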
by using eq. 2.4.6, the amplitude s of the mr signal can be described by

s = mz,ss · sinα · exp(−te/t2*). (2.4.7)

whereas the factor exp(−te/t2*) describes the decay of the fid signal during the delay time te, the factor sinα gives the fraction of the steady-state magnetization mz,ss that is rotated into the x-y plane (see fig. 2.4.12b,c). to illustrate this relation, fig. 2.4.14 shows the signal intensity s as a function of the ratio tr/t1. from this plot, two important statements can be derived:
• compared with conventional 90° excitation, low-flip angle excitation yields considerably higher signal values for short repetition times.
• when using low-flip angle excitation, the signal is already independent of t1 for tr < t1.
the signal increase realized by low-flip angle excitation in combination with short repetition times is obtained, however, at the price of omitting the 180° pulse generating a spin-echo, as the 180° pulse inverts not only the phase of the transverse magnetization, but also the longitudinal magnetization (see fig. 2). in contrast to the conventional imaging sequences, the nomenclature of the gre sequences is not unified but is handled differently by the different manufacturers. in the following, the fundamentals of gre imaging will be discussed in detail using two representative sequences denoted by the acronyms flash and truefisp. the excitation of the spin system and the spatial encoding are identical in both sequences; they differ only in that the transverse magnetization is destroyed after acquisition of the mr signal in the flash sequence (spoiled gre sequence), whereas it is maximized in the truefisp sequence (refocused gre sequence). this difference, however, leads to an entirely different contrast behavior. dephasing of the transversal magnetization caused by the slice-selection and the readout gradient is compensated by two additional inverted gradients, so that a gradient-echo occurs.
the figure shows the de- and rephasing process of two magnetization components (a, b), which are at different positions and therefore precess with different larmor frequencies under the influence of the gradient fields; φx, φy, and φz are the corresponding phase angles. however, this does not imply that for this angle the tissue contrast between two structures is at its maximum. in fig. 2.4.18, the tissue contrast between white and gray brain matter is plotted for te << t2*, i.e., exp(−te/t2*) ≈ 1, as a function of the repetition time tr and the flip angle α. for low flip angles, two contrast regions can be distinguished: for short tr times the t1 contrast dominates (t1-weighted images), for longer tr times the pd contrast (pd-weighted images). example: the discussed contrast behavior is illustrated in fig. 2. the influence of the acquisition parameters on the contrast of a flash image is summarized in table 2.4.4. in 1986, oppelt et al. introduced a gre sequence with the acronym fisp (fast imaging with steady precession), which differs considerably in its contrast from the flash sequence. this sequence was later renamed to truefisp (see below). the pulse and gradient scheme of this sequence is shown in fig. 2.4.20. instead of the spoiler gradient of the flash sequence, refocusing gradient pulses are introduced in the slice-selection direction as well as in the directions of frequency and phase encoding, through which the transverse magnetization is not destroyed after the data acquisition of the mr signal, but rather rephased or refocused (fig. 2.4.21).

(caption, fig. 2.4.18: influence of the acquisition parameters tr and α on the t1 contrast of a flash image. for low excitation angles α, there will only be a considerable t1 contrast (here between white and gray brain matter) when short repetition times are selected. if the flip angle is increased, the t1 contrast maximum shifts to a higher tr value.)
as practice has shown, the truefisp sequence is very susceptible to inhomogeneities of the static magnetic field, which become visible as disturbing image artifacts. a more favorable behavior is achieved by omitting the gradient pulses shaded darkly in fig. 2.4.20. in this case, only the dephasing of the transverse magnetization caused by the slice-selection and phase-encoding gradients is completely compensated. this variant is called the fisp sequence. in the truefisp sequence, not only the longitudinal but also the transversal magnetization reaches an equilibrium state after several sequence cycles. as both magnetization components are different from zero at the end of a sequence cycle, they are mixed by the following rf pulse, i.e., a part of the longitudinal magnetization is flipped into the x-y plane and a part of the transverse magnetization into the z-direction. consequently, both magnetization components depend on t1 as well as on t2. the t2 dependence increases in proportion to the magnitude of the transverse magnetization remaining at the end of the sequence cycle (i.e., with decreasing tr/t2 ratio). vice versa, this means that the fisp signal for high tr values (tr >> t2) approximates the flash signal. remark: the difference in the latter case merely consists in the fact that in the flash sequence the transverse magnetization is rapidly destroyed by a spoiler gradient after the acquisition of the fid, whereas in the fisp sequence it decays with the time constant t2.

(figure caption: two different contrast regions can be distinguished for low flip angles: for short tr times the t1 contrast dominates (t1-weighted image), for longer tr times the pd contrast (pd-weighted image). if the flip angle is increased, the t1 contrast curve gradually approaches the known contrast behavior of the se sequence (α = 90°). correspondingly, long repetition times need to be chosen in order to acquire pd-weighted or t2-weighted images.)
therefore, the flash sequence is more suitable for the acquisition of t1-weighted and pd-weighted images than the fisp sequence. as the discussion has shown, the characteristic signal behavior of the fisp sequence manifests itself only for very short repetition times. for this special situation, the dependence of the signal intensity of the fisp sequence on the tissue parameters t1, t2, and pd can be described approximately by the expression

s ≈ pd · sinα / [(t1/t2)(1 − cosα) + (1 + cosα)], (2.4.10)

which is independent of the repetition time tr. as this equation shows, the signal intensity of a fisp image depends on the tissue parameters only via pd and the ratio t1/t2. biological tissues contain two pools of hydrogen nuclei: the mobile 1h nuclei of free water (1hf) and the 1h nuclei bound in macromolecules with reduced mobility (1hr). whereas the t1 times of the two pools are of a similar order of magnitude, the t2 times differ strongly (see fig. 2.2.16).

(caption, fig. 2.4.21: due to the complex gradient switching, the dephasing of the transversal magnetization caused by the three gradients is completely compensated after acquisition of the gradient-echo, so the transversal magnetization is restored before irradiation of the subsequent excitation pulse. the figure shows the de- and rephasing process for two magnetization components (a, b), which are at different positions and therefore precess with different larmor frequencies under the influence of the gradient fields. φx, φy, and φz are the corresponding phase angles. at the end of the sequence, both magnetization components are in phase again (φx = φy = φz = 0), independent of their spatial position.)

(caption, fig. 2.4.20: pulse and gradient scheme of the truefisp sequence. α flip angle of the excitation pulse, gz slice-selection gradient, gx phase-encoding gradient, gy readout gradient. instead of the spoiler gradient used in the flash sequence, there are refocusing gradient pulses in all three gradient directions, so that the transversal magnetization after acquisition of the gradient-echo is not destroyed but restored (see fig. 2.4.21). in practice, the darkly marked gradient pulses are frequently omitted in order to reduce the susceptibility of the sequence to artifacts, leading to a fisp sequence.)
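as a numerical sketch, assuming eq. 2.4.10 takes the commonly quoted short-tr steady-state form s = pd · sinα / [(t1/t2)(1 − cosα) + (1 + cosα)], the tr independence and the pure t1/t2 dependence can be made explicit (the tissue values below are made up):

```python
import math

# sketch of the tr-independent fisp signal (assumed standard short-tr
# steady-state form): it depends on the tissue only through pd and the
# ratio t1/t2.

def fisp_signal(alpha_deg, t1, t2, pd=1.0):
    """short-tr fisp steady-state signal (assumed form of eq. 2.4.10)."""
    a = math.radians(alpha_deg)
    r = t1 / t2
    return pd * math.sin(a) / (r * (1.0 - math.cos(a)) + (1.0 + math.cos(a)))

# two (made-up) tissues with the same t1/t2 ratio give the same signal:
s_a = fisp_signal(60.0, 800.0, 80.0)   # t1/t2 = 10
s_b = fisp_signal(60.0, 400.0, 40.0)   # t1/t2 = 10
```

this is why refocused gre images of this type are often described as t1/t2 weighted rather than t1 or t2 weighted.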
the t2 times of the 1hf nuclei are generally higher than 40 ms, whereas for the 1hr nuclei they are less than 100 µs due to the strong dephasing effect of neighboring spins. the different t2 times are mirrored in the 1h spectrum (fig. 2.4.29): as the width ∆ω of a resonance line is inversely proportional to the t2 time (see sect. 2.2.6), the 1hf pool has a line width of a few hertz, whereas the spectral width of the 1hr nuclei is more than 10 khz. it is crucial that the two 1h pools interact via intermolecular processes (spin-spin interaction) and/or chemical exchange processes (wolff and balaban 1989). for this reason, any change of the magnetization in one pool results in an alteration of the magnetization of the other pool. this effect is called magnetization transfer (mt). to utilize this effect for mr imaging, the magnetization of the 1hr pool is saturated by frequency-selective preparation pulses (saturation transfer). due to the mt effect, this leads to a significant reduction of the mr signal of the 1hf nuclei and thereby to a reduction of the image signal. in the simplest case, the frequency spectrum of the preparation pulse is defined by a rectangular function below and/or above the resonance frequency of the 1hf pool, as shown in fig. 2.4.29. when doing this, the offset frequency has to be chosen large enough that local variations of the resonance frequencies of the 1hf nuclei, caused by inhomogeneities of the static magnetic field and differences in tissue susceptibilities, do not lead to a direct effect on the 1hf magnetization. mt preparation pulses are often used in mr angiography to increase the blood-tissue contrast. this interesting application is based on the fact that the mt effect reduces the 1hf magnetization of stationary tissue, whereas the magnetization of the flowing blood is not affected. imaging in magnetic resonance is based on spin-warp imaging but is commonly referred to as fourier imaging.
the main underlying principle is the use of magnetic field gradients for slice-selective excitation and for phase and frequency encoding of the signal that is induced by the rotating transverse magnetization. the continued sequence development is fuelled by the aim of improving tissue distinction and shortening measurement time. in recent years, a great number of sequences have been developed (see table 2.4.6), each of which is utilized in routine clinical applications. the following paragraphs provide a systematic overview of the sequence families.

(caption, fig. 2.4.29: schematic depiction of a 1h spectrum of biological tissues. apart from the resonance line of 1h nuclei in free water (1hf), with a low spectral line width (< 20 hz), there is a broad underground due to 1h nuclei in macromolecules with reduced mobility (1hr), whose mr signal cannot be detected directly because of their short t2 times. note that the spectral widths are not depicted true to scale. the frequency spectrum of an mt preparation pulse is marked in gray.)

a first sequence classification can be performed by assigning each sequence to either a spin-echo or a gradient-echo group. the main difference between se and gre is the influence of susceptibility gradients on the image contrast. in general, in gre imaging susceptibility gradients lead to a faster decay of the signal, whereas in se imaging dephasing mechanisms that are fixed in location and consistent over time are refocused by the 180° refocusing rf pulse. se image contrast depends on the tissue-specific transversal relaxation time t2, whereas gre image contrast is a function of the transversal relaxation time t2*. some gre techniques utilize the excitation pulse also as a refocusing pulse, causing spin-echo components to contribute to the image contrast.
within the se and the gre group, the contrast can be manipulated by preparing the longitudinal magnetization prior to starting the imaging sequence or prior to the measurement of a fourier line. in multi-echo imaging, the transverse magnetization is refocused and reutilized after the collection of a fourier line, omitting the necessity of a further excitation for the collection of another fourier line. this method is applicable within the se group as well as within the gre group; again, a preparation of the magnetization generates another sequence family. using only one excitation and multiple phase-encoded echoes to acquire all required k-space lines without a further excitation is called a single-shot technique. figure 2.4.30 shows an overview scheme that provides one possible sequence classification.

(figure caption: 3d gre with low-flip angle excitation and "spoiling" after data acquisition of a single fourier line, and fourier interpolation in the direction of partition encoding.)

combining the half-fourier method with a tse sequence to the degree at which a single excitation pulse suffices to fill the raw-data matrix with the following spin-echoes yields the so-called haste technique (half-fourier single-shot turbo spin-echo). the mix of spin-echoes with gradient-echoes or, more precisely, the acquisition of gradient-echoes within an se envelope leads to the tgse (turbo gradient spin-echo) sequence, also called grase (gradient and spin-echo). as expected, with the introduction of gradient-echoes within a multi-echo spin-echo sequence, the contrast behavior also becomes t2* related. this sequence is also called a hybrid.
similar to the se sequences, the gre sequences can be grouped into:
• conventional gre sequences (e.g., flash, fisp, truefisp, dess, ciss, psif)
• gre sequences with preparation of the magnetization (e.g., turboflash, mp-rage)
• multi-echo gre sequences (e.g., medic, segmented epi)
• multi-echo gre sequences with preparation of the magnetization (e.g., segmented dw-se-epi)
• single-shot gre sequences (epi)
• single-shot gre sequences with preparation of the magnetization (e.g., dw-se-epi)
as indicated above, the conventional gre sequences can be further divided into:
• the ssi group (steady-state incoherent), which only aims at a steady state of the longitudinal magnetization (e.g., flash, spgr, t1-ffe), and
• the ssc group (steady-state coherent), in which the steady state of the transversal magnetization contributes to the signal as well (e.g., fisp, truefisp, grass, fiesta, ffe, bffe).
acronyms of the ssi group are flash (fast low-angle shot), spgr (spoiled gradient-recalled acquisition in the steady state), and t1-ffe (t1-fast field echo). in the ssc group there are fisp (fast imaging with steady precession), grass (gradient-recalled acquisition in the steady state), and ffe (fast field echo). within the ssc group, there is a gradual transition toward spin-echoes, as the excitation pulses do not only excite, but also refocus various echo paths of a remaining or refocused transverse magnetization. the extreme form is psif (a backward-running fisp), also named ssfp (general electric) or t2-ffe (philips). in these techniques, the excitation pulse of the following measurement operates as a refocusing pulse for the transverse magnetization of the previous excitation. the contrast is t2 weighted, as the effective echo time amounts to almost two repetition times.
a combination of the fisp echo and the psif echo is called dess (double-echo steady state), and having the fisp and psif echoes coincide in time results in a ciss (constructive interference steady state) or a truefisp sequence. the same preparation of the magnetization utilized for se techniques can be applied to gre techniques. with a very rapid gre sequence (rage, rapid acquired gradient-echoes), aiming to measure as fast as possible, tr is set to a minimum, and consequently so is te, and the excitation angle is set to the optimum (ernst angle) in order to generate as much signal as possible. to reestablish a t1 weighting, a saturation or inversion pulse is applied, not prior to each fourier line as in se imaging, but at the beginning of the whole measurement. these techniques are called turboflash, fspgr (fast spoiled gradient-recalled acquisition in the steady state), or tfe (turbo field echo); placing the inversion within the partition loop of a 3d sequence yields mp-rage (magnetization-prepared rapid acquired gradient-echoes). as is the case with fast se sequences, gre sequences can also make use of multi-echo acquisitions. medic (multi-echo data image combination) uses multiple echoes for averaging, thus improving snr and t2* contrast. the classical form of a single-shot gre technique, in which the raw-data matrix is filled after a single excitation with several phase-encoded gradient-echoes, is called epi (echo planar imaging). simply collecting the free induction decay with multiple phase-encoded gradient-echoes is called fid-epi; placing the gradient-echoes beneath an se envelope is called se-epi. the most common magnetization-prepared single-shot gradient-echo technique is the diffusion-weighted spin-echo echo planar imaging sequence (dw-se-epi).
The idea of using multiple phase-encoded spin-echoes to fill k-space more rapidly than in conventional imaging surfaced as early as 1986, with the acronym RARE (rapid acquisition with relaxation enhancement) (Fig. 2.4.31). The number of applied echoes is directly proportional to the potential reduction in measurement time. The overall image contrast is dominated by the weighting of those Fourier lines acquired at the center of k-space (effective echo time). The quality of the early images was not close to that of conventional T2-weighted SE imaging. In the course of hardware and software developments, Mulkern and Melki "re-discovered" multi-echo SE imaging during a search for a fast T2 localizer, creating the acronym FSE (fast spin-echo). Siemens and Philips use the acronym TSE for turbo spin-echo. Since the higher spatial frequencies, the "outer" k-space lines, are usually acquired using late echoes, an early concern was that small objects might be missed. Fortunately, the time saving achieved with the use of multiple echoes has been utilized to improve the contrast by selecting longer repetition times and to improve the spatial resolution by increasing the matrix size. Both measures have more than compensated for the effect of an under-representation of high spatial frequencies within the k-space matrix. T2-weighted TSE has replaced conventional spin-echo imaging in all clinical applications. The acquired phase-encoded echo train can also be used to create PD-weighted and T2-weighted images similar to conventional dual-echo spin-echo imaging. Using phase-encoded echoes both for the k-space of the PD-weighted image and for the k-space of the T2-weighted image is customary; this procedure is called shared echo. T1-weighted TSE imaging is also an option for some applications, although additional echoes will increase (unwanted) T2 weighting.
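The proportionality between echo train length and time saving, and how longer TR and larger matrices eat into it, can be sketched numerically; the parameter values below are illustrative assumptions:

```python
import math

def scan_time_s(tr_s: float, n_phase: int, etl: int = 1, n_avg: int = 1) -> float:
    """Acquisition time of a (T)SE sequence: each TR fills `etl` Fourier lines."""
    return tr_s * math.ceil(n_phase / etl) * n_avg

# Conventional SE: TR 2.5 s, 256 phase-encoding lines -> 640 s
conventional = scan_time_s(2.5, 256)
# TSE with an echo train length of 15, but a longer TR (4 s)
# and a larger matrix (512 lines) -> 140 s
turbo = scan_time_s(4.0, 512, etl=15)
# Effective speed-up ~4.6 rather than the theoretical factor 15
speedup = conventional / turbo
```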
T1-weighted TSE imaging is rarely applied to the central nervous system, as the use of additional echoes prolongs the time needed for a single slice, and the number of necessary slices may not fit into the desired TR. For the genitourinary system (uterus, cervix, bladder, etc.), about three echoes are used to improve SNR or to reduce measurement time. In areas where the amount of T1 weighting is less of an issue, e.g., T1-weighted imaging of the cervical and the lumbar spine for degenerative disease, TSE is usually used with an echo train length (ETL) of five echoes. The same protocol is applied for enhanced and unenhanced studies of suspected vertebral metastases. T1-weighted TSE imaging of the abdomen is not an issue, since the restriction of the measurement time to a breath-hold period suggests T1-weighted GRE imaging. The remaining point of concern in comparing TSE imaging with conventional SE imaging is the reduced sensitivity to susceptibility artifacts: hemorrhagic lesions appear less suspicious on TSE imaging than on conventional SE imaging.

The Fourier transformation assumes a consistent signal contribution for all Fourier lines. Any violation of this assumption will lead to over- or under-representation of spatial frequencies, with a correlated image blurring. (Fig. 2.4.31 caption: Structure of the TSE sequence. Excitation, refocusing, frequency encoding, and phase encoding are done as in the conventional SE sequence. The dephasing applied for the purpose of spatial encoding is rephased prior to generating another spin-echo using a 180° RF pulse, followed by another phase-encoding step.) Although TSE violates this assumption by using multiple phase-encoded echoes to fill the k-space, where the signal amplitude of the echoes diminishes following T2 decay, there are several parameters that can be utilized to minimize the artifacts related to this T2-decay-related k-space asymmetry:

• First, especially for T2-weighted imaging, the signal amplitudes of the late and closely spaced echoes can be approximated as being constant. TSE sequences are mainly used for the acquisition of T2-weighted images. As lesions usually have a long T2 relaxation time, the signal loss caused by the T2 decay does not play a major role during data acquisition.
• Second, the matrix dimensions used in TSE imaging are usually higher than in conventional SE imaging. This significantly reduces the risk of missing small objects.
• Third, in T2-weighted TSE imaging the repetition time is increased considerably, which leads to a remarkable improvement in contrast, again reducing the potential risk of missing small objects due to k-space asymmetry.

In a typical TSE protocol, 13–15 echoes per excitation are used for imaging, implying a theoretical shortening of the measurement time by a factor of 13–15. In practice, the shortening is about a factor of 2–6: longer repetition times are selected for improved PD and T2 weighting, and larger image matrices are used for improved spatial resolution, diminishing the potential shortening of the measurement time when using multiple phase-encoded echoes. As the mentioned influence on the spatial encoding is only present in the direction of phase encoding, the effect can be demonstrated by exchanging the frequency- and phase-encoding gradients. Figure 2.4.32 shows this for the example of the cauda equina. Apart from their use for high-resolution images, TSE sequences are also applied in cardiology, as shown in Fig. 2.4.33.

Fast spin-echo imaging demonstrates two essential differences in appearance compared with conventional spin-echo imaging: fat appears bright, and there is a reduced sensitivity to hemorrhagic lesions. The bright appearance of fat is related to J-coupling. The so-called J-coupling (see Sect. 2.2.9) of the carbon-bound protons provides a slow dephasing of the transverse magnetization in conventional SE imaging, in spite of the refocusing pulse. If the refocusing pulses follow shortly after one another, as is the case in TSE imaging, the J-coupling will be overcome and the dephasing will be suppressed. Consequently, fat tissue appears brighter in the TSE image than in a conventional image. If desired, fat saturation or fat suppression (see Sect. 2.4.5.1) can be utilized to suppress this appearance. (Fig. 2.4.32 caption: Image of the cauda equina, acquired with a TSE sequence with (a) the phase-encoding direction from left to right and (b) the phase-encoding direction from head to foot. The longitudinal structures of the relatively thin nerves are better visible if the frequency-encoding direction is perpendicular to the nerve fibers, since the resolution in the frequency-encoding direction is not influenced as much by the T2 decay as the resolution in the phase-encoding direction.) The susceptibility-related artifact of hemorrhagic lesions in spin-echo imaging is due to diffusion between excitation, refocusing, and data acquisition. Reducing this diffusion time by rapidly succeeding refocusing pulses also reduces the related artifacts, thus making TSE imaging less sensitive to hemorrhagic lesions. The TSE sequence, like the conventional SE technique, can be used with an inversion pulse for preparation of the longitudinal magnetization. Thus it becomes possible to achieve a suppression of the fat signal, based on the short relaxation time of fat (see Fig. 2.4.10b).
Relaxation-dependent fat suppression using an inversion pulse prior to the fast spin-echo train is routinely used to demonstrate bone infarctions and bone marrow abnormalities such as bone marrow edema, e.g., in sickle cell anemia. This fat suppression scheme is also used in genitourinary applications, where the high signal intensity of fat may obscure contrast-enhanced tumor spread. Since only the fat suppression is desired, the inversion recovery technique used is not phase sensitive; only the magnitude of the longitudinal magnetization is used. The acronym used in this case is TIRM (turbo inversion recovery with magnitude consideration). The structure of a turbo inversion recovery (TIR) sequence is presented in Fig. 2.4.34. The reduction in measurement time due to the utilization of multiple phase-encoded spin-echoes permits the use of inversion times on the order of 2 s while keeping the measurement time acceptable. An inversion time of 1.9 s provides a relaxation-dependent suppression of the cerebrospinal fluid (CSF) signal (Fig. 2.4.35). The utilization of a long inversion time is called fluid-attenuated inversion recovery, or FLAIR. In combination with TSE imaging (the structure of a 3D TSE sequence is presented in Fig. 2.4.36), the technique is called TurboFLAIR or simply TIRM. Since CSF has the longest T1 relaxation time, the longitudinal magnetization within all other tissues will already be aligned parallel to the main magnetic field, and it is not necessary to have a phase-sensitive IR method for this application. The attenuated CSF signal allows a better differentiation of periventricular lesions and has demonstrated a superior sensitivity for focal white matter changes in the supratentorial brain, whereas lesions located in the posterior fossa can be missed. The TurboFLAIR method apparently allows the identification of hyperacute subarachnoid hemorrhage with MR, precluding the need for an additional CT.
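The inversion times quoted above follow from the nulling condition of an inversion recovery experiment: assuming full relaxation between inversions (TR much longer than T1), the signal of a tissue is nulled at TI = T1·ln 2. The T1 values below are typical textbook numbers, not taken from this text:

```python
import math

def ti_null_ms(t1_ms: float) -> float:
    """Inversion time nulling tissue with relaxation time T1,
    assuming complete recovery between inversions (TR >> T1)."""
    return t1_ms * math.log(2.0)

fat_ti = ti_null_ms(260.0)    # ~180 ms: STIR-type fat suppression
csf_ti = ti_null_ms(4000.0)   # ~2.8 s: FLAIR-type CSF suppression
```

With a finite TR the longitudinal magnetization does not recover completely, which is why practical FLAIR protocols reach the CSF null at a somewhat shorter TI, such as the 1.9 s quoted above.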
The time-consuming IR method has been used in the past for studying the development of white matter tracts in developmental pediatrics. This technique has been replaced by using an inversion pulse prior to the spin-echo train of a TSE sequence. The selected inversion time (~350 ms) allows a better delineation of small differences in T1 relaxation times, e.g., for documenting the development of the pediatric brain. The improved tissue characterization between gray matter and white matter tracts allows, for example, the demonstration of mesial temporal sclerosis and the visualization of hippocampal atrophy. For this application a so-called phase-sensitive inversion recovery is required, to differentiate nuclear magnetization aligned parallel to the magnetic field from antiparallel alignment at the time of excitation. The corresponding acronym is TIR. The residual transverse magnetization after measuring a single Fourier line or, in the case of multi-echo imaging, after measuring the "package" of Fourier lines, is usually spoiled. A later-introduced concept refocuses the transverse magnetization at the end of the echo train and uses an RF pulse to "restore" the residual transverse magnetization back to the longitudinal direction. The method "improves" the recovery of the longitudinal magnetization for tissue with long relaxation times, allowing a further shortening of the repetition time without loss of contrast. The technique is called RESTORE (Siemens), fast recovery fast spin-echo, FRFSE (GE), and DRIVE (Philips). It does not make a difference whether the magnetization is prepared after the measurement of a Fourier line or at the very beginning of a new excitation cycle; for this reason it is justified to list RESTORE as a turbo spin-echo scheme with preparation of the longitudinal magnetization. Multi-echo spin-echo imaging has the potential to acquire T1- and/or T2-weighted spin-echo images of the beating heart within a breath hold.
The only obstacle that needs to be addressed is the significant flow artifacts caused by the flowing blood. The introduction of the dark-blood preparation scheme finally revolutionized cardiac MR imaging. With this preparation scheme, it is now possible to acquire T1- and T2-weighted images of the beating heart within a breath hold, without any flow artifacts. The magnetization of the whole imaging volume is inverted nonselectively, followed by a selective reinversion of the slice. This is done at end diastole, with the detection of the QRS complex. During the waiting period that follows, most of the reinverted blood will be washed out of the slice, being replaced by the inverted blood, and the spin-echo train acquired again toward end diastole will show "black" blood. A double inversion pulse even allows not only the black-blood preparation but also the suppression of the fat signal, which is helpful in characterizing fatty infiltration of the myocardium in arrhythmogenic right ventricular dysplasia (ARVD). Single shot, per definition, refers to a single excitation pulse and the use of multiple phase-encoded echoes to fill the required Fourier lines. The original RARE was published as a single-shot spin-echo technique. Other acronyms found in the literature are SS FSE for single-shot fast spin-echo (GE) and SS TSE for single-shot turbo spin-echo (Philips). The combination with a half-Fourier technique allows a further reduction in measurement time and has been named HASTE: half-Fourier acquired single-shot turbo spin-echo. As elaborated earlier, the first and the last data points of a Fourier line are characterized by the transverse magnetization of adjacent voxels pointing in opposite directions. The same situation is found for the first and last Fourier lines within k-space, considering the transverse magnetization within adjacent voxels aligned in the direction of phase encoding: k-space is symmetrical.
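Given this (approximate) symmetry, the echo count of a half-Fourier acquisition such as HASTE reduces to slightly more than half the phase-encoding lines; the extra "overscan" lines beyond the k-space center are the correction lines discussed in the text:

```python
def half_fourier_echoes(n_phase: int, overscan: int = 8) -> int:
    """Phase-encoded echoes needed when slightly more than half of k-space
    is measured and the remainder is filled in via Hermitian symmetry."""
    return n_phase // 2 + overscan

# 128 phase-encoding lines -> 64 + 8 = 72 echoes instead of 128
echoes = half_fourier_echoes(128)
```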
Although this Hermitian symmetry is an idealization and reality is slightly different, it has been claimed that the deviations from the ideal situation are only of a coarse nature and that a few (e.g., eight) Fourier lines measured beyond the center of k-space should be sufficient to correct for this insufficiency. As an example, for a 128 × 256 matrix, a single-shot spin-echo technique using the half-Fourier approach would use 128/2 + 8 = 72 phase-encoded echoes to fill the k-space. The measurement time using this acquisition method is about a second per slice. The high number of echoes suggests that this technique is only useful for T2-weighted imaging, and the blurring due to signal variation in k-space as a result of T2 decay is prohibitive for high-resolution studies. Nevertheless, it is an alternative, even in the brain, for a fast T2-weighted study in patients who are not able or willing to cooperate. Since it is the perfect technique to visualize fluid-filled cavities, HASTE is used, e.g., for MRCP (magnetic resonance cholangiopancreatography). A typical result of this sequence is shown in Fig. 2.4.37. Progress in hardware development and the correlated improvement in image quality, together with the pioneering research within this field, have recently led to an impressive increase in the utilization of HASTE for obstetric imaging. Although sonography remains the imaging technique of choice for prenatal assessment, the complementary role of MR imaging is becoming more and more important in the early evaluation of brain development of the unborn child, or even in the early detection of complications within the fetal circulatory system. The search for shorter measurement times for faster imaging led to the group of gradient-echoes (GRE) in 1983. An MR signal can be detected immediately after the excitation pulse: that signal is the free induction decay (FID).
In addition to the spin-spin interaction causing the T2 relaxation, other dephasing mechanisms contribute to the image contrast, mechanisms based on differences in precessional frequencies due to magnetic field variations across a voxel. The main sources of local magnetic field variations are differences in tissue-specific susceptibility values. Since these dephasing mechanisms are fixed in location and constant over time, they are refocused using a 180° RF refocusing pulse in SE imaging. Omitting this pulse leads to a contribution of these dephasing mechanisms to the image contrast. The observed tissue-specific relaxation parameter is then called T2* rather than T2, with the relaxation rates adding up as 1/T2* = 1/T2 + 1/T2′, where T2′ is both machine and sample dependent. Although the missing 180° RF refocusing pulse causes a rapid dephasing of the transverse magnetization, and with that a rapidly decaying MR signal, the echo time can be reduced as well, and so can the repetition time. The shorter echo time allows, in most cases, the detection of a signal despite the rapid dephasing of the transverse magnetization. Since the echo is now formed using a bipolar gradient pulse in the direction of frequency encoding, these techniques are called GRE. With shorter repetition times, an extension of phase encoding to the direction of slice or slab selection can be considered, and 3D imaging becomes feasible. The short excitation pulses used in common GRE imaging result in a less perfect slice profile compared with the profile achieved with a 90°–180° combination of longer RF pulses, as typically utilized in SE imaging. As a result, there will be significant contributions from the low-angle-excited outer regions of a slice, explaining the basic difference in contrast between a GRE image and an SE image, even if a 90° excitation angle is utilized.
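Because the susceptibility-related dephasing adds to the spin-spin relaxation, the relaxation rates combine as 1/T2* = 1/T2 + 1/T2′. A small sketch (the tissue values are assumed examples, not from the text):

```python
def t2_star_ms(t2_ms: float, t2_prime_ms: float) -> float:
    """Effective transverse relaxation time: the *rates* add,
    1/T2* = 1/T2 + 1/T2', so T2* is always shorter than either T2 or T2'."""
    return 1.0 / (1.0 / t2_ms + 1.0 / t2_prime_ms)

# A T2 of 100 ms combined with a machine/sample-dependent T2' of 50 ms
# yields an observed T2* of ~33 ms.
t2s = t2_star_ms(100.0, 50.0)
```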
Similar to the spin-echo acquisition scheme, there is residual transverse magnetization left at the end of the acquisition of one Fourier line, and just as this magnetization is spoiled at the end of the measurement in SE imaging, the same can be done in GRE imaging. Spoiling can be done with a gradient pulse, distributing the transverse magnetization evenly so that the next excitation pulse will not generate a stimulated echo; or the phases of the excitation pulses can be randomized in order to avoid the buildup of a steady state of the transverse magnetization (RF spoiling). Spoiled gradient-echo imaging has been introduced as fast low-angle shot (FLASH), T1-weighted fast field echo (T1-FFE), or spoiled gradient-recalled acquisition in the steady state (SPGR). FLASH imaging allows multislice imaging in measurement times short enough to permit breath-hold acquisitions. Since the contrast mainly depends on the T1 relaxation time, FLASH images are usually called T1 weighted. In clinical routine, FLASH sequences have been introduced for diagnosing cartilage lesions (Fig. 2.4.38), for abdominal breath-hold T1-weighted imaging (Fig. 2.4.39), and for dynamic contrast-enhanced studies. As discussed in Sect. 2.4.3.2, not only the amplitude of the signal can be controlled; the basic contrast behavior can also be influenced. For instance, when using an extremely small excitation angle and moderate repetition times, one can minimize the influence of the T1 relaxation time (see Fig. 2.4.17) and thus obtain a different basic contrast. The alternative to spoiling the residual transverse magnetization after the end of the Fourier line acquisition is to rephase what has been dephased for spatial encoding. This was introduced as fast imaging with steady precession (FISP) (Oppelt et al. 1986), later to be called TrueFISP.
The original implementation and publication of FISP uses gradient refocusing in the phase-encoding direction as well as in the frequency-encoding and slice-select directions. As this sequence was susceptible to artifacts at the time, the implemented and released FISP sequence, still used today, is only refocused in the direction of phase encoding, with no refocusing in the readout and slice-selection directions. Such a sequence has been called ROAST (resonant offset acquired steady state; Haacke et al. 1991). For the FISP sequence, the phase encoding is reversed after the acquisition of the Fourier line, undoing the dephasing that was applied for spatial encoding. This approach leads to a steady state not only of the longitudinal magnetization but also of the transverse magnetization, for tissue with long T2 relaxation times. Differences between FISP contrast and FLASH contrast are only visible for short repetition times and large excitation angles, and only enhance the signal within tissue with long T2 relaxation times. General Electric introduced this technique as gradient-recalled acquisition in the steady state (GRASS); Philips uses fast field echo (FFE).

If one rephases, at the end of the measurement of a Fourier line, all parts of the transverse magnetization that have been dephased for spatial encoding, and if one compensates in advance for the dephasing to be expected while the slice-selection gradient is switched on, one obtains the TrueFISP sequence (Fig. 2.4.41). This technique combines the advantages of the FISP sequence and the PSIF sequence, with further echo paths contributing to the overall signal. A clinical application of this sequence is shown in Fig. 2.4.42. (Fig. 2.4.41 caption: Structure of the TrueFISP sequence. The sequence appears "balanced" due to a symmetry in time. All components of the transverse magnetization are refocused at the end of the measurement, leading to a steady state.) This original approach of refocusing all transverse magnetization at the end of the measurement of one Fourier line will not only cause a steady state of the transverse magnetization; the next excitation pulse will also operate as a refocusing pulse. The excitation pulse will not only convert longitudinal magnetization to transverse magnetization, but will also generate a spin-echo. The sequence appears symmetric, balanced. The challenge is to get all the generated echoes to have one phase; otherwise the echoes will interfere destructively, causing band-like artifacts. The positions of these bands also depend on the starting phase of the RF pulse. Adding another acquisition with a phase-shifted RF leads to the constructive interference steady state technique (CISS), or phase-cycled fast imaging employing steady-state acquisition (PC-FIESTA, the acronym used by GE). Since CISS contains spin-echo components, the technique is useful even in regions with significant susceptibility gradients, e.g., nerve imaging at the base of the skull. Since this is a fast technique with a hyperintense appearance of fluid-filled cavities, it is primarily applied to study abnormalities of the internal auditory canal. The originally published FISP is the TrueFISP, where all the dephasing is reversed and even the slice-selection gradient prepares for the dephasing to be expected during the first half of the next excitation pulse. The TrueFISP technique is a fast gradient-echo sequence with spin-echo contributions, leading to a hyperintense appearance of all tissues with long T2 relaxation times. The technique is primarily used in fast cardiac imaging, for cine snapshots of the beating heart. General Electric uses the acronym FIESTA for the same technique; Philips uses bFFE (balanced fast field echo).

The previously mentioned spin-echo component of a balanced technique can be isolated and used to generate an image. The PSIF sequence shown in Fig. 2.4.43 appears at first to violate causality: a FISP sequence running backward. The signal-inducing transverse magnetization is produced with the first excitation at the end of the first cycle, refocused with the second excitation at the end of the second cycle, and induces a signal at the beginning of the third cycle. The effective echo time therefore amounts to almost two repetition times, and the resulting images consequently show a remarkable T2 weighting. (Note that in this case it is a spin-echo and not a gradient-echo.) The PSIF sequence is insensitive to susceptibility gradients. In contrast to CISS, PSIF is very sensitive to flow and motion; it is therefore not applied for IAC imaging but rather used as an adjunct to demonstrate abnormal CSF flow patterns. General Electric calls this technique simply steady-state free precession (SSFP), while Philips uses the acronym T2-FFE. Imaging of the cochlea (Fig. 2.4.44) is no longer performed with PSIF but rather with CISS, due to the intrinsic flow insensitivity of the latter.

When combining a FISP image with a PSIF image, one obtains an image with a T2* weighting via the GRE signal and a T2 weighting via the SE signal. Such a sequence is called double-echo steady state (DESS, Fig. 2.4.45). The DESS sequence is routinely used in orthopedic imaging (Fig. 2.4.46); it links the advantages of the FISP sequence with the additional signal enhancement of the PSIF sequence for tissues with long T2 (e.g., edema and joint effusion). In theory, all the magnetization preparation schemes previously applied to the SE group can also be applied to the GRE sequences.
However, it has to be kept in mind that GRE sequences usually use shorter TE and TR than SE imaging does, and therefore some preparation schemes must be slightly altered. Consider, for example, the fat saturation scheme: the time necessary for a spectral saturation pulse followed by a gradient spoiler adds to the slice-loop time of an otherwise short-TR GRE sequence. A feasible modification is to skip the fat saturation for a few slices; this is referred to as "quick fat sat." Although a slice-dependent recovery of the fat signal is observed, the compromise is in general acceptable, since the fat signal stays low and more slices can be measured per TR. The quality of a spectral saturation pulse depends on the overall field homogeneity within the imaging volume. In addition, the spectrally fat-saturating RF pulse is very close to the water resonance, causing a loss in overall SNR. For a nonselective excitation, it is theoretically possible to simply excite either fat or water using the tissue-specific Larmor frequency. In practice, such an approach is very prone to artifacts due to imperfect field homogeneity within the volume of interest. Better results in water excitation or fat excitation have been achieved with binomial pulses (1-1, 1-2-1, or 1-3-3-1). The mechanism of, e.g., a 1-2-1 RF pulse, leading finally to a 90° excitation of just the water, is as follows: after an initial 22.5° RF pulse, there is a waiting period that allows the magnetization of fat to fall behind the magnetization of water. At the point where the two magnetizations are opposed, a 45° excitation moves the water magnetization to a 67.5° position with respect to the longitudinal direction, whereas the fat magnetization is flipped back to the 22.5° position.
After the previously mentioned waiting period, another 22.5° excitation pulse accomplishes the 90° excitation for water, while the magnetization of fat is restored to the longitudinal position, not contributing to the MR signal. Another advantage of these binomial pulses is that they can be executed using either nonselective or selective RF pulses; in the latter case they are called spatial-spectral or simply composite pulses. An inversion pulse prior to the measurement can be used to improve the T1 contrast: short-TR, short-TE GRE imaging utilizing the Ernst angle leads to PD-weighted rather than T1-weighted images. In SE imaging, the T1 contrast is improved by placing an inversion pulse prior to the acquisition of each Fourier line. This approach is not feasible in GRE imaging, since the inversion time would be much larger than the commonly used repetition times. In fast GRE imaging, an inversion pulse is instead applied prior to the whole imaging sequence (Fig. 2.4.47). That concept has been introduced as TurboFLASH (SnapshotFLASH, Haase et al. 1989), fast SPGR (FSPGR), or turbo field echo (TFE). The minor drawback is that the longitudinal magnetization, and consequently the generated transverse magnetization, changes throughout the measurement. The resulting violation of k-space symmetry causes an under- or over-representation of some spatial frequencies, producing a slight image blurring typical of TurboFLASH imaging. When using this method, one has to consider the following three facts:

1. The longitudinal component of the macroscopic magnetization recovers after the inversion pulse with the T1 relaxation time. This relaxation process also takes place during data acquisition; the various measured Fourier lines therefore have different T1 weightings. The image contrast is dominated by the T1 weighting of the Fourier line measured at the center of k-space.
2. The recovery of the longitudinal magnetization is influenced by the excitation angles of the rapid GRE data acquisition. (Fig. 2.4.47 caption: After an inversion pulse and an inversion time, the small-angle excitation is repeated several times until the raw data matrix is filled.) In order to minimize this influence and to obtain a maximum effect of the preparation pulse on the image contrast, the rapid GRE acquisition needs to be executed with small excitation angles.

3. Every Fourier line is measured with a different phase-encoding gradient and contains the spatial information of the object in the direction of phase encoding. The k-space symmetry is significantly violated due to the change in signal contribution for each spatial frequency measured, as a consequence of T1 relaxation during sampling. As a result, the images appear blurred, with imprecise edges and coarse signal oscillations parallel to the edges.

With the introduction of short-TR gradient-echo acquisition schemes, 3D imaging became feasible. The application of an inversion pulse prior to a whole 3D acquisition is not very promising, since the preparation of the longitudinal magnetization would vastly diminish during the relatively long measurement time and the significant number of low-angle excitation pulses. A feasible alternative is to repeat the preparation of the longitudinal magnetization in either the partition-encoding loop or the phase-encoding loop. Although the time savings would be larger for placing the inversion pulse prior to the longer phase-encoding loop, at the time it was only possible to place the inversion pulse prior to the partition-encoding loop. Fortunately so, because the previously described TurboFLASH artifact, based on the over- and under-representation of k-space lines, is thereby avoided.
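The waiting periods of the binomial water-excitation pulses described earlier are set by the fat–water chemical shift: the delay must bring the two magnetizations into opposition, i.e., half a period of their frequency difference. A rough sketch, assuming a shift of 3.35 ppm and a 1.5-T Larmor frequency of ~63.9 MHz (both assumed typical values, not from the text):

```python
def fat_water_shift_hz(larmor_mhz: float, shift_ppm: float = 3.35) -> float:
    """Fat-water frequency difference; ppm times MHz conveniently equals Hz."""
    return shift_ppm * larmor_mhz

def opposed_phase_delay_ms(larmor_mhz: float) -> float:
    """Interpulse delay of a binomial pulse: half a beat period, 1/(2*df)."""
    return 1000.0 / (2.0 * fat_water_shift_hz(larmor_mhz))

# At 1.5 T (~63.9 MHz) fat precesses ~214 Hz slower than water, so the
# 22.5-45-22.5 degree sub-pulses are spaced ~2.3 ms apart.
delay = opposed_phase_delay_ms(63.9)
```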
The phase-encoding gradient is prepared, the inversion pulse is set, followed by a rapid execution of the partition-encoding loop, during which the amount of longitudinal magnetization changes according to the course of the T1 relaxation (a recovery influenced by the low-angle excitation pulses). After this, the next phase-encoding line is prepared, the inversion pulse is set, and again the whole partition loop is executed. The amount of signal within each partition is identical for all phase-encoding steps, and the k-space is again symmetric, resulting in artifact-free images. This technique has been introduced as magnetization-prepared rapid acquired gradient-echoes (MP-RAGE, Mugler and Brookeman 1990) (Fig. 2.4.48). Figure 2.4.49 shows a typical application of the MP-RAGE sequence: the mid-sagittal T1-weighted slice out of 64 from an examination covering the entire skull in less than 5 min. The sequence showed some promise of replacing conventional T1-weighted spin-echo imaging of the brain, since it allows gapless coverage of the whole brain in less than 6 min. But it is a gradient-echo sequence: susceptibility gradients, especially at the base of the skull, will cause a geometrically distorted representation of the anatomy or even signal voids. Another disturbing effect is the appearance of contrast enhancement in active lesions. Due to the commonly "squishy" content of lesions, their appearance is usually iso- to hypointense in T1-weighted imaging. MP-RAGE allows better control over the T1 weighting, potentially causing the lesion to be more hypointense than in SE imaging. In conjunction with contrast uptake, lesions show up hyperintense on T1-weighted SE imaging; they may or may not show up hyperintense on MP-RAGE imaging. The appearance has been reported to be inconsistent, likely due to the better T1 weighting (a hypointense lesion may show up isointense after contrast uptake).

The rapid acquisition of a gradient-echo or steady-state sequence following an inversion is sometimes referred to as a single-shot technique. This is not quite correct, since as many low-angle excitation pulses are applied as Fourier lines are needed to fill the k-space, but the single-shot nomenclature allows a differentiation from the segmented, or multi-phase, imaging of the beating heart. Similar to the STIR approach used in fat-signal-suppressed imaging, the inversion pulse prior to a single-shot technique enables the nulling of the signal of a specific tissue (depending on its T1 relaxation time). The TurboFLASH technique is used to study the first pass of a contrast bolus through the cardiac chambers, showing delayed enhancement in perfusion-restricted ischemic myocardium. The inversion time is adjusted so that normal myocardium gives no signal. In the early phase, normal myocardium is perfused with the T1-shortening contrast agent, whereas the perfusion-restricted ischemic myocardium remains hypointense. The same method of tissue signal nulling can be applied to TrueFISP; this technique has been used to demonstrate the late enhancement of infarcted myocardium. An advantage of TrueFISP over TurboFLASH is the additional signal contribution due to refocusing and balancing (spin-echo components), allowing a higher-bandwidth acquisition correlated with a shorter TE, a shorter TR, and therefore a shorter measurement time (~450 ms). In addition, TrueFISP has a significantly lower sensitivity to flow and motion artifacts than TurboFLASH, leading to (almost) artifact-free images. Both methods are currently being evaluated regarding their value in characterizing myocardial viability.

The single-shot version of gradient-echo imaging is echo planar imaging (EPI). Similar to TSE imaging, EPI makes use of several phase-encoded echoes to fill the raw data matrix (Fig. 2.4.50). There are multiple ways to acquire the data. A single excitation can be utilized, followed by multiple phase-encoded gradient-echoes with a small, constant phase-encoding gradient activated during the readout period. Such a technique is called FID-EPI, since signal sampling is done during the free induction decay. Another variant places the gradient-echoes under an SE envelope (Fig. 2.4.51); in this case the central k-space contains a T2 contrast, as opposed to the T2* contrast of the FID-EPI version, and the SE-EPI sequence shows a lower sensitivity to susceptibility gradients. Figure 2.4.51 shows an SE-EPI with an alternative approach to phase encoding: the phase encoding is done using gradient "blips" during the ramping of the frequency-encoding gradient. Such a technique is called blipped EPI. (Fig. 2.4.50 caption: FID-EPI sequence. After an excitation pulse, multiple GREs are generated using an oscillating frequency-encoding gradient; in this example the phase encoding is achieved with a low-amplitude, constant phase-encoding gradient throughout the measurement. Fig. 2.4.51 caption: SE-EPI sequence. After an excitation and a refocusing pulse, multiple GREs are generated using an oscillating frequency-encoding gradient; the phase encoding is achieved using small gradient pulses (blips) during the ramping of the frequency-encoding gradient.) Addressing a different way of k-space sampling, both the "frequency-encoding" and the "phase-encoding" gradient may oscillate, causing a spiral trajectory through k-space; such a method is known as spiral EPI. The quotation marks indicate that the magnetic field gradients no longer have the apparent meaning of frequency and phase encoding. The high sensitivity of EPI to local field inhomogeneities is utilized in (brain) perfusion imaging and for monitoring the oxygen level to identify cortical activation in BOLD (blood oxygenation level-dependent) imaging.
in spite of many limitations, the epi sequences have attained high clinical potential in functional imaging and in perfusion studies. the preparation of the longitudinal magnetization is not only possible with the previously described multi-shot techniques, but also with the single-shot version. single shot, by definition, means one excitation pulse and multiple phase-encoded gradient-echoes for sampling of all fourier lines. the primary preparation scheme for single-shot gradient-echo imaging is diffusion weighting. any magnetic field gradient in the presence of a transverse magnetization will cause the larmor frequency to be a function of location. sometimes this effect is desired, as in any phase encoding, and sometimes it is a byproduct of another desired functionality, e.g., the frequency encoding. to rephase or refocus the dephased transverse magnetization, a magnetic field gradient of opposite polarity can be used prior to the frequency-encoding gradient. but this will only work if the transverse magnetization does not change its position in the meantime, as it does in the case of diffusion. if the transverse magnetization has changed position, the phase history will be different as compared with stationary tissue at that new location, and the rephasing will be insufficient. insufficient rephasing results in a reduced signal. the signal drop is characterized by s = s0 · exp(−b · d), with b being a system- or method-specific parameter and d being the diffusion coefficient of the tissue. the method-specific parameter b is a function of the gradient amplitude g used, the duration δ of each gradient lobe, and the temporal distance ∆ between the two gradient lobes. ∆ is also called the diffusion time. a typical value is b = 1,000 s/mm². a sequence illustration is given in fig. 2.4.53. the result of the application of such a technique to a patient with an acute infarction is shown in fig. 2.4.54.
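the signal drop described above can be checked numerically. the sketch below assumes the standard stejskal-tanner expression b = (γ·g·δ)²·(∆ − δ/3) for rectangular gradient lobes, which the text does not spell out (it only states that b depends on g, δ, and ∆); the gradient amplitude and timings are illustrative.

```python
import math

GAMMA = 2.675e8  # gyromagnetic ratio of 1H in rad/(s*T)

def b_value(g, delta, Delta):
    """Stejskal-Tanner b-factor in s/m^2 for rectangular diffusion
    gradients of amplitude g (T/m), lobe duration delta (s), and
    lobe separation Delta (s): b = (gamma*g*delta)^2 * (Delta - delta/3)."""
    return (GAMMA * g * delta) ** 2 * (Delta - delta / 3.0)

def signal_ratio(b, d):
    """Diffusion attenuation s/s0 = exp(-b*d)."""
    return math.exp(-b * d)

# illustrative protocol: 40 mT/m gradients, 15-ms lobes, 40 ms apart
b = b_value(g=0.040, delta=0.015, Delta=0.040)   # s/m^2
b_mm = b * 1e-6                                  # convert to s/mm^2
ratio = signal_ratio(b_mm, d=1.0e-3)             # d ~ 1e-3 mm^2/s in brain tissue
print(f"b = {b_mm:.0f} s/mm^2, s/s0 = {ratio:.2f}")
```

with these illustrative values the b-factor lands near the typical 1,000 s/mm² quoted above, and roughly half the signal of freely diffusing tissue is lost.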
diffusion-weighted imaging allows an evaluation of the extent of cerebral ischemia in a period where possible interventions could limit or prevent further brain injury. the diffusion anisotropy potentially measured with this method allows the mapping of neuronal connectivity and offers an exciting perspective to brain research. the turbo gradient spin-echo sequence (tgse), also called gradient and spin-echo (grase), is a combination of multiple gradient-echoes that are acquired within multiple se envelopes of a tse sequence, as shown in fig. 2.4.55. this method holds several advantages in comparison to the "simple" tse sequence: the use of several phase-encoded gradient-echoes has the potential of further shortening the measurement time. figure 2.4.56 shows a transversal t2-weighted head image with a matrix size of 1,024, which has been measured with a tgse sequence in 2.56 min. another advantage is the fact that with the use of several gradient-echoes per spin-echo envelope, the gap between the refocusing pulses widens. therefore, the j-coupling remains intact, fat appears darker, and the contrast approaches the contrast of conventional se sequences. further, the enhanced sensitivity toward susceptibility gradients that has been introduced with the gradient-echoes allows for a better depiction of blood-decay products, similar to conventional se imaging. clinical mr systems come in various types and shapes; however, the fundamental components of a clinical mr tomograph are essentially the same. these are: • the magnet: the magnet creates a static and homogeneous magnetic field b0, which is needed to establish a longitudinal magnetization. • the gradients: the gradient coils generate additional linearly ascending magnetic fields that can be switched on and off. the gradient fields allow assigning a spatial location to the received mr signals (spatial encoding). for an image acquisition three independent gradient systems in x-, y-, and z-direction are required.
• the radio frequency (rf) system: to rotate the longitudinal magnetization from its equilibrium orientation along b0 into the transverse plane, an oscillating magnetic field b1 is required. this rf field is generated by a transmitter and coupled into the patient via an antenna, the rf coil. radio frequency coils are also used to receive the weak induced mr signals from the patient, which are then amplified and digitized. • the computer system: measurement setup and image post-processing are performed by (distributed) computers that are controlled by a host computer. at this host computer, new measurements are planned and started and the reconstructed images are stored and analyzed. a schematic of the components of a clinical mr system is shown in fig. 2.5.1; more detailed descriptions can be found in the works of oppelt (2005), vlaardingerbroek et al. (2002), and chen and hoult (1989). in the upper part, a cross-section through a superconducting magnet can be seen, with field-generating magnet windings embedded in a cryostatic tank. closer to the patient, the gradient coil and the whole-body rf coil are located outside the cryostat. the magnet is surrounded by an electrically conducting cabin (faraday cage), which is needed to optimally detect the weak mr signals without rf background from other rf sources (e.g., radio transmitters). in the lower part, the computing architecture and the hardware control cabinets are shown. a hardware computer controls the gradient amplifier, the rf transmitter, and the receiver. the received and digitized mr signals are passed on to an image-reconstruction computer, which finally transfers the reconstructed image data sets to the host computer for display and storage. to generate the main magnetic field three different types of magnets can be utilized: permanent magnets, resistive magnets, and superconducting magnets (oppelt 2005; vlaardingerbroek et al. 2002; chen and hoult 1989).
the choice of an individual magnet type is determined by the requirements on the magnetic field. important characteristics are the field strength b0, spatial field homogeneity, temporal field stability, patient accessibility, as well as construction and servicing costs. as outlined in sect. 2.2, a high magnetic induction is desirable, as the mr signal s is approximately proportional to b0², and the signal-to-noise ratio (snr) increases approximately linearly with b0. it is thus expected that with increasing field strength the measurement time can be substantially decreased. the field strength is limited, however, for the following reasons: • for tissues in typical magnetic fields of 0.5 t and higher, the longitudinal relaxation time t1 increases with field strength. if the same pulse sequence with identical measurement parameters (tr, te, etc.) were used at low field and at high field, the t1 contrast would be less pronounced in the high-field image, since image contrast typically depends on the ratio of tr over t1. to achieve a similar t1 contrast with a conventional se or gre pulse sequence, tr (and thus the total measurement time) needs to be increased. • the resonance frequency ω0 increases linearly with field strength according to ω0 = γ b0. at higher frequencies the wavelength of the rf waves is of the order of, or even smaller than, the dimensions of the objects to be imaged. under these circumstances, standing waves can be created in the human body, which manifest as areas of higher rf field (hot spots) and neighboring areas of reduced rf intensity. these unwanted rf inhomogeneities are difficult to control, as they are dependent on the geometry and the electric properties of the imaged object. • the power that is deposited in the tissue during rf excitation rises quadratically with ω0 (and thus with b0).
to ensure patient safety at all times during the imaging procedure, the specific absorption rate (sar), i.e., the amount of rf power deposited per kilogram of body weight, is monitored and limited by the mr system. with increasing field strength, the rf power generated by a pulse sequence increases, and thus the flip angle needs to be lowered to stay within the guidelines of sar monitoring. since most pulse sequences require certain flip angles (e.g., a 90-180° pulse pair for an se), the rf pulses need to be lengthened at higher field strength to reduce the rf power per pulse. additionally, the time-averaged power can be lowered by increasing the tr. • at field strengths above 1 t, only superconducting magnets are used for whole-body imaging systems. these magnets become very heavy and expensive. a typical 1.5-t mr magnet weighs about 6 t, whereas a 3-t magnet already has a weight of about 12 t. shielding of the stray fields, which is, e.g., necessary to avoid interference with cardiac pacemakers, becomes increasingly difficult. (fig. 2.5.1 caption: cross-section through a superconducting magnet with a cryostat (1) that is filled with liquid helium. the cryotank also houses the primary magnet coils (4) together with the shielding coils (2) that create the magnetic field. the cryotank is embedded in a vacuum tank (3). in a separate tubular structure in the magnet bore, the gradient coil (5) and the rf body coil (6) are mounted. an mr measurement is initiated by the user from the host computer. the timing of the sequence is monitored by the hardware computer, which controls (among others) the rf transmitter, rf receiver, and the gradient system. during the measurement, the rf pulses generated by the transmitter are applied (typically) via the integrated body coil, whereas signal reception is done with multiple receive coils. the digitized mr signals are reconstructed at the image-reconstruction computer, which finally sends the image data to the host for further post-processing and storage.)
• the absolute differences in resonance frequency between chemical substances increase with field strength. this effect is beneficial for high-resolution spectroscopy, as high field strengths allow separating the individual resonance lines. during imaging, however, a substantially increased chemical shift artifact (i.e., a geometric shift of the fatty tissues versus the water-containing tissues) is seen, which can only be compensated using higher readout bandwidths. • differences in magnetic susceptibility between neighboring tissues create a static field gradient at the tissue boundaries. the strength of these unwanted intrinsic field gradients scales with b0. therefore, increasingly higher imaging gradients are required at higher field strength to encode the imaging signal without geometric distortion; however, gradient strengths are technically limited. on the other hand, in neurofunctional mri (fmri) the increased sensitivity at higher field strengths is utilized to visualize those brain areas where local susceptibility differences in the blood are modulated during task performance. in the clinical environment, magnetic field strengths between 0.2 and 3 t are common. low-field mr systems (b0 < 1 t) are often used for orthopedic or interventional mri, where the access to the patient during the imaging procedure is important. high field strengths between 1 t and 3 t are used for all other diagnostic imaging applications. recently, whole-body mr systems with field strengths up to 9.4 t have been realized (robitaille et al. 1999). with these systems, in particular neurofunctional studies, high-resolution imaging and spectroscopy as well as non-proton imaging (e.g., for molecular imaging) are planned, since these applications are expected to profit most from the high static magnetic field. in the following, the three different types of magnet are described that are used to create the static magnetic field.
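the field-strength effects listed above can be put into numbers. the sketch below assumes γ/2π = 42.58 mhz/t for protons, a tissue relative permittivity of roughly 80, and the well-known fat-water shift of 3.5 ppm; none of these constants is given in the text, so treat the values as illustrative.

```python
# field-strength scaling of three effects discussed above:
# larmor frequency, rf wavelength inside tissue, fat-water shift.
GAMMA_HZ = 42.58e6       # Hz per tesla for 1H (assumed constant)
C = 2.998e8              # speed of light in vacuum, m/s
EPS_R = 80.0             # approximate relative permittivity of tissue
FAT_WATER_PPM = 3.5e-6   # chemical shift between fat and water

for b0 in (1.5, 3.0, 7.0):
    f0 = GAMMA_HZ * b0                    # resonance frequency, Hz
    wavelength = C / (f0 * EPS_R ** 0.5)  # rf wavelength inside tissue, m
    shift_hz = FAT_WATER_PPM * f0         # fat-water shift, Hz
    print(f"{b0:4.1f} T: f0 = {f0/1e6:6.1f} MHz, "
          f"lambda(tissue) ~ {wavelength*100:5.1f} cm, "
          f"fat-water shift = {shift_hz:5.0f} Hz")
```

at 3 t the in-tissue wavelength comes out near 26 cm, i.e., of the order of body dimensions, which is the standing-wave argument above; the fat-water shift in hertz doubles from 1.5 t to 3 t, which is why higher readout bandwidths are needed.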
permanent mri magnets are typically constructed of the magnetic material ndfeb (neodymium-iron-boron). permanent-magnet materials are characterized by their hysteresis curve, which describes the non-linear response of the material to an external magnetic field. if an external field is slowly increased, the magnetization of the material will also increase until all magnetic domains in the material are aligned; at this point the magnet is saturated, and no further amplification of the external field is possible. if the external field is then switched off, a constant, non-vanishing magnetic field remains in the material, because some of the domains remain aligned. permanent magnets offer a very high remanence field strength. permanent magnets require nearly no maintenance because they provide the magnetic field directly, without any electrical components. permanent magnets often use a design with two poles, which are either above and below (fig. 2.5.2) or at the sides of the imaging volume. within this volume the magnetic field lines should be as parallel as possible (high field homogeneity), which is achieved by shaping of the pole shoes. due to this construction, the magnetic field is typically orthogonal to the patient axis, whereas high-field superconductors use solenoid magnets with a parallel field orientation. magnetic field lines are always closed; therefore, an iron yoke is used in permanent magnets to guide the magnetic flux between the pole shoes. with increasing field strength permanent magnets become very heavy (10 t and more), and the high price of the material ndfeb becomes a limiting factor. additionally, to achieve a high temporal field stability the material requires a constant room temperature, which should not vary by more than 1 k. for these reasons, permanent magnets are typically used only for field strengths below 0.3 t.
if an electrical current is flowing through a conductor, a magnetic field is created perpendicular to the flow direction that is proportional to the current amplitude. unfortunately, in conventional conductors (e.g., copper wire) the electric resistance converts most of the electric energy into unwanted thermal energy and not into a magnetic field. therefore, a permanent current supply is required to maintain the magnetic field and to compensate for the ohmic losses in the wire. additionally, to dissipate the thermal energy, resistive magnets need permanent water cooling, as their power consumption reaches several hundred kilowatts. resistive magnets use iron yokes to amplify and guide the magnetic field created by the electric currents. the iron yoke is surrounded by the current-bearing wires so that the field lines stay within the iron. in the simplest form, the closed iron yoke has a gap at the imaging location and the magnet takes the form of a c-arc, which can also be rotated by 90° to provide good access for the patient (fig. 2.5.3). other magnet designs use two or four iron posts that connect the pole shoes. the magnetic field of a resistive magnet is typically not as homogeneous as that of a superconducting magnet of the same size. to achieve high field homogeneity within the imaging field-of-view, the diameter of the pole shoes should not be less than 2.5 times the desired diameter of the imaging volume (dfov), and the pole separation should be more than 1.5 dfov. at a typical pole separation of 45 cm, the imaging volume would thus have a diameter of 30 cm, and the pole shoe diameter amounts to 75 cm. resistive magnets are susceptible to field variations caused by instabilities of the electric power supply. to minimize this effect, the magnetic field of the magnet can be stabilized using an independent method to measure the field strength (e.g., electron spin resonance).
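the geometric rules of thumb above can be wrapped in a small helper; the factors 2.5 and 1.5 are the ones given in the text, while the function name is illustrative.

```python
def resistive_magnet_geometry(pole_separation_cm):
    """Given the pole separation of a two-pole resistive magnet,
    return (imaging-volume diameter, minimum pole-shoe diameter)
    using the rules of thumb from the text:
    separation >= 1.5 * dfov and pole-shoe diameter >= 2.5 * dfov."""
    dfov = pole_separation_cm / 1.5
    pole_shoe = 2.5 * dfov
    return dfov, pole_shoe

dfov, shoe = resistive_magnet_geometry(45.0)
print(f"dfov = {dfov:.0f} cm, pole shoe >= {shoe:.0f} cm")
```

for the 45-cm pole separation quoted in the text, this reproduces the 30-cm imaging volume and 75-cm pole-shoe diameter.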
the difference between actual and desired field strength is then used to regulate the current in the magnet in a closed feedback loop. to create magnetic fields of more than 0.3 t with a bore size of 60 cm or more, today superconducting magnets are typically utilized (fig. 2.5.4). in principle, these magnets operate in a similar fashion as resistive magnets without an iron yoke: superconducting magnets also generate their magnetic field by wire loops that carry a current. instead of copper wire, superconductors use special metallic alloys such as niobium-titanium (nbti). the alloys completely lose their electric resistance below a certain transition temperature that is characteristic for the material; this effect is called superconductivity. the transition temperature itself is a function of the magnetic field, so that lower temperatures are required when a current is flowing through the wire. unfortunately, an upper limit for the current density in the wire exists, which is also a function of the temperature and the magnetic field. to maintain the required low temperatures, cooling with liquid helium is typically necessary (t ≈ -269°c, about 4 k). the imaging volume of the mr system is typically kept at room temperature (t = 20°c), whereas the surrounding superconducting wires require temperatures near absolute zero (-273.15°c). to maintain this enormous temperature gradient, the field-generating superconducting coils are encased in an isolating tank, the cryostat. the cryostat is a non-magnetic steel structure that contains radiation shields to prevent heat diffusion, heat conduction, and heat transport. if this isolation is not working properly and the wire is locally warming up over the transition temperature, then this section of the wire will become normally conducting, and the energy stored in the current will be dissipated as heat. the heat will then be transported to adjacent sections of the wire, which will also lose their superconductivity.
this very rapid process is called a quench. when the magnet wire is heating up, the liquid helium will evaporate, and the cryostat is exposed to an enormous pressure. to prevent the cryostat from exploding, a so-called quench tube is connected to superconducting magnets with helium cooling, which safely guides the cold helium vapor out of the magnet room. (fig. 2.5.3 caption: iron-frame electromagnet 0.6-t mr system (upright tm mri, fonar) with a horizontal magnetic field. this special construction of the mr system allows for imaging in both upright and lying positions. this flexibility is especially advantageous for mr imaging of the musculoskeletal system.) recently, also ceramic superconducting materials on the basis of niobium-tin (nb3sn) alloys have been used to make superconducting wires. the brittle nb3sn alloys show a higher transition temperature (-263°c) and thus do not necessarily require liquid helium cooling. if the cryostat is equipped with a good thermal vacuum isolation, a conventional cooling system (e.g., gifford-mcmahon cooler) can be used to maintain the temperature. this technology has been realized both in a dual-magnet system (general electric sp, b0 = 0.5 t) and in a low-field open mr system (toshiba opart, b0 = 0.35 t). because a cryostat without helium requires less space than a helium-filled one, these magnets can be installed in smaller rooms than comparable magnets with helium. in recent years, several mr magnets have been equipped with helium liquefiers to recondense the evaporated helium gas inside the magnet. once filled with helium, these so-called zero boil-off magnets can in principle operate without any additional helium filling. magnets without helium liquefiers require replenishment of the helium at intervals between several months and 1-2 years, depending on the quality of the cryostat and the usage of the mr system.
the most widespread form of a superconducting magnet is the solenoid, where the windings of the superconducting wire form loops around the horizontal bore of the cylindrical magnet. at a typical inner bore diameter of 60 cm for clinical mri systems, solenoid magnets can create very homogeneous magnetic fields with variations of only a few parts per million (ppm). because the relatively bulky magnet structure limits access to the patient, shorter magnets of 1.5 m length with wider diameters of 70 cm have been designed (siemens magnetom espree, b0 = 1.5 t) (fig. 2.5.5). in these magnets, obese patients can be imaged more conveniently, claustrophobic patients feel more at ease, and some mr-guided percutaneous interventions might become feasible. another variant of the solenoid is a dual-magnet mr system consisting of two collinear short solenoid magnets (general electric sp, b0 = 0.5 t); here the imaging area is located in between the two magnets, and even intra-operative mr imaging is possible. recently, also two-pole systems with a magnet design similar to low-field resistive magnets have become available, which offer good patient access in combination with higher field strengths (fig. 2.5.6). outside a superconducting magnet, the field strength is falling off with the inverse third power of the distance (1/r³), so that the stray fields can extend far outside the mr room. magnetic fields in commonly accessible areas must not exceed 0.5 mt, because higher fields can affect pacemakers and other active electric devices (fig. 2.5.7). for this reason, two shielding technologies have been utilized to reduce the magnetic fringe fields. with passive shielding, ferromagnetic materials such as steel are mounted near the magnet. this shielding technique confines the field lines to the interior of the shielding material, and the stray fields are reduced.
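the 1/r³ falloff stated above can be used to estimate where the 0.5-mt contour lies; the reference field value and distance in the sketch below are purely illustrative, not data from the text.

```python
def stray_field(b_ref, r_ref, r):
    """Dipole-like far-field falloff: b(r) = b_ref * (r_ref / r)**3."""
    return b_ref * (r_ref / r) ** 3

def distance_for_limit(b_ref, r_ref, b_limit):
    """Distance at which the stray field has dropped to b_limit,
    by inverting the 1/r^3 law."""
    return r_ref * (b_ref / b_limit) ** (1.0 / 3.0)

# illustrative: suppose the stray field is 4 mT at 2 m from the magnet center
r_limit = distance_for_limit(b_ref=4.0e-3, r_ref=2.0, b_limit=0.5e-3)
print(f"0.5 mT contour at ~{r_limit:.1f} m")
print(f"field at 3 m: {stray_field(4.0e-3, 2.0, 3.0)*1e3:.2f} mT")
```

an eightfold field reduction corresponds to only a doubling of distance, which is why unshielded high-field magnets need large exclusion zones.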
unfortunately, the amount of shielding material rapidly increases with increasing magnetic field, and between 400 t and 600 t of steel are required to shield a 7-t magnet (schmitt et al. 1998). with active shielding, a second set of wire loops is integrated in the cryostat of the magnet. the shielding coils create a magnetic field in the opposite direction of the imaging field, so that the stray field falls off more rapidly. the shielding coils have a larger diameter than do the field-generating primary coils. thus, the desired magnetic field within the magnet can be maintained by increasing the current in both coil systems. additionally, the shielding coils and the primary coils repel each other (lorentz forces), which requires a magnet design with more stable coil formers. the attractive forces acting on paramagnetic or ferromagnetic objects near such an actively shielded magnet are significantly higher than near an unshielded magnet; device compatibility and safety should thus always be specified with regard to the investigated magnet type. (fig. 2.5.4 caption: conventional 1.5-t superconducting mr magnet (magnetom symphony, siemens) with a magnet length of 160 cm and a free inner bore diameter of 60 cm. the system is equipped with an in-room monitor (left), which allows controlling the mr system from within the rf cabin.) (fig. 2.5.5 caption, fragment: ...and an open-bore diameter of 70 cm. the additional 10 cm in bore diameter over conventional mr systems with solenoid magnets and the shorter magnet length offer better access to the patient, so that, e.g., percutaneous interventions can be performed in this magnet structure.) to localize the mr signals emitted by the imaging object, a linearly increasing magnetic field, the gradient g, is superimposed on the static magnetic field b0. the gradient fields are created by gradient coils that are located between the magnet and the imaging volume (schmitt et al. 1998).
for each spatial direction (x, y, and z) a separate gradient coil is required, and angulated gradient fields are realized by linear superposition of the physical gradient fields. in a cylindrical-bore superconducting magnet, the gradient coils are mounted on a cylindrical structure, which is often made of epoxy resin. this gradient tube reduces the available space in the cryostat from typically 90 cm, without gradient coils, to 60 cm, with gradient coils. the functional principle of a gradient system is best illustrated by a setup of two coaxial wire loops with a radius a that are separated by a distance d (fig. 2.5.8). if the two coils both carry the same current, however in counter-propagating directions, their respective magnetic fields cancel at the iso-center of the setup. at distances not too far from the iso-center the magnetic field will increase linearly, which is exactly the desired behavior of a gradient field. to achieve this linear gradient field the condition d = √3·a must be met (maxwell coil pair) (jin 1999). in commercially available gradient systems, much more complicated wiring paths are utilized, which are optimized using the so-called target-field approach (turner 1993). this often results in wire patterns that, when plotted on a sheet of paper, resemble fingerprints (fingerprint design). nevertheless, a common feature of all gradient systems is the absence of current at the central plane, which allows separating the gradient coils, e.g., for c-arc-type magnets. the quality of the gradient system is characterized by several parameters: the maximum gradient strength gmax, the slew rate smax, the homogeneity, the duty cycle, the type of shielding, and the gradient pulse stability and precision. today, clinical mr systems have maximum gradient strengths of up to gmax = 40 mt/m at bore diameters of 60 cm.
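the two-loop (maxwell pair) arrangement described above can be verified numerically with the textbook biot-savart result for the on-axis field of a circular loop; the radius and current below are illustrative.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def loop_bz(i, a, z):
    """On-axis field of a single circular loop (radius a in m,
    current i in A) at axial distance z from the loop plane."""
    return MU0 * i * a**2 / (2.0 * (a**2 + z**2) ** 1.5)

def maxwell_pair_bz(i, a, z):
    """Two coaxial loops at z = +/- d/2 carrying opposite currents,
    with the maxwell-pair spacing d = sqrt(3)*a."""
    d = math.sqrt(3.0) * a
    return loop_bz(i, a, z - d / 2) - loop_bz(i, a, z + d / 2)

a, i = 0.3, 100.0  # illustrative radius (m) and current (A)
b0 = maxwell_pair_bz(i, a, 0.0)                       # exactly zero at iso-center
g = (maxwell_pair_bz(i, a, 0.001)
     - maxwell_pair_bz(i, a, -0.001)) / 0.002          # central gradient, T/m
print(f"b(0) = {b0:.1e} T, gradient ~ {g * 1000:.3f} mT/m")
```

the field vanishes at the iso-center and, with the √3·a spacing, the leading non-linear term is suppressed, so doubling the axial offset near the center almost exactly doubles the field.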
even higher gradient strengths of 80 mt/m and more can be realized when so-called gradient inserts with smaller diameters are used (e.g., for head imaging). the maximum gradient strength is limited by the capabilities of the power supply of the gradient system; modern gradient systems use power supplies that can deliver voltages up to 2,000 v and currents up to 500 a. another limiting factor for gmax is gradient heating: with increasing current through the gradient coil, the windings heat up to levels at which the gradient coil could be destroyed. therefore, to remove the heat from the gradient tube, pipes are integrated in the gradient coils for water cooling. the maximum slew rate smax is the ratio of gmax over the shortest time required to switch on the gradient (rise time). when the current in the gradient coil is increased during gradient switching, according to lenz's law the coil will produce a voltage which opposes the change. thus, it counteracts the switching process, and the rise time cannot be made infinitely short. during mr imaging, however, it is desirable to have very short rise times (i.e., high slew rates), as these times only prolong the imaging process. clinical mri systems have slew rates between 10 mt/m/ms and 200 mt/m/ms. if the gradient coil is connected to a capacitance via a fast switch, very short rise times can be achieved, as the inductance of the gradient coil and the capacitance form a resonance circuit. such a resonant gradient system has the disadvantage that the characteristic frequency of the resonance circuit determines the possible rise times. additionally, gradients can only be switched on after the capacitances have been charged. resonant gradient systems have nevertheless been successfully applied to epi studies, in which the sinusoidal gradient waveforms are beneficial, and multistage resonant systems have been utilized to approximate the trapezoidal gradient waveforms (harvey 1994).
when the gradient is switched on, the maximal rate of field change is observed at the ends of the gradient coil (i.e., at fovmax/2): db/dt = smax · fovmax/2. a changing magnetic field induces currents in electrically conducting structures in its vicinity; outside the gradient coil this structure is given by the cryostat, and on the inside the patient can act as a conductor. to avoid these parasitic currents (eddy currents) in the cryostat, which in turn create magnetic fields counteracting the gradients, often a second outer gradient structure is integrated in the gradient tube. the inner and outer gradient coils are designed so that their combined gradient field vanishes everywhere outside the gradient coil, whereas the desired gradient amplitudes are realized on the inside. this technique is called active shielding, and is conceptually similar to the active shielding of superconducting magnets (mansfield and chapman 1986; harvey 1994). gradient-induced currents in the human body pose a more severe problem, as these currents can potentially lead to painful peripheral nerve stimulation or, at higher amplitudes, to cardiac stimulation (mansfield and harvey 1994; schaefer 1998; liu et al. 2003). these physiologic effects are dependent not only on the amplitude, but also on the frequency of the field change. for clinical mr systems, different theoretical models have been established to determine the threshold for peripheral nerve stimulation. to make the best use of the available gradient system, some fast pulse sequences (e.g., for contrast-enhanced mra or epi) operate very close to these threshold values. as individuals are more or less susceptible to peripheral nerve stimulation, for some patients the individual threshold might be exceeded, and they experience a tickling sensation during fast mr imaging. this physiologic effect currently prohibits the use of stronger gradient systems.
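the two figures of merit above combine into a quick estimate: the shortest ramp time of a trapezoidal gradient follows from rise time = gmax/smax, and the peak field change at the coil edge from db/dt = smax · fovmax/2 as given in the text. the numeric values below are illustrative, taken from the ranges quoted above.

```python
def rise_time_ms(gmax_mt_m, smax_mt_m_ms):
    """Shortest ramp time (ms) to reach gmax at slew rate smax."""
    return gmax_mt_m / smax_mt_m_ms

def peak_dbdt(smax_t_m_s, fov_max_m):
    """Maximum rate of field change (T/s) at the edge of the
    gradient coil: db/dt = smax * fovmax / 2."""
    return smax_t_m_s * fov_max_m / 2.0

# 40 mT/m system at 200 mT/m/ms; 0.5-m maximum field of view
print(f"rise time: {rise_time_ms(40.0, 200.0):.2f} ms")
print(f"db/dt at coil edge: {peak_dbdt(200.0, 0.5):.0f} T/s")
```

a 40-mt/m gradient ramped at 200 mt/m/ms reaches full strength in 0.2 ms, and at the edge of a 0.5-m field of view the patient sees a field change of 50 t/s, which is the quantity the stimulation models above are concerned with.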
since the field change is lower at shorter distances from the iso-center, peripheral nerve stimulation can be avoided if shorter gradient systems are used. unfortunately, a shorter gradient system only covers a limited fov, and the anatomical coverage is compromised. to overcome this limitation, a combined gradient system with a shorter, more powerful inner coil and a longer, less intense outer coil has been proposed (twin gradients) (harvey 1999). such a system can be used, e.g., to rapidly image the beating heart with the small coil, or to acquire image data from the surrounding anatomy at lower frame rates. when the gradient system is mounted in the mr magnet, strong mechanical forces act on the gradient tube, which are proportional to the gradient current. these forces are generated by the interaction of the gradient field with the static magnetic field and thus increase with b0. the permanent gradient switching creates time-varying forces that lead to acoustic noise. several techniques have been proposed to reduce noise generation, which in some cases can exceed dangerous sound pressure levels of 110 db. the wire paths in the coil can be designed in such a way that the forces are locally balanced, the gradient tube can be mechanically stabilized, the gradients can be integrated in a vacuum chamber to prevent sound propagation in air, or the gradient system can be mounted externally to reduce acoustic coupling to the cryostat (pianissimo gradient, toshiba). another possibility to reduce acoustic noise is to limit the slew rates in the pulse sequences to lower values than technically possible; in some pulse sequences (e.g., spin-echo sequences), this does not significantly affect the pulse sequence performance, but markedly increases patient comfort. shimming is a procedure to make the static magnetic field in the mr system as homogeneous as possible.
inhomogeneities of the magnetic field that are caused during the manufacturing of the magnet structure can be compensated with small magnetic plates (passive shim). after a localized measurement of the initial magnetic field, the position of the plates is calculated, and the plates are placed in the magnet. this procedure is repeated until the desired homogeneity of the field is achieved (e.g., 0.5 ppm in a sphere of radius 15 cm). during mr imaging, objects are present in the static magnetic field that distort the homogeneous static field. field distortion is caused by susceptibility differences at the tissue interfaces and is thus specific for each patient. to at least locally compensate these field distortions, adjustable magnetic fields are required (active shim). if the field distortion is linear in space, then the gradient coils can be used for compensation. for higher-order field variations, additional shim coils are required. typically, shim coils up to fifth order are present in an mr system. higher-order shimming is particularly important for mr spectroscopy, where the field homogeneity directly affects the spectral line width. to optimize the shim currents, an iterative measurement process (the shim) is started after the patient is positioned in the magnet. during active shimming, the field homogeneity is measured (e.g., using localized mr spectroscopy or a field mapping technique), and the currents are then adjusted to improve the field homogeneity (webb and macovski 1991). the radiofrequency (rf) system of an mr scanner is used both to create the transverse magnetization via resonant excitation and to acquire the mr signals (oppelt 2005; vlaardingerbroek et al. 2002; chen and hoult 1989). the rf system consists of a transmit chain and a receive chain. in the following, the details of the rf system are described. the mr signals, which are acquired by the rf coils of the mr system, are typically very weak.
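as a sketch of the active-shim idea for the linear case described above, the following hypothetical example fits first-order terms to a measured field map and negates them to obtain gradient-coil offsets; the synthetic field values and sample positions are assumed.

```python
import numpy as np

# synthetic field map: assumed linear inhomogeneity sampled at 50 positions
rng = np.random.default_rng(0)
pos = rng.uniform(-0.1, 0.1, size=(50, 3))       # sample positions in m
true_grad = np.array([2.0e-6, 1.5e-6, -0.8e-6])  # linear field terms in t/m
b = 1.0e-7 + pos @ true_grad                     # measured field offsets in t

# least-squares fit b ≈ b0 + pos @ g; the gradient coils then apply -g
a = np.hstack([np.ones((len(pos), 1)), pos])
coef, *_ = np.linalg.lstsq(a, b, rcond=None)
gradient_offsets = -coef[1:]   # first-order shim correction in t/m
```

higher-order shimming proceeds analogously, but with spherical-harmonic basis functions and dedicated shim coils instead of the three gradient channels.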
to optimally detect these weak signals, any other electromagnetic signals (e.g., radio waves) must be suppressed. therefore, the mr system is placed in a radiofrequency cabin (also called a faraday cage), which dampens rf signals at the resonance frequency by typically 100 db and more. in low-field mr tomographs, the rf screening is sometimes realized as a wire mesh that is integrated in the mr system. this has the advantage that rf-emitting equipment such as television screens can be placed very close to the mr unit. at larger magnet dimensions, these local screens are often not suitable. here, the whole mr room is designed as an rf cabin, and the screening material is integrated into the walls, doors, and windows. for screening, copper sheets are often used, which are glued to the wall panels, or the cabin consists completely of steel plates. to be able to transmit signals to and receive signals from the rf cabin, openings are integrated in the cabin. in general, one distinguishes between so-called filter plates, which contain electronic filters, and open waveguides. waveguides are realized as open tubes with a certain length-to-diameter ratio, which depends on the wavelength at the rf frequency. waveguides are used to deliver anesthesia gases to the rf cabin and to guide the quench tube out of the shielded room. at the beginning of the transmit chain, the rf transmitter is found, which consists of a synthesizer with high frequency stability and an rf power amplifier. the low-power synthesizer oscillates at the larmor frequency. its output signal is modulated by a digitally controlled pulse shaper to form the rf pulse, which is then amplified by the power amplifier. for typical clinical mr systems, the transmitter needs to provide a peak power output at the larmor frequency of 10 kw and more.
besides high peak power, the rf transmitter should also allow for a high time-averaged power output, as several pulse sequences such as fast spin-echo sequences require rf pulses at short repetition times. the rf power is then transferred into the rf cabin via a shielded cable and is delivered to the transmit rf coil. to guarantee safe operation of the transmitter and to limit the rf power to values below the regulatory constraints for the specific absorption rate (sar), directional couplers are integrated in the transmission line. these couplers measure the rf power sent to the rf coil as well as the reflected power. high power reflection is an indicator of a malfunction of the connected coil, which could endanger the patient. if the reflected power exceeds a given threshold (e.g., 20% of the forward power), the rf amplifier could be damaged by the reflected rf power, and the transmitter is switched off. to couple the rf power of the rf transmitter to the human body, an rf antenna is required, the so-called rf coil. before mr imaging starts, the coil is tuned to the resonance frequency of the mr system (rf tuning). simultaneously, the properties of the connecting circuitry are dynamically changed to match the resistance of the coil with the imaging object (loaded coil) to the resistance of the transmit cable (rf matching). once the coil is tuned and matched, the transmitter is adjusted. during this procedure, the mr system determines the transmitter voltage required to create a certain flip angle. for a given reference rf pulse shape sref(t), the transmitter voltage uref is varied until the desired flip angle αref (e.g., 90°) is realized. during the subsequent imaging experiments, use is made of the fact that the flip angle is linearly proportional to the (known) integral over the rf pulse shape, so that the required voltages can be computed from the reference values by linear scaling.
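the linear-scaling step can be sketched as follows; the voltages and pulse shapes are hypothetical, and the only assumption used is the one stated above, namely that the flip angle is proportional to voltage times pulse-shape integral.

```python
import numpy as np

def transmit_voltage(alpha_deg, pulse, dt, u_ref, alpha_ref_deg, pulse_ref, dt_ref):
    """scale the calibrated reference voltage to a new pulse shape and flip
    angle, using flip angle ∝ voltage × integral of the pulse shape."""
    integral = np.sum(pulse) * dt
    integral_ref = np.sum(pulse_ref) * dt_ref
    return u_ref * (alpha_deg / alpha_ref_deg) * (integral_ref / integral)

# assumed calibration: 100 v produced a 90° flip with a 1-ms rectangular pulse;
# a 0.5-ms rectangular pulse then needs twice the voltage for the same flip
u = transmit_voltage(90.0, np.ones(50), 1e-5, 100.0, 90.0, np.ones(100), 1e-5)
```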
radiofrequency coils are categorized into transmit (tx) coils, receive (rx) coils, and transmit/receive (txrx) coils. tx coils are only used to expose the imaging object to an rf b1 field during rf excitation, whereas rx coils detect the weak echo signal emitted from the human body. only if a coil performs both tasks is it called a txrx coil. a typical example of a txrx coil is the body coil integrated into most superconducting mr systems; however, in some modern mr systems, it is used as a tx coil only due to its suboptimal receive characteristics. rx-only rf coils are the typical local coils found in mr systems that possess a (global) body coil, and local txrx coils are used in all other mr systems without a body coil (ultra-high field, dedicated interventional systems, open-configuration low field). during signal reception, the oscillating magnetization in the human body induces a voltage in the rf coil. for an optimal detection of this weak signal, the rf coil should be placed as close to the imaging volume as possible. for this reason, optimized imaging coils exist for nearly any part of the human body. the largest coil of an mr system is typically the body coil (if present), which is often integrated in the magnet cover. to image the head or the knee, smaller volume resonators are used, where the imaging volume is in the interior of the rf coil (fig. 2.5.9). flexible coils exist that can be wrapped around the imaging volume (e.g., the shoulder). small circular surface coils are used to image structures close to the body surface (e.g., the eyes). unfortunately, the sensitivity of these coils decreases rapidly with distance from the coil center, so that they are not suitable for imaging experiments where a larger volume needs to be covered. during rf transmission, rx coils need to be deactivated, because a tuned and matched rx coil would ideally absorb the transmitted rf power, and a significant amount of the rf energy would be deposited in the coil.
to avoid any electronic damage, the coil is actively de-tuned during rf transmission; this is often accomplished by fast electronic switches (e.g., pin diodes), which connect a dedicated detuning circuitry. to combine the high sensitivity of small surface coils with the volume coverage of a large volume resonator, the concept of so-called phased-array coils has been introduced (roemer et al. 1990). a phased-array coil consists of several small coil elements, which are directly connected to individual receiver channels of the mr system. the separate reconstruction of the coil elements is technically demanding, because a full set of receiver electronics (amplifiers, analog-to-digital converters) as well as an individual image reconstruction are required for each coil element. the signals of the individual coil elements are finally combined using a sum-of-squares algorithm, which yields a noise-optimal signal combination. under certain conditions, when some snr can be sacrificed, a simpler but suboptimal image reconstruction can be achieved by a direct combination of the coil element signals, which reduces the number of receive channels and shortens the image reconstruction time. to be able to manually trade snr against reconstruction overhead, special electronic mixing circuits have been introduced which allow combining, e.g., three coil elements into a primary, a secondary, and a tertiary signal (total imaging matrix, tim, siemens). in a phased-array coil, the coil elements are positioned in such a way that an induced voltage in one element does not couple to the adjacent element; this can be achieved by an overlapping arrangement of the coil elements (geometric decoupling). phased-array coils with up to 128 elements have been realized; however, typically the number of elements ranges between 4 and 32. today, mri systems with 32 independent receiver channels are available, to which up to 76 coil elements can be connected simultaneously.
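the sum-of-squares combination mentioned above can be sketched in a few lines; the toy coil images are invented for illustration.

```python
import numpy as np

def sum_of_squares(coil_images):
    """combine per-coil images pixel-wise: sqrt(sum_k |i_k|^2)."""
    coil_images = np.asarray(coil_images)
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# two toy 2 × 2 coil images; top-left pixel combines as sqrt(9 + 16) = 5
imgs = [np.array([[3.0, 0.0], [1.0, 1.0]]),
        np.array([[4.0, 1.0], [1.0, 2.0]])]
combined = sum_of_squares(imgs)
```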
the individual coil elements can be selected manually or automatically to achieve the highest possible snr for a given imaging location. phased-array coils are not only required to achieve a high snr. the individual coil elements can also be used to partially encode the spatial location in the image; this procedure is called parallel imaging. the simplest version of parallel imaging uses two adjacent coil elements with non-overlapping sensitivities. if one wants to image the full fov covered by both coils, only fov/2 needs to be encoded, since each coil element is sensitive over this distance only. if the phase-encoding direction is chosen in this direction, the phase fov can be reduced by a factor of 2, which in turn halves the total acquisition time. in practice, the sensitivity profiles of the coil elements overlap, and more sophisticated techniques such as smash (sodickson and manning 1997) or sense (pruessmann et al. 1999) are required to reconstruct the image. nevertheless, in parallel imaging the intrinsic spatial encoding present in the different locations of the imaging coils is exploited to reduce the number of phase-encoding steps. because the phase-encoding direction is different for different slice orientations, the optimal phased-array coil for parallel imaging offers coil elements with separated sensitivity profiles in all directions.
(figure caption) various receive coils on the patient table of a clinical 1.5-t mr system (magnetom avanto, siemens) with 32 receive channels and the possibility to connect a total of 76 coil elements. the head coil with 12 coil elements is combined with a neck coil (4 elements), and the remaining parts of the anatomy are imaged with multiple flexible anterior phased-array coils (2 × 3 elements) and the corresponding posterior coils, which are integrated in the patient table. for smaller imaging volumes, dedicated surface coils (flexible coil, open loop coil, small loop coil) can be used, which share a common amplifier interface.
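for the two-coil, factor-2 case described above, unfolding reduces to solving a 2 × 2 linear system per aliased pixel, since each aliased pixel mixes two positions a half-fov apart; the sensitivity profiles and object below are synthetic assumptions, not real coil data.

```python
import numpy as np

def sense_unfold_r2(aliased, sens):
    """unfold a factor-2 aliased acquisition (sense-style sketch).
    aliased: (ncoils, n/2) aliased pixel values
    sens:    (ncoils, n) coil sensitivity profiles over the full fov"""
    ncoils, half = aliased.shape
    full = np.zeros(2 * half)
    for y in range(half):
        # each aliased pixel mixes positions y and y + fov/2
        s = np.stack([sens[:, y], sens[:, y + half]], axis=1)
        sol, *_ = np.linalg.lstsq(s, aliased[:, y], rcond=None)
        full[y], full[y + half] = sol
    return full

# synthetic example: two coils with opposite linear sensitivity ramps
n = 8
sens = np.stack([np.linspace(1.0, 0.2, n), np.linspace(0.2, 1.0, n)])
truth = np.arange(1.0, n + 1.0)
aliased = sens[:, : n // 2] * truth[: n // 2] + sens[:, n // 2:] * truth[n // 2:]
recovered = sense_unfold_r2(aliased, sens)
```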
for mr spectroscopy and non-proton imaging, rf coils with resonance frequencies for the respective nuclei are required. these non-proton coils can also incorporate a coil at the proton resonance frequency to acquire proton images without the need for patient repositioning. double-resonant coils are also important in situations when both frequencies are used at the same time, as, e.g., in decoupling experiments. for interventional mri, dedicated tracking coils have been developed that are attached to the interventional devices (e.g., catheters or needles). the signal from these coils can be used for high-resolution imaging (e.g., of the vessel wall), but it is often only utilized to determine the position of the device (dumoulin et al. 1993). in these tracking experiments, the signal of the coil is encoded in a single direction using a non-selective rf excitation, and the position of the coil in this direction is extracted after a one-dimensional fourier transform. the mr signal received by the imaging coil is a weak, analog, high-frequency electric signal. to perform an image reconstruction or a spectral analysis, this signal must be amplified, digitized, and demodulated. the signal amplification is typically performed very close to the imaging coil to avoid signal interference from other signal sources. if the rf coil is a txrx coil, then the signal passes a transmit-receive switch that separates the transmit from the receive path. the amplified analog signal still contains the high-frequency component at the larmor frequency. to remove this unwanted frequency component, the signal is sent to a demodulator, which receives the information about the current larmor frequency from the synthesizer of the transmitter. after demodulation, the mr signal contains only the low-frequency information imposed by the gradients. finally, the analog voltage is converted into a digital signal using an analog-to-digital converter (adc).
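the one-dimensional localization step can be simulated in a few lines; fov, sample count, and coil position are assumed values. under a readout gradient, the tracking coil's signal is a single complex frequency, so the peak of the 1d fourier transform encodes its position.

```python
import numpy as np

# assumed parameters for the 1d localization
n = 256          # number of samples
fov = 0.4        # encoded field of view in m
x0 = 0.05        # true coil position in m

# the coil position maps to dft bin x0 / fov * n
k = np.arange(n)
freq = x0 / fov * n
signal = np.exp(2j * np.pi * freq * k / n)

# one-dimensional fourier transform; the peak bin encodes the position
bin_idx = int(np.argmax(np.abs(np.fft.fft(signal))))
x_est = bin_idx / n * fov
```

repeating this with readout gradients along all three axes yields the full 3d device position.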
over recent years, the conversion into a digital signal has increasingly been performed at an earlier stage in the receiver chain (e.g., before demodulation), and all subsequent steps are carried out in the digital domain. at the end of the receiver chain, the digital signal is then handed over to the image reconstruction computer. the computing system of an mr tomograph is typically realized by a system of distributed computers that are connected by a local high-speed network. the requirements for the computing system are manifold: for the user of the system, it should provide an intuitive interface for measurement control, image processing, archiving, and printing. during sequence execution, the computers should control the hardware (i.e., gradients, rf, adcs, patient monitoring, etc.) in real time. additionally, the computing system must reconstruct and visualize the incoming mr data. since a single computer cannot perform all of these tasks at the same time, typically three computers are used in an mr system: the host computer for interaction with the user, the hardware control computer for real-time sequence control, and the image reconstruction computer for high-speed data reconstruction. the host computer provides the interface between the user and the mr system. through the mr user interface, the whole mr system can be controlled, mr measurements can be started, and the patient monitoring is visualized. at the host computer, the incoming images are sorted into an internal database for viewing, post-processing, and archiving. the internal database stores and sorts the images by patients, studies, and series. the database is often connected to the picture archiving and communication system (pacs) of the hospital, from where it retrieves the patient information to maintain a unique patient registry. a 256 × 256 mr image typically requires about 130 kb of storage space, and for each patient investigation between 100 and 1,000 images are acquired.
on an average working day, between 10 and 30 patients can be examined. the data of all of these patients need to be stored in the database, so that a storage volume of about 4 gb per day should be provided. with increasing matrix sizes and image acquisition rates, these numbers can easily be multiplied by factors of 10 and more. the host computer is also used to transfer the acquired data to archiving media such as magneto-optical disks (mod), tapes, compact disks (cd), digital versatile disks (dvd), or external computer archives (typically, the pacs). data transfer is increasingly accomplished using the image standard dicom (digital imaging and communications in medicine), which regulates not only the image data format, but also the transfer protocols. it is due to this imaging standard that images can be exchanged between systems from different vendors and can be shared between different modalities. for post-processing, typically different software packages are integrated. in mr spectroscopy, software packages for spectral post-processing are available to calculate, e.g., peak integrals automatically. for mr diffusion measurements, the apparent diffusion coefficient can be mapped. with flow-evaluation software, the flow velocities and flow volumes can be assessed. to visualize three-dimensional data sets, multi-planar reformatting tools or projection techniques such as the maximum intensity projection (mip) are often used. all of these software packages retrieve the image data from the integrated image database, into which the calculated images are finally stored. dedicated computer monitors are connected to the host computer for image visualization, which fulfill the special requirements for diagnostic imaging equipment. in addition, these screens must not be susceptible to distortions due to the magnetic field; for this reason, liquid crystal monitors based on thin-film transistor (tft) technology are increasingly used.
for interventional mr, shielded monitors for in-room image display have been designed, where the monitor is shielded against electromagnetic interference. these monitors can be used within the faraday cage of the mr system without interfering with the image acquisition. the control of the imaging hardware (i.e., the gradients in x, y, and z, the rf sub-system, the receiver, and the patient-monitoring system) requires a computer with a real-time operating system. compared with conventional operating systems where the instructions are processed in an order and at a time that are influenced by many factors, a real-time operating system ensures that operations are executed on an exactly defined time scale. this real-time execution is necessary to maintain, e.g., the phase coherence during spin-echo mri or to ensure that a given steady state is established during balanced ssfp imaging. during sequence execution, the different instructions for the hardware are typically sent by the control program to digital signal processors (dsp) that control the individual units. thus, new instructions can be prepared by the control program, whereas the actual execution is controlled close to the individual hardware. to ensure that enough hardware instructions are available, many time steps are computed in advance during sequence execution. for real-time pulse sequences, this advance calculation needs to be minimized to be able to interactively change sequence parameters such as the slice position (controlled by the rf frequency) or orientation (controlled by the gradient rotation matrix). in real-time sequences, the information about the current imaging parameters is thus retrieved not only once at the beginning of the scan, but continuously during the whole imaging experiment. the reconstruction of the data arriving at the adcs is performed by the image-reconstruction computer. 
to estimate the amount of data this computer needs to process, the following estimate can be used: during high-speed data acquisition, about 256 raw data points (i.e., 256 × 16 bytes) arrive per imaging coil at time intervals of tr = 2 ms, so that with 10 rx coils a data rate of 20 mb/s results. these incoming data need to be rearranged, corrected, fourier transformed, combined, and geometrically corrected before the final image is sent to the host computer. today, multiprocessor cpus are used to execute some of these tasks in parallel. in particular, the image reconstruction for multiple coils lends itself naturally to parallelization, since each of the coils is independent of the others. additionally, some manufacturers are including simple post-processing steps in the standard image reconstruction. since the reconstruction computer does not provide a direct user interface, these reconstruction steps need to be designed in such a way that no user interaction is necessary. this is the case for the calculation of activation maps in fmri, for mip calculations under standard views in mr angiography, or for the calculation of the arrival time of a contrast agent bolus in perfusion studies. at the end of the image reconstruction, the image data are transferred to the host computer via the internal computer network. special mr imaging techniques require additional mr components that are not necessarily available at every mr scanner. these components often monitor certain physiologic signals such as the electrical activity of the heart (electrocardiogram, ecg) or breathing motion (fig. 2.5.10). typically, the measured physiologic signals are not used to assess the health status of the patient but to synchronize the image acquisition with the organ motion, since heart and breathing motion can cause significant artifacts during abdominal imaging. synchronization of the image acquisition is performed either with prospective or retrospective gating.
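the data-rate estimate can be reproduced directly from the numbers in the text (with decimal megabytes the exact value is 20.48 mb/s, quoted above as roughly 20 mb/s):

```python
# 256 complex samples × 16 bytes per sample, per coil, every tr = 2 ms,
# with 10 receive coils
samples, bytes_per_sample, n_coils = 256, 16, 10
tr_s = 2e-3
rate_bytes_per_s = samples * bytes_per_sample * n_coils / tr_s
rate_mb_per_s = rate_bytes_per_s / 1e6   # about 20 mb/s
```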
with prospective gating (or triggering), the imaging is started with the arrival of a certain physiologic signal (e.g., the r wave in the ecg). therefore, the physiologic signal is post-processed (e.g., thresholding and low-pass filtering) to create a trigger signal when the physiologic condition is present. with retrospective gating, the measurement is not interrupted, but data are acquired continuously, and for each measured data set the physiologic state is stored with the data (e.g., the time elapsed since the last r wave). during image reconstruction, the measured data are sorted in such a way that images are formed from data with similar physiologic signals (e.g., diastolic measurements). the advantage of retrospective over prospective data acquisition is the continuous measurement without gaps, which could otherwise lead to artifacts in steady-state pulse sequences. the post-processing effort for retrospectively acquired data is higher, because the data need to be analyzed and sorted before image reconstruction. additionally, on average more data need to be acquired compared with prospective triggering to ensure that for each physiologic condition at least one data set is present (over-sampling). to measure the ecg in the mr system, mr-compatible electrodes made of silver-silver chloride (ag/agcl) are used. the measurement of the ecg in an mr system is difficult, because the switching of the gradients can induce voltages in the ecg cables that completely mask the ecg signal. this effect can be minimized if short and loopless ecg cables are utilized. short ecg cables are additionally advantageous, since long cables with a loose contact to the skin can cause patient burns induced by the interaction with the rf field during rf excitation (kugel et al. 2003).
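a minimal sketch of the retrospective sorting step, assuming r-wave timestamps are stored alongside each acquisition: the elapsed fraction of the current rr interval selects the cardiac phase bin. the function name and binning scheme are hypothetical.

```python
import bisect

def cardiac_phase_bin(t, r_wave_times, n_phases):
    """bin index for an acquisition at time t (retrospective gating sketch):
    fraction of the current rr interval elapsed since the last r wave."""
    i = bisect.bisect_right(r_wave_times, t) - 1
    rr = r_wave_times[i + 1] - r_wave_times[i]
    frac = (t - r_wave_times[i]) / rr
    return int(frac * n_phases)

# assumed r waves 1 s apart; an acquisition at t = 0.55 s falls in bin 5 of 10
r_waves = [0.0, 1.0, 2.0]
```

images are then reconstructed per bin, i.e., from all acquisitions sharing the same cardiac phase.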
to reduce this potential danger to a minimum, ecg systems have been developed that amplify the ecg signal close to the electrodes, and which transmit the ecg signal to the mr system either via optical cables (felblinger et al. 1994) or as an rf signal at a frequency different from the larmor frequency. with this technology, ecg signals can be acquired even during echo planar imaging, when gradients are permanently switched on and off (ives et al. 1993). it should be noted that the ecg signal in the mr system differs significantly from the signal outside the magnet. the electrically conducting blood flows at different velocities in the cardiac cycle. within the magnetic field, the blood flow induces velocity-dependent electric fields (hall effect) across the blood vessels, which in turn change the electric potentials measured at the ecg electrodes. typically, the t wave of the ecg is augmented, an effect that is more pronounced at higher field strengths (kangarlu and robitaille 2000). for this reason, the ecg acquired in the mr system should not be regarded as of diagnostic quality. pulse oximeters measure the absorption of a red and an infrared light beam that is sent through perfused tissue (e.g., a finger). the absorption is proportional to the oxygen content, so that these devices can determine the partial oxygen pressure (po2). additionally, the pulsation of the blood leads to a pulse-related variation of the transmitted light signal, which is used in mr systems to derive a pulse-related trigger signal (shellock et al. 1992). since the pulse wave arrives at the periphery with a significant delay after the onset of systole, it is difficult to use the po2 signal for triggering in systolic mr imaging. pulse oximeters consist solely of non-magnetic and non-conducting optical elements, so that they are not susceptible to any interference with the gradient or rf activity.
fig. 2.5.10 whole-body imaging with array coils covering the patient from head to toe (exelart vantage™, toshiba). since not all coils are in the imaging volume of the mr system at the same time, a lower number of receiver channels (here: 32) is sufficient for signal reception.
to detect breathing motion, several mechanical devices such as breathing belts or cushions have been introduced. essentially, all these systems are air filled and change their internal pressure as a function of the breathing cycle when they are attached to the thorax of the patient. the pressure is continuously monitored and is used as an indicator of the breathing status. as with the pulse oximeters, these systems are also free of any electrically conducting elements, so that no rf heating is expected. however, in clinical practice, breathing triggering can pose a problem in long-lasting acquisitions, since patients start to relax over time, and the initial breathing pattern is not reproduced. an alternative approach to the measurement of the breathing cycle is offered by the mr itself: if a single image line is excited in the head-foot direction through the thorax (using, e.g., a 90° and a 180° slice that intersect along the desired line), then the signal of this line has high contrast at the liver-lung interface. this diaphragm position can be detected automatically and can be used to extract the relative position in the breathing cycle. this technique is called a navigator echo (ehman and felmlee 1989), since an additional echo for navigation needs to be inserted into the pulse sequence. similar approaches using low-resolution two- or three-dimensional imaging can be used to correct for patient motion in long-lasting image acquisitions such as fmri (welch et al. 2002). here, the change in position is determined and used to realign the imaging slices (prospective motion correction).
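the automatic diaphragm detection from a navigator profile can be sketched as a steepest-edge search along the head-foot direction; the profile below is synthetic, with an assumed low lung signal followed by a bright liver signal.

```python
import numpy as np

def diaphragm_index(profile):
    """navigator-echo sketch: locate the lung-liver interface as the
    point of steepest signal increase along the head-foot profile."""
    return int(np.argmax(np.diff(np.asarray(profile, dtype=float))))

# synthetic profile: dark lung for 60 samples, then bright liver
profile = np.concatenate([np.full(60, 0.1), np.full(40, 1.0)])
edge = diaphragm_index(profile)   # step between samples 59 and 60
```

tracking this index over repeated navigator acquisitions yields the relative position in the breathing cycle.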
for neurofunctional studies, electroencephalogram (eeg) systems have been developed that can be operated in the mr tomograph (muri et al. 1998). compared with the ecg, the voltages induced during brain activity are about 100 times smaller in eeg recordings, which poses a significant detection problem (goldman et al. 2000; sijbers et al. 2000). blood pulsation, patient motion, as well as induced voltages during gradient and rf activity can cause spurious signals in the eeg leads, which obscure the true eeg signal. to remove the imaging-related artifacts, dynamic filtering can be used, which removes all signal contributions associated with the basic frequencies of the mr system. a large variety of mr systems with different magnet types, coil configurations, and gradient sets is currently available for diagnostic and interventional mr imaging. to choose from these systems, the desired imaging applications as well as economic factors need to be considered: a small hospital with few mr patients might want to use a low-field permanent magnet system with low maintenance cost, whereas a university hospital with a diverse patient clientele and high patient throughput is better served by a high-field mr system with state-of-the-art gradient systems.
(figure caption) physiologic monitoring and triggering units. the three electrodes of the ecg system as well as the tube of the breathing sensor are connected with a transceiver that transmits both signals to the patient monitoring unit of the mr system. the optical pulse sensor is attached to the finger, and the signals are guided via optical fibers to the detection unit. for increased patient safety, the ecg system must be used together with a holder system (not shown here), which provides additional distance between the ecg leads and the patient body.
during the pioneer period of mr imaging, expectations were that the high inherent contrast in mr imaging would make the use of contrast agents superfluous. however, increasing use of the modality in the clinical setting has revealed that a number of diagnostic questions require the application of a contrast agent. similar to other imaging modalities, the use of contrast agents in mr imaging aims at increasing sensitivity and specificity and, thereby, the diagnostic accuracy. the main contrast parameters in mr imaging are proton density, relaxation times, and magnetic susceptibility (the ability of a material or substance to become magnetized by an external magnetic field). mr imaging contrast agents focus upon relaxation time and susceptibility changes. most of them are either para- or superparamagnetic. the most efficient elements for use as mr imaging contrast agents are gadolinium (gd), manganese (mn), dysprosium (dy), and iron (fe). the magnetic field produced by an electron is much stronger than that produced by a proton. however, in most substances the electrons are paired, resulting in a weak net magnetic field. gd with its seven unpaired electrons possesses the highest ability to alter the relaxation time of adjacent protons (relaxivity). for mr contrast agents, a differentiation between positive and negative agents has to be made. the paramagnetic contrast agents gd and mn have a similar effect on t1 and t2 and are classified as positive agents. since the t1 of tissues is much higher than the t2, the predominant effect of these contrast agents at low concentrations is that of t1 shortening. thus, tissues that take up gd- or mn-based agents become bright in t1-weighted sequences. on the other hand, negative contrast agents influence signal intensity by shortening t2 and t2*. superparamagnetic agents belong to this group and produce local inhomogeneities of the magnetic field.
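the t1 shortening by a positive agent follows the standard relaxivity relation 1/t1 = 1/t1_0 + r1 · c, which is implied but not written out in the text; the tissue t1, relaxivity, and concentration below are assumed, illustrative values.

```python
def t1_with_agent(t1_0_s, r1_per_mm_s, conc_mm):
    """shortened t1 from the relaxivity relation 1/t1 = 1/t1_0 + r1 * c
    (t1_0 in s, r1 in 1/(mM·s), c in mM)."""
    return 1.0 / (1.0 / t1_0_s + r1_per_mm_s * conc_mm)

# assumed example: tissue t1 = 1.0 s, r1 = 4 /(mM·s), 0.25 mM gd chelate
t1 = t1_with_agent(1.0, 4.0, 0.25)   # t1 halves to 0.5 s
```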
t2 is reduced due to the diffusion of water through these field gradients. magnetite, fe3o4, is such a superparamagnetic particle. coated with inert material (e.g., dextrans, starch), it can be used for oral or intravenous applications. in addition to the classification into positive and negative agents, mr contrast agents can be differentiated according to their target tissue. the targeting of an agent is determined by the pharmaceutical profile of the substance. in the clinical environment, we currently differentiate three classes of agents:
• unspecific extracellular fluid space agents
• blood-pool and intravascular agents
• targeted and organ-specific agents
unspecific extracellular fluid space agents: low-molecular-weight paramagnetic contrast agents distribute into the intravascular and extracellular fluid space (ecf) of the body. their contrast effect is caused by the central metal ion. all approved ecf agents contain a gd ion, which contains seven unpaired electrons. because gd itself is toxic, the ion is bound in highly stable complexes; bound in this way, they form low-molecular-weight, water-soluble contrast agents. the different complexes and the physicochemical properties of all clinically used agents are listed in table 2.6.1. the agents are not metabolized and are excreted in unchanged form via the kidneys. gadopentetate dimeglumine (magnevist, bayer schering pharma, berlin, germany) and gadoterate meglumine (dotarem, laboratoires guerbet, aulnay-sous-bois, france) are ionic high-osmolality agents, whereas gadodiamide (omniscan, ge healthcare, buckinghamshire, uk) and gadoteridol (prohance, bracco imaging, milan, italy) are non-ionic low-osmolality agents. due to the low total amount of contrast agent usually applied in mr imaging, no difference in tolerance between the two classes could be demonstrated (oudkerk et al. 1995; shellock 1999).
an estimated 50% of an ecf agent (e.g., gadopentetate dimeglumine, size 590 da) is cleared from the vascular space into the extravascular compartment during the initial passage through the capillaries. two agents in the group of ecf agents have to be mentioned separately. gadobenate dimeglumine (multihance, bracco imaging) is an agent with a weak protein binding (about 10%) in human plasma. the bound fraction of the agent has a higher relaxivity than the unbound fraction. overall, the relaxivity of gadobenate dimeglumine is 50% higher than that of gadopentetate dimeglumine at 1.5 t/37°c in plasma. the effect of the higher relaxivity is greatest at low field strengths (table 2.6.1). the concentration of the contrast agent is 0.5 mol/l. gadobenate dimeglumine was primarily developed as a liver-specific mr imaging agent and is currently approved both for the detection of focal liver lesions and for mr angiography. most of the injected dose of gadobenate is excreted unchanged in urine within 24 h, although a fraction corresponding to 0.6-4.0% of the injected dose is eliminated through the bile and recovered in the feces (spinazzi et al. 1999). the second particular ecf agent, gadobutrol (gadovist, bayer schering pharma), is approved in a higher concentration (1 m) than all other available mr imaging contrast agents. in addition, gadobutrol has a higher relaxivity than most extracellular 0.5 m contrast agents on the market (table 2.6.1). the higher concentration has proven particularly useful for mr perfusion studies and mr angiography (tombach et al. 2003). blood-pool and intravascular agents. these agents stay within the intravascular space with no or only slow physiologic extravasation. they can be used for first-pass imaging and delayed blood-pool phase imaging. the prolonged imaging window allows more favorable image resolution and signal-to-noise ratio. the absence of early extravasation also improves the contrast-to-noise ratio. 
the pharmacokinetic properties of blood-pool agents are expected to be well suited to mr angiography and coronary angiography, perfusion imaging, and permeability imaging (detection of ischemia and tumor grading). currently, three types of blood-pool agents are being developed:
1. gd compounds with a strong but reversible affinity to human proteins such as albumin
2. macromolecular-bound gd complexes
3. ultrasmall or very small superparamagnetic particles of iron oxide (uspio and vsop)

there are important differences between the three groups regarding pharmacokinetics in the body, i.e., distribution and elimination. gd compounds with a strong but reversible affinity to human proteins such as albumin exhibit a prolonged plasma elimination half-life and increased relaxivity. elimination occurs by glomerular filtration of the unbound fraction. because the bound and unbound fractions are in equilibrium in the presence of albumin, excreted molecules are immediately replaced by dissociation of agent from the agent-albumin complex. two agents with affinity to albumin were developed and tested in clinical trials: gadofosveset (vasovist, bayer schering pharma), 80-96% bound in human plasma (lauffer et al. 1998), and gadocoletic acid (b22956/1, bracco imaging), with a protein binding of approximately 95% in humans (cavagna et al. 2002; la noce et al. 2002). currently, gadofosveset is the only blood-pool agent approved (for mra in europe). all other contrast agents with blood-pool characteristics are in clinical or earlier-phase development. gadofosveset is a stable gd diethylenetriaminepentaacetic acid (gd-dtpa) chelate substituted with a diphenylcyclohexylphosphate group. the mean plasma concentrations at 1, 4, and 24 h after bolus injection of a 0.03 mmol/kg body weight dose were 56%, 41%, and 14%, respectively, of the concentration reached 3 min after injection. 
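the equilibrium-limited elimination described above, in which only the unbound fraction undergoes glomerular filtration, can be illustrated with a deliberately simplified toy model. the rate constant and free fraction below are hypothetical illustration values, not pharmacokinetic data for gadofosveset:

```python
import math

# Toy elimination model (assumption for illustration, not pharmacokinetic data):
# only the unbound fraction of a protein-bound agent is available for
# glomerular filtration, so the effective elimination rate scales with it.

def half_life_h(k_filtration_per_h: float, free_fraction: float) -> float:
    """Effective plasma half-life (h) when only the free fraction is filtered."""
    return math.log(2) / (k_filtration_per_h * free_fraction)

# hypothetical filtration rate constant of 0.9 /h for a fully unbound ECF agent:
t_half_ecf = half_life_h(0.9, 1.0)    # unbound agent
t_half_bp  = half_life_h(0.9, 0.10)   # ~90% protein-bound blood-pool agent
# the bound agent's effective half-life is 10x longer in this toy model
```

the point of the sketch is only the scaling: the stronger the (reversible) protein binding, the smaller the filterable fraction at any moment and the longer the effective intravascular residence.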
the mean half-life of the distribution phase (t1/2α) was 0.48 ± 0.11 h. relative to the reported clearance values of the non-protein-bound mri contrast agents, the clearance of gadofosveset is markedly slower. gadofosveset is provided in a concentration of 0.25 mol/l, and a dose of 0.03 mmol/kg body weight is recommended for mra (perrault et al. 2003). as a further benefit, gd compounds with a strong but reversible affinity to human proteins provide a long-lasting blood-pool effect even when small amounts of the substance leak out of the vasculature. the blood-pool effect persists because albumin remains highly concentrated in plasma while showing a two- to three-times lower concentration in the extravascular space. thus, even when vasovist leaks from the vasculature, the receptor-induced magnetization enhancement (rime) effect within the vascular spaces ensures that the signal enhancement in the blood dominates the mri contrast. in rabbits, enhancement with gadofosveset persisted at relatively constant levels from 2 min up to 1 h, whereas the enhancement of ecf agents had virtually disappeared within 60 min (lauffer et al. 1998). the second blood-pool agent with binding to human serum albumin, gadocoletic acid, has been tested in coronary mra (paetsch et al. 2006). compared with gadofosveset, the slightly higher percentage of bound agent may result in a lower percentage of extravasation and an even slower elimination. macromolecular gd-based blood-pool agents are large molecules with sizes between 30 and 90 kda. they are eliminated by glomerular filtration. due to their large size, they do not extravasate into the interstitial space. the two agents used in clinical trials were gadomer (schering) and p792 (laboratoires guerbet, aulnay-sous-bois). gadomer contains multiple gd molecules (24 gd atoms, mr 35,000). 
p792 is a monodisperse monogadolinated macromolecular compound with mr 6.47 kda, based on a gadoterate meglumine core (port et al. 2001). four hydrophilic arms account for its intravascular properties. in a preclinical study, p792 allowed acquisition of high-quality mr angiograms. image quality in the postbolus phase images was rated as superior for p792 compared with ecf agents. the intravascular properties lead to an excellent signal in the vasculature with limited background enhancement (ruehm et al. 2002). the first clinical use of uspio was in parenchymal organ imaging, exploiting the incorporation of uspio/spio into cells of the reticuloendothelial system of the liver, bone marrow, spleen, or lymphatic tissue. these particles produce a strong augmentation of the local magnetic field. predominant shortening of t2 and t2* produces a loss of signal intensity on mr images. the agents that have been developed as blood-pool agents provide different characteristics, with a predominating t1 effect and a prolonged intravascular residence time due to the small size of the particles. nc 100150 (clariscan, ge healthcare) was the first uspio tested for mra (taylor et al. 1999; weishaupt et al. 2000). it is a strictly intravascular agent with an oxidized starch coating and has an approximate diameter of 20 nm. the half-life is 45-100 min, and it has been shown to reduce blood t1 to below 100 ms (wagenseil 1999). another iron oxide particle mr contrast agent in clinical development is vsop-c184 (ferropharm, teltow, germany). it is classified as a vsop with a core diameter of 4 nm and a total diameter of 8.6 nm. vsop-c184 is coated with citrate. its relaxivities in water at 0.94 t are r1 = 20.1 and r2 = 37.1 l/(mmol·s). the plasma elimination half-life at 0.045 mmol fe/kg was 21.3 ± 5.5 min in rats and 36.1 ± 4.2 min in pigs, resulting in a plasma t1 of <100 ms for 30 min in pigs (wagner et al. 2002). 
qualitative evaluation of image quality, contrast, and delineation of vessels showed that the results obtained with vsop-c184 at doses of 0.025 and 0.035 mmol fe/kg were similar to those of gadopentetate dimeglumine at 0.1 and 0.2 mmol gd/kg. vsop-c184 is suitable for first-pass mra and thus, in addition to its blood-pool characteristics, allows for selective visualization of the arteries without interfering venous signal (schorr et al. 2004). another uspio is sh u 555 c (supravist, bayer schering pharma), an optimized formulation of the carboxydextran-coated ferucarbotran (resovist, bayer schering pharma; formerly sh u 555 a) with respect to t1-weighted mr imaging. sh u 555 c has a mean core particle size of about 3-5 nm and a mean hydrodynamic diameter of about 20 nm in an aqueous environment. relaxivity measurements yielded an r1 of 22 l/(mmol·s) and an r2 of 45 l/(mmol·s) at 40°c and 20 mhz in water (reimer et al. 2004). the efficacy of mri contrast agents is determined not just by their pharmacokinetic properties (distribution and time dependence of their concentration in the area of interest) but also by their magnetic properties, described by their t1 and t2 relaxivities. for all commercially available mri contrast agents, relaxivities are published and listed in the respective package inserts. however, the field strength most commonly used for relaxation measurements (0.47 t) differs from the field strengths most frequently used in clinical mri instruments (1-3 t). in a well-conducted, standardized phantom study, rohrer et al. evaluated the t1 and t2 relaxivities of all currently commercially available mr contrast agents in water and in blood plasma at 0.47, 1.5, 3, and 4.7 t, as well as in whole blood at 1.5 t (rohrer et al. 2005). they quantified significant dependencies of relaxivities on field strength and solvent (table 2.6.2). 
protein binding leads to both increased field-strength and solvent dependencies and hence to significantly altered t1 relaxivity values at higher magnetic field strengths. mr contrast agents have been in clinical use since 1988, and wide experience has been reported. severe or acute reactions after a single intravenous injection of gd-based ecf agents are rare. in two large multiple-year surveys including, respectively, 21,000 and more than 9,000 examinations, an incidence of acute adverse reactions between 0.17 and 0.48% was reported (li et al. 2006; murphy et al. 1996). the severity of these adverse reactions was classified as mild (75-96%), moderate (2-20%), and severe (2-5%). typical nonallergic adverse reactions include nausea, headache, taste perversion, or vomiting, and typical reactions resembling allergy include hives, diffuse erythema, skin irritation, or respiratory symptoms. the incidence of severe anaphylactoid reactions is very low, reported to be between 0.0003 and 0.01% in the literature (de ridder et al. 2001; li et al. 2006; murphy et al. 1996). the reported life-threatening reactions resembling allergy were severe chest tightness, respiratory distress, and periorbital edema. known risk factors for the development of adverse reactions are prior adverse reactions to iodinated contrast media, prior reactions to a gd-based contrast agent, asthma, and a history of drug/food allergy. concerning liver-specific contrast media, a higher percentage of associated adverse reactions was reported for mangafodipir trisodium (7-17%) and ferumoxides (15%) (runge 2000). the recently approved bolus-injectable agent ferucarbotran (resovist, bayer schering pharma) has shown a better tolerance profile during clinical development than ferumoxides. even bolus injections caused no cardiovascular side effects, lumbar back pain, or clinically relevant laboratory changes (reimer and balzer 2003). 
for the two approved gd-based liver-specific agents, gadobenate dimeglumine and gadoxetic acid, far fewer patients have been examined to date. according to the results of the clinical trials conducted for the approval of both agents, they are comparable to gd-based ecf agents in terms of safety (bluemke et al. 2005; halavaara et al. 2006; huppertz et al. 2004). post-marketing surveillance of gadobenate dimeglumine covering approximately 100,000 doses revealed an overall adverse event incidence of <0.03%, with serious adverse events reported for <0.005% of patients (kirchin et al. 2001). in the class of blood-pool agents, only gadofosveset (bayer schering pharma) has recently been approved in some european countries. the tolerance of the agent must therefore be estimated from clinical trial data. based on these data, gadofosveset is well tolerated, and the incidence and profile of undesired side effects are very similar to those of ecf agents (goyen et al. 2005; petersein et al. 2000; rapp et al. 2005). magnetic resonance contrast agents, particularly the gd-based agents, are extremely safe (niendorf et al. 1994) and, at the usually applied diagnostic dosage, lack the nephrotoxicity associated with iodinated contrast media. nevertheless, health care personnel should be aware of the (extremely uncommon) potential for severe anaphylactoid reactions in association with the use of mr contrast media and be prepared should complications arise. nephrogenic systemic fibrosis (nsf) is a rare disease occurring in renal insufficiency that has only been described since 1997. in 2006, a first report about a potential relationship with intravenous administration of the gd-based mr contrast medium gadodiamide was published (us food and drug administration 2007). nsf appears to occur in patients with kidney failure, particularly those who also have high levels of acid in body fluids (metabolic acidosis), a condition common in patients with kidney failure. 
the disease is characterized by skin changes that mimic progressive systemic sclerosis, with a predilection for peripheral extremity involvement that can extend to the torso. however, unlike scleroderma, nsf spares the face and lacks the serologic markers of scleroderma. nsf may also result in fibrosis, or scarring, of body organs. diagnosis of nsf is made by examining a skin sample under a microscope. the risk of nsf in patients with advanced renal insufficiency does not appear to be the same for all gd-based contrast agents, because their distinct physicochemical properties affect their stabilities and thus the release of free gd ions (bundesinstitut für arzneimittel und medizinprodukte [federal institute for drugs and medical devices] 2007). some gd-based contrast media are more likely than others to release free gd3+ through a process called transmetallation with endogenous ions from the body (thomsen et al. 2006). these agents are formulated with the largest amount of excess chelate. gadodiamide and gadoversetamide differ from other gd-based contrast media because of this excess of chelate and are more likely to release free gd3+ than other agents. cyclic molecules offer better protection and binding of gd3+ compared with linear molecules (thomsen et al. 2006). the linear, non-ionic chelates gadodiamide and gadoversetamide seem to be associated with the highest risk of nsf (broome et al. 2007; sadowski et al. 2007). the recommendations to prevent development of nsf are nonspecific (us food and drug administration 2007):
• gd-containing contrast agents, especially at high doses, should be used only if clearly necessary in patients with advanced kidney failure (those currently requiring dialysis or with a glomerular filtration rate (gfr) of 15 ml/min or less).
• it may be prudent to institute prompt dialysis in patients with advanced kidney dysfunction who receive a gd contrast mra. 
however, there are no data to determine the utility of dialysis to prevent or treat nsf in patients with decreased kidney function. the use of contrast agents in neuroimaging is an accepted standard for the assessment of pathological processes, utilizing the extravasation of contrast agents through a compromised blood-brain or blood-spinal cord barrier. compared with contrast-enhanced ct, mr imaging with gd-based contrast agents is far more sensitive and depicts even subtle disruptions of the blood-brain barrier caused by a variety of noxious processes such as, for example, neoplastic or inflammatory disease and ischemic stress. moreover, mr contrast agents are increasingly used to evaluate brain perfusion in clinical practice for a variety of applications, including tumor characterization, stroke, and dementia. the contrast-enhanced brain perfusion mr examination is based on a magnetic susceptibility contrast phenomenon that occurs owing to the t2 and t2* relaxation effects of a rapidly bolus-injected intravenous contrast agent. the contrast agents in current use are the standard ecf gd chelates (table 2.6.1). these extracellular agents show no appreciable differences in their enhancement properties and biologic behavior (akeson et al. 1995; brugieres et al. 1994; grossman et al. 2000; oudkerk et al. 1995; valk et al. 1993; yuh et al. 1991). they equilibrate rapidly between the intra- and extracellular spaces of soft tissues and enter central nervous system lesions only at sites of a damaged blood-brain barrier. the standard dose for mr imaging of the central nervous system is 0.1 mmol/kg body weight; however, it has been shown that a higher dose of gd chelate-based contrast agents may help reveal more subtle disease states of the blood-brain barrier, regardless of whether they are caused by tumors or by inflammatory lesions (bastianello et al. 1998; haustein et al. 2003; yuh et al. 1994). 
this raises the question of to what extent gd contrast agents with a higher concentration (e.g., gadobutrol) or with a higher relaxivity (e.g., gadobenate dimeglumine) help to increase the sensitivity and accuracy of lesion detection as compared with standard gd chelates. for gadobutrol, no comparative studies against standard gd chelates exist to date; however, based on smaller cohorts it can be assumed that the higher amount of gd achievable with the higher gd concentration is of value for lesion detection and characterization (vogl et al. 1995). moreover, animal experiments showed that the amount of gd in gliomas was higher after injection of gadobutrol than after gadopentetate dimeglumine, although identical doses of gd per kilogram body weight were injected for both contrast agents (le duc et al. 2004). gadobenate dimeglumine provided significantly superior enhancement of intraaxial enhancing primary and secondary brain tumors at a dosage of 0.1 mmol/kg body weight as compared with the same dosage of gadopentetate dimeglumine (knopp et al. 2004). similar results were also obtained in comparisons of gadobenate dimeglumine with other contrast agents, as well as in special populations such as pediatric patients (colosimo et al. 2001, 2004, 2005). the increased contrast enhancement also resulted in an increased number of detected brain metastases. dynamic susceptibility-weighted contrast-enhanced (dsc) mr imaging is increasingly used for the assessment of cerebral perfusion in many different clinical settings, such as ischemic stroke (parsons et al. 2001), neurovascular diseases (doerfler et al. 2001), brain tumors (essig et al. 2004), and neurodegenerative disorders (bozzao et al. 2001). unlike mr angiography, which depicts the blood flow within larger vessels, perfusion-weighted mr techniques are sensitive to perfusion at the level of the capillaries. 
the technique is based on the intravenous injection of a t2*-relaxing contrast agent and subsequent bolus tracking using a fast susceptibility-weighted imaging sequence. after converting voxel signal into concentration values, parametric maps of regional cerebral blood volume (rcbv) and blood flow (rcbf) can be calculated by deconvolving the tissue concentration curves with the concentration curve of the feeding artery. the contrast agents used for dynamic susceptibility-weighted mr perfusion are usually standard gd chelates; however, the dosage of gd per kilogram of body weight as well as the value of higher-concentrated agents have been widely discussed. during the first pass of the gd chelate, the high intravascular concentration of gd causes the t2* effects, which can be measured by rapid imaging techniques. the length and the peak concentration of the bolus seem to influence the resulting measured signal, with a highly concentrated, small bolus of contrast agent being advantageous for mr brain perfusion imaging (essig et al. 2002; heiland et al. 2001). among the standard gd chelates, no notably different behavior of the available agents has been reported up to now. the recommended dose for dsc perfusion mri is in the range of 0.15-0.30 mmol/kg body weight, with most authors preferring 0.2 mmol/kg, because the volume of the bolus becomes too high when higher dosages are applied (bruening et al. 2000). therefore, the use of higher-concentrated contrast agents or agents with higher relaxivity is also of interest for cerebral perfusion mri. again, studies were able to demonstrate the value of 1 m gadobutrol and of gadobenate dimeglumine. tombach et al. (2003) showed that 1 m gadobutrol resulted in a significantly improved quality of the perfusion examination in comparison to 0.5 m gadobutrol at the same dosage of 0.3 mmol/kg body weight. 
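the signal-to-concentration conversion that underlies dsc perfusion maps can be sketched in a few lines. this is a minimal illustration with a synthetic bolus, no arterial input deconvolution and no recirculation correction, not a clinical implementation; the te, timing, and signal values are assumptions:

```python
import numpy as np

# Minimal DSC-MRI sketch (illustrative only): convert the transient signal
# drop during bolus passage into a concentration-time curve via
#   Delta R2* = -ln(S/S0) / TE,
# then integrate the curve as a simple proxy for relative CBV.

def dsc_concentration(signal: np.ndarray, s0: float, te_s: float) -> np.ndarray:
    """Concentration-time curve (arbitrary units, proportional to Delta R2*)."""
    return -np.log(signal / s0) / te_s

def relative_cbv(conc: np.ndarray, dt_s: float) -> float:
    """Relative cerebral blood volume: area under the concentration curve."""
    return float(np.sum(conc) * dt_s)

# synthetic bolus: baseline signal 100, transient drop to ~60 around t = 20 s
t = np.arange(0.0, 60.0, 1.5)                       # 1.5-s temporal resolution
signal = 100.0 - 40.0 * np.exp(-((t - 20.0) ** 2) / 30.0)
conc = dsc_concentration(signal, s0=100.0, te_s=0.030)
rcbv = relative_cbv(conc, dt_s=1.5)                 # larger area -> more blood volume
```

a sharper, more concentrated bolus (the motivation for 1 m gadobutrol above) gives a higher, narrower concentration peak, which is easier to separate from recirculation.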
the results were explained by the sharper, more concentrated bolus, which could be achieved due to the smaller injection volume. essig et al. directly compared 1 m gadobutrol and 0.5 m gadobenate dimeglumine at 1.5 t with a similar dosage of 0.1 mmol/kg body weight and found no significant differences between the two agents (essig 2006). the benefit of a double dose of 0.2 mmol/kg was observed only as a trend and was not considered to be of clinical relevance. similar results in a comparison between the two agents were recently obtained on a 3-t system as well (thilmann et al. 2005). for both agents, sufficiently high-quality perfusion examinations can be achieved with an acceptable injection volume, which is helpful for their clinical application in daily practice and can be considered superior to standard gd chelates (essig et al. 2004; thilmann et al. 2005; tombach et al. 2003). in a limited number of proof-of-concept studies, uspios were also used in neuroimaging (corot et al. 2004; manninger et al. 2005). the long blood-circulating time and the progressive macrophage uptake in inflammatory tissues of uspios are two properties of major importance for pathologic tissue characterization. in the human carotid artery, uspio accumulation in activated macrophages induced a focal drop in signal intensity compared with unenhanced mri. the uspio signal alterations observed in ischemic areas of stroke patients are probably related to the visualization of inflammatory macrophage recruitment into human brain infarction, since animal experiments in such models demonstrated the internalization of uspio into the macrophages localized in these areas. in brain tumors, uspio particles, which do not pass the disrupted blood-brain barrier at early times post injection, can be used to assess tumoral microvascular heterogeneity. 
twenty-four hours after injection, when the cellular phase of uspio takes place, uspio tumoral contrast enhancement was higher in high-grade than in low-grade tumors. several experimental studies and a pilot multiple sclerosis clinical trial in 10 patients have shown that uspio contrast agents can reveal the presence of inflammatory lesions related to multiple sclerosis. the enhancement with uspio does not completely overlap with the gd-chelate enhancement. during the last few years, magnetic resonance angiography (mra) has been established as a non-invasive alternative to conventional x-ray angiography in the diagnosis of arteriosclerotic and other vascular diseases (meaney et al. 1997; meaney 1999). with the exception of imaging intracerebral vessels (gibbs et al. 2005; ozsarlak et al. 2004), contrast-enhanced techniques have revealed superiority over non-contrast-enhanced techniques such as the time-of-flight (tof) or phase-contrast (pc) mra techniques (sharafuddin et al. 2002). the main advantages over unenhanced techniques are the ability to acquire larger volumes, allowing, e.g., demonstration of the carotid artery from its origin to the intracranial portion, shorter acquisition times, and reduced sensitivity to flow artifacts. contrast-enhanced mr angiography can be performed during the first pass of a contrast agent, preferably in breath-hold technique after rapid bolus injection, or during steady-state conditions after injection of vascular-specific blood-pool agents. most experience has been reported for first-pass mra after injection of ecf contrast agents. the demands on the agent are a strong effect on blood signal intensity after injection and the possibility of fast, compact bolus injection. the most commonly applied group of contrast agents are the 0.5 molar ecf agents. in recent years, two novel ecf agents with innovative properties were used for mra. 
the first one, the 0.5 m contrast agent gadobenate dimeglumine, offers a higher t1 relaxivity. in studies in which gadobenate dimeglumine was compared at equal dose with other gd-based mr contrast agents without relevant protein binding in plasma, gadobenate dimeglumine has consistently shown significantly better quantitative and qualitative performance (goyen and debatin 2003). even at lower doses compared with gadopentetate dimeglumine injected at 0.2 mmol/kg body weight, the greater relaxivity of gadobenate dimeglumine provides higher intravascular signal and signal-to-noise ratio (pediconi et al. 2003). thus, gadobenate dimeglumine can be considered to have a very favorable risk-benefit ratio for mra. the second one, gadobutrol, is available in 1 m concentration. in combination with a higher relaxivity compared with other ecf agents, the agent has shown in quantitative evaluations a significant increase in signal-to-noise and contrast-to-noise ratios in comparison to gadopentetate dimeglumine in pelvic mra and in whole-body mra (goyen et al. 2001, 2003). better delineation of arterial morphology was reported especially for small vessels, but no statistically significant difference in image quality could be seen. two different options for injection have been described: reduction of the injection rate by 50% compared with injection protocols using 0.5 m ecf agents (equimolar dosing), or reduction of the injection time by 50%. equimolar dosing mainly exploits the higher relaxivity potential of gadobutrol. in this case, the injection duration is identical to a corresponding protocol using a 0.5 m contrast agent, and a similar bolus geometry and contrast delivery in the roi are obtained (e.g., in a 70-kg patient, 7 ml of gadobutrol are injected at 1 ml/s compared with 14 ml of gadopentetate dimeglumine injected at 2 ml/s). hence, well-known protocols can be adopted with good results. the second option keeps the injection speed unchanged in comparison to the 0.5 m agent protocol, resulting in shortening of the initial bolus duration by a factor of two (fink et al. 2004). the philosophy is to use a very compact, high-relaxivity bolus and to fully exploit the potential of 1 m gadobutrol. this approach is particularly recommended in conjunction with very fast acquisition techniques, e.g., time-resolved (often referred to as 4d) mra. although the effective bolus geometry in the respective roi is broadened, dependent on individual physiology and mainly influenced by the lung passage, this approach places higher demands on precise bolus timing and is recommended for users with advanced mra experience and ultrafast imaging equipment. in addition, a further approach was reported: reducing the amount of contrast agent by a factor of two in abdominal mra (vosshenrich et al. 2003). the injection speed was kept constant in comparison to a 0.5 m agent protocol, resulting in a very short total bolus duration. vosshenrich et al. used an amount of 0.1 mmol/kg body weight. they compared the examinations qualitatively and quantitatively to exams acquired after injection of gadopentetate dimeglumine (0.2 mmol/kg) and concluded that for mra of the hepatic arteries and the portal veins, gadobutrol can be used at half the dosage recommended for a standard 0.5 m contrast agent.

fig. 2.6.1 whole-body mra of a healthy volunteer after bolus injection of gadofosveset (0.025 mmol/kg body weight). first-pass and steady-state acquisitions acquired immediately (a) and 10 min (b) after injection of the contrast agent. t1-weighted 3d gradient recalled echo sequence (tr/te/α 3.1/1.1/25, spatial resolution 1.6 × 1 × 1.5 mm). first-pass imaging depicts exclusively the arteries; steady-state imaging shows enhancement of both arteries and veins. due to the higher concentration of the contrast agent during first-pass imaging, the absolute level of enhancement is higher (a). 
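the equimolar-dosing arithmetic quoted above (7 ml of 1 m gadobutrol at 1 ml/s versus 14 ml of a 0.5 m agent at 2 ml/s for a 70-kg patient at 0.1 mmol/kg) can be checked in a few lines:

```python
# Worked dosing arithmetic for the injection options discussed in the text
# (0.1 mmol/kg dose, 70-kg patient, 1 M vs. 0.5 M agents).

def injection_volume_ml(dose_mmol_per_kg: float, weight_kg: float,
                        conc_mol_per_l: float) -> float:
    """Volume to inject: total gd dose divided by agent concentration.
    Since 1 mol/L == 1 mmol/mL, volume in mL = dose in mmol / (mol/L)."""
    return dose_mmol_per_kg * weight_kg / conc_mol_per_l

vol_1m  = injection_volume_ml(0.1, 70.0, 1.0)   # 7 mL of 1 M gadobutrol
vol_05m = injection_volume_ml(0.1, 70.0, 0.5)   # 14 mL of a 0.5 M agent

# equimolar dosing: keep the injection duration identical, so the
# 1 M injection rate is halved relative to the 0.5 M protocol
duration_s = vol_05m / 2.0    # 14 mL at 2 mL/s -> 7 s
rate_1m = vol_1m / duration_s # 7 mL over 7 s -> 1 mL/s, matching the text
```

the alternative option in the text, keeping the rate constant instead, halves the bolus duration rather than the rate; both deliver the same total gd dose.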
the concept of contrast-enhanced mra based on ecf agents has some limitations. the primary problem is the rapid extravasation of the contrast agents, limiting acquisition time and therefore spatial resolution as well as contrast-to-noise ratio. to improve spatial resolution, it is necessary to prolong imaging time. intravascular contrast agents are able to overcome the restrictions on spatial resolution. the longer acquisition period can be used to decrease voxel size, to repeat measurements, or to trigger acquisitions by ecg and/or respiratory gating. the second limitation of currently used mra is the quantification of arterial stenoses, which still seems inferior to that of invasive catheter angiography. the cause is the inferior spatial resolution of mra using ecf agents, in which the increase of spatial resolution is limited by the acquisition time during the first pass (arterial phase). with intravascular contrast agents, longer data acquisition during the distribution phase is possible. the spatial resolution can be increased to a level similar to that of catheter angiography and, therefore, the accuracy of stenosis quantification is significantly increased. optimally, a blood-pool agent permits a long acquisition window, including first-pass mra, as well as the possibility of separate imaging of arteries and veins by timing the injection and data acquisition. gadofosveset, the first mr blood-pool agent approved for clinical use, permits both a high-resolution approach with a long acquisition window and first-pass contrast-enhanced mra (fig. 2.6.1). the approval was based on the data of clinical trials in all different types of arterial vessels, including high-flow vessels with a large diameter (e.g., the pelvic arteries), low-flow vessels (e.g., foot arteries), and high-flow vessels with a small diameter (e.g., the renal arteries). ecf contrast agents are widely used in mr imaging of soft-tissue lesions. 
the enhancement in either inflammatory or neoplastic lesions makes their use indispensable for the detection and characterization of soft-tissue lesions. relevant anatomical sites that are subject to mr imaging in daily clinical practice are the female breast and the soft tissues of the musculoskeletal system. for the female breast, mr imaging with extracellular contrast agents (mr mammography) is nowadays widely used for the detection and characterization of unclear breast tumors (morris et al. 2002). the histopathological basis of the different enhancement patterns in breast masses is not yet fully understood; however, it is well known that angiogenesis, with the formation of new vessels, is an important aspect (knopp et al. 1999). the amount of angiogenesis and contrast agent extravasation is considered different for several benign and malignant lesions; however, the visible phenomenon of differential enhancement is usually too subtle to be analyzed visually alone. the discrete changes of contrast agent enhancement are usually evaluated using a semiquantitative approach with region-of-interest measurements at different time points (kuhl et al. 2005). the enhancement kinetics thereby obtained, represented by the time-signal intensity curves, differ significantly between benign and malignant enhancing lesions and are used as an aid in differential diagnosis. usually four to six measurements with an interval of 1-2 min are applied in daily clinical practice (kuhl et al. 1999; pediconi et al. 2005). a recently published study showed that the temporal resolution for the assessment of time-signal intensity curves is not as critical as the spatial resolution; therefore, recommendations for dynamic postcontrast mr imaging tend toward a 2-min interval with high spatial resolution (e.g., a full 512 imaging matrix) (kuhl et al. 2005). 
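the semiquantitative time-signal intensity curve analysis described above can be sketched as a simple classifier. the persistent/plateau/washout labels follow the commonly used curve-type scheme, but the ±10% delayed-phase thresholds below are illustrative assumptions, not validated diagnostic criteria:

```python
# Simplified sketch of time-signal intensity curve typing for breast DCE-MRI.
# The +/-10% thresholds are illustrative assumptions, not validated criteria.

def curve_type(baseline: float, early: float, late: float) -> str:
    """Classify enhancement kinetics from ROI signal at baseline,
    an early post-contrast time point (~2 min), and the last time point."""
    late_change = (late - early) / early      # relative delayed-phase change
    if late_change > 0.10:
        return "type I (persistent)"          # continued rise, favors benign
    if late_change < -0.10:
        return "type III (washout)"           # suspicious for malignancy
    return "type II (plateau)"                # intermediate

print(curve_type(100, 220, 260))  # signal keeps rising after the early phase
print(curve_type(100, 240, 180))  # signal washes out after the early phase
```

with four to six time points, as in the practice described above, one would typically use the first post-contrast measurement as `early` and the final one as `late`.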
a more detailed evaluation of perfusion parameters needs, however, a very high temporal resolution in the range of 3-5 s. first results for the differentiation of unclear breast tumors in an investigational setting are very promising; however, due to the high temporal resolution only single slices can be measured, which is not feasible for daily practice (brix et al. 2004). usually standard gd chelates at a dose of 0.1 mmol/kg body weight are used for contrast-enhanced mr mammography. first results indicate that the use of the high-relaxivity mr contrast agent gadobenate dimeglumine at the same dosage can achieve superior detection and identification of malignant breast lesions at mr imaging as compared with gadopentetate dimeglumine. however, up to now gadobenate dimeglumine is not officially approved for this indication. there are also first approaches to perform mr mammography with blood-pool contrast agents. a major limitation of ecf agents is that they extravasate nonselectively from the vasculature into the interstitium of both normal and pathological tissues in the breast. it is hypothesized that the degree of microvascular endothelial disruption inherent to cancer vessels, with the resulting extravasation of macromolecular contrast agents, may predict tumor aggressiveness and tumor grade more accurately than standard gd chelates (daldrup-link and brasch 2003; daldrup-link et al. 2003). first results with uspio have shown an improved characterization of unclear breast tumors at the expense of tumor enhancement, which is important for tumor detection. an interesting approach is also the use of small molecular gd chelates that bind reversibly to plasma proteins, such as gadofosveset. this might allow for improved sensitivity and specificity due to the simultaneous presence of small and large molecules (daldrup-link et al. 2003).
the assessment of microvascular changes in experimental breast tumors does not seem to be reliably depicted with these agents, in contrast to the macromolecular albumin-gd-(dtpa)30 (daldrup-link et al. 2003; turetscheck et al. 2001). however, clinical experience with breast tumors does not yet exist. although potential diagnostic applications have been investigated with albumin-gd-dtpa of various sizes, this contrast agent is considered a poor candidate for development as a clinical drug due to slow and incomplete elimination and a potentially immunologic toxicity (daldrup-link and brasch 2003). for soft-tissue or bone lesions in the musculoskeletal system, the application of extracellular gd contrast agents has become a clinical standard for characterization, staging of the local extent, biopsy planning, and therapy monitoring (verstraete and lang 2000). the basic principle of contrast-enhanced imaging is, as described above, the distribution of the gd chelates in the intravascular space, showing enhancement in tumors with dense vascularity and neoangiogenesis as well as distribution into the extracellular space. for these clinical standard applications, there seem to be no relevant differences in the diagnostic performance between the different extracellular gd chelates, similar to neuroimaging. the role of gd-enhanced mri for exact tissue characterization is still very limited. a differential diagnosis between different sarcomas, nerve sheath tumors, or other mesenchymal tumors is not possible based on contrast agent behavior up to now. the differentiation between benign and malignant tumors is also often very limited, even with tools like dynamic time-resolved contrast-enhanced mri (verstraete and lang 2000). nevertheless, surrogate parameters for angiogenesis like histological tumor-vessel density can be correlated with this method (van dijke et al. 1996).
one major limitation is the extravasation of standard gd chelates through intact endothelium, so that pathological extravasation through the disrupted endothelium of tumor vessels cannot be separated from the physiological distribution. therefore, the experimental studies mainly focus on contrast agents that show no or only minor physiological extravasation. different studies, mainly at the animal-experimental stage, were able to show that characterization of benign and malignant tumors, evaluation of angiogenesis, and even tumor grading is feasible with blood-pool contrast agents (daldrup et al. 1998; kobayashi et al. 2001; preda et al. 2004a, b). there have been promising results with albumin-gd-(dtpa); however, as mentioned above, this agent is unlikely to be available for diagnostic use in humans (daldrup et al. 1998; daldrup-link and brasch 2003). similar to breast tumors, uspio have also been utilized for the evaluation of perfusion and for the characterization of soft-tissue tumors in the past (bentzen et al. 2005). the basic group of contrast agents for hepatobiliary imaging is the group of ecf gd-based contrast agents. however, there are also tissue-specific contrast agents available, which allow for improved detection and characterization of focal and diffuse liver disease. liver-specific contrast agents can be divided into two groups: on the one hand, there are iron-oxide particles (spio, or superparamagnetic particles of iron oxide), which are targeted to the reticuloendothelial system (res), i.e., to the so-called kupffer cells. these agents cause a signal decrease in t2/t2*-weighted sequences by inducing local inhomogeneities of the magnetic field. on the other hand, there is the group of hepatobiliary contrast agents, which are targeted directly to the hepatocyte and are excreted via the bile. these agents cause a signal increase in t1-weighted sequences by shortening the t1 relaxation time.
in europe there are five different liver-specific contrast agents available on the market (table 2.6.3). the basic principle behind spio is the fact that there are usually no kupffer cells in malignant liver tumors, in contrast to normal liver parenchyma and to solid benign liver lesions. therefore, in the liver-specific phase, which starts for ferucarbotran after about 10 min and for ferumoxide after about 30 min, high contrast is produced between malignant liver lesions and normal liver parenchyma. due to the signal loss in normal liver parenchyma, the malignant lesions appear as hyperintense lesions in t2*-weighted and t2-weighted sequences against the dark liver parenchyma. the first spio on the market in europe was ferumoxide (endorem®, guerbet, aulnay-sous-bois, france). since 2001, the bolus-injectable ferucarbotran (resovist®, bayer schering pharma ag, berlin, germany) has been available in most european countries and in asia. with regard to the basic principle of imaging there is no difference between the two agents; however, direct comparative studies have not been performed so far. the most striking advantage of ferucarbotran is the better workflow due to the possibility of injecting it as a bolus. bolus applicability is possible for ferucarbotran due to the different particle sizes and the coating of the particles; this is also responsible for the lower rate of side effects (especially fewer events of severe back pain) encountered with ferucarbotran. in earlier clinical trials the effects of spio particles were evaluated almost exclusively on t2-weighted fse and t2*-weighted gre sequences, whereas usually not much attention was paid to the t1 effects. however, the effect of spio particles on proton relaxation is not confined to t2 and t2*. they also influence t1 relaxivity, with increased signal intensity on t1-weighted gre sequences at low concentrations (chambon et al. 1993).
this gave rise to the hope that the vascularity of focal liver lesions could be depicted with the bolus-injectable ferucarbotran; however, investigations have shown that the ferucarbotran-enhanced early dynamic examination with t1-weighted sequences does not permit evaluation of lesion vascularity, since (with the exception of the cotton-wool-like puddling of hemangioma) the expected enhancement patterns cannot be seen reliably (zech et al. 2005). with regard to the t2/t2* effects there might be differences between both agents, which could be related to their different average particle sizes (approximately 150 nm for ferumoxide and 60 nm for ferucarbotran). with the help of spio-enhanced mr, accurate liver lesion detection can be achieved. there have been several studies comparing spio to ct during arterial portography (ctap), which has been considered best practice and the reference standard. these studies showed detection rates of more than 90% (ba-ssalamah et al. 2000; vogl et al. 2003). this detection rate was comparable to ctap; moreover, spio-enhanced mr is more specific than ctap, in which false-positive lesions are encountered frequently. the above-cited references investigated spio-enhanced mr in a mixed collective of patients; publications focusing on the cirrhotic liver showed that in these patients the combination of spio and extracellular gd contrast agents has to be considered the gold standard for lesion detection (ward et al. 2000). with regard to lesion characterization, spio particles can be of help for the differential diagnosis of focal liver lesions based on the cellular composition and function of the different lesions (or rather based on different kupffer cell density and function). when the same mr sequence is acquired pre-contrast and after a defined time interval, the signal loss in normal liver parenchyma and in different focal liver lesions can be quantitatively evaluated.
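this quantitative evaluation amounts to computing a percentage signal intensity loss (psil) between the pre- and postcontrast acquisitions of the same roi. a minimal python sketch: the 25% cutoff mirrors the threshold discussed in the text, while the function names and the roi-mean inputs are illustrative assumptions.

```python
def psil(si_pre, si_post):
    """percentage signal intensity loss between pre- and post-spio
    t2-weighted acquisitions of the same roi (identical sequence parameters)."""
    return 100.0 * (si_pre - si_post) / si_pre

def suspicious_for_malignancy(si_pre, si_post, threshold=25.0):
    """lesions losing less than ~25% of their signal take up little spio
    (few or no kupffer cells) and are therefore suspicious for malignancy."""
    return psil(si_pre, si_post) < threshold
```

for example, a lesion whose roi signal drops from 200 to 120 arbitrary units (psil 40%) would be read as benign, whereas a drop to only 190 (psil 5%) would be read as suspicious.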
this is helpful for the differentiation of benign and malignant lesions; when a threshold of 25% signal loss is chosen, then lesions with less signal loss are of a malignant nature with over 90% sensitivity and specificity (namkung and zech et al. 2007). however, the sequence must have the same parameters (including the same acceleration in case parallel imaging is used), since the application of parallel imaging introduces systematic changes in the spatial distribution of image noise (zech et al. 2004). the second important group is the group of hepatobiliary contrast agents. the basic principle behind this group of contrast agents is the specific uptake directly into the hepatocyte. since these agents all shorten the t1 relaxation time, they cause a signal increase in normal liver parenchyma and in solid benign lesions, whereas in malignant lesions like metastases no specific uptake can be seen. these lesions contrast as hypointense lesions against the bright liver parenchyma. approved agents in europe are the manganese-based agent mangafodipir trisodium (teslascan®, ge healthcare) and the gd-based agents gadobenate dimeglumine and gadoxetic acid (primovist®, bayer schering pharma). mangafodipir has the drawback that it must not be administered as a bolus, but only as a short infusion; therefore, dynamic studies are not possible with mangafodipir. however, its liver specificity is high, and the high uptake in normal liver parenchyma enables imaging of, e.g., metastases with high contrast to the surrounding liver parenchyma. gadobenate dimeglumine and gadoxetic acid are injectable as boluses. with both contrast agents, a valid early dynamic examination is feasible, allowing differentiation of lesions with regard to their hyper- or hypovascularity (huppertz et al. 2005; petersein et al. 2000b).
due to the lower liver specificity of gadobenate dimeglumine, the imaging time-point of the liver-specific phase starts about 40 min after injection, whereas gadoxetic acid allows for imaging at 20 min after injection. this can be of value with regard to the workflow in the mr department. similar to the situation with spio agents, direct comparative studies between the agents have not been published yet; therefore, the following remarks again hold true for all hepatobiliary contrast agents. however, with regard to lesion characterization, only gadoxetic acid has official approval for this indication. all three hepatobiliary agents are approved for lesion detection. in comparison to spio agents, the potential advantage of hepatobiliary agents is the fact that t1-weighted sequences can usually be performed with less acquisition time, fewer artifacts, and substantially higher spatial resolution. this holds true especially for t1-weighted 3d-gre sequences derived from mr angiography sequences such as vibe (volumetric interpolated breathhold examination; siemens medical solutions, erlangen, germany) or lava (liver acquisition with volume acceleration; ge healthcare). to what extent these high-resolution sequences with a slice thickness of usually below 3 mm allow a further increase in the detection of small (<1 cm) malignant lesions has to be investigated in the future. the data available to date for the hepatobiliary agents were acquired mostly with conventional 2d-gre sequences and a slice thickness between 6 and 8 mm; however, even in this setting the detection of lesions <1 cm was improved in comparison to baseline mri and spiral ct (bartolozzi et al. 2004; gehl et al. 2001; huppertz et al. 2004; petersein et al. 2000b).
an earlier trial showed slight superiority of spio-enhanced mri versus hepatobiliary mri in the detection of liver metastases; however, the potential advantages of modern t1-weighted 3d gre sequences were not available for this evaluation (del frate et al. 2002). a recent evaluation showed comparable detection rates between these two contrast agent groups (kim et al. 2005). for the diagnosis of solid benign liver lesions (such as focal nodular hyperplasia [fnh] and hepatocellular adenoma), the basis is still the extracellular contrast agent behavior, with flush-like, mostly homogeneous arterial hypervascularization and fast but only faint washout, the lesions being mostly slightly hyperintense in the portovenous and equilibrium phases (and not hypointense, in contrast to the strong washout in malignant lesions). contrast agents used for patients with suspected solid benign lesions must allow this information to be acquired. therefore, spio agents or mangafodipir alone are not sufficient for this indication; however, spio in particular can contribute to the diagnosis of these lesions in combination with extracellular contrast agents, which have to show the above-mentioned enhancement pattern. with spio agents, solid benign lesions typically show liver-specific uptake of the substance in the range of normal liver parenchyma, thereby allowing the differentiation from malignant lesions such as hepatocellular carcinoma (hcc). with regard to the differential diagnosis between fnh and adenoma, results in a limited number of patients indicated that the quantification of iron uptake can be helpful for this issue, because in our cohort adenoma showed stronger iron uptake in comparison to fnh, with only minimal overlap of the percentage signal intensity loss (psil) measured in a t2-weighted fse sequence with fat saturation (namkung and zech et al. 2007).
with the hepatobiliary contrast agents gadobenate dimeglumine and gadoxetic acid, the diagnosis of fnh and adenoma is also possible, based on the one hand on the extracellular contrast phenomena and on the other on the liver-specific uptake of the agents into these lesions (grazioli et al. 2001, 2005; huppertz et al. 2005). there is also valid data indicating that the differentiation between fnh and adenoma is feasible with hepatobiliary contrast agents. therefore, with the bolus-injectable agents of this class a time- and presumably cost-effective diagnosis can be achieved (fig. 2.6.2). in patients with extrahepatic malignancy, confirming or ruling out liver metastasis is often crucial for the therapeutic management. moreover, the exact staging of metastatic disease of the liver is becoming more and more important, since sophisticated, stage-adapted therapeutic regimens exist, with different options ranging from atypical liver resection over local ablative, minimally invasive treatment (e.g., radiofrequency ablation) up to extended liver resection. therefore, contrast agents used for this indication have to provide an excellent detection rate for focal liver lesions; however, the characterization of these lesions is also important, especially the differential diagnosis of small cystic metastases versus small benign cysts or atypical hemangioma. after injection of extracellular contrast agents, the vascularity can be differentiated into hypo- and hypervascular metastases. hypovascular metastases appear as hypointense lesions in the portovenous phase, whereas hypervascular metastases appear as hyperintense lesions in the arterial-dominant phase.
in contrast to the enhancement pattern of benign lesions (the nodular, cotton-wool-like puddling in hemangioma or the homogeneous enhancement of fnh or adenoma), metastases typically show a heterogeneous, ring-like enhancement with strong washout in the portovenous and equilibrium phases, resulting in first hyper- and then hypointense lesions. however, in very small lesions the morphology of vascularization between the different entities becomes more and more similar, so that the present or missing liver-specific uptake is an additional criterion for differentiation.

fig. 2.6.2 mr images of a 27-year-old male patient with a formerly unclear liver lesion. the primary mr examination (upper row) with gadopentetate dimeglumine and ferucarbotran shows a lesion (arrow) with strong arterial enhancement in the gd-enhanced t1-weighted 3d-gre sequence (a) (tr, 5.02 ms; te, 1.77 ms; flip angle 15°) and ongoing washout in the portovenous and equilibrium phases (not shown). the t2-weighted fse sequence with fat saturation 10 min after application of ferucarbotran (b) depicts the lesion as nearly liver-isointense. the calculated percentage iron uptake compared to the pre-contrast t2-weighted sequence was about 40%, indicating the benignity of the lesion. based on these imaging features and the lobulated margins as well as the central scar, the diagnosis of an fnh was made. the follow-up study (lower row) was performed with gadoxetic acid after a single bolus injection. in the t1-weighted 3d-gre sequence (same parameters as above) in the arterial phase (c), the same enhancement characteristics as in the prior study can be delineated. in the delayed t1-weighted 2d-gre sequence (d), the presence of hepatocytes is proven by the liver-specific enhancement. note the excellent delineation of the central scar in the delayed images. in contrast to the primary examination, the follow-up study gave information about vascularity and tissue composition with a single contrast agent injection only
several publications have shown that detection of metastases is feasible with the highest accuracy with the help of liver-specific contrast agents, regardless of whether spio or hepatobiliary. according to the literature, all liver-specific agents can be used for the detection of liver metastases with very high diagnostic reliability and superiority to merely extracellular mri or spiral ct (bartolozzi et al. 2004; ba-ssalamah et al. 2000; del frate et al. 2002; gehl et al. 2001; huppertz et al. 2004; kim et al. 2005; petersein et al. 2000b; vogl et al. 2003). a difficult situation is imaging of the cirrhotic liver. it is known that extracellular agents are helpful for detection and characterization of hcc nodules; with t1-weighted 3d-gre sequences the sensitivity is 76% and the specificity 75% (burrel et al. 2003). moreover, for the diagnosis of hcc according to the accepted guidelines, hypervascularity has to be demonstrated. therefore, a valid early dynamic phase is a mandatory part of imaging in the cirrhotic liver. because regenerative nodules, which can be found frequently in the cirrhotic liver, can also show hypervascularity, the differentiation between these nodules and hcc nodules is a crucial issue for the management of patients suffering from liver cirrhosis. this is the reason that liver-specific contrast agents play an important role in imaging of the cirrhotic liver. it has been demonstrated that hcc shows no relevant uptake of spio particles, in contrast to benign regenerative nodules (bhartia et al. 2003; imai et al. 2000; ward et al. 2000; namkung and zech et al. 2007). since fibrotic areas are frequently present in the cirrhotic liver, spio alone are not sufficient to evaluate the cirrhotic liver. a reasonable approach for the diagnosis of hcc based on imaging alone is the correlation of hypervascularity and missing or at least decreased iron uptake (bhartia et al. 2003; ward et al. 2000).
however, there is also an indefinite area of overlapping phenomena between dysplastic nodules and well-differentiated hcc, which is the reason for false-negative findings, i.e., well-differentiated hcc with substantial iron uptake (imai et al. 2000). with regard to lesion detection, the availability of high-resolution mr sequences gives advantages for gd-enhanced arterial-phase imaging alone (kwak et al. 2004) or in combination with spio as double contrast (ward et al. 2000). imaging with hepatobiliary contrast agents is considered inferior to spio agents in the cirrhotic liver, mainly due to substantial overlap in the liver-specific uptake between well-differentiated hcc and regenerative nodules. the assessment of lymph node involvement by tumor tissue, i.e., lymph node metastasis, is currently based on morphologic parameters including lymph node size, shape, irregular border, and signal intensity inhomogeneities (brown et al. 2003; zerhouni et al. 1996). for all parameters, no clear cut-off values or cut-off characteristics can be defined. the definition of a cut-off value in individual studies is the result of finding a compromise between sensitivity and specificity (e.g., larger values around 10 mm give a high specificity but low sensitivity, whereas the reverse, low specificity and high sensitivity, is observed when smaller diameters <10 mm are defined). the use of unspecific gd-based extracellular contrast agents has not been shown to overcome this limitation. lymphotropic mr contrast agents were, therefore, developed to increase the diagnostic accuracy for positive lymph node involvement. currently, none of these agents is approved for clinical use, and the experience with the different formulations is limited to clinical studies. the most frequently used agents are uspios.
they are administered intravenously and, as a result of their small diameter and their electrical neutrality, pass the first lymphatic barriers, i.e., the liver and the spleen. in the lymph nodes, they are phagocytosed by local macrophages. in healthy lymphatic tissue, the local concentration of iron oxides results in a significant decrease of t2 and t2* relaxation times, producing a marked decrease of signal in t2- and t2*-weighted sequences. in contrast, metastatic tissue replacing the lymphatic tissue shows no relevant uptake of uspio, and no relevant change in signal intensity can be observed. gradient-recalled echo t2-weighted sequences are considered the most accurate to detect the signal loss in nonmetastatic nodes. the application of uspios offers not only the possibility to differentiate between tumor-free, reactive (koh et al. 2004), and tumor-positive lymph nodes, but also enables depiction of micrometastases when sequences with high spatial resolution are used (harisinghani et al. 2003). one representative of the group of lymph node-specific uspio, ferumoxtran-10 (sinerem®, guerbet, paris, france), is infused after dilution. the recommended dose is 2.6 mg fe/kg body weight. the optimal time-point for postcontrast imaging is 24-36 h after application. during their clinical development, uspios have been shown to be effective in staging lymph nodes of patients with various primary malignancies (deserno et al. 2004; jager et al. 1996; michel et al. 2002; nguyen et al. 1999). the usual way of diagnosis is to perform an initial precontrast scan and to compare the images with postcontrast images acquired 24-36 h after infusion of uspios for signal changes between both time-points. the type, onset, and intensity of adverse events after application of ferumoxtran-10 were evaluated in phase iii studies and seem to be similar to those related to infusion of ferumoxides (anzai et al. 2003).
in patients with esophageal or gastric cancer, uspios revealed a sensitivity of 100% and a specificity between 92.6 and 95.4% (diagnostic accuracy between 94.8 and 96.2%) for the diagnosis of metastatic nodes (nishimura et al. 2006; tatsumi et al. 2006). in patients with carcinomas of the upper aerodigestive tract, application of ferumoxtran-10 has been shown to increase the sensitivity from 64 to 94% while maintaining a specificity of 78.9%, compared with precontrast imaging (curvo-semedo et al. 2006). in patients with rectal cancer, uspios have shown well-predictable signal characteristics in normal and reactive lymph nodes, and were able to differentiate the latter from malignant lymph nodes (koh et al. 2004). in contrast, keller et al. studied females with uterine carcinoma and were able to show high specificity but low sensitivity for metastatic lymph nodes; mainly micro-metastases around 5 mm diameter were missed. a possible way to further improve the diagnostic accuracy for detection of small positive lymph nodes could be the use of high magnetic field strength (3-t) scanners, resulting in a higher spatial resolution (heesakkers et al. 2006). different results were published concerning the necessity of both pre- and postcontrast images. whereas the majority of clinical publications using uspio for lymph node imaging used both pre- and postcontrast images, and stets et al. (2002) were able to statistically prove the advantage of pre- and postcontrast studies, harisinghani et al. (2003) showed that on ferumoxtran-10-enhanced mr lymphangiography, contrast-enhanced images alone may be sufficient for lymph node characterization. however, a certain level of interpretation experience seems to be required before contrast-enhanced images can be used alone. both uspios (rogers et al. 1998) and spios (maza et al. 2006) can alternatively be administered by subcutaneous or submucosal injection.
this application route is able to identify sentinel lymph nodes and lymphatic drainage patterns (fig. 2.6.3). additionally, high diagnostic accuracy of interstitial mr lymphography using blood-pool gd-based agents has been described (herborn et al. 2002, 2003). using different macromolecular agents or gd-based agents with high protein binding in animal models, herborn et al. were able to show that the differentiation of tumor-bearing lymph nodes from reactive inflammatory and normal nodes, based on a contrast uptake pattern assessed qualitatively as well as quantitatively, is possible. in contrast to intravenous administration, subcutaneous injection offers the possibility to acquire the mr images as early as a few minutes after application. the use of gd-based non-lymphotropic blood-pool agents induced a relatively short and inhomogeneous lymph node enhancement (misselwitz et al. 2004). with the aim of becoming more specific, a new generation of lymphotropic t1 contrast agents was developed and tested in animal models after subcutaneous injection. these perfluorinated gd chelates were able to visualize the fine lymphatic vasculature, even the thoracic duct, in animal models (staatz et al. 2001). bowel mr contrast agents are generally classified as either positive (bright lumen) or negative (dark lumen) agents. in addition to enteral contrast agents especially approved for mr imaging, several existing pharmaceutical agents, such as methyl cellulose, mannitol, and polyethylene glycol preparations, licensed for enteric applications other than mri, have also been exploited.

fig. 2.6.3 lymphatic drainage of a mucosal melanoma in the left nasal cavity to a lymph node located in the left submandibular region. mr images (upper row) and spect images (lower row). a concordant alignment of hot spots caused by the skin markers on the spect images with the vitamin e caps on the mr images (yellow arrows).
b accurate sentinel lymph node localization (blue arrow) after subcutaneous injection of ferucarbotran (mg fe/kg body weight). t2*-weighted 2d gradient-recalled echo sequence, tr/te/flip angle 997/15/90°. homogeneous signal intensity decrease in the depicted lymph node (arrow) indicates normal lymphatic tissue; thereby, metastatic involvement can be ruled out (maza et al. 2006)

the specific demands for enteral contrast media include: for enteral and rectal application a special formulation of gadopentetate dimeglumine was developed (magnevist enteral, schering). the agent contains a total of 15 g mannitol/l to prevent the absorption of the fluid simultaneously introduced into the gi tract, thus allowing homogeneous filling, distension, and a constant gd concentration during the entire examination period. in early development, an increase in diagnostic accuracy was shown for examinations of the pancreas, in the diagnostics of abdominal lymphoma, and in pelvic mr imaging (claussen et al. 1988). negative agents provide desirable contrast to those pathologic processes that are signal intense. they have been shown to improve the quality of images obtained by techniques such as mr cholangiopancreaticography (mrcp) and mr urography by eliminating unwanted signal from fluid-containing adjacent bowel loops, thus allowing better visualization of the pancreatic/biliary ducts and the urinary tract. an alternative to oral spios was described using ordinary pineapple juice. it was demonstrated that pineapple juice decreased t2 signal intensity on a standard mrcp sequence to a similar degree as a commercially available negative contrast agent (ferumoxsil) (riordan et al. 2004). oral spio preparations usually contain larger particles than injectable agents do.
in europe, two spio preparations with the inn code ferumoxsil are approved for oral use: lumirem (laboratoires guerbet, france) with a particle size of 300 nm, and abdoscan (oral magnetic particles; ge healthcare) with a particle size of 350 nm. they are coated with a non-biodegradable and insoluble matrix (siloxane for lumirem and polystyrene for abdoscan), and suspended in viscosity-increasing agents (usually based on ordinary food additives, such as starch and cellulose). these preparations prevent the ingested iron from being absorbed, keep the particles from aggregating, and promote homogeneous contrast distribution throughout the bowel. if spio particle aggregation occurs, magnetic susceptibility artifacts may result, especially when high magnetic field strengths and gradient-echo pulse sequences are used (wang et al. 2001). lumirem is composed of crystals of approximately 10 nm; the hydrodynamic diameter is approximately 300 nm (debatin and patak 1999). the recommended concentration is 1.5-3.9 mmol fe/l. oral spio are administered over 30-60 min, with a volume of 900 ml for contrast enhancement of the whole abdomen and 400 ml for imaging of the upper abdomen. oral spio suspensions are well tolerated by patients (haldemann et al. 1995); the iron is not absorbed and the intestinal mucosal membrane is not irritated. the combination of gd-enhanced t1-weighted sequences and t2-weighted sequences after oral contrast with spio has revealed the highest accuracy in the evaluation of crohn's disease (maccioni et al. 2006). furthermore, it has been shown that mri with negative superparamagnetic oral contrast is comparable to endoscopy in the assessment of ulcerative colitis. in contrast to patients with crohn's disease, double-contrast imaging does not provide more information than single oral contrast (de ridder et al. 2001).
in mrcp, negative oral contrast agents can be given before the examination to provide non-superimposed visualization of the bile and pancreatic ducts. there is no negative influence of the oral contrast agents on the diameter of the ducts (petersein et al. 2000a). in cardiac mri, contrast agents are obligatory for the assessment of myocardial perfusion, for the evaluation of enhancement of cardiac masses, and for the evaluation of myocardial viability. in addition, contrast agents are frequently used when mr angiography of the coronary arteries is performed. myocardial perfusion imaging is a promising and rapidly growing field in cardiac mr imaging. in comparison to radionuclide techniques, mr imaging has several advantages, including higher spatial resolution, no radiation exposure, and no attenuation problems related to anatomical limitations. the examination is performed after rapid intravenous administration (e.g., 3-5 ml/s) of a contrast agent and evaluation of the first-pass transit of the agent through the myocardium. with the use of fast scan techniques, perfusion imaging can be performed as a multislice technique, with imaging of three to five slice levels per heartbeat, possibly allowing coverage of the entire ventricle. from a series of images, signal intensity-time curves are derived from regions of interest in the myocardial tissue for the generation of parametric images. the majority of data have been published using ecf agents. in clinical practice, most investigators use fast t1-weighted imaging and bolus injection of doses of 0.025 up to 0.05 mmol/kg body weight (edelman 2004). for the evaluation, both quantitative and qualitative approaches can be used. in the case of ecf agents, generally semiquantitative assessments are applied. to quantify myocardial perfusion, a calculation was published by wilke et al. on the basis of first-pass data acquired after fast bolus injection of 0.025 mmol/kg body weight of ecf agents (wilke et al. 1997).
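one common semiquantitative reading of such first-pass signal intensity-time curves is the maximal upslope of the myocardial curve, normalized to the upslope of the left-ventricular blood-pool input. the python sketch below illustrates this general idea only; it is not the exact calculation published by wilke et al., and the function names and the sliding-window length are illustrative assumptions.

```python
def _slope(xs, ys):
    """least-squares slope of ys over xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def max_upslope(times, signal, window=4):
    """steepest slope of a linear fit slid over the first-pass curve."""
    return max(
        _slope(times[i:i + window], signal[i:i + window])
        for i in range(len(times) - window + 1)
    )

def perfusion_index(times, myocardium, lv_blood_pool, window=4):
    """myocardial upslope normalized to the arterial (lv blood-pool) input."""
    return max_upslope(times, myocardium, window) / max_upslope(times, lv_blood_pool, window)
```

computed once at rest and once under vasodilator stress, the ratio of two such indices yields a semiquantitative perfusion reserve.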
in practice, gd concentrations between 0.2 and 1.2 mmol/l result in an mr signal that increases linearly with the concentration of the agent itself. above this concentration range, the relative increase in signal intensity begins to saturate (schwitter et al. 1997). when the evaluation is performed visually, a higher dose of 0.1-0.2 mmol/kg should be preferred to reach better myocardial enhancement and image quality. a good correlation between the perfusion reserve measured with mr imaging and the coronary flow reserve measured with doppler ultrasonography has been demonstrated. blood-pool agents also have the potential to be applied for quantitative measurements, because their volume of distribution is limited to the intravascular space. a requirement for quantitative perfusion measurements is that the relation between the measured signal intensity on the mr images and the contrast agent concentration in the blood is known (brasch 1991). there are two major differences between first-pass curves obtained from blood-pool agents and extracellular contrast agents. first, blood-pool contrast agents reach a lower tissue signal because their volume of distribution is limited to the intravascular space (wendland et al. 1997; wilke et al. 1995). second, there is a better return to baseline for blood-pool contrast agents. the wash-in kinetics and the signal intensity in the myocardial tissue depend on the concentration of the contrast agent, the coronary flow rate, diffusion of the contrast agent into the interstitium, relative tissue volume fractions, bolus duration, and recirculation effects (burstein et al. 1991). absolute quantification of myocardial perfusion has been performed in animal models using nc100150, where a high correlation was found between mri and contrast-enhanced ultrasound (johansson et al. 2002). delayed enhancement allows direct visualization of necrotic or scarred tissue and is an easy and robust method to assess myocardial viability.
by measuring the transmural extent of late enhancement, the degree of functional recovery of cardiac tissue may be predicted. although several studies have aimed at describing the mechanisms of late enhancement, they are not yet fully explained. the extent of late enhancement possibly depends on the time-point after injection as well as the time-point after myocardial infarction. the relevant publications about delayed enhancement report data after the administration of 0.5 m ecf agents in a dose range of 0.1-0.2 mmol/kg. ecf agents are probably more efficient in assessing cellular integrity when they are distributed homogeneously through damaged myocardium (wendland et al. 1997), but homogeneous distribution is not always the case, as in microvascular obstruction (kroft and de roos 1995). differences exist between the distribution patterns of extracellular and blood-pool agents, and hypo-enhanced cores may be observed earlier using blood-pool agents (schwitter et al. 1997). the sensitivity of blood-pool agents for myocardial infarction and, therefore, their potential value for the evaluation of myocardial viability, is unknown. a different strategy to determine myocardial viability is the use of necrosis-specific mr contrast agents. gadophrin-2 and -3 (schering) have been shown to possess a marked and specific affinity for necrotic tissue components and show persistent enhancement in necrotic tissue (40 min to 12 h in myocardial infarction). in preclinical studies, the agents did not prove superior to ecf agents in the estimation of infarcts, and their further development was therefore not continued (barkhausen et al. 2002). to depict coronary arteries in mri, both unenhanced and contrast-enhanced techniques are used.
a frequently performed contrast-enhanced examination strategy, with acquisition of multiple 3d slabs in breath-hold depicting each coronary artery separately, was first described by wielopolski et al. (1998). in 3d coronary mr angiography, however, the inflow-related contrast between blood and myocardium is reduced, because fewer unsaturated protons enter the imaging volume. thus, the use of an intravascular contrast agent may be particularly convenient due to the t1 relaxation time reduction in blood. the application of ecf agents has proven most effective for breath-hold acquisitions. however, the concentration of ecf agents declines rapidly as they extravasate into the interstitial space, thereby reducing the contrast between blood and myocardium. newly developed strategies use high-resolution free-breathing mr sequences for coronary mra. in this situation, ecf agents are less beneficial due to the relatively long acquisition time of these free-breathing 3d sequences. this problem can be solved by the use of intravascular contrast agents. an additional benefit of a blood-pool agent is a longer acquisition window, which may be used to further increase the signal-to-noise ratio and/or the image resolution (nassenstein et al. 2006). in an animal model, use of the macromolecular gd-based agent p792 with a free-breathing technique allowed more distal visualization of the coronary arteries than did an ecf agent or non-enhanced mr images (dirksen et al. 2003). one of the strengths of mri is the ability to visualize soft tissues with different image contrasts. additionally, various two- and three-dimensional mr imaging techniques for morphologic and functional examinations exist.
among the functional techniques, the visualization and measurement of blood flow is of particular interest, since nearly all physiologic processes rely on an adequate blood supply. as with many other mr imaging techniques, the sensitivity of mri to blood flow was first observed in artifacts visible near larger blood vessels, and new imaging methods were investigated to suppress these artifacts. in a further refinement of these techniques, the artifact was made the primary source of image contrast: in the search for new methods of flow artifact suppression, the blood flow itself became the contrast-generating element. the delineation of the vascular tree with mri, mr angiography (mra), is such a development: in t1-weighted 3d gradient-echo data, it was observed that the blood vessel signal in the marginal partitions was significantly higher than at the center of the image stack. furthermore, signal voids were seen in regions of turbulent flow, and in blood vessels with pulsating flow, ghost images of the vessel were visible in the phase-encoding direction. in the following, the underlying physical phenomena of these artifacts will be discussed, as they form the basis for time-of-flight mra, phase-contrast mra, and mr flow measurements. since the pioneering work of prince (1994), many mr angiographies have been acquired using contrast-enhanced acquisition techniques. in contrast-enhanced mra, the signal difference between the bright blood vessel and the dark surrounding tissue is induced by a reduction of the blood's t1 relaxation time. again, this technique has evolved from an unwanted vascular signal artifact in spin-echo images acquired after contrast agent injection into a major mr application. with the development of new contrast agents with a longer half-life in the vascular system, the so-called intravascular contrast agents, contrast-enhanced mra has been developed even further.
in this section, techniques for mra with either intravascular or extracellular contrast agents will be presented. the visualization of the blood vessels with mri relies particularly on the specific properties of blood. blood consists almost entirely of liquid, so it has a very high spin density and thus yields a strong mr signal. the t1 time of blood is long compared to that of other tissues (e.g., 1,200 ms at 1.5 t) (gomori et al. 1987), and it depends on the oxygenation state. long t1 values are a disadvantage in t1-weighted acquisition strategies, as the signal decreases with increasing t1. this disadvantage can be converted into an advantage if the blood signal needs to be suppressed (black-blood angiography) for visualization of the vessel walls. furthermore, using an inversion (or saturation) recovery technique, the prepared magnetization can be tracked for a longer time, as the preparation persists much longer than in other tissues; this is the basis of arterial spin labeling techniques. typical t2 values are of the order of 150-200 ms (at 1.5 t). this t2 value is long enough to provide a high signal in the blood vessels using dedicated t2-weighted image acquisition strategies. with conventional t2-weighted spin-echo techniques, mr angiographies are difficult to acquire, since the motion of the blood needs to be compensated; nevertheless, t2-weighted mra pulse sequences for imaging of the peripheral vasculature have been reported (miyazaki et al. 2000). another approach is the use of balanced ssfp pulse sequences, where the contrast depends on the ratio t1/t2; these fast pulse sequences have found widespread use in the visualization of the cardiac system. in addition to the relaxation times, blood velocity is an important parameter. in healthy arterial vessels, velocity values between 100 cm/s (e.g., in the aortic arch) and 30 cm/s (e.g., in the intracranial vessels) are common, whereas much lower values are found in the venous vasculature.
high blood flow velocities lead to a pronounced inflow of fresh, unsaturated magnetization into an imaging slice, which increases the signal in the blood vesselsthis is the well-known time-of-flight contrast. furthermore, the velocity in the arterial system is not constant but changes as a function of time in the cardiac cycle. this pulsatility can be exploited to separate arterial from venous vessels, if image data are acquired with cardiac synchronization. all of the presented mra techniques rely on these properties of the blood; some exploit only one of them, whereas others use a combination of them to increase the vascular contrast even further. in any mr pulse sequence, the magnetization in a measurement slice is exposed to a series of radio frequency (rf) pulses. if the magnetization does not move out of the measurement slice (e.g., in static tissue) it approaches a so-called steady state which, for a spoiled gradient-echo pulse sequence, depends on the flip angle, the repetition time tr, and the relaxation times t1 and t2. the steady state magnetization is smaller than the magnetization at the beginning of the experiment-it is partially saturated. fresh, unsaturated blood flowing into the imaging slice is carrying the full magnetization and thus generates a significantly higher mr signal (fig. 2.7 .1); this is known as time-of-flight (tof) contrast (anderson and lee 1993; potchen et al. 1993) . a major disadvantage of tof mra is the sensitivity to blood signal saturation: the longer the inflowing blood remains in the measurement slice, the more its signal is saturated. in situations where the blood vessel is oriented over a long distance parallel to the imaging slice (or 3d slab), the inflowing magnetization is progressively saturated. thus, blood appears bright near the entry site but is seen less intense with increasing distance from this position. 
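the progressive saturation of spins that stay in the slice can be reproduced numerically. the following sketch iterates the recurrence for an ideally spoiled gradient-echo experiment with a 30° flip angle and tr = 30 ms (the parameters of fig. 2.7.1) and the t1 of blood quoted in the text; perfect spoiling is our simplifying assumption.

```python
import math

def spoiled_gre_mz(t1_ms, alpha_deg=30.0, tr_ms=30.0, n_pulses=40):
    """longitudinal magnetization (in units of m0) seen by successive
    excitations of an ideally spoiled gradient-echo sequence: tip by
    alpha, then relax towards equilibrium for one tr."""
    cos_a = math.cos(math.radians(alpha_deg))
    e1 = math.exp(-tr_ms / t1_ms)
    mz, history = 1.0, []
    for _ in range(n_pulses):
        history.append(mz)                  # mz just before each pulse
        mz = 1.0 + (mz * cos_a - 1.0) * e1  # saturation recurrence
    return history

blood = spoiled_gre_mz(t1_ms=1200.0)    # t1 of blood at 1.5 t (see text)
fresh, saturated = blood[0], blood[-1]  # freshly inflowing vs. saturated spins
```

the first excitation sees the full magnetization, whereas after a few tens of pulses only a small fraction remains; this is exactly the contrast between inflowing blood and static tissue that tof mra exploits.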
to maintain the tof contrast over the whole imaging volume, tof mra should therefore be performed as a 2d acquisition with thin slices or, if a 3d acquisition technique is preferred, in the arterial vasculature, where high flow velocities result in less saturation of the arterial blood. the inflow effect can be maximized if the measurement slice is oriented perpendicular to the blood vessel. this is often possible for straight arterial vessels (e.g., the carotids), but can be difficult for extended vascular territories with tortuous vessels. in 3d acquisitions of larger vessel structures, the saturation effect can be partially compensated if the flip angle is increased from the entry side of the slab to the exit side (fig. 2.7.2). thus, the saturation effect is less pronounced during entry, and the magnetization is still visible when it enters smaller vessels far away from the entry side. often, an rf pulse with a linearly increasing flip angle is utilized (tilted optimized non-saturating excitation, or tone [nagele et al. 1995]). for an optimal vessel contrast, the blood flow velocity, the repetition time, the mean flip angle, and the slope of the rf pulse profile are important parameters. in 2d tof mra, a very strong tof contrast can be achieved if the slice thickness d is chosen such that the magnetization flowing with a velocity v is completely replaced during one tr interval, i.e., d ≤ tr · v. at a typical tr of 10 ms and a blood flow velocity of 40 cm/s, the slice thickness should thus not be larger than 4 mm. with these small slice thicknesses, the data acquisition in larger vascular territories such as the legs is very time-consuming, and patient movements cannot be excluded during the several minutes of scan time. patient movements lead to artificial vessel shifts between the imaging slices, which are particularly visible in orthogonal data reformats; these artifacts can mimic pathologies such as stenoses and thus significantly reduce the diagnostic quality of the data sets (fig. 2.7.3). 3d tof mra is advantageous over sequential 2d tof mra because an isotropic spatial resolution in all directions can be achieved. to reduce the saturation effects in 3d tof mra, not only one thick but several thinner 3d slabs are acquired consecutively. thus, the saturation effects are smaller for the individual slabs, and a stronger tof contrast is seen. unfortunately, the flip angle in fast 3d acquisitions is not constant over the slab but declines towards its margins. this inhomogeneous excitation results in a higher signal for the stationary tissue at the slab margin, producing an inhomogeneous signal background in lateral views of the data. combined with a higher tof contrast at the entry side compared with the exit side, a spatially varying signal intensity is seen in lateral views of the whole data set (venetian blind artifact). to reduce this artifact, overlapping 3d slabs are acquired (multiple overlapping thin slab acquisition, or motsa [parker et al. 1991]), and the marginal slices of each slab are removed; however, this results in an increased total scan time.
fig. 2.7.1 transient longitudinal magnetization subjected to a series of excitation pulses (30°) at a repetition time of 30 ms after entering the readout slice at t = 0 during tof mra. the longer the blood spins remain in the slice, the more they are saturated, and a differentiation between blood and surrounding tissue becomes difficult
fig. 2.7.2 3d tof mra data set of the intracranial vasculature in lateral (top) and axial (bottom) maximum intensity projection. to minimize saturation, a tone rf pulse was used for excitation, and the signal from static brain tissue was additionally suppressed using magnetization transfer pulses
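the slice-thickness condition d ≤ tr · v translates directly into a one-line calculation; the function below simply evaluates the bound for the worked example in the text (tr = 10 ms, v = 40 cm/s).

```python
def max_slice_thickness_mm(tr_ms, velocity_cm_s):
    """2d tof condition d <= tr * v: thickest slice for which the blood
    magnetization is completely replaced within one tr."""
    return (tr_ms / 1000.0) * velocity_cm_s * 10.0  # cm -> mm

thickness = max_slice_thickness_mm(10.0, 40.0)  # the 4-mm example from the text
```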
to increase the contrast between the blood vessels and the surrounding tissue in tof mra, magnetization transfer pulses are often included in the pulse sequences (edelman et al. 1992). using off-resonant rf pulses, the magnetization transfer contrast (mtc) selectively saturates those tissues in which macromolecules are present. for brain tissue, these additional rf pulses can reduce the signal from background tissue by 40% and more, which increases the conspicuity especially of the smaller blood vessels. the use of magnetization transfer pulses, however, increases the minimum achievable tr and, thus, the total acquisition time. additionally, the mtc pulses apply more rf power to the patient, so that the regulatory limits for the specific absorption rate (sar) might be exceeded, an effect that is more pronounced at higher field strengths. nevertheless, mtc is often included in intracranial tof mra protocols, where longer trs can be an advantage, as long trs additionally reduce the saturation effect. the flow velocity in arteries is typically not constant but varies over the cardiac cycle. thus, the tof contrast is a function of time, so that for image acquisition times longer than one cardiac cycle, a signal variation during k-space sampling is present. this periodic signal variation results in replicas of the blood vessels in the phase-encoding direction after image reconstruction: the so-called pulsation artifacts or ghost images (haacke and patrick 1986; wood and henkelman 1985). to avoid pulsation artifacts, the image acquisition can be synchronized with the cardiac cycle using ecg triggering, which typically prolongs the total acquisition time, as only part of the measurement time is used for data acquisition. another option to reduce pulsation artifacts is to saturate the inflowing blood in a slice upstream of the imaging slice.
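the ghosting mechanism can be demonstrated with a one-dimensional toy simulation: a periodic amplitude modulation of the k-space samples (standing in for the cardiac signal variation; the object, modulation depth, and period below are illustrative assumptions) produces shifted replicas of the vessel after reconstruction, displaced by n/period samples in the phase-encoding direction.

```python
import numpy as np

n = 256
profile = np.zeros(n)
profile[120:136] = 1.0  # a small "vessel" in image space

k = np.fft.fft(profile)  # the phase-encode samples, acquired over time
period = 8               # modulation period in acquired lines (cardiac pulsation)
modulation = 1.0 + 0.3 * np.cos(2.0 * np.pi * np.arange(n) / period)
k_pulsatile = k * modulation  # periodic signal variation during k-space sampling

ghosted = np.abs(np.fft.ifft(k_pulsatile))
# the cosine modulation convolves the object with two shifted copies
# (the ghosts), each displaced by n / period = 32 samples
ghost_pos = 128 + n // period
```

the ghost amplitude equals half the modulation depth (0.15 of the vessel signal here), which is why even moderate pulsation produces clearly visible replicas.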
for this purpose, a slice-selective rf excitation is applied in a (typically parallel) saturation slice, so that the magnetization of the inflowing blood is significantly reduced. spatial presaturation avoids pulsation artifacts; however, the interior of the blood vessel now has a negative contrast, and the positive tof contrast is lost. another important ingredient of a tof mra pulse sequence is flow compensation (cf. the paragraphs on flow measurements, below): the movement of the spins causes an additional velocity-dependent phase shift that is seen in tof mra data sets without flow compensation as a displacement. if multiple velocities are present, as in turbulent flow, the different phases can cause signal cancellation (intra-voxel dephasing) that manifests, e.g., as a signal void behind a stenosis (saloner et al. 1996). with special compensation gradients, the velocity-dependent phase shifts can be reduced; however, this typically prolongs the echo time te. tof mra is susceptible to several artifacts and is strongly dependent on a sufficient inflow velocity of unsaturated blood. therefore, 3d tof mra techniques are typically only used in the head, where the arterial flow velocities are high and enough time is available for imaging. for abdominal studies, tof techniques are of minor interest, because long measurement times are not possible due to respiratory motion. in conventional tof mra, the difference in longitudinal magnetization between the saturated stationary tissue and the unsaturated inflowing blood is exploited to create a positive contrast between blood and tissue.
fig. 2.7.3 tof mra data sets of the arterial vasculature in the neck. due to swallowing, the blood vessel can move from one acquisition to the next, and the edges appear with discontinuities in the lateral views of the maximum intensity projection
with arterial spin-labeling techniques, a similar approach is taken to the visualization of the inflowing blood; however, here only a certain fraction of the inflowing blood is tagged (or labeled) and subsequently visualized, whereas in tof mra all inflowing material is detected (detre et al. 1994). spin-labeling pulse sequences typically consist of a labeling section, during which an rf pulse is applied to the spins upstream of the imaging slice (fig. 2.7.4). for labeling, adiabatic inversion pulses are often used; they are less susceptible to motion during the inversion and allow inversion of the magnetization even with rf coils of limited transmit homogeneity (e.g., a transmit/receive head coil). after an (often variable) inflow delay time ti, during which the labeled blood flows into the vascular target structure, the signal in the imaging slice is acquired. for signal reception, different image-acquisition strategies can be employed, such as segmented spoiled gradient-echo (flash), fast spin-echo (rare, haste), or even echo planar imaging (epi). note that this image data set contains both the signal from the labeled blood and the static background tissue. in a second acquisition, the entire pulse sequence is repeated without labeling of the blood, and a second image data set is acquired. to selectively visualize only the labeled blood, the two data sets are subtracted; since the signal intensity of the blood differs between the two acquisitions, a non-vanishing blood signal remains, whereas the signal contribution from static tissue cancels. if the phase of the second image data set is shifted by 180° compared to the first, labeled data set, the images can be added (the minus sign is provided by the phase), and the technique is called signal targeting with alternating radiofrequencies (star) (edelman et al. 1994).
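the label/control subtraction can be sketched with a toy numerical example. the array sizes and the few-percent blood fraction are illustrative assumptions, and the labeled blood is modeled as simply sign-flipped relative to the control scan.

```python
import numpy as np

rng = np.random.default_rng(0)
static_tissue = rng.uniform(0.5, 1.0, size=(8, 8))  # identical in both scans
blood = np.zeros((8, 8))
blood[3:5, 3:5] = 0.04  # labeled blood: only a few percent of the total signal

control = static_tissue + blood  # acquisition without labeling
label = static_tissue - blood    # the inversion label flips the blood signal
perfusion_weighted = (control - label) / 2.0  # static background cancels exactly
```

the subtraction isolates a signal of only a few percent of the raw images, which is why asl requires many averages in practice.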
in clinical mri systems, arterial spin labeling (asl) is typically implemented with the described labeling pulses, which are applied only once per data readout; this approach is also termed pulsed arterial spin labeling (pasl). another method for asl uses a small transmit coil for the labeling pulse, which continuously applies an rf pulse to the arterial vessel and thus achieves a much higher degree of inversion. unfortunately, these continuous asl techniques often cannot be used in a clinical mr system due to the regulatory constraints for the maximum rf power applied to the patient (sar limits). asl techniques are typically applied to study perfusion in the brain and other organs, where the inflow delay is chosen long enough for the labeled blood to have reached the capillary bed (golay et al. 2004). unfortunately, the labeling in the blood does not persist for much longer than one t1 time (i.e., 1-2 s at 1.5 t), and the signal differences are generally very small (2-5% of the total signal), which makes asl perfusion measurement a time-consuming procedure. another application of asl is the time-resolved visualization of blood flow, e.g., in intracranial malformations (essig et al. 1996), where saturation effects limit the diagnostic quality of conventional 3d tof mras. here, dynamic asl data sets are acquired at a series of inflow delays to visualize the transit of the labeled bolus through the nidus of the malformation and, more importantly, the arrival of the blood in the draining venous vessels, which cannot be seen on the tof data sets (fig. 2.7.5). in addition to a morphologic representation of these blood vessels, transit-time measurement of the blood becomes feasible, which could be used as an indicator, e.g., of an increase in vascular resistance after radiation therapy.
fig. 2.7.4 concept of arterial spin labeling: magnetization is prepared (e.g., using a slice-selective inversion pulse) in a section of the artery (red). after an inflow delay of several hundred milliseconds, the magnetization has reached the imaging slice (green), and an image is acquired. the procedure is repeated without preparation, and the two data sets are subtracted to remove the signal background from static tissue
mr angiographies can also be acquired using the special contrast properties of blood. in balanced steady-state free precession pulse sequences (bssfp, truefisp, fiesta), an image contrast is created that depends on the ratio of the relaxation times t1 and t2 (oppelt et al. 1986). for blood, this ratio is high, and thus the interior of the blood vessels is shown with higher signal intensity than the surrounding tissue. unfortunately, other liquid-filled spaces such as the ventricles also appear with a bright signal, so that conventional mra post-processing strategies such as the maximum intensity projection cannot be used to visualize the vascular tree (fig. 2.7.6). despite their short repetition times and balanced gradient schemes, these pulse sequences are susceptible to flow artifacts caused by intra-voxel dephasing, which can be compensated using flow-compensation gradients (storey et al. 2004; bieri and scheffler 2005). another problem with balanced steady-state pulse sequences is the susceptibility to off-resonance artifacts: since both transverse and longitudinal magnetization contribute to the mr signal, perfect phase coherence must be maintained within one tr to establish the desired contrast. in off-resonant regions, this phase coherence is perturbed, and a contrast variation is seen in the form of dark bands. these banding artifacts can be reduced using a repetition time that is shorter than the inverse of the off-resonance frequency, i.e., for a 200-hz off-resonance, the tr should be shorter than 5 ms. off-resonance frequencies scale with field strength, so that banding artifacts become an increasing problem at higher field strengths.
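the tr criterion for banding artifacts is a one-line calculation; the function below reproduces the 200-hz example from the text.

```python
def max_tr_ms(off_resonance_hz):
    """bssfp banding criterion: keep tr shorter than the inverse of the
    off-resonance frequency so the first dark band falls outside the passband."""
    return 1000.0 / off_resonance_hz

tr_limit = max_tr_ms(200.0)  # 200 hz off-resonance -> tr shorter than 5 ms
```

since off-resonance frequencies scale with field strength, doubling b0 halves the tolerable tr.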
nevertheless, fast balanced ssfp pulse sequences are increasingly used in mra studies of the heart and the neighboring vessels, in combination with ecg triggering, to visualize the vascular anatomy and to assess cardiac function. in conventional spin-echo images, one often observes that the interior of the blood vessels is darker than the surrounding tissue. this so-called black-blood contrast is caused by an incomplete signal refocusing of the 180° pulse. compared with tof mra with gradient-echo sequences, where the inflow of blood causes signal amplification, spin-echo sequences attenuate the signal from flowing blood, because spins leave the imaging slice between the 90° excitation pulse and the 180° refocusing pulse and thus do not contribute to the mr signal. therefore, blood signal attenuation can be increased with a longer spacing between the two rf pulses, i.e., with longer echo times te. to further suppress signals from slowly flowing blood near the vessel walls, additional strong gradients are often introduced in the black-blood pulse sequences, which cause an increased intra-voxel dephasing and thus suppress the signal (lin et al. 1993). a different technique for blood signal suppression makes use of an inversion recovery blood signal preparation (edelman et al. 1991): similar to arterial spin labeling, a non-selective 180° inversion pulse is applied; however, the signal in the imaging slice is reinverted by a subsequent slice-selective inversion pulse.
fig. 2.7.5 tof mra (top) and time-resolved dynamic mra with arterial spin labeling (bottom) of an intracranial arteriovenous malformation. in the tof mra, the nidus of the malformation is clearly seen, but the draining vein can hardly be identified because the inflowing blood is already completely saturated when it arrives in this part of the avm. in the three asl images acquired 100, 600, and 1,200 ms after signal preparation, the filling of the nidus and the drainage through the vein is clearly visible
with this preparation, the magnetization of the blood (and of all other tissues) outside the imaging slice is selectively inverted. after a delay time chosen to achieve a zero crossing of the longitudinal magnetization of the inverted blood, an image is acquired. if the blood has been completely exchanged during the delay, the signal of the labeled blood is nulled and only the static tissue is visible. this technique is often used in combination with cardiac triggering to visualize, e.g., the myocardium (fig. 2.7.7). in cardiac black-blood applications, both techniques are combined, which is possible because data are acquired during diastole, when the heart is nearly at rest, whereas the signal preparation is applied during systole.
fig. 2.7.6 in this image, blood is seen with high signal intensity; however, the surrounding tissue also appears with a strong mr signal. in the ascending aorta, signal voids are seen, which are caused by turbulent flow, and banding artifacts are visible in the subcutaneous fat. nevertheless, balanced ssfp sequences provide good angiographic overview images in very short acquisition times, without the need for contrast agent injection. in the contrast-enhanced data acquisition, a better background suppression is possible, and the projection image of the 3d data set clearly delineates the aorta and the adjacent vessels
fig. 2.7.7 ecg-triggered dark-blood image of the heart acquired with a single-shot fast spin-echo technique (haste). the blood signal both in the heart and in the cross-section of the descending aorta is completely suppressed
the tof contrast relies on the increase in signal amplitude due to the inflow of unsaturated magnetization. in addition to elevated signal amplitude, the spin movement can also create a change in the phase of the mr signal.
if a gradient g(t) is turned on and blood moves along the gradient direction (here: the x-direction), the phase ϕ(t) of the mr signal is given by ϕ(t) = γ ∫ g(t′) x(t′) dt′ = γ (x0 · m0 + v0 · m1 + …), with m0 = ∫ g(t′) dt′ and m1 = ∫ g(t′) t′ dt′, the integrals taken from 0 to t (2.7.1). here, the motion x(t) of the magnetization is expressed as a taylor series, x(t) = x0 + v0 t + …, and only the constant term (i.e., the initial position x0) and the linear term (i.e., the velocity v0) are considered. the two integrals m0 and m1 depend solely on the gradient timing and are called the zeroth and first moments of g(t). the next higher-order term is proportional to the acceleration of the spins; however, the corresponding moment m2 only becomes large if long time scales are considered; thus, the estimation of the spin phase from the zeroth and first moments is justified for gradient-echo sequences with short echo times. if the gradient timing is modified such that the first moment is zero, the gradients are called flow compensated. flow compensation is an important ingredient in many mr pulse sequences: if a range of velocities is present in a single voxel, the mr signal amplitude is attenuated due to the incoherent addition of the signals. with flow compensation, the individual signals all have the same phase, and the signals of the different velocities add up coherently. flow compensation is especially important in regions of high velocity gradients, e.g., in turbulent jets or in highly angulated vessels. in general, both m0 and m1 will be non-vanishing, and the phase of the signal becomes proportional to the local spin velocity. unfortunately, many other factors such as off-resonance, field inhomogeneity, or chemical shift also affect the spin phase, so that a direct velocity measurement is not possible with a single mr experiment alone. to create an mr image that is dependent on the local velocity, a minimum of two image acquisitions is required.
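the gradient moments m0 and m1 of eq. 2.7.1 can be evaluated numerically for simple waveforms. the sketch below uses idealized rectangular lobes in arbitrary units (our own illustration): a bipolar gradient nulls m0 but retains velocity sensitivity, whereas a symmetric 1:-2:1 waveform nulls both moments, i.e., it is flow compensated.

```python
import numpy as np

def moments(g, dt):
    """zeroth and first moment of a sampled gradient waveform g(t):
    m0 = sum g*dt (spatial encoding), m1 = sum g*t*dt (velocity sensitivity)."""
    t = (np.arange(len(g)) + 0.5) * dt  # sample-center times
    return np.sum(g) * dt, np.sum(g * t) * dt

dt = 0.1  # ms per sample
bipolar = np.array([1.0] * 10 + [-1.0] * 10)            # m0 = 0, m1 != 0
flowcomp = np.array([1.0] * 5 + [-1.0] * 10 + [1.0] * 5)  # 1:-2:1 lobes

m0_b, m1_b = moments(bipolar, dt)
m0_f, m1_f = moments(flowcomp, dt)
```

the bipolar waveform is the classic velocity-encoding gradient: stationary spins acquire no net phase (m0 = 0), while moving spins acquire a phase proportional to v0 via the non-zero m1.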
in the first, velocity-sensitized acquisition, a gradient timing is used with a carefully selected, non-vanishing first gradient moment (the zeroth moment defines the spatial encoding, i.e., the k-space trajectory). in a second, flow-compensated acquisition, a gradient timing is chosen that cancels m1. if the two phase images are directly subtracted, the result is a phase difference image that is linearly dependent on the spin velocity: this is the basis of an mr flow measurement (bryant et al. 1984). since phase data are only unambiguous in the angular range of ±180°, so is the velocity information in an mr flow measurement. to avoid artifacts due to multiple rotations of the spin phase (so-called wrap-around artifacts), the first moment needs to be chosen such that the maximum velocity in the image creates a phase shift of 180°. in general, this velocity is set via the so-called velocity encoding, or venc, parameter in the pulse sequence. higher venc values require weaker encoding gradients, which can be realized with shorter echo times. quite apart from the choice of the venc value, mr flow measurements are susceptible to phase noise, which is present in regions of low signal amplitude. if the snr is 1 or less, the phase in the image is nearly uniformly distributed between -180° and +180°; under these conditions, a meaningful flow measurement is not possible. unfortunately, phase noise is often present near blood vessels (e.g., in the air-filled spaces of the lung, close to the pulmonary vessels). here, the measurement of velocity values requires a very careful placement of the regions of interest (rois) to avoid systematic errors from included noise pixels. in a conventional flow measurement, velocity encoding is typically performed in the slice-selection direction only, because the orthogonal placement of the flow measurement slice induces a high tof signal in the cross-section of the vessel lumen.
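the ±180° ambiguity can be illustrated by wrapping the encoded phase into the unambiguous interval. the wrap convention below (mapping exactly +180° to -180°) is our implementation choice.

```python
def measured_phase_deg(v_cm_s, venc_cm_s):
    """phase encoded by a velocity v for a given venc, wrapped into the
    unambiguous +/-180 degree interval."""
    phi = 180.0 * v_cm_s / venc_cm_s
    return (phi + 180.0) % 360.0 - 180.0

in_range = measured_phase_deg(50.0, 100.0)  # half of venc -> +90 degrees
aliased = measured_phase_deg(150.0, 100.0)  # 1.5 x venc wraps around to -90
```

the second call shows the wrap-around artifact: a velocity of 1.5 × venc is reported as if it were -0.5 × venc, i.e., apparently flowing in the opposite direction.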
additionally, a velocity encoding direction parallel to the image plane makes the image acquisition susceptible to systematic displacement errors, because, e.g., the readout gradients are used for both spatial encoding and velocity measurement simultaneously. flow measurements in arterial vessels are often performed with cardiac synchronization to account for the pulsatility of the blood flow. cardiac synchronization can be performed by prospective ecg triggering or retrospective ecg gating. with prospective triggering, data acquisition is started by a trigger signal, which is generated by the ecg electronics during the qrs complex of the ecg (fig. 2.7.8). after data have been acquired for a certain number of cardiac phases, the measurement sequence is stopped until a new cardiac trigger signal is detected. in retrospective gating, image data are continuously acquired, and the time between the last trigger and the current data set is stored. later, data are resorted into predefined time intervals (bins) in the cardiac cycle, and the images are reconstructed. prospective triggering is less time-consuming during image reconstruction and is very precise in the delineation of the cardiac activity; however, a temporal gap at the end of the cardiac cycle is required, and thus flow measurements in late diastole are difficult. retrospective gating uses continuous image acquisition, and the magnetization steady state is always maintained. unfortunately, more data need to be acquired than with prospective triggering to ensure sufficient coverage of the cardiac cycle, and a temporal blurring due to the interpolation is seen in the velocity data. when the complex image data of the two acquisitions are subtracted instead of the phases, and the magnitude of the difference is displayed, a so-called phase-contrast mra image is created (dumoulin 1995).
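the resorting step of retrospective gating described above can be sketched as a binning routine (hypothetical trigger and acquisition times; a real implementation also interpolates between bins):

```python
from bisect import bisect_right

def bin_into_cardiac_phases(acq_times, triggers, n_bins):
    """retrospective gating: assign each acquisition to a cardiac-phase bin
    from its delay after the preceding r-wave, normalized to the rr interval."""
    bins = []
    for ta in acq_times:
        i = bisect_right(triggers, ta) - 1          # last trigger at or before ta
        if i < 0 or i + 1 >= len(triggers):
            bins.append(None)                       # not inside a complete rr interval
            continue
        frac = (ta - triggers[i]) / (triggers[i + 1] - triggers[i])
        bins.append(min(int(frac * n_bins), n_bins - 1))
    return bins

# hypothetical example: triggers at 0, 1, 2 s; four acquisitions; 10 cardiac phases
phases = bin_into_cardiac_phases([0.12, 0.95, 1.5, 2.3], [0.0, 1.0, 2.0], 10)
```

note that normalizing to each rr interval (rather than using a fixed delay) is what lets retrospective gating cover late diastole even with a varying heart rate.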
this pc mra image is not only dependent on the velocity of the spins, but also on the signal amplitude in both acquisitions; thus, every pc mra data set always has an overlaid tof contrast (fig. 2.7.9). an advantage of pc mra is the fact that the signal background of the surrounding stationary tissue is almost completely suppressed, and vessels can be traced further into the vascular periphery than with tof mra at comparable measurement parameters. in pc mra, the velocity is often encoded not only in one spatial direction, but in all three. since separate velocity-encoded acquisitions have to be performed for each direction, the measurement time of a pc mra is two- to fourfold longer than that of a tof mra. a careful selection of the venc value is especially important in pc mra. if, e.g., the maximum velocity in the imaging slice is twice the venc value, a phase shift of 360° (which cannot be distinguished from 0°) is created. under these conditions, the velocity-encoded and the velocity-compensated acquisitions have the same phase, and no pc mra signal would be observable. as blood does not flow with a constant velocity, and velocity values can be reduced (aneurysms) or increased (stenoses) by pathologies, the optimum choice of the venc value is often difficult. because pc mra is more time-consuming, is susceptible to artifacts, and suffers from the same signal saturation as tof mra, it is rarely used in clinical routine. with tof and pc mra techniques, the blood motion is used to create a signal difference between the vessel lumen and the surrounding tissue, whereas contrast-enhanced mra utilizes the reduction of the longitudinal relaxation time t1 after administration of a contrast agent. when a contrast agent is injected, the t1 of blood is shortened from t1blood = 1.2 s (at b0 = 1.5 t) to less than 100 ms during the first bolus passage (first pass).
the relaxation rate (i.e., 1/t1) is a function of the local contrast agent concentration c: 1/t1 = 1/t1,0 + r1 · c (2.7.2). the proportionality constant r1 between the contrast agent concentration c and the change in relaxation rate is called the relaxivity. the relaxivity is different for each contrast agent; typical values range from 4 to 10 l mmol⁻¹ s⁻¹. in general, high relaxivities are desirable because lower contrast-agent concentrations are needed to achieve the same change in image contrast. to enhance the signal in the contrast agent bolus and to suppress the signal background from static tissue, heavily t1-weighted spoiled gradient-echo sequences (flash) with very short repetition times (tr < 5 ms) and high flip angles (α = 20°-50°) are used (fig. 2.7.10). the use of short trs is advantageous because very short acquisition times of only a few seconds can be achieved even for the acquisition of a complete 3d data set. these short acquisition times are needed because the contrast agent is progressively diluted during the passage, which reduces the vessel-to-background contrast. short acquisition times are also favorable because mra data sets can thus be acquired in a single breath hold; for this reason, contrast-enhanced mra techniques are especially suited for abdominal applications (prince 1996; sodickson and manning 1997). to ensure isotropic visualization of the vascular territories, typically 3d techniques are used for data acquisition. conventional 3d techniques have measurement times of several minutes, so that even with short repetition times large parts of the k-space data are acquired after the contrast agent concentration has fallen to levels where only a weak signal enhancement is observable. for this reason, the measurement times are reduced using partial k-space sampling, parallel imaging, and view sharing between subsequent 3d data sets (sodickson and manning 1997; wilson et al. 2004; goyen 2006).
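eq. 2.7.2 can be used to estimate the first-pass t1 shortening; the relaxivity and the bolus concentration below are assumed illustrative values, only t1,0 = 1.2 s for blood at 1.5 t is taken from the text:

```python
def t1_with_contrast(t1_0, r1, c):
    """eq. 2.7.2: 1/t1 = 1/t1_0 + r1 * c
    (t1 in s, relaxivity r1 in l/(mmol*s), concentration c in mmol/l)."""
    return 1.0 / (1.0 / t1_0 + r1 * c)

# blood at 1.5 t: t1_0 = 1.2 s; assumed first-pass values r1 = 4, c = 5 mmol/l
t1_bolus = t1_with_contrast(1.2, 4.0, 5.0)   # well below the 100 ms quoted above
```

with these (assumed) numbers the bolus t1 drops to roughly 50 ms, consistent with the "less than 100 ms" stated for the first pass.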
in general, contrast agents can be categorized into extracellular agents that can leave the blood stream and intravascular agents that are specifically designed to remain in the vascular system. historically, the first approved mr contrast agent was gd-dtpa (gadopentetate dimeglumine, magnevist, schering, germany), an extracellular agent, which has the paramagnetic gd 3+ ion as the central atom in an open-chain ionic complex (chelate). over the years, several similar extracellular agents such as gd-bt-do3a (gadovist, schering), gd-dota (dotarem, guerbet, france), gd-bma (omniscan, ge healthcare), gd-hp-do3a (prohance, bracco imaging, italy), and gd-bopta (multihance, bracco, italy) have been approved for clinical use, which only slightly differ in the stability of the gd chelates, pharmacokinetic properties, and safety profiles. in general, the most recently approved contrast agents have higher relaxivities and thus allow acquiring mra data sets with higher contrast at the same dose, or with similar contrast at a lower dose. only recently, the first intravascular contrast agent, gadofosveset trisodium (vasovist, schering), has been approved for clinical use in europe (goyen 2006). this molecule has a diphenylcyclohexyl group, which is covalently bound to a gd complex; this creates a reversible, non-covalent binding of the molecule to serum albumin that significantly prolongs the half-life of the agent in blood to about 16 h.

fig. 2.7.9 phase-contrast images encoding flow in head-foot (top) and left-right (center) direction, and phase-contrast mra image (bottom). in the flow images, a velocity-sensitive and a velocity-compensated data set are subtracted, whereas the pc mra image is generated by complex subtraction of the respective signal amplitudes. note that the pc mra image has nearly no background signal from static tissue
after injection of the agent, at first a more rapid decline of the concentration is observed, because the fraction of the contrast agent bound to albumin depends on the contrast agent concentration; a steady-state concentration is established after the unbound fraction has been renally excreted. both extracellular and intravascular agents can be imaged during the first pass of the contrast agent, when a high vessel-to-background contrast is present, whereas intravascular contrast agents additionally allow angiographic imaging during the subsequent steady state. the t1 shortening is dependent on the contrast agent concentration, which decreases within a few seconds of the infusion, as the contrast agent bolus in the blood is increasingly diluted and, for the extracellular agents, the contrast agent extravasates. therefore, contrast-enhanced mra techniques usually use pulse sequences with very short acquisition times (ta < 30 s). the short passage time of the contrast agent bolus of a few seconds requires that imaging be precisely synchronized with the contrast agent infusion. the transit time of the bolus from the point of injection (usually a vein in the arm) to the vascular target structure (e.g., the renal arteries) varies significantly with the heart rate and cardiac output and can be difficult to predict. therefore, various synchronization and acquisition techniques have been proposed for a reliable mra data acquisition: an automatic technique to start the 3d mra data acquisition (smartprep™, general electric) uses a fast pulse sequence before the 3d mra, which continuously acquires the signal in the vascular target region. after administration of the contrast agent, this signal exceeds a certain signal threshold, and the 3d mra acquisition is automatically started.
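the automatic threshold start can be sketched as follows (synthetic monitor signal and threshold factor are assumptions; the actual smartprep implementation is vendor specific):

```python
def bolus_arrival_index(signal, n_baseline, factor):
    """first sample where the monitored signal exceeds factor times the mean
    baseline, mimicking an automatic sequence start on bolus arrival."""
    baseline = sum(signal[:n_baseline]) / n_baseline
    for i in range(n_baseline, len(signal)):
        if signal[i] > factor * baseline:
            return i
    return None  # threshold never reached: acquisition would not be started

# synthetic monitor signal: flat baseline, then bolus enhancement
monitor = [10, 11, 10, 9, 10, 12, 25, 60, 80, 70]
start = bolus_arrival_index(monitor, 5, 2.0)
```

the factor parameter embodies the trade-off described next: too low and noise triggers the scan prematurely, too high and the acquisition may never start.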
if the signal threshold is selected too low, image noise can mimic a bolus arrival and the measurement is triggered too early, whereas too high a threshold can lead to an omission of the data acquisition. with the test bolus technique, a small bolus of a few milliliters of the contrast agent is infused, and the passage of the bolus is imaged near the target vessel with a fast time-resolved 2d mr measurement (earls et al. 1997). the trigger delay td for the subsequent 3d measurement is then calculated from the transit time of the bolus tt and the acquisition time of the 3d mra ta as: td = tt - necho × ta. here, necho denotes the fraction of ta before the center of k-space is acquired (fig. 2.7.11). as with the automatic start of the sequence, fluoroscopic previews (carebolus™, siemens medical solutions) image the contrast bolus during its passage; however, here fast 2d sequences are used with real-time image reconstruction and display (riederer et al. 1988; wilman et al. 1997). once the bolus has reached the target region, the operator of the mr scanner manually switches to the predefined 3d mra pulse sequence, which is then executed with minimal time delay.

fig. 2.7.10 signal intensity as a function of t1-contrast in a spoiled gradient-echo pulse sequence (flash). at high flip angles and short repetition times the signal from tissue (t1 > 300 ms at 1.5 t) is nearly completely saturated, whereas a high intraluminal signal is seen due to the high concentrations of the contrast agent

• multiphase mra: time-resolved mra has been increasingly used to completely avoid manual or automated synchronization. multiphasic acquisitions consecutively acquire 3d mra data sets during the bolus passage so that the optimal vessel contrast is obtained in at least one of the data sets.
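the trigger-delay calculation of the test bolus technique above reduces to one line; the 16-s transit time is taken from the fig. 2.7.11 caption, the other numbers are assumptions:

```python
def trigger_delay(transit_time, ta, center_fraction):
    """td = tt - necho * ta: start the 3d mra so that the k-space center
    (acquired after the fraction 'center_fraction' of ta) meets the bolus peak."""
    return transit_time - center_fraction * ta

# 16 s transit time (cf. fig. 2.7.11); assumed 20 s scan with the
# k-space center acquired halfway through the acquisition
td = trigger_delay(16.0, 20.0, 0.5)
```

for a centric reordering (center of k-space acquired first), center_fraction approaches zero and the delay approaches the measured transit time itself.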
various methods of measurement acceleration are combined to ensure adequate temporal resolution; these include parallel imaging, asymmetric k-space readout, and temporal data interpolation (korosec et al. 1996; fink et al. 2005). nevertheless, time-resolved mra data sets are usually of lower spatial resolution than optimally acquired mra data with bolus synchronization (fig. 2.7.12). artifacts arise if the 3d mra data acquisition is not perfectly synchronized with the bolus passage. the appearance of these artifacts depends on the relative timing of the k-space acquisition and the concentration-time curve of the contrast agent. if the bolus arrives too late in the target vessel, the center of k-space has already been sampled and large structures, such as the interior of the blood vessel, appear dark, whereas fine structures, such as vessel margins, have a high signal if the bolus arrives during sampling of the k-space periphery. if the data acquisition is started too late, the bolus has already passed through the target vessel and the contrast has partly disappeared; thus, the signal is significantly reduced compared with an optimally synchronized data acquisition (maki et al. 1996; wilman et al. 2001; svensson et al. 1999). another disadvantage of suboptimal bolus timing is the fact that the bolus may have passed from the arterial to the venous system in some vascular regions (e.g., the extremities), so that both veins and arteries are seen in the images. this venous contamination makes the interpretation of the image data difficult in cases where arterial and venous vessels run parallel to each other. the variation in contrast agent concentration over time also results in a reduction of the achievable image resolution (fain et al. 1999). the spatial resolution of an mr image sampled with cartesian data acquisition is always uniquely defined by the measured number of k-space lines.
with an increasing number of k-space lines (i.e., larger k-space coverage), finer image details are encoded in the image. this so-called nyquist sampling theorem only applies if the signal intensity is constant during data acquisition. even with perfect synchronization with the contrast agent bolus, the contrast agent concentration is only optimal during acquisition of the central k-space lines. later on, the concentration is reduced, and the peripheral k-space regions are acquired with significantly reduced signal intensity (fig. 2.7.13). this different weighting of the k-space regions results in a reduction of the spatial resolution (blurring), which is mathematically described by the point-spread function (psf). the psf is the image of a point object; for linear imaging systems, it is used to describe the imperfections of the image acquisition system. the psf depends on the acquisition time, the contrast agent dynamics, and the measurement parameters of the pulse sequence. the deviation from an ideal psf is particularly visible in those spatial directions that are acquired with the lowest sampling velocity; in conventional 3d acquisition, this is either the phase-encoding or the partition-encoding direction. to eliminate this asymmetry and to distribute the blurring evenly in both spatial directions, elliptical scanning of the phase- and partition-encoding steps has been proposed (bampton et al. 1992), where the encoding steps are acquired along an elliptical path, starting from the center of k-space.

fig. 2.7.11 test bolus measurement in the heart (blue) and the aorta (red) of a patient. for an optimal visualization of the aortic arch, a time delay of 16 s is required between contrast agent injection and the acquisition of the central k-space lines

using the signal-time curve of a test bolus, the signal variation during data acquisition can be avoided.
therefore, an injection scheme is calculated from the signal-time curve using linear system theory, where the injection rate is modulated such that there is a constant contrast agent concentration (i.e., an ideal psf) in the target region throughout the data acquisition. this technique requires a programmable contrast-agent injector and additional computations, and the constant concentration can only be achieved in a limited target volume. another option for reducing the intensity changes is to induce a blood flow stasis for a brief period after contrast agent inflow. this can be achieved in the peripheral blood vessels, without endangering the patient, using an inflatable cuff that temporarily blocks the blood flow during data acquisition; this technique has been successfully applied to contrast-enhanced mra studies of the hand (zhang et al. 2004) and the legs (zhang et al. 2004; vogt et al. 2004). in addition to reducing t1, an mr contrast agent always also reduces t2 (and t2*). the reduction in t2* can lead to a significant signal reduction in the mra image at high contrast agent concentrations; often the venous vessels through which the bolus is infused appear dark in the mra data sets (albert et al. 1993). to avoid these artifacts, the contrast agent concentration or the echo time te can be reduced. besides reducing t2*, the contrast agent also causes a concentration-dependent resonance frequency shift. radial or spiral k-space data acquisitions are especially susceptible to these frequency shifts, which cause blurring artifacts that can be compensated using dedicated off-resonance correction algorithms. contrast-enhanced mra studies are also susceptible to artifacts known from tof mra. in particular, pulsation artifacts are visible in contrast-enhanced measurements (al-kwifi et al. 2004). intra-voxel dephasing is also observed, although the effect is much weaker due to the shorter tes used here.
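the psf broadening caused by a time-varying k-space weighting, as discussed above, can be illustrated numerically; the mono-exponential decay of the weighting is an assumed toy model, not a curve from the text:

```python
import numpy as np

n = 256
k = np.arange(n) - n // 2            # k-space line index, centered on zero

def psf(weights):
    """point-spread function: fourier transform of the k-space weighting,
    normalized to its maximum."""
    p = np.abs(np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(weights))))
    return p / p.max()

ideal = np.ones(n)                   # constant weighting -> ideal (delta-like) psf
decay = np.exp(-np.abs(k) / 16.0)    # assumed signal decay away from the center
                                     # (centric order with a diluting bolus)

width_ideal = int(np.sum(psf(ideal) > 0.5))   # pixels above half maximum
width_decay = int(np.sum(psf(decay) > 0.5))   # broader psf -> blurring
```

with constant weighting the psf is a single-pixel peak; the decaying weighting spreads it over several pixels, which is exactly the resolution loss (blurring) described above.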
to keep the acquisition time short, flow compensation is generally not integrated into contrast-enhanced mra pulse sequences, because the additional gradients significantly prolong the measurement time. contrast-enhanced mra offers substantial advantages over tof or pc mra, because the saturation effects seen in tof mra are almost completely avoided. thus, extended vascular structures, for example in the abdomen or the extremities, can be visualized with a few slices oriented parallel to the vessel. the short acquisition times of contrast-enhanced mra allow breath-held acquisitions (ta < 30 s), which significantly reduces motion artifacts. the dynamic information of multiphase 3d mra contains information about vascular anatomy, flow direction (e.g., in aortic aneurysms), tissue perfusion (e.g., in the kidney), and vascular anomalies, which might not be visible on a single mra data set. using intravascular agents, the contrast agent concentration in the blood is maintained over time spans of minutes to hours. in general, the same pulse sequences can be applied for mra with intravascular contrast agents as with the extracellular contrast agents, as they share the same contrast mechanism; however, as the concentration of an intravascular contrast agent in blood attains an equilibrium state after a few re-circulations (typically 20-40 s), mra data sets can also be acquired over longer acquisition times. this prolonged acquisition window can be used to increase the image resolution, because an ideal psf can be achieved and, thus, no blurring should be present (van bemmel et al. 2003a; grist et al. 1998). because data acquisition does not have to be synchronized with the contrast agent bolus, it can be started once the contrast agent concentration has reached equilibrium. the contrast agent injection does not need to be performed with a contrast agent pump, but can be infused manually via a venous access port even prior to the mr examination.
with intravascular contrast agents, the acquisition time ta is not limited by the transit time of the bolus, and data sets can be acquired over much longer acquisition periods. with longer acquisition times, special trigger and gating techniques (ecg triggering, respiratory gating, navigator echoes [ahlstrom et al. 1999]) are required to suppress motion artifacts. if image data are acquired in the equilibrium phase, venous overlay is a fundamental problem in mra with intravascular contrast agents (fig. 2.7.14). venous contamination makes especially those images hard to interpret that are calculated with a projection technique such as the maximum intensity projection (mip), because here the depth information is lost. a separation of arterial and venous vessels is possible with the help of dedicated post-processing software: a region in an arterial vessel is identified, and a region-growing algorithm is used to find all connected regions. unfortunately, when arteries and veins are in close proximity, the algorithm may artifactually connect arterial to venous vessels. with equilibrium-phase mra data sets of intravascular contrast agents, these artifacts are easier to avoid than in first-pass studies because of the higher spatial resolution. nevertheless, for direct arteriovenous connections or shunts, a manual correction of the segmentation is always required (van bemmel et al. 2003b; svensson et al. 2002). the use of intravascular contrast agents is not limited to the equilibrium phase, but can be combined with a first-pass study during the initial contrast agent injection to obtain both the dynamics of the contrast agent passage and the vascular morphology (grist et al. 1998). additionally, the dynamic information can be utilized to separate arteries from veins in the high-resolution equilibrium-phase 3d mra data sets (bock et al. 2000).
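the region-growing step of the artery-vein separation described above can be sketched on a tiny binary vessel mask (a toy 2d example; real implementations work on 3d data with intensity and connectivity criteria):

```python
from collections import deque

def region_grow(mask, seed):
    """collect all pixels 4-connected to the seed inside a binary vessel mask;
    disconnected vessels (e.g., a separate vein) are not included."""
    h, w = len(mask), len(mask[0])
    seen = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and (ny, nx) not in seen:
                seen.add((ny, nx))
                queue.append((ny, nx))
    return seen

# toy mask: an "artery" on the left and a disconnected "vein" on the right
mask = [[1, 1, 0, 1],
        [0, 0, 0, 1]]
artery = region_grow(mask, (0, 0))
```

the failure mode mentioned in the text appears as soon as a single bright voxel bridges the two structures: the grown region then spills into the vein, which is why manual correction is needed for arteriovenous shunts.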
although the long half-life of the intravascular contrast agent is advantageous for intraluminal studies, it can become a problem if a dynamic study has to be repeated. with extracellular contrast agents, this is possible within a few minutes, whereas after a study with an intravascular contrast agent one may have to wait up to several hours. in practice, intravascular contrast agents are still advantageous for mra, as they allow combining high spatial resolution in the equilibrium phase with dynamic information during the first passage. additionally, these contrast agents can be used to quantify perfusion (prasad et al. 1999) or to delineate vessels during mr-guided intravascular procedures (wacker et al. 2002; martin et al. 2003). various techniques for mr angiography and mr flow measurements exist that make use of the different physical properties of blood: flow, pulsation, or signal variation following administration of a contrast agent. tof mra is often used in anatomical regions where a high inflow is present, and long measurement times can be tolerated. phase-contrast flow measurements provide a quantitative assessment of blood flow when combined with cardiac triggering. contrast-enhanced studies are favorable in abdominal regions and the periphery, where saturation effects are a limiting factor for tof mra. intravascular contrast agents further extend the capabilities of contrast-enhanced mra because high-resolution data sets can be acquired over an extended time.

fig. 2.7.14 surface rendering (left) and mip (right) visualization of an abdominal mra with an intravascular contrast agent. the three-dimensional character of the data set is better captured with the surface display, whereas the finer details are better visualized on the mip. the presence of venous signal makes the interpretation of the data more difficult; however, a significantly higher spatial resolution can be achieved with intravascular contrast agents
diffusion in the context of diffusion-weighted mri or diffusion tensor imaging (dti) refers to the stochastic thermal motion of molecules or atoms in fluids and gases, a phenomenon also known as brownian motion. this motion depends on the size, the temperature, and in particular on the microscopic environment of the examined molecules. diffusion measurements can therefore be used to derive information about the microstructure of tissue. in mri, stochastic molecular motion can be observed as a signal attenuation. this was first recognized in the nmr spin-echo experiment by e.l. hahn in 1950 (hahn 1950), long before the invention of actual magnetic resonance imaging. a number of more sophisticated experiments were described in the following years that allowed the quantitative measurement of the diffusion coefficient (carr and purcell 1954; torrey 1956; woessner 1961). of particular importance is the pulsed-gradient spin-echo (pgse) technique proposed by stejskal and tanner in 1965 (stejskal and tanner 1965), which is described in detail in sect. 2.8.3.1. diffusion as an imaging-contrast mechanism was first incorporated in mri pulse sequences in 1985 (taylor and bushell 1985; merboldt et al. 1985) and applied in vivo in 1986 (lebihan et al. 1986). its great potential for clinical mri became evident around 1990, when diffusion-weighted images were recognized to be extremely valuable for the early detection of stroke (moseley et al. 1990a,b; chien et al. 1990, 1992). areas of focal cerebral ischemia appear hyperintense in diffusion-weighted images only minutes after the onset of symptoms (see also chap. 3, sect. 3.4). having thus gained wide attention, diffusion-weighted imaging was evaluated in many other applications such as the characterization of brain tumors (tien et al. 1994; sugahara et al. 1999; okamoto et al. 2000) and of multiple sclerosis lesions (cercignani et al.
2000; filippi and inglese 2001), but none of these reached the clinical significance of stroke diagnosis. mainly due to limitations of image quality, there are considerably fewer publications about diffusion-weighted imaging outside the central nervous system. examples are studies differentiating osteoporotic and malignant vertebral compression fractures (baur et al. 1998, 2003; herneth et al. 2002) or benign and malignant lesions of the liver (moteki et al. 2002; taouli et al. 2003) and the kidneys (cova et al. 2004). molecular diffusion is a three-dimensional process and is, depending on the tissue microstructure, in general anisotropic, i.e., the extent of molecular motion depends on the spatial orientation. a physical quantity called the diffusion tensor is required to fully describe anisotropic diffusion. mri techniques to measure the diffusion tensor were introduced in the 1990s (basser et al. 1994; pierpaoli et al. 1996) and gained considerably more popularity when tracking algorithms were proposed for the three-dimensional reconstruction of white matter fiber tracts (mori et al. 1999; conturo et al. 1999). today, diffusion tensor imaging is a valuable research tool with applications, e.g., in neurodevelopment (snook et al. 2005), neuropsychiatry (taber et al. 2002; sullivan and pfefferbaum 2003), or aging (moseley 2002; sullivan and pfefferbaum 2003; sullivan et al. 2006). all molecules in fluids or gases perform microscopic random motions. this motion is called molecular diffusion or brownian motion after robert brown (1773-1858), who observed the minute motion of plant pollen floating in water in 1827 (brown 1866). these pollen grains were constantly hit by fast-moving water molecules, resulting in a visible irregular motion of the much larger particles.
due to brownian motion, a tracer such as a droplet of ink given into water will diffuse into its surroundings, resulting in spatially and temporally varying tracer concentrations, until the ink is diluted homogeneously in the water. however, brownian molecular motion does not require concentration gradients, but occurs also in fluids consisting of only a single kind of molecule. the molecules of any arbitrary droplet of water within a larger water reservoir will stochastically disperse into their surroundings; this process is called diffusion or, to emphasize that the observed molecules do not diffuse into an external medium, self-diffusion. it should be noted that diffusion always refers to a stochastic and not directed motion and is strictly to be distinguished from any kind of directional flow of a liquid. the molecules in fluids or gases perform random motions due to their thermal kinetic energy ekin, which is proportional to the temperature t: ekin = (3/2) kt (k = 1.38 × 10⁻²³ j/k is the boltzmann constant). this energy corresponds to a mean velocity v = √(2 ekin / m) for a molecule of mass m; in the case of water at room temperature (t = 300 k), the mean velocity is about 650 m/s. (strictly speaking, we calculate the square root of the mean value of the squared velocity, i.e., √(mean(v²)), which is slightly different, by a factor of √(3π/8) ≈ 1.085, i.e., by 8.5%, from the actual mean value of the velocity due to the asymmetry of the maxwell distribution.) due to frequent collisions with other particles, however, molecules do not move linearly in a certain direction but follow a random course that can be visualized in a random-walk simulation as shown in fig. 2.8.1a. this figure also demonstrates that, macroscopically, the mean displacement or diffusion distance, s, after a time t is much more interesting than the linear velocity of the molecule.
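the quoted mean velocity of about 650 m/s follows directly from the kinetic energy relation above; a quick numerical check (physical constants from standard tables):

```python
import math

k_B = 1.380649e-23               # boltzmann constant, J/K
m_h2o = 18.015 * 1.66054e-27     # mass of one water molecule, kg

def rms_speed(temperature):
    """from e_kin = (3/2) k t = (1/2) m v^2  ->  v = sqrt(3 k t / m)."""
    return math.sqrt(3.0 * k_B * temperature / m_h2o)

v = rms_speed(300.0)   # about 650 m/s, the value quoted in the text
```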
the mean diffusion distance of a particle is proportional to the square root of the diffusion time t and is described by the diffusion coefficient d: s = √(6 d t). this relation is shown in fig. 2.8.1b for a water molecule with a diffusion coefficient of d = 2.03 × 10⁻³ mm²/s at a temperature of 20°c. since diffusion is a stochastic process, the diffusion distance after the time t is not the same for all molecules but is described by a gaussian probability distribution as illustrated in fig. 2.8.2. as shown in this illustration, after a diffusion time t most molecules are still found at or close to their original position; the diffusion distance s corresponds to the standard deviation of the shown distributions. typical diffusion distances for free water molecules at room temperature are about 25 µm after a diffusion time of 50 ms and 110 µm after 1 s. in contrast to free diffusion in pure water (fig. 2.8.3a), the water molecules in tissue cannot move freely, but are hindered by the cellular tissue structure, in particular by cell membranes, cell organelles, and large macromolecules, as shown schematically in fig. 2.8.3b. due to additional collisions with these obstacles, the mean diffusion distance of water molecules in tissue is reduced compared to that of free water, and a decreased effective diffusion coefficient is found in tissue, called the apparent diffusion coefficient (adc). obviously, the adc depends on the number and size of obstacles and therefore on the cell types that compose the tissue. hence, diffusion properties can be used to distinguish different types of tissue. examples of diffusion coefficients in different tissues and in fluids at different temperatures are summarized in table 2.8.1. not only the number and size of organelles influence diffusion, but also the geometrical arrangement of the cell membranes. in particular, the diffusion of water molecules can reflect an anisotropic arrangement of cells, as indicated in fig. 2.8.3c.
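the quoted distances of about 25 µm after 50 ms and 110 µm after 1 s follow from s = √(6 d t) with the free-water diffusion coefficient given above; a quick check:

```python
import math

def diffusion_distance(D, t):
    """mean 3d displacement s = sqrt(6 d t); d in mm^2/s, t in s, result in mm."""
    return math.sqrt(6.0 * D * t)

D = 2.03e-3                                     # mm^2/s, free water at 20 °c
s_50ms = diffusion_distance(D, 0.050) * 1000.0  # in µm, about 25 µm
s_1s = diffusion_distance(D, 1.0) * 1000.0      # in µm, about 110 µm
```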
since cell membranes are barriers for diffusing molecules, water diffuses more freely along the long axis of a cell than perpendicular to it (beaulieu 2002). hence, the adc measured in the direction parallel to the cellular orientation will be greater than that measured in an orthogonal direction. this property, the dependence of a quantity on its orientation in space, is called anisotropy. it has proven useful to illustrate the diffusion properties by spheres for isotropic diffusion and by three-dimensional ellipsoids for anisotropic diffusion, as shown in fig. 2.8.3d-f. these shapes visualize the probability density function of diffusion distances in space. isotropic diffusion can be completely described by its (apparent) diffusion coefficient d, which corresponds to the radius of the sphere (fig. 2.8.3d,e). more quantities are required for a complete description of anisotropic diffusion, e.g., three angles that define the orientation of the ellipsoid in space, and the lengths of the three principal axes describing the magnitude of the diffusion coefficients. in physics or mathematics, a quantity that corresponds to such a three-dimensional ellipsoid is called a tensor. the physical object called a tensor can also be explained by comparing it to more commonly known objects such as scalars and vectors. a scalar is a quantity that can be measured or described as a single number; typical examples are the temperature, the mass, or the density of an object. in imaging, the image intensity, e.g., in a t2-weighted image, is a scalar: a single number is required for each pixel to describe the intensity. as demonstrated in the last example, scalars can be spatially dependent and be visualized as intensity maps; another example is a temperature map of an object that describes the temperature as a scalar quantity for each spatial position of the object.
other physical quantities cannot be described by a single number, such as the velocity or acceleration of a particle in space or the flow of a liquid. these quantities are vectors and require both a direction in space and a magnitude to be fully described. a vector is typically visualized as an arrow. for example, in the case of velocity, the direction of the arrow describes the direction of motion and the length of the arrow represents the magnitude of the vector, e.g., as measured in meters/second. such an arrow can be mathematically described by three independent numbers: either by its length and two angles defining its orientation or by three coordinates (x-, y-, and z-components of the vector). these coordinates are often presented as a column or row vector, e.g., v = (vx vy vz). vectors as well as scalars can depend on the spatial position; a flowing liquid can be described by a velocity vector at each position. a full data set consisting of a vector (i.e., an arrow) at each point in space is called a vector field. some quantities such as the molecular diffusion cannot be fully described as scalars or vectors; they are tensors. as mentioned above, the diffusion properties can be depicted by a three-dimensional ellipsoid and therefore require six independent numbers to define the direction and length of all axes. these six values are visualized in fig. 2.8.4 as the three lengths of the axes defining the shape of the ellipsoid and the three angles describing its orientation. however, instead of using angles, tensors can equally well be described by six coordinates arranged in a symmetric 3 × 3 matrix in analogy to the three coordinates of a vector. these coordinates are called dxx, dyy, dzz, dxy, dxz, and dyz and form the matrix representation

        ( dxx  dxy  dxz )
    d = ( dxy  dyy  dyz ).
        ( dxz  dyz  dzz )

this matrix is called symmetric because the elements are mirrored at the diagonal.
of these matrix elements, only the diagonal elements, dxx, dyy, dzz, can be measured directly in mri and correspond to the diffusion in the x-, y-, and z-directions; the off-diagonal elements must be determined indirectly from further measurements as described in sect. 2.8.4. in the case of isotropic diffusion, i.e., if the ellipsoid is a sphere, then this matrix has a very simple form, because a single diffusion coefficient suffices to describe the diffusion. this diffusion coefficient is found on the diagonal of the matrix, and all off-diagonal elements are zero:

        ( d  0  0 )
    d = ( 0  d  0 ).
        ( 0  0  d )

the diffusion tensor has some properties that are important to understand in order to measure and interpret diffusion imaging data. the mean diffusivity, i.e., the diffusion coefficient averaged over all spatial orientations, can be derived from the trace of the diffusion tensor, i.e., the sum of its diagonal elements:

⟨d⟩ = trace(d)/3 = (dxx + dyy + dzz)/3.

an mri measurement of the mean adc is therefore also called trace imaging. to analyze the non-isotropic properties of the diffusion tensor, a process called diagonalization of the tensor is used. the meaning of tensor diagonalization can be visualized as finding the three axes (i.e., their length and orientation) that define the ellipsoid in fig. 2.8.4. mathematically, the tensor matrix is transformed into a form where all off-diagonal elements are zero:

        ( d1  0   0  )
    d = ( 0   d2  0  ).
        ( 0   0   d3 )

since six parameters are still required to fully describe the tensor, in addition to the three diagonal elements (the eigenvalues d1, d2, d3), three vectors, v1, v2, v3, are determined, which are called eigenvectors. the eigenvectors, which are always orthogonal and have unit length, define the orientation of the ellipsoid and are shown as thick grey arrows in fig. 2.8.4d. the ratios of the diffusion eigenvalues describe the isotropy or anisotropy of diffusion. in the case of isotropic diffusion, all eigenvalues are the same, d1 = d2 = d3, and diffusion is represented by a sphere; see fig. 2.8.5a.
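the trace and the diagonalization can be sketched numerically (a minimal illustration with a hypothetical symmetric tensor, not measured data; values in units of 10⁻³ mm²/s):

```python
import numpy as np

# hypothetical diffusion tensor (illustrative values): one large principal
# diffusivity and small off-diagonal couplings
d = np.array([[1.40, 0.10, 0.05],
              [0.10, 0.50, 0.02],
              [0.05, 0.02, 0.45]])

# mean diffusivity from the trace (invariant under diagonalization)
mean_diffusivity = np.trace(d) / 3.0

# diagonalization: eigenvalues are the principal diffusivities d1 >= d2 >= d3,
# eigenvector columns give the orientation of the ellipsoid axes
eigenvalues, eigenvectors = np.linalg.eigh(d)  # returned in ascending order
d3, d2, d1 = eigenvalues
```

note that the sum of the eigenvalues equals the trace of the original matrix, so the mean diffusivity is the same before and after diagonalization, as stated in the text.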
if the largest eigenvalue is much greater than the two other eigenvalues, d1 >> d2 ≈ d3, then the tensor is represented by a cigar-like shape as in fig. 2.8.5b. in this case, diffusion in one direction is much less hindered than in the other directions; this is sometimes called linear diffusion and is typically found in white matter fiber tracts, where the motion of water molecules is restricted by the cell membranes and the glial cells perpendicular to the fiber tract orientation. the orientation of the fiber tracts is described by the eigenvector v1 belonging to the large eigenvalue, d1. if two large eigenvalues are much greater than the third one, d1 ≈ d2 >> d3, then the diffusion tensor is represented by a pancake-like shape; see fig. 2.8.5c. this tensor corresponds to preferred diffusion within a two-dimensional plane, which can occur in layered structures and is referred to as planar diffusion. in order to describe the diffusion anisotropy quantitatively, several anisotropy indices have been introduced that reduce the diffusion tensor to a single number, i.e., a scalar, measuring the anisotropy. most frequently used is the fractional anisotropy (fa), defined as

fa = √(3/2) · √((d1 − d̄)² + (d2 − d̄)² + (d3 − d̄)²) / √(d1² + d2² + d3²),

where d̄ = (1/3)(d1 + d2 + d3) is the mean diffusivity. the fractional anisotropy ranges from 0 (isotropic diffusion) to 1 (maximum anisotropy) and can be interpreted as the fraction of the magnitude of the tensor that can be ascribed to anisotropic diffusion.

fig. 2.8.4 tensor visualized as a three-dimensional ellipsoid: six independent numbers are required to define a tensor, three lengths (eigenvalues), d1, d2, d3, corresponding to the lengths of the principal axes of the ellipsoid (shown three-dimensionally in a and in two-dimensional sections in b), and three angles, α, β, γ, describing the spatial orientation of the axes (c,d). the eigenvectors of the tensor are shown as thick gray arrows in d.
a similar index is the relative anisotropy (ra), defined as

ra = √((d1 − d̄)² + (d2 − d̄)² + (d3 − d̄)²) / (√3 · d̄).

the relative anisotropy is the magnitude of the anisotropic part of the tensor divided by its isotropic part and ranges from 0 (isotropy) to √2 ≈ 1.414 (maximum anisotropy). in order to scale the maximum value of the ra to 1 as well, a normalized (or scaled) definition with an additional factor of 1/√2 is sometimes used (and often called ra as well). less frequently used indices of anisotropy are the volume ratio (vr),

vr = (d1 · d2 · d3) / d̄³,

and the volume fraction, vf = 1 − vr. all these anisotropy indices can be used to describe the diffusion anisotropy (kingsley and monahan 2005), but fractional anisotropy may be considered the preferred index that is currently most frequently used. some typical values of these indices are compared in table 2.8.2; fa, ra, nra, and vf start at 0 for isotropic diffusion and increase with increasing anisotropy. the volume ratio is 1 in the case of isotropic diffusion and decreases with increasing anisotropy. to introduce diffusion weighting in mri pulse sequences, today almost exclusively a technique proposed by stejskal and tanner in 1965 is used (stejskal and tanner 1965). the basic idea is to insert additional gradients (usually referred to as diffusion gradients) into the pulse sequence in order to measure the stochastic molecular motion as signal attenuation. originally, these were two identical gradients on both sides of the refocusing 180° rf pulse of a spin-echo sequence: the so-called pulsed-gradient spin echo (pgse) technique. however, to simplify the explanation, we will replace this scheme with two gradients with opposite signs that do not require a 180° pulse in between, as shown in fig. 2.8.6. the contrast mechanism is the same for both gradient schemes. as illustrated in fig. 2.8.6, the diffusion gradients superpose a linear magnetic field gradient on the static field, b0.
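the anisotropy indices defined above (fa, unnormalized ra, and vr) can be computed directly from the three eigenvalues; a minimal sketch:

```python
import math

def anisotropy_indices(d1, d2, d3):
    """fa, (unnormalized) ra, and vr from the three tensor eigenvalues."""
    d_mean = (d1 + d2 + d3) / 3.0
    deviation = math.sqrt((d1 - d_mean) ** 2 + (d2 - d_mean) ** 2
                          + (d3 - d_mean) ** 2)
    fa = math.sqrt(1.5) * deviation / math.sqrt(d1 * d1 + d2 * d2 + d3 * d3)
    ra = deviation / (math.sqrt(3.0) * d_mean)
    vr = (d1 * d2 * d3) / d_mean ** 3
    return fa, ra, vr
```

for equal eigenvalues this gives fa = ra = 0 and vr = 1, and in the extreme anisotropic limit (only one nonzero eigenvalue) fa = 1 and ra = √2, matching the ranges quoted in the text.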
since the larmor frequency of the spins is proportional to the magnetic field strength, spins at different positions now precess with different larmor frequencies and, thus, become dephased. if the spins are stationary (no diffusion, i.e., diffusion coefficient d = 0) and remain at their position, the second diffusion gradient with opposite sign exactly compensates the effect of the first one and rephases the spins. hence, without diffusion, the signal after the application of the pair of diffusion gradients is the same as before (neglecting relaxation effects). in the case of diffusing spins, the second diffusion gradient cannot completely compensate the effect of the first one, since spins have moved between the first and second gradient. the additional phase the spins gained during the first diffusion gradient is not reverted during the second one. consequently, rephasing is incomplete after the second diffusion gradient, resulting in diffusion-dependent signal attenuation. as can be deduced from this explanation, the signal attenuation is larger if the diffusivity, i.e., the mobility of the spins, is larger. quantitatively, the signal attenuation depends exponentially on the diffusion coefficient, dg, in the direction defined by the diffusion gradient gd:

s(dg, b) = s0 · exp(−b · dg),

where s0 is the original (unattenuated) signal and s(dg, b) is the attenuated diffusion-weighted signal. the b-value, b, is the diffusion weighting that plays a similar role for diffusion-weighted imaging as the echo time for t2-weighted imaging: the diffusion contrast, i.e., the signal difference between two tissues with different adcs, is low at small b-values and can be maximized by choosing the optimal b-value as discussed below. the b-value is expressed in units of s/mm² and depends on the timing and the amplitude, gd, of the diffusion gradients:

b = γ² gd² δ² (∆ − δ/3),

as illustrated in fig.
2.8.6, δ is the duration of each diffusion gradient, and ∆ is the interval between the onsets of the gradients; γ is the gyromagnetic ratio of the diffusing spins. a typical b-value used for diffusion-weighted imaging of the brain is 1,000 s/mm²; for other applications, b-values range between 50 s/mm² (dark-blood liver imaging) and about 20,000 s/mm² (imaging of the diffusion q-space [assaf et al. 2002; wedeen et al. 2005]). to obtain b-values of about 1,000 s/mm², diffusion gradients are required to be much longer (e.g., δ = 25 ms) and of larger amplitude (e.g., gd = 25 mt/m) than normal imaging gradients applied in mri; hence, diffusion-weighted imaging can be demanding for the gradient amplifiers and is often acoustically noisy. the formula for the b-value given above is valid only for a pair of stejskal-tanner diffusion gradients. the diffusion weighting of arbitrary time-dependent diffusion gradient shapes, gd(t), applied between t = 0 and t = T, can be calculated according to (stejskal and tanner 1965) as

b = γ² ∫₀ᵀ ( ∫₀ᵗ gd(t′) dt′ )² dt.

by applying diffusion gradients, diffusion-weighted images can be acquired in which the signal intensity depends on the adc, e.g., structures with large adc such as liquids appear hypointense. to quantify the adc, at least two diffusion-weighted measurements with different diffusion weightings (i.e., different b-values) are required, as shown in fig. 2.8.7. by determining the signal intensity at the lower b-value, s(b1), and the higher b-value, s(b2), the adc can be calculated as

adc = ln(s(b1)/s(b2)) / (b2 − b1).

this can be done either for the mean signal intensities in a region of interest or pixel by pixel in order to calculate an adc map as in fig. 2.8.7. the adc can also be calculated from more than two b-values by fitting an exponential to the measured signal intensities or by linear regression analysis applied to the logarithm of the signal intensities. it should be noted that diffusion-weighted images generally exhibit a mixture of different contrasts.
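the b-value and adc formulas can be sketched numerically (a minimal illustration; the gradient spacing ∆ = 45 ms is an assumed value chosen so that the gradient parameters from the text, gd = 25 mt/m and δ = 25 ms, give roughly 1,000 s/mm²):

```python
import math

GAMMA_PROTON = 2.675e8  # gyromagnetic ratio of protons in rad/(s*T)

def b_value(g_d, delta, big_delta):
    """stejskal-tanner pair: b = gamma^2 * g^2 * delta^2 * (Delta - delta/3);
    g_d in T/m, delta and big_delta in s; result converted to s/mm^2."""
    b_si = GAMMA_PROTON**2 * g_d**2 * delta**2 * (big_delta - delta / 3.0)
    return b_si * 1e-6  # s/m^2 -> s/mm^2

def adc_from_two_b(s1, s2, b1, b2):
    """adc = ln(s(b1)/s(b2)) / (b2 - b1), in mm^2/s."""
    return math.log(s1 / s2) / (b2 - b1)

b_brain = b_value(25e-3, 25e-3, 45e-3)  # about 1,000 s/mm^2

# round trip: simulate s(b) = s0 * exp(-b * adc) and recover the adc
s0, adc_true = 100.0, 0.9e-3
s_b1000 = s0 * math.exp(-1000.0 * adc_true)
adc_measured = adc_from_two_b(s0, s_b1000, 0.0, 1000.0)
```

the round trip recovers the simulated adc exactly, since the two-point formula is the analytic inverse of the mono-exponential signal model.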
many diffusion-weighted pulse sequences require relatively long echo times between 60 and 120 ms because of the long duration of the diffusion preparation. thus, diffusion-weighted images are often also t2-weighted, and it can be difficult to differentiate image contrast due to diffusion from t2 effects. this is a typical problem in diffusion-weighted mri of the brain and is known as the t2 shine-through effect (burdette et al. 1999). a further consequence of the long minimum echo times due to the diffusion preparation is the relatively low signal-to-noise ratio of diffusion-weighted images. the combined effects of diffusion weighting, which particularly decreases the signal of fluids, and of t2 weighting, which predominantly reduces the signal of other (non-fluid) tissue, result in globally low signal intensity on diffusion-weighted images. therefore, signal-increasing techniques such as increasing the voxel volume or (magnitude) averaging are often required for diffusion-weighted mri. in addition, the adc calculation can be corrected for the decreasing signal-to-noise ratio at higher b-values (dietrich et al. 2001a). the range of b-values chosen for a diffusion-weighted mri experiment should depend on the typical diffusion coefficients that are measured and on the signal-to-noise ratio of the diffusion-weighted image data. as a rule of thumb, the signal attenuation should be at least about 60%, i.e., the product of the diffusion coefficient and the b-value range, bmax − bmin, should be approximately 1 (xing et al. 1997). this corresponds to a b-value difference of about 1,000-1,500 s/mm² in brain tissue with adcs between 0.6 × 10⁻³ and 1.0 × 10⁻³ mm²/s. however, the choice of the largest b-value is frequently limited by signal-to-noise considerations, and thus, the maximum diffusion weighting is often reduced in order to maintain a sufficient signal-to-noise ratio. a second point to consider is the choice of the lowest b-value. although a b-value of 0 is often chosen, a slightly higher value of, for example, 50 s/mm² can be advantageous in order to suppress the influence of perfusion effects (lebihan et al. 1988; van rijswijk et al. 2002).

fig. 2.8.6 (caption, continued) in the case of diffusing spins, rephasing is incomplete since spins have moved between the first and second gradient; thus, diffusion-dependent signal attenuation is observed (red arrow).

fig. 2.8.7 acquisition of two images with different diffusion weightings (b-values b1 and b2) in order to calculate an adc map. note the large signal attenuation in csf at the higher b-value, b2, and the correspondingly high diffusion coefficient in the adc map.

historically, the first mri pulse sequences with inserted diffusion gradients were stimulated-echo (taylor and bushell 1985; merboldt et al. 1985) and spin-echo sequences (lebihan et al. 1986); a schematic spin-echo pulse sequence with diffusion gradients is shown in fig. 2.8.8. in this diagram, diffusion gradients are added for all three spatial directions (readout, phase, and slice direction); however, they are usually switched on in only one or two of the three directions at a time. since spins are refocused by a 180° pulse, both diffusion gradients have the same polarity. the main disadvantages of the diffusion-weighted spin-echo sequence are that it requires long acquisition times of many minutes per data set and that it is extremely sensitive to motion. examples of images acquired with a diffusion-weighted spin-echo sequence are shown in fig. 2.8.9a,b. the volunteers were asked to avoid any movements, but no head fixation was applied; severe motion artifacts degrade the images.
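the rule of thumb for choosing the b-value range (adc · (bmax − bmin) ≈ 1, i.e., roughly 60% attenuation) can be sketched as follows, using the brain-tissue adc range from the text:

```python
import math

def optimal_b_range(adc):
    """rule of thumb: bmax - bmin such that adc * (bmax - bmin) ~ 1;
    adc in mm^2/s, result in s/mm^2."""
    return 1.0 / adc

# brain-tissue adcs of 0.6e-3 to 1.0e-3 mm^2/s (values from the text)
db_fast = optimal_b_range(1.0e-3)   # about 1,000 s/mm^2
db_slow = optimal_b_range(0.6e-3)   # about 1,667 s/mm^2

# attenuation achieved when adc * (bmax - bmin) = 1
attenuation = 1.0 - math.exp(-1.0)  # roughly 63%
```

this reproduces the 1,000-1,500 s/mm² range quoted above (the slow-diffusion end comes out slightly higher, at about 1,667 s/mm²), and shows why the criterion corresponds to at least about 60% signal attenuation.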
these artifacts are caused by inconsistent phase information of the complex-valued raw data; stimulated-echo and spin-echo sequences are particularly sensitive to these effects because the diffusion preparation must be repeated for each raw-data line and different states of motion will occur during these diffusion preparations. more details about the motion sensitivity of diffusion-weighted sequences and about approaches to reduce these artifacts are described in the following section. with techniques such as cardiac gating and navigator-echo correction, the image quality of diffusion-weighted spin-echo sequences can be dramatically improved (fig. 2.8.9c,d). today, the most commonly used pulse sequence for diffusion-weighted mri (particularly of the brain) is the single-shot echo planar imaging (epi) sequence with spin-echo excitation. the diffusion preparation of this sequence is the same as in the conventional spin-echo sequence of fig. 2.8.8, but instead of acquiring a single echo after each excitation, the full k-space can be read out. the advantages of the diffusion-weighted epi sequence are a very short acquisition time of less than 200 ms per slice and its insensitivity to motion. however, the image resolution is typically limited to 128 × 128 matrices, and echo planar imaging is very sensitive to susceptibility variations as demonstrated in fig. 2.8.10a,b: different susceptibilities of soft tissue, bone, and air cause severe image distortion and signal cancellation close to interfaces between soft tissue and air or bone. these effects can be reduced with newer imaging methods known as parallel imaging or parallel acquisition techniques (see sect. 2.4). the underlying idea is to use several receiver coil elements with spatially different coil sensitivity profiles to acquire multiple data sets with reduced k-space sampling density in the phase-encode direction.
these data sets are used to calculate a single image corresponding to a fully sampled k-space during post-processing. reducing the number of phase-encode steps shortens the epi echo train, decreases the minimum echo time as well as the total acquisition time, and increases the effective receiver bandwidth in the phase-encode direction. as a result, susceptibility-induced distortions are reduced as shown in fig. 2.8.10c,d. alternatively, the accelerated acquisition can be used to increase the spatial resolution. diffusion-weighted imaging of the brain is almost exclusively performed with single-shot epi sequences (with or without parallel imaging). other organs or body areas, however, are less suited for echo-planar single-shot acquisitions because of much more severe susceptibility effects that often result in images of non-diagnostic quality. depending on the slice orientation and receiver coil system, these distortions can be reduced either with parallel imaging as described above or with segmented (i.e., multi-shot) epi sequences that assemble the raw data from multiple shorter echo trains (holder et al. 2000; ries et al. 2000; einarsdottir et al. 2004). a disadvantage of this approach is the increased motion sensitivity, since several excitations (and diffusion preparations) are required for a single data set, resulting in potentially inconsistent phase information; segmented epi sequences are therefore often combined with additional motion-correction techniques. several other pulse sequences have been proposed for diffusion-weighted imaging. diffusion gradients can be added to single-shot fast spin-echo sequences with echo trains of multiple spin echoes (see also sect. 2.4) such as haste or rare sequences (norris et al. 1992). however, the additionally inserted diffusion gradients cause an irregular timing of the originally equidistant refocusing rf pulses.
in combination with motion-dependent phase shifts, this violates the cpmg condition, which requires a certain phase relation between excitation and refocusing pulses. thus, in order to avoid artifacts, various modifications of diffusion-weighted fast spin-echo sequences have been suggested, such as additional gradients (norris et al. 1992), a split acquisition of echoes of even and odd parity (schick 1997), or modified rf pulse trains (alsop 1997). these modified diffusion-weighted single-shot fast spin-echo sequences are fast and insensitive to motion; disadvantages are a relatively low signal-to-noise ratio and a certain image blurring that is characteristic of all single-shot fast spin-echo techniques. they have been applied in the brain (alsop 1997; lovblad et al. 1998), the spine (tsuchiya et al. 2003; clark and werring 2002), and in several non-neuro applications such as imaging of musculoskeletal (dietrich et al. 2005) or breast (kinoshita et al. 2002) tumors. in contrast to echo-planar imaging, these techniques are insensitive to susceptibility variations and, thus, particularly suited for applications outside the brain. another, however only infrequently used, alternative to echo-planar sequences are fast gradient-echo techniques (flash, mp-rage) with diffusion preparation (lee and price 1994; thomas et al. 1998).

fig. 2.8.9 diffusion-weighted spin-echo acquisitions (b = 550 s/mm²) of two healthy and cooperative volunteers. a,b uncorrected images acquired without cardiac gating. c,d images after navigator-echo correction acquired with cardiac gating.

fig. 2.8.10 images acquired with a diffusion-weighted epi sequence. a,b conventional epi sequence exhibiting severe distortions (arrows). c,d epi sequence with parallel imaging (acceleration factor 2) showing reduced susceptibility artifacts. (for better visualization of artifacts, only images without diffusion weighting (b = 0) are shown.)
a special sequence type that has successfully been employed for diffusion-weighted imaging is based on steady-state free-precession (ssfp) sequences (see also sect. 2.4). pulse sequences known as ce-fast or psif sequences (the acronym psif refers to a reverted fast imaging with steady precession, i.e., fisp, sequence) have been adapted for diffusion-weighted imaging by inserting a single diffusion gradient (lebihan 1988; merboldt et al. 1989). however, in contrast to all previously described sequences, the diffusion weighting of this technique cannot be easily determined quantitatively. the observed signal attenuation depends not only on the diffusion coefficient and the diffusion weighting, but also on the relaxation times, t1 and t2, and the flip angle (buxton 1993). since these quantities are usually not exactly known, the adc cannot be determined. instead, these sequences have been used to acquire diffusion-weighted images that are evaluated based only on the visible image contrast. a general advantage of diffusion-weighted psif sequences is the relatively short acquisition time due to short repetition times of about 50 ms. thus, they exhibit only low motion sensitivity. the most important application of this sequence type is the differential diagnosis of osteoporotic and malignant vertebral compression fractures (baur et al. 1998, 2003). other applications include diffusion-weighted imaging of the brain (miller and pauly 2003) and of cartilage (miller et al. 2004). as mentioned above, an unwanted side effect arising in virtually all diffusion-weighted pulse sequences is their extreme motion sensitivity (trouard et al. 1996; norris 2001).
by introducing diffusion gradients, the pulse sequence is made sensitive to molecular motion in the micrometer range, but it also becomes susceptible to very small macroscopic motions of the imaged object, since the diffusion gradients do not distinguish between stochastic molecular motion and macroscopic bulk motion. hence, even very small and involuntary movements of the patient, e.g., caused by cardiac motion, cerebrospinal fluid pulsation, breathing, swallowing, or peristalsis, can lead to severe image degradation due to gross motion artifacts. typical appearances of these artifacts are signal voids and ghosting in the phase-encode direction. several techniques and pulse-sequence modifications have been proposed to reduce the motion sensitivity of diffusion-weighted mri. on the one hand, any kind of motion should be minimized. depending on the body region being imaged, this can be achieved by improved fixation of the patient in the scanner, by imaging during breath-hold, or by applying cardiac gating. effects of motion can also be reduced by decreasing the acquisition time of a pulse sequence, i.e., by using fast acquisition techniques. this is particularly effective if single-shot sequences such as echo planar imaging techniques are applied. most motion artifacts in diffusion-weighted imaging arise from inconsistent phase information in the complex-valued raw data set. this is caused by different states of motion in the repeated diffusion preparations of the acquisition. in single-shot sequences, only a single diffusion preparation is applied, and thus inconsistent phase information is avoided. it should be noted, however, that even single-shot sequences might be affected by inconsistent phase information if complex data of several measurements are averaged. instead, only magnitude images should be averaged in diffusion-weighted mri in order to improve the signal-to-noise ratio.
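the advice to average magnitude rather than complex data can be illustrated with a small simulation (hypothetical numbers; each "shot" has the same true magnitude but a random motion-induced phase):

```python
import cmath
import math
import random

random.seed(0)  # deterministic illustration
true_magnitude = 1.0

# simulated single-shot signals: identical magnitude, but each shot carries a
# different motion-induced phase (the inconsistent phase described above)
shots = [cmath.rect(true_magnitude, random.uniform(-math.pi, math.pi))
         for _ in range(64)]

# complex averaging: the random phases interfere destructively -> signal loss
complex_average = abs(sum(shots) / len(shots))

# magnitude averaging: phases are discarded first -> signal preserved
magnitude_average = sum(abs(s) for s in shots) / len(shots)
```

the magnitude average recovers the true signal, while the complex average is strongly attenuated by the phase scatter, which is exactly the failure mode the text warns about.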
another approach to reduce motion artifacts is to correct for motion-related phase errors in the acquired raw data. this can be done using navigator echo-correction techniques (ordidge et al. 1994; anderson and gore 1994; dietrich et al. 2000) . the navigator echo is an additional echo without phase encoding acquired after each diffusion preparation. in the absence of motion, all navigator echoes should be identical. thus, by comparing the acquired navigator echoes, bulk motion can be detected, and degraded image echoes can be discarded or a phase correction can be applied. more advanced navigator-echo techniques acquire several navigator echoes in different spatial directions (butts et al. 1996) or use spiral navigator readouts (miller and pauly 2003) . certain pulse sequences are self-navigated, i.e., a subset of the acquired raw data can be used as navigator echo without the need for an extra navigator acquisition. examples are pulse sequences with radial or spiral k-space trajectories that acquire the origin of k-space in every readout (seifert et al. 2001; dietrich et al. 2001b ). an improved self-navigation is possible with the propeller diffusion sequence, which repeatedly acquires a large area around the origin of k-space (pipe et al. 2002) . some image reconstruction techniques have been proposed that do not use the often-inconsistent phase information of raw data at all. in sequences with radial k-space trajectories, images can be reconstructed by filtered back projection of magnitude projection images (gmitro and alexander 1993) . another spin-echo-based approach known as line-scan diffusion imaging assembles the image from one-dimensional lines of magnitude data (gudbjartsson et al. 1996) . in addition to substantially reduced motion sensitivity, repetition times and thus image acquisition time can be considerably reduced since the one-dimensional lines are acquired independently of each other. 
on the other hand, the signal-to-noise ratio of line-scan sequences is substantially lower than that of conventional acquisition techniques, and the spatial resolution of this approach is limited as well. a second unwanted side effect of diffusion-weighted sequences is eddy-current effects caused by the extraordinarily long and strong diffusion gradients. eddy currents are electric currents induced in coils that occur after switching magnetic fields on or off. these currents then create unwanted additional gradient fields, resulting in shifted or distorted images and in incorrect diffusion weightings. whereas most mri gradient systems compensate very well for eddy-current effects after the switching of the short gradients typically used for imaging, the longer diffusion gradients are often not well compensated. hence, diffusion-weighted images are sometimes distorted depending on the diffusion weighting and the direction of the diffusion gradients, resulting in artifacts on adc maps such as enhanced edges. to avoid these artifacts, several techniques have been suggested. diffusion gradients can be shortened by using bipolar diffusion gradients (alexander et al. 1997) or by adding additional 180° pulses during the diffusion preparation (reese et al. 2003); eddy currents can be partially compensated for by an additional long gradient before the 90° excitation pulse (alexander et al. 1997); or diffusion-weighted images can be acquired twice with diffusion gradients of opposite polarity (bodammer et al. 2004). other eddy-current correction schemes are based on the acquisition of diffusion gradient-dependent field maps and data correction in k-space (horsfield 1999; papadakis et al. 2005). in general, (automated) image registration as the first step of post-processing is recommended to reduce the influences of both patient motion and eddy-current effects. imaging with the stejskal-tanner diffusion preparation as described above in sect.
2.8.3.1, is only sensitive to molecular diffusion parallel to the direction of the diffusion gradient. the diffusion preparation causes a dephasing of spins that move in the direction of the applied field gradient, i.e., between positions with different magnetic field strengths, as illustrated in fig. 2.8.6. molecular motion perpendicular to this direction does not contribute to the signal attenuation. in general, the diffusion displacement of spins depends on the considered spatial direction; e.g., protons of water molecules in nerve fibers move more freely parallel to the fiber direction than they do in perpendicular directions. this dependence of the diffusion on the spatial orientation can be measured by applying diffusion gradients in different spatial directions, e.g., separately in the slice, readout, and phase directions as demonstrated in fig. 2.8.11. the resulting diffusion-weighted images show substantial signal differences in areas with strongly anisotropic diffusion such as the corpus callosum. the signal intensity of the corpus callosum is decreased if diffusion gradients in the left-right direction (readout direction in the example) are applied, but increased for diffusion gradients in the head-foot (slice) direction or the anterior-posterior (phase-encode) direction. this finding is explained by the fact that water molecules diffuse more freely in the left-right direction (parallel to the nerve fibers) than they do in perpendicular directions, i.e., the effective diffusion coefficient is greater in the left-right direction than it is in other directions, and thus the signal attenuation is increased. this orientation dependence is visible in the adc maps as well: the adc in the left-right direction of the corpus callosum is increased compared to the adcs in perpendicular directions. other areas such as gray matter or the csf do not show significant differences depending on the diffusion gradient direction, indicating approximately isotropic diffusion.
if the mean (or average) diffusivity of molecules in tissue is to be measured, then the diffusion coefficients for all spatial directions must be averaged as shown in fig. 2.8.11d; the corresponding adc map is given by the mean value of the three direction-dependent maps. since the direction-independent or mean adc of tissue is proportional to the trace of the diffusion tensor, this measurement is also referred to as diffusion trace imaging. the measurement of such a direction-independent diffusion-weighted image can be very important to avoid misinterpreting areas that are hyperintense due to high anisotropy as tissue with generally reduced adc, such as areas of focal ischemia. therefore, diffusion-weighted stroke mri is generally based on isotropically diffusion-weighted images. if only a single direction-independent diffusion-weighted image is required for diagnosis, it appears disadvantageous to perform three orthogonal diffusion measurements at the cost of a three-times-increased acquisition duration. it should be noted that it is not possible to simply apply gradients in all three directions simultaneously for this purpose; this results in a single magnetic field gradient in a diagonal direction, which is again only sensitive to diffusion parallel to this diagonal. however, the stejskal-tanner diffusion preparation can be extended by a more sophisticated series of gradient pulses in different directions to achieve an isotropic diffusion weighting within a single diffusion measurement (wong et al. 1995; mori and van zijl 1995; chun et al. 1998; cercignani and horsfield 1999). isotropically diffusion-weighted images can thus be acquired by either a single or three orthogonal diffusion preparations. however, three measurements are not yet sufficient to determine the properties of diffusion anisotropy in all cases.
for example, if a nerve fiber is oriented diagonally to all three coordinate axes, then the diffusion attenuation in this fiber will be the same for the three measurements and cannot be distinguished from isotropic diffusion. the measurement of the full diffusion tensor (cf. sect. 2.8.2.2) is required to cope with these more general cases. in spite of this limitation, some studies have used the ratio of the largest and the smallest of three perpendicular diffusion coefficients as an estimation of the anisotropy (holder et al. 2000). however, this approach should be regarded as inferior to a full diffusion tensor evaluation and is generally not recommended. to determine the diffusion tensor, i.e., to fully measure anisotropic diffusion, more than three diffusion-sensitized measurements with diffusion gradients in different spatial directions are required. however, only the diagonal elements of the tensor, i.e., dxx, dyy, dzz, can be measured directly; these elements are exactly the direction-dependent adcs determined in the example above. the other three (off-diagonal) tensor components dxy, dxz, dyz do not describe diffusion in a spatial direction but the correlation of diffusion in two different directions; they cannot be measured directly, but must be calculated as linear combinations of several measurements. the minimum number of measurements required to determine the full diffusion tensor can be deduced from the form of the diffusion tensor matrix: the tensor has six independent components dxx, dyy, dzz, dxy, dxz, dyz and, thus, at least six independent diffusion measurements are required. each of these measurements is based on images of at least two different b-values; in order to reduce the total number of measurements, usually a b-value of 0 is chosen as a direction-independent reference. thus, this reference image has to be acquired only once instead of separately for each diffusion direction.
a possible and frequently used choice of seven diffusion-weighted acquisitions that are sufficient to determine the diffusion tensor (basser and pierpaoli 1998) is shown in fig. 2.8.12. none of the six tensor components dxx, dyy, dzz, dxy, dxz, or dyz is measured directly by this gradient scheme; instead, all components must be calculated as linear combinations of the diffusion coefficients in these six directions. this calculation is based on the so-called b-matrix (basser and pierpaoli 1998), a symmetric 3 × 3 matrix describing the diffusion weighting for an arbitrary diffusion gradient direction g, b = b · g g^t, where g g^t denotes the dyadic product of the unit gradient direction vector with itself. this matrix is used to describe the signal attenuation due to the diffusion gradient as s = s0 · exp(-trace(bd)), where bd denotes the matrix product of the b-matrix and the diffusion tensor matrix. the elements of the diffusion tensor dij can be determined by solving a system of linear equations, since the b-matrix and the signal attenuation are known. the result of this calculation is shown in fig. 2.8.13. the three calculated diagonal elements correspond to the direct adc measurements of fig. 2.8.11. the off-diagonal elements are generally much lower than the diagonal elements and are close to zero in areas with predominantly isotropic diffusion (gray matter and csf).

fig. 2.8.11 diffusion-weighted imaging in different spatial directions. a diffusion gradients in slice (s), readout (r), and phase (p) direction; the row vectors (s, r, p) denote the selected gradients. b corresponding diffusion-weighted images. c calculated adc maps corresponding to the diffusion directions in a and images in b. d averaged adc map; all adcs are in units of 10^-3 mm^2/s. note the differing contrast in the diffusion-weighted images and adc maps depending on the diffusion gradient direction (e.g., in the corpus callosum).
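the system of linear equations can be solved, e.g., by least squares. the sketch below simulates six diffusion-weighted signals from a hypothetical ground-truth tensor using the six-direction gradient scheme of basser and pierpaoli (1998) and recovers the tensor; all numbers and variable names are illustrative:

```python
import numpy as np

# estimate the diffusion tensor from six gradient directions plus a
# b = 0 reference. signals are simulated from a known tensor so the
# reconstruction can be checked.
b = 1000.0  # s/mm^2
dirs = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1],
                 [1, -1, 0], [1, 0, -1], [0, 1, -1]], float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# hypothetical ground truth (mm^2/s): fiber along the x-axis
D_true = np.diag([1.4e-3, 0.4e-3, 0.4e-3])

S0 = 1000.0
# attenuation per direction: S = S0 * exp(-b * g^T D g)
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', dirs, D_true, dirs))

# one design-matrix row per direction:
# [gx^2, gy^2, gz^2, 2 gx gy, 2 gx gz, 2 gy gz]
g = dirs
A = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                     2 * g[:, 0] * g[:, 1],
                     2 * g[:, 0] * g[:, 2],
                     2 * g[:, 1] * g[:, 2]])
y = np.log(S0 / S) / b  # measured attenuations
dxx, dyy, dzz, dxy, dxz, dyz = np.linalg.lstsq(A, y, rcond=None)[0]
D = np.array([[dxx, dxy, dxz],
              [dxy, dyy, dyz],
              [dxz, dyz, dzz]])
print(np.allclose(D, D_true))  # True: the tensor is recovered
```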
fig. 2.8.12 diffusion tensor imaging. a choice of diffusion gradients (s slice, r readout, p phase direction; the row vector denotes the selected gradients and their polarity) and b corresponding diffusion-weighted images for the determination of the diffusion tensor. note the different contrast in the diffusion-weighted images depending on the diffusion gradient direction (e.g., in the corpus callosum).

a simple protocol for diffusion tensor imaging consists of one reference measurement without diffusion weighting (b-value of 0) and six diffusion-weighted measurements with different gradient directions. these gradient directions should be "as different as possible, " i.e., pointing isotropically in all spatial directions. a typical b-value for dti measurements of the brain is 1,000 s/mm^2. averaging of multiple acquisitions is frequently performed to increase the snr, especially of the images with diffusion weighting. however, all these parameters (b-values, diffusion directions, number of averages) have been evaluated in a number of studies with the aim of optimizing the accuracy of diffusion tensor data. several studies investigated the optimum choice of the b-values both for conventional diffusion-weighted imaging and for diffusion tensor imaging. although the results of these studies vary to a certain extent, generally b-values in the range between about 900 and 1,400 s/mm^2 have been found to provide the highest accuracy of diffusion measurements in the brain (jones et al. 1999; armitage and bastin 2001; kingsley and monahan 2004). the optimum number of averages depends on the b-values, which influence the signal attenuation and, thus, the signal-to-noise ratio of the diffusion-weighted images. in general, a higher number of averages is recommended for the acquisition with the high b-value than for the reference image with low b-value or without any diffusion weighting. as shown by jones et al.
(1999) for their choice of b-values, the optimum ratio of the total number of acquisitions with high b-value and low b-value is about 8.4. the number of diffusion gradients and their directions has also been investigated in several studies. generally, the accuracy of diffusion tensor data, especially of the diffusion anisotropy and main diffusion direction, is improved when the number of different diffusion directions is increased (jones et al. 1999; papadakis et al. 2000; skare et al. 2000; jones 2004). if the number of different directions is fixed, then the accuracy of the measurements can be increased by choosing an optimized set of diffusion directions (skare et al. 2000; hasan et al. 2001). no final consensus about the optimum number and choice of directions of diffusion gradients has yet been established, but protocols with 20 or more diffusion directions are currently recommended by many research groups. the diffusion tensor contains complex information about the tissue microstructure that is best visualized as a three-dimensional ellipsoid as discussed in sect. 2.8.2.2. however, diffusion tensor data may be insufficient to describe tissue in certain geometrical situations. a well-known example is the crossing of white-matter fibers within a single voxel as illustrated in fig. 2.8.14. water diffusion in such voxels cannot be fully described by a single ellipsoid, i.e., by the diffusion tensor. to overcome this limitation, more complex measurement techniques such as high-angular resolution diffusion imaging (hardi) (frank 2001; tuch et al. 2002) and q-ball imaging (tuch 2004) have been proposed. all these techniques use a large number of different diffusion directions (e.g., between 43 [frank 2001] and 253 [tuch 2004]) distributed isotropically in space. diffusion data is measured with high angular resolution in order to determine the spatial distribution of diffusion in more detail as indicated in fig. 2.8.14d.
a further generalization of diffusion tensor measurements loosens the assumption of gaussian diffusion, which was illustrated in fig. 2.8.2. if diffusion is severely restricted, e.g., by cell membranes, no or very few molecules will move through this border; the probability distribution of diffusion distances will be limited to distances within the cell volume and will no longer be gaussian. the exact displacement probabilities in restricted diffusion can be measured with methods called q-space diffusion imaging (assaf et al. 2002) or diffusion spectrum imaging (wedeen et al. 2005). both techniques require the acquisition of images with a large number of different b-values and, in the case of diffusion spectrum imaging, of different diffusion directions; e.g., the total number of diffusion measurements reported by wedeen et al. (2005) is 515. obviously, this large number of measurements severely limits the applicability of these new techniques in clinical studies; the studies should therefore be regarded as experimental work. another approach to overcome the limitations of models based purely on gaussian diffusion has been proposed by jensen et al. as diffusional kurtosis imaging (jensen et al. 2005). diffusion data is acquired for several b-values over a large range between 0 and 2,500 s/mm^2 similarly to the way data is acquired in q-space imaging, but with a different mathematical model of the non-exponential decay. this method is related to several other studies that investigated diffusion properties in tissue at high b-values and found non-mono-exponential diffusion attenuation curves (inglis et al. 2001; clark et al. 2002). this observation has frequently been attributed to the simultaneous measurement of water molecules in different environments such as the intracellular and the extracellular space; however, no final agreement on the interpretation of these data has been established (sehy et al. 2002).
fig. 2.8.13 diffusion tensor data calculated from the measurements shown in fig. 2.8.12; all diffusion coefficients are in units of 10^-3 mm^2/s. a the three diagonal elements dxx, dyy, and dzz, and b the off-diagonal elements dxy, dxz, and dyz of the tensor matrix. note the different intensity scales for diagonal and off-diagonal elements. some remaining eddy-current artifacts can be seen as enhanced edges in the maps of the off-diagonal elements.

diffusion tensor imaging results in a large amount of data: a full diffusion tensor, i.e., a symmetric 3 × 3 matrix, is determined for each pixel of the image dataset. due to this complex data structure, there is no simple way to visualize the complete diffusion tensor as a single intensity or color map. it would be straightforward to display the six independent elements of the tensor as separate maps as shown in fig. 2.8.13; however, this would not be very helpful for the interpretation or quantitative evaluation of diffusion tensor data. instead, several techniques are used to reduce the diffusion tensor information to simpler datasets that can as easily be displayed and interpreted as other imaging data. most results of imaging examinations are presented as either signal intensity images or scalar parameter maps. these images and maps have the advantage that they can easily be manipulated, e.g., the contrast can be interactively adjusted, and they can be quantitatively evaluated by statistics over regions of interest. in order to obtain similar parameter maps of diffusion tensor data, a single scalar reflecting a certain tensor property must be calculated. the most important examples of such scalars are the mean diffusivity or trace of the diffusion tensor and the anisotropy of the tensor. the mean diffusivity of a diffusion tensor measurement, i.e., the diffusion coefficient averaged over all spatial directions, is displayed as parameter maps in fig. 2.8.15a,b.
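both kinds of scalars, mean diffusivity and anisotropy, are functions of the tensor eigenvalues alone. a minimal sketch (note that the normalization of the relative anisotropy varies between authors; one common convention is assumed here):

```python
import numpy as np

def tensor_scalars(D):
    """mean diffusivity, fractional and relative anisotropy."""
    lam = np.linalg.eigvalsh(D)           # the three eigenvalues
    md = lam.mean()                       # mean diffusivity = trace / 3
    dev = lam - md
    fa = np.sqrt(1.5 * (dev @ dev) / (lam @ lam))
    ra = np.sqrt((dev @ dev) / 3.0) / md  # one common normalization
    return md, fa, ra

# white-matter-like tensor, long axis along x (units of 10^-3 mm^2/s)
D = np.diag([1.4, 0.35, 0.35])
md, fa, ra = tensor_scalars(D)
print(round(md, 3), round(fa, 3))  # 0.7 and a high anisotropy

# isotropic csf-like tensor: high diffusivity, zero anisotropy
print(tensor_scalars(np.diag([3.0, 3.0, 3.0]))[1])  # 0.0
```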
the same data can be displayed either as an intensity-coded map (fig. 2.8.15a) or as a color-coded map (fig. 2.8.15b). both maps illustrate, e.g., the high diffusivity of csf and the typical adcs of about 0.7 × 10^-3 mm^2/s in the white matter. many different scalar measures have been proposed to describe diffusion anisotropy, cf. sect. 2.8.2.3. the two most important are the fractional anisotropy and the relative anisotropy shown in fig. 2.8.15c,d. the maps are very similar; both show the high anisotropy of white matter as hyperintense areas in contrast to low anisotropy in gray matter or csf. these two scalars derived from the diffusion tensor are by far the most important quantities for the clinical evaluation of diffusion tensor data. the vast majority of clinical studies based on diffusion tensor imaging determine the mean diffusivity and the anisotropy in regions of interest in order, e.g., to statistically compare these data between certain patient groups or between patients and healthy controls. the mean diffusivity and the anisotropy contain certain important information about the diffusion tensor; if the diffusion tensor is visualized as an ellipsoid, then the diffusivity reflects the volume of the ellipsoid and the anisotropy its deviation from a spherical shape. however, any information about the main diffusion direction, i.e., the orientation of the longest axis of the diffusion tensor ellipsoid, is missing. this direction corresponds to the microstructural orientation of tissue, e.g., the orientation of white-matter tracts, and is determined as the eigenvector of the largest eigenvalue of the tensor (cf. sect. 2.8.2.2). there are two common methods to visualize the direction of this eigenvector: color coding and direct vector display. the direction can be color-coded using the red-green-blue (rgb) color model. each direction in space is defined by a three-component vector v = (vx, vy, vz).
if this three-component vector is interpreted as an rgb color specification, vectors in x-direction, v = (1 0 0), appear as red pixels, vectors in y-direction as green pixels, and vectors in z-direction as blue pixels. eigenvectors in other directions are displayed as (additive) mixtures of different colors, e.g., the vector v = (1 0 1) as a mixture of red and blue, yielding violet pixels. the resulting color map is finally scaled with the diffusion anisotropy, since the main diffusion direction is of interest only in areas with high anisotropy.

fig. 2.8.14 the diffusion tensor cannot represent diffusion properties in voxels with crossing nerve fibers. voxels with a single predominant fiber direction (a,b) show diffusion tensor ellipsoids whose longest axes correspond to the fiber orientation. voxels with crossing fibers (c) result in a diffusion tensor ellipsoid with reduced anisotropy pointing in an averaged fiber direction. advanced methods such as high angular resolution diffusion imaging (d) can resolve different fiber orientations within a single voxel.

some examples of these color-coded vector maps are shown in fig. 2.8.16. the red color of the corpus callosum demonstrates that the nerve fibers are predominantly oriented in the left-right direction. white-matter areas in green and blue are oriented in the anterior-posterior direction and the head-foot direction, respectively. alternatively, the main diffusion direction can be directly displayed by a small line in each pixel; some authors refer to this technique as whisker plots. this visualization is on the one hand more intuitive than color coding, but on the other hand difficult to display for large areas because of the large number of pixels (and hence lines) of a complete image. an example is shown in fig.
2.8.17; the magnified area shows again the corpus callosum, where the diffusion directions follow the anatomical orientation of the nerve fibers. a general problem and limitation of the visualization of the main diffusion direction is that it is based on the assumption of linear diffusion, i.e., the diffusion ellipsoid is supposed to have a cigar-like shape. this is usually true in white matter tracts, but may lead to deceptive graphical depictions at crossing fibers or if diffusion is described by a planar tensor. another disadvantage is that vector maps are difficult to compare or to evaluate statistically.

fig. 2.8.15 parameter maps displaying scalar quantities calculated from the diffusion tensor. direction-independent mean diffusivity shown as gray-scaled map (a) and as color-coded map (b); the diffusion coefficients are given in units of 10^-3 mm^2/s. (c) fractional anisotropy and (d) relative anisotropy.

fig. 2.8.16 color-coded visualization of main diffusion orientation in four different slices (a-d). the main diffusion direction (orientation of the longest axes of the diffusion ellipsoid) is shown in red, green, and blue for left-right, anterior-posterior, and head-foot orientation, respectively, as indicated in e. the green rim at the frontal brain is caused by remaining eddy-current and susceptibility artifacts.

it is also possible to visualize the full diffusion tensor using the diffusion ellipsoid introduced in sect. 2.8.2.2. as in the vector plots, it is often difficult to visualize the entire amount of data belonging to a single image slice at once. therefore, this 3d tensor visualization is usually combined with tools to zoom into the illustration and to rotate the slice in order to view the tensors in specific areas of the brain, as demonstrated in fig. 2.8.18. the ellipsoids are additionally color-coded to emphasize the direction of their longest axis (the main diffusion direction); their brightness is scaled by the anisotropy.
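the rgb direction coding described above reduces to a few operations per voxel: take the principal eigenvector, map the absolute values of its components to (red, green, blue), and scale by the anisotropy so that isotropic voxels stay dark. an illustrative single-voxel sketch with a hypothetical tensor:

```python
import numpy as np

# rgb coding of the main diffusion direction for one voxel.
# toy tensor with the fiber along the left-right (x) axis.
D = np.diag([1.4e-3, 0.35e-3, 0.35e-3])

lam, V = np.linalg.eigh(D)  # eigenvalues in ascending order
v1 = V[:, -1]               # eigenvector of the largest eigenvalue

md = lam.mean()
dev = lam - md
fa = np.sqrt(1.5 * (dev @ dev) / (lam @ lam))

# absolute components -> (r, g, b); the sign of an eigenvector
# carries no orientation information
rgb = fa * np.abs(v1)
print(np.round(rgb, 2))  # mostly red: a left-right fiber
```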
thus, the ellipsoid visualization combines features of the techniques described in the previous sections; e.g., csf is displayed as large but relatively dark spheres (denoting a high diffusion coefficient and low anisotropy), while the tensors in fiber tracts appear as bright elongated ellipsoids corresponding to linear diffusion in a single predominant orientation. the exact depiction of the tensor information is not standardized but may look different depending on the tools used. an alternative visualization may substitute the ellipsoids by cuboids with equivalent dimensions as shown in fig. 2.8.19. the presented information is the same as before, but the computational cost required to display cuboids is substantially lower than with smooth ellipsoids. thus, interactive manipulation of the 3d datasets may be faster using the cuboid visualization. close inspection of the main diffusion directions in figs. 2.8.17 or 2.8.19 suggests that the shape of white matter tracts can be reconstructed by connecting several diffusion directions in an appropriate way. this process is illustrated in fig. 2.8.20, based on a magnification of fig. 2.8.17. by choosing a start point and following the main diffusion direction, trajectories can be constructed that visualize the fiber tracts of white matter. a typical example is shown in fig. 2.8.21, where a seed region was placed within the corpus callosum, and all fibers through this seed region were reconstructed. the color of the fibers reflects the local anisotropy in this case, but various other color schemes could be used instead. fiber tracking or diffusion tractography was developed in the late 1990s (mori et al. 1999; conturo et al. 1999; mori and van zijl 2002; melhem et al. 2002), and a multitude of different algorithms to reconstruct fibers have been proposed since then.
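a deliberately simplified streamline tracker illustrates the principle of following the main diffusion direction from a seed point. this is a sketch on a hypothetical synthetic tensor field with nearest-neighbor lookup and euler steps; real tractography algorithms interpolate the data and use more careful stopping rules:

```python
import numpy as np

def principal_dir(D):
    """principal eigenvector plus a crude anisotropy measure."""
    w, V = np.linalg.eigh(D)
    return V[:, -1], (w[-1] - w.mean()) / w.mean()

def track(field, seed, step=0.5, max_steps=100, min_aniso=0.2):
    """follow the main diffusion direction from a 2d seed point."""
    pos = np.array(seed, float)
    path, prev = [tuple(pos)], None
    for _ in range(max_steps):
        i, j = np.clip(pos.astype(int), 0, np.array(field.shape[:2]) - 1)
        d, aniso = principal_dir(field[i, j])
        if aniso < min_aniso:              # stop in isotropic regions
            break
        if prev is not None:
            if d @ prev < 0:
                d = -d                     # eigenvectors have no sign
            if d @ prev < np.cos(np.radians(60)):
                break                      # stop at sharp turns
        pos = pos + step * d[:2]
        prev = d
        path.append(tuple(pos))
    return path

# 10 x 10 grid of identical tensors: a fiber along the x-axis
field = np.tile(np.diag([1.4, 0.3, 0.3]), (10, 10, 1, 1))
path = track(field, seed=(5.0, 2.0))
print(len(path))  # 101: the anisotropic fiber is followed to max_steps
```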
most techniques include data interpolation to increase the spatial resolution, and all require certain criteria to decide when the tracking of a fiber should be stopped (e.g., at pixels with low anisotropy or at sudden changes of diffusion direction). fiber tracking is usually based either on a single-region approach, in which all fibers are tracked that go through a user-defined region of interest, or on a two-region approach where connecting fibers between two regions are reconstructed. fiber tracking depends on good image quality, with sufficient signal-to-noise ratio and without substantial distortion artifacts. increased noise can reduce the calculated anisotropy (jones and basser 2004) and, thus, the length of the reconstructed fibers. image distortions cause a mismatch of anatomical fiber orientation and the measured diffusion direction and thus can lead to erroneous tractography results. therefore, parallel imaging and eddy-current correction techniques can improve the results of white-matter tractography. it is generally assumed that isotropic spatial image resolution is preferable for fiber tracking applications. a typical protocol suggested by jones et al. acquires data of the whole brain in isotropic 2.5 × 2.5 × 2.5 mm^3 resolution (jones et al. 2002b). fiber tracking is a valuable tool to visualize white matter structures of the brain. however, it is still very difficult to evaluate tractography results quantitatively, to assess the accuracy of reconstructed fibers, or to compare the results of different examinations. first approaches to these questions include the spatial normalization of tensor data (jones et al. 2002a) and the determination and visualization of uncertainties of diffusion tensor results (jones 2003; jones and pierpaoli 2005).

fig. 2.8.19 three-dimensional visualization of the full diffusion tensor as color-coded cuboids; the cuboids are colored as in fig. 2.8.18.

fig.
2.8.21 reconstruction of white matter tracts starting at a seed region in the corpus callosum. visualization was performed with the "dti task card" provided by the mgh/mit/hms athinoula a. martinos center for functional and structural biomedical imaging (ruopeng wang).

risks and safety issues related to mr examinations

with the rapid development of mr technology and the significant level of growth in the number of patients examined with this versatile imaging modality, the consideration of possible risks and health effects associated with the use of mr procedures is gaining increasingly in importance. as described in detail in the previous chapters, three types of fields are employed:
• a high static magnetic field generating a macroscopic nuclear magnetization
• rapidly alternating magnetic gradient fields for spatial encoding of the mr signal
• radiofrequency (rf) electromagnetic fields for excitation and preparation of the spin system

in the following, the biophysical interaction mechanisms and biological effects of these fields are summarized, as well as exposure limits and precautions to be taken to minimize health hazards and risks to patients and volunteers undergoing mr procedures. in the recent past, a number of excellent reviews and books related to this topic have been published. for details and supplementary information, the reader is referred to these publications quoted in the following and to the bibliographies given therein. because no ionizing radiation is used in mri, it is generally deemed safer than diagnostic x-ray or nuclear medicine procedures in terms of health protection of patients. in this context, a fundamental difference between ionizing and non-ionizing radiation has to be noted: radiation exposure to ionizing radiation, at least at the relatively low doses occurring in medical imaging, results in stochastic effects, whereas biological effects of (electro)magnetic fields are deterministic.
a stochastic process is one where the exposure determines the probability of the occurrence of an event but not the magnitude of the effect. in contrast, deterministic effects are those for which the magnitude is related to the level of exposure and a threshold may be defined (international commission on non-ionizing radiation protection [icnirp] 2002). as a consequence, the probabilities of detrimental effects caused by diagnostic x-ray and nuclear medicine examinations performed over many years accumulate, whereas physiological stress induced by mr procedures is related to the acute exposure levels of a particular examination and does, to the present knowledge, not accumulate over years. in the recent past, regulations concerning mr safety have been largely harmonized. there are two comprehensive reviews by international commissions that form the basis for both national safety standards and the implementation of monitor systems by the manufacturers of mr devices.

the magnetization m induced in a material by a magnetic field h is characterized by the susceptibility χ, m = χh, so that the resulting flux density is b = μ0 (1 + χ) h, with μ0 = 1.257 × 10^-6 vs/(am) the magnetic permeability in vacuum. due to the covalent binding of atoms, electron shells in most molecules are completely filled and thus all electron spins are paired. nevertheless, these diamagnetic materials can be weakly magnetized in an external magnetic field. as described in sect. 2.2.8.1, this universal effect is caused by changes in the orbital motion of electrons in an external magnetic field. the induced magnetization is very small and in a direction opposite to that of the applied field (χ < 0). paramagnetic materials, on the other hand, contain molecules with single, unpaired electrons. the intrinsic magnetic moments related with these electrons tend, comparable to the much weaker nuclear magnetic moment (cf. sect. 2.2.3), to align in an external magnetic field. this effect increases the magnetic field in paramagnetic materials (0 < χ < 0.01).
in ferromagnetic materials, such as iron, cobalt, or nickel, unpaired electron spins align spontaneously with each other in the absence of a magnetic field in a region called a domain. these materials are characterized by a large positive magnetic susceptibility (χ > 0.01). biomolecules are in general diamagnetic and contain at most some paramagnetic centers. in almost all human tissues, the concentration of paramagnetic components is so low that they are characterized by susceptibilities differing by no more than 20% from that of water (χ = -9.05 × 10^-6) (schenck 2000, 2005). as a consequence, there is virtually no effect of the human body on an applied magnetic field (b ≅ μ0 h). there are several established physical mechanisms through which static magnetic fields can interact with biological tissues and organisms. the most relevant mechanisms are discussed in the following. even in a uniform magnetic field, molecules or structurally ordered molecule assemblies with either a field-induced (diamagnetic) or permanent (paramagnetic) magnetic moment mmol experience a mechanical torque that tends to align their magnetic moment parallel (or antiparallel) to the external magnetic field and thus to minimize the potential energy (fig. 2.9.1a). orientation effects, however, can only occur when molecules or molecule clusters have a nonspherical structure and/or when the magnetic properties are anisotropically distributed. moreover, the alignment must result in an appreciable reduction of the potential energy (emag ∝ mmol b) of the molecules in the external field with respect to their thermal energy (etherm ∝ kt). at higher temperatures, as for example in the human body, the alignment of molecules with small magnetic moments is prevented by their thermal movement (brownian movement).
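the susceptibility value quoted above makes the statement b ≅ μ0 h quantitative: since the relative deviation of the flux density from its vacuum value is simply χ, tissue with a susceptibility close to that of water changes the applied field by less than a thousandth of a percent. a small numerical check using the value from the text:

```python
# flux density in matter: b = mu0 * (1 + chi) * h, so the relative
# deviation from the vacuum value (chi = 0) is simply chi.
mu0 = 1.257e-6        # vs/(am), magnetic permeability of vacuum
chi_water = -9.05e-6  # susceptibility of water (dimensionless)

rel_change_percent = abs(chi_water) * 100.0
print(rel_change_percent)  # ~0.0009 %: the body barely perturbs b
```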
in a non-uniform magnetic field, as for example in the periphery of an mr system, paramagnetic and ferromagnetic materials, moreover, are attracted and thus can quickly become dangerous projectiles (fig. 2.9.1b). magneto-hydromechanical interactions: static magnetic fields also exert forces (called lorentz forces) on moving electrolytes (ionic charge carriers), giving rise to induced electric fields and currents. for an electrolyte with charge q, the lorentz force, which acts perpendicular to the direction of the magnetic field, b, and the velocity, v, of the electrolyte, is given by f = q v × b (2.9.3). since electrolytes with a positive or negative charge moving, for example, through a cylindrical blood vessel orientated perpendicular to a magnetic field are accelerated into opposite directions, this mechanism gives rise to an electrical voltage across the vessel, which is commonly referred to as the blood flow potential (fig. 2.9.2). moreover, the induced transversal velocity component also interacts with the magnetic field according to eq. 2.9.3, which results in a lorentz force that is directed antiparallel to the longitudinal velocity component. at very high magnetic field strengths, this secondary effect can reduce the flow velocity and the flow profile of blood in large vessels (tenforde 2005). theoretical modeling of magneto-hydromechanical interaction processes was performed by tenforde (2005), based on the navier-stokes equation describing the flow of an electrically conductive fluid in the presence of a magnetic field, using the finite element technique. induced current densities in the region of the sinoatrial node are predicted to be greater than 100 ma/m^2 at field levels of more than 5 t in an adult human. moreover, magneto-hydromechanical interactions were predicted to reduce the volume flow rate of blood in the human aorta by a maximum of 1.3, 4.9, and 10.4% at field levels of 5, 10, and 15 t, respectively. magnetic effects on chemical reactions:
as shown by in vitro studies, several classes of organic chemical reactions can be influenced by static magnetic fields under appropriate, non-physiological conditions (grissom 1995; world health organization [who] 2006). an established effect consists in the modification of the kinetics of chemical reactions with radicals as intermediate products, brought about by splitting and modification of electron spin states in the magnetic field. an example is the conversion of ethanolamine to acetaldehyde by the bacterial enzyme ethanolamine ammonia lyase. radical pair magnetic field effects are thus used as a tool for in vitro studies of enzyme reactions (who 2006). for individual macromolecules, the extent of orientation in strong magnetic fields is very small. for example, measurements on dna in solution have shown that a magnetic flux density of 12 t is required to produce orientation of about 1% of the molecules (maret et al. 1975). in contrast, there are several examples of molecular aggregates that can be oriented to a large extent by static magnetic fields, such as outer segments of retinal rod cells, muscle fibers, and filamentous virus particles (icnirp 2003; who 2006). an example of an intact cell that can be oriented magnetically is the erythrocyte. it has been shown that both resting and flowing sickled erythrocytes align in fields of more than 0.35 t with their long axis perpendicular to the magnetic flux lines (brody et al. 1985; murayama 1965). higashi et al. (1993) reported that normal erythrocytes could be oriented with their disk planes parallel to the magnetic field direction. this effect was detectable even at 1 t, and almost 100% of the cells were oriented when exposed to 4 t.

fig. 2.9.1 magneto-mechanical effects. a orientation of a molecule with a magnetic moment m in a uniform magnetic field. b attraction of a paramagnetic or ferromagnetic object in a non-uniform magnetic field. the direction of the acting forces f is indicated by arrows.
on the other hand, calculations performed by schenck (2005) yielded that all of these orientation effects observed in vitro are probably too small to affect the orientation of the equivalent structures in vivo. however, although biophysical models make it possible to roughly estimate the magnitude of static magnetic field effects, the reality is so complex that calculations can in principle not rule out physiological effects (hore 2005). based on the evidence at present, there is no strong likelihood of major physiological consequences arising from radical-pair magnetic field effects on enzymatic reactions. arguments against such consequences are the efficacy of homeostatic buffering and the fact that the contrived conditions needed to observe a magnetic field response in the laboratory are unlikely to occur under physiological conditions (hore 2005). there have been only a few studies on the effects of static magnetic fields at the cellular level. they reveal that exposure to static magnetic fields alone has no or extremely small effects on cell growth, cell cycle distribution, and the frequency of genetic damage, regardless of the magnetic flux density. however, in combination with other external factors such as ionizing radiation or some chemicals, there is evidence to suggest that a static magnetic field modifies their effects (miyakoshi 2005). with regard to possible effects on reproduction and development, no adverse effects of static magnetic fields have been consistently demonstrated; however, few good studies have been carried out, especially for fields in excess of 1 t (icnirp 2003; saunders 2005; who 2006). several studies indicate that implantation as well as prenatal and postnatal development of the embryo and fetus are not affected by exposure for varying periods during gestation to magnetic fields of flux densities between 1 and 9.4 t (konermann and mönig 1986; murakami et al. 1992; okazaki et al. 2001; sikov et al. 1979). on the other hand, mevissen et al.
(1994) reported that continuous exposure of rats to a 30-mt field slightly decreased the number of viable fetuses per litter. electric flow potentials generated across the aorta and other major arteries by the flow of blood in a static magnetic field can routinely be seen in the ecg of animals and humans exposed to fields in excess of 100 mt. in humans, the largest potentials occur across the aorta after ventricular contraction and appear superimposed on the t-wave amplitude of the ecg.

positively and negatively charged electrolytes moving with a velocity v through a blood vessel oriented perpendicular to a magnetic field are accelerated into opposite directions and thus induce an electric voltage uh across the vessel (blood flow potential). cross-hatches indicate the direction of the magnetic field into the paper plane

different animal studies demonstrated effects of static magnetic fields on blood flow, arterial pressure, and other parameters of the cardiovascular system, often at fields with flux densities much less than 1 t (saunders 2005). the results of these studies, however, have to be interpreted with caution, because it is difficult to reach any firm conclusion from cardiovascular responses observed in anaesthetized animals (saunders 2005; who 2006). on the other hand, two recent studies on humans exposed to a maximum flux density of 8 t (chakeres et al. 2003; kangarlu et al. 1999) did not yield clinically relevant changes in the heart rate, respiratory rate, diastolic blood pressure, finger pulse oxygenation levels, or core body temperature. the only physiologic parameter that was found to be altered significantly by high-field exposure was a change in measured systolic blood pressure. this is consistent with a hemodynamic compensatory mechanism to counteract the drag on blood flow exerted by magneto-hydrodynamic forces as described in sect. 2.9.2.2 (chakeres and de vocht 2005).
various behavioral studies yielded that the movement of laboratory rodents in static magnetic fields above 4 t may be unpleasant, inducing aversive responses and conditioned avoidance (who 2006). such effects are thought to be consistent with magneto-hydrodynamic effects on the endolymph of the vestibular apparatus (who 2006). this is in line with reports that some volunteers and patients exposed to static magnetic fields with flux densities above 1.5 t experienced sensations of vertigo, nausea, and a metallic taste in the mouth (chakeres et al. 2003; kangarlu et al. 1999; schenck 2000, 2005). moreover, some of them reported magnetophosphenes occurring during rapid eye movement in a field of at least 2 t, which may be attributable to weak electric fields induced by movements of the eye, resulting in an excitation of structures in the retina (reilly 1998; schenck 2000). two recent studies evaluated neurobehavioral effects among subjects exposed to static magnetic fields of 1.5 and 8 t, respectively, using a neurobehavioral test battery. performance in an eye-hand coordination test and a near-visual contrast sensitivity task slightly declined at 1.5 t (de vocht et al. 2003), whereas a small negative effect on short-term memory was noted at 8 t (chakeres et al. 2003). also taking into account the results of other neurobehavioral studies, it can be concluded that there is at present no evidence of any clinically relevant modification in human cognitive function related to static magnetic field exposure (chakeres and de vocht 2005). there are only a few epidemiological studies available that were specifically designed to study health effects of static magnetic fields. the majority of these have focused on cancer risks, which were reviewed in 2002 by the international agency for research on cancer (iarc 2002).
generally, these studies have not pointed to higher risks, although the number of studies was small, the numbers of cancer cases were limited, and the information on individual exposure levels was poor. therefore, the available evidence from epidemiological studies is at present not sufficient to draw any conclusions about potential health effects of static magnetic field exposure (feychting 2005; who 2006). some epidemiological studies have investigated reproductive outcomes for workers in the aluminum industry or in mri. kanal et al. (1999), for example, evaluated 1,421 pregnancies of women working at clinical mr facilities. comparing these pregnancies with those occurring in employees at other jobs, they did not find significantly increased risks for spontaneous abortion, delivery before 39 weeks, reduced birth weight, or male gender of the offspring. however, no studies of high quality have been carried out on workers occupationally exposed to fields greater than 1 t. although there is initial experience with the examination of volunteers and patients in ultra-high-field mr systems with magnetic flux densities of up to 8 t, most clinical mr procedures have so far been performed at static magnetic fields below 3 t. as summarized in sect. 2.9.2.3, the literature does not indicate any serious adverse health effects from the exposure of healthy human subjects up to a flux density of 8 t (icnirp 2004). however, because movements in static magnetic fields above 2 t can produce nausea and vertigo, both the iec standard and the icnirp recommendation (table 2.9.1) stipulate that mr examinations above this static magnetic flux density should be performed in the controlled operating mode under medical supervision. the recommended upper limit for this operating mode is 4 t, due to the limited information concerning possible effects above this magnetic flux density.
for mr examinations performed in the experimental operating mode, there is no upper limit for the magnetic flux density. in a safety document issued in 2003, the us food and drug administration (fda) (2003) deemed mr devices a significant risk only when a static magnetic field of more than 8 t is used. according to faraday's law, a time-varying magnetic field b(t) induces an electric field e(t), which has two important characteristics: the field strength is proportional to the time rate of change of the magnetic flux density, db(t)/dt, and the field lines form closed loops around the direction of the magnetic field. time-varying magnetic fields are used in mri, among others, to spatially encode mr signals arising from the different volume elements within the human body. to this end, three independent gradient coils are used to produce magnetic fields directed parallel to the static magnetic field b0 = (0, 0, b0) with a field strength varying in a linear manner along the x-, y-, and z-direction as shown in fig. 2.3.1. for the special case of a spatially uniform magnetic field directed in the z-direction, bz(t), the electric field strength along a circular (conductive) loop of radius r in the x-y-plane is given by

e(t) = (r/2) · dbz(t)/dt (2.9.4)

this equation reveals that the electric field strength in the considered circular loop increases linearly with its radius as well as with the rate of change, db(t)/dt. this model gives, for example, the electric field induced by the magnetic gradient field b = (0, 0, gz · z) of the z-gradient coil. in contrast, the distribution of the electric fields induced by the time-varying magnetic gradient fields b = (0, 0, gx · x) and b = (0, 0, gy · y) is much more complex, since the magnetic flux density of these fields is not uniform over the x-y-plane.
moreover, the generation of these gradient fields is, due to fundamental principles of electrodynamics, inevitably connected with the occurrence of magnetic fields directed in the x- and y-direction, i.e., b = (bx, 0, 0) and b = (0, by, 0), respectively. although these "maxwell terms" are of no relevance for the acquisition of mr images, they have to be considered carefully with respect to biological effects. the distribution of electric fields induced by time-varying magnetic fields directed parallel and perpendicular to the long axis of the human body is schematically shown in fig. 2.9.3. the precise spatial and temporal distribution of the electric fields in the human body, of course, strongly depends on both the technical characteristics of the gradient coils implemented at a specific mr system and the morphology of the body region exposed, and thus cannot be described by a simple mathematical expression. for worst-case estimations, however, it can be assumed that the electric field induced by a non-uniform magnetic field is equal to or smaller than the electric field produced by a uniform magnetic field with field strength equal to the maximum magnetic flux density of the non-uniform field (schmitt et al. 1998). for a uniform magnetic field, the electric field strength reaches, in general, a maximum when the magnetic field is orientated perpendicular to the coronal plane of the body (see fig. 2.9.3, right), since the extension of conductive loops is largest in this direction (reilly 1998). in conductive media, such as biological tissues, the internally induced electric field e(t) results in circulating eddy currents, j(t). both quantities are related by the electric conductivity of the medium, σ:

j(t) = σ · e(t) (2.9.5)

calculation of the current distribution in the human body is complicated due to the widely differing conductivities of the various tissue components.
for rough estimations, however, the body can be treated as a homogeneous medium with an average conductivity of σ = 0.2 s/m (reilly 1998). according to eqs. 2.9.4 and 2.9.5, for example, a current density of 20 ma/m² is induced at a radius of 20 cm by a rate of change in the magnetic flux density of dbz/dt = 1 t/s.

fig. 2.9.3 schematic representation of the electric field induced by time-varying magnetic fields b(t) that are directed parallel (left) and perpendicular (right) to the long axis of the human body. the electric field lines form closed loops around the direction of the magnetic field

the magnetic flux density of gradient fields used in mri is about two orders of magnitude lower than that of the static magnetic field b0. therefore, time-varying magnetic fields produced by gradient coils in mri can be neglected compared to the strong static magnetic field as far as interactions of magnetic fields with biological tissues and organisms are concerned (cf. sect. 2.9.2.2). in contrast, however, biophysical effects related to the electric fields and currents induced by the magnetic fields have to be considered carefully. in general, rise times of magnetic gradients in mri are longer than 100 µs, resulting in time-varying electric fields and currents with frequencies below 100 khz. in this frequency range, the conductivity of cell membranes is several orders of magnitude lower than that of the extra- and intracellular fluid (foster and schwan 1995). as illustrated in fig. 2.9.4, this has two important consequences. first, the cell membrane tends to shield the interior of cells very effectively from current flow, which is thus restricted to the extracellular fluid. second, voltages are induced across the membrane of cells. when the electric voltages are above a tissue-specific threshold level, they can stimulate nerve and muscle cells (foster and schwan 1995).
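the worst-case estimate above can be sketched in a few lines of code; this is a minimal illustration of eqs. 2.9.4 and 2.9.5 (function names are ours), which reproduces the 20 ma/m² example given in the text.

```python
# worst-case estimate of the induced electric field and eddy current
# density for a circular conductive loop in a homogeneous medium
# (eqs. 2.9.4 and 2.9.5); reproduces the example from the text.

def induced_e_field(radius_m: float, dbdt_t_per_s: float) -> float:
    """electric field e = (r/2) * db/dt along a circular loop (v/m)."""
    return 0.5 * radius_m * dbdt_t_per_s

def eddy_current_density(e_v_per_m: float, sigma_s_per_m: float = 0.2) -> float:
    """current density j = sigma * e (a/m^2); sigma = 0.2 s/m is the
    average body conductivity used for rough estimations (reilly 1998)."""
    return sigma_s_per_m * e_v_per_m

e = induced_e_field(0.20, 1.0)   # r = 20 cm, db/dt = 1 t/s
j = eddy_current_density(e)      # -> 0.02 a/m^2 = 20 ma/m^2
print(f"e = {e:.3f} v/m, j = {j * 1e3:.1f} ma/m^2")
```

note that this is the homogeneous-medium approximation only; the true current distribution in the body depends on the widely differing tissue conductivities.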
theoretical models describing cardiac and peripheral nerve stimulation have been presented by various scientists (e.g., by irnich, mansfield, and reilly). a detailed discussion of the underlying assumptions of the different models and the differences between them can be found, among others, in (schaefer et al. 2000; schmitt et al. 1998). the best approximation to experimental data is given by a hyperbolic strength-duration expression

db/dt (t) = (db/dt)∞ · (1 + τc/t) (2.9.6)

which relates the stimulation threshold, expressed as the rate of change db/dt of the magnetic flux density, with the stimulus duration t, i.e., the ramp time of the magnetic gradient field (schaefer et al. 2000; schmitt et al. 1998). a hyperbolic model comparable to eq. 2.9.6 was first established by g. weiss in 1901 for an electric current pulse and the corresponding electric charge. this "fundamental law of electrostimulation" has meanwhile been confirmed in numerous studies for neural and cardiac excitation as well as for defibrillation (schaefer et al. 2000). as shown in fig. 2.9.5, the threshold for the strength of a stimulus decreases with its duration. the asymptotic stimulus strength, (db/dt)∞, for an infinite duration is denoted as "rheobase"; the characteristic response time constant, τc, as "chronaxie". it should be mentioned that, according to a model presented by irnich et al., stimulation depends on the mean (rather than peak) db/dt and is thus independent of the particular shape of the gradient pulse (schaefer et al. 2000). in current safety regulations, however, exposure limits are unanimously expressed as maximum db/dt values. in accordance with the biophysical mechanisms described in the previous section, there is now a strong body of evidence suggesting that the transduction processes through which induced electric fields and currents can influence cellular properties involve interactions at the level of the cell membrane (icnirp 2003).
in addition to the stimulation of electrically excitable tissues, changes in membrane properties, such as ion binding to membrane macromolecules, ion transport across the membrane, or ligand-receptor interactions, may trigger transmembrane signaling events. cardiac and peripheral nerve stimulation. experimental studies with magnetic stimulation of the heart have been carried out since about 1991, with the introduction of improved gradient hardware for epi.

fig. 2.9.4 in the frequency range below 100 khz, the conductivity of cell membranes (σm) is several orders of magnitude lower than that of the extra- and intracellular fluid (σext and σint, respectively), so that the induced electric fields (and also the resulting electric currents) are mainly restricted to the extracellular fluid (eext > eint). as a result, electric voltages are generated across the membrane of cells that can stimulate nerve and muscle cells

the experiments were, of course, not performed with humans, but rather with dogs. the data, which are listed and reviewed by reilly (1998), reveal that magnetic stimulation is most effective when it is delivered during the t wave of the cardiac cycle. moreover, excitation thresholds for the heart are substantially greater than those for nerves as long as the pulse duration is sufficiently less than the chronaxie time of the heart of about 3 ms. therefore, the avoidance of peripheral sensations in a patient provides a conservative safety margin with respect to cardiac stimulation. bourland et al. (1991) determined a mean value of 14.1 ± 6.7 for the ratio of cardiac (the induction of ectopic beats) to muscle stimulation thresholds in dogs for a pulse duration of 530 µs, which is quite close to the theoretical heart/nerve ratio of 14.0 estimated by reilly (1998). various studies yielded that the cardiac threshold variability of healthy persons is surprisingly low, which is confirmed by experimental and clinical experience that pacing thresholds are rather uniform (schmitt et al.
1998). drugs and changes in electrolyte concentrations can lower thresholds, but not below about 80% of the normal value (schmitt et al. 1998). peripheral nerve stimulation has been investigated in various volunteer studies. a systematic evaluation of the available data was presented by schaefer et al. in 2000. they recalculated published threshold levels, often reported for different gradient coils and shapes in different terms, to the maximum db/dt occurring during the maximum switch rate of the gradient coil at a radius of 20 cm from the central axis of the mr system, i.e., at the border of the volume normally accessible to patients. in fig. 2.9.6, the recalculated threshold levels are plotted for the y- (anterior/posterior) and z-gradient (superior/inferior) coils as compared to model estimates by reilly. as expected, y-gradient coils have a lower stimulation threshold for a given ramp time than x-gradient coils, since the x-z cross-sections of the body are usually larger than the x-y cross-sections. by fitting the hyperbolic strength-duration relationship defined in eq. 2.9.6 to mean peripheral nerve stimulation thresholds measured by bourland et al. (1999) in 84 human subjects, schaefer et al. estimated the following values for the rheobase/chronaxie: 22.1 t/s / 0.365 ms for the y-gradient and 31.7 t/s / 0.378 ms for the z-gradient. as shown in fig. 2.9.6, the db/dt intensity needed to induce a sensation that the subject described as uncomfortable or even painful was significantly above the sensation threshold. bourland et al. (1999) also analyzed their stimulation data in the form of cumulative frequency distributions, which give, for a given db/dt level, the number of subjects that had already reported perceptible, uncomfortable, or even intolerable sensations. they found that the db/dt level needed for the lowest percentile for uncomfortable stimulation is approximately equal to the median threshold for perception.
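the fitted strength-duration relation can be evaluated directly; the sketch below applies eq. 2.9.6 with the rheobase/chronaxie values estimated by schaefer et al. from the bourland data (the function name and the choice of example ramp times are ours).

```python
# mean peripheral nerve stimulation threshold db/dt as a function of the
# effective ramp time t, using the hyperbolic strength-duration relation
# (eq. 2.9.6): db/dt = rheobase * (1 + chronaxie / t).
# fitted values: y-gradient 22.1 t/s / 0.365 ms, z-gradient 31.7 t/s / 0.378 ms.

def pns_threshold(ramp_time_ms: float, rheobase_t_per_s: float,
                  chronaxie_ms: float) -> float:
    """stimulation threshold in t/s for a given ramp time in ms."""
    return rheobase_t_per_s * (1.0 + chronaxie_ms / ramp_time_ms)

for t in (0.1, 0.3, 1.0):  # example ramp times in ms
    y = pns_threshold(t, 22.1, 0.365)
    z = pns_threshold(t, 31.7, 0.378)
    print(f"t = {t:4.1f} ms: y-coil {y:6.1f} t/s, z-coil {z:6.1f} t/s")
```

as expected from the model, the threshold falls toward the rheobase for long ramp times and doubles when the ramp time equals the chronaxie.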
the lowest percentile for intolerable stimulation occurs at a db/dt level approximately 20% above the median perception threshold. time-varying magnetic fields can also result in the perception of magnetophosphenes due to the induction of electrical currents, presumably in the retina (cf. sect. 2.9.2.3). a unique feature of phosphenes, which are not considered to be hazardous to humans, is their low excitation threshold and sharply defined excitation frequency of about 20 hz as compared to other forms of neural stimulation (reilly 1998). in general, a combination of magnetic gradient fields from all three gradient coils is used in mri. in this case, the biologically relevant time-varying magnetic field is approximately given by the vector sum of the magnetic field components. a detailed discussion of the effect of stimulus shape, number of stimuli, and other experimental settings on stimulation thresholds can be found in (reilly 1998; schmitt et al. 1998).

fig. 2.9.5 hyperbolic strength-duration expression that relates the stimulation threshold, expressed as the rate of change db/dt of the magnetic flux density, with the stimulus duration t, i.e., the ramp time of the magnetic gradient field. the asymptotic stimulus strength, (db/dt)∞, for an infinite duration is denoted as rheobase; the characteristic response time constant, τc, as chronaxie

a comprehensive review of the current scientific evidence on biological effects of low-frequency electromagnetic fields in the frequency range up to 100 khz was published by icnirp in 2003. the majority of the reviewed studies focus on extremely low-frequency (elf) magnetic fields associated with the use of electricity at power frequencies of 50 or 60 hz. according to the icnirp review, cellular studies do not provide convincing evidence that low-frequency magnetic fields alter cell division, calcium homeostasis, and signaling pathways.
furthermore, no consistent effects were found in animals and humans with respect to genotoxicity, reproduction, development, immune system function, or endocrine and hematological parameters. on the other hand, a number of laboratory and field studies on humans demonstrated an effect of low-frequency magnetic fields at higher exposure levels on the power spectrum of different eeg frequency bands and on sleep structure. since cognitive and performance studies have also yielded a number of biological effects, further studies are necessary to clarify the significance of the observed effects for human health. over the last two decades, a large number of high-quality epidemiological investigations of long-term disease endpoints such as cancer, cardiovascular disease, and neurodegenerative disorders have been performed in relation to time-varying, mainly elf, magnetic fields. following the mentioned icnirp review (2003), the results can be summarized as follows. among all the outcomes evaluated, childhood leukemia in relation to postnatal exposure to 50 or 60 hz magnetic fields at flux densities above 0.4 µt is the one for which there is the most evidence of an association. however, the results are difficult to interpret in the absence of supporting evidence from cellular and animal studies. there is also evidence for an association of amyotrophic lateral sclerosis (als) with occupational emf exposure, although confounding is a potential explanation. whether there are associations with breast cancer and cardiovascular disease remains unresolved. from a safety standpoint, the primary concern with regard to rapid switching of magnetic gradients is cardiac fibrillation, because it is a life-threatening condition. in contrast, peripheral nerve stimulation is of practical concern because uncomfortable or intolerable stimulations would interfere with the examination (e.g., through patient movements) or would even result in a termination of the examination.
in the current safety recommendations issued by iec (2002) and icnirp (2004), the maximum db/dt values for time-varying magnetic fields created by gradient coils are limited, for patient and volunteer examinations performed in the normal and the controlled operating mode, to 80% and 100% of the mean perception threshold for peripheral nerve stimulation, respectively. to this end, mean perception threshold levels have to be determined by the manufacturer for any given type of gradient system by means of experimental studies on human volunteers. as an alternative, the following empirical hyperbolic strength-duration expression for the mean threshold for peripheral nerve stimulation (expressed as the maximum change of the magnetic flux density in t/s) can be used: (2.9.7) in this equation, teff is the effective stimulation duration (in milliseconds), i.e., the duration of the period of monotonically increasing or decreasing gradient. a mathematical definition for arbitrary gradient shapes can be found in the iec standard (2002). time-varying magnetic fields used for the excitation and preparation of the spin system in mri (b1 fields, cf. sect. 2.2.4) typically have frequencies above 10 mhz. in this rf range, the conductivity of cell membranes is comparable to that of the extra- and intracellular fluid, which means that no substantial voltages are induced across the membranes (foster and schwan 1995). for this reason, stimulation of nerve and muscle cells is no longer a matter of concern. instead, thermal effects due to tissue heating are of importance. energy dissipation of rf fields in tissues is described by the frequency-dependent conductivity σ(ω), which characterizes energy losses due to the induction and orientation of electrical dipoles as well as the drift of free charge carriers in the induced time-varying electric field (foster and schwan 1995).
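the 80%/100% weighting of the mean perception threshold for the two operating modes can be sketched as follows; the threshold value used here is an illustrative assumption derived from the hyperbolic model of eq. 2.9.6 (real limits must use thresholds determined experimentally, or per the iec expression, for the specific gradient system).

```python
# operating-mode db/dt limits per iec/icnirp: the normal mode is limited
# to 80% and the controlled mode to 100% of the mean peripheral nerve
# stimulation perception threshold for the given gradient system.

def mode_limits(mean_threshold_t_per_s: float) -> dict:
    """db/dt limits (t/s) for the normal and controlled operating mode."""
    return {
        "normal": 0.8 * mean_threshold_t_per_s,
        "controlled": 1.0 * mean_threshold_t_per_s,
    }

# illustrative threshold: hyperbolic model, y-gradient values, 0.5-ms ramp
threshold = 22.1 * (1.0 + 0.365 / 0.5)
limits = mode_limits(threshold)
print({mode: round(val, 1) for mode, val in limits.items()})
```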
the energy absorbed per unit of tissue mass and time, the so-called specific absorption rate (sar, in w/kg), is approximately given by

sar ≈ σ e² / ρ = j · e / ρ (2.9.8)

where e is the induced electric field, j the corresponding current density, and ρ the tissue density (cf. sect. 2.9.3.1). absorption of energy in the human body strongly depends on the size and orientation of the body with respect to the rf field as well as on the frequency and polarization of the field. theoretical and experimental considerations reveal that rf absorption in the body approaches a maximum when the wavelength of the field is in the order of the body size. unfortunately, the wavelength of the rf fields used in mri falls into this "resonance range". in order to discuss the effect of various measurement parameters on rf absorption, let us consider a simple mr sequence with only one rf pulse, such as a 2d or 3d flash sequence. in this case, the time-averaged sar can approximately be described by the expression (2.9.9). according to this equation, the time-averaged sar is proportional
• to the square of the static magnetic field, b0, which means that energy absorption is markedly higher at high-field as compared to low-field mr systems
• to the square of the pulse angle, α, so that a sequence with a 90° or even a 180° pulse will result in a much higher sar value than a sequence with a low-angle excitation pulse
• to the duty cycle, tp/tr, of the sequence, i.e., the ratio of the pulse duration tp and the repetition time tr of the pulse or sequence
• to the number of slices, ns, subsequently excited within the repetition time of a 2d sequence (multi-slice technique, cf. sect. 2.3.5; ns = 1 for 3d sequences)
in the case of a more complex mri sequence with various rf pulses, e.g., a spin-echo or a turbo spin-echo sequence, the contributions of the different rf pulses to patient exposure have to be summed up.
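the proportionalities listed above can be used to compare two protocols in relative terms; the sketch below encodes exactly the four factors named in the text (function name and example parameter values are ours, and only ratios are meaningful, since the absolute sar depends on patient geometry and coil).

```python
# relative scaling of the time-averaged sar with the sequence parameters
# listed in the text: sar proportional to b0^2 * alpha^2 * (tp/tr) * ns.
# absolute values are not meaningful here; compare two settings by ratio.

import math

def relative_sar(b0_t: float, flip_deg: float, tp_ms: float,
                 tr_ms: float, n_slices: int = 1) -> float:
    """unitless relative sar for a single-pulse sequence."""
    alpha = math.radians(flip_deg)
    return b0_t**2 * alpha**2 * (tp_ms / tr_ms) * n_slices

# example: moving an otherwise identical 90-degree protocol from 1.5 t to 3 t
low = relative_sar(1.5, 90, 1.0, 500)
high = relative_sar(3.0, 90, 1.0, 500)
print(f"sar ratio 3 t vs 1.5 t: {high / low:.1f}x")  # -> 4.0x
```

the same call shows that a 180° pulse deposits four times the power of a 90° pulse, all else being equal.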
the most relevant quantity for the characterization of physiological effects related to rf exposure is the temperature rise in the various body tissues, which depends not only on the localized sar and the duration of exposure, but also on the thermal conductivity and the microvascular blood flow (perfusion). in the case of a partial-body rf exposure, the latter two factors lead to fast temperature equalization within the body (adair 1996). based on the bioheat equation, it can be shown (brix et al. 2002) that for this particular exposure scenario the temperature response in the center of a homogeneous tissue region, which is larger in each direction than the so-called thermal equilibration length, λ, is given by a convolution of the exposure-time course, sar(t), with a tissue-specific system function, exp(-t/τ):

t(t) = ta + (1/c) ∫ sar(t′) · exp(-(t - t′)/τ) dt′ (2.9.10)

where τ is the thermal equilibration time, ta the constant temperature of arterial blood, and c the specific heat capacity of the tissue. for representative tissues, equilibration lengths and times are between 1.5 and 12 mm and 0.2 and 25 min, respectively (brix et al. 2002). both parameters are inversely related to tissue perfusion and thus vary considerably. in the case of a continuous rf exposure, the temperature rise even in poorly perfused tissues is less than 0.5°c for each w/kg of power dissipated. using a simple model of power deposition in the head, athey (1989) showed that continuous rf exposure over 1 h is unlikely to raise the temperature of the eye by more than 1.6°c when the average sar to the head is less than 3.2 w/kg. more complex computations were performed by gandhi and chen (1999) for a high-resolution model of the human body using the finite-difference time-domain method in order to assess sar distributions in the body for different rf coils. their calculations indicate that the maximum sar averaged over 100 g of tissue can be ten times greater than the whole-body average sar ("hot spots").
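for constant sar, the convolution described above reduces to a steady-state rise of sar · τ / c. the sketch below checks the text's "less than 0.5°c per w/kg" bound for poorly perfused tissue; the specific heat capacity c ≈ 3,500 j/(kg·k) is a representative assumption of ours, not a value from the text.

```python
# steady-state temperature rise from the bioheat convolution for a
# constant exposure: delta_t = sar * tau / c.
# tau = 25 min is the upper equilibration time quoted for poorly
# perfused tissue; c = 3,500 j/(kg k) is an assumed soft-tissue value.

def steady_state_rise(sar_w_per_kg: float, tau_s: float,
                      c_j_per_kg_k: float) -> float:
    """asymptotic temperature rise in degrees celsius."""
    return sar_w_per_kg * tau_s / c_j_per_kg_k

dt = steady_state_rise(1.0, 25 * 60, 3500.0)
print(f"steady-state rise: {dt:.2f} degC per w/kg")  # about 0.43, below 0.5
```

the result (roughly 0.43°c per w/kg) is consistent with the bound stated in the text.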
established biological effects of rf fields used for mr examinations are primarily caused by tissue heating. therefore, it is important to critically evaluate the numerous studies focused on temperature effects, from the cellular and tissue level to the whole-body level, including potential effects on vulnerable persons. in contrast, non-thermal (or athermal) effects are not well understood but seem, as far as this can be assessed at the moment, to have no relevance with respect to the assessment of adverse effects associated with mr examinations. non-thermal effects are those which can only be explained in terms of mechanisms other than increased random molecular motion (i.e., heating) or which occur at sar levels so low that a thermal mechanism seems unlikely (icnirp 1997). as summarized in a review by lepock (2003), relatively short exposures of mammalian cells to temperatures in excess of 40-41°c result in a variety of effects, such as inhibition of cell growth, cytotoxic changes, alteration of signal transduction pathways, and an increased sensitivity to other stresses such as ionizing radiation and chemical agents. this suggests that damage is not localized to a single target, but that multiple heat-labile targets are damaged. extensive protein denaturation has been observed at temperatures of 40-45°c for moderate periods. the most sensitive animal responses to heat loads are thermoregulatory adjustments, such as reduced metabolic heat production, vasodilatation, and increased heart rate. the corresponding sar thresholds are between about 0.5 and 5 w/kg (icnirp 1998). the observed cardiovascular changes reflect normal thermoregulatory responses that facilitate the conduction of heat to the body surface in order to maintain normal body temperatures. direct quantitative extrapolation of the animal (including primate) data to humans, however, is difficult given the marked species differences in basal metabolism and thermoregulatory ability (who 1993).
at levels of rf exposure that cause body temperature rises of 1°c or more, a large number of additional, in most cases reversible, physiological effects have been observed in animals, such as alterations in neural and neuromuscular functions, increased blood-brain barrier permeability, stress-associated changes in the immune system, and hematological changes (icnirp 1998; michaelson and swicord 1996; who 1993). thermal sensitivities and thresholds for irreversible tissue damage from hyperthermia have been summarized by dewhirst et al. (2003). the organs most sensitive to acute damage are the testes and brain as well as portions of the eye (lens opacities and corneal abnormalities). the sar threshold for irreversible effects even in the most sensitive tissues caused by rf exposure, however, is greater than 4 w/kg under normal environmental conditions (icnirp 1998). the effects of heat on the embryo and fetus have been thoroughly reviewed by edwards et al. (2003). processes critical to embryonic development, such as cell proliferation, migration, differentiation, and apoptosis, are adversely affected by elevated maternal temperatures. therefore, hyperthermia of animals during pregnancy can cause embryonic death, abortion, growth retardation, and developmental defects. the development of the central nervous system is especially susceptible to heat. however, most animal data indicate that implantation and the development of the embryo and fetus are unlikely to be affected by rf exposures that increase maternal body temperature by less than 1°c (who 1993). in humans, epidemiological studies suggest that an elevation of maternal body temperature by 2°c for at least 24 h during fever can cause a range of developmental defects, but there is little information on temperature thresholds for shorter exposures (edwards et al. 2003). humans possess comparatively effective heat-loss mechanisms.
in addition to a well-developed ability to sweat, the dynamic range of blood flow rates in the skin is much higher in humans than in other species. studies focused on rf-induced heating of patients during mr procedures have been summarized and evaluated in a review by shellock (2000). they indicate that exposure of resting humans for 20-30 min to rf fields producing a whole-body sar of up to 4 w/kg results in a body temperature increase between 0.1 and 0.6°c (who 1993). of special interest is an extensive mr study reported by shellock et al. (1994). in this study, thermal and physiologic responses of healthy volunteers undergoing an mr examination over 16 min at a whole-body averaged sar of 6.0 w/kg were investigated in a cool (22°c) and a warm (33°c) environment. in both cases, significant variations of various physiologic parameters were observed, such as an increase in the heart rate, systolic blood pressure, or skin temperature. however, all variations were within a range that can be physiologically tolerated by an individual with normal thermoregulatory function (shellock et al. 1994). generally, these studies are supported by mathematical modeling of human thermoregulatory responses to mr exposure (adair 1996; adair and berglund 1986, 1989). it should be noted, however, that heat tolerance or thermoregulation may be compromised in some individuals undergoing an mr examination, such as the elderly, the very young, and people with certain medical conditions (e.g., obesity, hypertension, impaired cardiovascular function, diabetes, fever, etc.) and/or taking certain medications (e.g., beta-blockers, calcium channel blockers, sedatives, etc.) (donaldson et al. 2003; goldstein et al. 2003; icnirp 2004; shellock 2000). some regions of the human body, in particular the brain, are especially vulnerable to raised temperatures. mild-to-moderate hyperthermia (body temperature less than 40°c) induces thermal stress.
For example, it affects cognitive performance (Sharma and Hoopes 2003) and can produce specific alterations in the CNS that may have long-term physiological and neuropathological consequences (Hancock and Vasmatzidis 2003). There have been a large number of epidemiological studies over several decades, particularly on cancer, cardiovascular disease, and cataract, in relation to occupational, residential, and mobile-phone RF exposure. As summarized in a review published by the ICNIRP Standing Committee on Epidemiology (Ahlbom et al. 2004), the results of these studies give no consistent or convincing evidence of a causal relation between RF exposure and adverse health effects. It has to be noted, however, that the studies considered not only have too many deficiencies to rule out an association, but also focus on chronic exposures at relatively low levels, an exposure scenario that is not comparable to MR examinations of patients. As reviewed in the previous section, no adverse health effects are expected if the RF-induced increase in body core temperature does not exceed 1°C. In the case of infants, pregnant women, or persons with cardiocirculatory impairment, it is desirable to limit body core temperature increases to 0.5°C. As indicated in Table 2.9.2, these values have been laid down in the current safety recommendations (IEC, ICNIRP) as limits on body core temperature for the normal and controlled operating modes. Additionally, local temperatures in the head, trunk, and extremities are limited for each of the two operating modes to the values given in Table 2.9.2. However, temperature changes in the different parts of the body are difficult to control during an MR procedure in clinical routine. Therefore, SAR limits have been derived on the basis of experimental and theoretical studies; these limits should not be exceeded in order to keep the temperature rise within the values given in Table 2.9.2.
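The core-temperature limits just described can be expressed as a simple lookup. The following minimal sketch assumes the mode-to-limit mapping implied by the passage (0.5°C for the normal operating mode, 1°C for the controlled mode); it is an illustrative reading aid, not a quotation of the IEC/ICNIRP recommendations and not a clinical tool.

```python
# Body-core temperature rise limits per operating mode, as this text
# attributes them to the IEC/ICNIRP recommendations (Table 2.9.2).
# The mapping below is our reading of the passage, not the standard itself.
CORE_TEMP_RISE_LIMIT_C = {"normal": 0.5, "controlled": 1.0}

def core_temp_rise_allowed(delta_t_c: float, mode: str) -> bool:
    """True if a predicted body-core temperature rise (in degC) stays
    within the limit for the given operating mode."""
    return delta_t_c <= CORE_TEMP_RISE_LIMIT_C[mode]

print(core_temp_rise_allowed(0.4, "normal"))      # True
print(core_temp_rise_allowed(0.8, "normal"))      # False
print(core_temp_rise_allowed(0.8, "controlled"))  # True
```

For the vulnerable groups named above (infants, pregnant women, cardiocirculatory impairment), the text argues for staying within the stricter normal-mode value regardless of the selected mode.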
As only parts of the body (at least in the case of adult patients) are exposed simultaneously during an MR procedure, not only the whole-body SAR but also partial-body SARs for the head, the trunk, and the extremities have to be estimated by means of suitable patient models (e.g., Brix et al. 2001) and limited to the values given in Table 2.9.3 for the normal and controlled operating modes. The footnotes to Table 2.9.3 specify in addition: the short-term SAR over any 10-s period shall not exceed 3 times the corresponding average SAR limit; (a) partial-volume SARs are given by IEC, whereas ICNIRP limits SAR exposure to the head to 3 W/kg; (b) partial-body SARs scale dynamically with the ratio r between the patient mass exposed and the total patient mass (normal operating mode: SAR = (10 - 8·r) W/kg; controlled operating mode: SAR = (10 - 6·r) W/kg); (c) in cases where the eye is in the field of a small local coil used for RF transmission, care should be taken to ensure that the temperature rise is limited to 1°C. With respect to the application of the SAR levels defined in Table 2.9.3, the following points should be taken into account:
• When a volume coil is used to excite a greater field of view homogeneously, the partial-body and the whole-body SARs have to be controlled; in the case of a local RF transmit coil (e.g., a surface coil), the local and the whole-body SARs (IEC 2002).
• Partial-body SARs scale dynamically with the ratio r between the patient mass exposed and the total patient mass. For r → 1 they converge toward the corresponding whole-body values, and for r → 0 toward the localized SAR level of 10 W/kg established by ICNIRP for occupational exposure of the head and trunk (ICNIRP 1998).
• The recommended SAR limits do not relate to an individual MR sequence, but rather to running SAR averages computed over each 6-min period, which is assumed to be a typical thermal equilibration time of smaller masses of tissue.
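The dynamic scaling rule and the short-term 10-s rule above can be sketched numerically. This is an illustrative computation of the limits as quoted in the text, not a vendor or scanner implementation; the function names are ours.

```python
def partial_body_sar_limit(r: float, mode: str = "normal") -> float:
    """Partial-body SAR limit in W/kg as a function of the exposed-mass
    ratio r (exposed patient mass / total patient mass), per the scaling
    rule quoted in the text: normal mode SAR = (10 - 8*r) W/kg,
    controlled mode SAR = (10 - 6*r) W/kg."""
    if not 0.0 <= r <= 1.0:
        raise ValueError("r must lie between 0 and 1")
    slope = 8.0 if mode == "normal" else 6.0
    return 10.0 - slope * r

def short_term_sar_ok(sar_10s: float, avg_limit: float) -> bool:
    """Short-term rule from the table footnote: the SAR over any 10-s
    period must not exceed 3x the corresponding average limit."""
    return sar_10s <= 3.0 * avg_limit

# For r -> 1 the limits converge to the whole-body values
# (2 W/kg normal, 4 W/kg controlled); for r -> 0 they converge
# to the localized 10 W/kg ICNIRP level for head and trunk.
print(partial_body_sar_limit(1.0, "normal"))      # 2.0
print(partial_body_sar_limit(1.0, "controlled"))  # 4.0
print(partial_body_sar_limit(0.0, "normal"))      # 10.0
```

The same convergence check mirrors the second bullet point: the linear interpolation in r reproduces both the whole-body and the localized occupational limits at its endpoints. A real monitor would additionally maintain the 6-min running average mentioned in the last bullet.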
But even if MR examinations are performed within the established SAR limits, severe burns can occur under unfavorable conditions at small focal skin-to-skin contact zones. The potential danger is illustrated in Fig. 2.9.7 by the case of a patient who developed third-degree burns at the calves after conventional MR imaging. In this case, the contact between the calves resulted in the formation of a closed conducting loop and high current densities near the small contact zone. Therefore, patients should always be positioned in such a way that focal skin-to-skin contacts are avoided (e.g., by using foam pads) (Knopp et al. 1998). To protect volunteers, patients, accompanying persons, and uninformed healthcare workers from possible hazards and accidents associated with the MR environment, it is indispensable to properly control access to the MR environment. The greatest potential hazard comes from metallic, in particular ferromagnetic, materials (such as coins, pins, hair clips, pocketknives, scissors, and nail clippers) that are accelerated in the inhomogeneous magnetic field in the periphery of an MR system (cf. Sect. 2.9.2.2) and quickly become dangerous projectiles (missile effect). This risk can only be minimized by strict and careful screening of all individuals entering the MR environment for metallic objects. Every patient or volunteer should complete a detailed questionnaire prior to the MR examination to ensure that every item posing a potential safety issue is considered. An example of such a form can be found in Shellock and Crues (2004) or downloaded from http://www.mrisafety.com. Next, an oral interview should be conducted to verify the information on the form and to allow discussion of any questions or concerns that the patient may have before undergoing the MR procedure.
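The two-step screening workflow just described (written questionnaire, then oral interview to resolve flagged items) can be sketched as follows. The item list and field names are purely illustrative; real screening forms (e.g., those available at http://www.mrisafety.com) are far more detailed, and this sketch is in no way a substitute for clinical screening.

```python
# Hypothetical pre-MR screening sketch: collect questionnaire answers,
# then surface every flagged item for the oral interview stage.
QUESTIONNAIRE_ITEMS = [
    "carries loose ferromagnetic objects (coins, pins, hair clips, scissors)",
    "has a cardiac pacemaker or other active implant",
    "has metallic implants or retained metal fragments (bullets, pellets)",
    "is or may be pregnant",
]

def screen(answers: dict) -> list:
    """Return the questionnaire items answered 'yes'; every flagged item
    must be resolved in the oral interview before access is granted."""
    return [item for item in QUESTIONNAIRE_ITEMS if answers.get(item, False)]

flags = screen({QUESTIONNAIRE_ITEMS[0]: True})
print(len(flags))  # 1 item to discuss in the interview
```

The design point mirrors the text: the questionnaire only gathers information; a "yes" never auto-denies access but always routes the case to a human interview.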
An in-depth discussion of the various aspects of screening patients for MR procedures and individuals for the MR environment can be found in various publications by Shellock (e.g., Shellock 2005; Shellock and Crues 2004) and on the webpage mentioned above. Here, only a condensed summary of the most important risks and contraindications can be given. All patients (and volunteers) undergoing MR procedures should, at the very least, be visually (e.g., by using a camera system) and/or acoustically (using an intercom system) monitored. Moreover, physiologic monitoring is indicated whenever a patient requires observation of vital functions due to a health problem, or whenever the patient is unable to communicate with the MR technologist regarding pain, respiratory problems, cardiac stress, or other difficulties that might arise during the examination (Shellock 2001). This holds especially in the case of sedated or anesthetized patients. For patient monitoring, special MR-compatible devices are available (Shellock 2001). Pregnant patients undergoing MR examinations are exposed to the static magnetic field, time-varying gradient fields, and RF fields. The few studies concerning the combined effects of these fields on pregnancy outcome in humans following MR examinations have not revealed any adverse effects, but they are very limited due to the small numbers of patients involved and difficulties in the interpretation of the results (Colletti 2001; ICNIRP 2004). It is thus advised that MR procedures be performed in pregnant patients, in particular in the first trimester, only after critical risk/benefit analysis and with verbal and written informed consent of the mother or parents (Colletti 2001).

Fig. 2.9.7: Current-induced third-degree burns due to a small focal skin-to-skin contact between the calves during the MR examination (from Knopp et al. 1998, with permission of Springer-Verlag).
The standard of care is that MR imaging may be used in pregnant women if other non-ionizing forms of diagnostic imaging (e.g., sonography) are inadequate, or if the examination provides important information that would otherwise require exposure to ionizing radiation (e.g., fluoroscopy or CT) (Colletti 2001; Shellock and Crues 2004). In any case, however, exposure levels of the normal operating mode should not be exceeded and the duration of exposure should be reduced as far as possible (ICNIRP 2003). MR examinations of patients with implants or metallic objects (such as bullets or pellets) are always associated with a serious risk, even if all procedures are performed within the established exposure limits summarized in the previous sections. This risk can only be minimized by a careful interview of the patient, evaluation of the patient's file, and contacting the implanting clinician and/or the manufacturer for advice on the MR safety and compatibility of the implant (Medical Devices Agency 2002). In any case, MR procedures should be performed only after critical risk/benefit analysis. It should be noted that having undergone a previous MR procedure without incident does not guarantee a safe subsequent MR examination, since various factors (type of MR system, orientation of the patient, etc.) can substantially change the scenario (Shellock and Crues 2004). In the case of passive implants (e.g., vascular clips and clamps, intravascular stents and filters, vascular access ports and catheters, heart valve prostheses, orthopedic prostheses, sheets and screws, and intrauterine contraceptive devices), it has to be clarified whether they are made of or contain ferromagnetic materials. As already mentioned, strong forces act on ferromagnetic objects in a static magnetic field. These forces (ASTM 2005a) may result in movement and dislodgment of ferromagnetic objects that could injure vessels, nerves, or other critical tissue structures.
Comprehensive information on the MR compatibility (ASTM 2005b) of more than 1,200 implants and other metallic objects is available in a reference manual published by Shellock (2005) and online at http://www.mrisafety.com. MR examinations are deemed relatively safe for patients with implants or objects that have been shown to be non-ferromagnetic or only weakly ferromagnetic (Shellock and Sawyer-Glover 2001). Furthermore, patients with certain implants that have relatively strong ferromagnetic qualities may safely undergo MR procedures when the object is held in place by sufficient retentive forces, is not located near vital structures, and will not heat excessively (Shellock and Sawyer-Glover 2001). However, such examinations should be restricted to essential cases and should be performed on MR systems with a low magnetic field strength. Examinations of patients with active implants or life-support systems are strictly contraindicated at conventional MR systems if the patient's implant card does not explicitly state their safety in the MR environment. In addition to the risks already mentioned above, there is the possibility that the function of the active implant is changed or perturbed, which may result in a health hazard for the patient. Clinically important examples are cardiac pacemakers, implantable cardioverter-defibrillators, infusion pumps, programmable hydrocephalus shunts, neurostimulators, and cochlear implants (Medical Devices Agency 2002; Shellock and Sawyer-Glover 2001). The induction of electric currents by RF fields during imaging in implants made from conducting materials can result in excessive heating and thus may pose risks to patients. Excessive heating is typically associated with implants that have elongated configurations and/or are electronically activated, such as the leads of cardiac pacemakers or neurostimulation systems (Shellock and Crues 2004).
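The implant triage logic spread over the last two paragraphs can be summarized as a small decision sketch. This is a reading aid only, not a clinical decision tool: every branch still presupposes the careful interview, file review, and manufacturer consultation the text calls for, and the condition names are ours.

```python
# Illustrative decision sketch of the implant triage described in the text.
def implant_triage(active: bool, card_states_mr_safe: bool,
                   strongly_ferromagnetic: bool,
                   retained_remote_and_cool: bool) -> str:
    if active:
        # Active implants / life-support systems: strictly contraindicated
        # unless the implant card explicitly states MR safety.
        return ("risk/benefit analysis" if card_states_mr_safe
                else "contraindicated")
    if strongly_ferromagnetic:
        # Permissible only if held in place by retentive forces, away from
        # vital structures, and not heating excessively; then restrict to
        # essential cases on a low-field system.
        return ("essential cases only, low-field system"
                if retained_remote_and_cool else "contraindicated")
    # Non- or weakly ferromagnetic passive implants.
    return "relatively safe; proceed after risk/benefit analysis"

print(implant_triage(True, False, False, False))   # contraindicated
print(implant_triage(False, False, False, False))  # relatively safe; ...
```

Note how every outcome except "contraindicated" still ends in a risk/benefit analysis, matching the text's insistence that a prior uneventful scan guarantees nothing about the next one.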
The same holds for electrically conductive objects (e.g., ECG leads, cables, and wires), in particular when they form conductive loops in the bore of the MR system. To avoid severe burns, the instructions for proper operation of the equipment provided by the manufacturer of the implant or device have to be strictly followed. Practical recommendations concerning this issue can be found in Shellock and Sawyer-Glover (2001). In various reports, transient skin irritations, cutaneous swellings, or heating sensations have been described in relation to the presence of both permanent (cosmetic) and decorative tattoos. These findings seem to be associated with the use of iron oxide or other metal-based pigments that are prone to magnetic field-related interactions and/or RF-induced heating, in particular when the pigments are organized in loops or rings. According to a survey performed by Tope and Shellock (2002), however, this side effect has an extremely low rate of occurrence in a population of subjects with tattoos and should not prevent patients, after informed consent, from undergoing a clinically indicated MR procedure (Shellock and Crues 2004). As a precautionary measure, a cold compress may be applied to the tattoo site during the MR examination (Tope and Shellock 2002).
key: cord-006860-a3b8hyyr authors: nan title: 40th Annual Meeting of the GTH (Gesellschaft für Thrombose- und Hämostaseforschung) date: 1996 journal: Ann Hematol doi: 10.1007/bf00641048 sha: doc_id: 6860 cord_uid: a3b8hyyr The variable molecular weight (MW) of vWF is due to differences in the number of subunits comprising the protein. It is assumed that endothelial cells secrete large polymeric forms of vWF and that smaller species arise from proteolytic cleavage.
vWF has two main properties: it stabilizes factor VIII, protecting it from inactivation by activated protein C or factor Xa, and it mediates platelet adhesion to the subendothelium of the damaged blood vessel wall. Each vWF subunit contains binding sites for collagen and for the platelet glycoproteins GP Ib and GP IIb/IIIa. Multiple interactions of the multivalent vWF lead to extremely strong binding of platelets to the subendothelial surface, capable of resisting the high wall shear rate in the circulating blood. Only the largest multimers are hemostatically active. Lack of the largest vWF multimers was observed in patients with von Willebrand disease type 2A. Unusually large molecular forms of vWF were found in patients with thrombotic thrombocytopenic purpura. Proteolytic enzyme(s) may be involved in the physiologic regulation of the polymeric size of vWF and thus play an important role in the pathogenesis of vWF abnormalities in some patients with congenital or acquired disorders of hemostasis. We have purified (~10,000-fold) from human plasma a vWF-degrading protease using affinity chromatography and gel filtration. The proteolytic activity was associated with a high-MW protein (Mr ~300 kD). vWF was resistant against the protease in a physiologic buffer but became degraded at low salt concentration or in the presence of 1 M urea. Proteolytic activity had a pH optimum at 8-9 and was not inhibited by serine protease inhibitors or sulfhydryl reagents. Inhibition by chelating agents was best reversed by barium ions; the observed properties of the vWF-degrading enzyme differ from those of all hitherto described proteases. Analysis of cleaved vWF showed that the peptide bond 842Tyr-843Met had been cleaved, the same bond that has been proposed to be cleaved in vivo. The endothelium releases the vasodilator nitric oxide (NO) and the vasoconstrictor endothelin (ET)-1. NO is formed from L-arginine via the activity of constitutive nitric oxide synthase (cNOS or eNOS).
An inducible form of NOS (iNOS) is activated by cytokines. NO activates guanylyl cyclase in vascular smooth muscle and platelets, leading to the formation of cGMP, which induces relaxation or platelet inhibition, respectively. In vessels, NO is responsible for endothelium-dependent relaxations; in vivo it exerts a vasodilator tone which can be enhanced by shear forces and receptor-operated agonists such as acetylcholine, bradykinin, thrombin, ATP and ADP. Infusion of NO inhibitors in vivo leads to vasoconstriction and increases in blood pressure, and oral administration leads to hypertension in the rat. Within the endothelium, NO inhibits ET gene expression and release of the peptide via cGMP. Hence, hypertension induced by NO inhibition is associated with increased plasma ET levels. ET, a 21-amino-acid peptide, has potent vasoconstrictor properties via ETA- and in part ETB-receptors on vascular smooth muscle. In endothelial cells, ET activates ETB-receptors linked to NO and prostacyclin formation. Under basal conditions little ET is formed, but formation is increased by thrombin, angiotensin II, arginine vasopressin, cytokines and ox-LDL. ET antagonists have been developed and allow the effects of ET to be studied in vivo. ET and NO most likely play an important role in disease states such as hypertension, atherosclerosis, coronary artery disease, heart failure, pulmonary hypertension and subarachnoid hemorrhage. Clinical trials to further define their role in these disease states are now under way. In summary, the endothelium is an important regulator of vascular tone and structure in vitro and in vivo. In disease states, their interaction is imbalanced, leading to enhanced vasoconstriction, thrombus formation and structural changes of the blood vessel wall. Pharmacological tools aiming to inhibit those changes are now being developed. J.M. Harlan, R.K. Winn, S. Sharar, and N.
Vedder, University of Washington, Seattle, Washington. Ischemia-reperfusion injury has been implicated in the pathogenesis of a wide variety of clinical disorders. In preclinical models, tissue damage clearly occurs during ischemia but, paradoxically, may be exacerbated during reperfusion. This reperfusion injury appears to involve activation of the inflammatory cascade, with generation of complement components, lipoxygenase products, and chemokines as proximal mediators and neutrophils as final effectors of vascular and tissue damage. We have examined the role of leukocyte adhesion in reperfusion injury in two models: the rabbit ear as a model of isolated organ injury, and hemorrhagic shock and resuscitation in the rabbit and primate as a model of traumatic shock and multiple organ failure. Data regarding the efficacy, timing, and safety of leukocyte adhesion blockade using selectin- or integrin-directed reagents in these models will be presented. The current status of anti-adhesion therapy in other preclinical models and early clinical trials will be reviewed. An amidolytic assay for the determination of activated protein C (APC)-resistant factor Va (FVa) has been developed. This assay measures the cofactor activity of FVa in diluted plasma samples via the rate of thrombin formation. The APC response is calculated from two FV determinations: one performed in the presence (APC-FV) and one in the absence of recombinant APC. The APC-FV activity is expressed as a percentage of the initial FV activity and indicates the sensitivity of FVa to APC. Normal ranges were established by analysing plasma samples of 100 healthy individuals; an APC-FV activity above 60% was found to be indicative of APC resistance (APC-R). In a control group of 34 patients, the APC-R assay gave abnormal results in 15 patients. DNA analysis confirmed a heterozygous FV R506Q mutation in all 15 patients and confirmed non-carrier status in all 19 patients yielding normal results.
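The APC response described above reduces to simple arithmetic on the two FV determinations. A minimal sketch, assuming the two activities are already expressed on the same scale (function names and the exact comparison at the cutoff are illustrative, not from the abstract):

```python
def apc_response(fv_with_apc: float, fv_without_apc: float) -> float:
    """Residual FVa cofactor activity measured in the presence of APC,
    expressed as a percentage of the initial FV activity."""
    return 100.0 * fv_with_apc / fv_without_apc

def suggests_apc_resistance(response_pct: float, cutoff: float = 60.0) -> bool:
    # Per the abstract, an APC-FV activity above 60% of the initial
    # FV activity was indicative of APC resistance (APC-R).
    return response_pct > cutoff
```

For example, a sample retaining 70% of its initial FV activity under APC would be flagged, while one suppressed to 40% would not.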
An aPTT-based APC-R assay performed on the same group of patients showed abnormal results in two of the non-carrier patients. One of these patients was diagnosed as positive for lupus anticoagulant, whereas the reason for the false positive result in the second patient remains unclear. Eleven patients were analyzed before the start of oral anticoagulation and during oral anticoagulant treatment. Comparison of the assay results demonstrates a correlation of 96%, indicating that the assay is independent of the activities of vitamin K-dependent clotting factors. The amidolytic APC-R assay allows specific and sensitive detection of FVa resistant to APC. The assay can be performed in plasma samples of all persons in whom the diagnosis of APC-R may be indicated. In patients treated with oral anticoagulants or showing other clotting abnormalities affecting the aPTT, the amidolytic APC-R assay is helpful to establish the diagnosis of APC-R. Dept. of Pediatrics, University Hospitals Kiel and Münster, Germany. Resistance to activated protein C (APCR), in the majority of cases associated with the Arg506Gln point mutation in the factor V gene, is present in more than 50% of patients < 60 years of age with unexplained thrombophilia. To determine to what extent this relatively common gene mutation affects the risk of thromboembolic events in infants and children, its occurrence was investigated in a population of children with unexplained venous or arterial thromboembolism: thrombosis of the central nervous system (CNS, n=4), vena portae (n=4), deep vein thrombosis (n=4), vena caval occlusion (n=4), neonatal renal venous thrombosis (RVT; n=3), neonatal stroke (n=14), stroke (n=3), arteria femoralis occlusion (n=2). Four out of these 38 patients showed a positive history of unexplained familial thrombophilia. APCR was measured in an activated partial thromboplastin time (aPTT) assay according to Dahlbäck.
The results were expressed as APC ratios: the clotting time obtained using the APC/CaCl2 solution divided by the clotting time obtained with CaCl2. Considering the special properties of the childhood hemostatic system, infants and children with an APC ratio < 2 were considered APC-resistant only when the results were confirmed in a 1:11 dilution with factor V-deficient plasma (Instrumentation Laboratory, Munich, Germany). Plasma of 268 healthy children served as controls. The Arg506Gln mutation of the factor V gene was assayed by amplification of the DNA samples by PCR, followed by digestion of the amplified products with the restriction enzyme Mnl I. Results were confirmed by SSCP analysis or by direct sequencing of DNA from patients with APCR. Consistent with the aPTT-based method, 10 out of 19 children with venous (V) thrombosis and eight out of 19 patients with arterial (A) vascular insults showed the common factor V mutation. Additional coagulation defects (antithrombin, protein C type I, enhanced antiphospholipid IgG, enhanced lipoprotein (a)) were found in 30% (V) and 28% (A). Furthermore, we diagnosed exogenic causes (septicemia, postpartal asphyxia, fetopathia diabetica, central line and steroid/asparaginase administration) in six out of 10 (V) and three out of 8 (A) children with thrombosis and APCR. All four patients with a positive family history of thrombophilia (mothers only!) showed the common factor V mutation Arg506Gln. In the control group the prevalence of APCR was 5.1%. The high incidence of additional exogenic factors in children with APCR confirms literature data on previously described inherited coagulation disorders during infancy and childhood: an acquired risk of thromboembolic disorders masks the coagulation deficiency in the majority of patients with an inherited prethrombotic state.
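The aPTT-based ratio used in the pediatric study above can be sketched in a few lines. This is an illustration only; the function names are assumptions, and in the study a ratio below 2 was merely the trigger for confirmation in factor V-deficient plasma, not a diagnosis:

```python
def apc_ratio(ct_apc_cacl2_s: float, ct_cacl2_s: float) -> float:
    """Dahlbäck-style APC ratio: clotting time (s) with the APC/CaCl2
    solution divided by clotting time (s) with CaCl2 alone."""
    return ct_apc_cacl2_s / ct_cacl2_s

def needs_confirmation(ratio: float, cutoff: float = 2.0) -> bool:
    # Ratios < 2 were considered APC-resistant in this study only after
    # confirmation in a dilution with factor V-deficient plasma.
    return ratio < cutoff
```

A sample clotting at 60 s with APC and 40 s without yields a ratio of 1.5 and would be sent for confirmation.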
Furthermore, the incidence of 42% APC-resistant children with arterial insults in this study challenges the view that APCR is associated with venous but not with arterial thrombosis. 12 Activated protein C resistance and plasminogen deficiency in a family with thrombophilia. M. Züger 1, F. Demarmels Biasiutti 1, Ch. Mannhalter 2, M. Furlan 1, B. Lämmle 1. 1 Hämatologisches Zentrallabor der Universität, Inselspital, CH-3010 Bern; 2 Klinisches Institut für Medizinische und Chemische Labordiagnostik, Universität Wien, A-1090 Wien. Several hereditary defects of the proteins regulating blood coagulation have been associated with familial thrombophilia. Since the recent discovery of activated protein C (APC) resistance due to the factor V R506Q mutation as a highly prevalent hereditary risk factor for venous thromboembolism (TE), evidence is accumulating that familial thrombophilia may be due to a combination of genetic defects. Thus, protein C- or protein S-deficient patients having suffered from TE seem to be more likely to carry the factor V R506Q mutation than expected from its allelic frequency in the population. We report a family (see figure) in which plasminogen deficiency (0.60 U/ml) had been found in the propositus, who had twice had postoperative deep vein thrombosis (DVT) at ages of 29 and 31 yrs, respectively, as well as in 5 family members (0.53-0.69 U/ml). Of these 5 plasminogen-deficient individuals, only the propositus' daughter had suffered from recurrent DVT at age <20 yrs. Reinvestigation of this family in 1995 showed the factor V R506Q mutation in the propositus, his daughter, an asymptomatic sister and a brother with postoperative pulmonary embolism (PE). His father had had postoperative PE; he is deceased and could not be examined.
[Pedigree figure legend: symbols for plasminogen deficiency, factor V R506Q mutation, propositus, history of DVT and/or PE, superficial phlebitis, not investigated, deceased.] Even though this family is too small for establishing an unequivocal association of TE with known defects, the two most severely affected individuals with recurrent TE at ages <30 yrs had combined plasminogen deficiency and APC resistance, whereas those with isolated plasminogen deficiency were asymptomatic. These data support the concept of multigenic interactions leading to familial thrombophilia. Resistance to activated protein C (APC resistance) is the most common risk factor for venous thrombosis (VT). In most cases APC resistance is caused by a single point mutation at position Arg506 in the factor V gene (factor V Leiden). While ample data in heterozygous patients have been published, reports in homozygous patients are limited. We studied 29 patients (12 males [M], 17 females [F]) in whom a homozygous mutation had been verified by DNA analysis. The median age at the time of the study was 38.8 years (y) (range 9-83 y). Twenty-five patients had experienced VT (10 M, 15 F). Four patients were discovered during family studies and were asymptomatic; three were children (between 9 and 13 y) and one patient was a 62-y-old man. In males the first thrombosis occurred at a median age of 38 y (range 21-82 y); in females this was at a significantly younger median age of 26 y (range 17-49 y). Twelve of the 15 symptomatic females had taken oral contraceptives (OC, estrogen content 0.02-0.1 mg) for 6 to 150 months (median 71 m) prior to thrombosis. In 2 women VT occurred during pregnancy; in 1 female it was precipitated by hormone replacement therapy. In contrast, in 8 of 10 males the thrombosis happened spontaneously; in 2 males it followed surgery.
The sites of thrombosis were DVT in 9 males and 14 females, DVT and pulmonary embolism (PE) in 4 females and 1 male, DVT and caval vein thrombosis in 1 female, and superficial thrombophlebitis in 4 males and 1 female. Eight females had at least one pregnancy, in total 11 children and 5 abortions. Two had thrombotic events during pregnancy and 2 after delivery. All homozygous patients showed APC ratios between 1.12 and 1.68 (mean 1.39 ± 0.19). Conclusion: patients with homozygous FV Leiden have similar clinical symptoms as patients with deficiencies of antithrombin, protein C or protein S. However, in contrast to these defects, a very high risk during oral contraceptive medication, leading to an earlier manifestation in females, can be observed. Several synthetic (efegatran, argatroban, inogatran and napsagatran) and recombinant (hirudin, PEG-hirudin and hirulog) antithrombin agents are in different stages of clinical development for cardiovascular and thrombotic indications. While the specificity of these agents for thrombin is a concern, little has been done to study the effects of these agents on other serine proteases involved in coagulation and fibrinolytic processes. Fibrinolytic compromise by site-directed thrombin inhibitors has been reported recently (Thromb Res 74(3):193-205, 1994). While these agents have been shown to inhibit plasmin and related enzymes, little or no information on their effects on the generation and functionality of APC is available. Since APC plays an increasingly important role both as an anticoagulant enzyme, by inhibiting factors V and VIII, and as a profibrinolytic enzyme, by stimulating the release of t-PA from endothelial sites, an inhibition of APC may result in both a procoagulant state and a fibrinolytic deficit.
Representative thrombin inhibitors (DuP 714, a prototype boronic acid peptide derivative; efegatran; argatroban; hirulog; hirudin and PEG-hirudin) have been compared for their ability to inhibit APC (American Red Cross). These biochemically defined studies, in which the remaining activity of APC after incubation with a thrombin inhibitor was determined spectrophotometrically with a chromogenic substrate (S-2288, Pharmacia, Franklin, OH), demonstrated that DuP 714 and efegatran inhibit APC in a concentration-dependent manner (IC50 = 0.75 and 8.4 µM respectively); hirulog inhibits APC weakly (60 µM produced only 25% inhibition), while argatroban and hirudin have no anti-APC activities. While hirulog, hirudin and argatroban produced no direct anti-APC activities, it is conceivable that they may inhibit thrombomodulin-bound thrombin and thus prevent activation of protein C, resulting in a functional APC deficit and failure to improve clinical outcomes despite higher dosage. While initially it was thought that sole targeting of thrombin would provide monospecific anticoagulant agents devoid of some of the adverse effects observed with heparin, the recent clinical trials clearly suggest that thrombin is not the only determinant of thrombogenesis. Furthermore, potent antithrombin agents such as hirudin, hirulog and peptides indirectly inhibit the generation of APC by compromising thrombomodulin-bound thrombin, and such agents as efegatran and DuP 714 also produce direct APC inhibition. Endogenous inhibition of formed APC by thrombin inhibitors may therefore compromise the feedback regulatory functions of APC and may lead to thrombotic amplification in fully anticoagulated patients. These studies warrant preclinical assessment of thrombin inhibitors to evaluate their relative inhibitory effects on APC.
Poor anticoagulant response to activated protein C (APC resistance) causes a significant portion of deep vein thrombosis (DVT), whereas its association with coronary artery disease (CAD) and myocardial infarction (MI) is still controversial. Therefore, we investigated 346 recently hospitalised patients suffering from CAD with or without previous MI. The CAD was proven by coronary angiography. APC resistance was analysed using the aPTT-based assay Coatest APC Resistance (Chromogenix). Eleven patients showed an APC sensitivity index below 2.0, viewed as APC resistance. Using PCR technology, the factor V mutation causing APC resistance (1691 G→A) was shown in nine of these eleven patients. This represents 2.6% (9/346), compared to 8.5% found in healthy blood donors (8/94). One homozygous carrier (male, age 76) was identified (APC sensitivity index 1.4) who had suffered from DVT at age 59. Recent angiography demonstrated diffuse CAD; no thrombotic events were reported in his family. In contrast, multiple thrombotic manifestations (DVT, MI, stroke) occurred in the relatives of four heterozygous patients. We conclude that the prevalence of APC resistance is rather low in patients with CAD. Nevertheless, the natural history of coronary manifestation of APC resistance seems to vary, probably depending on the presence and severity of cardiovascular risk factors. Resistance to activated protein C (APC resistance) is the most common hereditary cause of thrombophilia and is significantly linked to factor V Leiden. PCR-based methods are used to identify the crucial point mutation in the factor V gene. We designed primers in order to identify factor V Leiden by allele-specific PCR amplification. Amplification specificity for factor V was ensured by the 3' primer FV1001, located at the intron 9/intron 10 border of the gene.
One sense and two antisense primers were used in two separate primer mixes, specific for factor V Arg506 (wildtype) or factor V Gln506 (factor V Leiden), yielding 235-bp products each. In each PCR reaction a pair of primers amplifying a fragment of the human growth hormone gene was included, functioning as an internal positive amplification control (429-bp PCR fragment). After an initial denaturation step, 10 µl samples (100 ng genomic DNA) were subjected to 10 two-temperature cycles followed by 20 three-temperature cycles. For visualisation, 8 µl of the amplification product were run on a 2% agarose gel prestained with ethidium bromide. The presence or absence of specific PCR amplification allowed definite allele assignment without the need for any post-amplification specificity step. The internal positive control primers indicate a successful PCR amplification, allowing the assignment of homozygosity. In a prospective study, 126 patients with thromboembolic events were analysed using this technique and compared with PCR-RFLP according to Bertina et al. The concordance between these techniques was 100%. In 27 patients a heterozygous factor V Gln506 mutation was detected, whereas one patient with recurrent thromboembolism was homozygous. No false-positive or false-negative results were observed in the homozygous as well as heterozygous samples. In addition, in 15 samples identified as carrying the point mutation by allele-specific PCR amplification, automatic sequencing confirmed the heterozygous or homozygous point mutation. Due to its time- and cost-saving features, allele-specific amplification should be considered for screening for factor V Leiden. Background: an initial intravenous course of unfractionated heparin adjusted on the basis of the activated partial thromboplastin time is the current standard treatment for most patients with venous thrombosis.
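The allele-assignment logic of the allele-specific PCR described above (two reactions per sample, each with a 429-bp internal control) can be sketched as a small decision function. The function and band names are assumptions for illustration; only the band-to-genotype mapping follows the abstract:

```python
def call_genotype(wt_band: bool, leiden_band: bool,
                  wt_control: bool, leiden_control: bool) -> str:
    """Assign a factor V genotype from the presence/absence of the
    235-bp allele-specific products in the wildtype (Arg506) and
    Leiden (Gln506) primer mixes."""
    # The 429-bp internal control must amplify in both mixes;
    # otherwise absence of a specific band is uninterpretable.
    if not (wt_control and leiden_control):
        return "PCR failure - repeat"
    if wt_band and leiden_band:
        return "heterozygous Arg506/Gln506"
    if leiden_band:
        return "homozygous Gln506 (factor V Leiden)"
    if wt_band:
        return "homozygous Arg506 (wildtype)"
    return "no factor V product - repeat"
```

The internal control is what makes a negative allele-specific lane informative, and hence allows homozygosity to be assigned without a post-amplification specificity step.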
Low-molecular-weight heparin preparations can be administered subcutaneously, once or twice daily, without laboratory monitoring. We compared the relative efficacy and safety of low-molecular-weight heparin versus unfractionated heparin for the initial treatment of deep venous thrombosis. Methods: English-language reports of randomized trials were identified through a MEDLINE search (1984 through 1995) and a complementary extensive manual search. Reasons for exclusion from the analysis were no heparin dosage adjustments, the lack of use of objective tests for deep venous thrombosis, dose-ranging studies that used higher doses of low-molecular-weight heparin than are currently in use, and the failure to provide blind endpoint assessment. We assessed the incidence of symptomatic recurrent venous thromboembolic disease, the incidence of clinically important bleeding, and mortality. Results: twelve of the 21 identified trials satisfied the predetermined criteria. The relative risk reductions for symptomatic thromboembolic complications, clinically important bleeding, and mortality varied from 20-60% and were all statistically significantly in favor of low-molecular-weight heparin. Conclusions: low-molecular-weight heparins administered subcutaneously in fixed doses adjusted for body weight and without laboratory monitoring are more effective and safer than adjusted-dose standard heparin. Since low-molecular-weight heparins vary in composition and pharmacological profile, the benefits of each should be established separately. Unfractionated heparin (UH) and low-molecular-weight heparin (LMWH) are widely used for the prevention and treatment of thrombotic disorders. UH and LMWH induce platelet aggregation in vitro. RGD peptides compete with fibrinogen for binding to the glycoprotein receptor (GP IIb-IIIa) of platelets and inhibit platelet aggregation.
To inhibit the heparin-induced platelet activation and prolong the half-life in blood of RGD peptides, we linked Ac-RGDV-SSGGS-Ahx-YK covalently to LMWH in a ratio 1:1. The peptide is composed of three regions: a. RGD- gives the specificity for the receptor GP IIb-IIIa; b. -SSGGS-Ahx- is the spacer between carrier and ligand, which should facilitate the interaction between the conjugate and the GP IIb-IIIa receptor; c. -YK are functional amino acids for iodination (Y) and for covalent attachment (K) to the carrier LMWH. The aggregation achieved with different concentrations of LMWH, LMWH conjugate and LMWH/RGD-peptide mixture in a ratio 1:1 was measured after 20 min; maximum aggregation after platelet activation with 10 µM ADP was set equal to 100%. Platelet aggregation in normal human platelet-rich citrated plasma (PRP; 220,000/µl) was induced by LMWH in a dose-dependent manner. Heparin can induce antibodies which interact with platelets and endothelial cells. This causes thrombocytopenia and thromboembolic complications. HIT patients do need effective parenteral anticoagulation. We treated 82 patients (30 M, 52 F), median age 61 years (18-84), with laboratory-proven HIT (HIPA test) with recombinant (r-)hirudin. As these patients had been preselected by their immunological response during heparin treatment, and the treatment duration of the study was longer than in any other study using r-hirudin, all patient samples were investigated for anti-r-hirudin antibodies. Hirudin antibodies were screened by a sandwich ELISA using r-hirudin fixed to the solid phase as antigen. All plasma samples were screened for anti-hirudin antibodies of the IgG class, but so far only a subset of samples for IgE anti-r-hirudin antibodies. 38 of 82 patients (46.3%) developed anti-hirudin antibodies of the IgG class. Anti-hirudin antibodies were not detectable before 6 days of r-hirudin administration. So far no IgE anti-hirudin antibodies were found.
None of the patients developed thrombocytopenia or allergic symptoms. However, in a subset of patients the anti-hirudin antibodies enhanced the anticoagulatory effect of r-hirudin. In 5 patients the hirudin dosage had to be decreased by 2-3-fold to maintain a stable aPTT level; in 3 patients, despite a stable r-hirudin maintenance dose, the aPTT increased to values >100 sec. During the study, 4 patients with anti-hirudin antibodies had to be re-exposed to a second course of r-hirudin for parenteral anticoagulation; none of these patients developed any allergic reaction. In conclusion, we found a high proportion of anti-hirudin antibodies in HIT patients treated with r-hirudin for more than 7 days. These antibodies seem to have minor clinical relevance in regard to allergic reactions. However, one has to consider that these antibodies may influence the pharmacokinetics of r-hirudin and thereby enhance its anticoagulatory potency. Therefore, the aPTT must be monitored closely in patients receiving r-hirudin for more than 5 days. A major concern in the use of hirudin, the most potent and specific thrombin inhibitor, is the risk of bleeding associated with the potential effect of this drug on hemostasis, particularly when the antithrombotic therapy is combined with invasive procedures, fibrinolytic treatment, or a patient's predisposition to abnormal bleeding. Thus, availability of an antagonist to hirudin would be essential for instant neutralization of the antithrombotic action. However, such a hirudin antagonist is unknown in nature. To prepare an antagonist to hirudin, a mutant derivative of human prothrombin, in which the active-site aspartate at position 419 is replaced by an asparagine, has been designed, expressed in recombinant Chinese hamster ovary cells, and purified to homogeneity. D419N-prothrombin was converted to the related molecules D419N-meizothrombin and D419N-thrombin by limited proteolysis by E. carinatus venom and O. scutellatus venom, respectively.
Both D419N-thrombin and D419N-meizothrombin exhibited no thrombin activity, and titration resulted in no detection of the active site. However, binding to solid-phase immobilized hirudin and fluorescence studies confirmed that the binding to the most specific thrombin inhibitor, hirudin, was conserved in both proteins. In vitro examinations showed that D419N-thrombin and D419N-meizothrombin bind to immobilized hirudin, neutralize hirudin in the purified system as well as in human blood plasma, and re-activate the thrombin-hirudin complex. Animal model studies confirmed that D419N-thrombin and D419N-meizothrombin act as hirudin antagonists in the blood circulation without detectable effects on the coagulation system. While i.v. injections of hirudin in mice resulted in an increase in partial thromboplastin time, thrombin time and antithrombin potential, additional injections of D419N-thrombin and D419N-meizothrombin resulted in a normalization of these coagulation parameters. Elevation of plasma homocysteine is a hereditary disorder of methionine metabolism associated with a high risk of arterial vascular disease. However, as yet relatively little attention has been directed towards the association between hyperhomocysteinemia and juvenile venous thromboembolism (VTE). Consequently, the aim of our study was to evaluate the prevalence of hyperhomocysteinemia (hyper-Hcy) in juvenile VTE. Patients: 85 patients (29 men, median age 42 ys; 56 women, median age 35 ys) who had had at least one verified episode of VTE before the age of 45 ys were investigated in regard to their total plasma Hcy levels. None of the patients had renal or liver dysfunction or evidence of any autoimmune or neoplastic disease. Methods: plasma total homocysteine levels were determined by HPLC with fluorescence detection.
Hyperhomocysteinemia was defined as Hcy levels exceeding the upper limit of the normal range obtained in our laboratory from 80 healthy control subjects (40 males, median age 25 ys, Hcy 95% CI: 2.02-5.67 µmol/l; 40 females, median age 27.5 ys, Hcy 95% CI: 2.99-5.40 µmol/l). Results: 13 out of 85 patients had hyper-Hcy, giving a prevalence of 15.3%. Of these 13 patients, 9 were male and 4 female, indicating that the relation between elevated plasma Hcy levels and VTE may not be as strong in women as in men. Discussion: in accordance with previous reports, our study shows that there is a high prevalence of hyper-Hcy in patients with juvenile VTE. However, the mechanisms by which hyper-Hcy can provoke VTE, and whether Hcy is an exclusive risk factor or contributes to other existing predispositions, possibly working as a trigger factor, are as yet unknown. Some authors suggest an Hcy-induced effect on factor V activation or inhibition of thrombomodulin-dependent protein C activation. In addition, an influence on thrombocyte aggregation has been postulated. Conclusion: measurement of Hcy levels may be useful in the evaluation of patients with a history of juvenile venous thromboembolism and could be clinically important, as hyper-Hcy is easily corrected by vitamin supplementation. Detailed determination of the pathogenesis of VTE in patients with hyper-Hcy should be the aim of further investigations. A deficiency of one of the coagulation inhibitors antithrombin (AT), protein C (PC) or protein S (PS) and resistance to activated protein C (APC resistance) are established risk factors for venous thromboembolism (VTE). In the majority of patients with APC resistance, the Arg506Gln mutation (factor V Leiden) is present. Whereas deficiencies of one of the coagulation inhibitors are rare in the normal population, the allele frequency of factor V Leiden is 2-7% in western Europe.
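The case definition and prevalence figure from the homocysteine study above reduce to a sex-specific threshold test and a simple proportion. A minimal sketch, assuming the upper 95% CI limits reported for the controls are used directly as cutoffs (the study's exact laboratory cutoffs may differ):

```python
# Upper limits of the control 95% CIs from the abstract, in µmol/l.
UPPER_LIMIT = {"male": 5.67, "female": 5.40}

def is_hyperhomocysteinemic(hcy_umol_l: float, sex: str) -> bool:
    """Flag a total plasma Hcy value exceeding the sex-specific
    upper limit of the control range."""
    return hcy_umol_l > UPPER_LIMIT[sex]

def prevalence_pct(n_cases: int, n_total: int) -> float:
    """Prevalence as a percentage, e.g. 13/85 patients -> ~15.3%."""
    return 100.0 * n_cases / n_total
```

With the study's counts, `prevalence_pct(13, 85)` reproduces the reported 15.3% to one decimal place.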
heterozygous individuals have a 3-7-fold, homozygous individuals an 80-fold increased risk for vte. the typical clinical features of all abnormalities are deep vein thrombosis, pulmonary embolism, superficial vein thrombosis and thrombosis at unusual sites, such as mesenteric vein thrombosis or cerebral vein thrombosis. the thrombotic risk is low during childhood but increases considerably after the 13th year of age. a retrospective study in adult patients from families with a symptomatic deficiency of at, pc or ps revealed that around 30% of surgical interventions and traumas of the lower extremities were complicated by vte. therefore, these patients should receive thrombosis prophylaxis after surgery and trauma if they are older than 13 years. pregnancy is associated with a very high risk for vte in individuals with at deficiency, and prophylaxis should be initiated already in the first trimester. after delivery, thrombosis prophylaxis is advised for all females known to have an abnormality. oral contraceptives (oc) increase the risk, especially in at-deficient and in homozygous factor v leiden females, and are therefore contraindicated in these individuals. oc also increase the risk for vte in patients heterozygous for factor v leiden, and females known to have this abnormality should be discouraged from taking oc or should at least be informed of their increased risk. university hospital, jerusalem, israel; hôpital beaujon, paris, france. an increased frequency of thromboembolic events has been observed in patients with β-thalassemia. our findings of shortened platelet survival and enhanced urinary excretion of thromboxane a2 metabolites (blood 77:1749, 1991) suggested an increased platelet activation in these patients. we also found that isolated thalassemic rbc enhance prothrombin activation, suggesting an increased membrane exposure of procoagulant phospholipids, i.e. phosphatidylserine (am j. hematol. 44:33, 1993).
we now show that annexin v, which has a high specificity and affinity for anionic phospholipids, inhibits prothrombin activation by factor xa by binding to thalassemic rbc (ic50 = 0.32 nm). kerckhoff-klinik, bad nauheim; medizinische poliklinik bonn; institut für immunologie und transfusionsmedizin, universität greifswald. antibody-mediated intravascular platelet activation is believed to be the basis for both arterial and venous thrombosis in patients with hat. while the development of arterial thrombosis can be explained sufficiently by intravascular platelet activation, it is a matter of discussion whether additional risk factors are involved in the pathogenesis of hat-related venous thrombosis. since resistance to activated protein c (apc) is the most common inherited risk factor for venous thrombosis described, the frequency of apc resistance among a population of 68 hat patients has been studied. hat was diagnosed using the heparin-induced platelet aggregation assay and confirmed by the 14c-serotonin release test. the diagnosis of apc resistance was established by two functional assays and genetic analysis. at the time of diagnosis of hat, 38 patients showed venous thromboembolic complications. among these, 24 were found positive for apc resistance. pulmonary embolism was diagnosed in 18 hat patients, 14 of whom were apc resistance positive. none of the 18 hat patients who showed exclusively thrombocytopenia were apc resistance positive. early oral anticoagulation (oa) was initiated in 7 patients after the diagnosis of hat had been established. six of these patients developed serious thrombotic complications including skin necrosis. these results demonstrate that apc resistance is an additional and common risk factor for the development of hat-related venous thrombosis. early initiation of oa during an acute episode of hat dramatically increases the risk of thrombosis.
therefore, oa in hat patients should be initiated only after platelet counts have returned to baseline levels and effective parenteral anticoagulation is achieved. controlled trials for primary and secondary prevention of stroke. g. de gaetano, c. cerletti and v. bertelè, consorzio mario negri sud, santa maria imbaro, italy. this presentation will review the antithrombotic treatments to prevent ischemic stroke that have been evaluated in controlled clinical trials. in two studies of aspirin therapy for primary prevention in male physicians there was no reduction in the incidence of stroke, while that of first myocardial infarction was significantly lowered. similar results were obtained in a prospective study in a large cohort of women taking aspirin daily. the incidence of vascular death was not modified by aspirin in any of these trials. this is possibly due to an excess of strokes associated with aspirin treatment: indeed, the four vascular events avoided in 1000 us physicians under aspirin prevention for five years would result from five myocardial infarctions and one vascular death avoided and two additional strokes incurred. oral anticoagulant therapy decreases the relative risk of stroke in patients with non-valvular atrial fibrillation. warfarin appears to be superior to aspirin, but the latter drug is a useful alternative when long-term anticoagulant therapy cannot be administered. a meta-analysis of about 150 trials and over 100,000 patients with different vascular diseases treated with aspirin (at different doses) and/or other platelet inhibitors showed a 25% overall reduction of vascular events including stroke. the optimal dose of aspirin for secondary stroke prevention could not be established. in patients with previous minor strokes or tia there was a 22% reduction of vascular events and 23% of non-fatal strokes.
the avoidance of nine strokes of any cause among the 38 expected in 1000 patients at risk would result from the sum of 10 ischemic events avoided and one haemorrhagic event occurring in excess. ticlopidine was reported to reduce the risk of stroke in two large trials (one in patients with major stroke), but there is no evidence that it is better or safer than aspirin. we compared the effect of the direct specific thrombin inhibitors, napsagatran (na) and rec. hirudin (rh), with unfractionated heparin (uh) on the further growth of preformed thrombi. as a model of thrombogenesis, an annular perfusion chamber exposing rabbit aortic subendothelium was perfused with native rabbit blood at an arterial wall shear rate (650/s). fibrin and platelet thrombi were allowed to form during a 10-min perfusion period, after which the test agents were given iv as a bolus and a continuous infusion (3 and 10 µg/kg/min, n=6) and the perfusion continued for 20 min. the control groups were perfused for 10 or 30 min (n=6). fibrin deposited and platelet thrombi formed on subendothelium were evaluated by microscopic morphometry. the % surface coverage with fibrin was not reduced in the drug-treated groups, since fibrin deposition was similar in the 10 and 30 min control groups (39±8% and 33±6%, respectively, mean±sem). platelet thrombus area (ta) in the control groups increased from 24±7 µm²/µm after 10 min to 97±32 µm²/µm after 30 min perfusion. na at 10 µg/kg/min reduced ta by 94% to values (6±2 µm²/µm) lower than those of the 10-min control group, whereas rh at this dose reduced ta by 70% (30±14 µm²/µm). uh at both doses was ineffective. these findings show that, in contrast to uh, the direct thrombin inhibitors na and rh inhibit the growth of preexisting thrombi. these results could be explained by the higher potency of na and rh as compared to uh for inhibiting clot-bound thrombin (gast et al., blood coagul fibrinol 1994, 5:
879-87) and suggest that thrombus-bound thrombin is an important modulator of platelet thrombus growth and/or stability in this thrombosis model. platelet adhesion, the initial event of thrombosis, is believed to be completely prevented by intact endothelium. we challenged this theory by superfusing intact human umbilical vein endothelial monolayers with activated human platelet-rich plasma utilizing the stagnation point flow adhesio-aggregometer (spaa). the spaa provides flow-mediated contact of platelets with the superfused surface. heparinized (3.5-4.0 u/ml) platelet-rich plasma (prp) was obtained from healthy volunteers and activated by addition of adenosine diphosphate (adp 2×10⁻⁶ m). platelet deposition was recorded on-line by video as well as by measuring scattered light. fixed samples were examined by phase contrast and electron microscopy. inhibition experiments were performed with either the tetrapeptide rgds, the non-peptide gpiib/iiia inhibitor ro-43-8857 or a monoclonal antibody directed against the gpiib/iiia complex. stimulation with adp prompted platelets to adhere to intact endothelium singly or as microaggregates of up to 100 µm in diameter. adhesion was dependent upon convective transport resulting in platelet collision with the endothelial monolayer. infusion of rgds or ro-43-8857 into the flowing, adp-stimulated prp completely prevented platelet adhesion to the endothelium as well as subsequent aggregation. when the inhibitor inflow was stopped while adp stimulation persisted, adhesion and aggregation occurred immediately. re-establishing the inflow of the inhibitors, with still continued adp stimulation, led to disintegration of the adhering aggregates. when prp preincubated with the monoclonal antibody against gpiib/iiia was superfused, platelet adhesion to the endothelium and aggregation were irreversibly blocked.
our results suggest that convective transport and stimulation of platelets are prerequisites to overcome endothelial thromboresistance and that subsequent platelet adhesion to the endothelium is mediated via the platelet gpiib/iiia receptor complex. prevent thrombus formation after ptca. i.p. stepanova, g.v. bashkov, l.p. kapralova, s.p. domogatsky; cardiology research center and national haematology scientific center, russian academy of medical sciences, moscow, russia. percutaneous transluminal coronary angioplasty (ptca) results in atherosclerotic plaque rupture, vascular wall damage and thrombogenic collagen exposure. subendothelial collagen type i-iii is a very strong agonist of platelet-dependent thrombus formation in arteries. the antithrombotic action of rabbit polyclonal antibodies to rat collagen type i-iii and their chemically synthesized conjugate with monoclonals to human recombinant two-chain/one-chain urokinase-type plasminogen activator (rtcu-pa/rscu-pa), cross-reacting with rat tcu-pa/scu-pa, was studied both in vitro and in vivo. anticollagen antibodies and the bispecific conjugate inhibited human platelet adhesion, aggregation and formation of thrombus-like structures induced by rat collagen immobilized on the polystyrene surface under conditions mimicking the high shear rate in the large elastic-type arteries. short-term treatment of the collagen-soaked silk thread with the collagen antibodies suppressed platelet-dependent thrombus formation in the arterio-venous shunt in rats by 56±4% (p<0.001), as did the bispecific conjugate (44±4%, p<0.001). treatment of the collagen-adsorbed conjugate with rtcu-pa did not increase the antithrombotic effect of the bifunctional antibodies. the present data suggest that local administration of anticollagen antibodies at the site of atherosclerotic plaque rupture may be an efficient tool for prophylaxis of platelet-dependent thrombus formation in arteries after ptca.
increased levels of certain hemostatic factors have been shown to be related to an increased risk of cardiovascular events. hypercoagulability is suggested to predispose to arterial thrombosis and thereby to participate in atherogenesis. we therefore assessed fibrinogen, prothrombin fragment 1+2 (f1+2) and von willebrand factor (vwf) antigen in 80 consecutive patients (aged 59±5 years) with known coronary artery disease (cad) who all underwent coronary angiography. the extent of coronary artery disease was quantified according to modified criteria of the american heart association (total, proximal and distal "score"). furthermore, the intima-media thickness (imt) was determined in the carotid and femoral arteries by standardized ultrasonographic measurement. vwf antigen was found to correlate positively with the total and proximal coronary score (r=0.29, p<0.05 and r=0.36, p<0.01). while f1+2 showed no correlation with the coronary scores, it was significantly correlated with the imt values in the carotid arteries (r=0.27, p<0.05). after differentiating tertiles of the parameters, patients belonging to the upper tertile of f1+2 concentrations had significantly higher imt values of the carotid and femoral arteries (0.81±0.11 mm vs. 0.72±0.13 mm in the lower tertile, p<0.05; 1.38±0.44 mm vs. 1.15±0.25 mm, p=0.05), whereas in patients belonging to the upper tertile of vwf antigen concentrations the proximal coronary artery score was significantly higher (1.71±0.59 vs. 1.31±0.92 in the lower tertile, p<0.01). no correlation of fibrinogen concentrations with the extent of cad or the imt values of the carotid and femoral arteries could be demonstrated. in conclusion, procoagulatory mechanisms, as indicated by elevated concentrations of von willebrand factor antigen and f1+2, may be contributing factors in atherogenesis.
we have previously shown that pge1 is a potent inhibitor of pdgf-induced proliferation of vascular smooth muscle cells (vsmc) and inhibits dna replication by a camp-related mechanism (grosser et al., 1994). the present study investigates whether this antimitogenic activity of pge1 can be amplified by trapidil, a compound that has recently been shown to reduce the incidence of restenosis of human coronary arteries subsequent to ptca (maresta et al., 1994). vsmc were prepared from coronary arteries of adult bovine hearts, passaged and kept under standard tissue culture conditions. cells of passage 4-9 were incubated in serum-free medium for 24 h in the presence of indomethacin (3 µm). addition of pdgf-bb (10 ng/ml) under these conditions stimulated dna replication, as assessed from ³h-thymidine incorporation, by 3- to 4-fold above control level. trapidil at 10 µm caused a minor reduction of pdgf-induced mitogenesis, whereas 100 µm of the compound resulted in a marked reduction of dna replication by 69% (p<0.05, n=4). pge1 at 0.3 nm diminished the incorporation rate by 11%, while the simultaneous administration of both pge1 and trapidil (100 µm) caused a significantly stronger response, as seen from a reduction of the ³h-thymidine incorporation rate by 82% (p<0.05, n=4). as a possible mechanism of action, trapidil might have inhibited phosphodiesterases. to establish this, we measured the camp-dependent protein kinase (pk) a activity in cell homogenates. trapidil increased the basal pka activity from 19% to 31% of the maximum response, while the response to pge1 (10 nm) amounted to 55%. coincubation of pge1 with trapidil caused a 65% stimulation of pka activity, suggesting a small though detectable inhibition of vsmc phosphodiesterases by trapidil at antimitogenic concentrations. essentially similar results were obtained when thrombin was used as the mitogenic agent.
the data demonstrate a significant antimitogenic effect of trapidil at µmolar concentrations that are in the range of plasma levels after therapeutic administration of the compound in vivo. at these concentrations, pge1-induced inhibition of mitogenesis is markedly enhanced by trapidil. inc., vienna, and central hematology laboratory, university hospital of bern. fibrinogen (fg), von willebrand factor antigen (vwf) and tissue-type plasminogen activator antigen (t-pa) have recently been shown to be independent risk factors for subsequent coronary events in patients with angina pectoris (nejm 1995; 332:635). although pai-1 antigen has also been proposed as a risk factor, conclusive data showing its predictive value are still lacking. furthermore, we have recently shown in a study investigating 200 survivors of myocardial infarction that not only are fg, t-pa and pai-1 significantly increased in these patients when compared to a healthy control group, but pci activity is also elevated (thromb. haemost. 1995;73:1137 abst.). in order to obtain cut-off points for the individual parameters, frequency histogram plots were transformed into straight-line cumulative frequency (probit) plots (thromb. haemost. 1995;74:718). the cut-off values for the four parameters were determined as follows: fg at 2.7 g/l, t-pa at 8.7 ng/ml, pai-1 at 25 ng/ml and pci at 126% of a normal pooled plasma. utilizing these cut-off points it was then possible to determine the cumulative discriminatory effectiveness of the parameters. when fg was employed alone as the discriminatory factor, it was observed that 47% (94/200) of the coronary heart disease (chd) group either had the cut-off value or were below it and 29% (29/99) of the normal group were above the cut-off value, thus resulting in 47% false negatives and 29% false positives. when a second additional risk factor, t-pa, was introduced, the number of false negatives dropped to 21% [i.e.
79% (158/200) had two risk factors elevated] and the number of false positives to 13%. to investigate whether a third parameter could discriminate further, pai-1 antigen was used to analyse the remaining false positives and negatives. an additional 4% could be detected, resulting in 83% of the chd group having three risk factors elevated. similarly, the number of normal subjects with three parameters elevated dropped by 4% to 9%. furthermore, when a fourth parameter was introduced, namely pci, it was found to discriminate a further 8% in the chd group, thereby increasing the discrimination to 91%. the number of false positives dropped to 4%. additionally, determination of pci increased the discrimination of patients having had multiple infarctions from 88% when three parameters were measured to 100%. from these results it can be concluded that determination of fibrinogen levels alone is not sufficient to separate patients from controls, as t-pa adds significant discrimination. pai-1 antigen, which correlated strongly with t-pa, did not significantly increase the discriminatory potential of fg and t-pa. however, by employing pci as a fourth parameter, virtually complete separation between the chd and normal groups as well as further recognition of patients having had multiple infarctions could be obtained. to test the hypothesis that oral contraceptives (oc) enhance exercise-induced activation of blood coagulation we examined 11 women (27±3 (sd) years, bmi 20.4±2.0 kg/m², vo2max 49±7 ml/kg/min) without oc between day 18 and 22 of the menstrual cycle and 10 women (24±2 (sd) years, bmi 20.6±1.6 kg/m², vo2max 47±7 ml/kg/min) taking oc (150 µg desogestrel and 30 µg ethinylestradiol) between day 7 and 21 of drug intake. prothrombin fragment 1+2 (ptf1+2) and fibrinopeptide a (fpa) were measured before and after running for one hour on a treadmill at a speed corresponding to the anaerobic threshold. mean heart rate (191 ± 10 vs.
196 ± 12 min⁻¹) and mean plasma lactate (3.3 ± 1.6 vs. 3.1 ± 1.2 mmol/l) were comparable during exercise between the control and oc group, respectively. results for markers of thrombin and fibrin formation (ptf1+2 in nmol/l, fpa in ng/ml): control, before 0.66 ± 0.19 and 1.0 ± 0.2, after 0.73 ± 0.23 and 2.2 ± 1.2*; oc, before 0.82 ± 0.31 and 1.0 ± 0.2, after 1.11 ± 0.48*⁺ and 5.8 ± 6.0*⁺ (* p < 0.05 vs. baseline, ⁺ p < 0.05 between groups). we conclude that oral contraception with 150 µg desogestrel and 30 µg ethinylestradiol enhances exercise-induced thrombin and fibrin formation. our data suggest that exercise testing might be useful for evaluating the risk of thrombosis associated with different compositions of oc. a. haushofer, w.m. halbmayer, j. radek, m. dittel, r. spiel, h. prachar, j. mlczoch, m. fischer; zentrallaboratorium mit thrombose- und gerinnungsambulanz, krankenhaus lainz, and 4. medizinische abteilung mit kardiologie, krankenhaus lainz, und ludwig-boltzmann-institut für herzinfarktforschung, wien. fifty-one patients (age 61.6 ± 9.2 years; 34 m / 17 f) implanted with 74 coronary stents (33 palmaz-schatz, 27 gianturco-roubin, 14 micro stents) received a new antithrombotic treatment using a combination of ticlopidine (tic) 2×250 mg/d for 28 days and acetylsalicylic acid (asa) 100 mg/d for long-term treatment. patients (pat) received only 15000 iu standard heparin as an i.v. bolus immediately before stent implantation (day 1). side effects and changes in hematological (day 1 to 7, 14, 28 and 35 [= without tic]), liver and kidney parameters (day 7, 14, 28, 35) were monitored. thirty-eight pat (75%) came for the controls to our department and were additionally monitored by thromboelastography (teg) and bleeding time (bt) (day 28 and 35). the other pat were monitored externally; side effects were reported. thrombin generation after stenting was monitored from day 1 to 7 by prothrombin fragment 1+2 (f 1+2) and thrombin-antithrombin-iii complex (tat).
"k" of the teg decreased (day 28 vs 35; p<0.01). bt prolongation was negatively correlated with the body surface area (tic+asa: p<0.05, asa: p<0.01) and showed a reduction after withdrawal of tic (210 sec, 180/300 sec [median, quartiles] vs. 120 sec, 88/157 sec; p<0.0001). f 1+2 and tat of day 1 (blood collection: 0, 2, 4, 6 h after intervention; f 1+2: 0.84 nmol/l, 0.68/1.07 nmol/l; tat: 2.6 µg/l, 2.0/4.6 µg/l) were lower compared to day 2 to 7 (f 1+2: 1.31 nmol/l, 1.08/1.62 nmol/l; tat: 4.8 µg/l, 3.2/7.2 µg/l; p<0.0001). tic seems not to be a strong thrombin generation inhibitor. during stenting, one pat (1.9%) sustained a non-penetrating mi and one (1.9%) an ischaemic stroke. tic+asa were very effective; only in one pat (1.9%) did (acute) stent thrombosis occur. side effects: 8/15.7% gastrointestinal (one led to hospitalization), 5/9.8% hematomas at the needle site in the groin (one surgical intervention), 5/9.8% leucopenias (one agranulocytosis with hospitalization), 3/5.9% allergic skin reactions and 2/3.9% increased liver enzymes (got, gpt, γgt, alkaline phosphatase; >2× the upper normal limit). in one pat with gastrointestinal disturbances and skin reactions tic had to be withdrawn and treatment was changed to oral anticoagulation + asa. one pat showed a combination of skin reactions, gastrointestinal disturbances and, on day 28, a heavy reaction of the liver enzymes (decline after 5 weeks). a decrease of the white blood count (day 1: 7.19 g/l, 6.03/8.2 g/l; day 28: 5.46 g/l, 4.63/6.47 g/l; p<0.0001) could be observed. the safety of therapy with tic+asa should be elucidated and extensively discussed. the serpins c1 esterase inhibitor (c1inh), antithrombin iii (atiii), a1-antitrypsin (a1at), and a2-antiplasmin (a2ap) are known inhibitors of coagulation factor xia (fxia).
although initial studies suggested a1at to be the main inhibitor of fxia, we recently demonstrated c1inh to be a predominant inhibitor of fxia in vitro in human plasma (wuillemin et al., blood 1995; 85:1517). the present study was performed to investigate the plasma elimination kinetics of human fxia-fxia inhibitor complexes injected in rats. the amounts of complexes remaining in circulation were measured using elisas. the plasma t1/2 of clearance was 98 min for fxia-a1at complexes, whereas it was 19, 18, and 15 min for fxia-c1inh, fxia-a2ap, and fxia-atiii complexes, respectively. thus, due to this different plasma t1/2, preferentially fxia-a1at complexes may be detected in clinical samples. this was indeed shown in plasma samples from thirteen children with meningococcal septic shock (mss), a clinical syndrome which is complicated by activation of the coagulation, fibrinolytic, and complement systems. fxia-fxia inhibitor complexes were assessed upon admittance to the intensive care unit. fxia-a1at complexes were elevated in all patients, fxia-c1inh complexes in nine, fxia-atiii complexes in one patient, and no elevated fxia-a2ap complexes were found. we conclude from this study that (1) although c1inh is the predominant fxia inhibitor, fxia-a1at complexes may be the best parameter to assess activation of fxi in clinical samples, (2) measuring fxia-fxia inhibitor complexes in patient samples may not help to clarify the relative contribution of the individual serpins to inactivation of fxia in vivo, and (3) fxi is activated in patients with meningococcal septic shock. during the coagulation of plasma about 20% of the a2ap present is covalently crosslinked to fibrin by factor xiiia (aoki and sakata 1980, thromb. res. 19: 149-155). we investigated the binding of a2ap by factor xiiia to soluble fibrin (desaabb-fibrin) whose polymerization was inhibited by an isolated fibrin d-domain named dcate (haverkate and tiemann 1977, thromb. res. 10: 803-812). dcate
is known to have an intact fibrin-polymerization site and is able to block the prolongation of the fibrin protofibrils at an early stage, depending on its concentration. lateral association to fibrin fibers does not take place, since the inhibited protofibrils formed under the conditions used here do not reach a sufficient length (williams et al. 1981, biochem. j. 197: 661-668; hantgan et al. 1983, ann. n. y. acad. sci. 408: 344-365). material and method: soluble desaabb-fibrin was prepared by incubation of (125i)-fibrinogen (0.48 mg/ml), dcate (1.91 mg/ml; molar ratio dcate to fibrin 14:1) and 0.4 u/ml thrombin for 45 min. then (125i)-a2ap (16 µg/ml), factor xiii (2 u/ml) and ca2+ (5 mmol/l) were added. the crosslinking reaction was stopped at different times of factor xiiia incubation by addition of urea/edta solution. the suspension was analysed by ultracentrifugation on gradients containing sucrose, urea and sds. results: the elution profiles of the ultracentrifugation gradients show the formation of crosslinked fibrin oligomers of increasing size depending on the time of factor xiiia action. the crosslinked fibrin polymers contained about 80% of the fibrin initially added. although factor xiiia acted well, crosslinking of a2ap in the fibrin oligomers could not be observed. conclusion: as we have already demonstrated (kelach et al. 1994, ann. hematol. 70 (suppl 1): a60), the crosslinking of a2ap to fibrin clots depends on the structure of the fibrin network, especially on the degree of lateral association of the fibrin protofibrils. in desaabb-fibrin no lateral association of fibrin protofibrils takes place under the conditions chosen here. thus it is consistent with our theory that we did not observe any binding of a2ap to the fibrin oligomers of desaabb-fibrin. human pci is a non-specific serpin that inhibits several proteases of the coagulation and fibrinolytic systems as well as tissue kallikrein and the sperm protease acrosin.
it is synthesized in many organs including liver, pancreas, and testis. the physiological role of pci has not yet been defined. recently, we cloned and sequenced the mouse pci gene (zechmeister-machhart et al., manuscript in prep.). this enabled us to study pci gene expression in murine tissues using mouse pci cdna and crna probes. by northern blot analysis, mouse pci mrna was exclusively found in the reproductive tract (testis, seminal vesicle, ovary); all other organs analyzed, including the liver, were negative for pci mrna, indicating that in the mouse pci is not a plasma protein. to determine which cells of the reproductive tract synthesize pci, cellular localization was assessed by in situ hybridization of mouse testis and ovary sections. in testis, pci mrna was present in the spermatogonia layer and in leydig cells, while sertoli cells and peritubular myoid cells were negative. these results are consistent with the immunohistological localization of human pci (laurell et al., 1992). in the mouse ovary, stroma cells of the medulla and around the follicles were positive for pci mrna. no pci expression was detected in theca or granulosa cells. we also studied the regulation of mouse pci gene expression by steroid hormones in vivo. in mature male mice castration caused an increase in pci mrna in seminal vesicles, which was reversible upon the administration of testosterone. in tissues of intact adult male and female mice, pci mrna levels decreased after injection of human chorionic gonadotropin (hcg), while in castrated male mice, hcg had no effect on seminal vesicle pci mrna. progesterone and 17-β-estradiol decreased ovarian pci mrna levels in immature female mice. these data suggest direct down-regulation of mouse pci by sex steroids. the different tissue-specific pci gene expression in men and mice furthermore indicates a different biological role of this serpin in the two species. ctr.
transgene technology, leuven. tissue factor (tf) is a 47 kda glycoprotein mainly known as the primary cellular initiator of blood coagulation. whether tf expression may also play a role in development is unknown, but the lack of spontaneous viable mutations of the tf gene in vivo leads to the speculation that its absence may not be compatible with normal embryonic development. to determine the significance of tf in ontogenesis, the pattern of tf expression in mouse development was examined and compared to the tf distribution in human postimplantation embryos and fetuses of corresponding gestational age. at the early embryonic period of both murine (6.5 and 7.5 days pc) and human (stage 5) development there is a strong tf expression in both ectodermal and entodermal cells. tf decoration was seen during ontogenetic development in tissues such as epidermis, myocardium, bronchial epithelium, and hepatocytes, which express tf in the adult organism. surprisingly, during renal development and in the adult organism tf expression differs between men and mice. in humans, maturing-stage glomeruli were tf positive, whereas in mice glomeruli were negative and instead epithelia of tubular segments were tf positive. in neuroepithelial cells there was a striking tf expression, indicating a possible role of tf in neurulation. moreover, there was a robust tf expression in tissues such as skeletal muscle and pancreas, which do not express tf in the adult. in contrast to tf, its physiologic ligand factor vii was not expressed in early stages of human embryogenesis, but was detectable in fetal liver. the temporal and spatial pattern of tf expression during murine and human development supports the hypothesis that tf serves as an important morphogenic factor during embryogenesis. to serve as an anticoagulant, protein c (pc) must be activated by a complex formed between the enzyme thrombin (t) and its cofactor thrombomodulin (tm).
therefore, downregulation of endothelial cell surface-expressed tm, for example triggered by an inflammatory stimulus, could become a critical factor in effective pc activation. in order to develop a recombinant (r) pc mutant which can be activated independently of the t/tm complex, a peptide sequence including p1-p7 in the activation peptide of pc was modified to be identical to the factor xa (fxa) cleavage site in prothrombin. the mutant was expressed in human 293 cells, purified and its anticoagulant properties characterized. using purified fxa, the mutant showed activation rates between 0.02 and 0.13 nm/min at pc concentrations between 97 and 770 nm, while the rpc wild type was insensitive to fxa activation. the activation reaction is calcium-dependent, reaching maximal activation rates at a calcium concentration of 3 mm, and was enhanced 3.8-fold by addition of anionic phospholipids (pl). in contrast to the wild-type pc, the rpc mutant was insensitive to activation by the t/tm complex. addition of the mutant to normal human plasma induces a prolongation of tissue factor- and ptt-based clotting assays. using normal human plasma as a source of fxa, the activation rates of the mutant were found to be 5-fold higher than in the purified system if tissue factor was used to generate fxa. in conclusion, our data demonstrate that the rpc mutant is effectively activated by fxa in a purified as well as in a plasma system. interestingly, the activation rates are enhanced in the presence of pl and normal human plasma. further studies should clarify the potential use of this mutant as a novel anticoagulant. thrombin plays a pivotal role in thrombotic events. the time course of thrombin concentration in blood or plasma after activation is of special interest to answer a variety of questions. with a chromogenic assay developed by hemker et al. [thromb. haemostas. 70, 617, 1993] it became possible to measure the generation of thrombin in activated plasma continuously.
inhibitors of clotting enzymes which are to be developed as anticoagulants should be able to inhibit thrombin generation or to immediately block generated thrombin. we have used a test based on hemker's thrombin generation assay to elucidate which potency and specificity an inhibitor of factor xa needs to efficiently block thrombin generation in human plasma. thrombin generation after extrinsic (tissue factor) or intrinsic (ellagic acid) activation was followed using the chromogenic substrate h-ala-gly-arg-pna (pentapharm ltd.). a series of synthetic low molecular weight inhibitors as well as naturally occurring inhibitors of factor xa with different potency were investigated. because of the inhibition of activated factor x, the generation of thrombin in plasma is delayed and the amount of generated thrombin is reduced. the concentrations which cause a 50% inhibition of thrombin generation (ic50) correlate with the ki values of the inhibitors. low molecular weight inhibitors with ki values of about 20 nmol/l inhibit the generation of thrombin after extrinsic activation with ic50 in the micromolar range. after activation of the intrinsic pathway, tenfold lower concentrations are effective. the strongest inhibitory activity after extrinsic as well as intrinsic activation is shown by recombinant tick anticoagulant peptide (r-tap), with ic50 of 0.049 µmol/l (extrinsic) and 0.037 µmol/l (intrinsic). in a comparison of synthetic low molecular weight inhibitors of thrombin and factor xa which have similar ki values for the inhibition of the respective enzyme (lowest ki 20 nmol/l), factor xa inhibitors are less effective in the thrombin generation assay. in contrast, the highly potent fxa inhibitor r-tap shows a stronger inhibition of thrombin generation than the tight-binding thrombin inhibitor hirudin.
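an ic50 of the kind reported above can be read off a measured inhibition series by log-linear interpolation between the two concentrations that bracket 50% inhibition. a minimal sketch in python; the dose-response values are hypothetical, not the study's data:

```python
import math

def ic50(concs, inhibition):
    """estimate the concentration giving 50% inhibition by log-linear
    interpolation between the two bracketing data points.
    concs: ascending inhibitor concentrations; inhibition: % inhibition."""
    points = list(zip(concs, inhibition))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= 50.0 <= i2:
            f = (50.0 - i1) / (i2 - i1)
            return 10 ** (math.log10(c1) + f * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% inhibition not bracketed by the data")

# hypothetical dose-response for a factor xa inhibitor (conc in umol/l)
concs = [0.1, 0.3, 1.0, 3.0, 10.0]
inhib = [8, 22, 45, 68, 90]
print(round(ic50(concs, inhib), 2))
```

interpolating on a log concentration axis matches the usual presentation of dose-response curves; with dense enough data the choice of interpolation barely matters.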
background: resistance to degradation of coagulation factor v by activated protein c is associated with a point mutation in which adenine is substituted for guanine at nucleotide 1691 in the gene coding for factor v. to date this specific mutation appears to be the most common inherited abnormality which predisposes patients to venous thrombosis. for this reason a reliable, fast and automatable system for the diagnosis of the described point mutation is required. the conventional methods used to identify the mutation are based on allele-specific restriction enzyme site analysis or direct sequencing. these methods have disadvantages for large-scale dna diagnosis, which include the need for electrophoresis or high cost and time consumption. methods: an alternative strategy of dna diagnosis, the allele-specific oligonucleotide ligation assay, was adapted for the diagnosis of the point mutation of factor v. following pcr amplification of the target dna, the procedure was performed completely automatically on a robotic workstation with an integrated elisa reader using a 96-well microtiter plate. allele-specific restriction enzyme site analysis was performed to confirm the genotypes. results: in 10 patients with the mutation and in 20 individuals without the mutation, the genotypes determined with the conventional allele-specific restriction enzyme site analysis were in 100% concordance with the elisa-based oligonucleotide ligation assay. discussion: the pcr-oligonucleotide ligation assay applied as an automated detection system for the identification of the coagulation factor v point mutation allows the rapid, reliable, and large-scale analysis of patients at risk for thrombosis. resistance to the anticoagulant activity of activated protein c (apc resistance) has emerged as the most common inherited thrombophilic state. patients heterozygous for factor v leiden are more likely to suffer from thromboembolic events than controls.
this risk is even more pronounced in homozygotes. due to the low sensitivity and specificity of most coagulation tests, some investigators suggest examining patients for the presence of the factor v leiden mutation by pcr-based methods. recently we presented an aptt-based functional test (accelerin inactivation test, ait): 1:20 diluted plasma (50 µl) is mixed with factor v deficient plasma (50 µl) and aptt reagent (50 µl), incubated at 37°c, and then coagulation is induced by cacl2 and apc (50 µl). using a standard curve, the clotting time (sec) is transformed into per cent accelerin inactivation (%ai). using this test, the widely used apc-ratio as well as pcr-based factor v leiden detection (confirmed by direct sequencing), we prospectively studied 172 consecutive patients with thromboembolic events. patients without the factor v mutation consistently showed more than 50% ai, with the exception of one patient with severe factor deficiencies (including factor v) due to hepatic failure and heterozygous for factor v leiden, resulting in 62% ai. there was a complete concordance between the pcr-based method and dysaccelerinemia detected by ait. from these results a specificity and sensitivity of ait above 95% were calculated. furthermore, a clear discrimination could be observed between the 34 heterozygotes (20%) and non-carriers. we compared pediatric patients with thrombosis in three age groups (0 to <0.5 years; >0.5 to <10 years; >10 to <18 years) with a normal population of 159 children. the mutation g1691a was found with an unexpectedly high prevalence of 12% in our normal controls. however, the prevalence was significantly higher in the age groups 0 to <0.5 years (25%) and >10 to <18 years (30%). in patients between >0.5 and <10 years the overall prevalence was similar to the control (13%). however, in patients of this age with spontaneous thrombosis, apcr was also a significant risk factor (29%). our results emphasize the impact of apcr on thrombogenesis in children.
however, the significance is age-dependent and possibly reflects the different physiology of haemostasis in our three age groups. activated protein c (apc) resistance is a newly recognised risk factor for thrombosis. in at least 90% of cases it is caused by a single point mutation in the factor v gene (g→a at nucleotide 1691), which predicts replacement of arginine 506 with glutamine. one of the apc cleavage sites in factor va is located c-terminal of arginine 506, and mutated factor va (factor v leiden) is resistant to apc-mediated inactivation. from epidemiologic studies it is known that this abnormality can be found in about one third of patients with thrombosis. apc resistance is a major basis for venous thromboembolism and is prevalent in about 5-10% of the general caucasian population. recurrent spontaneous abortion (rsa) affects 1-5% of couples and represents a major concern for reproductive medicine. in spite of extensive endocrine, genetic, serologic and anatomic evaluation, some 30-40% of rsa women remain unexplained. a frequent morphologic finding in placentae of aborted pregnancies is an increase of fibrin deposition within the intervillous space. because of these findings we studied the prevalence of apc resistance in women with rsa (more than 3 miscarriages) of unknown origin. in 2 of 35 cases we found a pathologic apc resistance; both patients had a history of recurrent thrombosis and were heterozygous for factor v leiden. the prevalence of apc resistance is 5.7% and thus equals the prevalence in the general population. our data do not support the hypothesis that apc resistance is a risk factor for recurrent spontaneous abortion.
hämatologisches zentrallabor der universität, inselspital, 3010 bern. resistance to activated protein c (apc) due to the mutation 506 arg → gln of factor v (factor v leiden mutation) is the most frequent hereditary thrombophilic defect known today, with a prevalence of 20-50% in patients with idiopathic venous thromboembolism and of about 3-5% in the general population. with an allele frequency of 2%, the expected number of homozygous individuals is about 4 in 10,000. homozygous and heterozygous individuals differ considerably with respect to the relative risk of thrombosis (80-fold versus 7-fold) as well as to the age of the first thrombotic event (31 versus 44 years). deficiency of the vitamin k dependent protein s (ps), an important cofactor of apc, is another hereditary thrombophilia which is, however, much rarer than apc resistance, with a prevalence of 5 to 8% in patients with venous thromboembolism. the factor v leiden mutation as well as ps deficiency are associated with impaired anticoagulatory activity of apc, which is most pronounced in the case of the combination of the two defects. the combination of ps deficiency (with an assumed prevalence similar to that of pc deficiency) with heterozygous or homozygous apc resistance can be expected with a probability of 1:~5000 or 1:~500,000, respectively. it is well known that ps levels decrease towards the low normal or even subnormal range during pregnancy. moreover, there is increasing evidence that the sensitivity of plasma to the anticoagulatory effect of apc decreases during pregnancy, resulting in an acquired apc resistance. these pregnancy-associated effects are obviously much more relevant in case of preexisting ps deficiency or apc resistance and should contribute to the elevated thrombotic risk during pregnancy in a subject with either of the two defects, and even more so for a woman who suffers from both defects.
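the expected genotype frequencies quoted above follow from hardy-weinberg proportions; a quick python check (the 0.5% protein s deficiency prevalence is an assumption chosen to reproduce the stated odds, not a figure from the abstract):

```python
q = 0.02                     # factor v leiden allele frequency (as stated)
hom = q * q                  # expected homozygote frequency: ~4 in 10,000
het = 2 * q * (1 - q)        # expected heterozygote frequency: ~3.9%

ps_def = 0.005               # assumed prevalence of protein s deficiency
combined_het = ps_def * het  # ps deficiency + heterozygous apc resistance
combined_hom = ps_def * hom  # ps deficiency + homozygous apc resistance

print(f"homozygotes per 10,000: {hom * 10000:.1f}")
print(f"combined defect (het): 1 in {1 / combined_het:,.0f}")
print(f"combined defect (hom): 1 in {1 / combined_hom:,.0f}")
```

with these inputs the script reproduces the orders of magnitude in the abstract: roughly 1 in 5000 for the heterozygous combination and 1 in 500,000 for the homozygous one.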
we describe a young woman with a combination of homozygous apc resistance (apc ratio 1.10, normal range 2.02-3.73), pronounced ps deficiency (free ps 0.11 u/l, total ps 0.30 u/l; normal ranges 0.55-1.20 u/l and 0.65-1.40 u/l, respectively) and, moreover, impaired fibrinolysis (no change of euglobulin lysis time after 10 min venous occlusion), who developed deep vein thrombosis after cesarean section in her first pregnancy. examination of her family showed heterozygous apc resistance in her asymptomatic father (apc ratio 1.99), a combination of heterozygous apc resistance (apc ratio 1.60) and ps deficiency (free ps 0.30 u/l, total ps 0.58 u/l) in her asymptomatic mother, and no defect in her sister. considering the fact that the mother was still thrombosis-free at the age of 49, one may assume that the thrombosis risk in the proposita was mainly influenced by the homozygosity for apc resistance. s. ehrenforth, m. adam, b. zwinge, i. scharrer, university hospital, dept. of angiology, frankfurt a.m., germany. introduction: apc resistance has been shown to be the most commonly inherited defect which constitutes a risk factor for venous thrombosis (vt). however, most of the present epidemiological studies concerning apc-r prevalence in thrombophilia were derived from results of tests conducted on plasmas collected under various conditions. this may influence the great differences reported in the prevalence of apc-r among these patients. for example, it has been shown that freezing of plasma specimens prior to analysis of apc-r causes a significant decrease in the assay results. the aim of our study was to evaluate the influence of centrifugation conditions on the results obtained with the chromogenic apc-r assay. patients and methods: blood was collected from 31 patients (19 women, 12 men; fv genotype: 13 r/r 506, 14 r/q 506, 4 q/q 506) through venipuncture into trisodium citrate (1:9).
platelet-rich and platelet-poor plasma was obtained by immediate centrifugation at 4°c for 10, 20, 30, 40, 50 and 60 min at 1000, 2000, 3000, 4000, 5000 and 6000 rpm. in addition, pnp obtained from 14 healthy individuals (7 male, 7 female, without hormonal treatment) was prepared equally. apc response was determined within one hour after centrifugation using the coatest apc resistance kit from chromogenix. results: for both pnp and single plasma samples, we observed continuously higher apc-ratios after increasing centrifugation intensity. for example, an increase from 1000 to 6000 rpm resulted in an increased apc-ratio from 2.7 to 3.26 (20 min) and from 2.88 to 3.326 (60 min), respectively. even though less distinctive, similar results were observed concerning the duration of centrifugation: when the duration was increased from 10 to 60 minutes we observed a continuous increase in apc-ratio, for example from 2.74 to 3.12 when using 2000 rpm and from 3.09 to 3.36 when using 4000 rpm. the decrease of the ratio after low centrifugation is the consequence of the shortening of the aptt in the presence of apc, without a significant influence on the basal aptt without apc. conclusion: our results demonstrate that centrifugation conditions are important to consider for the interpretation of apc-r results. supporting our observations, recent studies from sidelmann et al. have shown that an increase in plasma platelet concentration, i.e. low centrifugation, causes a significant decrease in the apc response. however, so far the mechanism responsible for the significant effect of both on apc-r assay results is unknown. although technically simple, the biochemical complexity inherent in the chromogenic apc-r assay necessitates a standardized plasma handling procedure to secure a reproducible determination of apc-r. comparison of different assays for determination of apc resistance with genotyping of factor v (arg → glu). g. siegert*, s. gehrisch*, e. runge**, r. naumann**, r.
knöfler*** (*institute of clinical chemistry, **clinic of internal medicine, ***children's hospital). resistance to apc, diagnosed on the basis of a prolonged clotting time in the aptt assay, is now considered a major cause of thrombophilia. in the majority of cases apc resistance is associated with a point mutation in the factor v molecule (arg506glu), but the two are not synonymous. a prolonged baseline aptt is a limitation of the assay: the determination is then not possible in risk groups of patients (factor viii deficiency, lupus anticoagulant) and in patients under anticoagulant therapy. in these cases a dilution of the plasma in factor v deficient plasma is recommended. the immunochrom assay is based on the inactivation of factor viiia by apc. the aim of the study was to compare different functional apc response assays with the result of the dna analysis. apc response was tested in 100 healthy probands, 81 thrombosis patients and 16 family members using the immunochrom assay, the coatest (chromogenix) and the coatest with 1+4 dilution of the plasma in native factor v deficient plasma (immuno). the dna analysis was performed as described by bertina. one patient was homozygous for the factor v mutation; a heterozygous result was obtained in 4 members of the control group, in 28 patients and in 6 family members. in all cases with the factor v mutation the ratio of the immunochrom assay was lower than the laboratory's own cut-off value, independent of anticoagulant therapy. pathological ratios in this assay were also obtained in one member of a family with high thrombotic incidence (dna arg/arg) and in 17 patients under anticoagulant therapy (two of these patients are identical twins). in the coatest a pathological apc response was diagnosed in all cases with the factor v mutation without anticoagulant therapy and in 40% of heterozygous patients under anticoagulant therapy.
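the ratios these functional assays report are clotting-time ratios: the aptt measured with apc added divided by the aptt without apc. a minimal sketch; the clotting times and the interpretive cutoff are illustrative, not values from the study:

```python
def apc_ratio(aptt_with_apc, aptt_without_apc):
    """aptt-based apc sensitivity ratio: apc prolongs the aptt of a
    normal responder, so ratios near 1 point to apc resistance while
    clearly higher ratios indicate a normal apc response."""
    return aptt_with_apc / aptt_without_apc

# illustrative clotting times in seconds
normal = apc_ratio(95.0, 36.0)      # clear apc response
resistant = apc_ratio(48.0, 38.0)   # blunted apc response
print(round(normal, 2), round(resistant, 2))
```

the exact cutoff separating normal from pathological ratios is laboratory- and reagent-specific, which is precisely why the abstracts above compare each ratio against the laboratory's own reference value.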
results of the test using the dilution in factor v deficient plasma showed good agreement with the results of the dna analysis, but the method is obviously only sensitive for the factor v mutation. the reason for pathological ratios in the immunochrom assay in wild-type patients is unclear. the majority of these patients are treated with anticoagulants, so a comparison with the coatest is not possible. interestingly, in one patient under heparin with a low ratio in the immunochrom assay, after reduction of heparin the ratio of the coatest was also low. it seems necessary to investigate at what interval from the thrombotic event apc resistance should be tested. the following causes of pathological ratios in functional apc assays must be discussed: high levels of factor viii and/or von willebrand antigen (acute phase reaction), other mutations in factors v and viii. the factor v dilution assay should be replaced by the dna analysis. due to their differing compositions, the "sensitivities" of various aptt reagents differ not only with respect to factor depletions, heparin and fibrin-fibrinogen degradation products, but also with regard to pathological inhibitors. for lupus anticoagulants this means that "lupus-sensitive" reagents can be delineated from "lupus-insensitive" reagents. with a "lupus-insensitive" aptt reagent there is no or only slight prolongation of the aptt in the plasma under investigation, whereas with a "lupus-sensitive" reagent marked prolongation is observed. for the meaningful use of aptt reagents it is necessary to know the extent to which they are influenced by lupus anticoagulants.
the following 14 aptt reagents were tested: • ptt-reagenz, ptt-a, ptt-a liquid, ptt-la, ptt-lt (boehringer/stago) • pathromtin, pathromtin sl, neothromtin (behring) • platelin s, platelin excel ls (organon teknika) • actin fs, actin fsl (dade) • aptt silica lyo, aptt silica liquid (instrumentation laboratory). the material for investigation consisted of 20 plasmas from patients with lupus anticoagulants. a confirmatory test (lupus anticoagulant test, immuno) was positive for all of the patients. measurements were made using the sta coagulation analyser (boehringer/stago). it can be seen from the results that in some instances very different prolongations were obtained in identical plasmas by using differing aptt reagents. low susceptibility to lupus anticoagulants was shown by actin fs (dade), ptt-reagenz (boehringer) and neothromtin (behring). high susceptibility was shown by platelin excel ls (organon teknika), ptt-la and ptt-lt (boehringer/stago). lupus anticoagulant screening with the aptt reaction is promising when two aptt reagents differing as greatly as possible in their lupus anticoagulant sensitivity are used. resistance to the anticoagulant response of activated protein c (apc) is a major cause of venous thrombosis. apc resistance is due to a single mutation in the factor v gene, which predicts replacement of arg 506 in the apc cleavage site with gln (factor v leiden mutation). in contrast to other known genetic risk factors for thrombosis, this factor v 1691 g→a mutation has a high prevalence in the common population of western europe (average 4-5%). we have determined the prevalence of the factor v 1691 g→a mutation in a population of 700 probands from the north-eastern part of germany. the mutation was found in 7% (heterozygosity was found in 48 subjects; 1 person was homozygous). the results are compared with our studies of populations from argentina and poland.
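the 7% figure follows directly from the reported counts; a quick check, which also gives the allele frequency implied by 48 heterozygotes and 1 homozygote among 700 probands:

```python
het, hom, n = 48, 1, 700                 # counts reported for the german cohort
carriers = het + hom
prevalence = carriers / n                # fraction of probands carrying the mutation
allele_freq = (het + 2 * hom) / (2 * n)  # mutant alleles over all alleles typed

print(f"carrier prevalence: {prevalence:.1%}")
print(f"allele frequency:   {allele_freq:.3f}")
```

the implied allele frequency of about 3.6% sits at the upper end of the 4-5% western european average the abstract cites for carrier prevalence, consistent with the authors calling 7% unexpectedly high.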
we analysed the factor v 1691 g→a mutation in 71 patients with thrombosis from germany and hungary. this mutation was found in about 60% of these patients. in contrast, the frequency of this mutation was strongly reduced in a group of 47 patients with thrombosis and pulmonary embolism from argentina (3 heterozygotes in 47 patients; 6%). the results of these different populations will be described and discussed. past medical history: venous thromboembolic events (te) at 18, 19 and 21 years; intermittent oral anticoagulation (oac) without te's. diagnosis of an autoimmune disorder with elevated antinuclear antibody titers and a positive lupus anticoagulant test. no other relevant illnesses; family history uneventful. two weeks prior to the referral to us: acute febrile illness with nausea, diarrhea, abdominal pain; hospitalisation, treatment with iv antibiotics and anticoagulation with fractionated heparin; development of extensive deep vein thrombosis (dvt) of the right leg; initiation of full-dose unfractionated heparin; decline of platelet count from 121 to a nadir of 21 g/l; referral to our department. on admission an extensive coagulation screen yielded the following results (n = normal, ↑ = elevated, ↓ = reduced, + = positive, − = negative): pt ↑, aptt ↑, tt n; factors ii, v, viii n; factors vii, ix, xi, xii ↓; fibrinogen ↑; at iii n; protein c, s ↓; activated protein c sensitivity ratio 1.92 (↓); fv leiden mutation pcr −; fibrinolytic system n; tat ↑; f1+2 ↑; lupus anticoagulant +; heparin-induced platelet antibodies +. no diagnosis of a specific autoimmune disorder could be made. immunosuppressive therapy with corticosteroids and anticoagulation with recombinant hirudin were initiated; no progression of the dvt occurred and normalisation of the platelet count was observed.
during follow-up under oac and low-dose corticosteroids the patient was well; the pathologic coagulation results, including lupus anticoagulant and activated protein c resistance, have returned to normal, and no further te's have been observed. in summary, we present a case of a complex coagulation disorder as part of an autoimmune process, resulting in a clinically manifest prothrombotic dysbalance including lupus anticoagulant, acquired resistance against activated protein c and heparin-induced thrombocytopenia (type ii), entering complete remission under combined immunosuppressive and anticoagulant therapy. in the last 30 years, a vast number of simplified analytical procedures have been developed for the diagnosis of haemostatic disorders. today the detection methods have evolved from the mechanical hooking method or ball coagulometry to optical systems, which additionally can utilise chromogenic substrates or immunological methods. in these systems the clotting time is derived from algorithms (e.g. threshold or maximum of the first or second integral). we studied 50 healthy subjects, aged 21 to 65 years, and 118 patients, aged 9 to 93 years, using a new aptt reagent (pathromtin sl). the results were compared with those obtained with a routinely used reagent (pathromtin). the reference range and the factor, heparin and lupus anticoagulant sensitivity were determined. analysis was performed using the behring fibrintimer a (bfa) with optomechanical clot detection, the behring coagulation timer (bct) with optical clot detection by threshold, and the dvvtest and dvvconfirm for lupus anticoagulant diagnostics. our results showed that the new pathromtin sl reagent met the demands for a higher factor and lupus anticoagulant sensitivity. it is highly suitable for monitoring heparin therapy and gave comparable results with the optical and the optomechanical analyser systems; hence the reagent can be used for both systems.
restenosis following percutaneous transluminal angioplasty (pta) continues to be a major clinical problem. neointimal hyperplasia, its major underlying cause, cannot be sufficiently avoided. various plasmatic coagulation and fibrinolytic factors have been associated with arterial restenosis. anticardiolipin antibodies (acl) have been established as risk factors for venous or arterial thrombosis. methods: in a cohort of 75 patients (53 men and 22 women, age 68±13 years) undergoing pta of a peripheral artery, we prospectively evaluated whether acl could influence the 6-month restenosis rate. patients were clinically examined before and 3 and 6 months after pta. noninvasive grading of arterial stenosis was done by duplex scanning of jet peak velocities. restenosis was arbitrarily defined as more than 50% occlusion of the lumen at the site of dilatation 6 months after successful intervention. laboratory investigation at the same time included acl and other known atherosclerosis risk markers, such as fibrinogen (fbg), von willebrand factor (vwf), homocysteine (hcy) and c-reactive protein (crp). thrombin generation markers, such as thrombin-antithrombin iii complexes and prothrombin fragments 1+2, as well as thrombomodulin (tm) as an endothelial activation marker, were also measured. results: 31/75 (41.3%) patients were considered to have developed restenosis after 6 months. 9/75 (12%) patients were found to have positive igg- (19-35 gpl) and/or igm-acl (14-103 mpl) at all three measurements. 1/75 was negative before but seroconverted (igm) 3 months after pta. 2/10 (20%) acl-positive and 29/65 (44.6%) acl-negative patients developed restenosis at 6 months (chi-square p-value = 0.14). all above-mentioned coagulation parameters did not differ between acl-positive and -negative patients, measured before or 6 months after pta; some of them are shown below (values before pta): fbg (…). basilar artery stenosis is a rare event in young children.
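the p-value of 0.14 reported for the restenosis comparison can be reproduced from the 2×2 table (2/10 acl-positive vs 29/65 acl-negative) with an uncorrected pearson chi-square; a sketch:

```python
def chi2_2x2(a, b, c, d):
    """pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# rows: acl-positive, acl-negative; columns: restenosis, no restenosis
stat = chi2_2x2(2, 8, 29, 36)
print(round(stat, 2))
```

the statistic comes out at about 2.17, which with one degree of freedom corresponds to p ≈ 0.14, matching the abstract.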
risk factors are head or neck trauma with consecutive dissection of the vertebral artery, cardiac diseases or hypercoagulability. elevated lipoprotein(a) (lp(a)) serum levels in adults can mediate atherosclerosis. in addition, lp(a) might interfere with fibrinolysis. here we report on a 3-year-old boy who presented with acute brain stem symptoms. history revealed neither trauma nor infectious disease. conventional and mr angiography showed stenosis of the basilar artery without ischemic lesions. laboratory findings were normal in routine blood and csf tests. global coagulation parameters as well as procoagulant and anticoagulant factors were normal. cardiac and autoimmune disease could be ruled out. lp(a) serum levels were significantly elevated at 104 mg/dl (normal range <30 mg/dl). analysis of other family members revealed a hereditary hyperlipoproteinemia(a), which might explain the family history of an increased incidence of myocardial infarction and cve in elderly family members. clinically the patient recovered completely from the brain stem symptoms after heparinization and subsequent oral anticoagulation with phenprocoumon. however, radiological signs of basilar artery stenosis were progressive. in a recently developed specific test, an elevated anti-phosphatidylserine antibody titer was detected one year after primary diagnosis. in conclusion, this is the first report of a child with stenosis of the basilar artery and elevated levels of lp(a). it is unclear whether apa contributed to the onset of basilar artery stenosis or developed secondarily due to endothelial defects after thrombosis and anticoagulation. apa, however, might increase the risk of further thrombotic events in this patient. in 110 patients with thrombotic events or with systemic lupus erythematosus, anticardiolipin antibodies (aca) and lupus anticoagulant (la) were measured. for aca detection we use the assays from elias for igg and igm antibodies.
as sensitive methods for detecting la in our laboratory we use the test kits from diagnostica stago (staclot la with hexagonal array of phospholipids, ptt-la, a very sensitive ptt method, and staclot pnp, a platelet neutralization procedure) and the ptt from organon teknika (platelin excel ls with two incubation times, 1 and 10 minutes). the results of these tests were compared with three new ones on the german market: specktin apct (activated plasma clotting time), specktin aptt (aptt with purified soy extract) and specktin la (phospholipid preparation in concentrations between 10 and 500 µg/ml); all wak chemie. traditional aptt reagents were developed for the sensitive detection of factors viii and ix as a cause of hemorrhage. high sensitivity to lupus anticoagulants, which also prolong the aptt, was not required for this purpose. with increasing recognition of the importance of antiphospholipid antibodies as a risk factor for thromboembolism, more sensitive reagents were designed, which now reliably detect this condition. using such reagents as a screening test in a general hospital makes it necessary to distinguish both conditions quickly. we here report an algorithm by which we use an inhibitor (lupus anticoagulant) sensitive reagent (sta aptt, boehringer) and an inhibitor-insensitive reagent (actin fs, dade) to distinguish anticoagulants and factor deficiencies as a cause of prolonged aptt. citrate plasma from 33 patients with various diseases showed an unexpectedly abnormal inhibitor-sensitive aptt (>40 s). 13 plasmas with factor deficiencies remained abnormal with the insensitive aptt reagent. a regular correction of their defect occurred on mixing with normal plasma. by measurement of single coagulation factors, five patients with contact factor xii deficiency were found. this condition is associated with thrombosis and very rarely with bleeding. three patients with factor xi deficiency and two patients with factor ix deficiency were also identified.
antiplatelet agents of any kind permit secondary prevention of myocardial ischemic lesions. there is no general consensus regarding secondary prevention of cerebral ischemic lesions. aspirin remains the most common substance; ticlopidine also brings about prevention, but with important secondary effects. the european stroke prevention study 1 demonstrated that the combination of antiplatelet agents, in particular aspirin/dipyridamole (persantin), is also very active. to collect more information, esps 2 was organized, and 6602 patients receiving either placebo, 50 mg aspirin, a 400 mg sustained-release form of dipyridamole (persantin®), or the combination aspirin/dipyridamole were recruited. it ended march 31st 1995 with the following conclusions: 1. aspirin, 50 mg a day, brings about a significant secondary reduction of stroke (18.1%) after a two-year follow-up; notwithstanding the low dose of aspirin, haemorrhages remain important. 2. dipyridamole, at 400 mg a day, brings about a significant reduction of stroke (16.3%), similar to that of aspirin; one could thus substitute 400 mg dipyridamole for 50 mg aspirin. 3. the combination of 50 mg aspirin and 400 mg dipyridamole brings about a significantly greater reduction of stroke (37.0%). esps 2 revealed that a low dosage of aspirin is active, that dipyridamole alone is also active, but that the combination of both gives far better results. the study of the primary end-points, the study of the survival curves, the factorial statistical analysis and the pairwise comparison analysis led to these conclusions. the conclusions drawn from esps 2 underline that the combination aspirin/dipyridamole is a privileged choice for cerebral ischemia. the state of activation of circulating platelets in acute cerebral ischemia is controversial. activation of platelets at the single-cell level can be assessed by determining the shape change or the expression of antigens such as p-selectin (cd62).
shape change is an early and rapidly reversible event in platelet activation, whereas p-selectin is irreversibly expressed on the platelet surface upon stimulation. methods: we investigated 20 untreated patients within one day after cerebral ischemia, 20 patients months after stroke treated with warfarin, and 21 age- and sex-matched control subjects without vascular risk factors. venous blood was collected into a fixation solution blocking the metabolic processes in platelets within milliseconds. we determined the fraction of resting discoid platelets by phase-contrast microscopy. the expression of p-selectin was measured by flow cytometry. results: the fraction of platelets expressing p-selectin was higher in patients with acute cerebral ischemia (7.8±4.7%) than in control subjects (1.9±1.1%; p<0.001, u-test). patients with stroke (n=15, 7.8±4.5%) and patients with transient ischemic attack (tia; n=5, 7.6±6.1%) had similar values. patients months after stroke still had higher values (4.3±9.3%, p<0.05) than control subjects. the rate of discoid platelets was not different between patients with acute ischemia (n=17, 85.6±8.8%), patients months after stroke (n=19, 85.7±5.1%) and control subjects (n=21, 86.7±5.8%). platelet count was not significantly different between groups. conclusion: the elevated proportion of platelets expressing p-selectin indicates strong platelet activation in acute cerebral ischemia and in a majority of patients months after stroke. assessment of p-selectin revealed a higher sensitivity for platelet activation after stroke or tia than analysing the reversible shape change. further studies have to clarify if monitoring of platelet activation by flow cytometry is helpful as a prognostic tool and to evaluate therapeutic strategies after stroke. vascular smooth muscle cell (smc) proliferation and migration into the neointima are the hallmarks of atherogenesis.
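the u-test used for the p-selectin comparison above is a rank-based test that makes no normality assumption; its statistic can be computed by direct pairwise comparison. a minimal sketch with invented per-subject fractions, not the study's raw data:

```python
def mann_whitney_u(x, y):
    """mann-whitney u statistic for group x versus group y by direct
    pairwise comparison; ties contribute 0.5. fine for small samples."""
    return sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in x for b in y)

# invented p-selectin-positive fractions (%) per subject
patients = [7.1, 4.9, 12.3, 6.0, 9.4]
controls = [1.2, 2.0, 1.8, 3.1, 0.9]
print(mann_whitney_u(patients, controls))  # → 25.0 (complete separation)
```

with five subjects per group the maximum possible u is 25; complete separation like this is what drives the very small p-values reported for the acute-ischemia comparison.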
the complexity of these processes and the concerted action and interaction of the molecules involved are yet to be fully elucidated. one crucial molecule seems to be the urokinase-type plasminogen activator receptor (upar), recently also assigned as the cd87 antigen. upar serves a dual function: (1) it directs upa proteolytic activity to a special location on the cell surface and (2) it induces cellular signals leading to various phenotypic changes. we have investigated the signal-transducing capacity of upar in human smcs and provide here a molecular explanation for upar-related cellular events. activation of these cells with upa (even with inactivated catalytic center) results in the induction of tyrosine phosphorylation, suggesting modulation of upar-associated protein tyrosine kinases (ptks) upon ligand binding. we obtained patterns of tyrosine-phosphorylated proteins with molecular masses of ~55-65 and 35-40 kd. using antibodies against different types of ptks as well as immunoprecipitation and immunoblotting techniques, the ptks involved in the upar-signalling complex were identified to be members of the src-ptk family. the colocalization of upar and ptks at the cell surface of smcs was further confirmed by confocal microscopy studies. we conclude that the upar-ptk complex is most likely involved in the signal transduction pathway that provides the coordinated action of extracellular proteolysis, adhesion, and cell activation which is required for cell migration. this mechanism may be crucial for the progression of atherosclerotic plaques.

activation markers of haemostasis have been found elevated in relation to diabetic vascular lesions. simultaneous pancreas and kidney transplantation (pkt) in type i diabetes has been shown to improve diabetic complications and long term survival. 
we measured haemostatic vascular risk factors and activation markers in plasma of 18 patients after successful pkt, 17 patients after pkt and rejection of the pancreas graft, and 5 patients after pkt and rejection of the renal graft. blood samples were taken during routine ambulatory visits; patients were free of any ongoing acute disorder or transplant rejection and under continuous immunosuppressive medication. despite individually adjusted insulin therapy, hba1 plasma levels increased after pancreas rejection (5.37 vs 7.12, p<0.001). platelet counts and plasma levels of fibrinogen, f1+2 fragment, tat complex, app complex and fibrin monomer were found significantly elevated as compared to diabetic controls, but not significantly different with respect to complete or partially successful pkt. one major reason for the increased activation state of haemostasis may be the cyclosporin treatment given to all patients. t-pa and pai-1 plasma levels were within the normal range and significantly correlated with plasma triglycerides (r=0.49; p<0.005). d-dimer plasma levels were significantly lowered after pancreas rejection (772(375) vs 324(232) ng/ml; mean(sem), p<0.005), which might reflect impaired fibrin degradation related to increased glycosylation of fibrinolytic factors. in conclusion, despite the marked improvement of glucose and lipid metabolism, plasma markers of activation of coagulation and fibrinolysis are not decreased to normal after simultaneous pancreas and kidney transplantation.

according to the investigations of fowler et al. and pepe et al., the probability of an ards occurring with one risk factor is 5-8%, and in the presence of several risk factors, 25%. goris et al. and johnson et al. determined the level of severity with the aid of a fixed scale: the injury severity score. all these investigations are, however, not to be interpreted as typical following coronary surgery. 
these investigations demonstrated that the kallikrein and factor xii systems are of great importance as intraoperative risk factors. here the factor xii system plays a major role, with direct or indirect activation of the kallikrein-kinin system with the splitting products alpha-factor xiia and beta-factor xiia respectively. all ards scores take the pmn-elastase into account. if the pmn-elastase values (1000 µg/l) are constantly high postoperatively, then lung complications are to be expected. patients developing an ards displayed significantly lower alpha2-macroglobulin values. patients who developed a highly significantly raised kallikrein-like activity (>60 u/l) after the beginning of bypass and showed constantly high values during ecc are difficult to keep under control due to the blood pressure behaviour. the platelet pai also shows a significant rise and intraoperatively runs analogous to platelet factor 4, only antiparallel, since it attacks the endothelium. we were able to show that pai-1 is suitable as an indirect marker for a possibly developing restenosis. 85% of the patients investigated with lowered pai-1 values in the postoperative phase did not develop a restenosis. however, among patients showing significantly rising pai-1 values from the 1st to 3rd postoperative day, 50% of all cases had a restenosis. a further risk factor in this respect is significantly raised fibrinogen levels, which lay over 800% at the end of surgery. if these fibrinogen values do not fall from the 1st postoperative day onwards, a raised risk of thrombosis must be reckoned with in the absence of therapeutic intervention. 
the following parameters represented haemostaseological risk parameters with significant behaviour within the framework of this study: 1) as regards the blood pressure behaviour, the kallikrein-like activity (>60 u/l); 2) with regard to the lung complications, alpha2-macroglobulin and pmn-elastase (>900 µg/l); 3) and finally, as a possible marker for a developing restenosis, pai-1 and fibrinogen (>800%).

resulting from numerous clinical studies, homocysteinemia is found to be an almost independent risk factor of atherosclerosis, including thrombotic complications, as well as of venous thromboembolism. experimental investigations on the underlying mechanisms suggest endothelial cell damage accompanied by the development of an atherogenic and thrombogenic potential, increased platelet reactivity, oxidative modification of ldl, and enhanced affinity of lp(a) for fibrin. to our knowledge no results are published on the influence of homocysteine on leukocytes, although these cells are deeply involved in pathological events within the vasculature. therefore, as a first approach, different functional parameters of human polymorphonuclear leukocytes (pmnl) were followed under incubation with 60, 300, and 600 µm (final concentration) dl-homocysteine (hc) in isolated fractions or whole blood, respectively: 1) spontaneous mobility of pmnl, measured as migration distance into micropore filters in a modified boyden chamber, is found to be significantly enhanced by the two smaller hc concentrations. 2) chemotaxis induced by 0.1 µm formylmethionylleucylphenylalanine (fmlp) shows no significant differences. 3) monitoring of chemiluminescence signals (autolumat lb953, berthold) is complicated as hc influences the luminol-mediated indicator reaction. adjusting appropriate conditions, the following results are obtained: spontaneous chemiluminescence and that induced by zymosan, fmlp, and the ca2+-ionophore a23187 are enhanced by the two higher hc concentrations. 
there are, however, differences between the blood donors, as a minority does not respond to hc in repeated measurements. with phorbol 12-myristate 13-acetate the signal is diminished by hc in all cases and at all concentrations. 4) phagocytosis induced by zymosan (microscopic evaluation) as well as by opsonized e. coli (flow cytometric evaluation) is significantly increased by the two higher hc concentrations. conclusion: the activation of human pmnl is enhanced, with respect to the majority of investigated stimuli, by hc in concentrations reached under pathophysiological conditions.

the effect of physical exercise on hemostatic parameters was studied in 12 patients (male, mean age: 55 [range 36-68] yrs) with angiographically documented coronary artery disease (cad) and in 11 controls (male, 53 [43-62] yrs), both participating in a 1 hour group exercise session for cardiac rehabilitation. in each group relevant arteriosclerotic lesions in carotid, abdominal and leg arteries were excluded by doppler ultrasound examinations. patients were all under beta-blocking agents and aspirin. plasma levels of prothrombin fragment 1+2 (ptf1+2) and fibrinopeptide a (fpa), reflecting formation of thrombin and fibrin, respectively, were measured at rest and immediately after 1 hour of exercise consisting of jogging, light gymnastics and ball games. training intensity in both groups was comparable, as indicated by the mean heart rate during exercise corresponding in patients to 80±6% (mean±sd) and in controls to 79±5% of the maximal heart rate previously determined on a bicycle ergometer. baseline values for ptf1+2 were significantly lower in patients (0.67±0.15 nmol/l; mean±sd) than in controls (1.01±0.21; p<0.001). after exercise we found an increase of ptf1+2 in controls to 1.14±0.24 nmol/l (p<0.001), while in patients ptf1+2 remained unchanged (0.67±0.14 after). 
accordingly, the exercise induced rise of fpa was more pronounced in controls (from 0.62±0.24 to 1.60±0.62 ng/ml; p<0.001) than in patients (from 0.63±0.26 to 1.20±0.46 ng/ml; p<0.001). we conclude that in terms of thrombin and fibrin generation exercise training does not exert detrimental effects on hemostasis in patients with cad. lower baseline values and lack of exercise induced increases of ptf1+2 in patients with cad might be attributed to medication with aspirin and/or beta-blocking agents.

periodontitis marginalis (pm) is an inflammatory oral disease that is caused by gram-negative bacteria and that has a high incidence in the second half of life. clinical signs of pm are gingival bleeding, periodontal pockets, alveolar bone destruction and loss of teeth. recent epidemiological studies have provided some evidence for an association between pm and atherosclerosis. in the present paper we summarise some of the results that we have obtained in studies on patients with pm as well as on patients with hypercholesterolaemia (hc) and atherosclerosis. pm was frequently found to be associated with hc (90% in rapidly progressive pm) and increased reactivity of peripheral blood neutrophils and platelets (e.g. generation of oxygen radicals and paf-induced aggregation). patients with hc and atherosclerosis had a higher frequency of severe pm when compared with data on community periodontal health. the severity of pm was higher in patients with plasma cholesterol levels ≥6.5 mm when compared to those with plasma cholesterol <6.5 mm. in patients with coronary atherosclerosis the severity of pm was significantly correlated with plasma cholesterol level, systolic blood pressure and the number of diseased coronary arteries. these results provide further evidence for an association between pm, hc and atherosclerosis. 
it can be speculated that hc is not only a risk factor for atherosclerosis but also a risk factor for pm, acting by increasing the reactivity of neutrophils and platelets. on the other hand, pm as a mild chronic inflammation could promote the development of atherosclerosis due to effects of endotoxins on the vessel wall, blood cells and haemostatic factors. it has also been speculated that phagocyting leukocytes in the inflamed periodontal tissues could contribute to oxidative modification of ldl. so far, there is no evidence that atherosclerosis may contribute to the pathogenesis of pm.

protein z (pz) is a vitamin k dependent plasma protein synthesized in the liver. it promotes the association of thrombin with phospholipid surfaces. recently it has been shown that a deficiency of pz may lead to a bleeding tendency. in patients undergoing chronic hemodialysis, disorders of hemostasis are common. to examine whether plasma levels of pz are altered in patients with end stage renal disease, we determined pz in plasma of patients at the beginning of hemodialysis treatment. the results were compared with a group of 50 healthy controls. the difference in pz levels between patients with end stage renal disease and the control group was not significant: the control group was 2900±487 ng/ml and the patient group 3133±1394 ng/ml. in one patient with marked bleeding tendency after hemodialysis, pz was 1387 ng/ml. we conclude that in patients with bleeding disorders pz determination may be helpful.

the normal range of actin fs was reinvestigated in a multicentric approach. a protocol was developed which requests from each center to assess the aptt with one common and one variable lot of actin fs in 50 samples of suspected normals. inclusion and exclusion criteria based upon the results of clotting assays, liver enzymes and clinical data were defined. results: a total of 1056 results was obtained. the majority of centers in this study used the electra 1000 or 1600 c (mla). 
results for the electra group (n=6) showed a precision for the common lot of actin fs with a common lot of a three level control from 1.5% (level 1) to 1.1% (level 3), with an excellent accuracy between the 6 centers. clotting times with the variable lots of actin fs were very similar. the results from normals, however, showed a somewhat higher dispersion using the common lot of actin fs. 4 of 6 centers had almost identical mean values (range 26.6 to 27.2 sec), whereas one reported shorter and one longer clotting times (25.2 and 29.7 sec). results with the variable lots gave almost identical results as the common one. a total of 556 results of all lots gave a normal range of 23.5 to 31.7 sec (5-95% percentiles) on electra. mean values on acl (n=200) were 26.3 sec, on bct 27.65 sec, on amga coagulometric 29.4 sec, on amga turbidimetric 26.6 sec (n=100 each). all centers used sarstedt monovettes with 3.2% sodium citrate. discussion: the results of this study demonstrate the lot to lot consistency of all lots of reagents included in this study, since the common and variable lots showed very consistent results. interestingly, in the large group of electra users the normal ranges showed some differences, though the controls in all centers were almost identical. this confirms the recommendation that a normal range as stated by the manufacturer should be used for orientation only and that each laboratory should assess its own range.

direct acting anticoagulant agents such as hirudin (r-h), argatroban (arg), efegatran (efe) and peg-hirudin (ph) represent specific and potent inhibitors of thrombin. blood samples collected in r-h (10 µg/ml), arg (50 µg/ml), efe (25 µg/ml) and ph (10 µg/ml) do not clot for extended periods (>24 hours), thus allowing for the collection of plasma for analytical purposes. unlike heparin, these agents do not require any plasma cofactor for their anticoagulant effect. 
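the percentile-based normal ranges described in the actin fs study above (a 5th-95th percentile interval, or a 97.5th-percentile upper limit as used elsewhere in these abstracts) can be computed directly. a minimal sketch on synthetic clotting times; the distribution parameters are illustrative assumptions, not study data:

```python
import numpy as np

# synthetic aptt values (seconds) standing in for the 556 normal results;
# mean and spread are hypothetical, chosen only for illustration
rng = np.random.default_rng(42)
aptt_normals = rng.normal(loc=27.5, scale=2.0, size=556)

# 5th-95th percentile reference interval
low, high = np.percentile(aptt_normals, [5, 95])

# 97.5th percentile upper limit
upper_limit = np.percentile(aptt_normals, 97.5)

print(f"reference interval: {low:.1f}-{high:.1f} s, upper limit: {upper_limit:.1f} s")
```

this is also why each laboratory should derive its own range: the interval depends entirely on the local sample of normals, not on the reagent alone.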
in contrast to citrate, oxalate, edta and heparin, these antithrombin agents do not alter the electrolyte or protein composition of blood. thus, blood collected in these agents may provide a physiologically intact (native) sample for clinical laboratory profiling. we have used all of these agents to prepare whole blood and plasma samples for various clinical laboratory measurements. plasma samples collected with these agents are obviously not suitable for global clotting tests (pt, aptt, thrombin time, fibrinogen); however, these agents are optimal anticoagulants for the collection of samples for the molecular markers of hemostatic activation, such as fibrinogen/fibrin related degradation products, prothrombin fragment, protease cleavage products, tfpi, tnf and other protein mediators. electrolytes, blood gases, enzymes and protein profiling can also be satisfactorily measured on blood samples collected with these agents. antithrombin anticoagulated blood used for hematologic analysis showed blood count and differential results equivalent to those obtained with edta blood. unlike other anticoagulants, these agents do not interfere in the cell staining process. washed blood cells can also be prepared using antithrombin agent supplemented buffers for morphologic and functional studies. thrombin inhibitors such as hirudin have also been used for flow cytometry and image analysis of blood cells and tissue exudates. our observations suggest that these anticoagulants can be used as suitable anticoagulants for clinical laboratory blood sampling. these agents can also be used as a flush anticoagulant for most automated instruments, as they exhibit superior anticoagulant properties to heparin. furthermore, the hematologic parameters obtained in antithrombin anticoagulated blood may be physiologically more relevant than those determined on blood collected in edta, citrate or heparin. 
antithrombin iii determination is one of the most popular methods for in vitro diagnosis of a number of different disorders. human thrombin, affinity purified on heparin-modified silica-based sorbents, was used for determination of the antithrombin iii level by the abildgaard method in the blood of 40 patients with pregnancy pathology, acute leukemia, thrombocytopenia and anemia. it was found that the antithrombin level is decreased to 50-60% of normal values in case of pregnancy pathology, to <50% in case of acute leukemia and thrombocytopenia, and to 80% in case of anemia. the obtained results show strong relationships between the named disorders and the patient's antithrombin iii level. therefore antithrombin iii estimation may be used as a simple and quick method for preliminary diagnosis of the above named disorders.

bm coasys 110 is a complete automated analyzer system for coagulation tests. it is well suited for routine coagulation testing in random access in a medium throughput laboratory environment. analytical performance and practicability were tested by a common evaluation program in five hospital laboratories. within run and day to day cv's were below 5% in different samples (controls, patients). comparison in different therapeutic ranges confirms the declaration of the isi-value for calculating inr-values. normal values for coagulation tests with results in primary units were checked in 390 samples and confirmed. due to the optical measuring principle of the bm coasys 110 there was a slight tendency to shorter times with the thrombin reagent. in conclusion, the performance of coagulation tests with the bm coasys 110 was rated as good as or better than existing systems in the laboratories, with advantages due to short familiarization time and easy handling. flexibility and stability of the system permit optimal integration into the workflow of the routine laboratory. 
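the isi value mentioned for the bm coasys 110 above enters the standard who formula for the international normalized ratio, inr = (patient pt / mean normal pt)^isi. a minimal sketch with illustrative numbers (not values from the evaluation):

```python
def inr(patient_pt, mean_normal_pt, isi):
    """international normalized ratio from prothrombin times (seconds)
    and the thromboplastin reagent's international sensitivity index (isi)."""
    return (patient_pt / mean_normal_pt) ** isi

# a patient pt of 24 s against a 12 s mean normal pt with a reagent isi of 1.1
print(round(inr(24.0, 12.0, 1.1), 2))  # → 2.14
```

because the pt ratio is raised to the reagent-specific isi, an incorrectly declared isi distorts every reported inr, which is why the evaluation checked the declaration across therapeutic ranges.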
purified thrombin and antithrombin iii (at iii) are of great interest in clinical diagnostic and treatment practice, so their isolation methods are very important. molecules of these proteins have fragments responsible for interaction with the native glycosaminoglycan, heparin. this interaction is used for isolation and purification of thrombin and at iii from native materials, blood plasma or its fractionation products. we have carried out a comparative study of the purification of these proteins on heparin sorbents, which contain heparin immobilized on silica gel modified by glycidoxypropyl, gamma-aminopropyl or tosyl chloride groups, or on cellulose: heparin-epoxy-silica (1), heparin-gammapropyl-silica (2), heparin-tosyl-silica (3), and heparin-cellulose (4). we found that thrombin binds to all sorbents, while at iii does not bind to sorbents 2 and 3. there wasn't any difference between silica and cellulose sorbents in thrombin desorption by 1 m nacl. at iii binds more strongly to heparin-cellulose than to silica sorbents, but specific activity and degree of purity were approximately the same on both kinds of sorbents. thrombin specific activity and degree of purity were approximately twice as high on sorbents 2 and 3 in comparison with sorbents 1 and 4 (2250-3000 nih units/mg versus 1260-1350 nih units/mg). therefore, sorbents 2 and 3 can be used for isolation and purification of thrombin, and sorbents 1 and 4 can be used for isolation and purification of at iii. we used these sorbents for large scale purification of the named proteins. purified thrombin was used for production of diagnostic kits for antithrombin iii, fibrinogen, fibrin/fibrinogen degradation products and thrombin time determination.

after aerobic or anaerobic physical exercise various alterations of the hemostatic system have been detected. numerous investigations of the hemostatic system exist for running and bicycle ergometer exercise, but not for swimming. 
young volunteers (n=54; median age 25 years) were investigated. there was an aerobic exercise group (achieved heart rate 130 ± 10/min, lactate < 4 mmol/l; n=27) and an anaerobic exercise group (achieved heart rate 150 ± 10/min, lactate > 4 mmol/l; n=27). in both groups there was a significant shortening of the ptt. under anaerobic conditions hematocrit and quick significantly increased. factor viii activity rose significantly in both groups. indicating plasmatic clotting activation, there was a significant increase in the molecular markers tat and f1+2 only under anaerobic conditions (tat from 2.5 to 5.4 µg/l; f1+2 from 0.87 to 0.93 nmol/l). indicating activation of fibrinolysis, t-pa activity increased significantly in the anaerobic group (from 0.1 to 0.4 iu/ml) but not in the other group. these findings indicate that there is a balance in the hemostatic system, through activation of the clotting as well as the fibrinolytic system, in young volunteers during exercise by swimming, dependent on the degree of exercise load.

membranes as well as compact, porous disks are successfully used for fast analytical separations of biopolymers. as far as capacity, speed and performance of separation are concerned, the supports are as effective as other recently developed fast media for the separation of biopolymers. so far, technical difficulties have prevented the proper scaling-up of the processes and the use of membranes and compact disks for preparative separations. in this report, the use of a compact tube made of poly(glycidyl methacrylate) for fast preparative separations of proteins is shown as a possible solution to these problems. the units have yielded excellent results regarding performance and speed of separation as well as capacity. the application of compact tubes made of poly(glycidyl methacrylate) for the preparative isolation of the coagulation factors viii and ix from human plasma shows that this method can even be used for the separation of very sensitive biopolymers. 
in terms of yield and purity of the isolated proteins, this method was comparable to preparative column chromatography. the period of time required for separation was five times shorter than with corresponding column chromatographic methods.

our measurements showed an excellent correlation of the two systems (r=0.99). the maximum amplitudes on the roteg were on average 2.2% higher than on the hteg, corresponding to a slightly lower reverse momentum of the measuring system in comparison to the hteg.

we report first results of the evaluation of sta compact (boehringer mannheim/diagnostica stago). sta compact is designed for automated analysis of routine and special coagulation tests (chronometric, photometric [405 nm] and turbidimetric [540 nm]). in addition, it measures "derived" fibrinogen. the following tests were evaluated: prothrombin time (pt), partial thromboplastin time (aptt), fibrinogen (clauss method), thrombin time, at iii (chromogenic), hepato quick, as well as the factors ii, v, vii, x, and viii. results: within run cvs of the clotting tests were below 2% (calculated on the basis of seconds) in most cases, day to day cvs below 4% (not yet measured for factors). at iii yielded within run cvs below 3% in the decision range. measuring ranges: at iii: 20-140%; fibrinogen: 1.3-9.0 g/l (plasma dilution 1/20), after rerun with other dilutions: from 0.3 g/l (dilution 1/5) to 18 g/l (dilution 1/40). method comparisons, using sta as reference, yielded slopes close to 1.00 and negligible intercepts. throughput: with routine clotting tests about 100 tests/h, in a sample selective access mode. we conclude that sta compact allows precise measurement of routine and special coagulation tests. it is also a reliable system for photometric tests and well suited for intermediate workloads as well as stat analyses.

we evaluated ptt lt, a new liquid, silica based ptt reagent. special attention was given to the reference interval and heparin sensitivity. 
the new reagent is well suited for the measurement of intrinsic clotting factors and is reported to have high sensitivity for lupus anticoagulants (higher sensitivity than sta aptt [boehringer mannheim = bm]). it is stable for 4 days in the cooled compartment of the sta analyzer. methods: all experiments were made on sta. for comparison, we used three other ptt reagents (a lab routine, silica based aptt, as well as sta aptt and sta ptt kaolin from bm). in addition, thrombin time (3 u/ml thrombin, sta thrombin reagent) and heparin (chromogenic xa test, rotachrom heparin) were measured. results: within run imprecision (n=21) was below 0.7% cv in the normal range and in two controls (mean values: 35 s, 76 s), and 1.4% in a heparin plasma (mean: 81 s). between day imprecision (d=10) was below 3% in two controls (mean values: 34 s and 58 s). the upper limit of the reference range is 40 s (97.5th perc., median: 32 s; 90 patients with normal coagulation status [routine aptt, fib., pt], median age: 54 years); almost identical reference ranges were obtained with sta aptt and the routine ptt reagent, while sta ptt kaolin showed significantly lower values (97.5th perc.: 33 s, median 28 s). method comparison study: good agreement using plasmas from patients without heparin (y = 0.4 + 0.98 x, n=198, range of (x) from 25 to 79 s, r=0.97; x = sta aptt). the median values from 54 patients under high dose heparin were: routine ptt: 81 s, sta aptt: 75 s, ptt lt: 82 s, sta ptt kaolin: 54 s, thrombin time: 37 s and heparin: 0.5 iu/ml. in conclusion, results of the new reagent compare well to our routine ptt and to the sta aptt system reagent. it allows sensitive monitoring of high dose heparin therapy and is well suited for detecting abnormalities of the intrinsic clotting factor pathway.

thrombelastography (teg) has been a standard technique for many years. 
the interpretation of thrombelastograms has been widely based on phenomenological observations, while there is a lack of exact information concerning the coagulation mechanisms leading to the teg amplitude (ateg). the ateg is a measure of the mechanical stiffness of the clot and depends on: a) fibrin formation and adequate polymerisation of a 3-dimensional network: measurements with non-recalcified citrated blood activated with adp or epinephrine (both n=10) did not show any clot formation in the teg. this relies on the need for a mechanical coupling between the teg pin and cup over a distance of 1 mm, which is accomplished by the fibrin network. therefore, teg can only be performed under thrombin formation and thus under thrombin-activation of the platelets in the sample. factors which inhibit platelet aggregation but don't limit thrombin-activation of platelets cannot be monitored by teg. b) the attachment of the clot to the surface of the teg pin and cup: according to recent literature we suggest that the attachment of the clot in the teg relies exclusively on fibrinogen/fibrin adsorption to the surfaces of the pin and cup. interruption of this attachment can result in lower amplitudes or the so-called "stairway" phenomenon. we could show a complete interruption of the clot attachment by dipping the pin for one second in 30% albumin solution (n=10). c) the fibrinogen concentration (fg) and platelet count (pc) of the sample: in 50 volunteers we found only a poor correlation of the maximum amplitude (ma) with fg alone (r=0.40) or pc alone (r=0.50), while there was a very good nonlinear correlation to the product of fg and pc. we suggest that the fibrin network forms the main structure of the clot while the thrombocytes enhance its stiffness in a concentration-dependent manner. this effect of the platelets can be completely reversed by gpiib/iiia antagonists. 
d) adequate coagulation activation: in non-activated teg even small amounts of inhibitors can lead to a significant reduction of the ateg. conclusion: alterations in teg measurements can be judged more properly when the underlying mechanisms are understood. consideration of the limitations of the method allows a more specific interpretation of the results.

in response to a customer request we investigated the sample stability of blood samples for the aptt. the study was set up in a way that simulated the conditions of a large private laboratory in which the samples arrive several hours after blood collection. blood was drawn from 10 donors into 3.2% sodium citrate and mixed well before it was divided into several aliquots which were kept at room temperature. the aliquots were centrifuged ~0.5, 11, 23 and 29 h after venipuncture and the plasma was analyzed immediately with 3 different reagents on electra 1000. results: there was a clear difference in the change of the aptt over time with these reagents. f viii (determined with a chromogenic assay with complete and standardized activation) also changed considerably. reagent a: ellagic acid, plant phospholipid; reagent b: sulfatide/kaolin, phospholipids; reagent c: ellagic acid, plant and rabbit brain phospholipids. the increase of aptt was apparently not a function of the decrease of f viii, because the in vitro f viii sensitivity of reagent b was inferior to reagent a, though reagent b showed more prolongation of the aptt than reagent a. reagent c, however, showed only minor changes in the aptt. discussion: these data show that the sample stability of the aptt is reagent dependent and that it is not simply a function of f viii sensitivity. other factors, such as the buffer system but also the sensitivity towards factors other than f viii, seem to contribute.

a comparison of the technical principle of the roteg coagulation analyser and conventional thrombelastographic systems. an. calatzis, p. fritzsche, 
a. calatzis, +m. kling, +r. hipp, a. stemberger, institute for experimental surgery and +institute of anesthesiology, technische universität münchen.

thrombelastography (teg) was introduced by hartert in 1948 as a method for continuous registration of the coagulation process. in 1995 we presented the roteg coagulation analyser, using a newly developed technical method. in teg systems according to hartert, the sample (blood or plasma) is placed in a cup which is alternately rotated to the right and left by 4.75°. a cylindrical pin, which is suspended freely on a torsion wire, is lowered into the blood. when coagulation starts, the clot begins to transfer the rotation of the cup to the pin against the reverse momentum of the torsion wire. the angle of the pin is electromagnetically detected, transformed to the teg amplitude and continuously recorded. in the roteg the pin is attached to a short axis which is guided by a ball bearing; thus all possible movement is limited to rotation (roteg). the cup is stationary, and the pin is rotated alternately by 5° to the right and left by a spring system. when a clot is formed, it attaches to the surfaces of the pin and cup and starts preventing their relative movement against the reverse momentum of the spring. here the reduction of the rotation of the pin, which is detected optically, is transformed to the teg amplitude. as can be shown by theoretical analysis and by control measurements, the roteg provides the same measuring capabilities as conventional teg systems. the main advantage is the solid guiding of the measuring system, which makes the roteg easily transportable and less susceptible to shock or vibration during measurement.

thrombelastography (teg) is a standard monitoring procedure for evaluation of coagulation. 
usually only non-activated native blood teg measurements (nateg) are performed, which leads to a) a long time interval until coagulation and fibrinolysis parameters are available, b) very high susceptibility of the measurement to inhibitors like heparin, which disturbs the judgement of other components of coagulation, and c) unspecific results. our aim was to develop a coagulation monitoring system based on teg providing fast and specific information on the different components of coagulation. methods: the following measurements are performed in parallel using disposable pins/cups (haemoscope): a) extrinsic activated teg (exteg): 355 µl whole blood (wb) + 5 µl innovin (recombinant thromboplastin reagent, dade). b) intrinsic activated teg (integ): 355 µl wb + 5 µl kaolin (suspension 5 g/l, behring). c) aprotinin teg (apteg): exteg + 20 kie aprotinin (trasylol, bayer). d) heparinase teg (hepteg) as described in (1). results: exteg and integ provide information on the extrinsic/intrinsic system within 5-10 min and information on the platelet/fibrinogen status within 10-20 min. because of the addition of potent activators, integ and exteg can be performed when inhibitors like heparin are present in the circulation. fibrinolysis effects can be seen on exteg and integ and by comparison of exteg and apteg (apteg: in vitro fibrinolysis inhibition by aprotinin). if fibrinolysis is detected by exteg or integ and aprotinin susceptibility is verified by apteg, aprotinin therapy will be initiated. heparin effects are revealed by hepteg. discussion: by the comparison of differently activated parallel teg measurements, specific and fast information on the different aspects of the clinical coagulation status is provided. the presented tests can easily be performed at the bedside and only a small specimen of whole blood is needed (0.4-1.8 ml).
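the parallel assay panel and its comparative interpretation can be sketched in code. the volumes and additives below are taken from the abstract; the function name, the example amplitudes and the 1.2 comparison threshold are illustrative assumptions, and the hepteg-versus-integ comparison is a simplification of the published heparinase procedure, not a validated clinical rule.

```python
# hypothetical encoding of the four parallel teg assays described above
ASSAYS = {
    "exteg": {"blood_ul": 355, "additive": "innovin 5 ul (thromboplastin)"},
    "integ": {"blood_ul": 355, "additive": "kaolin 5 ul (5 g/l)"},
    "apteg": {"blood_ul": 355, "additive": "innovin 5 ul + aprotinin 20 kie"},
    "hepteg": {"blood_ul": 355, "additive": "heparinase-treated sample"},
}

def interpret(amplitudes):
    """amplitudes: maximum teg amplitude (mm) per assay; values are hypothetical."""
    findings = []
    # fibrinolysis: trace weak in exteg but restored by in-vitro aprotinin (apteg)
    if amplitudes["apteg"] > amplitudes["exteg"] * 1.2:
        findings.append("fibrinolysis, aprotinin-susceptible")
    # heparin effect: heparinase-treated trace stronger than plain integ (simplified)
    if amplitudes["hepteg"] > amplitudes["integ"] * 1.2:
        findings.append("heparin effect")
    return findings

print(interpret({"exteg": 30, "integ": 40, "apteg": 55, "hepteg": 62}))
```

with the example amplitudes both flags fire; with near-equal amplitudes across assays the function returns an empty list.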
introduction: a severely prolonged aptt (333 s; normal: 40-60 s) was observed during preoperative screening for a planned splenectomy in a 71-year-old man with an 8-year history of osteomyelofibrosis. following near-normalization (71 s) of the aptt after 10 min preincubation in a kaolin-based aptt assay, pk deficiency was suspected and studies were performed to further investigate the nature of the pk deficiency as well as the mechanism underlying the normalization of the prolonged aptt with increasing preincubation time. methods: the aptt assay was performed using kaolin/inosithin. high molecular weight kininogen clotting activity (hk:c), fxii:c and pk:c were measured by an aptt-based assay using neothromtin® (behring) and 1 min (pk:c) or 4 min (hk:c, fxii:c) preincubation. pk amidolytic activity (pk:am) was assayed using coatest pk® (chromogenix) and pk antigen (pk:ag) by quantitative immunoblotting. fxii and hk proteolysis during activation of plasma by kaolin (10 mg/ml at 37°c) or dextran sulfate (ds, 12.5 µg/ml at 4°c) was demonstrated by immunoblotting assays of fxii and hk following sds-page. results: the propositus had pk:c <5%, pk:am = 5% and pk:ag <2.5% as compared to normal pooled human plasma (nhp). his son and two daughters had pk:c ~50% and normal aptt values. incubation of the propositus' plasma with ds did not result in fxii or hk cleavage within 180 min, whereas in nhp detectable fxii and hk proteolysis occurred after 5 min and complete proteolysis was observed after ~120 min. in contrast, kaolin activation of the propositus' plasma led to slow activation of fxii after 10 min, presumably by autoactivation, and to fxiia-induced hk proteolysis. near-normalization of the propositus' aptt by prolongation of the preincubation time paralleled fxii autoactivation, as evidenced by immunoblotting. we describe a propositus with a severely prolonged aptt due to hereditary, crm-negative pk deficiency suffering from omf.
activation with a particulate suspension of kaolin led to slow fxii autoactivation and hk proteolysis, whereas ds in solution did not induce fxii or hk cleavage. fxii autoactivation seems to be responsible for the normalization of the prolonged aptt in pk deficiency after prolonged preincubation times.

in our study we compared a conventional bag with silicone tubing (a) for blood donation with 2 new ones (b from biotrans and c from baxter) with a newly developed y-shaped adapter. this adapter is integrated into the tubing and therefore provides the advantage of drawing blood samples in a closed system. the 3 systems were identical in amount and content of anticoagulant, i.e. 63 ml of cpd per bag, resulting in approximately 14% of the final whole blood volume. the purpose of the study was to determine whether the different tubings can influence the quality of plasma products concerning the blood coagulation system. in plasma samples we measured several factors of the procoagulatory and fibrinolytic systems. intraindividual control citrated (0.135 m) blood samples were initially drawn from the contralateral cubital vein of the same male donor (34 in each group). in all bag samples we found small but significantly higher levels of the global test parameters aptt and tt compared to controls, indicating a higher amount of anticoagulant. pt, however, revealed no differences, thus suggesting that factor activities were not altered (statistics according to mann-whitney). procoagulatory activation measured as tat complexes showed elevated levels in bags a and c, whereas prothrombin fragments f1+2 decreased only in a. concerning the fibrinolytic system, plasminogen activator and pai-1 values were diminished in all three systems (b < a < c) compared to controls. d-dimers were lowest in a, followed by slightly higher values in c, controls and b. fibrin monomers did not reveal any significant differences: a < c < controls < b.
in summary, the quality of the 3 different blood sampling devices was comparable to the intraindividual controls with regard to factor activities measured by global tests. the activation of the procoagulatory and fibrinolytic systems was slightly, but in most cases significantly, higher in the two new devices than in the conventional one. all values obtained from the plasma samples, however, did not exceed the normal range of healthy blood donors. we therefore conclude that the two new closed blood drawing systems are favorable for blood donation procedures.

20 patients with acute myocardial infarction (ami) receiving thrombolytic therapy (13 patients with rt-pa, 6 patients with streptokinase and one with heparin) were divided, on the basis of ck, myoglobin and ecg criteria, into two groups (reperfusion/no reflow two hours after starting the thrombolytic therapy). blood samples were taken before, 30 min, 1 h, 2 h, 4 h, 8 h and 12 h after lysis and then every day until day 10. because of the central role of factor xii in activation of coagulation, fibrinolysis, the kallikrein-kinin system and the complement cascade, we investigated the role of factor xiia initiated by ami and the relation of factor xiia to the thrombolytic agent and the reocclusion rate. for the investigations we used kits from shield diagnostics (xiia), behring diagnostica (c1-inactivator, plasminogen, α2-antiplasmin, pap), chromogenix ab (prekallikrein) and diagnostica stago (viia). results: there is an increase of factor xiia immediately after starting the fibrinolysis (maximum 30 min after starting); the increase is independent of the thrombolytic agent. factor viia rises in parallel with factor xiia, without significant changes of c1-inactivator and prekallikrein. this means: activation of xiia and the fibrinolytic pathway leads to relatively mild changes in the kallikrein system, but to significant activation of the extrinsic system by viia-tissue factor.
in some patients there is an additional rise in the xiia-viia system when the fibrinolytic system is already back in the normal range. further investigations will be needed to define the risk of reocclusion as a result of activation of factor viia by factor xiia.

autoimmune thrombocytopenic purpura (aitp) is a frequent complication of chronic lymphocytic leukemia (cll) which develops at different stages of the disease and needs special treatment measures. the mechanism of autoimmune disorders in cll remains unclear. we investigated the immunologic phenotype of blood lymphoid cells in 22 patients suffering from cll with aitp. in these patients we did not observe disorders in expression of b-lineage markers as compared with cll patients without immune complications (13 patients), but in the first group a greater number of b-cells expressed markers of activation. according to ig heavy chain expression, the lymphocytes in most cases of cll complicated by aitp had a more mature phenotype. in all patients with κ phenotype of cll lymphocytes we found immune disorders. the development of aitp was accompanied by a lowered level of t cells and a changed distribution of their immunoregulatory subsets: a diminished number of cd4+ cells and an increased number of cd8+ lymphocytes. the results of our investigations indirectly proved that malignant b-cells in cll are involved in production of autoantibodies against blood cells. an imbalance in the t-cell system with functional disturbances of immunoregulation is significant in the development of autoimmune complications in cll.

in women with severe fvii deficiency (<10%) hypermenorrhagia may cause life-threatening blood loss. therefore, hysterectomy at a young age is reported frequently in the literature. a 12-year-old girl without a history of a bleeding disorder was transferred with hypermenorrhagia. the initial laboratory data revealed an abnormal quick test of 30% due to fvii of 9.0%, a normal platelet count and a hemoglobin level of 7.2 g/dl.
antifibrinolytic therapy (tranexamic acid 4×15 mg/kg bw/d) and lynestrenol substitution were started to reduce the hemorrhage. despite treatment the daily blood loss increased to a maximum of 290 ml. therefore, substitution therapy with recombinant fviia (rfviia) (novo nordisk) was started at a dose of 15 µg/kg bw q 6 h. subsequently blood loss decreased to 30 ml/d, but even with an increasing dose of rfviia up to 35 µg/kg bw q 4 h (fvii activity max. 7400% 10 min after injection) and additional hormonal support with an lh-fsh antagonist some hemorrhage remained. a short course of methergin was stopped due to severe pain. ultrasound of the uterus revealed a hypertrophic endometrium causing the persistent bleeding. it decreased slowly over several weeks and hemorrhage stopped completely after 40 d. the total rfviia dose administered was 118 mg. no side effects were observed. no transfusions of blood products were necessary. currently, the menstrual cycle is suppressed by estriol succinate. conclusion: thanks to close cooperation with a specialised gynecologist, hypermenorrhagia was controlled and hysterectomy was avoided in this patient with severe fvii deficiency.

in three male members, aged between 27 and 52 years, of a family suffering from inherited bleeding disorders, the diagnosis of protein z deficiency was established. plasma protein z evaluated by elisa (asserachrom protein z, diagnostica stago, france) ranged between 200 and 300 ng/ml. the patients mostly suffered from moderate bleeding complications like prolonged bleeding secondary to trauma or invasive measures and also spontaneous hematuria. previous laboratory investigations revealed variable platelet function deficiencies and a transitory borderline decrease of von willebrand factor. spontaneous bleedings were rarely recognized; however, they occurred more frequently when analgesics were taken. bleeding complications showed good response to hemostyptic measures and antifibrinolytic therapy.
the use of pcc containing a high level of protein z in these patients is restricted to severe bleeding disorders or major surgery.

defibrotide is a mammalian polydeoxyribonucleotide-derived anti-ischemic and antithrombotic drug (crinos s.p.a., villa guardia, italy). while the drug is known to produce polytherapeutic effects owing to its multicomponent nature, the exact mechanisms of its anti-ischemic effects remain unknown at this time. since defibrotide is found to be effective in ischemic disorders such as paod, vod-related occlusive disorders and related microangiopathic conditions, we studied the effect of this drug on the contraction of dog and pig arterial strips/rings obtained from various sites. in vitro supplementation of defibrotide to the organ bath containing control dog and pig arterial rings did not modulate the serotonin- and thromboxane-(generated) contraction; however, tissues obtained from dogs treated with 10 mg/kg defibrotide iv exhibited a profound desensitization to the agonist-induced contractile process. the time course of these effects was found to be much longer than the plasma half-life of defibrotide. this presentation will provide additional data on the effect of defibrotide on the contraction of vascular smooth muscle as a possible explanation for its anti-ischemic effects.

a. wehmeier, a. popescu, w. schneider, klinik für hämatologie, onkologie und klinische immunologie der heinrich-heine-universität düsseldorf. in chronic myeloid leukemia (cml), evolution of blast crisis is the limiting factor of survival. however, as in other chronic myeloproliferative disorders, bleeding and thrombotic complications are a major source of morbidity, but their incidence has rarely been analysed in larger patient groups.
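the complication rates in the following passage are expressed per patient-year of observation. a minimal sketch of that computation; the function name and the patient-year denominator below are hypothetical, back-calculated only to illustrate the reporting convention:

```python
# incidence in "%/patient-year" = events per 100 patient-years of follow-up
def incidence_per_100_py(events, patient_years):
    return 100.0 * events / patient_years

# e.g. 28 bleeding episodes over a hypothetical 333 patient-years of
# chronic-phase observation would give roughly an 8.4%/patient-year rate
print(round(incidence_per_100_py(28, 333.0), 1))  # → 8.4
```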
we retrospectively evaluated 182 patients with cml during chronic phase (170 cases), accelerated disease (58 cases) and blast crisis (72 cases), and determined the incidence of thrombohemorrhagic complications in relation to the stage of the disease. in chronic phase, 28 patients had bleeding complications (8.4%/patient-year) and 15 patients thrombotic episodes (4%/patient-year). the incidence of bleeding increased significantly in accelerated disease (18 patients, 51.2%/patient-year) and blast crisis (37 patients, 347%/patient-year), and many patients had repeated complications. contrary to our expectations, the incidence of thrombotic complications also increased, to 10.2%/patient-year in accelerated phase and 39.8%/patient-year in blast crisis. in chronic phase, 3 patients died because of bleeding events. in accelerated phase, 5 patients died due to bleeding and 1 patient due to thrombotic complications. in blast crisis, bleeding was associated with 21 deaths, and pulmonary embolism with 2 deaths. analysis of the cause of thrombohemorrhagic complications revealed that in chronic phase bleeding was often associated with uncontrolled busulfan therapy, whereas in blast crisis severe bleeding occurred mainly when platelet counts were low and peripheral blasts increased. however, there was no obvious explanation for thrombotic complications. we conclude that bleeding and thrombotic complications are a major source of morbidity and mortality also in cml, and that the incidence of such complications increases in advanced stages of the disease.

klinik für innere medizin, klinikum schwerin. patients suffering from primary or secondary amyloidosis may occasionally acquire a coagulation disorder characterised by isolated factor x deficiency. we report on a 60-year-old man who presented with lower gastrointestinal bleeding and a prolonged prothrombin time (quick 50%). amyloidosis was suspected and proven by biopsy of the rectum and histological analysis.
in addition, a monoclonal gammopathy of undetermined significance was diagnosed by immunofixation (light chain, type λ). detailed investigation of the prolonged prothrombin time led to the discovery of a pronounced factor x deficiency (residual activity 4%). inhibitors of coagulation factors could not be demonstrated. the treatment of the patient consisted of red blood cell transfusion and infusion of prothrombin complex concentrates. due to the extremely rapid clearance of infused factor x, no increase of its activity was observed. chemotherapy of the monoclonal gammopathy was initiated (melphalan/prednisone). over the following six months the frequency of major bleeding episodes gradually decreased; however, subclinical occult bleeding continued. the factor x activity was repeatedly found between 10 and 12%. we support the suggestion from the literature that clinically relevant bleeding episodes are likely to occur in patients with amyloidosis-associated factor x deficiency if the residual activity is below 10%.

sepsis and septic shock is a disease entity characterized by inflammatory reactions (sirs), coagulation abnormalities (dic), organ failure (mof) and severe hemodynamic alterations, frequently leading to death in shock. the aim of our studies was to investigate the efficacy of antithrombin iii (kybernin®) on the outcome of septic shock in a pig endotoxemic model. pigs in this model respond to lps with elevated tnf levels, decreased leukocyte and platelet counts, increased tat and fibrin monomer levels, hypotension and an increase of the pulmonary arterial pressure (pap), indicating impaired lung function. a total of 13 male castrated juvenile domestic pigs (25-30 kg) were anaesthetized, ventilated mechanically and infused with salmonella abortus equi lipopolysaccharide (s. equ-lps) over three hours (0.5 µg/kg·h). a swan-ganz catheter was inserted into the pulmonary artery to measure the pap.
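the six-hour mortality comparison reported below for this model rests on a 2×2 χ²-test. a minimal stdlib sketch using the counts given in the abstract (0/7 deaths treated, 4/6 placebo); with cells this small fisher's exact test would normally be preferred, so this is for illustration of the statistic only:

```python
# 2x2 chi-square statistic without continuity correction
def chi2_2x2(a, b, c, d):
    """cells: a = died/placebo, b = survived/placebo,
    c = died/treated, d = survived/treated."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

stat = chi2_2x2(4, 2, 0, 7)
# 3.841 is the chi-square critical value for df = 1 at p = 0.05
print(round(stat, 2), stat > 3.841)  # → 6.74 True
```

the statistic exceeds the critical value, consistent with the p < 0.05 reported in the abstract.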
animals were allocated to two groups. the treatment group (n = 7) received antithrombin iii (at iii) according to the following regimen: 250 u/kg (t = -60 to -30 min, i.v. infusion), 125 u/kg (i.v. bolus, t = 0) and 250 u/kg (t = 180-240 min, i.v. infusion). the placebo group (n = 6) received the appropriate amount of human serum albumin: 50-25-50 mg/kg (same schedule as with at iii). the main objective was defined as the mortality rate at six hours after s. equ-lps infusion. whereas in the placebo group 4 out of 6 animals died (mortality rate: 66%), all at iii-treated pigs survived the observation period of 6 hours (p < 0.05, χ²-test). the at iii group was shown to have a lower pap than the control group; in particular, the second peak of hypertension was abolished by at iii. it is therefore concluded that at iii should be a useful tool for the treatment of severe sepsis and septic shock.

in a nationwide monthly survey all children's hospitals in germany (esped) were asked for clinical and therapeutic information about children suffering from pmi. from july 1994 until june 1995, 299 children were registered. of these, 87 had either ecchymoses and/or necroses related to an increased morbidity and mortality (20%), whereas 212 showed no bleeding signs except for petechiae. of these children one died. the therapeutic interventions concerning hemostasis are listed according to the two defined risk groups. of the patients with ecchymoses or necroses, 13/87 received combination therapy of at iii, heparin and/or plasma (compared to 5/212 with petechiae or no bleeding sign). only 1 child received protein c concentrate. the data show that children with low risk did in part receive higher doses of heparin and/or at iii concentrate than did high-risk patients, whereas plasma therapy was adjusted to the severity of coagulopathy. furthermore, the wide range of given therapeutics allows no conclusions about the different medications.
therefore, controlled studies of the different therapeutic interventions in children with high-risk pmi are desirable.

a fully automated procedure for the reptilase time assay. y. schmitt (1) and h.j. kolde (2), (1) institute for laboratory medicine, städtisches klinikum, darmstadt, frg, (2) dade diagnostics, unterschleißheim. the reptilase time assay is a relatively simple technique for the detection of fibrinogen degradation products and fibrinogen deficiency or abnormality. the procedure is performed with citrated plasma and batroxobin reagent, a snake venom enzyme from bothrops atrox. this enzyme cleaves fibrinogen by releasing fibrinopeptide a only, but not fibrinopeptide b. in contrast to the physiological enzyme thrombin, which is readily neutralized by antithrombin iii and heparin, batroxobin is not inactivated by physiological inhibitors. at present this assay is mainly performed manually or on mechanical instruments. we have adapted this assay to the electra 1000 fully automated coagulation analyzer (medical laboratory automation, pleasantville, n.y.) using the thrombin clotting time procedure in the instrument software with batroxobin reagent (dade diagnostics). clot formation is registered turbidimetrically and the clotting time is printed. the within-run precision (n = 10) of this procedure was tested with two plasmas from the daily routine and was between 2.8 and 3.4%. in 25 normal samples we found clotting times from 10.5 to 12.8 sec. in 30 samples with liver disease (confirmed by pseudocholinesterase < 2000 u/ml) or on thrombolysis therapy with streptokinase or urokinase, the fully automated assay on the electra was compared to the semiautomatic method using a kc 10 coagulometer (amelung, lemgo, germany) based on a rolling metal ball principle with magnetic endpoint detection. the two assays agreed very well, with a correlation coefficient of r = 0.948 and a regression line according to passing and bablok of y = 1.0x + 1.7.
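the passing-bablok fit quoted for this method comparison is a shifted median of all pairwise slopes. a simplified sketch of the estimator on synthetic clotting times (the data below are invented to echo the reported y = 1.0x + 1.7; the full published procedure also handles ties and derives confidence intervals, which this sketch omits):

```python
import statistics

def passing_bablok(x, y):
    """simplified passing-bablok regression: slope = shifted median of all
    pairwise slopes; intercept = median of (y - slope * x)."""
    slopes = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if x[j] != x[i]:
                s = (y[j] - y[i]) / (x[j] - x[i])
                if s != -1.0:  # slopes of exactly -1 are discarded
                    slopes.append(s)
    slopes.sort()
    m = len(slopes)
    k = sum(s < -1.0 for s in slopes)  # offset correcting for negative slopes
    if m % 2:
        slope = slopes[(m - 1) // 2 + k]
    else:
        slope = 0.5 * (slopes[m // 2 - 1 + k] + slopes[m // 2 + k])
    intercept = statistics.median(yi - slope * xi for xi, yi in zip(x, y))
    return slope, intercept

# synthetic clotting times (s), method y lying exactly on y = 1.0x + 1.7
x = [10.5, 11.0, 11.8, 12.4, 13.0, 14.2]
y = [12.2, 12.7, 13.5, 14.1, 14.7, 15.9]
slope, intercept = passing_bablok(x, y)
print(round(slope, 3), round(intercept, 3))  # → 1.0 1.7
```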
these data show that the reptilase time can be performed on the electra 1000 with good precision and good correlation with the manual technique on mechanical instruments.

introduction: disseminated intravascular coagulation (dic), due to a massive activation of the coagulation system, is frequently observed in intensive care patients suffering from severe underlying diseases. laboratory diagnosis of dic is based on different coagulation tests, but unfortunately the routine haemostaseological parameters react with latency in the course of acute dic. objective: in four cases from a cohort of 43 patients with severe sepsis and dic we analysed special haemostaseological parameters (tat, f1+2, d-dimers, human leucocyte elastase (hle), cathepsin g and heparin cofactor ii (hc ii)) and correlated them with a mof score in order to test their predictive value for the prognosis of these patients. results: all patients were substituted with at iii concentrate. in the investigated patients the median duration of treatment with at iii concentrate was 8 (6-9) days and the median duration of dic was 6 (4-8) days. none of the presented patients died during the observation period. all analysed parameters except d-dimers showed a sufficient correlation with the evaluated mof score (tat: r = 0.78; f1+2: r = 0.84; hle: r = 0.71; cathepsin g: r = -0.75; hc ii: r = -0.88). the d-dimers did not correlate with the mof score, which is probably due to the delayed reactive hyperfibrinolysis in the course of dic. furthermore, the decrease of the tat complexes, f1+2, hle and cathepsin g levels was followed by an increase of at iii and hc ii activity. conclusion: in general the analysed activation markers and coagulation parameters are sufficient to describe the ongoing process of dic. the hyperfibrinolytic activity of dic is sufficiently represented by the d-dimer test, but is of delayed reactivity in the course of dic.
unfortunately these parameters are not established in the routine monitoring of dic on intensive care units, and further studies are therefore needed to investigate their practicability and reliability in daily routine monitoring.

we have previously reported that notoginsenoside r1 (ng-r1) counteracts the lipopolysaccharide (lps)-induced upregulation of plasminogen activator inhibitor-1 and tissue factor expression in cultured human umbilical vein endothelial cells in vitro and in mice in vivo [fibrinolysis 1994;8:(suppl 1)119]. in this study we investigated the effect of ng-r1 on prevention of lps-induced lethal toxicity in mice. because mice are relatively resistant to lps applied as a single agent, we sensitized them by simultaneous treatment with d-galactosamine. the 80% lethality induced by lps (1.5 mg/mouse) plus d-galactosamine (8 mg/mouse) in c3h/he mice was reduced to 16% by simultaneous administration of ng-r1 (1.5 mg/mouse) with lps/galactosamine (p < 0.05 by χ² test). ng-r1 also significantly delayed lps/galactosamine-induced lethal toxicity from 12 hours to 30 hours, with all animals surviving beyond 30 hours. because lethality induced by lps involves the synergistic effect of multiple effector molecules such as tumor necrosis factor (tnf)-α, interleukin (il)-1, interferon γ etc., we also investigated the effect of ng-r1 on lps-induced tnf-α production from leukocytes in cultured human whole blood cells (hwbcs) ex vivo. the production of tnf-α induced by lps (1 ng/ml for 24 hours) in the supernatant of hwbcs was inhibited by 46% and 22%, respectively, when the cells were incubated with 1 ng/ml or 10 ng/ml lps together with 100 µg/ml ng-r1 (tnf-α concentration, 1 ng/ml lps-treated cells: 297±192 pg/ml, 1 ng/ml lps plus 100 µg/ml ng-r1-treated cells: 162±137 pg/ml, p < 0.01; 10 ng/ml lps-treated cells: 3094±487 pg/ml, 10 ng/ml lps plus 100 µg/ml ng-r1-treated cells: 2423±713 pg/ml, p = 0.02).
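the inhibition percentages above can be recomputed from the quoted mean supernatant concentrations; the small discrepancy with the quoted 46% in the first arm presumably reflects rounding of the underlying raw data. the function name is illustrative:

```python
# percent inhibition of lps-induced tnf-alpha release, from mean
# concentrations (pg/ml) of treated vs. untreated cultures
def inhibition(untreated, treated):
    return 100.0 * (untreated - treated) / untreated

print(round(inhibition(297, 162)))   # 1 ng/ml lps arm  → 45
print(round(inhibition(3094, 2423))) # 10 ng/ml lps arm → 22
```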
the present results suggest that ng-r1 can prevent the onset of lps toxicity as well as the lps induction of cytokines. ng-r1 may therefore be effective in preventing the effects of septic shock in gram-negative infections.

to elucidate the mechanisms by which coagulation is initiated in septic patients in vivo, coagulation measurements were prospectively evaluated in patients with severe chemotherapy-induced neutropenia. this group of patients was chosen because of their high risk of developing severe septic complications, thus allowing serial prospective coagulation testing prior to and during evolving sepsis or septic shock. 62 patients with febrile infectious events were accrued to the study. of these, 13 patients progressed to severe sepsis and an additional 13 patients to septic shock. at onset of fever, factor (f) viia activity, f vii antigen and antithrombin iii (at iii) activity decreased from normal baseline levels and were significantly lower in the group of patients who progressed to septic shock compared to those who developed severe sepsis (medians: 0.3 versus 1.4 ng/ml, 21 versus 86 u/dl and 45 versus 95%; p < 0.001). the decrease of these variables in septic shock was accompanied by an increase in a marker of thrombin generation, prothrombin fragment 1+2 (medians: 3.6 versus 1.4 nm; p = 0.05). these differences were sustained throughout the septic episode (p < 0.0001). f viia and at iii levels of <0.8 ng/ml and <70%, respectively, at onset of fever predicted a lethal outcome with a sensitivity of 100 and 85%, and a specificity of 75 and 85%, respectively. in contrast, fxiia-alpha antigen levels were not different between the two groups at onset of fever and were only marginally higher later in the course of septic shock (p = 0.001). thus, septic shock in neutropenia is associated with significant coagulation activation, presumably driven by the tissue factor pathway rather than the contact system.
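the sensitivity and specificity quoted for the f viia cut-off follow directly from a confusion matrix. a minimal sketch; the patient counts below are hypothetical (the abstract does not give the cell counts), chosen only to reproduce the 100%/75% pair reported for f viia:

```python
# sensitivity/specificity of a prognostic cut-off from confusion-matrix cells
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # fraction of lethal outcomes flagged by the cut-off
    specificity = tn / (tn + fp)  # fraction of survivors correctly not flagged
    return sensitivity, specificity

# hypothetical: 6 deaths all below the cut-off, 24 survivors of whom 6 below it
print(sens_spec(tp=6, fn=0, tn=18, fp=6))  # → (1.0, 0.75)
```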
furthermore, in septicemia both f viia and at iii measurements are sensitive markers of an unfavourable prognosis.

hemostatic parameters in sepsis patients treated with anti-tnfα monoclonal antibodies. c. salat 1, p. boekstegers 2, e. holler 1,3, b. reinhardt 1, r. pihusch 1, k. werdan 2, m. kaul 4, t. beinert 2, e. hiller 1, med. klinik iii 1 und i 2, klinikum grosshadern der ludwig-maximilians-universität münchen, hämatologikum der gsf 3, knoll ag ludwigshafen 4. tumor necrosis factor α (tnfα) is a central mediator in the pathogenesis of sepsis and septic shock. as administration of anti-tnfα monoclonal antibodies was able to protect animals from an otherwise lethal endotoxin challenge, clinical studies were initiated in patients with sepsis. tnfα exerts a procoagulant effect, e.g. by enhancing pai-1 and activating thrombin, as indicated by an increase in tat and pf 1+2 levels. therefore it may be involved in disseminated intravascular coagulation in sepsis. we determined tat, pf 1+2, d-dimer, tpa, upa, pai-1 and vwf levels in 30 patients with sepsis or septic shock. 14 patients received the anti-tnfα monoclonal antibody mak 195f (knoll ag, ludwigshafen), whereas 16 patients served as controls. we found a significantly lower level of upa in anti-tnfα-treated patients. since the difference existed before onset of treatment, it cannot be attributed to tnfα antagonisation. all other parameters investigated did not differ significantly between the two groups throughout the study period. failure to detect modulation of hemostasis by anti-tnfα might be explained by delayed initiation of treatment in clinical sepsis. in animal experiments it has been observed that the antibody prevented lethal endotoxin effects when given prophylactically or 30 minutes after endotoxin challenge, but not when administered 2.5 hours later.
in addition, beneficial clinical and hemostatic effects of tnfα antagonisation might be observed only in subgroups of patients with hyperinflammatory sepsis. larger studies addressing this point are under way.

protease receptors for thrombin and trypsin have been described for different cell lines. we investigated the ability of trypsin to activate human umbilical vein endothelial cells (huvec). cell activation was measured by the increase of intracellular free ca2+ ([ca2+]i) with the help of microscope fluorometry (fura-2) and by von willebrand factor release measured by a sandwich elisa. incubation of huvec with thrombin (1 u/ml) or trypsin (10 nm) showed a 2-10-fold increase of [ca2+]i. a subsequent homologous stimulation after 80 s led to a 2-5-fold lower concentration of [ca2+]i compared to the first stimulation; the cells had therefore been desensitised by the first stimulation. inhibition of the proteolytic activity of trypsin by soybean trypsin inhibitor abolished the trypsin-induced increase of the [ca2+]i concentration. in cross-stimulation experiments with thrombin and trypsin we could demonstrate that cells first stimulated with thrombin showed a second maximal response on subsequent stimulation with trypsin; the same effect was measured with trypsin as first and thrombin as second stimulus. trypsin and thrombin induced a release of von willebrand factor (2-5-fold in comparison to unstimulated cells). we found a vwf release dependent on the concentration of trypsin, similar to thrombin. electrophoretic analysis of the released von willebrand factor showed a different multimeric composition of vwf between trypsin and thrombin stimulation. these results indicate that there might be a protease receptor for trypsin on huvec that is different from the thrombin receptor.

clinical and laboratory findings of coagulopathy were investigated by a one-year survey of 320 children's hospitals. 291 meningococcal infections were evaluable.
severe disease (characterized by need for mechanical ventilation, dialysis and/or catecholamines) was seen in 42 of these children; 29 of those survived and 13 died. clinical signs of severe coagulopathy were seen in 83 children: ecchymoses (n = 73) and skin necrosis (n = 36) were associated with increased mortality (16% and 20%, resp., compared to 4.5% overall mortality). five of 29 surviving children with skin necroses required surgical interventions (skin transplantation and/or amputations). petechiae were frequent (n = 156) and, as an isolated finding, not related to severe disease or fatal outcome (6% mortality). platelet counts at admission were lower in non-survivors (10th-90th percentile: 30,000-450,000/µl, median: 139,000/µl) than in survivors (10th-90th percentile: 140,000-480,000/µl, median: 242,000/µl). at iii values showed no difference between survivors and non-survivors. protein c was available in few patients (n = 14): in this subgroup, protein c was lowered in patients with limited disease (10th-90th percentile: 20-105%, median: 48%) as well as severe disease (10th-90th percentile: 30-75%, median: 60%). in conclusion, the findings "ecchymoses" and "skin necroses" were related to fatal outcome and are therefore included in a prognostic score for severity of meningococcal disease.

the influence of irradiation on pai-1 and vwf levels in human umbilical vein endothelial cell cultures. k. fragiadaki, c. salat, r. pihusch, b. reinhardt, m. penovici, e. hiller, med. klinik iii, klinikum grosshadern der ludwig-maximilians-universität münchen. an elevation of pai-1 in bone marrow transplant recipients developing veno-occlusive disease (vod) of the liver has been described earlier. endothelial cell damage due to the preparative myeloablative radiochemotherapy is supposed to be an important step in the pathogenesis of the disease, which is characterized by an obstruction of small intrahepatic venules.
in order to investigate a possible role of irradiation we studied the influence of several doses (0, 5, 15, 30 gy) on pai-1 and vwf levels in the supernatant of human umbilical vein endothelial cell cultures (huvec). pai-1 antigen and vwf were determined by enzyme immunoassays. whereas pai-1 and vwf levels remained unchanged after irradiation with 5 gy and in control cultures, a rise was observed one day after irradiation with 15 gy (mean, day 0 → day +1) in pai-1 (100.0% → 171.2%) and vwf (100% → 159.7%) levels. the increase was more pronounced and reached statistical significance after a dose of 30 gy (pai-1 100% → 278.7% and vwf 100% → 168%). both pai-1 and vwf levels decreased on day 2 after irradiation with 15 and 30 gy. our results indicate that irradiation induces an increase of pai-1 and vwf in endothelial cells. nevertheless, this effect was observed only at doses above those used during conditioning, when patients receive 3 × 4 gy. additional factors seem to be of significance. cytokines like tnf-α enhance pai-1 and vwf in endothelial cell cultures and are known to be elevated in bmt-associated complications. it can be speculated that irradiation in concert with these factors may contribute to the development of veno-occlusive disease. disseminated intravascular coagulation is characterized by high consumption of coagulation factors, systemic elevation of fibrinolysis by t-pa and concomitant elevation of pai-1 secreted from inflamed endothelial cells. in an attempt to investigate the contribution of inflammatory cytokines, endothelial cell lines of microvascular origin were stimulated in vitro and pai-1 antigen was measured 2 h, 4 h and 24 h after stimulation.
in contrast to results published from experiments performed with macrovascular human umbilical vein cells (huvec), our results obtained with 3 different microvascular endothelia isolated from skin, solid tumor tissue and bone marrow revealed that inflammatory cytokines reduced pai-1 antigen levels. in addition to tnf-α (25 ng/ml) and lps (10 µg/ml), we found that il-10 (100 u/ml) and gm-csf (100 u/ml) also reduced pai-1 levels within the first 2 h of incubation (from 120 ng/ml to 80-110 ng/ml), and the effect was even more pronounced after 4 h and 24 h (from 380 ng/ml to 250 ng/ml). il-1 (10 u/ml) and lps also reduced constitutive levels of pai-1, but the effect occurred later than 4 h after addition of the stimulator. the strongest synergistic effect was demonstrated with gm-csf plus il-1, resulting in pai-1 suppression of 50% after 2 h and 30% after 24 h. in contrast, g-csf (300 u/ml) induced the immediate (120 to 140 ng/ml after 2 h and 380 to 420 ng/ml after 24 h) upregulation of pai-1 antigen. stimulation of pai-1 levels was also observed with tgf-β (10 pg/ml), however not earlier than 18 h of incubation. interestingly, both stimulatory cytokines, i.e. g-csf and tgf-β, alone were able to counteract the decrease of pai-1 antigen by tnf-α, but only a combination of g-csf plus tgf-β neutralized the effect of il-1. these results indicate that inflammatory cytokines regulate pai-1 fibrinolysis in a synergistic and antagonistic fashion. we established the culture of human brain microvascular endothelial cells (hbmec) in order to investigate the pathophysiology of human cerebral malaria, which is still associated with a high mortality rate. it is widely accepted that, among the reasons for the fatal outcome of cerebral malaria, the interaction of endothelial cells with cytokines and parasites, with subsequent changes in haemostaseological parameters, is involved. the human microvascular endothelium may therefore play a decisive role in the pathophysiology of cerebral malaria.
erythrocytes containing late stages of p. falciparum specifically bind to capillary ec in vivo (sequestration). tnf-α, il-1 and il-6 are considerably elevated in severe malaria. coagulation factors such as tissue factor and von willebrand factor are affected by malaria, suggesting the involvement of the hbmec in cerebral malaria. so far, research on the involvement of the hbmec has been performed on ec cultured from human umbilical veins (huvec). the relevance of this model may be questioned on the grounds that the capillary endothelium probably plays a greater role than the endothelium of the large vessels. besides, some properties of the endothelium seem to vary with the organ of origin. for these reasons, our laboratory has established the hbmec as a model to study the pathophysiology of human cerebral malaria. to demonstrate the relevance of this model in the context of malaria, hbmec were challenged with sera from different patients with severe p. falciparum malaria and with serum from a healthy donor. we can demonstrate that in cells challenged with malaria patient sera icam-1 and substance p were upregulated. on the other hand, cells challenged with serum from a healthy donor expressed neither icam-1 nor substance p. these results strongly suggest the relevance of this model for vessel involvement in malaria. both histamine and serotonin have been described as potent stimulators of von willebrand factor (vwf) release from human umbilical vein endothelial cells (huvec). we performed experiments to differentiate the receptors for histamine- and serotonin-induced vwf release. completely unexpectedly, we did not find any significant vwf release after the addition of serotonin to huvec or human artery endothelial cells (huaec) in concentrations from 0.1 µm to 50 µm. in the case of histamine (0.1 µm - 50 µm) we measured a vwf release 2-5 fold compared to unstimulated cells. this release was in the same order of magnitude as the release induced with 1 u thrombin.
to verify these results we measured the effect of histamine and serotonin on the intracellular ca2+ concentration (ca(i)2+) in huvec and huaec. cells were labelled with fura-2 and the change in fluorescence after agonist addition was measured with a microscope fluorometer. using the same agonist concentrations as above, we found a 5-10 fold increase of ca(i)2+ with histamine or thrombin but no effect by addition of serotonin. these results indicate a similar activation of human endothelial cells by histamine and thrombin, and that serotonin does not stimulate endothelial vwf release or an increase of ca(i)2+. activation and/or dysfunction of the endothelium can be triggered by cytokines (e.g. interleukin-2, tumor necrosis factor-alpha) or bacterial substances (e.g. endotoxins) and may contribute to shock and multi-organ failure. pai-1 and tm were assessed as parameters of activated endothelium following bsct at three to four day intervals from start of conditioning therapy through day +35. data were compared to the occurrence of sepsis, veno-occlusive disease (vod), capillary leakage syndrome (cls) and graft-versus-host-disease (gvhd). patients with neither complication served as controls. pai-1 and tm were increased in all patients with sepsis, cls, vod and/or gvhd. pai-1 peaked at days 14 to 18 after stem cell transplantation, and the increase was highest in sepsis and lowest in cls. the increase in tm values was somewhat delayed (day +24) and was highest in vod and cls and lowest in gvhd. pai-1 and tm are sensitive markers of endothelial activation in sepsis, vod, cls, and/or gvhd, but they do not allow a differentiation between these complications. endothelin (et) is the most potent vasoconstrictor. it is known that et plasma concentration is correlated with a poor prognosis in patients with non-ischemic cardiomyopathy (cm). the contribution of the heart to the production of et is still unknown.
to investigate the pathogenetic mechanism in patients without coronary artery disease (cad), we examined 13 patients with hypertension ( . pulmonary capillary wedge pressure (pcwp) was measured in all patients. et and its precursor big-endothelin (bet) were determined at rest and after pharmacological stimulation with dipyridamole (0.5 mg/kg body weight), which increases coronary blood flow by a factor of 2-4 via a non-endothelial pathway. cardiac coronary et and bet concentrations were determined from arterial blood samples obtained from the aorta and, simultaneously, from the coronary sinus (venous blood). blood samples were collected into ice-chilled vacutainer tubes and stored after centrifugation at -70 °c. et and bet were analysed, after extraction on a sep-pak c18 cartridge, by radioimmunoassay technique (immundiagnostik). it is concluded that et is increased with elevated filling pressures of the heart in patients with cm. it is not produced in considerable quantity by the heart, either at rest or at increased blood flow; therefore the lung has to be considered the major organ for the production of et and bet in patients without cad. to characterize the incompatibility of blood with foreign surfaces, valid in vitro methods, especially for testing platelet function, are necessary. it seems effective to use test systems which can also be helpful later on in the clinic when foreign surfaces (e.g. venous catheters) are used and evaluated in so-called phase-4 studies.
we studied the influence of 21 reference polymers under standardized and controlled flow conditions on platelets in citrated blood specimens of healthy blood donors. the following tests were performed pre and post platelet-polymer contact: decrease of platelet count, platelet aggregation (wu-grotemeyer index), and analysis of platelet spreading capacity on standardized plastic surfaces using a visual microscopic evaluation according to breddin and bürck (1963) and an interactive computer-aided system (ibas, kontron gmbh, münchen, frg), digitalizing the morphological picture of the platelet slides with area detection at a resolution of 512 × 512 pixels. results: platelet counts showed significant differences pre and post polymer contact; the wu-grotemeyer index demonstrated platelet activation only by blood contact with large volumes of polymeric material, whereas both visual and computer-assisted evaluation of platelet spreading ability revealed a marked shift in the different classes of platelets: platelet activation results in a decrease of large structural elements and an increase of elements with spider threads (pre contact (n = 1000): 27 ± 6 large forms of platelets, 700 ± 39 small forms and 275 ± 41 spider forms; post contact (n = 1000): 6 ± 5 large forms, 510 ± 56 small forms and 484 ± 58 platelets with spider threads). in some series there were significant differences between visual and computer-aided evaluation in the detection of small and spider forms. however, the relative increase of these non-spread spider forms could be stated with both methods (wilcoxon test). we therefore conclude that platelet morphometry with both methods is a sensitive and reliable ex vivo method to evaluate platelet interactions with artificial surfaces and can also be used later on in phase-4 studies in patients. however, the ibas system requires further improvement in hard- and software to reduce the high expenditure of this method.
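the shift toward spider forms reported above (275 of 1000 platelets pre contact vs. 484 of 1000 post contact) can be illustrated with a minimal two-proportion z-test sketch in python; the counts are taken from the abstract, while the function name and the test choice (a pooled z-test rather than the wilcoxon test used in the study) are ours:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for comparing two proportions, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# spider-form counts pre and post platelet-polymer contact (n = 1000 each)
z = two_proportion_z(275, 1000, 484, 1000)
print(round(z, 1))
```

any |z| above 1.96 is significant at the 5% level, so a shift of this size is far beyond chance under the sketch's assumptions.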
despite largely standardised methods such as hypothermia and cardioplegia, the perioperative myocardial infarction rate is still high at approx. 6%. in cardiovascular surgery it is well known that various cardioplegic solutions are employed for myocardial protection during the ischemic phase. in order to evaluate the possible influence of these solutions we selected two of the most commonly used cardioplegic solutions for investigation in a randomised double-blind study: htk (group 1) and st. thomas (group 2). after randomisation each group consisted of 20 patients who had to undergo aortocoronary bypass surgery. the aim of the investigation was to establish possible varying cellular changes during the reperfusion phase or in the early operative phase, in order to be better able to apply reinforcing clinical measures. in the context of this study the classical enzyme diagnostic methods ck, ck-mb and ldh proved useful, however not convincing. still, we have in the meanwhile been able to show that cardiac muscle troponin t is a particularly sensitive parameter as regards differentiated ischemic damage to the myocardium. this we were able to confirm in extensive preliminary trials. cardiac troponin t was registered with a one-step immunoassay using two highly specific monoclonal antibodies directed against two different epitopes of cardiac troponin t. simultaneously the corresponding pre- and postoperative ecg was registered. further, within this context we investigated parameters that indicate cellular damage, such as platelet factor 4 (pf4), t-pa, interleukin-6 and pmn-elastase. in the reperfusion phase there is a significant rise in troponin t in group 2, while in group 1 these values remain practically unchanged up to the 1st postoperative day. of special importance is interleukin-6, since according to most recent studies the release of this substance leads to platelet activation via the arachidonic acid metabolism.
this pathway must, further, be regarded within the context of free radical formation. on the 1st postoperative day the il-6 values in group 2 are significantly higher. the effects of membrane damage, observed via pf4 and pmn-elastase, also differ between the two groups. on the basis of this study we arrive at the conclusion that htk cardioplegia is essentially less damaging than the st. thomas solution. (2) r. hetzer (2) (1) department of hematology and oncology, virchow klinikum, humboldt university, berlin, germany (2) we investigated the influence of two different vad systems on these hemostatic changes. vads were implanted in 18 patients [11 bi-vad (berlin heart), 7 left vad (novacor n100)] with end-stage heart disease who were awaiting heart transplantation. the following hemostatic parameters were measured during the first 51 days of bridging or until heart transplantation: thrombin-antithrombin iii (tat) complexes, prekallikrein, factor (f) xii, plasminogen, α2-antiplasmin, and β-thromboglobulin. results: during the first week of bridging, significantly higher tat levels were observed in novacor patients compared to berlin heart patients. prekallikrein activity levels were significantly lower in the berlin heart patients in the early bridging period. all other parameters were comparable in both groups throughout the entire observation period. differences in hemostatic parameters became apparent only in the early bridging period, with more enhanced prothrombin activation in the novacor group and more prominent contact activation in the berlin heart group. avoidance of the transmission of viral infections and savings in the use of blood products have encouraged the use of apparative intraoperative autotransfusion techniques.
patients and methods: after randomization, apparative intraoperative autotransfusion was performed in 5 × 7 patients during elective hip surgery using haemonetics cell saver iii, haemonetics cell saver v, electromedics elmd, haemolite 3 and the fresenius continuous autotransfusion system (cats). at defined times we determined a lab panel (clinical chemistry, lipids, proteolytic capacity, hemolysis, coagulation panel) at 9 determination points in the reservoir, in the retransfused blood and in the patient. results: no significant differences concerning proteolytic capacity, prothrombin time, platelets, lipids, electrolytes. increased hemolysis (p < 0.01) in the hcs iii group vs. the other groups (10 min after application of the retransfused blood). low heparin concentrations of retransfused blood in the hcs iii group (0.32 ± 0.3 u/ml) vs. high concentrations in the cats group (0.47 ± 0.3; p = 0.01). parameters of thrombin generation were elevated in the hcs iii group vs. the other groups (p = 0.02). conclusions: the use of 5 different apparative autotransfusion systems during elective hip surgery results in disturbances of hemocompatibility. the activation of the coagulation system during collection and filtering is partly influenced by the elimination kinetics and the dose regime of heparin. intraoperative autotransfusion must therefore be managed very carefully, and possible adverse effects of perioperative heparin peak levels have to be considered. little information is available on the management of patients with factor viii deficiency who require cardiac surgery. we report the case of a 54 year old man with factor viii deficiency and combined severe aortic stenosis and incompetence and mitral incompetence who underwent a double valve replacement at our institution. he had a history of several bleeding episodes following minor surgery. previous factor viii levels were between 8 and 26%.
using standard cardiopulmonary bypass, a double valve replacement with a 23 and 29 mm bileaflet prosthesis in aortic and mitral position, respectively, was performed. a high dose aprotinin regime was used (5.5 × 10^6 iu). three doses of factor viii concentrate were given in the perioperative period, totalling 7000 iu until the 1st postoperative day. repeated measurements of the factor viii level were performed. the postoperative chest tube drainage was 350 ml. until the 4th postoperative day an additional dose of 3000 iu of factor viii was given to maintain a level of at least 30%. the obligatory anticoagulation was achieved initially with heparin i.v. in therapeutic dosage. due to a persistent 3rd degree av block a permanent pacemaker was inserted, with an additional 2000 iu of factor viii. on the 17th postoperative day warfarin was commenced, aiming for an inr of 3.0-3.5. the patient was discharged home thereafter. he was trained to monitor his inr with a coaguchek device. no bleeding episode occurred during the first 3 months of follow up. open heart surgery can be performed safely in patients with factor viii deficiency with the use of factor viii concentrates and monitoring of factor viii levels. coating of biomaterials was developed using synthetic polymers with incorporated anticoagulants. stents were coated with a thin layer consisting of a polylactide polymer containing peg-hirudin and a stable prostacyclin analogue. these materials were tested in a "human shunt model" using non-anticoagulated blood of healthy volunteers. within minutes uncoated stents were covered by fibrin and aggregated platelets, which could be seen macroscopically and by scanning electron microscopy; coated stents were free from coagulation plugs. these observations were supported by analysis of coagulation activation markers. unlike coated stents, uncoated stents revealed high levels (> detection limit) of tat complexes and prothrombin fragments (f1+2).
in a series of experiments stents were tested in sheep. in 16 sheep, stents (coated/uncoated palmaz-schatz stents) were placed by conventional techniques in the left anterior descending artery. anticoagulant therapy consisted of a heparin bolus and intravenously given aspirin before stent implantation. no anticoagulation was given thereafter. existing data show hyperplasia in the area of uncoated stents, which was reduced around coated stents (this study will be finished in january 1996). this coating technique with incorporated anticoagulants reduces thrombogenicity during the early and late phase of biomaterial implantation. studies concerning catheters, vascular prostheses and oxygenators are in progress. mechanical circulatory support (mcs) is a therapy for patients (pts) with end-stage cardiac insufficiency. during mcs, thromboembolic events due to the surface thrombogenicity of the implanted device are feared complications. activated blood platelets play a major role in this context. therefore, the patients' platelet morphology was investigated. during the period of mcs, using the novacor left ventricular assist system n100, blood samples of 8 pts were observed by means of scanning electron microscopy (sem). blood was collected preoperatively and, after implantation, daily during the first week as well as weekly for the first 3 months. samples were drawn via an 18-gauge cannula into cacodylic-acid buffered glutaraldehyde and platelets were prepared for morphological investigations. platelet alterations were classified as non-activated, activated and aggregated, based on "shape change" morphology. additionally, the common blood coagulation parameters were evaluated. preoperatively, 15.0 ± 4.6% activated platelets were found. within the first postoperative week, the mean level of activated platelets rose to 32.8 ± 8.0% (p < 0.05). comparing short- (< 30 days) vs.
long-term (> 30 days) mcs, a significant difference in activated platelets (overall mean values) could be seen (24.3 ± 3.3% vs. 34.8 ± 3.4%, p = 0.004). during mcs a correlation between hemolysis and platelet aggregates, as well as between activated clotting time values and activated platelets, was observed. also, specific platelet deformations and damage appeared during mcs which could not be found preoperatively. all pts with mcs showed alterations of their platelet morphology induced by the activation of the implanted synthetic material. with regard to the postoperative antithrombotic therapy, these observations should be taken into consideration. during extracorporeal circulation (ecc) the blood and its components are exposed to artificial surfaces and inflammatory responses are activated, especially the complement, coagulation, fibrinolytic and kallikrein systems. furthermore, leukocyte activation occurs and platelet function is impaired. these humoral and cellular systemic responses are known as the "postperfusion syndrome", with clinical symptoms like leukocytosis, increased capillary permeability, accumulation of interstitial fluid and organ dysfunction. the importance and even perhaps the existence of the damaging effects of cpb have been widely debated in the literature over the past 30 years. many efforts have been made to reduce traumatizing factors, e.g. the use of membrane instead of bubble oxygenators. recently, heparin-coated equipment and tubings have been proposed to avoid excessive contact activation during cpb. the here presented study was designed to assess changes in coagulation and fibrinolytic activity in 20 patients undergoing cpb.
in this regard we investigated coagulation parameters like fibrinogen, antithrombin, prothrombin fragments f1+2, thrombin-antithrombin complex, tissue factor and fibrin monomers, and parameters of the fibrinolytic system like tissue plasminogen activator, plasmin-antiplasmin complex, d-dimers and plasminogen activator inhibitor before, during and after cpb. the activation of the complement cascade was followed by measuring the concentrations of c5a, c4 and c3c. the results demonstrate distinct alterations in the above mentioned parameters. in spite of high dose heparinization (act > 450 s) combined with an antifibrinolytic treatment, an activation of the coagulation system was observed immediately after the onset of cpb, followed by an activation of the fibrinolytic system. therefore further efforts should be made to develop new anticoagulatory regimens and improve the biocompatibility of materials used for cpb. during cardiopulmonary bypass blood is exposed to nonphysiologic conditions. the contact with artificial surfaces and mechanical stress result in a perioperative response which includes activation of the complement, coagulation, fibrinolytic and kallikrein systems, activation of neutrophils with degranulation and protease enzyme release, oxygen radical production and the synthesis of various proinflammatory cytokines. this so-called "post-pump inflammatory response" has been linked to respiratory distress syndrome, renal failure and neurologic injury. our goal was to investigate the time course of cytokine levels and the activation of leukocytes and platelets, and to quantitate leukocyte subpopulations in 20 patients undergoing cpb. at different time points, pre, during and post cpb, we determined the levels of interleukin (il) 1β, il-2, il-4, il-6, il-8, il-10, tumor necrosis factor α (tnf-α) and interferon γ (ifn-γ) using elisa techniques.
lymphocyte subpopulations were characterized by flow cytometry and specific monoclonal antibodies against cd3 (pan t-cell marker), cd4 (surface antigen on t-helper cells) and cd19 (surface antigen on b-cells); monocytes were determined by cd14 and platelets by cd41 (act. gpiib/iiia) and cd42b (gp ib). single cell activation was analyzed using markers against cd25 (il-2 receptor), cd126 (il-6 receptor), hla-dr (mhc class ii), cd71 (transferrin receptor) and cd69 (activation inducer molecule); platelet activation was monitored with an antibody against cd62 (gmp-140). preliminary results revealed distinct increases in il-6, il-8, and il-10 following cpb, whereas tnf-α and ifn-γ levels were not significantly influenced. furthermore, activation of particular cell populations was observed. finally, our investigations should contribute to a better understanding of the complex humoral and cellular responses induced by cpb and thus might help to develop new strategies to circumvent the negative impacts of cpb. optimal adjustment of anticoagulation in machine plasmapheresis is important for the quality of the prepared fresh frozen plasma (ffp) as well as for the safety of the donation. in the present study the suitability of prothrombin fragment (f1+2) in the assessment of anticoagulation during plasmapheresis was investigated. material and methods: 75 plasmapheresis procedures were performed on 25 donors (10 ♀, 15 ♂) using 3 different plasmapheresis machines (a 200, baxter; mcs 3p, haemonetics; pph 900, electromedics/medtronic). acid citrate dextrose formula a (acd-a) in a ratio to whole blood of 8 : 92 was used for anticoagulation. the concentration of f1+2 in the donor's blood was measured before and after plasmapheresis and in the prepared ffp. the actual acd-a volume used was also registered.
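the before/after f1+2 comparison described above is a paired design, so each donor serves as their own control. a minimal python sketch of such a paired analysis follows; the paired values are hypothetical illustration data, not the study's raw measurements, and the simple sign test stands in for whichever paired test the authors used:

```python
# hypothetical pre/post f1+2 values for 8 donors (nmol/l), paired by position
pre  = [1.10, 1.20, 1.05, 1.18, 1.12, 1.16, 1.09, 1.22]
post = [1.30, 1.35, 1.21, 1.40, 1.28, 1.33, 1.26, 1.41]

diffs = [b - a for a, b in zip(pre, post)]
mean_rise = sum(diffs) / len(diffs)
n_increased = sum(d > 0 for d in diffs)

# two-sided sign test for the all-increase case: p = 2 * (1/2)^n
p_sign = 2 * 0.5 ** len(diffs) if n_increased == len(diffs) else None

print(f"mean rise: {mean_rise:.2f} nmol/l, "
      f"increased in {n_increased}/{len(diffs)} donors, sign-test p = {p_sign}")
```

with an increase in every donor, even this crude sign test reaches p < 0.05, mirroring the direction of the reported result.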
results: there was a significant rise of the f1+2 concentration in the donor's blood after plasmapheresis with each of the three machines: a 200: 1.32 vs 1.14, p < 0.05; mcs 3p: 1.26 vs 0.98, p < 0.05; pph 900: 1.20 vs 1.05, p < 0.05. the ffp prepared with each machine showed the following f1+2 concentrations: 0.91 ± 0.18, 1.02 ± 0.17 and 0.93 ± 0.11, respectively. the difference between the groups was not significant. the elevation of the f1+2 concentration in the donor's blood showed a negative correlation with the volume of acd-a used. during 6 of the 75 procedures technical problems occurred (inadequate venous access, occlusion of the citrate tube, reduced whole blood flow). after these procedures there was a marked elevation of f1+2 in the donor's blood (2.74 ± 0.53), accompanied by an elevated f1+2 concentration in the prepared ffps. conclusion: these data show that f1+2 is a suitable parameter for the assessment of anticoagulation during plasmapheresis. several epidemiologic studies have demonstrated that fibrinogen is an independent cardiovascular risk factor and should be considered for screening programs. prothrombin time derived fibrinogen (df) measurement combines the advantage of an established, highly reproducible automated method with no additional reagents, except for calibration. several studies showed that the df values correspond well with the clauss method, except in cases such as thrombolytic therapy, in which the df results are higher. however, no data exist on whether the df values are also comparable to the established clauss method in patients with coronary heart disease with fibrinogen as a risk factor. the aim of our study was to compare df values to clauss method results in cardiac patients, especially in patients before and after coronary artery bypass grafting (cabg). measurements of df were performed on an acl 3000 (il) using the pt-fibrinogen-hs reagent.
the fibrinogen clauss method was done on the acl using fibrinogen c reagent (il) and on a kc4 (amelung) with fibrinogen a reagent (boehringer mannheim). for calibration we used the calibration plasma half volume (il) with the fibrinogen concentration proposed by the manufacturer. plasma samples were obtained from 24 patients at admission before cabg and postoperatively up to 1 week, and from 23 healthy persons (staff). within-assay imprecisions using normal and abnormal controls (il) were comparable with both methods, showing cvs between 1.99 and 4.22%. in normal healthy persons the medians of the df and the clauss method run on the acl were very similar (296 vs 302 mg/dl), whereas kc4 values were about 10% lower (268 mg/dl). in cabg patients at admission we found the same differences as in normals with the clauss method (acl: 363 vs kc4: 337 mg/dl), however the df values were significantly higher (median 418 mg/dl). if we took a cutoff value of 320 mg/dl, as suggested by the results from the northwick park heart study, we would categorize into the high risk group 21 out of 24 patients using the df method, 20 with the clauss-acl method and 16 with the clauss-kc4 method, i.e. nearly 30% more patients were classified in the high risk group using the df method. postoperative samples showed the expected increases due to the acute phase response, with the same magnitude of differences. because of its rapidity and reproducibility the df method is well suited for routine measurements; however, standardization remains an urgent task in order to avoid misinterpretation of results. for fibrinogen measurements in clinical laboratories, the two most widely used methods are the clotting time method according to clauss (cfib) and the so-called "derived" fibrinogen method (dfib) implemented in optical coagulometers, with the fibrinogen concentration being derived from the optical density of the fibrin clot in a standard prothrombin time (pt) assay.
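the risk-group classification above is a simple threshold count: how many patients each method pushes over the 320 mg/dl cutoff. a minimal sketch in python, where only the cutoff comes from the text and the per-patient values are hypothetical:

```python
# cutoff from the northwick park heart study, as cited in the abstract
CUTOFF_MG_DL = 320

# hypothetical per-patient fibrinogen results (mg/dl) for two methods
df_values     = [418, 350, 300, 410, 390, 280, 335]  # pt-derived fibrinogen
clauss_values = [363, 330, 285, 380, 355, 260, 310]  # clauss fibrinogen

def high_risk_count(values, cutoff=CUTOFF_MG_DL):
    """number of patients whose fibrinogen exceeds the risk cutoff."""
    return sum(v > cutoff for v in values)

print(high_risk_count(df_values), high_risk_count(clauss_values))
```

because the derived method reads systematically higher, it classifies more of the same patients as high risk, which is exactly the discrepancy the abstract reports.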
it is well known that under certain circumstances, e.g. in the presence of fibrin(ogen) degradation products (fdp), there is a discrepancy between the two methods, with higher values for dfib than for cfib. yet the opposite discrepancy, i.e. fibrinogen values derived from the optical density of the clot grossly lower than values from clotting time assays, seems to be very rare and is poorly understood so far. the patient (male, 26 years) had ingested the esterase inhibitor parathion (e605) in an attempt of suicide and was treated with high doses of atropine. he had no clinical signs or history or family history of bleeding or thrombotic disorders. except for a very low pseudocholinesterase activity, all laboratory results were normal, including pt, aptt, thrombin time, and factor xiii. pt and aptt did not differ between an optical coagulometer (electra 1000c, mla) and a mechanical one (kc4, amelung). there was no evidence of disorders known to interfere with hemostasis like paraproteinemia or dyslipidemia. however, in all 7 blood samples received for clotting tests during a period of 7 days the macroscopic appearance of the fibrin clot was quite unusual (only slightly turbid/almost transparent) and there was a striking discrepancy between a very low or low dfib on the electra (pt reagent: thromboplastin is, dade) and a normal or high cfib (kc4; thrombin reagent, dade). on admission, values were 57 mg/dl (derived) vs. 275 mg/dl (clauss). cfib rose to 542 mg/dl, with dfib at 155 mg/dl, in the last sample on day 7. in all samples dfib was about 20% (18-23%) of cfib. when the patient's plasma was added to normal pooled plasma it caused, in a dose-dependent manner, values lower than predicted for dfib and values slightly higher than predicted for cfib. in the absence of data from additional (e.g.
immunologic) methods, the following principal possibilities (and combinations) have to be considered: 1) normal fibrinogen concentration and clot formation rate, but abnormal optical properties of the clot (cfib correct, dfib falsely low); 2) normal optical properties of the clot, but accelerated clot formation and very low fibrinogen concentration (dfib correct, cfib falsely high). in either case, the molecular basis could be: a) a genetic or acquired molecular abnormality of fibrin/fibrinogen; b) an interfering substance. direct effects of the toxic agent parathion and/or the antidote drug atropine are not likely to be the cause, since other patients, often with more severe parathion intoxication requiring higher doses of atropine, showed normal optical density of the clot. we hope to perform a more in-depth investigation of this abnormality in the future, including various methods, reagents, and instruments for fibrinogen measurement, a survey of the patient's family, and studies of the molecular nature of the phenomenon. increased fibrinogen is known to be an independent predictor of subsequent acute coronary syndromes. however, a multitude of methods for fibrinogen determination is available, and there is a lack of standardisation among fibrinogen assays. in a family cohort study (patients with combined hyperlipidaemia and/or hyperuricaemia) fibrinogen was determined in plasma samples from 340 family members using a functional and an immunochemical assay. the functional assay according to clauss was performed on the analyser ca 5000 using the test fibrinogen a from boehringer. the immunonephelometric assay was performed on the behring nephelometer system using the reagent and standard from behring. a good similarity between both assays was obtained at low and high fibrinogen levels as well as in samples with increased c-reactive protein (crp). values obtained by both assays correlated similarly with total cholesterol, ldl-cholesterol and apolipoprotein b.
the ratio functional fibrinogen / immunochemical fibrinogen showed no dependence on cholesterol, t-pa, von willebrand factor and crp. release of the two fibrinopeptides a from fibrinogen generates desaa-fibrin monomer, which rapidly aggregates, forming fibrin complexes. fibrin monomers can be detected in plasma samples after chemical desaggregation of fibrin complexes using thiocyanate by monoclonal antibody binding to the alpha-chain neo-n-termini generated by fibrinopeptide release. although postulated, an intermediate of fibrin formation, carrying one fibrinopeptide a and one fibrin alpha-chain neo-n-terminus, has so far escaped analytical procedures. we have employed a monoclonal antibody specific for the fibrin alpha-chain neo-n-terminus, mab 2b5, attached to magnetic microparticles, for isolation of fibrin-related material from plasma samples of patients with elevated soluble fibrin. the material was desorbed by sds-urea buffer and subjected to sds-page and immunoblotting. immunostaining with panspecific anti-fibrinogen and anti-fdp-e antisera showed a range of bands corresponding to fibrin monomers and fibrin derivatives containing the fibrin e-domain. immunostaining with monoclonal anti-fibrinopeptide a antibody resulted in a doublet band corresponding in size to fibrin monomer. similar results were obtained with polyclonal antisera against fibrinopeptide a. for a more quantitative approach, desa-fibrin monomer was detected by an elisa procedure using mab 2b5 as capture and monoclonal anti-fibrinopeptide a antibody as tag. a sample with an extremely high level of desaa-fibrin monomer, determined by elisa (enzymun®-test fm), was used for calibration, since reference material is not available. a correlation of r=0.94 was found between desaa-fibrin monomer and relative desa-fibrin monomer levels. detection of desa-fibrin monomer required sample pretreatment with thiocyanate for desaggregation of fibrin complexes.
from these preliminary data it appears that desa-fibrin monomer accounts for a fairly constant proportion of soluble fibrin and is a polymerizing species. fibrinogen has been shown to be a major cardiovascular risk factor. especially for epidemiological studies, exact quantitation of fibrinogen in clinical plasma samples is of great importance. fibrinogen levels are generally measured by clotting assay according to clauss, or by determination of derived fibrinogen values upon photometric measurement of prothrombin time (derfbg). the clotting assay has been shown to be influenced by high levels of soluble fibrin derivatives. the pt-derived fibrinogen levels appear rather convenient in clinical routine, since no additional reagents are needed. we have compared the clauss assay and derfbg with a turbidimetric fibrinogen assay using snake venom protease for fibrinopeptide release, performed on photometric autoanalyzers. d-dimer antigen was measured in parallel using tinaquant d-dimer lpia. results were correlated with total fibrinopeptide a release by thrombin, measured by elisa. a total of 484 samples were included, of which 29 samples (6 %) were recorded as above measuring range by derfbg. these samples encompassed a range of 5.90-10.40 g/l and 5.21-12.37 g/l in the clauss and turbidimetric assay, respectively. the range of values measured by the derfbg assay was 0.72-9.14 g/l, corresponding to 0.26-11.00 g/l and 0.24-10.48 g/l in the clauss and turbidimetric assay, respectively. the correlation of derfbg with the clauss assay was r=0.91; correlation with the turbidimetric assay was r=0.92 for the values actually detected. the correlation between the clauss and turbidimetric assay was r=0.93 for all values. there was no dependency of test results or inter-test variation upon d-dimer. correlation graphs displayed a decreased test response of the clauss assay in the high concentration range, resulting in an underestimation of fibrinogen concentration.
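the inter-assay agreement figures quoted here (r=0.91-0.93) are plain pearson product-moment correlations of paired assay results. a minimal sketch of how such a coefficient is computed; the paired fibrinogen values below are made-up illustrative data, not study data:

```python
# illustrative sketch (not the authors' code): pearson correlation of
# paired results from two fibrinogen assays.

def pearson_r(x, y):
    """plain pearson product-moment correlation of two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# hypothetical paired fibrinogen results (g/l) from two assays
clauss = [1.8, 2.4, 3.1, 4.0, 5.2, 6.1]
turbidimetric = [1.7, 2.6, 3.0, 4.3, 5.0, 6.4]
print(round(pearson_r(clauss, turbidimetric), 3))
```

note that a high r only shows that the two assays rank samples alike; it does not exclude a systematic bias such as the clauss underestimation in the high range described above.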
the derfbg assay, in contrast, showed normal-range values in samples from patients with fibrinolytic treatment and low fibrinogen levels in the other assays. correlation with fibrinopeptide a release was r=0.88 for the clauss assay, r=0.89 for the turbidimetric assay, and r=0.82 for derfbg. for clinical routine, derfbg appears to be applicable for all samples between 1.00 and 5.00 g/l with exclusion of samples from patients with fibrinolytic treatment or endogenous hyperfibrinolysis. other samples may be analyzed by clotting assay or turbidimetric assay, although the latter appears to be more suited for measurement of high-range samples. the ki for inhibition of pk is 0.067 pmol/l. the antifibrinolytic activity of the inhibitors was determined by measuring the lysis of radiolabelled human plasma clots. the compounds which inhibit plasmin and pk influence remarkably the streptokinase-induced clot lysis but not lysis induced by uk and tpa. surprisingly, inhibitors of uk and tpa do not influence clot lysis induced by uk or tpa. the structure-activity relationships for inhibition of plasmin, uk, tpa and pk could help in the design of more potent inhibitors of fibrinolytic enzymes. uk inhibitors are of interest for the development of anti-invasiveness drugs, while plasmin/pk inhibitors could be prototypes of a "synthetic aprotinin". in the ecat angina pectoris study t-pa antigen was an independent risk factor of subsequent acute coronary syndromes. pai indicates a risk that depends on other known risk factors. it was tested in 183 members of a family cohort study (patients with combined hyperlipidaemia and/or hyperuricaemia) whether the active pai antigen or the whole pai antigen showed a stronger relation to t-pa and metabolic variables. the active pai antigen was determined using the elisa actibind pai-1 (technoclone / immuno); the whole pai-1 antigen was measured using the elisa pai-1 (technoclone / immuno).
t-pa activity was determined with the coaset t-pa from chromogenix; the tintelize tpa from biopool was used for the determination of t-pa antigen. the active pai antigen showed a stronger correlation to t-pa activity and t-pa antigen than the whole pai antigen. circulating t-pa activity was influenced predominantly by the active pai antigen. both pai antigens were correlated in a similar manner with metabolic variables, lipoproteins and f vii.

table: correlations of active and whole pai antigen (** p < 0.001)

                       active pai antigen   whole pai antigen
active pai antigen           1.000               0.851 **
whole pai antigen            0.851 **            1.000
t-pa activity               -0.594 **           -0.492 **
t-pa antigen                 0.604 **            0.497 **
body mass index              0.502 **            0.462 **
triglycerides                0.452 **            0.441 **
total cholesterol            0.252 **            0.255 **
ldl-cholesterol              0.263 **            0.264 **
hdl-cholesterol             -0.357 **           -0.355 **
apolipoprotein b             0.428 **            0.402 **
apolipoprotein a-i          -0.233 **           -0.211 **

the lower relationship of the whole pai antigen to t-pa is obviously caused by patient samples with high levels of whole pai antigen in contrast to normal values of active pai as well as of t-pa. possibly, a high ratio of whole pai antigen / active pai antigen is caused by a rise of latent pai, the main form of pai in the platelets. the clinical importance of an increased ratio of whole pai antigen / active pai antigen remains under investigation. the cyclic polypeptide antibiotics bacitracin a and bacilliquin from bacillus licheniformis and gramicidin s from bacillus brevis var. g.b. were used for investigation. we studied their influence on fibrinolytic and coagulation activity in vitro. methods: to a solution of human plasmin (thrombin), containing 0.2 mg of protein (1 nih unit)/ml, the solution of antibiotics (0.1-8.0 mg) was added. then we determined the fibrinolytic activity of the mixtures using azofibrin lysis, and thrombin activity was determined according to the speed of fibrin clot formation from fibrinogen solution. results:
[table of inhibition results obtained in our laboratory, including the antibiotics' influence on urokinase activity; values not recoverable from the source. legend: ki, mm -- the constant of inhibition; n.d. -- within the studied limits no inhibitory activity was observed; i. -- inhibitory activity was observed but ki was not determined; +, ++, +++ -- effect of inhibition (in relative terms).] conclusion: the results obtained testify to the necessity of a cautious approach to the use of polypeptide antibiotics for various sorts of therapy in view of their possible influence on the fibrinolytic and coagulation activity of the organism. these results were used for the preparation in our laboratory of biospecific sorbents containing gramicidin a, bacilliquin and gramicidin s as ligands; they can reversibly bind thrombin, plasmin (plasminogen) and urokinase directly from crude extracts. the enzymes are selectively eluted without substantial losses of specific activity in a yield of 60-90%. there is a great body of rather contradictory information dealing with fibrinolysis in liver cirrhosis, which can be accelerated, normal or reduced, depending on the type of cirrhosis and investigation techniques (clot lysis, fibrinolytic component measurements). our previous finding was that in vitro plasma clot lysis, induced by exogenously added tpa or streptokinase, proved to be reduced, and this had a good correlation with the severity of the disease and the elevation of plasmatic von willebrand factor levels. in vitro clot lysis tests, induced by tpa, were performed in 41 patients with alcoholic liver cirrhosis, utilising a microplate light-scattering assessment method. the tests were repeated using the same plasma samples in each patient with a microplate which was covered by a cultured endothelial-cell monolayer (umbilical vein, huvec).
clot lysis speed proved to be 1.5-2 times slower in the huvec milieu in the control group, while in the cirrhotic patients this inhibition was stronger and resulted in a 5-fold reduction of lysis speed. our results suggest that cirrhotic plasma is able to accelerate the release of fibrinolytic inhibitors from cultured endothelial cells, a phenomenon which may also contribute to the complex alterations of in vivo fibrinolysis in cirrhotic patients. deep vein thrombosis (dvt) is a systemic disease with prolonged clinical manifestation. anticoagulation therapy in dvt is not completely effective. thrombolytic therapy may give rise to a systemic lytic state; the fibrin-specific agents (scu-pa and t-pa) have short half-lives in the circulation. we investigated the potency of the acylated plasminogen-streptokinase activator complex (gbpg-sk) for deep vein clot dissolution as compared to the well-known sk and apsac, both in vitro and in vivo in a model of venous thrombosis in an arterio-venous shunt in rats. it was shown in the in vitro study that the fibrinolytic activity of plasminogen activators mainly depends on their stability in plasma. stability studies were carried out by incubating sk and pg-sk activator complexes in plasma with euglobulin precipitation. total fibrinolytic activity was measured by the fibrin plate method. gbpg-sk possessed greater stability in human plasma than apsac or sk because of its prolonged inactivation period (the deacylation half-life for gbpg-sk was 230 ± 21 min in contrast with 73 ± 6 min for apsac). the stability of the two acylated thrombolytics (gbpg-sk and apsac) was in inverse proportion to their first-order deacylation rate constants (6.0 x 10^-5 and 2.9 x 10^-4 sec^-1, respectively). the fibrinolytic potency of sk, apsac and gbpg-sk was measured by 125i-labeled fibrin clot lysis in plasma and in vivo by lysis of a preliminarily formed 125i-labeled fibrin clot inserted into the jugular vein.
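the reported deacylation half-lives and first-order rate constants are linked by the standard first-order decay relation t1/2 = ln 2 / k. a minimal sketch of that conversion (illustrative, not the authors' calculation):

```python
import math

def rate_constant_from_half_life(t_half_s):
    """first-order decay: k = ln(2) / t1/2, with t1/2 in seconds."""
    return math.log(2) / t_half_s

# reported deacylation half-lives, converted from minutes to seconds
k_gbpg_sk = rate_constant_from_half_life(230 * 60)  # ~5.0e-5 s^-1
k_apsac = rate_constant_from_half_life(73 * 60)     # ~1.6e-4 s^-1

# the more stable complex (longer half-life) has the smaller rate
# constant, i.e. stability is inversely proportional to k
print(f"gbpg-sk: {k_gbpg_sk:.2e} s^-1, apsac: {k_apsac:.2e} s^-1")
```

the constants obtained this way agree in order of magnitude with the deacylation constants quoted in the abstract; exact agreement is not expected, since the reported constants were measured in plasma rather than derived from the half-lives.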
the fibrinolytic activity of the acylated plasminogen activators gradually increased in time. under sk administration the clot lysis came to an end by 2 hours, while apsac and gbpg-sk had not lost their activity for 5-6 hours. gbpg-sk possessed significantly more prolonged fibrinolytic activity than apsac. the acyl-enzymes did not significantly influence plasminogen, alpha-2-antiplasmin and fibrinogen levels in plasma, in accordance with their activity specific to fibrin-bound plasminogen. in contrast, sk produced a significant depletion of plasminogen, alpha-2-antiplasmin and fibrinogen levels in plasma. it seems, on the basis of in vitro and animal experimentation, that apsac with its moderately fast deacylation rate is more suitable for rapid thrombolytic effect, but gbpg-sk with its slow deacylation rate is suitable for deep vein thrombosis, when rapid thrombolysis is less critical. it is well known that complete lysis of thrombi is usually not observed during thrombolytic therapy. in the present study we have attempted to quantify a possible mechanism of fibrinolysis inhibition during thrombolysis. 125i-labelled partially cross-linked fibrin clots of different volume (0.1-0.35 ml) were immersed in tris-hcl buffer (3 ml) containing plasmin (5-100 nm) at 37 °c. the lysis rate was detected by counting of soluble fibrin degradation products (fdp). in all cases lysis slowed down and stopped within 3 h, though the clots had dissolved only up to 60-85%. no irreversible inhibition of plasmin caused by denaturation occurred, as was judged by the measurement of fibrinolytic activity in the diluted samples. however, the increase of fdp concentration in the surrounding buffer led to reversible inhibition of the fibrinolytic activity of plasmin down to 5% of baseline. sds-page analysis under non-reduced conditions showed the accumulation of high-molecular-weight fdp in the surrounding buffer.
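a minimal numerical sketch of the product-inhibition picture these data suggest, using the kinetic parameters reported in this abstract (kcat = 1.36 min^-1, km = 1.33 µm, kp = 0.12 µm). the competitive rate-law form and the chosen enzyme/substrate/fdp concentrations are illustrative assumptions, not the authors' model:

```python
# illustrative sketch: michaelis-menten rate with competitive product
# inhibition, v = kcat*E*S / (km*(1 + P/kp) + S). parameters from the
# abstract; concentrations below are hypothetical.

KCAT = 1.36   # min^-1
KM = 1.33     # µM (fibrin-derived substrate)
KP = 0.12     # µM (soluble fdp acting as competitive inhibitor)

def lysis_rate(e, s, p):
    """rate (µM/min) at enzyme conc. e, substrate s, product p (all µM)."""
    return KCAT * e * s / (KM * (1 + p / KP) + s)

e, s = 0.05, 1.0                 # hypothetical concentrations, µM
v0 = lysis_rate(e, s, 0.0)       # no fdp present
v_fdp = lysis_rate(e, s, 0.2)    # fdp at the upper clinical level cited
print(f"relative rate at 0.2 µM fdp: {v_fdp / v0:.2f}")
```

with these assumptions the lysis rate roughly halves at the 0.05-0.2 µm fdp levels cited for thrombolytic therapy, which is consistent with the abstract's conclusion that accumulating fdp can retard fibrinolysis.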
the inhibition phenomenon could be connected with the specific binding of plasmin to soluble fdp having exposed lysine residues and the subsequent removal of the enzyme from the fibrin surface. unexpectedly, given the heterogeneous character of the reactions involved, the change of the clot surface area during lysis did not affect the fibrinolysis kinetics in any of the concentration intervals. to estimate the kinetic parameters, the kinetic curves were linearized in the coordinates [p]/t versus (1/t)·ln([s]0/([s]0-[p])). the obtained parameters were as follows: kcat = 1.36 min^-1, km = 1.33 µm, kp = 0.12 µm. clinical trials have shown that fdp concentrations during thrombolytic therapy of deep venous thrombosis and acute myocardial infarction are usually approximately in the range 0.05-0.2 µm. therefore the described phenomenon of fibrinolysis inhibition by formed fdp may take place during thrombolytic therapy. al. calatzis, an. calatzis, +m. klmg, +l. mielke, +r. hipp, a. stemberger, institute for experimental surgery and +institute of anesthesiology, technische universität münchen. thrombelastography (teg) is an established method for the detection of fibrinolysis. fibrinolysis is usually diagnosed when the teg amplitude decreases by more than 15% after the maximum amplitude is reached. this takes a considerable amount of time (more than 30 minutes). our approach is based on the understanding of fibrinolysis as a process which runs in parallel to coagulation and is not exclusively subsidiary to it. the effect of fibrinolysis on the growing clot in the teg is shown by the comparison of two parallel teg measurements: exteg: teg measurement with standardised activation of the extrinsic system. apteg: exteg with in-vitro fibrinolysis inhibition via aprotinin. exteg reagent (ex): 1:2 dilution of innovin (recombinant thromboplastin reagent, dade) with aqua dest. apteg reagent (ap): 5 parts innovin, 2 parts trasylol (aprotinin, bayer, 10,000 kie/ml), 3 parts aqua dest.
test procedure: 10 µl ex or ap + 300 µl citrated blood (cb) + 50 µl cacl2 solution 0.15 m. the only difference between the two reagents is the addition of 20 kie aprotinin in the apteg, leading to in-vitro fibrinolysis inhibition. the usage of disposable pins and cups (haemoscope, illinois, usa / e.m.s., vienna) is recommended for ensuring standardised conditions for both measurements. results and discussion: when there is better clot formation in the apteg (corresponding to a lower so-called k-value) than in the exteg, fibrinolysis can be suspected. this technique requires only commercially available reagents and is easy to perform on conventional teg systems. due to the standardised coagulation activation with a thromboplastin reagent, fibrinolysis can be detected even when inhibitors like heparin are present in the circulation. according to our experience using this technique during liver transplantation, clinically relevant fibrinolysis can be detected as described in less than 10 minutes. many thromboembolic (massive pulmonary embolism, proximal deep-vein thrombosis, etc.) and coronary diseases (infarction, acute phase, etc.) require fibrinolytic therapy for early recanalization. the application of the well-known or new thrombolytic agents needs the use of specific, simple and reproducible methods for the determination of fibrinolytic activity. we suggest new methods for measuring the blood plasma concentrations of plasmin, plasminogen, antiplasmins, and urine urokinase activity. these methods involve the employment of the chromogenic substrate azofibrin (human fibrin, covalently labeled with p-diazobenzenesulfonic acid). methods: 0.2 ml of the studied solution was added to 0.8 ml of azofibrin suspension in an appropriate buffer (5-10 mg/ml) and the mixture incubated at 37 °c for 10-60 min. after the end of incubation the mixture was filtered, the volume of the solution brought up to 4 ml with 0.02 m naoh and the optical density was determined at 440 nm. results:
azofibrin can be used for the quantitative determination of proteinase activity in the search for new fibrinolytic agents. for comparison, the results of our studies of the fibrinolytic activity of some proteinases with the use of azofibrin are presented. with an increase of pai and ldl- and a decrease of hdl-cholesterol concentrations, it is concluded that the increased cardiovascular risk in diabetes mellitus was partly caused by a down-regulation of the fibrinolytic system and an increase of erythrocyte aggregation and plasma viscosity. besides disturbances of lipid metabolism, an abnormal whr seems to be an additional atherogenic factor in dm. plasma concentrations of thrombin-antithrombin-iii (tat) and plasmin-alpha-2-antiplasmin (pap) complexes and d-dimer were investigated in 50 patients treated with thrombolytic therapy for acute myocardial infarction (ami) either with streptokinase (n=24), urokinase (n=16) or recombinant t-pa (rt-pa, n=10). all patients received an intravenous heparin bolus of 5,000 iu on admission, which was followed at once by an infusion of 1,000 iu/hr for the next three days, titrated to maintain the partial thromboplastin time at twice the control value. tat, pap and d-dimer were measured by enzyme immunoassay on admission, 1, 2, 4, 6, 8, 12 and 24 hours and on days 3 and 7 after admission. groups did not differ significantly in regard to age, sex, delay and infarct location. on admission, no marker differed significantly between groups. thereafter, tat levels increased significantly exclusively in the rt-pa treated group. from 2 to 6 hours after admission, tat was significantly higher in rt-pa treated patients than in the streptokinase and urokinase treated groups (p<0.02). however, during continuous heparin infusion, which was started immediately after the end of thrombolytic therapy, tat concentrations in each group decreased below admission values. pap was significantly higher only 1 hour after admission in the rt-pa group (p=0.03).
d-dimer did not differ significantly between groups. our results demonstrate that rt-pa induces a hypercoagulable state, which may contribute to reocclusion after successful reopening of the infarct-related coronary artery. the significant tat decrease during continuous heparin infusion supports the concomitant use of thrombin inhibitors as adjunctive therapy with thrombolytic treatment for ami. thus, in acute myocardial infarction patients, thrombin generation is markedly influenced by the thrombolytic agent used and concomitant heparin therapy. endothelium-derived relaxing factor-no (edrf-no) plays a major role in the regulation of vascular tonicity and also exerts a platelet-inhibitory action. however, due to the chemical nature of edrf-no, little is known about its production and activity as a general index or marker of vascular function in human diseases. one way to achieve this can be measurement of nitrate/nitrite excretion in the urine, which seems to reflect vascular edrf-no production. in this report a self-developed elisa method is described, which was used for this purpose. nitrate/nitrite urinary excretion proved to be significantly decreased in insulin-dependent as well as in non-insulin-dependent diabetes mellitus. after a comparison of the excretion values with other markers of angiopathy (von willebrand factor, soluble thrombomodulin, beta-thromboglobulin) it seems acceptable that urinary nitrate/nitrite excretion can be a useful indicator of diabetic vascular disorders. two major concerns still accompany the application of prothrombin complex concentrates (pcc). viral safety has to be guaranteed, and therefore several measures for virus inactivation or elimination are taken during the manufacturing process. the inherent risk of thrombo-embolic side effects has to be considered. to minimize these risks and to achieve good clinical efficiency, the quality criteria for pcc's are under pending discussion.
it is generally accepted that a modern pcc preparation should contain all four coagulation factors in a well-balanced proportion and that it should also contain protein c and protein s. additionally, the concentration of activated coagulation factors should be kept at a minimum. the present pcc production process mainly consists of a qae-sephadex extraction of cryopoor plasma followed by a solvent/detergent virus inactivation step. further purification is achieved by subsequent chromatography on deae-sepharose. the aim of this study was to improve product quality by avoiding f vii activation without implementing major changes to the production process. at the same time, a second virus-eliminating step was added to the production process. it could be shown that speeding up the chromatographic process by switching the deae-sepharose chromatography from a classical axial column to radial chromatography resulted in a significant reduction of f viia generation. mainly the reduction of contact time, resulting from the highest possible flow rates, leads to the wanted effect. the relation between f vii and f viia was 10 : 1 or more. in order to investigate the feasibility of virus filtration, the eluate of the deae-sepharose column was filtered through a virus-removing ultipor vf filter. the analysis of the solution before and after filtration showed that the filtration had no influence on coagulation factor activity, protein content, proteolytic activity etc. preliminary studies showed significant virus reduction values. in the past few years the expediency of treatment aimed at developing immunological tolerance in hemophilia patients by way of complete removal of the inhibitor with high doses of factor viii has been discussed in the literature. we observed 121 patients with hemophilia. inhibitors to factor viii:c were revealed in 32.7 % of patients with hemophilia a and to factor ix in 1.6 % of patients with hemophilia b.
the level of the inhibitor was not higher than 87 bethesda u/ml, that is, those patients were not regarded as "high responders". a high incidence of inhibitors in young patients (from 7 to 26 years of age, 51.9 %) compared with older patients (from 27 to 40 years of age, 11.2 %) testifies to the probability of inhibitor development during treatment with modern concentrated preparations of factors viii and ix. inhibitor development in patients (40.5 %) in the course of antihemophilic concentrate transfusions is evidence of alloimmunization of patients with proteins. the investigations show that in the course of transfusion therapy patients develop secondary immunodeficiency due to chronic antigenic stimulation of the immune system with high doses of allogenic proteins. against the background of immunodeficiency, patients with hemophilia develop complications of an immune character: infectious complications -- 53.9 %, autoimmune processes -- 44.9 %, secondary tumours -- 1.2 %. plasmapheresis is the most rational method of removing the inhibitor in patients with a low level of inhibitor ("low responders", < 10 bu/ml) and in patients with a mean response. thus it should be noted that treatment aimed at developing immunological tolerance is not only expensive and economically unprofitable but also not indifferent to the organism. in a recent multicenter study 73 previously untreated patients (pups) with severe hemophilia a were treated with a recombinant factor viii concentrate (rfviii, recombinate®). during fviii treatment 21 (29%) developed inhibitors: 6 high-titer (>5 bethesda units (bu)/ml), 4 low-titer (<5 bu/ml) and 11 transient inhibitors. plasma samples from before treatment and during treatment but before inhibitor occurrence were available in 12 inhibitor patients. these plasma samples were analyzed by a highly sensitive immunoprecipitation (ip) assay for the presence of anti-fviii antibodies.
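as background for the bethesda units quoted in these abstracts: one bethesda unit is the amount of inhibitor that neutralizes 50% of fviii activity in a 1:1 mix with normal plasma, so a titer can be approximated from the residual activity of a tested plasma dilution. a minimal sketch under the textbook log2 approximation (read out only for dilutions giving roughly 25-75% residual activity; this is not the participating laboratories' exact protocol):

```python
import math

def bethesda_units(residual_activity_pct, dilution=1):
    """approximate inhibitor titer (bu/ml) from residual fviii activity (%).

    based on the definition that 1 bu halves fviii activity:
    residual = 100 * 0.5**bu, hence bu = log2(100 / residual).
    the result is multiplied by the dilution factor of the tested plasma.
    """
    return math.log2(100.0 / residual_activity_pct) * dilution

print(bethesda_units(50))      # 1.0 bu/ml by definition
print(bethesda_units(25))      # 2.0 bu/ml
print(bethesda_units(50, 16))  # a 1:16 dilution at 50% residual -> 16.0 bu/ml
```

this makes the "high responder" threshold of 5 bu/ml used above concrete: undiluted patient plasma leaving less than about 3% residual activity already exceeds it.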
in 9 (66%) a significant increase of anti-fviii antibodies was seen, indicating the development of a clinically relevant inhibitor titer. this immune response occurred after 2 to 17 (median 5) exposure days (ed). in the same period only 3 out of 15 inhibitor patients showed a decreased in vivo recovery. in 16 pups who developed no inhibitors, plasma samples from the entire treatment period were available. an immune response to rfviii treatment was seen in 7 pups after 2 to 43 ed (median 24 ed). the immune response was later and less pronounced in comparison to inhibitor pups before inhibitor occurrence. with the ip method the detection of an early immune response is possible, which might be predictive of later inhibitor development. the inclusion of the ip method should be considered for future multicenter pup studies. in the past, anaphylactic reactions to plasma and plasma components have been a common complication of replacement therapy in patients with hemophilia a and b. we report on 3 severe bleeding episodes in 2 patients with hemophilia a and b, respectively. both patients had a history of life-threatening anaphylactic reactions after exposure to different plasma-derived clotting factor concentrates, including intermediate-purity factor viii and factor ix concentrates, respectively. high-purity factor concentrates were tolerated well without any allergic side effects. a 67-year-old patient with a moderate form of hemophilia a (f viii 4 %) had a history of severe immediate reactions with skin manifestations and bronchospasm after exposure to fresh frozen plasma, cryoprecipitate and 3 different plasma-derived factor viii concentrates of intermediate purity. in all episodes pretreatment with corticosteroids and antihistamines was unsuccessful in avoiding severe bronchospasm. replacement therapy with two different recombinant factor viii concentrates was tolerated well without any side effects.
a 12-year-old haemophilia b patient developed hypersensitivity reactions to prophylactic factor ix substitution, which could be overcome by using a factor ix concentrate with improved purity. a recent recurrence of hypersensitivity under this treatment was finally overcome by the use of a highly purified (monoclonal antibody) factor ix concentrate. we conclude from these findings that high-purity factor concentrates, possibly due to the absence of soluble hla antigens, are advantageous in patients disposed to allergic reactions. introduction: antibody formation against factor (f) viii remains one of the most severe complications in repeatedly transfused patients with haemophilia a. as reported previously in our study on the incidence of fviii inhibitors, we have observed a high incidence of fviii inhibitors among our haemophilia a patients. it is still not clear why certain haemophiliacs develop antibodies and others do not. a number of previous studies suggest that there is a genetic predisposition for fviii inhibitor development. thus, the purpose of our study was to examine whether there is a correlation between fviii antibody formation and genetically determined histocompatibility antigen (hla) patterns in our haemophiliacs. patients and methods: hla class i (a, b, c) and hla class ii (dr, dq) typing was carried out for 51 and 44 multi-transfused paediatric haemophilia a patients, respectively (fviii:c activity < 5%), including 22 who had developed an antibody to fviii: 19 were high responders (> 5 bu), 3 were low responders (< 5 bu). hla typing was performed by a standard two-stage microlymphocytotoxicity procedure (drk frankfurt) using antisera with defined hla specificity (biotest diagnostica). results: we found an under-representation of hla-a2 in fviii inhibitor patients when compared with the subgroup without inhibitor. in regard to the hla-b and hla-c antigen frequencies there are no apparent differences between the groups.
among the class ii antigens there were higher frequencies of dr1, drw52 and dqw1 in the non-inhibitor group. however, the reduction in hla-a1, hla-cw5, hla-dqw3 and hla-dr4 frequency for inhibitor patients as reported previously could not be confirmed in our study. conclusion: so far it remains unclear whether there is a significant association of certain hla alleles with the development of fviii antibodies. recombinant factor sq (r-viii sq, pharmacia) is a b-domain-deleted recombinant factor viii. it is formulated without albumin (hsa). the product has been shown to have in vitro and in vivo biochemical characteristics similar to a plasma-derived full-length protein (p-viii). the international clinical trial programme was initiated in march 1993. pharmacokinetic studies have shown that the b-deleted r-viii sq should be given according to the same dosage principles as a full-length p-viii. at present, the product is being tested in previously treated patients (ptps) and untreated patients (pups) with severe haemophilia a (viii:c < 2 %), both during long-term treatment (on-demand therapy or prophylaxis) as well as during surgery. the long-term study in previously treated patients in germany was started in january 1994. thirteen patients have been included in 8 centers. all patients are still on treatment with r-viii sq, most of them receiving prophylactic treatment. global treatment efficacy has in general been considered excellent or good. no serious clinical adverse events related to the study product have been reported, nor have any inhibiting antibodies to factor viii or antibodies to mouse-igg or cho-cell components developed in the patients. further results such as data on efficacy, half-life, recovery and safety will be presented in detail at the meeting.
nowadays it is not sufficient to regard hemophilia only as a hemorrhagic diathesis of coagulation genesis, caused by deficiency or molecular anomalies of a coagulation factor, without taking into account the immunity state. on examination of 125 patients (pts) (hemophilia a -- 110 pts, hemophilia b -- 11 pts, willebrand's disease -- 4 pts) the development of immune complications was revealed in 34.4 %. chronic persistent hepatitis (3.2 %), chronic active hepatitis (3.2 %), herpes simplex (1.2 %), chlamydiosis (1.2 %) and bacterial infection (7.2 %) were regarded as infectious complications. bacterial infections have a routine course due to the preserved phagocytic function of neutrophils, while viral infections, resistance to which is connected with the t-cell link of immunity, take on a chronic persistent course. the mechanism of the development of autoimmune processes (autoimmune thrombocytopenic purpura -- 2.4 % of pts, immunocomplex disease -- 4.9 % of pts, the appearance of immune inhibitors -- 34.4 % of pts) is connected with the impairment of immunological surveillance over b-cell autoimmune clones as a result of dysbalance in the system of t-lymphocyte immunoregulatory subpopulations. lymphadenopathy and splenomegaly (4.9 %) develop due to benign proliferation of lymphoid tissue as a result of impairment of the regulatory function of the t-lymphocyte system, or they may be evidence of virus infection. we observed one episode of acute leukemia. immune complications in hemophilia patients develop against the background of secondary immunodeficiency caused by chronic antigenic stimulation of the patients' immune system with high doses of the allogenic proteins which plasma preparations contain. with immune complications, hemophilia patients develop hemorrhages whose pathogenesis is quite different from that caused by the coagulation factor defect, so this should be taken into account in the course of treatment.
control of hemophilia therapy was classically based on four parameters: life expectancy of patients, orthopedic status (normal zero), pettersson score and social integration. often, however, these parameters described an irreversible status with permanent damage, particularly of the joints, especially when patients were grown up. in order to establish risk-adapted therapy protocols to prevent hemophilic osteoarthropathies, quality control programs have to be set up that allow for early adjustment of dosage and substitution frequency. here bleeding frequency is one of the main parameters, being a clear hint for the possible development of a target joint. since 1988 we have maintained a computer database (haemopat) that contains data from all patients treated in our center. tables and graphs allow for early detection of an increased bleeding tendency in a given joint, and accordingly for adjustment of therapy. the results of 8 years of measuring the causes of joint damage, rather than merely documenting the orthopathies as such, will be demonstrated. in parallel a new program (haemopat win 1.0) will be introduced, allowing for easier handling of data and their evaluation. this program will be used as of december 1995. in combination with a substitution calendar to be filled in by all patients, in which factor concentrates, lot numbers, dosage and date of administration will be constantly recorded, this program will extend our existing database in order to follow closely the clinical and orthopedic parameters of each patient, and consequently act as a strict control of therapy quality. additionally, it provides sufficient data to fulfil any documentation needs requested by medical authorities. the program will be available to all those interested free of charge. 2) kinderklinik der westf. wilhelms univ. 
münster 3-6) biotest pharma gmbh, dreieich. haemoctin® sdh: the fviii sdh (sdh = solvent detergent and dry heat = 100 °c, 30 min) from biotest pharma is a high-purity (specific activity ~ 100) fviii concentrate manufactured from large human plasma pools. virus validation studies have shown virus inactivation/reduction (log 10) during the manufacturing process for lipid-coated viruses such as: hiv-1 > 16.2; psr > 16.8; vsv > 14.5; bvdv > 15.7; hcv > 4.5* and non-enveloped viruses such as: parvo** = 2.7; reo > 5.3*** and hav > 13.9. more than 50 hemophilia a patients (ptps = previously treated patients), baseline fviii activity < 1 %, were included in an international drug monitoring study to follow their fviii inhibitor status. the hemophilia centers included were three centers from hungary (heim pál children's hospital and the national inst. of haematology, budapest, and the regional blood transfusion center, debrecen) and four centers from germany (two from berlin, one from frankfurt/main and one from münster). patients were enrolled in the drug monitoring beginning aug. 1993. at entry none of the patients had a detectable inhibitor. at the end of sept. 1995 there were no side effects or adverse events in connection with the use of haemoctin®. before the haemoctin drug monitoring study, the patients were treated with cryoprecipitate or purified fviii products. inhibitor testing was done on patients' plasma samples using the bethesda method. repeated fviii recovery determination (between 12 and 24 hrs after haemoctin® application) demonstrated the expected recovery and normal half-life. none of the hemophilia a patients treated with haemoctin® sdh developed a clinically relevant inhibitor. at the beginning of the study, the clinical efficacy of haemoctin® was studied in 16 hemophilia a patients and shown to give an in vivo recovery of 71 ± 15 % by one-stage assay and 77 ± 17 % by a chromogenic assay. t½ values were 13 ± 2.8 and 12.7 ± 3.2 hrs respectively. 
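in vivo recovery and half-life figures of the kind reported above follow from simple arithmetic. the sketch below is illustrative only: the function names, the ~40 ml/kg plasma volume and the example numbers are assumptions rather than study data, and it presumes single-exponential decay between the two sampling points.

```python
import math

def fviii_half_life(c1, c2, dt_hours):
    """half-life from two post-infusion fviii levels, assuming
    single-exponential decay: t1/2 = ln2 * dt / ln(c1/c2)."""
    return math.log(2) * dt_hours / math.log(c1 / c2)

def in_vivo_recovery(rise_iu_per_ml, dose_iu, weight_kg, plasma_ml_per_kg=40):
    """observed plasma rise as a percentage of the rise expected if the
    full dose distributed into plasma (~40 ml/kg is an assumed value)."""
    expected = dose_iu / (weight_kg * plasma_ml_per_kg)
    return 100 * rise_iu_per_ml / expected

fviii_half_life(1.0, 0.5, 13.0)      # 13.0 h: the level halves in 13 h
in_vivo_recovery(0.7, 2800, 70)      # 70.0 % for a hypothetical patient
```

the same two-point formula generalizes to a regression over all sampling times when more than two post-infusion levels are available.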
the study for the clinical efficacy of haemoctin® sdh was repeated in a group of 6 patients approximately two years later. although cd4 lymphocyte counts are known as reasonable predictors of prognosis in hiv infection, the cd4 count is not in all cases an infallible indicator of prognosis. therefore several serological markers are used to predict disease outcome, including beta-2 microglobulin (β2m), immunoglobulin a (iga), lymphocyte counts (lymph) and others. in this study we followed a cohort of 23 haemophiliacs (19 with haemophilia a, 4 with haemophilia b) and 2 patients with severe von willebrand's disease over a period of 28 months (mean; range: 22-34). testing for β2m, igg, iga, igm, cd4 and cd8 cell counts (absolute and relative), cd4/cd8 ratio, and absolute and relative leucocyte and lymphocyte counts was performed at least 3 times a year. at the same time clinical examinations and review of history were undertaken. means of laboratory tests for every quarter of a year and significant changes during the time of observation were calculated and correlated with clinical data.
months: 1-4 / 5-8 / 9-12 / 13-16 / 17-20 / 21-24
cd4¹: 440±956 / 344±924 / 278±925 / 302±240 / 273±220 / 166±125
cd8¹: 1165±474 / 1171±523 / 1236±1187 / 1017±439 / 930±412 / 873±478
β2m²: 2.0±0.6 / 3.0±1.0 / 3.0±1.2 / 3.0±1.1 / 3.5±1.0 / 3.5±1.3
lymph¹: 1.98±0.6 / 1.83±0.6 / 1.72±0.7 / 1.67±0.6 / 1.46±0.6 / 1.31±0.5
¹ means/µl ± standard deviation; ² means mg/l ± standard deviation
during the time of observation we found significant changes of cd4 (absolute and relative), absolute cd8 counts, cd4/cd8 ratio, β2m, leucocytes and lymphocytes. the absolute cd4 and cd8 counts correlated clearly with lymphocyte and leucocyte counts but not with β2m. the prognostic value of the tested parameters is discussed by calculation of correlations with clinical data, anti-retroviral treatment and treatment of haemophilia. 
the availability of high-purity factor concentrates has recently encouraged clinicians to use perioperative continuous infusion of fviii or fix to prevent or reduce bleeding in patients with haemophilia. in contrast to repeated high-dose bolus injections, the continuous-infusion treatment regime maintains constant coagulation factor activity at a level necessary for hemostasis, reducing the total cost of treatment by about 20 % and preventing possible side effects of bolus doses. the new application mode, however, requires stable products which tolerate slow passage through an infusion device. our objective was to test in vitro the fviii concentrate immunate (stim plus) and the fix concentrate immunine (stim plus) at room temperature, under conditions of long-term contact with polypropylene tubing in an infusion pump. infusion rates were chosen to mimic the clinical situation. the control samples were not infused through the pump but were otherwise treated identically. test samples were drawn before and at 1, 4, 8, 24 and 48 hours after the onset of each infusion run. fviii (one-stage, two-stage and chromogenic assay) and fix (one-stage) activity were measured using immuno reagents. the presence of activated factors was measured by naptt, while fiia, fxa, plasmin and pre-kallikrein activator were detected with specific chromogenic substrates. the data showed equivalent results between test and control samples with no loss of fviii or fix activity. the potencies of both immunate (stim plus) and immunine (stim plus) remained within 100 ± 20 % of labelled values within 48 hours after onset of infusion. in conclusion, immunate (stim plus) and immunine (stim plus) are suitable for continuous infusion when using an automatic infusion device within the applied test criteria. in humans, circulating half-lives of asparaginase enzymes from e. coli and erwinia chrysanthemi vary within a wide range. moreover, half-lives differ not only among different e. 
coli strains but also among commercial e. coli preparations. to investigate the possible influence of two different sources of e. coli asparaginase (asn) preparations on the fibrinolytic system of leukemic children, a prospective randomized study was performed correlating asn pharmacokinetics (asn activity, asparagine depletion) with fibrinolytic parameters (plasminogen (plas), α2-antiplasmin (α2ap), tissue-type plasminogen activator (t-pa), plasminogen activator inhibitor 1 (pai-1), d-dimer (d-d)). together with prednisone, vincristine and an anthracycline, 20 children received 10000 iu/m² asn medac® (originally purchased: kyowa hakko, japan) and 20 children 10000 iu/m² crasnitin® (bayer, leverkusen, germany). blood samples for pharmacokinetic and coagulation analysis were drawn before the first asn administration and every third day whilst on medication. the results are shown in the table. asn activity shows a negative correlation (spearman: rho/p) to plas (-0.637/0.0003) and α2ap (-0.751/0.0001). a positive correlation was found between asn activity and d-dimer formation (0.475/0.01). t-pa and pai-1 showed no relationship to asn activity. all children showed complete asparagine depletion at a detection limit of 0.1 µm during the course of asn administration. two thrombotic events occurred in the kyowa group. one of the distinctions between the two e. coli asn preparations administered in this study is the absence of cystine in the kyowa asn, which also has a lower isoelectric point and a longer half-life than the bayer type a asn. these observations suggest a longer inhibition of protein synthesis, which may be the cause of a higher rate of side effects. along with studies on asn pharmacokinetics, dose recommendations need to be tailored to the specific asn preparation employed to ensure optimal antineoplastic efficacy while minimizing the hazard of complications. 
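spearman rank correlations like the rho values quoted above are obtained by ranking both series and taking the pearson coefficient of the ranks. a minimal pure-python sketch (illustrative only, not the study's statistics software; p-values are omitted):

```python
def _ranks(xs):
    """average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """pearson correlation of the rank vectors of x and y."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

spearman_rho([1, 2, 3, 4], [10, 9, 7, 2])  # -1.0: perfectly inverse ranking
```

because only ranks enter the computation, the coefficient is robust to the skewed distributions typical of enzyme-activity and d-dimer data.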
different types of coagulopathy in hepatic veno-occlusive disease (vod) and capillary leakage syndrome (cls) after bone marrow transplantation. w. nürnberger, s. eckhof-donovan, st. burdach and u. göbel, department for pediatric hematology and oncology, heinrich heine university medical center, düsseldorf, germany. it is generally accepted that cls, coagulation activation and refractoriness to platelet transfusions are part of the syndrome of hepatic vod. we assessed patients with either vod or cls or both vod and cls, in order to analyze the influence of either syndrome on different aspects of hemostasis. vod was diagnosed according to jones et al. [transplantation 44 (1987) 778]. the diagnosis of cls was ≥3 % increase of body weight in the past 24 hours and non-responsiveness to furosemide [nürnberger et al., ann hematol 17 (1993) 67]. patients with vod, cls or both were compared to control patients without either diagnosis. eight patients suffered from both vod and cls, 5 patients only from vod, and 8 only from cls. 61 patients had neither syndrome and served as the control population. activation of the coagulation system was assessed by increase of tat complexes and/or increased consumption of at iii. introduction: lung cancer goes along with coagulation activation and increased thromboembolic risk. the acute phase reaction in cancer patients leads to elevated levels of c4b-binding protein (c4b-bp) followed by a shift from free to c4b-bp-bound protein s. we tried to find out whether there is a correlation between alterations of c4b-bp, the protein c/protein s system and interleukin 6 (il-6), which is one of the most potent inducers of the hepatic acute phase reaction. patients: 1. 25 patients with lung cancer; 2. control group: 11 patients in complete remission after lung cancer. methods: clotting methods: protein c and s activity; elisa tests: protein c antigen, tat complexes, prothrombin fragments f 1+2, il-6. 
electroimmunodiffusion (laurell): free and total protein s, c4b-bp. results: tat complexes and f 1+2 were elevated in cancer patients. c4b-bp levels were slightly increased (129±19 % of n.), protein s activity was 89±33 % of n. (control group: 108±23 % of n.). il-6 in lung cancer patients was 37.2±52.9 pg/l (control: 27.2±8.2 pg/l). conclusion: one source of the hypercoagulable state in lung cancer patients is decreased protein s activity due to elevated c4b-bp levels. this is probably caused by the hepatic acute phase reaction, which is triggered by increased il-6 levels. these plasma levels correlate with levels of the tumor marker ca 125 and with the stage of the disease, but correlations with patient outcome (disease recurrence and overall survival) have not previously been shown. plasma levels of d-dimer and ca 125 (determined by sandwich elisa assays) were measured prior to treatment in 36 women with figo stage i to iii ovarian cancer and correlated with tumor stage, relapse and overall survival over a mean follow-up period of 28 months (range 16 to 40 months). levels in 71 healthy women and 27 patients with benign ovarian disease served as controls. the occurrence of deep vein thrombosis in the cancer patients was also determined by impedance plethysmography that, when positive, was confirmed by contrast venography. preoperative d-dimer and ca 125 levels in ovarian cancer patients were statistically significantly higher than in controls. preoperative cut-off values were calculated for the prediction of cancer relapse and survival for both measurements. d-dimer levels above a cut-off level of 2060 ng/ml were statistically significantly associated with the rate of relapse but ca 125 levels were not. deep venous thrombosis occurred in 33 % of cases but there was no difference between preoperative levels of d-dimer in patients who subsequently did versus did not develop deep vein thrombosis. 
high levels of d-dimer are associated with more advanced disease and with poor prognosis in patients with ovarian cancer. the high levels of d-dimer are a biologic feature of the malignancy itself that may be attributable, at least in part, to increased conversion of fibrinogen to fibrin in the tumor bed with subsequent degradation of fibrin by the fibrinolytic mechanism. thus d-dimer levels may serve as a marker for overall tumor burden as well as "disease activity". a high incidence of deep vein thrombosis exists in the course of the disease in ovarian cancer patients but preoperative levels of d-dimer are not predictive of this occurrence. von tempelhoff georg-friedrich, michael dietrich, dirk schneider, lothar heilmann, dept. obstet. gynecol., city hospital of ruesselsheim, germany. an increase of plasminogen activator inhibitor activity (pai act.) in the plasma of cancer patients has recently been described. we have longitudinally investigated pai act. in 136 patients with primary breast cancer and compared the results with the outcome of malignancy. patients with untreated primary breast cancer and without proof of metastasis (t1-4 n0-2 m0) were eligible for this study. in all patients coagulation tests including fibrinogen (method according to clauss), d-dimer (elisa) and pai act. (upa-dependent inhibition test) were performed prior to primary operation, 6 months thereafter and at the time of cancer relapse. seventy-two healthy women and 43 patients with benign breast disease served as controls. during a mean follow-up of 32 ± 16 months 34 patients (25 %) developed cancer recurrence and 13 (9.6 %) patients died. in all cancer patients preoperative levels of fibrinogen and pai act. were significantly higher compared to healthy women and to patients with benign breast disease. preoperatively only pai act. was significantly higher in patients with vs. without cancer recurrence (4.52 ± 1.67 u/ml vs. 3.41 ± 1.55 u/ml; p = 0.002). 
in patients with later recurrence pai act. dropped significantly 6 months after operation (p = 0.02) and was again significantly increased at the time of cancer recurrence (4.90 ± 2.89; p = 0.001). a preoperative cut-off value (calculated via cox model) of pai act. above 3.52 u/ml was significantly associated with the rate of relapse (log rank: p = 0.0005) and in 70 % of patients who died of cancer the preoperative pai act. was also above this cut-off. impaired fibrinolysis in patients with breast cancer is significantly associated with the outcome of cancer. a monoclonal heparin antibody (mab) has been raised against native heparin using a heparin-bovine serum albumin conjugate prepared by reductive amination. for further analyses tyramine, which was covalently bound to low molecular mass heparin by endpoint attachment (malsch r et al: anal biochem 1994; 217: 255-264), was labeled with 125-iodine at the aryl residue. the tracer-antibody complex was immunoprecipitated by goat anti-mouse immunoglobulin igg. the mab specifically recognized intact heparin and heparin fractions. the lower detection limit for heparin preparations was 100 ng/ml. no cross-reactivity of the mab occurred with other glycosaminoglycans such as heparan sulfate, dermatan sulfate, chondroitin sulfate a and c. oversulfated heparin showed lower affinity to the antibody hl.18 than 2-o- and 6-o-desulfated heparin. the method established for the purification of the mab was ammonium sulfate precipitation followed by dialysis. sds-page and high-pressure capillary electrophoresis proved the high purity of the received antibody. the biological activity of the mab was tested by the chromogenic assay s2222 and remained stable during purification. in conclusion, the present abstract describes a purified igg1 monoclonal antibody directed against heparin and heparin fractions, which can be used for biological measurements. 
the concentration of heparin and dermatan sulfate in biological fluids is usually measured using radiolabeling. for this purpose aromatic compounds are usually used to insert a radioactive iodine label at the saccharide backbone of the glycosaminoglycan. we developed methods for the specific labeling of heparin and dermatan sulfate at the terminal residue. tyramine was bound by reductive amination to the 2,5-anhydromannitol end of heparin, produced by nitrous acid degradation and confirmed by 13c-nmr spectroscopy (anal biochem 217: 254-264, 1994). this method was also used to produce a low molecular mass dermatan sulfate (lmmd) derivative after partial deacetylation. in order to choose the proper method for evaluating the specific anticoagulant activity in the row of chitosan polysulphate (cp) samples with different degrees of polymerization and sulphation, we applied the pharmacopoeia article (a1) when assessing the ability of direct anticoagulants to depress the coagulability of recalcified sheep blood (using the 3rd international heparin standard), and measured such activity as per a pharmacokinetic model (a2). the model assumes that the kinetics of cp elimination are linear in case of intravenous injection to rabbits, as is observed for heparin: ct = c0 exp(-ke × t), where ct is the cp concentration at time t; c0 is the cp concentration at the moment of injection; ke is the elimination constant. besides, it is assumed that there is a linear dependence of the anticoagulant effect on the dose, which finally makes it possible to calculate the specific activity a2: t = kt × ct + t0, where t is the time of clot formation at different time intervals after cp injection and t0 is the time of clot formation prior to cp injection. the t value was assessed in two tests: blood coagulation time (bct) and activated partial thromboplastin time (aptt). no correlation was observed between a1 and a2. 
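the first-order elimination model and linear effect model above can be sketched numerically. in the fragment below, ke is fitted by log-linear least squares on synthetic concentration data; the function names, the kt/t0 coefficients and all numbers are illustrative assumptions, not values from the study.

```python
import math

def fit_elimination(times, concs):
    """least-squares fit of log(ct) = log(c0) - ke*t on post-injection
    samples; returns (c0, ke). assumes first-order elimination."""
    n = len(times)
    logs = [math.log(c) for c in concs]
    mt, ml = sum(times) / n, sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    return math.exp(ml - slope * mt), -slope

def clot_time(ct, kt, t0):
    """linear effect model t = kt*ct + t0 (kt and t0 illustrative)."""
    return kt * ct + t0

# synthetic samples for c0 = 10 units/ml, ke = 0.5 h^-1
times = [0.5, 1.0, 2.0, 4.0]
concs = [10 * math.exp(-0.5 * t) for t in times]
c0, ke = fit_elimination(times, concs)
half_life = math.log(2) / ke  # ~1.39 h for ke = 0.5 h^-1
```

with ke in hand, the specific activity a2 corresponds to the fitted slope kt of clot time against the modeled concentration.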
at the same time the values of ke and the half-elimination period (t½) obtained with the original method, using quantitative determination of cp in rabbit blood taken at different time intervals after injection, showed a close correlation (r = 0.94, p < 0.05) with the same parameters obtained with the help of the pharmacokinetic model in the bct test. thus it was proved experimentally that the assumption of linear elimination and a linear effect-dose dependence is true, which is necessary for the a2 calculation. we recommend using intravenous injection of the samples into animals, with subsequent assessment of the results according to the pharmacokinetic model, to calculate the specific anticoagulant activity in a row of chemically related potential direct anticoagulants. in this investigation we compared the biological activity of a low-molecular-weight heparin (lmw-heparin, mono-embolex®) after intravenous, subcutaneous and oral application in rats. sprague-dawley rats were anaesthetized with ketamine/diazepam and blood samples were taken from the retro-orbital sinus. 150 axa u/kg body weight of the lmw-heparin were injected intravenously and subcutaneously into 10 rats each. between 3 minutes and 10 hours after injection serial blood samples were taken. 200 mg/kg (20,000 axa u/kg) body weight of the lmw-heparin was applied orally using a stomach tube. blood samples were taken between 1 and 24 hours after oral application. the anti-factor xa and antithrombin activities of the plasma samples were measured using chromogenic assays and the substrates s2222 and s2238 (kabi vitrum). after i.v. injection the maximum axa and aiia activities were 2.8 axa u/ml and 0.8 aiia u/ml respectively. after s.c. application the anti-factor xa activity of the lmw-heparin showed a maximum of 0.5 axa u/ml after 120 minutes. the antithrombin activity exhibited an earlier maximum of 0.2 aiia u/ml 60 minutes after injection. 
after the oral application no increase of the axa or aiia activities was measured. the lmw-heparin has a high anti-factor xa and antithrombin activity after i.v. and s.c. injection. after oral application no activity of the lmw-heparin was measurable. these results imply that fractionated heparin is either not absorbed after oral application or is inactivated in the gastrointestinal tract. to improve the activity after oral application, modified heparins have to be synthesized. in an in vitro study the effect of various heparin derivatives (calciparin, fraxiparin, cy 222, cy 231, astenose, hexasaccharide, ssh 14) on thrombin- and adp-induced platelet aggregation as well as on adp-mediated platelet activation in whole blood was investigated. all heparin derivatives caused a concentration-dependent inhibition of thrombin-induced aggregation of washed platelets. calciparin and astenose were found to be the most effective compounds, with ic50 values of 0.67 and 1.2 µmol/l, respectively; higher concentrations (5-30 times) were required for the other compounds. furthermore, the heparin derivatives were studied with regard to their potentiating effect on adp-induced platelet aggregation. in a concentration range from 1 to 10 u/ml calciparin, fraxiparin, cy 222 and astenose led to a potentiation of the adp-induced aggregation whereas cy 231, hexasaccharide and ssh 14 did not show this effect. the increase in aggregation was associated with an increase in thromboxane a2 formation. in addition, the effect of calciparin, fraxiparin, cy 222 and astenose on adp-induced platelet activation in whole blood was investigated by flow cytometric analysis using monoclonal antibodies to the platelet surface receptors gpiiia (cd-61) and p-selectin (cd-62). at concentrations that caused a maximum potentiation of adp-induced platelet aggregation these substances led to a strong increase of adp-mediated activation of platelets in whole blood. 
the effect was most pronounced when the blood was anticoagulated with calciparin or astenose, respectively. in conclusion, the results suggest that the aggregation-promoting effect of the heparin derivatives included in this study is dependent on the molecular weight and the degree of sulfation and is in part due to the generation of thromboxane. heparins are negatively charged polysaccharides and bind protamine, forming a stable complex. here we report on the properties of microbeads (4.5 µm) coated with protamine. protamine chloride (0.16 µm) was covalently bound to 0.5 mg paramagnetic tosyl-activated microbeads m-450 (dynal). the covalent binding of protamine was from 1.0 to 13.0 mg/g beads. protamine-dynabeads were produced in a phosphate buffer at different ph (7.0; 7.5; 8.0 and 8.5). the protamine-dynabeads produced at ph 7.5 showed the best properties for flow cytometry analysis. in saline solution they bound lmm-heparin-tyramine-fitc (lmmh-tyr-fitc) dose-dependently from 0.001 to 2 u/ml, whereas in plasma and blood they bound lmmh-tyr-fitc from 0.05 to 2 u/ml. dependent on the binding protocol, the microbeads also bind proteins unspecifically, i.e. bovine serum albumin and, to a lower extent, protamine. the adsorbed proteins, however, do not bind lmmh-tyr-fitc dose-dependently. the saturation of the proteins on the beads was determined as their relative fluorescence intensity (rfi). in saline solution the saturation was measured at 380 rfi, in human plasma at 325 rfi and in whole blood at 252 rfi. using flow cytometry, erythrocytes, lymphocytes, monocytes and granulocytes were not bound to protamine-dynabeads. these data demonstrate that protamine-dynabeads can be used to measure the concentration of lmmh-tyr-fitc in saline solution, plasma and blood because they do not bind to human blood cells. the present study was designed to investigate the anticoagulant action of inhaled low molecular weight (lmw) heparin in healthy volunteers. 
3,000 iu (group 1), 9,000 iu (group 2), 27,000 iu (group 3) or 54,000 iu (group 4) lmw-heparin were given to 20 healthy volunteers each at 4-week intervals. in group 1 tissue factor pathway inhibitor (tfpi) antigen and activity, the s2222 chromogenic factor xa assay, heptest, aptt and thrombin clotting time (tct) remained unchanged during the 10-day observation period. in group 2 tfpi antigen and activity, aptt, tct and the s2222 method remained unaffected. heptest coagulation times were 18.7 ± 2.0 sec before, 26.1 ± 5.2 sec 6 hrs and 20.5 ± 1.9 sec 24 hrs after inhalation. in group 3 tfpi antigen increased from 74.1 ± 13.9 to 80.5 ± 14.2 ng/ml 3 hrs after inhalation. tfpi activity remained unchanged. the s2222 method increased from 0.01 to 0.08 ± 0.06 iu/ml 6 hrs after inhalation. heptest coagulation values were prolonged up to 42 ± 7.6 sec after 6 hrs and returned to normal within 72 hrs after inhalation. aptt and tct remained unchanged. after inhalation of 54,000 iu lmw-heparin, the following changes were observed: tfpi antigen increased to 103 ± 17.9 ng/ml and normalized within 24 hrs. tfpi activity increased to 1.14 ± 0.23 u 3 hrs after inhalation and was normal after 24 hrs. anti-factor xa activity, as measured by the s2222 method, increased to 0.343 ± 0.196 u/ml after 6 hrs and was normal after 72 hrs. heptest coagulation values increased to 77.5 ± 11.8 sec 6 hrs after inhalation and normalized after 144 hrs. aptt and tct did not change throughout the observation period. the data demonstrate a resorption of lmw-heparin by the intrapulmonary route in man. no side effects were observed. recently we developed a tritium-labelled arachidonic acid ([3h]aa) release test with high sensitivity to membrane-toxic agents. the assay, performed in u937 cells, is intended to evaluate chemicals, drugs and biomaterials with regard to their cytomembrane toxicity [kloeking et al. (1994), toxicology in vitro 8, 775-777]. 
local irritation reactions have been described in patients receiving therapeutic dosages of lmw heparin. this fact prompted us to examine the following lmw heparins and heparinoids for their membrane toxicity in u937 cells: reviparin-sodium, enoxaparine-sodium, mucopolysaccharide polysulphate (mps), pentosan polysulfate sodium (pps), and the polysulfated bis-lactobionic acid amide derivatives lw10082 (aprosulate) and lw10086. for this purpose, [3h]aa-labelled u937 cells were incubated with different concentrations of lmw heparins and heparinoids at 37 °c for 1 hour. compared with untreated cells, the [3h]aa release of cells treated with 5 mg of the drugs was two times higher with reviparin-sodium, three times higher with the bis-lactobionic acid amide lw10086, five times higher with pentosan polysulfate and 20 times higher with enoxaparine-sodium, but it was equal to the control with mucopolysaccharide polysulphate. the rate of arachidonic acid release in response to a test chemical may therefore be used to assess the membrane-toxic effect of this substance and to predict its inflammatory potential in the skin. semi-synthetic glycosaminoglycans (gags) with antithrombotic properties can be prepared from the e. coli k5 polysaccharide by coupled chemical and enzymatic methods. the molecular weight of these semi-synthetic gags can be adjusted to obtain products mimicking the molecular profile of a low molecular weight heparin. in order to compare the biochemical and pharmacologic properties of a semi-synthetic gag (sr 80486a, sanofi/choay) with a commercially available low molecular weight heparin, fraxiparine (sanofi, paris, france), valid biochemical and pharmacologic methods were used. the molecular profile of this agent as determined by hplc exhibited a comparable distribution profile (mr = 5.7 kda) in comparison to fraxiparine (mr = 5.1 kda). the anticoagulant properties of sr 80486a were comparable to fraxiparine in the aptt and heptest. 
however, in the usp assay this agent showed slightly weaker activity. sr 80486a also exhibited comparable affinity to atiii and hcii. in comparison to fraxiparine, it produced a much weaker response in the hit screening system. in in vivo studies, sr 80486a produced strong dose-dependent antithrombotic actions in both the iv and sc studies in the rabbit jugular vein stasis thrombosis model (ed50 = 15-60 µg/kg). additionally, it also produced antithrombotic actions in a rat jugular vein clamping model. the hemorrhagic effects of this agent were comparable to those of fraxiparine as measured in a rabbit ear blood loss model. intravenous administration of sr 80486a also revealed a pharmacokinetic behavior comparable to fraxiparine. no abnormalities of the clinical chemistry (change in liver enzymes) or hematology profile (thrombocytopenia, leucocytosis, etc.) were noted in primates. at a dosage of 1 and 2.5 mg/kg iv, this agent also caused a release of functional tfpi which was comparable to the observed responses of other low molecular weight heparins. these studies suggest that sr 80486a is capable of producing pharmacologic effects similar to other low molecular weight heparins; however, additional optimization studies are required for demonstrating product equivalence. limited comparative pharmacokinetic information on low molecular weight heparin (lmwh) is available from data obtained with aptt, heptest, anti-xa and anti-iia assays. since these drugs are currently used for therapeutic indications at relatively high dosages and by intravenous administration, the aptt, heptest and anti-iia tests may be valuable in the assessment of their effects. in order to investigate the relative pharmacokinetics of lmwh using aptt, heptest, anti-xa and anti-iia methods, certoparin (sandoz, basel, switzerland) was administered to individual groups of healthy male volunteers (52-70 kg) via intravenous (30 mg) and subcutaneous (50 mg) routes in a crossover study. 
blood samples were drawn at 0, 5, 15, 30, 60, 90, 180, 360, 540 and 720 minutes. using a baseline pool plasma obtained from the same volunteers, calibration curves for each of the individual tests were constructed to extrapolate circulating levels of certoparin. a non-compartmental model using the trapezoidal technique was used to obtain pharmacokinetic parameters such as t1/2, vd and clsys. in the intravenous studies, the t1/2 was found to be dose-dependent for aptt, heptest, anti-xa and anti-iia. the auc, however, was significantly different for each test and was dose-dependent, following the order anti-xa > heptest > aptt > anti-iia. the clsys of the anti-iia was much faster in comparison to the other tests. the clsys of the aptt and heptest was independent of dose. however, anti-xa clsys by this route was lower than for the other tests. the apparent vd followed the order aptt > anti-iia > heptest > anti-xa. the bioavailability of certoparin as measured by the various tests ranged from 81-119 %. these studies suggest that besides providing pharmacokinetic data, the aptt, heptest and anti-iia assays may provide useful data on their safety and efficacy at high dosages. the immunological type of heparin-associated thrombocytopenia (hat ii) is a severe complication of heparin treatment and is associated with arterial and venous thrombosis. only patients with absolute thrombocytopenia have prompted suspicion of hat in clinical practice. we report on a 44-year-old male who developed thromboembolic episodes after coronary angiography, namely reinfarction and thrombotic episodes of the a. brachialis. fibrinolytic therapy combined with i.v. unfractionated heparin treatment was the therapy of choice and was followed by severe further thromboembolic adverse effects. besides an impaired fibrinolytic response and elevated antiphospholipid antibodies, we diagnosed hat type ii by hipa and elisa (stago-boehringer, mannheim). 
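a non-compartmental analysis of the kind described above can be sketched with the linear trapezoidal rule and a log-linear fit of the terminal points. the function names and the synthetic halving-concentration series below are assumptions for illustration, not certoparin data.

```python
import math

def auc_trapezoid(times, concs):
    """linear trapezoidal auc between the first and last sampling times."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))

def nca(dose, times, concs, n_terminal=3):
    """minimal non-compartmental sketch: auc, terminal t1/2 from a
    log-linear fit of the last samples, systemic clearance
    (cl = dose/auc) and apparent vd (cl/ke)."""
    auc = auc_trapezoid(times, concs)
    ts, cs = times[-n_terminal:], concs[-n_terminal:]
    logs = [math.log(c) for c in cs]
    mt, ml = sum(ts) / len(ts), sum(logs) / len(logs)
    ke = -(sum((t - mt) * (l - ml) for t, l in zip(ts, logs))
           / sum((t - mt) ** 2 for t in ts))
    cl = dose / auc
    return {"auc": auc, "t_half": math.log(2) / ke, "cl": cl, "vd": cl / ke}

# the concentration halves every hour, so the terminal t1/2 is 1.0 h
res = nca(dose=30.0, times=[0.0, 1.0, 2.0, 4.0], concs=[4.0, 2.0, 1.0, 0.25])
```

bioavailability then follows as f = (auc_sc/dose_sc) / (auc_iv/dose_iv), which is how a subcutaneous/intravenous crossover yields the 81-119 % range quoted above.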
this special patient had platelet counts within the normal range when developing the thromboembolic episodes. it appears that the normal platelet count during the thromboembolic episodes reflects a relative thrombocytopenia. from a clinical point of view we recommend the use of a lab panel to exclude hat type ii in patients with thromboembolic episodes under therapy with fractionated or unfractionated heparin. platelet counts within the normal range are no absolute exclusion criterion for hat ii.

low molecular weight heparins (lmwhs) are now commonly used for the prophylaxis of post-surgical thromboembolic complications. in this indication, lmwhs are administered as a single or twice a day subcutaneous regimen. usually these agents are administered at 30-40 mg total dose, which is equal to 3000-4000 anti-xa (axa) iu. newer methods such as chromogenic substrate based axa methods and the heptest clotting time can be used to determine the effects of lmwhs during the initial phases of prophylactic therapy. this may be useful in the elderly and weight-compromised patients, where a fixed dosage may not be optimal and may produce bleeding effects. similarly, in overweight patients a fixed dose may not be efficacious. thus, monitoring of lmwhs in these patients may be useful in the optimization of their therapy. lmwhs are also used in the treatment of deep vein thrombosis using both intravenous and subcutaneous protocols. high dosages of up to 200 mg sc/day and infusions of up to 30 axa iu/kg/hr have been administered. in these conditions, the monitoring of the circulating lmwh levels may be useful in optimizing the dosage. we have modified the aca heparin (dupont merck, wilmington, de) method to measure the lmwh levels in the plasma of patients treated with both the prophylactic and therapeutic dosage. owing to its short turnaround time, simple operation and reliable results, this method was found to be of value in the monitoring of these agents.
this presentation provides an overview of the clinical application of various lmwhs with particular reference to the need of monitoring their effects to optimize the clinical outcome.

a double-blind, multicentric, controlled trial was performed in order to compare the antithrombotic efficacy and safety of single daily doses of 5000 iu anti-xa of low molecular weight heparin (lmwh) sandoz (certoparin) and 5000 iu unfractionated heparin (ufh) tid. in 288 patients undergoing elective total hip replacement, blood samples were drawn before the first subcutaneous injection of lmwh or ufh respectively, two hours after administration on the first and 7th postop. day, and on the last day of prophylaxis (day 10-14). anti-xa activity was measured by chromogenic substrate assay, heptest and aptt by clotting assays, and tissue factor pathway inhibitor (tfpi) and heparin-pf4 antibodies by elisa techniques. as expected, the anti-xa activity and the heptest values were significantly higher in the lmwh group at all time points after administration of the drugs; the mean values of heptest were 35 sec in the ufh and 75 sec in the lmwh group respectively. the aptt was not different between the groups. at the end of prophylaxis, positive antibodies to heparin-pf4 complexes were detected in both groups; this however was not correlated with clinical thrombocytopenia. a detailed correlation between patients with deep vein thrombosis (dvt) and positive antibodies has still to be done (all patients were screened for asymptomatic dvt between day 10-14 by bilateral phlebography). tfpi was markedly increased in the lmwh and only slightly elevated in the ufh group; the differences are statistically significant. summarizing, it can be concluded that antibodies to heparin-pf4 complexes may occur without clinical symptoms of heparin-induced thrombocytopenia type ii and that tfpi may play a significant role in the antithrombotic efficacy of ufh and lmwh.
unfractionated heparin represents one of the most severe and frequent causes of drug-induced thrombocytopenia. heparin-induced thrombocytopenia (hit) occurring early in therapy is often mild and self-limited, appearing to be caused by a direct aggregant effect of heparin on platelets (hit type i). hit type ii, however, is immune-related and may result in absolute thrombocytopenia.

hemophiliacs with high inhibitor titers (>5 bu) usually have serious clinical problems. they are resistant to conventional replacement therapy; the main goals of treatment are to control severe acute bleedings, to eradicate the inhibitor permanently and to induce tolerance. in the treatment of acute bleedings in patients with inhibitors, factor viii inhibitor bypassing agents like activated prothrombin complex concentrates (feiba) or prothrombin complex concentrates (pcc) are mostly used. the mechanism of action of these concentrates is not fully investigated. their effect is usually related to the high content of activated clotting factors and phospholipids. for some years, activated recombinant factor vii (f viia) has been used successfully to treat patients with inhibitors in several clinical situations including surgery. in addition, porcine factor viii is widely used, in particular in the uk, for the treatment of factor viii inhibitor patients and has shown good clinical results. in case of life-threatening bleedings, a temporary reduction of inhibitors can be achieved by using extensive plasma exchange (protein a adsorption) and immune suppression with cyclophosphamide (malmö protocol). following the first description by h. brackmann, some modifications for the induction of immune tolerance in hemophilia a patients have been proposed. these schedules can be divided into high, intermediate and low dosage regimens differing in the dosage of factor viii infused. success rates of about 70 to 90% can be obtained with intermediate and high dose regimens.
but it has to be considered that these expensive treatment regimens have a great physical and psychosocial impact on the hemophiliacs and their families. the different immune tolerance regimens are predominantly used in high responder inhibitor patients. most of the patients with low concentrations of inhibitors can be managed with factor viii in increased dosage. this is in agreement with the consensus recommendations for the treatment of hemophiliacs in germany from 1994.

before vitamin k (vk) prophylaxis was generally accepted in japan, the incidence of infantile vk deficiency was 1:4000, both idiopathic and secondary types. since 1981, 4 nationwide surveys have been conducted. the current incidence rate is now about one-tenth that in the early 1980s. however, in a small number of cases, vk deficiency occurred despite prophylactic administration during the neonatal period. in order to clarify the absorption, excretion and transplacental transport of vk in the perinatal period, the following studies were carried out. 1) hepaplastin tests (normotest) were performed on 65 women in the last stage of pregnancy and each coagulation factor was estimated as well. 2) correlations were made between mothers' and babies' hepaplastin test values. 3) transplacental transport of vk2 was studied. the general activity of vk dependent factors in pregnant women was much higher than in non-pregnant women. as far as the correlation between mothers' venous blood during delivery and cord venous blood is concerned, in the group of mothers with a hepaplastin test value of less than 120% of the normal adult value, the value of the hepaplastin test was less than 30% of the normal adult value in the cord venous blood. we also demonstrated that vk passed through the placenta, but only in small quantities.

61 hiv-negative patients (median age 41 yrs, range 13-70), formerly treated with non-virus-inactivated coagulation products, underwent hepatologic examination, including afp screening and sonography.
31 suffer from severe, 20 from moderate or mild haemophilia a or b, 10 from other severe coagulation factor deficiencies. 52 had been treated with products of the swiss red cross (src) only (28 with small pool cryoprecipitate), 4 with foreign products only, 5 with both src and foreign products. treatment intensity was variable, with >20'000 iu/yr in 26, <20'000 in 16, <1 treatment episode/yr in 10, and a total of only 1-3 treatment courses in 6 patients. 3 afibrinogenemic patients had prophylactic replacement therapy. hcv serology was positive in 56/61 patients (92%), in 47 with detectable hcv rna (77%). the 5 persons who escaped hcv infection, with normal alt-levels and without sonographic alterations, had low intensity treatment with small pool src preparations only. alt-levels were elevated in 33/56 anti-hcv positive patients (59%). 26/56 had abnormal sonographic findings (46%). there was a clear correlation between elevated alt-levels and abnormal sonographies: of 33 patients with elevated alt, 23 had abnormal sonography; of 23 with normal alt, 3 had abnormal sonography. 8 patients had liver cirrhosis (6 with clinically overt hepatopathy), 4 (4/56 = 7%) with hepatocellular carcinoma (hcc) with elevated afp-levels. of these 4 patients, 2 had intraarterial embolization with lipiodol-epirubicin; in 2 patients hcc diagnosis was made in a late stage. 1 patient with advanced liver cirrhosis underwent successful liver transplantation. 2 of the 6 patients with hepatopathy had severe haemophilia with temporary high alcohol intake, 4 had mild coagulation disorder with few treatment episodes. possible precipitating factors were coinfection with hbv, high alcohol consumption and first exposure to hcv-contaminated blood products at an advanced age, but not intensive replacement therapy.

very similar results for f viii and vwf.
since the factor viii level is kept steadily above the level where there is an increased risk of haemorrhage, continuous infusion is haemostatically safer and more efficacious than bolus injections. another advantage is a progressive decrease of clearance during the first days after surgery, which leads to a substantial reduction of factor concentrate consumption by avoiding the unnecessary peaks of bolus injections. 22 children with a severe form of haemophilia a undergoing elective surgery received continuous infusions with different plasma-derived and recombinant f viii concentrates. before surgery, patients got bolus injections to raise the factor viii levels to more than 80%. during continuous infusion, factor viii levels were measured two to three times a day, and the infusion rate of 4 to 5 iu/kg/h could be reduced on the second or third day to 2-3 iu/kg/h. the clinical efficacy was excellent, with no bleeding events. in 5 children with vwd also undergoing elective surgery, continuous infusions with humate-p were performed in the same way. no bleeding events were observed in these patients. none of the patients developed postoperative wound infections. the overall doses of f viii concentrate were about 20-30% lower than those required during replacement therapy with bolus doses.

factor x frankfurt i: molecular and functional characterisation of a hereditary factor x defect (gla +25 to lys). huhmann i., holler b., krinninger b., turecek p.l., richter g., scharrer i., forberg e., watzke h. univ. klinik für innere med. i, abteilung für hämatologie und hämostaseologie, wien; immuno-ag, wien; klinikum der j.w. goethe-univ. frankfurt am main, abt. f. angiologie. factor x (fx) is a vitamin k-dependent plasma protein which is activated either by fviia/tissue factor or by ixa/viiia. fxa is the main enzyme for conversion of prothrombin to thrombin.
the congenital fx-deficiency (stuart-prower defect), being inherited as an autosomal recessive trait, leads to a bleeding diathesis of varying severity. our propositus is a 24 year old patient presenting a mild bleeding tendency. his ptt (36 sec) is within the normal range; the pt (73% of normal) is slightly reduced. the factor x antigen level is reduced to 55% of normal. molecular characterisation of the genetic defect was performed by amplification of the eight exons and exon-intron junctions by pcr and subsequent direct sequencing of the products. in comparison to the normal sequence we could determine a single mismatch within exon ii, resulting in the substitution of +25 gla (gaa) by lys (aaa). the mutation abolishes a naturally occurring mboii site in the dna sequence of exon ii. the status of the fx encoding alleles was determined in the propositus, his mother and one of his brothers by amplification of exon ii and restriction digest with mboii. these family members were heterozygous with respect to the mutation in exon ii. fx was isolated from plasma of the propositus by mono q ion exchange chromatography. performing clotting assays with purified fx frankfurt i, we determined an activity of 89% of normal fx upon activation with rvv, 77% upon intrinsic activation (aptt) and 81% upon extrinsic activation (pt). this compares well with the results obtained from the patient plasma (pt 56%, ptt 55% and rvv 57% of normal) when the reduced fx antigen level of the plasma (55%) is taken into account. we therefore conclude that the substitution of gla +25 to lys results in a fx molecule which is severely defective in both the intrinsic and extrinsic pathway of blood coagulation.

bleeding after cardiothoracic surgery is still a frequent, important and sometimes life-threatening complication. thus, the aim of this study was to examine routine parameters of hemostasis and their predictive values for severe bleedings.
this prospective study included patients undergoing cardiopulmonary bypass surgery. blood samples were drawn preoperatively as well as 0, 6 and 18 hours and 2, 3, 4, 5, 6, 7 and 8 days after surgery. blood loss from drains, transfusion of blood products and other important clinical data were monitored, apart from platelet count, hematocrit, thrombin time, thromboplastin time, aptt and levels of fibrinogen, at iii and c-reactive protein; soluble fibrin (sf) was measured via protamine sulfate aggregability, and total fibrin(ogen) degradation products (ftdp) by an elisa from organon teknika. n = 109 patients were examined (age: 64 ± 9 y). they lost 750 ± 460 ml blood (mean ± sd) into the drains within the first 18 hours after the end of surgery. a severe bleeding was defined to exist if the blood loss exceeded this range (>1200 ml within 18 h). fibrin(ogen) split products proved to be a useful parameter in predicting the risk of severe bleedings: ftdp levels exceeding 12 mg/l at the end of surgery (n = 105) had a negative predictive value of 94%, a positive predictive value of 60%, a specificity of 96% and a diagnostic efficacy of 91%. in contrast, soluble fibrin, which correlated well with fibrinopeptide a (r > 0.91, n = 20), correlated neither with degradation products nor with bleeding complications (n = 109). this observation does not match the correspondence of sf with organ dysfunction during dic: sf reached a negative predictive value near 95% and a diagnostic efficacy of >70% (patients without antifibrinolytic drugs), which agrees with findings from bredbacka (1994). other parameters were less predictive than ftdp and sf. therefore, further examinations are necessary to determine the value of soluble fibrin for a risk prediction of bleeding complications or dic. a differentiation of split products deriving from either fibrinogen, fibrin or xl-fibrin will provide further insights into fibrin(ogen) metabolism.
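the predictive values reported for the ftdp cutoff above (npv 94%, ppv 60%, specificity 96%, diagnostic efficacy 91%) are standard quantities derived from a 2x2 table of test result versus outcome. a minimal sketch of their computation, using hypothetical counts rather than the study's raw data:

```python
# diagnostic metrics from a 2x2 table (test positive/negative vs. severe
# bleeding yes/no); the example counts below are hypothetical, not the
# study's raw data.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
        "efficacy": (tp + tn) / (tp + fp + fn + tn),  # overall accuracy
    }

# hypothetical counts for the cutoff "ftdp > 12 mg/l at end of surgery"
print(diagnostic_metrics(tp=9, fp=6, fn=5, tn=85))
```

note that ppv and npv, unlike sensitivity and specificity, depend on how common severe bleeding is in the cohort, which is why a cutoff can combine a modest ppv (60%) with a high npv (94%) when the outcome is rare.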
heparin-induced thrombocytopenia represents a multicomponent syndrome associated with the use of heparin and related drugs, resulting not only in thrombocytopenia but also in arterial thrombosis of varying magnitude. the initial diagnosis of this syndrome is usually made by clinical observation and a drop in platelet count. conventional diagnostic methods include platelet aggregation responses to patient's serum, 14c serotonin release in response to patient's serum, aggregation/agglutination of patient's platelets in response to heparins, and the detection of patients' anti-heparin-platelet factor 4 antibodies (hpf4-ab) by elisa methodology. several other individualized methods are also used to demonstrate platelet activation. to test the diagnostic validity of the platelet aggregation (pa) and 14c serotonin release (sr) assays and the relevance of hpf4-ab, 340 serum samples collected from patients with clinically confirmed cases of hit syndrome were compared in parallel in various assay systems. the diagnostic efficacy of these tests varied from 60-74%, with the pa test providing better results than the others. when the pa test was compared with serotonin release, a poor correlation was noted (r = 0.47). in contrast, the correlation between pa and hpf4-ab was somewhat better (r = 0.66). in another study, blood samples collected from 50 patients treated with a high dose low molecular weight heparin for two weeks (80 mg o.d.) were tested. 20 of these patients showed a high titre of hpf4-ab without any decrease of platelet count. none of these patients were found to be positive in the 14c serotonin release assay. a third study included blood samples from dvt patients administered iv heparin infusion, high dose sc lmw heparin (certoparin) or iv lmw heparin for the management of dvt.
none of these patient groups (n = 20-34) exhibited any hit responses; however, the incidence of a high hpf4-ab titre was found to be 53% in the heparin, 36% in the lmw heparin iv and 26% in the lmw heparin sc groups. pa and sr studies revealed 8% and 12% false positive results, respectively. these studies clearly suggest that the currently available assays for laboratory diagnosis of hit syndrome are of limited value, and caution should be exercised in the interpretation of the results obtained with these tests.

heparin-induced thrombocytopenia (hit) is one of the major severe side effects during treatment with heparin. in postoperative medicine, clinical studies demonstrated the prevalence of hit with unfractionated over fractionated heparins. few data are available from non-operative medicine and from patients without thromboembolism before heparinization. in a controlled prospective randomized study, the safety and efficacy of low-dose heparin was compared with a low-molecular-weight (lmw) heparin over 10 days in bedridden medical inpatients (haemostasis, in press). 1968 patients were randomized and controlled for the development of thrombocytopenia. thrombocytopenia was defined as a platelet count below 40,000/µl at day 10. 4 patients developed thrombocytopenia in the heparin group and no patient in the lmw-heparin group (p < 0.05). none of the patients with thrombocytopenia developed a thromboembolic complication. in a second prospective case control study, 90 patients with side effects on anticoagulants were treated with lmw-heparin once daily subcutaneously for a period of 1 month to 9 years. platelet count was performed every 1 to 3 months. none of these patients developed thrombocytopenia during heparinization with lmw-heparin. it is concluded that hit is a very rare complication in non-operated bedridden medical patients. a decrease of platelet count may occur in about 0.5% of patients receiving low-dose heparin.
the incidence of hit with thrombosis during low-dose heparin and of hit during lmw-heparin in non-operated patients is many-fold lower and remains to be determined.

terminology: instead of the term "hemorrhagic disease of the newborn (hdn)", the term vkdb should be used, since neonatal bleeding is often not due to vk deficiency and vkdb may occur after the neonatal period (i.e. after 4 weeks). definition: vkdb is a bleeding disorder caused by reduced activity of vk-dependent coagulation factors which responds to vk. diagnosis: in a bleeding infant, a prolonged pt (inr > 3.5) together with normal fibrinogen and platelet count is almost diagnostic of vkdb. the diagnosis is proven if vk shortens the pt (after only 30-60 minutes) and/or stops bleeding. classification: classification by age of onset into early (<24 h), classic (day 2-7) and late form (>1 week, <6 months), and by etiology into idiopathic and secondary. in secondary vkdb, in addition to breast feeding other factors can be demonstrated, such as poor intake or absorption of vk and increased consumption of vk. vk-prophylaxis: benefits: oral and intramuscular (i.m.) vk (one dose of 1 mg) prevent the classic form of vkdb equally well. i.m. vk appears to be more effective in preventing the late form (times 15 to >50). the protection achieved by single oral prophylaxis (times 3-5) is improved by triple oral vk (times 15-30). risks: because of potential risks associated with extremely high levels of vk and the possibility of injection injury, i.m. vk has been questioned as the prophylaxis of choice for normal neonates. since vk is involved not only in coagulation but also in carboxylation with multiple effects, excessive deviations from the low physiologic concentrations which prevail in the fully breast-fed healthy mature infant should be avoided. proposal: repeated (daily or weekly) small oral doses of vk are closer to physiologic conditions than single i.m.
bolus doses, which expose neonates to excessively high vk levels. the incidence of intracranial vkdb can be reduced if the grave significance of warning signs is recognized (i.e. icterus, failure to thrive, feeding problems, minor bleeding, disease with cholestasis). whether or not the more reliable absorption of the new mixed micellar (mm) preparation of vk can reduce the protective oral dose of vk-prophylaxis has to be evaluated.

the point mutation g to a at nt 449 in exon v of the factor x gene (glu 102 to lys) has previously been found in two independent kindreds with fx deficiency.
it occurred in both families in a heterozygous state and was associated with two other genetic defects in the fx gene. we have identified another family in which this mutation occurs in a homozygous state. in this family the mutation is associated with the previously reported mutation gla 14 to lys, which also occurs in a homozygous state. the pt and ptt of the proposita and her sister are markedly prolonged. the fx activity is reduced to <1% in the extrinsic system, to 30% in the intrinsic system and to 18% after activation with rvv. the fx antigen is reduced to 20%. the coagulation profile of this family is thus identical with that of fx vorarlberg, despite the fact that the fx vorarlberg kindred is only heterozygous for the mutation glu 102 to lys. haplotype analysis could not rule out consanguinity with the fx vorarlberg kindred. these data suggest that the mutation at nt 449, which leads to a fairly dramatic amino acid change from glu to lys, would indeed represent a polymorphism. to further address this question, we cloned the fx gene in an expression vector (pcep4) for transient expression in the human embryonic kidney cell line 293 and introduced the mutation at nt 449 by site-directed mutagenesis.

hereditary deficiency of factor ixa, a key enzyme in blood coagulation, causes hemophilia b, a severe x-chromosome-linked bleeding disorder; clinical studies have identified nearly 500 deleterious variants. the x-ray structure of porcine factor ixa shows the atomic origins of the disease, while the spatial distribution of mutation sites suggests a structural model for fx activation by phospholipid-bound fixa and cofactor viiia. the 3.0 å resolution diffraction data clearly show the structures of the serine proteinase module and the two preceding epidermal growth factor (egf)-like modules; the n-terminal gla module is partially disordered.
the catalytic module, with the covalent inhibitor d-phe-pro-arg chloromethyl ketone, most closely resembles fxa but differs significantly at several positions. particularly noteworthy is the strained conformation of glu-388, a residue strictly conserved in known fixa sequences but conserved as gly among other trypsin-like serine proteinases. flexibility apparent in the electron density, together with modelling studies, suggests that this may cause incomplete active site formation, even after zymogen activation, and hence the low catalytic activity of fixa. most hemophilic mutation sites of surface fix residues occur on the concave surface of the bent molecule and suggest a plausible model for the membrane-bound ternary fixa-fviiia-fx complex structure: the stabilizing fviiia interactions force the catalytic modules together, completing fixa active site formation and catalytic enhancement.

this study was conducted as a randomized parallel-group clinical trial comparing the safety and efficacy of a low molecular weight heparin (lmwh), monoembolex sandoz, and unfractionated standard heparin (ufh) for the perioperative prevention of venous thromboembolic disease (dvt) following major surgery in patients with gynecologic malignancy.
three hundred and twenty-four women (six drop-outs) were randomized and received either 3 times daily 5000 iu s.c. ufh (sandoz, nuernberg, germany) (n = 164) or once a day 5000 units s.c. monoembolex (n = 160) plus two placebo injections. heparin therapy was started the morning before operation and continued until the 7th postoperative day. up to the 10th postop. day, the incidence of dvt was 6.25% (n = 10; incl. 7 pulmonary embolisms, pe) in the lmwh group and 6.10% (n = 10; incl. 4 pe) in the ufh group. the overall incidence of clinically hemorrhagic wound complications was significantly decreased in the lmwh group, 16.3% (n = 26), compared to the ufh group, 26.8% (n = 44; p < 0.005). the incidence of major hemorrhagic episodes was 9.4% (n = 15) in the lmwh group and 14.0% (n = 23) in the ufh group. this difference was not statistically significant. one case of fatal pe was observed in the lmwh-treated group. five deaths in the lmwh group were observed during the study and 3 in the ufh group. this study demonstrates that perioperative treatment with low molecular weight heparin is safer than standard heparin in gynecologic-oncologic patients undergoing major surgery; however, the incidence of thromboembolic complications is similar in both treatment regimens.

to explore the effect of targeting an antithrombin to the surface of a thrombus, recombinant hirudin (hir) was covalently linked to the fab' fragment of the fibrin-specific monoclonal antibody 59d8 (fab), resulting in a stable conjugate (hir-fab). in vitro, hir-fab was 9 times more efficient than hir alone in inhibiting fibrin deposition on experimental clot surfaces in human or baboon plasma (p < 0.01). to validate these results in vivo, hir-fab was compared to hir in a baboon model. the deposition of 111-in-labeled platelets onto a segment of dacron vascular graft present in an extracorporeal arteriovenous shunt was measured. blood flow rate was 40 ml/min.
one hour local infusions of 4500 atu of either hir-fab or hir resulted in deposition of 0.16 x 10^9 and 2.17 x 10^9 platelets, respectively. equieffective dosages were 2000 atu hir-fab and 9000 atu hir, resulting in deposition of 1.06 x 10^9 and 0.93 x 10^9 platelets, respectively. based on full dose-response curves (n = 14), hir-fab was found to be > 4.5-fold more potent (based on activity) than hir. because of the small total amounts of antithrombins used and the short duration of these experiments, no significant systemic effects were observed. thus, fibrin-targeted recombinant hirudin prevents platelet deposition and thrombus formation more effectively than uncoupled hirudin in vitro and in an in vivo primate model. triabin, a 17 kda protein from the saliva of the assassin bug triatoma pallidipennis, is a new specific thrombin inhibitor (1). it does not block the catalytic center but interferes with the anion-binding exosite of thrombin. the recombinant protein was produced with the baculovirus/insect cell system and used to study the inhibitory effect of triabin on thrombin-induced responses of human blood platelets and blood vessels. aggregation of platelets in tyrode's solution was measured turbidimetrically at 37°c. for the studies on blood vessels, rings (2-3 mm) from small porcine pulmonary arteries were placed in organ baths for isometric tension recording. the integrity of the endothelium was assessed by the relaxant response to bradykinin. like hirudin, triabin inhibited the thrombin (0.1 u/ml)-induced aggregation of washed human platelets at nanomolar concentrations (ec50 = 2.6 nmol/l), whereas the adp- and collagen-induced aggregations were not suppressed. in pgf2α-precontracted porcine pulmonary arteries, the thrombin (0.2 u/ml)-induced endothelium-dependent relaxation was inhibited by triabin in the same concentration range as found for inhibition of platelet aggregation. 
higher concentrations of triabin were required to affect the contractile response of endothelium-denuded porcine pulmonary arteries to thrombin (1 u/ml). in all these assays, the inhibitory potency of triabin was dependent on the thrombin concentration used. these studies suggest that the new anion-binding exosite thrombin inhibitor triabin is one of the most potent inhibitors of the thrombin-mediated cellular effects. dept. of medicine, university hospital benjamin franklin, free university of berlin; dept. of medicine and dept. of surgery, heinrich-heine-university duesseldorf. after standardized training in home prothrombin estimation using the coaguchek system, 150 consecutive patients (p) who had st. jude medical aortic or mitral valve implantation were allocated to two random arms; 75 p were asked to control the inr themselves every third day. in the remaining 75 p anticoagulation was managed by the home physician without recommending an interval for these controls. all 150 p were monitored during the education period to a target therapeutic range of inr 3.5-4.0. p were asked to contact their home physician immediately if the inr was measured 0.5 below or above the target range (inr corridor 3.0-4.5). all p had out-patient re-examinations every three months. thrombotic, thromboembolic and hemorrhagic complications were documented by the p using special documentation cards. the results of this randomized study demonstrate a significant improvement in the management of oral anticoagulation by home prothrombin estimation. significantly (p < 0.001) more inr measurements were found inside the target therapeutic range. moreover, bleeding and thromboembolic complications could be reduced (p = 0.038) in the study group with home prothrombin estimation. 
life-threatening thromboembolic and hemorrhagic complications were not observed in p who were on home prothrombin estimation, while three such events (2.72%/year) were documented in group a. local vascular injury following ptca exposes circulating platelets to prothrombogenic stimuli. by binding to platelet gp iib/iiia, fibrinogen crosslinks platelets, which represents the final common pathway of platelet aggregation. fradafiban (bibu52zw) is a non-peptide compound with effective, reversible inhibitory effects on fibrinogen binding to gp iib/iiia on human platelets. in the first double-blinded, prospective phase ii study three escalating doses of bibu52zw as a continuous 24 h i.v. infusion were tested in comparison to placebo in 65 patients with stable angina pectoris undergoing elective ptca. the mean receptor occupancy with 20 mg, 40 mg and 60 mg per hour was 71.9%, 84.5% and 87.9% at 24 hours, respectively. as compared to placebo, bleeding time was significantly prolonged (7 vs 20 min) during fradafiban infusion with a weak dose-dependency. platelet aggregation in platelet-rich plasma ex vivo with collagen (2.0 and 4.0 µg/ml), adp (2.5 and 5.0 µmol/ml) or ca-ionophore a 23187 (2.5 and 5.0 µg/ml) was significantly and dose-dependently inhibited as compared to placebo. using the two upper doses of fradafiban, we observed major bleeding complications in 8 patients requiring blood transfusions or vascular surgical repair. in these patients, too, maximal antiplatelet effects could be documented. these data suggest that bibu52zw is an effective fibrinogen receptor antagonist in patients. the requirement of ad hoc receptor occupancy determination or platelet function monitoring for safe and effective clinical use should be evaluated. 
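the inr-corridor rule from the home-monitoring study above (target range 3.5-4.0, contact the physician outside the 3.0-4.5 corridor) can be sketched in python; the thresholds are taken from the abstract, while the sample readings are invented for illustration:

```python
# inr-corridor classification as described in the home-monitoring study:
# values inside 3.5-4.0 are in the therapeutic target; values inside the
# wider 3.0-4.5 corridor are tolerated; anything outside the corridor
# triggers immediate contact with the home physician.
TARGET = (3.5, 4.0)
CORRIDOR = (3.0, 4.5)

def classify_inr(value):
    if CORRIDOR[0] <= value <= CORRIDOR[1]:
        return "in_target" if TARGET[0] <= value <= TARGET[1] else "corridor"
    return "contact_physician"

# invented series of self-measured inr readings
readings = [3.6, 4.2, 2.8, 3.9, 4.7]
labels = [classify_inr(v) for v in readings]
fraction_in_target = labels.count("in_target") / len(labels)
print(labels, fraction_in_target)
```

the study's endpoint "more measurements inside the target range" corresponds to comparing `fraction_in_target` between the self-monitoring and physician-managed arms.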
in a placebo-controlled interaction study 9 healthy volunteers were randomized to receive either a 24 hour infusion of peg-hirudin (0.02 mg/kg/h) after an i.v. bolus of 0.2 mg/kg + placebo, or 325 mg/day acetylsalicylic acid (asa) for three days followed by a placebo infusion, or the peg-hirudin infusion + asa. each volunteer received all three treatments. there was a washout period of at least 14 days between the infusions. at short intervals aptt, activated clotting time (act), ecarin clotting time (ect), anti-iia activity using the chromogenic substrate s-2238, collagen-induced aggregation, platelet adhesion and platelet-induced thrombin generation time (pitt) were measured. bleeding time (simplate) was studied before drug administration, on day three before the infusion and 6 hours after start of the infusion. the infusion of peg-hirudin after 4 and 8 hours led to a mean hirudin plasma level of 1.8 µg/ml. asa markedly inhibited collagen-induced aggregation, as expected. the mean bleeding time was prolonged under the influence of peg-hirudin from 5.2 to 6.22 min, after asa from 5.8 to 18.2 min and after the combination of peg-hirudin + asa from 5.4 to 33.7 min. in each volunteer the bleeding time was longer under the combination than after asa alone. in two volunteers receiving peg-hirudin + asa the bleeding time measurement was stopped after 60 min. none of the coagulation parameters or platelet function tests correlated with the prolongation of the bleeding time. however, the bleeding time was excessively prolonged in those volunteers who had a marked prolongation under asa alone. the combination of hirudin at a higher dosage with asa is probably associated with a relatively high risk of bleeding. either the hirudin dosage should be reduced if the combination seems feasible, or asa should be given after the end of hirudin treatment. fibrinogen with the sta/stago and the mla/dade systems correlated well, but neither system correlated well with the acl/il system. 
at iii, protein c, protein s, and anti-xa heparin assays using stago reagents performed as expected for normals and low abnormals on the sta. factor levels on the sta/stago system were less sensitive than factor levels obtained with the dade reagents on the mla or fibrometer. using the sta/stago system, thrombin time results correlated well with the aptt and heparin levels. the thrombin time was not associated with additional manipulation for assay preparation, nor any cross-contamination of reagent or sample, since on the sta reagents do not come into contact with tubing. the sta was not sensitive to hemolytic, icteric or lipemic samples for clotting assays and showed the same sensitivity as the mla for chromogenic assays. the overall data comparisons, high throughput, minimal operator intervention for reagent/assay change and ease of operation warrant further evaluation of the sta hemostasis analyzer. a. wehmeier, d. söhngen, c. rieth, klinik für hämatologie, onkologie und klinische immunologie der heinrich-heine-universität düsseldorf. hirudin selectively inhibits thrombin by direct interaction. because the effect of hirudin is independent of antithrombin iii and other factors, it seems an attractive alternative to current anticoagulants. however, it is uncertain whether hirudin influences platelet-associated thrombotic disorders and how it compares with conventional and lmw heparin. we investigated the effect of 2 recombinant hirudin preparations (rhein biotech, düsseldorf) on platelet function tests: in vitro bleeding time, adhesion to glass beads, aggregation in platelet-rich plasma and whole blood. hirudin was used in concentrations of 0.1-100 µg/ml, and was compared to trisodium citrate (0.38%), conventional heparin (50 iu/ml) or lmw heparin (fraxiparin, 500 iu/ml). both recombinant hirudins showed normal activity in thrombin neutralization tests, and prolongation of thrombin time and aptt. 
however, in vitro bleeding time was not prolonged by hirudin, but was more than doubled by addition of conventional and lmw heparins. platelet retention in glass bead columns was reduced by hirudin in a dose-dependent manner to about 40%, but was more effectively reduced by both heparin preparations and citrate. hirudin had an inhibitory effect on platelet aggregation in prp induced by thrombin, collagen, and predominantly epinephrine, but not adp and ristocetin. in whole blood, a small effect could only be observed with hirudin concentrations of > 25 µg/ml as compared to citrate-anticoagulated blood. in summary, thrombin inhibition by recombinant hirudin has little effect on in vitro platelet function tests in comparison to heparins and calcium depletion. the role of endothelin (et), prostaglandins and the coagulation system in the pathogenesis of acute renal failure is still to be defined. in anaesthetized pigs the effects of i.v. infusion of et (3 µg/kg) alone (group 1, n = 6) and after pretreatment with the potent thrombin inhibitor hirudin (0.5 mg/kg) (group 2, n = 6) on haemodynamics, coagulation parameters (factor viii, antithrombin iii, prekallikrein, fibrin monomers, aptt) and prostaglandins were investigated. plasma renin activity (pra), creatinine clearance and urine volume measurements and blood gas analysis were performed hourly. et infusion caused an initial bp reduction and marked hr reduction, followed by a transient bp elevation and hr reduction. activation of platelets can be directly measured by flow cytometry using monoclonal antibodies. in an in vitro study the effect of the thrombin inhibitors argatroban, efegatran, dup 714, recombinant hirudin and peg-hirudin on platelet activation induced by various agonists was studied in whole blood. blood was drawn from normal human volunteers using the double syringe technique without use of a tourniquet to avoid autoaggregation of platelets. 
for anticoagulation of blood, the thrombin inhibitors mentioned above were used at a final concentration of 10 µg/ml each. blood samples were then incubated at 37°c either with saline, r-tissue factor (rtf), arachidonic acid (aa), adenosine diphosphate (adp) or collagen. at defined times (1, 2.5, 5, 10 min) aliquots were taken and, after various steps of a fixative procedure, the percentage of platelet activation was measured by means of fluorescent monoclonal antibodies to the platelet surface receptors gpiiia (cd-61) and p-selectin (cd-62). the agonists used induced a platelet activation of 37.4 ± 15.2% (rtf), 65.1 ± 12.1% (aa), 19.3 ± 7.4% (adp) and 27.1 ± 12.8% (collagen). flow cytometric analysis showed that all thrombin inhibitors studied caused a nearly complete inhibition of r-tissue factor-mediated platelet activation. in contrast, after induction of platelet activation with the other agonists an increased percent cd-62 expression was found, showing a strong platelet activation with a maximum at the same times as in non-anticoagulated blood. in conclusion, the results show that in whole blood thrombin inhibitors are effective in preventing platelet activation induced by r-tissue factor. the formation of active serine proteases including thrombin may be effectively inhibited by these agents. the observations further suggest that, while thrombin inhibitors may control serine proteases, these agents do not inhibit the activation of platelets mediated by other agonists. this work was supported by the grant bmft 07nbl01. animal experimental studies on the pharmacokinetics of peg-hirudin. e. bucha, a. kossmehl, g. nowak, max-planck-gesellschaft e.v., arbeitsgruppe "pharmakologische hämostaseologie", jena. hirudin, when complexed with polyethylene glycol (peg), increases its molecular weight from 7 to 17 kda, thereby preventing extravasation of this drug. peg-hirudin is distributed almost exclusively in the intravascular blood space. 
in addition, its increased molecular weight retards the renal elimination. the elimination half-life of hirudin in rats (58 ± 12 min, as determined) is increased five-fold (244 ± 18 min). with the same hirudin dose applied, the blood level of hirudin is increased 19-fold, measured in the β-elimination phase. in the urine of rats, 2-3% of the hirudin activity was recovered following hirudin administration, but 48% could be detected after peg-hirudin had been applied. after subcutaneous administration of peg-hirudin, the tmax value is reached at 380 min (r-hirudin: 65 min); the cmax value is increased 3-fold compared to that of r-hirudin (2.5 µg/ml). 24 hours later, still one fifth of the maximum concentration (cmax) is present in the blood, and the renal elimination is still retarded. in the urine of rats, 12% of the hirudin activity applied was recovered in the 24-h urine sample. with intact renal function, following subcutaneous administration, peg-hirudin is able to produce a constant blood level of hirudin over a long period. thrombin inhibitors such as r-hirudin (rh), argatroban (a), efegatran (e), and peg-hirudin (ph) are currently undergoing extensive clinical trials in such cardiovascular indications as ptca, ami, and treatment of unstable angina. a rapid assessment of the anticoagulant actions of these agents is, therefore, crucial to assure their efficacy and safety. currently, act and aptt are used to measure the anticoagulant effect of these agents. we have utilized a dry reagent technology based on the motion of paramagnetic iron oxide particles (piop) to measure the antithrombin effects of various thrombin inhibitors (cv diagnostics, raleigh, nc). the heparin monitoring card has been modified to measure antithrombin agents in various anticoagulant ranges for (a), (e), (rh), and (ph). 
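the half-life figures reported for rats above (r-hirudin ~58 min, peg-hirudin ~244 min) imply first-order elimination. a minimal one-compartment sketch in python, with an arbitrary illustrative starting level (not a value from the study), shows how much drug would remain 4 h after an i.v. bolus under that assumption:

```python
# one-compartment, first-order elimination sketch using the half-lives
# reported in the abstract above; c0 is an arbitrary illustrative level.
def remaining_fraction(t_min, t_half_min):
    """fraction of drug still circulating after t_min of first-order decay."""
    return 0.5 ** (t_min / t_half_min)

c0 = 1.0  # arbitrary units
for name, t_half in [("r-hirudin", 58), ("peg-hirudin", 244)]:
    frac = remaining_fraction(240, t_half)  # 4 h after dosing
    print(f"{name}: {c0 * frac:.3f} of the initial level remains")
```

with these half-lives roughly half of a peg-hirudin dose but only a few percent of an r-hirudin dose is still circulating at 4 h, consistent with the abstract's 19-fold blood-level difference in the β-elimination phase.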
blood samples drawn from patients treated with (a) and (rh) have been evaluated and concentrations of these agents have been calculated using an external calibration curve. in the in vitro setting, citrated whole blood or citrated frozen plasma can be used to evaluate the anticoagulant effects of these agents. the results obtained are comparable to the act, which is conventionally used for the monitoring of these agents. both (rh) and ( period. we would like to present a case of heparin-induced thrombocytopenia (hit) in a 47-year-old woman who underwent open heart surgery. she suffered from a combined aortic valve disease with leading stenosis. laboratory analysis showed constantly low platelet counts (50/nl) without heparin application, so that an idiopathic thrombocytopenic purpura was suspected. but platelets also decreased after heparin application. heparin antibodies were found in the heparin-induced platelet activation assay (hipaa). treatment with corticosteroids and immunoglobulins, respectively, showed no improvement, but the patient unfortunately developed a pneumonia with legionella pneumophila. therefore, the only suitable anticoagulant for the necessary aortic valve replacement was hirudin: a bolus injection of r-hirudin of 0.75 mg/kg b.w. was administered 10 min before start of the extracorporeal circulation (ecc), the heart-lung machine (hlm) was primed with 6 mg r-hirudin and another bolus of 5 mg of r-hirudin was administered. additionally 10 mg of r-hirudin was applied to the cell-saver reservoir. during the period of ecc, ecarin clotting time and aptt values were taken every ten minutes for monitoring of the r-hirudin concentration. the postoperative anticoagulation was performed with a constant infusion of r-hirudin starting eight hours after the end of ecc and monitored by aptt. due to the mechanical aortic valve the further anticoagulation was performed with phenprocoumon, starting 3 days postop. the therapy with hirudin showed no side-effects. 
hirudin, therefore, seems to be a suitable anticoagulant in patients with a high risk for bleeding complications like this. doses from 2-30 mg/kg gave similar post-op blood loss measurements without a dose-response (4-15 cc/kg) (less blood oozing than a historical heparin control but equivalent post-op blood loss; 10 ± 3 cc/kg). doses > 30 mg/kg showed more intra-op blood loss than the lower doses, but equal post-op blood loss. the bleeding time test was less elevated than for heparin. platelet counts and hematocrit did not vary except for hemodilution on pump. liver enzymes did not vary significantly pre-op to post-op. act values showed arg was eliminated (dose-dependently) by 1 hour post-op. dogs were hemodynamically stable during the peri-operative period, and overall gave predictable responses to arg (as opposed to variable responses to heparin). in a substudy it was demonstrated that hypothermia did not affect the activity of arg, nor did various formulations. this dose finding study strongly suggests that arg may be a safe and effective alternative to heparin for patients undergoing cpb. this is particularly important for the growing population of patients with hit who require cardiac surgery, for which no anticoagulant alternative is presently available. three recent clinical trials with r-hirudin (timi 9, gusto 2 and hit) have shown that the risk of severe haemorrhagic side effects was strongly associated with high aptt levels. the large interindividual variability of the aptt and the lack of a linear dose-effect ratio, however, limit its value for reliable monitoring of the anticoagulant effect of hirudin, since even severe overdosage due to impaired renal elimination may not be detected with this assay. we have therefore evaluated the ecarin clotting time (ect) as described by nowak and bucha (thromb. haemost. 
1993; 69: 1306) under conditions which allow conclusions on its reliability in the clinical situation. for this, citrated venous blood obtained from healthy volunteers, patients with unstable angina pectoris, and patients treated with marcumar was supplemented with different concentrations of peg-hirudin. measurements of aptt and ect were made in duplicate. in contrast to the aptt, the ect showed a close, linear relationship with peg-hirudin plasma concentrations in the range of 100 to 10 000 ng/ml. the linearity of this relationship was not affected by the presence of unfractionated or low molecular weight heparins in concentrations of up to 10 µg/ml. the ect was not affected by fibrinogen concentrations 60% below normal. a somewhat higher slope but no change in linearity was found in plasma from marcumar patients with quick values between 20 and 32%. no significant differences were found between values measured in citrated blood or plasma or using different coagulation timers. the most potent thrombin inhibitor containing a benzamidine moiety is napap (ki = 6 nmol/l). unfortunately, its pharmacokinetic properties (fast elimination by hepatic uptake and biliary excretion, poor enteral absorption) are unsuitable for the use of napap as an oral anticoagulant. the application of choice of a synthetic thrombin inhibitor would be the oral one; therefore, we looked for other lead structures. with the nα-arylsulfonylated piperazides of 3-amidinophenylalanine we found a new group of derivatives which inhibit thrombin with ki values in the nanomolar range. the piperazides exert anticoagulant activities with high selectivity, leaving activated protein c and components of the fibrinolytic system unaffected. in rats, the piperazides are rapidly eliminated from the circulation (t1/2 ~ 10 min) upon i.v. administration, too. after oral administration, the systemic bioavailability is low. 
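the linear ect-concentration relationship described above lends itself to a simple calibration-line read-out. the python sketch below fits a least-squares line to invented calibrators spanning the reported 100-10 000 ng/ml range (the slope and intercept are illustrative assumptions, not measured values) and inverts it to estimate an unknown sample's peg-hirudin level from its clotting time:

```python
# ordinary least-squares line fit; no external libraries needed.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# invented calibrators across the linear range reported in the abstract;
# the response line (0.02 s per ng/ml, 35 s baseline) is an assumption.
conc_ng_ml = [100, 500, 1000, 5000, 10000]
ect_sec = [0.02 * c + 35 for c in conc_ng_ml]
slope, intercept = fit_line(conc_ng_ml, ect_sec)

# invert the calibration line to read a concentration from a measured ect
unknown_ect = 95.0
estimated = (unknown_ect - intercept) / slope
print(round(slope, 3), round(intercept, 1), round(estimated))
```

this inversion step is exactly what makes a linear assay attractive for bedside monitoring: one measured clotting time maps back to one drug level, which the aptt's non-linear dose-effect curve does not allow.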
upon intraduodenal administration of high doses, widely varying blood levels were seen, depending on the mode of administration. to clarify the importance of a possible hepatic first-pass effect, we studied in more detail the pharmacokinetics of the nα-(2-naphthylsulfonyl)-3-amidinophenylalanine n'-acetylpiperazide in rats using hplc analysis. like other benzamidines, the piperazide is excreted via the bile to a high extent. enteral absorption rates of about 20% are found after blocking the hepatic uptake and biliary excretion. hence, a hepatic first-pass effect appears to be the main reason for the low systemic bioavailability after oral/enteral administration. at the same time, fast elimination from the circulation by hepatic uptake is the main problem for maintaining effective blood levels with benzamidines. therefore, the elucidation of the structural elements influencing the absorption and elimination processes of these types of inhibitors is necessary. the piperazides of 3-amidinophenylalanine bear the possibility to easily introduce a wide variety of substituents on the second nitrogen of the piperazine moiety. a 69-year-old female patient with diabetic nephropathy increasingly developed signs of allergisation combined with dyspnea, erythema, pruritus, and circulatory insufficiency two months after the start of heparin-anticoagulated haemodialysis and initial surgical application of a double lumen venous catheter. in addition, growing thrombocytopenia was observed, involving a drop in platelets by 50% compared to the initial values. the haemodialytic efficiency was reduced by massive thrombosis of the dialyzer and subsequent repeated interruption of treatment. at the end of may 1995 heparin antibodies were detected and the hat diagnosis was confirmed. immediately afterwards, haemodialysis treatment was continued, applying hirudin as anticoagulant. 
using steam-sterilised haemophan dialyzers and 0.14 mg/kg r-hirudin (iketon, italy), the minimum therapeutic blood level of hirudin (0.4 µg/ml whole blood) was reached. this provided therapeutically relevant blood level conditions during a 4.5-h haemodialysis. more than 80 regular haemodialyses were run without problems. in all hirudin-anticoagulated haemodialysis treatments the ecarin clotting time was used as the method of choice for bedside blood level and dosage control. after the 34th haemodialysis, the frequency was reduced from 3(4) to 2 haemodialyses a week. accordingly, the hirudin dose was increased to 0.2 mg/kg. the creatinine clearance increased continuously from initially 2.6 to 10.4 ml/min after the 13th week of hirudin-anticoagulated haemodialysis. platelet count and haemodialytic efficiency normalized. we could demonstrate that the regular use of hirudin as anticoagulant, along with dialyzers impermeable to hirudin, enables very good results in haemodialysis treatment in heparin-associated thrombocytopenia; hirudin is suited for use as anticoagulant in problem patients with heparin-induced allergy when combined with a drug monitoring method fit for bedside use. capillary electrophoresis methods provide a fast measurement of proteins. thus, we developed capillary electrophoresis methods for pharmacokinetic measurements of r-hirudin and peg-hirudin. for the measurement of r-hirudin we used a fused silica capillary and a borate buffer. this buffer was used to detect r-hirudin, but could not be used to measure peg-hirudin. for simultaneous measurement we used a neutral capillary to prevent protein adsorption to the capillary wall. the buffer was a 20 mm tricine buffer (ph 8.0, field strength 500 v/cm). it resolved r-hirudin from peg-hirudin at 214 nm using reverse polarity. 
a linear correlation between the peak area and the concentration was found between 80 µg/ml and 10 mg/ml for r-hirudin (r² = 0.99) and between 2.5 and 10 mg/ml for peg-hirudin (r² = 0.99). by spiking human plasma and urine with r-hirudin and peg-hirudin, the two proteins were completely resolved; a linear correlation between the peak area and the concentration was again found. the method separates r-hirudin from peg-hirudin and may be applied to biological systems to measure the concentration of r-hirudin. triabin is a thrombin inhibitor from the saliva of t. pallidipennis structurally unrelated to any known protease inhibitor, which probably functions by an interaction with the anion-binding exosite of thrombin. we used sf9 insect cells infected with recombinant baculovirus to produce sufficient triabin for a detailed biochemical characterization. the activity of the protein purified from cell lysates was assessed in a fibrinogen clotting assay and was found to be similar to that of the natural protein. a 4-fold prolongation of thrombin clotting time and aptt was achieved with 22 nm and 600 nm triabin, respectively. a kinetic analysis of the thrombin-catalyzed fibrinopeptide a release from fibrinogen showed that triabin is a tight-binding inhibitor. using the graphical method of dixon, the ki was determined to be 3 pm. introduction: thrombocytopenia is a common adverse effect of heparin therapy; in type ii hit the platelet decrease induces severe complications. we here present two special cases of type ii hit. case report 1: a 66-year-old male patient with dvt of the left leg was treated with therapeutic doses of heparin. from the first to the 12th day of therapy, the platelet count decreased from 122 000 to 54 000/µl. hit was confirmed by hipa test, heparin therapy was stopped and treatment with the heparinoid orgaran® was started. during the following days, arterial thromboses in the right a. femoralis occurred. 
several thrombectomies were not successful and, although orgaran® was stopped because of suspected cross-reactivity, amputation of the right leg could not be avoided. during the following days under hirudin treatment the platelet count normalized and no further complications occurred. case report 2: a 74-year-old female patient suffering from a hip fracture was treated by surgery with tep operation and received prophylactic heparin treatment. after 6 days, the platelet count decreased from initially 170 000 to 8 000/µl and dvt of the right leg was diagnosed. on the same day, severe bleeding into the left leg was observed and the hemoglobin concentration was diminished to 7.8 g% (before surgery 16.0 g%). hit was confirmed by hipa test, heparin was stopped and treatment with orgaran® started. the thrombocyte count normalized and no further complications occurred. conclusion: hit type ii can cause severe bleeding as well as thromboembolic complications. because of possible cross-reactivity between heparin and orgaran®, hirudin should be given in hit patients. currently thrombin time (tt), aptt, activated clotting time (act) or anti-iia activity (aiia), measured by a chromogenic substrate test, are used to monitor hirudin treatment or prophylaxis. the tt responds very sensitively to hirudin plasma levels and thus requires variable thrombin concentrations. the aptt appears to be more adequate; however, it shows large interindividual variations and does not respond sensitively enough to higher hirudin concentrations. the act is a simple whole blood clotting assay, but it is strongly influenced by the blood collection technique. the ecarin clotting time (ect) is a new clotting assay, recently described by nowak and bucha (thromb. haemost. 1993, 69, 1306). it measures the clotting time of citrated blood or plasma after prothrombin activation by ecarin, a snake venom enzyme from echis carinatus. ect shows a linear dependence on different hirudin concentrations over a wide concentration range (e.g. 0.1-5 µg/ml). 
in a clinical interaction study healthy volunteers were administered hirudin, asa or both. 15 male volunteers received an i.v. infusion of peg-hirudin (0.02 mg/kg/h) for 24 hours after an initial i.v. bolus of 0.2 mg/kg to compare the sensitivity and reliability of the ect with aptt, tt and act. the act was measured on the hemochron 801, usa; the ect on a fibrin timer; the aptt using the lyophilized silica aptt reagent by il; and the aiia on an acl (il, milan) with the chromogenic substrate s-2238. all tests were performed in duplicate. the ect was more sensitive to different hirudin concentrations than the aptt or act. the ect results were better correlated with the aiia activity than aptt and act. the lower detection range for the ect is 0.05 µg/ml hirudin. the ect is a very sensitive, simple and reliable test for the monitoring of hirudin treatment and prophylaxis. recombinant and synthetic inhibitors of thrombin such as hirudin, efegatran and argatroban are currently in various phases of clinical trials in several surgical and medical indications. the therapeutic effects of these agents are usually monitored by aptt, whereas in cardiovascular indications celite act and hemotec® act are used. the reliability of both the aptt and the act tests in predicting the safety of various thrombin inhibitors has been heavily debated. furthermore, some of these inhibitors are administered simultaneously to heparinized or coumadinized patients, and the obtained aptt and act results do not truly reflect the effects of these agents. ecarin is a snake venom enzyme derived from echis carinatus which converts prothrombin into meizothrombin, targeting the arg320-ile321 bond between the a and b chains of prothrombin. while thrombin inhibitors are capable of inhibiting meizothrombin, the atiii/heparin complex does not have any effect. using purified ecarin, nowak and bucha (1993, thromb haemost 69:1306) proposed an assay for hirudin. 
since thrombin inhibitors exhibit similar mechanisms of thrombin inhibition, the ecarin clotting time (ect) was evaluated to test its diagnostic efficacy in various experimental and clinical settings. lyophilized ecarin was obtained from knoll ag (ludwigshafen, germany). concentration-dependent clotting times for hirudin, efegatran and argatroban were obtained in a range of 0-15 µg/ml. all of the antithrombin agents produced a concentration-dependent prolongation of the ect and showed varying potencies in the order of efegatran > argatroban > hirudin on a gravimetric basis. on a molar basis, the anticoagulant order of potency was found to be hirudin > efegatran > argatroban. utilizing the ect, the effect of these inhibitors on patients undergoing bolus or infusion therapy, resulting in a concentration level of ~10 µg/ml, has been measured. unlike such global tests as pt and aptt, patients receiving simultaneous heparin or oral anticoagulants can be monitored for antithrombin-specific prolongation of the ect. plasma samples from heparinized (aptt 45-90 sec) or coumadinized (pt 15-25 sec) patients, supplemented with argatroban or hirudin, did not show any differences in the ect. a modified ecarin act comparable to the celite act has also been developed. initial results demonstrate that this test is not affected by aprotinin, heparin or reduction of the prothrombin complexes in the inr range of 1.5-3.0. these results indicate that ecarin-based clotting times provide specific estimates of circulating levels of thrombin inhibitors, which can provide reliable information to optimize their safety and efficacy. r-hirudin is a highly potent and selective inhibitor of the serine proteinase thrombin. after intravenous administration, r-hirudin is eliminated exclusively with the urine. its plasma half-life is very short, 1-2 h. peg-hirudin is a derivative produced by coupling polyethylene glycol (peg) to a specially designed recombinant hirudin mutein. 
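the flip in potency ranking between gravimetric and molar comparison noted above is a pure unit-conversion effect: at equal mass concentration, the ~7 kda hirudin (molecular weight as stated earlier in this section) supplies far fewer moles than a small molecule such as argatroban (~509 da, an approximate assumption). a python sketch of the conversion:

```python
# mass-to-molar concentration conversion; molecular weights are
# approximate assumptions used only for illustration.
def ug_per_ml_to_umol_per_l(mass_ug_ml, mw_da):
    # 1 ug/ml = 1 mg/l; (mg/l) / (g/mol) = mmol/l -> *1000 = umol/l
    return mass_ug_ml / mw_da * 1000.0

for name, mw in [("hirudin", 7000.0), ("argatroban", 509.0)]:
    umol = ug_per_ml_to_umol_per_l(1.0, mw)
    print(f"{name}: {umol:.3f} umol/l at 1 ug/ml")
```

so 1 µg/ml of hirudin is roughly 14-fold fewer moles than 1 µg/ml of argatroban; if the larger molecule still prolongs the ect more per mole, it ranks last gravimetrically but first on a molar basis, exactly the pattern reported.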
peg-coupling results in a considerable prolongation of the plasma half-life of peg-hirudin compared to r-hirudin. after intravenous administration of r-hirudin into rats, only a very small amount of "hirudin-like" activity (2-4% of the applied activity) was recovered in the urine. in contrast, after peg-hirudin had been administered, more than 30% of the applied activity could be recovered in rat urine. these results suggest differences in the renal metabolism of peg-hirudin and r-hirudin. within the scope of pharmacokinetic studies in rats we investigated the appearance of biologically active metabolites of peg-hirudin in urine after kidney passage. affinity chromatography on immobilised thrombin was used as a quick and gentle method of searching for biologically active hirudin metabolites in rat urine, but it had to be complemented by anion-exchange and/or reversed-phase chromatography to ensure that all active metabolites were detected. the isolated biologically active metabolites were purified by reversed-phase hplc and biochemically characterized. in previously reported studies we found a hirudin derivative consisting of amino acids 1-50 as the main metabolite in rat urine following intravenous administration of r-hirudin. this metabolite was not detected in the urine after administration of peg-hirudin, confirming the suggestion of a different renal metabolism. carrageenans are high molecular weight sulfated polygalactans of plant origin (derived from red algae) with anticoagulant properties. in previous studies we investigated the anticoagulant activity of lambda-carrageenan, a highly sulfated type of carrageenan. unlike heparin, lambda-carrageenan exerts its anticoagulant activity primarily through direct inhibition of the serine proteinase thrombin. only a part of its antithrombin activity is indirectly mediated through antithrombin iii.
to investigate relations between molecular weight and biological activities, lambda-carrageenan was hydrolysed and fractionated. the molecular weight was determined with the aid of size exclusion hplc using dextrans as molecular weight standards. the degree of sulfation was determined by anion-exchange hplc. we obtained low molecular weight lambda-carrageenans ranging from 10,000 dalton to 100,000 dalton with degrees of sulfation of 13-17% and 33-38%. the anticoagulant and antithrombin activities of the low molecular weight carrageenans were determined using coagulation assays and purified systems, and we compared their activities with those of heparin and other sulfated polysaccharides. further, we investigated the ability of lambda-carrageenan and its low molecular weight derivatives to inhibit the activity of human blood phagocytes. the activity was determined by measuring cellular chemiluminescence in a microplate luminometer using a luminol-dependent assay and zymosan as the phagocytosis-activating agent. we used an assay in human whole blood and assays with isolated human mononuclear and polymorphonuclear cells. the anticoagulant activity and also the ability of carrageenans to inhibit the activity of human macrophages decrease with decreasing molecular weight and decreasing degree of sulfation. the naturally occurring yellow pigment curcumin is the major component of turmeric and is commonly used as a spice and food-coloring agent. since curcumin has been reported to have anti-tumor-promoting, antithrombotic and anti-inflammatory properties, we studied whether curcumin acts on the transcription factors ap-1 (jun/fos) and nf-κb in cultured endothelial cells (ec). when ec were cultured in the presence of curcumin, electrophoretic mobility shift assays (emsa) demonstrated that binding of endogenous ap-1 to its dna recognition motif was suppressed.
inhibition was due to direct interaction of curcumin with the dna-binding motif for ap-1. enhanced ap-1 binding, induced after tnfα stimulation of ec, was decreased in cells pretreated with curcumin. this resulted in reduced transcription and expression of tissue factor, known to be controlled by ap-1 and nf-κb. nuclear run-on assays proved that curcumin directly reduced the tnfα-mediated transcription of genes regulated by ap-1, such as tf, endothelin-1 and c-jun. thus, curcumin not only suppressed ap-1 (jun/fos) binding but also inhibited tnfα-induced jun transcription. transient transfections with tissue factor promoter plasmids confirmed that inhibition by curcumin was dependent on intact ap-1 sites. besides its effect on ap-1 binding, curcumin reduced the radical-dependent activation of nf-κb due to its antioxidant properties; however, this inhibition was indirect and less prominent. the relevance of the in vitro data was confirmed in vivo in mice bearing meth-a sarcoma. when mice received curcumin before tnfα was injected, tumors showed reduced ap-1 activation. simultaneously, fibrin/fibrinogen deposition decreased, most probably due to reduced tissue factor expression. thus, curcumin inhibits ap-1 activation and the expression of endothelial genes controlled by ap-1 in vitro and in vivo. (jung, 1991). additionally, haemorheological parameters (plasma viscosity, erythrocyte aggregation) were measured. in all patients aptt, bleeding time, platelet adhesiveness, von willebrand factor and factor viii concentration and activity were determined. the patients with von willebrand disease showed characteristic morphological changes of capillary geometry. the tortuosity of nailfold capillaries was markedly increased, as was the diameter of capillaries on the arterial and venous side. plasma viscosity was significantly low.
multiple parameter analysis according to galen and gambino (1983), using the parameters "plasma viscosity below 1.25 mpas", "tortuosity index higher than 5" and "erythrocyte column diameter bigger than 16.9 µm", showed a positive predictive value of 100%. capillary diameter and capillary tortuosity have a positive predictive value of 91.2%. additionally, a reduction of the vasomotor reserve and/or a decreased erythrocyte velocity in the capillaries below the reference range was found in most of the von willebrand patients. it was quite remarkable that 16 of 40 of the von willebrand patients showed significant capillary bleedings. these findings confirm some former observations (e.g. o'brien 1950) and preliminary reports of our group (koscielny 1994). polymerase chain reaction (pcr)-based quantitation of mrna transcripts is an important tool in the investigation of the underlying molecular defects in inherited platelet disorders, such as the bernard-soulier syndrome. however, for the exact quantitation of mrna a number of methodological requirements have to be met. first, a standard (s) mrna must be synthesized which is able to undergo the same processing as the target wild-type (wt) mrna. secondly, the quantitation step following the pcr must differentially recognize standard and target dna, and thirdly, the assay must be precise with respect to both inter- and intraassay variability. in order to satisfy these requirements we constructed an s-gpib mrna which is identical to the wt-gpib mrna except for a 13 bp long primer recognition site at its 5' end, allowing differentiation between the pcr-amplified wt- or s-gpib cdna through incorporation of a fluorescein- or biotin-labelled 5' primer. both standard and wt gpib mrna showed identical amplification kinetics in the pcr reaction. the amplified dna was quantified using a dna binding assay, in which binding of amplified dna to gcn4 fusion-protein-coated microtiter plates is measured.
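the quantitation principle described above, competitive amplification against an internal standard with identical kinetics, can be sketched in a few lines: because wt and standard template amplify identically, the post-pcr signal ratio equals the initial copy ratio. the function name and all numbers below are illustrative assumptions, not values from the abstract.

```python
# Illustrative sketch (not the authors' software): quantitation of a
# wild-type (wt) template co-amplified with an internal standard (s).
# Since both templates show identical amplification kinetics, the
# measured post-PCR signal ratio wt/s equals the initial amount ratio.

def quantify_wt(std_input_amol: float,
                wt_signal: float,
                std_signal: float) -> float:
    """Return the initial wt template amount (amol) from the signal
    ratio against a co-amplified standard of known input amount."""
    if std_signal <= 0:
        raise ValueError("standard signal must be positive")
    return std_input_amol * (wt_signal / std_signal)

# made-up example: 500 amol standard spiked in, wt signal twice the
# standard signal -> 1000 amol wt template was present initially
print(quantify_wt(500.0, 2.0, 1.0))  # -> 1000.0
```

the same ratio logic underlies most competitive pcr designs; the differential fluorescein/biotin labelling in the abstract only serves to read the two signals separately from one reaction.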
since the gcn4 binding motif is incorporated into the wt- and s-gpib cdna through an identical 3' primer, competition between s- and wt-cdna during amplification was analyzed. at a given concentration of 250 nm of gcn4 primer, no competition between the s-dna and wt-dna for the primer was observed during 25 pcr cycles. the sensitivity limit of the assay performed in this way was 250 amol wt-gpibα dna, and the intraassay variability ranged from 1.66% to 6.72%, calculated for 100 fmol and 5 fmol dna, respectively. to sum up, the combination of rt-pcr with the amplified dna binding assay and the use of an internal standard mrna allows sensitive and accurate quantitation of gpibα mrna in human platelets. upa and thrombin are main contributors to the proliferation and migration of vascular smooth muscle cells (vsmc), which is part of the pathogenesis of atherosclerosis. we are currently assessing the role of spatial expression of upa and thrombin receptor (tr) on cells within human carotid artery plaques (n=10). we have used a double immunolabeling approach, combining anti-upa and anti-tr antibodies. to identify the different cell types, we used the following antibodies: anti α-smooth muscle actin (α-sma) for smooth muscle cells, ulex europaeus agglutinin i (uea i) for endothelial cells, an inflammation cell cocktail (cd68+cd45) for monocytes/macrophages and lymphocytes, and an anti-proliferating cell nuclear antigen antibody (pcna) to stain proliferating cells. in the carotid atherosclerotic plaques, upa immunostaining was distributed focally, preferentially in the fibrous cap and some cells of the foam cell rich region (fcrr), and was present in a distinct cytoplasmic staining pattern. tr staining was distributed similarly to upa staining. with double staining combining anti-upa antibodies with anti-tr antibodies, cellular co-localisation of both upa and tr was demonstrated. these cells were identified as smooth muscle cells by α-sma.
inflammatory cells were mainly localized within the fcrr; they stained only for upa. in conclusion, our data demonstrate that upa and tr are coexpressed in vsmcs in human carotid artery atherosclerotic plaque tissue. we therefore conclude that the mitogenic activity of upa is associated with the thrombin signalling pathway. in the proficiency test of the "deutsche gesellschaft für klinische chemie" (dgkc) 1/95, 5 lyophilised plasma samples (immuno ag) were sent to the participants: a normal plasma and 4 plasmas from persons under oral anticoagulation (oac plasmas, inr 1.9 to 3.7). the participants (n=552) returned the pt times obtained and in most cases (n=355) also the isi value for the thromboplastin used (isi of the pack insert). the inr was calculated using the pt of normal plasma and the isi of the pack insert (method i). two additional methods for inr calculation were compared with method i. according to the concept of calibrated plasmas (houbouyan et al., 1993), a calibration curve was constructed using the normal plasma and the oac plasmas. the inr was calculated using the pt of normal plasma and the laboratory-specific isi value given as 1/slope of the calibration curve (method ii), or was read off directly (method iii). for the inr values calculated by the 3 methods from the participants' data (n=355), outlier elimination (2 sd, iterative) was performed. the inr mean values for all 3 calculation models remain in a narrow range. using calibrated plasmas (methods ii and iii), fewer outliers were eliminated and the cv's obtained were smaller than with the conventional procedure (method i). obviously, the inherent problems of the inr, such as an accurate isi value, the pt value of normal plasma and instrument/laboratory influences on the isi, can be reduced using calibrated oac plasmas. practical approach and educational considerations of home prothrombin time estimation. a. bernardo, c.
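the two calculation routes compared above follow directly from the defining relation inr = (pt_patient / pt_normal)^isi: method i takes the isi from the pack insert, while method ii derives a laboratory-specific isi as 1/slope of a log-log calibration line through plasmas of assigned inr. a minimal sketch with synthetic numbers (the pt values and the isi of 1.2 are invented for illustration, not taken from the proficiency test):

```python
import math

def inr_from_pt(pt_patient: float, pt_normal: float, isi: float) -> float:
    """INR = (PT_patient / PT_normal) ** ISI (method i with the
    pack-insert ISI, or method ii with a laboratory-specific ISI)."""
    return (pt_patient / pt_normal) ** isi

def lab_isi(calib_pts, calib_inrs, pt_normal):
    """Laboratory-specific ISI from calibrated plasmas: fit
    log(PT/PT_normal) = (1/ISI) * log(INR) through the origin,
    so ISI = 1 / slope (least squares)."""
    xs = [math.log(i) for i in calib_inrs]
    ys = [math.log(pt / pt_normal) for pt in calib_pts]
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return 1.0 / slope

# synthetic check: generate calibrated plasmas from an assumed true ISI
pt_normal = 12.0
true_isi = 1.2
inrs = [1.9, 2.5, 3.0, 3.7]                  # assigned INRs of oac plasmas
pts = [pt_normal * inr ** (1 / true_isi) for inr in inrs]  # their PTs
isi = lab_isi(pts, inrs, pt_normal)
print(round(isi, 3))                                   # -> 1.2
print(round(inr_from_pt(pts[1], pt_normal, isi), 2))   # -> 2.5
```

method iii, reading the inr directly off the calibration curve, skips the isi entirely and interpolates the measured pt on the same log-log line.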
halhuber, herz-kreislauf-klinik, bad berleburg, germany. specific training is necessary for the patient to achieve reliable and reproducible results in prothrombin time measurement. the training scheme is based in many respects on experience with similar training courses for home control and management of diabetes and asthma. the education program is divided into a theoretical and a practical part. the theory part has group sessions of twenty patients at a time. the practical course is reduced to a maximum of five patients. the sessions are conducted by a medical doctor and by specialized medical/technical assistants. on average eight hours of theoretical education and two hours of practical training are sufficient. the contents of the theoretical lessons are: • need for anticoagulation after heart valve replacement, • potential interactions between anticoagulants and other medication, • accurate recording of the measured prothrombin time results, • techniques of prospective determination of the necessary amount of anticoagulant, • calculation of the individual doses, • potential pitfalls and mistakes, • corrections in case of over- and under-dosage, • early recognition of thromboembolic and/or bleeding complications. an alternative is a full-day intensive course which can be held during the weekend. our recently reported (1) observation that oral anticoagulant treatment causes an increase of heparin cofactor ii (hc ii) activity in plasma is now confirmed by a more extensive study. in 43 thrombophilic patients who were on vitamin k antagonist therapy (marcumar®) we found a median hc ii level of 142% as compared to 119% for 72 thrombophilic patients without any therapy (p < 0.002) and 104% for 59 healthy controls (p < 0.001). moreover, we observed that the increase of the hc ii level was significantly correlated with increasing inr values (r = 0.63, p < 0.001).
follow-up observations on some patients showed, however, clear differences in the levels of hc ii activity after the onset of vitamin k antagonist therapy. thus, some patients responded rapidly with a significant increase in activity ("strong responders") while others showed only slight changes ("weak responders"). in conclusion, the determination of hc ii activity may result in an improved estimation of the risk of bleeding, especially in high-intensity treated patients (inr > 3.5). after intracoronary stent implantation an aggressive oral anticoagulation (oac) therapy is mandatory. to find out whether coagulation activation occurs after coronary stent implantation during high-dose oac therapy, markers of plasmatic coagulation and d-dimer were measured. patients: 5 male patients (average age 57 years) were examined. blood samples were taken before and right after stent implantation and during the following week. patients got 30 mg phenprocoumon during the first three days, and additionally heparin and acetylsalicylic acid (asa) were given. methods: ptz, aptt, tz, protein c, tat complexes, f1+2 and d-dimer were measured. results: d-dimer levels increased steadily between day 0 and day 7. tat complexes showed a slight increase from day 0 (2.4 µg/l) to day 3 (15.3 µg/l). on day 7 tat levels were down again (2.0 µg/l). f1+2 (day 0: 1.0 ng/ml) also showed a slight increase on day 3 (1.3 ng/ml). protein c decreased steadily from day 0 (108%) to day 7 (15%). conclusion: during the initial phase of oac therapy a coagulation activation is reported, but no significant elevation of tat or f1+2 was found. this result shows that the additional heparin and asa therapy was sufficient to avoid systemic coagulation activation. the increase of d-dimer should be interpreted as a sign of a local fibrinolytic reaction due to stent implantation. three methods for the determination of prothrombin time from capillary blood in patients under oral anticoagulation have been investigated.
two methods were run on coaguchek® monitors (boehringer mannheim) from capillary whole blood. after finger puncture the first drop of blood was applied to the well of a coaguchek® test strip directly from the finger-tip, whereas the second drop was sucked into a non-anticoagulated plastic capillary (hirschmann) and immediately applied to the test strip, and vice versa, to eliminate any influence of the first and second drop of blood. the third method was hepato quick (boehringer mannheim), determined from citrated capillary blood obtained by earlobe puncture. 66 specimens from patients under oral anticoagulation were investigated. the method comparisons between each of the coaguchek® methods and the laboratory method show good results, and the correlation between the coaguchek® methods is excellent. mean differences to the lab method are -0.1 inr in both cases. no mean deviation was detectable between the coaguchek® methods. the scattering of coaguchek® versus hepato quick was +/-0.6 inr in the range 1 to 4 inr, except for three outliers and one patient with fluctuating results in the lab method which could not be resolved. introduction: haemorrhagic coumarin skin necrosis is a severe complication during the initial phase of oral anticoagulant therapy. histological examination shows thrombotic occlusion of small vessels, but little is known concerning the pathophysiologic background of the bleeding component. recently, we described protein z deficiency in patients with bleeding complications of otherwise unknown origin. thus, we were prompted to measure protein z in patients with coumarin skin necrosis. patients: 5 patients (1 man, 4 women; age: 35±10 years) suffering from haemorrhagic coumarin skin necrosis were examined. all patients had normal liver protein synthesis function; none was under oral anticoagulant treatment during this study. method: protein z antigen test, diagnostika stago, france.
results: 4 out of the 5 patients examined had diminished protein z levels (700, 820, 1080, 1700 µg/l) in comparison to normals (2900 µg/l). in one of our patients protein z was normal (3020 µg/l). conclusion: low protein z levels are an additional risk factor for haemorrhagic coumarin skin necrosis. oral anticoagulant therapy is the treatment of choice in patients with a need for long-term anticoagulation. since oral anticoagulants interfere with the function of vitamin k, it is not clear whether stable oral anticoagulation can be achieved in patients with a need for continuous substitution of fat-soluble vitamins including vitamin k. we report on a 59-year-old man who had experienced progressive hypertrophic obstructive cardiomyopathy over the preceding 21 years. atrial fibrillation was first diagnosed 18 years ago. later on, recurrent ischemic attacks and embolism of the right arteria iliaca occurred. in 1993 the patient underwent extirpation of the ileum and subtotal amputation of the jejunum because of mesenteric infarction. the resulting short bowel syndrome requires continuous substitution of fat-soluble vitamins. since vitamin k free preparations of fat-soluble vitamins for parenteral use are not available, prophylaxis of thrombosis had been performed with unfractionated heparin. as a consequence of the long-term treatment with heparin the patient developed severe osteoporosis. therefore, the decision to discontinue heparin therapy and initiate oral anticoagulation was made. because of its shorter half-life, warfarin (coumadin) was used instead of dicoumarol. over a 4-week induction phase inr values were controlled daily. a dosage regimen starting with 10 mg warfarin on the day of vitamin application (day 1), followed by 3.75 mg on day 2 and 1.25 mg on days 3, 5 and 6, was found to be optimal to maintain inr values within the target range (inr: 2.0-3.0).
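the reported weekly regimen is just a lookup over a 7-day cycle; a minimal sketch is shown below. note one assumption: the abstract lists doses only for days 1, 2, 3, 5 and 6, so days 4 and 7 are assumed dose-free here.

```python
# Illustrative encoding of the weekly warfarin scheme reported above:
# 10 mg on the day of vitamin application (day 1), 3.75 mg on day 2,
# 1.25 mg on days 3, 5 and 6. Days 4 and 7 are not listed in the
# abstract and are ASSUMED dose-free in this sketch.
WEEKLY_SCHEDULE_MG = {1: 10.0, 2: 3.75, 3: 1.25, 4: 0.0,
                      5: 1.25, 6: 1.25, 7: 0.0}

def dose_for_day(day_of_cycle: int) -> float:
    """Warfarin dose (mg) for a given day of the repeating 7-day cycle."""
    return WEEKLY_SCHEDULE_MG[(day_of_cycle - 1) % 7 + 1]

print(sum(WEEKLY_SCHEDULE_MG.values()))  # total weekly dose -> 17.5
print(dose_for_day(9))                   # day 9 = cycle day 2 -> 3.75
```

front-loading the dose on the vitamin k application day, as the regimen does, counteracts the transient rise in vitamin k availability at that point of the cycle.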
in order to minimize the risk of hemorrhage the vitamin administration was changed to the subcutaneous route. during an observation period of 6 months neither any bleeding or thrombotic complications nor a vitamin deficiency occurred. these data indicate that stable oral anticoagulation can be achieved despite extreme variation of vitamin k plasma levels. portable monitors for home monitoring of inr are well established for adults on oral anticoagulants. patients' compliance is improved, as well as long-term outcome. experience concerning the accuracy of the procedure in children is limited. 32 inr determinations were performed in parallel from venous and capillary blood samples of an infant on phenprocoumon, starting at the age of 4 months. the coaguchek® monitor from boehringer mannheim was used. choosing an arbitrary range of agreement of ±0.5 inr for both determinations, 81% of the measurements were within the defined range. 5/6 outliers were due to low inr resulting from difficulties in capillary blood sampling. the degree of agreement increased when the procedure was performed at least once a week. in conclusion, inr determination with a portable monitor may be helpful in home monitoring of oral anticoagulant therapy in young children. a dose adjustment should be done only on the basis of inr determination from venous blood - if it is considered the gold standard - to avoid over-anticoagulation. a stable anticoagulation is one of the most difficult tasks in attending patients with heart valve prostheses. if prothrombin times are out of the therapeutic range, the risk of bleeding or thromboembolism increases disproportionately. for this reason any improvement in anticoagulant control and/or management can have far-reaching consequences in decreasing complications, in extending longevity and in improving quality of life.
for the first time a clinical trial was started in 1986, and it continues until today at the cardiac rehabilitation center bad berleburg, germany, with patients mainly after heart valve replacement. the patients were trained to measure their own prothrombin time and to adjust their own dosage of the oral anticoagulant. within six years 600 patients were trained; 216 patients could be followed up with regard to their self-determined prothrombin times. the results were within the therapeutic range in 83.1% of the measurements (n=14,812) taken by the patients themselves. on average, the patients who determined their prothrombin time themselves did so at a weekly interval. neither major bleeding nor thromboembolic complications could be observed in the 205 patient-years of home prothrombin estimation. it is to be hoped that the usual rate of complications can be reduced when patients determine their prothrombin time themselves at a close interval, resulting in more constant values in the therapeutic range and slight corrections of the anticoagulant dose. home prothrombin estimation promises better quality of life and has a considerable potential to achieve this goal. circulating plasma thrombomodulin (tm) is a novel endothelial cell marker which may reflect endothelial injury. tm acts as a thrombin receptor which neutralises the fibrin-forming effect of thrombin and also accelerates the formation of the anticoagulant protein c/s pathway. tm therefore belongs to the anticoagulant defence system against thrombosis. increased tm levels have been described in various diseases such as ards, thromboembolic diseases, ttp, diabetes, le and cml, reflecting alterations of the vascular system at the endothelial level. to find out to what extent cardiac catheterisation irritates the vascular endothelium, tm concentrations (stago, asnières, france; ×10³ iu/ml) were investigated prospectively in 58 infants and children (three days - 16 years).
blood samples were drawn before the intervention, immediately at the end and 24 h later, snap frozen (-70 °c) and investigated serially in duplicate six weeks - 3 months later. the results (median and range values) are shown in the table. the enhanced tm concentrations immediately after the operative intervention, followed by normalisation within 24 h, indicate that cardiac catheterisation in pediatric patients leads to a short-lasting irritation of the vascular endothelium rather than to severe irreversible endothelial damage. recently, in an aptt based method, dahlbäck et al described in vitro resistance to the anticoagulant effect of activated protein c (apc) in thrombophilic adult patients. apcr is in the majority of cases associated with the arg506gln point mutation in the factor v gene. concerning the special properties of the neonatal hemostatic system (low vitamin k dependent coagulation factors, physiological prolongation of the pt and aptt), we adjusted this aptt based method (chromogenix, mölndal, sweden) to neonatal requirements: apcr was measured in 120 healthy infants according to dahlbäck. the results were expressed as apc-ratios: the clotting time obtained in a 1:1, 1:5 and 1:11 dilution with factor v deficient plasma (instrumentation laboratory, munich, germany) using the apc/cacl2 solution, divided by the clotting time obtained with cacl2 in the same 1:1, 1:5 and 1:11 dilution. in addition, plasma of 24 neonates with septicaemia was investigated, and data of 18 infants aged birth - three months with arg506gln +/- are shown. the arg506gln mutation of the factor v gene was assayed by amplification of the dna samples by pcr followed by digestion of the amplified products with the restriction enzyme mnl i. results were confirmed by sscp analysis or by direct sequencing of dna from patients with apcr. results are shown in the table [median (range) apc-ratio: 1.6 (1.4-1.95)]. neonates and infants were considered to be apcr when the aptt ratio was < or = 2.
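the apc-ratio defined above is a simple quotient, clotting time with the apc/cacl2 reagent divided by clotting time with cacl2 alone at the same dilution, with ratios at or below 2 classified as apc resistance. a minimal sketch; the clotting times below are invented for illustration.

```python
# Illustrative sketch of the apc-ratio computation described above.
# Clotting times (seconds) are hypothetical, not study data.

def apc_ratio(ct_with_apc_sec: float, ct_without_apc_sec: float) -> float:
    """Clotting time with APC/CaCl2 divided by clotting time with
    CaCl2 alone, measured at the same plasma dilution."""
    return ct_with_apc_sec / ct_without_apc_sec

def is_apc_resistant(ratio: float, cutoff: float = 2.0) -> bool:
    """Per the abstract, a ratio <= 2 was classified as APC resistance."""
    return ratio <= cutoff

r = apc_ratio(64.0, 40.0)      # hypothetical times at the 1:11 dilution
print(round(r, 2))             # -> 1.6
print(is_apc_resistant(r))     # -> True
```

a normal sample prolongs markedly under apc (high ratio); a factor v arg506gln sample resists apc cleavage, so its clotting time barely prolongs and the ratio stays low.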
concerning the special properties of the neonatal hemostatic system, our data show concordance with the pcr method in neonates and infants only when the aptt based method was performed in the 1:11 plasma dilution. case report: we report on an 8-year-old boy with severe hemophilia b and frequent screaming at night. eeg showed spike-wave activity starting from the temporal lobe but generalizing within seconds. complex partial seizures were diagnosed and therapy with carbamazepine was initiated. as no improvement was seen, nmr was performed. this revealed lesions within the right frontal cortex. higher doses of carbamazepine were not successful, nor was therapy with phenytoin and primidone, respectively. the patient is now treated with carbamazepine and valproate. he still suffers from one short seizure per day. because of his seizures we started prophylactic replacement therapy with 600 i.u. factor ix twice per week. discussion: in 1992 wilson et al. first detected brain abnormalities in 25 of 124 children and adolescents with hemophilia a or b who were negative for immunodeficiency virus (1). the most common findings (14/25 patients) were small, focal, nonhemorrhagic white matter lesions of high signal intensity on t2-weighted images. similar lesions have been reported in children with sickle cell cerebral infarction (2). only three of these 14 patients had seizures, all of them having a documented history of intracranial hemorrhage. our patient has lesions similar to those described by wilson et al., but no history of intracranial hemorrhage is documented. even if tuberous sclerosis might be a differential diagnosis, we think that the abnormalities are related to hemophilia or its treatment, because the patient has no further signs of this disorder. conclusions: 1. in patients with hemophilia and seizures, nmr might be useful as a highly sensitive method for the detection of gray and white matter changes. 2.
further studies should be initiated to determine the prevalence of pathological conditions in the brain of hemophiliac patients. disseminated intravascular coagulation (dic) is a rare but foudroyant disease occurring in gram-negative sepsis such as meningococcal septicemia. despite the availability of potent antibiotics, mortality in meningococcal disease remains high (about 10%), rising to 40% in patients presenting with severe shock and consecutive dic. as the clinical course and the severity of manifestations of systemic meningococcal infections vary, there is a need for early diagnosis of the infection and the stage of coagulopathy in order to reduce the high mortality rate. few and rapidly available parameters are needed to classify the wide spectrum of clinical and laboratory findings in patients with dic. the parameters include partial thromboplastin time, prothrombin time, plasma levels of fibrinogen, fibrin monomers and dimers, fibrin degradation products and the thrombocyte count. monitoring the course of hemostaseological findings in 26 pediatric patients with systemic meningococcal infections, we observed a change of coagulation parameters as early as in the first stages of the infection: a prolongation of partial thromboplastin time to an average of 69.1 sec (range 22-150 sec, normal 30-45 sec), a decrease of prothrombin time to 45.7% (range 13-71%, normal 70-100%) and of antithrombin iii to an average level of 16.8 u/ml (normal 20-29 u/ml) was found 1 to 4 (-6) hours after admission. the consecutive development of the hemostaseological parameters mentioned above permitted us to define the stage of coagulopathy and thus to induce a stage-related therapy. primary treatment consisted in control of shock by liquid substitution, compensation of metabolic acidosis, correction of clotting disorders (at iii and heparin in the stage of pre-dic; at iii and fresh frozen plasma in case of advanced dic) and treatment with β-lactam antibiotics (e.g.
cefotaxime or ceftriaxone). an early assessment of the coagulation disorders in meningococcal disease can be based on few coagulation parameters; thus an appropriate treatment may be arranged to prevent the patient from a fatal outcome of meningococcal septicemia and to protect him from the development of a waterhouse-friderichsen syndrome. this study was designed to prospectively evaluate coagulation and fibrinolytic activation in 60 children (neonate - 16 years) during cardiac catheterisation with low dose flush heparin (10 iu/ml saline). aptt (instrumentation laboratory; sec), anti-xa activity (xa; chromogenix; iu/ml), prothrombin fragment f1.2 (f1.2; behringwerke marburg; nmol/l) and d-dimer formation (d-d; behringwerke marburg; µg/l) were investigated before (t1), at the end (t2) and 24 h after cardiac catheterisation (t3). in addition, to evaluate the influence of inherited thrombophilia, resistance to activated protein c (apcr), protein c, protein s and antithrombin were investigated in all patients. during catheterisation heparin was administered in a median (range) total dose of 60 (17-206) iu/kg bw. in addition, infants < 6 months of age (arterial catheterisation only) or patients with known thrombophilia received 300-400 iu/kg heparin for a further 24 hours. the results (median and range) are shown in the table. f1.2 was significantly elevated above the pediatric boundary immediately after the intervention and nearly reached baseline values 24 h later. in contrast, no clinically relevant fibrinolytic activation was seen: d-dimer formation increased within the pediatric boundary immediately after the catheter and returned to baseline levels 24 h later. three children showed resistance to apc. in one child stroke had occurred before. as the result of apcr was not known in the remaining two patients, only one neonate received further prophylactic heparin.
the third neonate, without heparin prophylaxis, suffered from venous occlusion within two days after the intervention. in addition, no protein c, protein s or antithrombin deficiencies were found. although administration of low dose flush heparinisation during cardiac catheterisation could not prevent short-term coagulation activation, no thrombotic events occurred in children without inherited thrombophilia. whether further prophylactic heparinisation in children with apcr, protein c, protein s or antithrombin deficiencies may prevent vascular occlusion requires a more intensive study. a. sandvoss, w. eberl, m. borchert. introduction: capillary leakage, edema and hypovolemia are common complications in preterm infants, especially if birth weight is below 1,500 g. septicemia, asphyxia and immaturity seem to be the most important risk factors. to determine the influence of c1 esterase inhibitor (c1 ina) in preventing contact phase and complement activation, we investigated c1 ina concentrations in normal and symptomatic preterm infants. methods: the activity of c1 ina was measured by a chromogenic substrate method (behringwerke), the c1 ina concentration by radial immunodiffusion (behringwerke, germany). results: c1 ina activity in asymptomatic preterm infants (n=14) was 65+/-15% of normal at birth. healthy newborns showed activities of 80+/-20%. c1 ina reached normal adult values 2-4 days after birth. preterm infants with respiratory distress syndrome (n=14) showed lower activity on days 2-5; patients with additional septicemia (n=15) had decreasing c1 ina activities in the first three days of life. the individual course of c1 ina activity and the thrombocyte count correlated in the group with irds with and without septicemia. in children with capillary leakage the onset of diuresis went parallel with rising c1 ina activity. markers of contact phase (f xiia) and complement activation (c5a) were investigated in single cases, and evidence for involvement of both systems was found.
conclusion: contact activation and the complement system play an important role in capillary leakage in preterm infants. c1ina regulates both systems. activity of c1ina correlates with the clinical course; substitution therapy is possible and may improve the outcome of these critically ill patients. antiphospholipid antibodies (apa) interfere with hemostasis, probably by inhibition of protein c or the prothrombinase complex. thereby, apa might lead to thrombosis or increased bleeding. however, the incidence and clinical importance of apa have not yet been investigated in children. therefore, we assayed plasma samples of 220 children, aged 0.1 to 19 years (mean 7 years), by elisa detecting igg and igm antibodies directed against cardiolipin, phosphatidylserine and phosphatidic acid. in patients with increased bleeding, thrombophilia or prolonged clotting tests, a detailed coagulation analysis was performed. according to their diagnosis, children were divided into 5 groups: i. autoimmune diseases, ii. infections, iii. metabolic diseases, iv. other diseases, v. healthy children. results: apa were found in 69/220 patients. in the respective groups we demonstrated apa in the following proportions: 1. igg isotype: [...]. activity of c1 esterase inhibitor (c1ina) is reduced in preterm infants, especially if birth weight is below 1,500 g and respiratory distress syndrome and/or septicemia is present. capillary leakage with generalized edema, hypovolemia and hypotension results in an imbalance between inhibition and activation of the contact phase and complement system. in four patients we investigated seven courses of substitution with a commercial c1 esterase inhibitor preparation (berinert, behringwerke); case reports are given. all patients had clinical symptoms of capillary leakage, and all had septicemia accompanied by either respiratory distress, disseminated intravascular coagulation or multiple organ failure. 
efficacy of substitution therapy is dose-related; supranormal activities of c1ina are necessary, reflecting raised consumption of the inhibitor in ongoing disease. clinical effects on diuresis, catecholamine need and especially on thrombocyte counts are demonstrated. [...] or arterial thromboembolic event in children. e. lenz, c. heller, w. schröter*, w. kreuz, johann w. goethe-universitätskinderklinik, frankfurt a. main, germany; * georg-august-universitätskinderklinik, göttingen, germany. venous thrombosis as well as arterial thrombo-occlusive events are rarely observed in childhood, but can lead to life-threatening situations and long-term sequelae in these patients. after the initial stage of treatment (thrombolysis or thrombectomy), the pediatrician has to decide how to efficiently prevent re-thrombosis in the individual patient. anticoagulation after venous thrombosis is generally recommended for 6 months after the event; if an underlying thrombophilic condition has been detected in the patient, anticoagulation has to be considered lifelong. when evaluating antithrombotic therapies for children, it is important to consider whether the anticoagulatory effect is mainly required in the venous or the arterial vessel system. the hemorrhagic risk and side effects of the different anticoagulatory preparations have to be taken into account, especially when treating small children. only limited experience exists concerning the suitability of the preparations for long-term anticoagulation in children, and general recommendations on the ideal dosage in pediatric patients are still missing. we want to discuss different types of anticoagulants (such as coumarins, unfractionated heparin, low molecular weight heparin (lmwh) and inhibitors of platelet aggregation), their mode of action, their suitability for pediatric patients, their side effects and the relevance of these side effects especially in children. 
from the experience with our own pediatric patients, we would like to report on the indications for administering these different preparations, the dosage regimens we recommend and the laboratory tests used to monitor safe and efficient re-occlusion prophylaxis in our patients. in this context we would like to present our data on 8 patients with either thrombosis or arterial infarction due to a thrombophilic condition, who all had contraindications to oral anticoagulation by coumarins. because prophylaxis for re-thrombosis was mandatory in these patients, lmwh was given for long-term anticoagulation in a daily subcutaneous dosage of 100-150 anti-xa u/kg bw. monitoring was done by anti-xa test (0.4-0.8 anti-xa u/ml). under this regimen none of the patients developed re-thrombosis or bleeding complications. alopecia was seen as a side effect. this study was designed to prospectively evaluate coagulation and fibrinolytic activation after cardiopulmonary bypass with aprotinin (2x17,000 u/kg bw) in 42 infants and children aged 0.1-15 years, and to correlate these findings with the clinical outcome. prothrombin fragment f1.2 (f1.2; behring werke marburg: nmol/l), antithrombin-serinesterase complex (atm; stago: ng/ml), d-dimer formation (d-d; behring werke marburg: µg/l), tissue-type plasminogen activator antigen (t-pa; chromogenix: ng/ml), plasminogen activator inhibitor 1 antigen (pai; chromogenix: ng/ml) and c1-inhibitor (c1; behring werke marburg: x 10^-3 g/l) were investigated before the operation (t1), at the end of the operation (t2), and on postoperative days 1 (t3), 4-6 (t4) and 7-9 (t5), respectively. the results are shown in the table (median and median absolute deviation): t1 t2 t3 t4 t5 nv; f1.2: 0.9±0.5, 1.7±0.9, 1.4±1, 1.8±0.8, 1.6±0.[...]. the platelet (pl) function defect induced by thrombolytic agents has been attributed either to the degradation of pl surface receptors or to the anti-aggregatory effect of fgdps. 
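the weight-based lmwh regimen reported in the long-term anticoagulation data above (100-150 anti-xa u/kg bw daily, monitored to 0.4-0.8 anti-xa u/ml) can be sketched as a simple calculation; this is our own illustrative code, not part of the abstract, and the function names are hypothetical:

```python
# minimal sketch of the reported regimen; dose figures come from the abstract,
# function names and structure are our own illustration.
def daily_lmwh_dose(weight_kg: float, units_per_kg: float = 125.0) -> float:
    """return the daily subcutaneous lmwh dose in anti-xa units."""
    if not 100.0 <= units_per_kg <= 150.0:
        raise ValueError("units_per_kg outside the reported 100-150 u/kg range")
    return weight_kg * units_per_kg

def anti_xa_in_target(level_u_per_ml: float) -> bool:
    """check a measured anti-xa level against the reported 0.4-0.8 u/ml window."""
    return 0.4 <= level_u_per_ml <= 0.8
```

for a 20 kg child at the mid-range dose this gives 2,500 anti-xa units per day; actual dosing of course remains a clinical decision.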
in contrast to other plasminogen activators, scu-pa is intimately linked with pl: they can rapidly incorporate exogenous scu-pa, release it upon stimulation and bind the proenzyme. recently we reported that exposure of prp to recombinant scu-pa (2.5-100 nm) at timed intervals of 1-30 min resulted in dose-dependent inhibition of pl aggregation. time-course changes of the process followed biexponential kinetics: a rapid initial inhibition during the first 3-5 min with moderate suppression of pl aggregation over the 30 min period. when tcu-pa (25-100 nm) was exposed to prp under the same conditions, dose- and time-dependent inhibition of pl aggregation was also observed, although the effect was obtained no earlier than 10 min after exposure of tcu-pa to prp, and the threshold dose was higher. comparable inhibition of pl aggregation was obtained with 25 nm of scu-pa versus 100 nm of tcu-pa, and the fibrinogen depletion by the end of the 30 min period was 2% and 30%, respectively. it is likely that tcu-pa and its precursor have different mechanisms of action on the pl aggregatory function. in a recent study we showed that recombinant rscu-pa inhibits platelet (pl) aggregation in prp. to exclude the possible influence of rscu-pa/plasma interaction on this process, the aggregation of washed pls was investigated. pls were washed according to a modified mustard's method, suspended in buffer and adjusted to 250x10^9/l. the resuspended pls were exposed to 5-100 nm of rscu-pa for 30 min at 37°c. at time points 3, 5, 15 and 30 min the aggregation with 0.6 iu/ml of thrombin was measured. it was found that exposure of pls to rscu-pa (20-100 nm) for 3 min resulted in marked inhibition of their aggregation, whereas after 15-30 min of incubation with 20-50 nm of rscu-pa the inhibitory effect on pl aggregation became less pronounced or even disappeared. 
when 5 nm of rscu-pa was used, the inhibition of pl aggregation became significant only by 15 min of exposure and did not change over the 30 min of investigation. the observed results may be connected with uptake of rscu-pa by pls from the surrounding buffer as well as with individual variations of pl response to the same concentration of rscu-pa. loss of glycosylation may result in reduced platelet (p) survival and perhaps altered function. we analyzed the structural and functional effect of specific deglycosylation (combinations of n/o-glycosidase and neuraminidase treatment) of p and of isolated p gpib. washed and formaldehyde-fixed p were digested as follows: 1) with neuraminidase (0.125 u/ml) + o-glycosidase (3.1 mu/ml) + n-glycosidase (1.25 u/ml), 2) with neuraminidase alone (0.2 u/ml), 3) with n-glycosidase (2 u/ml) and 4) with neuraminidase (0.2 u/ml) + o-glycosidase (5 mu/ml). all reactions were performed in the presence of protease inhibitors (pmsf, leupeptin, sbti). after washing twice, the p and identically treated controls were analyzed by flow cytometry with the antibodies 6d1 (mab: a-gpib) and 7h2 (mab: a-gpiiia), and the lectins wheat germ agglutinin (wga, for neunac) and peanut agglutinin (pna, for β-d-gal(1-3)-galnac), which confirmed effective and specific deglycosylation by the respective enzymes (but gave only minor differences with 6d1 and 7h2). the botrocetin (b)- and ristocetin (r)-induced agglutinations showed, after treatment 1) (all enzymes), a full inhibition of r-induced agglutination but only a mildly reduced b-induced agglutination (70% of normal). treatments 2) and 3) (neuraminidase alone, and n-glycosidase alone) affected both agglutinations only mildly (70-80% of normal). treatment 4) (o-deglycosylation), however, showed a major inhibition of r-agglutination down to 30%, while b-agglutination interestingly was almost fully retained. 
the results of rotary shadowing electron microscopy of purified gpib suggested a collapse of the normally stretched, glycosylated gpib, not only after treatment with all three glycosidases, but also after o-deglycosylation alone. we conclude that o-glycosylation is most important for the ristocetin-induced platelet-von willebrand factor interaction and is responsible for the typical stretched shape. the phenomenon of in vitro platelet aggregation and consequent pseudothrombocytopenia (ptcp) in the presence of calcium chelation by na-edta and sodium citrate was studied in blood samples of a patient. initial platelet counts measured electronically were 20,000/µl in blood anticoagulated with na-edta and sodium citrate. normal platelet counts were found in heparin-anticoagulated blood and in capillary blood. immunoglobulins of the igg and igm subclass were identified in the patient's plasma. on incubation of the patient's serum with platelets of healthy individuals, platelet clumping occurred in the presence of na-edta and sodium citrate but not in the presence of heparin. the platelet membrane glycoproteins (gp) iib/iiia, ix and iiia/vnr α-chain were involved in the antigen-antibody reaction, as demonstrated by specific antibodies and flow cytometry. on the platelet surface, permanent calcium exchange and replacement is dependent on the external calcium concentration. calcium depletion induced by calcium chelators such as na-edta and sodium citrate might conformationally change platelet surfaces and induce formation of neoantigens. the decrease of gp iib/iiia platelet surface antigen to 10% (normal >75%) indicated the important role of the gp iib/iiia receptor in ptcp. the saliva of triatoma pallidipennis, a triatomine bug, was found to contain a protein called "pallidipin" that specifically inhibits collagen-induced platelet aggregation but not adhesion or shape change. 
to investigate the mechanism of action of recombinant pallidipin, its influence on platelet fibrinogen binding after activation by collagen type i at different concentrations was measured by flow cytometry. the same concentrations of pallidipin that inhibited collagen-induced platelet aggregation completely did not cause any inhibitory effect on fibrinogen binding in prp from the same donor measured in parallel. collagen type i-induced platelet aggregation of cd36-deficient platelets from two different unrelated blood donors was inhibited by the same concentration of pallidipin that inhibited aggregation of control platelets. there was no inhibition of collagen-induced fibrinogen binding in the cd36-deficient platelets either. pallidipin did not cause inhibition of collagen-induced membrane expression of cd62 and cd63 on control and cd36-deficient platelets as measured by flow cytometry. however, earlier studies had shown an inhibition of collagen-induced atp and βtg secretion by pallidipin. therefore we compared the effect of pallidipin in unstirred and stirred prp samples. while pallidipin had no effect in unstirred samples, it showed strong inhibition of βtg secretion in stirred samples. we therefore conclude that pallidipin does not act on collagen-induced aggregation through cd36 and that the inhibition is a post-fibrinogen-binding event. pallidipin does not influence the first steps in secretion, which are independent of the cytoskeleton and platelet-platelet contact, but inhibits the following steps. 17-hydroxy-wortmannin does not inhibit the transport of 1 nm gold-labelled fibrinogen in resting platelets. e. morgenstern, b. kehrel and k.j. clemetson, medical biology, saarland univ., homburg, germany; haemostasis research, univ. muenster, germany; and theodor-kocher-institut, univ. bern, switzerland. wortmannin, an inhibitor of phosphoinositide 3-kinase and of myosin light chain kinase, blocks reactions of the activated platelet. 
to obtain information about the role of the contractile cytoskeleton in receptor-mediated transport in resting platelets, the effect of 17-hydroxy-wortmannin (hw) on the endocytosis of fibrinogen from the surface of resting platelets was studied. gel-filtered platelets (gfp) were incubated for 10 min at 37°c with hw (3x10^-6 m) or with iloprost. controls and gfp preincubated with hw or iloprost were incubated with 1.4 nm gold-labelled fibrinogen molecules (fg-au; final concentration 40 µg/ml) at 37°c. the experiments were stopped after 5 or 30 min by rapid freezing. after freeze substitution in acetone with 4% osmium tetroxide, serial sections were prepared. the sections were examined after incubation with ascorbic acid (5% in h2o) for 30 min at 20°c (to reduce metallic osmium) and silver enhancement using danscher's (1981) method (to visualize the fg-au). examination of adp-stimulated platelets in the presence of 40 µg/ml fg-au shows that the ligand is able to mediate aggregation. the examination reveals that fg-au was present in low density on the platelet surface, and in higher density in the surface-connected system (scs), in coated pits and vesicles and in separated smooth vesicles (representing endosomes?) as well as in the matrix of alpha-granules. after 30 min, the number of labeled granules was increasing. labels on the surface and on the mentioned cytoplasmic membranes were observed during the whole period of incubation. hw or iloprost did not alter the resting gfp, and the mentioned qualitative ultrastructural findings in both preparations did not show differences from the controls. we conclude from the results with hw that the regular contractile function of the cytoskeleton is not necessary to transport the fg-au in resting platelets. methods: edta-anticoagulated whole blood was incubated with thiazole orange and analyzed with a flow cytometer. young platelets were defined by having a high fluorescence from thiazole orange (normalized to platelet size). 
platelets were also incubated with fluorescent antibodies to gpib, gp iib/iiia and gmp-140 (two-colour method). results: surface expression of gpib was the same in young and older platelets. results for gp iib/iiia and gmp-140 (in resting and activated platelets) will be presented. conclusion: young platelets can easily be detected using thiazole orange and flow cytometry. there is no differential expression of gpib. further results will be presented. the influence of erythrocyte and thrombocyte content on the release of atp by different agents in whole blood specimens was tested. the measurement was performed in the lumi-aggregometer using the principle of the luciferin-luciferase reaction. altogether 39 blood samples were diluted stepwise before induction of the release reaction by arachidonic acid (1.25 mmol/l final concentration), adp (30 µmol/l) and collagen (1.0 and 5.0 µg/ml). the peak of the obtained curves was transformed into percent values of the maximal deflection of the undiluted sample (= peak in relation) and into atp concentrations (= absolute peak) after testing the atp standard in parallel for each dilution step separately. the peak in relation increases with increasing dilution with all inducers. it was identical with the atp standard and with collagen, somewhat lower with arachidonic acid and much higher with adp. a luminescence-optical effect may influence all these results. the absolute peak decreases with dilution under arachidonic acid and collagen, as expected from the decreasing thrombocyte content of the samples. under induction by adp, no decrease of the absolute peaks with increasing dilution of the samples was observed. this can be explained only by liberation of atp from the erythrocytes. the atp standard is essential for the quantification of the release reaction. adp is not suitable for it. collagen at a final concentration of 1 µg/ml proved to be the best inducer. 
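the two readouts defined in the atp-release abstract above (peak in relation, and absolute peak via the parallel atp standard) are simple ratios; a minimal sketch, with our own hypothetical function names, might look like this:

```python
# illustrative sketch of the two quantities described in the abstract;
# the arithmetic follows the text, the naming is our own.
def peak_in_relation(peak_deflection: float, undiluted_peak: float) -> float:
    """peak expressed as percent of the maximal deflection of the undiluted sample."""
    return 100.0 * peak_deflection / undiluted_peak

def absolute_peak_atp(peak_deflection: float, standard_deflection: float,
                      standard_atp_umol_l: float) -> float:
    """convert a luminescence peak to an atp concentration via the atp standard
    run in parallel for the same dilution step."""
    return standard_atp_umol_l * peak_deflection / standard_deflection
```

running the standard for each dilution step separately, as the abstract stresses, is what makes the second conversion valid despite the luminescence-optical dilution effect.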
platelet aggregation induced by several agents was investigated photometrically in disc-shaped rotating cuvettes coated with vessel wall tissues obtained from human umbilical cord: either endothelium, smooth muscle cells, extracellular matrix, or combinations of them. in addition, the effects of endothelium incubated with several cytokines on platelet aggregation were studied. endothelial cells strongly inhibited aggregation depending on their cell count and the concentration of the inducer. smooth muscle cells showed the same effect but much less marked. in the presence of extracellular matrix, spontaneous aggregation occurred. endothelium could inhibit this spontaneous aggregation when present in the same cuvette; smooth muscle cells could not. incubation of endothelium with several cytokines increased its anti-thrombotic properties. for example, at a platelet count of 3x10^5/µl in the prp, 10^-6 m adp led to maximal aggregation in uncoated cuvettes; in the presence of 5.5x10^6 endothelial cells aggregation was completely abolished, and in the presence of 2.75x10^6 cells aggregation was decreased to 40%. smooth muscle cells diminished the aggregation effect of 0.1 nih-u thrombin to 67% when only one side of the cuvette was coated and to 63% when both sides were coated. endothelium could not inhibit aggregation induced by 2.5x10^-6 m adp, but endothelium incubated with 500 u/ml tnf-α or 30 u/ml interleukin-1β or 1 mm l-nitro-arginine for 24 h did completely inhibit aggregation. platelets become sticky and adhere to surfaces or to one another without contracting and secreting. during maturation of megakaryocytes, platelets finally lose their genomic nuclear message. only mitochondrial dna of platelets can be identified. we focused our attention on the impact of mitochondrial dna and mitochondrial transcription mechanisms during platelet activation in normals. 
materials and methods: leucocyte-free (nageotte chamber, flow cytometric analysis) platelet-rich plasma or platelet concentrates after hemapheresis were filtered by pall 100 leucocyte filters. the influence of different anticoagulants (commercially available sarstedt tubes containing citrate, heparin, edta and 500 atu/ml hirudin, wacker) was examined. activation was due to a 60 min hemapheresis procedure (3-5-fold increase of cd62, cd63) and to ex vivo stimulation with 4 nih u/ml thrombin, 0.025 m cacl2 or combinations. the guanidinium method for total rna preparation was used according to t. brown: current protocols in molecular biology 4.2.1-4.9.14, 1991. different primers of the mitochondrial genome (e.g. cytochrome b and atpase) were prepared using pcr and mitochondrial transcription was examined using northern blot technique. results: 1. there is less activation of mitochondria using hirudin anticoagulation, but a 2-fold increase of mitochondrial rna content in heparinized samples. 2. stimulation with thrombin leads to an increase to 5.5x10^-10 µg rna/platelet, compared to 4.7-4.8x10^-10 µg rna/platelet under unstimulated conditions. conclusion: there is evidence for the importance of platelet mitochondrial dna and mitochondrial transcription in the regulation of the cytoskeleton and platelet activation. thrombospondin-1 (tsp-1) is a large homotrimeric glycoprotein originally identified as a platelet alpha-granule component. the investigation of its putative role in a variety of pathophysiologies like haemostatic disturbance, malignancy and wound healing requires specific laboratory reagents. monoclonal antibodies are among the most powerful of these reagents. therefore, we purified human tsp-1 from thrombin-stimulated platelets using affinity chromatography to generate monoclonal antibodies in mice. a subclass igg1 monoclonal antibody designated 48.42 was purified from ascitic fluid and further characterised. 
western blot experiments demonstrated that this antibody reacted only with the unreduced molecule, whereas the tsp-1 subunit chain was not recognised. no cross-reactivities with human fibrinogen, fibronectin, vitronectin and von willebrand factor were found. preliminary results indicate that the monoclonal antibody 48.42 can be used to investigate tsp-1 function in several assays including immunocytochemistry and cell adhesion, as has been demonstrated for hl-60 cells. in addition, a sandwich enzyme immunoassay was developed using goat anti-human tsp-1 igg and derivatised monoclonal antibody 48.42 (peroxidase, biotin) as a sensitive method for detection of tsp-1 in human body fluids. in the following study the expression of the platelet antigen (cd62p) and the leukocyte antigen (cd11b) were measured in whole blood, in addition to platelet-leukocyte adhesion (rosette formation), by means of multicolour fluorescent labelling (cd45, cd14, cd42a). the measurements were carried out both in freshly drawn whole blood which had been anticoagulated with different agents, and in stirred samples of whole blood under controlled conditions (37°c, 1000 rpm, different stirring times). the results are presented as the percent positive events in each gate (platelets; leukocytes - pmnl, monocytes, lymphocytes; and rosettes - platelet-positive events in the pmnl, monocyte and lymphocyte gates), whose mean fluorescence is given in addition to an index comprising the product of the percent positive events and their mean fluorescence. stirring (max 15 min) induced an increase of cd62p on the platelet surface of ca. 10%, without any change in the mean fluorescence. under these conditions, increased cd11b on pmnl and monocytes could be detected. 
an increase in rosette formation could also be measured (greater index), in that the percentage of monocytes which were platelet-positive increased with no change in the mean fluorescence of the positive events, whereas pmnl showed an increased mean fluorescence, but not an increased number, of platelet-positive events. the time-dependent changes in rosette formation on stirring could be further increased by addition of adp. these results show that it is possible to measure rosette formation, and also the influence of effector agents (inhibitors or activators of platelets or leukocytes) on rosette formation, in whole blood using flow cytometry. 17 itp patients undergoing splenectomy were observed for 1-30 years following the operation and divided into 2 groups. the first group consisted of 8 patients with normal platelet count and absence of haemorrhagic syndrome. the second group was formed of 9 itp patients with episodes of thrombocytopenia recovery after a certain time period following splenectomy. to study cellular immunity, immunophenotypical investigations of blood samples were carried out using the immunofluorescence method with monoclonal antibodies. an increase of b-cells expressing cd22, cd37 and hla-dr antigen was revealed in the 2nd group. the quantity of srfc, cd3+ and cd5+ cells in the blood of recovered patients was lower than in patients of the first group. this group was also characterized by a statistically significantly increased level of cd4+ cells, while the cd4/cd8 ratio was equal to 1.0 ± 0.3 (0.5 ± 0.1 in patients of the second group, respectively, p>0.05). a relatively high expression of activating antigens was also noted in patients with thrombocytopenia recovery after splenectomy. among infectious complications, predominantly various types of throat infection were found in all patients observed, mainly with unsatisfactory treatment possibilities. 
we observed the opsi syndrome in 2 patients, featuring marked tiredness, breathlessness, intolerance of hard physical work and a diminished ability to maintain physical activity. extracellular matrix (ecm) produced by human endothelial cells closely resembles the vascular subendothelial basal lamina in its organization and chemical composition. thus it contains collagens, fibronectin, von willebrand factor, thrombospondin, fibrinogen, vitronectin, laminin and heparan sulphate. platelets carry different receptors on their membrane surface with specific binding capacities for one or more of these extracellular matrix proteins, such as glycoprotein (gp) iib/iiia, gp ib/ix and gp iiib. incubation of platelets with ecm results in platelet adhesion, degranulation, prostaglandin synthesis and aggregation. we studied patients whose platelets showed either a receptor defect in gp iib/iiia or gp iiib or a storage pool disease. adhesion experiments were performed using siliconised glass, collagen-coated surfaces and immobilized fibrinogen as well as human subendothelial matrix. platelet adhesion in patients with thrombasthenia glanzmann (receptor defect of gp iib/iiia) resulted in a total lack of binding to siliconised glass and immobilized fibrinogen. adhesion to collagen was almost normal, in spite of the fact that only single platelets stuck to the surface and no microaggregates were observed. adhesion to ecm was diminished and likewise no aggregates were detected. patients with a receptor defect in gp iiib showed normal platelet adhesion to siliconised glass and immobilized fibrinogen, but binding to collagen and ecm was markedly reduced, while platelets with a storage pool defect stuck to siliconised glass but failed to adhere to ecm. by centrifugation of citrate blood (250 x g, 10 min), erythrocytes and leucocytes go to the bottom, whereas plasma and thrombocytes remain in the upper part of the tube. 
thus the thrombocyte count doubles in the platelet-rich plasma in contrast to the platelet count in the whole blood volume. if the thrombocytes are more or less activated, they adhere to erythrocytes or leucocytes or aggregate, and are not able to stream upwards. the quotient between the thrombocyte counts in prp and whole blood is therefore a measure of thrombocyte activation. we checked the value of this screening in different groups of patients with arterial occlusive disease (aod), chronic venous disease (cvd), diabetes mellitus (dm) and in healthy control persons (control). the variation coefficient of the method is 3.7 (prp) and 4.4 (tc), respectively (coulter counter). differences from the control group are significant. changes in the patient groups over a 5-year dispensary follow-up are also significant. nicardipin-induced immunthrombocytopenia. p. eichler 1, c. hinrichs 2, a. greinacher 1; 1. institut für immunologie und transfusionsmedizin, ernst-moritz-arndt-universität greifswald; 2. deister-süntel-klinik, bad münder. drug-dependent immune thrombocytopenias are a rare but clinically important variant of immune thrombocytopenias. patients are at risk of severe bleeding complications. especially in patients receiving multiple drugs, the diagnosis of drug-dependent immune thrombocytopenia is often difficult. we report the case of a 71-year-old male patient who received allopurinol, captopril, digitoxin, furosemid and nicardipin. the patient presented with hematomas (plt. count < 10 g/l) and later developed bone marrow dysplasia. in an elisa using whole platelets and patient serum, a weak reactivity in the presence of furosemid, but a stronger reactivity in the presence of nicardipin (antagonil, ciba-geigy), could be demonstrated. the reaction pattern is given in the table. the enzyme-immunological determination of soluble fibrin (sf) proved to be highly sensitive and specific. 
this sf-elisa detected fibrin lacking fibrinopeptide a (fpa) via the monoclonal antibody 2t35, specific for the neoepitope generated on the aα-chain after the split of fpa. lill et al. recently introduced a new assay modification which utilizes the same antibody as the old one but takes advantage of a pretreatment of plasma specimens with kscn. this strong chaotropic ion is used to dissociate the various fibrin complexes possibly hiding fibrin epitopes. it was the aim of this study, therefore, to compare the two sf-elisa modifications (with and without kscn pretreatment of specimens). in order to examine the dynamics of thrombin-induced fibrin(ogen) metabolism, we made course observations in patients with a certain form of septicemia. the two assay modifications detected fibrin(ogen) derivatives which differed considerably in kinetics (n=160 samples from 10 courses). the former sf-elisa (no kscn) correlated well with prothrombin fragments, thrombin-antithrombin iii complexes and with the release of fibrinopeptide a (r > 0.96, n=151). results of the new sf-elisa with kscn pretreatment of patients' plasma, however, correlated conspicuously well with d-dimer levels (r > 0.94) but distinctly less with the markers of thrombin generation (-0.12 < r < 0.29). this good correlation with d-dimer levels was unexpected, since the d-dimer maximum occurred significantly later than the peak of the markers of thrombin generation (p < 0.05). therefore, kscn pretreatment of fibrin specimens seems to lead to a change in the specificity of the fibrin assay despite usage of the same catching antibody. different half-lives of differently composed fibrin complexes should be considered in trying to explain the findings. nevertheless, the results of the former assay without kscn treatment correlated much better with the well-known dynamics of thrombin-induced fibrin generation during hemostasis activation than the data from the new assay modification. 
consequently, further examinations are necessary to specify the effect of kscn on soluble fibrin complexes and the resulting assay specificity. a rapid assay for the determination of the primary hemostasis potential (php) of whole blood has been developed (kundu et al., 1995) from the original method of kratzer and born. the new system employs a disposable test cartridge which holds the sample (citrated whole blood) and all components for the test at the same time. the test procedure is very simple. the cartridge is loaded with ~500 µl citrated whole blood and is inserted into the platelet function analyzer (pfa-100). the test starts automatically after a preincubation phase of 2.5 min. the reaction starts with the contact of the whole blood with the capillary, which is connected to a collagen/epinephrine-coated membrane with a small aperture inside the test cartridge. under constant negative pressure the sample is aspirated, and through the contact of platelets and vwf with collagen, adherence and aggregation begin. the adhesion and aggregation process leads to the formation of a platelet plug which obstructs the flow through the aperture. the result of the php is reported as the closure time (ct). additional parameters such as bleeding volumes are possible as well. first results show good reproducibility, normal values in the range of up to 150 sec, and good discrimination of healthy donors from patients with congenital or acquired platelet dysfunctions. the system detects aspirin-induced thrombocyte function defects and von willebrand disease. in case of an abnormal result in the collagen/epinephrine system, a second type of cartridge with a collagen/adp coating can be employed. in the majority of cases, aspirin-induced dysfunctions are normalized in this system, and aspirin use can thus be detected. the proposed system may be a valuable tool for routine assessment of the primary hemostasis potential in a routine citrate blood sample. 
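the two-cartridge interpretation described in the pfa abstract above can be sketched as a decision rule; the 150 s cutoff is the normal-range upper bound quoted in the abstract, while the function name and labels are our own illustration, not part of the pfa-100 system:

```python
# hedged sketch of the reported interpretation logic; closure times in seconds.
def interpret_closure_times(ct_col_epi: float, ct_col_adp: float,
                            cutoff_s: float = 150.0) -> str:
    """classify a pair of closure times from the collagen/epinephrine and
    collagen/adp cartridges, following the pattern described in the abstract."""
    if ct_col_epi <= cutoff_s:
        return "normal primary hemostasis potential"
    if ct_col_adp <= cutoff_s:
        # collagen/epinephrine prolonged but collagen/adp normalized:
        # the pattern the abstract associates with aspirin use
        return "aspirin-type defect suspected"
    return "platelet dysfunction or von willebrand disease suspected"
```

any real classification would of course require the instrument's validated reference ranges rather than this single quoted cutoff.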
inducing mental stress in 20 young healthy male volunteers aged 20 to 40 years with no previous history of thrombophilia or a hemorrhagic diathesis was performed by a first-time parachute descent from an altitude of 1000 meters. the purpose of this investigation was to find out whether there are any changes in the corpuscular and plasmatic fractions of peripheral blood. we were especially interested in elucidating changes in the procoagulatory and/or fibrinolytic systems. venous blood samples were obtained directly before and directly after the jump. flight time from the departure of the airplane to the landing of the parachutists was approximately 20 minutes. the maximum time that elapsed between the two blood withdrawals was 45 minutes. in a preliminary study with different volunteers, certain fluid imbalances had been observed. absolute numbers of leukocytes (6.9 vs. 9.1/nl), erythrocytes (4.6 vs. 5.1/pl), and platelets (246 vs. 276/nl) significantly increased (p < .001), as did the hemoglobin concentration, from 145 to 156 g/l (p < .018). even though fluid imbalances before and after the jump had practically been excluded by nearly identical hematocrit values (.41 vs. .42), we noticed a marked drop in aptt (27 vs. 23 sec) and a significant increase in factor viii activity. as a direct stress response, we found a rise in fibrinogen concentration (2.4 vs. 2.8 g/l), one of the shortest-acting acute phase proteins. concerning reactive fibrinolysis, d-dimers showed an increase in concentration from 115 µg/l to still normal values of 192 µg/l, which was not significant due to the low number of values (p = .086). we observed similar changes in fibrin monomers and prothrombin fragments f1+2. 
from other investigations on the kinetics of the activation of the procoagulatory system we know that maximum activity is not reached until 24 hours after initiation of activation. these investigations studied perioperative changes in different kinds of operations, which served as a control group concerning the degree of tissue damage and resulting coagulation disturbances. to better understand these phenomena we plan to induce mental stress in a laboratory environment to further exclude unknown influences on the mechanisms which can activate the procoagulatory and fibrinolytic systems. triodena (t; 30/40/30 µg ee, 50/70/100 µg gestodene) was tested for its effect on hemostatic parameters. three groups (n=20) of healthy female volunteers were treated for 6 months with one of these oc. blood was taken before treatment (day 24-28 of the pretreatment cycle, 0) and on days 18-22 of the 3rd (i) and 6th (ii) treatment cycle. indications of an activation of blood coagulation and fibrinolysis were detected, as the plasma levels of prothrombin fragment f1+2, of the fibrin split product d-dimer and of plasmin-antiplasmin complexes were found elevated during treatment. the following main regulatory components of blood coagulation, activators and inhibitors, were investigated: factor vii antigen (fviiag), fvii clotting activity (fviic), circulating activated factor vii (cfviia), antithrombin iii (at iii) activity, total protein s antigen (tps-ag), free protein s antigen (fps-ag), protein s activity (psact), and circulating thrombomodulin (ctm). fviiag, fviic and cfviia significantly increased during treatment (cfviia at 0: 32.4 mu/ml). a prethrombotic condition characterized by elevated levels of circulating soluble fibrin has been claimed to be a predisposing factor for accumulation of coronary thrombotic material in acute myocardial infarction. the present study includes 161 patients with clinical suspicion of myocardial infarction. 
blood samples were drawn by the primary care physician, upon arrival in the hospital, and after 2, 6, 12, and 24 hours of hospital stay. patients with myocardial infarction were identified by a typical course in the 12-lead ecg and by sequential determination of troponin t, myoglobin, ck, and ck-mb. patients with primary cpr were excluded from evaluation. soluble fibrin was measured by the enzymun®-test fm (boehringer mannheim). patients with acute myocardial infarction display soluble fibrin levels within the normal range (< 5 µg/ml) during the initial two hours after onset of symptoms. there was no significant difference between patients with myocardial infarction and patients with coronary heart disease without myocardial infarction. slightly elevated levels were found in patients with atrial fibrillation, reflecting intracardiac fibrin formation. in patients without fibrinolytic treatment, a slight increase of soluble fibrin levels with a maximum after approximately 8 hours is observed. most patients with fibrinolytic treatment display a considerable increase in soluble fibrin, with maximum levels immediately after infusion of the fibrinolytic agent. four patients with pulmonary embolism showed soluble fibrin levels in the range of 40-300 µg/ml, which remained in the same range during the entire observation period. in conclusion, circulating soluble fibrin is not increased in patients with acute myocardial infarction and does not appear to be a predictor of acute coronary events. high levels of soluble fibrin in patients with fibrinolytic therapy may reflect release of fibrin from thrombotic material, but also de novo generation of fibrin due to release of active thrombin from thrombi not necessarily located in the coronary vessels. detection of elevated levels of soluble fibrin in patients with acute chest pain should prompt careful examination for signs of pulmonary embolism or aortic aneurysm. 
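the interpretation rule suggested by the study above can be condensed into a small sketch; the thresholds (5 µg/ml normal upper limit, 40 µg/ml as the lower end of the range seen in the pulmonary embolism patients) are taken from the abstract, while the function itself and its result wording are editorial assumptions.

```python
# Sketch of soluble fibrin interpretation; thresholds from the abstract,
# the helper itself is an illustrative assumption, not a clinical rule.

def interpret_soluble_fibrin(level_ug_per_ml):
    """Classify a circulating soluble fibrin level (µg/ml)."""
    if level_ug_per_ml < 5:
        return "within normal range"
    if level_ug_per_ml >= 40:
        # range observed in the four pulmonary embolism patients
        return "markedly elevated: examine for pulmonary embolism or aortic aneurysm"
    # mild elevations were seen with atrial fibrillation and after lysis
    return "elevated: consider intracardiac fibrin formation or fibrinolytic therapy effect"
```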
the possibility of determining activated coagulation factors raises the question of whether such data provide evidence of activated coagulation or fibrinolysis and whether this has prospective value. we investigated patients with confirmed thrombosis, postsurgical septicaemia and also after liver transplantation. in all patients factor viia, xii and xiia as well as the fibrinolytic parameters t-pa, pai-1, pap, plasminogen and α2-ap were determined, in addition to f1+2 and apc resistance in patients with heterozygous factor v leiden mutation and confirmed thrombosis. we found increased factor viia, partly accompanied by an increased f1+2. patients with other pathological results, such as a reduced t-pa and/or increased pai-1, showed a low incidence of elevations in factor vii or f1+2. the activation of factor xii seems to be of minor importance in patients with thrombosis. a different picture is found in septic and transplanted patients: here factor xii activation is obviously of major importance. a deterioration of the clinical symptoms is correlated with an increased factor xiia, paralleled by a decrease of factor xii activity. the investigation of fibrinolysis parameters such as pai-1 and pap demonstrates a disturbance of the fibrinolytic balance. differences are statistically significant in septicaemic patients, both in the surgical and in the internal medicine group, in contrast to polytrauma patients. in patients with liver transplantations, significant changes are apparently related to rejection of the transplanted organ together with a deterioration of the clinical picture. the possibility of detecting activated coagulation factors may be a tool to detect changes in the hemostatic system at an early stage and to use this for improved therapy. control of long-term oral anticoagulation is usually performed by serial determinations of the prothrombin time. however, the assessment of effective anticoagulation versus the potential risk of bleeding complications is difficult to achieve. 
molecular markers of blood coagulation activation might add valuable information in individual cases. we investigated 48 patients with thromboembolic manifestations (deep vein thrombosis n=22, pulmonary embolism n=13, myocardial infarction n=13) for one year beginning with admission to the hospital. tat, prothrombin fragment f1+2, d-dimer and fibrin monomer concentrations were analysed. all markers were significantly increased at the time of initiation of anticoagulant therapy, thus reflecting a prethrombotic situation. patients suffering from venous thromboembolism demonstrated higher concentrations of tat and f1+2 in comparison to myocardial infarction (34.6 vs 12.3 µg/l, p=0.009; 2.8 vs 1.3 nmol/l, p=0.0025). f1+2, tat and d-dimer concentrations decreased gradually over the first 14 days of anticoagulant therapy, reaching values within the established normal ranges in all cases. f1+2 and tat concentrations reflect the activity of the coagulation system during long-term anticoagulation, whereas analysis of fibrin monomer yielded partly contradictory results. we conclude that f1+2 and tat appear to be superior to fibrin monomer for the individual control of oral anticoagulant therapy. the influence of thyroid failure on haemostasis is controversial. mainly hypocoagulable states have been described in clinically overt hypothyroidism. since hypothyroidism has been associated with an increased risk of atherosclerosis, we studied a wide range of haemostatic factors in untreated female patients with subclinical (b, n=42, age 59±13) or overt (c, n=8, age 55-zcj) hypothyroidism, as well as in hypothyroid women under t4 treatment (d, n=8, age 57±9) and euthyroid controls (a, n=80, age 50±14). 
simple screening tests (prothrombin time, activated partial thromboplastin time, fibrinogen), procoagulant factors (fvii, fviii, von willebrand factor), coagulation inhibitors (antithrombin iii, heparin cofactor ii, protein c, protein s) and fibrinolytic factors (plasminogen, antiplasmin, plasminogen activator inhibitor, tissue plasminogen activator) were measured. results: factor vii activity (vii:c), factor vii antigen (vii:ag) and their ratio were found increased in hypothyroid patients. factor viii activity showed the same tendency, whereas von willebrand factor remained unchanged, as did all other parameters with the exception of free protein s, which declined in overt hypothyroidism and in t4-treated subjects. these differences tended to diminish after exclusion of 26 women with estrogen replacement therapy for menopause, but the ratio vii:c/vii:ag, as well as fvii:c, still remained significantly higher in hypothyroid patients. conclusions: subclinical and overt hypothyroidism are associated with significantly higher levels of factor vii:c and vii:ag. the disproportionate increase in vii:c compared to vii:ag, as shown by their ratio, might reflect the presence of activated factor vii (viia), which in turn indicates a hypercoagulable state. this pattern becomes more pronounced with concomitant estrogen replacement after menopause. exocytosis following platelet activation leads to translocation of cd62p (p-selectin), cd63, and thrombospondin from cytoplasmic granules to the cell surface membrane, where these molecules, serving as activation markers, can be detected by flow cytometry. we here report detectability of these molecules preformed, prior to platelet activation, inside the cytoplasm of resting platelets. two different methods are compared, i.e. using either methanol or the fix&perm kit (an der grub) for cell membrane permeabilization. 
in addition, interleukin-1 converting enzyme (ice) is shown to be present in platelet cytoplasm after methanol treatment, but not after permeabilization using fix&perm. whenever cell surface positivity for a specific marker coincides with intracellular presence, blocking of the surface membrane sites prior to membrane permeabilization is required in order to obtain fluorescence intensity attributable to cytoplasmic staining. our data demonstrate the feasibility of the methods presented for the detection of intracellular platelet molecules. this technique should also provide a means for estimating the relative quantity of intracellular platelet antigens, provided the permeabilization procedure does not lead to antigen leakage or destruction. physical exercise activates the clotting as well as the fibrinolytic system, as shown in numerous investigations of exercise by running and by bicycle ergometer, but not by swimming. the positive effect of endurance training in coronary sport groups is also mediated by influences on the hemostatic system, namely suppression of clotting activation by acute exercise and an increased fibrinolytic response. different hemostatic parameters were therefore analyzed before and after swimming in male coronary patients (n=33; median age 61 years; achieved heart rate: 68/min). indicating plasmatic clotting activation, there was a significant increase in the molecular markers tat and f1+2 among the coronary patients (tat from 2.1 to 3.4 µg/l; f1+2 from 0.92 to 1.1 nmol/l). the degree of clotting activation among the coronary patients was less than that observed in a group of young volunteers in a former investigation. this may be explained by the existence of the coronary heart disease or by the higher age in the patient group. indicating an activation of fibrinolysis, t-pa activity increased significantly in the coronary patients (from 0.14 to 0.5 iu/ml), resulting in an unchanged balance between coagulation and fibrinolysis. 
from these findings, no increased risk to coronary patients from swimming can be derived with respect to the hemostatic system. prerequisites, however, are precautions such as avoiding exercise in the anaerobic range and exclusion of major heart failure and of cardiac arrhythmias before beginning the swim training. the principle of the fontan operation consists in anastomosing the right atrium to the pulmonary artery, thus bypassing the right ventricle and using the only functional single ventricle as a pump for the systemic circulation. there are only few data about the influence of the changes in hemodynamics on coagulation and fibrinolysis. we investigated the coagulation system in 20 children and young adults aged 4 to 21 years in a general examination 4 to 61 months after the fontan procedure. besides other abnormalities of the coagulation system, there were significantly increased values for the thrombin-antithrombin iii complex (tat) in 12 patients (60%). as a marker for an activation of the fibrinolytic system we found elevated plasmin-alpha2-antiplasmin (pap) levels in 14 patients (70%). less frequently, the concentrations of the prothrombin fragments 1 and 2 (f1+2) (7 patients, 35%) or the d-dimer (2 patients, 10%) were increased. we did not find significant differences in a clot lysis assay between fontan-operated patients and an age-matched control group. there was no significant correlation between activation of coagulation and clinical situation or diameter of the pulmonary artery. whether the present data can help to estimate the risk for a thromboembolic complication following the fontan procedure still has to be investigated. the results of the clot lysis assay suggest that for lysis of thrombi the same dose of rt-pa should be used as for other patients. a 2nd generation functional protein s assay. p. van dreden* and e. 
adema**. * serbio, gennevilliers, france; ** boehringer mannheim, tutzing, germany. a second generation protein s test was developed with improved sensitivity to protein s and better reagent stability. the test result was found to be unaffected by apc resistance (10 patients heterozygous for the mutation with an aptt + apc ratio between 1.4 and 1.9), heparin up to 2 iu/ml and f viii activity between 1 and 250%. in the test, diluted sample is mixed with protein s deficient plasma, activated factor v, activated protein c, phospholipids and an intrinsic pathway activator. this mixture is incubated for 3 minutes. during this time, the activated protein c inactivates part of the f va. the extent of f va inactivation depends on the protein s concentration. after 3 minutes cacl2 is added and the time until clot formation is measured. the clotting time is a linear function of the protein s concentration between 10 and 140% protein s. for the three preproduction lots the difference in clotting time between 10 and 100% protein s was 43-54 seconds. this compares to the 30-40 seconds typically obtained with the old test. within-run precision (n=10 on sta) is cv = 2-7% on the basis of protein s. day-to-day precision (n=10 on sta) was found to be cv = 4-11%, again calculated on the basis of protein s concentration. the cv of 11% was obtained for an avk plasma with 13% protein s; it corresponds to a standard deviation of only 1.5% in protein s. the insensitivity to interferences, in particular apc resistance, and the better precision and stability are expected to improve the quality and reliability of protein s determination. in this study we evaluated the influence of hormonal contraception on the parameters protein c, protein s and pai. 
samples from 71 women with and without hormonal contraception and in menopause were assayed by coagulometric (protein s clotting test; behringwerke, marburg, frg) or chromogenic methods (protein c activity test and pai reagent from behringwerke, marburg, frg) in duplicate and were compared with the reference ranges. in addition, thromboplastin time (thromborel s reagent) and fibrinogen (multifibrin) from behringwerke, marburg, frg, and aptt (actin fs reagent from dade corp., unterschleissheim, frg) were determined. in women using hormonal contraceptives (p<0.01) and in menopause (p<0.05), protein s activity was significantly reduced compared to the other women (<45 years), while protein c activity did not change. in menopausal women a higher susceptibility to thrombosis was suggested by an increase of aptt (p<0.05) and fibrinogen (p<0.01). while there was no change for pai, plasminogen was significantly lower in women using hormonal contraceptives and in menopause (p<0.05). we could not observe a higher turnover of the coagulation and fibrinolytic systems with hormonal contraception. noteworthy was the occurrence of low (<200 mg/dl) and borderline fibrinogen (max. 220 mg/dl) in 40.9% resp. 22.8% of women (together with borderline aptt) who had an individual risk for arterial disease.

               protein s     protein c    fibrinogen    aptt        plasminogen
  without hcc  109.1±13.6    78.3±14.1    255.0±14.0    36.…±….4    24.…±….2
  with hcc     85.…±….2      78.5±12.0    253.8±24.1    35.…±….8    14.9
  menopause    90.3±97.6     87.4±41.4    307.0±57.6    39.…±….0    12.6
  hcc = hormonal contraception

hemostatic parameters in a patient undergoing bone marrow and subsequent liver transplantation due to veno-occlusive disease. c. salat 1, e. holler 1,3, h.j. kolb 1,3, b. reinhardt 1, r. pihusch 1, p. göhring 2, s. poley 2, e. hiller 1. 1 = med. klinik iii, 2 = institut für klin. 
chemie, klinikum grosshadern der ludwig-maximilians-universität münchen; 3 = hämatologikum der gsf. a 40-year-old patient suffering from all received allogeneic bone marrow transplantation (bmt). after an uncomplicated early posttransplant period the patient was discharged after 4 weeks. a bilirubin rise with subsequent liver failure was observed during the following weeks. owing to biopsy-proven hepatic veno-occlusive disease (vod), liver transplantation was performed on day 79. unfortunately the patient died on day 140 due to aspergillosis. we monitored levels of protein c (pc) and s (ps) as well as pai-1 during the pre- and posttransplant period. the pai-1 level was normal (<43 ng/ml) during the first 4 weeks after bmt but increased with the manifestation of vod (317.5 ng/ml on day 47). it reached its peak immediately before liver transplantation (547.6 ng/ml) and returned to normal levels within the next few days. pc levels, which were normal before bmt, decreased prior to the clinical diagnosis of vod and were normal after liver transplantation. ps levels lay within the normal range at all timepoints. vwf was elevated before bmt (240%) and remained relatively stable during the whole investigational period, ranging from 170 to 260%. it is assumed that vod is initiated by an endothelial cell injury, possibly due to radiochemotherapy, and subsequent hypercoagulability. our results indicate that the "endothelial cell marker" vwf is not helpful in predicting vod. the kinetics of the investigated parameters underline the significance of pc and pai-1, as described by others and by our group earlier, whereas ps does not seem to play a role in the pathogenesis of vod. the budd-chiari syndrome (bcs) is characterized by hepatic venous outflow obstruction that may be caused by the precipitation of a thrombus. it frequently cosegregates with other major diseases such as myeloproliferative diseases or defects in the haemostatic system (e.g. protein c and protein s deficiencies). 
only recently, the factor v leiden mutation (fvlm) has also been associated with bcs. we hypothesized that defects in the thrombomodulin-associated anticoagulant pathways (tmaap) are a major risk factor for the precipitation of bcs. we screened our cohort of 27 patients (pts) with bcs for the presence of defects in the tmaap and identified 3 pts with protein s deficiency (psd). these pts were screened for the three point mutations in exon 1 (codon -25; ins t), exon 15 (codon 636; a-->t) and in intron 10 (g-->a +5) of the ps alpha gene that have been demonstrated by bertina et al to cosegregate with psd. restriction enzyme analysis and conformation-sensitive gel electrophoresis for the detection of single-base differences in double-stranded pcr products were employed. all living family members of the indicator pts were also screened for heterogeneities in the three point mutations as described. no single abnormality in these genes was found, despite the presence of psd in those family members. in addition, pts and family members were also screened for fvlm. one pt and two of his family members, in addition to psd, carried fvlm. the other two pts and their family members did not carry fvlm. in contrast to the first family, despite psd, those two pts suffered from crohn's disease and acute myeloid leukaemia as risk factors for bcs. we conclude: psd is one major risk factor for the precipitation of bcs; to precipitate this disease, one additional risk factor is required; psd may be caused by genomic defects in the protein s gene other than those described by bertina. only a few publications describe a thromboembolic disease due to dramatically reduced protein s levels being associated with viral or bacterial infections; autoimmune mechanisms are suspected but the aetiopathogenesis is still under discussion. we report on a 5-year-old boy who developed purpura fulminans of the left leg during varicella infection. 
on the fourth day of infection the disease started with pain and haemorrhagic efflorescence localized at the left calf. on admission the boy suffered from a purpura fulminans with central necrosis measuring 15x8 cm. suspecting a hereditary thrombophilic disease, we started therapy with protein c concentrate and recombinant tissue-type plasminogen activator. the following coagulation investigation showed a severe deficiency of protein s (total protein s antigen < 5 u/ml, free antigen not measurable) in combination with factor v leiden mutation. other thrombophilia and coagulation parameters did not deviate from the normal range. after 4 weeks we saw a slight improvement of the total protein s antigen up to 50 u/ml; the free protein s antigen was still undetectable. during the following weeks the patient recovered slowly and the protein s activity and antigen normalized. because of the skin necrosis, thromboembolic prophylaxis was initiated with low molecular weight heparin (fragmin®, 100 ie/kg bw/day) and continued for 6 months. under this therapy there were no further thromboembolic events. these results suggested an autoimmune protein s deficiency in a patient suffering from chickenpox. an analysis of autoantibodies at the time of diagnosis showed a slight increase of the anticardiolipin antibodies (igg 16.1 iu/ml, igm 15.1 iu/ml), which normalized during hospitalisation. we suspect an antibody to protein s, probably elicited by similarly presented viral antigens. we suppose that autoimmune mechanisms during different infections, in combination with a heterozygous apc resistance, may be a potential risk factor for developing thrombotic disease. in the central nervous system, mrna encoding prothrombin and the thrombin receptor is present, and astroglial cells in culture process and secrete thrombin. moreover, effects of thrombin on brain cells, including changes of neurite outgrowth and astrocyte shape, are described, but the molecular mechanisms are unclear. 
we investigated the effects of human … 10 g/l). when compared with conventional elisa techniques (asserachrom ddi), the assay demonstrated a correlation coefficient of 0.97 on 131 samples from normal individuals and hospitalised patients with elevated d-dimer concentrations. the slope was 0.97 and the intercept -0.07. this new assay offers full flexibility for individual testing, as the calibration curve is stable for at least one week on the instrument. it is thus well adapted for all applications of d-dimer measurement in coagulation laboratories. 16 children between the ages of 3 days and 11 months (median 6 weeks) with thrombotic or embolic occlusion of major vessels were treated with rt-pa for thrombolysis. the affected vessels were both renal veins or one renal vein and the v. cava inf. in 8 cases, the v. cava superior in 3, the v. cava inf. plus renal veins plus aorta in 1, the left ventricle in 1, the aorta in 1, the a. femoralis in 1 and the v. portae in 1 case. 10 of the 16 occlusions were associated with an indwelling catheter. underlying diseases were sepsis (4), prematurity (3), vitium (2), asphyxia (1), short bowel syndrome (1), hus (1), diabetes (1), cmv (1), exsiccosis (1) and m. hirschsprung (1). thrombolysis was performed with a bolus of rt-pa (0.1-0.2 mg/kg) followed by continuous infusion (0.8-2.4 (-9) mg/kg/24h, median 1.8 mg/kg/24h). low-dose heparin (100 ie/kg/24h) was given during thrombolysis, full-dose heparin (aptt 1.5-2 times normal) after the thrombolysis. in 5 pts. rt-pa was administered locally through the catheter and in 11 cases systemically. in 13 patients the vessels could be recanalised completely, in 2 partially; in 1 patient the therapy had to be discontinued. in 2 vessels a reocclusion occurred. bleedings were noted in three patients, all from recent venous puncture sites. the results encouraged us to start a multi-center trial which has been approved by the ethics committee and is open for recruitment. 
the aim is to compare efficacy and safety of rt-pa with urokinase, the only recommended standard in the management of critical major vessel obstruction in newborns and infants. the design is a randomised, non-blinded trial with a cross-over option after three days in cases without success. study end points are recanalisations, major bleedings and the number of cross-overs. inclusion criteria are age under 1 year, life-threatening vessel obstruction, age of thrombus up to 10 days, and no preceding fibrinolytic therapy. exclusion criteria are cerebral hemorrhage, periventricular leukomalacia, surgery during the last 7 days and cns injuries during the last 2 months. although our knowledge of inherited thrombotic coagulation disorders has greatly expanded within the last years, there are still many patients with recurrent venous thrombosis in whom no obvious predisposition can be identified. thus we decided to include also so-called rare defects associated with thrombosis, such as fxii deficiency, in our routine thrombophilia screening programme. fxii is an important element in the intrinsic pathway of fibrinolysis, and there is evidence for insufficient fibrinolytic activity in fxii-deficient pts. up to now, only few and controversial data exist about the frequency of fxii deficiency in pts. with thrombophilia. consequently, the aim of our study was to evaluate the association between fxii deficiency and juvenile venous thrombosis in a large population. patients and methods: 1554 pts. (851 female, 703 male, aged 1 to 61 ys, median age 38.2 ys) with venous thromboembolism before the age of 45 ys were studied. a one-stage clotting activity assay of fxii (fxii:c) was performed on acl using fxii-deficient plasma from instrumentation laboratory. fxii antigen concentration (fxii:ag) was measured by electroimmunodiffusion using reagents from behringwerke and enzyme research, respectively. the normal ranges are the 
routine reference values obtained in our laboratory from 80 healthy subjects (40 males, 40 females, median age 26.2 ys); 95% range: fxii:c 53-135%, fxii:ag 57-132%. results: 122/1554 pts. were classified as fxii deficient (f 60, m 62), giving a prevalence of 7.8%. severe fxii deficiencies with fxii:c below 1% were observed in 7 pts.; 115 pts. proved to have moderate fxii deficiency with fxii:c ranging from 2 to 51% and fxii:ag ranging from 1 to 53%. in none of them could inherited deficiencies of other well-established thrombophilia risk factors be detected. none of the fxii-deficient pts. had positive lupus anticoagulant tests. familial fxii deficiency was found in 9 cases. discussion and conclusion: the prevalence of fxii deficiency among pts. with venous thromboembolism was previously described to be 7.5-10%. supporting these data, we have shown a prevalence of fxii deficiency of 7.8%. in comparison to the frequency of other well-established thrombophilia risk factors, we have thus observed a relatively high prevalence of fxii deficiency in our study group. these data, from the largest such study reported, strongly indicate that fxii deficiency may not be a rare defect and may be more frequently associated with thrombosis than currently suspected. we describe a family with an exceptionally rare deficiency, i.e. plasminogen deficiency, combined with subnormal activities of coagulation factor xii (hageman factor). the first thromboembolic event, pulmonary embolism in the proposita, was diagnosed at age 35. since that time, 'spontaneous' venous thromboembolic events verified by phlebography and perfusion/ventilation lung scan have recurred once every year despite oral coumarin therapy, whose intensity varied over an exceptionally wide range despite tight control. the patient was repeatedly given successful thrombolytic therapy with streptokinase or recombinant tissue plasminogen activator. 
her plasma plasminogen chromogenic activity was 51-59% compared to a normal plasma pool (reference range 70-130%); plasminogen antigen was diminished to the same extent. the patient's factor xii exhibited only 28-55% activity in a factor-deficient plasma assay as compared to a normal plasma pool. other known risk factors for recurrent venous thromboembolism were not present: no evidence of malignancy, no obvious precipitating events, normal values of antithrombin iii, protein c, protein s, fibrinogen, thrombin time, platelets, lupus-like anticoagulant, and aptt prolongation after addition of activated protein c. the proposita's mother had died at age 60 from pulmonary embolism; no coagulation studies are available. the proposita's sister was first diagnosed with deep leg vein thrombosis at age 17; since that time recurrent episodes of venous thromboembolism have been diagnosed, also in another hospital. this sister's plasminogen activity was 120%, but factor xii activity was reduced to 55%. three brothers of the proposita were examined too, all in their 3rd decade of life. none of them recalled symptoms of or treatment for thromboembolic disease. in one brother, factor xii activity was normal (100-105%), but plasminogen only about 50%. in the 2nd brother, factor xii was very variable (64, 42 and 94%) and plasminogen was in the lower normal range; in the 3rd brother, factor xii was about 50% (repeatedly) and plasminogen was normal. current knowledge about the thromboembolic risk associated with both enzymes is limited, and the optimal management remains controversial. 
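the family work-up above repeatedly compares measured activities against laboratory reference ranges; a minimal sketch of that comparison follows. the plasminogen range (70-130%) is quoted in this abstract and the fxii:c range (53-135%) in the screening study above; the helper function and its labels are editorial assumptions.

```python
# Sketch of a reference-range comparison; ranges are those quoted in the
# abstracts, the classify() helper itself is an illustrative assumption.

REFERENCE_RANGES = {
    "plasminogen": (70, 130),  # % of normal plasma pool (chromogenic)
    "fxii:c": (53, 135),       # % one-stage clotting activity
}

def classify(parameter, activity_percent):
    """Label an activity (% of normal pool) against its reference range."""
    low, high = REFERENCE_RANGES[parameter]
    if activity_percent < low:
        return "deficient"
    if activity_percent > high:
        return "elevated"
    return "normal"
```

for example, the proposita's plasminogen activity of 51-59% classifies as deficient, while her sister's 120% falls within the reference range.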
Margit Serban, Maria Cucuruz, Dan Madras, Carmen Petrescu, Natalie Rosiu, Rodica Costa, IIIrd Paediatric Clinic, University of Medicine. The unsatisfactory efficiency of anti-hepatitis B vaccination in our haemophiliacs suggested a control of the immune status in 52 HIV-negative patients, by establishing the lymphocyte subsets (CD3, CD4, CD8, CD4/CD8 ratio and CD19) through flow cytometry with monoclonal antibodies, and by serum immunoglobulin levels; the immunological parameters have been correlated with the serological markers of hepatitis infections (HAV, HBV, HCV and HDV) as well as with the treatment (blood, plasma, cryoprecipitate, factor VIII/IX concentrate) and the quantity of its consumption (IU/kg weight/year). The interpretation of the results pointed out a significantly lower level of CD3 and CD4.

Patients with IDDM of <10 years (group 1), 10-20 years (group 2) and >20 years (group 3) duration were studied. Anticoagulated whole blood was incubated with fluorescent antibodies to GPIb and GMP-140 (two-colour method) and analyzed with a flow cytometer. Thrombomodulin, F1+2, protein S and β-thromboglobulin were measured according to standard procedures. Results: surface expression of GMP-140 was not different in groups 1 to 3; however, there was a tendency to higher activation in group 1 (<10 years IDDM). Results for thrombomodulin, F1+2, protein S and β-thromboglobulin will also be presented. Conclusion: though it did not reach statistical significance, platelet activation seems to be more pronounced during early diabetes. This will be correlated with endothelial and plasmatic activation markers.

In our clinic four patients with HIV-related thrombocytopenia were treated with a lot of Gammagard (93F21AB11F), which later turned out to be HCV contaminated. Before infusion all patients were negative for HCV antibodies and HCV RNA.
2 to 8 months after infusion, 2/4 patients, who suffered from ARC at the time of HCV infection with CD4 counts >100/µl, seroconverted, whereas in the two other patients, who suffered from AIDS with CD4 counts below 100/µl, there was no seroconversion. In all cases HCV RNA was found. Genotyping with INNO-LiPA (Innogenetics) showed HCV genotype 1(b) in all patients. Liver enzymes and HCV RNA copies were measured repeatedly over a period of one year after infection. The 2 patients with ARC showed a strong increase of HCV RNA titre during the first 3 to 4 months after infection, followed by a rapid decrease within the next months. In the patients with AIDS, HCV RNA copies increased moderately within the first 4 to 6 months, followed by a slow decrease. Elevation of liver enzymes was mild in the AIDS patients and seems to be independent of the HCV RNA titre. In the ARC patients, liver enzymes changed parallel to HCV RNA titres with a delay of 2 to 3 months. The course of HIV infection was only slightly influenced by the acute hepatitis C, as measured by CD4 counts, β2-microglobulin and HIV RNA copies.

Introduction: Mechanisms underlying ischemia/reperfusion injury have been thoroughly investigated in experimental models. Leucocytes appear to play a main role through production of cytokines and overexpression of adhesion molecules. In experimental animals, administration of monoclonal antibodies (mAb) recognizing CD18 can reduce organ injury following ischemia/reperfusion. No data, however, have been reported concerning clinical ischemia situations. Patients and methods: We investigated expression of CD18, CD11a, CD11b and CD11c in granulocytes, monocytes and lymphocytes from peripheral blood of five patients undergoing elective hand surgery. The tourniquet was applied on the upper arm and heparinized samples from cubital veins were obtained before and at the end of ischemia.
Control samples were drawn from the nonischemic contralateral arm with the same timing. Duration of ischemia ranged between sixty and one hundred minutes (80±16). Whole blood samples were incubated with specific, fluorochrome-labelled antibodies and analyzed by fluorocytometry (FACScan, Becton Dickinson, San Jose, CA). Mean fluorescence intensity (MFI), quantitatively reflecting surface expression of the indicated markers, was evaluated for the individual cell populations. Data were compared by the paired Student's t-test; p<0.05 was considered significant. Results: MFI for all markers was comparable in all cell populations in samples obtained before ischemia from both arms. In contrast, expression of CD18 was significantly enhanced in granulocytes (321±50 vs. 189±38), monocytes (653±54 vs. 426±122) and lymphocytes (299±45 vs. 228±36) from samples derived from the ischemic arm, as compared with the nonischemic arm, as measured at the end of ischemia. At the same time, an increase of CD11b on granulocytes (500±342 vs. 213±150) and monocytes (533±359 vs. 237±206), but not on lymphocytes, was found; no modifications of CD11a and CD11c expression could be observed. There was no correlation between duration of ischemia and quantitative expression of these markers. Conclusions: Our data indicate that relatively short ischemia periods induce an increased expression of β2-integrin adhesion molecules on leucocytes. These results suggest, in close similarity with findings from experimental models, that overexpression of adhesion molecules might play an important role in the induction of ischemia/reperfusion injury in humans.

In patients suffering from chronic inflammatory bowel diseases, such as Crohn's disease and ulcerative colitis, we observe massive, sometimes barely staunchable bleedings. Hereby, the deficiency of coagulation factors, especially of factor XIII in plasma, is established.
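The arm-to-arm comparison of marker expression above relies on the paired Student's t-test. A minimal sketch of that test follows; the five MFI value pairs are invented for illustration (not study data), and the critical value 2.776 is the textbook two-sided 5% cutoff for df = 4:

```python
import math

def paired_t(x, y):
    """Return the paired t statistic and degrees of freedom."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n), n - 1

# hypothetical CD18 MFI values: ischemic arm vs. contralateral control arm, n = 5
ischemic = [321, 280, 350, 310, 295]
control = [189, 170, 230, 200, 185]
t, df = paired_t(ischemic, control)
print(t > 2.776)  # significant at p < 0.05 (two-sided) for df = 4 → True
```

Pairing within each patient removes between-patient variability, which is why the study compares arms of the same subject rather than pooling groups.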
However, the influence of factor XIII on the pathomechanism of the underlying disease is still under discussion. Therefore we studied the F XIII content in the intestinal mucosa. An immunohistochemical method was developed using commercially available antibodies against F XIII subunit A; the detection of mucosal factor XIII depends on the amount of chromogen bound to the antibody-horseradish-peroxidase complex. With this method, it is possible to locate but not to quantify F XIII in the intestinal tissue. Therefore we developed an ELISA method in homogenized intestinal tissue, using commercially available antibodies. Its precision was validated using a standard curve with commercially available factor XIII preparations (Fibrogammin®). The detection limit of this method is > 0.05 I.U. F XIII/ml of tissue solution. Freeze-dried intestinal tissue (1 mg) was homogenized in 1 ml buffer using a Potter homogenizer. Specimens of the large bowel revealed F XIII values of 0.21 ± 0.0038 I.U. (mean ± SD) per ml tissue solution. With this method it is possible to quantify tissue-bound factor XIII. Studies are in progress to elucidate the content of F XIII in the intestine of patients suffering from inflammatory bowel diseases in order to contribute data to the pathomechanisms of F XIII deficiency.

In a previous double-blind, controlled trial we were able to show that aprotinin administration significantly contributed to reducing peri- and postoperative bleeding complications without increasing the risk of thromboembolic complications. The question arises whether this beneficial effect may be associated with its effects on intraoperative fibrinolysis.
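The tissue ELISA described above reads unknown FXIII concentrations off a standard curve. A minimal sketch of that calibration step, assuming a linear curve; the standard concentrations and optical densities below are hypothetical, not the actual Fibrogammin® calibration data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical FXIII standards (I.U./ml) and their measured optical densities
std_conc = [0.05, 0.10, 0.20, 0.40, 0.80]
std_od = [0.06, 0.11, 0.21, 0.41, 0.80]

# fit concentration as a function of OD, then read off an unknown sample
slope, intercept = fit_line(std_od, std_conc)
unknown_od = 0.22
conc = slope * unknown_od + intercept
print(round(conc, 2))  # → 0.21
```

Real ELISA curves are often sigmoidal (4-parameter logistic) rather than linear; the linear fit here is just the simplest case of interpolating against known standards.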
Therefore, 20 patients were treated with or without aprotinin (2 million KIU loading dose over 15 minutes followed by 500,000 KIU per hour), and citrated blood samples were obtained at the following time points: before operation, after induction of anesthesia, at the beginning of operation, intraoperatively when the femur shaft was implanted, and 24 hours postoperatively. The determinations of plasmin/antiplasmin complexes, D-dimers, thrombin/antithrombin III complexes, and prothrombin fragments 1+2 were performed by means of test kits from Behring, Germany (Enzygnost® PAP micro, Enzygnost® D-Dimer test kit, Enzygnost® TAT micro and Enzygnost® F 1+2, respectively). All markers of activated fibrinolysis and blood coagulation were significantly increased in the groups with and without aprotinin treatment, the highest activities being seen when the femur shaft was implanted. However, the values of PAP and D-dimers of the aprotinin group were below the values of the control group until the end of operation. The markers of activated coagulation showed the opposite effect; however, the differences between the two groups were not significant. As expected, the aPTT was significantly prolonged in the aprotinin group. The aprotinin treatment was also associated with a significantly lower blood loss in these patients. Concluding, it is not clear whether the blood-saving effect of aprotinin may be exclusively attributed to its antiplasmin activity, since the differences in the fibrinolysis parameters were not statistically significant. Further blood samples should be analysed between the implantation of the femur shaft and the end of operation.

In our laboratory, large amounts of human prothrombin are required (30-50 mg/week). As we try to produce meizothrombin and meizothrombin-des-fragment-1 from human prothrombin and to apply it as an antidote for hirudin, the classical adsorption to barium sulphate or aluminium hydroxide from human plasma cannot be used.
Commercially available human prothrombin is expensive and of an unacceptable quality for our applications. In most of these batches we found small amounts of factor X and prothrombin activation products. We have now developed a procedure to isolate prothrombin from "prothrombin complex concentrates" (PPSB-250 bulk, DRK-Blutspendedienst Nds.). The concentrate also contains factor VII, factor IX, and factor X. The prothrombin had to be separated from these factors. The concentrate we used contained amounts of other proteins and activation products of prothrombin (e.g. prethrombin-1) as well. For the preparation of prothrombin from PPSB we used anion-exchange chromatography (Resource-Q®) on an FPLC®. We applied dissolved PPSB directly, or after buffer exchange on Sephadex G-25, onto the column at room temperature. The prothrombin was eluted with an NaCl gradient in trisodium citrate buffer, pH 7.0. The buffer conditions are similar to those used in the preparation of PPSB. The quality of the prothrombin so obtained was sufficient for most of our experiments. A second purification step on ion exchange resulted in a 99% pure product devoid of contaminating factor activities and activation intermediates, as examined with Coomassie- and silver-stained SDS-PAGE electrophoresis and assays for factor X. This prothrombin retained full enzymatic activity, and its activation by specific snake venom prothrombin activators showed the known activation products. We are now able to isolate the amounts of pure prothrombin required for preclinical investigations.

Most of the commercially available LMWHs, such as enoxaparin, Fraxiparin, and Fragmin, are prepared by chemical methods which can result in desulfation and other chemical modifications of the internal structure, leading to differences in the pharmacologic effects. On the other hand, fractionated LMWHs retain their native characteristics and are structurally similar to heparin.
In addition, the oligosaccharide sequence responsible for AT III binding is not modified. Physical methods such as gamma irradiation (60Co) have been used to fragment sulfated glycosaminoglycans, yielding fragments without chemical modifications (DeAmbrosi et al., in: Biomedical and Biotechnological Advances in Industrial Polysaccharides, pp. 45-53). Utilizing this technique, depolymerized heparins exhibiting different molecular weights can be obtained. This communication reports on the biochemical and pharmacologic effects of several such depolymerized heparins to demonstrate the molecular weight dependence of biologic activity. Fragments exhibiting molecular weights of 5, 7, 8, and 9 kDa were prepared by exposing concentrated heparin solutions to a rectilinear gamma-ray beam at intermittent doses of 2.5 to 25 Mrad under controlled temperatures. Unlike the chemically depolymerized heparins, these fractions did not exhibit any decrease in charge density or AT III affinity. In routine assays for heparin, a clear-cut molecular weight dependence of the anticoagulant and antiprotease actions was observed. On a gravimetric basis, these agents produce superior antithrombotic actions in comparison to chemically depolymerized derivatives. These studies suggest that gamma irradiation can be used to prepare LMWHs which retain their molecular integrity and therefore may prove to exhibit a biologic profile more comparable to heparin. Furthermore, LMWHs produced by gamma irradiation lack the usual double-bond formation, which requires the use of additives that can alter the product profile.

University Hospital, Dept. of Angiology, Frankfurt a.M., Germany. Introduction: Thromboembolic disease constitutes a major clinical problem, and among others a defective fibrinolytic system has been suggested as a predisposing factor for the development of thrombosis.
The plasma fibrinolytic system can be impaired by inherited deficiencies of plasminogen, by defective release of tissue plasminogen activator (t-PA) from the vessel wall, or by high plasma levels of regulatory proteins, such as plasminogen activator inhibitors (PAI). The aim of the present study was to estimate the prevalence of decreased fibrinolytic activity in young patients with thrombophilia. Patients: A large population of 884 patients (female 478, male 406; age 21-61 years, median 39.8 years) with venous thromboembolism before the age of 45 years was investigated in regard to their plasma fibrinolytic system. In none of them could well-established thrombophilia risk factors be identified previously. Methods: Plasminogen (Behringwerke), PAI-1 activity (chromogenic assay, Biopool), PAI-1 antigen concentration (ELISA, Biopool), and t-PA activity (chromogenic assay, Biopool) and antigen concentration (ELISA, Biopool) were measured before and after venous occlusion (VO). VO was performed ≥12 months after the last thromboembolic episode. 24 healthy subjects (median age 24.7 years) served as controls. Results: 24 patients (2.7%) were classified as plasminogen deficient (activity and antigen). 142 patients (16%) had significantly elevated levels of PAI activity (up to 120 U/ml) and PAI antigen (up to 90 ng/ml). None of the patients with high PAI levels had laboratory signs of an acute-phase reaction. Low t-PA activity could be demonstrated and confirmed in 121 patients, corresponding to a prevalence of 13.6% (range: 0-2.7 U/ml; reference limits: 2.8-21.8 U/ml). However, there was a significant negative correlation between t-PA activity and PAI values. In 67 patients (55.4%) the low t-PA activity was associated with increased PAI levels, whereas the t-PA antigen concentration was normal. A parallel reduction of t-PA activity and t-PA antigen (range: 0.35-3.5 ng/ml; reference limits: 3.6-21.0 ng/ml) was determined repeatedly in 54 patients (f 23, m 31, median age 39 years).
Thus, the prevalence of a defective t-PA release was 6.1% in our study group. Conclusion: In comparison to the frequency of inherited deficiencies of other well-established thrombophilia risk factors, we have observed a relatively high prevalence of diminished t-PA activity and elevation of PAI, respectively, in our study group. Our data strongly indicate that besides t-PA and PAI activity, the antigen concentrations of both parameters should be determined in patients with thrombophilia.

The antithrombotic and anticoagulant effect of the supersulfated low molecular weight heparin SSH 14 was studied after i.v. and s.c. administration in rats. Thrombus formation in the jugular vein was induced by i.v. injection of activated human serum followed by stasis for 20 min, and was assessed by a thrombus score ranging from 0 (no thrombus formation) to 3 (complete thrombus formation). SSH 14 injected either 10 min (i.v.) or 30 min (s.c.) before thrombus induction caused a dose-dependent antithrombotic effect in a range from 0.25 to 2 mg/kg i.v. and 1 to 4 mg/kg s.c. There were clear differences in the antithrombotic effectiveness between female and male animals, i.e., in female rats antithrombotically effective doses were lower than in male rats (ED50 after i.v. injection in females 0.35 mg/kg, in males 0.9 mg/kg). The sex differences were confirmed in studies on the time course of the antithrombotic effect. After injection of fully effective doses (2 mg/kg i.v. and 4 mg/kg s.c., respectively) the antithrombotic effect disappeared after 8 h in female or after 4 h in male rats. For studies on the anticoagulant action, blood was drawn from the femoral artery and, after centrifugation, global clotting assays were performed in plasma. Similar to its antithrombotic action, SSH 14 also caused dose- and sex-dependent anticoagulant effects. The most sensitive assays were the aPTT and the Heptest; thrombin time and prothrombin time were less or not influenced by SSH 14.
In conclusion, SSH 14 was found to be an effective anticoagulant and antithrombotic agent in experimental studies in rats. At present there is no explanation for the clear sex differences found in this species.

Venous thromboembolic disease is the most frequent complication in patients undergoing total knee replacement. Patients and methods: After informed consent, 3×30 patients were included in an open randomized clinical study, and the incidence of venous thromboembolism was examined using different regimens for heparin prophylaxis (30 patients received Fraxiparin 36 mg once daily, 30 patients Clexane 40 mg once daily, and 30 patients 7500 U Calciparin twice daily). There were no differences between the groups concerning age, sex, body weight, risk factors, surgeons, decrease in hemoglobin, and requirements for blood products. Before surgery and on day 1 and day 5-7, phlebograms were performed, and TAT, D-dimers and F1+2 prothrombin fragments were examined. Results: 1. DVT in 26 patients (28.9%): DVT in 5/30 patients under Calciparin prophylaxis, 8/30 patients under Fraxiparin, and 13/30 patients under Clexane treatment. 2. Low specificity of D-dimers (3.4%) and TAT (24%) for detecting a DVT in these special patients undergoing knee replacement; elevated F1+2 fragments in the DVT group at T1 and T2 vs. the patients without DVT (T1 DVT: 3.24±1.8 vs. 1.6±0.3; p = 0.0042). 3. Only 8/26 patients (31%) with DVT had clinical signs of thrombosis. Conclusions: 1. There is an increase of thrombin generation, measured by TAT and D-dimers, after knee replacement. Further studies with more patients are necessary to confirm that F1+2 prothrombin fragments can discriminate between patients with and without DVT from a clinician's point of view. 2. Phlebographically confirmed DVT in almost 30% of our patients demonstrates the high thromboembolic risk in these patients.

Von Willebrand's disease (vWD) type 2 is characterized by absence of high molecular weight multimers.
Qualitative changes in the structure of the molecule might be associated with enhanced binding of von Willebrand factor (vWF) to platelet glycoprotein Ib. Therefore, in some patients vWD type 2 is associated with severe thrombocytopenia. Here we report on a 9-year-old boy who presented with severe purpura and platelet counts around 20,000/µl at the age of 2 years. The thrombocytopenia did not respond to corticosteroids. A normalized platelet count of short duration was observed after high-dose immunoglobulins. In addition, an increase of platelets was seen after anti-D treatment. Thus, although platelet-associated antibodies were not detected, the thrombocytopenia seemed to be caused by an autoimmune mechanism. Despite platelet counts above 50,000/µl, the patient experienced severe bleedings with a significant decrease of hemoglobin levels. Therefore, he needed several transfusions. Coagulation analysis revealed vWD. Application of DDAVP led to a normalization of partial thromboplastin time (PTT) and an increase of factor VIII, with subsequent cessation of bleeding symptoms. Recently, the vWD was classified as type 2 by lack of high molecular weight multimers. In conclusion, we report a case of vWD type 2 responding to DDAVP. However, it is unclear whether the thrombocytopenia is part of the vWD type 2 or of autoimmune origin. Since autoimmune antibodies have not been detected, the effect of immunoglobulin treatment might be explained by blockade of the enhanced binding of vWF to glycoprotein Ib.

Von Willebrand disease (vWD), with a prevalence of 0.8% (Ruggeri 1994, Rodeghiero 1987), seems to be the most frequent inherited hemostatic disorder.
The diagnostic criteria for vWD are the clinical picture, family history, and laboratory findings: bleeding time, partial thromboplastin time (PTT), level of factor VIII:C, vWF, vWF:Ag, ristocetin-induced platelet aggregation (RIPA) and multimer analysis. The diagnosis of vWD is occasionally difficult, especially in early childhood, because the laboratory data may vary with the time of investigation, and abnormalities may not be present in all subtypes. The aim of this study was the evaluation of the diagnostic approach to vWD in childhood and the diagnostic reliability of all available laboratory tests. All previously mentioned laboratory tests have been done on our own material (51 children who satisfied all criteria for vWD, 23 boys and 28 girls, 1-9 years old) except multimer analysis, which was unavailable in some cases. The majority of laboratory tests proved to be highly specific and necessary for diagnosis. However, the diagnostic reliability of FVIII:C and platelet adhesion is much lower in mild cases in comparison to the total sample, while PTT proved to be a test of limited value. The most specific screening test for vWD is vWF, whose diagnostic reliability is almost 1.00. The optimal strategy to establish a general diagnosis of mild forms of vWD is the use of vWF and vWF:Ag, plus RIPA if necessary, and multimer analysis to classify variant types.

We report on a new multimeric structural defect of vWF detected in a German family (two sisters and their three children): all members of the family who presented to our outpatient clinic had an increased spontaneous bleeding tendency (moderate or strong hematoma, epistaxis, menorrhagia). Prolonged bleeding could be observed after surgical procedures (adenotomy, tooth extraction) and after trauma (laceration). Wound healing was impaired in two cases. Clotting assays showed slightly prolonged aPTT and a mild decrease of F VIII:C, vWF:Ag and vWF:RCoF levels. Collagen binding activity was within normal ranges.
Bleeding time (Simplate I) was slightly prolonged. The analysis of the multimeric structure in plasma showed quantitative and qualitative abnormalities: all multimers were detectable; the structure of vWF was reproducibly abnormal in all family members, so the defect must be genetically caused. The platelet vWF showed neither qualitative nor quantitative alterations. Minirin® (DDAVP) was administered as a test dose of 0.3 µg/kg bw in 100 ml 0.9% NaCl solution i.v. to evaluate efficacy and tolerance: clotting assays showed normalization of aPTT, F VIII:C, vWF:Ag and vWF:RCoF in plasma and shortening of bleeding time in three cases. An insufficient rise of vWF:Ag and vWF:RCoF levels could be observed in one case. One patient had no rise of F VIII:C but a corrected bleeding time. Multimeric analysis showed no structural change. The administration of DDAVP was well tolerated in all cases. The existence of all multimers in plasma and the normal collagen binding activity suggest that the structural abnormalities of vWF in this family do not cause functional defects, so that the defect could be classified as a type 1 vWD. The response to DDAVP was only partially effective.

Mild von Willebrand disease (vWD) is by far the most frequent congenital bleeding tendency. Its diagnosis is very helpful in the pre-operative check-up in order to avoid bleeding complications during surgery. Following post-operative periods or monitoring the management of haemorrhagic episodes in vWD patients is also strongly recommended. Current methods involve complex technologies, are time consuming and require large series. These assays lack the expected flexibility for rapid individual testing in patients. A new and flexible assay which works on the fully automatic walk-away coagulation instrument STA has been developed for these applications (Liatest vWF). The technology is an immuno-turbidimetric method using microlatex particles coated with rabbit polyclonal antibodies specific for vWF.
The assay has a dynamic range from 2 to 420% von Willebrand factor (vWF) concentration; it works with a 2-fold dilution of tested plasma (50 µl), and it offers a calibration established with the NIBSC international standard. The total assay time is less than 10 minutes and the detection threshold is 2%. There is no prozone effect up to concentrations higher than 1,000% vWF. Intra-assay reproducibility is <4% and inter-assay reproducibility <5%. In dilution studies, a mean recovery of 98% was obtained. In a study on 55 plasma samples from normal individuals, patients with high vWF concentrations, and vWD patients, comparison with the ELISA technique demonstrated a correlation coefficient of 0.997 with a slope of 0.978 and an intercept of 3.30. In the low assay range too, good agreement was obtained with the ELISA. We conclude that Liatest vWF is a reliable, flexible, sensitive, and rapid automated assay which fits well the vWF assay applications in coagulation laboratories.

Fibrinolysis, the process during which the active enzyme plasmin is generated in a regulated and localised way, is, in the classical understanding, responsible for the dissolution of blood clots formed in a vessel. For this activity, t-PA is generally assumed to be the most important plasminogen activator, and its activity is regulated by enzyme kinetic mechanisms dependent on the presence of fibrin. With this background, t-PA is used for thrombolytic therapy with great success. However, data from t-PA knockout mice indicate that t-PA might not be responsible for inhibiting the spontaneous development of intravascular thrombi but only for dissolution of fibrin formed upon a coagulation challenge. In contrast, u-PA, generally assumed to be important for extravascular proteolytic activity on activated or tumour cells, seems to lead to the development of spontaneous fibrin formation in a mouse knockout model.
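The method-comparison figures quoted for the latex assay vs. ELISA (correlation coefficient, slope, intercept) are ordinary least-squares quantities. A sketch of how they are computed, using hypothetical paired vWF values rather than the 55-sample data set:

```python
import math

def compare_methods(reference, candidate):
    """Return Pearson r plus OLS slope/intercept of candidate on reference."""
    n = len(reference)
    mx, my = sum(reference) / n, sum(candidate) / n
    sxx = sum((x - mx) ** 2 for x in reference)
    syy = sum((y - my) ** 2 for y in candidate)
    sxy = sum((x - mx) * (y - my) for x, y in zip(reference, candidate))
    slope = sxy / sxx
    return sxy / math.sqrt(sxx * syy), slope, my - slope * mx

# hypothetical paired vWF concentrations (%): ELISA vs. latex assay
elisa = [10, 50, 100, 150, 250, 400]
latex = [12, 52, 101, 148, 247, 395]
r, slope, intercept = compare_methods(elisa, latex)
print(r > 0.99)  # near-perfect agreement in this toy example → True
```

A slope near 1 and a small intercept, as reported, mean the two methods agree across the range rather than merely correlating; formal method-comparison studies often add Bland-Altman or Passing-Bablok analysis on top of these statistics.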
On the other hand, the major plasminogen activator inhibitor, PAI-1, seems not only to regulate intravascular fibrinolysis but also to be important for the progression of vascular diseases (neointima formation is e.g. increased in a PAI-1 knockout model, but increased levels of PAI-1 seem to predict reocclusion after angioplasty). In addition to their functioning as enzymes and inhibitors, components of the fibrinolytic system also seem to be involved in signalling processes in tumour and other cells. The u-PA/u-PA-receptor system could be shown to function as a chemotactic system and to elicit a migratory and mitogenic response in monocytes and tumour cells as well as in vascular cells. For such a response, activation of tyrosine kinases of the Src family might be responsible in some cell lines, but other signal transduction pathways, e.g. involving caveolae and the Star protein, cannot be excluded. There seems to be a further important role of components of the fibrinolytic system which involves serine protease inhibitors (serpins): serpins have homologies to hormone binding proteins, and cleavage of serpins by their target enzymes not only leads to inactivation of the enzyme but also to a possible release of bound hormones from the serpins. From these data, the relevance of any regulation of the fibrinolytic system clearly depends on the specific function of the system being considered. In addition to "fibrin binding", "receptor mediated" and "genetic control" (e.g. 4G vs. 5G in the PAI-1 promoter), "signal transduction" and "hormone delivery" are also distinct functions of the system with specific regulation.

For both healthy persons and patients with angina pectoris, it could be shown that increased values of plasma fibrinogen, factor VIIc and vWF:Ag are significantly associated with the risk of suffering an acute myocardial infarction or sudden cardiac death. The same holds for tPA:Ag.
However, a group analysis in quintiles reveals that particularly low tPA:Ag values are connected with a particularly low coronary risk. Unexpectedly, the acute-phase protein CRP is also positively associated with increased coronary risk. For clinical purposes, these factors have already been included in coronary risk scores in order to improve individual risk prediction in combination with lipids and other risk factors. The assessment of the pathophysiological significance of these observations remains in dispute. Four pathways are discussed: 1. The assumption that increased plasma values of those factors indicate increased coagulation activity could so far not be established in prospective studies. 2. Both vWF:Ag and tPA:Ag are produced in endothelial cells. An increase of their plasma level could therefore indicate increased endothelial cell activity accompanying progressive atheromatosis. The risk association of the two acute-phase proteins CRP and fibrinogen could be interpreted analogously. 3. First prospective studies favour the assumption of a genetic determination of an increased production of coagulation proteins in persons at particular coronary risk. It could also be shown that there is a certain dependence of the gene polymorphism for the α- and β-fibrinogen chains on coronary risk. 4. Even slightly elevated concentrations of fibrinogen and/or vWF:Ag may influence the quality of a coronary thrombus, both by increased physical stability and by reduced fibrinolytic lysability. This could mean that an early coronary clot under these conditions could more readily develop into a stable, occlusive thrombus.

A newborn with pronounced bleeding tendency had a prothrombin (PRTH) deficiency below 2.8% in a clotting assay. Both parents had activities of 71% and 69%, respectively. However, the immunological determination of PRTH by ELISA revealed normal concentrations in all family members (93%-101%).
Furthermore, thrombin generation as investigated by a chromogenic assay using ecarin for activation of PRTH was normal as well. Activation of PRTH by FXa was investigated by recalcification of the plasma samples, which were further analyzed for PRTH and the derivatives produced. Although clotting times were still different, normal levels of F1+2 and TAT were finally generated, as determined by ELISA. Western blot analysis using polyclonal (rabbit) antibodies to PRTH and a monoclonal antibody specific to human thrombin revealed different patterns of PRTH degradation products. TAT was only weakly visible in the serum of the mother and nearly absent in the child. The mobility of prothrombin and thrombin was different compared to normals, indicating a lower molecular weight. After reduction of disulfide bridges, a higher molecular weight of thrombin was observed compared to normals, indicating an insufficient cleavage of PRTH and formation of prethrombin 2. These observations suggest that prothrombin Marburg is a deletion mutant lacking the cleavage region Arg320-Ile321. Upon cleavage by factor Xa, only prethrombin 2 is formed, under liberation of F1+2. This prethrombin 2 is able to cleave chromogenic substrates in the ecarin assay. Probably, prethrombin 2 forms a complex with AT III which is detected by ELISA but is unstable under denaturing conditions as in the Western blot.

As a major complication of haemophilia A treatment, up to 30% of severely affected patients develop antibodies to substituted factor VIII. Investigating 133 patients and considering the data of a further 231 patients of the haemophilia database, we could show that the risk of inhibitor development depends on the patient's mutation type. Patients with more severe gene defects, like intron 22 inversions, stop mutations or large deletions, had a risk of about 35% for inhibitor development, which was about 7 times higher than for missense mutations or small deletions.
besides an influence of mutation type, we investigated other parameters, e.g. immune response genes (hla-genotype) and clinical aspects (treatment onset and frequency, type of concentrate), that might also affect inhibitor formation. to exclude any effect of mutation type, we focused on 72 patients with an intron 22 inversion. hla-typing showed that some hla-alleles (dqb0602, b7) occurred more often and others (dqa103, dqb603, dr13, c2) less frequently in inhibitor patients. treatment onset, frequency and type of concentrate apparently do not affect inhibitor incidence. the results presented here show that inhibitor development is considerably influenced by the mutation type. this supports the hypothesis that patients with severe molecular defects have no endogenous factor viii protein and that substituted factor viii represents a foreign protein, leading to an immune response, e.g. the production of alloantibodies. in addition, the immune response seems to be modified by the hla-genotype. however, our findings (in terms of genotype and treatment parameters) can only explain part of the inhibitor pathogenesis. it is still unresolved why substituted factor viii does not lead to a recognizable immune response in 2/3 of the patients with severe molecular factor viii gene defects. consequently other factors, probably concerning the antenatal phase, must be involved. viia in the treatment of patients with inhibitors against factor viii or ix: a german/swiss/austrian multicenter trial d. ellbriiek*, i. scharrer**, j. dethling***, and the rfviia study group *section haemostaseology, university ulm **dept. of angiology, jwg-university hospital frankfurt a.m. ***novo nordisk, mainz administration of activated recombinant factor vii (rfviia) can bypass the fviii/fix pathway and offers an alternative treatment for patients with antibodies (inhibitors) against these factors. 
from november 1994 to october 1995, a total of 25 bleeding episodes and 10 surgical interventions in 18 patients were treated with rfviia in a phase iiib multicenter trial. diagnoses were hemophilia a (n=15) or b (n=1) with inhibitor, and acquired inhibitor against factor viii (n=2). various serious bleeds, from complicated joint and gingival bleeds to life-threatening psoas bleeds, were treated. operations were tooth extractions, radiosynovectomy, implantation and explantation of port-a-caths and one adenotomy. the dose regimen was 90-120 µg/kg bw every two to three hours until clinical improvement, with subsequent dose reduction. results: for bleeding episodes, response to rfviia after 24 hours was effective in 72%, partially effective in 12%, ineffective in 12% and not evaluable in 1 (4%) of the patients. two of the three treatment failures were associated with very long dosage intervals of rfviia. the third patient was in a critical situation with artificial high-pressure respiration and polytransfusion because of a hemothorax, and suffered a terminal intracerebral bleed. the efficacy of rfviia for surgery was very good. response to treatment was independent of antibody titer. no signs of dic or activation of coagulation were noted. conclusion: in our experience, rfviia is an efficient and safe treatment for inhibitor patients with acute bleeding episodes. it should be investigated whether rfviia can also be an alternative treatment for the home-treatment setting. successful immune tolerance therapy of f viii inhibitor in children after changing from high to intermediate purity f viii concentrate w. kreuz, j. joseph-steiner, d. mentzer, g. auerswald*, t. beeg, s. becker zentrum der kinderheilkunde, j. w. goethe-universität frankfurt am main *professor hess kinderklinik bremen introduction: inhibitor to f viii is the most severe complication in the treatment of patients with haemophilia a. 
the incidence of f viii inhibitors is estimated to range between 15-33%. several authors reported that immune tolerance therapy (itr) of f viii inhibitors can be induced with high-dose f viii concentrate. objective: this presentation will show data from four children with haemophilia a and f viii inhibitor (high responder), who had an unsuccessful itr with high-dose f viii concentrate (high purity) in the first step. the f viii concentrate was changed to an intermediate purity product (haemate hs®) in the subsequent course of itr. all patients received bleeding prophylaxis with an activated prothrombin complex concentrate (feiba®). results: median age was 13 (9-18) months when the inhibitor was first detected. in all four patients the f viii inhibitor titre increased under immune tolerance treatment with f viii concentrate (high purity) in the first step of therapy. after changing the f viii concentrate (intermediate purity) the inhibitor titres decreased continuously, after a rebooster effect, to 0 bu within months. median duration of f viii inhibitor elimination time (until first testing of 0 bu) was 3 (2-5) months. in all patients the f viii inhibitor was successfully eliminated. until now all patients are under prophylactic treatment with f viii concentrate and have had no positive inhibitor testing since. median observation time since the first testing of 0 bu is 14 (4-60) months. conclusion: different studies concerning immune tolerance treatment have been successful with f viii concentrates of different purity. according to our experience in these four presented patients, we assume that it is probably not the purity of the f viii concentrate that is important for the induction of immune tolerance, but rather the type of f viii presentation in the concentrate used. the preparation used (haemate hs®) is a f viii concentrate with a high concentration of vwf, which is known to be important for the protection of f viii against degradation by proteases. 
this may be a mechanism for prolonged antigen presentation to the immune system and thus may have a positive impact on the outcome of itr. large-scale trials are needed to prove the above assumptions. glanzmann thrombasthenia is a disease affecting platelet function because of a partial or total lack of glycoprotein (gp) iib/iiia expression or a modification of this complex. since the receptor dysfunction goes along with reduced or absent platelet aggregation and adhesion, it causes bleeding complications in case of injury. here we report on a 60-year-old woman who has suffered since early childhood from a severe bleeding disorder. life-threatening bleeding complications occurred after tooth extraction and after abdominal surgery. analysis of the patient's platelets revealed normal values for the platelet count, whereas their volume was increased (11 fl). clot retraction was diminished to 17%. platelet adhesion to siliconised glass and human subendothelial matrix was reduced, as was the spreading of the platelets. adp (1 µm)-induced platelet aggregation was inhibited, while collagen-, ristocetin- and thrombin-induced aggregation was normal. crossed immunoelectrophoresis resulted in an atypical peak of gp iib/iiia with reduced electrophoretic mobility. in the electroimmunoassay according to laurell, 14% of gp iib/iiia was detected. moreover, we observed markedly diminished 125i-fibrinogen binding. sequence analysis of the gp iib and gp iiia cdna after pcr amplification revealed a g2508→a transition in gp iib, substituting gly805→glu. the structure/function relationship of this mutation remains to be investigated. we report two new abnormal fibrinogen variants, denoted as bern iv and milano xi, both having an exchange of arginine to histidine in position 16 of the aα-chain. 
routine coagulation studies revealed prolonged thrombin and reptilase clotting times, low plasma fibrinogen concentrations determined by a functional assay but normal fibrinogen levels measured by the immunological assay. the onset of turbidity increase following addition of thrombin to purified fibrinogen was markedly delayed in both variants. release of fibrinopeptide b by thrombin, measured by reversed phase hplc, was normal, whereas only half the normal amount of fibrinopeptide a was released. in addition to normal fibrinopeptide a, an abnormal fibrinopeptide a* was cleaved from both dysfunctional fibrinogens. the structural defect was determined by asymmetric pcr and direct sequencing of a gene fragment coding for the nh2-terminus of the aα-chain. both variants were found to be heterozygous for the transition g to a at nucleotide position 1203, leading to the substitution aα16 arg→his and resulting in delayed fibrin polymerization. the simple assay permits detection of the most common amino acid substitutions occurring in the nh2-terminus of the aα-chain of the functionally abnormal fibrinogen variants. protein c inhibitor (pci), a member of the serpin family, is also known as plasminogen activator inhibitor-3 (pai-3). pci was first described as a component of human plasma, regulating the activity of activated protein c and other serine proteases of the human coagulation and fibrinolysis system. since then pci has been found to be present in extra-plasmatic systems as well. high concentrations of pci were detected in human seminal plasma, suggesting a role for pci in human fertility. significant concentrations of pci mrna and antigen were located in lysosomes of proximal tubular kidney cells, suggesting an intracellular function for pci in this environment. in this study we present evidence that pci is also present in human pancreas. rna from human pancreas was reverse transcribed and pcr amplified. the resulting pci cdna was identical with pci cdna from human liver. 
32p-labeled antisense rna probes used in in situ hybridization experiments with human pancreas tissue sections showed that pci rna was located in the acinar cells. pancreatic fluid was analyzed by sds-page and immunoblotting. using monospecific antibodies directed against human plasma pci, a 57,000 mw protein band was observed which comigrated with purified human plasma pci. our results show that pancreas cells contain a significant concentration of pci mrna. this message is localized in the secretory acinar cells. therefore we conclude that the pci antigen found in pancreatic fluid is likely to originate in the pancreas. the role of pancreatic pci is unknown at present. however, since thrombosis and systemic hypercoagulable states are known complications of pancreatic diseases, our results and in vitro experiments by others showing that pci can inhibit pancreatic enzymes such as chymotrypsin and trypsin indicate that pci may be part of the inhibitor potential which protects pancreatic tissue from autodegradation. these inhibitors normally prevent the release of active pancreatic proteases into the vasculature or microcirculation, where destabilization of the coagulation balance and subsequent thrombus formation could occur. institute for clinical chemistry and laboratory diagnostics and *clinic for cardiology, university of duesseldorf p-selectin (cd 62p, the former granule membrane protein 140 or gmp 140) is an integrated membrane protein of platelets and endothelial cells. under inactivated conditions it is stored in the alpha granules of platelets and in the weibel-palade bodies of endothelial cells. endothelial cells covering atherosclerotic plaques show an increased expression of p-selectin. β-thromboglobulin (β-tg), which is also released from the alpha granules of platelets during adhesion or aggregation, is regarded as a marker of platelet activation in vivo. coronary thrombosis plays a central role in the pathogenesis of acute coronary syndromes. 
we therefore analysed cd 62p and β-tg in acute coronary syndromes, healthy subjects (hs, n=11), patients with stable angina pectoris (sap, n=20), unstable angina pectoris (uap, n=12) and acute myocardial infarction (ami, n=12). plasma samples were obtained using ctad vacutainer tubes (0.109 m na-citrate, theophylline, adenosine, dipyridamole). patients with cad showed significantly increased plasma concentrations of cd 62p (hs: 98±20 versus sap: 133±38 ng/ml, p<0.05; versus uap: 128±28 ng/ml, p<0.01; versus ami: 144±72 ng/ml, p<0.05) independent of the severity of clinical symptoms. in comparison, only patients with ami showed significantly higher β-tg concentrations compared with hs (hs: 30±20 versus ami: 39±14 ng/ml, p<0.05). although the cd 62p plasma concentrations showed no relationship to the clinical severity, there was a positive correlation between cd 62p (r=0.47; p<0.001; n=55) and the severity of cad classified as 1-, 2- or 3-vessel disease. it is concluded that elevated cd 62p concentrations are correlated with the severity of cardiovascular disease. cd 62p is not suitable for differential diagnosis of acute coronary syndromes, because it is elevated independently of the clinical status of the patients. the involvement of platelets in the pathogenesis of acute myocardial infarction may be indicated by the increased β-tg concentrations. klinik für herz-, thorax- und herznahe gefäßchirurgie und institut für klinische chemie und laboratoriumsmedizin der universität regensburg an increased blood loss following surgery with extracorporeal circulation (ecc) contributes to morbidity and mortality. postoperative haemorrhage following ecc has been related to a platelet function defect and activation of the blood clotting and fibrinolytic system. 
we investigated platelet surface antigen expression and parameters indicating activation of the clotting and fibrinolytic cascade to assess the predictive potential of these variables for increased blood loss after ecc. 60 patients referred for coronary bypass grafting with no history of a bleeding disorder and normal routine clotting tests were included. blood samples were drawn on the day prior to surgery and immediately upon arrival on the intensive care unit. the surface expression of glycoprotein (gp) iib-iiia, gp ib, and p-selectin was measured with and without in vitro stimulation with adenosine diphosphate (adp) using whole blood flow cytometry. platelet counts and platelet factor 4 (pf4), as well as routine clotting tests, were performed. activation of the clotting and fibrinolytic system was judged from thrombin-antithrombin-iii complex (tat), fibrinogen (fg), d-dimers (dd), α2-antiplasmin (α2a), prothrombin fragment 1+2 (f1+2), and tissue plasminogen activator (t-pa). blood loss from chest tubes was measured hourly until removal of drains. following ecc the levels of pf4, tat, dd, α2a, and f1+2 were significantly increased (p<0.0001) compared to baseline values. gp iib-iiia, gp ib, p-selectin, platelet count, and fg were significantly reduced (p<0.0001). analysis of variance (anova) revealed that postoperative values of gp ib (p<0.0001) and dd (p<0.05) were associated with blood loss. there was a trend for more lungs to be transplanted after sbd exam (62% vs 44%, p=0.06), but this was not found with kidneys, heart, liver, pancreas or intestines. in multiple logistic regression models, adjusting for variables pertinent to each individual organ function (for example, bun or creatinine level for kidneys, blood gases for lungs etc.), the number of exams was not an independent predictor of successful transplantation. conclusions: sbd exam led to similar numbers of organs transplanted compared to dbd exam in this single-center registry analysis. 
more rapid brain death declaration, as with sbd, is not a factor that influences organ transplantation. the glasgow coma scale (gcs) is a standardized and commonly used way of assessing important aspects of neurological condition in critically ill patients. while it is a validated tool for prognostication, it is unclear whether serial measurements add value to this prognosis. we used a large set of serially collected gcs measurements to assess the impact of gcs score on the trajectory of neurological recovery as well as factors affecting score variance. gcs total and subscores (483,041 time points from 5,456 patients) recorded hourly by registered nurses in the neurosurgical intensive care unit (nsicu) between january 2012 and may 2016 were analyzed retrospectively. k-means clustering provided groups with similar progression characteristics during nsicu stay. descriptive features for each cluster were binned into histograms and evaluated for similarity using χ2 and kruskal-wallis tests. linear correlations of the sub-scores were very high (eye-verbal: 0.583, eye-motor: 0.686, verbal-motor: 0.583), while compositional variance was low for aggregate scores. hour-to-hour variance in gcs correlates with significant nsicu activities such as nursing shift changes. among patients with similar minimum gcs scores during their stay, those that recovered were significantly less likely to have deteriorated in the hospital (χ2, p<<0.0001). for patients with a minimum gcs<=8, those that arrived at their minimum score (i.e., did not deteriorate in the nsicu) were 18.6% more likely to recover than those who deteriorated in-hospital (kw, p<<0.0001). patients that experienced recovery show significantly greater improvement as early as 2 hours after their minimum score (kw, p<<0.0001). the gcs is unnecessarily complex for most nsicu patients and can be represented by fewer variables. 
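the trajectory-clustering step described in this abstract, grouping patients by the shape of their serial gcs measurements, can be sketched with k-means on fixed-length trajectories. this is an illustrative sketch on synthetic data, not the study's own feature construction (which the abstract does not specify); the trajectory lengths, cluster count and score ranges below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# synthetic hourly gcs trajectories (24 time points, one row per patient):
# half trend upward toward recovery, half stay low. gcs is clipped to its 3-15 range.
recovering = np.clip(np.linspace(6, 14, 24) + rng.normal(0, 1, (50, 24)), 3, 15)
static = np.clip(5 + rng.normal(0, 1, (50, 24)), 3, 15)
trajectories = np.vstack([recovering, static])

# cluster patients by trajectory shape (k=2 is an assumption for illustration).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(trajectories)
print(np.bincount(labels))  # sizes of the two trajectory clusters
```

with well-separated synthetic trajectories like these, the two clusters recover the "recovering" and "static" groups; on real nsicu data, choosing k and handling unequal-length stays would require further work.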
serial gcs measurements do provide value for prognosis and may be able to distinguish patients with potential to recover early in their hospital course. stroke is a major cause of death and disability, and a common reason for admission to neurological intensive care units. preferences for cardiopulmonary resuscitation (cpr) are often discussed, but there is limited understanding of cpr outcomes among stroke patients. we conducted a systematic review and meta-analysis of published literature from 1990 to 2016 among stroke patients undergoing in-hospital cpr. preferred reporting items for systematic reviews and meta-analysis, meta-analysis of observational studies in epidemiology, and utstein guidelines were used to construct standardized reporting templates. detailed searches of pubmed and cochrane libraries were supplemented with hand-searched bibliographies. primary data from studies meeting inclusion criteria at two levels, i) survival to hospital discharge after cpr with stroke as a primary admitting diagnosis, and the less restrictive ii) survival to hospital discharge after cpr with stroke listed as a comorbidity, were extracted and meta-analyzed to generate weighted, pooled estimates of survival to hospital discharge. of 818 articles screened, 176 articles (22%) underwent full review. three articles met primary inclusion criteria, specifically identifying patients with stroke as a primary admitting diagnosis. twenty additional articles met secondary inclusion criteria, listing stroke as a comorbidity. there was an 8% (95% confidence interval (ci) 0.01, 0.14) rate of survival to hospital discharge from a combined sample of 561 patients that received in-hospital cpr. among the more heterogeneous population of inpatients with stroke listed as a comorbidity, there was a 16% (95% ci 0.14, 0.19) rate of survival to hospital discharge. adherence to utstein reporting guidelines was poor, and neurological outcomes were measured in 6 (26%) of the studies. 
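the pooled survival estimates above are weighted averages across studies. one common, simple weighting (shown here as an illustrative sketch, not necessarily the authors' exact meta-analytic model) weights each study's proportion by its sample size; the per-study survivor counts below are hypothetical, chosen only so the totals sum to the 561 patients reported in the abstract.

```python
def pooled_proportion(events, totals):
    """sample-size-weighted pooled proportion across studies: sum of events / sum of totals."""
    return sum(events) / sum(totals)

# hypothetical per-study counts (illustration only):
events = [10, 20, 15]     # survivors to discharge in each study
totals = [100, 300, 161]  # patients receiving in-hospital cpr (sums to 561, as in the abstract)
print(round(pooled_proportion(events, totals), 2))  # 0.08 with these illustrative counts
```

random-effects models (e.g. dersimonian-laird) would instead weight by inverse variance and allow between-study heterogeneity, which matters for the "more heterogeneous" comorbidity analysis.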
survival to hospital discharge among stroke patients is lower relative to general hospital populations. these preliminary findings highlight the need to improve the quality of evidence informing patient and provider discussions of cpr among stroke patients. there is often a tendency to treat patients with traumatic brain injury (tbi) and a glasgow coma scale (gcs) score of 3 on presentation less aggressively because of low expectations for a good outcome. based on the crash trial database, a prognosis calculator has been developed for the prediction of outcome in tbi patients. our aim was to investigate whether the crash calculator can be used for prognostication in patients with tbi and a gcs of 3 on presentation. we performed a retrospective review of patients with tbi and a gcs score of 3 from 1/12 to 9/16. the crash calculator has been validated to estimate mortality at 14 days and death and severe disability at six months (glasgow outcome scale [gos] 1-3). the calculator uses country of origin (usa in our dataset), age, gcs, pupil reactivity to light, presence of major extracranial injury, and findings on ct scan of the brain (petechial hemorrhages, obliteration of the third ventricle or basal cisterns, subarachnoid bleeding, midline shift, and non-evacuated hematoma). the individual prognosis for mortality at 14 days and unfavourable outcome at 6 months was calculated and compared with the actual outcomes. a total of 62 patients were included. a trend toward underestimation of the risk of mortality at 14 days was found (estimated mortality was 66% compared to actual mortality of 81%; difference of 15%, p = 0.05). however, the estimation of outcome at 6 months was accurate (estimated gos 1-3 was 85.5% compared to actual of 85.5%, p = 1.0). the crash prognosis calculator underestimated the risk of mortality, but accurately predicted unfavourable 6-month outcome in patients with tbi and gcs of 3 on presentation. 
pending larger studies to validate our findings, we believe that the crash calculator can only support - not replace - clinical judgment. there are no nationally enforced standards regarding brain death. few data exist on how brain death is determined across the u.s. we used claims data from 2012-2015 from a nationally representative 5% sample of medicare beneficiaries; brain death was defined as icd-9-cm code 348.82. the primary outcomes were evaluation by a neurologist or neurosurgeon, defined as a physician evaluation-and-management claim associated with the medicare provider specialty codes for neurology or neurosurgery during the dates of the hospitalization. cpt codes were used to ascertain ancillary testing: brain radionuclide imaging, transcranial doppler ultrasound, or electroencephalography for brain death determination. exact binomial confidence intervals (cis) were used to report proportions. we identified 312 patients with a brain death diagnosis. common associated neurological diagnoses were stroke (122 patients; 39.1%), cardiac arrest (120; 38.5%), and traumatic brain injury (tbi) (45; 14.4%). head ct or brain mri was performed in 79.8%; this was true of 91.6% of cases of stroke or tbi versus 68.3% of cardiac arrests. neurologists were involved in the care of 137 patients (43.9%; 95% ci, 38.3-49.6%). they were more commonly involved in the care of stroke (50.8%) or cardiac arrest (55.0%) than tbi (6.7%) or other conditions (24.0%). neurosurgeons were involved in 98 cases (31.4%; 95% ci, 26.3-36.9%), mostly after tbi or stroke. two hundred patients (64.1%; 95% ci, 58.5-69.4%) were seen by a neurologist or neurosurgeon. twenty-nine patients (9.3%; 95% ci, 6.3-13.1%) underwent any ancillary testing. two hundred and nine patients (67.0%; 95% ci, 61.5-72.2%) were seen by a neurologist or neurosurgeon or underwent ancillary testing. 
in a nationally representative cohort of elderly patients, one-third of patients with a brain death diagnosis were not evaluated by a neurologist or neurosurgeon or by using ancillary tests. traumatic brain injury (tbi) is a major cause of death and disability in the us. recent advances in 3d illustration (3di) can precisely quantify intracranial pathology on computed tomography (ct). the current standard of measurement, abc/2, demonstrates variability in precision with bleed phenotype. the aim of this project is to assess the accuracy of automated 3di and compare it with standard abc/2 measurements. baseline ct scans collected during the protectiii multicenter clinical trial (n=881) were retrospectively reviewed by a central neuroradiologist. subdural and epidural hematomas were identified (n=260). the radiologist calculated the abc/2 score using osirix (mac) and radiant (pc) workstations. in a blinded fashion, research assistants concurrently generated 3di using the following methods: dicom data were resampled to 1.5 mm thickness slices and symmetrized using image analysis software (aquarius, terarecon inc, 2012). lesions were then compiled into single volumetric regions of interest (3d slicer v4.5, 2015). hemorrhages were divided into two groups by volume of hemorrhage for analysis, and agreement was assessed with bland-altman analysis. this study was irb approved. there is a significant difference between the results of the 3di and abc/2 methods. in group 1, the estimated relative bias between the two measurements (after transformation) is 0.17 (sd 1.16; p-value 0.044; 95% ci 0.005, 0.333). in group 2, the relative bias is -0.712 (sd 1.22; p-value <0.0001; 95% ci -1.013, -0.410). the 3di method calculates detailed surface area measurements in large and small volume hemorrhages, while abc/2 averages cross-sectional area. the abc/2 estimates vary by bleed phenotype and offer less topographical precision than 3di. 
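for readers unfamiliar with the abc/2 method referenced above: it approximates hemorrhage volume as an ellipsoid (a = largest diameter, b = largest diameter perpendicular to a, c = vertical extent), which is why it loses precision for irregular, crescent-shaped extra-axial bleeds. a minimal sketch with purely hypothetical lesion dimensions (not data from the study):

```python
def abc_over_2(a_cm, b_cm, c_cm):
    """ellipsoid approximation of hematoma volume in ml: (a*b*c)/2.
    derived from the ellipsoid volume 4/3 * pi * (a/2)*(b/2)*(c/2), with pi approximated as 3."""
    return (a_cm * b_cm * c_cm) / 2.0

# hypothetical lesion measuring 6 x 3 x 4 cm on ct:
print(abc_over_2(6, 3, 4))  # 36.0 ml
```

the pi-approximated-as-3 simplification is what makes the formula a bedside estimate rather than a true volumetric measurement, and it is one source of the phenotype-dependent bias the abstract reports.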
this is particularly true in extra-axial hemorrhages. numerous studies have shown a significant association between hypotension and poor outcome in patients with head injuries. prior investigations have demonstrated that generation of negative intrathoracic pressure (itp) in ventilated patients with brain injury improves mean arterial pressure (map) and lowers intracranial pressure (icp). we hypothesized that augmentation of negative itp by breathing through an impedance threshold device (itd) with 7 cmh2o of inspiratory resistance would improve mean arterial pressure in a porcine model of intracranial hypertension. six spontaneously breathing female pigs (42.2 ± 3.7 kg), anesthetized with propofol, were subjected to focal brain injury through inflation of an 8 french foley catheter placed in the epidural space. once a stable injury was obtained, baseline data were collected for 20 minutes, followed by 20 minutes of itd use. results are reported as mean ± sd. the itp without the itd during inspiration was -1.0 ± 0.4 mmhg, compared to -9.0 ± 0.8 mmhg with the itd, p<0.001. following brain injury, map (mmhg) was significantly higher during itd use (95 ± 11 vs. 89 ± 11; p<0.001). cerebral perfusion pressure (mmhg) was also significantly higher during itd use (75 ± 11 vs. 69 ± 10; p<0.05). icp (mmhg) was not significantly different between groups (20.1 ± 3.4 vs. 19.9 ± 3.3; p=0.19), although end-tidal carbon dioxide levels (mmhg) were significantly higher during itd use (26 ± 3 vs. 20 ± 1; p<0.005), presumably due to lower respiratory rates during itd use (29 ± 5 vs. 35 ± 7; p=0.21). contralateral cerebral blood flow (ml/100gm/min) was similar between groups (34 ± 18 vs. 34 ± 17). in this porcine model of intracranial hypertension, spontaneous respirations through an itd significantly improved map and cpp. this approach could be utilized to prevent hypotensive episodes in the setting of brain injury. 
the impact of applying nanotechnology and biomedical engineering to improve the management of patients with spinal cord injuries (sci) is still not accurately described or understood. a systematic review of the literature was conducted, according to prisma criteria, to identify publications revolving around "sci+nanotechnology" and "sci+biomedical engineering" indexed on pubmed in the period 2007-2017. furthermore, the database of clinicaltrials.gov was searched to highlight the stage of translation of this research into clinical practice through randomized clinical trials (rct). finally, the uspto database was interrogated to identify the number of pertinent patents filed in north america in the same timeframe. the literature on bioengineering and nanotechnology contributions to sci is growing exponentially, with almost 50% of articles published between 2015 and 2016. its quality and the interest of the scientific community are high, as confirmed by the average impact factor (14.9) and the average number of citations (5) of articles published in the last two years. this field still represents a niche of sci research: the 56 articles reviewed represent only 1.2% of all articles on sci published in the same decade. this trend is confirmed on clinicaltrials.gov: out of 886 rcts on sci only a few focus on the application of those technologies; furthermore, 34 out of 40 articles arising from the rcts identified were published after 2010, and 25% after 2016. interestingly, with 119 patents registered by the uspto, interest in the commercial application of this research seems vivid. currently, the most promising areas of research are: nanofabrication/nanoscaffolding for structural repair, nanodrugs for regeneration, and design of neural interfaces for functional therapies. 
this review showed that both universities and independent research institutions (mostly from the usa, china and the european union) are driving this research race; the figures provided above suggest its potential to become a successful example of translational medicine. there are no neuroprotective and neuroregenerative treatments available for traumatic brain injury (tbi). clinical trials investigating potential treatments such as therapeutic hypothermia and progesterone have failed. pre-clinical studies indicate there may be a role for stem-cells in promoting neuroprotection/neuroregeneration in-vivo in animal models of tbi. we aim to provide a pre-clinical literature review of stem-cells as a potential therapeutic option in tbi animal models. a literature search was conducted on pubmed and google scholar using the terms "traumatic brain injury", "stem-cell", "preclinical", and "animal studies". studies were included if there was an in-vivo animal model of tbi with either intravenous or intra-cortical stem-cell transplantation, along with a control group, and investigated either motor or behavioral outcomes, or a combination. twenty-seven studies (n=1184 animals) satisfied the criteria. 774/1184 (65.4%) animals were investigated for outcomes. 17 studies harvested stem-cells from a human source, whereas 10 harvested stem-cells from an animal source. bone-marrow stromal-cells (bmsc) were used in 17 studies, neural stem-cells (nsc) in 7, and miscellaneous in 3. 450/774 (58.1%) animals received stem-cell transplantation, whereas 324 were controls. of the animals receiving stem-cell transplantation (450), 339 (75.3%) showed significantly better outcomes relative to control animals in each individual study, with the exception of one study. amongst transplanted animals, functional outcomes did not differ significantly when grouped by stem-cell type (p=0.553), transplantation route (p=0.054), or source (p=0.784). 
animals were followed up until 1 week (n=5 studies), 2 weeks (n=10), 4 weeks (n=5), or >4 weeks (n=7). this pre-clinical data demonstrates that stem-cell transplantation may have treatment potential in tbi, as shown by improvement in functional outcome in as many as three-quarters of all animals that were treated with stem-cells. this data provides a foundation for the design of clinical translational studies. the age of trauma patients, including those with asdh, is increasing, as reported by national trauma registries. we were especially interested in whether age > 65 years significantly influences outcome compared to younger patients and whether other factors like initial gcs have an influence too. methods: variables analyzed included midline shift, whether the asdh was surgically removed, additional contusions, comorbidities and intake of anticoagulants. outcome was analyzed using the glasgow outcome scale (gos) at hospital discharge (gos 1) and, if possible, 6 months after discharge (gos 2). uni- and multivariate analysis (cox regression model) was performed using the sigma stat software (significance level 0.05). age > 65 years was associated with adverse outcome (p=0.014). in addition, all patients > 65 years with an initial gcs 3 died, whereas only 44% of younger patients with initial gcs 3 died (p<0.005); this was the only significant result in the multivariate analysis. the monovariate analysis of our data showed a significantly higher risk for adverse outcome after asdh in older patients. it should be considered whether it is reasonable to transfer them from local hospitals to a specialized neurosurgical clinic, especially in times of limited resources. the reported incidence of pulmonary edema in isolated head injury varies from 50-85%. lung sonography is a potentially useful non-invasive technique to detect extravascular lung water (evlw). this study aimed to identify the presence of evlw using lung ultrasound (b lines >3 per lung field) in head-injured patients admitted to the icu. 
secondary objectives were to compare diagnostic accuracy and time to identification of evlw using chest x-ray versus lung ultrasound. associations of evlw with duration of mechanical ventilation (mv) and icu stay were also observed. after ethical clearance (iec no. int/iec/2015/372), 120 patients with head injury requiring mv and critical care were enrolled in the study. daily routine chest x-ray and bedside lung ultrasound were done from the day of icu admission for as long as the patient was on mechanical ventilator support. four intercostal spaces (ics) were scanned in the semi-recumbent position: the third and sixth ics on either side of the sternum to the mid-clavicular line. evlw was reported as > 3 b lines per lung field on sonography. details of mv and icu management were noted. evidence of evlw at the time of admission using sonography and cxr was recorded in 32 and 6 patients, respectively. during icu stay, 61.7% of patients showed evlw using lung usg (vs 43 patients on cxr). mean delay in detection of evlw on cxr after detection on ultrasound was 1.42±0.767 days. patients with lower gcs, serum albumin and pao2/fio2 ratio and greater apache ii and saps ii scores had a significantly higher incidence of evlw. duration of weaning, mechanical ventilation and icu stay was significantly longer in patients with evlw (p<0.05). conclusions: lung ultrasound appears promising in detecting evlw earlier than chest x-ray and may help minimize the duration of mechanical ventilation, weaning and icu stay. antiepileptic drugs (aeds) are recommended by guidelines for prophylaxis of early post-traumatic seizures (pts) associated with traumatic brain injury (tbi). there has been an increased use of both phenytoin and levetiracetam for this indication. the purpose of this study is to determine the incremental cost-effectiveness of phenytoin compared with levetiracetam for early pts prophylaxis in tbi patients.
a cost-effectiveness study was conducted comparing phenytoin and levetiracetam for early pts prophylaxis during the 7 days post-tbi. patients were included if they were 18 years or older, received a study drug, and had a diagnosis of tbi. patients were excluded if they had a history of epilepsy, did not sustain a recent tbi, were initiated on both study drugs concurrently, or were switched to pentobarbital for elevated intracranial pressure. data were collected via retrospective chart review using electronic medical records and publicly reported costs. effectiveness was measured as having a successful seizure prophylaxis regimen (sspr), which was defined as 1) no clinical or electrographic seizure, 2) no discontinuation of the study aed, 3) no cross-over of the study aed to a different aed, and 4) no addition of an aed during the 7 days of therapy. the costs included the costs of the study drugs, phenytoin levels, and eeg. the data were used to calculate the primary endpoint: the incremental cost for the incremental change in sspr, or the incremental cost-effectiveness ratio (icer). the phenytoin regimen (n=136) cost $371.76 and had an sspr of 81.6%. the levetiracetam regimen (n=237) cost $628.85 and had an sspr of 86.1%. the icer was $57 for each 1% increase in sspr with levetiracetam. the ssprs of phenytoin and levetiracetam were similar. because patients who received phenytoin may differ from those who received levetiracetam, further analysis is needed prior to drawing any conclusions about the cost-effectiveness of levetiracetam relative to phenytoin. augmented renal clearance (arc) has been reported in up to 85% of critically ill tbi patients and may impact therapeutic drug concentrations. improved predictors of arc are needed. serum cystatin c (cysc), a validated marker of glomerular filtration, has not been examined as a marker for arc in critically ill tbi patients. this pilot study tested the hypothesis that serum cysc concentrations are lower than reference values following tbi.
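the icer reported in the cost-effectiveness abstract above is simple arithmetic: incremental cost divided by incremental effectiveness. a minimal sketch using the figures reported in the abstract (the variable names are ours, the numbers are the study's):

```python
# costs ($) and success rates (%) reported for the two regimens
cost_phenytoin, sspr_phenytoin = 371.76, 81.6
cost_levetiracetam, sspr_levetiracetam = 628.85, 86.1

# icer = incremental cost / incremental effectiveness
icer = (cost_levetiracetam - cost_phenytoin) / (sspr_levetiracetam - sspr_phenytoin)
print(f"icer: ${icer:.0f} per 1% gain in sspr")  # matches the reported ~$57
```

note that the icer is only interpretable when the denominator is positive; if the new regimen were both costlier and less effective it would simply be dominated.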
adult tbi patients enrolled in the ukccts-unctracs prospective study of arc effects on drug clearance were eligible. cysc serum concentrations (elisa, r&d cysc) were measured daily for up to 7 days and compared to reference values. descriptive statistics and student's t-test for continuous measures (patient vs. reference lower-range cysc) were calculated. the first ten patients [7m/3f, mean age = 41.5 years (18-59 y/o), median gcs = 6 (iqr 3-7)] provided a total of 49 serum cysc samples for analysis. each patient provided at least 4 samples (range 4-7) over up to seven days. measured serum cysc concentrations were below the reference range in 24 of 49 samples. the overall mean cysc concentration was 0.589 ± 0.132 mg/l vs an expected mean of 0.621 ± 0.05 mg/l (ns). measured values fell below the lower reference range in 5 patients (2m/3f) for the first 4 study days (mean = 0.483 ± 0.101 vs 0.622 ± 0.067, p<0.05). the mean difference between measured concentration and reference value was 0.139 ± 0.07 mg/l. after 4 days, four patients (2m/2f) remained below reference values with a mean difference of 0.14 ± 0.06 mg/dl. these preliminary results show cysc was not consistently below reference ranges in all tbi subjects. a subset of subjects showed significantly lower cysc within seven days of injury. the relationship between cysc and arc needs to be further examined as analysis continues. functional connectivity of the default mode network (dmn) is believed to be necessary for recovery of consciousness after coma. however, dmn connectivity has not been comprehensively studied in patients with acute severe tbi. we hypothesized that dmn connectivity in patients with acute severe tbi is associated with level of consciousness. we prospectively enrolled patients admitted to the intensive care unit for acute severe tbi and performed resting-state functional mri (rs-fmri) as soon as safely possible.
dmn functional connectivity was assessed by rs-fmri analysis of the blood-oxygen level dependent (bold) signal using a seed-based approach. pearson's correlation coefficients were calculated between the mean bold time series within dmn nodes and all other regions in the brain. level of consciousness was assessed at the time of the scan using the coma recovery scale-revised (crs-r). two-sample t-tests were performed to identify brain regions with connectivity differences between conscious and unconscious subjects. we then tested for associations between level of consciousness and dmn connectivity within these regions. we enrolled 16 patients (12 male, mean +/- sd age 30 +/- 8 years) and 16 matched controls (11 male, age 28 +/- 8 years). rs-fmri was performed 9.6 +/- 4.5 days post-injury. at the time of rs-fmri, patients' levels of consciousness were coma (n=1), vegetative state (vs; n=4), minimally conscious state (mcs; n=7), and post-traumatic confusional state (ptcs; n=4). connectivity within the medial prefrontal cortex and posterior cingulate was selectively reduced in unconscious patients (coma and vs) compared to conscious patients (mcs and ptcs; false discovery rate-corrected p < 0.05). when these regions were further interrogated, connectivity correlated with crs-r scores. conclusions: dmn functional connectivity correlates with level of consciousness after acute severe tbi. traumatic brain injury (tbi) is a substantial source of death, disability, and healthcare utilization. many older tbi patients present to community hospitals and are transferred to trauma centers for further care; however, little is known about the provision of care and patient outcomes at the final receiving hospital. we described trauma center care among geriatric transfer patients with tbi. we conducted a secondary analysis on a sub-cohort from a prospective multi-center study focusing on ambulance and emergency department (ed) care of injured older adults transported via ambulance.
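the seed-based connectivity analysis described in the dmn abstract above (pearson correlation between a seed region's mean bold time series and every other region) can be sketched on synthetic data; the dimensions, signal strengths and seeds here are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 200, 50

# synthetic bold data: a seed time series plus voxel time series,
# half of which share the seed signal (simulated "connected" regions)
seed = rng.standard_normal(n_timepoints)
voxels = rng.standard_normal((n_voxels, n_timepoints))
voxels[:25] += 0.8 * seed  # connected voxels carry an attenuated copy of the seed signal

# pearson correlation of the seed with every voxel gives the connectivity map
r = np.array([np.corrcoef(seed, v)[0, 1] for v in voxels])
print(f"connected: {r[:25].mean():.2f}, unconnected: {r[25:].mean():.2f}")
```

in real rs-fmri pipelines the same correlation step is preceded by motion correction, filtering and nuisance regression, and the resulting r maps are usually fisher z-transformed before group statistics.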
the current analysis focused on tbi patients transferred to the region's level i trauma center from another hospital. transfer paperwork from the originating hospital was reviewed and we conducted a detailed medical record abstraction, including computed tomography (ct) findings, procedures, length of stay (los), and ed disposition. data were collected on 205 transfer patients. thirty had confirmed abnormalities on head ct (14.6%). the mean age was 78 years (range: 55-91), 57% were female, and the most frequent mechanism of injury was falls (93%). average los was 13.5 days (range: 0-230, median los 4.5), with 8 patients staying one day or less. ct findings included subdural hematoma (60%), subarachnoid hemorrhage (50%), and intraparenchymal hemorrhage (36.7%). five patients required neurosurgical intervention (17%), eight required icu admission (27%), two were discharged from the ed (7%), and two transitioned to inpatient hospice (7%). tbi is a frequent cause of transfers to trauma centers. in our sample, admission occurred in the majority of patients, but neurosurgical intervention was less common. however, for appropriately selected patients, strategies such as telemedicine may reduce transfers, thus saving resources and improving continuity of care for patients and their families. this is an area in which future research is warranted. the prospects and timing of decannulation may affect surrogate decision making regarding tracheostomy for traumatic brain injury (tbi) patients, yet predictors of decannulation are unknown. methods: we studied tbi patients with tracheostomy admitted to an affiliated acute rehabilitation hospital between january 2014 and december 2014. patients who had life-sustaining measures withdrawn were excluded. admission data, including injury characteristics and presence of lung injury on initial chest x-ray, and inpatient complications were compared. patients were followed throughout rehab and to the point of decannulation.
patients lost to follow up were eliminated from analysis. time of decannulation was verified by inpatient physician notes. a cox proportional hazards model was created to determine factors associated with the time to decannulation, reported as hazard ratios (hr). there were 209 tbi patients admitted to the icu during the study period; 94 (79% men, mean 46 years old, median gcs 7) underwent tracheostomy after 10 ± 6 days of intubation, of whom 79 were followed throughout rehabilitation. overall cannulation time was 37 (29-65) days. 55 (70%) patients had their trach removed prior to discharge from rehab after 35 (27-49) days of cannulation. in a cox proportional hazards model adjusting for sex, reintubation, aspiration pneumonitis, and presence of lung injury on admission chest x-ray, a higher hospital discharge gcs was associated with a shorter time to decannulation (hr, 1.147; 95% ci, 1.023-1.286; p = .019), while patients who required inpatient dialysis had a longer time to decannulation (hr, 0.281; 95% ci, 0.098-0.803; p = .018). the majority of tbi patients that require tracheostomy will be decannulated prior to discharge from rehab. longer durations of tracheostomy cannulation were associated with lower gcs at hospital discharge and with inpatient dialysis. goal directed therapy (gdt) is thought to be associated with outcome after traumatic brain injury (tbi). our team applied gdt to standardize care in patients with moderate to severe tbi, who were enrolled in a large multicenter clinical trial. physiologic goals were defined a priori in order to standardize care across 42 sites participating in the protect iii trial. data were collected hourly for all randomized subjects (n=882). hours where gdt goals were not achieved were classified as "transgressions". these included thresholds for map, inr (1.4), platelets, glucose (180 mg/dl) and sbp (180 mmhg). the proportion of hours spent in transgression was calculated for each parameter and grouped by quartile. poor outcome was defined via stratified dichotomy of the gos-e.
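the hazard ratios from the decannulation cox model above can be read as multiplicative effects on the instantaneous rate of decannulation, since proportional hazards assumes h(t|x) = h0(t)·exp(b·x). a small sketch using the reported hrs (the 5-point gcs difference is a hypothetical example of ours, not a comparison from the abstract):

```python
# hazard ratios reported by the fitted cox model (per-unit effects)
hr_gcs = 1.147       # per point of hospital-discharge gcs
hr_dialysis = 0.281  # inpatient dialysis vs none

# under proportional hazards, a k-point difference in a covariate
# multiplies the hazard by hr**k (effects combine multiplicatively)
five_point_gcs_gain = hr_gcs ** 5
print(f"5-point higher discharge gcs: {five_point_gcs_gain:.2f}x decannulation hazard")
print(f"dialysis: {hr_dialysis:.3f}x hazard, i.e. slower decannulation")
```

because decannulation is the event, an hr above 1 means faster decannulation, which is why the dialysis hr of 0.281 corresponds to a *longer* time with the tube in place.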
data were adjudicated electronically and via expert review. for each parameter, the association between outcome and either (1) occurrence of transgression or (2) cumulative duration of transgression was estimated via logistic regression, and backward selection was used to identify the physiologic parameters associated with outcome. subgroup analyses were performed in subjects with intracranial monitoring (ticp, n=480). parameters significant at alpha 0.01 are reported. prolonged duration of transgression was associated with poor outcome when: glucose>180mg/dl (p=0.0001); hgb180mg/dl (p=0.0098) and inversely associated with map180 mg/dl (p=0.0015) and was inversely associated with map<65mmhg (p=0.0055). the protect iii clinical trial rigorously monitored compliance with gdt after tbi. multiple significant associations between physiologic transgressions and patient outcome were found. the data suggest that reducing physiologic transgressions is important to minimizing patient morbidity after tbi. the measurement and management of intracranial pressure (icp) is a key component in the care of severe head injury. external ventricular drains (evd) have remained the standard due to the ability to lower icp with drainage of cerebrospinal fluid (csf). placement of an evd is a more invasive procedure than placement of intraparenchymal icp monitors (ipm), and it is unclear if the use of an evd improves outcomes. we hypothesized that early placement of an evd, in adult patients with severe head injury, would not affect outcomes. utilizing data from the citicoline brain injury treatment (cobrit) trial, a prospective multicenter study, we identified 224 patients who met the inclusion criteria: 1) placement of an icp monitoring device, 2) glasgow coma score (gcs) less than 9, 3) evd placement prior to arrival or within 6 hours of arrival at the study institution. primary outcome was glasgow outcome score-extended (gose) at 180 days post injury.
secondary outcomes included neuropsychological evaluations at 180 days post injury, mortality, and length of icu stay. logistic regression with forward-stepwise predictor adjustment and propensity score adjustment was performed to assess the independent association between evd placement and outcomes. patients who received an evd prior to or within 6 hours of arrival at the study institution had worse gose at 180 days (3.8 ± 2.2 vs 4.9 ± 2.2, p=0.007), higher in-hospital mortality (23% vs 10%, p=0.021), and did worse on 4 out of 8 neuropsychological measures at 180 days. there was no difference in icu length of stay (16.5 ± 9.0 vs 13.9 ± 10.8 days, p=0.289). early placement of evds in severe adult head injury is independently associated with worse outcomes and higher in-hospital mortality. goal directed therapy (gdt) is thought to be associated with outcome after traumatic brain injury (tbi). our team applied gdt to standardize care in patients with moderate to severe tbi, who were enrolled in a large multicenter clinical trial. physiologic goals were defined a priori in order to standardize care across 42 sites participating in the protect iii trial. data were collected hourly for all randomized subjects (n=882). hours where gdt goals were not achieved were classified as "transgressions". these included thresholds for map, inr (1.4), platelets, glucose (180 mg/dl) and sbp (180 mmhg). the proportion of hours spent in transgression was calculated for each parameter and grouped by quartile. data were adjudicated electronically and via expert review. for each parameter, the association between outcome and either (1) occurrence of transgression or (2) cumulative duration of transgression was estimated via logistic regression, and backward selection was used to identify the physiologic parameters associated with mortality. subgroup analyses were performed in subjects with intracranial monitoring (ticp, n=480). parameters significant at alpha 0.01 are reported.
mortality was 17.2% and 23.2% in the full and ticp cohorts, respectively. prolonged duration of transgression was associated with increased mortality for: hgb1.4 (p180mg/dl (p180mg/dl (p=0.0093), and sbp1.4 (p1.4 (p=0.0005). covariates inversely related to mortality included a single occurrence of map180mmhg (p<0.0032). the protect iii clinical trial rigorously monitored compliance with gdt after tbi. multiple associations between physiologic transgressions and mortality were observed. the data suggest that maintaining physiologic measures within gdt guidelines may be important in preventing deaths. current outcome models in moderate-severe traumatic brain injury (mstbi) include only admission characteristics. yet, mstbi patients commonly have prolonged intensive-care-unit (icu) stays with high risks of developing icu complications, lending to the hypothesis that these may be additionally associated with outcomes. the objective of this study was to examine the incidence rates of pre-specified medical and neurological icu complications, and their impact on post-traumatic in-hospital mortality and 12-month functional outcomes. we analyzed 431 mstbi patients consecutively enrolled in the prospective observational optimism study at a level-1 trauma center between 11/2009-1/2017. poor outcome was defined as glasgow outcome scale 1-3. multivariable logistic regression was employed to adjust for admission characteristics and icu length-of-stay. the mean age was 52±22 years, 73% were men, and median motor glasgow-coma-scale and injury-severity-scores were 4 (iqr 1;5) and 27 (iqr 25;38), respectively. the three most common medical and neurological icu complications were: hyperglycemia (83%), systemic inflammatory response syndrome (69%) and fever (66%); intracranial pressure crisis (icp; [56% of n=162 with icp-monitor]), brain edema requiring osmotherapy (46%), and herniation (45%).
multivariable models were adjusted for age, marshall ct classification, motor glasgow-coma-scale, pre-admission hypotension, icu length-of-stay and injury-severity-score. after adjustment, in-hospital mortality was significantly associated with in-icu cardiac arrest (or 23; 95%ci 3.5-152). recent studies suggest benefits for early tracheostomy in patients with traumatic brain injury (tbi), yet data regarding who will require tracheostomy are lacking. patients who had life-sustaining measures withdrawn were excluded. admission and inpatient variables were compared. multivariable logistic regression analyses were used to assess admission and inpatient factors associated with receiving a tracheostomy and to develop models predictive of tracheostomy. there were 209 patients (78% men, mean 48 years old, median gcs 8) meeting study criteria, with tracheostomy performed in 94 (45%). admission predictors of tracheostomy included gcs, marshall score, injury mechanism, pao2/fio2 ratio, and number of quadrants on chest x-ray with consolidation. inpatient factors associated with tracheostomy included the requirement for an external ventricular drain (evd), number of operations, pneumothorax, inpatient dialysis, aspiration, reintubation, and the presence of hospital-acquired infections. multiple logistic regression analysis demonstrated that the development of hospital-acquired infection (adjusted odds ratio [aor], 4.78; 95% confidence interval [ci], 2.18-10.48; p < .001), number of operations (aor, 1.47; 95% ci, 1.2-1.78; p < .001), pneumothorax (aor, 2.64; 95% ci, 1.08-6.48; p = .034), reintubation (aor, 7.18; 95% ci, 2.50-20.54; p < .001), penetrating tbi (aor, 0.14; 95% ci, 0.03-0.60; p = .008) and placement of an evd (aor, 6.73; 95% ci, 3.03-14.95; p < .001) were independently associated with patients undergoing tracheostomy. a model of inpatient variables only was more strongly associated with tracheostomy than one with admission variables only (roc auc 0.90 vs.
0.72, p<0.001) and did not benefit from the addition of admission variables (roc auc 0.91 vs 0.90, p=0.76). potentially modifiable inpatient factors have a stronger association with tracheostomy than do admission characteristics. existing traumatic brain injury (tbi) guidelines are designed primarily for the evaluation and management of tbi in tertiary care centers with advanced neuroscience capabilities. military special operations medical providers, however, are often required to treat and sustain patients in austere environments with limited resources for up to 72 hours. tbi management guidelines directed specifically toward the care of these patients are needed. a review of recent operational experiences involving tbi and a survey of military special operations medics prompted a multidisciplinary expert panel to develop draft clinical practice guidelines/recommendations for prolonged field management of tbi. the panel conducted an in-depth review of the literature on tbi and related topics and adapted existing and emerging therapies to address the unique challenges encountered in prolonged field care. optimal management of pbto2 is not fully established. each time an arterial blood gas was drawn, associated physiologic variables were recorded (icp, cpp, hemoglobin, temperature, pco2 and pao2). probes were localized in normal-appearing white matter. a total of 1006 data sets were collected from 38 patients (mean age 44.5±20.0, median gcs 4). mean pao2 for the group as a whole was 178 mmhg (± 88) and mean cpp was 72 mmhg (± 12). mean duration of pbto2 monitoring was 6.5 days (± 3.7). taking into account all determinants of pbto2 and using a protocolized approach to correct pbto2, high pao2 values were required for a few days. high pao2 values are possibly required due to the fact that oxygen delivery to the brain is rate-limited by diffusion and impaired by oedema or microvascular ischemia.
it should be noted that pulse oximetry is not sensitive enough to detect pao2 below this level. traumatic brain injury (tbi) and stroke are extremely common causes of acute brain injury (abi), which cause long-term disability and permanent neurological impairment. coma and stupor are common manifestations of abi, due to interruptions of the ascending reticular activating system (aras). neurostimulants can improve functioning of the aras. despite decades of research there is a paucity of prospective high-level evidence on utilizing neurostimulants to promote earlier awakening from coma and stupor in abi. we reviewed the literature using the grade level of evidence (loe) methodology. we performed a preliminary literature search of the national library of medicine (nlm) using the search terms abi and stimulants. within the literature we searched for timing of stimulant use among abi studies and included all forms of abi such as tbi, stroke, and anoxic injury. we retrieved 508 total results, of which we excluded 474 since they did not meet grade high-loe criteria or were "n of 1" studies or aggregates. only 34 high-loe randomized studies or meta-analyses were found. among these, various stimulants were investigated, including psychostimulants such as methylphenidate and lisdexamfetamine, as well as caffeine, armodafinil, galantamine, and amantadine. methylphenidate had 10 randomized trials and a meta-analysis in subacute tbi but reported only attention as a main outcome. we were unable to draw broad-level recommendations about optimal timing, best stimulant, and patient-centered outcomes from this data. there is insufficient data to recommend an optimal stimulant, timing, and dosing among heterogeneous abi disease models. we propose conducting future homogeneous abi neurostimulant trials for safety, tolerability, dose-finding, optimal timing, and outcomes-based efficacy.
neurostimulants could play a role in earlier awakening and extubation in abi, which, if studied adequately, could improve outcomes much as sedation-vacation bundles currently do in icus. tbi remains the leading cause of death and disability in young adults in the us and europe. thus far, pharmacological and non-pharmacological intervention studies have not confirmed benefits on functional outcomes. the inducible enzyme nitric oxide synthase (inos) is upregulated in response to brain injury, causing excessive production of no, a key driver of secondary injury after tbi. the antipterin vas203 is a structural analogue of the endogenous nos cofactor and a potent in-vivo selective inhibitor of inos. a randomized, placebo-controlled phase 2 study examined 3 dose levels of vas203 in 32 patients with acute moderate or severe tbi. cerebral microdialysis showed pharmacologically relevant drug concentrations close to the injury and a tendency for vas203 to increase the arginine/citrulline ratio, an indirect marker of nos inhibition (stover et al., j neurotrauma 2014). vas203 conferred a significant benefit on the extended glasgow outcome scale interview (egos-i) at 6 and 12 months after injury. no changes in systemic blood pressure or partial brain oxygen pressure were noted. a recent pharmacokinetics and pharmacodynamics study further corroborated the selective inos inhibition by vas203. the confirmatory nostra phase 3 trial (eudract no. 2013-003368-29; clinicaltrials.gov identifier nct02794168) was initiated in 2016. adult patients with a non-penetrating tbi requiring intracranial pressure monitoring are randomized 1:1 to vas203 or placebo, administered in addition to standard of care as an intravenous continuous infusion for 48 hours, starting between 6 and 18 hours post tbi. the primary efficacy endpoint is egos-i at 6 months post injury. additional endpoints include the daily therapy intensity level and tbi-specific quality of life measures.
continuous safety monitoring is performed by an independent committee. nostra iii, the only ongoing registration study in acute moderate and severe tbi, is sponsored by vasopharm gmbh and plans to recruit 232 patients by q3 2018. a glasgow coma scale (gcs) score of 3 on presentation in patients with traumatic brain injury (tbi) portends a poor prognosis. consequently, there is often a tendency to treat these patients less aggressively because of low expectations for a good outcome. we performed a retrospective review of patients with tbi and a gcs score of 3. we recorded demographics, apache iv scores, pupillary reactivity to light, intracranial pressure (icp), icp burden (the number of days with an icp spike >25 mm hg as a percentage of the total number of days monitored), and outcome (mortality and glasgow outcome scale [gos] at 6 months, with good outcome defined as gos of 4-5). patients were divided into 2 groups: group 1 (gos = 1-3) and group 2 (gos = 4-5). a total of 62 patients were included. the overall mortality rate was 80.6%. at 6 months, 9 patients (14.5%) achieved a gos of 4-5. compared to group 2 (n = 9), group 1 (n = 53) had a higher average apache iv score (104 ± 19 vs 89 ± 27, p = 0.04), more patients with bilateral fixed pupils (59% vs 22%, p = 0.04), and higher icp burden (50 ± 34 vs 0 ± 0, p = 0.0001). a gos score of 4-5 was achieved in 41% of patients presenting with bilateral reactive pupils versus 6.9% of patients presenting with bilateral fixed pupils (p = 0.005). 14.5% of patients with tbi and a gcs of 3 at presentation achieved a good outcome at 6 months. apache iv scores, icp burden, and pupillary reactivity were significant predictors of outcome. we believe that patients with severe tbi who present with a gcs of 3 should still be treated aggressively initially, since a good outcome can be obtained in a significant proportion of patients.
elevated circulating catecholamine levels are independently associated with functional outcome and mortality after isolated traumatic brain injury (tbi). we assessed the ability of peripheral catecholamine levels to improve the prognostic performance of the crash and impact-tbi models. this was a prospective, observational, multicenter cohort study conducted at three level 1 trauma centers in canada and the usa. epinephrine (epi) and norepinephrine (ne) concentrations were measured in peripheral blood at admission (baseline) and at 6, 12 and 24 h after trauma. outcome was assessed at 6 months with the extended glasgow outcome scale (gose) and dichotomized into favorable and unfavorable. risk models were built from the crash and impact-tbi models, which identified core prognostic markers of severe tbi. the baseline model (m1) included age, gcs and pupillary size/reactivity. model 2 (m2) included m1 + hypoxia, hypotension and marshall ct classification. models 3 and 4 included m1 + epi levels and m1 + ne levels, respectively. risk model performance was assessed by comparing receiver operating characteristic (roc) curves and by use of the integrated discrimination improvement (idi) index. m3 had a significantly higher roc and idi than the baseline model (m1) for predicting mortality: roc = 0.930 (0.891-0.968, p = 0.012) and idi = 0.14 (p = 0.0001). the prediction of mortality was not improved by including ne [m4: roc = 0.896 (0.848-0.943, p = 0.245) and idi = 0.02 (p = 0.124)]. the integrated discrimination improvement index indicated that the prediction of unfavourable outcome by the baseline model was improved by including epi (idi = 0.04, p = 0.024) and ne (idi = 0.08, p = 0.0002) in the models. catecholamine levels improved risk model performance for predicting mortality and unfavorable outcome after traumatic brain injury. following traumatic brain injury (tbi), depression is common and may influence recovery.
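the idi index used in the catecholamine risk-model comparison above measures how much a new model raises predicted risk for patients who had the event while lowering it for those who did not. a minimal sketch on invented toy data (the six patients and their predicted probabilities are ours, for illustration only):

```python
import numpy as np

def idi(p_old, p_new, y):
    """integrated discrimination improvement between two risk models."""
    y = np.asarray(y, bool)
    events_gain = p_new[y].mean() - p_old[y].mean()        # risk should rise for events
    nonevents_gain = p_old[~y].mean() - p_new[~y].mean()   # and fall for non-events
    return events_gain + nonevents_gain

# toy predicted mortality probabilities for 6 hypothetical patients (1 = died)
y = np.array([1, 1, 1, 0, 0, 0])
p_base = np.array([0.6, 0.5, 0.7, 0.4, 0.3, 0.5])  # baseline model (m1-style)
p_aug  = np.array([0.8, 0.6, 0.8, 0.3, 0.2, 0.4])  # baseline + extra marker
print(f"idi = {idi(p_base, p_aug, y):.3f}")
```

a positive idi means the augmented model separates events from non-events better on average; significance testing (as in the abstract) typically uses a z-test on the paired differences.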
small trials have demonstrated that various drugs are beneficial in managing depression following tbi, but no large, definitive study has been conducted. we performed a meta-analysis to estimate the potential benefit of anti-depressant medications following tbi. multiple databases were searched using the terms "anti-depressant tbi" and "depression treatment tbi" to find prospective pharmacologic treatment studies of depression following tbi. studies were excluded if they did not measure depression as an outcome. effect sizes for anti-depressant medications in post-tbi patients were calculated for within-subjects designs that examined change from baseline after receiving medical treatment and for treatment-placebo designs that examined the differences between anti-depressant and placebo groups. a random effects model was used for both analyses. of 923 titles screened, 10 studies were included, with 288 total patients. medications evaluated included selective serotonin reuptake inhibitors, monoamine oxidase inhibitors, and tricyclic antidepressants. pooled estimates showed a significant reduction in depression scores for individuals after pharmacotherapy (mean change [mc] -11.7, 95% confidence interval [ci]: -15.5 to -8.0) and a significant difference in reduction of depression scores between medications and placebo in the pooled estimate (standardized mean difference of four trials [smd] -0.4, 95% ci: -0.7 to -0.1); however, only one of the four treatment-placebo studies found that medications significantly reduce depression scores more than placebo. this meta-analysis found a significant benefit of pharmacotherapy for treatment of depression in patients with tbi. however, there was a high degree of bias and heterogeneity regarding tbi severity, time since injury, depression severity, and demographics.
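random effects pooling of the kind used in the meta-analysis above is commonly done with the dersimonian-laird estimator, which inflates each study's variance by an estimated between-study variance tau². a self-contained sketch with invented trial data (the four effects and variances below are illustrative, not the study's):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """random-effects pooled estimate via the dersimonian-laird method."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                          # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # cochran's q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance, floored at 0
    w_re = 1.0 / (variances + tau2)              # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se

# toy standardized mean differences from four hypothetical placebo-controlled trials
smd = [-0.6, -0.2, -0.5, -0.1]
var = [0.04, 0.05, 0.06, 0.03]
pooled, se = dersimonian_laird(smd, var)
print(f"pooled smd {pooled:.2f} (95% ci {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
```

when tau² estimates to zero the result collapses to the fixed-effect pooled estimate; larger heterogeneity shifts weight toward equal weighting across studies.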
larger prospective studies on the impact of anti-depressants on post-tbi depression are warranted to better understand treatment effects and the relationship of post-tbi depression to outcome more broadly. pleural effusion (pe) has been reported in 62% of medical icu patients. there is little published data on the prevalence and clinical significance of pe in mechanically ventilated patients with traumatic brain injury. head injury patients admitted to icu for mechanical ventilation (mv) within 48-72 hours and with gcs > 5 were assessed for eligibility. presence of pe was assessed by serial cxr on a daily basis, and the volume of effusion was estimated and recorded. if there was no evidence of pe on cxr, a bedside sonography in the semi-recumbent position was done within 72 h of icu admission. pleural fluid volume was estimated based on a 4-point classification on sonography. details of mechanical ventilation and icu management were recorded. successful weaning was defined as the ability to breathe spontaneously for 48 h. the primary aim was to observe the prevalence of pe in mv head injured patients. as secondary measures, the impact of pe on duration of mv, weaning and length of icu stay were compared. the study enrolled 90 patients. three baseline cxrs showed pe. a total of 27 (30%) patients developed pe in icu. 9 patients had evidence of pe on both cxr and usg. 18 patients had only sonographic evidence of pe, which was not detected on cxr. significantly more minimal effusions were detected on sonography (10/27, p=0.03). duration of mechanical ventilation and duration of icu stay were significantly longer in patients with pe (p=0.009, mann-whitney rank sum test). there was no significant difference in duration of weaning in patients with and without effusion (2.9±2.2 vs 2.1±1.3, p=0.084). chest ultrasonography increased the detection rate of pleural fluid. patients with pe had a longer duration of mechanical ventilation.
early detection may be associated with a shorter period of mechanical ventilation and icu stay.

spine surgery can trigger a systemic inflammatory response syndrome and lead to hypotension requiring vasopressors. as sepsis is a major differential diagnosis in the post-operative period, the objective of this study is to understand the prevalence of a true systemic infection in this setting. we performed a retrospective review of all consecutive adults with post-operative shock requiring vasopressors following spine surgery in an academic tertiary medical center. a total of 43 patients, median age 69 years (iqr 57-74), were included in the final analysis. comorbidities included a median bmi of 28 (iqr 23-33), coronary artery disease (28%) and diabetes mellitus (16%). median estimated blood loss was 1200 cc (iqr 450 to 2150 cc). circulatory volume was adequately replaced in a total of 72% of patients within 6 hours post-op. all patients received crystalloids, and an additional 42% received multiple (>4) units of prbcs. adequate urine output was confirmed in 42 (98%) of the patients. the maximum median rate and duration of each vasopressor infusion were as follows: phenylephrine 100 mcg/min (iqr 60-150, n = 39), 21 hours (iqr 12-46); norepinephrine 40 mcg/min (iqr 20-95, n = 13), 16 hours (iqr 8-62); epinephrine 10 mcg/min (iqr 10-50, n = 3), 12 hours (iqr 8-16); and vasopressin 0.04 units/min, 23 hours (4-31, n = 3). of the 43 patients, 34 (79%) met at least 2 sirs criteria. infection was confirmed in a total of 10 patients: positive respiratory or blood cultures in 7 (16%) patients and positive urinalysis or urine culture in 5 (12%). two patients (5%) were diagnosed with myocardial infarction. no patients had pulmonary embolism. our study suggests that the risk of infection and sepsis in patients with persistent shock following spine surgery is small but not negligible. larger multicenter studies are needed to confirm our findings and to identify the predictive factors.
ischemic and hyperemic injuries may occur unnoticed after severe traumatic brain injury (tbi) and contribute to additional brain damage. maintaining an adequate cerebral perfusion is considered crucial in preventing such injuries, as deviations from autoregulation-guided optimal cerebral perfusion pressure (cppopt) are associated with greater mortality and disability. this makes reliable estimation of cppopt an interesting diagnostic and treatment tool for monitoring. cppopt is defined as the cerebral perfusion pressure (cpp) at which the pressure reactivity index (prx) is minimal. the leading method for estimating cppopt automatically, by aries et al. (2012), fits a parabola to pairs of prx and cpp data. the method uses preset heuristics to reject the fit as unreliable, namely when the parabola is too "shallow" or does not cover a certain cpp range. as a result, the 2012 cppopt estimates could be generated only about 50-60% of the time. moreover, the manually set heuristics potentially restrict the generality of the model. here, we propose an alternative method based on bayesian inference. treating prx at each time as a function of cpp corrupted by noise serves as a "forward model" that can be inverted to yield, for a given data set, a temporally evolving posterior probability distribution over cppopt. the mean of this distribution is a bayesian estimate of cppopt; we find that these estimates are generally consistent with those obtained from the classic method. importantly, the width of the distribution at a given time serves as a metric of uncertainty about cppopt estimation. we find that this uncertainty tends to be large at time points where the classic method with preset heuristics rejects the fitted parabola. our method makes manually setting rejection criteria unnecessary. bayesian estimation of cppopt holds promise as a tool for providing additional decision support in the care of individual tbi patients.
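the posterior over cppopt described above can be sketched with a simple grid-based inversion; the parabolic forward model, noise level, and all numeric values below are illustrative assumptions, not the authors' implementation:

```python
import math

def cppopt_posterior(cpp_values, prx_values, grid, a=0.002, b=-0.1, sigma=0.2):
    """grid posterior over cppopt, assuming a parabolic forward model
    prx = a*(cpp - cppopt)^2 + b with gaussian noise of sd sigma.
    all parameter values are illustrative, not from the abstract."""
    log_post = []
    for opt in grid:
        ll = 0.0
        for cpp, prx in zip(cpp_values, prx_values):
            pred = a * (cpp - opt) ** 2 + b
            ll += -0.5 * ((prx - pred) / sigma) ** 2   # gaussian log-likelihood
        log_post.append(ll)                            # flat prior over the grid
    m = max(log_post)
    post = [math.exp(lp - m) for lp in log_post]       # normalize stably
    z = sum(post)
    post = [p / z for p in post]
    mean = sum(g * p for g, p in zip(grid, post))
    var = sum((g - mean) ** 2 * p for g, p in zip(grid, post))
    return mean, math.sqrt(var)                        # estimate and its uncertainty

# synthetic prx/cpp pairs generated from a true cppopt of 75 mmhg
true_opt = 75.0
cpps = [60, 65, 70, 75, 80, 85, 90]
prxs = [0.002 * (c - true_opt) ** 2 - 0.1 for c in cpps]
grid = [g / 2.0 for g in range(120, 201)]              # candidate cppopt: 60-100 mmhg
est, sd = cppopt_posterior(cpps, prxs, grid)
```

the posterior standard deviation plays the role of the uncertainty metric in the abstract: when the data constrain the parabola poorly (a "shallow" fit), the posterior flattens and sd grows, instead of the fit being rejected outright by a heuristic.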
quantitative parameters derived from continuous eeg (ceeg) have been useful to understand the evolution of traumatic brain injury (tbi) and the impact on regional networks. these parameters are often interrogated at a global level rather than region-specific. the regional evaluation of quantitative eeg parameters may provide an objective assessment of regional network function, and be of predictive value for prognostication. continuous eeg was performed in 54 patients with tbi, and mri imaging was obtained during acute and chronic time points post injury (within 30 days and 6 months, respectively). the extended glasgow outcome scale (gose) assessed clinical recovery at 6 months, with good recovery defined as gose score 5-8 and poor as gose score 1-4. volumetric measurements of selected brain regions, both cortical and subcortical, were obtained at acute and chronic time points. quantitative parameters derived from ceeg, such as percent alpha variability (pav) and hemispheric symmetry, were calculated continuously and anatomically (frontal, temporal, occipital) through the acute hospitalization course. we hypothesized that persistent regional variation in alpha power post injury would lead to brain region-specific atrophy and may predict outcome at 6 months. acute pav within the first 48 hours post injury was poor in patients with poor outcome. in addition, patients with poor outcome had significantly more atrophy in the thalamus, hippocampus, and temporal and occipital lobes. asymmetry of the hemisphere pav values correlated with both brain atrophy and clinical outcome. regional asymmetry of pav within the first 48 hours post injury correlates with chronic brain atrophy and clinical outcome after tbi.

after moderate and severe traumatic brain injuries (tbis), individuals are often admitted to an intensive care unit (icu), and later require intensive rehabilitation.
many neuro-icus engage therapists and physiatrists for rehabilitation and therapy during a patient's icu admission. however, the optimal timing, intensity, and components of rehabilitation needed while in the icu are not known and practice patterns are highly variable. the goal of this study is to describe the rehabilitation practices to identify whether there is consensus on best practices. an electronic survey asking participants to describe tbi rehabilitation practices in their icu was distributed via redcap through the neurocritical care society (ncs) and american congress of rehabilitation medicine (acrm) websites. potential respondents were first asked if they cared for patients with tbi in the icu, and if they answered "yes," they were invited to complete the survey. two email reminders were sent to each group for completion. after 51 weeks, the data were extracted and analysis completed. there were 47 respondents who reported that they cared for patients with tbi in the icu (17 attending physicians, 6 advanced care practitioners, 15 therapists, 4 nurses, 4 fellows, and 1 other). of these, 96% recommended early rehabilitative care. the most common reasons to wait for the initiation of physical therapy and occupational therapy were normalization of intracranial pressure (icp) (86% and 89% respectively) and hemodynamic stability (66% and 69% respectively). speech therapy was typically recommended after extubation (65%) and normalization of icp (58%). the majority of clinicians caring for patients with tbi in the icu support early rehabilitation efforts, typically after a patient is extubated, intracranial pressure has normalized and the patient is hemodynamically stable. prospective studies evaluating the merits of these self-reported rehabilitation initiation criteria are warranted. 
high-dose methylprednisolone (hdmp) has been studied as a potential therapeutic option for acute sci, with mixed results regarding efficacy and consistent suggestion of complications. we conducted a retrospective cohort study of acute sci patients extracted from the medical information mart for intensive care iii (mimic-iii) database to evaluate the hypothesis that steroid-related adverse drug events (ades) occur less often than in published clinical trials using hdmp. three groups of patients coded for acute sci were identified from mimic-iii from june 2001 to october 2012: hdmp recipients per nascis ii/iii protocols (hdmp, n = 55), patients who received some steroids but not per nascis ii/iii protocols (non-hdmp, n = 47), and patients who did not receive steroids (no steroids, n = 198). demographics and data on complications of steroid therapy were extracted. one-way anova or student's t test was used to evaluate continuous variables; chi-squared or fisher's exact test was used for nominal or categorical variables. there were no differences in steroid-related ades between the three groups. there were higher average blood glucose readings in recipients of any steroids compared with the no steroids group, and more variation in blood glucose readings in hdmp recipients compared with the other two groups. icu los and ventilator time were higher in the hdmp group compared with the other two groups. compared with three other trials examining similar use of hdmp in acute sci, there were higher rates of pneumonia overall, though lower rates of urinary tract infections, skin & soft tissue infections, pressure ulcers, and superficial thromboemboli/thrombophlebitis. the results of this study are consistent with previous works related to the potential for harm regarding the use of hdmp or any steroids in acute sci. changes in selected adverse event profiles may be due to standardization of icu supportive care over time.
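the chi-squared vs fisher's exact choice described in the methods can be sketched as follows; this is a minimal stdlib-only sketch with hypothetical counts, and the expected-count-below-5 rule is a common convention rather than the authors' stated criterion:

```python
from math import comb

def fisher_exact_p(table):
    """two-sided fisher's exact test for a 2x2 table via hypergeometric
    enumeration (sums probabilities <= that of the observed table)."""
    (a, b), (c, d) = table
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def prob(x):  # p(top-left cell == x) under fixed margins
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

def choose_test(table):
    """pick the test the way the abstract describes: fisher's exact when
    any expected cell count is below 5, chi-squared otherwise (the
    chi-squared p-value itself is omitted; this sketch shows the choice)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    expected = [r * col / n for r in rows for col in cols]
    if min(expected) < 5:
        return "fisher", fisher_exact_p(table)
    return "chi-squared", None

# hypothetical counts only (e.g. a rare ade yes/no in two treatment groups)
test_used, p = choose_test([[2, 53], [10, 188]])
```

with a small expected count in one cell, the exact test is preferred because the chi-squared approximation becomes unreliable there.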
cervical spinal immobilization and clearance protocols are important steps in the minimization of secondary spinal cord injury. patients with primary neurologic diseases are frequently found down and placed in rigid cervical collars despite sustaining minimal-to-no cervical injury. in these patients, neurologic dysfunction can complicate and delay cervical clearance. decreasing time spent in cervical spinal immobilization could improve patient care by allowing greater access to and range of motion of the neck, increasing patient comfort, and decreasing skin breakdown. through retrospective chart review over a 28-month period, we collected the following: the rationale behind each mri, any mri evidence of cervical instability, the result of any ct imaging, and the basic mechanism of any trauma. for patients that were simply found down, any evidence of trauma either by history or physical exam was recorded. during the study period, there were 306 instances where an mri of the cervical spine was performed. of those mris, 176 (58%) were performed for cervical spinal clearance. sixty-one (35%) of mris were ordered without any ct imaging first. of the patients with a normal ct, six (3.4%) were found to have mri evidence of cervical instability. notably, of the 62 patients who were found down, there was only one instance where the mri demonstrated instability. that patient had extensive facial injuries suggestive of an unwitnessed fall. in the 40 patients that were found down with no history or evidence on physical exam of trauma, there was no mri instability. for patients that are found down without any history or evidence on physical exam of trauma, a ct of the cervical spine is likely sufficient for cervical spinal clearance.

acute subdural hematoma (asdh) represents a major clinical entity in severe traumatic brain injury (stbi); approximately 60% of cases are accompanied by various extents of asdh.
stbi has been reported to cause cerebral circulatory disturbances at an acute stage, and asdh has the worst circulatory disturbance among stbi types. in this study, we focused on the cerebral circulation of asdh, evaluated the absolute left-right difference between cerebral hemispheres and compared the cerebral circulation between the favorable outcome group and the unfavorable group. we retrospectively reviewed 31 patients with asdh. they were admitted to our hospital from 2002 to 2011. in these patients, we simultaneously performed xenon-computed tomography (xe-ct) and perfusion ct to evaluate the cerebral circulation on post-injury days 1-3. we measured cbf using xe-ct and mean transit time (mtt) using perfusion ct, and calculated the cerebral blood volume (cbv). a significant absolute difference in cerebral circulation between the hemispheres among different types of tbi was observed in mtt. there was no significant difference in these parameters between left-right hemispheres with asdh among the favorable outcome group and unfavorable group. although there was no significant difference in age, gcs at the onset of treatment, cbf and cbv, there was a significant difference only in mtt between the favorable outcome group and unfavorable group. the circulatory disturbance in patients with asdh occurs diffusely despite the focal injury. additionally, in unfavorable patients, the circulatory disturbance is worse than in favorable patients. because patients with asdh suffer ischemia more than those with other types of stbi, we had to perform not only removal of the occupying lesions, but also neurointensive care, including whole-body management and hypothermia therapy for the ischemic brain after surgery. we have to adopt a treatment strategy appropriate to the pathophysiology of the different tbi types.

kcentra is a 4-factor prothrombin complex concentrate that is fda approved for reversal of warfarin.
there is limited research describing the use of kcentra for coagulopathy in the setting of traumatic intracranial hemorrhage. here, we present the largest retrospective review to date of the use of kcentra in the setting of traumatic intracranial hemorrhage. retrospective chart review was performed from 2013-2016 for patients with intracranial hemorrhage who presented to the r adams cowley shock trauma center. patients who received kcentra were identified. basic clinical information was obtained, including cardiac/stroke history, blood pressure, glasgow coma score, medication history, and categorization of hemorrhage. pre- and post-dose inr levels were assessed. hemorrhagic expansion was assessed with ct scan up to 24 hours. disposition and thromboembolic events were recorded. forty-four patients were identified as receiving kcentra in the setting of traumatic intracranial hemorrhage. pre- and post-kcentra dosing inr was found to be significantly different (p<0.001) across the two groups assessed (warfarin and tbi/noac coagulopathy). seventeen patients (38.6%) had hemorrhagic expansion as determined on ct scan. disposition (home vs rehab vs death) was found to have three significant variables: history of stroke, hemorrhagic expansion, and admission glasgow coma score. eight patients (18.2%) were found to have thromboembolic events. this is the largest retrospective review describing the clinical use of kcentra for coagulopathy reversal in the setting of intracranial hemorrhage. overall, kcentra is shown to be a safe and effective drug for the reversal of inr. importantly, our reported hemorrhagic expansion rate of 38.6% is lower than established rates reported in the literature for warfarin/coagulopathic patients with intracerebral hemorrhage (50-60%). the prognostic importance of hemorrhagic expansion was highlighted in the disposition analysis, which showed that zero patients were discharged home if there was recorded expansion.
despite the impact of post-traumatic amnesia (pta) duration on long-term functional outcome after traumatic brain injury (tbi), radiologic predictors of pta duration are lacking. we hypothesized that the number of traumatic microbleeds (tmbs) detected by gradient recalled echo (gre) magnetic resonance imaging (mri) in neuroanatomic regions that mediate memory correlates more strongly with pta duration than does the number of global tmbs. using a prospective outcome database of patients treated for mild-to-severe tbi at an inpatient rehabilitation hospital, we retrospectively identified 65 patients who underwent acute mri with gre. pta duration was determined by the galveston orientation and amnesia test, orientation log, or chart review. a rater blinded to pta duration identified tmbs on the gre datasets globally and in neuroanatomic regions that mediate memory, including the hippocampus, fornix, corpus callosum, thalamus, and the temporal lobe. associations between global and regional tmbs (in the 5 mentioned locations) and pta duration were tested using spearman rank correlation coefficients. the cohort was comprised of 25% (…). hippocampus and corpus callosum tmbs are associated with pta duration, and thus may have greater utility for predicting functional outcomes than global tmb number. validation of these findings in larger prospective studies is indicated.

using a large two-center cohort of 413 penetrating traumatic brain injury (ptbi) patients, we previously developed the survival after acute civilian penetrating brain injuries (spin) score, a logistic regression-based parsimonious risk stratification scale for estimating survival after civilian ptbi. the objective of the present study was to externally validate the spin score. our multicenter validation cohort comprised 362 ptbi patients retrospectively identified from three u.s. level-1 trauma center registries.
the spin score variables (motor gcs [mgcs], sex, pupillary reactivity, self-inflicted ptbi, transfer status, injury severity score [iss] and inr) were collected from the trauma registries, supplemented by chart review. using the spin score multivariable logistic regression model from the original study, receiver-operating-characteristic area-under-the-curve (roc-auc) analysis and hosmer-lemeshow goodness-of-fit testing were performed. the mean age was 32±15 years, and patients were predominantly male (85%), with 41% white and 28% black. in-hospital mortality was 52%, and 6-month mortality of discharge survivors was 3. in this multicenter external validation study, the full spin model predicts in-hospital survival after ptbi with excellent discrimination and calibration. after removing inr from the model, discrimination remained excellent, but model calibration diminished. the full spin score may provide important information to guide families and physicians after civilian ptbi.

limited data have described alterations in vancomycin pharmacokinetic (pk) parameters in traumatic brain injury (tbi) patients that have resulted in sub-therapeutic concentrations. the primary objective of this study is to evaluate the pk parameters of vancomycin in tbi patients to determine whether the common clinical practice of capping creatinine clearance (crcl) at 120 ml/min when determining dosing impacts achievement of therapeutic concentrations. this was a single-center, retrospective study of patients at least 18 years of age with tbi who received vancomycin and had at least one reported steady-state vancomycin serum level from april 2014 to december 2015. predicted pk parameters based on population data using actual and capped crcl at 120 ml/min were compared with calculated pk parameters based on serum trough concentrations at steady state. the difference was assessed using a two-sample wilcoxon rank-sum test, where p < 0.05 was considered statistically significant.
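the effect of capping crcl at 120 ml/min on predicted dosing can be illustrated with a cockcroft-gault estimate; the linear vancomycin-clearance relationship below is an assumed population-pk approximation for illustration, not the study's model, and the patient values are hypothetical:

```python
def cockcroft_gault(age_yr, weight_kg, scr_mg_dl, female=False):
    """cockcroft-gault creatinine clearance estimate (ml/min)."""
    crcl = (140 - age_yr) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def vanc_clearance(crcl, cap=None):
    """population vancomycin clearance from crcl. the linear relationship
    used here (cl_vanc ~ 0.75 * crcl, ml/min) is an illustrative
    population-pk assumption, not the study's model."""
    if cap is not None:
        crcl = min(crcl, cap)
    return 0.75 * crcl

# a hypothetical young tbi patient with augmented renal clearance
crcl = cockcroft_gault(age_yr=25, weight_kg=80, scr_mg_dl=0.6)
cl_actual = vanc_clearance(crcl)            # uses the actual crcl
cl_capped = vanc_clearance(crcl, cap=120)   # common practice: cap at 120 ml/min
```

in such a patient the capped estimate substantially under-predicts drug clearance, which is the mechanism by which capping leads to sub-therapeutic troughs in the population the abstract describes.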
when using actual crcl (median 167 ml/min), patients with tbi experienced crcl that was greater than predicted. based on the results of this study, actual crcl is more accurate at predicting vancomycin pk than the common practice of capping crcl at 120 ml/min. therefore, actual crcl should be used when determining vancomycin dosing regimens in patients with tbi to achieve desired therapeutic concentrations.

neurocritical care is traditionally provided within institutions in urban centers, while access in rural communities has been limited. transport to urban centers is not always favorable for a variety of reasons, including critical patient condition, family wishes, weather, and geography. our hypothesis is that tele-neurocritical care (tele-ncc) can extend access to this service with meaningful impact on icu outcomes. a tele-ncc pilot study was initiated within intermountain healthcare. starting 5/1/17, the study included all ischemic stroke patients admitted to the icu of one primary stroke center in utah. tele-ncc consultations were provided by ncc physicians at our flagship hospital, located three hundred miles from the spoke site. tele-ncc consultations occurred via an existing telehealth platform developed in-house. primary outcomes for this pilot study were icu and hospital lengths of stay (los). secondary outcomes include stroke complication rates and results on a provider satisfaction questionnaire. to date, 12 tele-ncc consultations have been performed, with median hospital los = 3 days (iqr 2.8-4.3) and icu los = 1.6 days (iqr 1-2). in the 12 months prior to the pilot, there were 109 admissions to the icu for ischemic stroke, with median los = 4 days (iqr 2.8-5.8) and icu los = 1.8 (iqr 1-2.6). for this small sample size, the p-values for comparison of hospital and icu lengths of stay before and after tele-ncc are 0.39 and 0.78, respectively.
tele-ncc care can have a significant impact on icu outcomes by expanding access to critical support from neurocritical care specialists. tele-ncc expands access not only to consultation on critical neurological emergencies, but also on when to de-escalate from the icu or in end-of-life discussions with which general icu teams may not be comfortable. these impacts could be measured as important decreases in hospital and icu los.

hospital readmissions increase health care costs, increase patient exposure to nosocomial disease, and imply patients were not stable for discharge. because readmissions are a target for hospitals and payers, several centers have developed predictive readmission scores in order to identify high-risk patients. we contend that these general readmission scores are not suitable for neurocritically ill patients and that a specific predictive score must be developed to identify high-risk patients. we conducted a retrospective chart review of 340 consecutive patients admitted to our neuroscience critical care unit. we recorded the readmission scores, reason for admission, length of stay, and whether they were readmitted. we then compared the median readmission scores between the two groups. after removing patients without readmission scores or who died at the end of the original admission, we analyzed the records of 279 patients. patients were more likely to be readmitted if they were initially emergently hospitalized or had malignancy. readmitted patients had a longer original hospital length of stay. we found no difference in median readmission score between those who were readmitted and those who were not. most readmitted patients (65.8%) had an original "low-risk" readmission score. we found that our center's score was poor at predicting readmission for neurocritical care patients and that several components of the score do not apply to our patient population.
we propose that, to accurately predict readmission, centers should create their own unique readmission scores for more homogeneous admission populations.

clinical evaluation of the level of consciousness in non-communicative patients can be very challenging. in this study, we aimed to evaluate nurses' and nursing assistants' (nas) perception of consciousness in patients suffering from disorders of consciousness (doc). through their activities, nurses and nas have an extended observation time of patients' behavior, and make repeated implicit assessments of patients' clinical state of consciousness. we hypothesized that even in the absence of a structured and explicit evaluation of consciousness (in contrast, for instance, with the coma recovery scale-revised, crs-r), nursing expertise could be a valuable measure to improve assessment of the state of consciousness in doc patients. this was a prospective observational single-center study. our primary objective was to correlate nurses' and nas' assessment of doc patients' consciousness, quantified through an analogic visual scale (the "doc-feeling score"), with the results of the standard methods (including crs-r, fmri, electrophysiology). the secondary objective was to identify elements which correlate with this assessment and/or with the expert's diagnosis (such as visual pursuit, patient's participation in nursing, motor responses to verbal command or adapted reactions to painful care). linear regression revealed a good correlation between the "doc-feeling score" and the crs-r gold standard (r2 = 0.37, p-value < 0.0002, figure 2). global assessment of the level of consciousness by all the caregivers interacting with the patient using the "doc-feeling score" is reliable and can improve assessment of the state of consciousness in doc patients.

investigating causes of deterioration in neurological patients is important to anticipate these complications and improve outcomes.
this is a prospective observational study performed at an academic tertiary care trauma, stroke and neurorehabilitation center. data were collected over a year from rapid response system activations (rrsa). in one year, our center had 5388 admissions. 169 rrsa were performed on 140 patients. the most common admission diagnosis was ischemic stroke (n=45, 32%). the most common rrsa organ system involvement was the respiratory system (n=59, 34.8%). the only predictors of death or new limitation of care in those patients who had rrsa were age (67 years vs 58 years, p<0.01) and history of cancer (8 [26%] vs 11 [19%], p=0.02). 62.1% (n=105) of rrsa happened during the day shift and 37.9% (n=64) during the night shift. 20.7% (n=35) of rrsa happened around shift change and were more likely to result in an unplanned icu admission. 37.7% (n=63) of rrsa happened within 24 hours of admission and were more likely to result in unplanned icu admissions. the most common reasons for in-hospital decompensation in neurological inpatients are non-neurological. the most common organ system involvement responsible for rrsa is the respiratory system. the only predictors of death or new limitation of care were history of cancer and age 67 and older. rrsa were more frequent during the day shift. however, there was no difference in the outcomes we evaluated between day and night shifts. rrsa happening around shift change were more likely to result in unplanned icu admission. rrsa within 24 hours of admission showed an increased risk of unplanned icu admission when compared to rrsa happening after 24 hours of admission.

neurocritical care is a growing field with an increasing number of dedicated neuroscience intensive care units. in this dynamic context, it is unclear which types of physicians provide neurocritical care across the united states. we performed a retrospective cohort study using claims data from a nationally representative 5% sample, within which we analyzed critical care procedures.
the primary outcome was physician specialty, defined by medicare provider specialty codes. in a sensitivity analysis, we excluded claims for services on the day of admission and claims associated with a diagnosis of cardiac arrest, since these activities may often occur outside of neuroscience intensive care units. we identified that, between 2009 and 2014, neurologists were responsible for approximately one-quarter of neurocritical care services among a nationally representative cohort of elderly patients.

critically-ill patients on mechanical ventilation (mv) cannot verbally communicate. research suggests several phenomena occur in patients during mv because of impaired communication, including anxiety, loss of control, loneliness, and compromised interaction (schou and egerod, 2008). for neurocritical care patients, this can be especially profound when coupled with cognitive and motor/sensory deficits. currently, the blom® speaking valve (sv) is the only approved product available that allows phonation in ventilator-dependent patients with tracheostomy. sv trials are known to facilitate vent-weaning. the current standard of care (passy-muir speaking valve, 30-minute trials) is contraindicated in patients who cannot tolerate cuff deflation. as such, the blom® sv was evaluated for clinical and quality efficacy. we retrospectively evaluated clinical outcomes associated with the blom® sv on mv during a trial in the neuroicu of a large tertiary center between 6/1/2014 and 5/31/2016. baseline demographics, diagnoses, physiologic, sedation, delirium, mobility and swallowing parameters, length of stay, ventilator modes and settings, ventilator days, work of breathing and presence of pneumonia were abstracted, along with patient, family and interdisciplinary staff satisfaction survey results. 28 patients were recommended for blom® tracheostomy. 22 patients received sv trials.
of the 185 trials performed, 28% (52) were optimal/completed (30+ minutes); 36% (68) were suboptimal/completed trials (0-29 minutes); 14% (27) could not be completed. satisfaction results from patients/families were positive compared to the interdisciplinary team survey results. the remaining parameters are currently in analysis with results pending, to be completed by the end of august 2017. impaired communication during mv is suboptimal for neurocritical care patients. our clinical experiences with the blom® sv showed positive and negative outcomes. positive benefits were enhanced patient/family engagement and family satisfaction. unanticipated obstacles included a significant increase in patient fatigue during sv trials, often delaying ventilator weaning. further study is needed to determine efficacy in this population.

patients with clinical signs of cerebral herniation require immediate intervention known as a "brain code". in our neurosciences critical care unit (nccu), a rapid response program is in place to ensure the safe use of 23.4% hypertonic saline (a high-risk medication) and to expedite its delivery to the bedside given the emergent need for this medication when ordered. our institution, however, lacks a more holistic and structured approach to cerebral herniation syndrome that includes components of tiers zero to three of the emergency neurological life support elevated icp or herniation protocol. the neurocritical care communication council consists of bedside staff nurses, nursing leadership, advanced practice providers (nurse practitioners and clinical nurse specialists), pharmacists, respiratory therapists and physicians. the council identified processes during neurological brain codes that could be improved as a result of using a bedside debriefing tool. the unit leadership council of the nccu reviewed literature on hospital debriefing tools and referenced this organization's current resuscitation debriefing tool.
from these sources, a brain code debriefing form was constructed. a clinical tool was developed with the expectation of standardizing the brain code process in this nccu. the brain code debriefing form will be piloted to determine unit- and system-wide value. pre- and post-implementation data will be collected to discover areas of improvement for optimized patient care. through the development of this debriefing tool, it was ascertained that a clinical practice guideline for impending cerebral herniation would be beneficial to further guide and direct evidence-based care. thus, a preliminary algorithm for identification and emergency treatment is in process.

the americas medical center is a 212-bed tertiary hospital complex, located in the city of rio de janeiro. the center was elected by the international and the brazilian olympic committees as the referral hospital for the olympic family (of), comprised of athletes and their crews, support and technical personnel, credentialed media and credentialed governmental representatives from the participating countries. the neurology and neurocritical care teams were selected to head a comprehensive program of acute emergency neurology, including a 10-bed neurocritical care unit (nicu) and a 24-7 emergency neurology service. we hereby describe our experience during the 2016 olympic games in rio de janeiro, brazil. 63 neurological assessments were conducted in patients from the of, 22 of these involving athletes from 19 countries. the most common reason among athletes was traumatic brain injury (tbi), with 6 polytraumas (all involving cycling), 7 mild tbi (3 in boxing, 2 in field hockey, 1 in rowing and 1 in cycling) and 2 moderate tbi (cycling and water polo). three patients were admitted to the nicu: 2 ischemic strokes and 3 polytraumas with tbi. motor vehicle accidents with associated tbi involving the of were surprisingly frequent, with 10 assessments, none requiring admission.
finally, 39 ct scans of the head, 22 ct scans of the cervical spine and 14 mri scans (3 brain and 11 spine mri) were performed to assess the patients. of note, cases of seizures, functional deficits, multiple sclerosis flare and psychiatric complaints were observed affecting the of. not only were multiple sports-related injuries observed, but cases of diverse acute neurological issues were also reported involving members of the of. olympic games are complex events mobilizing thousands of people, and a comprehensive and detailed plan for neurological emergencies is of extreme importance. the term "handoff" has been defined as the "transfer and acceptance of responsibility for patient care that is achieved through effective communication, passing patient-specific information from one caregiver or team of caregivers to another to ensure the continuity and safety of the patient's care" (patterson and wears, 2010). the joint commission reported that two-thirds of sentinel events occur at the time of patient handoff, which led to a national patient safety goal requiring a standardized process for handoffs (the joint commission on accreditation of healthcare organizations, 2006). to support this npsg, an nccu-specific handoff tool and timeout process were created to support the transition from or to nccu. nccu postoperative handoffs were identified as an area in which to enhance staff satisfaction and patient safety. baseline data collection to evaluate the frequency of neurosurgery report was performed in may 2017. 
using a qualtrics survey in june 2017, staff satisfaction with the current ns or report was obtained from nccu rns, nps, and fellows, evaluating whether they felt they received: accurate medical history, accurate information about the performed procedure, sufficient handoff for patient care, specific patient goals, recent pharmacological interventions, anticipated concerns regarding diagnosis/procedure, estimated blood loss, blood/fluid intake, airway concerns, complications and overall satisfaction with handoff. a taskforce of rns, nps, neurointensivists, and neurosurgeons was established, culminating in the creation of a handoff tool and timeout process. the new tool and process were initiated, and two months later a repeat survey was sent to evaluate staff satisfaction and perceived effectiveness of the new process and handoff tool. results were being tabulated at the time of submission. using standardized, open communication techniques, including handoff tools and a timeout throughout the perioperative period, is crucial to positive outcomes and can improve perioperative care of the nccu patient and increase satisfaction and collaboration of all team members during or handoff. in the age of healthcare reform and rising costs, it is important for strategic service lines to explore cost-saving and care-efficiency strategies. beginning in september 2016, physician and administrative leadership within the duke neurosciences intensive care unit (neuroicu) began investigating per-patient cost to explore opportunities to decrease direct cost to the neuroicu and cost to the patient, and to reduce redundancy of care. 
with assistance from health system finance, the team assessed the following data points within each cost group and compared these values to those of our peers within the us news and world report top honor roll: number of units, direct cost per unit, and total direct cost. our performance relative to our peers in each cost area was: pharmacy, ranked 8th out of 15; laboratories, ranked 12th out of 15; radiology, ranked 14th out of 15; and cardiovascular, ranked 14th out of 15. based on these performance metrics, neuroscience administrative and medical leadership developed a project grid of prospective initiatives and identified, for each cost area: stakeholder-led teams inclusive of providers, nursing, and administration; the duration or impact of each initiative (short, medium, or long); and activity phases based on duration. the stakeholder-led groups would propose and validate projects based on scope and duration. at each group's meeting, members reviewed charge-level financial data by activity code for the group's respective cost area to develop applicable initiatives. multiple initiatives are currently underway, including those within the cost areas of pharmacy, laboratories, and radiology. included among these initiatives is a change in routine resistant-organism screening and cervical spine clearance. other initiatives will target intravenous anti-hypertensive treatment and laboratory frequency. the total cost savings from these initiatives can only be estimated at this point but will likely exceed $50,000 for the calendar year. it is uncertain whether dedicated neurocritical care units are associated with improved outcomes for critically ill neurologically injured patients in the era of collaborative protocol-driven care. we examined the association between dedicated neurocritical care units and mortality, and the effects of standardized management protocols for severe traumatic brain injury. 
we surveyed trauma medical directors from centers participating in the american college of surgeons trauma quality improvement program (tqip) to obtain information about icu structure and processes of care. survey data were then linked to the tqip registry, and random-intercept hierarchical multivariable modeling was used to evaluate the association between dedicated neurocritical care (ncc) units, the presence of standardized management protocols, and mortality. we performed three sensitivity analyses reclassifying ncc units: restricting to closed units, restricting to units under ucns director leadership, and excluding neurotrauma units. data were analyzed from 9,773 adult patients with isolated severe traumatic brain injury admitted to 134 tqip centers between 2011 and 2013. fifty icus were dedicated neurocritical care units, whereas 84 were general icus. rates of standardized management protocols were similar between dedicated neurocritical care units and general icus. care in a dedicated neurocritical care unit was not associated with improved risk-adjusted in-hospital survival (or 0.97; 95% ci 0.80-1.19; p=0.79). the results from the model were robust in our sensitivity analyses. the presence of a standardized management protocol for these patients was associated with lower risk-adjusted in-hospital mortality (or 0.77; 95% ci 0.63-0.93; p=0.009). compared to dedicated ncc models, standardized management protocols for patients with severe traumatic brain injury are low-cost, process-targeted intervention strategies that may improve clinical outcomes. understanding the differences in processes of care within the context of icu structure is necessary to better understand the mortality differences observed between centers, and may help in the design of future trials for severe tbi patients. complex multidisciplinary care of patients in the neurocritical care unit requires reliable and effective communication to minimize medical errors. 
we implemented a structured rounding process that incorporates the ahrq-endorsed team strategies and tools to enhance performance and patient safety (teamstepps) to improve interprofessional collaboration between team members. we convened a project team of physicians, advanced practice providers (apps), nurses, respiratory therapists, and pharmacists in a 16-bed nicu. we defined structured rounding processes and implemented teamstepps strategies to promote closed-loop communication between team members during daily rounds. the assessment of interprofessional team collaboration scale (aitcs-ii) was administered to team members at baseline and 16 months post-intervention. impact on overall team collaboration and the subscale domains of team partnership, cooperation and coordination was assessed. the possible range of the overall collaboration score is 23 to 115; higher scores indicate better collaboration. the survey was completed by 61 (85%) staff at baseline and 54 (77%) staff post-intervention. overall team collaboration scores improved significantly from pre- to post-intervention (78.0 ± 18 vs 89.7 ± 15, p < .001), as did subdomain scores of team partnership (27.9 ± 6.9 vs 31.9 ± 5.2, p < .001), cooperation (28.1 ± 6.3 vs 32.1 ± 5.7), and coordination (22.1 ± 6.6 vs 25.7 ± 5.7, p < .001). perceived shared understanding of patient daily goals between nurses and providers (physicians/apps) increased from 58% to 91% (chi-square 15.7; p < .001). 68% of staff reported that the intervention shortened or did not affect the duration of rounds. of those who reported a longer duration of rounds, 100% responded that the intervention was worthwhile. interprofessional team collaboration can be enhanced by structured rounding and communication workflows. by promoting closed-loop communication during daily rounds, shared understanding of patient daily goals between team members is increased, which may optimize the quality and safety of patient care. 
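proportion comparisons like the shared-understanding change above (58% pre vs 91% post, chi-square 15.7) are typically tested with a pearson chi-square on a 2x2 table. a minimal stdlib sketch follows; the cell counts are hypothetical, chosen only to approximate the reported proportions, since the abstract does not report the underlying counts.

```python
# pearson chi-square for a 2x2 table, pure python stdlib.
# counts below are HYPOTHETICAL illustrations, not data from the abstract.

def chi_square_2x2(table):
    """table = [[a, b], [c, d]]; returns the pearson chi-square statistic."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n          # expected count under independence
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# hypothetical counts: 35/61 agreed at baseline (~57%), 49/54 post (~91%)
stat = chi_square_2x2([[35, 26], [49, 5]])
print(round(stat, 1))  # prints 16.2 for these illustrative counts
```

with these made-up counts the statistic lands near the reported 15.7, which is only meant to show the shape of the calculation; a real analysis would use the actual survey counts (and typically a continuity correction or exact test for small cells).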
advanced practice providers (apps) are increasingly utilized to provide clinical care within neurocritical care units (nsicu). despite the complex issues in this patient population, the specific educational and orientation needs of these providers have not been established. to meet the demands of rapidly and effectively training apps to provide advanced neurocritical care (ncc), a structured educational curriculum was developed and integrated within the standard orientation and on-boarding process for newly-hired apps within our nsicu. this curriculum was designed with measurable learning goals, objective assessments of goal achievement, and opportunities for additional education and remediation at multiple steps within the program. the curriculum has three phases with distinct goals and assessments. phase i covers basic triage and resuscitation issues for the acutely-decompensating patient. phase ii covers general critical care principles in significantly greater depth. phase iii provides detailed experience and exposure to specific ncc issues. each phase incorporates relevant reading assignments with a tailored study guide, as well as a multiple-choice post-test to demonstrate knowledge acquisition. phases ii and iii also include an oral exam incorporating hypothetical patient scenarios to allow the app to demonstrate comprehension and application of the goals for each phase. each phase lasts approximately 4 to 8 weeks, with the expectation that the entire orientation curriculum will be completed within six months of employment. in addition to the educational curriculum, phases i and ii include working alongside a more senior app preceptor and providing bedside care for a progressively increasing number of patients. apps not meeting minimum established standards on any aspect of the curriculum are provided additional remediation and instruction by the preceptor and ncc faculty based on an individualized learning plan. 
a standardized educational curriculum provides a structured learning environment for new apps in the field of neurocritical care. reimbursement changes from the centers for medicare and medicaid services and value-based purchasing systems have made quality improvement linked to clinical outcomes more crucial than ever. in one neuroscience icu, providers and nurses collaborate to address key infection parameters that impact patient outcomes. quality metric data in one neuroscience icu were collected over a period of 3 fiscal years. outcome measures, consisting of glycemic and temperature control and ventilator weaning strategies, were obtained after certain parameters were enforced over two years and then compared to the initial year. urinary catheter utilization decreased by over 10%, with catheter-associated urinary tract infections decreasing by 80% in 3 years (p-value <0.0001). central line utilization decreased by 5%, with a 60% decrease in central line-associated bloodstream infections in 3 years (p-value 0.45). new ventilator weaning strategies were put into place utilizing adaptive support ventilation mode, which decreased total ventilator days by 1000 days/year. successful weaning and extubations resulted in no recorded ventilator-associated pneumonia in the last 3 years. this neuroscience icu maintains glucose below 180 mg/dl more than 85% of the time. regarding temperature control, a normothermia protocol was implemented that utilizes aggressive temperature control coupled with bromocriptine administration. as a result, 95% of patients had a temperature less than 38°c. the quality initiatives that have been implemented have improved the observed/expected mortality ratio by 4.4%. this study shows that by optimizing infection control, temperature management, glycemic control and ventilation strategies, there is an overall positive impact on patient morbidity and mortality. 
as evidenced by these results, this institution is now a top performer when compared in a national clinical database. this presentation will share pragmatic strategies to create a culture of quality improvement in any neurocritical care unit or patient care organization. health care records are not universally accessible at the point of care delivery. in developing countries like thailand, a large proportion of health care records are still paper-based. patients may not be able to convey relevant information about their own medical problems and medications during patient-physician encounters or in the event of an emergency. our purpose was to create a simple platform for recording relevant basic healthcare information through a system that can be securely accessed even in countries like thailand. our vision is to improve healthcare communications and leverage social media in thailand and other developing countries, particularly for patients with lower levels of education or socioeconomic status. we created a cloud-based personal healthcare information platform, 'meid', that uses a qr code scanned from wristbands and other products like stickers to access patient information. conventional methods require a treating team to request medical records from a patient's prior hospital visits, including visits at different medical facilities. time lost during this process can potentially delay treatment decisions. we also aim to improve health literacy in thailand. the application, named 'meidth', is available in both the apple store and google play. we launched meid in thailand in april of 2017. we have more than 1,000 active users and have sold more than 3,000 wristbands. the meid thailand facebook page has received 20,367 likes. at least two patients have already benefited from this product: one of these patients received intravenous tissue plasminogen activator and had a good outcome. timely access to his past medical history and medications via meid was key in this case. 
our cloud-based personal healthcare information platform using qr codes on wristbands and stickers can help increase health literacy, decrease time to appropriate treatment, improve patient safety and decrease healthcare costs. clinical pharmacists have become an integral part of multidisciplinary medical teams. expanding the role of pharmacists in neurocritical care units has the potential to positively impact the quality of patient care and provide cost savings. this study examines these potential benefits at one neurocritical care unit. we reviewed patient medication profiles and held formal rounds with a pharmacist four times per week. for the purposes of this study, the focus was on minimization of a select number of high-expense drugs. nine months of baseline data were compared to three months of post-intervention data. interventions were performed at the time of rounding and involved timely conversion to enteral formulas, changes to alternative medications or discontinuation of medications. we then performed a cost-benefit analysis to assess the net amount of money saved by reducing inappropriate pharmacy drug use following the interventions. the average cost of nicardipine was $34,122 pre-intervention, compared to $23,871 post-intervention (p-value 0.0229). the cost of iv levetiracetam usage on average was $4,719 pre-intervention and $3,612 post-intervention (p-value 0.102), while the cost of iv dexmedetomidine was $2,355 pre-intervention compared to $1,800 post-intervention (p-value 0.5226). average expense per month was reduced by approximately $11,900 compared to the average expense per month from the previous 9 months (p-value 0.0128). appropriate use of stress ulcer prophylaxis was also positively impacted; patient days/month on famotidine were reduced by approximately 30% from baseline, 222 patient days pre-intervention vs 150 days post-intervention. 
pharmacist interventions within a neurocritical care unit are known to be clinically beneficial for patients; this study shows that their interventions also offer substantial cost benefits, which should justify creating collaborations between pharmacists and neurointensivists. multidisciplinary rounds have been shown to improve patient outcomes. the objective of this study is to observe the effect on patient care, team dynamics, and nursing satisfaction before and after the implementation of a nursing-led rounding model in the neurological icu. prior to the implementation of the nursing-led rounds quality initiative, nurses in the neurointensive care unit (nicu) were asked to answer a brief survey on basic demographics and perceptions of team dynamics and satisfaction in the nicu. a multidisciplinary systems-based rounding sheet, inclusive of the abcdef bundle and the previous nicu checklist, was created and revised with extensive bedside and senior nursing educator input. while rounding, nurses presented, and clinicians formulated an assessment and plan in real time, relaying these to the nurse and other team members. any questions, educational pearls or concerns from the clinician team or the bedside nurse were encouraged during these rounds. nurses completed a 6-month post-implementation survey. 42 of the full-time nicu nurses (71%) responded to both the pre-implementation and post-implementation surveys. a bimodal distribution of nursing experience was noted, with 41% new nurses (<1 year) and 31% experienced nurses (5+ years). more than half of the nurses (68%) reported working both night and day shifts as opposed to being exclusive to only day or night. there was an increase in nursing perceptions of participation during rounds as well as education during rounds. nurses felt significantly more involved with patient decision making and felt that they were able to give input into the patient's care. 
the implementation of a nursing-led rounding structure may be beneficial to communication, education and overall patient care. as the project continues, we hope to further examine common icu objective measures as well as other subjective measures such as patient satisfaction scores and communication perceptions. with increased elective and non-elective volume, directing the flow of admissions has become essential to the efficient operation of inpatient strategic service lines. this is especially true in the neurosciences where widespread acute ischemic stroke intervention has placed an especially high demand on comprehensive stroke centers. as a result, an important collaboration was formed at duke between the health system, transfer center and neurosciences to create an algorithm-driven multi-hospital triage and pre-hospital care system called phast (pre-hospital acute services team). in this abstract, we present the formation and current state of this service. this effort was formally begun in the spring of 2015 with an initial focus on centralizing the admission process into the duke neurosciences intensive care unit (neuroicu) by an icu physician. after some initial success, it was clear that the service line would benefit from a more formalized process. as a result, a successful multidisciplinary collaboration with a core group of physicians and administrators was formed to develop algorithms and to overcome multiple administrative and legal hurdles. over a period of 24 months, multiple algorithms were developed to systematize neuroscience admissions including acute ischemic stroke and vascular and non-vascular neuroscience emergencies. in an effort to decrease door-to-intervention times as well as effectively mitigate the impact of limited bed-space availability, this system now serves 3 hospitals including 2 with acute neurointerventional services and the 3rd with a burgeoning neurohospitalist program and incorporated rehabilitation services. 
in addition to systematizing the transfer and admission process, quality assurance, improvement, and educational processes are in place. the current state of phast is that of a young but maturing and now essential service for duke neurosciences. extubation involves removal of an endotracheal tube (ett) and is a common intensive care unit (icu) procedure. extubation failure occurs in 8-20% of icu patients and can be difficult to predict accurately. we hypothesized that a multivariate re-intubation scale calculation (risc) model could predict extubation failure better than a single variable like the rapid shallow breathing index (rsbi). after irb approval, we conducted a retrospective review of data on mechanically ventilated icu patients above 18 years of age who were not receiving mechanical ventilation through a tracheostomy tube from january 1, 2006, through december 31, 2015 at mayo clinic rochester. various data points were gathered on these patients via electronic medical record search, and reintubations within 72 hours of extubation were identified. univariate and multivariate logistic regression models were used to predict reintubation after extubation and construct a risc estimate. we included a total of 6161 patients, who were randomly divided into a derivation set (n=3080) and a validation set (n=3081). in the derivation set, patients had a mean age of 62±17 years, and 59% were men. three hundred and ninety-three extubation failures occurred within 72 hours. predictors of extubation failure in the final multivariable model included underweight status, gcs score>=10, mean airway pressure at 1 minute=1500ml and total mechanical ventilation days>=5. the risc score was calculated using the validation set and ranged from 0 to 8. the logistic model shows that, as risc increased by 1, the odds of extubation failure were 1.6-fold higher (c-index=0.72). roc analysis shows that the best cut-off for risc was >=4 vs. 
<4, which demonstrated a sensitivity of 0.80, a specificity of 0.54 and an auc of 0.67. the current risc model warrants further exploration in a prospective study to help critical care providers decide when extubation can be performed more safely. this report presents the results of the 2nd nationwide survey concerning neurocritical care units (ncus) in china. this was an observational cross-sectional survey using close-ended self-reported questions. the questionnaire was sent to 31 provinces, autonomous regions and municipalities across china from october 1st, 2015 to january 1st, 2016. basic information, equipment and device information, and staffing and organization information were investigated. in total, 101 questionnaires from 101 ncus at 92 hospitals in 28 regions were received. most of the hospitals with ncus were large-scale (average hospital beds: 2150), teaching (84.8%), and tertiary hospitals (97.8%). the average number of ncu beds was 14, occupying 11.2% of the total number of beds in their department. most of the equipment and devices (37/50) were available in over 80% of the 101 ncus. however, some devices were centralized by the hospital and operated with assistance from other departments. a total of 1250 full-time doctors and 1978 full-time nurses were employed at the ncus. only a minority of the ncus achieved a doctor-to-bed ratio of 0.5:1 (40.6%) or a nurse-to-bed ratio of 1:1 (37.6%), and respiratory therapists, clinical dieticians, clinical pharmacists, and physiotherapists were present in 5.9%, 32.7%, 36.6% and 49.5% of the 101 ncus, respectively. the number of ncus has increased, the availability of ncu equipment has become more sufficient, and the staffing of ncus has improved. however, attention should be paid to the management of specialized ncu equipment, the shortage of ncu staff, and the need for ncu training. automated devices collecting quantitative measurements of pupil size and reactivity are increasingly used for critically ill patients with neurologic disease. 
however, there are limited data on the effect of ambient light conditions on pupil metrics. to address this issue, we tested the range of pupil reactivity in healthy volunteers in both light and dark conditions. we measured quantitative pupil size and reactivity in seven healthy volunteers with the neuroptics-200 pupillometer in both bright and dark ambient lighting conditions. bright conditions were created by overhead led lighting in a room with ample natural light. dark conditions consisted of a windowless room with no overhead light source. the primary outcome was the neurologic pupil index (npi), a composite metric ranging from 0-5 in which >3 is considered normal. secondary outcomes included resting and constricted pupil size, change in pupil size, constriction velocity, dilation velocity and latency. results were analyzed with multi-level linear regression to account for both inter- and intra-subject variability. seven subjects underwent ten pupil readings in bright and dark conditions, yielding 140 total measurements. mean resting pupil sizes differed significantly between bright and dark conditions (3.56 in bright conditions; p<0.001). all additional secondary outcomes except latency were also significantly different between conditions. we found that ambient light levels impact pupil parameters in healthy subjects. however, changes in npi are small and more consistent across varying lighting conditions than other metrics. further testing of patients with poor pupil reactivity is necessary to determine if ambient light conditions could influence clinical assessment in the critically ill. practitioners should standardize lighting conditions to maximize the reliability of their measurements. neural stem cells (nscs) have been shown in previous studies to have an anti-inflammatory effect in stroke. however, the mechanism of the anti-inflammatory effect of direct co-culture with nscs in hemorrhagic stroke remains unclear. 
the aim of this study was to investigate whether direct co-culture with nscs modulates hemolysate-induced inflammation in raw 264.7 cells. we stimulated raw 264.7 cells with hemolysate to induce hemorrhagic inflammation in vitro. hemolysate-activated raw 264.7 cells were co-cultured directly with hb1.f3 cells for 4 hours. following direct co-culture, the production of cyclooxygenase-2 (cox-2), interleukin-1β (il-1β) and extracellular signal-regulated kinase (erk) was assessed by western blotting, and tumor necrosis factor-α (tnf-α) was evaluated by enzyme-linked immunosorbent assay (elisa). hemolysate generated an activated inflammatory response in raw 264.7 cells. direct co-culture with hb1.f3 significantly suppressed the phosphorylation of erk 1/2 in hemolysate-activated raw 264.7 cells. the expression of inflammatory mediators such as cox-2 and il-1β was reduced by direct co-culture with hb1.f3 cells. in addition, the expression of cox-2 and il-1β was attenuated by an erk inhibitor (u0126). our results demonstrate that direct co-culture with hb1.f3 cells reduced the inflammatory response in hemolysate-activated inflammation via suppression of the erk1/2 pathway. this suggests that nsc treatment can suppress the inflammatory response in hemorrhagic stroke. no pharmacological intervention improves outcomes after primary intracerebral hemorrhage (ich). we developed a novel therapeutic approach based on the known biological function of endogenous apolipoprotein e (apoe). apoe is a key mediator of neuroinflammatory responses and modifies recovery from a variety of acute and chronic brain injuries. unfortunately, intact apoe holoprotein does not cross the blood brain barrier (bbb) and cannot be administered as a neurotherapeutic. we created apoe-mimetic peptides that cross the bbb and down-regulate neuroinflammatory responses in vitro and in vivo. cn-105, our lead candidate, is a 5-amino acid apoe-mimetic peptide derived from apoe's receptor-binding region. 
cn-105 retains the anti-inflammatory and neuroprotective effects of intact apoe, was well-tolerated in preclinical studies, readily crosses the bbb, and demonstrates excellent pharmacokinetic, safety, and tolerability profiles in phase 1 studies. this is a multicenter, open-label phase 2a trial of cn-105 in patients with acute primary supratentorial ich. a total of 60 participants between the ages of 30 and 80 years, with a confirmed radiographic diagnosis of spontaneous, primary supratentorial ich, will be enrolled across 4 study centers. patients will be evaluated for eligibility within 12 hours of symptom onset. eligible participants will receive cn-105 intravenously as a 30-minute infusion every 6 hours, up to a 3-day maximum. participants will be monitored daily throughout the treatment phase and receive standard-of-care treatment for the duration of the study. primary: to assess the safety of cn-105 administration in primary ich. secondary: to evaluate the effects of cn-105 administration on 30-day mortality and functional outcomes. exploratory: to investigate the feasibility of radiographic surrogates of clinical outcomes using perihematomal edema measurements on serial brain ct and mri, and to investigate the feasibility of serial biochemical markers of neuroinflammation as surrogate measures of perihematomal edema and clinical outcome. cn-105 represents a first-in-class agent now entering phase 2 clinical trials in patients with acute ich. novel oral anticoagulant (noac)-associated intracranial hemorrhage is a life-threatening condition for which activated prothrombin complex concentrate (factor eight inhibitor bypassing activity, feiba) may be used for reversal. few studies report its use in spontaneous or traumatic intracranial hemorrhage. our institutional protocol is reversal with feiba 25 units/kg, with escalating doses as needed. the safety and efficacy of this protocol were assessed. 
we performed a retrospective review of adult patients presenting to a level 1 trauma center between 2014-2016 with spontaneous or traumatic intracranial hemorrhage while on a noac. we evaluated the medication they presented on, the indication for anticoagulation, the location and size of the hemorrhage, presentation gcs, the dosage of feiba received, the change in size of hemorrhage on serial imaging as well as the time between serial images, complications from reversal, and the need for blood product transfusion. we identified 19 patients with an acute intracranial hemorrhage while on noacs. patients underwent a baseline head ct documenting acute intracranial blood, were reversed with feiba (25 u/kg), and underwent repeat imaging 6 hours later per protocol. two patients had no repeat imaging. ten (59%, 10/17) patients had no increase in hematoma volume on repeat imaging. two underwent neurosurgical procedures (aneurysm coiling, sub-occipital craniectomy) without intra-operative bleeding complications. five (29%, 5/17) patients had a clinically insignificant increase in size of hemorrhage. of those, one underwent a subsequent neurosurgical procedure, which was already anticipated. two (12%, 2/17) patients had clinically significant hematoma enlargement. of those, one underwent urgent craniectomy (indicated based on initial presentation) and one required a ventriculostomy for hydrocephalus. adjusted-dose feiba (25 units/kg) may be an effective alternative to the standard dose (50 units/kg) for reversal of noacs in acute intracranial hemorrhage. our experience showed clinically significant hematoma expansion in 12% of patients and no increase in unplanned neurosurgical procedures after reversal with feiba. 
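the weight-based dosing described above (25 units/kg, versus the 50 units/kg standard dose) is straightforward arithmetic. a minimal sketch follows; the rounding to a 500-unit vial size is a hypothetical illustration and not part of the protocol described in the abstract.

```python
# weight-based dose arithmetic for a 25 units/kg protocol with
# dose escalation to 50 units/kg. the vial-size rounding step is an
# ASSUMPTION for illustration; actual pharmacy practice varies.

def weight_based_dose(weight_kg, units_per_kg=25, vial_size=500):
    """return (raw calculated dose, dose rounded to the nearest vial)."""
    raw = weight_kg * units_per_kg
    rounded = round(raw / vial_size) * vial_size  # nearest whole vial
    return raw, rounded

raw, rounded = weight_based_dose(80)       # 80 kg patient at 25 units/kg
print(raw, rounded)                        # prints: 2000 2000
raw2, rounded2 = weight_based_dose(80, 50) # escalated 50 units/kg dose
print(raw2, rounded2)                      # prints: 4000 4000
```

the sketch only illustrates how the adjusted (25 units/kg) and standard (50 units/kg) doses scale with weight; it is not dosing guidance.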
here we sought to determine whether there is an association between recanalization success and rate of hemorrhagic transformation among patients who underwent intra-arterial thrombectomy for ischemic stroke secondary to anterior circulation large vessel occlusion (lvo), many treated at extended time from last seen well (lsw) after mri assessment. stroke patients with anterior circulation lvo treated with thrombectomy between april 2011 and june 2017 were studied. group-wise comparisons were made between patients with post-thrombectomy hemorrhage (as confirmed by a single, blinded neuro-radiologist reviewer) and patients without hemorrhage. failed or incomplete recanalization was defined as mtici < 2b. symptomatic intracranial hemorrhage (sich) was defined as validated hemorrhagic conversion or parenchymal hematoma plus a 4-point increase in nihss. pertinent baseline characteristics were recorded and analyzed. sich was more prevalent among patients with tici < 2b recanalization (or 2.76 [95% ci 1.33-5.71]). interestingly, there was a low rate of sich among patients with tici = 0 recanalization (2/12 [16.7%]). although many patients were treated at advanced time from lsw, no excess rate of sich was observed. baseline characteristics including age, presentation nihss, and presentation aspects were similar between the two groups. rates of sich are low after successful mri-selected thrombectomy regardless of time from lsw. patients with poor recanalization show increased rates of sich, in keeping with past literature. our data suggest that thrombectomy after mri selection may be safe and effective for patients at extended time from lsw or for patients with unknown lsw. cta spot sign is associated with hematoma growth, a common complication of intracerebral hemorrhage (ich) that portends worse outcomes. magnesium and calcium are cofactors in the clotting cascade and for platelet aggregation. 
we tested the hypothesis that magnesium and calcium levels are associated with the presence of the cta spot sign. patients with spontaneous ich presenting to northwestern memorial hospital were identified from a prospective observational registry. inclusion criteria included cta obtained within 24 hours of symptom onset and admission magnesium and calcium levels. cta spot sign (active contrast extravasation on ct angiography) was identified by a board-certified neurointensivist or neuroradiologist. variables suggesting association with spot sign at p < 0.1 were assessed for inclusion in a logistic regression model, and a parsimonious predictive model for cta spot sign was developed using backward stepwise variable selection. 146 patients (age 63 ± 14.5 years, 47% male, median ich score 1 [iqr 1-3]) were included. seventeen (11.6%) patients with cta spot sign were identified. admission magnesium was 1.96 ± 0.27 and calcium was 9.27 ± 0.56. lower magnesium (or 0.074, 95% ci 0.007-0.746, p = 0.027), lower calcium (or 0.390, 95% ci 0.156-0.973, p = 0.044), and higher ich score (or 1.98, 95% ci 1.28-3.07, p = 0.002) were independently associated with cta spot sign. magnesium and calcium levels on admission are associated with the presence of a cta spot sign in patients with ich. magnesium and calcium supplementation may be attractive therapeutic targets for preventing harm from hematoma growth. cerebellar intraparenchymal hemorrhage (iph) is a rare and likely underreported complication of subdural hematoma (sdh) evacuation. we present two cases of post-operative iph and review the literature. case 1. an 83-year-old man underwent craniotomy for evacuation of a chronic right frontoparietal sdh. post-op ct showed pneumocephalus. the patient was extubated and clinically improved. three days post-operatively, he became lethargic and a brain ct revealed a 4 cc right cerebellar iph. he was unable to safely swallow, declined a feeding tube and died under hospice care nine days later. 
case 2. a 72-year-old man underwent craniotomy for evacuation of a left convexity sdh. routine post-op ct revealed an incidental left cerebellar iph. he returned to baseline one month later. only four such cases have been reported in the literature (1-4). two cases led to death within one week and two recovered, one with significant deficits. five more occurred following burr hole drainage of sdh and two others following drainage of subdural hygromas (2, 5-10). the incidence of cerebellar iph following supratentorial craniotomy has been reported in up to 0.29% of cases, with significant morbidity or mortality (3). it occurs irrespective of age, pre-existing coagulopathy or arteriovenous malformations. size of insult and amount of csf loss do not correlate with iph, despite the theory, supported by cerebral blood flow imaging, that over-drainage of cerebrospinal fluid (csf) causes intracranial hypotension and subsequent damage to dural veins (3, 11). iph also occurs independently of operating room position, even though having the head turned is thought to compress venous drainage in the neck and cause congestion (3). cerebellar vasculature may be more sensitive to changes in intracranial pressure, though why this does not lead to complications more routinely is not clear. cerebellar iph should be considered in cases of neurological decline after sdh evacuation. intracerebral hemorrhage (ich) location predicts outcome, but most studies have examined differences between deep, lobar, and infratentorial locations. this study aims to characterize specific deep ich locations in a diverse cohort. the ethnic/racial variations of intracerebral hemorrhage (erich) study is a multi-center, prospective, u.s.-based study. 1345 subjects with supratentorial deep ich, known ich volume, and three-month follow-up data were included. logistic regression was used to evaluate the association between location and poor outcome (mrs > 3). 
receiver operating characteristic (roc) analysis was performed to identify ich volumes specific for poor outcome by location. 671 thalamic, 581 putaminal, and 83 caudate ichs were included. median ich volume was largest in the putamen (15 ml), followed by the thalamus (9 ml) and caudate (4 ml, p < 0.001). intraventricular hemorrhage (ivh) was most prevalent in caudate (94%), followed by thalamus (65%) and putamen (23%, p < 0.001). subjects with thalamic ich were older (62 vs 57 vs 58 years, p < 0.001) and more likely hypertensive (91% vs 85% vs 82%, p = 0.002) than those with putaminal and caudate ich, respectively. compared to thalamic, putaminal ich had more ich expansion (23% vs 15%, p < 0.001) and surgery (20% vs 14%, p = 0.004) but fewer external ventricular drains (15% vs 27%, p < 0.001). thalamic location predicted poor outcome (or 2.25, 95% ci 1.35-3.73) at 90 days after adjustment for age, sex, premorbid disability, ich volume, ich expansion, ivh, and admission gcs. roc analysis identified 10 ml for thalamic and 29 ml for putaminal ich without ivh as having 95% specificity for poor outcome. there are significant differences in characteristics and outcomes within deep ich. specificity estimates for the identified ich volume thresholds require external validation. these findings may have implications for prognostication and clinical trial design. racial differences in outcome after intracerebral hemorrhage (ich) among asians, native hawaiians and other pacific islanders (nhopi) have been inadequately studied, since these racial groups have historically been aggregated into a single racial category. a multiracial prospective cohort study of ich patients was conducted from 2011 to 2016 at a tertiary center in honolulu, hi to assess racial disparities in outcome after ich. favorable outcome was defined as a 3-month modified rankin scale (mrs) score ≤ 2. patients with no available 3-month functional outcome, race other than asians and nhopi, and baseline mrs > 0 were excluded. 
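the 95%-specificity volume thresholds reported in the erich analysis above can be illustrated with a small sketch: scan candidate cutoffs until the false-positive rate among good-outcome patients drops to 5% or below. the volumes and outcome labels below are hypothetical, not the study data.

```python
# Sketch: smallest ICH volume cutoff (classify "poor" when volume >= cutoff)
# whose specificity for poor outcome is at least the target. Hypothetical data.

def specificity_threshold(volumes, poor_outcome, target_spec=0.95):
    """Return the smallest cutoff with specificity >= target_spec."""
    good = [v for v, p in zip(volumes, poor_outcome) if not p]
    for cut in sorted(set(volumes)):
        # specificity = fraction of good-outcome patients below the cutoff
        spec = sum(v < cut for v in good) / len(good)
        if spec >= target_spec:
            return cut
    return None

volumes      = [2, 4, 5, 6, 8, 9, 10, 12, 15, 20, 25, 30, 35, 40]  # ml, hypothetical
poor_outcome = [0, 0, 0, 0, 0, 0, 0,  1,  0,  1,  1,  1,  1,  1]   # hypothetical
print(specificity_threshold(volumes, poor_outcome))
```

with real data the cutoff would be read off the full roc curve; this sketch simply scans candidate cutoffs, which is equivalent for a single continuous predictor.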
multivariable analyses using logistic regression were performed to assess the impact of race on favorable outcome after adjusting for the ich score, early do-not-resuscitate (dnr) order and dementia/cognitive impairment. a total of 220 patients (161 asians, 59 nhopi) were studied. overall, 65 (29.5%) achieved favorable outcome at 3 months. nhopi were younger than asians (51.8 ± 15.3 vs. 63.9 ± 16.7 years respectively, p < 0.0001), had a higher prevalence of diabetes (39.0% vs. 21.1% respectively, p = 0.007) and obesity (58.9% vs. 23.1% respectively, p < 0.0001), and had a lower prevalence of early dnr order (3.4% vs. 24.2% respectively, p = 0.0004) and advance directive presence (6.8% vs. 29.2% respectively, p = 0.0005). nhopi race was a predictor of favorable outcome in the unadjusted model (or 2.47, 95% ci: 1.32, 4.62) and after adjusting for the ich score (or 2.30, 95% ci: 1.06, 4.97) but not in the final model (or 2.04, 95% ci: 0.94, 4.42). in the final model, the ich score remained the only independent negative predictor of outcome (or 0.26, 95% ci: 0.17, 0.41 per point). nhopi are more likely to achieve favorable functional outcome after ich compared to asians even after controlling for ich severity. however, this association was attenuated after adjusting for dnr status and baseline cognitive factors. intracerebral hemorrhage (ich) patients often require continuous antihypertensive infusions. we sought to identify clinical and care-process predictors of antihypertensive infusion duration, and tested whether infusion duration independently predicts intensive care unit length of stay (icu los) after adjusting for validated measures of ich illness severity. we identified spontaneous ich patients admitted 12/2013-9/2015 to a tertiary center, excluding those transitioned to comfort care within 24 hours. we abstracted demographic and clinical variables from the medical record. 
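an unadjusted odds ratio with a 95% wald confidence interval, like the or 2.47 (95% ci 1.32-4.62) quoted above, comes from a 2x2 table of outcome by group. a minimal sketch follows; the counts are hypothetical, not the study's (the abstract does not report the favorable-outcome split by race).

```python
import math

# Sketch: unadjusted odds ratio with a 95% Wald CI from a 2x2 table.
# Counts below are hypothetical, not taken from the abstract.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: favorable/unfavorable in group 1; c/d: same in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(25, 34, 40, 121)    # hypothetical counts
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

the adjusted ors in the final model would instead come from a fitted logistic regression, which this arithmetic sketch does not attempt.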
we calculated the total duration each patient received a continuous infusion of an antihypertensive medication. we categorized glasgow coma scale score as 14-15, 8-13, or < 8. two reviewers independently classified ich location and etiology. we determined univariate associations of clinical variables with antihypertensive infusion and performed regression analysis to determine the effect of continuous infusion on icu los. we identified 495 spontaneous ich patients and excluded 50 for early comfort care. in the remaining 445, mean age was 70 [69-71] years, 45% were female, median ich score was 1 [1-2], and 44% had lobar hemorrhages. continuous infusions included nicardipine, clevidipine, labetalol and diltiazem. a total of 127 (28%) patients received antihypertensive infusions, mean 13 hours. mean time to enteral antihypertensive medication administration was 32.5 hours, and mean icu length of stay was 3.4 days [3.0-3.8]. predictors of longer antihypertensive infusion duration were male gender (p = 0.003), non-lobar ich (p = 0.0002), non-caucasian race (p < 0.001), younger age (p < 0.0001), higher initial systolic (p = 0.03) and diastolic bp (p = 0.01), worse gcs category (p < 0.001), and longer time to first enteral medication (p < 0.0001). antihypertensive infusion duration independently predicted icu los (p < 0.0001) after adjusting for age, race, gcs category, time to enteral antihypertensives, and ich score. worse gcs category, younger age, non-lobar ich location, and race are significant independent predictors of continuous iv antihypertensive infusion duration, which is significantly associated with longer icu stay. patients with sich have a high risk of vte. pharmacological prophylaxis such as unfractionated heparin (ufh) has been demonstrated to reduce vte. however, published datasets exclude patients with recent ich out of concern for hematoma enlargement. 
aha/asa guidelines recommend ufh 1-4 days after hematoma stabilization, while the eso has no recommendation on timing of ufh. there are few data for patients who received ufh before 48 hours. our institutional practice is to begin ufh following sich after 24 hours of clinical and radiographic stability. we examined the impact of this practice on risk of hematoma expansion. we performed a retrospective cohort analysis of 87 sich patients admitted in 2012-2013 to a single us university hospital. demographic and clinical characteristics were abstracted. ich was measured via 3d volumetrics for an admission ct, a 24-hour follow-up, and a follow-up prior to discharge. percent hematoma growth between the 24-hour ct and the discharge ct was calculated. risk factors for expansion > 5%, including early heparin use, were analyzed via one-way t-tests and chi-squared tests. the 87 sich patients analyzed had a median ich score of 2 (iqr 1-3) and median admission gcs of 10 (iqr 6-13). 39% (34/87) of patients received early ufh. 18% (16/87) suffered hematoma expansion > 5%. overall mean hematoma growth was higher with early ufh (ufh24hr -15%, p = 0.0007). in multivariate analysis, ich score, gcs and initial hematoma size did not predict > 5% hematoma expansion. early vte prophylaxis at 24 hours from sich was associated with a statistically significant increase in hematoma size, but this increase is clinically insignificant. in this cohort, early ufh did not increase the risk of significant hematoma expansion. further prospective trials are warranted, given the high risk of vte in this population. antiplatelet therapy at the time of spontaneous intracerebral hemorrhage (sich) may increase the risk of hemorrhage expansion and mortality. current guidelines recommend consideration of a single dose of desmopressin in sich associated with cyclooxygenase-1 inhibitors or adenosine diphosphate receptor inhibitors. 
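the percent-growth calculation described above (expansion defined as > 5% growth between the 24-hour and discharge cts) is simple arithmetic; a minimal sketch with illustrative volumes:

```python
# Sketch of the percent-growth calculation described above: relative change
# in hematoma volume between the 24-hour CT and the discharge CT.
def percent_growth(vol_24h_ml, vol_discharge_ml):
    """Percent hematoma growth between the 24-hour and discharge CTs."""
    return 100.0 * (vol_discharge_ml - vol_24h_ml) / vol_24h_ml

def expanded(vol_24h_ml, vol_discharge_ml, threshold=5.0):
    """Expansion flag using the >5% threshold from the analysis above."""
    return percent_growth(vol_24h_ml, vol_discharge_ml) > threshold

print(percent_growth(20.0, 23.0))  # 15.0 (% growth)
print(expanded(20.0, 23.0))        # True
```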
this study sought to compare outcomes in patients who received desmopressin for antiplatelet reversal in the setting of sich to similar patients who did not receive desmopressin. this retrospective chart review of the electronic medical record included adult patients admitted for sich who were on antiplatelet agents at the time of diagnosis. patients who received desmopressin were compared to similar patients who did not. exclusion criteria included traumatic brain injury, active coagulopathy and thrombocytopenia. the primary outcome was the incidence of hematoma expansion. additional outcomes included average increase in hematoma volume, in-hospital mortality and functional outcome at hospital discharge. overall, 73 patients (36 received desmopressin, 37 did not) were included for analysis. incidence of hematoma expansion was not different between groups (36% with desmopressin vs 22% without, p = 0.17). average largest increase in hematoma volume on follow-up imaging from baseline was not different (10.5 ± 16.7 ml with desmopressin vs 10.2 ± 26.7 ml without, p = 0.96). in-hospital mortality was significantly higher in the desmopressin group (20% vs 8% without desmopressin, p = 0.016), as was the incidence of a modified rankin score of 4-6 at discharge (39% vs 28% without desmopressin, p = 0.029). administration of desmopressin for antiplatelet reversal in sich does not appear to reduce the incidence of hematoma expansion. further studies assessing the temporal relation of desmopressin administration and hematoma expansion are needed to confirm the results of this single-center retrospective study. clinical outcome after intracerebral hemorrhage (ich) remains poor. 
definitive phase-3 trials in ich have failed to demonstrate improved outcomes with intensive systolic blood pressure (bp) lowering. we sought to determine whether other bp parameters, namely diastolic bp, pulse pressure (pp), and mean arterial pressure (map), showed an association with clinical outcome in ich. we retrospectively analyzed a prospective cohort of 672 patients with spontaneous ich and documented demographic characteristics, stroke severity, and neuroimaging parameters. consecutive hourly bp recordings allowed for computation of systolic bp, diastolic bp, pp, and map. threshold bp values that transitioned patients from survival to death were determined from roc curves. using in-hospital mortality as the outcome, bp parameters were evaluated with multivariable logistic regression analysis. patients who died during hospitalization had higher mean pp compared to survivors (68.5 ± 16.4 mmhg vs. 65.4 ± 12.4 mmhg; p = 0.032). the following admission variables were associated with significantly higher in-hospital mortality (p < 0.001): poorer admission clinical condition, intraventricular hemorrhage, and increased admission normalized hematoma volume. roc analysis showed that mean pp dichotomized at 72.17 mmhg provided a transition point that maximized sensitivity and specificity for mortality. the association of this increased dichotomized pp with higher in-hospital mortality was maintained in multivariable logistic regression analysis (or 3.0; 95% ci 1.7-5.3; p < 0.001) adjusting for potential confounders. widened pp may be an independent predictor of higher mortality in ich. this association requires further study. a national confidential enquiry into patient outcome and death (ncepod) report concerning management of aneurysmal subarachnoid haemorrhages in the uk suggested up to half of patients received suboptimal consideration for organ donation. as demand for organs continues to increase, so does the need to pursue all potential sources of donor organs. 
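a cutpoint that "maximizes sensitivity and specificity", like the pp threshold of 72.17 mmhg above, is commonly chosen by maximizing youden's j (sensitivity + specificity - 1) over the roc curve. a sketch on hypothetical pulse-pressure values, not the study data:

```python
# Sketch: pick the pulse-pressure cutoff that maximizes Youden's J
# (sensitivity + specificity - 1). All values below are hypothetical.
def youden_cutoff(values, died):
    dead  = [v for v, d in zip(values, died) if d]
    alive = [v for v, d in zip(values, died) if not d]
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        sens = sum(v >= cut for v in dead) / len(dead)    # true-positive rate
        spec = sum(v < cut for v in alive) / len(alive)   # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut

pp   = [55, 58, 60, 62, 65, 68, 70, 72, 75, 80, 85, 90]  # mmHg, hypothetical
died = [0,  0,  0,  0,  0,  0,  0,  1,  1,  1,  0,  1]   # hypothetical
print(youden_cutoff(pp, died))
```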
subarachnoid haemorrhages have an estimated mortality of 50% and can potentially provide younger donor organs with less chronic pathology. this study provides a comprehensive picture of donation rates within a tertiary centre. retrospective data regarding all patients who died on the neuro-intensive care unit during 2016 with aneurysmal subarachnoid haemorrhage as the cause of death were obtained from the nhs blood and transplant team. the local audit committee provided ethical approval. data regarding organ donation were extracted, compared to national data, and analysed using fisher's exact test. referral rates were 100%. this is greater than the national average of 57.7% (p = 0.001), yet only 30.8% of referred patients proceeded to organ donation. consent was withheld in 52.8% of potential donors. nationally, 37.9% of donors are lost due to non-consent (p = 0.235). 32.1% of consented patients were unable to donate organs, similar to national figures (p = 1.00). referral rates within this centre are excellent; consent remains the main obstacle. consent rates can be improved using a long-contact model in which specialist nurses in organ donation establish relationships with relatives prior to any discussion of donation. the ideal discussion is a pre-planned collaboration involving a senior doctor and a specialist nurse. early brain stem testing may facilitate earlier acceptance of death by relatives whilst reducing the duration of the multi-systemic effects of the associated hyperresponsive cascade on donor organs. neurosurgeons should be encouraged to suggest organ donation when declining referrals. further work is needed to assess the barriers to instituting these measures and inspiring change. spontaneous brainstem hemorrhage has been historically associated with high mortality. however, updated data on the frequency and outcome of spontaneous brainstem hemorrhage are scarce vis-à-vis advances in neuro-critical care. 
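comparisons of proportions like those above (local vs. national referral and consent rates) use fisher's exact test on a 2x2 table of counts. a self-contained two-sided version can be written from the hypergeometric distribution; the counts below are hypothetical, since the abstract reports only rates:

```python
from math import comb

# Sketch: two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]],
# summing hypergeometric probabilities no larger than the observed table's.
# Counts are hypothetical, not the audit's.
def fisher_exact_p(a, b, c, d):
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, col1)
    def prob(x):  # hypergeometric probability of x in cell (1,1)
        return comb(row1, x) * comb(row2, col1 - x) / denom
    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs + 1e-12)

# hypothetical: 8/10 referred in one group vs. 1/6 in a comparison group
print(round(fisher_exact_p(8, 2, 1, 5), 4))
```

scipy's `scipy.stats.fisher_exact` gives the same two-sided p-value for small tables; the hand-rolled version above just avoids the dependency.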
the purpose of this study was to investigate the frequency and outcome of spontaneous brainstem hemorrhage. records of consecutive intracerebral hemorrhage (ich) patients presenting to an urban academic medical center from january 2010 through december 2015 were reviewed. cases with brainstem hematomas were isolated for analysis. data on demographics and outcomes were collected and analyzed. sub-group analysis was also done to look at outcomes based on location of hemorrhage within the brainstem. of 181 consecutive spontaneous ich patients, 13 (7.18%) presented with brainstem hemorrhage; 7 (53.8%) were pontine, 2 (15.4%) mesencephalic, and 4 (30.8%) were located in both the pons and the midbrain. the average age was 52.92 years and 38.46% were men. median glasgow coma scale on presentation was 6.5. the thirty-day mortality rate was 62.5%, with 6 in-hospital deaths and 2 deaths post discharge. two and 3 patients were discharged home or to a rehabilitation facility, respectively. in subgroup analysis, thirty-day mortality for midbrain, pontine and combined pons/midbrain hemorrhage was 0%, 57% and 100%, respectively. spontaneous brainstem hemorrhage remains an uncommon but highly fatal clinical entity. more than one-half of patients die within 30 days. only a minority are discharged to rehabilitation or home. in sub-group analysis, location of brainstem hemorrhage was shown to influence outcome, with 100% mortality in the case of combined pons/midbrain hemorrhage and more than 50% mortality with pontine hemorrhage. midbrain hemorrhage was associated with good outcome, with 100% survival. patients with intracerebral hemorrhage (ich) frequently present with hypertension. it is unclear whether this is due to preexisting hypertension (prhtn) causing the bleed, an effect of the bleed, or both. we retrospectively analyzed a single-institution cohort of ich patients presenting between 2013 and 2016. 
data included home antihypertensive use; admission systolic blood pressure (asbp); tte, ekg and imaging results; and nicardipine administration. the primary objective was to assess the relationship between prhtn and asbp, while the secondary objectives were to assess the relationships between prhtn, imaging findings and acute antihypertensive requirements. 112 ich patients met inclusion criteria. in our assessment for prhtn, we found that 46% of patients were on antihypertensives, 16% had lvh on ekg, and 15% had lvh on tte. there was a significant relationship between lvh on tte and lvh on ekg (p < 0.001), but not between home antihypertensive use and presence of lvh by either modality. asbp was higher for all patients with markers of prhtn, but this was only significant for patients with lvh on tte (181 mmhg, iqr 153-214 vs. 152 mmhg, iqr 137-169, p < 0.001) and patients with lvh on ekg (195 mmhg, iqr 155-216 vs. 147 mmhg, iqr 129-163, p < 0.001). all patients with markers of prhtn were more likely to require nicardipine, but this was only significant for patients with lvh on tte (94% vs. 64%, p = 0.016) and patients with lvh on ekg (83% vs. 52%, p = 0.018). all patients with markers of prhtn were more likely to have deep bleeds (p = 0.017 for patients with lvh on ekg vs. those without). there was no relationship between any marker of prhtn and the presence of a spot sign. in patients with ich, prhtn is related to higher asbp, deep bleed location, and increased acute antihypertensive requirements. all spontaneous intracerebral hemorrhage (sich) patients, including those with low severity, are uniformly admitted to the intensive care unit (icu) at our institution. many may not benefit from this high-intensity observation and leave the icu within 24 hours without experiencing any complications. identifying low-risk characteristics could aid in triaging such patients to stroke units instead. 
retrospective data collection of all sich patients admitted to our institution from june 1, 2013 to june 30, 2016 included ich score, need for surgical interventions, medical complications, and icu/hospital los. we analyzed variables predicting short (< 24 hour) icu los among low-severity (ls-ich) patients (defined as those with ich score 0-1). 163 (52%) of 313 sich patients had ich scores of 0-1, of which just under half (78) had an icu los < 24 hours. they also spent significantly fewer days in hospital (4 vs 9.7, p < 0.001). we could not identify a clear ich score cutoff that was sensitive enough to predict short icu los. however, requirement for antihypertensive infusion and early clinical deterioration correlated strongly with longer icu los (p < 0.001). there appears to be a subset of mild ich patients (ich score 0-1) who do not require icu observation. a risk assessment score incorporating gcs and ich volume may be able to delineate this low-risk population, who could instead be admitted to a stroke unit, with the potential for significant cost savings and hospital efficiency. obesity has been linked with relative longevity in several disease conditions. this relationship has been termed the "obesity paradox." in this study we sought to evaluate the impact of obesity on short-term outcomes in patients with intracerebral hemorrhage (ich). patients admitted with a diagnosis of ich were selected from the 2003-2014 nationwide inpatient sample (nis) database using icd-9 codes. patients with ich were dichotomized based on the presence of obesity as a coexisting diagnosis based on icd-9 codes and diagnosis-related groups. the primary outcome measure was in-hospital mortality. length of stay and total charges were also examined as resource utilization measures. obesity is a major health care burden, as evidenced by higher resource utilization. counterintuitively, obesity appears to be associated with lower in-hospital mortality in ich patients. 
one possible physiological basis for this could be that higher ldl levels on presentation result in a lower likelihood of hematoma expansion. recent short-term outcome analysis indicates an association of spontaneous intraventricular hemorrhage (ivh)-related hydrocephalus with incontinence and gait dysfunction. we explored the association of hydrocephalus scores, intraventricular alteplase and clinical variables with these outcomes at long-term follow-up in survivors from the clear iii trial. clear iii, a randomized, multi-center, double-blinded, placebo-controlled trial, was conducted to determine whether pragmatically employed external ventricular drainage (evd) plus intraventricular alteplase improved outcome, in comparison to evd plus saline, in patients with ivh causing obstructive hydrocephalus. we assessed hydrocephalus scores in survivors at diagnosis and at days 30 and 365. incontinence and dysmobility were defined using 12-month barthel index subscores (< 10 for bladder and < 15 for mobility, respectively). outcome measures were predictors of incontinence and gait dysmobility at 1 year after ich. this prospective observational study analyzed consecutive ich patients (n=246) treated at the neurological and neurosurgical departments of the university hospital erlangen, germany over a 21-month inclusion period (05/2013-01/2015). we analyzed the influence of patient characteristics, in-hospital measures and functional status on treatment recommendations and on oac initiation during 12-month follow-up. clear treatment recommendations by attending stroke physicians seem necessary to ensure oac initiation after ich. oac showed beneficial associations; however, the data here suggest the presence of an indication bias introduced by treatment recommendations and outpatient care during follow-up. therefore, the observed associations with age and functional status might affect unadjusted analyses. 
recently, non-vitamin k antagonist oral anticoagulant (noac) therapy in patients with non-valvular atrial fibrillation has shown half the incidence of intracerebral hemorrhage (ich) compared to warfarin. however, it remains controversial whether the outcome of noac-associated ich (nich) is worse or better than that of warfarin-associated ich (wich). in this study, we investigated clinical outcomes and radiological findings of ich between the two anticoagulation treatments. a retrospective review of medical records was performed for 1,197 patients admitted with ich from 2003 to 2016 at seoul national university bundang hospital. clinical characteristics, functional outcome, location and volume of ich, and all-cause mortality within 90 days were analyzed. among those patients, 47 patients with wich and 8 patients with nich were included. lesion location was most common in the supratentorial deep area (50.0%, 44.7%), followed by the lobar area (37.5%, 31.9%) and the brainstem and cerebellum (12.5%, 23.4%) in the nich and wich groups, respectively. no significant differences were found in initial nihss (15.5 vs 10), discharge nihss (16.5 vs 8), mrs 0-2 at discharge (14.9% vs 25.0%), mrs 0-3 at discharge (36.2% vs 25.0%), mrs 0-2 at 90 days (25.5% vs 12.5%) or mrs 0-3 at 90 days (25.5% vs 12.5%) between the nich and wich groups. we did not find any difference between nich and wich in all-cause mortality at discharge (17% vs 0%), at 90 days (27.7% vs 0%), or at 1 year (34% vs 25%). median baseline ich volume was not significantly different between the two groups (3676.2 vs 1542.24). in our study, functional outcome, mortality, and baseline ich volume were similar following nich and wich. because of low statistical power due to the small sample size of our study, further studies with prospective, larger patient cohorts will need to be conducted. 
novel oral anticoagulants (noacs) are increasingly used as an alternative to vitamin-k antagonists (vka) such as warfarin for anticoagulation, and have shown lower rates of intracranial hemorrhage in several randomized clinical trials. it has been suggested that noac-associated intraparenchymal hemorrhages (noac-iph) might be particularly dangerous, yet the literature regarding hematoma characteristics and outcomes between noac-iph and vka-iph is inconclusive. given the lack of standardized reversal strategies and the lack of information on outcomes following noac-associated iph, the aim of this meta-analysis was to compare 1) mortality, 2) hematoma volume, and 3) risk of hematoma expansion in patients who developed an iph on noacs versus vka. a meta-analysis of the literature through december 2016 was conducted using the pubmed, embase and cochrane databases in accordance with prisma guidelines. pooled risk ratios (rr) were calculated for mortality and hematoma expansion, and pooled mean difference (md) was calculated for hematoma volume (ml), using random-effect (re) and fixed-effect (fe) models. noac-iph was not associated with increased mortality (re and fe: rr 1.07; 95% ci 0.89-1.28; i2 = 0.00%, p-heterogeneity = 0.29; 7 studies) or hematoma expansion (re and fe: rr 0.97; 95% ci 0.77-1.21; i2 = 0.00%, p-heterogeneity = 0.48; 4 studies) compared to vka-iph. the hematoma volume of noac-iph was smaller than that of vka-iph (re: md -8.83 ml, 95% ci -17.00 to -0.67; fe: md -6.48 ml, 95% ci -9.85 to -3.10; 5 studies), but with considerable heterogeneity that could not be alleviated (i2 = 79.1%, p-heterogeneity < 0.05). noac-iph was not associated with increased mortality or hematoma expansion compared to vka-iph and may be associated with a smaller hematoma volume. controversy exists regarding blood pressure (bp) reduction and perihematomal ischemia (phi). we investigated the association of acute bp reduction with the presence of qualitative and quantitative phi in a large prospective cohort of intracerebral hemorrhage (ich). 
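fixed-effect pooled risk ratios like those above are inverse-variance weighted averages of the study-level log risk ratios, with each study's standard error back-solved from its reported 95% ci. a sketch on hypothetical study inputs (rr, ci low, ci high), not the meta-analysis's included studies:

```python
import math

# Sketch: inverse-variance fixed-effect pooling of risk ratios on the log
# scale. Inputs are hypothetical (rr, ci_low, ci_high) triples.
def pooled_rr_fixed(studies, z=1.96):
    num = den = 0.0
    for rr, lo, hi in studies:
        log_rr = math.log(rr)
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # SE back-solved from CI
        w = 1.0 / se**2                               # inverse-variance weight
        num += w * log_rr
        den += w
    mean = num / den
    se_pooled = math.sqrt(1.0 / den)
    return math.exp(mean), (math.exp(mean - z * se_pooled),
                            math.exp(mean + z * se_pooled))

studies = [(1.10, 0.80, 1.51), (0.95, 0.70, 1.29), (1.05, 0.85, 1.30)]  # hypothetical
pooled, (lo, hi) = pooled_rr_fixed(studies)
print(round(pooled, 2), round(lo, 2), round(hi, 2))
```

a random-effects model would additionally add a between-study variance (e.g. dersimonian-laird tau-squared) to each study's variance before weighting, which this fixed-effect sketch omits.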
consecutive patients from the prospective nih-funded dash1 study (> 18 years, primary spontaneous ich) were included. phe volume was outlined on t2/flair and ich volume on gre; these and the adc maps were co-registered. tissue characteristics were defined on the co-registered adc maps, with cytotoxic edema (ce) defined by reduced diffusivity and vasogenic edema by adc > 1200 x 10-6 mm2/sec. the associations of clinical and radiographic factors and of bp at baseline and 24 hours with qualitative perihematomal and/or remote ischemia (i.e., dwi bright, adc dark) and with quantitative ce on adc were determined. 164 patients (50% female) with mean age 64 ± 15 and nihss 8 (iqr 3, 17) were included. mri time was 34.6 hours (iqr 21, 60). 37% had lobar ich. ich volume was 12 cc (iqr 5, 33). 44% had perihematomal (33%) or remote (21%) ischemia. 99% of patients had areas of reduced perihematomal adc. ce (> 1 cc) was associated with greater absolute (52 ± 30 mm hg, p = 0.05) and relative (38% ± 16% vs 33% ± 15%, p = 0.058) map reduction, younger age (p = 0.03), history of tia/stroke (p = 0.04) and larger ich volumes (32 vs 7 cc, p < 0.001). in multivariate analysis, map reduction was not significantly associated with ce, whereas ich volume was (p = 0.00). perihematomal and remote ischemia is frequently seen after ich, but the severity of phi is small and of unclear significance. bp reduction may be associated with phi, but this was not an independent predictor. introduction: patients with left ventricular assist devices (lvads) receive anticoagulation and antiplatelet therapy to prevent pump thrombosis. consequently, neurological events including intracranial hemorrhage (ich) are among the most feared causes of morbidity and mortality in these patients. there is little evidence to guide initiation of anticoagulation after such ich events. methods: this is a retrospective, single academic center analysis of lvad patients from 2014-2017. the electronic medical record was reviewed after irb approval for the physiologic, laboratory, and radiographic data of these patients as well as survival or cause of death by 30 days or by discharge. 
results: during the analysis, 51 patients were reviewed, 7 of whom (13.7%) had intracranial hemorrhage. one patient was excluded from analysis after care was transitioned to hospice, so follow-up scans were not obtained. the remaining 6 patients were receiving both aspirin (81-325 mg daily) and warfarin (2-8 mg daily) with an inr of 1.5-3.3 (mean=2.8) at the time of ich. aspirin (81-325 mg daily) was resumed within 1-2 (mean=1.1) days post ich. warfarin was resumed 1-20 (mean=6.6) days post ich at 2-8 mg (mean=4 mg) with a goal inr of 1.7-2 or 2-3 depending on the device. there was 1 death due to withdrawal of life support in the setting of multiple comorbidities, though a follow-up scan 7 days post warfarin resumption revealed no evidence of rebleed. the remaining 5 patients showed no evidence of rebleed on ct scans at 2 months post warfarin resumption and were subsequently discharged to rehab facilities or home with modified rankin scores 0-4 (mean=2.8). conclusion: in this review of 51 lvad patients, about 13% suffered ich; in the 6 analyzed patients, aspirin was safely resumed within 2 days and warfarin was safely resumed as early as 7 days post-event. further studies are needed in order to establish safe practice guidelines and risk factors to prevent ich. intracerebral hemorrhage (ich) remains a devastating form of stroke, and perihematomal edema worsens outcomes after ich. recent studies have demonstrated the safety of minimally invasive surgery (mis) for hematoma removal, but the efficacy of mis in the treatment of ich is controversial. this study aimed to evaluate the effect of mis compared with medical treatment for basal ganglia ich. we retrospectively analyzed the clinical outcomes of prospectively collected data from two stroke centers. the two stroke centers treat basal ganglia ich differently; one performs mis and the other provides medical treatment according to current guidelines. 
we hypothesized that mis could reduce perihematomal edema and improve functional outcomes compared to medical treatment. the primary outcome of this study was the modified rankin scale (mrs) at 3 months after ich occurrence. a total of 155 patients with basal ganglia ich were treated with different treatment strategies; 86 patients underwent mis and 69 patients received medical treatment. no statistically significant differences were found in age, sex, hematoma volume, and glasgow coma scale scores between the groups. better functional recovery (mrs < 3) at 3 months was more frequent in the medical treatment group than in the mis group (46.4% vs 14.0%, p < 0.05). no significant differences were observed between groups in terms of mortality. our findings suggest that best medical treatment improves functional recovery after basal ganglia ich compared to mis. these results are contrary to other studies of ich, and further randomized trials are required. perihematomal edema (phe) after intracerebral hemorrhage (ich) is thought to be predominantly vasogenic. the presence and extent of cytotoxic edema (ce) is controversial. we investigated phe diffusivity (phed) and factors associated with ce. consecutive patients from the prospective nih-funded dash1 study (age > 18 years, primary spontaneous ich) were included. phe volume was outlined on t2/flair and ich volume on gre; these and adc were co-registered. tissue characteristics were defined by adc thresholds, with cytotoxic edema (ce) below a lower adc cutoff and vasogenic edema as adc > 1200 x 10-6 mm2/sec. clinical and radiographic factors associated with ce were determined. cytotoxic edema is detected in the perihematomal area early after ich and is associated with younger age, larger ich and prior h/o tia/stroke. its clinical significance needs to be studied further. hemorrhagic stroke carries a high mortality rate, and determining prognostic factors during initial presentation can aid in directing intensive care unit (icu) management. 
we described the physiological profile and clinical outcomes of hemorrhagic stroke patients in a colombian icu. we retrospectively reviewed all hemorrhagic stroke patients admitted to our icu from 2014-2017. clinical characteristics, outcomes, available laboratory values and hourly vital signs from the first 48 hours in the icu were retrieved and analyzed. our primary stroke center admitted 395 patients, of whom 72 (18%) were hemorrhagic. of these, 47 required icu management, representing 1% of the total 4687 icu admissions during this time frame. intracerebral hemorrhage (ich) was present in 32 patients while subarachnoid hemorrhage (sah) was seen in 15. the latter had a median fisher score of 4. for all patients, the most common risk factors were hypertension (61.7%), dyslipidemia (29.8%) and smoking (17.0%). icu mortality was 25.5% (12.5% with ich and 53.3% with sah). mean sequential organ failure assessment (sofa) score was significantly greater in patients who died (8.8 vs. 1.8, p<0.05) and mean glasgow coma scale was significantly lower (8.1 vs. 13.2, p<0.05). vasopressors were required in 18 patients (38.3%), mechanical ventilation in 22 (46.8%), and half of the patients requiring either support died. only 5 patients (10.6%) had fever in the first 48 hours, and all of them died. the mean coefficient of variation for systolic, diastolic and mean blood pressure was significantly lower in patients who survived. patients who died were more likely to have hypokalemia and hypomagnesemia than surviving patients (50.0% vs. 25.7% and 58.3% vs. 40.0%, respectively). icu-admitted hemorrhagic stroke patients have a poor prognosis. sofa and gcs are accurate predictors of mortality. certain electrolyte disturbances, fever and a higher variation of blood pressure during the first 48 hours were associated with a worse outcome. the association between worsening cerebral edema and unfavorable outcome in ich patients has been described in rcts. 
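the blood-pressure variability measure used above, the coefficient of variation, is just the standard deviation expressed as a percentage of the mean; a minimal sketch with hypothetical hourly readings (not study data):

```python
from statistics import mean, stdev

def coeff_variation(readings):
    """Coefficient of variation (%) of serial BP readings: 100 * sd / mean."""
    return 100.0 * stdev(readings) / mean(readings)

# hypothetical hourly systolic pressures over the first hours in the icu
stable = [150, 148, 152, 149, 151, 150]   # low variability
labile = [150, 120, 180, 110, 190, 150]   # high variability
```

a higher coefficient of variation flags the more labile series even when the two have the same mean, which is the property the study exploits when comparing survivors and non-survivors.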
the objective of this analysis was to compare hospitalized spontaneous ich patients with and without perihematomal edema (phe) expansion and to evaluate relationships between hypertonic saline (hts) use, peak serum na, phe expansion, and short-term outcomes. we conducted a cross-sectional study of consecutive spontaneous ich patients admitted to a single center from 11/2014-11/2015. head cts during the first week of admission, use of hts, and phe (using the abc/2 method) were evaluated. phe expansion of 10% or more was considered worsening edema. outcomes of interest included time to peak na, poor disposition (not home or inpatient rehabilitation), discharge mrs 4-6, and in-hospital death. of 180 ich patients, 43% experienced worsening phe. there was no difference in age, race, sex, arrival bp, or vascular risk factors in patients with or without worsening phe. however, for each mm of midline shift (mls) present on initial head ct, the odds of phe expansion decreased by 16% (or 0.84, 95% ci 0.73-0.97, p=0.014). mls on initial head ct was the best discriminator of phe expansion (auc 0.811, 95% ci 0.663-0.958). although hampered by small sample size, our data indicate that the degree of mls on initial head ct is the best radiographic predictor of phe expansion, with greater mls associated with lower odds of expansion. those without mls at presentation may be at risk of phe expansion and, counterintuitively, may be those most in need of aggressive medical management; this may suggest a role for intensive osmotherapy in patients with favorable imaging at presentation. intracerebral hemorrhage (ich) is a devastating stroke with high mortality rates. previous studies have shown a potential role of immune cells as a prognostication method. a high neutrophil to lymphocyte ratio was associated with poor outcomes after ich. we sought to determine whether absolute lymphocyte count (alc) at admission was predictive of outcomes in patients with ich. 
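the abc/2 method referenced above approximates a hematoma or edema region as an ellipsoid from three orthogonal measurements, and the study flags expansion at 10% or more over baseline; a minimal sketch (the 10% threshold follows the abstract, the measurements below are hypothetical):

```python
def abc_over_2(a_cm, b_cm, c_cm):
    """ABC/2 ellipsoid volume estimate in mL: A = largest diameter,
    B = diameter perpendicular to A on the same slice, C = craniocaudal
    extent (number of slices x slice thickness), all in cm."""
    return (a_cm * b_cm * c_cm) / 2.0

def worsening_phe(baseline_ml, followup_ml, threshold=0.10):
    """Flag perihematomal edema expansion of 10% or more over baseline."""
    return (followup_ml - baseline_ml) / baseline_ml >= threshold
```

for example, a 4 x 3 x 2 cm region estimates to 12 ml, and a follow-up measurement 15% above baseline would be counted as worsening edema while a 5% increase would not.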
we performed a retrospective chart review of all patients admitted to our hospital with a diagnosis of ich from january 2012 to december 2016. we collected baseline demographic characteristics, medical history, ich scores, and differential leucocyte, platelet and total leucocyte (tlc) counts at admission. functional outcomes after ich were measured using the modified rankin scale (mrs) at discharge. mrs of 5 and 6 were considered poor outcomes. statistical analysis was done after grouping lab values into higher and lower groups with respect to the normal reference ranges. a total of 257 patients with ich were admitted to our center during the study period. 111 patients were included in the study and the rest were excluded due to lack of differential leucocyte counts at admission. 34% (38 of 111) had poor outcomes. univariate analysis using fisher's exact test showed a significant association between low alc levels and poor outcomes; elevated tlc (>10.5) and other leucocyte abnormalities were also significantly associated with worse outcomes (p = 0.0036, 0.0001 and 0.0022). however, after multivariate analysis, only low absolute lymphocyte count retained a significant association (p = 0.0268). intracerebral hemorrhage patients with low absolute lymphocyte counts at admission have a higher probability of poor outcomes at discharge. further studies are required to confirm our results. intraventricular hemorrhage (ivh) is a significant predictor of poor outcome after intracerebral hemorrhage (ich), and may differentially predict hydrocephalus and mortality among blacks vs. nonblacks. we aimed to confirm these findings in a separate cohort of spontaneous ich patients with severe ivh. the cleariii-ivh trial was a randomized, multi-center, placebo-controlled trial examining the effect of intraventricular alteplase versus saline on outcomes in patients with spontaneous ivh. 
we retrospectively analyzed data on all 500 patients, including self-reported race/ethnicity, medical comorbidities, presentation characteristics and functional outcomes. represented race/ethnic groups with >5 subjects per group were 252 (50.4%) white/non-hispanic (wnh), 53 (10.6%) white/hispanic (wh), 168 (33.6%) black/african american/non-hispanic (bnh), and 14 (2.8%) asian. bnh were significantly younger than the rest of the cohort, with median age [interquartile range] 55 [46, 60] years, had more hypertension (97%, p=0.056), and had significantly higher rates of antihypertensive medication non-compliance (70.8%, p=0.048). wnh had more frequent coronary artery disease (14.7%, p<0.001), use of vitamin k antagonists (10.7%, p=0.002) and elevated inr on presentation (9.1%, p=0.002). bnh had significantly more frequent hydrocephalus on presentation (23.8%, p=0.003), and a higher rate of ventriculoperitoneal shunt placement (25%, p=0.015). neither ich nor ivh volume at enrollment, nor ivh remaining at end of treatment, differed significantly between race/ethnic groups. however, bnh patients were more likely to have greater than 80% ivh reduction, a recognized endpoint for better functional outcomes in cleariii (26.8% vs. 25%-wh; 17.1%-wnh; 7.1%-asian; p=0.019), and this difference persisted in those who received intraventricular alteplase (p=0.04) and after adjustment for diagnostic ivh volume (p=0.03). race/ethnicity was not an independent predictor of mortality or poor outcome at 30 or 180 days on multivariable logistic regression. although functional outcomes did not differ significantly among race/ethnic groups, differences in risk factors, hydrocephalus/shunting post ivh and response to thrombolytic therapy warrant further exploration. 
investigators from the randomized trial of unruptured brain arteriovenous malformations (avm), aruba, reported in 2013 that interventions to obliterate unruptured avms resulted in greater morbidity and mortality compared to medical management. we investigated whether patterns of avm treatment changed after aruba's publication. we used inpatient and outpatient claims data from 2009-2015 from a nationally representative 5% sample of medicare beneficiaries. unruptured brain avms were identified using icd-9-cm code 747.81. the date of first avm diagnosis was coded as occurring before or on november 20, 2013 (online publication of aruba) versus after. outcomes were referral to a neurologist or neurosurgeon, and interventional treatment. interventional treatments were identified using cpt codes 61680-61692, 61623-61624, 61796, 61798, or 77371-77372. the likelihood of outcomes after versus before aruba was compared using survival analysis with log-rank tests and cox proportional hazards models adjusted for age, sex, race, and the charlson comorbidity index. we censored patients at diagnosis of intracranial hemorrhage. we identified 1,444 patients with a mean 2.8 (±2.0) years of follow-up after diagnosis of unruptured brain avm. diagnosis was most often by neurologists (20.6%), neurosurgeons (14.4%), and internal medicine specialists (11.3%). after aruba's publication, there were no changes in 1-year cumulative rates of referral to a neurosurgeon (26.9% after, 26.8% before; p = 0.99) or neurologist (46.8% after, 47.7% before; p = 0.97), but there was an increase in avm treatment (7.0% after, 3.5% before; p = 0.0096). after adjustment for demographics and comorbidities, there was an increased likelihood of interventional management (hr 1.9; 95% ci, 1.1-3.2) after aruba's publication. in a nationally representative cohort of elderly patients, we found an increase in interventional avm management after publication of aruba. 
this is notable given that our data pertain to older patients who are generally seen as less suitable surgical candidates. elderly patients with severe intracerebral hemorrhage (ich) are often projected to have future functional dependence but unclear degree of cognitive recovery. surrogates for such patients frequently weigh multiple concerns when facing the difficult decision of whether to prolong life with tracheostomy and gastrostomy tube insertion versus pursue comfort care. we aimed to characterize distinct groups of surrogates in these situations, based solely on how they prioritize their concerns. subjects recruited from a probability-based us population sample completed an online best-worst survey that presented the above scenario and asked the respondent to prioritize concerns as the patient's surrogate. clusters were identified with latent class analysis after weighting data to match the us census demographic distribution. class solutions were replicated 20 times from random starting seeds, with the solution chosen after factoring in akaike's information criterion. we identified 4 distinct decisional groups among 792 respondents (response rate = 44.6%). all groups reported multiple concerns as important, but group 1 (34.4%) was more concerned than any other that the patient was too old to live with disability. group 2 (26.4%) focused on ensuring agreement among other family members. group 3 (20.7%) was concerned that the patient might suffer if tube feeding and iv fluids were stopped and that the prognosis could be incorrect. group 4 (18.6%) had numerous considerations that were comparably important but prioritized paying for long-term care. groups varied in whether they would actually request prolonging care for the patient (group 1 = 5.20%, g2 = 9.46%, g3 = 31.13%, g4 = 23.12%, p<0.0001). in a multivariate model, religious affiliation and frequency of attending religious services were the only variables independently predicting group membership. 
we identified 4 distinct profiles of decisional patterns for surrogates of severe ich patients with uncertain prognosis. these data will inform development of strategically tailored decision aids. cerebral venous sinus thrombosis (cvst) represents an important cause of both ischemic and hemorrhagic strokes in young people. while recent guidelines recommend management in a stroke unit, the impact of neurocritical care in this condition has not been studied. we aimed to assess whether the introduction of a neurocritical care program influenced clinical outcomes in cvst patients. we retrospectively reviewed electronic medical records of adult patients admitted to yale new haven hospital's neuroscience icu (nicu) between 2011 and 2017 with a diagnosis of cvst. demographics, vascular risk factors, comorbidities, length of stay and discharge modified rankin scale (mrs) were collected. patients were excluded for age under 18 years or for diagnosis made more than 24 hours after presentation. we compared two time periods, before (epoch 1, 2010-2012) and after (epoch 2, 2013-2017) the introduction of continuous staffing of cvst cases by neurointensivists in the nicu. univariable and multivariable logistic regression were utilized to model the odds of poor outcome (dichotomized mrs 0-2 vs 3-6). fifty-three patients with cvst met the inclusion criteria during the study period (mean age 39 (+/-17) years, 51% female). 20 patients were identified for epoch 1 and 33 patients for epoch 2. overall, 40 patients (76%) had a good (mrs 0-2) outcome. for epochs 1 and 2, good outcomes were observed in 12 (60%) and 28 (85%) patients, respectively (p=0.04). in both univariable and multivariable regression analysis (adjusted for age and sex), admission during epoch 2 was associated with significantly reduced odds of a poor outcome (or 0.27, ci 0.07-0.98; p=0.048 and or 0.27, ci 0.07-1.00; p=0.05, respectively). in this small, single-center cohort of patients with cvst, most patients experienced a good outcome. 
the institution of continuous neurointensivist coverage was independently associated with better outcomes. further validation in prospective, multicenter cohort studies is needed. thrombelastography (teg) provides a dynamic assessment of clot formation, strength, and stability. we examined relationships between teg parameters and outcomes from intracerebral hemorrhage (ich). we prospectively enrolled patients with spontaneous ich from 2009 to 2017. teg was performed at the time of admission. we divided patients into two groups based on the presence or absence of hematoma expansion (he). clinical characteristics, baseline teg values, and outcomes were compared between the two groups. multivariable regression analysis was conducted to compare the differences of teg components between the two groups after adjusting for potential confounding effects. we included 275 patients, 67 (24%) with he and 210 (76%) without he. patients with he were more often male and had higher rates of aspirin use, lower incidence of intraventricular hemorrhage, and larger baseline hematoma volumes. after controlling for potential confounders, mean r time was longer in patients with he (5.3 ± 0.4 vs. 4.8 ± 0.2 min) and was independently associated with a significantly higher risk of he (or 2.63, 95% ci: 1.49, 4.65; p<0.001). patients with hematoma expansion were more likely to have poor neurological outcome (mrs 4-6) at discharge (91% vs. 73%, p=0.002) and had higher mortality rates (28% vs. 14%, p=0.005). overall, 47 patients (17%) died in the hospital. following multivariable analysis, patients who died had significantly lower mean delta (0.4 ± 0.2 vs. 0.6 ± 0.2 min; p=0.04) and smaller angle (66.1 ± 1.9 vs. 69.9 ± 3.1 degrees; p=0.01) than those who lived. hematoma expansion and mortality from ich are independently associated with slower clot formation on teg. baseline teg identifies significant coagulation disturbances which may predict poor outcome and represent potential targets for therapeutic intervention. 
intracerebral hemorrhage (ich) patients often present with acute hypertension requiring intravenous and enteral antihypertensive medications. we performed a cohort study to determine clinical predictors of time to enteral antihypertensive medication and its effect on icu length of stay (icu los). we identified consecutive spontaneous ich patients admitted from 12/2013 to 9/2015 to a tertiary center, and excluded those transitioned to comfort care (cmo) within 24 hours of admission. we calculated time from hospital admission to first enteral (oral or feeding tube) antihypertensive. we abstracted demographic and clinical variables. two reviewers examined medical records and classified ich location and etiology. we determined univariate and adjusted associations of clinical variables to time to enteral antihypertensive medication and performed regression analysis to determine effect on icu los. we identified 495 patients and excluded 50 for early transition to cmo. endotracheal intubation (p=0.03), higher ich score (p<0.001), no outpatient antihypertensive use (p=0.001), and non-lobar ich location (p=0.005) predicted longer time to starting enteral antihypertensive in adjusted analysis. presenting systolic or diastolic bp, time of icu admission (day vs. night), sex, and race were not significant predictors of time to enteral antihypertensive. time to enteral anti-hypertensive is the strongest predictor of icu los (p<0.0001) after adjustment for age, gcs, ich score, sex, race, and duration of iv antihypertensive infusion. patients with higher ich scores, intubation, no prior antihypertensive use, and non-lobar ich are at risk for increased time to enteral antihypertensive administration. timely enteral antihypertensive administration is an important and potentially modifiable predictor of icu los in acute ich. 
overall mortality from intracerebral hemorrhage (ich) represents a combination of mortality from a potentially fatal disease and practice variation around withdrawal of care. early do-not-resuscitate (dnr) rates are independently associated with in-hospital mortality and may serve as a proxy for withholding aggressive care. the 2007 american heart association (aha) guidelines recommended that dnr orders should not be applied before 24 hours, out of a concern that less aggressive care would lead to a self-fulfilling prophecy and excess mortality. we performed a retrospective analysis of temporal trends among primary ich patients presenting to all nonfederal emergency departments in california from 1999 to 2014 using data from the office of statewide health planning and development (oshpd). demographic information, clinical covariates (such as mechanical ventilation, craniotomy), and early dnr status within 24 hours were collected and analyzed using segmented regression to evaluate for differences in linear trends from 1999-2006 compared with 2007-2014. use of early dnr orders for ich patients has steadily decreased over the last 15 years, even after adjusting for age and disease severity. the pace of this downward trend did not significantly change around the time when recommendations on early dnr use for ich in aha guidelines were revised in 2007. spontaneous intracerebral hemorrhage (ich) is a common form of stroke that often results in severe morbidity or death. for most ich, there are no proven therapies for acute management. evidence suggests minimally invasive surgical evacuation of ich may result in improved patient outcomes. the enrich clinical trial is designed to determine the efficacy and economic impact of early ich evacuation using minimally invasive, transulcal, parafascicular surgery (mips) compared to standard guideline-based management. 
in this abstract we present the trial design and rationale at the foundation of the enrich clinical trial. enrich is an adaptive, prospective, multi-center clinical trial designed to enroll up to 300 patients with acute ich. patients are block-randomized based on hemorrhage location (lobar vs basal ganglia) 1:1 to mips or standard management. included patients are aged 18-80 years, with gcs 5-14 and baseline mrs 0-1, presenting within 24 hours from last known well and found to have a spontaneous, cta-negative, supratentorial ich (30-80 ml). primary efficacy will be determined by demonstrating significant improvement in the mean utility-weighted mrs at 180 days after enrollment. the economic effect of mips will be determined by quantifying the cost per quality-adjusted life-years gained at pre-determined time points. the rationale for early intervention is to interrupt the time-dependent ich-related pathophysiology caused by mechanical pressures and the pro-inflammatory secondary cascade that leads to worsened cellular injury and edema formation. the planned enrichment strategy acknowledges that hemorrhages in varied locations may have a differential response to mips. study adaptation, in the form of enrichment, may occur if pre-determined futility rules are met for the primary outcome in either of the two locations. enrich is designed to establish the clinical and economic value of early mips in the treatment of ich. enrollment was initiated in december 2016. early seizures (< 30 days) after intracerebral hemorrhage (ich) may be associated with the presence and degree of perihematomal cytotoxic injury. we explored the association between perihematomal diffusivity (phd) and early seizures after ich. consecutive patients from the prospective nih-funded dash study (age > 18 years, spontaneous ich) were included. all patients had multimodal mri within 2 weeks. perihematomal edema (phe) volume was outlined on t2/flair and ich volume on gre; these and adc were coregistered to analyze phd. 
eeg monitoring was performed for clinical suspicion of seizure. mean adc values of phe and the percentage of phe volume were compared between the seizure and no-seizure groups, with adc values > 1200 classified as vasogenic edema. results: 26 (11%) of a total of 229 patients had early seizures, at a median of day 2 post ich. mean adc in the phe region was higher in the seizure group (mean: 1169 +/- 127 vs 1066 +/- 154, p=0.001). ich, absolute, and relative phe volumes were not different between groups. the phe of the seizure group had a lower percentage of cytotoxic edema (5% vs 8%, p=0.007) and a higher percentage of vasogenic edema (43% vs 30%). the percentage of phe with adc > 1300 was the most predictive of seizure, with auc = 0.73, though adc thresholds between 1200 and 1400 had largely similar aucs. phe volume of >24% (of adc > 1300) identified patients with seizure with sensitivity of 0.81 and specificity of 0.63, and remained significant in multivariable analysis. patients with early post-ich seizures have higher mean perihematomal adc and a larger percentage of vasogenic edema in the perihematomal region. vasogenic edema due to bbb breakdown and perihematomal inflammation, rather than cytotoxic injury, is associated with early post-ich seizures. novel neuroprotective treatments hold the promise to improve patient outcomes by broadening time windows of intervention and reducing hypoperfusion and reperfusion injury in the era of mechanical thrombectomy for acute ischemic stroke. hibernating species, such as arctic ground squirrels (ags), demonstrate remarkable resilience to ischemic and reperfusion injuries. bioinformatic analyses of the genomes of hibernating species reveal signatures of convergent evolution in genes regulating stability and formation of mitochondrial respirasomes. hypoxia pre-conditioning (hpc) also leads to improved survival upon subsequent exposure to hypoxia, and is associated with increased stabilization of respirasomes. 
the respirasome is a macromolecule consisting of oligomers of complexes i, iii, and iv. cox7a2l is a key mediator of respirasome stability via interactions with complexes i and iii. in this study, we explored the role of cox7a2l in mediating respirasome stabilization in ags neural stem and progenitor cells (nsc/npcs) as well as mouse nsc/npcs exposed to hpc. respirasome stability was assessed using blue native gel electrophoresis and mitochondrial metabolism was assessed by measuring oxygen consumption in vitro (seahorse metabolic analyzer). exposure to mild hypoxia and induction of hif lead to stabilization of respirasomes, upregulation of hif, and modulation of mitochondrial metabolism. interestingly, overexpression of the ags isoform of cox7a2l, which has amino acid substitutions in residues mediating respirasome stability, recapitulates the effects of hypoxia on respirasome stability and mitochondrial metabolism without altering hif expression. targeting respirasome stability by modulating cox7a2l is a potentially novel neuroprotective target for treatment of ischemic injuries. testing of these hypotheses in pre-clinical models of stroke is on-going. acute stroke symptoms require timely diagnostics to ensure the best outcomes. as a non-academic, community-based center located in rural western nc, where we are the region's only comprehensive stroke center, we developed a process to move stroke patients quickly and directly from ct to interventional radiology when applicable. a smooth transition reduces the time from imaging to the interventional suite, ultimately reducing the time it takes to prepare to actively treat a patient. interventional radiology value stream mapping started in june 2016. a multidisciplinary team worked in multiple work groups to design and create "code ir stroke now". a flow chart was created to show the multiple moving parts simultaneously and to streamline the transition from the er (sometimes including triage from the region) to ir. 
ir "ready" criteria were established, along with er and ir checklists, followed by a post-procedure debrief and treatment plans/order sets to standardize care and documentation. the first mock code ir was run on 3/28/17 and was critiqued and refined; the "go live" date was 4/3/17. we continue process improvement today. in the first three months, 13 patients have gone through this process. average compliance for the goal door-to-puncture time of <90 min went from 78.6% to 100%. door-to-groin times were reduced from 72 minutes to 62 minutes. our performance is 26 minutes quicker than other comprehensive stroke centers (88-minute average, gwtg database). we saved an average of 19 million neurons per patient, for a total of 247 million neurons saved on average since 4/3/17! door-to-groin times can be reduced with a streamlined approach to care. a multidisciplinary team approach, including house supervisors, anesthesia, and switchboard in addition to the bedside staff and providers, can make a smooth transition from the time a large vessel occlusion is identified to getting the patient to the interventional suite. activation of the "code ir stroke now" page activates this team 24/7. it is unknown whether antithrombotics for secondary stroke prevention in patients with acute ischemic stroke (ais) due to infective endocarditis (ie) reduce the rate of secondary ais or increase major bleeding. we conducted a multi-center, retrospective cohort study from 2007-2015 of patients with ais secondary to left-sided ie, separated into two groups (antithrombotic vs no antithrombotic). antithrombotics included antiplatelets and/or therapeutic anticoagulation. the primary outcome was a composite of recurrent ais and major bleeding. secondary outcomes included ais and major bleeding individually. a binary logistic regression model adjusted for age and native vs prosthetic valve involvement was used for outcome evaluation. the final analysis included 123 patients (91 antithrombotic vs 32 no antithrombotic). 
median age was 61 years and 25 (20%) patients had prosthetic valve infections. infecting organisms were mostly methicillin-sensitive s. aureus (29%) or streptococcus spp. (26%). valve repair/replacement occurred in 34 (28%) patients. aspirin with or without another antithrombotic (85%) was the most common antithrombotic treatment. the primary outcome occurred in 25.3% vs 15.6% of patients with antithrombotics vs no antithrombotics, respectively (or 2.2; 95% ci 0.7 to 6.9). ais (6.6% vs 2.9%; or 2.2; 95% ci 0.2 to 20) and major bleeding (18.7% vs 12.5%; or 2.3; 95% ci 0.7 to 8.1) were similar between groups. a subgroup analysis of aspirin monotherapy vs no antithrombotic yielded similar results for the primary outcome (28.6% vs 15.6%; or 3.1; 95% ci 0.9 to 11.1) and ais (7.1% vs 3.1%; or 2.3; 95% ci 0.2 to 25.0). major bleeding was increased, however (26.2% vs 12.5%; or 4.1; 95% ci 1.01 to 16.7; p=0.048). antithrombotics after ais secondary to ie were not associated with a decrease in recurrent ais or an increase in major bleeding. aspirin monotherapy was associated with an increase in major bleeding without any reduction in ais. malignant hemispheric stroke (mhs) represents between 2-8% of all hospitalized ischemic strokes in the united states. pooled analysis of 3 european studies has demonstrated that decompressive hemicraniectomy (dchc) for mhs reduces mortality compared with conservative medical management and may also improve functional outcomes. these trials, however, excluded patients with major medical comorbidities that might confound clinical outcomes. apache ii and sofa scores are validated icu scoring systems to help characterize disease severity and estimated hospital mortality. this study aims to evaluate apache ii and sofa scores in predicting outcomes for patients undergoing dchc for mhs. this is a single-center retrospective analysis of patients who underwent early dchc for mhs between may 2010 and january 2017 at unc chapel hill. 
apache ii and sofa scores were calculated for the date of admission or the date of first presentation to neurologic care. outcomes included mortality at discharge, mortality at 30 days, and functional outcome at last follow-up, up to one year. multivariate analysis included timing of surgery, age, laterality, and presence of midline shift, hemorrhage, or multiple-territory infarction. we identified 44 patients who met inclusion and exclusion criteria. the median age was 58 (25 to 78), -nine percent of patients received surgery by hospital day 2. full statistical analysis is pending. we hypothesize a positive correlation between icu severity scores and mortality. given that apache ii and sofa scores capture the effects of acute and chronic disease that would affect patient recovery, we hope to provide a more comprehensive prognostication of outcomes following surgery to help guide physicians and family members of these patients in their decision-making process. we conducted this study to investigate the effects of decompressive craniectomy (dc) combined with hypothermia on mortality and neurological outcomes in patients with large hemispheric infarction. within 48 hours of symptom onset, patients were randomized to one of the following three groups: dc group, dc plus head-surface cooling (dcsc) group, and dc plus endovascular hypothermia (dceh) group. we combined the data of the dcsc and dceh groups into a dch group during analysis. the primary endpoints were mortality and modified rankin scale (mrs) score at 6 months. there were 9 patients in the dc group, 14 patients in the dcsc group and 11 patients in the dceh group. for all patients, mortality at discharge and after 6 months was 8.8% (3/34) and 35.3% (12/34), respectively. the dch group had lower mortality, but the difference was not statistically significant (at discharge, 4.0% vs. 22.2%, p=0.098; 6 months, 32.0% vs. 44.4%, p=0.449).
after 6 months, 22 patients survived, and 54.5% of the surviving patients had good neurological outcomes (mrs score of 1-3). the dch group had better neurological outcomes, but this difference was also not statistically significant (10/17, 58.8% vs. 2/5, 40.0%; p=0.457). the total number of patients experiencing complications in the dc group and the dch group was 15 (7.2%) and 59 (10.3%), respectively. treatment with hypothermia led to decreased mortality and improved neurological outcomes in lhi patients who received dc. a multi-center rct is needed to confirm these results. destiny ii investigated hemicraniectomy in patients 61 years and older for the treatment of malignant cerebral edema. we sought to describe the treatment effect of early hemicraniectomy in destiny ii, using number needed to treat to benefit (nntb) and benefit per hundred (bph) treated at 6 and 12 months. as an mrs of 5 is generally undesirable, we also present nntb and bph excluding this outcome. for all possible dichotomizations of the mrs, net nntb was derived by taking the inverse of the absolute risk difference, and net bph by multiplying the absolute risk difference by 100. for benefits simultaneously across all disability transitions on the mrs, nntb and bph estimates were derived using joint outcome tables: 1) algorithmic minimum and maximum and 2) four independent experts. the expert data are presented as geometric means. the algorithmic nntb was 2.34 (range 1.89-2.78) at 6 months and 2.71 (2.38-3.03) at 12 months, while bph was 44.5 (36-53) and 37.5 (33-42). the expert nntb was 2.25 (2.08-2.56) at 6 months and 2.78 (2.63-2.94) at 12 months, and the bph was 44.4 (39-48) and 35.9 (34-38), respectively. excluding mrs 5, the algorithmic nntb was 4.38 (range 4-4.76) at 6 months and 4.44 (3.35-4.55) at 12 months, while bph was 23 (21-25) and 22.5 (22-23). the expert nntb was 4.36 (4.0-4.76) at 6 months and 4.4 (4.35-4.54) at 12 months, and bph was 22.9 (21-25) and 22.7 (22-23), respectively.
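the two single-dichotomization formulas quoted in the destiny ii abstract above (nntb as the inverse of the absolute risk difference, bph as that difference times 100) can be sketched directly; the example risk difference of 0.40 is illustrative, not a destiny ii value:

```python
def nntb(absolute_risk_difference: float) -> float:
    """number needed to treat to benefit: inverse of the absolute risk difference."""
    return 1.0 / absolute_risk_difference

def bph(absolute_risk_difference: float) -> float:
    """benefit per hundred treated: absolute risk difference times 100."""
    return absolute_risk_difference * 100.0

# illustrative: a 40-percentage-point absolute risk difference
print(nntb(0.40))  # 2.5 patients treated per additional good outcome
print(bph(0.40))   # 40 additional good outcomes per hundred treated
```

the abstract's whole-scale estimates layer joint outcome tables on top of these dichotomized quantities, which is why its pooled nntb values are not simple inverses of the quoted bph figures.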
early systematic hemicraniectomy improves outcome (including mrs 5) for every 2-3 patients treated. excluding patients with mrs 5, hemicraniectomy improves outcome for every 4.4 patients treated. the algorithmic range provides bounds to the data, while the expert geometric mean provides the most accurate point estimate. these data provide a powerful tool to describe the potential treatment outcomes to families during the first day following a malignant middle cerebral artery infarction. background: cerebral bypass surgery is performed to restore, or revascularize, blood flow to the brain. previous studies have not shown whether emergency surgical reperfusion therapy may be effective in acute ischemic stroke patients with large artery occlusion and hemodynamic deterioration. objective: to evaluate the effect of emergency sta-mca bypass surgery on the outcome of hemodynamically compromised patients who had progressive or fluctuating stroke despite best medical treatment. we retrospectively reviewed the clinical and radiological data of 57 consecutive patients treated by either emergency bypass surgery (31 cases, 54.4%) or elective bypass surgery (26 cases, 45.6%) for large artery occlusion at a single center. the effect of surgical therapy was measured with the modified rankin scale (mrs) at 3 months. clinical severity was evaluated with the national institutes of health stroke scale (nihss) between the pre- and post-operative states. major perioperative complications were defined as any hemorrhagic stroke, myocardial infarction, or death. results: occlusive sites were the cervical internal carotid artery in 32 (56.1%) patients and the middle patients in emergency surgery group and 24 (92.3%) patients in elective surgery group. emergency bypass surgery improved nihss (preoperatively, 9 [4-16]; 2 weeks postoperatively, 6 [2-11]).
major perioperative complications within 30 days occurred in three patients (9.7%) after emergency bypass surgery and four patients (15.4%) after elective bypass surgery. emergency revascularization surgery may be an effective alternative treatment, without significant complications, for acute ischemic stroke patients with hemodynamic deterioration refractory to maximal medical treatment. a larger randomized clinical study is needed to evaluate the effect of emergency revascularization surgery in acute hemodynamic deterioration. multiple studies have reported lower mortality rates in obese patients with various cardiovascular disorders, a phenomenon known as the 'obesity paradox'. such a relationship has been largely unreported in patients with neurological pathologies, especially stroke. this study reports the effect of obesity on prognosis in patients with ischemic stroke. analysis of national inpatient sample data (2003-2013) showed a total of 1,168,847 patients discharged with a primary diagnosis of is (icd-9 codes 433.xx and 434.xx). patients with obesity were identified using agency for healthcare research and quality (ahrq) criteria. we used binary regression to compare in-hospital mortality between obese and non-obese patients with ischemic stroke. from 2003-2013, 1,168,847 patients with ischemic stroke were identified, of which 8.7% were found to be obese. obese patients with ischemic stroke were more often younger, female, and african american as compared to caucasian. after risk adjustment for demographics and baseline comorbidities, obese patients with ischemic stroke had lower observed in-hospital mortality as compared with non-obese patients with ischemic stroke (2.5% vs 4%; or 0.717, ci 0.681-0.756, p<0.001). from an eleven-year nationwide cohort of patients with ischemic strokes, we observed a significant protective effect of obesity and better prognosis, including a lower mortality rate.
more prospective studies are warranted to further analyze this counter-intuitive trend. very early mobilization of critical care patients improves outcomes, length of stay, and patient satisfaction. data on the efficacy of very early mobilization for stroke patients have been mixed, and there are limited outcomes data for patients mobilized within 24 hours of receiving intravenous alteplase (iv tpa). the objective of this retrospective observational study was to determine if patients receiving iv tpa who were mobilized earlier were more likely to be discharged home. medical records of ischemic stroke patients who received iv tpa between 2012 and 2016 at two urban facilities were reviewed for mobility protocol activities. patients were excluded if they received endovascular treatment, were placed on comfort care on day zero or one, were mobilized after the first 24 hours, were transferred out, or left against medical advice. multinomial regression was used to determine if there were significant differences in patients' discharge status by time first mobilized, adjusting for stroke severity using the national institutes of health stroke scale (nihss), age, and gender. of the 268 patients included, 46.6% (n=125) were female, mean age was 68.6 (±14.8), and the median admit nihss was 5.0 [iqr: 3.0, 10.0]. the median time first mobilized was 9.0 hours [iqr: 4.3, 15.0]; 71.3% (n=191) of patients were discharged to home, 12.3% (n=33) to a skilled nursing facility (snf), 13.8% (n=37) to an inpatient rehab facility (irf), and 2.6% (n=7) to hospice or expired. there was suggestive but inconclusive evidence for a relationship between time first mobilized and discharge to snf versus home (p=.071). for every one-hour increase in time to mobilization, patients were 1.07 (95% ci 1.00-1.15) times more likely to be discharged to snf than home. this study reveals very early mobility is potentially efficacious after iv tpa.
longer time to first mobility was associated with discharge to a skilled nursing facility, although this was not statistically significant. medical management of cerebral edema after large-volume stroke varies greatly across institutions. hypertonic saline has emerged as a common treatment strategy to attempt to reduce edema and theoretically prevent the need for decompressive hemicraniectomy. there is no established protocol for hypertonic saline administration, and there have been concerns regarding safety. in a single-center retrospective analysis, we identified patients who received hypertonic saline for malignant edema after an ischemic stroke involving the entire hemisphere or diffuse middle cerebral artery (mca) territory. we compared patients who received continuous infusions of 2% or 3% hypertonic saline to those who received continuous infusions with boluses of 23.4%. the primary endpoint was time to goal sodium (150). secondary endpoints included the need for surgical decompression and adverse events. we included 28 patients who received only continuous infusions of hypertonic saline and 20 patients who received a combination of continuous infusions and bolus doses. we found no significant difference in the number of patients who reached the goal sodium (14 vs 14, respectively; p=0.17) or in time to goal sodium (20 hours vs 23.5 hours, p=0.68). there was a significant difference in the number of patients who underwent surgical decompression (4 vs 8, p=0.04). there was not a significant difference in the rate of acute kidney injury or development of acidosis between groups (5 vs. 5, p=0.55). both hypertonic strategies appear to be safe. bolus dosing, on review, was more often instituted during clinical deterioration, accounting for the higher rate of surgical intervention. we feel we can safely be more aggressive earlier in the clinical course to potentially avoid surgical decompression.
furthermore, we may need to look more closely at our target sodium, evaluating whether it should be based on the patient's baseline sodium or a universal value. even though recanalization is strongly associated with improved functional outcomes and reduced mortality, clinical benefit from thrombolysis is reduced as stroke onset to treatment time increases. in the recent study, endovascular treatment(evt) has been demonstrated to improve functional outcome in patients with acute ischemic stroke (ais) within the time window of onset to 6 or 8 hours. however, beyond usual thrombolysis time window, early neurologic deterioration(end) related with proximal artery occlusion is not uncommon in ais. with this, we report ais case series treated with evt because of end related proximal artery occlusion. from january 2012 through march 2017, all 261 patients underwent iat for ais with anterior circulation stroke. among them, twenty-four patients underwent evt due to end. at admission, all twenty-four patients showed near to complete occlusion of a proximal artery and had diffusion-perfusion mismatch. mean age was 68. initial median initial national institutes of health stroke scale (nihss) was 5 and nihss after end was 12. all patients had diffusion-perfusion mismatch over 200%. seven patients treated with iv-tpa before evt. good recanalization (tici 2b/3) was achieved in 91.7%. the hemorrhagic complication was seen in the follow-up computed tomography scan in 4 of 24 cases: three were hemorrhagic transformation, another was the subarachnoid hemorrhage. the thromboembolic mortality case. in our report, evt in ais with end achieved safe and successful recanalization. and successful recanalization was associated with good clinical outcome. we think evt could be a useful method in case of end in ais patients with proximal artery near to complete occlusion, even beyond usual 6 to 8 hours time window for evt. 
jugular bulb venous monitoring can provide information about cerebral hemodynamics and metabolism. we investigated the feasibility and clinical application of jugular bulb venous monitoring in acute ischemic stroke patients in the neurocritical care unit. from march 2015 to june 2017, we conducted jugular bulb venous monitoring in 33 patients at a tertiary referral hospital. five patients were excluded: 3 without ventilator care and 2 with diseases other than stroke. jugular venous catheters were placed in the internal jugular vein using an ultrasound-guided method. lactate, venous oxygen saturation (sjvo2), and the arteriovenous oxygen saturation difference (avdo2) were monitored every 4 hours. metabolic derangement was defined as a lactate level of more than 2.0 mmol/l. patients were divided according to the presence of clinical deterioration. for long-term prognosis, a modified rankin scale score of 5-6 at 3 months was defined as a poor outcome. twelve patients (42.9%) showed metabolic derangement, and they experienced more frequent clinical deterioration compared to patients without metabolic derangement (n=9, 64.3% vs. n=3, 21.4%, p=0.022). clinical deterioration was noted in 14 patients, and lactate level was significantly higher in the clinical deterioration group (1.44±0.48 vs. 1.04±0.20 mmol/l, p=0.009). adjusting for other potential variables (age, baseline stroke severity, sjvo2, and avdo2), metabolic derangement was an independent factor associated with clinical deterioration (or 6.60, 95% ci 1.23-35.44, p=0.028). meanwhile, the poor outcome group (n=12) showed no difference in lactate level, but avdo2 was higher in the poor outcome group (29.54±5.51 v. 24.95±5.65, p=0.041). avdo2 remained an independent factor for poor outcome after multivariable logistic regression analysis (or 3.68, 95% ci 1.08-12.55, p=0.038). this study showed that lactate was associated with clinical deterioration during neurocritical care, whereas venous desaturation contributed to long-term prognosis.
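a minimal sketch of the 4-hourly screening rule in the jugular bulb monitoring abstract above (metabolic derangement when lactate exceeds 2.0 mmol/l); computing avdo2 as arterial minus jugular venous saturation is a standard definition assumed here, not spelled out in the abstract:

```python
LACTATE_THRESHOLD = 2.0  # mmol/L, per the abstract's definition

def metabolic_derangement(lactate_samples) -> bool:
    """True if any 4-hourly jugular bulb lactate exceeds 2.0 mmol/L."""
    return any(lac > LACTATE_THRESHOLD for lac in lactate_samples)

def avdo2(sao2: float, sjvo2: float) -> float:
    """arteriovenous oxygen saturation difference, in percentage points,
    assumed here to be arterial minus jugular venous saturation."""
    return sao2 - sjvo2

print(metabolic_derangement([1.2, 1.6, 2.3, 1.9]))  # True: one sample above threshold
print(avdo2(98.0, 68.5))                            # 29.5
```

the example avdo2 of 29.5 falls in the range the abstract reports for the poor-outcome group (29.54±5.51).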
jugular bulb venous monitoring is a feasible tool in patients with acute ischemic stroke in the neurocritical care unit. swift recognition of stroke symptoms, immediate access to testing, and timely treatment play a vital role in functional outcomes (middleton et al., 2015). delays can postpone treatment and complicate recovery. delays at this facility included registration, order entry times, and imaging. process improvement (pi) utilized an evidence-based algorithm to improve performance metrics and treatment of acute strokes, and included evaluating and eliminating interruptions, with a goal of reducing the time to treatment. the setting was a suburban, ancc magnet-recognized primary stroke center with 16 beds in the ed that experiences 24,010 ed visits and 6,619 admissions per year. patients included in the acute stroke protocol presented with signs and symptoms of stroke and last known well within 12 hours of symptom onset. participation included ed staff and staff working in areas impacted by stroke care. code stroke was initiated for patients who fit the criteria. an overhead page was implemented notifying the team throughout the hospital. radiology would prioritize ct and call the ed as soon as ct was ready. in the meantime, the ed team continued assessments. once the ct was resulted, the physician would determine whether the patient was eligible for tpa. the acute stroke protocol included a list of inclusion/exclusion criteria for tpa administration. other treatment requirements included reminders for frequency of vital signs, neuro checks, and assessments. implementation began in may 2016, and the team began to see a significant decrease in ct times and better compliance with dysphagia screening and nih assessments. ct turnaround time (tat) within 45 minutes increased from 62% to 83%. nih stroke scale completion rose from 79% to 100%. compliance with completing dysphagia screening increased from 86% to 93%. results stem from a commitment to excellence from the entire team.
pi continues to further improve care for stroke patients. induced hypertensive therapy (iht) has been used to enhance cerebral perfusion pressure in subarachnoid hemorrhage and stroke, but there is no established indication for iht in ischemic stroke. we report the use of iht in acute ischemic stroke patients with hemodynamic instability caused by steno-occlusive disease of a main cerebral artery. we reviewed acute ischemic stroke patients with cerebral perfusion deficits due to intracranial and extracranial steno-occlusive disease. iht was applied for early neurological deterioration and maintained until hemodynamic instability had been stabilized for over 24 hours or until neurointervention, including angioplasty and extracranial-intracranial arterial bypass surgery, was performed. 52 patients were analyzed. stroke territories were 31 in the anterior circulation of intracranial vessels, 11 in posterior vessels, and 10 in extracranial vessels. mean duration of iht was 4176.04 minutes. pre- and post-iht nihss scores were 8.19 and 7.35, respectively. 30 patients (57.7%) showed improvement and 13 patients (25%) were stabilized without further aggravation. 16 patients developed bradycardia. there were no fatal complications of therapy. 15 patients underwent further treatment, including bypass surgery, angioplasty, and stenting, after iht. at 3 months follow-up, 34 patients showed good outcomes (modified rankin scale 0, 1, and 2). iht may be safe and effective for neurologic deterioration or progression of acute ischemic stroke with hemodynamic instability due to severe steno-occlusive disease of a major cerebral artery. large randomized trials are needed to confirm this result. most patients with progressive stroke have a poor prognosis. the aim of our study was to investigate the factors related to progressive neurologic deficit (pnd) in patients receiving recanalization therapy for acute ischemic stroke. -month period, were enrolled.
blood pressures (bps) at 0, 12, and 24 hours after admission and bp variation (bpv) over the first 24 hours were collected. variables associated with pnd were analyzed. among 216 enrolled patients, 68 patients showed pnd. the patients with pnd had higher systolic bps at 0, 12, and 24 hours after admission and higher bpv than the others (p <0.05). posterior circulation stroke was more prevalent in the patients with pnd (p <0.01). in logistic regression analysis, pnd was independently associated with posterior circulation stroke [odds ratio (or) = 3.35, p <0.001] and systolic bp at 24 hours after admission (or = 1.02, p = 0.03). pnd may be associated with elevated systolic bp during the first 24 hours after admission in patients receiving recanalization therapy for acute ischemic stroke. telestroke has revolutionized stroke care delivery in the modern era. massachusetts general hospital (mgh) uses the most common model, the hub and spoke. the demonstrated superiority of endovascular therapy (et) with intravenous tpa over tpa alone for acute stroke patients with large vessel occlusions prompts a thorough assessment of telestroke's role in the delivery of et, particularly in terms of transferring patients to hubs capable of et. our primary objective was to examine associations between transfer time and clinical outcomes. patients were selected from the get with the guidelines-stroke registry who were transferred to mgh from jan 2011 to oct 2015 and who had nihss >6 and last known well <12h on mgh arrival (n=618). we excluded patients for whom we could not calculate the primary predictor, transfer time (defined as the mgh arrival time minus the telestroke consult answered time; n=384 excluded). several clinical outcomes were explored by linear and logistic regression to determine association with transfer time. of the 234 patients in the study, 120 (51%) were transferred by ambulance, 114 (49%) by helicopter, and 63 (27%) underwent et at mgh.
median transfer time was 132 min, and the median aspects decrease during transfer was 2. longer transfer time was associated with decreased likelihood of undergoing et (p=0.003). however, transfer time was not significantly associated with aspects decrease during transfer. for those patients undergoing et, transfer times bore no association with 90-day mrs. this study identifies an association between longer transfer time and decreased likelihood of undergoing et. reasons are varied and are not clearly related to imaging progression alone. only 27% of transferred patients underwent et. more efficient spoke triage and transfer may improve the proportion of patients treated with et. these data provide an important perspective during this period of stroke triage evolution. intra-arterial thrombectomy (iat) has been approved for acute treatment of ischemic strokes (is). with the advent of several new devices for iat, this procedure has become more widely utilized, with better outcomes. we performed this analysis to evaluate trends and predictors of utilization of iat over an 8-year period. analysis of nationwide inpatient sample data (2006 to 2013) showed a total of 850,997 patients discharged with a primary diagnosis of is (icd-9 codes 433.xx and 434.xx). iat was ascertained by icd-9 procedure code 39.74. independent predictors of iat were studied using binary logistic regression. the predictors included in the model were age, sex, race, teaching status, and insurance type. results: 4903 (0.6%) of is patients received iat. mean age of patients receiving thrombolysis was 71.01 years. the percentage of is patients receiving iat consistently increased from 0.037% in 2006 to 1% in 2013. we also observed a significant year-to-year decrease in mortality among patients receiving iat. in 2006, 26.2% of iat patients died, as compared to 15.8% in 2013.
using binary logistic regression, the statistically significant independent predictors of iat utilization were age (or = 0.978, p=0.000), female gender (or = 1.120, p=0.019), and insurance type as compared to medicare (private insurance or = 1.199, p=0.006; self-pay or = 0.778, p=0.05). as compared to caucasians, african americans were less likely to receive treatment (or = 0.531, p=0.000). also, a teaching hospital was found to be more likely to administer iat as compared to a non-teaching hospital (or = 5.941, p=0.000). is patients with younger age, female gender, private insurance, and admission to teaching hospitals are more likely, and african americans are less likely, to receive iat. this study showed that iat utilization has increased significantly since 2006, with a steep decline in in-hospital mortality. this may point to improved iat devices and better patient selection. telestroke plays an integral role in stroke care. nationally the most common model is the hub and spoke, which is used at our institution. understanding telestroke's role in the transfer of candidate patients for endovascular therapy (et) is critical to minimizing delays. our primary objective was to evaluate predictors of transfer delay. patients were selected from the get with the guidelines-stroke registry who were transferred to mgh from jan 2011 to oct 2015 with nihss >6 and last known well <12h on mgh arrival (n=618). we excluded patients for whom we could not calculate transfer time (the mgh arrival time minus the telestroke consult answered time; n=384 excluded). ideal time was calculated using google maps, incorporating date/time information for ground transfers, and straight-line distance at 130 mph for helicopter transfers. ideal time was subtracted from actual time to calculate delay, accounting for distance, mode of transport, weather, and traffic. analysis of covariance was used to explore 3 possible predictors of delay (night vs. day, weekend vs. weekday, tpa delivery at spoke).
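the delay metric described in the telestroke transfer abstract above (actual transfer time minus an ideal time, with helicopter ideal time taken as straight-line distance at 130 mph) can be sketched as follows. the function names and example numbers are illustrative, and the ground-transfer ideal, which the abstract derives from google maps with date/time information, is treated as a given input here:

```python
HELICOPTER_SPEED_MPH = 130.0  # straight-line cruise speed assumed in the abstract

def ideal_helicopter_minutes(straight_line_miles: float) -> float:
    """ideal helicopter transfer time in minutes at 130 mph straight-line."""
    return straight_line_miles / HELICOPTER_SPEED_MPH * 60.0

def transfer_delay(actual_minutes: float, ideal_minutes: float) -> float:
    """delay = actual transfer time minus ideal transfer time."""
    return actual_minutes - ideal_minutes

# illustrative: a 65-mile flight whose arrival-minus-consult time was 50 minutes
ideal = ideal_helicopter_minutes(65.0)   # 30.0 minutes
print(transfer_delay(50.0, ideal))       # 20.0 minutes of delay
```

subtracting an ideal time rather than comparing raw transfer times is what lets the abstract's model account for distance, transport mode, weather, and traffic before testing night/weekend/tpa effects.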
of the 234 patients in the study, 120 (51%) were transferred by ambulance, 114 (49%) by helicopter, and 63 underwent et. a significant proportion of the variation in delay was explained by the predictors (f=6.43, p<0.0005). nocturnal transfer (1800-0600 hrs) was associated with significantly longer delay (20.53 additional minutes relative to daytime transfers, p<0.0005). weekend vs. weekday transfer and tpa delivery at the spoke hospital did not contribute significantly to model variance. our findings highlight the importance of refining protocol approaches. nocturnal transfers were associated with substantial delay relative to daytime transfers. in contrast, delivery of tpa was not associated with delays, underscoring the impact of the effective protocols that are in place. metrics and protocols for transfer, especially at night, may have a positive impact on transfer times. the use of anticoagulant therapy in the acute stage of ischemic stroke is controversial. novel oral anticoagulants (noacs) are effective in preventing recurrent embolism in patients with non-valvular atrial fibrillation (nvaf), but the risk of hemorrhagic transformation is the major concern for their early use in ischemic stroke. we aimed to study the use of noacs in patients with acute ischemic stroke and nvaf. patients with acute ischemic stroke and nvaf who were admitted to our acute stroke unit from 2014 to 2016 were recruited into this single-centre cohort study. the timing of noac initiation was at the discretion of the treating physician, based on stroke severity and infarct size. nvaf accounted for 32.5% (214/659) of all ischemic stroke cases. the early recurrent embolism rates were 1.4%, 3.3% and 4.7% at one week, two weeks and one month, respectively. noacs were prescribed in 105 patients. noacs were initiated within one week in 47 patients (44.8%).
the median time to noac initiation was five days (iqr 1.8-18.0), nine days (iqr 6.3-25.3), and 20 days (iqr 12.0-37.0) for patients with no/small-sized infarcts, moderate-sized infarcts, and large-sized infarcts, respectively. at one month, two patients had recurrent ischemic stroke despite treatment with a noac. only one patient, who had a large-sized infarct, developed symptomatic hemorrhagic transformation. early use of noacs in ischemic stroke appears to be safe. further large prospective studies are required to evaluate the risk and benefit of noac use in acute ischemic stroke. osmotherapy (hypertonic saline or mannitol) is the mainstay of available therapy to counter the cerebral edema that can develop after large hemispheric infarction. in a post-hoc analysis of the games-rp trial, we hypothesized that patients with large infarction treated with intravenous glyburide might require less osmotherapy than placebo-treated patients. games-rp was a multi-center, prospective, double-blind, randomized, placebo-controlled study which enrolled 84 patients with large anterior circulation infarction. patients were randomized to iv glyburide administration (biib093; n=44) or placebo (n=40) with target time from symptom onset to drug infusion decompressive craniectomy (dc), or both. total bolus osmotherapy dosing was quantified by an "osmolar load" (volume in l * osmolarity in mosm/l). of the 84 subjects, the percentage of patients who received bolus osmotherapy did not differ between the glyburide- and placebo-treated subjects (51% v. 53%; p=0.89). there was no difference in mean total osmolar load received (mosm) or in hours from drug bolus to osmotherapy administration. overall, 40 subjects received osmotherapy. the baseline dwi lesion volume (ml) was significantly larger in the osmotherapy-treated group (170.2 ±53.3 v. 147.4 ±45.5; p=0.047). the presence of adjudicated malignant edema on imaging was more common in the osmotherapy group (63% v. 30%, p=0.004), as was dc (45% v.
8%; p<0.0001). among patients with adjudicated clinical neurologic deterioration from edema, 31% (n=11) did not receive osmotherapy. treatment with iv glyburide was not associated with less osmotherapy, possibly due to a ceiling effect resulting from the large infarct volumes. however, osmotherapy use was associated with larger infarct volumes, malignant edema, and higher incidence of dc. use of osmotherapy did not always follow the appearance of clinical or radiographic malignant edema. acute ischemic stroke patients receiving intravenous alteplase (iv-tpa) are placed on bedrest for 24 hours or longer due to provider fear of worsening stroke symptoms from decreased cerebral perfusion. this is based on medical uncertainty and lack of robust studies, despite american stroke association (asa) recommendations for mobilization when hemodynamically stable. this retrospective observational study evaluates very early mobility in acute ischemic stroke patients post iv-tpa while evaluating for change in nihss. medical records of ischemic stroke patients who received iv-tpa between 2012 and 2016 at two urban hospitals were reviewed for mobility protocol activities. patients who were given endovascular treatment, placed on comfort care on day zero or one, mobilized after the first 24 hours, transferred out or left against medical advice were excluded from the analysis. multiple linear regression was used to determine if those patients mobilized earlier saw a greater change between nihss at admit and 24 hours post iv-tpa administration, adjusting for age and gender. of the 268 patients included in the final analysis, 46.6% (n=125) were female, mean age was 68. the multiple linear regression results showed no significant relationship between change in nihss from admit to 24 hours post iv-tpa and earlier mobilization, after adjusting for age and gender (ß= -0.065 change in nihss points per hour; p=0.181). 
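the "osmolar load" metric defined in the games-rp abstract above (volume in liters times osmolarity in mosm/l) can be sketched for hypertonic saline boluses. deriving saline osmolarity from percent concentration (two osmotically active ions, nacl molar mass 58.44 g/mol) is a standard back-of-envelope assumption added here, not part of the abstract:

```python
NACL_MOLAR_MASS = 58.44  # g/mol

def saline_osmolarity(percent: float) -> float:
    """approximate osmolarity (mOsm/L) of a NaCl solution from its percent
    concentration (g per 100 mL), assuming full dissociation into 2 ions."""
    grams_per_liter = percent * 10.0
    return grams_per_liter / NACL_MOLAR_MASS * 2.0 * 1000.0

def osmolar_load(volume_liters: float, osmolarity: float) -> float:
    """total bolus osmolar load (mOsm) = volume (L) * osmolarity (mOsm/L)."""
    return volume_liters * osmolarity

# illustrative: a 30 mL bolus of 23.4% saline
print(round(saline_osmolarity(23.4)))                       # ~8008 mOsm/L
print(round(osmolar_load(0.030, saline_osmolarity(23.4))))  # ~240 mOsm
```

summing this quantity over all boluses is what lets the trial compare dissimilar agents and concentrations on a single osmotic scale.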
this study reveals early mobility does not worsen stroke symptoms or severity based on nihss. this suggests that very early mobility of patients after iv-tpa is safe as recommended by asa. interhospital transfers to a stroke center following iv-tpa administration are increasingly common. however, no studies have evaluated icu needs in these transfer patients and such understanding may have a significant impact on resource utilization. the aim of this study is to compare the frequency, timing, and nature of icu-level needs in post-iv-tpa patients that were transferred versus those who present directly to the admitting hospital. retrospective chart review of consecutive, tpa-treated ischemic stroke patients admitted to the icu at a comprehensive stroke center servicing a large telestroke referral network from 11/2013 to 7/2015 was performed. we evaluated patient demographics, stroke characteristics, and icu needs between transfer and non-transfer patients before and after icu admission. results 331 patients were admitted to the icu post-tpa. 237 patients (71.6%) were transferred from an outside hospital, of which 123 patients had icu needs (51.9%). this frequency of icu needs was no different when compared to the non-transfer patients (47/94, 50.0%, p = 0.81). similar icu needs were observed for each specific icu intervention between transfer and non-transfer patients (iv antihypertensive, vasopressor requirement, iv rate control, respiratory support, ia therapy, icp monitoring, hypertonic therapy, and neurosurgical intervention, all p > 0 association with icu needs (or 11.9 in transfer patients, or 9.7 in non-transfer; both p < 0.0001). transferring post-iv-tpa patients is not associated with increased icu needs. about one-half of post-tpa patients do not have icu needs, and these patients typically have milder stroke severity. 
our data support the safety of transferring post-tpa patients and suggest that a subgroup of these patients could potentially be monitored in a non-icu setting. the ability to appropriately triage post-tpa patients may lead to more efficient and cost-effective stroke care. stroke patients requiring decompressive craniectomy remain at high risk of prolonged mechanical ventilation as well as ventilator-associated pneumonia (vap). early tracheostomy placement may provide a reduction in the duration of mechanical ventilation; however, prediction of those who ultimately require a tracheostomy remains a clinical challenge. a preoperative assessment of tracheostomy dependence may help to guide decision making. the authors compare key outcome data after early versus late tracheostomy and develop a preoperative decision-making tool to predict postoperative tracheostomy dependence. a subsequent validation utilizing a decision tree analysis applied prospectively is ongoing and will be presented. we performed a retrospective analysis of prospectively collected registry data and developed a propensity-weighted decision tree analysis to predict tracheostomy requirement utilizing factors present prior to surgical decompression. outcomes include probability functions for icu los, hospital los, and mortality based on data for early (10 day) tracheostomy. a subsequent validation of the decision tree is being applied prospectively to evaluate its predictive value. a total of 168 surgical decompressions were performed on patients with acute ischemic or spontaneous hemorrhagic stroke between 2010 and 2015. forty-eight patients (28.5%) required a tracheostomy, whereas 35 (20.8%) developed vap, and 126 (75%) survived hospitalization. mean icu and hospital los were 15.1 and 25.8 days, respectively. utilizing gcs, sofa score and hydrocephalus presence, our decision tree analysis provided 63% sensitivity and 84% specificity for tracheostomy prediction. 
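The tracheostomy-prediction tool described above combines three preoperative predictors (GCS, SOFA score, hydrocephalus) in a decision tree. A minimal sketch of how such a rule and its reported sensitivity/specificity would be evaluated follows; the split thresholds are hypothetical illustrations, not the study's fitted tree.

```python
def predict_tracheostomy(gcs, sofa, hydrocephalus):
    """Toy rule in the spirit of the study's decision tree, built on its
    three preoperative predictors (GCS, SOFA score, hydrocephalus).
    The split thresholds are hypothetical, not the study's fitted values."""
    if hydrocephalus:
        return True
    if gcs <= 8:           # hypothetical cut point
        return sofa >= 6   # hypothetical cut point
    return False

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) -- the two
    metrics the abstract reports (63% and 84%) for its tree."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)
```

In practice the study's tree was fit on registry data; the helper above only illustrates how the published sensitivity/specificity pair is derived from a confusion matrix.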
early tracheostomy conferred significantly fewer ventilator days (p<0.001) and shorter hospital los (p=0.014), with similar vap and mortality rates between groups. early tracheostomy shortens duration of mechanical ventilation and length of stay following surgical decompression for stroke, although without a demonstrable impact on mortality or vap rates. a preoperative decision tree affords a practical tool that may provide insight to guide preoperative decision-making with patient families. for patients with suspected acute stroke, any delay to endovascular or intravenous thrombolytic therapy is critical. prehospital notification from emergency medical services (ems) may shorten the door-to-recanalization time. the 'brain saver', a web-based prehospital notification system, could reduce the time interval from symptom onset to recanalization. beginning in march 2016, a stroke team consisting of stroke-specialized doctors, nurses and radiologists from multiple departments received direct alarms via a smartphone application from ems paramedics with transport information for patients with suspected stroke. we compared baseline characteristics and prehospital/in-hospital delay times in stroke patients treated with intravenous thrombolysis or endovascular treatment over 12 months with and without ems use of the brain saver protocol. 167 patients (69 with the protocol and 98 without) were enrolled in this program. the patients who used brain saver had shorter median onset-to-arrival times (63 minutes versus 142 minutes, p < .001) and shorter in-hospital delay times (35 minutes versus 52 minutes, p<.001). 
prehospital notification by brain saver was associated with shorter median door-to-imaging time (5 minutes versus 12 minutes, p<.001), door-to-needle time (20 minutes versus 31 minutes, p <.05), and door-to-puncture time (55 minutes versus 137 minutes, p < .001). we found that prehospital notification was associated with faster door-to-imaging time, door-to-needle time and door-to-puncture time in patients presenting within 6 hours of symptom onset. close collaboration between in-hospital stroke teams and the ems system provides patients with suspected stroke with timely emergency care. infection is a common complication in the acute phase after ischemic stroke. furthermore, malnutrition is associated with unfavorable outcome in patients with stroke. therefore, we investigated whether premorbid undernutrition, identified by an objective and quantitative method, the nutritional risk index (nri), was related to the risk of infection after ischemic stroke. a consecutive series of 852 patients admitted within 7 days after ischemic stroke onset between october 2010 and october 2015 was included. we assessed initial nutritional status using the nri, calculated as follows: nri = (1.519 × serum albumin, g/l) + {41.7 × present weight (kg)/ideal body weight (kg)}. the patients were categorized into three groups on the basis of nri [no risk (nri >97.5), moderate risk (nri 83.5-97.5), and severe risk (nri <83.5)]. we compared the clinical characteristics and nri according to the presence of infection. among the included patients (mean age, 67.7 years; male, 60.6%), 85 (10.0%) patients experienced infection during hospitalization. pneumonia accounted for 81.2% (n=69) and urinary tract infection for 11.8% (n=10) of all infections. the proportion of lower nri patients (moderate risk and severe risk) was significantly greater in the infection group (45.9% vs. 17.9% and 10.6% vs. 2.7%, p <0.001). 
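The NRI formula and risk strata described above can be computed directly. A minimal sketch, assuming albumin in g/L (the units of Buzby's original index, and the only units under which the 83.5/97.5 cut-offs are meaningful) and assuming the 97.5 boundary itself falls in the moderate-risk band:

```python
def nutritional_risk_index(albumin_g_per_l, weight_kg, ideal_weight_kg):
    """NRI as given in the abstract:
    NRI = 1.519 * serum albumin (g/L) + 41.7 * (present weight / ideal weight)."""
    return 1.519 * albumin_g_per_l + 41.7 * (weight_kg / ideal_weight_kg)

def nri_category(nri):
    """Risk strata used in the study: no risk (>97.5),
    moderate risk (83.5-97.5), severe risk (<83.5).
    Inclusion of the exact boundary values in the moderate band
    is an assumption; the abstract does not specify it."""
    if nri > 97.5:
        return "no risk"
    if nri >= 83.5:
        return "moderate risk"
    return "severe risk"
```

For example, a patient with albumin 40 g/L at ideal body weight scores roughly 102.5, i.e. no nutritional risk.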
moreover, higher nri patients were less likely to be admitted to the intensive care unit (1.4% vs. 5.1% vs. 6.7%, p = 0.004). a multivariate analysis revealed that lower nri groups had a higher risk of infection [odds ratio (95% confidence interval); moderate risk 3.98 (1.95-8.13); severe risk 4.21 (1.10-16.14), p for trend = 0.001]. our study demonstrated that lower nris predicted infection complications and severe infections after ischemic stroke. this suggests that assessment of nutrition depletion could be a useful predictor and a modifiable risk factor for infection following stroke. cyp2c19 plays a major role in the metabolism of clopidogrel. cyp2c19 generates an active oxidized metabolite of clopidogrel that exerts antiplatelet activity by inhibiting the p2y12 receptor. the major alleles of the cyp2c19 gene are *1, *2, *3 and *17, and approximately 30% of caucasians and 55% of asians have one or more loss-of-function alleles. in this study, patients with at least two *2 or *3 alleles were classified as poor metabolizers (pm), those with one *2 or *3 allele were classified as intermediate metabolizers (im), and those without a *2, *3 or *17 allele were classified as extensive metabolizers (em). in addition, those with (*2/*17 or *3/*17) were classified as unknown metabolizers. 784 stroke patients were enrolled for this trial. the mean age was 61 years, and 32% were women. 61% had a history of hypertension, 29% of dm and 28% of dyslipidemia. of the participants, 37% were classified as em, 1% as um, 45% as im, 16% as pm and 1% as unknown metabolizers. 38% had a good genotype for clopidogrel metabolism and 62% had a poor genotype. there were no significant differences in the demographic and clinical findings between the good and poor genotype groups. the prevalence of cyp2c19 polymorphisms differs according to ethnicity. 
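The genotype-to-phenotype classification described above maps cleanly to a small function. A minimal sketch of the study's rules; the ultrarapid (UM) branch for *17 carriers with no loss-of-function allele is my assumption, since the abstract reports UM without defining it.

```python
def cyp2c19_phenotype(allele1, allele2):
    """Classify CYP2C19 metabolizer status per the abstract's rules.
    Alleles are strings like "*1", "*2", "*3", "*17"."""
    lof = [a for a in (allele1, allele2) if a in ("*2", "*3")]  # loss of function
    if lof and "*17" in (allele1, allele2):
        return "unknown"        # *2/*17 or *3/*17, per the abstract
    if len(lof) == 2:
        return "poor"           # at least two *2 or *3 alleles
    if len(lof) == 1:
        return "intermediate"   # one *2 or *3 allele
    if "*17" in (allele1, allele2):
        return "ultrarapid"     # assumption: *17 carrier, no *2/*3
    return "extensive"          # no *2, *3 or *17 allele
```

Note the ordering matters: *2/*17 carries one loss-of-function allele but is explicitly "unknown" in the study, so that check comes first.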
the racial difference in platelet function may lead to differences in treatment as well as new targets for antiplatelet therapy. the social brain hypothesis is an evolutionary theory proposing that the number of contacts in a primate's social network is proportional to neocortical volume. we tested the hypothesis in a patient population with social network data before and after vascular events. we studied whether social network indices would decrease after stroke, but not after myocardial infarction (mi), as anticipated by the theory. we examined trajectories of the lubben social network scale score (range 0-50, higher values indicating larger network) before and after vascular events in participants from the cardiovascular health study. we used a repeated measures design with linear mixed models to compare the change in social network score before and after events in 382 persons with ischemic stroke and 395 with mi. over a mean of 11.1 years of follow-up for stroke and 12.4 years for mi, we examined an average of 4 social network scores for each participant. we controlled for socio-demographics, baseline cognitive function, and comorbidities. social network scores declined significantly after stroke (an additional -0.14 points every year, 95% ci -0.27, -0.01, p=0.04), but not after mi (-0.06, 95% ci -0.16, 0.04, p=0.24) compared to the baseline slope in fully adjusted models. social network score declined more steeply after stroke than after mi, even after adjusting for potential confounders. these findings support the social brain hypothesis but do not address mechanism. shrinkage of social networks may be a specific target for interventions to optimize recovery in vascular diseases, particularly stroke. emergency neurological life support (enls) protocols are an essential component of the assessment and management of patients within the first hours of a neurological emergency. 
with increasing focus on emergent endovascular treatment for large vessel occlusion (lvo) in acute ischemic stroke, our institution incorporated the stroke van assessment as part of the enls acute stroke initial assessment protocol. the stroke van screening tool was taught to all nurses in the emergency department (ed) who triage stroke. all patients who presented to the ed with suspected stroke had a van assessment completed prior to ed physician evaluation and ct imaging. patients with weakness in addition to visual changes, aphasia, or neglect were considered van+ and triaged immediately to ct angiogram head/neck with immediate notification to the neurointerventionalist. a sample of 76 patients presenting to the ed as a stroke alert over an 8-month period was utilized. use of the stroke van assessment tool was found to improve time to identification of lvo by reducing time from arrival to cta for van-positive patients from 77 minutes pre-intervention (n=31) to 27 minutes post-intervention (n=35). this was a significant decrease in time to identification of patients presenting with lvo (p<0.05), improving time to endovascular treatment. incorporating stroke van as part of the acute stroke assessment protocol improved identification of patients presenting with lvo, decreased time to cta imaging and improved time to endovascular treatment, which is well documented to improve neurological prognosis. time is essential in neurological emergencies. the van assessment is quick and easy to perform, requires no scoring or calculations, and is the only lvo screening tool tested in the ed by ed nurses and physicians. we suggest incorporating stroke van into the enls acute stroke protocol as a way to improve identification of lvo and improve time to endovascular treatment. elevated blood pressure (bp) is known to be related to hemorrhagic transformation (ht) after ischemic stroke. 
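The VAN screening rule described above (weakness plus at least one cortical sign) requires no scoring and reduces to a single boolean check; a minimal sketch:

```python
def van_positive(weakness, visual_changes, aphasia, neglect):
    """VAN screen as described in the abstract: weakness plus at least one
    cortical sign (visual changes, aphasia, or neglect) flags a suspected
    large vessel occlusion for immediate CT angiography."""
    return bool(weakness and (visual_changes or aphasia or neglect))
```

A patient with weakness but no cortical sign screens VAN-negative, as does a patient with cortical signs but no weakness.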
however, the effect of bp variation on ht remains unclear, especially in patients with successful recanalization after mechanical thrombectomy. therefore, we investigated the relationship between bp and ht after mechanical thrombectomy for ischemic stroke. a consecutive series of 141 patients with acute ischemic stroke and successful recanalization (tici 2b or tici 3) between january 2013 and november 2016 was included in the analysis. bp was recorded over the first 24 hours and summarized using various parameters, including mean, maximum (max), minimum (min), coefficient of variation (cv), and successive variation (sv) for systolic bp, diastolic bp, and mean bp. we defined major ht as parenchymal hematoma type 2 (ph2). among the included patients (mean age, 66.3 years; male, 55.2%), 16 patients (11.3%) developed major ht over the first 24 hours after successful recanalization. systolic bp max-min was significantly increased in patients with major ht compared to those without major ht (61.2 mmhg vs. 44.2 mmhg, p = 0.034), while other bp parameters were not. in addition, systolic bp max-min was significantly associated with symptomatic ht (n=11, 7.8%, p = 0.007). after adjusting for confounders, systolic bp max-min was independently associated with major ht (odds ratio, 1.028; 95% confidence interval, 1.004-1.051). our results demonstrated that the absolute change of systolic bp over the first 24 hours was associated with major and symptomatic ht after successful mechanical thrombectomy for acute ischemic stroke. this suggests that maintaining stable systolic bp is an important factor in possibly preventing major ht after successful recanalization. the benefits of intravenous tissue plasminogen activator in acute ischemic stroke are highly time-dependent. however, there are so many cross-departmental tasks for each eligible patient that many stroke centers have difficulty achieving the guideline-recommended 1-hour door-to-needle (dtn) time. 
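The blood-pressure variability parameters named above can be computed from a 24-hour series of readings. A minimal sketch; the successive-variation (SV) definition below (mean absolute difference between consecutive readings) is one common convention and an assumption on my part, since the abstract does not give a formula.

```python
def bp_variability(readings):
    """Summary parameters for a series of BP readings over 24 h:
    mean, max, min, max-min range, and successive variation (SV).
    SV here is the mean absolute consecutive difference (an assumption;
    the study does not state its exact definition)."""
    diffs = [abs(b - a) for a, b in zip(readings, readings[1:])]
    return {
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "min": min(readings),
        "max_min": max(readings) - min(readings),
        "sv": sum(diffs) / len(diffs) if diffs else 0.0,
    }
```

The study's key predictor, systolic BP max-min, is simply the range of the systolic series; for readings of 150, 140, 160 and 130 mmHg it is 30 mmHg.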
we have developed a web-based visual task management system called "task calc. stroke" (tcs) using information and communication technology. herein, we performed a trial installation and preliminary evaluation of tcs. the application software of tcs was designed to run on the google cloud platform. tcs alerts the relevant hospital staff to the patient's arrival condition and time, and displays the tasks to be performed and their status by changing color in real time on networked wall-mounted smart devices in the relevant departments. we started a trial installation of tcs during the daytime from august 2015. we compared lead times before (august 2014 to july 2015) and after (august 2015 to july 2016) the trial installation of tcs. the trial installation of tcs in our hospital showed successful information sharing. a total of 33 patients were included (pre: 13, post: 20). after the installation, significant reductions occurred in the median door-to-complete-blood-count time [26.6 vs. 17.5 min, p < 0.001], with a trend toward a reduction in door-to-needle time [36.0 vs. 30.0 min, p = 0.175]. tcs may be a useful tool to reduce lead times for acute stroke patients. tcs is a new approach that has the potential to promote efficiency in acute stroke care. a prior history of intracranial hemorrhage (ich) has been considered a contraindication to administration of intravenous recombinant tissue plasminogen activator in acute ischemic stroke, per the original activase fda label and 2013 aha/asa guidelines. however, limited data are available on the risks of lysis in patients with prior ich. we performed a cross-sectional study of adult patients who received thrombolysis, using administrative claims data on admissions to acute care hospitals in california between 2005-2011. 
diagnosis codes were used to identify patients who received thrombolysis, and to ascertain (1) a prior diagnosis of ich, including intraparenchymal hemorrhage (iph), subarachnoid hemorrhage (sah), subdural hematoma (sdh), or epidural hematoma (edh); and (2) relevant comorbidities, including hypertension, smoking, diabetes, heart failure, atrial fibrillation, renal disease, malignancy, and demographic data. we used univariable and multivariable logistic regression to model the odds of in-hospital mortality as a function of prior ich, after adjusting for potential confounders. 60,142 patients received thrombolysis during the study period (mean age 65 [sd 15]; 26,613 [44%] female). of these, 824 patients (1.4%) had a documented diagnosis of prior ich on admission. in-hospital mortality was 7% overall, 6.9% for patients without prior ich, and 33.1% for patients with prior ich. in multivariable analysis, all prior ich subtypes remained independently associated with in-hospital mortality, including iph (or 5.9, ci 4.9-7.1, p <2e-16); sah (or 6.5, ci 4.8-8.8, p <2e-16); and sdh (or 3.8, ci 1.8-7.5, p=0.0002). only 4 patients had edh, so testing was not possible. 1.4% of patients who received thrombolysis during the study period had a prior diagnosis of ich. prior ich was found to be significantly associated with in-hospital mortality regardless of ich subtype. we evaluated the association between early neurological improvement (eni) after ert and the time from symptom onset to recanalization, according to the degree of collateral circulation measured using multiphase cta. patients with anterior circulation occlusion who underwent ert based on non-contrast brain ct and multiphase cta were evaluated. collateral status was evaluated using a pial arterial filling score graded on a six-point scale. eni was defined as a reduction in nihss of 50% or more, or an 8-point decrement, from baseline. 
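The ENI definition above is a simple two-branch criterion on baseline and follow-up NIHSS; a minimal sketch:

```python
def early_neurological_improvement(baseline_nihss, followup_nihss):
    """ENI per the abstract's definition: a decrease of >= 8 NIHSS points
    from baseline, or a reduction of 50% or more."""
    if baseline_nihss == 0:
        return False  # no room to improve; edge case not covered by the abstract
    drop = baseline_nihss - followup_nihss
    return drop >= 8 or drop / baseline_nihss >= 0.5
```

For example, a patient improving from NIHSS 20 to 11 (9-point drop) or from 10 to 5 (50% reduction) meets the criterion, while one improving from 10 to 6 does not.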
neurological statuses at day 1 and at day 7 (or discharge) were determined by a certified neurologist using the nihss. the degree of collateral circulation measured by multiphase cta was inversely correlated with baseline stroke severity (p=0.02). the proportion of eni at day 1 was significantly lower in patients with poor collateral status (score 0~4) according to the time from symptom onset to recanalization (0-180, 50.0%; 180-360, 14.8%; > 360, 14.3%; p=0.02). however, the proportion was similar in patients with good collateral status (day 1: 0-180, 57.1%; 180-360, 50.0%; 360-480, 50.0%; >480, 50.0%; p=0.97; day 7 or discharge: 0-180, 81.0%; 180-360, 71.4%; 360-480, 100.0%; >480, 66.7%; p=0.68). collateral status was the best predictor of eni after ert. eni was achieved in only 5 (14.3%) patients with poor collateral status, and their time from symptom onset to recanalization was more than 180 minutes. the time window for ert might differ according to baseline collateral status measured by multiphase cta. the current time window for ert, within 6 hours from symptom onset to groin puncture, could be reconsidered. atrial fibrillation (af) is the most common cardiac arrhythmia among adults. despite the proven advantage in primary and secondary stroke prevention in patients with af, antithrombotic therapy has been reported to still be underused in many countries. however, there are few data about the incidence of af and any changing pattern of antithrombotic therapy among patients with af over the past decade in korea. data for this study were obtained from a nationwide sample cohort comprising 1,025,340 individuals (2% of the entire population of korea) established by the national health insurance system. during a 10-year follow-up period, 16,273 individuals (1.58%) developed af. the incidence of af remained relatively constant during the study period (8.23% in 2004 vs 8.24% in 2013). 
the proportion of patients receiving antithrombotic therapy increased significantly, from 18.5% in 2004 to 36.5% in 2013 (p for trend < 0.001). however, the proportion of patients on antiplatelet agents was higher than on oral anticoagulation. af steadily increased over the past 10 years in korea. however, only 36.5% of af patients were receiving antithrombotic therapy. our study demonstrated that there was a huge gap between clinical practice and treatment guidelines for antithrombotic medication in af patients in korea over the past decade. ohiohealth (oh) possesses one of the nation's largest neuroscience programs and is the leading volume provider of stroke care in ohio. oh comprises 12 hospital-based sites, 4 primary stroke centers, 1 comprehensive stroke center, and a virtual health (vh) stroke network that serves 24 hospitals throughout the state. in august 2016, stroke services at ohiohealth were restructured to enable dedicated clinical time for vh providers, require expertise, training, and quality-review participation for stroke responders, streamline activation algorithms to limit hand-offs, and eliminate identified barriers to vh consultation. a 6-month interim analysis was planned to assess the impact of these changes (termed "stroke 2.0"). comparative analyses were performed between the first 6 months of stroke 2.0 and the same period of the year prior to restructuring. pre-defined metrics included consultation volume, vh response time, iv-tpa time to treatment, research enrollment volume, endovascular referral rate and time to treatment, ischemic stroke (is) observed:expected (o:e) mortality data, and patient retention rate at associate vh sites. during the first 6 months of stroke 2.0, 2007 encounters were seen (historical 700), with a mean activation-to-vh-log-in time of 4.3 minutes. the volume of patients treated with iv-tpa increased significantly (154 vs. 96, p< 0.05) and mean treatment times were significantly reduced (63 vs. 86 minutes; p< 0.005). 
mean time to endovascular intervention was less during stroke 2.0 (98 vs. 140 minutes, p <0.05). system-wide o:e mortality was reduced after restructuring (0.513 vs. 0.704, p< 0.05), accounting for 21 additional lives saved. acute stroke research enrollment doubled (77 vs. 38) during this same period. transfer rates to the vh hub were unchanged (59 vs. 59%, p = 1.0). strategic changes in staffing, expertise, vh structure, and access can have profound and positive effects on a well-functioning stroke system. strokes due to cns fungal infections (scfi) are often misdiagnosed. we performed a retrospective study of electronically extracted records of patients with strokes & positive fungal studies, from cerebrospinal fluid (csf) or brain biopsy. other stroke etiologies were excluded. thirteen patients had scfi by a priori exclusion & inclusion criteria. nine were males. mean age was 53±17 years. symptoms were mild [nihss 6 (2, 9.5) (median and iqr)]. focal deficits & headaches (both 76.9%) were common. seventy percent were immuno-compromised (medications, malignancy, transplant recipients). clinical course was indolent in 30.7%. seventy percent had a poor outcome (2-ltac, 4-snf, 2-dead). ninety-two percent had csf pleocytosis (range: 27-471) while 61% had csf glucose less than 40 mg/dl (range: 2-37). seventy-five percent had lymphocytic predominance. seven strokes were from yeasts (4-cryptococcus, 1-coccidioidomycosis, 1-histoplasma, 1-candida) and 6 from molds (4-zygomycetes, 2-aspergillus). sixty-two percent had posterior circulation involvement (71.4% yeast vs 50% molds). there was leptomeningeal enhancement in 83% of yeast vs. 0% of mold infections (p= 0.01). the basal ganglia (bg) were involved in 75% of intravenous drug users (ivdu) vs. 0% of non-ivdu (p=0.01). one had abnormal cns vessel imaging directly attributed to the ischemic lesions. in this series, patients were young, immunocompromised or ivdu. 
stroke sizes & clinical deficits were modest, with no angiographic evidence of vasculitis. the majority had csf pleocytosis & hypoglycorrhachia. posterior circulation involvement was typical. leptomeningitis was only seen in yeast infections. the bg were spared in non-ivdu but commonly involved in ivdu. the mechanism of stroke in yeast infections is probably meningitis with secondary involvement of small perforating branches. the mechanism in mold infections in immuno-competent ivdu is probably direct angio-invasiveness in small vessels of the bg. outcomes are poor in spite of therapy. scfi should be considered in selected cases of cryptogenic (recurrent or progressive) strokes with the clinical, csf and mri features described. life-threatening bleeding requires prompt reversal of factor xa (fxa) inhibitors. their anticoagulant effects can be reversed with the antidote andexanet alfa. the efficacy of andexanet in reversing bleeding in an apixaban-anticoagulated porcine trauma model was investigated. after ethical approval, male pigs (n=15) were given apixaban for 3 days (20 mg daily); the sham group (n=5) received placebo. standardized polytrauma, comprising blunt liver injury and bilateral femur fractures, was inflicted. 12 minutes post-trauma, animals were randomized (n=5 per group) to a single andexanet bolus (1,000 mg), a bolus (1,000 mg) + infusion (1,200 mg over 2 hours) regimen, or vehicle (control). blood loss (bl) and hemodynamics were monitored over 6 hours or until death and analyzed by anova (mean±sem). apixaban anti-fxa levels were 183±26 ng/ml with no differences between anticoagulated groups prior to injury. bl in the sham animals was 494±24 ml 12 minutes after injury (total bl 651±39 ml at "x" hours; 100% survival). anticoagulation with apixaban significantly increased bl 12 minutes after injury (873±25 ml; p<0.01). controls exhibited a total bl of 3,913±235 ml with 100% mortality (mean survival time = 165 minutes). 
treatment with a bolus or bolus+infusion of andexanet was associated with a significant reduction in bl versus sham (p<0.05) and 100% survival. two hours after injury, apixaban anti-fxa levels in bolus animals were 99±45 ng/ml, whereas the bolus+infusion regimen resulted in levels of 17±6 ng/ml (p<0.05). hemodynamic parameters (e.g., cardiac output) and markers of shock (e.g., lactate) recovered to pre-trauma levels in the andexanet-treated groups. clinically and macroscopically, no adverse events were observed. in this study, andexanet effectively and safely reversed apixaban anticoagulation and reduced bl induced by severe trauma under anticoagulation. the bolus alone had a similar impact on survival and bl as the bolus+infusion regimen in this lethal porcine model. current guidelines for the management of pain, agitation, and delirium in mechanically ventilated patients in the intensive care unit (icu) recommend an analgesia-first approach to sedation management. however, these guidelines are derived from non-neurologic patient populations, leaving uncertainty in their generalization to this population. the purpose of this study was to evaluate implementation of an analgesia-first sedation clinical pathway in the neuroscience icu. a single-center cohort study was performed within the neuroscience icu, including patients mechanically ventilated for greater than 48 hours over a period of three months before and after clinical pathway implementation. providers were educated on the pathway, with emphasis on frequent assessment of richmond agitation-sedation scale (rass), critical care pain observation tool (cpot), and confusion assessment method-icu (cam-icu) scores and systematic de-escalation of sedatives through adequate pain and delirium management. outcome measures included the frequency and magnitude of rass, cpot, and cam-icu scores, and analgesic and sedative medication prescription/administration per day of mechanical ventilation (mv). 
a total of 107 patients met inclusion criteria (67 pre-pathway and 40 post-pathway). there was no statistically significant difference in the median frequency of rass (4.8 vs. 4.1) and cpot (6.7 vs. 6.4) assessments per day of mv or in median rass (-3 vs. -3) and cpot (0 vs. 0) scores. mean acetaminophen usage increased from 85.1% to 100% (p< 0.001) post-pathway implementation. there was no statistically significant difference in mean opioid or propofol usage; however, a trend toward increased morphine and decreased propofol usage was observed post-pathway. analgesia-first sedation pathway implementation trended toward increased opioid analgesic and decreased sedative use; however, only the increase in acetaminophen usage was significant. this highlights challenges in changing unit-based practices, and future directions include a focus on the frequency and reliability of pain, agitation and delirium assessment. interdisciplinary coordination and communication remain necessary for effective unit-based practice changes. andexanet alfa (andexanet), a modified, recombinant human factor xa (fxa) molecule, binds and sequesters fxa inhibitors. in a phase 2 study of apixaban, rivaroxaban, edoxaban, and enoxaparin in healthy volunteers, andexanet rapidly reversed pharmacodynamic markers of anticoagulation. here, the ability of andexanet to reverse the anticoagulant activity of betrixaban was investigated. in a randomized, double-blind, phase 2 study in healthy subjects, andexanet (n=12) or placebo (n=6) was administered intravenously following 80 mg po qd betrixaban dosed to steady state (7 days). in cohort 1 (andexanet bolus only), subjects (n=6) received an 800-mg andexanet bolus 3 hours after the last betrixaban dose (day 7) or placebo (n=3). in cohort 2 (andexanet bolus plus 2-hour infusion), subjects (n=6) received an 800-mg andexanet bolus 4 hours after the last betrixaban dose, followed by a 2-hour infusion of andexanet (8 mg/min) or placebo (n=3). 
endpoints included safety and pharmacodynamic markers of anticoagulation reversal. following treatment with betrixaban in cohort 1, andexanet rapidly decreased anti-fxa activity from 29.9±11.6 to 6.5±4.5 ng/ml, while anti-fxa levels following placebo were largely unchanged (45.2±44.8 to 43.6±37.7 ng/ml). unbound betrixaban plasma concentration decreased from 12.3±5.6 to 3.6±2.7 ng/ml with andexanet, but remained constant following placebo administration (18.3±17.9 to 19.3±18.1 ng/ml). similar results were observed in cohort 2 following the andexanet bolus (2 minutes after bolus), and the effects were maintained during the 2-hour infusion of andexanet. for cohort 1, thrombin generation was restored in 6/6 (100%) and 1/3 (33.3%) of andexanet-administered and placebo subjects, respectively. for cohort 2, thrombin generation was restored in 5/6 (83.3%) of andexanet subjects versus 1/3 (33.3%) of placebo subjects. andexanet was well tolerated; there were no thrombotic events or serious/severe adverse events. andexanet was well tolerated and rapidly reversed the anticoagulation effects of betrixaban in healthy subjects. these and other studies indicate that andexanet could be a universal antidote for fxa inhibitors. andexanet alfa (anxa), a recombinant human fxa molecule, reverses the anticoagulant activity of fxa inhibitors. in studies of healthy volunteers, anxa showed dose-dependent reversal of direct and indirect fxa inhibitors in tissue factor (tf)-initiated thrombin generation (tg). we compared rivaroxaban-induced inhibition of tg initiated via the extrinsic pathway (tf) versus the intrinsic pathway (non-tf). tf-initiated tg was measured using a calibrated automated thrombogram (cat) and ppp-reagent. non-tf-initiated tg was measured using cat and actin fs. anti-fxa activity was measured using an anti-fxa chromogenic assay. pooled plasma was spiked with rivaroxaban or rivaroxaban+anxa; tg, anti-fxa activity, and clot formation were measured. 
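The anti-FXa changes quoted above are usually summarized as a percent reversal from baseline; a trivial helper (the function name is illustrative, not from the study):

```python
def percent_reversal(baseline, post_treatment):
    """Percent reduction in anti-FXa activity relative to baseline,
    e.g. for the cohort-1 mean values reported in the abstract."""
    return 100.0 * (baseline - post_treatment) / baseline
```

Applied to the cohort-1 means (29.9 down to 6.5 ng/ml), this gives roughly 78% reversal of anti-FXa activity.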
for low tf-initiated clot formation, thromboelastography profiles were measured. anxa alone had minimal effect on endogenous thrombin potential (etp). anxa fully reversed rivaroxaban-induced anticoagulation in the actin fs assay, independent of the anxa-tfpi interaction. modulation of tf activity was assessed by correlating etp versus anti-fxa activity with rivaroxaban or rivaroxaban+anxa. rivaroxaban dose-dependently inhibited tf-initiated tg as anti-fxa activity increased. at similar anti-fxa levels, rivaroxaban+anxa had higher etp than rivaroxaban alone, but not in the actin fs assay. clot formation was studied in plasma using thromboelastography without rivaroxaban. anxa did not affect thromboelastography parameters, with or without recombinant tissue plasminogen activator (rtpa). when clot formation was initiated by low tf without rtpa, anxa reduced the thromboelastography r parameter, but not the maximum amplitude. the fibrin clot was lysed at low rtpa concentrations, resulting in well-segregated coagulation and fibrinolysis. with the optimal rtpa concentration, fibrin clots formed at each tf concentration were compensated by the fibrinolytic activity of rtpa. without a fxa inhibitor, anxa had minimal effect on tf- or actin fs-initiated tg, with no direct effect on rtpa function. anxa dose-dependently and completely reversed rivaroxaban-induced inhibition of tg initiated by intrinsic or extrinsic pathways, but had different effects on etp due to the anxa-tfpi interaction. there is a growing body of evidence relating poor outcomes to off-hour management. studies investigating the effect of overnight extubation (oe) have produced mixed results, and limited data are available for brain-injured patients. there may also be a tendency to limit oe due to decreased staffing levels at night. we sought to determine the safety of oe and the risk factor profiles associated with extubation failure (ef) in this cohort. 
we conducted a retrospective review of mechanically ventilated patients in a single-center in-house database. exclusion criteria included limitations in care, tracheostomy placement, self-extubation, and death prior to extubation. the primary outcome was ef, defined as non-elective endotracheal intubation within 72 hours. ef rates were compared between daytime (6am-5:59pm) and overnight (6pm-5:59am) extubation cohorts. in-hospital mortality served as a secondary outcome. among 997 identified patients, 783 (78.5%) underwent daytime extubation (de) and 214 (21.5%) oe. ef rates did not differ between de and oe (10.1% and 8.4%, respectively; p=0.46), and oe was not significantly associated with ef after multivariable adjustment for clinical severity indicators (or 1.16, 95% ci 0.65-2.06; p=0.61). compared to de, oe was more often performed in elective post-operative patients (29.9% vs 21.1%; p=0.006) with lower apache-ii scores (median 11 vs 13; p=0.002) and shorter durations of mv (median 0.3 vs 1.4 days; p<0.001). higher apache-ii score, longer duration of mv, and admission diagnoses of acute vascular injury or neuromuscular disease were associated with ef. there was no difference in mortality (p=0.91). in our cohort, oe was not associated with increased ef or mortality. our results suggest that oe can be performed safely if standard extubation criteria are met in low-risk patients. these data provide a basis for subsequent, more robust studies. case series have reported reversible left ventricular dysfunction, also known as stress cardiomyopathy or takotsubo cardiomyopathy, in the setting of acute neurological diseases such as subarachnoid hemorrhage. the nature of the association between various neurological diseases and takotsubo remains incompletely understood. we performed a cross-sectional study of all adults in the national inpatient sample, a nationally representative sample of u.s. hospitalizations, from 2006-2013. 
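the unadjusted comparison of extubation-failure rates above (10.1% vs. 8.4%, p=0.46) is a standard two-proportion z-test. a minimal sketch; the event counts 79/783 and 18/214 are inferred from the reported percentages and are assumptions, since exact counts are not stated:

```python
from math import erf, sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # pooled standard error
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# EF counts inferred from the reported rates (hypothetical exact values)
z, p = two_proportion_z(79, 783, 18, 214)   # ~10.1% vs ~8.4%
```

with these assumed counts the sketch lands near the reported p=0.46; the multivariable adjustment in the abstract would additionally require patient-level covariates.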
our exposures of interest were primary diagnoses of acute neurological disease, defined by icd-9-cm diagnosis codes. our outcome was a diagnosis of takotsubo cardiomyopathy. binary logistic regression models were used to examine the associations between our prespecified neurological diagnoses and takotsubo cardiomyopathy after adjustment for demographics. we identified 18,321,298 adults with a primary acute neurological diagnosis and 231,416,640 patients admitted to the hospital without one. among neurological diagnoses, subarachnoid hemorrhage (odds ratio [or], 9.20), status epilepticus (or, 6.31; 95% ci, 5.21-7.63), transient global amnesia (or, 2.71; 95% ci, 1.66-4.43), and meningoencephalitis (or, 2.36; 95% ci, 1.91-2.92) were most strongly associated with takotsubo cardiomyopathy. weaker associations were present for ischemic stroke (or, 1.18; 95% ci, 1.08-1.30) and migraine headache (or, 1.44; 95% ci, 1.31-1.59). intracerebral hemorrhage and guillain-barré syndrome were not significantly associated with takotsubo cardiomyopathy. in our multivariable model, female sex was significantly associated with takotsubo (or, 5.09; 95% ci, 4.85-5.35). we found associations with takotsubo cardiomyopathy for several acute neurological diseases besides subarachnoid hemorrhage. gram-negative meningoventriculitis (gnmv) causes significant morbidity and mortality. in addition to intravenous antibiotics, intrathecal (it) or intraventricular (iv) antibiotics may be used to treat central nervous system (cns) gram-negative infections, including multidrug-resistant gnmv. there are limited studies on the effect of direct cns administration on cerebrospinal fluid (csf) cultures, csf routine parameters, and other clinical outcomes. we conducted a retrospective chart review of all patients who received it or iv antibiotics for gnmv since 2009. 
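the odds ratios above come from multivariable logistic regression, which cannot be reproduced without patient-level data; for a single binary exposure, though, an unadjusted or with a wald (woolf) confidence interval reduces to simple arithmetic on a 2x2 table. a sketch with purely hypothetical counts:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a 95% Wald (Woolf) confidence interval
    from a 2x2 table: a/b = exposed with/without outcome,
    c/d = unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(20, 80, 5, 95)
```

an or whose interval excludes 1 would be reported as significant, which is how the non-significant intracerebral hemorrhage and guillain-barré associations above should be read.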
demographics, source of illness, severity of illness (sofa), intravenous and it/iv antibiotic choice, and csf microbiological, drug-level, and routine analyses were collected. time to pathogen clearance from csf culture was also measured. there were 32 inpatient encounters in which iv/it antibiotics were given for gnmv during our study period, of which 24 were cared for in a neurosciences intensive care unit. antibiotics utilized were gentamicin (20), colistimethate sodium (7), amikacin (4), and tobramycin (1). the most common pathogens were p. aeruginosa (8), k. pneumoniae (8), enterobacter sp. (7), and e. coli (5). prior to dosing, median csf white blood cell (wbc) count, protein, and glucose were 2804/ul, 201 mg/dl, and 25 mg/dl, respectively. it/iv antibiotics were dosed a median of 4 times per patient, and clearance of csf cultures occurred in a median of 5 days. there were significant changes in csf wbc (p<.001), protein (p<.001), and glucose (p<.001) between the first and last dose of iv/it antibiotics. twenty-five (78.1%) patients survived to discharge, and 20 (62.5%) were confirmed alive at 6 months. patients who survived to discharge went to rehabilitation (5), home (5), long-term acute care (9), and skilled nursing facilities (6). it and iv antibiotics significantly improve csf wbc, protein, and glucose profiles and clear csf cultures in patients with gnmv; it and iv administration may provide additional benefit to systemic therapy. gram-positive organisms are the most common cause of meningo-ventriculitis. systemic antimicrobial therapy may fail to achieve adequate cerebrospinal fluid (csf) concentrations, particularly against organisms with higher minimum inhibitory concentrations, such as mrsa and vre. direct intraventricular (iv) or intrathecal (it) administration may be beneficial, as it can facilitate high csf levels at the site of infection. 
there are limited studies on the effect of direct central nervous system (cns) administration of antibiotics on csf cultures, csf routine parameters, and other clinical outcomes. we conducted a retrospective chart review of all patients who received it/iv antibiotics for gram-positive meningo-ventriculitis since 2009. demographics, source of illness, severity of illness (sofa), intravenous and it/iv antibiotic choice, and csf microbiological, drug-level, and routine analyses were collected. time to pathogen clearance from csf culture was also measured. there were 30 inpatient encounters in which iv/it antibiotics were given for gram-positive meningo-ventriculitis during our study period, of which 23 were cared for in a neurosciences intensive care unit. antibiotics utilized were vancomycin (28) and daptomycin (2). the most common pathogens were staphylococcus sp. (15), streptococcus sp. (8), and enterococcus sp. (5). prior to dosing, median csf white blood cell (wbc) count, protein, and glucose were 430/ul, 117 mg/dl, and 47 mg/dl, respectively. it/iv antibiotics were dosed a median of 4 times per patient, and clearance of csf cultures occurred in a median of 3 days. there were significant changes in csf wbc (p<.001), protein (p<.001), and glucose (p=.045) between the first and last dose of iv/it antibiotics. twenty-nine (96.7%) patients survived to discharge, and 19 (63.3%) were confirmed alive at 6 months. it and iv antibiotics significantly improve csf wbc, protein, and glucose profiles and clear csf cultures in patients with gram-positive meningo-ventriculitis; it and iv administration may provide additional benefit to systemic therapy. 
use of prothrombin complex concentrate (pcc) for urgent reversal of anticoagulant-associated coagulopathy is increasing. at the university of illinois hospital (uih), an antithrombotic reversal guideline was developed in may 2016 to assist licensed practitioners in choosing the appropriate reversal agent and optimal dosing and to improve timely administration of pcc. the current study examined the safety and efficacy of pcc used for the urgent reversal of anticoagulant-associated coagulopathy before and after the development of the antithrombotic reversal guideline. this was a retrospective chart review of adult patients who received pcc as the only hemostatic agent at uih from january 2008 to april 2017. the primary endpoint was hemostasis, and secondary endpoints included thromboembolic events and time to pcc administration. there were 21 and 17 patients who received pcc before and after the antithrombotic reversal guideline, respectively. the most frequent cause of coagulopathy was warfarin (81% and 76%, respectively), and the most frequent indication for pcc was acute intracranial hemorrhage (81% and 76%, respectively). 3-factor pcc was more frequently used before the guideline and 4-factor pcc after the guideline. in patients presenting with warfarin-induced major bleeding, the target inr <1.4 was achieved in 71% and 62% of patients before and after the guideline, respectively. cessation of bleeding in patients on direct oral anticoagulant (doac) therapy was difficult to assess clinically. thromboembolic events were observed in 38% and 6% of patients, respectively. median time to pcc administration from its initial order was 107 minutes and 80 minutes, respectively. hemostasis was observed at similar rates in the warfarin group before and after the development of the reversal guideline, but more thromboembolic events were observed before the guideline. 
in order to further reduce pcc administration time, a change in workflow has been made to administer pcc in a timely manner. dexmedetomidine, a selective alpha-2 adrenoreceptor agonist that inhibits sympathetic neuronal activity, is a mild sedation agent. two recent case reports showed a reduced norepinephrine (ne) requirement in septic shock with clonidine, a less selective alpha-2 agonist. increased vasopressor responsiveness (vr) has also been observed with dexmedetomidine in cardiovascular surgical settings. sympatholytic effects of the alpha-2 agonists reverse vascular desensitization due to high levels of sympathetic activity in sepsis. depletion of intra-neuronal catecholamines with reserpine has been shown to increase vr. in septic sheep infused with escherichia coli, clonidine reduced renal sympathetic tone and restored vr. additionally, alpha-2 agonists have been shown to decrease pro-inflammatory cytokines, reduce mortality, improve capillary perfusion deficits, and lower arterial lactate in animal sepsis models. a prospective trial in human septic shock is in the pipeline. we report decreases in vasopressor requirement with initiation of dexmedetomidine in two patients with brain injury. a 51-year-old woman presented with a high-grade subarachnoid hemorrhage and concomitant reverse takotsubo cardiomyopathy. her clinical course was complicated by septic shock secondary to aspiration pneumonia at admission. when dexmedetomidine was started after 36 hours of ne infusion, a steady decrease in ne dosage was observed until its discontinuation. increased vr was also observed in a 33-year-old man being treated for new-onset refractory status epilepticus. on hospital day 10, the patient continued to have stimulus-induced seizures on ketamine, midazolam, and pentobarbital infusions and required ne to maintain an adequate mean arterial pressure. 
when dexmedetomidine was added, a decrease in ne infusion was observed within an hour and continued for six hours, until the patient no longer required vasopressor therapy. these findings are consistent with the aforementioned reports of restored vr by alpha-2 agonists in septic shock and warrant further investigation of the possible beneficial effects of the attenuated hyperadrenergic state conferred by alpha-2 agonists in various neurocritical care settings. decreasing the amount of time a patient remains intubated has been shown to reduce multiple negative outcomes: by extubating patients earlier, the risks of infection, prolonged immobility, and delirium are reduced. in early 2016, this nsicu was chosen to participate in the society of critical care medicine's icu liberation collaborative, which was focused on implementation of the abcdef bundle, or icu liberation. the successful implementation of the bundle led to a decrease in the amount of time neurocritically ill patients were intubated. the bundle elements began to be rolled out in june 2016 (end of the 1st quarter). included in the bundle's rollout was the creation of a respiratory clinical specialist role to help the interprofessional team with the respiratory components of the bundle. this role was filled by a full-time respiratory care practitioner dedicated to the nsicu who helped ensure standards were being met. additionally, as part of the bundle's implementation, a spontaneous awakening trial and spontaneous breathing trial algorithm was developed and initiated. this algorithm relied on interprofessional collaboration between nursing and respiratory therapy, with communication to the provider, and was rolled out in september 2016 (end of the 3rd quarter). 
ventilator o/e ratios were 0.953, 0.989, 0.827, and 0.798 for the four quarters of 2016, and 0.844 and 0.700 for the first two quarters of 2017. the bulk of the research demonstrating the benefits of the bundle elements was conducted in medical and/or surgical patient populations. the neurocritical care patient population is very specialized and has several nuances that may affect the way the various elements need to be implemented. through this process, we have found that the techniques suggested within each element can positively impact the neurocritical care patient population. the cognitive reserve hypothesis refers to inter-individual differences in the ability of patients to cope with brain pathology. cognitive reserve can be measured by surrogate markers such as education and occupation and has been shown to be an important predictor of outcomes in alzheimer disease, multiple sclerosis, and traumatic brain injury. in this prospective longitudinal cohort study, we determined whether cognitive reserve, measured as number of years of education and employment status, predicted 3-month functional outcome of ncc patients. demographic and clinical data, including number of years of education and occupational status, were collected. at three months after discharge, glasgow outcome scale (gos) scores were collected via telephone from patients or surrogate respondents. gos scores were categorized into 'good' (gos 4-5) or 'poor' (gos 1-3) outcome. from march 2016 to july 2016, 35/83 patients with 3-month follow-up data were included. mean age was 56 ± 19 years, 12 (34%) were male, and stroke was the predominant admitting diagnosis. the two groups with good vs poor outcomes did not differ in age, gender, or race in univariate analysis, although employment status was statistically different between the two groups. in multivariate logistic regression, neither employment nor education was a significant predictor of good vs poor outcome (p = 0.34, p = 0.88). 
prognostication in neurocritical care patients is difficult, and the effect of cognitive reserve needs to be studied further. our current sample size is small, and as enrollment continues we will determine the relationship between cognitive reserve and 3-month functional outcome. fever commonly occurs in patients with spontaneous intracerebral hemorrhage (sich); however, it is non-infectious in the majority of cases. blood cultures (bcx) are often obtained as part of a fever workup, yet their utility may be limited, and false-positive results may potentially compromise patient care. we hypothesized that blood cultures obtained in the first 48 hours would more likely be false-positive. we performed a retrospective chart review of patients admitted to a tertiary medical center with a diagnosis of spontaneous intracerebral hemorrhage. patients with secondary causes of ich, as well as those transitioned to comfort measures only, were excluded. data obtained included demographics, clinical parameters of ich, and blood culture results. blood culture results and charts were reviewed for adjudication of false-positive and true-positive cultures. of 654 included patients with sich, 226 (34%) had 1065 blood cultures obtained. 52 cultures were positive, of which 23 were classified as false-positive and 29 as true-positive. false-positive results were more common in the first 2 days (17 vs. 6 thereafter), while true-positive results were more common after the first 48 hours (17 vs. 12 in the first 2 days) (p=0.018). early blood cultures in patients with sich are more commonly non-infectious. in line with prior published data, our results demonstrate the high cost and limited yield of blood cultures within the first 48 hours. predictive energy expenditure (pee) equations are commonly used in lieu of indirect calorimetry (ic) due to cost and limited resources; however, these equations may not be as accurate as ic in estimating resting energy expenditure (ree) in critically ill patients. 
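the early-versus-late comparison of false- and true-positive cultures is a 2x2 chi-square test. a sketch using the counts reported above (false-positive: 17 early / 6 late; true-positive: 12 early / 17 late) gives a p-value close to the reported 0.018; the abstract does not state whether a continuity correction was applied, so this uncorrected version is an assumption:

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (df=1, no continuity correction) for a 2x2 table
    [[a, b], [c, d]], with the p-value from the df=1 survival function."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(chi2 / 2))   # P(chi2_1 > x) = erfc(sqrt(x/2))
    return chi2, p

# counts from the abstract: false positives 17 early / 6 late,
# true positives 12 early / 17 late
chi2, p = chi2_2x2(17, 6, 12, 17)
```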
the purpose of this study was to compare pee and measured energy expenditure (mee) in critically ill adults with acute brain injury. this was a retrospective review of adult patients admitted with acute brain injury between may 1, 2014 and april 1, 2016 who had ic performed. three predictive equations (pe), harris-benedict (hbe), penn state university, and mifflin-st jeor (msj), were compared with ic results. subgroup analyses included a modified aspen weight-based equation and stratification of patients based on bmi and type of acute brain injury. 144 patients met inclusion criteria. comparison of the pee estimated by the three predictive equations with the mee from ic found no significant difference. high degrees of interpatient variability were discovered in each anova analysis, with standard deviations ranging from 17-29%. despite no difference found between pee and mee, pearson's correlations indicated weak associations when hbe, penn state, and msj were individually compared to mee (r = 0.372, 0.409, and 0.372, respectively). in patients with a bmi <30 kg/m2, a significant difference was found (p=0.0006), with pee underestimating the ree. additionally, in aneurysmal subarachnoid hemorrhage a significant difference was observed between pee and mee (p=0.005). the results of this study highlight the importance of using ic whenever feasible due to the interpatient variability of the ree of critically ill patients with acute brain injury. although predictive equations appear to produce estimates similar to ic, interpatient variability warrants more accurate measurement with ic to optimize nutrition in patients with acute brain injury. introduction: 4-factor prothrombin complex concentrate (pcc) should be administered as soon as possible for reversal of anticoagulation in the setting of life-threatening bleeding or urgent procedures. 
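harris-benedict and mifflin-st jeor are published closed-form equations; a sketch of the two weight/height/age-based formulas (the penn state equation additionally requires minute ventilation and maximum temperature, so it is omitted here):

```python
def mifflin_st_jeor(weight_kg, height_cm, age, male):
    """Mifflin-St Jeor resting energy expenditure (kcal/day)."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + (5 if male else -161)

def harris_benedict(weight_kg, height_cm, age, male):
    """Original Harris-Benedict equation (kcal/day)."""
    if male:
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.775 * age
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age

# example: a 70 kg, 175 cm, 40-year-old man (illustrative values)
msj = mifflin_st_jeor(70, 175, 40, male=True)
hbe = harris_benedict(70, 175, 40, male=True)
```

the roughly 2% spread between the two estimates for this example is small next to the 17-29% interpatient standard deviations reported above, which is the study's argument for measuring rather than predicting ree.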
limited information is available on the safety, efficacy, and time to administration of pcc when administered at high infusion rates. on march 21, 2017, grady health system implemented a rapid pcc administration strategy while attempting to reduce times from order entry to administration as a quality improvement initiative. this irb-approved, retrospective evaluation includes pcc administrations 90 days pre- and post-protocol implementation. after protocol implementation, pcc doses were prepared in up to four 60-ml syringes, dependent on the ordered dose. each syringe was administered over 2 minutes, not exceeding a rate of 750 iu/minute. the primary objective of this study was to evaluate the safety of a rapid administration strategy for pcc. secondary objectives included turnaround times and effectiveness of inr reversal in patients previously on warfarin. results: 52 unique pcc administrations were identified: 29 in the pre-implementation cohort and 23 in the post-implementation cohort. most pcc administrations were in the setting of spontaneous or traumatic intracranial hemorrhage. there were no documented infusion-related adverse events, with the exception of a possible pcc infiltration post-implementation, which resolved with supportive care only. the median order entry to administration time was higher in the post-implementation group (52 vs. 42 minutes). 18 administrations in the pre-cohort and 11 in the post-cohort were for warfarin reversal. a greater percentage of patients previously on warfarin were reversed to an inr <1.4 in the post-cohort compared to the pre-cohort (81.8% vs 72.2%, respectively). this retrospective evaluation suggests that rapid intravenous push administration of 4-factor pcc is safe and effective. time to administration was longer after implementation of rapid pcc administration, which may have been due to operational limitations. icu readmission is defined as a return to the icu during the same hospital admission. 
there are multiple studies of medical and general surgical recidivism; however, there are limited data on icu readmissions following spine surgery. the aim of this study was to evaluate factors associated with icu readmissions following spine surgery. patients requiring icu admission following spine surgery from june 2013 to june 2017 were studied. variables included age, gender, icu and hospital disposition, icu and hospital length of stay, bmi, comorbidities, surgical location, number of previous surgeries and vertebrae manipulated, estimated blood loss, post-operative blood transfusions, and cause of readmission. a 1:1 matched control group based on age, bmi, and location of surgery was identified. thirty-two patients required readmission following spine surgery during the study period. there was a higher prevalence of preoperative atrial fibrillation in the readmission group (20% vs. 5%, p=0.04). ebl (1041 vs 1214 ml, p=0.9) and lowest maps (60 vs 59.2 mmhg, p=0.7) were not significantly different between the two groups. we found a higher mortality rate (22% vs 0%, p=0.01) and longer icu (60.7 vs 39.5 hours, p=0.06) and hospital los (17.28 vs 7.87 days, p=0.001) in the readmission group. respiratory distress (25%) was the most common reason for readmission, followed by cardiovascular instability (13%). discharge rates to inpatient rehabilitation and nursing facilities were similar for both groups; however, 42% of the control group went directly home as opposed to 19% of the readmission group. complex spine patients who experience icu recidivism have a longer hospital stay and a higher incidence of death within 5 years of their index procedure, and they are less likely to be discharged home. preoperative atrial fibrillation correlates with an increased incidence of postoperative icu readmission. further studies examining postoperative fluid and pain management are needed. to demonstrate the feasibility of exenatide infusion for hyperglycemia following acute brain injury. 
adult patients with acute brain injury and two blood glucose concentrations >150 mg/dl were included; the exenatide infusion was started within 48 hours of admission and continued per protocol for a maximum duration of 48 hours. the primary endpoint was feasibility, defined as <25% of subjects experiencing severe hypoglycemia (<40 mg/dl) while targeting a blood glucose goal of up to 180 mg/dl. descriptive endpoints were also collected. data are presented as medians [interquartile range] or percentages. a total of eight patients received exenatide (age 64.0 years [58.8, 67.8], 87.5% male, 50.0% caucasian, 50.0% with a history of diabetes, a1c 6.5% [5.5, 7.3]). admitting diagnoses were intracerebral hemorrhage (n=3), acute ischemic stroke (n=3), subarachnoid hemorrhage (n=1), and subdural hematoma (n=1). glasgow coma scale score was 10.5 [7.0, 14.0] and sequential organ failure assessment was 2.0 [1.0, 4.0]. based upon predefined criteria, feasibility was met, with 0% of subjects experiencing severe hypoglycemia, 87.5% achieving the blood glucose goal, and 0% experiencing nausea requiring discontinuation. blood glucose was controlled during the 48-hour infusion; intravenous exenatide infusion is feasible for the treatment of hyperglycemia following acute brain injury. extubation failure remains a common complication in critical care patients and is associated with increased intensive care unit and hospital lengths of stay, hospital costs, morbidity, and mortality. the most common cause of reintubation is laryngeal edema, often identified by the presence of a high-pitched inspiratory whistling sound known as post-extubation stridor (pes). providers in the neurocritical care unit (nccu) at a large urban academic medical center noted higher than normal rates of pes. to reduce the rates of pes and reintubation without delaying extubation, a clinical pathway was created by an interdisciplinary team. the purpose of the pathway was to aid in the identification of patients expected to develop pes and to guide prophylactic treatment. 
prior to project implementation, all providers in the nccu completed hands-on training with practice in completing the pathway in the form of a checklist. during the 12-week implementation phase, checklists were completed daily on all intubated patients during rounds. during the 12-week trial, there were a total of 606 ventilator days. there were 531 completed checklists, yielding an 87.62% compliance rate for utilization of the clinical pathway. of the 56 patients who were extubated during the trial, 54 had a checklist completed, generating 96.43% compliance on the day of extubation. a chi-square analysis was performed to evaluate outcomes following all non-palliative extubations during the 12-week pre-implementation (n=43) and post-implementation (n=56) periods. implementation of the pathway was associated with statistically significant reductions in rates of pes (χ2(1, n=99) = 6.16, p<0.02), reintubation (χ2(1, n=99) = 5.54, p<0.02), and reintubation due to pes (χ2(1, n=99) = 8.32, p<0.005). the clinical pathway implemented in our nccu was safe and effective in reducing rates of pes, reintubation, and reintubation due to pes. the agency for healthcare research and quality (ahrq) identified postoperative deep vein thrombosis (dvt) or pulmonary embolism (pe), commonly referred to together as venous thromboembolism (vte), as a hospital-acquired complication and developed a mechanism to report its rate using administrative data. postoperative vte rate reduction became a top priority for the university of illinois hospital (uih) due to its high yearly rate, especially among patients in the neurosciences intensive care unit (nsicu). therefore, a quality improvement team in the nsicu implemented a vte bundle and analyzed its effect on the vte rate. the vte bundle was initiated on all neurosurgery and neurology patients admitted to the nsicu beginning in march 2016. 
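the reported chi-square statistics can be checked against their stated p-value thresholds directly, since for df=1 the upper-tail probability is erfc(sqrt(x/2)). a sketch:

```python
from math import erfc, sqrt

def p_from_chi2_df1(chi2):
    """Upper-tail p-value for a chi-square statistic with df=1,
    using P(chi2_1 > x) = erfc(sqrt(x/2))."""
    return erfc(sqrt(chi2 / 2))

# statistics reported for the clinical pathway analysis
p_pes = p_from_chi2_df1(6.16)     # reported as p < 0.02
p_reint = p_from_chi2_df1(5.54)   # reported as p < 0.02
p_both = p_from_chi2_df1(8.32)    # reported as p < 0.005
```

each computed value falls under its reported threshold, consistent with the abstract.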
the vte bundle included lower extremity doppler ultrasound within 24 hours of admission; vte education provided to the patient or a family member within 48 hours of admission; daily surveillance of proper use of the mechanical sleeves and mechanical device; low-dose heparin initiation and maintenance therapy; and documentation of activity status. the nursing staff were encouraged to follow the early mobilization protocol. the mean vte rate was 52.9 per 1000 cases in approximately 1 year before and 33.6 per 1000 cases in approximately 1 year after implementation of the vte bundle. the rate of compliance was high on all aspects of the vte bundle: correct placement of the ipc sleeve >85%; functioning ipc device >90%; low-dose heparin >90%; documentation of activity status >94%. no adverse effects (i.e., skin breakdown, major bleeding) were noted during the study period. this was the first time in 3 years at uih that the postoperative vte rate was reduced among nsicu patients based on the ahrq reports. the reduction may be partly attributable to the implementation of the vte bundle; however, further evaluation needs to be performed to determine the effect size of the vte bundle. increasing evidence suggests that large-volume infusions of 0.9% sodium chloride (nacl) for resuscitation are associated with hyperchloremic metabolic acidosis and renal vasoconstriction, leading to an increased risk of acute kidney injury (aki). in patients with neurologic injury, hypertonic (1.5% or 3%) nacl or sodium acetate (naacetate) may be required for therapeutic hypernatremia, treatment of cerebral salt wasting, or elevated intracranial pressure. the primary aim of this study was to determine the incidence of aki in neurologically injured patients receiving intravenous hypertonic nacl and in those who were switched to hypertonic naacetate based on provider preference. this single-center, retrospective study compared patients who received only hypertonic nacl to patients who were switched to naacetate. 
data were collected to assess renal function, hyperchloremia, and metabolic acidosis. a total of 301 patients were screened, and 142 were included. patients who were switched from nacl to naacetate (n=45) had a greater incidence of aki (27% vs. 6%, p<0.001) and hyperchloremia (56% vs. 29%, p=0.01) compared to patients who received only nacl (n=97). the incidence of metabolic acidosis was numerically higher but not statistically significant (15% vs. 11%, p=0.791). on average, hypertonic nacl was switched to hypertonic naacetate on day 3 of treatment, with a mean chloride of 115.7 meq/l at the time of the switch. there was no statistical difference in the administration of nephrotoxic antibiotics, mannitol, vasopressors, or contrast dye between the two groups. receiver operating characteristic (roc) analysis demonstrated that patients who received greater than 2055 meq of chloride over 7 days were more likely to develop aki (sensitivity 72%, specificity 70%, p=0.002, auc 0.70). neurologically injured patients receiving hypertonic sodium therapy who required a switch to hypertonic naacetate had an increased incidence of hyperchloremia and aki. in-hospital complications following acute neurological injury have been a topic of extensive research aimed at reducing morbidity and mortality. however, the incidence and prevalence of in-hospital infections following acute neurological injury have never been studied at the national level. the aim of our study was to determine the frequency and prevalence of in-hospital complications among different patient groups admitted following acute neurological injury. we identified patients with primary diagnoses of ischemic stroke (is), subarachnoid hemorrhage (sah), intracerebral hemorrhage (ich), status epilepticus (se), meningitis, encephalitis, and traumatic brain injury (tbi) from the nationwide inpatient sample database (2011-2014) using the respective icd-9 codes. 
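the roc cutoff of 2055 meq above corresponds to computing sensitivity and specificity at a single threshold. a sketch on small synthetic data; the chloride values and aki labels below are illustrative, not the study's data:

```python
def threshold_performance(values, labels, cutoff):
    """Sensitivity and specificity when a case is called positive
    if its value exceeds the cutoff. labels: True = developed AKI."""
    tp = sum(v > cutoff and y for v, y in zip(values, labels))
    fn = sum(v <= cutoff and y for v, y in zip(values, labels))
    tn = sum(v <= cutoff and not y for v, y in zip(values, labels))
    fp = sum(v > cutoff and not y for v, y in zip(values, labels))
    return tp / (tp + fn), tn / (tn + fp)

# synthetic 7-day cumulative chloride doses (mEq) -- illustrative only
chloride = [2500, 2200, 1900, 1800, 2100, 1500]
aki      = [True, True, True, False, False, False]
sens, spec = threshold_performance(chloride, aki, cutoff=2055)
```

sweeping the cutoff over all observed values and plotting sensitivity against 1-specificity yields the full roc curve whose area the study reports as 0.70.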
common in-hospital complications among the above-mentioned diagnoses were identified using their respective icd-9 codes. patients with primary diagnoses of is (n=1,855,297), sah (n=101,576), ich (n=254,758), se (n=190,701), meningitis (n=46,067), encephalitis (n=23,839), and tbi (n=1,142,155) were identified. in-hospital events such as myocardial infarction (mi), sepsis, pneumonia, deep venous thrombosis (dvt), pulmonary embolism (pe), urinary tract infection (uti), and gi bleed were identified and compared among the patient groups. patients with se experienced high rates of systemic complications: mi (3.6%), sepsis (11.2%), pneumonia (8.6%), dvt (1.7%), uti (16.2%), and gi bleed (0.41%). patients admitted with meningitis had a higher incidence of sepsis (20.5%), pneumonia (9.0%), dvt (2.2%), pe (1.5%), and uti (12.7%) compared to the other groups. uti was the most common in-hospital complication observed. based on our analysis, we report a high incidence of urinary tract infections among all patients admitted following acute neurological injuries; patients with a primary diagnosis of status epilepticus experienced more systemic complications than the other diagnostic groups. macroglossia has been documented in association with prolonged neurosurgical procedures, brainstem injury, phenobarbital administration, and venous/lymphatic congestion of the tongue; however, the exact causation of this condition in the neurocritical care population remains unclear. patients with macroglossia face significant risk of airway compromise, and no interdisciplinary patient safety and management protocol exists. patients admitted to two neuro-icus within a single health system between 2012 and 2017 were reviewed. twenty-five patients with macroglossia were identified. an interdisciplinary patient management protocol was created, instituting airway safety standards, oral care directives, and interventions to promote symptom resolution. 
early consultation to oral and maxillofacial surgery and consideration of early tracheostomy were recommended. seventeen patients (68%) were women. age ranged from 20-81 years. the majority (16/25) of patients were african american. primary diagnoses included status epilepticus (12/25) and stroke (1 sah, 4 ais, 6 ich). nineteen patients received antiepileptic medications before diagnosis. average gcs at symptom onset was 5.8 (range 3-11) and at time of discharge was 8.9 (range 3-15). median symptom onset was hospital day 5 (range 0-39). twenty patients (80%) required tracheostomy. nine (36%) experienced symptom resolution by hospital discharge. two patients received botulinum toxin injection; both experienced symptom resolution. lingual massage was performed in two patients; in both, tongue swelling resolved. tongue lacerations occurred in 10/25 patients (40%), although most were observed following macroglossia onset, arguing against trauma as an inciting event. chlorhexidine oral rinse was discontinued for all except five patients due to concern for angioedema. the endotracheal tube was dislodged in two patients; reintubation was complicated but successful. no trend in pre-existing allergies or antibiotic regimen was apparent. macroglossia is a relatively uncommon but high-risk condition in the neuro icu that warrants further study. care of patients with macroglossia should be standardized in order to ensure airway safety, and an interdisciplinary approach is recommended. one of the biggest challenges of magnetic resonance imaging (mri) examination is the acquisition of high-quality diagnostic images, as it requires neurological intensive care unit (nicu) patients to keep still for a significant time. in situations with poor patient cooperation, unplanned sedation is inevitable, which can lead to complications such as desaturation and hypotension.
we investigated the incidence of and factors related to complicated mri examinations (mri-c) in patients admitted to the nicu. we designed a retrospective study to review the data of 218 patients for whom brain mri was attempted during their stay in the nicu between july 2014 and august 2016. the mri-c group was defined when a patient met one of the following criteria: 1) required sedation for the mri examination due to irritability, 2) developed desaturation, 3) developed hypotension (a blood pressure drop of 40 mmhg or a requirement for inotropic agents), or 4) developed cardiac or respiratory arrest. of 218 patients, 66 (30.3%) developed mri-c. the most common cause of mri-c was unexpected irritability in the mri room. among patients with mri-c, 62 (93.9%) required unplanned sedation; 9, desaturation; 6, hypotension; and none, cardiac or respiratory arrest. higher apache ii scores (p = 0.031) and lower gcs scores (p = 0.047) on admission, as well as use of sedative agents during critical care in the nicu (p < 0.01), were associated with mri-c. in addition, patients with mri-c had longer mri scan times than those without mri-c (p = 0.004). many neurocritically ill patients undergo unsafe mri scans. our findings suggest that severity of illness and the use of sedative agents during management in the nicu are factors related to mri-c. introduction: fulminant hepatic encephalopathy (fhe) with diffuse cerebral edema has a dismal prognosis if transplantation is not performed. novel therapeutic interventions may change this outcome. we reviewed all cases with fhe admitted to our hospital since 2008. in 2010, we developed a multidisciplinary management protocol, mandating transfer of patients entering grade 3 from other icus to the neurosciences icu (nicu) for intracranial pressure (icp) management.
multiple interventions were utilized, including coagulopathy reversal with factor vii and prothrombin complex concentrate (pcc, kcentra), icp device placement, osmotherapy, an aggressive ammonia-lowering regimen with lactulose and rifaximin, early renal replacement therapy, and mild hypothermia for refractory icp, in conjunction with investigation of liver transplantation candidacy. results: twenty-four patients (19 women; mean age of all patients 40 years) were admitted; seven were managed in the micu/sicu and 17 in the nicu. the etiology of fhe was acetaminophen toxicity in 72% of patients. the model for end-stage liver disease (meld) admission scores and liver enzymes did not differ between the micu/sicu and the nicu (mann-whitney test). although the nicu admission ammonia level was higher than in the micu/sicu (168.75 vs 99.50, p = 0.00), the lowest achieved ammonia was lower in the nicu (41.31 vs 78.13, p = 0.022, mann-whitney). patients received icp monitoring (all in the nicu plus 2 in the sicu), and the highest icp recorded was 120 mm hg. the pre- and post-coagulation-reversal inr were 3.37 and 1.3 (p = 0.031, wilcoxon test). seven patients in the nicu received hypothermic treatment. mortality in the micu/sicu was 85.7% (6/7) and in the nicu 41.1% (7/17), p = 0.13 (chi-square test). conclusion: a multidisciplinary approach centered on protocol-driven anti-cerebral-edema management based on novel interventions may improve the outcome of patients with fhe. catheter-associated urinary tract infections (cauti) are among the most common health-care-associated infections (hcais) (gould, 2016). neurological patients in the critical care setting are particularly at risk for cauti due to cognitive, motor, and sensory deficits. in the neuro intensive care unit, despite following recommended cauti reduction bundle guidelines, cauti rates continued to rise over the last five years, reaching 6.45 per 1000 catheter days.
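the chi-square mortality comparison above (micu/sicu 6/7 deaths vs nicu 7/17, p = 0.13) can be reproduced approximately with a continuity-corrected (yates) 2x2 chi-square. the sketch below is a hedged illustration, not the authors' code; for a chi-square with 1 degree of freedom, the p-value can be obtained from the standard-library erfc identity p = erfc(sqrt(stat/2)).

```python
import math

def yates_chi2(a, b, c, d):
    """Continuity-corrected chi-square for the 2x2 table [[a, b], [c, d]],
    returning (statistic, p-value) with p from the 1-df erfc identity."""
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    stat = sum((abs(o - e) - 0.5) ** 2 / e
               for o, e in zip(observed, expected))
    p = math.erfc(math.sqrt(stat / 2))  # P(chi2 with 1 df > stat)
    return stat, p

# died / survived: MICU-SICU 6/1, NICU 7/10 (counts from the abstract)
stat, p = yates_chi2(6, 1, 7, 10)
```

with these counts the corrected statistic is about 2.37 and p is roughly 0.12, consistent with the reported p = 0.13; with cells this small, fisher's exact test would usually be preferred.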
in january 2017, the unit implemented a cauti taskforce to perform a literature review of best practices and subsequent 1:1 peer training and education targeting cauti reduction. in an analysis of organisms causing the infections, e. coli and enterococcus accounted for more than 50% of cautis on the unit. the taskforce (comprised of 13 staff nurses) focused on fecal management, proper cleaning technique, and proper indications of indwelling urinary catheter necessity. using training videos, indwelling urinary catheter care checklists, and real-time feedback on technique, the taskforce performed 1:1 training with bedside staff over four weeks. to ensure undivided attention, the taskforce worked in pairs, enabling one trainer to teach and observe the staff member receiving training while the second trainer provided the necessary clinical duties for the trainee's patients. after implementation, the cauti rate decreased to 3.43 for january-march 2017 and 2.83 for april-june 2017, lowering total cauti events to 23 for fy17 compared to 32 for fy16. implementing a 1:1 training program focused on fecal management, cleaning techniques, and appropriately timed catheter removal can reduce cauti rates in the neuro critical care setting. brain aneurysms can be treated with coil embolization or flow-diverting devices. thromboembolism is a major complication of aneurysmal coil embolization, with an incidence as high as 28%. new flow-diverting devices have been designed with a high-coverage, highly flexible mesh to facilitate the redirection of blood flow. these features can induce blood stagnation and thrombosis. to reduce the risk of thrombosis, the common but unproven practice of dual antiplatelet therapy with aspirin and clopidogrel has been adopted from the cardiac literature. despite some favorable outcomes, clopidogrel "non-responsiveness" has been reported in as few as 5% to as many as 56% of patients, making this agent suboptimal.
this leaves practitioners with other oral p2y12 alternatives, such as prasugrel and ticagrelor, that have not been widely studied in this setting. it is therefore likely that controversy exists among practitioners regarding the optimal antiplatelet agents for neurointerventional procedures. we hypothesized that practices regarding the use of oral antiplatelet agents in neurointerventional procedures are heterogeneous and differ from state to state. using an electronic survey, we aim to identify the different practices surrounding the use of oral antiplatelet agents in neuro-endovascular centers in the united states. an electronic survey will be distributed via the web using survey monkey (seattle, wa). the survey will be posted on the neurocritical care society (ncs) web page. all practicing neuro icu or stroke physicians, pharmacists, physician assistants, or nurse practitioners are eligible to respond to this survey. this survey is approved by the johns hopkins hospital irb and the ncs research committee. eight centers have completed the survey at this point; the results will be analyzed after the survey closing date (9/4/2017). myasthenia gravis (mg) crisis and guillain-barre syndrome (gbs) are immune-mediated diseases that may require mechanical ventilation as part of their management if severe. a comparative analysis of outcomes in terms of length of stay, disability, and mortality between these two disease entities at the national level has not been reported. mechanically ventilated patients with a primary diagnosis of guillain-barre syndrome or myasthenia gravis were identified from the nationwide inpatient sample (nis) database for the years 2012 to 2014. mechanically ventilated mg patients (n=2330, mean age 62 +/- 19.5 years) were older compared to gbs patients (n=2060, mean age 55.9 +/- 20.0 years, p=0.001).
medical co-morbidities (diabetes mellitus, congestive heart failure, coagulopathy, chronic lung disease, and dyslipidemia) were significantly more frequent in mg patients, whereas significantly higher nicotine dependence and alcohol abuse were noted in gbs. significantly higher in-hospital complications of pneumonia and urinary tract infection were noted in gbs. disease severity measured by the apr-drg severity index and rates of treatment with intravenous immunoglobulin and plasma exchange were comparable. length of stay (25.3 ± 18.2 days, p < 0.0001), hospital charges ($353361.2 ± 293863.2 vs $232160.12 ± 222881.3, p = 0.001), and moderate to severe disability (86.6% vs 46.2%, p < 0.001) were significantly higher for gbs patients compared to mg. in-hospital mortality was comparable (8.7% gbs vs 8.6% mg, p = 0.93). in multivariate analysis, after adjusting for confounders including treatment, myasthenia gravis patients had significantly less disability (or 0.06, 95% ci 0.03-0.10) and shorter length of stay (or 0.32, 95% ci 0.16-0.61). mechanically ventilated gbs patients have higher in-hospital complications, length of stay, and disability compared to mg. this may reflect a delay in diagnosis of gbs at admission and poor response to immunotherapy in certain gbs variants. betrixaban is an inhibitor of factor xa (fxa) for prophylaxis of venous thromboembolism (vte) in at-risk patients hospitalized for acute medical illness. a phase 3 trial (apex) compared extended-duration anticoagulation with betrixaban to enoxaparin in acute medically ill patients; the effect of patient characteristics on population pharmacokinetics and exposure-response relationships is analyzed here. patients received betrixaban (35-42 days; n=3,759) or enoxaparin (10±4 days; n=3,754). the primary efficacy and safety endpoints were composite occurrence of vte events and incidence of major bleeding, respectively.
betrixaban dose was 80 mg po qd (40 mg po qd for patients with severe renal insufficiency or requiring a concomitant p-glycoprotein inhibitor). pharmacokinetic samples were collected -5 hours or 10-30 hours after the most recent dose of study medication. patient characteristics included age, sex, race, region, body weight, crcl category, and specific p-glycoprotein inhibitor. 3,146 pharmacokinetic samples were analyzed. at 80 mg, the projected concentration was 18.8 ng/ml at 2 hours post-dose and 16.1 ng/ml at 20 hours post-dose, showing a stable daily concentration. coadministration of 2 p-glycoprotein inhibitors on the day of sampling more than doubled the betrixaban concentration, to ~35 ng/ml at 20 hours post-dose. at 40 mg, the projected concentration was 7.2 ng/ml at 20 hours post-dose, indicating a greater-than-dose-proportional exposure relationship. patient age, sex, weight, crcl category, p-glycoprotein inhibitors, and region were significant covariates affecting betrixaban pharmacokinetics. the exposure-response relationship for the primary efficacy endpoint was not significant, but the relationship between betrixaban concentration and major/clinically relevant nonmajor bleeding was significant in multivariate testing (p=0.011). the betrixaban pharmacokinetic profile exhibited stable serum concentrations with qd dosing. several covariates had a 15%-20% effect on betrixaban concentration, but no effect on efficacy/safety. betrixaban dose should be adjusted to 40 mg for patients taking amiodarone or clarithromycin, but not other p-glycoprotein inhibitors. andexanet alfa is being investigated for reversal of anticoagulation by factor xa (fxa) inhibitors. a pharmacokinetic/pharmacodynamic model, developed in healthy subjects, predicted the andexanet regimen required to reverse anticoagulation by fxa inhibitors. the current analysis validated the pharmacokinetic/pharmacodynamic model using interim data from the annexa-4 study in patients with acute major bleeding.
in annexa-4, an ongoing prospective, open-label study, bleeding anticoagulated patients received an iv andexanet bolus (400 or 800 mg) followed by a 120-minute infusion (4 or 8 mg/min). anti-fxa activity was measured before andexanet administration (baseline), at end of bolus (eob), at end of infusion, and at 4, 8, and 12 hours after infusion. the relationship between baseline anti-fxa activity and reversal in healthy subjects was derived from the pharmacokinetic/pharmacodynamic model and used to predict percent reversal for patients with acute major bleeding. from the first interim analysis of annexa-4, 73 patients (apixaban, n=39; rivaroxaban, n=34) had plasma levels available for model qualification, although 7 did not meet criteria for inclusion in the safety analysis and 27 did not meet criteria for the efficacy analysis. the mean observed percent reversal of anti-fxa activity for rivaroxaban and apixaban was well predicted by the healthy-subject pharmacokinetic/pharmacodynamic model; the point estimates fell within the 90% confidence intervals of the predicted values. the percent reversal at eob for rivaroxaban and apixaban was 74.4 [58.3-90.7] and 83.9 [75.3-92.5], respectively, compared with 76.3 and 84.1 predicted by the model. the predicted reversal closely fit the observed confidence intervals through the first 4 hours for both rivaroxaban and apixaban; the fit extended through all evaluated time points for rivaroxaban but fell slightly outside the post-4-hour time points for apixaban, possibly due to higher baseline anti-fxa activity levels for apixaban. the pharmacokinetic/pharmacodynamic model developed in healthy subjects closely predicted the extent of reversal of anti-fxa activity for apixaban and rivaroxaban in patients with major bleeding. risk factors and methods to predict extubation failure are well established for patients in medical and surgical icus. literature on patients who fail extubation in neurological icus is limited.
the intention of this study was to collect descriptive information on patients with neurological injuries who failed liberation from mechanical ventilation. we performed a retrospective review of all patients with acute neurological injury who were admitted to our neuro icu and who required reintubation within 72 hours of discontinuation of mechanical ventilation between january 2012 and february 2017. we identified 60 patients intubated primarily due to neurological pathology who required reintubation within 72 hours after initial extubation over the 5-year study period. the majority of reintubated patients (n=43; 71.7%) had a positive fluid balance prior to failed extubation. twenty-six of the reintubated patients had a concurrent underlying chronic cardiac and/or pulmonary disease. five patients were placed on noninvasive ventilation post extubation. a low glasgow coma scale score and absence of basic brainstem functions (gag and cough reflexes) were only minimally predictive of extubation failure. most of our reintubated patients did not have significant supratentorial midline shift or an insult to the posterior circulation or dominant hemisphere. in patients with primary brain injury who required reintubation, a positive fluid balance prior to extubation may confer a lower rate of successful extubation. lesion location and supratentorial midline shift may not be tightly associated with extubation success. overall, our reintubation rate is quite low. early tracheostomy may play a small but significant role in this low rate of reintubation. further studies may be useful in creating a scoring system to identify the likelihood of extubation success in patients with neurological injury. surgical prophylaxis guidelines for evd insertion recommend peri-procedural antibiotics rather than prolonged antibiotic administration for the duration of evd placement. several small studies have shown that prolonged systemic antibiotic use does not reduce the incidence of catheter-related ventriculitis.
prolonged use is also associated with a higher rate of multi-drug-resistant (mdr) infections. this study aims to show that prolonged antibiotic administration following evd insertion is potentially harmful. this is a single-center, retrospective chart review. all 100 patients admitted to our hospital who had an evd placed from january 2000 to march 2017 were identified. patients with preceding infections, incomplete data, or an uncertain infection diagnosis were excluded. sixty-nine patients were analyzed. documented variables included demographics, comorbidities, indications for evd, duration of antibiotic therapy, infections, and organism sensitivities. eight patients (12%) did not receive any antibiotic therapy; the rest received cefazolin following evd insertion. infections occurred in 22 of 69 (32%) patients; 13 of 22 (59%) were mdr bacteria. ventriculitis occurred in 4 (6%) patients, and 2 of these were resistant to cefazolin (mdr). ventriculitis was not associated with the use or duration of antibiotic therapy. graphical analysis showed that the probability of any infection decreased during the first 3 days of antibiotic prophylaxis. after 3 days, the longer patients remained on prophylactic cefazolin, the higher the probability of infection (spearman rank correlation). patients who received antibiotics for >3 days were 2.5 times as likely to develop mdr infections (95% ci 0.60 to 10.3; p=0.3). cefazolin may prevent infections for the first 3 days after evd insertion. however, prolonged administration increased the risk of mdr bacterial infections. a randomized study comparing periprocedural with prolonged (>72 hrs) antibiotic use is needed to resolve this controversy. each year more than 13,000 deaths are associated with urinary tract infections. eighty percent of all utis are associated with an indwelling catheter. neuroscience intensive care units (neuro-icus) have the highest rates of catheter-associated urinary tract infections.
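the evd abstract above cites a spearman rank correlation between prophylaxis duration and infection probability. as an illustrative sketch only (synthetic, tie-free data; not the study's data or pipeline), spearman's rho is pearson correlation on ranks, which for tie-free data reduces to the classic formula rho = 1 - 6*sum(d^2)/(n*(n^2 - 1)):

```python
# Spearman rank correlation, tie-free case (illustrative sketch only).

def ranks(values):
    """1-based ranks of a tie-free sequence."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# hypothetical data: days on prophylactic cefazolin vs a made-up
# infection-probability estimate that rises monotonically with duration
days = [4, 5, 6, 7, 8, 9]
prob = [0.10, 0.14, 0.19, 0.25, 0.32, 0.40]
rho = spearman_rho(days, prob)
```

a perfectly monotone increasing relationship like the synthetic one above gives rho = 1.0; real data with ties would need the tie-corrected (rank-averaged) form instead.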
catheter-associated urinary tract infection (cauti) increases morbidity, length of stay, and costs among hospitalized patients. at an urban academic medical center, our neuro-icu had the highest number of cauti cases among our icus. the purpose of this project was to reduce our cauti cases by 50%. this quality improvement project used several strategies: (1) formed a multidisciplinary cauti task force that included nurses, physicians, infection control, management, and supply chain personnel; (2) developed an action plan to update the standard of practice by conducting a review of the literature and pilot testing new products; and (3) educated staff using huddles, a bedside guide, and email blasts with cauti facts starting in august 2016. additionally, cauti prevention was discussed during patient handoffs among nurses and physicians. data were collected for all neuro-icu patients from fiscal year (fy) 2016-2017. cauti cases were determined utilizing the cdc's national healthcare safety network. analysis included evaluation of trends across time. we reduced our number of cauti cases from 5 in fy 2016 to 1 in fy 2017. as of the beginning of fy 2018, we have not had a cauti for 365 days. a comprehensive approach with a strong commitment by clinicians is critical for sustaining a reduction in cauti. we reduced our cases and exceeded our goal. our efforts to provide evidence-based care are ongoing as we continue to monitor the research and upcoming supplies aimed at making hospital-acquired cauti a never event. isophane insulin (nph) is a commonly prescribed basal insulin for managing hyperglycemia in critically ill patients on continuous tube feeding due to its intermediate duration of action. however, the incidence of hypoglycemia may be higher given that the duration of action of nph can last 14-24 hours and because of the potential for unexpected interruption in feeding. using scheduled regular insulin (ri) instead of nph may reduce this risk given its shorter duration of action.
it may also improve glycemic control through more frequent titration. this was a single-center, retrospective, observational cohort study from december 2016 to may 2017. patients on continuous tube feeding who were prescribed scheduled ri were compared to those prescribed nph. all patients continued to receive an insulin sliding scale. the choice of agent was determined by the bedside team. the primary endpoint was the incidence of hypoglycemia, while the secondary endpoint assessed efficacy. in our patient population, a higher incidence of hypoglycemia was seen in those who received nph. hypertonic saline bolus (hsb) is a proven intervention for neurological emergencies arising from cerebral edema and increased intracranial pressure. the safety of hsb administered via central venous catheters is well established. however, infusion of hsb through peripheral intravenous access raises concern for complications related to the caustic nature of the solution. we aimed to assess the safety of peripherally administered boluses of hypertonic saline (3% sodium chloride) at a regional level 1 trauma and comprehensive stroke center. we performed a retrospective chart review of patients who received hsbs from january 1, 2014 to january 1, 2015 as part of a quality improvement project. we identified 167 instances of hsb administration. the cases were individually reviewed for iv gauge, location of the iv, whether central access was present at the time of administration, documentation of iv removal, and volume of boluses. patients were excluded if there was a concurrent central venous access catheter present at the time of hsb administration or unrelated death within 12 hours after administration of hsb. adverse events were defined as line infiltration, erythema, or swelling at the site of hsb administration. 111 charts were excluded from the study because of presumed administration of hsb through central venous access rather than peripheral iv. two patients had adverse events (3.5%).
none of the patients progressed toward limb-threatening complications. the majority of patients (54/56) did not experience erythema or infiltration of the iv. hsb administered through peripheral intravenous access does not pose a significant risk of severe complications and may be safely used in emergency situations in the absence of central line access. routine screening of high-risk asymptomatic trauma or surgical patients for venous thromboembolism (vte) is controversial. some studies advise against screening, while others recognize that some patients at high risk may benefit. the purpose of this pilot study is to evaluate the outcome of routine screening in patients who underwent neurosurgical interventions. all adult patients admitted to a neuro-intensive care unit with a primary diagnosis of brain injury requiring surgical intervention were included. data from april-june 2017 were retrospectively collected on all subjects who had either spine or cranial surgery. data collected included: incidence of vte and the number of times duplex ultrasonography and computed tomography of the chest were performed. on july 1st, prospective data collection began by screening for the presence of deep vein thrombosis (dvt) on days 1, 7, and 14 from admission or surgery day. all patients received pharmacologic and mechanical vte prophylaxis within 36-48 hours post-operatively. a total of 101 subjects (pre-pilot, n=91; post-pilot, n=10) were included in the study. in the pre-pilot group, the ages ranged from 18-70 and most were male. the majority, 73/91 (80%), had craniotomy/craniectomy, while 18/91 (20%) had spine surgery. about 30/91 (33%) were admitted with a primary diagnosis of traumatic brain injury. of the 91 subjects, 35 had duplex screening for dvt and 8 had screening for pulmonary embolism (pe). the incidence of vte was confirmed in 11/91 (12%): dvt 8% and pe 4%. median hospital length of stay was 9 (iqr 4-15) days.
38/91 (42%) were discharged home, and the 2/91 (2%) death rate was attributed to pe. in the post-pilot group, one incidence of pe was identified on day 1 post-surgery screening. the rest of the results are still pending. in this preliminary report, post-surgical patients have a higher incidence of vte. routine screening might help lower the incidence of mortality caused by pe. epsilon aminocaproic acid (eaca) is an antifibrinolytic agent that crosses the blood-brain barrier and has shown benefit in acutely decreasing bleeding. its use in intracranial hemorrhage has uncertain benefit. we aimed to describe the administration and impact of eaca in a single-center neurosciences intensive care unit (neuroicu) over one year. we performed a single-center retrospective study of neuroicu patients undergoing intravenous eaca administration over a one-year time period. inclusion criteria included eaca administration over 24 hours for a diagnosis of acute traumatic hematoma. the dose and duration of eaca infusion were collected. we additionally collected and compared pre-administration and post-administration partial thromboplastin time (ptt) hematology assays and neuroimaging. clinical outcomes were reviewed for survival at hospital discharge. over a 13-month period (april 2016-may 2017), 24 patients each received a 24-hour infusion of eaca. the most common indication for eaca was to prevent worsening of intracranial hemorrhage in patients in traumatic coma (gcs <8). 68% of patients underwent neurosurgical management. ptt assay values showed a significant difference before and after eaca administration (ptt 30.3 +/- sd vs. 28.6 +/- sd; student's t test p<0.05, n=21). stability of the intracranial hematoma burden was evident following eaca in 63% of patients. 29% of patients who received eaca survived to discharge. patients receiving eaca showed a significant reduction in ptt assay values 24 hours after completing their dose.
ct neuroimaging demonstrated stable intracranial hemorrhage burden in most patients receiving eaca, despite a high prevalence of acute operative neurosurgical management. however, only a modest number of patients receiving eaca survived to discharge. these results suggest that eaca may acutely reverse hematologic abnormalities and enable emergent neurosurgical management in patients with severe acute traumatic hemorrhage, despite a limited role in affecting survival outcomes in these patients. prognostication is difficult for patients admitted to a neurocritical care unit (nccu). can serum biomarkers obtained as part of routine admission lab work help predict outcomes among these patients? in this prospective cohort study, the following biomarkers were measured at admission: c-reactive protein (crp), arterial lactate, neuron-specific enolase (nse), lactate dehydrogenase (ldh), albumin, and brain natriuretic peptide (bnp). we collected information about demographics, comorbidities, hospital procedures and complications, and 30-day mortality. we compared these serological biomarkers in patients who were alive versus those who had died at 30 days. a total of 112 patients were enrolled over 4 months from june to september 2016, 11 of whom (9.8%) died within 30 days of admission. there were no statistically significant differences in age or gender between the two groups. the 30-day mortality group had a higher mean charlson comorbidity index (cci) (3.0 vs 1.6, p=0.027) as well as higher mean nse (39.1 vs 13.9 ug/l, p=0.008) and bnp levels (590.2 vs 177.3 pg/ml, p=0.003). mean crp, lactate, and ldh were also higher in the 30-day mortality group (79.5 vs 51.8 mg/l, 2.3 vs 1.9 mmol/l, and 307.2 vs 274.2 u/l) while mean albumin was lower (3.0 vs 3.3 g/dl), although these differences were not statistically significant (p > 0.35). cci and serological biomarkers may have utility in predicting 30-day mortality among patients admitted to the nccu.
as we continue enrollment, we plan to develop a predictive model for 30-day mortality on admission for patients admitted to the nccu using serological biomarkers, cci, and admission characteristics. among hospitalized acutely ill medical patients, the risk for venous thromboembolism (vte) is high. the goal was to examine vte prophylaxis of at-risk patients and vte risk during hospitalization and in the outpatient continuum of care. acutely ill medical patients were identified from the marketscan commercial and medicare databases from 1/1/2012 to 6/30/2015. inclusion criteria were hospitalization for heart failure, respiratory diseases, ischemic stroke, cancer, infectious diseases, or rheumatic diseases, and 6 months of continuous insurance coverage prior to (baseline period) and after (follow-up period) the index hospitalization. outcomes included the proportions of patients receiving inpatient and/or outpatient vte prophylaxis, and the risk for vte events. years, and 55.4% were female. patients were hospitalized for infectious diseases (40.6%), respiratory diseases (31.0%), cancer (10.7%), heart failure (10.4%), ischemic stroke (6.4%), and rheumatic diseases (0.9%). mean hospital length of stay was 4.8 days. in total, 59.1% (n=10,581) of patients did not receive any vte prophylaxis, and 7.1% (n=1,267) received both inpatient and outpatient vte prophylaxis. during hospitalization, 38.2% (n=6,843) received vte prophylaxis (enoxaparin, 76.7%; warfarin, 15.2%; enoxaparin and warfarin, 5.3%; a direct oral anticoagulant (doac), ~2%). following discharge, 9.7% (n=1,738) received outpatient vte prophylaxis (warfarin, 43.8%; doac, 13.7%; enoxaparin, 10.1%; enoxaparin and warfarin, 7.6%). among the entire study population, the vte event risk remained elevated up to 30-40 days after hospital admission.
among hospitalized acutely ill medical patients, the risk for vte was present in both the inpatient and outpatient settings, with significant vte risk extending into the post-hospitalization period. only a small portion of at-risk patients (7.1%) received vte prophylaxis in both the inpatient and outpatient continuum of care, suggesting an unmet medical need for vte prophylaxis in the post-hospitalization period. brain edema is a good research target in various forms of neurologic injury. real-time measurement of brain edema is possible using thermal conductivity methods. however, this technique can be hard to apply in small rodents, which are commonly used as experimental brain edema models. we developed a new approach for applying thermal conductivity methods in a rodent brain edema model. ten-week-old sprague-dawley rats were used for the brain edema model. a qflow 500 probe was inserted through a suboccipital burr hole, located 3 mm left of the midline, and was then advanced anteriorly 10 mm from the occipital bone margin until the probe placement assistance value indicated a valid value (ranging from 0 to 2.0). the probe was fixed using adhesive glue and a tagging suture. in vivo brain water content was continuously calculated using thermal conductivity values. for validation, the calculated brain edema was compared with standard methods (dry/wet brain weight ratio) in water intoxication models (intraperitoneal injection of distilled water, 20% of body weight), and the drying effect of mannitol was validated in streptokinase-induced intracerebral hemorrhage (ich) models. calculated brain water content was 78.6±0.6% by the thermal conductivity method and 78.4±0.5% by the dry/wet weight ratio method (p=0.43). in the water intoxication model, brain water content started to increase 30 minutes after injection and reached 81.5±0.7% at 6 hours post injection. by the wet/dry weight method, edema was measured as 81.0±0.8% (p=0.80).
in the ich model, brain water content started to drop 30 minutes after administration of mannitol (0.5mg/kg) and drifted back 4 hours after injection of mannitol. the thermal conductivity method for assessing brain edema is applicable in rodents using a suboccipital approach through a burr hole. this method may better reflect dynamic changes of brain edema. in patients with critical brain injury, alterations of brain physiology with dialysis initiation are poorly understood. from a consecutive series of brain-injured patients undergoing invasive multimodality monitoring between 2008 and 2016, 13 patients who underwent continuous veno-venous hemodialysis (cvvh-d) and 4 patients who underwent intermittent hemodialysis (ihd) were identified. changes in mean arterial pressure (map), intracranial pressure (icp), brain tissue oxygenation (pbto2), and microdialysis lactate-pyruvate ratio (lpr) were compared six hours prior to and twelve hours following dialysis initiation. high-resolution data were collected every 5 seconds, with the exception of lpr, which was collected hourly. data were normalized to patient maximum values, analyzed by fitted segmented regression, and checked for slope change-points by davies' test. values prior to dialysis initiation were averaged as a baseline for comparison. median values for patients undergoing cvvh-d were map 85 +/-12.77, icp 9.8 +/-9.61, pbto2 21.7 +/-6.023 mmhg (n=5), and lpr 21.6 +/-22.13 (n=4). normalized median values for patients undergoing ihd were map 94 +/-13.73, icp 19 +/-5.32. in the segmented regressions of normalized cvvh data, there was no change in map (slope 0.0004) during the twelve hours. however, we found a change-point in icp at 4.78 hours (ci 3.76-5.79, slope change 0.006 to 0.0006) and in pbto2 at 4.99 hours (slope change 0.006 to -0.023). lpr increased through cvvh (slope 0.036 +/-0.0097).
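The change-point detection described above (segmented regression on normalized monitoring data, with slope change-points located by Davies' test) can be sketched in simplified form as a grid search for the breakpoint that minimizes the total squared error of two independent linear fits. This is an illustrative stand-in for the formal Davies' test, not the authors' implementation; the function names are hypothetical.

```python
# Simplified change-point search for segmented (piecewise-linear) regression.
# Illustrative only: the study used fitted segmented regression with Davies'
# test; this sketch just finds the breakpoint minimizing total squared error.

def _linfit_sse(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def find_changepoint(xs, ys, min_pts=3):
    """Return (breakpoint_x, slope_before, slope_after) minimizing total SSE."""
    best = None
    for k in range(min_pts, len(xs) - min_pts + 1):
        _, b1, s1 = _linfit_sse(xs[:k], ys[:k])
        _, b2, s2 = _linfit_sse(xs[k:], ys[k:])
        if best is None or s1 + s2 < best[0]:
            best = (s1 + s2, xs[k], b1, b2)
    return best[1], best[2], best[3]

# toy example: slope changes from 1.0 to 0.0 at x = 5
xs = list(range(11))
ys = [min(x, 5) * 1.0 for x in xs]
bp, b1, b2 = find_changepoint(xs, ys)
```

On this toy series the search recovers the breakpoint at x = 5 with a pre-break slope of 1 and a post-break slope of 0, mirroring the kind of slope-change reported for icp and pbto2 after cvvh initiation.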
there were no identified change-points in map or icp in ihd patients; other parameters were limited by the small sample size. initiation of cvvh in patients with neurologic multimodality monitoring showed change-points in icp and pbto2 in the setting of stable map, with slight decreases in icp and pbto2. initiation of ihd showed no change-points in icp. data on the cerebral effects of antihypertensive agents are limited but potentially important in patients requiring blood pressure reduction in neurological emergencies. our objective was to measure the effect of rapid-acting antihypertensive agents on cerebral blood flow (cbf) in patients with acute hypertension. we conducted a prospective, quasi-experimental study of patients with a sbp > 180 mmhg and planned rapid-acting antihypertensive treatment in the emergency department. patients < 18 years or pregnant were excluded. non-invasive hemodynamic and transcranial doppler measurements of the middle cerebral artery mean flow velocity (mfv) were obtained prior to and post treatment. analysis included descriptive statistics and generalized linear modeling to test the effect of four categories of antihypertensive agents on mfv. categories included clonidine, iv labetalol, iv hydralazine and combination therapy. we enrolled 35 patients (37% female) with a mean age of 49 ± 13 years. eight (23%) patients received clonidine, 6 (17%) iv labetalol, 5 (14%) iv hydralazine and 16 (46%) combined therapy. the mean baseline sbp was 214 ± 24 mmhg and mfv 49 ± 13 cm/sec. the mean percentage fall in sbp by medication was: clonidine -12 ±7%, labetalol -13 ±12%, hydralazine -23 ±11%, and combination -23 ±16%. the overall change in mfv was -9 ±15%, and by medication was: clonidine -10% (95%ci -2 to -21%), labetalol -11% (95%ci -5 to -27%), hydralazine +1% (95%ci -18 to +21%), and combination -11% (95%ci -2 to -19%).
adjusting for baseline bp, hydralazine caused less change in mfv compared to other medications (difference between means +12%, 95%ci -3 to +26%, p=0.1). in this study with modest bp reductions, rapid-acting antihypertensive medications had comparable effects on cerebral blood flow. these results hint that cerebral blood flow may be more stable with hydralazine administration, but further testing of hydralazine and infusions such as nicardipine is required. studies exploring correlations between non-invasive (oscillometric) blood pressure (nibp) and intraarterial blood pressure (abp) have excluded neurocritically ill patients with continuous infusion of vasoactive medications. compared to abp, nibp monitors generally tend to over-read at low values and under-read at high values. this study examines the relationship between simultaneously measured nibp and abp recordings in these patients. following informed consent, we prospectively observed patients (n=70) admitted to a neurosciences icu with simultaneous abp and nibp monitoring and continuous vasopressor (n=21) or antihypertensive (n=49) infusion. paired nibp/abp observations along with covariate and demographic data were abstracted via chart audit. analysis was performed using sas v9.4. 2,177 paired nibp/abp observations were obtained from 70 subjects (49% male, 63% white, mean age 59 years) receiving vasopressors (n=21) or antihypertensive agents (n=49). t-tests show a significant difference between paired readings: ([sbp: m=137 vs 140mmhg respectively; p<.0001], [dbp: m=70 vs 68mmhg respectively, p<.0001], [map: m=86 vs m=90mmhg respectively, p<.0001]). the paired differences for specific medications were tabulated, with 50-70% of the differences <10mmhg, and 75-90% of the values with <20mmhg difference. bland-altman plots for map, sbp, and dbp demonstrate good intermethod agreement between paired measures (excluding outliers) and show marked nibp-abp sbp differences at higher blood pressures.
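The Bland-Altman agreement analysis just described reduces to computing the bias (mean of the pairwise differences) and the 95% limits of agreement (bias ± 1.96 × SD of the differences). A minimal sketch follows; the paired readings are synthetic stand-ins for the NIBP/ABP data, not values from the study.

```python
# Bland-Altman agreement statistics for paired measurements (sketch).
# The bias is the mean pairwise difference; the 95% limits of agreement
# are bias +/- 1.96 * SD of the differences.
import statistics

def bland_altman(method_a, method_b):
    """Return (bias, lower_loa, upper_loa) for two paired measurement lists."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical paired systolic readings (mmHg), NIBP vs. ABP
nibp = [137, 142, 150, 128, 160, 133]
abp  = [140, 141, 155, 130, 168, 134]
bias, lo, hi = bland_altman(nibp, abp)
```

Points on a Bland-Altman plot are then the pairwise means on the x-axis against the pairwise differences on the y-axis, with horizontal lines at the bias and the two limits of agreement.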
pearson correlation coefficients for paired measurements show strong positive correlation for map (+0.82), sbp (+0.84), and dbp (+0.73). despite a statistically significant difference between nibp and abp readings for patients on vasoactive medications, the difference may not be clinically significant. the strong, linear correlation between paired values suggests that providers need not be forced to use one method over the other. the final manuscript will aim to detail whether there is clinical significance for particular vasoactive medications. pathological activity in continuous electroencephalogram (ceeg) data of icu patients is conventionally categorized into a small number of named rhythmic and periodic patterns. we aimed to develop a valid method to automatically discover a small number of homogeneous pattern clusters, to facilitate efficient interactive labeling by ceeg experts. we extracted 576 time and frequency domain features from 12+ hour ceeg recordings from 10 different icu patients. after removing artifacts, we applied principal component analysis (90% variance retained), then separated the data into 9 clusters (k-means). from each cluster we took 9 random samples plus the most central one, rendering 900 samples in total. three expert electroencephalographers independently categorized all samples into one of 6 standard pattern categories (seizures, gpds, lpds, lrda, grda, burst suppression, other). we compared two methods for labeling clusters: (1) "labor intensive labeling" (lil): assign the most frequent of 30 expert-provided labels; (2) "labor efficient labeling" (lel): assign the most frequent of the 3 expert labels for the central sample. we compared interrater agreement (ira) among experts vs. between each expert and consensus labels using lil vs. lel. finally, we used laplacian eigenmaps (le) to visualize the data.
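The feature-reduction and clustering pipeline described above (PCA retaining 90% of variance, then k-means, then picking each cluster's most central sample for labeling) can be sketched as follows. This is an illustrative reconstruction with random synthetic "features" standing in for the 576 cEEG time/frequency features; the function names and the farthest-point initialization are assumptions, not the authors' code.

```python
# Sketch: PCA keeping >=90% variance, k-means clustering, and selection of
# the most central sample per cluster (the one shown to experts first).
import numpy as np

rng = np.random.default_rng(0)

def pca_keep_variance(X, frac=0.90):
    """Project X onto the fewest principal components retaining >= frac variance."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()
    k = int(np.searchsorted(np.cumsum(explained), frac) + 1)
    return Xc @ Vt[:k].T

def kmeans(X, k, iters=50):
    """Lloyd's k-means with greedy farthest-point initialization."""
    centroids = [X[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[dists.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids

# two well-separated synthetic clusters stand in for real cEEG feature vectors
X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(8, 1, (30, 10))])
Z = pca_keep_variance(X)
labels, _ = kmeans(Z, k=2)
central = [int(np.argmin(np.linalg.norm(Z - Z[labels == j].mean(axis=0), axis=1)))
           for j in range(2)]  # most central sample of each cluster
```

In the "labor efficient" scheme, only the `central` samples would need expert labels, with each cluster inheriting the majority label of its central representative.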
this research suggests that large ceeg datasets can be automatically clustered into a small number of patterns described by standard icu eeg pattern labels. we demonstrated efficient cluster labeling by inspecting only the central-most representative of each cluster. furthermore, le visualizations support the hypothesis of an interictal-ictal continuum. real-time measurement of cerebral oxyhemoglobin (oxyhb) and deoxyhemoglobin (deoxyhb) using near infrared spectroscopy (nirs) may help us better understand the status of cerebral oxygenation and possibly cerebral blood flow (cbf) in patients with acute brain injury. we developed a 48-channel functional nirs (fnirs) system and evaluated its role in patients with acute brain injury. the 48-channel fnirs system (nirsit) was used for measuring cerebral oxyhb and deoxyhb in patients with brain injury. measurement protocols were as follows: baseline measurement for 5 minutes with activation stimuli (nipple pinching for 5 seconds). patient groups were categorized as follows: 1) global cerebral ischemia with profound cerebral injury (n=40), 2) large ischemic stroke or decrease in cbf in the frontal lobe due to severe stenosis in the middle cerebral artery (mca) or internal carotid artery (ica) (n=74), 3) high-grade subarachnoid hemorrhage with a risk of vasospasm (n=14); the control group had neither a cerebral lesion nor cbf abnormality (n=22). the global ischemia group with good functional outcome had higher oxyhb levels (rso2) compared to those with poor outcome (67.4% vs. 59.5%, p = 0.003). patients with poor perfusion in the mca territory had lower oxyhb levels compared to the mirror channel in the contralateral hemisphere. oxyhb levels in patients with decreased vasomotor reactivity on diamox spect improved after carotid stenting. three patients who underwent superficial temporal artery-middle cerebral artery bypass surgery had transient hyperperfusion syndrome.
oxyhb and total hb were elevated in the affected area. patients with sah and vasospasm had a blunted oscillation pattern of oxyhb compared to those without vasospasm. bedside multichannel measurement of oxyhb and deoxyhb using fnirs might be useful in understanding hemodynamic changes occurring in patients with acute brain damage in real time. multimodality monitoring (mmm), brain tissue oxygenation (pbto2) and microdialysis (md) in sah may be important to the treatment of delayed cerebral ischemia (dci). our hypothesis was that concordance between pbto2 and md occurs in the tissue bed displaying angiographic vasospasm. this retrospective observational study includes 10 patients with sah. the extent of angiographic vasospasm for each vessel was graded on angiography by the on-call neuro-interventionalist and quantified as 0 (no spasm) to 6 (severe spasm). pbto2 and md probes were placed in the frontal lobe white matter. the severity of vasospasm was estimated by the weighted average (1 × aca + 2 × mca + 3 × ica) / 6. cases with a score of 2 or more were considered to have clinically relevant vasospasm. using a within-subjects design, epochs of baseline mmm were compared with epochs during spasm using daily means for pbto2, lpr, glucose, icp and cpp. given the limited number of observations, the simplifying assumption was made that the observations from all epochs are independent. the measurements from all patients were divided into two groups, with and without spasm, and were compared using a two-tailed unpaired student's t-test. sixteen sets of baseline and vasospasm epochs were evaluated for pbto2 and 18 for md. compared with baseline values, the average pbto2 was significantly lower (15.1 vs 24.6mmhg, p=0.003), lpr was non-significantly higher (38.3 vs 28.4, p=0.07), and glucose was similar (0.9 vs 1.2 mmol/l, p= 0.5) during vasospasm epochs. there was no difference in icp (9.7 vs 9.7mmhg, p=0.98).
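The vessel-weighted vasospasm severity score described above, with its threshold of 2 for clinically relevant vasospasm, is simple enough to express directly; this small sketch mirrors the abstract's formula, and the function names are hypothetical.

```python
# Vessel-weighted vasospasm severity score from per-vessel angiographic
# grades (each graded 0 = no spasm to 6 = severe spasm).

def vasospasm_score(aca, mca, ica):
    """Weighted average (1*ACA + 2*MCA + 3*ICA) / 6 of angiographic grades."""
    return (1 * aca + 2 * mca + 3 * ica) / 6

def clinically_relevant(score):
    """Scores of 2 or more were considered clinically relevant vasospasm."""
    return score >= 2

score = vasospasm_score(aca=2, mca=3, ica=2)  # (2 + 6 + 6) / 6
```

The weighting gives proximal vessels (ica) the most influence, so the same angiographic grade counts for more when it affects a vessel supplying more territory.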
these differences were unaffected by induced hypertension, when cpp was augmented for treatment of dci (120.4 vs 101.3 mmhg, p=0.07). mmm during angiographic vasospasm after sah suggests discordant changes in brain oxygenation and metabolism. these data suggest that dci may be related to metabolic factors other than tissue oxygenation. multimodal monitoring including brain tissue oxygenation (pbto2) is increasingly used for the management of acute tbi patients. the optimal management of pbto2 is not fully established. increasing fio2 is efficacious to correct pbto2 but may mask other oxygen delivery mechanisms which may be deficient. the objective of this study was to explore the clinical utility of a pbto2/pao2 ratio to detect overtreatment by fio2. in this retrospective cohort study, data (icp, cpp, hemoglobin, temperature, pco2 and pao2) were collected simultaneously whenever an arterial blood gas was drawn. causes of cerebral hypoxia (pbto2 < 20mmhg) were noted. a pbto2/pao2 ratio <0.15 was considered abnormal and plotted over time for each patient individually. 1006 data sets were collected from 38 patients (mean age 44.5±20.0, median gcs 4, mortality 45%). the ratio was abnormal 33.0% of the time and was associated with a mean pao2 of 250 mmhg. measures within the low pbto2-low ratio category had significantly lower cpp (64 vs 72 mmhg) and higher pao2 (242 vs 130 mmhg) than patients with normal pbto2 or a normal ratio, respectively. various causes of hypoxic pbto2 were reported when the ratio was abnormal: hypocapnia, low cpp, low cardiac index, long equilibration time... four patterns of evolution of the ratio over time were identified and associated with different mortality rates: 28.5%, 33.3%, 46.7% and 60%. in conclusion, low pbto2 with a low ratio was associated with increased pao2 and decreased cpp. this suggests clinicians often used fio2 to compensate for deficient cerebral oxygen delivery. indeed, various causes of hypoxia besides low pao2 were identified and corrected.
pattern of temporal evolution of the ratio seems to correlate with mortality. the pupillary light response (plr) evaluates cranial nerves ii and iii and midbrain function. bedside quantitative infrared pupillometry provides reproducible assessment of the plr, reported as the neurological pupillary index (npi). increased intracranial pressure results in decreases in npi. intracranial hypotension (ih) can also cause brainstem distortion. we therefore hypothesized that similar changes in npi could be seen with ih. here, we describe sequential changes in npi in ih before and after treatment. we identified four patients monitored with pupillometry for clinical care during ih diagnosis and treatment. ih was diagnosed with a compatible history, exam, and characteristic neuroimaging findings. patients' npi at baseline, during symptomatic ih, and after treatment were compared using related-samples friedman's two-way anova and wilcoxon signed-rank tests. two patients were male; causes of ih were csf leak following lumbar instrumentation (n=3) and basilar skull fracture (n=1). mean baseline npi was normal (defined as >3) and declined in one or both eyes concurrent with clinical deterioration in the 24-48 hours preceding definitive diagnosis. all patients underwent treatment for csf leak with epidural blood patch or fracture repair, with return of npi > 3 within 5 hours of treatment. the baseline, symptomatic and post-treatment npis differed significantly (3.55±0.35 vs 0.80±0.59 vs 3.65±0.24, mean +/-sd, pre-treatment vs nadir vs post-treatment, p=0.05). both baseline and post-treatment npis differed from the npi nadir (p=0.068) but there was no difference between baseline and post-treatment npi (p = 0.71). impairment of the plr, as measured by npi, occurred during symptomatic ih and resolved after treatment.
because management of intracranial hyper- and hypotension differs markedly, our results emphasize the importance of evaluating the clinical context before attributing pupillary/npi changes to increased icp. automated pupillometry provides a non-invasive, bedside tool for monitoring progression and treatment of intracranial hypotension. the correlation of optic nerve sheath diameter (onsd) as seen on ultrasonography (us) and directly measured intracranial pressure (icp) has been well described. nevertheless, differences in ethnicity and type of icp monitor used are obstacles to interpretation. therefore, we investigated the direct correlation between onsd and ventricular icp and defined an optimal cut-off point for identifying increased icp (iicp) in korean adults with brain lesions. this prospective study included patients who required an external ventricular drainage (evd) catheter for icp control. iicp was defined as an opening pressure over 20 mmhg. onsd was measured using a 13 mhz us probe before the procedure. linear regression analysis and a receiver operating characteristic (roc) curve were used to assess the association between onsd and icp. an optimal cut-off value for identifying iicp was defined. a total of 62 patients who underwent onsd measurement with simultaneous evd catheter placement were enrolled in this study. thirty-two patients (51.6%) were found to have iicp. onsd in patients with iicp (5.95 ± 0.25 mm) was significantly higher than in those without iicp (5.17 ± 0.57 mm). an onsd cut-off of 5.6 mm disclosed a sensitivity of 93.75% and a specificity of 86.67% for identifying iicp. onsd as seen on bedside us correlated well with directly measured icp in korean adults with brain lesions. the optimal cut-off point of onsd for detecting iicp was 5.6 mm. impaired cerebral autoregulation following neurological insult has been established as a strong predictor of clinical outcome.
hypothermia may offer autoregulatory protection in these patients, although the effect of body temperature on autoregulatory status is unclear. we performed a retrospective analysis of data from an ongoing prospective study to evaluate multimodal monitoring using near infrared spectroscopy (nirs) for bedside measurement of autoregulation. ninety-one comatose patients (gcs <8) were continuously monitored for up to three days. the nirs-derived cerebral oximetry index (cox) was used as a marker of autoregulation. cox was calculated as a moving, linear correlation coefficient between regional cerebral oxygenation saturation and map. autoregulation improves as cox values approach 0, and is impaired as values approach 1. patients were grouped by trend in temperature seen over the monitoring period: no change (<1°c temperature change, n=12), increase (n=6), decrease (n=7), increase followed by decrease (n=7), decrease followed by increase (n=6), and fluctuating (n=53). we performed multivariable logistic regression analysis to assess the association between temperature and outcomes. the association between hourly temperature and cox was assessed using mixed random effects models with a random intercept. in patients showing a sustained increase or decrease in temperature, a linear relationship between temperature and cox was seen; for every 1°c increase or decrease in temperature, cox changed by 0.045±0.010 (p<0.001) and -0.020±0.018 (p=0.15), respectively, after adjusting for pco2, haemoglobin, map and temperature probe location. mean temperature changes over the monitoring period for these groups were 2.52±1.07°c and -1.77±1.13°c, respectively. cox did not change significantly in the other groups. there was no significant difference in mortality or poor outcome (mrs 4-6) at discharge or at 3, 6, or 12 months between patients in each group. in acute coma patients in the neurocritical care unit, increasing body temperature is associated with worsening cerebral autoregulation as measured by cox.
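The COx index described above is, at its core, a moving Pearson correlation between regional cerebral oxygen saturation and MAP (the PRx used in later abstracts applies the same idea to ICP and MAP). A minimal sketch follows; the window length and the synthetic, pressure-passive signals are illustrative assumptions, not study parameters.

```python
# Moving Pearson correlation between two physiologic signals: the idea
# behind COx (rSO2 vs. MAP) and PRx (ICP vs. MAP). Values near 1 suggest
# pressure-passive (impaired) autoregulation; values near 0 suggest intact
# autoregulation.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(vx * vy)

def moving_correlation(x, y, window):
    """Pearson r over each sliding window of the two signals."""
    return [pearson(x[i:i + window], y[i:i + window])
            for i in range(len(x) - window + 1)]

# pressure-passive example: rSO2 tracks MAP, so COx stays near +1 (impaired)
map_vals = [70, 75, 80, 85, 90, 95, 100, 95, 90, 85]
rso2     = [55, 57, 59, 61, 63, 65, 67, 65, 63, 61]
cox = moving_correlation(map_vals, rso2, window=5)
```

In clinical implementations the inputs are time-averaged samples (e.g., 10-second means over a multi-minute window) rather than raw values, but the correlation computation is the same.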
the historical tradition of examining the pupillary light reflex (plr) required the examiner to score the size and reactivity of the pupil. a change in the plr from brisk to sluggish or fixed may be a marker of a pathological process and a need for intervention. the plr has been difficult to quantify and has poor inter-rater reliability. handheld pupillometry provides several novel measures, such as the neurological pupillary index™ (npi) and constriction velocity (cv), that may be more quantifiable than the plr. the purpose of this analysis is to examine the relationship between cv and npi in neurologically injured patients. the end-panic registry is a prospective registry of pupillometer values and variables associated with intracranial dynamics (e.g., icp). this analysis of 946 adult (over 18 years) patients from 2 hospitals includes 42,568 pupillometer readings: left eye (20,943) and right eye (21,625). subjects had a mean age of 57.9 years and 48.1% were male. the primary admission diagnoses included neoplasm (241), ischemic stroke (169), sah (82), ich (81), tbi (9), and other (364). the left eye mean/s.d. cv (1.6/0.9), npi (4.1/0.9) and size (3.5/1.2) were similar to the right eye cv (1.6/0.9), npi (4.1/0.9) and size (3.5/1.2); the statistically significant differences were related to the large sample size. the correlation between left eye cv and npi (r2=0.068, p<0.001) was significantly improved after controlling for size (r2=0.67, p<0.001). the correlation between right eye cv and npi (r2=0.048, p<0.001) was significantly improved after controlling for size (r2=0.67, p<0.001). constriction velocity is highly dependent on the size of the pupil. further studies need to be undertaken to determine the sensitivity and specificity of abnormal npi and cv in detecting pathological processes such as midline shift or 3rd nerve compression that affect pupillary reactivity. cerebral injury is increasingly described in adult recipients of extracorporeal membrane oxygenation (ecmo) therapy.
we describe the association between regional brain tissue oxygenation (rso2) measured by near infrared spectroscopy (nirs), survival, and cerebral injury on neuroimaging. a single-center retrospective chart review was conducted of adult patients who underwent veno-arterial (va) ecmo from april 2016 to october 2016. all patients had received nirs monitoring during ecmo therapy. baseline demographics, in-hospital complications, and mortality were recorded. desaturations of rso2, defined as a decline >25% below baseline or an absolute value <40, were recorded and analyzed. desaturation burden was calculated by area under the curve analysis and measured in rso2*seconds. eighteen va ecmo patients (9 females) underwent nirs monitoring during the study period. eleven patients experienced desaturations, while 7 did not. patients with desaturations tended to be younger (50.2 vs. 64.7 years old), were more likely female (8 vs. 1), had a lower ejection fraction (28.6% vs. 46.4%) and more often experienced liver dysfunction (7 patients vs. 1). patients with desaturations were more likely to have abnormalities on ct scan (6 vs. 0). eleven of the 18 patients survived to discharge. survivors tended to be younger (50.2 vs. 64.6 years old) and had a lower initial ecmo sweep (4.1 vs. 6.7). survivors had lower baseline rso2 values at the beginning of nirs monitoring (right 57 vs. 65, left 57 vs. 68), fewer desaturation events (7 vs. 14), lower desaturation burden, and spent less overall time desaturating (1:21 vs 3:09 hours). desaturation on nirs may be correlated with cerebral injury in the adult va ecmo population and may have utility in triggering clinical investigation or determining prognosis. further studies in larger patient populations are needed to determine its reliability and accuracy. pressure reactivity index (prx) is the most validated index to measure cerebrovascular reactivity in patients after traumatic brain injury.
the aim of this study is to identify the natural history of cerebral autoregulation measured by prx in various forms of brain injury, to monitor restoration or not of cerebral vasomotor reactivity in the acute phase. we performed a retrospective analysis of data from an ongoing prospective study to evaluate multimodal monitoring using prx for the measurement of cerebral autoregulation at the bedside. thirty comatose patients (glasgow coma scale) were monitored, with prx used as a marker of autoregulation. prx was calculated as a moving, linear correlation coefficient between icp and map. standard maximal medical therapy was implemented to treat elevated icp, cerebral edema, etc. patients with withdrawal of care in the first 48 hours or brain death on neurological exam were excluded. thirty comatose patients with acute brain injuries (16 intracerebral hemorrhage, 6 tbi, 5 aneurysmal subarachnoid hemorrhage, 2 intraventricular hemorrhage, 1 hypoxic ischemic encephalopathy) were studied. the average prx upon starting neuromonitoring was 0.3 ± 0.35 (impaired), whereas the average prx at the end of day 3 of neuromonitoring was 0 ± 0.10 (restored). one third of the patients had icp crisis during monitoring. the average opening icp was 8.3 and the average highest recorded icp was 26.9. impaired cerebral autoregulation has been implicated as a predictor of clinical outcome. aggressive medical therapy instituted by the neurocritical care team (icp and cerebral edema management, blood pressure control, etc.) may result in restoration of cerebral vasomotor reactivity measured by prx by intensive care day 3-5. restoration of cerebral vascular reactivity may be necessary but not sufficient for a favorable outcome. elevated intracranial pressure (icp) is an important cause of death following acute liver failure (alf). while invasive icp monitoring (iicpm) remains the gold standard, the presence of coagulopathy increases the risk of bleeding in alf.
measurement of optic nerve sheath diameter (onsd) using optic nerve ultrasound (onus) may accurately detect elevated icp. our goal was to study the ability of onus to detect sustained intracranial hypertension following alf, and to predict death and therapeutic intensity level (til), a quantitative measure of the intensity of treatment required to control icp. consecutive patients with alf admitted to our institution in a 6-year period underwent onus. blinded measurement of onsd was performed from deidentified onus videos. patients underwent iicpm on the basis of an institutional protocol for selection of appropriate candidates, coagulopathy reversal and insertion of an intraparenchymal monitor. the til-basic for management of icp during the icu stay was recorded. the ability of highest onsd to predict concurrent icp>20mmhg at the time of measurement, sustained icp elevation >20mmhg at any time and til-basic>2 was assessed in patients who underwent iicpm, while prediction of death was assessed in all patients. receiver operating characteristic (roc) curves were constructed for the outcomes of interest. thirty-nine patients with alf were admitted during the study period; 29/39 (74%) underwent onus, 25/39 (64%) underwent iicpm and 19 (49%) died. of 25 patients who underwent iicpm, 13 (52%) developed sustained icp elevation and 7 (28%) had a til-basic>2. the roc area under the curve (auc) of onsd for prediction of concurrent icp>20mmhg was 0.63 (95% confidence interval 0.39-0.82, p=0.42 for the null hypothesis of auc=0.5), of sustained icp elevation at any time was 0.51 (0.28-0.73, p=0.97), of death was 0.56 (0.36-0.74, p=0.59) and of til>2 was 0.61 (0.42-0.79, p=0.33). in patients with alf, onsd measurement performed poorly for detection of icp elevation, and was a poor predictor of til and death. limited literature exists regarding the neurochemical and physiologic events that occur as brain death develops.
using intracranial multi-modality monitoring, we identify physiological changes that signal the onset of brain death. we measured intracranial pressure (icp), brain partial oxygen tension (pbto2), cerebral blood flow (cbf), and biochemical correlates of cerebral metabolism in 4 patients with diffuse hypoxic ischemic brain injury after cardiac arrest during the development of brain death. monitoring probes were inserted into cerebral white matter through a burr hole using a ct-compatible multi-lumen bolt. brain tissue energy-related metabolites (lactate, pyruvate, glutamate, glucose, glycerol) were measured using a bedside microdialysis analyzer. pbto2 and temperature were measured via a licox catheter. cerebral perfusion was measured with a hemedex bowman perfusion monitor. brain death was confirmed in accordance with institutional guidelines. a characteristic pattern of physiologic and neurochemical findings emerged as brain death occurred. absolute loss of cerebral autoregulation, with a near-perfect correlation between icp and map, was followed by equalization of map and icp resulting in a progressive drop in cpp to zero, followed by a progressive decline in pbto2 that became unresponsive to a 100% fio2 challenge. cerebral perfusion decreased in tandem with pbto2. the lactate/pyruvate ratio (lpr), glutamate, and glycerol all increased precipitously, with lpr consistently >1000. brain temperature decreased despite maintenance of a normal core temperature. finally, intracranial compliance, while initially very low (evidenced by a marked increase in the p2 component of the icp waveform), appeared to paradoxically re-normalize as the recording continued. continuous neuromonitoring reveals a characteristic pattern of cerebrovascular physiologic changes that accompany brain death. these changes are consistent with a progressive cessation of cerebral perfusion caused by diffuse cerebral edema.
although not currently a part of the formal brain death determination process, such monitoring could be helpful to identify when brain death has truly occurred. automated devices that collect objective quantitative data on pupil size and reactivity are increasingly used for critically ill patients with neurologic disease. however, there are limited data on the normative range of pupillary reactivity in the critically ill, and on the relationship between reactivity and traditional monitoring metrics. to determine pupil characteristics in this population, we prospectively collected quantitative pupillometry data in patients admitted to the neuro icu with an expected stay of at least 24 hours. trained nursing staff measured pupillary reactivity with the neuroptics-200 pupillometer device every 2 hours. measurements included the neurologic pupil index (npi), a composite metric ranging from 0-5 in which >3 is considered normal, resting and constricted pupil size, constriction velocity, dilation velocity and latency. these recordings were compared with averaged intracranial pressure (icp) and glasgow coma scale (gcs) assessments within the same hour. we used univariate and spearman's rank tests to explore associations between pupil characteristics and clinical variables, followed by multi-level linear regression to account for intra- and inter-subject variability. one hundred patients underwent 4200 paired observations. fifty-five patients had at least one recorded episode of anisocoria, 27 had low npis in more than 10% of recordings, and 29 had normal pupil reactivity. average and minimum npi were correlated with average and minimum recorded hourly glasgow coma score (gcs) (p values <0.001). increased asymmetry in both pupil size (p=0.002) and dilation velocity (p=0.02) was associated with increased intracranial pressure (icp). anisocoria was associated with hyperosmolar therapy (p=0.002).
the presence of low npis in more than 10% of total pupil measurements was associated with death at discharge (p=0.02). the range of pupillary metrics varies among critically ill neurologic patients and correlates with gcs and icp. further study is needed to establish whether changes in pupil metrics predict specific clinical events. near infrared spectroscopy (nirs) provides a non-invasive measurement of regional cerebral oxygen saturation (rso2) that may be able to detect seizure activity. in this study, we explored the hypothesis that rso2 is lower ipsilateral to seizures or epileptiform activity compared to the contralateral side in comatose patients. five patients (2 men and 3 women; mean age 69) underwent continuous electroencephalography (ceeg) monitoring and nirs recording. ceeg data were classified as baseline, epileptiform activity or seizure, slowing, or burst suppression at hourly intervals over the course of the recording period (mean duration 34.1 hours, range 24 to 85 hours). three patients had idiopathic status epilepticus, two had intracranial hemorrhage, and one had a temporal meningioma. the relationship between rso2 and epileptiform discharges was explored using scatterplots. the association was assessed using mixed random effects models with a random intercept. an independent within-subject residual structure was used. there were 127 measurements with ceeg and nirs from 5 patients. one patient was excluded as the nirs sensors were potentially reversed. epileptiform activity or seizures were observed in a median of 37% of the measurements (iqr 23-62%). rso2 was significantly lower on the side ipsilateral to seizures (-5.19, p<0.001) after adjusting for map. all patients had only partial seizures, with no generalization. partial seizures and/or epileptiform discharges were not associated with impaired autoregulation. we found significantly lower cerebral oxygen saturation ipsilateral to seizures and/or epileptiform activity.
the association was observed in patients with various etiologies of coma, and with either convulsive or non-convulsive seizures. decreases of regional cerebral oxygen saturation at the bedside may alert the clinician to ipsilateral seizures.

elevated intracranial pressure (icp) and cerebral edema are common causes of mortality in neurocritical-care patients. key monitoring techniques for icp elevation include neuroimaging and invasive icp measurement. examination of the pupils is routinely performed to detect disturbances within cerebral physiology but shows high inter-rater variability. portable infrared pupillometry is increasingly used for accurate measurements, but the benefit of this technique remains to be established in patients with elevated icp. the aim of this study was to identify pupillary parameters associated with icp crisis in neurocritical-care patients. we prospectively enrolled 31 critically-ill patients (subarachnoid hemorrhage/intracerebral hemorrhage/stroke/bacterial meningitis) admitted to our neurointensive care unit (07/2016-07/2017) who required placement of external ventricular drains. we recorded serial pupillometer readings [i.e. maximum/minimum apertures (mm), constriction/dilation velocities (mm/sec), latency period (sec)] and corresponding icp values every 3 hours after admission. the neurological pupil index (npi), an algorithm that compares the above-mentioned pupillary parameters to a normative model of pupil reaction to light, grades pupil function on a scale of 0 (nonreactive) to 5 (normal). receiver operating characteristic (roc) curve analysis was performed to investigate associations between pupillary parameters and the presence of icp crisis (icp > 20mmhg). in these 31 patients, our data suggest a relationship between non-invasively detected changes in npi, cv or mcv and icp crisis; the clinical benefit of these parameters remains subject to future studies.

lung injury is frequently observed in patients with severe, acute brain injury.
while these patients often require mechanical ventilation, a lung protective ventilation strategy has not been extensively studied. this may be due, in part, to concerns that elevated positive end-expiratory pressure (peep) could adversely affect intracranial pressure (icp) or cerebral perfusion pressure (cpp). we were interested in exploring this relationship as a first step towards understanding whether mechanical ventilation resulted in a transmission of pressure to the brain. patients with severe brain injury (gcs ≤ 8) who received both mechanical ventilation and icp monitoring were enrolled in this pilot study. an esophageal balloon was inserted to measure their transpulmonary pressure (ptp). fluid responsiveness was assessed prior to the intervention. subjects underwent a step-wise increase in peep (increments of five) from 5 to 15 cmh2o. airway pressure, ptp and icp were measured at each peep interval. of the planned twenty, three patients have been enrolled to date. primary diagnoses included aneurysmal subarachnoid hemorrhage and intraparenchymal hemorrhage with a median gcs of 7. patients were ventilated using either volume control or pressure support ventilation; median fio2 was 0.35. two patients were on vasopressors and the same two patients were determined to be fluid responsive. at baseline (peep 5), mean icp, cpp, and ptp were 6mmhg, 85mmhg, and -3.18cmh2o, respectively. when peep was increased to 10 cmh2o, the average change from baseline in icp and cpp was -3.7% and -2.2%, respectively. when increased to 15 cmh2o the change from baseline in icp and cpp was 3.0% and 0.5%. during the intervention icp did not exceed 20 mmhg, nor did any patient experience hypotension. preliminary data suggests that intrathoracic pressure is not directly transmitted to the intracranial compartment. continued enrollment is needed to confirm these findings.

neurocritical care after severe traumatic brain injury (stbi) is focused on detecting and preventing secondary brain injuries.
in addition to intracranial pressure (icp), measures of brain tissue oxygen (pbto2), regional cerebral blood flow (rcbf), and electrocorticography (deeg) may provide critical clinical data. few studies have assessed the safety of such an approach and the reliability of data that is gathered. we describe here the placement, complications, and reliability of multimodality monitoring (mmm) data from a novel, single burr hole approach using a four-lumen bolt at our institution. we included consecutive adult stbi patients admitted to the neuroscience intensive care unit at the university of cincinnati from april 2015 to march 2017 who underwent mmm as part of standard clinical management per institutional protocol. data was obtained regarding device placement and complications. all data was visually inspected for errors and gaps in data. 40 patients were included. the mean age was 43+/-17 and 85% were men. bolts were placed a median of 17.7 (iqr 8.3-16.9) hours from injury. no clinically significant complications occurred, although 37.5% had minor complications (e.g. small tract hemorrhage or pneumocephalus). suboptimal placement of probes was noted in 13%. we monitored patients a median of 103.5 (iqr 48.8-133.6) hours. icp data was the most reliable, with data available 94.8% of the total monitoring time and only 1.2% error time. pbto2 and deeg data were reliable for > 90% of total monitoring time with < 2% error time. rcbf provided data for 77% of total monitoring time and had 17.9% error time. mmm in stbi may be carried out via a single burr hole without significant clinical complications and reliably yields continuous data to facilitate clinical decision making. this supports the feasibility of our approach as an alternative to icp monitoring alone.

intracranial hemorrhage patients with non-compliant ventricles may have high intracranial pressure (icp) despite normal ventricle size.
we aimed to assess the incidence of elevated icp among those with no radiographic evidence of intracranial hypertension. prospectively enrolled primary intracranial hemorrhage patients (sah, sdh and iph) admitted to two tertiary-care centers between 4/2008-5/2016 were retrospectively reviewed. among patients with external ventricular drainage (evd), admission head ct (hct) scans within 72h prior to evd placement were reviewed for evidence of elevated intracranial pressure (icp) including: ventricle size (bicaudate index, temporal horn size), basal cistern effacement, midline shift and global cerebral edema. when all of these features were absent, patients were classified as having a normal-icp hct. the incidence of elevated icp (>20mmhg) at the time of evd placement and during hospital stay was recorded. of 741 intracranial hemorrhage patients enrolled, 253 (34%) had evd. 23/253 (9%) had a normal-icp hct. of these, 11/23 (48%) had elevated opening pressure at the time of evd placement, and 13/23 (57%) had elevated icp during their hospital stay. among normal-icp hct patients with icp>20mmhg, 92% had sah, and the median gcs and hunt-hess scores were 14 (range 3-15) and 3 (range 2-5). the positive and negative predictive values of a normal-icp hct were 52% and 62%, respectively (auc 0.471, p=0.648). the only radiographic feature that was associated with elevated icp was global cerebral edema (or 5.4, 95% ci 3.5-8.4, p<0.001). approximately half of intracranial hemorrhage patients without radiographic features of elevated icp had icp>20mmhg at the time of evd placement and additional patients had elevated icp during their hospital stay. radiographic findings should not be relied upon to exclude the possibility of elevated icp.

the measurement of intracranial pressure (icp) is a cornerstone of intensive care management following severe traumatic brain injury (stbi).
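predictive values such as those reported above come from a standard 2x2 table of test result versus outcome. a minimal sketch with illustrative counts (the abstract does not report the full 2x2 table behind its 52%/62% values, so these numbers are hypothetical):

```python
# sensitivity, specificity, ppv and npv from a 2x2 table of
# test result (e.g. radiographic impression) versus outcome
# (e.g. icp > 20 mmhg). counts below are illustrative only.

def diagnostic_metrics(tp, fp, fn, tn):
    """standard diagnostic accuracy measures from 2x2 counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# hypothetical counts: 8 true positives, 2 false positives,
# 4 false negatives, 6 true negatives
m = diagnostic_metrics(tp=8, fp=2, fn=4, tn=6)
```

note that ppv and npv, unlike sensitivity and specificity, depend on the prevalence of the outcome in the studied cohort.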
it has been only recently that the time integral of icp has been quantified in relation to outcome; the time integral of brain tissue oxygen (pbto2) has not been studied. we gathered time-locked intracranial monitoring data on stbi patients at the university of cincinnati over 2 years. clinical management of all patients followed national standards. a raumedic pto probe was used to measure icp and pbto2; accuracy was verified by visual inspection with automated data cleaning. normalized data was mapped based on correlation with glasgow outcome scale scores at 3-6 months. we studied 43 patients aged 42+/-18 years (mean+/-sd); 84% were male. initial post-resuscitation glasgow coma scale score was median 6 (interquartile range: 4.5-7). 13/43 underwent craniectomy prior to monitoring. among those with good (gos 4-5) and poor (gos 1-3) outcome, the average icp was 13.2+/-5.6mmhg and 18.4+/-8.6mmhg (p=0.09); the average pbto2 was 25.7+/-10.0mmhg and 31.1+/-21.5mmhg (both n.s.). the correlation with outcome was dependent on both icp and time: an icp > 27mmhg for > 5 minutes was associated with poor outcome, whereas an icp as low as 23mmhg was associated with poor outcome only after 4 hours. in contrast, the pbto2 level, but not the duration, correlated with poor outcome in those without craniectomy at a pbto2 < 30mmhg, and particularly below 12mmhg. pbto2 burden was less reliable in those following craniectomy. we replicated the effects of icp/time in a cohort of patients with severe tbi, both with and without craniectomy, and subsequently demonstrated the burden of brain tissue hypoxia in those without craniectomy. the time integral of multimodality monitoring data may provide more accurate measures of secondary insult burden with implications for clinical care and prognosis.
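the "time integral" of icp referred to above (a pressure-time burden above a threshold, together with episode duration) can be computed from regularly sampled monitor data roughly as follows; the threshold, sampling interval, and sample values are illustrative assumptions, not the study's processing pipeline.

```python
# pressure-time burden ("dose") of intracranial hypertension from
# regularly sampled icp values. threshold and sampling interval
# below are illustrative assumptions, not the study's pipeline.

def icp_dose_mmhg_hours(samples, threshold=20.0, dt_minutes=5.0):
    """integral of (icp - threshold) over time wherever icp > threshold,
    in mmhg*hours, assuming one sample every dt_minutes."""
    dt_hours = dt_minutes / 60.0
    return sum((v - threshold) * dt_hours for v in samples if v > threshold)

def longest_episode_minutes(samples, threshold=20.0, dt_minutes=5.0):
    """duration of the longest continuous run above the threshold."""
    best = run = 0
    for v in samples:
        run = run + 1 if v > threshold else 0
        best = max(best, run)
    return best * dt_minutes

icp = [15, 22, 28, 31, 19, 24, 25, 18]  # hypothetical 5-minute samples
dose = icp_dose_mmhg_hours(icp)         # (2+8+11+4+5) mmhg * 5 min each
```

separating dose (intensity times duration) from simple threshold crossings is what allows statements like "27 mmhg for 5 minutes vs 23 mmhg for 4 hours" above.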
neurologists who work in neurocritical care (ncc) as neurointensivists may have critical care (cc) charges rejected for payment unless they are classified per centers for medicare and medicaid services (cms) taxonomy codes in their systems as critical care providers. the neurocritical care society and cms created a new ncc code 2084a2900x to fix this issue. we polled the aan ccen section members for awareness of this problem. we conducted a six-question google forms survey using the aan ccen synapse community website to assess knowledge of the ncc taxonomy code: we received 20 anonymous responses by the time we closed the poll on 7/13/17. question (q1) and (answers, a1): are you a neurology or neurosurgery background intensivist who does neurocritical care at your hospital? y/n (yes/no). a1: 95% reported being neurologists. q2: were you aware of the new cms neurocritical care taxonomy code 2084a2900x? y/n a2: 55% were aware of the taxonomy code. q3: are you aware why the ncc taxonomy code was created? y/n a3: 55% of respondents were unaware why this code was created. q4: what is your primary department for revenue collection? a4: 85% reported neurology, 15% neurosurgery, 15% critical care, and 15% blend. q5: are you aware that medicare can reject critical care charges (99291 and 99292) unless you are listed as a cms 'critical care provider' or as a neurocritical care provider? a5: 20% reported rejected charges at their centers. q6: are you aware of rejection of critical care charges happening at your own institution due to this misclassification? y/n a6: 20% of respondents reported rejected charges at their center. although limited in sample size, this survey revealed almost half of the respondents were unaware of the ncc code. we believe a larger study is warranted.

arterial subdural hemorrhage (sdh) is a rare but potentially devastating neurologic entity. it has been associated with ruptured aneurysms.
we report a case-series of five patients with arterial sdh and their outcomes. a retrospective chart review of our institute's vascular database was conducted using a pre-defined search strategy including the terms "aneurysm", "arterio-venous malformation", "subdural hemorrhage", and "dural arterio-venous fistula" (dural-avf). amongst 200 patients in the database, five cases were identified with ages ranging from 43 to 78 (four females). four had sdh due to aneurysm (two internal-carotid, one middle-cerebral, and one posterior-cerebral artery); one had a parieto-occipital dural av-fistula. no patient had preceding head-trauma or anticoagulation. of aneurysmal patients, one had no sah. on admission ct-head imaging, the mean sdh size was 10.88mm (sd 3.24; range 7.4-15mm), and mean midline-shift (mls) was 12.64mm (sd 2.49; range 9-15mm). the mean mls:sdh ratio was 1.16 (sd 0.25). in a historic cohort of acute subdural hemorrhage of non-arterial etiology, the mean size of sdh was 8.0 mm and the mean mls was 4.6 mm; the mls:sdh ratio was 0.56. in our series, three patients with aneurysms had decompressive craniectomy, one had mini-craniotomy for sdh evacuation; the patient with the dural-avf had coiling and mini-craniotomy for sdh evacuation. four patients died during hospitalization, whereas the patient with the dural-avf recovered to baseline functional status at 6-month follow-up. arterial sdh is a rare entity and may present without subarachnoid hemorrhage or any preceding head-trauma. the degree of midline-shift is usually out of proportion to sdh size. there should be a low threshold to obtain vessel imaging in cases of sdh with no clear trauma history. the mls:sdh ratio may be a useful screening tool for the possibility of arterial sdh, especially in the absence of trauma, and may reflect the rate of bleeding.
neurostimulant medications (amantadine and modafinil) are sometimes prescribed after acute nontraumatic brain injury to facilitate wakening and rehabilitation participation; the safety and effectiveness of this practice are unknown. following a retrospective evaluation of our experiences, we characterized anticipated challenges to designing a prospective randomized trial of neurostimulant medications to promote rehabilitation participation after acute non-traumatic brain injury. we performed a retrospective chart review of patients with subarachnoid hemorrhage (n=17), intracerebral hemorrhage (n=41) and ischemic stroke (n=30) who received neurostimulant medications over a period of 43 months. data regarding clinical course and potential confounders to assessing response were collected. continuous data are reported as median and interquartile range. neurostimulant medications were initiated in 88 patients at a median of 7 (5-13) days after hospital admission, for hypersomnolence 68 (77%), not following commands 28 (32%), lack of eye opening 25 (28%), and/or low gcs 15 (17%). thirty-nine (44%) patients were receiving sedatives or opioids at the time of neurostimulant(s) initiation. twenty-two (25%) patients received newly prescribed sedatives or opioids after neurostimulant(s) initiation. potentially sedating antiepileptic medications were prescribed to 14 (16%) patients. twenty-two (25%) patients were intubated at the time of neurostimulant initiation, confounding the gcs-v. potentially confounding clinical factors included hydrocephalus 37 (42%), vasospasm 7 (8%), and seizures 4 (5%). twenty-eight (32%) patients had temporary cerebrospinal fluid diversion in place at the time of neurostimulant initiation. initiation and titration of neurostimulant medications after acute non-traumatic brain injury was common, but timing and indications varied widely.
confounders to assessing effectiveness included concomitant sedating medications, variable pathophysiology related to the type and location of the stroke, and clinical factors like seizures, vasospasm, and hydrocephalus. these confounders are likely to make prospective evaluation of neurostimulant medication effectiveness difficult in the period of initial therapy following acute non-traumatic brain injury.

brain small vessel disease can cause cognitive impairment via ischemic or hemorrhagic mechanisms. current imaging modalities, specifically magnetic resonance imaging, allow for easier detection of different intracranial pathological processes including cerebral microhemorrhages (cms). research has demonstrated that the number and location of cms correlate with the type of cognitive impairment (memory, processing speed, executive function, and motor speed). we performed a retrospective analysis of 50 patients (age 44 to 87) seen at our neurology outpatient clinic from 2010 to 2017 who were identified by linguamatics software to have "microhemorrhage" in their radiology mri report. additional information included age, sex, cognitive examination, presence of cardiovascular risk factors, mri, and the number and location of cms. cognitive function was determined by mini mental state examination (mmse) score and diagnosis by a cognitive neurologist. patients were grouped by the presence of a single cm versus more than one (2 to 10), and regression was used to determine a relationship with mmse and vascular risk factors. the number of microhemorrhages per patient was 1 (17 patients), 2 (5 patients), 3 (8 patients), 4 (12 patients), 5 (3 patients), 6 (3 patients) and 10 (2 patients). vascular risk factors included hypertension (16 patients), diabetes mellitus (25 patients) and smoking history (26 patients). regression analyses indicated that the presence of more than 1 cm correctly predicted an mmse lower than 26 in 83% of cases (p=0.003).
age was the only factor that influenced this finding and increased this prediction to 90%. this study provides novel evidence that the presence of multiple cms on brain images predicts the presence of cognitive impairment and highlights the need for further investigation.

point-of-care ultrasound is a valuable tool in critical care, allowing timely and frequent bedside assessment of clinical questions. neurocritical care has long utilized transcranial doppler but is still early in the adoption of other critical care ultrasound applications. this study looked at the comfort level and competency of the participants at the point-of-care ultrasound workshop at the 2016 neurocritical care society annual meeting. the workshop comprised didactics and hands-on small group practice using live models. topics covered included ultrasound physics, lung, cardiac, and optic nerve sheath ultrasounds, as well as case studies in neurocritical care. participants were asked to complete an anonymous pre- and post-workshop assessment on a volunteer basis. a total of 41 pre-workshop and 33 post-workshop assessments were completed. the mean age of the participants was 40.7 ± 9.2 years. there were 21 (51.2%) attending physicians, 8 (19.5%) advanced practice practitioners, 7 (17.0%) fellows, 4 (9.7%) residents, and 1 (2.4%) research scientist. participants had limited ultrasound experience prior to the workshop, with 12 (29.2%) reporting none, 16 (39.0%) reporting < 1 year, and 10 (24.3%) reporting 1 to 3 years. on a 1-5 scale of comfort using ultrasound, with 1 being very uncomfortable and 5 being very comfortable, participants reported a median score of 2 (iqr 2-3) pre-workshop with an improvement to 4 (iqr 3-4) post-workshop. for 15 matched pre- and post-tests, all 15 participants had an improvement in their ultrasound knowledge. while the majority of the participants at this workshop had prior ultrasound experience, many were still uncomfortable with their ultrasound competency.
the format of didactics and hands-on small group practice improved the comfort level as well as overall ultrasound knowledge of these participants. additional opportunities for point-of-care ultrasound training should be considered in neurocritical care fellowships and meetings.

event related potentials (erps) allow assessment of cognitive processing in unconscious brain-injured patients. here we explored the diagnostic and prognostic value of erps obtained shortly after brain injury. we prospectively collected a comprehensive erp paradigm labeled "local global paradigm" from a consecutive series of unconscious patients with acute brain injury. this auditory paradigm allows the assessment of: 1) cortical responses, 2) unconscious cognitive processing, 3) unconscious focusing of attention, and 4) conscious processing of sounds. levels of consciousness assessed with the coma recovery scale-revised (crs-r) at the time of recording were correlated with the presence/absence of each erp component and with functional connectivity/complexity measures. we tested the prognostic value of each measure for recovery of consciousness prior to discharge. we analyzed 32 recordings from 15 patients (median recordings per patient 2 [iqr 1-3]). recordings were made 6 [iqr 4.25-10.75] days after brain injury and all patients were unconscious at the time of the initial recording. underlying etiologies included ich (n=6), sah (n=4), tbi (n=2) and other (n=3). there were trends for higher crs-r scores in patients with preserved erp components. crs-r scores correlated with the functional connectivity index (rho=0.44; p=0.01) but not with complexity measures. five (33%) patients regained consciousness (within 6 to 42 days from brain injury). one of these patients had to be excluded due to a poor-quality recording.
all 4 (100%) remaining patients had the three types of non-conscious responses preserved on at least one recording, in comparison to only 3 (30%) among patients who did not recover consciousness (fisher p-value = 0.07). similarly, connectivity was greater in patients who regained consciousness (0.0855 vs 0.0797; p=0.05) but complexity was similar. simple bedside erp responses indexing cognitive processing during the local global paradigm obtained shortly after brain injury correlate with the level of consciousness and, more importantly, with the probability of recovering consciousness.

over a decade ago, the institute of medicine introduced family-centered care (fcc) as a vital aspect of quality health care by strongly recommending that family members of intensive care unit (icu) patients be actively involved in decision-making. while there are many resources to help icu staff conduct meetings and provide information to families, the latest society of critical care medicine guidelines for fcc recommend the implementation of communication and decision support tools for family members to use. electronic decision support tools such as my icu guide have been effective in pilot studies at allowing family members to customize information delivery and communicate their preferences to icu staff. we sought to integrate a decision support and communication tool for families into an electronic patient portal. we developed an electronic patient and family engagement checklist for the neurointensive care unit (nicu) using doctella (patient doctor technologies, sunnyvale, ca), an existing patient engagement application. checklist components included: identifying a spokesperson, developing an advance directive, understanding diagnosis and prognosis, access to helpful resources, and a family meeting guide and planner. we presented the checklist to our hospital's patient and family engagement steering committee for the icus and received useful feedback.
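returning to the erp study above, the fisher exact comparison it reports (non-conscious responses preserved in 4/4 patients who regained consciousness vs 3/10 who did not, p = 0.07) can be reproduced from the 2x2 counts with a small hypergeometric implementation:

```python
# two-sided fisher exact test for a 2x2 table, implemented with the
# hypergeometric distribution. applied to the counts reported in the
# erp abstract: [[4, 0], [3, 7]].
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """p-value: sum of probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def prob(x):  # probability of the table whose top-left cell is x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    # small tolerance guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# recovered: 4 preserved / 0 not; non-recovered: 3 preserved / 7 not
p = fisher_exact_two_sided(4, 0, 3, 7)  # ~0.07, matching the abstract
```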
the checklist will also be vetted by the hospital's patient and family advisory council, and usability testing will be conducted. a family engagement checklist using an electronic patient portal is a novel strategy to enhance communication in the nicu. further validation of the tool is needed.

applying painful stimuli to brain injured patients is a time-honored practice assumed to provide valuable clinical information for diagnosis, prognosis, and potential guidance for therapeutic interventions. however, there is limited literature that has evaluated and discussed the benefits and potential adverse effects related to repeated painful stimulation during bedside neurological examinations. though providers intend to do no harm, the practice of repetitive painful stimulation can unintentionally damage patients' skin, muscle, and bone, as well as inflict emotional duress. in conjunction with basic ethical principles used to justify painful stimulation during patient examinations, we propose a revisiting of the practice of routine repetitive painful stimulation in neurologic bedside assessments. our objectives are to: 1. discuss the current literature regarding the use of painful stimulation and its beneficial and damaging effects, 2. describe alternative strategies for neurologic assessments, 3. propose guidelines to optimize accurate neurologic assessments while avoiding unnecessary repeated painful stimulation, and 4. propose the development of a graded methodology for delivering painful stimulation when necessary for neurologic assessments. 1. a summary of the literature will be outlined and discussed, focusing on the ethical considerations and justification for the use of painful stimulation in the neurologic patient and the perception of pain in comatose and minimally conscious patients; 2. alternative strategies will be presented to minimize bodily and emotional injury; 3.
a proposed outline with a companion flow diagram, "easing the pain guidelines," implemented in a tertiary care neurocritical care unit will be presented. there has been little attention paid to the burden of painful stimulation inflicted on patients in the neurocritical care unit. the guiding principle of nonmaleficence (do no harm) morally obligates clinicians to evaluate current practice standards using repetitive painful stimulation in routine neurologic assessments. implementing standardized guidelines will limit unintended harm to patients without compromising accurate neurologic assessments.

plasmapheresis is utilized in anti-n-methyl-d-aspartate receptor (nmdar) encephalitis to remove autoantibodies. antiepileptic drugs (aed), such as valproate, are often used to control seizures, which may complicate anti-nmdar encephalitis. it is important to prevent rapid reductions of aed levels to ensure proper seizure management in this setting. we obtained total and free (active drug, unbound to plasma proteins) valproate levels intermittently throughout two 5-day courses of daily plasmapheresis. during the first course, trough levels were obtained. during the second course, levels were obtained before, during, and after plasmapheresis. the patient was a 26-year-old female weighing 79 kg. albumin was 2.6 g/dl. her valproate regimen ranged from 1250 mg to 2250 mg every 8 hours (47 to 85 mg/kg/day), given intravenously or enterally. prior to the first plasmapheresis, the valproate dose was 1250 mg every 8 hours, resulting in a total level of 25 mcg/ml (reference range: 50-125 mcg/ml). free valproate was 21 mcg/ml (reference range: 7-23 mcg/ml); the free fraction was 86% (reference range: 5-18%). four days later, prior to the 5th plasmapheresis, the total valproate level was 35.3 mcg/ml. two days after the 5th plasmapheresis the total level was unchanged at 34 mcg/ml; free valproate was 8 mcg/ml and the free fraction was 24%.
during the second course of plasmapheresis, valproate total levels, free levels, and free fractions were 49 mcg/ml, 18 mcg/ml, and 36% before, 105 mcg/ml, 24 mcg/ml, and 23% during (valproate dose given upon initiation of plasmapheresis), and 59 mcg/ml, 24 mcg/ml, and 41% after plasmapheresis, respectively. valproate serum levels were not markedly influenced by plasmapheresis. free valproate levels and the free fraction were highly elevated throughout the patient's hospital course, however. future studies should evaluate critically ill patients' clinical response and toxicity correlations, as the free fraction of valproate appears to be elevated in this setting.

the purpose of this study is to assess knowledge retention of emergency neurological life support (enls) after participation in the course via a prospective observational study. study subjects seeking enls certification consented for study participation (enls-vs) from the ncs website, then took a closed-book, 70-question multiple-choice pre-test within 72 hours of enls course participation. after completion of the enls course, participants took the same closed-book, 70-question multiple-choice test (post-test). these tests consisted of novel questions from material presented in the course; questions were not repeated from the enls certification exams. thirty participants enrolled in the study, with 11 completing both the pre-test and immediate post-test. all 11 participants' scores improved on the post-test as compared to the pre-test. the mean percent correct on the pre-test was 63.5% with a median of 64.3% (range 34.5-85.7%). of the participants who completed both pre- and immediate post-tests, the mean pre-test score was 72.4% with a median of 72.9% (range 61.4%-72.9%). the mean post-test score was 86.7% with a median of 88.6% (range 77.1%-94.3%). the improvement of scores on the post-test compared to the pre-test was statistically significant (86.7% vs. 72.4%, p < 0.005).
all participants in the emergency neurological life support course showed improved test scores immediately after participation in the standard enls course. this study will assess knowledge retention at 6 months following training, and is actively enrolling new participants.

augmented renal clearance (arc, defined as a creatinine clearance of > 150 ml/min) has been demonstrated in neurocritical care disease states such as traumatic brain injury, intracranial hemorrhage, and subarachnoid hemorrhage. arc may result in increased elimination of renally-eliminated medications, thereby reducing drug exposure with standard doses. the overall prevalence of arc is not well described. the purpose of this study was to estimate the overall prevalence of arc in a neurocritical care population. this was a retrospective cohort study of adults > 18 years of age admitted to the intensive care unit on the neurosurgery service. demographic and pertinent laboratory data were collected for patients admitted from january 1, 2016 through december 31, 2016. an arctic score was calculated for each patient (6 or greater suggests arc). parametric data was compared using a one-sample student's t-test, and nominal data was compared using fisher's exact test (alpha = 0.05). statistical analysis was conducted using ibm spss version 24. arc was present in a total of 37.24% of patients. a broad spectrum of neurocritical care diagnoses was present. the mean age in years was significantly lower in patients with arc [47.8 (14sd)] than without arc [54 (17sd), p = 0.026], as was the serum creatinine [with arc 0.55mg/dl (0.1sd) vs without arc 1mg/dl (0.6sd), p < 0.001]. mean hospital length of stay was greater in patients with arc than without [10.1 (10sd) vs 6.5 (10sd), p < 0.001]. arc occurs commonly in neurocritical care patients and likely merits proactive screening or direct measurement of creatinine clearance in select patients.
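screening for arc as defined above (crcl > 150 ml/min) requires an estimate of creatinine clearance. the sketch below uses the cockcroft-gault equation as one common bedside estimate; note that the study itself used the arctic score, which is not reproduced here, and the patient values are hypothetical.

```python
# screening for augmented renal clearance (arc, crcl > 150 ml/min).
# cockcroft-gault is shown as one common bedside estimate of crcl;
# the study used the arctic score, not reproduced here.

def cockcroft_gault_crcl(age_years, weight_kg, scr_mg_dl, female=False):
    """estimated creatinine clearance in ml/min (cockcroft-gault)."""
    crcl = ((140 - age_years) * weight_kg) / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl  # 0.85 correction for females

def has_arc(crcl_ml_min, cutoff=150.0):
    """flag augmented renal clearance at the cutoff used above."""
    return crcl_ml_min > cutoff

# hypothetical patient resembling the arc group (younger, low scr)
crcl = cockcroft_gault_crcl(age_years=40, weight_kg=80, scr_mg_dl=0.55)
flag = has_arc(crcl)  # ~202 ml/min -> True
```

an estimate this high would, per the abstract's conclusion, prompt consideration of a directly measured creatinine clearance and review of renally-eliminated drug doses.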
pharmacokinetic studies of commonly used renally-eliminated medications may be needed to establish population parameters in the neurocritical care population.

education surveys demonstrate gaps in resident neurocritical care education and training. we assessed junior residents' baseline knowledge of neurologic emergencies, procedural competency, knowledge of available resources, and the impact of pre-rotation orientations. junior residents (neurosurgery pgy1s and neurology pgy2s) who had not previously rotated in the neuroicu were surveyed. a three-part survey was administered: part i, knowledge of icu structure and personnel; part ii, procedural competency; part iii, comfort with common neurocritical care emergencies. the survey comprised selection-based responses. after the survey but prior to starting the rotation, each resident was oriented to the unit by a neuroicu attending and nursing director. this orientation reviewed rotation goals, icu structure, personnel and rounding expectations. a follow-up survey was administered to evaluate the orientation. of 15 residents who had not rotated in the icu, 10 (66.7%) responded. none of the residents understood their specific role within the icu team. 80% did not understand the role of the resource nurse and were unaware of where to find procedure equipment. 60% of residents were not comfortable placing an a-line; 30% were not comfortable performing a lumbar puncture. over half of respondents said they "didn't know and could not easily find" the indications for hemicraniectomy after malignant mca ischemia, the indications for icp monitoring, or the initial workup of autoimmune encephalitis. 14 residents responded to post-orientation surveys. 76% felt the orientation was helpful in explaining the roles of team members. 61% felt it was at least "somewhat helpful" in understanding the role of the resource nurse. 46% felt the orientation was "helpful" and 38% felt it was "somewhat helpful" in identifying the goals of the rotation.
these baseline measures underscore the importance of structured interventions, both before and during the neuroicu rotation, to improve junior resident comfort and preparedness in managing neurologic emergencies.

physician-staffed helicopter emergency medical services (hems) are a well-established component of prehospital care in japan. however, there has been no report on hems and neurocritical care patients. we studied characteristics of neurocritical care patients who were transported by hems. we retrospectively evaluated neurocritical care patients who were brought to our emergency and critical care medical center (eccmc) by hems between january 2010 and march 2017. we excluded patients in whom the outcome was unknown and those who were transported to other hospitals or between facilities. of 4216 patients transported, half of those admitted to our hospital by hems were neurocritical care patients. the most important role of hems is rapid transportation of a flight medical team to the scene to provide immediate, lifesaving medical treatment. as proposed in the enls of the neurocritical care society, hems is considered useful in allowing neuro-emergency patients to receive the best care in the first hour.

optic nerve sheath diameter (onsd) measurement using ocular ultrasound has been shown to accurately detect elevated intracranial pressure (icp), but does require specialized training. variations in the optimal onsd threshold for detection of elevated icp in the literature limit clinical utility, and may reflect heterogeneity in manual measurement techniques. our objective was to develop, and validate against expert measurement, an image-analysis algorithm for onsd measurement to facilitate standardization and ease of use of this technique. consecutive patients with acute brain injury admitted to the neurointensive care unit underwent ocular ultrasound with a multipurpose point-of-care ultrasound machine.
a 6-second video was recorded from each eye in the axial plane and downloaded in dicom format. the onsd measurement algorithm was as follows. an average image was calculated using non-overlapping segments of the image sequence. a line integral was performed to estimate the border of the region of interest (roi), the globe. the roi orientation and globe point of the segmented region were established, and a point 3 mm posterior to the globe point was then identified. the onsd was measured at this point. manual onsd measurement was performed separately from the dicom videos by an expert blinded to the algorithm measurement. an intraclass correlation coefficient (icc) was calculated for absolute agreement between the highest onsd measured by the algorithm and expert manual measurement. a total of 27 patients with acute brain injury underwent ocular ultrasound. the icc for absolute agreement between algorithm (median 0.66, interquartile range 0.61-0.71) and manual expert (median 0.63, interquartile range 0.61-0.71) highest onsd measurement was 0.68 (95% confidence interval 0.31-0.85). an algorithm for automated measurement of onsd was developed and demonstrated good inter-rater agreement with expert measurement, although further refinement is required. automated measurement may help standardize and simplify a promising noninvasive bedside tool for the detection of elevated icp.

after transition to an electronic health record (ehr), transition to inpatient hospice required a separate encounter to account for the change in insurance payer in our neuroicu. this negatively affected completed transitions and hence patient-centered care. the focus of this quality improvement project was to define the new process, improve outcomes, and identify further opportunities. the quality improvement method "plan-do-study-act" was employed for this work within a 30-bed neuro-icu at a large academic medical center.
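the automated onsd pipeline described in the ultrasound abstract above (average the frames, segment the globe, step 3 mm posterior to the globe point, measure the sheath width) can be sketched as follows. this is a much-simplified illustration under assumed image conventions (probe at the top, hypoechoic globe, dark nerve/sheath band against bright retrobulbar fat); the thresholding heuristics are ours, not the authors' line-integral segmentation:

```python
import numpy as np

def average_frames(frames):
    """average the frames of the clip to suppress speckle noise."""
    return np.asarray(frames, dtype=float).mean(axis=0)

def measure_onsd(img, px_per_mm, dark_threshold=0.5):
    """estimate onsd (mm) 3 mm posterior to the globe on an averaged b-mode image.

    assumes the probe is at row 0, the globe is the wide hypoechoic (dark)
    region at the top, and the nerve/sheath complex is a narrower dark band
    below it against bright retrobulbar fat.
    """
    dark = img < dark_threshold
    # the globe spans most of the image width; the first row where dark pixels
    # are no longer the majority marks its posterior border (the globe point).
    globe_point = int(np.argmax(dark.mean(axis=1) < 0.5))
    row = globe_point + int(round(3 * px_per_mm))  # 3 mm behind the globe
    cols = np.where(dark[row])[0]                  # dark nerve/sheath pixels
    return (cols.max() - cols.min() + 1) / px_per_mm
```

on a synthetic image with a 20-pixel-wide sheath at 5 pixels/mm, this returns 4.0 mm; a production implementation would segment the globe properly and report the highest onsd across frames, as the abstract describes.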
we assessed the current state (not enabling transition to inpatient hospice) and the desired state (enrollment in hospice during the inpatient stay). a new process was created using an ehr discharge navigator, coordinating all relevant stakeholder groups (patient/caregiver, nurse, pharmacist, bed control, physician). in addition, standard methodology for unit-based education, in-services, just-in-time training, and booster education was employed to identify process, outcome, and improvement opportunities. after rollout of the new discharge navigator, 90% of all patient-facing staff successfully completed the inpatient hospice training. process improvements led to a 500% increase in palliative care consults (16 to 96 annually) and a 30% increase in inpatient hospice discharges (68 to 88 annually). furthermore, there was a statistically significant improvement in the vizient mortality index, r2 = .11, f(1,51) = 22.64, p < .05, and length of stay index, r2 = .10, f(1,51) = 12.64, p < .05, within the study population and period. the ability to transfer patients to inpatient hospice is often limited and complicated. this study shows how employing standard quality improvement, education, and implementation methods can result in improved process and outcome measures with sustainable success. opportunities remain in further analyzing and optimizing 'time to palliative care consult' and 'time to admission to hospice and withdrawal of artificial support'.

arnps and pas are a rapidly growing part of the critical care workforce. proper selection among a pool of app candidates is necessary to ensure the "right fit" and optimize patient safety. conventional interview techniques may not be adequate when selecting critical care apps. we hypothesized that a simulation center could help select app candidates based on their critical thinking skills in conjunction with contemporary interviewing techniques.
from 2010 to 2013, we performed conventional interviews for apps to staff critical care and neurocritical care patients. in 2014 we changed to an interview process consisting of the conventional interaction with the interviewee followed by simulation. after narrowing down the initial candidate pool, each candidate was taken to the simulation center, where they participated in a simulation of a decompensating patient. proctors were able to view the simulation from a separate room and direct the simulation mannequin. during this time proctors evaluated the interviewee's patient interaction, assessment, and interventions. an evaluation tool was used to grade app candidates on their decision-making skills, communication, and thought process. from 2014 to 2017, we screened 40 candidates before selecting 21 for interviews and finally 10 of those for simulation. over this timeframe, our center hired 6 apps. the ratio of screened applicants to hires was 6.6, and the ratio of interviewed candidates to hires was 3.5. these ratios reflect the competitiveness of the process and the potential use of simulation in selecting apps. comparing the period before simulation-based interviews to the period after, retention went from 75% to 92%, and disciplinary action for practice deficiencies went from 16% to 0%. the use of simulation-based interviews for critical care apps in our institution improved retention and decreased the number of disciplinary actions compared to conventional interview methods.

the contraindications for lumbar puncture (lp) in the setting of cerebral mass effect remain debatable. limited retrospective data suggest it may be safe. yet, high-quality guidelines specifically addressing this topic are not available. specific patient populations (post-instrumentation and immunosuppressed) may benefit from csf studies. we reviewed 1072 consecutive patients who underwent lp and cerebral imaging within a week before or after lp from 2007 to 2014.
all individuals with evidence of brain herniation, a component of midline shift, or mass effect were included. all subjects received a low-volume lp (5-10 cc of csf). there were 132 patients with radiological evidence of increased icp. midline shift (average = 4 mm) was present in 39 patients. we also observed herniation: uncal (n=16), subfalcine (n=15), and a combination of both (n=10); ventricular effacement (n=67); and cisternal compression with partial occlusion: quadrigeminal cistern (n=3), cerebellar-pontine-angle cistern (n=14), ambient cistern (n=24), crural cistern (n=14), prepontine cistern (n=7), suprasellar cistern (n=12), basal cistern (n=2), suprachiasmatic cistern (n=4), cisterna magna (n=3), interpeduncular cistern (n=3), medullary cistern (n=4). all patients tolerated the lp without complications. most survived a week after the procedure (n=128, 97%). notably, four individuals deteriorated for reasons unrelated to the lp and expired within a week because of withdrawal of care. as brain compliance cannot yet be accurately determined radiologically, we believe anatomical involvement should drive decision-making regarding lp safety. our data suggest that a low-volume lp (5-10 cc) might be safe in individuals with subfalcine herniation, midline mass effect < 4 mm at the foramen of monro level, and partial cisternal effacement. we believe that while lps might be safer in patients with supratentorial mass effect, individuals with posterior fossa involvement may tolerate it as well. these promising findings need further verification in larger sample populations.

the importance of neurocritical care has recently been recognized in japan. however, to date, there has been no neurocritical care training program. we developed the neurocritical care hands-on seminar as a proposed training module, and here we report the satisfaction of participants. we prepared a post-course questionnaire about participants' degree of satisfaction.
the main concept of our seminar was "how to maintain cerebral oxygen demand and supply balance." beginning with a short lecture about this concept, participants joined four hands-on scenarios: post-cardiac arrest syndrome (pcas), subarachnoid hemorrhage (sah), traumatic brain injury (tbi), and status epilepticus (se). in the pcas scenario, participants learned how to troubleshoot targeted temperature management, especially the management of shivering. in the sah scenario, they learned about perioperative management, including delayed cerebral ischemia. in the tbi scenario, starting with actual insertion of an intracranial pressure (icp) monitor in the simulator, they learned about icp management through a scenario-based simulation. in the se scenario, they learned about se management with actual continuous electroencephalogram monitoring. this seminar was held twice in 2017. most participants were mid-career intensivists; 11% were in their twenties (6/54), 64.8% were in their thirties (35/54), 18.5% were in their forties (10/54), and 5.6% were in their fifties (3/54). most of the participating physicians were specialists in emergency or intensive care medicine (74.1%; 40/54); nurses (13.0%; 7/54) and a clinical engineer (1.9%; 1/54) also participated. most participants (81.5%; 44/54) were satisfied with the seminar, and almost all (98.1%; 53/54) improved their self-confidence in their ability to carry out clinical practice in neurocritical care. we received positive, satisfied reactions from the japanese intensivists who participated in our seminar. for further improvement, we need to collect objective data to assess the utility of our neurocritical care hands-on seminar.

lumbar puncture in the presence of mass effect?
ciro ramos-estebanez, uhcmc, case western reserve university/neurology, cleveland, ohio, usa. introduction: 1) we propose an international consortium that would prospectively confirm the safety of low-volume lumbar puncture (lp) in the presence of mass effect in selected scenarios; 2) we welcome peers and advisors to join the effort. lp may be clinically necessary in the presence of cerebral mass effect. while empirical antibiotic therapy is generally successful, specific groups such as post-instrumentation patients and immunosuppressed individuals may benefit from cerebrospinal fluid (csf) studies. in the absence of high-quality clinical recommendations, uncontrolled retrospective literature suggests that a small-volume lp (5-10 cc) might be without complications in specific situations. nevertheless, the ethical principle of non-maleficence and the liability risk prevent clinicians from performing lps. in this scenario, an extended length of stay, poor outcomes, or a higher cost of care are legitimate concerns. 5. synthesize an external peer-reviewed methodology to maintain rigour and transparency. 6. seek appraisal, approval, and endorsement of national and international quality improvement committees. 7. generate and assimilate the most current clinical evidence through: a. systematic review and meta-analysis; b. a prospective randomized controlled clinical trial. 8. construct a protocol to inform decision making amongst healthcare and non-healthcare personnel. 9. dissemination and implementation. 10. schedule updates and/or revisions. 15 centers across the globe (north america = 5, south america = 4, europe = 2, and asia = 5) have agreed to establish an lp consortium so far. retrospective analyses suggest low-volume lp's relative safety in the presence of increased icp. therefore, an expert consortium entrusted with prospective verification would potentially benefit specific patient populations.
patient-centered decision making in the nccu requires family members' understanding of their loved one's preferences and values as well as the complexities of their medical condition and treatments. family-centered care (fcc) is essential so that family members are actively involved in decision-making. stakeholders have reported their preference to receive prognostic information in smaller packets, recapitulated in different venues including rounds, the bedside, and care conferences. we examined implementation of a multimodal communication strategy on clinician utilization, family engagement, and satisfaction in the nccu. an interdisciplinary team convened to develop a plan for implementing a multimodal communication strategy. a pre-implementation survey of clinicians (mds, nps, rns, etc.) and patient families was completed to determine the level of family engagement already in place in the nccu. four interventions were implemented: family communication boards were installed in patient rooms; family engagement pamphlets were developed; a script and schedule for family care rounds were developed; and nursing and provider staff were educated on inviting families to participate in patient care team rounds. family involvement in patient care rounds and family conferences was compared before and after the implementation of the 4 best practice initiatives. additionally, pre- and post-implementation patient satisfaction survey results were compared to evaluate the project's success. pre-implementation data were collected from october-november 2016. sixty-one clinician surveys and forty family surveys found that families more consistently participated in daily rounds. baseline and post-implementation surveys demonstrated families feeling supported during the decision-making process. the implementation of a multimodal communication framework to achieve consistent family engagement and communication has led to an appreciable change in utilization by clinicians.
its use is supported by consistent positive family attitudes towards communication and availability of information in the nccu.

the neurocritical care society undertook initiatives to integrate social media into member engagement activities and initiated a twitter journal club (#ncstjc) in 2015, with the first journal club conducted in february 2015. articles were chosen by a subgroup of the communications committee in consultation with dr. eelco wijdicks, chief editor, neurocritical care journal. these articles were chosen based on their overall importance and the interest they were bound to generate amongst journal club attendees. the journal club occurs bimonthly over an hour and is unique in the participation of the authors. the journal club is registered with the healthcare hashtag project. each article chosen for #ncstjc is made available as a free download for 2 weeks before and after the scheduled date of the journal club, courtesy of springer. analytics data on usage of articles discussed in #ncstjc were obtained from sean beppler, editor, clinical medicine. between feb 2015 and apr 2017, 12 sessions were held, with data available from 11. the ncc articles discussed had higher than average altmetric scores (measuring social media activity). these articles represented 5% of all ncc articles discussed on twitter since feb 2015 but 15.3% of all tweets. total usage (the number of times an article html page is accessed or a pdf is downloaded) was 12,989 (mean 1082, n=11), representing 24.6% of usage of all neurocritical care articles, with a total of 39 citations and 3191 downloads (mean 290). the upper bound of the audience as assessed by the publisher was a total of 111,524 for all 12 articles (mean 9,294 per article). twitter is becoming an emerging platform for dissemination of information in medical education and academic activities.
while the exact impact of the initiative on member engagement, or its outreach in enhancing journal impact or citations, is hard to determine, we saw trends toward enhanced engagement.

neurostimulant medications have been studied in patients with traumatic brain injury, but few studies describe their use in patients with acute non-traumatic brain injury. our objective was to describe neurostimulant medication prescribing patterns, clinical response, and potential adverse effects in this patient population. we performed a retrospective database review of patients with acute ischemic stroke, intracerebral hemorrhage, or subarachnoid hemorrhage who received amantadine or modafinil from december 2012 through june 2016. neurostimulant selection, dosing regimen, and indication were recorded. patients were classified as responders if they met two of the following three criteria within 7 days of neurostimulant initiation: 1) an increase in average daily gcs of greater than 3 points, 2) neurological improvement documented in provider progress notes, or 3) increased participation in rehabilitation therapies documented in physical or occupational therapist progress notes. safety data included the need for a new anxiolytic or sleep aid and the occurrence of seizure. continuous data are reported as median with interquartile range. eighty-eight patients received neurostimulants: intracerebral hemorrhage (n=41), ischemic stroke (n=30), subarachnoid hemorrhage (n=17). median age was 66 (55-73) years and 57 (65%) were male. amantadine (n=71), modafinil (n=13), or both (n=4) were initiated a median of 7 (5-13) days after hospital admission. the median initial daily doses of amantadine and modafinil were 200 mg and 100 mg, respectively. reasons for initiation included somnolence (77%), not following commands (31%), lack of eye opening (28%), and low gcs (17%). forty (45%) patients were responders, with response occurring at a median of 3 (2-5) days after neurostimulant initiation.
twenty-three (26%) patients required a new prescription of an anxiolytic or sleep aid. four (5%) patients developed seizure. neurostimulant medications may increase wakefulness and participation in rehabilitation therapies in patients with acute non-traumatic brain injury, with tolerable adverse effects. the role of neurostimulants in this population should be defined in prospective studies.

difficulty in obtaining peripheral intravenous (piv) access often necessitates central venous access placement in many critically ill patients. central line placement exposes patients to potential complications such as pneumothorax, hemorrhage, catheter-related infection, or deep venous thrombosis. ultrasound-guided piv placement has become common practice in emergency departments, but there is no systematic program to train and support routine use of ultrasound-guided piv access in icus. we have developed a systematic program to train and support icu nurses in becoming experts and clinical leaders in ultrasound-guided piv placement. we hypothesize that implementation of this program will increase nurse confidence and the chances of successful piv placement, and subsequently decrease central line-related complications in our 30-bed neurocritical care icu. we have developed a video didactic training program for the neurocritical care nursing staff. the program discusses use and maintenance of the ultrasound machine and guided technique for piv placement, including the short- and long-axis approaches. the training video is followed by a hands-on simulation session using mannequins. standardized surveys are administered to nurses before training and then at 3 and 6 months post training. we are prospectively collecting data on nurse comfort level with ultrasound-guided piv placement, total iv attempts, patient central line-associated bacteremia (clab) rates, and total number of patient central line days. we will compare these data for 6 months pre- and post-program implementation.
comparisons will be made using t-test and chi-square analyses or non-parametric equivalents, depending on data distribution. central line-related complications are an important clinical problem in all icus. we have developed and implemented a systematic training program to support nursing-led ultrasound-guided piv placement. we will determine whether this program reduces the overall number of central lines placed, the duration of indwelling central lines, and clab rates in a neurocritical care icu, and will subsequently expand to additional icus and beyond.

ultrasound measurement of optic nerve sheath diameter (onsd) is a sensitive and specific non-invasive technique, but requires trained ultrasonographers. despite clinical applications in the icu, er, and outpatient settings, neurology residents lack experience and training. the aim of our project was to provide neurology residents with foundational skills in ocular ultrasound and onsd measurement. we designed a two-part workshop for neurology residents covering ultrasound basics, measurement of onsd, and the ultrasound appearance of papilledema. workshop 1 was a 30-minute lecture and demonstration followed by 90 minutes of hands-on practice. two weeks later, workshop 2 included 60 additional minutes of practice to consolidate learning. the practical portions were facilitated by emergency medicine attendings and residents with experience in performing ocular ultrasounds. neurology residents tracked the number of practice ultrasounds performed. they also completed anonymous pre- and post-tests to assess their knowledge of ocular ultrasound and their comfort level and likelihood to perform future procedures using a 6-point likert scale. prior to the workshop, the majority (14/19) of neurology trainees had never performed an ocular ultrasound. one (1/19) was able to answer two basic questions about the procedure correctly, which increased to 100% on the post-test (n=16). trainees performed an average of 5 ultrasounds in total during the workshops.
resident self-assessment of comfort performing the procedure increased from a median of "very uncomfortable" to "moderately comfortable" on the 6-point likert scale (p=0.001). resident likelihood to perform the procedure in the future increased from a median of "very unlikely" to "moderately likely" on the 6-point likert scale (p=0.002). this session successfully increased basic knowledge, comfort, and likelihood to perform ocular ultrasound among neurology residents. future directions include follow-up to gauge the magnitude of practice changes and the accuracy of procedural skills.

reaching patients by telephone is a common method of assessing functional outcome, cognitive function, and quality of life after hospital discharge. however, when patients do not answer the phone, missing data create bias and warrant strategies to increase follow-up rates. we hypothesized that we would have less follow-up with patients discharged to long-term care facilities and sought to examine other potential sources of lost data. between 1/2016 and 3/2017, we identified all patients admitted to the university of cincinnati neuroscience intensive care unit (nsicu). we excluded those with recurrent admissions, boarders, and those admitted < 24 hours or for uncomplicated post-op care. telephone follow-up was attempted for each patient. univariate analysis was used to identify factors associated with patients who did not answer the phone. 1103 critically ill patients were included. the average age was 59.7 +/- 17.6 and 53% were men. the average hospital length of stay was 9.4 +/- 8.9 days. major diagnoses were: ischemic stroke (25%), intracerebral hemorrhage (19%), traumatic brain injury (15%), seizures/status epilepticus (10%), and subarachnoid hemorrhage (9%). 271 (25%) died in the hospital; 130 (12%) died by follow-up. 702 survivors were assessed 133.2 +/- 25.6 days following admission. 462 calls were answered; 231 were not.
there were no associations between the rate of answered calls and age, gender, race, hospital length of stay, diagnosis, or hospital disposition. there was no difference between morning and afternoon calls. only the number of attempts differed: the probability of an answered call was 90% on the first attempt but declined to 14% by the third attempt (or 0.13; p<0.01). our outcome assessment strategy captured data on 78% of neurocritically ill patients. those who answer the phone are most likely to answer the first call; the probability of a patient answering after a second phone call may not justify the resources needed to continue calling these patients.

posterior reversible encephalopathy syndrome (pres) typically presents with vasogenic edema on neuroimaging. a subset of patients, however, can have "atypical findings," including restricted diffusion and intracranial hemorrhage. these atypical findings all suggest acute vascular injury and may mark a distinct pathophysiological subtype of pres. however, it is unknown whether atypical imaging findings are associated with differences in precipitating factors or outcome. patients with evidence of restricted diffusion, frank hematoma, microhemorrhage, or subarachnoid hemorrhage were classified as having atypical imaging findings. the demographics, risk factors, clinical outcomes, and degree of vasogenic edema for patients with typical (n = 109) vs atypical pres findings of vascular injury (n = 75) were analyzed. patients with atypical pres had a longer hospital stay (22.8 vs 15.4 days; p = 0.039) and were less likely to be discharged home (54.7% vs 69.7%; p = 0.036). the severity of vasogenic edema (graded using a standardized radiologic scale) was also higher in patients with atypical imaging findings (severe edema: 41.3% vs 21.1%; p = 0.003). restricted diffusion and hemorrhage are features of acute vascular injury that may mark a unique pathophysiological subtype of pres.
pres patients with these atypical imaging features had longer hospital stays, a greater degree of vasogenic edema, and were less likely to be discharged home. this may be because bleeding and infarction lead to irreversible brain injury, prolonging hospital stay and contributing to overall disability.

in 2016, 16 deaths occurred in the neuro-oncologic critical care unit (nccu). the impact of this event is significant for patients, families, and the staff that care for them. brain and spinal pathology can be incredibly debilitating, causing a rapid and impromptu decline in the patient's status. in order to better support patients, families, and staff throughout the dying process, the nccu staff created formalized end-of-life care interventions. these interventions include educational pieces and supportive approaches to aid all involved through the dying process. several interventions were created to help transition family members during the dying process. these include the creation of a dnr-cc closet, homemade blankets, condolence cards, hygiene bags, educational packets, word clouds, and aromatherapy and massage items. once the patient's code status is changed to comfort care, a blanket is given to the patient. family members who remain at the bedside are provided with a bag of toiletries. education on the dying process is given to family members. multidisciplinary resources are provided, such as religious support focused on patient/family beliefs, palliative care for symptom management, and dietary provision of light snacks for the family. education for the physicians, nurse practitioners, nurses, and patient care associates was provided for those who wished to attend, to further understand this end-of-life care program. a 29-item tool was created to collect before and after data on staff satisfaction and comfort with the end-of-life process. results from this process are currently being collected.
creating specific end-of-life care interventions for the nccu has enabled staff to better care for patients and families. through the creation of the interventions and utilization of the dnr-cc closet, this unit has been able to better provide comprehensive education and supportive pieces to patients and family members during such a difficult time.

delirium is a neuropsychiatric syndrome characterized by disturbance in awareness, with reduced ability to sustain attention and impaired cognition and perception, that tends to fluctuate in severity during the day; in critical care it is associated with longer stay and increased mortality. this study aimed to determine the incidence, prevalence, predictors, risk factors, and outcomes of delirium in critically ill adults. a historical cohort study was conducted in adult patients hospitalized in a polyvalent icu from january 2016 until december 2016. delirium was diagnosed using the cam-icu. bivariate and multivariable risks were analyzed and presented as odds ratios (or) with 95% confidence intervals (ci). a total of 1133 patients were enrolled. delirium developed in 276 patients; the incidence was 24.3%. three independent predictors of delirium were identified: sedation (or 6; 95% ci; p < 0.001), alcohol dependence (or 4.4; 95% ci; p < 0.001), and glasgow coma scale < 8 (or 2.2; 95% ci; p < 0.001). delirious patients had a significantly higher apache ii (15 (11-21) vs 13 (8-19), p < 0.001), higher sofa (6 (3-8) vs 4 (2-7), p < 0.001), and higher saps iii (56 (48-65) vs 50 (41-60), p < 0.001). other risk factors were hyperlactatemia (p < 0.002) and hypotension (map < 65 mmhg) (p < 0.035). delirious patients required prolonged mechanical ventilation (p < 0.001) and had prolonged icu and hospital stays. the incidence of delirium in the period from january 1 to december 31, 2016 was 24.3% in a polyvalent intensive care unit. exposure to sedative medications, alcohol dependence, and a glasgow coma scale below 8 are independent predictors for the development of delirium.
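the delirium abstract above reports adjusted odds ratios from a multivariable model. for illustration only, an unadjusted odds ratio with a woolf (logit) 95% confidence interval from a 2x2 exposure table can be computed as follows; the counts in the example are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """unadjusted odds ratio with woolf (logit) confidence interval.

    a: exposed with delirium, b: exposed without,
    c: unexposed with delirium, d: unexposed without.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # se of log(or)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

with hypothetical counts of 40 sedated patients with delirium, 60 without, versus 20 and 180 among the unsedated, odds_ratio_ci(40, 60, 20, 180) gives or = 6.0, of the same magnitude as the adjusted estimate reported for sedation.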
similarly, icu stay was longer in the group that developed delirium; however, mortality was not affected by the presence of this condition.

it has been previously reported that the course of hsv-2 in the cns is significantly more benign than that of hsv-1, and that it rarely causes encephalitis or significant morbidity in immunocompetent adults. the aim of our study was to investigate the claim that hsv-2 cns infections are typically benign, and to assess for predictors of poor outcome in those patients who do suffer significant morbidity from hsv-2 cns infections. retrospective chart reviews were completed on patients with a positive hsv pcr at our institution from july 2008 until july 2016. patients with a pcr positive for hsv-2 were selected for our analysis. multiple clinical variables were evaluated in these patients, and we assessed outcome at the time of discharge, dichotomized into two categories: good outcome, defined as discharge home or to inpatient rehabilitation, versus poor outcome, defined as death, hospice, or placement in a long-term acute care facility. 21 patients with hsv-2-positive pcrs were identified. their charts were evaluated for demographics, laboratory values (serum and csf), imaging results, and outcome. there were 3 patients with poor outcomes. it was noted that they were all female, their mean age was 59.6 (vs 47.2 in the good outcome group), and two of the three were immunocompromised (67% vs 44% in the good outcome group). statistical analysis was performed; however, due to the small sample size, no statistical significance was found. nevertheless, age, sex, clinical presentation consistent with encephalitis, and immune status showed a trend toward poor outcome in this pilot study. a future study with a larger sample size is warranted to further assess this trend, as hsv-2 may not be as benign as previously reported.
there is a high prevalence of non-traumatic illness in patients presenting to emergency departments as trauma team activations (tta). we sought to determine the prevalence of neurologic emergencies within a population of patients receiving a tta. this was a retrospective review of prospectively-collected registry data capturing all ttas in a high-volume, urban, academic level i trauma center. records from june 2011 through june 2016 were reviewed to identify patients found to have a diagnosis of ischemic stroke, intracerebral hemorrhage (ich), subarachnoid hemorrhage (sah) or status epilepticus. further demographic, clinical, and outcomes data were then abstracted from the electronic medical record. a proportion of abstracted charts were reviewed by an independent reviewer to ensure data quality. there were 18,859 trauma activations in the registry during the study period. 120 patients (0.6%) were found to have a non-traumatic neurologic emergency and were included in the analysis. of these patients, there were 55 ischemic stroke (46%), 40 ich (33%), 15 sah (13%), and 10 status epilepticus (8%) patients. the mean age was 67, and 70 patients (58%) were male. the mean gcs on presentation was 10. about half of these patients (43%) were intubated in the emergency department. all patients received a head ct scan. 20 patients (16%) received intravenous thrombolysis. neurologic emergencies such as ischemic stroke, ich, sah or status epilepticus were common diagnoses in this population of trauma activation patients. clinicians caring for patients in these settings must maintain a high index of suspicion for non-traumatic illnesses, and act quickly to mobilize appropriate resources when a diagnosis is made to avoid delays in care. further research is needed to examine ways to improve both time to diagnosis and quality of care in this patient population. 
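the 0.6% prevalence above is 120/18,859; a sketch adding a wilson score interval, a common choice for proportions this small:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Binomial proportion with Wilson score 95% confidence interval."""
    p = k / n
    denom = 1 + z*z/n
    center = (p + z*z/(2*n)) / denom
    half = (z / denom) * math.sqrt(p*(1-p)/n + z*z/(4*n*n))
    return p, center - half, center + half

# 120 neurologic emergencies among 18,859 trauma activations (from the abstract)
p, lo, hi = wilson_ci(120, 18859)
print(f"prevalence {100*p:.2f}% (95% ci {100*lo:.2f}%-{100*hi:.2f}%)")
```

the wilson interval behaves better than the naive wald interval when the proportion is near 0, as it is here.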
formalized communication strategies decrease post-traumatic stress disorder (ptsd) symptoms in caregivers in the intensive care unit (icu). in one study, only 2% of family meetings met all shared decision-making criteria. however, much of the research has focused on family meetings, ignoring less formalized communication. the decision maker (patient or caregiver) was interviewed for all patients admitted to the medical (micu), neurosciences (nsicu), surgical (sicu), and cardiothoracic icu (cticu) for greater than 72 hours. subjects who stated significant decisions had been made were asked to report on 10 aspects of shared decision making on a 5-point scale. they identified the lead provider, who was subsequently approached to complete the same questionnaire. overall, 121 eligible decision makers were identified, 47 (39%) in the micu, 31 (26%) in the nsicu, 25 (21%) in the sicu, and 18 (15%) in the cticu. of these, 63 (52%) were unable to be contacted, 7 (6%) had insufficient english, and 13 (11%) reported no decisions made, leaving 38 (31%) enrolled. nineteen (50%) provider interviews were completed. topics most often reported as covered "well" or "thoroughly" by caregivers were assessment of understanding (36, 95%) and the nature of the decision (35, 92%), while those least covered were the need for input from others (23, 61%) and the context of the decision (25, 66%). topics reported as most covered by providers were the nature of the decision (18, 95%) and opinions about the treatment decision (17, 89%), while those least covered were the patient's values and preferences (12, 63%) and their preferred role in decision making (10, 53%). eighteen (47%) caregivers and 5 (26%) providers described all topics as covered "well" or "thoroughly." these results demonstrate differences in the perception of shared decision making by decision makers and providers. further qualitative investigation is underway to elucidate the nature of these inconsistencies. 
organ donation is a life-saving medical intervention. the effect of race, insurance and economic status on organ donation and recipients has not been studied at a national level. in our study, we analyzed the nationwide inpatient sample (nis) database for the years 2011-14 to select donors and recipients. baseline demographics (i.e., age, gender, race), insurance status and socio-economic status were compared between the two groups. we identified donors (n=26821) and recipients (n=108953) from 2011-14. recipients were significantly older (mean age ± sd, 50.3±16.3 vs 41.5±13.5, p<0.001). donors had a higher (57.1% vs 38.3%, p<0.001) proportion of women compared to recipients. both groups had a higher proportion of whites compared to other races (67% and 57.9%, respectively). insured patients were largely represented in both groups, with private insurance predominating in donors (67%) and medicare in recipients (50.2%). interestingly, self-pay represented 15.5% of donors but only 0.8% of recipients. race, insurance and socioeconomic status appear to be similarly represented in donors and recipients. interestingly, self-pay insurance has a higher distribution among donors than recipients. central line-associated bloodstream infections (clabsis) are a common health care-associated infection, accounting for 30,100 infections annually in the intensive care and acute care areas (cdc, 2014). according to the centers for disease control, clabsis result in thousands of deaths yearly and upwards of billions of health care dollars spent on preventable hospital-acquired infections. intensive care patients, especially the neurocritical care population, have an increased need for centrally placed catheters related to inadequacy of peripheral access, need for caustic iv medications, and fluid resuscitation. our neuroicu's goal was to decrease utilization and subsequently reduce the number of clabsis. in february 2016, we initiated a patient-centered quality improvement effort with this goal in mind. 
the neuroicu clinical nurse leaders conducted rounds daily to evaluate the necessity and management of central lines. the neurocritical care team and clinical nurse leaders collaborated in exploring alternatives if central lines were present. in addition to daily rounding, clabsi bundles based on cdc guidelines for clabsi prevention were initiated. our neuroicu developed checklist "buster cards" in september of 2016, prompting staff to the bundle interventions. the intent of the cards was to enhance nurse-to-nurse dialogue about bundle elements. the cards were evaluated monthly for trends in care. from august 2015 to june 2017, there was a 15% reduction in neuroicu utilization of central lines. in addition, the mean number of clabsis per month decreased from 0.67 to 0.22. trending of unit buster cards did not show care variances during this time period. implementing daily clinical nurse leader rounds with enhanced team communication significantly reduced the neuroicu's utilization of central lines and thereby decreased the rate of clabsis. percutaneous dilatational tracheostomy (pdt) is one of the most commonly performed procedures on critically ill patients. many studies have shown the safety and feasibility of pdt, but there are limited data on pdt in neurocritical care units. we describe our experience of pdt performed by neurointensivists. pdts were performed by neurointensivists at the bedside using the griggs guide wire dilating forceps technique. to confirm a secure puncture site, pdt was done under fiberoptic bronchoscopic guidance. from september 2015 to may 2017, procedural data were prospectively collected. the patients' demographic and clinical characteristics were retrospectively reviewed. we analyzed immediate complications of pdt as the primary outcome. pdts were performed for 46 patients; the mean age was 65.9 years, 26 (56.5%) were male, and the mean acute physiology and chronic health evaluation ii score was 20.5 ± 7.0. 
overall, the procedural success rate was 100% and the mean procedural time was 19.7 ± 9.3 min. periprocedural complications occurred in 13 patients; 10 had minor bleeding and 3 had tracheal ring fracture. there were no serious periprocedural complications of pdt. from our experience, pdt performed by a neurointensivist was safe and feasible and was implemented without serious complications. the neurocritical care unit (nccu) is a fast-paced setting with a multitude of providers and team members requiring optimal communication. it is also a high-cost, high-utilization environment, dictating the need for patients to be moved through appropriate levels of care efficiently. all of this must be accomplished while providing support and opportunity for collaboration and decision making on the part of the patient/family unit. there is great discussion in the case management world about the benefits of a unit-based versus service-based case management model. we looked at outcomes following the implementation of a unit-based case manager in the nccu. a dedicated case manager (cm) was implemented in the nccu to maximize assessment, advocacy, communication, education, identification of resources, and facilitation of services. processes to support maximal contributions were created. interventions included use of a discharge planning worksheet, implementation of a morning huddle, and space for the case manager to be physically available on the unit. los of patients discharged from the nccu decreased from 7.2 to 5.1 days. average los for patients that passed through the nccu during their hospitalization decreased from 10.7 days to 9.2. there was a 38% increase in discharges from the nccu from 2015 to 2016. average time from admission to cm assessment decreased from 51 hours to 32.2 hours. progress notes indicating intervention and/or communication of the plan increased from 211 to 630. a staff questionnaire indicated increased awareness of los and discharge plan needs. 
in this midwestern, academic medical center, integrating a dedicated, unit-based cm resulted in improved los, increased discharges and improved staff awareness of discharge plans. high-throughput genotyping technologies and large collaborative consortia have revolutionized the field of medical genetics. open data access is the final barrier to be overcome to capitalize fully on the opportunities currently available in stroke genetics research. the international stroke genetics consortium (isgc) has created the cerebrovascular disease knowledge portal (cdkp), a comprehensive web-based resource to explore and freely access genetic data related to cerebrovascular diseases. funded by the nih, the cdkp has been jointly developed by the isgc and the american heart association (aha) institute for precision cardiovascular medicine. the cdkp seeks to democratize access to genomic data and potentiate stroke genomics research by providing open access to genetic, phenotypic and imaging data on stroke. within the cdkp, data are aggregated, integrated, and harmonized according to a pre-specified standardized pipeline. any institution or investigator working with stroke genomic data is welcome to deposit their data or use available data. the cdkp houses two types of data, each meeting different regulatory and analytical needs: summary-level data and individual-level data. the cdkp offers three main features: (1) a web-based graphical user interface that allows the exploration of stroke genomics information through a wide menu of integrated tools for analysis and data visualization; (2) a repository of full sets of genome-wide summary statistics produced by published landmark studies in the field, available with a single mouse click; and (3) a repository of individual-level data, accessible through a secure cloud working space provided by the aha platform for precision medicine. the cdkp can be accessed at www.cerebrovascularportal.org. 
the cdkp advances the isgc's goal of liberal data sharing in stroke genomics and other areas of cardiovascular research that may benefit from genomic analyses. in the future, phenotypic datasets can be added to further enrich sharing of non-genetic data as well. hyperosmolar therapy using hypertonic saline is common in patients admitted to the neurocritical care unit (nccu) for the management of different types of cytotoxic cerebral edema or increased intracranial pressure (icp). vancomycin is commonly prescribed in the nccu as empiric antimicrobial therapy. the purpose of this study was to evaluate the effects of hypertonic saline therapy on the pharmacokinetic parameters of vancomycin in critically ill patients with generalized or compartmental increases in icp. this was a retrospective, observational study of adult patients consecutively admitted to the nccu between february 2012 and february 2017 who received hypertonic saline (3% sodium chloride) and a vancomycin dosing protocol managed by the pharmacist. patients with serum creatinine > 1.4 mg/dl were excluded from the study. vancomycin trough levels were estimated using published pharmacokinetic equations and then compared to the measured trough levels with the paired t test. the study protocol was approved by our institutional review board. of forty-four patients who met the inclusion criteria, twenty-one (47.7%) were diagnosed with intracerebral hemorrhage, nine (20.5%) ischemic stroke, seven (15.5%) subarachnoid hemorrhage, four (9%) subdural hemorrhage, two with brain tumors, and one patient with chiari malformation. the mean dosing regimen was 13.8 ± 3.4 mg/kg every 12 (8-24) h. the mean measured trough level was lower than the predicted trough level (9 ± 3.3 vs. 18.9 ± 6.2 mcg/ml; p < 0.0001). the mean serum sodium level was 147 ± 8 meq/l and the mean serum osmolality was 317 ± 18 mosm/kg. 
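the abstract does not specify which published pharmacokinetic equations were used; a minimal one-compartment, steady-state sketch with a commonly used population estimate for the elimination rate constant (matzke) and an assumed vd of 0.7 l/kg — the patient parameters below are hypothetical:

```python
import math

def predicted_trough(dose_mg, interval_h, weight_kg, crcl_ml_min, t_inf_h=1.0):
    """Steady-state vancomycin trough from a one-compartment intermittent-infusion model.
    k from a commonly used population estimate (Matzke); Vd assumed 0.7 L/kg."""
    k = 0.00083 * crcl_ml_min + 0.0044   # elimination rate constant, 1/h
    vd = 0.7 * weight_kg                 # volume of distribution, L
    # steady-state peak at the end of the infusion
    cmax = (dose_mg / (t_inf_h * k * vd)) \
           * (1 - math.exp(-k * t_inf_h)) / (1 - math.exp(-k * interval_h))
    # first-order decay from end of infusion to the next dose
    return cmax * math.exp(-k * (interval_h - t_inf_h))

# hypothetical 80 kg patient, crcl 100 ml/min, ~14 mg/kg q12h (mirroring the study's mean regimen)
print(f"predicted trough ~{predicted_trough(1100, 12, 80, 100):.1f} mcg/ml")
```

the study's point is that hypertonic saline (and the augmented renal clearance that often accompanies it) makes measured troughs fall well below such model predictions.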
critically ill patients with cerebral edema or high icp who were treated with hypertonic saline achieved subtherapeutic vancomycin trough levels, which may translate into poor clinical response. further research is warranted to evaluate the clinical response to vancomycin in this patient population. unnecessary telestroke activations are costly to emergency departments (ed), telestroke providers, and patients. therefore, it is important that ed nurses are well trained to effectively recognize stroke symptoms and decrease the rate of false-positive stroke code activations. the nursing-driven acute stroke care (nas-care) study aims to determine if implementing a standardized ed stroke program decreases door-to-needle times in emergency departments utilizing telemedicine. the nas-care intervention consists of ed nursing education including mock codes, nihss certification, and implementation of a standardized flow sheet. in this interim analysis from the first 4 (of 7) nas-care study hospitals, we examined ed admission and discharge diagnoses at each site for 3 months of blinded baseline data collection ("control") and 6 months after standardized training ("intervention"). false-positive encounters were defined as stroke code activations for which the patient diagnosis on leaving the ed was not stroke. although hospitals trended toward a reduction in false-positive stroke code activations after implementation of the standardized stroke education, mock stroke codes, and flow sheet, none of the changes were statistically significant. further research is needed to determine whether intensive ed nursing education can improve telestroke resource utilization. pharmacist-driven intravenous (iv) to oral (po) conversion protocols result in greater compliance, improved cost-savings, and better patient outcomes related to length of stay, re-admission, and duration of intravenous therapy. 
this study aims to determine the cost-savings and patient impacts of such a conversion protocol for anti-epileptic drugs (aeds), including lacosamide, levetiracetam, phenytoin, and valproic acid. a retrospective, observational phase was conducted to determine usual practice patterns concerning conversion to oral therapy between 11/1/2016 and 11/30/2016. the conversion protocol was approved in december 2016 and implemented in january 2017. a second retrospective phase observed conversion practices beginning 02/01/2017 and ending 03/02/2017. length of intravenous and oral therapy, date eligible for conversion, and date of conversion were recorded. hospital acquisition costs were utilized for medication expenditure calculations. this information was used to determine the financial impact of the protocol and is presented as descriptive endpoints. adverse drug events were collected via an institutional incident reporting system. a total of 408 encounters were identified, resulting in 147 encounters in the pre-cohort and 121 in the post-cohort. comparing the pre and post cohorts respectively, both had similar median lengths of stay (11 days vs. 9 days), 30-day readmission rates (6.8% vs 9.9%), and rates of conversion from oral back to intravenous therapy (6.8% vs 6.6%). the median length of intravenous therapy was 5 days prior to protocol implementation and decreased to 3 days in the post-cohort. the average cost per day of aed therapy was $10.36 in the pre-cohort but decreased to $4.46 in the post-cohort. median missed opportunity costs, defined as the cost savings if conversion had occurred at the earliest possible date, also decreased between the cohorts from $3.87 to $1.24. pharmacist involvement in aed conversion had a positive financial impact without compromising patient care. 
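a back-of-the-envelope projection of the protocol's savings can be made from the per-day costs reported above; the way the figures are combined below (iv days saved per encounter, persistence of the monthly volume) is an illustrative assumption, not the study's analysis:

```python
# figures from the abstract
pre_cost_per_day = 10.36    # average aed cost per day, pre-cohort ($)
post_cost_per_day = 4.46    # average aed cost per day, post-cohort ($)
iv_days_saved = 2           # median iv therapy fell from 5 to 3 days
encounters_per_month = 121  # post-cohort encounters over roughly one month

# hypothetical combination: savings scale with iv days avoided per encounter
saving_per_encounter = (pre_cost_per_day - post_cost_per_day) * iv_days_saved
annualized = saving_per_encounter * encounters_per_month * 12
print(f"~${saving_per_encounter:.2f}/encounter, ~${annualized:,.0f}/year")
```

even this crude projection suggests drug-acquisition savings in the low tens of thousands of dollars annually, before counting avoided line days and nursing time.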
the national institute of neurological disorders and stroke (ninds) established the nih strokenet to facilitate the rapid initiation and efficient implementation of multi-center exploratory and confirmatory clinical trials focused on promising interventions in stroke prevention, treatment, and recovery. strokenet was initiated in the fall of 2013 and involves over 350 hospitals across the us. the network is anchored by 25 regional coordinating centers (rccs), along with the national coordinating center (ncc) at the university of cincinnati and the national data management center (ndmc) at the medical university of south carolina, as well as active participation by the ninds. one of the primary goals of the strokenet is to serve as the primary infrastructure for conducting stroke clinical trials and a pipeline for new potential treatments. to maximize the impact of nih strokenet, it is important for the larger community of stroke researchers and clinicians, including neurocritical care specialists, to know its structure and the process and timeline by which stroke trials are developed and implemented. since the inception of the network, 49* proposal concepts have been submitted to the strokenet and are in different development stages. among those evaluated to obtain ninds permission to submit a grant application, 16 have been submitted and 3 are in process. every application has been prepared and submitted for peer review within 3 months of the ninds permission. two* funded strokenet trials are now underway with brisk enrollment rates, and another is awaiting study initiation. 
(*as of abstract submission date) the nih strokenet has become a stable infrastructure and offers several distinct advantages for developing competitive clinical trial proposals, including scientific input from the strokenet working groups, comprehensive feasibility assessments (including site enthusiasm and patient availability), assistance with grant budgeting, and other requirements for grant submission that are likely to help refine and improve the application. the modified early warning score (mews) is a physiological scoring system validated in adult medical patients. we sought to determine the value of mews to identify clinical deterioration or occurrence of sepsis in neuroicu patients. we retrospectively reviewed all patients admitted to the neurological intermediate care unit (imc) or neuroicu of a large tertiary care center from 7/2015 who triggered an elevated mews score at presentation or during admission. baseline characteristics, diagnoses, physiologic parameters, infections, treatment with antibiotics, neurological worsening and mortality were abstracted from the electronic medical record. outcomes were defined as escalation of care and discovery of a new infection or sepsis. of 2556 admissions screened, 62% of flagged patients were intubated, and in-hospital mortality was 22% (versus 9% for all admissions). 182 (40%) were already being treated with antibiotics for a known infection, and a new infection or sepsis was diagnosed in 19%. in reaction to the elevated mews score, antibiotics were added or broadened in 22%, and the level of care was escalated from imc to icu in 2.7%. in 18.2%, there was neurological worsening, most frequently associated with increasing cerebral edema (34%) and midline shift/herniation (18%). the mews score is not a valuable screening tool in the neuroicu population. it was preferentially triggered in known high-acuity patients with ongoing or present infections, with no change of management in the majority of patients. while associated with high mortality, its ability to indicate new infections or sepsis was poor. 
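for reference, mews is computed from routine vital signs; a sketch of one commonly cited published variant (subbe et al.) — band cut-offs vary between institutions, so treat these as illustrative:

```python
def mews(sbp, hr, rr, temp_c, avpu):
    """Modified Early Warning Score, one commonly cited variant; bands are illustrative."""
    score = 0
    # systolic blood pressure (mmHg)
    if sbp <= 70: score += 3
    elif sbp <= 80: score += 2
    elif sbp <= 100: score += 1
    elif sbp >= 200: score += 2
    # heart rate (bpm)
    if hr < 40: score += 2
    elif hr <= 50: score += 1
    elif hr <= 100: pass           # normal range
    elif hr <= 110: score += 1
    elif hr <= 129: score += 2
    else: score += 3
    # respiratory rate (breaths/min)
    if rr < 9: score += 2
    elif rr <= 14: pass            # normal range
    elif rr <= 20: score += 1
    elif rr <= 29: score += 2
    else: score += 3
    # temperature (degrees C)
    if temp_c < 35 or temp_c >= 38.5: score += 2
    # AVPU: alert / voice / pain / unresponsive
    score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score

# hypothetical febrile, tachycardic, hypotensive patient responding only to voice
print(mews(sbp=85, hr=115, rr=24, temp_c=38.7, avpu="voice"))  # scores 8 with these bands
```

the abstract's criticism is visible in the inputs: neuroicu patients are often sedated or comatose, so the avpu component is chronically elevated and the score loses specificity for new infection.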
in 1 out of 5 patients, the mews score was associated with neurological worsening already known at the time of the score. other screening tools should be explored for early warning in the neuroicu. introduction: it is challenging to maintain neurosciences critical care nursing expertise in an environment of rapidly expanding knowledge, changing evidence-based practices and technological advancements. to address the need for neuroscience nursing expertise in a mixed critical care unit, our institution developed a core group of nurses, known as "neuro champions", who have additional training and expertise in neurocritical care. methods: nursing participation was voluntary and recruitment was via unit-wide announcements. the goal was to improve patient care by developing a core group of nurses who serve as resources and educators for all things neurosciences-related. to develop content expertise, the nurses initially completed a set curriculum including: neuroanatomy and pathophysiology, cerebral hemodynamics and multimodal monitoring, pupillometry, eeg interpretation, temperature management, evds, and quality indicators. bi-monthly meetings continued ongoing education, with content including clinical case studies and review of processes and protocols. additionally, 6 beds staffed by neuro champions were designated as critical neurological care unit ("cncu") beds to co-localize the highest-acuity neurosciences patients. the neuro champions are responsible for educating and sharing neuro-related practices with the entire icu nursing staff. 
results: as a result of the implementation of the neuro champion role, our icu has benefited from: 1) dedicated co-localized beds for the highest-acuity neuro patients; 2) an increased number of enls-certified nurses; 3) improved collaboration between the medical team and nurses; 4) promotion of care uniformity to maintain comprehensive stroke center certification; and 5) integration of multimodal monitoring advancements, all of which supports advances in patient care and research. conclusions: the neuro champion role has provided a platform for neurosciences-specific nursing expertise in a mixed critical care unit and has facilitated education dissemination to the entire staff via a core group of nurses. this expanded knowledge has improved the care of neurologically critically ill patients. the rate of cerebrovascular complications in patients treated with extracorporeal membrane oxygenation (ecmo) is about 7%. transcranial doppler (tcd) can be used to noninvasively monitor cerebral blood flow velocities (cbfvs) in patients undergoing ecmo. the aim of this study is to describe tcd-cbfv patterns in patients undergoing venovenous (vv) and venoarterial (va) ecmo. a neuro-surveillance protocol for ecmo patients was initiated as part of a quality improvement project at our institution. daily neurological exams, daily tcd, brain ct on days one and three, and 24-hr continuous eeg were performed in all patients undergoing vv and va-ecmo. demographics, clinical and imaging data were collected for the duration of ecmo support. cbfvs, lindegaard ratios (lr), pulsatility index (pi) and resistance index (ri) on tcd were collected. a total of 11 patients were included in the study [6 female (55%); 9 caucasian (82%)]. the mean age was 62 years. 8 (73%) patients received va-ecmo; 1 (9%) vv-ecmo; 2 (18.2%) received both va and vv-ecmo. the median number of days on ecmo was 7. 
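the tcd indices collected in this protocol (pi, ri, lindegaard ratio) have standard definitions; a sketch with hypothetical velocities in cm/s:

```python
def pulsatility_index(psv, edv, mean_v):
    """Gosling pulsatility index: (peak systolic - end diastolic) / mean velocity."""
    return (psv - edv) / mean_v

def resistance_index(psv, edv):
    """Pourcelot resistance index: (peak systolic - end diastolic) / peak systolic."""
    return (psv - edv) / psv

def lindegaard_ratio(mean_mca, mean_extracranial_ica):
    """Mean MCA velocity over mean extracranial ICA velocity; values > 3 suggest vasospasm."""
    return mean_mca / mean_extracranial_ica

# hypothetical velocities (cm/s): peak systolic 90, end diastolic 40, mean 60
print(round(pulsatility_index(90, 40, 60), 2),
      round(resistance_index(90, 40), 2),
      round(lindegaard_ratio(120, 45), 2))
```

the "reduced pulsatility" pattern the authors describe on va-ecmo follows directly from these definitions: non-pulsatile pump flow narrows the systolic-diastolic velocity difference, driving pi and ri toward zero.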
the median number of tcd studies performed was 2 (mean, 3.45). we observed an overall pattern of low-normal cbfvs and reduced pulsatility in patients on va-ecmo. nurse practitioner (np) and physician assistant (pa) roles continue to expand in the critical care setting. single and multisite studies have examined various aspects of app practice, but none have focused on role implementation within the neurologic critical care unit (nccu). the purpose of this study was to obtain foundational knowledge about how nccu apps are implementing the role nationally. this was a voluntary, cross-sectional, descriptive study of nurse practitioners (np) and physician assistants (pa) practicing in the us. apps were invited to participate in this voluntary, 70-item survey. distribution occurred initially through email inquiry via multidisciplinary, professional organization listservs (ncs, aacn, aann), followed by snowball-effect circulation. enrollment occurred from march to june 2016. data were collected in redcap and analyzed using spss, with descriptive statistics for demographic, institutional, practice, and role characteristics of the sample and for each survey data element. 172 app participants completed the survey: 78% np, 12% pa, 10% other. the majority of respondents were master's-prepared (82.9%), acute care-trained (68.9%) and hospital-employed (81.4%). participants were either early in their career (38.7% 1-5 years as an app) or experienced (28.7% > 9 years). 81% work in a direct care role, with 77% providing total care for their patients with an average daily caseload of 7.6 ± 3.4 patients. 63% of providers believed 3-6 patients was a reasonable caseload for total care. in addition to the nccu, participants also care for patients in step-down units (28%) or the emergency department (36%), with 49% routinely billing for their work. this study is the first to provide information regarding how ncc apps are implementing the role in the united states. 
this study provides benchmarking data which may guide future research with this population as well as serve as a template for evaluation of other app specialty roles. despite advances in treatment, the median survival for high-grade gliomas (hgg) remains poor. there is a growing body of research showing that palliative care improves quality of life and survival in patients with advanced malignancies. we sought to examine our own practices in the neurologic intensive care unit (nicu) regarding palliative care consultation in this population. we hypothesized that the incidence of palliative care consultation is low and associated with a clarification of patients' wishes, measured by a change in code status. we conducted a retrospective cohort review of patients with previously diagnosed hgg admitted to the nicu from 2011-2016 with a length of stay (los) greater than 48 hours. the primary outcome was the incidence of patients with an advance directive or inpatient palliative care consult (pcc). secondary outcomes included intensive care unit los, change in code status and location of death. 90 patients were identified with hgg. the mean age was 59.5 years (19-86 years), 64% were male, and 81% were white. no patients had an advance directive on admission. a pcc was obtained in 16 patients (18%). pcc was associated with an increased nicu stay (254 hrs vs 63 hrs, p=0.01), a change in code status to do not resuscitate (69% vs 11%, p=0.00001), and an increased likelihood of not dying in the hospital (92% vs 58%, p=0.08). at our large academic tertiary care facility, intensivists underutilize palliative care services for hgg patients. patients with fatal brain tumors are not having end-of-life discussions prior to admission, indicating a need for early palliative care intervention. patients are six times more likely to change their code status, and there is a trend towards dying outside of the hospital, if they receive a palliative care consult. 
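the "six times more likely" figure can be recovered as a risk ratio; the counts below are back-calculated from the reported percentages (roughly 11 of 16 with a pcc vs 8 of 74 without) for illustration only:

```python
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Risk ratio between an exposed and an unexposed group."""
    return (events_exposed / n_exposed) / (events_unexposed / n_unexposed)

# code-status change to DNR: ~69% of 16 PCC patients vs ~11% of the 74 without PCC
rr = relative_risk(11, 16, 8, 74)
print(f"relative risk ~{rr:.1f}")
```

0.69 / 0.11 gives roughly 6.3, matching the abstract's "six times more likely" statement.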
hypertonic saline (hts), a hyperosmolar solution, is typically administered using a central venous catheter (cvc) due to concerns of extravasation, but a cvc is rarely readily available. in emergent situations, intraosseous (io) access is used when peripheral intravenous access is not available. the existing literature does not address the administration of hypertonic saline using io access in adult patients with brain injury. the administration of hts is often delayed by the time taken to obtain central venous access; insertion of an io needle is typically much faster than that of a cvc. we report the safety and tolerability of hts given by the io route. a prospective pilot study on the safety and tolerability of 3% hts via io is currently underway. data on local complications at the site of injection, pain during insertion and during infusion, and serial serum sodium levels were collected. additionally, we report a case of successful administration of 23.4% hts using the io route. preliminary data demonstrated that 3% hts was well tolerated, with no reports of severe pain, infections, extravasation, soft tissue injury or local infectious complications in our sample of patients with brain injury. indications for use of hypertonic saline included cerebral edema and mass effect from intracerebral hemorrhage. an appropriate rise of serum sodium levels by approximately 1 mmol/l/hr was observed. in the case where 30 ml of 23.4% hts was administered, no local complications were observed and serum sodium levels rose appropriately. administration of hts using the io route appears to be safe and feasible. utilizing io access for urgent administration of hts may reduce the lag time to administration of the initial bolus and reduce the need for emergent placement, or eliminate the placement, of a cvc in certain cases. optic nerve sheath diameter (onsd) measurement is an emerging bedside tool to assess intracranial pressure (icp) non-invasively in brain injury patients. 
multiple studies demonstrate that onsd widths from 4.5 mm to 5.8 mm correspond to an external ventricular device (evd)-measured icp > 20 mmhg. we sought to create a low-cost, 3-d constructed, re-usable onsd teaching model to train neurology, neurosurgery, and critical care advanced practice providers and physicians. we searched the national library of medicine using the terms "optic nerve sheath diameter ultrasound" with combinations of "simulation" and "model." the literature was used in conjunction with a human non-contrast head ct model to make an eyeball model, which was then tested in our simulation center and compared to a live human model. we identified 253 articles, of which 15 were associated with models and two with simulation. one gelatin model was reported, upon which we based our initial design. we could not validate the visual findings of this model. however, following construction of multiple beta models, the design most representative of human eye anatomy was a globe made of ballistics gel with either a 3 mm, 5 mm or 7 mm 3-d printed "optic nerve", attached to a platform composed of ballistics gel and psyllium powder with a hollowed-out core for ultrasound gel upon which the globe rests. this model was used to teach learners at a continuing medical education event prior to teaching onsd on a live human model. a 3-d printed skull from ct head data is being created to incorporate this model. a simple 3-d ballistic onsd model allows learners to learn proper hand placement, basic landmarks, and onsd measurement, and to practice proper pressure on human eyes. this model can be replicated and utilized in a sustainable fashion given that the globe and platform are composed of ballistics gel. pressure measurement using pressure guide wires is an invaluable diagnostic tool in the management of many endovascular revascularization therapies. 
its role is well established in coronary artery disease management, such as the use of fractional flow reserve (ffr) as a standard diagnostic tool to determine the need for stenting, angioplasty or bypass. renal fractional flow reserve likewise remains an integral physiologic parameter in endovascular revascularization therapy for renal artery stenosis. despite the widespread use of pressure wires in endovascular therapies, their application in the management of cerebral venous diseases remains vastly unexplored. we sought to evaluate the safety and applicability of pressure guide wires in several cerebral venous diseases. patients undergoing diagnostic angiography for possible venous outflow obstruction had pressures measured by pressure guide wires (volcano verrata® or prestige primewire®) across the following vessels: superior sagittal sinus, torcula, right and left transverse sinus, right and left sigmoid sinus, and right and left internal jugular vein. venous pressures were also collected from patients undergoing venous thrombectomy, stenting, or arteriovenous malformation (avm) embolization. five patients who underwent diagnostic angiography for pseudotumor cerebri showed no major variability in pressures across the cerebral venous architecture, which was confirmed by the absence of stenosis or thrombi on intravascular ultrasound (ivus). four patients had a pressure difference above 10, which was suggestive of a stenosis later confirmed by ivus. patients undergoing pressure measurements who had evidence of stenosis or thrombosis by ivus showed improvement in pressure gradients after stenting or thrombectomy. no variability in pressure gradients was noted in a patient who underwent avm embolization. pressure measurements using pressure guide wires can improve diagnostic accuracy and guide management of several diseases of the cerebral venous system. further studies are necessary to understand the applicability of this approach in the management of venous disease.
monitoring metrics is imperative for quality assurance and ongoing improvement in a developing clinical unit. a new neurocritical care unit (nccu), specializing in the treatment of critically ill, neurologically injured patients, opened in july 2013. this study examined quality metrics that correlate with the development and growth of a neurocritical care program. data from patients with principal diagnoses of ischemic stroke (isc), subarachnoid (sah) or intracerebral (ich) hemorrhage, seizure, or brain tumor, admitted to the nccu in 2014 and 2016, were used in the analyses. quality metrics included overall and individual complication rates per 1,000 patient days of pneumonia, venous thromboembolism, pulmonary embolism, sepsis, septic shock, pulmonary edema, gastrointestinal bleeding, and catheter-associated urinary tract infection, as well as hospital mortality and length of stay (los). chi-squared and mann-whitney tests and poisson regression were used to compare metrics between 2014 and 2016. patient volumes increased by 17.0% (439 to 515) from 2014 to 2016. the overall complication rate declined significantly, from 3.94 to 2.33 per 1,000 patient days (p=0.031). the most frequent complication in both 2014 and 2016 was pneumonia (1.45 and 0.85 per 1,000 patient days, respectively). the proportion of patients who expired decreased from 11.6% (n=51) in 2014 to 10.5% (n=54) in 2016, though not significantly (p=0.65). there were no significant differences in los among patients with isc, brain tumor or seizure. however, those with sah or ich had significantly shorter stays in 2014 (median [interquartile range] = 3.0 [2.0, 9.5]) than in 2016 (7.0 [2.0, 14.0]) (p=0.032). these data suggest that over the initial 3-year period, complication rates among patients in the nccu improved. los did increase for hemorrhage patients; however, this may be related to greater severity of illness in the patient population over time.
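the rate comparison above (events per 1,000 patient days, compared across years with poisson methods) can be sketched in a few lines of python. the counts below are hypothetical, chosen only to reproduce the reported rates of 3.94 and 2.33 per 1,000 patient days; the abstract does not give the raw numerators or denominators, and the exact conditional test shown is one standard alternative to the poisson regression the authors used.

```python
from scipy.stats import binomtest

def rate_per_1000(events: int, patient_days: int) -> float:
    """complication rate per 1,000 patient days."""
    return 1000 * events / patient_days

def poisson_rate_test(c1: int, t1: int, c2: int, t2: int) -> float:
    """exact conditional test for equality of two poisson rates:
    under h0, c1 | (c1 + c2) ~ binomial(c1 + c2, t1 / (t1 + t2))."""
    return binomtest(c1, c1 + c2, t1 / (t1 + t2)).pvalue

# hypothetical counts -- only the rates (3.94 and 2.33) come from the text
events_2014, days_2014 = 20, 5076
events_2016, days_2016 = 14, 6000

print(round(rate_per_1000(events_2014, days_2014), 2))  # 3.94
print(round(rate_per_1000(events_2016, days_2016), 2))  # 2.33
print(poisson_rate_test(events_2014, days_2014, events_2016, days_2016))
```

with larger simulated counts the conditional test's p-value shrinks; whether a rate change reaches p=0.031, as reported, depends on the underlying counts, not the rates alone.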
further analyses will be conducted to account for severity and other factors.

delirium is a frequently seen but underestimated problem in critical care settings. delirium screening is considered time consuming, which is one of the factors leading to underdiagnosis. the cam-icu screening tool for delirium has been validated in medical and surgical icus. among neurological patients, it has been validated in stroke patients but not in the general neurocritical care population. this study was designed to validate the cam-icu flow sheet in a neurointensive care unit. a prospective cohort study was conducted in a 16-bed neurointensive care unit of a university hospital. patients meeting the inclusion criteria (all nicu patients) and none of the exclusion criteria (comatose, aphasic, psychotic, prior diagnosis of neurocognitive disease, persistent vegetative state, sedated) were screened for delirium by (1) a nurse practitioner using the confusion assessment method (cam-icu) and (2) a physician reference rater using the delirium criteria of the diagnostic and statistical manual of mental disorders-5 (dsm-5). assessments were done daily, monday through friday, throughout the icu stay, and paired assessments were done less than 4 hours apart. the study enrolled 50 patients (19 male, 31 female), and 159 daily assessments were performed. mean age of the patients was 63.5 and mean saps score was 23. admitting diagnoses were ich (8), sah (10), ischemic stroke (5), tumor (9), spinal surgery (4), neurological infections (4), seizures (3), elective angiograms (4), hydrocephalus (1), transverse myelitis (1) and av dural fistula (1). using dsm-5 criteria, the reference rater identified delirium in 8 of 50 (16%) patients during the icu stay. 27 of 159 assessments were positive for delirium according to dsm-5 and 29 according to cam-icu. the cam-icu flow sheet had a sensitivity of 96.43% (95% ci 81.65%-99.91%) and a specificity of 98.51% (95% ci 94.71%-99.82%).
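the sensitivity and specificity figures above can be reproduced from a 2x2 table of paired assessments, with exact (clopper-pearson) confidence intervals. the cell counts below are hypothetical reconstructions (27/28 and 132/134) chosen to be consistent with the reported point estimates and intervals; the abstract does not print the full table.

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """exact (clopper-pearson) confidence interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# hypothetical 2x2 counts consistent with the reported figures
tp, fn = 27, 1    # cam-icu result among reference-positive assessments
tn, fp = 132, 2   # cam-icu result among reference-negative assessments

sens, spec = tp / (tp + fn), tn / (tn + fp)
print(f"sensitivity {sens:.2%}, 95% ci {clopper_pearson(tp, tp + fn)}")
print(f"specificity {spec:.2%}, 95% ci {clopper_pearson(tn, tn + fp)}")
```

with these counts the sensitivity works out to 96.43% and the specificity to 98.51%, matching the reported values.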
cam-icu has high sensitivity and specificity for diagnosing delirium in the critically ill neurological population and is a valid tool for diagnosing delirium.

a value stream mapping (vsm) event for general neurology inpatients revealed multiple barriers related to videofluoroscopy swallow studies: a high volume of patients requiring instrumental swallow assessments, a limited number of radiology appointments, and transportation delays, all of which were delaying feeding plans, discharge recommendations and goals-of-care discussions. an operations engineer involved in the vsm event started the process by collecting observational data regarding timing. after a meeting of the chief operating officer, director of patient transport, director of radiology, speech pathology manager, neuro intensive care unit manager and the operations engineer, a pilot program was agreed upon. the three-week pilot program was successful and resulted in a permanent change in procedure. the pilot data showed a decrease in test time of 5 minutes, a decrease in transport delays of 13 minutes, and a decrease in length of stay of 0.6 days. the number of patients waiting for the study dropped from 1.6 to 0.3 per week. annualized, the change has created 238 new available bed days, 38 additional patient encounters and an incremental annual contribution margin of $381,425. with a consistent appointment time, the nurse is able to plan patient care around the study and ensure the patient is prepared and not delayed. it has also allowed medications to be changed from the intravenous to the oral route earlier when swallowing is deemed safe, and earlier determination of safe feeding and diet restrictions.

we previously reported outcomes for children with refractory and super-refractory status epilepticus in a cohort of 40 patients. mortality was 30%, and 25% of survivors required new tracheostomy and/or gastrostomy tubes.
the majority of surviving patients experienced some degree of disability at discharge as determined by the pediatric cerebral performance category (pcpc) scale. here, we aimed to identify patient factors in this cohort that were associated with a decline in functional neurologic outcome at discharge. we performed a retrospective chart review of children aged 0-19 years who received pentobarbital infusion for status epilepticus in the pediatric intensive care unit of a large tertiary children's hospital from 2004-2015. outcome was defined using pcpc at admission and discharge. potential factors associated with outcome were evaluated using fisher's exact test and the wilcoxon rank sum test. 40 children were included. pcpc score at admission (p=0.0006), etiology of status epilepticus (p=0.015), new tracheostomy (p=0.012), and new gastrostomy tube (p=0.012) were all significantly associated with a decline in outcome. children whose outcome declined were more likely to have normal baseline neurologic function and more likely to have febrile encephalitis, stroke/trauma, or hypoxic ischemic encephalopathy as the etiology of status epilepticus. duration of pentobarbital infusion (median 8 days vs. 3 days; p=0.005) and duration of hospital admission (median 1.48 months vs. 0.49 months; p=0.005) were both longer in patients whose outcome declined. in summary, admission pcpc score, etiology of status epilepticus, new tracheostomy and gastrostomy tube, as well as longer duration of pentobarbital infusion and longer hospital stay, were significantly associated with a decline in functional neurologic outcome at hospital discharge in children with refractory and super-refractory status epilepticus.

status epilepticus (se) is the most common pediatric neurological emergency, and super-refractory se (srse) is a life-threatening form of se that continues or recurs for more than 24 hours despite multiple therapeutic interventions. this population-based study investigated pediatric se and srse admissions in germany.
pediatric (age 0-18 years) admissions between 2008-2015 were identified in the arvato health analytics database. se, epilepsy, and febrile seizure cases were identified using a modification of a previously published algorithm based on icd-10 diagnosis codes (g40, g41, and r56) and coding for ventilator and intensive care unit use. based on primary diagnosis, prior epilepsy status, and ventilation, se was subclassified as non-refractory, refractory (rse), and super-refractory (srse). inpatient mortality, costs, length of stay (los), and discharge disposition were assessed overall and for rse and srse. the algorithm identified 11,693 seizure-related admissions and classified 4% as se, of which 22.9% were rse and 13.1% were srse. the rse frequency was highest among ages 1-6. the incidence of cases classified as srse peaked among newborns (age <1 year), decreasing between ages 1-7 years. cases classified as se accounted for 23.0% of total costs associated with seizure-related hospitalizations. srse exhibited the highest per-case cost (mean €77,316), amounting to 58.0% of all se costs, and these costs correlated with the highest los (median 44.5 days). srse was also associated with greater mortality (11.7%). cases classified as srse accounted for 14.2% of all pediatric seizure-related costs, despite representing only 0.5% of admissions. srse was associated with the highest los and mortality rate. these results highlight the burden of illness associated with srse and suggest that optimization of srse management has the potential to improve outcomes and reduce costs.

despite its more routine use and the recognition that mri provides superior detection of traumatic brain injuries (tbi), little has been written about how mri might affect the acute management of trauma patients. we sought to describe mri findings in a cohort of children admitted to the picu with tbi and to extend comparisons between ct and mri in acute trauma.
a secondary aim was to quantify in what ways mri findings influenced clinical management in this cohort. we retrospectively identified 101 patients admitted to the picu with an acute head injury between september 2010 and may 2016 who underwent head mri within the first 96 hours. we compared mri with ct findings using the nih common data elements definitions of injury type. we determined by chart review the indication for mri and whether there was documentation that mri led to a change in management, defined as either an escalation or a de-escalation of care. seven patients had mri only, and mri identified additional lesions in 60 of the 94 patients who had first undergone head ct. of these, 49 patients had new intra-parenchymal lesions, 22 had new extra-axial lesions, and 11 had both a new intra-parenchymal and a new extra-axial lesion identified. the most frequent new lesions were contusions and traumatic or diffuse axonal injury. acute management was influenced by mri in a majority of patients, leading to an escalation of medical or surgical management in nearly one third and a de-escalation of care in half. early mri may have a beneficial role in the acute management of pediatric traumatic brain injury: it frequently identified clinically important lesions not appreciated on ct, and its findings influenced management decisions. future studies will assess whether early mri improves patient outcomes or provides cost/benefit by reducing length of stay.

while adverse outcomes of decompressive hemicraniectomy (dh) including infection, disturbances of the csf compartment, and sunken flap syndrome are well documented, there is a dearth of literature assessing outcomes related to the timing of cranioplasty.
we identified 143 patients who received dh, 60 of whom underwent reconstructive cranioplasty at our institution. the post-cranioplasty complication rate was 20%, attributable in part to hemorrhage, infectious complications, or csf compartment disturbances. patients receiving early cranioplasty had an increased rate of hemorrhagic complications (19% vs 0%; p = 0.01), increased median hospital length of stay (los) (10 vs 4 days; p = 0.002) and increased median icu los (3 vs 1 days; p = 0.01). among patients who received dh for malignant cerebral edema from an acute ischemic stroke, total complication rates trended lower for early compared to late cranioplasty (8% vs 18%; p = 0.68). thus, patients receiving dh for any cause who underwent early reconstructive cranioplasty experienced higher rates of hemorrhagic complications and increased hospital and icu los; however, among those receiving dh for the specific indication of malignant cerebral edema from acute ischemic stroke, significant differences did not exist between the early and late cranioplasty groups, and total complication rates trended lower in the early group. another important and largely unpublished finding is that a majority of dh patients are lost to surgical follow-up, which may affect the reported complication rate of this not-so-benign surgery.

postoperative antibiotics (pa) are often administered to patients after instrumented spinal surgery until all drains are removed, to prevent surgical site infections (ssi). this practice is discouraged by numerous medical society guidelines, so our institutional neurosurgery quality improvement committee decided to discontinue the use of pa for this population.
we retrospectively reviewed data for patients who had instrumented spinal surgery at our institution for seven months before and after this policy change and compared the frequency of ssi and the development of antibiotic-related complications in patients who received pa to those who did not (non-pa). we identified 188 pa patients and 158 non-pa patients. discontinuation of pa did not result in an increase in the frequency of ssi (2% of pa patients vs. 0.6% of non-pa patients, p=0.4). growth of resistant bacteria was not significantly reduced in the non-pa period compared to the pa period (2% vs. 1%, p=1). the cost of antibiotics for pa patients was $5,499.62, whereas the cost of antibiotics for the non-pa patients was $0. on a per-patient basis, the cost associated with antibiotics and resistant infections was significantly greater for patients who received pa than for those who did not (median $26.32, iqr $9.87-$46.06, vs. median $0, iqr $0-$0; p<0.0001). after discontinuing pa for patients who had instrumented spinal procedures, we did not observe an increase in the frequency of ssi. we did, however, note a non-significant decrease in the frequency of growth of resistant organisms. these findings suggest that patients in this population do not need pa and that complications can be reduced if pa are withheld.

the development of flow-diverting stents has allowed new treatment options for giant vertebrobasilar aneurysms. however, the expertise required to perform these procedures safely and concerns about complications continue to limit their use. we sought to identify common complications of this treatment that can be anticipated by neurointensivists, to optimize management in the postoperative period. we retrospectively reviewed our hospital database of treated aneurysms to identify those with giant vertebrobasilar aneurysms. medical and neurological complications were recorded.
six patients (5 male, 1 female) underwent treatment of giant vertebrobasilar aneurysms with pipeline embolization devices; five received adjunctive coiling. frequently reported pre-procedure symptoms were dysphagia (n=4), diplopia (n=3), dysarthria (n=3), facial weakness (n=3), hemiparesis (n=2), gaze palsy (n=2), and nystagmus (n=2). five patients ambulated normally. due to concerns about performing necessary procedures after stenting while on antiplatelet therapy, three patients received prophylactic ventriculoperitoneal shunts, two underwent gastrostomy, and two underwent tracheostomy. angiography confirmed successful aneurysm embolization in all patients. postoperatively, all patients developed new or worsened symptoms attributed to brainstem edema, including hemiparesis (n=4), facial weakness (n=4), dysphagia (n=4), diplopia (n=4), nystagmus (n=3), gaze palsy (n=3), and dysarthria (n=3). neurological symptoms were treated with steroids, with most symptoms subsiding by discharge. five patients had medical complications, including pneumonia (n=2), respiratory failure (n=2), gastrointestinal bleeding (n=2), arrhythmia (n=2), urinary tract infection (n=1), and myocardial infarction (n=1). two patients were re-intubated, three underwent gastrostomy, and one underwent tracheostomy. functional status at 3 months was available for five patients: three achieved modified rankin scale scores of 0-2, one regressed to a score of 5, and one died. the treatment of giant vertebrobasilar aneurysms presents significant challenges. practitioners should anticipate temporary postoperative neurological worsening and various medical complications. prophylactic shunt placement, gastrostomy, and/or tracheostomy should be considered in patients likely to need these procedures after treatment.

ventriculostomy-related infection (vri) remains a major complication of external ventricular drain (evd) placement.
historically, prophylactic antimicrobials have been utilized to decrease the incidence of vri after evd placement. recent guidelines for the insertion and management of evds recommend a single preoperative dose prior to evd insertion and urge against continuing antibiotic prophylaxis for the duration of evd placement. prior to the publication of this guideline, we hypothesized that significant variations existed among institutions with respect to antibiotic prophylaxis practices in this setting. the purpose of this practice survey was to determine trends in antimicrobial prophylactic strategies utilized by various healthcare institutions for evd placement prior to publication of the 2016 neurocritical care society (ncs) evidence-based guidelines for the insertion and management of evds. a seven-question practice survey on antimicrobial prophylaxis for evd placement was distributed to active pharmacist members of the ncs by email and was open for response from 12/22/2015 to 1/25/2016. the following information was collected: antimicrobial prophylaxis regimen utilized, pharmacologic class, utilization of impregnated catheters, and institutional guidance. survey results were analyzed for trends in antimicrobial prophylaxis in the setting of evd placement. respondents (43/105, 41% response rate) from 43 institutions completed the survey. most institutions initiate a single dose of antibiotics prior to evd insertion (31/43, 72%), and periprocedural antimicrobial therapy was the most common prophylactic strategy (18/43, 42%). of respondents who do not continue antimicrobial prophylaxis for the duration of evd placement, 59% (10/17) utilize antimicrobial-impregnated catheters to reduce the incidence of vri. the importance of antimicrobial prophylaxis to prevent infectious complications associated with evd placement is widely accepted, but prophylactic strategies vary between institutions.
periprocedural antimicrobial therapy is the most common prophylactic strategy utilized by survey respondents, and antimicrobial-impregnated catheters are commonly utilized in institutions using periprocedural antimicrobial prophylaxis.

the postoperative course of critically ill neurosurgical patients is known to vary depending on the timing of the surgical procedure. this study compares the clinical characteristics, complications, and outcomes between elective and urgent surgery patients admitted to the intensive care unit (icu). we performed a retrospective review of a two-year cohort of neurosurgical patients. pre- and postoperative conditions and outcomes were compared between elective (group a) and emergency (group b) surgery patients. a total of 416 patients were evaluated, 262 in group a and 154 in group b. the most common diagnosis was intracranial tumor. the mean american society of anesthesiology (asa) score was significantly higher in group b than in group a (3.35 vs. 2.79, p<0.05), as was the mean sequential organ failure assessment (sofa) score on admission (2.47 vs. 0.83, p<0.001). group b patients were more likely to require mechanical ventilation (or 14.75, p<0.001) and vasopressors (or 5.76, p<0.001), and had a higher probability of rebleeding (or 5.29, p<0.001), intracranial hypertension (or 11.9, p<0.001), hydrocephalus (or 6.45, p<0.001), and reintervention (or 1.91, p=0.043). post-operative nausea and vomiting were less likely in group b (1.9% vs. 9.9% and 0% vs. 3.4%, respectively). mean hospital and icu los were shorter in group a than in group b (9.1 vs. 15.4 and 2.8 vs. 5.7 days, respectively; p<0.001). mortality during the icu stay was higher in group b (9.74% vs. 1.52%; or 6.96, p<0.001). the preoperative glasgow coma scale (gcs) score was below 8 in only a minority of the patients who died (14.28% in group b; 0% in group a).
in this cohort of neurosurgical patients, emergency operations, compared to elective operations, were associated with higher post-operative complication and mortality rates. emergency surgery was also associated with a higher severity of illness as measured by the sofa and asa scores.

intraprocedure rupture (ipr) is a rare but potentially serious complication of endovascular coiling of intracranial aneurysms. potential complications include hemorrhage, ischemic stroke, vasospasm and hydrocephalus, which can lead to increased morbidity and mortality. the clinical course for these patients is not well studied or characterized. we performed a retrospective review of prospectively collected data for all unruptured aneurysms treated with endovascular coil embolization between july 2003 and march 2017 at a large university-based hospital. of 998 coil embolizations of unruptured aneurysms, 10 patients (1.0%) had ipr. we reviewed baseline data, procedure notes, clinical course, and outcomes at discharge and at 3, 6 and 12 months. among the ten patients, the locations of the aneurysms were: 4 basilar apex, 3 internal carotid artery, 1 anterior communicating artery, 1 posterior cerebral artery, and 1 posterior communicating artery. patients were monitored in the icu for variable lengths of time, and daily transcranial doppler ultrasound detected no significant sonographic vasospasm. the large majority of patients (8/10) were discharged home at their baseline functional status as assessed by the modified rankin scale. one patient was discharged to inpatient rehabilitation for cognitive deficits from ipr of a basilar apex aneurysm and was subsequently discharged home with supervision. there was a single mortality, in a patient undergoing retreatment of a proximal ica aneurysm with prior stenting and coil embolization who developed massive subarachnoid hemorrhage with diffuse intraventricular hemorrhage requiring external ventricular drain placement.
the incidence of ipr is very low, and potentially serious complications occur rarely in these patients. the location of and factors associated with ipr are highly variable and without clear associations. outcomes of such complications are overall favorable. a short observation period in the hospital is likely warranted, with a benign clinical course the most likely outcome.

the standard treatment of cerebral venous-sinus thrombosis (cvst) is anticoagulation. however, some patients clinically deteriorate secondary to mass effect from infarct or intracerebral hemorrhage (ich). the role of decompressive craniectomy (dc) in this patient population is unknown. we describe the baseline characteristics of patients treated with dc and report their outcomes. a retrospective chart review of our institutional database identified patients with cvst who were treated with dc, and demographic and clinical data were collected. imaging variables collected from the head ct or brain mri obtained immediately before dc were intracerebral hemorrhage volume (ich-v), combined volume of mass effect from infarct/ich and peri-lesional edema (me-v), midline shift at the level of the pineal gland (mds-p), midline shift at the cranial-most portion of the corpus callosum (mds-cc), and herniation type. favorable outcome was defined as a glasgow outcome scale score of 4-5 at last known follow-up. a total of 15 patients (10 female) treated with dc were identified, with mean age 46.7 (+/-17.4), mean glasgow coma scale (gcs) score before surgery 8 (+/-3.9), mean ich-v 42.2 ml (+/-44.0), mean me-v 100.7 ml (+/-51.6), mean mds-p 6.1 mm (+/-3.9), and mean mds-cc 7.4 mm (+/-6.5). the transverse sinus was most commonly involved (n=10). 14/15 patients had herniation of some type, most commonly cingulate (n=10). the mean change in gcs from admission to before surgery was -4.8 (+/-4.2). ten patients were anticoagulated before surgery. at last known follow-up, 9/15 patients had a favorable outcome; four had died.
on chi-square analysis, superior sagittal sinus thrombosis was associated with unfavorable outcomes (p=0.025) and mortality (p=0.039). on univariate binary logistic regression, there was a non-significant trend toward unfavorable outcomes (p=0.082) and mortality (p=0.079) with every point decrease in mean gcs before surgery. the predictive value of other factors is unknown given the limited sample size. decompressive craniectomy might improve outcomes even in patients with cvst who have developed coma or cerebral herniation, have failed treatment with anticoagulation, and have large-volume mass lesions causing midline shifts of >5 mm. a prospective multi-institutional observational cohort would better delineate outcomes in comparison to matched patients who are not treated with decompressive craniectomy.

meningiomas are often benign and mostly asymptomatic, and treatment approaches may include open surgical resection, radiosurgery, and/or watchful waiting. reported morbidity and mortality rates for elderly patients undergoing meningioma resection vary widely. we sought to investigate mortality rates for elderly patients undergoing craniotomy for meningioma resection using the nationwide inpatient sample (nis). the nis datasets from 2003 to 2013 were used to identify patient admissions for meningioma resection based on the icd-9-cm code 01.51. age categories were defined as ≤70 and >70 years of age. primary outcomes were in-hospital mortality, poor outcome (defined as death or discharge to a facility other than home), cost and length of hospitalization. a total of 24,953 patients who underwent meningioma resection during 2003-2013 were identified, of whom 20.4% were elderly (>70 years). each of the primary outcomes was heavily influenced by advancing age. in-hospital mortality was higher in the elderly than in younger patients (3.5% vs 1%, p<0.001), as was the rate of a poor outcome (64.8% vs 28.1%, p<0.001).
elderly patients also had higher costs ($104,425 vs $96,012, p=0.013) and increased length of hospitalization (8.9 vs 6.8 days, p<0.001). in our study, age >70 was strongly associated with adverse outcomes after meningioma resection. this increased risk should be taken into account when considering surgical intervention in this subgroup, and closer perioperative monitoring may be warranted in these patients.

treatment with anticoagulation improves outcomes in cerebral venous-sinus thrombosis (cvst). however, patients who develop extensive infarcts and/or intracerebral hemorrhage with mass effect resulting in a comatose state are at risk of poor outcomes and may benefit from decompressive craniectomy (dc). we evaluated the role of dc in the management of malignant cvst and its impact on outcomes. a literature search was conducted on pubmed and google scholar using the terms "craniectomy" and "cerebral venous-sinus thrombosis". we included studies that described any number of patients with cvst who underwent dc after clinical deterioration and reported their outcomes; a similar search strategy identified patients from our institute. outcomes were reported as modified rankin scale (mrs) or glasgow outcome scale (gos) scores and were classified as favorable (mrs 0-3; gos 4-5) or unfavorable (mrs 4-6; gos 1-3). a total of 296 patients (females=164; males=74; unknown=58) who underwent dc for malignant cvst were identified from 29 studies (n=281) and our institute (n=15). age and gcs (before surgery) were available for only 125 patients, with mean age 36.74 (+/-13.52) and mean gcs 7.58 (+/-3.03). 199 patients (67.2%) had favorable outcomes, while 63 patients (21.3%) died. in the multivariate binary logistic regression model, every point drop in gcs was associated with lower odds of a favorable outcome (or=0.57; p<0.001; 95% ci 0.423-0.775) and of survival (or=0.58; p=0.002; 95% ci 0.414-0.821).
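per-point odds ratios like those reported above come from exponentiating logistic-regression coefficients. a minimal numpy sketch on simulated data (all values synthetic; the assumed coefficient of 0.55 per gcs point is chosen so the fitted odds ratio lands near 1/0.57 ≈ 1.75, the reciprocal of the reported odds ratio for a one-point drop):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic sample sized like the pooled analysis (125 patients with known gcs)
n = 125
gcs = rng.integers(3, 16, size=n).astype(float)
p_true = 1 / (1 + np.exp(-(-3.0 + 0.55 * gcs)))    # assumed true model
y = (rng.random(n) < p_true).astype(float)          # favorable-outcome flag

X = np.column_stack([np.ones(n), gcs])              # design matrix with intercept

# newton-raphson fit of the logistic model
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])       # fisher information
    beta += np.linalg.solve(hess, grad)

# wald 95% ci: exponentiate the coefficient +/- 1.96 standard errors
se = np.sqrt(np.diag(np.linalg.inv(hess)))[1]
or_per_point = np.exp(beta[1])
ci_lo, ci_hi = np.exp(beta[1] - 1.96 * se), np.exp(beta[1] + 1.96 * se)
print(f"or per gcs point: {or_per_point:.2f} (95% ci {ci_lo:.2f}-{ci_hi:.2f})")
```

a confidence interval that excludes 1 (as the reported 0.423-0.775 does on the reciprocal scale) indicates a statistically significant association between gcs and outcome.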
thrombosis of the internal jugular vein (ijv) (or=10.39; 95% ci 1.36-79.30; p=0.024) and of the deep cerebral veins (dcv) (or=8.94; 95% ci 1.33-60.17; p=0.024) predicted unfavorable outcomes. ijv thrombosis (or=10.67; 95% ci 1.07-106.86; p=0.044) and dcv thrombosis (or=13.5; 95% ci 1.57-115.82; p=0.018) also predicted mortality. interestingly, cortical vein thrombosis was associated with lower odds of unfavorable outcomes (or=0.217; 95% ci 0.051-0.921; p=0.038). data regarding anticoagulation and long-term follow-up were not uniformly available. for patients with malignant cvst, craniectomy could potentially improve outcomes, and factors such as gcs before surgery and cvst location can help predict outcome following dc and aid the decision-making process. a multi-institutional observational cohort should be designed to prospectively evaluate predictors for, timing of, and outcomes after craniectomy in cvst.

the external ventricular drain (evd) is commonly used in the neurocritical care unit to monitor intracranial pressure (icp), with the added advantage of therapeutically treating elevated icp by diverting cerebrospinal fluid (csf). placement of an evd can be complicated by hemorrhage surrounding the catheter insertion tract, which in some cases may prove fatal. this retrospective study was designed to examine the rate of tract hemorrhages after evd placements performed at our institution, as well as associated outcomes. we conducted a retrospective review of all patients who underwent evd placement during a 3-year period using our institutional database. post-insertion computerized tomography (ct) scans of the head were analyzed independently by 2 physicians to identify tract hemorrhages. data on primary diagnosis, age, sex, length of icu stay and mortality were collected and analyzed. a total of 115 patients were identified as having had an evd placed during their hospital course.
15 patients were excluded as there were no images of evds present in their records. 100 patients were analyzed, of whom 43% were male. mean age was 59.4 years. 50% of patients had a diagnosis of subarachnoid hemorrhage, 45% intraparenchymal hemorrhage and 16% ischemic stroke. mortality was 15% among all evd patients. the rate of tract hemorrhages among all patients with evd images was 21%. asymptomatic tract hemorrhages occurred in 19 patients (95.23%), with 1 patient (4.77%) dying due to the tract hemorrhage itself. among patients with tract hemorrhages, mortality was 14.3%. the rate of tract hemorrhages was noted to be 21%, with the majority being asymptomatic. there was no difference in mortality between patients with evds who developed tract hemorrhages and patients with no tract hemorrhages. verapamil is a phenylalkylamine calcium channel blocker that blocks calcium ion influx through slow channels into conductile and contractile myocardial cells and vascular smooth muscle cells, resulting in vascular relaxation and vasodilatation. symptomatic hypotension and/or extreme bradycardia/asystole are often seen with intravenous verapamil administration, requiring pharmacologic treatment. in the neuroendovascular field, verapamil is mainly used as a vasodilator agent. the current lack of pharmacokinetic/pharmacodynamic data on intra-arterial verapamil often makes it very challenging for neurointerventionalists during endovascular procedures. the purpose of this study is to observe the acute hemodynamic effects of intra-arterial verapamil administration as well as the safety of higher doses of the medication during endovascular treatment. ten patients who underwent endovascular treatment for acute ischemic stroke were evaluated pre- and post-procedure with vital signs. the dosage of intra-arterial verapamil was documented and tabulated along with the pre and post heart rate and systolic blood pressure.
the dose of intra-arterial verapamil varied from 1 to 5 mg in each internal carotid or vertebral artery; the total dose per patient per procedure varied from 2.5 to 10 mg. the average dose of intra-arterial verapamil administered was 4.95 ± 2.0 mg or 59.7 ± 30.8 mcg/kg, infused over 5 to 10 minutes. at baseline, before administration of intra-arterial verapamil, the mean systolic blood pressure (sbp) was 157.8 ± 24.4 mm hg and the mean heart rate (hr) was 71.6 ± 16.2 bpm. after administration of intra-arterial verapamil, sbp decreased by a mean of 16.7 ± 0.8 mm hg, but we observed no symptomatic hypotension requiring pharmacologic treatment. hr changed only by a mean of 0.1 ± 5.4 bpm post intra-arterial verapamil. we observed no acute significant changes in hemodynamic parameters with administration of verapamil in the carotid or vertebral arteries. this may support its safe use during neuroendovascular therapy. growing evidence suggests inflammation is critical in epileptogenesis. endogenous brain apolipoprotein e protein (apoe) modulates neuroinflammatory responses to injury through downregulation of glial activation and secondary neuronal injury. we created a 5 amino acid peptide (cn-105) mimicking the binding face of apoe. cn-105 downregulates the inflammatory response in vitro and in vivo and improved histologic and clinical outcomes across several injury models in mice. we hypothesized that downregulation of inflammation by administration of cn-105 would reduce the development of epilepsy after pilocarpine-induced status epilepticus in mice. c57bl/6 mice were intraperitoneally injected with pilocarpine to induce status epilepticus. following induction of status, animals were randomized to receive two doses of cn-105 or vehicle at 45 minutes and 6 hours. status was terminated by injection of benzodiazepine at 55 minutes.
epidural eeg leads were surgically placed at 3 weeks, and continuous video-eeg (cveeg) monitoring was performed for several consecutive days at 4-6 weeks post status to determine spontaneous seizure development and frequency. at 4-6 weeks following induction of status epilepticus, administration of 0.2 or 0.5mg/kg cn-105 reduced the development of epilepsy by approximately 50% compared to vehicle-treated animals. further, cn-105-treated animals that did develop seizures had significantly fewer seizures than vehicle mice. similar results were seen with 7 daily doses of 1mg/kg starting at 45 minutes. importantly, cn-105 is not an anticonvulsant, as cveeg monitoring during status induction clearly demonstrated that seizures were not stopped or reduced by injection of cn-105. these results are consistent with the hypothesis that inflammation plays an important role in the development of epilepsy after injury and demonstrate that treatments targeting inflammation, like cn-105, can prevent and/or reduce the development of epilepsy. this represents the first therapy to prevent the development of epilepsy that has entered into clinical trials. to determine the speed of brain entry of the antiepileptic drugs (aeds) brivaracetam (brv) and levetiracetam (lev) after single intravenous dosing in humans. brv and lev both bind to synaptic vesicle protein 2a (sv2a), but brv has more rapid brain entry than lev in mice and monkeys [1]. sv2a can be quantified in the living human brain using pet imaging with [11c]ucb-j [2]. pet scans (n=13) were performed with [11c]ucb-j administered by a bolus-infusion protocol in healthy volunteers (n=9). therapeutic dosages of brv (50mg, n=1; 100mg, n=4; or 200mg, n=2) or lev (1500mg, n=6) were administered as 5-minute intravenous infusions 60 minutes after the start of the first pet scan.
tracer displacement half-times were determined by subtracting the radioligand clearance half-time from the radioligand displacement half-times estimated by exponential fitting of the post-aed drop in distribution volumes (vt). data were also analyzed using an advanced mathematical model that described the relationship between brain [11c]ucb-j pet data and time-varying aed plasma curves to directly estimate brain entry (k1) of both aeds and [11c]ucb-j, the free fraction of [11c]ucb-j in the brain, and vt values. the radioligand clearance half-time was 8 minutes. tracer displacement half-times were 1.7 and 2.1 minutes for brv 200mg, and 20 ± 6 minutes for lev 1500mg. lower brv doses had longer half-times, but values were misleading as they assumed 100% sv2a occupancy. the advanced compartment model described the data well across dose scans. using the advanced model, the brv uptake rate (~50 ul/min/cm3) was found to be at least 8-fold higher than that of lev (~6 ul/min/cm3). the results demonstrate that brv enters the human brain faster than lev. the potential therapeutic benefit of this has yet to be determined. while intravenous anesthetic therapy (ivat) represents the gold standard for treatment of refractory status epilepticus (rse), the optimal depth and duration of therapy is not known. the goal of this retrospective observational study was to describe the relationship between the depth of burst suppression and the ability to successfully wean ivat during rse treatment. fifty patients were identified with rse who underwent continuous electroencephalography. using persyst, the suppression ratio (sr) was calculated up to 72 hours prior to weaning ivat. the type and duration of ivat was recorded, as well as complications. we compared these variables between successful and unsuccessful weans. the mean sr for all patients was 19.0±26.3%, with a mean treatment duration of 63.3±42.1 hours.
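the suppression ratio used above is, conceptually, the percentage of the eeg record that is suppressed (isoelectric) over a given window. a minimal sketch under the assumption that each epoch has already been flagged suppressed or not (the detector used by commercial qeeg software such as persyst is proprietary, so the flags here are simply given):

```python
def suppression_ratio(suppressed_flags):
    """percent of eeg epochs flagged as suppressed; a crude stand-in for
    the suppression ratio trended by commercial qeeg software."""
    if not suppressed_flags:
        raise ValueError("no epochs provided")
    return 100.0 * sum(suppressed_flags) / len(suppressed_flags)

# e.g. 3 suppressed epochs out of 10 gives an sr of 30%
sr = suppression_ratio([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
```

averaging this quantity over the 72 hours preceding a wean, as the study describes, would then yield the mean sr values compared between successful and unsuccessful weans.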
there was no difference in treatment duration between successful and unsuccessful weans (p=0.54), but sr was significantly lower in successful weans (10.0±15.2% vs 27.9±33.4%, p=0.01). the receiver operating characteristic (roc) curve for the sensitivity and specificity of the mean sr to predict a successful weaning attempt did not identify a threshold predictive of weaning success. the use of pentobarbital was associated with a significantly higher sr compared to midazolam (69.1±18.9% vs 6.9±8.2%, p<0.001). patients failed ivat weaning a mean of 24.5±22.6 hours after initiating the ivat wean, which occurred after a mean decrease in the midazolam infusion rate of 68 ± 33%. depth of sr was not associated with infection risk (p=0.74), but was associated with the need for tracheostomy (27.6±32.7% versus 10.2±17.2%, p=0.01). vasopressors were required in 87.5% of patients while on ivat. unsuccessful weaning of ivat was associated with a greater depth of sr, which is likely a marker of disease severity. depth of sedation was not associated with increased risk of infection, but was associated with the need for tracheostomy. vasopressor requirements are common. the primary objective of this study was to determine the sensitivity and specificity of real-time neuro icu nurse interpretation of quantitative eeg (qeeg) trends in the identification of recurrent nonconvulsive electrographic seizures in adult patients admitted to the neuro icu. thirteen adult patients admitted to the neuro icu who had nonconvulsive seizures on continuous eeg (ceeg) monitoring were included in the study. neuro icu nurses consented to participate and underwent a brief, standardized qeeg training session. a 1-hour qeeg panel (rhythmicity spectrogram, left/right and amplitude-integrated eeg, left/right) printout containing the marked sentinel seizure(s) was displayed next to the bedside ceeg/qeeg monitor.
at one-hour intervals, the nurses logged the number of seizures seen in the past hour based on their qeeg interpretation for the duration of their shift. their answers were compared with the gold standard of neurophysiologist interpretation of seizure occurrence on raw eeg. a total of 120 hours of qeeg data were reviewed for 13 patients. the average length of data collection was 9.5 hours. for the neuro icu nurses' ability to detect the presence of seizures on real-time qeeg, the sensitivity was 70.0% (95% ci, 34.8-93.3%) and the specificity was 90.0% (95% ci, 82.8-94.9%). the positive predictive value for seizure detection was 38.9% (95% ci, 24.2-56.0%) and the negative predictive value was 97.1% (95% ci, 92.7-98.8%). the false-positive rate was 0.075/hr. a simplified panel of qeeg trends can be used by neuro icu nurses to screen for recurrent electrographic seizures in critically ill patients with reasonable sensitivity, excellent specificity and a very low false-positive rate. this may facilitate earlier identification of recurrent electrographic seizures by notifying the neurophysiologist, who is not present in the icu and not able to perform real-time ceeg interpretation. nonconvulsive status epilepticus (ncse) is an indicator of poor outcomes in neurocritical care settings. however, because of unfamiliarity with continuous electroencephalography monitoring (ceeg), the diagnosis and treatment of ncse remain challenging, and its clinical impact and prognostic factors have not been sufficiently reported in japan. we performed ceeg for 188 adult patients in our neurocritical care unit with coma or unexplained altered mental status from april 2013 to september 2015.
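the sensitivity, specificity, ppv and npv reported for the nurse qeeg study above all follow from a standard 2x2 confusion table. a sketch with hypothetical hourly counts, chosen only so the resulting metrics fall in the reported range (they are not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """standard diagnostic-accuracy metrics from a 2x2 confusion table
    of true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# hypothetical counts: 7 true-positive, 3 false-negative,
# 11 false-positive, and 99 true-negative hours
m = diagnostic_metrics(tp=7, fp=11, fn=3, tn=99)
```

note that ppv and npv, unlike sensitivity and specificity, depend on how often seizures actually occur in the monitored hours, which is why a test with 70% sensitivity and 90% specificity can still have a ppv below 40%.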
we reviewed all ceeg records according to the american clinical neurophysiology society's terminology (2012 version), and diagnosed patients with ncse when the ceeg revealed spatiotemporally evolving or fluctuating periodic or rhythmic discharges, after considering clinical information based on the modified salzburg consensus criteria. patients with ncse were aggressively treated with benzodiazepines, fosphenytoin, and levetiracetam. they were divided into a generalized convulsive status epilepticus (gcse) group and a non-gcse group. we compared mortality and outcomes between the two groups after 3 months using fisher's exact test. outcomes were defined as poor when the glasgow outcome scale score was worse at the 3-month follow-up than at admission. we excluded 15 cases undergoing supportive care or lacking follow-up. of the 173 cases in the study, 61 were diagnosed with ncse, including 18 cases with accompanying gcse and 43 without. mortality rates at the 3-month follow-up were significantly higher in the non-gcse group than the gcse group (19% vs. 0%, respectively; p = 0.0492). the rate of poor outcomes was significantly higher in the non-gcse group than in the gcse group (26% vs. 0%, respectively; p = 0.0138). this study suggests that the absence of gcse is associated with increased mortality and poor outcomes among ncse patients. limitations of this study include its retrospective design and the small number of ncse patients. further studies are necessary to identify additional prognostic factors. super-refractory status epilepticus (srse) is a life-threatening condition in which status epilepticus recurs or continues for over 24 hours despite first-, second-, and anesthetic third-line agent (tla) medications. no treatments are currently approved for srse.
a randomized, double-blind, multi-center, placebo-controlled phase 3 trial evaluated brexanolone (usan; formerly sage-547 injection), a synaptic and extrasynaptic gabaa receptor positive allosteric modulator, as adjunctive therapy for srse (nct02477618; "status trial"). enrolled subjects underwent a qualifying tla wean after at least 24 hours of seizure- or burst-suppression. srse subjects failing the qualifying wean were randomized 1:1 to a blinded infusion of brexanolone or placebo as adjunctive therapy following resumption of one or more tla infusions. subjects were administered the blinded infusion for 6 days, during which attempts were made to wean off tla infusions. clinical standardization guidelines (csgs) facilitated standardization across sites by outlining eeg patterns for which tla weaning should be continued, paused, or discontinued. an on-call clinical standardization team provided real-time support. safety was assessed via adverse events, laboratory testing, vital signs, and ecg parameters. the primary endpoint was defined as successful weaning off tla infusions. super-refractory status epilepticus (srse) is a life-threatening neurological condition characterized by status epilepticus persisting over 24 hours despite treatment with first-, second-, and third-line agents (tlas) or recurring upon the weaning of tlas. currently, there is no consensus around treatment protocols for srse. this study aims to describe srse treatment patterns and related outcomes in a us population. we retrospectively identified srse cases in cerner healthfacts®, a large, de-identified, us electronic health record database, using records from 2000-2015. cases were classified as srse using a modified version of a previously published algorithm based on icd-9 and procedure coding for status epilepticus (345.00, 345.01, 345.1x, 345.3, 345.41, 345.51, and 345.81), ventilator support, and pharmacotherapies.
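the case-definition logic described above (an icd-9 status epilepticus code plus evidence of ventilator support and pharmacotherapy) can be sketched as a simple record filter. the icd-9 codes are taken from the text; the record field names are hypothetical and do not reflect the cerner healthfacts schema:

```python
# icd-9 codes for status epilepticus listed in the abstract above;
# 345.1x is handled as a prefix match
SE_CODES = {"345.00", "345.01", "345.3", "345.41", "345.51", "345.81"}
SE_PREFIXES = ("345.1",)

def classify_srse(record):
    """flag a record as a possible srse case: a status epilepticus
    diagnosis code plus ventilator support and third-line pharmacotherapy."""
    has_se_code = any(
        code in SE_CODES or code.startswith(SE_PREFIXES)
        for code in record["icd9_codes"]
    )
    return has_se_code and record["ventilated"] and record["received_tla"]
```

applied over every encounter in a claims or ehr extract, a filter of this shape yields the candidate case set on which the descriptive statistics are then computed.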
descriptive and univariate statistics were used to evaluate anesthetic treatment, anti-epileptic medications, and the association between glasgow coma score (gcs) and mortality. using our algorithm, 1153 srse cases (1137 patients) were classified. multiple tlas were received in 89% of cases, and in 37%, >2 concurrent tlas were received. the first post-admission tlas were propofol, lorazepam and midazolam, respectively, in 38%, 38% and 18% of cases. median anesthetic duration was 9.5 days. mortality was higher in cases with gcs 3-8, which also had more days of tla treatment (3.1 vs. 1.6 days; p<0.001). srse patients identified in our analysis underwent variable treatment patterns, reflecting the lack of consensus on choice and duration of tla treatment. nonconvulsive seizures (ncs) and nonconvulsive status epilepticus (ncse) occur in approximately 20% of neurologically critically ill patients. the most effective antiepileptic drug (aed) regimen to treat ncs and ncse is unknown. this study was designed to determine the efficacy of add-on clobazam, a unique 1,5-benzodiazepine with favorable pharmacokinetic properties, in the treatment of ncs and ncse. a retrospective chart review was performed on 17 adult patients who were admitted to the neurological intensive care unit between january 1, 2013 and june 1, 2017, were diagnosed with ncs or ncse by continuous eeg monitoring, and received clobazam as add-on therapy. the primary efficacy endpoint was defined as clobazam being the last aed added before ncs/ncse cessation, regardless of latency between dosing and cessation. of the 17 patients included in this study, 7 (41%) had ncs vs. 10 (59%) with ncse. the most common etiologies were autoimmune (n=6) and cns tumor (n=5), with 7 patients (41%) having pre-existing epilepsy. clobazam was the last aed added before cessation of ncs/ncse in 12 of 17 (71%) subjects. clobazam was chosen as the 3rd to 6th line agent. clobazam was started at a median of 2 days from the onset of ncs/ncse (range 1-55 days).
the median total daily dose of clobazam was 20 mg (range 10-60 mg). this study suggests that clobazam may be effective at various time points in the treatment of ncs/ncse and may prevent the need for addition of intravenous anesthetic drugs to control seizures. however, a prospective study is warranted to determine efficacy and optimal dosing. continuous electroencephalography monitoring (ceegm) with the international 10-20 system is essential for detecting nonconvulsive status epilepticus (ncse). in japan, both ceegm systems and human resources are lacking, and few facilities are able to conduct such advanced monitoring. the ceegm headset described in this report is a novel and easy-to-use technology. we attempted to validate the novel ceegm headset by comparing it with a conventional, international 10-20 ceegm system (conventional ceegm). we completed this study at a single-center, eight-bed neurocritical care unit between january 2017 and june 2017. the new ceegm headset features eight electrodes (f, c, t, o) and is capable of simultaneously transmitting eeg data by bluetooth. patients with disturbed consciousness of unknown etiology underwent ceegm headset monitoring followed by conventional ceegm. we verified the concordance rate of the two systems for detecting eeg morphologies (e.g. periodic discharges, rhythmic delta activity, spikes and waves) and diagnosing ncse. eeg morphologies were assessed according to the "american clinical neurophysiology society's standardized critical care eeg terminology: 2012 version" and diagnosis of ncse was made according to the modified salzburg consensus criteria. during this period, we enrolled thirty patients. three patients were excluded because they did not satisfy the protocol. final analyses included verified data from 27 patients.
the mean age was 66 years (range: 20-86), 59% were male, the mean acute physiology and chronic health evaluation (apache) ii score was 15 (range: 5-27), and the mean full outline of unresponsiveness (four) score was 10 (range: 6-16). we found concordant eeg morphologies and ncse diagnoses in 78% (21/27) and 74% (20/27) of patients, respectively. this easy novel ceegm headset may be useful in settings with limited resources or access to conventional ceegm technology. further study is needed to validate the actual diagnostic ability of this novel headset. the traditional approach to interpreting eeg requires physicians with formal training to visually assess the waveforms. this approach is less practical in critical settings when a trained eeg specialist is not readily available to diagnose subclinical seizures, such as non-convulsive status epilepticus, in patients with altered mental status. we have recently invented an algorithm for sonifying eeg, and in the current study, we explored whether individuals without eeg training can detect ongoing seizures by simply listening to one channel of sonified eeg. we sonified 84 eeg samples (15 seconds long) that represented various conditions commonly seen in the icu (7 seizures; 25 lpd, gpd, or burst suppression; and 52 normal or slowing). 34 medical students and 30 nurses were asked to label each audio sample as "seizure" or "non-seizure". we then compared their performance with that of eeg experts [epilepsy attendings with >10 years of experience (n=2) and epilepsy fellows (n=7)] and some of the medical students (n=29) who also diagnosed the same eegs on visual display. non-experts listening to single-channel sonified eegs detected seizures with remarkable sensitivity (students: 98±5%; nurses: 95±14%) compared to experts or non-experts reviewing the same eegs on visual display (attendings: 100%; fellows: 90±11%; students: 76±19%).
for eegs containing seizure-like but non-seizure activity, non-experts listening to sonified eegs distinguished these from seizures with high specificity (students: 85±9%; nurses: 82±12%) compared to experts or non-experts viewing the eegs visually (attendings: 95±1%; fellows: 91±7%; students: 65±20%). our study confirms that individuals without eeg training can detect ongoing seizures or seizure-like rhythmic periodic activity by merely listening to a short duration of sonified eeg. while sonification of eeg cannot replace the traditional approaches to eeg interpretation, it provides a meaningful triage tool for fast assessment of patients with suspected subclinical seizures. super-refractory status epilepticus (srse) is a life-threatening form of status epilepticus (se) that continues despite, or recurs after, 24 hours of therapeutic interventions, including continuous intravenous anesthetic third-line agents (tlas). no therapies are approved for srse, leading to substantial variation in both management and determination of treatment response. for the phase 3 trial of brexanolone as adjunctive therapy for srse, involving up to 180 international sites, we developed and implemented clinical standardization guidelines (csgs) for real-time support of tla administration, weaning, and outcome assessment under eeg neuromonitoring. a clinical standardization team (cst), including investigators and se experts, developed consensus csgs defining acceptable eeg patterns for continuing, terminating, or pausing the weaning of tlas. csg implementation was facilitated by training and cst call centers staffed internationally by physicians with critical care eeg expertise. in cases of disagreement, the local site retained final decision-making authority.
a "traffic light" system defined: 1) "green" tolerated eeg patterns (e.g. improving background) for which weaning could continue, 2) "red" eeg patterns requiring cessation of weaning (>3 seizures within 3 hours, discharges >3 hz, or discharges 1-2.9 hz with evolution and no improvement over 6 hours), and 3) "amber" eeg patterns not meeting the above, for which tla weaning should be paused while optimizing anti-epileptic medications and monitoring for transitions to green/red eeg patterns. the initial 125 cst consultations yielded 97% csg compliance; 83% of eegs underwent cst review. few cst consultations lasted >60 minutes (7%); most lasted <30 minutes (67%). this phase 3 trial demonstrates the feasibility of applying neuromonitoring csgs for tla weaning in srse patients to ensure better consistency of clinical care and reliability of the primary outcome measure in clinical trials. csgs were well accepted by investigators and may serve as a framework for future clinical trials or clinical therapies in srse. severe brain trauma is a leading cause of death and disability worldwide. post-traumatic epilepsy (pte) is a chronic complication that occurs in up to 40% of cases (frey, 2003; najafi et al., 2015). drugs and other interventions to prevent epileptogenesis would likely be most effective early after traumatic brain injury (tbi), but cannot be given indiscriminately. there is a critical need for tools that quantify pte risk. abnormal neural activity, in the form of ictal-interictal continuum abnormalities (iicas), is increased in acute brain injury and appears to differentiate patients at risk for secondary brain injury (e.g. kim et al., 2017). we hypothesized that iicas acutely following tbi may be a marker of post-traumatic epilepsy risk. we evaluated continuous eeg data from moderate to severe tbi patients who did and did not develop pte (any seizure 2-12 months post-tbi; n=50). seizures <1 month post-tbi were classified as symptomatic, not pte.
conventional 10-20 scalp electrode placement was used, and eegs were reviewed by standard visual analysis by the mgh neurophysiology service. daily eeg reports were scored for the presence of iicas and seizures. demographic data including gender, age, tbi severity and type of brain injury were recorded. univariate and multivariate regression analyses were performed to determine which iica and demographic features correlated with pte. gcs (p=0.02) and tbi severity (p=0.02) were significantly associated with pte, as expected. seizures (p=0.002), epileptiform discharges (p=0.0003), generalized periodic discharges (p=0.0003) and lateralized rhythmic delta activity (p=0.04) independently predicted risk for post-traumatic epilepsy. epileptiform discharges, in particular, were more prevalent acutely post-tbi in pte patients. increased iica prevalence is significantly associated with pte and may be a predictive marker for identifying patients who may benefit from anti-epileptogenesis trials. rapidly obtaining eeg signals in the ed and icu for at-risk patients can enhance diagnostic accuracy and speed, while cutting down time until treatment. ceribell inc has developed a portable eeg data recorder and electrode headset with rapid-setup (~5 min) technology, requiring no eeg technician, to overcome the inaccessibility of eeg in urgent situations when seizures are suspected. the purpose of this study is to evaluate the signal quality and performance of the ceribell system compared to a reputable clinical eeg system. we collected eeg samples in the laboratory and at stanford university medical center. laboratory collections on healthy volunteers included simultaneous collection of eeg using ceribell and nihon kohden systems, and a split-signal test that recorded eeg to both data recorders from the same electrodes. in the icu, eeg was recorded with the ceribell system on 25 patients and subsequently with the clinical eeg system.
data were filtered, and spectral densities, mean frequency (mf), spectral entropy (se), and 75% spectral edge frequency (sef75) were computed. in the split-signal test, the waveforms consistently appeared similar by visual inspection. the analysis of ceribell data yielded values (mf = 22.46 hz, se = 8.51, sef75 = 21.56) similar to the commercial system (mf = 22.86 hz, se = 8.52, sef75 = 21.58). in the simultaneous test, the ceribell system produced values (mf = 14.3 hz, se = 8.4, sef75 = 17.36) similar to the commercial system (mf = 15.5 hz, se = 8.58, sef75 = 18.84). in the clinical setting, the ceribell system showed spectral density distributions comparable with the commercial system. our results indicate that the signal quality of the ceribell system is similar to a commercially available eeg system used widely in the clinical setting, while requiring less setup time and allowing more portability. status epilepticus (se) is a life-threatening condition characterized by prolonged seizures without regaining of consciousness between seizure events. when se continues or recurs 24 hours or more after treatment with third-line anesthetic agents, it is considered super-refractory se (srse). there are few population-based studies on the descriptive epidemiology of srse at a national level. the objective was to estimate the incidence of srse in canada in 2010-2012. we analyzed standardized national administrative record-level data covering all provinces across canada as provided by the canadian institute for health information. srse episodes were classified from two databases for acute care admissions (discharge abstract database) and emergency visits (national ambulatory care reporting system) over 3 fiscal years (2010/2011 to 2012/2013).
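two of the spectral summaries used in the ceribell comparison above, mean frequency and the 75% spectral edge frequency, can be computed directly from a power spectral density. a pure-python sketch over an already-discretized psd (the psd estimate itself and spectral entropy are omitted):

```python
def mean_frequency(freqs, psd):
    """power-weighted mean frequency of a power spectral density."""
    total = sum(psd)
    return sum(f * p for f, p in zip(freqs, psd)) / total

def spectral_edge_frequency(freqs, psd, edge=0.75):
    """lowest frequency below which `edge` of the total power is
    contained (sef75 when edge=0.75)."""
    threshold = edge * sum(psd)
    cumulative = 0.0
    for f, p in zip(freqs, psd):
        cumulative += p
        if cumulative >= threshold:
            return f
    return freqs[-1]
```

in practice the psd would come from an estimator such as welch's method applied to each eeg channel, and these summaries would be compared between the two recording systems as the study describes.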
cases were identified as srse using a modification of a previously published algorithm based on icd-10-ca diagnostic codes for epilepsy (g40), status epilepticus (g41), or convulsions (r56), plus an intensive care unit stay of 2 days or more with invasive mechanical ventilation. using our algorithm, from 2010-2012 the mean annual number of cases classified as srse was 3,427 (9.98/100,000 persons per year). the annual incidence was higher in males (11.78/100,000 per year) than females (8.21/100,000 per year). the highest rates were in the age group 70-79 years: 20.7 and 32.9 per 100,000 per year for females and males, respectively. the mean age of srse patients was 52 years (sd=22 years), with 58% males. the most common comorbidities for srse included metabolic disturbances (39%), sepsis (32%), toxic withdrawal state (31%), cardiovascular disease (22%), and head trauma (13%). in-hospital mortality for srse was 21%. this is the first study reporting estimates of srse incidence in canada. these results suggest that srse is associated with a substantial disease burden. interventions that improve patient outcomes and reduce mortality are required. new-onset refractory status epilepticus (norse) is a condition characterized by prolonged pharmacoresistant seizures in a previously healthy individual with no identifiable etiology during initial evaluation. typical magnetic resonance imaging (mri) findings include bilateral limbic and neocortical t2-weighted hyperintense lesions. fluorodeoxyglucose (fdg)-positron emission tomography (pet) findings have not been previously reported. this study sought to describe fdg-pet and mri characteristics in patients with norse. 12 patients were retrospectively identified from a database of autoimmune-mediated encephalitis from 2008-2017, meeting diagnostic criteria for norse and having undergone mri and pet over the course of their illness. imaging findings were confirmed with a board-certified neuroradiologist.
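the incidence figures reported for the canadian srse study above are mean annual case counts scaled to a rate per 100,000 persons. a minimal sketch of the arithmetic (the ~34.3 million denominator is an assumed population, roughly canada's over the study years, chosen only to illustrate the calculation):

```python
def incidence_per_100k(cases, population, years=1):
    """mean annual incidence per 100,000 persons, for `cases` observed
    over `years` in a population of size `population`."""
    return cases / years / population * 100_000

# ~3,427 mean annual srse cases over an assumed population of ~34.3 million
rate = incidence_per_100k(3427, 34_300_000)  # roughly 10 per 100,000 per year
```

the same function applied to sex- or age-stratified counts and denominators yields the subgroup rates quoted in the abstract.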
nine patients were autoantibody positive: three n-methyl-d-aspartic acid (nmda) receptor, two glutamic acid decarboxylase (gad), three voltage-gated potassium channel (vgkc)-complex (two with leucine-rich glioma-inactivated protein 1 igg positivity), and one gamma-aminobutyric acid (gaba) b receptor. all patients had identifiable abnormalities on fdg-pet. hypometabolism was most common, with 11 of 12 patients having diffuse, bilateral, or unilateral frontal, parietal, or occipital cortical hypometabolism. nine patients also had bilateral (7) or unilateral (2) mesial temporal hypermetabolism. two patients had multifocal hypermetabolism with bilateral or unilateral frontal abnormalities in addition to mesial temporal findings. of the nine patients with fdg-pet hypermetabolism, concurrent mri scans failed to show corresponding t2-weighted hyperintense lesions in the mesial temporal and medial frontal regions in two patients. fdg-pet findings in norse include bilateral or unilateral mesial temporal or mesial frontal hypermetabolism with diffuse, bilateral, or focal cortical hypometabolism. hypermetabolism may reflect regions predominantly involved in acute epileptogenesis. fdg-pet may improve sensitivity compared to mri alone. while seizures are uncommon but reported in primary intraventricular hemorrhage (ivh), little evidence is available on the prevalence of hyperexcitable patterns on long-term eeg monitoring. we sought to determine the prevalence of hyperexcitable patterns and seizures in patients with primary ivh, who were extracted from a cohort of patients with spontaneous intracerebral hemorrhage (sich) who underwent continuous electroencephalogram (ceeg) monitoring between january 2013 and december 2016 at yale-new haven hospital. indications for ceeg monitoring included fluctuating or depressed mental status, abnormal movements and a limited clinical exam.
we recorded demographics, radiologic hydrocephalus, duration of eeg recording and eeg findings. hyperexcitable patterns comprised generalized, bilateral independent or lateralized periodic discharges (pds), lateralized rhythmic delta activity (rda), brief potentially ictal rhythmic discharges (b(i)rds), and spike-and-wave discharges (sw). of 196 adults with sich who had ceeg performed, 13 patients had primary ivh. hydrocephalus was present in 9 patients (69%). patients were monitored for a mean duration of 22.4 (± 14.7) hours. 9 patients (70%) had hyperexcitable patterns and/or electrographic seizures: electrographic seizures with co-existent hyperexcitable patterns were captured in 2 of 13 patients (16%) and hyperexcitable patterns without seizures in 7 of 13 patients (54%). hyperexcitable patterns included pds (4) (generalized, lateralized and bilateral independent, with and without rhythmicity), rda (5) (both lateralized and generalized, with and without sharps), b(i)rds (1) and sw (1). there was no significant association between hydrocephalus and hyperexcitability or electrographic seizures (p=0.76). both electrographic seizures and patterns of hyperexcitability on eeg are common in our selected cohort of primary ivh patients. this underscores the importance of continuous eeg monitoring in this patient population, since the detection of non-convulsive seizures may offer an opportunity for therapeutic intervention. patients with aneurysmal sah (asah) frequently have ictal-interictal continuum (iic) eeg patterns. while seizure burden can worsen outcomes, less is known about iic burden. we investigated the impact of iic burden and anti-epileptic drug (aed) treatment on asah outcomes. we included patients with asah undergoing continuous eeg (ceeg) from 2011-2016.
patients with nonaneurysmal sah were excluded. maximum iic burden was categorized as >90%, 50-89%, 10-49%, 1-9%, or <1% of the recording. age, gender, admission gcs, apache ii score, fisher and hunt and hess (hh) scores, aed dosing and discharge gos were ascertained by chart review. presence of iic patterns in asah independently predicts worse neurologic outcome, although maximum burden does not. although nearly half of these patients receive aed treatment, our data suggest that aed treatment may not influence outcome. prospective studies may further delineate the clinical risks and benefits of aed treatment. refractory status epilepticus (rse) is defined by failure to control epileptic activity after the administration of 1st and 2nd line antiepileptic agents. mortality associated with rse has been estimated to be around 23-61% at hospital discharge. we conducted this study to analyze trends in the frequency and management of rse, using the university healthsystem consortium (uhc) database from 2009 to 2016. this database covers 120 academic medical centers and their 299 affiliated hospitals in the united states and consists of a sample of 12,366,264 patients. data including age, sex, antiepileptics (aed) and length of stay were collected. mean age was 52.4 years and 48.83% of patients were female. there was an increasing trend of using lorazepam as the first-line aed (58.5% in 2009 to 60.5% in 2016) and a decreasing trend of using midazolam as the first-line aed (62.9% in 2009 to 55.4% in 2016). levetiracetam was the most common second-line aed used throughout all years, followed by propofol and then phenytoin/fosphenytoin. mean length of hospital stay was 10.8 days. between 2009 and 2016, the proportion of hospitalized patients in the united states diagnosed with rse increased. lorazepam and levetiracetam have been the most common aeds used. mean length of hospital stay has not changed. status epilepticus is associated with high risk of multi-organ dysfunction.
ketamine for the treatment of super-refractory status epilepticus (srse) has the benefit of a different mechanism and lack of cardiac depression when compared with other anesthetic agents. this study evaluated the improvement in sequential organ failure assessment (sofa) score in patients treated with ketamine for srse. this is a retrospective study of patients with srse from 2011 to 2016. the timing and dosage of anesthetic agents used in their treatment were abstracted. sofa scores at admission and for the first 6 days after initiation of ketamine were calculated. shock prior to initiation of ketamine included septic shock and cardiogenic shock. outcomes including mortality, organ failure, and hospital-associated infections (hais) were also recorded. a total of 47 patients were treated with ketamine after failure of seizure control using other anesthetic agents. seventeen (36.2%) had an improvement of their sofa score while 30 (63.8%) did not. the median sofa score on admission was 5 (iqr 4-6) for those who had an improvement and 6 (iqr 6-8) for those who did not (p=0.11). cardiac arrest was the etiology of srse for 6 (37.5%) patients who improved vs. 10 (62.5%) patients who did not (p=0.89). patients required 0 to 3 vasopressors for hemodynamic support, with fewer needed for those who had an improvement (p=0.02). there was a higher rate of hais in patients who did not have an improvement of their sofa score (p=0.04). there is a subset of patients treated with ketamine for srse who have an improvement in their sofa score, require less vasopressor support, and have a lower rate of hais. further studies are needed to better understand which patient population may most benefit from the use of ketamine for treatment of srse. the ceribell eeg system (ces) is a novel 8-channel eeg device with instant sonification and visual display capability that can be set up quickly without an eeg technician.
we hypothesized that by using ces, we could decrease time to eeg acquisition and improve diagnosis and treatment decisions in suspected nonconvulsive seizures (ncs). adult icu patients (gcs < 12) who had continuous eeg (ceeg) as part of clinical care were enrolled. once ceeg was ordered, consent was obtained and ces was placed by the treating physician (n=7), who listened to the left/right hemisphere signals for 30 seconds each. suspicion for seizure (1=low, 5=high) and decision to treat (yes/no/not sure) were rated pre- and post-sonification. three blinded epileptologists compared accuracy of sonification with visual ces eeg. outcomes were difference in time to eeg acquisition, change in suspicion for seizure and decision to treat, and ease of use (1=challenging; 5=easy). 35 patients (mean age 61 +/- 18, median gcs 6 (iqr 4-8.5)) were enrolled from 7:00 am to 5:00 pm. start of eeg acquisition was significantly faster for ces (23 minutes (iqr 14-46) vs. 145 minutes (iqr 93-237), p<0.001), median difference 86 minutes (iqr 60-152). one patient had ncs during sonification and this was accurately identified and treated. low suspicion for seizure (1) was more likely post-sonification (63% vs. 37%, p=0.029). treatment decision changed in 40% after sonification, and this was in the correct direction 86% of the time. inappropriate decision to treat decreased from 26% to 9% (p=0.07). negative predictive value was 100% (95% ci 85-100%). ces was consistently rated easy to use. the ceribell eeg system is easy to use, speeds eeg acquisition, accurately identifies ncs, and enables appropriate treatment decisions. it has the potential to greatly enhance timely diagnosis and treatment of ncs in critically ill patients. the aim of this study was to understand the efficacy of ketamine in refractory status epilepticus and identify the underlying factors affecting the effectiveness of ketamine.
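a negative predictive value of 100% with a 95% ci of 85-100% has the shape of an exact (clopper-pearson) binomial interval. the sketch below shows how such an interval can be computed with only the standard library; the denominator of 23 test-negative patients is an assumption chosen to illustrate an interval close to the one reported, not a figure taken from the abstract.

```python
from math import comb

def binom_cdf(k, n, p):
    """p(x <= k) for x ~ binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05):
    """exact (clopper-pearson) confidence interval for a binomial proportion x/n."""
    if x == 0:
        lo = 0.0
    else:
        a, b = 0.0, 1.0
        for _ in range(60):  # bisection: find p with p(x >= x) = alpha/2
            m = (a + b) / 2
            if 1 - binom_cdf(x - 1, n, m) < alpha / 2:
                a = m
            else:
                b = m
        lo = a
    if x == n:
        hi = 1.0
    else:
        a, b = 0.0, 1.0
        for _ in range(60):  # bisection: find p with p(x <= x) = alpha/2
            m = (a + b) / 2
            if binom_cdf(x, n, m) < alpha / 2:
                b = m
            else:
                a = m
        hi = b
    return lo, hi

# illustrative: 23 of 23 test-negative patients truly seizure-free (assumed n)
lo, hi = clopper_pearson(23, 23)
print(f"npv 95% ci: {lo:.2f}-{hi:.2f}")
```

with x = n, the lower bound reduces to (alpha/2)^(1/n), which is why a perfect npv still carries a nontrivial lower confidence limit for small samples.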
moreover, we also studied the rate of complications in patients who underwent continuous midazolam-ketamine dual therapy for treatment of refractory status epilepticus. this is a retrospective cohort study evaluating the efficacy of ketamine in 52 patients with refractory status epilepticus admitted to the university of maryland medical center neurocritical care unit or micu over five years (2011-2016). we established a standardized algorithm for managing refractory status epilepticus. electrographic and clinical control of seizures was classified into four groups: likely response, possible response, permanent response and no response, as reviewed by a team of epileptologists and neurointensivists. the effective doses of ketamine to abort rse were studied. complications during the intensive care unit stay while on therapy were reviewed. of the 52 patients, 29 were male and 23 were female. 22% of the patients had cardiac arrest as an etiology of seizures. median loading dose was 1.5 mg/kg; median maintenance dosage was 4 mg/kg/hr. 25% of the patients had no response to ketamine. 75% were responsive to ketamine, of which 12 patients had a likely response and 28 patients a possible response; 38.5% of the patients had a permanent response to ketamine. 67% of patients had hospital-acquired infections, 34% had metabolic acidosis, and 25% had ards. this is one of the largest single-center studies illustrating the efficacy of ketamine in aborting rse. further study should address the difference in incidence of complications in patients treated with ketamine versus alternative therapies. this study also demonstrates the etiology of seizures and its influence on the efficacy of ketamine in aborting rse. acute cardiopulmonary complications are frequently observed in convulsive status epilepticus but the mechanism is poorly understood.
complications include tachy-arrhythmias, myocardial ischemia, takotsubo cardiomyopathy and neurogenic pulmonary edema. herein, we mapped the evolution of cardiac dysautonomia as a function of 5 sequential electrographic stages of se in four subjects admitted to the icu. we hypothesize that pathological co-activation of both arms of the autonomic system contributes to cardiac complications. heart rate variability (hrv) is considered a proxy for ans tone on the heart. we analyzed hrv in the time and frequency domains, along with a complexity measure (lempel-ziv, lz), during se and mapped changes as a function of stages of se as determined by scalp eeg. conventional scalp eeg recordings and lead i ekg (sampled at 256 hz) were analyzed using kubios hrv software 2.1. cardiac vagal index (cvi) and cardiac sympathetic index (csi) were calculated using the geometric lorenz-plot method. parasympathetic activity is expressed in rmssd, pnn, cvi, and hf power. four adults (age range 22-59; m=3) were admitted to the icu following convulsive se. ictal hrv changes initially reflected high sympathetic system activation (high csi) and reduced vagal tone (low hf, rmssd), as reported previously with convulsive seizures. earlier stages of se (stages i and ii) were marked by dual activation of the ans with sympathetic predominance (lower cvi/csi ratio). later stages of se (stages iv and v) demonstrated a progressive increase in parasympathetic activity (hf power, rmssd, cvi, cvi/csi ratio). hf power and rmssd at stage v se were three times higher than during a discrete seizure. the lz complexity measure downtrended with the loss of fluctuations in late stages of se. in one subject, se terminated with asystole. this case series highlights dynamic changes in sympatho-vagal imbalance with progressive se. dual activation of the sympatho-parasympathetic system and loss of complexity measures are associated with increased cardiac complications.
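two of the hrv metrics named above, rmssd and lempel-ziv complexity, can be sketched in a few lines of python. this is an illustrative, stdlib-only sketch, not the kubios implementation; the median-split binarization is one common convention for computing lz complexity on an rr-interval series.

```python
import math

def rmssd(rr_ms):
    """root mean square of successive rr-interval differences (ms), a vagal-tone proxy."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def binarize(rr_ms):
    """encode each rr interval as above ('1') or below ('0') the series median."""
    med = sorted(rr_ms)[len(rr_ms) // 2]
    return ['1' if r > med else '0' for r in rr_ms]

def lempel_ziv(sequence):
    """lempel-ziv (lz76) complexity: number of phrases in a left-to-right parse,
    where each phrase is extended while it already occurs in the prefix seen so far."""
    s = ''.join(sequence)
    i, c = 0, 0
    while i < len(s):
        l = 1
        while i + l <= len(s) and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1  # one new phrase completed
        i += l
    return c
```

as a sanity check, the classic example string "0001101001000101" parses into six phrases (0 · 001 · 10 · 100 · 1000 · 101); a loss of rr fluctuation in late-stage se, as described above, would drive the binarized sequence toward fewer distinct phrases and hence lower lz complexity.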
therapies directed towards stabilization of cardiac dysautonomia might minimize complications. super-refractory status epilepticus (srse) is a life-threatening neurological condition characterized by status epilepticus that persists for 24 hours despite treatment with first-, second-, and third-line agents (tlas) or upon the weaning of tlas. srse is associated with limited treatment options and high morbidity and mortality. this study aims to describe and quantify inpatient srse treatment and its associated outcomes in the us. srse cases were classified retrospectively using a modified version of a previously published algorithm applied to a large, de-identified, us electronic health record database (cerner health facts®) covering >600 hospitals (2000-2015). cases were classified utilizing icd-9 coding for status epilepticus (345.00, 345.01, 345.1x, 345.3, 345.41, 345.51 and 345.81) and procedure coding for ventilator support; cases with los >180 days or missing age were excluded. univariate statistics were used to describe mortality, hospital los, icu los, and discharge disposition. our algorithm classified 1153 cases as srse (1137 patients). most cases (61%) were admitted to large (300+ beds) and/or teaching hospitals (86%). mean hospital los was 24.4 days, and icu los was 15.1 days. both los and icu los were significa[…]. the average mortality rate was 23.5%. mortality rates increased with the number of tlas used (1-2 tlas = 21.5%; 3+ tlas = […]). […] were discharged home (6% with tracheostomy), while 32% (n=285) were discharged to another facility. treatment of srse requires acute, intensive management in the hospital setting. los and mortality rates were high and increased with increasing use of tlas. while good outcomes remain possible even after srse, additional interventions are needed that enable seizure control, liberation from anesthetic and ventilator management, and improved mortality.
refractory status epilepticus (se) carries an exceedingly high mortality and morbidity, often warranting an aggressive therapeutic approach. initially used in childhood epilepsies, the ketogenic diet (kd) has also accumulated supporting evidence in the treatment of pediatric se. recently, the implementation of kd in adults with refractory and super-refractory se has been shown to be feasible and effective. we describe our recent experience with a new-onset refractory status epilepticus (norse) patient and the unexpected challenge of achieving and maintaining a ketotic target. practical advice is provided, including a comprehensive review of agents commonly used in the neurocritical care unit that jeopardize ketosis, and of alternatives. a previously healthy 29-year-old woman was admitted with cryptogenic norse following a febrile illness, with a course complicated by prolonged super-refractory se. a comprehensive work-up was notable only for mild cerebrospinal fluid (csf) pleocytosis, elevated non-specific inflammatory serum markers, and edematous hippocampi with associated diffusion restriction on magnetic resonance imaging (mri). repeat csf testing was normal and serial mris demonstrated resolution of edema and diffusion restriction with gradually progressive hippocampal and diffuse atrophy. she required an aggressive approach including high anesthetic infusion rates, 16 anti-seizure drug trials (in various combinations), empiric partial bilateral oophorectomy, and immunosuppression. enteral ketogenic formula was started on hospital day 27; however, sustained beta-hydroxybutyrate levels > 2 mmol/l were only achieved 37 days later, following a careful comprehensive adjustment of the care plan. notably, a significant response to kd was only achieved with beta-hydroxybutyrate levels > 3.5 mmol/l.
there are hidden carbohydrates in commonly administered medications for se, antibiotics, and even electrolyte repletion formulations and solutions used for oral care, all of which challenge the use of kd in this setting. tailoring comprehensive care and being aware of possible complications of kd are important for the successful implementation and maintenance of ketosis. early seizures are estimated to occur in 18-33% of patients with moderate to severe traumatic brain injury (tbi) (herman 2015, vespa 1999). continuous eeg (ceeg) is essential for detection of nonconvulsive seizures (claassen 2004). the university of california davis protocol for tbi includes ceeg on a case-by-case basis, which we reviewed. a retrospective review of patients admitted to the icu for tbi from 9/23/2016-3/13/2017 was performed for demographics, icu length of stay (los), and ceeg. patients with ceeg were assessed for demographics, tbi severity, gcs, ceeg indication and findings. 100 patients were identified. twenty-one were monitored on eeg. median age was 36, and 24% were female. indications for ceeg included seizure prior to admission (n=4), altered mental status (ams) (n=13), and ams with paroxysmal events (n=4). seizures were recorded in 2 patients. median duration of ceeg was 26.8, 296.36, and 146.25 hours among the three groups, respectively. those with seizures prior to hospitalization were connected to ceeg earliest (median 14.65 hours) but had the longest median icu los (623.27 hours), followed by ams (40.28 and 424.92 hours) and ams with paroxysmal events (78.83 and 377.88 hours). median gcs was 3, 6, and 5, respectively. median los for patients without seizures or interictal epileptiform activity (iea) was 380.63 hours, 483.47 hours for those with iea only, and 580.58 hours for those with seizures. median gcs was 4.5, 6, and 4 among the eeg groupings. our data suggest that seizures prior to hospitalization, ceeg-recorded seizures, and iea predict longer icu los. associated lower gcs likely indicates more severe injuries.
tbi patients with ams may have delayed seizure detection and treatment. our rate of seizure detection is lower than expected. a more consistent protocol for ceeg will likely improve seizure detection. prospective studies are needed to determine if ceeg can predict and influence outcomes. status epilepticus is a serious neurologic emergency. although many studies have been published on incident status epilepticus, there are few data on the risk of recurrent status epilepticus. we performed a retrospective cohort study using administrative claims data to identify all patients hospitalized with status epilepticus in california, new york, and florida between 2005-2013. our primary outcome was a recurrent hospitalization for status epilepticus. survival statistics were used to calculate the cumulative rate of recurrence at 30 days, 1 year, and 5 years. in subgroup analyses, we compared rates of recurrence according to age, gender, race, and etiology (stroke, traumatic brain injury, acute and chronic central nervous system (cns) infections, brain tumors, dementia, autoimmune cns disease, or unspecified etiology). we identified 37,418 patients with status epilepticus. during a mean follow-up of 2.9 (±2.0) years, 4,755 (12.7%; 95% ci, 12.4-13.0%) developed recurrent status epilepticus. the cumulative rate of recurrence was 2.7% (95% ci, 2.5-2.9%) at 30 days, 10.9% (95% ci, 10.5-11.3%) at 1 year, and 20.8% (95% ci, 20.2-21.5%) at 5 years. the 5-year cumulative rate of recurrence was 19.8% (95% ci, 18.9-20.6%) in women versus 21.9% (95% ci, 21.0-22.8%) in men, 16.8% (95% ci, 16.0-[…]%) in older patients versus 23.2% (95% ci, 22.4-24.0%) in patients <60, and 19.1% (95% ci, 18.4-19.9%) in white patients versus 23.0% (95% ci, 22.1-24.0%) in non-white patients. the 5-year cumulative rate of recurrence was highest for status epilepticus associated with autoimmune cns disease (30.3%; 95% ci, 25.4-35.9%) and chronic cns infection (24.1%; 95% ci, 19.6-29.4%).
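the "survival statistics" used above for the 30-day, 1-year, and 5-year cumulative recurrence rates are typically kaplan-meier estimates, which account for patients censored before each horizon. a minimal, stdlib-only sketch is below; the input data in the usage line are illustrative, not the study's claims records.

```python
def km_cumulative_incidence(times, events, horizons):
    """kaplan-meier estimate of cumulative event probability 1 - s(t).

    times:    follow-up time for each patient (any consistent unit)
    events:   1 if recurrence observed at that time, 0 if censored
    horizons: time points at which to report cumulative incidence
    """
    data = sorted(zip(times, events))
    surv = 1.0
    curve = []  # (event time, cumulative incidence) step points
    n_at_risk = len(data)
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, ee in data[i:] if tt == t and ee == 1)  # events at t
        n_t = sum(1 for tt, ee in data[i:] if tt == t)            # leaving risk set at t
        if d:
            surv *= 1 - d / n_at_risk
            curve.append((t, 1 - surv))
        n_at_risk -= n_t
        i += n_t
    out = []
    for h in horizons:
        ci = 0.0
        for t, c in curve:
            if t <= h:
                ci = c
            else:
                break
        out.append(ci)
    return out

# illustrative: 4 patients, recurrences at years 1 and 2, censoring at years 3 and 4
rates = km_cumulative_incidence([1, 2, 3, 4], [1, 1, 0, 0], horizons=[1, 5])
```

a naive proportion (events / patients) would underestimate the 5-year rate whenever many patients are censored early, which is why the step-function estimate is the standard choice for this kind of claims-data follow-up.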
approximately 1 in 5 patients with status epilepticus experienced a recurrent episode within 5 years. recurrence was most often seen in younger patients, non-white patients, and patients with underlying autoimmune cns disease or chronic cns infection. super-refractory status epilepticus (srse) is a rare, life-threatening form of status epilepticus (se) refractory to multiple therapies including anesthetic third-line agents (tlas). enrollment in an srse clinical trial is challenging because patients may present urgently before srse is confirmed or may dynamically improve before randomization. pivotal clinical trials in srse require patient selection criteria that accurately identify srse at randomization. in this phase 3 trial of brexanolone as adjunctive therapy for confirmed srse, the enrollment scheme enabled operationally confirming srse prior to randomization during a qualifying wean (qw) under real-time eeg neuromonitoring. informed consent was obtained for all subjects 1) admitted in se having failed first- and second-line therapies; 2) transferred on tlas in seizure- or burst-suppression; or 3) transferred without seizure- or burst-suppression or not receiving tlas. subjects were required to achieve seizure- or burst-suppression for 24 hours through continuous administration of one or more tlas, followed by a post-enrollment qw of tlas. enrolled subjects failing the qw were randomized to concomitant brexanolone or placebo following reinstitution of one or more tlas. subjects not randomized after a successful qw underwent a 3-week follow-up. the qw protocol and criteria for qw failure were developed and implemented utilizing eeg neuromonitoring to confirm srse after enrollment, using the definition of shorvon and colleagues. a qw was performed on over 100 evaluable subjects across 180 international sites to enable enrollment of patients with confirmed srse.
subjects with a successful qw who were not randomized provided insight into outcomes associated with se and avoided the randomization of patients who did not meet srse criteria following enrollment. the use of neuromonitoring-guided diagnosis during a structured qw helped confirm srse, facilitating the enrollment of appropriate patients into this phase 3 trial in a rare, critically ill, and dynamic srse patient population. autoantibodies to the 65 kda isoform of glutamic acid decarboxylase (gad65 ab), commonly found in t1dm patients, have been associated with drug-resistant epilepsy. ketosis-prone diabetes is a heterogeneous syndrome encompassing various forms of beta cell dysfunction culminating in diabetic ketoacidosis. rates of epilepsy in patients with ketosis-prone diabetes are not known. we compared the prevalence of epilepsy in patients with ketosis-prone diabetes in a multi-ethnic population with the prevalence of epilepsy in the type 1 diabetes population as well as the general population in a metropolitan medical center. our study design was a prospective review of retrospectively collected sera of 500 patients admitted for diabetic ketoacidosis (defined as ph < 7.3, bicarbonate < 17, with ketonemia or ketonuria) for the presence of gad65 ab. all sera were assessed separately for autoantibody presence or absence at dr. hampe's lab in seattle, washington. we also reviewed patients' medical records for neurological diagnoses. this was done in a blinded fashion by two separate reviewers. of our 500 patients with ketosis-prone diabetes, 4.3% also had epilepsy. this is higher than the published rate in type 1 diabetics (1.35%) and the general population in the surrounding area (<0.05%). antibody testing revealed that 21% of patients with ketosis-prone diabetes were gad65 ab positive, with a rate of epilepsy of 1%.
a two-tailed t test between the gad65 ab-positive and gad65 ab-negative groups showed no statistically significant difference in the prevalence of epilepsy between these two groups. while the prevalence of epilepsy is higher in the ketosis-prone diabetes population than in the general population of houston, the difference is not related to titers of gad65 ab and must be due to some other unknown factor in these patients. management of refractory status epilepticus commonly involves the induction of seizure- or burst-suppression using anesthetic agents. however, the duration and endpoints of these therapies are not well defined. specifically, weaning anesthetic agents is complicated by the emergence of eeg patterns on the ictal-interictal continuum (iic), which have uncertain significance, given that iic patterns may worsen cerebral metabolism and oxygenation, show dissociation between scalp and depth eeg recordings, and may indicate a late stage of status epilepticus itself. determining the significance of iic patterns in the unique context of anesthetic weaning is important to prevent the potential for unnecessarily prolonging anesthetic coma. we identified a series of patients who underwent over 48 hours of burst-suppression therapy, multiple weaning attempts, and continued weaning despite the initial emergence of iic patterns. patients who experienced anoxic brain injury were excluded from the series. we report 5 cases of patients who underwent successful weaning despite initial emergence of iic patterns. eeg patterns following anesthetic weaning (including lateralized periodic discharges approaching 3 hz frequency and lateralized rhythmic delta activity) as well as terminal eeg patterns are described in detail. in these 5 patients, continued weaning of anesthetic agents despite the emergence of iic patterns did not result in relapse to status epilepticus.
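the gad65 comparison earlier in this passage uses a two-tailed t test on the two prevalences; for small 2x2 tables of group membership versus diagnosis, a common exact alternative is fisher's exact test. the stdlib-only sketch below is illustrative (the table in the usage line is a textbook example, not the study's data), and the two-sided p-value sums all tables whose hypergeometric probability does not exceed that of the observed table.

```python
from math import comb

def fisher_exact_2sided(a, b, c, d):
    """two-sided fisher exact test on the 2x2 table [[a, b], [c, d]].

    rows: group (e.g. antibody-positive / antibody-negative)
    columns: outcome (e.g. epilepsy yes / no)
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # hypergeometric probability of x in the top-left cell, margins fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    # sum probabilities of all tables at least as extreme as the observed one
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# textbook example table [[1, 9], [11, 3]]; p is about 0.0028
p = fisher_exact_2sided(1, 9, 11, 3)
```

because the test conditions on the table margins, it stays valid at the small event counts seen here (a 1% epilepsy rate in the antibody-positive subgroup), where normal-approximation tests can be unreliable.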
while the metabolic impact of these patterns on brain activity is uncertain, weaning strategies that treat iic as a surrogate of recurrent status epilepticus risk further prolonging anesthetic management and its known toxicity. we speculate that iic patterns are transitional and may have a context-specific association with status epilepticus relapse, with less risk conferred when these patterns are observed during the weaning of anesthetic agents after prolonged burst-suppression therapy. other electrographic features aside from this clinical context may discriminate the risk of status epilepticus relapse, such as eeg background activity. brivaracetam (brv) is approved as adjunctive therapy for focal (partial-onset) seizures in patients ([…] years) with epilepsy. brv is available as oral tablets, oral solution, and an intravenous (iv) formulation. the formulations are interchangeable. this abstract reports the safety and tolerability of iv brv. during clinical development, 177 participants received iv brv. we report pooled safety findings from 153 participants receiving brv 25-150 mg doses. the therapeutic range of brv is 25-100 mg twice daily. in n01256, 24 healthy volunteers received iv brv as a 15-minute infusion or 50 mg/min bolus (25, 50, 100, or 150 mg single doses; n=6 in all groups). in ep0007 (nct01796899), 25 healthy volunteers received iv brv 100 mg as a single 2-minute bolus injection or oral tablets. in n01258 (nct01405508), 104 patients received 7 days of brv oral tablets 100 mg twice daily or placebo, and then 4.5 days of iv brv 100 mg twice daily either as a 2-minute bolus or 15-minute infusion, for nine doses in total. treatment-emergent adverse event (teae) data were pooled. data reported are for iv brv 25-150 mg (n=153). the most frequent teaes were somnolence 30.1%, dizziness 15.7%, fatigue 15.0%, headache 7.2%, dysgeusia 6.5%, euphoric mood 3.9%, feeling drunk 3.9%, and infusion-site pain 3.3%. infusion-site pain was specific to the administration route.
most teaes were mild or moderate and occurred mostly in healthy volunteers. iv brv was well tolerated, with an ae profile consistent with oral administration except for route-specific injection-site aes, dysgeusia, euphoric mood and feeling drunk. the interpretation of these data was complicated by the difficulty of pooling disparate studies involving healthy volunteers and epilepsy patients with heterogeneous medical histories and concomitant antiepileptic drug use. further clinical trials or real-world experience are needed to understand potential clinical impact. ucb pharma funded this study. refractory status epilepticus (rse) is a challenging condition that requires multiple antiepileptic drugs (aeds) to treat. during rse, the brain is under excessive excitation, which results in an increase in glutamate receptors such as alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (ampa) and n-methyl-d-aspartate (nmda). perampanel (per), a novel, noncompetitive ampa-receptor antagonist, may have a role in the treatment of rse, and there are positive results in different animal models of rse. we identified 8 adult patients over a 6-month period who were treated with per for different forms of rse. one was excluded as the etiology of rse was anoxic brain injury and care was transitioned to comfort only within 24 hours of initiating per. three patients had a definite response to per, which we defined as resolution of ictal patterns on electroencephalogram (eeg) within 48 hours of per without adding a new aed. one had a possible response, with significant improvement in eeg findings; however, there was some eeg improvement predating the initiation of per. we observed several treatment factors that may have increased response to per. those who responded had it used earlier in the treatment cascade (sixth or seventh vs. ninth or tenth aed), a higher initial dose (8 mg vs. 4 mg), and were escalated to maximum dosage within 48 hours.
they were also more likely to be receiving continuous ketamine and midazolam, suggesting a possible synergy with per. there were no documented adverse effects in any patient prior to discharge. one patient did experience a decline in phenytoin levels, which could be related to per, as there are reports of enzyme-inducing properties. we observed efficacy of per in several patients with focal and generalized rse without a significant adverse effect profile. further studies are needed to clarify the dosing, timing and appropriate indications in rse treatment. topiramate is a potent broad-spectrum anti-epileptic drug (aed) with several mechanisms of action, including blockade of the ionotropic glutamatergic ampa receptor and voltage-gated sodium channels, antagonism of non-nmda glutamate receptors, and enhancement of gaba-mediated chloride conductance. we hypothesize that topiramate is an effective adjunctive therapy in rse and srse due to its multiple mechanisms of action. we performed a retrospective analysis of 11 patients admitted to the intensive care unit with status epilepticus (se) at a tertiary referral center from 2013-2016. we reviewed demographics, age, seizure type, etiology, prior aed/topiramate exposure, time to response to treatment, eeg reports and neuroimaging results. rse was defined as failure of a benzodiazepine and another conventional second-line aed to stop se. srse was defined as se that continues or recurs 24 hours after being treated with an anesthetic agent. 6 (55%) were male, 6 (55%) had a history of seizures; mean age of patients with se was 62.5 years. of 11 treated patients, 6 (55%) had focal non-convulsive se (ncse), 1 (9%) had myoclonic se, 1 had myoclonic followed by generalized ncse, 1 (9%) had generalized ncse, and 1 (9%) had focal and generalized nonconvulsive se prior to administration of topiramate. 3 (27%) patients were treated with 2 aeds and 8 (72%) patients with 3 aeds prior to topiramate.
electrographic seizures improved in 10 (91%) patients after receiving topiramate. resolution of electrographic seizures occurred within 12 hours in 2 (18%) patients, 24 hours in 4 (36%) patients, 48 hours in 2 (18%) patients, and 72 hours in 2 (18%) patients. our findings suggest that topiramate could be an effective adjunctive treatment in rse and srse. however, prospective studies including larger numbers of patients are needed to confirm these findings. patients with refractory status epilepticus (se) require multiple antiepileptic drugs (aeds) to abort seizures, and often barbiturates. there is a paucity of data on how to wean aeds safely once seizures are controlled while minimizing medication side effects or withdrawal symptoms. a retrospective review of 160 patients admitted to mayo clinic in rochester, minnesota for se between 2006 and 2016 was performed. patient demographics, se type (focal versus generalized, convulsive, and refractoriness), seizure etiology, aeds on admission and at outpatient follow-up, aed side effects from use and withdrawal, and functional outcomes in terms of modified rankin scale were recorded. 44 of 160 (27.5%) patients had refractory se, 24 (15.0%) patients had refractory non-convulsive status epilepticus (ncse), 62 (38.8%) patients had convulsive se, 18 (11.2%) patients had ncse, and 12 (7.5%) patients had epilepsia partialis continua. of the 122 patients with outpatient follow-up (ranging from 1 to 50 weeks following hospital discharge, with 68.0% of patients following up within one month), 120 patients were on an aed regardless of etiology. patients were on a median of 1 aed in both refractory and non-refractory se at follow-up. 9 (5.6%) patients had withdrawal seizures after aeds were weaned (3 had a prior stroke, 1 traumatic brain injury, 2 idiopathic, 2 multifactorial). none of the 13 patients completely weaned off a barbiturate had seizure recurrence at follow-up.
6-month mortality in refractory se was 22/68 (32.4%) versus 23/92 (25.0%) in non-refractory cases. favorable functional outcome at follow-up was achieved in 17/68 (25.0%) patients with refractory se versus 29/92 (31.5%) in non-refractory se. we found a low rate of late seizure recurrence after weaning aeds in refractory and non-refractory se, particularly in the case of barbiturates. spreading depolarizations (sds) are strongly associated with secondary brain injury after aneurysmal subarachnoid hemorrhage (sah). however, studies to understand whether sds play a causal role in secondary injury are hindered by existing sd induction methods, which are invasive, cumbersome, and cause primary tissue injury. we developed a method to study the role of sds after experimental sah using commercially available transgenic optogenetic mice which express channelrhodopsin (chr2) in cortical neurons. we used in vivo laser speckle and doppler flowmetry, intrinsic signal imaging, and local field potential (lfp) and extracellular potassium shifts to detect sds. we optogenetically induced sds with light through intact and unaltered skull in multiple regions without causing primary brain injury. we found regional differences in thresholds for optogenetically induced sds (from lowest to highest threshold): (1) whisker barrel, (2) motor, (3) sensory, and (4) visual cortex. lower thresholds were associated with higher chr2 tissue expression. changes in lfp and increased extracellular potassium concentrations at the site of stimulation preceded precipitation of an sd. finally, we induced and detected sds in the setting of sah over several days through chronically implanted glass coverslips. non-invasive optogenetic light stimulation can reliably induce sds in the setting of sah. longitudinal optogenetic induction of sds in chr2 transgenic mice is a potentially useful tool to study the role of sds in the pathogenesis of secondary brain injury after sah.
Aneurysmal subarachnoid hemorrhage is a devastating neurologic injury with significantly prolonged hospital courses and high morbidity and mortality. When aneurysms are detected, they often require securement via either surgical clipping or endovascular techniques. A subset of intracranial aneurysms, given their location, poor surgical approach, and wide neck, are amenable to flow diversion, which promotes thrombosis by redirecting blood flow within the aneurysm, leading to slow obliteration. Approximately 10% of aneurysms treated with flow diversion do not obliterate after 12 months, but there is currently no validated way to predict treatment failure. Computational models of blood flow in flow-diverted aneurysms predict a significant difference in hemodynamic energy loss across aneurysms between cases that resolve and those that do not. Energy loss could be estimated clinically during angiography; however, this hypothesis needs to be validated experimentally because computer models often overestimate hemodynamic parameters, poorly predict flow through stents, and may not have the resolution to fully describe intra-aneurysmal blood flow. In this pilot study, four cases of giant fusiform intracranial aneurysms will be selected: two with resolution following flow diversion treatment, and two without resolution. Models of each vessel geometry will be fabricated using additive manufacturing techniques. Under fluoroscopy, flow-diverting stents will be placed within the aneurysm of each model vessel in the same configuration that was achieved clinically. Model blood containing tracer particles will be pumped through the model aneurysms, and energy loss within the model vessels following treatment will be calculated using particle image velocimetry. Energy loss between aneurysms successfully and unsuccessfully treated with flow diversion will be compared experimentally.
Hemodynamic energy loss may be a clinically measurable value that could predict treatment failure after flow diversion. Additive manufacturing techniques can be used to test patient-specific hemodynamics to improve understanding of flow-diversion treatment success or failure.

The National Institute of Neurological Disorders and Stroke (NINDS) and the National Library of Medicine (NLM) initiated development of common data elements (CDEs) specific to unruptured cerebral aneurysms and subarachnoid hemorrhage (SAH) in 2015 as part of a joint project to develop data standards for funded neuroscience clinical research. Through the development of these data standards, the NINDS and NLM SAH joint CDE initiative strives to improve SAH data collection by increasing efficiency, improving data quality, reducing study start-up time, facilitating data sharing and meta-analyses, and helping educate new clinical investigators. The SAH CDE working group (WG) consisted of 83 international members with varied fields of SAH-related expertise and was divided into domains such as subject characteristics and assessments and exams. The WG developed a set of SAH-specific CDE recommendations by selecting among, refining, and adding to existing field-tested data elements, especially established stroke CDEs. WG CDE recommendations were drafted into the NIH CDE repository. Following an internal review of recommendations, the SAH CDEs were vetted during a public review on the NINDS website for 6 weeks and later posted on the NLM and NINDS websites. Version 1.0 of the SAH CDEs was available on the NINDS CDE website in April 2017. These new SAH CDEs and recommendations include those developed for unruptured intracranial aneurysms and long-term therapies. The website provides uniform names and structures for each data element, as well as guidance documents and template case report forms using the CDEs.
The NINDS encourages the use of CDEs by the clinical research community in order to standardize the collection of research data across studies. The NINDS CDEs are a continually evolving resource, requiring updates as research advancements indicate. These newly developed SAH CDEs will serve as a valuable starting point for researchers and facilitate streamlining and sharing data.

Subarachnoid hemorrhage (SAH) represents 5% of stroke admissions in the US. Aneurysmal hemorrhage represents the most dangerous etiology; however, 10-15% of SAH cases have negative digital subtraction angiography (DSA). There is variation in practice with regard to repeat diagnostic studies and their timing. It is not uncommon to repeat DSA within 7-10 days of the initial assessment. This study aims to describe the costs associated with prolonged ICU stay and repeat diagnostic studies in this patient cohort. We performed a retrospective review of all patients admitted for spontaneous SAH between January 2011 and April 2017 at our single institution. Patients with at least one initial angiogram negative for suspected spontaneous SAH were included. Patients were categorized into diffuse and non-diffuse patterns of SAH. Cost estimates were based on standard costs as provided by our financial department and CDC estimates for costs of hospital-acquired infections. One hundred fifty-four patients were identified with an initial negative DSA. Second angiograms were performed in 81% of patients, with potentially positive causal findings in 18/125 (14.4%). ICU LOS for angiogram-negative diffuse and non-diffuse SAH was 10.1 and 6.5 days, respectively. Other indications for ICU stay included vasospasm (8.5%), EVD placement (30.8%), and intubation (22%). The excess cost estimates per patient for angiogram-negative diffuse and non-diffuse SAH were $71,567 and $52,843, respectively. Hospital-acquired complications added a total of $57,422 for the cohort.
This is, to our knowledge, the first study attempting a cost analysis of the diagnosis and management of patients with angiogram-negative SAH. We had a high frequency of patients requiring ICU admission for other indications, which should continue to dictate the level of care. However, there may be a cohort of lower-risk patients in whom de-escalation would cause no harm and would reduce morbidity and cost.

Purpose: to evaluate the feasibility and potential role of bedside optical coherence tomography (OCT) as a diagnostic protocol for Terson's syndrome (TS) in patients with acute subarachnoid hemorrhage (aSAH). Background: 10% of SAH patients become permanently legally blind. The average cost of lifetime support and unpaid taxes for each blind person is approximately $900,000. TS presents as ocular bleeding commonly associated with aSAH. It can be diagnosed by fundoscopy, yet retinal hemorrhages, detachments, and macular holes may go undetected. Early TS identification is critical since, untreated, it may lead to legal blindness, limit rehabilitation, and impair quality of life. Pilot study: 31 SAH patients were screened for TS with dilated fundoscopy and then with OCT. Mood assessments (PHQ-9, HDS), quality-of-life measures (NIH-PROMIS), and subjective visual function scales (VFQ-25) were performed. There was a 22.6% (n=7) incidence of TS. Dilated retinal fundoscopy missed a significant proportion of TS cases (n=4, 57.1% missed). IVH was significantly more common in TS (85.7% vs. 25%). No participants experienced any complications from OCT examinations. Neither decreased quality-of-life visual scores nor a depressed mood correlated with objective OCT pathological findings at 6 weeks of follow-up after discharge. There were no significant mood differences between TS cases and controls. OCT is the gold standard in retinal disease diagnosis. This pilot study showcases its bedside feasibility in aSAH.
In our series, OCT was a safe procedure that enhanced TS detection by decreasing false-negative and inconclusive fundoscopic examinations. It allows early diagnosis of macular holes and severe retinal detachments, which require acute surgical therapy to prevent legal blindness. In addition, OCT aids in ruling out potential false-positive visual deficits in individuals with a depressed mood at follow-up. A comprehensive study is underway to understand the impact OCT might exert on blindness prevention and quality of life.

Fever is common in patients with aneurysmal subarachnoid hemorrhage (aSAH), and blood cultures are commonly sent to diagnose its etiology. Several studies have shown a low incidence of positive blood cultures, but no studies have assessed blood cultures specifically in patients with aSAH. We performed a retrospective analysis of patients admitted with aSAH between January 2013 and December 2015. Blood cultures were adjudicated as true positive (TP) or false positive (FP) based on speciation, time to positivity, number of positive cultures, and repeat culture results. TP patients were compared to all other patients. Age, gender, Hunt-Hess grade, modified Fisher grade, aneurysm treatment, incidence of delayed cerebral ischemia (DCI), length of stay (LOS), and neurological outcomes were analyzed. 324 patients with aSAH were included. Blood cultures were sent on 195 (60%). Sixteen were positive; eleven were adjudicated TP and 5 FP. Thus, 3.4% (11/324) of patients had true bacteremia, and the blood culture yield for true infection was 5.6% (11/195). The FP rate was 2.6% (5/195). Eight TPs were gram-negative (73%), and all contaminants were non-aureus staphylococci. The median post-bleed day for TP results was day 23. Only 3 patients were TP within the first week of admission (0.9%). TP patients had higher admission WFNS (p=.015) and IVH scores (p=.006), but age, gender, aneurysm treatment, and Fisher grade did not differ.
TP patients had longer ICU and hospital LOS and a higher incidence of DCI (56% vs 27%, p=.026). Mortality did not differ between the two groups. The yield of blood cultures in aSAH patients is low: even with a contamination rate under 3%, 31% of positive blood cultures were FP. Future studies should evaluate factors to identify patients at higher risk of bacteremia to reduce costs and improve care.

Intra-arterial verapamil therapy reduces cerebral vasospasm after aneurysmal subarachnoid hemorrhage (SAH). There is little literature that quantitatively describes its safety, required dosing, or efficacy. As a result, therapeutic outcomes need to be subjectively analyzed by experienced radiologists during the intervention and clinically correlated with cerebral perfusion pressure, intracranial pressures, and transcranial Dopplers. We present a novel imaging analysis to quantify cerebral perfusion in real time and apply this technology to patients undergoing therapy for vasospasm. We developed software to evaluate changes in contrast flow dynamics on digital subtraction angiography (DSA) scans performed pre- and post-intra-arterial therapy for vasospasm. Performing signal intensity curve deconvolution on a voxel-by-voxel basis provides quantitative 2D perfusion parameters including time to peak, time to drain, area under the curve, root mean transit time, arrival time, tissue concentration, arterial input functions, and cerebral blood flow at each voxel. After aligning perfusion studies, our software then displays and automatically creates regions of interest for changes in perfusion to visualize the effects of interventions. Our software quantitatively measures perfusion from DSAs and can normalize two DSAs, accounting for differences in volume and speed of contrast administration. Two applications of this technology are demonstrated. The first subtracts perfusion from pre- and post-intra-arterial interventions, quantifying exact changes in perfusion at each voxel.
The second compares two DSA studies of the same patient on different dates to contour the territories susceptible to delayed cerebral ischemia. We compare this analysis to MRI, when applicable, demonstrating ischemic changes aligning with the susceptible territories outlined by our analysis. DSA-based perfusion is an effective study to quantify the need for, and the precise effects of, endovascular interventions. Quantitative thresholds and analysis based on DSA perfusion may assist with real-time assessment of treatment efficacy for patients undergoing intra-arterial verapamil therapy.

We aim to characterize the clinical predictors of ventriculoperitoneal shunt (VPS) placement in aneurysmal subarachnoid hemorrhage (aSAH) patients. There has been no clear consensus on effective measures for predicting VPS placement in these patients. We reviewed the clinical data of patients with aSAH treated at our institution between 2011 and 2014. We excluded patients who died or had withdrawal of care during admission. We recorded patient demographics and clinical predictors including admission/discharge Glasgow Coma Scale (GCS), Hunt-Hess grade, aneurysm size/location, modified Fisher score, modified Rankin Scale (mRS), intracranial pressure (ICP) values during EVD clamp trial, and incidence of vasospasm requiring intra-arterial therapy. There were 112 patients included in this study, and 36% required VPS (n=42/112). VPS patients had significantly worse mRS functional scores at discharge (2.7 vs 3.4; p=0.029), but this began to balance at 1 year (2.5 vs 3.3; p=0.08). Aneurysms were significantly larger in VPS patients (7.4 mm vs 5.5 mm; CI 0.57 to 3.38; p=0.006). A greater percentage of VPS patients had posterior fossa aneurysms, but this was not statistically significant (37% vs 52%; p=0.12).
VPS patients had significantly lower GCS scores at admission (12.9 vs 14.1; p=0.02) and at discharge (11.7 vs 9.4; p=0.01). There was no difference in modified Fisher score (p=0.36) or Hunt-Hess grade (p=0.06), although both were higher in the VPS cohort. There was no difference in the frequency of vasospasm (p=1.0) or in ICP values (p=0.15) in the VPS cohort. Patients presenting with large aneurysms and poor GCS scores had a significantly higher likelihood of requiring VPS during admission. These patients had significantly poorer mRS scores at discharge but not at 1 year.

Subarachnoid hemorrhage (SAH) affects a young population and results in death or disability in the majority of those who experience it. This epidemiology is very different from that of other forms of stroke. Consequently, patients with SAH and their families may have different priorities for recovery. Involving patient perspectives is encouraged in research and is often accomplished using patient-reported outcome measures (PROMs). However, whether PROMs reflect patient and family priorities is unclear, given that (a) PROMs are often developed without their input, and (b) generic PROMs may not apply to specific conditions. We aimed to systematically review the SAH literature that has: (a) involved patients, families, or caregivers in evaluating existing outcome measures; (b) developed novel outcome measures by incorporating their perspectives (including co-development); or (c) described outcomes important to patients, families, or caregivers. We searched Embase and Ovid MEDLINE from inception to December 19, 2016. Study eligibility assessment and data extraction were performed independently and in duplicate. For each eligible citation, we abstracted the study population, design, type of patient involvement, and outcome measure(s), as applicable. We planned a qualitative summary of all included studies. Our search yielded 4275 unique citations; only four articles met our eligibility criteria.
In each, patients (n=235) self-reported impairments resulting from SAH and their impact on their lives (aim c). None involved the evaluation of PROM applicability. Additionally, we found 8 articles that, although they did not meet our a priori eligibility criteria, discuss collecting PROMs (n=2), using PROMs to predict health outcomes (n=4), and comparing PROM applicability without patient perspectives (n=2) in SAH populations. Based on our findings, there is a lack of patient, family, or caregiver involvement in selecting or identifying outcomes after SAH with direct relevance to them. SAH research may be overlooking outcomes that are important to patients.

Early brain injury (EBI) after aneurysmal subarachnoid hemorrhage (aSAH) is defined as brain injury occurring within 72 hours of aneurysmal rupture. Although EBI is the most significant predictor of outcomes after aSAH, its underlying pathophysiology is not well understood. We hypothesize that EBI after aSAH is associated with an increase in peripheral inflammation, measured by cytokine expression levels, and with changes in the associations between cytokines. Methods: aSAH patients were enrolled into a prospective observational study and were assessed for markers of EBI: global cerebral edema (GCE), subarachnoid hemorrhage early brain edema score (SEBES), and Hunt-Hess grade. Assays were performed to determine levels of 13 pro- and anti-inflammatory cytokines. Pairwise correlation coefficients between cytokines were represented as networks. Cytokine levels and differences in correlation networks were compared between EBI groups. Of the 71 patients enrolled, [...] associated with high-grade SEBES. Correlation network analysis suggests higher systemic inflammation [...]. Conclusions: EBI after SAH is associated with increased levels of specific cytokines. Peripheral levels of IL-10, IL-6, and [...]. Expression levels of individual cytokines may offer deeper insight into the underlying mechanisms related to EBI.
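The cytokine "correlation network" construction described in the EBI abstract above — pairwise correlation coefficients between cytokines, with strongly correlated pairs treated as network edges — can be sketched as follows. The cytokine names, per-patient values, and the 0.7 edge threshold are illustrative assumptions, not the study's data or parameters.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_network(levels, threshold=0.7):
    """Edges (cytokine_a, cytokine_b, r) for all pairs with |r| >= threshold."""
    names = sorted(levels)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson(levels[a], levels[b])
            if abs(r) >= threshold:
                edges.append((a, b, round(r, 2)))
    return edges

# Illustrative per-patient cytokine levels (pg/mL); not study data.
levels = {
    "IL6":  [2.0, 5.0, 9.0, 14.0],
    "IL10": [1.0, 2.6, 4.4, 7.0],
    "TNFa": [3.0, 1.0, 4.0, 2.0],
}
print(correlation_network(levels))  # → [('IL10', 'IL6', 1.0)]
```

Comparing such networks between EBI groups then amounts to comparing edge sets (or edge weights) built separately from each group's patients.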
Few recent studies have evaluated health resource utilization and patient outcomes in aneurysmal subarachnoid hemorrhage (aSAH) in the United States. Empirical evidence implicates aSAH as one of the highest-cost diseases treated in the hospital. We identified aSAH patients to determine hospital charges, length of stay (LOS), and patient disposition associated with care in U.S. hospitals using claims data from the 2013 National Inpatient Sample (NIS). Patients with the International Classification of Disease, 9th revision (ICD-9) 430 diagnosis code were identified; a secondary analysis of the NIS (2013) was conducted using ICD-9 clinical modification codes, excluding patients with traumatic and non-aneurysmal SAH. Population size, patient outcome, average charge, and average LOS were calculated for the following subgroups: aneurysmal clipping or endovascular coiling (n=5,925), aneurysmal clipping or coiling with external ventricular drain (EVD) (n=4,660), EVD only (n=2,355), other surgical procedures (n=880), and medically managed (n=1,545). Analyses were survey-weighted and adjusted for patient and hospital characteristics. In 2013, aSAH resulted in an average per-patient hospital charge of $308,139, an average LOS of 17 days, an average mortality of 20%, and total annual hospital charges of $4.7 billion. The highest average charge per patient ($417,042) and hospital LOS (22 days) were attributed to clipped or coiled patients with EVD, and the highest mortality (47%) was found in medically managed patients. These data support the conclusion that aSAH is a high-cost illness managed in U.S. hospitals, and help raise awareness of the potential economic benefits of developing safer, more effective therapies. Additional analyses with updated datasets, including the lifetime burden of aSAH (e.g.,
physician fees, long-term medical and care costs, hospital readmission impact, quality of life, productivity loss, and caregiver burden), should be explored to understand the full economic burden of aSAH and the potential cost-effectiveness of new therapies.

External ventricular drain (EVD) placement is a mainstay of treatment for patients with aneurysmal subarachnoid hemorrhage with hydrocephalus or elevated intracranial pressures, but the optimal strategy for EVD management is still unclear. The goal of this study was to compare the impact of EVD clamping at three different levels on the duration of drain placement and the intensive care unit (ICU) length of stay. We performed a retrospective analysis of patients admitted with aneurysmal subarachnoid hemorrhage to the neurological ICU from December 2015 to January 2017 and included all patients who had an EVD placed. Patients who died were excluded from the study. Patients were divided into three groups: those whose EVD was clamped at 10 mmHg, at 15 mmHg, and at 20 mmHg. Duration of drain placement in days and ICU length of stay in days were compared among the three groups using an analysis of variance (ANOVA). Outcomes were adjusted for presenting Hunt-Hess score, modified Fisher grade, gender, and age. There were 10 patients whose EVD was clamped at 10 mmHg, 23 at 15 mmHg, and 25 at 20 mmHg. There was no difference in duration of EVD placement among the three groups (adjusted p-value 0.36, unadjusted p-value 0.6) nor in ICU length of stay (adjusted p-value 0.18, unadjusted p-value 0.43). EVD clamping at three different levels affected neither drain duration nor ICU length of stay. This study was limited by the small number of patients enrolled. Further studies are needed to clarify optimal strategies for EVD management in the ICU.
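The three-group comparison in the EVD abstract above used a one-way ANOVA. The unadjusted F statistic behind such a comparison can be computed by hand as below; the drain-duration values are invented for illustration and are not the study's data.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across k groups of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative EVD drain durations (days) for clamping at 10, 15, and 20 mmHg.
f = one_way_anova_f([
    [8, 10, 9, 11],   # clamped at 10 mmHg
    [9, 12, 10, 9],   # clamped at 15 mmHg
    [10, 11, 9, 10],  # clamped at 20 mmHg
])
print(round(f, 3))  # → 0.231
```

A small F like this (well below the critical value for 2 and 9 degrees of freedom) corresponds to a non-significant p-value, mirroring the null result the abstract reports; the study's adjusted p-values additionally account for Hunt-Hess score, modified Fisher grade, gender, and age.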
Headache is a presenting complaint in the majority of patients with aSAH and is known to persist long after initial ICU care. Various medications have been used for control of headache, with a major emphasis on opiate use. A history of a prescription for an opioid pain medication increases the risk of overdose and opioid use disorder. We examined the prevalence of opiate use at discharge and its associated factors. A chart review of all patients admitted to a tertiary care center between January 2014 and March 2016 was carried out. Along with baseline demographic data, information was collected on the use of pain scores, CSF diversion, use of opiates, average morphine-equivalent doses, use of opiates at discharge, and discharge destination. Analysis was carried out using Microsoft Excel. The study was approved by the hospital IRB. 52 patients were admitted with aSAH in the above period (70% female, average age 57 years). 45 survived to discharge (62% home, 25% SNF). Among survivors, 40% required CSF diversion for hydrocephalus. All patients complained of pain on presentation and were prescribed opiates during the hospital stay. The average oral morphine-equivalent dose used was 86 mg per day. 25 (56%) patients were prescribed opiates on discharge. Alternative regimens included tricyclic antidepressant (TCA) alone (2 patients), opiate plus TCA (2), acetaminophen (1), and dexamethasone with TCA and opiates (1). The most commonly prescribed opiate was oxycodone. There was no significant association between opiate use/morphine dosing and age, gender, final disposition, or CSF diversion. Opiate prescription at discharge is common in patients with aSAH, and no clinical characteristic seems to predict analgesic need at discharge. Little data exist on better alternatives, leading to a variety of treatment approaches. Further controlled trials are needed to decrease opiate use and prevent adverse effects.

Delayed cerebral ischemia (DCI) in SAH has been associated with both vasospasm-dependent and vasospasm-independent phenomena.
For more than 40 years, isolated hemostasis disorders have been reported in these patients. The objective of this systematic review is to describe the natural history of hemostasis in SAH. We systematically reviewed the MEDLINE, Embase, Cochrane, and LILACS databases using controlled language and the PRISMA statement, and included studies on spontaneous SAH analyzing any hemostasis parameter. We screened 1794 titles, of which 57 observational studies were included. Evidence was evaluated following the STROBE statement. No meta-analysis was attempted because of the methodological nature and heterogeneity of the studies. Hemostasis is profoundly altered during the first hours after bleeding, with several alterations noted, including a hypercoagulable state concomitant with increased fibrinolysis activation and reduced clot stability. Direct and indirect coagulation markers show a trend towards normalization of hemostasis in the first 2 to 3 days. Platelet count decreases, with a nadir 4 to 6 days after bleeding and recovery in the following weeks; a later nadir is associated with DCI. Platelet aggregability is consistently decreased in the first few days, regaining normal function around the second week after bleeding. In addition, the persistence of these alterations, or the presence of a second peak in pro-coagulatory activity, is consistently associated with DCI and worse functional outcomes. The hyperacute phase of SAH is characterized by a profound activation of hemostasis with reduced clot stability, probably due to an increase in fibrinolytic pathways. On the second day post-bleeding, a slow trend towards normalization takes place, except in patients evolving towards DCI. Further research on the pharmacologic manipulation of hemostasis in SAH might be warranted to decrease DCI and improve outcomes in this population.

Hypertonic saline (HTS) is a treatment for SAH-related cerebral edema, administered to improve cerebral perfusion and reduce brain injury.
HTS has a supra-physiological chloride concentration that can contribute to acute kidney injury, which can in turn lead to a poor outcome. In a previously published single-center cohort of 1,267 SAH patients, 16.7% developed acute kidney injury (AKI). Hyperchloremia, but not hypernatremia, was correlated with an increased risk of developing AKI (OR 7.39). AKI was correlated with increased mortality. A secondary analysis of the aforementioned SAH patient cohort (2009-2014) was performed. Trends in acute kidney injury were evaluated in relation to the burden of exposure to intravenous chloride, as well as serum levels of sodium and chloride. The proportion of patients developing AKI with a maximal serum chloride concentration of 108 [...] 109, will be randomized into one of two treatment groups: standard hypertonic saline solution (NaCl 23.4%) versus a solution of NaCl/Na-acetate. We hypothesize that by reducing the IV chloride burden (baseline compared to post-randomization exposure), the delta serum chloride level will decrease, subsequently reducing AKI occurrence (ACETatE trial, ClinicalTrials.gov NCT03204955). AKI is common in the SAH patient population and is associated with worse outcomes. Serum chloride concentrations are a significant risk factor for the development of AKI. A prospective randomized clinical trial is now underway examining the relationship between hypertonic solution composition, serum chloride concentration, and the development of acute kidney injury in aneurysmal SAH.

Spontaneous spinal subarachnoid hemorrhage (SSAH) is a rare but serious condition that can lead to a variety of medical complications. The literature to this point primarily comprises isolated case reports, and none have examined hyponatremia as a complication. Patients were identified from the electronic medical record database at the Mayo Clinic in Rochester, Minnesota.
The Advanced Cohort Explorer tool was used, searching from January 2000 to December 2015. Inclusion criteria were spinal subarachnoid blood products due to hemorrhage into the spinal subarachnoid space not due to (1) redistribution of blood from intracranial subarachnoid hemorrhage, (2) trauma, (3) medical procedures, or (4) predominant hematomyelia, in patients who experienced symptoms and received treatment at our facility. Eight patients (median age 70 years, range 51-87) met the study criteria. Five of these eight patients experienced hyponatremia during hospitalization, with a median value of 125 mEq/L. All of these patients were treated with free water restriction, and one patient briefly received 1.5% sodium chloride solution; in all cases the hyponatremia improved with fluid restriction, and there was no documentation of increased urine output, suggesting that it was likely due to SIADH. Cord compression and hyponatremia were present together in two patients, and in these cases treatment of the hyponatremia was particularly useful to avoid worsening edema. To our knowledge this is the first compilation of cases of spontaneous SSAH highlighting hyponatremia as a complication.

There is significant morbidity and mortality associated with aneurysmal subarachnoid hemorrhage (SAH), and only about 20% of patients survive and resume their previous lifestyle after 3-6 months. Many randomized clinical trials (RCTs) have been conducted, yet no treatment definitively improves outcome from SAH. Outcome is strongly related to baseline factors, yet imbalances are common in early trials. We developed a technique to identify promising treatments at early phase using a pooled control arm model (PPREDICTS: Kent, Shah, Mandava, Neurology 2015) that compares early studies at their own baselines. We applied this method to SAH to develop a multi-dimensional model (PPREDICTS-SAH).
Models for functional outcome and mortality (dependent variables) were developed based on baseline variables (e.g., % WFNS grade 4-5 and age) using methodology developed for ischemic stroke (Mandava, Kent, Stroke 2009). The outcome model is a 3-dimensional surface bounded on either side by +/-0.1 prediction interval surfaces. These prediction interval surfaces incorporate statistical variability to assess whether a treatment differs from the expected outcome. Treatment arms from RCTs and single-arm trials of various treatments of SAH were compared against the pooled control arm. The best model fit was for good outcome (modified Rankin Score 0-2 equivalents) based on the percentage of patients with WFNS 4-5 and age (r2=0.54; p<0.001). Seven trials of the known-negative drug tirilazad were superimposed on the model and fell within the +/-0.1 prediction interval surfaces, confirming futility. Three trials were neutral and within the prediction interval surfaces, while case series using implanted prolonged-release nicardipine and a low-dose heparin study were above the +0.1 surface, showing promise. Models were also developed for mortality (r2=0.23, p=.02). Outcome models based on the percentage of high-grade WFNS and age were successfully developed. This approach may be useful to prioritize treatments worthy of further study.

Oral nimodipine is recommended to improve outcome in the treatment of aneurysmal subarachnoid hemorrhage (aSAH). The FDA approved nimodipine liquid oral solution (NOS) in 2013 to reduce complications associated with administering nimodipine capsules (NC) to patients with impaired swallowing. Experience with NOS at our center has been complicated by increased liquid bowel movements (LBM), prompting unnecessary testing for infectious diarrhea and exposure to invasive fecal management devices. The study was approved by the local quality improvement review committee. Data were collected prospectively in consecutive patients diagnosed with aSAH during their intensive care unit (ICU) course.
The formulations of nimodipine available were generic NC (Heritage Pharmaceuticals) and NOS (Arbor Pharmaceuticals). We examined total ICU days of exposure to NOS, ICU days with LBM, infectious diarrhea investigations, and fecal management device use. All statistical tests were performed using Minitab. 27 patients were studied from 3/6/2017 to 6/23/2017. 17 patients were exposed to NOS for 143 ICU days, with 71 ICU days with LBM, 7 infectious diarrhea investigations, and 5 patients requiring fecal management devices. 17 patients were exposed to NC for 154 ICU days, with 9 ICU days with LBM (all in patients who also received NOS), no infectious diarrhea investigations, and no fecal management device requirements. The odds ratio for LBM with exposure to NOS was 15.9 (95% CI 7.5 to 33.5, p < 0.0001). The high incidence of LBM with NOS resulted in more infectious diarrhea testing and fecal management device use. Uncontrolled diarrhea may increase the risk of dehydration and delayed cerebral ischemia, although this was not explored in the current study. NOS can mitigate risks associated with needle aspiration of NC; however, these issues, coupled with higher cost, may limit the benefit of its use. Possible solutions may include compounding NC into a liquid formulation by pharmacists or pharmacy technicians. Possible safety and cost benefits require further investigation.

Headache (HA) management after subarachnoid hemorrhage (SAH) is challenging and lacks standardization. We hypothesized that inadequate inpatient HA pain management leads to the development of chronic HA (cHA) after SAH. We conducted a prospective, observational study of non-traumatic Hunt and Hess (HH) grades I-III SAH patients admitted from 2/2014 to 12/2014. After informed consent, we recorded demographics, clinical and radiographic features, analgesic and steroid doses, hospital course, and inpatient pain scores using a numeric rating scale (NRS, 0-10) before (NRS-pre) and after each analgesic administration over post-bleed days 0-14.
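The odds ratio reported in the liquid-nimodipine abstract above (15.9, 95% CI 7.5 to 33.5) can be recovered from the ICU-day counts it gives (71 of 143 NOS-exposed days with LBM vs. 9 of 154 NC days). This sketch uses the standard Woolf log-OR interval; a small rounding difference at the upper bound is expected.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf (log) 95% CI for a 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# ICU days with/without liquid bowel movements, taken from the abstract:
# NOS: 71 of 143 exposed days with LBM; NC: 9 of 154 days with LBM.
or_, lo, hi = odds_ratio_ci(71, 143 - 71, 9, 154 - 9)
print(round(or_, 1), round(lo, 1), round(hi, 1))  # → 15.9 7.5 33.6
```

The recomputed upper bound (33.6) differs from the reported 33.5 only at the last digit, consistent with rounding in the abstract; treating repeated ICU days within the same patient as independent observations is a simplification the abstract's own analysis appears to share.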
a phone survey administered 12-24 months after admission evaluated cha burden. inpatient ha control effectiveness was evaluated by percent pain resolution from the initial pain score, using nrs-pre. the percentage of administrations yielding full pain resolution was compared between those with and without cha. chi-square and t-tests were used for statistical analyses. 29 patients, 69% female, mean age 54.8 ± 11.4 years, with hh grade i (6/29), ii (16/29), and iii (7/29) sah were enrolled, with 7 lost to follow-up. at follow-up, 27.3% of patients (6/22) reported daily ha, 13.6% (3/22) occasional ha, and 59% (13/22) no ha. fewer analgesic administrations yielded full pain resolution in patients who developed cha (57 [17.4%] vs. 80 [24.3%], p=0.029). mean daily inpatient opioid dose (morphine equivalents) for patients with and without cha was 14.9 mg and 8.9 mg, respectively (p=0.28). mean nrs-pre was 4.94 vs. 4.12 for patients with vs. without cha, respectively (p=0.05). inpatient analgesia for sah-related ha is inadequate and may be associated with the development of chronic ha. patients with cha had higher mean inpatient pain scores and fewer analgesic administrations resulting in complete pain resolution. inpatient opioid dose per day was higher in the cha group, although not statistically significantly so. additional research is needed to characterize the relationship between inpatient headache management and chronic headache after sah. subarachnoid hemorrhage (sah) remains a significant cause of neurological morbidity and mortality with few interventions to prevent delayed cerebral ischemia. hypocapnia has been associated with worse outcomes in brain injury. sah patients may be particularly susceptible to hypocapnia-induced vasoconstriction. this study aims to describe the incidence of iatrogenic and spontaneous hyperventilation in sah patients.
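the chi-square comparison of full-pain-resolution rates above can be sketched in pure python; the per-group denominators (~328 and ~329 administrations) are assumptions back-calculated from the reported percentages, not figures taken from the study:

```python
def chi2_2x2(a, b, c, d):
    """pearson chi-square statistic (no continuity correction)
    for a 2x2 table with cells a, b / c, d."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# assumed denominators: 57/17.4% ~ 328 and 80/24.3% ~ 329 administrations
chi2 = chi2_2x2(57, 328 - 57, 80, 329 - 80)
print(round(chi2, 2))  # ~4.79, consistent with the reported p=0.029 at 1 df
```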
a descriptive analysis was performed on a retrospective cohort of adult sah patients admitted to beth israel deaconess medical center icus between 2008 and 2015 with gcs <9 who were treated with mechanical ventilation and an external ventricular drain, and had at least one abg. patients on chronic ventilator support were excluded. the lowest paco2 per icu day was analyzed. 125 patients were included, with 959 days with at least one documented paco2. mean gcs on admission was 5.0 (sd 1.7). 70.3% of patients survived to hospital discharge. 61.6% of patients were exposed to severe hypocapnia (paco2 <30 mmhg). those with severe hypocapnia had similar pao2 and pao2/fio2 ratios, but mildly increased leukocytosis (13.1 vs 12.3). 40.9% of paco2s <30 mmhg occurred during spontaneous ventilation or over-breathing. prior studies have shown that hypocapnia causes decreased brain tissue perfusion and is associated with worse outcomes in sah patients. these data demonstrate that severe hypocapnia is common in patients with sah severe enough to warrant intubation, and is associated with both iatrogenic and spontaneous hyperventilation. hypocapnia is not primarily compensatory or hypoxia driven, as suggested by mean ph and pao2. confirmation of this association and potential future interventions require further study. although delirium is associated with higher rates of hospital complications among critical care patients, limited data exist on risk factors for delirium in aneurysmal subarachnoid hemorrhage (sah). a previous study identified older age, high hunt and hess grade, intraventricular hemorrhage (ivh), and hydrocephalus as risk factors for delirium. we sought to identify risk factors for delirium during admission after sah. a retrospective review was performed of prospectively collected data for consecutive sah patients enrolled in the university of maryland recovery after cerebral hemorrhage (reach) study.
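the "lowest paco2 per icu day" analysis above can be sketched as a simple grouping step; the gas values here are made up for illustration:

```python
from collections import defaultdict

def lowest_paco2_per_day(abgs):
    """abgs: list of (icu_day, paco2) tuples; returns {day: min paco2}.
    mirrors the analysis above, which used the lowest paco2 per icu day."""
    per_day = defaultdict(list)
    for day, paco2 in abgs:
        per_day[day].append(paco2)
    return {day: min(vals) for day, vals in per_day.items()}

# illustrative (made-up) gas values for one patient
abgs = [(1, 38), (1, 29), (2, 33), (2, 31), (3, 27)]
daily_min = lowest_paco2_per_day(abgs)
severe_days = sorted(d for d, v in daily_min.items() if v < 30)
print(daily_min, severe_days)  # {1: 29, 2: 31, 3: 27} [1, 3]
```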
baseline data and clinical complications during each admission, including delirium, were recorded. statistical analysis was performed using univariate and multivariate logistic regression. 103 sah patients from july 2014 to january 2016 were reviewed. while age alone was not associated with delirium during icu admission, higher hunt and hess grade, ivh, hydrocephalus, hospital-acquired infection, elevated troponin, and intubation were significantly associated with delirium on univariate analyses. on stepwise multivariate logistic regression, ivh (or 9.6, p=0.04) and intubation (or 6.4, p=0.01) remained significantly associated with delirium. ivh and intubation predict delirium during icu admission for sah. further analyses are needed to determine whether the relationship between ivh and delirium is primarily explained by risk of hydrocephalus, risk of fever, medication exposure, or independent mechanisms. stroke triage scales are important to expedite acute evaluation, assure quick door-to-neuroimaging times and decrease door-to-needle times in patients with ischemic stroke eligible for intravenous thrombolysis. subarachnoid hemorrhage (sah) is associated with high mortality in the acute phase due to a particular risk of early and devastating re-bleeding. therefore, patients with sah also need urgent assessment. the performance of classic stroke triage scales in the identification of patients with sah has not previously been evaluated. the objective of our work was to evaluate the performance of the los angeles prehospital stroke screen (lapss) in identifying patients with sah admitted to a tertiary hospital. we evaluated consecutive patients admitted to a tertiary hospital with sah from january 2009 to may 2016. at hospital admission, lapss was applied by trained nurse personnel to all noncomatose patients with complaints suggestive of neurological disease. a total of 139 patients with sah were evaluated (mean age 55.9 +/- 15.6 years; 59.7% female).
lapss was applied to 94 patients. lapss was positive in only 45 patients (32.4%). patients with a positive lapss had a higher nihss score at admission (4 [2-23] versus 0, p<0.01), lower glasgow coma scores (14 [8-15] versus 15, p<0.01) and a significantly shorter door-to-neuroimaging time (p<0.01). in patients with sah and mild symptoms, lapss was not a sensitive screening tool in our series. hospital and prehospital services using lapss for triage of patients with stroke should be aware of this limitation and include in triage flowcharts specific questions evaluating sah-specific symptoms. spontaneous subarachnoid hemorrhage (sah) is a neurological emergency which, despite current advances in management strategies and the advent of institutional protocols, still carries significant mortality due to poorly understood causes. our objectives were to characterize in-hospital mortality by evaluating the primary cause of death and to externally validate the hair score, a clinical score that prognosticates mortality. in this retrospective cohort study, we reviewed all sah patients admitted to our neuro-icu between april 1, 2010 and march 31, 2016. univariate and multivariate logistic regressions were performed to identify predictors of in-hospital mortality, our primary outcome. to validate the hair score, the model's predictors were hunt and hess score at treatment decision, age, intraventricular hemorrhage, and re-bleeding within 24 hours. discrimination was assessed by visualizing the receiver-operating curve and calculating the area under the curve (auc). among 434 sah patients with a median age of 56 years (interquartile range 48-65), 63.6% female, in-hospital mortality was 14.1% (n=61). of those, 54 (88.5%) had a neurological cause of death or withdrawal of care and 7 (11.5%) had a cardiac death. median time from sah to death was 6 days. the main causes of death were the primary effects of the initial hemorrhage, re-bleeding and refractory edema.
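the auc used to assess the hair score's discrimination can be computed without a statistics package via its rank (mann-whitney) interpretation; this sketch uses toy labels and scores, not study data:

```python
def auc(labels, scores):
    """area under the roc curve via pairwise comparison of positive
    and negative scores (equivalent to the mann-whitney u statistic)."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: deaths (label 1) tend to receive higher predicted risk
labels = [0, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.80]
print(auc(labels, scores))  # 0.75 (3 of 4 positive/negative pairs ranked correctly)
```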
factors significantly associated with in-hospital mortality in the multivariate analysis were age, hunt and hess score, and intracerebral hemorrhage. maximum lumen size was also a significant risk factor among aneurysmal sah patients. the hair score had satisfactory discriminative ability, with an auc of 0.89. our in-hospital mortality is lower than previous reports, attesting to the continuing improvement of our protocolized subarachnoid hemorrhage care. the major causes of death are the same as in previous reports. the hair score showed good discrimination and could be a useful tool for predicting mortality. so far, scientific and therapeutic efforts have mainly focused on the prevention of rebleeding and ischemic complications (dci) in patients with subarachnoid hemorrhage (sah). however, data regarding the impact of parenchymatous hemorrhage (ph) on long-term outcome in these patients are limited. all consecutive patients with atraumatic sah admitted to our hospital over a 5-year period (2008-2012) were retrospectively analyzed. the extent of sah as well as the presence, localization and volume of ph were evaluated. functional and health outcomes were assessed after 12 months using the modified rankin scale (unfavorable: 3-6) and the eq-5d. propensity-score (ps) matching was performed to minimize potential bias due to confounding variables between sah patients with and without ph. of overall 494 patients with atraumatic sah, 85 (17.2%) had ph on initial imaging. ph patients had a worse clinical condition on admission (wfns: ph 4 (3-5) vs. øph 2 (1-4); p<0.001) and a greater extent of sah (modified fisher: ph 3 (2-4) vs. øph 2 (1-3); p=0.001). median ph volume was 11.0 (5.4-31.8) ml. after successful ps-matching (parameters: age, wfns, modified fisher and graeb score), patients with ph had worse functional and health outcomes after 12 months compared to those without ph (mrs 3-6: ph 56/82 (68.3%) vs.
øph 33/78 (42.3%); p=0.001; eq-5d: ph 50 (30-70) vs. øph 80 (65-95); p<0.001). in multivariate analysis, the presence of ph was the strongest independent predictor of unfavorable outcome after 12 months, followed by the occurrence of dci (risk ratio (95% ci): ph 4.5 (2.0-10.0); p<0.001). parenchymatous hemorrhage is frequent and associated with functional and subjective impairments in patients with atraumatic sah. aneurysmal subarachnoid hemorrhage (asah) is associated with early and delayed brain injury. insulin-like growth factor 1 (igf1) is a potent cellular growth-promoting factor with demonstrated independent neuroprotective actions in stroke and neurologic disease but has not been well characterized after asah. this study sought to examine the relationship between plasma igf1 levels and outcomes after asah. this cohort of 128 asah patients had a mean age of 50.6 years (sd 10.03) and was 66% female, with the most frequent grades being hh 3 (45%), wfns 1 (41%) and fisher 3 (48%). initial and peak plasma igf1 concentrations were measured in 268 plasma samples from a banked biorepository using a commercial sandwich solid-phase elisa kit. delayed neurological deterioration (dnd) and delayed cerebral ischemia (dci) were determined using radiologic and clinical information. igf1 levels were log transformed due to non-normality. anova, t-tests, pearson correlations and logistic regression analyses were completed using spss and sas. older age was significantly associated with lower initial and peak plasma igf1 levels (r=.37, p<.001; r=.41, p<.001). men had higher initial and peak plasma igf1 levels than women (p<.01; p=.002), and premenopausal women had higher initial and peak plasma igf1 levels than post-menopausal women (p=.003; p=.004). lower peak plasma igf1 levels were associated with increased clinical severity by wfns (p=.02) and fisher grade (p=.03) as well as the development of dnd (p=.04; p=.003). lower peak igf1 levels were associated with the presence of dci (p=.009).
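the log transform applied to the skewed igf1 levels above can be illustrated with toy values; a right-skewed sample has its mean well above its median before the transform, and the two coincide afterwards:

```python
import math

def log_transform(values):
    """natural-log transform for strictly positive, right-skewed
    biomarker levels (as done for igf1 above before regression)."""
    if any(v <= 0 for v in values):
        raise ValueError("log transform requires positive values")
    return [math.log(v) for v in values]

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# illustrative right-skewed levels (not study data)
raw = [10.0, 100.0, 1000.0]
logged = log_transform(raw)
print(round(mean(raw)), median(raw))                      # 370 100.0
print(round(mean(logged), 3), round(median(logged), 3))   # equal after transform
```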
controlling for age and fisher grade, log peak plasma igf1 levels remained significantly associated with the presence of dnd (p=.01; or 0.27; ci 0.10-0.74) and dci (p=.007; or 0.29; ci 0.12-0.70). igf1 levels have not been well characterized after asah. these results suggest that lower plasma igf1 levels are associated with clinical severity and outcomes after asah and provide impetus for future work to further examine these relationships. induced hypertension (ih) is the mainstay of medical management for delayed cerebral ischemia (dci) after subarachnoid hemorrhage. however, using vasopressors to raise systemic blood pressure well above normal levels may be associated with systemic and neurological complications, of which posterior reversible encephalopathy syndrome (pres) has been increasingly recognized. however, the frequency of and risk factors for ih-induced pres have never been systematically evaluated. we identified 68 patients treated with ih among 345 sah patients admitted over a three-year period. pres was diagnosed based on clinical suspicion (i.e., unexplained deterioration), confirmed by imaging. we conducted retrospective extraction of data on ih therapy, including baseline and highest target mean arterial pressure (map) and vasopressor dose/duration. we compared those with pres to ih-treated controls and also described the clinical features and sequelae of all pres cases. five sah patients were diagnosed with pres, with a median time from initiation of vasopressors to diagnosis of 6.6 days (range 1-8 days). baseline map did not differ between pres cases and ih controls, but the highest target map was greater (140 vs. 120 mm hg, p=0.006). the magnitude of ih was similarly greater (54 vs. 34 mm hg above baseline, p=0.004). all cases presented with lethargy, three had new focal deficits, and one had a seizure. one died from cardiac complications but the other four patients had complete resolution with ih discontinuation, without infarction or residual disability.
pres was diagnosed in 7% of patients undergoing ih therapy and was most likely when map was raised well above baseline to levels exceeding the traditional limits of autoregulation (130-140 mm hg). high clinical suspicion for this reversible disorder appears warranted when aggressive ih targets are maintained for several days or in the presence of unexplained neurological deterioration. other interventions may be preferable for refractory dci when moderate degrees of ih have been attempted. patients with aneurysmal subarachnoid hemorrhage (asah) may receive significant potentially harmful ionizing radiation exposure (phire) from diagnostic tests and therapeutic procedures during their initial hospitalization. we hypothesized that risk factors for excessive phire are present at the time of admission. following irb approval, all patients admitted to our institution with documented asah over a 4-year period were retrospectively evaluated against inclusion and exclusion criteria. patients were excluded if they died prior to discharge. all study data, including sah-specific and patient-specific risk factors, were obtained from the electronic medical record. the total effective dose of ionizing radiation (tedir) per patient was calculated from previously published radiation exposure data. phire was considered to have occurred if tedir was greater than 50 msv, the annual limit for radiation workers. logistic regression models were then fit to the dataset to evaluate clinical variables that significantly affected the risk of phire in these patients. data were collected from 108 patients (58.4% of all asah patients evaluated). the mean tedir in these patients was 47.8 msv. forty-two (38.9%) patients met criteria for phire. in multivariate logistic regression modeling, male gender (or=3.2, ci=1.1-11.8), posterior circulation aneurysms (or=3.7, ci=1.2-11.5) and ventriculostomy (or=24.8, ci=3.7-164.7) were significantly associated with an increased risk of phire.
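the tedir calculation above amounts to summing per-procedure effective doses and flagging totals over the 50 msv limit; the dose table here is purely illustrative, not the study's published exposure data:

```python
# illustrative per-procedure effective doses in msv; these values are
# assumptions for the sketch, not the study's published exposure table
DOSE_MSV = {"head_ct": 2.0, "cta_head": 5.0, "dsa": 10.0, "cxr": 0.1}

PHIRE_LIMIT_MSV = 50.0  # annual limit for radiation workers, per the study

def tedir(procedures):
    """total effective dose of ionizing radiation for one admission."""
    return sum(DOSE_MSV[p] for p in procedures)

def phire(procedures):
    """true if the admission exceeds the 50 msv phire threshold."""
    return tedir(procedures) > PHIRE_LIMIT_MSV

# hypothetical admission: 8 head cts, 3 ctas, 3 dsas, 10 chest x-rays
stay = ["head_ct"] * 8 + ["cta_head"] * 3 + ["dsa"] * 3 + ["cxr"] * 10
print(round(tedir(stay), 1), phire(stay))  # 62.0 True
```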
in this study, approximately 40% of asah patients received phire. male gender, posterior circulation aneurysms and ventriculostomy were significantly associated with an increased risk of phire. these factors may serve as important predictors of patients who require additional or complex care necessitating repeated diagnostic or therapeutic procedures during their hospitalization. alternative diagnostic or therapeutic modalities should be considered for patients with these risk factors to limit the risk of phire. future research should also evaluate the effect of phire on neurologic outcomes in these patients. it remains unclear whether patients with unruptured intracranial aneurysms (ica) should be treated. vessel wall enhancement (vwe) on high-resolution magnetic resonance vessel wall imaging constitutes a promising marker of aneurysm instability in this population. to find risk factors for aneurysm instability, we sought to identify predictors of vwe in patients with unruptured icas. we conducted a retrospective analysis of prospectively collected data on patients with unruptured ica evaluated by a single provider. all patients were evaluated using a previously validated algorithm to ascertain vwe using high-resolution magnetic resonance vessel wall imaging. two different raters, blinded to the study data, categorized all observed aneurysms as vwe-positive or vwe-negative. kappa statistics were used to evaluate the reproducibility of this approach. univariable and multivariable logistic regression modelling was used to identify factors associated with vwe after adjusting for potential confounders. 94 patients with unruptured ica were included in the analysis (mean age 59 [sd 14], female sex 77 [82%]). of these, 34 (36%) were vwe-positive and 60 (64%) were vwe-negative. inter-rater reliability for vwe ascertainment was excellent (kappa 0.86, 95% ci 0.75-0.97). 9 out of 10 (90%) patients presenting with cranial nerve palsy were vwe-positive.
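the inter-rater kappa reported above (0.86) follows cohen's formula, which can be sketched directly; the toy ratings below are hypothetical:

```python
def cohens_kappa(a_labels, b_labels):
    """cohen's kappa for two raters with categorical labels
    (e.g. vwe-positive = 1 / vwe-negative = 0 calls)."""
    n = len(a_labels)
    # observed agreement
    po = sum(a == b for a, b in zip(a_labels, b_labels)) / n
    # chance agreement from each rater's marginal frequencies
    pe = 0.0
    for cat in set(a_labels) | set(b_labels):
        pe += (a_labels.count(cat) / n) * (b_labels.count(cat) / n)
    return (po - pe) / (1 - pe)

# toy ratings: the raters agree on 35 of 50 hypothetical aneurysms
a = [1] * 20 + [1] * 5 + [0] * 10 + [0] * 15
b = [1] * 20 + [0] * 5 + [1] * 10 + [0] * 15
print(round(cohens_kappa(a, b), 2))  # 0.4
```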
in univariable analysis, age (p=0.03), headache on presentation (p=0.004), and size (p<0.001, per additional millimeter) were associated with vwe-positive status. in multivariable analysis, headache on presentation (p=0.007) and size (p=0.003) remained independently associated with vwe. cranial nerve palsy is an established clinical marker of aneurysm instability; consequently, our results support the role of vwe as a marker of aneurysm instability. headache on presentation and aneurysm size are independently associated with vwe; these risk factors for aneurysm instability could be used to select patients with unruptured icas who may benefit from vessel wall imaging. prognostication in subarachnoid hemorrhage (sah) patients presenting in coma is crucial for surgical decision making. aggressive aneurysm treatment is unlikely to be indicated for those not demonstrating signs of neurological improvement over time or after ventricular drainage. early neurological evaluation is, however, challenging in critically ill sah patients requiring anesthesia and intubation for airway protection. in this single-center retrospective study, we applied continuous amplitude-integrated eeg (aeeg) monitoring using a subhairline montage in wfns grade v patients who did not undergo emergency aneurysm treatment. monitoring was initiated soon after admission to the icu. patterns of aeeg findings were classified according to rundgren et al. as follows: flat (f); suppression-burst (sb); electrographic status epilepticus (ese); and continuous (c). based on the aeeg findings, indications for aneurysm treatment were reevaluated. outcome was assessed at six months using the glasgow outcome scale. twenty-three patients, 8 men and 15 women, aged 68.3 ± 14.1 years (mean ± sd), were eligible from december 2012 onward. all patients underwent prophylactic intravenous sedation.
the population represented 28% of all grade v patients, including those resuscitated after cardiac (n=9) or respiratory (n=2) arrest. glasgow coma scale scores were 3 (n=21), 4 (n=1), and 5 (n=1). aneurysms were located in the posterior fossa in 9 patients (39%). aeeg monitoring was initiated 11.8 ± 12.2 hours (median 8.4, range 1.03-41.7 hours) after arrival. all patients showing early f (n=12) or sb patterns (n=3) died. one patient who demonstrated ese remained in a persistent vegetative state. five of 7 patients with a c pattern underwent aneurysm treatment: 4 clippings and 1 coil embolization. moderate disability was attained in 3 and severe disability in 2. two patients undergoing conservative therapy died. continuous aeeg provided useful prognostic information for identifying salvageable sah patients undergoing sedation in the early phase. delayed cerebral ischemia (dci) may result in focal neurological deficits and cerebral infarction after subarachnoid hemorrhage. while global cerebral blood flow (cbf) may be variably reduced, dci is more likely related to regional impairments in cbf below critical perfusion thresholds. we applied volumetric methods to assess the proportion of brain exhibiting hypoperfusion (pbh) in those with clinical dci and in the symptomatic hemisphere of those with focal deficits. 61 patients with aneurysmal sah underwent 15o-pet and ct imaging during the period of risk for dci (median 8 days after sah, iqr 7-10). we measured pbh as the proportion of voxels with cbf < 25 ml/100g/min, after excluding regions of infarction/hematoma on ct. we compared pbh in patients with vs. without dci at the time of pet and, in those with focal deficits, we compared hypoperfusion between affected and unaffected hemispheres. pbh was greater in the 23 patients (38%) with dci compared to those without dci (20%, p=0.006) despite higher mean arterial pressure (map) and most being on active hemodynamic therapies.
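pbh as defined above (the proportion of non-infarct voxels with cbf < 25 ml/100g/min) can be sketched as a masked threshold count; the voxel values are illustrative:

```python
def pbh(cbf_voxels, exclude_mask, threshold=25.0):
    """proportion of brain with hypoperfusion: fraction of included
    voxels with cbf below threshold (ml/100g/min), excluding
    infarct/hematoma voxels as in the pet analysis above."""
    included = [v for v, excl in zip(cbf_voxels, exclude_mask) if not excl]
    if not included:
        raise ValueError("no voxels left after exclusion")
    return sum(v < threshold for v in included) / len(included)

# illustrative voxel values; the last two are masked out as infarct
cbf = [40.0, 22.0, 30.0, 18.0, 55.0, 10.0, 12.0]
mask = [False, False, False, False, False, True, True]
print(pbh(cbf, mask))  # 0.4 (2 of the 5 included voxels are below 25)
```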
global cbf was also lower in those with dci (36.0 vs. 44.4 ml/100g/min, p=0.01) but did not differ between those remaining symptomatic and those whose deficits had resolved. while mean hemispheric cbf was not lower in the affected hemispheres of the 13 patients with lateralizing deficits (40.2 vs. 41.1 ml/100g/min, p=0.15), there was greater pbh in the symptomatic hemisphere (21% vs. 18%, p=0.049). sah patients with dci have a greater proportion of brain with hypoperfusion despite active hemodynamic therapy and higher map. there was also a larger proportion of the symptomatic hemisphere with hypoperfusion despite no asymmetry of hemispheric cbf. such measurements of hypoperfusion may better reflect the regional pathophysiology of dci than globally averaged measures of cbf. further studies should determine whether the burden of hypoperfusion correlates with tissue and patient outcomes. patients who survive aneurysmal subarachnoid hemorrhage (asah) are often burdened with lasting cognitive impairment due to a combination of sequelae including neurocardiac injury. the impact of neurocardiac injury after asah is poorly understood. this study sought to examine whether neurocardiac injury detected by global longitudinal strain (gls) is associated with poor performance in neuropsychological (np) memory domains after asah. we studied 135 asah patients at 3 months and 112 at 12 months after hemorrhage (sahmii study r01nr04221). speckle-tracking gls from 3 apical views was assessed on days 0-5 from bleed using transthoracic echocardiograms. neuropsychological (np) outcomes covering 7 domains were completed at 3 and 12 months after hemorrhage by trained personnel. memory tests included controlled oral word association (cowa), the wechsler memory scale (wms), and the rey auditory (r-aud) and complex figure (rey-c) tests. anova and kruskal-wallis tests, pearson and spearman correlations, and logistic regression were completed using spss and sas.
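the pearson correlations used throughout these analyses follow the standard formula, sketched here on toy data:

```python
import math

def pearson_r(xs, ys):
    """pearson correlation coefficient between two equal-length samples,
    as used for the biomarker-age associations above."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# toy data: an exact inverse linear relationship gives r of -1
age = [40, 50, 60, 70]
level = [8.0, 6.0, 4.0, 2.0]
print(round(pearson_r(age, level), 6))  # -1.0
```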
there were 30 (22%) and 27 (24%) patients with abnormal gls (defined as >-17%) in the 3- and 12-month groups, respectively. gls groups had similar age, gender and fisher grade. abnormal gls was associated with higher hh at 3 (p=.05) and 12 (p=.002) months. abnormal gls was significantly associated with decreased performance in r-aud memory domains at 3 months (p=.013) and 12 months (p=.027) after asah, even when controlling for age and hh at 3 months (p=.010). abnormal gls was also associated with poor memory performance 12 months after asah on cowa (p=.008) and the wms (p=.02), even after adjusting for age and hh (cowa p=.008; wms p=.018). neurocardiac injury detected by gls was associated with decreased performance in memory domains of np function at 3 and 12 months after asah. while these relationships require further examination, neurocardiac injury may contribute to long-term np impairment after asah. delayed cerebral infarction (dci) is a frequent complication following high-grade aneurysmal subarachnoid hemorrhage (asah). management of dci includes maintaining hypertension, which is challenging in heavily sedated patients. ketamine is a hemodynamically stable, analgesic sedative not studied in this population. we hypothesized that ketamine infusion (k), as compared to traditional sedatives (control), would safely improve the hemodynamic profile in high-grade ventilated asah patients. we performed a retrospective review of asah patients admitted 1/2015 to 2/2017 requiring mechanical ventilation >72 hrs, and without dnr within 48 hrs from admission. we assessed demographics, hemodynamics, pressors, dci at 2 weeks, ventilator and icu los, and mortality.
fisher exact, wilcoxon, and paired t-tests were applied. comparing k (n=16) vs. control (n=19), median (q1, q3) results were: age 59 (48, 67) vs. 59 (48, 72); hunt and hess 4 (3, 5) vs. 4 (3, 4); mpm-3 30-day estimated mortality 27.8% vs. 28.5%; and gcs 7 (6, 9) vs. 7 (5, 8). ketamine was initiated on day 4 (2, 5); icu los was 20 (17, 24) vs. 15 (7, 22); and vent los 17 (12, 29) vs. 11 (8, 28). mean (+/- sd) for 8 hours before and after ketamine: map 93 (11) vs. 99 (15), p 0.05, except where noted. ketamine infusion, as a second-line sedative, had no effect on mortality or icp, and improved map. however, there was a nonsignificant increase in dci as well as vent los, without a greater rate of tracheostomy. prospective studies are needed to study the effect on dci and long-term outcomes. seizures are a well-known complication of aneurysmal subarachnoid hemorrhage (asah) and occur most commonly in the immediate post-hemorrhagic period. the most commonly used antiepileptic drugs (aeds) for seizure prophylaxis in asah include phenytoin and levetiracetam. there are no reliable data available on the safety and efficacy of restricting aed prophylaxis only until the aneurysm is secured. we retrospectively chart-reviewed patients admitted to our neurosciences intensive care unit with asah during the last two years. seizure incidence was studied in patients treated with phenytoin versus levetiracetam, and in patients treated for 3 to 7 days versus those in whom the aed was discontinued immediately after the aneurysm was secured. in 28 patients aed prophylaxis was discontinued immediately after the aneurysm was secured, and in 21 patients it was continued for 3 to 7 days. of the former, phenytoin was used in 20 patients and levetiracetam in 8. in patients receiving aed prophylaxis for 3 to 7 days, phenytoin was used in 8 cases and levetiracetam in 13. none of these patients had seizures reported during hospitalization or at three-month follow-up.
stopping aed prophylaxis immediately after aneurysm coiling is not associated with an increased risk of seizures. seizures at presentation in patients with asah are not associated with the development of epilepsy at 3 months. both phenytoin and levetiracetam are well tolerated in patients with asah when limited to the immediate post-hemorrhagic period. the main preceding factor of delayed cerebral ischemia (dci) in asah is cerebral vasospasm (cvs). anticipating dci can have a major impact on patient outcomes. studies have attempted to predict dci in patients with asah by using various imaging modalities that measure cvs, including transcranial doppler ultrasonography, ct perfusion (ctp), and mr perfusion. few compare these imaging modalities to the accepted gold standard of dsa. we propose that mri with arterial spin labeling (asl) can be used as a sensitive and specific measure of cvs and as a marker to identify patients with asah who are at risk of developing dci. to support our hypothesis, we compared asl results in patients with documented cvs on dsa who developed dci. 165 patients with the diagnosis of asah were admitted to our nicu in the academic years 2013 to 2016. the inclusion criteria were the presence of asah confirmed by dsa, diagnosis of dci by a neurointensivist, mri with asl, and a repeated dsa during the hospitalization after dci was suspected. all patients underwent mra with asl on day 8 in an attempt to capture the peak time of cvs. nine patients were included in this study. all cases with perfusion defects on asl sequences had confirmed cvs on dsa except for one. the outlier in our cohort developed dci with asymmetry on asl that was not demonstrated on dsa. to our knowledge, no studies have compared the specificity of asl with dsa in detecting cvs. this study highlights the utility of asl in detecting cvs in patients with asah.
our limited data suggest that asl can be used for the detection of dci and cvs with greater confidence than conventional modalities. we also suggest that asl approaches the utility of dsa in the detection of cvs. blood glucose dysregulation following aneurysmal subarachnoid hemorrhage is associated with serious complications and poor clinical outcome. an influence of hyperglycemia on the occurrence of delayed cerebral ischemia (dci) is assumed; nevertheless, the exact mechanism remains unclear. the present study aims to investigate the influence of systemic blood glucose levels on cerebral perfusion measured by dynamic perfusion computed tomography (pct) and on outcome. daily serial blood glucose levels and pct data sets of 206 patients treated at our neurointensive care unit after asah were retrospectively analyzed. serial pcts were performed between six hours and 21 days after aneurysm repair. the mean of the mean transit times (mtts) was calculated for each perfusion scan. the maximum mean transit time (maxmtt) and outcome assessed with the glasgow outcome scale were correlated with blood glucose ranges defined as follows: 1) >180 mg/dl (hyperglycemia); 2) 140-180 mg/dl (elevated glucose level); 3) 110-140 mg/dl (strict glucose control); and 4) <110 mg/dl (low glucose level). hyperglycemia (>180 mg/dl) was associated with prolonged maxmtt (p<.05, rs=.13) and was linked to an increased risk of infarction (p<.001), whereas strict glucose control (110-140 mg/dl) correlated significantly negatively with maxmtt (p<.05, rs=-.17). strict glucose control was also associated with a lower occurrence of cerebral infarction and good outcome (p<.001, rs=.47). in contrast, elevated blood glucose levels (140-180 mg/dl) and hyperglycemia showed a negative correlation with good outcome (p<.001, rs=-.26, rs=-.35).
the present analysis supports for the first time the assumption that dysregulation of blood glucose balance influences cerebral perfusion and thus may contribute to the occurrence of dci and poor outcome. therefore, careful monitoring and prompt treatment of blood glucose levels after asah should be highly valued to avoid cerebral perfusion deficits correlated with poor outcome. the aim of this study was to determine the correlation between transcranial doppler (tcd) velocities and angiographic vasospasm after subarachnoid hemorrhage (sah). 221 patients with sah were evaluated with spencer technologies tcd power m-mode from days 2-14 following the sah. both temporal windows were insonated to determine flow velocities in the middle (mca) and anterior cerebral arteries (aca), and the suboccipital window was used to determine flow velocities in the vertebral (va) and basilar arteries (ba). the middle cerebral artery/ipsilateral extracranial internal carotid artery velocity ratio (lindegaard ratio) was also correlated with vasospasm. ct angiography and conventional cerebral angiography were used to confirm tcd findings suggestive of vasospasm. the sensitivity, specificity, and likelihood ratios for positive and negative tcd results were calculated. there were 131 males and 90 females, with mean age 42.72 +/- 7.6 years. 88% were aneurysmal sah. delayed ischemic neurological deficits (dind) developed in 48/221 patients (21.70%). velocity thresholds in cm/s were useful (likelihood ratio for a negative result = 0.15, likelihood ratio for a positive result = 14.89). lindegaard ratios correlated well with vasospasm. tcd diagnosis of vasospasm was most often present in the mca, followed by the aca and basilar arteries. tcd is a good non-invasive method to detect vasospasm and predict the occurrence of dind. very high velocities correlated with angiographic vasospasm. tcd is also useful for following up patients with angiographically proved vasospasm.
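the reported likelihood ratios can be turned into post-test probabilities using the dind incidence above (48/221) as the pre-test probability; this is a worked example, not an analysis from the study:

```python
def post_test_probability(pretest_p, lr):
    """bayes via odds: post-test odds = pre-test odds x likelihood ratio."""
    odds = pretest_p / (1 - pretest_p) * lr
    return odds / (1 + odds)

# dind incidence above: 48/221 (21.7%); lr+ = 14.89, lr- = 0.15
pre = 48 / 221
print(round(post_test_probability(pre, 14.89), 2))  # 0.81 after a positive tcd
print(round(post_test_probability(pre, 0.15), 3))   # 0.04 after a negative tcd
```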
aneurysmal subarachnoid hemorrhage (asah) is a significant cause of morbidity and mortality: the mortality rate approaches 50%, and nearly half of the survivors remain unable to care for themselves. dci occurs in 29% of these patients and, when present, doubles the risk of poor outcome. several methods have been used to treat cerebral vasospasm and dci, which is a major cause of morbidity and mortality in patients with aneurysmal subarachnoid hemorrhage (sah). milrinone is a safe and potentially effective treatment of dci, as reported in low-level-of-evidence literature; however, its efficacy has not been compared in a randomized way to placebo. we will examine the effectiveness and safety of intravenous injection of milrinone for the treatment of dci following aneurysmal subarachnoid haemorrhage. our intention is to study the outcome of using milrinone as an addition to current therapies, such as hypertensive therapy, which are not effective enough yet cannot be replaced, as they are standard of care. the study is a pilot trial of a randomized, placebo-controlled, double-blind trial testing the potential beneficial effect of milrinone, a phosphodiesterase inhibitor, on clinical neurological outcome in patients with dci after aneurysmal subarachnoid hemorrhage. the study drug will be given along with standard therapy when dci occurs. the administration of milrinone increases cerebral blood flow, most likely as a result of cerebral vasodilation. as intravenous milrinone has not yet been shown to have an effect on dci in a randomized controlled trial, this pilot trial is a step towards that study.
milrinone is a promising treatment for delayed cerebral ischemia following aneurysmal subarachnoid haemorrhage, particularly when used with a standardized protocol, and response to it may be a finding suggestive of good prognosis.

fever in the neurocritical care population is very common and is strongly associated with increased mortality and poor outcome. fever is aggressively treated in the icu due to its deleterious effects, yet despite best efforts with standard antipyretic agents, and even with aggressive cooling measures using endovascular cooling catheters, some patients may still have refractory fevers. celecoxib, a cyclooxygenase-2 (cox-2) inhibitor, has been used as an adjunctive antipyretic agent. this is a retrospective analysis to evaluate the effectiveness of celecoxib in lowering temperatures in patients with refractory fevers. this is a retrospective chart review of patients admitted to a neurointensive care unit at a single institution with fevers (>38.3°c) that did not respond to conventional treatment with acetaminophen, endovascular cooling catheters, and ibuprofen. 83 patients with severe traumatic brain injury, subarachnoid hemorrhage, and intracerebral hemorrhage were included. patient temperature recordings were obtained in the period of 24 hours before and 48 hours after administration of the first dose of celecoxib. the mean temperatures of the before and after periods were compared and the temperature difference was calculated. 83 patient records were included. the averages of the mean temperatures in the before and after periods were 37.79°c (± sem 0.0659) and 37.37°c (± sem 0.062), respectively. there was a significant difference on the mann-whitney-wilcoxon rank sum test (p < 0.000003). on average there was a drop of 0.41 (± sem 0.069) degrees celsius in the mean temperature after the start of treatment.
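the before/after comparison described above reduces to a mean difference and a standard error of the mean; a minimal sketch of both computations (the study's actual per-patient data are of course not reproduced here):

```python
from math import sqrt
from statistics import mean, stdev


def sem(values):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    return stdev(values) / sqrt(len(values))


def mean_temperature_drop(before_c, after_c):
    """Drop in mean temperature (degrees C) from the pre-dose period
    to the post-dose period; positive means the temperature fell."""
    return mean(before_c) - mean(after_c)
```

with the period means reported in the abstract (37.79°c before, 37.37°c after), this difference works out to about 0.42°c, consistent with the reported average drop of 0.41°c per patient.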
in neurocritically ill patients with fevers refractory to conventional treatments, adding celecoxib, a cox-2 inhibitor, seems to be effective at lowering core body temperature. further study is warranted to evaluate for adverse effects such as risk of cardiovascular events.

achieving and maintaining normothermia (nt) after subarachnoid hemorrhage (sah) or intracerebral hemorrhage (ich) often requires temperature modulating devices (tmds). shivering is a common adverse effect of tmds that can lead to further costs and complications. we evaluated a new esophageal tmd, the ensoetm (attune medical: chicago, il), to compare nt performance, shiver burden, and cost of shivering interventions with existing tmds. patients with sah or ich and refractory fever were treated with the ensoetm. patient demographics, temperature data, shiver severity, and amount and costs of medication used for shiver management were prospectively collected. control patients who received other tmds were matched for age, gender, and body surface area (bsa) to ensoetm recipients, and similar retrospective data were collected. all patients were mechanically ventilated. fever burden was calculated as the area under the curve of time spent above 37.5°c or 38°c. demographics, temperature data, and costs of ensoetm recipients were compared to recipients of other tmds. eight ensoetm recipients and 24 controls between october 2015 and november 2016 were analyzed. there were no differences between the two groups in demographics or patient characteristics. no difference was found in temperature at initiation (p = 0.4) or fever burden above 38°c (p = 0.47). ensoetm recipients showed a non-significant trend toward taking longer to achieve nt than recipients of other tmds (p = 0.07). ensoetm recipients required fewer shiver interventions than controls (p = 0.03) and incurred fewer costs per day than controls (p = 0.04).
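"fever burden as area under the curve above a threshold" can be sketched as a trapezoidal integral of the clipped temperature excess; this is one plausible implementation, not necessarily the exact formula the authors used.

```python
def fever_burden(times_h, temps_c, threshold_c=38.0):
    """Approximate fever burden (degree-hours) as the trapezoidal area of the
    temperature curve above `threshold_c`. Segments that cross the threshold
    are not interpolated, a simplification of this sketch."""
    excess = [max(t - threshold_c, 0.0) for t in temps_c]
    burden = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        burden += 0.5 * (excess[i - 1] + excess[i]) * dt
    return burden
```

for example, a patient held at 39°c for two hours accumulates 2 degree-hours of burden above a 38°c threshold, while a patient who never crosses the threshold accumulates none.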
the ensoetm achieved and maintained nt in sah and ich patients and was associated with less shivering and lower pharmaceutical costs than other tmds. further studies in larger populations are needed to determine the ensoetm's efficacy in comparison to other tmds.

targeted temperature management is an important aspect of care in neurologically impaired patients. however, achieving the optimum temperature for a specific patient can be challenging; a patient's size, body composition, metabolism, and hypothalamic function contribute to his or her response to a given temperature management modality. the purpose of this study is to evaluate patient response to esophageal temperature management when continuously applied for at least 72 h. deidentified core temperature data for 18 patients (a total of 1237 measurements) were obtained from three hospital sites where esophageal temperature management was used for at least 72 h (range 72-452 h). indications for active temperature management included cardiac arrest (7), refractory fever (4), subarachnoid hemorrhage (4), intracranial hemorrhage (2), and traumatic brain injury (1). goal temperatures ranged from 33-38°c and initial patient temperatures ranged from 35-40°c. deviation from goal was calculated by subtracting the target temperature from the actual temperature for each measurement, which allowed the calculation of the mean and standard deviation for each time point across all temperature management protocols. across 129 time points, representing an average treatment time of 137.7 h, 95.3% of mean deviations from goal were within ±1°c and 68.5% were within ±0.5°c. in interpreting these results, several limitations must be considered. this dataset reflects a wide range of temperature management protocols and clinical scenarios; for example, a larger-than-average deviation in measurements recorded in the 24-36 h period was related to rewarming in cardiac arrest patients who rewarmed slowly.
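the deviation-from-goal metric above (actual minus target, then the fraction of deviations within ±1°c and ±0.5°c) can be sketched directly; the function below is an illustrative reconstruction of that calculation, not the authors' code.

```python
def within_tolerance_fractions(actual_c, target_c, tolerances=(1.0, 0.5)):
    """Per-measurement deviation from goal = actual - target (degrees C).
    Returns, for each tolerance, the fraction of deviations within
    plus-or-minus that tolerance."""
    deviations = [a - t for a, t in zip(actual_c, target_c)]
    n = len(deviations)
    return [sum(abs(d) <= tol for d in deviations) / n for tol in tolerances]
```

applied per time point across all patients, the means of these deviations give exactly the "±1°c / ±0.5°c" summary the abstract reports.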
also, the later time points were dominated by sah, ich, and refractory fever patients, who often experience more pronounced fever spikes. this analysis indicates that esophageal temperature management is a feasible option for patients who require active temperature management for 72 or more hours.

the role of therapeutic temperature management (ttm) in neurocritical care is uncertain. one question that has been inadequately addressed is the diversity of practice across multiple neurocritical care units (nccus) throughout the world. a barrier to understanding this practice variance is the lack of a data collection method that would provide adequate understanding of how ttm is implemented in various nccus. the purpose of this pilot study is to test the efficacy of a data collection method that would provide unit-level data on ttm practice. the design was a prospective, observational, cross-sectional study using quality assurance methodology. the study received institutional review board approval. to reduce the risk of loss of confidentiality and promote privacy, individual patients were not consented. data on temperature management were collected each day for 90 consecutive days, and completed data were available for all 90 days. a mean daily census of 15 patients included the following mean numbers of patients: sah (2), ich (1), ischemic stroke (2), and other (9). of those, ttm was provided to at least one patient during 40 of 90 days (44.4%). the most common ttm method (tylenol) was used on 112 patient-days; surface cooling was used on 53 patient-days. ttm was initiated for fever management (109 patient-days) and normothermia (11 patient-days). the most common associated complications were hypocalcemia (19) and hypokalemia (5). the data collection form was easily and quickly completed on a daily basis, but provides limited data.
although the form captured a significant number of events surrounding the use of ttm, the primary limitation noted is the inability to link specific events (e.g., hypokalemia) to specific patients or diagnoses. this pilot study demonstrates the efficacy of data capture and provides insight towards refining a prospective observational study to describe ttm practice.

brainstem tumors are exceedingly dangerous in the neurocritical care setting due to their proximity to the structures responsible for basic human survival; these lesions may cause autonomic dysregulation. we report a rare case of a female with a past surgical history of ventriculoperitoneal shunt who presented with a brainstem mass of müllerian-type epithelial tissue. methods: a 38-year-old caucasian female presented to our hospital status post fall after episodes of lightheadedness, as well as episodes of decreased respirations in her sleep. mri showed a medullary contrast-enhancing mass with calcifications measuring 2.8 x 2.7 x 1.5 cm and a small calcified lesion in the right lateral ventricle. suboccipital craniectomy for biopsy and decompression was performed. intraoperatively, the heart rate and blood pressure dropped transiently because the mass was firmly adhered, with calcification, to the medulla. the neuropathologist diagnosed the tissue as müllerian-type epithelium with estrogen receptors. postoperatively, our patient encountered several instances of cardiac pauses on monitoring that required cardiology to place a permanent pacemaker. the above is a rare case of a calcified, heterogeneously contrast-enhancing brainstem mass that underwent neurosurgical biopsy. histopathology results indicated müllerian-type epithelial tissue, which is tissue that gives rise to the female reproductive organs. the origin of a brainstem lesion from an embryologically gynecological site could be speculated to have traveled retrograde via the ventriculoperitoneal shunt catheter.
the patient required postoperative cardiac management and intervention with a pacemaker for encroachment or mechanical conflict of the mass onto the rostral ventrolateral medulla. oncology recommended a pet-ct scan and further consideration of a tamoxifen chemotherapeutic regimen. this case is a reaffirmation of the importance of brain tumor location and tissue diagnosis for the purpose of adjuvant treatment of neurosurgical lesions in the neurocritical care setting.

tranexamic acid (txa) has been used off-label in cardiovascular and orthopedic surgery, as well as in trauma resuscitation. the use of txa has increased since the publication of crash-2 (2010) and matters (2012), which demonstrated its efficacy in reducing bleeding in trauma patients. there remains concern about the thrombotic risk as well as the reduction in the seizure threshold after txa administration. case description: we present a case of a 25-year-old female admitted to the surgical icu after a motor vehicle accident with multiple traumatic pelvic and extremity fractures and soft tissue injury. she subsequently developed extensive arterial and venous thromboses with bilateral acute ischemic strokes and superimposed posterior reversible encephalopathy syndrome after txa administration. a second case involved a 50-year-old female who had a fall from standing and was given txa in the field by ems. she was admitted to the neurocritical care unit with status epilepticus and suffered a complicated course with cardiogenic shock due to stress-induced cardiomyopathy. discussion: the risk-benefit balance of txa administration is generally considered acceptable in severe bleeding. the cases presented here suggest the neurological risks of txa administration may be poorly understood and demonstrate the need for better patient selection and heightened awareness for early identification and management of complications, given the possible severity of neurologic sequelae.
conclusion: txa is an anti-plasmin drug that is increasingly used in the areas of trauma and postoperative bleeding. we aim to educate clinicians about the potential neurological complications that can arise with its use.

cryptococcus neoformans is normally an opportunistic infection known to cause meningoencephalitis and can present with stroke-like symptoms. on imaging, cns vasculitis can be identified, which can lead to cerebral infarcts. when involved, these cerebral vessels are small, leading to lacunar infarcts. we present a case that involved a large vessel territory, leading to patient mortality. initial treatment with glucocorticoids, though beneficial in other meningoencephalitides, may actually be harmful in fungal cns infections. case: a 57-year-old male presented with 2 weeks of slurred speech and worsening headache. an enhancing lesion in the left temporal lobe on brain mri was concerning for vasculitis. the patient was treated with glucocorticoids, had a negative rheumatologic workup, and was discharged home. he subsequently presented 2 days later with worsening symptoms, with ct imaging showing completed infarcts. blood cultures were positive for cryptococcus neoformans; the patient died due to diffuse right mca territory edema and brain herniation syndrome. discussion: it is important to consider cns infection even in immunocompetent patients who present with any of the following: fever, nuchal rigidity, mental status change, and headache. cns vasculitis in association with infection is caused by basilar meningeal exudates, which cause traversing vessels to become inflamed, leading to distal inflammation and subsequent thrombus and infarction. we present a right mca territory infarct, presumed to be based on the aforementioned vasculitic process. when acute infarcts are associated with opportunistic cns infections, they are usually not associated with large vessel infarction.
we also examine the adjunctive use of glucocorticoid therapy for treatment of fungal cns infections. this is an infrequent case of cryptococcus neoformans causing a cns infection in an hiv-seronegative patient not on chronic immunosuppressive medications, and it presents a unique complication of cryptococcal infections: a cns vasculitis leading to infarction in a large vessel territory.

we describe the baseline characteristics, continuous intravenous midazolam doses, seizure control, hospital course, and outcomes in patients who received high-dose continuous midazolam infusion for refractory status epilepticus. in this retrospective case series, we evaluated adult patients with refractory status epilepticus treated with high continuous intravenous midazolam doses in an academic neurocritical care unit between august 2016 and june 2017. four patients were identified, and the maximum midazolam dose was recorded for each. withdrawal seizures (occurring within 48 hours of discontinuation of continuous iv midazolam) occurred in patient b. "ultimate continuous iv midazolam failure" (a patient requiring change to a different continuous intravenous antiepileptic drug despite a maximum optimized dose) was not observed in any of the four patients. hospital complications occurred in patients a and b due to infections, and hypotension related to continuous infusion midazolam occurred in patient a. three of the four patients were discharged alive to a skilled nursing facility; after a goals-of-care discussion with the family, the fourth patient had withdrawal of care due to the severity of his brain injury. in this case series, we report the use of high-dose continuous iv midazolam for treatment of refractory status epilepticus. there were no midazolam-related deaths.

neurologic complications in infective endocarditis (ie) occur in up to 35% of cases and are independent predictors of mortality. infectious intracranial aneurysms, known as "mycotic aneurysms" (ma), are rare, constituting 2-5%.
the hemorrhage rate is 50%, and mortality is 80% with rupture. a ruptured ma poses a significant management conundrum due to the lack of solid prospective data guiding the order (cardiac vs. neurosurgical) or timing (early vs. delayed) of surgery. a 34-year-old male iv drug abuser presented with acute hypoxemic respiratory failure secondary to pneumonia and suspected meningitis, gcs 6, intubated, on iv antibiotics. hemodynamic instability prompted a tee, which showed a large aortic valve vegetation. blood cultures positive for mssa fulfilled criteria for ie. tests showed kidney infarctions. ct brain showed a right mca territory infarct with sah. cta head revealed a small ma along the distal right mca m4 branch, confirmed with cerebral angiogram. acute heart failure and arrhythmia led to discussion of cardiothoracic surgery for valve replacement. due to the ruptured ma, the decision was made to secure it prior to cardiac surgery. after failed endovascular intervention, the patient underwent surgical clipping. postoperative mri brain showed new infarcts suggesting recurrent embolization. due to the risk of intracranial bleeding, cardiac surgery was postponed, initially for at least 2 weeks and then to 6 weeks. the patient underwent avr with a st. jude mechanical valve after completing 6 weeks of antimicrobial therapy and was discharged on anticoagulation with a modified rankin scale score of 1. this case reflects on how urgent surgical intervention should take place. the safe interval between a neurological event and cardiac surgery is largely debated because of the lack of controlled studies; there is no consensus on how to approach these cases given the paucity of robust evidence, and given their rarity the best management modality remains unclear. this case stresses the importance of multimodal therapy in achieving a good outcome, although the timing of surgery remains a matter of debate.

we present a patient with vertebral artery cerebral air embolism (cae) following blunt trauma.
case presentation: a 49-year-old male was admitted following a motor vehicle collision with a right vertebral artery dissection and occlusion with intraluminal air, widespread pneumocephalus, bilateral pneumothoraces, a pulmonary laceration, and multiple fractures including ribs, the c3 transverse foramen (with normal alignment), and femur. his pupils were initially nonreactive, and he experienced one hour of witnessed generalized seizure activity on arrival despite aggressive treatment. management: midazolam infusion, levetiracetam, and fosphenytoin were initiated for seizure control. targeted temperature management to 36°c was initiated on arrival out of concern for hypoxic brain injury. computed tomography at 12 hours demonstrated resolution of vertebral and intracerebral air, diffuse edema, and diffuse loss of gray-white matter differentiation, so a hypertonic saline infusion was initiated. the following day, an mri demonstrated diffusion restriction in the areas adjacent to the air, including c3-4 and diffusely throughout the bilateral cerebral hemispheres. prognosis was thought to be poor. however, the following day, the patient awoke. by day four he followed commands. he was discharged to skilled nursing on day 17, and at three months he had only minimal residual right hip weakness. discussion: there are only three case reports of cae following blunt trauma, and only one involving the vertebral artery. air migrates to the arterial circulation due to a positive gradient from low central venous pressure or high airway pressure; pulmonary venous air then embolizes to the cerebral vasculature. as little as 2 ml of arterial air emboli can be fatal, with the major cause of death being circulatory obstruction and arrest from air trapped in the right ventricular outflow tract. conclusion: this patient developed pneumocephalus and cae due to a pulmonary laceration. as the cerebral air reabsorbed, his seizures resolved and his exam improved.
petrous ica aneurysms are extremely rare [1, 2] and difficult to treat surgically, due to the inherent challenges of microsurgical access to the carotid canal of the petrous bone [3, 4]. endovascular approaches may also prove challenging, typically as the consequence of therapeutically unamenable morphology, but occasionally due to size considerations as well [5]. a 23-year-old male presented with headache and vertigo for the past 3 weeks. the patient was hiv-positive with medication noncompliance and denied any history of trauma or head injury. head ct identified a 3.9 x 2.3 cm heterogeneous soft tissue density lesion in the right petrous bone. ct angiography revealed a 3.0 x 1.8 x 2.8 cm lobulated giant aneurysm of the right petrous ica. mri/mra was performed to rule out thrombosis and showed a giant, partially thrombosed right petrous ica aneurysm. the decision was made to treat using flow diversion. the patient underwent catheter angiography, confirming a giant 3 x 3.5 cm right internal carotid artery petrous segment aneurysm. we proceeded with flow diversion, successfully placing two pipeline endovascular devices (ped; flex 5 x 30 and 5 x 25). final angiographic runs showed significant stasis within the aneurysm and demonstrated that the flow-diverter construct was well placed both proximal and distal to the aneurysm neck, with no sign of endovascular leak. the patient was discharged home well. we suggest that flow diversion is an ideal treatment for petrous ica aneurysms, specifically unruptured lesions of complex morphology. first, other options for treating petrous ica aneurysms are challenging, not possible, less effective, and/or carry substantial risks. second, several of the disadvantages of the ped, such as occlusion of side vessel branches and preclusion of future coil embolization, do not apply to the petrous segment of the ica. lastly, use of the ped in petrous ica aneurysms has proven effective in the vast majority of reports.
the spot sign is a focus of enhancement within the hematoma on ct angiogram (cta) with unique characteristics: it has a spot-like appearance within the margin of a parenchymal hematoma without connection to an outside vessel; it should measure greater than 1.5 mm in diameter in at least one dimension; its contrast density (in hounsfield units, hu) is at least double that of the background hematoma; and there should be no hyperdensity at the corresponding location on non-contrast ct. it is a strong predictor of hematoma expansion and poor prognosis in intraparenchymal hemorrhage. the pathogenesis of the spot sign remains unclear; some studies showed an association with faster rates of contrast leakage, which indicates continued bleeding. a spot sign has not been reported with isolated intraventricular hemorrhage (ivh) before. we report the case of a 59-year-old man with a past medical history of hypertension who was admitted to the er with acute encephalopathy and right-sided weakness. head ct scan (hct) revealed isolated ivh. cta was notable for a spot sign: it measured 2.8 mm in diameter and 175 hu in density (the surrounding hematoma measured 80 hu), and it lay within the hematoma without connection to any adjacent vessel. a follow-up hct after four hours showed expansion of the ivh. although it seems uncommon, looking for a spot sign in isolated ivh can also anticipate expansion of the hemorrhage. further study is needed to validate this observation and calculate the prevalence of the spot sign in isolated ivh.

west nile neuroinvasive disease (wnnd) may present with nonspecific physical exam and imaging findings. to our knowledge, this is the first report of wnnd involving the temporal lobe in adults with neuroimaging suggestive of limbic encephalitis. our patient presented in winter and developed autonomic instability and sensory deficits, which are all rare findings in wnnd. a 73-year-old texan with dm presented with acute confusion and seizure in november.
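the four spot-sign characteristics listed above form a simple conjunctive rule; a minimal sketch of that rule follows (whether the density criterion is a strict or non-strict "at least double" is an assumption here).

```python
def meets_spot_sign_criteria(diameter_mm: float,
                             spot_density_hu: float,
                             hematoma_density_hu: float,
                             hyperdense_on_noncontrast_ct: bool,
                             connected_to_vessel: bool) -> bool:
    """All four spot-sign characteristics must hold: no vessel connection,
    diameter > 1.5 mm in at least one dimension, density at least double
    the background hematoma, and no matching hyperdensity on non-contrast CT."""
    return (not connected_to_vessel
            and diameter_mm > 1.5
            and spot_density_hu >= 2 * hematoma_density_hu
            and not hyperdense_on_noncontrast_ct)
```

the case reported here (2.8 mm, 175 hu against an 80 hu hematoma, no vessel connection, no matching hyperdensity on non-contrast ct) satisfies every criterion.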
the patient complained of headache, fever, diarrhea, and lower extremity weakness after a fishing trip. he was febrile, with mosquito bites on his arms. the neurological exam was significant for comatose state, absent brainstem and deep tendon reflexes, and flaccid paraparesis. he developed autonomic instability with labile blood pressures. lp revealed 13 wbc/mm3 (monocyte predominance), 23 rbc/mm3, glucose 99 mg/dl, elevated protein of 82 mg/dl, and a positive west nile virus (wnv) igm antibody; gram stain, hsv pcr, and the paraneoplastic and autoimmune panels were negative. eeg showed severe diffuse brain slowing. mri brain had t2 flair and dwi changes in the right hippocampus and posterior limb of the internal capsule. emg described severe subacute sensorimotor axonal polyneuropathy without prolonged distal latencies and with normal conduction velocities. he received 5 days of ivig without improvement and was terminally extubated. our patient presented with both clinical entities of west nile: wn fever and wnnd (present in less than 1% of cases). he had axonal polyneuropathy with paralysis, which is due to inflammatory changes in the white matter tracts affecting spinal sensory pathways. sympathetic ganglia involvement caused the autonomic instability, another very rare manifestation of wnnd. the november presentation was due to a warmer texas winter. recognize that west nile fever and west nile neuroinvasive disease may present together in winter, and that west nile neuroinvasive disease can present with rare temporal lobe neuroimaging, sensory involvement, and autonomic instability.

intracerebral hemorrhage (ich) is a common pathology seen in the neurocritical care setting that can be associated with significant morbidity and mortality. the use of sympathomimetic agents containing phenylpropanolamine (ppa) has been associated with ich in the past, which led to the drug's removal by the fda as an over-the-counter medication in 2005.
we report a case in which ppa was the etiology of a spontaneous ich in a patient who was taking an appetite suppressant. case report and review of the literature: we report a case of a 40-year-old female with no prior medical history who presented with sudden-onset left-sided hemiparesis and hemianesthesia, found to be due to a right striatocapsular intraparenchymal hematoma. systolic blood pressures at presentation and throughout the hospital course were normal. extensive workup, including multiple ct scans of the head, mri brain, ct angiography, mr angiography, and digital subtraction angiography, was performed with no evidence of any vessel abnormality. the etiology of the ich was attributed to the use of ppa. in young patients with no known comorbidities, ppa use should be considered a primary etiology of ich when no intracranial vessel abnormality can be detected.

seizures have been known to cause sudden death, but reports in the literature describe only cardiopulmonary failure in cases of sudden unexpected death in epilepsy (sudep). we present the case of a patient who presented post-seizure and developed sudden, progressive, and fatal cerebral edema within 14 hours after a second seizure. a 22-year-old female with a history of down syndrome and epilepsy presented to the emergency department after a prolonged convulsive seizure. she received 2 doses of 2 mg lorazepam and levetiracetam 6.7 mg/kg, with cessation of seizure activity and return to baseline neurologic status within 5 hours of the initial event. head ct showed lack of sulci throughout the cerebral hemispheres and basilar cistern effacement despite her being at her baseline neurologic status. 8 hours after presentation the patient had another seizure, vomited, and was intubated, and an additional 15 mg/kg of levetiracetam was given. 14 hours after presentation, the patient was admitted to the neuroicu with absent brainstem reflexes and a repeat head ct with worsened cerebral edema and tonsillar herniation.
formal brain death testing was performed approximately 24 hours after the patient's initial presentation. seizures are known to cause a hypermetabolic state in the brain: uncontrolled neuronal firing leads to hyperemia, failure of the na+/k+ atpase pump, increased levels of neuronal chloride, and inability of cells to maintain homeostasis. in this case, the patient's initial head ct showed cerebral edema, likely from prolonged seizure activity. once the second convulsive seizure occurred, a period of pre-intubation hypoxemia coupled with post-intubation hypotension allowed for progression of cerebral edema in an already compromised brain, similar to what is seen in post-cardiac arrest and traumatic brain injury. this case illustrates the importance of controlling for factors that can contribute to secondary brain injury in seizure patients.

posterior reversible encephalopathy syndrome (pres) is a clinico-radiographic syndrome characterized by seizure, headache, encephalopathy, and neuroimaging findings of symmetric white matter edema in the posterior cerebral hemispheres. cerebellar and brainstem involvement occurs rarely. here, we report a patient who presented with severe pres complicated by diffuse cerebellar edema and obstructive hydrocephalus requiring decompression with ventriculostomy placement. this is a case report from a tertiary medical center. a 23-year-old woman with a history of migraine presented to the emergency room with a 2-day history of fever, right upper quadrant abdominal pain, nausea, and vomiting. on day two of hospitalization, the patient developed worsening headache, dizziness, and lethargy, and her blood pressure was elevated to 188/93 mmhg. ct of the brain showed edema of the cerebellum and bilateral occipital lobes with effacement of the fourth ventricle and associated hydrocephalus involving the lateral and third ventricles. mri obtained postoperatively revealed diffuse t2-weighted/flair hyperintensities in the parietal and occipital lobes and cerebellum.
there was no mass lesion or restricted diffusion on diffusion-weighted images (dwi) suggestive of acute infarction. cerebellar edema with compression of the fourth ventricle and hydrocephalus was slightly improved status post interval ventricular drain placement. the ventriculostomy was weaned off over the course of seven days. follow-up mri showed improvement of the hydrocephalus with a decrease in t2-weighted hyperintensities in the posterior parietal and occipital lobes as well as within the cerebellum. severe cerebellar edema with obstructive hydrocephalus is an exceedingly rare complication of pres; however, prompt recognition and surgical decompression, in addition to usual medical management, are critical to achieve a favorable outcome. while obstructive hydrocephalus may be successfully treated with medical management and blood pressure reduction, this case emphasizes that clinical evidence of brain herniation should prompt immediate consideration of emergent ventriculostomy placement or surgical decompression to redirect cerebrospinal fluid and reduce intracranial pressure.

one of the biggest uses of qeeg is the alpha-delta ratio (adr): adr drops of 20% from baseline are associated with vasospasm and delayed ischemic neurological deficits (vsp/dind). we describe a case in which subtle qeeg adr change occurred in a poor-grade sah patient over a number of days, making it challenging to detect an acute adr drop. this is a case report and literature review. this study also compared hemispheric adr values against the mca values by tcd, dsa, cta, and clinical exam. a 67-year-old female with hunt-hess iii, wfns iv, came in comatose with a ruptured ica aneurysm. over six days, she developed refractory vsp/dind. the patient's adr was gradually declining, but her increased icp required propofol sedation, which itself lowers adr. re-analysis over multiple days had to be performed, and that re-analysis showed a gradual adr decline preceding the vsp/dind.
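the "20% drop from baseline" rule for the alpha-delta ratio reduces to a single fraction and a threshold test; a minimal sketch, assuming the baseline is taken from a sedation-free reference period as the case discussion implies:

```python
def adr_drop_fraction(baseline_adr: float, current_adr: float) -> float:
    """Fractional fall of the alpha-delta ratio relative to its baseline."""
    return (baseline_adr - current_adr) / baseline_adr


def adr_alarm(baseline_adr: float, current_adr: float,
              threshold: float = 0.20) -> bool:
    """Flag when the ADR has dropped by at least `threshold` (default 20%)
    from baseline, the level associated with vsp/dind."""
    return adr_drop_fraction(baseline_adr, current_adr) >= threshold
```

the pitfall described in this case is visible in the formula: if sedation or a gradual multi-day decline has already lowered the "baseline", the fraction understates the true drop and the alarm never fires.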
when looking at our cases, we found a sensitivity of 40% and a specificity of 75% when using the adr nadir compared to cta/dsa. recent publications have shown the adr method has a less than ideal sensitivity of 65% and specificity of 43%. qeeg adr is a useful multimodal monitoring parameter in neuroicu patients with a relatively good baseline adr. however, its ability to detect vsp and dind in poor-grade sah patients whose adr values are already low (< 0.5) is limited, particularly given the confounders in this population, such as eeg artifact, which artificially raises adr values, and sedation (e.g., propofol), which suppresses them. based on this information, we would suggest that neuroicu centers reserve continuous eeg monitoring for other indications, such as nonconvulsive seizures, unless they have sophisticated bedside protocols for sedation vacations (so that the baseline daily adr is not confounded) and eeg department resources (technicians who can fix eeg electrode artifacts).

hypoxic-ischemic brain injury is a severe consequence of global cerebral hypoperfusion following cardiac arrest. brain ct findings may include diffuse sulcal effacement, loss of cisternal spaces, poor differentiation of grey and white matter, and decreased densities in the basal ganglia and watershed territories. the connection between aggressive resuscitation, as seen with in-hospital cardiac arrest, and cerebral edema is unclear. here we present the case of a hemodynamically unstable patient who developed transient, reversible cerebral edema believed to be secondary to aggressive resuscitative efforts and pressor therapy. a 55-year-old female with a past medical history significant for diabetes and hypertension presented to the emergency department with headache and non-bilious vomiting. workup revealed isolated intraventricular hemorrhage secondary to a ruptured left posterior inferior cerebellar artery (pica) aneurysm and a cerebellar arteriovenous malformation, which underwent subsequent embolization.
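to put the sensitivity/specificity figures from the qeeg abstract above in context, they can be converted into likelihood ratios. a minimal sketch using the standard textbook definitions (the helper is illustrative, not part of either study):

```python
# likelihood ratios from sensitivity/specificity (standard definitions)
def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1.0 - specificity)  # odds multiplier for a positive test
    lr_neg = (1.0 - sensitivity) / specificity  # odds multiplier for a negative test
    return lr_pos, lr_neg

# adr nadir vs. cta/dsa in the series above: sensitivity 40%, specificity 75%
lr_pos, lr_neg = likelihood_ratios(0.40, 0.75)
print(round(lr_pos, 2), round(lr_neg, 2))  # 1.6 0.8
```

both ratios are close to 1, i.e., a positive or negative adr-nadir result shifts the odds of vasospasm only modestly, which is consistent with the abstract's conclusion that the method is weakly informative in this population.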
during her early hospital course she remained intubated due to pulmonary factors, but was awake and alert with a non-focal neurologic examination. her course was subsequently complicated by severe metabolic acidosis requiring several bicarbonate boluses and a continuous bicarbonate infusion, cvvhd, intravenous crystalloids, hydrocortisone and multiple pressors to maintain stability. over a 24-hour period she received 10 liters of volume while maintaining a mean arterial pressure above 60 mm hg and o2 saturations above 88%, without requiring cpr. progressive encephalopathy subsequently developed, with a ct brain revealing diffuse sulcal effacement, prompting hyperosmolar therapy. gradually her encephalopathy began to improve, with repeat imaging showing improvement of the cerebral edema and return of grey-white matter differentiation. this case highlights a potential etiology of reversible cerebral edema that may confound early prognostication in patients with hemodynamic instability such as multi-organ failure and in-hospital cardiac arrest. further investigations are warranted.

langerhans cell histiocytosis (lch) is a rare disease with an incidence of 0.2-2.0 cases per 1,000,000 children under 15 years of age. its frequency in adults is unknown. the hypothalamic-pituitary manifestations of lch (commonly diabetes insipidus) and hypernatremia are well-known complications. here we present a case in which a patient presented with poor mental status and the etiology initially remained unknown despite extensive testing. the electronic medical record was reviewed for hospital course, sodium trends, and radiology images. this patient is a 66-year-old female with a history of langerhans cell histiocytosis with biopsy-confirmed suprasellar metastases (complicated by panhypopituitarism) who was transferred to our institution for hypernatremia and hydrocephalus.
she had undergone two cycles of chemotherapy, most recently one week prior to presentation, and five rounds of radiation completed three months earlier. she presented to the community hospital from a nursing facility with unresponsiveness and was intubated on arrival. her sodium was 174 at that time; it had been 137 three days prior. sodium was corrected from 174 to 139 over the course of four days, with a drop from 174 to 157 within the first ten hours. her mental status improved to the point where she was awake and following commands; however, she remained intubated. when she presented to our institution her sodium was 139; she subsequently became unresponsive with a poor neurological exam limited to cranial nerve function only. she was evaluated with eeg monitoring and mri brain; however, both were unrevealing for a cause. an external ventricular drain was placed for concern for hydrocephalus but did not change her exam. one week later, repeat mri brain revealed extrapontine myelinolysis. this case highlights the complications associated with intracranial lch and the need for repeat imaging in patients with rapid sodium correction to identify effects of osmotic demyelination.

cangrelor is a rapid-acting, intravenous p2y12 platelet receptor inhibitor with a plasma half-life of 3-6 minutes and full platelet recovery within one hour after discontinuation. because it is rapidly reversible, cangrelor is commonly used to bridge patients with recent coronary stents to cabg surgery. oral p2y12 inhibitors, such as clopidogrel, have a delayed onset and offset, with platelet recovery occurring over 5-7 days, making their use challenging perioperatively or in the setting of an acute bleed. safety and efficacy data for cangrelor in noncoronary stents are lacking. we present two patients in whom cangrelor was used to maintain internal carotid artery (ica) stent patency acutely.
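the pace of correction in the osmotic demyelination case above can be checked with simple arithmetic. a minimal sketch (the helper name and the ~0.5 meq/l per hour guidance figure are outside assumptions for illustration, not from the abstract):

```python
# serum sodium correction rate in meq/l per hour
def correction_rate(na_start, na_end, hours):
    return abs(na_start - na_end) / hours

# figures from the case above: 174 -> 157 meq/l within the first ten hours,
# and 174 -> 139 meq/l over four days overall
first_10h = correction_rate(174, 157, 10)    # 1.7 meq/l per hour
overall = correction_rate(174, 139, 4 * 24)  # ~0.36 meq/l per hour

# commonly cited guidance limits correction of chronic dysnatremia to
# roughly 0.5 meq/l per hour (about 10-12 meq/l per day)
print(first_10h, round(overall, 2))
```

the overall four-day pace falls within that commonly cited limit, but the initial ten-hour rate is more than three times it, which is the point the abstract makes about rapid early correction and subsequent demyelination.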
both patients presented with an ischemic stroke secondary to acute occlusions of the left ica and left middle cerebral artery (mca) and were taken emergently to the neurointerventional suite for carotid artery stenting (cas) and mechanical embolectomy of the mca clot. heparin and eptifibatide were administered intraoperatively. post-procedure dynact demonstrated intracranial hemorrhagic complications. dual antiplatelet therapy (dapt) with clopidogrel and aspirin, typically initiated following cas, was deferred given the difficulty of reversing its antiplatelet effect in the event of hemorrhage expansion. instead, cangrelor was initiated to maintain carotid stent patency at 0.75 mcg/kg/min in one patient and 0.5 mcg/kg/min in the other, infused for 5.5 and 25 hours, respectively. platelet reactivity was trended with the verifynow® assay and used to adjust cangrelor dosing. serial imaging was obtained to monitor hemorrhage expansion. one patient was transitioned to oral dapt and discharged, while the other deteriorated neurologically from malignant cerebral edema and expired. cangrelor may be useful following cas complicated by intracranial hemorrhage, when the need to maintain stent patency must be balanced against the risk of hemorrhage expansion. further research is warranted to determine its safety and efficacy in noncoronary stents.

although cerebral amyloid angiopathy (caa) has been described in the literature, the different categories of this entity and their recognition and subsequent treatment remain elusive. it is important for neurointensivists to recognize its variable presentation. we describe a single case report and perform a systematic review. caa, depending on pathology, can be categorized as inflammatory caa, in which perivasculitis is seen on biopsy; this causes a non-destructive perivascular inflammatory infiltrate with an amyloid deposition pattern.
on the other hand, amyloid beta-related angiitis (abra) results in a vasculitis: a predominantly granulomatous, angiodestructive, inflammatory-mediated disease affecting leptomeningeal and cortical vessels, characterized by meningeal lymphocytosis and abundant amyloid-beta deposition within the vessel walls. non-inflammatory caa, by contrast, involves no vessel inflammation but rather deposition of amyloid in the vessel walls. we report the case of an 82-year-old man with an extensive cardiac history who presented with syncope. initial computed tomography (ct) of the head was negative. during admission, he acutely started having trouble answering questions, including his name, and was unable to communicate his needs. repeat ct head showed a hypodensity in the left frontal region, which was attributed to a stroke. he then developed complex partial seizures requiring intubation and seizure management. lumbar puncture showed mild pleocytosis. mri brain showed edematous changes of the left subcortical and deep white matter of the frontal lobe, which subsequently worsened on repeat imaging. biopsy was eventually performed and confirmed inflammatory cerebral amyloid angiopathy. he was treated with steroids and immunosuppression with gradual improvement, and at 3-month follow-up in clinic showed continued improvement to independence. clinicians should recognize the various subtypes of caa in their pathology, presentation and potential treatment.

in acute emergency situations, intraosseous vascular access represents an alternative route of vascular access when peripheral vein insertion is difficult. we present the first documented case of intraosseous alteplase (tpa) administration in a patient with acute ischemic stroke symptoms. methods: a 57-year-old male with a past medical history of hypertension, end-stage renal disease, and diabetes mellitus presented to the hospital with sudden-onset expressive aphasia and right-sided numbness 36 minutes prior to ed arrival.
nihss was 3 and a code stroke was activated. the patient's blood pressure was 207/102. ct head did not show any acute intracranial hemorrhage. it was decided to proceed with thrombolytic therapy. one peripheral venous access was obtained, through which a nicardipine drip was started to lower the blood pressure; a second peripheral venous access was attempted multiple times but could not be obtained. tpa is more effective the faster it is administered, and there were no known contraindications to administering tpa via intraosseous (io) access. we report the first known case of successful and safe administration of fibrinolytic therapy through the intraosseous route in a patient with acute ischemic stroke symptoms. intraosseous access has been considered more invasive than intravenous (iv) access and carries a theoretical risk of bleeding; however, we were able to demonstrate tpa administration through io access without any local or systemic complications. the bioavailability of alteplase through io access has not been studied; however, io bioavailability is considered close to that of iv infusion in the case of morphine and vasopressors. no studies negate or support the use of intraosseous access in stroke patients. contraindications are few and complications are uncommon. the findings of our case report suggest that intraosseous cannulation may be safely used for fibrinolysis in acute ischemic stroke patients with difficult peripheral venous access, in either the in-hospital or out-of-hospital setting.

tufts medical center, boston, massachusetts, usa. we report a case of a pregnant patient with bilateral ovarian teratomas who presented with treatment-refractory nmda receptor encephalitis despite removal of the bilateral teratomas, successfully treated with rituximab. this is a case report and discussion of treatment and outcome. a 25-year-old, 22-weeks-pregnant female with known ovarian cysts presented with one week of confusion and subsequent status epilepticus.
she was started on empiric treatment with ivig while undergoing workup. nmda receptor antibodies were confirmed. left oophorectomy and right ovarian cystectomy were performed, both of which confirmed ovarian teratoma. she was given high-dose steroids. her worsening condition prompted consideration of additional agents. plasma exchange and rituximab were initiated, and she was then continued on rituximab alone. she improved dramatically over six weeks and delivered at full term via spontaneous vaginal delivery. at one-year follow-up, the child was healthy and meeting appropriate milestones. we report the use of rituximab for the safe and successful treatment of nmda receptor encephalitis in a gravid female.

neovascular glaucoma (nvg) is a known complication of carotid endarterectomy in patients with carotid stenosis. there are no previous reports of acute nvg refractory to medical treatment following carotid artery stenting (cas). we report a patient who needed surgical treatment for acute exacerbation of nvg following cas. a 56-year-old man with hypertension, diabetes, and hypercholesterolemia presented with recurrent transient weakness in his right hand. fifteen days before presentation, he had experienced acute loss of vision on the left side because of central retinal artery occlusion. magnetic resonance imaging of the brain was unremarkable. conventional angiography showed an occlusion of the left proximal internal carotid artery. ophthalmological evaluation before cas showed neovascularization of the iris and a normal intraocular pressure (iop) of 14 mm hg in the left eye. cas was uneventful, but the following morning the patient developed pain in the left eyeball with an iop of 49 mm hg. anterior chamber paracentesis followed by intraocular injection of bevacizumab, panretinal photocoagulation, and medical treatment failed to reduce the iop below 32-36 mm hg.
eighteen days following cas, an ahmed glaucoma valve was implanted in the left eye to treat the refractory nvg. iop decreased to 6 mm hg and his ocular pain resolved completely after implantation. although nvg is a rare complication of cas, it should be suspected in patients who develop acute ocular pain following cas. nvg may respond to anterior chamber paracentesis, panretinal photocoagulation, and bevacizumab, but surgical treatment, such as implantation of an ahmed glaucoma valve, should be considered in refractory cases.

background: cerebral amyloid angiopathy is a common cause of spontaneous lobar intracerebral hemorrhage. convexal subarachnoid hemorrhage can be a manifestation of cerebral amyloid angiopathy. whether focal amyloid burden predicts future hemorrhage is unclear. case: an 82-year-old man presented with transient left arm weakness and paresthesias in the setting of previous cognitive decline. mri showed a convexal subarachnoid hemorrhage of the right central sulcus, as well as susceptibility-weighted imaging findings consistent with superficial siderosis. lumbar puncture revealed a normal cell count with mildly elevated protein. he had spontaneous resolution of his symptoms after several hours. one year later he presented with sudden-onset confusion, and imaging again showed a convexal subarachnoid hemorrhage over the posterior right frontal lobe. susceptibility-weighted mri revealed hemosiderin over the right posterior frontal and anterior parietal lobes. an amyloid pet scan, obtained one year prior to his first spell as a research participant, demonstrated asymmetric amyloid deposition in the right temporo-parietal region. 6 years after his initial episode he presented again with confusion, headache, and a decreased level of alertness. a ct scan demonstrated a right-sided temporo-parietal intracerebral hemorrhage in the area of asymmetric amyloid deposition on pet. his family opted for comfort measures only, and he was discharged to hospice.
autopsy revealed severe amyloid angiopathy, as well as alzheimer disease, braak stage vi. discussion: this case illustrates the clinical course of a patient with amyloid angiopathy, including recurrent convexal subarachnoid hemorrhages and superficial siderosis. of importance, the amyloid pet scan predicted the location of his intracerebral hemorrhage 7 years later.

we describe the case of a 23-year-old man with a past medical history of migraine headaches who presented with diplopia and vertigo, with symptoms later progressing to lethargy and confusion over 10 days. brain mri revealed a peripherally enhancing mass within the left thalamus with central restricted diffusion, consistent with a cerebral abscess. congenital heart disease discovered in adulthood is an interesting entity, especially when it is the source of brain abscesses. detailed history taking, physical examination and appropriate imaging can usually reveal the anomaly. the diagnosis of brain abscess should prompt the clinician to consider right-to-left shunts as a possible predisposing condition.

management of acute cerebral embolism in patients with implanted ventricular assist devices (vads) is particularly challenging, since chronic anticoagulation often precludes the use of intravenous tissue plasminogen activator (iv-tpa). we describe a vad patient who suffered cerebral embolization and was successfully treated with thrombectomy, emphasizing the nuances particular to this clinical scenario in the context of limited historical experience. a 54-year-old man with heart failure (ejection fraction 10%) and heartware ii vad implantation about 15 months prior was found at the scene of a car accident with expressive aphasia, right homonymous hemianopia, extinction and right hemiplegia, with a national institutes of health stroke scale (nihss) score of 12. upon arrival, his ct was unremarkable, but cta revealed occlusion of the left middle cerebral artery (m1 segment).
since his inr was 2.1, he underwent emergent thrombectomy with the solitaire device, resulting in complete revascularization (tici 3) 120 minutes from onset, with rapid resolution of deficits (nihss 0). the procedural and clinical success was accompanied by a lack of evidence of infarction on subsequent ct studies and a modified rankin score of 0 upon discharge. the removed thrombus displayed early organization, suggesting unexpected firmness and underscoring the potential importance of mechanical removal rather than chemical lysis in vad patients. our case has attributes that set it apart from those previously reported: 1) the use of a hybrid (i.e., retrieval plus aspiration) endovascular retrieval technique; 2) the lack of concurrent use of thrombolytic drugs; and 3) the rapid, sustained and optimal clinical improvement. the utilization of vads continues to grow, yet the literature regarding endovascular techniques for managing these patients remains scarce. however, the increasing availability of centers capable of delivering this type of treatment suggests that thrombectomy should be strongly considered in vad patients with acute cerebral embolism.

extreme swings in cerebral oxygenation have not previously been reported via monitoring of partial brain tissue oxygen levels. here we present an asah patient with brain tissue oxygen (pbto2) monitoring who developed cerebral hypoxia due to cerebral vasospasm, then went on to develop cerebral hyperoxia with associated cerebral infarction. a 54-year-old female with hunt-hess 5, fisher grade 4 sah and an initial gcs of 3t underwent coiling of a ruptured basilar tip aneurysm. a pbto2 monitor was inserted to guide therapy. this patient had multiple episodes of low pbto2, later followed by markedly elevated pbto2 (>80 mmhg). this corresponded to infarction on follow-up head ct and mri, with preservation of local arterial vessels on mra, consistent with a diagnosis of dci.
in the present case, the high pbto2 more likely resulted from a combined effect of: 1) increased cbf from co-administration of ketamine at the time of milrinone infusion; 2) decreased cerebral metabolic demand in the already infarcted left frontal lobe, resulting in reduced oxygen uptake; and 3) accelerated reperfusion, and thus hyperemia, with milrinone. restoration of flow with milrinone may have come too late to reverse the prolonged period of vasospasm-induced ischemia, resulting in perfusion of infarcted tissue, or luxury perfusion. clinicians utilizing pbto2 monitoring for dci management should be cautious of high pbto2 values, as they may herald cerebral infarction. further studies are needed to better elucidate the mechanism of reperfusion injury and potential treatments.

patients with acute brain injury, especially those with intracranial hemorrhage, are at higher risk of hemorrhage while on therapeutic anticoagulation. unfractionated heparin (ufh) is frequently used, as it is easily reversible and has a short half-life. activated partial thromboplastin time (aptt) is traditionally used to monitor its effect. disadvantages of aptt monitoring include inability to reach the therapeutic goal, over- or under-dosing, and the associated complications. anti-xa levels are reported to correlate better with the actual degree of anticoagulation with ufh. this is a retrospective chart review of 3 patients with acute brain injury who required early therapeutic anticoagulation monitored with anti-xa levels. case 1: a 30-year-old man with intracerebral hemorrhage (ich) developed lower-extremity deep vein thrombosis (dvt) and required therapeutic anticoagulation. the patient became therapeutic within six hours of titrating the infusion based on anti-xa levels and remained therapeutic. asymptomatic rectal bleeding associated with a fecal management system was noted. case 2: a 26-year-old man with cerebral venous sinus thrombosis required therapeutic anticoagulation.
the ufh infusion was initially monitored using aptt levels, which varied widely; monitoring was therefore switched to anti-xa levels, which provided a more consistent therapeutic range. however, the patient developed thrombocytopenia in the setting of inflammatory bowel disease, and the ufh infusion was changed to an argatroban infusion. case 3: a 22-year-old man with a lower medullary acute ischemic stroke due to vertebral artery dissection required therapeutic anticoagulation to prevent recurrence. the patient became therapeutic within 27 hours of titrating based on anti-xa levels. for monitoring therapeutic anticoagulation, anti-xa levels appear to achieve the target anticoagulation level faster and with less serial variation than aptt. however, anti-xa level estimation is costlier than aptt and not widely available. restricting it to special populations, such as those with acute brain injury, might justify its use and underscore its cost-effectiveness.

neurological admissions presenting to the icu benefit from a dedicated neurocritical care team, but many community hospitals lack this subspecialty expertise. with an aging population and a neurointensivist shortage, more patients are transferred to designated neurocritical care units, which increases healthcare spending and resource utilization. recognizing this obstacle, we describe the management of a patient in status epilepticus via our novel "eneuro-icu" consult program, in which a 'sub-hub' of the northwell health tele-icu was set up at the only hospital of the 19 in our health system that is staffed 24/7 by neurointensivists. a 56-year-old man with a history of a left frontal meningioma presented with multiple seizures to a hospital within our healthcare system. he received 6 mg of lorazepam and levetiracetam in the emergency department and was admitted to the icu for further monitoring. there he was witnessed to have recurrence of clinical activity concerning for ongoing seizure.
levetiracetam was increased and phenytoin was added. neither immediate neurological consultation nor continuous eeg was available, so an "eneuro-icu" consult was obtained. in this model, the bedside provider contacts the tele-icu, which facilitates a conference call with the neurointensivist. av technology was used to provide consultations and follow-ups. the neurointensivist determined the patient was rapidly returning to baseline and recommended a head ct, lab studies and continuation of the anti-epileptic drugs. the eicu team monitored the patient overnight. by leveraging the infrastructure in place for remote management of critically ill patients, an additional level of subspecialty care was offered in a timely manner and allowed the patient to remain at his local facility. based on the success of the initial program, we are currently in the process of extending the virtual consult service to various community hospitals' eds and icus to improve outcomes for patients who would benefit from neurocritical care services.

hypoglycemic encephalopathy is a potentially life-threatening manifestation of hypoglycemia; it is usually caused by metabolic derangement, hypoglycemic agents, or malignancy. here, we report a patient with hypoglycemic encephalopathy caused by sleeve gastrectomy. a 34-year-old woman was admitted due to unconsciousness of acute onset. she showed normal corneal and vestibulo-ocular reflexes but a sluggish pupillary light reflex and decerebrate posturing to painful stimulation. she had taken several medications for weight control, including orthosiphon powder and hydrochlorothiazide, after bariatric surgery. laboratory studies showed a significantly low blood glucose level (13 mg/dl) with normal liver enzymes and creatinine. there was no evidence of adrenal insufficiency. electroencephalography showed no epileptiform discharges.
initial and follow-up brain magnetic resonance imaging revealed diffuse high signal intensity in the white matter, extending to the cortex, corpus callosum and posterior limb of the left internal capsule, suggesting hypoglycemic encephalopathy. on abdominopelvic ct, there was no mass lesion such as a carcinoma or insulinoma. the clinical diagnosis of hypoglycemic encephalopathy following sleeve gastrectomy was made given the history of bariatric surgery and the lack of evidence of hypoglycemic agent overdose, adrenal insufficiency, endogenous hyperinsulinism or malignancy. there are several hypotheses that sleeve gastrectomy can encourage hypertrophy of beta cells, hypersecretion of glucagon-like peptide, glucagon abnormalities and increased insulin sensitivity, any of which may induce hypoglycemia. we suggest that clinicians consider sleeve gastrectomy itself as a possible cause of profound hypoglycemia.

pulmonary embolism (pe) is a potentially fatal complication in neurological conditions with plegic extremities. clinical presentations and supportive testing can be variable. we present a case of pe which presented with st-segment elevations 2 weeks after spontaneous intracerebral hemorrhage (sich). this is a case report and review of the literature. we present a 68-year-old female with a history of recent sich with resultant left hemiplegia who presented with a syncopal episode and chest pain. on physical examination, she was noted to be tachypneic and tachycardic with an unchanged neurological exam. pulmonary embolism can present with a variety of ekg abnormalities, including st elevations, after sich, and the treating physician should be aware of these idiosyncrasies. anticoagulation should be cautiously initiated in such cases.

infectious intracranial aneurysms (iia) are rare neurovascular lesions associated with infective endocarditis. we present a case of a large iia which developed within 24 hours of a negative ct angiogram and ruptured despite 4 weeks of appropriate antibiotic treatment.
a 66-year-old woman presented with fevers and malaise. her initial workup revealed an aortic valve mass, and blood cultures grew streptococcus. three days after intravenous penicillin therapy was initiated for bacterial endocarditis, she developed a new headache and right hemianopsia. a head ct demonstrated a left occipital lobe stroke with hemorrhagic transformation. further workup with ct angiography revealed a 5 mm outpouching along a distal branch of the left pca, consistent with an infectious intracranial aneurysm. on repeat imaging, this aneurysm demonstrated growth despite medical treatment and required coil embolization/occlusion. aortic valve replacement was planned after 4 weeks of antibiotic therapy because of continued severe aortic insufficiency and persistent valve vegetation. on the day of surgery, she developed acute word-finding difficulty followed by rapid neurologic deterioration resulting in coma. a head ct demonstrated a new left frontal intraparenchymal and subarachnoid hemorrhage associated with the rupture of an 11 mm x 11 mm irregularly shaped aneurysm in the region of the left mca bifurcation, which had been absent on surveillance ct angiography just 23 hours prior. she underwent emergent coil embolization, extraventricular drain placement, and decompressive hemicraniectomy. despite these measures, her exam did not improve. she was transitioned to comfort measures and life-sustaining therapies were withdrawn. the development of iia can occur despite appropriate medical treatment. these aneurysms may rapidly expand and rupture within hours, as shown by our case. even with recent exonerating imaging, clinicians should maintain a high suspicion for iia development in all infective endocarditis patients.

the corneomandibular reflex, also known as the wartenberg reflex or von sölder phenomenon, is a rare pathological reflex signifying severe supranuclear trigeminal injury.
it presents as contralateral jaw deviation in response to corneal stimulation. etiologies include upper brainstem lesions, large hemispheric lesions with brainstem compression, and advanced amyotrophic lateral sclerosis or multiple sclerosis when corticobulbar pathways are affected. this clinical finding is useful in differentiating structural from metabolic causes of coma, as it would not be present in metabolic phenomena. a middle-aged man presented with a hypertensive right thalamic hemorrhage and a four score of e0m0b0r1. the patient's cornea was stimulated with a cotton swab, and the cornea was tested bilaterally to determine any lateralizing features. video recording was performed with the family's written consent, as the patient was comatose. upon stimulation of the patient's cornea, a contralateral jaw jerk was appreciated; this was replicated on the contralateral side. this case describes a common patient presentation with a rare physical examination finding. there is utility in recognizing this finding, as it aids in determining the underlying cause of a comatose state. the corneomandibular reflex present at presentation rules out a metabolic cause; a structural cause was validated by imaging studies (shown). the reflex arc was researched and has been independently artistically rendered (shown), demonstrating the pathway beginning with the afferent limb of the corneal stimulus (v1), which travels to the main trigeminal sensory nucleus via the trigeminal ganglion. severe supranuclear trigeminal lesions inhibit inhibitory interneurons within the mesencephalic nucleus, leading to activation of the motor nucleus of the trigeminal nerve. this activates the ipsilateral external pterygoid muscle, producing a contralateral jaw jerk. overall, this patient fared poorly and expired several days after admission.

pneumocephalus occurs when air enters and is contained within the intracranial compartment.
when intracranial pressure increases, causing neurological decline, patients can experience nausea, vomiting, seizures, dizziness, and altered mental status. here we present three cases of postoperative pneumocephalus which resolved quickly with humidified oxygen delivered via high-flow nasal cannula. we follow the cases with a review of the mechanisms and pathophysiology of pneumocephalus and its treatment, as well as future directions in management. this is a case series of 3 patients with postoperative pneumocephalus who were treated with high-flow nasal cannula. case 1 describes a 78-year-old woman who underwent hemicraniotomy for removal of meningiomas, with focal postoperative neurological signs and 8 mm of midline shift on head ct due to pneumocephalus. case 2 describes a 57-year-old woman who underwent right anterior temporal lobectomy for seizures and developed postoperative focal prefrontal lobe signs and a mount fuji sign on head ct. case 3 describes a 58-year-old man with bilateral subdural hematomas, status post bilateral burr-hole evacuation, who was excessively somnolent postoperatively with bilateral pneumocephalus. with high-flow nasal cannula, all three returned to clinical, and near-radiographic, baseline within 8, 12, and 20 hours, respectively. recognizing the limitations of a small case series, we believe these cases support the use of high-flow nasal cannula when treating patients with symptomatic pneumocephalus. these patients showed more rapid clinical and radiographic improvement after implementation of hfnc oxygen therapy than previously described with other methods. high-flow nasal cannula may help wash nitrogen out of the lungs, creating a downward nitrogen gradient from the intracranial air bubble, into the blood, and out through the lungs. in addition, high-flow nasal cannula is more comfortable for the patient, allowing more consistent treatment. randomized studies are needed to confirm our findings.
the neurotoxin produced by clostridium botulinum is the most lethal toxin known by weight. early recognition and treatment of botulism are crucial for full recovery. we present a case of progressive paralysis secondary to botulism toxemia following a gunshot wound (gsw). a 27-year-old man suffered a gsw to the right lower extremity. he was treated in the emergency department where the wound was irrigated and closed. some bullet fragments could not be retrieved due to close proximity to popliteal vessels and surrounding nerves. he returned ten days later with diplopia and nausea. he denied consumption of canned foods or illicit substances and had no preceding upper respiratory or gastrointestinal illnesses. on examination, he exhibited ptosis and symmetric bilateral motor weakness with diminished deep tendon reflexes. the gsw showed no signs of infection. progressive respiratory insufficiency resulted in intubation and mechanical ventilation. a lumbar puncture revealed normal opening pressures and cerebrospinal fluid analysis was unremarkable. titers for acetylcholine receptor and anti-muscle specific kinase antibodies were negative, as was a tensilon test. blood toxicology analysis showed no evidence of illicit substances or heavy metal poisoning. a high suspicion for wound botulism led to consultation with the regional poison center and cdc. blood and anaerobic wound samples were obtained for toxin bioassay and culture. empiric intravenous penicillin g therapy was started. equine heptavalent antitoxin (h-bat) was obtained and administered on hospital day 2. serum toxin bioassay tested positive for botulinum neurotoxin type a. the patient required a gastrostomy tube due to persistent dysphagia. after one month of hospitalization, he was discharged home and continues outpatient physical therapy. wound botulism from traumatic injury is exceedingly rare with only one to two cases reported annually. 
our case is the first reported instance of wound botulism from a single gunshot wound.

hyperammonemic cerebral edema (hce) with brain herniation historically carries a dismal prognosis despite aggressive treatment. however, we report a case in which a patient with severe hce and herniation returned to her neurological baseline after aggressive medical management. a 62-year-old woman became acutely comatose with a blown left pupil and required intubation several days after admission for encephalopathy. head ct demonstrated diffuse cerebral edema with central and bilateral uncal herniation. profound hyperammonemia (238 ug/dl) was implicated, though hepatic function was normal. her intracranial hypertension was ultimately controlled using hyperventilation, sedation, and osmotherapy, resulting in normalization of her brainstem reflexes and improvement in her coma and imaging. continuous veno-venous hemodialysis (cvvhd) normalized her ammonia level and her initially refractory encephalopathy. multiple porto-hepatic shunts were identified on hepatic ct angiogram as the cause of her hyperammonemia and were embolized. she was eventually weaned off cvvhd and extubated, without residual neurological deficits. our case demonstrates that, with contemporary management, clinical and radiographic reversal of hce and herniation is possible and the prognosis is not uniformly poor. therefore, neurological prognostication in these patients should only be performed after assessing the clinical trajectory following cerebral resuscitation and ammonia reduction. furthermore, our case provides an example of how cvvhd can be used to reduce refractory hyperammonemia quickly until the cause of the hyperammonemia can be ascertained and addressed. finally, this is the first reported case of hce secondary to a primary portosystemic shunt in the absence of hepatic disease; vascular imaging of the liver should be considered in the work-up of patients with hyperammonemia.
a good neurological prognosis is possible for patients with hce and cerebral herniation with aggressive management that includes reduction of icp and ammonia. cvvhd is a useful adjunct to treat refractory hyperammonemia. a porto-systemic shunt should be considered as an etiology for hyperammonemia.

cerebral venous sinus thrombosis (cvst) often presents with intracerebral hemorrhage and seizures. extensive involvement of the cerebral sinuses can lead to a comatose state due to cerebral edema and associated intracranial hypertension. if not reversed with early therapeutic anticoagulation, mechanical thrombectomy and decompressive hemicraniectomy (dhc) may be necessary as life-saving measures. however, etiological diagnosis of an associated hypercoagulable state is needed for successful long-term treatment. this is a case report of a patient presenting with cvst requiring anticoagulation, dhc, and total colectomy (to treat underlying ulcerative colitis), as treatment with full anticoagulation was associated with life-threatening hematochezia. a twenty-five-year-old man with a one-week history of diarrhea presented with left-sided weakness. imaging studies confirmed extensive cvst with minimal venous drainage through the bilateral cavernous sinuses, as well as right hemiparesis secondary to left post-cingulate intracranial hemorrhage. the patient subsequently developed loss of vision and became encephalopathic, despite initiation of anticoagulation with heparin. hence, mechanical thrombectomy was attempted but was unsuccessful. he also developed consumptive thrombocytopenia, for which his anticoagulation was switched to argatroban. progressive neurologic deterioration necessitated dhc. his neurological examination progressively improved upon re-initiation of anticoagulation, resulting in restoration of vision and resolution of left hemiparesis. later in the disease course, he developed symptomatic hematochezia associated with his primary disease, ulcerative colitis, and required total colectomy.
subsequently he was transitioned to oral anticoagulation and transferred to an inpatient rehabilitation facility due to deconditioning from prolonged hospitalization. cvst can be life-threatening unless early treatment is initiated. appropriate and timely treatment, including etiological diagnosis, can lead to favorable patient outcomes.

adverse effects of intrathecal non-ionic contrast during myelography are rare but can include seizures and encephalopathy. to our knowledge, cerebral edema has been reported in the literature in only two previous cases. we report a case of malignant cerebral edema following intrathecal administration of non-ionic contrast in a patient who developed seizure-like activity with radiographic evidence of acute diffuse cerebral edema on a head computerized tomography (ct) scan. an 80-year-old male underwent an elective spinal ct myelogram using 20 ml of isovue-m 200 non-ionic contrast to evaluate chronic lumbar pain related to spinal stenosis. no complications were reported intra-procedurally and the patient was discharged home. the patient then began to complain of progressively worsening headaches. the following morning he developed nausea and vomiting and lost consciousness with posturing versus seizure-like activity. a head ct revealed extensive brain edema and swelling with crowding of the brainstem and herniation (fig. 1). the patient was intubated and given iv mannitol and 23.4% hypertonic saline, followed by an infusion of 3% hypertonic saline. serial cts revealed complete resolution of his cerebral edema 48 hours after admission (figs. 2 and 3). the patient's mental status improved, he was extubated, and he was discharged home 5 days after admission. while significant adverse effects of non-ionic contrast following spinal myelography are rare, the potentially life-threatening severity of these incidents warrants further patient education following this routine outpatient procedure.
we recommend close neurological monitoring after intrathecal administration of contrast media. patients should be provided with detailed instructions about the potential side effects of non-ionic contrast and how to seek medical attention if symptoms of cerebral edema are noted post-procedurally.

a large acute traumatic subdural hematoma with brain compression and midline shift is typically considered a neurological emergency necessitating surgery. spontaneous resolution of a large subdural hematoma is considered a rare phenomenon, with a few cases reported in the literature. to our knowledge, we present the first case of spontaneous resolution of a traumatic acute subdural hematoma with brain compression and midline shift on dual antiplatelet therapy. a 71-year-old patient initially presented after being found down and unresponsive in his home. the patient was on aspirin and clopidogrel. he was found to have altered mental status, was not following commands, and had a glasgow coma scale score of <8. the patient's initial head ct revealed a large left acute subdural hematoma (sdh) measuring 1.8cm in diameter. neurosurgery was consulted upon arrival for possible emergent evacuation. the patient's repeat head ct showed the sdh had decreased to 1.1cm in diameter. given the rapidly resolving sdh, surgery was postponed. another repeat head ct the following day revealed a further decrease in the size of the sdh to 10mm in diameter. several theories have been proposed for the rapid resolution of an acute sdh, including csf leaking into the sdh through a tear in the arachnoid membrane with rapid reabsorption, redistribution of the hematoma in the subdural space, and acute fluctuations in icp driving the spontaneous resolution of the sdh. close neurological monitoring and repeat imaging may be helpful in managing these patients. as seen in our patient and others, a low-density band in the subdural hematoma may indicate csf and be a predictor of spontaneous resolution of an acute sdh.
the features of this atypical case offer points of discussion regarding the surgical or non-surgical approach to these patients.

early post-hypoxic myoclonus, or myoclonic status epilepticus, develops within 48 hours of the initial anoxic injury and is associated with poor outcomes per current aan practice guidelines. late post-hypoxic myoclonus, or lance-adams syndrome, develops >48 hours after the anoxic injury, after consciousness is regained, and is associated with relatively good outcomes. the patient is a 46-year-old man with a history of alcohol and cannabis use disorder, bipolar disorder, and pnes who presented after an attempted hanging of up to 15 minutes. initial rhythm was pea; he had 3 rounds of cpr, received 1mg epinephrine, and was intubated prior to rosc. myoclonic jerks were noted within 24 hours post-arrest. the hypothermia protocol was initiated as gcs was 3t. ct head showed subtle loss of grey-white differentiation. eeg initially showed that his generalized myoclonic jerks correlated with cortical activity. he was started on a versed gtt, keppra, and vpa with improvement in the frequency of jerks. on post-arrest day 3, mri brain showed mild cerebellar edema. mri c-spine was negative for significant myelopathy, arguing against myoclonus as a spinal reflex. mentation gradually improved; on post-arrest day 7 he opened his eyes to command. eeg evolved to show gpeds and sirpids, and oxc and tpm were added. on post-arrest day 11 a paralytic challenge resolved the electrographic spikes, suggesting a subcortical origin of the myoclonus. he continued to improve cognitively, but despite clonazepam, vpa, and home oxc he continues to have severe intention myoclonus. despite the presumed poor prognosis, the patient's family pursued aggressive measures and his mentation gradually improved. early post-hypoxic myoclonus carries a poor prognosis; however, in this case the patient survived with a good cognitive outcome, likely owing to his age and relatively few comorbidities.
further studies are needed to differentiate early-onset lance-adams from myoclonic status, since prognosis differs greatly.

posterior reversible encephalopathy syndrome (pres) can occur from multiple etiologies and often presents with rapid-onset headache, altered consciousness, seizures, and/or visual disturbances. vasogenic edema involving predominantly cerebral white matter is a key finding on imaging studies. although seizures are a frequent presenting symptom of pres, refractory status epilepticus (rse) requiring multiple antiepileptic medications is very rare. this is a case report of a patient presenting with pres whose clinical course was complicated by rse necessitating the use of intravenous anesthesia, ketamine, and newly available brivaracetam. a 65-year-old woman with a history of congestive heart failure, chronic iron deficiency anemia, and uncontrolled hypertension was admitted for severe encephalopathy and convulsive status epilepticus (cse) lasting longer than 30 minutes, necessitating propofol and midazolam infusions. her admission systolic blood pressures were in the 280s, and mri brain revealed bilateral parieto-occipital t2/flair hyperintensities consistent with pres. despite adequate control of hypertension following admission, the patient remained encephalopathic, and continuous electroencephalography (eeg) demonstrated nonconvulsive status epilepticus (ncse). the patient's ncse continued despite the use of maintenance antiepileptics (fosphenytoin, lacosamide, levetiracetam) and high-dose infusions of midazolam and propofol. a ketamine infusion was started to maximize nmda receptor blockade, and a burst-suppression pattern on eeg was easily achieved with bolus infusions followed by continuous infusion. brivaracetam was substituted for levetiracetam and allowed the patient to remain seizure-free when iv anesthetics were weaned off.
the patient required prolonged hospitalization with gastrostomy tube placement and tracheostomy, which was later decannulated prior to her discharge home with family. a high index of suspicion is necessary to identify pres in patients with ncse and prolonged encephalopathy. early use of ketamine along with a benzodiazepine may result in rapid achievement of burst-suppression to treat se. brivaracetam may be a useful agent to treat rse.

diagnosis of diabetes insipidus (di) includes polyuria, hypernatremia, and low urine specific gravity. we present two patients, receiving hyperosmolar therapy for intracranial hypertension (iht), in whom using low urine specific gravity to diagnose di led to delayed treatment. this is a retrospective case series. criteria used to diagnose di at our institution include polyuria, urine osmolality <300 mosm/kg, and a urine-to-plasma osmolality ratio <2. case 1: a 23-year-old male with subdural hematoma and iht on hyperosmolar therapy developed polyuria. sodium rose from 141 to 172meq/l. urine specific gravity was 1.020, seemingly excluding di. eventually, sodium rose to 184meq/l. specific gravity remained 1.009, but urine osmolality was 162mosm/kg and the urine/plasma osmolality ratio was 0.42, consistent with di. vasopressin was initiated; however, the patient had already developed lactic acidosis and renal failure due to hypovolemia. case 2: a 49-year-old female with intracerebral hemorrhage and iht on hyperosmolar therapy developed polyuria. sodium rose to 176meq/l; specific gravity remained >1.005, but urine osmolality was 187mosm/kg and the urine/plasma osmolality ratio was 0.49, consistent with di. vasopressin was initiated. hyperosmolar therapy increases urine osmoles and raises urine specific gravity. this interferes with the diagnosis of di, which classically requires low urine specific gravity.
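as a minimal sketch (not part of the abstract itself), the institutional screening criteria quoted above, polyuria, urine osmolality below 300 mosm/kg, and a urine-to-plasma osmolality ratio below 2, can be expressed as a quick calculation; the helper name and the polyuria cutoff are assumptions for illustration:

```python
# hypothetical helper illustrating the institutional di criteria quoted in the
# abstract: polyuria, urine osmolality < 300 mosm/kg, and urine-to-plasma
# osmolality ratio < 2. the polyuria cutoff (>250 ml/hr) is an assumption for
# illustration, not taken from the abstract.
def di_screen(urine_osm, plasma_osm, urine_output_ml_hr):
    ratio = urine_osm / plasma_osm
    polyuric = urine_output_ml_hr > 250
    return {
        "urine_plasma_ratio": round(ratio, 2),
        "meets_di_criteria": polyuric and urine_osm < 300 and ratio < 2,
    }

# case 1 reported urine osmolality 162 mosm/kg with a ratio of 0.42,
# which implies a plasma osmolality near 162 / 0.42, about 386 mosm/kg;
# the urine output value here is illustrative only
print(di_screen(162, 386, 400))
```

the ratio, unlike specific gravity, is unaffected by heavy solutes such as mannitol, which is the point the series makes about hyperosmolar therapy.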
while specific gravity measures the weight of particles, osmolality measures particles independent of their weight and thus accurately measures urine tonicity in the presence of heavy particles like mannitol. moreover, the urine/plasma osmolality ratio is able to demonstrate the relative hypo-osmolarity of urine compared to serum, assisting with the diagnosis of di even when urine specific gravity is elevated. we conclude that urine specific gravity does not reliably detect di in patients receiving hyperosmolar therapy. urine osmolality and the urine/plasma osmolality ratio may detect di earlier and prevent dehydration and kidney injury. these findings should be validated prospectively.

endovascular intervention in the treatment of cvt (cerebral venous thrombosis) is an alternative strategy when cases deteriorate despite best medical management or develop refractory intracranial hypertension. we present a patient with cvt due to heparin-induced thrombocytopenia (hit), with intraparenchymal hemorrhage (iph) and refractory intracranial hypertension, who was managed with systemic anticoagulation, continuous intra-sinus infusion of rtpa, and mechanical thrombectomy (mt), resulting in an excellent outcome. case report: a 50-year-old woman with a left parafalcine meningioma s/p cyberknife was started on subcutaneous heparin for radiation necrosis 5 days prior to admission. she presented to the hospital with new-onset headaches and nausea. ct head showed increased edema with midline shift around the meningioma, for which steroids were started. within 2 days her headaches worsened and repeat imaging demonstrated a right temporal iph. emergent hematoma evacuation was performed. mri brain showed a right cerebellar infarct, and mra head showed extensive cavernous sinus thromboses, extending from the right internal jugular vein into the sigmoid and transverse venous sinuses. she tested positive for hit and was switched to an argatroban drip.
the patient, however, continued to deteriorate due to refractory intracranial hypertension. intra-cavernous rtpa injection and mt were performed, but the thrombosis was noted to recur on repeat angiogram 24 hrs later. an intra-sinus catheter was left in place for continuous infusion of rtpa at 1 mg/hr for 12 hrs while the argatroban drip was continued. the patient's intracranial pressure returned to normal. repeat venogram showed resolution of the cvt. the patient tolerated the therapies well, without any further hemorrhagic complications. modified rankin score at 6-month follow-up was 1. this case features successful aggressive endovascular intervention, including in-situ rtpa infusion and mt with concomitant systemic anticoagulation, for cvt due to hit complicated by intracranial hemorrhage and refractory intracranial hypertension. the paucity of high-quality evidence on the safety, efficacy, and modality of endovascular treatment means therapeutic decisions must be made on an individual basis.

acute brain injury may be followed by encephalopathy marked by electroencephalographic features along the ictal-interictal continuum (iic). the use of perfusion imaging to co-localize radiographic features of known malignant eeg patterns may add important context to guide treatment escalation or de-escalation. this is only the second report in which widely available ct or mr perfusion techniques were favored for this application over more cumbersome metabolic imaging such as pet. retrospective analysis was performed on records of patients admitted to a neurosciences icu who exhibited encephalopathy with eeg features on the iic and who underwent perfusion imaging. studies included ct perfusion, mr perfusion, arterial spin labeling, or spect; these studies were obtained for unrelated purposes. escalation or de-escalation of anticonvulsant and sedative medication, hospital course, and patient outcomes were extracted.
perfusion imaging data were juxtaposed with eeg patterns along the iic, and patient outcomes are described in narrative form. seven cases were identified. four cases occurred in the context of intraparenchymal hemorrhage, of which one was secondary to meningioma resection. two cases occurred after treatment for subdural hematoma, and one case was related to ischemic stroke. anticonvulsant and sedative management was escalated or de-escalated relative to the presence or absence of radiographic co-localization of hyperperfusion in all but one case. emerging data indicate that some iic eeg patterns may merit aggressive treatment. metabolic signatures of secondary brain injury, as measured by cerebral oximetry or microdialysis, have associated these patterns with unfavorable outcomes. we report case studies in which information gleaned from basic perfusion imaging may suffice to distinguish between benign iic patterns and those that should be regarded as near-ictal. the cases hint at novel ways to conceptualize treatment of encephalopathy following acute brain injury and suggest a dimensional shift in thinking towards electroperfusive status epilepticus.

sudep has classically been a diagnosis of exclusion. recent studies have shown, however, that similar genes, and even genes within the same family, are associated with both sudep and brugada syndrome. this suggests that the cardiac irritability of brugada syndrome may exist on a spectrum with epileptic sudden death. a 25-year-old man with a history of presumed seizure disorder presented as a transfer from another hospital after being found to have anoxic brain injury following cardiac arrest. he had been shopping with his wife when he was thought to have had one of his typical seizures. he was non-responsive for about 20 minutes. on arrival, ems found him pulseless. cpr was started en route and continued for 30 minutes in the ed, where he was defibrillated three times before achieving rosc. he completed the therapeutic hypothermia protocol.
cardiac catheterization was clean. eeg showed diffuse slowing with no epileptiform discharges. imaging showed diffuse anoxic brain injury. after nearly two weeks without clinical improvement he was transitioned to comfort care. of note, his previous seizure workup had failed to identify epileptiform activity. he was given an aed prescription which he never filled. further chart review showed that he had previously presented to the ed after a "seizure" episode which lasted 45 minutes. his neuro exam was non-focal. ct head was negative. review of his ekg at that time showed a type 1 brugada pattern with an elevated j-point and t-wave inversions in v1 and v2. his sudden cardiac arrest was most likely a result of symptomatic brugada syndrome. symptomatic brugada is important to identify early, since deaths such as the one discussed above may be prevented by an implanted defibrillator. this case highlights the need for heightened awareness and more effective testing for brugada in the setting of seizure or pseudoseizure.

patients with cerebral air embolism (cae) often exhibit symptoms more severe than those typically expected from the number of air emboli and the size of infarcts on brain imaging. however, this discrepancy between symptoms and imaging findings has not been sufficiently explained. we report a case of cae in which disruption of the blood-brain barrier (bbb) and perfusion defects were identified via brain magnetic resonance imaging (mri). a 79-year-old man with a lung mass was admitted to our hospital. percutaneous needle aspiration of the mass was performed in the left lower lobe of the lung. the patient developed sudden confusion and irritability after the procedure. on neurological examination, he could follow only simple commands and exhibited left-sided weakness (medical research council grade 2) and neglect. non-contrast computed tomography (ct) of the brain revealed a few small air emboli in the right frontal subcortical area.
multimodal mri of the brain was performed 50 minutes after the onset of symptoms. t2-weighted gradient-echo imaging revealed only a few small air emboli in the right frontal area, and diffusion-weighted imaging findings were unremarkable. in contrast, time-to-peak imaging revealed widely distributed perfusion defects in the right hemisphere, while contrast-enhanced t1-weighted imaging revealed prominent leptomeningeal enhancement, suggestive of bbb disruption in the right hemisphere. magnetic resonance angiography revealed no steno-occlusive lesions. the patient was treated with 100% oxygen via a high-flow nasal cannula. his weakness subsided the next day, although his confusion persisted for 18 days. follow-up mri performed five days after the onset of symptoms revealed resolution of the abnormal findings. our findings suggest that disruption of the bbb and perfusion defects may develop in patients with cae. extensive impairment of the bbb and perfusion may explain the mismatch between severe neurological symptoms and small air emboli/infarcts.

the co-existence of cerebral salt wasting and diabetes insipidus is an extremely rare entity that has been described in only 2 adult case series and a paediatric series. due to the complex nature of diagnosing this entity, mistreatment may ensue and lead to high morbidity and mortality rates. we report a case of a patient who was admitted to the neurosurgical intensive care unit after sustaining a subarachnoid haemorrhage secondary to a ruptured anterior communicating artery aneurysm. a 67-year-old woman presented with sudden onset of severe headache and nausea. gcs was 12 (e3v4m5) with no focal neurological deficits. she underwent endovascular coiling and embolisation of the aneurysm under general anaesthesia and had a left external ventricular drain inserted. in the immediate postoperative period, she was found to be polyuric, with the initial workup suggestive of diabetes insipidus.
desmopressin was administered with initial good effect. however, her polyuria recurred and persisted despite desmopressin. the repeat workup revealed the presence of concomitant cerebral salt wasting. she was then treated with fludrocortisone and sodium chloride supplementation. careful monitoring of her serum sodium levels and overall fluid balance allowed close titration of the desmopressin, fludrocortisone, and sodium chloride supplementation. she was eventually weaned off treatment and discharged well, with normal sodium levels and no neurological deficits. this case highlights the difficulty encountered in managing concomitant cerebral salt wasting and diabetes insipidus in critically ill neurosurgical patients and the need for a high index of clinical suspicion, early intervention, and close monitoring.

levetiracetam is a commonly used antiepileptic drug (aed) for the treatment of epilepsy. this agent was approved by the fda in 1999, is available in oral and intravenous formulations, and offers advantageous pharmacokinetics, minimal drug interactions, and a favorable side effect profile. the purpose of this case report is to describe a case of severe, asymptomatic rhabdomyolysis exacerbated by levetiracetam administration. the medical record was reviewed and data were collected to describe the case, with a pertinent review of the literature. a 43-year-old african-american male with a history of hypertension presented to the emergency department following a tonic-clonic seizure. baseline labs were drawn and revealed a ck level of 1,226 iu/l, negative urine myoglobin, and normal renal function. levetiracetam therapy was initiated and no further seizures were noted. the patient's ck continued to trend upward throughout his stay despite aggressive fluid resuscitation, with a positive urine myoglobin on hospital day 2. the ck reached a peak of 41,463 iu/l on hospital day 4.
after a literature review and evaluation of his medication list, six case reports were identified linking elevated ck and rhabdomyolysis to levetiracetam administration. at that time levetiracetam was discontinued, and the ck rapidly declined to 6,426 iu/l on hospital day 6. the patient never had muscle pain or kidney injury and was discharged on hospital day 6. this case report describes rhabdomyolysis associated with levetiracetam administration, with a naranjo probability scale score of 6 indicating a probable adverse drug reaction. the adverse effects of generalized pain and neck pain are described in the package insert with an incidence of 2-8%; however, it is not reported whether ck levels were monitored. given the frequent use of this aed and the rare yet serious adverse effect of rhabdomyolysis, ck levels should be monitored upon initiation.

acute toxic leukoencephalopathy (atl) is a potentially reversible disturbance of white matter caused by exposure to toxins. we report the first case of a patient with atl in the setting of a fentanyl overdose and review the literature. a 28-year-old man with a history of opiate abuse was found unconscious, last seen well nine hours prior. he was known to have purchased 10mg of fentanyl that day. he was intubated and briefly required blood pressure support. he was initially hypoglycemic and suffered fulminant liver damage, acute kidney injury, rhabdomyolysis, and stunned myocardium. a comprehensive toxicology screen was positive for cannabis and fentanyl. mri of the brain showed pronounced bilateral restricted diffusion in the high frontoparietal subcortical white matter, with radiographic stability five days later. he remained intubated, and his neurologic exam remained poor, with fluctuating brainstem reflexes and posturing despite improvement in end-organ function. atl has been reported in a 19-month-old girl and an 85-year-old man with exposure to transdermal fentanyl, both of whom had favorable outcomes (2,3).
one case has been reported following oral oxycodone ingestion (11). of 27 cases of atl secondary to inhaled heroin, 48% were fatal (5). preferential white matter injury has been seen in cases of hypoxic ischemic encephalopathy (hie) (7,10). it was initially thought to be secondary to wallerian degeneration following grey matter damage, but post-mortem pathology has shown direct insult to axons (6). atl has been reported in one case of hypoglycemic coma (8) and one case of uremia (9). it has never been reported in isolated hepatic encephalopathy, secondary to seizure, or with cannabis use alone. based on our review of the literature, the most likely causes of this patient's atl are fentanyl or hie. fentanyl should remain on the differential as a previously unreported cause of atl.

autonomic dysregulation is a common complication of acute spinal cord injury (sci). subsequent hypotension may worsen central nervous system injury as well as neurologic and mortality outcomes. to help mitigate this occurrence, consensus guidelines recommend maintaining patients' mean arterial pressure (map) >85 mmhg within the first seven days, based on evidence from limited clinical trials. limited data exist describing the use of midodrine, an alpha-1 agonist and previously the only available enteral vasopressor, for blood pressure (bp) augmentation in this setting. the use of midodrine is limited by cardiovascular side effects such as bradycardia. droxidopa, a novel enteral precursor of norepinephrine that works independently of the central nervous system, may serve a role in sustaining map in acute sci. we describe a novel case of droxidopa use in a 64-year-old male who sustained a spinal cord contusion, secondary to severe stenosis at the fourth cervical vertebra, following a ten-foot fall. droxidopa was used to facilitate vasopressor wean in the setting of neurogenic shock as a complication of acute spinal cord injury.
to sustain adequate cns perfusion (map goal >65-85 mmhg) and facilitate patient transfer to a lower level of care, droxidopa 100 mg three times daily was initiated after five days of continuous infusion of intravenous norepinephrine. daily assessments of hemodynamic parameters were performed, including blood pressure, heart rate, map, and an electrocardiogram. a successful wean of norepinephrine was achieved within 24 hours of droxidopa initiation, with an average map sustained above 65 mmhg. the patient was transferred to a lower level of care within 72 hours of droxidopa initiation. no cardiovascular side effects were observed. droxidopa was well tolerated and facilitated the transition from norepinephrine infusion to an enteral option. droxidopa may be a viable option in stable neurocritical care patients who require vasopressors to sustain adequate cns perfusion.

traumatic brain injury (tbi) is acute and can worsen rapidly during or after surgical treatment. imaging is essential, and computed tomography (ct) is the gold standard in tbi; however, the patient must be transferred to the ct room, or a relatively high-cost mobile ct scanner must be used. ultrasound is inexpensive and involves no radiation exposure. we studied the effectiveness and advantages of intra-operative ultrasound examination in traumatic brain injury patients. intra-operative ultrasound was used after decompression of the injured brain from june 2016 to april 2017. the ultrasound device was the affiniti 50 (philips ultrasound inc, usa) and a 2.5 mhz transducer was used. the transducer was covered with thin transparent sterilized vinyl and ultrasonic gel in an aseptic manner. to protect the brain from injury by the ultrasonic probe, saline-soaked gauze was applied to the cerebral cortex. axial images were captured and promptly stored in the pacs system. ultrasound images were compared to postoperative ct scans.
13 male and 2 female patients were examined by ultrasound during their surgery. the ipsilateral hemisphere, especially the cortical layer, was slightly distorted, making identification difficult. the brainstem area was visible in most cases. the contralateral hemisphere was seen in unilateral craniotomy and craniectomy cases; in bilateral craniectomy cases, both hemispheres were well visualized. parenchymal hemorrhage was also identified, and its removal confirmed, using ultrasound. in cases of severe brain swelling, the arachnoid space showed increased echogenicity. ultrasound images were compared to the postoperative ct scans. intra-operative ultrasound is effective for real-time inspection of the brain during surgery and may help detect contralateral or parenchymal hemorrhage before closure and leaving the operating room. to describe a rare case of varicella zoster virus (vzv) meningitis with progressive multiple cranial nerve deficits in the absence of cutaneous zoster rash. a young woman with idiopathic thrombocytopenic purpura on steroids presented with horizontal diplopia in the setting of seven days of intractable headache. she had no meningeal signs, fever, leukocytosis, or cutaneous rash. within three days of hospitalization, she developed bilateral cn vi, cn iii, right cn v, and right cn vii palsies in a progressive fashion. csf analysis revealed a cell count of 1,146/mm3, a protein of 202 mg/dl, and glucose of 8 mg/dl. cytology, tuberculosis, bacterial and fungal cultures, ace, and hiv testing were negative. vzv-dna was detected in csf at high titer (vzv quantification: 3.9 million). contrasted brain mri revealed mild diffuse leptomeningeal enhancement in the basilar region. she recovered almost all cranial nerve function within 10 days of treatment with acyclovir and high-dose steroids. a diagnosis of polyneuritis cranialis with zoster sine herpete (zsh) was made given pcr-positive vzv-dna in csf. vzv reactivation with a wide array of neurological deficits can present without rash, making diagnosis challenging. 
zsh should be in the differential for acute cranial nerve deficits, as prompt treatment with acyclovir can lead to rapid recovery. stress-induced cardiomyopathy or neurogenic stunned myocardium is a well-documented cardiac complication following aneurysmal subarachnoid hemorrhage (sah). onset is usually immediate, within hours after aneurysm rupture, and is characterized by left ventricular dysfunction with pulmonary edema and elevation in cardiac biomarkers. this can often be mistaken for acute myocardial infarction or ischemia. the pathogenesis appears to be the result of elevated catecholamine levels following injury leading to myocardial contraction band necrosis and cardiac dysfunction. this syndrome occurs more commonly in patients with severe or "high-grade" sah. we review a case of delayed cardiac dysfunction coinciding with the onset of vasospasm. a 52-year-old female presented with a h&h3, mf4 sah. she appeared to have lost consciousness prior to arrival and reported the worst headache of her life. she had an evd placed upon arrival, with an opening pressure of 27. she underwent endovascular coiling of a ruptured anterior communicating artery aneurysm. initial echocardiogram demonstrated normal wall motion with an ef of 63%, and minimal troponin i elevation at 0.04 ng/ml. on post-bleed day 10 the patient became more somnolent and developed chest pain, with an ecg demonstrating st-elevation in all anterolateral leads concerning for acs. she was taken for cardiac catheterization, where she had non-obstructive vessels with no vasospasm seen. her ef was reported at 25-35% with apical ballooning present. her repeat echocardiogram also demonstrated a new apical akinesis with an ef of 45%, and troponin peaked at 16 ng/ml. her tcds at the time were suggestive of vasospasm with bilateral lindegaard ratios >3, but no focal deficit was present. 
it appears that regardless of timeline, stress-induced cardiomyopathy or neurogenic stunned myocardium occurs after a sympathetic or catecholamine surge and may occur after the onset of vasospasm in patients with aneurysmal sah. the rapid neurological assessment of critically ill patients with neurologic disease is paramount when determining a course of action. neuromuscular blockade is often used during critical care transport and in the emergency department. unfortunately, this can delay examination and assessment, leading to unnecessary testing and procedures. historically, neuromuscular blockade reversal was accomplished using a combination of neostigmine and glycopyrrolate; however, this can lead to incomplete reversal and unwanted side effects from these medications. sugammadex is a cyclodextrin injectable compound that has been fda approved in the united states since 2015 for rapid reversal of rocuronium-induced neuromuscular blockade. sugammadex works by forming a complex with rocuronium and rendering it unable to bind to nicotinic cholinergic receptors at the neuromuscular junction. sugammadex can reverse neuromuscular blockade without the unwanted side effects of cholinesterase inhibitors. this is a case report of the successful use of sugammadex to reverse the effects of neuromuscular blockade in an intracerebral hemorrhage patient. a 77-year-old male with a history of atrial fibrillation and a supratherapeutic inr presented via aeromedical ambulance with a 63 ml left frontal intracerebral hemorrhage causing a 10 mm midline shift. he received a 100 mg bolus of rocuronium prior to arrival and had a gcs of 3 upon presenting to the neurosciences icu. a train-of-four revealed 0/4 twitches. he was given 4 mg/kg of sugammadex with a return of 4/4 twitches within 30 seconds. 
a more accurate neurological examination was then obtained, demonstrating that his brainstem reflexes were intact and that he could open his eyes spontaneously and react purposefully to painful stimulation. this allowed a non-operative course to be taken. sugammadex can reliably and quickly reverse neuromuscular blockade, allowing for the immediate assessment of the neurocritical care patient. it is a useful tool with minimal side effects. piperacillin-tazobactam is commonly deployed as empiric antibiotic therapy. piperacillin-induced hematologic laboratory test abnormalities were rare in pre-marketing studies, and whether these alterations are of clinical significance has received little study. aberrations in platelet function have not been implicated. in the present case, we discuss a patient presenting with hypertensive intracerebral hemorrhage (ich) who sustained two additional hemorrhages in distinct locations after routine removal of intracranial monitors and an external ventricular drain (evd). these significant bleeding events occurred exclusively during piperacillin-tazobactam therapy and were correlated with new abnormalities in the patient's platelet function assay (pfa) results. a 55-year-old vietnamese male with hypertension presented for treatment of a left basal ganglia ich. epinephrine/collagen and adenosine diphosphate/collagen pfas at the time of evd and quad-lumen bolt placement were normal, and imaging showed no hemorrhage after placement. the hospital course was complicated by aspiration pneumonia requiring empiric piperacillin-tazobactam administration. after removal of the quad-lumen bolt and evd on separate days, both follow-up ct scans showed new hematomas in the devices' tracts, with significant intraventricular hemorrhage. repeat pfas were abnormally prolonged, representing a distinct change from baseline. pfas trended toward normalization after discontinuation of piperacillin-tazobactam and progressed toward baseline thereafter. 
the present case is unique in that the significant bleeding that occurred was attributable to objectively confirmed platelet dysfunction rather than thrombocytopenia. other possible innate causes of bleeding were less likely, as the patient demonstrated normal platelet count, von willebrand multimers, platelet morphology, and clotting factors. this is the first reported case of intracranial (periprocedural) hemorrhage potentially related to piperacillin-tazobactam; further research into this drug's impact upon qualitative platelet function is needed. the life-saving potential of extracorporeal membrane oxygenation (ecmo) has been well recognized since the 1960s. modern advancements in research and technology have allowed ecmo to be accepted as a dependable intervention for patients with severe pulmonary or cardiac failure. however, with increased use, associated complications that detract from the benefit of ecmo are surfacing as well. this report describes a case of diffuse intracerebral hemorrhage (ich) after prolonged ecmo resulting in cerebral edema, mass effect, and eventual brain herniation. the patient is a previously healthy 19-year-old female who presented with fever, chills, and myalgia. when evaluated at urgent care, she was noted to be hypoxic and was sent to an outside hospital, where her monospot test was positive. upon arrival, the patient was placed on venovenous ecmo (vv-ecmo) due to severe hypoxia. she was also in acute renal failure requiring continuous renal replacement therapy (crrt). she had an episode of hypotension with bradycardia. subsequently, her pupils were noted to be fixed and dilated. a stat ct head then showed diffuse bilateral hemorrhages at the gray-white junction as well as diffuse edema. labs showed thrombocytopenia, likely due to disseminated intravascular coagulation (dic). her exam was consistent with brain death. 
it has been estimated that up to 90% of patients who were placed on ecmo as a last resort for respiratory failure have neurological complications, including ich. there is no stereotypical pattern of bleeding, but diffuse hemorrhage has been seen, which is consistent with the pattern seen in our patient. notably, those with ich have significantly higher rates of mortality. thrombocytopenia, dic, and platelet dysfunction that develop as a result of ecmo are thought to play a role in the development of ich. to present a case report of syndrome of the trephined (sot) and paradoxical herniation without craniectomy. sot is reported when a constellation of positional neurological symptoms arises following large craniectomy, resolving in a delayed fashion following cranioplasty. paradoxical herniation may occur in extreme cases. the pathophysiology is incompletely understood; however, proposed mechanisms include compression of the underlying brain by the flaccid skin flap due to the gradient between atmospheric and intracranial pressure (exacerbated by upright posture), changes in cerebral blood flow, and csf dynamics. a middle-aged woman with a history of mood changes in the eight months preceding admission presented with worsening left hemiplegia over one week. mri revealed a 78 x 44 mm right frontal cystic mass. hyperosmolar therapy and steroids were initiated for midline shift and brainstem compression. her immediate postoperative course after tumour excision was uncomplicated. on post-operative day two, she developed uncontrolled hypertension, worsening anisocoria, and decerebrate posturing requiring urgent intubation. head ct revealed uncal and subfalcine herniation despite a large resection cavity. an external ventricular drain was placed and removed due to lack of drainage. within 12 hours of trendelenburg positioning, she improved both clinically and radiographically. 
she did not undergo intraoperative csf reduction and had no preadmission history (back pain, orthostatic headache, trauma) to support an occult csf leak. she had a recurrence of symptoms on post-operative day eight, which also resolved upon lying flat for 48 hours. she was ultimately discharged to acute rehab, and tumor pathology returned as glioblastoma (who grade 4). this novel case of sot in the absence of craniectomy demonstrates the complex and poorly understood consequences of slow-growing massive tumors, csf dynamics, and exertional force on static cns structures. this case also illustrates the benefits of a collaborative, multidisciplinary approach to patient care in the neuroicu. to present a lesser-known leukoencephalopathy that occurs when patients overdose on inhaled heroin vapor. "chasing the dragon" is a method of inhaling heroin vapor that is different from smoking or snorting heroin. heroin powder is placed on aluminum foil, which is heated by placing a flame underneath. the white powder turns into a reddish-brown gelatinous substance that releases a thick, white smoke, which resembles a dragon's tail. the fumes are "chased" or inhaled through a straw or small tube. currently, the us is facing a growing epidemic of heroin use, making this leukoencephalopathy more prevalent. a 34-year-old female with a history of drug abuse presented to the emergency department with altered mental status. her boyfriend informed staff that she had likely smoked heroin. on arrival, she was drowsy but easily arousable. her brainstem reflexes were intact but she was grossly dysmetric. urine drug screen was positive for opiates only. initial ct of the brain demonstrated extensive loss of gray-white differentiation within the cerebellar hemispheres, bilateral lucency in the globus pallidus, and developing hydrocephalus. the patient was admitted to the neurointensive care unit for monitoring and was managed medically with hypertonic therapy to combat her cerebral edema. 
an mri was done, which demonstrated a distinctive pattern of symmetrical white matter t2 hyperintensities in the cerebellum, hippocampus, and internal capsule bilaterally, characteristically known as the "chasing the dragon" sign. the patient gradually improved with supportive treatment, but continued to have mild ataxia upon discharge. we present a case of a leukoencephalopathy that was once rarely seen; now that heroin use is at a 20-year high within the us, this phenomenon may become more prominent. heroin inhalation leukoencephalopathy should be suspected in all patients with a history of chasing the dragon when they present with neurological abnormalities. the use of intra-venous (iv) thrombolysis for the treatment of acute ischemic stroke is now the standard of care. this is typically followed by endovascular thrombectomy if an eligible patient does not improve. we present a rare case of acute posterior circulation ischemic stroke that progressed despite both intra-venous thrombolysis and endovascular thrombectomy. case report: a 38-year-old african-american man with a past history of obesity, sleep apnea, and prostatic hyperplasia presented with acute-onset left hemiparesis with limb ataxia, and then progressed to altered sensorium in the emergency room, needing endotracheal intubation. his initial nihss was 10. he was given iv thrombolysis, with subsequent vascular imaging that showed a top-of-the-basilar clot, which was removed via endovascular intervention. a sister and one of his aunts reported a history of 'clots' when asked about family history. despite initial improvement, the patient deteriorated clinically about 14 hours from symptom onset and was found to have extension of the stroke into the brainstem, with simultaneous acute loss of brainstem reflexes. palliative withdrawal of care was initiated at the family's request about 2 days from the initial onset of symptoms. 
his thrombophilia work-up later revealed that he was homozygous for the methylenetetrahydrofolate reductase (mthfr) gene mutation, 677c>t. this case, with a poor outcome due to extension of the ischemic stroke despite standard-of-care therapy, highlights the need to consider the use of anticoagulation within 24 hours post-thrombolysis and thrombectomy in cases with underlying thrombophilia. the current guidelines do not support this aggressive approach. there is a dire need for randomized controlled trials in such cases to provide evidence-based care and avoid repetition of a similar poor outcome. barbiturate therapy has shown benefit in reducing intracranial pressure (icp) in patients who are refractory to other treatment modalities. however, severe adverse drug effects can accompany barbiturate use at the high doses required for icp management, such as hypotension, hepatic/renal dysfunction, and infection, among other deleterious consequences. dyskalemia has been reported infrequently in the literature, with most cases involving patients on thiopental. there remains little guidance for management of this adverse effect. we present a case of severe dyskalemia induced by high-dose pentobarbital therapy and experience with management of this rare but life-threatening effect. the patient was a 26-year-old male with traumatic brain injury and subdural hematomas complicated by refractory icp elevations. after hyperosmolar therapy, sedation, and csf drainage failed to control icp, and he was deemed not to be a candidate for surgical decompression, high-dose pentobarbital was started. after initiation of pentobarbital, his initial potassium of 4.0 mmol/l decreased to a nadir of 1.9 mmol/l over the next 24 hours despite aggressive repletion with a total of 600 meq of oral and intravenous potassium chloride. 
upon down-titration and discontinuation of pentobarbital, the serum potassium rapidly rebounded to 8.3 mmol/l with st-segment elevations on ekg. pentobarbital was restarted in an attempt to stabilize escalating icps and the elevated serum potassium. subsequently, a slow taper was utilized to mitigate dyskalemia during barbiturate discontinuation. dyskalemia associated with high-dose barbiturate therapy presents a significant dilemma to practitioners, as both severe hypo- and hyperkalemia can be life threatening. published literature provides little guidance on how to safely manage patients who experience this adverse effect. patients receiving barbiturate therapy should have frequent potassium monitoring, especially in the initiation and discontinuation phases. potassium repletion should be approached with caution, especially preceding discontinuation of barbiturate therapy. diffuse astrocytoma (formerly known as 'gliomatosis cerebri') may present with seizures or symptomatic raised intracranial pressure. this is typically followed by a relatively stable phase of a few months (with treatment) and then possible subsequent development of glioblastoma multiforme. we present a rare case of a previously healthy caucasian woman with new-onset seizures who was found to have glioblastoma multiforme already present on a background of diffuse astrocytoma. case report: a 65-year-old caucasian woman with no significant past medical history was admitted with new-onset focal seizures with secondary generalization, needing intubation and propofol for airway protection. brain imaging showed a left frontal ring-enhancing mass, with a smaller satellite lesion in the left insular cortex, on a background of a diffuse infiltrative lesion involving the left fronto-temporal lobe and a smaller area of the right parafalcine frontal lobe. biopsy of the left frontal mass revealed it to be glioblastoma multiforme. 
this is a rare situation in which a previously healthy patient presents with new-onset seizures and already has glioblastoma multiforme on a background of diffuse astrocytoma (or 'gliomatosis cerebri'). her postoperative imaging revealed disease progression, with an increase in the size of the left insular cortical lesion. she was discharged home with a plan for radiotherapy and chemotherapy. diffuse astrocytoma with glioblastoma multiforme within can remain asymptomatic until late in the disease course. diffuse astrocytoma (or 'gliomatosis cerebri') is a rare disease, and even rarer is for it to remain asymptomatic until glioblastoma multiforme develops within it. this particular case highlights the need for vigilance about such a possibility, as this aggressive brain tumor carries a grave prognosis, especially when it develops on a background of diffuse astrocytoma. subdural hygromas (sdg) are cerebrospinal fluid collections in the subdural space that may occur following trauma. decompressive craniectomy may increase the risk for acute sdg or other forms of external hydrocephalus along the surgical plane. while these are traditionally benign and resolve spontaneously, they may in rare cases cause clinical deterioration. we report three cases. cases 1 and 2 were alcoholic men aged 54 and 69, respectively, who suffered severe traumatic brain injury (tbi) following falls while intoxicated. they had early clinical deterioration prompting emergent hemicraniectomy for left-sided sdh with midline shift (mls). case 1 clinically worsened on postoperative day (pod) 6 with posturing, decreased pupillary responses, and new-onset seizures. new bilateral, extensive subdural hygromas were noted, enlarging over serial ct scans up to 1 cm with progressive mass effect. uncal herniation and downward brainstem displacement occurred by pod8 despite external ventricular drainage. case 2 deteriorated on pod4 with a fluctuating exam and new-onset seizures. 
imaging revealed a new subgaleal fluid collection measuring 1.8 cm and a contralateral sdg. on pod8, hemicraniectomy was performed for new mls from enlarging fluid and hemorrhage in the extradural component. both died shortly after withdrawal of care. case 3 was a 68-year-old man with a dural arteriovenous fistula who presented with a spontaneous left-sided sdh and underwent left hemicraniectomy. on pod3, he had new-onset seizures and new bilateral sdg measuring 1.5 cm on the left and 0.5 cm on the right without mass effect. two days later, the right sdg grew to 2.8 cm, causing significant mass effect. he recovered after burr-hole evacuation and temporary subdural drain placement. sdg following sdh evacuation can have a malignant course, causing clinical deterioration in the absence of prompt recognition and csf diversion. all patients had large-volume sdh and two were alcoholic; larger prospective cohorts are required to identify risk factors. seizures may be an early clinical sign. moyamoya disease is an intracranial vasculopathy that results in stenosis of the bilateral internal carotid arteries with subsequent development of extensive collateralization. the diagnostic criteria for moyamoya disease are well established and generally accepted, yet reaching the diagnosis can be challenging in some cases. herein, we present an unusual case of progressive cerebral vasospasm triggered by pituitary apoplexy that led to a delay in the underlying diagnosis of moyamoya disease. case report: a 53-year-old female with hyperlipidemia presented to the emergency department with a bifrontal headache, right-sided weakness, and dysarthria. ct angiogram showed extensive multifocal narrowing of the bilateral supraclinoid icas, proximal aca/mcas, and posterior circulation. mri brain revealed a left insular stroke as well as a sellar mass with a central hemorrhagic component. mr perfusion demonstrated decreased perfusion in the right hemisphere. 
lumbar puncture and extensive vasculitic workup were unremarkable. endocrine studies were notable for elevated prolactin with low fsh and lh levels. despite protracted blood pressure augmentation strategies, the patient continued to experience progressive infarcts in the left mca/aca territory. repeat ct angiogram showed progression of the vasculopathy, and transcranial doppler studies demonstrated worsening vasospasm of the right mca and left pca. the patient received corticosteroids given concern for apoplexy, and was maintained on aspirin and verapamil. given the aggressive nature of her vasculopathy, the patient underwent conventional angiography two weeks later, which revealed bilateral suzuki grade iii moyamoya. following this diagnosis, she received bilateral sta-mca bypass surgeries. it is important to revisit the differential diagnosis of cerebral vasospasm when the clinical course does not conform to expectations. this case highlights moyamoya as the cause of a progressive vasculopathy likely masked by pituitary apoplexy and concomitant vasospasm. moyamoya is an important diagnosis to consider in patients with a fulminant vasculopathy refractory to traditional treatment of vasospasm. visualization of intracranial structures by ultrasound in adults is limited by the presence of the skull, though imaging is possible through temporal windows. point-of-care ultrasound allows assessment of midline shift, the brainstem, and the ventricles. doppler allows visualization of cerebral perfusion patterns. patients with a hemicraniectomy have better acoustic windows available, since a portion of the skull has been removed. in such patients, ultrasound can provide a non-invasive method to serially assess midline shift, intracranial hematomas, and focal ischemia at the bedside. we present images of a cranial ultrasound that show remarkable anatomical detail correlating well with computed tomography (ct) of the head. 
a 74-year-old male presented with right-sided weakness and confusion and was found to have a left parietal intraparenchymal hemorrhage with cerebral edema and left-to-right midline shift on ct head. an increase in cerebral edema and expansion of the hematoma caused clinical neurological decline, necessitating a left-sided hemicraniectomy with clot evacuation. a cranial ultrasound was performed two days after surgery to assess for progression of cerebral edema and intracranial hemorrhage. a transtemporal approach in the axial plane was used to visualize intracranial structures through the craniectomy window. anatomical structures such as the falx cerebri, lateral ventricles, midbrain, mammillary bodies, choroid plexus, splenium of the corpus callosum, thalami, and circle of willis were visualized in remarkable anatomical detail. pathology such as intracranial hemorrhage, focal ischemic areas, vasogenic edema, and encephalomalacia was identified with close correlation to the noncontrast head ct. the patient is currently recovering in the neurocritical care unit with supportive care. cranial ultrasound has potential applications in point-of-care assessment of intracranial pathology in neurocritical care patients. this application has promising use in directing therapy in patients who are too unstable for transport or unable to undergo neuroimaging because of positioning needed for management of cerebral edema. cerebral mucormycosis is a rare infection caused by fungi found in soil and decaying vegetation. the rhino-orbital-cerebral type is classically associated with aids, diabetes, malignancy, and immunosuppression. we observed a series of young immunocompetent patients who presented with a fulminant form of isolated cerebral mucormycosis associated with severe meningoencephalitis, parenchymal necrosis, and symptomatic cerebral edema. 
six patients with a histopathological diagnosis of cns mucormycosis admitted to the university of cincinnati neurocritical care unit between 2008 and 2017 are presented. patient ages ranged from 22 to 45 years (median 27). none had diabetes or hiv. drug use (intravenous and intranasal) was confirmed in 4 patients. they presented with altered mental status (2) and focal neurologic deficits (4). four patients presented with fever and leukocytosis. mri revealed lesions in the basal ganglia (5) or cerebellum (1), characterized by t2 hyperintensities with patchy restriction and susceptibility signal. contrast enhancement was present in 5 patients. mass effect (6) and midline shift (4) were prominent. mechanical ventilation was required in four patients. all but one patient received amphotericin b. three died from intractable elevation of intracranial pressure (icp). one patient eventually gained functional independence, one still requires a high level of care, and one was lost to follow-up. csf analysis was negative for mucor in all cases. fulminant cerebral mucormycosis should be considered in every young patient presenting with rapid-onset meningoencephalitis and necrotic cerebral lesions, especially if located in the basal ganglia. a history of ivdu should raise further suspicion. these patients should be monitored in intensive care settings, as they can rapidly develop malignant cerebral edema and increased icp. antifungal therapy should be initiated upon presentation, as it has been shown to reduce morbidity and mortality. the incidence of acute ischemic stroke in the immediate post-partum period ranges from 4 to 18% and is considered a serious cause of morbidity and mortality. pregnant or postpartum women are less likely to receive iv tissue plasminogen activator (tpa), primarily because of pregnancy, ongoing peripartum bleeding, and/or recent delivery. 
the fda classifies tpa as a category c drug, and current recommendations consider pregnancy a relative contraindication for receiving tpa. we present two cases of peripartum ischemic strokes with varying ischemic stroke time windows requiring aggressive revascularization therapy (endovascular and pharmacologic). a 32y g2p2 presented to an outside hospital 12 days post-partum with new-onset facial droop and left upper extremity weakness (nihss 6). imaging showed a right m1 cutoff and occlusion of several m2 branches. the patient was not a candidate for tpa given ongoing vaginal bleeding. the decision was made to proceed with mechanical thrombectomy when her exam worsened to nihss 12. the thrombectomy was successful, with tici 2c reperfusion. she was discharged home 4 days later with a nihss of zero. a 30y g1p1 presented 4 days post-partum with new-onset left facial droop and slurred speech (nihss 2). imaging showed a right m1 cutoff with reconstitution, but with significant associated penumbra. acute worsening of her exam after tpa prompted mechanical thrombectomy, achieving tici 3 recanalization. post-procedure, the patient's only symptom was decreased sensation in the left fingertips. at 30-day follow-up the patient had returned to her baseline with a nihss of zero. endovascular and pharmacologic revascularization therapy should be considered on an individual basis in the peripartum population. current literature is limited to case reports/case series. larger multicenter trials are warranted and anticipated in the near future. while the optimal duration of burst suppression for status epilepticus (se) has not been established, burst suppression poses significant morbidity that may depend on the amount of time spent in burst suppression. herein, we report a case of se that resolved after ultra-short burst suppression. case report: 
a 74-year-old female was admitted to the neuro-intensive care unit after experiencing several brief tonic-clonic seizures characterized by right-sided shaking and left-sided head turn. despite lorazepam and levetiracetam administration, the patient did not return to baseline and was transferred to our unit. on presentation, her workup revealed a leukocytosis and a glucose level >300 mg/dl. lumbar puncture showed a mild pleocytosis, for which broad-spectrum antibiotics were initiated. on initial examination, she was unresponsive and not following commands. electroencephalogram (eeg) demonstrated frequent sharp and slow discharges in the right posterior quadrant with generalization (~20 seizures/hour), with minimal improvement following levetiracetam and phenytoin administration. given the refractory nature of the seizures, the patient was intubated and treated with general anesthetics. using propofol, burst suppression was achieved (consisting of 1-2 s bursts with intermixed suppressions) and was continued for <2 hours. following weaning, the patient had no further evidence of seizures, and eeg showed lateralized periodic discharges in the right occipital lobe. mri did not demonstrate an occipital focus, but did reveal cortical diffusion restriction in the bilateral posterior hemispheres. the patient was extubated the following morning and was transferred to the neurology floor two days later. this case provides evidence that in certain situations, relatively brief periods of burst suppression in se can serve as a "reset switch", allowing for resolution of seizures while minimizing toxicities associated with prolonged burst suppression. further studies to determine which patients may benefit from ultra-short burst suppression are warranted. there are two systems of facial control, voluntary and emotional; these are independent up to the level of the facial nucleus. 
we describe a case of a patient who presented with isolated emotional facial palsy after intracerebral hemorrhage (ich). retrospective review of a case admitted to the neurocritical care unit (nccu) of the johns hopkins hospital. a 29 year-old woman with a history of migraines presented to the emergency room after a colleague noticed she was not moving the left lower side of her face when she smiled. head ct showed a large right frontal ich involving the medial frontal lobes and anterior thalami. on review of a previously obtained mri, an underlying developmental venous anomaly with an associated cavernoma was seen. her exam was notable for a flattened emotional affect, no facial palsy when asked to activate on command, but a facial droop that occurred in the context of her smiling at jokes and other humor. her nccu course was complicated by significant brain edema requiring osmotherapy up to 3 weeks out from the initial insult, with self-limited episodes of brain herniation characterized by extensor posturing, dilated pupils, hypertension, hyperventilation and tachycardia. these were initially dismissed as sympathetic storming vs seizures, as she would come out of these to her baseline (awake with mild left sided weakness) many times without therapy. she eventually required a hemicraniectomy two weeks after presentation. conclusions: isolated emotional facial palsy can be the presenting sign after ich when the hemorrhage involves the contralateral thalamus, striato-capsular region or medial frontal lobes. in this case, transient icp elevations were producing dilated pupils, tachycardia and hypertension - highlighting that heart rate changes can be variable with elevated icps and that in young patients, brain herniation episodes can self-resolve with hyperventilation. an 18 yo female with no pmh developed fever, headache, and neck pain. she presented to an outside hospital on day 2; after the head ct was negative, the patient was discharged. 
symptoms did not improve and she went to her pcp on day 4 and was instructed to go to the ed. she presented to the osh and underwent an lp that was indicative of viral meningitis, with wbc 357 cells/mm3 and protein 215 mg/dl. the patient was admitted and treated with acyclovir. on day 6, she developed generalized body aches. on day 8, while trying to stand with assistance, she became rigid. her parents reported a total of 4 seizures, and she was intubated for airway protection. she underwent another lp on day 8 with an opening pressure of 55 cm h2o. csf was sent for a paraneoplastic panel. csf analyses and blood cultures were negative. an evd was placed for icps of 30-34 cm h2o. history obtained from the mother and father revealed the patient had been hiking 2 weeks prior. results: mri brain showed meningeal enhancement scattered throughout the supratentorial and infratentorial brain, most compatible with inflammatory sequelae of meningitis. the patient was continued on keppra, high dose steroids, and antibiotic, antiviral, and antifungal therapy until cultures resulted. additional treatments included ivig therapy followed by plasmapheresis, and finally rituximab. continued workup with brain biopsy showed a demyelinating process and possible necrotizing encephalitis. mri four weeks after initial presentation showed white matter demyelination and deep gray nuclei lesions consistent with adem. a four score of 0 on admission improved to 11 (e3, m0, b4, r4) 5 weeks after the patient presented from the osh. a diagnosis of adem vs ms variant was made based on the above data. this case provides information for the clinician diagnosing and treating adem, with potential for further studies of the treatments described above and their effect on meaningful neurological outcomes. dengue is a flavivirus transmitted via mosquitoes and prevalent in southeast asia. neurological complications are rare but can involve encephalitis, myelitis, neuromuscular dysfunction and neuro-ophthalmological problems. 
we describe an interesting case of dengue encephalomyelitis. retrospective review of a case admitted to the neurocritical care unit (nccu) of the johns hopkins hospital. a 54 year-old filipino ship captain with no significant past medical history, but with extensive exposure to heavy metals and travel throughout the pacific, presented with progressively worsening fevers, encephalopathy, urinary retention and tremors. he was transporting iron ore and other metals in a cargo ship from russia through south-east asia to bermuda. while passing through the pacific, he began to experience malaise, myalgia, and fever. he was treated with amoxicillin but became worse, developing urinary retention, periods of confusion, and word finding difficulties. he was initially hospitalized in bermuda and then transferred to our hospital for further workup. given his rapid deterioration, he was initially admitted to the nccu. his exam was notable for mild expressive aphasia, paratonia, right-sided weakness with hyper-reflexia, and a low amplitude tremor. his csf was notable for lymphocytic pleocytosis, elevated protein, and low glucose. mri brain showed flair hyper-intensities in the frontal lobes and diffusion restriction in the bilateral basal ganglia and thalami. mri spine showed extensive flair hyper-intense lesions. an extensive workup evaluated heavy metal toxicities, autoimmune disorders, and infectious etiologies. csf analysis came back positive for dengue igg and igm, leading to a diagnosis of acute dengue fever and encephalomyelitis. with supportive care in the nccu, he improved considerably over 2-3 weeks and was discharged home to the philippines. dengue encephalomyelitis is a rare infection but should be considered in patients living in endemic areas. treatment includes supportive care with fluid resuscitation, neurological monitoring and monitoring for hemorrhage. 
posterior reversible encephalopathy syndrome (pres) is known to cause altered mental status and leukoencephalopathy in the setting of hypertensive emergency. we present a novel case of severely asymmetric pres due to a concurrent right transverse sinus dural arteriovenous fistula (davf). a 53 year-old woman with hypertension, non-compliant with medication, had fatigue and 2 weeks of intermittent left sided weakness when she presented to an outside hospital for evaluation. upon arrival her glasgow coma scale (gcs) was 14. her mental status deteriorated over 6 hours, eventually requiring intubation. her peak blood pressure was 269/147. outside ct demonstrated scattered intracerebral hemorrhage (ich) and she was transferred for a higher level of care. on admission her gcs was 4. review of her outside ct was remarkable for extreme right-sided white matter hypodensity, moderate left white matter hypodensity, and small scattered ich. infectious, inflammatory, and neoplastic processes were excluded through serum and csf studies and mri. conventional angiogram demonstrated a right transverse sinus davf with reflux into cortical veins, which was subsequently embolized. her white matter t2-weighted hyperintensities improved on follow-up mri, and her gcs was 11 at the time of discharge. our case highlights the possibility of asymmetric pres due to abnormal venous congestion from the right-sided davf. venous hypertension likely caused the patient's intermittent left sided symptoms in the weeks prior to admission. few cases of unilateral or asymmetric pres have been reported following induced hypertension for treatment of subarachnoid hemorrhage or in the setting of vascular malformation. to our knowledge, this is the only case of severely asymmetric pres and preceding stroke-like symptoms due to a davf. the most common pathology associated with an intraluminal carotid thrombus is underlying atherosclerosis. 
in rare cases it may be associated with thrombocytosis. currently there are no clear recommendations for the treatment of ischemic stroke associated with thrombocytosis. our case describes the use of plateletpheresis for the acute management of thrombocytosis complicated by an internal carotid artery thrombus resulting in a right mca stroke. a 55-year-old female with a past medical history of menorrhagia presented complaining of left face, arm and leg weakness with associated shortness of breath. upon arrival her nihss was 1 and the initial head ct was unremarkable. laboratory results revealed a hemoglobin of 8.2 g/dl, hematocrit of 30%, and platelet count of 1014 x 10^3/µl. she was not a candidate for thrombolytic therapy due to the time window. soon after admission she had acute worsening of symptoms with an nihss of 17. a cta of the head and neck showed acute ischemic infarction involving the right mca territory with a non-occlusive intraluminal thrombus within the right carotid bulb. asa 325 mg and a heparin infusion were initiated promptly. after a thorough work-up for thrombocytosis, reactive thrombocytosis secondary to iron deficiency anemia was diagnosed. plateletpheresis as well as oral ferrous sulfate were started. after one plateletpheresis cycle the platelet count stabilized at 400 x 10^3/µl. complete thrombus resolution was confirmed on follow-up cta on day 10 of admission without need for surgical revascularization. the role for plateletpheresis is not well established in secondary thrombocytosis. in cases with extreme thrombocytosis, immediate surgical thrombectomy may be contraindicated due to high risk of rethrombosis. urgent cytoreduction with correction of the putative mechanism for thrombocytosis should be undertaken for optimal management. plateletpheresis is safe and effective in reducing the platelet count to decrease the risk of clot progression or further clot formation, which could worsen patient outcome. 
hyperpyrexia is an elevated core body temperature secondary to an elevated hypothalamic set temperature. hyperthermia is an elevated core body temperature beyond the normal hypothalamic set temperature. intracranial hypotension can present with a wide variety of symptoms ranging from orthostatic headache to coma. it has never been reported to present with fever, namely hyperpyrexia. we report the case of a 55 year old female patient with a history of depression, diabetes mellitus, hypertension, and angiogram-negative subarachnoid hemorrhage status post ventriculo-peritoneal (vp) shunt placement six years prior, who had been complaining of worsening headaches and slurred speech for three months but acutely decompensated one morning. she suddenly became confused and agitated and then became obtunded. initially, she was given haldol. she was found to be febrile (rectal temperature of 104.8 f). she was given dantrolene and bromocriptine for suspected neuroleptic malignant syndrome with no effect. creatine phosphokinase was not elevated. she underwent an infectious work up which later came back negative. cooling measures including external cooling, peripheral iv cooling, tylenol and nsaids were also not helpful. the fever responded to central intravascular cooling but the encephalopathy did not. several expert attempts at lp and shunt tapping failed to obtain csf. brain imaging showed bilateral chronic symmetrical hygromas, diffuse pachymeningeal thickening and enhancement, slit-like ventricles and slumping of the midbrain with closure of the mamillopontine distance. following shunt setting adjustment, the encephalopathy markedly improved and the fever did not recur after stopping the cooling measures and antimicrobials. intracranial hypotension might present with hyperpyrexia, likely secondary to hypothalamic dysfunction. in our case, hyperpyrexia was reversible because the intracranial hypotension was emergently treated. 
nevertheless, spontaneous intracranial hypotension might be difficult to diagnose, especially if it presents with non-classical symptoms like fever. complex emotions about critical illness can affect families in the icu. rightfully, we put focus on how they are impacted, but we also need to pay attention to how it can affect providers and our decision making. a poignant case from my training was a 15-year-old girl struggling with lupus. she had now developed lupus cerebritis and had massive intracranial hemorrhages. despite aggressive efforts to manage cerebral edema, she repeatedly herniated brain matter out of old craniotomy scars with incredible force. it was the most horrifying thing i've ever seen. other organs were also failing, with four consulting services working unsuccessfully to salvage them, prompting numerous procedures. this went on for a month. the therapies that we can offer have limits from a physiological standpoint which we must recognize and respect. we struggle with reconciling the interventions we feel compelled to implement versus what is realistic. the complexity of the neuro icu is amplified by the nature of intracranial catastrophes and poor recovery (in contrast to pure medical illness). providers cling to what is technically indicated while families cling to hope, but neither is enough and concurrently too much. we lose our autonomy to grieving families telling us to "do everything", losing sight of the bigger picture. we lose our autonomy to one another by pushing onwards, which can unintentionally push each other into the territory of doing more harm than good. i remembered the most valuable advice that i once received: "only do something to someone if it does something for them". all services began to share this view, and thus slowly dialysis, steroids and immunosuppression stopped. eventually, her heart stopped. my experiences have reiterated a simple paradigm: to do no harm. 
through this, i can empower myself to take control of each situation by first taking control of myself. we report a case of an hiv positive patient who presented with cryptococcus gattii meningitis and then developed acute respiratory distress syndrome (ards) secondary to pneumocystis jirovecii pneumonia (pjp) that required ecmo support. ards in immunocompromised hiv positive patients is associated with extremely high mortality. ecmo can improve oxygenation without increasing alveolar pressure, and can therefore avoid mechanical lung damage from ventilation. we present a patient with newly diagnosed aids and cryptococcus gattii meningitis whose course was complicated by pjp that progressed to severe ards, for which veno-venous ecmo was initiated. the patient is a 27 year old male who presented to the emergency department with new onset seizures. lumbar puncture in the ed overflowed the manometer and demonstrated wbc 62, rbc 610, protein 41, glucose 51, and a gram stain positive for yeast, with positive pcr and antigen. his cultures later grew cryptococcus gattii. he was admitted to the nsicu, where we placed a lumbar drain and an intraparenchymal icp monitor that demonstrated icps elevated to 60-80 mm h2o but improved with drainage. the day of admission he acutely desaturated and required emergent endotracheal intubation. chest x-ray demonstrated bilateral infiltrates. bal was positive for pj. five days following presentation and respiratory failure he was started on veno-venous ecmo. two days following initiation of pjp treatment with bactrim, his chest x-rays and lung compliance began to improve. he remained on ecmo for a total of 10 days before decannulation. he underwent induction antifungal therapy for four weeks for the meningitis. this case report demonstrates the use of ecmo in a complicated and critically ill patient with aids, pjp, and cryptococcus gattii meningitis. 
to our knowledge, few cases of ards secondary to pjp are reported and none are reported with concurrent cryptococcus gattii infection. sympathetic storming occurs during the acute care of patients following severe brain injury. cannabinoid cb1 receptors (cb1r) mediate the effects of delta(9)-tetrahydrocannabinol (thc), the psychoactive component in marijuana. expression of cb1r is widespread in the central nervous system and includes the hypothalamus, which is thought to mediate the hypothermia-inducing effects of cannabinoids. dronabinol is a synthetic analogue of thc. we present a novel therapeutic use of cannabinoids in a case of super-refractory sympathetic storming following coccidioidal meningitis and extensive bilateral subcortical stroke. a 26-year-old previously healthy man was transferred from an outside hospital for treatment of meningitis, vasculitis, and hydrocephalus requiring placement of a ventriculostomy. workup subsequently revealed coccidioidal meningitis. during hospitalization the patient had severe vasospasm, elevated intracranial pressure, diabetes insipidus, cerebral salt wasting, and severe sympathetic storming. intermittent storming episodes with high fever persisted for over 8 weeks despite treatment with bromocriptine, dantrolene, tylenol, ibuprofen, phenobarbital, and sinemet. given its mechanism of action, a trial of dronabinol 10 mg daily, divided twice daily, was initiated. the storming episodes ceased and within 6 hours the average temperature decreased by about 2.5 degrees celsius. temperature over the next several days was better controlled, with a substantial reduction in the use of anti-pyretics, surface cooling measures, and other storming medications. our case highlights a novel therapeutic use of cannabinoids in super-refractory sympathetic storming related to brain injury. 
dronabinol may be an alternative pharmacotherapy with a unique mechanism of action in difficult-to-control sympathetic storming. patients with poor grade subarachnoid hemorrhage (sah) commonly present with significant mental status changes that preclude reliance on the neurologic exam for screening for neurologic deterioration. jugular venous oximetry monitoring has been suggested for use in guidance of hyperventilation therapy, barbiturate coma, and vasospasm monitoring. no studies are found in the literature validating its use in sah. milrinone has been used for the treatment of vasospasm in sah in an established protocol at the montreal neurological hospital. that study was performed using multiple methods of monitoring, but not jugular bulb oximetry. we report one case of high grade subarachnoid hemorrhage complicated by vasospasm treated with milrinone, using jugular bulb monitoring for dose titration. methods: a 71 year old female presented with thunderclap headache and subsequently became comatose. noncontrast head computed tomography showed posterior fossa subarachnoid blood. she was intubated, an external ventricular drain (evd) was placed, and she was admitted to the neurosurgical intensive care unit (nsicu). angiogram showed a left posterior inferior cerebellar artery aneurysm, which was successfully coiled. her hospital course was complicated by refractory symptomatic vasospasm. angiogram showed basilar artery vasospasm treated with intra-arterial verapamil. post procedure, the patient was not able to tolerate norepinephrine due to tachycardia and could not maintain hypertension on phenylephrine. milrinone was then started. a jugular bulb catheter was placed because the area at risk was not amenable to invasive multimodality monitoring. oximetry was monitored and the milrinone rate was titrated to a goal venous oximetry in the range of 50-70%. on day 20, angiogram showed no further evidence of vasospasm. her exam was back to her prior poor baseline. 
subsequently, she was discharged to a long term care facility. our case demonstrates the benefit of jugular venous oximetry monitoring for guiding milrinone dose titration. further, it may be an effective tool in research studying treatments of cerebral vasospasm. repetitive transcranial magnetic stimulation (rtms) is increasingly used in the treatment of various conditions including depression, chronic pain, and movement disorders. the use of rtms for chronic management of medically refractory epilepsy has grown substantially in the last 15 years. however, little literature exists on the use of rtms for acute status epilepticus. the exact antiepileptic mechanism of rtms remains unclear, but may be secondary to inhibition of cortical excitability. we report a promising response to rtms in a case of super-refractory focal status epilepticus. the study is a case report. a daily dose of 1800 pulses of 1 hz rtms was applied to the left occipital lobe. the treatment course was divided into 3 periods of 3-5 consecutive days each, for a total of 13 days of treatment over 18 days. a 63-year-old woman with recent hemiarthroplasty complicated by wound infection presented with acute unresponsiveness and right gaze deviation, evolving into fluctuating encephalopathy, word finding difficulty, and right hemineglect. eeg revealed persistent left posterior quadrant lateralized periodic discharges (lpds), at times evolving into electrographic seizures, and positron emission tomography demonstrated a co-localized hypermetabolic focus. mri revealed subtle bilateral occipital t2 hyperintensity without diffusion restriction, which later resolved; cerebrospinal fluid was noninflammatory. seizures continued despite treatment with multiple aeds, burst suppression, and an empiric trial of high dose corticosteroids. the patient demonstrated abrupt electrographic and clinical improvement after rtms initiation. 
previously unseen brief periods of lpd resolution were observed within 30 minutes after the first tms session, with further improvement in eeg background correlating with improvement in encephalopathy and clinical findings over subsequent days. given its excellent safety profile, rtms may be a useful transitional therapy in the management of some cases of status epilepticus. durability of efficacy, patient selection, and optimal treatment schedules remain important unresolved questions. further study is required. central pontine myelinolysis (cpm) occurs due to rapid osmotic shifts causing demyelination in white matter, typically due to rapid correction of hyponatremia, most often in the setting of alcoholism, malnutrition, and/or liver/renal dysfunction. sequelae may include cranial neuropathies, quadriparesis, seizures, and encephalopathy. no specific treatment exists; literature reports indicate favorable outcomes in only 50-67% of patients. our patient is a 50 year old male with hypertension and tobacco and alcohol abuse, admitted with severe aortic stenosis, complicated by alcohol withdrawal, pneumonia, and acute kidney injury. he was treated with benzodiazepines, broad spectrum antibiotics, and fluid resuscitation. on hospital day (hd) 10, he had to be intubated for airway protection due to acute confusion and quadriparesis. his blood work was notable for wide fluctuations in serum sodium, from 131 on admission to 155 on hd 7 to 146 on hd 10. otherwise, laboratory evaluation was remarkable only for mildly elevated ast and serum creatinine. mri brain 2 days after symptom onset (hd 12) showed dwi and flair hyperintensities around the central pons bilaterally, crossing the midline. eeg showed severe generalized slowing. a diagnosis of cpm was made and intravenous immunoglobulin (ivig) (0.4 g/kg/day for 5 days) was initiated within 4 days of symptom onset, on hd 14. after initiation of ivig, the patient showed rapid improvement, first noted in the bilateral upper extremities. 
by hd 20, i.e., 6 days after initiation of ivig, he was successfully extubated and had regained 3-4/5 strength in all extremities. neuropsychology testing at 1 month demonstrated intact cognition. we describe a case of rapid clinical improvement in cpm following treatment with ivig. in addition to ours, about 5 similar cases have been reported in which beneficial outcomes were demonstrated following prompt initiation of ivig. one proposed mechanism is reduction of myelinotoxic antibodies, thus promoting remyelination. a few cases have reported central neurogenic hyperventilation (cnh) secondary to infiltrative malignancy or autoimmune disease. the lesion is usually located at the pontine tegmentum and interrupts the fibers between the respiratory centers in the pons and those in the medulla. we report the case of a 57 year old female with multiple comorbidities who was admitted to the neurocritical care unit after intra-operative rupture of a 4 mm distal basilar aneurysm while being electively coiled. an external ventricular drain (evd) was placed due to early signs of ventriculomegaly. the postoperative exam showed progressive encephalopathy, left > right hemiplegia, and progressive tachypnea (in rate and depth) despite assisted mode ventilation, leading to severe hypocapnia (12.8 mmhg) with compensatory metabolic acidosis (bicarbonate = 8.5 mmol/l) maintaining a normal ph. attempts to sedate the patient led to severe acidemia. intraventricular nicardipine was started and the patient's ventilator settings were changed to bi-level pressure control. transcranial doppler (tcd) showed markedly improved vasospasms. the patient's respiratory rate and, to a lesser extent, tidal volumes improved after several days. sedation was weaned off successfully. the evd was successfully weaned and removed. tcd and ct angiogram had shown severe basilar artery vasospasm, while mri done later showed bilateral tegmental midbrain ischemia. 
one case report described acute central neurogenic hyperventilation following a left thalamic bleed, while another reported chronic central neurogenic hyperventilation attributed, by exclusion, to old bilateral lacunar thalamic strokes. our case is the first to report central neurogenic hyperventilation following aneurysmal subarachnoid hemorrhage complicated by bilateral tegmental midbrain strokes. while respiratory centers are known to exist in the medulla and the pons, more recent articles have described networks that regulate breathing extending to the midbrain periaqueductal grey and possibly the thalami. our unique case supports this hypothesis. serotonergic and atypical antipsychotic drugs are often used in the critically ill in the treatment of posttraumatic depression and anxiety disorders. hyperactive delirium may mask serotonin syndrome, which carries high morbidity and mortality if left untreated. we describe a case of serotonin syndrome in a critically ill patient in the surgical and neurocritical intensive care unit setting. a 56-year-old male with remote trauma presented with left upper abdominal pain. a ct scan of the abdomen showed a left diaphragmatic hernia. he underwent left thoracotomy and repair of the diaphragmatic hernia. his postoperative course was complicated by sepsis, ileus, and aspiration pneumonitis. he was started on sertraline and quetiapine for stress-induced anxiety disorder, depression and agitation. despite increasing doses of sertraline, the patient became agitated, tremulous, and confused. physical examination revealed fever, tachycardia, hypertension, diaphoresis, dilated pupils, hyperactivity, and clonus. initially considered to be due to hyperactive delirium, these manifestations did not improve with haloperidol. neurocritical care was consulted. due to the presence of hyperactivity, fever and clonus, serotonin syndrome was strongly suspected. sertraline and quetiapine were discontinued and cyproheptadine was added. 
within 24 hours his symptoms improved, and cyproheptadine was tapered over 10 days. serotonin syndrome, a potentially life-threatening syndrome, is manifested by a triad of mental status changes and neuromuscular and autonomic hyperactivity. a multitude of drug combinations can result in serotonin syndrome. serotonin syndrome is a diagnosis of exclusion, based on history and neurological examination in a patient taking a serotonergic drug. 5ht-2a receptors are most commonly incriminated, along with high levels of norepinephrine. the keys to management include discontinuation of all serotonergic agents, supportive care, and cyproheptadine. cyproheptadine, a potent 5ht-2a antagonist, is effective in ameliorating symptoms. a high suspicion for the diagnosis is important for reducing the morbidity and mortality associated with this neurologic syndrome in the critically ill. additional abstract titles: ruptured cerebral mycotic aneurysm as consequence of infective endocarditis (ie): a management; qeeg adr in poor grade sah: is it really useful?; recognize the various subtypes of cerebral amyloid angiopathy (bilal butt, baylor college of medicine); 24-hour development of a giant infectious intracranial aneurysm: a case report (catherine albin); intra-operative ultrasound in traumatic brain injury patients (namkyu you); syndrome of the trephined (sot) and paradoxical herniation without craniectomy (elysia james, spectrum health neurosciences - icu division). stephen a. trevick, andrew naidech, leah tatebe. 2672 patients were included. median age was 54 years. 66% were female, 26% smokers, 54% hypertensive and 12% diabetic. 8% had a history of cad or mi and 14% had hyperlipidemia. in the multivariable analysis, the odds ratio for unfavorable outcome, defined as an mrs score of 3-6, was 5.7 (95% ci 4.6-7.0) for the intermediate-grade (iii) and 66.0 (95% ci 44.0-99.1) for the high-grade (iv and v) hh groups respectively, when compared to the low-grade (i and ii) hh group. 
age, hypertension and diabetes were negatively associated with mrs, while hyperlipidemia was positively associated. gender, race, smoking and history of cad/mi were not significantly related to mrs. a positive trend for better mrs outcome was observed across years (p=0.008). this trend was not related to hh grade on admission (p=0.18 for the interaction between hh grade and year). hh scale on admission is associated with the mrs outcome upon discharge for patients with nontraumatic sah. models predicting the probability of a good mrs outcome could be created based on the hh grade on admission, age, hypertension, diabetes and hyperlipidemia status. the data suggest a trend toward improvement in medical and surgical care for this patient population across years. poor-grade subarachnoid hemorrhage (sah) is associated with high mortality rates. although death rates have decreased in the last three decades, the exact mechanisms of demise are still to be determined in this patient population. a retrospective study of consecutive poor-grade sah patients (world federation of neurosurgical societies grades iv and v) aggressively treated in two academic high-volume centers, one in the netherlands (amc) and one in canada (smh). the primary outcome was in-hospital mortality. the main reasons for death were evaluated. a total of 357 poor-grade sah patients were admitted between 2009 and 2013, 178 to amc and 179 to smh. 152 (43%) patients died, and 85 (24%) of those patients died before having the culprit aneurysm treated. the median interval between hospital admission and death was three days (iqr 1-12). withdrawal of life support was the main reason for death in both centers (total of 107 deaths, 71%); other causes (cardiopulmonary causes, aneurysm rebleeding, refractory intracranial hypertension, and other extracranial causes) represented less than 15%. 
extensive review of the patient's chart was performed for data collection, including a literature search for similar previously reported cases. although rare, there are multiple case reports and series of nkh with clinical findings of hemichorea-hemiballism (hc-hb). there are few case reports of nkh with unilateral signal changes in the caudate and putamen. our patient presented with acute right basal ganglia ich. despite the typical imaging findings of nkh, work-up and management of the ich took precedence over control of bg. mri findings were different in our patient given the presence of positive gre and dwi/adc findings in areas other than the t1 hyperintensity, which is known to be associated with nkh. we hypothesize an association between ischemia and hemosiderin deposition with hyperglycemia. the selective vulnerability underlying unilateral involvement of the basal ganglia and caudate is unclear and needs more research. identification of the neuroimaging findings of nkh in the absence of focal neurological deficits (hc-hb) is important, especially for a first responder. early recognition can prevent icu admission and allow efficient patient care and allocation of resources. although most metabolic diseases affect the basal ganglia bilaterally, nkh is associated with specific unilateral neuroimaging findings even in the absence of movement disorders or focal neurological deficits. a 32 year old male with a history of seizure disorder due to mesial temporal lobe sclerosis presented with altered mental status after a lamotrigine overdose. he had consumed 9.6 g of the drug. he was awake and alert at presentation. urine toxicology was negative. initial creatine kinase (ck) was 1478 iu/l and peaked at 5974 iu/l; his creatinine was 1.3 mg/dl. the lamotrigine level went from 14 mcg/ml to 38.6 mcg/ml after 18 hours. four days after admission it was 31 mcg/ml. a head ct at admission was negative. 
Despite initial alertness, he developed profound encephalopathy with agitation and rigidity, requiring heavy sedation, induced paralysis, and intubation. This in turn led to hemodynamic instability, which, along with persistently elevated lamotrigine levels, prompted initiation of continuous veno-venous hemodiafiltration (CVVHDF) on hospital day 5. The lamotrigine level declined to 13.3 mcg/mL within 24 hours, the encephalopathy and rigidity resolved, and he was extubated. To our knowledge, this is the first reported case of lamotrigine toxicity managed with CVVHDF. Overdoses up to 15 g have been reported and can even result in death. While cleared hepatically, the half-life of lamotrigine is approximately twice as long in patients with chronic renal failure. In a small series of 6 patients with renal failure, approximately 20% of lamotrigine was reported to be removed by hemodialysis. We applied this principle to our patient. Our experience suggests that augmenting drug clearance with dialysis may reduce the time on mechanical ventilation and the need for higher doses of sedatives, and improve time to discharge. CVVHDF should be considered a supplemental treatment option for lamotrigine toxicity.

Traumatic brain injury (TBI) complicated by percutaneous coronary intervention (PCI) remains a significant clinical dilemma. Dual anti-platelet therapy (DAPT) is standard after PCI, but may contribute to progression of TBI. Novel antiplatelet drugs with ultra-short half-lives, such as the P2Y12-adenosine receptor antagonist cangrelor, may provide added clinical flexibility in avoiding TBI-associated hematoma progression, particularly in the absence of reversibility options. Case report: we report a 53-year-old female who presented to the ED after a syncopal episode with a fall down a flight of stairs. An EKG was obtained demonstrating inferior wall STEMI. Signs of head trauma included facial and scalp contusions, and bloody otorrhea. Initial GCS was 14.
A non-contrast head CT demonstrated tSAH and contusions of the bilateral frontal lobes and left temporal lobe, and a non-displaced fracture of the left temporal bone. Neurosurgery, interventional cardiology and critical care were consulted. The patient developed signs of cardiogenic shock related to the STEMI and was taken emergently to the cath lab. Successful revascularization of a proximal RCA occlusion was achieved. Heparin was given per protocol, and aspirin and cangrelor were administered post-PCI. The cath lab course was complicated by tonic-clonic seizures requiring intubation. Repeat head CT demonstrated blossoming of the bifrontal contusions, trace subdural hematoma development and increased tSAH conspicuity. The DAPT infusion was continued, and subsequent imaging was stable, allowing transition to ASA and clopidogrel. She survived with only minor disability. Newer generation P2Y12 inhibitors can be administered intravenously with reliable platelet inhibition similar to older P2Y12 receptor inhibitors. With rapid reversibility upon discontinuation, their utilization should be considered any time PCI complicates TBI.

Cerebral air embolism (CAE) is a rare but potentially fatal entity with high morbidity and mortality, commonly seen secondary to iatrogenic causes such as neurosurgical procedures and vascular surgeries, as well as deep-sea diving. CAE after esophagogastroduodenoscopy (EGD) is extremely uncommon. We present a rare case of CAE post-EGD resulting in diffuse cortical infarction. An 80-year-old man underwent an elective EGD for esophageal stricture with biopsy and balloon dilatation. The patient did not wake up after the procedure. On initial exam, the patient was comatose, Glasgow Coma Scale 4T with decerebrate posturing. Computed tomography (CT) revealed multiple foci of cerebral air embolism. CT angiogram of the brain was negative.
Diffusion weighted imaging and apparent diffusion coefficient sequences on magnetic resonance imaging (MRI) showed diffuse, global bi-hemispheric cortical infarction. CT chest showed pneumomediastinum. Only 21 cases of CAE from EGD have been reported in the literature prior to this case; 5 received hyperbaric oxygen therapy (HBO), and 12 patients had a documented patent foramen ovale (PFO) or some form of arteriovenous (AV) shunt. Presence of AV shunts/PFO and therapeutic endoscopic procedures providing vascular communication as well as a pressure gradient are all factors facilitating air embolism associated with EGD. HBO therapy has been shown to improve outcomes in CAE patients; initiating therapy > 6 hours after the insult and early, significant ischemic changes seen on CT/MRI prior to starting therapy were strong predictors of poor outcomes. Our patient did not have a documented echocardiogram with a shunt study prior to the EGD. CAE after EGD causing global cerebral bi-hemispheric ischemia as seen in our case is extremely rare. HBO has been shown to improve outcomes. Time to treatment > 6 hours and early CT/MRI changes suggest poor outcomes. Studies do not recommend screening for PFO or AV shunts prior to every EGD.

Research Abstract Program of the 2011 ACVIM Forum, Denver, Colorado, June 15-18, 2011. J Vet Intern Med. doi: 10.1111/j.1939-1676.2011.0726.x

Hypertrophic cardiomyopathy (HCM) is the most commonly observed myocardial disease in cats.
Beta-blockers and calcium channel inhibitors are frequently administered to cats with preclinical HCM despite the fact that neither drug category has been proven to slow disease progression or improve survival. Ivabradine (Procoralan, Servier, France) is a novel negative chronotropic agent used in the treatment of ischemic heart disease in people. Little is known about its efficacy and safety in cats. The purpose of this study was to determine the short-term effects of ivabradine on heart rate (HR), blood pressure, left ventricular (LV) systolic and diastolic function, left atrial (LA) performance, and clinical tolerance in healthy cats after repeated oral doses. Ten healthy laboratory cats were involved in the present study. Physical examination, systolic blood pressure measurement, and transthoracic echocardiography were performed in all cats at baseline and after oral administration (4 weeks each) of ivabradine (0.3 mg/kg q12h) and atenolol (6.25 mg/cat q12h; 1.0-1.7 mg/kg) in a prospective, double-blind, randomized, active-control, fully crossed study. A priori non-inferiority margins for the effects of ivabradine compared to atenolol were set at 50% (f = 0.5) based on predicted clinical relevance, observer measurement variability, and in agreement with FDA guidelines. Variables were compared by use of 2-way repeated measures ANOVA. Ivabradine was clinically well tolerated with no adverse events observed. HR (ivabradine, p < 0.001; atenolol, p < 0.001; ivabradine vs. atenolol, p = 0.721) and rate-pressure product (ivabradine, p < 0.001; atenolol, p = 0.001; ivabradine vs. atenolol, p = 0.847) were not different between treatments. At the dosages used, ivabradine demonstrated more favorable effects than atenolol on echocardiographic indices of LV systolic and diastolic function and LA performance.
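The non-inferiority logic described above (margin f = 0.5, i.e. the test drug must preserve at least half of the reference drug's effect) can be illustrated with a minimal sketch. The function name and the confidence-interval numbers below are hypothetical, not taken from the study.

```python
def non_inferior(effect_ratio_ci_lower: float, f: float = 0.5) -> bool:
    """Declare non-inferiority when the lower confidence bound of the
    test-drug/reference-drug effect ratio preserves at least the
    fraction f of the reference effect (illustrative sketch only)."""
    return effect_ratio_ci_lower > f

# Hypothetical example: the 95% CI for the ivabradine/atenolol
# heart-rate-reduction ratio is (0.62, 1.10).
print(non_inferior(0.62, f=0.5))  # True: lower bound exceeds the 0.5 margin
```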
Ivabradine is non-inferior to atenolol with regard to effects on HR, rate-pressure product, LV function, LA performance, and clinical tolerance. Clinical studies in cats with HCM are needed to validate these findings and further assess safety.

The aim of this study was to compare outcome from CPA in dogs following initial administration of either epinephrine or vasopressin during cardiopulmonary resuscitation (CPR). Dogs having CPA in the ER or ICU of a university hospital were randomized to receive either IV epinephrine (0.01-0.02 mg/kg) or vasopressin (0.5-1.0 U/kg) in a blinded fashion immediately following establishment of IV access, and again three minutes later. A standardized CPR protocol was followed. Other vasopressors were not permitted during the six-minute study period; at the end of the study period, additional CPR interventions were at the discretion of the managing clinician. The primary end point was return of spontaneous circulation (ROSC) within the study period; secondary end points included ROSC at any point, survival to 20 minutes, and survival to one hour. Sixty dogs completed the study, 31 received epinephrine and 29 received vasopressin. ROSC within six minutes was 53% (13 vasopressin, 19 epinephrine; p = 0.2), and ROSC at any time was 60% (15 vasopressin, 21 epinephrine; p = 0.2). Survival to 20 minutes was 32% (6 vasopressin, 13 epinephrine; p = 0.077), and survival to one hour was 18% (2 vasopressin, 9 epinephrine; p = 0.027). Five dogs survived to 24 hours, one survived to hospital discharge. Of animals dying after ROSC, 13/35 were euthanized and 22/35 rearrested. No advantage of routine substitution of vasopressin for epinephrine was seen for ROSC; a small survival advantage at one hour was seen in the group receiving epinephrine. The study also demonstrated that prospective clinical CPR research in animals is both possible and practical.

Three dogs were evaluated in 3 phases. Phase 1: single-dose diltiazem XR at approximately 6 mg/kg PO.
Phase 2: the same dose q12h for 4.5 days. Phase 3: after a 14-day wash-out, the single-dose protocol was repeated using cut tablets to assess the effect on extended-release properties. Blood pressure (BP), 6-lead ECG, echocardiogram, and 24h ambulatory ECG were performed at baseline and at the conclusion of Phase 2. Blood samples and BP were obtained 0, 1, 2, 4, 8, 12, and 24 h after the final dose. Peak median plasma diltiazem concentrations (mcg/mL) measured by HPLC for each phase were 0.0979, 0.016, and 0.07, respectively. Diltiazem concentrations were below the limit of detection in the majority of samples in Phase 2. Median diltiazem concentration reached purported therapeutic concentrations (0.05-0.2 mcg/mL) by 2 h post-pill in Phase 1 and 1 h post-pill in Phase 3. Therapeutic concentrations were maintained for 24 h in Phase 1, but only 2 h in Phase 3. Median BP (mmHg) was 142.7 at baseline and 126.0 at peak concentration in Phase 1. Median heart rate (bpm) was 127.1 at baseline. 24h ambulatory ECG analysis revealed a median hourly heart rate of 101.5 at baseline and 102 during Phase 1. Median heart rate at peak concentration in Phase 1 was 127.5. Lack of detectable plasma diltiazem during Phase 2 may be due to up-regulation of drug metabolism via P-glycoprotein (ABCB1-1) mutations. Ongoing data collection and analysis will include mutation testing.

Adiponectin (ADPN) is a cytokine produced by fat cells which has been shown to be correlated with adverse cardiac conditions in humans. In the heart, adiponectin activates several pro-survival reactions, including the AMPK pathway and COX2 receptors, which protect the heart following ischemic injury. Recent studies have shown that higher levels of ADPN influence cardiac remodeling signaling, inhibiting protein synthesis and suppressing pathological cardiac growth. In humans, ADPN plasma levels rise with decreased activity of the sympathetic nervous system, and β-adrenergic agonists inhibit ADPN at the level of gene expression.
In contrast, C-reactive protein (CRP), a marker of systemic inflammation, is elevated in humans with congestive heart failure (CHF) and correlates with the severity of disease. First, we hypothesized that dogs with CHF would have reduced ADPN and elevated CRP compared to normal dogs and that cytokine concentrations would predict severity of CHF. Second, we hypothesized that ADPN receptor-1 (R1) and ADPN protein would be elevated in the myocardium of CHF dogs, reflecting a compensatory process. We collected serum from 32 dogs (18 healthy and 18 CHF). Circulating adiponectin and CRP levels were quantified using a mouse/rat adiponectin ELISA and a canine CRP ELISA. We found lower mean CRP concentrations in normal dogs (3.5 ± 2.3 mg/mL) than in dogs with CHF (17.9 ± 17.1 mg/mL); however, the results were not statistically significant due to the large variability seen among the CHF dogs (p = 0.07). We found greater mean ADPN concentrations in normal dogs (11.46 ± 3.3 mg/mL) than CHF dogs (9.2 ± 3.2 mg/mL) (p = 0.05). In general, the greater the severity of the heart failure, the lower the level of serum ADPN.

The purpose of this study was to determine if there are any clinically important differences between the approaches (including devices) used in non-invasive transvascular (interventional) closure of patent ductus arteriosus (PDA) in dogs in our institution. Initial and follow-up records from all dogs (n = 112) that underwent attempted transvascular PDA occlusion from January 2006 to December 2009 were examined. Dogs were placed into 4 groups depending on the device and route of vascular access (transvenous or transarterial). Group 1: Amplatz Canine Ductal Occluder (ACDO), transarterial, 36 dogs; Group 2: Gianturco or MReye Flipper detachable embolization (Flipper) coil, transarterial, 38 dogs; Group 3: Amplatzer Vascular Plug (AVP), transarterial, 23 dogs; Group 4: Flipper coil, transvenous, 15 dogs.
Statistical comparisons were made using the Kruskal-Wallis test, with Mann-Whitney tests to compare pairs of groups when significance was detected; p < 0.05 was considered significant. There was no significant difference in age between the 4 groups. There was a significant difference in body weight between groups, with dogs receiving a coil either transarterially or transvenously (Groups 2 and 4) being significantly smaller than dogs receiving an ACDO or AVP. This was by design, since the ACDO and AVP cannot be used in small dogs. Overall, the success rate of the total procedure (including vascular access and satisfactory PDA occlusion) was high (92%), with success rates being comparable between groups (87-97%). There was a significant difference in complication rate between groups (p < 0.0001), with the ACDO group having a markedly lower complication rate than the remaining groups (3% for ACDO versus 24-33% for the other groups). Total fluoroscopy time ranged from 3 to 78 minutes (median 8 minutes). Fluoroscopy time for the transvenous method was significantly longer (median 13 minutes; range 10-78 minutes) than in the remaining groups (median 6 minutes; range 3-33 minutes) (p < 0.0001). The number of dogs with residual flow immediately following the procedure and 24 hrs later was significantly lower in the ACDO group than in the remaining groups (2 dogs from Group 2 and 3 from Group 3 had moderate persistent flow, while 1 dog from Group 2 and 1 from Group 4 had severe persistent flow 24 hours after the procedure). The ACDO appears superior in ease of use, complication rate, completeness of occlusion and fluoroscopy time compared with the other devices. The remaining limiting factor with this device is patient size. Until a smaller ACDO device is marketed, coils remain the only choice for interventional closure in very small dogs (< 2.5 kg). Previously presented at the University of California Davis, House Officers Seminar Day.
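As a rough illustration of the pairwise Mann-Whitney comparison used above, the U statistic can be computed from pooled ranks. This is a generic sketch with invented fluoroscopy times, not the study's data or analysis code (which used dedicated statistical software).

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples,
    using midranks for ties (statistic only, no p-value)."""
    pooled = sorted(x + y)

    def midrank(v):
        # 1-based midrank of value v in the pooled sample
        first = pooled.index(v) + 1
        count = pooled.count(v)
        return first + (count - 1) / 2

    r1 = sum(midrank(v) for v in x)          # rank sum of sample x
    u1 = r1 - len(x) * (len(x) + 1) / 2      # U for sample x
    u2 = len(x) * len(y) - u1                # U for sample y
    return min(u1, u2)

# Hypothetical fluoroscopy times (minutes): transvenous vs. transarterial
print(mann_whitney_u([13, 15, 20, 78], [6, 5, 8, 3]))  # 0.0
```

A U of 0 means the two samples do not overlap at all, which is why well-separated groups such as the fluoroscopy times above yield small p-values.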
Subvalvular aortic stenosis (SAS) is one of the most commonly reported canine congenital heart defects and is inherited in Newfoundland dogs and human beings. The Golden Retriever and Rottweiler are breeds over-represented among dogs with subvalvular aortic stenosis; however, a genetic cause of this disease in these breeds has not been described. We performed genome-wide association analysis in both normal and SAS-affected Rottweilers and Golden Retrievers to identify chromosomal regions of interest that could implicate a causative mutation by high-density single nucleotide polymorphism (SNP) array. 48 (24 unaffected/24 affected) adult Golden Retrievers and 48 (20 unaffected/28 affected) adult Rottweilers were included in this study. Criteria for affected included a subcostal continuous-wave Doppler aortic velocity ≥ 2.5 m/s and the presence of a left basilar systolic ejection murmur; criteria for unaffected included a Doppler aortic velocity ≤ 1.8 m/s. DNA samples were obtained from anticoagulated blood. Genotypes were obtained using high-density (173,662) SNP arrays, and genome-wide association with SAS was evaluated for each breed. The significance cut-off was set at p = 5 × 10⁻⁵, and all SNPs meeting this criterion were plotted within each breed and compared across breeds using PLINK. Affected Golden Retriever data implicate the most significant region of genetic variation on chromosome 21 at location 27384260 (p = 1.15 × 10⁻⁵; odds ratio 7.58), with 11 other significant surrounding SNPs. Affected Rottweiler data also implicate the most significant region of genetic variation on chromosome 21 at location 27895300 (p = 8.98 × 10⁻⁶; odds ratio 23.39), with 3 other significant surrounding SNPs. Other regions of statistical significance were on chromosomes 4 and 7 in the Golden Retriever and 11 and 38 in the Rottweiler.
Genome-wide association with subvalvular aortic stenosis in the Golden Retriever and Rottweiler implicates overlapping chromosomal regions of interest for causative mutations on chromosome 21. The different secondary chromosomal regions of interest (chromosomes 4 and 7 in Golden Retrievers; 11 and 38 in Rottweilers) support the known familial nature of this disease within different breeds and may suggest the presence of multiple mutations or breed-specific disease modifiers. These data highlight the need for candidate gene evaluation on chromosome 21 in Golden Retrievers and Rottweilers with SAS.

Heart valves share developmental signaling pathways with bone and cartilage. Degenerative aortic valve disease in humans is characterized by valve stenosis and calcification. Recent evidence suggests that degenerative aortic valves undergo pathologic processes that mimic osteogenesis. Degenerative mitral valves in dogs and humans are characterized by valve regurgitation, and rarely undergo calcification. We tested the hypothesis that canine and human degenerative mitral valves might be undergoing pathologic processes that mimic chondrogenesis. To test this hypothesis, expression of bone morphogenic protein 2 (BMP2), a chondrogenic growth factor; SOX9, a chondrogenic transcription factor; aggrecan, a proteoglycan abundant in cartilage; and type II collagen were evaluated utilizing immunohistochemistry. Normal canine mitral valves, different stages of canine degenerative mitral valves (early, intermediate, and late), and late-stage human degenerative mitral valves were studied. Canine and human degenerative mitral valves showed focal areas that co-expressed all four markers of chondrogenic signaling and phenotype. Valve interstitial cells and surrounding extracellular matrix in these focal areas adopted a morphologic appearance reminiscent of cartilage. Focal chondrogenesis was present in all stages of canine degenerative mitral valves, but not in normal canine mitral valves.
Focal areas of chondrogenesis did not coincide with nodular areas of glycosaminoglycan accumulation on the leaflet edge, but rather seemed to occur at points of chordae attachment to leaflets. In conclusion, canine and human degenerative mitral valves undergo pathologic processes that mimic chondrogenesis. This finding suggests that mitral valve degeneration may be recapitulating developmental signaling pathways shared by heart valves and cartilage. The triggering events for chondrogenesis in mitral valves remain unknown, as does the reason why aortic and mitral valves appear to undergo different pathologic processes. The fact that humans exhibit degeneration of both the aortic and mitral valve, while dogs commonly exhibit only the latter, could eventually provide insight into both processes.

Arrhythmogenic right ventricular cardiomyopathy (ARVC) is a familial cardiomyopathy characterized by right ventricular fibrofatty infiltration and ventricular ectopy of left bundle branch block morphology (VPC). A deletion in the striatin gene has been associated with ARVC in at least some Boxer families. Syncope and sudden death (SD) occur in some affected dogs, although many affected dogs survive for years. The objective of this study was to define clinical characteristics of ARVC in Boxers that experienced SD, and to compare them to those of a contemporaneous group of ARVC Boxers that had not died suddenly (NSD). Data for both groups were collected from adult Boxers enrolled in a long-term prospective study of ARVC in which echocardiograms and 24-hour ambulatory ECGs (AECG) are evaluated annually. AECGs are quantitated for VPC numbers and arrhythmia grade (1-4). ARVC diagnosis requires at least 100 VPCs/24 hours in the absence of other disease. Forty-three adult Boxers that entered the study had died suddenly at the time of analysis (SD defined as the absence of observed clinical signs within 24 hours prior to an unexpected and unexplained death).
Striatin genotype was available for 28 of the 43 SD dogs (17 heterozygotes, 11 homozygotes); 19 were female (9 intact) and 24 were male (14 intact). SD occurred at a mean age of 8 years (range, 1-12); 27 SD dogs (63%) had no prior history of syncope. Twelve SD dogs (28%) were on antiarrhythmics at the time of death (metoprolol (1), sotalol (4), amiodarone (1), procainamide (2), mexiletine & atenolol (3), atenolol (1)). Eleven SD dogs (25%) had decreased myocardial systolic function, defined as a shortening fraction (%FS) < 25% (range 11-45, mean = 27), on the most recent echocardiogram prior to SD. Median VPCs/24 hours on annual AECG was 7,700 (range 106-91,000), with a median arrhythmia grade of 4 (range 2-4). Twenty-one contemporaneously entered ARVC Boxers that had survived to at least the median age of the SD group without SD were available for comparison; 15/21 were genotyped (10 heterozygous, 1 homozygous, 4 negative), 13 were female (3 intact) and 8 male (2 intact). Twelve NSD dogs (57%) had no prior history of syncope. Median NSD group age was 10 years (range, 8-13); 11/21 (52%) were on antiarrhythmics (sotalol (9), mexiletine & sotalol (1), mexiletine & atenolol (1)). One NSD dog had decreased %FS (NSD group %FS range 20-42, mean = 32). The NSD median number of VPCs was 5,342 (range 622-62,622); median arrhythmia grade was 4 (range 2-4). Striatin genotype was significantly associated with SD. No significant differences were found between groups with respect to VPC numbers or arrhythmia grade. Shortening fraction was significantly lower in the SD group (p < 0.01). SD in ARVC appears to be associated with the presence of the striatin mutation and reduced %FS; it does not appear to be associated with the number of VPCs or arrhythmia grade.
Coughing in the small-breed dog may be related to cardiac causes associated with myxomatous mitral valve degeneration (MMVD), including pulmonary edema and compression of the mainstem bronchus by a severely enlarged left atrium, or to respiratory causes such as tracheal and/or bronchial collapse or chronic bronchitis. The purpose of this study was to evaluate the association between left atrial enlargement and large airway collapse in dogs with MMVD and chronic cough. We hypothesized that airway collapse was independent of the degree of left atrial enlargement. Twelve dogs with MMVD and a chronic cough in the absence of congestive heart failure were prospectively evaluated with thoracic and cervical radiography, echocardiography, fluoroscopy, bronchoscopy and bronchoalveolar lavage (BAL). Group 1 dogs (n = 8) had moderate to severe left atrial enlargement based on an echocardiographically calculated left atrial:aortic surface area [LA:Ao(a)] > 6. Group 2 dogs (n = 4) had no to mild left atrial enlargement [LA:Ao(a) ≤ 6]. The site and severity of airway collapse were graded on bronchoscopy, and BAL cytology was assessed for evidence of inflammation or infection. The occurrence of bronchoscopic abnormalities was compared between groups using Fisher's exact test; p < 0.05 was considered significant. Age and body weight did not differ between groups. Left atrial size was interpreted radiographically as moderately to severely enlarged in 7 of 8 dogs in Group 1 and as moderately enlarged in 2 of 4 dogs in Group 2. Fluoroscopy revealed variable degrees of airway collapse during normal respiration and induced cough in both groups. Radiography and fluoroscopy were not accurate in identifying the site and degree of collapse in either group when compared to bronchoscopy. Cervical tracheal collapse was identified during bronchoscopy in both Group 1 (2 of 8) and Group 2 (3 of 4) dogs but was subjectively less severe in Group 1 dogs.
Bronchial collapse > 50% was evident at multiple sites in both groups of dogs, with no difference between groups. All dogs had suppurative and/or lymphocytic inflammation on airway cytology. Infection was not present in either group of dogs, although non-specific light bacterial growth was detected in 6 of 8 Group 1 dogs and 1 of 4 Group 2 dogs (p = 0.22). Preliminary results failed to identify an association between left atrial enlargement and airway collapse in dogs with MMVD but did suggest that airway inflammation is common in affected dogs. Further studies are needed to identify factors contributing to airway collapse in dogs with and without MMVD.

Atenolol is often used empirically in cats with asymptomatic HCM, even though clinical and experimental evidence of efficacy is lacking. Cardiac biomarkers play a critical role in the early detection of subclinical cardiac disease, in the prediction of long-term prognosis, and in monitoring the response to therapy in humans. We hypothesized that serum concentrations of the biomarkers N-terminal pro-brain natriuretic peptide (NT-proBNP) and cardiac troponin I (cTnI) would improve following chronic oral administration of atenolol to asymptomatic cats with HCM. Six Maine Coon or Maine Coon cross cats with severe HCM from the research colony at UC Davis were administered atenolol (12.5 mg PO twice a day) for 30 days. No cat had severe left ventricular dynamic outflow tract obstruction due to systolic anterior motion of the mitral valve. The concentrations of NT-proBNP and cTnI were assayed prior to drug administration and on the last day of drug administration.
There was no statistically significant difference identified in NT-proBNP [median before: 394 pmol/L (range: 71-1500 pmol/L), median after: 439 pmol/L (range: 24-1500 pmol/L); p = 0.63] or cTnI [median before: 0.24 ng/mL (range: 0.10-0.97 ng/mL), median after: 0.28 ng/mL (range: 0.09-1.0 ng/mL); p = 0.69] concentrations before and after drug administration using the Wilcoxon matched-pairs test. The cTnI finding suggests that atenolol does not reduce chronic myocyte death in cats with HCM. The lack of improvement in NT-proBNP suggests that atenolol does not improve myocardial wall stress in cats with HCM. A clinical trial is warranted to confirm or refute the findings from this study.

Therefore, leptin gene expression was investigated in blood samples of dogs with congestive heart failure (CHF; n = 8) in comparison to dogs presented for cardiac screening (n = 8) without abnormalities. Additionally, myocardial samples (interventricular septum, right and left atrium and ventricle) of 10 dogs with no cardiac abnormalities (controls), seven dogs with acquired and three with congenital cardiac diseases were investigated using quantitative RT-PCR. Leptin blood levels were significantly higher in dogs with CHF in comparison to dogs without disease (p = 0.013). There was an association with gender, with higher myocardial leptin levels in female dogs with cardiac diseases (p = 0.001). Differences between cardiac regions were present (p < 0.05), and cardiac disease resulted in an increase in atrial leptin levels in both sexes (p = 0.006). Interestingly, a significant reduction of myocardial leptin was present in dogs with congenital cardiac diseases (p = 0.016), whereas acquired cardiac diseases resulted in an increase in leptin (p = 0.035) in comparison to controls. These results suggest that the heart might be a target of leptin action in the dog, and myocardial leptin production might play a role in regulating cardiac function in an auto- and paracrine manner.
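The Wilcoxon matched-pairs comparison used for the before/after biomarker values above can be sketched as follows. Both the implementation and the toy numbers are illustrative, not the study's data or code.

```python
def wilcoxon_w(before, after):
    """Wilcoxon signed-rank statistic W: rank the absolute paired
    differences (zero differences dropped, midranks for ties) and
    return the smaller of the positive- and negative-rank sums."""
    diffs = [a - b for b, a in zip(before, after) if a != b]
    absd = sorted(abs(d) for d in diffs)

    def midrank(v):
        # 1-based midrank of |difference| v among all |differences|
        first = absd.index(v) + 1
        count = absd.count(v)
        return first + (count - 1) / 2

    w_pos = sum(midrank(abs(d)) for d in diffs if d > 0)
    w_neg = sum(midrank(abs(d)) for d in diffs if d < 0)
    return min(w_pos, w_neg)

# Hypothetical paired NT-proBNP values (pmol/L) for 5 cats
print(wilcoxon_w([394, 210, 880, 120, 500], [439, 200, 900, 130, 480]))  # 5.0
```

A small W relative to the total rank sum indicates a consistent directional shift; W near half the total rank sum, as in the abstract's non-significant p-values, indicates no systematic before/after change.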
Predicting risk of CHF in asymptomatic dogs with mitral valve disease (MVD) is challenging. We examined the ability of NT-proBNP to identify asymptomatic dogs at high risk for CHF. Dogs with ISACHC-1b disease (LA:Ao > 1.6) were prospectively recruited; dogs with current or previous CHF or diuretic therapy were excluded. Physical examination, radiography, echocardiography, and NT-proBNP were performed at 2-9 month intervals for 66 dogs (median follow-up = 276 d, range 17-1334 d). Thirty-one patients reached a study endpoint of radiographic pulmonary edema; 35 remained asymptomatic. Parameters from the visit immediately previous to the onset of CHF (future-CHF) or prior to the most recent visit without CHF (remain-asympt) were analyzed. Median NT-proBNP of future-CHF (3001 pmol/L, IQR 2255-3001) was significantly different from remain-asympt (1600 pmol/L, 984-2863; p = 0.0004). Median time to CHF in the future-CHF group was 86 d. Groups also differed in median LA:Ao (future-CHF 2.20 [2.00-2.50]; remain-asympt 1.96 [1.85-2.13], p = 0.0014), VHS (remain-asympt 11.3 [10.75-12.0], p = 0.003), and LVIDd:Ao (remain-asympt 2.33 [2.15-2.64], p = 0.0001). ROC analysis to predict whether CHF would occur prior to the next visit yielded AUC = 0.753 (95% CI, 0.632-0.851). Sensitivity was 90.3% or 77.4% and specificity 48.6% or 68.6% for NT-proBNP > 1490 pmol/L or > 2150 pmol/L, respectively. The mean increase in NT-proBNP between the penultimate visit and two visits prior to endpoint was 1440.1 pmol/L for future-CHF vs. 110.5 pmol/L for remain-asympt. Within 6 months, 4.5%, 10.7%, 17.4%, and 36.7% of dogs with NT-proBNP < 1000 pmol/L, in the two intermediate concentration ranges, and > 3000 pmol/L, respectively, developed CHF. NT-proBNP and heart size helped assess risk of CHF in asymptomatic MVD. Increasing the assay's upper limit of detection would likely improve the utility of NT-proBNP.
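The sensitivity/specificity figures quoted for the NT-proBNP cut-offs follow directly from a 2x2 confusion table. The helper below is a generic sketch; the counts are not taken from the paper, merely chosen to be arithmetically consistent with 90.3%/48.6% given 31 future-CHF and 35 asymptomatic dogs.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts consistent with the reported figures for one cut-off:
# 28 of 31 future-CHF dogs above threshold, 17 of 35 asymptomatic below it.
se, sp = sens_spec(tp=28, fn=3, tn=17, fp=18)
print(round(se * 100, 1), round(sp * 100, 1))  # 90.3 48.6
```

Raising the cut-off trades sensitivity for specificity (here 77.4%/68.6% at the higher threshold), which is exactly the trade-off the ROC curve in the abstract summarizes.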
we hypothesised that piiinp concentrations would vary according to the degree of remodelling demonstrable in dogs with naturallyoccurring myxomatous mitral valve disease (mmvd). serum piiinp concentrations (mg/ml) were measured in dogs with mmvd and healthy controls using a validated commerciallyavailable radioimmunoassay. results are reported as (mean ae sd). non-normally distributed variables were logarithmically transformed. comparisons of continuous variables were made between groups using t-tests and one-way anovas with tukey's post-hoc comparisons. univariable analyses were used to evaluate associations between piiinp and clinical characteristics (age, breed [cavalier king charles spaniel (ckcs) yes/ no], sex, weight, heart rate [measured from ecg], treatment with acei [yes/ no], treatment with diuretics [yes/ no] and echocardiographic measurements [la/ao, lvedd/ lvfwd, lveddn, lvesdn]). multivariable analysis was initially performed with all dogs included and then repeated excluding all dogs receiving therapy. dogs with mmvd were divided into those with no cardiomegaly (nc) (la/ao o 1.5 and lveddn o 1.8), those with cardiomegaly (la/ao ! 1.5 and/ or lveddn ! 1.8) but no clinical signs (c) and those dogs with cardiomegaly requiring treatment for congestive heart failure (chf). one hundred and fifty-four dogs with mmvd and 23 control dogs were studied. there was no difference in age (p 5 0.870) or weight (p 5 0.606) between the mmvd and control groups. there was a significant difference in serum piiinp (p 5 0.034) between normal (11.2 ae 3.66), nc (11.6 ae 4.57), c (10.1 ae 3.44) and chf (9.4 ae 3.06) groups. post-hoc comparisons demonstrated a difference between nc and chf groups (p 5 0.038). there was no difference in serum piiinp between genders (p 5 0.228). in the univariable analysis ckcs (yes/ no) (p 5 0.016) was positively associated with serum piiinp. 
age (p < 0.0001), log(la/ao) (p = 0.002) and lveddn (p = 0.002) were negatively associated with serum piiinp. in the multivariable model including all dogs, lveddn (p < 0.0001, β = −4.04 [95% ci = −6.11 to −1.97]), age (p = 0.006, β = −0.28 [95% ci = −0.47 to −0.08]) and ckcs (yes/no) (p = 0.003, β = 2.01 [95% ci = 0.67 to 3.34]) were independently associated with serum piiinp. in the multivariable model including only dogs not receiving therapy (n = 141), lveddn (p = 0.006, β = −4.30 [95% ci = −7.32 to −1.29]), age (p = 0.011, β = −0.31 [95% ci = −0.55 to −0.07]) and ckcs (yes/no) (p = 0.034, β = 1.75 [95% ci = 0.13 to 3.37]) were independently associated with serum piiinp. in conclusion, serum piiinp decreases with age and with increasing lveddn. ckcs have higher serum piiinp measurements independent of age and lveddn, which may reflect a difference in collagen turnover in this breed. left atrial (la) chamber dilation and congestive heart failure (chf) are common consequences of cardiac conditions in cats. in some cases chf is manifest as right-sided chf (r-chf), or pleural effusion; in other cases chf manifests as left-sided chf (l-chf), or pulmonary edema. it is not always readily apparent which cats will develop which form of chf. a general impression has been that la enlargement is associated with the average burden of elevated filling pressures, but little attention has been paid to the function of the la chamber itself. since chf is classically preceded by abnormal atrial chamber dilation and alterations in atrial chamber function, we want to understand how these changes may help us manage or predict chf in the cat. we hypothesized that cats manifesting r-chf have la failure with the la acting primarily as a conduit, resulting in greater pulmonary hypertension, whereas l-chf cats maintain some booster pump and reservoir function. 
we measured la maximum and minimum areas from right parasternal long-axis four-chamber views on 2d echo, and la m-mode dimensions at maximum, minimum, and beginning of atrial contraction. la area change, fractional shortening, active emptying fraction, and expansion index were calculated from these measurements. right ventricular internal diastolic diameter was also measured on m-mode views. preliminary data revealed that maximum left atrial size is not significantly different between r-chf and l-chf cats on 2d or m-mode measurements, due to high variability. however, total left atrial fractional shortening is significantly reduced in r-chf cats (11.32% ± 4.9) compared to l-chf cats (19.7% ± 5.1) (p = 0.001), and r-chf cats have reduced left atrial active emptying fraction (6.12% ± 3.5) as compared to l-chf cats (13.7% ± 3) (p < 0.001). left atrial expansion ability is poorer in r-chf cats (13.11% ± 6.7) than in l-chf cats (25.0% ± 8) (p = 0.002). these findings may suggest that atrial stiffness and poorer atrial function are associated with a greater degree of pulmonary venous and thus secondary pulmonary arterial hypertension, resulting in pleural effusion (r-chf). right ventricular diameter on m-mode was increased in r-chf cats (4.8 mm ± 2.1) when compared to l-chf cats (2.89 mm ± 0.8) (p = 0.002) and normal cats (2.74 mm ± 0.8) (p = 0.002), which may also be evidence for a greater degree of pulmonary arterial hypertension in these cats. episodic weakness and syncope are common in boxer dogs. reported causes include rapid ventricular tachycardia (vt) and exertion- or excitement-triggered neurally-mediated bradycardia (nmb). the purpose of this retrospective study is to describe the features of presumed nmb in boxers. 
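the left atrial functional indices calculated above (fractional shortening, active emptying fraction, expansion index) are simple percentage-change ratios of the measured dimensions; a minimal sketch, using the standard definitions and hypothetical m-mode measurements:

```python
# sketch of the la functional indices named above; formulas are the
# standard percentage-change definitions, and the dimensions (mm) are
# hypothetical.

def fractional_shortening(d_max, d_min):
    """total la fractional shortening: % decrease from maximal dimension."""
    return 100.0 * (d_max - d_min) / d_max

def active_emptying_fraction(d_pre_a, d_min):
    """% decrease from the dimension at onset of atrial contraction
    (booster-pump function)."""
    return 100.0 * (d_pre_a - d_min) / d_pre_a

def expansion_index(d_max, d_min):
    """% increase of maximal over minimal dimension (reservoir function)."""
    return 100.0 * (d_max - d_min) / d_min

# hypothetical la dimensions: maximal, pre-atrial-contraction, minimal
d_max, d_pre_a, d_min = 16.0, 14.0, 13.0
print(round(fractional_shortening(d_max, d_min), 1))
print(round(active_emptying_fraction(d_pre_a, d_min), 1))
print(round(expansion_index(d_max, d_min), 1))
```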
to be included in the study, each dog must have been overtly healthy with a history of exertion- or excitement-triggered syncope or presyncope; have had a normal echocardiogram (ec); have had absence of vt and fewer than 500 ventricular premature complexes (vpc) on initial and subsequent 24-hour holter recordings; and have been alive and overtly healthy for at least six months following the initial evaluation. a total of 27 boxers were identified. sixteen were male and 11 were female. most (90%) dogs were either less than 4 or more than 7 years of age. most dogs had multiple, but infrequent, episodes, and heart rhythm was documented at the time of an episode in only 8 (30%), and only once (bradycardia) on the first holter recording. owners were instructed to attempt to precipitate episodes. bradycardia-related episodes were subsequently recorded in 5: during the 2nd (1), 3rd (2) or 4th (1) day of 120-hour holter recordings and during the 5th day of an event recording (1). collapse and bradycardia were documented during auscultation in 2 additional dogs. the heart rate during syncope was never documented in 19 (70%) dogs. a presumptive diagnosis of nmb was based on the absence of initial and follow-up ec abnormalities and the presence of no or few vpc during extended ecg monitoring. multiple holter recordings (48-168 hours) were performed in 8 of 19 (42%) dogs, and event monitoring of 5 days (1) and 7 days (1) was performed in 2 additional dogs. in conclusion, documentation of the heart rhythm during episodes of collapse was difficult, accomplished in only 30%, and was unlikely during the first holter recording. in boxers with suspected nmb, extended ecg monitoring and implantable loop recorders may be best for heart rhythm documentation. 
arrhythmogenic right ventricular cardiomyopathy (arvc) is an inherited myocardial disease with high prevalence in the boxer dog population, and is associated with sustained monomorphic ventricular tachycardia, sudden cardiac death, and replacement of myocardium with fatty or fibro-fatty tissue. though several genes have been linked to the disease both in humans and in boxers, the etiology of arvc is still unclear. several mechanisms for the development of arvc have been suggested, including dysfunction of the canonical wnt pathway, which results in an arvc phenotype in the mouse. the canonical wnt pathway has been linked to many cellular functions, including growth and differentiation of adipocytes. with the recent discovery that the gene encoding striatin, a protein involved in wnt signaling, may be involved in the development of boxer arvc, we hypothesized that changes in the wnt pathway may also play a role in the etiology. here, we show changes in the localization and decreased amounts of proteins affiliated with the canonical wnt pathway. afflicted boxers were identified by 24-hour holter monitoring and histopathological examination of the heart. samples from the right ventricle (rv) of 15 arvc-afflicted boxers and 5 unafflicted dogs (1 beagle, 2 mongrels, and 2 german shepherds) were collected, fixed in 10% formalin, processed, treated with antibodies recognizing β-catenin, striatin, and calnexin, and examined using confocal microscopy. western blots were performed on 3 unafflicted rv samples and 2 arvc-afflicted rv samples. frozen tissue samples were homogenized in laemmli buffer, and 10 µg of protein was loaded into each well of an 8-16% gradient gel. β-catenin, an integral modulator of the wnt pathway, and striatin were colocalized with the endoplasmic reticulum (er) marker, calnexin. 
in the unafflicted animals, β-catenin localized at sites of cell-to-cell apposition, and striatin localized in a diffuse intracellular pattern, with no detectable localization in the er. in contrast, in the 15 arvc boxers, both β-catenin and striatin were colocalized with calnexin in an er pattern. in the afflicted samples, β-catenin and striatin were not visualized at the intercalated disc and intracellular space, respectively. western blots of striatin and β-catenin revealed no changes in the amount of protein. interestingly, a western blot for the wnt protein revealed a decrease in the amount of protein in arvc samples compared to unafflicted samples. our preliminary data suggest that disturbances of the canonical wnt pathway may play an etiological role in the development of arvc in the boxer dog. there are numerous benefits of omega-3 fatty acid supplementation in human heart disease, including reduction in arrhythmias, decreased incidence of sudden death, and improved survival in heart failure. antithrombotic effects of omega-3 fatty acids have been demonstrated in people, which may be of particular benefit in cats given their risk of thromboembolic complications with cardiac disease. benefits also have been found in canine heart disease, and reduced serum concentrations of the omega-3 fatty acids eicosapentaenoic acid (epa) and docosahexaenoic acid (dha) have been found in dogs with congestive heart failure (chf) secondary to dcm. to the authors' knowledge, no studies to date have investigated fatty acid concentrations in cats with cardiomyopathy. the purpose of this study was to measure serum fatty acid concentrations in normal cats and cats with cardiomyopathy. serum fatty acid concentrations were measured in normal cats and cats with cardiomyopathy using gas chromatography. 
cats with cardiomyopathy and at least mild left atrial (la) enlargement (la-to-aortic ratio > 1.5 on two-dimensional echocardiography from a right parasternal short-axis view) were candidates for study. normal cats had a normal history, physical examination, echocardiogram, packed cell volume, total solids and platelet count. cats with evidence of other systemic disease or those receiving anticoagulants were excluded from the study. normally distributed and skewed data were compared between the cardiomyopathy and control groups with independent t-tests or mann-whitney u tests, respectively. statistical significance was set at p < 0.01. thirty cats with cardiomyopathy (23 neutered males and 7 neutered females) and 27 healthy controls (13 neutered males and 14 neutered females) were enrolled. median age was 8 yr (range, 2-16 yr) in the cardiomyopathy group and 5 yr (range, 1-10 yr) in the control group (p = .009). cats in the cardiomyopathy group were classified in international small animal cardiac health council stages 1b (n = 10), 2 (n = 8), 3a (n = 3) and 3b (n = 9). compared to control cats, cardiomyopathic cats had higher concentrations of palmitic acid (p = .002) and dha (p < .001), and lower concentrations of linoleic acid (p = .005). among cats with cardiomyopathy, there was no significant correlation between any serum fatty acid concentration and left atrial size or age. these findings warrant further investigation into the role of fatty acids in cats with cardiac disease. platelet mapping is an application of thromboelastography that relies on the generation of at least three tracings: ma-thrombin (maximum platelet activity), ma-fibrin (fibrin activity only), and ma-aa or ma-adp (platelet activity not inhibited by arachidonic acid or adp receptor antagonists, respectively). using these three tracings, the % inhibition of platelets can be calculated. 
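the % inhibition derived from the three tracings named above is standard thromboelastography platelet-mapping arithmetic: net agonist-induced platelet activity (ma-adp minus ma-fibrin) is expressed as a fraction of maximal thrombin-driven activity (ma-thrombin minus ma-fibrin), and inhibition is its complement. a minimal sketch with hypothetical ma values (mm):

```python
# sketch of the platelet-mapping % inhibition calculation from the three
# tracings described above; the ma values are hypothetical, in mm.

def percent_inhibition(ma_thrombin, ma_fibrin, ma_adp):
    """% of adp-mediated platelet activity lost relative to the maximal
    (thrombin-driven) response, after subtracting the fibrin contribution."""
    aggregation = (ma_adp - ma_fibrin) / (ma_thrombin - ma_fibrin)
    return 100.0 * (1.0 - aggregation)

print(percent_inhibition(ma_thrombin=60.0, ma_fibrin=20.0, ma_adp=30.0))
```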
the purpose of this study was to evaluate the platelet mapping assay in normal cats and to assess platelet inhibition in cats receiving clopidogrel. employee-owned cats with normal history, physical exam, echocardiogram, thromboelastography, packed cell volume, total solids and platelet count were eligible. clopidogrel (18.75 mg po q24h) was administered for 10 days, with platelet mapping performed on days 0 and 10. platelet mapping values were compared using a paired t-test, with significance set at p < 0.05. seven cats (4 fs, 3 cm, aged 2-10 years) were enrolled. compared to day 0, ma-adp (p < .001) and ma-fibrin (p < .001) were lower on day 10. the latter unexpected result prompted measurement of fibrinogen concentrations at days 0 and 10 in the last 5 of these 7 cats. fibrinogen was not different from day 0 to day 10 in these 5 cats. these results suggest that platelet mapping may be a simple, outpatient clinical tool to measure antiplatelet activity in cats receiving clopidogrel. this clopidogrel dose resulted in significant platelet inhibition as measured by ma-adp in all cats. further studies correlating antiplatelet effects measured by platelet mapping with clinical outcomes in cats with cardiomyopathy are warranted. this study investigated the hemodynamic effects of application of an impedance threshold device (itd) in a canine model of cardiopulmonary arrest. eight laboratory beagles which were part of a separate terminal study were anesthetized and instrumented for continuous measurement and recording of right atrial pressure, arterial pressure and carotid blood flow. following euthanasia, cpr was performed for one minute, then a pause of one minute, followed by a second one-minute period at a compression rate of 100-120/minute; ventilation with 100% oxygen was delivered at eight breaths/min and 15 ml/kg tidal volume. cpr was performed with the itd in place (itd-cpr) and without the itd (s-cpr) for one period each in a randomized fashion, with the rescuer blinded to its application. 
baseline, s-cpr and itd-cpr data were assessed for normality; a kruskal-wallis one-way anova on ranks was used (baseline vs. cpr). when appropriate, a pairwise multiple comparison procedure (dunn's method) was used. percentage of baseline for s-cpr vs. itd-cpr was assessed using the student t-test. the right atrial diastolic pressure was significantly more negative with the itd attached than without (p = 0.02), and the coronary perfusion pressure and carotid blood flow were significantly higher during cpr with the itd than during standard cpr (p = 0.015, p = 0.017). no significant differences in diastolic, mean or systolic arterial pressure or end-tidal co2 were seen. application of the itd resulted in significantly improved hemodynamics during cpr in dogs. clinical evaluation of the device is warranted to examine whether this translates into improved success in resuscitation and survival. left ventricular (lv) systolic dysfunction is a common problem in dogs and can be due to a variety of etiologies. one potential etiology for systolic dysfunction is persistent or paroxysmal tachyarrhythmias, leading to tachycardia-induced cardiomyopathy (ticm). in humans, ticm carries a relatively good prognosis in that remodeling may be reversible with normalization of heart rate. differentiating between primary and secondary tachyarrhythmias in dogs with systolic dysfunction is critical for prognostic purposes, as primary tachyarrhythmias may be associated with a better outcome. the goal of our study was to describe a population of dogs with ticm and to determine if treatment of arrhythmias was associated with reversible cardiac remodeling as indicated by standard echocardiographic parameters. medical records of dogs referred to the cardiology service of the ksu vmth from 2008 to 2010 were reviewed. 
ticm was defined as the presence of severe tachyarrhythmias that were reversible with treatment, systolic dysfunction or ventricular enlargement that improved with treatment of the arrhythmia, or severe tachyarrhythmias and systolic dysfunction in breeds that are atypical for idiopathic dilated cardiomyopathy. exclusion criteria were congenital heart disease, severe mitral regurgitation, and endocarditis. transthoracic echocardiography, thoracic radiographs and electrocardiography (ecg) were performed in all dogs. ventricular enlargement and systolic dysfunction were defined according to standard echocardiographic parameters. arrhythmias were confirmed with a holter monitor in 7 dogs. a total of 12 dogs were included in the study. mean age was 7 years (range 2-12 years), with 8 males (4 intact, 4 castrated) and 4 females (3 spayed, 1 intact). four dogs had pulmonary venous congestion or pulmonary edema and two dogs had ascites. at initial presentation, the mean ± sd values were as follows: heart rate 219 ± 75 bpm, m-mode fractional shortening (fs) 17.29 ± 9.05%, ejection fraction (ef) using the area-length method 30.69 ± 13.45%, and left atrial to aortic root ratio (la/ao) 1.92 ± 0.37. initial mean ± sd m-mode-derived lv internal dimensions corrected for body weight were as follows: diastolic 1.92 ± 0.37 and systolic 1.49 ± 0.38. at least one of the following tachyarrhythmias was identified in each dog: atrial fibrillation (4), supraventricular tachycardia (5), junctional tachycardia (2), and ventricular arrhythmias (3). ten dogs were available for follow-up. seven dogs improved in at least one of the following parameters: resolution of tachyarrhythmia (3), improved systolic function (3). antidiuretic hormone (adh) has been shown to be elevated in humans with congestive heart failure (chf). recently, antidiuretic hormone antagonists were successful during investigational treatment of refractory congestive heart failure in humans. 
adh levels have been only modestly investigated in dogs with cardiac disease, primarily due to the technical difficulty in measuring adh levels via radioimmunoassay. based on the homologous structure of canine and human adh, we aimed first to determine the feasibility of measuring adh in dog plasma using a human elisa kit, and secondly to investigate the level of adh in dogs with congestive heart failure due to acquired cardiac disease. elisa assay kit validation was performed using six healthy dogs with normal clinical and echocardiographic examinations. pooled canine plasma was spiked with synthetic adh, and intra-assay precision, dilutional parallelism, and linearity were assessed. to address the second aim of the study, samples were collected from normal dogs and dogs with heart failure due to one of two types of acquired cardiac disease: chronic degenerative valve disease (cdvd) or dilated cardiomyopathy (dcm). patients underwent clinical, radiographic, and echocardiographic examination to confirm diagnosis, assess severity of disease, and determine presence of pulmonary edema. whole blood was collected into edta tubes containing protease inhibitors and cold-centrifuged, and plasma was stored at −80 °c until analysis. following ether extraction, plasma adh in each sample was measured in duplicate using a human elisa kit. statistical analysis included a d'agostino and pearson test for normality; group results were compared using a nonparametric mann-whitney test. adh was measured in canine plasma using the human elisa kit with acceptable intra-assay precision, linearity, and dilutional parallelism. the intra-assay coefficient of variation was 11%. twenty-four dogs were recruited for the second phase of the study. six normal dogs and twelve dogs with radiographic evidence of pulmonary edema due to either cdvd or dcm were selected for participation. the remaining six dogs were excluded due to lack of definitive radiographic evidence of congestive heart failure. 
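the intra-assay coefficient of variation reported above is the within-run replicate standard deviation expressed as a percentage of the replicate mean; a minimal sketch (the adh readings below are hypothetical, in pg/ml):

```python
# sketch: intra-assay coefficient of variation (cv) for replicate elisa
# measurements, cv = 100 * sd / mean. readings are hypothetical (pg/ml).
from statistics import mean, stdev

def intra_assay_cv(replicates):
    """percent coefficient of variation of within-run replicate wells."""
    return 100.0 * stdev(replicates) / mean(replicates)

readings = [3.4, 3.9, 3.6, 4.1]  # hypothetical within-run replicates
print(round(intra_assay_cv(readings), 1))
```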
median adh values were 3.67 ± 0.38 pg/ml for the normal group (n = 6) and 6.155 ± 0.94 pg/ml for the heart failure group (n = 12). median adh values for the two groups were statistically different (p = 0.0057). our preliminary results indicate that measuring canine adh using a human elisa kit is feasible and provides results with an acceptable coefficient of variation. we also showed that dogs with congestive heart failure due to cdvd and dcm have elevated adh levels in comparison to normal dogs. our findings motivate further investigation to assess the degree of plasma adh level elevation and the possible use of adh antagonists as an adjunct treatment for refractory congestive heart failure in dogs. aortic thromboembolism (ate) occurs in both cats and dogs. whereas ate in cats is strongly associated with structural heart disease and typically has an acute, catastrophic presentation, the pathogenesis and presentation of ate in dogs is less well known or understood. further, an effective antithrombotic strategy for ate in dogs has not been reported. medical records of dogs diagnosed with ate between 2003 and 2010 were examined retrospectively. diagnosis of ate was based on ultrasonography, doppler flow studies, and diminished or absent femoral pulses. dogs were treated with various acute and chronic antithrombotic therapies. the severity of ambulatory dysfunction was graded as none, mild, moderate, severe, or non-ambulatory at presentation and after therapy. a cohort of dogs in this study received a standardized protocol of chronic warfarin therapy with or without antiplatelet drugs. the target international normalized ratio for warfarin therapy was 2 to 3. twenty-six dogs were diagnosed with ate. all had an apparent mural aortic thrombus caudal to the renal arteries, with most having evidence of embolization to the iliac and femoral arteries. none had structural heart disease at diagnosis. twenty dogs (77%) were still ambulatory at diagnosis. 
the median duration of ambulatory dysfunction prior to presentation was 7.8 weeks (range 1 day to 52 weeks). a majority of dogs (58%) had no concurrent conditions at diagnosis. nine dogs (33%) had protein-losing nephropathy. four dogs (15%) were hypothyroid. fourteen dogs were treated with a standard warfarin protocol for a median period of 22.9 months (range 0.5-53 months). eight dogs were treated concurrently with aspirin, 2 dogs were treated concurrently with clopidogrel, and 4 dogs were treated with warfarin only. ambulatory function improved by between 1 and 3 grades in dogs treated with chronic warfarin. the median period until clinical improvement was 13.9 days (range 2-49 days). two dogs treated with chronic warfarin therapy had documented resolution of ate, and 5 dogs had complete resolution of ambulatory dysfunction. none of the 14 dogs treated with chronic warfarin became non-ambulatory, died, or underwent euthanasia because of ate. the median period of freedom from an adverse event was 24.2 months. no serious hemorrhagic events were reported. four dogs were treated with tpa. three of these had an unfavorable outcome. two dogs were ambulatory before tpa and became non-ambulatory after treatment. two dogs underwent surgical thrombectomy. one had a favorable outcome until ate recurred 14 months after surgery. in conclusion, the pathogenesis of ate in dogs is not associated with structural heart disease or left atrial thrombus formation. the presentation tends to be for chronic ambulatory dysfunction. most dogs are still ambulatory at presentation. warfarin, with or without concurrent antiplatelet therapy, is an effective antithrombotic treatment strategy for dogs with ate. little information is known about dmvd in norfolk terriers. through previous work, investigators have encountered norfolk terriers (nt) with echocardiographically apparent dmvd in the absence of a heart murmur. 
in order to more fully understand dmvd in this breed of dog, we sought to characterize findings from the physical and echocardiographic exam, biochemical, biomarker, and nutritional profiles, and select environmental variables from a cohort of apparently healthy nt. overtly healthy nt ≥ 6 yrs old were recruited by 3 different veterinary hospitals and underwent historical, physical, ecg, and 2d/color-flow doppler echocardiographic exams. anterior mitral valve length, maximal thickness, area, and prolapse were measured from 2d images. presence of dmvd was defined as thickened, prolapsing, or flail mitral valve leaflets in the presence of color-flow doppler evidence of mitral regurgitation. blood samples were obtained for serum biochemistry and serotonin, plasma nt-probnp, amino acid profile, c-reactive protein, and cardiac troponin-i. forty-eight dogs were entered into the study (median age, 8 yrs, iqr [7-10]; gender, 29 f, 19 m; median bcs, –). of the 48 dogs, 23 (48%) had murmurs, 2 (4%) had mid-systolic clicks, 11 (23%) had ecg p-pulmonale, and 41 (85%) were deemed to have echocardiographic evidence of dmvd, including 18 nt without a murmur. seven (15%), 28 (58%), and 13 (27%) dogs were classified as isachc 0, 1a, and 1b, respectively. mean indexed echocardiographic mitral leaflet length (p < 0.0001), thickness (p = 0.019), prolapse (p = 0.0005), and la:aod (p = 0.01) were significantly different between isachc 1a/b vs. 0. between isachc 1a/1b and 0 there were no differences in serum amino acids, c-reactive protein, troponin, diet, or environmental factors; however, 6 different amino acids (ala, gly, phe, pro, trp, tyr) were significantly higher in isachc 1b vs. 1a. median serum serotonin was increased in dogs with 1a/b vs. 0 (p = 0.025). dogs whose diets contained some canned food (p = 0.12) and dogs residing in suburban environments (p = 0.03) had higher serotonin concentrations. nt-probnp tended (p = 0.07) to be higher in isachc 1a/1b vs. 0. 
dmvd appears to be relatively common in nt, and echocardiographic changes consistent with mild dmvd can be seen in dogs without a heart murmur. the results of this study establish a foundation of useful information upon which additional prospective studies can be developed. left ventricle (lv) evaluation is one of the most important contributions of echocardiography to the assessment of cardiac function. however, lv analysis can be made from images obtained by different modes and views of the heart. the aim of this study was to compare lv measurements, shortening fraction (sf) and ejection fraction (ef) obtained by four methods: m-mode in short axis, m-mode in long axis, bidimensional mode in short axis, and bidimensional mode in long-axis views of the heart. forty normal adult german shepherds were selected. an echocardiographic study of the lv of each animal was performed by the four methods described above. an ancova test was used to examine the effects of axis, mode, weight and gender on lv measurements. an isolated effect of axis was observed for lv end-diastolic diameter (lvedd), with greater values obtained from short-axis views. there was an isolated effect of mode on ef and sf, with greater measurements derived from bidimensional-mode methods. weight correlated with all linear lv measures in at least one method, but not with ef and sf. weight had a positive effect on lv end-systolic diameter and lv end-diastolic posterior wall thickness in all methods, except for the latter in m-mode short axis. gender had an isolated effect on lvedd, with males showing greater values than females in bidimensional mode in short and long axis. we concluded that normal reference values obtained by different echocardiographic modes and planes should not be used interchangeably. 
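the shortening fraction compared across methods above is the standard percentage change in lv internal diameter from diastole to systole; a minimal sketch, with hypothetical m-mode diameters:

```python
# sketch of the standard m-mode shortening fraction (sf) calculation
# discussed above; the lv internal diameters are hypothetical, in mm.

def shortening_fraction(lvedd, lvesd):
    """sf (%) = (end-diastolic − end-systolic diameter) / end-diastolic."""
    return 100.0 * (lvedd - lvesd) / lvedd

print(round(shortening_fraction(lvedd=42.0, lvesd=28.0), 1))
```

because sf depends only on two linear diameters from a single imaging plane, values measured in different modes or axes are not interchangeable, which is the point of the comparison above.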
abstract c-29: assessment of left ventricular diastolic function by color tissue doppler imaging echocardiography in maine coon cats tested for the mybpc-a31p mutation. hypertrophic cardiomyopathy (hcm), characterized by increased cardiac mass and diastolic dysfunction, is the most common feline heart disease. myocardial analysis by color tissue doppler imaging (tdi) is more sensitive than conventional echocardiography. this study evaluated diastolic dysfunction in various stages of feline hcm. maine coon cats (n = 57) were screened for the mybpc-a31p mutation and examined with both echocardiography and tdi. they were then phenotypically classified as normal (n = 45), suspect for hcm (n = 7) or hcm (n = 5), and genotypically classified as negative (n = 28), heterozygous (n = 26) or homozygous (n = 3). myocardial velocities, measured in the basal and mid-ventricular segments of the interventricular septal wall (ivs) and left ventricular free wall (lvw) and in the radial segment of the left ventricular wall, were compared among the different groups. a significantly decreased (p = 0.01) longitudinal em/am at the basal segment of the ivs was observed in hcm cats compared with suspect and normal cats. a significantly increased (p = 0.01) longitudinal e/em at the basal segment of the ivs was observed in hcm cats compared with suspect and normal cats. and a significantly decreased (p = 0.02) longitudinal sm at the basal segment of the lvw was observed in heterozygous cats compared with negative cats, both without hypertrophy. there was a significant positive correlation between summated early and late diastolic velocities (em+am) and heart rate (p < 0.001), and a positive correlation between sm and em velocities and heart rate (p < 0.01). the mybpc-a31p mutation is not consistently associated with ventricular hypertrophy, and negative cats can also develop hcm. tdi alone is not able to identify cats with the mutation before myocardial hypertrophy. 
diastolic dysfunction occurs in many cats with hypertrophic cardiomyopathy (hcm), but less is known about systolic function in various stages of hcm. myocardial strain analysis by tissue doppler imaging is a noninvasive echocardiographic method to assess systolic function. this study evaluated systolic function in various stages of feline hcm. maine coon cats (n = 57) were screened for the mybpc-a31p mutation and examined with both echocardiography and strain. they were then phenotypically classified as normal (n = 45), suspect for hcm (n = 7) or hcm (n = 5), and genotypically classified as negative (n = 28), heterozygous (n = 26) or homozygous (n = 3). peak myocardial strain, measured in the basal and mid-ventricular segments of the interventricular septal wall (ivs), left ventricular free wall (lvw), left ventricular anterior wall (lvaw) and left ventricular posterior wall (lvpw) and in the radial segment of the left ventricular wall, was compared among the different groups. whereas conventional echocardiography demonstrated an apparently normal contractile state based on fractional shortening, myocardial strain (at the mid-ventricular segment of the ivs) in hcm cats was significantly decreased compared with the normal group (p = 0.01). myocardial strain (at the basal segment of the lvaw) also was significantly decreased in heterozygous cats compared with the negative group (p = 0.00), and was significantly decreased in heterozygous cats compared with the negative group, both without ventricular hypertrophy (p = 0.019). there was a significant negative correlation between strain values and wall thickness (p < 0.05). this method allows detection of abnormal systolic deformation in maine coon cats with the hcm mutation despite apparently normal systolic function. abnormal systolic deformation can already be present in heterozygous cats without hypertrophy and increases with progressive ventricular hypertrophy. 
recently, multiple advanced resting electrocardiographic (ecg) techniques have been applied in humans for detection of cardiac autonomic and repolarisation function. this has improved the diagnostic and/or prognostic value of short-time ecg in the detection of common human cardiac diseases, even before onset of symptoms or changes in the standard ecg. therefore, this study investigates whether advanced ecg can predict the severity of mitral regurgitation (mr) in dogs with myxomatous mitral valve disease (mmvd) and thereby improve the diagnostic value of ecg. the study included 77 privately owned cavalier king charles spaniels (ckcss) (age 6.0 ± 2.7 years; 30 males and 47 females). all dogs were examined by echocardiography and a short-time (3-5 min) high-fidelity 12-lead ecg, with the dog in a resting position and in sinus rhythm. dogs were divided into 5 groups according to the degree of mr, estimated as the percentage of the left atrium area using color doppler mapping (0%; 0% < jet ≤ 15%; 15% < jet ≤ 50%; jet > 50%; jet = 100% and with clinical signs of congestive heart failure). ecg recordings were evaluated via custom software programs to calculate 76 different parameters, including heart rate variability (hrv), qt variability (qtv), t-wave complexity, wave morphology and 3-d ecg. one-way anova identified 21 ecg parameters which were significantly different (p < 0.05) between the 5 dog groups. principal component factor analysis identified a 5-factor model with 83.2% explained variability. qrs dipolar voltage and two repolarization indices of qtv increased significantly with mr severity, whereas total power of the frequency spectrum of the rr interval and the standard deviation of qtv decreased significantly with mr severity. for the 5 selected parameters, the prediction of mr jet value was tested by multiple linear regression. a correlation coefficient (r) of 0.65 indicated that the prediction value was significant (p < 0.01). 
If age was included in the multiple linear regression, the prediction value increased further (r = 0.80). Our results indicate that, for a cut-off criterion of MR jet ≥ 50%, the five selected ECG parameters could predict the severity of MR caused by MMVD in CKCSs in sinus rhythm with a sensitivity of 65% (78% with age included) and a specificity of 98% (92% with age included) (p < 0.05). NT-proBNP concentration is increased in canine patients with heart disease. Relatively little is known about the stimuli for release of NT-proBNP in dogs. Physical activity independent of cardiac disease, and the stress of being in the hospital, could influence NT-proBNP release and affect diagnosis and management of patients. We hypothesized that NT-proBNP concentration in healthy dogs would not exceed the normal reference value (900 pmol/L) following a period of exercise. The goal of this study was to examine whether physical activity could elevate plasma NT-proBNP and cause false-positive results in healthy dogs. The study population included healthy dogs >1 yr of age without a heart murmur or known systemic disease and with a normal 2D/color-flow echocardiographic exam. Plasma samples for NT-proBNP were obtained before, immediately after, and 1 hour after a standardized 5-minute submaximal exercise regimen. The study included 14 dogs (6 females, 8 males) with a median age of 5.3 yrs. There was no statistical difference in median plasma NT-proBNP concentration across the three time points (baseline median, 617 [IQR]; immediately post, ; p = 0.11). The average coefficient of variation of NT-proBNP concentration across the exercise regimen was 23.0 ± 18.9%. In 1 of 14 dogs (7.1%), NT-proBNP increased from 790 to 1054 pmol/L immediately after exercise. The results of this study demonstrate that submaximal exercise does not significantly change median NT-proBNP concentration and that the incidence of false-positive results is low. 
Further studies should investigate the effects of exercise on NT-proBNP concentrations in dogs with heart disease. Obesity is an increasing problem in veterinary medicine. Obese human patients have been shown to present lower levels of natriuretic peptides despite an increased volume and pressure load, which raises the possibility that the natriuretic response is impaired in these individuals. Considering the controversial findings in obese humans, and the lack of studies in dogs, this study evaluated NT-proANP concentration in obese dogs. NT-proANP concentration was determined prospectively in 39 obese dogs (27 females; 12 males; 6-108 months) and in 23 non-obese dogs (controls; 13 females; 10 males; 12-108 months) from a veterinary hospital population. Obesity was determined by body condition score [2 (4/9); 21 (5/9); 1 (6/9); 3 (7/9); 19 (8/9); 18 (9/9)]. Dogs were excluded if they had any primary cardiac disease, renal insufficiency or endocrine disease, or if they were receiving diuretics, vasodilators, antiepileptic drugs or corticosteroids. Commercial kits were used (VetSign Canine Cardio Screen NT-proANP VC3167, Guildhay/Biomedica). The Mann-Whitney test was used for group comparison. Results are presented as median (range; p25 and p75). NT-proANP was significantly lower in obese dogs [413.56 fmol/mL (0.0-1287.14); p25 = 228.673; p75 = 595.205] than in controls [647.98 fmol/mL (34.99-1596.82); p25 = 490.785; p75 = 957.055] (p = 0.004). These results are similar to what has been found in obese humans. Lower levels of natriuretic peptides are also seen in obese heart failure patients. This study provides important information regarding NT-proANP concentration in obese dogs, which should be explored further by characterizing the behavior of natriuretic peptides after weight loss, and also in obese dogs with primary heart disease. 
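The group comparison above uses a Mann-Whitney test on non-normally distributed peptide concentrations. A minimal sketch of that test follows; the values are hypothetical placeholders (fmol/mL), not the study's measurements:

```python
# Sketch of the Mann-Whitney comparison from the NT-proANP abstract:
# obese vs. control dogs. All values below are invented for illustration.
from scipy.stats import mannwhitneyu

obese   = [410, 390, 250, 520, 300, 450, 220, 600, 480, 350]  # hypothetical
control = [650, 700, 500, 900, 620, 480, 810, 560, 730, 670]  # hypothetical

# Two-sided test: is the distribution of NT-proANP shifted between groups?
u, p = mannwhitneyu(obese, control, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```

Reporting medians with p25/p75, as the abstract does, is the conventional companion to this rank-based test.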
Left-to-right shunting patent ductus arteriosus (PDA) is one of the most common canine congenital cardiovascular defects. Human studies have shown that BNP and NT-proBNP concentrations are elevated in patients with PDA and can be used to detect hemodynamically significant PDA. The purpose of this study was to measure NT-proBNP concentrations in dogs with PDA and to assess whether additional hemodynamic indicators correlate with NT-proBNP. We hypothesized that NT-proBNP would serve as a simple non-invasive marker of hemodynamic significance in dogs with PDA prior to and following transcatheter ductal occlusion. NT-proBNP was measured in 30 client-owned dogs with echocardiographically normal hearts. Ten dogs with PDA were initially evaluated with thoracic radiographs, transthoracic and transesophageal echocardiography, pulmonary capillary wedge pressure (PCWP) and NT-proBNP. NT-proBNP and echocardiography were repeated at 1 day and 3 months following ductal occlusion; PCWP was repeated at 3 months. Baseline NT-proBNP was significantly higher in PDA dogs than in controls (1877 ± 2081 pmol/L (mean ± SD) vs. 651 ± 301; p < 0.0025). At 1 day and 3 months following ductal occlusion, NT-proBNP was 1347 ± 1256 and 466 ± 380, respectively. The following decreased significantly from baseline: PCWP (11.8 ± 4.7 to 6.4 ± 3.2 mmHg; p = 0.018) and indexed left ventricular internal dimension in diastole (2.26 ± 0.47 to 1.67 ± 0.19; p = 0.006), but not in systole (1.40 ± 0.31 to 1.20 ± 0.19; p = 0.13). NT-proBNP is elevated in dogs with PDA, and transductal closure is associated with a reduction in NT-proBNP, PCWP and left ventricular size. Cardiac biomarkers, particularly NT-proBNP, are becoming more commonly used in dogs and cats as part of the diagnostic work-up. Multiple studies have already documented the correlation of this peptide with cardiac disease status and its potential clinical implications. 
In a portion of these reports, samples were handled by placing whole blood into an EDTA tube, centrifuging, and decanting the supernatant, which was ultimately stored at −20°C or −80°C prior to shipment, with or without protease inhibition. Our objective was to compare NT-proBNP concentrations in feline plasma collected using the previously reported methods with the California Animal Hospital (CAH) collection method using tubes containing a protease inhibitor. This study compared NT-proBNP concentrations measured from protease inhibitor tubes vs. EDTA tubes in 18 privately owned feline patients with confirmed cardiac disease and 6 control feline patients. For all study participants, we performed a full history and physical examination, hematology and chemistry panels, thoracic radiographs, ECG, and an echocardiogram. In each study participant, at least 4 mL of whole blood was drawn from a peripheral vein and transferred to a plastic EDTA tube. The sample was centrifuged within 1 hour of collection. One mL of plasma was then transferred to a tube containing a protease inhibitor, which was stored at 4°C until being shipped within 24 hours of collection. The remaining plasma was placed into 2 separate microtubes without protease inhibitor. One microtube was stored and shipped as previous studies have reported (−20°C, styrofoam container, shipment within 24 hours), and the second microtube was frozen at −80°C. All samples were shipped, received and analyzed within 24 hours of collection. No difference was found between the 2 frozen sample methods (663 pmol/L and 650 pmol/L; p = 0.40). Both frozen methods yielded lower NT-proBNP levels (655 and 646 pmol/L) than plasma samples shipped in protease inhibitor tubes (756 and 794 pmol/L). 
The findings of this trial demonstrate that NT-proBNP levels differ significantly between samples placed in EDTA tubes and those placed in tubes containing a protease inhibitor (p = 0.008 and p = 0.0006). Utilizing protease inhibitor tubes allows more accurate measurement of plasma NT-proBNP. For future research and publications, authors should take care to investigate how blood samples were handled, and the conclusions/results of these studies should be interpreted in light of the methodologies used in collecting, storing, shipping and analyzing the samples. Degenerative mitral valve disease (DMVD) is one of the most common heart diseases, accounting for approximately 75% of canine heart disease. Despite its high prevalence in small dogs, the underlying molecular mechanism of its pathophysiology is largely unknown. DMVD is functionally and pathologically similar in humans and dogs; thus, there may be a common pathogenesis in humans and dogs with naturally occurring DMVD. Serotonin (5-HT) and serotonin-related mechanisms have been implicated as a cause of valvular disease in humans and animals, including spontaneous DMVD in dogs. Increased circulating 5-HT concentration, a potential source of heightened 5-HT signaling, has been demonstrated in small dogs with DMVD. The aim of this study was to investigate whether serum 5-HT concentrations were associated with the severity of naturally occurring DMVD in small dogs, and to investigate potential associations of dog characteristics with serum 5-HT concentrations in our study population. Forty-eight dogs were included and classified into control and DMVD groups according to the results of physical and echocardiographic examinations. Based on the LA:Ao ratio, dogs were classified as follows: control (LA:Ao ratio ≤ 1.5 and no MR), mild (LA:Ao ratio ≤ 1.5 with MR), moderate (1.5 < LA:Ao ratio ≤ 1.8 with MR), and severe (LA:Ao ratio > 1.8 with MR). Serum serotonin concentrations were measured by ELISA. 
An overall significant difference (p < .05) in 5-HT concentrations was found among the 4 groups (control, 72.38 ng/mL [51.34-95.11]; DMVD, ). Significantly higher 5-HT concentrations were observed in dogs with moderate (p < .05) and severe (p < .05) DMVD compared with the control group. Additionally, 5-HT concentrations in dogs with moderate DMVD were significantly higher (p < .05) than in dogs with mild DMVD, and dogs with severe DMVD had significantly higher 5-HT concentrations than dogs with mild (p < .05) and moderate (p < .05) DMVD. There was no significant association of age, platelet count, or LVIDd with serum 5-HT concentration; however, a weak but significant correlation between serum 5-HT and the LA:Ao ratio (r² = .211, p < .05) was observed. The results of this study indicate that serum 5-HT concentrations were higher with increasing severity of spontaneous DMVD, which may potentially advance the progression of DMVD. Further studies should be performed to reveal the role of 5-HT in inducing and accelerating spontaneous DMVD and to investigate whether lowering serum 5-HT concentration could alter the progression of DMVD. The objective of this prospective study was to evaluate the utility of cardiac troponin I (cTnI) in differentiating between underlying etiologies of pericardial effusion in canine patients. Patients were prospectively recruited at the time of diagnosis of novel pericardial effusion. Serum samples were collected prior to pericardiocentesis. Patients were evaluated by echocardiography and classified at initial evaluation with a diagnosis of hemangiosarcoma (HSA), heart base tumor (HBT), or unknown etiology based on established characteristic echocardiographic findings. Idiopathic pericardial effusion (IPE) was defined by histopathology, a negative echocardiogram for a mass lesion with no recurrence of pericardial effusion for >6 months, or being symptom-free >12 months from the time of enrollment. 
Patients were excluded from analysis if a diagnosis could not be established based on the above criteria or if concurrent moderate azotemia (creatinine > 3.0 mg/dL) was present at the time of sample collection. Serum samples were frozen and analyzed in batches within 60 days of collection using a cTnI assay with a lower limit of detection of 0.2 ng/mL. Sixty-three patients were recruited over a one-year period, with 15 patients excluded due to lack of diagnosis (13) or azotemia (2). Median cTnI levels of dogs with HSA (n = 24), HBT (n = 19), and IPE (n = 5) were 14.8 ng/mL (interquartile range (IQR) <0.2-11.3), 0.23 ng/mL (IQR <0.2-0.29), and <0.2 ng/mL (IQR <0.2 to <0.2), respectively. Concentrations of cTnI differed significantly between dogs with HSA and those with HBT (p = 0.001) and IPE (p = 0.0029). There was no difference in cTnI concentrations between HBT and IPE dogs (p = 0.911). Receiver operating characteristic analysis to determine the optimal cutoff for differentiating dogs with HSA from those with HBT or IPE revealed a significant (p < 0.001) area under the curve (0.79). A cut-off point for cTnI of > 0.78 yielded a sensitivity of 67% (95% CI, 45-84%) and a specificity of 95% (95% CI, 79-99%). Utilizing a higher cut-off point of > 3.0 yielded a lower sensitivity of 50% (95% CI, 29-71%) but a higher specificity of 100% (95% CI, 86-100%), which may have more clinical utility given the disparity in prognoses of the etiologies compared. In conclusion, this study supports the diagnostic utility of cTnI concentrations in delineating patients with HSA from other etiologies of pericardial effusion, but cTnI does not reliably differentiate dogs with IPE from those with other neoplastic etiologies. The pathogenesis of degenerative mitral valve disease (DMVD) in dogs remains to be fully elucidated. 
The high shear stress caused by mitral regurgitation damages the endothelial surface of the valve, and a previous study demonstrated increased transcription of intercellular adhesion molecule-1 (ICAM-1) and E-selectin in affected mitral leaflet tissue. We hypothesized that this may be responsible for platelet recruitment and adhesion and the initiation of a proliferative cascade, resulting in further myxomatous changes. The goal of this study was to compare plasma levels of ICAM-1 and E-selectin in healthy dogs and in those with DMVD. The study population included dogs with echocardiographic evidence of DMVD and healthy control dogs >1 year old with no heart murmur or known systemic disease. DMVD dogs underwent a 2D/color-flow Doppler echocardiographic exam. Blood samples were obtained for plasma ICAM-1 and E-selectin analysis using commercially available ELISA kits. The study included 34 dogs, of which 20 had DMVD and 14 were normal. The DMVD group had a median age of 9.5 yrs and included 6 females and 14 males; 2 (10%), 13 (65%), 2 (10%) and 3 (15%) dogs were classified as ISACHC 1a, 1b, 2 and 3a, respectively. The control dogs had a median age of 4.5 yrs [2-6.5], with 5 females and 9 males. There was no statistical difference in plasma E-selectin between control dogs (median 2.71 [2.05-7.66]) and those with DMVD (2.46 [1.54-3.55]); p = 0.35. Plasma ICAM-1 concentrations were higher in DMVD dogs (1.58 [1.20-10.58]) than in controls (median 1.31 [1.11-1.65]), but this difference did not reach statistical significance (p = 0.22). Linear regression analysis showed no significant correlation between ICAM-1 or E-selectin and serum serotonin level, NT-proBNP, or echocardiographic measures of DMVD severity (LA:Ao, LVIDd:Ao, LVIDs:Ao). The results of this study demonstrate no significant difference in the circulating adhesion molecules ICAM-1 and E-selectin between dogs with DMVD and healthy controls. 
Further studies investigating adhesion molecules within the mitral valve tissue itself are needed to determine whether ICAM-1 and E-selectin play a role in the pathophysiology of DMVD. The rate of glucose utilization in the heart is greater than in other tissues, and impaired glucose uptake may play a major role in the pathogenesis of heart failure (HF). Glucose uptake across the sarcolemma is regulated by a family of membrane proteins called glucose transporters (GLUTs), which includes GLUT-4, the major cardiac isoform, and GLUT-12, a recently discovered isoform whose role in the heart is unknown. In addition, despite well-known regional differences in myocardial structure and function, potential regional patterns in glucose transport have not been investigated. We therefore hypothesized that GLUT-4 and GLUT-12 protein and gene expression would be chamber-specific in healthy dogs and during chronic HF. Using a canine model of tachypacing-induced chronic HF, GLUT protein and messenger RNA in both ventricles and atria were investigated by immunoblotting and real-time PCR. In control dogs, GLUT-4, but not GLUT-12, protein expression was significantly higher in the atria than in the ventricles, with the highest content in the right atrium (RA, p < 0.001). GLUT-4 and GLUT-12 mRNA were homogeneously expressed in all cardiac chambers. During chronic HF, GLUT-4 and GLUT-12 expression was highest in the left ventricle (LV, by 2.5- and 4.2-fold, respectively, p < 0.01), with a concomitant increase in GLUT-4 and GLUT-12 mRNA (p < 0.001). GLUT-4, but not GLUT-12, was decreased in the RA during chronic HF (p = 0.001). Our data suggest that GLUT-4 protein is differentially expressed across the cardiac chambers in the healthy heart. During chronic HF, the LV was the primary site dependent on both GLUT-4- and GLUT-12-mediated glucose transport, which was transcriptionally regulated. 
In addition, the paradoxical decrease in GLUT-4 content in the RA may induce perturbations in atrial energy production during chronic HF. Some obese dogs are suspected of having cardiac disease because of cardiac enlargement on thoracic radiographs. It has been reported in cats that fat increases the cardiac silhouette while echocardiograms reveal normal cardiac dimensions. The purpose of this study was to determine whether obesity leads to overestimation of cardiac dimensions on radiographs compared with echocardiographic findings in dogs. Twenty-three obese dogs and 20 controls were included based on a 1-9 body condition score (BCS). Computed radiography was obtained, and vertebral heart score (VHS) measurement was performed as previously described. Echocardiographic measurements were interpreted against reference values from lean-body-weight regression equations. Echo and VHS measurements were scored as normal or as mild, moderate or severe enlargement. Student's t test was used for comparison of VHS between groups, the Mann-Whitney rank sum test to assess echocardiographic scores between groups, and Spearman rank order correlation to assess relationships between echo and VHS scores, echo vs. BCS, and VHS vs. BCS. Groups were similar in age [obese (69 ± 24); control (62 ± 27); p = 0.368], breed and sex distribution. Obese dogs had significantly higher VHS than controls (10.58 ± 0.69 vs. 9.77 ± 0.54; p < 0.001), whereas echo scores did not differ significantly (range 1-4 vs. 1-2; p > 0.05). There were no relationships between any pair of variables analyzed. These results show that there are changes in both the echocardiographic and radiographic appearance of the heart in obese dogs, but VHS overestimates the cardiac silhouette compared with echo, probably related to pericardial fat accumulation. Heart rate variability (HRV) is an indirect measurement of the autonomic modulation of heart rate (HR). 
Reduced HRV measured from short-time electrocardiography is seen in dogs with heart failure (HF) secondary to myxomatous mitral valve disease (MMVD). However, HRV has been suggested to increase with disease severity in the early stages of MMVD. The aims of this study were 1) to associate HR and HRV with the severity of MMVD in Cavalier King Charles Spaniels (CKCSs) and 2) to compare HR and HRV between CKCSs and other breeds in a group of dogs in HF secondary to MMVD. One hundred dogs were examined by echocardiography and 24-hour electrocardiography. The dogs were divided into five groups: 1) CKCSs with no/minimal mitral regurgitation (MR) (MR jet ≤ 15% of the left atrial area on color Doppler mapping) and no murmur; 2) CKCSs with mild MR (20% < jet ≤ 50%); 3) CKCSs with moderate/severe MR (jet > 50%) and no clinical signs of HF; 4) CKCSs in HF (defined as left atrium to aortic root ratio (LA/Ao) > 1.5, clinical signs of HF and furosemide responsiveness); and 5) non-CKCSs in HF. Dogs in HF were allowed HF therapy. Both HR and HRV were analyzed over a 24-hour period, and HRV was also analysed over a 6-hour nightly period. Analyses of variance were performed with HR or HRV as response variables, and the explanatory variables dog group and echocardiographic indices of MMVD were included separately. All p-values were Bonferroni-corrected. Minimum and mean HR were significantly higher in CKCSs with moderate/severe MR and in HF compared with CKCSs with no/minimal and mild MR (all p < 0.001). Seven of 26 HRV variables were significantly decreased in CKCSs with moderate/severe MR and in HF compared with CKCSs with no/minimal and mild MR (all p < 0.02). Another 10 HRV variables showed the same group-wise differences (all p < 0.02), except that the difference between CKCSs with mild MR and CKCSs with moderate/severe MR did not reach statistical significance. 
Minimum HR, mean HR and the HRV variables (7 and 10) that differed between dog groups also decreased consistently with increasing MR, LA/Ao and proximal isovelocity surface area in CKCSs. Non-CKCSs in HF had a lower minimum HR than CKCSs in HF (p = 0.03) and a higher triangular index in both measurement periods (all p < 0.04). In conclusion, HR increased and most HRV variables decreased with increasing severity of MMVD in CKCSs, even prior to the development of HF. Other breeds in HF secondary to MMVD had lower minimum HR but a higher triangular index compared with CKCSs in HF. Although the cells of the specialized conduction system of the heart are capable of initiating their own impulses, the rate at which those impulses are generated can be influenced by the autonomic nervous system. Different respiratory patterns can stimulate the autonomic nervous system in different ways. Non-sedated rabbits were therefore studied during forced respiration to evaluate the influence of this breathing pattern on heart rate. Twenty male, one-year-old healthy New Zealand rabbits were enrolled. Animals were placed in right lateral recumbency and maintained there by physical restraint; chemical sedation was not used. Partial nasal obstruction by digital compression was applied for five seconds, eliciting forced inhalation and exhalation against semi-closed nostrils. Heart rate was obtained by measuring two consecutive RR intervals on computerized electrocardiography, recorded continuously before and during the maneuver. Heart rate before the intervention was 251 ± 34 bpm (mean ± standard deviation). All rabbits subjected to the maneuver showed a dramatic reduction in this parameter: after partial nasal obstruction, heart rate was 142 ± 32 bpm. Data were analyzed by paired Student's t test, and a significant difference between heart rate before and during the maneuver was observed (p < 0.0001). 
Although the exact mechanism of this response was not elucidated, the data support the applicability of this maneuver as an efficient method of non-pharmacological heart rate reduction in rabbits. Obesity can affect cardiac function through effects on cardiac rhythm, ventricular volume and blood pressure. The purpose of this study was to determine the effects of obesity and overweight on noninvasive systemic blood pressure and Doppler echocardiographic parameters in cats without other causes of cardiac hypertrophy. The study groups comprised fifteen obese cats with a mean body score index (BSI) of 8.8, seven overweight cats (BSI = 6.3) and seven cats with ideal BSI (4.9). Blood pressure was measured by the Doppler method, and Doppler echocardiography was performed in conscious animals. Statistical analysis was performed by analysis of variance followed by Tukey's test and Pearson's correlation. Blood pressure values in obese cats were higher (159.12 ± 11.22 mmHg, p < 0.0001) than in overweight (134.45 ± 13.81 mmHg) and normal cats (136.90 ± 13.17 mmHg), and 57% of the obese cats had blood pressure higher than 160 mmHg. Differences were observed in the ratio of early (E) to late (A) left ventricular filling velocity (p = 0.008) in obese animals (E/A = 1.07 ± 0.39) compared with overweight (1.68 ± 0.37) and normal cats (1.43 ± 0.24). Seven obese cats (50%) had E/A inversion compatible with diastolic dysfunction, and there was a negative correlation (r = −0.453, p = 0.026) between the E/A ratio and blood pressure values. Other differences observed were increases in the left ventricular septum in diastole (p = 0.002) and in the free wall in diastole (p = 0.023) and systole (p = 0.042) in obese animals compared with overweight and normal cats. These results demonstrate the possibility of obesity-related cardiovascular effects in cats, such as systemic arterial hypertension and secondary diastolic dysfunction. 
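The rabbit study above analyses heart rate before vs. during the nasal-occlusion maneuver with a paired Student's t test. A minimal sketch of that paired comparison, with invented per-animal values (the abstract reports only group means ± SD):

```python
# Sketch of the paired t test from the rabbit heart-rate abstract.
# Per-animal values are hypothetical; only the group means in the
# abstract are real.
from scipy.stats import ttest_rel

hr_before = [250, 260, 240, 275, 255, 245, 265, 248]  # bpm, invented
hr_during = [145, 150, 130, 160, 140, 138, 152, 142]  # bpm, invented

# Paired test: each rabbit serves as its own control.
t, p = ttest_rel(hr_before, hr_during)
print(f"t = {t:.2f}, p = {p:.6f}")
```

A paired design is the right choice here because the maneuver is applied to the same animals, removing between-rabbit variability from the comparison.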
Diuretic therapy reduces preload and relieves congestion secondary to cardiac dysfunction. Torsemide (torasemide) is a loop diuretic with a longer duration of action, less diuretic resistance, and adjunctive aldosterone antagonism compared with furosemide. We hypothesized that torsemide would be no less effective than furosemide for diuresis, control of clinical signs, and maintenance of quality of life in dogs with congestive heart failure. A double-blinded, randomized, crossover clinical trial was performed in 7 dogs with stable heart failure receiving BID furosemide and adjunctive medications. Dogs were randomized to their current furosemide dose or torsemide (calculated as 1/10 of the daily furosemide dose, divided BID). Crossover occurred at day 7, and the study ended on day 14. Clinical, laboratory, radiographic, and owner-perceived quality-of-life variables were evaluated on days 0, 7 and 14. No dog developed recurrent heart failure during the study. The average furosemide dose on day 0 was 5.13 mg/kg/day (range, 2.8-9.6). Following torsemide treatment, blood urea nitrogen (p = 0.0028), albumin (p = 0.0287), and the albumin:globulin ratio (p = 0.0012) were significantly increased, and urine specific gravity (p = 0.0062) and chloride (p = 0.0051) were significantly decreased compared with baseline and/or furosemide dosing (one-way ANOVA with Bonferroni correction). No differences in quality of life were found. The results indicate that torsemide is equivalent to furosemide for controlling clinical heart failure in dogs and might in fact achieve greater diuresis than furosemide. Larger clinical trials evaluating furosemide resistance and/or torsemide as a first-line loop diuretic for congestive heart failure in dogs are warranted. The purpose of this study was to investigate the feasibility of speckle tracking echocardiography (STE) in healthy cats and to determine whether it can detect myocardial dysfunction in cats with heart disease. 
Radial and circumferential strain and strain rate were measured by STE using the left ventricular short-axis view in clinically healthy cats. Eighteen cats with HCM, with LV wall thickness at end-diastole of 6 mm or more, were evaluated with STE analysis and compared with healthy cats. An index of left ventricular synchrony (TRS-SD) was also assessed in cats with HCM and compared with healthy subjects. STE yielded technically adequate images in 100% of the cats. Fusion of the early and late diastolic (E and A) waves in strain rate was seen in 3 of 16 cats. Percent errors in analysis with or without simultaneous ECG monitoring were 5.3-14.7% for all parameters. Inter- and intraobserver variability of STE parameters in healthy cats was minimal (4.1-15.6%) except for the systolic circumferential strain rate. Sedation with buprenorphine and acepromazine did not affect any STE parameter. The E wave in radial and circumferential strain rate of HCM cats was significantly decreased compared with healthy cats. No significant difference was seen in TRS-SD. STE analysis was considered clinically feasible for assessing cardiac function in cats and could detect myocardial dysfunction in cats with HCM. Further study is warranted to assess whether STE can differentiate the etiologies of left ventricular concentric hypertrophy, which is clinically important. Carvedilol, a third-generation non-selective beta-blocker with ancillary alpha-1-blocking and antioxidant properties, may have therapeutic implications for multiple diseases in cats; however, the pharmacokinetics and bioavailability of commercially prepared oral carvedilol have not been determined. HPLC for carvedilol measurement in feline plasma was validated and standard curves were created. The pharmacokinetics (PK) of carvedilol were evaluated in 5 apparently healthy neutered adult male cats (average weight 5 kg) following a single intravenous (IV) dose of 0.5 mg/kg and a single oral dose of 1.6 to 2.0 mg/kg. 
Concentrations of the active parent compound, carvedilol, were detected in plasma using HPLC analysis; the lower limit of quantification was 5 ng/mL. The mean peak concentration after IV administration of carvedilol was 8639 ng/mL (range, 901 to 8648), the elimination half-life was 2.9 hours (range, 2.0 to 5.4), and clearance was 0.35 L/hr/kg. The volume of distribution was 1.18 L/kg. After a single oral administration of carvedilol, the time to peak plasma concentration was 60 minutes (range, 30 to 90 minutes) and the mean residence time was 4.8 hours. The half-life was 4.44 hours. The maximal concentration was 294 ng/mL, and the mean bioavailability was 15.7%, with a median of 9.97% (range, 4.7% to 46%). These data demonstrate a low and widely variable bioavailability of oral carvedilol in cats. All cats tolerated the oral dose of carvedilol with no major adverse effects. A mean residence time of 4.8 hours suggests that a more frequent dosing schedule may be required to maintain therapeutic plasma levels. Pharmacodynamic studies investigating the duration of beta-adrenergic blockade may provide a more accurate dosing interval for carvedilol. Abstract C-49: Effects of sildenafil citrate in dogs with Eisenmenger's syndrome. K. Nakamura, M. Yamasaki, H. Ohta, M. Takiguchi. Department of Veterinary Clinical Sciences, Graduate School of Veterinary Medicine, Hokkaido University, Sapporo, Hokkaido, Japan. Sildenafil has been shown to be effective in dogs with pulmonary hypertension; however, its efficacy in dogs with Eisenmenger's syndrome (ES) and secondary erythrocytosis has not yet been determined. The objective of this study was to determine the effect of sildenafil in dogs with Eisenmenger's syndrome and secondary erythrocytosis. This was a prospective, single-arm, open-label study. Five clinical dogs with ES and secondary erythrocytosis were included. 
New York Heart Association (NYHA) functional class, PCV, and the pulmonary artery acceleration time to ejection time ratio (PA AT:ET) were evaluated before and after sildenafil therapy (0.5 mg/kg BID). NYHA functional class was significantly improved after 1 month (median 2; range 1-2; p = 0.031) and 3 months (median 2; range 1-2; p = 0.031) of sildenafil therapy compared with baseline (median 3; range 2-3). PCV was significantly decreased after 1 month (62.4 ± 4.7%, p = 0.015) and 3 months (59.9 ± 3.6%, p = 0.01) of therapy compared with baseline (68.0 ± 5.6%). AT:ET was significantly increased after 1 month of therapy (0.39 ± 0.06, p = 0.013) from baseline (0.28 ± 0.04). Sildenafil resolved the clinical signs and secondary erythrocytosis in dogs with ES and could be the treatment of choice for dogs with ES. Sepsis is the leading cause of mortality in neonatal foals. The role of the RAAS and HPAA in systemic inflammation and the response to stress is well documented in critically ill human neonates, but limited information exists for foals. We hypothesized that in septic foals the RAAS and HPAA would be activated by systemic inflammation and hypoperfusion, and that the degree of activation would be associated with the severity of sepsis and mortality. Blood samples were collected on admission from 60 septic (sepsis score > 12), 102 sick non-septic (SNS), and 18 healthy foals < 7 days of age. Blood concentrations of corticotropin-releasing hormone (CRH), adrenocorticotropin (ACTH), cortisol, aldosterone, angiotensin-II (Ang-II), arginine vasopressin (AVP) and plasma renin activity were determined by radioimmunoassay. ACTH, cortisol, aldosterone, Ang-II and AVP concentrations were higher, while CRH was lower, in septic and SNS foals compared with healthy foals (p < 0.05). Septic non-survivor foals had higher concentrations of aldosterone, cortisol, ACTH and AVP, and lower concentrations of Ang-II and CRH, than survivors. 
avp was associated with ang-ii in septic foals, and with acth in septic and sns foals (p < 0.05). there was no difference in renin activity and ang-ii concentrations among foal groups. septic foals had a higher acth:aldosterone ratio than healthy foals (p < 0.001). this study shows that in response to sepsis there is raas and hpaa activation in critically ill foals. we propose that in sick foals avp is more important than crh in regulating acth secretion. the increased acth:aldosterone ratio further supports relative adrenal insufficiency in septic foals. this prospective cohort study aimed to characterize alterations in coagulation and blood-derived inflammatory biomarkers in adult horses that developed diarrhea during hospitalization. physical and hematological parameters were evaluated at times 0 (onset of diarrhea), 6, 12, 24 and 48 h, then every 48 h until resolution of diarrhea or death. each hematological analysis included a complete blood count (cbc), thromboelastography (teg), partial thromboplastin time (ptt), prothrombin time (pt), and plasma concentrations of lactate, tumor necrosis factor alpha (tnf-a), interleukin (il)-1, il-6, il-10 and nt-pro c-type natriuretic peptide (pcnp). horses were categorized into three groups based on the duration of diarrhea and evidence of systemic inflammation (si). group 0: diarrhea < 6 h without si; group 1: diarrhea ≥ 6 h without si; group 2: diarrhea with si. assessment of vital parameters and cbc established a diagnosis of si as previously described (levy, 2003). descriptive and univariate outcome analyses were based on data normality. 19 horses were enrolled, of which 16 (84.2%) survived to discharge. the mean age was 13.6 ± 5.3 years. eight horses (42.1%) were categorized as group 0, 2 (10.5%) as group 1 and 9 (47.4%) as group 2. two horses developed thrombophlebitis.
based on the results of teg, 6/19 (31.6%) were normocoagulable, 7/19 (36.8%) were hypocoagulable and 6/19 (31.6%) were hypercoagulable at one or more time points. of these, 7/9 (77.8%) group-2 horses were coagulopathic. additionally, group-2 horses had a significantly lower ma than group-0 horses at baseline (43.6 ± 16.7 vs. 61.9 ± 7.2) and at 6 h (45.9 ± 15.9 vs. 65.5 ± 5.8). biomarker analyses are pending. in conclusion, si was associated with coagulation disorders in horses with hospital-acquired diarrhea. clostridium difficile and clostridium perfringens are commonly associated with colitis and diarrhea in equines, but asymptomatic carriers exist. reported carrier rates of toxigenic c. difficile and c. perfringens strains in feces range between 0-25% and 0-30%, respectively. toxigenic c. difficile has also been isolated from the small intestine of diseased foals and is implicated as an etiologic agent of duodenitis/proximal jejunitis in adult horses; however, scarce information is available on its prevalence in gastrointestinal compartments other than feces in healthy horses, and it is unclear whether fecal samples are good predictors of the status of proximal intestinal sites. the objectives of this study were to investigate the presence of c. difficile and c. perfringens in various intestinal compartments of healthy adult horses and to molecularly characterize the isolates. intestinal contents were collected from the stomach, duodenum, jejunum, ileum, cecum, right dorsal and left ventral colon, small colon and rectum of 10 euthanized horses free of apparent gastrointestinal disease. enrichment culture was performed for c. difficile and c. perfringens, and c. difficile isolates were further characterized via toxin gene detection and ribotyping. c. difficile was isolated from 9/90 (10%) samples from 5/10 (50%) horses. between zero and three sites were positive per horse, and multiple sites were positive in three horses.
isolates were recovered from the duodenum (n = 1), right dorsal colon (n = 3), small colon (n = 1) and rectum (n = 4). in one horse the rectal sample was negative but c. difficile was isolated from a proximal site; all other horses were positive on the rectal sample if a more proximal compartment was positive. in three horses multiple compartments were positive; however, different strains were present within the same horse (n = 2). all isolates possessed genes encoding toxins a and b. five isolates were ribotype 078 and also possessed genes encoding the binary toxin. the other isolates were ribotype 001 and were negative for the genes encoding the binary toxin. despite using a method with a detection level as low as 9 cfu/g of feces, no c. perfringens was recovered. rectal samples were a good predictor of overall c. difficile carrier status (4/5 horses); however, rectal samples were not always representative of the ribotype carried in more proximal compartments. the presence of variable strains within the same horse suggests transient passage of the bacterium through the gastrointestinal system rather than actual colonization, although further study testing multiple colonies per site is needed. the predominance of ribotype 078 is consistent with the recent emergence of this strain in this region, as earlier studies found other strains (027, 001) to be more prevalent and a variety of ribotypes were typically recovered from horses. interestingly, ribotype 078 has recently emerged as a hypervirulent strain in humans in our area. clostridium difficile, clostridium perfringens and salmonella are important enteric pathogens in horses; however, some healthy animals also harbour these pathogens. point prevalence studies have reported these carriage rates, but there are no data regarding the longitudinal prevalence of these enteric bacteria, information that would be useful to better understand the epidemiology of these pathogens.
additionally, antimicrobial resistance is a pressing concern. commensal e. coli is often used as an indicator organism to evaluate antimicrobial resistance of enteric bacteria, yet there are limited data from horses on farms. the objectives of this study were to longitudinally investigate the above enteric pathogens over the course of one year, molecularly characterize obtained isolates and determine the antibiotic susceptibility profile for e. coli. fecal samples were collected from 25 adult horses from five farms on a monthly basis over the course of one year. selective cultures were performed for c. difficile, c. perfringens, salmonella and e. coli. c. difficile isolates were characterized via toxin gene pcr and ribotyping. broth microdilution was performed to assess antimicrobial susceptibility profiles of e. coli. clostridium difficile was isolated from 15/275 (5.45%) samples from 11/25 (44%) horses. four horses were positive on more than one occasion; three were positive in two consecutive months. different ribotypes were found in two of the latter horses. most isolates were ribotype 078 (n = 6), with ribotype 001 (n = 5) and ribotype c (n = 4) also identified. ribotypes 078 and c possessed genes encoding toxins a, b and the binary toxin, while ribotype 001 only possessed toxin a and b genes. despite a detection threshold of 9 cfu/g feces, c. perfringens was not detected in any samples, nor was salmonella. e. coli was isolated from 117/225 (52%) samples. resistance to ≥ 1 antimicrobial was present in only 19/117 (16.9%) isolates. multidrug resistance (≥ 3 antibiotics) was present in 5/117 (4%). most commonly, isolates were resistant to sulfisoxazole (17/117) and trimethoprim-sulfamethoxazole (16/117). the overall detection rate for toxigenic c. difficile in fecal samples of healthy horses was 5.4%, which is consistent with previous studies. the cumulative prevalence of 44% was striking, but only one horse shed the same strain for more than one month, indicating c.
difficile shedding is a transient and dynamic state. the predominant isolation of ribotype 078 is consistent with the suspicion that this strain has emerged and become widely disseminated in this region in recent years. the low prevalence of c. perfringens and salmonella is in agreement with some other studies. the low prevalence of antibiotic resistance in commensal e. coli was encouraging and suggests that healthy horses on pleasure horse farms are not likely a major reservoir of resistance in enteric bacteria. type 1 polysaccharide storage myopathy (pssm) in horses is associated with a dominant missense mutation (r309h) in the skeletal muscle glycogen synthase gene (gys1). since disease severity varies between affected horses, we hypothesised that some clinical variability could be accounted for by the underlying genotype. 107 belgian/percheron horses were genotyped using a validated restriction fragment length polymorphism assay, enabling grouping of horses as homozygotes (hh), heterozygotes (hr) or normal (rr). subsequently, semimembranosus muscle samples were biopsied from each of six matched sedentary horses from each group; one sample was formalin-fixed and one fresh frozen. sections were stained using haematoxylin and eosin and periodic acid schiff ± diastase. anti-dystrophin, nnos and myosin heavy chain immunohistochemistry was performed to examine sarcolemmal integrity. there were significant differences in resting ck activity (p = 0.023) (median hh = 364 u/l, interquartile range (ir) 332-764; hr = 301 u/l, ir 222-377; rr = 260 u/l, ir 216-320), ast activity (p < 0.001) (ast mean hh = 502 u/l, sd 116; hr = 357 u/l, sd 92; rr = 311 u/l, sd 64) and muscle pathology between the 3 groups, with severity increasing rr < hr < hh. there were significantly more type 2a (p = 0.04) and fewer type 2x fibres (p = 0.02) in homozygotes (2a 55%, sd 10.2; 2x 32%, sd 18) compared with the other groups (hr: 2a 38%, sd 10.9, 2x 44%, sd 10.8; rr: 2a 36.5%, sd 12.4, 2x 57%, sd 11.5).
more type 2a fibres contained polysaccharide inclusions in homozygotes (30%, sd 11.1) than in heterozygotes (10.6%, sd 6.9) (p < 0.001). both dystrophin and nnos expression were normally localised to the sarcolemma in pathologically normal and vacuolated fibres from mutant horses. in conclusion, sedentary homozygotes have more severe skeletal muscle pathology and higher resting plasma ck and ast activities than heterozygotes, and pssm1 is associated with a fibre-type shift towards type 2a. although subsarcolemmal vacuolation likely disrupts the contractile apparatus's attachment to the sarcolemma, the latter's integrity appeared intact. the recumbent horse presents a logistic, diagnostic, and therapeutic challenge to the equine practitioner. there are currently very few data available on the prognosis and outcome of horses that are recumbent. therefore, the purpose of this study was to investigate the outcome of hospitalized horses that had been recumbent in the field or in the hospital and the factors affecting their survival. records of horses admitted to the school of veterinary medicine, university of california davis from january 1995 to december 2010 with a history of recumbency, or of horses that became recumbent while hospitalized, were evaluated. a horse was defined as recumbent if it was unable to stand on its own. the medical record was examined for the following criteria: history pertaining to the current illness including treatment by the rdvm, breed, age, weight, date of presentation, physical and neurological examination findings, cbc and biochemical profile results, initial drugs administered on arrival, time spent recumbent, time spent in a sling, diagnosis, and hospitalization costs. statistical analysis correlating factors associated with survival was performed using logistic regression. overall there were 112 non-survivors and 49 survivors.
factors that favored survival included early initiation of treatment in the field by the rdvm, tolerating a sling and spending more time in a sling, increased duration and costs of hospitalization, recumbency post anesthesia, and recumbency due to disease of the musculoskeletal system. factors that increased the likelihood of non-survival included ataxia on presentation, increased bun, more time spent recumbent, failure to tolerate a sling, and a diagnosis of botulism or spinal cord disease. in conclusion, this retrospective study demonstrated that both the cause of recumbency and the ability of horses to tolerate a sling had a direct effect on survival. abstract e-7 plasma peak and trough gentamicin concentrations in hospitalized horses receiving once daily gentamicin. jr read 1, pa wilkins 2, rd nolen-walston 1. 1 university of pennsylvania, new bolton center, kennett square, pa. 2 university of illinois, champaign-urbana, il. gentamicin is often used to provide gram-negative antimicrobial coverage in horses at 6.6 mg/kg iv every 24 hours. therapeutic drug monitoring in our hospital suggests larger doses are required in many clinical cases to achieve the desired concentration (8-10× minimum inhibitory concentration) for common bacterial isolates (peak target range 32-40 µg/ml). the aim of this study was to determine the correlation between gentamicin dose and plasma concentration in hospitalized horses receiving gentamicin treatment in order to identify an optimum dose range for this population. a review of records (2002-2010) identified 71 horses ≥ 3 months old receiving once-daily gentamicin with peak and trough assays performed (n = 107 sets). spearman rank correlation coefficient analysis revealed a weak (r = 0.289) but statistically significant correlation (p = 0.003) between gentamicin dose and peak plasma concentration.
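the spearman rank correlation reported above for gentamicin dose versus peak concentration is the pearson correlation of the ranked data, with tied values receiving their average rank. a minimal sketch without library support; the dose/peak pairs below are hypothetical, not the study's data.

```python
# Spearman rank correlation: Pearson correlation of the ranked data,
# with tied values assigned their average rank. Illustrative data only.
from math import sqrt

def average_ranks(values):
    """Rank values 1..n, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    return pearson(average_ranks(x), average_ranks(y))

# Hypothetical dose (mg/kg) vs. peak concentration (µg/mL) pairs:
dose = [6.6, 7.7, 8.8, 9.7, 10.5, 6.6]
peak = [24.0, 28.5, 31.0, 33.0, 35.5, 26.0]
print(round(spearman(dose, peak), 3))
```

a perfectly monotone relationship gives a coefficient of 1.0; the weak r = 0.289 reported in the abstract reflects substantial between-horse scatter at any given dose.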
horses receiving 7.7-9.7 and > 9.7 mg/kg gentamicin (groups 2 and 3) had higher median peaks (31 µg/ml) than horses receiving 5.6-7.6 mg/kg (group 1; 25.7 µg/ml). higher doses were more likely to result in peaks > 32 µg/ml (42 and 41%, groups 2 and 3 respectively) than 5.6-7.6 mg/kg (group 1; 16%). all 22-hour post-gentamicin-administration trough values were < 2 µg/ml. no correlation was found between dose and change in plasma creatinine during treatment, nor between dose and trough level. these data suggest that gentamicin dosage in horses should be individually determined by therapeutic monitoring. additionally, these data support an initial dose of 7.7-9.7 mg/kg iv every 24 hours in order to achieve the desired peak concentration and an appropriately low trough concentration. heaves is a common inflammatory respiratory disease characterized by a pulmonary neutrophilia. the disease is also characterized by activation of circulating neutrophils after antigen challenge, but their specific role in heaves is not well understood. there are also anecdotal reports of heaves-affected horses being more susceptible to infection. however, to our knowledge, the antibacterial host defense role mediated by circulating neutrophils has not been investigated in heaves-affected horses. the objective of this study was to compare phagocytosis activity and bacterial killing by circulating blood neutrophils of heaves-affected and control horses. peripheral neutrophils were isolated from heaves-affected (n = 6) and control (n = 6) horses using a density gradient technique. killing capacity was assessed by incubating neutrophils with streptococcus equi subsp. equi and subsp. zooepidemicus. after 1 h of bacteria-neutrophil coculture, total viable bacterial cells were measured by quantitative plating. phagocytosis was evaluated by flow cytometry using fluorescent beads and a gfp-transformed streptococcus suis suilysin-negative mutant strain.
circulating neutrophils from heaves-affected horses showed a significant decrease in their killing capacity toward s. zooepidemicus (p = 0.046). a reduced, although not significant (p = 0.1), killing capacity of s. equi by these neutrophils was also observed. phagocytosis activity was not different between groups. this impairment of blood neutrophil bactericidal activity in heaves-affected horses could contribute to an increased susceptibility to infection. obesity is a common disorder of the horse, with current prevalence estimated at 30%. in people, obesity is associated with dyslipidemia, insulin resistance, mitochondrial dysfunction and downregulation of lipid and glucose metabolic pathways. in the horse, obesity is similarly associated with insulin resistance and alterations in lipid profiles; however, metabolic regulatory gene expression profiles have not been fully characterized. we hypothesized that obese horses have decreased expression of metabolic regulatory genes and decreased mitochondrial content in skeletal muscle compared with non-obese horses. sixteen light-breed horses, 2-27 years of age, were included. body condition score (n = 16) and neck circumference (n = 9) were recorded. post-mortem biopsy samples of the semimembranosus muscle were obtained, and dna and rna were isolated. relative expression of the metabolic genes peroxisome proliferator-activated receptor g (pparg), pparg coactivator-1a (pgc-1a), fatty acid translocase (fat) and estrogen-related receptor a (erra) was determined by quantitative polymerase chain reaction (qpcr). mitochondrial content was assessed by determining the mitochondrial dna/nuclear dna ratio by qpcr, using nadh-dehydrogenase subunit 2 and cytochrome oxidase subunit 2 as mitochondrial genes and beta actin as the nuclear reference gene. non-normal data were log-transformed for analysis, and pearson coefficients of correlation were calculated for relative gene expression, body condition score and neck circumference.
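the relative expression values described above are typically derived from qpcr threshold cycles. a minimal sketch of the widely used 2^(-ΔΔct) method (livak), normalizing a target gene to a reference gene and to a calibrator sample; the ct values and the gene pairing are hypothetical, not taken from the abstract.

```python
# Relative qPCR quantification by the 2^(-ΔΔCt) method: the target gene's
# threshold cycle is normalized to a reference gene, then to a calibrator
# sample. Ct values below are illustrative, not data from the abstract.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of target vs. calibrator, normalized to the reference gene."""
    d_ct_sample = ct_target - ct_ref        # ΔCt of the sample
    d_ct_cal = ct_target_cal - ct_ref_cal   # ΔCt of the calibrator
    return 2.0 ** -(d_ct_sample - d_ct_cal) # 2^(-ΔΔCt)

# Hypothetical: erra in one horse vs. a calibrator horse, 18s-like reference.
fold = relative_expression(ct_target=24.0, ct_ref=12.0,
                           ct_target_cal=25.0, ct_ref_cal=12.0)
print(fold)  # one Ct lower than the calibrator → 2-fold higher expression
```

the mitochondrial dna/nuclear dna ratio mentioned in the abstract follows the same logic, with a mitochondrial gene as "target" and a single-copy nuclear gene as reference.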
a value of p < 0.05 was considered significant. body condition score was strongly correlated with neck circumference (n = 9, r = 0.72, p = 0.03). relative expression of erra and glut-4 increased with body condition score (erra: n = 16, r = 0.66, p = 0.005; glut-4: n = 16, r = 0.50, p = 0.05). copy number of the mitochondrial genes (nadh-dh and cox-2) was not related to body condition score or metabolic gene expression. expression of glut-4, erra, pparg, and fat was strongly correlated to each other, but not to pgc-1a. there was a strong trend towards correlation between pparg and pgc-1a in horses with body condition score > 6 (n = 6, r = 0.77, p = 0.07). in this study, there was no change in mitochondrial content in obese horses. assessment of mitochondrial function in obese horses and horses with ems is under way. the strong correlation between pparg and pgc-1a observed only in horses with high body condition scores suggests this pathway is activated with obesity. the role of pparg and pgc-1a in equine obesity should be further investigated to determine their potential as therapeutic targets. upregulation of erra and glut-4 in horses with increasing body condition score is unexpected and may indicate a compensatory response to dysfunction of a downstream pathway. further studies to better define the role of metabolic regulatory gene expression in obese horses and those with ems are ongoing. previously presented at the 7th annual harold hamm diabetes center research retreat, oklahoma city, ok. inflammatory bowel disease is a cause of weight loss, decreased performance, and colic in horses. this condition is difficult to diagnose, and clinicians rely upon absorption tests to document malabsorption. the purpose of this study was to compare glucose and xylose absorption tests in normal horses and determine the repeatability of these procedures.
eight horses received 500 mg/kg dextrose or d-xylose powder mixed as a 10% solution in water, or water alone, via nasogastric intubation on three different occasions within the same week for three consecutive weeks (9 tests/horse). a crossover design was employed and the order of treatments was randomized. blood samples were collected at time = 0, 30, 60, 90, 120, 150, and 180 min. data were analyzed by repeated measures anova and t-tests. results showed that the xylose response over time differed significantly from the glucose response over time (test × time; p < 0.001). mean time to maximum concentration differed (p < 0.001) between tests (glucose 90 min; xylose 60 min). within-horse area under the curve, maximum concentration, and time to maximum concentration values for dextrose and xylose did not differ significantly when tests were repeated. results indicate that glucose and xylose absorption tests are repeatable within the same horse, but plotted curves differ between tests, with peak concentrations occurring at a later time point for the glucose absorption test. we conclude that both tests provide repeatable measures of intestinal absorption, but glucose and xylose appear to differ in their rates of absorption and clearance. the purpose of this study was to examine the records of a population of thoroughbreds with cervical vertebral malformation (cvm) and to determine which factors affect these horses' return to athletic function. this was a retrospective case study of 119 thoroughbreds with cvm treated medically from 2002 to 2010. forty-one were euthanized after diagnosis, while the remaining 78 were discharged for treatment. racing records were reviewed to determine which horses raced after treatment. horses were separated into groups based on whether or not they raced. medical records were reviewed, and results of neurologic examination, radiographic and laboratory findings, treatments, and outcome were assessed and compared between groups.
twenty-one of 78 horses treated medically (27%) improved enough to race. the median neurologic grade differed significantly between groups (p < 0.0001), with a hind limb grade of 2.0 (range 1-3) for the raced group and 2.5 (range 0.5-4) for the unraced. intravertebral sagittal ratios measured from standing lateral cervical radiographs were equivocal between groups. radiographs of all horses were examined for kyphosis, dorsal over-riding arch, caudal epiphysitis, degenerative joint disease, cystic bone lesions, and cranial stenosis of the vertebral canal. horses with kyphosis (p = 0.0178), degenerative joint disease (p = 0.0497), or cranial stenosis (p = 0.0357) at any site were less likely to return to racing. the racing prognosis for horses with cvm treated conservatively is equivalent to that of those treated surgically as reported by rush moore et al (javma, 1993). radiographic changes and neurologic grade may serve as indicators of whether a horse will respond to conservative therapy. since pain assessment is vital for the management of colic, a valid, reliable and feasible tool for assessing the severity of acute abdominal pain in horses is urgently needed. our aim was to construct and validate a behavior-based pain scale using methodology employed in the construction of pain scales for non-verbal humans. the project consisted of four stages. firstly, behaviors to include in a scale were empirically identified. thirty equine clinicians noted behaviors in each of 23 random film clips of horses with colic using a checklist. nine behaviors (e.g. rolling, pawing, and flank watching) demonstrated good inter-observer agreement without bias (multi-rater kappas: 0.5-0.95). secondly, the clinical judgment of experts was utilized to identify and weight behaviors. six expert clinicians independently expressed opinions as to which of 46 behaviors to include and the severity of pain they indicate.
two contending scales (equine acute abdominal pain scales (eaaps) 1 & 2) were constructed based on both the empirical and the judgmental approaches. each included 12 identical behaviors with a 1-5 point score range; eaaps-2 with gradations to some of the behaviors and eaaps-1 without. in the third stage, blood cortisol and lactate levels and heart rate were shown to only approximate pain, since they correlate poorly with the degree of pain as assessed by visual analog scale (vas) in 32 horses with colic and 8 controls (spearman rho: lactate 0.359; cortisol 0.214; heart rate 0.261). finally, the reliability and validity of the pain scales were evaluated across constructs of pain: face validity, convergent and discriminant validity, and extreme groups. thirty of 40 films of horses with colic were randomly presented to 44 expert equine clinicians internationally, who were randomly allocated into three groups to score pain: one group by both vas and numerical rating scale (nrs), and two groups each by one of the two eaaps scales. inter-observer reliability of both eaaps scales was excellent (intraclass correlation 0.8). intra-observer reliability based on scores given for identical films demonstrated 87% and 56% agreement, kappa 0.9 and 0.35, and spearman's rho = 0.97 and 0.58 for eaaps-1 and 2, respectively. both scales varied by 1 score between observations. face validity: each group reported their scale to be valid (67% & 81%). convergent validity: the scales compared favorably with vas/nrs scores (spearman's rho: 0.84-0.89). discriminant validity: correlation to heart rate, lactate and cortisol levels was predictably low (rho = 0.2-0.5). extreme-group validity: colic horses scored significantly higher than control horses, with scores of 0.6-0.7 in controls versus 2.6-2.7 in cases.
in conclusion, methodology established in human medicine but novel in veterinary medicine was used to construct and validate two clinically feasible equine abdominal pain severity scales that showed excellent reliability and validity. further refinement of the eaaps scale is advised prior to introduction into clinical practice. aortic valve prolapse (avp) is a common echocardiographic finding in horses, but compared with mitral valve prolapse in dogs, little is known about the natural progression of this condition. previously published data have shown that echocardiographic identification of avp in horses is reliable, that diagnostic criteria have been established, and that development occurs with training. the aims of this study were to evaluate the differential rna and protein expression of smooth muscle actin (sma), transforming growth factor-b1 (tgf), and nitric oxide synthase (nos), and the concentrations of elastin and collagen, in normal, prolapsing and diseased cusps, to evaluate what structural changes may predispose cusps to prolapse. valve cusps were harvested and processed from a group of 176 horses at a commercial abattoir following disease classification using echocardiography. horses were aged 13.7 ± 6.7 years, weighed 518 ± 51 kg, and had a median body condition score of 4/9. cusps were collected in rnalater and stored at −80 °c prior to processing. cdna was produced from half a valve using a standard protocol and qrt-pcr performed to assess relative rna expression of sma, tgf-b1, endothelial (enos) and inducible nos (inos), compared with the housekeeping gene 18s. a quarter of each cusp was processed using an adapted commercial protocol to evaluate protein expression of sma, tgf-b1 and enos, compared to vimentin. specific antibody binding was assessed with western blotting and protein expression evaluated using dot blots. the remaining quarter cusp was used to measure soluble collagen and elastin concentrations using commercial assays.
statistical analyses included one-way anova with post-hoc bonferroni, paired student's t-test, and linear and logistic regression. there was no effect of gender or age on any of the measurements. valves from animals with avp had lower expression of sma and elastin compared to normal and diseased valves, and increased expression of tgf-b1 and enos, whereas inos expression was greater than in normal valves (table 1). the collagen content of valves from horses with avp was increased compared to normal but lower than in horses with valve disease. prolapsing cusps appear to be a different phenotype from diseased cusps. further studies will help to elucidate the significance of these findings in vivo. a clear association between heart rate (hr) and body mass has been observed across a wide range of mammalian species. furthermore, it is well known that electrocardiographic (ecg) time intervals vary with heart rate and body mass. within the equine species, small breeds are generally thought to have higher heart rates than large breeds. however, despite the large differences in size among equine breeds, there is little information about normal heart rates and normal ecg time intervals in horses and ponies of different body size. similarly, the relationship between hr and body mass in dogs of various breeds and sizes is still under debate. the goal of this study was to investigate the relationship of heart rate and ecg time intervals to body mass in apparently healthy horses and ponies and to calculate normal ranges for different weight groups. 250 adult horses and ponies at an age of 5.5 (1-30) y [median (range)] and a body weight of 479 (46-1120) kg were included in the study. all animals were considered clinically healthy based on history and physical examination. a standard base-apex ecg was recorded at a speed of 25 (n = 4) or 50 mm/s (n = 246) using a multiparameter monitor (datascope passport).
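the heart-rate study above relates hr and ecg time intervals to body mass by linear regression and reports r² for each fit. a minimal ordinary-least-squares sketch with the coefficient of determination; the (mass, hr) pairs below are hypothetical, not the study's data.

```python
# Simple linear regression y = a + b*x by ordinary least squares, with the
# coefficient of determination r^2. Data below are illustrative only.

def linregress(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2

# Hypothetical body mass (kg) vs. resting heart rate (bpm):
mass = [150.0, 300.0, 450.0, 600.0, 800.0, 1000.0]
hr = [52.0, 46.0, 42.0, 40.0, 38.0, 35.0]
slope, intercept, r2 = linregress(mass, hr)
print(f"hr = {intercept:.1f} {slope:+.4f}*mass, r2 = {r2:.3f}")
```

a negative slope with a small r², as in the abstract (r² = 0.122 for hr), indicates an inverse but weak relationship: body mass explains only a small fraction of the between-animal variation in heart rate.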
during the procedure, the horses were unsedated, standing quietly in a box stall. mean hr over 15 sec was determined for each recording. the following ecg time intervals were measured in triplicate and averaged for further analyses: pq interval, qrs duration, qt interval, and the difference between qt and qrs (qt-qrs duration). the relationship between hr, ecg time intervals, and body mass was assessed using linear regression analyses. normal ranges (2.5% to 97.5% percentile) were calculated for 5 different weight groups. the level of significance was p = 0.05. heart rate was inversely related to body mass (p < 0.0001, r² = 0.122). the pq interval (p < 0.0001, r² = 0.413), qrs duration (p < 0.0001, r² = 0.147), qt interval (p < 0.0001, r² = 0.089), and qt-qrs duration (p < 0.0001, r² = 0.028) were directly related to body mass. normal ranges for hr, pq, qrs, and qt within the different weight groups were 36-64 bpm, 107-230 ms, 60-127 ms, 360-510 ms (< 200 kg); 28-68 bpm, 162-319 ms, 74-155 ms, 362-610 ms (200-399 kg); 28-60 bpm, 211-423 ms, 89-147 ms, 390-581 ms (400-599 kg); 28-54 bpm, 220-380 ms, 87-150 ms, 367-587 ms (600-799 kg); and 24-52 bpm, 240-463 ms, 87-140 ms, 437-533 ms (> 799 kg). we conclude that in healthy horses there is a significant but weak relationship between body mass and hr and between body mass and ecg time intervals, respectively. this study therefore supports the hypothesis that within the equine species, small breeds have faster heart rates and shorter ecg time intervals than large breeds. therefore, body mass has to be considered when comparing hr and ecg time intervals to normal ranges in horses. horses with pituitary pars intermedia dysfunction (ppid) often have elevated plasma acth concentrations. however, ppid-affected horses rarely have resting serum cortisol levels above the reference range or adrenal hyperplasia.
we hypothesized that this apparent dissociation between plasma acth levels and adrenal response in horses with ppid is due to secretion of acth that is less biologically active than that from normal horses. to test our hypothesis, a bioassay to evaluate acth activity was developed. adrenocortical explants were harvested aseptically from normal horses at euthanasia and stimulated with plasma from healthy (n = 9) and ppid-affected horses (n = 11). the assay was performed three times with explants obtained from different horses. cortisol secreted by the explants and plasma acth levels were measured with commercially available radioimmunoassays. cortisol secretion stimulated by each sample was standardized to the respective explant protein concentration. cortisol data were normalized for the acth concentration in each plasma sample and expressed as a cortisol:protein:acth ratio. ratios from horses with ppid and normal horses were compared by unpaired t-test. horses with ppid had significantly lower cortisol:protein:acth ratios compared to normal horses (assay 1: 0.06 ± 0.04 vs. 0.33 ± 0.11, p < 0.001; assay 2: 0.06 ± 0.04 vs. 0.30 ± 0.10, p < 0.001; assay 3: 0.07 ± 0.06 vs. 0.20 ± 0.13, p < 0.01). these results suggest that plasma acth from ppid horses is less biologically active than plasma acth from normal horses. our findings give further insight into the pathophysiology of ppid and may aid in the development of novel diagnostic testing protocols. an online survey was conducted to determine the perceived needs of potential employers of new acvim-laim diplomates. the survey was designed as the first step in determining what is needed for success in the various sectors of practice employing acvim-laim diplomates. demographic and background data were collected using questions and drop-down menus on the first page. the survey evaluated 189 skills or concepts in 26 areas of veterinary practice.
participants answered 4 questions about each skill or concept using drop-down ranked lists. those participants that had completed an acvim-laim training program were asked 3 additional questions about whether they were taught the skill or concept during their own residency. data were collated and descriptive statistics calculated. the mean scores or frequencies of use for each skill or concept were ranked to determine which of the skills or concepts were most important for an entry-level diplomate. eighty-eight individuals participated in the survey: 86 respondents were acvim diplomates, 1 was not board-certified, and 1 was an act diplomate. nineteen respondents were diplomates of acvim and an additional specialty. eighty-three respondents had completed an acvim residency. the majority of respondents were in academia (65%), with 30% in private practice. equine specialists prevailed (53%), followed by mixed large animal (28%) and food animal only specialists (13%). the distribution of years post-residency was slightly skewed toward younger diplomates, but overall there was a good distribution of diplomates across years of experience. most respondents stated that they did not make hiring decisions in their practice. competency in disciplines other than internal medicine was expected, with ultrasonography and radiology being the most desirable, followed by theriogenology and lameness. surgical skills, both equine abdominal surgery (10%) and food animal general surgery (11%), were considered important by some respondents. thirty-six per cent of respondents thought that a new diplomate should expect to make < $50,000 per annum, while only 13% of respondents thought that a new diplomate should expect to make ≥ $90,000 per annum. not all respondents answered questions on all skills or concepts. the mean number of skills or concepts evaluated was 86 (sd = 75), with only 18 respondents answering all 189.
all skills or concepts evaluated were found to be at least somewhat important, were estimated to be used at least occasionally, were recommended for inclusion in training programs as core or elective, and some level of knowledge was expected. at least some of the respondents were taught each of the skills or concepts during their residency, practiced the skill or concept at least occasionally during their residency, and some degree of competency was expected at the time of completion of their residency. these data will provide a framework for designing future laim residency programs.

this study evaluated the pharmacokinetics and clinical safety of an oral paste formulation of a commercially available cox-1-sparing nsaid in clinically healthy pony foals in a randomized controlled clinical trial. values for complete blood count, serum chemistry profile, urinalysis, pharmacokinetic assay, and gastric endoscopy were evaluated in eighteen shetland pony foals treated with firocoxib (0.1 mg/kg, po, q 24 h) or placebo for 14 days. foals were divided into 3 treatment groups. group 1 and 2 foals received firocoxib, while a 3rd group was administered an oral placebo. gastric endoscopy was performed on group 1 and 3 foals prior to treatment and on days 7 and 14 to monitor for the presence of gastric ulcers. group 2 and 3 foals had blood and urine samples taken sequentially for pharmacokinetic analysis, cbc, serum chemistry evaluation, and urinalysis. physical examinations were performed prior to treatment and daily for 17 days. data were analyzed using anova and paired t-tests (p < 0.05). none of the foals exhibited adverse clinical effects. there were no significant changes in cbc or biochemical profiles within groups, and no differences between groups. pretreatment gastric endoscopy scores were not significantly different from evaluations at 7 and 14 days.
firocoxib was quickly absorbed, with an observed maximum concentration at 2 hr, the first sampling interval, for the majority of animals. firocoxib plasma concentrations decreased in a log-linear manner after reaching the maximum concentration, and steady-state concentrations were achieved by the 7th dose. based on the sampling times after the final and 14th dose, an average half-life of 1.3 days was estimated. administration of firocoxib did not cause any adverse effects on gastrointestinal, hematological, or serum biochemical variables, appears to have been well tolerated, and follows a predictable pharmacokinetic pattern in 4-6 week old foals.

equine herpesvirus 1 (ehv-1) is highly prevalent in most horse populations. horses are routinely vaccinated against ehv-1, and neutralizing antibodies have helped to prevent disease. however, the usda has recently classified ehv-1 myeloencephalopathy (ehm) as an emerging disease, in response to the apparent increase in incidence, morbidity, and mortality of ehm that suggests a change in virulence of the virus. it has been reported that cellular immune mechanisms, in particular cytotoxic t-cells (ctls), are important in controlling ehv-1 viremia. interferon-alpha (ifn-a) has a key function in innate immune regulation by inducing the differentiation and maturation of ctls. here, we investigated the influence of abortogenic (racl11, ny03) and neuropathogenic (ab4) ehv-1 virus strains on ifn-a, il-4 and il-10 secretion in equine pbmc. equine pbmc were infected with racl11, ny03 or ab4 ehv-1 strains or kept in medium for 24 hours. ifn-a, il-10 and il-4 secretion was detected in the supernatants by a fluorescent bead-based cytokine assay. the production of ifn-a increased with increasing viral doses and similarly for all three ehv-1 strains. the production of the anti-inflammatory cytokine il-10 was significantly decreased after ab4 infection compared to racl11 and ny03 strains at viral infection doses of moi 0.3-1.
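the log-linear decline and half-life estimate reported for firocoxib above follow the standard first-order elimination relationship t½ = ln(2)/k, where k is the slope of ln(concentration) against time. a minimal sketch of that calculation, using hypothetical concentrations generated from an assumed 1.3-day half-life rather than the study's data:

```python
# estimate an elimination half-life from a log-linear concentration decline.
# sampling times, the starting concentration, and the 1.3-day half-life used
# to generate these hypothetical values are assumptions for illustration.
import numpy as np

t = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])  # days after last dose
c = 100.0 * np.exp(-np.log(2) / 1.3 * t)       # concentration, noise-free

# the slope of ln(c) vs. t is -k; the half-life follows as ln(2) / k
k = -np.polyfit(t, np.log(c), 1)[0]
t_half = np.log(2) / k
```

with noise-free data the fit recovers the assumed half-life exactly; real concentration-time data would scatter around the fitted line.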
at high doses (moi 3), il-10 production was suppressed by all three ehv-1 strains. the results suggested that abortogenic and neuropathogenic ehv-1 strains equally induce antiviral ifn-a production in equine pbmc. they also illustrated the differences in the ability of ehv-1 strains to modulate anti-inflammatory il-10. the neuropathogenic ab4 strain had an increased potential to down-regulate il-10 production, suggesting specific viral mechanisms that interfere with the control of inflammation in the host. the variations in innate il-10 secretion might influence the development of protective immunity and might offer an explanation why neuropathogenic ab4 induces more severe disease, including myeloencephalopathy, than abortogenic ehv-1 strains. previously presented at a conference of research workers in animal disease.

rhodococcus equi is the major cause of pneumonia in foals during the first six months of life, and control measures are frequently ineffective. treatment protocols are long, expensive and do not always produce good results. rhodococcosis prevention through immunization of foals using a safe and efficient vaccine is still a challenge. recent studies are based on the use of the virulence associated protein a (vapa), which has been described as an important inducer of immunity against r. equi. the present study evaluated the clinical and immune response of foals vaccinated with an attenuated strain of s. enterica typhimurium expressing vapa antigen (test group) or s. enterica typhimurium without the vapa gene (control group), prior to and following experimental challenge. two experimental phases were established according to the immunization route: intranasal or oral vaccination up to 12 hrs following birth and at 14 days of age. the experimental and control groups were challenged on day 28 with a virulent strain of r. equi. clinical examination, cbc and complementary imaging exams were used to evaluate the development of clinical signs.
immune response patterns were evaluated through immunoglobulin quantification, cytokine expression, lymphocyte proliferation assays, isolation of r. equi and cytological profiles of tbw. clinical manifestation was less intense in the test group during the second experimental phase, and death occurred only in the control group (2/3) and was due to r. equi pneumonia. the test group produced a more intense iggb response when compared to controls; however, no statistically significant difference was observed. lymphoproliferation and th1 cytokine expression were higher in the test group. in contrast, controls produced an il-4 response. local iga was significantly higher in animals immunized with salmonella carrying vapa. immunization protocols produced no severe toxic effects. the vaccination of neonatal foals with s. enterica typhimurium expressing vapa was considered safe, produced efficient modulation of the immune response and is apparently able to protect against experimental r. equi infection.

this study was conducted to test the hypothesis that the 32 kda protein, myristoylated alanine-rich c-kinase substrate (marcks), is involved in equine neutrophil migration and adhesion. in other species, marcks phosphorylation and dephosphorylation causes the protein to cycle between the cell membrane and cytosol, respectively. to investigate marcks phosphorylation in horses, neutrophils were isolated from whole blood and stimulated with platelet activating factor (paf), leukotriene b4 (ltb4) or phorbol myristate acetate (pma). western blotting was performed using specific phospho-marcks and total marcks primary antibodies. these results showed that marcks phosphorylation is maximal 30 seconds following stimulation and that dephosphorylation occurs within 3 minutes.
to investigate the requirement for marcks in equine neutrophil chemotaxis, isolated neutrophils were pre-treated with mans (a cell-permeant peptide identical to the n-terminal 24 amino acids of marcks), rns (a control peptide) or vehicle control (vc) prior to a migration assay toward known neutrophil chemoattractants (ltb4 or paf). pre-treatment of equine neutrophils with mans significantly inhibited migration, while rns pre-treatment had no effect. to investigate the requirement for marcks in equine neutrophil adhesion, mans, rns or vc treated cells were stimulated to adhere to immulon 2 plates coated with 5% fbs. pre-treatment of equine neutrophils with mans significantly inhibited adhesion, while rns pre-treatment had no effect. inhibition of marcks using a cell-permeant peptide identical to the protein's n-terminus significantly inhibited equine neutrophil adhesion and migration. these results indicate that marcks is an important regulator of equine neutrophil chemotaxis and represents a potential target for anti-inflammatory therapy.

amongst other tests, a thorough neurologic examination of horses may include walking with the head elevated and during blindfolding, in order to help differentiate normal from abnormal and to help with neuroanatomically localising any lesion(s), i.e. in the ataxic horse. consensus amongst equine neurologists suggests that gait abnormalities associated with these specific tests are often exacerbated in horses with underlying proprioceptive deficits; however, the effect of these tests on temporal gait characteristics in normal horses has not previously been assessed quantitatively. we hypothesized that head elevation or blindfolding, in comparison with walking in a straight line, would result in a compensatory decrease in lateral (left front-on to left hind-on and right front-on to right hind-on) and diagonal coupling intervals (left front-on to right hind-on and right front-on to left hind-on) in normal horses.
four thoroughbreds without any history or clinical signs suggestive of neurological disease (age range 3 to 5 years) were included in the study. retroreflective markers were applied to the withers, to the sacrum and to the left and right tuber coxae; for each limb, lateral fetlock markers and dorsal and lateral hoof wall markers were used. a minimum of 3 trials, each with 2-4 walk strides for each task, were analysed as horses walked across an 8-force-plate runway surrounded by a 12-camera kinematic system. force-plate data were processed with semi-automated custom-written matlab scripts. data were analysed with a mixed model using the statistical software r. there was a significant fixed effect of head elevation, compared with normal walk in a straight line, on the left and right lateral coupling intervals (p < 0.0001) and on the left and right diagonal coupling intervals (p < 0.0001). there was no significant effect of blindfolding on either lateral or diagonal coupling intervals. the random effect of horse had no influence on the coupling intervals. the decrease of the lateral coupling intervals indicates a tendency towards a pacing gait during head elevation. we conclude that there is a significant change in temporal gait characteristics of non-neurologic horses when the head is elevated, but not during blindfolding, compared to normal walking. current results suggest that pacing and increased variation in foot-placement during head elevation should be interpreted with caution; however, further work is required to determine whether the change differs between horses with and without neurological disease.

hereditary equine regional dermal asthenia (herda) is an autosomal recessive connective tissue disorder associated with a mutation in cyclophilin b that leads to impaired collagen folding, aberrant wound repair, and corneal abnormalities. it affects young quarter horses, appaloosas, and paints.
herda shows similarities to ehlers-danlos syndrome (eds), a human hereditary connective tissue disorder. many eds patients suffer from joint pain and osteoarthritis (oa) as adults. the similarity between eds and herda raises the question whether horses suffering from herda develop oa. in oa, excess production of inflammatory mediators such as prostaglandin e2 (pge2) activates enzymes that degrade cartilage as well as impede wound healing. the present study examined articular cartilage from yearling horses afflicted with herda. we hypothesized that chondrocytes from these horses are continually activated to produce inflammatory mediators. to test this hypothesis, articular cartilage from carpal and tarsal joints of herda horses was evaluated using histology. pge2 production by chondrocyte cultures was measured by elisa and analyzed by one-way anova with tukey post-hoc test (significance at p < 0.05). we also determined the anti-inflammatory effects of an avocado/soybean unsaponifiables (asu), glucosamine (glu), and chondroitin sulfate (cs) mixture (ingredients found in cosequin® asu) and phenylbutazone (pbz) on chondrocytes. cosequin® asu and pbz are used alone or in combination for the management of oa. chondrocyte cultures were incubated for 24 hrs with control media alone, a clinically relevant concentration of pbz (4 µg/ml), or the combination of asu (nmx1000®, 8.3 µg/ml) + glu (fchg49®, 11 µg/ml) + cs (trh122®, 20 µg/ml). articular cartilage from joints of five herda-afflicted horses showed gross and histologic evidence of osteoarthritic lesions. chondrocyte cultures from cartilage of horses suffering from herda spontaneously produced greater pge2 than chondrocytes from normal horses (>1000-fold). pbz significantly decreased pge2 production by ~90% (p < 0.001). the combination of asu + glu + cs also significantly reduced pge2 production by ~60% (p < 0.001). the present study supports anecdotal findings that horses suffering from herda are likely to develop oa.
the inhibition of pge2 synthesis by asu + glu + cs suggests that this combination may be beneficial for the management of oa in herda. research supported by nutramax laboratories, inc.

equine inflammatory airway disease (iad) is a common condition often treated empirically with corticosteroids. gene expression analysis in the bronchoalveolar lavage fluid (balf) may help understand the effects of corticosteroids in iad. the first part of the study aimed to identify reference genes in the balf of iad horses treated with corticosteroids. the second part of the study investigated the effects of dexamethasone and fluticasone propionate treatments on the mrna expression of il-1b, il-4, il-8 and il-17. the expression stability of seven candidate reference genes was determined in balf taken pre- and post-treatment with dexamethasone and fluticasone propionate in horses with iad. primer efficiencies were calculated using linregpcr. the normfinder, genorm and qbaseplus software packages were used to rank the genes according to their stability. gapdh was the most stably expressed gene, whereas b2m was the least stable gene. in addition, genorm analysis revealed that the number of genes required for optimal normalization was four (gapdh, sdha, hprt, rpl32). in the second part of the study, the mrna expression of il-1b, il-4, il-8 and il-17 was measured in balf samples from seven iad horses treated in a randomized cross-over design with dexamethasone (0.05 mg/kg sid, 15 days) or inhaled fluticasone propionate (3000 mcg bid with aerohippus, 15 days). the balf samples were taken at baseline and after each treatment period. there was no significant effect of corticosteroid treatment on the mrna expression of il-1b, il-4 and il-8 in the balf. the mrna expression of il-17 was suppressed by both dexamethasone and fluticasone propionate treatments.
pneumonia is observed in horses after long distance transportation in association with confinement of the horses' head position, leading to a reduction in tracheal mucociliary clearance time (tmct). we hypothesize that clenbuterol, a beta-2 agonist shown to increase tmct in the horse, will ameliorate the effect of a fixed head position on large airway contamination and inflammation in a long-distance shipping model. six adult horses were enrolled in a cross-over design prospective study. horses were housed with their heads in a fixed position for 48 hours to simulate long distance transport, and treated with clenbuterol (0.8 µg/kg po q12 h) or a placebo starting 12 hours before simulated shipping. tmct was measured using a charcoal clearance technique. data were collected at baseline and 48 hours, and included tmct, tracheal wash cytology and quantitative culture, rectal temperature, cbc, fibrinogen, and serum tnfa, il-10 and il-2 levels. there was a 3-week washout between study arms, and each horse served as its own control. the data were analyzed using regression analysis and wilcoxon rank-sum tests. no statistically significant difference was seen between treatment and placebo groups for any of the variables investigated. tmct did not differ after treatment (1.71 ± 0.64 cm/min) versus placebo (1.55 ± 0.82 cm/min; p > 0.10), and intratracheal bacterial counts were similar for the treatment (105 × 10³ ± 42 × 10³ cfu) and placebo (98 × 10³ ± 41 × 10³ cfu) groups (p > 0.10). a reduction of tracheal β-hemolytic streptococcus spp. after clenbuterol versus placebo was also nonsignificant (0% versus 33%; p > 0.10). in conclusion, treatment with clenbuterol does not appear to combat the deleterious effects of this long-term shipping model.

breathing cold air during strenuous exercise is associated with airway inflammation.
under these conditions, warming and humidification of inspired air occurs in the lower respiratory tract, resulting in mucosal cooling, desiccation, and hyperosmolarity. the purpose of this research was to test the hypothesis that airway hypertonicity causes inflammatory cell migration and alterations in cytokine expression associated with exercise-induced airway inflammation. horses (n = 9) were examined in a randomized crossover design after exposure to aerosols (5-minute nebulization with solutions of either isotonic or hypertonic mannitol). airway leukocytes were harvested 5 and 24 hours post aerosol challenge via bronchoalveolar lavage, and were used to determine total and differential nucleated cell counts and expression of cytokine-specific mrna. hypertonic aerosol challenge resulted in an increase in total number of cells 5 hr after challenge, characterized by increased macrophage (p = 0.04) and neutrophil (p = 0.03) concentrations, but there was no effect on airway leukocyte concentrations 24 hours after nebulization. no significant changes in the relative quantity of mrna for airway cytokines were noted at either time point. these data demonstrate that transient airway hypertonicity can cause airway leukocyte influx and may be responsible for the airway inflammation commonly found in athletes that exercise in cold conditions. however, our data do not support the hypothesis that hypertonicity is the sole initiating cause of changes in cytokine expression secondary to cold weather exercise. it is likely that factors such as airway temperature, shear stress or epithelial damage also play a role in this phenomenon.

we studied the importance of abdominal sonograms in neonatal foals suffering from gastrointestinal conditions. we hypothesized that there would be a subgroup of neonates with sonographically detectable pneumatosis intestinalis (pi) as a reflection of a necrotizing component of the disease.
records of foals 7 days of age hospitalized between 2005 and 2009 with signs of gastrointestinal disease were evaluated (n = 89). the association of sonographic, clinical, pathological and clinicopathological signs with outcome and severity of disease was determined. pneumatosis intestinalis was imaged in 19 foals. twenty-seven foals were classified as having necrotizing gastrointestinal disease based on the presence of gastrointestinal signs (colic, diarrhea, gastric reflux or abdominal distension) and pi detected sonographically (19), or surgical (2) or pathological (6) evidence of gastrointestinal necrosis. there was a difference between the overall survival rate (58%) and the survival rate in foals with necrotizing disease (33%, p = 0.005) or foals with pi detected sonographically (37%, p = 0.02). pneumatosis intestinalis was the only sonographic finding associated with outcome. sonographic abnormalities in peritoneal fluid, stomach, duodenum, jejunum, cecum, umbilicus or the presence of meconium were associated (p < 0.05) with surrogates of severity of disease (hospitalization cost or days of hospitalization). hypoproteinemia was associated with pi (p = 0.02). the presence of blood in the feces, reflux and abdominal distension were associated with necrotizing gastrointestinal disease (p < 0.05). abdominal sonograms have prognostic value in neonatal gastrointestinal disease. pneumatosis intestinalis was a common sonographic sign that worsened the prognosis. the therapeutic implications of detecting a necrotizing component of the gastrointestinal disease deserve further study.

the interaction of insulin and the microvascular endothelial insulin receptor (irc) plays an important role in the normal and insulin resistant (ir) individual. while endothelial irc signaling is normally vasodilatory, this effect is well documented to reverse in the ir individual, resulting in vasoconstriction.
although vascular dysfunction has been reported in sepsis-associated equine laminitis, the role of the laminar microvasculature in endocrinopathic laminitis remains poorly characterized. the purpose of this study was to characterize the pattern of irc expression in digital laminae in ponies subjected to a dietary carbohydrate challenge that mimicked abrupt exposure to pasture rich in nonstructural carbohydrates (nsc). mixed-breed ponies (body weight 270.9 ± 74.4 kg) received a diet of hay chop (nsc ~6% on a dm basis) for 4 weeks prior to initiation of the experimental feeding protocol. following conditioning, ponies either remained on the control diet (n = 11) or received the same diet supplemented with sweet feed and oligofructose (total diet ~42% nsc; n = 11) for a period of 7 days. serum insulin concentrations were measured prior to and after completion of the feeding protocol. at the end of the feeding protocol, sections of numerous tissues, including dorsal digital laminae, were collected immediately following euthanasia. the samples were formalin-fixed for 48 hours, transferred to 70% ethanol, and paraffin-embedded. laminar sections were stained immunohistochemically for irc using a commercially available antibody (abcam); the number of irc(+) cells was quantified in 40x light microscopy fields (n = 10) for each section. the total number of irc(+) cells was greater in the laminae of challenged ponies than control ponies (p = 0.0096), and there was a significant correlation between the change in serum basal insulin concentration and the number of laminar irc(+) endothelial cells (r = 0.74; p < 0.05). while the number of irc(+) endothelial cells was significantly greater in the dermal laminae of challenged ponies (p = 0.0095), there was no difference in the number of interstitial irc(+) cells (p = 0.82).
no epithelial irc(+) cells were observed in any laminar section, and irc(+) cells were conspicuously absent from the deep dermal tissue (including vessels). up-regulation of irc expression in the laminar vasculature occurs acutely in response to dietary carbohydrate challenge and accompanies hyperinsulinemia in ponies. the dramatic increase in endothelial irc expression in the laminar microvasculature of nutritionally challenged ponies, with no apparent epithelial irc present, suggests that hyperinsulinemia associated with exposure to increased dietary nsc may induce laminar injury by causing a vasoconstriction in ir equids similar to that described in the microvasculature of ir humans.

glucose transport from the bloodstream into cells, the limiting step in whole-body glucose utilization, is regulated by a family of glucose transporter (glut) proteins in insulin-sensitive (i.e., muscle and adipose) tissues. we previously demonstrated that glut4, the major isoform, is a key factor in the pathogenesis of equine insulin resistance (ir). while it has been recently demonstrated that glut12 (a newly discovered isoform) increases insulin-stimulated glucose transport in human muscle, its role in other tissues, particularly in the setting of ir, is not well characterized in any species. in addition, as160 has recently emerged as a key downstream signaling molecule regulating translocation of glut to the cell surface, the rate-limiting step in glucose uptake. we hypothesized that glut12 content would be differentially expressed across tissues and that ir would induce alterations in glucose transport by affecting active cell-surface glut12. biopsies of skeletal muscle, and subcutaneous and visceral adipose tissue were collected from light-breed horses, characterized as either insulin sensitive or compensated ir based on the results of an insulin-modified frequently-sampled intravenous glucose tolerance test (n = 5/group).
we specifically quantified active cell-surface glut12 in these biopsies, using an innovative exofacial bis-mannose photolabeling assay, which has not been previously applied to glut12. total glut12 protein expression was measured by western blot, as well as total and phosphorylated (indicating activation of) as160. glut12 was expressed in all the depots, with a significant regional effect. total glut12 protein content was increased (p < 0.05) in visceral (omental and mesenteric) compared to subcutaneous (nuchal ligament and tailhead) adipose tissue and skeletal muscle of healthy horses. ir did not induce alterations in active cell-surface or total glut12 content, nor in total or phosphorylated as160, in any of the tissues evaluated. our data suggest that glut12 is abundant in visceral adipose tissue and is therefore likely to play a substantial role in the regulation of glucose transport. however, neither glut12 translocation nor as160 activation is impaired in insulin-sensitive tissues of ir horses. it is concluded that, in contrast with glut4, glut12 does not appear to contribute to glucose transport alterations during naturally occurring equine ir.

insulin resistance (ir), characterized by exaggerated glycemic or insulinemic responses to glucose challenge, is a key metabolic disturbance in horses that develop obesity-associated laminitis. in addition to obesity, diet and age have been demonstrated to affect tissue sensitivity to insulin in other species, but these factors have received limited investigation in horses. we hypothesized that there would be greater glycemic and insulinemic responses to a sweet feed meal in aged horses, as compared to adult horses, as well as in horses adapted to a forage-only diet. three diets, grass hay only, grass hay plus sweet feed (starch- and sugar-rich, ss), and grass hay plus a fat and fiber (ff) feed, were fed to 17 mares, 8 adult (5-12 yr) and 9 aged (>19 yr), for a 6-week adaptation period in a randomized design.
glycemic and insulinemic responses to a standardized meal of sweet feed (4 g/kg bw offered for 1 hour) were determined for 6 hours from the onset of feeding. peak glucose and insulin concentrations and areas under the glucose or insulin vs. time curves (auc-g, mg/dl/360 min, and auc-i, µu/ml/360 min) were determined, and data were analyzed by one- and two-factor repeated measures anova. there were no differences between age groups in glycemic responses to any of the diets. however, in aged horses peak glucose concentration (p < 0.03) and auc-g (p < 0.01) were greater after adaptation to the forage-only diet, as compared to the other two diets. in contrast, aged horses had a greater peak insulin concentration (p < 0.05) and auc-i (p < 0.03) than adult horses on all diets, but no differences in peak insulin concentration or auc-i were found between diets within age groups. as hypothesized, the insulin response, but not the glycemic response, to a sweet feed meal was greater in aged horses, regardless of background diet. further, the glycemic response was greatest after adaptation to a forage-only diet, but this finding was only significant in aged horses.

morbidity, mortality, and economic loss to the equine industry. in obese humans and rodent models of nutritional obesity, systemic insulin resistance and hyperinsulinemia are followed temporally in a majority of individuals by decreased glucose tolerance, pancreatic b-cell failure, and type ii diabetes mellitus. in stark contrast to humans, obese horses and ponies chronically remain in what is termed a "prediabetic" state in human ir, characterized by hyperinsulinemic euglycemia. few data exist describing the biology of the equine endocrine pancreas in the chronically ir animal that may both: 1) explain this unique equine endocrine physiology and 2) characterize the animal at risk for hyperinsulinemia-associated laminitis.
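area-under-the-curve summaries like the auc-g and auc-i described above are commonly computed with the trapezoidal rule over the sampling window; the study does not state its exact method, so the sketch below is a minimal illustration under that assumption. the sampling times and glucose concentrations are hypothetical, not study data:

```python
# trapezoidal-rule area under a glucose vs. time curve over a 360-minute
# window. times and concentrations below are hypothetical illustrative
# values, not measurements from the study.
import numpy as np

time = np.array([0, 30, 60, 90, 120, 180, 240, 300, 360])       # minutes
glucose = np.array([90.0, 130, 150, 140, 120, 105, 95, 92, 90])  # mg/dl

# sum of trapezoid areas between consecutive sampling points
auc_g = np.sum(0.5 * (glucose[1:] + glucose[:-1]) * np.diff(time))
```

the same calculation applied to insulin concentrations would yield an auc-i in µu/ml·min; unevenly spaced sampling points (as in the 30- and 60-minute intervals here) are handled naturally by the per-segment widths.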
the purpose of the study reported here was to characterize the morphology and physiology of the equine endocrine pancreas in response to a dietary carbohydrate challenge. twenty-two mixed-breed ponies (body weight 266.6 ± 170.5 kg) were conditioned to a diet of chopped hay (nsc ~6% on a dm basis) for 4 weeks; following conditioning, ponies either remained on the control diet (n = 11), or received the same hay supplemented with sweet feed and oligofructose (total diet ~42% nsc; n = 11) for 7 days. serum insulin concentrations were measured prior to and after completion of the feeding protocol. at the end of the feeding protocol, sections of numerous tissues, including pancreas, were collected immediately following euthanasia. the samples were formalin-fixed for 48 hours, transferred to 70% ethanol, and paraffin-embedded. immunohistochemistry was performed on pancreas sections using a commercially available anti-insulin antibody (abcam), and measurements of islet surface area and b-cell surface area were performed (n = 10 islets per tissue section) using a commercially available computer software program (imagej). there was a trend for greater total islet surface area in pancreatic tissue from ponies fed the high-nsc diet when compared to the ponies on the hay diet (p = 0.068); however, no difference was noted in b-cell surface area between diet treatments (p = 0.12). the change in serum insulin concentration was significantly greater in the high-nsc-fed ponies than in controls (403.8 ± 317.1 miu/l vs. 1.00 ± 4.03 miu/l; p = 0.002); however, this variable was not correlated with total islet surface area (r = 0.32; p = 0.17) or b-cell surface area (r = 0.25; p = 0.3).
Due to the relatively modest changes in pancreatic islet surface area that accompany marked increases in serum insulin concentrations in ponies fed a high-NSC diet, it is important to assess both β-cell function and insulin clearance mechanisms in future studies to delineate the mechanism(s) of hyperinsulinemia in this model.

Humans that suffer from obesity show exaggerated inflammatory responses, and this may be relevant to the association between increased adiposity and laminitis in horses with equine metabolic syndrome (EMS). This study was performed to test the hypothesis that inflammatory responses to endotoxemia differ between healthy horses and those affected by EMS. Six healthy adult mares and 6 horses with EMS received an intravenous infusion of lipopolysaccharide (LPS; 20 ng/kg in 60 mL sterile saline) or saline alone. A crossover design was employed with a 7-day washout period. Physical examinations were performed hourly for 9 h, and whole blood was collected at 30, 60, 90, 120, 180, and 240 min for assessment of inflammatory cytokine gene expression. A liver biopsy was performed between 240 and 360 min post-infusion. Data were analyzed using mixed-model ANOVA. Mean rectal temperature, heart rate, and respiratory rate increased following LPS infusion (treatment × time; p < 0.001), with higher heart (group × treatment; p = 0.087) and respiratory rates (group; p = 0.017) detected in EMS horses. Lipopolysaccharide infusion significantly increased whole blood gene expression of tumor necrosis factor α (TNF-α), interleukin (IL)-1β (p < 0.001), IL-6 (p < 0.001), IL-8 (p < 0.001), and IL-10 (p = 0.002), and hepatic gene expression of IL-6 (p < 0.001), IL-8 (p < 0.001), and IL-10 (p = 0.016). Inflammatory gene expression did not differ significantly between groups, so our hypothesis was not supported. Heart rates tended to be higher when LPS was administered to horses with EMS.
Elevated serum concentration of cardiac troponin I (cTnI) is a biomarker for myocardial damage in horses. Preferred times to test blood for cTnI levels following athletic performance or other events that may cause myocardial injury are not yet established and would be affected by time of release from the myocytes, location of release within the myocytes, duration of release, and half-life of cTnI in the horse. This information would be necessary to more accurately and reliably test horses for myocardial injury. The aim of this study was to determine the elimination half-life (t1/2) of equine cTnI. To establish the t1/2 of equine cTnI in horses, cTnI was recombinantly expressed in E. coli. Two healthy ponies received intravenous injections of recombinant equine cTnI, and plasma cTnI concentrations were measured with a point-of-care cTnI analyzer at multiple time points after injection. Standard pharmacokinetic analysis was performed to establish the elimination half-life of cTnI. For comparative purposes, data were subjected to pharmacokinetic models describing a single versus biphasic elimination profile. Elimination of recombinant equine cTnI following intravenous administration exhibits a short half-life. Establishing the t1/2 of troponin provides critical information in understanding the clinical application of this cardiac biomarker in clinical practice. This study describes a true biological cTnI t1/2, which has not been documented in any species thus far. Stall-side assessment of this cardiac biomarker in horses should enhance the ability of clinicians to detect myocardial damage and aid in the management and treatment of horses with cardiac disease.

The objective of the next study was to evaluate the between-pony, within-pony, between-analyser and within-analyser variation of flow-mediated vasodilation (FMD) measurement in healthy ponies, to investigate the hypothesis that FMD occurs in healthy ponies.
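The single-compartment elimination half-life described in the cTnI abstract above is conventionally obtained by fitting a mono-exponential decay, C(t) = C0·e^(−kt), with t1/2 = ln 2 / k; a common estimator is least-squares regression of ln C on t. A minimal sketch, using synthetic concentrations rather than the study's data:

```python
import math

def half_life(times, concs):
    """Elimination half-life from mono-exponential decay C(t) = C0*exp(-k*t),
    with k estimated by least-squares regression of ln(C) on t."""
    n = len(times)
    ys = [math.log(c) for c in concs]
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    k = -slope                      # elimination rate constant (1/time)
    return math.log(2) / k

# Synthetic plasma curve with a true half-life of 2 h.
times = [0, 1, 2, 4, 8]
concs = [100 * 0.5 ** (t / 2) for t in times]
t_half = half_life(times, concs)    # recovers 2.0 h
```

A biphasic (two-compartment) profile, as the authors also considered, would instead be fit as a sum of two exponentials.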
Six healthy, native breed, unrelated pony mares of varying weight (236-406 kg), body condition score (3/9-7/9) and age (14-25 years) were used. The median artery was occluded for 5 minutes. Two-dimensional (2D) ultrasonographic images of the artery were recorded for 30 seconds prior to and for 2 minutes after occlusion. The peak luminal diameter was compared to baseline diameter to calculate the relative percentage increase in luminal size (FMD). Images were obtained from six ponies on one occasion and from one pony on six occasions. Analysis of images was performed by two independent analysers and by one analyser twice. The mean (SD) FMD in 6 ponies was 12.57% (4.28%) and in 1 pony (6 occasions) was 7.30% (2.11%). Coefficients of variation were 34.09% and 28.84%, respectively. Agreement between analysers was fair (ICC = 0.47) and within analyser was poor (ICC = 0.30). FMD is used to assess endothelial function in humans and has recently been assessed for its use in canine subjects. FMD occurs and measurement is feasible in ponies. FMD could be used to assess endothelial function in the context of laminitis or other cardiovascular diseases.

The current state-of-the-art technique for measuring blood pressure (BP) in the horse is invasive and involves cannulation of the facial artery. Indirect techniques, such as oscillometry, have proven useful in the anaesthetised horse but have not become routine in the standing horse. Monitoring BP can be indicated for the diagnosis and treatment of the hypotensive patient (i.e., caused by endotoxemia, hypovolemia, systemic inflammatory response syndrome, and cardiac failure) or the hypertensive patient (i.e., due to equine metabolic syndrome or pain). The objective of this study was therefore a) to describe the methodology for application of oscillometric BP using a cuff applied to the tail in the standing horse and b) to determine the accuracy and precision of this method applied to the normotensive standing horse.
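The two summary statistics in the FMD results above are both one-line formulas: FMD is the relative percentage increase of peak over baseline luminal diameter, and the coefficient of variation is the sample SD expressed as a percentage of the mean (e.g., 4.28/12.57 ≈ 34%, matching the reported between-pony CV). A sketch with hypothetical diameters, not the study's measurements:

```python
import math

def fmd_percent(baseline_mm, peak_mm):
    """Flow-mediated vasodilation: relative % increase in luminal diameter."""
    return (peak_mm - baseline_mm) / baseline_mm * 100.0

def cv_percent(values):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return sd / mean * 100.0

# Hypothetical example: a 3.00 mm artery dilating to 3.36 mm -> 12% FMD.
example_fmd = fmd_percent(3.00, 3.36)
```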
The oscillometric method is simple to apply in a clinical setting. A pneumatic cuff is snugly applied to the unclipped tail-base with the cuff bladder centered over the middle coccygeal artery. The tail circumference must match the manufacturer's description of the cuff's diameter range. The oscillometric apparatus inflates the cuff and obtains systolic, diastolic and mean arterial BP (SAP, DAP and MAP). At least 2 consecutive measurements must be obtained. A correction of 0.7 mmHg per cm of vertical distance between cuff and heart level is added to the measurement to correct for the hydrostatic pressure difference. For determination of accuracy and precision of indirect SAP, DAP and MAP, eight healthy horses (age 3 to 16 years) were equipped with an intra-arterial catheter in the facial artery and a commercial tail-cuff oscillometric apparatus. Measurements were recorded every 2 minutes for 20 minutes. The data were analysed with the statistical software R using a mixed model with repeated measurements and a Bland-Altman analysis corrected for repeated measurements. Oscillometric BP was accurate and precise for MAP (mean bias, lower confidence level, upper confidence level, variation in difference, all mmHg: −0.3, −18.5, 19.1, 33.2, respectively) in the conscious horse, but not for SAP (−1.5, −19.3, 16.3, 38.2, respectively) or DAP (0.05, −15.9, 16.0, 49, respectively). There was no significant contribution to the statistical model of either horse or measurement number. All horses tolerated the tail-cuff well and the method was simple to apply. Only MAP could be measured with acceptable accuracy and precision in the normotensive standing horse using the described oscillometric method.

Reference intervals for thyroid hormone (TH) concentrations have not been established for donkeys. Therefore, clinicians must use reference ranges from horses, potentially leading to misdiagnosis of thyroid diseases.
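Two of the calculations in the BP abstract above are easy to sketch: the hydrostatic correction (0.7 mmHg added per cm of vertical cuff-to-heart distance) and a Bland-Altman bias with 95% limits of agreement (bias ± 1.96 × SD of the paired differences). Note this is the simple Bland-Altman form, not the repeated-measures-corrected analysis the authors used, and the values below are illustrative only:

```python
import math

def hydrostatic_correct(bp_mmHg, cuff_to_heart_cm):
    """Add 0.7 mmHg per cm of vertical cuff-to-heart distance.
    The sign convention (cuff above vs. below heart) must match the setup."""
    return bp_mmHg + 0.7 * cuff_to_heart_cm

def bland_altman(reference, test):
    """Mean bias (test - reference) and 95% limits of agreement."""
    diffs = [t - r for r, t in zip(reference, test)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```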
We hypothesized that TH concentrations are different between donkeys and horses. The purposes of this study were: a) to compare TH concentrations between donkeys and horses and b) to determine whether age may influence TH concentrations. Thirty-eight healthy donkeys (8.5 ± 0.8 years), mixed breeds, and 20 healthy Andalusian horses (6.4 ± 0.5 years) were used. Donkeys were divided into three groups: <5 years (n = 13), 5-10 years (n = 12), and >11 years (n = 13). Serum concentrations of total triiodothyronine (TT3), free triiodothyronine (FT3), total thyroxine (TT4), free thyroxine (FT4), reverse triiodothyronine (rT3) and thyroid-stimulating hormone (TSH) were quantified by radioimmunoassay. All blood samples were collected the same day. Neither horses nor donkeys had received any treatment for 30 days before sampling, and both farms had similar production conditions. Total T3, FT3, FT4 and TT4 concentrations were higher (p < 0.01) in donkeys than horses. In contrast, no statistical differences were found for rT3 and TSH concentrations. Young donkeys (<5 years) had higher FT4, TT4 and rT3 concentrations compared to the other donkey groups (p < 0.05). Old donkeys (>11 years) had lower TT3 and FT3 concentrations than both younger donkey groups (p < 0.05). This study shows that there are differences in TH concentrations between donkeys and horses, raising awareness of the possibility of misdiagnosis of thyroid gland dysfunction when using values from horses and of the need to establish donkey-specific reference intervals.

Ovariectomy is associated with alterations of responses to many hormones, not just those associated with reproductive function. In humans and rats, ovariectomy leads to insulin resistance, increased adiposity and altered fat mobilization. The effects of ovariectomy on energy metabolism have not been reported in horses.
Ovariectomized mares have been shown to respond normally to an ACTH stimulation test, but the response to suppression of the hypothalamo-pituitary-adrenal axis has not been previously described. The aim of this study was to evaluate the effect of ovariectomy on insulin response in mares and to determine if mares exhibit alterations in response to dexamethasone administration after ovariectomy. Six healthy mares underwent an intravenous glucose tolerance test (IVGTT), an insulin sensitivity test (IST) and a dexamethasone suppression test (DST) before and 5 weeks after bilateral ovariectomy. Body weight, cortisol values at baseline and at 15 and 24 hours after dexamethasone injection, ACTH values at baseline and at 15 and 24 hours after dexamethasone injection, basal insulin/glucose ratio, time to reach a 60% decrease in blood glucose in the IST, time to reach baseline glucose concentration in the IVGTT, and areas under the curves plotting blood glucose against time after injection of glucose or insulin were compared before and after ovariectomy using a paired t-test or an ANOVA for repeated measures. The significance level was p < 0.05. Average body weight was decreased after surgery (6 kg). The injection of dexamethasone resulted in a serum cortisol concentration of less than 1 μg/dL in all mares before ovariectomy, whereas after ovariectomy, dexamethasone injection resulted in a serum cortisol concentration of less than 1 μg/dL in 5 out of 6 mares. In all cases, ACTH concentration was within the reference range (9-35 pg/mL) before and after ovariectomy. However, ACTH concentrations at T0 and at T15 were significantly higher after ovariectomy. Each mare had a normal IVGTT, both before and after ovariectomy. Additionally, no significant differences were observed in basal blood glucose (84 ± 12 mg/dL before and 85 ± 3 mg/dL after) or in the time to reach glucose baseline (108 ± 66 min before and 99 ± 51 min after).
Serum basal insulin concentration and insulin/glucose ratio were not significantly different before and after ovariectomy (22.0 ± 7.9 mIU/mL and 17.5 ± 8.9 mIU/mL, and 0.26 ± 0.09 and 0.20 ± 0.10, respectively), nor was the average time to reach a 60% decrease in blood glucose after insulin injection (30 ± 7 min and 25 ± 9 min, respectively). These findings suggest that, as reported in other species, the short-term effect of ovariectomy may modify the dexamethasone response in mares and that, contrary to other species, it may not modify the insulin response.

Equine gastric ulcer syndrome (EGUS) is a common medical problem in horses. The high prevalence of gastric ulcers, vague clinical signs and negative effect on performance make it a significant clinical and economic problem within the horse industry. Current pharmaceutical treatments are expensive and alter the acidic environment of the stomach. Berries and pulp from the seabuckthorn plant (Hippophae rhamnoides) are a rich source of vitamins, trace minerals, amino acids, antioxidants, and other bioactive substances and have been used successfully to treat stomach ulcers in man and rats. The purpose of this study was to evaluate the efficacy of a commercially sold liquid extract of seabuckthorn berries (SeaBuck™ SBT Gastro-Plus) for treatment and prevention of gastric ulcers in horses. Eight Thoroughbred and Thoroughbred-cross horses (3-10 years of age, 5 geldings and 3 mares, 380-600 kg) were used in a blinded two-period crossover study. Treatments consisted of control (untreated) and treatment (SeaBuck™ SBT Gastro-Plus) twice daily mixed with the grain meal. Horses were treated for 5 weeks, followed by a 1-week alternating feed-deprivation period to induce or worsen existing ulcers. Gastroscopies were performed on all horses on day 0, week 5, and week 6 (at the end of the alternating feed-deprivation period). Gastric juice was aspirated and pH was measured.
During gastroscopy, gastric ulcer scores were assigned to each stomach based on lesion number and severity. Horses acted as their own controls, and between each treatment period the horses had a 2-week washout period. Data were analyzed by ANOVA for repeated measures via the GLM procedure (SAS Inst. Inc., Cary, NC). When significant differences (p < 0.05) were observed, a post-hoc Tukey's test was used to determine differences. Non-glandular gastric ulcer scores significantly increased in all control and SBT-treated horses from week 5 to week 6, after the feed-deprivation phase of the study. There was no significant difference in the non-glandular gastric number (p = 0.84) or non-glandular gastric severity (p = 0.51) scores in SBT-treated horses compared to non-treated controls. Glandular ulcer number (p = 0.02) and glandular ulcer severity (p = 0.02) were significantly lower in the SBT-treated horses compared to the control horses. There was no significant difference in pH (p = 0.06) in SBT-treated horses compared to non-treated controls. Thus, SeaBuck™ SBT Gastro-Plus, mixed in the feed twice daily, may be efficacious in controlling the severity of glandular ulcers in horses during stress, without increasing stomach pH.

The availability of rapid and accurate quantitative fibrinogen measurements may be useful for evaluation of hospitalized equine patients. The Abaxis VSpro analyzer was evaluated for precision using two levels of human fibrinogen controls (300 mg/dL and 150 mg/dL), four different VSpro machines, and two different lots of cartridges, assessed over 5 consecutive days. The coefficients of variation of the assay ranged from 4% (300 mg/dL) to 8% (150 mg/dL). We subsequently evaluated the Abaxis VSpro fibrinogen assay against fibrinogen concentration measured using the Beckman Coulter ACL-1000 in 50 equine samples of varying fibrinogen concentrations obtained from horses with gastrointestinal disease. All samples were measured in citrated plasma.
Fibrinogen concentrations measured on the ACL-1000 ranged from 226 to 959 mg/dL (median 501 mg/dL). VSpro samples were run in duplicate, and the mean was compared to the ACL values. Pearson correlation coefficient analysis generated an r value of 0.949 (p < 0.001). Duplicate measurements on the VSpro were strongly correlated to each other, with an r value of 0.9690 (p < 0.001). Bland-Altman analysis of these samples for the VSpro compared to the ACL-1000 noted a bias of −84 ± 57 mg/dL. The results of this study indicate that the VSpro benchtop fibrinogen analyzer provides accurate and precise fibrinogen data compared to the ACL-1000 reference analyzer.

The immune response of foals to R. equi is incompletely understood and believed to be responsible for clinical disease caused by this pulmonary pathogen. In a recent study, foals receiving a large inoculum exhibited Th2 skewing with pneumonia, and a small inoculum exhibited Th1 skewing without clinical disease. We hypothesized that cytokine/chemokine production by pulmonary alveolar macrophages, in vitro, would increase with the infective dose and that the magnitude of the response would differ between foals and adults. Alveolar macrophages were obtained by bronchoalveolar lavage from 7 healthy mares and their 5-week-old foals. Macrophage cultures were infected with R. equi (33701+ or 33701−) at a multiplicity of infection (MOI) of 1 or 100. Total RNA was harvested 4 and 24 hours post-infection, reverse transcribed, and used as template for quantitative PCR. The ΔΔCt method was used to calculate relative gene transcripts for IL-6, IL-12p40, TNF-α and CXCL10. Cellular infections at MOI 100 resulted in significantly higher expression of IL-6, IL-12p40 and TNF-α mRNA transcripts compared to MOI 1. However, the dose effect was reversed for CXCL10, with significantly lower expression at the higher MOI. There was no difference in the magnitude of cytokine/chemokine responses by the alveolar macrophages between adults and foals.
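The ΔΔCt relative-quantification step used in the macrophage study above reduces to the standard fold change = 2^(−ΔΔCt), where each ΔCt is the target Ct minus the reference-gene Ct, and ΔΔCt compares treated (infected) to control conditions. A sketch with hypothetical Ct values, not the study's data:

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method.

    dCt = Ct(target) - Ct(reference gene);
    ddCt = dCt(treated) - dCt(control).
    """
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_treated - dct_control)

# Hypothetical: target crosses threshold 3 cycles earlier (relative to the
# reference gene) in infected cells -> 8-fold up-regulation.
fold = fold_change_ddct(22.0, 18.0, 25.0, 18.0)
```

The method assumes near-100% amplification efficiency for both target and reference genes; efficiency-corrected variants exist when that does not hold.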
Dose-dependent responses of alveolar macrophages may represent a novel mechanism by which R. equi could modulate immune responses and therefore disease. Significant down-regulation of CXCL10 mRNA transcripts associated with a higher dose is of particular interest, as this chemokine plays a role in development of protective Th1 responses.

The intent of this study was to develop likelihood ratios (LRs) for infection attributable to Corynebacterium pseudotuberculosis in horses based on synergistic hemolysis inhibition (SHI) test titers. Medical records for horses presented to the UC Davis veterinary teaching hospital with serum submitted for SHI titer determination were evaluated, and 171 cases met study inclusion criteria. These cases were grouped based on evidence of internal and/or external infection attributable to C. pseudotuberculosis, and likelihood ratios with 95% confidence intervals were determined. Results showed increasing LRs, indicating increasing odds for any form of active disease as titer increased with all cases considered. LRs for internal infection were > 1 for titers ≥ 1280 overall and for titers > 160 with external abscess cases excluded. No difference from 1 (and therefore no significant change in pre-test to post-test odds) was seen in any LRs for internal disease when only cases with external disease were examined (external and internal disease vs. external only). Overall, the SHI test results showed usefulness in determining internal C. pseudotuberculosis infection in horses with no evidence of external abscessation. Overall, however, higher titers were more indicative of active external or internal disease than of internal disease specifically, in contrast to previous reports. The SHI test was unable to distinguish internal infection when external abscesses were present.

Salmonella enterica is a zoonotic pathogen that has tremendous impact on many different animal production and management systems. Rapid detection of S.
enterica in fecal samples may facilitate effective infection control practices. Current detection methods require 24-48 hours (polymerase chain reaction, or PCR) or 48-72 hours (enriched aerobic culture) to obtain results. Alternatives have been developed, lateral flow antigen detection systems (LFADs), which are currently marketed for Salmonella detection related to food safety microbiology. The objective of this study was to evaluate two commercially available rapid Salmonella detection systems in equine feces. Fecal samples collected from repeatedly culture-negative horses were inoculated with known concentrations of Salmonella enterica serotype Typhimurium (five uninoculated control samples, and 5 samples of each 10-fold dilution [1.9 × 10^0 to 1.9 × 10^4 CFU/gram of feces]). All samples were aerobically cultured using a standard enrichment technique. In a blinded fashion, samples were tested using two different LFADs as well as plated on agar media for confirmatory testing. At 24 hours of incubation, using bacterial culture as the reference method, test 1 correctly identified 70% of samples.

Bacterial contamination of stalls with Salmonella sp. is a serious problem in equine hospitals. Salmonella sp. exposure of horses in the facility can result in nosocomial infections, which can lead to temporary facility closure until the organism is eradicated. Hospital closure can result in loss of revenue, damage to reputation and interference with patient care. The purpose of this study was to evaluate three stall cleaning methods for eradication of Salmonella sp. at an equine veterinary teaching hospital (VTH). Horses admitted to the VTH were assigned to Salmonella sp.-negative stalls within areas of the VTH during the study period (September 2009 to January 2010).
When the horses were discharged, stalls were randomly assigned to one of three cleaning methods (pressure-washing only [PW], pressure-washing and hand scrubbing [PWS], or hand scrubbing only [S]) in a single-period, non-crossover design. All stalls were stripped of bedding, and surfaces were sprayed with tap water and cleaned with a disinfectant quaternary-ammonia solution (Super HDQ Neutral, Spartan Chemical Co., Inc., Maumee, OH). The pressure-washing system used (PSC Cleaning Systems, Inc., Toronto, Canada) provided a pressure of 3000 psi and a temperature range of 185-215 °F. Following cleaning, each stall was allowed to air dry, and within 48 hours stall surfaces were sampled using three 4″ × 4″ sponges moistened with sterile saline. The person collecting the samples was masked to the method of cleaning. Sponges were submitted to the Louisiana Animal Disease Diagnostic Laboratory (LADDL) for culture of Salmonella sp. A chi-squared analysis was used to determine significant differences (limit p < 0.05) between cleaning methods in Salmonella sp. isolation. During the study period, 112 stalls (PW [n = 29]; PWS [n = 50]; S [n = 33]) were included. All stalls had negative environmental Salmonella sp. cultures prior to beginning the study. For PW-cleaned stalls, 6/29 (20.7%) were Salmonella sp.-positive; for PWS-cleaned stalls, 12/50 (24%) were Salmonella sp.-positive; and for S-cleaned stalls, 4/33 (12.1%) were Salmonella sp.-positive. Although there were fewer Salmonella sp.-positive stalls (12.1%) among the hand-scrubbed stalls, cleaning method did not significantly (p = 0.4057) affect the isolation of Salmonella sp. from the stall environment. In conclusion, power washing alone, power washing and hand scrubbing, and hand scrubbing alone, using a quaternary-ammonia solution, did not significantly affect environmental isolation of Salmonella sp. from stall surfaces in the VTH during this study.
The objectives of this study were to determine the plasma and pulmonary disposition of gamithromycin in foals and to investigate the in vitro activity of the drug against Streptococcus equi subsp. zooepidemicus (S. zooepidemicus) and Rhodococcus equi isolates. A single dose of gamithromycin (6 mg/kg of body weight) was administered intramuscularly. Concentrations of gamithromycin in plasma, pulmonary epithelial lining fluid (PELF), bronchoalveolar lavage (BAL) cells, and blood neutrophils were determined using HPLC with tandem mass spectrometry detection. The minimum inhibitory concentration of gamithromycin required to inhibit growth of 90% of R. equi and S. zooepidemicus isolates (MIC90) was determined. Additionally, the activity of gamithromycin against intracellular R. equi was measured. Mean peak gamithromycin concentrations were significantly higher in blood neutrophils (8.35 ± 1.77 μg/mL) and BAL cells (8.91 ± 1.65 μg/mL) compared to PELF (2.15 ± 2.78 μg/mL) and plasma (0.33 ± 0.12 μg/mL). Mean terminal half-lives in neutrophils (78.6 h), BAL cells (70.3 h), and PELF (63.6 h) were significantly longer than that of plasma (39.1 h). The MIC90 for S. zooepidemicus isolates was 0.125 μg/mL. The MIC90 of gamithromycin for macrolide-resistant R. equi isolates (128 μg/mL) was significantly higher than that of macrolide-susceptible isolates (1.0 μg/mL). The activity of gamithromycin against intracellular R. equi was similar to that of azithromycin and erythromycin. Intramuscular administration of gamithromycin at a dosage of 6 mg/kg would maintain PELF concentrations above the MIC90 for S. zooepidemicus and phagocytic cell concentrations above the MIC90 for R. equi for approximately 7 days.

Eight western stock yearling horses were infected with EHV-1 (Ab4) by nasopharyngeal instillation. Venous blood samples for collection of plasma were collected in Na-citrate tubes on the day prior to infection (D−1) and on D4 through D11.
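For the gamithromycin disposition data above, a rough sense of how long a concentration stays above the MIC can be had from the half-life alone: under a simplified mono-exponential decline, t = t1/2 × log2(C_peak / MIC). This is only a back-of-envelope sketch with hypothetical numbers; the authors' ~7-day coverage figure rests on the full concentration-time data, not this approximation:

```python
import math

def time_above_mic(c_peak, mic, t_half):
    """Time (same unit as t_half) for a mono-exponentially declining
    concentration to fall from c_peak to the MIC."""
    if c_peak <= mic:
        return 0.0
    return t_half * math.log2(c_peak / mic)

# Hypothetical: peak 8x the MIC with a 60-h half-life -> 3 half-lives (180 h).
t_cover = time_above_mic(8.0, 1.0, 60.0)
```

Real coverage estimates must also account for absorption, accumulation with repeated dosing, and multi-compartment kinetics, which this one-line model ignores.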
In addition, clinical data, nasal swabs and peripheral blood mononuclear cells (PBMC) for detection of viremia were collected on the day before infection (D−1) and on D1 through D14 post-infection. D-dimer concentrations were determined in citrated plasma samples using a latex agglutination test (Minutex D-dimer, Biopool, Ireland). Viral load in PBMC was determined using quantitative PCR. All horses showed biphasic fevers typical of EHV-1 infections. One horse developed acute EHM on D10 and was euthanized after samples were collected. In all horses, D-dimers were undetectable on D−1 and on D4, 5 and 11. In contrast, all horses had increased D-dimer concentrations for 3 to 5 consecutive days starting on day 6 post-infection. D-dimer concentrations in 2 horses increased to 1000 μg/mL, and one of these horses was the horse with acute EHM. Interestingly, mean increased D-dimer concentrations showed timely overlap with the mean fever curve and, delayed by 1 day, with the mean viremia curve. Because plasma samples for D-dimer measurements were not collected during the first 3 days post-infection, which are typically associated with a primary fever, conclusions on the association of D-dimers with fever or viremia await analysis of a second study currently being conducted in our laboratory. In conclusion, our data indicate that during EHV-1 infection with neuropathogenic strains, activation of the coagulation cascade and production of cross-linked fibrin is widespread, not limited to horses with clinical signs of EHM, and can be expected between days 6 and 10 post-infection.

Lawsonia intracellularis is an emerging pathogen in horses and the causative agent of equine proliferative enteropathy (EPE). The goal of this study was to evaluate the exposure of pre-weanling foals and broodmares to Lawsonia intracellularis on several farms in Louisiana with a history of EPE and compare the results to several farms with no known clinical cases of EPE in foals.
An additional goal of the study was to identify whether a relationship exists between Lawsonia intracellularis and other gastrointestinal pathogens in foals. Whole blood and fecal samples were collected from 66 mares and 68 foals from four breeding farms in Louisiana. Farms A and B had no known clinical cases of EPE, while farms C and D had known cases of EPE in 2009. Serum samples were examined for the presence of antibodies against Lawsonia intracellularis using an immunoperoxidase monolayer assay (IPMA). DNA was extracted from fecal samples using a commercial DNA kit, and molecular detection of Lawsonia intracellularis was assayed using real-time PCR. Fecal ova were counted using quantitative sucrose flotation. The presence of fecal Clostridium difficile toxin was measured using a commercial enzyme-linked immunosorbent assay (ELISA). Three of the 4 farms examined had foals and mares with exposure to L. intracellularis, as evidenced by serum antibodies against the organism. Of the total population sampled, 6 foals (8.8%) and 14 mares (21.2%) had evidence of antibodies to L. intracellularis based on serology. Three foals (4.4%) tested positive for the L. intracellularis organism by fecal PCR, and all of these foals were located on farm C. Of these, one foal was seronegative, while the other two were seropositive. Farm C also had the highest percentage of mares (28.6%) serologically positive for L. intracellularis, while farm A had the highest percentage of foals (14.3%) with antibody titers against L. intracellularis. Farm C also had the only pairs (n = 3) of serologically positive mares with seropositive foals. While farms A and B had seropositive mares and/or foals, none of the foals were positive for L. intracellularis fecal shedding by PCR. All serum and fecal samples were negative for evidence of L. intracellularis on farm D. Ten foals (14%) had fecal egg counts greater than 200 eggs per gram, and 2 foals (3%) were positive for C. difficile toxin.
This study demonstrated evidence of natural exposure to L. intracellularis on farms both with and without a history of EPE in Louisiana. Further, this study failed to establish a relationship between L. intracellularis and other gastrointestinal pathogens.

The objective of this study was to examine the clinical, hematological, biochemical, and outcome data from equids infected with Anaplasma phagocytophilum presented to a primary care field setting in southeastern Pennsylvania. Computerized medical records from 19 febrile equids with confirmed Anaplasma phagocytophilum infection were reviewed. Confirmation of Anaplasma phagocytophilum was defined by the presence of granular inclusion bodies seen within leukocytes or eosinophils on microscopic blood smear evaluation and/or a positive polymerase chain reaction (PCR) for Anaplasma phagocytophilum. 18 horses and 1 donkey presented with a mean fever of 104.4 °F and a mean fever duration of 39 hours. The mean age at presentation was 9.6 years, and the mean packed cell volume was 30.8%. 15/19 cases were diagnosed in the months of May to December. Equids aged 5 to 15 years had significantly lower platelet counts. 16/18 cases were positive on blood smear for inclusion bodies, and 6/6 cases were positive for Anaplasma phagocytophilum on PCR. Treatments included intravenous oxytetracycline, oral doxycycline, or both. Mean treatment duration was 4.8 days, and mean treatment cost was $621. 17/19 cases were normothermic within 48 hours. The treatment used in the two remaining cases was changed from oral doxycycline to intravenous oxytetracycline and was successful. This is the first case series of equine granulocytic anaplasmosis in the mid-Atlantic states. All cases were examined and treated in the field. In order to make a definitive diagnosis, some cases required PCR. Treatment failures were documented with the use of oral doxycycline alone. 100% of the cases survived.
A high incidence of clinical and possibly genetic abnormalities has been reported amongst Friesian horses, including dwarfism, hydrocephalus, dissecting aortic aneurysm and esophageal dysfunction. The purpose of the current study was to develop a new electromyography (EMG) method to assess neurophysiological function of the esophagus, especially for Friesian horses. Five Friesian horses with esophageal dysfunction were included (ranging in age from 0.5 to 24 years and comprising 4 mares and a stallion), along with two Friesian control horses (a 10- and a 12-year-old gelding). All five horses with esophageal dysfunction had a history of recurrent esophageal obstruction and were examined histopathologically post-mortem. Barium contrast radiography was used as the gold standard to distinguish the diseased from the control horses. An endoscopically guided percutaneous needle EMG procedure (Viking Quest®; software version 11.0) was performed just caudal to the larynx and just cranial to the thoracic inlet (to monitor striated and smooth muscle, respectively) to visualize esophageal motility. Esophageal contractility in both control horses was predominantly reflected by interference patterns associated with longer duration and lower amplitude in smooth muscle compared to striated muscle: mean (± SD) values were 35.1 ± 19.4 ms and 167.7 ± 96.7 μV (n = 19 readings) versus 10.8 ± 14.3 ms and 305.8 ± 233.7 μV (n = 24 readings), respectively. In diseased horses, aperistalsis in smooth muscle was the most remarkable finding, suggesting a loss of inhibitory neurogenic input resulting in aperistalsis and thus esophageal dysfunction. Preliminary findings suggest that endoscopically guided percutaneous needle EMG might become a valuable method for elucidating the pathophysiology of esophageal motility dysfunction, especially in Friesian horses.

Lymphoma affects horses of all ages. Unlike in humans, no etiologic agent has been discovered.
A 9-year-old Thoroughbred/Warmblood cross mare presented with signs of upper and lower respiratory disease, was subsequently diagnosed with lymphoma and equine multinodular pulmonary fibrosis (EMPF), and was positive for equine herpesvirus 5 (EHV-5) in both the pulmonary tissue and the lymph nodes. Retrospective polymerase chain reaction (PCR) testing of six lymphoma cases found that 5 of 6 cases were positive on PCR for EHV-5 (83.3%, p = 0.0045, RR = 5.55). Electron microscopy was performed on one sample and herpesvirus particles were identified. Of the samples on which immunohistochemistry was performed (3 of 6), only T-cell-rich B-cell lymphoma was identified. Samples of mesenteric or submandibular lymph nodes from 20 clinically healthy horses were submitted for EHV-5 PCR analysis; 15% were positive. Gammaherpesviruses in humans have been associated with lymphoproliferative diseases such as Kaposi's sarcoma and Burkitt's lymphoma. Equine herpesvirus 5, also a gammaherpesvirus, is found in association with equine lymphoma, although the exact role this virus plays in the initiation or perpetuation of lymphoproliferative neoplasia remains unknown.

Pathologic events reported to occur in the digital laminae in early stages of sepsis-related equine laminitis include leukocyte extravasation into the laminar interstitium, pro-inflammatory cytokine expression, and epithelial stress. While these events have been documented early in the disease process at both a developmental stage and at the onset of Obel grade 1 (OG1) lameness in the carbohydrate overload (CHO) model of laminitis, the later events occurring at the onset of Obel grade 3 lameness (OG3, the time point at which structural failure of the laminae usually occurs) have not been determined. We hypothesized that the inflammatory events described above are sustained through OG3 lameness, likely playing an injurious role culminating in laminar failure.
Our objectives were to determine pro-inflammatory gene expression, leukocyte extravasation, and epithelial stress at OG3 induced using the CHO model. Archived laminar tissue samples (snap-frozen and paraffin-embedded sections) were used from a previous CHO study at Louisiana State University (control group [n = 6, water], CHO group [n = 7, corn starch]). Calprotectin (CP) immunohistochemistry (IHC) was used to assess both laminar myeloid leukocyte numbers and epithelial stress; RT-qPCR was used to assess inflammatory gene expression. Minimal inflammatory changes were present at OG3 compared to published values at the OG1 stage in the CHO lameness model, including decreased mRNA concentrations of cytokines (i.e. 20-fold increase in IL-6 at OG3 vs. >2000-fold increase at OG1; no increase in IL-1β at OG3 vs. 11-fold increase at OG1), chemokines (no change in MCP-1 at OG3 vs. >30-fold increase at OG1; 8-fold increase in IL-8 at OG3 vs. 95-fold increase at OG1) and adhesion molecules (no change in E-selectin at OG3 vs. 10-fold increase at OG1). Laminar leukocyte emigration was also decreased at the onset of OG3 lameness compared to previously reported leukocyte infiltration at OG1. Interestingly, COX-2 underwent a greater increase at OG3 (approx. 50-fold) compared to that reported at OG1 lameness (35-fold). Finally, epithelial stress at OG3, evidenced by CP IHC, did not follow the uniform widespread distribution reported at OG1 lameness, but instead was present in focal areas in which secondary epidermal laminae on either side of a common primary dermal vascular supply demonstrated increased CP signal. Overall, laminar inflammation appears to be subsiding at OG3 lameness, with epithelial stress possibly more dependent on vascular dysregulation than on inflammatory events. The sustained increase in COX-2, central to the induced production of vasoactive prostanoids in disease processes, may play a role in vascular dysregulation.
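RT-qPCR fold changes such as those above (e.g. the ~20-fold IL-6 increase at OG3) are conventionally derived from threshold-cycle (Ct) values via the 2^(−ΔΔCt) method. The abstract does not state its normalization scheme, so the following is only a generic sketch with hypothetical Ct values and a hypothetical reference gene.

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method:
    treated vs. control, each normalized to a reference (housekeeping) gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: after normalization, the target crosses threshold
# ~4.32 cycles earlier in treated tissue, i.e. roughly 20-fold up-regulation.
print(fold_change(22.0, 18.0, 30.0, 21.68))
```

Because each PCR cycle roughly doubles product, a difference of one normalized cycle corresponds to a two-fold difference in starting template, which is why the fold change is an exponent of 2.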
This study was conducted to characterize the clinical, laboratory and postmortem findings associated with oleander toxicosis in equids and to determine factors predictive of survival in these cases. Retrospective analysis of medical records from our veterinary medical teaching hospital from January 1, 1995 to July 15, 2010 was completed. Records of equids demonstrating detectable oleandrin in serum, plasma, urine or gastrointestinal fluid samples, or detectable serum digoxin in the absence of pharmaceutical cardiac glycoside administration, were included. Descriptive statistics were used to evaluate the history, physical examination, and laboratory and postmortem data of affected individuals. Logistic regression analysis was used to detect physical examination and laboratory factors significantly associated with survival. Thirty equids met the inclusion criteria of the study. Three of the 30 subjects (10%) were dead on arrival or died immediately upon arrival. Of the remaining 27 equids, 85% presented with gastrointestinal abnormalities, 70% were azotemic and 48% had cardiac arrhythmias. Mortality was 50% for all subjects and 44% for those treated. The predominant cause of non-survival was cardiac dysfunction. Factors significantly associated with survival included relatively decreased hematocrit and serum glucose, relatively increased serum chloride, absence of cardiac arrhythmias, and increased duration of hospitalization. Equids with oleander toxicosis frequently present with gastrointestinal upset and may develop cardiac and renal disturbances. Patients with cardiac arrhythmias and relatively increased hematocrit and serum glucose and decreased serum chloride are significantly less likely to survive. Oleander intoxication is a differential diagnosis for colic in endemic areas, particularly with concurrent azotemia or cardiac dysrhythmia.
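The logistic-regression findings above amount to associations between predictors and the odds of survival; on a univariable basis such an association reduces to a 2×2 odds ratio. A minimal sketch, using hypothetical counts rather than the study's data:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
       a = exposed with event,     b = exposed without event
       c = not exposed with event, d = not exposed without event"""
    return (a * d) / (b * c)

# Hypothetical table: survival (event) by absence of cardiac arrhythmia (exposure).
#                 survived   died
# no arrhythmia       10       4
# arrhythmia           3      10
print(odds_ratio(10, 4, 3, 10))  # OR > 1: absence of arrhythmia favors survival
```

In a multivariable logistic model each coefficient plays the same role, reported as an odds ratio adjusted for the other predictors.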
The quantitative physicochemical approach emphasizes the importance of strong ions (Na, K, Cl, lactate), PCO2, and the plasma protein concentrations in determining plasma pH. Serum concentrations of strong ions, proteins, and total CO2 are reported on modern biochemical profiles. We hypothesized that the results of serum biochemical analysis can be used for acid-base interpretation in horses. The objective was to determine whether blood pH, anion gap, and strong ion gap could be quantitatively estimated, and used clinically, based on the results of serum or plasma biochemical analysis. 100 horses (70 adults and 30 foals) presented to the isolation unit of our veterinary teaching hospital for suspected infectious diseases were prospectively enrolled. A venous serum sample was analyzed using a Hitachi 911 or Cobas 6000 c501 automated analyzer. Measured parameters included strong ion difference (SID = [Na + K] − [Cl + lactate]), total protein concentration (TP), and total CO2 (TCO2), with lactate measured by blood gas analyzer. A second venous blood sample was collected into a Na-heparin blood gas syringe and analyzed for pH (pHm), PCO2 and concentrations of Na, K, Cl, and lactate using a Radiometer 800 FLEX blood gas analyzer; SID was calculated from the measured values, and total solids (TS) were estimated using refractometry. Serum/plasma pH (pHcalc) was calculated using Stewart's 8-factor equation from the results of serum or plasma biochemical analysis, assuming PCO2 = 40 mmHg for serum and using the measured PCO2 for plasma. Anion gap (AG) was calculated as AG = (Na + K) − (Cl + TCO2). Strong ion gap (SIG) was calculated as SIG = 0.21 × [total protein, g/L] / (1 + 10^(pKa − pH)) − AG. Linear regression analysis was used to compare pHcalc to pHm, as well as AG and SIG to blood lactate concentrations. Measured pH ranged from 7.03 to 7.48 (7.37 ± 0.07).
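The SID, AG and SIG definitions in the methods above translate directly into code. This is only a sketch: the protein pKa is an assumed constant (the abstract does not report the value it used) and the electrolyte values shown are illustrative, not from the study.

```python
def strong_ion_difference(na, k, cl, lactate):
    """SID (mEq/L) = (Na + K) - (Cl + lactate), all inputs in mEq/L."""
    return (na + k) - (cl + lactate)

def anion_gap(na, k, cl, tco2):
    """AG (mEq/L) = (Na + K) - (Cl + TCO2)."""
    return (na + k) - (cl + tco2)

def strong_ion_gap(total_protein_g_per_l, ph, ag, pka=7.08):
    """SIG (mEq/L) = 0.21 * [TP, g/L] / (1 + 10**(pKa - pH)) - AG.
    pKa here is an assumed value; the abstract does not state the constant used."""
    return 0.21 * total_protein_g_per_l / (1 + 10 ** (pka - ph)) - ag

# Illustrative values (not the study's data):
na, k, cl, lact, tco2, tp, ph = 138.0, 4.0, 100.0, 1.5, 28.0, 60.0, 7.40
ag = anion_gap(na, k, cl, tco2)
print(strong_ion_difference(na, k, cl, lact), ag, round(strong_ion_gap(tp, ph, ag), 2))
```

A negative SIG in this framework indicates unmeasured anions accounted for by the AG beyond the protein contribution, which is why the study regresses both AG and SIG against lactate.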
Measured SID from serum biochemistry (SIDsb) ranged from 22.3 to 51.4 mEq/L (36.2 ± 4.7 mEq/L) and SID from the blood gas analyzer (SIDbg) from 14.3 to 46.6 mEq/L (33.3 ± 5.4 mEq/L; r² = 0.59; SIDbg = 0.919 × SIDsb). SIDsb and SIDbg showed small variability in measurements. TP ranged from 18 to 88 g/L (51.0 ± 13.9 g/L) and TS from 20 to 84 g/L (55.2 ± 13.8 g/L; r² = 0.77; TS = 1.071 × TP). Using SIDsb and TCO2 values with constant PCO2, pHcalc was poorly associated with pHm (r² = 0.27; pHcalc = 0.48 × pHm + 3.89). In contrast, using SIDbg with measured PCO2, pHcalc was closely associated with pHm (r² = 0.54; pHcalc = 0.98 × pHm + 0.15) and the equation was not different from the line of identity. Calculated anion gap and SIG (mEq/L) were significantly linearly correlated with lactate concentrations (mmol/L): AG = 0.93 × [lactate] + 8.3 (r² = 0.39), and SIG = −0.91 × [lactate] + 0.5 (r² = 0.41). We conclude that pHcalc using SIDsb, TCO2 and constant PCO2 values is not accurate. However, variability of measured biochemical parameters between machines was small, permitting the use of serum biochemistry for clinical interpretation of metabolic acid-base abnormalities in patients. These results re-emphasize the importance of strong electrolytes and proteins in acid-base balance.

Matrix metalloproteinases (MMPs) are critically important in remodeling processes and in wound healing. However, excessive activation of MMPs by pro-inflammatory mediators, including cytokines, prostaglandin E2, and nitric oxide, leads to tissue breakdown. This is observed in osteoarthritis (OA), which is characterized by erosive lesions in articular cartilage. In hereditary equine regional dermal asthenia (HERDA), afflicted horses exhibit collagen abnormalities and can have associated chronic inflammation and aberrant wound repair. HERDA affects horses with Quarter Horse bloodlines and is similar to the human hereditary connective tissue syndrome Ehlers-Danlos (EDS).
Many adult EDS patients suffer from joint pain and OA. We hypothesized that chondrocytes from articular cartilage of HERDA horses have increased activity of MMPs. To test this hypothesis, chondrocytes were retrieved from articular cartilage of homozygous HERDA carpal and hock joints. Chondrocytes from normal horses were also obtained for comparison. Chondrocytes were seeded at 5 × 10^6/mL into 6-well plates and incubated at 37°C, 5% CO2 for up to seven days. Activity of secreted MMPs was determined by zymography using equal amounts of protein for loading. Secreted MMPs were analyzed by Western blot. Zymography showed that normal chondrocytes secreted two major bands with gelatinolytic activity observed at 92 and 72 kDa, suggestive of the latent forms of MMP-9 and MMP-2, respectively. Less intense bands of gelatinolytic activity were observed at about 82 and 62 kDa, suggestive of the active forms of MMP-9 and MMP-2. Another band of activity was seen at 240 kDa, suggestive of a dimer of MMP-9 that has been reported when MMPs are in excess of tissue inhibitors of metalloproteinases (TIMPs). Chondrocyte cultures from homozygous HERDA cartilage showed a similar profile, but with activity decreased by 90% at 92 kDa and increased by 10-50% at 72 kDa compared to normal chondrocytes. Western blot analysis detected MMP-2 and MMP-9 immunoreactivity in chondrocyte culture media of HERDA-afflicted and normal horses. The present study demonstrates for the first time that horses suffering from HERDA have increased MMP activity, which may predispose them to the development of lesions in articular cartilage. Research supported by Nutramax Laboratories, Inc.

Equine polysaccharide storage myopathy (PSSM) type 1 is a dominantly inherited glycogenosis caused by a mutation in the gene coding for skeletal muscle glycogen synthase type 1 (GYS-1). The disease has been reported to affect the Haflinger breed, but so far its prevalence is unknown.
The aim of this preliminary study was to estimate the occurrence of the GYS-1 mutation in Austrian Haflingers and to establish which of the seven Haflinger sire lines appear most affected. GYS-1 genotyping of 50 randomly chosen Haflingers was performed with a validated restriction fragment length polymorphism assay. Resting and post-exercise muscle enzyme activities (creatine kinase [CK], aspartate aminotransferase [AST], lactate dehydrogenase [LDH]) and blood lactate concentrations were compared between horses with and without the mutation. Among the 50 horses, 9 were heterozygous (HR) carriers of the mutation. No homozygotes (HH) were identified. All horses with the GYS-1 mutation were descendants of the A- or W-sire lines. The estimated HR prevalence was 18% (95% CI: 8.6-32.4%). CK activity after exercise was significantly higher in HR horses compared with horses not carrying the mutation (RR) (p = 0.022). AST activity was significantly higher in the HR group at rest and after exercise (p < 0.001). There was no statistically significant difference in resting CK, resting and post-exercise LDH activity, or blood lactate between HR and RR. Results suggest that the prevalence of HR in the Austrian Haflinger population is higher than in the overall Quarter Horse population and might be as high as 30%, similar to some draft horse breeds. Further research is needed to establish the prevalence within the different breeding lines.

Hereditary equine regional dermal asthenia (HERDA) is an autosomal recessive connective tissue disorder affecting Quarter Horse lineages [1]. Although a mutation in the gene encoding cyclophilin B has been genetically linked to HERDA, its causal association with the disease is not yet documented [2]. Previously, we demonstrated reductions in ultimate tensile strength (UTS), modulus of elasticity, and energy to failure (toughness) of skin from many corporal regions of HERDA animals [3]. Given the presumed relationship between HERDA and abnormal collagen structure, and the predominance of type I collagen in skin, we hypothesized that altered biomechanical properties would be detected in tendons, which are rich in type I collagen. To evaluate this hypothesis we compared the UTS, modulus of elasticity, and energy to failure of forelimb deep digital flexor tendons (DFT) from six HERDA horses to six age-matched controls. Isolated DFT was secured and pulled to failure on an Instron® 1011 universal testing instrument using purpose-built cryogenic clamps. Analysis of variance was executed using the SAS 9.2 PROC GLIMMIX program (SAS Institute, 2009). P-values ≤ 0.05 were identified as significant. UTS and modulus of elasticity were significantly lower in HERDA DFT when compared with controls (p < 0.0001); energy to failure did not differ between groups. These findings document abnormal biomechanics in HERDA tendon, leading us to postulate that lower UTS and modulus of elasticity associated with the HERDA defect could convey a competitive advantage in the athletic disciplines in which this defect has segregated. (References on request.)

A proprietary herbal biocontamination product (BIOS), approved for cosmetic use in France, inhibits proliferation of medically relevant bacteria, mold, and viruses. These properties make BIOS potentially useful as a topical wound medication, prompting us to compare BIOS to silver sulfadiazine (SSD) in a distal extremity wound healing model in horses [1]. Using general anesthesia, two 6.25-cm² wounds were aseptically created on the dorsomedial aspect of all limbs. For the duration of the study, two contralateral limbs were randomly chosen to be bandaged; the other two limbs were un-bandaged, with one limb of each group being treated with 10% BIOS and the other with SSD. For each limb, the most proximal wound served as an untreated control.
Every 48 hours, wounds were evaluated, digitally photographed, and perimeter and area determined using morphometric software (ImageJ, NIH). Analysis of variance did not identify significant differences between SSD and BIOS treatment for wound perimeter (p = 0.76) or area (p = 0.95). At individual time points the effect of bandaging was significant when area was evaluated (p = 0.019) and trended toward significance for perimeter comparisons (p = 0.084), substantiating published reports that bandaging modifies wound healing. Differences in perimeter and area between control and treatment were highly significant (p < 0.0001), substantiating the importance of topical treatment. Over the study duration, the effects of bandaging (p < 0.0001) and topical treatment (perimeter p < 0.0001; area p = 0.0016) continued to be highly significant. BIOS performance in the equine distal extremity wound model was equivalent to SSD. Both bandaging and topical treatment significantly impacted wound healing. This effect was compounded when both variables were evaluated over time.

Radiolabeled leukocytes are the only scintigraphic method currently available for identifying sites of infection and/or inflammation in horses; however, the clinical applicability of this technique is limited by expense and poor efficacy. This pilot study compares the accumulation of 99mTc-labeled IgG, PEG-liposomes and leukocytes in an equine muscle abscess model. Three mixed-breed adult horses had 2 × 10^6 CFU of S. equi subsp. zooepidemicus inoculated into the right semitendinosus to create an abscess. PEG-liposomes were prepared via the film hydration method and labeled using 200 mCi 99mTc-hexamethylpropyleneamine oxime (99mTc-HMPAO). Autologous leukocytes were obtained from 120 mL whole blood and labelled using 200 mCi 99mTc-HMPAO. Commercial equine polyclonal IgG was conjugated with the chelator hydrazinonicotinamide (HYNIC) and labelled with 200 mCi 99mTc.
Radiopharmaceutical administration was initiated 24 hours after inoculation. Horses 1 and 2 received 5 mg 99mTc-IgG, 2.7 mmol/kg 99mTc-liposomes and 99mTc-leukocytes, with a 48-hour interval between each radiopharmaceutical. Horse 3 received only 99mTc-leukocytes. Scintigraphic examinations were performed at 8 and 21 hours post-injection (p.i.) with each radiopharmaceutical. After the final study, horses were euthanized and tissue samples collected. The percentage of injected dose per kilogram of tissue (%ID/kg) was calculated for the region of the abscess, normal muscle and multiple organs. Scintigraphic examinations demonstrated increased radiopharmaceutical uptake in the region of the abscess with all three techniques at both time points. At 8 hours p.i., the abscess-to-background ratio was highest using 99mTc-IgG (3.7 ± 0.2). At 21 hours p.i., the abscess-to-background ratio was highest using 99mTc-liposomes (5.9 ± 2). Tissue biodistribution data revealed abscess-to-muscle ratios of 36 (99mTc-IgG), 24 (99mTc-liposomes), and 4.1 (99mTc-leukocytes). These preliminary data demonstrate that 99mTc-liposomes, 99mTc-IgG and 99mTc-leukocytes exhibit long-circulating characteristics and accumulate at inflammatory/infectious foci after intravenous injection in horses. 99mTc-IgG and 99mTc-liposomes appear to be superior to 99mTc-labelled leukocytes in this model. Due to its low cost and ease of preparation, 99mTc-IgG has great potential for clinical use where identification of infectious or inflammatory foci is necessary.

Digital hypothermia is used clinically to decrease the incidence of sepsis-related equine laminitis, a disease causing structural failure of the digital laminae resulting in crippling lameness.
Because hypothermia was recently reported to effectively decrease laminar expression of inflammatory molecules, including pro-inflammatory cytokines, chemokines and COX-2, in equine laminitis, our laboratory is investigating the effect of hypothermia on central upstream signaling cascades which may induce expression of these diverse inflammatory molecules. The p38 MAPK pathway has recently been reported to be a central component of inflammatory signaling in multiple diseases including human sepsis, and is currently being assessed as a therapeutic target. We thus hypothesized that (1) p38 MAPK is upregulated and activated in affected laminae in equine laminitis and (2) digital hypothermia inhibits inflammatory mediator expression by blocking p38 MAPK phosphorylation (an indicator of p38 MAPK activation). Western hybridizations using both a total p38 MAPK and a phospho-p38 MAPK antibody were performed on archived pooled laminar samples from the black walnut extract (BWE) model (control [n = 10], two developmental (DEV) groups [1.5 h and 3 h post-BWE administration] and the onset of Obel grade 1 lameness (OG1) [n = 5 each]) and the carbohydrate overload (CHO) model (CON [n = 8], DEV [n = 6], OG1 [n = 6]) of laminitis, and on individual laminar samples from two groups of horses from a digital hypothermia (DH) study. In the DH study, one forelimb of each horse was kept at approximately 4°C in ice water and the other at ambient temperature following administration of 10 g/kg oligofructose (OF). Dorsal laminae were harvested for snap freezing at either 24 hours after OF administration (DEV, n = 7) or at the onset of lameness (OG1, n = 6), using protein extracted from treated and untreated digital laminae of each horse. Increased laminar concentrations of phospho-p38 MAPK were present in the developmental periods (1.5 h and 3 h) in the BWE model, and in both the DEV and OG1 periods in the CHO laminitis model.
However, digital hypothermia had no effect on laminar phospho-p38 MAPK concentrations. Thus, p38 MAPK is activated in affected laminae in multiple models of laminitis, but does not appear to be the central signaling cascade through which hypothermia works to block the expression of inflammatory molecules. Therefore, p38 MAPK is not likely to be a viable therapeutic target as a sole means of blocking the multiple inflammatory signaling mechanisms inhibited by local hypothermia.

Abstract E-60. Does cefquinome penetrate the blood-brain barrier in the normal horse? Hollis AR (1), Duggan VE (2) and Corley KTT (3). (1) Scott Dunn's Equine Clinic, Berkshire, UK; (2) University College Dublin, Dublin, Ireland; (3) Anglesey Lodge Equine Hospital, The Curragh, Ireland.

Meningitis is a rare but serious condition that occurs in both foals and adult horses. There is currently a restricted choice of antimicrobials that are both safe to use in horses and penetrate the blood-brain barrier. Cefquinome is a fourth-generation cephalosporin that has activity against Streptococcus, the most commonly reported causative organism in adult horse meningitis. Therefore, if cefquinome were to achieve therapeutic concentrations in cerebrospinal fluid following routine administration, this would be an exciting advance for the treatment of meningitis in the horse. Five mature, healthy horses were used on 2 separate occasions, seven days apart, in a crossover design. Each horse was administered either cefquinome (1 mg/kg) or saline (an equivalent volume). Cerebrospinal fluid was collected via atlanto-occipital puncture under general anaesthesia 1 and 4 hours after administration of cefquinome or saline placebo. Blood samples were collected prior to, and 1 and 4 hours after, administration of cefquinome or placebo. All samples were analysed for the presence of cefquinome by a laboratory masked to the treatments administered.
Cefquinome was detectable in the cerebrospinal fluid of all horses 4 hours after intravenous administration, and in 2 horses 1 hour after administration. Cefquinome penetrates the blood-brain barrier and is therefore a potential treatment for equine meningitis. Further investigation of the pharmacokinetics and pharmacodynamics of cefquinome in the cerebrospinal fluid is warranted to establish the optimum intravenous dose.

The purpose of this study was to determine if enrofloxacin alters the pharmacokinetics of firocoxib in the horse. Firocoxib is a coxib-class nonsteroidal anti-inflammatory drug (NSAID) approved for use in horses to control pain and inflammation associated with osteoarthritis. Dosages of firocoxib are species dependent, with the recommended dose for horses being 0.1 mg/kg as an oral paste every 24 h. The main elimination pathway of firocoxib is hepatic; however, the effects of concurrent administration of drugs that may inhibit its metabolism have not been evaluated. Enrofloxacin is a synthetic antibacterial agent from the fluoroquinolone group developed for veterinary use. It is primarily used for gastrointestinal, urogenital, skin and respiratory tract infections in various animals. A well-acknowledged problem associated with fluoroquinolone usage is their effect on the metabolism of other drugs. Co-administration of multiple drugs can result in unpredictable therapeutic outcomes; often it is either diminished therapeutic efficacy or increased toxicity of one or more of the administered drugs. Various pharmacokinetic interactions between antimicrobials and NSAIDs have been described. Six healthy, adult mares were administered 0.1 mg/kg of firocoxib orally. Samples were collected by direct venipuncture of the jugular vein at 0 (control), 15, 30, and 60 min and 2, 4, 8, 12, 16, 24, 48, 72, and 96 h after administration.
After a 20-day washout period, the six horses were pretreated for 3 days with enrofloxacin 5 mg/kg intravenously every 24 h, then on the fourth day given 0.1 mg/kg of firocoxib orally. Samples were collected at 0 (control), 15, 30, and 60 min and 2, 4, 8, 12, 16, 24, 48, 72, and 96 h after administration. All samples were stored at −80°C until analysis using a validated HPLC method. The t1/2, Cmax, Tmax, AUC0-24 and AUC0-∞ after firocoxib administration were 30.66

Angiotensin converting enzyme (ACE) inhibitors improve survival and quality of life in humans and small animals with cardiovascular and renal disease. There is limited information regarding their effects in horses. The purpose of this study was to determine the pharmacokinetics of quinapril and its effects on ACE inhibition in horses. Six healthy horses were administered quinapril at 120 mg IV, 120 mg PO or 240 mg PO in a 3-way crossover design. Blood was collected at predetermined times for measurement of quinapril and quinaprilat concentrations using high-pressure liquid chromatography, as well as ACE concentrations using a radioenzymatic assay. Normally distributed data were analyzed with one-way repeated measures analysis of variance (RM-ANOVA) and non-normally distributed data were analyzed using Friedman RM-ANOVA on ranks. Significance was set at p < 0.05. No adverse effects were observed during the study period. Plasma quinapril concentrations were low and rapidly declined after IV administration. Quinaprilat concentrations were below the limit of quantification (0.1 µg/mL). ACE activity was significantly decreased from baseline at 0.5 and 1 hour after IV dosing and at all timepoints after oral dosing. Maximum % ACE inhibition was 72%, 53% and 47% with the IV, high and low oral doses, respectively. These results suggest that, despite low plasma concentrations, quinapril has sufficient oral absorption and results in inhibition of ACE in healthy horses.
Controlled studies in clinically affected horses are indicated.

This study determined the pharmacokinetic profile of firocoxib in healthy neonatal foals. Foals are more sensitive to the side effects of NSAIDs, primarily due to immature renal clearance mechanisms and ulcerogenic effects on gastric mucosa. Firocoxib, a novel, second-generation NSAID, is reported to have reduced side effects due to COX-2 selectivity. The pharmacokinetic profile of firocoxib in neonates has not been established. We hypothesized that firocoxib given PO to neonatal foals would achieve therapeutic concentrations in plasma. Seven healthy foals of mixed gender were administered 0.1 mg/kg firocoxib PO q24h for nine consecutive days, commencing at 36 h of age. Blood was collected for firocoxib analysis at 0 (dose #1 only), 0.25, 0.5, 1, 2, 4, 8, 16 and 24 h after doses #1, 5 and 9. For all other doses (2, 3, 4, 6, 7 and 8), blood was collected immediately prior to the next dose (24-h trough). Elimination samples were collected after dose #9. Plasma was stored at −80°C until analysis. Physical examinations were performed on the foals daily and body weight was obtained every two days during the sampling period. Analysis of plasma samples by liquid chromatography-mass spectrometry revealed that firocoxib was rapidly absorbed. After the initial dose, the maximum plasma concentration was reached in 30 min, minimal accumulation after repeat dosing occurred, and steady state was obtained after approximately four doses. After the final dose, plasma drug concentration decreased in a linear manner with an estimated terminal t1/2 of 11 h. Seventy-two hours after the final dose, firocoxib was not detectable (<2 ng/mL).

Erythrocytosis is reportedly a rare finding associated with hepatocellular carcinoma in horses. The purpose of this study was to determine the relative frequency of erythrocytosis and the clinicopathologic abnormalities and hepatic histopathology associated with erythrocytosis in horses with liver disease.
Ninety-seven horses aged ≥1 year with clinicopathologic or clinical signs of liver disease, a complete blood count (CBC), and hepatic histopathology were included. Information on CBC, biochemical variables, and hepatic histopathology was collected from the records. Data from horses with erythrocytosis (packed cell volume >45%) were compared to those without, using the Mann-Whitney rank sum test with significance set at p < 0.05. There were no differences between groups in white blood cell count; gamma-glutamyl transferase, sorbitol dehydrogenase, aspartate aminotransferase, and alkaline phosphatase activities; or total protein, albumin, globulin, blood urea nitrogen, or glucose concentrations. Fibrosis (64%), biliary hyperplasia (56%), inflammatory infiltrate (50%), megalocytosis (25%), degeneration (25%), necrosis (25%), cholestasis (25%), anisocytosis and anisokaryosis (11%), and lipidosis (3%) were observed in the livers of horses with erythrocytosis. Neoplasia (3%) was observed rarely. This study reports a high frequency of erythrocytosis in horses with liver disease. Erythrocytosis is associated with higher total bilirubin and serum bile acids concentrations. Common histopathologic changes include fibrosis, biliary hyperplasia, and inflammatory infiltrate. Hepatic neoplasia was rare.

This study was performed to determine if horses diagnosed with equine proliferative enteropathy (EPE) from Lawsonia intracellularis (LI) infection had long-term effects from disease, based on their sale price as yearlings and race earnings. A retrospective review of medical records of Thoroughbred horses that were treated for Lawsonia intracellularis infection between January 1, 2002 and January 31, 2008 at Hagyard Equine Medical Institute in Lexington, Kentucky was performed. Three criteria were used for inclusion in this study. First, each horse had presumptively been diagnosed with LI based on physical examination findings such as ventral edema, diarrhea, lethargy, or poor body condition.
Second, horses had hypoalbuminemia of less than 2.5 g/dL (normal reference range: 3.4-4.5 g/dL). Third, each horse had a positive fecal polymerase chain reaction (PCR) for LI, a positive serum immunoperoxidase monolayer assay (IPMA), or both. An IPMA titer greater than or equal to 60 was considered positive for disease. 116 horses met the initial criteria. 36 of the 116 horses sold at public auction as yearlings. The sale price of these horses was compared to the average sale price of all yearlings by the same sire as the affected horse (control group). 30 of the 116 horses raced in the United States. Their monetary earnings from racing were compared to the average monetary earnings of all progeny by the same sire as the affected horse (control group). Earnings of horses that were between 3 and 7 years of age at the conclusion of the study (23/30 horses) were compared to the lifetime average earnings of the stallion's progeny. Earnings from horses that were two years of age at the end of the study (7/30) were compared to the two-year-old average earnings of the stallion's progeny. Monetary earnings from all races prior to December 31, 2008 were included in the study. 12 horses both sold at public auction and raced. As well as being included in the total number of horses that sold and raced, their sale records and monetary earnings were compared to the averages for their respective sires as a separate group. This retrospective study indicated that yearling horses previously infected with LI do not sell for as much at public auction as their herdmates, but their monetary earnings from racing are not significantly different from those of other horses. These results should assist practitioners in guiding owners in determining whether treatment of horses with EPE is appropriate, and may reassure owners that, despite the poor condition of the horse during and shortly after the course of disease, the horse may still have future athletic potential.
This abstract was presented at the AAEP in December 2009.

Bronchopneumonia caused by Streptococcus equi subsp. zooepidemicus (S. zooepidemicus) is one of the most important causes of morbidity in weanling foals. Ceftiofur crystalline free acid (CCFA) is a long-acting third-generation cephalosporin antimicrobial recently approved for the treatment of bronchopneumonia associated with S. zooepidemicus in adult horses. The objective of the present study was to determine the disposition of CCFA in plasma and pulmonary epithelial lining fluid (PELF) of weanling foals. Six healthy 4- to 5-month-old weanling foals were administered a single intramuscular injection of CCFA at a dose of 6.6 mg/kg of body weight. Concentrations of desfuroylceftiofur acetamide (DCA) and related metabolites were measured by use of ultra-high performance liquid chromatography and tandem mass spectrometry. Following IM administration, median time to maximum plasma and PELF concentrations was 24 h (12-48 h). Mean (± SD) peak DCA concentration in plasma (1.44 ± 0.46 μg/mL) was significantly higher than that in PELF (0.46 ± 0.03 μg/mL). Terminal half-life of DCA in plasma (74.8 ± 20.9 h) was not significantly different from that in PELF (58.5 ± 11.4 h). Time above the therapeutic target of 0.2 μg/mL was significantly longer in plasma (185 ± 20 h) than in PELF (107 ± 31 h). Based on the results of the present study, intramuscular administration of CCFA at a dose of 6.6 mg/kg would be appropriate for the treatment of bronchopneumonia caused by S. zooepidemicus and other susceptible pathogens in weanling foals.

FGF-23 is secreted by osteocytes and osteoblasts in response to hyperphosphatemia. FGF-23 enhances phosphaturia and is postulated to have a central role in the development of secondary renal hyperparathyroidism. Hyperthyroid cats have elevated plasma phosphate and parathyroid hormone concentrations, which may in part be associated with underlying chronic kidney disease (CKD).
The aim of this study was to determine if plasma FGF-23 concentrations were associated with the presence of underlying CKD in hyperthyroid cats, and to investigate the changes in plasma FGF-23 concentrations that occur following treatment of hyperthyroidism (HTH). Hyperthyroid cats were recruited from two London-based first-opinion practices between 1999 and 2009. Cats that were azotemic at diagnosis were excluded. HTH was treated with anti-thyroid medication alone or in combination with thyroidectomy. Cats were included in the study if they had a plasma total thyroxine concentration < 40 nmol/L documented for a six-month period following commencement of treatment. Cats were classified as having azotemic CKD if they developed renal azotemia within six months of establishment of euthyroidism; otherwise cats were deemed to have normal renal function. Stored EDTA plasma samples were assayed for FGF-23 using a recently validated ELISA. The Mann-Whitney U test and the Wilcoxon signed rank test were used to compare between the groups and to assess the response to treatment, respectively. Results are reported as median [25th, 75th percentiles]. Correlations were made using Spearman's correlation coefficient. Thirty-one cats with HTH (14 azotemic and 17 non-azotemic) were included in the study. Plasma phosphate concentrations decreased following treatment in cats that did not develop azotemia (4.84 [3.91, 5.64] mg/dL vs. 3.91 [3.38, 4.37] mg/dL; n = 13, P = 0.01), whereas plasma phosphate concentrations did not change significantly following treatment in cats that did develop azotemia (4.28 [3.81, 5.64] mg/dL vs. 4.22 [3.10, 5.33] mg/dL; n = 14, P = 0.158). Plasma FGF-23 concentrations were significantly higher in cats that developed azotemia than in cats that did not, at both the pre-treatment (211.7 [176.4, 356.3] pg/mL vs. 148.3 [118.8, 274.9] pg/mL; P = 0.039) and post-treatment (514.0 [250.2, 800.0] pg/mL vs. 195.1 [160.7, 287.3] pg/mL; P = 0.001) timepoints.
Plasma FGF-23 concentrations increased following treatment in both the azotemic (P = 0.004) and non-azotemic (P = 0.025) groups. Plasma FGF-23 and plasma phosphate concentrations were not correlated at baseline (rs = 0.189, P = 0.335) or following treatment (rs = 0.136, P = 0.472). Plasma FGF-23 concentrations were higher in pre-azotemic cats than in non-azotemic cats and increased following treatment of HTH. The reason that FGF-23 concentrations increased following treatment, particularly in the face of decreasing plasma phosphate concentrations in cats that remained non-azotemic, is unclear but may be related to the decline in glomerular filtration rate.

Hyperthyroidism is a disorder resulting from the excessive production and secretion of T4 and T3 by the thyroid gland. Although the disorder and its pathological lesions have been well studied and described, the cause remains elusive. Whole blood and solid tissue samples from non-diseased, severe-disease, and mild-disease cats, categorized on the basis of T4 levels and thyroid histology, were used in this study. Whole blood samples from 29 non-disease cats, 28 severe-disease cats, and 17 mild-disease cats, as well as solid thyroid tissue samples from 30 non-disease cats, 31 severe-disease cats, and 27 mild-disease cats, were collected and processed. The resulting total RNA samples were used for GeneChip analysis using our custom feline gene chip designed by Affymetrix. Data analysis was performed using the Partek® GS software for gene expression data. The Robust Multichip Average algorithm was used for background adjustment, normalization, and probe-level summarization of the raw data. ANOVA analysis was performed to find significantly differentially expressed genes with a minimal false discovery rate control of 0.1 and a fold change of 1.3 in each direction. During the mild disease state, pathways associated with DNA damage and apoptosis are most prominent.
At later stages, when the histopathological disease is more severe, pathways associated with TGF-beta signaling, cell adhesion, and extracellular matrix remodeling take more prominence in addition to those already mentioned. The analysis of this unique data set, generated with our proprietary GeneChip, revealed molecular mechanisms associated with the transition from non-disease to mild disease to severe disease, in the thyroid tissue as well as the blood. These mechanisms could provide insights into the causes of the disease and identify potential new therapeutic and diagnostic targets.

Although it is well established that concurrent chronic kidney disease (CKD) develops in about 30% of hyperthyroid cats, no one has reported the use of the IRIS staging system for CKD before and after treatment of these hyperthyroid cats. The purpose of this study was to compare the effects of treatment in hyperthyroid cats with known stage 1 and 2 CKD, in order to determine the effects that restoring euthyroidism or inducing hypothyroidism has on the IRIS stage in these cats. We evaluated 36 hyperthyroid cats (median age, 14 years) in this study. One day prior to treatment, serum T4 concentration, serum chemistry analysis, complete urinalysis, and urine protein-to-creatinine ratio (UPC) were measured. All cats were evaluated with the same parameters again 3 months after treatment with ¹³¹I. Prior to treatment, 26 (72%) of the 36 cats had no evidence of azotemia (serum creatinine < 1.6 mg/dL), whereas 10 cats (28%) had stage 2 CKD (serum creatinine, 1.6-2.8 mg/dL). In the 36 cats, IRIS staging revealed proteinuria in 33 cats (92%): 21 with borderline proteinuria (UPC, 0.2-0.4) and 12 with overt proteinuria (UPC > 0.4). Hyperthyroidism was cured in all 36 cats (median post-treatment T4, 1.3 μg/dL). All cats had a good response to treatment; there were no signs of CKD except for polyuria and polydipsia in some cats.
A significant (P < 0.001) increase in median values for both serum urea nitrogen (26 mg/dL to 31 mg/dL) and creatinine (1.1 to 1.7 mg/dL) occurred after treatment. Nine of the 26 cats (34.6%) classified as nonazotemic or IRIS stage 1 prior to ¹³¹I progressed to stage 2 CKD after ¹³¹I. All 10 cats with stage 2 CKD before treatment remained azotemic after ¹³¹I, with 5 cats remaining in stage 2 CKD and 4 cats progressing to stage 3 CKD (serum creatinine, 3.1-3.7 mg/dL). There was a significant inverse relationship (P = 0.002) between pretreatment urine specific gravity (USG) and post-treatment serum creatinine in the 36 cats. Of the 19 cats with post-treatment serum creatinine values > 1.5 mg/dL (stage 2 to 3 CKD), 15 (79%) had pretreatment USG < 1.040. In contrast, of the 17 cats with post-treatment serum creatinine values < 1.5 mg/dL, only 3 (18%) had pretreatment USG < 1.040. A significant (P < 0.001) decrease in median UPC from 0.3 to 0.1 occurred after treatment, but there was no relationship between degree of proteinuria and IRIS stage in these cats. Two cats developed iatrogenic hypothyroidism after ¹³¹I, diagnosed by finding low serum T4 and high cTSH concentrations. Both hypothyroid cats had progressed from stage 1 before treatment to stage 2 and 3 CKD, respectively, after ¹³¹I; after thyroxine replacement, serum creatinine decreased to near-pretreatment concentrations in both cats. Conclusions: (1) IRIS stage 2 CKD is common in untreated hyperthyroid cats. (2) Progression to the next higher IRIS stage is common after treatment, but most cats will remain relatively asymptomatic for CKD. (3) USG may be helpful in predicting which cats' IRIS stage will progress after ¹³¹I. (4) Iatrogenic hypothyroidism worsens azotemia, an effect that appears reversible with replacement therapy.
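The creatinine cutoffs used for staging in this abstract can be expressed as a small classifier. A minimal sketch in Python: the abstract itself only cites the stage 1/2 cutoff of 1.6 mg/dL, the stage 2 band of 1.6-2.8 mg/dL, and stage 3 values up to 3.7 mg/dL; the 5.0 mg/dL stage 3/4 boundary is an assumption taken from the published feline IRIS guidelines, not from the abstract.

```python
def iris_stage(creatinine_mg_dl: float) -> int:
    """Map serum creatinine (mg/dL) to a feline IRIS CKD stage.

    Cutoffs: stage 1 < 1.6, stage 2 1.6-2.8, stage 3 2.9-5.0,
    stage 4 > 5.0 (the last boundary is assumed from IRIS guidelines).
    """
    if creatinine_mg_dl < 1.6:
        return 1
    if creatinine_mg_dl <= 2.8:
        return 2
    if creatinine_mg_dl <= 5.0:
        return 3
    return 4

# The study's median creatinine rose from 1.1 mg/dL (stage 1)
# to 1.7 mg/dL (stage 2) after radioiodine treatment.
```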
Home blood glucose monitoring (HBGM) of diabetic pets is likely to result in superior glycaemic control, minimizing episodes and impact of dangerous hypoglycaemia and reducing costs. Nevertheless, it has proven difficult to objectively establish a clear benefit of HBGM using biological parameters (clinical signs, blood glucose, fructosamine). The current study aimed to assess the impact of HBGM on owner-perceived quality of life (QoL) aspects of diabetes mellitus (DM) treatment, using the recently validated psychometric tool DIAQoL-pet. Owners of insulin-treated diabetic cats were recruited to complete the 29-item tool, evaluating areas affecting the cat's and owner's QoL, including worry about the pet's DM, hypoglycaemia, costs, and the owner's desire for autonomous control over the pet's DM. Item-weighted impact scores (IWIS), reflecting frequency and importance ratings of each item, were calculated, as well as average weighted impact scores (AWIS; average IWIS of all items) as an overall measure of diabetes-dependent QoL. Frequencies, IWIS, and AWIS were compared between owners practising HBGM and those who did not, using the Mann-Whitney U test (significance P < 0.05). Two hundred and eleven owners of insulin-treated diabetic cats completed the DIAQoL-pet; 161 owners practised HBGM, whereas the remaining 50 did not practise any form of home monitoring (including urine glucose). IWIS for 'excessive drinking' and 'owner wanting more control' were significantly different between the HBGM group (mean ± standard deviation: −2.01 ± 2.4 and −5.07 ± 4.8) and the non-HBGM group (−3.36 ± 3.8 and −1.86 ± 3.2). There was no significant difference between the groups with regard to the IWIS for other items, including 'worry about hypoglycaemia' or 'worry about pet's DM'. Polydipsia was reported significantly more frequently in the non-HBGM group, and this was the reason for the difference between groups in this item's IWIS, as it was considered of equal importance.
Frequency and IWIS of reported occurrence of hypoglycaemia signs were not significantly different. AWIS for the two groups were not significantly different (HBGM: −1.85 ± 1.3; non-HBGM: −1.87 ± 1.1). The current study suggests that HBGM is predominantly practised by owners who desire more autonomous control over their cat's DM. The frequency of polydipsia was lower in the HBGM group, perhaps suggesting superior control. However, HBGM did not detectably affect the impact of the majority of QoL items, nor the frequency of hypoglycaemic episodes. Overall diabetes-dependent QoL of diabetic cat and owner, as measured by DIAQoL-pet, was unaffected by HBGM. These data argue for the use of HBGM in selected pet-owner combinations rather than as part of a practice's standard DM management protocol, although further studies are indicated.

Insulin resistance is associated with impaired activation of the insulin signaling pathway in peripheral tissues such as skeletal muscle and visceral and subcutaneous (SC) adipose tissue. High plasma glucose, fatty acid, and endotoxin levels are three major causes of insulin resistance in feline and human obesity and in type 2 diabetes mellitus. However, the mechanisms by which these factors influence insulin action are still unclear. Therefore, our aim was to investigate the tissue-specific expression of crucial mediators of insulin action, such as insulin receptor substrate 1 (IRS1), the serine/threonine protein kinase B (PKB/Akt), and the principal insulin-dependent glucose transporter protein (GLUT4), in feline models of hyperglycemia, hyperlipidemia, and subacute endotoxemia.
Healthy cats were infused through the jugular vein with glucose (n = 5), lipids (n = 6), or lipopolysaccharide (LPS; n = 5) for 10 days to clamp their blood concentrations at the approximate levels found in untreated feline diabetes (glucose: 25-30 mmol/L; triglycerides: 3-7 mmol/L) or to induce a systemic low-grade inflammation (LPS; rectal temperature: 39.2-40.5°C), respectively. Healthy control cats were infused with saline (n = 10). On day 10, specimens were collected from skeletal muscle and visceral and SC fat and processed for IRS1 mRNA expression and for total and phosphorylated PKB/Akt and GLUT4 protein expression. Gene transcripts of IRS1 were not different between the groups. Compared to controls, skeletal muscle PKB/Akt phosphorylation was 91% lower in cats infused with glucose (P < 0.01); lipid-infused cats showed a trend toward a decrease in PKB/Akt phosphorylation (31% lower than saline) and had decreased GLUT4 expression (P < 0.05) in muscle. Total (P < 0.01) and phosphorylated (P < 0.05) PKB/Akt protein expression were decreased in the SC adipose tissue of LPS-infused cats compared to controls. In these cats, phosphorylation of PKB/Akt protein was also decreased in visceral fat (P < 0.01). Sustained hyperglycemia and, to a lesser extent, hyperlipidemia impaired insulin signaling and glucose transport pathways primarily in skeletal muscle; endotoxemia reduced insulin sensitivity mainly in adipose tissues. Thus, the development of insulin resistance in response to hyperglycemia, hyperlipidemia, or endotoxemia might be affected by tissue-specific mechanisms in cats.

Used separately, single photon emission computed tomography (SPECT) and computed tomography (CT) both lack sensitivity and are additionally hampered by a poor anatomical localization capacity and a lack of specificity, respectively. These drawbacks suggest an interest in the fusion of images obtained by the two techniques. The aim of this study was to test SPECT/CT fusion performance in dogs with insulinoma.
Inclusion criteria were: (1) a biological diagnosis of insulinoma; (2) examination by high-resolution CT scan and ¹¹¹In-pentetreotide SPECT followed by SPECT/CT fusion; (3) a surgical or post mortem examination completed by histopathological analysis. SPECT examinations showing abnormal foci and CT scans showing pancreas, lymph node (LN), or liver abnormalities were considered positive. In case of double positivity, presence (IMP+) or absence (IMP−) of superimposition of the abnormal images was noted. Ten dogs were included. In 2/10 dogs, superimposition of abnormalities could not be tested: the CT scan detected 3 abnormal images [2 pancreatic nodules (PN), 1 enlarged LN (ELN)] while SPECT failed to show any abnormal uptake. Both dogs became euglycemic after removal of the PN and LN identified by CT scan. In 6/10, all abnormal images were classified as IMP+ [6 PN, 1 ELN, and 1 diffuse hepatic infiltration (DHI)]. Surgery performed on 5/6 resulted in euglycemia in 4; 1 dog remained hypoglycemic after partial removal of 1 PN. PN localization and DHI were confirmed after necropsy in the 6th dog. In 2/10 dogs, IMP+ and IMP− images were both recorded. In 1 dog, a DHI was classified as IMP+ but PN localization was IMP−: localized in the left lobe by CT scan and in the corpus by SPECT, the latter localization being confirmed after necropsy. In the other dog, PN localization was IMP+ but a diffuse SPECT signal superimposed on a liver considered normal on CT scan was noted; hepatic biopsy confirmed the SPECT results. This study confirms an imperfect sensitivity of both CT scan and SPECT. It confirms that CT scan can be associated with nonspecific abnormal images. Subject to confirmation in a larger cohort of dogs, it indicates that IMP+ images provide specific detection and accurate localization of the primary lesions and metastases of canine insulinomas.
The majority of dogs with primary hypoadrenocorticism (PH) reveal clinical and laboratory abnormalities of gluco- and mineralocorticoid deficiency. In some of them, sodium and potassium levels are normal, a phenomenon currently called atypical Addison's. It has been postulated that in those cases adrenal destruction is confined to the zona fasciculata/reticularis, resulting in isolated glucocorticoid deficiency. However, there are no histological studies confirming a normal zona glomerulosa, and in most reported cases diagnosis was based solely on low post-ACTH cortisol levels. The aim of the study was to evaluate aldosterone (aldo) levels in dogs with PH with and without electrolyte abnormalities. Seventy dogs with newly diagnosed PH were included. Aldo concentrations (RIA, Coat-A-Count®, Siemens) were measured before and 60 min after administration of 250 μg synthetic ACTH (Synacthen®, Novartis) IV. Results were compared to those of 19 healthy dogs and 17 dogs with diseases mimicking PH. To confirm that peak concentrations were not missed, aldo was additionally measured 15, 30, and 45 min after ACTH in 19 dogs (5 with PH, 14 with PH-mimicking diseases). Results were analysed by means of non-parametric statistical methods (P < 0.05). Post-ACTH aldo was significantly lower in dogs with PH (0-253 pg/mL, median 0 pg/mL) than in healthy dogs (46-602 pg/mL, median 187 pg/mL) and in dogs with PH-mimicking diseases (0-639 pg/mL, median 155 pg/mL). Low post-ACTH aldo was found in 67/70 dogs with PH; in 64/67 of them, levels were below the detection limit of the assay. Normal sodium and potassium levels were found in 5/67 dogs (7%), 6/67 dogs (9%) had hyponatremia and normal potassium, and 56/67 dogs (84%) had hyponatremia and hyperkalemia. Electrolyte abnormalities ranged from mild to severe. There was no correlation between post-ACTH aldo and sodium, and a weak correlation between post-ACTH aldo and potassium (r = −0.28).
Aldo concentrations were not different 30, 45, and 60 min after ACTH. The results demonstrate that aldo levels are low in most dogs with PH, independent of the degree of electrolyte abnormalities. This implies that all three zones of the adrenal cortex are compromised and that there are mechanisms which allow maintenance of a normal electrolyte balance without aldo.

Definitive diagnosis of canine hypoadrenocorticism (HA) is based on inadequate cortisol secretion following adrenocorticotropic hormone (ACTH) administration. An abnormal serum sodium-to-potassium (Na:K) ratio can be used to determine whether an ACTH stimulation test is warranted. The aim of this study was to examine the utility of combining the Na:K ratio with white blood cell counts to determine whether an ACTH stimulation test is warranted. A retrospective review of medical records of dogs examined between 2005 and 2009 was performed. 53 dogs diagnosed with HA and 110 control dogs, in which a diagnosis of HA was excluded during the study period, were included. Inclusion criteria for all 163 dogs were hospitalization with intravenous fluid therapy, a complete blood count, and serum Na and K measurements at the time of initial examination. Dogs were included in the HA group if they also had pre- and post-ACTH stimulation serum cortisol concentrations ≤ 1.0 μg/dL. Dogs were included in the control group if they had a resting or post-ACTH stimulation serum cortisol concentration > 2.0 μg/dL. Exclusion criteria were recent administration of glucocorticoids, prior treatment of hyperadrenocorticism, or serum cortisol concentration > 1.0 μg/dL but ≤ 2.0 μg/dL. Continuous variables were compared between groups using the Mann-Whitney U test. Receiver operating characteristic (ROC) curves were produced to assess the sensitivity and specificity of detecting HA with various cutoffs for each variable. Data are presented with 95% confidence intervals (CI), and statistical significance was defined as P < 0.05.
The Na:K ratio, neutrophil count, and neutrophil:lymphocyte ratio were significantly lower in dogs with HA than in dogs without HA (P < 0.01 for each). Lymphocyte and eosinophil counts were significantly higher in dogs with HA compared to dogs without HA (P < 0.001 for each). The areas under the curve by ROC analysis were largest for the Na:K ratio (0.87, CI: 0.81-0.92) and lymphocyte count (0.85, CI: 0.78-0.90). A Na:K ratio ≤ 39.6 was 100% sensitive (CI: 93-100%) but only 15% specific (CI: 9-23%) for detecting HA. A lymphocyte count ≥ 0.77 × 10³ cells/μL was 100% sensitive (CI: 93-100%) and 35% specific (CI: 27-45%). Conversely, a Na:K ratio ≤ 20.1 was 51% sensitive (CI: 37-65%) but 100% specific (CI: 97-100%), and a lymphocyte count ≥ 6.1 × 10³ cells/μL was 9% sensitive (CI: 3-20%) but 100% specific (CI: 97-100%). A Na:K ratio ≤ 35.5 was 96% sensitive (CI: 87-100%) and 35% specific (CI: 26-44%) for detection of HA, and a lymphocyte count ≥ 0.79 × 10³ cells/μL was 98% sensitive (CI: 90-100%) and 36% specific (CI: 27-46%) for detection of HA. A combination of this Na:K ratio (≤ 35.5) and lymphocyte count (≥ 0.79 × 10³ cells/μL) was 96% sensitive (CI: 87-100%) and 55% specific (CI: 47-66%) for detection of HA. These results indicate that the combination of lymphocyte count and Na:K ratio results in a better screening test for HA than use of the Na:K ratio alone.

Pheochromocytoma is a malignant, catecholamine-producing, adrenomedullary tumor. Clinical signs resulting from excessive catecholamine secretion are typically non-specific, making differentiation from other adrenal tumors a challenge. Elevated plasma concentrations of the catecholamine breakdown products metanephrine (MN) and normetanephrine (NMN) are used to identify pheochromocytoma in humans. This study tested the hypothesis that plasma metanephrine concentrations are greater in dogs with pheochromocytoma than in dogs with other adrenal neoplasms, healthy dogs, and dogs with non-adrenal illness.
EDTA plasma was collected from healthy dogs and from unwell, hospitalized dogs with non-adrenal illness, pheochromocytoma, and cortical tumors between April 2007 and October 2010. Samples were stored at −80°C before measurement of free MN and NMN concentrations using high pressure liquid chromatography at the central laboratory for clinical chemistry at the University of Groningen (33 samples) or the Mayo Clinic, Rochester, Minnesota (7 samples). Kruskal-Wallis tests followed by Dunn's multiple comparison analysis were used to compare results between groups. Significance was set at P < 0.05. Results are reported as median [range]. Eight dogs with pheochromocytoma, 5 healthy dogs, 15 dogs with non-adrenal illness, and 12 dogs with cortical tumors were sampled. Pheochromocytoma was diagnosed histologically (7 dogs) or cytologically (1 dog). Cortical tumors were diagnosed histologically (9 dogs) or by response to trilostane treatment after obtaining consistent endocrine test results (3 dogs).

Occult hyperadrenocorticism (HAC) has been theorized to exist, in which excess adrenal sex hormone secretion induces the clinical signs and laboratory changes associated with classic HAC. However, the ability of sex hormones to cause such alterations has never been closely evaluated. If sex hormones can cause a syndrome similar to classic HAC, they should be able to induce expression of classic glucocorticoid-induced genes. The purpose of the study was to determine if in vitro expression of the gene for corticosteroid-induced ALP (ciALP) could be induced by clinically relevant concentrations of cortisol and of the sex hormones believed to cause occult HAC. Canine hepatocytes were purchased from a commercial source (CellzDirect or InVitro) in 6-well plates. Upon arrival (3-4 plates per shipment), the cells were allowed to recover in general media per supplier recommendations.
After 4 hrs, media was changed to William's E media (− L-glutamine) containing concentrations of cortisol or sex hormones that have been documented in the literature in dogs with HAC or with purported occult HAC. Each plate was treated with a different hormone (cortisol, 17-hydroxyprogesterone [17OHP], progesterone, estradiol, or androstenedione), and each well contained a different concentration (starting with no hormone added as a negative control) to evaluate a dose response. Media was changed daily. After 5 days of hormone exposure, RNA was extracted. Reverse transcription was performed, and the product was used for quantitative PCR for ciALP and beta-actin (Roche LightCycler) using a gene-specific fluorescent probe for detection. Standard curves were created for each gene. All samples and standards were run in duplicate. Using the LightCycler software (vers. 4.0), ciALP expression was normalized to that of beta-actin. Fold change in expression was determined relative to the negative control. Each sex hormone was used to treat 3 plates; one plate in each shipment was treated with cortisol as a positive control. For cortisol, a dose response was seen in expression of the ciALP gene. Compared to no cortisol, 10, 50, 100, 150, 250, and 500 nmol cortisol increased expression 2.8-, 4.6-, 7.1-, 9.3-, 9.9-, and 9.8-fold, respectively. A 2-fold increase is considered significant (J. Vandesompele et al., Genome Biol 2002). Expression of ciALP was not significantly induced in response to any concentration of 17OHP (10 nM maximum), progesterone (5 nM maximum), estradiol (400 pM maximum), or androstenedione (100 nM maximum). We conclude that in vitro these sex hormones do not induce expression of the ciALP gene, which is classically induced by cortisol in vivo; indeed, elevated serum ciALP activity is a hallmark of classic HAC. Thus, the ability of the sex hormones to induce the gene in vivo must be questioned and evaluated.
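The normalization described (ciALP quantity from its standard curve, divided by the beta-actin quantity, then expressed relative to the untreated well) reduces to a simple ratio of ratios. A minimal sketch, with invented signal values for illustration (not data from the study):

```python
def fold_change(target: float, reference: float,
                target_ctrl: float, reference_ctrl: float) -> float:
    """Reference-gene-normalized fold change: (target/reference) in a
    treated well divided by (target/reference) in the negative control."""
    return (target / reference) / (target_ctrl / reference_ctrl)

# Illustrative numbers: a treated well with 2.8x the ciALP signal at an
# equal beta-actin signal reproduces the 2.8-fold induction the abstract
# reports for 10 nmol cortisol.
fc = fold_change(target=28.0, reference=10.0,
                 target_ctrl=10.0, reference_ctrl=10.0)
significant = fc >= 2.0  # the 2-fold threshold cited from Vandesompele et al.
```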
Measurement of sex hormones has been advocated as an adjunct means of diagnosing typical hyperadrenocorticism (HAC), i.e., disease due to excess cortisol secretion, as well as of diagnosing atypical HAC, i.e., disease due to excess adrenal sex hormone secretion. However, measurements in either setting have not been widely studied. Therefore, our objectives were: (1) to determine the sensitivity of 17-hydroxyprogesterone (17OHP) and estradiol concentrations pre- and post-ACTH for diagnosis of typical HAC; and (2) to determine the specificity of 17OHP and estradiol concentrations pre- and post-ACTH for diagnosis of occult HAC. Dogs that had PDH (n = 12), dogs that were suspected to have HAC but proven not to (non-adrenal illness [NAI], n = 89), and healthy dogs (n = 20, used to establish reference ranges [RR]) were enrolled. ACTH stimulation tests were performed (5 mcg/kg cosyntropin IV); blood samples were drawn pre and 60 min post; 17OHP and estradiol were measured by previously validated radioimmunoassays. A Kruskal-Wallis rank sum test was used to compare values between the groups. Significance was set at P < 0.05. For basal and ACTH-stimulated 17OHP concentrations, the RR were determined to be 0.03-0.6 ng/mL (mean ± 2 SD; observed range 0.03-0.55) and 0.3-2.2 ng/mL (range 0.31-1.95), respectively. In PDH dogs, 3 and 7 had basal and post-ACTH 17OHP concentrations above the RR, respectively; in the NAI group, 23 and 25 dogs had concentrations above the RR, respectively. Thus, the sensitivity of basal and post-ACTH 17OHP measurement for diagnosis of HAC is 25% and 58%, respectively; specificity of diagnosis is 74% and 72%, respectively. Post-ACTH 17OHP concentration was significantly different between groups. For basal and stimulated estradiol concentrations, the RR were determined to be 81-180 pg/mL (range 89-195) and 71-170 pg/mL (range 74-151), respectively.
For both basal and stimulated estradiol, 4 PDH dogs (n = 9) had concentrations above the RR; of those with NAI (n = 80), 14 and 15 had concentrations above the RR, respectively. Thus, the sensitivity of estradiol measurement for diagnosis of HAC is 44% for both pre- and post-ACTH. Specificity of estradiol for diagnosis of HAC is 83% and 81% for pre- and post-ACTH, respectively. Overall, 23 dogs with NAI had at least one elevated estradiol concentration (total specificity 70%). Post-ACTH estradiol concentration was not significantly different between groups. We conclude that use of 17OHP and estradiol concentrations for diagnosis of HAC can be problematic. Sensitivity and specificity are relatively low, potentially leading to misdiagnoses.

Diabetes mellitus is one of the most common feline endocrinopathies and is considered to have a similar pathophysiological basis to human type 2 diabetes. Several studies have identified risk factors for development of diabetes mellitus in cats, which include age, obesity, inappropriate diet, and physical inactivity. However, to date, no specific genetic risk factors have been identified. Genome-wide association studies in humans have identified several genes that predispose to obesity and/or diabetes mellitus, one of which is the melanocortin 4 receptor (MC4R) gene. The aim of the current study was to identify polymorphisms (SNPs) in the feline MC4R gene and to use these to perform a case:control study to determine whether these candidate gene SNPs were associated with diabetes mellitus in cats. Genomic DNA from 10 cats (6 domestic shorthair [DSH], 4 Burmese) was initially analysed by PCR and direct sequencing using felMC4R-specific primers, which identified a missense mutation (MC4R:c.92C>T) in the region encoding the extracellular domain of the receptor protein in DSH cats only. One hundred and nineteen DSH cats were subsequently recruited into the case:control study.
Fifty-nine cats were obese diabetics (29 male, 30 female), mean age 11.8 years (range 6-18 y), mean weight 6.68 kg (range 5.15-10 kg). Sixty lean cats were used as controls (30 male, 30 female), mean age 13.81 years (range 9-19 y), mean weight 3.99 kg (range 2.56-5.68 kg). The T to C base change alters a restriction site in the sequence recognized by the enzyme BstOI, such that DNA from cats with the mutant (C) allele can be cut, whereas that from the wild-type (T) allele cannot. Primers were designed flanking the mutation to allow PCR amplification of this region of MC4R from genomic DNA obtained from EDTA blood. The PCR products were purified and subjected to restriction fragment length polymorphism (RFLP) analysis. BstOI digestion products were then analysed by agarose gel electrophoresis. Of the 59 diabetic cats, 32 (54%) were homozygous for the mutation (CC), compared to 21 (35%) of 60 control cats. Statistical analysis (two-tailed Fisher's exact test) revealed that this difference between groups was statistically significant (P = 0.0431). In conclusion, this pilot study has identified a missense mutation in the coding sequence of MC4R. This could be an important predisposing factor for development of diabetes and/or obesity in DSH cats. Polymorphisms in a similar region of human MC4R predispose to obesity, which in turn is a major risk factor for type 2 diabetes.

Hyperadrenocorticism (HAC) is one of the most common endocrine disorders of dogs. The two most effective medical treatments are trilostane (Vetoryl®) and mitotane (Lysodren®). Previous studies evaluating the effect of treatment on aldosterone secretion measured the hormone at 60 min post-ACTH administration. However, the optimal sampling time would be at the time of maximal secretion, which occurs 30 minutes after the 5 μg/kg dose commonly used for the test (Carlson et al., JVIM, 2010). Thus, the true effect of either medication on aldosterone secretory capacity is unknown.
Our objectives were: (1) to assess and compare the effect of treatment with trilostane and mitotane in dogs with pituitary-dependent HAC (PDH) on aldosterone secretory reserve at 30 min post-ACTH stimulation, and (2) to determine whether changes in aldosterone concentration at that time correlate with changes in serum sodium and potassium concentrations. Forty-six dogs being treated for PDH with either mitotane (n = 30) or trilostane (n = 16) were enrolled. The dogs could have been treated for any length of time. All had ACTH stimulation tests performed (5 µg/kg cosyntropin IV); blood samples were drawn before and at 30 and 60 min post-ACTH for monitoring of cortisol and aldosterone concentrations using previously validated radioimmunoassays. Ten historical normal controls were also included. Serum sodium and potassium concentrations were measured in the basal samples. A Kruskal–Wallis rank sum test was used to compare values between normal dogs and those treated with mitotane or trilostane. Linear regression analysis was used to determine whether a correlation existed between electrolyte and aldosterone concentrations or between cortisol and aldosterone concentrations. Significance was set at p < 0.05. ACTH-stimulated aldosterone concentrations in mitotane-treated, but not trilostane-treated, dogs were significantly lower than those in normal dogs at both the 30 and 60 min time points. No difference was detected between aldosterone concentrations at 30 and 60 min after ACTH injection in either treatment group. A positive correlation existed between the 60-min cortisol and 30-min aldosterone concentrations in the trilostane-treated group (r = 0.813), i.e., the peak post-ACTH concentration of each hormone, but not in dogs treated with mitotane. Basal serum sodium and potassium concentrations were not correlated with the basal aldosterone concentration in either treatment group.
In conclusion, treatment with mitotane resulted in decreased aldosterone secretory reserve, but this did not correlate with hyperkalemia or hyponatremia. Measurement of aldosterone concentrations is not predictive of electrolyte concentrations. Previously presented at the Auburn University Phi Zeta Research Emphasis Day, November 10, 2010.

Antioxidant depletion is documented in humans with hyperthyroidism and is reversible with treatment. In addition, antioxidant depletion has been shown to increase the risk of methimazole toxicity in rats. The primary aim of this study was to determine whether deficiencies in glutathione (GSH), ascorbate (AA), or vitamin E, along with increases in urinary 8-isoprostanes, were present in hyperthyroid cats and were reversible after radioiodine treatment. A secondary aim was to determine whether antioxidant abnormalities were associated with a prior history of methimazole toxicity. This is an ongoing prospective, controlled, observational study. Otherwise healthy client-owned hyperthyroid cats presenting for radioiodine therapy (n = 26 to date) and healthy age-matched controls (n = 32 to date) were recruited. All cats were screened with a CBC, biochemical panel, urinalysis, and T4, as well as red blood cell (RBC) GSH, plasma AA, plasma vitamin E, and urinary 8-isoprostanes. Hyperthyroid cats were re-evaluated 2 months after radioiodine treatment. Unlike in humans, median blood antioxidant concentrations were not significantly different in hyperthyroid cats (GSH 1.5 mM; AA 11.4 mM; vitamin E 19 µg/mL) compared to controls (GSH 1.4 mM; AA 12.5 mM; vitamin E 17 µg/mL). Results for urinary isoprostanes are pending, and associations with methimazole toxicity will be investigated after full recruitment. RBC GSH concentrations did increase significantly (to 1.6 mM; p = 0.019) after radioiodine treatment. However, this modest change is unlikely to be clinically significant.
Preliminary data do not indicate clinically significant blood GSH, ascorbate, or vitamin E deficiencies in hyperthyroid cats.

With appropriate insulin therapy and a low-carbohydrate diet, up to 90% of newly diagnosed diabetic cats are eventually able to maintain euglycemia without insulin administration; these cats are considered to have achieved remission. There are currently no published data reporting the glucose tolerance status of cats classified as being in remission, and it is unknown whether these cats are truly in diabetic remission or should instead be classified as non-insulin-dependent diabetics, or as having impaired glucose tolerance and/or impaired fasting blood glucose. The aim of this study was to determine the fasting blood glucose concentrations and glucose tolerance status of cats in remission. This was a prospective study in a feline-only clinic. For inclusion, diabetic cats had to have achieved remission through insulin therapy, with insulin withheld for a minimum of two weeks. Five diabetic cats in remission and five matched non-diabetic cats were enrolled. Blood samples were obtained via the ear vein or, where the cat's temperament precluded this, from the jugular vein. Glucose concentration was measured using a meter calibrated for feline blood (Abbott AlphaTRAK). A simplified glucose tolerance test (GTT) was performed after food had been withheld for 24 hours. A 22G catheter was placed in a cephalic vein three hours before the GTT was commenced, to minimize the effects of stress on blood glucose concentration. Blood glucose concentration was measured at time 0, and then a 1 g/kg dose of glucose was administered slowly via the intravenous catheter. Further blood glucose measurements were made at 2 hours and then hourly until glucose had returned to < 117 mg/dL (< 6.5 mmol/L). In the control group, all cats had a fasting blood glucose below 117 mg/dL, and following glucose administration, glucose had returned to < 117 mg/dL by 3 hours.
Fasting blood glucose in the remission group was < 126 mg/dL (7 mmol/L) in all cats except one, which had a fasting blood glucose of 135 mg/dL (7.5 mmol/L). Following glucose administration, all five cats in remission had blood glucose above 117 mg/dL (6.5 mmol/L) at three hours; four were < 117 mg/dL at four hours, and one returned to < 117 mg/dL at five hours. The cat with impaired fasting glucose subsequently became diabetic after steroid administration. The results of this study show that these cats, while no longer diabetic, have mildly impaired glucose tolerance compared to non-diabetic cats, and a minority have impaired fasting glucose.

The objective of this study was to determine the role of iodine restriction in the nutritional management of cats with naturally occurring hyperthyroidism. Five domestic shorthair cats ranging in age from 8 to 17 years were confirmed to have hyperthyroidism based on persistently increased serum total thyroxine concentrations (TT4), a palpable thyroid nodule, and weight loss. Serum TT4 concentrations ranged from 55 to 146 nmol/L (reference range 10–55 nmol/L). The cats were then fed a low-iodine food (0.47 ppm iodine DMB, as measured by epiboron neutron activation). Serum TT4 concentrations were measured every 3 weeks. Biochemistry parameters were also evaluated at weeks 0, 6, and 9. At 9 weeks, serum TT4 concentrations had decreased in all cats, with 4 of 5 cats (80%) being euthyroid (mean 48 nmol/L; range 41–54 nmol/L). The remaining hyperthyroid cat had an initial serum TT4 of 146 nmol/L, which decreased to 83 nmol/L after being fed the iodine-restricted food. The mean decrease in TT4 for all 5 cats was 26 nmol/L (range 8–63 nmol/L). Renal parameters remained stable in all 5 cats. These 5 cats, along with 4 additional newly diagnosed hyperthyroid cats, were transitioned to a similar food that contained less iodine (0.28 ppm DMB). Baseline serum TT4 concentrations in the 4 new cats ranged from 55 to 73 nmol/L.
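The paired glucose thresholds quoted above (117 mg/dL = 6.5 mmol/L, 126 = 7.0, 135 = 7.5) reflect the standard unit conversion for glucose: molar mass ≈ 180.16 g/mol, so 1 mmol/L ≈ 18.016 mg/dL. A small sketch of that conversion (our helper names, not the study's):

```python
MG_PER_MMOL = 18.016  # glucose molar mass 180.16 g/mol → 18.016 mg/dL per mmol/L

def mmol_to_mgdl(mmol_per_l):
    """Convert a glucose concentration from mmol/L to mg/dL."""
    return mmol_per_l * MG_PER_MMOL

def mgdl_to_mmol(mg_per_dl):
    """Convert a glucose concentration from mg/dL to mmol/L."""
    return mg_per_dl / MG_PER_MMOL

# The three thresholds used in the abstract:
for mmol in (6.5, 7.0, 7.5):
    print(f"{mmol} mmol/L ≈ {mmol_to_mgdl(mmol):.0f} mg/dL")
```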
Serum TT4 and other biochemical parameters were monitored every 3 weeks for 9 weeks, and then every 4 weeks for an additional 8 weeks. On the 0.28 ppm iodine food, the four new cats became euthyroid, with a mean TT4 concentration of 41 nmol/L (range 29–50 nmol/L). The 4 euthyroid cats from the earlier feeding study had a further decrease in TT4 concentration (mean TT4 = 39 nmol/L, range 38–54 nmol/L). The single non-euthyroid cat from the first study had a serum TT4 concentration of 61 nmol/L, a decrease from its baseline concentration of 83 nmol/L. The average decrease in serum TT4 for all 9 cats was 20 nmol/L (range 2–35 nmol/L). Finally, 8 of the 9 cats were fed a third iodine-restricted food (0.17 ppm DMB), along with one other newly diagnosed hyperthyroid cat (serum TT4 79 nmol/L), and evaluated every 4 weeks. All 9 cats in this evaluation were euthyroid (mean TT4 33 nmol/L; range 23–50 nmol/L). This result included the cat whose serum TT4 had remained in the hyperthyroid range in the first two evaluations. The average decrease in TT4 was 13 nmol/L (range 0–43 nmol/L). Biochemical measures of renal function remained stable, and no other biochemical abnormalities were observed. In summary, the results of these three feeding studies demonstrate that feline hyperthyroidism can be managed effectively with dietary iodine restriction.

We have shown previously that restriction of dietary iodine (I) is a safe and effective method for decreasing serum thyroxine concentrations (TT4) in cats with hyperthyroidism. The objective of this study was to determine the maximum level of iodine in a nutritionally balanced feline mature adult food required to maintain normal serum TT4 concentrations in hyperthyroid cats currently being controlled on a food containing 0.15 ppm I (DMB), as measured by epiboron neutron activation.
All cats had been diagnosed at least 14 months prior to the start of the study, and their TT4 concentrations had been maintained in the normal range by dietary iodine restriction for a minimum of 10 months (range 10 months to 3 years). Serum TT4 concentrations ranged from 9 to 42 nmol/L (reference range 10–55 nmol/L) at the beginning of the study. The cats were divided into two groups of 9 cats each; the groups were similar in age and gender distribution (mean age 13.8 years, range 12–18 years). One group (Group A) was placed on a food formulated for mature adult cats containing 0.39 ppm I (DMB). The other group (Group B) was placed on a similar food that differed only in that it contained 0.47 ppm I (DMB). Blood was collected from all cats every three weeks and analyzed for serum TT4 concentration. Biochemistry parameters were also evaluated at weeks 0, 6, and 9. All Group A cats exhibited increases in serum TT4 concentration (mean increase 25 nmol/L above baseline, range 5–48 nmol/L). Seven of the cats remained in the euthyroid range (mean serum TT4 36 nmol/L, range 27–54 nmol/L). Two cats exceeded the upper limit of the reference range (59 and 76 nmol/L, respectively). The cats in Group B also exhibited increases in serum TT4 concentration, but to a greater degree than the cats in Group A (mean increase 39 nmol/L, range 20–60 nmol/L). Four cats remained in the euthyroid range (mean serum TT4 41 nmol/L, range 29–49 nmol/L). The five remaining cats all exceeded the upper limit of the reference range (mean serum TT4 76 nmol/L, range 59–99 nmol/L). All cats returned to a euthyroid state within 1 month of being returned to a diet containing 0.17 ppm I (DMB). It was concluded that serum TT4 concentrations are not reliably controlled in the normal range in hyperthyroid cats fed a food containing ≥ 0.39 ppm I (DMB).

Hyperthyroidism is a common disease of old cats; excessive production of thyroid hormones is the hallmark of the disease.
The three main treatments for feline hyperthyroidism are radioactive iodine, thyroidectomy, and antithyroid drugs such as methimazole. We have previously shown that limiting dietary iodine to 0.27 ppm or below induces euthyroidism in cats with hyperthyroidism, compared with a similar diet containing 0.42 ppm iodine. The objective of this study was to test whether dietary iodine at 0.32 ppm would induce euthyroidism in cats with naturally occurring hyperthyroidism. Fourteen cats with hyperthyroidism confirmed by serum TT4 and fT4 measurements were stratified into two groups based on gender and age. One group (control: 4 males and 3 females, ages 11 to 15 years) was given a positive-control dry cat food (0.17 ppm iodine), while the other group (test: 3 males and 4 females, ages 12 to 17 years) was fed a commercial dry cat food (1.9 ppm iodine) for at least 6 weeks before the study. Afterwards (week 0), the control cats continued to receive the same food, while cats in the test group were given a test food (0.32 ppm iodine) for an additional 12 weeks. All cats had free access to their food and to deionized water during the study. Blood samples were collected during weeks 0, 3, 6, and 12. The control cats maintained euthyroidism throughout the study. The test food significantly reduced serum TT4 (72 ± 12, 43 ± 9*, 42 ± 9*, and 40 ± 6* nmol/L in weeks 0, 3, 6, and 12, respectively; *p < 0.05 compared with week 0, Dunnett's t-test). It also significantly reduced fT4 at the end of the study (17 ± 2 vs. 23 ± 2 pmol/L, week 12 vs. week 0; Dunnett's t-test, p < 0.05). Serum fT4 was within the reference range (10–55 pmol/L) in cats of both groups. Serum TT3, fT3, and TSH were not affected by the test food and were within the reference ranges (TT3: 0.6–1.4 nmol/L; fT3: 1.5–6 pmol/L; TSH: 0–21 mU/L) in cats of both groups throughout the study.
This study demonstrates that dietary iodine at or below 0.32 ppm provides an effective and inexpensive therapy for cats with naturally occurring hyperthyroidism.

Radioactive iodine (131I) is a widely used treatment for feline hyperthyroidism. Prior to 131I administration, many cats receive methimazole therapy. It has been suggested that recent withdrawal of methimazole prior to 131I may increase the risk of hypothyroidism, inhibit the response to therapy, or have no effect. To further address this question, a retrospective medical records search was performed to identify hyperthyroid cats that received 131I therapy after methimazole treatment. Inclusion criteria were documentation of the time interval between discontinuation of methimazole and 131I administration, and measurement of thyroxine (T4) at 7–14 days after 131I. Cats were divided into 2 groups: those receiving 131I within 1 day of stopping methimazole, and those receiving 131I treatment 5 or more days after stopping methimazole. Sixty cats met the inclusion criteria. Forty received 131I within 1 day of stopping methimazole; of those, 20 (50%) had a low T4 (< 1.2 µg/dL), 17 (42.5%) had a normal T4 (1.2–4.8 µg/dL), and 3 (7.5%) had an elevated T4 (> 4.8 µg/dL) at 7–14 days after 131I therapy. Fourteen cats received 131I 5 or more days after stopping methimazole: 8 (57%) had a low T4, 5 (36%) had a normal T4, and 1 (7%) had an elevated T4 at 7–14 days after 131I therapy. The results were compared with a Fisher's exact test, and there was no difference between the groups (p = 0.76). These findings indicate that stopping methimazole therapy within 1 day of 131I therapy does not inhibit the response to therapy.

Pharmacokinetic studies evaluating synthetic insulin analogs such as glargine necessitate the ability to measure blood concentrations of glargine without cross-reactivity with endogenous insulin.
Although the cross-reactivity between human insulin assays and synthetic analogs is often known for commercially available assays, the degree of cross-reactivity of human insulin assays with feline insulin is not. The purpose of this study was to evaluate the cross-reactivity of feline insulin with a commercially available human insulin ELISA with known cross-reactivity to several synthetic analogs. Pre- and post-prandial blood samples were collected from four healthy cats immediately prior to and approximately 15 minutes following a meal, for a total of 8 samples. Dextrose was added to the meals given to two of the cats. Blood samples were immediately centrifuged, and the serum was collected, aliquoted, and stored at −20°C until analysis. Serum insulin concentrations were determined in parallel with commercially available feline insulin and human insulin ELISAs. The ELISAs were run in duplicate and according to the manufacturers' instructions. Concentrations of serum insulin measured by the feline insulin ELISA ranged from 12.7 ng/L to > 700 ng/L. Despite this wide range of feline insulin concentrations, all 8 samples evaluated with the human insulin ELISA yielded absorbance readings equal to or lower than that of the negative control, indicating no cross-reactivity between the evaluated human insulin assay and feline insulin. Since this assay is reported to cross-react significantly with glargine, it is a strong candidate for determination of serum glargine concentrations in cats.

The aim of this prospective, controlled study was to compare the efficacy of two trilostane protocols for treatment of canine pituitary-dependent hyperadrenocorticism (PDH). Among the 28 client-owned dogs diagnosed with PDH, only dogs weighing < 5 kg were selected (n = 16).
Group A (n = 9; low-dose treatment group) received 0.8 ± 0.3 mg of trilostane/kg orally every 12 hours, and Group B (n = 7; high-dose treatment group) received 30 mg of trilostane per dog orally every 24 hours. All dogs were reassessed at 2, 4, 12, and 24 weeks after the initiation of treatment. Improvement in post-ACTH-stimulation serum cortisol concentration, as well as in clinical signs, required more time in Group A than in Group B; however, 2 of 7 dogs in Group B had clinical signs and abnormal laboratory findings consistent with hypoadrenocorticism after treatment for 20 weeks. By 24 weeks, the abnormal clinical findings had improved in all dogs of both groups. The present study suggests that twice-daily, low-dose administration of trilostane is effective in the management of canine PDH and may be safe, avoiding the potential adverse effects of once-daily, high-dose treatment. However, because this study involved only a small number of dogs, a population-based controlled study will be needed to clarify the efficacy of low- compared to high-dose trilostane treatment.

Cobalamin is essential for a variety of metabolic processes in many tissues and organs, and has effects on cell growth and on peripheral and central nervous system function. Chronic distal small intestinal disease in humans, cats, and dogs has been shown to cause cobalamin deficiency. An immunoassay for the measurement of serum cobalamin concentration in these species is used in routine practice for the diagnosis of cobalamin deficiency. In pigs, the role of cobalamin has not yet been extensively investigated. Thus, the aims of this study were, first, to analytically validate an immunoassay, labeled for use in humans, for the measurement of cobalamin in porcine serum samples and, second, to determine serum cobalamin concentrations in weaned pigs.
For the analytical validation of the assay, serum cobalamin concentrations were measured using the commercially available IMMULITE® 2000 cobalamin immunoassay (Siemens Healthcare Diagnostics Ltd., Deerfield, IL, USA) in 30 surplus porcine serum samples from a variety of studies. Validation of the assay consisted of determination of dilutional parallelism, spiking recovery, and intra- and inter-assay variability. Additional surplus serum samples from 27 piglets from four litters at a Texas A&M University farm were obtained. Each piglet had been bled twice, first at weaning (21 days of age) and again 12 days later. To compare results between age groups, serum cobalamin concentrations were compared using a Wilcoxon matched-pairs test, with significance set at p < 0.05. Observed-to-expected ratios (O/E) for serial dilutions ranged from 87.3 to 124.9% (mean ± SD: 104.2 ± 14.5%) for four different serum samples at dilutions of 1:1, 1:2, and 1:4, and from 96.8 to 118.3% (mean ± SD: 107.6 ± 15.2%) for one serum sample at dilutions of 1:2, 1:4, and 1:8. O/E for spiking recovery ranged from 87.4 to 116.7% (mean ± SD: 102.0 ± 7.5%) for five different porcine serum samples that had been spiked with each other in a 1:1 dilution. Intra-assay coefficients of variation (%CV) for five different serum samples were 4.3, 5.7, 4.3, 3.7, and 6.1%. Inter-assay %CVs for five different serum samples were 4.9, 7.2, 9.6, 3.5, and 7.6%. Serum cobalamin concentration was significantly lower in piglets post-weaning (median: 242 ng/L) than at the time of weaning (median: 324 ng/L; p = 0.009). The IMMULITE® 2000 cobalamin immunoassay, labeled for use in humans, is linear, accurate, precise, and reproducible for measurement of serum cobalamin concentrations in pigs. This study also showed that piglets differing in age by only 12 days have significantly different serum cobalamin concentrations.
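The validation metrics reported above (observed-to-expected ratios and coefficients of variation) are simple descriptive statistics. A minimal sketch of how they are computed, using hypothetical replicate values rather than the study's raw data:

```python
from statistics import mean, stdev

def percent_cv(replicates):
    """Coefficient of variation of replicate measurements, in percent
    (sample standard deviation divided by the mean)."""
    return stdev(replicates) / mean(replicates) * 100

def observed_to_expected(observed, expected):
    """O/E ratio in percent, e.g. for dilutional parallelism or spiking
    recovery, where `expected` is the value predicted from the neat sample."""
    return observed / expected * 100

# Hypothetical replicate run of one serum sample (ng/L):
cv = percent_cv([310, 322, 318, 315])

# A 1:2 dilution expected to read half the neat value of 320 ng/L:
oe = observed_to_expected(observed=158, expected=160)  # 98.75%
```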
Further investigations of cobalamin concentrations in both sows and piglets at different stages of weaning are warranted.

Primigravid dairy heifers can be infected with mastitis pathogens during the periparturient period. The prevalence of intramammary infection (IMI) ranges from 30–75% of quarters pre-partum and 12–45% at parturition. Some pre-partum infections self-cure before parturition; however, a number of these IMIs persist into early lactation. These IMIs may impact milk production and quality and may serve as a reservoir for contagious pathogens. No study has specifically investigated the risk of an IMI persisting from the pre-partum period into early lactation. The objectives of this study were to describe the prevalence of mastitis pathogens in heifers on a grazing dairy before and after parturition and to calculate the relative risk (RR) and attributable fraction of the population (AFP) for the association between a post-partum and a pre-partum IMI. Two hundred ninety-four heifers were systematically assigned to 1 of 3 groups: (G1) pre-partum secretions collected from all mammary quarters (n = 98), (G2) no pre-partum secretions collected (n = 98), and (G3) pre-partum secretions collected from two diagonal quarters (n = 98). Group assignments were designed to assess whether pre-partum sampling increased the likelihood of IMI at calving. Mammary quarter secretions were collected for bacterial culture approximately 2 weeks prior to the expected calving date. Quarter milk samples were collected for bacterial culture once weekly during the first 3 weeks of lactation. Bacterial isolates were classified as staphylococci, non-agalactiae streptococci, and gram-negatives. Mammary quarter samples yielding 2 different bacteria were classified as mixed infections, and those yielding ≥ 3 bacterial types were classified as contaminated.
Bacterial isolates were speciated using gene sequencing methods and strain-typed using pulsed-field gel electrophoresis to evaluate the relatedness of bacteria isolated from pre- and post-partum samples from the same mammary quarter. Relative risk and AFP were calculated using 2 × 2 tables. Forty-five percent of mammary quarters had a pre-partum IMI. During the first 3 weeks of lactation, the mean prevalence of IMI was 23.3% of quarters. Staphylococci were the bacteria most frequently isolated from pre-partum secretions and milk, with S. chromogenes and S. aureus being the most common species. Using data from 228 mammary quarters, the RR and AFP for the association between a post-partum and a pre-partum IMI were 11 and 77%, 43 and 86%, and 12 and 68% for all staphylococci, S. aureus only, and CNS-only IMIs, respectively. Mammary quarters sampled pre-partum were no more likely to have a post-partum IMI than those not sampled (chi-square, p ≥ 0.27). These data demonstrate that pre-partum IMIs persist into early lactation and that pre-partum secretion cultures may be useful not only in predicting IMI at calving, but also in assessing the risk of introducing new contagious mastitis pathogens, e.g., S. aureus, into the lactating herd.

Despite concerns about antimicrobial resistance and Clostridium difficile in food animals, there has been little study of the prevalence or mechanisms of resistance. This study evaluated the impact of tetracycline treatment on C. difficile shedding in veal calves and the impact on resistance. Calves arriving on 1 veal farm received oral oxytetracycline for 5 days as per farm protocols. Calves were sampled at arrival and 6 days later. Selective culture for C. difficile was performed. Isolates were ribotyped and tested for tetracycline susceptibility and for the presence of tetracycline resistance genes.
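The RR and AFP figures in the mastitis study above come from standard 2 × 2 table calculations. A minimal sketch, using illustrative counts (the study's raw cell counts are not reported in the abstract):

```python
def rr_and_afp(a, b, c, d):
    """Relative risk and attributable fraction of the population from a
    2x2 table:
        a = exposed (pre-partum IMI) quarters with a post-partum IMI
        b = exposed quarters without a post-partum IMI
        c = unexposed quarters with a post-partum IMI
        d = unexposed quarters without a post-partum IMI
    """
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    overall_risk = (a + c) / (a + b + c + d)
    # AFP: share of all post-partum IMIs attributable to pre-partum infection
    afp = (overall_risk - risk_unexposed) / overall_risk
    return rr, afp

# Illustrative counts only (not from the study):
rr, afp = rr_and_afp(a=40, b=10, c=20, d=30)  # RR = 2.0, AFP = 1/3
```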
Multivariable logistic regression models were used to determine the relationship between tetracycline resistance and the presence of tetracycline resistance genes. Clostridium difficile was isolated from 32% (56/174) and 51% (88/172) of calves at the first and second samples, respectively. The percentage of tetracycline-resistant isolates increased from 79% to 93%. Isolates from the second sample were 3 times more likely to be tetracycline resistant (p = 0.016) and 5 times more likely to possess tet(M) (p = 0.004). tet(M) was detected in 13% (7/53) and 43% (39/91), tet(O) in 23% (12/53) and 19% (17/91), and tet(W) in 2% (1/53) and 1% (1/91) of isolates from the first and second samples, respectively. tet(L), tet(K), and tet(S) were not detected. Fifty-four resistant isolates did not carry any of the genes investigated. Routine tetracycline use may have had an impact on the prevalence of C. difficile, as well as on the strain distribution and resistance patterns. This is the first report of the presence of tet(

The objectives of this study were (1) to estimate the prevalence of antimicrobial resistance in the study population and (2) to investigate the associations between exposures to antimicrobial drugs and antimicrobial resistance in fecal non-type-specific E. coli (NTSEC) recovered from individual feedlot cattle. Two-stage random sampling was used to identify cattle for enrollment at 4 western Canadian feedlots. A fecal sample was collected per rectum from each individual at arrival and in the middle of the feeding period, when cattle were rehandled as part of standard feedlot protocol. From samples collected at this second time point, a total of 2,133 NTSEC isolates were tested for susceptibility to antimicrobial drugs by disk diffusion. Parenteral and in-feed exposures to antimicrobial drugs were recorded for each individual enrolled in the study.
Least-squares means estimates and 95% confidence intervals for the prevalence of resistance at each time point were modeled using Poisson regression. Multivariable logistic regression was used to investigate associations between antimicrobial resistance and exposure to antimicrobial drugs. Regression models were adjusted for clustering of observations among individuals and pens. The most common resistances identified in arrival samples were to sulfisoxazole (7.5%; 95% CI 6.1–9.2), streptomycin (7.7%; 95% CI 6.3–9.5), and tetracycline (20.0%; 95% CI 17.7–22.6). At the second sampling point, resistance prevalence was 25.6% (95% CI 23.5–28.0) for sulfisoxazole, 25.0% (95% CI 22.8–27.3) for streptomycin, and 72.7% (95% CI 70.5–75.1) for tetracycline. Logistic regression modeling identified weak associations of exposures to tetracycline- and macrolide-class drugs with antimicrobial resistance at the second time point.

Abstract FA-5: Premature/dysmature syndrome in cria: a retrospective study of 63 cases (1999–2005). C. Gerspach, D. Anderson. The Ohio State University, Columbus, OH. Prematurity is widely acknowledged as a risk factor for subsequent morbidity and mortality in llama and alpaca cria. A review of medical records for premature cria alive at the time of admission to the veterinary teaching hospital between 1999 and 2005 was performed to determine risk factors for prematurity and to report the outcome and related conditions or diseases in affected cria. Medical records for 63 premature or dysmature cria were included in this study. Of these cria, 51 were alpaca and 12 llama; 36 were female and 27 male. Reasons for referral were prematurity, failure of passive immunity, dyspnea, weakness, and failure to gain weight. Cria were presented at a mean age of 1.4 days and were premature by a mean estimated time of 19.5 days. The overall survival rate was 82.5%, with all llama cria surviving.
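The resistance-prevalence confidence intervals above come from cluster-adjusted Poisson models, so they are somewhat wider than a naive binomial interval on the raw counts. For orientation, a Wilson score interval (our sketch, with an assumed resistant count of 1,551/2,133 ≈ 72.7%) gives a slightly narrower range:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion.

    No adjustment for clustering, so it will understate the uncertainty
    of clustered samples such as isolates grouped within pens.
    """
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Assumed count reproducing the reported 72.7% tetracycline resistance:
lo, hi = wilson_ci(successes=1551, n=2133)
```

The naive interval here is roughly 70.8–74.6%, narrower than the model-based 70.5–75.1%, which is the expected effect of ignoring clustering.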
A multivariable logistic regression model was used to identify risk factors associated with non-survival. Cria receiving camelid colostrum had a significantly better outcome than cria receiving no colostrum or colostrum from a different species. Dyspnea and tachypnea were associated with a poor outcome. All cria that were able to nurse without assistance prior to referral survived. The clinical pathology findings most commonly associated with death were hyperphosphatemia and acidosis.

Enrofloxacin is approved for the treatment of swine respiratory disease; however, there are no published studies describing the pharmacokinetics of enrofloxacin at the approved dose and route in pigs (7.5 mg/kg subcutaneously). Furthermore, no studies have assessed the unbound concentrations of enrofloxacin at its site of action, the extracellular tissue fluid. Therefore, the objective of this study was to use an in vivo ultrafiltration method to measure the active fraction of enrofloxacin, and of its metabolite ciprofloxacin, at 3 tissue sites relevant to pigs, and to compare these concentrations with plasma concentrations collected at similar time points. Six healthy pigs were used in this study. The pigs were recently weaned and weighed an average of 16.3 kg. On the day before the experiment, pigs were anesthetized for the placement of jugular vein sampling catheters and interstitial fluid collection probes. Three ultrafiltration probes were placed in each pig: in a subcutaneous site near the right shoulder, in an intramuscular site along the epaxial muscles, and in the pleural space of the chest cavity. Each pig received an injection of enrofloxacin (Baytril 100, Bayer Animal Health) at a dose of 7.5 mg/kg subcutaneously behind the left ear. Plasma and interstitial fluid samples were collected at pre-determined time points, and enrofloxacin and ciprofloxacin concentrations were measured using HPLC with fluorescence detection. Protein binding was determined with a microcentrifugation system.
Pharmacokinetic data were analyzed using a one-compartment model. Analysis of plasma and ISF showed that only a small fraction of ciprofloxacin was produced in these pigs; therefore, ciprofloxacin concentrations were not used in the pharmacokinetic calculations. The plasma half-life (t1/2), volume of distribution, clearance, and peak concentration (Cmax) for enrofloxacin were 25.9 h (± 6.2), 6.29 L/kg (± 1.23), 0.168 L/kg/h (± 0.076), and 1.07 µg/mL (± 0.28), respectively. Concentrations from each of the three tissues were not different within each pig. When pharmacokinetic values from all tissues were combined for the ISF, the t1/2 was 23.6 h (± 4.1) and the Cmax was 1.26 µg/mL (± 0.10). Enrofloxacin plasma protein binding was 31.1% (± 3.28) and 37.13% (± 16.54) at a high and a low concentration, respectively. This study demonstrated that the concentration of biologically active enrofloxacin in tissues exceeds the concentration predicted by the unbound fraction of enrofloxacin in pig plasma. The half-life of enrofloxacin is longer in tissues and plasma than has been reported in previous studies. The high tissue concentrations and long half-life produce an AUC/MIC ratio sufficient for the pathogens that cause respiratory infections in pigs.

Ceftiofur crystalline free acid (CCFA), a long-acting ceftiofur formulation labeled for use in cattle, pigs, and horses for treatment of respiratory disease, has been used for treatment of ovine respiratory infections in clinical practice. Pharmacokinetic data, however, do not exist for CCFA administered subcutaneously in sheep. The present pharmacokinetic study evaluated single-dose subcutaneous administration of CCFA in sheep (n = 9) at 6.6 mg/kg body weight. Concentrations of ceftiofur free acid equivalents (CFAE) in plasma were measured by high-performance liquid chromatography for 14 days following drug administration.
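In a one-compartment model such as the one used for the enrofloxacin data above, the reported parameters are linked by k = CL/Vd and t1/2 = ln(2)/k; plugging the reported plasma clearance and volume of distribution into that relation recovers the reported half-life, which is a useful internal consistency check. A minimal sketch (our helper name):

```python
from math import log

def elimination_half_life(clearance, volume_of_distribution):
    """One-compartment elimination half-life: t1/2 = ln(2) / k, k = CL / Vd.

    `clearance` in L/kg/h, `volume_of_distribution` in L/kg → t1/2 in hours.
    """
    k = clearance / volume_of_distribution  # elimination rate constant (1/h)
    return log(2) / k

# Plasma estimates from the abstract: CL = 0.168 L/kg/h, Vd = 6.29 L/kg
t_half = elimination_half_life(0.168, 6.29)  # ≈ 25.9 h, matching the reported value
```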
Pharmacokinetics of subcutaneous CCFA in sheep were best described using a single-compartment model with the following average (± SD) parameters: area under the concentration-time curve from 0 to infinity (206.6 h·µg/mL ± 24.8), observed maximum plasma concentration (2.4 µg/mL ± 0.5), and observed time of maximum plasma concentration (23.1 h ± 10.1). No significant adverse drug reactions were observed. Adequate CFAE plasma concentrations were attained to effectively treat respiratory tract pathogens associated with pneumonia in sheep.

The purpose of this study was to assess, using thoracic ultrasonography, the prevalence of lung lesions in pre-weaned dairy calves. Subsequent aims were to describe ultrasonographic changes within the lung, clinical respiratory score, and treatment of respiratory disease. A longitudinal study was performed using female dairy calves from 6 commercial dairy farms in New York State. Calves were enrolled based on age. Thoracic ultrasound and clinical respiratory scoring were performed on each calf at 2 time points. A standard 5 MHz linear ultrasound probe was utilized to evaluate intercostal spaces 1 through 11 of each hemithorax with the calf in lateral recumbency (US1) or standing (US2). Lesion appearance, size, and location were recorded. Respiratory score (RS) was assigned based on a previously published protocol incorporating fever, nasal discharge, cough, ocular discharge, and ear droop, with a higher numerical score corresponding to more severe disease. Abnormal lung on ultrasound was defined as one or more areas of ≥1 cm width or depth of non-aerated lung. Farm records were evaluated to identify treated calves. Calves were treated for respiratory disease at the farm manager's discretion, not based upon ultrasound findings or RS. Non-parametric methods were used to evaluate the data. Ninety-one calves were enrolled into the study, with 6 lost to follow-up. An average of 4 minutes was spent performing the RS and ultrasound on each calf.
The median ages at first (US1) and second (US2) examination were 13 (interquartile range 12-15) and 46 (interquartile range 44-47) days, respectively. The majority of calves had a low RS (<5), and only 3.2% of calves had an RS high enough to warrant treatment based on previous recommendations (RS ≥ 5). The prevalence of calves that had abnormal lungs on ultrasound but a low RS (<5) was 5.5% (US1) and 16.5% (US2). The prevalence of calves that had abnormal lungs on ultrasound and a high RS (≥5) was 0% (US1) and 3.5% (US2). Of the calves that had abnormal lungs on ultrasound but a low RS, 13% were treated with antimicrobials within 7 days of examination. None of the calves with a high RS and abnormal lungs on ultrasound were treated with antibiotics within 7 days of examination. This study demonstrates a high prevalence of abnormal lungs, as detected by thoracic ultrasonography, without significant clinical signs in pre-weaned dairy calves. The relatively low treatment rate in these calves may suggest an area of opportunity for improvement in calf health, welfare, and herd longevity. Further studies and follow-up are needed to elucidate the significance of these findings and whether or not treatment is indicated.

Literature regarding diseases causing lameness in beef cattle is limited. This retrospective study was undertaken to examine beef cattle presented for lameness. Medical records of beef cattle having a lameness examination done during the period 2007 to 2010 were reviewed and descriptive statistics generated. Lameness was classified based on clinical diagnosis. The medical records of 270 beef cattle were reviewed, of which 63.2% were male and 36.8% were female. Beef cattle presented for lameness most often during the summer months (34%) and least during autumn (18%). Causes of lameness were categorized as infectious (44.6%) or non-infectious (55.4%), and infectious lameness was subcategorized as either a primary disorder or a secondary infection.
All cases of a primary infectious disorder were interdigital phlegmon. Secondary infectious diseases included sole abscess (25.9%), septic arthritis (11.1%), tenosynovitis (2.8%), and pedal osteitis (1.3%). Non-infectious lameness included proximal limb lameness (19.6%), foot trauma (14.6%), hoof horn cracks (9.5%), hoof defects (2.5%), interdigital fibromas (1.9%), overgrown hooves (1.9%), sole bruise (1.3%), subclinical laminitis (1.3%), white line disease (0.9%), osteoarthritis (0.9%), heel erosion (0.3%), sole ulcers (0.3%), and sole hemorrhage (0.3%). The most frequently affected claw was the lateral digit of the hind limb (36.4%), followed by the medial digit of the front limb (27.1%), the lateral digit of the front limb (23.6%), and the medial digit of the hind limb (12.9%). The findings of this study suggest significant differences in the frequency of diseases causing lameness in beef cattle compared to published reports for dairy cattle.

In people, endoscopic ultrasound (EUS) has become the technique of choice for assessing pancreatic disease, and EUS-guided fine-needle aspiration (EUS-FNA) has proven a useful and safe modality for characterizing pancreatic lesions. Reported complications include infections, bleeding, and acute pancreatitis. In dogs, laparoscopic-assisted pancreatic biopsy has been suggested to be a safe procedure; however, EUS and EUS-FNA have not been evaluated in dogs so far. Thus, the aim of the present study was to assess the practicability and safety of EUS examination of the abdominal cavity as well as pancreatic EUS-FNA in healthy dogs. This study was approved by the Cantonal Committee for the Authorization of Animal Experimentation, Zurich, Switzerland. The study population consisted of 14 healthy Beagle dogs with a median bodyweight of 13.4 kg (10.9-15.7). EUS was performed with an Olympus GF-UC140P echoendoscope, and FNA were performed using 19 G needles (Cook EchoTip Ultra).
After completion of the EUS examination of the abdominal cavity from the stomach (liver, gallbladder, bile ducts, kidneys, adrenals, pancreas), the scope was advanced into the duodenum and EUS-FNA of the pancreas was performed. FNA tissue acquisition was made applying negative pressure, and 6 to 8 needle passes were made. All dogs received 30 mg/kg metamizole IM after EUS-FNA and were re-checked ultrasonographically 20 minutes post EUS-FNA. Postoperative activity was assessed using a standardized scoring system. A CBC, serum biochemistry, urinalysis, and Spec cPL were measured before, as well as 24 and 48 h after, EUS-FNA. The EUS examination was complete in 13/14 dogs; the pancreas could not be visualized in 1 dog. The pancreas was hypo- (4/13) to isoechoic (9/13) to the surrounding mesentery in all cases. In 3/13 dogs, parts of the pancreas presented hyperechoic. The mean measured thickness was 0.88 cm. The pancreas was aspirated in 12 dogs using a transgastric (3) or a transduodenal (9) approach. Duodenal transmural puncture was not accomplished in 1 dog, where a re-sterilized needle was used. A minimal amount of peripancreatic fluid was observed in 1/12 dogs after EUS-FNA. All dogs recovered uneventfully and required no further analgesia. All laboratory results, including the Spec cPL measurements, were within reference ranges at all three time points. Cytologically, conglomerates of exocrine pancreatic cells were seen in 8/12 cases, and duodenal villous epithelial cells were seen in 11/12 cases. In 1 dog, the aspirated pancreatic material was sufficient for a histological assessment. The 8 aspirates with exocrine pancreatic cells on cytology were obtained by transgastric (4) and transduodenal (4) aspirations. In conclusion, (1) EUS examination of the abdomen is feasible in medium-sized dogs, (2) the healthy canine pancreas can be difficult to visualize completely, and (3) EUS-guided pancreatic FNA using a 19 G needle is a safe procedure in healthy dogs.
Studies evaluating its use in dogs with pancreatic disease are warranted to assess its clinical utility.

Miniature Schnauzers have a high prevalence of idiopathic hyperlipidemia, which is characterized by an increased serum triglyceride (TG) concentration, with or without an increased serum cholesterol (CHOL) concentration. A common initial therapeutic approach for the management of hyperlipidemia is the use of a low-fat diet. Also, it is believed that low-fat diets may be beneficial in the treatment of pancreatitis in dogs. However, the efficacy of this approach has not been evaluated for either condition. The aim of the present study was to evaluate the effect of a commercially available low-fat diet on serum concentrations of TG, CHOL, and canine pancreatic lipase immunoreactivity (cPLI; measured as Spec cPL) in apparently healthy Miniature Schnauzers with hypertriglyceridemia. Blood samples were collected from 15 apparently healthy Miniature Schnauzers with hypertriglyceridemia (serum triglyceride concentrations >108 mg/dL). Common causes of secondary hyperlipidemia were excluded based on historical information, physical examination findings, and the measurement of serum glucose, total T4, and free T4 (by equilibrium dialysis) concentrations. The owners of the dogs were asked to switch their dog to the study diet (Royal Canin Gastrointestinal Low Fat; fat content: 18.6 g/1,000 kcal) and have a second blood sample collected 8 weeks after their dog had been on the new diet. All blood samples were collected after food had been withheld for 15 hours. Serum TG, CHOL, and Spec cPL concentrations were measured both before and after the diet change. Results were compared between the two time points using the Wilcoxon signed-rank and Fisher's exact tests. Serum TG concentrations were significantly higher before (median: 432 mg/dL) than after the diet change (median: 178 mg/dL; p = 0.003).
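The Fisher's exact comparisons of proportions used in this study can be reproduced from the 2×2 counts alone. A minimal two-sided sketch (summing hypergeometric probabilities no larger than the observed table's; the function name is illustrative, not from the source):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins whose
    probability does not exceed that of the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def prob(x):
        # hypergeometric probability of x "successes" in the first row
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# 15/15 dogs hypertriglyceridemic before vs 10/15 after the diet change:
p = fisher_exact_two_sided(15, 0, 10, 5)   # ~0.042
```

Applied to the 6/15 vs 0/15 split for TG > 500 mg/dL, the same function gives p ≈ 0.017, in line with the reported value.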
The proportion of dogs with hypertriglyceridemia was significantly higher before (15/15) than after the diet change (10/15; p = 0.042). Also, the proportion of dogs with serum TG >500 mg/dL was significantly higher before (6/15) than after the diet change (0/15; p = 0.016). Serum CHOL concentrations were significantly higher before (median: 296 mg/dL) than after the diet change (median: 258 mg/dL; p = 0.004). The proportion of dogs with hypercholesterolemia was significantly higher before (8/15) than after the diet change (0/15; p = 0.006). Finally, the difference in serum Spec cPL concentrations before (median: 88 µg/L) and after the diet change (median: 56 µg/L) approached but did not reach significance (p = 0.052). Also, the proportion of dogs with high serum Spec cPL concentrations before (4/15) and after the diet change (0/15) was different, but this difference was not significant (p = 0.099). In summary, a commercially available low-fat diet was effective in reducing serum TG and CHOL concentrations in Miniature Schnauzers with hypertriglyceridemia.

Toll-like receptor 5 (TLR5) is an extracellular pattern recognition receptor which recognizes flagellin present in motile bacteria. We have previously demonstrated a significant association between three non-synonymous single nucleotide polymorphisms (SNPs) in the TLR5 gene (G22A, C100T, and T1844C) and inflammatory bowel disease (IBD) in German Shepherd dogs (GSDs). Recently, we have confirmed that two of these TLR5 SNPs (C100T and T1844C) are significantly associated with IBD in other canine breeds. To further substantiate the role of TLR5 in canine IBD, functional analysis of these polymorphisms would be needed. Therefore, the aim of this study was to determine the functional significance of the TLR5 SNPs by transfecting wild-type and mutant receptors into human embryonic kidney (HEK) cells and carrying out nuclear factor-kappa B (NF-κB) luciferase assays and IL-8 ELISA.
The TLR5 gene containing the risk haplotype for IBD (ACC) and the wild-type haplotype (GTT), as determined by the case-control analysis in GSDs with IBD, were cloned into plasmids expressing yellow fluorescent protein (YFP). These were then stably transfected into HEK cells. NF-κB activity was measured by transiently transfecting the cells with NF-κB firefly and HSV-thymidine kinase promoter (pRL-TK) Renilla plasmids. The cells were then stimulated with various ligands (0.1 µg/mL flagellin, 0.01 µg/mL flagellin, 1 µg/mL LPS, 1 µg/mL Pam3CSK, and media control). Firefly and Renilla luciferase activities were measured using the Dual-Glo luciferase assay system (Promega, UK) according to the manufacturer's recommendations. The supernatants were harvested and used in an IL-8 ELISA (R&D Systems). Human TLR5-transfected HEK cells (InvivoGen) served as positive controls in all experiments. An independent t-test was used to determine the significance of relative luciferase activity and IL-8 concentration between wild-type and mutated TLR5 cells. Although there was no significant difference between the wild-type and mutated receptor when they were stimulated with 0.01 µg/mL of flagellin (p = 0.14), there was a significant increase when the cells with mutated TLR5 were stimulated with 0.1 µg/mL of flagellin compared to the cells expressing wild-type TLR5 (p = 0.027). Similarly, there was a significant increase in IL-8 concentration in the supernatants of the cells with the mutated TLR5 receptor when stimulated with 0.1 µg/mL flagellin compared to the wild-type (p = 0.025 one-tailed, 0.05 two-tailed) but not with 0.01 µg/mL flagellin (p = 0.26). We show for the first time that polymorphisms associated with IBD are functionally hyper-responsive to flagellin compared to the wild-type receptor.
This suggests that TLR5 may play a role in canine IBD and that blocking the hyper-responsive receptor found in susceptible dogs with IBD may alleviate the inappropriate inflammation seen in this disease. However, further in-vivo functional analysis of TLR5, especially at the intestinal mucosal level, would be needed to confirm these findings and predict the usefulness of any future therapeutic interventions.

TLR5 has been shown to play a role in the inappropriate inflammation seen in human inflammatory bowel disease (IBD). Similarly, we have recently demonstrated a significant association between three non-synonymous single nucleotide polymorphisms (SNPs) in the canine TLR5 gene (G22A, C100T, and T1844C) and inflammatory bowel disease (IBD) in German Shepherd dogs (GSDs). Therefore, the aim of this study was to determine the functional significance of the TLR5 SNPs in GSDs. The TLR5 gene containing the risk haplotype for IBD (ACC) and the wild-type haplotype (GTT) were stably transfected into HEK cells. NF-κB activity was measured by transiently transfecting the cells with NF-κB firefly and HSV-thymidine kinase promoter (pRL-TK) Renilla plasmids. The cells were stimulated with various TLR ligands (0.1 µg/mL flagellin, 0.01 µg/mL flagellin, 1 µg/mL LPS, 1 µg/mL Pam3CSK, and media control). Firefly and Renilla luciferase activities were measured using the Dual-Glo luciferase assay system (Promega, UK). The supernatants were harvested and used in an IL-8 ELISA (R&D Systems). Peripheral whole blood from dogs carrying the wild-type and mutant TLR5 genes was cultured and stimulated with TLR ligands as above. Canine TNF-alpha was measured in the supernatant by a commercially available ELISA (R&D Systems). A t-test was used to determine differences in relative luciferase activity, IL-8 concentration, and TNF-alpha concentration between wild-type and mutated TLR5 cells.
There was a significant increase in NF-κB activity when the cells with mutated TLR5 were stimulated with 0.1 µg/mL of flagellin compared to the cells expressing wild-type TLR5 (p = 0.027), which correlated with IL-8 expression in the supernatant (p = 0.26). Similarly, in the whole blood assay, the TLR5 risk haplotype for IBD in GSDs (ACC) was significantly hyper-responsive to flagellin at a concentration of 0.1 µg/mL compared to the TLR5 wild-type haplotype (GTT) (p = 0.001). We show for the first time that polymorphisms associated with canine IBD in GSDs are functionally hyper-responsive to flagellin compared to the wild-type receptor. Blocking the hyper-responsive receptor found in susceptible dogs with IBD may alleviate the inappropriate inflammation seen in this disease.

Proton pump inhibitors (PPI) are widely used in human and also in veterinary medicine. Side effects of PPI treatment reported in people are atrophic gastritis, gastric and esophageal cancer, and rebound hyperacidity following cessation of treatment, which has been speculated to be due to a sustained increase in circulating gastrin concentration. Moreover, long-term PPI treatment has been associated with an increased risk for osteoporosis in people. Little is known about the effect of PPI treatment on serum gastrin concentration or calcium metabolism in dogs. Eight healthy adult research dogs (4 males and 4 females) were enrolled into the study. The dogs received an average dose of 1.1 mg/kg of omeprazole orally twice daily for 15 days. Blood samples were collected prior to initiating the treatment, every 3 days during the 15 days of treatment, and during the 15 days after discontinuation of treatment for determination of serum gastrin, ionized calcium, PTH, and 25-OH vitamin D3. Gastric fluid was collected via gastroscopy after an overnight fast for measurement of gastric pH prior to, during, and after the omeprazole treatment period.
Normally distributed data were compared with a repeated measures ANOVA and a post-hoc Dunnett's test. Data that were not normally distributed were compared with a Friedman's test and a post-hoc Dunn's test. Gastric fluid pH was significantly higher (p < 0.01) at the end of the treatment period (median: 7.4; range: 4.2-8.1) when compared to pretreatment values (median: 1.7; range: 1.0-6.8). Serum gastrin concentrations increased significantly from a median baseline of 10.0 ng/L (range: 10.0-27.0) to a maximum median of 379.5 ng/L (range: 49.9-566.0) at day 9 of treatment (p < 0.01). Serum gastrin remained significantly increased above baseline values from day 6 to day 15 of the treatment, but was not different from pre-treatment values 3 days after the end of the treatment. Omeprazole treatment had no effect on ionized calcium or PTH for the duration of the study. Marginal but significant changes in 25-OH vitamin D3 were observed at day 15 (end of the treatment period; increased by 13.8%) and day 21 (6 days after the end of the treatment; decreased by 14.7%). This study shows that treatment with omeprazole for 2 weeks results in a profound and sustained increase in serum gastrin concentration in dogs. This effect is rapidly reversible after cessation of the treatment. No effect on calcium metabolism was observed. However, this study documents only the effect of short-term treatment, and it is possible that the effects of long-term administration are different.

Omeprazole treatment has been associated with small intestinal bacterial overgrowth and a higher risk for infectious enteropathies in humans. Using a semi-quantitative sequencing approach, we have previously shown that omeprazole treatment may lead to alterations in both duodenal and gastric bacterial populations in healthy dogs (ACVIM 2010). However, a sequencing approach can only estimate relative proportions of genomic bacterial targets.
Therefore, significant changes in the total number of bacteria could not be evaluated. The aim of this study was to quantify gastric and duodenal bacterial populations in dogs undergoing omeprazole treatment. Eight 9-month-old healthy research dogs (4 males and 4 females) were enrolled. The dogs received an average dose of 1.1 mg/kg of omeprazole orally twice a day for 15 days. Endoscopic gastric and duodenal biopsies were harvested 30 and 15 days before starting omeprazole treatment, on the last day of treatment (day 15), and 15 days after the end of treatment (day 30). All biopsies were fixed in 10% formalin for 24 hours, processed, and embedded in paraffin blocks. Fluorescence in situ hybridization was used to quantify mucosa-associated bacteria using fluorescently labeled probes targeting the 16S ribosomal RNA. Statistical analysis aimed to compare changes in Helicobacter spp. in gastric biopsies and total bacteria in both gastric and duodenal biopsies using the GLIMMIX and NPAR1WAY procedures in SAS 9.2. Bacteria were counted in 1,174 and 989 microscopic fields (63×) obtained from 155 and 132 gastric and duodenal biopsies, respectively. In the stomach, omeprazole treatment led to a decrease in Helicobacter spp. (log of average counts ± standard error: 1.98 ± 0.14 at day 15) when compared to the counts 30 (2.43 ± 0.13, p < 0.0001) and 15 (2.40 ± 0.13, p < 0.0001) days before treatment. After completion of omeprazole treatment, Helicobacter spp. increased and returned to baseline counts (2.45 ± 0.13 at day 30, p < 0.0001 vs. day 15). Also, in the stomach, non-Helicobacter spp. bacteria were observed more often during omeprazole treatment (median: 3, range: 0-20) than on days 30 (median: 0, range: 0-3) and 15 (median: 1, range: 0-6) before treatment and 15 days after treatment (median: 0, range: 0-2); however, statistical comparison across time points did not reach significance.
In the duodenum, while the median number of bacteria for all time points was zero, non-parametric comparison of median scores (number of points above the median) revealed significantly higher numbers of bacteria during omeprazole treatment (p = 0.0063). Our results suggest that omeprazole treatment for 2 weeks leads to a lower abundance of Helicobacter spp. organisms in the stomach of healthy dogs. Also, this transient decrease in Helicobacter spp. was accompanied by a higher abundance of other bacteria in both the stomach and the proximal duodenum.

The SmartPill pH.p capsule (The SmartPill Corporation) is a wireless motility capsule that measures pH, pressure, and temperature as it passes through the gastrointestinal (GI) tract. Analysis of these data allows the calculation of gastric emptying time (GET), small and large bowel transit time (SLBTT), and total GI transit time (TGTT). This study evaluated the variability associated with repeated measurement of GI transit times and the effect of oral administration of ranitidine (Zantac) on GI transit times in dogs using this system. It was hypothesized that ranitidine would reduce GI transit times. Six privately owned healthy adult dogs weighing between 25.0 kg and 41.0 kg were used. On 3 occasions, each dog was fed a standard meal followed by oral administration of a capsule. Data were recorded until the capsule had passed in the dog's feces. On a 4th occasion, each dog was given 75 mg of ranitidine PO q12h starting 48 hours prior to testing. The dogs were then fed the test meal and the capsule was administered as above. Ranitidine was given until the capsule had passed in the dog's feces. Proprietary SmartPill software was used to calculate GET, SLBTT, TGTT, and the median gastric pH (MGpH). Mean intra-individual and inter-individual coefficients of variation (CV%) were calculated for GET, SLBTT, and TGTT for the first 3 time points.
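The intra- and inter-individual CV% summaries described above can be computed directly: intra-individual CV% averages each dog's CV across its repeated runs, and inter-individual CV% is the CV of the per-dog means. A minimal sketch (the input structure and function names are assumptions for illustration):

```python
from statistics import mean, stdev

def cv_percent(values):
    """Coefficient of variation as a percentage: 100 * SD / mean."""
    return 100.0 * stdev(values) / mean(values)

def intra_inter_cv(per_dog_runs):
    """per_dog_runs: one list of repeated transit-time measurements per dog.
    Returns (mean intra-individual CV%, inter-individual CV% of dog means)."""
    intra = mean(cv_percent(runs) for runs in per_dog_runs)
    inter = cv_percent([mean(runs) for runs in per_dog_runs])
    return intra, inter

# Hypothetical GET values (min) for two dogs, three runs each:
intra, inter = intra_inter_cv([[700, 800, 750], [900, 950, 1000]])
```

Separating the two quantities this way is what lets a study report, as here, that between-dog variation exceeds within-dog variation for the same transit measure.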
Transit times and gastric pH recorded at all 4 time points were compared using a repeated measures ANOVA. Where significant differences were identified, post-hoc testing was performed using Bonferroni's multiple comparisons test. Significance was set at p < 0.05. A sharp rise in pH indicating exit of the capsule from the stomach was identified in each experiment. Mean (± SD) GET, SLBTT, and TGTT without ranitidine were 775 ± 144, 1563 ± 614, and 2338 ± 577 min, respectively. Mean GET, SLBTT, and TGTT during treatment with ranitidine were 719 ± 11, 1442 ± 885, and 2155 ± 897 min, respectively. Mean intra-individual CV% before ranitidine for GET, SLBTT, and TGTT were 12.9, 29.7, and 17.8%, respectively. Mean inter-individual CV% before treatment with ranitidine for GET, SLBTT, and TGTT were 16.1, 40.5, and 24.7%, respectively. No significant differences in GET, SLBTT, or TGTT were found at any of the 4 time points. The mean MGpH during treatment with ranitidine (pH 2.85) was significantly higher than at all other time points (overall mean pH for the 3 time points: 1.73; p < 0.05). The SmartPill system is an easy-to-use, ambulatory, non-invasive, non-radioactive method for assessing GI transit times in medium to large breed dogs. Measurements of GI transit times, especially SLBTT, were subject to considerable intra-individual and inter-individual variation. No significant effect of oral ranitidine on GI motility was identified in this group of dogs. However, as expected, oral ranitidine caused a significant increase in gastric pH.

The intestinal microbiota has been implicated in the pathogenesis of various gastrointestinal disorders in both humans and dogs. Recent metagenomic data suggest that specific bacterial groups, including bacteria within the Clostridium clusters IV and XIVa (i.e., Faecalibacterium spp., Ruminococcaceae, and Lachnospiraceae) and Bifidobacterium spp.
are decreased, while Proteobacteria are increased, in dogs with clinical signs of gastrointestinal disease. The objective of this study was to establish quantitative polymerase chain reaction (qPCR) assays for these specific bacterial groups and evaluate their abundance in healthy dogs and dogs with clinical signs of gastrointestinal disease. Fecal samples were collected from 21 healthy dogs (14 females and 7 males) and 20 dogs with clinical signs of gastrointestinal disease (11 females and 9 males). Novel quantitative PCR assays were established for Faecalibacterium spp., Ruminococcaceae, and Lachnospiraceae by aligning respective group-specific sequences against canine-specific sequences obtained from 16S rRNA gene clone libraries and sequences available from the Ribosomal Database Project. Primers for Bifidobacterium spp. and Proteobacteria were selected from previously published studies. The specificity of the qPCR assays was confirmed by sequencing of the obtained qPCR amplicons. Bacterial DNA abundance in fecal samples was compared between healthy dogs and dogs with clinical signs of gastrointestinal disease using a Mann-Whitney U test. Significance was set at p < 0.05. A significantly lower abundance of Faecalibacterium spp. (p < 0.01) and Ruminococcaceae (p = 0.02) was observed in dogs with clinical signs of gastrointestinal disease when compared to healthy dogs. Proteobacteria were more abundant in dogs with clinical signs of gastrointestinal disease, but this difference did not reach statistical significance (p = 0.07). There was no significant difference in the abundance of Lachnospiraceae (p = 0.23) or Bifidobacterium spp. (p = 0.77) between the two groups. In conclusion, we established novel qPCR assays for Faecalibacterium spp., Ruminococcaceae, and Lachnospiraceae. We observed significant decreases in the abundance of Faecalibacterium spp. and Ruminococcaceae in dogs with clinical signs of gastrointestinal disease.
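The Mann-Whitney U statistic underlying group comparisons like these can be computed from rank sums alone. A minimal sketch (average ranks for ties; no p-value, which would additionally require a reference distribution or normal approximation; function names are illustrative):

```python
def _ranks_with_ties(values):
    """1-based ranks, averaging the ranks of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with values[order[i]]
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def mann_whitney_u(group_a, group_b):
    """Smaller of U1/U2 for two independent samples (e.g. qPCR abundances)."""
    ranks = _ranks_with_ties(list(group_a) + list(group_b))
    r1 = sum(ranks[:len(group_a)])
    u1 = r1 - len(group_a) * (len(group_a) + 1) / 2
    return min(u1, len(group_a) * len(group_b) - u1)
```

Completely separated groups give U = 0, while heavily overlapping abundance distributions push U toward its maximum of n₁·n₂/2.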
These bacterial groups are considered major short-chain fatty acid producers, and studies are warranted to determine if a decrease in these bacterial groups is associated with decreased short-chain fatty acid production. Further studies are also needed to determine if these bacterial shifts are associated with specific gastrointestinal disorders.

The pathogenesis of chronic enteropathies (CE) in dogs likely involves complex interaction between the mucosal immune system and the intestinal microbiota. While the application of bacterial 16S rDNA sequence-based analysis has shown an association between altered microbial composition and duodenal inflammation in dogs, relatively little is known about alterations in non-invasive mucosal and luminal bacteria seen with diseases involving the ileum and colon. The present study sought to evaluate the relationship of enteric bacteria to the type and severity of mucosal inflammation affecting the ileum and colon of dogs with CE. Eleven client-owned dogs with CE involving both the small and large intestines were prospectively enrolled. CE was diagnosed on the basis of a history of chronic gastrointestinal signs, exclusion of identifiable underlying disorders, and histopathologic evidence of intestinal inflammation. Mucosal bacteria were detected in formalin-fixed ileal and colonic tissue sections with fluorescence in situ hybridization (FISH) using 16S rDNA-targeted probes directed against all bacteria, Enterobacteriaceae, E. coli, the Eubacterium rectale-Clostridium coccoides group, Bacteroides/Prevotella, and Helicobacter spp. Sections were examined by epifluorescence microscopy, and the number of bacteria and their spatial distribution (luminal, superficial mucus, epithelial adherent, within mucosa) was determined in ten 60× fields of each section. Microbial composition in CE dogs was compared to the ileal/colonic microbiota of 7 healthy control (HC) dogs using a mixed-effect ANOVA model. P values < 0.05 were considered significant.
The final diagnoses for dogs with CE included IBD (n = 8) and lymphosarcoma (LSA; n = 3). When compared to HC dogs, dogs with CE showed regional (ileum versus colon) imbalances in microbiota composition characterized by selective enrichment of mucosa-associated populations. Evaluation of colonic biopsies in dogs with CE showed that the total number of bacteria (p < 0.04), Clostridium (p < 0.005), Enterobacteriaceae (p < 0.04), and E. coli (p < 0.03) were increased in the adherent mucus regions of dogs with IBD as compared to HC dogs. Total bacteria (p < 0.05) and E. coli (p < 0.02) were also more numerous in dogs with LSA versus HC and IBD dogs (p < 0.04 for E. coli). Ileal biopsies from CE dogs similarly showed variable dysbiosis, with increased total bacteria (p < 0.05) but decreased Helicobacter spp. (p < 0.001) and Bacteroides (p < 0.02) observed within inflamed intestines as compared to HC tissues. The spatial distribution of these bacteria was also appreciably different from HC dogs, with higher numbers of bacteria generally found within the adherent mucus compartment as compared to other ileal regions. Our data demonstrate that dogs with CE affecting the ileum and colon have altered microbiota composition that may be a cause or a consequence of mucosal inflammation. Recognition of these microbiota imbalances may provide new opportunities for therapeutic intervention.

Trichomonads have rarely been reported in the feces of dogs, and their pathogenicity remains uncertain. Although Pentatrichomonas hominis (PH) is considered to be a commensal that may overgrow in dogs with other causes of diarrhea, little is known regarding the history, clinical presentation, or prevalence of concurrent GI infections in dogs with trichomonosis. The aim of this study was to determine whether dogs with diarrhea and trichomonosis could be distinguished from dogs having diarrhea without trichomonosis on the basis of clinical signs or the presence of concurrent enteric infections.
Fecal samples from 39 dogs were submitted to NCSU from 2007-2010 for Trichomonas spp. PCR testing. DNA was extracted using a ZR Fecal DNA Mini-Prep kit, and the absence of PCR inhibitors was verified by amplification of bacterial 16S rDNA. PCR for PH and Tritrichomonas foetus (TF) was performed, as well as real-time PCR assays for 9 possible concurrent enteric infectious agents. Obtainable medical records were reviewed. All fecal samples were submitted from dogs with diarrhea that was variably described as soft, mucoid, hemorrhagic, or watery. The mean age of the dogs was 2.33 years (median 1.9; range: 2.5-120 months), and a total of 24 breeds were represented. PH, TF, or concurrent PH and TF were diagnosed in 18, 1, and 1 dogs, respectively (group A). The remaining 19 dogs were negative for PH and TF by PCR. No dogs were identified as infected with canine distemper virus or parvovirus. Five samples from each group had insufficient quantity or quality of DNA for concurrent infectious disease testing. In this large study of canine trichomonosis, no differences in age, clinical signs, or prevalence and identity of concurrent enteric infection between diarrheic dogs with or without PH were identified. Thus, these findings do not appear to support a primary pathogenic role for PH as a causative agent of diarrhea in dogs.

Gastrointestinal motility disorders are a common clinical problem in domestic animals. Many GI motility disorders have previously been treated with 5-HT4 agonists, although the limited availability of drugs in this class has stimulated interest in the use of new (and old) drug therapies. The dopaminergic antagonists are a group of drugs with well-known anti-emetic effects at central dopamine D2 receptors, and putative gastrointestinal prokinetic effects at peripheral D receptors. Domperidone has been shown, for example, to reverse gastric relaxation induced by dopamine infusion in the dog.
similar studies have not been reported in the cat or rabbit, two species at risk for distal gastrointestinal motility disorders. our aim was to study the effects, mechanisms, and sites of action of domperidone in feline colonic and rabbit gastrointestinal smooth muscle contraction. portions of stomach (fundus and antrum), intestine (duodenum and ileum), cecum (rabbits only), and colon (ascending and descending) were obtained from healthy cats and rabbits from 9-12 months of age. longitudinal and circular smooth muscle strips from each site were suspended in physiologic (hepes) buffer solution, attached to isometric force transducers, and set to optimal muscle length (l₀) using acetylcholine (ach; 10⁻⁴ m). muscle strips were treated with domperidone (d; 10⁻⁸ to 10⁻⁴ m) in the presence or absence of ach (10⁻⁸ to 10⁻⁴ m), and maximal force output (p max) was normalized for cross-sectional area (×10⁴ newtons/m²). domperidone (d) had a minor direct effect of inducing feline and rabbit gastric, cecal, and colonic smooth muscle contraction. direct effects were similar whether in the longitudinal or circular muscle orientation. the direct effect of domperidone was dose-dependent and maximal (feline colon p max = 0.15-0.22 n; rabbit colon p max = 0.10-0.19 n) at a dose of 10⁻⁴ m. domperidone had a much greater indirect effect in augmenting cholinergic (ach; 10⁻⁴ m) contractions in feline and rabbit gastric, cecal, and colonic smooth muscle. domperidone-augmented cholinergic contractions were 157-200% (feline colon p max = 1.54 ± 0.24 n ach only; feline colon p max = 2.03 ± 0.23 n ach + d) of baseline cholinergic contractions. domperidone contractions were of a similar magnitude to those induced by cisapride. domperidone effects were similar in mucosa-intact and mucosa-dissected preparations. 
domperidone contractions were unaffected by prazosin (α1 receptor antagonist), yohimbine (α2 receptor antagonist), or terbutaline (β2 receptor agonist), but were somewhat attenuated by dopamine (d2 receptor agonist) and a non-specific cholinergic antagonist (atropine). in vitro studies show for the first time that domperidone has minor direct and major indirect effects in augmenting cholinergic contractions of feline and rabbit gastrointestinal (stomach, cecum and colon) smooth muscle. as recognition of acute and chronic pain in dogs has increased, so too has the use of non-steroidal anti-inflammatory drugs (nsaids), often in conjunction with tramadol. in people and rats, co-administration increases the risk of perforation and gastric injury over nsaids alone. using an ex vivo model of acid injury in canine gastric mucosa, we examined the effects of indomethacin and tramadol on gastric permeability and concentrations of gastroprotective prostaglandin e2 (pge2). mucosa from the gastric antrum was harvested from 5 shelter dogs immediately after euthanasia, and mounted on ussing chambers. the tissues were equilibrated for 30 minutes prior to addition of acidic ringer's solution (ph 1.2). after 45 minutes of injury, the acid was replaced with neutral ringer's and the tissues were treated with indomethacin, tramadol or both. tissues were maintained for 210 minutes total, during which time permeability was assessed electrically. prostanoid concentrations were quantified using a commercially available elisa. western blots were performed for cox-1 and -2. recovery of gastric barrier function after acid injury was inhibited by co-administration of tramadol and indomethacin (figure 1) but not by tramadol or indomethacin alone (data not shown). prostaglandin e2 increased with acid injury. the increase in pge2 was inhibited by co-administration of indomethacin and tramadol (in pg/ml: acid injury 681.25 ± 324.88, indo + tramadol 149.11 ± 35.24). 
there was no significant effect of treatment on cox-1 or -2 expression. co-administration of tramadol with a non-selective nsaid inhibits the return of gastric mucosal barrier function after acid injury in canine tissue, suggesting that caution is required in prescribing concurrent use of these drugs in dogs at risk for gastric ulcers. these drugs may exert this effect by decreasing levels of gastroprotective prostanoids. further study is needed to understand the mechanism of this drug interaction. an increased intestinal permeability (ip) has been suggested to be both cause and consequence of gastrointestinal (gi) disease, such as inflammatory bowel and celiac disease, in people. a novel tight junction regulator, larazotide acetate (alba therapeutics, baltimore, md) has been shown to significantly decrease ip in rats and in humans with celiac disease. the purpose of this study was to determine if larazotide acetate reduces ip in soft coated wheaten terriers (scwt) and norwegian lundehunds (nl) with chronic gi disease and ameliorates clinical signs. four nl (2 females, 2 males; median age: 2.5 yrs, range: 1.5-4.5 yrs) and 9 scwt (7 females, 2 males; median age: 5.0 yrs, range: 1.5-9.0 yrs) were enrolled based on presence of clinical signs of gi disease and hypoalbuminemia, increased fecal alpha-1-proteinase inhibitor (α1-pi) concentrations, and/or hypocobalaminemia. scwt with protein-losing nephropathy were excluded. dogs were fed q12 hrs and received 0.5 mg (4 nl and 2 scwt) or 2.0 mg (7 scwt) of larazotide acetate po before each meal for 90 days. prior to start of treatment (day 1) and at the end (day 90), ip was evaluated by calculating the lactulose/rhamnose (l/r)-ratio in serum samples obtained at 30, 60, 90, and 120 min after oral dosing. also, 3 consecutive fecal samples each were collected prior to day 1 and day 90 for n-methylhistamine (nmh) measurement. pre- and post-treatment data were compared using a wilcoxon signed rank test. the 0.5 mg vs. 
2.0 mg dose groups were compared using a mann-whitney u test. statistical significance was set at p < 0.05. l/r-ratios (medians) for the 60 min sampling time point were significantly lower on day 90 (0.046) than on day 1 (0.080; p = 0.018). dogs treated with 2.0 mg q12 hrs had significantly lower 60 min l/r ratios on day 90 than dogs treated with 0.5 mg (0.033 vs. 0.094; p = 0.014). no difference was found between breeds. fecal nmh concentrations were not different between time points, treatment groups, or breeds. fecal α1-pi concentrations were available for 11 of the 13 dogs and were significantly higher on day 90 compared to day 1 (p = 0.033). no differences were found between pre- and post-treatment serum albumin or cobalamin concentrations. weight gain was seen in all 4 nl. resolution of diarrhea, vomiting, hyporexia, as well as an increased activity was seen in 1 scwt. another scwt had resolution of diarrhea and a decrease in pruritus. no changes in clinical signs were reported in the remaining 7 scwt. this study indicates that larazotide acetate might be able to reduce ip in dogs. this effect may be dose-dependent. however, not all dogs showed an improvement in clinical signs, suggesting that factors other than increased ip might have been responsible for the clinical signs in these dogs. breed-related effects cannot be ruled out, and further studies are warranted to determine the efficacy of larazotide acetate in dogs of other breeds with gi disease. to analyze different biochemical markers, calculate clinical activity scores, and assess survival in dogs with ple and compare them with those in dogs with food-responsive diarrhea (frd) without protein loss. 29 dogs with ple and 18 dogs with frd, referred to the university of bern, ch, were enrolled. selection criteria included a history of chronic diarrhea (> 3 weeks), exclusion of identifiable underlying causes, and histopathologic evidence of intestinal inflammation, but not neoplasia. 
underlying disorders were excluded based on cbc, chemistry profile, urinalysis, fecal analysis, trypsin-like immunoreactivity, cobalamin, folate, and transabdominal ultrasound. also, canine pancreatic lipase immunoreactivity (spec cpl®), c-reactive protein (crp), calprotectin and alpha-1-proteinase inhibitor (α1-pi) were measured in serum from 18 dogs and compared with 18 dogs with frd without ple. all dogs were scored using the canine ibd (cibdai) and the canine chronic enteropathy (cce) clinical activity index (ccecai). total protein, albumin (5-28.1 g/l), and total calcium (1.21-2.32 mmol/l) were decreased in all 29 dogs. cobalamin was decreased in all but 3 dogs (< 100-490 ng/l). spec cpl was mildly increased in 3/18 dogs with ple and normal in 15/18 ple and all frd dogs. crp was normal in 5/18 dogs with ple (16/18 frd), mildly increased in 7/18 (1/18 frd), and moderately increased in 6/18 ple dogs (1/18 frd). calprotectin was slightly higher in dogs with ple, but all ple and frd dogs yielded values in the normal range. serum α1-pi was significantly lower in dogs with ple than in those with frd (p < 0.001), with 13/18 ple dogs below the reference range (1/18 frd). cibdai ranged from 4 to 16 and ccecai from 6 to 19. at the end of the study, 12/29 dogs were still alive with survival times between 26 and 2544 days. 17/29 dogs died with a median survival of 96 days (range 2-874 days). dogs with mildly increased crp died earlier than dogs with a normal or moderately increased crp (p = 0.011), whereas albumin, calcium, spec cpl, calprotectin, cibdai, and ccecai had no significant impact on outcome and survival. in conclusion, dogs with ple have a significantly lower α1-pi in the serum than dogs with frd. furthermore, most dogs with ple have an increased crp and a decreased cobalamin. a mild increase in crp appears to be a poor prognostic factor. 
while hypoalbuminemia is a common finding associated with chronic enteropathies, its impact on survival in this population is poorly defined. the aim of this study was to compare dogs with chronic enteropathies on the basis of their serum albumin concentration at the time of presentation. we hypothesized that dogs with a protein losing enteropathy (ple) have a significantly shorter survival time compared to dogs with chronic enteropathies which are not hypoalbuminemic (controls). information obtained from the medical records included signalment, duration and characteristics of clinical signs, physical examination findings, clinicopathologic data and survival time. one hundred seventeen cases fit the inclusion criteria; 68 in the ple group and 49 controls. there was no statistical significance between groups for age (p = 0.12), weight (p = 0.17), weight loss (p = 0.59) and body condition score (p = 0.072). compared to control dogs, ple dogs had decreased serum concentrations of cobalamin (p = 0.002), total calcium (p < 0.0001), globulin (p < 0.0001), cholesterol (p < 0.0001) and ionized calcium (p < 0.001). survival analysis revealed a significantly decreased survival time for ple dogs (p = 0.0008); median survival was 701 days for ple dogs and > 3,500 days for controls. while the ple group did not survive as long, survival was not directly associated with severity of hypoalbuminemia; patients with albumin concentration < 1.3 g/dl survived longer than those with mild hypoalbuminemia (1.6-1.9 g/dl). this study supports the observation that chronic enteropathy patients have decreased survival time when presented with hypoalbuminemia; however, this study suggests the severity of hypoalbuminemia is not a reliable indicator of survival. cobalamin (vitamin b12) deficiency in the chinese shar pei (shar pei) is suspected to be hereditary. 
inherited causes of cobalamin deficiency have been reported in humans and may affect absorption, transport, or cellular processing of cobalamin. based on human and veterinary studies, an increased serum methylmalonic acid (mma) concentration has been suggested to reflect cobalamin deficiency at the cellular level. in this context, it has been shown in humans that mma concentrations are higher in patients with genetic disorders affecting intracellular processing than in patients with genetic defects affecting gastrointestinal processing and extracellular transport of cobalamin. therefore, the aim of this study was to evaluate serum mma concentrations in shar peis and dogs of six other breeds with cobalamin deficiency, from 2008. in conclusion, serum cobalamin deficient shar peis had a 10 times higher median serum mma concentration compared to cobalamin deficient dogs of six other dog breeds. further studies are needed to investigate the intracellular processing of cobalamin in shar peis with cobalamin deficiency. chinese shar peis (shar peis) have a high prevalence of cobalamin deficiency. two other conditions reported frequently in this breed are shar pei fever and cutaneous mucinosis. shar pei fever is an autoimmune disorder causing periodic flare-ups and is associated with increased serum concentrations of c-reactive protein (crp), a nonspecific marker of inflammation. cutaneous mucinosis is characterized by excessive deposition of mucin in the dermis. also, hyaluronic acid (ha), the main component of mucin, was shown to be significantly higher in serum from shar peis with cutaneous mucinosis than in healthy controls. to date, a possible association between shar pei fever and/or cutaneous mucinosis on one side and cobalamin deficiency on the other has not been investigated in shar peis. 
thus, the aim of this study was to compare serum concentrations of ha (an indicator of cutaneous mucinosis) and inflammatory markers (crp, calprotectin, and s100a12), assumed to be increased in episodes of shar pei fever, in shar peis with and without cobalamin deficiency. serum samples from 40 shar peis, collected from 2008 to 2010, were analyzed. serum ha and crp (reference interval (ri): 0.0-7.6 mg/l) were quantified by using commercial elisa kits (echelon biosciences, salt lake city, ut, usa and tridelta, maynooth, ireland; respectively). serum calgranulin concentrations were measured using an in-house elisa (calprotectin; ri: 0.9-11.9 mg/l) and ria (s100a12; ri: 33.0-233.0 mg/l), respectively. mann-whitney u tests were used to compare serum ha, crp, calprotectin, and s100a12 concentrations between shar peis with and without cobalamin deficiency. significance was set at p < 0.05. fourteen shar peis were severely cobalamin deficient, defined by an undetectable serum cobalamin concentration (< 150 ng/l). in the remaining 26 dogs, serum cobalamin concentrations were within the reference interval (251-908 ng/l). serum concentrations of ha, crp, calprotectin, and s100a12 were not significantly different between cobalamin deficient shar peis (medians: 649.9 ng/ml, 4. …). fifty percent of cobalamin deficient shar peis had serum calprotectin concentrations above the upper limit of the reference interval, and 43% had serum s100a12 concentrations above the suggested upper reference limit. in this study, serum concentrations of ha, crp, and the calgranulins did not differ between cobalamin deficient shar peis and shar peis with a normal serum cobalamin concentration. this finding leads us to speculate that increased ha and/or inflammatory markers are not associated with cobalamin deficiency in shar peis. further studies are needed to investigate serum cobalamin concentrations in patients with shar pei fever or cutaneous mucinosis. 
cobalamin deficiency (cd) has been associated with gastrointestinal and pancreatic disease in dogs. hereditary cd has been demonstrated in giant schnauzers and single case reports have suggested congenital cd in the border collie (bc) breed. clinicopathologic findings of cd vary and can be unspecific as cobalamin acts as a co-factor for a multitude of enzymatic reactions. the two most important reactions concern the conversion of methylmalonyl-coa to succinyl-coa and the re-methylation of homocysteine (hcy). these two metabolites increase when cobalamin is lacking and act as markers for cobalamin availability on a cellular level. preliminary data from dogs suggested that measurement of methylmalonic acid (mma) may be a better diagnostic test for cd than serum cobalamin concentration. therefore the goals of the study were (1) to establish reference values for serum cobalamin, urine mma and plasma hcy in healthy pet dogs, (2) to screen a larger bc population from switzerland for cd, and (3) to perform genomic analyses on bc with cd. for determination of reference values 35 healthy pet dogs were used. serum cobalamin was measured using an automated chemiluminescence assay (immulite 2000), urine mma was determined using gas chromatography and expressed as a ratio to urine creatinine, and plasma hcy was measured using high pressure liquid chromatography and fluorimetric detection. to calculate reference ranges the 10th and 90th percentile were used. data were analyzed using non-parametric tests. reference ranges for cobalamin, hcy, and mma were: cobalamin 279.2-972.8 ng/l; urine mma 2-4.65 mmol/mol creatinine; and plasma hcy 4.73-18.34 µmol/l. the screened bc population comprised 113 purebred dogs, and 4 bc (median age 11.5 months; range 8-41 months) suffering from congenital cd could be identified. clinical signs differed and consisted of tiredness (4), stunted growth (4), anemia (3), dysphagia (2) and persistent fever (1). 
median (ranges) results for healthy bc and bc with cd were: for cobalamin 592 (142-1855) and 72.00 (30-139) ng/l; for urine mma 2 (2-360) and 4148 (1800-6665) mmol/mol creatinine; for hcy 8.5 (2.8-22.4) and 41.00 (40-86.6) µmol/l. strikingly, healthy bc with cobalamin concentrations well within the reference range had significantly higher urine mma concentrations compared to control dogs. under the assumption that the four affected bc are inbred to a single founder animal, first results of genotyping on the 170k illumina canine_hd snp chip suggest that mutations in the cubn and amn gene can be excluded to cause the observed cd in these dogs. we conclude that cd is a rare familial disease in bc with variable clinical signs. to define the genomic region responsible for cd further genetic analysis is in progress. it remains to be determined why some bc have high urine mma concentrations despite a serum cobalamin concentration within the reference range. calprotectin is a protein complex that plays an important role in the innate immune response. preliminary data suggest that canine calprotectin (ccp) is a useful marker for the detection of inflammation in dogs. recently, a radioimmunoassay for the measurement of ccp has been developed and analytically validated, but this test requires the use of a radioactive tracer. therefore, the aim of this study was to develop and analytically validate an enzyme-linked immunosorbent assay (elisa) for the quantification of ccp in serum and fecal specimens from dogs. canine calprotectin (ccp) was purified, antiserum against purified ccp was raised in rabbits, monospecific antibodies were purified by affinity chromatography, and a sandwich-elisa was developed. purified antibodies were used for capturing and, after coupling with horseradish peroxidase (hrp), for reporting. a hrp substrate was used for color development. 
the assay was analytically validated by determination of analytical sensitivity and specificity, dilutional parallelism, spiking recovery, and intra- and inter-assay variability. control intervals for serum and fecal ccp were established from 110 and 52 healthy pet dogs, respectively, using the central 95th percentile. sensitivity of the assay for serum samples assayed in a 1:400 dilution and for fecal extracts assayed in a 1:4,000 dilution was 0.3 mg/l and 3.2 mg/g, respectively. over a wide range of the assay, there was no cross-reactivity with cs100a12, the closest structural analogue of ccp available. observed to expected ratios (o/e) for serial dilutions ranged from 83.2-118.5% (mean ± standard deviation [sd]: 101.3 ± 10.0%) for four different serum samples, and from 81.7-129.1% (mean ± sd: 101.8 ± 14.0%) for five different fecal extracts. o/e for spiking recovery ranged from 87.8-130.4% (mean ± sd: 100.6 ± 6.5%) for four different serum samples and 6 different spiking concentrations, and from 95.9-152.0% (mean ± sd: 104.6 ± 11.6%) for 4 different fecal extracts and 6 different spiking concentrations. intra-assay coefficients of variation (cv) for 4 different serum samples were 7.8, 5.0, 7.4, and 12.7%, and 10.0, 6.1, 6.2, and 7.0% for 4 different fecal extracts. inter-assay cv for 4 different serum samples were 17.2, 8.1, 9.9, and 12.6%, and 12.3, 8.3, 7.2, and 9.6% for 4 different fecal extracts. the control intervals for serum and fecal ccp were established as 0.9-11.9 mg/l and 3.2-65.4 mg/g, respectively. we conclude that this new elisa for the measurement of ccp is analytically sensitive, linear, accurate, precise, and reproducible, and does not cross-react with canine s100a12. further studies evaluating the clinical usefulness of measuring serum and/or fecal ccp are currently under way. 
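the validation arithmetic used in the elisa abstract above (observed-to-expected recovery, intra-/inter-assay coefficient of variation, and percentile-based control intervals) is standard; a minimal python sketch, using illustrative numbers rather than the study's data:

```python
from statistics import mean, stdev

def oe_ratio(observed, expected):
    """observed-to-expected ratio in percent (dilutional parallelism / spiking recovery)."""
    return 100.0 * observed / expected

def cv_percent(replicates):
    """coefficient of variation in percent (intra- or inter-assay precision)."""
    return 100.0 * stdev(replicates) / mean(replicates)

def central_interval(values, lower_pct=2.5, upper_pct=97.5):
    """central percentile interval (nearest-rank on the sorted values);
    defaults give a central 95th-percentile control interval."""
    s = sorted(values)
    n = len(s)
    lo = s[max(0, round(lower_pct / 100 * (n - 1)))]
    hi = s[min(n - 1, round(upper_pct / 100 * (n - 1)))]
    return lo, hi

# illustrative values only (not the study's measurements)
print(oe_ratio(90.0, 100.0))          # recovery of a spiked sample, percent
print(cv_percent([10.0, 11.0, 9.0]))  # precision of replicate measurements
```

the same `central_interval` helper, called with 10 and 90, reproduces the 10th/90th-percentile reference-range convention used in the border collie cobalamin abstract.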
the syndrome of hemorrhagic gastroenteritis (hge) is characterized by a peracute onset of hemorrhagic diarrhea, vomiting, depression, and anorexia, and can be associated with a high mortality if untreated. the etiology of hge is unknown, but it is speculated that an abnormal response to bacterial endotoxins, bacteria, or dietary components may play a role. hge is characterized by an increased vascular/mucosal permeability, thought to represent a type i-hypersensitivity reaction, whereas inflammation and necrosis appear to be rare. however, markers of gastrointestinal (gi) inflammation and changes in the intestinal microbiota have not been studied extensively in dogs with hge. therefore, the aim of this study was to evaluate fecal canine calprotectin (cp) and s100a12 (a12), α1-proteinase inhibitor (α1-pi, a marker of gi protein loss), and bacterial groups that have previously been shown to be decreased (i.e., faecalibacterium spp., ruminococcaceae, bifidobacterium spp.) or increased (i.e., proteobacteria) in fecal samples from dogs with hge. fecal samples from 3 consecutive days were collected from 7 dogs with hge. fecal cp, a12, and α1-pi concentrations were measured by in-house immunoassays. bacterial dna was extracted from each fecal sample and was analyzed for faecalibacterium spp., proteobacteria, ruminococcaceae, and bifidobacterium spp. using quantitative pcr assays. concentrations of fecal cp, a12, and α1-pi, and the abundance of bacterial dna were compared using a friedman test with dunn's post-hoc tests. significance was set at p < 0.05. at the time of diagnosis (day 1), fecal cp, a12, and α1-pi were above the suggested reference intervals in 6, 6, and 5 of the 7 dogs, respectively. until day 3, this number decreased to 2, 1, and 4, respectively. decreases in concentrations were significant between days 2 and 3 for a12 (p = 0.016), and between days 1 and 3 for α1-pi (p = 0.012), but not for cp despite a trend (p = 0.085). 
no differences in the abundance of faecalibacterium spp. (p = 0.085), bifidobacterium spp. (p = 0.192), or proteobacteria (p = 0.305) were observed. however, the abundance of ruminococcaceae was significantly lower on day 3 when compared to day 2 (p = 0.008). in this study, fecal markers of inflammation and gi protein loss were increased in dogs with hge. although the number of patients was small, following initiation of treatment, two of the markers decreased significantly. these results suggest a loss of protein into the gi tract at the onset of hge. the lack of significant increases of faecalibacterium spp., bifidobacterium spp., and ruminococcaceae, and decreases in proteobacteria may suggest gi dysbiosis. further longitudinal studies are needed and are currently under way to evaluate gi dysbiosis in canine hge patients. the most recent antiemetic approved for use in dogs is maropitant citrate (cerenia®, pfizer animal health). maropitant is a selective nk1 receptor antagonist that acts by blocking the binding of substance-p within the emetic center and chemoreceptor trigger zone. label dosage recommendations for maropitant citrate are 1 mg/kg sc or 2 mg/kg orally once daily for up to 5 consecutive days (acute emesis) and 8 mg/kg orally once daily for up to 2 consecutive days (motion sickness). the study objective was to determine when steady-state is reached and the pharmacokinetics of maropitant administered at label oral dosages once daily for 14 consecutive days. two groups of eight healthy beagles were administered maropitant citrate at 2 or 8 mg/kg orally once daily for 14 days. concentrations of maropitant and its metabolite were measured in plasma using a lc-ms/ms assay. pharmacokinetic parameters were estimated using non-compartmental pharmacokinetic techniques and a modeling approach was used to estimate steady-state. 
the accumulation ratio for maropitant was 2.46 (auc0-24) and 2.03 (cmax) for the 2 mg/kg dose; and 4.81 (auc0-24) and 2.77 (cmax) for the 8 mg/kg dose after 14 days. the model estimate for the number of doses required to reach 90% of steady-state was 4.30 for 2 mg/kg and 8.09 for 8 mg/kg. three dogs experienced a single episode of vomiting. dosing maropitant citrate beyond the label duration was well tolerated by healthy dogs. steady-state was reached after approximately 4 doses for daily 2 mg/kg and 8 doses for daily 8 mg/kg oral dosing. previously presented at the veterinary cancer society, november 2010. cobalamin (vitamin b12) is involved in a variety of metabolic processes. altered serum cobalamin concentrations have been observed in dogs with gastrointestinal disorders, such as exocrine pancreatic insufficiency (epi) or severe and longstanding ileal disease. this study was conducted to identify breeds with a higher proportion of decreased serum cobalamin concentrations among samples submitted to the gastrointestinal laboratory. the study was also aimed at investigating serum trypsin-like immunoreactivity (tli) concentrations that were diagnostic for epi in the dogs with a decreased serum cobalamin concentration. except for the csp, breeds identified here have not previously been identified to have a higher rate of a decreased serum cobalamin concentration. also, a possible association between an undetectable serum cobalamin and a decreased serum tli in ai needs to be further investigated. calprotectin (cp) is a widely used marker for the diagnosis and monitoring of gastrointestinal (gi) inflammation in humans. studies in humans usually report fecal cp concentrations based on a single stool sample although considerable day-to-day variability of fecal cp was found in patients with gi disease and in healthy controls. 
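the steady-state figures in the maropitant pharmacokinetics abstract above follow standard first-order accumulation arithmetic; a minimal sketch, assuming linear (first-order) elimination with an illustrative half-life rather than the study's estimates (the dose-dependent accumulation reported above suggests nonlinear kinetics that this linear model does not capture):

```python
import math

def accumulation_ratio(half_life_h, tau_h):
    """predicted accumulation ratio r = 1 / (1 - e^(-k*tau)) for repeated
    dosing at interval tau under first-order elimination."""
    k = math.log(2) / half_life_h  # elimination rate constant
    return 1.0 / (1.0 - math.exp(-k * tau_h))

def doses_to_fraction_ss(half_life_h, tau_h, fraction=0.9):
    """number of doses n such that 1 - e^(-n*k*tau) equals the given
    fraction of steady state (fraction=0.9 matches the '90% of
    steady-state' criterion used in the abstract)."""
    k = math.log(2) / half_life_h
    return -math.log(1.0 - fraction) / (k * tau_h)

# illustrative: a drug whose half-life equals the dosing interval
# accumulates about 2-fold and needs ~3.3 doses to reach 90% of steady state
print(accumulation_ratio(24.0, 24.0))
print(doses_to_fraction_ss(24.0, 24.0))
```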
intra-individual variation of canine cp (ccp) was also substantial in a small number of healthy dogs but has not been determined in dogs with chronic gi disease. thus, the aim of this study was to compare the day-to-day variation of fecal ccp in dogs with chronic gi disease before and during treatment to that in healthy dogs. we hypothesized that fecal ccp would be less variable in patients with chronic gi disease than in healthy controls, and thus collection of a single fecal sample would be sufficient. fecal samples from 3 consecutive days were prospectively collected from 15 dogs (group a; median age: 6.1 years) referred for diagnostic work-up of chronic signs of gi disease, from 8 dogs (group b; median age: 5.6 years) with stable gi disease while being treated, and from 44 healthy adult dogs (group c; mean age: 5.4 years). fecal samples were extracted and ccp was measured by an in-house immunoassay. mean ccp, standard deviation, coefficient of variation (cv), and difference between maximum and minimum ccp for the 3-day sample collection period were calculated for each dog and were compared among groups using a kruskal-wallis test. fecal ccp ranged from 2.9-102.7 mg/g (median: 17.6 mg/g) in dogs with gi disease (group a), from 2.9-265.1 mg/g (median: 18.4 mg/g) in dogs of group b, and from 2.9-93.1 mg/g (median: 7.9 mg/g) in healthy controls (group c). cvs were 0-121.3% in group a (median: 20.0%), 2.0-89.4% in group b (median: 47.4%), and 0-145.2% in group c (median: 40.4%), respectively. patients in group a appeared to have less variable fecal ccp than dogs in groups b and c, but this difference was not significant (p = 0.326). the difference between maximum and minimum ccp for the 3-day sample collection ranged from 0-31.1 mg/g in group a (median: 7.2 mg/g), from 0.1-240.0 mg/g in group b (median: 12.5 mg/g), and from 0-57.4 mg/g in group c (median: 14.8 mg/g), and was not significantly different between any of the groups (p = 0.530). 
in this study, considerable day-to-day variation of fecal ccp was found in dogs with chronic gi disease (regardless of treatment) and was comparable to that in healthy dogs. results of this study suggest that for evaluating fecal ccp in dogs with clinical signs of gi disease, three consecutive fecal samples rather than a single fecal sample should be analyzed. because we did not intend to evaluate the clinical usefulness of fecal ccp as a marker of gi disease in dogs, disease severity, quality, and location differed among dogs in groups a and b. the diagnostic utility of fecal ccp in dogs with gi disease is currently being investigated. it has been suggested that diagnosis of clostridium perfringens related enteropathy should be based on the detection of the c. perfringens enterotoxin gene (cpe-gene) by pcr and/or c. perfringens enterotoxin (cpe) by elisa in feces. however, the prevalence of the cpe-gene and cpe in dogs and especially cats with gastrointestinal disease has not yet been reported. also, there is limited information about the stability of cpe in fecal samples at various storage conditions. the aim of this study was to evaluate the prevalence of the cpe-gene and cpe and the stability of cpe in fecal samples from dogs and cats. to evaluate the prevalence of the cpe-gene, a total of 481 fecal samples from dogs and cats with clinical signs of gastrointestinal disease (273 dogs and 208 cats) and 109 fecal samples from those without such signs (80 dogs and 29 cats) were examined using pcr. to evaluate the prevalence of cpe, a total of 90 fecal samples from dogs and cats with clinical signs of gastrointestinal disease (31 dogs and 59 cats) and 11 dogs without such signs were evaluated using a commercially available elisa kit (techlab, blacksburg, va). the results were analyzed using a fisher's exact test. significance was set at p < 0.05. 
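several of these abstracts compare prevalences between two groups with fisher's exact test on a 2×2 table; a minimal stdlib sketch of the two-sided test (in practice one would typically use a statistics package such as scipy.stats.fisher_exact; the numbers below are the classic "lady tasting tea" example, not data from these studies):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """two-sided fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    sums the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p(x):  # probability of a table with x in the top-left cell
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)
    p_obs = p(a)
    lo = max(0, col1 - (n - row1))  # smallest feasible top-left cell
    hi = min(row1, col1)            # largest feasible top-left cell
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))

# classic example table [[3, 1], [1, 3]]; two-sided p = 34/70 ≈ 0.486
print(fisher_exact_two_sided(3, 1, 1, 3))
```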
to evaluate the stability of cpe, 8 fecal samples from dogs and 2 from cats with clinical signs of gastrointestinal disease that were positive for cpe were examined. also, 5 cpe negative samples from dogs were evaluated as negative controls. each sample was subdivided into 8 aliquots and evaluated on day 0; on days 2, 5, and 10 after being stored at room temperature (rt) or 4°c; and on day 10 after being stored at −20°c. the prevalence of the cpe-gene was not significantly different between dogs with signs of gastrointestinal disease (99/273; 36.3%) and dogs without (27/80; 33.8%; p = 0.79). also, the prevalence of the cpe-gene in cats with signs of gastrointestinal disease (80/208; 38.5%) was not significantly different compared to cats without (6/29; 20.7%; p = 0.06). pcr and elisa results were available for 90 samples. of the 65 pcr positive samples, only 6 (9.2%) were elisa positive. of the 35 pcr negative samples, only 1 (2.9%) was elisa positive. the prevalence of cpe was not significantly different between dogs with clinical signs of gastrointestinal disease (3/31; 9.7%) and those without (1/11; 9.0%; p = 1.0). the prevalence of cpe in cats with signs of gastrointestinal disease was 4/59 (6.8%), but no samples from cats without such signs were available. when evaluating the stability of cpe, results for all aliquots were consistent with the initial result, except for one sample (on day 5, stored at rt, which was initially cpe positive). these results indicate that only a small proportion of samples that are pcr positive for the cpe-gene are also positive for cpe. studies are warranted to further compare the prevalence of cpe among animals with gastrointestinal disease and those without. furthermore, the results indicate that cpe is relatively stable in fecal samples at various storage temperatures. clostridium perfringens has been implicated as a cause of diarrhea in dogs. 
the main study objective was to compare two culture methods for the identification of c. perfringens. a secondary objective was to evaluate c. perfringens toxin genes α, β, β2, ε, ι, and cpe from canine isolates using a multiplex pcr and determine their prevalence in a group of normal and diarrheic dogs. fecal samples were collected from clinically normal (nd, n = 105) and diarrheic dogs (dd, n = 54) at a primary care veterinary facility. isolation of c. perfringens was performed using direct inoculation of feces onto 5% sheep blood agar (sba) as well as enrichment of stool in bhi broth followed by inoculation onto sba. isolates were tested by multiplex pcr for the presence of the α, β, β2, ε, ι, and cpe genes. c. perfringens was isolated from 84% (88/105) of nd fecal samples using direct culture and 87.6% (92/105) with bhi enrichment (p = 0.79). in the dd group, corresponding isolation rates were 90.7% and 93.8% (p = 0.45). all isolates possessed the α toxin gene. the β, β2, ε, ι, and cpe toxin genes were identified in 4.4%, 1.1%, 3.3%, 1.1%, and 14.4% of nd isolates, respectively. in the dd group, β and β2 were identified in 5%, ε and ι were not identified, and the cpe gene was found in 16.9% of isolates. bhi enrichment did not significantly increase the yield of c. perfringens compared to sba but increased the time and cost involved. c. perfringens (p = 0.64) and c. perfringens toxin genes were present in equal proportions in the nd and dd groups (p ≥ 0.15). culture of c. perfringens and pcr for toxin genes are of limited diagnostic utility due to the high prevalence of c. perfringens in normal dogs and the lack of apparent difference in toxin gene distribution between normal and diarrheic dogs. endoscopic biopsies are a relatively convenient, non-invasive test for feline infiltrative intestinal disorders. commonly, only the duodenum is examined due to the cost, risks, and time required to prepare the colon using lavage solutions, cathartics, and/or enemas. 
the purpose of this study was to evaluate the consistency between endoscopic biopsies of the duodenum and ileum in cats. endoscopic biopsies from 70 cats that had duodenal and ileal tissue specimens were evaluated retrospectively. all slides were randomized and reviewed by a single pathologist (jm) for quality, number of biopsies, and diagnosis according to wsava standards. no information regarding history, clinical signs, endoscopic findings, or previous histological diagnosis was made available to the pathologist. statistical comparison of the diagnosis of sc-lsa and ibd by intestinal location was conducted using fisher's exact test (p < 0.05 considered significant). 18 of 70 cats (25.7%) were diagnosed with sc-lsa in the duodenum and/or ileum. of these 18 cats, 7 (38.9%) were diagnosed with only duodenal sc-lsa, 8 (44.4%) were diagnosed with only ileal sc-lsa, and 3 (16.7%) had sc-lsa in both duodenum and ileum. in the 8 cats with only ileal sc-lsa, 3 had severe ibd in duodenal biopsies, possibly consistent with early sc-lsa. 5 of these 8 had duodenal biopsies without evidence of sc-lsa. our results suggest there is a population of cats in which the diagnosis of sc-lsa may only be found by evaluating ileal biopsies. clinicians should consider performing both upper and lower gi endoscopic biopsies in cats with suspected infiltrative small bowel disease. periodontitis is one of the most common diseases in cats and is mainly due to the presence of plaque and calculus. in this study, we investigated putative correlations between dental tartar and gingivitis and also between gingivitis and subgingival bacteria in cats. twelve cats (median age: 5 years; range: 1-10 years; 6 dsh and 6 persians; 8 females and 4 males) were enrolled. dental tartar was obtained during scaling for a dental prophylactic procedure. all cats were negative for felv and fiv infection as assessed by a commercial elisa test (snap® fiv/felv combo test). 
severity of gingivitis (scores: 0-3; 0 = normal, 1 = mild, 2 = moderate, and 3 = severe) and dental tartar (scores: 0-3) were scored in each cat. endodontic paper points were applied for collecting a bacterial sample from the subgingival area and transferred to thioglycollate transport media for bacterial culture. the relationship between gingivitis and tartar thickness scores was analyzed by spearman correlation. a student's t-test was used to compare the mean differences (gingivitis and tartar thickness scores) between upper and lower teeth. the association between severity of gingivitis and bacterial type was tested by chi-square test. the spearman correlation coefficient for the average gingivitis score and the average tartar thickness score was 0.91 (p < 0.05). interestingly, the average tartar thickness scores from the upper jaw were significantly higher than those from the lower jaw (p < 0.05). the highest scores were found for the molar teeth in all cats. bacterial culture revealed 28.9% anaerobic bacteria species (i.e., bacteroides spp., peptostreptococcus anaerobius, and eubacterium aerofaciens) and 71.1% aerobic bacteria species (i.e., pasteurella multocida, streptococcus spp., enterococcus spp., staphylococcus spp., bacillus cereus, escherichia coli, and pseudomonas aeruginosa). anaerobic bacteria were found mostly in cats with higher gingivitis scores (2-3; chi-square: p < 0.05), while pasteurella multocida was found mostly in cats with lower gingivitis scores (0-1; chi-square: p < 0.05). antimicrobial sensitivity testing indicated that all of the anaerobic bacteria were sensitive to clindamycin, chloramphenicol, metronidazole, cefoxitin, or tetracycline, 90% were sensitive to erythromycin, and 80% were sensitive to penicillin. the most abundant aerobic bacterial species, pasteurella multocida, was sensitive to cefoxitin in all cases in which it had been cultured. 
these results suggest that anaerobic bacteria may be associated with the pathogenesis of severe gingivitis. these data warrant further studies of the prophylactic use of antibiotics in cats undergoing dental prophylactic procedures. inflammatory bowel disease is the most common cause of vomiting and diarrhea in dogs. although it can occur in any canine breed, certain breeds are more susceptible. we have previously shown that polymorphisms in the tlr4 and tlr5 genes are significantly associated with inflammatory bowel disease (ibd) in the german shepherd dog (gsd), a breed at risk of developing this disease. it would be useful to determine if these polymorphisms are significant in other canine breeds, as this may allow the development of novel diagnostics and therapeutics to be applied to all canine breeds with ibd. therefore the aim of this study was to investigate whether polymorphisms in the canine tlr4 and tlr5 genes are associated with ibd in other, non-gsd canine breeds. four non-synonymous snps in the tlr4 gene (t23c, g1039a, a1572t, and g1807a) and three non-synonymous snps in the tlr5 gene (g22a, c100t, and t1844c), previously identified in a mutational analysis in gsds with ibd, were evaluated in a case-control study using a snapshot multiplex reaction. sequencing information from 85 unrelated dogs with ibd, comprising 38 different non-gsd breeds from the uk, was compared to a breed-matched control group consisting of 162 unrelated dogs from patients treated for non-inflammatory disease at the royal veterinary college, london, uk. as in the gsd ibd population, the two tlr5 snps (c100t and t1844c) were found to be significantly protective against ibd in the other breeds included in this study (p = 0.023 and p = 0.0195, respectively). this study confirms the protective effects of the two tlr5 snps (c100t and t1844c) in other canine breeds with ibd. 
this highlights the importance of tlr5 in the pathogenesis of canine ibd and may represent common pathological pathways of ibd in different canine breeds due to the high degree of haplotype sharing seen among breeds. this may allow for the future expansion of novel diagnostics and therapeutics to be applied to all canine breeds with ibd. further functional studies looking at the role of tlr5 in the pathogenesis of canine ibd are needed to confirm these findings. toll-like receptor 5 (tlr5) is an extracellular pattern recognition receptor belonging to the innate immune system. we have recently shown that three non-synonymous single nucleotide polymorphisms (snps) in the tlr5 gene (g22a, c100t and t1844c) are significantly associated with inflammatory bowel disease (ibd) in german shepherd dogs. in addition, we confirmed that two of these tlr5 snps (c100t and t1844c) were significantly associated with ibd in a population consisting of 38 different dog breeds. in order to determine if other novel snps exist in the tlr5 gene in addition to the ones identified in the gsd population, mutational analysis was carried out in seven boxer dogs with ibd. polymerase chain reaction was performed to amplify the tlr5 coding region in the seven dogs with ibd. sequencing was performed using sequence-based typing with the abi prism sequencing kit (applied biosystems, uk) and analyzed using an abi3100 automated sequencer (pe applied biosystems). sequencing information from the seven boxer dogs with ibd from the uk was compared to the reference sequence published on the ensembl web server (www.ensembl.org/canis_familiaris). in addition to the three snps identified previously in the tlr5 gene, a novel non-synonymous snp, t443c, was identified in the boxer dog population with ibd. this snp has never been reported before and was present in the homozygous state in three dogs with ibd and in the heterozygous state in one dog. 
using the simple modular architecture research tool (smart) web server (http://smart.embl.de/) we were able to map the t443c snp to the leucine rich repeat domain of the tlr5 protein. the leucine rich repeat domain is involved with ligand binding and therefore a change in the amino acid in this region may affect function, especially as the t443c snp results in a change in the amino acid from non-polar to polar. our study further confirms the role of tlr5 in the pathogenesis of canine ibd. our results suggest that in addition to shared risk polymorphisms amongst breeds, individual breeds may harbor unique snps arising after breed formation which may further affect their susceptibility to this disease. however, a case-control study would be needed in the boxer dog to confirm the significance of the tlr5 t443c snp and further functional data would be needed to elucidate the exact role of this polymorphism in canine ibd. an automated power driver device (oncontrol, vidacare) has recently become available for bone marrow aspiration (bma) and bone marrow biopsy (bmb) in humans. the purpose of our study was to compare this automated technique to the traditional manual technique for bone marrow sampling in cats. twelve healthy research cats were anesthetized using a standardized protocol on 2 different occasions, 2 days apart, to have bmas and bmbs performed by the same operator. on day 1, half of the cats were randomized to have a bma performed at both the proximal humerus and the iliac crest, and a bmb performed at the iliac wing, using the oncontrol device (15-gauge needle for bma; 11-gauge needle for bmb). the other half of the cats had the same procedures performed using a manual technique (15-gauge illinois needle for bma; 11-gauge jamshidi needle for bmb). on day 3, each cat had bma performed at the opposite humerus and iliac crest, and a bmb performed at the proximal humerus using the opposite technique from day 1. 
for each procedure, the operator was given a maximum of 3 attempts to successfully collect a sample. the rate of success, as well as the number of attempts, was recorded. the "ease of use" of the device was rated by the operator on a 5-point scale after each procedure. using previously determined criteria, the macroscopic and microscopic qualities of the bma and bmb samples were assessed by a board-certified pathologist, blinded to the technique used. the level of pain experienced by each cat was evaluated 6, 12, 18, 24, 36 and 48 hours following each set of procedures, using a previously validated pain scoring system. two-sample t-tests were used to compare the automated technique to the manual technique and to compare the humerus to the iliac crest site for bmas and the humerus to the iliac wing site for bmbs. for all procedures, at all sites, the "ease of use" was better for the automated technique than for the manual technique (p < 0.05). the duration of the procedure and the number of attempts to collect a sample were significantly lower with the automated technique for bma at the proximal humerus (p < 0.05). there was no significant difference in the level of pain at any time point following each set of procedures with either technique. performing bma at the proximal humerus was associated with a higher rate of success (p < 0.05), a lower number of attempts (p < 0.05), a shorter duration of the procedure (p < 0.05), a higher-rated "ease of use" of the technique (p < 0.05), and a better quality sample (p < 0.05) when compared to sampling from the iliac crest. in conclusion, we found the automated bone marrow sampling technique suitable for use in adult cats. this technique was easier to use than the manual technique for both bma and bmb, and reduced the duration of the procedure and the number of attempts for successful bma at the proximal humerus. 
performing bma at the proximal humerus was faster, easier and allowed collection of better quality samples than at the iliac crest, independently of the technique used. the fractious nature of some feline patients sometimes makes sedation or general anesthesia necessary for routine procedures such as blood collection for hematologic analyses. it has been anecdotally reported that sedation or general anesthesia could induce variations in hematologic parameters in cats, making it important for the clinician to be able to anticipate potential changes in hematologic parameters that could result from chemical restraint. this study evaluated the effects of a standardized anesthetic protocol using ketamine (10 mg/kg, iv), midazolam (0.5 mg/kg, iv) and buprenorphine (10 µg/kg, im) on the hematologic parameters of 12 healthy adult research cats. each cat had blood samples collected before and after induction of anesthesia on 2 different occasions, 2 days apart. in total, 24 pairs of complete blood counts were obtained. analyses were performed at a certified veterinary laboratory. paired-sample t-tests were used to determine whether there were any statistical differences between hematologic parameters before and after induction of general anesthesia, for each cat, on the 2 different occasions. compared to preanesthetic values there was a significant decrease in red blood cell count, hemoglobin concentration, hematocrit, lymphocyte count and plasma total protein concentration after induction of anesthesia. there was no significant difference in the segmented or band neutrophil, eosinophil, basophil, monocyte and platelet counts between the samples taken before and after induction of anesthesia. 
on average, there was a 23.7% decrease in the red blood cell count (9.06 × 10¹²/l to 6.91 × 10¹²/l) (p < 0.0001), a 23% decrease in hemoglobin concentration (133.88 g/l to 103.08 g/l) (p < 0.0001), a 24.4% decrease in the hematocrit (0.41 l/l to 0.31 l/l) (p < 0.0001), a 25.3% decrease in the lymphocyte count (3.68 × 10⁹/l to 2.77 × 10⁹/l) (p = 0.0023), and a 12.1% decrease in the plasma total protein concentration (84.79 g/l to 74.54 g/l) (p < 0.0001) when samples taken before and after induction of anesthesia were compared. if only the hematocrit was considered as a marker of anemia, 29% of the samples from these 12 healthy cats, taken while they were under general anesthesia, would have been misinterpreted as belonging to anemic patients (hematocrit < 0.285 l/l), using the reference interval established in our laboratory. none of the cats would have been considered anemic before induction of general anesthesia. in practice, the decrease in lymphocyte count following anesthesia is unlikely to be of clinical relevance, as all the samples except 2 had a lymphocyte count that was within the reference interval for cats established by our laboratory. this study suggests that complete blood counts performed on blood taken under general anesthesia with this combination of anesthetic drugs in cats should be interpreted cautiously in order not to make a false diagnosis of anemia. the mechanism responsible for the decrease in circulating red blood cell mass following anesthesia induction in cats is unknown and requires further investigation. rivaroxaban is an oral inhibitor of activated coagulation factor x (xa). it is expected to have similar coagulation effects to low-molecular-weight heparin, without the need for injection, making it an attractive alternative for long-term anticoagulant therapy in cats. citrated blood obtained from five healthy adult cats was exposed in vitro to varying concentrations of rivaroxaban, followed by coagulation testing. 
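the percent decreases quoted above follow directly from the paired means as (before − after) / before; a quick arithmetic sketch reproducing the reported figures (the per-cat data behind the means are not shown in the abstract):

```python
# (before, after) anesthesia means reported in the abstract
pairs = {
    "rbc (x10^12/l)":     (9.06, 6.91),     # reported decrease: 23.7%
    "hemoglobin (g/l)":   (133.88, 103.08), # reported decrease: 23%
    "hematocrit (l/l)":   (0.41, 0.31),     # reported decrease: 24.4%
    "total protein (g/l)": (84.79, 74.54),  # reported decrease: 12.1%
}
for name, (before, after) in pairs.items():
    pct = (before - after) / before * 100  # percent decrease from baseline
    print(f"{name}: {pct:.1f}% decrease")
```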
the rivaroxaban was extracted from commercially available tablets (xarelto®) and dissolved in dmso prior to addition to the blood. tests performed included kaolin-activated thrombelastography (teg), prothrombin time (pt), dilute pt (dpt), activated partial thromboplastin time (aptt), and anti-factor xa (axa) activity. dose-dependent prolongations were seen in all coagulation parameters. similar to human data, therapeutic axa levels (0.5-1.0 axa units) were achieved at in vitro concentrations between 160 and 220 mg/l. at 220 mg/l, dpt measurements were clinically prolonged in all cats (29.2 ± 4 sec vs. 18.5 ± 0.8 sec, p = 0.148), while aptt values were only mildly prolonged from baseline (21.9 ± 5 sec vs. 15.6 ± 2 sec, p = 0.07). significant prolongations were seen in dpt at 500 mg/l (60.4 ± 42 sec, p = 0.005). teg r time did not prolong from baseline values until concentrations of 2000 mg/l were reached (16.0 ± 9 min compared to 3.1 ± 0.7 min, p = 0.006). rivaroxaban has similar coagulation effects in the cat as in other species and may play a role in feline thromboprophylaxis. kaolin-activated teg does not appear to be sensitive to low concentrations of rivaroxaban in the cat. anticoagulated blood is required for platelet function studies. sodium citrate, a calcium chelator, is the most commonly used anticoagulant to measure coagulation parameters including platelet aggregation, but it may have a negative effect on platelet responsiveness. dogs are generally considered moderate responders to collagen on platelet aggregation and are notorious for being poor or inconsistent responders to adp-induced platelet aggregation using citrated whole blood. hirudin, a selective thrombin inhibitor, can also be used as an anticoagulant for coagulation assays and is the anticoagulant of choice for certain assays including the multiplate® platelet function analyzer. 
ten adult healthy dogs were used to compare whole blood platelet aggregation between citrated and hirudinated blood samples. venous blood was collected atraumatically from the external jugular vein directly into tubes containing 3.2% trisodium citrate or hirudin. whole blood platelet aggregation was performed (whole-blood lumi-aggregometer, chrono-log corporation, havertown, pa, usa) with collagen (5 µg/ml) and adp (10 µm) as agonists. maximal platelet aggregation (in ohms) was recorded. there was a significant increase in collagen-induced platelet aggregation in the hirudinated samples compared to the citrated samples (31.2 ± 6.1 vs. 17.2 ± 8.6 Ω, p < 0.0005). there was also a significant increase in adp-induced platelet aggregation in the hirudinated samples compared to the citrated samples (15.6 ± 6.0 vs. 2.6 ± 2.6 Ω, p = 0.0001). the results of this study show a significant difference in platelet responsiveness between citrated and hirudinated whole blood using the chrono-log impedance aggregometer. while both collagen- and adp-induced platelet aggregation were attenuated in citrated blood samples, this was most notable for adp-induced aggregation, where almost all samples had no objectively measurable platelet aggregation. these data suggest that future whole blood platelet aggregation studies performed on the chrono-log impedance aggregometer should use hirudinated blood samples, although new reference limits would need to be established. low-molecular-weight heparin (lmwh) is now used to prevent thrombotic complications in dogs. a functional assay such as the calibrated automated thrombogram (cat) may provide a new approach for monitoring lmwh therapy. we hypothesized that cat would detect decreased endogenous thrombin potential (etp) in healthy dogs receiving lmwh (fragmin®). twenty-four healthy adult beagles were included in this study and divided equally in four groups. 
one dose of 50 u/kg, 100 u/kg or 150 u/kg of lmwh was given subcutaneously to healthy dogs and compared to a control group. platelet-poor plasma (ppp) was collected over a 24-hour period. using a repeated-measures linear model, the effect of lmwh on etp was time and dose dependent with a significant interaction (p < 0.0001). compared to control dogs, significant differences were obtained for the 50 u/kg group at t60 (p = 0.037), for the 100 u/kg group at t15 (p = 0.013) and between t30-t240 minutes (p < 0.0001), and for the 150 u/kg group at t15 (p = 0.011), between t30-t300 minutes (p < 0.0001) and at t360 (p = 0.004). the cat assay can be employed to measure the effects of lmwh at different doses in healthy dogs, resulting in significant time- and dose-dependent decreases in etp, and warrants further investigation as a tool for monitoring lmwh therapy in dogs. the purpose of this study was to determine the effects of prednisone and prednisone plus ultralow-dose aspirin on coagulation parameters in healthy dogs, with an emphasis on thromboelastography (teg). this was a prospective, randomized, blinded study utilizing fourteen dogs determined to be healthy based on normal physical examination, complete blood count, biochemistry, urinalysis, and fecal flotation. dogs were evenly divided into either prednisone plus aspirin (pa) or prednisone plus placebo (pp) groups. baseline values for teg parameters (r, k, angle, ma, ly30, ly60, g, ci) were measured twice two days apart, and thrombin-antithrombin complexes (tat) and traditional coagulation parameters (prothrombin time, activated partial thromboplastin time, d-dimer, antithrombin (at), fibrinogen) were measured once. each dog received 2 mg/kg/day of prednisone, and either 0.5 mg/kg/day of aspirin (pa group) or placebo (pp group) for 14 days. a complete blood count, biochemistry profile, teg, tat, and traditional coagulation parameters were then repeated on each dog. 
day-to-day variation was calculated for the teg parameters using the two baseline measurements. the changes from baseline between and within each group were compared using t-tests, or the wilcoxon two-sample test where appropriate, for teg, tat, traditional coagulation parameters, and hematocrit. day-to-day variation in teg was acceptable (≤ 10%) for ma, g, and angle, unacceptable (> 10%) for r, k, ly30 and ly60, and not meaningful for ci. within both groups, ma, g, ci and fibrinogen significantly increased from baseline (p < 0.05). within both groups, ly30 and at significantly decreased from baseline (p < 0.05). for the pp group, ly60 significantly decreased from baseline (p = 0.03), and approached significance for the pa group (p = 0.0504). all other within-group changes from baseline were not statistically significant (p-values > 0.05). for all parameters, there was no difference between groups for change from baseline (p-values > 0.05). day-to-day variation in some teg parameters is high and may preclude their clinical utility. prednisone causes hypercoagulability in healthy dogs based on increased g, ma, and ci. the addition of ultra-low dose aspirin to prednisone has no effect on the parameters measured in this study. 'aspirin resistance' has been identified in people and dogs that develop thrombi despite low dose aspirin therapy. variability in platelet cyclooxygenase (cox) isoform expression is one proposed mechanism for aspirin resistance in people. two isoforms (cox-1 and cox-2) have been identified in canine platelets. high (antiinflammatory) dose aspirin inhibits platelet function and alters expression of both cox isoforms in most dogs. this study evaluated the effects of low dose aspirin on platelet function and cox expression in normal dogs. twenty-five healthy client-owned dogs were evaluated before and at two time points (days 3 and 10) during aspirin therapy (1 mg/kg po sid). 
platelet response to aspirin (siemens pfa-100®; collagen/epinephrine cartridges) was stratified into one of three groups [aspirin responders (9 dogs), non-responders (8 dogs), or inconsistent responders (8 dogs)]. flow cytometry identified platelet cox-1 and cox-2 expression. an elisa was used to measure urine 11-dehydro-thromboxane b2 (11-dtxb2). there were no significant differences between groups for cox-1, cox-2 or 11-dtxb2 at any time point. when all dogs were considered as a single group, there was a significant increase (p < 0.0001) in cox-1 and cox-2 mean fluorescent intensity (mfi) from baseline to day 10, 70.1% ± 38.0 (mean ± sd) and 70.8% ± 71.2, respectively. there was a significant decrease in mean urine 11-dtxb2:creatinine on days 3 and 10, by 23.4% (p = 0.0044) and 45% (p < 0.0001), respectively. as with our previous high-dose studies, cox-1 expression was increased with aspirin exposure. however, there was a significant increase in cox-2 expression with low dose aspirin, in contrast to the decrease seen at higher doses. our study suggests that levels of platelet cox-1 and cox-2 expression do not influence aspirin response in dogs. although thromboxane levels decreased in most (23 of 25) dogs on low dose aspirin, platelet function was consistently affected in only 36% of dogs, suggesting that differences in response to thromboxane may play a role in the variable effects of low dose aspirin on canine platelet function. delayed postoperative bleeding is common in retired racing greyhounds (rrgs), despite normal results of routine hemostasis assays. the excessive postoperative bleeding in the rrgs is not due to primary or secondary hemostatic defects, and may be due to enhanced fibrinolysis or to a clot maintenance dysfunction. 
providing a method to prevent or minimize the severity of postoperative bleeding in rrgs will not only have a major economic impact for owners but will also markedly decrease the associated complications of minor or major surgeries in the breed. epsilon aminocaproic acid (eaca) is a potent inhibitor of fibrinolysis that also supports clot maintenance through unknown mechanisms. the objective of this double-blinded, prospective, randomized study was to evaluate the effects of eaca versus placebo on the prevalence of bleeding in rrgs, and to investigate its mechanism of action by using thromboelastography (teg). we compared the effects of eaca and placebo in 100 rrgs that underwent elective ovariohysterectomy or orchiectomy at the veterinary medical center, the ohio state university, over 2 years. the main endpoint was bleeding (prevalence and severity); minor endpoints included most teg parameters. thirty percent (15/50) of the rrgs in the placebo group had delayed postoperative bleeding starting 36 to 48 hours after surgery, compared to 10% (5/50) in the eaca group (p = 0.0124). of the teg parameters, the r time (clot formation time) was significantly different between treatment groups (p = 0.0321). the postoperative administration of eaca significantly decreased the prevalence of postoperative bleeding in rrgs. thromboembolism associated with protein-losing nephropathy (pln) has long been recognized as a serious and unpredictable complication in dogs; however, its prevalence remains unknown. in humans, surrogate indicators are frequently used to assess thromboembolic risk. 
this study aimed to investigate the prevalence of hypercoagulability in pln dogs based on thromboelastography (teg), and to determine whether hypercoagulability in these patients could be predicted by clinical assessments that identify systemic hypertension (systolic blood pressure > 160 mmhg), hypoalbuminemia (serum albumin < 2.7 g/dl), low antithrombin activity (< 70%), and degree of proteinuria (urine protein:creatinine ratio [upc] ≥ 2). between march 2009 and september 2010, twenty-seven dogs were identified with pln at the animal medical center. the prevalence of hypercoagulability based on a teg g-value > 9.6 was 83.3%. there was no statistically significant relationship, either categorically or continuously, in univariate as well as multivariate analyses of all variables. univariate logistic regression results (odds ratio; lower and upper confidence limits; p value) were: for hypertension, 1.18 (0.201, 6.99; p = 0.851); for albumin, 1.37 (0.228, 8.26; p = 0.728); and for antithrombin activity, 0.73 (0.12, 4.39; p = 0.728). thus, in this patient population, in the absence of teg, prediction of hypercoagulability using abnormalities in commonly measured clinicopathologic variables was not helpful. however, given the documented high prevalence of hypercoagulability in patients with pln, early institution of prophylactic anti-platelet or anticoagulant therapies should be considered. thromboelastography (teg) is a test of global hemostasis. due to the effects of extrinsic factors on whole blood coagulation, sample collection method (scm) may influence results. the purpose was to determine if scm influenced teg using kaolin-activated citrated whole blood (wb). healthy dogs with normal platelet counts were prospectively enrolled. 
three wb samples were obtained from each dog at least 48 hours apart from alternating jugular veins in a randomized order of three methods: 1) vacutainer® into citrated tube (vac), 2) citrated syringe with transfer into plain tube (cit), or 3) plain syringe with transfer into citrated tube (plain). draw time was recorded in seconds. kaolin-activated teg was performed, with measurement of reaction time (r), clot formation time (k), maximum amplitude (ma), and alpha angle. eleven dogs were enrolled. there were no significant differences in teg indices between vac samples and either cit or plain samples. cit samples had a significantly higher k value (p = 0.004) and a lower alpha angle (p = 0.004) compared to plain samples. draw times ranged from 2-10 seconds. a longer draw time was significantly correlated (p = 0.046; r = −0.35) with a shorter r time. a higher platelet count was significantly correlated (p = 0.001; r = 0.575) with a higher ma. scm did not have a significant effect on teg parameters when comparing vac samples to either cit or plain samples. minimizing sample collection time and trauma during venipuncture may be important in minimizing hypercoagulable changes in teg indices. liquid plasma (lp) is defined as either plasma collected and refrigerated immediately after collection or fresh frozen plasma (ffp) that is thawed and stored refrigerated until use. stability studies in people have shown that adequate clotting factor activity is preserved for at least 14 days. lp is transfused in human level i trauma centers to critically ill people requiring rapid infusion of clotting factors, as the time required to defrost ffp is considered prohibitive. the use of lp has not been described in veterinary critical care. the purposes of this study were to 1) determine the length of time required in a water bath for a unit of canine ffp to thaw and 2) describe the use of lp in a busy university emergency room (er). 
for part 1: six units (250 ml) of canine ffp were individually thawed in a 37 °c water bath. the duration of time (in minutes) to thaw was recorded. for part 2: the transfusion log was reviewed for dogs receiving lp in the last 6 months. the indications and outcome were recorded. the mean ± sd thaw time was 34.7 ± 1.38 minutes. ten units of lp were transfused to 7 critically ill or injured dogs during the study period. indications for lp transfusion included hypovolemic shock due to intra-abdominal hemorrhage in 6 dogs (2 traumatic, 4 non-traumatic) and rapid correction of hemorrhage following parenteral tissue plasminogen activator administration in 1 dog. lp volume transfused ranged from 11.1 to 50.5 ml/kg. no transfusion reactions were identified. effect on coagulation was not consistently evaluated. the time required to thaw a unit of ffp is greater than 30 minutes, which could be detrimental in a bleeding, coagulopathic dog. lp was transfused without incident to critically ill and injured dogs and represents a potential new addition to the armamentarium of treatments in a veterinary er setting. further investigation of canine lp is warranted, including evaluation of in vitro factor stability and in vivo efficacy in correcting coagulopathy. immune-mediated thrombocytopenia (imt) is associated with increased morbidity and mortality. large prospective research studies of dog platelet antibodies and clinical utilization of platelet immunoglobulin assays are limited. potential explanations include limited availability and low specificity due to nonspecific binding. the focus of this study is to evaluate optimized direct and indirect platelet surface associated immunoglobulin (psaig) assays and staining with anti-cd61 antibodies (cd61ab) for use in classifying thrombocytopenic dogs. one hundred clinically ill and 30 apparently healthy dogs were prospectively evaluated. 
Data collected included a history of hemorrhage, physical examination evidence of bleeding, complete blood count, and measurement of PSAIg and CD61Ab. The PSAIg assay utilized polyvalent antibodies with correction for nonspecific binding by subtraction of background fluorescence with control antiserum. Thrombocytopenia was defined as < 164,000/μl, and all dogs were clinically classified into 1 of 4 groups (G): G1, IMT, n = 45; G2, thrombocytopenia from non-immune-mediated diseases, n = 37; G3, ill with normal platelet counts, n = 18; G4, healthy dogs, n = 30. Median platelet counts, by group, were G1, 20,000; G2, 69,000; G3, 324,500; and G4, 212,000/μl. For the direct and indirect PSAIg in dogs with IMT (G1), more dogs (n = 6 and n = 9) with clinical evidence of bleeding had antibodies compared to those who were not bleeding (n = 3 and n = 6). Considering only direct PSAIg, the sensitivity and specificity were 13% and 95%, respectively, for the diagnosis of IMT. For indirect PSAIg, the sensitivity and specificity were 27% and 97%, respectively. When considering both direct and indirect PSAIg together, the sensitivity was 33% with a specificity of 93%. In G1, interference from high control antiserum background staining was noted in 26.7% of dogs and resulted in a negative direct PSAIg classification. Minimal background interference was noted in G2, G3, or G4. The percentage of platelets stained with CD61Ab was significantly lower in G1 (median 38, p = 0.0001) vs. G2 (median 74, p = 0.007) vs. G3 (median 95.1, p = 0.857) and G4 (median 95.6). These findings indicate the optimized platelet surface-associated immunoglobulin assay has a high specificity, but poor sensitivity, for the diagnosis of IMT.
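As a check on the figures above, sensitivity and specificity are simple ratios over a confusion matrix. The sketch below uses illustrative counts only (the abstract reports percentages, not raw cell counts), chosen to be consistent with the reported 13% sensitivity and 95% specificity of the direct PSAIg assay, with G1 (n = 45) as the disease group and G2-G4 (n = 85) as the comparison group:

```python
def sensitivity(tp, fn):
    # Proportion of truly affected dogs that test positive
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of unaffected dogs that test negative
    return tn / (tn + fp)

# Illustrative counts, consistent with the reported percentages:
# 6 of 45 IMT dogs (G1) positive; 81 of 85 non-IMT dogs (G2-G4) negative.
tp, fn = 6, 39
tn, fp = 81, 4

print(round(100 * sensitivity(tp, fn)))   # 13 (%)
print(round(100 * specificity(tn, fp)))   # 95 (%)
```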
The decreased CD61 staining in G1 (the IMT group) may reflect decreased surface GPIIIa expression, blocking by anti-GPIIIa antibodies or other proteins, clearance by macrophages, or increased non-platelet debris, and has potential applications in the diagnosis and treatment of IMT. Greyhounds have lower serum concentrations of α-globulin than other breeds, explained by negligible levels of haptoglobin (Hp) measured using different methods (colorimetric, immunoturbidimetric, and protein electrophoresis). The purpose of the present study was to characterize the Hp gene in Greyhounds. We isolated DNA and RNA from blood samples of AKC-registered and retired racing Greyhounds (AKCG, RRG) and a German Shepherd Dog (GSD). We sequenced the Hp exons and splice sites and conducted array comparative genomic hybridization to identify associated DNA structural variation (custom 1M Agilent oligonucleotide array). Additionally, we tested for the presence of one or multiple haplotypes spanning Hp in Greyhounds using a high-density SNP array (180K Illumina HD). Sequencing of Hp in both DNA and cDNA revealed three synonymous SNPs in the racing Greyhound. We did not identify structural variation overlapping or near the Hp gene. Notably, the RRG and AKCG do not appear to share a specific haplotype spanning Hp. Despite having low or undetectable serum concentrations of Hp, we found that RRG Hp mRNA is expressed and lacks amino acid variation. This suggests that the clinical absence of Hp is attributable to post-transcriptional effects or to an unknown physiological interaction. Finally, given the existence of distinct RRG and AKCG haplotypes spanning Hp, it is important to characterize serum levels of Hp in AKCG in follow-on studies. We reported that hemoglobin in retired racing Greyhounds (RRG) has higher oxygen-carrying properties and affinity than that of other breeds. Surprisingly, very little is known about canine hemoglobin genetics.
The purpose of this study was to characterize the genetics of canine beta globins. Using computational BLAST analysis of the dog genome, we identified five beta globin genes in a single locus: two human HBE-like genes followed by three HBB-like genes. We isolated DNA and RNA from blood of RRGs, AKC-registered Greyhounds (AKCG), and a German Shepherd Dog (GSD). All beta globin exons and splice sites were sequenced, and the beta globin locus was examined by array comparative genomic hybridization (custom 1M Agilent array). Additionally, we determined the number of common haplotypes spanning this locus in RRGs and AKCGs using a high-density SNP array (180K Illumina HD). Expression and sequence analysis of cDNA showed that all five beta globin genes are actively expressed in adults. CanHBB1 and 2 were created by a relatively recent segmental duplication and have identical protein sequences. CanHBB1/2 are abundantly expressed in adults; canHBB3 is expressed at greatly reduced levels. Sequencing revealed one rare non-synonymous single nucleotide polymorphism (SNP) in HBE1 of RRGs, but no variation that could explain their abnormal hemoglobin. We did not detect structural variation overlapping or near the beta globin locus. Notably, RRG and AKCG do not share haplotypes spanning the beta globin locus. This is the first characterization of canine hemoglobin genetics, and the first report of canine embryonic hemoglobins and their expression in adults. Sampling of bone marrow in the dog from the costochondral (CC) junction can be performed with minimal to no sedation and readily available equipment, but is not in widespread use in the United States. The aim of this study was to compare the number of attempts needed to successfully obtain a sample, the time needed for the procedure, and the sample quality between aspirates obtained from the CC junction and more traditional sites (humerus or femur) in healthy dogs, when performed by novice and seasoned practitioners.
Samples were obtained from healthy anesthetized laboratory-reared adult dogs after undergoing terminal endoscopic surgery. Paired samples from separate dogs were obtained by each practitioner using either a 22 gauge needle and 6 cc syringe at the CC junction or an 18 gauge Rosenthal needle and 12 cc syringe at either the proximal humerus or femur (clinician preference). Three small animal veterinary interns, one experienced technician, and one boarded internist were monitored for number of attempts to success and length of time needed for success of each procedure. Slides were prepared by a single investigator and read by a blinded clinical pathologist. Data were compared using the paired t-test for normally distributed data and the Wilcoxon signed rank test for non-Gaussian distributions. Five pairs of samples from three dogs were evaluated. Two dogs had two pairs drawn from opposite limbs and ribs. The mean number of attempts to success for traditional sampling sites (1.4 ± 0.54) and time to success (8.6 ± 5.2 minutes) did not differ significantly from the attempts (2.0 ± 0.70, p = 0.25) or time (2.7 ± 0.98 minutes, p = 0.06) needed when aspirating from the CC junction. Subjectively, samples were of similar quality with regard to cellularity and number of particles present when compared within practitioners. Myeloid:erythroid ratio and percentage of lymphocytes were also not significantly different between sites (M:E ratio p = 0.37, lymphocyte % p = 0.07) and were within normal limits. While there were no significant differences between the two sites in terms of number of attempts or time to success, it should be noted that the "seasoned" practitioners had never performed an aspirate at the CC site and had an increased number of attempts compared to the traditional sites. If the number of attempts needed for success decreases with experience, it is likely the time required would decrease as well.
Both subjectively and objectively, there were no significant differences in quality or cell populations between the two sampling sites in healthy dogs. Based on these data, bone marrow aspiration from the CC junction appears to be equivalent to more traditional sampling sites in healthy dogs. Larger studies in clinically ill dogs should be performed before routinely using the site in the clinical setting. Recent research on iron homeostasis has elucidated the tightly controlled intestinal uptake of iron. Hepcidin, the major hormone limiting iron absorption and release from macrophages, is downregulated by matriptase-2, a transmembrane serine protease (TMPRSS6) produced by the liver. While iron deficiency is commonly caused by chronic blood loss anemia and rarely by dietary deficiency or intestinal disorders in dogs and other species, we report here the clinical to molecular investigations of a dog with iron-refractory iron deficiency anemia (IRIDA) caused by a matriptase-2 deficiency homologous to a recently described autosomal recessive disorder in humans. The proband, a spayed female Cocker Spaniel without any clinical signs except for recent occasional idiopathic seizures, exhibited a lifelong history of microcytosis and hypochromasia but not anemia. There was no evidence of any blood loss, and the dog was receiving an appropriate meat-based diet. Mean values of complete blood cell counts, performed from 0.5-5 years of age, were: hematocrit 41% (normal reference range 39-56); RBC count 9.6 × 10^6/μl (5.8-8.5 × 10^6); MCV 43 fl (62-72); MCHC 31 g/dl (33-36). Serum iron parameters revealed severe iron deficiency, with serum iron 34 μg/dl (88-238); total iron binding capacity 378 μg/dl (246-450); serum iron saturation < 9% (15-50%); and ferritin 410 ng/dl (80-800).
Prolonged courses of oral ferrous sulfate supplementation and several short courses of intramuscular iron (dextran) injections and intravenous iron infusions did not result in improvement of any red cell or serum iron parameters. However, this dog was never anemic, and the partial seizures could not be directly related to the iron deficiency status. No family members were available for further studies. Genomic DNA was extracted from the proband's EDTA blood, and the exons of the TMPRSS6 gene were amplified with flanking primers and then sequenced. In comparison to the normal canine TMPRSS6 sequence and that of a sequenced control dog, we found a homozygous missense mutation, R723H, toward the C-terminal end of the protein in the proband's gene. In conclusion, the severe microcytosis and hypochromasia, low serum iron parameters, and lack of response to oral and parenteral iron therapy led to the diagnosis of IRIDA. The missense mutation in matriptase-2 at position 723, from an arginine, which is conserved across all species currently deposited in GenBank, to a histidine, is likely the disease-causing mutation. This is the first report of IRIDA in the dog, with features very similar to those observed in humans. Dogs with naturally occurring IRIDA may be helpful in developing and assessing novel therapies. Accidental ingestion of copper-coated zinc pennies minted after 1982 is the most common cause of zinc toxicity anemia in the dog. Zinc toxicity anemia may also be seen with ingestion of zinc from other sources, such as metallic foreign material other than pennies, medicines containing zinc, and zinc supplements. The purpose of this study was to determine whether there is a weight below which dogs are more susceptible to zinc toxicity anemia secondary to metallic foreign body ingestion.
Records of dogs presented to the internal medicine service at the Veterinary Medical Center of Long Island for metallic foreign body ingestion were reviewed for signalment, weight, presenting PCV, and type of metallic foreign body ingested. Eighteen dogs met the inclusion criteria and were compared. Of the 18 dogs, there were 15 cases of coin ingestion (83%), with 11 (73%) involving ingestion of 1 or more pennies. The other 3 cases involved ingestion of a metallic object (1), decorative garland (1), and BB pellets (1). Of the 14 dogs exposed to zinc, 13 (93%) were less than 27 pounds (12.3 kg). Of those 13 cases, 11 (85%) had ingested one or more pennies. Eleven of the 14 (79%) zinc-exposed dogs were anemic at presentation. The average weight of those 11 dogs was 17.5 pounds (8 kg). This study showed that dogs less than 27 pounds appear to be more susceptible to developing anemia secondary to zinc toxicosis, with the majority of cases due to ingestion of pennies minted after 1982. Zinc toxicity anemia secondary to penny ingestion is more commonly seen in small dogs; we suspect larger dogs are able to pass pennies through the pyloric sphincter and thus do not develop clinical signs. Although thrombocytopenia is common in hospitalized dogs, canine cryopreserved platelet concentrate (PC) is used infrequently. Data suggest in vitro efficacy of PC, and efficacy when administered to research dogs, but efficacy is unknown in clinical patients. Study objectives were to determine the clinical characteristics of dogs receiving PC as well as the safety and efficacy of PC in thrombocytopenic dogs. Medical records were evaluated retrospectively to identify dogs that received PC. Information evaluated included patient characteristics, platelet count, acute transfusion reactions, and survival. Twenty-six dogs met study criteria. Dogs receiving PC ranged in age from 1-13 years (mean 7.8 years), and 17/26 (65.4%) were spayed or intact females.
Hemorrhage was reported in 20/26 dogs (76.9%) prior to PC transfusion. Platelet counts prior to transfusion ranged from 0 to 77 × 10^3/μl (mean 29.9 ± 43.9 × 10^3/μl). Change in platelet count was measured in 23 dogs, and the mean change was 17.1 ± 45.5 × 10^3/μl. The dose of PC administered ranged from 4.8 to 40 ml/kg, with a mean of 14.5 ± 8.8 ml/kg. No acute adverse reactions were reported. There was no correlation between transfusion dosage and platelet count change post transfusion. Survival to discharge occurred in 15/26 (57.7%) of dogs. The only variable correlated with survival was age, with survivors being younger than non-survivors (6.4 ± 3.9 years vs. 10.1 ± 3.1 years; p = 0.02). The efficacy of cryopreserved PC transfusions for improving clinical outcome in dogs with thrombocytopenia is yet to be determined; however, PC is well tolerated in clinical patients. Fresh frozen plasma (FFP) is used to treat coagulopathies in dogs. Current transfusion guidelines recommend that FFP be administered within 4 hours of thawing to avoid decreasing clotting factor function and bacterial contamination. The purpose of this study was to assess clotting factor activity and bacterial contamination of FFP that had been thawed and refrigerated for 5 days. Blood was collected from 10 client-owned healthy dogs with no known history of coagulopathy or of administration of drugs affecting coagulation. Plasma was separated from whole blood and frozen (−20°C) within 30 minutes of collection. Thawed plasma was maintained at 4°C (±2°C). Aerobic and anaerobic bacterial culture, prothrombin time (PT), activated partial thromboplastin time (PTT), and factor II, VII, IX, and X analyses were performed at the time of whole blood collection, at FFP thaw, and at 24, 72, and 120 hours post-thaw. There were no statistically significant differences in PT and PTT at any of the measured time points.
Statistically significant differences occurred between initial measurements of factors II, VII, IX, and X and subsequent time points, but there was no difference in activity levels of the factors once FFP was thawed. One bacterial colony was grown from each of two samples of post-thaw plasma. Thawed plasma protocols do not significantly decrease the function of factors II, VII, IX, and X or prolong PT and PTT. Bacterial contamination of the plasma supply seems unlikely, but strict aseptic technique should be used when obtaining plasma for patient use. Erythrocyte pyruvate kinase (PK) deficiency is the first described and most common erythroenzymopathy in dogs, cats, and humans. The PK enzyme plays a crucial role in erythrocyte energy metabolism, and its absence causes severe hemolytic anemia, often misdiagnosed as autoimmune hemolytic anemia. The disease is inherited as an autosomal recessive trait, and affected dogs also develop osteosclerosis. In dogs, the enzymatic diagnosis is complicated by the anomalous expression of malfunctioning M2-PK, but breed-specific R-PK mutation tests have been reported for Basenjis, West Highland White Terriers (WHWT), and Beagles. We report here on a survey of canine PK deficiency studied at the PennGen laboratory. A biased group of samples was received for screening from dog breeds with known mutations as well as from dogs with chronic, prednisone- and antibiotic-resistant hemolytic anemia and their relatives. EDTA blood samples and/or cheek swabs as well as medical record information were received, and genomic DNA and/or enzyme activity testing were performed. Among the 237 WHWTs, 7% and 37% were found to be homozygous deficient dogs or carriers, respectively, with a mutant allele frequency of 0.26. The average age at the time of diagnosis was 1.5 years, ranging from 2 months to 5 years of age; some samples came from Europe and South America.
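The reported allele frequency follows from simple gene counting: each homozygous deficient dog contributes two mutant alleles and each carrier one. A minimal sketch (the function name is ours; the figures are from the survey):

```python
def mutant_allele_frequency(f_affected, f_carrier):
    # Each affected dog carries two mutant alleles and each carrier one,
    # so over 2N alleles the frequency is f_affected + f_carrier / 2.
    return f_affected + f_carrier / 2

# WHWT survey: 7% homozygous deficient, 37% carriers -> reported 0.26
whwt = mutant_allele_frequency(0.07, 0.37)   # 0.255, reported as 0.26
print(whwt)
```

The same arithmetic reproduces the Beagle figure of 0.37 from 36% affected and 3% carriers (0.36 + 0.03/2 = 0.375).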
Of the 67 Beagles studied, 36% were affected and 3% were carriers (mutant allele frequency 0.37). The average age at the time of diagnosis was 2 years, ranging from 7 months to 9 years. Surprisingly, very few samples from Basenjis were received for screening, and none showed the mutant allele. While PK-deficient Basenjis lived < 5 years, WHWTs and Beagles often show milder signs and can reach 9 years of age. Several dogs from other breeds were also examined because of chronic regenerative anemia, and none had any of the known mutations seen in the other breeds. However, based upon PK enzyme activity studies, Chihuahua, Dachshund, Miniature Poodle, Spitz, Eskimo Toy, and Labrador Retriever dogs were found to be affected; they also had osteosclerosis, and at least one Labrador Retriever developed severe hemochromatosis (hepatic iron 37,300 ppm; normal < 1,200 ppm, analyzed on a dry-weight basis). Moreover, sequencing of the R-PK cDNA from a PK-deficient Labrador Retriever revealed a new nonsense mutation in exon 6. In conclusion, PK deficiency appears to be a common cause of hemolytic anemia in certain breeds, and mutation testing makes screening simple. PK deficiency should also be considered in dogs of other breeds, which may require the more cumbersome enzyme testing. Studies to identify new mutations will confirm and simplify the diagnosis. Supported in part by NIH grant RR02512. Immune-mediated hemolytic anemia (IMHA) is a common hematological condition in dogs. The diagnosis is based on clinical history, presenting signs, and hematological evidence of IMHA such as regenerative anemia, leukocytosis, and the presence of spherocytes. The definitive diagnostic procedure is the Coombs' test (direct antiglobulin test, DAT), which is known to be highly specific but lacks diagnostic sensitivity.
A direct flow cytometric assay (FCA) for detection of IgG-, IgM-, or C3-coated red blood cells (RBCs) might be more sensitive and thus could be introduced as an alternative diagnostic tool. To investigate the usefulness of FCA for IMHA diagnosis, evaluation of IgG-, IgM-, or C3-coated RBCs was performed in 15 dogs presented at the veterinary hospital at USP that fulfilled clinical and hematological criteria for IMHA. Thirty-eight healthy dogs were included as controls. DAT was performed with polyvalent and monovalent anti-dog sera, with twofold serial dilutions of each, incubated with a 2% RBC suspension at 37°C and 4°C. For FCA, 2% RBCs from anemic and healthy dogs were incubated with FITC anti-dog IgG, anti-dog IgM, and anti-dog C3 and submitted to flow cytometry evaluation. Dedicated software and the Mann-Whitney U test were used for data analysis. Five dogs showed positive results for DAT with polyvalent Coombs reagent at 4°C (titer 64 to 4096) and 37°C (titer 64 to 2048), but only three of them had an agglutination titer for anti-IgG at 4°C (1024 to 8192) and 37°C (512 to 4096). No positive results were observed for anti-IgM and anti-C3 DAT. By flow cytometry, the percentages of IgG-, IgM-, and C3-coated RBCs in normal and anemic dogs were, respectively, 1.18% and 17.11% (p < 0.001); 1.15% and 21.29% (p < 0.001); and 0.66% and 6.99% (p < 0.001). The percentage of IgG-coated RBCs was higher in dogs showing DAT-positive results (min. 33.94%; max. 99.96%; median 54.25%). The direct flow cytometric erythrocyte immunofluorescence assay is more sensitive than DAT for detection of antibody-coated RBCs in anemic dogs and may provide quantitative data useful for the laboratory diagnosis of IMHA. Bone marrow aspirates from cats are typically obtained from the ilium, humerus, or femur, but may be difficult to obtain and/or of poor quality. In this study the feasibility, safety, and nature of sternal aspiration in cats were investigated.
Under general anesthesia, bone marrow aspirates were obtained in a randomized order by a single investigator from the sternum and ilium of 10 healthy cats weighing 3.4-8.4 kg, with body condition scores of 5-9 (on a scale of 1-9). For sternal aspirates, cats were positioned in sternal recumbency, and a 1-inch, 22-23 ga hypodermic needle attached to a 12 cc syringe was inserted into the cranial manubrium and directed caudally along the long axis of the sternum. Aspirates were also obtained from the right iliac crest using an 18 ga Illinois needle attached to a 12 cc syringe. Difficulty of site localization, needle insertion and advancement, and specimen aspiration were scored from 1 (easiest) to 5 (hardest). Bone marrow smears were prepared by one investigator and reviewed by a pathologist blinded to aspiration site and cat. Sample quality was scored from 0 (no marrow particles) to 5 (excellent) based on the number of well-smeared marrow particles on the slide. Particle cellularity was scored from 1 (> 75% fat) to 3 (< 25% fat). Post-procedure, cats were treated with tramadol (4-5 mg/kg, PO, q12h) for 3 days and assessed for post-biopsy pain (Colorado State University Feline Acute Pain Scale, range 0 [no pain] to 4 [maximum]) and site swelling (range 0 [none] to 3 [marked]). Data were analyzed by ANCOVA accounting for effects of weight and body condition score. Pneumothorax was not identified. It was significantly easier to perform sternal than iliac aspiration, but sample quality was significantly better for iliac than for sternal aspirates. Because of limitations due to sample quality, bone marrow morphology in sternal samples could not be compared to iliac samples in all cats. For samples that could be compared, cellularity was identical for sternal and iliac samples from 1 cat but underestimated in the sternal sample from another cat. Myeloid:erythroid ratios and lymphocyte numbers were the same for sternal and iliac samples in 2 and 3 cats, respectively.
Megakaryocyte numbers were the same in one sample, lower in sternal samples compared to iliac samples from 2 cats, and overestimated in the sternal sample from 1 cat. Bone marrow cell morphology was normal in all acceptable samples. It was concluded that sternal aspiration of bone marrow using a 22-23 ga hypodermic needle is 1) easier to perform than iliac aspiration and 2) safe, but 3) provides samples of lower quality than iliac aspiration in cats. The diameter of 11-13 ga Jamshidi-type needles makes bone marrow core biopsy difficult in cats. In this study, biopsies of the left humeral head were taken under anesthesia using a 1-inch, 15 ga needle (EZ-IO intraosseous infusion system, Vidacare) from 10 healthy cats weighing 3.4-8.4 kg with body condition scores of 5-9 (on a scale of 1-9). Humeral biopsies were compared to biopsies taken from the left iliac crest using a 2-inch, 13 ga Jamshidi needle. Biopsies were performed in randomized order by one investigator. Biopsy was repeated, to a maximum of 3 attempts, until a specimen ≥ 5 mm long was obtained. Difficulty of site localization, needle insertion, and needle advancement were scored from 1 (easiest) to 5 (hardest). Specimens were wrapped in tissue paper and placed in Davidson's fixative for 15 min and then transferred to formalin. Biopsy sections were reviewed by a pathologist blinded to biopsy site and cat. Biopsy length on the slide was measured, and biopsy quality was scored from 0 (no hematopoietic tissue) to 5 (≥ 5 intertrabecular spaces free of artifact). Post-procedure, cats were treated with tramadol (4-5 mg/kg, PO, q12h) for 3 days and assessed for post-biopsy pain (Colorado State University Feline Acute Pain Scale, range 0 [no pain] to 4 [maximum]) and swelling of biopsy sites (range 0 [none] to 3 [marked]). Data were analyzed by ANCOVA accounting for effects of weight and body condition score.
There were no significant differences between 15 ga and 13 ga biopsies except for post-biopsy swelling, and there were no significant effects of body weight or body condition. Six (60%) of the 15 ga and 5 (50%) of the 13 ga biopsies were considered acceptable specimens for assessment of bone marrow architecture and morphology; all intact spaces in these biopsies had normal hematopoiesis and cell morphology. Comparison of acceptable 15 ga to 13 ga biopsy specimens from 4 cats showed no significant differences in cell density and lymphocytes/plasma cells, while cellularity, assessed as high in 2 of the 13 ga biopsies, was assessed as medium in the corresponding 15 ga biopsies; and megakaryocytes, assessed as 4-9/low-power field in one 13 ga biopsy, were assessed as 3/low-power field in the 15 ga biopsy. Myeloid:erythroid ratios were greater in 15 ga biopsies compared to 13 ga biopsies in 2 cats, and lower in the 15 ga biopsy in one cat. Discordant results between biopsies were not related to differences in quality. In conclusion, 15 ga bone marrow biopsy of the humerus was as likely to yield a specimen of acceptable quality as 13 ga biopsy of the ilium, and resulted in less post-biopsy swelling. Reports on canine acute liver failure (ALF) include individual cases or small case series of animals with a specific diagnosis. The aim of this study was to describe the clinical course, outcome, and etiology of ALF in dogs presenting to a referral hospital. Medical records (1995-2010) were reviewed for a diagnosis of ALF (elevated serum bilirubin or icterus with concurrent coagulopathy or hepatic encephalopathy (HE)). Fifty cases were identified, representing 22 breeds: 7 Labrador Retrievers, 6 Golden Retrievers, 3 German Shepherds, and 3 Cocker Spaniels.
Median age was 6 years (1 month to 13 years). [Table from the preceding biopsy abstract, displaced here by extraction; column headings not recoverable. Humerus, 15 ga: 5.1 ± 0.9 (2.2-9.3), 5.6 ± 0.6 (3-9), 2.1 ± 0.5 (0-4), 6.0 ± 0.7 (3-10), 0, 0*. Ilium, 13 ga: 6.1 ± 0.6 (4.5-9.5), 9.0 ± 0.7 (6-12), 1.9 ± 0.6 (0-5), 7.5 ± 0.8 (5-12), 0.2 ± 0.1 (0-1), 0.9 ± 0.1 (0-1)*.] Presenting signs included anorexia (31/50), vomiting (26/50), polydipsia (9/50), and neurologic signs (6/50). Granulomatous hepatitis (GH) is a histopathological diagnosis characterized by focal aggregations of activated macrophages mixed with other inflammatory cells, usually part of a systemic disease process (WSAVA). Published case reports describe many potential infectious causes, but only one retrospective study, involving nine dogs with GH, has detailed clinically relevant findings. The aims of this study were to describe the clinical and clinicopathologic findings in dogs with a histopathological diagnosis of GH, and to identify infectious agents using differential staining techniques, PCR, and fluorescence in-situ hybridization (FISH) in archival paraffin-embedded tissues from dogs with GH. Medical records of dogs with a histopathological diagnosis of GH (n = 22) were reviewed, and signalment, historical toxin exposure or evidence of other systemic diseases, clinical signs, physical exam findings, clinicopathologic test results, imaging findings, concurrent diagnoses, treatments administered, and case outcome, when available, were extracted and summarized. Twelve archival formalin-fixed, paraffin-embedded hepatic tissue samples were available for special staining and molecular diagnostic testing; two of these samples had sufficient tissue for PCR only. The mean age of dogs with GH was 7 years (median 6.5 years; range 2 to 17 years), and the group included 12 males and 10 females representing 14 different breeds. Common presenting complaints included inappetence or anorexia (n = 12), weight loss (n = 11), lethargy (n = 9), fever (n = 8), and vomiting (n = 8).
High mixed liver enzyme activity (14/22) was the most common clinicopathologic abnormality. Leukemia was diagnosed in one dog and copper-associated hepatopathy in 6 dogs. No infectious agents were identified using differential staining techniques. Bartonella species DNA was not PCR-amplified from the extracted archival tissue. Furthermore, no bacteria were identified by means of FISH using a universal eubacterial probe. These data suggest a possible role for copper accumulation in the genesis of GH in dogs and support further evaluation of dogs with GH for evidence of copper-associated hepatopathy. Future studies should include detailed environmental histories, the collection of adequate sample volumes for quantification of hepatic copper content, and the examination of frozen tissues using novel molecular diagnostic platforms. Hepatocyte copper and iron accumulation contribute to cell loss, inflammation, and fibrosis. The purpose of this study was to compare copper and iron accumulation in feline liver samples with different disease processes. Liver biopsies (n = 104) submitted between July 1, 2007 and June 30, 2009 were evaluated using WSAVA guidelines and categorized as non-hepatic/normal, congenital, inflammatory/infectious, neoplastic, and other. Copper (by rubeanic acid) and iron (by Prussian blue) accumulation were graded by increasing amount (0-3) and location (centrilobular = CL, midzonal = MZ, periportal = PP, random = R). Associations between metal scores and diagnosis category were assessed using the Kruskal-Wallis test. Histologic diagnoses were non-hepatic/normal (n = 12), congenital (n = 6), neoplastic (n = 16), infectious/inflammatory (n = 39), and other (n = 31). Ninety-two samples were negative for copper; the remaining samples were graded 1 (n = 5), 2 (n = 6), and 3 (n = 1). Histologic diagnoses (pattern) for positive samples were congenital (1 CL), infectious/inflammatory (7: 2 CL, 1 MZ, 2 PP, 2 R), neoplastic (2 PP), and other (2 CL).
Iron staining was negative in 18 samples; the remaining samples were graded 1 (n = 38), 2 (n = 40), and 3 (n = 8). Distribution was primarily CL (n = 38) or R (n = 33), though MZ (n = 13) and PP (n = 2) distributions occurred. There were no significant differences by Kruskal-Wallis analysis in the amount or location of hepatocellular iron or copper across the disease categories. In this study, copper accumulation was rare, had variable distribution, and occurred primarily in samples with inflammatory/infectious disease. In contrast, iron accumulation was common and did not correlate with disease category. Further prospective evaluation of copper and iron accumulation in feline liver disease and its association with outcome may be warranted. Chronic hepatitis (CH) in dogs is a progressive condition without a clearly defined treatment. Glucocorticoids are commonly used to stop progressive inflammation and fibrosis but are associated with significant side effects, including a steroid hepatopathy that complicates enzyme monitoring. Cyclosporine has been proposed as an alternative therapy, but there are no published reports of its use for canine CH. Patient records at the CSU veterinary teaching hospital were searched for histologically confirmed cases of CH treated with cyclosporine. Data were compiled on cyclosporine dosing, concurrent medications, clinical course, and biochemical parameters. Thirteen patients over a 50-month period were identified. Serum alanine aminotransferase (ALT) decreased by an average of 71% in 12 dogs. The ALT normalized completely in 6 of 10 dogs treated for > 60 days, and in 5 of 6 dogs on > 9 mg/kg/day the ALT also normalized. Five of the 6 patients that exhibited clinical signs prior to treatment showed measurable improvement (weight gain, fewer gastrointestinal signs). Eight patients had hyperbilirubinemia or ascites prior to treatment; these resolved in 7. Post-treatment histopathology, available in one patient, revealed decreased severity of CH.
five patients exhibited adverse effects including gastrointestinal signs (3), gingival hyperplasia (1), and papillomatosis (1). cyclosporine was discontinued in 2 dogs with gastrointestinal signs. cyclosporine was an effective therapy for many cases of ch and should be considered for patients who are refractory to or cannot tolerate glucocorticoids. prospective clinical trials with histological documentation are needed to better define cyclosporine's effectiveness in ch. insertion of the veress needle and establishment of pneumoperitoneum is associated with 22 to 57% of all laparoscopic complications in humans. the purpose of this study was to determine the accuracy of interpretation of tissue impedance measurements for veress needle location. two laparoscopists, blinded to impedance measurements, placed reusable veress needles in 20 cadaverous dogs euthanized for reasons unrelated to the study. placement order was randomized. a third individual evaluated impedance measurements using a handheld device (sensormed, knoxville, tn) to determine correct versus incorrect needle placement. veress needle locations were marked using contrasting colors of india ink; tissues were dissected to determine ink locations. impedance measurement interpretation identified 29/33 correct and 7/7 incorrect placements, respectively. sensitivity, specificity, accuracy, and precision for correct veress needle placement are listed below. agreement was moderate (kappa = 0.50, p = 0.01) for placements by operator 1 and very high (kappa = 0.88, p < 0.01) for placements by operator 2. results for tissue impedance measurement interpretation are superior to published data for currently available tests. impedance measurements accurately detected all incorrect needle placements. 
comparison of needle placement with and without tissue impedance feedback will be necessary to determine whether it increases operator detection of inappropriate veress needle placement and decreases installment phase complication rates. delayed detection of intestinal perforation during veress needle insertion is associated with high mortality. the purpose of this study was to evaluate the accuracy of tissue impedance measurement interpretation for veress needle location. two laparoscopists, blinded to impedance measurements, placed reusable veress needles in 24 cadaverous cats. placement order was randomized. a third individual evaluated impedance measurements (sensormed, knoxville, tn) to determine placement location. needle locations were marked using india ink; tissues were dissected to determine ink locations. impedance measurement interpretation identified 36/38 correct and 2/10 incorrect placements. all 8 undetected incorrect placements were located within the retroperitoneal fat pad. sensitivity, specificity, accuracy, and precision for correct veress needle placement are listed below. correlation was absent (kappa = −0.15, p = 0.34) for placements by operator 1 and substantial (kappa = 0.78, p < 0.01) for operator 2. there was no association between correct or incorrect placement and operator on chi-squared analysis. failure of impedance measurements to identify placement in the retroperitoneal fat pad resulted in poor accuracy and discordant kappa statistics. small cat size limited the number of appropriate placement sites, perhaps resulting in excessively dorsal placement. comparison of needle placement with and without tissue impedance feedback will be necessary to determine whether impedance measurements increase detection of inappropriate veress needle placements or decrease installment phase complication rates. the best clinicopathologic test for detecting portosystemic shunting (pss) in dogs remains controversial. 
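the kappa values quoted in the two veress-needle abstracts above are cohen's kappa for agreement between paired categorical calls; a minimal sketch of the unweighted statistic, with illustrative ratings rather than data from either study:

```python
from collections import Counter

# sketch: unweighted cohen's kappa for two raters scoring the same items.
# the example ratings below are invented for illustration.
def cohens_kappa(ratings_a, ratings_b):
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    ca, cb = Counter(ratings_a), Counter(ratings_b)
    # chance agreement expected from each rater's marginal frequencies
    expected = sum(ca[label] * cb[label] for label in set(ca) | set(cb)) / n ** 2
    return (observed - expected) / (1 - expected)

k = cohens_kappa(["ok", "ok", "bad", "ok"], ["ok", "bad", "bad", "ok"])
# 3 of 4 calls agree; kappa = 0.5
```

because kappa discounts chance agreement from the marginal frequencies, a rater pair can have high raw agreement yet a low (or, as in the cat study, even negative) kappa when one category dominates.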
this retrospective study examined performance of single random "fasting" and paired serum bile acids (sba; pre- and 2-hr post-feeding) in a large population of non-icteric dogs with confirmed pss (abdominal ultrasound, colorectal scintigraphy, radiographic or spiral-ct portography, laparotomy, or necropsy). sba were measured by enzymatic colorimetric method with normal < 25 µmol/l. dogs meeting inclusion criteria (n = 568) included 467 portosystemic vascular anomalies (psva; 368 extrahepatic [e-psva], 99 intrahepatic [i-psva]), and 101 acquired pss (apss). signalment and laboratory parameters were recorded. non-parametric statistical analyses were used, two-tailed p < 0.05 applied with bonferroni corrections. median age and weight (74 breeds represented) were 1.2 (0.2-12) yrs and 6.6 (0.2-48) kg, with equal gender distribution. random "fasting" sba detected 90% psva and 81% apss, whereas post-feeding sba detected 100% psva and 98% apss. low protein-c (< 70% activity) occurred in 96% psva and 74% apss. low mcv and creatinine occurred in 82% and 94% of psva dogs, respectively; other tests were less helpful. in apss, post-feeding sba was superior. compared to apss, psva had significantly (p ≤ 0.002) lower mcv, cholesterol, bun, creatinine, glucose, and protein-c. compared to e-psva, i-psva had significantly (p ≤ 0.009) lower post-feeding sba, mcv, albumin, urine specific gravity, and protein-c but higher cholesterol and glucose. post-feeding sba reflect physiologically provoked bile acid challenge and should be the preferred sba test in non-icteric dogs for pss detection. protein-c assists in identifying psva but its utility in apss may be complicated by concurrent coagulopathies and inflammation. this study compared outcomes of treatment with adjunctive nonsteroidal anti-inflammatory drugs (nsaids) or anti-inflammatory glucocorticoids in dogs with severe pulmonary blastomycosis. 
medical records were reviewed for dogs diagnosed with blastomycosis at the university of illinois veterinary teaching hospital between 1992 and 2007. dogs with a presenting pao2 of 80 mmhg, and clinical or radiographic signs of respiratory blastomycosis were included. all dogs were treated with either itraconazole, fluconazole, amphotericin b, or a combination of these. group 1 (g1) dogs were treated with nsaids and group 2 (g2) dogs were treated with glucocorticoids as anti-inflammatory adjunctive therapy. the following comparisons were made: days of oxygen supplementation, days in hospital, survival to discharge, and long term patient survival. mann-whitney u tests and chi-squared tests were performed on continuous and categorical data, respectively. p < 0.05 was considered significant. sixty-eight dogs fit the inclusion criteria. g1 consisted of 31 dogs and g2 consisted of 37 dogs. the two groups were found to be similar in weight, age, and sex distribution. there was no significant difference between the two groups with regard to duration of oxygen supplementation, duration of hospitalization, survival to discharge, and patient survival. there does not appear to be a difference in clinical course or patient outcomes between groups of dogs with severe pulmonary blastomycosis treated with nsaids or anti-inflammatory glucocorticoids. further studies need to be performed to fully evaluate the impact these adjunct treatments have on prevention of ards and additional respiratory complications. diagnosis of feline histoplasma capsulatum infection traditionally relies upon identification of organisms in circulating monocytes or affected organs. in recent years, an antigen assay (aa) was developed for the diagnosis of disseminated histoplasmosis in human patients, but there is little information describing this test in cats. the goal of this study was to determine the sensitivity and specificity of h. 
capsulatum aa in cats with clinical disorders suggestive of histoplasmosis. urine and serum h. capsulatum aa results for feline patients from 3 veterinary hospitals were evaluated. medical records were reviewed for confirmatory evidence of histoplasmosis (based on cytological or histopathological findings) or an appropriately supported alternate diagnosis. aa results were available for 78 cats; initial testing was performed on 72 urine samples, 6 serum samples, and 1 unspecified sample. of these cats, 17/78 had a definitive diagnosis of histoplasmosis based on organism identification, and 10 had a definitive alternate diagnosis (e.g., neoplasia, other infection) based on necropsy findings (n = 5) or other clinical data (n = 5). an additional 16 cats had a clinical alternate diagnosis with no cytological or histopathological evidence of histoplasmosis in the affected body system(s). the remaining cats had unverified histoplasmosis (n = 9) or an open diagnosis (n = 26). of the 17 cats with confirmed histoplasmosis, 16 were positive on initial urine aa. one cat (with rectal involvement) was negative, indicating a test sensitivity of 94%. one cat was positive on urine aa but negative on serum aa. all of the 26 cats with definitive or clinical alternate diagnoses had negative results on the aa, suggesting an excellent specificity (100%). however, this result should be interpreted with caution, as the possibility of primary or concurrent histoplasmosis was only definitively excluded in the 5 patients who underwent necropsy examination. these findings suggest that the aa for h. capsulatum is a reliable diagnostic tool in this species. a positive result appears to reliably support the presence of infection, but a small percentage of infected cats may be negative on aa. in addition, tests performed on urine may be more sensitive than those performed on serum. 
the purpose of this study was to evaluate the sensitivity and specificity of an aspergillus galactomannan antigen enzyme immunoassay (ga-eia) for the diagnosis of canine systemic aspergillosis. serum and urine samples were collected from sick dogs at 2 hospitals (ucd and tamu). group 1 dogs were diagnosed with systemic aspergillosis using culture (sterile site) or microscopy and culture (non-sterile site). group 2 dogs had clinical findings suggestive of aspergillosis but an alternate diagnosis was established. group 3 dogs were not suspected to have aspergillosis. samples were tested using the ga-eia and results expressed as a galactomannan index (gmi). gmis > 0.5 were considered positive. comparisons were performed using the mann-whitney test. there were 10 dogs in group 1, 22 in group 2, and 23 in group 3. serum was collected from all dogs, and urine from 6, 9, and 10 dogs, respectively. serum gmis did not differ from urine gmis across groups. serum gmis of group 1 dogs were higher than those of group 2 and group 3 dogs (p < 0.0001). results from dogs in group 2 did not differ from those in group 3 (p = 0.43). two dogs in group 1 tested negative, but had localized pulmonary infections. one dog in group 2, which had paecilomycosis, tested positive. two dogs in group 3 tested positive. one was being treated with plasmalyte. the other had a cutaneous opportunistic mycosis. these data support the utility of this assay to aid in the diagnosis of systemic aspergillosis in dogs. anaplasma phagocytophilum, an ixodes tick-transmitted rickettsial bacterium, has a wide mammalian host range but is not commonly reported in cats. clinical signs in humans, dogs and cats are often vague and include lethargy, anorexia and malaise. the purpose of this retrospective study was to describe the clinical signs, laboratory data and response to treatment in cats that tested positive for a. phagocytophilum on a commercially available pcr of peripheral blood (fastpanel™). 
this study also reports the appearance of intracellular morulae in feline neutrophils, contributing to the diagnosis of a. phagocytophilum. the a. phagocytophilum real-time pcr (rt-pcr) assay consists of four multiplexed primer systems designed to detect a total of three distinct genes. amplicons were confirmed as a. phagocytophilum by dna sequencing. clinicopathologic data were obtained by review of medical records and interview of primary veterinarians. complete blood counts were available from 13/15 cats and 10/13 blood smears were reviewed. the cats included in this study were all positive for a. phagocytophilum by real-time pcr. the cats ranged from 4 months to 13 years of age with an average age of 3.7 years. fifteen of 15 cats had a history of tick exposure and lived in the northeastern region of the us, an ixodes endemic area. all cats presented with lethargy, 13/15 were anorexic and 14/15 had a fever (temperature > 103°f). other clinical findings included hepatomegaly, splenomegaly, ataxia and ocular changes of conjunctivitis and elevation of the nictitating membrane. hematologic findings included leukopenia (1/13), neutropenia (1/13) and lymphopenia (4/13). thrombocytopenia was not noted in any case. morulae were seen within neutrophils in 2/13 cases. all cases in this report responded to treatment with doxycycline. this is the first report of the identification of morulae within neutrophils via peripheral blood smear review in cats confirmed by rt-pcr to be infected with anaplasma phagocytophilum in north america. infection with anaplasma phagocytophilum should be considered in a clinically ill cat with tick exposure, living in an ixodes endemic area, that presents to a veterinarian for lethargy, anorexia and fever. the spectrum of disease manifestations and the accompanying clinicopathological abnormalities indicative of bartonellosis in dogs have not been thoroughly characterized. 
the objective of this unmatched case-control study was to compare signalment, clinical and pathologic findings in clinically-ill dogs suspected of a tick-borne disease that were negative for bartonella sp. dna (controls) versus dogs diagnosed with bartonellosis by pcr amplification, dna sequencing and the bapgm (bartonella alpha proteobacteria growth medium) enrichment culture approach. both groups were tested under the same laboratory conditions and in the same time frame. medical records were reviewed for information regarding signalment, medical history, physical examination findings, clinicopathological abnormalities, microbiological data and treatment. the study population consisted of 47 bartonella-infected dogs and 93 non-infected dogs. healthy dogs with no historical illnesses, such as blood donors, were excluded. the following species were amplified: b. henselae (n = 28, 59.6%), b. vinsonii subsp. berkhoffii (n = 20, 42.6%), b. koehlerae (n = 3, 6.4%), b. volans-like (n = 3, 6.4%), b. bovis (n = 1, 2.1%). nineteen (40.4%) bartonella-infected dogs were febrile and lethargic and ten (21.3%) had neurological signs. laboratory abnormalities for both groups are summarized below (number of affected dogs provided in parentheses). multivariate logistic regression adjusting for confounding factors was performed to establish potential associations between specific variables and bartonella sp. infection. there were no differences in signalment, age, sex, body weight and duration of clinical signs between the two groups. compared to the control population, infection with the genus bartonella was associated with a diagnosis of endocarditis (p = 0.0196, or = 7.95, 95% ci = 1.39-51.11) and hypoglobulinemia (p = 0.0057, or = 5.30, 95% ci = 1.62-18.55). controls were more likely to have joint effusion (p = 0.0059, or = 5.89, 95% ci = 1.62-27.22) and azotemia (p = 0.0353, or = 2.93, 95% ci = 1.07-8.86) than were the bartonella sp. infected dogs. 
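the odds ratios and 95% confidence intervals reported above come from multivariable logistic regression; the unadjusted single-variable analogue can be sketched from a 2x2 table using a wald interval. the counts in the example are hypothetical, not the study's data:

```python
import math

# sketch: unadjusted odds ratio with a wald-type 95% ci from a 2x2 table.
# a = exposed cases, b = exposed controls,
# c = unexposed cases, d = unexposed controls
def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # se of log(or)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(10, 5, 20, 40)  # hypothetical counts
# or = 4.0, with a 95% ci spanning that point estimate
```

the wide intervals in the abstract (e.g. 1.39-51.11 for endocarditis) reflect the small cell counts that the standard-error term above penalizes.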
bartonella was detected in dogs with signs such as fever, anemia, thrombocytopenia, hyperglobulinemia and proteinuria that are typically associated with tick-borne diseases. when endocarditis or hypoglobulinemia are detected, testing for bartonella should be prioritized. likewise, the detection of bartonella should prompt further testing for endocarditis, if not already investigated. surveillance studies in other species depend on detection of antibodies to the highly conserved influenza a nucleoprotein (np); however, no such antibody detection assay is approved for canine use in the u.s. the purpose of this study was to determine the diagnostic accuracy of a commercial blocking elisa used for avian species in detecting influenza a np antibody in dogs. since the blocking elisa is not a species-specific or viral subtype-specific format, we hypothesized that it would detect np antibodies in dogs infected by influenza a virus. serum samples from uninfected dogs (n = 204) and dogs naturally infected with canine influenza h3n8 (n = 150) were tested using the idexx flockchek blocking elisa for influenza a np antibody according to manufacturer instructions. the sample/negative control (s/n) absorbance ratios for infected dogs ranged from 0.12 to 0.67 compared to 0.53 to 1.40 for uninfected dogs. a receiver operating characteristic (roc) curve analysis determined optimum diagnostic sensitivity (99.3%) and specificity (99.0%) at a s/n cutoff ratio of 0.647. using this cutoff ratio, the overall diagnostic accuracy was 99.2%. coefficients of variation for intra-assay (4.7%) and inter-assay (6.1%) testing demonstrated good repeatability with canine sera. the excellent diagnostic accuracy of the commercial blocking elisa makes it a suitable tool for large-scale surveillance of influenza a virus exposure in dogs. upper respiratory disease (urd) can affect a majority of cats in shelters and is one of the leading reasons for euthanasia of otherwise adoptable cats. 
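the roc analysis in the influenza abstract above selects the s/n cutoff that best separates infected from uninfected sera; one common criterion is maximizing youden's j (sensitivity + specificity - 1). a minimal sketch, assuming (as in the blocking elisa) that lower s/n scores indicate a positive; the scores below are illustrative, not the study's data:

```python
# sketch: choose the cutoff maximizing youden's j over observed scores,
# calling a sample positive when its s/n score is <= the cutoff
# (infected sera give lower ratios in a blocking elisa).
# scores below are invented for illustration.
def best_cutoff(positive_scores, negative_scores):
    best_c, best_j = None, -1.0
    for c in sorted(positive_scores + negative_scores):
        sens = sum(s <= c for s in positive_scores) / len(positive_scores)
        spec = sum(s > c for s in negative_scores) / len(negative_scores)
        j = sens + spec - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

cutoff, j = best_cutoff([0.2, 0.3, 0.5], [0.6, 0.9, 1.1])
# cutoff = 0.5, j = 1.0 (perfect separation in this toy example)
```

with the overlapping real ranges quoted above (infected 0.12-0.67 vs. uninfected 0.53-1.40), no cutoff reaches j = 1, and the 0.647 cutoff is the point that trades off the 99.3% sensitivity against the 99.0% specificity.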
the purpose of this study was to determine prevalence and risk factors for upper respiratory pathogens in four different models for managing unowned cats: short-term animal shelters (shel), long-term sanctuaries (sanc), home-based foster care (fost), and trap-neuter-return (tnr) programs. conjunctival and oropharyngeal swabs were collected from 543 cats, half of which had clinical signs of urd, and tested for feline herpesvirus (fhv), feline calicivirus (fcv), chlamydophila felis, bordetella bronchiseptica (bb) and mycoplasma felis by real-time pcr. management model, vaccination, sex, age, body condition, and clinical signs were evaluated as risk factors for infection. a majority of cats in all management models carried one or more organisms capable of causing urd. in many cases, prevalence was similar in cats with or without clinical signs. unlike diseases that can be controlled by segregation of symptomatic animals, the lack of strong correlation between the presence of pathogens and the presence of clinical signs suggests that feline urd control should be managed by vaccination before or at the time of intake, biosecurity protocols that presume all cats may be shedding pathogens, and minimizing stressful conditions that contribute to disease susceptibility. depending on geographical location, sex, age and environment, 2-40% of cats worldwide are infected with the feline immunodeficiency virus (fiv). knowledge of the fiv status of cats is important to limit the spread of disease and to institute appropriate health management. however, like all lentiviruses, fiv is highly variable in nucleotide sequence, and viral load in cats is variable during different disease stages. detection of antibodies is the most widely employed diagnostic approach, but does not distinguish fiv-infected from fiv-vaccinated cats. 
in this study, samples from 30 fiv-seronegative cats, 30 fiv-seropositive cats, and 30 fiv-historically seronegative but vaccinated cats were analyzed by a commercial quantitative pcr (qpcr) assay and virus isolation. replicate blood samples were coded, and then submitted for 1) qpcr (idexx); and 2) mononuclear cell isolation with 7-day culture and viral p24 antigen detection by elisa. for the p24 antigen elisa, cutoff absorbance values were established from analysis of 10 fiv-negative samples. fiv infection status was pre-determined based on 2 antibody-elisa results and vaccination history. results indicated that qpcr had a sensitivity of 76% for samples from fiv-seropositive cats, and a specificity of 100% and 94.1% for samples from fiv-seronegative and fiv-vaccinated cats, respectively. at a cutoff value of 3 standard deviations above the mean absorbance of the fiv-negative samples, the p24 antigen elisa yielded a sensitivity of 69.2% for samples from fiv-seropositive cats, and a specificity of 77.1% and 64.7% for samples from fiv-seronegative and fiv-vaccinated cats, respectively. conclusions from this study are 1) the commercial fiv qpcr assay has high specificity but limited sensitivity for diagnosis of fiv infection; 2) 7-day virus culture has limited sensitivity and specificity. hence, detection of antibodies remains the most reliable test for diagnosis of fiv infection, but qpcr may be suitable to rule in infection. oral disease is an important clinical problem in feline medicine and includes common painful conditions such as oropharyngeal inflammation (formerly known as gingivostomatitis) and tooth resorptive lesions. a number of infectious agents have been associated with these conditions. private veterinary clinics in the u.s. were recruited to test feline patients presenting with oral disease. 
presenting cases included cats with plaque, calculus, gingivitis, stomatitis, periodontal disease, tooth resorptive lesions and other oral diseases as defined by the practitioner. all cats were tested using a commercially available point-of-care elisa test (idexx snap combo). confirmatory tests were not performed as part of the study. seroprevalence was calculated as the percentage of positive tests in the study population for each virus. a total of 11,262 cats were tested. seroprevalence for felv was 6.8% and for fiv was 7.1%. of these, 119 cats (1.0%) were infected with both viruses. seroprevalence was higher in cats with inflammatory oral disease than in cats with other types of oral disease. of 7,805 cats with gingivitis, seroprevalence for felv was 7.4% and for fiv was 7.9%, with 1.1% of cats co-infected. of 1,953 cats with stomatitis, seroprevalence for felv was 12.2% and for fiv was 13.9%, with 2.2% of cats co-infected. the seroprevalence for felv and fiv reported in this population of cats with oral disease was higher than in a recent large study where samples from u.s. cats not specifically selected for oral disease were tested (felv 2.3%, fiv 2.5%). results of this study indicate that further investigation of the role of retroviruses in cats with oropharyngeal inflammation is warranted. reliable tests and preventive vaccines and medications for feline retroviral and heartworm (hw) infections are available, but compliance with protocols to reduce transmission is unknown. no large-scale longitudinal studies evaluating prevalence over time have been reported. the purpose of this study was to determine the prevalence of and risk factors for infection, compared with a similar study completed 5 years previously. 
veterinary clinics and animal shelters in the us and canada submitted results of testing using a point-of-care elisa for felv antigen, fiv antibody, and hw antigen (idexx snap triple) and risk factor information for cats tested during march-september 2010. bivariable and multivariable analyses were used to evaluate risk factors for infections. a total of 62,301 cats were tested. only 16% of owned cats were prescribed hw preventive. risk of retroviral infections was increased by outdoor access, adulthood, and male gender. the most important risk factor associated with all 3 infections was clinical disease; in particular, respiratory and oral diseases and abscesses or bite wounds. multivariate analysis revealed differences among geodivisions and across infection types. feline retroviral and heartworm infections are easily prevented, but difficult to treat. despite availability of effective management protocols, compliance remains inadequate to reduce the prevalence of these infections. improved use of preventive care and testing to identify and segregate contagious cats, particularly those at high-risk, is required to reduce the morbidity of these preventable infections. infectious disease outbreaks are common in animal shelters and are frequently managed by depopulation when risk-assessment tools are not available. during a canine distemper virus (cdv) and parvovirus (cpv) outbreak in sheltered dogs, we used a cdv/cpv point-of-care antibody titer elisa, a cdv quantitative rt-pcr test, and a cpv fecal antigen test as risk assessment tools to guide release of exposed dogs from quarantine and euthanasia of diseased dogs. serum samples (for antibody titers) and swabs of the conjunctiva and upper respiratory tract (for cdv pcr) were collected from 111 asymptomatic dogs starting on day 4 of the outbreak. dogs with positive cdv pcr tests were retested every 2 weeks until euthanized for progressive disease or released following recovery from infection. 
dogs with clinical signs of parvoviral infection were tested using a cpv fecal antigen test. for dogs ≥ 4 months old, protective antibody titers correlated with resistance to clinical disease, but 10% of dogs shed cdv. lack of protective cdv antibody titers correlated with susceptibility to clinical infection, but most dogs recovered. risk assessment and outcome in 60 dogs ≥ 4 months of age. feline herpesvirus 1 (fhv-1) is a common ocular and respiratory pathogen of cats that can have clinical illness exacerbated by stress. cyclosporine (csa) is commonly used for the treatment of a number of inflammatory diseases in cats and can induce immune suppression. a small number of cats administered csa to block renal transplant rejection have developed clinical signs of upper respiratory tract disease that may have been from activated fhv-1. in this study, young adult cats experimentally inoculated with fhv-1 several months previously were divided into three groups and administered methylprednisolone acetate (8 cats, 5 mg/kg, im, day 0 and day 21), csa (8 cats, 7.0 mg/kg, po, daily for 42 days), or a placebo (7 cats, corn syrup; 0.075 ml/kg, po, daily for 42 days). each cat was assigned a daily individual clinical score by a trained, masked observer using a standardized score sheet during the initial pre-treatment period (day −14 to day 0) and throughout the 42-day treatment period. each individual clinical score (conjunctivitis, blepharospasm, ocular discharge, sneezing, nasal discharge, nasal congestion, and body temperature scores), the total clinical score (sum of all parameters), the total ocular score (sum of conjunctivitis, blepharospasm, ocular discharge), and the total respiratory score (sum of ocular discharge, sneezing, nasal discharge, nasal congestion) were analyzed using sas proc glimmix with 'treatment', 'time', and the two-way interaction 'treatment by time' all as fixed effects. statistical significance was defined as p < 0.05. 
on day 42 of the study, all of the csa-treated cats had detectable concentrations of csa in serum (mean = 406.1 ng/ml; standard deviation = 291.8 ng/ml; median = 388.5 ng/ml). when group mean values for clinical signs were compared over time as described, no significant differences in individual clinical score measurements, total score, total ocular score, or total respiratory score were detected among any of the treatment groups. while clinical signs of activated fhv-1 occurred in some cats administered methylprednisolone or csa, disease was mild and self-limited in most cats and there were no significant csa side effects. these results suggest that the csa protocol described here is unlikely to reactivate latent fhv-1 infection and cause significant clinical illness. the purpose of this study was to determine the prevalence and risk factors for enteropathogens in four different models for managing unowned cats: short-term shelter, long-term sanctuary, home-based foster care, and trap-neuter-return (tnr) programs. fecal samples were collected from 482 cats, half with diarrhea (d) and half with normal feces (n), and tested for a panel of feline and zoonotic enteropathogens by polymerase chain reaction, antigen assay, and fecal flotation. risk factors for infection evaluated included management practices, fecal consistency, and signalment. a majority of cats had at least one enteropathogen of feline or zoonotic importance, regardless of management model or preventive healthcare protocol. for most enteropathogens, the presence or absence of diarrhea did not correlate with infection, the exceptions being t. foetus in sanctuary cats and fcov in foster cats. prevalence of specific enteropathogens varied between management models, reflecting differences in preventive healthcare and housing conditions. 
management protocols for unowned cats were inadequate for elimination of infections present at the time of intake and for prevention of transmission of enteropathogens among shelter cats. improved compliance with effective vaccination, deworming, sanitation, and housing protocols is needed to reduce zoonotic and feline health risks. several allergic diseases of cats, including atopy and gingivostomatitis, can be resistant to glucocorticoids but responsive to cyclosporine. toxoplasma gondii infection occurs in approximately 30% of cats and the effect cyclosporine therapy has on the t. gondii oocyst shedding period is unknown. the objective of this study was to determine whether administration of cyclosporine before or after t. gondii infection influences the oocyst shedding period. the young adult cats were t. gondii seronegative when administered 1,000 t. gondii tissue cysts orally on day 42. group 1 cats (n = 10) were never administered cyclosporine; group 2 cats (n = 10) were administered cyclosporine (7.5 mg/kg, po) daily on days 84-126; and group 3 cats (n = 10) were administered cyclosporine (7.5 mg/kg, po) daily from days 0-126. available feces from individual cages were collected daily and fecal flotation by sugar centrifugation was performed for 84 days after t. gondii inoculation. group 3 shed oocysts for a significantly shorter period than groups 1 or 2 and had significantly lower oocyst shedding scores than groups 1 and 2 on days 5-11 after t. gondii inoculation. group 2 cats had completed the oocyst shedding period prior to being administered cyclosporine and repeat oocyst shedding was not detected during administration of the drug. administration of cyclosporine prior to t. gondii infection lessened oocyst shedding, likely due to the anti-t. gondii effects of the drug. administration of cyclosporine using this protocol is unlikely to induce repeat t. gondii oocyst shedding in client-owned cats. 
* = group with diarrhea significantly different from group with normal feces, p < 0.05. little is known about the metabolic pathways or mechanism of pathogenicity of mycoplasma haemofelis, and whole genome sequencing of feline hemoplasmas has not yet been reported. the aim of this study was to completely sequence the genome of m. haemofelis to further characterise this important pathogen. mycoplasma haemofelis genomic dna was purified and subjected to whole genome shotgun roche 454 sequencing. gaps were closed using targeted pcr and amplicon sequencing. ribosomal genes and potential open reading frames (orfs) were predicted in silico. putative orfs were annotated and orthologous groups identified. analysis showed a circular genome of 1.15 mbp with a gc content of 38.9%. thirty-one transfer rnas (trnas) were identified, accounting for all amino acids, including a tryptophan trna for the opal codon (uga). of the 1,545 putative proteins identified, 328 (21.2%) matched to proteins from other bacterial species. in common with the pneumoniae group of mycoplasmas, the closest phylogenetic relatives of the hemoplasmas, genes involved in carbohydrate metabolism were limited to enzymes of the glycolytic pathway, with glucose appearing to be the sole energy source for m. haemofelis. the majority of the pentose phosphate pathway genes present in other cultivatable mycoplasmas appear to be incomplete or absent in m. haemofelis, suggesting an alternative mechanism for sourcing purine and pyrimidine bases, such as scavenging from the host. a gene encoding a glyceraldehyde-3-phosphate dehydrogenase homolog of the immunogenic msg1 protein of mycoplasma suis was present. of the uncharacterized hypothetical proteins, 1,115 were arranged in series of orthologous repeats, or comprised fragments thereof, encoding putative proteins of approximately 200 amino acids. 
the predicted motifs of the majority of these putative proteins were consistent with these proteins being presented on the cell surface: an n-terminal signal peptide or transmembrane region followed by a non-cytoplasmic tail. these data have provided valuable information as to why this pathogen remains highly fastidious; it lacks some of the metabolic pathways found in cultivatable mycoplasmas. we have also identified a homolog of a known m. suis immunogenic protein, and identified a potential mechanism for host immune system evasion by way of highly repetitive, putatively surface-expressed hypothetical proteins with variable sequences.

canine leptospirosis has been recognized as a re-emerging disease in the u.s. over the past 20 years, and several serosurveys of the prevalence of leptospiral antibodies in dogs have been published during that time. the role of cats in the epidemiology of leptospirosis has received little attention. serosurveys of cats for exposure to or infection with leptospires have been published from other geographic areas, but none for cats in the u.s. in the past four decades. the new england states have been found to have a high incidence of canine leptospirosis. the purpose of this pilot study was to determine the prevalence of leptospiral antibodies in a population of feral cats in central massachusetts. blood was collected from 63 sexually intact feral cats presented to a spay and neuter program. microagglutination titers to leptospira serovars autumnalis, hardjo, bratislava, icterohaemorrhagiae, canicola, pomona, and grippotyphosa were determined. three of 63 cats (4.8%) had a positive titer to one or more serovars, with autumnalis being the most common. these results are consistent with previously published prevalence rates in feral cats. further studies are required to determine the role of leptospirosis in clinical disease in the domestic cat. 
for years, the rivalta's test has been routinely used in several european countries as a tool to diagnose feline infectious peritonitis (fip) in cats with effusion. it is inexpensive and easy to perform in private practice. there is, however, only limited information about its mode of action or its diagnostic value. the objectives of this study were to evaluate the sensitivity, specificity, and positive (ppv) and negative (npv) predictive values of the rivalta's test for the diagnosis of fip, and to examine whether there is a correlation with any effusion or blood parameters. medical records of 782 cats with effusion in which the rivalta's test was performed between 1999 and 2010 were reviewed concerning diagnosis, blood and effusion parameters, and survival time. effusion and blood parameters were compared between rivalta-positive and -negative effusions using the mann-whitney u test. prevalence of fip in cats with effusion was 31.7%. the rivalta's test showed a sensitivity of 91.3%, a specificity of 66.6%, a ppv of 55.9%, and an npv of 94.3% for the diagnosis of fip. the ppv improved when cats with lymphoma or bacterial infection were excluded (ppv 69.2%), and also when only cats younger than 2 years (ppv 87.5%) or 1 year (ppv 90.8%) of age were included. the most important significantly different parameters between rivalta-positive and -negative effusions were specific gravity as well as cholesterol, triglyceride, and glucose concentrations in the effusion. the rivalta's test in general is a useful tool to diagnose fip, but its sensitivity and specificity are not as high as previously assumed. if, however, the rivalta's test is performed in young cats or if certain diseases have been ruled out, its diagnostic value is high. effusion total protein is not highly correlated with test outcome. it therefore remains unclear which components in the effusion of cats with fip lead to a positive rivalta's test. 
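the predictive values reported above follow directly from sensitivity, specificity, and pre-test prevalence via bayes' rule; a minimal pure-python sketch reproducing the abstract's figures (sens 91.3%, spec 66.6%, prevalence 31.7% give ppv 55.9% and npv 94.3%):

```python
def predictive_values(sens, spec, prev):
    """Positive and negative predictive values from test characteristics
    and pre-test prevalence (Bayes' rule)."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# rivalta's test figures from the abstract
ppv, npv = predictive_values(sens=0.913, spec=0.666, prev=0.317)
print(f"ppv = {ppv:.1%}, npv = {npv:.1%}")  # matches the reported 55.9% and 94.3%
```

this also makes the abstract's age-restriction result intuitive: restricting to young cats raises the pre-test prevalence of fip, which raises the ppv even though sensitivity and specificity are unchanged.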
canine parvovirus (cpv) and canine distemper virus (cdv) infections are relatively common in animal shelters and are important population management issues, since the immune status of incoming dogs is usually unknown. our study aimed to determine the antibody protection status of dogs at the time of admission into an animal shelter (pre-vaccination) and over the following 2 weeks after vaccination. serum samples were obtained from 57 incoming shelter dogs aged 4 months and older with no known history of vaccination. immediately following serum collection, the dogs were vaccinated against cpv and cdv using a modified live vaccine (mlv). cpv and cdv antibody protection status was determined using synbiotics titerchek. dogs with unprotective serum antibody levels against cpv and/or cdv were retested at 6-8 days post-vaccination and again at 13-15 days post-vaccination if antibody levels were still unprotective against cpv and/or cdv. at the conclusion of the study, stored duplicate sera were submitted for batch 'gold standard' testing to determine canine distemper virus serum neutralization and canine parvovirus hemagglutination inhibition antibody titers. based on the synbiotics titerchek results, 43/57 dogs (75.4%) were protected against cpv and 21/57 (36.8%) were protected against cdv at intake. older incoming dogs were more likely to be protected against cpv (p < 0.0001) and cdv (p = 0.0174). dogs that were spayed/neutered were more likely to be protected against cpv on intake than intact animals, although this result was not statistically significant (p = 0.0627). the number of dogs with protective titers against cpv/cdv was increased at 6-8 days post-mlv (cpv: 48/56, 85.7%; cdv: 31/52, 59.6%) and further increased at 13-15 days post-mlv (cpv: 54/54, 100.0%; cdv: 47/48, 97.9%). we conclude that incoming shelter dogs often do not have protective antibody titers against cpv and cdv, but older shelter dogs are more likely to be protected against cpv. 
based on this population, we further conclude that a large percentage of dogs develop protective antibody titers to cpv and cdv within 1 to 2 weeks when vaccinated with a mlv.

mycoplasma spp. are common inhabitants of the feline oral cavity and so likely contaminate many cat bite abscesses. mycoplasma spp. are cell-wall deficient and so do not respond to beta-lactam class antibiotics, the class most commonly used for the treatment of cat bite abscesses. the objective of this study was to determine whether mycoplasma spp. are common contaminants of cat bite abscesses and are associated with beta-lactam-resistant clinical disease. privately owned cats with clinical evidence of an acute abscess suspected to be from a cat bite were included in the study. participants were given a free aerobic and anaerobic culture as well as mycoplasma spp. culture and polymerase chain reaction using mycoplasma genus-specific primers. mycoplasma spp. amplicons were sequenced to determine the species. all cats were initially treated with appropriate wound management, were administered an antibiotic in the beta-lactam class (amoxicillin-clavulanate or cefovecin), and were rechecked in person or by phone 7 days after beginning treatment. of the 26 cats entered into the study to date, mycoplasma spp. were amplified from 4 cats (15.4%). of the 2 positive samples with adequate dna for sequencing, one was consistent with m. felis and the other with m. equigenitalium. of the 26 cats, 25 responded by day 7 to the initial treatment, including 3 of the 4 mycoplasma spp. positive cats. the cat that failed initial treatment was positive for m. equigenitalium on both day 0 and day 7 and ultimately responded to administration of a fluoroquinolone. the results suggest that while mycoplasma spp. commonly contaminate cat bite abscesses, routine wound management and antibiotic therapy is adequate for control. however, as mycoplasma spp. 
infections do not respond to beta-lactam class antibiotic therapy, these organisms should be on the differential list for cats with abscesses that fail treatment with this antibiotic class.

molecular diagnostic assays are frequently used in clinical practice to aid in the diagnosis of suspected infectious respiratory diseases in dogs. however, most currently available assays cannot distinguish strains of the organisms used in vaccines from naturally occurring strains. our prior studies demonstrated that previously immune adult dogs are unlikely to shed nucleic acids of vaccine strains of adenovirus 2, parainfluenza, or bordetella bronchiseptica. however, whether this is true for puppies is unknown. puppies (n = 8) at a breeding facility were moved into an area without other dogs at 6 weeks of age. swabs of the nasal and pharyngeal mucosa were collected prior to vaccination and on days 0, 1, 2, 3, 4, 5, 6, 7, 10, 14, 17, 21, 24, and 28 after vaccination with an intranasal adenovirus 2, parainfluenza, and b. bronchiseptica vaccine (intratrac 3, schering plough). the swabs were shipped on cold packs by overnight express for dna/rna extraction and assay in the fastpanel™ pcr canine respiratory disease profile at antech diagnostics. all puppies were negative for the infectious agents prior to vaccination. after vaccination, positive assay results for parainfluenza and b. bronchiseptica were first detected on day 1, and on day 2 for adenovirus 2. by day 3, dna or rna of the agents was amplified from all puppies from both sample sites, and most samples were positive for all 3 agents through day 10. by day 24, only one dog was still positive for b. bronchiseptica. the results indicate that intranasal administration of adenovirus 2, parainfluenza, or bordetella bronchiseptica vaccines commonly leads to positive molecular diagnostic assay results for a short time period after primary vaccination. 
these findings should be considered when assessing the results of these assays in client-owned puppies with respiratory disease.

antimicrobial resistance in escherichia coli is an increasing concern for patients in both human and veterinary hospitals. the drug of choice for treatment in dogs is enrofloxacin, a second-generation fluoroquinolone (fq) whose activity reflects, in part, that of ciprofloxacin. among the difficulties in effective e. coli treatment is rapid detection of fq resistance. the purpose of this study was to determine the specificity and sensitivity of a fret-based assay for the rapid detection of urinary tract infections caused by fq-associated multi-drug resistant e. coli. 306 clinical e. coli isolates from canine urine and 120 clinical veterinary urine samples being examined for e. coli were subjected to susceptibility testing for 14 drugs representing 6 drug classes. pure isolates were designated ndr (no drug resistance), sdr (single drug resistance) and mdr (multi-drug resistant) (n = 101 mdr, 116 sdr and 89 ndr). the minimum inhibitory concentration (mic) for enrofloxacin ranged from 0.03 µg/ml to 512 µg/ml, with high mic generally associated with mdr. extracted dna from culture and from urine was subjected to fret-pcr targeting single nucleotide polymorphisms in gyra. the resulting product was sequenced to detect other polymorphisms. further, to determine the level of detection, microbe-free canine urine was inoculated with 10^6 to 10^1 cfu/ml of 7 isolates characterized by variable susceptibility to enrofloxacin (mic enro = 0.03, 0.06, 0.15, 1, 64, 128, 256 µg/ml). of 306 pure isolates, 50 were confirmed positive for enrofloxacin resistance (mic enro > 4 µg/ml), 43 of which were positively identified by the fret-pcr assay, giving a sensitivity of 86.00%. only 1 isolate that was resistant was not detected (specificity of 96.66%). 
however, among the isolates expressing high-level resistance (mic > 8 × the breakpoint [64 µg/ml]) and mdr (n = 34), sensitivity was 97.06%. of the 120 urine samples, 27 contained e. coli, 7 of which were determined to be fq-resistant by the assay. colony dilutions of e. coli confirmed that the assay was able to detect enrofloxacin resistance at as low as 10^1 cfu/ml. the relationship between cfus and the peak of the -(d/dt) fluorescence of the melting curve was r² = 0.988. these results indicate that the assay is capable of detecting not only the presence of escherichia coli in clinical samples, but also the severity of fluoroquinolone resistance and infection.

the fluoroquinolones (fqs) are a key class of synthetic antimicrobial agents with an established history, in both humans and companion animals, of efficacy for treatment of urinary tract infections (utis) caused by e. coli, for which fluoroquinolones are common therapy. among the commonly used fqs in dogs and cats are the 2nd-generation drugs enrofloxacin, marbofloxacin and orbifloxacin (all veterinary approved) and the human drug ciprofloxacin; no 3rd- or 4th-generation fq is routinely used. the purpose of this study was to assess the in vitro activity of different generations of fqs toward e. coli uropathogens whose phenotypes range from no resistance to multidrug resistance. a total of 51 uropathogenic canine or feline e. coli isolates had been subjected to susceptibility testing for 6 drug classes (15 drugs) and phenotyped as to resistance: none (ndr, n = 12), single (sdr, n = 15), or multiple (mdr, resistance to 2-6 drug classes; n = 24). mdr included isolates susceptible (enr-s mdr, n = 12) or resistant (enr-r mdr) to enrofloxacin. the minimum inhibitory concentrations (mics) for 11 quinolones (one 1st-generation, three 2nd-generation, four 3rd-generation and three 4th-generation) were determined for these isolates using broth microdilution methods according to clsi guidelines (e. coli atcc® 25922 served as a negative control). 
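broth microdilution reads an mic off a two-fold (doubling) dilution series; a sketch of the series and of the resistance call used in these abstracts (the 0.03-512 µg/ml range and the mic > 4 µg/ml enrofloxacin cutoff are taken from the text; treating the reported "0.03" as the conventional rounding of 0.03125 is an assumption):

```python
def doubling_series(top, bottom):
    """Two-fold dilution series: halve from `top` while staying >= `bottom`."""
    concs = [float(top)]
    while concs[-1] / 2 >= bottom:
        concs.append(concs[-1] / 2)
    return concs

def is_enro_resistant(mic, breakpoint=4.0):
    """Resistance call from the abstract: MIC_enro > 4 ug/ml."""
    return mic > breakpoint

series = doubling_series(512, 0.03)
# 15 wells: 512, 256, ..., 0.0625, 0.03125 (reported as "0.03")
print(len(series), series[-1])
print(is_enro_resistant(64), is_enro_resistant(0.06))
```

an isolate's mic is the lowest concentration in this series that inhibits visible growth, so mics such as 0.06, 1, 64 and 256 µg/ml in the abstract all fall on the doubling grid.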
mic statistics were generated for each drug among phenotypes. the results showed that companion animal e. coli expressing ndr or sdr are largely susceptible to 2nd- to 4th-generation fqs. however, isolates expressing resistance to a 1st- or 2nd-generation quinolone also express high-level resistance, based on the mic 90, to 3rd- and 4th-generation fqs. the overall potency (mic) for the 11 drugs for isolates not expressing enr resistance (that is, ndr, sdr and enr-s mdr) is gat

canine leproid granuloma (clg) was first reported in brazil in 1990. over the past 20 years, 37 cases of clg were diagnosed in são paulo, brazil, and clinical and epidemiological findings were similar to those reported in australia. all dogs presented with one or more papular, nodular or tumoral lesions, unilateral or bilateral, ulcerated or not, mainly observed on the dorsal surface of the ear, the site most commonly affected. in general, the lesions are painless and confined to the subcutis and skin; the disease does not involve regional lymph nodes, nerves or internal organs, and systemic clinical signs frequently are absent. short-coated breeds show a marked predisposition for this disease. the definitive diagnosis of clg was obtained by histological examination of skin biopsies that were stained with acid-fast (ziehl-neelsen) and diff-quik® stains. thirty-one (83.7%) of the dogs were purebred; in this study the breed pattern comprised 19 (61.3%) boxers, 2 (6.5%) german shepherds, 2 (6.5%) labrador retrievers, 1 (3.2%) dobermann, 1 (3.2%) brazilian terrier, 1 (3.2%) golden retriever, 1 (3.2%) bulldog, 1 (2.8%) american pitbull, 1 (3.2%) mastiff, 1 (3.2%) fila brasileiro and 1 (3.2%) cocker spaniel; 6 (16.3%) were of unknown breed. nineteen (51.4%) of the thirty-seven dogs were males. twenty (54.1%) dogs were 4-6 years old. in most cases, dogs presented with unilateral or bilateral ear lesions, but rarely thoracic, foot and caudal lesions. 
the animals were successfully treated by use of oral rifampicin ("the brazilian protocol") or oral enrofloxacin and topical rifamycin.

anaplasma phagocytophilum is being recognized more frequently in dogs in endemic areas. currently, most suspected cases are evaluated for a. phagocytophilum antibodies by immunofluorescence assay (ifa) or elisa. since a. phagocytophilum causes an acute disease, detection by antibody measurement may be negative on initial evaluation. it is possible that a. phagocytophilum dna can be amplified from blood or synovial fluid prior to seroconversion. wild-caught ixodes scapularis adult ticks from rhode island were allowed to feed on 18 young adult (2-3 years), mixed-sex beagles for up to 7 days. blood (weekly for 12 weeks), serum (weekly for 12 weeks), and synovial fluid (radiocarpal joint; alternating arthrocentesis weekly for 6 weeks) were collected prior to tick attachment and then weekly after tick attachment. joint fluid cytology was performed, and total dna was extracted from blood and synovial fluid and assayed in a proprietary real-time pcr assay (fastpanel™) that amplifies the dna of anaplasma phagocytophilum, a. platys, ehrlichia canis, e. chaffeensis, and e. ewingii. serum was assayed for a. phagocytophilum antibodies by ifa. times to first positive results for serology and pcr were compared by paired student's t test. none of the beagles developed clinical evidence of disease, and no major changes in synovial fluid cytology were detected over time. of the 18 beagles, 15 were positive for a. phagocytophilum dna in blood or synovial fluid or for ifa antibodies in at least one sample after tick attachment. antibody titers appeared in 14 of 15 dogs from weeks 2 to 12 (median to 1st positive = 3 weeks ± 1). titer magnitude ranged from 1:40 to 1:10,240. anaplasma phagocytophilum dna was amplified from the blood of 14 of 15 dogs with positive test results, ranging from 1 to 12 weeks (median to 1st positive = 2 weeks ± 1.5). 
anaplasma phagocytophilum dna was amplified from synovial fluid from 8 of 15 dogs between weeks 1 to 4 (median to 1st positive = 4 weeks ± 1). of the 8 dogs, 7 were pcr positive for only one week and 1 dog was pcr positive for two consecutive weeks. of the 15 dogs, 13 were positive for a. phagocytophilum in both blood and joints by dna analysis. anaplasma phagocytophilum dna was amplified from blood more quickly than seroconversion was detected by ifa antibody titer (t = −3.4, p < 0.01) or than dna was amplified from synovial fluid (t = 5.4, p < 0.001). anaplasma phagocytophilum dna can be amplified from the blood prior to development of detectable antibody titers by ifa. amplification of a. phagocytophilum dna from synovial fluid does not occur in all dogs and appears to be transient in most dogs, and a negative test result does not preclude a diagnosis of a. phagocytophilum infection.

canine granulocytic anaplasmosis and granulocytic ehrlichiosis are tick-transmitted infections caused by anaplasma phagocytophilum (aph) and ehrlichia ewingii (eew), respectively. both organisms induce an acute clinical disease, frequently accompanied by fever, polyarthropathy and thrombocytopenia. however, aph and eew have different tick vectors, i.e. ixodes scapularis and amblyomma americanum, respectively, with different but overlapping geographic distributions. in addition, infection outcome may be affected by other regional tick-transmitted pathogens, such as borrelia burgdorferi (mn) or ehrlichia chaffeensis (ar). therefore, we compared serology and pcr results derived from dogs examined at two private practices located in highly endemic areas for either aph or eew. serum collected between april-december 2005 from minnesota dogs (n = 420) was tested by snap® 4dx® and whole blood was tested by aph pcr. serum collected from arkansas dogs (n = 708) for 1 year beginning in august 2009 was tested using microtiter plate elisas for antibodies to eew, e. canis, and e. 
chaffeensis (ech), while whole blood was tested by ehrlichia pcr. comparisons were evaluated using chi-square (χ²) and binomial (w) tests with an alpha of 5%. the above results indicated that dogs are frequently exposed to both aph and bb in mn, whereas ar dogs are often exposed to eew but less frequently to ech. antibodies to e. canis peptides were found infrequently in both mn and ar, with only 10 seroreactive dogs detected in both locations. active eew infection, as determined by pcr, was four times more frequent in ar pet dog seroreactors than active aph infections among aph seroreactors. although both organisms induce acute disease, the number of aph and eew pcr-positive dogs that were also seropositive was relatively high, suggesting that both organisms induce persistent infections or that dogs are frequently re-infected despite the presence of a measurable humoral immune response. additional studies are needed to determine regional infection profiles in other areas that are endemic for these pathogens.

anaplasma phagocytophilum and ehrlichia canis are two of the most common vector-borne disease agents that infect dogs and cats. while pcr assays that amplify the dna of these agents from blood are currently available, there is minimal information concerning the performance of these assays in different commercial laboratories that utilize different techniques. the purpose of this study was to compare the e. canis and a. phagocytophilum results of two different laboratories on the same samples collected from client-owned animals. veterinarians in 3 states (az, md, ct) were recruited to participate in the study based on high prevalence rates for e. canis or a. phagocytophilum infection. blood in edta was collected from dogs or cats with fever, thrombocytopenia, or clinical evidence of polyarthritis, and an equal volume of the same blood sample was simultaneously shipped on cold packs by overnight express to colorado state and to antech diagnostics. 
standard operating procedures at each laboratory were followed for total dna extraction and amplification of gapdh as the dna control. at colorado state university, a previously published pcr assay that amplifies the dna of ehrlichia spp., anaplasma spp., neorickettsia spp., and wolbachia was performed on each sample, with positive amplicons sequenced to determine the species. at antech diagnostics, a proprietary real-time pcr assay (fastpanel™) that amplifies the dna of anaplasma phagocytophilum, a. platys, ehrlichia canis, e. chaffeensis, and e. ewingii was performed. in the study to date, samples from 35 animals (30 dogs and 5 cats) have been assayed at both laboratories. dna of a. phagocytophilum (2 cats and 2 dogs) and e. canis (1 dog) was amplified at both laboratories, with a percentage agreement between laboratories of 100%. the results to date suggest that the assay results of the two laboratories for a. phagocytophilum and e. canis are comparable.

ehrlichiosis and bartonellosis are zoonotic diseases caused by extremely small, obligate intracellular bacteria that require a mammalian reservoir and a blood-sucking arthropod vector. human ehrlichiosis is present in peru, with a seroprevalence as high as 23% in the highlands. bartonella species in humans have also been identified in peru since 1905 (b. bacilliformis). recently, a new species (b. rochalimae) was isolated from an american woman who became febrile after travelling to peru. dogs can become infected with the same ehrlichia species, and with the majority of the bartonella species, that affect human beings. the role of dogs as reservoirs for human infections has not been clearly established, but exposure and/or infection in dogs has been used to monitor human exposure to tick-borne disease (tbd), since they share the same environment. the objective of this study was to determine the serological and molecular prevalence of anaplasmosis, ehrlichiosis and bartonellosis in rural dogs in the highlands of peru. 
a total of 122 healthy adult dogs were enrolled in this study from four communities in the central highlands of peru: ondores, pachacayo, san juan de pachayo, and canchayllo. edta-blood samples were collected from 108 dogs, whereas serum samples were available from 110 dogs. serum samples were tested for ehrlichia canis, anaplasma, borrelia burgdorferi and dirofilaria immitis infections using a qualitative dot-elisa (snap® 4dx®). the edta-blood samples were screened by conventional pcr for the groel gene of the genera anaplasma and ehrlichia, and for the intergenic transcribed spacer of the genus bartonella. speciation was conducted by nucleotide sequencing. bartonella genus dna was detected in seven of the 108 dogs (6.5%), and ehrlichia canis dna was detected and sequenced from one dog (0.9%). four of the bartonella-positive samples were identified by dna sequencing as b. rochalimae (genbank accession numbers hq185696 and hq185695). the other three bartonella-positive samples were identified as b. vinsonii subspecies berkhoffii, a causative agent of endocarditis in dogs and humans. no dog was positive for anaplasma species by dna amplification, but one dog was seroreactive for this genus (0.9%). no specific antibodies against ehrlichia canis or borrelia burgdorferi and no antigens of dirofilaria immitis were detected. this study expands the current knowledge about tbd in peru and describes for the first time infection with b. rochalimae in dogs in peru. the results suggest that dogs may play an important role in the epidemiology of this infection in humans, since they can be asymptomatic but bacteremic.

bartonella spp. dna is commonly amplified from the blood of cats exposed to ctenocephalides felis. in previous work, it was shown that cats administered imidacloprid and experimentally exposed to b. henselae-infected cats and c. felis did not become pcr positive for b. henselae, whereas untreated cats all developed infection. 
the purpose of this study was to determine whether administration of imidacloprid to client-owned cats likely to be exposed to bartonella spp. and c. felis in the field lessens the prevalence of bartonella spp. infection. veterinary students in tennessee and florida who owned cats that spent at least 20 days per month outside, and who were willing to apply imidacloprid to their cats monthly for six months, were recruited for the study. blood for bartonella spp. pcr assay was collected from the cats seven months after starting imidacloprid administration and assayed at colorado state university. to serve as a control group that was unlikely to have been administered flea control products in the previous 6 months, blood was collected from feral cats during tnr programs in each of the two cities and assayed for bartonella spp. dna. the bartonella spp. dna prevalence rates between the groups were compared by chi-square analysis with significance defined as p < 0.05. the overall prevalence rates for bartonella spp. dna in the blood of veterinary student cats (7.4%) and the feral cats (39.1%) were significantly different (p < 0.0001). the distribution of results is shown in table 1. the results suggest that florida feral cats were more commonly exposed to c. felis than tennessee feral cats. while the cats in the groups were not exactly matched, the student cats were allowed outdoors for approximately 20 days per month and lived in the same cities as the feral cats, so c. felis exposure rates were likely similar. as previously shown in experimentally exposed cats, the use of imidacloprid monthly may influence transmission rates of bartonella spp. amongst naturally exposed cats.

in an endemic area for leishmaniosis and filariosis, coinfection can occur, and immunomodulation produced by wolbachia might influence the clinical signs and progression of both diseases. 
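the chi-square comparison of bartonella spp. prevalence between owned and feral cats above can be sketched in pure python; the group sizes below are hypothetical (the abstract reports only the percentages and p < 0.0001), chosen to be consistent with the stated 7.4% and 39.1% rates:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], rows = groups, cols = positive/negative."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# hypothetical counts matching the reported rates:
# 4/54 student-owned cats positive (7.4%), 43/110 feral cats positive (39.1%)
stat = chi2_2x2(4, 50, 43, 67)
print(f"chi-square = {stat:.2f}")  # well above the 3.84 critical value at p = 0.05
```

with these illustrative counts the statistic is far beyond the 1-degree-of-freedom critical value, consistent with the reported p < 0.0001.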
the aims of the present study were 1) to determine the prevalence of wolbachia in dogs infected with dirofilaria immitis (di) and other filarial nematodes, 2) to evaluate the level of coinfection with leishmaniosis and filariosis by molecular assays, and 3) to evaluate any associations between leishmania infantum (li) infection, filariosis with or without wolbachia, and clinical presentation and outcome. statistical differences between groups were tested for significance by the fisher exact test using spss v.14.0 software (significance: p-value < 0.05). one hundred and eighteen owned dogs from southeastern spain presenting for clinical evaluation were included in the study. the results of this study highlight the increased sensitivity of pcr for diagnosis of filariosis, confirm the presence of wolbachia in dogs from the mediterranean basin, show the increased severity of hwd when li-filaria coinfection is present, and suggest that wolbachia could play a protective role against leishmaniosis. wolbachia antigens can stimulate a th1-type immune response, as has been previously described. however, other factors (such as treatment with doxycycline) might be responsible for the lower prevalence of wolbachia among filaremic dogs infected with li, and further studies must be done to clarify this interaction.

the purpose of the present study was to investigate the occurrence of leishmaniasis in 200 cats in the municipality of araçatuba, são paulo, brazil, an endemic area for canine visceral leishmaniasis. animals were evaluated by direct parasitological examination of lymphoid organs and serology for visceral leishmaniosis by enzyme-linked immunosorbent assay (elisa) and indirect immunofluorescence (ifat). thirteen (6.5%) out of the 200 cats studied were diagnosed with visceral leishmaniasis: eight (4%) by parasitological diagnosis through cytological examination of lymphoid organs, six (3%) were considered positive by elisa, and one (0.5%) by ifat. 
only two (15.38%) of the thirteen infected cats had clinical signs, characterized by the presence of crusty lesions on the dorsal cervical region and hepatosplenomegaly. regarding age, five cats (38.5%) were between six months and two years old, with the others older than 2 years (61.5%). only one cat (7.7%) was positive by all three methods employed. pcr confirmed leishmania sp. infection in nine (69.2%) cats, of which six were diagnosed previously by cytological examination, two by elisa, and one by all three techniques employed.

since its first description in 1912, feline leishmaniosis has been reported in several countries. the purpose of this study was to assess the prevalence of leishmania chagasi infection in cats showing dermatologic lesions from an endemic area for visceral leishmaniasis in brazil. animals were evaluated by direct parasitological examination of lymphoid organs, an immunohistochemical technique for detection of amastigotes in lesioned skin, and serology for visceral leishmaniosis by enzyme-linked immunosorbent assay (elisa) and indirect immunofluorescence (ifat). twenty-seven (49.1%) of the 55 cats studied were diagnosed with visceral leishmaniosis. twelve (44.4%) were positive by parasitological diagnosis; amastigote forms of leishmania sp. were identified in lymphoid organs from 10/55 (18.2%) infected cats, and the immunohistochemical technique allowed the identification of nine (16.4%) positive animals. the seroprevalence of leishmaniosis was 25.4% (14/55) by elisa and 10.9% (6/55) by ifat. fiv-specific antibodies were found in 6/55 cats (10.9%), of which 5/6 (83.3%) had leishmaniosis. real-time pcr confirmed leishmania chagasi infection in three cats. based on the evidence of the high occurrence of leishmaniosis in cats in this study, this disease should be included in the differential diagnosis of skin diseases of felines living in endemic areas.

blastomyces dermatitidis is a dimorphic fungus that commonly affects large-breed hunting dogs. 
a recent advancement in diagnosis has come with the advent of a urine antigen screening test that has both high sensitivity and moderately high specificity. therapy for the disease involves use of antifungal agents, usually itraconazole, and the length of treatment is based chiefly on resolution of clinical and radiologic signs. with the new urine antigen test, however, a noninvasive route of monitoring treatment progress is available; it could be an adjunct tool for determining treatment efficacy and may even reveal a need for prolonged treatment. therefore, the purpose of this study was to determine whether monitoring the blastomyces urine antigen test and comparing it to pulmonary radiographic signs would elucidate the necessity for prolonged antifungal therapy, even after resolution of radiologic signs. to this end, a retrospective case review was performed that identified a series of client-owned animals with naturally occurring blastomycosis. the inclusion criteria were radiographic pulmonary parenchymal signs consistent with fungal disease and urine antigen-confirmed blastomycosis, with repeated testing of both radiographs and urine antigen quantification as monitoring parameters until negative results were achieved in each. ideally, intervals between testing dates were between two and five months. radiographs were considered negative if all radiographic changes had resolved, or if repeated radiographs separated by at least one month were considered static after documented improvement had occurred from the original diagnostic radiographs (suspected scarring). urine antigen testing was considered negative if concentrations were less than 1.0 enzyme immunoassay units, a reference interval set by the testing laboratory. preliminary data analysis reveals that resolution of radiographic signs of blastomycosis occurred earlier in many of the cases presented than did attaining a negative urine antigen concentration. 
ceasing treatment 1 month after radiographic resolution of signs, as has been recommended in the past, might have resulted in premature discontinuation of therapy in many of the cases. monitoring of urine antigen concentrations may be of additional clinical use for determining when to stop treatment in cases of blastomycosis. persistent elevation of urine antigen concentrations after radiographic resolution of infection may account for apparent recrudescence of blastomycosis after suspected clinical resolution. giardia spp. and cryptosporidium spp. are both known to cause infections in dogs and humans in the united states. nevertheless, prevalence rates for dual infection in dogs have not been widely reported. in this study, fecal samples were collected from dogs housed in a northern colorado animal shelter (n = 121), dogs owned by veterinary students in northern colorado (n = 132), and dogs from the pine ridge reservation in south dakota (n = 84). each sample was assayed with a commercially available fluorescent antibody assay that detects giardia spp. cysts and cryptosporidium spp. oocysts. samples positive for giardia spp. or cryptosporidium spp. with adequate dna available for sequencing were genotyped at the glutamate dehydrogenase (gdh) and heat shock protein-70 (hsp-70) genes, respectively. overall, 45 (13.3%) of the dogs had current evidence of a protozoal infection (table 1). the dogs from pine ridge reservation had the highest prevalence rates for giardia infection and also for dual infections. from the student dogs, sequencing was successful for three giardia isolates (assemblage d from 2 dogs; assemblage c from one dog) and one cryptosporidium isolate (c. canis). from the reservation dogs, sequencing was successful for nine giardia isolates (assemblage d from 4 dogs; assemblage c from 5 dogs) and one cryptosporidium isolate (c. canis).
cryptosporidium and giardia co-infections are commonly detected in dogs; in this study dual infections were more common than cryptosporidium infections alone. further studies will be required to determine the clinical importance of this finding. although the giardia and cryptosporidium isolates that were sequenced were the dog-specific assemblages/genotypes, more samples should be analyzed to assess the potential for zoonotic transmission of either parasite. the current study was conducted to determine the prevalence of intestinal parasites in dogs visiting the veterinary teaching hospital, chiang mai university, northern thailand. fecal samples (n = 301) were collected and submitted by owners between august 2009 and february 2010. demographic and geographic data were recorded. intestinal parasitic infection was diagnosed both by microscopic examination after zinc sulfate centrifugation flotation and by a commercially available ifa for giardia spp. and cryptosporidium spp. polymerase chain reaction and dna sequencing were performed on all giardia- and cryptosporidium-positive samples to provide genotypic information. the overall prevalence of intestinal parasitic infection in dogs in chiang mai was 38.9%. the most prevalent parasite was giardia spp. (24.9%), followed by ancylostoma spp. (12.0%), cryptosporidium spp. (7.6%), cystoisospora spp. (6.0%), toxocara canis (2.7%), trichuris vulpis (2.0%), coccidian-like organisms (1.7%), toxascaris leonina (0.7%), and strongyloides spp. (0.7%). the prevalence of at least one parasite in dogs < 1 year, 1-7 years, and > 7 years was 49.3%, 39.8%, and 27.4%, respectively. of the infected dogs, 59.0%, 34.2%, 5.1%, and 1.7% were infected with one, two, three, and four organisms, respectively. available dna sequences from giardia spp. positive samples were shown to be dog specific. only one adequate dna sequence was available for cryptosporidium spp., which was shown to be c. canis.
these findings suggest that intestinal parasitic infection is common in dogs in chiang mai, thailand. dogs could be a potential source of zoonotic intestinal parasitic infection, since dogs in this area are allowed to roam freely. a regular deworming program is indicated to prevent transmission not only among dogs but also to humans. a retrospective study was conducted on 135 parasite-positive fecal specimens, consisting of 90 canine, 29 feline, 14 equine and 2 from other host species, comparing recovery of eggs, protozoan cysts and coccidian oocysts using 2 standardized methods of parasite concentration: formalin/ethyl acetate (f/ea) sedimentation concentration and the commercial fecalyzer (flotation) kit procedure. specimens were processed by each technique either according to the manufacturer's instructions or according to standard laboratory procedures. formalin/ethyl acetate concentrations used a ratio of 10 ml normal saline to 4 ml ethyl acetate for extraction of lipophilic material from pelleted stool samples previously fixed in sodium acetate/acetic acid/formalin (saf) solution. flotations with the fecalyzer kit were performed with concentrated zinc sulfate solution (s.g. 1.22). the range of parasites recovered from these specimens included flagellate cysts (40 total), coccidian oocysts (28 total), ova and larvae of nematodes (80 total), and ova of trematodes (12 total) and cestodes (16 total). recovery rates by fecalyzer flotation were good for protozoan cysts, coccidian oocysts and nematode eggs and larvae, but very poor for cestode and trematode eggs. formalin/ethyl acetate concentration showed excellent recovery of all parasites and consistently outperformed fecalyzer in recovery rates. recoveries by f/ea concentration were higher by 27.5% for giardia, by 10.7% for coccidia and by 10.0% for nematode eggs and larvae.
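recovery-rate differences of this kind can be compared with a two-proportion z-test; a minimal sketch follows, in which the counts are hypothetical and not the study's raw data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled-proportion standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# hypothetical counts: giardia recovered in 35/40 specimens by F/EA
# sedimentation vs. 24/40 by flotation (illustrative only)
z = two_proportion_z(35, 40, 24, 40)
# |z| > 1.96 corresponds to significance at the 95% confidence level
```

a z statistic this large would support the conclusion that sedimentation concentration recovers significantly more parasites than flotation at the 95% confidence level.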
with the exception of coccidian oocysts, based on z-test analyses, recovery rates were significantly higher, at a confidence level of at least 95%, for all parasites using formalin/ethyl acetate sedimentation concentration. although capc recommends flotation with centrifugation for standard fecal ova and parasite examination in veterinary patients, sedimentation concentration methods are widely and effectively used in human diagnostic parasitology laboratories. these results provide good evidence for the use of f/ea concentration as a preferred alternative to flotation procedures for stool ova and parasite examinations in veterinary laboratories. cyclosporine and glucocorticoids are powerful immunosuppressive agents used to treat many inflammatory diseases. cyclosporine inhibits calcineurin-dependent pathways of t-cell activation and the resultant cytokine production, and glucocorticoids directly inhibit genes coding for cytokines. little work has been done comparing the effects of these agents on cytokine production in dogs. our study assessed these effects by measuring t-cell cytokine production using flow cytometry, and cytokine gene expression using quantitative reverse transcriptase polymerase chain reaction (qrt-pcr), in activated canine t-cells treated with cyclosporine and dexamethasone. for flow cytometric assays, peripheral blood mononuclear cells were separated using density gradients and cultured for 12 hours in the presence of cyclosporine (5, 25, or 100 ng/ml), dexamethasone (10^-7, 10^-6, or 10^-5 m), or cyclosporine plus dexamethasone. for qrt-pcr, whole blood was cultured for 5 hours with the same drugs at the same concentrations, and rna was then extracted from leukocytes. expression of the cytokines il-2 and ifn-g was analyzed in pma/ionomycin-activated t-cells by flow cytometry, and gene expression of il-2 and ifn-g in activated t-cell populations was assessed via qrt-pcr.
flow cytometry and qrt-pcr both demonstrated inhibition of il-2 and ifn-g that was generally dose-dependent in response to both cyclosporine and dexamethasone. flow cytometry results, averaged from samples collected from 3 different dogs, are shown in figure a. similar results were achieved using qrt-pcr (figure b). suppression of il-2 and ifn-g in activated t-cells has potential as an indicator of the efficacy of cyclosporine and glucocorticoids in suppressing canine t-cell function in vivo, and may therefore be of value for characterizing the immunosuppression induced by these drugs in clinical patients. idiopathic eosinophilic diseases are described in several breeds but are overrepresented in rottweilers. the immunopathogenesis of idiopathic eosinophilic disorders is poorly characterised. studies in people highlight the importance of cytokines, particularly interleukin-5 (il-5), in mediating eosinophil maturation, differentiation, egress from the bone marrow, migration and polyclonal expansion. eotaxin-2 and eotaxin-3 also appear important for induction of chemotaxis and release of reactive oxygen species from eosinophils. the aim of the current study was to establish whether definable differences in specific cytokines associated with mediation of eosinophil production and survival are present between healthy rottweilers, non-rottweilers and rottweilers with non-parasitic eosinophilia. secondly, by evaluating cytokine profiles, the study aimed to improve understanding of the pathophysiology of eosinophilia, thereby assisting development of potential molecular treatment options.
quantitative real-time reverse transcriptase polymerase chain reaction (qrt-pcr) assays were used to quantify messenger rna (mrna) encoding the cytokines il-4, il-5, il-10, il-23p19, il-12p35, il-12p40, il-18 and interferon gamma (ifn-g) and the chemokines eotaxin-2 and eotaxin-3 in peripheral blood mononuclear cell (pbmc) samples obtained from healthy non-rottweiler dogs with normal eosinophil counts (n = 5) and rottweilers with normal (n = 6), mildly increased (n = 7) and high (n = 3) eosinophil counts. quantification of serum ifn-g was also performed using a commercially available canine-specific elisa. all samples were positive for housekeeping genes, and all cytokines could be quantified with the exception of eotaxin-2 and -3. results were normalised using three stably expressed housekeeper genes (rpl13a, sdha and ywhaz), and a relative copy number was calculated for each sample, with the sample with the fewest copies given a value of 1. no significant differences were found between groups, but there was a tendency for ifn-g mrna expression to be lower in the rottweilers with moderate to severe eosinophilia versus control dogs (p = 0.062). this trend was not seen in the concentration of serum ifn-g quantified by elisa, as there were no significant differences between normal and diseased animals. in conclusion, there were no significant differences in cytokine mrna profiles between normal dogs and rottweilers with varying degrees of eosinophilia. additional studies including larger numbers of affected dogs are warranted before any firm conclusions can be drawn. the presence of a large amount of antibody on the erythrocyte membrane can accelerate red blood cell (rbc) removal by the mononuclear phagocyte system. an antigenic stimulus, such as that provided by vaccines, can induce hypersensitivity reactions and may accelerate rbc destruction.
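the housekeeper-normalised relative copy number described in the eosinophilia study above can be sketched as follows. the abstract does not give its exact formula, so this is one common ct-based scheme (normalisation against the mean housekeeper ct, amplification efficiency assumed to be 2), with invented ct values:

```python
def relative_copy_numbers(target_ct, housekeeper_cts):
    """Normalize target Ct values against the mean Ct of the housekeepers,
    then rescale so the lowest-expressing sample equals 1.
    (One common scheme; the abstract does not specify its exact calculation.)"""
    rel = []
    for tct, hks in zip(target_ct, housekeeper_cts):
        hk_mean = sum(hks) / len(hks)     # Ct is already on a log2 scale
        rel.append(2.0 ** (hk_mean - tct))  # amplification efficiency assumed = 2
    lowest = min(rel)
    return [r / lowest for r in rel]

# hypothetical Ct values for IFN-g in three samples, each with three
# housekeeper Cts (RPL13A, SDHA, YWHAZ order is illustrative)
rcn = relative_copy_numbers(
    [28.0, 26.5, 30.0],
    [[20.0, 21.0, 19.0], [20.5, 20.5, 20.0], [20.0, 20.0, 20.0]])
```

under this scheme the sample with the fewest copies is assigned a value of 1, matching the scaling convention stated in the abstract.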
the study objective was to evaluate the potential of the erythrocyte membrane to induce a lymphocyte proliferative response in recently immunized dogs. healthy adult dogs (n = 17) were immunized with multiple antigens (a commercial vaccine with eight antigens: distemper virus, parvovirus, coronavirus, parainfluenza virus, adenovirus, infectious hepatitis virus and leptospira; and anti-rabies). blood samples from each animal were collected into edta tubes at two time points: pre (immediately before vaccination) and post (28 to 35 days after vaccination). mononuclear cells were separated by gradient, labeled with cfse-fitc and cultured. the stimuli used for lymphocyte proliferation were autologous erythrocyte membrane (aem) and concanavalin a (cona). aem was obtained by hypotonic lysis and tested at two concentrations (m1: 0.1 ug/100 ul; m2: 0.2 ug/100 ul). the proliferation assay was evaluated by flow cytometry and analyzed with specific software. the proliferation index (pi) was calculated by dividing the fluorescence intensity of the basal sample by that of the stimulated one. statistical analysis was performed using the paired t-test for parametric samples and the wilcoxon test for non-parametric samples (α = 0.05). at the tested concentrations, autologous erythrocyte membrane did not constitute a stimulus for lymphocyte proliferation in vitro, either before or after the vaccination procedure. additionally, there was no evidence of lymphocytes self-reactive to erythrocyte membrane after vaccination. e. coli is a common cause of canine urinary tract infection. current treatment emphasizes eradication of established infection rather than infection prevention, but increased antibiotic resistance necessitates strategies to prevent infection. proanthocyanidins found in cranberry juice inhibit e. coli attachment to human uroepithelial cells, impairing bacterial adherence and colonization. we hypothesized that purified cranberry extract (ce) inhibits bacterial adhesion to canine uroepithelial cells.
five healthy female dogs received an oral ce supplement (vetoquinol; 100 mg ce/tablet) according to body weight for 30 days. voided urine collected from each dog before (pre) and after (30-day) completion of the protocol was membrane filtered (0.22 um) and stored frozen (-20°c). bacterial adhesion was determined using an in vitro assay. briefly, urine samples were incubated with a uropathogenic e. coli strain that had been subcultured to promote fimbriae expression. urine samples containing e. coli were then incubated in 96-well plates containing methanol-fixed madin-darby canine kidney (mdck) cells for 1 hr (35°c) to permit bacterial attachment. after incubation, plates were washed to remove nonadherent bacteria and fresh media added. plates were incubated (35°c) for 4 hr to grow attached bacteria to detection level. bacterial concentration in each well was determined using a spectrophotometer (650 nm). results were analyzed using the chi-square test. ce significantly reduced bacterial adhesion by 30% (n = 5; p < 0.05) in 30-day urine samples compared with pre samples. the results show that ce supplementation can reduce adhesion of uropathogenic e. coli to canine uroepithelium and suggest one mechanism by which ce might improve urinary tract health. the purpose of this study was to determine the prevalence of 4 urovirulence factors (uvfs) and antimicrobial resistance in canine uropathogenic e. coli (upec) and to evaluate associations between uvfs and antimicrobial resistance. two hundred and twenty-one upec isolates from samples collected from 184 different canine patients submitted to the university of tennessee microbiology laboratory in 2007 were evaluated. a multiplex pcr assay was used to detect cnf, hlyd, sfa/foc, and papgiii in dna lysate. in vitro susceptibility was evaluated, and if an isolate was resistant to any antimicrobial in a class, it was considered resistant to that class.
of the 221 samples, the number of uvfs expressed per isolate was: 0 = 127/221 (57%), 1 = 27/221 (12%), 2 = 4/221 (2%), 3 = 22/221 (10%), and 4 = 41/221 (19%). expression of uvfs was sfa (33%), hly (24%), cnf (24%), and pap (19%). presence of 4 uvfs was associated with less resistance (p < 0.0001). the combination of hly, cnf, and sfa was associated with less resistance (p < 0.0001). when sfa was present alone, resistance was lower (p < 0.0001). average resistance to antimicrobial classes by number of uvfs expressed was: 0 uvf = 4.1 ± 3.3 classes, 1 uvf = 1.2 ± 1.1 classes, 2 uvf = 0.0 ± 0.0 classes, 3 uvf = 0.8 ± 1.6 classes, and 4 uvf = 0.5 ± 1.2 classes. urovirulence factors were present in a moderate number of upec and correlated negatively with resistance. neither individual nor combinations of uvfs were associated with increased resistance. obesity is associated with several comorbidities in dogs including pancreatitis, osteoarthritis, oral disease, neoplasia, and lower urinary tract disease. investigator observations led to the hypothesis that morbidly obese dogs are more likely to have asymptomatic bacterial urinary tract infections (abuti) than overweight and moderately obese dogs. therefore, a pilot study was conducted to screen for abuti in obese dogs. urinalysis with urine culture and dual-energy x-ray absorptiometry (dxa) were performed on forty-three dogs with body fat (bf) percentages ranging from 36 to 56%. following dxa, subjects were categorized as obese (o) (bf = 35-44%, n = 17) and morbidly obese (mo) (bf > 45%, n = 26). no dogs had owner-reported symptoms indicative of uti. the prevalence of abuti was 6% (n = 1) in o dogs and 31% (n = 8) in mo dogs. the dog in the o group with abuti was close to being mo, with a bf of 44.3%. of the nine dogs with positive cultures, 4 were neutered males and 5 were spayed females.
the prevalence ratio of abuti in mo dogs was 7.5, indicating that dogs with 45% or greater bf are 7.5 times more likely to have the condition than dogs with < 45% bf. the results of this pilot study coincide with other surveillance data describing an increased prevalence of lower urinary tract disease in obese dogs. in conclusion, dogs with body fat percentages greater than 45% are at risk for abuti, and veterinarians should consider screening all morbidly obese patients for urinary tract infections. calcium carbonate (cac) is recommended to decrease phosphate intake in chronic kidney disease. however, its effect is poorly documented in dogs. our objectives were to assess within-day, postprandial and cac effects on phosphatemia variations in healthy dogs. phosphatemia was measured every 2 hours for 24 hours in eight adult healthy beagle dogs in i) fasted condition and ii) a 2 × 2 crossover design. one group received cac mixed with a maintenance diet (0.8% phosphorus), while the second group received the diet alone. after a 1-week wash-out period, groups were switched. a general linear model was used to test the period, sequence, treatment, dog and time effects on phosphatemia and the area under the phosphatemia-versus-time curve (auc 0-24). a significant (p < 0.001) circadian variation existed in fasted dogs. the maximum difference (mean: -1.9 mg/dl; 95% c.i.: -2.4 to -1.4 mg/dl) was observed between 8 a.m. and midnight. the auc 0-24 with cac (5936 ± 533 mg.min/dl) was mildly but significantly lower (p = 0.027) than without cac (6239 ± 631 mg.min/dl). however, it was similar to the auc 0-24 in fasted conditions. feeding, with or without cac, has a minor effect on phosphatemia. however, circadian variation of fasted phosphatemia might affect its interpretation. gfr measurement permits diagnosis of kidney injury prior to development of azotemia, and is the gold standard for kidney function assessment.
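an auc 0-24 of the kind reported in the phosphatemia study above can be obtained by trapezoidal integration of the 2-hourly measurements; a minimal sketch with hypothetical values (not the study's data):

```python
def auc_trapezoid(times_h, values):
    """Area under a concentration-time curve by the trapezoidal rule.
    Times in hours are converted to minutes so the result is in
    mg.min/dl, matching the units reported in the abstract."""
    auc = 0.0
    for (t0, v0), (t1, v1) in zip(zip(times_h, values),
                                  zip(times_h[1:], values[1:])):
        auc += (t1 - t0) * 60 * (v0 + v1) / 2
    return auc

# hypothetical phosphatemia (mg/dl) sampled every 2 h for 24 h
times = list(range(0, 26, 2))
phos = [4.5, 4.3, 4.1, 4.0, 3.9, 3.8, 3.9, 4.0, 4.2, 4.4, 4.5, 4.6, 4.5]
auc24 = auc_trapezoid(times, phos)
```

with these invented values the result lands in the same order of magnitude as the auc 0-24 figures quoted above, which is a useful sanity check on the units.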
accurate and rapid (< 60 min) gfr measurement has been performed in rats by simultaneous transcutaneous assay of two intravascular fluorescently labeled markers. a recently developed analyzer assays fluorescence via a fiberoptic cable introduced through a peripheral catheter, and thus should also allow rapid gfr determination in larger species. the purpose of this study was to determine correlation and agreement between fluorescent ratiometry (fr) and iohexol plasma clearance (ipc) in dogs over a range of gfrs. acute kidney injury (aki) was induced in 5 female hound-type dogs (10 mg/kg gentamicin iv q8h), and fr and ipc gfr were simultaneously determined on days 0, 3, 6 and 9. a 9-sample, 5-hr protocol was used for ipc; fr was determined following bolus injection of a dextran conjugate mixture (2-sulfohexamine rhodamine-carboxymethyl 150 kd dextran, 5-aminofluorescein-carboxymethyl 5 kd dextran) with fluorescence measured over 60 min. gfr was calculated using 2-compartment model concentration-vs.-time curves for both techniques. correlation was determined via spearman's rho; agreement was analyzed via bland-altman plots. ipc gfr and serum creatinine confirmed progressive aki in all dogs. correlation between fr and ipc was 0.91 (p < 0.001). bland-altman plots confirmed good agreement between techniques, with slight underestimation of gfr by fr across most observed values. these results suggest fr is suitable for gfr determination in dogs with aki. importantly, the portable analyzer allowed point-of-care gfr determination in < 60 min using a peripheral vein. previously presented at the american society of nephrology renal week (related but not identical abstract). dogs with protein-losing nephropathy (pln) are at risk of thromboembolic disease, but the mechanism of hypercoagulability and the population of dogs at risk are unknown. the purpose of this study was to characterize thromboelastography (teg) in dogs with pln.
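the bland-altman analysis used in the gfr study above summarizes agreement between two methods as the mean difference (bias) and 95% limits of agreement; a minimal sketch with hypothetical paired gfr values (fr and ipc are the method names from the abstract, but the numbers below are invented):

```python
import math

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement for paired
    measurements of the same quantity by two methods."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical paired GFR values (ml/min/kg): FR vs. iohexol clearance
fr  = [3.1, 2.4, 1.6, 0.9, 2.8]
ipc = [3.3, 2.5, 1.9, 1.0, 3.0]
bias, lo, hi = bland_altman(fr, ipc)
# a negative bias here mirrors the abstract's observation of slight
# GFR underestimation by FR relative to IPC
```
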
twenty-eight client-owned dogs with pln (urine protein:creatinine ratio (upc) > 2.0) and 8 control dogs were enrolled. teg parameters, antithrombin activity, serum biochemical profiles, and upc were measured. teg analyses were run in duplicate with kaolin activation; reaction time (r), clot formation time (k), maximal amplitude (ma), and g (global clot strength) were analyzed. a wilcoxon rank sum test was used to evaluate differences between groups. twelve pln dogs (42.8%) were azotemic. nineteen pln dogs (67.8%) were hypoalbuminemic [serum albumin (salb) < 3.0 g/dl]; 11 had salb < 2.5 g/dl. dogs with pln had higher k (p < 0.01), ma (p < 0.005) and g (p < 0.005) than controls. r was similar between the two groups. pln dogs with salb < 2.5 g/dl had higher g values (p < 0.05) than dogs with salb > 2.5 g/dl; however, even pln dogs with normal salb (> 3.0 g/dl) had significantly higher g values than controls (p < 0.005). no significant relationship between upc and g, salb and g, antithrombin and g, or salb and antithrombin was noted using linear regression analysis. these results indicate that antithrombin, salb, and upc cannot be used alone to predict hypercoagulability as assessed by teg in dogs with pln. a comprehensive evaluation of the coagulation system in individual patients may be necessary to predict the point at which to initiate antithrombotic therapy. cystinuria is a hereditary renal tubular reabsorption defect of cystine, ornithine, lysine and arginine (collectively, cola). the low solubility of cystine in acidic urine predisposes to the formation of uroliths. type i cystinuria in newfoundland and labrador retriever dogs is an autosomal recessive trait caused by mutations in the slc3a1 gene, whereas in other breeds the cause of cystinuria has not yet been determined. we report here on the clinical, biochemical and molecular features of cystinuria in irish terriers. urine and edta blood were collected from 222 irish terriers from europe and australia.
a nitroprusside screening test was used to identify increased cystine in urine. urinary amino acid concentrations were determined by high-pressure liquid chromatography. cystinuric dogs were defined as having cystine calculi, a positive nitroprusside result, urinary cystine > 179 mmol/g creatinine and/or a cola concentration > 700 mmol/g creatinine. all 83 females tested nitroprusside negative and had normal urinary cystine (< 150 mmol/g creatinine) and cola (< 500 mmol/g creatinine) concentrations. the 10 intact males that formed calculi as adults exhibited cystine concentrations ranging from 323-1580 and cola from 1029-4302 mmol/g creatinine. an additional 41 males had similarly high cola values, with cystine levels from 0-1580 mmol/g creatinine. among the affected dogs tested, 75% were nitroprusside positive. the negative nitroprusside results and/or low urinary cystine levels of affected dogs may be due to precipitation of cystine in acidic urine. sequencing the coding regions of the slc3a1 and slc7a9 genes from edta blood identified no mutations. the mode of inheritance remains undetermined. however, castration appears to lower urinary cystine and cola concentrations and to prevent cystine calculi formation, while diet changes have lesser effects. in conclusion, non-type i cystinuria in irish terriers (and several other breeds such as mastiffs and scottish deerhounds) is a unique form characterized by increased aminoaciduria only in males, with lower cystine and cola excretion and less frequent and later urolith formation compared to type i cystinuria. castrating cystinuric irish terriers lowers their cystine and cola excretion and thus their risk of calculi formation. cats and dogs that are diagnosed with acute kidney injury (aki) and resultant uremia that is not responsive to standard medical therapy are likely to benefit from renal replacement therapies, such as intermittent hemodialysis (ihd).
the purpose of this study was to evaluate the long-term outcome of patients with aki treated with ihd, and to establish whether renal function, as determined by serum or plasma creatinine concentrations, is associated with long-term survival. medical records of 20 cats and 35 dogs that were diagnosed with aki, treated with ihd, and survived longer than 30 days following the last ihd treatment were retrospectively analyzed. standard methods of survival analysis using kaplan-meier product-limit curves and the log-rank test were performed. for all-cause mortality, the median survival time was 1823 days (95% confidence interval: 841, 4667) for cats and 1049 days (95% confidence interval: 893, 1931) for dogs. when only renal-related causes of death were taken into account, the median survival time was not reached for cats or dogs. survival time for all-cause mortality was inversely associated with the lowest creatinine concentration within the 30- to 90-day period following the last ihd treatment (p < 0.0011 for cats, p < 0.0104 for dogs). this study demonstrates that veterinary patients that are diagnosed with aki, treated with ihd, and survive more than 30 days after the last ihd treatment have a good long-term prognosis and frequently die from causes that are unrelated to renal impairment. renal fine-needle aspiration (r-fna) is often attempted during evaluation of dogs and cats with renomegaly, mass lesions, or suspected infiltrative processes. the diagnostic utility of fna is dependent upon the organ being sampled; additionally, in some organs, certain diagnostic imaging findings are associated with improved concordance of fna with the final diagnosis. the objectives of this study were to evaluate the diagnostic utility of r-fna and to determine whether concordance with the final diagnosis is associated with specific clinicopathologic or diagnostic imaging findings.
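the kaplan-meier product-limit estimate used in the hemodialysis study above can be sketched in a few lines. the median survival time is the first time point at which the estimated survival probability drops to 0.5 or below; with heavy censoring it may never be reached, as the abstract reports for renal-related deaths. the survival times below are hypothetical, not the study's data:

```python
def km_median(times, events):
    """Kaplan-Meier product-limit estimate; returns the median survival
    time (first time at which S(t) <= 0.5) or None if never reached.
    events: 1 = death observed, 0 = censored (alive at last follow-up)."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        ties = sum(1 for tt, _ in data[i:] if tt == t)
        deaths = sum(1 for tt, e in data[i:] if tt == t and e == 1)
        if deaths:
            s *= 1 - deaths / at_risk
            if s <= 0.5:
                return t
        at_risk -= ties
        i += ties
    return None  # median survival time not reached

# hypothetical survival times (days after the last IHD treatment)
times  = [100, 400, 800, 1100, 1300, 1900, 2200, 2600]
events = [1,   1,   0,   1,    1,    0,    1,    0]
```
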
we hypothesized that r-fna is most useful in patients with diagnostic imaging results suggestive of renal neoplasia (i.e., masses or suspected infiltrative processes). dogs and cats that had undergone r-fna from jan 1, 1998 to dec 31, 2008 were identified by database search. patient signalment, serum creatinine and blood urea nitrogen concentrations, urine specific gravity, dipstick protein, r-fna result, and final diagnosis were recorded. patients were excluded if abdominal radiographs or sonographic images were not available for review, or if diagnostic test results were insufficient for determination of a final diagnosis. a single coauthor blinded to final diagnoses interpreted all abdominal images using a pre-set list of descriptors and grading criteria. radiographic kidney shape, margin distortion, and ventrodorsal kidney-to-l2 ratio were evaluated. sonographic kidney margin distortion, cortical echogenicity, and corticomedullary junction distinction were described, and the presence of nodules or masses, peri-renal effusion, or a peripheral sonolucent rim was noted. concordance of r-fna and final diagnosis was determined, and chi-squared or fisher's exact tests were used to determine association of concordance with the above variables; p < 0.05 was considered significant. 37 dogs and 41 cats (78 animals) met all inclusion criteria. r-fna results were concordant with the final diagnosis in 43 (55.1%) patients, discordant in 12 (15.4%) patients, and inadequate for cytologic interpretation in 23 (29.5%) patients. neoplasia and fip were the final diagnoses in 19 of 43 (44.2%) and 3 of 43 (7.0%) patients with concordant results, respectively. renal lymphoma (p = 0.196), renal carcinoma (p = 0.364), and renal neoplasia in general (p = 0.451) were not associated with a higher likelihood of concordance between r-fna and final diagnosis.
there was no association between likelihood of r-fna and final diagnosis concordance when patients were stratified by species, serum creatinine or blood urea nitrogen concentration, urine specific gravity, dipstick proteinuria, or any diagnostic imaging variable. this study failed to identify concurrent clinicopathologic or diagnostic imaging findings that enhanced the diagnostic utility of r-fna. future studies should use standardized criteria to prospectively identify patients in which r-fna will be performed, evaluate additional variables that may be associated with increased r-fna diagnostic utility, and directly compare the utility of r-fna with that of other diagnostic techniques. feline lower urinary tract disease (flutd) is a disease of increasing prevalence in private practices and veterinary teaching hospitals. although several underlying causes can produce the obstructive form in male cats, the idiopathic form (feline interstitial cystitis) is often diagnosed as the underlying reason in cats < 10 years. the goal of this retrospective study was to identify possible predisposing factors in order to optimize the therapy of these patients. as a study group, 40 cats hospitalized with obstructive flutd at the veterinary university of vienna were examined during a 2-year period (2008-2010). as a control group, 40 cats presented for other reasons were randomly chosen during the same time period. the data were examined with respect to signalment and history. furthermore, the long-term outcome was evaluated with a questionnaire. depending on distributional assumptions, a student's t-test or a chi-square test was used. there were no significant differences in age and breed. body weight was significantly higher in the flutd group than in the control group (p < 0.01). a weight of > 5 kg was a significant risk factor for the disease (p < 0.01). there were significantly fewer cat toilets in the flutd group compared to the control group (p < 0.05).
furthermore, households of flutd cats had significantly fewer than one toilet per cat (p < 0.01), and more cats affected by flutd lived strictly indoors than outdoors (p = 0.05). there were no significant differences at the time of hospitalization in age, breed, number of cats per household or season of the year between the two groups. in summary, cats over 5 kg body weight kept indoors with fewer than one toilet per cat have a significantly higher likelihood of being affected by obstructive flutd. further studies with an extensive history of animal husbandry are needed to identify risks predisposing cats to this frequent and cost-intensive disease. although purine uroliths (ammonium urate, sodium urate, xanthine, uric acid, etc.) represent the third most common stone type in cats, purine uroliths have the highest rate of recurrence (13% in 22 months). in dogs, mutation of the urate transporter (slc2a9) and portovascular anomalies are common risk factors. however, the underlying cause(s) of purine urolith formation in cats is unknown. the purpose of this study was to test the hypothesis that hyperuricosuria without alterations in liver function is common in cats with urate uroliths. urine concentrations of purine metabolites were measured by high-performance liquid chromatography in 5 cats with ammonium urate uroliths (cases), 5 clinically healthy, breed- and gender-matched cats (negative controls), and 2 cats with naturally occurring xanthine uroliths (positive controls). prior to urine collection, all cats were fed a standard maintenance food (protein = 7 g/100 kcal) for 4 weeks. urinary xanthine, uric acid, and allantoin concentrations and concentration-to-creatinine ratios were calculated and compared between groups. also, serum pre- and post-prandial bile acid concentrations were measured. when compared to control cats, urinary uric acid concentration was significantly higher in case cats (p = 0.002).
Xanthine was not detected in the urine of cases or negative controls. A significant difference in fasted and post-prandial serum bile acid concentrations was not detected in cases or controls (p = 0.197 and 0.212, respectively). Hyperuricosuria without increased concentrations of urinary xanthine or allantoin appears to be a risk factor for ammonium urate urolith formation in cats. An association between portovascular shunts and purine urolithiasis was not observed in this population of cats.

Studies indicate that proteinuria is predictive, on a population basis, of those cats at risk of developing azotemia. SELDI-TOF-MS is a sensitive, high-throughput proteomic technique utilising chromatographic surfaces to facilitate separation and detection of proteins and peptides within biological fluids such as urine. Individual low-molecular-weight (LMW) urinary proteins have been considered as potential biomarkers for renal damage but provide only a limited representation of the urinary proteome; SELDI-TOF-MS may provide a more global assessment. Normotensive, non-azotemic geriatric cats (> 9 years) were recruited prospectively from two first-opinion clinics for routine health screening. At entry, cats received a full physical examination, plasma biochemistry, evaluation of total T4 concentration, and urinalysis including urine protein-to-creatinine ratio. Re-examination was offered at 6 and 12 months. Cats were divided into two groups based on clinical status at the 12-month re-examination (azotemic: creatinine concentration ≥ 2.0 mg/dl; and non-azotemic). Optimisation studies were performed to facilitate the automated preparation (Biomek 3000) of CM10 (weak cation exchange) arrays for SELDI-TOF-MS analysis (Ciphergen Enterprise 4000) of urine samples from cats at entry to the study. Results are reported as median [25th, 75th percentile]. The Mann-Whitney U-test and Wilcoxon signed-rank test were used to compare variables between groups and between timepoints, respectively.
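The two tests named here differ in how they treat the data: the Mann-Whitney U-test compares two independent groups, while the Wilcoxon signed-rank test compares paired measurements from the same subjects. A minimal pure-Python sketch of the two test statistics (p-value lookup omitted; sample values are hypothetical, and ties are given successive ranks rather than averaged):

```python
def mann_whitney_u(x, y):
    # U statistic for two independent samples:
    # count of (xi, yj) pairs with xi > yj, counting ties as 0.5
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

def wilcoxon_signed_rank(before, after):
    # Signed-rank statistic for paired samples:
    # rank the absolute paired differences, then sum the ranks by sign
    diffs = [post - pre for pre, post in zip(before, after) if post != pre]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    w_plus = sum(rank + 1 for rank, i in enumerate(order) if diffs[i] > 0)
    w_minus = sum(rank + 1 for rank, i in enumerate(order) if diffs[i] < 0)
    return min(w_plus, w_minus)

# Hypothetical creatinine values (mg/dl): two independent groups...
u = mann_whitney_u([1.2, 3.4, 2.2], [0.5, 1.0])  # → 6.0
# ...and paired entry/12-month measurements from the same cats
w = wilcoxon_signed_rank([1.5, 1.7, 1.6, 1.8], [1.7, 2.0, 1.5, 2.4])  # → 1
```

In practice these statistics would be referred to their null distributions (or computed via a statistics library) to obtain the p-values reported in the abstract.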
Ciphergen Express (3.06) software was used to analyse spectral data, and a Mann-Whitney U-test was used to identify clusters which differed significantly between groups (p < 0.05) at entry to the study. Twenty non-azotemic cats were recruited, of which 10 developed azotemia by 12 months. No significant differences in age, body weight, biochemical, or urinalysis variables were identified between groups at entry to the study. As might be expected, creatinine increased significantly (1.73 mg/dl [1.58, 1.77] to 2.17 [2.00, 3.23]; p = 0.002) between study entry and 12 months in the cats that developed azotaemia, and there was a commensurate increase in phosphate concentration (3.81 mg/dl [3.60, 4.22] to 4.65 [3.94, 6.48]; p = 0.019). Creatinine and phosphorus did not change significantly over time in the cats that did not develop azotaemia. Seven clusters, with m/z values including 2822, 10 033, 10 151, 10 234, 11 635, and 11 700, were found to differ significantly between groups at entry to the study. The low protein concentration of feline urine makes the use of proteomic techniques challenging. However, this pilot study indicates that SELDI-TOF-MS can be utilised to examine the feline urinary proteome and that differences in low-molecular-weight protein patterns may be useful to differentiate those cats which are at risk of the development of azotemia. Further work is necessary to identify these proteins/peptides.

Fibroblast growth factor 23 (FGF-23) is a phosphatonin with an important physiological role in the regulation of phosphorus and vitamin D metabolism, and may therefore play a part in the development of renal secondary hyperparathyroidism. Previous studies in cats have shown parathyroid hormone (PTH) to be elevated prior to the development of azotemia. The study objective was to explore the hypothesis that FGF-23 is a mediator of the development of renal secondary hyperparathyroidism in the non-azotemic stages of feline CKD.
Healthy, non-azotemic (plasma creatinine concentration (Cr) < 2.0 mg/dl) geriatric cats were recruited into the study prospectively and followed for 12 months. At the study end point, cats were categorised into the following 3 groups: group 1 (n = 15), Cr < 1.58 mg/dl; group 2 (n = 33), Cr ≥ 1.58 mg/dl but not meeting the criteria for group 3; and group 3 (n = 14), Cr > 2.0 mg/dl in association with reduced urine-concentrating ability (USG < 1.035) or demonstration of persistent azotemia (Cr > 2.0 mg/dl). Plasma samples were subjected to routine biochemical analysis and intact PTH, calcitriol, and intact FGF-23 assays. Variables were compared between the 3 groups at the baseline time point. GFR was measured in an additional group of 19 cats (11 non-azotemic, 4 IRIS stage II, 4 IRIS stage III) using a corrected slope-intercept iohexol clearance method. Relationships were explored using linear regression analysis and determination of the coefficient of determination (r²). Results are presented as median [range]. At the baseline time point, FGF-23 concentrations were significantly higher in group 2 (208.1 [51.4-814.6], p = 0.001) and group 3 (237.6 [127.4-908.1], p = 0.001) compared to group 1 (126.2 [69.4-505.2]). Weak positive relationships were identified between FGF-23 and PTH (r² = 0.126, p = 0.005, n = 62) and between FGF-23 and Cr (r² = 0.077, p = 0.029, n = 62). However, the relationships between FGF-23 and phosphate (r² = 0.016, p = 0.323, n = 62) and between FGF-23 and calcitriol (r² = 0.085, p = 0.212, n = 20) were not significant. In the additional group of cats in which GFR measurement was performed, there was an inverse relationship between FGF-23 and GFR (r² = 0.208, p = 0.040). In conclusion, FGF-23 was elevated in cats prior to the development of azotemia. The role of FGF-23 in the development of feline renal secondary hyperparathyroidism remains to be determined and should be explored through interventional studies.
However, considering the relationship between FGF-23 and GFR, it cannot be excluded that the phosphatonin is simply a marker of reduced filtration.

Chronic kidney disease (CKD) is common in geriatric cats, and hypoxia might contribute to the progression of this disease. The aim of this study was to evaluate urinary vascular endothelial growth factor (VEGF) as a marker of renal hypoxia. Cats were recruited through geriatric clinics held at two first-opinion London practices. VEGF was measured in stored samples using a canine ELISA kit validated for use on feline urine and indexed to creatinine concentration to yield a VEGF-to-creatinine ratio (VCR). Two studies were undertaken. The first was a cross-sectional analysis of clinical variables associated with VCR in cats with CKD. Diagnosis of CKD was based on concurrent findings of plasma creatinine ≥ 2 mg/dl and USG < 1.035, with persistence of azotemia for ≥ 2 weeks. Only patients receiving no medical therapies were included. Normotensive and (pre-treatment) hypertensive cats were included, but borderline cases (mean systolic blood pressure 160-170 mmHg on the date of sampling) were not. Hyperthyroid cats were also excluded from this cross-sectional study. Associations between VCR and clinical data were initially assessed using Spearman's coefficient and the Mann-Whitney test; linear regression was then used for multivariable analysis. The second study used samples from a trial in which hypertensive cats that had been treated with amlodipine for at least 3 months were entered into a randomised cross-over study in which they received placebo or benazepril (0.5 to 1 mg/kg daily) for 12 weeks in turn. VCR on placebo was compared with that on benazepril using the Wilcoxon signed-rank test. Cats with well-controlled hyperthyroidism were included in this intervention study. Results are reported as median [25th, 75th percentile]. VCR was higher (49.5 [33.3, 74.1] vs.
36.1 [25.6, 42.1] fg/g; p = 0.010) in untreated hypertensive cats (n = 30) than in normotensive cats (n = 63). VCR was correlated with PCV (r = −0.236, p = 0.024, n = 92), UPC (r = 0.444, p < 0.001, n = 93), plasma phosphate (r = 0.286, p = 0.005, n = 93), and USG (r = −0.284, p = 0.006, n = 93), but not with plasma creatinine concentration. In the best multivariable model, PCV was associated with VCR independently of UPC (r² = 0.435, n = 92). VCR was significantly reduced by benazepril therapy (65.3 [43.1, 92.7] fg/g) compared with placebo (76.0 [48.0, 116.7] fg/g; p = 0.031, n = 17), with a reduction seen in 76% of cases. These results suggest that urinary VEGF excretion is associated with proteinuria in cats with CKD and might be a marker of renal hypoxia induced by low PCV. ACE-inhibitor therapy might reduce urinary VEGF excretion because angiotensin II causes constriction of efferent arterioles, resulting in tubular hypoxia.

FGF-23 is a phosphaturic hormone, and FGF-23 concentrations increase with declining renal function in humans. The objectives of this study were to validate a method for FGF-23 quantification in feline plasma and to assess the association between FGF-23 concentration and plasma creatinine or phosphate concentration in cats with chronic kidney disease (CKD). Non-azotemic and azotemic (plasma creatinine concentration (Cr) > 2.0 mg/dl) geriatric (> 9 years) cats were recruited into the cross-sectional study from two London first-opinion practices. Cats were excluded from the study if they were fed a phosphate-restricted diet or had evidence of concurrent disease. The cats were categorized, using a modified IRIS staging system, into the following four groups: group 1 (Cr < 1.6 mg/dl), group 2 (Cr 2.0-2.8 mg/dl), group 3 (Cr 2.9-5.0 mg/dl), group 4 (Cr > 5.0 mg/dl).
Groups 2 and 3 were further subdivided based on the IRIS targets for plasma phosphate concentration (PO4): group 2a (PO4 ≤ 4.5 mg/dl), group 2b (PO4 > 4.5 mg/dl), group 3a (PO4 ≤ 5 mg/dl), group 3b (PO4 > 5 mg/dl). FGF-23 concentrations were measured in feline EDTA plasma using a human intact FGF-23 ELISA, validated by intra- and inter-assay variability and assessment of dilutional parallelism. Comparisons between groups were made using the Kruskal-Wallis test and Mann-Whitney U test, with statistical significance defined as p < 0.05; Bonferroni correction was applied where appropriate (statistical significance then defined as p < 0.008). Results are reported as median [25th, 75th percentiles]. FGF-23 concentrations ≥ 800 pg/ml (the upper limit of quantification) were assigned the value of 800 pg/ml. Intra- and inter-assay variability of FGF-23 measurements was < 10.0%, and dilutional parallelism between feline samples and the calibration curve was demonstrated. Plasma FGF-23 concentrations increased with increasing creatinine concentrations (group 1: 158 [115, 274], n = 20; group 2: 354 [239, 473], n = 20; group 3: 800 [425, 800], n = 23; group 4: 800 [800, 800], n = 14). FGF-23 measurements were significantly different between all groups (p = 0.005 to < 0.001) except between groups 2 and 3 (p = 0.01). FGF-23 concentrations were significantly higher in cats with higher plasma phosphate concentrations (group 2a: 329 [237, 423], n = 16 vs. group 2b: 576 [374, 793], n = 4; p = 0.047; and group 3a: 432 [167, 800], n = 10 vs. group 3b: 800 [625, 800], n = 13; p = 0.028). In conclusion, FGF-23 concentrations were higher in cats with more severe CKD or higher plasma phosphate concentrations, as would be predicted from its known biological actions. Further work is warranted to explore the role of FGF-23 in the development of renal secondary hyperparathyroidism by measuring parathyroid hormone (PTH) and calcitriol in cats at different stages of CKD.
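The modified IRIS grouping used in this abstract reduces to a simple classification rule. A sketch using the thresholds as given (function names are illustrative; note that cats with creatinine between 1.6 and 2.0 mg/dl fall outside all four groups as defined):

```python
def creatinine_group(cr_mg_dl):
    # Modified IRIS groups per the abstract (feline plasma creatinine, mg/dl)
    if cr_mg_dl < 1.6:
        return "1"
    if 2.0 <= cr_mg_dl <= 2.8:
        return "2"
    if 2.9 <= cr_mg_dl <= 5.0:
        return "3"
    if cr_mg_dl > 5.0:
        return "4"
    return None  # 1.6-2.0 mg/dl: not assignable under this scheme

def phosphate_subgroup(group, po4_mg_dl):
    # IRIS phosphate targets per the abstract: 4.5 mg/dl (group 2), 5 mg/dl (group 3)
    cutoff = {"2": 4.5, "3": 5.0}.get(group)
    if cutoff is None:
        return group  # groups 1 and 4 are not subdivided
    return group + ("a" if po4_mg_dl <= cutoff else "b")

# e.g. a cat with Cr 2.5 mg/dl and PO4 4.8 mg/dl → group "2b"
print(phosphate_subgroup(creatinine_group(2.5), 4.8))
```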
Progressive non-cardiogenic edema and lung dysfunction are common complications of acute kidney injury (AKI) in people. Pulmonary abnormalities have not been systematically reviewed in dogs with renal azotemia, but anecdotal reports of dogs with AKI and concurrent non-cardiogenic pulmonary edema are suggestive of uremic pneumonopathy (UP), a centrally distributed pulmonary edema syndrome associated with kidney disease in people. We therefore hypothesized that pulmonary-associated clinical signs or thoracic radiographic abnormalities are more common in dogs with renal azotemia than in non-azotemic dogs, and that this association is more likely in dogs with AKI than in dogs with chronic renal failure (CRF). Our study objectives were (1) to describe thoracic radiographic and lung histopathologic abnormalities in dogs with renal azotemia, (2) to compare the occurrence of these findings in dogs with AKI, CRF, or non-systemic illness, and (3) to determine whether these abnormalities are associated with shorter survival times. Records of dogs with renal azotemia evaluated from 1/1/2000 to 8/20/2010 were reviewed; dogs which could be classified as having AKI or CRF and which had complete thoracic radiographic studies available for review were included. Dogs with primary intracranial disease, normal serum creatinine, and a complete thoracic radiographic study were selected as controls. Signalment, weight, presence of pulmonary-related clinical signs, azotemia duration and severity at time of radiography, and leptospirosis antibody titer were noted. Alveolar, bronchial, interstitial, or nodular lesions were described using a 4-point scale, and lung tissue collected at time of necropsy was reviewed; both the radiologist and pathologist were blinded to final diagnoses. Significance was set at p < 0.05 for all analyses. The final study population included 54 AKI, 50 CRF, and 63 control dogs. CRF dogs were older (p < 0.001) than AKI and control dogs.
Pulmonary-related clinical signs were more commonly diagnosed at first evaluation in AKI dogs (29/53 dogs, 54.7%) than in CRF (13/50, 26.0%; p = 0.003) or control dogs (9/63, 14.3%; p < 0.001). Presence of an alveolar pattern was the only radiographic finding which differed amongst groups, being more common in AKI (n = 8, 14.8%, p = 0.047) and CRF (n = 8, 16%, p = 0.028) dogs than in control dogs (n = 2, 3.2%). There was no association between presence of an alveolar pattern and any other variable. Alveolar mineralization was the most common lesion in AKI dogs (5/8 dogs; 62.5%), with concurrent alveolar space concretions or mineralization of vessels or bronchioles noted in 1 dog each. Necropsies had not been performed in any of the CRF dogs, but mineralization was not seen in lung tissues from any control dogs (n = 9). Neither pulmonary-associated clinical signs nor alveolar pattern was associated with median number of days from discharge until death in dogs with AKI (p = 0.220 and 0.468, respectively) or CRF (p = 0.280 and 0.253, respectively). In this group of dogs, presence and type of radiographic pulmonary abnormalities were associated with renal azotemia but not with median time until death. The association between, and clinical relevance of, alveolar mineralization in AKI dogs was not determined, but both the radiographic and histopathologic abnormalities reported here differ from UP in people.

Chronic kidney disease (CKD) is a common cause of morbidity and mortality in cats. The purpose of this study was to investigate the effects of Chinese rhubarb (Rheum officinale) supplementation on the progression of feline CKD. Cats with stable IRIS stage II or III CKD and without comorbidity were included in the study. Cats were divided into 3 treatment groups and administered rhubarb extract (group 1, Rubenal®, Vetoquinol, 75 mg tablet PO q12h), benazepril as a positive control (group 2, 0.5 mg/kg PO q24h), or both (group 3).
Cats were fed a commercial renal-specific diet and an enteric phosphate binder as appropriate. Body weight, laboratory data, and blood pressure were recorded every 3 months for up to 34 months. Variables between groups at enrollment and within groups over visits were compared with ANOVA and repeated-measures ANOVA, respectively. A treatment-by-visit interaction term was included in all repeated-measures models. Significance was set at p ≤ 0.05. Except for body weight, there were no significant differences between treatment groups at enrollment. There was no significant change in body weight, hematocrit (HCT), UPC, or creatinine over time as compared to baseline within any group, and no significant difference between groups over time in change in weight, HCT, UPC, or creatinine. The treatment-by-time interaction was non-significant in all models. Although there was no benefit associated with combination treatment, the results for rhubarb treatment alone were not different from benazepril treatment.

Azodyl, an encapsulated, enteric-coated probiotic/prebiotic nutraceutical, is marketed for reduction of azotemia (BUN and creatinine) in dogs and cats. Cat owners often sprinkle the contents onto cat food to facilitate administration. However, exposure to air and stomach acid is thought to inactivate the lyophilized bacteria within the product. We therefore examined the ability of food-sprinkled Azodyl to reduce azotemia in cats with CKD. Ten cats with CKD were enrolled in the study and randomized to receive Azodyl or placebo. Owners were provided with 3-4 capsules of Azodyl prior to enrollment to ensure compliance with administration. Two baseline blood samples were obtained 1 month apart, and samples were obtained again 1 and 2 months after beginning therapy. Clinicians and owners were masked as to medication assignment. We hypothesized that a 30% decrease in BUN and/or creatinine in the Azodyl group would be significant, and set α = 0.2.
To maximize the probability of detecting a difference, we determined the % change as the difference between the maximal baseline analyte concentration and the minimal on-treatment concentration, and compared the % change between groups by Mann-Whitney U test. BUN and creatinine did not differ between groups. Based on these results, Azodyl applied by sprinkling onto food failed to reduce azotemia in cats with CKD. Whether intact-capsule administration reduces azotemia in cats with CKD remains unknown.

Lower urinary tract disease (LUTD) occurs commonly in cats, and idiopathic cystitis (FIC) and urolithiasis account for over 80% of cases in cats less than 10 years of age. Although several strategies have been recommended, a common recommendation is to induce dilute urine, resulting in more frequent urination and dilution of calculogenic constituents. In addition to conventional therapy using modified diets, traditional Chinese and Western herbs have been recommended, although only one, Chorieto, has published data. We evaluated 3 commonly used herbal treatments recommended for use in cats with LUTD: (1) San Ren Tang, (2) Wei Ling Tang, and (3) Alisma. We hypothesized that these 3 Chinese herbal preparations would induce increased urine volume and decreased urine saturation for calcium oxalate and struvite. Six healthy, spayed female, adult cats were evaluated in a placebo-controlled, randomized, cross-over design study. Cats were randomized to 1 of 4 treatments: placebo (P), San Ren Tang (SRT), Wei Ling Tang (WLT), or Alisma (A). Treatment was for 2 weeks each, with a 1-week washout period between treatments. At the end of each treatment period, a 24-hour urine sample was collected using modified litter boxes. Urine volume and biochemistries were measured, and urine saturation for struvite and calcium oxalate was estimated using EQUIL 1.5b.
Analysis of variance (ANOVA) was used for statistical analysis when data were normally distributed, and the Kruskal-Wallis test when they were not; p < 0.05 was considered significant. Body weights were not different between treatments. No differences were found in 24-hour urinary analyte excretions, 24-hour urine volume, urine pH, or 24-hour urinary saturation for calcium oxalate or struvite between treatments (table).

Urolithiasis is a multifactorial disease, frequent and recurrent in dogs worldwide, in which breed, sex, age, diet, some anatomical abnormalities, urinary tract infection, urine pH, and some geographical and hereditary features of the populations studied have been implicated as risk factors. Effective long-term management of urolithiasis depends on identification and control of the pathophysiological mechanisms involved, which, in turn, depend on accurate knowledge of the mineral composition of the uroliths. The aim of this study was to determine for the first time the main epidemiological features of canine urolithiasis in Mexico. This study included 491 dogs with urolithiasis from 25 of the 33 states of the country. Chemical composition of the uroliths was determined by stereoscopic microscopy, infrared spectroscopy, scanning electron microscopy, and X-ray microanalysis. Urolithiasis affected nearly the same number of males and females, with ages ranging from two months to 15 years and a median age of 5 years; adult animals were the most affected. The breeds most affected were Miniature Schnauzer, Poodle, Dalmatian, Yorkshire Terrier, Scottish Terrier, Chihuahua, and Bichon Frisé. Uroliths were found in the lower urinary tract in 97.74% of cases. Mineral composition of the uroliths was: struvite 49.69%, followed by calcium oxalate 25.46%, purines 7.13%, silicate 6.72%, others 0.20%, mixed 8.15%, and compound uroliths 2.44%.
Struvite uroliths affected females in most cases, whereas calcium oxalate, purine, and silicate uroliths were mainly observed in males. Our results are similar to studies from other countries and continents, though we found a higher frequency of uroliths containing silicate, whether pure, mixed, or compound (10.79%); in Mexico City the frequency reached 15%. This high frequency may be due to high consumption of silicate in home-made food or in groundwater derived from aquifers. Acknowledgments: this work was partially supported by a project of the Waltham Foundation in Mexico and the Consejo Nacional de Ciencia y Tecnología (CONACYT) of Mexico.

Voiding urohydropropulsion is a non-invasive method for removing small urocystoliths from the dog, most commonly used in females due to the relatively wider and shorter urethra. This procedure is typically performed under general anesthesia to allow complete relaxation of the urethra; however, anesthesia results in longer procedure times and difficult endotracheal tube stabilization due to the vertical positioning of animals, especially in larger dogs. The aim of this study was to devise a novel injectable sedation protocol for urohydropropulsion when cystoscopy was not concurrently required. An intravenous catheter was placed, and a combination of medetomidine (10 to 15 µg/kg IV) and hydromorphone (0.025 to 0.05 mg/kg IV) was administered, with the addition of ketamine (2 mg/kg IV) in fractious animals; atipamezole (double the volume of medetomidine, administered IM) was used as a reversal agent upon procedure completion. This protocol was considered in cardiovascularly healthy, non-diabetic dogs without evidence of urinary obstruction. Monitoring equipment included electrocardiography, blood pressure measurement, and pulse oximetry, and supplemental flow-by oxygen was provided. Two dogs received the proposed sedation protocol in order to perform urohydropropulsion.
Dog 1 was a 3-year-old spayed female Shih Tzu cross, and dog 2 was a 2-year-old spayed female Standard Poodle. Ultrasonography revealed a moderate number of urocystoliths in both dogs, measuring up to 1 mm in dog 1 and 2.3 mm in dog 2. Urohydropropulsion was performed and resulted in retrieval of 15 urocystoliths in dog 1 and approximately 20 urocystoliths in dog 2. Repeat ultrasonography revealed no uroliths present after urohydropropulsion in either dog. The time from administration of sedation to administration of the reversal agent was 6 minutes for dog 1 and 8.5 minutes for dog 2. Records were obtained from 3 dogs that had undergone traditional general anesthetic protocols for urohydropropulsion with cystoscopy for confirmation of urocystolith removal, performed within the last 2 years; the average anesthetic time was 64 minutes. Subsequent to the use of medetomidine-based sedation protocols in the above dogs, cystoscopy was performed in a 9-year-old neutered male Golden Retriever with prostatomegaly. Medetomidine (15 µg/kg IV) and butorphanol (0.2 mg/kg IV) were administered; atipamezole (double the volume of medetomidine, administered IM) was used as a reversal agent upon procedure completion. This sedation allowed adequate immobilization for cystoscopy of the urethra and urinary bladder and for endoscopic biopsy of the prostatic urethra and urinary bladder. The time from administration of sedation to administration of the reversal agent was 15 minutes for this dog. In conclusion, a novel sedative protocol for urohydropropulsion is proposed which allows an appropriate level of sedation along with a short procedure time and rapid recovery. This sedation protocol may also be useful for certain cystoscopic procedures.

Analysis may be delayed for a variety of reasons, including the need for sample batching within the laboratory or shipping to an outsourced location. Therefore, it is important to know how storage of the sample may affect enzyme activity.
We hypothesized that urinary NAG and GGT activity would be affected differently in samples stored by refrigeration vs. freezing. Thirty-four canine urine samples submitted to the clinical pathology laboratory at Kansas State University were included. Samples were collected from clinical patients with a variety of medical/surgical disorders and were selected based on the day of the week and a minimum volume of 10 ml. A complete urinalysis was performed on each sample; however, there were no exclusion criteria based on urinalysis results. NAG and GGT activity in the urine supernatant was assessed by colorimetric assay. Aliquots of each supernatant were refrigerated for 5 days or frozen at −20°C for 5 and 30 days, at which time enzyme activity was re-assessed. Compared to baseline values, enzyme activity for both NAG and GGT was stable after 5 days of refrigeration; however, there were significant (p < 0.01) declines in GGT and NAG activity when urine supernatants were frozen for 5 and 30 days.

Treatment of canine urinary tract infections (UTI) typically consists of 7-14 days of antimicrobial drugs in primary care veterinary practice. Compliance with this drug regimen can be difficult for some clients. Enrofloxacin is a veterinary-approved fluoroquinolone antimicrobial and is useful for treatment of canine UTI. Fluoroquinolones are often used in human medicine to treat uncomplicated UTIs in women and can be prescribed for as little as 3 days. The primary objective of this study was to determine whether dogs with naturally occurring uncomplicated UTI have equivalent microbiologic cure with a high-dose, short-duration protocol of enrofloxacin compared to a standard antimicrobial protocol. Client-owned adult dogs with naturally occurring, uncomplicated UTI were prospectively enrolled in a multi-center clinical trial and assigned to 1 of 2 groups in a randomized, blinded manner. Group 1 received treatment with 18-20 mg/kg oral enrofloxacin once daily for 3 consecutive days.
Group 2 dogs were treated with 13.75-25 mg/kg oral amoxicillin-clavulanate twice daily for 14 days. Both groups had urinalyses and urine cultures submitted on days 0, 10, and 21. At the time of this interim analysis, thirty-six dogs had completed the trial. Bacteriological cure was achieved in 15 dogs (83%) treated with enrofloxacin and 14 dogs (78%) treated with amoxicillin-clavulanate, respectively. These data suggest that the high-dose, short-duration enrofloxacin protocol was as effective as the standard protocol in treating uncomplicated canine UTI in the sample patient population and may represent a viable alternative therapeutic regimen for similar patients.

Azotemia is frequent in dogs with DMVD (Nicolle et al., JVIM 2007;21:943-949) and could result from renal hemodynamic alterations. The renal resistive index (RI) allows assessment of renal vascular resistance. The aim of this prospective study was to assess RI in dogs with different DMVD stages. Fifty-five dogs with DMVD were used (ISACHC class 1 [n = 28], 2 [n = 19], and 3 [n = 8]). Physical examination, renal ultrasonography, and echo-Doppler examinations were performed in awake dogs by trained observers. Plasma creatinine, urea, and NT-proBNP were measured. Statistical analyses were performed using a general linear model. Whereas the RI of the renal and arcuate arteries was unaffected by ISACHC class, left interlobar RI increased (p < 0.001) from 0.62 ± 0.05 (mean ± SD) in class 1 to 0.76 ± 0.08 in class 3. Left interlobar RI was also higher (p < 0.001) in azotemic (0.74 ± 0.008) than in non-azotemic (0.62 ± 0.005) dogs. Similar findings were observed for right interlobar RI. A positive effect on RI of NT-proBNP (p = 0.002), urea (p < 0.001), creatinine (p = 0.002), urea-to-creatinine ratio (p < 0.001), left atrium-to-aorta ratio (p < 0.001), regurgitation fraction (p = 0.011), systolic pulmonary arterial pressure (p < 0.001), and shortening fraction (p = 0.028) was also observed.
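The abstract does not restate the formula, but the Doppler resistive index is conventionally computed from peak systolic and end-diastolic velocities as RI = (PSV − EDV) / PSV. A minimal sketch (the velocity values are hypothetical):

```python
def resistive_index(psv, edv):
    # Doppler resistive index from peak systolic velocity (PSV) and
    # end-diastolic velocity (EDV), in any consistent units (e.g. cm/s)
    if psv <= 0 or not (0 <= edv <= psv):
        raise ValueError("expected 0 <= EDV <= PSV and PSV > 0")
    return (psv - edv) / psv

# e.g. PSV 40 cm/s, EDV 15 cm/s → RI = 0.625
ri = resistive_index(40.0, 15.0)
```

An RI rising from roughly 0.62 toward 0.76, as reported across ISACHC classes above, corresponds to progressively lower diastolic flow relative to systolic flow.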
In conclusion, interlobar RI increases with the severity of DMVD and azotemia; however, a cause-effect relationship remains to be established.

Antibodies against alpha-enolase are associated with immune-mediated nephritis in people. It was previously shown that vaccinated cats commonly develop antibodies against alpha-enolase. The purpose of this study was to assess associations between alpha-enolase antibodies and azotemia in privately owned cats. Clinically stable, privately owned cats ≥ 10 years of age, with and without azotemia (creatinine > 2 mg/dl), and with an available vaccine history for ≥ 5 years were recruited for the study. Sera were assayed for creatinine concentrations and alpha-enolase antibodies by use of previously validated techniques. Results from cats with and without azotemia were compared by Student's 2-tailed t-test or Fisher's exact test, with significance defined as p < 0.05. Median ages were 15 years (range: 10-18) and 12 years (range: 10-15) for cats with (n = 35) and without azotemia (n = 27), respectively. There was no significant difference in vaccine events (number, type, or route of administration) between groups. Azotemic cats (34.3%) were more likely than normal cats (12.5%) to be positive for antibodies against alpha-enolase (p = 0.016). In addition, alpha-enolase antibody concentrations were greater (p = 0.041) in azotemic cats (mean %ELISA = 62.5%) than in cats with normal creatinine concentrations (mean %ELISA = 47.2%). Results of this study suggest that antibodies against alpha-enolase in cats may be associated with renal disease. Additional prospective evaluation in a larger number of cats is indicated.

AKI is used in human medicine as a predictor of mortality based on the AKIN (Acute Kidney Injury Network) scoring system, which utilizes relative increases in creatinine to determine stage. With this scheme, mortality has been shown to increase as the stage of kidney injury (indicated by AKIN score) increases.
Accordingly, we hypothesized that this system would improve prediction of prognosis in dogs and cats. We retrospectively evaluated 1088 dogs and 856 cats (2008-2009) that had ≥ 2 creatinine measurements within 7 days and whose first creatinine was < 1.6 mg/dl. Patients were categorized as: level 0 (no AKI); level 1 (second creatinine value < 1.6 mg/dl, but creatinine increased ≥ 0.3 mg/dl); or level 2 (second creatinine > 1.6 mg/dl with a creatinine increase ≥ 0.3 mg/dl). Thirty- and 90-day survival for each level was compared to level 0. The adjusted odds ratio (OR) in dogs for 30-day survival was 1.3 for level 1 (95% CI, 0.8-2.2) and 3.2 for level 2 (95% CI, 1.8-5.5); the OR for 90-day survival was 1.3 for level 1 (95% CI, 0.8-2.2) and 3.7 for level 2 (95% CI, 2.1-6.5). For cats, the OR at 30 days was 1.5 for level 1 (95% CI, 0.5-4.6) and 3.1 for level 2 (95% CI, 1.5-6.7); the OR for 90-day survival was 0.9 for level 1 (95% CI, 0.3-2.8) and 4.1 for level 2 (95% CI, 1.8-9.3). Thus, detecting increasing stage of AKI helps predict mortality in dogs and cats.

Abstract N/U-27. Feline urate urolithiasis: 143 cases (2000-2008). J Dear¹, R Shiraki², A Ruby², J Westropp³. ¹William R Pritchard Veterinary Medical Teaching Hospital, University of California, Davis, CA; ²Gerald V. Ling Urinary Stone Analysis Laboratory, University of California, Davis, CA; ³Department of Veterinary Medicine and Epidemiology, University of California, Davis, CA. Feline urate urolithiasis accounts for 10% of the feline stones our laboratory analyzes each year; little is known about this disease, particularly the incidence of hepatopathies in affected cats. The objective of the study was to characterize the signalment, clinicopathologic data, and diagnostic imaging of cats with this disease, as well as the salts of uric acid present. A retrospective analysis of feline urate uroliths submitted to the stone laboratory between January 2000 and December 2008 was performed.
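The creatinine-based staging rule from the AKI abstract above can be expressed as a short classifier. A sketch using the thresholds as stated there (the abstract does not specify how a second creatinine of exactly 1.6 mg/dl is handled, so that edge case is a labeled assumption):

```python
def aki_level(first_cr, second_cr):
    # Staging per the AKI abstract; assumes first_cr < 1.6 mg/dl
    # (the study's inclusion criterion)
    if second_cr - first_cr < 0.3:
        return 0   # level 0: no AKI (rise < 0.3 mg/dl)
    if second_cr < 1.6:
        return 1   # level 1: rise >= 0.3 mg/dl, second value still < 1.6 mg/dl
    return 2       # level 2: rise >= 0.3 mg/dl, second value > 1.6 mg/dl
                   # (exactly 1.6 mg/dl treated as level 2 here: an assumption)

# e.g. 1.0 → 1.2 mg/dl: no AKI; 1.0 → 1.4: level 1; 1.2 → 2.0: level 2
```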
From these data, primary veterinarians were solicited to submit records. Furthermore, all records from cats with urate uroliths from the VMTH were analyzed separately. 143 records were received from the primary care veterinarians, and sixteen cases were identified from the VMTH. Median values for the available CBC and chemistry panels were within the reference ranges provided, with only a few outliers present. Of the 78 cats with radiographic reports, 70 (90%) had visible evidence of uroliths. Two external cases had a confirmed PSS; five cases from the VMTH had a PSS. Cats with urate uroliths and PSS were younger than cats without a documented hepatopathy (2 years vs. 7 years). The Siamese breed was overrepresented. All stones were ammonium hydrogen urate. The pathogenesis of urate uroliths in cats is poorly understood. Most cats were not completely evaluated for PSS; however, there were few clinicopathologic parameters indicating that hepatopathies were present. Further studies are warranted to evaluate genetics and purine metabolism in cats with urate uroliths to help tailor proper management and breeding strategies.

3-Indoxyl sulfate and p-cresyl sulfate (IS and CS, respectively), small protein-bound molecules derived from gastrointestinal protein metabolism, are among the most important uremic solutes affecting morbidity and mortality in human chronic kidney disease (CKD). In the bloodstream, these compounds are predominantly bound to protein, but their debilitating effects on prognosis and quality of life in CKD appear to be driven by the free fraction. The objectives of the present study were to assess the normal, physiological levels of IS and CS in healthy cats and to evaluate the correlation of the respective free and protein-bound levels. Blood samples were taken from 105 clinically healthy adult cats enrolled at five participating veterinary practices in Germany. After centrifugation, the serum was deep-frozen until transport on dry ice to the analytical laboratory.
Serum creatinine and urea levels were quantified by VetTest (IDEXX Laboratories, Inc.). Total and free IS and CS, respectively, were quantified by turbulent-flow chromatography coupled with a tandem mass spectrometry detector. Statistical analysis of the results comprised (i) a descriptive report of the median with upper and lower bounds of the 95% confidence interval for reference values of IS and CS, (ii) calculation of various Pearson correlation coefficients r, also tested against the null hypothesis of no relationship, and (iii) the Wilcoxon-Mann-Whitney U-test for an estimation of the effect of hemolysis on serum IS or CS levels. Six animals with serum creatinine or urea levels outside the reference range were excluded from the calculation of reference values. Median levels of IS in cat serum were 1.19 mg/L, with upper and lower bound 95% confidence intervals at 1.46 and 0.99 mg/L, respectively. The corresponding median level of CS was 2.11 mg/L (upper vs. lower bound: 2.46 vs. 1.57 mg/L). These values showed a low, non-significant correlation with serum creatinine or urea levels. However, IS and CS serum levels were moderately correlated (total levels r = 0.4808, p < 0.0001). Their respective free levels constituted about 5% of the total serum levels (r ≥ 0.8977, p < 0.001). Non-hemolytic samples tended to yield lower values than hemolytic samples; due to the low number of hemolytic samples (n = 14), however, the group difference could not be statistically confirmed. The results indicate that it is sufficient to determine total levels of either IS or CS in serum while studying the effects of therapeutic or dietetic interventions on the evolution of these parameters in feline CKD. Reference values are provided for orientation towards clinically relevant changes.

Disrupted urothelial differentiation has been implicated in the pathogenesis of feline idiopathic cystitis (FIC).
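The correlation step in the analysis above is a plain Pearson coefficient; a minimal, dependency-free sketch is shown below. This is illustrative only (not the authors' statistical software), and the sample data are hypothetical.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# perfectly linear hypothetical data gives r = 1
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```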
Studies of cultured human urothelium have shown that abnormalities in urothelial differentiation and repair may be mediated by persistent 15-hydroxyprostaglandin dehydrogenase (PGDH) activity and subsequent metabolism of cytoprotective prostaglandins. The goal of this study was to confirm persistent PGDH expression in FIC bladders compared to desmoplakin I/II expression, a marker of urothelial differentiation. Urinary bladder biopsy specimens were obtained by cystotomy from 9 symptomatic cats with chronic FIC. Cats with a history of another major disease, previous cystotomy, or recent treatment with corticosteroids, NSAIDs, antihistamines, antidepressants, or glycosaminoglycans were excluded. Urinary bladder tissue specimens were also obtained from 10 untreated, clinically normal, specific-pathogen-free cats. Tissue specimens were fixed in buffered 10% formalin and embedded in paraffin. Tissue sections were deparaffinized and subjected to citrate-buffer microwave antigen retrieval. Tissues were stained for PGDH using a rabbit anti-PGDH antibody, an isotype negative control, or goat anti-desmoplakin I/II, and developed using the avidin-biotin peroxidase complex method. All FIC (9/9) and normal (10/10) cat bladder samples showed similar staining of urothelial cytoplasm for PGDH. However, desmoplakin I/II staining, found on the luminal cell surface in 4/4 normal tissues, was disrupted in 6/6 FIC bladder samples. Desmoplakin I/II staining confirmed altered urothelial differentiation in FIC cats; PGDH expression, however, remained intact in FIC samples. We hypothesize that PGDH expression in FIC may contribute to its pathophysiology through breakdown of prostaglandins essential for urothelial healing. Additional studies will explore this hypothesis.
The University of Tennessee College of Veterinary Medicine's picture archiving and communication system was searched over a 9-month period for cats that had undergone both abdominal radiographs and ultrasound during the same visit. One hundred and three cats were identified (age range < 1 to 18 years; median 11 years). Kidney size was determined based on radiographic and ultrasound findings. Of the included cats, 41.8% had two normal-sized kidneys, 18.4% had one small and one normal, 15.5% had one large and one normal, 11.7% had two small, 8.7% had two large, and 3.9% had one small and one large kidney. The presence of mineralization, uroliths, and hydronephrosis was also noted. Medical records were reviewed for clinical chemistry data and historical information concerning previous urinary disease. No significant differences were found between kidney size and renal function, kidney size and the presence of uroliths, renal mineralization and function, or the presence of uroliths and function. The presence of uroliths was significantly associated with hydronephrosis. Of the 30 cats with at least one large kidney, 9 (30%) had hydronephrosis. Of the 23 cats with current or previously diagnosed uroliths, urinary tract infections, or other uropathies, 10 (43.5%) had at least one small kidney. Small kidneys were commonly found in older cats; however, this correlation was not statistically significant. Based on these findings, small kidneys are more likely to be the result of urinary disease than to be either congenital or due to aging.

This study aimed to evaluate intravenous fat emulsion (IFE), which has been advocated for treatment of lipid-soluble drug intoxication, in the treatment of clinically occurring canine ivermectin toxicosis. One Australian Shepherd and two Miniature Australian Shepherds were included. All three dogs were homozygous for the MDR1 gene mutation. Two dogs roamed on horse ranches where ivermectin-based deworming products had recently been used.
Ivermectin was administered to the third dog (165 µg/kg PO). All three dogs exhibited tremors, ptyalism, and CNS depression, which progressed over several hours to stupor in two dogs, and to a comatose state requiring mechanical ventilation in the remaining dog. A 20% formulation of IFE (Liposyn II, Hospira) was administered as a bolus (1.5 mL/kg) followed by a slow IV infusion (7.5-15 mL/kg over 30 minutes). No change was observed in the neurologic status of any patient. Lipemia visible upon blood sampling persisted for 36 hours in one dog; no other adverse effects were noted. Serum ivermectin levels confirmed ivermectin exposure in each case. In this study, IFE administration did not result in clinical benefit in cases of ivermectin toxicosis. Brain ivermectin concentrations in MDR1 mutant/mutant genotype dogs may be too high to be overcome by IFE. Additionally, these dogs may lack the P-glycoprotein-mediated biliary clearance mechanisms needed for optimal IFE function. Further investigation is needed to determine the utility and optimal dosing of IFE in canine toxicoses, to characterize its safety, and to determine how MDR1 status may alter the efficacy of IFE in the treatment of canine ivermectin intoxication.

Rufinamide is a recently approved antiepileptic drug used for the treatment of seizure disorders in human patients. Rufinamide is administered at a dose of 45 mg/kg divided twice daily to achieve therapeutic concentrations of 15 µg/mL. The objective of this study was to determine the pharmacokinetic properties and short-term adverse effects of single-dose oral rufinamide in healthy dogs, in preparation for a possible clinical trial evaluating the efficacy of rufinamide in the treatment of canine epilepsy. Six healthy adult dogs were included. The pharmacokinetics of rufinamide were calculated following administration of a single mean oral dose of 20.0 mg/kg (range 18.6-20.8 mg/kg), extrapolated from the dose used in human patients.
Dogs were monitored by repeat physical examinations, electrocardiograms, and blood pressure assessments during the course of the study. Plasma rufinamide concentrations were determined using high-performance liquid chromatography, and pharmacokinetic data were analyzed using WinNonlin version 1.0. No adverse effects were observed. The mean terminal half-life was 9.86 ± 4.77 hours. The mean maximum plasma concentration was 19.56 ± 5.82 µg/mL and the mean time to maximum plasma concentration was 9.33 ± 4.68 hours. Mean clearance was 1.448 ± 0.703 L/hr, and AUC(0-∞) was 410.72 ± 175.88 µg·h/mL. Results of this study suggest that rufinamide given orally at 20 mg/kg twice daily in healthy dogs should result in a plasma concentration and half-life sufficient to achieve the therapeutic level extrapolated from humans, without short-term adverse effects. Further investigation into the efficacy and long-term safety of rufinamide in the treatment of canine epilepsy is warranted.

The aims of this study were to investigate the ABG for (i) the prevalence of skull abnormalities; (ii) the prevalence of SM; (iii) an association between lateral ventricular size, cerebellar size, and SM; and (iv) associations between SM, skull abnormalities, CSF pleocytosis, and clinical signs. Seventy-six ABGs, recruited as part of a larger epidemiological and genetic study, underwent brain and spinal MRI evaluation (3.0T General Electric Signa HDx, Milwaukee, WI). All dogs were evaluated neurologically, recording deficits and the presence of spinal pain. Sequences acquired included T2W, T1W pre- and post-contrast, and T2W FLAIR, sagittal and transverse. Cervical spinal cord central canal (CC) and/or syrinx size and its percent area of the spinal cord were measured using OsiriX. The presence of Chiari-like malformation (CM) was assessed by recording the presence of caudal cerebellar deviation and/or foramenal vermal herniation.
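The terminal half-life and AUC reported above are standard noncompartmental quantities (the study itself used WinNonlin). A minimal sketch of the usual calculations is shown below; the function names and the mono-exponential sample profile are hypothetical, chosen only so the half-life matches the reported 9.86 h mean.

```python
import math

def terminal_half_life(times, concs, n_points=3):
    """Half-life from a log-linear least-squares fit of the last n_points samples."""
    t = times[-n_points:]
    y = [math.log(c) for c in concs[-n_points:]]
    n = len(t)
    slope = (n * sum(a * b for a, b in zip(t, y)) - sum(t) * sum(y)) / \
            (n * sum(a * a for a in t) - sum(t) ** 2)
    return math.log(2) / -slope

def auc_trapezoid(times, concs):
    """Linear-trapezoidal AUC from the first to the last sample."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))

# hypothetical mono-exponential profile with t1/2 = 9.86 h
k = math.log(2) / 9.86
times = [0, 1, 2, 4, 8, 12, 24]
concs = [20.0 * math.exp(-k * t) for t in times]
print(round(terminal_half_life(times, concs), 2))  # 9.86
```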
Lateral ventricle and cerebellar volumes were expressed as percentages of the cerebrum and intracranial volume, respectively. Forty-five dogs underwent atlanto-occipital cerebrospinal fluid tap at the time of MRI, and the white blood cell (WBC) count was recorded. Student's t-tests were used to compare the measured variables between groups with and without skull abnormalities, spinal pain, and neurological signs. The mean age of the 30 males (24 intact) and 46 females (34 intact) was 50.4 months (range 8-135; median 44 months). Neurological deficits and neck pain were noted in 21 (27%) and 15 (19.7%) of dogs, respectively; 5 dogs (6.57%) exhibited both. Cerebellar deviation and vermal herniation were present in 37 (48.68%) and 46 (60.52%) dogs, respectively; twenty-three dogs (30.26%) had both. Mean height of the CC was 2.3 mm (0-7.2 mm). Forty (52.63%) CCs were greater than 2 mm in height; the mean length of these lesions was 2.03 vertebrae (0.5-7). Mean CSF WBC count was 4.97/µL (0-39). Syrinx height and extent were significantly greater in dogs with neurological signs (size p = 0.01; extent p = 0.0004). There were no significant differences in syrinx size and extent in dogs with or without skull abnormalities or spinal pain. There were no associations of syrinx height or extent with CSF WBC count or age of dog. Intact females had a significantly lower syrinx extent than intact males (p = 0.009). There were no significant differences in the presence of spinal pain or neurological signs between dogs with or without skull abnormalities. There was a significant negative association between ventricular percentage and cerebellar percentage (p < 0.0001). There was a significant association of ventricular percentage with syrinx percentage (p = 0.0015) and height (p = 0.0007). This study suggests that SM and CM are prevalent in ABGs.
Syrinx size and extent are associated with neurological signs, and ventriculomegaly is associated with both small cerebellar size and large syrinx size. However, SM may not be associated with CM as defined by cerebellar herniation and deviation, and is not associated with CSF inflammation.

The power tissue resection device (PTRD) is a handpiece comprising an outer cannula with a motor-driven, vacuum-assisted inner cutting blade. This device was designed and is marketed for human neurosurgical brain/spinal cord tumor resection. The purpose of this study is to describe the use of the PTRD for intervertebral disc fenestration and to compare the effectiveness of manual fenestration to that of the PTRD. Fifteen cadaveric lumbar spines were randomly placed into three study groups: group 1 was the control group, on which no fenestrations were performed; group 2 was the manual fenestration group; and group 3 was the PTRD fenestration group. The effectiveness of both manual and PTRD fenestration was assessed by calculating the ratio of remaining nuclear weight post-fenestration to total nuclear volume; discs with lower ratios were more effectively fenestrated. Results showed a smaller ratio of post-fenestration remaining nuclear weight to nuclear volume following fenestration with the PTRD (0.23 ± 0.09) as compared to manual fenestration (0.30 ± 0.10). These results did not show statistical significance. When fenestrated samples were compared to control samples (0.39 ± 0.07), there was a statistically significant reduction in ratios. In conclusion, the PTRD is easy to use and is as effective as the manual technique for canine intervertebral disc fenestration.

According to the human WHO classification, gliomatosis cerebri (GC) is a rare astrocytic tumor affecting at least three lobes of the brain with extensive infiltration but relative preservation of brain architecture. GC has not been reported to occur as a hereditary disease in either man or animals.
Here, we report the temporally clustered occurrence of GC in a family of Bearded Collies. A 7-year-old female Bearded Collie with forebrain signs was presented. Differentials included inflammatory/infectious, metabolic/toxic, and neoplastic diseases. Within a period of 12 months, 3 offspring of this bitch were presented with similar clinical signs. Two dogs were full siblings (2 males); the remaining female dog originated from a mating with a different male dog. MRI was performed in all 4 dogs and revealed a diffuse and extensive intra-axial lesion with moderate mass effect and midline shift. The ill-defined lesion showed a mainly white-matter distribution, with hyperintense signal on T2W and FLAIR images and iso- to hypointense signal on T1W images without contrast enhancement. The lesion was bilateral in all cases and continued along the white matter, extending partially into the gray matter with contact with the brain surface. Neuropathology revealed diffuse and extensive infiltration of the brain and spinal cord by a neoplastic glial cell population involving the white and gray matter of both hemispheres, thalamus, brainstem, and cerebellum in all 4 dogs. Based on cell morphology and immunoexpression of glial fibrillary acidic protein by neoplastic cells, a diagnosis of GC was made. This is the first report of familial occurrence of GC, which is likely the result of a germ-line mutation. Several human hereditary cancer syndromes are associated with CNS tumors, including, amongst others, the Li-Fraumeni cancer family syndrome (p53 mutation), neurofibromatosis types 1 and 2 (neurofibromin and merlin mutations), and tuberous sclerosis (hamartin and tuberin mutations). Furthermore, familial clustering of human gliomas unassociated with the known inherited cancer syndromes has been described. In the dog, hereditary CNS tumors are not known. The exact mode of inheritance and putative gene mutations of GC in this Bearded Collie family are currently under investigation.
Preliminary results are consistent with a monogenic autosomal dominant mode of inheritance, although a recessive inheritance cannot be completely ruled out at this time. Mutations in the TP53 gene were not found following amplification and sequencing of exons 5-8 in 2 affected dogs. Previously presented at the ECVN annual meeting in Cambridge, UK.

The GM2 gangliosidoses are characterized by a deficiency of β-hexosaminidase. There are two isoforms: Hex A, composed of an α and a β subunit encoded by the HEXA and HEXB genes, respectively, and Hex B, with two β subunits. Hex A requires an activator encoded by GM2A. Two Japanese Chin dogs with confirmed GM2 gangliosidosis showed elevated total hexosaminidase and normal hexosaminidase A activity, a pattern associated with the AB variant in humans and consistent with prior reports in the breed. This study was performed to identify the responsible mutation using resequencing with an Applied Biosystems 3730xl DNA Analyzer, as previously described (Awano 2009). Mutations in GM2A cause the AB variant in humans, but resequencing GM2A revealed no mutation that could account for the disease. Resequencing HEXA and HEXB revealed a c.967G>A mutation in HEXA which was homozygous in both affected dogs. Sixty-five normal Japanese Chin dogs were screened for the mutant allele; 60 were homozygous for the ancestral allele and 5 were heterozygous. This mutation predicts a p.E323K substitution affecting one of the two primary active-site amino acids that participate in the hydrolysis of GM2 ganglioside. Substitution of a lysine residue at this site is likely to eliminate α-subunit enzymatic activity. The apparently normal levels of hexosaminidase A activity in affected dog samples may be a result of β-subunit overexpression. Human Hex B possesses low levels of activity against the artificial substrate used to assess Hex A activity, but the substrate specificity of the canine enzyme is not known.
Previously presented at the American Society for Neurochemistry; additional data in this abstract.

Phenytoin (PHT) is the intravenous drug of choice in humans for seizure emergencies following benzodiazepines. IV fosphenytoin (FOS) is a PHT pro-drug that causes fewer administration-related adverse events. While the short half-life of PHT is not suitable for chronic oral therapy in dogs, IV FOS has not been studied. Two dogs received 15 mg/kg phenytoin equivalents (PE) and two dogs received 25 mg/kg PE of fosphenytoin intravenously at a rate of 50 mg PE/min. Blood samples for plasma levels were drawn at 10 time points over 12 hours; total and unbound drug levels were measured by HPLC. Vital signs, including EKG, blood pressure, and neurological examination, were monitored. The half-life of conversion of FOS to PHT was approximately 10 min, with > 80% of FOS metabolized to PHT by 30 minutes. Eighty to 84% of PHT was protein-bound during the first 15 minutes after dosing, compared to 90-95% in humans. The elimination half-life for total PHT ranged from 2.8-3.5 hours and for unbound PHT from 1.9-5.4 hours. Dogs receiving 15 mg/kg PE intravenously achieved unbound PHT maximum plasma concentrations of 2.2-2.4 µg/mL at 5 minutes, consistent with human loading-dose levels. Adverse events observed in some dogs included vomiting, mild ataxia, and short-lived tremors, the severity of which appeared dose-dependent. All dogs were clinically normal within 30 minutes of all doses. A 15 mg/kg PE dose of IV FOS appears adequate for production of PHT levels predicted to be effective for the treatment of canine seizure emergencies. Further studies in clinical canine patients are warranted.

Acquired myasthenia gravis (MG) is caused by antibody-mediated inactivation of the acetylcholine receptor on the neuromuscular endplate, causing focal, regional, or generalized muscle weakness.
Many medical treatments have been reported; however, responses to therapy and outcomes are unpredictable, and death often results from aspiration pneumonia. Therapeutic apheresis is an extracorporeal procedure that separates blood into its components for removal or specific alteration prior to return to the patient. Therapeutic plasma exchange (TPE) is an apheresis treatment in which plasma (containing pathologic antibodies) is removed and exchanged with donor plasma. TPE is used routinely to treat MG in human patients with severe disease or disease unresponsive to conventional therapy. We report the successful use of TPE to treat 2 large-breed dogs with confirmed MG (acetylcholine receptor antibody concentrations: 3.05 and 3.32 nmol/L, respectively; normal concentration: < 0.6 nmol/L) that was severe and not adequately responsive to traditional therapies. Both dogs were non-ambulatory and recumbent, and demonstrated megaesophagus and aspiration pneumonia. Three TPE treatments (1 plasma exchange each) were performed over 5 and 7 days, respectively, in each dog without complication. Both dogs became ambulatory within 3 days of starting TPE treatment, with subsequent resolution of regurgitation and megaesophagus. Pyridostigmine was continued during TPE sessions and discontinued in both dogs within 3-6 months. Both dogs remain asymptomatic and have had no recurrence of MG during 16 and 4 months of follow-up, respectively. TPE is a viable treatment option for dogs with MG that have severe disease or life-threatening complications, or that remain unresponsive to traditional therapies. TPE may alleviate clinical signs more rapidly and improve long-term outcomes when compared to historical experience in patients with comparable disease.

Clinical findings, clinicopathologic data, imaging features, and treatment of canine spinal meningiomas have been described in the veterinary literature, but histological characteristics and tumor grading have less commonly been reported.
The aims of this retrospective case series were to describe the clinical, imaging, and histologic features of seven canine spinal meningiomas, including a cervical spinal cystic meningioma that had imaging and intraoperative features of a subarachnoid cyst. Medical records from dogs with a histopathological diagnosis of spinal cord meningioma presented to the veterinary teaching hospital between 2006 and 2010 were reviewed. Signalment, presenting clinical signs, physical and neurologic examinations, clinicopathologic data, surgery reports, and available images were reviewed. All meningiomas were histologically classified and graded following the international WHO human classification for CNS tumors. Seven dogs were included, 4 males and 3 females. Median age at presentation was 8.7 years (range, 3.5-11.4 years), and median weight was 35 kg (range, 8-45 kg). Median time between onset of clinical signs and diagnosis was 108 days (range, 45 days to 1 year). Cerebrospinal fluid (CSF) analysis was performed in 4 dogs, showing increased protein concentration in 2 cases and being normal in the other 2. Spinal radiographs revealed vertebral canal widening in one case. Myelography (4/7) showed intradural/extramedullary lesions in three cases, one of them consistent with a CSF-filled subarachnoid cavity, and an extradural lesion in one case. Magnetic resonance imaging (MRI) was performed in all cases and revealed mild to marked hyperintensity on T2W and pre-contrast T1W images and homogeneously contrast-enhancing (CE) intradural/extramedullary masses (4 cervical and 2 thoracic) in six cases, with one of these showing an additional intramedullary CE pattern. A dural tail was identified in two dogs. One dog had a fluid-filled subarachnoid enlargement located dorsally to the spinal cord. This lesion was hyperintense on T2W images, hypointense on T1W and FLAIR images, and did not enhance.
It was diagnosed as a spinal subarachnoid cyst, but histopathological study of the surgically resected mass revealed a grade I cystic meningioma. Five other cases underwent cytoreductive surgery: two transitional meningiomas (grade I) that survived 3 months (alive at the time of writing) and 7 months, and three anaplastic meningiomas (grade III) that survived 10-16.6 months before neurological deterioration and euthanasia. Another dog with an anaplastic meningioma was euthanized immediately after diagnosis. There are few reports grading canine spinal meningiomas, most being grade I or II. Of the few grade III tumors reported, only one had been treated surgically, and it was euthanized 90 days later because of neurological deterioration. We report four grade III (anaplastic) meningiomas, three of which were surgically treated and had longer survival times. Finally, cystic meningioma should be considered in the differential diagnosis of cases with imaging features consistent with an arachnoid cyst; because of their similar appearance, histopathological analysis is essential for a definitive diagnosis.

Head trauma is a common veterinary emergency, but few prognostic indicators have been studied in dogs, making it challenging for clinicians to counsel clients about the odds of recovery. A recent meta-analysis showed that higher plasma glucose, lower plasma pH, and lower hemoglobin at admission were associated with increased risk of death in human head trauma. The goal of this retrospective study was to investigate the association between admission point-of-care blood gas parameters and survival to discharge in dogs with head trauma. Fifty-one dogs presenting to the Cornell University Hospital for Animals with head trauma from 2007 to 2010 that had a blood gas analysis performed within 1 hour of presentation were eligible for inclusion. Parameters assessed included glucose, base excess (BE), anion gap (AG), pH, hemoglobin, and sodium.
Biochemical data were found to be normally distributed using the Kolmogorov-Smirnov test. t-tests or Welch tests were used to compare parameters between survivors (S, n = 42) and non-survivors (NS, n = 9). Of glucose, BE, AG, pH, hemoglobin, and sodium, only mean glucose (S = 131 mg/dL, NS = 171.4 mg/dL, p = 0.029) was significantly different between groups, although there was a trend toward a difference in mean BE (S = −3.6, NS = −8.5, p = 0.055). Logistic regression analysis showed that, of the parameters, only BE was independently associated with outcome (odds ratio 0.79, 95% CI = 0.63-0.98, p = 0.036). These results suggest that two easily measured biochemical parameters (glucose and BE) may yield useful prognostic information in dogs with head trauma, but further studies are needed to elucidate these findings.

Type I intervertebral disc disease (IVDD) commonly affects chondrodystrophic dogs. Neurological recovery and outcome following surgical decompression may be unpredictable due to suspected ischemic neuronal injury. Hyperlactatemia has been associated with spinal cord injury in humans and experimental animals. The purposes of the study were (1) to determine the relationship between serum and CSF lactate levels and (2) to compare lactate levels with neurological outcome following decompressive surgery in dogs with IVDD. Otherwise healthy chondrodystrophic dogs diagnosed with IVDD localized to the T3-L3 spinal cord were included. Serum lactate levels were obtained at anaesthetic induction, skin incision, muscle dissection, and extubation. In patients with hyperlactatemia at extubation, additional samples were obtained. CSF was analyzed for lactate concentration. Neurological status was recorded at presentation and multiple times during the recovery period. 31 dogs were included in the study (3-12 years old). 22/31 dogs had normal lactate levels throughout the study.
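The reported odds ratio of 0.79 per unit of base excess compounds multiplicatively over larger changes. A small illustration of the standard logistic-regression relationship OR = exp(β) is below; this is not the authors' model, only the generic arithmetic.

```python
import math

or_per_unit = 0.79                    # reported OR per 1-unit change in base excess
beta = math.log(or_per_unit)          # the underlying logistic-regression coefficient
or_per_5_units = math.exp(5 * beta)   # ORs compound multiplicatively: 0.79 ** 5

print(round(or_per_5_units, 3))  # 0.308
```

In words: under this model, a 5-unit shift in BE is associated with roughly a three-fold change in the odds of the outcome, whichever direction the model was coded in.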
9/31 dogs had serum hyperlactatemia prior to anaesthetic induction; 6/9 returned to normal during anaesthesia and 3/9 had continued hyperlactatemia until the end of the observation period. Neurological status of the dogs varied similarly between all groups. In 12/14 dogs in which CSF lactate levels were measured, initial serum levels were lower than CSF lactate levels; in 5/14 dogs in which CSF and serum were collected simultaneously, serum lactate concentration was consistently lower than CSF lactate. No association between presenting neurological status or neurological outcome and serum or CSF lactate concentration was found. Neither serum nor CSF lactate concentration is useful for predicting neurological outcome in dogs with IVDD.

Chiari-like malformation (CM) has been associated with syringomyelia (SM) in the Cavalier King Charles Spaniel (CKCS) and is postulated to result from a mismatch between the volume of the caudal cranial fossa and the brain parenchyma contained within it. The objective of this study was to assess the role of cerebellar volume in caudal cranial fossa overcrowding and syringomyelia. Three-dimensional models were created from T2-weighted transverse magnetic resonance images in the commercial software package Mimics. Volumes of cerebellar parenchyma were analyzed as percentages of caudal cranial fossa volume (cerebellar caudal cranial fossa percentage) and total brain parenchyma volume (cerebellar brain percentage). Data were assessed for normality and the appropriate statistical test was used to compare means/medians between groups. Forty-five small-breed dogs (SB), 58 CKCS, and 31 Labradors (LD) were compared. As SM is thought to be a late-onset disease process, two subgroups were formed for comparison: 21 CKCS younger than 2 years with SM (group 1) and 13 CKCS older than 5 years without SM (group 2). CKCS had a larger cerebellar caudal cranial fossa percentage than the other groups (…76%] vs.
SB 47.81% [40.36-62.91%] and LD 41.32% [32.59-52.95%]; p < 0.001). The cerebellar brain percentage was also larger in CKCS compared to the other groups (CKCS 8.90% [6.62-11.46%] vs. SB 7.37% [5.25-11.34%] and LD 7.23% [6.36-9.54%]; p < 0.001). Group 1 had a significantly larger cerebellar caudal cranial fossa percentage than group 2 (53.71% ± 1.27 vs. 49.31% ± 2.35, p = 0.001) and a significantly larger cerebellar brain percentage (9.45% ± 0.43 vs. 8.58% ± 0.55, p = 0.021). Our findings show that the CKCS has a relatively larger cerebellum than small-breed dogs and Labradors, and that there is an association between increased cerebellar volume and SM in CKCS.

Chiari-like malformation (CM) is nearly omnipresent in the Cavalier King Charles Spaniel (CKCS) breed. The mismatch between the caudal cranial fossa and the parenchyma within it is thought to lead to syringomyelia (SM). There is currently a lack of information on whether the morphological changes seen in CKCS with CM are progressive or non-progressive. In this retrospective study we used established measurements of cerebral volumes, foramen magnum height, and cerebellar herniation length to assess whether there is a significant difference between subsequent magnetic resonance (MR) imaging studies of the brain of the same dog. Electronic patient records were reviewed for CKCS with CM that had two separate MRI scans a minimum of 3 months apart. CKCS with diseases affecting the measurements were excluded. For the volumetric measurements, three-dimensional models were created from T2-weighted transverse MR images in medical imaging software (Mimics v12.0, Materialise NV, 2008). Volumes of the caudal cranial fossa parenchyma were analyzed as percentages of caudal cranial fossa volume, and caudal cranial fossa volume was analyzed as a percentage of total cranial cavity volume. The volume of the ventricular system was recorded as a percentage of total parenchymal volume.
data were assessed for normality and the appropriate statistical test was used to compare means/medians. twelve ckcs were included with a median scan interval of 9.5 months (3-83 months). the size of the foramen magnum increased significantly between the first and second scan (1.52 ± 0.08 cm vs. 1.59 ± 0.09 cm; p = 0.03), as did the length of cerebellar herniation (0.17 ± 0.05 cm vs. 0.22 ± 0.09 cm; p = 0.02) and the caudal cranial fossa percentage (13.44% [9.9-15.28%] vs. 13.96% [9.9-15.48%]; p = 0.02). there was no significant difference noted between the two time points in any of the other volumetric measurements. this work could suggest that overcrowding of the caudal cranial fossa, in conjunction with the movements of cerebrospinal fluid and cerebellar tissue secondary to pulse pressures created during the cardiac cycle, causes pressure on the occipital bone. this leads to resorption of the bone and therefore an increase in caudal cranial fossa and foramen magnum size, allowing cerebellar herniation length to increase. the cord dorsum potential (cdp) is a stationary potential arising in dorsal horn interneurons after stimulation of sensory nerves. cdps have been recorded in normal anesthetized dogs previously, and normal latency values have been determined for the tibial and radial nerves. this study was undertaken to determine whether cdps could be reliably recorded from the caudal nerves in normal dogs, thus allowing electrophysiological assessment of the cauda equina, and whether neuromuscular blockade improved recording quality. ten adult dogs weighing from 23.2 to 32.0 kg were anesthetized and cord dorsum recordings were compared before and after administration of atracurium. recording needles were placed onto the dorsal lamina at intervertebral sites from l7/s1 to l2/3. stimulations were made on the lateral aspect of the caudal vertebrae approximately 5-8 cm from the tail base. recordings from 500 stimulations were averaged.
cdps were recorded successfully in all dogs. onset latency varied from 2.2 to 4.7 ms. the cdp was largest when recorded closest to the site of entry of the stimulated nerve into the cord, as determined by post-mortem examination immediately after testing in 6 dogs. administration of atracurium did decrease muscle artifact, and in some cases helped isolate the origin of the cdp. these data show that cdps can be readily assessed from the caudal nerves of anesthetized dogs, with or without atracurium. cord dorsum potentials from caudal nerves may add important information about the integrity of the cauda equina in dogs with suspected degenerative lumbosacral stenosis. canine intracranial glial tumors and many human brain tumors express heat shock proteins (hsps) associated with their degree of malignancy. the up-regulation of hsps during tumor cell growth helps keep tumor proteins stable and therefore makes them a reasonable target for therapy. ki67 and e-cadherin (ec) expression have been strong indicators of cell proliferation and dedifferentiation, respectively. the aims of this study were to determine (i) if canine meningiomas express hsp 27 and/or hsp 72; (ii) whether the expression of the hsps was associated with ki67 and/or ec expression; and (iii) whether peritumoral edema was associated with hsp, ki67 and/or ec expression. forty-one formalin-fixed, paraffin-embedded canine intracranial meningiomas underwent immunohistochemical staining using anti-hsp 27 or anti-hsp 72 antibodies. these tumor samples were also immunohistochemically stained for ki67 and ec expression. canine mammary carcinoma and squamous cell carcinoma tissues served as the control samples, as both have previously been shown to express hsps. skin was used as the control for ki67 and ec. four non-overlapping high-power fields of each stained sample were selected and cell staining was analyzed using a semi-quantitative method for hsps and ki67; a qualitative assessment was used for ec.
all analyses were performed using sas v9.2 (cary, nc). descriptive statistics of staining percentages were calculated for all tumors tested. simple pearson's correlation was used to test for correlations of ec area with hsp areas and ki-67 percent positive cells, and of ec intensity with hsp intensities and ki-67 percent positive cells. all hypothesis tests were two-sided and the significance level was α = 0.05. thirteen meningiomas had mr images quantitatively evaluated for peritumoral edema using t2-flair sequences. the edema index (ei) was evaluated for an association with hsp 27, hsp 72, ec and ki67 expression. hsp 27 was expressed in 36% (mean 8.7% of cells; range 0-58%), hsp 72 in 52% (mean 5.2% of cells; range 0-26%) and ec in 68% of meningiomas. there was no association demonstrated between either hsp expression variable and ec or ki-67 expression. there was also no association between the ec expression variables and ki-67. however, there was a significant negative association between hsp 72 extent (p = 0.03) and area (p = 0.04) with ei. in conclusion, hsp 27 and 72 expression was demonstrated in canine intracranial meningiomas but was not associated with ki-67 or ec expression. this study suggests that hsps may not have a significant role in the maintenance of canine meningiomas and so do not represent a novel treatment target for this group of tumors, unlike canine glial cell tumors. however, hsp 72 may be involved in the pathogenesis of peritumoral edema in meningiomas and warrants further investigation. an extended-release (xr) formulation of levetiracetam, a second-generation antiepileptic drug, was recently approved for human use on a once-daily basis. although levetiracetam is clinically effective for seizure control in dogs, it requires three times daily administration. the potential benefits of the xr formulation include reduced daily dosing leading to improved compliance and relatively constant plasma concentrations.
the aim of this study was to compare the pharmacokinetics of levetiracetam xr tablets with immediate-release (ir) tablets following single dosing in dogs. five clinically and neurologically normal mixed-breed dogs were used in a cross-over design. all dogs (mean body weight 25.9 kg; range 23.2-30.5) had normal hematology, serum chemistry and urinalyses. following a 12 hour fast, each dog was administered oral ir levetiracetam (500 mg; mean dose 19.3 mg/kg; range 16.4-21.6). heparinized blood for drug analysis was taken from each dog prior to administration and 0.25, 0.5, 0.75, 1, 2, 4 and 8 hours after. blood was immediately centrifuged and supernatant plasma was stored at −80°c until analysis. after a 4 day wash-out period, each dog was administered 500 mg oral xr levetiracetam and blood samples were taken at identical timings. plasma samples were thawed at room temperature before preparation by solid-phase extraction for hplc analysis. reverse-phase chromatographic separation was performed. levetiracetam and an internal standard were detected using ultraviolet spectroscopy at 205 nm. concentrations of levetiracetam were determined by peak area comparison to the internal standard. mean data were fit to a one-compartment pharmacokinetic model with first-order elimination and absorption, including a lag phase for the xr formulation. no adverse clinical effects were noted in any of the dogs. the auc associated with xr was 230 hr·µg/ml, a 5.14-fold increase over that with ir (44.8 hr·µg/ml). the absorption half-life was 3.2 hr with xr and 0.41 hr with ir, a 7.75-fold difference. the elimination half-life was 3.13 hr with xr and 2.19 hr with ir, a 1.43-fold difference. the tmax associated with xr was 5.01 hr and 1.22 hr with ir, a 4.21-fold difference. the cmax associated with xr was 18.5 µg/ml and 9.62 µg/ml with ir, a 1.92-fold difference.
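the one-compartment fitting step described above can be sketched as follows. this is a minimal illustration in python (numpy/scipy) using a noise-free synthetic curve with hypothetical parameter values loosely inspired by the ir half-lives quoted; it is not the authors' analysis pipeline and the numbers are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_compartment_oral(t, ka, ke, c_scale, tlag):
    """one-compartment oral model: first-order absorption (ka),
    first-order elimination (ke), absorption lag time (tlag).
    c_scale lumps f*dose/vd; concentration is zero before the lag."""
    ts = np.clip(t - tlag, 0.0, None)
    return c_scale * ka / (ka - ke) * (np.exp(-ke * ts) - np.exp(-ka * ts))

# hypothetical "ir-like" truth: absorption t1/2 = 0.41 h, elimination t1/2 = 2.19 h
ka_true, ke_true = np.log(2) / 0.41, np.log(2) / 2.19
t = np.array([0.25, 0.5, 0.75, 1.0, 2.0, 4.0, 8.0])     # sampling times (h)
conc = one_compartment_oral(t, ka_true, ke_true, 12.0, 0.1)  # noise-free synthetic data

popt, _ = curve_fit(one_compartment_oral, t, conc, p0=[1.5, 0.25, 10.0, 0.05])
ka_fit, ke_fit, c_fit, tlag_fit = popt
elim_half_life = np.log(2) / ke_fit   # recovers ~2.19 h on this synthetic curve
auc_0_inf = c_fit / ke_fit            # closed-form auc for this model
```

with real (noisy) mean concentration data, sensible starting values and, if needed, parameter bounds matter much more than in this noise-free check.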
the plasma concentration of ir levetiracetam was not detectable at 8 hr after administration, whereas it was greater than 10 µg/ml at 8 hr after xr administration. based on the auc data, there is an approximately 5-fold increase in bioavailability of the xr compared to the ir formulation. the cmax was approximately 2 times greater following xr administration, and a plasma level in excess of the suggested canine therapeutic concentration (5 µg/ml) was maintained for at least 8 hours. although specific dosing recommendations cannot be made from these data, the favorable pharmacokinetics of xr over ir suggest that single daily administration could be efficacious. thoracic and lumbar vertebrae are frequently affected by fractures and/or luxations in dogs following trauma. surgical repair is part of the emergency treatment described for this disorder but does not guarantee improvement of the associated clinical signs. multiple surgical repair techniques have been described but have not been compared in terms of their success and the factors associated with a positive outcome. the aims of this study were to retrospectively evaluate the effect of 3 different types of vertebral repair, injury type and injury location on outcome in dogs with thoracolumbar (tl) and lumbosacral (ls) spinal trauma. medical records were searched for dogs with radiographic evidence of a tl or ls vertebral fracture and/or luxation (1995-2010); signalment, body weight and duration of disease were recorded. dogs were retrospectively scored neurologically (0-5; normal to plegic with absent pain perception) on admission and at re-evaluation following surgery. lesion location was classed as t3-l3 or l4-s3; dogs were evaluated as one group and as two separate groups with respect to outcome. a subset of lesions was classed as cord compression or not based on advanced imaging.
three repair techniques were evaluated: (i) pins and polymethylmethacrylate (pmma); (ii) screws and pmma; and (iii) spinal stapling. regression analysis was applied to test for an association between the type of surgery and a successful outcome (non-painful and ambulatory). simple bivariate analyses were performed to investigate variables predictive of a successful outcome. fifty-nine dogs were included. twenty-eight dogs were classed as t3-l3 and 31 were l4-s3. there were 55 dogs with fractures and 43 with luxations; 23 dogs had both. thirty-one of 35 dogs evaluated had spinal cord compression. ten dogs were repaired with im pins and pmma, 18 dogs with screws and pmma and 31 dogs with spinal stapling. overall, there was a 78.7% success rate; there was no significant difference in outcome between the anatomic sites (p = 0.5). all dogs initially graded as 1-3 pre-operatively were classed as a successful outcome at least one week following surgery; 79% of dogs initially graded as 4 (plegic with pain perception) were classed as successful recoveries. one dog (12.5%) initially graded as 5 (plegic with no pain perception) had a successful outcome. a low admission score was statistically predictive of a successful outcome (p < 0.0001). surgery type was not associated with a successful recovery (p = 0.13). signalment, body weight, location of injury, injury type (fracture, luxation or both), presence of compression, and duration of disease did not predict outcome. from this study, the rate of successful recovery of dogs following surgical fixation is high and is only dependent on the neurological score at the time of admission. the choice of surgical technique does not seem to influence outcome, although a prospective study comparing two surgery types is warranted to further investigate this issue, the results of which can be confounded by surgeon experience and variable follow-up.
cranial thoracic intervertebral disc disease (ivdd) is extremely rare due to the presence of the intercapital ligament, although anecdotal data suggest german shepherd dogs (gsd) may have some predisposition for this disorder. the objective of the study was to retrospectively evaluate through mri whether cranial thoracic ivdd is significantly more common in gsd compared to other large breed dogs. a search was done through the database of the ontario veterinary college. any gsd for which a spinal mri including the t1-t10 spine had been performed was recruited. a group of large-breed non-gsd dogs was used as a control. in the midsagittal t2wi plane, three variables were assessed and graded for each intervertebral disc space t1-t9: spinal cord compression (scc), disc degeneration (dd), and herniation. the wilcoxon signed rank test was used to assess whether scores differed between groups. exact conditional logistic regression was used to determine whether any intervertebral disc space was a risk factor. 22 gsd and 46 large breed non-gsds were recruited. the gsd group had significantly higher scores than the non-gsd group for scc and herniation. regarding the individual intervertebral discs, in the gsd group the t2-3, t3-4 and t4-5 discs had a significantly increased risk for scc, and t3-4 for herniation. the results of this study show that gsd have a higher risk of cranial thoracic ivdd than other large breed dogs. that risk was higher in discs t2-3, t3-4, and t4-5, particularly in t3-4. genetic and/or conformational factors, such as weakness of the intercapital ligament, may predispose gsd to this lesion. diskospondylitis is a common disease of the canine spine; however, few reports of mr imaging findings in dogs are available. the purpose of this study was to describe the signalment, clinical and mr imaging features in affected dogs. twenty-three dogs with a diagnosis of diskospondylitis based on clinical signs, mr imaging, and urine, blood, csf and/or intervertebral disk cultures were included.
large breed dogs (> 25 kg) accounted for 21 of the cases. the mean age was 6.8 years, with males and females equally represented. most dogs (15/23) were ambulatory with varying degrees of pain and paresis. mr imaging characteristics of 27 sites were reviewed. on t2w images, vertebral endplates were of mixed signal intensity (16/27) while the vertebral body was hypointense (11/27). the intervertebral disk space was hyperintense on t2w (16/27) and stir (14/15) images and of mixed signal intensity (7/13) on t1w images. paravertebral soft tissue hyperintensities were noted on 10/27 t2w and 12/16 stir images. contrast enhancement occurred at 12/19 endplates and 15/19 intervertebral disk spaces. only 4/18 vertebral bodies and 7/18 paravertebral soft tissues showed contrast enhancement. intramedullary spinal cord t2w hyperintensity was noted at 10/23 sites. spinal cord or cauda equina compression occurred at 22/27 sites. based on the spearman correlation coefficient, a significant direct correlation was found between the degree of spinal cord or cauda equina compression and the patient's neurologic status (p = 0.0011). the incidence and severity of spinal cord compression in canine diskospondylitis may have prognostic value and may have been previously underestimated using other imaging modalities. hemilaminectomy and pediculectomy are both well described and commonly utilized techniques to access the spinal canal. these procedures are most often performed to approach a compressive lesion, such as intervertebral disc disease and neoplasia, the goal being adequate visualization of the spinal canal and access to the offending lesion. a proposed benefit of pediculectomy is preservation of the articular facets, thus better maintaining stability of the vertebral column, but at the cost of reduced access to the spinal canal.
the purpose of this study was to describe standardized anatomical limits of each technique and report any observed differences that could be considered during presurgical planning. ten canine cadavers had both procedures performed on opposite sides to access the t11-12, t13-l1, and l2-3 spinal canal. measurements were obtained after performing a computed tomography study of the spine and recorded from the transverse slice most representative of the defect. the surgical technique, vertebral site, and side of vertebral column were compared with the mean spinal canal and defect height using a covariate model. dorsal and ventral remnant lamina heights were also compared. the height of the defect relative to the spinal canal was 76-87% with hemilaminectomy and 58-75% with pediculectomy. the observed difference in defect height was 12-18% (p < 0.0001) and varied with spinal canal height. dorsal remnant lamina height was 0.9-2.4% of spinal canal height with hemilaminectomy and 7-19% with pediculectomy. ventral remnant lamina height ranged from 1-2% and 0.6-2.3%, respectively, though the difference was not statistically significant. while a larger defect is expected with a hemilaminectomy procedure, our results demonstrate that this difference increases with increasing spinal canal height. interestingly, the proportion of exposed spinal canal decreases with increasing canal height for both procedures. the difference in defect height between techniques was due to greater removal of the dorsal spinal canal, possibly making the hemilaminectomy technique better suited for more dorsal lesions, while no statistically significant difference in access to the ventral canal was observed. no effect of vertebral site was detected. of note was the involvement of the articular facets in half of the pediculectomy defects, involving an average of 22% of the articular facet height. this result questions the suggested benefit for vertebral stability, but further biomechanical studies would be required.
low-level laser therapy (lllt) is a treatment used in human and veterinary medicine for a variety of clinical syndromes. some uses in human medicine include acute pain associated with osteoarthritis, rheumatoid arthritis, tendonitis, tmj disorders, chronic joint disorders, and wound healing. research is currently ongoing to determine the adequate wavelengths to promote effective treatment results with lllt in these conditions. it is purported that lllt acts via the mitochondria to increase cellular metabolism, promoting wound healing and a decrease in pain and inflammation. in this study, we hypothesized that dogs treated with lllt in conjunction with hemilaminectomy would display quicker recovery times regardless of the presence or absence of deep pain sensation. seventeen dogs (9 dachshunds, 2 chihuahuas, 2 french bulldogs, 2 lhasa apsos, and 1 each of a pembroke welsh corgi and a miniature poodle) were selected and divided into two groups. the dogs ranged in age from 2 to 11 years old, weighed between 7 and 33 pounds, and underwent hemilaminectomies after acute onset of paraplegia secondary to intervertebral disc disease (surgically confirmed). one group received laser treatments on days 1 through 4 of hospitalization. the second group did not receive lllt, but followed the same peri-operative medication protocol. the laser used in this study was an erchonia laser model pl5000 (635 nm). the hertz setting was similar for each patient, using the previously established protocol for intervertebral disc disease (ivdd) with pulse rate ranging from 9 hz to 1151 hz. all dogs received advanced imaging pre-operatively with myelogram or mri. results of the study revealed that treatment with lllt at a 635 nm wavelength did not shorten or improve recovery times for dogs with acute onset paraplegia secondary to ivdd after hemilaminectomy procedures.
dogs that showed recovery to ambulation at the two-week recheck were consistently dogs that were deep pain positive on presentation. a lengthened recovery time or no recovery was seen in the majority of those dogs with absent deep pain on presentation, as has been revealed historically in past studies. lllt did not appear to have an effect on this result. however, there are few data describing normal glucose uptake of the canine brain for comparison with suspected or confirmed disease. thus the purpose of this study was to assess the normal distribution of fdg uptake of canine brain structures using a high-resolution research tomography pet and 7 t magnetic resonance imaging (mri) fusion system. fdg-pet and t2-weighted mr imaging of the brain were performed on 4 healthy laboratory beagle dogs. acquired pet and mr images were automatically co-registered by the image analysis software. on mr images, regions of interest (roi) were manually drawn over 48 intracranial structures, including 6 gross structures (whole brain, telencephalon, diencephalon, mesencephalon, dorsal metencephalon, ventral metencephalon and myelencephalon). a standard uptake value (suv) and relative suv ratio (rsuv = suv of roi / suv of whole brain) were calculated for each roi. 7 t mr images compensated for the low anatomical resolution of pet by providing good spatial and contrast resolution for the identification of the clinically relevant brain anatomy. among gross structures, the mesencephalon and ventral metencephalon had the highest (suv: 4.17 ± 0.23; rsuv: 1.12 ± 0.03) and the lowest (suv: 3.34 ± 0.35; rsuv: 0.90 ± 0.06) fdg uptake, respectively. when suvs were calculated for 42 detailed regions, the rostral colliculus and corpus callosum had the highest (suv: 6.00 ± 0.16; rsuv: 1.62 ± 0.05) and the lowest (suv: 2.81 ± 0.12; rsuv: 0.76 ± 0.06) values, respectively.
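the suv and rsuv definitions above are simple to compute. as a minimal sketch in python (the injected dose, body weight and roi activity concentrations below are hypothetical illustrative values, not the study's data):

```python
def suv(roi_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """standardized uptake value: tissue activity concentration
    divided by injected dose per gram of body weight, assuming
    tissue density ~1 g/ml so kbq/ml is treated as kbq/g."""
    dose_per_g = injected_dose_mbq * 1000.0 / (body_weight_kg * 1000.0)
    return roi_kbq_per_ml / dose_per_g

# hypothetical example: 100 mbq fdg injected in a 12 kg dog
suv_mesencephalon = suv(34.75, 100.0, 12.0)  # roi concentration in kbq/ml
suv_whole_brain = suv(31.00, 100.0, 12.0)
rsuv = suv_mesencephalon / suv_whole_brain   # rsuv = suv of roi / suv of whole brain
```

in practice the activity concentration must also be decay-corrected to injection time, which the sketch omits.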
these data acquired from the normal dog brain will be used in clinical neurology to investigate various intracranial diseases such as inflammation, neoplasia and behavioral disorders. degenerative lumbosacral stenosis (dlss) is a multifactorial condition affecting predominantly large breed dogs. the combination of stenosis and compressive neuropathy causes lumbar pain, lameness and neurologic dysfunction. previous reports describe urinary and fecal incontinence in severely affected dogs. the objectives of this retrospective case series were to describe the clinical signs associated with dysuria and eventual diagnosis of dlss in dogs, and to describe factors associated with regained micturition following prompt diagnosis and treatment. medical records of 11 dogs from the university of georgia and the university of missouri between 1995 and 2009 were reviewed. inclusion required observation of dysuria, urine retention, absence of structural lower urinary tract disease and concurrent presumptive diagnosis of dlss. dysuria was defined as inability to initiate or sustain a urine stream. urine residual volume was evaluated post-voiding. dysuria was further evaluated using urethral contrast studies, urodynamic testing (urethral profilometry (4) and cystometry (4)), ultrasonography (5), and urine culture (8). presumptive diagnosis of dlss was based on imaging using plain radiography and epidurography (8), computed tomography (1) or magnetic resonance imaging (2). breeds represented included the german shepherd dog (n = 3), golden retriever (n = 2), bernese mountain dog (n = 2), and 1 each of labrador retriever, weimaraner, rottweiler and mixed breed. all dogs were male; 8 were intact at onset of clinical signs. median body weight was 38.5 kg (range 29.5-46) and median age was 5 years (range 2-10). median duration of clinical signs prior to admission was 2 months (range 0.25-12).
other pertinent presenting clinical signs included dyschezia (2), fecal incontinence (4), general proprioceptive ataxia (2), weakness (2), and difficulty rising (1). physical examination findings included pelvic limb muscle atrophy (2) and prostatomegaly (1). abnormal neurologic examination findings included postural reaction deficits (6), hyporeflexia (4), decreased tail tone (3) and lumbosacral hyperesthesia (6). the neurologic examination was normal in 3 dogs. dorsal laminectomy was performed and the diagnosis confirmed in 9 dogs; recovery was monitored for a median of 5.5 months (range 0.25-9). three of the 9 dogs (33%) regained normal micturition within 0.25-1.5 months of surgery. though not statistically significant, dogs that regained micturition tended to have a shorter duration of clinical signs (median 0.25 months, range 0.25-2) versus dogs that remained dysuric (median 5 months, range 2-12). two of the 3 dogs that regained micturition were neutered at the onset of clinical signs, but only 1 of 6 dogs that remained dysuric was neutered. signs improved in all dogs with postural reaction deficits and decreased tail tone. hyperesthesia resolved in 5 of 6 dogs (83%) and fecal continence returned in 2 of 4 dogs (50%). these findings suggest that, following prompt diagnosis and surgical decompression, normal micturition can be regained in dlss-affected dogs presenting with signs of dysuria. glycogen storage disease type ia (gsdia; von gierke disease) is an inherited metabolic disorder resulting from a deficiency of glucose-6-phosphatase-α (g6pase). previous reports indicate that clinical manifestations of gsdia occur only in individuals with homozygous expression of a p.i121l mutation. heterozygote dogs (het) have been previously reported to exhibit an overall normal outward phenotype. the purpose of this report is to briefly describe some differences that have been observed between het and homozygous wild-type (wt) dogs.
a colony of dogs at the university of florida contains a mix of affected, wt, and het individuals. in the course of studies designed to determine the effectiveness of gene therapy for correction of gsdia in dogs, both wt and het dogs have been utilized as controls. available information about body weights, clinical pathology tests, fasting studies, and liver biopsies was retrieved from the records of both wt and het dogs and compared. although birth weights are similar, het dogs have a slower average rate of weight gain than wt dogs, and this difference is especially prominent during the first few months of life (figure 1). in contrast to affected dogs, both wt and het dogs are able to maintain normal blood glucose concentrations for up to 10-12 hours of fasting; however, after longer fasts of 12-17 hours, het dogs have lower glucose and higher lactate concentrations (table 2: blood glucose 78 ± 17 mg/dl in wt vs. 64 ± 13 mg/dl in het; blood lactate 1.1 ± 0.5 mmol/l in wt vs. 1.7 ± 1.1 mmol/l in het). in addition, liver biopsy samples from het dogs had greater apparent levels of glycogen suggested by pas staining than did samples from wt dogs, and this correlated with the results of proton magnetic resonance spectroscopy, which demonstrated 2.9 times greater glycogen content in a liver biopsy sample from a het dog compared to a sample from a wt dog. together, these findings suggest that the level of g6pase activity in heterozygote dogs does not provide a completely normal physiological, biochemical, or histological phenotype as previously reported. the glucokinase gene (gck) encodes an enzyme involved in cellular glucose-sensing mechanisms in pancreatic beta cells and hepatocytes. gck mrna is present in the feline pancreas but the gene is not expressed in feline liver. hepatic gck expression is abundant in omnivores, so its absence may reflect an evolutionary adaptation of strict carnivores, like feline species. we hypothesized that species-specific features in the gck hepatic promoter may underlie the gene expression pattern observed in cats.
the putative feline gck (fgck) promoter region was located using bioinformatic software to identify homology with human gck (hgck). genomic dna from a dsh cat was subjected to direct sequencing using a series of pcr reactions with species-specific primers. dna clones thus obtained were aligned to generate the feline sequence. direct sequencing yielded 8.9 kb of genomic dna sequence with high homology to sequences (acbe01359231, acbe01359221) archived in the feline genome project. the feline sequence had six regions homologous with non-coding regions of hgck; four of these conserved regions are upstream of the putative fgck start. a 0.8 kb segment immediately upstream of feline hepatic exon 1 is not present in hgck. the 0.8 kb insert is the reverse complement of a conserved sequence located downstream of exon 1 in the feline and human sequences. in conclusion, the putative hepatic promoter of fgck shares extensive homology with the hgck promoter but contains a 0.8 kb insert not found in hgck. functional studies are needed to confirm the role of the unique insert in the regulation of fgck gene expression. deuterium oxide (d2o) dilution has been proposed for quantifying body water content, but remains difficult to perform routinely. the objective of this study was to assess whether the volume of distribution (vd) of creatinine could be proposed as an alternative for such a measurement in dogs. creatinine and d2o vd were measured before (c) and after induction (o) of obesity (achieved by feeding a hypercaloric diet (6100 kcal/kg) for 6 months) in six healthy adult beagle dogs. creatinine (40 mg/kg) and d2o (200 mg/kg) were simultaneously injected as an iv bolus. blood was collected before administration and then at 5, 10, 30, 60, 120, 180, 240, 360, and 480 min post-injection (creatinine), and at 10, 30, 60, 90, 120, 150, 180 and 240 min (d2o). plasma concentrations of both markers were determined. vd was calculated using pharmacokinetic equations.
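one common way to obtain vd from iv bolus data of this kind is to back-extrapolate the concentration at time zero from a log-linear fit and divide the dose by it. a minimal sketch in python with numpy, using a hypothetical noise-free mono-exponential curve (not the study's measurements or its exact equations):

```python
import numpy as np

def vd_ml_per_kg(dose_mg_per_kg, t_min, conc_mg_per_l):
    """volume of distribution after an iv bolus, assuming
    one-compartment (mono-exponential) decay: fit ln(c) vs t,
    back-extrapolate c0, then vd = dose / c0 (l/kg -> ml/kg)."""
    slope, intercept = np.polyfit(np.asarray(t_min, float),
                                  np.log(np.asarray(conc_mg_per_l, float)), 1)
    c0 = np.exp(intercept)
    return dose_mg_per_kg / c0 * 1000.0

# hypothetical creatinine curve: c0 = 66.7 mg/l, i.e. vd around 600 ml/kg
t = [5, 10, 30, 60, 120, 180, 240, 360, 480]          # sampling times (min)
conc = 66.7 * np.exp(-0.004 * np.asarray(t, float))   # noise-free decay
vd = vd_ml_per_kg(40.0, t, conc)                      # close to 600 ml/kg
```

with real data showing an early distribution phase, a multi-exponential or non-compartmental (auc-based) approach would be more appropriate than this single-exponential sketch.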
the body weight increased from 10.7 ± 0.6 (c) (mean ± sd) to 15.8 ± 1.0 kg (o). d2o vd decreased from 636 ± 18 (c) to 468 ± 19 (o) ml/kg. similarly, creatinine vd decreased from 624 ± 31 (c) to 420 ± 25 (o) ml/kg. the individual difference between creatinine and d2o vd (expressed in % of d2o vd) ranged from −9.5 to 1.0% (c) and from −17.3 to −7.1% (o). in conclusion, creatinine vd provides a good estimate of d2o vd in both normal and obese conditions. a 16 wk double-blinded study was conducted comparing the effect of two foods on mobility in dogs. all work was approved by an iacuc. 53 healthy beagle dogs (7-15 years old, mixed gender) were used. 33 affected (a) and 20 non-affected (na) dogs were identified based on orthopedic examination and radiography as having or not having evidence of naturally occurring joint pathology (presence of osteophytes, dysplasia, effusion, pain on manipulation, etc.) in one or more joints. a and na dogs were evenly distributed between two locations. foods had nutrient profiles adequate for maintenance according to the 2009 aafco official publication. the test food contained greater amounts of methionine, manganese, carnitine, vit. e, vit. c, alpha-linolenic acid (ala), and eicosapentaenoic acid (epa): the food provided 388 mg n-3 fatty acids and 847 mg n-6 fatty acids per 100 kcal. all dogs were fed the control food for 4 wks, followed by a 12 wk feeding period in which 17 a and 10 na dogs consumed the test food and 16 a and 10 na dogs the control. blood and urine were collected at weeks 0, 4, 8 and 12 and analyzed for serum fatty acids, and urine thromboxane:creatinine ratios were determined. evaluators in this study were different from those making the original diagnosis and so were blinded as to treatment and diagnosis. orthopedic exams were performed by two veterinary surgeons at each site on weeks 0, 4, 8 and 12. the same two evaluators examined the same dogs throughout.
the data were evaluated for the difference between a and na dogs and between foods, with age, gender and location as covariates. body weight, disease status, age and gender were blocked. analysis included an anova repeated measures mixed procedure (sas version 9.0) to determine treatment effects over time. serum epa was greater and arachidonic acid lower at weeks 4, 8 and 12 in the test food fed dogs (p < 0.05). urine thromboxane:creatinine ratios were decreased in the a dogs fed the test food compared to the a dogs fed the control food at 12 wks (p < 0.05). lameness score was significantly improved (p < 0.05) within and between groups of dogs fed the test food. a significantly greater proportion of a dogs fed the test food had improvement in total joint score, lameness, functional disability and overall assessment score at 12 wks compared to a dogs fed the control food. 69% of a dogs had an improved overall assessment score on the test food after 4 wks and at 12 wks, compared to 40% at 4 wks and 27% at 12 wks of a dogs consuming the control food. this study shows that a food with moderate amounts of added linolenic acid and epa can have a positive impact on systemic inflammation and mobility in 4-12 weeks. a similar abstract will be presented at the orthopedic research society meeting in january 2011 to an audience largely of orthopedic researchers interested in human orthopedics. fat is an important dietary component, serving both as a source of energy and as a supplier of essential fatty acids (fa). medium-chain triglycerides (mct) contain intermediate-length fa that do not rely on l-carnitine for transport across the inner mitochondrial membrane, bypassing this rate-limiting step in fa oxidation.
long-chain (n-3) polyunsaturated fatty acids (pufa) from fish oil (fo), and in particular eicosanoids derived from eicosapentaenoic acid (epa), may protect against excessive inflammatory reactions, which may be exacerbated by eicosanoids derived from (n-6) arachidonic acid (aa). this study investigated the effects of adding mct:fo and l-carnitine to a control diet (prescription diet k/d) on lean body mass, and on serum fa and metabolites. forty healthy beagles (3.1 to 14.8 y) were fed one of three foods (n = 13 to 14 dogs each) for 6 mo. the study protocol was reviewed and approved by the iacuc, hill's pet nutrition, inc. all foods were complete, balanced, and sufficient for maintenance of adult dogs, and had similar concentrations of moisture, protein, and fat (approx. 7.4%, 14.0%, 18.1%, respectively). composition of serum fa was determined by gas chromatography of fa methyl esters. metabolomic profiles of serum samples were determined from extracted supernatants that were split and run on gc/ms and lc/ms/ms platforms for identification and relative quantification of small metabolites. body composition was determined by dual-energy x-ray absorptiometry. serum concentrations of lauric and myristic fa increased; epa and dha increased in a dose-dependent manner; and aa decreased in dogs fed treatment food 3 (proc mixed procedure in sas; all p < 0.05) when compared to dogs fed treatment foods 1 or 2. serum concentrations of acetylcarnitine and succinylcarnitine increased, indicating l-carnitine incorporation, in dogs fed treatment foods 2 and 3. thus, a diet enriched with mct:fo significantly altered serum fa composition, enriching (n-3) pufa and lowering aa concentrations. there was no change in lean body mass for any of the diets compared to baseline values, and no difference between treatments, showing that all three treatment foods met protein requirements.
ten owned dogs, obese for more than 12 months (body condition score [bcs] of 9; fat mass [fm] = 45.7 ± 1.51%), were studied. these dogs had their weight reduced by 20% (bcs = 8; fm = 33.5 ± 1.92%; p < 0.001), being designated the weight-reduced (wr) group, and then were fed to maintain constant body weight during 150 days (bcs = 8.2; p = 0.6), being designated the maintenance (main) group. a control (ct) group of 10 beagles was also included (bcs = 4.5; fm = 18.3 ± 1.38%; p < 0.01). in all groups the glucose postprandial response test was performed after 12 hours of fasting. blood samples were taken pre-feeding and 5, 10, 15, 30, 45, 60, 120, 180, 240, 300 and 360 minutes after consumption of cooked rice sufficient to provide 6 g of starch/kg body weight. tnf-α and il-6 were measured with a milliplex™ map panel, and insulin and leptin by radioimmunoassay. statistical analysis included paired or non-paired t-tests and wilcoxon tests (p < 0.05). the regimen normalized the meal glucose response: the area under the curve (auc) of glucose for wr was lower than for obese (p < 0.05) and similar to main and ct (p > 0.05). insulin secretion did not normalize immediately, as obese and wr exhibited similar auc of insulin, with higher values than ct (p < 0.05). main, however, presented auc of insulin similar to ct, with lower values than obese and wr (p < 0.01), suggesting that dogs require some time to adapt their metabolism. leptin, tnf-α, and il-6 presented significant reductions after weight loss (p < 0.05), without differences between wr, main and ct (p > 0.05), suggesting an improvement of the pro-inflammatory state consequent to obesity. while studying food base excess (be) modification, methionine intoxication was observed.
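the glucose auc in the postprandial test above is conventionally computed by the trapezoidal rule over the stated sampling times; a sketch with hypothetical glucose values, since the individual curves are not reported:

```python
def auc_trapezoid(times_min, conc):
    """area under a concentration-time curve by the trapezoidal rule."""
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for t0, t1, c0, c1 in zip(times_min, times_min[1:],
                                         conc, conc[1:]))

# sampling times from the abstract; glucose values (mg/dl) are hypothetical
times = [0, 5, 10, 15, 30, 45, 60, 120, 180, 240, 300, 360]
glucose = [90, 95, 105, 120, 140, 135, 125, 110, 100, 95, 92, 90]
print(auc_trapezoid(times, glucose))  # 37757.5
```

group aucs computed this way are what the paired and non-paired t-tests in the abstract compare.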
to a basal kibble dog diet (be = 305 meq/kg; 1.98 g/kg of s), two dosages of ammonium sulphate and methionine were added, resulting in diets with be of 135 meq/kg (4.26 g/kg of s) and −51 meq/kg (6.65 g/kg of s), or be of 142 meq/kg (4.68 g/kg of s) and 4 meq/kg (7.49 g/kg of s), respectively. a 2 × 2 factorial plus a control diet design, resulting in five treatments, and 32 adult healthy beagle dogs were used, in a completely randomized design with six dogs per diet. a 5-d adaptation phase was followed by 3 d of total urine collection (in bottles with 100 mg of thymol). urine was pooled by dog and analyzed for density, volume and ph. food macroelements were determined by standard methods (aoac, 1995) and used for be calculation. the dogs' acid-base status was studied by blood gas analysis of venous blood at 8:00 h (pre-feeding) and 6 hours after the meal. a dose-dependent reduction of urinary ph was verified for both compounds (p < 0.01). blood bicarbonate (r = 0.98; p < 0.01) and blood base excess (r = 0.75; p < 0.01) were highly correlated with food be. acidemia and reduced blood be were verified in diets with be close to zero (higher dose of both compounds, or > 6.6 g/kg of s), resulting in daily or every-other-day vomiting episodes in the dogs. ataxia, seizures, and vomiting were previously described in dogs fed 47 g/kg of methionine, but our results suggest that a much lower value (27.7 g/kg) was toxic and that the safe upper limit should be between this value and 15.7 g/kg (the lower evaluated dose). in people with chronic kidney disease and heart failure, obesity is associated with longer survival times. this association, called the "obesity paradox," has also been recognized in dogs and cats with heart failure. excess weight appears to modulate the serious deleterious effects of muscle loss in these diseases. the purpose of this study was to determine the effects of body condition and body weight changes in dogs with naturally occurring chronic kidney disease (ckd).
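the correlations of food be with blood bicarbonate (r = 0.98) and blood base excess (r = 0.75) reported above are plain pearson coefficients; a minimal stdlib sketch, demonstrated on toy data rather than the study's measurements:

```python
import math

def pearson_r(x, y):
    """pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# toy data: a perfectly linear relationship gives r = 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```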
dogs diagnosed with ≥ iris stage ii ckd between 2008 and 2009 at iowa state university and tufts cummings school of veterinary medicine were eligible for the study. dogs < 1 year of age and those with acute renal failure or suspected congenital renal diseases were excluded. medical records were reviewed using a standardized data form, and data were collected for initial body weight and body condition score (bcs, 1-9 scale), clinicopathologic values, changes in body weight and bcs, comorbidities, and treatments. dogs were classified as underweight (bcs = 1-3), moderate weight (bcs = 4-6), or overweight (bcs = 7-9). a change in body weight was defined as > 0.2 kg. survival times were determined for all dogs that were discharged from the hospital and lived > 1 day. associations between survival and bcs or body weight changes were analyzed using cox proportional hazards models. one hundred two dogs were enrolled in the study. at the time of diagnosis, 21 dogs were classified as iris stage ii, 57 dogs were stage iii and 24 dogs were stage iv. median body weight at baseline was 18.7 kg (range, 2.4-50.1 kg). of dogs with body condition scores recorded (n = 74), 13 were underweight (18%), 51 were moderate (69%), and 10 were overweight (14%). of dogs that had at least two body weights recorded over the course of their disease, 22 gained weight, 46 lost weight, and 10 had no change in weight. changes in body weight were not associated with survival; however, bcs at the time of diagnosis was significantly associated with survival. dogs classified as underweight had a significantly shorter survival time compared to both moderate (p < 0.001) and overweight dogs (p < 0.001). these results suggest that body condition is an important consideration in dogs with acquired chronic kidney disease. further studies are warranted to evaluate the relationship between obesity and longer survival in dogs with ckd.
protein restriction is the cornerstone of dietary management of kidney disease. the national research council recommends 20% crude protein, and the association of american feed control officials (aafco) recommends a minimum of 26% crude protein, for maintenance of healthy adult cats. the protein requirement is unknown for adult cats with kidney disease. most commercially produced cat foods for adult maintenance contain 30% or more crude protein on a dry matter basis. a typical therapeutic food for cats with kidney disease contains about 28% crude protein. the objective of the present study was to investigate whether dietary crude protein at 28.5% would be adequate for maintenance of adult cats with impaired kidney function. seven adult cats, 3 female and 4 male, aged 5 to 13.7 years (mean: 8.1 years), were used in the study. all cats had elevated serum creatinine concentration (> 1.6 mg/dl; range: 1.61-2.10 mg/dl) and reduced glomerular filtration rate (mean: 32% reduction; range: 12-60% reduction) during the study. they did not have other systemic diseases, e.g., hyperthyroidism, at the beginning of the study. cats were fed an expanded dry food made with ingredients commonly used in commercial dry cat foods. the food contained 28.5% crude protein (chemical analysis) and 4366 kcal/kg (calculated) on a dry matter basis, or 65.3 g protein/1000 kcal. each essential amino acid in the food was at least 130% of that recommended by aafco. other nutrients in the food also exceeded aafco's recommendations for maintenance of adult cats. cats were fed the food for 30 weeks. lean body mass (dual-energy x-ray absorptiometry; hologic, inc., ma) and serum albumin concentration were measured periodically to monitor the protein status of the cats. the average lean body mass (mean ± sd) was 3.63 ± 0.82 kg, 3.72 ± 0.85 kg, 3.74 ± 0.84 kg, and 3.75 ± 0.89 kg in weeks 3, 11, 17, and 30 of the study, respectively.
paired t-tests did not detect a statistical difference (p > 0.05) when comparing the lean body mass in week 3 versus weeks 11, 17, and 30, respectively. serum albumin concentrations were within the normal reference range during the study (mean ± sd: 3.04 ± 0.35%, 2.80 ± 0.35%, 3.04 ± 0.32%, and 3.07 ± 0.40% in weeks 3, 11, 17, and 30, respectively). these data show that 28.5% dietary crude protein in a dry food with 4366 kcal/kg on a dry matter basis, or 65.3 g protein/1000 kcal, is adequate for maintenance of cats with impaired kidney function. in humans, several disease conditions exist that involve abnormal patterns of polyunsaturated fatty acids, and similar abnormalities may be present in companion animals. indeed, there have been reports of decreased plasma arachidonic acid and reduced delta-6 desaturase activities in dogs with atopy and other skin disorders. the present study investigated serum fatty acid profiles in dogs and cats presented to the texas a&m university veterinary teaching hospital clinical pathology laboratory over the past one-year period. results were compared with normative data generated from dogs and cats in earlier feeding studies. sera used were residual samples submitted to the laboratory for other diagnostic procedures and stored frozen for no more than 2 months after collection. the samples were grouped according to presenting disorders involving liver, kidney, digestive, and cardiac diseases. total lipids were extracted using chloroform:methanol (2:1 v/v) and fatty acid methyl esters were prepared for capillary gas chromatography. the relative percentage distribution of individual serum fatty acids for each animal was then compared with average normative serum phospholipid fatty acid values (dogs, n = 44; cats, n = 29) by calculating the ratio of the value in the diseased individual to the normal mean value, used as an index of normalcy. normalcy ratios were then plotted on a logarithmic scale with normal at 1.0.
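the paired t-test used for the lean body mass comparison above reduces to a t statistic on the within-subject differences; a stdlib sketch with hypothetical per-cat values (the abstract reports only group means):

```python
import math

def paired_t(x, y):
    """paired t statistic for two repeated measurements on the same
    subjects (here, lean body mass at two study weeks)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# hypothetical lean-mass pairs (kg) for 7 cats, weeks 3 vs 30
wk3 = [3.1, 4.2, 2.9, 3.8, 4.5, 3.3, 3.6]
wk30 = [3.2, 4.1, 3.0, 3.9, 4.6, 3.4, 3.7]
print(round(paired_t(wk3, wk30), 2))  # -2.5
```

the statistic is then compared against the t distribution with n − 1 degrees of freedom to obtain the p value quoted in the abstract.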
the ratio was then compared to changes greater than 3, 2 and 1 standard deviations from the normal mean values. in this way a graphical presentation of the resultant values was obtained. although the animals had been fed various commercial diets and some home-prepared foods, a number of noteworthy patterns emerged from this analysis. dogs showed increased linoleic acid, decreased arachidonic acid, increased total monounsaturated and decreased saturated fatty acids at p < 0.001; oleic acid was increased at p < 0.01. remarkably, these findings were similar for all canine disease categories evaluated (n = 30, heart; n = 17, kidney; n = 28, liver; and n = 28, digestive disorders). in cats, a slight decrease in arachidonic acid and a large decrease in 22:0 were observed, but only in heart disorders. by contrast, modest elevations of arachidonate were observed in the kidney, liver, and digestive disease groups, but at p < 0.01. sample sizes of the feline sera were considerably smaller (range of 4-13 per group). a limitation of this analysis is that variability of the normal data may exist depending on the diet fed, making comparisons less reliable. however, these preliminary data suggest that metabolic diseases of dogs may depress plasma arachidonic acid independent of the diet fed, suggesting either reduced conversion from linoleic acid or increased utilization of arachidonate for eicosanoid production during times of metabolic stress. conversely, in cats, increases in arachidonic acid may be associated with dietary arachidonate or other mechanisms. additional studies to verify these findings are warranted. the objective of this study was to determine whether or not l-alanyl-l-glutamine (ala-gln) supplementation in dogs with parvoviral enteritis improves the survival rate and ameliorates clinical signs without side effects. this randomized, double-blinded, placebo-controlled clinical trial included 39 client-owned dogs.
the dogs were randomly assigned into two groups and administered ala-gln solution (dipeptiven; 0.4 g/kg) or an equivalent volume of placebo orally twice a day. all of the dogs (ala-gln group [n = 20] and placebo group [n = 19]) received standard treatment while hospitalized and were monitored daily according to a clinical scoring system and diagnostic evaluation for 11 days. among the 39 dogs, 17 (ala-gln-treated group [n = 8] and placebo group [n = 9]) were vaccinated and 22 (ala-gln-treated group [n = 12] and placebo group [n = 10]) were not vaccinated. the population consisted of 29 purebred and 10 mixed-breed dogs, with a mean age of 12.2 ± 2.2 weeks. the survival data were compared statistically by means of a log-rank test for the kaplan-meier survival curves. the clinical scores of ala-gln-treated dogs improved significantly relative to the placebo group. there was a significant difference between the two groups in the survival distribution (p = 0.038); specifically, 3 of the ala-gln-treated dogs (15.0%) died, whereas 8 of the dogs in the placebo group (42.1%) died. no side effects were associated with the administration of ala-gln. these results suggest that the oral administration of ala-gln is effective in improving clinical signs and survival rate in dogs with parvoviral enteritis. bleeding disorders, thrombocytopenia and alterations in platelet function have been documented in humans receiving lipid-containing parenteral nutrition formulations. despite a lack of evidence in the veterinary literature, it is believed that parenteral lipids are contraindicated in critical illness when the development of bleeding disorders is likely. the objective of this study was to determine if there is an in vitro effect on platelet function and thromboelastography (teg) in normal dogs with varying concentrations of a 20% soybean oil emulsion (intralipid). twelve clinically healthy dogs were used for this study.
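the kaplan-meier curves compared by log-rank in the parvovirus trial above can be sketched with a minimal estimator; the event times below are hypothetical, since the abstract reports only the 11-day follow-up and the death counts (8/19 in the placebo group):

```python
from itertools import groupby

def kaplan_meier(times, events):
    """kaplan-meier survivor function from (time, event) data;
    event = 1 for death, 0 for censoring at end of follow-up.
    returns (time, S(t)) pairs at each death time."""
    data = sorted(zip(times, events))
    at_risk, s, curve = len(data), 1.0, []
    for t, grp in groupby(data, key=lambda te: te[0]):
        grp = list(grp)
        deaths = sum(e for _, e in grp)
        if deaths:
            s *= 1.0 - deaths / at_risk
            curve.append((t, s))
        at_risk -= len(grp)
    return curve

# hypothetical placebo-group pattern: 8 deaths on days 2-5, 11 survivors
# censored at day 11
times = [2, 2, 3, 3, 4, 4, 5, 5] + [11] * 11
events = [1] * 8 + [0] * 11
print(kaplan_meier(times, events)[-1])  # final survival ~ 11/19 = 0.579
```

the log-rank test then compares the observed death counts in each group against those expected under a common survivor function.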
whole blood platelet aggregation, using adp and collagen agonists, was measured using multiple-electrode aggregometry in hirudinated blood with final lipid concentrations of 0, 1, 10, and 30 mg/ml. the teg parameters r, k, α-angle, and maximum amplitude (ma) were evaluated from citrated whole blood with the same final lipid concentrations as for platelet aggregation. there was no significant difference between groups with collagen-induced platelet aggregation. there was a significant increase in the area under the curve (auc) with adp-induced aggregation at a lipid concentration of 30 mg/ml (p = 0.027). the ma was significantly reduced at both the 10 mg/ml (p < 0.001) and 30 mg/ml (p < 0.001) lipid concentrations. there was no statistical difference between groups for the other teg parameters. while platelet aggregation appeared enhanced at the highest concentration evaluated, this concentration is not clinically relevant. the reduction in ma seems discordant, but both fibrinogen and platelets contribute to the ma; therefore the higher lipid concentrations may be interfering with fibrinogen kinetics or the fibrinogen-platelet interaction. in vivo studies are indicated to determine if any of these changes are clinically significant. rosiglitazone is a peroxisome proliferator-activated receptor gamma (pparg) agonist and an fda-approved anti-diabetic agent in humans that has been investigated for its ability to reduce tumor cell growth. specifically, the combination of rosiglitazone and carboplatin has demonstrated enhanced tumor control. the purpose of this study was to determine the peak plasma concentrations and side effect profile of rosiglitazone after oral administration in dogs with spontaneously occurring cancer. all dogs received carboplatin intravenously concurrently with oral rosiglitazone. ten cancer-bearing dogs with normal pre-treatment hepatic and renal function were enrolled.
complete pre-treatment hematological and biochemical parameters were available in ten dogs and post-treatment parameters in nine dogs. peak plasma concentrations varied with dose, ranged from 150.3-959.4 ng/ml, occurred between 30 minutes and 4 hours post administration, and declined rapidly after the peak. the dose-limiting toxicity was hepatic, at a dose of 9 mg/m2. there were one grade iii and two grade i alt elevations, and one grade iii ast elevation, noted. no changes in total bilirubin, alkaline phosphatase, or ggt values were noted. blood glucose values remained within normal limits. mild, self-limiting gastrointestinal and hematologic toxicities were observed when rosiglitazone was administered in combination with carboplatin. based on this study, the recommended dose of rosiglitazone in cancer-bearing dogs with normal hepatic function is 6 mg/m2 orally once daily. side effects of the combination appear similar to side effects noted with carboplatin alone. further study is needed to determine the efficacy of this combination and whether more frequent dosing is required to maintain plasma concentrations. carboplatin has shown little activity as a single agent for the treatment of canine transitional cell carcinoma (tcc). however, gemcitabine has shown synergism with carboplatin in human cell lines. the purpose of this study was to evaluate the activity of gemcitabine against canine tcc cell lines alone or in combination with carboplatin. we hypothesized that gemcitabine in combination with carboplatin would have synergistic effects in vitro. the results of this study could provide a rationale for treatment of canine tcc with the combination of these drugs. the tcc cell lines tcc-kiss, tcc-knapp-js, tcc-axa, tcc-hxc, and tcc-sh were treated with gemcitabine, carboplatin, or the combination. cell proliferation was assessed using the cyquant assay, cell cycle was evaluated using propidium iodide staining, and apoptosis was assessed by measuring caspase-3/7 activation.
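the mg/m2 doses quoted in this and the following abstracts convert to absolute doses through body surface area; a sketch using the widely used canine convention bsa (m²) = 10.1 × (weight in g)^(2/3) × 10⁻⁴, which is a standard formula rather than something stated in the abstract:

```python
def canine_bsa_m2(weight_kg, k=10.1):
    """approximate canine body surface area (m2) from weight, using the
    common convention bsa = k * weight_g^(2/3) * 1e-4 with k = 10.1."""
    return k * (weight_kg * 1000.0) ** (2.0 / 3.0) * 1e-4

def dose_mg(dose_mg_per_m2, weight_kg):
    """absolute dose (mg) for a bsa-based dose rate (mg/m2)."""
    return dose_mg_per_m2 * canine_bsa_m2(weight_kg)

# rosiglitazone at the recommended 6 mg/m2 for a hypothetical 20 kg dog
print(round(dose_mg(6, 20), 1))  # 4.5 mg
```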
synergy was quantified by combination index analysis using compusyn software. treatment of canine tcc cell lines with carboplatin or gemcitabine decreased cell proliferation and induced cell cycle arrest and apoptosis. when tcc cell lines were treated with gemcitabine and carboplatin in combination at therapeutically relevant concentrations (gemcitabine < 100 µm, carboplatin < 250 µm), a significant decrease in cell proliferation was observed compared to gemcitabine or carboplatin alone; the drug combination was synergistic in 3 of 5 cell lines and additive in the remaining 2 lines. gemcitabine exhibits biologic activity against canine tcc cell lines, and carboplatin combined with gemcitabine exhibits synergistic activity at biologically relevant concentrations. our results support further evaluation of these drugs in dogs with tcc to determine the clinical efficacy of this combination. metronomic chemotherapy has been shown in murine models and humans to improve tumor control by inhibiting tumor angiogenesis and suppressing regulatory t cells (treg). treg are a subset of t lymphocytes demonstrated to be increased in humans and dogs with cancer and are thought to suppress cellular immune responses against tumors. the purpose of this study was to determine whether metronomic cyclophosphamide therapy depletes treg and/or exhibits antiangiogenic activity in dogs with soft tissue sarcoma. client-owned dogs with histologically confirmed grade i or ii soft tissue sarcoma were administered cyclophosphamide at 12.5 mg/m2 or 15 mg/m2 orally once daily for 28 days. whole blood and tumor biopsies were obtained on days 0, 14, and 28. flow cytometric analysis of blood was performed to assess changes in t lymphocyte subsets, including cd4+ and cd8+ cells as well as cd4+foxp3+ treg. tumor microvessel density (mvd) was assessed by performing immunohistochemistry for cd146.
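compusyn quantifies synergy with the chou-talalay combination index, ci = d1/dx1 + d2/dx2, where dx is the dose of each drug alone producing the same effect as the combination; ci < 1 indicates synergy, ci = 1 additivity, and ci > 1 antagonism. a minimal sketch with hypothetical doses, since the study's measured dx values are not reported:

```python
def combination_index(d1, dx1, d2, dx2):
    """chou-talalay combination index for a two-drug combination:
    d1, d2 are the combined doses; dx1, dx2 are the doses of each drug
    alone that produce the same effect level."""
    return d1 / dx1 + d2 / dx2

# hypothetical: 25 µm gemcitabine + 50 µm carboplatin matching the effect
# of 100 µm gemcitabine alone or 250 µm carboplatin alone
ci = combination_index(25, 100, 50, 250)
print(ci)  # 0.45 -> synergistic
```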
five dogs were enrolled in the 12.5 mg/m2/day dose cohort and six dogs were enrolled in the 15.0 mg/m2/day dose cohort. in patients that received cyclophosphamide at 12.5 mg/m2/day, the mean number of treg decreased from day 0 to 28, but there was no change in the mean percentage of treg or in mvd. for patients that received 15.0 mg/m2/day, both the mean number and percentage of treg as well as mvd decreased over the 28-day period. cyclophosphamide at 15.0 mg/m2/day or greater selectively depletes treg and inhibits angiogenesis in dogs with soft tissue sarcoma. arsenic trioxide (ato) is used to treat leukemias, multiple myeloma, and relapsed lymphoid malignancies in humans; its use has not been explored in veterinary oncology. prior therapy with glucocorticoids decreases the likelihood and duration of remission for dogs with lymphoma treated with chemotherapy. we hypothesized that ato would re-sensitize glucocorticoid-resistant canine lymphoma cells to glucocorticoid-induced apoptotic death. the osw canine lymphoma cell line was cultured with 200 µm dexamethasone. remaining viable dividing cells were considered resistant. resistant cells were exposed to 0.01 µm and 0.02 µm ato without dexamethasone, after which cells were washed and re-exposed to 200 µm dexamethasone. after 24, 48 and 72 hours of dexamethasone exposure, cells were counted using trypan blue stain. apoptosis was assessed by tunel assays on cytospin preparations collected at 24, 48, and 72 hours from ato-exposed and control groups. statistical analysis was performed using one-way anova and tukey's test. the proportion of dead cells increased over time in both the 0.01 µm and 0.02 µm ato-exposed groups. the proportion of dead cells was greater for the 0.01 µm ato (p < 0.001) and 0.02 µm (p < 0.001) groups compared to control. apoptosis increased with increasing ato concentration and duration of dexamethasone exposure compared to control.
these results support the effectiveness of ato at re-sensitizing glucocorticoid-resistant canine lymphoma cells to apoptotic death following re-exposure to glucocorticoids. ongoing gene expression studies aim to elucidate the mechanism. additional studies to determine whether this effect is seen with other chemotherapeutic agents are warranted. lymphoma is the most common hematopoietic tumor of dogs. protein disturbances may be associated with this disease, including monoclonal gammopathies in a low percentage of cases. serum protein electrophoresis (spe) is routinely used to aid diagnosis of various canine diseases, including lymphoma when total protein concentration is elevated. the purpose of this study was to compare spe changes in lymphoma patients without elevated total protein with a population of healthy dogs. agarose gel electrophoresis was performed on residual serum from 17 healthy control dogs and 21 untreated dogs with multicentric lymphoma (stage iii-v) after measuring total protein (tp) using the biuret method. densitometric traces of the protein bands were obtained using computer software (totallab 100), and the albumin, alpha-1, alpha-2, beta-2 and gamma globulin subfractions were identified by visual inspection. the total protein concentration, the number of subfractions and the relative and absolute protein subfraction concentrations were then compared statistically between the two populations. in lymphoma dogs, tp, absolute albumin, beta-2 and gamma globulin concentrations and both relative and absolute concentrations of the alpha-1 globulins were significantly lower; however, relative and absolute alpha-2 globulin concentrations were significantly elevated. no monoclonal gammopathies were identified in any of the dogs, and not every patient with lymphoma had the above changes in its electrophoretogram.
this study has demonstrated that significant changes occur in the albumin and globulin fractions of canine lymphoma patients despite no obvious increase in tp. further investigation is required to identify the proteins responsible for these changes. it is well known that immunophenotype has prognostic value for the outcome of canine lymphoma, with t-cell lymphomas having a worse prognosis than b-cell lymphomas. the recent advent of flow-cytometric techniques allows easy detection of many different markers on lymphoma cells, and can therefore not only distinguish between t and b cells but also detect possible aberrations in immunophenotype. in human oncology, although some controversy persists, it seems that non-hodgkin's lymphomas and acute leukemias carrying aberrations have a worse prognosis. the aim of this study was to evaluate the role of immunophenotype aberration in canine high-grade lymphoma with respect to outcome and the time span to achieve complete response under chemotherapy. samples of bone marrow, blood and lymph node suspensions from twenty-three dogs were evaluated with flow cytometry. eleven dogs had aberrant expression on neoplastic lymphocytes and twelve were non-aberrant. the most common aberrations found were positivity for cd34, biphenotypes, double expression of t antigens (cd4+, cd8+), and diminished expression of cd45. all dogs were treated with a chop-based protocol. there was a significant difference in the time to achieve response to chemotherapy (partial or complete): 12/12 non-aberrant lymphomas went into cr or pr after the first treatment (l-asparaginase), while aberrant lymphomas needed more than 2 treatments to reach cr or pr. there was a trend toward a prolonged disease-free interval with non-aberrant versus aberrant lymphomas, although it was not statistically significant. aberration of immunophenotype may be a prognostic factor for canine lymphomas, but further studies with larger groups are needed.
class ii major histocompatibility complex expression is a significant and independent predictor of prognosis in human b cell lymphoma. low class ii mhc is consistently associated with poorer outcome. the mechanism underlying this relationship is not clear, but one hypothesis is that high class ii mhc allows for better antigen presentation and tumor-specific immune responses. in this study, we investigated whether class ii mhc expression in canine b cell lymphoma was associated with remission and survival times. a total of 160 patients were categorized by level of class ii mhc, expression of cd34, and cell size of neoplastic b cells. multivariable cox proportional hazards analysis was used to investigate this research question using a randomly selected subset of the data, and the predictive ability of this model was validated on the remaining 1/3 of patient data. results suggested that low class ii mhc expression was associated with decreased times to relapse and death, as is seen in human b cell lymphoma, and that large neoplastic cells were associated with decreased survival time. cd34 expression was not associated with patient outcomes. these findings have implications for the use of dogs to model human lymphomas, for the study of tumor vaccines, and for prediction of mortality in dogs with b cell lymphoma with a high level of specificity. one of the reasons for the failure of canine lymphoma treatment is the resistance of tumor cells to chemotherapy drugs. the major form of this resistance is provided by multidrug-resistance abc transporters. abc transporter proteins comprise a large superfamily of atp-dependent transmembrane proteins that extrude a large variety of drugs from cells. the multidrug resistance phenotype in cancer cells is associated with overexpression of these transmembrane proteins. abcg2, also known as bcrp, is a 655-residue half-transporter protein that protects hematopoietic stem cells against toxic compounds.
the aim of this study was to investigate the expression of bcrp (abcg2) in canine multicentric lymphoma. samples were collected by fine needle aspiration of enlarged lymph nodes from 25 dogs with multicentric lymphoma (stage iii to v) at diagnosis, and from 8 normal lymph nodes (control). dogs that had previously been treated with prednisone or chemotherapy were excluded from the study. quantitative rt-pcr was used to measure the mrna expression level of bcrp, with flnb expression as an endogenous canine reference gene. a wide range of expression values for abcg2 was found in canine multicentric lymphoma. high gene expression was observed in 52% (13/25) of canine lymphomas, but 48% of dogs had lower expression when compared with normal lymph node. gene expression was not associated with clinical staging, complete or partial remission, relapse or survival time. in conclusion, abcg2 was expressed in canine lymph node and canine multicentric lymphoma at diagnosis, and it was not correlated with clinical response. osteosarcoma (osa), the most frequent primary malignant bone tumor of dogs, is both locally aggressive and highly metastatic. prognostic factors for canine osa include tumor location, distant metastatic disease, and serum alkaline phosphatase (alp) concentration. an increased serum alp concentration is associated with poor prognosis; however, the mechanisms underlying this phenomenon are currently unclear. during normal bone development alp may be used as a marker for osteoblasts. additionally, alp is a downstream target of activated canonical wnt/b-catenin signaling. therefore, we hypothesized that increased serum alp would be associated with increased expression of b-catenin in canine osa. the goals of this study were to: (1) characterize and compare cellular alp expression in osa tissue from patients with normal and high serum alp; and (2) assess b-catenin expression in those same patient populations.
we used frozen osa samples collected from patients with either high alp (n = 3) or normal alp (n = 3). total rna was isolated from the frozen tissue, converted to cdna, and analyzed using quantitative reverse-transcriptase polymerase chain reaction (qrt-pcr) with either alp (aim 1) or b-catenin (aim 2) as the target gene. additionally, b-catenin expression was analyzed by western blot. qpcr data for b-catenin and alp expression were normalized to 18s, and relative expression was calculated by the ΔΔct method. the relative expression of cellular alp was higher in high serum alp samples compared to normal serum alp samples: 44.74 ± 22.04 (mean relative expression ± standard deviation; p < 0.001). further, the relative expression of b-catenin was also increased; b-catenin expression of high serum alp samples relative to low serum alp samples was 24.57 ± 11.41 (p < 0.001), which was also seen by western blot. this study begins to clarify the mechanism behind high serum alp in canine osa, and suggests the wnt signaling pathway may be active in this population of patients. further work will focus on elucidating the role active wnt signaling plays in the biology of osa. in the future, the serum alp status of osa patients may help identify patients that would benefit from therapies targeting this pathway. accurate assessment of abdominal lymph node status is of vital importance for appropriate treatment planning and determining prognosis in dogs with apocrine gland adenocarcinoma of the anal sac (agaas). pretreatment knowledge of lymph node status is helpful for determining prognosis and planning the optimal extent of lymphadenectomy. in addition, pretreatment knowledge of lymph node status may help in selecting patients who might benefit from adjuvant chemotherapy and radiation therapy. abdominal ultrasound is currently the most commonly employed test to screen for abdominal lymphadenopathy in dogs with agaas.
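the ΔΔct relative-expression calculation used in the osa study above reduces to a one-liner: fold change = 2^−ΔΔct, where ΔΔct = (ct_target − ct_reference)_case − (ct_target − ct_reference)_control. the ct values below are hypothetical, chosen only to illustrate a fold change of roughly the reported magnitude:

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """relative expression by the delta-delta-ct method, assuming
    perfect (2-fold per cycle) amplification efficiency."""
    ddct = (ct_target_case - ct_ref_case) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** -ddct

# hypothetical ct values: target gene vs 18s reference,
# high-alp vs normal-alp tissue
print(round(fold_change(24.0, 12.0, 28.0, 11.4), 2))  # ~24.25-fold
```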
imaging studies in people indicate that magnetic resonance imaging ( to determine and compare the plasma concentration of cyclophosphamide and its metabolite 4-ohcp within the plasma of lymphoma-bearing dogs being treated with either oral or intravenous cyclophosphamide. in this prospective study, patients were randomly assigned to receive either oral or intravenous cyclophosphamide at a dose of 250 mg/m2. based on an a priori power calculation, eight patients per treatment group were enrolled. plasma was obtained at times 0, 15, 30, 60 minutes, and then at 2, 4, 6, 8, 24 hours post administration for evaluation of 4-ohcp concentrations by liquid chromatography-tandem mass spectrometry (lc-ms/ms). average values were obtained for both cyclophosphamide and 4-ohcp concentrations within the plasma of both groups. the following values were obtained: half-life (hl), time to maximum concentration (tmax), maximum concentration (cmax), and area under the curve (auc). the mann-whitney statistical test was used to compare the groups. the difference in auc for cyclophosphamide was statistically significant (p < 0.05) between the two groups. the auc for 4-ohcp was not statistically significantly different between the groups. the difference in cmax for both cyclophosphamide and 4-ohcp was statistically significant (p < 0.05) between the groups. although the auc for cyclophosphamide differed significantly between the two groups, the auc for the active metabolite 4-ohcp was not different when administered intravenously or orally. thus drug exposure to the active metabolite of cyclophosphamide is the same when administered intravenously or orally. previously the percentage of successful intraosseous (io) catheter insertions, insertion times, and ''ease of use'' scores using the ez-io g3 power driver by a wide spectrum of novice participants in feline cadavers were evaluated. 
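a minimal non-compartmental sketch of the kind of summary reported in the cyclophosphamide study above (cmax, tmax, and auc by the linear trapezoidal rule); the sampling times follow the study design, but the concentrations are invented:

```python
def pk_summary(times_hr, conc):
    """Non-compartmental summary of a concentration-time profile:
    Cmax, Tmax, and AUC over the sampled interval by the linear
    trapezoidal rule."""
    cmax = max(conc)
    tmax = times_hr[conc.index(cmax)]
    auc = sum((t2 - t1) * (c1 + c2) / 2
              for t1, t2, c1, c2 in zip(times_hr, times_hr[1:],
                                        conc, conc[1:]))
    return cmax, tmax, auc

# sampling schedule from the study design (hours); concentrations invented
t = [0, 0.25, 0.5, 1, 2, 4, 6, 8, 24]
c = [0, 40, 80, 100, 60, 30, 15, 8, 1]
cmax, tmax, auc = pk_summary(t, c)
```

the groups would then be compared on these per-dog summaries (e.g. auc) with a rank-based test such as mann-whitney, as in the abstract.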
novice users' mean io catheter insertion time using the ez-io g3 driver was also compared to the mean iv catheter insertion time in normovolemic feline and canine patients presented to the western college of veterinary medicine (wcvm) small animal hospital. novice users included 40 wcvm personnel (8 technicians, 11 veterinary students, 5 interns, 8 residents, 8 clinicians). after watching a 5-minute ez-io g3 training video, each participant inserted 3 io catheters using the ez-io g3 driver. site (proximal humerus or trochanteric fossa of the femur) and side of cat (right or left) were randomized for each attempt for each participant. a 15 gauge x 15 mm long needle and a 15 gauge x 25 mm needle were used for io catheter insertion in the humerus and femur, respectively. participants then graded the ''ease of use'' of the ez-io g3 device on a visual analog scale (vas) that was converted to a 5-point scale. twenty-six iv catheter insertions in normovolemic feline and canine patients performed by wcvm small animal hospital personnel (6 technicians, 16 veterinary students, 1 intern, 1 resident, 2 clinicians) were then timed and compared to the mean io catheter insertion time in feline cadavers by study participants using the ez-io g3 device. the io catheter was inserted correctly on every attempt by 70% (28/40) of participants. no difference was found between participant groups for mean io catheter placement confirmation percentage (p = 0.72). percentage of io catheter ''slippage off the bone'' at the time of placement did not vary across participant groups (p = 0.12). mean io catheter insertion times were all less than 10 seconds and did not differ significantly as a function of attempt number (p = 0.93) or as a function of participant group (p = 0.78). participants rated the ez-io g3's ''ease of use'' favorably and subjective scores did not differ across participant groups with varying levels of clinical experience (p > 0.2). 
compared to the mean insertion time for iv catheterization (188 sec), mean io catheter insertion by participants using the ez-io g3 (8 sec) was significantly faster (p = 0.001). regardless of their level of clinical experience, participants rated the ez-io g3 device favorably in terms of its ''ease of use'' and their willingness to use the device in the future. regardless of their level of clinical experience, study participants successfully placed io catheters using the ez-io g3 device and did so significantly faster than the reported iv catheter insertion time in normovolemic feline and canine patients in the wcvm small animal hospital. intraosseous catheterization using the ez-io g3 has the potential to provide very rapid vascular access and is a skill that can be easily learned. previously presented at the western college of veterinary medicine undergraduate poster competition. multicavitary effusion is a common cause of presentation for dogs to emergency medical centers. the goal of this study was to identify common underlying causes of multicavitary effusion as well as determine their relative importance. a retrospective analysis of 43 cases of multicavitary effusion admitted to the icu of a tertiary referral center (ontario veterinary college) from 2007 to 2010 was performed. twenty-three different breeds, with golden and labrador retrievers (25.6% and 14%, respectively) being most commonly seen, were included in the study. ages ranged from 2 to 14 years with a median age of 9 years and a mean of 8.1 years. 69.8% of cases were males (30/43 cases). most common presenting signs included lethargy (48.8%), anorexia (32.6%), vomiting (21%) and dyspnea (21%). cavitary effusion was detected by either ultrasonography (pericardial, pleural or abdominal) or radiographs (pleural). bicavitary effusion was present in 28 cases (65.1%) whereas 15 cases (34.9%) had tricavitary effusion. 
neoplasia was found to be the most common underlying cause overall (51.2%), with hemangiosarcoma being the leading type (40.9% of neoplasia cases), followed by congestive heart failure (9.3%), gastrointestinal lymphangiectasia (4.7%), peritonitis/pancreatitis (4.7%), cirrhotic liver disease (2.3%) and acute renal failure (2.3%). in 11 cases (25.6%), no underlying cause could be found. of these, 4 (9.3% of all cases) were diagnosed as having idiopathic pericardial effusion. taken together, these findings suggest a strong association between multicavitary effusion and diseases carrying a guarded prognosis in dogs. infection control practices in veterinary clinics and hospitals are becoming increasingly important, with rising client expectations, growing concern about the spread of antimicrobial-resistant pathogens, and the potential for zoonotic transmission of disease. surgical patients are at increased risk of developing infections, and can serve as sources of these pathogens for other animals and people with whom they have contact within and outside the clinic. taking all reasonable precautions to reduce the risk of surgical site infections, beginning with preoperative preparation of the surgeon and patient, is therefore an important part of any infection control program. while guidelines are available for preoperative preparation procedures, there has been no objective investigation of compliance with these guidelines in veterinary practices. the objectives of this pilot study were to describe a range of preoperative hand scrub and surgical site preparation practices in veterinary clinics, and to determine if there were any areas that consistently require improvement. observation of preparation practices was performed in each of ten clinics over 9-14 days using 2-3 small wireless surveillance cameras. data were coded for 148 surgical patients, and 31 surgeons performing a total of 190 hand scrubs. 
patient hair removal was most commonly performed after induction of the animal (129/140, 92%) and using clippers (114/137, 83%). steps in surgical site aseptic preparation ranged from 1-4. contact time with soap ranged from 18-369s (mean 82s, median 53s), and with alcohol from 3-220s (mean 41s, median 30.5s). application of alcohol or antiseptic using a ''cleanest to dirtiest'' pattern was infrequent (29/66 (44%) and 11/83 (13%), respectively). potential contamination of the surgical site occurred most frequently when the animal was moved to the surgery table after initial preparation (40/58, 69%). preoperative alcohol hand rub was used in 2/10 facilities, but soap and water hand scrub was still more commonly used even at these clinics. proximal-to-distal scrubbing was noted in 95/142 (67%) of soap and water scrubs. contact time during surgeon hand preparation ranged from 7-529s (mean 144s, median 124s) for soap and water and from 4-123s (mean 34s, median 25s) for alcohol-based hand rub. approximately 75% of the variation in contact time was due to inter-surgeon variation. no significant changes in practices were identified over the course of the observation period. some preoperative preparation practices were fairly consistent between clinics in this study, while others varied considerably. contact times with preparatory solutions were often far shorter than recommended, and there was a high frequency of non-sterile contact with the surgical site during movement of patients to the surgical suite. the camera system used to perform this study did not have a significant time-dependent effect on the behavior of participants, and could be useful for performing similar field-based observational studies in the future. this prospective randomized study compared the percentage of successful intraosseous (io) catheter insertions, insertion times, and ''ease of use'' scores using the ez-io g3 power driver to manual io catheterization in feline cadavers. 
the io catheter insertion time in cadavers using the ez-io g3 device was also compared to iv catheter insertion time in normovolemic feline and canine patients. after a purposely limited training period, a preclinical veterinary student was timed and video-recorded as she performed 20 io catheter placements in feline cadavers (10 io insertions by placing an illinois needle manually and 10 io insertions using the ez-io g3). order of technique (manual or ez-io g3), site of io placement (proximal humerus or trochanteric fossa of the femur), and side of cat (right or left) were randomized for each attempt. when using the ez-io g3, a 15 gauge x 15 mm long needle and a 15 gauge x 25 mm needle were used for io catheter insertion in the humerus and femur, respectively. after each attempt, the student graded the ''ease of use'' of each technique on a visual analog scale (vas) that was converted to a 5-point scale. twenty-six iv catheter insertions in normovolemic feline and canine patients performed by western college of veterinary medicine (wcvm) small animal hospital personnel (6 technicians, 16 veterinary students, 1 intern, 1 resident, 2 clinicians) were then timed and compared to the student's mean io catheter insertion time using the ez-io g3. median io catheter insertion times for the 2 techniques were significantly different (manual io technique, 24 sec; ez-io g3, 5 sec) (p < 0.001); the manual method took 48 seconds longer (95% confidence interval of 28 to 67 seconds) than the ez-io g3 method. insertion time was more variable for the manual technique than for the ez-io g3. percentage of catheter ''slippage off the bone'' and extravasation around the inserted catheter were significantly higher for placement of the manual io catheter compared with placement of the ez-io g3 catheter (p < 0.001). the student's subjective ratings were more favorable and more consistent for the ez-io g3 technique compared to the manual technique for io catheter insertion. 
compared to the mean insertion time for iv catheterization in the wcvm small animal hospital, io catheter insertion by the student using the ez-io g3 was significantly faster (iv catheter, 188 sec; ez-io g3 io catheter, 7 sec) (p < 0.001). intraosseous catheter insertion using the ez-io g3 was significantly faster, less traumatic, more user-friendly than, and as effective as io catheter placement using the manual technique. vascular access via io catheter insertion using the ez-io g3 device may also be faster than iv catheter insertion. previously presented at the western college of veterinary medicine undergraduate poster competition. computed tomography (ct) has been widely investigated and applied as a means for non-invasive quantitative bone mineral determination in human medicine. the aim of this study was to assess age-related changes and anatomic variation in bone mineral density (bmd) using quantitative ct in normal cats. seventeen normal cats were included in this study and divided into the following 3 age groups: < 1 year (n = 4); 2-5 years (n = 10); and > 6 years (n = 3). a computed tomographic scan of each vertebra from the 12th thoracic to the 7th lumbar spine, and of the pelvis, was performed with a bone-density phantom (50, 100, and 150 mg/cm³ calcium hydroxyapatite; cirs phantom®). on the central transverse section, an elliptical region of interest (roi) was drawn to measure the mean hounsfield unit value. those values were converted to equivalent bmd by use of the bone-density phantom and linear regression analysis (r² > 0.95). the mean bmd value of the thoracic vertebrae (651.3 ± 100.4 mg/cm³) was significantly higher than that of the lumbar vertebrae (520 ± 119.5 mg/cm³). the maximum bmd occurred at the t12, t13, and l1 levels in all age groups. there was a statistically significant difference in the mean bmd value among the 3 age groups at the t12 (p < 0.001), t13 (p < 0.001), and l4 (p = 0.013) levels, respectively. 
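the phantom-based hounsfield-unit-to-bmd conversion described above can be sketched with an ordinary least-squares fit; the phantom densities are those stated in the abstract (50, 100, 150 mg/cm³), while the measured hu values are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# phantom inserts: known densities (mg/cm^3) vs mean HU measured in
# each insert's ROI (HU values invented)
density = [50, 100, 150]
hu = [120, 260, 400]
slope, intercept = fit_line(hu, density)

def hu_to_bmd(hu_value):
    """Convert a mean HU value from a vertebral ROI to equivalent BMD."""
    return slope * hu_value + intercept
```

in the study, a fit with r² > 0.95 across the three phantom inserts justified applying this calibration line to the vertebral roi measurements.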
in addition, there was no significant difference between the mean bmd values of the left and right iliac bodies (485.1 ± 140.4 mg/cm³ and 482.1 ± 148.2 mg/cm³, respectively). the present study suggests that age-related changes and anatomic variation in bmd values should be considered when assessing bmd using quantitative ct in cats with bone disorders. dynamic contrast-enhanced computed tomography (dce-ct) is a rapid and widely available method of cerebral perfusion imaging. however, there is no established reference value of cerebral blood flow (cbf) measured by dce-ct according to a dog's age. the purpose of this study was to identify the correlation between regional cbf and aging in clinically normal dogs using dce-ct. fourteen dogs with no evidence of hemodynamic disorders or central nervous system dysfunction were included in this study. dogs were assigned to the following 3 age groups: < 1 year (group 1); 3-6 years (group 2); and > 10 years (group 3). dce-ct scans were performed at the level of the third ventricle and mesencephalic aqueduct. cbf in the gray and white matter was calculated using stroketool-ct® software. the overall mean ± standard deviation quantitative estimates for regional cbf in clinically normal dogs were 67.8 ± 12.9 ml/min/100 g, 44.7 ± 4.1 ml/min/100 g, and 31.0 ± 3.4 ml/min/100 g in groups 1, 2, and 3, respectively. there was no significant regional cbf difference between the right and left sides of the brain in each group. also, a statistically significant difference in regional cbf was observed between groups 2 and 3 (p < 0.001). thus, aging affects the regional cbf in normal dogs, and these values should be considered when assessing the results of dce-ct. according to several clinical behavior guidelines, ''toileting'' type inappropriate urination (i.e. large amounts of urine deposited on horizontal surfaces) can arise in cats suffering from a medical problem (typically lower urinary tract disease). 
by contrast, ''spraying'' type behaviour (i.e. possibly smaller amounts of urine deposited on vertical areas) is more typically associated with anxiety brought about by a threat to local resources, arising from either a change in the physical environment or a threat to these resources from another cat. however, there is some evidence that ''sprayers'' may also be presented with a medical problem, which might be linked to the disease (e.g. painful voiding associated with crystalluria may lead to a standing posture being adopted and small amounts eliminated at a given time). this might be associated with an apprehensive state or simply a co-morbid state. as part of a larger research project aimed at investigating behavioral and physical aspects of cats presented with inappropriate urination, owners of 14 ''spraying'' and 18 ''toileting'' cats, with appropriate control subjects from the same households, were recruited through local media coverage and the internet. the case-control dyads were brought by the owners to the veterinary hospital of the university of são paulo, at the same time, for a medical work-up (i.e. physical examination, complete blood count, biochemical profile, urinalysis, urine culture and abdominal ultrasound). no significant differences between the ''sprayers'' and ''toileters'' regarding the occurrence of medical problems were found. both groups had a similar proportion of cats affected by medical illnesses (sprayers: 35.7%, toileters: 44.4%; chi², p = 0.618), directly or indirectly related to the urinary system (e.g. diabetes, chronic kidney disease). in both groups, control cats also had a relatively high occurrence of medical concerns (21.4% and 27.8%, respectively, for each control group). these results emphasize the importance of careful medical evaluation of cats presented for a urinary housesoiling problem. 
the relatively high prevalence of medical concerns among apparently healthy cats in multi-cat households may have arisen, at least in part, as a result of an inability/failure of owners to monitor individuals, thus allowing some early signs to pass unnoticed. the way in which medical and behavioral elements are linked (if at all) remains unknown but deserves further investigation. considered a semi-social species, domestic cats appear to be highly sensitive to the effects of social stress, especially when living in high-density populations. cats are capable of adapting to living in groups; nonetheless, they do not appreciate living in close proximity with others in an environment lacking good opportunities for escaping and hiding. this study aimed at testing the following hypotheses: (a) owners' perceived quality of life affects cats' global levels of stress; (b) cats' global levels of stress are influenced by cats' personality; (c) cats' living style (single housing versus large-group housing) affects stress levels in cats. to our knowledge, this is the first study investigating stress levels of domestic owned cats under natural conditions, through measurement of faecal glucocorticoid metabolite concentrations, and taking into consideration cat personality, cat living style and owner's subjective life quality. in this study, adrenocortical activity, as a valuable physiological indicator of emotional stress, was evaluated through the measurement of faecal glucocorticoid metabolites in fourteen single and sixteen in-group housed cats. cat personality as well as owners' life quality was evaluated by self-reported questionnaires given to the owners to answer. significant differences in mean glucocorticoid metabolite concentrations (mgcm) between the two populations (i.e. single versus in-group cats) were not detected (random effect model, p = 0.178). 
however, when mgcm were taken as a function of cat personality, there were differences among single cats: timid cats showed higher levels in comparison to easy-going (random effect model, p = 0.018) and bossy (random effect model, p = 0.020) cats. as to owner subjective life quality, a direct association between the scores given by the owners to the social dimension and mgcm was found for single cats only (i.e. the better the owners rated their social well-being, the higher the mgcm of the cat; random effect model, p = 0.002). social stratification may compensate for the stress resulting from spatial restriction in large in-group living cats. other underexplored factors such as feline personality and owner life style seem to play an equally important role in domestic cats' day-to-day levels of stress, especially in cats kept as single pets. in dogs, raas activation is a major feature of congestive heart failure (chf). benazepril (fortekor®) is a potent ace inhibitor with well-documented effectiveness in canine chf. although ace activity (ace a ) has been used in preclinical studies as a surrogate marker of efficacy, some authors have reported a poor correlation between plasma ace a and changes in angiotensin ii (aii) or aldosterone (al). the purpose of this study was to investigate the effect of benazepril on canine plasma renin activity/concentration (pra/prc), angiotensin i (ai), aii, al, and the fractional excretion of potassium (ufek), sodium (ufena) and aldosterone (ufeal). sixteen beagle dogs were fed a low-sodium diet and dosed with placebo or benazepril tablets (10 mg po, q24h) for 5 days. blood and urine samples were collected on day 1 (d1) and day 5 (d5) over 24-hour periods. data were analyzed by repeated-measures anova of baseline-corrected values, and anova of auc 24hours. compared with placebo, benazepril induced a significant increase in pra and ai at d1 (p [pra] = 0.001, p [ai] = 0.002) and d5 (p [pra] = 0.001, p [ai] < 0.001). 
no differences in prc were noticed. based on auc 24hours, aii levels were 34% lower in the benazepril group at d5 (p [aii] = 0.01). ufeal and al decreased by up to 27% and 24% at d1 and d5, respectively, though the differences did not reach statistical significance. benazepril markedly influences raas dynamics in dogs. decreased exposure to aii and al is likely to be the key event required to counteract pathological remodeling of the heart in chf. this study compared two intravenous anesthetic agents, alfaxalone (alf) (alfaxan®, jurox pty. ltd.) and propofol (ppf) (rapinovet®, schering plough animal health), and their effects on spontaneous ventilation after induction of anesthesia in dogs at various doses. this randomized, crossover, dose-escalation study used six dogs in weight- and gender-matched pairs (3m/3f). for each drug, each dog was dosed incrementally at 1, 2, 5, 10 and 20 times the labeled anesthetic induction dose rate (alf 2 mg/kg, ppf 6.5 mg/kg) or until a dose was reached that rendered the dog apneic. a minimum of three days was allowed between doses. for each dose administration, the entire calculated dose was delivered at a constant rate over 1 min. the primary variable was apnea, defined as an absence of spontaneous ventilation for 1 minute. apneic dogs were manually ventilated with oxygen until they resumed adequate spontaneous ventilation. once the apneic dose was determined for an individual dog for one drug, the dog began incremental doses with the alternate drug. for each anesthetic episode, times were recorded from completion of the induction dose to: removal of endotracheal tube, dog lifting head, dog attaining sternal recumbency and dog standing. pulse rate, respiratory rate, spo2 and etco2 were each measured every 5 min. within-dog comparisons were made using the paired student's t-test. for both alf and ppf, all 6 dogs respired voluntarily at the labeled (1x) dose. for ppf at 2x and 5x doses, 4 and 0 dogs respired voluntarily, respectively. 
for alf at 2x, 5x and 10x doses, 6, 4 and 1 dog respired voluntarily, respectively. for all six dogs to become apneic required a 5x dose of ppf and a 20x dose of alf. the mean no-observable-adverse-effect level (noael), expressed as a multiple of the labeled dose, was higher for alf (4.8x) than for ppf (1.7x) (p = 0.035). there were no significant differences between times to extubation, head lift or attaining sternal recumbency after alf and ppf at 1x, 2x and 5x doses. at the 2x dose, dogs took longer to stand after alf (29.0 ± 7.0 min) than ppf (25.0 ± 8.3 min). we concluded that, based on anaesthetic duration, the manufacturer's labeled dose rates of 2 mg/kg for alf and 6.5 mg/kg for ppf were equivalent. however, based on the dose escalation, the number of dogs becoming apneic at each dose-multiple is consistent with ppf having a narrower safety margin, i.e., ppf caused more respiratory depression than alf. parenteral levetiracetam (lev) has been shown to rapidly attain therapeutic levels in dogs when given iv or im, and has been used off-label for the treatment of seizure emergencies. the purpose of this study was to determine the safety and pharmacokinetics of subcutaneously administered levetiracetam in healthy dogs. a potential application of these results would be the use of sq lev instead of, or in addition to, rectal diazepam for the treatment of cluster seizures at home. lev was administered sq between the shoulder blades to 4 healthy, purpose-bred hound dogs at a dose of 60 mg/kg (undiluted). blood samples were collected at 15, 120 and 420 minutes after lev administration via jugular venipuncture. plasma lev concentrations were measured by high-pressure liquid chromatography. none of the dogs became sedated, nor was there pain evident on palpation of the injection site. mean (standard deviation) lev concentrations were 65.2 (29.5), 114.5 (10.5) and 84.9 (20.6) µg/ml at 15, 120 and 420 minutes, respectively. 
administration of sq lev was well tolerated; concentrations exceeded the suggested therapeutic range (5-45 µg/ml) within 15 minutes of administration and remained above the lower limit of the range for at least 7 hours. these data indicate that sq lev administration may be an alternative for the at-home treatment of cluster seizures in dogs, and prospective studies in epileptic dogs are warranted. the purpose of this study was to assess the effects of cyp inhibitors (ketoconazole, chloramphenicol, fluoxetine, trimethoprim, cimetidine, and medetomidine) in varying combinations on the bioavailability of oral methadone in healthy greyhound dogs. the iacuc approved this study. cyp inhibitors were administered po for 48 hours prior to methadone administration. methadone hydrochloride was administered po at a targeted dose of 1 mg/kg. blood was obtained for the determination of methadone plasma concentrations by mass spectrometry. the area under the curve (auc) of methadone for each treatment group was compared statistically to the auc of methadone administered without inhibitors using the mann-whitney rank sum test. significant increases (p < 0.001) in the methadone auc occurred in all treatment groups which included chloramphenicol, including chloramphenicol as the only inhibitor. the magnitude of increase was at least 50-fold. mean concentrations of methadone exceeded 10 ng/ml for at least 10 hours in all groups administered concurrent chloramphenicol. no significant increases in the auc occurred in any of the groups which did not include chloramphenicol. in conclusion, chloramphenicol significantly inhibits the metabolism of methadone in greyhound dogs. as a result, the oral bioavailability of methadone is significantly increased, and plasma concentrations are achieved that are reported to be effective in humans for 10-36 hours after a single oral administration. doxycycline hyclate is used frequently in small animals, horses and exotic animals for the treatment of a wide variety of infections. 
because doxycycline hyclate tablets may not be suitable for oral administration in some animals, particularly horses and cats, it has been compounded into liquid suspensions. the commercially available doxycycline calcium 10 mg/ml oral suspension, vibramycin® (pfizer), is not suitable for use in animals due to its low concentration and a flavoring that animals find unpalatable. because of the known inherent instability of doxycycline in aqueous vehicles under storage, this study was conducted to determine the potency of two formulations stored in dark and light conditions. a high-pressure liquid chromatography (hplc) assay with uv absorption at 350 nm was developed for analyzing doxycycline in formulations, in comparison to a reference standard from the united states pharmacopeia (usp). doxycycline hyclate 100 mg tablets were first tested for potency. the tablets were then crushed and mixed with a pharmaceutical vehicle to make two concentrations: 33.3 mg/ml and 166.7 mg/ml. the vehicle used was a 50:50 mixture of a vehicle for oral solution (ora-sweet, usp-nf) and a vehicle for oral suspension (ora-plus, usp-nf). the suspensions were prepared in replicates of 3. each replicate was divided, with one aliquot stored at room temperature in lighted conditions, and the other aliquot stored at room temperature in the dark. doxycycline was extracted from the formulations and measured by hplc on days 0, 1, 4, 7, 14, 21, and 28. each replicate was tested and the potency reported as the percent doxycycline relative to the usp reference standard. on days 0, 1, 4, and 7, the potency of each formulation was within 90-110% of the reference standard (range 93.4-109%). this value is within the accepted range cited in usp <795> on pharmaceutical compounding-non-sterile preparations. however, starting at day 14, the potency declined dramatically and remained low for the tests performed on days 21 and 28. 
the potency on days 14, 21, and 28 was below 20% of the reference standard (range 14-18%). there was also a noticeable change in the quality of the formulation starting on day 14, and a change in the color of the formulation to a dark brown. these results indicate that when doxycycline hyclate tablets are compounded as a suspension in an aqueous vehicle as described in this study, at 33.3 and 166.7 mg/ml under the storage conditions used in this study, potency of the formulation cannot be assured beyond 7 days. we recommend a beyond-use date (bud) of 7 days for formulations prepared and stored at room temperature in light or dark conditions. therapeutic options for multidrug-resistant (mdr) escherichia coli urinary tract infections (uti) are limited. fosfomycin (fos) tromethamine is an oral, broad-spectrum, cell-wall-active, bactericidal drug approved for the treatment of uncomplicated uti in humans. the purpose of this study was to determine the time dependency of fos and the disposition of fos tromethamine in dogs. using a randomized, double crossover design, 12 client-owned dogs received fos sodium iv (40 mg/kg) and fos tromethamine (po, 80 mg/kg) either with (n = 6) or without food (n = 6). serum and urine were collected for 24 hr; fos was quantitated with a bioassay (atcc e. coli 25922, serum, or atcc proteus vulgaris 13315, urine). in-vitro killing curves (cell counts through 24 hours) were performed at 0 (control), 0.5, 1, 2, 8, 16 and 32 x mic for mdr e. coli canine fos-susceptible (e-test®) uropathogens. killing curves indicated fos to be time dependent. after iv administration, clearance (ml/kg/hr), volume of distribution (l/kg), elimination half-life (hl; hr) and mean residence time (mrt; hr) were (mean ± sd): 210 ± 104, 0.23 ± 0.15, 0.36 ± 0.19, 1.14 ± 0.35 and 1.7 ± 0.4, respectively. for po, cmax, hl and mrt were 66 ± 21, 2.5 ± 1.09 and 5.1 ± 1.7, respectively. serum fos exceeded the mic90 reported for multidrug-resistant (mdr) e. 
coli (1.5 µg/ml) for 7 hr (iv; 2.5 µg/ml) and 12 hr (po, 9 µg/ml). diminazene is an aromatic diamidine, anti-protozoal drug that has shown promise in a small number of cases of cytauxzoonosis. in a noncontrolled case series, 5 of 6 cats with clinical cytauxzoonosis given 3 mg/kg of diminazene aceturate survived infection. dosage frequency was two intramuscular injections given one week apart. commercial formulations contain the diminazene diaceturate salt. the active base is diminazene, with the salt consisting of two aceturate molecules. currently there are no data available on the pharmacokinetics of either diminazene compound in cats. the objective of this study was to determine the pharmacokinetics of diminazene diaceturate in healthy cats. four purpose-bred cats with normal physical examination, cbc, chemistry and urinalysis were used. a powdered commercial drug formulation (veriben®, ceva santé animale) was freshly reconstituted with sterile water to a concentration of 7 mg/ml and sterile filtered prior to administration. heparinized blood samples were collected just before (hour 0) and at 0.5, 1, 2, 4, 8, 12, 18, 24, 36, 48, 72, 120, and 168 hours after intramuscular administration of 3 mg/kg (1.68 mg/kg of diminazene base) diminazene diaceturate. the plasma was separated by centrifugation within 30 minutes of collection and frozen (−80°c) until analysis. concentrations of diminazene were measured by hplc analysis using uv absorption and ion-pairing conditions. the pharmacokinetic profile was analyzed using a simple one-compartment model. in these cats, diminazene had a mean terminal half-life (t1/2) of 1.70 (± 0.29) hrs and a mean peak plasma concentration (cmax) of 0.51 (± 0.11) µg/ml. the mean residence time (mrt) of diminazene was 2.45 hrs (± 0.42). systemic clearance (cl/f) was 1.38 (± 0.26) l/kg/hr. the volume of distribution per fraction absorbed (vd/f) was 3.36 (± 0.72) l/kg. 
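a terminal half-life like the one reported above is typically recovered by log-linear regression of the terminal-phase concentrations (ln c = ln c0 − k·t, t1/2 = ln 2 / k); a sketch with synthetic samples decaying at exactly t1/2 = 1.7 h, not the study's measurements:

```python
import math

def terminal_half_life(times_hr, conc):
    """Terminal half-life from log-linear regression of terminal-phase
    samples: fit ln C = ln C0 - k*t, then t1/2 = ln(2) / k."""
    ys = [math.log(c) for c in conc]
    n = len(times_hr)
    mt = sum(times_hr) / n
    my = sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(times_hr, ys))
             / sum((t - mt) ** 2 for t in times_hr))
    k = -slope  # elimination rate constant (1/hr)
    return math.log(2) / k

# synthetic terminal-phase samples with a known 1.7 h half-life
t = [4, 8, 12, 18]
c = [0.20 * 0.5 ** ((x - 4) / 1.7) for x in t]
half_life = terminal_half_life(t, c)
```

in the study a one-compartment model was fit instead, but the terminal slope underlying t1/2 is the same quantity either way.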
A single intramuscular dose of diminazene diaceturate was well tolerated by all 4 cats. Without knowing the concentration required to inhibit or kill Cytauxzoon felis, it is not yet possible to make suggestions regarding optimum dosing schedules for this drug. Additional toxicology data and studies to assess clinical efficacy for the treatment of cytauxzoonosis are indicated before routine clinical use can be considered.

Meloxicam has been shown to accumulate in areas of inflammation in both the rat and the human. The objective of this study was to compare the concentration of meloxicam in synovial fluid of inflamed versus non-inflamed joints in dogs. Eight male dogs were treated orally with 0.2 mg/kg of meloxicam on day one and 0.1 mg/kg on day two. On day three, reversible acute synovitis was induced in one stifle by aseptic, intra-articular administration of 1 mL of sodium urate crystal suspension (10 mg/mL). In four dogs synovitis was induced in the left stifle and in four dogs in the right stifle; in each dog the stifle without induced synovitis served as the "normal" joint. A synovial fluid sample was collected from both stifles of each dog, eight hours after administration of sodium urate and twenty-four hours after the last administration of meloxicam. Synovial meloxicam concentration was analyzed using high-performance liquid chromatography-tandem mass spectrometry (HPLC/MS-MS). Concentrations in the inflamed versus non-inflamed joint of each dog were compared using the paired t-test. The results indicate that meloxicam preferentially accumulates in inflamed joints in the dog, as concentrations were statistically significantly higher in inflamed than in non-inflamed joints.

No national surveillance system exists for monitoring emergent resistance in companion animals. However, E.
coli resistance is an increasing therapeutic and public health concern in dogs and cats. The purpose of this study was to describe current resistance patterns of canine and feline pathogenic E. coli throughout the United States and to identify risk factors for antimicrobial resistance. Isolates (n = 1512) of clinical E. coli were collected from dogs or cats from May 2008 through May 2010 in 6 different regions. Susceptibility to 15 drugs (6 drug classes) was determined by broth microdilution methods. Pharmacodynamic statistics were described regionally. Phenotypes were determined, and type of resistance was based on the number of drug classes to which resistance was expressed: none (NDR), single (SDR) and multi (MDR). The majority of isolates were from the urinary tract (71.5%) and from dogs (75.7%). The proportion of each resistance type was: NDR (17.5%), SDR (56.4%) and MDR (26.12%). The proportion of MDR was greatest in the Southwest (25.29%) and least in the Northwest (9.19%) (p < 0.05). Across all regions, the proportion of resistance by drug was: cephalothin (CPH, 65.2%) > amoxicillin-clavulanic acid (AMX, 59.4%), ampicillin (AMP, 52.9%), ticarcillin-clavulanic acid (TCX, 21.8%), doxycycline (DXY, 15.9%) > cefoxitin (CFX, 15.2%), cefpodoxime (CPX, 13.7%), chloramphenicol (CHP, 13.2%), enrofloxacin (ENR, 13.0%), ciprofloxacin (CIF, 12.3%), trimethoprim-sulfamethoxazole (TMX, 10.7%), ceftazidime (CFZ, 10.0%), gentamicin (GTM, 10.0%), cefotaxime (CFT, 9.2%) > meropenem (1.1%) (p < 0.05). The MIC90 exceeded the resistant breakpoint for AMP, AMX, CPX, CPH, CIF, CFX, DXY and ENR, whereas the MIC50 did not surpass the susceptible breakpoint. Beta-lactams were the drug class most frequently involved in SDR (96.37%) and aminoglycosides the least (0.12%). The drug class most frequently involved in MDR was likewise the beta-lactams (97.72%), and the least was gentamicin (30.13%). Resistance differs regionally, being greatest in the Southwest.
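The MIC50 and MIC90 summaries above are order statistics of the isolate MIC distribution: the concentrations inhibiting at least 50% and 90% of isolates, respectively. A minimal sketch with invented MIC values on a doubling-dilution scale (not the study's data):

```python
import math

def mic_cutoff(mics, fraction):
    """MIC inhibiting at least `fraction` of isolates: the ceil(fraction*n)-th
    ordered value (the conventional definition; no interpolation)."""
    ordered = sorted(mics)
    return ordered[math.ceil(fraction * len(ordered)) - 1]

# hypothetical ampicillin MICs (ug/mL) for 10 isolates; not the study's data
mics = [2, 2, 4, 4, 4, 8, 8, 16, 32, 64]
mic50 = mic_cutoff(mics, 0.50)   # 5th ordered value -> 4
mic90 = mic_cutoff(mics, 0.90)   # 9th ordered value -> 32
```

A pattern like the one reported (MIC50 below the susceptible breakpoint but MIC90 above the resistant breakpoint) therefore simply means that at least 10% but fewer than half of the isolates are resistant.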
CPH is the drug most, and meropenem the drug least, associated with resistance; these patterns are consistent with the drugs currently used by veterinarians.

The fluoroquinolones (FQs) are common choices for treatment of E. coli urinary tract infections (UTIs) in animals and humans. Second-generation drugs approved in animals include enrofloxacin (ENR), marbofloxacin (MAR) and orbifloxacin (ORB); human drugs include ciprofloxacin (CIP). Third- and fourth-generation FQs for humans include moxifloxacin (MOX), gatifloxacin (GAT) and ofloxacin (OFL), with its L-isomer levofloxacin (LEV). For animals, pradofloxacin (PRA) is approved for use in Europe. The purpose of this study was to assess the in vitro activity of first- (nalidixic acid [NAL]) through fourth-generation FQs (n = 11) toward dog or cat E. coli uropathogens (n = 51). Isolates were subjected to susceptibility testing against 6 drug classes (15 drugs). Isolate phenotypes included no (NDR; n = 12), single (SDR; n = 15) or multidrug (to more than 2 drug classes; MDR; n = 24) resistance, the latter either ENR-resistant (ENR-R MDR; n = 12) or ENR-susceptible (ENR-S MDR; n = 12). Minimum inhibitory concentrations (MICs) were determined for each isolate using broth microdilution (E. coli ATCC 25922 served as a negative control). MIC statistics were generated for each drug among phenotypes. The overall potency (MIC90) for all ENR-susceptible isolates (NDR, SDR and ENR-S MDR) was GAT > PRA, MOX, MAR, LEV, CIP > SAR, ORB, OFL > ENR > NAL. Each E. coli isolate expressing NDR or SDR was susceptible to all FQs. However, isolates expressing resistance to first- or second-generation FQs were also resistant to later-generation drugs.

Glucocorticoids (GC) are standard therapy for allergic asthma but do not reverse the underlying type I hypersensitivity. Allergen-specific immunotherapy (ASIT), a process of "desensitization", is potentially curative but requires identification of the offending allergens.
The purpose of this study was to determine whether oral or inhaled GC administered at routinely used dosages would interfere with allergen identification. We hypothesized that oral but not inhaled GC would interfere with accurate identification of allergen-specific IgE using skin and serum testing in experimentally asthmatic cats. Asthma was induced in eighteen cats using Bermuda grass allergen (BGA). Cats (n = 6/group) were randomized to receive oral GC (10 mg prednisolone q24hr PO), inhaled GC (600 μg budesonide q24hr) or placebo (gelatin capsule q24hr PO) for one month. Intradermal skin testing (IDST) and BGA-specific IgE amounts were measured prior to treatment, during treatment (weeks one and four), and every two weeks after treatment until both tests were positive. A paired t-test was used to compare serum IgE among groups pre- and post-treatment (p < 0.05 significant). IDST reactivity was eliminated in 4/6 cats on oral GC, 3/6 on inhaled GC, and 1/6 placebo-treated cats. Within two weeks after stopping treatment, IDST was again positive in all cats. Contrary to our hypothesis, serum IgE reactivity to BGA was not significantly diminished by any treatment. In conclusion, a two-week withdrawal from GCs is adequate for IDST identification of allergens, and no withdrawal is required prior to serum IgE testing to identify the sensitizing allergens.

In people, increasing severity of asthma is associated with low serum concentrations of 25-hydroxyvitamin D (25-OH-D). 25-OH-D is thought to ameliorate lower airway inflammation primarily by decreasing the production of pro-inflammatory mediators and by increasing production of the anti-inflammatory cytokine IL-10. In people, serum 25-OH-D concentration is associated with sunlight exposure as well as dietary intake. Cats do not rely on sunlight for vitamin D synthesis; all of their vitamin D comes from dietary intake. Cats have a naturally occurring lower airway disease syndrome (LAD) that shares many features with human asthma.
The goal of this study was to evaluate serum 25-OH-D concentrations in cats with LAD. Cats with naturally developing LAD were enrolled. Criteria for a diagnosis of LAD included a history of cough, wheeze or respiratory distress; radiographic evidence of a bronchial pattern and hyperinflation; negative heartworm antigen and antibody tests; and resolution of clinical signs in response to glucocorticoids. Dietary history was obtained. 25-OH-D concentrations were determined on serum samples by a commercial laboratory. Twelve cats with LAD were enrolled; all ate commercial cat food. The median 25-OH-D concentration was 112 nmol/L (range 65-176 nmol/L), which is within the reported reference range of 65-170 nmol/L. In contrast to human asthma, lower airway disease in cats is not associated with low serum concentrations of 25-OH-D.

Interstitial lung diseases (ILD) are uncommon in dogs, the most commonly recognized being idiopathic pulmonary fibrosis ("Westie fibrosis"). In human medicine, ILD represents a large umbrella of pulmonary diseases, with IPF only a subset; other, more treatable ILDs are also identified and may respond to either removal of a stimulus (hypersensitivity) or steroid therapy. The goal of this report is to describe the clinical course, including outcome, computed tomography and histopathology, of dogs affected with an ILD. The computed tomography (CT) log was reviewed for dogs that underwent thoracic CT scanning for evaluation of respiratory signs and had changes consistent with ILD as the primary abnormality, including the presence of diffuse disease in all lobes and at least 2 of the following: reticulation, ground-glass opacity, consolidation, or traction bronchiectasis. Survival time from CT date was calculated.
The presence of moderate pulmonary hypertension (PHTN; > 50 mmHg as estimated by the tricuspid regurgitant jet) was also recorded, and survival times were compared with a Mann-Whitney rank sum test, with p < 0.05 considered significant. Thirteen dogs were identified. Terriers and Chihuahuas were the most commonly affected breeds. Two dogs were adolescents; the remaining dogs ranged from 7-16 years, with a median of 11 years. Histopathology results (n = 5) included moderate to severe interstitial fibrosis (3), alveolar proteinosis with fibrosis (1), and interstitial eosinophilic pneumonia (1). One dog had suspected cryptogenic organizing pneumonia and a good response to glucocorticoids. Eight dogs died of respiratory failure, with a median post-CT survival time of 42 days (range 2-365); two dogs died of non-pulmonary disease; two dogs had severe lower respiratory infections as puppies with persistent respiratory signs, and both are still alive at > 2 years since diagnosis; one terrier is alive at 19 months; and one was lost to follow-up. Five dogs had PHTN, with a median survival of 60 days (range 42-590), while the 5 dogs without had a median survival of 180 days (range 21-730) (p = 0.3). Interstitial lung disease in dogs is not just idiopathic pulmonary fibrosis: following respiratory infection, young dogs may develop an ILD with a relatively indolent course, and rare ILD is steroid responsive. CT is useful to identify ILD, but further research correlating it with echocardiography and histopathology is advised before using it to prognosticate.

Idiopathic pulmonary fibrosis (IPF) is an interstitial pulmonary disease mainly described in West Highland White Terriers (WHWT). Identification of molecular pathways important in the pathogenesis of IPF would improve our understanding of this disease and may help identify therapeutic targets. The aim of the present study was to investigate gene expression in lungs of WHWT with IPF using an oligonucleotide microarray.
Total RNA was extracted from post-mortem pulmonary samples from five WHWT with IPF and five control dogs (CTRL) without pulmonary disease. The RNA was pooled from each group (IPF and CTRL) and analysed using the canine-specific Affymetrix microarray technology. Genes with a minimum of a two-fold difference in expression between the two groups were selected for further analysis, and the most significant biological functions for these genes were identified using Ingenuity Pathways Analysis. More than 1000 genes were identified as having a greater than two-fold difference in expression. The significant biological functions associated with these genes were related to cellular movement, cellular proliferation and apoptosis. Most notable among these were genes encoding the leukocyte chemotactic proteins CCL2 (fold change 14.93), CCL17 (12.74) and IL8 (14.32); proteins involved in fibroblast migration; and the matrix metalloproteinases (MMPs) involved in matrix degradation: MMP7 (−17.55), MMP9 (−3.42), MMP1 (−2.76). This study has identified genes that may be important in the pathogenesis of IPF, e.g. proteins involved in leukocyte chemotaxis, fibroblast recruitment and activation, regulation of apoptosis, and extracellular-matrix turnover. However, real-time quantitative RT-PCR studies are needed to confirm these results before any definitive conclusions can be drawn.

Idiopathic pulmonary fibrosis (IPF) is an interstitial disease mainly described in West Highland White Terriers (WHWT). Definitive diagnosis ultimately relies on lung histopathology, so identification of specific biomarkers would be very helpful. Expression microarray is a powerful screening tool to study local gene expression in a disease state. The aim of the present study was to measure gene expression profiles in lungs of WHWT with IPF to identify potential blood or bronchoalveolar lavage fluid (BALF) biomarkers.
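The IPF expression-profiling work above screens genes by fold change between the pooled IPF and control samples, reporting down-regulation as a negative fold change. A minimal sketch of that filter; the intensity values are invented, chosen so the resulting fold changes echo those reported for CCL2, IL8 and MMP7:

```python
def fold_changes(ipf, ctrl, genes, threshold=2.0):
    """Signed fold change per gene (negative = lower in IPF); keep |FC| >= threshold."""
    hits = {}
    for gene, a, b in zip(genes, ipf, ctrl):
        fc = a / b if a >= b else -(b / a)   # signed convention used in the abstracts
        if abs(fc) >= threshold:
            hits[gene] = round(fc, 2)
    return hits

# hypothetical pooled intensities; gene names follow the abstracts
genes = ["CCL2", "IL8", "MMP7", "GAPDH"]
ipf   = [149.3, 143.2, 10.0, 100.0]
ctrl  = [10.0, 10.0, 175.5, 105.0]
hits = fold_changes(ipf, ctrl, genes)   # GAPDH (1.05-fold) is filtered out
```

The signed convention (−17.55 rather than a ratio of 0.057) matches how the abstracts report down-regulated genes.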
Total RNA was extracted from post-mortem pulmonary samples from five WHWT with histopathologically confirmed IPF and five control dogs (CTRL) without pulmonary disease. The RNA was pooled from each group (IPF and CTRL) and analysed using the canine-specific Affymetrix microarray technology. IPA biomarker analysis (Ingenuity Systems) was used to filter and prioritize biomarker candidates using the three following criteria: a minimum of a two-fold difference in expression between IPF and CTRL; expression of the gene in lung tissue; and possible detection of the protein in blood or in BALF. Fifty-four molecules met all the criteria. Based on difference in expression, promising proteins included CCL7 (fold change 17.8), α-actinin-3 (15.7), CCL2 (14.9), serum amyloid A1 (14.4), IL8 (14.3), PLUNC (−25.1) and MMP7 (−17.6). Some are well-known biomarkers of IPF in humans, either for diagnosis (MMP7, IL8) or prognosis (CCL2). These results provide novel potential biomarkers of canine IPF. Measurement of these proteins in blood and BALF of healthy dogs, dogs with IPF and dogs with other respiratory diseases is needed to assess their use as biomarkers of canine IPF.

Heliox is a mixture of helium and oxygen that has been used therapeutically in human medicine for treatment of airway obstruction. Helium's low density and other physical properties have been shown to reduce the work of breathing by limiting turbulence. The purpose of this study was, therefore, to evaluate respiratory parameters in response to inhaled heliox in dogs with meso- and brachycephalic conformation. Eleven healthy dogs were recruited: five mesocephalic and six brachycephalic. Flow-volume loops were collected using commercial software (Buxco) while the dogs breathed 70:30 helium:oxygen (heliox) and 70:30 nitrogen:oxygen (nitrox) in randomized order via a low-dead-space face mask.
Because of the intrinsic gas properties, gas flow rates and volumes were corrected in vitro by a conversion factor for the effect of helium on the pneumotachograph. Respiratory rate, tidal volume (mL), minute ventilation (L), inspiratory time (Ti), expiratory time (Te), peak inspiratory flow (PIF) and peak expiratory flow (PEF) were recorded while breathing heliox or nitrox. Values were compared using a paired-sample t-test, with p < 0.05 considered significant. All dogs cooperated with testing. There was no significant difference in respiratory rate, tidal volume, minute ventilation, inspiratory or expiratory times, or peak inspiratory flow. Peak expiratory flow was significantly higher (p = 0.01) while breathing heliox than while breathing nitrox in brachycephalics, but not in mesocephalics (p = 0.22). Heliox is well tolerated in healthy dogs and results in an increased expiratory flow rate in brachycephalic dogs. Further investigation of heliox is warranted in dogs with airway obstruction.

The aim of this prospective, multicenter study was to assess the effects of surgical correction on the severity of clinical signs and on levels of acute-phase proteins (C-reactive protein [CRP], haptoglobin [HP]) and cardiac troponin I (cTnI). Thirty-three brachycephalic dogs with BOAS were included and evaluated before and approximately two months after surgical correction. The most common components of BOAS found were elongated soft palate (33/33; 100%), stenotic nares (31/33; 94%) and everted laryngeal saccules (13/33; 39.4%). Staphylectomy was performed by means of two different surgical techniques: laser (n = 12) or electrical scalpel (n = 21). There were significant differences between dogs depending on the surgical technique used, with a greater reduction of respiratory signs (p < 0.002) and better postsurgical improvement (p < 0.015) with the use of laser. The levels of CRP, HP and cTnI were categorized as normal or elevated.
Before surgical treatment, three (9.1%), six (18.2%) and thirteen (39.4%) dogs had elevated values of CRP, HP and cTnI, respectively. Two months after surgical correction, five (15.1%), eleven (33.3%) and fourteen (42.4%) dogs had elevated values of CRP, HP and cTnI, respectively. There were no statistical differences between values of CRP and cTnI before and after surgical correction, but levels of HP increased significantly after surgical treatment (p < 0.008), probably owing to postsurgical treatment with corticosteroids. As previously suggested by others, there was a statistically significant reduction of respiratory and gastrointestinal signs in dogs with BOAS submitted to surgical correction (p < 0.001). According to the results of the present study, determination of CRP, HP and cTnI before and two months after surgical treatment does not have prognostic value in dogs with BOAS. Nevertheless, nearly half of the dogs studied had elevated levels of cTnI before (39.4%) that persisted after surgical treatment (42.4%), suggesting some degree of myocardial damage is present. Further studies are needed considering the influence of breed and age. To the authors' knowledge, this is the first description of CRP, HP and cTnI determination in dogs with BOAS.

Overweight and obesity are common conditions that lead to alterations in respiratory mechanics, airway resistance, pattern of breathing and gas exchange in humans. The objective of the present study was to investigate whether there are significant differences in respiratory parameters and arterial blood gas analysis between obese and overweight cats, in the conscious state and under general anesthesia. Twenty-nine adult cats were arranged in three groups: obese (n = 15), overweight (n = 7) and ideal body score index (BSI) (n = 7). Mean BSI in the groups was 8.8 (obese), 6.3 (overweight) and 4.9 (ideal BSI). Cats did not have respiratory, cardiac or other systemic diseases.
Respiratory parameters were evaluated with a ventilometer coupled to a facemask in conscious cats and directly to the endotracheal tube in anesthetized cats under spontaneous respiration. Anesthesia was performed with propofol (3 ± 1.2 mL/kg) and the cats were maintained in the same anesthetic plane. The three groups were compared by analysis of variance followed by Tukey's test, and conscious and anesthetized cats were compared by Student's t-test, with a 5% significance level. No differences were observed among the three groups in the respiratory parameters evaluated by ventilometry (tidal volume, expiratory and inspiratory times and peak pressures, respiratory rate and end-tidal partial pressure of CO2 (PETCO2)) or in the arterial blood gas parameters (PaO2 and PaCO2). The PaO2 of cats with ideal BSI was 88.1 ± 13.5 mmHg, which was not significantly different (p = 0.06) from overweight (72.0 ± 11.9 mmHg) or obese cats (72.6 ± 14.6 mmHg). Comparing anesthetized with conscious cats, decreases in tidal volume, expiratory and inspiratory times and peak pressures, and increases in PETCO2 and respiratory rate, were detected in the anesthetized cats. Only PETCO2, inspiratory time and respiratory rate in overweight cats did not differ when anesthetized. These results suggest that obesity and overweight did not result in impairment of respiratory function in cats, and that propofol induced respiratory depression.

Osteosarcoma (OSA) is the most common bone tumor in dogs; however, little is known regarding the mechanisms underlying malignant transformation in these tumors. Breeds such as Rottweilers and Greyhounds are at higher risk for developing OSA, suggesting that heritable factors play a role in this disease. miRNAs have tumor/tissue-specific roles in regulating gene expression, and dysregulated miRNA expression is found frequently in cancer.
We hypothesize that canine OSA is characterized by unique miRNA expression profile(s), with dysregulation of some miRNAs being associated with specific breeds. miRNA expression profiling of primary OSA tumors from 6 Greyhounds and 6 Rottweilers was performed using the NanoString Technologies nCounter miRNA expression assay kit, interrogating the expression of 651 human miRNAs, 168 of whose mature sequences are 100% conserved between human and dog. Seventeen miRNAs were differentially expressed in Greyhound versus Rottweiler tumors (p < 0.05), suggesting that breed-specific dysregulation of miRNAs may contribute to the development and progression of spontaneous OSA. Hierarchical clustering revealed distinct miRNA expression signatures in Greyhound OSA tumors as compared to Rottweilers. Based on these preliminary results, we are evaluating a larger cohort of OSA tumor samples including Greyhounds, Rottweilers, Golden Retrievers, and a mixed population of other breeds. Statistical analysis will be performed to determine the association of miRNA transcript levels with specific breeds and overall outcome. Characterization of miRNA expression in canine OSA will facilitate our understanding of the biology of this disease and has the potential to identify targets for therapeutic intervention.

Combination therapies using drugs with documented single-agent activity and a lack of overlapping toxicities could potentially improve outcome. The hypothesis to be tested is that Palladia can be safely administered concurrently with a standard weekly protocol of vinblastine (VBL), at dosages known to have activity against mast cell tumors. Dogs with histologically confirmed, measurable mast cell tumors were evaluated for eligibility to enter a standard phase I dose-finding trial (3+3 cohorts), at a starting dose of 2.3 mg/m² IV VBL (weekly for a total of 4 treatments) and 2.25 mg/kg PO Palladia EOD, concurrently.
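The trial above uses the standard 3+3 dose-finding scheme. A minimal sketch of the generic decision rule only; individual protocols, including this one, commonly modify it:

```python
def three_plus_three(dlt_counts):
    """Classic 3+3 decision for the current (highest) dose level.
    dlt_counts: dose level -> (n_treated, n_with_DLT).
    Returns 'escalate', 'expand' (enrol 3 more at this level), or 'stop'."""
    level = max(dlt_counts)
    n, dlt = dlt_counts[level]
    if n == 3:
        if dlt == 0:
            return "escalate"   # 0/3 DLT -> next dose level
        if dlt == 1:
            return "expand"     # 1/3 DLT -> expand cohort to 6
        return "stop"           # >=2/3 DLT -> MTD exceeded
    if n == 6:
        return "escalate" if dlt <= 1 else "stop"   # <=1/6 DLT tolerated
    raise ValueError("cohorts of 3 or 6 expected")

decision = three_plus_three({1: (3, 0), 2: (3, 1)})  # 1/3 DLT at current level
```

Under this rule the maximum tolerated dose is conventionally taken as the highest level with DLT in no more than 1 of 6 dogs.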
Dose escalation of Palladia was scheduled in 0.25 mg/kg increments until the MTD was established or the FDA label dose was reached (3.25 mg/kg). Safety evaluation was performed weekly throughout the 4-week study period. Dose-limiting toxicities were described following established VCOG-CTCAE (v1.0) criteria. While antitumor response is not a primary endpoint of phase I trials, activity was documented prior to VBL treatments 2-4, and monthly thereafter, based on RECIST criteria. Nine dogs have been enrolled; cohort 3 is filled and approaching completion of the evaluation period. Hematologic dose-limiting toxicity led to 2 de-escalations of VBL. The current safe combination appears to include VBL at 1.6 mg/m² every other week and Palladia at 2.25 mg/kg EOD. Response was seen in all but one dog. Without head-to-head trials comparing the efficacy of bi-weekly VBL combined with Palladia against VBL alone, the choice of therapy should remain at the clinician's discretion.

Prostate-specific membrane antigen (PSMA) is a transmembrane protein expressed by tumor-associated neovasculature but not by normal blood vessels. Based upon its selective expression in endothelial cells associated with cancer, PSMA may serve as a conserved angiogenic target shared by macroscopic solid tumors of various histologies. To investigate the feasibility of targeting a homogeneous population of PSMA-expressing endothelial cells as a novel anticancer strategy, we investigated PSMA expression in several canine hemangiosarcoma (cHSA) cell lines and subsequently developed self-assembling nanoparticles containing diagnostic (near-infrared dyes) and therapeutic (doxorubicin) cargo which selectively bind to PSMA by means of the A10 aptamer, a commercially available oligonucleotide. The expression of PSMA by cHSA cells was confirmed transcriptionally and translationally by real-time PCR and immunohistochemistry, respectively.
Selective binding and endocytosis of A10-decorated nanoparticles were studied by fluorescence microscopy. The ability of A10-decorated nanoparticles encapsulating doxorubicin to exert in vitro cytotoxic effects in cHSA cells was assessed by colony-forming assays. Using a cHSA xenograft murine tumor model, clinically relevant anticancer effects of A10-decorated nanoparticles encapsulating doxorubicin were tested. All cHSA cell lines expressed PSMA mRNA and protein. A10-decorated nanoparticles were selectively endocytosed by PSMA-expressing cells, and when these nanoparticles encapsulated doxorubicin, significant cytotoxic effects were exerted in vitro. Finally, A10-decorated nanoparticles encapsulating doxorubicin significantly reduced the size of macroscopic cHSA tumor burdens in transplanted mice. Diagnostic and therapeutic nanoparticles can thus be targeted to PSMA-expressing endothelial cells, and cHSA provides a comparative model for the future study of nanoparticle therapeutics.

Canine transitional cell carcinoma (TCC) is the most common tumor of the urinary tract and is similar to human invasive TCC in histopathologic characteristics, molecular features, sites of metastasis, and response to medical therapy. Prevalence is increasing, and novel therapies and strategies are needed to effectively treat this aggressive form of cancer in both species. Personalized medicine techniques aim to improve treatment outcome by using patient tumor profiling to identify individualized therapeutic targets. A genomic algorithm termed "coexpression extrapolation" (COXEN) has been developed that uses expression microarray data to predict drug activity in patient TCC samples. The utility of this predictive methodology has been established in other types of cancer in vitro, but its clinical utility has not yet been determined. Validation studies of COXEN in 10 canine TCC cell lines were conducted.
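COXEN extrapolates drug sensitivity from cell-line panels to patient tumors via concordantly expressed genes; the published algorithm is considerably more involved than can be shown here. As a toy illustration of the underlying idea only (expression similarity carries drug-response information), the sketch below matches a tumor profile to its most correlated cell line and returns that line's measured GI50. All names and numbers are invented:

```python
import numpy as np

def predict_sensitivity(tumor_profile, line_profiles, line_gi50):
    """Toy correlation-based predictor (in the spirit of, but NOT, COXEN):
    return the best-matched cell line and its measured GI50."""
    best, best_r = None, -2.0
    for name, profile in line_profiles.items():
        r = np.corrcoef(tumor_profile, profile)[0, 1]  # Pearson correlation
        if r > best_r:
            best, best_r = name, r
    return best, line_gi50[best]

# hypothetical 4-gene expression profiles and GI50 values (uM)
lines = {"TCC-A": [1.0, 2.0, 0.5, 3.0], "TCC-B": [3.0, 0.5, 2.5, 1.0]}
gi50 = {"TCC-A": 0.8, "TCC-B": 12.0}
match, predicted_gi50 = predict_sensitivity([1.1, 2.2, 0.4, 2.9], lines, gi50)
```

The tumor profile here tracks TCC-A almost gene for gene, so the sketch predicts TCC-A's sensitivity; real COXEN additionally restricts the comparison to genes whose co-expression structure is preserved between the two data sets.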
The goal was to determine the value of COXEN in predicting baseline sensitivity of canine TCC to 5 chemotherapy agents (gemcitabine, mitoxantrone, carboplatin, vinblastine and cisplatin) that would then be used in a proposed clinical trial. Additionally, expression data from 25 canine treatment-naïve primary tumor samples were generated on an Affymetrix array platform (canine genome v2.0). Both the expression data and the TCC cell-line data (antiproliferative effects, 50% growth inhibition or GI50) were used to establish a canine-specific predictive COXEN algorithm. COXEN scores for canine TCC cell-line drug activity were then analyzed. Scores predicted the activity of cisplatin, gemcitabine and mitoxantrone in all 10 cell lines, and of carboplatin in 6 cell lines. Because all of the cell lines were sensitive to vinblastine (GI50 < 0.05 μM), the COXEN score was not predictive of its potency; interestingly, COXEN fails to predict vinblastine response in human TCC cell-line data as well. In concurrent work, comparative genomic studies to define and compare the gene expression signatures of TCC in dogs and humans provide further evidence that canine TCC is a valuable genomic model of the human disease. Current studies involve testing the chemo-predictivity of this derived canine COXEN algorithm in additional canine TCC cell lines. Canine TCC offers an excellent model for in vitro and in vivo studies of the COXEN approach, and this preclinical work will be used to guide the feasibility of future COXEN clinical trials in dogs and humans with TCC.

A small-molecule complex (AminoAct) isolated from bovine milk is a natural peptide mixture with multi-kinase inhibitory effects against epidermal growth factor receptor (EGFR) and insulin-like growth factor receptor-1 (IGFR-1).
Ingestion of AminoAct by people with cancer results in lower serum TNF-alpha, an increase in antioxidant superoxide dismutase (SOD) enzyme activity, and blood serum that causes apoptosis in cancer cell lines. This study was designed first to assess safety and second to assess the efficacy of three dosage levels of AX-3 in sustaining progression-free survival (PFS) for dogs with refractory advanced and/or metastatic cancer. The prospective, open-label study included dogs of different breeds with naturally occurring, histologically confirmed malignancies. The first 13 dogs received AminoAct at 1 g/m²; the second group of 7 dogs subsequently received the same dosage plus 350 mg of AminoAct; and the third group of dogs subsequently received 2 g/m². Each dog was treated orally daily for six weeks along with 550 mg betaine HCl, which aids peptide absorption. All patients were evaluated for toxicity using the VCOG-CTCAE and for efficacy using RECIST criteria, via assessment of clinical parameters, blood work and client questionnaires. No toxicity other than mild, transient (grade I) nausea was noted, nor were there any changes in hemograms or biochemical profiles in any patient. Dogs with tumors confirmed as responders (> 50% reduction in size) included pulmonary adenocarcinoma, mast cell tumor, trichoepithelioma and soft tissue sarcoma. In these limited studies the response appears more durable at higher dosages. The response to AminoAct is dose dependent and only transient mild toxicity was observed, suggesting the maximum effective dosage has not been reached. Further clinical studies will be valuable in determining the effective dosage and response duration. Treating cancer in dogs with AminoAct offers a unique opportunity as a model for human cancer biology and translational cancer therapeutics.
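Tumor response in the abstracts above is categorized by RECIST-style thresholds (note the AminoAct study applied its own > 50% cut-off for "responder"). A simplified sketch of the standard diameter-sum rule, omitting RECIST 1.1 details such as the 5 mm absolute-growth requirement and new-lesion criteria:

```python
def recist_category(baseline_sum_mm, followup_sum_mm):
    """Simplified RECIST categorisation from sums of target-lesion diameters."""
    if followup_sum_mm == 0:
        return "CR"                      # complete response: lesions disappeared
    change = 100.0 * (followup_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -30.0:
        return "PR"                      # partial response: >=30% shrinkage
    if change >= 20.0:
        return "PD"                      # progressive disease: >=20% growth
    return "SD"                          # stable disease

# hypothetical follow-up sums (mm) against a 100 mm baseline
responses = [recist_category(100, s) for s in (0, 65, 90, 125)]
```

The categorical output is what feeds endpoints such as response rate and, combined with time, progression-free survival.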
Stereotactic radiation therapy (SRT) combines patient immobilization, image guidance, and intensity-modulated delivery to achieve ablative radiation doses within the tumor while preferentially sparing surrounding normal tissues. The purpose of this study was to evaluate the efficacy of SRT as a means of achieving local tumor control for canine nasal tumors. Retrospective analysis was performed on dogs with a nasal tumor confirmed by histopathology and computed tomography, no previous surgical or radiation therapy, at least six months of follow-up, and completion of three fractions of SRT at CSU. SRT was administered via the Varian Trilogy linear accelerator once daily for three consecutive days. The Varian Eclipse treatment plan was reviewed to determine the planned target volume (PTV) and the dose to 95% of the PTV. Kaplan-Meier survival analysis was performed for disease-free interval (DFI) and overall survival (OS). Sixteen patients with nasal tumors (8 adenocarcinomas/carcinomas, 2 squamous cell carcinomas, 3 chondrosarcomas, 2 osteosarcomas, and 1 undifferentiated sarcoma) were treated with SRT. A median dose of 29.1 Gy was administered to 95% of the PTV, with a median PTV of 75.3 cc. SRT was well tolerated by the normal tissues, with minimal, manageable side effects. To date, the median DFI is 270 days and the median OS is 367 days. Based upon this initial clinical experience, stereotactic radiation therapy is an emerging modality in the management of canine nasal tumors.

Canine leptospirosis can vary from subclinical infection to illness that ranges from mild to severe, including death, depending on the susceptibility of the dog, the virulence of the organism, and the route and degree of infection. The objective of this study was to evaluate the ability of a canine Leptospira bacterin to prevent infection and disease following challenge with virulent Leptospira canicola, L. pomona, L. grippotyphosa, or L. icterohaemorrhagiae.
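The median DFI and OS in the SRT series above come from Kaplan-Meier analysis, which handles censored follow-up (dogs alive or lost at last contact). A minimal estimator sketch without survival libraries, returning the first time the survival curve drops to 0.5 or below; the follow-up data are invented, not the study's 16 patients:

```python
def km_median(times, events):
    """Kaplan-Meier median: times in days, events 1 = event observed, 0 = censored.
    A sketch ignoring tie subtleties, not a validated survival tool."""
    survival = 1.0
    at_risk = len(times)
    for t, e in sorted(zip(times, events)):
        if e:
            survival *= (at_risk - 1) / at_risk   # step down only at event times
            if survival <= 0.5:
                return t
        at_risk -= 1                              # censored dogs leave the risk set
    return None  # median not reached during follow-up

t_days = [60, 120, 200, 270, 300, 367, 400, 500]   # hypothetical follow-up
d_flag = [1,   1,   1,   1,   0,   1,   0,   1]    # 0 = censored
median_days = km_median(t_days, d_flag)
```

Censored dogs still contribute to the risk set up to their last follow-up, which is why Kaplan-Meier medians differ from a naive median of the observed times.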
groups of 8-week-old beagles were vaccinated (day 0) and boosted (day 21) with placebo (n = 10) or the 4-way bacterin (n ≥ 20) and subsequently challenged with each serovar. the results demonstrated that blood and various tissue samples from placebo recipients became reliably infected, and the dogs developed typical clinical signs of leptospirosis including loss of appetite, ocular congestion, depression, dehydration, jaundice, hematuria, melena, vomiting, petechiae, and death. in addition, placebo recipients developed kidney and liver dysfunction. in contrast, some vaccine recipients became infected, but the organisms were cleared quickly from the blood. vaccinated dogs failed to develop severe clinical disease requiring medical intervention, and no animals died (p < 0.001). a few of the vaccinated dogs developed clinical abnormalities, but the clinical signs remained mild and were self-limiting (p < 0.0001 for each serovar). administration of the bacterin also prevented thrombocytopenia.

ciprofloxacin, a synthetic fluoroquinolone antimicrobic, is not fda-approved for veterinary use. however, due to the recent availability of less expensive generic formulations, extra-label use of ciprofloxacin by veterinarians appears more common. although ciprofloxacin crystalluria and uroliths have been reported in humans, we are unaware of any published reports in dogs. this is surprising, since the mean urine ciprofloxacin concentration (0.36 mg/ml) in dogs following a modest iv dose (13 mg/kg) was 3 times higher than the solubility of ciprofloxacin in water (0.12 mg/ml). to identify the occurrence of ciprofloxacin uroliths in dogs, records from the minnesota urolith center were reviewed.
between january 2001 and december 2009, ciprofloxacin was identified in uroliths from 58 dogs: uroliths composed of 100% ciprofloxacin in 10, mixed uroliths containing ciprofloxacin in 6, a shell of ciprofloxacin in 21, and ciprofloxacin surface crystals in 21. based on an experimental study in which 83% of human volunteers consuming 1000 mg of ciprofloxacin with nahco3 exhibited ciprofloxacin crystalluria (urine ph > 7.3), while no volunteers consuming 1000 mg of ciprofloxacin with nh4cl to acidify the urine formed crystals, we postulated that ciprofloxacin uroliths could be dissolved in acidic urine. to test this hypothesis, canine uroliths composed of 100% ciprofloxacin from a single source (a 6-yr-old male english bulldog receiving 24 mg/kg of ciprofloxacin po q 12 hr to manage superficial pyoderma; turbulent-flow chromatography/tandem mass spectrometry detected 517 mg of ciprofloxacin/g of urolith) were incubated in urine at selected ph values and monitored for dissolution. urine obtained from multiple dogs not receiving fluoroquinolones was pooled and divided into 5 aliquots. aliquots were adjusted with hcl or naoh to a ph of 3, 5, 6, 7, or 8. aliquots were capped and preserved by refrigeration; ph was monitored and readjusted weekly. ten uroliths of approximately equal weight were randomly assigned to individual flasks containing 10 ml of urine. flasks were constantly agitated and maintained at 38°c. every 24 hours, urine was discarded and replaced with 10 ml of urine of identical ph until stone dissolution was complete. ciprofloxacin urolith dissolution times at each urine ph are reported below. ciprofloxacin uroliths are a newly recognized disease and a potential adverse effect of ciprofloxacin administration in dogs. in vitro dissolution of ciprofloxacin uroliths was achieved in canine urine, supporting the premise that in vivo dissolution is possible.
urolith dissolution times were shortest at the lowest and highest ph values, which is consistent with the pka values (6.0 and 8.8) of this amphiprotic antimicrobic (more soluble at ph below the acidic pka and above the alkaline pka).

foods designed to promote struvite urolith dissolution may be designed for short-term feeding facilitating rapid dissolution, or may be formulated with a more moderate target urine ph to allow for dissolution followed by life-long maintenance feeding to minimize recurrence. the purpose of this study was to compare the efficacy and rate of dissolution of a maintenance food with those of a struvite dissolution food. sixteen client-owned adult cats (13 fs, 3 mc) with naturally occurring struvite urocystoliths (mineral composition based on history, radiographs, urinalysis, urine culture and physical examination) were randomized to either a dry maintenance food (test) or a dry food known to dissolve struvite uroliths (control). the clinical care team and owners were blinded to treatment assignment. the test food was formulated to provide 0.06% mg (dm), 0.65% p, 35% protein, and a calculated target urine ph (uph) of 6.2-6.4. the control food was formulated to provide 0.06% mg (dm), 0.77% p, 35% protein, and a targeted urine ph of 5.9-6.1. owners were advised to feed the assigned diet exclusively, in an amount to maintain body condition. after diet assignment, radiographs were performed at eight weekly intervals until there was no evidence of uroliths or until there was evidence that the uroliths were the same size or larger. a physical examination, complete blood count, serum chemistry profile, urinalysis and urine culture were repeated at the conclusion of the study. statistical analysis was by anova. all uroliths dissolved in all cats and both foods were palatable.
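the ph-solubility argument made for ciprofloxacin above (an ampholyte that is more soluble below its acidic pka and above its alkaline pka) can be sketched with the standard ampholyte solubility relation. the pka values 6.0 and 8.8 are those quoted in the abstract; the intrinsic solubility s0 cancels out here, so only the relative profile is meaningful.

```python
# relative solubility (s/s0) of an amphiprotic drug vs. urine ph (sketch).
# for an ampholyte: s/s0 = 1 + 10**(pka_acidic - ph) + 10**(ph - pka_basic),
# i.e. 1 plus the cationic and anionic fractions relative to the neutral form.
def relative_solubility(ph, pka_acidic=6.0, pka_basic=8.8):
    return 1.0 + 10.0 ** (pka_acidic - ph) + 10.0 ** (ph - pka_basic)

# profile across the urine ph values used in the dissolution experiment:
for ph in (3, 5, 6, 7, 8):
    print(ph, round(relative_solubility(ph), 1))
```

the minimum of this curve falls between the two pka values, which matches the observation that dissolution was slowest at intermediate urine ph.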
radiographs of cats fed the control food indicated the uroliths dissolved in a significantly shorter time (mean ± sd of 1.6 ± 0.7 weeks) compared to cats consuming the test food (mean 3.9 ± 2 weeks; p < 0.05). cats in the control group finished the study at 1 (n = 4), 2 (n = 3) and 3 weeks. cats in the test group finished the study at 1, 2, 3 (n = 2), 4, 5, 6, and 7 weeks.

the minnesota urolith center occasionally receives uroliths for analysis that are immersed in formalin. results of quantitative analysis of these uroliths revealed that some submitted in formalin consisted of newberyite (magnesium hydrogen phosphate trihydrate). because newberyite is uncommonly found in uroliths formed by cats and dogs, we hypothesized that this mineral was an in vitro artifact caused by exposure of struvite (magnesium ammonium phosphate hexahydrate) to formalin. the purpose of this study was to determine if formalin alters the mineral composition of uroliths. urolith submissions containing stones of 100% struvite (n = 5 dogs and 5 cats), 100% calcium oxalate (n = 5 dogs and 5 cats), 100% calcium phosphate apatite (n = 5 dogs and 5 cats), 100% cystine (n = 5 dogs and 5 cats), 100% ammonium urate (n = 5 dogs and 5 cats), and 100% silica (n = 5 dogs), preserved only by air drying, were tested. one urolith from each submission was quantitatively analyzed by polarized light microscopy or infrared spectroscopy. a subsequent urolith from the same submission was immersed in 1 ml of 10% buffered formalin for 48 hours at room temperature. uroliths were then air dried for 30 minutes and the analysis was repeated. after exposure to formalin, portions of all 10 struvite uroliths were transformed into newberyite. three (1 dog and 2 cats) of 10 ammonium urate uroliths were completely dissolved. newberyite was not detected in any of the remaining uroliths; likewise, quantitative mineral analysis of non-struvite uroliths remained unchanged.
to avoid misdiagnosis of mineral composition, uroliths should not be immersed in formalin prior to analysis.

we previously reported that transfusion to normal dogs of autologous erythrocyte concentrates (prbcs) that had been stored for 21 days causes a profound inflammatory response (2× increase in leucocyte count and fibrinogen, 60× increase in c-reactive protein). we speculated that inflammation was due to cytokines produced during the storage period, and hypothesized that transfusion of fresh (f) prbcs would elicit less inflammation than stored (s) prbcs. a whole blood unit was collected from healthy dogs (n = 10) for prbcs on day 0, then again on day 32. on day 35, dogs received an autologous transfusion of prbcs stored for either 35 days (s, n = 5) or 3 days (f, n = 5). cbcs and in-tem thromboelastometry (ct: coagulation time, cft: clot formation time, a: alpha, mcf: maximum clot firmness) were evaluated on blood samples collected at 0 (pre) and 5, 9, 24, 48, and 72 hours after transfusion. fresh prbcs did not elicit any change in leucocytes, platelets, or thromboelastometry. stored prbcs elicited a degenerative left shift (5 hr) followed by a regenerative left shift (9-48 hr), thrombocytopenia (36% decrease at 5 hr), and marked hypocoagulability characterized by prolonged ct (5, 9, 24 hr) and cft (5, 9 hr), and decreased alpha (5, 9 hr) and mcf (5, 9, 24 hr). data are mean (sd); differences between groups f and s were tested by t-test (p < 0.05), and changes from baseline by repeated-measures anova (p < 0.05). transfusion of autologous stored prbcs elicits a greater inflammatory response than fresh prbcs, and results in hypocoagulability on thromboelastometry.

clopidogrel is a potent antiplatelet drug that is gaining popularity in veterinary medicine for antithrombotic therapy. the parent molecule is an inactive prodrug that must be converted by hepatic isozymes to an active metabolite.
the majority of the parent molecule is directed to the formation of inactive metabolites, with only an extremely small proportion directed to the formation of the active metabolite. multiple hepatic isozymes are responsible for the formation of the active metabolite. a non-specific hepatic isozyme inducer such as rifampin could increase formation of the active metabolite of clopidogrel, thereby increasing the pharmacodynamic response, which may allow a reduced drug dose to achieve a clinical effect. we have previously presented data supporting an increased pharmacodynamic response to clopidogrel after rifampin therapy. the goal of this study was to demonstrate an increased pharmacokinetic response to clopidogrel after rifampin induction of hepatic isozymes. six healthy, purpose-bred dogs were used for this study. the pharmacokinetics of clopidogrel were determined by measuring the parent molecule, the primary inactive metabolite and the active metabolite by lc/ms/ms. the pharmacodynamics of clopidogrel were determined by measuring collagen-induced whole blood aggregation. blood samples were collected prior to clopidogrel administration (baseline), after 7 days of 2 mg/kg clopidogrel po q 24 hrs, and after 7 days of 2 mg/kg clopidogrel po q 24 hrs + 10 mg/kg po q 12 hrs rifampin. given the absence of a known standard for the active metabolite, only a semi-quantitative assessment of active metabolite concentration can be made. there was no identifiable active metabolite peak at baseline or after clopidogrel treatment. however, with combined clopidogrel and rifampin administration an active metabolite peak was identified in all dogs, with a mean area of 41.7 ± 23.5. the development of the active metabolite peak was associated with an increase in the pharmacodynamic response to clopidogrel in the dogs.
this is the first study in any species to document increased formation of the active metabolite of clopidogrel in response to a strong, non-specific hepatic isozyme inducer. this increased pharmacokinetic response was associated with an increased pharmacodynamic response to clopidogrel. these data provide supportive evidence for developing therapeutic protocols to improve the pharmacodynamic response to clopidogrel in dogs, which may reduce dosing requirements or correct a subtherapeutic pharmacodynamic response.

critical illness-related corticosteroid insufficiency (circi) has been identified in humans, foals, dogs and cats as lower-than-expected circulating cortisol concentrations and/or a blunted cortisol response to acth stimulation. our purpose was to determine if circi exists in critically ill horses. endogenous plasma acth and serum cortisol concentrations, and cortisol at t = 0 and t = 30 min after 0.1 mg/kg cosyntropin, were measured by radioimmunoassay in horses with colic or systemic illness on admission and on days 2, 4 and 6 of hospitalization. horses were divided into mild, moderate, or severe illness groups based on clinicopathologic data. inappropriately low cortisol was defined as endogenous cortisol < mean − 1 sd achieved after administration of 0.1 mg/kg cosyntropin to normal horses (< 269 nmol/l). inadequate delta cortisol was defined as < mean delta cortisol in normal horses after 0.1 mg/kg cosyntropin (< 159 nmol/l). cortisol, acth and delta cortisol were compared between groups using anova, with p < 0.05 considered significant. fifty-eight horses classified as having mild (11), moderate (30) and severe (17) disease at admission had survival rates of 100%, 97% and 35%, respectively. admission acth and cortisol concentrations were highest in severely ill horses (93 ± 198 pg/ml, 361 ± 137 nmol/l) compared to moderately (31 ± 36, 279 ± 137) and mildly ill horses (18.0 ± 12.0, 237 ± 133).
admission cortisol concentrations were higher overall in severely ill horses (p = 0.016), but were low in 24% (4/17). admission delta cortisol was low in 85% (11/13) of severely ill horses, and was associated with marked adrenal hemorrhage in non-survivors. severely ill horses have high cortisol and acth, but low cortisol and delta cortisol may indicate circi secondary to adrenal hemorrhage.

equine pituitary pars intermedia dysfunction (ppid) is a common endocrinopathy of aged horses that results from neurodegeneration of the dopaminergic periventricular neurons that innervate the intermediate lobe of the pituitary. factors that initiate spontaneous dopaminergic neurodegenerative disease remain elusive; however, accumulation of misfolded a-synuclein protein and dysfunctional protein clearance have been implicated. misfolded protein accumulation occurs due to increased protein production or decreased clearance of damaged macromolecules through the process of autophagy. while we have previously demonstrated that horses with ppid have increased a-synuclein in the periventricular neurons compared to controls, it remains unknown whether the protein accumulates due to increased production or decreased clearance. we hypothesized that autophagy is decreased in the pituitary neurointermediate lobe of horses with ppid compared to controls. neurointermediate lobe pituitary tissue was collected from horses with ppid (n = 12) and healthy horses (n = 37, 2-35 years). real-time pcr was used to determine the relative expression of autophagy genes (mtor, beclin1, atg12, atg7, atg5, pink, lamp2) and a-synuclein; relative gene expression in horses with ppid was compared to healthy horses by t-test following log transformation. a pearson coefficient of correlation was calculated comparing a-synuclein expression with autophagy gene expression.
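relative gene expression from real-time pcr, as compared between ppid and healthy horses above, is most commonly computed with the 2^−ΔΔct (livak) method; the abstract does not state its exact normalization, so this is a sketch of the common approach with hypothetical ct values.

```python
# 2^-ddct relative expression (livak method) - illustrative sketch only.
# ct values below are hypothetical, not measurements from the study.
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    d_case = ct_target_case - ct_ref_case  # delta-ct in the diseased sample
    d_ctrl = ct_target_ctrl - ct_ref_ctrl  # delta-ct in the control sample
    ddct = d_case - d_ctrl                 # delta-delta-ct
    return 2.0 ** (-ddct)                  # fold change vs. control

# e.g. an autophagy gene vs. a housekeeping reference, ppid vs. healthy horse:
print(fold_change(24.0, 18.0, 26.0, 18.0))  # ddct = -2, i.e. 4-fold upregulation
```

each pcr cycle roughly doubles the amplicon, which is why a delta-delta-ct of −2 corresponds to a 4-fold increase relative to the reference gene and control sample.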
the expression of a-synuclein, autophagy-related genes (atg12, beclin1, lamp2), and mtor was greater in horses with ppid than in healthy horses. age was not correlated with a-synuclein or autophagy gene expression. there was a significant positive correlation between expression of a-synuclein and beclin1, atg12, atg7, atg5, and pink, but not mtor expression. accumulation of a-synuclein protein in horses with ppid may result from increased a-synuclein expression. autophagy genes are upregulated in horses with ppid, suggesting a compensatory response, although these findings need to be confirmed by demonstrating an increased functional response. a-synuclein expression was positively correlated with expression of autophagy genes except mtor, suggesting a-synuclein may stimulate autophagy in an mtor-independent manner.

acvim forum session 86a: efficacy of delayed antiviral therapy against ehv-1 challenge. lk maxwell1, ll gilliam1, n pusterla2, r carmichael1, rw eberle1, jw ritchey1, tc holbrook1, t gull1, gb rezabek1, d mcfarlane1, cg macallister1. 1oklahoma state university, stillwater, ok; 2university of california, davis, ca.

equine herpesvirus type 1 (ehv-1) outbreaks are often not recognized until exposed horses are at immediate risk of developing equine herpes myeloencephalopathy (ehm). the objective of this study was to determine whether delayed therapy with the antiviral drugs valacyclovir or ganciclovir could protect those horses most at risk for ehm. eighteen aged (> 20 years) mares were randomized to treatment: no therapy (control), oral valacyclovir therapy, or intravenous ganciclovir therapy. drug administration was initiated at the onset of the second febrile phase, between days 4-6 after ehv-1 inoculation (pi), and continued for one week. neurological examinations were performed prior to the study and for three weeks pi. one horse was excluded from the study for failure to become febrile.
body temperature was significantly lower in the ganciclovir-therapy horses compared to control horses on days 6-8 pi (p < 0.05), whereas valacyclovir-therapy horses did not differ from control horses. viremia in whole blood, as determined by pcr, was also lower in the ganciclovir-therapy horses on days 7-10 pi and in the valacyclovir-therapy horses on day 7 pi (p < 0.05). although antiviral drug administration did not reduce the risk of ataxia (p = 0.06) or nasal shedding, ganciclovir therapy did decrease the severity of ataxia (p < 0.05) compared to valacyclovir-therapy and control horses: 0/6, 2/5, and 4/6 horses, respectively, developed at least a two-grade change in ataxia. in summary, ganciclovir administration provided better protection against ehm than did valacyclovir when therapy was initiated just prior to the onset of neurological disease.

vaccination is among the most important methods of prophylaxis against equine influenza virus (eiv), a pathogen in which continuous antigenic drift can lead to vaccine failure. a 6-month duration of immunity (doi) challenge infection study was conducted using commercial inactivated vaccines containing different strains of a/equine/2 influenza virus, including innovator™, containing kentucky/97 (pfizer animal health, new york, ny), and calvenza™, containing a combination of ohio/2003, kentucky/95, and newmarket/93 (boehringer ingelheim vetmedica, st. joseph, mo). the challenge virus strain was colorado/07, the most contemporary challenge strain currently in use. the study design was a blinded, randomized challenge trial. three groups of 10 yearling ponies with no history or serological evidence of eiv infection were established. each group received one of three treatments: vaccination with innovator; vaccination with calvenza; or injection with a saline placebo.
each treatment was administered 3 times, with intervals of 1 month between the first two treatments and 3 months between the second and third treatments. all ponies were challenged by nasal nebulization of 5 × 10^7 eid50 of influenza virus a/eq/2/colorado/07 six months after the third treatment. clinical signs of disease, including rectal temperature, nasal discharge, anorexia, coughing, and depression, were recorded daily for 2 days prior to challenge infection and for 14 days post-challenge. nasal shedding of eiv was measured on the same days using a real-time pcr test procedure. eiv-specific antibody responses were measured by elisa. differences between groups were analyzed by non-parametric repeated-measures anova, and differences were declared significant when p < 0.05. all control group ponies demonstrated clinical signs of disease consistent with eiv infection post-challenge, including pyrexia, nasal discharge, inappetence and partial anorexia. these signs were significantly lower in both vaccine groups; mean body temperature was elevated (> 101.5°f) for 8 days in controls, but only 2 days in the vaccine groups. nasal shedding of eiv was detected in all ponies in all groups; over the duration of the study, the calvenza group shed significantly less virus than the innovator and control groups. over time, antibody titers were significantly higher in the calvenza than in the innovator group, and both were significantly greater than controls. this study demonstrated that both current commercial inactivated eiv vaccines have a duration of clinical protection of at least 6 months after a highly pathogenic challenge with a recent eiv isolate. both antibody responses and virological protection differed between the vaccines. formulation differences between the vaccines, including the eiv antigens employed, may have contributed to this performance difference.
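eid50 titers like the 5 × 10^7 challenge dose above are classically estimated from serial-dilution egg titrations by the reed-muench method. the abstract only quotes the final titer, so the dilution data in this sketch are hypothetical.

```python
# reed-muench estimate of a 50% endpoint titer (e.g. eid50) - sketch.
def reed_muench(log10_dilutions, infected, total):
    """rows ordered most concentrated first (e.g. [-5, -6, -7, -8]),
    10-fold steps assumed; returns the titer exponent (-log10 endpoint)."""
    n = len(log10_dilutions)
    uninfected = [t - i for i, t in zip(infected, total)]
    # reed-muench cumulation: infected accumulate from the dilute end upward,
    # uninfected from the concentrated end downward
    cum_inf = [sum(infected[i:]) for i in range(n)]
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(n)]
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(n - 1):
        if pct[i] >= 50.0 > pct[i + 1]:
            # proportionate distance between the bracketing dilutions
            pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            return -(log10_dilutions[i] - pd)
    raise ValueError("50% endpoint not bracketed by the dilution series")

# hypothetical titration: 10/10, 7/10, 2/10, 0/10 eggs infected
print(round(reed_muench([-5, -6, -7, -8], [10, 7, 2, 0], [10, 10, 10, 10]), 2))
```

the cumulation step encodes the assumption that an egg infected at a high dilution would also have been infected at any lower dilution, which smooths the raw counts before interpolating the 50% endpoint.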
degenerative myelopathy (dm) may be homologous to a form of amyotrophic lateral sclerosis (als) in humans, for which excitotoxic and immunologic pathogeneses have been described. the aims of this study were to determine (i) the presence or absence of abnormalities in concentrations of csf amino acid (aa) neurotransmitters (glutamate, glycine and gamma-aminobutyric acid (gaba)) and cytokines in dogs with dm and, if present, (ii) to investigate associations with disease severity. twenty-two dogs with histopathologically confirmed dm, 21 dogs with suspected dm based on thorough diagnostic investigation, and 42 clinically normal age-matched control dogs were included in the study. the neurological severity of the dm dogs was graded (1-4) using an established scale. csf was evaluated for glutamate, glycine and gaba by high-performance liquid chromatography, and for gm-csf, ifn-g, il-2, il-4, il-6, il-7, il-8, il-10, il-15, il-18, ip-10, kc (keratinocyte chemoattractant), mcp-1 (monocyte chemotactic protein-1) and tnf-a using a commercially available canine multiplex immunoassay (millipore, billerica, ma). all data analyses were performed using sas v9.2 (cary, nc). analyte levels were compared between dm-confirmed, dm-suspected and control dogs by analysis of variance (anova). spearman correlation was used to test for correlations between analyte levels and neurological grades. all hypothesis tests were 2-sided with a = 0.05. there were no significant differences in individual csf analytes between dm-confirmed and dm-suspected dogs. glutamate levels were not significantly different between dm-affected (mean = 0.09 mg/ml; range = 0.01-0.29; sd = 0.069) and control dogs (mean = 0.07 mg/ml; range = 0.01-0.29; sd = 0.056). control dogs (mean = 0.86 mg/ml; range = 0.08-1.16; sd = 0.213) had significantly higher levels of gaba (p < 0.0001) than dm dogs (mean = 0.12 mg/ml; range = 0.07-0.79; sd = 0.12).
control dogs (mean = 2.14 mg/ml; range = 0.01-3.83; sd = 0.76) also had significantly higher glycine concentrations (p < 0.0001) than dm dogs (mean = 0.45 mg/ml; range = 0.01-0.98; sd = 0.24). dm-affected dogs also had significantly higher levels of il-2 (p = 0.03), kc (p < 0.0001) and mcp-1 (p = 0.005) than control dogs. neurotransmitter levels were not significantly associated with neurological grade. kc levels were significantly higher in the least affected dogs (p = 0.0015); there were no other associations between disease severity and analyte concentrations. dm-affected dogs have an imbalance of csf aa concentrations, creating a relatively excitotoxic environment. reports in human als confirm an imbalance between csf excitatory and inhibitory aas, suggesting a pathogenic role for excitotoxicity in als. it also appears that dm-affected dogs have increases in csf cytokines and chemokines suggestive of an immunologic component to the pathogenesis, similar to als. further prospective analysis of dm is warranted to evaluate the effect of treatment on csf variables.

study of the pathogenesis of neuropathic pain (np) and syringomyelia (sm) in association with chiari-like malformation (clm) in dogs has focused on the anatomical anomalies and secondary cerebrospinal fluid (csf) flow abnormalities. neuropathic pain in humans has been associated with abnormalities of neurotransmitters such as glutamate and serotonin, as well as with immunologic mechanisms. the aim of this study was to investigate csf neurotransmitter and cytokine levels in brussels griffon dogs (bgs) with clm, sm and np. as part of an mri study investigating the prevalence of sm in bgs, atlanto-occipital csf was acquired from 46 dogs and stored at −80°c until analysis. all dogs underwent a neurologic exam prior to mri; osirix software was used to measure sm, and the presence of cerebellar herniation and deviation was recorded.
deproteinized csf samples were analysed for serotonin (ng/ml) and for glutamate, glycine and gaba (mg/ml) by high-performance liquid chromatography. all csf samples were evaluated simultaneously for gm-csf, ifn-g, il-2, il-4, il-6, il-7, il-8, il-10, il-15, il-18, ip-10, kc, mcp-1 and tnf-a. a commercially available canine multiplex immunoassay (millipore, billerica, ma) was used for the cytokine analysis (pg/ml). student's t-tests were used to compare the means of neurotransmitter and cytokine values between groups with and without skull abnormalities or spinal pain. simple pearson's correlation was used to test for correlations of neurotransmitter and cytokine values with syrinx dimensions, and for correlations of neurotransmitter with cytokine values. all hypothesis tests were 2-sided and the significance level was a = 0.05. np was detected in 8 dogs (17%); sm was present in 24 dogs (52%); and clm was detected in 24 dogs (52%). ifn-g levels were significantly lower in dogs with np than in those without (p = 0.036). there were significant positive correlations between syrinx size and il-8 (p = 0.017), kc (p = 0.025) and mcp-1 (p = 0.003). there were significant negative correlations between ifn-g and syrinx height (p = 0.025) and extent (p = 0.042). there was a significant negative correlation between il-2 and syrinx height (p = 0.042). neurotransmitter levels were not associated with skull abnormalities or spinal pain, but there were positive correlations of glycine with il-2 (p = 0.004), and of mcp-1 with glutamate (p = 0.0147) and serotonin (p = 0.0059). the size of the syrinx in bgs with sm is associated with elevations of several cytokines, but only a decrease of ifn-g was associated with np. based on this study, it does not appear that excitotoxicity plays a role in either sm development or np. further work is justified on the role of the immune system in clm, sm and np.
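the syrinx-size vs cytokine comparisons above use pearson's correlation coefficient; it can be written out in a few lines of pure python. the paired measurements below are made-up illustrations, not the study's data.

```python
import math

# pearson's correlation coefficient r for two paired samples (sketch).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical paired measurements (syrinx size in mm, il-8 in pg/ml):
syrinx_size = [1.0, 2.0, 3.0, 4.0, 5.0]
il8 = [2.1, 2.9, 4.2, 4.8, 6.0]
print(round(pearson_r(syrinx_size, il8), 3))
```

r near +1 corresponds to the positive syrinx-size/cytokine correlations reported above, and r near −1 to the negative ifn-g relationships; the p-values quoted in the abstract additionally require a significance test of r against the sample size.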
current knowledge about the conservative management of disk-associated cervical spondylomyelopathy (da-csm) is rather limited and mainly based on retrospectively retrieved data. the goals of this study were to prospectively evaluate the evolution of clinical signs in dogs treated conservatively for da-csm. additionally, several potential prognostic parameters and the correlation of initial clinical signs with magnetic resonance imaging (mri) and transcranial magnetic stimulation (tms) were investigated. twenty-one dogs were included. after neurological evaluation, neurological status was graded from 0 (normal) to 6 (tetraplegia). all animals underwent low-field mri and tms with measurement of onset latencies and peak-to-peak amplitudes from the extensor carpi radialis and cranial tibial muscles. from the mr images, the following dimensions were calculated: remaining spinal cord area; compression ratio; vertebral occupying ratio of the spinal cord; canal height to body height ratio (cbr); canal height to body length ratio (cblr); and the canal compromise ratio. intraparenchymal signal intensity (isi) changes were graded from 0 to 3. all dogs were reevaluated by the same person after 1, 3, 6, 12, and 24 months. eight of 21 dogs (38%) experienced a positive clinical evolution, with improvement of clinical signs or stabilization of mild clinical signs. all dogs with a negative clinical evolution 1 month after diagnosis experienced further progression of clinical signs, resulting in a poor outcome. the opposite was true for all dogs with a positive clinical evolution after 1 month. outcome was also significantly associated with the remaining spinal cord area and the vertebral canal compromise ratio. prognosis was not significantly affected by clinical presentation or tms. progression of clinical signs in unsuccessfully treated dogs was generally characterized by a rapid and dramatic deterioration of neurological status.
there were no significant correlations between clinical presentation, mri and tms. two dogs underwent necropsy and histopathological examination, which revealed chronic wallerian degeneration and segmental myelomalacia in both cases. the results of this study suggest that conservative treatment of da-csm is associated with a rather guarded prognosis. clinical evolution 1 month after diagnosis and selected mri parameters can be considered prognostic indicators. the lack of correlation between clinical presentation and outcome, medical imaging and electrophysiological evaluation is disturbing and warrants further investigation.

an mri-guided stereotactic brain biopsy system has not been clinically evaluated in dogs. the purpose of this study was to determine the ability of the brainsight™ system to obtain histologically diagnostic samples and to assess the impact of this procedure on neurologic status for 72 hours after the biopsy. five dogs with mri-definable lesions in the brain have been enrolled. breeds included a pitbull mix, pembroke welsh corgi, french bulldog, border terrier and west highland white terrier. ages ranged from 5-11 years and weights from 8.5-18.8 kg. dogs presented with seizures (n = 5), ambulatory paresis (n = 4), unilateral blindness (n = 2) and head tilt (n = 1). one dog had a normal neurologic exam. lesions chosen for biopsy were in the olfactory and/or frontal lobes (n = 3), parietal lobe (n = 1), and pyriform lobe (n = 1). lesions were between 14-22 mm in diameter. all lesions were well circumscribed and contrast-enhancing except for one. histologic diagnoses of meningioma (n = 3) and granulomatous meningoencephalitis (n = 1) were made. the poorly circumscribed, non-contrast-enhancing frontal mass yielded non-specific necrosis. following biopsy, three dogs returned to pre-biopsy neurologic status within 12 hours. the french bulldog took 48 hours to return to its previous neurologic status due to brachycephalic syndrome that required oxygen support.
one dog had acute respiratory arrest 16 hours post-biopsy; necropsy is pending. these results suggest that this mri-guided biopsy system can provide an accurate histologic diagnosis of brain lesions. biopsies of poorly circumscribed and non-contrast-enhancing brain lesions may be less diagnostic. further evaluation is ongoing to determine the true diagnostic yield and complication rate of this procedure.

concurrent malformations of the craniocervical junction are commonly identified in humans with chiari type i malformation. recent evidence suggests such craniocervical junction abnormalities (cjas) also occur in dogs suspected of having chiari-like malformation (clm). the purpose of this study was to objectively describe morphometric features of the craniocervical junction region of dogs with suspected clm and to investigate associations between these features and the occurrence of other malformations in this region. magnetic resonance (mr) and computed tomographic (ct) images from 274 dogs with clm were evaluated. three regions of neural tissue compression were assessed: cerebellar compression (cc); ventral compression at the c1/c2 articulation, termed 'medullary kinking' (mk); and dorsal compression (dc) at the c1/c2 articulation. a compression index (ci) was calculated for all abnormal regions for each dog. multiple logistic regression analysis was performed (p < 0.05) to ascertain whether ci values for the different regions of compression were associated with the incidence of other craniocervical junction abnormalities. 68% of dogs had mk and 38% of dogs had dc. 28% of dogs also had evidence of atlanto-occipital overlapping (aoo).

medical infrared imaging (mii) is a non-invasive diagnostic imaging technique that measures skin surface temperature and generates thermal pattern maps based on predetermined color scales.
because skin temperature, dependent on regional perfusion, is under direct control of the sympathetic nervous system, mii provides information about the function of the autonomic nervous system. because of recent advances in technology and lack of sedation needed to image patients, mii has potential use as a screening test for a variety of conditions that may result in autonomic dysregulation like chiari-like malformation in dogs (clm). the purposes of this study were to establish a mii protocol for dogs suspected of having clm, to identify thermal imaging patterns for various regions of interest (roi), to evaluate changes in thermal patterns and compare the results to those of mri findings, considered the standard for diagnosing clm in dogs. one hundred and five cavalier king charles spaniel dogs with clinical signs attributable to clm and confirmed clm with mri were evaluated with a complete blood count and chemistry profile, examination by a board certified surgeon/neurologist, multidetector ct scan of the craniocervical junction, whole body mri and mii. the protocol for thermal imaging included cranial and caudal views of the body, full lateral right and left body views, dorsal views of the head and body, and right and left lateral views of the head. thermal patterns were assessed with custom image recognition software. after each dog was imaged awake, general anesthesia was administered and the dogs re-imaged using the same protocol. mri findings in dogs with severe or moderate cerebellar compression and cerebellar herniation were compared with mii results. the top of head and front of head roi were 89.2% and 97.3% successful in identifying dogs with clm. based on these preliminary findings, mii may be a viable screening tool to detect clm in dogs. medical infrared imaging (mii) is an imaging technique that measures skin surface temperature derived from cutaneous perfusion and generates thermal pattern maps based on color scales. 
mii has been used as a test for a variety of conditions that cause autonomic dysregulation resulting in altered cutaneous perfusion. acute thoracolumbar intervertebral disk disease (tlivdd) is common in dogs. the purpose of this study was to: 1) determine the success of mii in identifying dogs with tlivdd, 2) compare the mii localization with mri results and surgical findings, and 3) determine if the mii pattern returns to that of normal dogs following decompression surgery. 72 small breed chondrodystrophic dogs with tlivdd confirmed with mri and 14 dogs with no tlivdd were evaluated with mri and mii. regions correlating with the intervertebral disk spaces were analyzed for average temperatures and thermographic patterns. thermal patterns were assessed with computer recognition pattern analysis (crpa) software. 21 dogs were re-evaluated 8 weeks after surgery using the same protocol. when analyzing temperature averages over a region, no significant difference was found between control and affected dogs. crpa was 90% successful in differentiating normal from affected dogs. crpa was 97% successful in identifying the affected intervertebral disk space when compared with mri and surgical findings. based on these findings, mii may be a viable screening tool to detect tlivdd in dogs. microglia physiologically show regional topographical differences in immunophenotype and function within the central nervous system, indicating the capacity for a prompt response to pathological stimuli such as trauma. spinal cord injuries (sci) consist of a primary injury encompassing the mechanical impact and the "secondary wave" of injury occurring minutes to weeks later and comprising various consecutive effects such as increased production of free radicals, excessive release of excitatory neurotransmitters and inflammatory reactions. activated microglia have the potential to perform some of these reactions; their contribution to the secondary wave is therefore controversially discussed.
it has to be considered a double-edged sword, as both beneficial and deleterious effects have been attributed to these cells. the purpose of the presented study was to assess microglial involvement, particularly in the "secondary wave" following sci. microglia from 15 dogs with sci were isolated and characterized ex vivo in terms of morphology, immunophenotype, and function by flow cytometry. the results were compared to region-specific findings obtained from healthy control dogs (n = 30). the histopathological exam confirmed the diagnosis of sci in the cervical (n = 5) and thoracolumbar (n = 10) spinal cord, and revealed a significant activation of microglia/macrophages and upregulation of myelinophagia in dogs with sci 5 days or longer prior to euthanasia. microglial ex vivo examination showed significantly increased expression of b7-1, b7-2, mhc ii, cd1c, icam-1, cd14, cd44, and cd45, and significantly enhanced phagocytosis and generation of reactive oxygen species (ros) in sci compared to healthy controls. microglial cells seem to be highly activated following sci, with an immunophenotype indicating their active role in co-stimulation of t cells, in leukocyte adhesion and aggregation, and in lipid and glycolipid presentation. microglial phagocytosis might play a pivotal role in removal of injured or damaged cells and initialize subsequent healing processes. however, as ros can be directly neurotoxic, enhanced microglial ros generation might lead to bystander damage of the traumatized spinal cord and might therefore add to the deleterious effects of the secondary wave. modulating the microglial response in sci might be a valuable novel therapeutic strategy alleviating further damage to the spinal cord. thymidine kinase (tk) is a soluble biomarker, expressed during s-phase, that functions in a salvage pathway for dna synthesis, and can be measured in serum. tk activity correlates with stage, prognosis, and relapse in canine and human lymphoma.
we previously reported the results of a pilot study evaluating tk activity in archived canine osteosarcoma, transitional cell carcinoma, and hemangiosarcoma (hsa) sera, and found elevated tk activity in 80% of canine hsa sera evaluated. the purpose of this study was to prospectively evaluate serum tk activity in a large number of dogs presenting to emergency clinics with hemoabdomen and a splenic mass, to determine if tk activity could be used as a noninvasive means to distinguish hsa from benign conditions in this population. dogs presenting with hemoabdomen and a splenic mass identified on ultrasound examination were studied. serum was collected prior to anesthesia, euthanasia or surgical intervention and frozen until batch analysis. tissue from all patients was evaluated histologically by a single pathologist. sera from age-matched normal dogs comprised a control population. an elisa using azidothymidine as a tk1 substrate was used. comparisons between groups were made using 2-tailed student's t-tests, and receiver-operator characteristic (roc) curves were generated. sixty-two patients and 39 normal controls were studied. there were 35 dogs with hsa, 10 dogs with other splenic neoplasia, and 17 dogs with benign diseases. using a training set of 24 normal dogs, a cutoff of 6.55 u/l was established from the roc curve. tk activity was significantly higher (p < 0.0001) in dogs with hsa than in the validation set of 15 normal dogs (mean ± sd = 17.7 ± 4.5 and 2.0 ± 0.6, respectively), but not between dogs with hsa and benign splenic disease (mean ± sd = 7.02 ± 3.7, p = 0.13). using a cutoff of 6.55 u/l, tk activity demonstrated a sensitivity of 0.54, specificity of 0.76, positive predictive value of 0.83 and negative predictive value of 0.45 for distinguishing hsa from benign splenic disease.
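the sensitivity, specificity, and predictive values reported above follow from a standard 2 × 2 confusion matrix. a minimal sketch in python; the raw counts below are back-calculated approximations consistent with the 35 hsa and 17 benign cases at the 6.55 u/l cutoff, not data taken from the abstract:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test metrics for a binary cutoff."""
    sensitivity = tp / (tp + fn)   # fraction of diseased dogs testing positive
    specificity = tn / (tn + fp)   # fraction of benign cases testing negative
    ppv = tp / (tp + fp)           # probability of disease given a positive test
    npv = tn / (tn + fn)           # probability of no disease given a negative test
    return sensitivity, specificity, ppv, npv

# approximate counts (hypothetical reconstruction): 19 of 35 hsa above cutoff,
# 13 of 17 benign below cutoff
sens, spec, ppv, npv = diagnostic_metrics(tp=19, fp=4, fn=16, tn=13)
```

these four numbers reproduce the reported 0.54 / 0.76 / 0.83 / 0.45 to two decimal places, which is a useful sanity check when reading abstracts that report metrics without the underlying table.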
when interval thresholds of < 1.55 and > 7.95 u/l were used together, diagnostic utility was markedly increased for distinguishing both hsa versus normal and hsa versus benign disease. in conclusion, serum tk evaluation may assist in detection of canine hsa, and may also discriminate between benign disease and hsa in dogs with hemoabdomen and a splenic mass. t cell chronic lymphocytic leukemia (cll) is a heterogeneous disease that affects a number of dog breeds. cll patients have variable disease outcomes. the objectives of this study were to use gene expression profiling of cd8 t cell leukemias with variable outcomes in order to identify markers that can be used in routine diagnostic tests to distinguish good from poor prognosis disease, and to identify potential targets for novel therapy. gene expression profiling of 12 cd8 t cell leukemias (7 good, 5 poor prognosis) was conducted. samples from 6 normal dogs were also profiled. several differentially expressed genes were found, including cd9, cd94, and cd25. these were selected for further study using flow cytometry to determine expression of protein on the cell surface. seventy-nine cases of cd8+ t cell leukemia were screened for cd9 expression. forty-seven had associated outcome information. based on analysis to date, cd9 expression as assessed by flow cytometry does not appear to provide prognostic information. a monoclonal antibody to cd25 was recently made available. to date 33 patients with cd8 t cell leukemia have been profiled. cd25 is variably expressed on t cell leukemias compared to normal cd8 t cells. cd25 is the receptor for interleukin 2. cyclosporin, a commonly used immunosuppressive drug, inhibits il-2 production, and has been used to treat a subset of t cell leukemias in people. thus, the finding that cd25 is upregulated on t cell leukemias compared with normal t cells suggests a possible new therapeutic avenue.
recent molecular studies have revealed a highly complex bacterial microbiota in the intestine of dogs. there is mounting evidence that microbes play an important role in the pathogenesis of acute and chronic enteropathies of dogs, including idiopathic inflammatory bowel disease (ibd). similarly, compositional changes of the intestinal bacterial ecosystem have been associated with ibd in humans. the aim of this study was to characterize the bacterial microbiota in dogs with various gastrointestinal disorders using a next generation sequencing technique. fecal samples were obtained from healthy dogs (n = 31), dogs with acute uncomplicated diarrhea (n = 7), dogs with acute hemorrhagic diarrhea (ahd; n = 13), and dogs with active (n = 8) and therapeutically controlled ibd (n = 10). the bacterial composition was analyzed by massively parallel 16s rrna gene 454-pyrosequencing. differences between groups were analyzed using mann-whitney u tests and kruskal-wallis tests followed by dunn's multiple comparison tests. statistical significance was set at p < 0.05. significant differences in the proportions of several bacterial groups were identified between healthy and diseased dogs. dogs with gastrointestinal disease had significantly higher proportions of proteobacteria (p < 0.01). proportions of firmicutes were lower in diseased dogs, but this difference did not reach significance (p = 0.06). within the firmicutes the most notable findings were decreases in bacterial groups belonging to clostridium clusters iv and xiva (i.e., ruminococcus, dorea, and faecalibacterium spp.; p < 0.01 for all). dogs with ahd had the most profound changes of the microbiota, followed by dogs with acute uncomplicated diarrhea, and dogs with active ibd. faecalibacterium spp. was the bacterial group most prominently depleted in dogs with active ibd, but was not significantly different between healthy dogs and dogs with therapeutically controlled ibd (p = 0.66).
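the group comparisons above rely on the mann-whitney u test. a stdlib-only sketch of the rank-sum statistic with the large-sample normal approximation (no tie correction, so it is only an approximation when many proportions are tied; the data passed in below would be illustrative, not the study's):

```python
import math
from statistics import NormalDist

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation.

    Returns (U1, p). Midranks are used for ties, but no tie correction
    is applied to the variance, so p is approximate with heavy ties.
    """
    n1, n2 = len(a), len(b)
    combined = sorted(a + b)
    # assign each distinct value its midrank (average of tied 1-based ranks)
    rank_of = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        rank_of[combined[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
        i = j
    r1 = sum(rank_of[x] for x in a)              # rank sum of group a
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return u1, p
```

for the sample sizes in this study (7-31 per group), the normal approximation is the form most statistics packages default to; an exact-permutation version would be preferable for the smallest groups.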
results of this study revealed bacterial dysbiosis in fecal samples of dogs with various gi disorders. bacterial changes were more profound in dogs with severe disease, but were not identified in dogs with therapeutically controlled ibd, suggesting that the microbiota is stable in non-active disease. the bacterial groups identified are considered to be important short chain fatty acid producers and may serve as candidates for the diagnosis or therapeutic monitoring of gi disease. future studies are necessary to determine if these microbial changes correlate with functional changes in the intestinal microbiota. ciprofloxacin oral tablets, available in a generic formulation for people, are widely used for treatment in dogs. oral absorption data for ciprofloxacin in dogs has been variable, and too limited to guide accurate dosing. subsequently, published doses for dogs in veterinary formularies have varied from 5 to 25 mg/kg. this study was undertaken to explore the factors that may affect oral absorption of generic ciprofloxacin in dogs, and to derive a pharmacokinetic-based dose for treating susceptible bacteria. six healthy adult beagle dogs were used for the study (11.2 kg mean weight). after placing jugular vein catheters for collecting blood samples, these dogs were administered either a single oral dose of ciprofloxacin (250 mg tablet; mean dose 23 mg/kg), or an intravenous (iv) dose (10 mg/kg; 2 mg/ml solution). a randomized crossover design was used with a washout time between treatments. blood was collected for plasma drug analysis for 24 hours. ciprofloxacin concentration in plasma was analyzed using high pressure liquid chromatography (hplc) and pharmacokinetics analyzed using a computer program. oral absorption was also evaluated via deconvolution analysis. the oral dose was well-tolerated, but the iv dose produced transient vomiting and depression in some dogs. 
after the oral dose, the peak plasma concentration (c max ) was 4.4 µg/ml (cv 55.9%), terminal half-life (t1/2) 2.6 hr (cv 10.8%), auc 22.5 µg·hr/ml (cv 62.3%), and systemic absorption (f) 63.7% (cv 41.6%). after the iv dose, the t1/2 was 3.7 hr (cv 52.3%), systemic clearance 0.587 l/kg/hr (cv 33.9%), and volume of distribution 2.39 l/kg (cv 23.7%). after examining the pharmacokinetic results from the oral dose, it was apparent that oral ciprofloxacin was absorbed well in some dogs (approximately 80%), but poorly in others (approximately 30%). to explore the factors that may have affected oral absorption, two high absorbers and two low absorbers were administered an additional oral dose as a 10 mg/ml solution (250 mg total dose) via gastric tube. after administration of the oral solution, the plasma concentrations were more uniform and consistent among dogs. absorption of the oral solution of ciprofloxacin was 71.0% (cv 7%), with a t1/2 of 3.1 hr (cv 18.6%) and c max of 4.67 µg/ml (cv 17.6%). therefore, it appears that inconsistent oral absorption of ciprofloxacin in some dogs may be formulation-dependent, and affected by tablet dissolution in the canine small intestine. doses were calculated using the data for oral tablets in these dogs. the pharmacokinetic-pharmacodynamic (pk-pd) target was an auc/mic ratio of 100. because of the wide range in oral absorption of tablets, the dose needed to reach the pk-pd target varied among dogs. canine distemper (cd) is a highly contagious, acute or subacute systemic viral disease of dogs and other carnivores which can be controlled efficiently by the use of modified live-virus (mlv) vaccines. however, mlv strains do cross-react with molecular diagnostic tests and cause significant confusion for clinicians. the purpose of this study was to use quantitative real-time pcr viral load information to differentiate between vaccine virus used in mlv vaccines and wildtype infections in dogs.
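the parameters in the ciprofloxacin study above (auc, terminal half-life, bioavailability f) are classic non-compartmental quantities. a sketch under the usual assumptions (linear trapezoidal auc, log-linear terminal phase); the concentration-time data used below are invented for illustration, not the study's:

```python
import math

def auc_trapezoid(times, concs):
    """Linear trapezoidal AUC from the first to the last sampling time."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for (t1, c1), (t2, c2) in zip(zip(times, concs),
                                             zip(times[1:], concs[1:])))

def terminal_half_life(times, concs, n_terminal=3):
    """t1/2 from a least-squares log-linear fit to the last n_terminal points."""
    ts, cs = times[-n_terminal:], concs[-n_terminal:]
    n = len(ts)
    mean_t = sum(ts) / n
    mean_lc = sum(math.log(c) for c in cs) / n
    slope = (sum((t - mean_t) * (math.log(c) - mean_lc) for t, c in zip(ts, cs))
             / sum((t - mean_t) ** 2 for t in ts))
    return math.log(2) / -slope          # slope is negative for a declining curve

def bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """F = (AUC_po / dose_po) / (AUC_iv / dose_iv), dose-normalized."""
    return (auc_oral / dose_oral) / (auc_iv / dose_iv)
```

the abstract's crossover design (oral then iv, with washout) is exactly what makes the dose-normalized f calculation valid within each dog.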
a real-time pcr test for cd virus (cdv) based on the p gene for phosphoprotein was used to determine viral loads in vaccinated and wildtype-infected animals. a total of 158 respiratory mucosal swab samples from mlv-vaccinated and asymptomatic dogs were obtained within the first 3 weeks after mlv vaccination. based on the viral load in vaccinated animals, a cutoff value was established for the differentiation of dogs with clinical signs of respiratory distress and presumably infected with a wildtype strain of cdv. two hundred clinical cases with known clinical and vaccination histories were analyzed to validate the cutoff value. the cdv real-time pcr proved to be of high analytical and diagnostic sensitivity: a standard curve was established using known numbers of cdv molecules to allow absolute quantitative cdv viral load data. the limit of detection was in the single-molecule range, while the limit of quantitation was established at around 10 molecules per pcr reaction. a comparison to ifa showed real-time pcr to be 30% more sensitive. the cdv viral load in vaccinated animals averaged 26,738 viral particles per swab. a cutoff value of 107,903 viral particles was calculated by adding 3 standard deviations to the average value. this cutoff value correctly detected 95.2% of the vaccinated samples. acutely infected dogs with cdv-compatible clinical signs have high viral loads, normally several logs higher than the cutoff value. in dogs with clinical distress, recent cdv mlv vaccination, and viral loads below the cutoff value, other infectious agents were detected by using a panel of real-time pcr tests. testing for additional infectious agents in clinical settings is important in order to explain clinical signs when viral loads below cutoff values indicate that cdv is not the cause of clinical signs. in conclusion, quantitative real-time pcr is a sensitive, rapid and reliable test regardless of recent vaccination.
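the cutoff logic described above (vaccinated-group mean plus three standard deviations) is a standard upper reference limit. a minimal sketch; the per-sample vaccinated loads are not given in the abstract, so the example values in the usage note are invented:

```python
from statistics import mean, stdev

def upper_cutoff(values, k=3):
    """Mean + k sample standard deviations: the rule used for the CDV cutoff."""
    return mean(values) + k * stdev(values)

def classify(viral_load, cutoff):
    """Loads above the cutoff suggest wildtype infection rather than vaccine virus."""
    return "wildtype-compatible" if viral_load > cutoff else "vaccine-compatible"
```

for example, `upper_cutoff([10, 12, 14])` gives 18.0, and any sample above that would be flagged wildtype-compatible. note the mean + 3 sd rule assumes roughly symmetric vaccinated-group loads; for the log-scale spread typical of viral loads, the same rule is often applied to log-transformed values instead.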
the use of a cutoff value will be of significant help to discriminate between vaccine interference and wildtype infection in clinical settings. feline ureteral obstructions are a common urinary dilemma, and traditional therapy is associated with substantial morbidity/mortality. feline nephrostomy tubes are reported as being effective when pelvic drainage is required. the biggest limitation is externalized drainage, requiring careful management to prevent infection/dislodgement. the development of an indwelling ureteral bypass using a combination locking-loop nephrostomy/cystostomy tube was adapted from humans, resulting in permanent indwelling drainage, reduced complications, and improved quality of life. the objective is to describe the technical and clinical outcome using a novel device called a subcutaneous ureteral bypass (sub) in cats with ureteral obstructions. fifteen cats (16 kidneys) had a sub placed for: ureterolithiasis (9), ureteral stricture (± stones) (6), and ureteral stent rejection (2). the median pre- and post-procedure creatinine was 6 mg/dl (range: 2.8-13) and 2.5 mg/dl (range: 2.4-9), respectively. the median pelvis diameters pre- and post-procedure were 15 (range: 7-28) and 5 mm (range: 3.4-6), respectively. six-french tubes were placed in 14, and 5 fr. in 2. the bypass remained indwelling for a median of > 240 days (range 64 to > 450). there were 3 major complications, resulting in nephrostomy tube dislodgement (2) and port leakage (1) 5 days after surgery. one patient with severe coagulopathy developed a clot, which resolved with tpa infusion through the port. no sub became occluded/obstructed long-term. overall, the use of a sub for cats with ureteral obstructions can be considered a functional option when other therapies have failed or are contraindicated. oxidative stress is considered central to the pathogenesis of many systemic diseases.
in humans, biomarkers of oxidative stress, antioxidant depletion and lipid peroxidation have been correlated with disease severity and associated with poor clinical outcomes. therapeutic antioxidant supplementation with nac in glutathione (gsh)-deficient patients has shown clinical benefits, including repletion of intracellular gsh levels. we have shown that clinically ill dogs are gsh-deficient, and that gsh deficiency correlates with mortality, but it is not clear whether there are direct benefits of antioxidant intervention in these patients. the purpose of this randomized, investigator-blinded, placebo-controlled, prospective study was to evaluate the effect of nac to normalize blood antioxidants (rbc reduced gsh (rbc gsh), plasma cysteine (cys), serum vitamin e (vit e), and whole blood selenium (se)), reduce lipid peroxidation (urine isoprostane/creatinine ratio (u i/c)), and improve illness scores (spi2) and outcome (survival to discharge) in clinically ill dogs. clinically ill client-owned dogs, admitted to the uw veterinary medical teaching hospital, that did not receive blood transfusions, tpn, vitamins, or antioxidants were eligible for the study. dogs enrolled in the study were randomized to receive iv infusions q. 6 h. of either nac (1 × 140 mg/kg and 6 × 70 mg/kg) or equal volumes of 5% dextrose (placebo) over 48 hours. at the time of enrollment, and 2 hours following the final 48-hour infusion, blood and urine were collected to quantify rbc gsh, cys, vit e, and se concentrations and u i/c ratios, and to calculate spi2 scores. rbc gsh and cys concentrations were quantified by hplc. commercially available hplc, atomic absorption spectroscopy, and eia were used to quantify vit e, se, and u i/c ratios, respectively. nonparametric statistical analyses were used, with results reported as medians and p < 0.05 considered significant. sixty-one ill dogs were randomized to either nac (n = 30) or placebo (n = 31).
overall, this group of ill dogs had significantly decreased rbc gsh (1.50 vs. 1.91 mm; p = 0.0013), vit e (27 vs. 56 µg/ml; p = 0.0002), and se (0.37 vs. 0.55 µg/ml; p = 0.0308) levels and elevated u i/c ratios (969 vs. 398 pg/mg; p = 0.0005) in comparison to healthy control dogs. dogs in the placebo group showed a significant further decrease in rbc gsh over the next 48 hours (1.48 to 1.42; p = 0.035). nac supplementation significantly increased plasma cys levels (9.1 to 15.1 µm; p < 0.0001), and prevented a further decline in rbc gsh (1.55 to 1.58 mm; p = 0.174). however, serum vit e (30 vs. 28 µg/ml), se (0.40 vs. 0.35 µg/ml), u i/c ratios (946 vs. 806 pg/mg), spi2 scores (0.74 vs. 0.75), and outcome (81% vs. 76%) were not significantly different between the nac and placebo groups after treatment. the results of this study further support that clinically ill dogs experience oxidative stress, and suggest that antioxidant supplementation with nac within the first 48 hours of hospitalization prevents further rbc gsh depletion. further studies are necessary to investigate whether longer duration or combined antioxidant supplementation normalizes the redox state and impacts long-term outcome. diabetes mellitus in cats is very similar to type ii diabetes in humans, being preceded by a period of insulin resistance. evaluating insulin resistance in a cat is a time-consuming, expensive, and difficult procedure. there is a need for a simple biomarker-based test predictive of insulin resistance. a biomarker-based assay predictive of insulin resistance exists in humans. the purpose of this study was to evaluate the utility of this assay in overweight cats and show improvement in insulin sensitivity following weight loss and weight maintenance. the insulin resistance assay is based on the quantitative analysis of 5 metabolites (2-hydroxybutyrate, creatine, palmitate, decanoylcarnitine, and oleoyl-lpc).
a proprietary algorithm (metabolon, inc., durham, nc) was used to generate a predictive rd (rate of disposal) value (normal range in cats 6.5-10.5). individuals with an rd value less than 6 have a greater than 50% chance of being insulin resistant, and an rd value less than 3 indicates a greater than 90% chance of being insulin resistant. initial studies demonstrated that the rd values indicating insulin resistance in cats correlated with age, obesity and severity of diabetes as determined by histopathology and blood glucose levels. in a feeding study of 40 cats (> 39% vs. < 25% body fat), rd values improved from 6.19 ± 1.23 to 6.84 ± 1.38 (p = 0.029). during weight maintenance, 19% body fat for 4 months, further improvement was observed (rd, 8.30 ± 1.08 (p = 9.2e-12)). these results demonstrate that long-term weight maintenance following weight loss is critical for increasing insulin sensitivity in cats. the use of monoclonal antibodies and antibody fragments to directly target tumor antigens and neutralize their growth factors has shown promising results in human clinical trials. however, these targeted approaches have not been possible in dogs, since specific tumor antigens have not been identified, monoclonal antibodies of canine origin are not available, and the efficacy of xenogeneic antibodies in the dog is limited by neutralizing antibody responses. to overcome these obstacles, we have generated canine antibody phage display libraries from canine splenocytes. these libraries consist of single chain variable fragments (scfv) comprised of canine variable heavy (vh) and variable light (vl) immunoglobulin chains displayed on the surface of bacteriophage (fig. 1). the antigen specificity within these libraries is diverse and recapitulates the antigen-experienced immunoglobulin repertoire of the dog.
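the rd-value risk bands from the feline insulin-resistance abstract above (normal 6.5-10.5; rd < 6 implies a > 50% chance, rd < 3 a > 90% chance of insulin resistance) map naturally onto a small interpreter. a sketch only; the band labels are illustrative wording, not the assay vendor's:

```python
def rd_interpretation(rd):
    """Map a metabolite-derived rate-of-disposal (RD) value to the risk
    bands described for cats (reference range 6.5-10.5)."""
    if rd < 3:
        return ">90% chance of insulin resistance"
    if rd < 6:
        return ">50% chance of insulin resistance"
    if rd < 6.5:
        return "borderline (below reference range)"
    if rd <= 10.5:
        return "within reference range"
    return "above reference range"
```

on the study's group means, the obese cats' pre-diet value of 6.19 falls in the borderline band, while the post-maintenance value of 8.30 falls within the reference range, matching the reported improvement.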
we can now use simple panning techniques to isolate scfv of canine origin that bind to either known targets or unknown targets, which can then be identified using standard molecular techniques. canine hsa is a highly aggressive malignancy of vascular endothelial cells that affects large breed dogs. although there are no confirmed immunological targets for hsa, serum levels of vascular endothelial growth factor (vegf) are elevated in these patients and, as in many human cancers, vegf may represent an important therapeutic target for neutralization. we used simple panning techniques to screen canine scfv libraries generated from the spleens of dogs with hsa against canine vegf and successfully isolated 3 scfv clones that bind and neutralize canine vegf in vitro. these scfvs are now being taken into a murine model of canine hsa to determine whether they can inhibit tumor growth and metastases. in addition, we have panned the same antigen-experienced scfv phage display libraries against allogeneic primary canine hsa cells of low passage number to isolate canine-derived antibody fragments that can target malignant endothelial cell surface molecules. early results demonstrate enrichment of scfv phage libraries for malignant endothelial cell binders. these scfv can be readily linked to chemotherapeutic agents or other toxins and used to deliver high doses directly to the malignant cell. this novel approach aims to reduce side effects of systemic chemotherapy and augment therapeutic response. calcitriol (vitamin d3) has antineoplastic activity and acts synergistically to potentiate the antitumor activity of a diverse array of chemotherapeutics. ccnu, vinblastine, corticosteroids, and tyrosine kinase inhibitors are used to treat canine mast cell tumors (mct). the vitamin d receptor is expressed in the majority of canine mcts, suggesting a role for calcitriol in the management of dogs with these tumors.
the purpose of our study was to examine the in vitro effects of calcitriol in combination with ccnu, vinblastine, imatinib, or toceranib on canine mastocytoma c2 cells. also, we evaluated the antitumor activity of dn101, a highly concentrated oral formulation of calcitriol, as single-agent treatment in dogs with naturally occurring mcts. c2 cells were incubated with serial dilutions of calcitriol (0.1-25 nm). twenty-four hours later, cells were then treated with vehicle control or serial dilutions of ccnu (2.5-20 µm), vinblastine (1.25-10 nm), imatinib (1.67-12.5 nm), or toceranib (3.13-25 nm). cell viability was assessed with an mtt assay after 48 hours, and the data were used to derive a combination index (ci: values < 1, = 1, and > 1 indicate synergism, additivity, and antagonism, respectively). in the phase ii clinical trial, dogs were eligible if they had at least 1 measurable, histologically confirmed mct. calcitriol was administered orally. recist criteria were used to assess tumor response. calcitriol, ccnu, vinblastine, imatinib, and toceranib each suppressed c2 cell viability in a dose-dependent manner. ci values < 1 were obtained for calcitriol (0.1-6.25 nm) combined with ccnu (5 and 10 µm), vinblastine (2.5 and 5 nm), imatinib (1.67-12.5 nm) and toceranib (0.1-12.5 nm). due to the occurrence of toxicity (vomiting, anorexia, hypercalcemia), the phase ii trial was terminated early; only 10 of 20 planned patients were treated. one dog with a metastatic muzzle mct had a complete response that lasted 89 days. three dogs achieved partial responses lasting from 74-90 days. in summary, our in vitro data demonstrate that calcitriol combined with ccnu, vinblastine, imatinib or toceranib has synergistic effects on c2 mastocytoma cells. antitumor responses were observed in dogs with spontaneously occurring mcts treated orally with single-agent calcitriol, but the frequency of adverse effects was high.
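the abstract above does not specify which combination-index formula was used on the mtt viability data; the chou-talalay index is the most common choice, and is sketched here under that assumption:

```python
def combination_index(d1, d2, dx1, dx2):
    """Chou-Talalay combination index.

    d1, d2: doses of each drug used together that produce a given effect
            (e.g. 50% viability reduction).
    dx1, dx2: doses of each drug *alone* that produce the same effect.
    CI < 1 synergism, CI = 1 additivity, CI > 1 antagonism.
    """
    return d1 / dx1 + d2 / dx2

def interpret_ci(ci, tol=1e-9):
    """Verbal interpretation matching the convention quoted in the abstract."""
    if ci < 1 - tol:
        return "synergism"
    if ci > 1 + tol:
        return "antagonism"
    return "additivity"
```

for instance, if half the single-agent dose of each drug suffices in combination, `combination_index(1, 1, 4, 4)` gives 0.5, i.e. synergism; the dose pairs producing a fixed effect are usually read off fitted dose-response curves rather than taken from raw wells.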
together, these results suggest that calcitriol combination therapies might have significant clinical utility in the treatment of canine mcts, but the calcitriol dosing regimen must first be refined. cyclosporine is a potent immunosuppressive agent used to treat many canine inflammatory and immune-mediated diseases. cyclosporine has gained popularity as an immunosuppressive agent because of a favorable toxicity profile compared to many other immunosuppressive agents. optimal dosing regimens for cyclosporine in the dog remain unclear, primarily because standard methods that monitor effectiveness of immunosuppression have not been established. pharmacokinetic testing is currently used during treatment with oral cyclosporine to adjust doses based on measurement of blood drug levels. individual patients, however, often demonstrate marked variations in blood drug levels while on similar oral doses of cyclosporine, and can also demonstrate different clinical responses even at comparable drug levels, making correlation of blood cyclosporine levels and degree of disease control extremely difficult. pharmacodynamic testing offers an alternative method for regulating cyclosporine dosing by objectively measuring the effects of cyclosporine on t-cells, the drug's main cellular target in the body. our acvim foundation-funded research has focused on developing and evaluating a comprehensive panel of biomarkers of immunosuppression that can be utilized for pharmacodynamic monitoring during treatment with cyclosporine and other immunosuppressive agents that affect t-cell function. we have completed several studies using flow cytometry to evaluate activated t-cell expression of surface molecules (cd25 & cd95) and cytokines (il-2, ifn-g & il-4) as potential biomarkers. our first study was an in vitro study evaluating expression of surface molecules and cytokines in canine t-cells exposed to varying concentrations of cyclosporine.
this study established consistent drug-associated suppression of the cytokines il-2, ifn-g and il-4. our second study was an in vivo study in normal dogs evaluating the effects on t-cell expression of these three cytokines of two doses of oral cyclosporine: a high dose considered to be reliably immunosuppressive (starting dose 10 mg/kg bid, titrated upwards as needed to attain trough drug blood levels of at least 600 ng/ml) and a lower dose used to treat atopy (5 mg/kg sid). significant suppression of il-2 and ifn-g expression was seen at the high cyclosporine dose, while at the lower dose only ifn-g expression was suppressed. because t-cell expression of il-4 was not significantly suppressed at the high cyclosporine dose, il-4 was not evaluated at the lower drug dose. because of specialized sample handling requirements, flow cytometry is not as practitioner-friendly as other assays (such as pcr) for routine use in pharmacodynamic testing. we have therefore conducted an in vitro study comparing the effects of cyclosporine on activated t-cell expression of il-2 and ifn-g using flow cytometry and qrt-pcr, and demonstrated dose-dependent and comparable suppression of il-2 and ifn-g using either methodology. we are currently evaluating, using qrt-pcr, the effects of oral cyclosporine on t-cell expression of il-2 and ifn-g in normal dogs prior to moving on to pharmacodynamic trials in our clinic patients. effect of hypothyroidism on reproduction in bitches. dl panciera, bj purswell, ka kolster, sr werre. departments of small animal clinical sciences, large animal clinical sciences, and laboratory for study design and data analysis, virginia-maryland regional college of veterinary medicine, virginia tech, blacksburg, va. numerous reproductive abnormalities, including irregular interestrous periods, anestrus, and infertility, have been attributed to hypothyroidism.
we previously documented reduced fertility, lower birth weight, and increased periparturient mortality in pups born to bitches with experimentally induced hypothyroidism of a median duration of 56 weeks. the purpose of this study was to evaluate reproductive function in these same bitches after hypothyroidism was treated with a replacement dose of levothyroxine. twelve multiparous bitches were studied. hypothyroidism was induced in 6 dogs by administration of 1 mci/kg 131-i. hypothyroidism was confirmed by finding serum t4 concentrations < 5 nmol/l both before and 4 hours after iv administration of human recombinant tsh. levothyroxine (0.02 mg/kg q 24 h) was administered to all hypothyroid bitches. six bitches served as euthyroid, untreated controls. dogs were evaluated daily for signs of estrus and were bred by 1 of 2 males when serum progesterone was ≥ 5 ng/ml. interestrous interval, gestation length, strength and duration of contractions during whelping, time between pups, number of live pups and stillbirths, viability of pups at birth, weight of pups, and periparturient mortality were recorded. the student's t-test and anova were used to compare differences between control and hypothyroid bitches for continuous, normally distributed data. the wilcoxon rank sum test was used to analyze data between groups that were not normally distributed. the mean duration of hypothyroidism prior to levothyroxine administration was 102 ± 8.1 weeks. breeding took place after levothyroxine treatment for 46 ± 12.6 weeks in the hypothyroid group. all 6 dogs in the hypothyroid group and 5/6 control dogs were pregnant, while 4/8 hypothyroid and all 6 control bitches became pregnant prior to levothyroxine administration. no difference in interestrous interval or gestation length was noted between groups. during whelping, no difference in strength of contractions, contraction duration, interval between pups, or viability scores of pups was found between groups.
litter size, birth weight, and periparturient mortality were similar between groups. levothyroxine administration reverses the detrimental effects of hypothyroidism on fertility and neonatal health.

racing sled dogs have a high prevalence of exercise-induced gastric erosions/ulcers, with reports ranging from 50-67% of dogs running at least 100 miles in a day or less. omeprazole reduces the severity of, but does not completely prevent, gastritis under racing conditions, and can be difficult to administer under these conditions. famotidine can be administered in food, but has only demonstrated efficacy under less intense training conditions. the purpose of these studies was to evaluate different acid suppression strategies under racing conditions for the prevention of exercise-induced gastritis. experiment #1 was a randomized placebo-controlled study using 36 sled dogs (3-8 years) competing in a 330-mile race over 50-60 h. treatment groups were famotidine (approx 1 mg/kg qd) or no treatment, beginning 2 days prior to the start of the race and proceeding until gastroscopy was performed 24 h after the race. experiment #2 was a randomized positive-control study using 52 sled dogs (2-8 years) running a mock race of 300 miles in 50 h. dogs were divided into omeprazole (approx 1 mg/kg qd, administered 30 min prior to a meal) or famotidine (approx 2 mg/kg bid) groups beginning 2 days prior to the exercise challenge and continuing for 24 h after completion. gastroscopy was performed immediately prior to the start of dosing and 24 h after completion of the exercise. in all cases, mucosal appearance during gastroscopy was blindly scored using a previously described scoring system. famotidine (1 mg/kg qd) reduced the prevalence of clinically relevant, exercise-induced gastric lesions compared to no treatment (7/16 vs 11/16, p = 0.031).
compared to famotidine at 2 mg/kg bid, omeprazole significantly decreased the severity (0.4 vs 1.2, p = 0.0002) and prevalence (2/23 vs 7/21, p = 0.049) of gastric lesions. although famotidine provides some benefit in the prevention of exercise-induced gastric lesions, neither the recommended dose nor the higher dose was considered acceptable for the prevention of exercise-induced gastritis, as between 33% and 44% of the dogs receiving famotidine had clinically significant lesions. a previous study examining omeprazole under racing conditions, but without careful administration on an empty stomach, resulted in a 22% prevalence of clinically significant gastric lesions. however, the bioavailability of omeprazole is reduced in the presence of food, and when daily administration of the drug is carefully scheduled to coincide with an empty stomach, the resulting prevalence of clinically significant lesions induced by racing-intensity exercise is reduced to just over 10%. the conclusions of these studies are that omeprazole is superior to famotidine in preventing gastritis in racing sled dogs during competition. routine administration of omeprazole is recommended to prevent stress-associated gastric disease in exercising and racing alaskan sled dogs.

mares may be an important source of environmental contamination with rhodococcus equi on breeding farms. attempts to reduce fecal shedding of r equi by the mare and the effects of the mare's fecal r equi concentration on airborne concentrations in the foaling stall have not been previously reported. twenty-one arabian mares were treated daily with either oral gallium nitrate or placebo in a randomized double-blind study. fecal samples were collected at day 320 of gestation (time 1), the week before foaling (time 2), and the week after foaling (time 3).
airborne concentrations of r equi were measured in the stall within 6 hours post foaling using a microbial air sampling system into which standard (100-mm) culture plates with a medium selective for r equi had been loaded. concentrations of total r equi were determined by morphological characteristics. the concentration of virulent r equi was determined using a modified colony immunoblot method. concentrations of total and virulent r equi were compared among mares to examine effects of treatment, time, and the treatment-by-time interaction. there were significant (p < 0.05) effects of treatment that depended on time of sample collection. at sample times 1 and 2 there were no significant differences between groups in the fecal concentration of virulent r equi. at time 3, concentrations of virulent r equi were significantly lower among mares in the treatment group (p < 0.05) compared to control. effects of time depended significantly on group: for the control group, there were no significant effects of time. for the treatment group, concentrations tended to decrease over time, and concentrations at time 3 were significantly (p < 0.05) lower than those at time 1. no other differences among times for concentrations in the treatment group were statistically significant. there were no significant effects of treatment, sample time, or their interaction on the concentration of total r equi between groups; however, the pattern for these data was similar to that observed for the virulent isolates. no significant differences were determined between treatment groups for airborne concentrations of virulent or total r equi. treatment of mares with oral gallium nitrate significantly reduced the fecal concentrations of virulent r equi over time, but had no impact on the airborne concentration of r equi shortly after foaling.
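the airborne concentrations above come from an impaction air sampler; the abstract does not describe its counting arithmetic, but the standard conversion from a plate count to cfu per cubic meter of air can be sketched as follows (the function name and example numbers are illustrative, not study data):

```python
def airborne_cfu_per_m3(colonies: int, sampled_volume_l: float) -> float:
    """convert a plate count from an impaction air sampler to cfu per cubic meter.

    colonies: colonies counted on the selective plate after incubation
    sampled_volume_l: volume of air drawn through the sampler, in liters
    """
    if sampled_volume_l <= 0:
        raise ValueError("sampled volume must be positive")
    # 1 cubic meter = 1000 liters
    return colonies / sampled_volume_l * 1000.0
```

for example, 12 colonies from a 500 l air sample correspond to 24 cfu/m3.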
the purpose of this study was to evaluate the protein profile of bronchoalveolar lavage fluid (balf) in horses affected with recurrent airway obstruction (rao) and in control horses using proteomics and western blot techniques. rao-affected (n = 5) and control horses (n = 6) were subjected to an experimental exposure trial; when the rao-affected horses showed clinical signs of disease, balf was collected from all horses. balf was also collected from client-owned rao-affected horses (n = 15) with naturally occurring clinical signs of disease and client-owned control horses (n = 12) from the same environments. the balf from the experimental exposure trial horses was subjected to trypsin digestion and proteomics analysis with mass spectrometry (ms). peaks detected with ms were identified using tandem ms analysis and database searches. western blot was used to confirm the identity and expression levels of two proteins identified using proteomics techniques in the balf of all horses. data from ms experiments were analyzed with the student's t-test to compare peak intensity between rao-affected and control horses. western blot band density data were analyzed with the kruskal-wallis anova for comparison between groups of horses. the significance level was set at p < 0.05. with ms proteomic analysis of the balf from the experimental exposure trial horses, 2049 total peaks (peptides) were identified. of these peaks, 100 were differentially expressed between the rao-affected (24 over-expressed) and control horses (76 over-expressed). identifications were made for 250 balf proteins. transferrin and secretoglobin were chosen for validation with western blot. proteomics indicated that secretoglobin was not differentially expressed between the experimental exposure trial groups; this was confirmed with western blot analysis.
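the per-peak student's t-test used for the ms intensity comparison can be sketched as below; this is a generic pooled-variance implementation, not the authors' analysis pipeline, and a real analysis of 2049 peaks would also need a multiple-testing correction:

```python
from statistics import mean, variance

def two_sample_t(group_a: list[float], group_b: list[float]) -> tuple[float, int]:
    """pooled-variance (student's) two-sample t statistic for one ms peak.

    returns (t statistic, degrees of freedom); compare t against the
    t-distribution with the returned df to obtain a p-value.
    """
    na, nb = len(group_a), len(group_b)
    # pooled sample variance across the two groups
    pooled = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    t = (mean(group_a) - mean(group_b)) / (pooled * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2
```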
western blot also showed that client-owned rao-affected horses had lower secretoglobin expression than client-owned control horses and control horses before experimental exposure. according to the proteomics data, transferrin was over-expressed in control horses after experimental exposure compared to rao-affected horses. while the western blot analysis did not show a statistically significant difference in this comparison, transferrin was significantly over-expressed in control horses before experimental exposure compared to client-owned rao-affected horses. in addition, both secretoglobin and transferrin band densities on western blot were negatively correlated with airway obstruction and neutrophilic pulmonary inflammation. this study demonstrates that proteomics techniques can be used in the investigation of equine balf proteins. the proteins identified as differentially expressed between rao-affected and control horses in this study, including but not limited to secretoglobin and transferrin, should undergo further evaluation for their use as biomarkers of rao and as potential targets of new therapeutic agents for rao.

cardiotoxic effects of rattlesnake venom in the horse are not well defined. the first aim of this study was to document cardiac damage in naturally envenomated horses. twenty horses with a clinical diagnosis of snake bite were included. a snake venom elisa was utilized to confirm envenomation when possible. serum and plasma were collected at selected intervals. plasma was assayed for cardiac troponin i (ctni) using a fluorometric assay (stratus cs®, dade behring). holter monitors (zymed®, philips) were placed at admission, 1 week and 1 month post presentation. echocardiography was performed on available horses 5-7 months after envenomation. the second aim of this study was to investigate potential mechanisms of the cardiac damage. serum samples were assayed for tnf-α using a commercial assay (endogen).
antibody titers to crotalus atrox venom were measured at admission, 1 week and 1 month after natural envenomation and compared to titers in vaccinated horses (crotalus atrox toxoid, red rock biologics). a significant number of horses showed elevations in ctni (p < 0.05) at one or more time points, indicating myocardial damage. holter readings revealed the presence of arrhythmias or persistent tachycardia in 19 horses. five of twenty horses were available for echocardiography; no abnormalities were noted. horses with increased ctni tended to have greater tnf-α concentrations compared with horses without increased ctni. peak venom titers in bitten horses were significantly higher than peak titers in vaccinated horses (p < 0.05). rattlesnake envenomation was associated with evidence of cardiac damage in a significant proportion of bitten horses. further studies are needed to determine the cause as well as mechanisms to treat and/or prevent its occurrence.

little is known about the gastric mucosal flora in healthy horses, and its role in gastric disease has not been critically examined. our laboratory previously reported, using fluorescence in situ hybridization (fish), that a diverse microbial flora with a predominance of streptococcus spp. and lactobacillus spp. exists in healthy horses. the present study sought to further characterize the gastric mucosal flora of healthy horses using massively parallel 16s rrna bacterial tag-encoded flx-titanium amplicon pyrosequencing (btefap). biopsies of the squamous, glandular, antral and any ulcerated mucosa were obtained from 4 healthy horses via gastroscopy after a 12-hour fast and from 2 horses immediately post-mortem. dna was extracted from the mucosal biopsies, and btefap and data processing were performed. hierarchical cluster analyses based on relative abundance data at the genus level were performed to look for trends in bacterial diversity among the individual horses.
pyrosequencing yielded between 4,500 and 13,000 reads per horse, with 9238, 16587 and 16587 reads in the antrum, squamous and glandular regions, respectively. the microbiome segregated into two distinct clusters: cluster 1 comprised 2 horses that were stabled, fed hay and sampled at post-mortem, and cluster 2 consisted of 4 horses that were pastured on grass, fed hay and biopsied gastroscopically after a 12-hour fast. samples from different anatomic regions clustered by horse rather than by region. despite being very similar at the higher taxonomic level (phyla), differences in the distribution of bacteria were seen at the genus and species level. the dominant bacteria in cluster 1 horses were firmicutes (>83% of reads/sample), consisting mainly of streptococcus spp., lactobacillus jensenii, l. fornicalis and sarcina maxima. cluster 2 had more diversity, with a predominance of proteobacteria, bacteroidetes and firmicutes and 51 genera identified, such as streptococcus spp., moraxella spp., actinobacillus spp., and others. though the relative abundance of the individual taxonomic groups was significantly different between individual horses, no significant differences in the overall diversity could be found (as assessed by the shannon-weaver, ace and chao 1 diversity indices). helicobacter spp. sequences were not identified in any sample (out of 58,891 reads). the ulcerated mucosa from horse 3 (group 1) had lower diversity and higher numbers of bacteria, predominated by lactobacillus equigenerosi. these data show that the equine gastric mucosa harbors an abundant and diverse microbiome which is unique to each individual and differs by sampling method, fasting prior to sampling, and diet.

seasonal pasture myopathy (spm; atypical myopathy [am] in europe), typified by nonexertional rhabdomyolysis, occurs in pastured horses during autumn or spring. clinical signs rapidly progress from muscular weakness to recumbency and frequently death.
extensive myonecrosis and intramyofiber lipid storage occur in highly oxidative respiratory and postural muscles. recently, a defect of lipid metabolism called madd has been identified in european horses with am. this report documents the first cases of equine madd in the united states. six midwestern us horses suspected of having spm in the spring or fall of 2009 were evaluated for madd by urine organic acids, plasma acylcarnitines and/or muscle carnitine and histopathology. five horses had clinical signs and clinicopathologic data consistent with severe rhabdomyolysis. one horse was found dead on pasture after 2 days of rear limb stiffness and inappetence. urinary organic acid profiles revealed markedly elevated ethylmalonic and methylsuccinic acids, butyrylglycine, isovalerylglycine, and hexanoylglycine, consistent with equine madd. plasma acylcarnitine profiles from 2 horses had marked elevations of short-chain acylcarnitines, while the third horse, and only survivor, had minor elevations of short-chain acylcarnitines. affected muscle showed extensive degeneration with intramyofiber lipid accumulation, a marked decrease in free carnitine, and high levels of carnitine esters. spm appears to be a highly fatal emerging disease of pastured horses in the us characterized by weakness, colic-like signs and myoglobinuria. the disease is associated with a defect in muscular lipid metabolism that can be diagnosed by performing lipid staining of muscle samples and urine organic acid profiles.

candidatus mycoplasma haemolamae (cmhl) is a common red blood cell parasite of new world camelids. the high degree of parasitemia that develops in an infected splenectomized animal allows for the efficient collection of parasitic dna. this dna can then be used in the development of genetically derived tools such as pcr and in situ hybridization. thus, one splenectomized animal can replace many immunologically intact animals within a research setting.
the purpose of this study was to track the natural progression of cmhl parasitemia and associated clinical signs in a splenectomized alpaca. an intact, 9-month-old, 39.1 kg male alpaca was used in this study. he had tested positive via pcr for cmhl on three different occasions, although no organisms were seen on peripheral blood smears. the alpaca was placed under general anesthesia and a ventral midline incision was made. the spleen was located, the vessels ligated, and the organ removed. buprenorphine and flunixin meglumine were given for 2 and 4 days after surgery, respectively. body weight, attitude, rectal temperature, blood glucose, and pcv were recorded daily. in addition, a peripheral blood smear was examined daily and the percentage of red blood cells infected with mycoplasma organisms was determined. the alpaca was not parasitemic prior to surgery. one percent of the rbcs contained mycoplasma on days 2 and 3 after splenectomy. a parasitic bloom developed on day 4, with 85% of the red blood cells infected and over 70% containing 3 or more organisms. the alpaca was treated with 20 mg/kg oxytetracycline i.v. on day 4. on postoperative day 5 no parasites were seen in the peripheral blood. the peripheral blood remained free of parasites for 11 days. on the morning of the 12th day, 3% of the peripheral red blood cells contained mycoplasma. by late that afternoon, 75% of the observed rbcs contained 3-4 organisms. the alpaca again received oxytetracycline. no more parasites were observed from that time until the alpaca was euthanized 5 days later. the alpaca lost 3.7 kg between days −1 and 12 after surgery. his weight fluctuated between 34.9 and 35.4 kg for the remainder of the study period. blood glucose ranged between 87 and 163 mg/dl. there was no major change in pcv (range 21-29%), a finding that was expected as the spleen was not available to remove infected red blood cells.
body temperature ranged between 38.1 and 39.0 degrees celsius except on days 4 and 16, when more than 70% of red blood cells contained parasites. on those days rectal temperature reached 39.4 and 39.1 degrees, respectively. this study confirmed that a non-parasitemic, yet pcr-positive, alpaca did indeed harbor cmhl. the time from splenectomy to parasitic bloom was shorter, and the length of oxytetracycline suppression longer, than has been observed in other species.

gastro-intestinal (gi) disease frequently results in increased wall thickness in many species. identification of changes in gi wall thickness using ultrasound has proved to be a useful diagnostic tool and is widely used in human patients, small animals and horses. although gi motility has been evaluated in cattle, normal reference ranges for wall thickness have not been reported in ruminants. the aims of this study were to report normal values for wall thickness of various gi structures and to assess the repeatability of this technique in adult dairy cows, sheep and goats. eight healthy adult holstein friesian (hf) cattle (656 ± 11 kg), eight jersey (j) cattle (458 ± 56 kg), thirteen adult sheep (79 ± 11 kg) and eleven adult goats (43.5 ± 8 kg) were recruited and examined on three consecutive days. ultrasonographic images were optimised for the structure of interest. structures were identified based upon appearance and anatomical position. a minimum of three cineloops were obtained of the abdominal organs per intercostal space (ics) and three along the ventral midline in each ics; images were analysed offline. data were analysed using anova with post-hoc bonferroni, student's t-test and intra-class correlation coefficients. each structure was measured per ics per species; if no differences were noted for structures in different ics, then measurements were pooled. no differences were noted between hf and j cattle, so data were pooled. data are displayed in table 1.
good repeatability (icc > 0.91) was obtained for all measurements, and no differences were noted between animals of the same species or between days. these measurements for assessment of normal gi wall thickness are repeatable and may allow valuable additional information to be gained from ruminants with gi disease.

ocular infections with the infectious bovine keratoconjunctivitis (ibk) agent moraxella bovis (m. bovis) are associated with significant economic loss in the cattle industry. although antibiotic therapy is the treatment of choice for ibk, treatment failures are common and current vaccines are not optimally effective, mainly due to antigenic variation. as a result, our laboratory has been actively investigating the therapeutic potential of bdellovibrio bacteriovorus 109j (b. bacteriovorus), a predatory bacterium capable of attacking and inducing lysis of gram-negative bacteria, as a new treatment for ibk. we have previously shown that b. bacteriovorus can reduce the number of m. bovis attached to bovine epithelial cells in an in vitro model of ibk and that b. bacteriovorus can be trained to kill m. bovis as effectively as e. coli using serial passages. in this study, we hypothesized that b. bacteriovorus can remain viable in bovine tears without its prey for up to 24 hours. this hypothesis was addressed by incubating inocula of active b. bacteriovorus in its preferred medium, peptone yeast extract (pye), and comparing b. bacteriovorus viability in bovine tears or phosphate buffered saline (pbs) at 0, 2, 4, 12, and 24 hours. using a plaque assay to quantify the mean number of plaque-forming units (pfus) of b. bacteriovorus exposed to each treatment, it was determined that viability of b. bacteriovorus over time was comparable between treatment groups. overall, the results supported that b. bacteriovorus can remain viable in tears for up to 24 hours in the absence of prey bacteria. further studies are needed to determine the therapeutic potential of b.
bacteriovorus in an in vivo model of ibk.

correction of the measured ionized calcium concentration (cca2+) to a ph = 7.40 is routinely applied in experimental studies in order to assist in the interpretation of measured values relative to a reference range. the equation most commonly used for ph correction in bovine plasma is: cca2+ (ph = 7.40) = cca2+ × 10^(−0.24 × {7.40 − ph}). the validity of this equation for bovine plasma is unknown. accordingly, our first objective was to characterize the in vitro relationship between cca2+ and ph for bovine plasma. feeding rations with a low dietary cation-anion difference (dcad) during late gestation mitigates periparturient hypocalcemia in dairy cows, particularly when chloride-containing acidogenic salts are fed. the mechanism for this beneficial effect remains unclear. our second objective was to determine whether hyperchloremia displaces calcium from binding sites on albumin, thereby increasing cca2+. the in vitro relationship between plasma log(cca2+) and ph was investigated using lithium heparin anticoagulated blood from 10 healthy holstein-friesian calves. plasma was harvested and tonometered with co2 at 37°c over a ph range of 7.10-7.70. the plasma chloride concentration (ccl−) was altered by equivolume dilution of plasma with 3 electrolyte solutions of varying ccl− (97 ± 1, 110 ± 1, and 123 ± 1 meq/l; mean ± sd). the slope of the linear regression equation relating log(cca2+) to ph for 112 tonometered plasma samples from the 10 calves was −0.24 ± 0.05 at normal values for cca2+ (2.63 ± 0.09 meq/l), albumin concentration (30.9 ± 2.0 g/l), and ccl− (99.1 ± 2.2 meq/l). the experimentally determined value of the slope for bovine plasma was identical to that determined previously for human plasma. the formula for correcting cca2+ in bovine plasma for a change in ph from 7.40 is therefore: cca2+ (ph = 7.40) = cca2+ × 10^(−0.24 × {7.40 − ph}).
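the correction formula just stated is straightforward to apply; a minimal sketch, with the slope of −0.24 taken from the regression described above (the function and argument names are ours):

```python
def ph_corrected_ca(ca_measured: float, ph: float, slope: float = -0.24) -> float:
    """correct a measured ionized calcium concentration (meq/l) to ph 7.40.

    uses the log-linear relationship cca2+(7.40) = cca2+ x 10^(slope x (7.40 - ph));
    a slope of -0.24 is the experimentally determined value for bovine plasma.
    """
    return ca_measured * 10 ** (slope * (7.40 - ph))
```

for example, a sample measured at 2.63 meq/l and ph 7.10 corrects to roughly 2.23 meq/l at ph 7.40, and a sample already at ph 7.40 is returned unchanged.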
this equation is only valid at normal concentrations of albumin and chloride in plasma. equivolume dilution of plasma by electrolyte solutions of varying ccl− indicated that cca2+ (ph = 7.40) increased by 0.007 meq/l for every 1 meq/l increase in ccl−. in other words, plasma cca2+ at a given ph increases directly in response to an increase in plasma ccl−, presumably because the additional chloride displaces calcium that is electrostatically bound to albumin. furthermore, the increase in cca2+ is independent of the change in plasma ph induced by an increase in ccl− and a decrease in plasma strong ion difference. our finding that hyperchloremia directly increases plasma cca2+ provides an additional mechanism by which ingestion of high-chloride (acidogenic) rations prevents the clinical signs of periparturient paresis. our finding is consistent with the results of other studies indicating that acidogenic salts containing chloride as the predominant anion (i.e., nh4cl, cacl2) are more effective in increasing cca2+ than equimolar quantities of acidogenic salts such as mgso4.

coagulase-negative staphylococci (cns) are among the most common bacteria isolated from the bovine mammary gland. historically, these bacteria were lumped together as minor mastitis pathogens. modern molecular techniques have allowed accurate speciation and fingerprinting of the cns species. these methodologies have recently been applied to the study of cns in bovine mastitis. the aim of the studies presented here was to evaluate the role of individual cns species on milk somatic cell count (scc) and duration of intramammary infection (imi). in the first study, mammary quarter foremilk samples were aseptically collected from all lactating cattle (≈180 head) at the university of missouri dairy research center once monthly for 17 months for bacterial culture and milk scc. staphylococcal isolates were speciated by sequencing the rpob gene and strain-typed using pulsed-field gel electrophoresis (pfge).
using species and fingerprint data along with published definitions for staphylococcal imi, 91 cns imis were identified. overall, 11 species of cns were identified, with staphylococcus chromogenes, s. cohnii, s. epidermidis, and s. simulans being most prevalent. duration of imi and scc data were analyzed using regression models accounting for repeated measures. mean milk scc and duration of imi were found to differ between cns species (p < 0.05). although most imis were of short duration (1 month), staphylococcus capitis and s. chromogenes imis had longer mean durations of infection than 3 or more of the other species isolated. mean sccs were under 350,000 cells/ml in most cases. however, staphylococcus simulans and s. xylosus imis were more inflammatory (mean > 600,000 cells/ml) and had a higher mean scc than s. cohnii, s. epidermidis, and s. haemolyticus. to examine the relationship between cns imi and milk scc in a larger population of cattle, cns isolates from the canadian bovine mastitis research network (cbmrn) culture collection were obtained for speciation. speciation and fingerprinting were performed as above. isolates were from subclinical imi from before and after the dry period and from subclinical imi during lactation. data associated with each isolate were obtained from the cbmrn database. nine-hundred-thirty-eight isolates from 696 mammary quarters in 89 herds were successfully speciated. twenty-two different species of cns were identified. staphylococcus chromogenes was the most frequent species identified, accounting for 40% of the infections. three species, s. chromogenes, s. xylosus, and s. simulans, accounted for >75% of all infections. data were analyzed using a linear hierarchical repeated-measures mixed model. differences in mean scc were found between some cns species and culture-negative control quarters and also between different species of cns (p < 0.05).
overall, our data demonstrate potential differences in pathogenicity between strains of cns that cause bovine mastitis.

passive transfer of maternally derived antibodies via ingestion of good-quality colostrum within the first 24 hours of life is crucial for the health and future productivity of dairy calves. however, infectious diseases can be transmitted via colostrum feeding, which may require use of a colostrum replacement product or pasteurization to decrease disease transmission. while pasteurization of colostrum is effective for sterilization, heating during pasteurization can alter the viscosity of colostrum and destroy important nutritional biomolecules, and has been shown to decrease colostral igg concentrations. the purpose of this study was to investigate the effect of high pressure processing (hpp) on the viscosity, igg concentration, and bacterial contamination of bovine colostrum. first-milking colostrum samples were collected from 18 cows from 3 different farms, and 50 ml aliquots of each sample were pooled for analysis. pooled colostrum was processed in triplicate using an isostatic press at 400 mpa (60,000 psig) for 0, 5, 15, 30, and 45 minutes. samples were tested for the effects of hpp on viscosity, bacterial load (cfu/ml), and igg concentration. there was a significant decrease (p < 0.05) in bacterial load at each time point when compared to time 0. no significant difference in igg concentration was found between any time points. subjectively, the colostrum viscosity appeared to increase with processing time, though the rheologic assessment has not been completed at this time. hpp appears to be an effective method to decrease bacterial contamination of colostrum while maintaining appropriate igg concentrations. minimizing the processing time or pressure may be necessary to maintain an acceptable viscosity of the colostrum.
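the drop in bacterial load reported above is conventionally summarized as a log10 reduction; a minimal sketch (the cfu counts in the example are invented, not study data):

```python
import math

def log10_reduction(cfu_before: float, cfu_after: float) -> float:
    """log10 reduction in viable count (e.g. cfu/ml) after processing.

    a 2-log reduction corresponds to a 100-fold drop in bacterial load.
    """
    if cfu_before <= 0 or cfu_after <= 0:
        raise ValueError("cfu counts must be positive")
    return math.log10(cfu_before / cfu_after)
```

for example, processing that takes a sample from 10^6 to 10^4 cfu/ml achieves a 2-log reduction.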
based on these results, additional studies are justified in order to determine the optimum combination of processing time and pressure and the effect of hpp on specific bovine pathogens.

the heme-associated iron-binding apoprotein lactoferrin (lf) is known for its anti-inflammatory, anti-parasitic, antimicrobial and bactericidal effects. lactoferrin is ubiquitous throughout mammalian host biological fluids (saliva, tears, mammary secretions) as well as at mucosal surfaces. it is also released from immune cells under pathogenic stimulation. the purpose of this study, which was approved by western university's institutional animal care and use committee, is to further characterize the mechanisms through which lf modulates inflammation in the face of bacterial endotoxin. it was hypothesized that lf would inhibit p38 phosphorylation. numerous studies speak to the ability of lf to alter leukocyte function, inhibit cytokine production, and bind lipopolysaccharide (lps), mechanisms through which it is believed to achieve its anti-inflammatory effects. recently, investigators demonstrated its ability to interact with host dna, while others describe regulation of granulocyte adhesion and motility, elucidating its roles in apoptotic signaling. in earlier studies, dawes me, et al. demonstrated lactoferrin's ability to limit the expression of inducible cyclooxygenase-2 and the gelatinase matrix metalloproteinase-9 by lps-induced macrophages. the generation of these inflammatory mediators is modulated by pro-inflammatory cytokines such as interleukin-1β (il-1β) and tumor necrosis factor-α (tnf-α), the production of both being dependent on signaling through the p38 mitogen-activated protein kinase (mapk) pathway.
peripheral mononuclear cells (5 × 10^6) isolated from buffy coat cells of healthy neonatal to 4-month-old holstein calves were cultured in the presence and absence of lf (200 ng/ml), lps (1 mg/ml), anisomycin (25 mg/ml), a known p38 activator (positive control), and 50 mm of sb203580, a known p38 inhibitor (negative control). sample lysates obtained post culture were subjected to immunoprecipitation and kinase reactions. reactions were terminated under reducing conditions and evaluated using western immunoblotting. phosphorylation of activated transcription factor-2 (atf-2) by phosphorylated p38 served as the marker of investigation. immunologically reactive atf-2 expression by lps- and anisomycin-treated cells was compatible with a prominent band at 40 kd. evidence of lf-induced inhibition of lps-induced p38 activation was observed in lanes representing co-cultures of lf + lps, lf + anisomycin, and anisomycin + sb203580, demonstrated by decreased immunological reactivity at 40 kd. the findings here suggest that lf interferes with lps-induced p38 activation of the transcription factor atf-2 in vitro. this serves as additional proof of its potential use in attenuating the systemic effects of lps. six clinically normal, purpose-bred cats of similar age and body condition were imaged with [18f]fluorodeoxyglucose ([18f]fdg) and [18f]ftha by using dynamic cardiac-gated fused pet/ct for kinetic assessment of myocardial glucose and fatty acid uptake and metabolism, respectively. kinetic tracer uptake within the myocardium was achieved by initiating image data acquisition simultaneously with tracer injection. pet data were acquired over a 1 hour period with the heart in the center of the scanner field of view. regions of interest were drawn in the left ventricular wall and thoracic aorta for the purpose of measuring the kinetics of tracer redistribution. serial blood samples were also taken during pet imaging for comparison with image data.
the equilibrium biodistribution of both tracers was documented 1 hour post-injection in a whole body pet/ct image. standard echocardiographic examination of cardiac structures was also performed. both radiotracers remained in the plasma fraction; however, [18f]ftha was cleared from the plasma more rapidly than [18f]fdg (t1/2 ≈ 2 and ≈ 20 min, respectively). the tracers were readily visualized within the feline myocardium in dynamic pet images, and analysis of the blood pool clearance from the kinetic image data agreed with blood sampling data. myocardial uptake of each tracer was best described by a double exponential analysis and was rapid but variable among animals (range 1-30 bq/cc/min), although blood glucose levels were similar in all cats during image acquisition. physiologic [18f]fdg uptake was observed in the brain, salivary tissue, gastrointestinal tract, renal pelves and urinary bladder, with [18f]ftha seen in the myocardium, liver and renal cortex. all cats were normotensive with normal echocardiographic parameters. this study demonstrates the utility of kinetic pet/ct imaging in cats. the left ventricle (lv) shape has been suggested to change from elliptical to more globular in response to chronic volume overload. real-time three-dimensional echocardiography (rt3de) offers new modalities for lv assessment. the aim of the study was to investigate left ventricular changes in shape and volume occurring in response to different severities of naturally acquired myxomatous mitral valve disease (mmvd) in dogs using rt3de. privately owned dogs were classified by standard echocardiography into: healthy (20), mild (20), moderate (8) and severe mmvd (17). a lv cast was obtained using semi-automated endocardial border tracking from the rt3de dataset, from which global and regional (automatically acquired basal, mid, and apical segments based on lv long-axis dimension) end-diastolic (edv) and end-systolic volumes (esv), lv long-axis dimension and rt3de sphericity index were derived.
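the blood-pool half-lives reported above for [18f]ftha (≈ 2 min) and [18f]fdg (≈ 20 min) imply mono-exponential clearance; a minimal sketch, assuming simple first-order kinetics (the function name and the 10-minute time point are illustrative, not from the study), converts each half-life into the fraction of tracer remaining:

```python
import math

def fraction_remaining(t_min, half_life_min):
    # mono-exponential clearance: a(t)/a0 = exp(-ln(2) * t / t_half)
    return math.exp(-math.log(2) * t_min / half_life_min)

# approximate blood-pool half-lives reported for the two tracers (minutes)
for tracer, t_half in [("[18f]ftha", 2.0), ("[18f]fdg", 20.0)]:
    print(tracer, round(fraction_remaining(10.0, t_half), 3))
```

under this assumption, ten minutes after injection roughly 3% of [18f]ftha but about 71% of [18f]fdg would remain in the blood pool, consistent with the faster ftha clearance described.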
global and regional edv and esv increased significantly with increasing mmvd severity, assessed by mmvd group-wise comparisons and linear regression analyses using left atrial to aortic root ratio, and lv end-diastolic and end-systolic dimensions. all three segments contributed to the overall increased global volumes, but the mid edv segment was most strongly associated with increasing lv end-diastolic dimension (p = 0.048). furthermore, lv long-axis distance and lv sphericity index increased with increasing mmvd severity. the basal and apical edv segments were most strongly associated with sphericity index (p < 0.0001). in conclusion, this rt3de study showed that increased lv edv, primarily in the mid segment, leads to rounding of lv apical and basal segments in response to increasing mmvd severity in dogs. 84 dogs from shelters in florida with naturally acquired dirofilaria immitis (di) infection were euthanized and necropsied. all adult di in each dog were sexed using morphological features. total worm burdens and numbers of males and females were recorded. no other information was available for any dog. all data, raw and transformed, were examined visually and descriptively. raw numerical data were further examined by a paired t-test; log-odds transformed data were examined by logistic regression. we also conducted a binomial distribution goodness-of-fit analysis assuming a null hypothesis of m:f = 1.0. worm intensities ranged from 1 to 143 di per dog. eight dogs had unisex infections: 7/8 had all-female infections. dogs with low-intensity dual-sex infections were more likely to have greater numbers of female di. overall, sex ratios were equal (paired t-test, p = 0.7). however, logistic regression demonstrated that the probability of being female is strongly affected by the total worm intensity, with lower intensities increasing the probability of having a predominance of female worms.
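the binomial goodness-of-fit analysis under the null hypothesis m:f = 1.0 can be sketched with an exact two-sided test; the worm counts in the example are hypothetical, not taken from the 84 dogs:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    # exact two-sided binomial test: sum the probabilities of all
    # outcomes no more likely than the observed count k
    pmf = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    return sum(x for x in pmf if x <= pmf[k] + 1e-12)

# hypothetical low-intensity infection: 9 female of 12 adult worms
print(round(binom_two_sided_p(9, 12), 3))  # → 0.146
```

a female excess of this size in a single 12-worm infection would not by itself reject m:f = 1.0, which is consistent with the aggregate sex ratios being equal while logistic regression detected an intensity-dependent skew.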
our data show that di sex ratios in naturally infected dogs equal 1 when examining the entire dog population, but deviate to favor female worms at low worm intensities. these data could impact adulticide treatment strategies. the reasons for sex ratio distortion in di are unknown. we evaluated cardiac reverse remodeling after mitral valve repair under cardiopulmonary bypass (cpb) for mitral regurgitation in small breed dogs. fifty dogs (body weight 1.8-9.3 kg, age 5-14 years) with mitral regurgitation were treated between august 2006 and november 2009. the cardiac murmur was grade 4/6-6/6. the preoperative chest x-rays showed cardiac enlargement (vertebral heart scale (vhs) 11.0-13.1). echocardiography showed severe mitral regurgitation and left atrial enlargement (la/ao 2.0-4.2). after inducing anesthesia, a thoracotomy was performed in the fifth intercostal space. cpb was started by using a cpb circuit connected to carotid artery and jugular vein catheters. after inducing cardiac arrest, the left atrium was sectioned and chordae tendineae rupture confirmed. the chordae tendineae were replaced with expanded polytetrafluoroethylene. a mitral annulus plasty was also done, and the left atrium was closed. after de-clamping and restarting the heart, the chest was closed. heart rate decreased from 118-164 bpm to 75-138 bpm. the grade of cardiac murmur was reduced to 0/6-3/6 three months postoperatively, and the heart shadow was reduced (vhs 9.8-11.5) on chest x-rays. echocardiography confirmed the marked reduction in mitral regurgitation and in left atrial dimensions (la/ao 1.2-2.2). mitral valve repair reduced the enlarged cardiac size by reducing regurgitation. pulmonary arterial hypertension (pah) is a well-recognized condition in dogs leading to considerable morbidity and mortality.
the majority of therapeutics have focused on endothelial dysfunction causing reduced production of vasodilators, such as nitric oxide and prostacyclin, coupled with overproduction of vasoconstrictors, such as endothelin-1. more recently, it has been shown that the mitochondria play an important role in the development of pah as oxygen sensors and regulators of cellular proliferation. in pah, pulmonary artery smooth muscle cells undergo a metabolic shift from oxidative phosphorylation in the mitochondria to glycolysis in the cytoplasm as the major energy source, and this leads to suppression of apoptosis and increased proliferation. dichloroacetate (dca) inhibits pyruvate dehydrogenase kinase to activate pyruvate dehydrogenase, which catalyzes the rate-limiting step for entry of pyruvate into the krebs cycle, thus increasing mitochondrial respiration. in three different rat models of pah, dca has been shown to prevent and reverse pah by normalizing molecular pathology, stimulating apoptosis of pulmonary artery smooth muscle cells, and reducing pulmonary artery hypertrophy. dca has known toxic effects, including reversible hepatotoxicity and peripheral neuropathy, and has not been studied in any species with naturally occurring pah. the objective of this open-label pilot study is to evaluate the therapeutic and toxic effects of dca in naturally occurring canine pah. three dogs with pah diagnosed by doppler echocardiography and no correctable underlying cause are enrolled in the study. dogs are orally administered 25 mg/kg of dca divided daily for 2 weeks, and then 12.5 mg/kg of dca divided daily for the remainder of the study. at baseline, 2, 4, 8, and 12 weeks, an echocardiogram, cbc, serum chemistry profile, urinalysis, nt-probnp, blood uric acid, blood lactate, noninvasive blood pressure, nerve conduction study, and trough dca level (12 hr post-dose) are obtained.
the measured echocardiographic parameters include peak and mean tricuspid regurgitant flow velocity and pressure gradient, peak and end-diastolic pulmonary regurgitant flow velocity and pressure gradient, pulmonary valve flow velocity acceleration time and ejection time, pulmonary valve flow velocity time integral, right ventricular myocardial performance index, tricuspid annular plane systolic excursion, and systolic tricuspid annular tissue velocity. variables are inspected for normality and equality of variances, and a two-sided paired t-test is used to compare the variables before and after treatment at each evaluation time. the basis for the role of the mitochondria in pah and the results of this pilot study will be presented to determine if dca warrants further study as a therapy for dogs with pah. a mapping study produced the strongest associations between the ncl phenotype and cfa2 markers. all 19 ncl-affected tibetan terriers were homozygous for the same haplotype, which extended for 22 consecutive snps spanning 0.95 mb. none of the 20 annotated genes within this target region had previously been associated with human or rodent ncl. we used dna from ncl-affected tibetan terriers to resequence the coding regions and intron-exon borders of several genes harbored within the target region and found a single base pair deletion, c.1623delg, in exon 16 of the positional candidate atp13a2. this deletion produces a frame shift and a predicted premature termination codon. we genotyped all 454 tibetan terrier dna samples in our collection and found all 45 ncl-affected tibetan terriers to be homozygous for the c.1623delg allele. eleven additional c.1623delg homozygotes were either less than 5 years old or lost to follow-up. there were no known cases of ncl in the remaining 398 tibetan terriers, which were either heterozygous (n = 149) or homozygous for the ancestral allele (n = 249). atp13a2 is a member of a group of ion transport genes and has been associated with lysosomes.
mutations in human atp13a2 cause kufor-rakeb syndrome (krs), a rare neurodegenerative disorder with clinical features that include parkinsonism plus spasticity, supranuclear upgaze paresis, and dementia. post-mortem findings in krs have not been reported. we conclude that ncl in tibetan terriers is caused by a mutation in atp13a2. our results suggest that krs may be a form of adult-onset ncl in humans. niemann-pick type c (npc) disease is a progressive neurological disorder characterized by dementia and ataxia, hepatic and pulmonary disease, and death typically within the first or second decade. despite the identification of causative mutations, the pathogenesis is not clear, and therapies to successfully treat npc disease have been ineffective to date. the recent use of intravenously administered 2-hydroxypropyl-beta-cyclodextrin (hpbcd), an fda-designated orphan drug (may 2010), in a small number of children with npc disease is based on favorable treatment outcome data in subcutaneously treated mouse and cat models. to rigorously evaluate the mechanistic, pharmacologic, and toxicity issues associated with hpbcd therapy in npc disease, we have utilized the spontaneous feline npc model harboring a missense mutation in npc1 (p.c955s), orthologous to the most common mutation in juvenile-onset patients. the feline npc model has clinical, neuropathological and biochemical abnormalities similar to those present in juvenile-onset patients, making this model homologous to the most common disease form seen in human patients. we identified that intrathecal administration of hpbcd ameliorated all clinical aspects of neurological disease at least up to 24 weeks of age (an age when untreated cats die) but had no effect on hepatic disease. we identified that while subcutaneous therapy with hpbcd at all doses ameliorated liver disease, only 8000 mg/kg substantially affected neurological disease but also resulted in early death due to pulmonary toxicity.
finally, we identified a dose-related toxic effect of hpbcd on hearing function that had not been described in any other species. leukodystrophies are disorders of myelin synthesis and maintenance that affect cns myelin. they are subdivided into leukodystrophies, hypomyelinating disorders and spongy degenerations. although infrequently seen, several forms have been described in various dog breeds. we present a novel form of complex leukodystrophy consisting of hypomyelination and spongy degeneration that presents primarily with hind end tremors in border terrier puppies. three border terriers from two different litters (and lineages) are described here that presented with a history of shaking movements. the youngest dog was a 3-week-old male. it was the only affected dog in the litter. the other two dogs were 6-week-old female littermates. there were two unaffected males in the same litter. physical examination revealed no abnormalities. on neurological examination, the affected dogs displayed severe hind end tremors, with a characteristic swinging side-to-side movement (best described as ''rumpshaker''). the tremors also involved the head and thoracic limbs, but to a lesser degree, and disappeared when the dogs were asleep or at rest. severe cerebellar ataxia was observed when the dogs ambulated. proprioceptive positioning was delayed in the pelvic limbs of all 3 dogs. spinal reflexes and nociception appeared normal. necropsy was performed in all 3 puppies. no macroscopic changes were observed. histologic evaluation of the cns revealed spongy degeneration and hypomyelination in all funiculi of the cervical and thoracic spinal cord. white matter of the frontal, temporal and parietal cortices had mild multifocal spongy degeneration and hypomyelination, whereas white matter of the cerebellum, medulla and pons showed severe diffuse spongy degeneration and hypomyelination with gliosis.
the combination of reduced myelin formation and spongiform white matter changes in the absence of microglial responses suggests a complex pathogenesis affecting both the oligodendrocytes' capacity to synthesize myelin and the stability of the myelin that was formed. the number of oligodendrocytes and axons appeared subjectively normal, indicating a primarily hypomyelinating process. the clinical and pathological features of this disease have not been described in any other canine leukodystrophy. the primary and most striking clinical feature is the presence of severe tremors in the hind end, causing the ''rumpshaker'' phenotype. genetic studies are underway to determine if the disease is inherited and its mode of inheritance. a syndrome of border collie collapse (bcc) appears to be common in dogs used for working stock. this syndrome has also been called malignant hyperthermia, heat intolerance, exercise-induced collapse and ''wobbles''. a presumptive diagnosis of bcc can only be made by eliminating other causes of exercise intolerance and weakness. the purpose of this study was to describe the clinical features of collapse in affected dogs and determine if there were characteristic clinical or laboratory features at rest or after exercise that could aid in diagnosis. seven adult border collies with a history of collapse during sheep herding (affected) and 5 adult border collies regularly used for sheep herding but showing no signs of exercise intolerance (normal) were evaluated before and after participating in a videotaped 10 minute exercise protocol consisting of a series of continuous short outruns and fetches of three sheep in an outdoor pen. exercise was halted at 10 minutes or earlier if there were signs of gait or mentation abnormalities. pre-exercise evaluation included physical examination, orthopedic and neurological exam.
pre- and immediate post-exercise rectal temperature, pulse and respiration, patellar reflexes, ecg, cbc, serum biochemistry profile, cortisol, arterial blood gas, and plasma lactate and pyruvate concentrations were measured. clinical parameters (gait, temperature, reflexes) and lactate and pyruvate concentrations were evaluated at intervals up to 120 minutes after exercise. additional testing in affected dogs included measurement of acetylcholine receptor antibodies (achr ab) and dna testing for dynamin-associated exercise-induced collapse (deic) and the ryanodine receptor mutation associated with canine malignant hyperthermia (mh). one week after exercise, affected dogs had thoracic radiographs and echocardiography performed and were anesthetized for emg and muscle biopsies. there were no significant differences in temperature, pulse, respiration, or any laboratory parameter at any time point between normal and affected dogs. no arrhythmias were detected. affected dogs were negative for the dna mutations tested and for achr ab. thoracic radiographs, echocardiograms, emgs and muscle biopsies were normal. the 5 normal dogs had no alterations in mentation or gait during or after exercise. three of the affected dogs had exercise halted early (6-9 min) because of altered gait or mentation. all 7 of the affected dogs were abnormal in the 15 minutes following exercise. abnormalities seen in affected dogs included disorientation, dull mentation, swaying, falling to the side, exaggerated lifting of limbs with each step, choppy gait, delayed limb protraction, scuffing of the rear and/or forelegs, and crossing of the legs when turning. all dogs returned to normal by 30 minutes. bcc appears to be an episodic nervous system disorder that can be triggered by exercise. genetic testing excluded deic and the described canine mh mutation.
common causes of exercise intolerance were eliminated, but the cause of collapse in bcc was not determined and no clinical or biochemical marker to aid diagnosis was established. equine cushing's disease (ecd) is common in older horses. the purpose of this study was to determine the frequency of diagnosis, identify prognostic factors and assess owner satisfaction with treatment. the study was a retrospective cohort design evaluating equine accessions reported to the veterinary medical data base (vmdb) and the ohio state university from 1993-2004. proportional accessions, annual incidence and demographic characteristics of horses with ecd were compared with all accessions in the vmdb. medical records for a subset of horses were extracted and owners contacted to obtain long-term follow-up information. two hundred seventeen new cases of ecd were reported to the vmdb. incidence increased from 0.25/1,000 in 1993 to 3.72/1,000 in 2002. eighty-one percent of horses were ≥ 15 years of age. average delay from onset of signs to diagnosis was 180 days (range 1 to 1,824 days). hirsutism (84%) and laminitis (50%) were the most common clinical signs. improvement in one or more signs 2 months after diagnosis was reported by 9/22 (41%) of horse owners. none of the clinical or laboratory data were associated with survival, and 50% of horses were alive 4.6 years after diagnosis. 17/20 (85%) of horses were euthanatized, and 13/17 (76%) were euthanatized due to conditions associated with ecd. twenty-eight of 29 (97%) of horse owners said they would treat a second horse for ecd. ecd is becoming a more frequent diagnosis. fifty percent of horses survived 4.5 years after diagnosis and owners were satisfied with the horse's quality of life. supported by centers of excellence in livestock diseases and human health, college of veterinary medicine, university of tennessee. the role of the hypothalamic-pituitary-adrenal (hpa) axis in sepsis has been the subject of a great deal of research.
the role that the somatogenic axis plays in sepsis is less well understood, and how these two axes interact during critical illness is not clear. the purpose of this study was to assess inter-relationships of adrenocorticotropin (acth), cortisol, and insulin-like growth factor-i (igf-i) in septic and non-septic term foals. blood samples were obtained from term septic foals less than 7 days of age (n = 20) admitted to texas a&m university veterinary medical teaching hospital or mid-atlantic equine hospital. the foals were classified as septic by a sepsis score ≥ 11 and/or a positive blood culture. non-septic term foals less than 7 days of age (n = 8), having a sepsis score < 11 and a negative blood culture, were obtained from texas a&m university veterinary medical teaching hospital and mid-atlantic equine hospital. plasma and serum were processed from whole blood collected by jugular venipuncture upon admission, at 24 hours post admission, and at 5 days post admission or at the time of discharge. plasma concentrations of acth and serum concentrations of cortisol and igf-i were determined by specific rias. data were analyzed using linear mixed-effects modeling with foal modeled as a random effect and day of admission modeled as an ordered categorical variable; post-hoc testing of pair-wise comparisons was made using the method of sidak. significance was set at p < 0.05, and analyses were performed using s-plus software (tibco, inc., seattle, wa). plasma concentrations of acth were not significantly different between septic and non-septic foals, whereas septic foals had greater serum cortisol (37 ± 8 ng/ml vs 25 ± 7 ng/ml) but lower serum igf-i (116 ± 14 ng/ml vs 152 ± 15 ng/ml) relative to non-septic foals, pooled over all sampling times. the positive association of the peripheral blood concentrations of acth and cortisol depended on disease status of the foals.
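the sidak method referenced above tightens the per-comparison significance level so that the family-wise error rate stays at the nominal alpha; a minimal sketch of the standard formula (the choice of three pairwise comparisons is illustrative, matching the three sampling times, and is not taken from the authors' software output):

```python
def sidak_alpha(family_alpha, m):
    # per-comparison level such that m independent comparisons
    # keep the family-wise error rate at family_alpha:
    # 1 - (1 - a_pc)^m = family_alpha  =>  a_pc = 1 - (1 - family_alpha)^(1/m)
    return 1.0 - (1.0 - family_alpha) ** (1.0 / m)

# three sampling times (admission, 24 h, 5 days) -> 3 pairwise comparisons
print(round(sidak_alpha(0.05, 3), 4))
```

for three comparisons the adjusted threshold is just under 0.017, slightly less conservative than the bonferroni value of 0.05/3.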
specifically, cortisol and acth were positively correlated in the septic foals (p = 0.027) but not significantly correlated in the non-septic foals. peripheral concentrations of acth and igf-i were not significantly correlated whether data were pooled overall or stratified by sepsis status. however, peripheral concentrations of cortisol and igf-i were negatively associated (p = 0.019); disease status did not influence this association, although it appeared to be stronger for the septic than the non-septic foals. the negative correlation between serum concentrations of the adrenal axis steroid cortisol and the somatogenic axis peptide igf-i may reflect interactions of these homeorhetic hormones. further studies of these and other metabolic hormones in a greater number of foals are warranted to better understand how these factors contribute to survival or non-survival of critically ill foals. botulism is a potentially fatal paralytic disorder for which definitive diagnosis is difficult. the purpose of this study was to investigate whether repetitive stimulation of the common peroneal nerve would aid in the diagnosis of suspected botulism in foals. four healthy foals were used for comparison with 3 foals with suspected botulism. controls were anesthetized and affected foals were sedated to avoid the risks of anesthesia. the common peroneal nerve was chosen for its superficial location and easy access. stimulating electrodes were placed along the common peroneal nerve. for recording, the active and reference electrodes were positioned over the midpoint and distal end of the extensor digitorum longus muscle, respectively. repeated supramaximal stimulation of the nerve was performed utilizing a range of frequencies (1 to 50 hz). amplitude, area under the curve, and percentages of decrement or increment of each m wave over subsequent potentials for each set of stimuli were analyzed. baseline m waves were decreased in affected foals compared to controls.
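the decrement/increment percentages described above compare each subsequent m wave with the first of the stimulus train; a minimal sketch with hypothetical amplitudes (the numbers are illustrative, not measured values from these foals):

```python
def percent_change(baseline, value):
    # negative result = decrement, positive = increment,
    # expressed relative to the first m wave of the train
    return 100.0 * (value - baseline) / baseline

# hypothetical m-wave amplitudes (mv) over a high-frequency train
train = [1.0, 1.2, 1.5, 1.8]
print([round(percent_change(train[0], a), 1) for a in train[1:]])  # → [20.0, 50.0, 80.0]
```

a progressively positive series like this would be read as an incremental response, the pattern the affected foals showed at 50 hz.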
a decremental response was seen at all frequencies in control foals. decremental responses were also observed in affected foals at low frequencies. however, an incremental response in amplitude and area under the curve was seen in all affected foals at 50 hz. reduced baseline m waves with incremental responses at high stimulation rates are supportive of a presynaptic neuromuscular disorder, of which botulism was the most likely cause in these foals. repetitive nerve stimulation is a safe, simple, fast, and non-invasive technique that can aid in the diagnosis of suspected botulism in foals. this study examined the frequency with which dogs are exposed to e. chaffeensis and e. ewingii relative to e. canis, which is transmitted by the more ubiquitously distributed brown dog tick (rhipicephalus sanguineus). a total of 6,512 canine serum samples, ranging from 182 to 614 from each of the 14 participating institutions, collected at random from clinical accessions, diagnostic laboratories and/or shelters, were evaluated. all serum samples were tested by three microtiter plate elisas using species-specific peptides for antibodies to e. canis, e. chaffeensis and e. ewingii. zip code information for sample origin was provided by the collaborator and was used to assess seroprevalence by region. comparisons were evaluated using the chi-square test. seroreactivity to at least 2 of the 3 ehrlichia spp. was found in samples from every institution. both mississippi and oklahoma had greater than 7% aggregate seroprevalence. samples from ohio had the lowest aggregate seroprevalence (1.0%), with only 4 dogs e. canis seropositive, one e. ewingii seropositive and no e. chaffeensis seroreactors. the geospatial pattern of e. chaffeensis and e. ewingii seropositive samples was similar to that previously reported based on modeling seroreactivity to e. chaffeensis in white-tailed deer as well as the distribution of human monocytic ehrlichiosis (hme) cases reported by the cdc.
this study provides the first large-scale regional documentation of canine exposure to these three ehrlichia spp., highlighting where infections most commonly occur and thus identifying areas where heightened awareness about these emerging vector-borne pathogens is warranted. urinary incontinence (ui) occurs in approximately 20% of spayed female dogs. the most common cause is urethral sphincter mechanism incompetency (usmi). pharmacological agents are effective; however, not all dogs respond, and dogs may become refractory to treatment over time. urethral bulking, where a compound is injected submucosally in the urethra, has been used in women and in female dogs with urinary incontinence. new synthetic compounds have been used in human medicine; the most promising is polydimethylsiloxane (pdms), which has been shown to be more effective than glutaraldehyde cross-linked collagen. the purpose of this descriptive clinical trial is to evaluate the safety and effectiveness of pdms urethral bulking agent (pdms uba) in client-owned, spayed female dogs with naturally occurring ui due to usmi. twenty-two spayed female dogs were included. dogs had a median age of 6 years (2 to 11 years). eighteen dog breeds were represented, and dogs weighed a median of 29.9 kg (7.3 to 62.7 kg). average duration of ui was 2.5 ± 2.3 years; 18/22 dogs had been treated medically, of which 2/18 were continent, 14/18 were improved, and 2/18 had no improvement. dogs were deemed healthy based on results of physical examination, complete blood cell counts, plasma biochemical analysis, and urinalysis; urine cultures were negative. dogs were anesthetized, positioned in dorsal recumbency, and cystoscopy performed using a 2.7 mm, 0- or 30-degree, 18 cm rigid cystoscope. urethral bulking was performed with pdms uba. on average, 2.5 ± 0.9 ml were injected in 3 to 5 locations, approximately 1 to 1.5 cm distal to the trigone, submucosally in the proximal urethra. good coaptation was achieved in all dogs.
the procedure took on average 15.9 ± 4.3 minutes. one dog experienced urethral obstruction after the procedure; a foley catheter was inserted for approximately 12 hours and removed, at which time she urinated normally and was continent. three dogs experienced an acute allergic reaction characterized by blepharedema and urticaria, treated successfully with diphenhydramine. dogs were discharged on the day of the procedure except for the one dog that experienced urethral obstruction. all dogs were treated with meloxicam (0.1 mg/kg po q24h for 3 days). owners were contacted on the day after discharge, and 21/22 dogs were continent; 1/22 dogs was improved. dogs were re-evaluated 1 week after discharge, and 21/22 dogs were continent and 1/22 dogs was improved. polyneuropathy in large breed dogs is a relatively common clinical problem for which the genetic basis is generally unknown. the first cases of polyneuropathy in the leonberger breed (leonberger polyneuropathy or lpn) were identified in 1999 by one of the authors (gds) and a report published in 2003 (muscle nerve 27:471-477). in this report a spontaneous, distal and symmetrical polyneuropathy with onset between 1 to 9 years of age was described and characterized clinically, electrophysiologically, histologically and morphometrically. there were striking similarities between lpn and the charcot-marie-tooth group of human inherited sensory and motor polyneuropathies, which have many known genetic mutations. a genome-wide case-control association study for lpn was performed with 53 cases and 42 controls on high-density 170k canine snp arrays and revealed a significantly associated region on cfa 16 (praw = 2.36 × 10^-10, pgenome = 9.99 × 10^-5). a clear association of an approximately 1 mb cfa16 haplotype with cases (p = 1.71 × 10^-8) was observed, particularly with those cases that were affected more severely and at a younger age (p = 2.55 × 10^-11).
a positional candidate gene, arhgef10, which has previously been associated with peripheral nerve abnormalities in humans, was sequenced, revealing a deletion that results in a frame shift and premature stop codon. of all leonbergers with young-onset lpn (before 4 years), 48.5% (32 of 66) have two copies of this deletion, and, of all young-onset leonbergers that are nerve biopsy positive for lpn, 59.4% (19 of 32) have two copies of this deletion. importantly, nearly all dogs carrying two copies of the deletion (32 of 34 or 94.1%) are affected with lpn by the age of 4 years. the leonberger breed was generated from crossing several breeds, including the st. bernard, and a polyneuropathy clinically and histologically similar to lpn occurs in this breed. to determine if the arhgef10 mutation was associated with polyneuropathy in the st. bernard, dna was extracted from archived frozen muscle biopsy specimens from clinical cases (n = 3). the identical arhgef10 deletion was identified in these cases. startle disease or hyperekplexia is caused by defects in mammalian glycinergic neurotransmission resulting in an exaggerated startle reflex and extensor hypertonia triggered by noise or touch. in humans and animals, startle disease is typically caused by mutations in one of three genes (glra1, glrb, and slc6a5) encoding postsynaptic glycine receptor subunits (α1 and β) or a presynaptic glycine transporter (glyt2). a litter of seven irish wolfhounds was recently identified in which two puppies developed muscle stiffness and tremor beginning at 5-7 days of age post-partum. signs were dramatic when the puppies were handled and resolved when the puppies were relaxed or sleeping. both puppies were euthanized due to ongoing stiffness, tremor and breathing difficulties.
necropsies were performed, but no microscopic pathological abnormalities were identified in the peripheral or central nervous system. based on the clinical signs, exons from the three candidate genes were amplified by pcr from isolated genomic dna and directly sequenced. no deleterious polymorphisms were identified in either glra1 or glrb. however, difficulties were experienced in amplifying slc6a5 exons 2 and 3 from affected animals, although control samples were positive, suggesting that the pcr primer designs and conditions were not at fault. further pcrs revealed that the reason for this anomaly was the presence of a homozygous 4.2 kb deletion encompassing exons 2 and 3 of the glyt2 gene in both affected animals. this deletion is predicted to result in the loss of part of the large cytoplasmic n-terminus that is vital for trafficking of glyt2 to synaptic sites, and a loss of all subsequent transmembrane domains via a frameshift. this genetic lesion was confirmed by defining the deletion breakpoint, southern blotting and multiplex ligation-dependent probe amplification (mlpa). this analysis enabled the development of a rapid genotyping test that revealed heterozygosity for the deletion in the dam and sire and three other siblings, suggesting recessive inheritance of this disorder. wider testing of related animals has identified a total of 18 carriers of the slc6a5 deletion and enabled the identification of non-carrier animals to guide future breeding strategies.

insulin resistance (ir), obesity, and type 2 diabetes affect glucagon-like peptide 1 (glp-1) concentrations in humans and rodents, but this incretin hormone has not been examined in horses. we therefore hypothesized that glp-1 concentrations would change in horses as obesity and ir were induced or exacerbated by overfeeding. six horses previously diagnosed with equine metabolic syndrome were provided with twice the amount of digestible energy required for maintenance as sweet feed and hay for 8 weeks.
intravenous and oral glucose tolerance tests (ogtts) were performed at 0 and 8 weeks. effects of time and period (0 and 8 weeks) were assessed by repeated-measures anova. mean body weight increased from 438 ± 61 kg (range, 381 to 533 kg) to 464 ± 61 kg (range, 394 to 550 kg) over 8 weeks, with individual horse weight gain varying from 2 to 10%. mean body condition score increased (p = 0.006) from 6 ± 2 (range, 4 to 8.5) to 8 ± 1 (range, 7 to 9). three horses developed mild laminitis. glucagon-like peptide 1 concentrations increased over time during ogtts (p = 0.023), but the period × time effect was not significant (p = 0.141). area under the glp-1 curve remained unaffected by weight gain, whereas area under the insulin curve increased (p = 0.003) over time, indicating a reduction in insulin sensitivity. obesity and ir were induced or exacerbated when horses previously diagnosed with ems were overfed, but glp-1 concentrations did not change as a result.

hypertonic saline solution (7.2%) (hss) is an intravenous fluid used for the emergency treatment of intravascular volume deficits. the use of this fluid in horses with severe dehydration is controversial. the purpose of this study was to compare the use of hss and isotonic saline solution (0.9%) (iss) for the emergency treatment of endurance horses. endurance horses eliminated from competition and requiring intravenous fluid therapy were eligible for enrollment in the study. twenty-two horses were randomly assigned to receive 4 ml/kg of either hss or iss along with 5 l of lactated ringer's solution (lrs). following this bolus, all horses were treated with an additional 10 l of lrs. blood and urine samples were collected before, during and after treatment. data were compared using two-way anova with repeated measures. as compared to iss, hss horses showed a greater decrease in pcv (p = 0.04), total protein (p = 0.01), albumin (p = 0.01), and globulin (p = 0.02).
hss horses showed a greater increase in sodium and chloride (p < 0.001) as compared to iss horses. horses receiving hss had a shorter time to urination (p = 0.03) and lower urine specific gravity (p < 0.001) than those receiving iss. results of this study indicate that hss may provide faster restoration of intravascular volume deficits than iss in endurance horses receiving emergency medical treatment. more profound electrolyte changes should be expected with hss, however.

β2-adrenergic receptor agonists have been shown to increase erythrocyte carbonic anhydrase activity, which may stimulate the jacobs-stewart cycle and increase pulmonary circulation transvascular fluid fluxes during exercise. an increase in pulmonary transvascular fluid fluxes (jv-a) and a consequent increase in pulmonary interstitial fluid would be detrimental for alveolar o2 exchange during the fast erythrocyte transit time across the pulmonary capillaries. therefore, we hypothesised that treatment with an inhaled β2-adrenergic receptor agonist would increase jv-a and the alveolar-arterial po2 difference (aado2) during exercise. six stb horses were exercised on a high-speed treadmill at 80% vo2peak until fatigue. horses were randomly assigned to treatment with salbutamol (sal: 500 mcg) or placebo (control: con) inhalation via aeromask® 60 min prior to exercise, with crossover treatment used at the repeated exercise test (8 days later). arterial and mixed-venous blood, as well as co2 elimination and o2 uptake, were sampled simultaneously at rest, during exercise at 60-sec intervals until fatigue, and into recovery. blood gases were analyzed. aado2 was calculated using the inspired po2 (149 mmhg) and the blood partial pressures of o2 and co2. blood volume (%) changes across the lung were calculated from changes in hemoglobin and hematocrit values in venous and arterial blood. cardiac output (q) was calculated using the fick equation.
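a minimal sketch of the two derived quantities named in the methods: cardiac output from the fick equation and aado2 from the inspired po2 and blood gas tensions. the abstract does not state its exact constants, so the simplified alveolar gas equation with an assumed respiratory exchange ratio of 0.8 is used here, and all input values are illustrative rather than study data:

```python
def alveolar_po2(pio2_mmhg, paco2_mmhg, rer=0.8):
    """Simplified alveolar gas equation: PAO2 = PiO2 - PaCO2 / RER (assumed form)."""
    return pio2_mmhg - paco2_mmhg / rer

def aado2(pio2_mmhg, pao2_mmhg, paco2_mmhg, rer=0.8):
    """Alveolar-arterial PO2 difference (mmHg): PAO2 minus arterial PO2."""
    return alveolar_po2(pio2_mmhg, paco2_mmhg, rer) - pao2_mmhg

def fick_cardiac_output(vo2_l_min, cao2_ml_dl, cvo2_ml_dl):
    """Fick equation: Q (L/min) = VO2 / (CaO2 - CvO2).

    VO2 is given in L/min (converted to mL/min), and O2 contents in
    mL O2 per dL blood (the factor 10 converts mL/dL to mL/L).
    """
    return vo2_l_min * 1000.0 / ((cao2_ml_dl - cvo2_ml_dl) * 10.0)

# illustrative resting values: PiO2 149 mmHg, PaO2 95, PaCO2 40,
# VO2 0.25 L/min, CaO2 20 mL/dL, CvO2 15 mL/dL
print(aado2(149, 95, 40))                 # alveolar-arterial difference, mmHg
print(fick_cardiac_output(0.25, 20, 15))  # cardiac output, L/min
```

with these example inputs the calculation gives an aado2 of 4 mmHg and a cardiac output of 5 L/min, which are typical resting values.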
jv-a was calculated using q and blood volume changes across the lung. variables were analyzed using two-way repeated-measures anova (p < 0.05). the duration of exercise to fatigue was 4.3 ± 0.3 min and 4.4 ± 0.4 min in con and sal, respectively. at rest sal had no effect on jv-a, oxygen consumption (vo2), blood oxygen saturation (so2) or aado2 (p > 0.05). at the onset of exercise jv-a increased in con and sal (p < 0.0001) and at fatigue reached 10.0 ± 2.4 l/min and 10.0 ± 1.6 l/min, respectively. treatment with sal had no effect on jv-a during exercise (p = 0.9). at the onset of exercise so2 and vo2 increased in con and sal (p < 0.0001). treatment with sal had no effect on so2 or vo2 during exercise (p > 0.05). aado2 increased during exercise in con and sal (p < 0.0001) and at fatigue reached 19.4 ± 2.3 mmhg and 18.1 ± 2.4 mmhg, respectively. treatment with sal had no effect on aado2 during exercise (p = 0.3). the inhaled β2-adrenergic receptor agonist salbutamol at a dose of 500 mcg given 60 min before exercise did not affect the duration of exercise to fatigue, jv-a, vo2, so2 or aado2. therefore, it had no detrimental effect on alveolar-capillary diffusion distance or ventilation/perfusion mismatch in exercising horses.

inflammatory airway disease (iad) and recurrent airway obstruction (rao) represent two classes of equine lung inflammatory diseases that may share some similar immunologic mechanisms. there is evidence that th2 cytokines and il-17 play some role in rao. iad is a common condition in horses, but its pathophysiology is still not understood. the aim of the present study was therefore to determine the mrna expression of th1, th2 and th17 inflammatory cytokines, to understand the immunological mechanisms of iad. the mrna expression of ten inflammatory cytokines and chemokines was measured in the bronchoalveolar lavage fluid (balf) of seventeen horses with iad and compared with ten control horses.
the horses were selected based on (1) their clinical signs, (2) the inflammatory cell counts in the balf, (3) their physical examination and (4) their medical history. the mrna expression of il-5, il-1β, il-6, il-8 and il-10 was significantly up-regulated in balf from horses with iad. furthermore, the balf samples were subdivided into two groups based on the differential cell counts: (1) balf with increased mast cells (iad-mast) and (2) balf with increased neutrophils (iad-neutro). il-4 was significantly down-regulated in the iad-neutro group compared to the iad-mast group. il-17, il-5 and il-8 were significantly up-regulated in the iad-neutro group compared to the iad-mast group. the present study shows that iad in horses is characterized by a th2 and a th17 mrna inflammatory expression profile and that different immunological mechanisms are involved in mast cell or neutrophil accumulation in the balf of horses with iad.

β2-adrenoreceptor (β2-ar) agonists are a class of medications that promote smooth muscle relaxation and bronchodilation in horses and humans with airway disease. activated human peripheral blood lymphocytes (pbls) also respond to β2-ar agonist stimulation by attenuating the production of cytokines associated with the pathogenesis of asthma and recurrent airway obstruction (rao). the aim of this study was to develop an in vitro technique for measuring the response of equine pbls to stimulation with salbutamol, a β2-ar agonist. this method was then used to compare the response of pbls from rao-affected and non-affected horses to β2-ar agonist stimulation. pbls from 4 rao-affected and 4 non-affected horses were cultured (4 × 10⁶/ml) in rpmi complete media with concanavalin a (cona, 2 µg/10⁶ cells) for 0, 1, or 2 days and then stimulated with salbutamol (30 minutes). using flow cytometric techniques, response was measured by detecting protein kinase a phosphorylation of vasodilator-stimulated phosphoprotein (vasp). results were verified by western blot analysis.
activated pbls that had been incubated with cona for one day were pre-incubated with a β2- or β1-adrenoreceptor antagonist (ici 118,551, sigma®; atenolol, sigma®) for 15 minutes, followed by 30 minutes of salbutamol (500 nm) stimulation. results were analyzed by anova or ancova and differences were considered significant when p < 0.05. response to the β2-agonist was only observed in activated pbls (pre-cultured with con a) and was greater in cells from rao horses as compared to cells from non-affected horses. the addition of the β2-antagonist attenuated the response of pbls to salbutamol while the addition of the β1-antagonist had no effect. these findings indicate that activated pbls from rao-affected horses have a greater response to salbutamol as compared to pbls from non-affected horses, and this response is mediated mainly through the β2-ar. human β2-ar are known to be polymorphic, and this polymorphism results in a variable response to β2-agonist binding that affects long-term outcome in human asthmatics. further studies are required to determine if the difference in response of pbls from rao-affected as compared to non-affected horses is due to genetic polymorphism in the equine β2-ar, and whether this difference is associated with a propensity for horses to develop equine rao.

key: cord-022633-fr55uod6 title: saem abstracts, plenary session date: 2012-04-26 journal: acad emerg med doi: 10.1111/j.1553-2712.2012.01332.x doc_id: 22633 cord_uid: fr55uod6

objectives: we sought to determine if the ocp policy resulted in a meaningful and sustained improvement in ed throughput and output metrics. methods: a prospective pre-post experimental study was conducted using administrative data from 15 community and tertiary centers across the province. the study phases consisted of the 8 months from february to september 2010 compared against the same months in 2011.
operational data for all centres were collected through the edis tracking systems used in the province. the ocp included 3 main triggers: ed bed occupancy >110%, at least 35% of ed stretchers blocked by patients awaiting an inpatient bed or disposition decision, and no stretcher available for high-acuity patients. when all criteria were met, selected boarded patients were moved to an inpatient unit (non-traditional care space if no bed available). the primary outcome was ed length of stay (los) for admitted patients. the ed load of boarded patients from 10-11 am was reported

the editors of academic emergency medicine (aem) are honored to present these abstracts accepted for presentation at the 2012 annual meeting of the society for academic emergency medicine (saem), may 9 to 12 in chicago, illinois. these abstracts represent countless hours of labor, exciting intellectual discovery, and unending dedication by our specialty's academicians. we are grateful for their consistent enthusiasm, and are privileged to publish these brief summaries of their research. this year, saem received 1172 abstracts for consideration, and accepted 746. each abstract was independently reviewed by up to six dedicated topic experts blinded to the identity of the authors. final determinations for scientific presentation were made by the saem program scientific subcommittee, co-chaired by ali s. raja, md, mba, mph and steven b. bird, md, and the saem program committee, chaired by michael l. hochberg, md. their decisions were based on the final review scores and the time and space available at the annual meeting for oral and poster presentations. there were also 125 innovation in emergency medicine education (ieme) abstracts submitted, of which 37 were accepted. the ieme subcommittee was co-chaired by joanna leuck, md and laurie thibodeau, md. we present these abstracts as they were received, with minimal proofreading and copy editing.
any questions related to the content of the abstracts should be directed to the authors. presentation numbers precede the abstract titles; these match the listings for the various oral and poster sessions at the annual meeting in chicago, as well as the abstract numbers (not page numbers) shown in the key word and author indexes at the end of this supplement. all authors attested to institutional review board or animal care and use committee approval at the time of abstract submission, when relevant. abstracts marked as "late-breakers" are prospective research projects that were still in the process of data collection at the time of the december abstract deadline, but were deemed by the scientific subcommittee to be of exceptional interest. these projects will be completed by the time of the annual meeting; data shown here may be preliminary or interim. on behalf of the editors of aem, the membership of saem, and the leadership of our specialty, we sincerely thank our research colleagues for these contributions, and their continuing efforts to expand our knowledge base and allow us to better treat our patients. david

background: two to ten percent of patients evaluated in the emergency department (ed) present with altered mental status (ams). the prevalence of non-convulsive seizure (ncs) and other electroencephalographic (eeg) abnormalities in this population is not known. this information is needed to make recommendations regarding the routine use of emergent eeg in ams patients. objectives: to identify the prevalence of ncs and other eeg abnormalities in ed patients with ams. methods: an ongoing prospective study at two academic urban eds. inclusion: patients ≥13 years old with ams. exclusion: an easily correctable cause of ams (e.g. hypoglycemia, opioid overdose). a 30-minute eeg with the standard 19 electrodes was performed on each subject as soon as possible after presentation (usually within 1 hour).
outcome: the rate of eeg abnormalities based on blinded review of all eegs by two board-certified epileptologists. descriptive statistics are used to report eeg findings. frequencies are reported as percentages with 95% confidence intervals (ci), and inter-rater variability is reported with kappa. results: the interim analysis was performed on 130 consecutive patients (target sample size: 260) enrolled from may to october 2011 (median age: 61, range 13-100, 40% male). eegs for 20 patients were reported uninterpretable by at least one rater (6 by both raters). of the remaining 110, only 30 (27%, 95% ci 20-36%) were normal according to either rater (n = 15 by both). the most common abnormality was background slowing (n = 75, 68%, 95% ci 59-76%) by either rater (n = 47 by both), indicating underlying encephalopathy. ncs was diagnosed in 8 patients (7%, 95% ci 4-14%) by at least one rater (n = 4 by both), including 6 (5%, 95% ci 2-12%) patients in non-convulsive status epilepticus (ncse). 29 patients (26%, 95% ci 19-35%) had interictal epileptiform discharges read by at least one rater (n = 12 by both), indicating cortical irritability and an increased risk of spontaneous seizure. inter-rater reliability for eeg interpretations was modest (kappa: 0.53, 95% ci 0.39-0.67).

objectives: to define diagnostic sbi and non-bacterial (non-sbi) biosignatures using rna microarrays in febrile infants presenting to emergency departments (eds). methods: we prospectively collected blood for rna microarray analysis in addition to routine screening tests including white blood cell (wbc) counts, urinalyses, cultures of blood, urine, and cerebrospinal fluid, and viral studies in febrile infants 60 days of age in 22 eds. we defined sbi as bacteremia, urinary tract infection (uti), or bacterial meningitis.
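the inter-rater statistic quoted above (kappa = 0.53) is cohen's kappa, which for a binary reading (e.g. abnormal vs. normal eeg) can be computed from a 2×2 agreement table. a minimal sketch, with made-up counts rather than the study's data:

```python
def cohens_kappa(both_pos, a_only, b_only, both_neg):
    """Cohen's kappa for two raters on a binary outcome.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from the marginal totals.
    Arguments are the four cells of the 2x2 agreement table.
    """
    n = both_pos + a_only + b_only + both_neg
    p_o = (both_pos + both_neg) / n                  # observed agreement
    a_pos = (both_pos + a_only) / n                  # rater A "positive" rate
    b_pos = (both_pos + b_only) / n                  # rater B "positive" rate
    p_e = a_pos * b_pos + (1 - a_pos) * (1 - b_pos)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# illustrative table: 60 abnormal by both, 15 by A only, 10 by B only, 25 normal by both
print(cohens_kappa(60, 15, 10, 25))
```

note that kappa corrects raw percent agreement for chance: two raters who each call most readings abnormal will agree often even when rating independently, and kappa discounts exactly that.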
we used class comparisons (mann-whitney p < 0.01, benjamini for mtc, and a 1.25-fold change filter), modular gene analysis, and k-nn algorithms to define and validate sbi and non-sbi biosignatures in a subset of samples. results: 81% (939/1162) of febrile infants were evaluated for sbi. 6.8% (64/939) had sbi (14 (1.5%) bacteremia, 56 (6.0%) utis, and 4 (0.4%) bacterial meningitis). infants with sbis had higher mean temperatures, and higher wbc, neutrophil, and band counts. we analyzed rna biosignatures on 141 febrile infants: 35 sbis (2 meningitis, 5 bacteremia, 28 uti), 106 non-sbis (49 influenza, 29 enterovirus, 28 undefined viral infections), and 11 healthy controls. class comparisons identified 1,288 differentially expressed genes between sbis and non-sbis. modular analysis revealed overexpression of interferon-related genes in non-sbis and inflammation-related genes in sbis. 232 genes were differentially expressed (p < 0.01) in each of the three non-sbi groups vs the sbi group. unsupervised cluster analysis of these 232 genes correctly clustered 91% (128/141) of non-sbis and sbis. the k-nn algorithm identified 33 discriminatory genes in a training set (30 non-sbis vs 17 sbis) which classified an independent test set (76 non-sbis vs 18 sbis) with 87% accuracy. four misclassified sbis had over-expression of interferon-related genes, suggesting viral-bacterial co-infections, which was confirmed in one patient.

background: improving maternal, newborn, and child health (mnch) is a leading priority worldwide. however, limited frontline health care capacity is a major barrier to improving mnch in developing countries. objectives: we sought to develop, implement, and evaluate an evidence-based maternal, newborn, and child survival (mncs) package for frontline health workers (fhws). we hypothesized that fhws could be trained and equipped to manage and refer the leading mnch emergencies. methods: setting: south sudan, which suffers from some of the world's worst mnch indices.
assessment/intervention: a multi-modal needs assessment was conducted to develop a best-evidence package comprised of targeted trainings, pictorial checklists, and reusable equipment and commodities (figure 1). program implementation utilized a training-of-trainers model. evaluation: (1) pre/post knowledge assessments, (2) pre/post objective structured clinical examinations (osces), (3) focus group discussions, and (4) closed-response questionnaires. results: between nov 2010 and oct 2011, 72 local trainers and 708 fhws were trained in 7 of the 10 states in south sudan. knowledge assessments among trainers (n = 57) improved significantly from 62.7% (sd 20.1) to 92.0% (sd 11.8) (p < 0.001). mean scores on a maternal osce and a newborn osce pre-training, immediately post-training, and upon 2-3 month follow-up are shown in the table. closed-response questionnaires with 54 fhws revealed high levels of satisfaction, use, and confidence with mncs materials. participants reported an average of 3.0 referrals (range 0-20) to a higher level of care in the 2-3 months since training. furthermore, 78.3% of fhws were more likely to refer patients as a result of the training program. during seven focus group discussions with trained fhws, respondents (n = 41) reported high satisfaction with mncs trainings, commodities, and checklists, with few barriers to implementation or use. conclusion: these findings suggest mncs has led to improvements in south sudanese fhws' knowledge, skills, and referral practices with respect to appropriate management of mnch emergencies.

no study has compared various lactate measurements to determine the optimal parameter to target. objectives: to compare the association of blood lactate kinetics with survival in patients with septic shock undergoing early quantitative resuscitation.
methods: preplanned analysis of a multicenter ed-based rct of early sepsis resuscitation targeting three physiological variables: cvp, map, and either central venous oxygen saturation or lactate clearance. inclusion criteria: suspected infection, two or more sirs criteria, and either sbp <90 mmhg after a fluid bolus or lactate >4 mmol/l. all patients had an initial lactate measured, with a repeat at two hours. normalization of lactate was defined as a lactate decline to <2.0 mmol/l in a patient with an initial lactate ≥2.0. absolute lactate clearance (initial - delayed value) and relative clearance ((absolute clearance)/(initial value) × 100) were calculated if the initial lactate was ≥2.0. the outcome was in-hospital survival. receiver operating characteristic curves were constructed and areas under the curve (auc) were calculated. differences in proportions of survival between the two groups at different lactate cutoffs were analyzed using 95% ci and fisher exact tests. results: of 272 included patients, the median initial lactate was 3.1 mmol/l (iqr 1.7, 5.8), and the median absolute and relative lactate clearance were 1 mmol/l (iqr 0.3, 2.5) and 37% (iqr 14, 57). an initial lactate >2.0 mmol/l was seen in 187/272 (69%), and 68/187 (36%) patients normalized their lactate. overall

sutures on trunk and extremity lacerations that present in the ed. the use of absorbable sutures in the ed setting confers several advantages: patients do not need to return for suture removal, which results in a reduction in ed crowding, ed wait times, missed work or school days, and stressful procedures (suture removal) for children. objectives: the primary objective of this study is to compare the cosmetic outcome of trunk and extremity lacerations repaired using absorbable versus nonabsorbable sutures in children and adults. a secondary objective is to compare complication rates between the two groups.
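the lactate kinetics definitions given in the methods (normalization, absolute clearance, relative clearance) can be sketched directly; the function and field names below are illustrative, not from the study:

```python
def lactate_kinetics(initial, repeat, normal_cutoff=2.0):
    """Lactate kinetics from an initial and a 2-hour repeat value (mmol/L).

    Per the definitions above: clearances are calculated only when the
    initial lactate is >= 2.0 mmol/L; absolute clearance is
    (initial - repeat), relative clearance is that as a percent of the
    initial value, and normalization means the repeat fell below 2.0.
    """
    if initial < normal_cutoff:
        return None  # clearance not defined for a normal initial lactate
    absolute = initial - repeat
    relative = absolute / initial * 100.0
    return {
        "absolute": absolute,               # mmol/L
        "relative": relative,               # percent of initial
        "normalized": repeat < normal_cutoff,
    }

# example: initial 4.0 mmol/L falling to 1.5 mmol/L at two hours
print(lactate_kinetics(4.0, 1.5))
```

for that example the absolute clearance is 2.5 mmol/L, the relative clearance is 62.5%, and the lactate has normalized.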
methods: eligible patients with lacerations were randomly allocated to have their wounds repaired with vicryl rapide (absorbable) or prolene (nonabsorbable) sutures. at a 10-day follow-up visit the wounds were evaluated for infection and dehiscence. after 3 months, patients were asked to return to have a photograph of the wound taken. two blinded plastic surgeons rated the cosmetic outcome of each wound using a previously validated 100 mm visual analogue scale (vas). a vas score difference of 15 mm or greater was considered to be clinically significant. results: of the 100 patients enrolled, 45 have currently completed the study, including 19 in the vicryl rapide group and 26 in the prolene group. there were no significant differences in the age, race, sex, length of wound, number of sutures, or layers of repair in the two groups. the observers' mean vas for the vicryl rapide group was 55.76 mm and that for the prolene group was 55.9 mm (95% ci 44.77-67.03), resulting in a mean difference of 0.14 mm (95% ci -16.95 to 17.23, p = 0.98). there were no significant differences in the rates of infection, dehiscence, or keloid formation between the two groups. conclusion: the use of vicryl rapide instead of nonabsorbable sutures for the repair of lacerations on the trunk and extremities should be considered by emergency physicians as it is an alternative that provides a similar cosmetic outcome.

objectives: to determine the relationship between infection and time from injury to closure, and the characteristics of lacerations closed before and after 12 hours of injury. methods: over an 18-month period, a prospective multi-center cohort study was conducted at a teaching hospital, trauma center and community hospital. emergency physicians completed a structured data form when treating patients with lacerations. patients were followed to determine whether they had suffered a wound infection requiring treatment and to determine a cosmetic outcome rating.
we compared infection rates and clinical characteristics of lacerations with chi-square and t-tests as appropriate. results: there were 2663 patients with lacerations; 2342 had documented times from injury to closure. the mean times from injury to repair for infected and non-infected wounds were 2.4 vs. 3.0 hrs (p = 0.39), with 78% of lacerations treated within 3 hours and 4% (85) treated 12 hours after injury. there were no differences in the infection rates for lacerations closed before (2.9%, 95% ci 2.2-3.7%) or after (2.1%, 95% ci 0.4-6.0%) 6 hours, and before (3.0%, 95% ci 2.3-3.8%) or after (1.2%, 95% ci 0.03-6.4%) 12 hours. the patients treated 12 hours after injury tended to be older (41 vs. 34 yrs, p = 0.02) and fewer were treated with primary closure (85% vs. 96%, p < 0.0001). comparing wounds 12 or more hours after injury with more recent wounds, there was no effect of location on the decision to close. wounds closed after 12 hours did not differ from wounds closed before 12 hours with respect to use of prophylactic antibiotics, type of repair, length of laceration, or cosmetic outcome. conclusion: closing older lacerations, even those greater than 12 hours after injury, does not appear to be associated with any increased risk of infection or adverse outcomes. excellent irrigation and decontamination over the last 30 years may have led to this change in outcome.

background: deep burns may result in significant scarring leading to aesthetic disfigurement and functional disability. tgf-β is a growth factor that plays a significant role in wound healing and scar formation. objectives: the current study was designed to test the hypothesis that a novel tgf-β antagonist would reduce scar contracture compared with its vehicle in a porcine partial-thickness burn model. methods: ninety-six mid-dermal contact burns were created on the backs and flanks of four anesthetized young swine using a 150 gm aluminum bar preheated to 80 °c for 20 seconds.
the burns were randomized to treatment with a topical tgf-β antagonist at one of three concentrations (0, 187, and 375 µl) in replicates of 8 in each pig. dressing changes and reapplication of the topical therapy were performed every 2 days for 2 weeks, then twice weekly for an additional 2 weeks. burns were photographed and full-thickness biopsies were obtained at 5, 7, 9, 14, and 28 days to determine reepithelialization and scar formation grossly and microscopically. a sample of 32 burns in each group had 80% power to detect a 10% difference in percentage scar contracture. results: a total of 32 burns were created in each of the three study groups. burns treated with the high-dose tgf-β antagonist healed with less scar contracture than those treated with the low dose and control (52 ± 20%, 63 ± 15%, and 62 ± 14%; anova p = 0.02). additionally, burns treated with the higher, but not the lower, dose of tgf-β antagonist healed with significantly fewer full-thickness scars than controls (62.5% vs. 100% vs. 93.8%, respectively; p < 0.001). there were no infections and no differences in the percentage of wound reepithelialization among all study groups at any of the time points. conclusion: treatment of mid-dermal porcine contact burns with the higher-dose tgf-β antagonist reduced scar contracture and the rate of deep scars compared with the low dose and controls.

background: diabetic ketoacidosis (dka) is a common and lethal complication of diabetes. the american diabetes association recommends treating adult patients with a bolus dose of regular insulin followed by a continuous insulin infusion. the ada also suggests a glucose correction rate of 75-100 mg/dl/hr to minimize complications. objectives: compare the effect of bolus dose insulin therapy with insulin infusion to insulin infusion alone on serum glucose, bicarbonate, and ph in the initial treatment of dka. methods: consecutive dka patients were screened in the ed between march '06 and june '10.
inclusion criteria were: age >18 years, glucose >350 mg/dl, and serum bicarbonate ≤15 or ketonemia or ketonuria. exclusion criteria were: congestive heart failure, current hemodialysis, pregnancy, or inability to consent. no patient was enrolled more than once. patients were randomized to receive either regular insulin 0.1 units/kg or the same volume of normal saline. patients and medical and research staff were blinded. baseline glucose, electrolytes, and venous blood gases were collected on arrival. bolus insulin or placebo was then administered, and all enrolled patients received regular insulin at a rate of 0.1 unit/kg/hr, as well as fluid and potassium repletion per the research protocol. glucose, electrolytes, and venous blood gases were drawn hourly for 4 hours. data between the two groups were compared using unpaired t-tests. results: 99 patients were enrolled, with 30 being excluded. 35 patients received bolus insulin; 34 received placebo. no significant differences were noted in initial glucose, ph, bicarbonate, age, or weight between the two groups. after the first hour, glucose levels in the insulin group decreased by 151 mg/dl compared to 94 mg/dl in the placebo group (p = 0.0391, 95% ci 2.7 to 102.0). changes in mean glucose levels, ph, bicarbonate level, and anion gap (ag) were not statistically different between the two groups for the remainder of the 4-hour study period. there was no difference in the incidence of hypoglycemia in the two groups. conclusion: administering a bolus dose of regular insulin decreased mean glucose levels more than placebo, although only for the first hour. there was no difference in the change in ph, serum bicarbonate or anion gap at any interval. this suggests that bolus dose insulin may not add significant benefit in the emergency management of dka.

ihca; 3. return of spontaneous circulation (rosc). traumatic cardiac arrests were excluded.
we recorded baseline demographics, arrest event characteristics, follow-up vitals and laboratory data, and in-hospital mortality. apache ii scores were calculated at the time of rosc, and at 24 hrs, 48 hrs, and 72 hrs. we used simple descriptive statistics to describe the study population. univariate logistic regression was used to predict mortality with apache ii as a continuous predictor variable. discrimination of apache ii scores was assessed using the area under the curve (auc) of the receiver operating characteristic (roc) curve. results: a total of 229 patients were analyzed. the median age was 70 years (iqr: 56-79) and 32% were female. apache ii score was a significant predictor of mortality for both ohca and ihca at baseline and at all follow-up time points (all p < 0.01). discrimination of the score increased over time and achieved very good discrimination after 24 hrs (table, figure). conclusion: the ability of the apache ii score to predict mortality improves over time in the 72 hours following cardiac arrest. these data suggest that after 24 hours, apache ii scoring is a useful severity-of-illness score in all post-cardiac-arrest patients.

background: admission hyperglycemia has been described as a mortality risk factor for septic non-diabetics, but the known association of hyperglycemia with hyperlactatemia (a validated mortality risk factor in sepsis) has not previously been accounted for. objectives: to determine whether the association of hyperglycemia with mortality remains significant when adjusted for concurrent hyperlactatemia. methods: this was a post-hoc, nested analysis of a single-center cohort study. providers identified study subjects during their ed encounters; all data were collected from the electronic medical record. patients: non-diabetic adult ed patients with a provider-suspected infection, two or more systemic inflammatory response syndrome criteria, and concurrent lactate and glucose testing in the ed.
setting: the ed of an urban teaching hospital; 2007 to 2009. analysis: to evaluate the association of hyperglycemia (glucose >200 mg/dl) with hyperlactatemia (lactate ≥4.0 mmol/l), a logistic regression model was created; outcome-hyperlactatemia; primary variable of interest-hyperglycemia. a second model was created to determine if concurrent hyperlactatemia affects hyperglycemia's association with mortality; outcome-28-day mortality; primary risk variable-hyperglycemia with an interaction term for concurrent hyperlactatemia. both models were adjusted for demographics, comorbidities, presenting infectious syndrome, and objective evidence of renal, respiratory, hematologic, or cardiovascular dysfunction. results: 1236 ed patients were included; mean age 76 ± 19 years. 133 (9%) subjects were hyperglycemic, 182 (13%) hyperlactatemic, and 225 (16%) died within 28 days of the initial ed visit. after adjustment, hyperglycemia was significantly associated with simultaneous hyperlactatemia (or 3.9, 95% ci 2.48-5.98). hyperglycemia with concurrent hyperlactatemia was associated with increased mortality risk (or 4.4, 95% ci 2.27-8.59), but hyperglycemia in the absence of simultaneous hyperlactatemia was not (or 0.86, 95% ci 0.45-1.65). conclusion: in this cohort of septic adult non-diabetic patients, mortality risk did not increase with hyperglycemia unless associated with simultaneous hyperlactatemia. the previously reported association of hyperglycemia with mortality in this population may be due to the association of hyperglycemia with hyperlactatemia. background: near infrared spectroscopy (sto2) represents a measure of perfusion that provides the treating physician with an assessment of a patient's shock state and response to therapy. it has been shown to correlate with lactate and acid/base status. it is not known if using information from this monitor to guide resuscitation will result in improved patient outcomes. 
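The adjusted odds ratios above come from multivariable logistic regression, but the underlying odds-ratio arithmetic and its standard log-scale (Woolf) confidence interval can be sketched from a 2x2 table. The counts below are hypothetical illustrations, not the study's data.

```python
import math

# Odds ratio from a 2x2 table with a Woolf (log-scale) 95% CI:
#   a, b = exposed with / without the outcome
#   c, d = unexposed with / without the outcome
def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(10, 90, 5, 95))  # hypothetical counts
```

An interaction term, as used in the second model above, effectively estimates a separate odds ratio for the exposure within each level of the interacting variable, which is why hyperglycemia can carry risk only in the presence of hyperlactatemia.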
objectives: to compare the resuscitation of patients in shock when the sto2 monitor is or is not being used to guide resuscitation. methods: this was a prospective study of patients undergoing resuscitation in the ed for shock from any cause. during alternating 30-day periods, physicians were blinded to the data from the monitor, followed by 30 days in which physicians were able to see the information from the sto2 monitor and were instructed to resuscitate patients to a target sto2 value of 75. adult patients (age >17) with a shock index (si) of >0.9 (si = heart rate/systolic blood pressure) or a blood pressure <80 mmhg systolic who underwent resuscitation were enrolled. patients had a sto2 monitor placed on the thenar eminence of their least-injured hand. data from the sto2 monitor were recorded continuously and noted every minute along with blood pressure, heart rate, and oxygen saturation. all treatments were recorded. patients' charts were reviewed to determine the diagnosis, icu-free days in the 28 days after enrollment, inpatient los, and 28-day mortality. data were compared using wilcoxon rank sum and chi-square tests. results: 107 patients were enrolled, 51 during blinded periods and 56 during unblinded periods. the median presenting shock index was 1.24 (range 0.5 to 4.0) for the blinded group and 1.10 (0.5-3.3) for the unblinded group (p = 0.13). the median time in department was 70 minutes (range 22-407) for the blinded and 76 minutes (range 11-275) for the unblinded groups (p = 0.99). the median hospital los was 1 day (range 0-30) for the blinded group, and 2 days (range 0-23) in the unblinded group (p = 0.63). the mean icu-free days were 22 ± 9 for the blinded group and 19 ± 11 for the unblinded group (p = 0.26). among patients where the physician indicated using the sto2 monitor data to guide patient care, the icu-free days were 21.4 ± 9 for the blinded group and 16.3 ± 12 for the unblinded group (p = 0.06). 
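The enrollment screen above is defined entirely by arithmetic: shock index is heart rate divided by systolic blood pressure, with SI > 0.9 or SBP < 80 mmHg qualifying in adults. A minimal sketch of that screen; the thresholds come from the abstract, while the function name and parameter names are ours.

```python
# Enrollment screen sketch: SI = heart rate / systolic BP; adults qualify
# when SI > 0.9 or SBP < 80 mmHg (thresholds taken from the abstract).
def qualifies_for_enrollment(heart_rate, sbp, age):
    shock_index = heart_rate / sbp
    return age > 17 and (shock_index > 0.9 or sbp < 80)

# A patient matching the blinded group's median SI of 1.24 would qualify.
print(qualifies_for_enrollment(heart_rate=124, sbp=100, age=45))
```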
background: inducing therapeutic hypothermia (th) using 4°c iv fluids in resuscitated cardiac arrest patients has been shown to be feasible and effective. limited research exists assessing the efficiency of this cooling method. objectives: the objective was to determine an efficient infusion method for keeping fluid close to 4°c upon exiting an iv. it was hypothesized that colder temperatures would be associated with both higher flow rate and insulation of the fluid bag. methods: efficiency was studied by assessing change in fluid temperature (°c) during the infusion, under three laboratory conditions. each condition was performed four times using 1 liter bags of normal saline. fluid was infused into a 1000 ml beaker through 10 gtts tubing. flow rate was controlled using a tubing clamp and in-line transducer with a flowmeter, while temperature was continuously monitored in a side port at the terminal end of the iv tubing using a digital thermometer. the three conditions were: chilled fluid infused at 40 ml/min (equivalent to 30 ml/kg/hr for an 80 kg patient), at 105 ml/min, and at 105 ml/min using a chilled and insulated pressure bag. descriptive statistics and analysis of variance were performed to assess changes in fluid temperature. results: the average fluid temperatures at time 0 were 3.40 (95% ci 3.12-3.69) (40 ml/min), 3.35 (95% ci 3.25-3.45) (105 ml/min), and 2.92 (95% ci 2.40-3.45) (105 ml/min + insulation). there was no significant difference in starting temperature between groups (p = 0.16). the average fluid temperatures after 100 ml had been infused were 10.02 (95% ci 9.30-10.74) (40 ml/min), 7.35 (95% ci 6.91-7.79) (105 ml/min), and 6.95 (95% ci 6.47-7.43) (105 ml/min + insulation). the higher flow rate groups had significantly lower temperatures than the lower flow rate group after 100 ml of fluid had been infused (p < 0.001). 
the average fluid temperatures after 1000 ml had been infused were 16.77 (95% ci 15.96-17.58) (40 ml/min), 11.40 (95% ci 11.18-11.61) (105 ml/min), and 7.75 (95% ci 7.55-7.99) (105 ml/min + insulation). there was a significant difference in temperature between all three groups after 1000 ml of fluid had been infused (p < 0.001). conclusion: in a laboratory setting, the most efficient method of infusing cold fluid appears to be one that both insulates the fluid bag and uses a faster infusion rate. fluid bolus. patients were categorized by presence of vasoplegic or tissue dysoxic shock. demographics and sequential organ failure assessment (sofa) scores were evaluated between the groups. the primary outcome was in-hospital mortality. data were analyzed using t-tests, chi-square tests, and proportion differences with 95% confidence intervals as appropriate. results: a total of 242 patients were included: 89 patients with vasoplegic shock and 153 with tissue dysoxic shock. there were no significant differences in age (61 vs. 58 years), caucasian race (53% vs. 58%), or male sex (57% vs. 52%) between the dysoxic shock and vasoplegic shock groups, respectively. the group with vasoplegic shock had a lower initial sofa score than did the group with tissue dysoxic shock (5.7 vs. 7.3 points, p = 0.0002). the primary outcome of in-hospital mortality occurred in 8/89 (9%) of patients with vasoplegic shock compared to 40/153 (26%) in the group with tissue dysoxic shock (proportion difference 17%, 95% ci 7-26%, p < 0.0001). conclusion: in this analysis of patients with septic shock, we found a significant difference in in-hospital mortality between patients with vasoplegic versus tissue dysoxic septic shock. these findings suggest a need to consider these differences when designing future studies of septic shock therapies. 
background: the pre-shock population, ed sepsis patients with tissue hypoperfusion (lactate of 2.0-3.9 mm), commonly deteriorates after admission and requires transfer to critical care. objectives: to determine the physiologic parameters and disease severity indices in the ed pre-shock sepsis population that predict clinical deterioration. we hypothesized that neither initial physiologic parameters nor organ function scores would be predictive. methods: design: retrospective analysis of a prospectively maintained registry of sepsis patients with lactate measurements. setting: an urban, academic medical center. participants: the pre-shock population, defined as adult ed sepsis patients with either elevated lactate (2.0-3.9 mm) or transient hypotension (any sbp <90 mmhg) receiving iv antibiotics and admitted to a medical floor. consecutive patients meeting pre-shock criteria were enrolled over a 1-year period. patients with overt shock in the ed, pregnancy, or acute trauma were excluded. outcome: primary patient-centered outcome of increased organ failure (sequential organ failure assessment [sofa] score increase >1 point, mechanical ventilation, or vasopressor utilization) within 72 hours of admission or in-hospital mortality. results: we identified 248 pre-shock patients from 2649 screened. the primary outcome was met in 54% of the cohort and 44% were transferred to the icu from a medical floor. patients meeting the outcome of increased organ failure had a greater shock index (1.02 vs 0.93, p = 0.042) and heart rate (115 vs 105, p < 0.001), with no difference in initial lactate, age, map, or exposure to hypotension (sbp <100 mmhg). there was no difference in the predisposition, infection, response, and organ dysfunction (piro) score between groups (6.4 vs 5.7, p = 0.052). 
outcome patients had similar initial levels of organ dysfunction but had higher sofa scores at 24, 48, and 72 hours, a higher icu transfer rate (60 vs 24%, p < 0.001), and increased icu and hospital lengths of stay. conclusion: the pre-shock sepsis population has a high incidence of clinical deterioration, progressive organ failure, and icu transfer. physiologic data in the ed were unable to differentiate the pre-shock sepsis patients who developed increased organ failure. this study supports the need for an objective organ failure assessment in the emergency department to supplement clinical decision-making. background: lipopolysaccharide (lps) has long been recognized to initiate the host inflammatory response to infection with gram negative bacteria (gnb). large clinical trials of potentially very expensive therapies continue to have the objective of reducing circulating lps. previous studies have found varying prevalence of lps in blood of patients with severe sepsis. compared with sepsis trials conducted 20 years ago, the frequency of gnb in culture specimens from emergency department (ed) patients enrolled in clinical trials of severe sepsis has decreased. objectives: to test the hypothesis that prior to antibiotic administration, circulating lps can be detected in the plasma of fewer than 10% of ed patients with severe sepsis. methods: secondary analysis of a prospective ed-based rct of early quantitative resuscitation for severe sepsis. blood specimens were drawn at the time severe sepsis was recognized, defined as two or more systemic inflammatory criteria and a serum lactate >4 mm or sbp <90 mmhg after fluid challenge. blood was drawn in edta prior to antibiotic administration or within the first several hours, immediately centrifuged, and plasma frozen at -80°c. plasma lps was quantified using the limulus amebocyte lysate (lal) assay by a technician blinded to all clinical data. 
results: 180 patients were enrolled with 140 plasma samples available for testing. median age was 59 ± 17 years, 50% were female, and overall mortality was 18%. forty of 140 patients (29%) had any culture specimen positive for gnb, including 21 (15%) with positive blood cultures. only five specimens had detectable lps, including two with a gnb-positive culture specimen; three were lps-positive without gnb in any culture. prevalence of detectable lps was 3.5% (ci: 1.5%-8.1%). conclusion: the frequency of detectable lps in antibiotic-naive plasma is too low to serve as a useful diagnostic test or therapeutic target in ed patients with severe sepsis. the data raise the question of whether post-antibiotic plasma may have a higher frequency of detectable lps. background: egdt is known to reduce mortality in septic patients. there is no evidence to date that delineates the role of using a risk stratification tool, such as the mortality in emergency department sepsis (meds) score, to determine which subgroups of patients may have a greater benefit with egdt. objectives: our objective was to determine if our egdt protocol differentially affects mortality based on the severity of illness using meds score. methods: this study is a retrospective chart review of 243 patients, conducted at an urban tertiary care center, after implementing an egdt protocol on july 1, 2008 (figure). this study compares in-hospital mortality, length of stay (los) in icu, and los in ed between the control group (126 patients from 1/1/07-12/31/07) and the post-implementation group (117 patients from 7/1/08-6/30/09), using meds score as a risk stratification tool. inclusion criteria: patients who presented to our ed with a suspected infection, and two or more sirs criteria, a map <65 mmhg, a sbp <90 mmhg. exclusion criteria: age <18, death on arrival to ed, dnr or dni, emergent surgical intervention, or those with an acute myocardial infarction or chf exacerbation. 
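The LPS prevalence above, 3.5% (95% CI 1.5%-8.1%) for 5 positives out of 140 samples, is consistent with a Wilson score interval; that this is the method used is our inference from reproducing the reported bounds. A sketch of the calculation:

```python
import math

# Wilson score 95% interval for a binomial proportion: better behaved than
# the simple normal approximation when events are rare (here 5 of 140).
def wilson_ci(x, n, z=1.96):
    p = x / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(5, 140)
print(round(lo * 100, 1), round(hi * 100, 1))  # 1.5 8.1, matching the abstract
```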
a two-sample t-test was used to show that the mean age and number of comorbidities were similar between the control and study groups (p = 0.27 and 0.87, respectively). mortality was compared and adjusted for meds score using logistic regression. the odds ratios and predicted probabilities of death were generated using the fitted logistic regression model. ed and icu los were compared using mood's median test. results: when controlling for illness severity using meds score, the relative risk (rr) of death with egdt is about half that of the control group (rr = 0.52, 95% ci 0.278-0.973, p = 0.04). also, by applying meds score to risk stratify patients into various groups of illness severity, we found no specific groups where egdt is more efficacious at reducing the predicted probability of death (table 1). without controlling for meds score, there is a trend toward a reduction in absolute mortality of 9.7% when egdt is used (control = 30.2%, study = 20.5%, p = 0.086). egdt leads to a 40.3% reduction in the median los in icu (control = 124 hours, study = 74 hours, p = 0.03), without increasing los in ed (control = 6 hours, study = 7 hours, p = 0.50). conclusion: egdt is beneficial in patients with severe sepsis or septic shock, regardless of their meds score. background: in patients experiencing acute coronary syndrome (acs), prompt diagnosis is critical in achieving the best health outcome. while ecg analysis is usually sufficient to diagnose acs in cases of st elevation, acs without st elevation is reliably diagnosed through serial testing of cardiac troponin i (ctni). point-of-care testing (poct) for ctni by venipuncture has been proven a more rapid means to diagnosis than central laboratory testing. 
implementing fingerstick testing for ctni in place of standard venipuncture methods would allow for faster and easier procurement of patients' ctni levels, as well as increase the likelihood of starting a rapid test for ctni in the prehospital setting, which could allow for even earlier diagnosis of acs. objectives: to determine if fingerstick blood samples yield accurate and reliable troponin measurements compared to conventional venous blood draws using the i-stat poc device. methods: this experimental study was performed in the ed of a quaternary care suburban medical center between june and august 2011. fingerstick blood samples were obtained from adult ed patients for whom standard (venipuncture) poc troponin testing was ordered. the time between fingerstick and standard draws was kept as narrow as possible. ctni assays were performed at the bedside using the i-stat 1 (abbott point of care). results: 94 samples from 87 patients were analyzed by both fingerstick and standard ed poct methods (see table). four resulted in cartridge error. compared to the "gold standard" of ed poct, fingerstick testing had a positive predictive value of 100%, negative predictive value of 96%, sensitivity of 79%, and specificity of 100%. no significant difference in ctni level was found between the two methods, with a nonparametric intraclass correlation coefficient of 0.994 (95% ci 0.992-0.996, p < 0.001). conclusion: whole blood fingerstick ctni testing using the i-stat device is suitable for rapid evaluation of ctni level in prehospital and ed settings. however, results must be interpreted with caution if they fall close to the cutoff between normal and elevated levels. additional testing on a larger sample would be beneficial. the practicality and clinical benefit of using fingerstick ctni testing in the ems setting must still be assessed. 
background: adjudication of the diagnosis of acute myocardial infarction (ami) in clinical studies typically occurs at each site of subject enrollment (local) or by experts at an independent site (central). from 2000 to 2007, the troponin (ctn) element of the diagnosis was predicated on the local laboratories, using a mix of the 99th percentile reference ctn and roc-determined cutpoints. in 2007, the universal definition of ami (udami) defined it by the 99th percentile reference alone. objectives: to compare the diagnosis rates of ami as determined by local adjudication vs. central adjudication using udami criteria. methods: retrospective analysis of data from the myeloperoxidase in the diagnosis of acute coronary syndromes (acs) study (midas), an 18-center prospective study with enrollment from 12/19/06 to 9/20/07 of patients with suspected acs presenting to the ed <8 hours after symptom onset and in whom serial ctn and objective cardiac perfusion testing was planned. adjudication of acs was done by single local principal investigators using clinical data and local ctn cutpoints from 13 different ctn assays, applying the 2000 definition. central adjudication was done after completion of the midas primary analysis using the same data and local ctn assay, but by experts at three different institutions, using the udami and the manufacturer's 99th percentile ctn cutpoint, and not blinded to local adjudications. discrepant diagnoses were resolved by consensus. local vs. central ctn cutpoints differed for six assays, with central cutpoints lower in all. statistics were by chi-square and kappa. results: excluding 11 cases deemed indeterminate by central adjudication, 1096 cases were successfully adjudicated. local adjudication resulted in 104 ami (9.5% of total) and 992 non-ami; central adjudication resulted in 134 (12.2%) ami and 962 non-ami. overall, 44 local diagnoses (4%) were changed, either from non-ami to ami or from ami to non-ami (p < 0.001). 
interrater reliability across both methods was kappa = 0.79 (p < 0.001). for acs diagnosis, local adjudication identified 252 acs cases (23%) and 854 non-acs, while central adjudication identified 275 acs (25%) and 831 non-acs. overall, 61 local diagnoses (6%) were changed, either from non-acs to acs or from acs to non-acs (p < 0.001). interrater reliability was kappa = 0.85 (p < 0.001). conclusion: central and local adjudication resulted in significantly different rates of ami and acs diagnosis. however, overall agreement of the two methods across these two diagnoses was acceptable. occur four times more often in cocaine users. biomarkers myeloperoxidase (mpo) and c-reactive protein (crp) have potential in the diagnosis of acs. objectives: to evaluate the utility of mpo and crp in the diagnosis of acs in patients presenting to the ed with cocaine-associated chest pain and compare the predictive value to nonusers. we hypothesized that these markers may be more sensitive for acs in nonusers given the underlying pathophysiology of enhanced plaque inflammation. methods: a secondary analysis of a cohort study of enrolled ed patients who received evaluation for acs at an urban, tertiary care hospital. structured data collection at presentation included demographics, chest pain history, lab, and ecg data. subjects included those with self-reported or lab-confirmed cocaine use and chest pain. they were matched to controls based on age, sex, and race. our main outcome was diagnosis of acs at index visit. we determined median mpo and crp values, calculated maximal auc for roc curves, and found cut-points to maximize sensitivity and specificity. data are presented with 95% ci. results: overall, 95 patients in the cocaine-positive group and 86 patients in the nonusers group had mpo and crp levels measured. 
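The kappa = 0.79 agreement statistic can be checked against the 2x2 table implied by the reported figures: 104 local and 134 central AMI diagnoses with 44 discordant among 1096 cases imply 97 both-AMI, 7 local-only, 37 central-only, and 955 both non-AMI (our reconstruction). A sketch of Cohen's kappa on that table:

```python
# Cohen's kappa: agreement beyond chance between two raters on a binary call.
#   a = both yes, b = rater1-only yes, c = rater2-only yes, d = both no
def cohens_kappa(a, b, c, d):
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Counts reconstructed from the abstract's marginals and discordance.
print(round(cohens_kappa(97, 7, 37, 955), 2))  # 0.79, matching the abstract
```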
patients had a median age of 47 (iqr 40-52), 90% were black or african american, and 62% were male (p > 0.05 between groups). fifteen patients were diagnosed with acs: 8 patients in the cocaine group and 7 in the nonusers group. comparing cocaine users to nonusers, there was no difference in mpo (median 162 v 136 ng/ml; p = 0.78) or crp (3 [iqr 1-9] v 5 [iqr 1-15] mg/l; p = 0.08). the auc for mpo was 0.65 (95% ci 0.39-0.90) v 0.54 (95% ci 0.19-0.73). the optimal cut-point to maximize sensitivity and specificity was 242 ng/ml, which gave a sensitivity of 0.42 and specificity of 0.75. using this cutpoint, 57% v 29% of acs in cocaine users vs the nonusers would be identified. the auc for crp was 0.63 (95% ci 0.39-0.88) in cocaine users vs 0.73 (95% ci 0.52-0.95) in nonusers. the optimal cut point was 11.9 mg/l with a sensitivity of 0.67 and specificity of 0.79. using this cutpoint, 43% v 88% of acs in cocaine users and nonusers would have been identified. conclusion: the diagnostic accuracy of mpo and crp is not different in cocaine users than nonusers and does not appear to have sufficient discriminatory ability in either cohort. results: 18 hrs of moderate pe caused a significant decrease in rv heart function in rats treated with the solvent for bay 41-8543: peak systolic pressure (psp) decreased from 39 ± 1.5 mmhg, control, to 16 ± 1.5, pe; +dp/dt decreased from 1192 ± 93 mmhg/sec to 463 ± 77; -dp/dt decreased from -576 ± 60 mmhg/sec to -251 ± 9. treatment of rats with bay 41-8543 significantly improved all three indices of rv heart function (psp 29 ± 2.6, +dp/dt 1109 ± 116, -dp/dt -426 ± 69). 5 hrs of severe pe also caused significant rv dysfunction (psp 25 ± 2, -dp/dt -356 ± 28) and treatment with bay 41-8543 produced protection of rv heart function (psp 34 ± 2, -dp/dt -535 ± 41) similar to the 18 hr moderate pe model. 
conclusion: experimental pe produced significant rv dysfunction, which was ameliorated by treatment of the animals with the soluble guanylate cyclase stimulator bay 41-8543. 1 hospital of the university of pennsylvania, philadelphia, pa; 2 cooper university hospital, camden, nj. background: patients who present to the ed with symptoms of potential acute coronary syndrome (acs) can be safely discharged home after a negative coronary computerized tomographic angiography (cta). however, the duration of time for which a negative coronary cta can be used to inform decision making when patients have recurrent symptoms is unknown. objectives: we examined patients who received more than one coronary cta for evaluation of acs to determine whether they had disease progression, as defined by crossing the threshold from noncritical (<50% maximal stenosis) to potentially critical disease. methods: we performed a structured comprehensive record search of all coronary ctas performed from 2005 to 2010 at a tertiary care health system. low-to-intermediate risk ed patients who received two or more coronary ctas, at least one from an ed evaluation for potential acs, were identified. patients who were revascularized between scans were excluded. we collected demographic data, clinical course, time between scans, and number of ed visits between scans. record review was structured and done by trained abstractors. our main outcome was progression of coronary stenosis between scans, specifically crossing the threshold from noncritical to potentially critical disease. results: overall, 32 patients met study criteria (median age 45, interquartile range [iqr] 37.5-48; 56% female; 88% black). the median time between studies was 27.3 months. 22 patients did not have stenosis in any vessel on either coronary cta, two studies showed increasing stenosis of <20%, and the rest showed "improvement," most due to better imaging quality. 
no patient initially below the 50% threshold subsequently exceeded it (0%; 95% ci, 0-11.0%). patients also had varying numbers of ed visits (median number of visits 5, range 0-23), and numbers of ed visits for potentially cardiac complaints (median 1, range 0-6); 10 were re-admitted for potentially cardiac complaints (for example, chest pain or shortness of breath), and 9 received further provocative cardiac testing, all of which had negative results. conclusion: we did not find clinically significant disease progression within a 2 year time frame in patients who had a negative coronary cta, despite a high number of repeat visits. this suggests that prior negative coronary cta may be able to be used to inform decision making within this time period. 42.7-48.6) compared to non tro ct patients. there was no significant difference in image quality between tro ct images and those of dedicated ct scans in any studies performing this comparison. similarly, there was no significant difference between tro ct and other diagnostic modalities in regards to length of stay or admission rate. when compared to conventional coronary angiography as the gold standard for evaluation of cad, tro ct had the following pooled diagnostic accuracy estimates: sensitivity 0.94 conclusion: tro chest ct is comparable to dedicated pe, coronary, or ad ct in regard to image quality, length of stay, and admission rate and is highly accurate for detecting cad. the utility of tro ct depends on the relative pre-test probabilities of the conditions being assessed and its role is yet to be clearly defined. tro ct, however, involves increased radiation exposure and contrast volume and for this reason clinicians should be selective in its use. background: coronary computed tomographic angiography (ccta) has high sensitivity, specificity, accuracy, and prognostic value for coronary artery disease (cad) and acs. however, how a ccta informs subsequent use of prescription medication is unclear. 
objectives: to determine if detection of critical or noncritical cad on ccta is associated with initiation of aspirin and statins for patients who presented to the ed with chest pain. we hypothesized that aspirin and statins would be more likely to be prescribed to patients with noncritical disease relative to those without any cad. methods: prospective cohort study of patients who received ccta as part of evaluation of chest pain in the ed or observation unit. patients were contacted and medical records were reviewed to obtain clinical follow-up for up to the year after ccta. the main outcome was new prescription of aspirin or statin. cad severity on ccta was graded as absent, mild (1% to 49%), moderate (50% to 69%), or severe (≥70%) stenosis. logistic regression was used to assess the association of stenosis severity with new medication prescription; covariates were determined a priori. results: 859 patients who had ccta performed consented to participate in this study or met waiver of consent for record review only (median age, , 59% female, 71% black). median follow-up time was 333 days, iqr 70-725 days. at baseline, 13% of the total cohort was already prescribed aspirin and 8% statin medication. two hundred seventy-nine (32%) patients were found to have stenosis in at least one vessel. in patients with absent, mild, moderate, and severe cad on ccta, aspirin was initiated in 11%, 34%, 52%, and 55%, respectively; statins were initiated in 7%, 22%, 32%, and 53% of patients. after adjustment for age, race, sex, hypertension, diabetes, cholesterol, tobacco use, and admission to the hospital after ccta, higher grades of cad severity were independently associated with greater post-ccta use of aspirin (or 1.9 per grade, 95% ci 1.4-2.2, p < 0.001) and statins (or 1.9, 95% ci 1.5-2.4, p < 0.001). conclusion: greater cad severity on ccta is associated with increased medication prescription for cad. 
patients with noncritical disease are more likely than patients without any disease to receive aspirin and statins. future studies should examine whether these changes lead to decreased hospitalizations and improved cardiovascular health. background: hess et al. developed a clinical decision rule for patients with acute chest pain consisting of the absence of five predictors: ischemic ecg changes not known to be old, elevated initial or 6-hour troponin level, known coronary disease, ''typical'' pain, and age over 50. patients less than 40 required only a single troponin evaluation. objectives: to test the hypothesis that patients less than 40 years old without these criteria are at <1% risk for major adverse cardiovascular events (mace) including death, ami, pci, and cabg. methods: we performed a secondary analysis of several combined prospective cohort studies that enrolled ed patients who received an evaluation for acs in an urban ed from 1999 to 2009. cocaine users and stemi patients were excluded. structured data collection at presentation included demographics, pain description, history, lab, and ecg data for all studies. hospital course was followed daily. thirty-day follow up was done by telephone. our main outcome was 30-day mace using objective criteria. the secondary outcome was potential change in ed disposition due to application of the rule. descriptive statistics and 95% cis were used. results: of 9289 visits for potential acs, patients had a mean age of 52.4 ± 14.7 yrs; 68% were black and 59% female. there were 638 patients (6.9%) with 30-day cv events (93 dead, 384 ami, 298 pci). 
sequential application of the rule criteria excluded patients based upon: ischemic ecg changes not known to be old (n = 434, 30% mace rate), elevated initial troponin level (n = 237, 60% mace), known coronary disease (n = 1622, 11% mace), "typical" pain (n = 3179, 3% mace), and age over 40 (n = 2690, 3.4% mace), leaving 1127 patients less than 40 with 0.8% mace [95% ci, 0.4-1.5%]. of this cohort, 70% were discharged home from the ed by the treating physician without application of this rule. adding a second negative troponin in patients 40-50 years old identified a group of 1139 patients with a 2.0% rate of mace [95% ci, 1.3-3.0%] and a 48% discharge rate. conclusion: the hess rule appears to identify a cohort of patients at approximately 1% risk of 30-day mace, and may enhance discharge of young patients. however, even without application of this rule, the 70% of young patients at low risk are already being discharged home based upon clinical judgment. background: a clinical decision support system (cdss) incorporates evidence-based medicine into clinical practice, but this technology is underutilized in the ed. a cdss can be integrated directly into an electronic medical record (emr) to improve physician efficiency and ease of use. the christopher study investigators validated a clinical decision rule for patients with suspected pulmonary embolism (pe). the rule stratifies patients using wells' criteria to undergo either d-dimer testing or a ct angiogram (ct). the effect of this decision rule, integrated as a cdss into the emr, on ordering cts has not been studied. objectives: to assess the effect of a mandatory cdss on the ordering of d-dimers and cts for patients with suspected pe. methods: we assessed the number of cts ordered for patients with suspected pe before and after integrating a mandatory cdss in an urban community ed. physicians were educated regarding cdss use prior to implementation. 
the cdss advised physicians as to whether a negative d-dimer alone excluded pe or if a ct was required based on wells' criteria. the emr required physicians to complete the cdss prior to ordering the ct. however, physicians maintained the ability to order a ct regardless of the cdss recommendation. patients ≥18 years of age presenting to the ed with a chief complaint of chest pain, dyspnea, syncope, or palpitations were included in the data analysis. we compared the proportion of d-dimers and cts ordered during the 8-month periods immediately before and after implementing the cdss. all 27 physicians who worked in the ed during both time periods were included in the analysis. patients with an allergy to intravenous contrast agents, renal insufficiency, or pregnancy were excluded. results were analyzed using a chi-square test. results: a total of 11,931 patients were included in the data analysis (6054 pre- and 5877 post-implementation). cts were ordered for 215 patients (3.6%) in the pre-implementation group and 226 patients (3.8%) in the post-implementation group; p = 0.396. a d-dimer was ordered for 392 patients (6.5%) in the pre-implementation group and 382 patients (6.5%) in the post-implementation group; p = 0.958. conclusion: in this single-center study, emr integration of a mandatory cdss for evaluation of pe did not significantly alter ordering patterns of cts and d-dimers. identification of patients with low-risk pulmonary emboli suitable for discharge from the emergency department mike zimmer, keith e. kocher university of michigan, ann arbor, mi background: recent data, including a large, multicenter randomized controlled trial, suggest that a low-risk cohort of patients diagnosed with pulmonary embolism (pe) exists who can be safely discharged from the ed for outpatient treatment. objectives: to determine if there is a similar cohort at our institution who have a low rate of complications from pe suitable for outpatient treatment.
methods: this was a retrospective chart review at a single academic tertiary referral center with an annual ed volume of 80,000 patients. all adult ed patients who were diagnosed with pe during a 24-month period from 11/1/09 through 10/31/11 were identified. the pulmonary embolism severity index (pesi) score, a previously validated clinical decision rule to risk stratify patients with pe, was calculated. patients with high pesi (>85) were excluded. additional exclusion criteria included patients who were at high risk of complications from initiation of therapeutic anticoagulation and those patients with other clear indications for admission to the hospital. the remaining cohort of patients with low risk pe (pesi ≤85) was included in the final analysis. outcomes were measured at 14 and 90 days after pe diagnosis and included death, major bleeding, and objectively confirmed recurrent venous thromboembolism (vte). results: during the study period, 298 total patients were diagnosed with pe. there were 172 (58%) patients categorized as ''low risk'' (pesi ≤85), with 42 removed because of various pre-defined exclusion criteria. of the remaining 130 (44%) patients suitable for outpatient treatment, 5 patients (3.8%; 95% ci, 0.5%-7.2%) had one or more negative outcomes by 90 days. this included 2 (1.5%; 95% ci, 0%-3.7%) major bleeding events, 2 (1.5%; 95% ci, 0%-3.7%) recurrent vte, and 2 (1.5%; 95% ci, 0%-3.7%) deaths. none of the deaths were attributable to pe or anticoagulation. one patient suffered both a recurrent vte and died within 90 days. both patients who died within 90 days were transitioned to hospice care because of worsening metastatic burden. at 14 days, there was 1 bleeding event (0.8%; 95% ci, 0%-2.3%), no recurrent vte, and no deaths. the average hospital length of stay for these patients was 2.8 days (sd ±1.6).
conclusion: over 40% of our patients diagnosed with pe in the ed may have been suitable for outpatient treatment, with 4% suffering a negative outcome within 90 days and 0.8% suffering a negative outcome within 14 days. in addition, the average hospital length of stay for these patients was 2.8 days, which may represent a potential cost savings if these patients had been managed as outpatients. our experience supports previous studies that suggest the safety of outpatient treatment of patients diagnosed with pe in the ed. given the potential savings related to a decreased need for hospitalization, these results have health policy implications and support the feasibility of creating protocols to facilitate this clinical practice change. background: chest x-rays (cxrs) are commonly obtained on ed chest pain patients presenting with suspected acute coronary syndrome (acs). a recently derived clinical decision rule (cdr) determined that patients who have no history of congestive heart failure, have never smoked, and have a normal lung examination do not require a cxr in the ed. objectives: to validate the diagnostic accuracy of the hess cxr cdr for ed chest pain patients with suspected acs. methods: this was a prospective observational study of a convenience sample of chest pain patients over 24 years old with suspected acs who presented to a single urban academic ed. the primary outcome was the ability of the cdr to identify patients with abnormalities on cxr requiring acute ed intervention. data were collected by research associates using the chart and physician interviews. abnormalities on cxr and specific interventions were predetermined, with a positive cxr defined as one with abnormality requiring ed intervention, and a negative cxr defined as either normal or abnormal but not requiring ed intervention. the final radiologist report was used as a reference standard for cxr interpretation. 
a second radiologist, blinded to the initial radiologist's report, reviewed the cxrs of patients meeting the cdr criteria to calculate inter-observer agreement. patients were followed up by chart review and telephone interview 30 days after presentation. results: between january and august 2011, 178 patients were enrolled, of whom 38 (21%) were excluded and 10 (5.6%) did not receive cxrs in the ed. of the 130 remaining patients, 74 (57%) met the cdr. the cdr identified all patients with a positive cxr (sensitivity = 100%, 95%ci 40-100%). the cdr identified 73 of the 126 patients with a negative cxr (specificity = 58%, 95%ci 49-67%). the positive likelihood ratio was 2.4 (95%ci 1.9-2.9). inter-observer agreement between radiologists was substantial (kappa = 0.63, 95%ci 0.41-0.85). telephone contact was made with 78% of patients and all patient charts were reviewed at 30 days. none had any adverse events at 30-day follow-up. background: increasing the threshold to define a positive d-dimer in low-risk patients could reduce unnecessary computed tomographic pulmonary angiography (ctpa) for suspected pe. this strategy might increase rates of missed pe and missed pneumonia, the most common non-thromboembolic finding on ctpa that might not otherwise be diagnosed. objectives: measure the effect of doubling the standard d-dimer threshold for ''pe unlikely'' revised geneva (rgs) or wells' scores on the exclusion rate, frequency, and size of missed pe and missed pneumonia. methods: prospective enrollment at four academic us hospitals. inclusion criteria required patients to have at least one symptom or sign and one risk factor for pe, and have 64-channel ctpa completed. pretest probability data were collected in real time and the d-dimer was measured in a central laboratory. criterion standard for pe or pneumonia consisted of ctpa interpretation by two independent radiologists combined with necessary treatment plan. subsegmental pe was defined as total vascular obstruction <5%.
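the accuracy figures in the chest x-ray decision-rule validation above follow directly from a 2x2 table. a minimal sketch, assuming 4 positive cxrs (the 130 analyzed patients minus the 126 with negative cxrs, an inference not stated explicitly in the abstract):

```python
# reconstructed 2x2 counts (assumption: 4 positive CXRs = 130 - 126)
tp, fn = 4, 0            # the CDR flagged all patients with a positive CXR
tn = 73                  # the CDR correctly cleared 73 of 126 negative CXRs
fp = 126 - tn            # remaining negative CXRs still flagged by the CDR

sensitivity = tp / (tp + fn)                   # 1.00
specificity = tn / (fp + tn)                   # ~0.58
lr_positive = sensitivity / (1 - specificity)  # ~2.4
```

note how a rule with perfect sensitivity but modest specificity yields only a moderate positive likelihood ratio; its value is in safely ruling out the need for a cxr, not in ruling one in.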
patients were followed for outcome at 30 days. proportions were compared with 95% cis. results: of 678 patients enrolled, 126 (19%) were pe+ and 93 (14%) had pneumonia. with rgs≤6 and standard threshold (<500 ng/ml), d-dimer was negative in 110/678 (16%, 95% ci: 13-19%), and 4/110 were pe+ (posterior probability 3.8%, 95% ci: 1-9.3%). with rgs≤6 and a threshold <1000 ng/ml, d-dimer was negative in 208/678 (31%, 27-44%) and 11/208 (5.3%, 2.8-9.3%) were pe+, but 10/11 missed pes were subsegmental, and none had concomitant dvt. the posterior probability for pneumonia among patients with rgs≤6 and d-dimer<500 was 9/110 (8.2%, 4-15%) which compares favorably to the posterior probability of 12/208 (5.4%, 3-10%) observed with rgs≤6 and d-dimer<1000 ng/ml. of the 200 (35%) patients who also had plain film cxr, radiologists found an infiltrate in only 58. use of wells≤4 produced similar results to the rgs≤6 for exclusion rate and posterior probability of both pe and pneumonia. conclusion: doubling the threshold for a positive d-dimer with a pe unlikely pretest probability can significantly reduce ctpa scanning with a slightly increased risk of missed isolated subsegmental pe, and no increase in rate of missed pneumonia. background: the limitations of developing world medical infrastructure require that patients are transferred from health clinics only when the patient care needs exceed the level of care at the clinic and the receiving hospital can provide definitive therapy. objectives: to determine what type of definitive care service was sought when patients were transferred from a general outpatient clinic operating monday through friday from 8:00 am to 3:00 pm in rural haiti to urban hospitals in port-au-prince. methods: design -prospective observational review of all patients for whom transfer to a hospital was requested or for whom a clinic ambulance was requested to an off-site location to assist with patient care. setting -weekday, daytime only clinic in titanyen, haiti.
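the exclusion rates and post-test probabilities in the d-dimer threshold study reduce to simple proportions of the reported counts. a sketch using the numbers above (confidence intervals omitted):

```python
def rule_out_performance(ruled_out, total, missed_pe):
    """Fraction of patients excluded from CTPA, and the posterior
    probability of PE among those rule-negative patients."""
    return ruled_out / total, missed_pe / ruled_out

# RGS <= 6 with the standard d-dimer threshold (<500 ng/ml)
excl_std, post_std = rule_out_performance(110, 678, 4)
# RGS <= 6 with the doubled threshold (<1000 ng/ml)
excl_dbl, post_dbl = rule_out_performance(208, 678, 11)
```

the trade-off is visible directly: the doubled threshold roughly doubles the share of patients spared a ctpa, at the cost of a slightly higher posterior probability of pe among rule-negative patients (most of the additional misses being subsegmental).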
participants/subjects -consecutive series of all patients for whom transfer to another health care facility or for whom an ambulance was requested during the time periods of 11/22/2010-12/14/2010 and 3/28/2011-5/13/2011. results: between 11/22/2010-12/14/2010 and 3/28/2011-5/13/2011 patients were identified who needed to be transferred to a higher level of care. sixteen patients (43.2%) presented with medical complaints, 12 (32.4%) were trauma patients, 6 (16.2%) were surgical, and 3 (8.1%) were in the obstetric category. within these categories, 6 patients were pediatric and 4 non-trauma patients required blood transfusion. conclusion: while trauma services are often focused on in rural developing world medicine, the need for obstetric care and blood transfusion constituted six (16.2%) cases in our sample. these patients raise important public health, planning, and policy questions relating to access to prenatal care and the need to better understand transfusion medicine utilization among rural haitian patients with non-trauma related transfusion needs. the data set is limited by sample size and single location of collection. another limitation is that patients may not present to the clinic at all for certain health care needs if they know that the resources to provide definitive care are unavailable. background: the practice of emergency medicine in japan has been unique in that emergency physicians are mostly engaged in critical care and trauma with a multi-specialty model. for the last decade, with progress in medicine, an aging population with complicated problems, and institution of postgraduate general clinical training, us model emergency medicine with a single-specialty model has been emerging throughout japan. however, the current status is unknown.
objectives: the objective of this study was to investigate the current status of implementation of the us model emergency medicine at emergency medicine training institutions accredited by the japanese association for acute medicine (jaam). methods: the er committee of the jaam, the most prestigious professional organization in japanese emergency medicine, conducted the survey by sending questionnaires to 499 accredited emergency medicine training institutions. results: valid responses obtained from 299 facilities were analyzed. us model em was provided in 211 facilities (71% of 299 facilities), either full time (24 hours a day, seven days a week; 123 facilities) or part time (less than 24 hours a day; 88 facilities). among these 211 us model facilities, 44% have between 251-500 beds. the annual number of ed visits was less than 20,000 in 64%, and 37% have between 2,001-4,000 ambulance transfers per year. the number of emergency physicians was less than 5 in 60% of the facilities. postgraduate general clinical training was offered at a us model ed in 199 facilities, and ninety hospitals adopted us model em after 2004, when a 2-year period of postgraduate general clinical training became mandatory for all medical graduates. sixty-four facilities provided a residency program to train us model emergency physicians, and another 9 institutions were planning to establish one. conclusion: us model em has emerged and become commonplace in japan. advances in medicine, an aging population, and the mandatory postgraduate general clinical training system are considered to be contributing factors. erkan gunay, ersin aksay, ozge duman atilla, nilay zorbalar, savas sezik tepecik research and training hospital, izmir, turkey background: workplace safety and occupational health problems are increasing issues, especially in developing countries, as a result of industrial automation and technologic improvements.
occupational injuries are preventable, but they can occasionally cause morbidity and mortality resulting in work day loss and financial problems. hand injuries account for one-third of all traumatic injuries, and hands are the body parts most often injured in occupational accidents. objectives: we aim to evaluate patients with occupational upper extremity injuries for demographic characteristics, injury types, and work day loss. methods: trauma patients over 15 years old admitted to our emergency department with an occupational upper extremity injury were prospectively evaluated from 15.04.2010 to 30.04.2011. patients with one or more of digit, hand, forearm, elbow, humerus, and shoulder injuries were included. exclusion criteria were multitrauma, patient refusal to participate, and insufficient data. patients were followed up through the hospital information system and by phone for work day loss and final diagnosis. results: during the study period there were 570 patients with an occupational upper extremity injury. a total of 521 (91.4%) patients were included. patients were 92.1% male, 36.5% were between the ages of 25 and 34, and mean age was 32.9 ± 9.6 years. 43.8% of the patients were from the metal and machinery sector, and primary education was the highest education level for 74.7% of the patients. fingers were the most commonly injured parts, with the highest rates for the index finger and thumb. crush injury was the most common injury type. 96.3% (n = 502) of the patients were discharged after treatment in the emergency department. tendon injuries, open fractures, and high degree burns were the reasons for admission to clinics. mean work day loss was 12.8 ± 27.2 days, and this increased for patients with laboratory or radiologic studies, consultant evaluation, or admission. the 15-24 age group had a significantly lower average work day loss. conclusion: evaluating occupational injury characteristics and risks is essential for identifying preventive measures and actions.
guided by this study, preventive actions focusing on high-risk sectors and patients may be key to avoiding occupational injuries and creating safer workplace environments, reducing both financial and public health burdens. background: as emergency medicine (em) gains increased recognition and interest in the international arena, a growing number of training programs for emergency health care workers have been implemented in the developing world through international partnerships. objectives: to evaluate the quality and appropriateness of an internationally implemented emergency physician training program in india. methods: physicians participating in an internationally implemented em training program in india were recruited to participate in a program evaluation. a mixed methods design was used, including an online anonymous survey and semi-structured focus groups. the survey assessed the research, clinical, and didactic training provided by the program. demographics and information on past and future career paths were also collected. the focus group discussions centered on program successes and challenges. results: fifty of 59 eligible trainees (85%) participated in the survey. of the respondents, the vast majority were indian; 16% were female, and all were between the ages of 25 and 45 years (mean age 31 years). all but two trainees (96%) intend to practice em as a career. one-third listed a high-income country first for preferred practice location and half listed india first. respondents directly endorsed the program structure and content, and they demonstrated gains in self-rated knowledge and clinical confidence over their years of training. active challenges identified include: (1) insufficient quantity and inconsistent quality of indian faculty, (2) administrative barriers to academic priorities, and (3) persistent threat of brain drain if local opportunities are inadequate.
conclusion: implementing an international emergency physician training program with limited existing local capacity is a challenging endeavor. overall, this evaluation supports the appropriateness and quality of this partnership model for em training. one critical challenge is achieving a robust local faculty. early negotiations are recommended to set educational priorities, including assured access to em journals. attrition of graduated trainees to high-income countries due to better compensation or limited in-country opportunities continues to be a threat to long-term local capacity building. background: with an increasing frequency and intensity of manmade and natural disasters, and a corresponding surge in interest in international emergency medicine (iem) and global health (gh), the number of iem and gh fellowships is constantly growing. there are currently 34 iem and gh fellowships, each with a different curriculum. several articles have proposed the establishment of core curriculum elements for fellowship training. to the best of our knowledge, no study has examined whether iem and gh fellows are actually fulfilling these criteria. objectives: this study sought to examine whether current iem and gh fellowships are consistently meeting these core curricula. methods: an electronic survey was administered to current iem and gh fellowship directors, current fellows, and recent graduates of a total of 34 programs. survey respondents stated their amount of exposure to previously published core curriculum components: em system development, humanitarian assistance, disaster response, and public health. a pooled analysis comparing overall responses of fellows to those of program directors was performed using a two-sample t-test. results: response rates were 88% (n = 30) for program directors and 53% (n = 17) for current and recent fellows.
programs varied significantly in terms of their emphasis on and exposure to six proposed core curriculum areas: em system development, em education development, humanitarian aid, public health, ems, and disaster management. only 43% of programs reported having exposure to all four core areas. as many as 67% of fellows reported knowing their curriculum only somewhat or not at all prior to starting the program. conclusion: many fellows enter iem and gh fellowships without a clear sense of what they will get from their training. as each fellowship program has different areas of curriculum emphasis, we propose not to enforce any single core curriculum. rather, we suggest the development of a mechanism to allow each fellowship program to present its curriculum in a more transparent manner. this will allow prospective applicants to have a better understanding of the various programs' curricula and areas of emphasis. background: advance warning of probable intensive care unit (icu) admissions could allow the bed placement process to start earlier, decreasing ed length of stay and relieving overcrowding conditions. however, physicians and nurses poorly predict a patient's ultimate disposition from the emergency department at triage. a computerized algorithm can use commonly collected data at triage to accurately identify those who likely will need icu admission. objectives: to evaluate an automated computer algorithm at triage to predict icu admission and 28-day in-hospital mortality. methods: retrospective cohort study at a 55,000 visit/ year level i trauma center/tertiary academic teaching hospital. all patients presenting to the ed between 12/16/2008 and 10/1/2010 were included in the study. the primary outcome measure was icu admission from the emergency department. the secondary outcome measure was 28-day all-cause in-hospital mortality. patients discharged or transferred before 28 days were considered to be alive at 28 days. 
triage data includes age, sex, acuity (emergency severity index), blood pressure, heart rate, pain scale, respiratory rate, oxygen saturation, temperature, and a nurse's free text assessment. a latent dirichlet allocation algorithm was used to cluster words in triage nurses' free text assessments into 500 topics. the triage assessment for each patient is then represented as a probability distribution over these 500 topics. logistic regression was then used to determine the prediction function. results: a total of 94,973 patients were included in the study. 3.8% were admitted to the icu and 1.3% died within 28 days. these patients were then randomly allocated to train (n = 75,992; 80%) and test (n = 18,981; 20%) data sets. the area under the receiver operating characteristic curve (auc) when predicting icu background: at the 2011 saem annual meeting, we presented the derivation of two hospital admission prediction models adding coded chief complaint (ccc) data from a published algorithm (thompson et al. acad emerg med 2006; 13:774-782) to demographic, ed operational, and acuity (emergency severity index (esi)) data. objectives: we hypothesized that these models would be validated when applied to a separate retrospective cohort, justifying prospective evaluation. methods: we conducted a retrospective, observational validation cohort study of all adult ed visits to a single tertiary care center (census: 49,000/yr) (4/1/09-12/31/10). we downloaded from the center's clinical tracking system demographic (age, sex, race), ed operational (time and day of arrival), esi, and chief complaint data on each visit. 
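the prediction step in the triage study above (topic probabilities from the free-text assessment combined with structured triage data, fed to logistic regression) can be sketched with a toy, pure-python version. the features, data, and class separation here are entirely synthetic stand-ins, not the study's 500-topic model:

```python
import math
import random

random.seed(0)

# synthetic stand-ins for scaled triage features (e.g. age, heart rate,
# systolic BP); ICU-bound patients are drawn from shifted distributions.
# in the study, the feature vector would also include 500 LDA topic
# probabilities derived from the nurse's free-text assessment.
def make_patient(icu):
    if icu:
        return [random.gauss(0.70, 0.1), random.gauss(0.65, 0.1),
                random.gauss(0.45, 0.1)]
    return [random.gauss(0.45, 0.1), random.gauss(0.40, 0.1),
            random.gauss(0.60, 0.1)]

X = [make_patient(1) for _ in range(50)] + [make_patient(0) for _ in range(50)]
y = [1] * 50 + [0] * 50

def sigmoid(z):
    z = max(-30.0, min(30.0, z))   # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

# logistic regression fitted by stochastic gradient descent
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(500):
    for xi, yi in zip(X, y):
        g = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
        w = [wj - 0.1 * g * xj for wj, xj in zip(w, xi)]
        b -= 0.1 * g

def predict(xi):
    """probability of ICU admission for one triage feature vector."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

accuracy = sum((predict(xi) > 0.5) == yi for xi, yi in zip(X, y)) / len(y)
```

the output of such a model is a probability, which is what makes it usable for early bed-placement triggers: a threshold on the predicted probability, tuned against the auc, decides when to start the icu admission process.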
we applied the derived ccc hospital admission prediction models (all identified ccc categories and ccc categories with significant odds of admission from multivariable logistic regression in the derivation cohort) to the validation cohort to predict odds of admission and compared to prediction models that consisted of demographic, ed operational, and esi data, adding each category to subsequent models in a stepwise manner. model performance is reported by area-under-the-curve (auc) data and 95%ci. signs, pain level, triage level, 72-hour return, number of past visits in the previous year, injury, and one of 122 chief complaint codes (representing 90% of all visits in the database). outputs for training included ordering of a complete blood count, basic chemistry (electrolytes, blood urea nitrogen, creatinine), cardiac enzymes, liver function panel, urinalysis, electrocardiogram, x-ray, computed tomography, or ultrasound. once trained, it was used on the nhamcs-ed 2008 database, and predictions were generated. predictions were compared with documented physician orders. outcomes included the percent of total patients who were correctly pre-ordered, sensitivity (the percent of patients who had an order that were correctly predicted), and the percent over-ordered. waiting time for correctly pre-ordered patients was highlighted, to represent a potential reduction in length of stay achieved by pre-ordering. los for patients over-ordered was highlighted to see if over-ordering may cause an increase in los for those patients. unit cost of the test was also highlighted, as taken from the 2011 medicare fee schedule. physician times. however, during peak ed census times, many patients with completed tests and treatment initiated by triage await discharge by the next assigned physician.
objectives: determine if a physician-led discharge disposition (dd) team can reduce the ed length of stay (los) for patients of similar acuity who are ultimately discharged compared to standard physician team assignment. methods: this prospective observational study was performed from 10/2010 to 10/2011 at an urban tertiary referral academic hospital with an annual ed volume of 87,000 visits. only emergency severity index level 3 patients were evaluated. the dd team was scheduled weekdays from 14:00 until 23:00. several ed beds were allocated to this team. the team comprised one attending physician and either one nurse and a tech or two nurses. comparisons were made between los for discharged patients originally triaged to the main ed side who were seen by the dd team versus the main side teams. time from triage physician to team physician, team physician to discharge decision time, and patient age were compared by unpaired t-test. differences were studied for number of patients receiving x-rays, ct scans, labs, and medications. results: dd team mean los in hours for discharged patients was shorter at 3.4 (95% ci: 3.3-3.6, n = 1451) compared to 6.4 (95% ci: 6.3-6.5, n = 4601) on the main side, p < 0.01. the mean time from triage physician to dd team physician was 1.4 hours (95% ci: 1.4-1.5, n = 1447) versus 2.7 hours (95% ci: 2.7-2.8, n = 4568) to main side physician, p < 0.01. the dd team physician mean time to discharge decision was 1.0 hour (95% ci: 1.0-1.1, n = 1432) compared to 2.5 hours (95% ci: 2.4-2.6, n = 4590) for main side physician, p < 0.01. the dd team patients' mean age was 42.6 years (95% ci: 41.9-43.6, n = 1454) compared to main side patients' mean age of 49.1 years (95% ci: 48.5-49.6, n = 4621). the dd team patients (n = 1454) received fewer x-rays (40% vs. 59%), ct scans (13% vs. 23%), labs (64% vs. 85%), and medications (63% vs. 68%) than main side patients (n = 4621), p < 0.01 for all compared.
conclusion: the dd team complements the advanced triage process to further reduce los for patients who do not require extended ed treatment or observation. the dd team was able to work more efficiently because its patients tended to be younger and had fewer lab and imaging tests ordered by the triage physician compared to patients who were later seen on the ed main side. objectives: to evaluate the association between ed boarding time and the risk of developing hospital-acquired pressure ulcers (hapu). methods: we conducted a retrospective cohort study using administrative data from an academic medical center with an adult ed with 55,000 annual patient visits. all patients admitted into the hospital through the ed 6/30/2008-2/28/2011 were included. development of hapu was determined using the standardized, national protocol for cms reporting of hapu. ed boarding time was defined as the time between an order for inpatient admission and transport of the patient out of the ed to an in-patient unit. we used a multivariate logistic regression model with development of a hapu as the outcome variable, ed boarding time as the exposure variable, and the following variables as covariates: age, sex, initial braden score, and admission to an intensive care unit (icu) from the ed. the braden score is a scale used to determine a patient's risk for developing a hapu based on known risk factors. a braden score is calculated for each hospitalized patient at the time of admission. we included braden score as a covariate in our model to determine if ed boarding time was a predictor of hapu independent of braden score. results: of 46,704 patients admitted to the hospital through the ed during the study period, 243 developed a hapu during their hospitalization. clinical characteristics are presented in the table. per hour of ed boarding time, the adjusted or of developing a hapu was 1.02 (95% ci 1.01-1.04, p = 0.007).
a median of 40 patients per day were admitted through the ed, accumulating 144 hours of ed boarding time per day, with each hour of boarding time increasing the risk of developing a hapu by 2%. conclusion: in this single-center, retrospective study, longer ed boarding time was associated with increased risk of developing a hapu. objectives: this study queried ed and inpatient nurses and compared their opinions toward inpatient boarding; it also assessed their preferred boarding location if they were admitted as patients. methods: a survey was administered to a convenience sample of ed and ward nurses. it was performed in a 631-bed academic medical center (30,000 admissions/yr) with a 68-bed ed (60,000 visits/yr). nurses were identified as ed or ward and whether they had previously worked in the ed. the nurses were asked if there were any circumstances where admitted patients should be boarded in the ed or inpatient hallways. they were also asked their preferred location if they were admitted as a patient. six clinical scenarios were then presented and their opinions on boarding queried. results: ninety nurses completed the survey; 35 (39%) were current ed nurses (ced), 40 (44%) had previously worked in the ed (ped). for the entire group, 46 (52%) believed admitted patients should board in the ed. overall, 52 (58%) were opposed to inpatient boarding, with 20% of ced versus 83% of current ward (cw) nurses (p < 0.0001) and 28% of ped versus 85% of nurses never having worked in the ed (ned) opposed (p < 0.001). if admitted as patients themselves, overall 43 (54%) preferred inpatient boarding, with 82% of ced versus 33% of cw nurses (p < 0.0001) and 74% of ped versus 34% ned nurses (p = 0.0007) preferring inpatient boarding.
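the ''2% per hour'' figure above can be translated into cumulative absolute risk. a sketch, assuming odds multiply per additional boarding hour (the usual reading of a continuous-exposure odds ratio) and using the cohort's crude hapu incidence as an illustrative baseline only:

```python
def risk_after_hours(baseline_risk, or_per_hour, hours):
    """Convert a per-hour odds ratio into an absolute risk after
    `hours` of boarding, multiplying the odds once per hour."""
    odds = baseline_risk / (1 - baseline_risk)
    odds *= or_per_hour ** hours
    return odds / (1 + odds)

baseline = 243 / 46704          # crude HAPU incidence in the cohort (~0.5%)
risk_6h = risk_after_hours(baseline, 1.02, 6)   # ~12-13% relative increase
```

because the baseline risk is small, odds and risk are nearly interchangeable here, so six hours of boarding raises the risk by roughly a factor of 1.02^6; the study's 1.02 was adjusted for braden score and other covariates, so this back-of-envelope figure is illustrative rather than a cohort estimate.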
for the six clinical scenarios, significant differences in opinion regarding inpatient boarding existed in all but two cases: a patient with stable copd but requiring oxygen and an intubated, unstable sepsis patient. conclusion: ward nurses and those who have never worked in the ed are more opposed to inpatient boarding than ed nurses and nurses who have worked previously in the ed. nurses admitted as patients seemed to prefer not being boarded where they work. ed and ward nurses seemed to agree that unstable or potentially unstable patients should remain in the ed. 8 weeks. staff satisfaction was evaluated through pre/post-shift and study surveys; administrative data (physician initial assessment (pia), length of stay (los), patients leaving without being seen (lwbs), and against medical advice [lama]) were collected from an electronic, real-time ed information system. data are presented as proportions and medians with interquartile ranges (iqr); bivariable analyses were performed. results: ed physicians and nurses expected the intervention to reduce the los of discharged patients only. pia decreased during the intervention period (68 vs 74 minutes; p < 0.001). no statistically/clinically significant differences were observed in the los; however, there was a significant reduction in the lwbs (4.7% to 3.5%, p = 0.003) and lama (0.7% to 0.4%, p = 0.028) rates. while there was a reduction of approximately 5 patients seen per physician in the affected ed area, the total number of patients seen on that unit increased by approximately 10 patients/day. overall, compared to days when there was no extra shift, 61% of emergency physicians stated their workload decreased and 73% felt their stress level at work decreased. conclusion: while this study did not demonstrate a reduction in the overall los, it did reduce pia times and the proportion of lwbs/lama patients.
while physicians saw fewer patients during the intervention study period, the overall patient volume increased and satisfaction among ed physicians was rated higher. provider- and hospital-level variation in admission rates and 72-hour return admission rates jameel abualenain 1 , william frohna 2 , robert shesser 1 , ru ding 1 , mark smith 2 , jesse m. pines 1 1 the george washington university, washington, dc; 2 washington hospital center, washington, dc background: decisions for inpatient versus outpatient management of ed patients are among the most important and costliest made by emergency physicians, but there is little published on the variation in the decision to admit among providers or whether there is a relationship between a provider's admission rate and the proportion of their patients who return within 72 hours of the initial visit and are subsequently admitted (72h-ra). objectives: we explored the variation in provider-level admission rates and 72h-ra rates, and the relationship between the two. methods: a retrospective study using data from three eds with the same information system over varying time periods: washington hospital center (whc) (2008-10), franklin square hospital center (fshc), and union memorial hospital (umh). patients were excluded if they left without being seen or against medical advice, were fast-track or psychiatric patients, or were aged <18 years. physicians with <500 ed encounters or an admission rate <15% were excluded. logistic regression was used to assess the relationship between physician-level 72h-ra and admission rates, adjusting for patient age, sex, race, and hospital. results: 389,120 ed encounters were treated by 90 physicians. mean patient age was 50 years (sd 20), 42% male, and 61% black. admission rates differed between hospitals (whc = 40%, umh = 37%, and fshc = 28%), as did the 72h-ra (whc = 0.9%, umh = 0.6%, and fshc = 0.6%).
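the quintile comparison described in the methods (physicians ranked by admission rate, then split into five equal groups) can be sketched in a few lines of python; the function name and the illustrative rates below are ours, not the study's:

```python
def admission_rate_quintile(rates):
    """Rank physicians by admission rate and assign quintiles (1 = lowest)."""
    order = sorted(range(len(rates)), key=lambda i: rates[i])
    quintiles = [0] * len(rates)
    for pos, idx in enumerate(order):
        # position in the sorted order maps to one of five equal buckets
        quintiles[idx] = pos * 5 // len(rates) + 1
    return quintiles

# five hypothetical physicians spanning the reported 15.4%-50.0% range
q = admission_rate_quintile([0.154, 0.22, 0.30, 0.38, 0.50])
```

physicians landing in quintile 5 would then be compared against quintile 1 in the adjusted logistic regression described in the methods.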
across all hospitals, there was great variation in individual physician admission rates (15.4%-50.0%). the 72h-ra rates were quite low, but demonstrated a similar magnitude of individual variation (0.3%-1.2%). physicians in the highest admission rate quintile had lower odds of 72h-ra (or 0.8, 95% ci 0.7-0.9) compared to the lowest admission rate quintile, after adjusting for other factors. no intermediate admission rate quintiles (2nd, 3rd, or 4th) were significantly different from the lowest admission rate quintile with regard to 72h-ra. conclusion: there is more than three-fold variation in individual physician admission rates, with variation of similar magnitude in 72h-ra. the highest admitters have the lowest 72h-ra; however, evaluating the causes and consequences of such significant variation needs further exploration, particularly in the context of health reform efforts aimed at reducing costs. background: ed scribes have become an effective means to assist emergency physicians (eps) with clinical documentation and improve physician productivity. scribes have been most often utilized in busy community eds, and their utility and functional integration into an academic medical center with resident physicians is unknown. objectives: to evaluate resident perceptions of attending physician teaching and interaction after introduction of scribes at an em residency training program, measured through an online survey. residents in this study were not working with the scribes directly, but were interacting indirectly through attending physician use of scribes during ed shifts. methods: an online ten-question survey was administered to 31 residents of a midwest academic emergency medicine residency program (pgy1-pgy3 program, 12 annual residents), 8 months after the introduction of scribes into the ed. scribes were introduced as emr documentation support (epic 2010, epic systems inc.)
for attending eps while evaluating primary patients and supervising resident physicians. questions investigated em resident demographics and perceptions of scribes (attending physician interaction and teaching, effect on resident learning, willingness to use scribes in the future), using likert scale responses (1 minimal, 9 maximum) and a graduated percentage scale used to quantify relative values, where applicable. data were analyzed using kruskal-wallis and mann-whitney u tests. results: twenty-one of 31 em residents (68%) completed the survey (81% male; 33% pgy1, 29% pgy2, 38% pgy3). four residents had prior experience with scribes. scribes were felt to have no effect on attending eps' direct resident interaction time (mean score 4.5, sd 1.2), time spent bedside teaching (4.8, sd 0.9), or quality of teaching (4.9, sd 0.8), as well as no effect on residents' overall learning process (4.6, sd 1.1). however, residents felt positive about utilizing scribes at their future occupation site (6.0, sd 2.7). no response differences were noted for prior experience, training level, or sex. conclusion: when scribes are introduced at an em residency training site, residents of all training levels perceive it as a neutral interaction, measured in terms of perceived time with attending eps and quality of the teaching when scribes are present. the effect of introduction of an electronic medical record on resident productivity in an academic emergency department shawn london, christopher sala university of connecticut school of medicine, farmington, ct background: there are few available data describing the effect of implementation of an electronic medical record (emr) on provider productivity in the emergency department, and no studies which, to our knowledge, address this issue pertaining to housestaff in particular.
objectives: we seek to quantify the changes in provider productivity pre- and post-emr implementation to support our hypothesis that resident clinical productivity, based on patients seen per hour, will be negatively affected by emr implementation. methods: the academic emergency department at hartford hospital, the principal clinical site in the university of connecticut emergency medicine residency, sees over 95,000 patients on an annual basis. this environment is unique in that, pre-emr, patient tracking and orders were performed electronically using the sunrise system (eclipsys corp) for over 8 years prior to conversion to the allscripts ed emr in october 2010 for all aspects of ed care. the investigators completed a random sample of day/evening/night/weekend shift productivity to obtain monthly aggregate productivity data (patients seen per hour) by year of training. results: there was an initial 4.2% decrease in productivity for pgy-3 residents, from an average of 1.44 patients seen per hour in the three blocks preceding activation of the emr to 1.38 patients seen per hour in the three blocks that followed. pgy-3 performance returned to baseline in the subsequent three months, to 1.48 patients per hour. there was no change noted in patients seen per hour for pgy-1 and pgy-2 residents. conclusion: while many physicians tend to assume that emrs pose a significant barrier to productivity in the ed, in our academic emergency department there was no lasting change in resident productivity based on the patients seen per hour metric. the minor decrease which did occur in pgy-3 residents was transient and was not apparent 3 months after the emr was implemented. our experience suggests that a decrease in the rate of patients seen per hour in the resident population should not be considered justification to delay or avoid implementation of an emr in the emergency department.
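as an arithmetic check, the 4.2% dip reported for pgy-3 residents follows directly from the patients-per-hour figures (a minimal sketch; the helper name is ours):

```python
def pct_change(before, after):
    """Percent change from 'before' to 'after', relative to 'before'."""
    return (after - before) / before * 100

# pgy-3: 1.44 patients/hour pre-emr vs. 1.38 in the three blocks after go-live
drop = pct_change(1.44, 1.38)      # about -4.2, i.e. a 4.2% decrease
recovery = pct_change(1.44, 1.48)  # subsequent months rose above baseline
```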
emory university, atlanta, ga; 2 children's healthcare of atlanta, atlanta, ga background: variation in physician practice is widely prevalent and highlights an opportunity for quality improvement and cost containment. monitoring resources used in the management of common pediatric emergency department (ed) conditions has been suggested as an ed quality metric. objectives: to determine if providing ed physicians with severity-adjusted data on resource use and outcomes, relative to their peers, can influence practice patterns. methods: data on resource use by physicians were extracted from electronic medical records at a tertiary pediatric ed for four common conditions in mid-acuity (emergency severity index level 3): fever, head injury, respiratory illness, and gastroenteritis. condition-relevant resource use was tracked for lab tests (blood count, chemistry, crp), imaging (chest x-ray, abdominal x-ray, head ct scan, abdominal ct scan), intravenous fluids, parenteral antibiotics, and intravenous ondansetron. outcome measures included admission to hospital and ed length of stay (los); 72-hr return to ed (rr) was used as a balancing measure. scorecards were constructed using box plots to show physicians their practice patterns relative to peers (the figure shows an example of the scorecard for gastroenteritis for one physician, showing resource use rates for iv fluids and labs). blinded scorecards were distributed quarterly for five quarters using rolling-year averages. a pre/post-intervention analysis was performed with sep 1, 2010 as the intervention date. fisher's exact and wilcoxon rank sum tests were used for analysis. results: we analyzed 45,872 patient visits across two hospitals (24,834 pre- and 21,038 post-intervention), comprising 17.6% of the total ed volume during the study period. patients were seen by 100 physicians (mean 462 patients/physician). the table shows overall physician practice in the pre- and post-intervention periods.
significant reduction in resource use was seen for abdominal/pelvic ct scans, head ct scans, chest x-rays, iv ondansetron, and admission to hospital. ed los decreased from 129 min to 126 min (p = 0.0003). there was no significant change in the 72-hr return rate during the study period (2.2% pre-, 2.0% post-intervention). conclusion: feedback on comprehensive practice patterns including resource use and quality metrics can influence physician practice on commonly used resources in the ed. billboards, via iphone application, twitter, and text messaging. there is a paucity of data describing the accuracy of publicly posted ed wait times. objectives: to examine the accuracy of publicly posted wait times of four emergency departments within one hospital system. methods: a prospective analysis of four ed-posted wait times in comparison to the wait times for actual patients. the main hospital system calculated and posted ed wait times every twenty minutes for all four system eds. a consecutive sample of all patients who arrived 24/7 over a 4-week period during july and august 2011 was included. an electronic tracking system identified patient arrival date and the actual incurred wait time. data consisted of the arrival time, actual wait time, hospital census, budgeted hospital census, and the posted ed wait time. for each ed, the difference was calculated between the publicly posted ed wait time at the time of the patient's arrival and the patient's actual ed wait time. the average wait times and average wait time error between the ed sites were compared using a two-tailed student's t-test. the correlation coefficient between the differences in predicted/actual wait times was also calculated for each ed. results: there were 8890 wait times within the four eds included in the analysis.
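the methods above compute, for each visit, the difference between the posted and the actual wait time, and then correlate site-level averages; a minimal sketch (the numbers are illustrative, not study data):

```python
import math

def wait_time_errors(posted, actual):
    """Per-patient error: publicly posted wait time minus actual wait time."""
    return [p - a for p, a in zip(posted, actual)]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# illustrative posted vs. actual wait times (minutes) for one ed
posted = [30, 45, 60, 20, 50]
actual = [35, 40, 90, 25, 70]
errors = wait_time_errors(posted, actual)
mean_error = sum(errors) / len(errors)
```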
the average wait time (in minutes) at each facility was: 64.0 (±62.4) for the main ed, 22.0 (±22.1) for freestanding ed (fed) #1, 25.0 (±25.6) for fed #2, and 10.0 (±12.6) for the small community ed. the average wait time error (in minutes) for each facility was 31 (±61.2) for the main ed, 13 (±23.65) for fed #1, 17 (±26.65) for fed #2, and 1 (±11.9) for the community hospital ed. the differences between eds were statistically significant for both average wait time and average wait time error (p < 0.0001). there was a positive correlation between the average wait time and average wait time error, with r-values of 0.84, 0.83, 0.58, and 0.48 for the main ed, fed #1, fed #2, and the small community hospital ed, respectively. each correlation was statistically significant; however, no correlation was found between the number of beds available (budgeted minus actual census) and average wait times. conclusion: publicly posted ed wait times are accurate for facilities with less than 2000 ed visits per month. they are not accurate for eds with greater than 4000 visits per month. reduction of pre-analytic laboratory errors in the emergency department using an incentive-based system benjamin katz, daniel pauze, karen moldveen albany medical center, albany, ny background: over the last decade, there has been an increased effort to reduce medical errors of all kinds. laboratory errors have a significant effect on patient care, yet they are usually avoidable. several studies suggest that up to 90% of laboratory errors occur during the pre- or post-analytic phase. in other words, errors occur during specimen collection and transport or reporting of results, rather than during laboratory analysis itself. objectives: in an effort to reduce pre-analytic laboratory errors, the ed instituted an incentive-based program for the clerical staff to recognize and prevent specimen labeling errors from reaching the patient. this study sought to demonstrate the benefit of this incentive-based program.
methods: this study examined a prospective cohort of ed patients over a three-year period in a tertiary care academic ed with an annual census of 72,000. as part of a continuing quality improvement process, laboratory specimen labeling errors are screened by clerical staff by reconciling laboratory specimen labels with laboratory requisition labels. the number of "near-misses" or mismatched specimens captured by each clerk was then blinded to all patient identifiers and was collated by monthly intervals. due to poor performance in 2009, an incentive program was introduced in early 2010 by which the clerk who captured the most mismatched specimens would be awarded a $50 gift card on a quarterly basis. the total number of missed laboratory errors was then recorded on a monthly basis. investigational data were analyzed using bivariate statistics. background: most studies on operational research have been focused in academic medical centers, which typically have larger volumes of patients and are located in urban metropolitan areas. as cms core measures in 2013 begin to compare emergency departments (eds) on treatment time intervals, especially length of stay (los), it is important to explore if any differences exist inherent to patient volume. objectives: the objective of this study is to look at differences in operational metrics based on annual patient census. the hypothesis is that treatment time intervals and operational metrics differ amongst these different categories. methods: the ed benchmarking alliance has collected yearly operational metrics since 2004. as of 2010, there are 499 eds providing data across the united states. eds are stratified by annual volume for comparison in the following categories: <20k, 20-40k, 40-60k, and over 80k. in this study, metrics for eds with <20k visits per year were compared to those of different volumes, averaged from 2004-2010.
mean values were compared to <20k visits as a reference point for statistical difference using t-tests to compare means, with a p-value < 0.05 considered significant. results: as seen in the table, a greater percentage of high-acuity patients was seen in higher volume eds than in <20k eds. the percentage of patients transferred to another hospital was higher in <20k eds. a higher percentage arrived by ems and a higher percentage were admitted in higher volume eds when compared to <20k visits. in addition, the median los for both discharged and admitted patients and the percentage who left before treatment was complete (lbtc) were higher in the higher volume eds. conclusion: lower volume eds have lower acuity when compared to higher volume eds. lower volume eds have shorter median los and lower left before treatment complete percentages. as cms core measures require hospitals to report these metrics, it will be important to compare them based on volume and not in aggregate. does the addition of a hands-free communication device improve ed interruption times? amy ernst, steven j. weiss, jeffrey a. reitsema university of new mexico, albuquerque, nm background: ed interruptions occur frequently. recently a hands-free communication device (vocera) was added to a cell phone and a pager in our ed. objectives: the purpose of the present study was to determine whether this addition improved interruption times. our hypothesis was that the device would significantly decrease the length of time of interruptions. methods: this study was a prospective cohort study of attending ed physician calls and interruptions in a level i trauma center with an em residency. interruptions included phone calls, ekg interpretations, pages to resuscitation, and other miscellaneous interruptions (including nursing issues, laboratory, ems, and radiology). we studied a convenience sample intended to include mostly evening shifts, the busiest ed times. the length of time each interruption lasted was recorded.
data were collected for a comparison group pre-vocera. three investigators collected data including seven different attendings' interruptions. data were collected on a form, then entered into an excel file. data collectors' agreement was determined during two additional four-hour shifts to calculate a kappa statistic. spss was used for data entry and statistical analysis. descriptive statistics were used for univariate data. chi-square and mann-whitney u nonparametric tests were used for comparisons. results: of the total 511 interruptions, 33% were phone calls, 24% were ekgs to be read, 18% were pages to resuscitation, and 25% miscellaneous. there were no significant differences in types of interruptions pre- vs. post-vocera. pre-vocera we collected 40 hours of data with 65 interruptions, with a mean 1.6 per hour. post-vocera, 180 hours of data were collected with 446 interruptions, with a mean 2.5 per hour. there was a significant difference in length of time of interruptions, with an average of 9 minutes pre-vocera vs. 4 minutes post-vocera (p = 0.012, diff 4.9, 95% ci 1.8-8.1). vocera calls were significantly shorter than non-vocera calls (1 vs 6 minutes, p < 0.001). comparing data collectors for type of interruption during the same 4-hour shift resulted in a kappa (agreement) of 0.73. conclusion: the addition of a hands-free communication device may improve interruptions by shortening call length. background: analyses of patient flow through the ed typically focus on metrics such as wait time, total length of stay (los), or boarding time. however, little is known about how much interaction a patient has with clinicians after being placed in a room, or what proportion of the in-room visit is also spent "waiting," rather than directly interacting with care providers. objectives: the objective was to assess the proportion of time, relative to the time in a patient care area, that a patient spends actively interacting with providers during an ed visit.
methods: a secondary analysis of 29 audiotaped encounters of patients with one of four diagnoses (ankle sprain, back pain, head injury, laceration) was performed. the setting was an urban, academic ed. ed visits of adult patients were recorded from the time of room placement to discharge. audiotapes were edited to remove all downtime and non-patient-provider conversations. los and door-to-doctor times were abstracted from the medical record. the proportion of time the patient spent in direct conversation with providers ("talk-time") was calculated as the ratio of the edited audio recording time to the time spent in a patient care area (talk-time = edited audio time / (los − door-to-doctor)). multiple linear regression controlling for time spent in patient care area, age, and sex was performed. results: the sample was 31% male with a mean age of 37 years. median los: 133 minutes (iqr: 88-169), median door-to-doctor: 42 minutes (iqr: 29-67), median time spent in patient care area: 65 minutes (iqr: 53-106). median time spent in direct conversation with providers was 16 minutes (iqr: 12-18), corresponding to a talk-time percentage of 19.2% (iqr: 14.7-24.6%). there were no significant differences based on diagnosis. regression analysis showed that those spending a longer time in a patient care area had a lower percentage of talk time (b = −0.11, p = 0.002). conclusion: although limited by sample size, these results indicate that approximately 80% of a patient's time in a care area is spent not interacting with providers. while some of the time spent waiting is out of the providers' control (e.g. awaiting imaging studies), this significant "downtime" represents an opportunity both for process improvement efforts to decrease downtime and for the development of innovative patient education efforts to make the best use of the remaining downtime. degradation of emergency department operational data quality during electronic health record implementation michael j.
ward, craig froehle, christopher j. lindsell university of cincinnati, cincinnati, oh background: process improvement initiatives targeted at operational efficiency frequently use electronic timestamps to estimate task and process durations. errors in timestamps hamper the use of electronic data to improve a system and may result in inappropriate conclusions about performance. despite the fact that the number of electronic health record (ehr) implementations is expected to increase in the u.s., the magnitude of this ehr-induced error is not well established. objectives: to estimate the change in the magnitude of error in ed electronic timestamps before and after a hospital-wide ehr implementation. methods: time-and-motion observations were conducted in a suburban ed, annual census 35,000, after receiving irb approval. observation was conducted 4 weeks pre- and 4 weeks post-ehr implementation. patients were identified on entering the ed and tracked until exiting. times were recorded to the nearest second using a calibrated stopwatch, and are reported in minutes. electronic data were extracted from the patient-tracking system in use pre-implementation, and from the ehr post-implementation. for comparison of means, independent t-tests were used. chi-square and fisher's exact tests were used for proportions, as appropriate. results: there were 263 observations; 126 before and 137 after implementation. the differences between observed times and timestamps were computed and found to be normally distributed. post-implementation, the mean physician seen times, along with the arrival to bed, bed to physician, and physician to disposition intervals, occurred before the observed times. physician seen timestamps were frequently incorrect and did not improve post-implementation. significant discrepancies (ten minutes or greater) from observed values were identified in timestamps involving disposition decision and exit from the ed.
calculating service time intervals resulted in every service interval (except arrival to bed) having at least 15% of the times with significant discrepancies. it is notable that missing values were more frequent post-ehr implementation. conclusion: ehr implementation results in reduced variability of timestamps but reduced accuracy and an increase in missing timestamps. using electronic timestamps for operational efficiency assessment should recognize the magnitude of error, and the compounding of error, when computing service times. background: procedural sedation and analgesia is used in the ed in order to efficiently and humanely perform necessary painful procedures. the opposing physiological effects of ketamine and propofol suggest the potential for synergy, and this has led to interest in their combined use, commonly termed "ketofol", to facilitate ed procedural sedation. objectives: to determine if a 1:1 mixture of ketamine and propofol (ketofol) for ed procedural sedation results in a 13% or more absolute reduction in adverse respiratory events compared to propofol alone. methods: participants were randomized to receive either ketofol or propofol in a double-blind fashion according to a weight-based dosing protocol. inclusion criteria were age 14 years or greater, and asa class 1-3 status. the primary outcome was the number and proportion of patients experiencing an adverse respiratory event according to pre-defined criteria (the "quebec criteria"). secondary outcomes were sedation consistency, sedation efficacy, induction time, sedation time, procedure time, and adverse events. results: a total of 284 patients were enrolled, 142 per group. forty-three (30%) patients experienced an adverse respiratory event in the ketofol group compared to 46 (32%) in the propofol group (difference 2%; 95% ci −9% to 13%; p = 0.798).
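the reported difference and confidence interval follow from a standard wald interval for the difference of two independent proportions; a minimal sketch (not the authors' code):

```python
import math

def diff_proportions_ci(x1, n1, x2, n2, z=1.96):
    """Difference of two independent proportions with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    # standard error of the difference under independence
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# propofol 46/142 adverse respiratory events vs. ketofol 43/142
diff, lo, hi = diff_proportions_ci(46, 142, 43, 142)
# reproduces the reported 2% (95% ci -9% to 13%)
```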
thirty-eight (27%) patients receiving ketofol and 36 (25%) receiving propofol developed hypoxia, of whom three (2%) ketofol patients and one (1%) propofol patient received bag-valve-mask ventilation. sixty-five (46%) patients receiving ketofol and 93 (65%) receiving propofol required repeat medication dosing or lightened to a ramsay sedation score of 4 or less during their procedure (difference 19%; 95% ci 8% to 31%; p = 0.001). procedural agitation occurred in 5 patients (3.5%) receiving ketofol compared to 15 (11%) receiving propofol (difference 7.5%, 95% ci 1% to 14%). recovery agitation requiring treatment occurred in six patients (4%, 95% ci 2.0% to 8.9%) receiving ketofol. other secondary outcomes were similar between the groups. patients and staff were highly satisfied with both agents. conclusion: ketofol for ed procedural sedation does not result in a reduced incidence of adverse respiratory events compared to propofol alone. induction time, efficacy, and sedation time were similar; however, sedation depth appeared to be more consistent with ketofol. with propofol and its safety is well established. however, in 2010 cms enacted guidelines defining propofol as deep sedation and requiring administration by a physician. common edps practice had been one physician performing both the sedation and the procedure. edps has proven safe under this one-physician practice. however, the 2010 guidelines mandated that separate physicians perform each. objectives: the study hypothesis was that one-physician propofol sedation complication rates are similar to two-physician rates. methods: before-and-after observational study of patients >17 years of age consenting to edps with propofol. edps completed with one physician were compared to those completed with two (separate physicians performing the sedation and the procedure). all data were prospectively collected. the study was completed at an urban level i trauma center.
standard monitoring and procedures for edps were followed, with physicians blinded to the objectives of this research. the frequency and incremental dosing of medication was left to the discretion of the treating physicians. the study protocol required an ed nurse trained in data collection to be present to record vital signs and assess for any prospectively defined complications. we used chi-square tests to compare the binary outcomes and asa scores across the time periods, and two-sample t-tests to test for differences in age between the two time periods. results: during the 2-year study period we enrolled 481 patients: 252 one-physician edps sedations and 229 two-physician. all patients meeting inclusion criteria were included in the study. total adverse event rates were 4.4% and 3.1%, respectively (p = 0.450). the most common complications were hypotension and oxygen desaturation, which respectively showed one-physician rates of 2.0% and 0.8% and two-physician rates of 1.8% and 0.9% (p = 0.848 and 0.923). the unsuccessful procedure rates were 4.0% vs 3.9% (p = 0.983). conclusion: this study demonstrated no significant difference in complication rates for propofol edps completed by one physician as compared to two. background: overdose patients are often monitored using pulse oximetry, which may not detect changes in patients on high-flow oxygen. objectives: to determine whether changes in end-tidal carbon dioxide (etco 2 ) detected by capnographic monitoring are associated with clinical interventions due to respiratory depression (crd) in patients undergoing evaluation for a decreased level of consciousness after a presumed drug overdose. methods: this was a prospective, observational study of adult patients undergoing evaluation for a drug overdose in an urban county ed. all patients received supplemental oxygen.
patients were continuously monitored by trained research associates. the level of consciousness was recorded using the observer's assessment of alertness/sedation scale (oaa/s). vital signs, pulse oximetry, and oaa/s were monitored and recorded every 15 minutes and at the time of occurrence of any crd. respiratory rate and etco 2 were measured at five-second intervals using a capnostream 20 monitor. crd included an increase in supplemental oxygen, the use of bag-valve-mask ventilations, repositioning to improve ventilation, and physical or verbal stimulus to induce respiration, and were performed at the discretion of the treating physicians and nurses. changes from baseline in etco 2 values and waveforms among patients who did or did not have a clinical intervention were compared using wilcoxon rank sum tests. results: 100 patients were enrolled in the study (age 35, range 18 to 67, 62% male, median oaa/s 4, range 1 to 5). suspected overdoses were due to opioids in 34, benzodiazepines in 14, an antipsychotic in 14, and others in 38. the median time of evaluation was 165 minutes (range 20 to 725). crd occurred in 47% of patients, including an increase in o 2 in 38%, repositioning in 14%, and stimulation to induce respiration in 23%. 16% had an o 2 saturation of <93% (median 88, range 73 to 92) and 8% had a loss of etco 2 waveform at some time, all of whom had a crd. the median change in etco 2 from baseline was 5 mmhg, range 1 to 30. among patients with crd it was 14 mmhg, range 10 to 30, and among patients with no crd it was 5 mmhg, range 1 to 13 (p = 0.03). conclusion: the change in etco 2 from baseline was larger in patients who required clinical interventions than in those who did not. in patients on high-flow oxygen, capnographic monitoring may be sensitive to the need for airway support.
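the wilcoxon rank sum test used above compares the two groups on ranks rather than on the raw etco 2 changes; the underlying u statistic, with average ranks for ties, can be sketched as follows (illustrative only):

```python
def mann_whitney_u(xs, ys):
    """U statistic of the first sample, using average ranks for ties."""
    combined = sorted((v, i) for i, v in enumerate(xs + ys))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        # find the block of tied values starting at position i
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average 1-based rank of the tied block
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    r1 = sum(ranks[: len(xs)])  # rank sum of the first sample
    return r1 - len(xs) * (len(xs) + 1) / 2
```

a software package (the abstract does not name one) would convert this statistic to the reported p-value via an exact or normal-approximation null distribution.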
how reliable are health care providers in reporting changes in etco 2 waveform anas sawas 1 , scott youngquist 1 , troy madsen 1 , matthew ahern 1 , camille broadwater-hollifield 1 , andrew syndergaard 1 , jared phelps 2 , bryson garbett 1 , virgil davis 1 1 university of utah, salt lake city, ut; 2 midwestern university, glendale, az background: etco 2 changes have been used in procedural sedation analgesia (psa) research to evaluate subclinical respiratory depression associated with sedation regimens. objectives: to evaluate the accuracy of bedside clinician reporting of changes in etco 2 . methods: this was a prospective, randomized, single-blind study conducted in the ed setting from june 2010 until the present time. this study took place at an academic adult ed of a 405-bed hospital (21 ed beds) and level i trauma center. subjects were randomized to receive either ketamine-propofol or propofol according to a standardized protocol. losses of etco 2 waveform for ≥15 sec were recorded. following sedation, questionnaires were completed by the sedating physicians. digitally recorded etco 2 waveforms were also reviewed by an independent physician and a trained research assistant (ra). to ensure the reliability of trained research assistants, we compared their analyses with the analyses of an independent physician for the first 41 recordings. the target enrollment was 65 patients in each group (n = 130 total). statistics were calculated using sas statistical software. results: 91 patients were enrolled; 53 (58.2%) were male and 38 (41.8%) were female. mean age was 44.93 ± 17.93 years. most participants did not have major risk factors for apnea or for further complications (86.3% were asa class 1 or 2). etco 2 waveforms were reviewed by 87 (95.6%) sedating physicians and 84 (92.3%) nurses at the bedside. there were 70 (76.9%) etco 2 waveform recordings, of which 42 (60.0%) were reviewed by an independent physician and 70 (100%) were reviewed by an ra.
a kappa test for agreement between independent physicians and ras was conducted on 41 recordings and there were no discordant pairs (kappa = 1). compared to sedating physicians, the independent physician was more likely to report etco 2 wave losses (or 1.37, 95% ci 1.08-1.73). compared to sedating physicians, ras were more likely to report etco 2 wave losses (or 1.39, 95% ci 1.14-1.70). conclusion: compared to sedating physicians at the bedside, independent physicians and ras were more likely to note etco 2 waveform losses. an independent review of recorded etco 2 waveform changes will be more reliable for future sedation research. background: comprehensive studies evaluating current practices of ed airway management in japan are lacking. many emergency physicians in japan still experience resistance regarding rapid sequence intubation (rsi). objectives: we sought to describe the success and complication rates of rsi compared with non-rsi. methods: design and setting: we conducted a multicenter prospective observational study using the jean registry of eds at 11 academic and community hospitals in japan between 2010 and 2011. data fields include ed characteristics, patient and operator demographics, method of airway management, number of attempts, and adverse events. we defined non-rsi as intubation with sedation only, with neuromuscular blockade only, or without medication. participants: all patients undergoing emergency intubation in the ed were eligible for inclusion. cardiac arrest encounters were excluded from the analysis. primary analysis: we compared rsi with non-rsi in terms of success rate on first attempt, success rate within three attempts, and complication rate. we present descriptive data as proportions with 95% confidence intervals (cis). we report odds ratios (or) with 95% ci via chi-square testing. results: the database recorded 2710 intubations (capture rate 98%) and 1670 met the inclusion criteria.
rsi was the initial method chosen in 489 (29%) and non-rsi in 1181 (71%). use of rsi varied among institutes from 0% to 79%. rsi was successful on the first attempt in 353 intubations (72%, 95% ci 68%-76%) and within three attempts in 474 intubations (97%, 95% ci 95%-98%). non-rsi was successful on the first attempt in 724 intubations (61%, 95% ci 58%-64%) and within three attempts in 1105 intubations (94%, 95% ci 92%-95%). success rates of rsi on the first attempt and within three attempts were higher than those of non-rsi (or 1.64, 95% ci 1.30-2.06 and or 2.14, 95% ci 1.22-3.77, respectively). we recorded 67 complications in rsi (14%) and 165 in non-rsi (14%). there was no significant difference in complication rate between rsi and non-rsi (or 0.98, 95% ci 0.72-1.32). conclusion: in this multi-center prospective study in japan, we demonstrated a high degree of variation in use of rsi for ed intubation. additionally, we found that success rates of rsi on the first attempt and within three attempts were both higher than those of non-rsi. this study has the limitations of reporting bias and confounding by indication. (originally submitted as a "late-breaker.") methods: this was a prospective, randomized, single-blind study conducted in the ed setting from june 2010 until the present time. this study took place at an academic adult ed of a 405-bed hospital (21 ed beds) and level i trauma center. subjects were randomized to receive either ketamine-propofol or propofol according to a standardized protocol. etco 2 waveforms were digitally recorded. etco 2 changes were evaluated by the sedating physicians at the bedside. recorded waveforms were reviewed by an independent physician and a trained research assistant (ra). to ensure the reliability of trained ras, we computed a kappa test for agreement between the analyses of the independent physician and ras for the first 41 recordings.
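the agreement check just described (a kappa test over the first 41 recordings, reported elsewhere in this series as kappa = 1 with no discordant pairs) is cohen's kappa. a minimal two-rater sketch, assuming simple list-valued labels, and not the authors' sas code:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal label frequencies
    expected = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    if expected == 1:  # degenerate case: agreement guaranteed by margins
        return 1.0
    return (observed - expected) / (1 - expected)
```

identical ratings over mixed labels give kappa = 1.0; partial agreement gives a value between chance (0) and perfect (1).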
a post-hoc analysis of the association between any loss, the number of losses, and total duration of loss of etco 2 waveform and crp was performed. on review we recorded the absence or presence of loss of etco 2 and the total duration in seconds of all lost etco 2 episodes ≥15 seconds. ors were calculated using sas statistical software. results: 91 patients were enrolled; 53 (58.2%) were males and 38 (41.8%) were females. 86.3% of participants were asa class 1 or 2. waveforms were reviewed by 87 (95.6%) sedating physicians. there were 70 (76.9%) waveform recordings, 42 (60.0%) were reviewed by an independent physician and 70 (100%) were reviewed by ras, with no discordant pairs (kappa = 1). there were 24 (26.4%) crp events. any loss of etco 2 was associated with a non-significant or of 4.06 (95% ci 0.75-21.9) for crp. however, the duration of etco 2 loss was significantly associated with crp, with an or of 1.38 (95% ci 1.08-1.76) for each 30-second interval of lost etco 2 . the number of losses was also significantly associated with the outcome (or 1.48, 95% ci 1.15-1.91). conclusion: defining subclinical respiratory depression as present or absent may be less useful than quantitative measurements. this suggests that risk is cumulative over periods of loss of etco 2 , and the duration of loss may be a better marker of sedation depth and risk of complications than classification of any loss. background: ed visits present an opportunity to deliver brief interventions (bis) to reduce violence and alcohol misuse among urban adolescents at risk for future injury. previous analyses demonstrated that a brief intervention resulted in reductions in violence and alcohol consequences up to 6 months. objectives: this paper describes findings examining the efficacy of bis on peer violence and alcohol misuse at 12 months.
methods: patients (14-18 yrs) at an ed reporting past-year alcohol use and aggression were enrolled in the rct, which included a computerized assessment and randomization to a control group, a bi delivered by a computer (cbi), or a bi delivered by a therapist assisted by a computer (tbi). baseline and 12-month assessments included violence measures (peer aggression, peer victimization, violence-related consequences) and alcohol measures (alcohol misuse, binge drinking, alcohol-related consequences). results: 3338 adolescents were screened (88% participation). of those, 726 screened positive for violence and alcohol use and were randomized; 84% completed 12-month follow-up. as compared to the control group, the tbi group showed significant reductions in peer aggression (p < 0.01) and peer victimization (p < 0.05) at 12 months. the bi and control groups did not differ on alcohol-related variables at 12 months. conclusion: evaluation of the saferteens intervention one year following an ed visit provides support for the efficacy of computer-assisted therapist brief intervention for reducing peer violence. violence against ed health care workers: a 9-month experience terry kowalenko 1 , donna gates 2 , gordon gillespie 2 , paul succop 2 1 university of michigan, ann arbor, mi; 2 university of cincinnati, cincinnati, oh background: health care (hc) support occupations have an injury rate nearly 10 times that of the general sector due to assaults, with rates for doctors and nurses nearly 3 times greater. studies have shown that the ed is at greatest risk of such events compared to other hc settings. objectives: to describe the incidence of violence in ed hc workers over 9 months. specific aims were to 1) identify demographic, occupational, and perpetrator factors related to violent events; 2) identify the predictors of acute stress response in victims; and 3) identify predictors of loss of productivity after the event.
methods: a longitudinal, repeated-measures design was used to collect monthly survey data from ed hc workers (hcws) at six hospitals in two states. surveys assessed the number and type of violent events, and feelings of safety and confidence. victims also completed specific violent event surveys. descriptive statistics and a repeated-measures linear regression model were used. results: 213 ed hcws completed 1795 monthly surveys, and 827 violent events were reported. the average per-person violent event rate over 9 months was 4.15. 601 events were physical threats (3.01 per person in 9 months). 226 events were assaults (1.13 per person in 9 months). 501 violent event surveys were completed, describing 341 physical threats and 160 assaults, with 20% resulting in injuries. 63% of the physical threats and 52% of the assaults were perpetrated by men. comparing occupational groups revealed significant differences between nurses and physicians for all reported events (p = 0.0048), with the greatest difference in physical threats (p = 0.0447). nurses felt less safe than physicians (p = 0.0041). physicians felt more confident than nurses in dealing with the violent patient (p = 0.013). nurses were more likely to experience acute stress than physicians (p < 0.001). acute stress significantly reduced productivity in general (p < 0.001), with a significant negative effect on "ability to handle/manage workload" (p < 0.001) and "ability to handle/manage cognitive demands" (p < 0.05). conclusion: ed hcws are frequent victims of violence perpetrated by visitors and patients. this violence results in injuries, acute stress, and loss of productivity. acute stress has negative consequences on the workers' ability to perform their duties. this has serious potential consequences to the victim as well as the care they provide to their patients. a randomized controlled feasibility trial of vacant lot greening to reduce crime and increase perceptions of safety eugenia c. garvin, charles c.
branas perelman school of medicine at the university of pennsylvania, philadelphia, pa background: vacant lots, often filled with trash and overgrown vegetation, have been associated with intentional injuries. a recent quasi-experimental study found a significant decrease in gun crimes around vacant lots that had been greened compared with control lots. objectives: to determine the feasibility of a randomized vacant lot greening intervention, and its effect on police-reported crime and perceptions of safety. methods: for this randomized controlled feasibility trial of vacant lot greening, we partnered with the pennsylvania horticulture society (phs) to perform the greening intervention (cleaning the lots, planting grass and trees, and building a wooden fence around the perimeter). we analyzed police crime data and interviewed people living around the study vacant lots (greened and control) about perceptions of safety before and after greening. results: a total of 5200 sq ft of randomly selected vacant lot space was successfully greened. we used a master database of 54,132 vacant lots to randomly select 50 vacant lot clusters. we viewed each cluster with the phs to determine which were appropriate to send to the city of philadelphia for greening approval. the vacant lot cluster highest on the random list to be approved by the city of philadelphia was designated the intervention site, and the next highest was designated the control site. overall, 29 participants completed baseline interviews, and 21 completed follow-up interviews after 3 months. 59% of participants were male, 97% were black or african american, and 52% had a household income less than $25,000. unadjusted difference-in-differences estimates showed a decrease in gun assaults around greened vacant lots compared to control. 
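the unadjusted difference-in-differences estimate mentioned above has a simple closed form: the pre-to-post change around greened lots minus the same change around control lots. a sketch with hypothetical counts, since the abstract does not report the raw numbers:

```python
def difference_in_differences(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Unadjusted difference-in-differences point estimate:
    change in the treated group minus change in the control group."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# hypothetical monthly gun-assault counts, for illustration only:
# greened lots fall from 5 to 3 while control lots stay at 4,
# so the estimate attributes a decrease of 2 to greening.
effect = difference_in_differences(5, 3, 4, 4)  # → -2
```

the subtraction of the control-group change is what nets out citywide crime trends that would have occurred without greening.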
regression-adjusted estimates showed that people living around greened vacant lots reported feeling safer after greening compared to those who lived around control vacant lots (p < 0.01). conclusion: conducting a randomized controlled trial of vacant lot greening is feasible. greening may reduce certain gun crimes and make people feel safer. however, larger prospective trials are needed to further investigate this link. screening for violence identifies young adults at risk for return ed visits for injury abigail hankin-wei, brittany meagley, debra houry emory university, atlanta, ga background: homicide is the second leading cause of death among youth ages 15-24. prior studies, in nonhealth care settings, have shown associations between violent injury and risk factors including exposure to community violence, peer behavior, and delinquency. objectives: to assess whether self-reported exposure to violence risk factors can be used to predict future ed visits for injuries. methods: we conducted a prospective cohort study in the ed of a southeastern us level i trauma center. patients aged 15-24 presenting for any chief complaint were included unless they were critically ill, incarcerated, or could not read english. recruitment took place over six months, by a trained research assistant (ra). the ra was present in the ed for 3-5 days per week, with shifts scheduled such that they included weekends and weekdays, over the hours from 8 am-8 pm. patients were offered a $5 gift card for participation. at the time of initial contact in the ed, patients completed a written questionnaire which included validated measures of the following risk factors: a) aggression, b) perceived likelihood of violence, c) recent violent behavior, d) peer behavior, e) community exposure to violence, and f) positive future outlook. at 12 months following the initial ed visit, the participants' medical records were reviewed to identify any subsequent ed visits for injury-related complaints. 
data were analyzed with chi-square and logistic regression analyses. results: 332 patients were approached, of whom 300 patients consented. participants' average age was 21.1 years, with 57% female, and 86% african american. return visits for injuries were significantly associated with hostile/aggressive feelings (rr 3.7, ci 1.42-9), self-reported perceived likelihood of violence (rr 5.16, ci 1.93-13.78), recent violent behavior (rr 3.16, ci 1.01-9.88), and peer group violence (rr 4.4, ci 1.72-11.25). these findings remained significant when controlling for participant sex. conclusion: a brief survey of risk factors for violence is predictive of return visit to the ed for injury. these findings identify a potentially important tool for primary prevention of violent injuries among young adults visiting the ed for both injury and non-injury complaints. background: sepsis is a commonly encountered disease in the ed, with high mortality. while several clinical prediction rules (cpr) including meds, sirs, and curb-65 exist to facilitate clinicians in early recognition of risk of mortality for sepsis, most are of suboptimal performance. objectives: to derive a novel cpr for mortality of sepsis utilizing clinically available and objective predictors in the ed. methods: we retrospectively reviewed all adult septic patients who visited the ed at a tertiary hospital during the year 2010 with two sets of blood cultures ordered by physicians. basic demographics, ed vital signs, symptoms and signs, underlying illnesses, laboratory findings, microbiological results, and discharge status were collected. multivariate logistic regressions were used to obtain a novel cpr using predictors with p-value <0.1 in univariate analyses. the existing cprs were compared with this novel cpr using auc. results: of 8699 included patients, 7.6% died in hospital, 51% had diabetes, 49% were older than 65 years of age, 21% had malignancy, and 16% had positive blood bacterial culture tests.
predisposing factors including history of malignancy, liver disease, immunosuppressed status, chronic kidney disease, congestive heart failure, and age older than 65 years were found to be associated with mortality (all p < 0.05). patients who developed mortality tended to have lower body temperature, narrower pulse pressure, higher percentage of red cell distribution width (rdw) and bandemia, higher blood urea nitrogen (bun), ammonia, and c-reactive protein levels, and longer prothrombin time and activated partial thromboplastin time (aptt) (all p < 0.05). the most parsimonious cpr incorporated history of malignancy (or 2.3, 95% ci 1.9-2.7), prolonged aptt (3.0, 2.4-3.8), and presence of bandemia (1.7, 1.4-2.0). results: there was poor agreement between the physician's unstructured assessment used in clinical practice and the guidelines put forth by the aha/acc/acep task force. ed physicians were more likely to assess a patient as low risk (42%), while aha guidelines were more likely to classify patients as intermediate (50%) or high (40%) risk. however, when comparing the patient's final acs diagnosis and the relation to the risk assessment value, ed physicians proved better predictors of high-risk patients who in fact had acs, while the aha/acc/acep guidelines proved better at correctly identifying low-risk patients who did not have acs. conclusion: in the ed, physicians are far more efficient at correctly placing patients with underlying acs into a high-risk category, while established criteria may be overly conservative when applied to an acute care population. further research is indicated to look at ed physicians' risk stratification and ensuing patient care to assess for appropriate decision making and ultimate outcomes. comparative conclusion: the amuse score was more specific, but the wells score was more sensitive for acute lower limb dvt in this cohort.
there is no significant advantage in using the amuse over the wells score in ed patients with suspected dvt. background: the direct cost of medical care is not accurately reflected in charges or reimbursement. the cost of boarding admitted patients in the ed has been studied in terms of opportunity costs, which are indirect. the actual direct effect on hospital expenses has not been well defined. objectives: we calculated the difference to the hospital in the cost of caring for an admitted patient in the ed and in a non-critical care in-patient unit. methods: time-driven activity-based costing (tdabc) has recently been proposed as a method of determining the actual cost of providing medical services. tdabc was used to calculate the cost per patient bed-hour both in the ed and for an in-patient unit. the costs include nursing, nursing assistants, clerks, attending and resident physicians, supervisory salaries, and equipment maintenance. boarding hours were determined from placement of the admission order to transfer to the in-patient unit. a convenience sample of 100 consecutive non-critical care admissions was assessed to find the degree of ed physician involvement with boarded patients. results: the overhead cost per patient bed-hour in the ed was $60.80. the equivalent cost per bed-hour in-patient was $23.39, a differential of $37.41. there were 27,618 boarding hours for medical-surgical patients in 2010, a differential of $1,033,189.38 for the year. for the short-stay unit (no residents), the cost per patient hour was $11.36 and the boarding hours were 11,804. this resulted in a differential cost of $583,389.76, for a total direct cost to the hospital of $1,616,579.14. review of 100 consecutive admissions showed no orders placed by the ed physician after decision-to-admit. conclusion: concentration of resources in the ed means considerably higher cost per unit of care as compared to an in-patient unit.
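the medical-surgical cost arithmetic reported above (bed-hour rate differential times boarding hours) can be reproduced directly. `boarding_cost_differential` is an illustrative name, and decimal arithmetic avoids float rounding in currency:

```python
from decimal import Decimal

def boarding_cost_differential(ed_rate, inpatient_rate, boarding_hours):
    """Annual excess cost of boarding: (ED cost per bed-hour minus
    in-patient cost per bed-hour) times total boarding hours."""
    return (ed_rate - inpatient_rate) * boarding_hours

# figures reported in the abstract for med-surg boarders:
# $60.80/h in the ED, $23.39/h in-patient, 27,618 boarding hours
med_surg = boarding_cost_differential(
    Decimal("60.80"), Decimal("23.39"), 27_618
)
# → Decimal('1033189.38'), matching the reported $1,033,189.38
```

the med-surg figure reproduces the abstract exactly; the short-stay figure depends on which ED rate applies to that unit, so it is not asserted here.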
keeping admitted patients boarding in the ed results in expensive underutilization. this is exclusive of significant opportunity costs of lost revenue from walk-out and diverted patients. this study includes the cost of teaching attendings and residents (ed and in-patient). in a non-teaching setting, the differential would be less and the cost of boarding would be shared by a fee-for-service ed physician group as well as the hospital. improving identification of frequent emergency department users using a regional health information exchange background: frequent ed users consume a disproportionate amount of health care resources. interventions are being designed to identify such patients and direct them to more appropriate treatment settings. because some frequent users visit more than one ed, a health information exchange (hie) may improve the ability to identify frequent ed users across sites of care. objectives: to demonstrate the extent to which an hie can identify the marginal increase in frequent ed users beyond that which can be detected with data from a single hospital. methods: data from 6/1/10 to 5/31/11 from the new york clinical information exchange (nyclix), an hie in new york city that includes ten hospitals, were analyzed to calculate the number of frequent ed users (≥4 visits in 30 days) at each site and across the hie. results: there were 10,555 (1% of total patients) frequent ed users, with 7,518 (71%) of frequent users having all their visits at a single ed, while 3,037 (29%) frequent users were identified only after counting visits to multiple eds (table 1). site-specific increases varied from 7% to 62% (sd 16.5). frequent ed users accounted for 1% of patients, but for 6% of visits, averaging 9.74 visits per year, versus 1.55 visits per year for all other patients. 28.5% of frequent users visited two or more eds during the study period, compared to 10.6% of all other patients.
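once an hie merges a patient's visits across sites, the frequent-user definition above (≥4 visits in any 30-day window) reduces to a small sliding-window check over sorted visit dates. a sketch with hypothetical dates, not nyclix data:

```python
from datetime import date, timedelta

def is_frequent_user(visit_dates, threshold=4, window_days=30):
    """True if any rolling window of `window_days` contains at least
    `threshold` ED visits. Visits are pooled across sites, as an HIE allows."""
    dates = sorted(visit_dates)
    window = timedelta(days=window_days)
    # if visits i and i+threshold-1 fall within the window,
    # that window contains at least `threshold` visits
    return any(
        dates[i + threshold - 1] - dates[i] <= window
        for i in range(len(dates) - threshold + 1)
    )

# hypothetical patient seen twice at each of two EDs in june 2010:
site_a = [date(2010, 6, 1), date(2010, 6, 10)]
site_b = [date(2010, 6, 15), date(2010, 6, 20)]
```

neither site alone flags this patient (two visits each), but the pooled record does, which is exactly the 29% marginal gain the abstract attributes to the hie.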
conclusion: frequent ed users commonly visited multiple nyclix eds during the study period. the use of an hie helped identify many additional frequent users, though the benefits were lower for hospitals not located in the relative vicinity of another nyclix hospital. measures that take a community, rather than a single institution, into account may be more reflective of the care that the patient experiences. indocyanine background: due to their complex nature and high associated morbidity, burn injuries must be handled quickly and efficiently. partial thickness burns are currently treated based upon visual judgment of burn depth by the clinician. however, such judgment is only 67% accurate and not expeditious. laser doppler imaging (ldi) is far more accurate, nearly 96% after 3 days. however, it is too cumbersome for routine clinical use. laser-assisted indocyanine green angiography (laicga) has been indicated as an alternative for diagnosing the depth of burn injuries, and possesses greater utility for clinical translation. as the preferred outcome of burn healing is aesthetic, it is of interest to determine if wound contracture can be predicted early in the course of a burn by laicga. objectives: to determine the utility of early burn analysis using laicga in the prediction of 28-day wound contracture. methods: a prospective animal experiment was performed using six anesthetized pigs, each with 20 standardized wounds. differences in burn depth were created by using a 2.5 × 2.5 cm aluminum bar at three exposure times and temperatures: 70 degrees c for 30 seconds, 80 degrees c for 20 seconds, and 80 degrees c for 30 seconds. we have shown in prior validation experiments that these burn temperatures and times create distinct burn depths. laicga scanning, using lifecell spy elite, took place at 1 hour, 24 hours, 48 hours, 72 hours, and 1 week post burn.
imaging was read by a blinded investigator, and perfusion trends were compared with day 28 post-burn contraction outcomes measured using imagej software. biopsies were taken on day 28 to measure scar tissue depth. results: deep burns were characterized by a blue center indicating poor perfusion, while more superficial burns were characterized by a yellow-red center indicating perfusion that was close to that of the normal uninjured adjacent skin (see figure). a linear relationship between contraction outcome and burn perfusion could be discerned as early as 1 hour post burn, peaking in strength at 24-48 hours post-burn. burn intensity could be effectively identified at 24 hours post-burn, although there was no relationship with scar tissue depth. conclusion: pilot data indicate that laicga using lifecell spy has the ability to determine the depth of injury and predict the degree of contraction of deep dermal burns within 1-2 days of injury with greater accuracy than clinical scoring. objectives: we hypothesize that real-time monitoring of an integrated electronic medical records system and the subsequent firing of a "sepsis alert" icon on the electronic ed tracking board results in improved mortality for patients who present to the ed with severe sepsis or septic shock. methods: we retrospectively reviewed our hospital's sepsis registry and included all patients diagnosed with severe sepsis or septic shock who presented to an academic community ed with an annual census of 73,000 visits and were admitted to a medical icu or stepdown icu bed between june 2009 and october 2011. in may 2010 an algorithm was added to our integrated medical records system that identifies patients with two sirs criteria and evidence of end-organ damage or shock on lab data. when these criteria are met, a "sepsis alert" icon (prompt) appears next to that patient's name on the ed tracking board.
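the screening algorithm described above is a rule over vitals and labs. the abstract does not give exact thresholds, so this sketch assumes the standard sirs cutoffs (simplified: the paco2 and bandemia variants are omitted) and a boolean flag for laboratory evidence of end-organ damage or shock; it is an illustration, not the hospital's production logic:

```python
def sirs_count(temp_c, heart_rate, resp_rate, wbc_k):
    """Number of (simplified) SIRS criteria met, using the standard
    thresholds: temp >38 or <36 C, HR >90, RR >20, WBC >12k or <4k."""
    return sum([
        temp_c > 38.0 or temp_c < 36.0,
        heart_rate > 90,
        resp_rate > 20,
        wbc_k > 12.0 or wbc_k < 4.0,
    ])

def sepsis_alert(temp_c, heart_rate, resp_rate, wbc_k, end_organ_damage):
    """Fire the tracking-board alert when >=2 SIRS criteria are met
    plus laboratory evidence of end-organ damage or shock."""
    return (
        sirs_count(temp_c, heart_rate, resp_rate, wbc_k) >= 2
        and end_organ_damage
    )
```

a febrile, tachycardic patient with abnormal labs fires the alert; the same vitals without end-organ evidence, or normal vitals alone, do not.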
the system also pages an in-house, specially trained icu nurse who can respond on a prn basis and assist in the patient's management. 18 months of intervention data are compared with 11 months of baseline data. statistical analysis was via z-test for proportions. results: for ed patients with severe sepsis, the pre- and post-alert mortality was 19 of 125 (15%) and 34 of 378 (9%), respectively (p = 0.084; n = 503). in the septic shock group, the pre- and post-alert mortality was 27 of 92 (29%) and 48 of 172 (28%), respectively (p = 0.977). with ed and inpatient sepsis alerts combined, the severe sepsis subgroup mortality was reduced from 17% to 9% (p = 0.013; n = 622). conclusion: real-time ed ehr screening for severe sepsis and septic shock patients did not improve mortality. a positive trend in the severe sepsis subgroup was noted, and the combined inpatient plus ed data suggest statistical significance may be reached as more patients enter the registry. limitations: retrospective study, potential increased data capture post intervention, and no "gold standard" to test the sepsis alert sensitivity and specificity. descriptive statistics were calculated. principal component analysis was used to determine questions with continuous response formats that could be aggregated. aggregated outcomes were regressed onto predictor demographic variables using multiple linear regression. results: 80/100 physicians completed the survey. physicians had a mean of 9.8 ± 9.0 years experience in the ed. 23.8% were female. eight physicians (10%) reported never having used the tool, while 70.8% of users estimated having used it more than five times. 75% of users cited the "p" alert on the etb as the most common notification method. most felt the "p" alert did not help them identify patients with pneumonia earlier (mean = 2.5 ± 1.2), but found it moderately useful in reminding them to use the tool (3.5 ± 1.3).
physicians found the tool helpful in making decisions regarding triage, diagnostic studies, and antibiotic selection for outpatients and inpatients (3.7 ± 1.0, 3.6 ± 1.1, 3.6 ± 1.1, and 4.2 ± 0.9, respectively). they did not feel it negatively affected their ability to perform other tasks (1.6 ± 0.9). using multiple linear regression, neither age, sex, years of experience, nor tool use frequency significantly predicted responses to questions about triage and antibiotic selection, technical difficulties, or diagnostic ordering. conclusion: ed physicians perceived the tool to be helpful in managing patients with pneumonia without negatively affecting workflow. perceptions appear consistent across demographic variables and experience. objectives: we seek to examine whether use of the salt device can provide reliable tracheal intubation during ongoing cpr. the dynamic model tested the device with human-powered cpr (manual) and with an automated chest compression device (physio control lucas 2). the hypothesis is that the predictable movement of an automated chest compression device will make tracheal intubation easier than the random movement from manual cpr. methods: the project was an experimental controlled trial and took place in the ed at a tertiary referral center in peoria, illinois. this project was an expansion arm of a similarly structured study using traditional laryngoscopy. emergency medicine residents, attending physicians, paramedics, and other acls-trained staff were eligible for participation. in randomized order, each participant attempted intubation on a mannequin using the salt device with no cpr ongoing, during cpr with manual compressions, and during cpr with automated chest compressions. participants were timed in their attempts and success was determined after each attempt. results: there were 43 participants in the trial.
the success rates in the control group and the automated cpr group were both 86% (37/43), and the success rate in the manual cpr group was 79% (34/43). objectives: our primary hypothesis was that in fasting, asymptomatic subjects, larger fluid boluses would lead to proportional aortic velocity changes. our secondary endpoints were to determine inter- and intra-subject variation in aortic velocity measurements. methods: the authors performed a prospective randomized double-blinded trial using healthy volunteers. we measured the velocity time integral (vti) and maximal velocity (vmax) with an estimated 0-20° pulsed-wave doppler interrogation of the left ventricular outflow in the apical-5 cardiac window. three physicians reviewed optimal sampling gate position and doppler angle, and verified the presence of an aortic closure spike. angle correction technology was not used. subjects with no history of cardiac disease or hypertension fasted for 12 hours and were then randomly assigned to receive a normal saline bolus of 2 ml/kg, 10 ml/kg, or 30 ml/kg over 30 minutes. aortic velocity profiles were measured before and after each fluid bolus. results: forty-two subjects were enrolled. mean age was 33 ± 10 (range 24 to 61) and mean body mass index 24.7 ± 3.2 (range 18.7 to 32). mean volumes (in ml) for groups receiving 2 ml/kg, 10 ml/kg, and 30 ml/kg were 151, 748, and 2162, respectively. mean baseline vmax (in cm/s) of the 42 subjects was 108.4 ± 12.5 (range 87 to 133). mean baseline vti (in cm) was 23.2 ± 2.8 (range 18.2 to 30.0). pre- and post-fluid mean differences were -1.7 (± 10.3) for vmax and 0.7 (± 2.7) for vti. aortic velocity changes in groups receiving 2 ml/kg, 10 ml/kg, and 30 ml/kg were not statistically significant (see table). heart rate changes were not significant. background: clinicians recognize that septic shock is a highly prevalent, high-mortality disease state. evidence supports early ed resuscitation, yet care delivery is often inconsistent and incomplete.
objectives: to discover latent critical barriers to successful ed resuscitation of septic shock. methods: we conducted five 90-minute risk-informed in-situ simulations. ed physicians and nurses working in the real clinical environment cared for a standardized patient, introduced into their existing patient workload, with signs and symptoms of septic shock. immediately after case completion clinicians participated in a 30-minute debriefing session. transcripts of these sessions were analyzed using grounded theory, a method of qualitative analysis, to identify critical barrier themes. results: fifteen clinicians participated in the debriefing sessions: four attending physicians, five residents, five nurses, and one nurse practitioner. the most prevalent critical barrier themes were: anchoring bias and difficulty with cognitive framework adaptation as the patient progressed to septic shock (n = 26), difficult interactions between the ed and ancillary departments (n = 22), difficulties with physician-nurse communication and teamwork (n = 18), and delays in placing the central venous catheter due to perceptions surrounding equipment availability and the desire to attend to other competing interests in the ed prior to initiation of the procedure (n = 17 and 14). each theme was represented in at least four of the five debriefing sessions. participants reported the in-situ simulations to be a realistic representation of ed sepsis care. conclusion: in-situ simulation and subsequent debriefing provides a method of identifying latent critical areas for improvement in a care process.
improvement strategies for ed-based septic shock resuscitation will need to address the difficulties in shock recognition and cognitive framework adaptation, physician and nurse teamwork, and prioritization of team effort. background: the association between blood glucose level and mortality in critically ill patients is highly debated. several studies have investigated the association between history of diabetes, blood sugar level, and mortality of septic patients; however, no consistent conclusion has been drawn so far. objectives: to investigate the association between diabetes, initial glucose level, and in-hospital mortality in patients with suspected sepsis presenting to the ed. methods: we conducted a retrospective cohort study of all adult septic patients who visited the ed at a tertiary hospital during the year 2010 and had two sets of blood cultures ordered by physicians. basic demographics, ed vital signs, symptoms and signs, underlying illnesses, laboratory findings, microbiological results, and discharge status were collected. logistic regressions were used to evaluate the association between risk factors, initial blood sugar level, and history of diabetes and mortality, as well as the effect modification between initial blood sugar level and history of diabetes. results: a total of 4997 patients with available blood sugar levels were included, of whom 48% had diabetes, 46% were older than 65 years of age, and 56% were male. the mortality rate was 6% (95% ci 5.3-6.7%). patients with a history of diabetes tended to be older, female, and more likely to have chronic kidney disease, lower sepsis severity (meds score), and positive blood culture test results (all p < 0.05). patients with a history of diabetes tended to have lower in-hospital mortality after ed visits with sepsis, controlling for initial blood sugar level (aor 0.72, 95% ci 0.56-0.92, p = 0.01).
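the aor reported above is an adjusted estimate from a multivariable logistic regression; as a point of reference, the crude (unadjusted) odds ratio and its 95% ci can be computed directly from a 2 × 2 table. a minimal python sketch, using purely illustrative cell counts (the abstract does not report the raw cells):

```python
import math

# crude odds ratio and 95% ci from a 2x2 table
# (illustrative counts only; the study's aor 0.72 is adjusted via logistic regression)
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(or)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# illustrative: in-hospital deaths among diabetic vs. non-diabetic septic patients
or_, lo, hi = odds_ratio_ci(120, 2279, 180, 2418)
```

the ci here is the usual woolf (log-odds) interval; a regression-based adjusted estimate would differ to the extent the covariates matter.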
a normal initial blood sugar seemed to be beneficial compared to a lower blood sugar level for in-hospital mortality, controlling for history of diabetes, sex, severity of sepsis, and age (aor 0.61, 95% ci 0.44-0.84, p = 0.002). the effect modification of diabetes on blood sugar level and mortality, however, was not statistically significant (p = 0.09). conclusion: a normal initial blood sugar level in the ed and a history of diabetes might be protective against mortality in septic patients who visited the ed. further investigation is warranted to determine the mechanism for these effects. methods: this irb-approved retrospective chart review included all patients treated with therapeutic hypothermia after cardiac arrest during 2010 at an urban, academic teaching hospital. every patient undergoing therapeutic hypothermia is treated by neurocritical care specialists. patients were identified by review of neurocritical care consultation logs. clinical data were dually abstracted by trained clinical study assistants using a standardized data dictionary and case report form. medications reviewed during hypothermia were midazolam, lorazepam, propofol, fentanyl, cisatracurium, and vecuronium. results: there were 33 patients in the cohort. median age was 57 (range 28-86 years), 67% were white, 55% were male, and 49% had a history of coronary artery disease. seizures were documented by continuous eeg in 11/33 (33%), and 20/33 (61%) died during hospitalization. most, 30/33 (91%), received fentanyl, 21/33 (64%) received benzodiazepine pharmacotherapy, and 23/33 (70%) received propofol. paralytics were administered to 23/33 (70%) patients: 14/33 (42%) received cisatracurium and 9/33 (27%) vecuronium. of note, one patient required pentobarbital for seizure management. conclusion: sedation and neuromuscular blockade are common during management of patients undergoing therapeutic hypothermia after cardiac arrest.
patients in this cohort often received analgesia with fentanyl and sedation with a benzodiazepine or propofol. given the frequent use of sedatives and paralytics in survivors of cardiac arrest undergoing hypothermia, future studies should investigate the potential effect of these drugs on prognostication and survival after cardiac arrest. background: the use of therapeutic hypothermia (th) is a burgeoning treatment modality for post-cardiac arrest patients. objectives: we performed a retrospective chart review of patients who underwent post-cardiac arrest th at eight different institutions across the united states. our objective was to assess how th is currently being implemented in emergency departments and to assess the feasibility of conducting more extensive th research using multi-institution retrospective data. methods: a total of 94 charts with dates from 2008-2011 were sent for review by participating institutions of the peri-resuscitation consortium. of those reviewed, eight charts were excluded for missing data. two independent reviewers performed the review; the results were subsequently compared and discrepancies resolved by a third reviewer. we assessed patient demographics, initial presenting rhythm, time until th initiation, duration of th, cooling methods and temperature reached, survival to hospital discharge, and neurological status on discharge. results: the majority of th cases had initial cardiac rhythms of asystole or pulseless electrical activity (55.2%), followed by ventricular tachycardia or fibrillation (34.5%); in 10.3% the inciting cardiac rhythm was unknown. time to initiation of th ranged from 0-783 minutes with a mean time of 99 min (sd 132.5). length of th ranged from 25-2171 minutes with a mean time of 1191 minutes (sd 536). the average minimum temperature achieved was 32.5°c, with a range of 27.6-36.7°c (sd 1.5°c).
of the charts reviewed, 29 patients (33.3%) survived to hospital discharge and 19 (21.8%) were discharged relatively neurologically intact. conclusion: research surrounding cardiac arrest has always been difficult given the time and location span from pre-hospital care to emergency department to intensive care unit. also, as witnessed cardiac arrest events are relatively rare with poor survival outcomes, very large sample sizes are needed to draw any meaningful conclusions about th. our varied and inconsistent results show that a multi-center retrospective review is also unlikely to provide useful information. a prospective multi-center trial with a uniform th protocol is needed if we are ever to make any evidence-based conclusions on the utility of th for post-cardiac arrest patients. results: mean la was 2.04, sd = 1.45. mean age was 4.5 years old, sd = 5.20. a statistically significant positive correlation was found between la and pulse, respiratory rate (rr), wbc, platelets, and los, while a significant negative correlation was seen with temperature and hco3−. when two subjects with la >10 were dropped as possible outliers, the temperature correlation became non-significant, but a significant negative correlation with age and bun was revealed. patients in the higher la group were more likely to be admitted (p = 0.0001) and to have longer los. of the discharged patients, there was no difference in mean la level between those who returned (n = 25, mean la of 1.88, sd = 0.88) and those who did not (n = 154, mean la of 1.88, sd = 1.35), p = 0.99. furthermore, mean la levels for those with sepsis (n = 138, mean la of 2.18, sd = 1.75) did not differ from those without sepsis (n = 147, mean la of 1.9, sd = 1.08), p = 0.11. conclusion: higher la in pediatric patients presenting to the ed with suspected infection correlated with increased pulse, rr, wbc, and platelets, and with decreased bun, hco3−, and age.
la may be predictive of hospitalization, but not of 3-day return rates or of pediatric sepsis screening in the ed. background: mandibular fractures are one of the most frequently seen injuries in the trauma setting, accounting for 40-62% of all facial bone fractures. prior studies have demonstrated that the use of a tongue blade to screen these patients for a mandibular fracture may be as sensitive as x-ray; one study showed the sensitivity and specificity of the test to be 95.7% and 63.5%, respectively. in the last ten years, high-resolution computed tomography (hct) has replaced panoramic tomography (pt) as the gold standard for imaging of patients with suspected mandibular fractures. objectives: the purpose of this study was to determine the sensitivity and specificity of the tongue blade test (tbt) as compared to the new gold standard of radiologic imaging, hct. the question being asked: is the tbt still useful as a screening tool for patients with suspected mandibular fractures when compared to hct? methods: design: prospective cohort study. setting: an urban tertiary care level i trauma center. subjects: any person presenting with facial trauma from 8/1/10 to 8/31/11. intervention: a tbt was performed by the resident physician and confirmed by the supervising attending physician. ct of the facial bones was then obtained for the ultimate diagnosis. inter-rater reliability (kappa) was calculated, along with sensitivity, specificity, accuracy, ppv, npv, positive likelihood ratio (lr+), and negative likelihood ratio (lr−), based on the 2 × 2 contingency tables generated. results: over the study period 85 patients were enrolled. inter-rater reliability was kappa = 0.93 (se ± 0.11).
the table demonstrates the outcomes of both the tbt and ct facial bones for mandibular fracture. the following parameters were calculated from the contingency table: sensitivity 0.97 (ci 0.81-0.99), specificity 0.72 (ci 0.58-0.83), ppv 0.67 (ci 0.52-0.78), npv 0.97 (ci 0.87-0.99), accuracy 0.81, lr+ 3.48, lr− 0.04 (ci 0.01-0.31). conclusion: the tbt is still a useful screening tool to rule out mandibular fractures in patients with facial trauma as compared to the current gold standard of hct. background: appendicitis is the most common surgical emergency occurring in children. the diagnosis of pediatric appendicitis is often difficult, and computerized tomography (ct) scanning is utilized frequently. ct, although accurate, is expensive, time-consuming, and exposes children to ionizing radiation. radiologists utilize ultrasound for the diagnosis of appendicitis, but it may be less accurate than ct and may not incorporate the emergency physician's (ep) clinical impression regarding degree of risk. objectives: the current study compared ep clinical diagnosis of pediatric appendicitis pre- and post-bedside ultrasonography (bus). methods: children 3-17 years of age were enrolled if their clinical attending physician planned to obtain a consultative ultrasound, ct scan, or surgical consult specific for appendicitis. most children in the study received narcotic analgesia to facilitate bus. subjects were initially graded for likelihood of appendicitis based on research physician-obtained history and physical examination using a visual analogue scale (vas). immediately after initial grading, research physicians performed a bus and recorded a second vas impression of appendicitis likelihood. two outcome measures were combined as the gold standard for statistical analysis: the post-operative pathology report served as the gold standard for subjects who underwent appendectomy, while 2-week telephone follow-up was used for subjects who did not undergo surgery.
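the tbt accuracy figures above all follow from a single 2 × 2 contingency table; a short python sketch, using hypothetical cell counts chosen only to be roughly consistent with the reported values (the study's actual counts are in its table, not reproduced here):

```python
# screening-test metrics from a 2x2 contingency table
# (hypothetical cell counts for illustration, not the study's table)
def screening_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)            # sensitivity: test+ among those with fracture
    spec = tn / (tn + fp)            # specificity: test- among those without
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1 - spec)       # lr(+): factor by which a positive test raises odds
    lr_neg = (1 - sens) / spec       # lr(-): factor by which a negative test lowers odds
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, ppv, npv, lr_pos, lr_neg, accuracy

sens, spec, ppv, npv, lr_pos, lr_neg, acc = screening_metrics(tp=29, fp=14, fn=1, tn=36)
```

the very low lr(−) is what makes the tbt a good rule-out test: a negative result shrinks the odds of fracture by a factor of roughly 20-25.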
various specific ultrasound measures used for the diagnosis of appendicitis were assessed as well. results: 29/56 subjects had pathology-proven appendicitis. one subject was pathology-negative post-appendectomy. of the 26 subjects who did not undergo surgery, none had developed appendicitis at the 2-week telephone follow-up. pre-bus sensitivity was 48% (29-68%) while post-bus sensitivity was 79% (60-92%). both pre- and post-bus specificity was 96% (81-100%). pre-bus lr+ was 13 (2-93), while post-bus lr+ was 21 (3-148). pre- and post-bus lr− were 0.5 and 0.2, respectively. bus changed the diagnosis for 20% of subjects (9-32%). background: there are very few data on the normal distance between the glenoid rim and the posterior aspect of the humeral head in normal and dislocated shoulders. while shoulder x-rays are commonly used to detect shoulder dislocations, they may be inadequate, exacerbate pain in the acquisition of some views, and lead to delay in treatment compared to bedside ultrasound evaluation. objectives: our objective was to compare the glenoid rim to humeral head distance in normal shoulders and in anteriorly dislocated shoulders. this is the first study proposing to set normal and abnormal limits. methods: subjects were enrolled in this prospective observational study if they had a chief complaint of shoulder pain or injury and received a shoulder ultrasound as well as a shoulder x-ray. the sonographers were undergraduate students given ten hours of training to perform the shoulder ultrasound. they were blinded to the x-ray interpretation, which was used as the gold standard. we used a posterior-lateral approach, capturing an image with the glenoid rim, the humeral head, and the infraspinatus muscle. two parallel lines were applied to the most posterior aspect of the humeral head and the most posterior aspect of the glenoid rim. a line perpendicular to these lines was applied, and the distance measured.
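likelihood ratios like those reported for bus above update a pre-test probability via bayes' theorem in odds form; a small python sketch (the pre-test figure used here is simply the cohort prevalence, 29/56, chosen for illustration):

```python
# bayes update in odds form: post-test odds = pre-test odds * likelihood ratio
def post_test_prob(pretest_p, lr):
    pretest_odds = pretest_p / (1 - pretest_p)
    post_odds = pretest_odds * lr
    return post_odds / (1 + post_odds)

# illustrative: cohort prevalence of 29/56 combined with the post-bus ratios
p_pos = post_test_prob(29 / 56, 21)   # probability of appendicitis after a positive bus
p_neg = post_test_prob(29 / 56, 0.2)  # probability after a negative bus (lr- = 0.2)
```

with these inputs a positive scan pushes the probability above 95%, while a negative scan pulls it below 20%, which is the practical meaning of the reported lr+ of 21 and lr− of 0.2.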
in anterior dislocations, a negative measurement was used to denote the fact that the glenoid rim is now posterior to the most posterior aspect of the humeral head. descriptive analysis was applied to estimate the mean and 25th to 75th interquartile range of normal and anteriorly dislocated shoulders. results: eighty subjects were enrolled in this study. there were six shoulder dislocations; however, only four were anterior dislocations. the average distance between the posterior glenoid rim and the posterior humeral head in normal shoulders was 8.7 mm, with a 25th to 75th interquartile range of 6.7 mm to 11.9 mm. the distance in our four cases of anterior dislocation was −11 mm, with a 25th to 75th interquartile range of −10 mm to −12 mm. conclusion: the distance between the posterior humeral head and the posterior glenoid rim may be 7 mm to 12 mm in patients presenting to the ed with shoulder pain but no dislocation. in contrast, this distance in anterior dislocations was more negative than −10 mm. shoulder ultrasound may be a useful adjunct to x-ray for diagnosing anterior shoulder dislocations. conclusion: in this retrospective study, the presence of rv strain on focus significantly increases the likelihood of an adverse short-term event from pulmonary embolism, and its combination with hypotension performs similarly to other prognostic rules. background: burns are expensive and debilitating injuries, compromising both the structural integrity and vascular supply of the skin. they exhibit a substantial potential to deteriorate if left untreated. jackson defined three "zones" of a burn. while the innermost coagulation zone and the outermost zone of hyperemia display generally predictable healing outcomes, the zone of stasis has been shown to be salvageable via clinical intervention. it has therefore been the focus of most acute therapies for burn injuries.
while laser doppler imaging (ldi), the current gold standard for burn analysis, has been 96% effective at predicting the need for second-degree burn excision, its clinical translation is problematic, and there is little information regarding its ability to analyze the salvage of the stasis zone in acute injury. laser-assisted indocyanine green dye angiography (laicga) also shows potential to predict such outcomes with greater clinical utility. objectives: to test the ability of ldi and laicga to predict interspace (zone of stasis) survival in a horizontal burn comb model. methods: a prospective animal experiment was performed using four pigs. each pig had a set of six dorsal burns created using a brass "comb", creating four rectangular 10 × 20 mm full-thickness burns separated by 5 × 20 mm interspaces. laicga and ldi scanning took place at 1 hour, 24 hours, 48 hours, and 1 week post-burn using the novadaq spy and moor ldi, respectively. imaging was read by a blinded investigator, and perfusion trends were compared with interspace viability and contraction. burn outcomes were read clinically and evaluated via histopathology, and interspace contraction was measured using imagej software. results: laicga data showed significant predictive potential for interspace survival: using a standardized perfusion threshold, it was 83.3% predictive at 24 hours post-burn, 75% predictive at 48 hours post-burn, and 100% predictive at 7 days post-burn. ldi imaging failed to predict outcome or contraction trends with any degree of reliability. the pattern of perfusion also appears to be correlated with the presence of significant interspace contraction at 28 days, with an 80% adherence to a power trendline. interventions, 11 isolation, 4 testing, 4 treatment, and 1 "other" category intervention were identified. one intervention involving school closures was associated with a 28% decrease in pediatric ed visits for respiratory illness.
conclusion: most interventions were not tested in isolation, so the effect of individual interventions was difficult to differentiate. interventions in all categories studied, including school closures, were associated with statistically significant decreases in ed crowding. further study and standardization of intervention input, process, and outcome measures may assist in identifying the most effective methods of mitigating ed crowding and improving surge capacity during an influenza or other respiratory disease outbreak. background: the link between extended shift lengths, sleepiness, and occupational injury or illness has been shown, in other health care populations, to be an important and preventable public health concern but heretofore has not been fully described in emergency medical services (ems). objectives: to assess the effect of an ed-based computer screening and referral intervention for ipv victims and to determine what characteristics resulted in a positive change in their safety. we hypothesized that women who were experiencing severe ipv and/or were in the contemplation or action stages would be more likely to endorse safety behaviors. methods: we conducted the intervention for female ipv victims at three urban eds using a computer kiosk to deliver targeted education about ipv and violence prevention as well as referrals to local resources. all adult english-speaking non-critically ill women triaged to the ed waiting room were eligible to participate. the validated universal violence prevention screening protocol was used for ipv screening. any woman who disclosed ipv further responded to validated questionnaires for alcohol and drug abuse, depression, and ipv severity. the women were assigned a baseline stage of change (precontemplation, contemplation, action, or maintenance) based on the urica scale for readiness to change behavior surrounding ipv.
participants were contacted at 1 week and 3 months to assess whether they had taken any of a variety of pre-determined actions, such as moving out, to prevent ipv during that period. statistical analysis (chi-square testing) was performed to compare participant characteristics with the stage of change and whether or not they took protective action. results: a total of 1,474 people were screened; 154 disclosed ipv and participated in the full survey. 53.3% of the ipv victims were in the precontemplation stage of change, and 40.3% were in the contemplation stage. 110 women returned at 1 week of follow-up (71.4%), and 63 (40.9%) returned at 3 months of follow-up. 55.5% of those who returned at 1 week, and 73% of those who returned at 3 months, took protective action against further ipv. there was no association between the various demographic characteristics and whether or not a woman took protective action. conclusion: ed-based kiosk screening and health information delivery is both a feasible and effective method of health information dissemination for women experiencing ipv. stage of change was not associated with actual ipv protective measures. objectives: we present a pilot, head-to-head comparison of x26 and x2 effectiveness in stopping a motivated person. the objective is to determine the comparative injury prevention effectiveness of the newer cew. methods: four human volunteers had metal cew probe pairs placed. each volunteer had two probe pairs placed (one pair each on the right and left of the abdomen/inguinal region). superior probes were at the costal margin, 5 inches lateral of midline. inferior probes were vertically inferior at predetermined distances of 6, 9, 12, and 16 inches apart. each volunteer was given the goal of slashing a target 10 feet away with a rubber knife during cew exposure. as a means of motivation, volunteers believed the exposure would continue until they reached the goal (in reality, the exposure was terminated once no further progress was made).
each volunteer received one exposure from an x26 and one from an x2 cew. the exposure order was randomized with a 2-minute rest between exposures. exposures were recorded on high-speed, high-resolution video. videos were reviewed and scored by six physician, kinesiology, and law officer experts using standardized criteria for effectiveness, including degree of upper extremity, lower extremity, and total body incapacitation, and degree of goal achievement. reviews were descriptively compared independently for probe spread distances and between devices. results: there were 8 exposures (4 pairs) for evaluation, with no discernible reviewer-scored differences in effectiveness between the x26 and the x2 cews. background: the trend towards higher gasoline prices over the past decade in the u.s. has been associated with higher rates of bicycle use for utilitarian trips. this shift towards non-motorized transportation should be encouraged from a physical activity promotion and sustainability perspective. however, gas-price-induced changes in travel behavior may be associated with higher rates of bicycle-related injury. increased consideration of injury prevention will be a critical component of developing healthy communities that help safely support more active lifestyles. objectives: the purpose of this analysis was to a) describe bicycle-related injuries treated in u.s. emergency departments between 1997 and 2009 and b) investigate the association between gas prices and both the incidence and severity of adult bicycle injuries. we hypothesized that as gas prices increase, adults are more likely to shift away from driving for utilitarian travel toward more economical non-motorized modes of transportation, resulting in increased risk exposure for bicycle injuries. methods: bicycle injury data for adults (16-65 years) were obtained from the national electronic injury surveillance system (neiss) database for emergency department visits between 1997-2009.
the relationship between national seasonally adjusted monthly rates of bicycle injuries, obtained by a seasonal decomposition of the time series, and average national gasoline prices, reported by the energy information administration, was examined using linear regression analysis. results: monthly rates of bicycle injuries requiring emergency care among adults increase significantly as gas prices rise (p < 0.0001; see figure). an additional 1,149 adult injuries (95% ci 963-1,336) can be predicted to occur each month in the u.s. (>13,700 injuries annually) for each $1 rise in average gasoline price. injury severity also increases during periods of high gas prices, with a higher percentage of injuries requiring admission. conclusion: increases in adult bicycle use in response to higher gas prices are accompanied by higher rates of significant bicycle-related injuries. supporting the use of non-motorized transportation will be imperative to address public health concerns such as obesity and climate change; however, resources must also be dedicated to improving bicycle-related injury care and prevention. background: this is a secondary analysis of data collected for a randomized trial of oral steroids in emergency department (ed) musculoskeletal back pain patients. we hypothesized that higher pain scores in the ed would be associated with more days out of work. objectives: to determine the degree to which days out of work for ed back pain patients are correlated with ed pain scores. methods: design: prospective cohort. setting: suburban ed with 80,000 annual visits. participants: patients aged 18-55 years with moderately severe musculoskeletal back pain from a bending or twisting injury ≤ 2 days before presentation. exclusion criteria included non-musculoskeletal etiology, direct trauma, motor deficits, and employer-initiated visits. observations: we captured initial and discharge ed visual analog pain scores (vas) on a 0-10 scale.
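the gas-price regression above reports its effect size as a slope (≈1,149 additional monthly injuries per $1 rise); that coefficient is the ordinary least-squares slope of injuries on price. a minimal self-contained sketch with synthetic monthly data (not the neiss series), chosen so the slope lands near the reported figure:

```python
# ordinary least-squares slope of injury counts on gas price
# (synthetic illustration; the study used seasonally adjusted neiss rates)
def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # slope = covariance(x, y) / variance(x)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

gas = [2.0, 2.5, 3.0, 3.5, 4.0]               # average price, $/gallon
injuries = [9000, 9600, 10100, 10750, 11300]  # synthetic monthly counts
slope = ols_slope(gas, injuries)              # extra injuries per $1 rise
```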
patients were contacted approximately 5 days after discharge and queried about days out of work. we plotted days out of work versus initial vas, discharge vas, and change in vas, and calculated correlation coefficients. using the bonferroni correction for multiple comparisons, alpha was set at 0.02. results: we analyzed 67 patients for whom complete data were available. the mean age was 40 ± 9 years and 30% were female. the average initial and discharge ed pain scores were 8.0 ± 1.5 and 5.7 ± 2.2, respectively. on follow-up, 88% of patients were back to work and 36% did not lose any days of work. for the plots of days out of work versus the initial vas, the discharge vas, and the change in vas, the correlation coefficients (r²) were 0.03 (p = 0.17), 0.08 (p = 0.04), and 0.001 (p = 0.87), respectively. conclusion: for ed patients with musculoskeletal back pain, we found no statistically significant correlation between days out of work and ed pain scores. background: conducted electrical weapons (cews) are common law enforcement tools used to subdue and repel violent subjects and, therefore, prevent further injury or violence from occurring in certain situations. the taser x2 is a new generation of cew that has the capability of firing two cartridges in a "semi-automatic" mode, and has a different electrical waveform and different output characteristics than older generation technology. there have been no data presented on the human physiologic effects of this new generation cew. objectives: the objective of this study was to evaluate the human physiologic effects of this new cew. methods: this was a prospective, observational study of human subjects. an instructor shot subjects in the abdomen and upper thigh with one cartridge, and subjects received a 10-second exposure from the device.
measured variables included: vital signs, continuous spirometry, pre- and post-exposure ecg, intra-exposure echocardiography, venous ph, lactate, potassium, ck, and troponin. results: ten subjects completed the study (median age 31.5, median bmi 29.4, 80% male). there were no important changes in vital signs or in potassium. the median increase in lactate during the exposure was 1.2, range 0.6 to 2.8. the median change in ph was −0.031, range −0.011 to 0.067. no subject had a clinically relevant ecg change, evidence of cardiac capture, or a positive troponin up to 24 hours after exposure. the median change in creatine kinase (ck) at 24 hours was 313, range −40 to 3418. there was no evidence of impairment of breathing by spirometry. baseline median minute ventilation was 14.2, which increased to 21.6 during the exposure (p = 0.05) and remained elevated at 21.6 post-exposure (p = 0.01). conclusion: we detected a small increase in lactate and decrease in ph during the exposure, and an increase in ck 24 hours after the exposure. the physiologic effects of the x2 device appear similar to previous reports for ecd devices. background: public bicycle sharing (bikeshare) programs are becoming increasingly common in the us and around the world. these programs make bicycles easily accessible for hourly rental to the public. there are currently 15 active bikeshare programs in cities in the us, and more than 30 programs are being developed in cities including new york and chicago. despite the importance of helmet use, bikeshare programs do not provide the opportunity to purchase or rent helmets. while the programs encourage helmet use, no helmets are provided at the rental kiosks. objectives: we sought to describe the prevalence of helmet use among adult users of bikeshare programs and users of personal bicycles in two cities with recently introduced bicycle sharing programs (boston, ma and washington, dc).
methods: we performed a prospective observational study of bicyclists in boston, ma and washington, dc. trained observers collected data during various times of the day and days of the week. observers recorded the sex of the bicycle operator, type of bicycle, and helmet use. all bicycles that passed a single stationary location in any direction during a period of between 30 and 90 minutes were recorded. data are presented as frequencies of helmet use by sex, type of bicycle (bikeshare or personal), time of the week (weekday or weekend), and city. logistic regression was used to estimate the odds ratio for helmet use controlling for type of bicycle, sex, day of week, and city. results: there were 43 observation periods in two cities at 36 locations. 3,073 bicyclists were observed. there were 562 (18.2%) bicyclists riding bikeshare bicycles. overall helmet use was 45.5%, although helmet use varied significantly with sex, day of use, and type of bicycle (see figure). bikeshare users were helmeted at a lower rate compared to users of personal bicycles (19.2% vs 51.4%). logistic regression, controlling for type of bicycle, sex, day of week, and city, demonstrated that bikeshare users had higher odds of riding unhelmeted (or 4.34, 95% ci 3.47-5.50). women had lower odds of riding unhelmeted (or 0.62, 0.52-0.73), while weekend riders were more likely to ride unhelmeted (or 1.32, 1.12-1.55). conclusion: use of bicycle helmets by users of public bikeshare programs is low. as these programs become more popular and prevalent, efforts to increase helmet use among users should increase. background: abusive head trauma (aht) represents one of the most severe forms of traumatic brain injury (tbi) among abused infants, with 30% mortality. young adult males account for 75% of the perpetrators. most aht prevention programs are hospital-based and reach a predominantly female audience. there are no published reports of school-based aht prevention programs to date. objectives: 1.
to determine whether a high school-based aht educational program improves students' knowledge of aht and parenting skills. 2. to evaluate the feasibility and acceptability of a school-based aht prevention program. methods: this program was based on an inexpensive, commercially available program developed by the national center on shaken baby syndrome. the program was modified to include a 60-minute interactive presentation that teaches teenagers about aht, parenting skills, and caring for inconsolable crying infants. the program was administered in three high schools in flint, michigan during spring 2011. students' knowledge was evaluated with a 17-item written test administered pre-intervention, post-intervention, and two months after program completion. program feasibility and acceptability were evaluated through interviews and surveys with flint area school social workers, parent educators, teachers, and administrators. results: in all, 342 high school students (40% male) participated. of these, 317 (92.7%) completed the pre-test and post-test, with 171 (50%) completing the two-month follow-up test. the mean pre-intervention, post-intervention, and two-month follow-up scores were 53%, 87%, and 90%, respectively. from pre-test to post-test, the mean score improved 34%, p < 0.001. this improvement was even more profound in young males, whose mean post-test score improved by 38%, p < 0.001. of the 69 participating social workers, parent educators, teachers, and administrators, 97% ranked the program as feasible and acceptable. conclusion: students participating in our program showed an improvement in knowledge of aht and parenting skills that was retained after two months. teachers, social workers, parent educators, and school administrators supported the program. this local pilot program has the potential to be implemented on a larger scale in michigan with the ultimate goal of reducing aht amongst infants.
background: fear of litigation has been shown to affect physician practice patterns, and subsequently influence patient care. the likelihood of medical malpractice litigation has previously been linked with patient and provider characteristics. one common concern is that a patient may exaggerate symptoms in order to obtain monetary payouts; however, this has never been studied. objectives: we hypothesize that patients are willing to exaggerate injuries for cash settlements and that there are predictive patient characteristics including age, sex, income, education level, and previous litigation. methods: this prospective cross-sectional study spanned june 1 to december 1, 2011 at an urban tertiary care center in philadelphia. any patient medically stable enough to fill out a survey during study investigator availability was included. two closed-ended paper surveys were administered over the research period. standard descriptive statistics were utilized to report the incidence of: patients who desired to file a lawsuit, patients who had previously filed lawsuits, and patients willing to exaggerate the truth in a lawsuit for a cash settlement. chi-square analysis was performed to determine the relationship between patient characteristics and willingness to exaggerate injuries for a cash settlement. results: of 126 surveys, 11 were excluded due to incomplete data, leaving 115 for analysis. the mean age was 39 with a standard deviation of 16, and 40% were male. the incidence of patients who had the desire to sue at the time of treatment was 9%. the incidence of patients who had filed a lawsuit in the past was 35%. of those patients, 26% had filed multiple lawsuits. fifteen percent (95% ci 9-23%) of all patients were willing to exaggerate injuries for cash settlement. sex and income were found to be statistically significant predictors of willingness to exaggerate symptoms: 22% of females vs.
4% of males were willing to exaggerate (p = 0.01), and 20% of people with income less than $100,000/yr vs. 0% of those with income over $100,000/yr were willing to exaggerate (p = 0.03). conclusion: patients at an urban tertiary care center in philadelphia admit to willingness to exaggerate symptoms for a cash settlement. willingness to exaggerate symptoms is associated with female sex and lower income. background: current data suggest that as many as 50% of patients presenting to the ed with syncope leave the hospital without a defined etiology. prior studies have suggested a prevalence of psychiatric disease as high as 26% in patients with syncope of unknown etiology. objectives: to determine whether psychiatric disease and substance abuse are associated with an increased incidence of syncope of unknown etiology. methods: a prospective, observational, cohort study of consecutive ed patients ≥18 presenting with syncope was conducted between 6/03 and 7/06. patients were queried in the ed, and charts were reviewed, for a history of psychiatric disease, use of psychiatric medication, substance abuse, and duration. data were analyzed using sas with chi-square and fisher's exact tests. results: we enrolled 519 patients who presented to the ed after syncope, 159 of whom did not have an identifiable etiology for their syncopal event. 36.5% of those without an identifiable etiology were male. 166 (32%) patients had a history of or current psychiatric disease (42% male), and 55 patients (11%) had a history of or current substance abuse (60% male). among males with psychiatric disease, 39% had an unknown etiology of their syncopal event, compared to 22% of males without psychiatric disease (p = 0.009). similarly, among all males with a history of substance abuse, 45% had an unknown etiology, as compared to 24% of males without a history of substance abuse (p = 0.01). a similar trend was not identified in elderly females with psychiatric disease (p = 0.96) or substance abuse (p = 0.19).
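the group comparisons in these abstracts are 2 x 2 chi-square tests. as a sketch, the exaggeration-by-sex result reported above (22% of women vs. 4% of men, p = 0.01) can be approximately reproduced from counts back-calculated from the rounded percentages (115 patients, 40% male, so roughly 69 women with 15 willing and 46 men with 2 willing):

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (1 df) for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # chi-square (1 df) upper tail
    return chi2, p

# counts back-calculated from the reported percentages (approximate):
# women: 15 willing, 54 not; men: 2 willing, 44 not
chi2, p = chi_square_2x2(15, 54, 2, 44)
```

with these reconstructed counts the statistic is about 6.6, giving p near 0.01, consistent with the reported value.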
however, syncope of unknown etiology was more common among both men and women under age 65 with a history of substance abuse (47%) compared to those without a history of substance abuse (27%; p = 0.01). conclusion: our results suggest that psychiatric disease and substance abuse are associated with increased incidence of syncope of unknown etiology. patients evaluated in the ed or even hospitalized with syncope of unknown etiology may benefit from psychiatric screening and possibly detoxification referral. this is particularly true in men. (originally submitted as a ''late-breaker.'') background: after discharge from an emergency department (ed), pain management often challenges parents, who significantly under-treat their children's pain. rapid patient turnover and anxiety make education about home pain treatment difficult in the ed. video education standardizes information and circumvents time constraints and literacy barriers. objectives: to evaluate the effectiveness of a 6-minute instructional video for parents that targets common misconceptions about home pain management. methods: we conducted a randomized, double-blinded clinical trial of parents of children ages 1-18 years who presented with a painful condition, were evaluated, and discharged home in june and july 2011. parents were randomized to a pain management video or an injury prevention control video. the primary outcome was the proportion of parents who gave pain medication at home. these data were recorded in a home pain diary and analyzed using a chi-square test. parents' knowledge about pain treatment was tested before, immediately following, and 2 days after intervention. mcnemar's test statistic was used to determine the odds that correct knowledge was associated with the intervention group. results: 100 parents were enrolled: 59 watched the pain education video, and 41 the control video. 72.9% completed follow-up, providing information about home pain medication use.
significantly more parents provided at least one dose of pain medication to their children after watching the educational video: 96% vs. 80% (difference 16%, 95% ci 7.8%, 31.3%). the odds the parent had correct knowledge about pain treatment significantly improved immediately following the educational video for knowledge about pain scores (p = 0.04), the effect of pain on function (p < 0.01), and pain medication misconceptions (p < 0.01). these significant differences in knowledge remained 3 days after the video intervention. conclusion: the educational video about home pain treatment viewed by parents significantly increased the proportion of children receiving pain medication at home and significantly improved knowledge about at-home pain management. videos are an efficient tool to provide medical advice to parents that improves outcomes for children. methods: this was a prospective, observational study of consecutive admitted cpu patients in a large-volume academic urban ed. cardiology attendings round on all patients and stress test utilization is driven by their recommendation. eligibility criteria included: age >18, aha low/intermediate risk, nondynamic ecgs, and normal initial troponin i. patients >75 years of age, or with a history of cad or a co-existing active medical problem, were excluded. based on prior studies and our estimated cpu census and demographic distribution, we estimated a sample size of 2,242 patients in order to detect a difference in stress utilization of 7% (two-tailed, α = 0.05, power = 0.8). we calculated a timi risk prediction score and a diamond & forrester (d&f) cad likelihood score on each patient. t-tests were used for univariate comparisons of demographics, cardiac comorbidities, and risk scores. logistic regression was used to estimate odds ratios (ors) for receiving testing based on race, controlling for insurance and either timi or d&f score. results: over 18 months, 2,451 patients were enrolled. mean age was 53 ± 12, and 54% (95% ci 52-56) were female.
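the 2,242-patient target above comes from a standard two-proportion power calculation; a sketch with illustrative baseline rates (the authors' assumed baseline rate and any attrition allowance are not stated, so the resulting n will differ from theirs):

```python
import math

def n_per_group(p1, p2):
    """Approximate per-group n for a two-sided two-proportion z test,
    alpha = 0.05, power = 0.80."""
    z_a = 1.959964  # z quantile for alpha/2 = 0.025
    z_b = 0.841621  # z quantile for power = 0.80
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# illustrative rates differing by 7 percentage points (assumed, not the authors')
n = n_per_group(0.55, 0.48)
```

with these assumed rates the formula gives roughly 800 per group; the study's larger target likely reflects its own baseline and census assumptions.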
sixty percent (95% ci 58-62) were caucasian, 12% (95% ci 10-13) african american, and 24% (95% ci 23-26) hispanic. mean timi and d&f scores were 0.5 (95% ci 0.5-0.6) and 38% (95% ci 37-39). the overall stress testing rate was 52% (95% ci 50-54). after controlling for insurance status and timi or d&f scores, african american patients had significantly decreased odds of stress testing (or timi 0.67 (95% ci 0.52-0.88), or d&f 0.68 (95% ci 0.51-0.89)). hispanics had significantly decreased odds of stress testing in the model controlling for d&f (or d&f 0.78 (95% ci 0.63-0.98)). conclusion: this study confirms that disparities in the workup of african american patients in the cpu are similar to those found in the general ed and the outpatient setting. further investigation into the specific provider or patient level factors contributing to this bias is necessary. the outcomes for hf and copd were sae 11.6%, 7.8%; death 2.3%, 1.0%. we found univariate associations with sae for these walk test components: too ill to walk (both hf, copd p < 0.0001); highest heart rate ≥110 (hf p = 0.02, copd p = 0.10); lowest sao2 < 88% (hf p = 0.42, copd p = 0.63); borg score ≥5 (hf p = 0.47, copd p = 0.52); walk test duration ≤1 minute (hf p = 0.07, copd p = 0.22). after adjustment for multiple clinical covariates with logistic regression analyses, we found ''walk test heart rate ≥110'' had an odds ratio of 1.9 for hf patients and ''too ill to start the walk test'' had an odds ratio of 3.5 for copd patients. conclusion: we found the 3-minute walk test to be easy to administer in the ed and that maximum heart rate and inability to start the test were highly associated with adverse events in patients with exacerbations of hf and copd, respectively. we suggest that the 3-minute walk test be routinely incorporated into the assessment of hf and copd patients in order to estimate risk of poor outcomes.
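several abstracts above report odds ratios. as a sketch, the helmet study's unadjusted bikeshare odds ratio can be approximately reproduced from counts back-calculated from its rounded percentages (562 bikeshare riders, 19.2% helmeted; 2,511 personal-bicycle riders, 51.4% helmeted); the published 4.34 is covariate-adjusted, so it differs slightly:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% Wald CI from a 2x2 table.

    rows: exposed (a = event, b = no event), unexposed (c = event, d = no event)
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# counts back-calculated from rounded percentages (approximate):
# bikeshare: 454 unhelmeted, 108 helmeted; personal: 1220 unhelmeted, 1291 helmeted
or_, lo, hi = odds_ratio_ci(454, 108, 1220, 1291)
```

the reconstructed unadjusted estimate is about 4.45 (95% ci roughly 3.6-5.6), close to the adjusted value reported.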
objectives: the objective of this study was to investigate differences in consent rates between patients of different demographic groups who were invited to participate in minimal-risk clinical trials conducted in an academic emergency department. methods: this descriptive study analyzed prospectively collected data of all adult patients who were identified as qualified participants in ongoing minimal-risk clinical trials. these trials were selected for this review because they presented minimal factors known to be associated background: increasing rates of patient exposure to computed tomography (ct) raise questions about appropriateness of utilization, as well as patient awareness of radiation exposure. despite rapid increases in ct utilization and published risks, there is no national standard to employ informed consent prior to radiation exposure from diagnostic ct. use of written informed consent for ct (icct) in our ed has increased patient understanding of the risks, benefits, and alternatives to ct imaging. our team has developed an adjunct video educational module (vem) to further educate ed patients about the ct procedure. objectives: to assess patient knowledge and preferences regarding diagnostic radiation before and after viewing the vem. methods: the vem was based on the icct currently utilized at our tertiary care ed (census 37,000 patients/year). the icct is written at an 8th grade reading level. this fall, vem/icct materials were presented to a convenience sample of patients in the ed waiting room 9 am-7 pm, monday-sunday. patients who were <18 years of age, critically ill, or with a language barrier were excluded. to quantify the educational value of the vem, a six-question pre-test was administered to assess baseline understanding of ct imaging. the patients then watched the vem via ipad (macintosh) and reviewed the consent form. an eight-question post-test was then completed by each subject. no phi were collected.
pre- and post-test results were analyzed using mcnemar's test for individual questions and a paired t-test for the summed score (sas version 9.2). results: 100 patients consented and completed the survey. the average pre-test score for subjects was poor (66% correct). review of vem/icct materials increased patient understanding of medical radiation, as evidenced by an improved post-test score of 79%. mean improvement between tests was 13% (p < 0.0001). 78% of subjects responded that they found the materials helpful, and that they would like to receive icct. conclusion: the addition of a video educational module improved patient knowledge regarding ct imaging and medical radiation as quantified by pre- and post-testing. patients in our study sample reported that they prefer to receive icct. by educating patients about the risks associated with ct imaging, we increase informed, shared decision making, an essential component of patient-centered care. objectives: we sought to determine the relationship between patients' pain scores and their rate of consent to ed research. we hypothesized that patients with higher pain scores would be less likely to consent to ed research. methods: retrospective observational cohort study of potential research subjects in an urban academic hospital ed with an average annual census of approximately 70,000 visits. subjects were adults older than 18 years with a chief complaint of chest pain within the last 12 hours, making them eligible for one of two cardiac biomarker research studies. the studies required only blood draws and did not offer compensation. two reviewers extracted data from research screening logs. patients were grouped according to pain score at triage, pain score at the time of approach, and improvement in pain score (triage score - approach score). the main outcome was consent to research. simple proportions for consent rates by pain score tertiles were calculated.
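the paired pre/post item analyses mentioned above use mcnemar's test, which depends only on the discordant pairs (subjects who changed answer); a minimal sketch with hypothetical counts:

```python
import math

def mcnemar(b, c):
    """McNemar chi-square for paired binary outcomes.

    b = wrong before, right after; c = right before, wrong after.
    Concordant pairs do not enter the statistic.
    """
    chi2 = (b - c) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))  # chi-square (1 df) upper tail
    return chi2, p

# hypothetical discordant counts: 30 subjects improved, 8 worsened
chi2, p = mcnemar(30, 8)
```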
two multivariate logistic regression analyses were performed with consent as the outcome and age, race, sex, and triage or approach pain score as predictors. results: overall, 396 potential subjects were approached for consent. patients were 58% caucasian and 49% female, with an average age of 57 years. six patients did not have pain scores recorded at all, and 48 did not have scores documented within 2 hours of approach, and were excluded from the relevant analyses. overall, 80.1% of patients consented. consent rates by tertiles at triage, at time of approach, and by pain score improvement are shown in tables 1 and 2. after adjusting for age, race, and sex, neither triage (p = 0.75) nor approach (p = 0.65) pain scores predicted consent. conclusion: research enrollment is feasible even in ed patients reporting high levels of pain. patients with modest improvements in pain levels may be more likely to consent. future research should investigate which factors influence patients' decisions to participate in ed research. conclusion: in this multicenter study of children hospitalized with bronchiolitis, neither specific viruses nor their viral load predicted the need for cpap or intubation, but young age, low birth weight, presence of apnea, severe retractions, and oxygen saturation <85% did. we also identified that children requiring cpap or intubation were more likely to have mothers who smoked during pregnancy and a rapid respiratory worsening. mechanistic research in these high-risk children may yield important insights for the management of severe bronchiolitis. brigham & women's hospital, boston, ma. background: siblings and children who share a home with a physically abused child are thought to be at high risk for abuse. however, rates of injury in these children are unknown. disagreements between medical and child protective services professionals are common and screening is highly variable.
objectives: our objective was to measure the rates of occult abusive injuries detected in contacts of abused children using a common screening protocol. methods: this was a multi-center, observational cohort study of 20 child abuse teams who shared a common screening protocol. data were collected between jan 15, 2010 and april 30, 2011 for all children <10 years undergoing evaluation for physical abuse and their contacts. for contacts of abused children, the protocol recommended physical examination for all children <5 years, skeletal survey and physical exam for children <24 months, and physical exam, skeletal survey, and neuroimaging for children <6 months old. results: among 2,825 children evaluated for abuse, 618 met criteria as ''physically abused'' and these had 477 contacts. for each screening modality, screening was completed as recommended by the protocol in approximately 75% of cases. of 134 contacts who met criteria for skeletal survey, new injuries were identified in 16 (12.0%). none of these fractures had associated findings on physical examination. physical examination identified new injuries in 6.2% of eligible contacts. neuroimaging failed to identify new injuries among 25 eligible contacts less than 6 months old. twins were at significantly increased risk of fracture relative to other non-twin contacts (or 20.1). conclusion: these results support routine skeletal survey for contacts of physically abused children <24 months old, regardless of physical examination findings. even for children where no injuries are identified, these results demonstrate that abuse is common among children who share a home with an abused child, and support including contacts in interventions (foster care, safety planning, social support) designed to protect physically abused children. methods: this was a retrospective study evaluating all children presenting to eight paediatric, university-affiliated eds during one year in 2010-2011.
in each setting, information regarding triage and disposition was prospectively registered by clerks in the ed database. anonymized data were retrieved from the ed computerized database of each participating centre. in the absence of a gold standard for triage, hospitalisation, admission to the intensive care unit (icu), length of stay in the ed, and the proportion of patients who left without being seen by a physician (lwbs) were used as surrogate markers of severity. the primary outcome measure was the association between triage level (from 1 to 5) and hospitalisation. the association between triage level and dichotomous outcomes was evaluated by a chi-square test, while a student's t-test was used to evaluate the association between triage level and length of stay. it was estimated that the evaluation of all children visiting these eds for a one-year period would provide a minimum of 1,000 patients in each triage level and at least 10 events for outcomes having a proportion of 1% or more. results: a total of 404,841 children visited the eight eds during the study period. pooled data demonstrated hospitalisation proportions of 59%, 30%, 10%, 2%, and 0.5% for patients triaged at levels 1, 2, 3, 4, and 5 respectively (p < 0.001). there was also a strong association between triage levels and admission to the icu (p < 0.001), the proportion of children who lwbs (p < 0.001), and length of stay (p < 0.001). background: parents frequently leave the emergency department (ed) with incomplete understanding of the diagnosis and plan, but the relationship between comprehension and post-care outcomes has not been well described. objectives: to explore the relationship between comprehension and post-discharge medication safety. methods: we completed a planned secondary analysis of a prospective observational study of the ed discharge process for children aged 2-24 months.
after discharge, parents completed a structured interview to assess comprehension of the child's condition, the medical team's advice, and the risk of medication error. limited understanding was defined as a score of 3-5 on a scale from 1 (excellent) to 5 (poor). risk of medication error was defined as a plan to use over-the-counter cough/cold medication and/or an incorrect dose of acetaminophen (measured by direct observation at discharge or reported dose at follow-up call). parents identified as at risk received further instructions from their provider. the primary outcome was persistent risk of medication error assessed at phone interview 5-10 days post-discharge. background: a major barrier to administering analgesics to children is the perceived discomfort of intravenous access. the delivery of intranasal analgesia may be a novel solution to this problem. objectives: we investigated whether the addition of the mucosal atomizer device (mad) as an alternative for fentanyl delivery would improve overall fentanyl administration rates in pediatric patients transported by a large urban ems system. methods: we performed a historical control trial comparing the rate of pediatric fentanyl administration 6 months before and 6 months after the introduction of the mad. study subjects were pediatric trauma patients (age <16 years) transported by a large urban ems agency. the control group was composed of patients treated in the 6 months before introduction of the mad. the experimental group included patients treated in the 6 months after the addition of the mad. two physicians reviewed each chart and determined whether the patient met predetermined criteria for the administration of pain medication. a third reviewer resolved any discrepancies. fentanyl administration rates were measured and compared between the two groups. we used two-sample t-tests and chi-square tests to analyze our data. results: 228 patients were included in the study: 137 patients in the pre-mad group and 91 in the post-mad group.
there were no significant differences in the demographic and clinical characteristics of the two groups. 42 (30.4%) patients in the control arm received fentanyl. 34 (37.8%) patients in the experimental arm received fentanyl, with 36% of those patients receiving fentanyl via the intranasal route. the addition of the mad was not associated with a statistically significant increase in analgesic administration. age and mechanism of injury were statistically more predictive of analgesia administration. conclusion: while the addition of the mucosal atomizer device as an alternative delivery method for fentanyl shows a trend towards increased analgesic administration in a prehospital pediatric population, age and mechanism of injury are more predictive of who receives analgesia. further research is necessary to investigate the effect of the mad on pediatric analgesic delivery. methods: this was a prospective study evaluating php-se before (pre) and after (post) a ppp introduction and 13 months later (13-mo). php groups received either ppp review and education or ppp review alone. the ppp included a pain assessment tool. the se tool, developed and piloted by pediatric ems experts, uses a ranked ordinal scale ranging from 'certain i cannot do it' (0) to 'completely certain i can do it' (100) for 10 items: pain assessment (3 items), medication administration (4) and dosing (1), and reassessment (2). all 10 items and an averaged composite were evaluated for three age groups (adult, child, toddler). paired sample t-tests compared post- and 13-mo scores to pre-ppp scores. results: of 264 phps who completed initial surveys, 146 phps completed 13-mo surveys. 106 (73%) received education and ppp review and 40 (27%) review only. ppp education did not affect php-se (adult p = 0.87, child p = 0.69, toddler p = 0.84). the largest se increase was in pain assessment. this increase persisted for child and toddler groups at 13 months.
the immediate increase in composite se scores for all age groups persisted for the toddler group at 13 months. conclusion: increases in composite and pain assessment php-se occur for all age groups immediately after ppp introduction. the increase in pain assessment se persisted at 13 months for pediatric age groups. the composite se increase persisted for the toddler age group alone. background: pediatric medications administered in the prehospital setting are given infrequently and dosage may be prone to error. calculation of dose based on known weight or with use of length-based tapes occurs even less frequently and may present a challenge in terms of proper dosing. objectives: to characterize dosing errors based on weight-based calculations in pediatric patients in two similar emergency medical service (ems) systems. methods: we studied the five most commonly administered medications given to pediatric patients weighing 36 kg or less. drugs studied were morphine, midazolam, epinephrine 1:10,000, epinephrine 1:1000, and diphenhydramine. cases from the electronic record were studied for a total of 19 months, from january 2010 to july 2011. each drug was administered via intravenous, intramuscular, or intranasal routes. drugs that were permitted to be titrated were excluded. an error was defined as greater than 25% above or below the recommended mg/kg dosage. results: out of 248,596 total patients, 13,321 were pediatric patients. 7,885 had documented weights of <36 kg and 241 patients were given these medications. we excluded 72 patients for weight above the 97th percentile or below the 3rd percentile, or if the weight documentation was missing. of the 169 patients and 187 doses, errors were noted in 53 (28%; 95% ci 22%, 35%). midazolam was the most common drug in errors (29 of 53 doses or 55%; 95% ci 40%, 68%), followed by diphenhydramine (11/53 or 21%; 95% ci 11%, 34%), epinephrine (7/53 or 13%; 95% ci 5%, 25%), and morphine sulfate (6/53 or 11%; 95% ci 4%, 23%).
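the error definition above (a dose more than 25% above or below the weight-based target) can be expressed directly; the 0.1 mg/kg reference dose below is purely illustrative, not taken from the abstract:

```python
def is_dosing_error(given_mg, weight_kg, rec_mg_per_kg, tolerance=0.25):
    """Flag a dose deviating more than 25% from the weight-based target."""
    target = rec_mg_per_kg * weight_kg
    return abs(given_mg - target) > tolerance * target

# hypothetical example: reference dose 0.1 mg/kg for a 20 kg child -> target 2.0 mg
under = is_dosing_error(1.0, 20, 0.1)   # 1.0 mg is more than 25% below 2.0 mg
ok = is_dosing_error(2.2, 20, 0.1)      # 2.2 mg is within 25% of 2.0 mg
```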
underdosing was noted in 34 of 53 (64%; 95% ci 50%, 77%) of errors, while excessive dosing was noted in 19 of 53 (36%; 95% ci 23%, 50%). conclusion: weight-based dosing errors in pediatric patients are common. while the clinical consequences of drug dosing errors in these patients are unknown, a considerable amount of inaccuracy occurs. strategies beyond provision of reference materials are needed to prevent pediatric medication errors and reduce the potential for adverse outcomes. background: homelessness affects up to 3.5 million people a year. the homeless present more frequently to eds, their ed visits are four times more likely to occur within 3 days of a prior ed evaluation, and they are admitted up to five times more frequently than others. we evaluated the effect of a street outreach rapid response team (sorrt) on the health care utilization of a homeless population. a nonmedical outreach staff member responds to the ed and intensively case-manages the patient, arranging primary care follow-up, social services, temporary housing opportunities, and drug/alcohol rehabilitation services. objectives: we hypothesized that this program would decrease the ed visits and hospital admissions of this cohort of patients. methods: before-and-after study at an urban teaching hospital in indianapolis, indiana, from june 2010 to december 2011. upon identification of homeless status, sorrt was immediately notified. eligibility for sorrt enrollment is determined by housing and urban development homeless criteria, and the outreach staff attempted to enter all such identified patients into the program. the patients' health care utilization was evaluated in the 6 months prior to program entry as compared to the 6 months after enrollment, by prospectively collecting data and a retrospective medical record query for any unreported visits. since the data were highly skewed, we used the nonparametric signed rank test to test for paired differences between periods.
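the nonparametric signed rank test mentioned above is the wilcoxon signed-rank test; a minimal sketch of the statistic on hypothetical paired visit counts (the patient-level data are not reported):

```python
def wilcoxon_w(pre, post):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired data.

    Zero differences are dropped; tied absolute differences share
    their average rank (the standard convention).
    """
    diffs = [a - b for a, b in zip(pre, post) if a != b]
    abs_sorted = sorted(abs(d) for d in diffs)

    def avg_rank(value):
        first = abs_sorted.index(value) + 1
        last = first + abs_sorted.count(value) - 1
        return (first + last) / 2

    w_pos = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    w_neg = sum(avg_rank(abs(d)) for d in diffs if d < 0)
    return min(w_pos, w_neg)

# hypothetical paired 6-month ed visit counts for five patients
pre_visits = [6, 8, 4, 10, 3]
post_visits = [5, 10, 4, 12, 2]
w = wilcoxon_w(pre_visits, post_visits)
```

the statistic is then compared against the exact null distribution (or a normal approximation for larger n) to get the p-value.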
results: 22 patients met criteria but two refused participation. the 20-patient cohort had 388 total ed visits (175 pre and 213 post), with a mean of 8.8 (sd 10.1) and median of 6.5 (range 1-44) ed visits in the 6 months pre-sorrt, as compared to a mean of 10.7 (sd 19.5) and median of 5.0 (range 0-90) in the 6 months post-sorrt (p = 0.815). there were 28 total inpatient admissions pre-intervention and 27 post-intervention, with a mean of 1.4 (sd 2.0) and median of 0.5 (range 0-7) per patient in the pre-intervention period, as compared to 1.4 (sd 1.9) and 1.0 (range 0-6) in the post-intervention period (p = 0.654). in the pre-sorrt period, 50.0% had at least one inpatient admission as compared to 55.0% post-sorrt (p = 1.00). there were no differences in icu days or overall length of stay between the two periods. conclusion: an aggressive case management program beginning immediately with homeless status recognition in the ed has not demonstrated success in decreasing utilization in our population. methods: this was a secondary analysis of a prospective randomized trial that included consenting patients discharged with outpatient antibiotics from an urban county ed with an annual census of 100,000. patients unable to receive text messages or voice-mails were excluded. health literacy was assessed using a validated health literacy assessment, the newest vital sign (nvs). patients were randomized to a discharge instruction modality: 1) standard care, typed and verbal medication and case-specific instructions; 2) standard care plus text-messaged instructions sent to the patient's cell phone; or 3) standard care plus voice-mailed instructions sent to the patient's cell phone. patients were called at 30 days to determine preference for instruction delivery modality. preference for discharge instruction modality was analyzed using z-tests for proportions. results: 758 patients were included (55% female, median age 30, range 5 months to 71 years); 98 were excluded.
23% had an nvs score of 0-1, 31% 2-3, and 46% 4-6. among the 51.1% of participants reached at 30 days, 26% preferred a modality other than written. there was a difference in the proportion of patients who preferred discharge instructions in written plus another modality (see table). with the exception of written plus another modality, patient preference was similar across all nvs score groups. conclusion: in this sample of urban ed patients, more than one in four patients prefer non-traditional (text message, voice-mail) modalities of discharge instruction delivery to the standard care (written) modality alone. additional research is needed to evaluate the effect of instructional modality on accessibility and patient compliance. (see figure). conclusion: cumulative saps ii scoring fails to predict mortality in ohca. the risk scores assigned to age, gcs, and hco3 independently predict mortality and combined are good mortality predictors. these findings suggest that an alternative severity of illness score should be used in post-cardiac arrest patients. future studies should determine optimal risk scores of saps ii variables in a larger cohort of ohca. objectives: to determine the extent to which cpp recovers to pre-pause levels with 20 seconds of cpr after a 10-second interruption in chest compressions for ecg rhythm analysis. methods: this was a secondary analysis of prospectively collected data from an iacuc-approved protocol. forty-two yorkshire swine (weighing 25-30 kg) were instrumented under anesthesia. vf was electrically induced. after 12 minutes of untreated vf, cpr was initiated and a standard dose of epinephrine (sde) (0.01 mg/kg) was given. after 2.5 minutes of cpr to circulate the vasopressor, compressions were interrupted for 10 seconds to analyze the ecg rhythm. this was immediately followed by 20 seconds of cpr to restore cpp before the first rescue shock (rs) was delivered.
if the rs failed, cpr resumed and additional vasopressors (sde, and vasopressin 0.57 mg/kg) were given and the sequence repeated. the cpp was defined as aortic diastolic pressure minus right atrial diastolic pressure. the cpp values were extracted at three time points: immediately after the 2.5 minutes of cpr, following the 10-second pause, and immediately before defibrillation for the first two rs attempts in each animal. eighty-three sets of measurements were logged from 42 animals. descriptive statistics were used to analyze the data. background: in most cities, the proportion of patients who achieve prehospital return of spontaneous circulation (rosc) is less than 10%. the association between time of day and ohca outcomes in the prehospital setting is unknown. objectives: we sought to determine whether rates of prehospital rosc varied by time of day. we hypothesized that night ohcas would exhibit lower rates of rosc. methods: we performed a retrospective review of cardiac arrest data from a large, urban ems system. included were all ohcas occurring in individuals >18 years of age from 1/1/2008 to 12/31/2010. excluded were traumatic arrests and cases where resuscitation measures were not performed. day was defined as 7:00 am-6:59 pm, while night was 7:00 pm-6:59 am. we examined the association between time of day and paramedic-perceived prehospital rosc in unadjusted and adjusted analyses. variables included age, sex, race, presenting rhythm, aed application by a bystander or first responder, defibrillation, and bystander cpr performance. analyses were performed using chi-square tests and logistic regression. objectives: to determine whether a smei helps to improve physician compliance with the ihi bundle and reduce patient mortality in ed patients with s&s. methods: we conducted a pre-smei retrospective review of four months of ed patients with s&s to determine baseline pre-smei physician compliance and patient mortality.
we designed and completed a smei attended by 25 of 28 ed attending physicians and 28 of 30 ed resuscitation residents. finally, we conducted a twenty-month post-smei prospective study of ongoing physician compliance and patient mortality in ed patients with s&s. results: in the four-month pre-smei retrospective review, we identified 23 patients with s&s, with a 61% physician overall compliance and mortality rate of 30%. the average ed physician smei multiple-choice pre-test score was 74%, which improved significantly to a post-test score of 94% (p = 0.0003). additionally, 87% of ed physicians were able to describe three new clinical pearls learned and 85% agreed that the smei would improve compliance. in the twenty months of the post-smei prospective study, we identified 144 patients with s&s, with a 75% physician overall compliance, and mortality rate of 21%. relative physician compliance improved 23% (p = 0.0001) and relative patient mortality was reduced by 32% (p < 0.0001) when comparing pre- and post-smei data. conclusion: our data suggest that a smei improves overall physician compliance with the six-hour goals of the ihi bundle and reduces patient mortality in ed patients with s&s. conclusion: using a population-level, longitudinal, and multi-state analysis, the rate of return visits within 3 days is higher than previously reported, with nearly 1 in 12 returning to the ed. we also provide the first estimation of health care costs for ed revisits. background: the ability of patients to accurately determine their level of urgency is important in planning strategies that divert away from eds. in fact, an understanding of patient self-triage abilities is needed to inform health policies targeting how and where patients access acute care services within the health care system. objectives: to determine the accuracy of a patient's self-assessment of urgency compared with triage nurses. 
methods: setting: ed patients are assigned a score by trained nurses according to the canadian emergency department triage and acuity scale (ctas). we present a cross-sectional survey of a random patient sample from 12 urban/regional eds conducted during the winters of 2007 and 2009. this previously validated questionnaire, based on the british healthcare commission survey, was distributed according to a modified dillman protocol. exclusion criteria consisted of: age 0-15 years, left prior to being seen/treated, died during ed visit, no contact information, presented with a privacy-sensitive case. alberta health services provided linked non-survey administrative data. results: 21,639 surveys were distributed, with a response rate of 46%. patients rated health problems as life-threatening (6%), possibly life-threatening (22%), urgent (30%), somewhat urgent (37%), or not urgent (5%). triage nurses assigned the same patients ctas scores of i (<1%), ii (20%), iii (45%), iv (29%) or v (5%). patients self-rated their condition as 3 or 4 points less urgent than the assigned ctas score (<1% of the time), 2 points less urgent (5%), 1 point less urgent (25%), exactly as urgent (38%), 1 point more urgent (24%), 2 points more urgent (7%), or 3 or 4 points more urgent (1%). among ctas i or ii patients, 54% described their problem as life-threatening/possibly life-threatening, 26% as urgent (risk of permanent damage), 18% as urgent (needed to be seen that day), and 2% as not urgent (wanted to be but did not need to be seen that day). conclusion: the majority of ed patients are generally able to accurately assess the acuity of their problem. encouraging patients with low-urgency conditions to self-triage to lower-acuity sources of care may relieve stress on eds. however, physicians and patients must be aware that a small minority of patients are unable to self-triage safely. 
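the conclusion that most patients accurately assess their acuity follows from summing the reported distribution of self-rating minus ctas differences: 38% agreed exactly, and agreement within one triage level totals 87%. a minimal python sketch of that arithmetic (the "<1%" and "1%" tails for 3-4 point differences are split evenly here, an assumption for illustration only):

```python
# Reported distribution (percent) of (patient self-rating - nurse CTAS score).
# The combined "<1%" and "1%" tails are split as 0.5/0.5 -- an assumption.
diff_pct = {-4: 0.5, -3: 0.5, -2: 5, -1: 25, 0: 38, 1: 24, 2: 7, 3: 0.5, 4: 0.5}

exact_agreement = diff_pct[0]                                      # 38%
within_one_level = sum(v for k, v in diff_pct.items() if abs(k) <= 1)  # 87%
```

the 87% within-one-level figure is what supports the "majority ... generally able to accurately assess" claim.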
when the tourniquet was released, blood spurted from the injured artery as hydrostatic pressure decayed. pressure and flow were recorded in three animals (see table) . the concept was proof-tested in a single fresh frozen human cadaver with perfusion through the femoral artery and hemorrhage from the popliteal artery. the results were qualitatively and quantitatively similar to the swine carcass model. conclusion: a perfused swine carcass can simulate exsanguinating hemorrhage for training purposes and serves as a prototype for a fresh-frozen human cadaver model. additional research and development are required before the model can be widely applied. background: in the pediatric emergency department (ped), clinicians must work together to provide safe and effective care. crisis resource management (crm) principles have been used to improve team performance in high-risk clinical settings, while simulation allows practice and feedback of these behaviors. objectives: to develop a multidisciplinary educational program in a ped using simulation-enhanced teamwork training to standardize communication and behaviors and identify latent safety threats. methods: over 6 months a workgroup of physicians and nurses with experience in team training and simulation developed an educational program for clinical staff of a tertiary ped. goals included: create a didactic curriculum to teach the principles of crm, incorporate principles of crm into simulation-enhanced team training in-situ and center-based exercises, and utilize assessment instruments to evaluate for teamwork, completion of critical actions, and presence of latent safety threats during in-situ sim resuscitations. results: during phase i, 130 clinicians, divided into teams, participated in 90-minute pre-training assessments of pals-based in-situ simulations. in phase ii, staff participated in a 6-hour curriculum reviewing key crm concepts, including team training exercises utilizing simulation and expert debriefing. 
in phase iii, staff participated in post-training 90-minute teamwork and clinical skills assessments in the ped. in all phases, critical action checklists (cac) were tabulated by simulation educators. in-situ simulations were recorded for later review using the assessment tools. after each simulation, educators facilitated discussion of perceptions of teamwork and identification of systems issues and latent hazards. overall, 54 in-situ simulations were conducted capturing 97% of the physicians and 84% of the nurses. cac data were collected by an observer and compared to video recordings. over 20 significant systems issues, latent hazards, and knowledge deficits were identified. all components of the program were rated highly by 90% of the staff. conclusion: a workgroup of pem, simulation, and team training experts developed a multidisciplinary team training program that used in-situ and center-based simulation and a refined crm curriculum. unique features of this program include its multidisciplinary focus, the development of a variety of assessment tools, and use of in-situ simulation for evaluation of systems issues and latent hazards. this program was tested in a ped and findings will be used to refine care and develop a sustainment program while addressing issues identified. objectives: our hypothesis is that participants trained on high-fidelity mannequins will perform better than participants trained on low-fidelity mannequins on both the acls written exam and in performance of critical actions during megacode testing. the study was performed in the context of an acls initial provider course for new pgy1 residents at the penn medicine clinical simulation center and involved three training arms: 1) low fidelity (low-fi): torso-rhythm generator; 2) mid-fidelity (mid-fi): laerdal simman® turned off; and 3) high-fidelity (high-fi): laerdal simman® turned on. training in each arm of the study followed standard aha protocol. 
educational outcomes were evaluated by scores on the acls written examination and expert rater reviews of acls megacode videos performed by trainees during the course. a sample of 54 subjects was randomized to one of the three training arms: low-fi (n = 18), mid-fi (n = 18), or high-fi (n = 18). results: statistical significance across the groups was determined using analysis-of-variance (anova). the three groups had similar written pre-test scores [low-fi 0.4 (0.1), mid-fi 0.5 (0.1), and high-fi 0.4 (0.2)] and written post-test scores [low-fi 0.9 (0.1), mid-fi 0.9 (0.1), and high-fi 0.8 (0.1)]. similarly, test improvement was not significantly different. after completion of the course, high-fi subjects were more likely to report they felt comfortable in their simulator environment (p = 0.005). low-fi subjects were less likely to perceive a benefit in acls training from high-fi technology (p < 0.001). acls instructors were not rated significantly different by the subjects using the debriefing assessment for simulation in healthcare (dash) student version except for element 6, where the high-fi group subjects reported lower scores (6.1 vs 6.6 and 6.7 in the other groups, p = 0.046). objectives: we sought to determine if stress associated with the performance of a complex procedural task can be affected by level of medical training. heart rate variability (hrv) is used as a measure of autonomic balance, and therefore an indicator of the level of stress. methods: twenty-one medical students and emergency medicine residents were enrolled. participants performed airway procedures on an airway management trainer. hrv data were collected using a continuous heart rate variability monitoring system. participant hrv was monitored at baseline, during the unassisted first attempt at endotracheal intubation, during supervised practice, and then during a simulated respiratory failure clinical scenario. 
standard deviation of beat-to-beat variability (sdnn), very low frequency (vlf), total power (tp), and low frequency (lf) were analyzed to determine the effect of practice and level of training on the level of stress. cohen's d was used to quantify differences between study groups. results: sdnn data showed that second-year residents were less stressed during all stages than were fourth-year medical students (avg d = 1.12). vlf data showed third-year residents exhibited less sympathetic activity than did first-year residents (avg d = -0.68). the opportunity to practice resulted in less stress for all participants. tp data showed that residents had a greater degree of control over their autonomic nervous system (ans) than did medical students (avg d = 0.85). lf data showed that subjects were more engaged in the task at hand as the level of training increased, indicating autonomic balance (avg d = 0.80). conclusion: our hrv data show that stress associated with the performance of a complex procedural task is reduced by increased training. hrv may provide a quantitative measure of physiologic stress during the learning process and thus serve as a marker of when a subject is adequately trained to perform a particular task. objectives: we seek to examine whether intubation during cpr can be done as efficiently as intubation without ongoing cpr. the hypothesis is that the predictable movement of an automated chest compression device will make intubation easier than the random movement from manual cpr. methods: the project was an experimental controlled trial and took place in the emergency department at a tertiary referral center in peoria, illinois. emergency medicine residents, attendings, paramedics, and other acls-trained staff were eligible for participation. 
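the "avg d" values reported above are cohen's d effect sizes: the difference in group means divided by a pooled standard deviation. a minimal python sketch of the standard pooled-sd formulation (the toy data below are not the study's hrv values; they are chosen so the arithmetic is easy to follow):

```python
import math

def cohens_d(x: list[float], y: list[float]) -> float:
    """Cohen's d: mean difference over the pooled (sample) standard deviation."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variance of x
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)   # sample variance of y
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

# Toy data: mean difference 1, pooled SD 2 -> d = 0.5
print(cohens_d([2.0, 4.0, 6.0], [1.0, 3.0, 5.0]))
```

by convention, d around 0.8 or larger (as in the tp and lf comparisons above) is considered a large effect.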
in randomized order, each participant attempted intubation on a mannequin with no cpr ongoing, during cpr with a human compressor, and during cpr with an automatic chest compression device (physio-control lucas 2). participants could use whichever style laryngoscope they felt most comfortable with and they were timed during the three attempts. success was determined after each attempt. results: there were 43 participants in the trial. the success rates in the control group and the automated cpr group were both 88% (38/43) and the success rate in the manual cpr group was 74% (32/43). the differences in success rates were not statistically significant (p = 0.99 and p = 0.83). the automated cpr group had the fastest average time (13.6 sec; p = 0.019). the mean times for intubation with manual cpr and no cpr were not statistically different (17.1 sec, 18.1 sec; p = 0.606). conclusion: the success rate of tracheal intubation with ongoing chest compression was the same as the success rate of intubation without cpr. although intubation with automatic chest compression was faster than during other scenarios, all methods were close to the 10-second timeframe recommended by acls. based on these findings, it may not always be necessary to hold cpr to place a definitive airway; however, further studies will be needed. background: after acute myocardial infarction, vascular remodeling in the peri-infarct area is essential to provide adequate perfusion, prevent additional myocyte loss, and aid in the repair process. we have previously shown that endogenous fibroblast growth factor 2 (fgf2) is essential to the recovery of contractile function and limitation of infarct size after cardiac ischemia-reperfusion (ir) injury. the role of fgf2 in vascular remodeling in this setting is currently unknown. objectives: determine the role of endogenous fgf2 in vascular remodeling in a clinically relevant, closed-chest model of acute myocardial infarction. 
methods: mice with a targeted ablation of the fgf2 gene (fgf2 knockout) and wild type controls were subjected to a closed-chest model of regional cardiac ir injury. in this model, mice were subjected to 90 minutes of occlusion of the left anterior descending artery followed by reperfusion for either 1 or 7 days. immunofluorescence was performed on multiple histological sections from these hearts to visualize capillaries (endothelium, anti-cd31 antibody), larger vessels (venules and arterioles, anti-smooth muscle actin antibody), and nuclei (dapi). digital images were captured, and multiple images from each heart were measured for vessel density and vessel size. results: sham-treated fgf2 knockout and wild type mice show no differences in capillary or vessel density suggesting no defect in vessel formation in the absence of endogenous fgf2. when subjected to closed-chest regional cardiac ir injury, fgf2 knockout hearts had normal capillary and vessel number and size in the peri-infarct area after 1 day of reperfusion compared to wild type controls. however, after 7 days, fgf2 knockout hearts showed significantly decreased capillary and vessel number and increased vessel size compared to wild type controls (p < 0.05). conclusion: these data show the necessity of endogenous fgf2 in vascular remodeling in the peri-infarct zone in a clinically relevant animal model of acute myocardial infarction. these findings may suggest a potential role for modulation of fgf2 signaling as a therapeutic intervention to optimize vascular remodeling in the repair process after myocardial infarction. the diagnosis of aortic dissections by ed physicians is rare. scott m. alter, barnet eskin, john r. allegra; morristown medical center, morristown, nj. background: aortic dissection is a rare event. 
the most common symptom of dissection is chest pain, but chest pain is a frequent emergency department (ed) chief complaint and other diseases that cause chest pain, such as acute coronary syndrome and pulmonary embolism, occur much more frequently. furthermore, 20% of dissections are without chest pain and 6% are painless. for all these reasons, diagnosing dissection can be difficult for the ed physician. we wished to quantify the magnitude of this problem in a large ed database. objectives: our goal was to determine the number of patients diagnosed by ed physicians with aortic dissections compared to total ed patients and to the total number of patients with a chest pain diagnosis. methods: design: retrospective cohort. setting: 33 suburban, urban, and rural new york and new jersey eds with annual visits between 8,000 and 75,000. participants: consecutive patients seen by ed physicians from january 1, 1996 through december 31, 2010. observations: we identified aortic dissections using icd-9 codes and chest pain diagnoses by examining all icd-9 codes used over the period of the study and selecting those with a non-traumatic chest pain diagnosis. we then calculated the number of total ed patients and chest pain patients for every aortic dissection diagnosed by emergency physicians. we determined 95% confidence intervals (cis). results: from a database of 9.5 million ed visits, we identified 782 (0.0082%) aortic dissections, or one for every 12,200 (95% ci 11,400 to 13,100) visits. the mean age of aortic dissection patients was 58 ± 19 years and 57% were female. of the total visits there were 763,000 (8%) with a chest pain diagnosis. thus there is one aortic dissection diagnosis for every 980 (95% ci 910 to 1,050) chest pain diagnoses. conclusion: the diagnosis of aortic dissections by ed physicians is rare. an ed physician seeing 3,000 to 4,000 patients a year would diagnose an aortic dissection approximately once every 3 to 4 years. 
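the reported rates and intervals above follow from simple division plus a confidence interval on the dissection count. a hedged python sketch that approximately reproduces them, assuming the interval came from a normal approximation to the poisson count (the abstract's rounded figures differ slightly, presumably from unrounded raw counts):

```python
import math

visits = 9_500_000     # total ED visits in the database
dissections = 782      # aortic dissection diagnoses
chest_pain = 763_000   # visits with a chest pain diagnosis

# Normal approximation to the 95% Poisson CI for the dissection count:
lo = dissections - 1.96 * math.sqrt(dissections)   # ~727
hi = dissections + 1.96 * math.sqrt(dissections)   # ~837

one_in = visits / dissections        # ~12,150 (abstract rounds to 12,200)
ci = (visits / hi, visits / lo)      # ~(11,350, 13,060) ~ reported (11,400, 13,100)

per_chest_pain = chest_pain / dissections   # ~976 (abstract rounds to 980)
```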
an aortic dissection would be diagnosed once for approximately every 1,000 ed chest pain patients. patients were excluded if they suffered a cardiac arrest, were transferred from another hospital, or if the ccl was activated for an inpatient or from ems in the field. fp ccl activation was defined as 1) a patient for whom activation was cancelled in the ed and ruled out for mi or 2) a patient who went to catheterization but no culprit vessel was identified and mi was excluded. ecgs for fp patients were classified using standard criteria. demographic data, cardiac biomarkers, and all relevant time intervals were collected according to an on-going quality assurance protocol. results: a total of 506 ccl activations were reviewed, with 68% male, average age 57, and 59% black. there were 210 (42%) true stemis and 86 (17%) fp activations. there were no significant differences between the fp patients who did and did not have catheterization. for those fp patients who had a catheterization (13%), "door to page" and "door to lab" times were significantly longer than the stemi patients (see table), but there was substantial overlap. there was no difference in sex or age, but fp patients were more likely to be black (p = 0.02). a total of 82 fp patients had ecgs available for review; findings included anterior st elevation with convex (21%) or concave (13%) morphology, st elevation from prior anterior (10%) or inferior (11%) mi, pericarditis (16%), presumed new lbbb (15%), early repolarization (5%), and other (9%). conclusion: false ccl activation occurred in a minority of patients, most of whom had ecg findings warranting emergent catheterization. the rate of false ccl activation appears acceptable. background: atrial fibrillation (af) is the most common cardiac arrhythmia treated in the ed, leading to high rates of hospitalization and resource utilization. 
dedicated atrial fibrillation clinics offer the possibility of reducing the admission burden for af patients presenting to the ed. while the referral base for these af clinics is growing, it is unclear to what extent these clinics contribute to reducing the number of ed visits and hospitalizations related to af. objectives: to compare the number of ed visits and hospitalizations among discharged ed patients with a primary diagnosis of af who followed up with an af clinic and those who did not. methods: a retrospective cohort study and medical records review including three major tertiary centres in calgary, canada. a sample of 600 patients was taken representing 200 patients referred to the af clinic from the calgary zone eds and compared to 400 matched control ed patients who were referred to other providers for follow-up. the controls were matched for age and sex. inclusion criteria included patients over 18 years of age, discharged during the index visit, and seen by the af clinic between january 1, 2009 and october 25, 2010. exclusion criteria included non-residents and patients hospitalized during the index visit. the number of cardiovascular-related ed visits and hospitalizations was measured. all data are categorical, and were compared using chi-square tests. results: patients in the control and af clinic cohorts were similar for all baseline characteristics except for a higher proportion of first episode patients in the intervention arm. in the six months following the index ed visit, 55 study group patients (27.5%) visited an ed on 95 occasions, and 12 (6%) were hospitalized on 16 occasions. of the control group, 122 patients (30.5%) visited an ed on 193 occasions, and 44 (11%) were hospitalized on 55 occasions. using a chi-square test we found no significant difference in ed visits (p = 0.5063) or hospitalizations (p = 0.0664) between the control and af clinic cohorts. 
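the reported p = 0.5063 for ed revisits (55 of 200 af-clinic patients vs. 122 of 400 controls) is numerically reproduced by a yates-corrected chi-square test on the 2x2 table. in practice one would use scipy.stats.chi2_contingency; the sketch below is a stdlib-only illustration, and the assumption that the authors used the continuity-corrected test is inferred from the matching p-value:

```python
import math

def chi2_yates_2x2(a: int, b: int, c: int, d: int) -> tuple[float, float]:
    """Yates continuity-corrected chi-square for a 2x2 table [[a, b], [c, d]].
    Returns (statistic, two-sided p); for 1 df, p = erfc(sqrt(stat / 2))."""
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (abs(obs - expected) - 0.5) ** 2 / expected
    return stat, math.erfc(math.sqrt(stat / 2))

# AF-clinic cohort: 55 of 200 revisited the ED; controls: 122 of 400.
stat, p = chi2_yates_2x2(55, 145, 122, 278)   # p ~ 0.506, matching the abstract
```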
conclusion: based on our results, referral from the ed to an af clinic is not associated with a significant reduction in subsequent cardiovascular related ed visits and hospitalizations. due to the possibility of residual confounding, randomized trials should be performed to evaluate the efficacy of af clinics. reported an income of less than $10,000. there were no significant associations between sex, race, marital status, education level, income, insurance status, and subsequent 30-and-90 day readmission rates. hla score was not found to be significantly related to readmission rates. the mean hla score was 18.9 (sd = 7.87), equivalent to less than 6th grade literacy, meaning these patients may not be able to read prescription labels. for each unit increase in hfkt score, the odds of being readmitted within 30 days decreased by 0.219 (p < 0.001) and for 31-90 days decreased by 0.440 (p < 0.001). for each unit increase in scbs score, the odds of being readmitted within 90 days decreased by 0.949 (p = 0.038). conclusion: health care literacy in our patient population is not associated with readmission, likely related to the low literacy rate of our study population. better hf knowledge and self-care behaviors are associated with lower readmission rates. greater emphasis should be placed on patient education and self-care behaviors regarding hf as a mechanism to decrease readmission rates. comparison of door to balloon times in patients presenting directly or transferred to a regional heart center with stemi jennifer ehlers, adam v. wurstle, luis gruberg, adam j. singer stony brook university, stony brook, ny background: based on the evidence, a door-to-balloon-time (dtbt) of less than 90 minutes is recommended by the aha/acc for patients with stemi. in many regions, patients with stemi are transferred to a regional heart center for percutaneous coronary intervention (pci). 
objectives: we compared dtbt for patients presenting directly to a regional heart center with those for patients transferred from other regional hospitals. we hypothesized that dtbt would be significantly longer for transferred patients. methods: study design-retrospective medical record review. setting-academic ed at a regional heart center with an annual census of 80,000 that includes a catchment area of 12 hospitals up to 50 miles away. patients-patients with acute stemi identified on ed 12-lead ecg. measures-demographic and clinical data including time from triage to ecg, from ecg to activation of regional catheterization lab, and from initial triage to pci (dtbt), and door to intravascular balloon deployment (d2b). methods: the study was performed in an inner-city academic ed between 1/1/07 and 12/31/10. every patient for whom ed activation of our stemi system occurred was included. all times data from a pre-existing quality assurance database were collected prospectively. patient language was determined retrospectively by chart review. results: there were 132 patients between 1/1/07 and 12/31/10. 21 patients (16%) were deemed too sick or unable to provide history and were excluded, leaving 111 patients for analysis. 85 (77%) spoke english and 26 (23%) did not. in the non-english group, chinese was the most common language, in 22 (20%). background: syncope is a common, potentially high-risk ed presentation. hospitalization for syncope, although common, is rarely of benefit. no population-based study has examined disparities in regional admission practices for syncope care in the ed. moreover, there are no population-based studies reporting prognostic factors for 7- and 30-day readmission for syncope. objectives: 1) to identify factors associated with admission as well as prognostic factors for 7- and 30-day readmission to these hospitals; 2) to evaluate variability in syncope admission practices across different sizes and types of hospitals. 
methods: design -multi-center retrospective cohort study using ed administrative data from 101 albertan eds. participants/subjects -patients >17 years of age with syncope (icd10: r55) as a primary or secondary diagnosis from 2007 to june 2011. readmission was defined as return visits to the ed or admission <7 days or 7-30 days after the index visit (including against medical advice and left without being seen during the index visit). outcomes -factors associated with hospital admission at index presentation, and readmission following ed discharge, adjusted using multivariable logistic regression. results: overall, 44,521 syncope visits occurred over 4 years. increased age, increased length of stay (los), performance of cxr, transport by ground ambulance, and treatment at a low-volume hospital (non-teaching or non-large urban) were independently associated with index hospitalization. these same factors, as well as hospital admission itself, were associated with 7-day readmission. additionally, increased age, increased los, performance of a head ct, treatment at a low-volume hospital, hospital admission, and female sex were independently associated with 7-30 day readmission. arrival by ground ambulance was associated with a decreased likelihood of both 7- and 7-30 day readmission. conclusion: our data identify variations in practice as well as factors associated with hospitalization and readmission for syncope. the disparity in admission and readmission rates between centers may highlight a gap in quality of care or reflect inappropriate use of resources. further research to compare patient outcomes and quality of patient care among urban and non-urban centers is needed. background: change in dyspnea severity (ds) is a frequently used outcome measure in trials of acute heart failure (ahf). however, there is limited information concerning its validity. objectives: to assess the predictive validity of change in dyspnea severity. 
methods: this was a secondary analysis of a prospective observational study of a convenience sample of ahf patients presenting with dyspnea to the ed of an academic tertiary referral center with a mixed urban/ suburban catchment area. patients were enrolled weekdays, june through december 2006. patients assessed their ds using a 10-cm visual analog scale at three times: the start of ed treatment (baseline) as well as at 1 and 4 hours after starting ed treatment. the difference between baseline and 1 hour was the 1-hour ds change. the difference between baseline and 4 hours was the 4-hour ds change. two clinical outcome measures were obtained: 1) the number of days hospitalized or dead within 30 days of the index visit (30-day outcome), and 2) the number of days hospitalized or dead within 90 days of the index visit (90-day outcome). results: data on 86 patients were analyzed. the median 30-day outcome variable was 6 days with an interquartile range (iqr) of 3 to 16. the median 90-day outcome variable was 10 days (iqr 4 to 27.5). the median 1-hour ds change was 2.6 cm (iqr 0.3 to 6.7). the median 4-hour ds change was 4.9 cm (iqr 2.2 to 8.2). the 30-day and 90-day mortality rates were 9% and 13% respectively. the spearman rank correlations and 95% confidence intervals are presented in the table below. conclusion: while the point estimates for the correlations were below 0.5, the 95% ci for two of the correlations extended above 0.5. these pilot data support change in ds as a valid outcome measure for ahf when measured over 4 hours. a larger prospective study is needed to obtain a more accurate point estimate of the correlations. background: the majority of volume-quality research has focused on surgical outcomes in the inpatient setting; very few studies have examined the effect of emergency department (ed) case volume on patient outcomes. objectives: to determine whether ed case volume of acute heart failure (ahf) is associated with short-term patient outcomes. 
methods: we analyzed the 2008 nationwide emergency department sample (neds) and nationwide inpatient sample (nis), the largest, all-payer, ed and inpatient databases in the us. ed visits for ahf were identified with a principal diagnosis of icd-9-cm code 428.xx. eds were categorized into quartiles by ed case volume of ahf. the outcome measures were early inpatient mortality (within the first 2 days of admission), overall inpatient mortality, and hospital length of stay (los). results: there were an estimated 946,000 visits for ahf from approximately 4,700 eds in 2008; 80% were hospitalized. of these, the overall inpatient mortality rate was 3.2%, and the median hospital los was 4 days. early inpatient mortality was lower in the highest-volume eds, compared with the lowest-volume eds (0.8% vs. 2.1%; p < 0.001). similar patterns were observed for overall inpatient mortality (3.0% vs. 4.1%; p < 0.001). in a multivariable analysis adjusting for 37 patient and hospital characteristics, early inpatient mortality remained lower in patients admitted through the highest-volume eds (adjusted odds ratio [or], 0.70; 95% confidence interval [ci], 0.52-0.96), as compared with the lowest-volume eds. there was a trend towards lower overall inpatient mortality in the highest-volume eds; however, this was not statistically significant (adjusted or, 0.92; 95% ci, 0.75-1.14). by contrast, using the nis data including various sources of admissions, a higher case volume of inpatient ahf patients predicted lower overall inpatient mortality (adjusted or, 0.51; 95% ci, 0.40-0.65). the hospital los in patients admitted through the highest-volume eds was slightly longer (adjusted difference, 0.7 days; 95% ci, 0.2-1.2), compared with the lowest-volume eds. conclusion: ed patients who are hospitalized for ahf have an approximately 30% reduced early inpatient mortality if they were admitted from an ed that handles a large volume of ahf cases. 
the "practice-makes-perfect" concept may hold in emergency management of ahf. emergency department disposition and charges for heart failure: regional variability. alan b. storrow, cathy a. jenkins, sean p. collins, karen p. miller, candace mcnaughton, naftilan allen, benjamin s. heavrin; vanderbilt university, nashville, tn. background: high inpatient admission rates for ed patients with acute heart failure are felt to be partially responsible for the large economic burden of this most costly cardiovascular problem. objectives: we examined regional variability in ed disposition decisions and regional variability in total dollars spent on ed services for admitted patients with primary heart failure. methods: the 2007 nationwide emergency department sample (neds) was used to perform a retrospective, cohort analysis of patients with heart failure (icd-9 code of 428.x) listed as the primary ed diagnosis. demographics and disposition percentages (with se) were calculated for the overall sample and by region: northeast, south, midwest, and west. to account for the sample design and to obtain national and regional estimates, a weighted analysis was conducted. results: there were 941,754 weighted ed visits with heart failure listed as the primary diagnosis. overall, over eighty percent were admitted (see table). fifty-two percent of these patients were female; mean age was 72.7 years (se 0.20). hospitalization rates were higher in the northeast (89.1%) and south (81.2%) than in the midwest (76.0%) and west (74.8%). total monies spent on ed services were highest in the south ($69,078,042), followed by the northeast ($18,233,807), west ($6,360,315), and midwest ($5,899,481). conclusion: this large retrospective ed cohort suggests a very high national admission rate with significant regional variation in both disposition decisions as well as total monies spent on ed services for patients with a primary diagnosis of heart failure. 
examining these estimates and variations further may provide strategies to reduce the economic burden of heart failure. background: workplace violence in health care settings is a frequent occurrence. gunfire in hospitals is of particular concern. however, information regarding such workplace violence is limited. accordingly, we characterized u.s. hospital-based shootings from 2000-2010. objectives: to determine the extent of hospital-based shootings in the u.s. and involvement of emergency departments. methods: using lexisnexis, google, netscape, pubmed, and sciencedirect, we searched reports for acute care hospital shooting events from january 2000 through december 2010, and those with at least one injured victim were analyzed. results: we identified 140 hospital-related shootings (86 inside the hospital, 54 on hospital grounds), in 39 states, with 216 victims, of whom 98 were perpetrators. in comparison to external shootings, shootings within the hospital have not increased over time (see figure). perpetrators were from all age groups, including the elderly. most of the events involved a determined shooter: grudge (26%), suicide (19%), ''euthanizing'' an ill relative (15%), and prisoner escape (12%). ambient societal violence (8%) and mentally unstable patients (4%) were comparatively infrequent. the person most commonly injured was the perpetrator (45%). hospital employees comprised only 21% of victims; physician (3%) and nurse (5%) victims were relatively infrequent. the emergency department was the most common site (29%), followed by patient rooms (20%) and the parking lot (20%). in 13% of shootings within hospitals, the weapon was a security officer's gun grabbed by the perpetrator. a ''grudge'' motive was the only factor determinative of hospital staff victims (or = 4.34, 95% ci 1.85-10.17). conclusion: although hospital-based shootings are relatively rare, emergency departments are the most likely site. 
the unpredictable nature of this type of event represents a significant challenge to hospital security and deterrence practices, as most perpetrators proved determined, and many hospital shootings occur outside the building. impact of emergency physician board certification on patient perceptions of ed care quality albert g. sledge iv 1 , carl a. germann 1 , tania d. strout 1 , john southall 2 1 maine medical center, portland, me; 2 mercy hospital, portland, me background: the hospital value-based purchasing program mandated by the affordable care act is the latest example of how patients' perceptions of care will affect the future practice environment of all physicians. the type of training of medical providers in the emergency department (ed) is one possible factor affecting patient perceptions of care. a unique situation in a maine community ed led to the rapid transition from non-emergency medicine (em) residency trained physicians to all em residency trained and american board of emergency medicine (abem) certified providers. objectives: the purpose of this study was to evaluate the effect of the implementation of an all em-trained, abem-certified physician staff on patient perceptions of the quality of care they received in the ed. methods: we retrospectively evaluated press ganey data from surveys returned by patients receiving treatment in a single, rural ed. survey items addressed patient's perceptions of physician courtesy, time spent listening, concern for patient comfort, and informativeness. additional items evaluated overall perceptions of care and the likelihood that the respondent would recommend the ed to another. data were compared for the three years prior to and following implementation of the all trained, certified staff. we used the independent samples t-test to compare mean responses during the two time periods. bonferroni's correction was applied to adjust for multiple comparisons. 
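a minimal sketch of the analysis named in the methods above (independent-samples t-test with a bonferroni correction across the six survey items); the likert responses are simulated, not press ganey data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# illustrative 1-5 likert responses for one survey item, not the study data
pre  = rng.integers(1, 6, size=200)   # pre-certification responses
post = rng.integers(2, 6, size=200)   # post-certification responses

# independent-samples t-test comparing mean responses between periods
t, p = stats.ttest_ind(post, pre)

# bonferroni correction: with six survey items compared, each test is
# judged against alpha / 6 rather than 0.05
n_comparisons = 6
alpha_adjusted = 0.05 / n_comparisons

print(round(alpha_adjusted, 4), p < alpha_adjusted)
```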
results: during the study period, 3,039 patients provided surveys for analysis: 1,666 during the pre-certification phase and 1,373 during the post-certification phase. across all six survey items, mean responses increased following transition to the board-certified staff. these improvements were noted to be statistically significant in each case: courtesy p < 0.001, time listening p < 0.001, concern for comfort p < 0.001, informativeness p < 0.001, overall perception of care p < 0.001, and likelihood to recommend p < 0.001. conclusion: data from this community ed suggest that transition from a non-residency trained physician staff to a fully em-trained, abem-certified model has important implications for patients' perceptions of the care they receive. we observed significant improvement in rating scores provided by patients across all physician-oriented and general ed measures. background: transfer of care from the ed to the inpatient floor is a critical transition when miscommunication places patients at risk. the optimal form and content of handoff between providers has not been defined. in july 2011, ed-to-floor signout for all admissions to the medicine and cardiology floors was changed at our urban, academic, tertiary care hospital. previously, signout was via an unstructured telephone conversation between the ed resident and admitting housestaff. the new signout utilizes a web-based ed patient tracking system and includes: 1) a templated description of the ed course is completed by the ed resident; 2) when a bed is assigned, an automated page is sent to the admitting housestaff; 3) ed clinical information, including imaging, labs, medications, and nursing interventions (figure) is reviewed by admitting housestaff; 4) if housestaff have specific questions about ed care, a telephone conversation between the ed resident and housestaff occurs; 5) if there are no specific questions, it is indicated electronically and the patient is transferred to the floor. 
objectives: to describe the effects on patient safety (floor-to-icu transfer in 24 hours) and ed throughput (ed length of stay (los) and time from bed assignment to ed departure) resulting from a change to an electronic, discussion-optional handoff system. conclusion: transition to a system in which signout of admitted patients is accomplished by accepting housestaff review of ed clinical information supplemented by verbal discussion when needed resulted in no significant change in rate of floor-to-icu transfer or ed los and reduced time from bed assignment to ed departure. background: emergency physicians may be biased against patients presenting with nonspecific complaints or those requiring more extensive work-ups. this may result in patients being seen less quickly than those with more straightforward presentations, despite equal triage scores or potential for more dangerous conditions. objectives: the goal of our study was to ascertain which patients, if any, were seen more quickly in the ed based on chief complaint. methods: a retrospective report was generated from the emr for all moderate acuity (esi 3) adult patients who visited the ed from january 2005 through december 2010 at a large urban teaching hospital. the most common complaints were: abdominal pain, alcohol intoxication, back pain, chest pain, cough, dyspnea, dizziness, fall, fever, flank pain, headache, infection, pain (nonspecific), psychiatric evaluation, ''sent by md,'' vaginal bleeding, vomiting, and weakness. non-parametric independent sample tests assessed median time to be seen (ttbs) by a physician for each complaint. differences in the ttbs between genders and based on age were also calculated. chi-square testing compared percentages of patients in the ed per hour to assess for differences in the distribution of arrival times. results: we obtained data from 116,194 patients. 
patients with a chief complaint of weakness and dizziness waited the longest, with a median time of 35 minutes, and patients with flank pain waited the shortest, with 24 minutes (p < 0.0001) (figure 1). overall, males waited 30 minutes and females waited 32 minutes (p < 0.0001). stratifying by gender and age, younger females between the ages of 18-50 waited significantly longer when presenting with a chief complaint of abdominal pain (p < 0.0001), chest pain (p < 0.05), or flank pain (p < 0.0001) as compared to males in the same age group (figure 2). there was no difference in the distribution of arrival times for these complaints. conclusion: while the absolute time differences are not large, there is a significant bias toward seeing young male patients more quickly than women or older males, despite the lower likelihood of dangerous conditions. triage systems should perhaps take age and gender better into account. patients might benefit from efforts to educate em physicians on the delays and potential quality issues associated with this bias in an attempt to move toward more egalitarian patient selection. background: detailed analysis of emergency department (ed) event data identified the time from completion of emergency physician evaluation (doc done) to the time patients leave the ed as a significant contributor to ed length of stay (los) and boarding at our institution. process flow mapping identified the time from doc done to the time inpatient beds were ordered (bo) as an interval amenable to specific process improvements. objectives: the purpose of this study was to evaluate the effect of ed holding orders for stable adult inpatient medicine (aim) patients on: a) the time to bo, and b) ed los. methods: a prospective, observational design was used to evaluate the study questions. 
data regarding the time to bo and los outcomes were collected before and after implementation of the ed holding orders program. the intervention targeted stable aim patients being admitted to hospitalist, internal medicine, and family medicine services. ed holding orders were placed following the admission discussion with the accepting service and special attention was paid to proper bed type, completion of the emergent work-up and the expected immediate course of the patient's hospital stay. holding orders were of limited duration and expired 4 hours after arrival to the inpatient unit. results: during the 6-month study period, 7321 patients were eligible for the ed holding orders intervention; 6664 (91.0%) were cared for using the standard adult medicine order set and 657 (9.0%) received the intervention. the median time from doc done to bo was significantly shorter for patients in the ed holding orders group, 41 min (iqr 19, 88) vs. 95 min (iqr 53, 154) for the standard adult medicine group, p < 0.001. similarly, the median ed los was significantly shorter for those in the ed holding orders group, 413 min (iqr 331, 540) vs. 456 min (iqr 346, 581) for the standard adult medicine group, p < 0.001. no lapses in patient care were reported in the intervention group. conclusion: in this cohort of ed patients being admitted to an aim service, placing ed holding orders rather than waiting for a traditional inpatient team evaluation and set of admission orders significantly reduced the time from the completion of the ed workup to placement of a bo. as a result, ed los was also significantly shortened. while overall utilization of the intervention was low, it improved with each month. 
emergency department interruptions in the age of electronic health records matthew albrecht, john shabosky, jonathan de la cruz southern illinois university school of medicine, springfield, il background: interruptions of clinical care in the emergency department (ed) have been correlated with increased medical errors and decreased patient satisfaction. studies have also shown that most interruptions happen during physician documentation. with the advent of the electronic health record and computerized documentation, ed physicians now spend much of their clinical time in front of computers and are more susceptible to interruptions. voice recognition dictation adjuncts to computerized charting boast increased provider efficiency; however, little is known about how data input of computerized documentation affects physician interruptions. objectives: we present here observational interruptions data comparing two separate ed sites, one that uses computerized charting by conventional techniques and one assisted by voice recognition dictation technology. methods: a prospective observational quality initiative was conducted at two teaching hospital eds located less than 1 mile from each other. one site primarily uses conventional computerized charting while the other uses voice recognition dictation computerized charting. four trained observers followed ed physicians for 180 minutes during shifts. the tasks each ed physician performed were noted and logged in 30 second intervals. tasks listed were selected from a predetermined standardized list presented at observer training. tasks were also noted as either completed or placed in queue after a change in task occurred. a total of 4140 minutes were logged. interruptions were noted when a change in task occurred with the previous task being placed in queue. data were then compared between sites. 
results: ed physicians averaged 5.33 interruptions/hour with conventional computerized charting compared to 3.47 interruptions/hour with assisted voice recognition dictation (p = 0.0165). conclusion: computerized charting assisted with voice recognition dictation significantly decreased total per-hour interruptions when compared to conventional techniques. charting with voice recognition dictation has the potential to decrease interruptions in the ed, allowing for more efficient workflow and improved patient care. background: using robot assistants in health care is an emerging strategy to improve efficiency and quality of care while optimizing the use of human work hours. robot prototypes capable of performing vital signs and assisting with ed triage are under development. however, ed users' attitudes toward robot assistants are not well studied. understanding of these attitudes is essential to design user-friendly robots and to prepare eds for the implementation of robot assistants. objectives: to evaluate the attitudes of ed patients and their accompanying family and friends toward the potential use of robot assistants in the ed. methods: we surveyed a convenience sample of adult ed patients and their accompanying adult family members and friends at a single, university-affiliated ed, 9/26/11-10/27/11. the survey consisted of eight items from the negative attitudes towards robots scale (nomura et al.) modified to address robot use in the ed. response options included a 5-point likert scale. a summary score was calculated by summing the responses for all 8 items, with a potential range of 8 (completely negative attitude) to 40 (completely positive attitude). research assistants gave the written surveys to subjects during their ed visit. internal consistency was assessed using cronbach's alpha. 
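cronbach's alpha, the internal-consistency measure named in the methods above, can be computed directly from the item-variance formula; the tiny response matrix here is illustrative only, not the survey data:

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of likert responses.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# invented 4-respondent x 3-item response matrix for illustration
responses = np.array([
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [1, 2, 1],
])
print(round(float(cronbach_alpha(responses)), 2))
```

values near the abstract's 0.94 indicate the items move together, i.e. the 8-item summary score is coherent.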
bivariate analyses were performed to evaluate the association between the summary score and the following variables: participant type (patient or visitor), sex, race, time of day, and day of week. results: of 121 potential subjects approached, 113 (93%) completed the survey. participants were 37% patients, 63% family members or friends, 62% women, 79% white, and had a median age of 45.5 years (iqr 18-84). cronbach's alpha was 0.94. the mean summary score was 22.2 (sd = 0.87), indicating subjects were between ''occasionally'' and ''sometimes'' comfortable with the idea of ed robot assistants (see table) . men were more positive toward robot use than women (summary score: 24.6 vs 20.8; p = 0.033). no differences in the summary score were detected based on participant type, race, time of day, or day of week. conclusion: ed users reported significant apprehension about the potential use of robot assistants in the ed. future research is needed to explore how robot designs and strategies to implement ed robots can help alleviate this apprehension. background: emergency department cardioversion (edc) of recent-onset atrial fibrillation or flutter (af) patients is an increasingly common management approach to this arrhythmia. patients who qualify for edc generally have few co-morbidities and are often discharged directly from the ed. this results in a shift towards a sicker population of patients admitted to the hospital with this diagnosis. objectives: to determine whether hospital charges and length of stay (los) profiles are affected by emergency department discharge of af patients. methods: patients receiving treatment at an urban teaching community hospital with a primary diagnosis of atrial fibrillation or flutter were identified through the hospital's billing data base. information collected on each patient included date of service, patient status, length of stay, and total charges. 
patient status was categorized as inpatient (admitted to the hospital), observation (transferred from the ed to an inpatient bed but placed in an observation status), or ed (discharged directly from the ed). the hospital billing system automatically defaults to a length of stay of 0 for observation patients. ed patients were assigned a length of stay of 0. total hospital charges and mean los were determined for two different models: a standard model (sm) in which patients discharged from the ed were excluded from hospital statistics, and an inclusive model (im) in which discharged ed patients were included in the hospital statistics. statistical analysis was through anova. results: a total of 317 patients were evaluated for af over an 18-month period. of these, 197 (62%) were admitted, 22 (7%) were placed in observation status, and 98 (31%) were discharged from the ed. hospital charges and los in days are summarized in the table. all differences were statistically significant (p < 0.001). conclusion: emergency department management can lead to a population of af patients discharged directly from the ed. exclusion of these patients from hospital statistics skews performance profiles, effectively punishing institutions for progressive care. background: recent health care reform has placed an emphasis on the electronic health record (ehr). with the advent of the ehr it is common to see ed providers spending more time in front of computers documenting and away from patients. finding strategies to decrease provider interaction with computers and increase time with patients may lead to improved patient outcomes and satisfaction. computerized charting adjuncts, such as voice recognition software, have been marketed as ways to improve provider efficiency and patient contact. 
objectives: we present here observational data comparing two separate ed sites, one where computerized charting is done by conventional techniques and one that is assisted with voice recognition dictation, and their effects on physician charting and patient contact. methods: a prospective observational quality initiative was conducted at two teaching hospitals located less than 1 mile from each other. one site primarily uses conventional computerized charting while the other uses voice recognition dictation. four trained quality assistants observed ed physicians for 180 minutes during shifts. the tasks each physician performed were noted and logged in 30-second intervals. tasks listed were identified from a predetermined standardized list presented at observer training. a total of 4140 minutes were logged. time allocated to charting and that allocated to direct patient care were then compared between sites. results: ed physicians spent 28.6% of their time charting using conventional techniques vs 25.7% using voice recognition dictation (p = 0.4349). time allocated to direct patient care was found to be 22.8% with conventional charting vs 25.1% using dictation (p = 0.4887). in total, ed physicians using conventional charting techniques spent 668/2340 minutes charting. ed physicians using voice recognition dictation spent 333/1800 minutes dictating and an additional 129.5/1800 minutes reviewing or correcting their dictations. conclusion: the use of voice recognition assisted dictation rather than conventional techniques did not significantly change the amount of time physicians spent charting or with direct patient care. although voice recognition dictation decreased initial input time of documenting data, a considerable amount of time was required to review and correct these dictations. objectives: for our primary objective, we studied whether emergency department triage temperatures detected fever adequately when compared to a rectal temperature. 
as secondary objectives, we examined the temperature differences when a rectal temperature was taken within an hour of a non-invasive temperature, temperature site (oral, axillary, temporal), and also examined the patients who were initially afebrile but were found to be febrile by rectal temperature. methods: we performed an electronic chart review at our inner-city, academic emergency department with an annual census of 110,000 patients. we identified all patients over the age of 18 who received a non-invasive triage temperature and a subsequent rectal temperature while in the ed from january 2002 through february 2011. specific data elements included many aspects of the patient's medical record (e.g., subject demographics, temperature, and source). we analyzed our data with standard descriptive statistics, t-tests for continuous variables, and pearson chi-square tests for proportions. results: a total of 27,130 patients met our inclusion criteria. the mean difference between the initial temperature and the rectal temperature was 1.3°f, with 25.9% having rectal temperatures ≥2°f higher, and 5.0% having rectal temperatures ≥4°f higher. the mean temperature difference among the 10,313 patients who had an initial non-invasive temperature and a rectal temperature within one hour was 1.4°f. the mean difference among patients who received oral, axillary, and temporal temperatures was 1.2°f, 1.8°f, and 1.2°f, respectively. approximately one in five patients (18.1%) were initially afebrile and found to be febrile by rectal temperature, with an average temperature difference of 2.5°f. these patients had a higher rate of admission and were more likely to be admitted to the intensive care unit. conclusion: there are significant differences between rectal temperatures and non-invasive triage temperatures in this emergency department cohort. in almost one in five patients, fever was missed by triage temperature. 
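the descriptive quantities reported above (mean rectal-minus-triage difference, the share with rectal readings at least 2°f higher, and ''missed fever'') reduce to simple paired-array arithmetic. the temperatures below are invented, and the 100.4°f fever cutoff is an assumed standard threshold, not stated in the abstract:

```python
import numpy as np

# invented paired temperatures in °f, one pair per patient
triage = np.array([98.6, 99.1, 100.2, 98.4, 101.0])   # non-invasive
rectal = np.array([100.1, 100.0, 102.5, 99.0, 103.2])

diff = rectal - triage
mean_diff = float(diff.mean())                 # mean rectal-minus-triage gap
pct_ge_2 = float((diff >= 2.0).mean() * 100)   # share with rectal >= 2°f higher

# "missed fever": afebrile at triage but febrile rectally,
# using an assumed 100.4°f fever threshold
missed = float(((triage < 100.4) & (rectal >= 100.4)).mean() * 100)

print(round(mean_diff, 2), pct_ge_2, missed)
```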
background: pediatric emergency department (ped) overcrowding has become a national crisis and has resulted in delays in treatment and patients leaving without being seen. increased wait times have also been associated with decreased patient satisfaction. optimizing ped throughput is one means by which to handle the increased demands for services. various strategies have been proposed to increase efficiency and reduce length of stay (los). objectives: to measure the effect of direct bedding, bedside registration, and patient pooling on ped wait times, length of stay, and patient satisfaction. methods: data were extracted from a computerized ed tracking system in an urban tertiary care ped. comparisons were made between metrics for 2010 (23,681 patients) and the 3 months following process change (6,195 patients). during 2010, patients were triaged by one or two nurses, registered, and then sent either to a 14-bed ped or a physically separate 5-bed fast-track unit, where they were seen by a physician. following process change, patients were brought directly to a bed in the 14-bed ped, triaged and registered, then seen by a physician. the fast-track unit was only utilized to accommodate patient surges. results: anticipating improved efficiencies, attending physician coverage was decreased by 9%. after instituting process changes, improvements were noted immediately. although daily patient volume increased by 3%, median time to be seen by a physician decreased by 20%. additionally, median los for discharged patients decreased by 15%, and median time until the decision-to-admit decreased by 10%. press ganey satisfaction scores during this time increased by greater than 5 mean score points, which was reported to be a statistically significant increase. conclusion: direct bedding, bedside registration, and patient pooling were simple-to-implement process changes. 
these changes resulted in more efficient ped throughput, as evidenced by decreased times to be seen by a physician, los for discharged patients, and time until decision-to-admit. additionally, patient satisfaction scores improved, despite decreased attending physician coverage and a 30% decrease in room utilization. during period 1, the ou was managed by the internal medicine department and staffed by primary care physicians and physician assistants. during periods 2 and 3, the ou was managed and staffed by em physicians. data collected included ou patient volume, length of stay (los) for discharged and admitted patients, admission rates, and 30-day readmission rates for discharged patients. cost data collected included direct, indirect, and total cost per patient encounter. data were compared using chi-square and anova analysis followed by multiple pairwise comparisons using the bonferroni method of p-value adjustment. results: see table. the ou patient volume and percent of ed volume were greater in period 3 compared to periods 1 and 2. length of stay, admission rates, 30-day readmission rates, and costs were greater in period 1 compared to periods 2 and 3. conclusion: em physicians provide more cost-effective care for patients in this large ou compared to non-em physicians, resulting in shorter los for admitted and discharged patients, greater rates of patients discharged, and lower 30-day readmission rates for discharged patients. this is not affected by an increase in ou volume and shows a trend towards improvement. background: emergency department (ed) crowding continues to be a problem, and new intake models may represent part of the solution. however, little data exist on the sustainability and long-term effects of physician triage and screening on standard ed performance metrics, as most studies are short-term. 
objectives: we examined the hypothesis that a physician screening program (start) sustainably improves standard ed performance metrics including patient length of stay (los) and patients who left without completing assessment (lwca). we also investigated the number of patients treated and dispositioned by start without using a monitored bed and the median patient door-to-room time. methods: design and setting: this study is a retrospective before-and-after analysis of start in a level i tertiary care urban academic medical center with approximately 90,000 annual patient visits. all adult patients from december 2006 until november 2010 are included, though only a subset was seen in start. start began at our institution in december 2007. observations: our outcome measures were length of stay for ed patients, lwca rates, patients treated and dispositioned by start without using a monitored bed, and door-to-room time. statistics: simple descriptive statistics were used. p-values for los were calculated with wilcoxon test and p-value for lwca was calculated with chi-square. results: table 2 shows median length of stay for ed patients was reduced by 56 minutes/patient (p-value <0.0001) when comparing the most recent year to the year before start. patients who lwca were reduced from 4.8% to 2.9% (p-value <0.0001) during the same time period. we also found that in the first half-year of start, 18% of patients screened in the ed were treated and dispositioned without using a monitored bed and by the end of year 3, this number had grown to 29%. median door-to-room time decreased from 18.4 minutes to 9.9 minutes over the same period of time. conclusion: a start system can provide sustained improvements in ed performance metrics, including a significant reduction in ed los, lwca rate, and doorto-room time. additionally, start can decrease the need for monitored ed beds and thus increase ed capacity. . labs were obtained in 98%, ct in 37%, us in 30%, and consultation in 23%. 
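the two tests named in the start analysis above (wilcoxon rank-sum for the skewed los distributions, chi-square for the lwca proportions) can be sketched with scipy. the los samples are simulated, and the lwca counts assume a hypothetical 50,000 visits per period, chosen only to match the abstract's 4.8% and 2.9% rates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# illustrative right-skewed los samples in minutes, not the study data
los_before = rng.lognormal(mean=5.8, sigma=0.4, size=300)
los_after  = rng.lognormal(mean=5.6, sigma=0.4, size=300)

# wilcoxon rank-sum (mann-whitney u): appropriate for skewed los data,
# where medians rather than means are compared
u, p_los = stats.mannwhitneyu(los_before, los_after)

# chi-square on lwca: rows = before/after, cols = lwca / completed,
# assuming 50,000 visits per period (4.8% vs. 2.9%)
table = np.array([[2400, 47600],
                  [1450, 48550]])
chi2, p_lwca, dof, expected = stats.chi2_contingency(table)

print(dof, p_lwca < 0.0001)
```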
18% of the cohort was admitted to the hospital. the most commonly utilized source of translation was a layman (35%). a professional translator was used in 9% and a translation service (language line, marty) in 30%. the examiner was fluent in the patient's language in 11%. both the patient and examiner were able to maintain basic communication in 11%. there were 47 patients in the professional/fluent translation group and 44 patients in the lay translation group. there was no difference in ed los between groups (288 vs 304 min; p = 0.6). there was no difference in the frequency of lab tests, computerized tomography, ultrasound, consultations, or hospital admission. frequencies did not differ by sex or age. conclusion: translation method was not associated with a difference in overall ed los, ancillary test use, or specialist consultation in spanish-speaking patients presenting to the ed for abdominal pain. emergency department patients on warfarin -how often is the visit due to the medication? jim killeen, edward castillo, theodore chan, gary vilke ucsd medical center, san diego, ca background: warfarin has important therapeutic value for many patients, but has been associated with significant bleeding complications, hypersensitivity reactions, and drug-drug interactions, which can result in patients seeking care in the emergency department (ed). objectives: to determine how often ed patients on warfarin present for care as a result of the medication itself. methods: a multi-center prospective survey study in two academic eds over 6 months. patients who presented to the ed taking warfarin were identified, and ed providers were prospectively queried at the time of disposition regarding whether the visit was the result of a complication or side effect associated with warfarin. data were also collected on patient demographics, chief complaint, triage acuity, vital signs, disposition, ed evaluation time, and length of stay (los). 
patients identified with a warfarin-related cause for their ed visit were compared with those who were not. statistical analysis was performed using descriptive statistics. results: during the study period, 31,500 patients were cared for by ed staff, of whom 594 were identified as taking warfarin as part of their medication regimen. of these, providers identified 54.7% (325 patients) who presented with a warfarin-related complication as their primary reason for the ed visit. 56.9% (338) each 100 hours of daily boarding is associated with a drop of 1.3 raw score points in both pg metrics. these seemingly small drops in raw scores translate into major changes in rankings on press ganey national percentile scales (a difference of as much as 10 percentile points). our institution commonly has hundreds of hours of daily boarding. it is possible that patient-level measurements of boarding impact would show stronger correlation with individual satisfaction scores, as opposed to the daily aggregate measures we describe here. our research suggests that reducing the burden of boarding on eds will improve patient satisfaction. background: prolonged emergency department (ed) boarding is a key contributor to ed crowding. the effect of output interventions (moving boarders out of the ed into an intermediate area prior to admission or adding additional capacity to an observation unit) has not been well studied. objectives: we studied the effect of a combined observation-transition (ot) unit, consisting of observation beds and an interim holding area for boarding ed patients, on the length of stay (los) for admitted patients, as well as secondary outcomes such as los for discharged patients, and left without being seen rates. methods: we conducted a retrospective review (12 months pre-, 12 months post-design) of an ot unit at an urban teaching ed with 59,000 annual visits (study ed). 
we compared outcomes to a nearby community-based ed with 38,000 annual visits in the same health system (control ed) where no capacity interventions were performed. the ot had 17 beds, full monitoring capacity, and was staffed 24 hours per day. the number of beds allocated to transition and observation patients fluctuated throughout the course of the intervention, based on patient demands. all analyses were conducted at the level of the ed-day. wilcoxon rank-sum and analysis of covariance tests were used for comparisons; continuous variables were summarized with medians. results: in unadjusted analyses, median daily los of admitted patients at the study ed was 31 minutes lower in the 12 months after the ot opened, 6.98 to 6.47 hours (p < 0.0001). control site daily los for admitted patients increased 26 minutes, from 4.52 to 4.95 hours (p < 0.0001). results were similar after adjusting for other covariates (day of week, ed volume, and triage level). los of discharged patients at the study ed decreased by 14 minutes, from 4.1 hours to 3.8 hours (p < 0.001), while the control ed saw no significant changes in discharged patient los (2.6 hours to 2.7 hours, p = 0.06). left-without-being-seen rates did not decrease at either site. conclusion: opening an ot unit was associated with a roughly 30-minute reduction in average daily ed los for admitted patients, and a smaller reduction for discharged patients, in the study ed. given the large expense of opening an ot, future studies should compare capacity-dependent (e.g., ot) vs. capacity-independent (e.g., organizational) interventions to reduce ed crowding.

fran balamuth, katie hayes, cynthia mollen, monika goyal, children's hospital of philadelphia, philadelphia, pa. background: lower abdominal pain and genitourinary problems are common chief complaints in adolescent females presenting to emergency departments.
pelvic inflammatory disease (pid) is a potentially severe complication of lower genital tract infections, which involves inflammation of the female upper genital tract secondary to ascending stis. pid has been associated with severe sequelae including infertility, ectopic pregnancy, and chronic pelvic pain. we describe the prevalence and microbial patterns of pid in a cohort of adolescent females presenting to an urban emergency department with abdominal or genitourinary complaints. objectives: to describe the prevalence and microbial patterns of pid in a cohort of adolescent patients presenting to an ed with lower abdominal or genitourinary complaints. methods: this is a secondary analysis of a prospective study of females ages 14-19 years presenting to a pediatric ed with lower abdominal or genitourinary complaints. diagnosis of pid was per 2006 cdc guidelines. patients underwent chlamydia trachomatis (ct) and neisseria gonorrhoeae (gc) testing via urine aptima combo 2 assay and trichomonas vaginalis (tv) testing using the vaginal osom trichomonas rapid test. descriptive statistics were performed using stata 11.0. results: the prevalence of pid in this cohort of 328 patients was 19.5% (95% ci 15.2%, 23.8%), 37.5% (95% ci 25.3%, 49.7%) of whom had positive sexually transmitted infection (sti) testing: 25% (95% ci 14.1%, 35.9%) with ct, 7.8% (95% ci 1.1%, 14.6%) with gc, and 12.5% (95% ci 4.2%, 20.8%) with tv. 84.4% (95% ci 75.2%, 93.5%) of patients diagnosed with pid received antibiotics consistent with cdc recommendations. patients with lower abdominal pain as their chief complaint were more likely to have pid than patients with genitourinary complaints (or 3.3, 95% ci 1.7, 6.4). conclusion: a substantial number of adolescent females presenting to the emergency department with lower abdominal pain were diagnosed with pid, with microbial patterns similar to those previously reported in largely adult, outpatient samples.
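the 95% confidence intervals quoted above are standard normal-approximation (wald) intervals; a minimal sketch for the overall pid prevalence (the numerator 64 is inferred here from 19.5% of 328, as an assumption, since the abstract reports only the percentage):

```python
from math import sqrt

def wald_ci(k, n, z=1.96):
    """normal-approximation (wald) 95% ci for a proportion k/n."""
    p = k / n
    se = sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# 64/328 ~= 19.5%; reproduces the reported 95% ci of 15.2%-23.8%
lo, hi = wald_ci(64, 328)
```

the same formula reproduces the other intervals in the abstract from their respective numerators and denominators.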
furthermore, appropriate treatment for pid was observed in the majority of patients diagnosed with pid. impact background: in resource-poor settings, maternal health care facilities are often underutilized, contributing to high maternal mortality. the effect of ultrasound in these settings on patients, health care providers, and communities is poorly understood. objectives: the purpose of this study was to assess the effect of the introduction of maternal ultrasound in a population not previously exposed to this intervention. methods: an ngo-led program trained nurses at four remote clinics outside koutiala, mali, who performed 8,339 maternal ultrasound scans over three years. our researchers conducted an independent assessment of this program, which involved log book review, sonographer skill assessment, referral follow-up, semi-structured interviews of clinic staff and patients, and focus groups of community members in surrounding villages. analyses included the effect of ultrasound on clinic function, job satisfaction, community utilization of prenatal care and maternity services, alterations in clinical decision making, sonographer skill, and referral frequency. we used qrs nvivo9 to organize qualitative findings, code data, and identify emergent themes, and graphpad software (la jolla, ca) and microsoft excel to tabulate quantitative findings results: -findings that triggered changes in clinical practice were noted in 10.1% of ultrasounds, with a 3.5% referral rate to comprehensive maternity care facilities. -skill retention and job satisfaction for ultrasound providers was high. -the number of patients coming for antenatal care increased, after introduction of ultrasound, in an area where the birth rate has been decreasing. -over time, women traveled from farther distances to access ultrasound and participate in antenatal care. -very high acceptance among staff, patients and community members. 
-ultrasound was perceived as most useful for finding fetal position, sex, due date, and well-being. -improved confidence in diagnosis and treatment plan for all cohorts. -improved compliance with referral recommendations. -no evidence of gender selection motivation for ultrasound use. conclusion: use of maternal ultrasound in rural and resource-limited settings draws women to an initial antenatal care visit, increases referral, and improves job satisfaction among health care workers. methods: a retrospective database analysis was conducted using the electronic medical record from a single, large academic hospital. ed patients who received a billing diagnosis of ''nausea and vomiting of pregnancy'' or ''hyperemesis gravidarum'' between 1/1/10 and 12/31/10 were selected. a manual chart review was conducted with demographic and treatment variables collected. statistical significance was determined using multiple regression analysis for a primary outcome of return visit to the emergency department for nausea and vomiting of pregnancy. results: 113 patients were identified. the mean age was 27.1 years (sd±5.25), mean gravidity 2.90 (sd±1.94), and mean gestational age 8.78 weeks (sd±3.21). the average length of ed evaluation was 730 min (sd±513). of the 113 patients, 38 (33.6%) had a return ed visit for nausea and vomiting of pregnancy, 17 (15%) were admitted to the hospital, and 49 (43%) were admitted to the ed observation protocol. multiple regression analysis showed that the presence of medical co-morbidity (p = 0.039), patient gravditity (p = 0.016), gestational age (p = 0.038), and admission to the hospital (p = 0.004) had small but significant effects on the primary outcome (return visits to the emergency department). 
no other variables were found to be predictive of return visits to the ed, including admission to the ed observation unit or factors classically thought to be associated with severe forms of nausea and vomiting in pregnancy, including ketonuria, electrolyte abnormalities, or vital sign abnormalities. conclusion: nausea and vomiting in pregnancy has a high rate of return ed visits that can be predicted by young patient age, low patient gravidity, early gestational age, and the presence of other comorbidities. these patients may benefit from obstetric consultation and/or optimization of symptom management after discharge in order to prevent recurrent utilization of the ed.

prevalence conclusion: there is a high prevalence of ht in adult sa victims. although our study design and data do not allow us to make any inferences regarding causation, this first report of ht ed prevalence suggests the opportunity to clarify this relationship and the potential opportunity to intervene.

background: sexually transmitted infections (sti) are a significant public health problem. because of the risks associated with stis, including pid, ectopic pregnancy, and infertility, the cdc recommends aggressive treatment with antibiotics in any patient with a suspected sti. objectives: to determine the rates of positive gonorrhea and chlamydia (g/c) screening and rates of empiric antibiotic use among patients of an urban academic ed with >55,000 visits in boston, ma. methods: a retrospective study of all patients who had g/c cultures in the ed over 12 months. chi-square was used in data analysis. sensitivity and specificity were also calculated. results: a positive rate of 9/712 (1.2%) was seen for gonorrhea and 26/714 (3.6%) for chlamydia. females had positive rates of 2/602 (0.3%) and 17/603 (2.8%), respectively. males had higher rates of 7/110 (6.4%) (p < 0.001) and 9/111 (8.1%) (p = 0.006).
284 patients with g/c sent received an alternative diagnosis, the most common being uti (63), ovarian pathology (35), vaginal bleeding (34), and vaginal candidiasis (33); 4 were excluded. this left 426 without definitive diagnosis. of these, 24.2% (87/360) of females were treated empirically with antibiotics for g/c, and a greater percentage of males (66%, 45/66) were treated empirically (p < 0.001). of those empirically treated, 109/132 (82.6%) had negative cultures. meanwhile, 9/32 (28.1%) who ultimately had positive cultures were not treated with antibiotics during their ed stay. sensitivity of the provider to predict presence of disease based on the decision to give empiric antibiotics was 71.9 (ci 53.0-85.6); specificity was 72.3 (ci 67.6-76.6). conclusion: most patients screened in our ed for g/c did not have positive cultures, and 82.6% of those treated empirically were found not to have g/c. while early treatment is important to prevent complications, there are risks associated with antibiotic use such as allergic reaction, c. difficile infection, and development of antibiotic resistance. our results suggest that at our institution we may be over-treating for g/c. furthermore, despite high rates of treatment, 28% of patients who ultimately had positive cultures did not receive antibiotics during their ed stay. further research into predictive factors or development of a clinical decision rule may be useful to help determine which patients are best treated empirically with antibiotics for presumed g/c.

background: air travel may be associated with unmeasured neurophysiological changes in an injured brain that may affect post-concussion recovery. no study has compared the effect of commercial air travel on concussion injuries, despite the rather obvious decreased oxygen tension and increased dehydration effects on acute mtbi.
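the provider sensitivity and specificity reported in the g/c screening abstract above follow from a standard 2x2 table; a minimal sketch (the cell counts are reconstructed from the abstract's totals, not stated directly in it):

```python
def sens_spec(tp, fn, fp, tn):
    """sensitivity and specificity from 2x2 confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# reconstructed counts: of 132 treated empirically, 109 were culture-negative
# (fp) and 23 culture-positive (tp); of 32 culture-positive patients, 9 were
# untreated (fn); 426 - 32 = 394 were culture-negative, of whom 394 - 109 = 285
# were untreated (tn)
sens, spec = sens_spec(tp=23, fn=9, fp=109, tn=285)
# sens ~= 0.719 and spec ~= 0.723, matching the reported 71.9 and 72.3
```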
objectives: to determine if air travel within 4-6 hours of concussion is associated with increased recovery time in professional football and hockey players. methods: prospective cohort study of all active-roster national football league and national hockey league players during the 2010-2011 seasons. internet review of league websites was used to identify concussive injuries and when a player returned to play, solely for mtbi. team schedules and flight times were also confirmed to include only players who flew immediately following a game (within 4-6 hr). multiple injuries were excluded, as were players whose injury occurred around the all-star break for the nhl or the scheduled off week in the nfl. results: during the 2010-2011 nfl and nhl seasons, 122 (7.2%) and 101 (13.0%) players experienced a concussion (percent of total players) in the respective leagues. of these, 68 nfl players (57%) and 39 nhl players (39%) flew within 6 hours of the incident injury. the mean distance flown was shorter for the nfl (850 miles, sd 576, vs. nhl 1060 miles, sd 579) and all flights were in a pressurized cabin. the mean number of games missed for nfl and nhl players who traveled by air immediately after concussion was 29% and 24% higher (respectively) than for those who did not travel by air; nfl: 3.8 (sd 2.2) vs. 2.6 games (sd 1.8), and nhl: 16.2 games (sd 22.0) vs. 12.4 (sd 18.6); p < 0.03. conclusion: this is an initial report of increased recovery time, in terms of more games missed, for professional athletes flying on commercial airlines post-mtbi compared to those who do not subject their recently injured brains to pressurized airflight. the obvious changes of decreased oxygen tension (altitude equivalent of 7,500 feet), decreased humidity with increased dehydration, and the duress of travel accompanying pressurized airline cabins all likely worsen the concussion penumbra in acute mtbi. early air travel post-concussion should be further evaluated and likely postponed 48-72 hr.
until initial symptoms subside.

background: previous studies have shown better in-hospital stroke time targets for those who arrive by ambulance compared to other modes of transport. however, regional studies report that less than half of stroke patients arrive by ambulance. objectives: our objectives were to describe the proportion of stroke patients who arrive by ambulance nationwide, and to examine regional differences and factors associated with the mode of transport to the emergency department (ed). methods: this is a cross-sectional study of all patients with a primary discharge diagnosis of stroke based on previously validated icd-9 codes abstracted from the national hospital ambulatory medical care survey for 2007-2009. we excluded subjects <18 years of age and those with missing data. the survey variables studied included patient demographics, community characteristics, mode of transport to the hospital, and hospital characteristics. results: 566 patients met inclusion criteria, representing 2,153,234 patient records nationally. of these, 50.4% arrived by ambulance. after adjustment for potential confounders, patients residing in the west and south had lower odds of arriving by ambulance for stroke when compared to the northeast (southern region, or 0.45, 95% ci 0.26-0.76; western region, or 0.45, 95% ci 0.25-0.84; midwest region, or 0.56, 95% ci 0.31-1.01). compared to the medicare population, the privately insured and self-insured had lower odds of arriving by ambulance (or for private insurance 0.48, 95% ci 0.28-0.84; or for self-payers 0.36, 95% ci 0.14-0.93). age, sex, race, urban or rural location of ed, and safety-net status were not independently associated with ambulance use. conclusion: patients with stroke arrive by ambulance more frequently in the northeast than in other regions of the us. identifying reasons for this regional difference may be useful in improving ambulance utilization and overall stroke care nationwide.
objectives: we sought to determine whether there was a difference in type of stroke presentation based upon race. we further sought to determine whether there is an increase in hemorrhagic strokes among asian patients with limited english proficiency. methods: we performed a retrospective chart review, over 1 year, of all stroke patients age 18 and older diagnosed with cerebral vascular accident (cva) or intracranial hemorrhage (ich). we collected data on patient demographics and past medical history. we then stratified patients according to race (white, black, latino, asian, and other). we classified strokes as ischemic, intracranial hemorrhage (ich), subarachnoid hemorrhage (sah), subdural hemorrhage (sdh), and other (e.g., bleeding into metastatic lesions). we used only the index visit. we present the data as percentages, medians, and interquartile ranges (iqr). we tested the association of the outcome of intracranial hemorrhage against demographic and clinical variables using chi-square and kruskal-wallis tests. we performed a logistic regression model to determine factors related to presentation with an intracranial hemorrhage (ich).

background: the practice of obtaining laboratory studies and routine ct scan of the brain on every child with a seizure has been called into question in the patient who is alert, interactive, and back to functional baseline. there is still no standard practice for the management of non-febrile seizure patients in the pediatric emergency department (ped). objectives: we sought to determine the proportion of patients in whom clinically significant laboratory studies and ct scans of the brain were obtained in children who presented to the ped with a first or recurrent non-febrile seizure. we hypothesize that the majority of these children do not have clinically significant laboratory or imaging studies.
if clinically significant values were found, the history given would warrant further laboratory and imaging assessment despite seizure alone. methods: we performed a retrospective chart review of 93 patients with first-time or recurrent non-febrile seizures at an urban, academic ped from july 2007 to june 2011. exclusion criteria included children who presented to the ped with a fever and age less than 2 months. we looked at specific values including a complete blood count, basic metabolic panel, liver function tests, an antiepileptic drug level if the child was on antiepileptics for a known seizure disorder, and ct scan. abnormal laboratory and ct scan findings were classified as clinically significant or not. results: the median age of our study population was 4 years, with a male-to-female ratio of 1.7. 70% of patients had a generalized tonic-clonic seizure. laboratory studies and ct scans were obtained in 87% and 35% of patients, respectively. five patients had clinically significant abnormal labs; however, one had esrd, one developed urosepsis, one had eclampsia, and two others had hyponatremia (secondary to diluted formula and trileptal toxicity). three children had an abnormal head ct: two had a vp shunt and one had a chromosomal abnormality with developmental delay. conclusion: the majority of the children analyzed did not have clinically significant laboratory or imaging studies in the setting of a first or recurrent non-febrile seizure. of those with clinically significant results, the patient's history suggested a possible etiology for their seizure presentation and further workup was indicated.

background: in patients with a negative ct scan for suspected subarachnoid hemorrhage (sah), ct angiography (cta) has emerged as a controversial alternative diagnostic strategy in place of lumbar puncture (lp). objectives: to determine the diagnostic accuracy for sah and aneurysm of lp alone, cta alone, and lp followed by cta if the lp is positive.
methods: we developed a decision and bayesian analysis to evaluate 1) lp alone, 2) cta alone, and 3) lp followed by cta if the lp is positive. data were obtained from the literature. the model considers the probability of sah (15%), of aneurysm (85% if sah), the sensitivity and specificity of ct (92.9% and 100% overall), of lp (based on rbc count and xanthochromia), and of cta, and traumatic tap and its influence on sah detection. analyses considered all patients as well as those presenting at less than 6 hours or greater than 6 hours from symptom onset, by varying the sensitivity and specificity of ct and cta. results: using the reported ranges of ct scan sensitivity and specificity, the revised likelihood of sah following a negative ct ranged from 0.5-3.7%, and the likelihood of aneurysm ranged from 2.3-5.4%. following any of the diagnostic strategies, the likelihood of missing sah ranged from 0-0.7%. either lp strategy diagnosed 99.8% of sahs, versus 83-84% with cta alone, because cta only detected sah in the presence of an aneurysm. false-positive sah with lp ranged from 8.5-8.8% due to traumatic taps, and with cta ranged from 0.2-6.0% due to aneurysms without sah. the positive predictive value for sah ranged from 5.7-30% with lp and from 7.9-63% with cta. for patients presenting within 6 hours of symptom onset, the revised likelihood of sah following a negative ct became 0.53%, and the likelihood of aneurysm ranged from 2.3-2.7%. following any of the diagnostic strategies, the likelihood of missing sah ranged from 0.01-0.095%. either lp strategy diagnosed 99.8% of sahs versus 83-84% with cta alone. false-positive sah with lp was 8.8%, and with cta ranged from 0.2-5.1%. the positive predictive value for sah was 5.7% with lp and from 7.9-63% with cta. cta following a positive lp diagnosed 8.5-24% of aneurysms. conclusion: lp strategies are more sensitive for detecting sah but less specific than cta because of traumatic taps, leading to lower positive predictive values for sah with lp than with cta.
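the revised (post-test) probability of sah after a negative ct in this model is an application of bayes' rule; a minimal sketch using the stated point estimates (prior 15%, ct sensitivity 92.9%, specificity 100%):

```python
def post_test_prob_negative(prior, sens, spec):
    """p(disease | negative test) by bayes' rule."""
    false_neg = prior * (1 - sens)      # diseased patients missed by the test
    true_neg = (1 - prior) * spec       # non-diseased patients correctly negative
    return false_neg / (false_neg + true_neg)

p = post_test_prob_negative(prior=0.15, sens=0.929, spec=1.0)
# ~1.2%, which falls inside the 0.5-3.7% range the model reports when the
# sensitivity and specificity of ct are varied over their literature ranges
```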
either diagnostic strategy results in a low likelihood of missing sah, particularly within 6 hours of symptom onset.

background: recent studies support perfusion imaging as a prognostic tool in ischemic stroke, but little data exist regarding its utility in transient ischemic attack (tia). ct perfusion (ctp), which is more available and less costly to perform than mri, has not been well studied. objectives: to characterize ctp findings in tia patients, and identify imaging predictors of outcome. methods: this retrospective cohort study evaluated tia patients at a single ed over 15 months who had ctp at initial evaluation. a neurologist blinded to ctp findings collected demographic and clinical data. ctp images were analyzed by a neuroradiologist blinded to clinical information. ctp maps were described as qualitatively normal, increased, or decreased in mean transit time (mtt), cerebral blood volume (cbv), and cerebral blood flow (cbf). quantitative analysis involved measurements of average mtt (seconds), cbv (cc/100 g), and cbf (cc/[100 g x min]) in standardized regions of interest within each vascular distribution. these were compared with values in the other hemisphere for relative measures of mtt difference, cbv ratio (rcbv), and cbf ratio (rcbf). an mtt difference of ≥2 seconds, rcbv ≤0.60, and rcbf ≤0.48 were defined as abnormal based on prior studies. clinical outcomes including stroke, tia, or hospitalization during follow-up were determined up to one year following the index event. dichotomous variables were compared using fisher's exact test. logistic regression was used to evaluate the association of ctp abnormalities with outcome in tia patients. results: of 99 patients with validated tia, 53 had ctp done. mean age was 72 ± 12 years, 55% were women, and 64% were caucasian. mean abcd2 score was 4.7 ± 2.1, and 69% had an abcd2 score ≥ 4. prolonged mtt was the most common abnormality (19, 36%), and 5 (9.4%) had decreased cbv in the same distribution.
on quantitative analysis, 23 (43%) had a significant abnormality. four patients (7.5%) had prolonged mtt and decreased cbv in the same territory, while 17 (32%) had mismatched abnormalities. when tested in a multivariate model, no significant associations between mismatch abnormalities on ctp and new stroke, tia, or hospitalizations were observed. conclusion: ctp abnormalities are common in tia patients. although no association between these abnormalities and clinical outcomes was observed in this small study, this needs to be studied further.

objectives: we hypothesized that pre-thrombolytic anti-hypertensive treatment (aht) may prolong door-to-treatment time (dtt). methods: secondary data analysis of consecutive tpa-treated patients at 24 randomly selected michigan community hospitals in the instinct trial. dtt among stroke patients who received pre-thrombolytic aht was compared to dtt among those who did not. we then calculated a propensity score for the probability of receiving pre-thrombolytic aht using a logistic regression model with covariates including demographics, stroke risk factors, antiplatelet or beta-blocker as home medication, stroke severity (nihss), onset-to-door time, admission glucose, pretreatment systolic and diastolic blood pressure, ems usage, and location at time of stroke. a paired t-test was then performed to compare the dtt between the propensity-matched groups. a separate generalized estimating equations (gee) approach was also used to estimate the differences between patients receiving pre-thrombolytic aht and those who did not while accounting for within-hospital clustering. results: a total of 557 patients were included in instinct; however, onset, arrival, or treatment times could not be determined in 23, leaving 534 patients for this analysis. the unmatched cohort consisted of 95 stroke patients who received pre-thrombolytic aht and 439 stroke patients who did not receive aht from 2007-2010 (table).
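the propensity-matching step described in the methods above can be sketched as greedy 1:1 nearest-neighbor matching on the estimated score; the scores and caliper below are hypothetical, for illustration only (the abstract does not specify the study's actual matching algorithm):

```python
def greedy_match(treated, control, caliper=0.05):
    """pair each treated unit with the nearest unused control unit
    by propensity score, without replacement, within a caliper."""
    pairs = []
    unused = dict(control)  # id -> score, controls still available
    for tid, score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not unused:
            break
        cid = min(unused, key=lambda c: abs(unused[c] - score))
        if abs(unused[cid] - score) <= caliper:
            pairs.append((tid, cid))
            del unused[cid]
    return pairs

# hypothetical scores: two aht patients, three non-aht patients
treated = {"t1": 0.30, "t2": 0.62}
control = {"c1": 0.28, "c2": 0.61, "c3": 0.90}
pairs = greedy_match(treated, control)
```

the paired t-test on dtt would then be run over the matched pairs returned.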
in the unmatched cohort, patients who received pre-thrombolytic aht had a longer dtt (mean increase 9 minutes; 95% confidence interval (ci) 2-16 minutes) than patients who did not receive pre-thrombolytic aht. after propensity matching (table), patients who received pre-thrombolytic aht had a longer dtt (mean increase 10.4 minutes, 95% ci 1.9-18.8) than patients who did not receive pre-thrombolytic aht. this effect persisted and its magnitude was not altered by accounting for clustering within hospitals. conclusion: pre-thrombolytic aht is associated with modest delays in dtt. this represents a feasible target for physician educational interventions and quality improvement initiatives. further research evaluating optimum pre-thrombolytic hypertension management is warranted.

post-pds, 7% had only pre-pds, and 9% had both. the most common pds included failure to treat post-treatment hypertension (131, 24%), antiplatelet agent within 24 hours of treatment (61, 11%), pre-treatment blood pressure over 185/110 (39, 7%), anticoagulant agent within 24 hours of treatment (31, 6%), and treatment outside the time window (29, 5%). symptomatic intracranial hemorrhage (sich) was observed in 7.3% of patients with pds and in 6.5% of patients without any pd. in-hospital case fatality was 12% with and 10% without a pd. in the fully adjusted model, older age was significantly associated with pre-pds (table). when post-pds were evaluated with adjustment for pre-pds, age was not associated with pds; however, pre-pds were associated with post-pds. conclusion: older age was associated with increased odds of pre-pds in michigan community hospitals. pre-pds were associated with post-pds. sich and in-hospital case fatality were not associated with pds; however, the low number of such events limited our ability to detect a difference.
ct

background: mri has become the gold standard for the detection of cerebral ischemia and is a component of multiple imaging-enhanced clinical risk prediction rules for the short-term risk of stroke in patients with transient ischemic attack (tia). however, it is not always available in the emergency department (ed) and is often contraindicated. leukoaraiosis (la) is a radiographic term for white matter ischemic changes, and has recently been shown to be independently predictive of disabling stroke. although it is easily detected by both ct and mri, their comparative ability is unknown. objectives: we sought to determine whether leukoaraiosis, when combined with evidence of acute or old infarction as detected by ct, achieved similar sensitivity to mri in patients presenting to the ed with tia. methods: we conducted a retrospective review of consecutive patients diagnosed with tia between june 2009 and july 2011 who underwent both ct and mri as part of routine care within 1 calendar day of presentation to a single, academic ed. ct and mr images were reviewed by a single emergency physician who was blinded to the mr images at the time of ct interpretation. la was graded using the van swieten scale (vss), a validated grading scale applicable to both ct and mri. anterior and posterior regions were graded independently from 0 to 2. results: 361 patients were diagnosed with tia during the study period. of these, 194 had both ct and mri.

background: helping others is often a rewarding experience but can also come with a ''cost of caring,'' also known as compassion fatigue (cf). cf can be defined as the emotional and physical toll suffered by those helping others in distress. it is affected by three major components: compassion satisfaction (cs), burnout (bo), and traumatic experiences (te). previous literature has recognized an increase in bo related to work hours and stress among resident physicians.
objectives: to assess the state of cf among residents with regard to differences in specialty training, hours worked, number of overnights, and demands of child care. we aim to measure associations with the three components of cf (cs, bo, and te). methods: we used the previously validated survey, proqol 5. the survey was sent to the residents after approval from the irb and the program directors. results: a total of 193 responses were received (40% of the 478 surveyed). five were excluded due to incomplete questionnaires. we found that residents who worked more hours per week had significantly higher bo levels (median 25 vs 21, p = 0.038) and higher te (22 vs 19, p = 0.048) than those working fewer hours. there was no difference in cs (42 vs 40, p = 0.73). eighteen percent of the residents worked a majority of the night shifts. these residents had higher levels of bo.

background: emergency department (ed) billing includes both facility and professional fees. an algorithm derived from the medical provider's chart generates the latter fee. many private hospitals encourage appropriate documentation by financially incentivizing providers. academic hospitals sometimes lag in this initiative, possibly resulting in less than optimal charting. past attempts to teach proper documentation using our electronic medical record (emr) were difficult in our urban, academic ed of 80 providers (approximately 25 attending physicians, 36 residents, and 20 physician assistants). objectives: we created a tutorial to teach documentation of ed charts, modified the emr to encourage appropriate documentation, and provided feedback from the coding department. this was combined with an incentive structure shared equally amongst all attendings, based on increased collections. we hypothesized this instructional intervention would lead to more appropriate billing, improve chart content, decrease medical liability, and increase the educational value of the charting process.
methods: documentation recommendations, divided into two-month phases of 2-3 proposals, were administered to all ed providers by e-mails, lectures, and reminders during sign-out rounds. charts were reviewed by coders who provided individual feedback if specific phase recommendations were not followed. our endpoints included change in total rvu, rvus/patient, e/m level distribution, and subjective quality of chart improvement. we did not examine effects on procedure codes or facility fees. results: our baseline average rvu/patient in our ed from 1/1/11-6/30/11 was 2.615, with monthly variability of approximately 2%. implementation of phase one increased average rvu/patient within two weeks to 2.73 (4.4% increase from baseline, p < 0.05). the second aggregate phase, implemented 8 weeks later, increased average rvu/patient to 3.04 (16.4% increase from baseline, p < 0.05). conclusion: using our teaching methods, chart reviews focused on 2-3 recommendations at a time, and emr adjustments, we were able to better reflect the complexity of care that we deliver every day in our medical charts. future phases will focus on appropriate documentation for procedures, critical care, fast track, and pediatric patients, as well as examining correlations between increases in rvus and charge capture.

identifying mentoring ''best practices'' for medical school faculty. julie l. welch, teresita bellido, cherri d. hobgood. background: mentoring has been identified as an essential component for career success and satisfaction in academic medicine. many institutions and departments struggle with providing both basic and transformative mentoring for their faculty. objectives: we sought to identify and understand the essential practices of successful mentoring programs.
methods: multidisciplinary institutional stakeholders in the school of medicine including tenured professors, deans, and faculty acknowledged as successful mentors were identified and participated in focused interviews between mar-nov 2011. the major area of inquiry involved their experiences with mentoring relationships, practices, and structure within the school, department, or division. focused interview data were transcribed and grounded theory analysis was performed. additional data collected by a 2009 institutional mentoring taskforce were examined. key elements and themes were identified and organized for final review. results: results identified the mentoring practices for three categories: 1) general themes for all faculty, 2) specific practices for faculty groups: basic science researchers, clinician researchers, clinician educators, and 3) national examples. additional mentoring strategies that failed were identified. the general themes were quite universal among faculty groups. these included: clarify the best type of mentoring for the mentee, allow the mentee to choose the mentor, establish a panel of mentors with complementary skills, schedule regular meetings, establish a clear mentoring plan with expectations and goals, offer training and resources for both the mentor and mentee at institutional and departmental levels, ensure ongoing mentoring evaluation, create a mechanism to identify and reward mentoring. national practice examples offered critical recommendations to address multi-generational attitudes and faculty diversity in terms of gender, race, and culture. conclusion: mentoring strategies can be identified to serve a diverse faculty in academic medicine. interventions to improve mentoring practices should be targeted at the level of the institution, department, and individual faculty members. 
it is imperative to adopt results such as these to design effective mentoring programs to enhance the success of emergency medicine faculty seeking robust academic careers.

background: women comprise half of the talent pool from which the specialty of emergency medicine draws future leaders, researchers, and educators, and yet only 5% of full professors in us emergency medicine are female. both research and interventions are aimed at reducing the gender gap; however, it will take decades for the benefits to be realized, which creates a methodological challenge in assessing systems change. current techniques to measure disparities are insensitive to systems change as they are limited to percentages and trends over time. objectives: to determine if the use of the relative rate index (rri) better predicts at which stage in the system women are not advancing in the academic pipeline than traditional metrics. methods: rri is a method of analysis that assesses the percent of sub-populations in each stage relative to their representation in the stage directly prior. it thus captures advancement relative to the pool eligible to advance. rri also standardizes data for ease of interpretation. this study was conducted on the total population of academic professors in all departments at yale school of medicine during the academic year of 2010-2011. data were obtained from the yale university provost's office. results: n = 1305. there were a total of 402 full, 429 associate, and 484 assistant professors. males comprised 78%, 59%, and 54%, respectively. rri for the department of emergency medicine (dem) was 0.67, 1.93, and 0.78 for full, associate, and assistant professors, respectively, while the percentages were 44%, 60%, and 33%, respectively. conclusion: relying solely on percentages masks improvements to the system. women are most represented at the associate professor level in dem, highlighting the importance of systems-change evidence.
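the rri computation described in the methods above reduces to dividing each stage's subgroup share by the subgroup's share of the stage directly prior. a minimal sketch (function name and counts are mine, not the yale data):

```python
def relative_rate_index(stage_counts):
    """Relative rate index: for each stage transition, the subgroup's
    share of the later stage divided by its share of the stage prior.

    stage_counts: list of (subgroup_n, total_n) pairs ordered from the
    entry stage (e.g. assistant) to the top stage (e.g. full professor).
    Returns one RRI per transition.
    """
    shares = [sub / total for sub, total in stage_counts]
    return [shares[i + 1] / shares[i] for i in range(len(shares) - 1)]

# hypothetical counts of women: 120/300 assistant, 90/250 associate,
# 40/200 full professors
rri = relative_rate_index([(120, 300), (90, 250), (40, 200)])
print(rri)  # rri per transition: assistant->associate, associate->full
```

an rri near 1 means the subgroup advances in proportion to its availability in the prior stage; values below 1 flag the transition where advancement is lost.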
specifically, twice as many women are promoted to associate professor rank given the number who exist as assistant professors. within 5 years, the dem should have an equal system, as the numbers of associate professors have dramatically increased and will be eligible for promotion to full professor. additionally, dem has a better record of retaining and promoting women than other yale departments of medicine at both associate and full professor ranks.

objectives: we examine the payer mixes of community non-rehabilitation eds in metropolitan areas by region to identify the proportion of academic and non-academic eds that could be considered safety net eds. we hypothesize that the proportion of safety net academic eds is greater than that for non-academic eds and is increasing over time. methods: this is an ecological study examining us ed visits from 2006 through 2008. data were obtained from the nationwide emergency department sample (neds). we grouped each ed visit according to the unique hospital-based ed identifier, thus creating a payer mix for each ed. we define a ''safety net ed'' as any ed where the payer mix satisfied any one of the following three conditions: 1) >30% of all ed visits are medicaid patients; 2) >30% of all ed visits are self-pay patients; or 3) >40% of all ed visits are either medicaid or self-pay patients. neds tags each ed with a hospital-based variable to delineate metropolitan/non-metropolitan locations and academic affiliation. we chose to examine a subpopulation of eds tagged as either academic metropolitan or non-academic metropolitan, because the teaching status of non-metropolitan hospitals was not provided. we then measured the proportion of eds that met safety net criteria by academic status and region. results: we examined 2,821, 2,793, and 2,844 weighted metro eds in years 2006-2008, respectively. table 1 presents safety net proportions. the proportions of academic safety net eds increased across the study period.
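the three-condition safety net definition above can be expressed as a small predicate over an ed's payer mix (function name is mine; thresholds are taken directly from the definition):

```python
def is_safety_net_ed(medicaid_frac, self_pay_frac):
    """Safety net ED per the definition above: >30% Medicaid visits,
    >30% self-pay visits, or >40% Medicaid plus self-pay combined."""
    return (medicaid_frac > 0.30
            or self_pay_frac > 0.30
            or medicaid_frac + self_pay_frac > 0.40)

print(is_safety_net_ed(0.25, 0.20))  # combined 45% of visits -> True
print(is_safety_net_ed(0.10, 0.10))  # no condition met -> False
```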
widespread regional variability in safety net proportions existed across all years. the proportions of safety net eds were highest in the south and lowest in the northeast and midwest. table 2 describes these findings for 2008. conclusion: these data suggest that the proportion of safety-net academic eds may be greater than that of non-academic eds and is increasing over time.

objectives: to examine the effect of ma health reform implementation on ed and hospital utilization before and after health reform, using an approach that relies on differential changes in insurance rates across different areas of the state in order to make causal inferences as to the effect of health reform on ed visits and hospitalizations. our hypothesis was that health care reform (i.e., reducing rates of uninsurance) would result in increased rates of ed use and hospitalizations. methods: we used a novel difference-in-differences approach, with geographic variation (at the zip code level) in the percentage uninsured as our method of identifying changes resulting from health reform, to determine the specific effect of massachusetts' health care reform on ed utilization and hospitalizations. using administrative data available from the massachusetts division of health care finance and policy acute hospital case mix databases, we compared a one-year period before health reform with an identical period after reform. we fit linear regression models at the area-quarter level to estimate the effect of health reform and the changing uninsurance rate (defined as self-pay only) on ed visits and hospitalizations. results: there were 2,562,330 ed visits and 777,357 hospitalizations pre-reform and 2,713,726 ed visits and 787,700 hospitalizations post-reform. the rate of uninsurance decreased from 6.2% to 3.7% in the ed group and from 1.3% to 0.6% in the hospitalization group.
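the difference-in-differences logic described above compares changes over time between areas more and less exposed to reform. the study uses continuous zip-level variation in a regression, but the core estimator reduces to a two-by-two comparison; a minimal sketch with hypothetical visit rates (not the massachusetts data):

```python
def did_estimate(pre_treat, post_treat, pre_control, post_control):
    """Difference-in-differences: the change in the treated group's
    outcome minus the change in the control group's outcome, which
    nets out trends common to both groups."""
    return (post_treat - pre_treat) - (post_control - pre_control)

# hypothetical ED visit rates per 1,000 residents: areas with large
# uninsurance declines ("treated") vs. areas with small declines
print(did_estimate(pre_treat=400, post_treat=425,
                   pre_control=410, post_control=420))  # -> 15
```

here the treated areas rose by 25 and the controls by 10, so 15 visits per 1,000 is attributed to the reform under the parallel-trends assumption.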
a reduction in the rate of the uninsured was associated with a small but statistically significant increase in ed utilization (p = 0.03) and no change in hospitalizations (p = 0.13). conclusion: we find that increasing levels of insurance coverage in massachusetts were associated with small but statistically significant increases in ed visits, but no differences in rates of hospitalizations. these results should aid in planning for anticipated changes that might result from the implementation of health reform nationally.

background: sexually transmitted infections (stis) are associated with high levels of co-morbidity when untreated in adolescents. despite broad cdc screening recommendations, many youth do not receive testing when indicated. the pediatric emergency department (ped) is a venue with a high volume of patients potentially in need of sti testing, but assessing risk in the ped is difficult given constraints on time and privacy. we hypothesized that patients visiting a ped would find an audio-enhanced computer-assisted self-interview (acasi) program to establish sti risk easy to use, and would report a preference for the acasi over other methods of disclosing this information. objectives: to assess acceptability, ease of use, and comfort level of an acasi designed to assess adolescents' risk for stis in the ped. methods: we developed a branch-logic questionnaire and acasi system to determine whether patients aged 15-21 visiting the ped need sti testing, regardless of chief complaint. we obtained consent from participants and guardians. patients completed the acasi in private on a laptop. they read a one-page computer introduction describing study details and completed the acasi. patients rated use of the acasi upon completion using five-point likert scales. results: 2030 eligible patients visited the ped during the study period. we approached 873 (43%) and enrolled and analyzed data for 460/873 (53%). the median time to read the introduction and complete the acasi was 8.2 minutes (interquartile range 6.4-11.5 minutes).
90.7% of patients rated the acasi ''very easy'' or ''easy'' to use, 90.6% rated the wording as ''very easy'' or ''easy'' to understand, 60% rated the acasi ''very short'' or ''short,'' 60.3% rated the audio as ''very helpful'' or ''helpful,'' 82.9% were ''very comfortable'' or ''comfortable'' with the system confidentiality, and 71.2% said they would prefer a computer interface over in-person interviews or written surveys for collection of this type of information. conclusion: patients rated the computer interface of the acasi as easy and comfortable to use. a median of 8.2 minutes was needed to obtain meaningful clinical information. the acasi is a promising approach to enhance the collection of sensitive information in the ped.

the participants were randomized to one of three conditions, bi delivered by a computer (cbi), bi delivered by a therapist assisted by a computer (tbi), or control, and completed 3-, 6-, and 12-month follow-up. in addition to content on alcohol misuse and peer violence, adolescents reporting dating violence received a tailored module on dating violence. the main outcome for this analysis was frequency of moderate and severe dating victimization and aggression at the baseline assessment and 3, 6, and 12 months post ed visit. results: among eligible adolescents, 55% (n = 397) reported dating violence and were included in these analyses. compared to controls, after controlling for baseline dating victimization, participants in the cbi showed reductions in moderate dating victimization at 3 months (or 0.7; ci 0.51-0.99; p < 0.05, effect size 0.12) and 6 months (or 0.56; ci 0.38-0.83; p < 0.01, effect size 0.18); models examining interaction effects were significant for the cbi on moderate dating victimization at 3 and 6 months. significant interaction effects were found for the tbi on moderate dating victimization at 6 and 12 months and severe dating victimization at 3 months.
the computer-based intervention shows promise for delivering content that decreases moderate dating victimization over 6 months. the therapist bi is promising for decreasing moderate dating victimization over 12 months and severe dating victimization over 3 months. ed-based bis delivered on a computer addressing multiple risk behaviors could have important public health effects.

figure 1. the 21-only ordinance was associated with a significant reduction of ar visits. this ordinance was also associated with reduction in underage ar visits, ui student visits, and public intoxication bookings. these data suggest that other cities should consider similar ordinances to prevent unwanted consequences of alcohol.

background: prehospital providers perform tracheal intubation in the prehospital environment, and failed attempts are of concern due to the danger of hypoxia and hypotension. some question the appropriateness of intubation in this setting due to the morbidity risk associated with intubation in the field. thus it is important to gain an understanding of the factors that predict the success of prehospital intubation attempts to inform this discussion. objectives: to determine the factors that affect success rates on first attempt of paramedic intubations in a rapid sequence intubation (rsi) capable critical care transport service. methods: we conducted a multivariate logistic analysis on a prospectively collected database of airway management from an air and land critical care transport service that provides scene responses and interfacility transport in the province of ontario.

background: motor vehicle collisions (mvcs) are one of the most common types of trauma for which people seek ed care. the vast majority of these patients are discharged home after evaluation. acute psychological distress after trauma causes great suffering and is a known predictor of posttraumatic stress disorder (ptsd) development.
however, the incidence and predictors of psychological distress among patients discharged to home from the ed after mvcs have not been reported. objectives: to examine the incidence and predictors of acute psychological distress among individuals seen in the ed after mvcs and discharged to home. methods: we analyzed data from a prospective observational study of adults 18-64 years of age presenting to one of eight ed study sites after mvc between 02/2009 and 10/2011. english-speaking patients who were alert and oriented, stable, and without injuries requiring hospital admission were enrolled. patient interview included assessment of patient sociodemographic and psychological characteristics and mvc characteristics. level of psychological distress in the ed was assessed using the 13-item peritraumatic distress inventory (pdi). pdi scores >23 are associated with increased risk of ptsd and were used to define substantial psychological distress. descriptive statistics and logistic regression were performed using stata ic 11.0 (statacorp lp, college station, texas). results: 9339 mvc patients were screened, 1584 were eligible, and 949 were enrolled. 361/949 (38%) participants had substantial psychological distress. after adjusting for crash severity (severity of vehicle damage, vehicle speed), substantial patient distress was predicted by sociodemographic factors, pre-mvc depressive symptoms, and arriving to the ed on a backboard (table). conclusion: substantial psychological distress is common among individuals discharged from the ed after mvcs and is predicted by patient characteristics separate from mvc severity. a better understanding of the frequency and predictors of substantial psychological distress is an important first step in identifying these patients and developing effective interventions to reduce severe distress in the aftermath of trauma.
such interventions have the potential to reduce both immediate patient suffering and the development of persistent psychological sequelae.

the predictive characteristics of pets, pesi, and spesi for 30-day mortality in emperor, including auc, negative predictive value, sensitivity, and specificity, were calculated. results: the 646 of 1438 patients (44.9%; 95% ci 42.3%-47.5%) classified as pets low had 30-day mortality of 0.5% (95% ci 0.1-1.5%), versus 10.2% (95% ci 8.0%-12.4%) in the pets high group, statistically similar to pesi and spesi. pets is significantly more specific for mortality than the spesi (47.0% vs 37.6%; p < 0.0001), classifying far more patients as low-risk while maintaining a sensitivity of 96% (95% ci 88.3%-99.0%), not significantly different from spesi or pesi (p > 0.05). conclusion: with four variables, pets in this derivation cohort is as sensitive for 30-day mortality as the more complicated pesi and spesi, with significantly greater specificity than the spesi for mortality, placing 25% more patients in the low-risk group. external validation is necessary.

nicole seleno, jody vogel, michael liao, emily hopkins, richard byyny, ernest moore, craig gravitz, jason haukoos, denver health medical center, denver, co

background: the sequential organ failure assessment (sofa) score, base excess, and lactate have been shown to be associated with mortality in critically ill trauma patients. the denver emergency department (ed) trauma organ failure (tof) score was recently derived and internally validated to predict multiple organ failure in trauma patients. the relationship between the denver tof score and mortality has not been assessed or compared to other conventional measures of mortality in trauma. objectives: to compare the prognostic accuracies of the denver ed tof score, ed sofa score, and ed base excess and lactate for mortality in a large heterogeneous trauma population.
methods: a secondary analysis of data from the denver health trauma registry, a prospectively collected database. consecutive adult trauma patients from 2005 through 2008 were included in the study. data collected included demographics, injury characteristics, prehospital care characteristics, response to injury characteristics, ed diagnostic evaluation and interventions, and in-hospital mortality. the values of the four clinically relevant measures (denver ed tof score, ed sofa score, ed base excess, and ed lactate) were determined within four hours of patient arrival, and prognostic accuracies for in-hospital mortality for the four measures were evaluated with receiver operating characteristic (roc) curves. multiple imputation was used for missing values. results: of the 4,355 patients, the median age was 37 (iqr 26-51) years, median injury severity score was 9 (iqr 4-16), and 81% had blunt mechanisms. thirty-eight percent (1,670 patients) were admitted to the icu with a median icu length of stay of 2.5 (iqr 1-8) days, and 3% (138 patients) died. in the non-survivors, the median values for the four measures were ed sofa 5.0 (iqr 0.0-8.0); denver ed tof 4.0 (iqr 4.0-5.0); ed base excess 7.0 (iqr 8.0-19.0) meq/l; and ed lactate 6.5 (iqr 4.5-11.8) mmol/l. the areas under the roc curves for these measures are demonstrated in the figure. conclusion: the denver ed tof score more accurately predicts in-hospital mortality in trauma patients as compared to the ed sofa score, ed base excess, or ed lactate. the denver ed tof score may help identify patients early who are at risk for mortality, allowing for targeted resuscitation and secondary triage to improve outcomes in these critically ill patients.

background: both animal and human studies suggest that early initiation of therapeutic hypothermia (th) and rapid cooling improve outcomes after cardiac arrest.
objectives: the objective was to determine if administration of cold iv fluids in a prehospital setting decreased time-to-target-temperature (tt), with secondary analysis of effects on mortality and neurological outcome. methods: patients resuscitated after out-of-hospital cardiac arrest (oohca) who received an in-hospital post cardiac arrest bundle including th were prospectively enrolled into a quality assurance database from november 2007 to november 2011. on april 1, 2009, a protocol for intra-arrest prehospital cooling with 4°c normal saline in patients experiencing oohca was initiated. we retrospectively compared tt for those receiving prehospital cold fluids and those not receiving cold fluids. tt was defined as 34°c measured via foley thermistor. secondary outcomes included mortality, good neurological outcome defined as cerebral performance category (cpc) score of 1 or 2 at discharge, and effects of pre-rosc cooling. results: 132 patients were included in this analysis, with 80 patients receiving prehospital cold iv fluids and 52 who did not. initially, 63% of patients were in vf/vt and 36% asystole/pea. patients receiving prehospital cooling did not have a significant improvement in tt (256 minutes vs 271 minutes, p = 0.64). survival to discharge and good neurologic outcome were not associated with prehospital cooling (54% vs 50%, p = 0.67; cpc of 1 or 2 in 49% vs 44%, p = 0.61). initiating cold fluids prior to rosc showed both a nonsignificant decrease in survival (48% vs 56%, p = 0.35) and an increase in poor neurologic outcomes (42% vs 50%, p = 0.39). 77% of patients received ≤1 l of cooled ivf prior to hospital arrival. patients receiving prehospital cold ivf had a longer time from arrest to hospital arrival (44 vs 34 min, p < 0.001) in addition to a prolonged rosc-to-hospital time (20 vs 12 min, p = 0.005).
conclusion: at our urban hospital, patients achieving rosc following oohca did not demonstrate faster tt or outcome improvement with prehospital cooling compared to cooling initiated immediately upon ed arrival. further research is needed to assess the utility of prehospital cooling.

background: an estimated 10% of emergency department (ed) patients 65 years of age and older have delirium, which is associated with short- and long-term risk of morbidity and mortality. early recognition could result in improved outcomes, but the reliability of delirium recognition in the continuum of emergency care is unknown. objectives: we tested whether delirium can be reliably detected during emergency care of elderly patients by measuring the agreement between prehospital providers, ed physicians, and trained research assistants using the confusion assessment method for the icu (cam-icu) to identify the presence of delirium. our hypothesis was that both ed physicians and prehospital providers would have poor ability to detect elements of delirium in an unstructured setting. methods: prehospital providers and ed physicians completed identical questionnaires regarding their clinical encounter with a convenience sample of elderly (age >65 years) patients who presented via ambulance to two urban, teaching eds over a three-month period. respondents noted the presence or absence of (1) an acute change in mental status, (2) inattention, (3) disorganized thinking, and (4) altered level of consciousness (using the richmond agitation sedation scale). these four components comprise the operational definition of delirium. a research assistant trained in the cam-icu rated each component for the same patients using a standard procedure. we calculated inter-rater reliability (kappa) between prehospital providers, ed physicians, and research assistants for each component.
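the component-level agreement statistic referred to above is cohen's kappa, which corrects observed agreement for the agreement expected by chance. a minimal sketch for two raters (toy ratings, not the study data):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # observed proportion of items on which the raters agree
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement from each rater's marginal category frequencies
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# toy example: presence (1) / absence (0) of inattention per patient,
# rated by a clinician and a trained research assistant
a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
b = [1, 0, 0, 0, 1, 0, 1, 0, 1, 0]
print(round(cohens_kappa(a, b), 3))  # -> 0.583
```

here the raters agree on 8 of 10 patients (80%), but because chance agreement is 52%, kappa is a more modest 0.58, i.e. moderate agreement.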
objectives: this study aimed to assess the association between age and ems use while controlling for potential confounders. we hypothesized that this association would persist after controlling for confounders. methods: a cross-sectional survey study was conducted at an academic medical center's ed. an interview-based survey was administered and included questions regarding demographic and clinical characteristics, mode of ed arrival, health care use, and perceived illness severity. age was modeled as an ordinal variable (<60, 60-79, and ≥80 years). bivariate analyses were used to identify potential confounders and effect measure modifiers, and a multivariable logistic regression model was constructed. odds ratios were calculated as measures of effect. results: a total of 1092 subjects were enrolled and had usable data for all covariates, 465 (43%) of whom arrived via ems. the median age of the sample was 60 years and 52% were female. there was a statistically significant linear trend in the proportion of subjects who arrived via ems by age (p < 0.0001). compared to adults aged less than 60 years, the unadjusted odds ratio associating age and ems use was 1.41 (95% ci: …).

background: we previously derived a clinical decision rule (cdr) for chest radiography (cxr) in patients with chest pain and possible acute coronary syndrome (acs) consisting of the absence of three predictors: history of congestive heart failure, history of smoking, and abnormalities on lung auscultation. objectives: to prospectively validate and refine a cdr for cxr in an independent patient population. methods: we prospectively enrolled patients over 24 years of age with a primary complaint of chest pain and possible acs from september 2009 to january 2010 at a tertiary care ed with 73,000 annual patient visits. physicians completed standardized data collection forms before ordering chest radiographs and were thus blinded to cxr findings at the time of data collection.
two investigators, blinded to the predictor variables, independently classified cxrs as ''normal,'' ''abnormal not requiring intervention,'' and ''abnormal requiring intervention'' (e.g., heart failure, infiltrates) based on review of the radiology report and the medical record. analyses included descriptive statistics, inter-rater reliability assessment (kappa), and recursive partitioning. results: of 1159 visits for possible acs, mean age (sd) was 60.3 (15.6) and 51% were female. twenty-four percent had a history of acute myocardial infarction, 10% congestive heart failure, and 11% atrial fibrillation. seventy-one (6.1%, 95% ci 4.9-7.7) patients had a radiographic abnormality requiring intervention.

background: assessing the likelihood of coronary artery disease (cad) could reduce the need for stress testing or coronary imaging. acyl-coa:cholesterol acyltransferase-2 (acat2) activity has been shown in monkey and murine models to correlate with atherosclerosis. objectives: to determine if a novel cardiac biomarker consisting of plasma cholesteryl ester levels (ce) typically derived from the activity of acat2 is predictive of cad in a clinical model. methods: a single center prospective observational cohort design enrolled a convenience sample of subjects from a tertiary care center with symptoms of acute coronary syndrome undergoing coronary ct angiography or invasive angiography. plasma samples were analyzed for ce composition with mass spectrometry. the primary endpoint was any cad determined at angiography. multivariable logistic regression analyses were used to estimate the relationship between the sum of the plasma concentrations of cholesteryl palmitoleate (16:1) and cholesteryl oleate (18:1) (defined as acat2-ce) and the presence of cad. the added value of acat2-ce to the model was analyzed by comparing the c-statistics and integrated discrimination improvement (idi).
results: the study cohort comprised 113 participants enrolled over 24 months, with a mean age of 49 (±11.7) years; 59% had cad at angiography. the median plasma concentration of acat2-ce was 938 µm (758, 1099) in patients with cad and 824 µm (683, 998) in patients without cad (p = 0.03) (figure). when considered with age, sex, and the number of conventional cad risk factors, acat2-ce were associated with a 6.5% increase in the odds of having cad per 10 µm increase in concentration. the addition of acat2-ce significantly improved the c-statistic (0.89 vs 0.95, p = 0.0035) and idi (0.15, p < 0.001) compared to the reduced model. in the subgroup of low-risk observation unit patients, the ce model had superior discrimination compared to the diamond-forrester classification (idi 0.403, p < 0.001). conclusion: plasma levels of acat2-ce, considered in a clinical model, have strong potential to predict a patient's likelihood of having cad. in turn, this could reduce the need for cardiac imaging after the exclusion of mi. further study of acat2-ce as biomarkers in patients with suspected acs is needed.

background: outpatient studies have demonstrated a correlation between carotid intima-media thickness (cimt) on ultrasound and coronary artery disease (cad). there are no known published studies that investigate the role of cimt in the ed using cardiac ct or percutaneous cardiac intervention (pci) as a gold standard. objectives: we hypothesized that cimt can predict cardiovascular events and serve as a noninvasive tool in the ed. methods: this was a prospective study of adult patients who presented to the ed and required evaluation for chest pain. the study location was an urban ed with a census of 120,000 annual visits and 24-hour cardiac catheterization. patients who did not have ct or pci or who had carotid surgery were excluded from the study.
ultrasound cimt measurements of the right and left common carotid arteries were taken with a 10 mhz linear transducer (zonare, mountain view, ca). anterior, medial, and posterior views of the near and far wall were obtained (12 cimt scores total). images were analyzed by carotid analyzer 5 (medical imaging applications llc, coralville, iowa). patients were classified into two groups based on the results from ct or pci. a subject was classified as having significant cad if there was over 70% occlusion or multi-vessel disease. results: ninety of 102 patients were included in the study; 55.7% were males. mean age was 56.6 ± 13 years. there were 34 (37.8%) subjects with significant cad and 56 (62.2%) with non-significant cad. the mean of all 12 cimt measurements was significantly higher in the cad group than in the non-cad group (0.60 ± 0.20 vs. 0.35 ± 0.23; p < 0.00001). a logistic regression analysis was carried out with significant cad as the event of interest and the following explanatory variables in the model: …

objectives: to determine the diagnostic yield of routine testing in-hospital or following ed discharge among patients presenting to an ed following syncope. methods: a prospective, observational, cohort study of consecutive ed patients ≥18 years old presenting with syncope was conducted. the four most commonly utilized tests (echocardiography, telemetry, ambulatory electrocardiography monitoring, and cardiac markers) were studied. interobserver agreement as to whether test results determined the etiology of the syncope was measured using kappa (k) values. results: of 570 patients with syncope, 150 (26%) had echocardiography, with 33 (6%) demonstrating a likely etiology of the syncopal event such as critical valvular disease or significantly depressed left ventricular function (k = 0.78). on hospitalization, 349 (61%) patients were placed on telemetry; 19 (3%) of these had worrisome dysrhythmias (k = 0.66).
317 (55%) patients had troponin levels drawn, of whom 19 (3%) had positive results (k = 1); 56 (10%) patients were discharged with monitoring, with significant findings in only 2 (0.4%) patients (k = 0.65). overall, 73 (8%, 95% ci 7-10%) studies were diagnostic. conclusion: although routine testing is prevalent in ed patients with syncope, the diagnostic yield is relatively low. nevertheless, some testing, particularly echocardiography, may yield critical findings in some cases. current efforts to reduce the cost of medical care by eliminating non-diagnostic medical testing and increasing emphasis on practicing evidence-based medicine argue for more discriminate testing when evaluating syncope. (originally submitted as a ''late-breaker.'')

unusual fatigue was reported by 70.7% (severe 29.7%) and insomnia by 47.8% (severe 21.0%). these findings have led to risk management recommendations to consider these symptoms as predictive of acute coronary syndromes (acs) among women visiting the ed. objectives: to document the prevalence of these symptoms among all women visiting an ed, and to analyze the potential effect of using these symptoms in the ed diagnostic process for acs. methods: a survey on fatigue and insomnia symptoms was administered to a convenience sample of adult women visiting an urban academic ed (all arrival modes, acuity levels, and complaints). a sensitivity analysis was performed using published data and expert opinion for inputs. results: we approached 548 women, with 379 enrollments. see table. the top box shows prevalences of prodromal symptoms among all adult female ed patients. the bottom box shows outputs from the sensitivity analysis on the diagnostic effect of initiating an acs workup for all female ed patients reporting prodromal symptoms. conclusion: prodromal symptoms of acs are highly prevalent among all adult women visiting the ed in this study. this likely limits their utility in ed settings.
while screening or admitting women with prodromal symptoms in the ed would probably increase sensitivity, that increase would be accompanied by a dramatic reduction in specificity. such a reduction in specificity would translate to admitting, observing, or working up somewhere between 29% and 61% of all women visiting the ed, which is prohibitive in terms of personal costs, risks of hospitalization, and financial costs. while these symptoms may or may not have utility in other settings such as primary care, their prevalence and the implied lack of specificity for acs suggest they will not be clinically useful in the ed. length methods: we examined a cohort of low-risk chest pain patients evaluated in an ed-based ou using prospective and retrospective ou registry data elements. cox proportional hazard modeling was performed to assess the effect of testing modality (stress testing vs. ccta) on the los in the cdu. as ccta is not available on weekends, only subjects presenting on weekdays were included. cox models were stratified on time of patient presentation to the ed, based on four-hour blocks beginning at midnight. the primary independent variable was first test modality, either stress imaging (exercise echo, dobutamine echo, stress mri) or ccta. age, sex, and race were included as covariates. the proportional hazards assumption was tested using scaled schoenfeld residuals, and the models were graphically examined for outliers and overly influential covariate patterns. test selection was a time varying covariate in the 8am strata, and therefore the interaction with ln (los) was included as a correction term. after correction for multiple comparisons, an alpha of 0.01 was held to be significant. results: over the study period, 841 subjects (of 1,070 in the registry) presented on non-weekend days. the median los was 18.5 hours (iqr 12.4-23.3 hours), 57% were white, and 61% were female.
the table shows the number of subjects in each time stratum, the number tested, and the number undergoing stress testing vs. ccta. after adjusting all models for age, race, and sex, the hazard ratio (hr) for los is as shown. only those patients presenting between 8am and noon noted a significant improvement in los with ccta use (p < 0.0001). objectives: determine the validity of a management-focused em osce as a measure of clinical skills by determining the correlation between osce scores and faculty assessment of student performance in the ed. methods: medical students in a fourth year em clerkship were enrolled in the study. on the final day of the clerkship students participated in a five-station em osce. student performance on the osce was evaluated using a task-based evaluation system with 3-4 critical management tasks per case. task performance was evaluated using a three-point system: performed correctly/timely (2), performed incorrectly/late (1), or not performed (0). descriptive anchors were used for performance criteria. communication skills were also graded on a three-point scale. student performance in the ed was based on traditional faculty assessment using our core-competency evaluation instrument. a pearson correlation coefficient was calculated for the relationship between osce score and ed performance score. case item analysis included determination of difficulty and discrimination. the acgme also requires that trainees are evaluated on these 6ccs during their residency. trainee evaluation in the 6ccs is frequently on a subjective rating scale. one of the recognized problems with a subjective scale is the rating stringency of the rater, commonly known as the hawk-dove effect. this has been seen in standardized clinical exam scoring. recent data have shown that score variance can be related to evaluator performance with a negative correlation. higher-scoring physicians were more likely to be a stringent or hawk type rater on the same evaluation.
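the pearson correlation used in the osce abstract above (and again in the hawk-dove evaluation study) has a direct closed form; a minimal sketch, with invented score vectors rather than study data:

```python
# pearson correlation coefficient between two paired score lists.
# illustrative sketch only; the example scores are not from either study.
from math import sqrt

def pearson_r(x, y):
    """sample pearson r for paired observations x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance numerator
    sxx = sum((a - mx) ** 2 for a in x)                   # variance numerators
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# usage: a perfectly linear relationship gives r = 1.0,
# a perfectly inverse one gives r = -1.0
r_pos = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
r_neg = pearson_r([1, 2, 3], [3, 2, 1])
```

a hawk-dove pattern in the attending study would have shown up as a negative r between the scores attendings received and the scores they gave; the reported p-values (0.065-0.861) indicate no such correlation was detected.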
it is unclear if this pattern also occurs in the subjective ratings that are commonly used in assessments of the 6ccs. objectives: comparison of attending physician scores on the acgme 6ccs with attending ratings of residents for a negative correlation or hawk-dove effect. methods: residents are routinely evaluated on the 6ccs with a 1-9 numerical rating scale as part of their training. the evaluation database was retrospectively reviewed. residents anonymously scored attending physicians on the 6ccs with a cross-sectional survey that utilized the same rating scale, anchors, and prompts as the resident evaluations. average scores for and by each attending were calculated and a pearson correlation calculated by core competency and overall. results: in this irb-approved study, a total of 43 attending physicians were scored on the 6ccs with 447 evaluations by residents. attendings evaluated 162 residents with a total of 1,678 evaluations completed over a 5-year period. attending mode score was 9, with a range of 2 to 9; resident scores had a mode of 8 with a range of 1 to 9. there was no correlation between the rated performance of the attendings overall or in each 6cc and the scores they gave (p = 0.065-0.861). conclusion: hawk-dove effects can be seen in some scoring systems and have the potential to affect trainee evaluation on the acgme core competencies. however, a negative correlation to support a hawk-dove scoring pattern was not found in em resident evaluations by attending physicians. this study is limited by being a single center study and utilizing grouped data to preserve resident anonymity. background: all acgme-accredited residency programs are required to provide competency-based education and evaluation. graduating residents must demonstrate competency in six key areas.
multiple studies have outlined strategies for evaluating competency, but data regarding residents' self-assessments of these competencies as they progress through training and beyond is scarce. objectives: using data from longitudinal surveys by the american board of emergency medicine, the primary objective of this study was to evaluate if resident self-assessments of performance in required competencies improve over the course of graduate medical training and in the years following. additionally, resident self-assessment of competency in academic medicine was also analyzed. methods: this is a secondary data analysis of data gathered from two rounds of the abem longitudinal study of emergency medicine residents (1996-98 and 2001-03) and three rounds of the abem longitudinal study of emergency physicians (1999, 2004, 2009 ). in both surveys, physicians were asked to rate a list of 18 items in response to the question, ''what is your current level of competence in each of the following aspects of work in em?'' the rated items were grouped according to the acgme required competencies of patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, and system-based practice. an additional category for academic medicine was also added. results: rankings improved in all categories during residency training. rankings in three of the six categories improved from the weak end of the scale to the strong end of the scale. there is a consistent decline in rankings one year after graduation from residency. the greatest drop is in medical knowledge. mean self-ranking in academic medicine competency is uniformly the lowest ranked category for each year. conclusion: while self-assessment is of uncertain value as an objective assessment, these increasing rankings suggest that emergency medicine residency programs are successful at improving residents' confidence in the required areas. 
residents do not feel as confident about academic medicine as they do about the acgme required competencies. the uniform decline in rankings the first year after residency is an area worthy of further inquiry. screening medical student rotators from outside institutions improves overall rotation performance shaneen doctor, troy madsen, susan stroud, megan l. fix university of utah, salt lake city, ut background: emergency medicine is a rapidly growing field. many student rotations are limited in their ability to accommodate all students and must limit the number of students they allow per rotation. we hypothesize that pre-screening visiting student rotators will improve overall student performance. objectives: to assess the effect of applicant screening on overall rotation grade and mean end of shift card scores. methods: we initiated a medical student screening process for all visiting students applying to our 4-week elective em rotation starting in 2008. this consisted of reviewing board scores and requiring a letter of intent. students from our home institution were not screened. all end-of-shift evaluation cards and final rotation grades (honors, high pass, pass, fail) from 2004 to 2011 were analyzed. we identified two cohorts: home students (control) and visiting students. we compared pre-intervention (2004-2008) and post-intervention (2008-2011) scores and grades. end of shift performance scores are recorded using a five-point scale that assesses indicators such as fund of knowledge, judgment, and follow-through to disposition. mean ranks were compared and p-values were calculated using the armitage test of trend and confirmed using t-tests. results: we identified 162 visiting students (91 pre, 81 post) and 160 home students (90 pre, 80 post). 12 (13.2%) visiting students achieved honors pre-intervention while 31 (38.3%) achieved honors post-intervention (p = 0.000093).
no significant difference was seen in home student grades: 28 (31.1%) received honors pre-2008 and 17 (21.3%) received honors post-2008. conclusion: we found that implementation of a screening process for visiting medical students improved overall rotation scores and grades as compared to home students who did not receive screening. screening rotating students may improve the overall quality of applicants and thereby the residency program. background: there are many descriptions in the literature of computer-assisted instruction in medical education, but few studies that compare them to traditional teaching methods. objectives: we sought to compare the suturing skills and confidence of students receiving video preparation before a suturing workshop versus a traditional instructional lecture. methods: 88 first and second year medical students were randomized into two groups. the control group was given a lecture followed by 40 minutes of suturing time. the video group was provided with an online suturing video at home, no lecture, and given 40 minutes of suturing time during the workshop. both groups were asked to rate their confidence before and after the workshop, and their belief in the workshop's effectiveness. each student was also videotaped suturing a pig's foot after the workshop and graded on a previously validated 16-point suturing checklist. 83 videos were scored. results: there was no significant difference between the test scores of the lecture group (m = 11.21, sd = 3.17, n = 42) and the video group (m = 11.27, sd = 2.53, n = 41) using the two-sample independent t-test for equal variances (t(81) = −0.09, p = 0.93). there was a statistically significant difference in the proportion of students scoring correctly for only one point: ''curvature of needle followed'': 25/42 in the lecture group and 35/41 in the video group (chi = 6.92, df = 1, p = 0.008).
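the equal-variance two-sample t statistic in the suturing study can be recomputed directly from the reported summary statistics (mean, sd, n per group); a minimal sketch, assuming only those published summaries:

```python
# pooled-variance two-sample t statistic from summary statistics.
# inputs below are the means, sds, and group sizes reported in the abstract.
from math import sqrt

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """equal-variance two-sample t statistic and degrees of freedom."""
    # pooled variance weights each group's variance by its degrees of freedom
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    t = (mean1 - mean2) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# usage: lecture group vs. video group from the abstract;
# reproduces the reported t(81) of about -0.09 (to input rounding)
t, df = pooled_t(11.21, 3.17, 42, 11.27, 2.53, 41)
```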
students in the video group were found to be 2.45 times more likely to have a neutral or favorable feeling of suturing confidence before the workshop (p = 0.067, ci 0.94-6.4) using a proportional odds model. no association was detected between group assignment and level of suturing confidence after the workshop (p = 0.475). there was also no association detected between group assignment and opinion of the suturing workshop (p = 0.681) using a logistic regression odds model. among those students who indicated a lack of confidence before training, there was no detected association (p = 0.967) between group assignment and having an improved confidence using a logistic regression odds model. conclusion: students in the video group and students in the control group achieved similar levels of suturing skill and confidence, and equal belief in the workshop's effectiveness. this study suggests that video instruction could be a reasonable substitute for lectures in procedural education. background: accurate interpretation of the ecg in the emergency department is not only clinically important but also critical to assess medical knowledge competency. with limitations to expansion of formal didactics, educational technology offers an innovative approach to improve the quality of medical education. objectives: the aim of this study was to assess an online multimedia-based ecg training module evaluating st elevation myocardial infarction (stemi) identification among medical students. methods: a convenience sample of fifty-two medical students on their em rotations at an academic medical center with an em residency program was evaluated in a before-after fashion during a 6-month period. one cardiologist and two ed attending physicians independently validated a standardized exam of ten ecgs: four were normal ecgs, three were classic stemis, and three were subtle stemis. the gold standard for diagnosis was confirmed acute coronary thrombus during cardiac catheterization. 
after evaluating the 10 ecgs, students completed a pre-intervention test wherein they were asked to identify patients who required emergent cardiac catheterization based on the presence or absence of st segment elevation on ecg. students then completed an online interactive multimedia module containing 13 minutes of stemi training based on american heart association/american college of cardiology guidelines on stemi. medical students were asked to complete a post-test of the 10 ecgs after watching online multimedia. objectives: our objective was to quantify the number of pre-verbal pediatric head cts performed at our community hospital that could have been avoided by utilizing the pecarn criteria. methods: we conducted a standardized chart review of all children under the age of 2 who presented to our community hospital and received a head ct between jan 1st, 2010 and dec 31st, 2010. following recommended guidelines for conducting a chart review, we: 1) utilized four blinded chart reviewers, 2) provided specific training, 3) created a standardized data extraction tool, and 4) held periodic meetings to evaluate coding discrepancies. our primary outcome measure was the number of patients who were pecarn negative and received a head ct at our institution. our secondary outcome was to reevaluate the sensitivity and specificity of the pecarn criteria to detect citbi in our cohort. data were analyzed using descriptive statistics and 95% confidence intervals were calculated around proportions using the modified wald method. results: a total of 138 patients under the age of 2 received a head ct at our institution during the study period. 23 patients were excluded from the final analysis because their head cts were not for trauma. the prevalence of a citbi in our cohort was 2.6% (95% ci 0.6%-7.7%). diffusion tensor imaging (dti) measures disruption of axonal integrity on the basis of anisotropic diffusion properties.
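the modified wald (agresti-coull) interval named in the pecarn chart-review methods is easy to reproduce. taking the 2.6% prevalence among the 115 analyzed patients (138 minus 23 excluded) to mean 3 citbi events, which is an inference from the reported percentages rather than a stated count, the sketch below recovers the published 0.6%-7.7% interval:

```python
# modified wald / agresti-coull confidence interval for a proportion.
# the 3/115 input is inferred from the abstract's percentages, not stated directly.
from math import sqrt

def modified_wald_ci(successes, n, z=1.96):
    """agresti-coull 95% interval: add z^2/2 pseudo-successes and z^2 pseudo-trials."""
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    half = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

# usage: 3 events in 115 scans -> roughly (0.006, 0.077),
# matching the reported 95% ci of 0.6%-7.7%
lo, hi = modified_wald_ci(3, 115)
```

the adjustment matters here precisely because the event count is small, where the plain wald interval is known to undercover.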
findings on dti may relate to the injury, as well as the severity of postconcussion syndrome (pcs) following mtbi. objectives: to examine acute anisotropic diffusion properties based on dti in youth with mtbi relative to orthopedic controls and to examine associations between white matter (wm) integrity and pcs symptoms. methods: interim analysis of a prospective casecontrol cohort involving 12 youth ages 11-16 years with mtbi and 10 orthopedic controls requiring extremity radiographs. data collected in ed included demographics, clinical information, and pcs symptoms measured by the postconcussion symptom scale. within 72 hours of injury, symptoms were re-assessed and a 61-direction, diffusion weighted, spin-echo imaging scan was performed on a 3t philips scanner. dti images were analyzed using tract-based spatial statistics. fractional anisotropy (fa), mean diffusivity (md), axial diffusivity (ad), and radial diffusivity were measured. results: there were no group demographic differences between mtbi cases and controls. presenting symptoms within the mtbi group included gcs = 15 83%, loss of consciousness 33%, amnesia 33%, post-traumatic seizure 8%, headache 83%, vomiting 33%, dizziness 42%, and confusion 42%. pcs symptoms were greater in mtbi cases than in the controls at ed visit (30.1 ± 17.0 vs. 15.5 ± 16.8, p < 0.06) and at the time of scan (19.1 ± 12.9 vs. 5.7 ± 6.5, p < 0.01). the mtbi group displayed decreased fa in cerebellum and increased md and ad in the cerebral wm relative to controls (uncorrected p < 0.05). increased fa in cerebral wm was also observed in mtbi patients but the group difference was not significant. pcs symptoms at the time of the scan were positively correlated with fa and inversely correlated with rd in extensive cerebral wm areas (p < 0.05, uncorrected). in addition, pcs symptoms in mtbi patients were also found to be inversely correlated with md, ad, and rd in cerebellum (p < 0.05). 
conclusion: dti detected axonal damage in youth with mtbi which correlated with pcs symptoms. dti performed acutely after injury may augment detection of injury and help prediction of those with worse outcomes. background: sports-related concussion among professional, collegiate, and more recently high school athletes has received much attention from the media and medical community. to our knowledge, there is a paucity of research in regard to sports-related concussion in younger athletes. objectives: the aim of this study was to evaluate parental knowledge of concussion in young children who participate in recreational tackle football. methods: parents/legal guardians of children aged 5-15 years enrolled in recreational tackle football were asked to complete an anonymous questionnaire based on the cdc's heads up: concussion in youth sports quiz. parents were asked about their level of agreement in regard to statements that represent definition, symptoms, and treatment of concussion. results: a total of 310 out of 369 parents voluntarily completed the questionnaire (84% response rate). parent and child demographics are listed in table 1 . ninety four percent of parents believed their child had never suffered a concussion. however, when asked to agree or disagree with statements addressing various aspects of concussion, only 13% (n = 41) could correctly identify all seven statements. most did not identify that a concussion is considered a mild traumatic brain injury and can be achieved from something other than a direct blow to the head. race, sex, and zip code had no significant association with correctly answering statements. education (0.24; p < 0.01) and number of years the child played (0.11; p < 0.05) had a small effect. fifty-three percent of parents reported someone had discussed the definition of concussion with them and 58% the symptoms of concussion. see table 2 for source of information to parents. 
no parent was able to classify all symptoms listed as correctly related or not related to concussion. however, identification of correct concussion definitions correlated with identification of correct symptoms (0.25; p < 0.05). conclusion: while most parents had received some education regarding concussion from a health care provider, important misconceptions remain among parents of young athletes regarding the definition, symptoms, and treatment of concussion. this study highlights the need for health care providers to increase educational efforts among parents of young athletes in regard to concussion. (figure 1). 2/2 (100%) of patients with baseline liver dysfunction were 25(oh)d deficient and 5/6 (83%) of deaths were patients who had insufficient levels of 25(oh)d. there was an inverse association between 25(oh)d level and tnf-α (p = 0.03; figure 2) and il-6 (p = 0.04). background: fever is common in the emergency department (ed), and 90% of those diagnosed with severe sepsis present with fever. despite data suggesting that fever plays an important role in immunity, human data conflict on the effect of antipyretics on clinical outcomes in critically ill adults. objectives: to determine the effect of ed antipyretic administration on 28-day in-hospital mortality in patients with severe sepsis. methods: single-center, retrospective observational cohort study of 171 febrile severe sepsis patients presenting to an urban academic 90,000-visit ed between june 2005 and june 2010. all ed patients meeting the following criteria were included: age ≥ 18, temperature ≥ 38.3°c, suspected infection, and either systolic blood pressure ≤ 90 mmhg after a 30 ml/kg fluid bolus or lactate ≥ 4. patients were excluded for a history of cirrhosis or acetaminophen allergy. antipyretics were defined as acetaminophen, ibuprofen, or ketorolac. results: one hundred thirty-five (78.9%) patients were treated with an antipyretic medication (89.4% acetaminophen).
intubated patients were less likely to receive antipyretic therapy (51.9% vs. 84.0%, p < 0.01), but the groups were otherwise well matched. patients requiring ed intubation (n = 27) had much higher in-hospital mortality (51.9% vs. 7.6%, p < 0.01). patients given an antipyretic in the ed had lower mortality (11.9% vs. 25.0%, p < 0.05). when multivariable logistic regression was used to account for apache-ii, intubation status, and fever magnitude, antipyretic therapy was not associated with mortality (adjusted or 0.97, 0.31-3.06, p = 0.96). conclusion: although patients treated with antipyretic therapy had lower 28-day in-hospital mortality, antipyretic therapy was not independently associated with mortality in multivariable regression analysis. these findings are hypothesis-generating for future clinical trials, as the role of fever control has been largely unexplored in severe sepsis (grant ul1 rr024992, nih-ncrr). and caval index −0.09 ± 0.14 (ci −0.14, −0.05), and all were statistically significant. the groups receiving 10 ml/kg and 30 ml/kg had statistically significant changes in caval index; however the 30 ml/kg group had no significant change in mean ivc diameter. one-way anova differences between the means of all groups were not statistically different. conclusion: overall, there were statistically significant differences in mean ivc-us measurements before and after fluid loading, but not between groups. fasting asymptomatic subjects had a wide inter-subject variation in both baseline ivc-us measurements and fluid-related changes. the wide differences within our 30 ml/kg group may limit conclusions regarding proportionality. there were significant differences in performance on ed measures by ownership (p < 0.0001) and region (p = 0.0002).
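unadjusted group comparisons like the antipyretic mortality contrast above (11.9% vs. 25.0%) are often summarized as an odds ratio with a woolf log-scale interval before any multivariable adjustment. a generic sketch; the 2×2 counts in the usage line are illustrative, not reconstructed study data:

```python
# odds ratio and woolf (log) confidence interval for a 2x2 table
#   [[exposed & event (a),   exposed & no event (b)],
#    [unexposed & event (c), unexposed & no event (d)]]
# illustrative sketch; the example counts are invented.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """odds ratio with 95% ci on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # se of log(or)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# usage with invented counts: 10/100 events vs. 20/100 events
or_, lo, hi = odds_ratio_ci(10, 90, 20, 80)
```

the abstract's point is visible in this framing: the unadjusted comparison favored antipyretics, but the adjusted or of 0.97 (0.31-3.06) shows the apparent benefit did not survive adjustment for illness severity and intubation.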
scores on ed process measures were highest at for-profit hospitals (27% above average) and hospitals in the south (5% above average), and lowest at public hospitals (16% below average) and hospitals in the northeast (8% below average). conclusion: there was considerable variation in performance on the ed measures included in the vbp program by hospital ownership and region. ed directors may come under increasing pressure to improve scores in order to reduce potential financial losses under the program. our data provide early information on the types of hospitals with the greatest opportunity for improvement. methods: design/setting - an independent agency mandated by the government collected and analyzed ed patient experience data using a comprehensive, validated multidimensional instrument and a random periodic sampling methodology of all ed patients. a prospective pre-post experimental study design was employed in the eight community and tertiary care hospitals most affected by crowding. two 5.5 month study periods were evaluated (pre: 28/06-12/12/2010; post: 13/12/2010-29/05/2011). outcomes - the primary outcome was patient perception of wait times and crowding reported as a composite mean score (0-100) from six survey items with higher scores representing better ratings. the overall rating of care by ed patients (composite score) and other dimensions of care were collected as secondary outcomes. all outcomes were compared using chi-square and two-tailed student's t-tests. results: a total of 3774 surveys were completed in both the pre-ocp and post-ocp study periods representing a response rate of 45%. we compared in-patient mortality from ami for patients who lived within either 2.5 miles or 5 miles of a closure but did not need to travel farther to the nearest ed with those who did not.
we used patient-level data from the california office of statewide health and planning development (oshpd) patient discharge database, and locations of patient residence and hospitals were geo-coded to determine any changes in distance to the nearest ed. we applied a generalized linear mixed effects model framework to estimate a patient's likelihood to die in the hospital of ami as a function of being affected by a neighborhood closure event. results background: fragmentation of care has been recognized as a problem in the us health care system. however, little is known about ed utilization after hospitalization, a potential marker of poor outpatient care coordination after discharge, particularly for common inpatient-based procedures. objectives: to determine the frequency and variability in ed visits after common inpatient procedures, how often they result in readmission, and related payments. methods: using national medicare data for 2005-2007, we examined ed visits within 30 days of hospital discharge after six common inpatient procedures: percutaneous coronary intervention, coronary artery bypass grafting (cabg), elective abdominal aortic aneurysm repair, back surgery, hip fracture repair, and colectomy. we categorized hospitals into risk-adjusted quintiles based on the frequency of ed visits after the index hospitalization. we report visits by primary diagnosis icd-9 codes and rates of readmission. we also assessed payments related to these ed visits. results: overall, the highest quintile of hospitals had 30-day ed visit rates that ranged from a low of 17.8% with an associated 7.3% readmission rate (back surgery) to a high of 27.8% with an associated 13.6% readmission rate (cabg). the greatest variability, more than 3-fold, was found among patients undergoing colectomy: the worst-performing hospitals saw 24.1% of their patients experience an ed visit within 30 days while the best-performing hospitals saw 7.4%.
average total payments for the 30-day window from initial discharge across all surgical cohorts varied from $18,912 for patients discharged without a subsequent ed visit; $20,061 for those experiencing an ed visit(s); $38,762 for those readmitted through the ed; and $33,632 for those readmitted from another source. if all patients who did not require readmission also did not incur an ed visit within the 30-day window, this would represent a potential cost savings of $125 million. conclusion: among elderly medicare recipients there was significant variability between hospitals for 30-day ed visits after six common inpatient procedures. the ed visit may be a marker of poor care coordination in the immediate discharge period. this presents an opportunity to improve post-procedure outpatient care coordination which may save costs related to preventable ed visits and subsequent readmissions. objectives: we sought to assess the effect of pharmacist medication review on ed patient care, in particular time from physician order to medication administration for the patient (order-to-med time). methods: we conducted a multi-center, before-after study in two eds (urban academic teaching hospital and suburban community hospital, combined census of 61,000) after implementation of the electronic prospective pharmacy review system (prs). the system allowed a pharmacist to review all ed medication orders electronically at the time of physician order and either approve or alter the order. we studied a 5-month time period before implementation of the system (pre-prs, 7/1/10-11/30/10) and after implementation (post-prs, 7/1/11-11/30/11). we collected data on all ed medication orders including dose, route, class, pharmacist review action, time of physician order, and time of medication administration.
differences in order-to-medication between the pre-and post-prs study periods were compared using a results: ed metrics that were significantly associated with lbtcs varied across ed patient-volume categories (table) . for eds seeing less than 20k patients annually, the percentage of ems arrivals admitted to the hospital and ed square footage were both weakly associated with lbtcs (p = 0.09). for eds seeing at least 20k-39k patients, median ed length of stay (los), percent of patients admitted to hospital through the ed, percent of ems arrivals admitted to hospital, and percent of pediatric patients were all positively associated, while percent of patients admitted to the hospital was negatively associated with lbtcs. for eds seeing 40k-59k, median los and percent of x-rays performed were positively associated, while percent of ekgs performed was negatively associated with lbtcs. for eds seeing 60k-79k, percent of patients admitted to the hospital through the ed was negatively associated and percent of ekgs performed was positively associated with lbtcs. for eds with volume greater than 80k, none of the selected variables were associated with lbtc. conclusion: ed factors that help explain high lbtc rates differ depending on the size of an ed. interventions attempting to improve lbtc rates by modifying ed structure or process will need to consider baseline ed volume as a potential moderating influence. objectives: our study sought to compare bacterial growth of samples taken from surfaces after use of a common approved quat compound and a virtually non-toxic, commercially available solution containing elemental silver (0.02%), hydrogen peroxide (15%), and peroxyacetic acid (20%) (shp) in a working ed. we hypothesized that, based on controlled laboratory data available, shp compound would be more effective on surfaces in an active urban ed. 
methods: we cleaned and then sampled three types of surfaces in the ed (suture cart, wooden railing, and the floor) during midday hours one minute after application of tap water, quat, and shp and then again at 24 hours without additional cleaning. conventional environmental surface surveillance rodac media plates were used for growth assessment. images of bacterial growth were quantified at 24 and 48 hours. standard cleaning procedures by hospital staff were maintained per usual. results: shp was superior to control and quat one minute after application on all three surfaces. quat and water had 10x and 40x more bacterial growth than the surface cleaned with shp, respectively. 24 hours later, the shp area produced fewer colonies sampled from the wooden railing: 4x more bacteria for quat, and 5x for water when compared to shp. 24h cultures from the cart and floor had confluent growth and could not be quantified. conclusion: shp outperforms quat in sterilizing surfaces after one minute application. shp may be a superior agent as a non-toxic, non-corrosive, and effective agent for surfaces in the demanding ed setting. further studies should examine sporicidal and virucidal properties in a similar environment. objectives: evaluate the effect on patient satisfaction of increasing waiting room times and physician evaluation times. methods: emergency department flow metrics were collected on a daily basis as well as average daily patient satisfaction scores. the data were from july 2010 through february 2011, in a 44,000 census urban hospital. the data were divided into equal intervals. the arrival to room time was divided by 15 minute intervals up to 135 minutes with the last group being greater than 136 minutes. the physician evaluation times were divided into 20 minute intervals, up to 110, the last group greater than 111 with 46 days in the group. data were analyzed using means and standard deviations, as well as anova for comparison between groups.
results: the overall satisfaction score for the outpatient emergency visit was higher when the patient was in a room within 15 minutes of arrival (88.4, standard deviation 5.9); analysis of variance between the groups had p = 0.13 for the means of each interval (see table 1). total satisfaction with the visit, as well as satisfaction with the provider, dropped when the evaluation extended over 110 minutes, but this was not statistically significant on anova (see table 2 for means).

conclusion: once a patient's time in the waiting room extends beyond 15 minutes, a significant opportunity for patient satisfaction has been lost; once patients have waited over 120 minutes, a poor score is much more likely. physician evaluation time scores are much more consistent, but as evaluation times extended beyond 110 minutes we began to see a downward trend in satisfaction scores.

results: in all three eds, pain medication rates (both in the ed and by prescription) varied significantly by clinical factors including location of pain, discharge diagnosis, pain level, and acuity. we observed little to no variation in pain medication rates by patient factors such as age, sex, race, insurance, or prior ed visits. the table displays key pain management practices by site and provider. after adjusting for patient and clinical characteristics, significant differences in pain medication rates remained by provider and site (see figure).

conclusion: within this health system, the approach to pain management by both providers and sites is not standardized. investigation of the potential effect of this variability on patient outcomes is warranted.

results: all measures showed significant differences, p < 0.01. average pts/h decreased post-cpoe and did not recover after the transitional period, 1.92 ± 0.13 vs 1.75 ± 0.11, p < 0.05.
rvu/h also decreased post-cpoe and did not recover after the transitional period, 5.23 ± 0.37 vs 4.79 ± 0.32 and 4.82 ± 0.33, p < 0.05. charges/h also decreased after cpoe implementation and did not recover after system optimization; there was a sustained significant decrease in charges/h of 4.5% ± 6.5% post-cpoe and 3.6% ± 6.4% post-optimization, p < 0.05. sub-group analysis for each provider group was also performed and showed variability across providers.

conclusion: there was a significant decrease in all productivity metrics four months after the implementation of cpoe. the system did undergo optimization initiated by providers, with customization for ease and speed of use; however, productivity measurements did not recover after these changes were implemented. these data show that with the implementation of a cpoe system there is a decrease in productivity that continues even after a transition period and system customization.

background: procedural competency is a key component of emergency medicine residency training. residents are required to log procedures to document procedure volume and to identify potential weaknesses in their training. as emergency medicine evolves, it is likely that the type and number of procedures change over time. also, exposure to certain rare procedures during residency is not guaranteed.

objectives: we sought to delineate trends in the type and volume of core em procedures over a decade of graduating classes from an accredited four-year emergency medicine training program.

methods: deidentified procedure logs from 2003-2011 were analyzed to assess trends in the type and quantity of procedures. procedure logs were self-reported by individual residents on a continuous basis during training using a computer program. average numbers of procedures per resident in each graduating class were noted.
statistical analysis was performed using spss and included simple linear regression to evaluate for significant changes in the number of procedures over time, and an independent-samples two-tailed t-test of procedures performed before and after the required resident duty hours change.

results: a total of 112 procedure logs were analyzed and the frequency of 29 different procedures was evaluated. a significant increase was seen in one procedure, the venous cutdown. significant decreases were seen in 12 procedures, including key procedures such as central venous catheters, tube thoracostomy, and procedural sedation. the frequency of five high-stakes/resuscitative procedures, including thoracotomy and cricothyroidotomy, remained steady but very low (<4 per resident over 4 years). of the remaining 11 procedures, 8 showed a trend toward decreased frequency, while only 5 increased.

conclusion: over the past 9 years, em residents in our program have recorded significantly fewer opportunities to perform most procedures. certain procedures in our emergency medicine training program have remained stable but uncommon over the course of nearly a decade. to ensure competency in uncommon procedures, innovative ways to expose residents to these potentially life-saving skills must be considered. these may include practice on high-fidelity simulators, increased exposure to procedures on patients during residency (possibly on off-service rotations), or practice in cadaver and animal labs.

objectives: to study the effectiveness of a unique educational intervention using didactic and hands-on training in usgpiv. we hypothesized that senior medical students would improve performance and confidence with usgpiv after the simulation training.

methods: fourth-year medical students were enrolled in an experimental, prospective, before-and-after study conducted at a university medical school simulation center.
participants' baseline usgpiv skills on simulated vascular phantoms were graded by ultrasound expert faculty using standardized checklists. the primary outcome was time to cannulation; secondary outcomes were ability to successfully cannulate, number of needle attempts, and needle-tip visualization. subjects then observed a 15-minute presentation on correct performance of usgpiv, followed by a 30-minute hands-on practical session using the vascular simulators with a 1:4 to 1:6 ultrasound instructor-to-student ratio. an expert blinded to each participant's initial performance graded post-intervention usgpiv ability. pre- and post-intervention surveys were obtained to evaluate usgpiv confidence; previous experience with ultrasound, peripheral iv access, and usgpiv; and satisfaction with the educational format.

objectives: this study examines the grade distribution of resident evaluations when the identity of the evaluator was anonymous as compared to when the identity of the evaluator was known to the resident. we hypothesized that there would be no change in the grades assigned to residents.

methods: we retrospectively reviewed all faculty evaluations of residents and the grades assigned from july 1, 2008 through november 15, 2011. prior to july 1, 2010 the identity of the faculty evaluators was anonymous; after this date, the identity of the faculty evaluators was made known to the residents. throughout this time period, residents were graded on a five-point scale. each resident evaluation included grades in the six acgme core competencies as well as in select other abilities; the specific abilities evaluated varied over the dates analyzed. evaluations of residents were assigned to two groups based on whether the evaluator was anonymous or known to the resident, and grades were compared between the two groups.

results: a total of 10,760 grades were assigned in the anonymous group, with an average grade of 3.90 (95% ci 3.88, 3.91).
a total of 7,122 grades were assigned in the known group, with an average grade of 3.77 (95% ci 3.75, 3.79). specific attention was paid to the assignment of unsatisfactory grades (1 or 2 on the five-point scale). the anonymous group assigned 355 grades in this category, comprising 3.3% of all grades assigned; the known group assigned 100 such grades, comprising 1.4% of all grades assigned. unsatisfactory grades were assigned by the anonymous group 1.9% (95% ci 1.5, 2.3) more often. additionally, 5.8% (95% ci 3.8, 6.8) fewer exceptional grades (4 or 5 on the five-point scale) were assigned by the known group.

conclusion: the average grade assigned was closer to average (3 on a five-point scale) when the identity of the evaluator was made known to the residents. additionally, fewer unsatisfactory and exceptional grades were assigned in this group. this decrease in both unsatisfactory and exceptional grades may make it more difficult for program directors to effectively identify struggling and strong residents, respectively.

testing to improve knowledge retention from traditional didactic presentations: a pilot study
david saloum, amish aghera, brian gillett, maimonides medical center, brooklyn, ny

background: the acgme requires an average of at least 5 hours of planned educational experiences each week for em residents, which traditionally consist of formal lecture-based instruction. however, retention by adult learners is limited when material is presented in a lecture format. more effective methods such as small group sessions, simulation, and other active learning modalities are time- and resource-intensive and therefore not practical as a primary method of instruction. thus, the traditional lecture format remains heavily relied upon. efficient strategies to improve the effectiveness of lectures are needed. testing used as a learning tool to force immediate recall of lecture material is an example of such a strategy.
objectives: to evaluate the effect of immediate post-lecture short-answer quizzes on em residents' retention of lecture content.

methods: in this prospective randomized controlled study, em residents from a community-based 3-year training program were randomized into two groups. block randomization provided a similar distribution of postgraduate year training levels and performance on both usmle and in-training examinations between the two groups. each group received two identical 50-minute lectures, on ecg interpretation and aortic disease. one group of residents completed a five-question short-answer quiz immediately following each lecture (n = 13), while the other group received the lectures without subsequent quizzes (n = 16). the quizzes were not scored or reviewed with the residents. two weeks later, retention was assessed by testing both groups with a 20-question multiple choice test (mct) derived in equal part from each lecture. mean and median test results were then compared between groups; statistical significance was determined using a paired t-test of median test scores from each group.

results: residents who received immediate post-lecture quizzes demonstrated significantly higher mct scores (mean = 57%, median = 58%, n = 10) compared to those receiving lectures alone (mean = 48%, median = 50%, n = 15); p = 0.023.

conclusion: short-answer testing immediately after a traditional didactic lecture improves knowledge retention at a 2-week interval. limitations of the study are that it was a single-center study and that long-term retention was not assessed.

background: the task of educating the next generation of physicians is steadily becoming more difficult given the inherent obstacles that exist for faculty educators and the work hour restrictions that students must adhere to. these obstacles make it very difficult to develop curricula that not only cover important topics but also do so in a fashion that supports and reinforces clinical experiences.
several areas of medical education are using more asynchronous techniques and self-directed online educational modules to overcome these obstacles.

objectives: the aim of this study was to demonstrate that educational information pertaining to core pediatric emergency medicine topics could be disseminated to medical students as effectively via self-directed online educational modules as through traditional didactic lectures.

methods: this was a prospective study conducted from august 1, 2010 through december 31, 2010. students participating in the emergency medicine rotation at carolinas medical center were enrolled and received education in a total of eight core concepts. the students were divided into two groups, which changed on a monthly basis. group 1 was taught four concepts via self-directed online modules and four via traditional didactic lectures; group 2 was taught the same core concepts, but in the opposite fashion. each student was given a pre-test, post-test, and survey at the conclusion of the rotation.

results: a total of 28 students participated in the study. students, regardless of group assignment, performed similarly on the pre-test, with no statistical difference among scores. when comparing summative total scores between online and traditional didactic lectures, there was a trend toward greater improvement among those taught online. the students' assessment of the online modules showed that the majority either felt neutral or preferred the online method. the majority thought the depth and length of the modules were perfect. most students thought having access to the online modules was valuable, and all but one stated that they would use them again.

conclusion: this study demonstrates that self-directed, online educational modules are able to convey important concepts in emergency medicine similarly to traditional didactics.
it is an effective learning technique that offers several advantages to both the educator and the student.

background: critical access hospitals (cah) provide crucial emergency care to rural populations that would otherwise be without ready access to health care. data show that many cah do not meet standard adult quality metrics, and adults treated at cah often have inferior outcomes to comparable patients cared for at other community-based emergency departments (eds). similar data do not exist for pediatric patients.

objectives: as part of a pilot project to improve pediatric emergency care at cah, we sought to determine whether these institutions stock the equipment and medications necessary to treat any ill or injured child who presents to the ed.

methods: five north carolina cah volunteered to participate in an intensive educational program targeting pediatric emergency care. at the initial site visit to each hospital, an investigator, in conjunction with the ed nurse manager, completed a 109-item checklist of commonly required ed equipment and medications based on the 2009 acep "guidelines for care of children in the emergency department". the list was categorized into monitoring and respiratory equipment, vascular access supplies, fracture and trauma management devices, and specialized kits. where applicable, adult and pediatric sizes were listed; only hospitals stocking appropriate pediatric sizes of an item were counted as having that item. the pharmaceutical supply list included antibiotics, antidotes, antiemetics, antiepileptics, intubation and respiratory medications, iv fluids, and miscellaneous drugs not otherwise categorized.

results: overall, the hospitals reported having 91% of the items listed (range 87-96%).
the two greatest deficiencies were fracture devices (range 33-66%), with no hospital stocking infant-sized cervical collars, and antidotes, with no hospital stocking pralidoxime, 1/5 hospitals stocking fomepizole, and 2/5 hospitals stocking pyridoxine and methylene blue. only one of the five institutions had access to prostaglandin e. the hospitals cited cost and rarity of use as the reasons for not stocking these medications.

conclusion: the ability of cah to care for pediatric patients does not appear to be hampered by a lack of equipment. ready access to infrequently used, but potentially lifesaving, medications is a concern. tertiary care centers preparing to accept these patients should be aware of these potential limitations as transport decisions are made.

background: while incision and drainage (i&d) alone has been the mainstay of management of uncomplicated abscesses for decades, some advocate for adjunct antibiotic use, arguing that available trials are underpowered and that antibiotics reduce treatment failures and recurrence.

objectives: to investigate the role of antibiotics in addition to i&d in reducing treatment failure as compared to management with i&d alone.

methods: we performed a search using medline, embase, web of knowledge, and google scholar databases (with a medical librarian) to include trials and observational studies analyzing the effect of antibiotics in human subjects with skin and soft-tissue abscesses. two investigators independently reviewed all the records. we performed three overlapping meta-analyses: 1. only randomized trials comparing antibiotics to placebo on improvement of the abscess during standard follow-up; 2. trials and observational studies comparing appropriate antibiotics to placebo, no antibiotics, or inappropriate antibiotics (as gauged by wound culture) on improvement during standard follow-up; 3.
only trials, but with the outcome broadened to include recurrence or new lesions during a longer follow-up period as treatment failure. we report pooled risk ratios (rr) using a fixed-effects model for our point estimates, with shore-adjusted 95% confidence intervals (ci).

results: we screened 1,937 records, of which 12 studies fit inclusion criteria; 9 of these (5 trials, 4 observational studies) were meta-analyzed because they reported results that could be pooled. of the 9 studies, 5 enrolled subjects from the ed, 2 from a soft-tissue infection clinic, and 2 from a general hospital without definition of enrollment site. five studies enrolled primarily adults, 3 primarily children, and 1 did not specify ages. pooling results for all randomized trials only gave rr = 1.03 (95% ci: 0.97-1.08). defining exposure as "appropriate" antibiotics (using trials and observational studies) resulted in a pooled rr = 1.01 (95% ci: 0.98-1.03). when we broadened our treatment-failure criteria to include recurrence or new lesions at longer lengths of follow-up (trials only), we noted rr = 1.05 (95% ci: 0.97-1.15).

conclusion: based on the available literature pooled for this analysis, there is no evidence to suggest any benefit from antibiotics in addition to i&d in the treatment of skin and soft-tissue abscesses. (originally submitted as a "late-breaker.")

primary objectives: to compare wound healing and recurrence rates after primary vs. secondary closure of drained abscesses. we hypothesized that the percentage of drained ed abscesses completely healed at 7 days would be higher after primary closure.

methods: this randomized clinical trial was undertaken in two academic emergency departments. immunocompetent adult patients with simple, localized cutaneous abscesses were randomly assigned to i&d followed by primary or secondary closure.
randomization was balanced by center, with an allocation sequence based on a block size of four, generated by a computer random number generator. the primary outcome was the percentage of healed wounds seven days after drainage. a sample of 50 patients had 80% power to detect an absolute difference of 40% in healing rates, assuming a baseline rate of 25%. all analyses were by intention to treat.

results: twenty-seven patients were allocated to primary and 29 to secondary closure, of whom 23 and 27, respectively, were followed to study completion. healing rates at seven days were similar between the primary and secondary closure groups.

we compared 100 consecutive patients each scanned on the 64- or 320-slice ccta in 2010-2011. measures and outcomes: data were prospectively collected using standardized data collection forms required prior to performing ccta. the main outcomes were cumulative radiation doses and volumes of intravenous contrast. data analysis: groups were compared with t-, mann-whitney u, and chi-square tests.

results: the mean ages of patients imaged with the 64- and 320-slice scanners were 49 (sd 10) vs. 51 (13) (p = 0.27). male:female ratios were also similar (57:43 vs. 51:49, respectively, p = 0.40). both mean (p < 0.001) and median (p = 0.006) effective radiation doses were significantly lower with the 320-slice (6.8 and 6 msv) vs. the 64-slice scanner (12.2 and 10 msv), respectively. prospective gating was successful in 100% of the 320-slice scans and in only 38% of the 64-slice scans (p < 0.001). mean iv contrast volumes were also lower for the 320- vs. the 64-slice scanner (74 ± 10 vs. 96 ± 12 ml; p < 0.001). the percentage of non-diagnostic scans was similarly low with both scanners (3% each). there were no differences in the use of beta-blockers or nitrates.

conclusion: when compared with the 64-slice scanner, the 320-slice scanner reduces effective radiation doses and iv contrast volumes in ed patients with cp undergoing ccta.
need for beta-blockers and nitrates was similar, and both scanners achieved excellent diagnostic image quality.

background: a few studies have demonstrated that bedside ultrasound measurement of the inferior vena cava-to-aorta (ivc-to-ao) ratio is associated with the level of dehydration in pediatric patients, and a cutoff of 0.8 has been proposed, below which a patient is considered dehydrated.

objectives: we sought to externally validate the ability of the ivc-to-ao ratio to discriminate dehydration, and the proposed cutoff of 0.8, in an urban pediatric emergency department (ed).

methods: this was a prospective observational study at an urban pediatric ed. we included patients aged 3 to 60 months with clinical suspicion of dehydration by the ed physician and an equal number of control patients with no clinical suspicion of dehydration. we excluded children who were hemodynamically unstable, had chronic malnutrition or failure to thrive, had open abdominal wounds, or were unable to provide patient or parental consent. a validated clinical dehydration score (cds) (range 0 to 8) was used to measure initial dehydration status. an experienced sonographer blinded to the cds and not involved in the patient's care measured the ivc-to-ao ratio prior to any hydration. cds was collapsed into a binary outcome of no dehydration or any level of dehydration (1 or higher). the ability of the ivc-to-ao ratio to discriminate dehydration was assessed using the area under the receiver operating characteristic curve (auc), and the sensitivity and specificity of the ivc-to-ao ratio were calculated for three cutoffs (0.6, 0.8, 1.0). calculation of the auc was repeated after adjusting for age and sex.

results: 92 patients were enrolled, 39 (42%) of whom had a cds of 1 or higher. median age was 28 (interquartile range 16-39) months, and 53 (58%) were female. the ivc-to-ao ratio showed an unadjusted auc of 0.66 (95% ci 0.54-0.77) and an adjusted auc of 0.67 (95% ci 0.56-0.79).
for a cutoff of 0.6, sensitivity was 26% (95% ci 13%-42%) and specificity 92% (95% ci 82%-98%); for a cutoff of 0.8, sensitivity was 51% (95% ci 35%-68%) and specificity 74% (95% ci 60%-85%); for a cutoff of 1.0, sensitivity was 79% (95% ci 64%-91%) and specificity 40% (95% ci 26%-54%).

conclusion: the ability of the ivc-to-ao ratio to discriminate dehydration in young pediatric ed patients was modest, and the cutoff of 0.8 was neither sensitive nor specific.

background: while early cardiac computed tomographic angiography (ccta) could be more effective for managing emergency department (ed) patients with acute chest pain and intermediate (>4%) risk of acute coronary syndrome (acs) than current management strategies, it could also result in increased testing, cost, and radiation exposure.

objectives: the purpose of the study was to determine whether incorporation of ccta early in the ed evaluation process leads to more efficient management and earlier discharge than usual care in patients with acute chest pain at intermediate risk for acs.

methods: randomized comparative effectiveness trial enrolling patients 40-75 years of age without known cad, presenting to the ed with chest pain but without ischemic ecg changes or elevated initial troponin, who require further risk stratification for decision making, at nine us sites. patients are being randomized to either ccta as the first diagnostic test or to usual care, which could include no testing or functional testing such as exercise ecg, stress spect, and stress echo following serial biomarkers. test results were provided to physicians, but management in neither arm was driven by a study protocol. data on time, diagnostic testing, and cost of the index hospitalization and the following 28 days are being collected. the primary endpoint is length of hospital stay (los).
the trial is powered to detect a difference in los of 10.1 hours between the competing strategies with 95% power, assuming that 70% of projected los values hold true. secondary endpoints are cumulative radiation exposure and the cost of the competing strategies. tertiary endpoints are institutional, caregiver, and patient characteristics associated with the primary and secondary outcomes. the rate of missed acs within 28 days is the safety endpoint.

results: as of november 21st, 2011, 880 of 1000 patients have been enrolled (mean age: 54 ± 8, 46.5% female, acs rate 7.55%). the anticipated completion of the last patient visit is 02/28/12, and the database will be locked in early march 2012. we will present the results of the primary, secondary, and some tertiary endpoints for the entire cohort.

conclusion: romicat ii will provide rigorous data on whether incorporation of ccta early in the ed evaluation process leads to more efficient management and triage than usual care in patients with acute chest pain at intermediate risk for acs. (originally submitted as a "late-breaker.")

background: many studies have documented higher rates of advanced radiography utilization across u.s. emergency departments (eds) in recent years, with an associated decrease in diagnostic yield (positive tests / total tests). provider-to-provider variability in diagnostic yield has not been well studied, nor have the factors that may explain these differences in clinical practice.

objectives: we assessed the physician-level predictors of diagnostic yield using advanced radiography to diagnose pulmonary embolus (pe) in the ed, including demographics and d-dimer ordering rates.

methods: we conducted a retrospective chart review of all ed patients who had a ct chest or v/q scan ordered to rule out pe from 1/06 to 12/09 in four hospitals in the medstar health system. attending physicians were included in the study if they had ordered 50 or more scans over the study period.
the result of each ct and v/q scan was recorded as positive, negative, or indeterminate, and the identity of the ordering physician was also recorded. data on provider sex, residency type (em or other), and year of residency completion were collected. each provider's positive diagnostic yield was calculated, and logistic regression analysis was done to assess the correlation between positive scans and provider characteristics.

results: during the study period, 15,015 scans (13,571 cts and 1,443 v/qs) were ordered by 93 providers. the physicians were an average of 9.7 years from residency, 36% were female, and 98% were em-trained. diagnostic yield varied significantly among physicians (p < 0.001) and ranged from 0% to 18%. the median diagnostic yield was 5.9% (iqr 3.8%-7.8%). the use of d-dimer by provider also varied significantly, from 4% to 48% (p < 0.001). the odds of a positive test were significantly lower among providers less than 10 years out from residency graduation (or 0.80, ci 0.68-0.95) after controlling for provider sex, type of residency training, d-dimer use, and total number of scans ordered.

conclusion: we found significant provider variability in diagnostic yield for pe and in the use of d-dimer in this study population, with 25% of providers having a diagnostic yield less than or equal to 3.8%. providers who graduated more recently from residency appear to have a lower diagnostic yield, suggesting a more conservative approach in this group.

background: the literature reports that anticoagulation increases the risk of mortality in patients presenting to emergency departments (ed) with head trauma (ht). it has been suggested that such patients should be treated in a protocolized fashion, including ct within 15 minutes and anticipatory preparation of ffp before ct results are available. there are significant logistical and financial implications associated with implementation of such a protocol.
objectives: our primary objective was to determine the effect of anticoagulant therapy on the risk of intracranial hemorrhage (ich) in elderly patients presenting to our urban community hospital following blunt head injury.

methods: this was a retrospective chart review of ht patients >60 years of age presenting to our ed over a 6-month period. charts reviewed were identified using our electronic medical record via chief complaints and icd-9 codes and cross-referencing with written ct logs. at least 25% of each research assistant's contributed data were re-reviewed to validate reliability. we collected information regarding the use of warfarin, clopidogrel, and aspirin and ct findings of ich. using univariate logistic regression, we calculated odds ratios (or) for ich with 95% ci.

results: we identified 363 elderly ht patients. the mean age of our population was 72; 34 (8.3%) reported using anticoagulant therapy, and 23% were on antiplatelet drugs. 14 (3.8%) of the cohort had ich, 3 patients required neurosurgical intervention, and 1 had transfusion of blood products. of the non-anticoagulated patients, 12 (3.6%) were found to have ich, half of those (6) […].

methods: micrornas (mir-146a, mir-150, and mir-223) were measured using real-time quantitative pcr from serum drawn at enrollment. il-6, il-10, and tnf-a were measured using a bio-plex suspension system. baseline characteristics, il-6, il-10, tnf-a, and micrornas were compared using one-way anova or fisher's exact test, as appropriate. correlations between mirnas and sofa scores, il-6, il-10, and tnf-a were determined using spearman's rank correlation. a logistic regression model was constructed using in-hospital mortality as the dependent variable and mirnas as the independent variables of interest. bonferroni adjustments were made for multiple comparisons.

results: of 93 patients, 24 were controls, 29 had sepsis, and 40 had septic shock.
we found no difference in serum mir-146a or mir-223 between cohorts, and no association between these micrornas and either inflammatory markers or sofa score. mir-150 demonstrated a significant correlation with sofa score (ρ = 0.31, p = 0.01) and il-10 (ρ = 0.37, p = 0.001), but not with il-6 or tnf-a (p = 0.046, p = 0.59). logistic regression demonstrated mir-150 to be associated with mortality, even after adjusting for sofa score (p = 0.003).

conclusion: neither mir-146a nor mir-223 demonstrated diagnostic or prognostic ability in this cohort. mir-150 was associated with inflammation, increasing severity of illness, and mortality, and may represent a novel marker for the diagnosis and prognosis of sepsis.

objectives: to examine the association between emergency physician recognition of sirs and sepsis and subsequent treatment of septic patients.

methods: a retrospective cohort study of the medical records of patients of all ages with positive blood cultures drawn in the emergency department from 11/2008-1/2009 at a level i trauma center. patient parameters were reviewed, including vital signs, mental status, imaging, and laboratory data. criteria for sirs, sepsis, severe sepsis, and septic shock were applied according to established guidelines for pediatrics and adults. these data were compared to physician differential diagnosis documentation. the mann-whitney test was used to compare time to antibiotic administration and total volume of fluid resuscitation between two groups of patients: those with recognized sepsis and those with unrecognized sepsis.

results: sirs criteria were present in 233/338 reviewed cases. sepsis criteria were identified in 215/338 cases and considered in the differential diagnosis in 121/215 septic patients. severe sepsis was present in 89/338 cases and septic shock in 42/338 cases. the sepsis 6-hour resuscitation bundle was completed in the emergency department in 16 cases of severe sepsis or septic shock.
121 patients who met sepsis criteria and were recognized by the ed physician had a median time to antibiotics of 150 minutes (iqr: 89-282) and a median ivf of 1500 ml (iqr: 500-3000). the 94 patients who met sepsis criteria but went unrecognized in the documentation had a median time to antibiotics of 225 minutes (iqr: 135-355) and median volume of fluid resuscitation of 1000 ml (iqr: . median time to antibiotics and median volume of fluid resuscitation differed significantly between recognized and unrecognized septic patients (p = 0.003 and p = 0.002, respectively). conclusion: emergency physicians correctly identify and treat infection in most cases, but frequently do not document sirs and sepsis. lack of documentation of sepsis in the differential diagnosis is associated with increased time to antibiotic delivery and a smaller total volume of fluid administration, which may explain poor sepsis bundle compliance in the emergency department.

background: severe sepsis is a common clinical syndrome with substantial human and financial impact. in 1992 the first consensus definition of sepsis was published. subsequent epidemiologic estimates were collected using administrative data, but ongoing discrepancies in the definition of severe sepsis led to large differences in estimates. objectives: we seek to describe the variations in incidence and mortality of severe sepsis in the us using four methods of database abstraction. methods: using a nationally representative sample, four previously published methods (angus, martin, dombrovskiy, wang) were used to gather cases of severe sepsis over a 6-year period (2004-2009). in addition, the use of new icd-9 sepsis codes was compared to previous methods. our main outcome measure was annual national incidence and in-hospital mortality of severe sepsis.
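the population-based incidence figures in the sepsis epidemiology abstract are simple normalizations of case counts to census population. a sketch; the 298 million denominator is a hypothetical census figure chosen to reproduce the ~300/100,000 rate reported for 894,013 cases:

```python
def incidence_per_100k(cases, population):
    """annual incidence per 100,000 population."""
    return cases / population * 100_000

# hypothetical u.s. population of 298 million (the abstract does not
# report the census denominators it used for each year)
rate = incidence_per_100k(894_013, 298_000_000)
```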
results: the average annual incidence varied by as much as 3.5-fold depending on the method used and ranged from 894,013 (300/100,000 population) to 3,110,630 (1,031/100,000) using the methods of dombrovskiy and wang, respectively. average annual increase in the incidence of severe sepsis was similar (13.0-13.3%) across all methods. total mortality mirrored the increase in incidence over the 6-year period.

background: radiation exposure from medical imaging has been the subject of many major journal articles, as well as the topic of mainstream media. some estimate that one-third of all ct scans are not medically justified. it is important for practitioners ordering these scans to be knowledgeable of currently discussed risks. objectives: to compare the knowledge, opinions, and practice patterns of three groups of providers regarding cts in the ed. methods: an anonymous electronic survey was sent to all residents, physician assistants, and attending physicians in emergency medicine (em), surgery, and internal medicine (im) at a single academic tertiary care referral level i trauma center with an annual ed volume of over 160,000 visits. the survey was pilot tested and validated. all data were analyzed using pearson's chi-square test. results: there was a response rate of 32% (220/668). data from surgery respondents were excluded due to a low response rate. in comparison to im, em respondents correctly equated one abdominal ct to between 100 and 500 chest x-rays, reported receiving formal training regarding the risks of radiation from cts, believe that excessive medical imaging is associated with an increased lifetime risk of cancer, and routinely discuss the risks of ct imaging with stable patients more often (see table 1). particular patient factors influence whether radiation risks are discussed with patients by 60% in each specialty (see table 2).
before ordering an abdominal ct in a stable patient, im providers routinely review the patient's medical imaging history less often than the em providers surveyed. overall, 67% of respondents felt that ordering an abdominal ct in a stable ed patient is a clinical decision that should be discussed with the patient, but should not require consent. conclusion: compared with im, em practitioners report greater awareness of the risks of radiation from cts and discuss risks with patients more often. they also review patients' imaging history more often and take this, as well as patients' age, into account when ordering cts. these results indicate a need for improved education for both em and im providers regarding the risks of radiation from ct imaging.

background: in nebraska, 80% of emergency departments have annual visits less than 10,000, and the majority are in rural settings. general practitioners working in rural emergency departments have reported low confidence in several emergency medicine skills. current staffing patterns include using midlevels as the primary provider with non-emergency medicine trained physicians as back-up. lightly-embalmed cadaver labs are used for residents' procedural training. objectives: to describe the effect of a lightly-embalmed cadaver workshop on physician assistants' (pa) reported level of confidence in selected emergency medicine procedures. methods: an emergency medicine procedure lab was offered at the nebraska association of physician assistants annual conference. each lab consisted of a 2-hour hands-on session teaching endotracheal intubation techniques, tube thoracostomy, intraosseous access, and arthrocentesis of the knee, shoulder, ankle, and wrist to pas. irb-approved surveys were distributed pre-lab and a post-lab survey was distributed after lab completion. baseline demographic experience was collected.
pre- and post-lab procedural confidence was rated on a six-point likert scale (1-6) with 1 representing no confidence. the wilcoxon signed-rank test was used to calculate p values. results: 26 pas participated in the course. all completed a pre- and post-lab assessment. no pa had done any one procedure more than 5 times in their career. pre-lab modes of confidence level were ≤3 for each procedure. post-lab modes were >4 for each procedure except arthrocentesis of the ankle and wrist. however, post-lab assessments of procedural confidence significantly improved for all procedures with p values <0.05. conclusion: midlevel providers' level of confidence improved for emergent procedures after completion of a procedure lab using lightly-embalmed cadavers. a mobile cadaver lab would be beneficial to train rural providers with minimal experience.

background: use of automated external defibrillators (aed) improves survival in out-of-hospital cardiopulmonary arrest (ohca). since 2005, the american heart association has recommended that individuals one year of age or older who sustain ohca have an aed applied. little is known about how often this occurs and what factors are associated with aed use in the pediatric population. objectives: our objective was to describe aed use in the pediatric population and to assess predictors of aed use when compared to adult patients. methods: we conducted a secondary analysis of prospectively collected data from 29 u.s. cities that participate in the cardiac arrest registry to enhance survival (cares). patients were included if they had a documented resuscitation attempt from october 1, 2005 through december 31, 2009 and were ≥1 year old. patients were considered pediatric if they were less than 19 years old. aed use included application by laypersons and first responders. hierarchical multivariable logistic regression analysis was used to estimate the associations between age and aed use.
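the cadaver-lab abstract above compares pre/post six-point likert confidence with the wilcoxon signed-rank test. a simplified normal-approximation sketch, dropping zero differences and omitting the tie-variance correction; the scores below are hypothetical illustrations, not study data:

```python
import math

def wilcoxon_signed_rank_z(pre, post):
    """w+ statistic and normal-approximation z for paired samples.
    zeros are dropped; ties get average ranks; no tie-variance correction."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    # rank |d| with average ranks for tied magnitudes
    ordered = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    var = n * (n + 1) * (2 * n + 1) / 24
    z = (w_plus - mean) / math.sqrt(var)
    return w_plus, z

# hypothetical six-point likert confidence scores for six participants
w_plus, z = wilcoxon_signed_rank_z([2, 3, 2, 3, 3, 2], [5, 4, 4, 5, 3, 4])
```

for the small samples in such labs (n = 26 here), an exact test as used by standard statistics packages is preferable to this approximation.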
results: there were 19,559 ohcas included in this analysis, of which 239 (1.2%) occurred in pediatric patients. aeds were applied in 5,517 cases overall, and there were 1,751 (8.9%) total survivors. aeds were applied less often in pediatric patients (19.7%, 95% ci: 14.6%-24.7% vs 28.3%, 95% ci: 27.7%-29.0%). within the pediatric population, only 35.4% of patients with a shockable rhythm had an aed used. in all pediatric patients, regardless of presenting rhythm, aed use demonstrated a statistically significant increase in return of spontaneous circulation (aed used 29.8%, 95% ci: 16.2-43.4 vs aed not used 16.8%, 95% ci: 11.4-22.1, p < 0.05), although there was no significant increase in survival to hospital discharge (aed used 12.8%; aed not used 5.2%; p = 0.057). in the adjusted model, pediatric age was independently associated with failure to use an aed (or 0.61, 95% ci: 0.42-0.87), as was female sex (or 0.88, 95% ci: 0.81-0.95). public arrest (or 1.35, 95% ci: 1.24-1.46) and bystander-witnessed arrest (or 1.20, 95% ci: 1.11-1.29) were also predictive of aed use. conclusion: pediatric patients who experience ohca are less likely to have an aed used. continued education of first responders and the lay public to increase aed use in this population is necessary.

does implementation of a therapeutic hypothermia protocol improve survival and neurologic outcomes in all comatose survivors of sudden cardiac arrest? ken will, michael nelson, abishek vedavalli, renaud gueret, john bailitz, cook county (stroger), chicago, il. background: the american heart association (aha) currently recommends therapeutic hypothermia (th) for out-of-hospital comatose survivors of sudden cardiac arrest (cssca) with an initial rhythm of ventricular fibrillation (vf). based on currently limited data, the aha further recommends that physicians consider th for cssca, from both the out- and inpatient settings, with an initial non-vf rhythm.
objectives: to investigate whether a th protocol improves both survival and neurologic outcomes for cssca, for out- and inpatients, with any initial rhythm, in comparison to outcomes previously reported in the literature prior to th. methods: we conducted a prospective observational study of cssca between august 2009 and may 2011 whose care included th. the study enrolled eligible consecutive cssca survivors, from both out- and inpatient settings, with any initial arrest rhythm. primary endpoints included survival to hospital discharge and neurologic outcomes, stratified by sca location and by initial arrest rhythm. results: overall, of 27 eligible patients, 11 (41%, 95% ci 22-66%) survived to discharge, 7 (26%, 95% ci 9-43%) with at least a good neurologic outcome. twelve were outpatients and 15 were inpatients. among the 12 outpatients, 6 (50%, 95% ci 22-78%) survived to discharge, 5 (41%, 95% ci 13-69%) with at least a good neurologic outcome. among the 15 inpatients, 5 (33%, 95% ci 9-57%) survived to discharge, 2 (13%, 95% ci 0-30%) with at least a good neurologic outcome. by initial rhythm, 6 patients had an initial rhythm of vf/t and 21 non-vf/t. among the 6 patients with an initial rhythm of vf/t, 4 (67%, ci 39-100%) survived to discharge, all 4 with at least a good outcome, including 3 outpatients and 1 inpatient. among the 21 patients with an initial rhythm of non-vf/t, 7 (33%, ci 22-53%) survived to discharge, 3 (14%, ci 0-28%) with at least a good neurologic outcome, including 2 outpatients and 1 inpatient. conclusion: our preliminary data suggest that local implementation of a th protocol improves survival and neurologic outcomes for cssca, for out- and inpatients, with any initial rhythm, in comparison to outcomes previously reported in the literature prior to th. subsequent research will include comparison to local historical controls, additional data from other regional th centers, as well as comparison of different cooling methods.
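the survival proportions above carry 95% cis. a minimal sketch of a wald binomial interval, one common choice; the abstract does not state its interval method, so its reported bounds (e.g. 22-66% for 11/27) may come from a different formula:

```python
import math

def wald_ci(successes, n, z=1.96):
    """point estimate and wald 95% ci for a binomial proportion,
    clipped to [0, 1]."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# 11 of 27 patients survived to discharge
p, lo, hi = wald_ci(11, 27)
```

for counts this small, a wilson or exact (clopper-pearson) interval behaves better near 0 and 1, which may explain why some reported bounds are clipped at 0%.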
protocolized
background: therapeutic hypothermia (th) has been shown to improve the neurologic recovery of cardiac arrest patients who experience return of spontaneous circulation (rosc). it remains unclear as to how earlier cooling and treatment optimization influence outcomes. objectives: to evaluate the effects of protocolized use of early sedation and paralysis on cooling optimization and clinical outcomes in survivors of cardiac arrest. methods: a 3-year (2008-2010), pre-post intervention study of patients with rosc after cardiac arrest treated with th was performed. those patients treated with a standardized order set which lacked a uniform sedation and paralytic order were included in the pre-intervention group, and those with a standardized order set which included a uniform sedation and paralytic order were included in the post-intervention group. patient demographics, initial and discharge glasgow coma scale (gcs) scores, resuscitation details, cooling time variables, severity of illness as measured by the apache ii score, discharge disposition, functional status, and days to death were collected and analyzed using student's t-tests, mann-whitney u tests, and the log-rank test. results: 232 patients treated with th after rosc were included, with 107 patients in the pre-intervention group and 125 in the post-intervention group. the average time to goal temperature (33°c) was 227 minutes (pre-intervention) and 168 minutes (post-intervention) (p = 0.001). a 2-hour time target was achieved in 38.6% of the post-intervention patients compared to 24.5% in the pre-intervention group (p = 0.029). twenty-eight day mortality was similar between groups (65.4% and 65.3%), though hospital length of stay (10 days pre- and 8 days post-intervention) and discharge gcs (13 pre- and 14 post-intervention) differed between cohorts. more post-intervention patients were discharged to home (55.8%) compared to 43.2% in the pre-intervention group.
conclusion: protocolized use of sedation and paralysis improved time to goal temperature achievement. these improved th time targets were associated with improved neuroprotection, gcs recovery, and disposition outcome. standardized sedation and paralysis appears to be a useful adjunct in induced th.

background: ct is increasingly used to assess children with signs and symptoms of acute appendicitis (aa), though concerns regarding the long-term risk of exposure to ionizing radiation have generated interest in methods to identify children at low risk. objectives: we sought to derive a clinical decision rule (cdr) of a minimum set of commonly used signs and symptoms from prior studies to predict which children with acute abdominal pain have a low likelihood of aa, and compared it to physician clinical impression (pci). methods: we prospectively analyzed 420 subjects aged 2 to 20 years in 11 u.s. emergency departments with abdominal pain plus signs and symptoms suspicious for aa within the prior 72 hours. subjects were assessed for 17 clinical attributes drawn from published appendicitis scoring systems by study staff unaware of their diagnosis, and the physicians responsible for physical examination estimated the probability of aa based on pci prior to their medical disposition. based on medical record entry rate, frequently used cdr attributes were evaluated using recursive partitioning and logistic regression to select the best minimum set capable of discriminating subjects with and without aa. subjects were followed to determine whether imaging was used, and use was tabulated by both pci and the cdr to assess their ability to identify patients who did or did not benefit based on diagnosis. results: this cohort had a 27.3% prevalence (118/431 subjects) of aa.
we derived a cdr based on the absence of two out of three of the following attributes: abdominal tenderness, pain migration, and rigidity/guarding. the rule had a sensitivity of 89.8% (95% ci: 83.1-94.1), specificity of 47.6% (95% ci: 42.1-53.1), npv of 92.5% (95% ci: 87.4-95.7), and negative likelihood ratio of 0.21 (95% ci: 0.12-0.37). the pci set at aa <30% pre-test probability had a sensitivity of 94.1% (95% ci: 88.3-97.1), specificity of 49.4% (95% ci: 43.9-54.9), npv of 95.7% (95% ci: 91.3-97.9), and negative likelihood ratio of 0.12 (95% ci: 0.06-0.25). the methods each classified 37% of the patients as low risk for aa. our cdr identified 29.1% (43/148) of low-risk subjects who received ct but, being aa-negative, could have been spared it, while the pci identified 20.1% (30/149). conclusion: compared to physician clinical impression, our clinical decision rule can identify more children at low risk for appendicitis who could be managed more conservatively with careful observation and avoidance of ct.

negative predictive value of a low modified alvarado score
background: abdominal pain is the most common complaint in the ed and appendicitis is the most common indication for emergency surgery. a clinical decision rule (cdr) identifying abdominal pain patients at a low risk for appendicitis could lead to a significant reduction in ct scans and could have a significant public health impact. the alvarado score is one of the most widely applied cdrs for suspected appendicitis, and a low modified alvarado score (less than 4) is sometimes used to rule out acute appendicitis. the modified alvarado score has not been prospectively validated in ed patients with suspected appendicitis. objectives: we sought to prospectively evaluate the negative predictive value of a low modified alvarado score (mas) in ed patients with suspected appendicitis. we hypothesized that a low mas (less than 4) would have a sufficiently high npv (>95%) to rule out acute appendicitis.
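all of the test characteristics quoted for the decision rule above follow from a single 2×2 table. a sketch using counts back-calculated from the reported percentages (tp = 106, fn = 12, tn = 149, fp = 164 is an inferred reconstruction from 118 aa-positive and 313 aa-negative subjects, not published counts):

```python
def rule_out_stats(tp, fn, tn, fp):
    """sensitivity, specificity, npv, and negative likelihood ratio
    for a dichotomous rule-out test."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    npv = tn / (tn + fn)
    lr_neg = (1 - sens) / spec
    return sens, spec, npv, lr_neg

# counts reconstructed from the cdr abstract's reported percentages
sens, spec, npv, lr_neg = rule_out_stats(tp=106, fn=12, tn=149, fp=164)
```

these counts reproduce the reported 89.8% sensitivity, 47.6% specificity, 92.5% npv, and 0.21 negative likelihood ratio, illustrating how the four summary measures are linked.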
methods: we enrolled patients greater than or equal to 18 years old who were suspected of having appendicitis (listed as one of the top three diagnoses by the treating physician before ancillary testing) as part of a prospective cohort study in two urban academic eds from august 2009 to april 2010. elements of the mas and the final diagnosis were recorded on a standard data form for each subject. the sensitivity, specificity, negative predictive value (npv), and positive predictive value (ppv) were calculated with 95% ci for a low mas and final diagnosis of appendicitis.

background: evaluating children for appendicitis is difficult, and strategies have been sought to improve the precision of the diagnosis. computed tomography is now widely used but remains controversial due to the large dose of ionizing radiation and risk of subsequent radiation-induced malignancy. objectives: we sought to identify a biomarker panel for use in ruling out pediatric acute appendicitis as a means of reducing exposure to ionizing radiation. methods: we prospectively enrolled 431 subjects aged 2 to 20 years presenting in 11 u.s. emergency departments with abdominal pain and other signs and symptoms suspicious for acute appendicitis within the prior 72 hours. subjects were assessed by study staff unaware of their diagnosis for 17 clinical attributes drawn from appendicitis scoring systems, and blood samples were analyzed for cbc differential and 5 candidate proteins. based on discharge diagnosis or post-surgical pathology, the cohort exhibited a 27.3% prevalence (118/431 subjects) of appendicitis. clinical attributes and biomarker values were evaluated using principal component, recursive partitioning, and logistic regression to select the combination that best discriminated between those subjects with and without disease.
mathematical combination of three inflammation-related markers in a panel composed of myeloid-related protein 8/14 complex (mrp), c-reactive protein (crp), and white blood cell count (wbc) provided optimal discrimination. results: this panel exhibited a sensitivity of 98% (95% ci, 94-100%), a specificity of 48% (95% ci, 42-53%), and a negative predictive value of 99% (95% ci, 95-100%) in this cohort. the observed performance was then verified by testing the panel against a pediatric subset drawn from an independent cohort of all ages enrolled in an earlier study. in this cohort, the panel exhibited a sensitivity of 95% (95% ci, 87-98%), a specificity of 41% (95% ci, 34-50%), and a negative predictive value of 95% (95% ci, 87-98%). conclusion: appyscore is highly predictive of the absence of acute appendicitis in these two cohorts. if these results are confirmed by a prospective evaluation currently underway, the appyscore panel may be useful to classify pediatric patients presenting to the emergency department with signs and symptoms suggestive of, or consistent with, acute appendicitis, thereby sparing many patients ionizing radiation.

background: there are no current studies on the tracking of emergency department (ed) patient dispersal when a major ed closes. this study demonstrates a novel way to track where patients sought emergency care following the closure of saint vincent's catholic medical center (svcmc) in manhattan by using de-identified data from a health information exchange, the new york clinical information exchange (nyclix). nyclix matches patients who have visited multiple sites using their demographic information. on april 30, 2010, svcmc officially stopped providing emergency and outpatient services. we report the patterns in which patients from svcmc visited other sites within nyclix. objectives: we hypothesize that patients often seek emergency care based on geography when a hospital closes.
methods: a retrospective pre- and post-closure analysis was performed of svcmc patients visiting other hospital sites. the pre-closure study dates were january 1, 2010-march 31, 2010. the post-closure study dates were may 1, 2010-july 31, 2010. a svcmc patient was defined as a patient with any svcmc encounter prior to its closure. using de-identified aggregate count data, we calculated the average number of visits per week by svcmc patients at each site (hospital a-h). we ran a paired t-test to compare the pre- and post-closure averages by site. the following specifications were used to write the database queries: for patients who had one or more prior visits to svcmc, for each day within the study, return the following: a. eid: a unique and meaningless proprietary id generated within the nyclix master patient index (mpi). b. age: through the age of 89; persons over 90 were listed as "90+". c. ethnicity/race. d. type of visit: emergency. e. location of visit: specific nyclix site. results: nearby hospitals within 2 miles saw the highest number of increased ed visits after svcmc closed. this increase was seen until about 5 miles. hospitals >5 miles away did not see any significant changes in ed visits. see table. conclusion: when a hospital and its ed close down, patients seem to seek emergency care at the nearest hospital based on geography. other factors may include the patient's primary doctor, availability of outpatient specialty clinics, insurance contracts, or preference of ambulance transports. this study is limited by the inclusion of data from only the eight hospitals participating in nyclix at the time of the svcmc closure.

upstream
methods: data were collected on all ed ems arrivals from the metro calgary (population 1.1 million) area to its three urban adult hospitals. the study phases consisted of the 7 months from february to october 2010 (pre-ocp) compared against the same months in 2011 (post-ocp).
data from the ems operational database and the regional emergency department information system (redis) database were linked. the primary analysis examined the change in ems offload delay, defined as the time from ems triage arrival until patient transfer to an ed bed. a secondary analysis evaluated variability in ems offload delay between receiving eds. conclusion: implementation of a regional overcapacity protocol to reduce ed crowding was associated with an important reduction in ems offload delay, suggesting that policies that target hospital processes have bearing on ems operations. variability in offload delay improvements is likely due to site-specific issues, and the gains in efficiency correlate inversely with acuity.

methods: a pre-post intervention study was conducted in the ed of an adult university teaching hospital in montreal (annual visits = 69,000). the raz unit (intervention), created to offload the acu of the main ed, started operating in january 2011. using a split-flow management strategy, patients were directed to the raz unit based on patient acuity level (ctas code 3 and certain code 2), likelihood to be discharged within 12 hours, and not requiring an ed bed for continued care. data were collected weekdays from 9:00 to 21:00 for 4 months (september-december 2008) (pre-raz) and for 1.5 months (february-march 2011) (post-raz). in the acu of the main ed, research assistants observed and recorded cubicle access time, and nurse and physician assessment times. databases were used to extract socio-demographics, ambulance arrival, triage code, chief complaint, triage and registration time, length of stay, and ed occupancy.

background: telephone follow-up after discharge from the ed is useful for treatment and quality assurance purposes. ed follow-up studies frequently do not achieve high (i.e., ≥80%) completion rates. objectives: to determine the influence of different factors on the telephone follow-up rate of ed patients.
we hypothesized that with a rigorous follow-up system we could achieve a high follow-up rate in a socioeconomically diverse study population. methods: research assistants (ras) prospectively enrolled adult ed patients discharged with a medication prescription between november 15, 2010 and september 9, 2011 from one of three eds affiliated with one health care system: (a) academic level i trauma center, (b) community teaching affiliate, and (c) community hospital. patients unable to provide informed consent, non-english speaking, or previously enrolled were excluded. ras interviewed subjects prior to ed discharge and conducted a telephone follow-up interview 1 week later. follow-up procedures were standardized (e.g., number of calls per day, times to place calls, obtaining alternative numbers) and each subject's follow-up status was monitored and updated daily through a shared, web-based data system. subjects who completed follow-up were mailed a $10 gift card. we examined the influence of patient (age, sex, race, insurance, income, marital status, usual major activity, education, literacy level, health status), clinical (acuity, discharge diagnosis, ed length of stay, site), and procedural factors (number and type of phone numbers received from subjects, offering two gift cards for difficult-to-reach subjects) on the odds of successful follow-up using multivariate logistic regression. results: of the 3,940 enrolled, 45% were white, 59% were covered by medicaid or uninsured, and 44% reported an annual household income of <$26,000. 86% completed telephone follow-up, with 41% completing on the first attempt. the table displays the factors associated with successful follow-up. in addition to patient demographics and lower acuity, obtaining a cell phone or multiple phone numbers, as well as offering two gift cards to a small number of subjects, increased the odds of successful follow-up.
conclusion: with a rigorous follow-up system and a small monetary incentive, a high telephone follow-up rate is achievable one week after an ed visit.

methods: an interrupted time-series design was used to evaluate the study question. data regarding adherence with the following pneumonia core measures were collected pre- and post-implementation of the enhanced decision-support tool: blood cultures prior to antibiotics, antibiotics within 6 hours of arrival, appropriate antibiotic selection, and mean time to antibiotic administration. prescribing clinicians were educated on the use of the decision-support tool at departmental meetings and via direct feedback on their cases. results: during the 33-month study period, complete data were collected for 1185 patients diagnosed with cap: 613 in the pre-implementation phase and 572 post-implementation. the mean time to antibiotic administration decreased by approximately one minute from the pre- to post-implementation phase, a change that was not statistically significant (p = 0.824). the proportion of patients receiving blood cultures prior to antibiotics improved significantly (p < 0.001), as did the proportion of patients receiving antibiotics within 6 hours of ed arrival (p = 0.004). a significant improvement in appropriate antibiotic selection was noted, with 100% of patients experiencing appropriate selection in the post-phase (p = 0.0112). use of the available support tool increased throughout the study period (χ² = 78.13, df = 1, p < 0.0001). all improvements were maintained 15 months following the study intervention. conclusion: in this academic ed, introduction of an enhanced electronic clinical decision support tool significantly improved adherence to cms pneumonia core measures. the proportion of patients receiving blood cultures prior to antibiotics, antibiotics within 6 hours, and appropriate antibiotics all improved significantly after the introduction of an enhanced electronic clinical decision support tool.
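the pre/post adherence comparisons above rest on pearson chi-square tests of 2×2 tables. a minimal sketch using the cross-product shortcut; the counts below are hypothetical illustrations chosen within the study's 613/572 denominators, not the study's actual data:

```python
def chi2_2x2(a, b, c, d):
    """pearson chi-square statistic (1 df, no continuity correction)
    for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# hypothetical counts: 400/613 pre vs 520/572 post meeting a core measure
stat = chi2_2x2(400, 213, 520, 52)
```

the statistic is compared against the chi-square distribution with 1 df (critical value 3.84 at p = 0.05); values as extreme as the abstract's 78.13 correspond to p far below 0.0001.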
background: emergency medicine (em) residency graduates need to pass both the written qualifying exam and the oral certification exam as the final benchmark to achieve board certification. the purpose of this project is to obtain information about the exam preparation habits of recent em graduates to allow current residents to make informed decisions about their individual preparation for the abem written qualifying and oral certification exams. objectives: the study sought to determine the amount of residency and individual preparation, the extent of the use of various board review products, and evaluations of the various board review products used for the abem qualifying and certification exams. methods: design: an online survey instrument was used to ask respondents questions about residency preparation and individual preparation habits, as well as the types of board review products used in preparing for the em boards. participants: as more than 95% of all em graduates are emra members, an online survey was sent to all emra members who graduated in the past three years. observations: descriptive statistics of types of preparation, types of resources, time, and quantitative and qualitative ratings for the various board preparation products were obtained from respondents. results: a total of 520 respondents spent an average of 9.1 weeks and 15 hours per week preparing for the written qualifying exam and an average of 5 weeks and 7.8 hours per week preparing for the oral certification exam. in preparing for the written qualifying exam, 90% used a preparation textbook, with 16% using more than one textbook and 47% using a board preparation course. in preparing for the oral certification exam, 56% used a preparation textbook while 34% used a preparation course.
sixty-seven percent of respondents reported that their residency programs had a formalized written qualifying exam preparation curriculum, of which 48% was centered on the annual in-training exam. eighty-five percent of residency programs had formalized oral certification exam preparation. respondents reported spending on average $715 preparing for the qualifying exam and $509 for the certification exam. conclusion: em residents spend significant amounts of time and money and make use of a wide range of residency and commercially available resources in preparing for the abem qualifying and certification exams.

background: communication and professionalism skills are essential for em residents but are not well measured by selection processes. the multiple mini-interview (mmi) uses multiple, short structured contacts to measure these skills. it predicts medical school success better than the interview and application. its acceptability and utility in em residency selection is unknown. objectives: we theorized that the mmi would provide novel information and be acceptable to participants. methods: 71 interns from three programs in the first month of training completed an eight-station mmi developed to focus on em topics. pre- and post-surveys assessed reactions using five-point scales. mmi scores were compared to application data. results: em grades correlated with mmi performance (f(1,66) = 4.18, p < 0.05), with honors students having higher mmi summary scores. higher third-year clerkship grades trended toward higher mmi performance means, although not significantly. mmi performance did not correlate with a match desirability rating and did not predict other individual components of the application, including usmle step 1 or usmle step 2. participants preferred a traditional interview (mean difference = 1.36, p < 0.0001). a mixed format was preferred over a pure mmi (mean difference = 1.1, p < 0.0001). preference for a mixed format was similar to a traditional interview.
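the mmi abstract reports correlation trends as r with an accompanying t statistic on n - 2 degrees of freedom; the conversion is t = r·sqrt(n - 2)/sqrt(1 - r²). a sketch using r = 0.15 with n = 67 (65 df, matching the reported t(65) trend; the published t of 1.19 differs slightly because r is rounded):

```python
import math

def r_to_t(r, n):
    """t statistic (n - 2 df) testing a pearson correlation against zero."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# r = 0.15 across n = 67 paired observations (65 degrees of freedom)
t = r_to_t(0.15, 67)
```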
mmi performance did not significantly correlate with preference for the mmi; however, there was a trend for higher performance to associate with higher preference (r = 0.15, t(65) = 1.19, n.s.). performance was not associated with preference for a mix of interview methods (r = 0.08, t(65) = 0.63, n.s.). conclusion: while the mmi alone was viewed less favorably than a traditional interview, participants were receptive to a mixed-methods interview. the mmi appears to measure skills important in successful completion of an em clerkship and thus likely em residency. future work will determine whether mmi performance correlates with clinical performance during residency. background: the annual american board of emergency medicine (abem) in-training exam is a tool to assess resident progress and knowledge. when the new york-presbyterian (nyp) em residency program started in 2003, the exam was not emphasized and resident performance was lower than expected. a course was implemented to improve residency-wide scores despite previous em literature failing to exhibit improvements with residency-sponsored in-training exam interventions. objectives: to evaluate the effect of a comprehensive, multi-faceted course on residency-wide in-training exam performance. methods: the nyp em residency program, associated with cornell and columbia medical schools, has a 4-year format with 10-12 residents per year. an intensive 14-week in-training exam preparation program was instituted outside of the required weekly residency conferences. the program included lectures, pre-tests, high-yield study sheets, and remediation programs. lectures were interactive, utilizing an audience response system, and consisted of 13 core lectures (2-2.5 hours) and three review sessions. residents with previous in-training exam difficulty were counseled on designing their own study programs.
the effect on in-training exam scores was measured by comparing each resident's score to the national mean for their postgraduate year (pgy). scores before and after course implementation were evaluated by repeated-measures regression modeling. overall residency performance was evaluated by comparing the residency average to the national average each year and by tracking abem national written examination pass rates. results: resident performance improved following course implementation. following the course's introduction, the odds of a resident beating the national mean increased by a factor of 3.9 (95% ci 1.9-7.3) and the percentage of residents exceeding the national mean for their pgy year increased by 37% (95% ci 23%-52%). following course introduction, the overall residency mean score has outperformed the national exam mean annually and the first-time abem written exam board pass rate has been 100%. conclusion: a multi-faceted in-training exam program centered around a 14-week course markedly improved overall residency performance on the in-training exam. limitations: this was a before-and-after evaluation, as randomizing residents to receive the course was not logistically or ethically feasible. .0 years of practice. among the non-residency trained, non-boarded em physicians, the percentage of individuals with board actions against them was significantly higher (6.9% vs. 1.9%, 95% ci for difference of 5.0% = 3.1 to 7.5%), but the incidence of actions was not significantly different (1.3 vs. 3.4 events/1000 years of practice, 95% ci for difference of 2.1/1000 = -3/1000 to +8/1000), though the power to detect a difference was only 30%. conclusion: in this study population, em-trained physicians had significantly fewer total state medical board disciplinary actions against them than non-em trained physicians, but when adjusted for years of practice (incidence), the difference was not significant at the 95% confidence level.
the study was limited by low power to detect a difference in incidence. objectives: we chose pain documentation as a long-term project for quality improvement in our ems system. our objectives were to enhance the quality of pain assessment, to reduce patient suffering and pain through improved pain management, to improve pain assessment documentation, to improve capture of initial and repeat pain scales, and to improve the rate of pain medication. this study addressed the aim of improving pain assessment documentation. methods: this was a quasi-experiment looking at paramedic documentation of the pqrst mnemonic and pain scales. our intervention consisted of mandatory training on the importance and necessity of pain assessment and treatment. in addition to classroom training, we used rapid-cycle individual feedback and public posting of pain documentation rates (with unique ids) for individual feedback. the categories of chief complaint studied were abdominal pain, blunt injury, burn, chest pain, headache, non-traumatic body pain, and penetrating injury. we compared the pain documentation rates in the 3 months prior to intervention, the 3 months of intervention, and the 3 months post intervention. using repeated-measures anova, we compared rates of paramedic documentation over time. results: our ems system transported 42,166 patients during the study period, of whom 15,490 were for painful conditions in the defined chief complaint categories. there were 168 paramedics studied, of whom 149 had complete data. documentation increased from 1,819 of 5,122 painful cases (35.5%) in qtr 1 to 4,625 of 5,180 painful cases (89.3%) in qtr 3. the trend toward increased rates of pain documentation over the three quarters was strongly significant (p < 0.001). paramedics were significantly more likely to document pain scales and pqrst assessments over the course of the study, with the highest rates of documentation compliance in the final 3-month period.
conclusion: a focused intervention of education and individual feedback through classroom training, one-on-one training, and public posting improves paramedic documentation rates of perceived patient pain. background: emergency medical services (ems) systems are vital in the identification, assessment, and treatment of trauma, stroke, myocardial infarction, and sepsis, improving early recognition, resuscitation, and transport to adequate medical facilities. ems personnel provide similar first-line care for patients with syncope, performing critical actions such as initial assessment and treatment as well as gathering key details of the event. objectives: to characterize emergency department patients with syncope receiving initial care by ems and the role of ems as initial providers. methods: we prospectively enrolled patients over 18 years of age who presented with syncope or near syncope to a tertiary care ed with 72,000 annual patient visits from june 2009 to june 2011. we compared patient age, sex, comorbidities, and 30-day cardiopulmonary adverse outcomes (defined as myocardial infarction, pulmonary embolism, significant cardiac arrhythmia, and major cardiovascular procedure) between ems and non-ems patients. descriptive statistics, two-sided t-tests, and chi-square testing were used as appropriate. results: of the 669 patients enrolled, 254 (38.0%) arrived by ambulance. the most common complaint in patients transported by ems was fainting (50.4%) or dizziness (45.7%); syncope was reported in 28 (11.0%). compared to non-ems patients, those who arrived by ambulance were older (mean age (sd) 64.5 (18.7) vs. 60.6 (19.5) years, p = 0.012). there were no differences in the proportion of patients with hypertension (20.0% vs 32.0%, p = 0.75), coronary artery disease (8.85% vs 15.3%, p = 0.67), diabetes mellitus (6.5% vs 9.5%, p = 0.57), or congestive heart failure (3.8% vs 6.6%, p = 0.74). sixty-nine (10.8%) patients experienced a cardiopulmonary event within 30 days.
twenty-eight (4.4%) patients who arrived by ambulance and 41 (6.4%) non-ems patients had a subsequent cardiopulmonary adverse event (rr 1.08, 95% ci 0.68-1.69) within 30 days. the table tabulates interventions provided by ems prior to ed arrival. conclusion: ems providers care for more than one third of ed syncope patients and often perform key interventions. ems systems offer opportunities for advancing diagnosis, treatment, and risk stratification in syncope patients. background: abdominal pain is the most common reason for visiting an emergency department (ed), and abdominopelvic computed tomography (apct) use has increased dramatically over the past decade. despite this, there has been no significant change in rates of admission or diagnosis of surgical conditions. objectives: to assess whether an electronic accountability tool affects apct ordering in ed patients with abdominal or flank pain. we hypothesized that implementation of an accountability tool would decrease apct ordering in these patients. methods: before-and-after study design using an electronic medical record at an urban academic ed from jul-nov 2011, with the electronic accountability tool implemented in oct 2011 for any apct order. inclusion criteria: age >= 18 years, non-pregnant, and chief complaint or triage pain location of abdominal or flank pain. starting oct 17, 2011, resident attempts to order an apct triggered an electronic accountability tool which only allowed the order to proceed if approved by the ed attending physician. the attending was prompted to enter the primary and secondary diagnoses indicating apct, agreement with the need for ct and, if not in agreement, who was requesting the ct (admitting or consulting physician), and their pretest probability (0-100) of the primary diagnosis. patients were placed into two groups: those who presented prior to (pre) and after (post) the deployment of the accountability tool.
background: there has been a paradigm shift in the diagnostic work-up for suspected appendicitis. ed-based staged protocols call for the use of ultrasound prior to ct scanning because of its lack of radiation and the morbidity related to contrast. a barrier to implementation is the lack of 24/7 availability of ultrasound. objectives: to evaluate the impact of the implementation of ed-performed appendix ultrasounds (apus) on ct utilization in the staged workup for appendicitis in the emergency department. methods: we performed a quasi-experimental, before/after study. we compared data from the first 8 months of 2009, before the availability of ed-performed apus, with the same interval in 2011 after introduction of ed apus. we excluded patients who had appendectomies for reasons other than appendicitis or had been diagnosed prior to arrival. no patient identifiers were included in the analysis and the study was approved by the hospital irb. we report the following descriptive statistics (percentages, sensitivities, and absolute utilization changes). conclusion: implementation of ed apus in the staging work-up of appendicitis was associated with a significant reduction in overall ct utilization in the ed. objectives: this study aims to evaluate ed patients' knowledge of radiation exposure from ct and mri scans as well as the long-term risk of developing cancer. we hypothesize that ed patients will have a poor understanding of the risks, and will not know the difference between ct and mri. methods: design -this was a cross-sectional survey study of adult, english-speaking patients at two eds from 6/13/11-8/13/11. setting -one location was a tertiary care center with an annual ed census of 45,000 patient visits and the other was a community hospital with an annual ed census of 35,000 patient visits.
observations -the survey consisted of six questions evaluating patients' understanding of radiation exposure from ct and mri as well as long-term consequences of radiation exposure. patients were then asked their age, sex, race, highest level of education, annual household income, and whether they considered themselves health care professionals. results: there were 500 participants in this study, 315 (of 5,589 total) from the academic center and 185 (of 4,988 total) from the community hospital during the study period. overall, only 10% (95% ci 7-12%) of participants understood the radiation risks associated with ct scanning. 60% (95% ci 56-65%) of patients believed that an abdominal ct had the same or less radiation than a chest x-ray. 25% (95% ci 21-29%) believed that there was an increased risk of developing cancer from repeated abdominal cts. only 22% (95% ci 19-26%) of patients knew that mri scans had less radiation than ct. 44% (95% ci 39-49%) either didn't know or believed that repeated mris were associated with an increased risk of developing cancer. higher educational level, household income, and identification as a health care professional were all associated with correct responses, but even within these groups, a majority gave incorrect responses. conclusion: in general, ed patients do not understand the radiation risks associated with advanced imaging modalities. we need to educate these patients so that they can make informed decisions about their own health care. background: homelessness has been associated with many poor health outcomes and frequent ed utilization. it has been shown that frequent use of the ed in any given year is not a strong predictor of subsequent use. identifying a group of patients who are chronic high users of the ed could help guide intervention. objectives: the purpose of this study is to identify whether homelessness is associated with chronic ed utilization.
methods: a retrospective chart review was performed of the records of the 100 most frequently seen patients in the ed for each year from 2005-2010 at a large, urban academic hospital with an annual volume of 55,000. patients' visit dates, chief complaints, dispositions, and housing status were reviewed. homelessness was defined by self-report at registration. patients were categorized according to their ed utilization: those seen >4 times in at least three of the five years of the study were identified as chronic high utilizers, and those who visited the ed >20 times in at least three of the five years of the study were identified as chronic ultra-high utilizers. descriptive statistics with confidence intervals were calculated, and comparisons were made using non-parametric tests. results: during the 5-year study period, 189,371 unique patients were seen, of whom 0.7% were homeless. 335 patients were identified as frequent users; some patients appeared on the top-100 utilizer lists in multiple years. 67 (20%, 95% ci 16-25) of these patients were identified as homeless. 148 patients were seen >4 times in at least three of the 5 years, of whom 23 (16%, 11-22) were homeless. 12 patients were seen >20 times in at least three of the 5 years, of whom 5 (41%, 19-68) were homeless. our facility has a 40% admission rate; however, non-homeless chronic ultra-high utilizers had an admission rate of 24% and homeless chronic ultra-high utilizers had an admission rate of 14%. conclusion: chronic ultra-high utilizers of our ed are disproportionately homeless and present with lower severity of illness. these patients may prove to be a cost-effective group to house or otherwise engage with aggressive case management. the debate over homeless housing programs and case management solutions can be sharpened by better defining the groups who would most benefit and who represent the greatest potential savings for the health system.
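the utilization categories defined in the methods amount to a simple classification rule over yearly visit counts; a minimal sketch in python, where the function name and the example visit counts are illustrative and not taken from the study:

```python
def classify_utilizer(visits_per_year):
    """classify per the study's rule over a 5-year window of yearly ed visit
    counts: 'chronic high' = >4 visits in at least 3 of 5 years,
    'chronic ultra-high' = >20 visits in at least 3 of 5 years."""
    years_over_4 = sum(1 for v in visits_per_year if v > 4)
    years_over_20 = sum(1 for v in visits_per_year if v > 20)
    if years_over_20 >= 3:
        return "chronic ultra-high"
    if years_over_4 >= 3:
        return "chronic high"
    return "not chronic"

# hypothetical patients, not from the study data
print(classify_utilizer([25, 30, 2, 22, 1]))  # three years with >20 visits
print(classify_utilizer([5, 6, 7, 0, 0]))     # three years with >4 visits
```

note that the ultra-high rule subsumes the high rule, so the ultra-high check must come first.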
background: the prevalence of obese patients presenting to our emergency department (ed) is 38%: obese patients present in disproportionate numbers compared to the general population (us rate = 27%). in spite of this, there is a disconnect in patients' perceptions of weight and health: many patients underestimate their weight, and a key reported barrier to weight loss is patient-provider communication; such discussions have proven to be highly effective in smoking, drug, and alcohol cessation, an important initial step toward promoting wellness. information about patient-provider communication is essential for designing and implementing ed-based interventions to help increase patient awareness about weight-related medical issues and provide counseling for weight reduction. objectives: we assessed patients' perceptions about obesity as a disease and patient communication with their providers through two questions: do you believe your present weight is damaging to your health? has a doctor or other health professional ever told you that you are overweight? methods: a descriptive cross-sectional study was performed in an academic tertiary care ed. a randomized sample of patients (every fifth) presenting to the ed (n = 453) was enrolled. pregnant patients and patients who were medically unstable, cognitively impaired, or unable or unwilling to provide informed consent were excluded. percentages of ''yes'' and ''no'' are reported for each question based on patient bmi, ethnicity, sex, and the number of comorbid conditions. regression analysis was used to determine differences in responses between subgroups. results: among overweight/obese, white/black patients, 42.5% do not feel their weight is damaging to their health and 54.7% reported they have not been told by a doctor that they are overweight. of individuals who have been told by a doctor that they were overweight, 23.2% still believe their present weight is not damaging to their health.
of individuals who have not been told by a doctor that they were overweight, 41.5% believe their present weight is damaging to their health. differences by race and age were not found. p values <0.05 for all reported results. conclusion: our data point toward a disconnect regarding patients' perceptions of health and weight. timely education about the burden of obesity may lead to a decrease in its overall prevalence. (originally submitted as a ''late-breaker.'') objectives: to examine the attitudes and expectations of patients admitted for inpatient care following an emergency department visit. methods: a descriptive study was done by surveying a voluntary sample of adult patients (n = 210) admitted to the hospital from the emergency department in one urban teaching hospital in the midwest. a short, nine-question survey was developed to assess patient attitudes and expectations toward hiv testing, consent, and requirements. analyses consisted of descriptive statistics, correlations, and chi-square analyses. results: the majority of patients report that hiv testing should be a routine part of health care screening (82.4%) and that the hospital should routinely test admitted patients for hiv (78.6%). despite these overall positive attitudes toward hiv testing, the data also suggest that patients have strong attitudes toward consent requirements, with 80% acknowledging that hiv testing requires special consent and 72% reporting that separate consent should be required. the data also showed a statistically significant difference in the proportion of patients who believed that hiv testing is a part of routine health care screening by race (χ² = 6.825, df = 1, p = .009). conclusion: patients' attitudes and expectations toward routine hiv testing are consistent with the cdc recommendations.
emergency departments are an ideal setting to initiate hiv testing, and the findings suggest that patients expect hospital policies to outline procedures for obtaining consent and screening all patients who are admitted to the hospital from the ed. results: the analysis revealed a ''hot spot'', a cluster of 833 counties (24.5%) with high ca rates adjacent to counties with high ca rates, located across the southeastern us (p < 0.001). within these counties, the average ca rate was 14% higher than the national average. a ''cool spot'', a cluster of 548 counties (16.1%) with low rates, was located across the midwest (p < 0.001). in this cool spot the average ca rate was 12% lower than the national average. figures 1 and 2 show us adjusted rates and spatial autocorrelation of ca deaths, respectively. conclusion: we identify geographic disparities in ca mortality and describe the cardiac arrest belt in the southeastern us. a limitation of this analysis was the use of icd-10 codes to identify cardiac arrest deaths; however, no other national data exist. an improved understanding of the drivers of this variability is essential to targeted prevention and treatment strategies, especially given the recent emphasis on development of cardiac resuscitation centers and cardiac arrest systems of care. an understanding of the relation between population density, cardiac arrest count, and cardiac arrest rate will be essential to the design of an optimized cardiac arrest system. we defined ed utilization during the past 12 months as non-users (0 visits), infrequent users (1-3 visits), frequent users (4-9 visits), and super-frequent users (≥10 visits). we compared demographic data, socioeconomic status, health conditions, and access to care between these ed utilization groups. results: overall, super-frequent use was reported by 0.4% of u.s. adults, frequent use by 2%, and infrequent ed use by 19%.
higher ed utilization was associated with increased self-reported fair to poor health (55% for super-frequent, 48% for frequent, 22% for infrequent, 10% for non-ed users). frequent ed users were also more likely to be impoverished, with 31% of super-frequent, 25% of frequent, 13% of infrequent, and 9% of non-ed users reporting a poverty-income ratio <1. adults with higher ed utilization were more likely to report the ed as the place they usually go when sick (10% for super-frequent, 6% for frequent, 2% for infrequent, 0.5% for non-ed users). they also reported greater outpatient resource utilization, with 73% of super-frequent, 48% of frequent, 25% of infrequent, and 10% of non-ed users reporting ≥10 outpatient visits/year. frequent ed users were also more likely than non-ed users to be covered by medicaid (34% for super-frequent, 26% for frequent, 12% for infrequent, 5% for non-ed users). conclusion: frequent ed users were a vulnerable population with lower socioeconomic status, poor overall health, and high outpatient resource utilization. interventions designed to divert frequent users from the ed should also focus on chronic disease management and access to outpatient services, rather than focusing solely on limiting ed utilization. objectives: we explored factors associated with specialty provider willingness to provide urgent appointments to children insured by medicaid/chip. methods: as part of a mixed-methods study of child access to specialty care by insurance status, we conducted semi-structured qualitative interviews with a purposive sample of 26 specialists and 14 primary care physicians (pcps) in cook county, il. interviews were conducted from april to september 2009, until theme saturation was reached. resultant transcripts and notes were entered into atlas.ti and analyzed using an iterative coding process to identify patterns of responses in the data, ensure reliability, examine discrepancies, and achieve consensus through content analysis.
results: themes that emerged indicate that pcps face considerable barriers getting publicly insured patients into specialty care and use the ed to facilitate this process. ''if i send them to the emergency room, i'm bypassing a number of problems. i'm fully aware that i'm crowding the emergency room.'' specialty physicians reported that decisions to refuse or limit the number of patients with medicaid/chip are due to economic strain or direct pressure from their institutions: ''in the last budget revision, we were [told], 'you are losing money, so you need to improve your patient mix.''' in specialty practices with limited medicaid/chip appointment slots, factors associated with appointment success included: high acuity or complexity, a personal request from or an informal economic relationship with the pcp, geography, and patient hardship. ''if it's a really desperate situation and they can't find anybody else, i will make an exception.'' specialists also acknowledged that ''patients who can't get an appointment go to the er and then i am obligated to see them if they're in the system.'' conclusion: these exploratory findings suggest that a critical linkage exists between hospital eds and affiliated specialty clinics. as health systems restructure, there is an opportunity for eds to play a more explicit role in improving care coordination and access to specialty care. albert amini, erynne a. faucett, john m. watt, richard amini, john c. sakles, asad e. patanwala, university of arizona, tucson, az. background: trauma patients commonly receive etomidate and rocuronium for rapid sequence intubation (rsi) in the ed. due to the long duration of action of rocuronium and the short duration of action of etomidate, these patients require prompt initiation of sedatives after rsi. this prevents the potential of patient awareness under pharmacological paralysis, which could be a terrifying experience.
objectives: the purpose of this study was to evaluate the effect of the presence of a pharmacist during trauma resuscitations in the ed on the initiation of sedatives and analgesics after rsi. we hypothesized that pharmacists would decrease the time to provision of sedation and analgesia. methods: this was an observational, retrospective cohort study conducted in a tertiary, academic ed that is a level i trauma center. consecutive adult trauma patients who received rocuronium in the ed for rsi were included during two time periods: 07/01/07 to 07/30/08 (pre-phase -no pharmacy services in the ed) and 07/01/09 to 06/30/11 (post-phase -pharmacy services in the ed). since the pharmacist could not respond to all traumas in the post-phase, this phase was further categorized based on whether the pharmacist was present or absent at the trauma resuscitation. data collected included patient demographics, baseline injury data, and medications used. the median time from rsi to initiation of sedatives and analgesics was compared between the pre-phase group (group 1), post-phase pharmacist-absent group (group 2), and post-phase pharmacist-present group (group 3) using the kruskal-wallis test. results: a total of 200 patients were included in the study (group 1 = 100, group 2 = 70, and group 3 = 30). median age was 35, 48.5, and 54.5 years in groups 1, 2, and 3, respectively (p = 0.005). there were no other differences between groups with regard to demographics, mechanism of injury, presence of traumatic brain injury, glasgow coma scale score, vital signs, ed length of stay, or mortality. median time between rsi and post-intubation sedative use was 13, 15, and 6 minutes in groups 1, 2, and 3, respectively (p < 0.001). median time between rsi and post-intubation analgesia use was 80, 16, and 10 minutes in groups 1, 2, and 3, respectively (p < 0.001). conclusion: the presence of a pharmacist during trauma resuscitations decreases time to provision of sedation and analgesia after rsi.
background: outpatient antibiotics are frequently prescribed from the ed, and limited health literacy may affect compliance with recommended treatments. objectives: to determine whether, among patients stratified by health literacy level, multimodality discharge instructions improve compliance with outpatient antibiotic therapy and follow-up recommendations. methods: this was a prospective randomized trial that included consenting patients discharged with outpatient antibiotics from an urban county ed with an annual census of 100,000. patients unable to receive text messages or voicemails were excluded. health literacy was assessed using a validated health literacy assessment, the newest vital sign (nvs). patients were randomized to a discharge instruction modality: 1) usual care, typed and verbal medication and case-specific instructions; 2) usual care plus text-messaged instructions sent to the patient's cell phone; or 3) usual care plus voicemailed instructions sent to the patient's cell phone. antibiotic pick-up was verified with the patient's pharmacy at 72 hours. patients were called at 30 days to determine antibiotic compliance. z-tests were used to compare 72-hour antibiotic pickup and patient-reported compliance across instructional modality and nvs score groups. results: 758 patients were included (55% female, median age 30, range 5 months to 71 years); 98 were excluded. 23% had an nvs score of 0-1, 31% 2-3, and 46% 4-6. the proportion of prescriptions filled at 72 hours varied significantly across nvs score groups; self-reported medication compliance at 30 days revealed no difference across instructional modalities or nvs scores (table 1). conclusion: in this sample of urban ed patients, 72-hour prescription pickup varied significantly by validated health literacy score, but not by instruction delivery modality. in this sample, patients with lower health literacy are at risk of not filling their outpatient antibiotics in a timely fashion.
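the z-test named in the methods is the standard two-sample (pooled) test of proportions; a minimal sketch in python, where all counts are hypothetical and not taken from the study:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """pooled two-sample z statistic for comparing proportions, e.g.
    72-hour prescription pickup in one group (x1 of n1) vs. another (x2 of n2)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

# hypothetical counts: 150/250 filled vs. 120/250 filled
z = two_proportion_z(150, 250, 120, 250)
print(round(z, 2))
```

a |z| above roughly 1.96 corresponds to p < 0.05 for a two-sided comparison.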
background: the care transitions measure (ctm-3) has been developed, validated, and utilized to study the processes of care involved in successful care transitions from inpatient to outpatient settings, but has not been utilized in the ed. objectives: we hypothesized that the ctm-3 could be successfully implemented in the ed without differential item difficulty by age, sex, education, or race, and would be associated with measures of quality of care and likelihood of following physician recommendations. methods: a descriptive study design based on exit surveys was used to measure ctm-3 scores and likelihood of following treatment recommendations. surveys were administered to a daily cross-sectional sample of all patients leaving the ed between 7a-12a by research assistants in an urban academic ed setting for 3 weeks in november 2011. we report means and standard deviations, and analysis of variance to identify differences in ctm-3 scores for those who planned and did not plan to follow ed recommendations. results: 750 surveys were completed; patients were 43 ± 19 years old, 58% black, 61% female, 56% with at least some college education, and 38% were admitted. the average ctm-3 score was 87.1 ± 21.6 (range 0-100). scores were not associated with sex (p = 0.57), race (p = 0.19), or education level (p = 0.25). lower ctm-3 scores were associated with increasing age (p = 0.03) and with patient perceptions that the ed team was less likely to use words that they understood, listen carefully to them, inspire their confidence and trust, or encourage them to ask questions (all p < 0.01). those who reported they were ''very likely'' to follow ed treatment had an average score of 89 ± 21, while those who were ''unlikely'' or ''very unlikely'' to follow ed treatment plans had an average score of 47 ± 28 (p = 0.00). conclusion: the ctm-3 performs well in the ed and exhibited differential item difficulty only by age; there was no significant difference by race, sex, or education level.
furthermore, it is highly associated with likelihood of following physician recommendations. future studies will focus on the ability of ctm-3 scores to discriminate between patients who did or did not experience a subsequent ed visit or rehospitalization. age and race were found to be significant predictors of the race pathway. regression of the data by race revealed that blacks (or 1.9: ci 1.3-2.6; p < 0.0002), hispanics (or 3.0: ci 1.3-2.6; p = 0.0001), and asians (or 2.3: ci 1.1-4.9; p = 0.03) were more likely to enter the race cohort than were whites; however, much of this discrepancy is accounted for by age. the mean age of minority patients was 62 years, while white patients were older at 71 years (p = 0.002). conclusion: in a diverse demographic population we found that racial minorities were presenting at younger ages for chest pain and were more likely to receive cardiac testing at bedside than their white counterparts, and hence were selected to a lower level of care (non-monitored unit). background: expanding insurance coverage is designed to improve access to primary care and reduce use of emergency services. whether expanding coverage achieves this is of paramount importance as the united states prepares for the affordable care act. objectives: we examined ed and outpatient department use after the state children's health insurance program (schip) coverage expansion, focusing on adolescents (a major target group for schip) versus young adults (not targeted). we hypothesized that coverage would increase use of outpatient services and that emergency department use would decrease. methods: using the national ambulatory medical care survey and the national hospital ambulatory medical care survey, we analyzed years 1992-1996 as baseline and then compared use patterns in 1999-2009 after schip launch. primary outcomes were population-adjusted annual visits to the ed versus non-emergency outpatient settings.
interrupted time-series analyses were performed on use rates in the ed and outpatient departments for adolescents (11-18 years old) and young adults (19-29 years old) in the pre-schip and schip periods. outpatient-to-ed ratios were calculated and compared across time periods. results: the mean number of outpatient adolescent visits increased by 299 visits per 1000 persons (95% ci, 140-457), while there was no statistically significant increase in young adult outpatient visits across time periods. there was no statistically significant change in the mean number of adolescent ed visits across time periods, while young adult ed use increased by 48 visits per 1000 persons (95% ci, 24-73). the adolescent outpatient-to-ed ratio increased by 1.0 (95% ci, 0.49-1.6), while the young adults' ratio decreased by 0.53 across time periods (95% ci, -0.90 to -0.16). conclusion: since schip, adolescent non-ed outpatient visits increased while ed visits remained unchanged. in comparison to young adults, expanding insurance coverage to adolescents improved access to health care services and suggests a shift to non-ed settings. as an observational study, we are unable to control for secular trends during this time period; as an ecological study, we are unable to examine individual variation. expanding insurance through the affordable care act of 2010 will likely increase use of outpatient services but may not decrease emergency department volumes.

background: cancer patients are receiving a greater proportion of their care on an outpatient basis. the effect of this change in oncology care patterns on ed utilization is poorly understood. objectives: to examine the characteristics of ed utilization by adult cancer patients. methods: between july 2007 and march 2009, all new adult cancer patients referred to a tertiary care cancer centre were recruited into a study examining psychological distress. these patients were followed prospectively until september 2011.
the collected data were linked to administrative data from three tertiary care eds. variables evaluated in this study included basic

we have previously shown that reducing non-value-added activities through the application of the lean process improvement methodology improves patient satisfaction, physician productivity, and emergency department length of stay. objectives: in this investigation, we tested the hypothesis that non-value-added activities reduce physician job satisfaction. methods: to test this hypothesis, we conducted time-motion studies on attending emergency physicians working in an academic setting and categorized their activities into value-added work (time in room with patient, time discussing cases and educating medical learners, time in room with patient and learner), necessary non-value-added activities (charting, sign-out, looking up labs), and unnecessary non-value-added activities (looking for things, looking for people, on the phone). the physicians were then surveyed using a 10-point likert scale to determine their relative satisfaction with each of the individual tasks (1 = worst part of day, 10 = best part of day). results: physicians spent 46% of their shift performing value-added work, 38% performing necessary non-value-added activities, and 16% performing unnecessary non-value-added activities (waste). weighted physician satisfaction (satisfaction x [percent time spent performing the activity / percent time engaged in the activity category]) was highest when the physician was performing value-added work (8.75) compared to performing either necessary non-value-added work (3.35) or waste (2.61). conclusion: the attending physicians we studied spent the majority of their time performing non-value-added activities, which were associated with lower satisfaction. application of process improvement techniques such as lean, which focus on reducing non-value-added work, may improve emergency physician job satisfaction.
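the weighting scheme described above (each task's likert score weighted by that task's share of time within its activity category) can be sketched in a few lines. the task names and numbers below are hypothetical illustrations, not the study's data:

```python
def weighted_satisfaction(tasks):
    """category-level satisfaction: each task's 1-10 likert score is weighted
    by that task's share of the total time spent in the category.
    tasks: list of (satisfaction, percent_of_shift) pairs for one category."""
    category_time = sum(pct for _, pct in tasks)
    return sum(sat * (pct / category_time) for sat, pct in tasks)

# hypothetical breakdown of the "waste" category (16% of the shift):
waste_tasks = [(3.0, 6.0),   # looking for things
               (2.5, 4.0),   # looking for people
               (2.2, 6.0)]   # on the phone
print(round(weighted_satisfaction(waste_tasks), 2))
```

note that if every task in a category has the same satisfaction score, the weights drop out and the category score reduces to that plain likert score.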
background: rocuronium and succinylcholine are the most commonly used paralytics for rapid sequence intubation (rsi) in the ed. after rsi, patients need sustained sedation while they are mechanically ventilated. however, the longer duration of action of rocuronium may influence subsequent sedation dosing while the patient is therapeutically paralyzed. objectives: we hypothesized that patients who receive rocuronium would be more likely to receive lower doses of post-rsi sedation compared to patients who receive succinylcholine. methods: this was an observational, retrospective cohort study conducted in a tertiary, academic ed. consecutive adult patients who received rsi using etomidate for induction between 07/01/09 and 06/30/10 were included. patients were then categorized based on whether they received rocuronium or succinylcholine for paralysis. the dosing of post-rsi sedative infusions was compared at 0, 30, 60, and 120 minutes after initiation between the two groups using the wilcoxon rank-sum test. results: a total of 254 patients were included in the final analysis (rocuronium = 127, succinylcholine = 127). mean age was 52 and 47 years in the rocuronium and succinylcholine groups, respectively (p = 0.04). there were no other baseline differences between groups with regard to demographics, reason for intubation, stroke, traumatic brain injury, glasgow coma scale score, pain scores, or vital signs. in the overall cohort, 90.2% (n = 229) of patients were given a sedative infusion or bolus in the ed. most patients were initiated on propofol (n = 169) or midazolam (n = 49) infusions. median propofol infusion rates at 0, 30, 60, and 120 minutes were 20, 20, 27.5, and 30 mcg/kg/min in the rocuronium group and 20, 40, 45, and 45 mcg/kg/min in the succinylcholine group, respectively. the difference was statistically significant at 30 (p < 0.001) and 60 (p = 0.003) minutes.
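the infusion-rate comparison above uses the wilcoxon rank-sum test. a minimal stdlib sketch of that test (normal approximation with mid-ranks for ties; the sample rates below are hypothetical, not the study's patient-level data):

```python
import math

def rank_sum_test(a, b):
    """wilcoxon rank-sum (mann-whitney) test via the normal approximation.
    returns (z, two_sided_p); tied values receive mid-ranks."""
    pooled = sorted((v, i) for i, v in enumerate(list(a) + list(b)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                       # extend over a run of tied values
        mid_rank = (i + j) / 2.0 + 1.0   # average rank of the tied run
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = mid_rank
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                  # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = 0.0 if sigma == 0 else (w - mu) / sigma
    return z, math.erfc(abs(z) / math.sqrt(2.0))

# hypothetical propofol rates (mcg/kg/min) at 30 minutes in two small groups:
z, p = rank_sum_test([20, 20, 25, 30, 20], [40, 35, 45, 40, 50])
```

for samples this small an exact test would normally be preferred; the normal approximation shown here is what makes the test practical for the study's group sizes of 127.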
median midazolam infusion rates at 0, 30, 60, and 120 minutes were 2, 2, 2, and 3 mg/hour in the rocuronium group and 2, 3, 4, and 4.5 mg/hour in the succinylcholine group, respectively. the difference was statistically significant at 60 (p = 0.003) and 120 (p = 0.04) minutes. conclusion: patients who receive rocuronium are more likely to receive lower doses of sedative infusions post-rsi due to sustained therapeutic paralysis. this may put them at risk for being awake under paralysis.

what is the impact of the implementation of an

there was a difference in presenting pain (p < 0.001), stress (p < 0.001), and anxiety (p < 0.001) among patients who received an opioid in the ed. there was a difference in presenting pain (p < 0.001) for patients discharged with an opioid prescription, but not for stress (p = 0.32) or anxiety (p = 0.90). conclusion: patient-reported pain, stress, and anxiety are higher among patients who received an opioid in the ed than in those who did not, but only pain is higher among patients who received a discharge prescription for an opioid.

methods: this was a prospective, randomized crossover study on the use of gvl and dl by incoming pediatric interns prior to advanced life support training. at the start of the study, the interns received a didactic session and expert modeling of the use of both devices for intubation. two scenarios were used: (1) normal intubation with a standard airway and (2) difficult intubation with tongue edema and pharyngeal swelling. interns then intubated a laerdal simbaby in each scenario with both gvl and dl for a total of four randomized intubation scenarios. primary outcomes included time to successful intubation and the rate of successful intubation. the interns also rated their satisfaction with the devices using a visual analog scale (0-10) and chose their preferred device for their next intubation. results: 29 interns were included in this study.
in the normal airway scenario, there were no differences in the mean time for intubation with gvl or dl (62.9 ± 24.1 vs 61.8 ± 26.2 seconds, p = ns) or the number of interns who performed successful intubation (23 vs 22, p = ns). in the difficult airway scenario, the interns took longer to intubate with gvl than dl (92.3 ± 26.6 vs 59.9 ± 22.7 seconds, p = 0.008), but there were no differences in the number of successful intubations (17 vs 19, p = ns). interns rated their satisfaction higher for gvl than dl (7.3 ± 1.8 vs 6.5 ± 1.5, p = 0.05), and gvl was chosen as the preferred device for their next intubation by a majority of the interns (19/29, 66%). conclusion: for novice clinicians, gvl does not improve the time to intubation or intubation success.

objectives: to determine the time to intubation, the number of attempts, and the occurrence of hypoxia in patients intubated with a c-mac device versus those intubated using a standard laryngoscope. methods: randomized controlled trial using exception from informed consent that included patients undergoing endotracheal intubation with a standard laryngoscope at an urban level i trauma center. eligible patients were randomized to undergo intubation using the c-mac or standard laryngoscopy. standard laryngoscopy was performed using a c-mac device laryngoscope with the video output obstructed to ensure equivalent laryngoscope blades in the two groups. data were collected by a trained research assistant at the patient's bedside and by video review by the investigators. the number of attempts made, the initial and lowest oxygen saturation (spo2), and the total time until the intubation was successful were recorded. hypoxia was defined as an oxygen saturation <93%. data were compared with wilcoxon rank-sum and chi-square tests.
results: thirty-eight patients were enrolled, 20 (70% male, median age 58, range 28 to 86, median spo2 97%, range 79 to 100) in the standard laryngoscopy group and 18 (67% male, median age 58, range 19 to 73, median spo2 96.5%, range 78 to 100) in the c-mac group. the median number of attempts for standard laryngoscopy was 1, range 1 to 3, and for c-mac was 1, range 1 to 2 (p = 0.43). the median time to intubation for the standard laryngoscopy group was 54 seconds (range 7 to 89) and for the c-mac group was 41 seconds (range 4 to 101) (p = 0.05). hypoxia was detected in 5/20 (20%) in the standard laryngoscopy group and 1/18 (6%) in the c-mac group (p = 0.15). the median decrease in oxygen saturation during the attempt was 5.4% (range 0% to 31%) for the standard laryngoscopy group and 2.3% (range 0% to 16%) for the c-mac group. conclusion: we did not detect a difference in the number of attempts, the occurrence of hypoxia, or the diagnosis of aspiration pneumonia between standard laryngoscopy and the c-mac. the time to successful intubation was shorter for patients intubated with the c-mac. the c-mac device appears to be superior to standard laryngoscopy for emergent endotracheal intubation. (originally submitted as a ''late-breaker.'')

background: aspiration pneumonia is a complication of endotracheal intubation that may be related to the difficulty of the airway procedure. objectives: to determine the association of the device used, the time to intubation, the number of attempts to intubate, and the occurrence of hypoxia with the subsequent development of aspiration pneumonia. methods: this was a prospective observational study of patients undergoing endotracheal intubation by emergency physicians at an urban level i trauma center conducted from 7/1/2010 until 11/1/2011. the device used on the initial attempt to intubate was at the discretion of the treating physician. data were collected by a trained research assistant at the patient's bedside.
the device used, the number of attempts made to intubate, the lowest oxygen saturation during the attempt, and the total time until intubation was successfully accomplished were recorded. patients' medical records were reviewed for the subsequent diagnosis of aspiration pneumonia. hypoxia was defined as an oxygen saturation <93%. data were analyzed using multinomial logistic regression and odds ratios (or). results: 654 patients were enrolled; 141 (22%) subsequently developed aspiration pneumonia. 328 were intubated with a standard laryngoscope (sl), 277 using the c-mac, 26 with an intubating laryngeal mask, and 23 with nasotracheal intubation (ni) (or 0.87, 95% ci = 0.70-1.06). comparison of individual devices versus sl did not show an association by device type. the median number of attempts for patients with aspiration pneumonia was 1, range 1 to 3, and for those without was 1, range 1 to 9 (or 0.78, 95% ci = 0.43-1.38). the median time to intubation for patients who developed aspiration pneumonia was 55 seconds (range 4 to 756) and for those who did not was 54 seconds (range 4 to 721) (or 1.00, 95% ci = 0.99-1.00). hypoxia during intubation was detected in 53/141 (38%) in the aspiration pneumonia group and 175/513 (34%) in the no aspiration pneumonia group (or 1.06, 95% ci = 0.65-1.72). conclusion: there was no association between the device used, the number of attempts, the time to intubation, or the occurrence of hypoxia during the intubation and the subsequent occurrence of aspiration pneumonia.

background: japanese census data estimate that 35 million people, or nearly 29% of the overall population, will be over age 65 by the year 2020. similar trends are apparent throughout the developed world. although increased patient age affects airway management, comprehensive information on emergency airway management for the elderly is lacking.
objectives: we sought to characterize emergency department (ed) airway management for the elderly in japan, including success rates and major adverse events, using a large multi-center registry. methods: design and setting: we conducted a multi-center prospective observational study using the japanese emergency airway network (jean) registry of eds at 11 academic and community hospitals in japan between 2010 and 2011 inclusive. data fields included ed characteristics, patient and operator demographics, methods of airway management, number of attempts, success rate, and adverse events. participants: patient inclusion criteria were all adult patients who underwent emergent tracheal intubation in the ed. primary analysis: patients were divided into two groups defined as follows: 18 to 64 years old and over 65 years old. we describe primary success rates and major adverse events using simple descriptive statistics. categorical data are reported as proportions and 95% confidence intervals (cis). results: the database recorded 2710 patients (capture rate 98%) and 2623 met the inclusion criteria. of the 2623 patients, 1104 were 18 to 64 years old (42%) and 1519 were over 65 years old (58%). the older group had a significantly higher success rate at first-attempt intubation (1074/1519; 70.7%, 95% ci 68.8-72.6%) compared with the younger group (710/1104; 64.3%, 95% ci 61.9-66.7%). the older group had similar major adverse event rates (112/1519; 7.4%, 95% ci 6.3-8.5%) compared with the younger group (83/1104; 7.5%, 95% ci 6.2-8.8%). (see table 1)

background: the degree to which a patient's report of pain is associated with changes in blood pressure, heart rate, and respiratory rate is not known. objectives: to determine to what degree a standardized painful stimulus effects a change in systolic blood pressure (sbp), diastolic blood pressure (dbp), heart rate (hr), or respiratory rate (rr), and to compare changes in vital signs between patients based on pain severity.
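the proportion confidence intervals quoted in the jean registry abstract above are standard binomial intervals. a minimal sketch using the simple wald approximation (the registry may have used a slightly different method, so the bounds here can differ from the published 68.8-72.6% by a few tenths of a percent):

```python
import math

def wald_ci(successes, n, z=1.96):
    """point estimate and approximate 95% wald interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1.0 - p) / n)
    return p, p - half_width, p + half_width

# first-attempt success in the older group (1074 of 1519 patients):
p, lo, hi = wald_ci(1074, 1519)
print(round(100 * p, 1), round(100 * lo, 1), round(100 * hi, 1))
```

the wald interval is adequate here because n is large and p is far from 0 or 1; for small samples or extreme proportions the wilson interval behaves better.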
methods: prospective observational study of healthy human volunteers. subjects had their sbp, dbp, hr, and rr measured prior to pain exposure, immediately after, and 10 minutes after. pain exposure consisted of subjects placing their hand in a bath of 0 degree water for 45 seconds. the bath was divided into two sections; the larger half was the reservoir of cooled water monitored to be 0 degrees, the other half filled from constant overflow over the divider. water drained from this section into the cooling unit and was then pumped up into the base of the reservoir through a diffusion grid. subjects completed a 100 mm visual analog scale (vas) representing their perceived pain during the exposure and graded their pain as minimal, moderate or severe. data were compared using 95% confidence intervals. results: 90 subjects were enrolled, mean pain vas 40 mm, range 0 to 77, 49 reported mild pain, 41 moderate pain, and 0 severe pain. the percent change from baseline in vital signs during the exposure and 10 minutes after are presented in the table. conclusion: there was a wide variety in reported pain among subjects exposed to a standard painful stimulus. there was a larger change in heart rate during the exposure among subjects who described a standardized painful exposure as moderate than in those who described it as severe. the small observed changes in blood pressure and respiratory rate seen during the exposure did not differ by pain report or persist after 10 minutes. background: vital signs are often used to validate intensity of pain. however, few studies have looked at the capacity of vital signs to estimate pain intensity, particularly in patients with a diagnosis that a majority of physicians would agree produce significant pain in the ed. objectives: to determine the association between pain intensity and vital signs in consecutive ed patients and in a sub-group of patients with diagnosis known to cause significant pain. 
methods: we performed a post-hoc analysis of prospectively acquired data from a cohort study done in an urban teaching hospital with computerized triage and nursing records. we included all consecutive ed adult patients (≥16 years old) who had any level of pain intensity measured during triage, from march 2008 to november 2010. the primary outcome was the mean heart rate and systolic and diastolic blood pressure for every pain intensity level from 1 to 10 on a verbal numerical scale. our secondary outcomes were the same but limited to patients with the following diagnoses: fracture, dislocation, and renal colic. we performed descriptive statistics and one-way and two-way anovas when appropriate. results: during our study period, 42,947 patients ≥16 years old were triaged with a pain intensity of at least 1/10 and 3939 had a diagnosis known to cause significant pain. 56.5% of patients were female, with a mean pain intensity of 6.8/10 and a mean age of 47.9 years (±19.3); 22.3% were ≥65 years old. there was a statistically significant difference (p < 0.05) in mean heart rate and systolic and diastolic blood pressure for each level of pain intensity; e.g., the difference between 1/10 and 10/10 for mean heart rate was 3.9 beats per minute, for systolic pressure 4.0 mmhg, and for diastolic 4.5 mmhg. results are similar for painful diagnoses: the difference for mean heart rate was 0.3 beats per minute, for systolic pressure 6.5 mmhg, and for diastolic 8.8 mmhg. however, these differences are not clinically significant. conclusion: although our study is a post hoc analysis, pain intensity, heart rate, and systolic and diastolic pressures during triage are usually reliable data, and a prospective study would likely produce the same results. these vital signs cannot be used to estimate or validate pain intensity in the emergency department.

8% had a positive urine drug screen.
logistic multivariate regression analyses revealed the following factors to be significantly associated with the risk of having an abnormal head ct: association with seizure (p = 0.0072); length of time of loss of consciousness, ranging from none to 0-30 min to >30 min (p = 0.0013); alteration of consciousness (p = 0.00009); post-traumatic amnesia (p = 0.0132); alcohol intake prior to injury (p = 0.0003); and initial ed gcs (p = 0.0255). conclusion: in an emergency department cohort of patients with traumatic brain injury, symptoms including loss of or alteration in consciousness, seizure, post-traumatic amnesia, and alcohol intake appear to be significantly associated with abnormal findings on head ct. these clinical findings on presentation may be useful in helping triage head injury patients in a busy emergency department, and can further define the need for urgent or emergent imaging in patients without clearly apparent injuries.

background: the etiology of neurogenic shock is classically attributed to diminished peripheral vascular resistance (pvr) secondary to loss of sympathetic outflow to the peripheral vasculature. however, the sympathetic nervous system also controls other key elements of the cardiovascular system, such as the heart and capacitance vessels, and disruptions in their function could complicate the hemodynamic presentation. objectives: we sought to systematically examine the hemodynamic profiles of a series of trauma patients with neurogenic shock. methods: consecutive trauma patients with documented spinal cord injury complicated by clinical shock were enrolled. hemodynamic data, including systolic and diastolic blood pressure, heart rate (hr), impedance-derived cardiac output, pre-ejection period (pep), left ventricular ejection time (lvet), and calculated systemic pvr, were collected in the ed.
data were normalized for body surface area, and a validated integrated computer model of human physiology (guyton model) was used to analyze and categorize the hemodynamic profiles based on the etiology of the hypotension using a systems analysis. correlation between markers of sympathetic outflow (hr, pep, lvet) and shock etiology category was examined. results: of 9 patients with traumatic neurogenic shock, the etiology of shock was a decrease in pvr in 4 (45%; 95% ci 19 to 73%), loss of vascular capacitance in 3 (33%; 12 to 65%), and mixed peripheral resistance and capacitance in 2 (22%; 6 to 55%). the markers of sympathetic outflow had no correlation with any of the elements in the patients' hemodynamic profiles. conclusion: neurogenic shock is often considered to have a specific, well-characterized pathophysiology. results from this study suggest that neurogenic shock can have multiple mechanistic etiologies and represents a spectrum of hemodynamic profiles. this understanding is important for the treatment decisions made in the management of these patients.

a three-year (2008-2010) pre-post intervention study of trauma patients requiring massive blood transfusion (mbt) was performed. we divided the population into two cohorts: a pre-protocol group (pre), which included trauma patients receiving mbt not aided by a protocol, and a post-protocol group (post), who underwent mbt via the mbt protocol (mbtp). patient demographics, 24-hour blood component totals, timing of blood component delivery, trauma injury severity score (iss), initial glasgow coma scale (gcs) score, trauma mechanism, and patient mortality data were collected and analyzed using fisher's exact tests, student's t-tests, and mann-whitney u tests. results: fifty-two patients were included for study. median times to delivery of first products were reduced for prbcs (4 minutes), ffp (16 minutes), and platelets (33 minutes) between the pre and post cohorts.
median time to delivery of any subsequent blood product was significantly reduced (10 minutes) in the post cohort (p = 0.024). the median number of blood products delivered was increased by 5.5 units for prbcs, 4 units for ffp, 0.5 units for platelets, and 1 unit for cryoprecipitate after implementation of the mbtp. the percentage of patients receiving higher blood product ratios (>3:1) was reduced between the pre and post cohorts for prbc-to-ffp (25% reduction) and prbc-to-platelet ratio groups (7% reduction). despite improved transfusion timing and ratios, we found no significant difference in mortality (p = 0.129) between pre and post cohorts when we adjusted for injury severity. conclusion: protocolized delivery of massive blood transfusion might reduce time to product availability and delivery, though it is unclear how this affects patient mortality in all us trauma centers.

background: burns are common injuries that can result in significant scarring leading to poor function and disfigurement. unlike mechanical injuries, burns often progress both in depth and size over the first few days after injury, possibly due to inflammation and oxidative stress. a major gap in the field of burns is the lack of an effective therapy that reduces burn injury progression. objectives: since mesenchymal stem cells (msc) have been shown to improve healing in several injury models, we hypothesized that species-specific msc would reduce injury progression in a rat comb burn model. methods: using a 150 gm brass comb preheated to 100 degrees celsius, we created four rectangular burns, separated by three unburned interspaces, on both sides of the backs of male sprague-dawley rats (300 g). the interspaces represented the ischemic zones surrounding the central necrotic core. left untreated, most of these interspaces become necrotic.
in an attempt to reduce burn injury progression, 20 rats were randomized to tail vein injections of 1 ml of rat-specific msc (10^6 cells/ml) (n = 10) or normal saline (n = 10) 60 minutes after injury. tracking of the stem cells was attempted by injecting several rats with quantum dot-labeled msc. results: by four days post-injury, all of the interspaces in the control rats (54/54, 100%) had become necrotic, while in the experimental group 29/48 (60%) of the interspaces became necrotic (fisher's exact test; p < 0.001). at 7 days, the percentage of the unburned interspaces that became necrotic in the msc-treated group was significantly less than in the control group (80% vs. 100%, p < 0.0001). we were unable to identify any quantum dot-labeled msc in the injured skin. no adverse reactions or wound infections were noted in rats injected with msc. conclusion: intravenous injection of rat msc reduced burn injury progression in a rat comb burn model.

although basic demographics of bicyclists in accidents have been described, there is a paucity of data describing the street surface involved in accidents, and whether designated bicycle roadways offer protection. this lack of information limits informed attempts to change infrastructure in a way that will decrease morbidity and/or mortality of cyclists. objectives: to identify road surface types involved in pedal cyclist injuries and determine the relationship between injury severity and the use of designated bicycle roadways (dbr) versus non-designated roadways (ndr). we hypothesized that more severe injuries would happen at intersections regardless of dbr versus ndr. methods: this retrospective cohort study reviewed the trauma database from a level i trauma center in tucson, az. we identified all bicyclists in the database injured in accidents involving a motor vehicle from january 1, 2009, through december 31, 2009.
the patients were then linked to a local government database that documents the location (latitude/longitude) and direction of travel of the cyclist. seventy-eight total incidents were identified and categorized as occurring on a dbr versus ndr and occurring at an intersection versus not at an intersection. results: only one patient who arrived at the trauma center died. fifty-one of the accidents (65%) occurred on dbrs; 63% of accidents occurring on dbrs took place in intersections. conversely, 63% of accidents on ndrs occurred outside of intersections. the odds of an injury occurring at an intersection versus not at an intersection were 2.9 times higher (95% ci: 1.0-8.5) for dbrs compared to ndrs. the odds of a trauma being severe (admitted) versus not severe (discharged home) were 2.7 times higher (95% ci: 0.9-8.7) when a collision occurred not at an intersection versus at an intersection. conclusion: contrary to our hypothesis, in this study group severe injuries were more likely outside of an intersection. however, intersections on dbrs were identified as problematic, as cyclists on a dbr were more likely to be injured in an intersection. future city planning could target improved cyclist safety in intersections.

background: minor thoracic injury (mti) is frequent, and a significant proportion of patients will still have moderate to severe pain at 90 days. there is a lack of risk factors to orient specific treatment at ed discharge. objectives: to determine risk factors for having pain (≥3/10 on a numerical pain intensity score from 0 to 10) at 90 days in a population of minor thoracic injury patients discharged from the ed. methods: a prospective multi-center cohort study was conducted in four canadian eds from november 2006 to january 2010. all consecutive patients, 16 years and older, with mti (with or without rib fracture), a normal chest x-ray, and discharged from the ed were eligible. a standardized clinical and radiological evaluation was done at 1 and 2 weeks.
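the intersection odds ratio reported in the bicycle roadway study above comes from a 2x2 table. a minimal sketch, with cell counts reconstructed (assumed) from the reported totals and percentages rather than taken from the study's data:

```python
def odds_ratio(a, b, c, d):
    """odds ratio for a 2x2 table:
         exposed:   a with outcome, b without
         unexposed: c with outcome, d without"""
    return (a * d) / (b * c)

# counts reconstructed (assumed) from the abstract's percentages:
# 51 dbr crashes (~63% at intersections -> 32 vs 19) and 27 ndr crashes
# (~63% outside intersections -> 10 at intersections vs 17 not).
print(round(odds_ratio(32, 19, 10, 17), 1))  # -> 2.9
```

the rounded value matches the reported odds ratio of 2.9, which supports the reconstruction, though the study's exact cell counts are not given in the abstract.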
standardized phone interviews were done at 30 and 90 days. pain evaluation occurred at five time points (ed visit, 1 and 2 weeks, 30 and 90 days). using a pain trajectory model (sas), we planned to identify groups with different pain evolution at 90 days. the final model was based on the importance of differences in pain evolution, confidence intervals, and the number of patients in each group. to judge the adequacy of the final model, we examined whether the posterior probabilities (i.e., a participant's probability of belonging to a certain trajectory group) averaged at least 70% for each trajectory group. then, using logistic multinomial regression with the low-risk group as the control group, we identified significant predictors of patients in the moderate- and high-risk groups having pain at 90 days. results: in our cohort of 1,057 patients, 1,025 had an evaluation at 90 days. we identified three groups at low (34%), moderate (50.6%), and high risk (15.4%) of having pain ≥3/10 at 90 days. using risk factors identified by univariate analysis, we created a model to identify patients at risk containing the following predictors: age ≥30 years old, women, current smoker, two or more rib fractures, complaint of dyspnea, and saturation <95% at initial visit. posterior probabilities for the low-, moderate-, and high-risk groups were 76%, 74%, and 88%. conclusion: to our knowledge, this is the first study to identify potential risk factors for having pain at 90 days after minor thoracic injury. these risk factors should be validated in a prospective study to guide a specific treatment plan.

the use of ultrasound to evaluate traumatic optic neuropathy
benjamin burt, lisa montgomery, cynthia garza meissner, sanja plavsic-kupesic, nadah zafar; ttuhsc - paul l foster school of medicine, el paso, tx

background: whenever head trauma occurs, there is the possibility for a patient to have an optic nerve injury.
the current method to evaluate optic nerve swelling is to look for proptosis. however, by the time proptosis presents, significant damage has already occurred. therefore, there is a need to establish a method to evaluate nerve injury prior to the development of proptosis. objectives: fundamental to understanding the pathophysiology of optic nerve injury and repair is an understanding of the optic nerve's temporal response to trauma, including blood flow changes and vascular reactivity. the aim of our study was to assess the dependability and reproducibility of ultrasound techniques to sequence optic nerve healing and monitor the vascular response of the ophthalmic artery following an optic nerve crush. methods: the rat's orbit was imaged prior to and following a direct injury to the optic nerve, at 72 hours and at 28 days. 3d, 2d, and color doppler techniques were used to detect blood flow and the course of the ophthalmic artery and vein, to evaluate the course and diameter of the optic nerve, and to assess the extent of optic nerve trauma and swelling. the parameters used to evaluate healing over time were the pulsatility and resistance indices of the ophthalmic artery. results: we have established baseline ultrasound measurements of the optic nerve diameter, normal resistance and pulsatility indices of the ophthalmic artery, and morphological assessment of the optic nerve in a rat model. longitudinal assessment of 2d and 3d ultrasound parameters was used to evaluate the vascular response of the ophthalmic artery to optic nerve crush injury. we have developed a rat model system to study traumatic optic nerve injury. the main advantages of ultrasound are low cost, non-invasiveness, lack of ionizing radiation, and the potential to perform longitudinal studies. our preliminary data indicate that 2d and 3d color doppler ultrasound may be used for the evaluation of ophthalmic artery and total orbital perfusion following trauma.
once baseline ultrasound and doppler measurements are defined there is the opportunity to translate the rat model to evaluate patients with head trauma who are at risk for optic nerve swelling and to assess the usefulness of treatment interventions. background: alcoholism is a chronic disease that affects an estimated 17.6 million american adults. a common presentation to the emergency department (ed) is a trauma patient with altered sensorium who is presumed to be alcohol intoxicated by the physicians based on their olfactory sense. often ed physicians may leave patients suspected of alcohol intoxication aside until the effects wear off, potentially missing major trauma as the source of confusion or disorientation. this practice often results in delays in diagnosing acute, potentially life-threatening injuries in patients with presumed alcohol intoxication. objectives: this study will determine the accuracy of physicians' olfactory sense for diagnosing alcohol intoxication. methods: patients suspected of major trauma in the ed underwent an evaluation by the examining physician for the odor of alcohol as well as other signs of intoxication. each patient had a determination of blood alcohol level. alcohol intoxication was defined as a serum ethanol level ≥80 mg/dl. data were reported as means with 95% confidence intervals (95% ci) or proportions with inter-quartile ranges (iqr 25%-75%). results: one hundred fifty-one patients (70% males) were enrolled in the study, median age 45 years (iqr 33-56). the median glasgow coma scale score was 15. the median level of training of the examining physician was pgy 4 (iqr pgy 3 -attending). prevalence of alcohol intoxication was 43% (95% ci: 35% to 51%).
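the proportion confidence intervals quoted in these abstracts are consistent with a wilson score interval; a minimal sketch, using the intoxication prevalence above (the count of 65 intoxicated patients is inferred from 43% of 151, an assumption since the raw count is not reported):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 43% of 151 patients -> 65 intoxicated (inferred count)
lo, hi = wilson_ci(65, 151)
print(round(lo * 100), round(hi * 100))  # -> 35 51
```

the result matches the reported 95% ci of 35% to 51%.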
operating characteristics: physician assessment of alcohol intoxication, sensitivity 84% (95% ci: 73% to 92%), specificity 87% (95% ci: 78% to 93%), positive likelihood ratio 6.6 (95% ci: 3.8 to 11.6), negative likelihood ratio 0.18 (95% ci: 0.1 to 0.3), and accuracy 86% (95% ci: 80% to 91%). patients falsely suspected of being intoxicated made up 7.3% (95% ci: 4% to 13%). conclusion: although the physicians had a high degree of accuracy in identifying patients with alcohol intoxication based on their olfactory sense, they still falsely overestimated intoxication in a significant number of non-intoxicated trauma patients. background: optimal methods for education and assessment in emergency and critical care ultrasound training for residents are not known. methods of assessment often rely on surrogate endpoints which do not assess the ability of the learner to perform the imaging and integrate the imaging into diagnostic and therapeutic decisions. we designed an educational strategy that combines asynchronous learning to teach imaging skills and interpretation with a standardized assessment tool using a novel ultrasound simulator to assess the learner's ability to acquire and interpret images in the setting of a standardized patient scenario. objectives: to assess the ability of emergency medicine and surgical residents to integrate and apply information and skills acquired in an asynchronous learning environment in order to identify pathology and prioritize relevant diagnoses using an advanced cardiac ultrasound simulator. methods: 12 em r2 residents and 12 r2 surgical residents completed an online focused training program in cardiac ultrasonography (iccu elearning, https://www.caeiccu.com/lms). this consisted of approximately 14 hours of intensive training in cardiac ultrasound. residents were then given cases with a patient scenario that lacked significant details that would suggest a specific diagnosis.
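the likelihood ratios in the operating characteristics above follow directly from sensitivity and specificity; a sketch using 2x2 counts reconstructed from the reported percentages (approximate, since the raw table is not given in the abstract):

```python
# Reconstructed 2x2 counts (approximate): ~65 intoxicated, ~86 sober
tp, fn = 55, 10   # intoxicated patients: correctly flagged / missed
tn, fp = 75, 11   # sober patients: correctly cleared / falsely suspected

sens = tp / (tp + fn)        # ~0.846, reported as 84%
spec = tn / (tn + fp)        # ~0.872, reported as 87%
lr_pos = sens / (1 - spec)   # positive likelihood ratio
lr_neg = (1 - sens) / spec   # negative likelihood ratio
print(round(lr_pos, 1), round(lr_neg, 2))  # -> 6.6 0.18
```

both values agree with the reported likelihood ratios of 6.6 and 0.18.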
the resident was then given a list of 17 possible diagnoses and asked to rank the top five diagnoses in order of most likely to least likely. each resident (blinded to the pathology displayed by the simulator) then imaged using an ultrasound simulator. after imaging, the residents were given the same list of potential diagnoses, and asked to rank them again from 1-5. results: overall, residents ranked the correct diagnosis in the top five significantly more times post-ultrasound than pre-ultrasound. additionally, the residents made the correct diagnosis significantly more times postultrasound than pre-ultrasound. similar patterns occur for congestive heart failure, pericardial effusion with tamponade, and pleural effusion. there was no significant difference pre-and post-ultrasound for pulmonary embolism and anterior infarction. conclusion: an asynchronous online learning program significantly improves the ability of emergency medicine and surgical residents to correctly prioritize the correct diagnosis after imaging with a standardized pathology imaging simulator. mark favot, jacob manteuffel, david amponsah henry ford hospital, detroit, mi background: em clerkships are often the only opportunity medical students have to spend a significant amount of time caring for patients in the ed. it is imperative that students gain exposure to as many of the various fields within em as possible during this time. if the exposure of medical students to ultrasound is left to the discretion of the supervising physicians, we feel that many students would complete an em clerkship with limited skills and knowledge in ultrasound. the majority of medical students receive no formal training in ultrasound during medical school and we believe that the em clerkship is an excellent opportunity to fill this educational gap. 
objectives: evaluate the usefulness and effectiveness of a focused ultrasound curriculum for medical students in an em clerkship at a large, urban, academic medical center. methods: prospective cohort study of fourth year medical students doing an em clerkship. as part of the clerkship requirements, the students have a portion of the curriculum dedicated to the fast exam and ultrasound-guided vascular access. at the end of the month they take a written test, and 1 month later they are given a survey via e-mail regarding their ultrasound experience. em residents also completed the test to serve as a comparison group. all data analysis was done using sas 9.2. scores were integers ranging between 0 and 10. descriptive statistics are given as count, mean, standard deviation, median, minimum, and maximum for each group. due to non-gaussian nature of the data and small group sizes, a wilcoxon two-sample test was used to compare the distributions of scores between the groups. results: in the table, the distribution of scores was compared between the residents (controls) and the students (subjects). the mean and median scores of the student group were higher than those of the resident group. the difference in scores between the two groups was statistically significant (p = 0.021). conclusion: our data reveal that after completing an em clerkship with time devoted to learning ultrasound for the fast exam and vascular access, fourth year medical students are able to perform better than em residents on a written test. what remains to be determined is if their skills in image acquisition and in performance of ultrasound-guided vascular access procedures also exceed those of em residents. results: there were 106 respondents (total response rate 24.71%). compared to non-em students, students pursuing em (8 students, 7.55%) were more drawn to their specialty for work hour control (p < 0.0009) and shorter residency length (p < 0.0338). 
em students were less likely than non-em students to be drawn to their chosen specialty for future academic opportunities (p < 0.0085). em students formed their mentorships by referral significantly more than non-em students (p < 0.0399), though there was no statistical difference in quality of existing mentorships amongst students. of the 93 students not currently and never formerly interested in em, the most common response (25.8%) for why they did not choose em was the lack of a strong mentor in the field. conclusion: the results confirmed previous findings of lifestyle factors drawing students to em. future academic opportunities were less likely to draw students to em than students pursuing other specialties. lack of mentorship in the field was the most common reason given for why students did not consider em. given the lack of direct em exposure until late in the curriculum of most medical schools, mentorship may be particularly important for em and future study should focus on this area. background: misdiagnosis is a major public health problem. dizziness leads to 10 million visits annually in the us, including 2.6 million to the emergency department (ed). despite extensive ed workups, diagnostic accuracy remains poor, with at least 35% of strokes missed in those presenting with dizziness. ed physicians need and want support, particularly in the best method for diagnosis. strong evidence now indicates the bedside oculomotor exam is the best method of differentiating central from peripheral causes of dizziness. objectives: after a vertigo day that includes instruction in head impulse testing, emergency medicine residents will feel comfortable discharging a patient with signs of vestibular neuritis and a positive head impulse test without ordering a ct scan. methods: post graduate year 1-4 emergency medicine residents participated in a four hour vertigo day. 
we developed a mixed cognitive and systems intervention with three components: an online game that began and ended the day, a didactic taught by dr. newman-toker, and a series of small group exercises. the small group sessions included the following: a question and answer session with the lecturer; vertigo special tests (cerebellar assessment, dix-hallpike, epley maneuver); a head impulse hands-on tutorial using a mannequin; and a video lecture on other tests useful in vertigo evaluation (nystagmus, test of skew, vestibulo-ocular reflex, ataxia). results: thirty emergency medicine residents were studied. before and after the intervention the residents were given a survey in which one question asked ''in a patient with acute vestibular syndrome and a history and exam compatible with vestibular neuritis, i would be willing to discharge the patient without neuroimaging based on an abnormal head impulse test result that i elicited''. resident answers were based on a seven-point likert scale from strongly agree to strongly disagree. twenty-five residents completed both surveys. of the seven residents who changed their responses pre to post, all seven (100%) changed their answer from disagree/neutral to agree after the 4-hour vertigo day (mcnemar's test, p value = 0.0082). conclusion: in this single-center study, teaching head-impulse testing as part of a vertigo day increases resident comfort with discharging a patient with vestibular neuritis without a ct scan. background: previous studies have been inconsistent in determining the effect of increased ed census on resident workload and productivity. we examined resident workload and productivity after the closure of a large urban ed near our facility, which resulted in a rapid 21% increase in our census. objectives: we hypothesized that the closure of a nearby hospital with a resulting influx of ed patients to our facility would not change resident productivity.
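the mcnemar result in the vertigo abstract above (seven discordant pairs, all moving in one direction) can be checked with an exact binomial mcnemar test; the abstract's exact procedure is not stated, so the value below need not reproduce the reported 0.0082 exactly:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact McNemar test: b and c are the two discordant cell counts.
    Returns the two-sided exact binomial probability."""
    n = b + c
    k = min(b, c)
    # double the tail probability of the smaller discordant count
    tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * tail)

# 7 residents moved disagree/neutral -> agree, 0 moved the other way
p = mcnemar_exact(7, 0)
print(p)  # -> 0.015625 (the one-sided tail is 0.0078125)
```

the one-sided tail of 0.0078 is close to the abstract's 0.0082, suggesting a one-sided or mid-p variant was used.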
methods: this computer-assisted retrospective study compared new patient workups per hour and patient load before and after the closure of a large nearby hospital. specifically, new patient workups per hour and the 4 pm patient census per resident were examined for a one-year period in the calendar year prior to the closing and also for one year after the closing. we did not include the four month period surrounding the closure in order to determine the long-term overall effect. background: emergency medicine residents use simulation for training due to multiple factors including the acuity of certain situations they are faced with, and the rarity of others. current training on highfidelity mannequin simulators is often critiqued by residents over the physical exam findings present, specifically the auscultatory findings. this detracts from the realism of the training, and may also lead a resident down a different diagnostic or therapeutic pathway. wireless remote programmed stethoscopes represent a new tool for simulation education which allows any sound to be wirelessly transmitted to a stethoscope receiver. objectives: our goal was to determine if a wireless remote programmed stethoscope was a useful adjunct in simulation-based cases using a high-fidelity mannequin. our hypothesis was that this would represent a useful adjunct in simulation education of emergency medicine residents. methods: starting june 2011, pgy1-3 emergency medicine residents were assessed in two simulation-based cases using pre-determined scoring anchors. an experimental randomized crossover design was used in which each resident performed a simulation case with and without a remote programmed stethoscope on a highfidelity mannequin. scoring anchors and surveys were used to collect data with differences of means calculated. results: fourteen residents participated in the study. 
residents rated the physical exam findings as most realistic in the case with the adjunct in 13/14 (93%), and 13/14 (93%) preferred the use of the adjunct. based on a five-point likert scale, with 5 being the most realistic, the adjunct-associated case averaged 4.4 as compared to 3.0 without (difference of means 1.4, p = 0.00017). average resident scores were 2.5/3 with the use of the adjunct and 2.3/3 without (difference of means 0.2, p = 0.076). average total times were 28:49 with the adjunct as compared to 30:02 without. conclusion: a wireless remote programmed stethoscope is a useful adjunct in simulation training of emergency medicine residents. residents found physical exam findings to be more realistic, preferred its use, and showed a trend toward improved scores when using the adjunct. background: prior studies predict an ongoing shortage of emergency physicians to staff the nation's eds, especially in rural areas. to address this, em organizations have discussed broadening access to acgme or aoa accredited em residency programs to physicians who previously trained in another specialty and focusing on physicians already practicing in rural areas. objectives: to investigate whether em program directors (pds) from allopathic and osteopathic residency programs would be willing to accept applicants previously trained in other specialties and whether this willingness is modified by applicants' current practice in rural areas. methods: a five-question web-based survey was sent to 200 u.s. em pds asking about their policies on accepting residents with past training and from rural practices. questions included whether a pd would accept a resident with prior training in other specialties, how many years after this training the applicant would still be a competitive candidate, and whether a physician practicing in a rural region would have an improved likelihood of acceptance to the program.
different characteristics of the residency programs were recorded including length of program, years in existence, size, type, and location of program. we compared responses by program characteristics using chi-square test. results: of the 96 (48%) pds responding to date, a large majority (87%) reported they do accept applicants with previous residency training, although directors of osteopathic programs were less likely to accept these applicants (56% vs 94% for allopathic; p < 0.001). overall, 28% of pds reported no limit on the length of time from prior training to when they are accepted at an em program. 73% reported it is very or possibly realistic they would accept a candidate who had completed training and was board certified in another specialty. a majority of all respondents (61%) felt a physician practicing in a rural setting might be viewed as a more favorable candidate, even if the resident would only be in the program for 2 years after receiving training credit. directors of newer programs (<5 years of existence) were more likely to view these candidates favorably than older programs (91% vs 53%; p = 0.02). conclusion: there appear to be many em residency programs that would at least review the application and consider accepting a candidate who trained in another specialty. a qualitative assessment of emergency medicine self-reported strengths todd guth university of colorado, aurora, co background: self-reflection has been touted as a useful way to assess the acgme core competencies. objectives: the purpose of this study is to gain insight into resident physician professional development through analysis of self-perceived strengths. a secondary purpose is to discover potential topics for selfreflective narrative essays relating to the acgme core competencies. methods: design: a small qualitative study was performed to explore the self-reported strengths of emergency medicine (em) residents in a single four-year residency. 
participants: all 54 residents, regardless of year of training, were asked to report their self-perceived strengths. observations: residents were asked: ''what do you feel are your greatest strengths as a resident? provide a quick description.'' the author and another reviewer identified themes from within each year of residency with abraham maslow's conscious competence conceptual framework in mind. occurrences of each theme were counted by the reviewers and organized according to frequency. once the top ten themes for each year of residency and exemplar quotes were identified, the two reviewers identified trends. inter-rater agreements were calculated. results: representing unconscious incompetency, the first trend was the reported presence of ''enthusiasm and a positive attitude'' from residents early in their training that decreases further along in training. additionally, a ''willingness and motivation to improve and learn'' was reported as a strength throughout all the years of training but most frequently reported in the first two years of residency. entering into conscious incompetence, the second trend identified was ''recognition of limitations and openness to constructive feedback'', which was mentioned frequently in the second and third years of residency. demonstrating conscious competence, the third trend identified was the increase in identification of the strengths of ''educational leadership, teamwork skills and communication, and departmental patient flow and efficiency'' in the later years of residency. conclusion: analysis of self-reported strengths helped to identify both themes within each year of residency and trends among the years of residency that can serve as areas to explore in self-reflective narratives relating to the acgme core competencies. pofu can also be used to assess the acgme core competency of practice-based learning.
the exact form or frequency of pofu assessment among various em residencies, however, is not currently known. objectives: we aimed to survey em residencies across the country to determine how they fulfill the pofu requirement and whether certain program structure variables were associated with different pofu systems. we hypothesized that implementation of pofu systems among em residencies would be highly variable. methods: in this irb-approved study, all program directors of acgme allopathic em residencies were invited to complete a 10-question survey on their current approaches to pofu. respondents were asked to describe their current pofu system's characteristics and rate its ease of use, effectiveness, and efficiency. data were collected using surveymonkey(tm) and reported using descriptive statistics. results: of 158 residencies surveyed, 81 (51%) submitted complete data. 77.5% of surveys were completed by program directors, and over three-fourths (76.1%) of em residencies require monthly completion of pofus. the mean total pofus required per year was 78 (95% ci 58-98), with a median of 64 and a range of 2-400. almost two-thirds (63%) of residencies use an electronic pofu system. most (84%) 4-year em residencies use an electronic pofu system, compared with half (54%) of 3-year residencies (difference 30%, p = 0.025, 95% ci 5.1%-47.2%). seven commercially available electronic programs are used by 71% of the residencies, while 29% use a customized product. most respondents (88%) rated their pofu system as easy to use, but less than half felt it was an effective learning tool (49%) or an efficient one (45%). one-third (34%) would use a different pofu system if available, and almost half (44%) would be interested in using a multi-residency pofu system. conclusion: em residency programs use many different strategies to fulfill the rrc requirement for pofu. the number of required pofus and the method of documentation vary considerably.
about two-thirds of respondents use an electronic pofu system. less than half feel that pofu logs are an effective or efficient learning tool. background: certification of procedural competency is requisite to graduate medical education. however, little is known regarding which platforms are best suited for competency assessment. simulators offer several advantages as an assessment modality, but evidence is lacking regarding their use in this domain. furthermore, perception of an assessment environment has important influence on the quality of learning outcomes, and procedural skill assessment is ideally conducted on a platform accepted by the learner. objectives: to ascertain if a simulator performs as well as an unembalmed cadaver with regard to residents' perception of their ability to demonstrate procedural competency during ultrasound (us) guided internal jugular vein (ij) catheterization. methods: in this cross-sectional study at an urban community hospital during july of 2011, 15 residents in their second or third year of training from a 3-year em residency program performed us guided catheterizations of the ij on both an unembalmed cadaver and a simulator manufactured by blue phantom. after the procedure, residents completed an anonymous survey ascertaining how adequately each platform permitted their demonstration of proficiency on predefined procedural steps. answers were provided on a likert scale of 1 to 10, with 1 being poor and 10 being excellent. p values < 0.10 were considered educationally significant. results: the median overall rating of the simulator (s) to serve as an assessment platform was similar to that of the cadaver (c) with scores of 8.0 and 8.3 respectively, p = 0.89. 
median ratings for permitting the demonstration of specific procedural steps were similar between platforms (table not reproduced). conclusion: senior em residents positively rate the blue phantom simulator as an assessment platform, rating it similarly to the cadaver with regard to permitting their demonstration of procedural competency for us guided ij catheterization, but preferred the cadaver to a greater degree when identifying and guiding the needle into the ij. methods: in fall 2011, wcmc and wcmc-q students taking the course completed a 20 question pre- and post-test. wcmc-q students also completed a post-course single-station objective structured clinical examination (osce) that evaluated their ability to identify and perform eight actions critical for a first responder in an emergency situation (table 1). results: on both campuses, mean post-test scores were significantly higher than mean pre-test scores (p ≤ 0.001). on the pre-test, mean wcmc student scores were significantly higher than those of wcmc-q students (p = 0.02); however, no difference was found in mean post-test scores (p = 0.895). there was no association between the scores on the osce (mean = 7.01, sd = 1.00) and the post-test (p = 0.683), even after adjusting for a possible evaluators' effect (table 2). conclusion: the clinical skills course was effective in enhancing student knowledge in both qatar and new york, as evidenced by the significant improvement in scores from the pre- to post-tests. the course was able to bring wcmc-q student scores, and presumably knowledge, up to the same level as wcmc students. students performed well on the osce, suggesting that the course was able to teach them the critical actions required of a first responder. the lack of association between the post-test and osce scores suggests that student knowledge does not independently predict ability to learn and demonstrate the critical actions required of a first responder. future studies will evaluate whether the course affects the students' clinical practice.
table 1 critical actions (as listed; the first item is missing from the extracted text): 2. assess breathing; 3. assess circulation; 4. call ems; 5. call ems and assess abcs prior to other interventions; 6. immobilize; 7. localize and control bleeding; 8. splint fractured extremity. … and skills specific to wilderness medicine by incorporating simulated medical scenarios into a day-long adventure race. this event has gained acceptance nationally in wilderness medical circles as an excellent way to appreciate the challenges of wilderness medicine; however, its effectiveness as a teaching tool has not yet been verified. objectives: the objective of this study was to determine if improvement in simulated clinical and didactic performance can be demonstrated by teams participating in a typical medwar event. methods: we developed a complex clinical scenario and written exam to test the basic tenets that are reinforced through the medwar curriculum. teams were administered the test and scored on a standardized scenario immediately before and after the 2011 midwest medwar race. teams were not given feedback on their pre-race performance. scenario performance was based on the number of critical actions correctly performed in the appropriate time frame. data from the scenario and written exams were analyzed using a standard paired difference t-test. results: a total of 31 teams participated in both the pre- and post-event scenarios. the teams' pre-race scenario performance was 71.0% (sd = 17.0, n = 31) of critical actions met compared to a post-race performance of 89.7% (sd = 11.4, n = 31). the mean improvement was 18.7% (sd = 18.7, n = 31, 95% ci 12.1, 25.3) with a significant paired two-tailed t-test (p ≤ 0.01). a total of 95 individual subjects took the written pre- and post-tests. the written scores averaged 84.5% pre-race (sd = 12.5, n = 95) and 88.7% post-race (sd = 11.5, n = 95). the mean improvement was 4.2% (sd = 11.7, n = 95, ci −7.5, 15.9), with a significant paired two-tailed t-test (p ≤ 0.01).
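the medwar scenario statistics above can be reproduced from the reported summary values; a minimal sketch (note that the reported 95% ci appears to use a normal critical value of 1.96 rather than the t distribution):

```python
import math

# summary statistics reported for the scenario score improvement
mean_diff, sd, n = 18.7, 18.7, 31
se = sd / math.sqrt(n)
t_stat = mean_diff / se                       # paired t statistic
lo, hi = mean_diff - 1.96 * se, mean_diff + 1.96 * se
print(round(t_stat, 2))            # -> 5.57
print(round(lo, 1), round(hi, 1))  # -> 12.1 25.3
```

a t statistic of 5.57 with 30 degrees of freedom is consistent with the reported p ≤ 0.01, and the interval matches the reported 95% ci of 12.1 to 25.3.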
conclusion: medwar participants demonstrated a significant improvement in both written exam scores and the management of a simulated complex wilderness medical scenario. this strongly suggests that medwar is an effective teaching platform for both wilderness medicine knowledge and skills. methods: ed residents and faculty of an urban, tertiary care, level i trauma center were asked to complete an anonymous survey (6/2010-10/2011). participants ranked 22 statements on a five-point likert scale (1 = strongly disagree to 5 = strongly agree). statements covered four main domains of barriers related to: 1) education/training, 2) communication, 3) ed environment; 4) personal beliefs. respondents were also asked if they would call a pc consult for 15 ed clinical scenarios (based on established triggers). results: 30/45 (67%) eligible participants completed the survey (23 residents, 7 faculty); average age was 31 years, 52% (15/29) were male, and 58% (15/26) were caucasian. respondents identified two major barriers to ed-pc provision: lack of 24-hour availability of the pc team (mean score 4.4) and lack of access to complete medical records (4.2). listed domain barriers included: communication-related issues (mean 3.3) such as access to family or primary providers, the ed environment (2.8), for example a chaotic setting with time constraints, education/training (2.7) related to pain/pc, and personal beliefs regarding end-of-life care (2.5). all respondents agreed that they would call a pc consult for a 'hospice patient in respiratory distress', and a majority (73%) would consult pc for 'massive intracranial hemorrhage, traumatic arrest, and metastatic cancer'. however, traditional in-patient triggers like frequent re-admissions for organ failure issues (dementia, congestive heart failure, and obstructive pulmonary disease exacerbations) were infrequently (10%) chosen for pc consult.
conclusion: to enhance pc provision in the ed setting, two main ed physician-perceived barriers will likely need to be addressed: lack of access to medical records and lack of 24-7 availability of the pc team. ed physicians may not use the same criteria to initiate pc consults as compared to the traditionally established inpatient pc consult trigger models. percent of charts with an mse by ait prior to resident evaluation (a measure of reduced diagnostic uncertainty and decision-making), (4) ed volume. results: there were no educationally significant differences in productivity or acuity between the pre-ait and post-ait groups. mse was recorded in the chart prior to resident evaluation in 10.9% of cases. ed volume rose by 9.0% between periods. conclusion: ait did not affect the productivity or acuity of patients seen by em2s. while some volume was directed away from residents by ait (patients treated-and-released by ait only), overall volume increased and made up the difference. this is similar to previously reported rankings that program directors gave to the same criteria. although medical students agreed with program directors on the importance of most aspects of the nrmp application, areas of discordance included a higher medical student ranking for extracurricular activities and a lower relative ranking for aoa status than program directors. this can have implications for medical student mentoring and advising in the future. background: emergency care of older adults requires specialized knowledge of their unique physiology, atypical presentations, and care transitions. older adults often require distinctive assessment, treatment, and disposition. emergency medicine (em) residents should develop expertise and efficiency in geriatric care. older adults represent over 25% of most emergency department (ed) volumes, yet many em residencies lack curricula or assessment tools for competent geriatric care.
the geriatric emergency medicine competencies (gemc) are high-impact geriatric topics developed to help residencies meet this demand. objectives: to examine the effect of a brief gemc educational intervention on em resident knowledge. methods: a validated 29-question didactic test was administered at six em residencies before and after a gemc focused lecture delivered summer and fall of 2009. scores were analyzed as individual questions and in defined topic domains using a paired student's t-test. results: a total of 301 exams were included. the testing of didactic knowledge before and after the gemc educational intervention had high internal reliability (87.9%). the intervention significantly improved scores in all domains (table 1) . graded increase in geriatric knowledge occurred by pgy year with the greatest improvement seen at the pgy 3 level (table 2) . conclusion: even a brief gemc intervention had a significant effect on em resident knowledge of critical geriatric topics. a formal gemc curriculum should be considered in training em residents for the demands of an ageing population. the overall procedure experience of this incoming class was limited. most r1s had never received formal education in time management, conflict of interest management, or safe patient trade-off. the majority lacked confidence in their acute and chronic pain management skills. these entry level residents lacked foundational skill levels in many knowledge areas and procedures important to the practice of em. ideally medical school curricular offerings should address these gaps; in the interim, residency curricula should incorporate some or all of these components essential to physician practice and patient safety. background: the american heart association and international liaison committee on resuscitation recommend patients with return of spontaneous circulation following cardiac arrest undergo post-resuscitation therapeutic hypothermia. 
in post-cardiac arrest patients presenting with a rhythm of vf/vt, therapeutic hypothermia has been shown to reduce neurologic sequelae and decrease overall mortality. objectives: to explore clinical practice regarding the use of therapeutic hypothermia and compare survival outcomes in post-cardiac arrest patients. a secondary outcome was to assess whether the initial presenting cardiac arrest rhythm (ventricular fibrillation/ventricular tachycardia (vf/vt) versus pulseless electrical activity (pea) or asystole) was associated with differences in outcomes. methods: a retrospective medical record review was conducted for all adult (≥18 years) post-cardiac arrest patients admitted to the icu of an academic tertiary care centre (annual ed census 150,000) from 2006-2007. data were extracted using a standardized data collection tool by trained research personnel. results: 200 patients were enrolled. mean (sd) age was 66 (16) and 56.5% were male. of 58 (29.0%) patients treated with hypothermia, 27 (46.6%) presented with an initial rhythm of vf/vt and 31 (53.4%) presented with pea or asystole. nine (33.3%) patients with vf/vt were treated with therapeutic hypothermia and discharged from hospital compared to 2 (6.4%) patients with pea or asystole (Δ 26.9%; 95% ci: 6.4%, 46.3%). of 142 patients not treated with hypothermia, 37 (26.1%) presented with vf/vt, 93 (65.5%) presented with pea or asystole, and 12 (8.4%) initial rhythms were unknown. fifteen (40.5%) patients with vf/vt, not treated with hypothermia, were discharged from hospital compared to 13 (13.9%) patients with pea or asystole (Δ 26.6%; 95% ci: 10.0%, 43.5%). regardless of initial presenting rhythm or initiation of therapeutic hypothermia, 37 (88.1%) discharged patients had good neurological function as assessed by the cerebral performance category (cpc score 1-2). conclusion: although recommended, post-cardiac arrest therapeutic hypothermia was not routinely used. 
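the two risk differences quoted above can be reproduced from the raw counts. a minimal python sketch using a wald normal-approximation interval (the abstract's intervals were presumably computed with a different method, such as a newcombe score interval, so the bounds differ slightly):

```python
import math

def risk_difference(x1, n1, x2, n2, z=1.96):
    """Risk difference between two proportions with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

# hypothermia-treated, discharged: 9/27 vf/vt vs 2/31 pea/asystole
rd, lo, hi = risk_difference(9, 27, 2, 31)
print(f"delta = {rd:.1%} (95% CI {lo:.1%}, {hi:.1%})")
# -> delta = 26.9% (95% CI 7.1%, 46.7%)

# untreated comparison: 15/37 vf/vt vs 13/93 pea/asystole
rd2, lo2, hi2 = risk_difference(15, 37, 13, 93)  # delta ~26.6%
```

the point estimates match the abstract exactly; only the interval bounds depend on the method chosen.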
patients with vf/vt and treated with hypothermia had better outcomes than those with pea or asystole. further research is needed to assess whether cooling patients with presenting rhythms of pea or asystole is warranted. background: chronic obstructive pulmonary disease (copd) is a major public health problem in many countries. the course of the disease is characterised by episodes, known as acute exacerbations (ae), when symptoms of cough, sputum production, and breathlessness become much worse. the standard prehospital management of patients suffering from an aecopd includes oxygen therapy, nebulised bronchodilators, and corticosteroids. high flow oxygen is used routinely in prehospital areas for breathless patients with copd. there is little high quality evidence on the benefits or potential dangers in this setting but audits have shown increased mortality, acidosis, and hypercarbia in patients with aecopd treated with high flow oxygen. objectives: to compare standard high flow oxygen treatment with titrated oxygen treatment for patients with an aecopd in the prehospital setting. methods: cluster randomized controlled parallel group trial comparing high flow oxygen treatment with titrated oxygen treatment in the prehospital setting. in an intention to treat analysis (n = 405), the risk of death was significantly lower in the titrated oxygen arm compared with the high flow oxygen arm for all patients and for the subgroup of patients with confirmed copd (n = 214). overall mortality was 9% (21 deaths) in the high flow oxygen arm compared with 4% (7 deaths) in the titrated oxygen arm; mortality in the subgroup with confirmed copd was 9% (11 deaths) in the high flow arm compared with 2% (2 deaths) in the titrated oxygen arm. titrated oxygen treatment reduced mortality compared with high flow oxygen by 58% for all patients (p = 0.02) and by 78% for the patients with confirmed chronic obstructive pulmonary disease (p = 0.04). 
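the 58% and 78% relative reductions can be approximated from the rounded per-arm mortality rates, since the abstract does not give the arm denominators; a small sketch (the published figures were presumably computed from exact counts, hence the small discrepancy for the all-patients estimate):

```python
def relative_risk_reduction(risk_control, risk_treatment):
    """RRR = 1 - RR, where RR = treatment-arm risk / control-arm risk."""
    return 1 - risk_treatment / risk_control

# rounded mortality rates from the abstract: 9% high flow vs 4% titrated
rrr_all = relative_risk_reduction(0.09, 0.04)   # ~56%, reported as 58%
# confirmed-copd subgroup: 9% vs 2%
rrr_copd = relative_risk_reduction(0.09, 0.02)  # ~78%, matching the report
```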
patients with copd who received titrated oxygen according to the protocol were significantly less likely to have respiratory acidosis or hypercapnia than were patients who received high flow oxygen. conclusion: titrated oxygen treatment significantly reduced mortality, hypercapnia, and respiratory acidosis compared with high flow oxygen in aecopd. these results provide strong evidence to recommend the routine use of titrated oxygen treatment in patients with breathlessness and a history or clinical likelihood of copd in the prehospital setting. (originally submitted as a ''late-breaker.'') trial registration: australian new zealand clinical trials register actrn12609000236291. background: toxic particulates and gases found in ambulance exhaust are associated with acute and chronic health risks. the presence of such materials in areas proximate to ed ambulance parking bays, where emergency services' vehicles are often left running, is potentially of significant concern to ed patients and staff. objectives: investigators aimed to determine whether the presence of ambulances correlated with ambient particulate matter concentrations and toxic gas levels at the study site ed. methods: the ambulance exhaust toxicity in healthcare-related exposure and risk [aether] program conducted a prospective observational study at an academic urban ed / level i trauma center. environmental ambient gas was sampled over a continuous five-week period from september to october 2011. two sampling locations in the public triage area (public patient drop-off area without ambulances) and three sampling locations in the ambulance triage area were randomized for 24-hour monitoring windows with a temporal resolution of 2 minutes to obtain 7 days of non-contiguous data for each location. 
concentrations of particulate matter less than 2.5 microns in aerodynamic size (pm2.5), oxygen, hydrogen sulfide (h2s), and carbon monoxide (co) as well as lower explosive limit for methane (lel) were monitored with professionally calibrated devices. ambulance traffic was recorded through offline review of 24/7 security video footage of the site's ambulance bays. results: 4,118 measurements at the public triage nurse desk space revealed pm2.5 concentrations with a mean of 21.32 ± 27.01 µg/m³ (median 15.95 µg/m³; maximum 1,152.58 µg/m³). 4,867 ambulance triage nurse desk space pm2.5 concentrations recorded a mean of 60.45 ± 53.38 µg/m³ (p < 0.0001, unpaired t test; median 43.37 µg/m³; maximum 580.78 µg/m³). oxygen levels remained steady throughout the study period; co, h2s, and lel were not detected. ambulance activity levels had the highest correlations with pm2.5 concentrations at the ambulance triage foyer (r = 0.47) and desk area (r = 0.42) where patients wait and ed staff work 8-12 hr shifts. conclusion: ed spaces proximate to ambulance parking bays had higher levels of pm2.5 than areas without ambulance traffic. concentrations of ambient particulate matter in acute care environments may pose a significant health threat to patients and staff. an ems ''pit crew'' model improves ekg and stemi recognition times in simulated prehospital chest pain patients. sara y. baker, salvatore silvestri, christopher d. vu, george a. ralls, christopher l. hunter, linda papa (orlando regional medical center, orlando, fl); zack weagraff (florida state university college of medicine, orlando, fl). background: prehospital teams must minimize time to ekg acquisition and stemi recognition to reduce overall time from first medical contact to reperfusion. auto-racing ''pit crews'' model rapid task completion by pre-assigning roles to team members. 
objectives: we compared time-to-completion of key tasks during chest pain evaluation in ems teams with and without pre-assigned roles. we hypothesized that ems teams using the ''pit crew'' model would improve time to recognition and treatment of stemi patients. methods: a randomized, controlled trial of paramedic students was conducted over 2 months at orlando medical institute, a state-approved paramedic training center. we compared a standard ems chest pain management algorithm (control) with a pre-assigned tasks (''pit crew'') algorithm (intervention) in the evaluation of simulated chest pain patients. students were randomized into groups of three; intervention and control groups did not interact after randomization. all students reviewed basic prehospital chest pain management and either the standard or pre-assigned tasks algorithm. groups encountered three simulated patients. laerdal simman® software was used to track completion of tasks: taking vital signs, iv access, ekg acquisition and interpretation, asa administration, hospital stemi notification, and total time on scene. results: we conducted 54 simulated-patient encounters (30 control / 24 intervention encounters). mean time-to-completion of each task was compared in the control and intervention groups respectively. time to obtain vital signs was 4:18 vs. 2:21 min (p = 0.001); time to asa administration was 3:54 vs 2:00 min (p < 0.001); time to ekg acquisition was 5:39 vs 3:42 min (p < 0.001); time to ekg interpretation was 6:43 vs 4:21 min (p < 0.001); time to iv access was 5:42 vs 4:45 min (p = 0.05); time to stemi notification was 7:19 vs 4:26 min (p < 0.001); and time to scene completion was 9:02 vs 5:27 min (p < 0.001). conclusion: paramedic student teams with pre-assigned roles (the ''pit crew'' model) were faster to obtain vital signs, administer asa, acquire and interpret the ekg, and notify the hospital of stemi, and had shorter overall time on scene during simulated patient encounters. 
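the per-task mean times above are easy to mis-read in mm:ss format; a small python sketch (task labels are paraphrased) converts them to seconds and tabulates the absolute savings of the ''pit crew'' arm:

```python
def mmss_to_sec(t):
    """Convert an 'm:ss' time string to total seconds."""
    m, s = t.split(":")
    return int(m) * 60 + int(s)

# (control, intervention) mean times from the abstract
tasks = {
    "vital signs": ("4:18", "2:21"),
    "asa administration": ("3:54", "2:00"),
    "ekg acquisition": ("5:39", "3:42"),
    "ekg interpretation": ("6:43", "4:21"),
    "iv access": ("5:42", "4:45"),
    "stemi notification": ("7:19", "4:26"),
    "scene completion": ("9:02", "5:27"),
}
# seconds saved per task by the pre-assigned-roles arm
savings = {k: mmss_to_sec(a) - mmss_to_sec(b) for k, (a, b) in tasks.items()}
```

the largest absolute gains fall on the downstream tasks (stemi notification, scene completion), consistent with the upstream tasks being parallelized.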
further study with experienced ems teams in actual patient encounters is necessary to confirm the relevance of these findings. background: use of automated external defibrillators (aed) has remained low in the u.s. understanding the effect of neighborhoods on the probability of having an aed used in the setting of a public arrest may provide important insights for future placement of aeds. objectives: to determine associations between the racial and income composition of neighborhoods (as defined by u.s. census tracts), individual arrest characteristics, and whether bystanders or first responders initiate aed use. methods: cohort study using surveillance data prospectively submitted by emergency medical services systems and hospitals from 29 u.s. sites to the cardiac arrest registry to enhance survival between october 1, 2005 and december 31, 2009. neighborhoods were defined as high-income vs. low-income based on the median household income being above or below $50,000 and as white or black if >90% of the census tract was of one race. neighborhoods without a predominant racial composition were defined as integrated. arrests that occurred within a public location (excluding medical facilities and airports) were eligible for inclusion. hierarchical multi-level modeling, using stata v11.0, was used to determine the association between individual and census tract characteristics on whether an aed was used. results: of 2,769 eligible cases, an aed was used in 1,336 arrests (48.2%) by a first responder (n = 1,127, 40.8%) or bystander (n = 209, 7.5%). patients whose arrest was witnessed (odds ratio [or] 1.26; 95% confidence interval [ci] 1.06-1.50) were more likely to have an aed used (table). when compared to high-income white neighborhoods, arrest victims in low-income black neighborhoods were least likely to have an aed used (or 0.54; 95% ci 0.33-0.87). 
arrest victims in low-income white (or 0.57; 95% ci 0.32-1.02) and low-income integrated (or 0.70; 95% ci 0.51-0.96) neighborhoods were also less likely to have an aed used. conclusion: arrest victims in black and low-income neighborhoods are least likely to have an aed used by a layperson or first responder. future research is needed to better understand the reasons for low rates of aed use for cardiac arrests in these neighborhoods. the impact of an educational intervention on the pre-shock pause interval among patients experiencing an out-of-hospital cardiac arrest. jonathan studnek, eric hawkins (carolinas medical center, charlotte, nc); steven vandeventer (mecklenburg ems agency, charlotte, nc). background: pre-shock pause duration has been associated with survival to hospital discharge (std) among patients experiencing out-of-hospital cardiac arrest (oohca) resuscitation. recent research has demonstrated that for every 5-second increase in this interval there is an 18% decrease in std. objectives: determine if a decrease in the pre-shock pause interval for patients experiencing oohca could be realized after implementation of an educational intervention. methods: this was a retrospective analysis of data obtained from a single als urban ems system from 1/1/2010 to 12/31/10 and 8/1/11 to 11/6/2011. in august 2011, an educational intervention was designed and delivered to approximately 150 paramedics emphasizing the importance of reducing the time off chest during cpr. specifically, the time period just prior to defibrillation was emphasized by having rescuers count every 20th compression and pre-charge the defibrillator on the 180th compression. in order to determine if this change resulted in process improvement, 12 months of data were assessed before and 3 months after the educational intervention. pre-shock pause was the outcome variable and was defined as the time period after compressions ceased until a shock was delivered. 
this interval was measured by a cpr feedback device connected to the defibrillator. inclusion criteria were adult patients who required at least one defibrillation and had the cpr feedback device connected during the defibrillation attempt. analysis was descriptive utilizing means and 95% ci as well as the wilcoxon rank sum test to assess differences between the two time periods. results: in the pre-intervention period there were 117 patients who received 211 defibrillations compared to 30 patients receiving 71 defibrillations in the post-intervention phase. the mean duration of the pre-shock pause pre-intervention was 35 seconds (95% ci 20-50) while the post-intervention duration was 9 seconds (95% ci 7-12). the difference in pre-shock pause duration was statistically significant with p < 0.001. conclusion: these data indicate that after a simple educational intervention emphasizing decreasing time off chest prior to defibrillation the pre-shock pause duration decreased. future research must describe the sustainability of this intervention as well as the effects this process measure may have on outcomes such as survival to hospital discharge. background: the broselow tape (bt) has been used as a tool for estimating medication dosing in the emergency setting. the obesity trend has demonstrated a tendency towards insufficient pediatric weight estimations from the bt, and thus potential under-dosing of resuscitation medications. objectives: this study compared drug dosing based on the bt with dosing from a novel electronic tool (et) that accounts for provider estimation of body habitus. methods: data were obtained from a prospective convenience sample of children ages 1 to 8 years arriving to a pediatric emergency department. a clinician performed an assessment of body habitus (average/underweight, overweight, or obese), blinded to the patient's actual weight and parental weight estimate. parental estimate of weight and measured length and weight were collected. 
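the pre-shock pause study's background cites an 18% decrease in survival to discharge for every 5-second increase in the pause. if that association is taken at face value and assumed to compound multiplicatively (an illustrative assumption, not something the abstract claims), the observed fall from 35 to 9 seconds can be translated into an implied survival factor:

```python
def survival_multiplier(delta_seconds, decrease_per_5s=0.18):
    """Implied multiplicative change in survival for a pause increase of
    delta_seconds, assuming the cited per-5-second association compounds
    multiplicatively (an illustrative assumption)."""
    return (1 - decrease_per_5s) ** (delta_seconds / 5)

# the mean pre-shock pause fell from 35 s to 9 s, a 26-second change;
# implied relative survival for the longer pause vs. the shorter one:
factor = survival_multiplier(35 - 9)  # ~0.36
```

on this reading, the 26-second-longer pre-intervention pause would be associated with roughly a third of the survival (factor ≈ 0.36); actual survival outcomes were not reported in the abstract.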
epinephrine dosing was calculated from the measured weight, the bt measurement, as well as from a smart-phone tool based on the measured length and clinician's estimate of body habitus, and a modified tool (mt) incorporating the parent estimate of habitus. the wilcoxon rank-sum test was used to compare median percent differences in dosing. results: one hundred children (mean age 3 years) were analyzed; 47% were overweight or obese. clinicians correctly identified children as overweight/obese 23% of the time (ci 0.12-0.38). adding parent estimate of weight improved this to a sensitivity of 74% (ci 0.59-0.86). the median difference between the weight-based epinephrine dose and bt dose was 11%. for the et the median difference from the weight-based dose was 7% (p = 0.05 compared to the bt), and for the mt was 1.7% (p < 0.01 compared to the bt). when a clinically significant difference was defined as ±10% of the actual dose, bt was within that range 40% of the time, et was within range 56% of the time (p = 0.02), and mt was within range 64% of the time. background: in most out-of-hospital cardiac arrest (ohca) events, a call to 9-1-1 is the first action by bystanders. accurate diagnosis of cardiac arrest by the call taker depends on the caller's verbal description. if cardiac arrest is not suspected, then no telephone cpr instructions will be given. objectives: we measured the effect of a change in the ems call taker question sequence on the accuracy of diagnosis of cardiac arrest by 9-1-1 call takers. methods: we retrospectively reviewed the cardiac arrest registry to enhance survival (cares) dataset for january 1, 2009 through june 30, 2011 from a city, population 750,000, with a longstanding telephone cpr program (apco). we included ohca cases of any age who were in arrest prior to the arrival of ems and for whom resuscitation was attempted. 
in early 2010, 9-1-1 call takers were taught to follow a revised telephone script that emphasized focused questions, assertive control of the caller, and provision of hands-only cpr instructions. the medical director personally explained the reasons for the changes, emphasizing the importance of assertive control of the caller and the comparative safety of chest compressions in patients not in cardiac arrest. beginning in 2010, call recordings were reviewed regularly with feedback to the call taker by the 9-1-1 center leadership. the main outcome measure was sensitivity of the 9-1-1 call taker in diagnosing cardiac arrest. bystander cpr was reported by ems crews attending the event. we compared 2009 with 2010 and 2011 using the chi-square test and odds ratios (or). results: there were 504 ohca cases in 2009, 457 cases in 2010, and 287 in the first half of 2011 (68/100,000 population). the mean age was 57 ± 21 years, and 27% of the events were witnessed. before the revision, 40% of ohca cases were identified by 9-1-1 dispatchers; and after the revised questioning sequence, 74% were identified (or 4.3, 95% ci 3.2-5.6). the false positive rate changed little (from 56/month to 72/month). the mean time to question callers was unchanged (53 vs 51 seconds). bystander cpr was performed in 37.3% of events in 2009, 39.2% in 2010, and 49.1% of events in 2011 (p < 0.001). conclusion: emphasis on scripted assessment improved sensitivity without loss of specificity in identifying ohca. with repeated feedback, it translated to an increase in victims receiving bystander cpr. 
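the reported odds ratio of 4.3 for dispatcher recognition can be reproduced from the before/after proportions alone; a minimal sketch:

```python
def odds_ratio_from_props(p_after, p_before):
    """Odds ratio comparing two proportions (odds = p / (1 - p))."""
    return (p_after / (1 - p_after)) / (p_before / (1 - p_before))

# recognition of ohca by 9-1-1 call takers: 40% before, 74% after
or_recognition = odds_ratio_from_props(0.74, 0.40)  # ~4.3
```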
in an out-of-hospital cardiac arrest population confirmed by autopsy. salvatore silvestri, christopher hunter, george ralls, linda papa (orlando regional medical center, orlando, fl). background: quantitative end-tidal carbon dioxide (etco2) measurements (capnography) have consistently been shown to be more sensitive than qualitative (colorimetric) ones, and the reliability of capnography for assessing airway placement in low perfusion states has sometimes been questioned in the literature. objectives: this study examined the rate of capnographic waveform presence of an intubated out-of-hospital cardiac arrest cohort and its correlation to endotracheal tube location confirmed by autopsy. our hypothesis is that capnography is 100% accurate in determining endotracheal tube location, even in low perfusion states. methods: this cross-sectional study reviewed a detailed prehospital cardiac arrest database that regularly records information using the utstein style. in addition, the ems department quality manager routinely logs the presence of an alveolar (four-phase) capnographic waveform in this database. the study population included all cardiac arrest patients from january 1, 2009 through december 31, 2009 managed by a single ems agency in orange county, florida. patients were included if they had endotracheal intubation performed, had capnographic measurement obtained, failed to regain return of spontaneous circulation (rosc), and had an autopsy performed. the main outcome was the correlation of the presence of an alveolar waveform and the location of the ett at autopsy. results: during the study period, 921 cardiac arrests were recorded. of these, 263 had an advanced airway placed (ett or laryngeal tube airway), and no rosc. of the 263 advanced airway cases, 73 were managed with an ett. autopsies were performed on 30 of these patients and resulted in our study cohort. the location of the ett at autopsy was recorded on all 30 of these cases. 
capnographic waveforms were recorded in the field in all 30 of these study patients, and 100% of the tubes were located within the trachea at autopsy. the sensitivity of capnography in determining proper endotracheal tube location was 100% in this study. conclusion: in our study, the presence of a capnographic waveform was 100% reliable in confirming proper placement of endotracheal tubes placed in out-of-hospital patients with poor perfusion states. results: over 60 variables were presented to the 34 ems medical directors responding (100% survey population captured). among the myriad of responses, 14 (42%) initiate cardiopulmonary resuscitation (cpr) at 30 compressions to 2 ventilations consistent with ilcor/aha guidelines. seven (21%) initiate continuous chest compressions from the start of cpr with no pause and interposed ventilations. nine (26%) begin chest compressions only during the first 2-3 minutes, with either passive oxygenation by oxygen mask (six; 18%) or no oxygen (three; 9%). airway management following non-invasive oxygenation and ventilation by primary endotracheal intubation occurs in 12 systems (35%), while six (18%) use supraglottic devices. fourteen (42%) allow paramedics to decide between endotracheal and supraglottic device placement. thirty systems (88%) utilize continuous waveform capnography. the initial approach to non-ems witnessed ventricular fibrillation is chest compression prior to first defibrillation in 30 systems (88%). eighteen systems (52%) escalate defibrillation energy settings, with four systems (12%) utilizing dual sequential defibrillation. twenty (59%) initiate therapeutic hypothermia in the field. conclusion: wide variability in ca care standards exists in america's largest urban ems systems in mid-2011, with many current practices promoting more continuity in chest compressions than specified in the 2010 ilcor/aha guidelines. 
endotracheal intubation, a past mainstay of ca airway management, is deemphasized in many systems. immediate defibrillation of non-ems witnessed ventricular fibrillation is uncommon. objectives: determine the out-of-hospital cardiac arrest survival in this area of puerto rico using the utstein method. methods: prospective observational cohort study of adult patients presenting with an out-of-hospital cardiac arrest to the upr hospital ed. study endpoints will be survival and neurologically intact survival at hospital discharge, 6 months, and 12 months. results: a total of 144 consecutive cardiac arrest events were analyzed for a period of 2 years. one-hundred fifteen events met criteria for primary cardiac etiology (79.86%). the average age for this group was 68.47 years. there were 45 female (39.13%) and 70 male (60.86%) participants. the average time to start cpr was 14.60 minutes. transportation to the ed was 71.3% by ems and 25.22% by private vehicle. a total of 68 events were witnessed (59.13%). the survival rate to hospital admission was 23.66%. the overall cardiac arrest survival was 9.30% and overall neurologically intact survival was 4.30%. neurologically intact survival at 6 and 12 months was 2.15%. the rate of bystander cpr in our population was 16.13% with a survival rate of 6.66%. conclusion: survival from out-of-hospital cardiac arrest in the area served by the upr hospital is low but comparable to other cities in the us as reported by the cdc cardiac arrest registry to enhance survival (cares). this low survival rate might be due to low bystander cpr rate and prolonged time to start cpr. background: hyperventilation has been directly correlated with increased mortality for out-of-hospital cpr. ems providers may hyperventilate patients at levels above national bls guidelines. real-time feedback devices, such as ventilation timers, have been shown to improve cpr ventilation rates towards bls standards. 
it remains unclear if the combination of a ventilation timer and pre-simulation instruction would influence overall ventilation rates and potentially reduce undesired hyperventilation. objectives: this study measured ventilation rates of standard cpr (and pre-instruction on effects of hyperventilation) compared to cpr with the use of a commercial ventilation timer (and pre-instruction on effects of hyperventilation). we propose that use of a ventilation timer, measuring and displaying to ems providers real-time ventilations delivered, will have no difference in ventilation rates when comparing these groups. methods: this prospective study placed ems providers into four groups: two controls measuring ventilation rates before (1a) and after instruction (1b) on the deleterious effects of hyperventilation, and a concurrent intervention pair with before (2a) and after instruction (2b), with the second pair measuring ventilation rates with a ventilation timer that provides immediate feedback on respirations given. ventilation rates were measured for a 60-second period after one minute of simulated cpr using mannequins. results: the control set without instruction (1a, n = 12) averaged 14.21 breaths (95% ci = 10.31-18.11) and with instruction (1b, n = 13) averaged 20.23 breaths (95% ci = 16.16-24.30). the intervention set without instruction (2a, n = 11) averaged 13.04 breaths (95% ci = 9.29-16.78) and with instruction (2b, n = 13) averaged 11.77 breaths (95% ci = 8.02-15.51). there was a significant improvement (p = 0.016) in ventilation rates with use of a ventilation timer (control group versus intervention group regardless of pre-instruction). there was no statistically significant difference between groups with respect to instruction alone (p = 0.223). conclusion: the use of a ventilation timer significantly reduced overall ventilation rates, providing care closer to bls guidelines. 
the addition of pre-simulation instruction added no significant benefit to reducing hyperventilation. background: in 2010, the american heart association (aha) recommended a compression rate (roc) of 100/min and a depth of compressions (doc) of at least 2 inches for effective cpr. as an educational tool for lay rescuers, the aha has adopted the catch phrase ''push hard, push fast''. objectives: in this irb-exempt study, we sought to determine if persons without formal cpr training could perform non-ventilated cpr as well as those who have been trained in the past or those currently certified. methods: a convenience sample of patrons of the new york state fair was asked to perform 2 minutes of hands-only cpr on a prestan pp-am-100m adult cpr manikin. these devices provide visual indicators of acceptable rate and depth of compressions. each subject was video recorded on a dell latitude 620 laptop computer with a logitech quick cam using logitech quick cam 8.4.6 for windows software. results: a total of 175 volunteers (74 male, 102 female) aged 16-68 years participated: 52 were never certified (nc) in cpr, 73 were previously certified (pc), and 50 were currently certified (cc). there was no difference in age across the groups. the cc group had a higher proportion of females (chi-square = 9.71, p < 0.008). cc volunteers sustained roc and doc for an average of 57.1 seconds as compared to an average of 18.5 seconds (pc) and 2.3 seconds (nc) respectively (f = 27.8, p < 0.001). the cc group maintained a roc closer to 100/min (mean 111.6/min) when compared to the pc (mean 85.3/min) and nc (mean 86.0/min) groups (f = 14.7, p < 0.001). a higher proportion of volunteers in the cc group were able to perform adequate doc (chi-square = 11.2, p < 0.004) and hand placement (chi-square = 19.21, p < 0.001) when compared to the other two groups. conclusion: compared to the target roc and doc, none of the groups did well and only 14 subjects met target roc/doc. 
increased out-of-hospital cardiac arrest survivability due to lay rescuer intervention is only assured if cpr is effectively administered. the effect and benefit of maintaining formal cpr training and certification is clear. background: more than 300,000 out-of-hospital cardiac arrests (ohcas) occur annually in the united states (us). automated external defibrillators (aeds) are life-saving devices in public locations that can significantly improve survival. an estimated 1 million aeds have been sold in the us; however, little is known about whether locations of aeds match ohcas. these data could help determine optimal placement of future aeds and targeted cpr/aed training to improve survival. objectives: we hypothesized that the majority (>50%) of aeds are not located in close proximity (200 feet) to the occurrence of cardiac arrests in a major metropolitan city. methods: this was a retrospective review of prospectively collected cardiac arrest data from philadelphia ems from january 1, 2008 until december 31, 2010. included were ohcas of presumed cardiac etiology in individuals 12 years of age or older. excluded were ohcas of presumed traumatic etiology, cases where resuscitation was terminated at the scene, and those dead on arrival. aed locations in philadelphia were obtained from myheartmap, a database of installed and wall-mounted aeds in pennsylvania. we used gis mapping software to visualize where ohcas occurred relative to where aeds were located and to determine the radius of ohcas to aeds. arrests within a 200, 400, and 600 foot radius of aeds were identified using the attribute location selection option in arcgis. the lengths of radii were estimated based on the average time it would take for a person to walk to and from an aed (200 feet ≈ 2 minutes; 400 feet ≈ 4 minutes; 600 feet ≈ 6 minutes). results: we mapped 3,483 ohcas and 2,314 aeds in philadelphia county. ohcas occurred in males (55%; 1916/3483) and the mean age was 65.4 years. 
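the arcgis radius-selection step described in the methods is, in essence, a fixed-radius proximity query. a simplified python sketch on projected planar coordinates in feet, with made-up points (a real analysis would use projected gis coordinates, as the authors did):

```python
import math

def arrests_with_aed_within(arrests, aeds, radius_ft):
    """Count arrests that have at least one AED within radius_ft.
    Assumes planar (projected) coordinates in feet; illustrative only."""
    def has_nearby_aed(a):
        return any(math.hypot(a[0] - b[0], a[1] - b[1]) <= radius_ft
                   for b in aeds)
    return sum(1 for a in arrests if has_nearby_aed(a))

# illustrative, made-up coordinates (feet)
arrests = [(0, 0), (500, 0), (1000, 1000)]
aeds = [(150, 0), (450, 300)]
print(arrests_with_aed_within(arrests, aeds, 200))  # -> 1
```

dividing such counts by the total number of arrests yields the proximity percentages reported in the results.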
ventricular fibrillation occurred in 19% (662/3483). aeds were primarily located in schools/universities (30%), office buildings (22%), and residential buildings (4%). aeds were not identified within 200 feet in 93% (3,239) of ohcas, within 400 feet in 90% (3,135) of ohcas, and within 600 feet in 79% (2,752) of ohcas. the figure (large black circles) illustrates aed/ohca within 200 feet on the left and 600 feet on the right. conclusion: aeds were rarely close to the locations of ohcas, which may be a contributor to low cardiac arrest survival rates. innovative models to match aed availability with ohcas should be explored. (originally submitted as a ''late-breaker.'') background: early and frequent epinephrine administration is advocated by acls; however, epinephrine research has been conducted primarily with standard cpr (std). active compression-decompression cpr with an impedance threshold device (acd-cpr+itd) has become the standard of care for out of hospital cardiac arrest in our area. the hemodynamic effects of iv epinephrine under this technique are not known. objectives: to determine the hemodynamic effects of iv epinephrine in a swine model undergoing acd-cpr+itd. methods: six female swine (32 ± 1 kg) were anesthetized, intubated, and mechanically ventilated. intracranial, thoracic aorta, and right atrial pressures were recorded via indwelling catheters. carotid blood flow (cbf) was recorded via doppler. etco2, spo2, and ekg were monitored. ventricular fibrillation was induced and went untreated for 6 minutes. three minutes each of standard cpr (std), std-cpr+itd, and acd-cpr+itd was performed. at minute 9 of the resuscitation, 40 µg/kg of iv epinephrine was administered and acd-cpr+itd was continued for 1 minute. statistical analysis was performed with a paired t-test. results: aortic pressure and calculated cerebral and carotid perfusion pressures increased from std < std+itd < acd-cpr+itd (p ≤ 0.001). 
epinephrine administered during acd-cpr+itd significantly increased mean aortic (29 ± 5 vs 42 ± 12, p = 0.01), cerebral (12 ± 5 vs 22 ± 10, p = 0.01), and coronary perfusion pressures (8 ± 7 vs 17 ± 4, p = 0.02); however, mean cbf and etco2 decreased (29 ± 15 vs 14 ± 7, p = 0.03 and 20 ± 7 vs 18 ± 6, p = 0.04, respectively). conclusion: the administration of epinephrine during acd-cpr+itd significantly increased markers of macrocirculation while significantly decreasing etco2, a proxy for organ perfusion. while the calculated cerebral perfusion pressures increased, the directly measured cbf decreased. this calls into question the ability of calculated perfusion pressures to accurately reflect blood flow and oxygen delivery to end organs. background: during cardiac arrest most patients are placed on 100% oxygen with assisted ventilations. after return of spontaneous circulation (rosc), 100% oxygen is typically continued for an extended time. animal data suggest that immediate post-arrest titration of oxygen by pulse oximetry produces better neurocognitive/histologic outcomes. recent human data suggest that arterial hyperoxia is associated with worse outcomes. objectives: to assess the relationship between hypoxia, normoxia, and hyperoxia post-arrest and outcomes in post-cardiac arrest patients treated with therapeutic hypothermia. methods: we conducted a retrospective chart review of 190 post-arrest patients admitted to an academic medical center between january 2000 and december 2007 who had arterial blood gases (abg) drawn after rosc. demographic variables were analyzed using anova and chi-square tests as appropriate. unadjusted logistic regression analyses were performed to assess the relationship between hypoxia (pao2 < 60 mmhg), normoxia (60-300 mmhg), hyperoxia (>300 mmhg), and mortality. results: on first abg (190 patients), 37 (19.5%) were hypoxic, 92 (48.4%) normoxic, and 61 (32.1%) hyperoxic.
the average age of the cohort was 62.8 years (no difference for hypoxic, normoxic, and hyperoxic patients). overall mortality was 70.5% (134/190). there were no significant differences between initial heart rate, systolic blood pressure, sex, race, or pre-arrest functional status. in-hospital mortality was significantly higher when the first abg demonstrated hypoxia (94.6%; 35/37) than for normoxia (68.5%; 63/92) or hyperoxia (59%; 36/61). in unadjusted logistic regression analysis of first pao2 values, hyperoxia was not associated with increased mortality (or 0.7; 95% ci 0.3-1.4), but hypoxia was associated with increased mortality (or 6.1; 95% ci 1.4-27.5). conclusion: hypoxia but not hyperoxia on first abg was associated with mortality in a cohort of post-arrest patients. background: there are over 330,000 deaths due to cardiac arrest per year in the us. the aha recommends monitoring the quality of cpr primarily through the use of end-tidal co2 (etco2). the level of etco2 is significantly dependent on minute ventilation and is altered by pressor and bicarbonate use. cerebral oximetry (cereox) uses near-infrared spectroscopy to non-invasively measure oxygen saturation of the frontal lobes of the brain. cereox has been correlated with cerebral blood flow and jugular venous bulb saturations. objectives: the objective of this study is to compare simultaneous measurements of etco2 and cereox to investigate which monitoring method provides the best measure of cpr quality as defined by return of spontaneous circulation (rosc). methods: a prospective cohort of a convenience sample of out-of-hospital and ed cardiac arrest patients from two large eds. patients were monitored simultaneously by etco2 and cereox during cpr. patient demographics and arrest data were collected using the utstein criteria. all patients were monitored throughout the resuscitation efforts.
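the post-arrest oxygenation abstract above classifies first-abg pao2 values into three bands and reports unadjusted odds ratios for mortality. a minimal stdlib sketch of both steps; the 2x2 counts in the usage note are illustrative rather than the study's data, and the woolf (log) confidence interval is an assumption, since the abstract does not state its method:

```python
import math

def classify_pao2(pao2_mmhg):
    """first-abg categories used in the abstract:
    hypoxia < 60 mmhg, normoxia 60-300 mmhg, hyperoxia > 300 mmhg."""
    if pao2_mmhg < 60:
        return "hypoxia"
    if pao2_mmhg <= 300:
        return "normoxia"
    return "hyperoxia"

def odds_ratio_ci(a, b, c, d, z=1.96):
    """unadjusted odds ratio for a 2x2 table with a woolf (log) 95% ci.

    a, b = died / survived in the exposed group;
    c, d = died / survived in the reference group.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

for example, an illustrative table with a=20, b=10, c=15, d=30 gives an odds ratio of 4.0; the abstract's figures (or 6.1, ci 1.4-27.5) come from its own counts and regression.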
rosc was defined as a palpable pulse and a measurable blood pressure for a minimum of thirty minutes. results: twenty-two patients were enrolled with complete data sets; 27% of the subjects had rosc. average down time was 12 minutes (sd ± 14.6) for subjects with rosc and 31 minutes (sd ± 17.8) for subjects without rosc. the inability to obtain a value of 30 for either etco2 or cereox was 50% and 75% specific, with an 80% and 100% npv, respectively, for predicting lack of rosc. obtaining a value of 30 for either etco2 or cereox was 66% and 100% sensitive, respectively, in identifying rosc. subjects with rosc had sustained values above 30 for 1.25 minutes on cereox and 4.9 minutes on etco2 prior to rosc. the increase in values over the three-minute period prior to rosc was 13.5 on cereox and 1.3 on etco2. conclusion: the inability to obtain a value of 30 on either etco2 or cereox strongly predicted lack of rosc. cereox provides a larger magnitude and closer temporal increase prior to rosc than etco2. attaining a value of 30 on cereox was more predictive of rosc than etco2. background: discrepancies arise when communicating information to multiple listeners in a short amount of time. this creates a communication barrier not always apparent to practitioners. we examine the perceptions of ems and ed personnel on the transfer of care and its correlation to missing patient data. objectives: to evaluate provider perception of information transfer by ems and ed personnel and compare this to an external observer's objective assessment. methods: this is a retrospective quality improvement program at an academic level i trauma center. transfers of medical and trauma patients from ems to ed personnel were attended by trained external observers, research associates (ra). ra recorded the data communicated: name, age, past medical history (pmh), allergies, medications, events, active problems, vital signs (vs), level of consciousness (loc), iv access, and treatments given.
then, ems and ed staff rated their perception of the transfer on a 1-10 rating scale. results: ra evaluated 448 patient transfers (268 medical and 180 trauma). transfer time did not differ: 4.05 minutes for medical (95% ci: 3.77-4.32) and 3.92 minutes for trauma patients (95% ci: 3.53-4.31) (p = 0.57). missing data between the two groups also did not differ, except that loc and treatment were missed more often in medical transfers, while pmh was missed more often in trauma transfers. comparing the transfers with all vs present (67%, 300/448) and all vs missing (12%, 55/448), there was no difference in perception of the transfer for ems (9.6/10 with vs present vs 9.4/10 with vs absent) or ed staff (9.5/10 with vs present, 9.4/10 with vs absent). when all vital signs were missing, ra rated 69.1% of transfers as poor, whereas when all vs were present 80.8% of transfers were considered good. conclusion: ems and ed staff felt transfers of care were professional, teams were attentive, and interruptions were similar for both medical and trauma cases. their perception of the transfer of care was similar even when key information was missing, although external observers rated a significant proportion of transfers poorly. thus, ems and ed staff were not able to evaluate their own performance in a transfer of care, and external observers were found to be better evaluators of transfers of care. swati singh, john brown, prasanthi ramanujam; ucsf, san francisco, ca. background: ems transports a large number of psychiatric emergencies to emergency departments (ed) across the us. research on paramedic education related to behavioral emergencies is sparse, but based on expert opinion we know that gaps in paramedic knowledge and training exist. in our system, paramedics triage patients to medical, detoxification, and purely psychiatric destinations, so a paramedic's understanding of these emergencies directly affects the flow of patients in our eds.
objectives: our objectives were to understand the gaps in current training and to develop a targeted curriculum for field providers, with the long-term goal of appropriately recognizing and triaging subjects to the ed. methods: data were collected using a survey distributed during a paramedic association meeting in october 2011. subjects were excluded if they did not complete the survey. survey questions addressed paramedic demographics, the frequency of various psychiatric emergencies, and confidence in managing these emergencies. data were collated, analyzed, and presented as descriptive statistics. results: forty-nine surveys were distributed with a response rate of 82% (n = 40/49). of the respondents, 70% (n = 28) were male and 68% (n = 27) had at least five years' experience. mood, thought, and cognitive disorders were the most frequently encountered presentations, and 65% (n = 26) of respondents came across psychiatric emergencies multiple times a week. many respondents did not feel confident managing agitated delirium (n = 16, 40%), acute psychosis (n = 17, 43%), and intimate partner or elder abuse (n = 14, 35%). a third to a half of the respondents felt they had little or no training in chemical sedation (n = 18, 45%), verbal de-escalation (n = 14, 35%), and triaging patients (n = 21, 53%). conclusion: we identified a need for a revised curriculum on the management of psychiatric emergencies. future steps will focus on developing the curriculum and measuring the change in knowledge after its implementation. background: prehospital endotracheal intubation has long been a cornerstone of resuscitative efforts for critically ill or injured patients. paramedic airway management training will need to be modified due to the 2011 acc/aha guidelines to ensure maintenance of competency in the overall management of airway emergencies. how best to modify the training of paramedics requires an understanding of current experience.
objectives: the purpose of this report is to characterize the airway management expertise of experienced and non-experienced paramedics in a single ems system. methods: we retrospectively reviewed all prehospital intubations from an urban/suburban ambulance service (professional ambulance, inc.) over a five-year period (january 1, 2006 to december 31, 2010). characteristics of airway management by paramedics with 0-5 years of experience (group 1) were compared to those with greater than 5 years of experience (group 2). airway management was guided by massachusetts statewide treatment protocols governing direct laryngoscopy and all adjunctive approaches. an attempt was defined as the laryngoscope blade passing the lips. difficult and failed airways were managed with extraglottic devices (egd) or needle cricothyroidotomy. we reviewed patient characteristics, intubation methods, rescue techniques, and adverse events. results: 150 patients required airway management: 120 (80%) were managed by group 1 and 30 (20%) by group 2. group 1 required fewer attempts to intubate (1.39 vs 1.83, p = 0.0035) and was less likely to use a rescue device (19.1% vs 50.0%, p = 0.0009). both groups were equally likely to go directly to a rescue device (10% vs 10%, p = 1.0). all patients were successfully oxygenated and ventilated with either an endotracheal tube or an egd. no surgical airways were performed and no patients died as a result of a failed airway. conclusion: while intubation success rates of paramedics with less than and greater than five years of experience are similar, less experienced paramedics use fewer attempts and are less likely to use a rescue device. both groups recognize difficult airways and go directly to rescue devices equally. this highlights the difficulty of maintaining competence. education requirements must be evaluated and redesigned to allow paramedics to maintain competence and to emphasize airway management according to the latest resuscitation guidelines.
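the group comparison in the airway abstract above (rescue-device use of 19.1% vs 50.0%, p = 0.0009) is the kind of 2x2 comparison a pearson chi-square test handles. a minimal stdlib sketch; the cell counts are back-calculated from the reported percentages (roughly 23/120 vs 15/30) and are an assumption, and the abstract does not state which exact test it used:

```python
def chi_square_2x2(a, b, c, d):
    """pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# rescue-device use, back-calculated from the reported percentages:
# group 1: ~23 of 120 (19.1%); group 2: 15 of 30 (50.0%) -- an assumption
chi2 = chi_square_2x2(23, 97, 15, 15)
```

a statistic of about 12 on 1 df corresponds to p < 0.001, consistent in direction with the reported p = 0.0009 (which may have come from a different test such as fisher's exact).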
how well do ems 9-1-1 protocols predict ed utilization for pediatric patients? stephanie j. fessler1, harold k. simon1, daniel a. hirsh1, michael colman2; 1emory university, atlanta, ga; 2grady health systems, atlanta, ga. background: the use of emergency medical services (ems) for low-acuity pediatric problems has been well documented. however, it is unclear how accurately general ems dispatch protocols predict the subsequent ed utilization for these patients. objectives: to determine the ed resource utilization rate of pediatric patients categorized as low acuity by 9-1-1 dispatch protocols and subsequently transferred to a children's hospital. methods: all transports of pediatric patients from the scene by a large urban general ems provider that were prioritized as low acuity by initial 9-1-1 dispatch protocols were identified. protocols were based on the national academy of medical priority dispatch system, v12. starting on jan 1, 2010, 100 consecutive cases of patients transported to three pediatric emergency departments (ped) of a large tertiary care pediatric health care system were reviewed. demographics, ped visit characteristics, resource utilization, and disposition were recorded. patients who received medications other than po antipyretics, had labs other than a strep test, underwent a radiology study or a procedure, or were not discharged home were categorized into the significant ed resource utilization group. results: 93% of the patients were african american, and most had public insurance or self-pay (86% and 13%, respectively). the median age was 11 months (range 4 days-13 years). 54% were female. none of these low-acuity patients were upgraded by ems operators en route. upon arrival to the ped, 45% of transported patients were classified into the significant utilization group.
six of the 100 total patients were admitted, including a 2 y/o requiring emergent intubation, an 8 m/o with a broken cvl, a 6 y/o with sickle cell pain crisis, and a 2 y/o with altered mental status. the remainder of the significant resource utilization group consisted of children needing procedures, anti-emetics, narcotic pain control, labs, and x-rays. conclusion: in this general ems 9-1-1 system, dispatch protocols for pediatric patients classified as low priority did poorly in predicting subsequent ed utilization, with 45% requiring significant resources. further, ems operators did not recognize a critical child who needed emergent intervention. opportunity exists to refine general ems 9-1-1 protocols for children in order to more accurately define an ems priority status that better correlates with ultimate needs and resource utilization. objectives: to determine if there is an association between a patient's impression of the overall quality of care and his or her satisfaction with provided pain management. it was hypothesized that satisfaction with pain management would be significantly associated with a patient's impression of the overall quality of care. methods: this was a retrospective review of patient satisfaction survey data initially collected by an urban als ems agency from 1/1/2007 to 8/1/2010. participants were randomly selected from all patients transported, proportional to their paramedic-defined acuity, categorized as low, medium, or high, with a goal of 100 interviews per month. the proportions of patients sampled from each acuity level were 25% low, 50% medium, and 25% high. patients were excluded if there was no telephone number recorded in the prehospital patient record or they were pronounced dead on scene. all satisfaction questions used a five-point likert scale with ratings from excellent to poor that were dichotomized for analysis as excellent or other.
the outcome variable of interest was the patient's perception of the overall quality of care. the main independent variable was the patients' rating of how well the staff who treated them at the scene helped to control or reduce their pain. demographic variables were assessed for potential confounding. results: there were 2,759 patients with complete data for the outcome and main independent variable, with 45.0% male respondents and an average age of 54.1 years (sd = 22.7). overall quality of care was rated excellent by 66.0% of patients, while 59.1% rated their pain management as excellent. of patients who rated their pain management as excellent, 87.9% rated overall quality of care as excellent, while only 34.2% of patients rated overall quality excellent if pain management was not excellent. when controlling for potential confounding variables, patients who perceived their pain management to be excellent were 13.9 (95% ci 11.5-16.9) times more likely to rate their overall quality of care as excellent compared to those with non-excellent perceived pain management. conclusion: patients' perceptions of the overall quality of care were significantly associated with their perceptions of pain management. objectives: the purpose of this study is to determine whether ground-based paramedics could be taught and could retain the skills necessary to successfully perform a cricothyrotomy. methods: this retrospective study was performed in a suburban county with a population of 160,000 and 21,000 ems calls per year. participants were ground-based paramedics in a local ems system who were taught wire-guided cricothyrotomy as part of a standardized paramedic educational update program. as part of the educational program, paramedics were taught wire-guided cricothyrotomy on a simulation model previously developed to train emergency medicine residents. after viewing an instructional video, the participants were allowed to practice using a 16-step checklist.
not all of these 16 steps were scored as automatic failures. each paramedic was individually supervised performing a cricothyrotomy on the simulator until successful; a minimum of five simulations was required. retention was assessed using the same 16-step checklist during annual skills testing, after a minimum of 6 weeks to a maximum of 3 months post-training. results: a total of 55 paramedics completed both the initial training and the reassessment during the time period studied. during the initial training phase, 100% (55 of 55) of the paramedics were successful in performing all 16 steps of the wire-guided cricothyrotomy. during the retention phase, 87.3% (48 of 55) retained the skills necessary to successfully perform the wire-guided cricothyrotomy. of the 16-step checklist, most steps were performed successfully by all the paramedics or missed by only 1 of the 55 paramedics. step #8, which involved removing the needle prior to advancing the airway device over the guidewire, was missed by 34.5% (19 of 55) of the participants. step #8 was not an automatic failure, since most participants immediately self-corrected and completed the procedure successfully. conclusion: paramedics can be taught and can retain the skills necessary to successfully perform a wire-guided cricothyrotomy on a simulator. future research is necessary to determine if paramedics can successfully transfer these skills to real patients. background: netcare911 is one of the largest private providers of emergency air medical care in south africa. each hems (helicopter emergency medical service) crew is manned by a physician-paramedic team and is dispatched based on specific medical criteria, time to definitive care, and need for physician expertise. objectives: to describe the characteristics of netcare911 air medical evacuations in gauteng province and to analyze the role of physicians in patient care and the effect on call times.
methods: all patients transported by a netcare911 helicopter over the one-year period from january to december 2008 were enrolled in the study. injury classifications, demographics, procedures, and scene and flight times were collected retrospectively from run sheets. data were described by medians and interquartile intervals. results: a total of 386 patients were transported on 384 flights originating from the netcare911 gauteng helicopter base. ninety-two percent were trauma-related, with 74% resulting from motor vehicle accidents. physician expertise was listed 30% of the time as the indication for air medical response. a total of 105 advanced procedures were performed by physicians on 93 patients, including paralytic-assisted intubations, chest tube placement, and cardiac pacing. the median total call time was 46 minutes with 10 minutes spent on scene, compared with 54 and 24 minutes, respectively, when advanced procedures were performed by hems (p < 0.001). conclusion: trauma accounts for an overwhelming majority of patients requiring emergency air medical transportation. advanced medical procedures were performed by physicians in nearly a quarter of the patients. there were significant differences in call times when advanced procedures were performed by hems. objectives: we sought to evaluate the level of awareness and adoption of the off-line protocol guidelines by utah ems agencies. methods: we surveyed all ems agencies in utah 18 months after the protocol guidelines' release. medical directors, ems captains, or training coordinators completed a short phone survey regarding their knowledge of the emsc protocol guidelines and whether their agency had adopted them. in particular, participants were asked about the pain protocol guideline and their management of pediatric pain. results: of the 186 agencies, 182 participated in the survey (98%).
of those participating, 15 agencies (8%) were excluded from the analysis: 4 (2%) who treat only adults and 11 (6%) who do not participate in electronic data entry. of the remaining 171 agencies (94%), 155 (91%) were familiar with the utah emsc protocol guidelines; 116 agencies (68%) had either partially or fully adopted the protocol guidelines. 132 agencies (77%) were familiar with the pain treatment protocol guideline; 29 (17%) had adopted it; 34 (21%) planned to either partially or fully adopt it. overall, 84 agencies (49%) had off-line protocols allowing the administration of narcotics to children. of those, 49 (58%) had intranasal fentanyl as an available medication and delivery route. of the 84 agencies with off-line protocols for pain, 77 (83%) reported familiarity with the emsc pain protocol guideline. conclusion: the creation and dissemination of statewide emsc protocol guidelines resulted in widespread awareness (91%), and to date 68% of agencies have adopted them. factors associated with protocol adoption should be explored in future investigations. background: intranasal (in) naloxone is safe and effective for the treatment of opioid overdose. while it has been extensively studied in the out-of-hospital environment in the hands of paramedics and lay people, we are unaware of any studies evaluating the safety and efficacy of in naloxone administration by bls providers. in recent years in naloxone has been added to the bls armamentarium; however, most services/states require that an als unit be dispatched and attempt an intercept if in naloxone is administered by bls providers. objectives: the purpose of this study is to evaluate the safety and effectiveness of bls-administered in naloxone in an urban environment. methods: retrospective cohort review, as part of the ongoing qa process, of all patients who had in naloxone administered by bls providers.
the study was conducted under a special projects waiver from massachusetts oems from february 2011 through november 2011 in a busy urban tiered ems system in the metro-boston area. exclusion criteria: cardiac arrest. demographic information was collected, as well as vital signs, number of naloxone doses given by bls, patient response to bls naloxone administration (clinical improvement in mental status and/or respiratory status), and als intercept. descriptive statistics and confidence intervals are reported using microsoft excel and spss 17.0. results: fifty-six cases of bls-administered in naloxone were identified, and 2 were excluded as cardiac arrests. the included cases had a mean age of 38.8 ± 13.5 years (range 16-82), and 74% (ci 60-85) were male. of the 54 included cases, 76% (ci 62-87) of patients responded to bls administration of naloxone. of the responders, 17% (ci 7-32) required two doses. there were 10 protocol violations, representing 19% (ci 9.2-31.4) of the total administrations; however, in all 10 protocol violations the patients had a positive response to the administration of in naloxone. seven of the protocol violations were patients who required a second 2 mg dose of naloxone. eleven cases did not have an als intercept; only 1 of these 11 patients did not respond to bls administration of naloxone. there were no identified adverse events. conclusion: bls providers safely and successfully administered in naloxone, achieving a response rate consistent with studies of als providers' administration of in naloxone. given the success rate of bls providers, it may be feasible for bls to manage responders without the aid of an als intercept. background: an estimated 20% of patients arriving by ambulance to the ed are in moderate to severe pain. however, the management of pain in the prehospital setting has been shown to be inadequate, and untreated pain may have negative consequences for patients.
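stepping back to the naloxone abstract above: it attaches confidence intervals to its response proportions (e.g. 76%, ci 62-87, for about 41 of 54 responders). a minimal stdlib sketch of a wilson score interval; the method is an assumption, as the abstract's intervals were computed in spss and may be exact (clopper-pearson) intervals, which run slightly wider:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """wilson score interval for a binomial proportion (95% by default).

    a stdlib-only alternative to the exact interval the abstract's
    software may have used -- an assumption, not the study's method.
    """
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half
```

for 41/54 this gives roughly 0.63-0.85, close to the reported 62-87%.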
objectives: to determine if focused education on pediatric pain management and implementation of a pain management protocol improved the prehospital assessment and treatment of pain in adult patients. specifically, this study aimed to determine if documentation of pain scores and administration of morphine by ems personnel improved. methods: this was a retrospective before-and-after study conducted by reviewing a county-wide prehospital patient care database. the study population included all adult patients transported by ems between 1 february 2006 and 28 february 2010 with a working assessment of trauma or burn. ems patient care records were searched for documentation of pain scores and morphine administration 2 years before and 2 years after an intensive pediatric-focused pain management education program and implementation of a pain management protocol. frequencies and 95% cis were determined for all patients meeting the inclusion criteria in the before and after time periods, and chi-square was used to compare frequencies between time periods. a secondary analysis was conducted using only subjects documented as meeting the protocol's treatment guidelines. results: 7,999 (10%) of 77,122 adult patients transported by ems during the study period met the inclusion criteria: 4,357 in the before and 3,642 in the after period. subject demographics were similar between the two periods. documentation of pain score did not change between the time periods. background: there is a presumption that ambulance response times affect patient outcome. we sought to determine if shorter response times really make a difference in hospital outcomes. objectives: to determine if ambulance response time makes a difference in the outcomes of patients transported for two major trauma (motor vehicle crash injuries, penetrating trauma) and two major medical (difficulty breathing and chest pain complaints) emergencies.
methods: this study was conducted in a metropolitan ems system serving a total population of 800,000 including urban and rural areas. cases were included if the private ems service was the first medical provider on scene, the case was priority 1, and the patient was 13 years or older. a 12-month time period was used for the data evaluation. four diagnoses were examined: motor vehicle crash injuries, penetrating trauma, difficulty breathing, and chest pain complaints. ambulance response times were assessed for each of the four complaints. the patients' initial vital signs were assessed and the number of vital signs out of range was recorded. a sample of all cases which went to the single major trauma center was selected for evaluation of hospital outcome. using this hospital sample, the number of vital signs out of range was assessed as a surrogate marker indicating severity of hospital outcome. correlation coefficients were used to evaluate interactions between independent and outcome variables. results: of the 2164 cases we reviewed over the 12-month period, we found that the ems service responded significantly faster to trauma complaints, at 4.53 minutes (n = 254), than to medical complaints, at 5.92 minutes (n = 1910). in the hospital sample of 587 cases, the number of vital signs out of range was positively correlated with hospital days (r = 0.11), admits (r = 0.12), icu admits (r = 0.10), and deaths (r = 0.09), but not response times (r = -0.08). in the entire sample, there was no correlation between vital signs out of range and response times for any diagnosis (see figure). conclusion: based on our hospital sample, which showed that the number of vital signs out of range was a surrogate marker of worse hospital outcomes, we find that hospital outcomes are not related to initial response times.
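the response-time abstract above leans on correlation coefficients (e.g. r = 0.11 between vital signs out of range and hospital days). a minimal stdlib sketch of the pearson coefficient it presumably uses (an assumption, since the abstract does not name the statistic), with toy data rather than the study's:

```python
import math

def pearson_r(x, y):
    """pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

values near ±1 indicate a strong linear association; the abstract's coefficients of 0.09-0.12 are weak positive associations.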
adverse effects following prehospital use of ketamine by paramedics. eric ardeel, baylor college of medicine, houston, tx. background: ketamine is widely used across specialties as a dissociative agent to achieve sedation and analgesia. emergency medical services (ems) use ketamine to facilitate intubation and pain control, as well as to sedate acutely agitated patients. published studies of ems ketamine practice and effects are scarce. objectives: to describe the incidence of adverse effects occurring after ketamine administration by paramedics treating under a single prehospital protocol. methods: a retrospective analysis was conducted of 98 consecutive patients receiving prehospital ketamine from paramedics in the suburban/rural ems system of montgomery county hospital district, texas, between august 1, 2010 and october 25, 2011. ketamine administration indications were: need for rapid control of violent/agitated patients requiring treatment and transport; sedation and analgesia after trauma; and facilitation of intubation and mechanical ventilation. ketamine administration contraindications were: equivalent ends achievable by less invasive means; hypertensive crisis; angina; signs of significantly elevated intracranial pressure; and anticipated inability to support or control the airway. all patients were included, regardless of indication for ketamine administration. data were abstracted from electronic patient care records and available continuous physiologic monitoring data, and analyzed for the presence of adverse effects as defined a priori in ''clinical practice guidelines for emergency department ketamine dissociative sedation: 2011 update.'' results: no patients were identified as experiencing adverse effects as defined by the referenced literature. ketamine was utilized most often for patients with the following nemsis provider's primary impressions: 25 (26%) altered level of consciousness, 23 (23%) behavioral/psychiatric, and 20 (20%) traumatic injury.
overall, combativeness was associated with 64 (65%) patients. the mean age was 41 years (range 3-94 years) and 50 (51%) were male. the mean ketamine dose was 150 mg (range 25-500 mg), and twenty-four (24%) patients received multiple administrations. conclusion: in this patient population, our data indicate that prehospital ketamine use by ems paramedics, across all indications for administration, was safe. further study of ketamine's utility in ems is warranted. background: rigorous evaluation of the effect of implementing nationally vetted evidence-based guidelines (ebgs) has been notoriously difficult in ems. specifically, human subjects issues and the health insurance portability and accountability act (hipaa) present major challenges to linking ems data with distal outcomes. objectives: to develop a model that addresses the human subjects and hipaa issues involved with evaluating the effect of implementing the traumatic brain injury (tbi) ebgs in a statewide ems system. methods: the excellence in prehospital injury care (epic) project is an nih-funded evaluation of the effect of implementing the ems tbi guidelines throughout arizona (ninds-1r01ns071049-01a1). to accomplish this, a partnership was developed between the arizona department of health services (adhs), the university of arizona, and more than 100 ems agencies that serve approximately 85% of the state's population. results: ebg implementation: implementation follows all routine regulatory processes for making changes in ems protocols. in arizona, the entire project must be carried out under the authority of the adhs director. evaluation: a before-after system design is used (randomization is not acceptable). hipaa: as an adhs-approved public health initiative, epic is exempt from hipaa, allowing sharing of protected health information between participating entities.
for epic, the state attorney general provided official verification of hipaa exemption, thus allowing direct linkage of ems and hospital data. irb: once epic was officially deemed a public health initiative, the university irb process was engaged. as an officially sanctioned public health project, epic was determined not to be human subjects research. this allows the project to implement and evaluate the effect of this initiative without requiring individual informed consent. conclusion: by utilizing an ems-public health-university partnership, the ethical and regulatory challenges related to evaluating implementation of new ebgs can be successfully overcome. the integration of the department of health, the attorney general, and the university irb can properly protect citizens while permitting efficient implementation and rigorous evaluation of the effect of ebgs. this novel approach may be useful as a model for evaluation of implementing ems ebgs in other states and large counties.

(20.6%-58.1% by age) were transported to non-trauma centers. the most common reasons cited by ems for hospital selection were: patient preference (50.6%), closest facility (20.7%), and specialty center (15.2%). patient preference increased with age (p for trend 0.0001) and paralleled under-triage (figure 1). iss ≥16 patients transported to non-trauma hospitals by patient request had lower unadjusted mortality (3.8%, 95%ci 1.9-5.8) than similar patients transported to trauma centers (11.8%, 95%ci 10.7-12.8) or transported for other reasons (12.6%, 95%ci 11.4-13.7) (figure 2). under-triage appears to be influenced by patient preference and age. self-selection for transport to non-trauma centers may result in under-triaged patients with inherently better prognosis than triage-positive patients.

background: only 25% of all out-of-hospital cardiac arrest (ohca) patients receive bystander cpr (cardiopulmonary resuscitation).
the neighborhood in which an ohca occurs has significant influence on the likelihood of receiving bystander cpr. objectives: to utilize geographic information systems to identify ''high-risk'' neighborhoods, defined as census tracts with high incidence of ohca and low cpr prevalence. methods: design: secondary analysis of the cardiac arrest registry to enhance survival (cares) dataset for denver county, colorado. population: all consecutive adults (>18 years old) with ohca due to cardiac etiology from january 1, 2009 through december 31, 2010. data analysis: analyses were conducted in arcgis. three spatial statistical methods were used: local moran's i (lmi), getis-ord gi* (gi*), and spatial empirical bayes (seb) adjusted rates. census tracts with high incidence of ohca, as identified by all three spatial statistical methods, were then overlain with low bystander cpr census tracts, which were identified by at least two out of three statistical methods (lmi, gi*, or the lowest quartile of bystander cpr prevalence). overlapping census tracts identified with both high ohca incidence and low cpr prevalence were designated as ''high-risk''. results: a total of 728 arrests in 142 census tracts occurred during the study period, with 595 arrests included in the final sample. events were excluded if they were unable to be geocoded (n = 41), outside denver county (n = 8), or occurred in a jail (n = 3), hospital/physician's office (n = 7), or nursing home (n = 74). for high ohca incidence: lmi identified 29 census tracts, gi* identified 45 census tracts, and the seb method identified 28 census tracts. twenty-five census tracts were identified by all three methods. for low bystander cpr prevalence: lmi identified 9 census tracts, gi* identified 16 census tracts, and 101 census tracts were identified as being in the lowest quartile of cpr prevalence. twenty-four census tracts were identified by two of the three methods.
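the overlay rule described in the methods — a tract is high-incidence only if all three spatial statistics flag it, and low-cpr if at least two of three criteria flag it — reduces to set operations once each method's flagged tracts are known. a minimal sketch with hypothetical tract identifiers (illustrative values, not the denver county results; the actual analysis was run in arcgis):

```python
# Hypothetical census-tract IDs flagged by each spatial method
# (made-up values for illustration only).
lmi_ohca    = {101, 102, 103, 104}   # local Moran's I: high OHCA incidence
gistar_ohca = {101, 102, 103, 105}   # Getis-Ord Gi*
seb_ohca    = {101, 102, 104, 105}   # spatial empirical Bayes adjusted rates

lmi_cpr    = {101, 106}              # local Moran's I: low bystander CPR
gistar_cpr = {101, 102, 107}         # Getis-Ord Gi*
lowq_cpr   = {102, 106, 108}         # lowest quartile of CPR prevalence

# High OHCA incidence: flagged by ALL three methods.
high_ohca = lmi_ohca & gistar_ohca & seb_ohca

# Low CPR prevalence: flagged by at least TWO of the three criteria.
cpr_sets = [lmi_cpr, gistar_cpr, lowq_cpr]
low_cpr = {t for t in set().union(*cpr_sets)
           if sum(t in s for s in cpr_sets) >= 2}

# ''High-risk'' tracts: both high incidence and low CPR prevalence.
high_risk = high_ohca & low_cpr
print(sorted(high_risk))
```

with these toy inputs the intersection yields tracts 101 and 102; in the study the same overlay produced two high-risk tracts out of 142.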
two census tracts were identified as high-risk having both high ohca incidence and low cpr prevalence (figure) . high-risk census tract demographics as compared to denver county are shown in the table. conclusion: the two high-risk census tracts, comprised of minority and low-income populations, appear to be possible sites for targeted community-based cpr interventions. objectives: we sought to assess the accuracy and correlation of geographic information system (gis) derived transport time compared to actual ems transport time in ohca patients. methods: prospective, observational cohort analysis of ohca patients in vancouver, b.c., one of the sites of the resuscitation outcomes consortium (roc). a random sample from all of the ohca cases from 12/05 through 05/07 was selected for analysis from one site of the roc epistry. using gis, ems transport time was derived from reported latitude/longitude coordinates of the ohca event to the actual receiving hospital. this was calculated via the actual network distance using arcgis. this gis-derived time was then compared to the actual ems transport time (in minutes) using the wilcoxon signed rank test. scatter plot analysis of actual vs. gis times were created to evaluate the relationship between actual and calculated time. a linear regression model predicting actual ems transport time from the derived gis-time was also developed in order to examine the potential relationship between the two variables. differences in the relationship were also investigated based on time of the day to reflect varying traffic conditions. results: 641 cases were randomly selected for analysis. the median actual transport time was significantly longer than the median gis derived transport time (7.08 minutes vs. 5.50 minutes). scatter plot analysis did not reveal any significant correlation between actual and gis-based time. 
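the paired comparison described in the methods above — actual ems transport time versus the network-distance time derived in arcgis, tested with the wilcoxon signed-rank test plus a linear regression — can be sketched with scipy. the times below are synthetic, constructed so that actual times run systematically longer, as in the study (medians 7.08 vs 5.50 minutes):

```python
import numpy as np
from scipy.stats import wilcoxon, linregress

rng = np.random.default_rng(0)

# Hypothetical paired transport times (minutes) for 40 arrests:
# GIS-derived network times, and actual times that run longer.
gis_time = rng.uniform(3.0, 10.0, size=40)
actual_time = gis_time + rng.uniform(0.5, 4.0, size=40)

# Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(actual_time, gis_time)
print(f"median actual {np.median(actual_time):.2f} "
      f"vs GIS {np.median(gis_time):.2f} min, p={p:.2g}")

# Linear regression predicting actual from GIS time; r**2 measures
# how well the derived time approximates reality.
fit = linregress(gis_time, actual_time)
print(f"r^2 = {fit.rvalue**2:.2f}")
```

note that in the study itself the fit was poor (r² = 0.20), which is the substantive finding; the synthetic data here are deliberately well-behaved to show the mechanics.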
additionally, there was poor approximation between gis-based time and actual ems time (r² = 0.20), with no evidence of a significant linear relationship between the two. the poorest correlation of time was observed during the morning hours (07:00-09:00; r² = 0.02) while the strongest correlation was during the overnight hours (00:00-07:00; r² = 0.26). conclusion: gis-derived time does not appear to correlate well with actual ems transport time of ohca patients. efforts should be made to accurately obtain actual ems transport times for ohca patients.

objectives: we first sought to describe the incidence of ohca presenting to the ed. we then sought to determine the association between hospital characteristics and survival to hospital admission. methods: we identified patients with diagnoses of cardiac arrest or ventricular fibrillation (icd-9 427.5 or 427.41) in the 2007 nationwide emergency department sample, a nationally representative estimate of all ed admissions in the us. eds reporting ≥1 patient with ohca were included. our primary outcome was survival to hospital admission. we examined variability in hospital survival rate and also classified hospitals into high or low performers based on median survival rate. we used this dichotomous hospital-level outcome to examine factors associated with survival to admission including hospital and patient demographics, ed volume, cardiac arrest volume, and cardiac catheterization availability. all unadjusted and adjusted analyses were performed using weighted statistics and logistic regressions. results: of the 966 hospitals, 949 (98.2%) were included. in total, 44,782 cases of cardiac arrest were identified, representing an estimated 203,331 cases nationally.
overall ed ohca survival to hospital admission was 23.5% (iqr 0.1%, 29.4%). in adjusted analyses, increased survival to admission was seen in hospitals with teaching status (or 2.7, 95% ci 1.7-4.4, p < 0.001), annual ed visits ≥10,000 (or 3.9, 95% ci 2.5-6.1, p < 0.001), and pci capability (or 9.1, 95% ci 1.2-68.2, p = 0.032). in separate adjusted analyses including teaching status and pci capabilities, hospitals with >40 annual cardiac arrest cases (or 3.0, 95% ci 2.2-4.2, p < 0.001) were also shown to have improved survival (figure). conclusion: ed volume, cardiac arrest volume, and pci capability were associated with improved survival to hospital admission in patients presenting to the ed after ohca. an improved understanding of the contribution of ed care to ohca survival may be useful in guiding the regionalization of cardiac arrest care.

background: prior investigations have demonstrated regional differences in out-of-hospital cardiac arrest (ohca) outcomes, but none have evaluated survival variability by hospital within a single major us city. objectives: we hypothesized that 30-day survival from ohca would vary considerably among one city's receiving hospitals. methods: we performed a retrospective review of prospectively collected cardiac arrest data from a large, urban ems system. our population included all ohcas with a recorded social security number (which we used to determine 30-day survival through the social security death index) that were transported to a hospital between 1/1/2008 and 12/31/2010. we excluded traumatic arrests, pediatric arrests, and hospitals receiving fewer than 10 ohcas with social security numbers over the three-year study period. we examined the association between receiving hospital and 30-day survival. additional variables examined included: level i trauma center status, teaching hospital status, ohca volume, and whether post-arrest therapeutic hypothermia (th) protocols were in place in 2008.
statistics were performed using chi-square tests and logistic regression. results: our study population comprised 550 arrest cases delivered to 18 unique hospitals with an overall 30-day survival of 14.4%. mean age was 69.0 (sd 16.2) years. males comprised 54.2% of the cohort; 53.3% of victims were black. thirty-day survival varied significantly among the hospitals, ranging from 4.8% to 35.0% (chi-square 32.3, p = 0.014). ohcas delivered to level i trauma centers were significantly more likely to survive (19.5% vs. 12.7%, p = 0.05), as were those delivered to hospitals known to offer post-arrest th (19.2% vs. 11.8%, p = 0.018). hospital teaching status and ohca volume were not associated with survival. conclusion: there was significant variability in ohca survival by hospital. patients were significantly more likely to survive if transported to a level i trauma center or hospital with post-arrest th protocols, suggesting a potential role for regionalization of ohca care. limiting our population to ohcas with recorded social security numbers reduced our power and may have introduced selection bias. further work will include survival data on the complete set of ohcas transported to hospitals during the three-year study period. background: traumatic brain injury is a leading cause of death and disability. previous studies suggest that prehospital intubation in patients with tbi may be associated with mortality. limited data exist comparing prehospital (ph) nasotracheal (nt), prehospital orotracheal (ot), and ed ot intubation and mortality following tbi. objectives: to estimate the associations between ph nt, ph ot, and ed ot intubation and in-hospital mortality in patients with moderate to severe tbi, with hypotheses that ph nt and ph ot intubation would be associated with increased mortality when compared to ed ot or no intubation. methods: an analysis using the denver health trauma registry, a prospectively collected database. 
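the between-hospital variability test reported above is a chi-square test on a hospitals-by-outcome contingency table. a minimal sketch with made-up counts for four hospitals (not the study's 18-hospital data, which gave chi-square 32.3, p = 0.014):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical (survived, died) 30-day counts by receiving hospital;
# survival spans a wide range, as in the study (4.8% to 35.0%).
table = np.array([
    [ 2, 38],   # hospital A:  5% survival
    [ 7, 43],   # hospital B: 14%
    [12, 28],   # hospital C: 30%
    [14, 26],   # hospital D: 35%
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4f}")
```

the degrees of freedom equal (hospitals − 1) × (outcomes − 1); a small p indicates survival is not homogeneous across hospitals, the same inference the abstract draws.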
consecutive adult trauma patients from 1995-2008 with moderate to severe tbi defined as head abbreviated injury scale (ais) scores of 2-5. structured chart abstraction by blinded physicians was used to collect demographics, injury and prehospital care characteristics, intubation status and timing, in-hospital mortality and survival time, and neurologic function at discharge. poor neurologic function was defined as cerebral performance category score of 3-5. multivariable logistic regression and survival analyses were performed, using multiple imputation for missing data. results: of the 3,517 patients, the median age was 38 (iqr 27-51) years. the median ph gcs was 14 (iqr 6-15), median injury severity score was 20 (iqr 13-29), and median head ais was 4 (iqr 3-5). ph nt occurred in 15.8%, ph ot in 9.5%, and ed ot in 17.4%, while mortality occurred in 17.5%. the 24-, 48-, and 72-hour survival analyses are outlined in the table. survival curves for ph nt, ph ot, and ed ot are demonstrated in the figure (p < 0.001). conclusion: prehospital intubation in patients with moderate to severe tbi is associated with increased mortality. contrary to our initial hypothesis, there was also a significant association between ed intubation and mortality. these associations persisted despite survival time, and while adjusting for injury severity.

background: sbdp150 is a breakdown product of the cytoskeletal protein alpha-ii-spectrin found in neurons and has been detected in severe tbi. objectives: this study examined whether early serum levels of sbdp150 could distinguish: 1) mild tbi from three control groups; 2) those with and without traumatic intracranial lesions on ct (+ct vs -ct); and 3) those having a neurosurgical intervention (+nsg vs -nsg) in mild and moderate tbi (mmtbi).
methods: this prospective cohort study enrolled adult patients presenting to two level i trauma centers following mmtbi with blunt head trauma with loss of consciousness, amnesia, or disorientation and a gcs 9-15. control groups included uninjured controls and trauma controls presenting to the ed with orthopedic injuries or an mvc without tbi. mild tbi was defined as gcs 15 and moderate tbi as having a gcs <15. blood samples were obtained in all patients within 4 hours of injury and measured by elisa for sbdp150 (ng/ml). the main outcomes were: 1) the ability of sbdp150 to distinguish mild tbi from three control groups; 2) to distinguish +ct from -ct; and 3) to distinguish +nsg from -nsg. data were expressed as means with 95%ci, and performance was tested by roc curves (auc and 95%ci). results: there were 275 patients enrolled: 54 tbi patients (42 gcs 15, 12 gcs 9-14), 23 trauma controls (16 mvc controls and 7 orthopedic controls), and 198 uninjured controls. the mean age of tbi patients was 39 years (range 19-70) with 63% males. fourteen (14%) had a +ct and 9% had +nsg. mean serum sbdp150 levels were 0.764 (95%ci 0.561-0.968) in normal controls, 1.035 (0.091-2.291) in orthopedic controls, 1.209 (0.236-2.181) in mvc controls, 2.764 (1.700-3.827) in mild tbi with gcs 15, and 5.227 (0.837-9.617) in tbi with gcs 9-14 (p < 0.001). the auc for distinguishing mild tbi from both controls was 0.83 (95%ci 0.68-0.99). mean sbdp150 levels in patients with -ct versus +ct were 2.170 (1.340-3.000) and 6.797 (2.227-11.368) respectively (p < 0.001) with auc = 0.78 (95%ci 0.61-0.95). mean sbdp150 levels in patients with -nsg versus +nsg were 2.492 (1.391-3.593) and 6.867 (3.891-9.843) respectively (p < 0.001) with auc = 0.88 (95%ci 0.77-0.98). conclusion: serum sbdp150 levels were detectable in serum acutely after injury and were associated with measures of injury severity including ct lesions and neurosurgical intervention.
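the discrimination statistics above (auc from roc curves) can be computed directly from raw biomarker values: the auc equals the mann-whitney u statistic scaled to [0, 1], i.e. the probability that a randomly chosen positive exceeds a randomly chosen negative. a sketch with hypothetical sbdp150 levels in ng/ml (the study's group means were 2.17 vs 6.80 for -ct vs +ct):

```python
import numpy as np

def auc_mann_whitney(neg, pos):
    """AUC = P(random positive > random negative); ties count half."""
    neg, pos = np.asarray(neg, float), np.asarray(pos, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical serum levels: CT-negative vs CT-positive mmTBI patients.
ct_neg = [0.8, 1.5, 2.1, 2.6, 3.0, 3.9]
ct_pos = [2.4, 5.1, 6.8, 9.2]

print(f"AUC = {auc_mann_whitney(ct_neg, ct_pos):.2f}")
```

an auc of 0.5 means no discrimination and 1.0 means perfect separation; the study's 0.78 for ct lesions sits usefully in between.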
further study is required to validate these findings before clinical application.

utility of platelet … background: pre-injury use of anti-platelet agents (e.g., clopidogrel and aspirin) is a risk factor for increased morbidity and mortality in patients with traumatic intracranial hemorrhage (tich). some investigators have recommended platelet transfusion to reverse the anti-platelet effects in tich. objectives: this evidence-based medicine review examines the evidence regarding the effect of platelet transfusion in emergency department (ed) patients with pre-injury anti-platelet use and tich on patient-oriented outcomes. methods: the medline, embase, cochrane library, and other databases were searched. studies were selected for inclusion if they compared platelet transfusion to no platelet transfusion in the treatment of adult ed patients with pre-injury anti-platelet use and tich, and reported rates of mortality, neurocognitive function, or adverse effects as outcomes. we assessed the quality of the included studies using ''grading of recommendations assessment, development and evaluation'' (grade) criteria. categorical data are presented as percentages with 95% confidence interval (ci). relative risks (rr) are reported when clinically significant. results: five retrospective, registry-based studies were identified, which enrolled 635 patients cumulatively. based on standard criteria, three studies were of ''low'' quality evidence and two were of ''very low'' quality. one study reported higher in-hospital mortality in patients with platelet transfusion (ohm et al.), another showed a lower mortality rate in patients receiving platelet transfusion (wong et al.). three studies did not show any statistical difference in comparing mortality rates between the groups (table). no studies reported intermediate- or long-term neurocognitive outcomes or adverse events.
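the relative risks this review reports follow from a 2×2 table; the 95% ci is conventionally built on the log scale using the standard error of log(rr). a sketch with entirely hypothetical counts (not the numbers from ohm et al. or wong et al.):

```python
import math

def relative_risk(a, n1, b, n2):
    """RR of event in exposed (a/n1) vs unexposed (b/n2),
    with a 95% CI from the log-RR standard error."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical: mortality 20/80 with platelet transfusion vs 15/90 without.
rr, lo, hi = relative_risk(20, 80, 15, 90)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

a ci that crosses 1 (as here) corresponds to the ''no statistical difference'' finding of three of the five included studies.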
conclusion: five retrospective registry studies with suboptimal methodologies provide inadequate evidence to support the routine use of platelet transfusion in adult ed patients with pre-injury anti-platelet use and tich.

abnormal levels of end-tidal carbon dioxide (etco2) are associated with severity of injury in mild and moderate traumatic brain injury (mmtbi) linda papa 1 , artur pawlowicz 2 , carolina braga 1 , suzanne peterson 1 , salvatore silvestri 1 1 orlando regional medical center, orlando, fl; 2 university of central florida, orlando, fl background: capnography is a fast, non-invasive technique that is easily administered and accurately measures exhaled etco2 concentration. etco2 levels respond to changes in ventilation, perfusion, and metabolic state, all of which may be altered following tbi. objectives: this study examined the relationship between etco2 levels and severity of tbi as measured by clinical indicators including glasgow coma scale (gcs) score, computerized tomography (ct) findings, requirement of neurosurgical intervention, and levels of a serum biomarker of glial damage. methods: this prospective cohort study enrolled adult patients presenting to a level i trauma center following a mmtbi defined by blunt head trauma followed by loss of consciousness, amnesia, or disorientation and a gcs 9-15. etco2 measurements were recorded from the prehospital and emergency department records and compared to indicators of tbi severity. results: of the 46 patients enrolled, 21 (46%) had a normal etco2 level and 25 (54%) had an abnormal etco2 level. the mean age of enrolled patients was 40 (range 19-70) and 32 (70%) were male. mechanisms of injury included motor vehicle collision in 19 (41%), motorcycle collision in 9 (20%), fall in 8 (17%), bicycle/pedestrian struck in 8 (17%), and other in 2 (4%). eight (17%) patients had a gcs 9-12 and 38 (83%) had a gcs 13-15.
of the 11 (24%) patients with intracranial lesions on ct, 10 (91%) had an abnormal etco2 level (p = 0.006). of the 5 (11%) patients who required a neurosurgical intervention, 100% had an abnormal etco2 level (p = 0.05). levels of a biomarker indicative of astrogliosis were significantly higher in those with abnormal etco2 compared to those with a normal etco2 (p = 0.026). conclusion: abnormal levels of etco2 were significantly associated with clinical measures of brain injury severity. further research with a larger sample of mmtbi patients will be required to better understand and validate these findings.

background: acetaminophen (apap) poisoning is the most frequent cause of acute hepatic failure in the us. toxicity requires bioactivation of apap to toxic metabolites, primarily via cyp2e1. children are less susceptible to apap toxicity; one current theory is that children's conjugative pathway (sulfonation) is more active. liquid apap preparations contain propylene glycol (pg), a common excipient that inhibits apap bioactivation and reduces hepatocellular injury in vitro and in rodents. cyp2e1 inhibition may decrease toxicity in children, who tend to ingest liquid apap preparations, and suggests a potential novel therapy. objectives: to compare phase i (toxic) and phase ii (conjugative) metabolism of liquid versus solid preparations of apap. we hypothesize that ingestion of a liquid apap preparation results in decreased production of toxic metabolites relative to a solid preparation, likely due to the presence of pg in the liquid preparations. methods: design: pharmacokinetic cross-over study. setting: university hospital clinical research center. subjects: adults ages 18-40 taking no chronic medications. interventions: subjects were randomized to receive a 15 mg/kg dose of a commercially available solid or liquid apap preparation. after a washout period of greater than 1 week, subjects received the same dose of apap in the alternate preparation.
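the crossover analysis above summarizes each metabolite's exposure as the area under its concentration-time curve over the sampling window, then compares preparations within subject with a paired test. a sketch of the trapezoidal auc and paired t-test, using entirely hypothetical concentration data (the 0.7 scaling of the liquid arm is an illustrative assumption, not the study's measured effect size):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import ttest_rel

# Sampling times (hours) and hypothetical plasma concentrations of a
# CYP2E1-derived metabolite for 5 subjects (rows) under each preparation.
t = np.array([0, 0.5, 1, 2, 4, 6, 8])
solid = np.array([
    [0, 1.8, 2.6, 2.2, 1.4, 0.8, 0.4],
    [0, 1.5, 2.9, 2.5, 1.6, 0.9, 0.5],
    [0, 2.0, 3.1, 2.4, 1.5, 0.7, 0.3],
    [0, 1.6, 2.4, 2.0, 1.2, 0.6, 0.3],
    [0, 1.9, 2.8, 2.3, 1.3, 0.8, 0.4],
])
liquid = solid * 0.7   # assume lower toxic-metabolite exposure after liquid

# AUC(0-8 h) per subject by the trapezoidal rule, then a paired t-test.
auc_solid = trapezoid(solid, t, axis=1)
auc_liquid = trapezoid(liquid, t, axis=1)
stat, p = ttest_rel(auc_solid, auc_liquid)
print(f"mean AUC solid {auc_solid.mean():.2f} "
      f"vs liquid {auc_liquid.mean():.2f}, p={p:.3g}")
```

pairing within subject is what makes the crossover design efficient: each subject serves as their own control, removing between-subject pharmacokinetic variability from the comparison.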
apap, apap-glucuronide and apap-sulfate (phase 2 metabolites), apap-cysteinate and apap-mercapturate (phase 1 metabolites) were analyzed via lc/ms in plasma over 8 hours. peak concentrations and measured auc were compared using paired-sample t-tests. plasma pg levels were measured. results: fifteen subjects completed the protocol. peak concentrations and aucs of the cyp2e1 derived toxic metabolites were significantly lower following ingestion of the liquid preparation (table, figure) . the glucuronide and sulfate metabolites were not different. pg was present following ingestion of liquid but not solid preparations. conclusion: ingestion of liquid relative to solid preparations in therapeutic doses results in decreased plasma levels of toxic apap metabolites. this may be due to inhibition of cyp2e1 by pg, and may explain the decreased susceptibility in children. a less hepatotoxic formulation of apap can potentially be developed if co-formulated with a cyp2e1 inhibitor. background: pressure immobilization bandages have been shown to delay mortality for up to 8 hours after coral snake envenomation, providing an inexpensive and effective treatment when antivenin is not readily available. however, long-term efficacy has not been established. objectives: determine if pressure immobilization bandages, consisting of an ace wrap and splint, can delay morbidity and mortality from coral snake envenomation, even in the absence of antivenin therapy. methods: institutional animal care and use committee approval was obtained. this was a randomized, observational pilot study using a porcine model. ten pigs (17.3 kg to 25.6 kg) were sedated and intubated for 5 hours. pigs were injected subcutaneously in the left distal foreleg with 10 mg of lyophilized m. fulvius venom resuspended in water, to a depth of 3 mm. 
pigs were randomly assigned to either a control group (no compression bandage and splint) or a treatment group (compression bandage and splint) approximately 1 minute after envenomation. pigs were monitored daily for 21 days for signs of respiratory depression, decreased oxygen saturations, and paresis/paralysis. in case of respiratory depression, pigs were euthanized and time to death recorded. chi-square was used to compare rates of survival up to 21 days and a kaplan-meier survival curve constructed. results: average survival time of control animals was 412 ± 90 minutes compared to 12,642 ± 7,132 minutes for treated animals. significantly more pigs in the treatment group survived to 24 hours than in the control group (p = 0.03). two of the treatment pigs survived to the endpoint of 21 days, but showed necrosis of the distal lower extremity. conclusion: long-term survival after coral snake envenomation is possible in the absence of antivenin with the use of pressure immobilization bandages. the applied pressure of the bandage is critical to allowing survival without secondary consequences (i.e. necrosis) of envenomation. future studies should be designed to accurately monitor the pressures applied. background: patients exposed to organophosphate (op) compounds demonstrate a central apnea. the kölliker-fuse nuclei (kf) are cholinergic nuclei in the brainstem involved in central respiratory control. objectives: we hypothesize that exposure of the kf is both necessary and sufficient for op-induced central apnea. methods: anesthetized and spontaneously breathing wistar rats (n = 24) were exposed to a lethal dose of dichlorvos using three experimental models. experiment 1 (n = 8) involved systemic op poisoning using subcutaneous (sq) dichlorvos (100 mg/kg or 3x ld50). experiment 2 (n = 8) involved isolated poisoning of the kf using stereotactic microinjections of dichlorvos (625 micrograms in 50 microliters) into the kf. 
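the 21-day survival comparison in the coral snake study above is summarized with a kaplan-meier curve; the product-limit estimator behind that curve can be hand-rolled in a few lines. a sketch with hypothetical survival times in minutes (21 days = 30,240 min); animals alive at the endpoint are censored:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate.
    events: 1 = death observed, 0 = censored (alive at endpoint)."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    surv, s = [], 1.0
    for i, (t, d) in enumerate(zip(times, events)):
        at_risk = n - i            # subjects still under observation
        if d:                      # survival drops only at observed deaths
            s *= (at_risk - d) / at_risk
        surv.append((t, s))
    return surv

# Hypothetical: all controls die within hours; two treated pigs
# reach the 21-day endpoint and are censored (event = 0).
control = kaplan_meier([300, 360, 420, 480, 500], [1, 1, 1, 1, 1])
treated = kaplan_meier([1500, 9000, 20000, 30240, 30240], [1, 1, 1, 0, 0])
print(control[-1][1], treated[-1][1])
```

censoring is the key idea: the two survivors contribute person-time without forcing the curve to zero, so the treated curve plateaus while the control curve falls to zero.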
experiment 3 (n = 8) involved systemic op poisoning with isolated protection of the kf using sq dichlorvos (100 mg/kg) and stereotactic microinjections of organophosphatase a (opda), an enzyme that degrades dichlorvos. respiratory and cardiovascular parameters were recorded continuously. histological verification of injection site was performed using kmno4 injections. animals were followed post-poisoning for 1 hour or death. between-group comparisons were performed using a repeated-measures anova or student's t-test where appropriate. results: animals poisoned with sq dichlorvos demonstrated respiratory depression starting 5.1 min post exposure, progressing to apnea 15.9 min post exposure. there was no difference in respiratory depression between animals with sq dichlorvos and those with dichlorvos microinjected into the kf. despite differences in amount of dichlorvos (100 mg/kg vs 1.8 mg/kg) and method of exposure (sq vs cns microinjection), 10 min following dichlorvos both groups (sq vs microinjection respectively) demonstrated a similar percent decrease in respiratory rate (51.5 vs 72.2, p = 0.14) and minute ventilation.

background: patients sustaining rattlesnake envenomation often develop thrombocytopenia, the etiology of which is not clear. laboratory studies have demonstrated that venom from several species, including the mojave rattlesnake (crotalus scutulatus scutulatus), can inhibit platelet aggregation. in humans, administration of crotaline fab antivenom (av) has been shown to result in transient improvement of platelet levels; however, it is not known whether platelet aggregation also improves after av administration. objectives: to determine the effect of c. scutulatus venom on platelet aggregation in vitro in the presence and absence of crotaline fab antivenom. methods: blood was obtained from four healthy male adult volunteers not currently using aspirin, nsaids, or other platelet-inhibiting agents. c.
scutulatus venom from a single snake with known type b (hemorrhagic) activity was obtained from the national natural toxins research center. measurement of platelet aggregation by an aggregometer was performed using five standard concentrations of epinephrine (a known platelet aggregator) on platelet-rich plasma over time, and a mean area under the curve (auc) was calculated. five different sample groups were measured: 1) blood alone; 2) blood + c. scutulatus venom (0.3 mg/ml); 3) blood + crotaline fab av (100 mg/ml); 4) blood + venom + av (100 mg/ml); 5) blood + venom + av (4 mg/ml). standard errors of the mean (sem) were calculated for each group. results: antivenom administration by itself did not significantly affect platelet aggregation compared to baseline (103.8 ± 3.4%, p = 0.47). administration of venom decreased platelet aggregation (72.0 ± 8.5%, p < 0.05). concentrated av administration in the presence of venom normalized platelet aggregation (101.4 ± 6.8%) and in the presence of diluted av significantly increased aggregation (133.9 ± 9.0%); p < 0.05 for both groups when compared to the venom-only group. to control for the effects of the venom and av, each was run independently in platelet-rich plasma without epinephrine; neither was found to significantly alter platelet aggregation. conclusion: crotaline fab av improved platelet aggregation in an in vitro model of platelet dysfunction induced by venom from c. scutulatus. the mechanism of action remains unclear but may involve inhibition of venom binding to platelets or a direct action of the antivenom on platelets.

background: routine use of both breathalyzers and hand sanitizers is common across emergency departments. the most common hand sanitizer on the market, purell, contains 62% ethyl alcohol and a lesser amount of isopropyl alcohol. previous investigations have documented that the risk to health care workers who frequently apply hand sanitizer to themselves is low.
however, it is unknown whether this alcohol mixture causes false readings on a breathalyzer machine being used to determine alcohol levels in others. objectives: to determine the effect on breathalyzer readings in individuals who have not consumed alcohol after hand sanitizer is applied to the experimenter holding the breathalyzer machine. methods: after obtaining informed consent, a breathalyzer reading was obtained in participants who had not consumed any alcohol in the last 24 hours. three different experiments were performed with 25 different participants in each. in experiment 1, two pumps of hand sanitizer were applied to the experimenter. without allowing the sanitizer to dry, the experimenter then measured the breathalyzer reading of the participant. in experiment 2, one pump of sanitizer was applied to the experimenter. measurements of the participant were taken without allowing the sanitizer to dry. in experiment 3, one pump of sanitizer was placed on the experimenter and rubbed until dry according to the manufacturer's recommendations. readings were recorded and analyzed using paired t-tests. results: the initial breathalyzer reading for all participants was 0. after two pumps of hand sanitizer were applied without drying (experiment 1), breathalyzer readings ranged from 0.02 to 0.17, with a mean of 0.11, above the legal intoxication limit (t(24) = -15.3, p < 0.001). after one pump of hand sanitizer was applied without drying (experiment 2), breathalyzer readings ranged from 0.02 to 0.11, with a mean of 0.06 (t(24) = -14.1, p < 0.001). after one pump of hand sanitizer was applied according to the manufacturer's directions (experiment 3), breathalyzer readings ranged from 0.0 to 0.02 with a mean of 0.01 (t(24) = -5.1, p < 0.001). conclusion: use of hand sanitizer according to the manufacturer's recommendations results in a small but significant increase in breathalyzer readings.
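because every participant's baseline reading was 0, the paired t-test used above reduces to a one-sample t-test of the post-application readings against zero. a sketch with synthetic readings drawn from the experiment 1 range (0.02-0.17; the study's mean was 0.11):

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)

# Hypothetical post-sanitizer readings for 25 participants whose
# baseline breathalyzer reading was 0.00.
readings = rng.uniform(0.02, 0.17, size=25)

# Paired t-test against an all-zero baseline == one-sample t-test vs 0.
stat, p = ttest_1samp(readings, 0.0)
print(f"mean {readings.mean():.3f}, t={stat:.1f}, p={p:.2g}")
```

the sign convention explains the negative t statistics reported in the abstract: subtracting the readings from the zero baseline (rather than the reverse) flips the sign without changing the p-value.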
however, improper use and overuse of common hand sanitizer elevates routine breathalyzer readings and can mimic intoxication in individuals who have not consumed alcohol.

stephanie carreiro, jared blum, francesca beaudoin, gregory jay, jason hack objectives: the primary aim of this study is to determine if pretreatment with intravenous lipid emulsion (ile) affects the hemodynamic response to epinephrine in a rat model. hemodynamic response was measured by a change in heart rate (hr) and mean arterial pressure (map). we hypothesized that ile would limit the rise in map and hr that typically follow epinephrine administration. methods: twenty male sprague dawley rats (approximately 7-8 weeks of age) were sedated with isoflurane and pretreated with a 15 ml/kg bolus of ile or normal saline, followed by a 15 mcg/kg dose of epinephrine intravenously. intra-arterial blood pressure and hr were monitored continuously until both returned to baseline (biopac). a multivariate analysis of variance (manova) was performed to assess the difference in map and hr between the two groups. standard t-tests were then used to compare the peak change in map, time to peak map, and time to return to baseline map in the two groups. results: overall, a significant difference was found between the two groups in map (p = 0.01) but not in hr (p = 0.34). there was a significant difference (p = 0.023) in time to peak map in the ile group (54 sec, 95% ci 44-64) versus the saline group (40 sec, 95% ci 32-48) and a significant difference (p = 0.004) in time to return to baseline map in the ile group (171 sec, 95% ci 148-194) versus the saline group (130 sec, 95% ci 113-147). there was no significant difference (p = 0.28) in the peak change in map of the ile group (75.4 mmhg, 95% ci 66-85) versus the saline group (69.9 mmhg, 95% ci 64-76). conclusion: our data show that in this rat model ile pretreatment leads to a significant difference in map response to epinephrine, but no difference in hr response.
ile delayed the peak effect and prolonged the duration of effect on map but did not alter the peak increase in map. this suggests that the use of ile may delay the time to peak effect of epinephrine if the drugs are administered concomitantly to the same patient. further research is needed to explore the mechanism of this interaction. rasch analysis of the agitation severity scale when used with emergency department acute psychiatry patients tania d. strout, michael r. baumann, maine medical center, portland, me background: agitation is a frequently observed and problematic phenomenon in mental health patients being treated in the emergency setting. the agitation severity scale (agss), a reliable and valid instrument, was developed using classical test theory to measure agitation in acute psychiatry patients. objectives: the aim of this study was to analyze the agss according to the rasch measurement model and use the results to determine whether improvements to the instrument could be made. methods: this prospective, observational study was irb-approved. 270 adult ed patients with psychiatric chief complaints and dsm-iv-tr diagnoses were observed using the agss. the rasch rating scale model was employed to evaluate the 17 items comprising the agss using winsteps statistical software. unidimensionality, item fit, response category performance, person and item separation reliability, and hierarchical ordering of items were all examined. a principal components analysis (pca) of the rasch residuals was also performed. results: variable maps revealed that all of the agss items were used to some degree and that the items were ordered in a way that makes clinical sense. several duplicative items, indicating the same degree of agitation, were identified. item (5.19) and person (2.01) separation statistics were adequate, indicating appropriate spread of items and subjects along the agitation continuum and providing support for the instrument's reliability.
keymaps indicated that the agss items are functioning as intended. analysis of fit demonstrated no extreme misfitting items. pca of the rasch residuals revealed a small amount of residual variance, but provided support for the agss as being unidimensional, measuring the single construct of agitation. the results of this rasch analysis support the agss as a psychometrically robust instrument for use with acute psychiatry patients in the emergency setting. several duplicative items were identified that may be eliminated and re-evaluated in future research; this would result in a shorter, more clinically useful scale. in addition, a gap in items for patients with lower levels of agitation was identified. generation of additional items intended to measure low levels of agitation could improve clinicians' ability to differentiate between these patients. background: attempted suicide is one of the strongest clinical predictors of subsequent suicide and occurs up to 20 times more frequently than completed suicide. as a result, suicide prevention has become a central focus of mental health policy. in order to improve current treatment and intervention strategies for those presenting with suicide attempt and self-injury in the emergency department (ed), it is necessary to have a better understanding of the types of patients who present to the ed with these complaints. objectives: to describe the epidemiology of ed visits for attempted suicide and self-inflicted injury over a 16-year period. methods: data were obtained from the national hospital ambulatory medical care survey (nhamcs). all visits for attempted suicide and self-inflicted injury (e950-e959) during 1993-2008 were included. trend analyses were conducted using stata's nptrend (a nonparametric test for trends that is an extension of the wilcoxon rank-sum test) and regression analyses. a two-tailed p < 0.05 was considered statistically significant.
results: over the 16-year period, there was an average of 420,000 annual ed visits for attempted suicide and self-inflicted injury (1.50 [95% confidence interval (ci) 1.33-1.67] visits per 1,000 us population). the overall mean patient age was 31 years, with visits most common among ages 15-19 (3.70; 95% ci 3.11-4.30). the average annual number of ed visits for suicide attempt and self-inflicted injury more than doubled from 244,000 in 1993-1996 to 538,000 in 2005-2008. during the same timeframe, ed visits for these injuries per 1,000 us population almost doubled for males (0.84 to 1.62), females (1.04 to 1.96), whites (0.94 to 1.82), and blacks (1.14 to 2.10). no temporal differences were found for method of injury or ed disposition; there was, however, a significant decrease in visits determined by the physician to be urgent/emergent from 95% in 1993 to 70% in 2008. conclusion: ed visit volume for attempted suicide and self-inflicted injury has increased over the past two decades in all major demographic groups. awareness of these longitudinal trends may assist efforts to increase research on suicide prevention. in addition, this information may be used to inform current suicide and self-injury related ed interventions and treatment programs. benjamin l. bregman, janice c. blanchard, alyssa levin-scherz george washington university, washington, dc background: the emergency department (ed) has increasingly become a health care access point for individuals with mental health needs. recent studies have found that rates of major depression disorder (mdd) diagnosed in eds are far above the national average. we conducted a study assessing whether individuals with frequent ed visits had higher rates of mdd than those with fewer ed visits in order to help guide screening and treatment of depressed individuals encountered in the ed. objectives: this study evaluated potential risk factors associated with mdd.
we hypothesized that patients who are frequent ed visitors will have higher rates of mdd. methods: this was a single center, prospective, cross-sectional study. we used a convenience sample of noncritically ill, english speaking adult patients presenting with non-psychiatric complaints to an urban academic ed over 6 months in 2011. we oversampled patients presenting with ≥3 visits over the previous 364 days. subjects were surveyed about their demographic and other health and health care characteristics and were screened with the phq 9, a nine-item questionnaire that is a validated, reliable predictor of mdd. we conducted bivariate (chi-square) and multivariate analysis controlling for demographic characteristics using stata v. 10.0. our principal dependent variable of interest was a positive depression screen (phq 9 score ≥10). our principal independent variable of interest was ≥3 visits over the previous 364 days. results: our response rate was 90.7% with a final sample size of 1012. of our total sample, 313 (30.9%) had three or more visits within the prior 364 days. one hundred (32%) frequent visitors had a positive phq 9 mdd screen as compared to 142 (20.3%) of subjects with fewer than three visits (p < 0.0001). in our multivariate analysis, the odds ratio for having three or more visits for subjects who had a positive depression screen was 1.42 (95% ci 1.03, 1.97). of subjects with three or more visits with a positive depression screen, only 116 (37%) were actively being treated for mdd at the time of their visit. conclusion: our study found a high prevalence of untreated depression among frequent users of the ed. eds should consider routinely screening patients who are frequent consumers for mdd. in addition, further studies should evaluate the effect of early treatment and follow up for mdd on overall utilization of ed services. access to psychiatric care among patients with depression presenting to the emergency department janice c. blanchard, benjamin l.
bregman, dana rosenfarb, qasem al jabr, eun kim george washington university, washington, dc background: literature suggests that there is a high rate of major depressive disorder (mdd) in emergency department (ed) users. however, access to outpatient mental health services is often limited due to lack of providers. as a result, many persons with mdd who are not in active treatment may be more likely to utilize the ed as compared to those who are currently undergoing outpatient treatment. objectives: our study evaluated utilization rates and demographic characteristics associated with patients with a prior diagnosis of mdd not in active treatment. we hypothesized that patients who present to the ed with untreated mdd will have more frequent ed visits. methods: this was a single center, prospective, cross-sectional study. we used a convenience sample of noncritically ill, english speaking adult patients presenting with non-psychiatric complaints to an urban academic ed over 6 months in 2011. subjects were surveyed about their demographic and other health and health care characteristics and were screened with the phq 9, a nine-item questionnaire that is a validated, reliable predictor of mdd. we conducted bivariate (chi-square) and multivariate analysis controlling for demographic characteristics using stata v. 10.0. our principal dependent variable of interest was a positive depression screen (phq 9 ≥ 10). our analysis focused on the subset of patients with a prior diagnosis of mdd with a positive screen for mdd during their ed visit. results: our response rate was 90.7% with a final sample size of 1012. 243 (24.0%) patients screened positive for mdd with a phq 9 score ≥10. of the 243 patients with a positive depression screen, 55.1% reported a prior history of treatment for mdd (n = 134). of these patients, only 57.6% were currently actively receiving treatment.
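the depression studies above report odds ratios from 2x2 tables; the published 1.42 for frequent visitors is adjusted for demographics, but a crude (unadjusted) odds ratio can be reconstructed from the counts reported for that study (100 of 313 frequent visitors vs. 142 of 699 other subjects screening positive). a minimal sketch with a wald confidence interval computed on the log scale:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """crude odds ratio (a*d)/(b*c) for a 2x2 table, with a wald 95% ci on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # se of log(or)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# 2x2 table reconstructed from the abstract's counts:
# frequent visitors (>=3 visits): 100 screened positive, 213 negative
# infrequent visitors:            142 screened positive, 557 negative
or_, lo, hi = odds_ratio_ci(100, 213, 142, 557)
print(f"crude OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

the crude estimate is larger than the reported multivariate 1.42, as expected once demographic confounders are adjusted away.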
hispanics who screened positive for depression with a history of mdd were less likely to actively be undergoing treatment as compared to non-hispanics (22.2% versus 46.9%, p = 0.041). patients with incomes less than $20,000 were more likely to actively be receiving treatment as opposed to higher incomes (76.3% versus 42.7%, p = 0.003). conclusion: patients presenting to our ed with untreated mdd are more likely to be hispanic and less likely to be low income. the emergency department may offer opportunities to provide antidepressant treatment for patients who screen positive for depression but who are not currently receiving treatment. evaluation of a two-question screening tool (phq-2) for detecting depression in emergency department patients jeffrey p. smith, benjamin bregman, janice blanchard, nasser hashim, mary pat mckay george washington university, washington, dc background: the literature suggests there is a high rate of undiagnosed depression in ed patients and that early intervention can reduce overall morbidity and health care costs. there are several well validated screening tools for depression including the nine-item patient health questionnaire (phq-9). a tool using a two-question subset, the phq-2, has been shown to be an easily administered, reasonably sensitive screening tool for depression in primary care settings. objectives: to determine the sensitivity and specificity of the phq-2 in detecting major depressive disorders (mdd) among adult ed patients presenting to an urban teaching hospital. we hypothesize that the phq-2 is a rapid, effective screening tool for depression in a general ed population. methods: cross-sectional survey of a convenience sample of 1012 adult, non-critically ill, english speaking patients with medical and not psychiatric complaints presenting to the ed between 9am and 11pm weekdays. patients were screened for mdd with the phq-9.
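the phq-2's performance against the phq-9 reference standard reduces to a 2x2 table of screen result versus reference diagnosis. a minimal sketch of the four screening metrics named in the study's objectives, using hypothetical counts (the study's actual table is not reproduced here):

```python
def screen_metrics(tp, fp, fn, tn):
    """standard 2x2 screening metrics for a test against a reference standard."""
    sens = tp / (tp + fn)  # true positives among those with mdd on the reference
    spec = tn / (tn + fp)  # true negatives among those without mdd
    ppv  = tp / (tp + fp)  # probability of mdd given a positive screen
    npv  = tn / (tn + fn)  # probability of no mdd given a negative screen
    return sens, spec, ppv, npv

# illustrative counts only, chosen to total roughly the study's sample size;
# they are not the abstract's data.
sens, spec, ppv, npv = screen_metrics(tp=180, fp=90, fn=45, tn=660)
print(f"sens {sens:.2f} spec {spec:.2f} ppv {ppv:.2f} npv {npv:.2f}")
```

note that ppv and npv depend on the prevalence of mdd in the sampled population, while sensitivity and specificity do not, which is why a screen validated in primary care needs re-evaluation in the ed.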
we used spss v19.0 to analyze the specificity, sensitivity, positive predictive value (ppv), negative predictive value (npv), and kappa of phq-2 scores of 2 and 3 (out of a possible total score of 6) compared to the validated cut-off score of 10 or higher (of a possible 27 points) on the phq-9. the two questions on the phq-2 are: "over the last two weeks, how often have you had little interest in doing things? how often have you felt down, depressed or hopeless?" responses are scored from 0-3 based on "never", "several days", "more than half", "nearly every day". results: 1012 subjects of 1116 approached agreed to participate (90.7% response rate), and 975 (96.3%) completed the phq-9. the phq-9 identified 225 (23.1%) subjects with mdd. table 1 outlines the percent of subjects who were positive and the sensitivity, specificity, positive, and negative predictive values and kappa for each cut-off on the phq-2. conclusion: the phq-2 is a sensitive and specific screening tool for mdd in the ed setting. moreover, the phq-2 is closely correlated with the phq-9, especially if a score of 3 or greater is used. given the simplicity and ease of using a two-item questionnaire and the high rates of undiagnosed depression in the ed, offering this brief, self-administered screening tool to ed patients may allow for early awareness of possible mdd and appropriate evaluation and referral. patients. however, much of this self-harm behavior is not discovered clinically and very little is known about the prevalence and predictors of current ed screening practices. attention to this issue is increasing due to the joint commission's patient safety goal 15, which focuses on identification of suicide risk in patients. objectives: to describe the prevalence and predictors of screening for self-harm and of presence of current self-harm in eds. methods: data were obtained from the nimh-funded emergency department safety assessment and follow-up evaluation (ed-safe). eight u.s.
eds reviewed charts in real time for 35-40 hours a week between 8/2010 and 11/2011. all patients presenting during enrollment shifts were characterized as to whether a self-harm screening had been performed by ed clinicians. a subset of patients with a positive screening was asked about the presence of self-harm ideation, attempts, or both by trained research staff. we used multivariable logistic regression to identify predictors of screening and of current self-harm. data were clustered by site. in each model we examined day and time of presentation, age < 65 years, sex, race, and ethnicity. results: of the 92,154 patients presenting during research shifts, 24,240 (26%) were screened for self-harm. screening rates varied among sites and ranged from 4% to 32%, with one outlier at 93%. of those screened, 2,471 (10%) had current self-harm. among those with self-harm approached by study personnel (n = 1,037), 916 (88%) had thoughts of self-harm (suicidal or non-suicidal), 806 (78%) had thoughts of suicide, 444 (43%) had self-harm behavior, and 316 (31%) had suicide attempt(s) over the preceding week. predictors of being screened were: age < 65 years, male sex, weekend presentation, and night shift presentation (table). among those screened, predictors of current self-harm were: age < 65 years, white race, and night shift presentation. conclusion: screening for self-harm is uncommon in ed settings, though practices vary dramatically by site. patients presenting at night and on weekends are more likely to be screened, as are those under age 65 and males. current self-harm is more common among those presenting on night shift, those under age 65, and whites. results: there were 1328 out-of-hospital records reviewed, and hospital discharge data were available in 1120 non-cardiac arrest patients. of the 1120 patients, 1084 (96.8%) patients survived to hospital discharge and 36 (3.2%) died during hospitalization.
the mean age of those transported was 54 years (sd 20), 612 (55%) were male, 128 (11%) were trauma-related, and 112 (10%) were admitted to the icu. average systolic blood pressure (sbp), pulse (p), respiratory rate (rr), oxygen saturation (o2 sat), and end-tidal carbon dioxide (etco2) were sbp = 141 (sd 29), p = 95 (sd 25), rr = 24 (sd 9), o2 sat = 95% (sd 8), and etco2 = 34 (sd 10). conclusion: of all the initial vital signs recorded in the out-of-hospital setting, etco2 was the most predictive of mortality. these findings suggest that pre-hospital etco2 is a useful clinical tool for determining severity of illness and appropriate triage. background: the prehospital use of continuous positive airway pressure (cpap) ventilation is a relatively new management strategy for acute cardiogenic pulmonary edema (acpe) and there is little high quality evidence on the benefits or potential dangers in this setting. objectives: the aim of this study was to determine whether patients in severe respiratory distress treated with cpap in the prehospital setting have a lower mortality than those treated with usual care. methods: randomized, controlled trial comparing usual care versus cpap (whisperflow®) in a prehospital setting, for adults experiencing severe respiratory distress, with falling respiratory efforts, due to a presumed acpe. patients were randomised to receive either usual care, including conventional medications (nitrates, furosemide, and oxygen) plus bag-valve-mask ventilation, versus conventional medications plus cpap. the primary outcome was prehospital or in-hospital mortality. secondary outcomes were need for tracheal intubation, length of hospital stay, change in vital signs, and arterial blood gas results. we calculated relative risk with 95% cis. results: fifty patients were enrolled with mean age 79.8 (sd 11.9), male 56.0%, mortality 20.0%.
the risk of death was significantly reduced in the cpap arm, with mortality 34.6% (9 deaths) in the usual care arm compared to 4.2% (1 death) in the cpap arm (rr 0.12; 95% ci 0.02 to 0.88; p = 0.04). patients who received cpap were significantly less likely to have respiratory acidosis (mean difference in ph 0.09; 95% ci 0.01 to 0.16; p = 0.02; n = 24) than patients receiving usual care. the length of hospital stay was significantly less in the patients who received cpap (mean difference 2.3 days; 95% ci −0.01 to 4.6, p = 0.05). conclusion: we found that cpap significantly reduced mortality, respiratory acidosis, and length of hospital stay for patients in severe respiratory distress caused by acpe. this study shows the use of cpap for acpe improves patient outcomes in the prehospital setting. (originally submitted as a "late-breaker.") trial reg. anzctr actrn12609000410257; funding: fisher & paykel, suppliers of the whisperflow® cpap device. background: because emergency service utilization continues to climb, validated methods to safely identify and triage low-acuity patients to either alternate care destinations or a complaint-appropriate level of ems response is of keen interest to ems systems and potentially payers. though the literature generally supports the medical priority dispatch system (mpds) as a tool to predict low-acuity patients by various standards, correlation with initial patient physiologic data and patient age is novel. objectives: to determine whether the six mpds priority determinants for protocol 26 (sick person) can be used to predict initial ems patient acuity assessment or severity of an aggregate physiologic score. our long-term goal is to determine whether mpds priority can be used to predict patient acuity and potentially send only a first responder to do an in-person assessment to confirm this acuity, while reserving als transport resources for higher acuity patients.
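the relative risk in the cpap trial above can be reproduced from the reported death counts (1 of 24 cpap patients vs. 9 of 26 usual-care patients), with the confidence interval computed on the log scale in the usual way:

```python
import math

def relative_risk_ci(a, n1, b, n2, z=1.96):
    """relative risk (a/n1)/(b/n2) with a 95% ci computed on the log scale."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # se of log(rr)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# counts from the abstract: 1/24 deaths with cpap vs. 9/26 with usual care.
rr, lo, hi = relative_risk_ci(1, 24, 9, 26)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

this reproduces the reported rr 0.12 (95% ci 0.02-0.88); the wide interval reflects the single death in the cpap arm.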
methods: calls dispatched through the wichita-sedgwick county 9-1-1 center between july 20, 2009 and october 1, 2011 using mpds protocol 26 (sick person) were linked to the ems patient care record for all patients 14 and older. the six mpds priority determinants were evaluated for correlation with initial ems acuity code, initial vital signs, rapid acute physiology score (raps), or patient age. the ems acuity code scores patients from low to severe acuity, based on initial ems assessment. results: there were 9370 calls dispatched using protocol 26 for those 14 years of age and older during the period, representing approximately 13% of all ems calls. there is a significant difference in the first encounter vital signs among different mpds priority levels. based on the logistic regression model, the mpds priority code alone had a sensitivity of 68% and specificity of 55% for identifying low-acuity patients with ems acuity score as the standard. the area under the curve (auc) for roc is 0.62 for mpds priority codes alone, while addition of age increases this value to 0.69. if we use the raps score as the standard to the mpds priority code, auc is 0.528. if we include both mpds and age in the model, the auc is 0.533. conclusion: in our system, mpds priority codes on protocol 26 (sick person) alone, or with age or raps score, are not useful either as predictors of patient acuity on ems arrival or to reconfigure system response or patient destination protocols. alternate ambulance destination program c. nee-kofi mould-millman 1 , tim mcmahan 2 , michael colman 2 , leon h. haley 1 , arthur h. yancey 1 1 emory university, atlanta, ga; 2 grady ems, atlanta, ga background: low-acuity patients calling 9-1-1 are known to utilize a large proportion of ems and ed resources. the national association of ems physicians and acep jointly support ems alternate destination programs (adps) in which low-acuity patients are allocated alternative resources non-emergently. 
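the areas under the roc curve reported for the mpds models above are equivalent to the mann-whitney statistic: the probability that a randomly chosen patient from one group scores higher than a randomly chosen patient from the other, with ties counted as half. a minimal sketch using illustrative priority codes, not the study's data:

```python
def auc_rank(pos_scores, neg_scores):
    """mann-whitney estimate of auc: p(pos > neg) + 0.5 * p(tie)."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# hypothetical dispatch priority values for higher-acuity vs. low-acuity
# patients; values are illustrative only.
high_acuity = [2, 4, 5, 3, 6]
low_acuity = [1, 1, 2, 3, 2]
print(auc_rank(high_acuity, low_acuity))
```

an auc of 0.5 means the score discriminates no better than chance, which is why the abstract's values near 0.53 (raps) and 0.62 (priority code alone) were judged not useful for triage decisions.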
analysis of one year's adp data from our ems system revealed that only 4.5% of eligible patients were transported to alternate destinations (ambulatory clinics). reasons for this low success rate need investigation. objectives: to survey emts and discover the most frequent reasons given by them for transportation of eligible patients to eds instead of to clinics. methods: this study was conducted within a large, urban, hospital-based ems system. upon conducting an adp for 12 months, a paper-based survey was created and pre-tested. all medics with any adp-eligible patient contact were included. emts were asked about personal, patient, and system related factors contributing to ed transport during the last 3 months of the adp. qualitative data were coded, collated, and descriptively reported. results: sixty-three respondents (26 emt-intermediates and 37 emt-paramedics) completed the survey, representing 79% of eligible emts. thirty-one emts (49%) responded that they did not attempt to recruit eligible patients into the adp in the last 3 program months. of those emts, 25 (81%) attributed their motive to multiple, prior, failed recruitment attempts. the 32 emts who actively recruited adp patients were asked reasons given by patients for clinic transport refusals: 19 (60%) cited that patients reported no prior experience of care at the participating clinics, and 23 (72%) reported patients had a strong preference for care in an ed. regarding system-related factors contributing to non-clinic transport, 24 of the 32 emts (75%) reported that clinic-consenting patients were denied clinic visits, mostly because of non-availability of same-day clinic appointments. conclusion: respondents indicated that poor emt enrollment of eligible patients, lack of available clinic time slots, and patient preference for ed care were among the most frequent reasons contributing to the low success rate of the adp. 
this information can be used to enhance the success of this, and potentially other adp programs, through modifications to adp operations and improved patient education. the effect of a standardized offline pain treatment protocol in the prehospital setting on pediatric pain treatment brent kaziny 1 , maija holsti 1 , nanette dudley 1 , peter taillac 1 , hsin-yi weng 1 , kathleen adelgais 2 1 university of utah, school of medicine, salt lake city, ut; 2 university of colorado, school of medicine, aurora, co background: pain is often under treated in children. barriers include need for iv access, fear of delayed transport, and possible complications. protocols to treat pain in the prehospital setting improve rates of pain treatment in adults. the utah ems for children (emsc) program developed offline pediatric protocol guidelines for ems providers, including one protocol that allows intranasal analgesia delivery to children in the prehospital setting. objectives: to compare the proportion of pediatric patients receiving analgesia for orthopedic injury by prehospital providers before and after implementation of an offline pediatric pain treatment protocol. methods: we conducted a retrospective study of patients entered into the utah prehospital on-line active reporting information system (polaris, a database of statewide ems cases) both before and after initiation of the pain protocol. patients were included if they were age 3-17 years, with a gcs of 14-15, an isolated extremity injury, and were transported by an ems agency that had adopted the protocol. pain treatment was compared for 2 years before and 18 months after protocol implementation with a wash-out period of 12 months for agency training. the difference in treatment proportions between the two groups was analyzed and 95% cis were calculated. results: during the two study periods, 1155 patients met inclusion criteria. patient demographics are outlined in the table. 
93/501 (18.6%) patients were treated for pain before compared to 174/654 (26.6%) patients treated after the pain protocol was implemented; a difference of 8.0% (95% ci: 3.2%-12.8%). patients were more likely to receive pain medication if they had a pain score documented (or: 1.16; 95% ci: 1.09-1.22) and if they were treated after the implementation of a pain protocol (or: 1.27; 95% ci: 1.00-1.62). factors not associated with the treatment of pain include age, sex, and mechanism of injury. conclusion: the creation and adoption of a statewide emsc pediatric offline protocol guideline for pain management is associated with a significant increase in use of analgesia for pediatric patients in the prehospital setting. background: evidence-based guidelines are needed to determine the appropriate use of air medical transport, as few criteria currently used predict the need for air transport to a trauma center. we previously developed a clinical decision rule (cdr) to predict mortality in injured, helicopter-transported patients. objectives: this study is a prospective validation of the cdr in a new population. methods: a prospective, observational cohort analysis of injured patients (≥16 y.o.) transported by helicopter from the scene to one of two level i trauma centers. variables analyzed included patient demographics, diagnoses, and clinical outcomes (in-hospital mortality, emergent surgery w/in 24 hrs, blood transfusion w/in 24 hrs, icu admit greater than 24 hrs, combined outcome of all). prehospital variables were prospectively obtained from air medical providers at the time of transport and included past medical history, mechanism of injury, and clinical factors. descriptive statistics compared those with and without the outcomes of interest. the previous cdr (age ≥ 45, gcs ≤ 13, sbp < 90, flail chest) was prospectively applied to the new population to determine its accuracy and discriminatory ability.
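the 8.0% treatment difference and its confidence interval in the pain-protocol abstract above follow from a standard wald interval for a difference in two proportions (93/501 treated before vs. 174/654 after):

```python
import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """difference in proportions p2 - p1 with a wald 95% ci."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# counts from the abstract: 93/501 treated before vs. 174/654 after the protocol.
diff, lo, hi = prop_diff_ci(93, 501, 174, 654)
print(f"diff {diff:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

this reproduces the reported difference of 8.0% (95% ci 3.2%-12.8%); because the interval excludes zero, the increase in treatment after protocol adoption is statistically significant.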
results: 416 patients were transported from october 2010-august 2011. the majority of patients were male (59%), white (79%), with an injury occurring in a rural location (60%). most injuries were blunt (95%) with a median iss of 9. overall mortality was 5%. the most common reasons for air transport were: mvc with high risk mechanism (17%), gcs ≤ 13 (16%), loc >5 minutes (16%), and mvc >20 mph (14%). of these, only gcs ≤ 13 was significantly associated with any of the clinical outcomes. when applying the cdr, the model had a sensitivity of 100% (81.2%-100%), a specificity of 51.2% (50.6%-51.6%), an npv of 100% (98.1%-100%), and a ppv of 9.9% (8.0%-9.9%) for mortality. the area under the curve for this model was 0.92, suggesting excellent discriminatory ability. conclusion: the air transport decision rule in this study performed with high sensitivity and acceptable specificity in this validation cohort. further external validation in other systems and with ground-transported patients is needed in order to improve decision making for the use of helicopter transport of injured patients. background: acute non-variceal upper gastrointestinal (gi) bleeding is a common indication for hospital admission. to appropriately risk-stratify such patients, endoscopy is recommended within 24 hours. given the possibility to safely manage patients as outpatients after endoscopy, risk stratification as part of an emergency department (ed) observation unit (ou) protocol is proposed. objectives: our objective was to determine the ability of an ou upper gi bleeding protocol to identify a lowrisk population, and to expeditiously obtain endoscopy and disposition patients. we also identified rates of outcomes including changes in hemoglobin, abnormal endoscopy findings, admission, and revisits.
background: acute uncomplicated pyelonephritis (pyelo) requires no imaging but a ct flank pain protocol (ctfpp) may be ordered to determine if patients with pyelo and flank pain also have an obstructing stone. the prevalence of kidney stone and the characteristics predictive of kidney stone in pyelo patients are unknown. objectives: to determine elements on presentation that predict ureteral stone, as well as prevalence of stone and interventions in patients undergoing ct for pyelo. methods: retrospective study of patients at an academic ed who received a ctfpp scan between 8/05 and 4/09. 5497 ctfpps were identified and 1899 randomly selected for review. pyelo was defined as: positive urine dip for infection and >5 wbc/hpf on formal urinalysis in addition to flank pain/cva tenderness, chills, fever, nausea, or vomiting. patients were excluded for age < 18 y.o., renal disease, pregnancy, urological anomaly, or recent trauma. clinical data (178 elements) were gathered blinded to ct findings; ct results were abstracted separately and blinded to clinical elements. ct findings of hydronephrosis and hydroureter (hydro) were used as a proxy for hydro that could be determined by ultrasound prior to ct. patients were categorized into three groups: ureteral stone, no significant findings, and intervention or follow-up required. classification and regression tree analysis was used to determine which variables could identify ureteral stone in this population of pyelo patients. results: out of the 1899 patients, 105 (7.0%) met criteria for pyelo; subjects had a mean age of 39 ± 15.9 and 82% (n = 87) were female. ct revealed 31 (29%, 95% ci = 0.22-0.39) symptomatic stones, and 72 (68%, 95% ci = 0.59-0.76) exams with no significant findings. two patients needed intervention/ follow-up (1%, 95% ci = 0.0052-0.0667), one for perinephric hemorrhage and the other for pancreatitis. hydro was predictive for ureteral stone with an or = 18.4 (95% ci = 6.4-52, p < 0.0001).
eleven (35%) ureteral stone patients were admitted and 9 (8%) of them had procedures. of these patients, 100% had ct signs of obstruction, 8 (88%) had hydronephrosis, and 1 (11%) had hydroureter. conclusion: hydronephrosis was predictive of ureteral stone and in-house procedures. prospective study is needed to determine whether ct scan is warranted in patients with pyelonephritis but without hydronephrosis or hydroureter. curative objectives: the specific aim of this analysis was to describe characteristics of patients presenting to the emergency department (ed) at their index diagnosis, and to determine whether emergency presentation precludes treatment with curative intent. methods: we performed a retrospective cohort analysis on a prospectively maintained institutional tumor registry to identify patients diagnosed with crc from 2008-2010. emrs were reviewed to identify which patients presented to the ed with acute symptoms of crc as the initial sign of their illness. the primary outcome variable was treatment plan (curative vs. palliative). secondary outcome variables included demographics, tumor type and location. descriptive statistics were conducted for major variables. chi-square and fisher's exact tests were used to detect the association between categorical variables. two-sample t-test was used to identify the association between continuous and categorical variables. results: between jan 1 2008 and dec 31 2010, 376 patients were identified at our institution with crc. 214 (57%) were male and 162 (43%) were female, with mean age 60.6; sd: 13.3. thirty-three patients (8.8%) initially presented to the ed, of whom 5 (15.5%) received palliation. of 339 patients who initially presented elsewhere, 69 (20.5%) received palliation. acute ed presentation with crc symptoms did not preclude treatment with curative intent (p = 0.47). patients who presented emergently were more likely to be female (64% vs male 41%; p = 0.01) and older (65 vs. 60; p = 0.02).
there was no statistically significant relationship between age, sex, tumor location, or type and treatment approach. conclusion: patients with crc may present to the ed with acute symptoms, which ultimately leads to the diagnosis. emergent presentation of crc does not preclude patients from receiving therapy with curative intent.

cannabinoid (or 2.93), and white blood cell (wbc) count ≥14,000/mm³ (or 11.35, 95% ci 3.42-37.72). conclusion: age ≥65 years is not associated with need for admission from an ed observation unit. older adults can successfully be cared for in these units. initial temperature, respiratory rate, and pulse were not predictive of admission, but extremely elevated blood pressure was predictive. other relevant predictor variables included comorbidities and elevated wbc count. advanced age should not be a disqualifying criterion for disposition to an ed observation unit.

older adult fallers in the emergency department. luna ragsdale, cathleen colon-emeric, duke university, durham, nc. background: approximately 1/3 of community-dwelling older adults experience a fall each year, and 2.2 million are treated in u.s. emergency departments (ed) annually. the ed offers a potential location for identification of high-risk individuals and initiation of fall-prevention services that may decrease both fall rates and resource utilization. objectives: the goal of this study was to: 1) validate an approach to identifying older adults presenting with falls to the ed using administrative data; and 2) characterize the older adult who falls and presents to the ed and determine the rate of repeat ed visits, both fall-related and all visits, after an index fall-related visit. methods: we identified all older adults presenting to either of the two hospitals serving durham county residents during a six-month period. manual chart review was completed for all encounters with icd9 codes that may be fall-related.
charts were reviewed 12 months prior and 12 months post index visit. descriptive statistics were used to describe the cohort. results: a total of 4452 older adults were evaluated in the ed during this time period; 1714 (55.7%) had an icd9 code for a potentially fall-related injury. of these, record review identified 534 (12%) with a fall from standing height or less. of the fallers, 65.9% were discharged, 31% were admitted, and 3% were admitted under observation. of those who fell, 38.2% had an ed visit within the previous year. approximately 1/3 (33.3%) of these were fall-related. over half (53.4%) of the patients who fell returned to the ed within one year of their index visit. a large proportion (44.4%) of the return visits was fall-related. follow-up with a primary care provider or specialist was recommended in 46% of the patients who were discharged. overall mortality rate for fallers over the year following the index visit was 18%. conclusion: greater than fifty percent of fallers will return to the ed after an index fall, with a large proportion of the visits related to a fall. a large number of these fallers are discharged home, with less than fifty percent having recommended follow-up. the ed represents an important location to identify high-risk older adults to prevent subsequent injuries and resource utilization.

objectives: we studied whether falls from a standing position resulted in an increased risk for intracranial or cervical injury versus falling from a seated or lying position. methods: this is a prospective observational study of patients over the age of 65 who presented with a chief complaint of fall to a tertiary care teaching facility. patients were eligible for the study if they were over age 65, were considered to be at baseline mental status, and were not triaged to the trauma bay.
at presentation, a questionnaire was filled out by the treating physician regarding mechanism and position of fall, with responses chosen from a closed list of possibilities. radiographic imaging was obtained at the discretion of the treating physician. charts of enrolled patients were subsequently reviewed to determine imaging results, repeat studies done, or recurrent visits. all patients were called in follow-up at 30 days to assess for delayed complications related to the fall. data were entered into a standardized collection sheet by trained abstractors. data were analyzed with fisher's exact test and descriptive statistics. this study was reviewed and approved by the institutional review board. results: two hundred sixty-two patients were enrolled during the study period. one hundred ninety-eight of these had fallen from standing and 64 fell from either sitting or lying positions. the mean age was 84 (sd 7.9) for those who fell from standing and 84 (sd 8.4) for those who fell from sitting or lying. there were 6 patients with injuries who fell from standing: three with subdural hematomas, one with a cerebral contusion, one with an osteophyte fracture at c6, and one with an occipital condyle fracture with a chip fracture of c1. there were 2 patients with injuries who fell from a seated or lying position: one with a traumatic subarachnoid hemorrhage and one with a type ii dens fracture. the overall rate of traumatic intracranial or cervical injury in elders who fell was 3%. no patients required surgical intervention. there was no difference in rate of injury between elders who fell from standing versus those who fell from sitting or lying (p = 1) (table).

conclusion: both instruments identify the majority of patients as high-risk, which will not be helpful in allocating scarce resources. neither the isar nor the trst can distinguish geriatric ed patients at high or low risk for 1- or 3-month adverse outcomes.
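the fall-position comparison above (6 injuries among 198 standing falls vs. 2 among 64 seated/lying falls, p = 1) used fisher's exact test. as a hedged sketch of that test, a minimal pure-python two-sided implementation under the standard fixed-margins hypergeometric model looks like this (the authors' statistical software is not stated; this is illustrative only):

```python
# Hedged sketch: two-sided Fisher's exact test for a 2x2 table,
# summing all tables no more probable than the observed one.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """2x2 table [[a, b], [c, d]]; margins fixed, hypergeometric model."""
    row1, col1, n = a + b, a + c, a + b + c + d
    def p_table(x):  # probability of a table with cell (1,1) = x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# injured/uninjured by fall position: 6/192 standing vs. 2/62 seated or lying
p = fisher_exact_two_sided(6, 192, 2, 62)
print(f"p = {p:.3f}")
```

with these counts the observed table is the most probable one under the null (expected injured standing ≈ 6.05), so every table is included in the sum and the two-sided p is 1, matching the abstract.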
these prognostic instruments are not more accurate in dementia or lower literacy subsets. future instruments will need to incorporate different domains related to short-term adverse outcomes. background: for older adults, both inpatient and outpatient care involves not only the patient and physician, but often a family member or informal caregiver. they can assist in medical decision making and in performing the patient's activities of daily living. to date, multiple outpatient studies have examined the positive roles family members play during the physician visit. however, there is very limited information on the involvement of the caregiver in the ed and their relationship with the health outcomes of the patient. objectives: to assess whether the presence of a caregiver influences the overall satisfaction, disposition, and outpatient follow-up of elderly patients. we performed a three-step inquiry of patients over 65 years old who arrived to the upenn ed. patients and care partners were initially given a questionnaire to understand basic demographic data. at the end of the ed stay, patients were given a satisfaction survey and followed through 30 days to assess time to disposition, whether the patient was admitted or discharged, outpatient follow-up, and ed revisit rates. chi-square and t-tests were used to examine the strength of differences in the elderly patients' sociodemographics, self-rated health, receiving aid with their instrumental activities of daily living, and number of health problems by accompaniment status. multivariate regression models were constructed to examine whether the presence or absence of caregivers affected satisfaction, disposition, and follow-up. results: overall satisfaction was higher among patients who had caregivers (2.4 points), among patients who felt they were respected by their physician (3.8 points), and had lower lengths of stay (2 hours). 
patients with caregivers were also more likely to be discharged home (or 2.4) and to follow-up with their regular physician (or 2.1). there was no evidence to suggest caregivers affected the overall rates of revisits back to an ed. conclusion: for older adults, medical care involves not only the patient and physician, but often a family member or an informal care companion. these results demonstrate the positive influence of caregivers on the patients they accompany, and emergency physicians should define ways to engage these caregivers during their ed stay. this will also allow caregivers to participate when needed and can help to facilitate transitions across care settings. background: shared decision making has been shown to improve patient satisfaction and clinical outcomes for chronic disease management. given the presence of individual variations in the effectiveness and side effects of commonly used analgesics in older adults, shared decision making might also improve clinical outcomes in this setting. objectives: we sought to characterize shared decision making regarding the selection of an outpatient analgesic for older ed patients with acute musculoskeletal pain and to examine associations with outcomes. methods: we conducted a prospective observational study with consecutive enrollment of patients age 65 or older discharged from the ed following evaluation for moderate or severe musculoskeletal pain. two essential components of shared decision making, 1) information provided to the patient and 2) patient participation in the decision, were assessed via patient interview at one week using four-level likert scales. results: of 233 eligible patients, 110 were reached by phone and 87 completed the survey. only 25% (21/87) of patients reported receiving 'a lot' of information about the analgesic, and only 21% (18/87) reported participating 'a lot' in the selection of the analgesic. 
there were trends towards white patients (p = 0.06) and patients with higher educational attainment (p = 0.07) reporting more participation in the decision. after adjusting for sex, race, education, and initial pain severity, patients who reported receiving 'a lot' of information were more likely to report optimal satisfaction with the analgesic than those receiving less information (78% vs. 47%, p < 0.05). after the same adjustments, patients who reported participating 'a lot' in the decision were also more likely to report optimal satisfaction with the analgesic (82% vs. 47%, p < 0.05) and greater reductions in pain scores (mean reduction in pain 4.6 vs. 2.7, p < 0.05) at one week than those who participated less. background: quality of life (qol) measurements have become increasingly important in outcomes-based research and cost-utility analyses. dementia is a prevalent, often unrecognized, geriatric syndrome that may limit the accuracy of patient self-report in a subset of patients. the relationship between caregiver and geriatric patient qol in the emergency department (ed) is not well understood. objectives: to qualify the relationship between caregiver and geriatric patient qol ratings in ed patients with and without cognitive dysfunction. methods: this was a prospective, consecutive patient, cross-sectional study over two months at one urban academic medical center. trained research assistants screened for cognitive dysfunction using the short blessed test and evaluated health impairment using the quality of life-alzheimer's disease (qol-ad) test. when available in the ed, caregivers were asked to independently complete the qol-ad. consenting subjects were non-critically ill, english-speaking, community-dwelling adults over 65 years of age. responses were compared using wilcoxon signed ranks test to assess the relationships between patient and caregiver scores from the qol-ad stratified by normal or abnormal cognitive screening results. 
significance was defined by p < 0.05. results: patient qol ratings were obtained from 108 patient-caregiver pairs. patients were 51% female, 52% african-american, with a mean age of 76-years, and 58% had abnormal cognitive screening tests. compared with caregivers, cognitively normal patients had no significant qol assessment differences except for questions of energy level and overall mood. on the other hand, cognitively impaired patients differed significantly on questions of energy level and ability to perform household chores with a trend towards significant differences for living setting (p = 0.097) and financial situation (p = 0.057). in each category, the differences reflected a caregiver underestimation of quality compared with the patient's self-rating. conclusion: discrepancies between qol domains and total scores for patients with cognitive dysfunction and their caregivers highlights the importance of identifying cognitive dysfunction in ed-based outcomes research and cost-utility analyses. further research is needed to quantify the clinical importance of the patient-and caregiver-assessed quality of life. background: age is often a predictor for increased morbidity and mortality. however, it is unclear whether old age is a predictor of adverse outcome in syncope. objectives: to determine whether old age is an independent predictor of adverse outcome in patients presenting to the emergency department following a syncopal episode. methods: a prospective observational study was conducted from june 2003 to july 2006 enrolling consecutive adult ed patients (>18 years) presenting with syncope. syncope was defined as an episode of transient loss of consciousness. 
adverse outcome or critical intervention was defined as gastrointestinal bleeding or other hemorrhage, myocardial infarction/percutaneous coronary intervention, dysrhythmia, alteration in antidysrhythmics, pacemaker/defibrillator placement, sepsis, stroke, death, pulmonary embolus, or carotid stenosis. outcomes were identified by chart review and 30-day follow-up phone calls. results: of 575 patients who met inclusion criteria, an adverse event occurred in 24% of patients. overall, 35% of patients with risk factors had adverse outcomes compared to 1.6% of patients with no risk factors. in particular, 28/127 (22%; 95% ci 16-30%) of patients <65 with risk factors had adverse outcomes, while 85/196 (43%; 95% ci 36-50%) of the elderly with risk factors had adverse outcomes. in contrast, among young people 2/196 (1%; 95% ci 0.04-3.8%) of patients without risk factors had adverse outcomes, while 2/56 (3.6%; 95% ci 0.28-13%) of patients ≥65 without risk factors had adverse outcomes. conclusion: although the elderly are at greater risk for adverse outcomes in syncope, age ≥65 alone does not appear to be a predictor of adverse outcome following a syncopal event. based on these data, it should be safe to discharge home from the ed patients with syncope but without risk factors, regardless of age. (originally submitted as a ''late-breaker.'')

antibiotics

background: adherence to national guidelines for hiv and syphilis screening in eds is not routine. in our ed, hiv and syphilis screening rates among patients tested for gonorrhea and chlamydia (gc/ct) have been reported to be 45% and 30%, respectively. objectives: to determine the effect of a sexually transmitted infection (sti) laboratory order set on hiv and syphilis screening among ed patients tested for gc/ct. we hypothesized that a sti order set would increase screening rates by at least 30%.
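proportions with confidence intervals of the kind reported in the syncope abstract above (e.g., 28/127, 22%; 95% ci 16-30%) can be sketched with a wilson score interval. the abstract does not state which interval the authors used, so this is an illustrative assumption, not their method:

```python
# Hedged sketch: Wilson score 95% confidence interval for a binomial proportion.
import math

def wilson_ci(k, n, z=1.96):
    """k successes out of n trials; returns (lower, upper) bound."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(28, 127)
print(f"28/127: {28/127:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```

for 28/127 the wilson interval happens to round to 16%-30%, consistent with the abstract; for very small counts (such as 2/56) an exact clopper-pearson interval would likely match the published bounds more closely.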
methods: a 6-month, quasi-experimental study in an urban ed comparing hiv and syphilis screening rates of gc/ct-tested patients before (control phase) and after the implementation of a sti laboratory order set (intervention phase). the order set linked blood-based rapid hiv and syphilis screening with gc/ct testing. consecutive patients completing gc/ct testing were included. the primary outcome was the absolute difference in hiv and syphilis screening rates among gc/ct-tested patients between phases. we estimated that 550 subjects per phase were needed to provide 90% power (p-value of ≤0.05) to detect an absolute difference in screening rates of 10%, assuming a baseline hiv screening rate of 45%. results: the ed census was 42,461. characteristics of patients tested for gc/ct were similar between phases: the mean age was 33 years (sd = 12) and most were female (65%), black (49%), hispanic (30%), and unmarried (84%).

services have recommended the use of immunization programs against influenza disease within hospitals since the 1980s. the emergency department (ed), being the ''safety net'' for most non-insured people, is an ideal setting to intervene and provide primary prevention from influenza. objectives: the purpose of this study is to assess whether a pharmacist-based influenza immunization program is feasible in the ed and successful in increasing the percentage of adult patients receiving the influenza vaccine. methods: implementation of a pharmacist-based immunization program was developed in coordination with ed physicians and nursing staff in 2010. the nursing staff, using an embedded electronic questionnaire within their triage activity, screened patients for eligibility for the influenza vaccine. the pharmacist, using an electronic alert system within the electronic medical record, identified patients who were deemed eligible and, if they agreed, vaccinated them.
patients who refused to be vaccinated were surveyed to ascertain their perception concerning immunization offered by a pharmacist in the ed. feasibility and safety data for vaccinating patients in the ed were recorded. results: 149 patients were approached and enrolled into the study. of the 149, 41% agreed to receive the influenza vaccine from a pharmacist in the ed. the median screening time was 5 minutes and the median vaccination time was 3 minutes, for a total of 8 minutes from screening to vaccination. 74% were willing to receive the influenza vaccine from a pharmacist, and 78% were willing to receive the vaccine in the ed. the main reason given for refusing the influenza vaccine was ''patient does not feel at risk of getting the disease''; only 14.6% stated they were vaccinated recently. conclusion: a pharmacist-based influenza immunization program is feasible in the ed and has the potential to successfully increase the percentage of adult patients receiving the vaccine.

1.4 ± 0.1, p < 0.05). ed visits by hiv-infected patients also had longer lengths of ed stay (317 ± 26.0 minutes vs. 222.5 ± 5.6 minutes, p < 0.05) and were more likely to result in admission (29% vs. 15%, p < 0.05) than visits by their non-hiv-infected counterparts. conclusion: although ed visits by hiv-infected individuals in the u.s. are relatively infrequent, they occur at rates higher than the general population and consume significantly more ed resources than the general population.

background: the influence of wound age on the risk of infection in simple lacerations repaired in the emergency department (ed) has not been well studied. it has traditionally been taught that there is a ''golden period'' beyond which lacerations are at higher risk of infection and therefore should not be closed primarily. the proposed cutoff for this golden period has been highly variable (3-24 hours in surgical textbooks).
objectives: to answer the following research question: are wounds closed via primary repair after the golden period at increased risk for infection? methods: we searched medline, embase, and other databases as well as bibliographies of relevant articles. we included studies that enrolled ed patients with lacerations repaired by primary closure. exclusion: 1. intentional delayed primary repair or secondary closure, 2. wounds requiring intra-operative repair, skin graft, drains, or extensive debridement, and 3. grossly contaminated or infected at presentation. we compared the outcome of wound infection in two groups of early versus delayed presentations (based on the cut-offs selected by the original articles). we used ''grading of recommendations assessment, development and evaluation'' (grade) criteria to assess the quality of the included trials. frequencies are presented as percentages with 95% confidence intervals. relative risk (rr) of infection is reported when clinically significant. results: 418 studies were identified. four trials enrolling 3724 patients in aggregate met our inclusion/exclusion criteria. two studies used a 6-hour cut-off and the other two used a 12-hour cut-off for defining delayed wounds. the overall quality of evidence was low. the infection rate in the wounds that presented with delay ranged from 1.4% to 32%. one study with the smallest sample size (morgan et al), which only enrolled lacerations to the hand and forearm, showed higher rates of infection in patients with delayed wounds (table). the infection rates in delayed wound groups in the remaining three studies were not significantly different from the early wounds. conclusion: the evidence does not support the existence of a golden period, nor does it support the role of wound age on infection rate in simple lacerations. 
background: although clinical studies in children have shown that temperature elevation is an independent and significant predictor of bacteremia in children, the relationship in adults is largely unknown or equivocal. objectives: review the incidence of positive blood cultures in critically ill adult septic patients presenting to an emergency department (ed) and determine the association of initial temperature with bacteremia. methods: july 2008 to july 2010 retrospective chart review on all patients admitted from the ed to an urban community hospital with sepsis and subsequently expiring within 4 days of admission. fever was defined as a temperature ≥38°c. sirs criteria were defined as: 1) temperature ≥38°c or ≤36°c, 2) heart rate ≥90 beats/minute, 3) respiratory rate ≥20 or mechanical ventilation, 4) wbc ≥12,000/mm³ or <4,000 or bands ≥10%.

objectives: we examined the utility of limited genetic sequencing of bacterial isolates using multilocus sequence typing (mlst) to discriminate between known pathogenic blood culture isolates of s. epidermidis and isolates recovered from skin. methods: ten blood culture isolates from patients meeting the centers for disease control and prevention (cdc) criteria for clinically significant s. epidermidis bacteremia and ten isolates from the skin of healthy volunteers were studied. mlst was performed by sequencing 400 bp regions of seven genes (arc, aroe, gtr, muts, pyr, tpia, and yqil). genetic variability at these sites was compared to an international database (www.sepidermidis.mlst.net) and each strain was then categorized into a genotype on the basis of known genetic variation. the ability of the gene sequences to correctly classify strains was quantified using the support vector machine function in the statistical package r. 1,000 bootstrap resamples were performed to generate confidence bounds around the accuracy estimates.
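the bootstrap step just described, resampling to put confidence bounds around a classification accuracy, can be sketched as follows. the mlst study used a support vector machine in r; this minimal python version substitutes synthetic labels and predictions purely to illustrate the percentile-bootstrap idea:

```python
# Hedged sketch: percentile-bootstrap confidence bounds on classification
# accuracy (1,000 resamples, as in the MLST abstract). Data are synthetic.
import random

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=1000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    n = len(y_true)
    accs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]      # resample with replacement
        accs.append(sum(y_true[i] == y_pred[i] for i in idx) / n)
    accs.sort()
    return accs[int(n_boot * alpha / 2)], accs[int(n_boot * (1 - alpha / 2)) - 1]

# synthetic example: 20 strains, 18 classified correctly (90% accuracy)
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 9 + [0] + [0] * 9 + [1]
print(bootstrap_accuracy_ci(y_true, y_pred))
```

with only 20 samples the bootstrap interval is wide, which mirrors why the abstract's accuracy estimate carries such broad bounds (iqr 85-95%).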
results: between-strain variability was considerable, with yqil being most variable (6 alleles) and tpia being least (1 allele). the muts gene, responsible for dna repair in s. epidermidis, showed almost complete separation between pathogenic and commensal strains. when the seven genes were used in a joint model, they correctly predicted bacterial strain type with 90% accuracy (iqr 85-95%). conclusion: multilocus sequence typing shows excellent early promise as a means of distinguishing contaminant versus truly pathogenic isolates of s. epidermidis from clinical samples. near-term future goals will involve developing more rapid means of sequencing and enrolling a larger cohort to verify assay performance.

conference are presented by influenza scenario in table 1 and

background: antiviral medications are recommended for patients with influenza who are hospitalized or at high risk for complications. however, timely diagnosis of influenza in the ed remains challenging. influenza rapid antigen tests have short turn-around times, making them potentially useful in the ed setting, but their sensitivities may be too low to assist with treatment decisions. objectives: to evaluate the test characteristics of the binaxnow influenza a&b rapid antigen test (rat) in ed patients. methods: we prospectively enrolled a systematic sample of patients of all ages presenting to two eds with acute respiratory symptoms or fever during three consecutive influenza seasons (2008-2011). research personnel collected nasal and throat swabs, which were combined and tested for influenza with rt-pcr using cdc-provided primers and probes. ed clinicians independently decided whether to obtain a rat during clinical care. rats were performed in the clinical laboratory using the binaxnow influenza a&b test on nasal swabs collected by ed staff. the study cohort included subjects who underwent both a research pcr and clinical rat.
rat test characteristics were evaluated using pcr as the criterion standard, with stratified sub-analyses for age group and influenza subtype (pandemic h1n1 (ph1n1), non-pandemic influenza a, influenza b). results: 561 subjects were enrolled; 131 subjects were pcr positive for influenza (76 ph1n1, 20 non-pandemic influenza a, and 35 influenza b). for all age groups, rat sensitivities were low and specificities were high.

hiv infection with cd4 < 200; and among nursing home residents, inability to independently perform activities of daily living. sources for bacterial cultures included blood, sputum (adults only), bronchoalveolar lavage (bal), tracheal aspirate, and pleural fluid. only sputum specimens with a bartlett score ≥1+ were considered adequate for culturing. results: among 461 children enrolled, 7 (2%) had s. aureus cultured from ≥1 specimen, including 5 with methicillin-resistant s. aureus (mrsa) and 2 with methicillin-susceptible s. aureus (mssa). specimens positive for s. aureus included 3 pleural fluid, 2 blood, 2 tracheal aspirates, and 1 bal. two children with s. aureus had evidence of co-infection: 1 influenza a, and 1 streptococcus pneumoniae. among 673 adults enrolled, 17 (3%) grew s. aureus from ≥1 specimen, including 9 with mrsa and 8 with mssa. specimens positive for s. aureus included 5 blood, 11 sputum, and 3 bal. five adults with s. aureus had evidence of co-infections: 2 coronavirus, 1 respiratory syncytial virus, 1 s. pneumoniae, and 1 pseudomonas aeruginosa. presenting clinical characteristics and outcomes of subjects with staphylococcal cap are summarized in tables 1-2. conclusion: these preliminary findings suggest s. aureus is an uncommon cause of cap. although the small number of staphylococcal cases limits the conclusions that can be drawn, in our analysis staphylococcal cap appears to be associated with co-infections, pleural effusions, and severe disease.
future work will focus on continued enrollment and developing clinical prediction models to aid in diagnosing staphylococcal cap in the ed.

background: emergency care has been a neglected public health challenge in sub-saharan africa. the goal of global emergency care collaborative (gecc) is to develop a sustainable model for emergency care delivery in low-resource settings. gecc is developing a training program for emergency care practitioners (ecps). objectives: to analyze the first 500 patient visits at karoli lwanga ''nyakibale'' hospital ed in rural uganda to determine the knowledge and skills needed in training ecps. methods: a descriptive cross-sectional analysis of the first 500 consecutive patient visits in the ed's patient care log was reviewed by an unblinded abstractor. data on demographics, procedures, laboratory testing, bedside ultrasounds (us) performed, radiographs (xrs) ordered, and diagnoses were collated. all authors discussed uncertainties and formed a consensus. descriptive statistics were performed. results: of the first 500 patient visits, procedures were performed in 367 (73.4%) patients, including 244 (48.8%) who had ivs placed, 47 (9.4%) who received wound care, and 42 (8.4%) who received sutures. complex procedures, such as procedural sedations, lumbar punctures, orthopedic reductions, nerve blocks, and tube thoracostomies, occurred in 49 (9.8%) patients. laboratory testing, xrs, and uss were performed in 188 (37.6%), 99 (19.8%), and 45 (9%) patients, respectively. infectious diseases were diagnosed in 217 (43.4%) patients; 78 (15.6%) with malaria and 57 (11.4%) with pneumonia. traumatic injuries were present in 140 (28%) patients; 77 (15.4%) needing wound care and 31 (6.2%) with fractures. gastrointestinal and neurological diagnoses affected 58 (11.6%) and 27 (5.4%) patients, respectively.
conclusion: ecps providing emergency care in sub-saharan africa will be required to treat a wide variety of patient complaints and effectively use laboratory testing, xrs, and uss. this demands training in a broad range of clinical, diagnostic, and procedural skills, specifically in infectious disease and trauma, the two most prevalent conditions seen in this rural sub-saharan africa ed.

assessment of point-of-care ultrasound in tanzania

background: current chinese ems is faced with many challenges due to a lack of systematic planning, national standards in training, and standardized protocols for prehospital patient evaluation and management. objectives: to estimate the frequency with which prehospital care providers perform critical actions for selected chief complaints in a county-level ems system in hunan province, china. methods: in collaboration with xiangya hospital (xyh), central south university in hunan, china, we collected data pertaining to prehospital evaluation of patients on ems dispatches from a ''1-2-0'' call center over a 2-month period. this call center services an area of just under 5000 km² with a total population of 1.36 million. each ems team consists of a driver, a nurse, and a physician. this was a cross-sectional study where a single trained observer accompanied ems teams on transports of patients with a chief complaint of chest pain, dyspnea, trauma, or altered mental status. in this convenience sample, data were collected daily between 8 am and 6 pm. critical actions were pre-determined by a panel of emergency medicine faculty from xyh and the university of maryland school of medicine. simple statistical analysis was performed to determine the frequency of critical actions performed by ems providers. results: during the study period, 1170 patients were transported, 452 of whom met the inclusion criteria. 218 (48.2%) evaluations were observed directly for critical actions.
the table shows the frequency of critical actions performed by chief complaint. none of the patients with chest pain received an ecg, even though the equipment was available. rapid glucose was checked in only 2.1% of patients presenting with altered mental status. a lung exam was performed in 22.7% of patients with dyspnea, and the respiratory rate was measured in 9.1%. among patients transported for trauma, blood pressure and heart rate were measured in only 1% and 4.1%, respectively. conclusion: in this observational study of prehospital patient assessments in a county-level ems system, critical actions were performed infrequently for the chief complaints of interest. performance frequencies for critical actions ranged from 0 to 22.7%, depending on the chief complaint. standardized prehospital patient care protocols should be established in china, and further training is needed to optimize patient assessment.

trends

little is known about the comparative effectiveness of noninvasive ventilation (niv) versus invasive mechanical ventilation (imv) in chronic obstructive pulmonary disease (copd) patients with acute respiratory failure. objectives: to characterize the use of niv and imv in copd patients presenting to the emergency department (ed) with acute respiratory failure and to compare the effectiveness of niv vs. imv. methods: we analyzed the 2006-2008 nationwide emergency department sample (neds), the largest all-payer us ed and inpatient database. ed visits for copd with acute respiratory failure were identified with a combination of copd exacerbation and respiratory failure icd-9-cm codes. patients were divided into three treatment groups: niv use, imv use, and combined use of niv and imv. the outcome measures were inpatient mortality, hospital length of stay (los), hospital charges, and complications. propensity score analysis was performed using 42 patient and hospital characteristics and selected interaction terms.
results: there were an estimated 101,000 visits annually for copd exacerbation and respiratory failure from approximately 4,700 eds. ninety-six percent were admitted to the hospital. of these, niv use increased slightly from 14% in 2006 to 16% in 2008 (p = 0.049), while imv use decreased from 28% in 2006 to 19% in 2008 (p < 0.001); the combined use remained stable (4%). inpatient mortality decreased from 10% in 2006 to 7% in 2008 (p < 0.001). niv use varied widely between hospitals, ranging from 0% to 100% with a median of 11%. in a propensity score analysis, niv use (compared to imv) significantly reduced inpatient mortality (risk ratio 0.57; 95% confidence interval [ci] 0.48-0.56), shortened hospital los (difference -3 days; 95% ci -4 to -3), and reduced hospital charges 044; 855). niv use was associated with a lower rate of iatrogenic pneumothorax compared with imv use (0.04% vs. 0.6%, p < 0.001). an instrumental variable analysis confirmed the benefits of niv use, with a 5% reduction in inpatient mortality in the niv-preferring hospitals. conclusion: niv use is increasing in us hospitals for copd with acute respiratory failure; however, its adoption remains low and varies widely between hospitals. niv appears to be more effective and safer than imv in the real-world setting.

background: dyspnea is a common ed complaint with a broad differential diagnosis and disease-specific treatment. bronchospasm alters capnographic waveforms, but the effect of other causes of dyspnea on waveform morphology is unclear. objectives: we evaluated the utility of capnographic waveforms in distinguishing dyspnea caused by reactive airway disease (rad) from non-rad in adult ed patients. methods: this was a prospective, observational, pilot study of a convenience sample of adult patients presenting to the ed with dyspnea. waveforms, demographics, past medical history, and visit data were collected. waveforms were independently interpreted by two blinded reviewers.
when the interpreters disagreed, the waveform was re-reviewed by both reviewers and an agreement was reached. treating physician diagnosis was considered the criterion standard. descriptive statistics were used to characterize the study population. diagnostic test characteristics and inter-rater reliability are given. results: fifty subjects were enrolled. median age was 52 years (range 21-82), 50% were female, 34% were caucasian. 29/50 (58%) had a history of asthma or chronic obstructive pulmonary disease. rad was diagnosed by the treating physician in 19/50 (38%) and 32/50 (64%) had received treatment for dyspnea prior to waveform acquisition. the interpreters agreed on waveform analysis in 47/50 (94%) cases (kappa = 0.88). test characteristics for the presence of acute rad, with 95% cis, were: overall accuracy 70% (55.2%-81.7%), sensitivity 69% (43.5%-86.4%), specificity 71% (51.8%-85.1%), positive predictive value 59% (36.7%-78.5%), negative predictive value 79% (58.5%-91.0%), positive likelihood ratio 2.25 (1.36-3.72), negative likelihood ratio 0.42 (0.23-0.74). conclusion: inter-rater agreement is high for capnographic waveform interpretation, and waveform analysis shows promise for helping to distinguish between dyspnea caused by rad and dyspnea from other causes in the ed. treatments received prior to waveform acquisition may affect agreement between waveform interpretation and physician diagnosis, affecting the observed test characteristics.

asthma

background: asthma and chronic obstructive pulmonary disease (copd) patients who present to the emergency department (ed) usually lack adequate ambulatory disease control. while evidence-based care in the ed is now well defined, there is limited information regarding the pharmacologic or non-pharmacologic needs of these patients at discharge. objectives: this study evaluated patients' needs with regard to the ambulatory management of their respiratory conditions after ed treatment and discharge.
methods: over 6 months, 94 adult patients with acute asthma or copd, presenting to a tertiary care alberta hospital ed and discharged after being treated for exacerbations, were enrolled. using results from standardized in-person questionnaires, charts were reviewed by respiratory researchers to identify care gaps. results: overall, 58 asthmatic and 36 copd patients were enrolled. more patients with asthma required education on spacer devices (52% vs 31%). few asthma (9%) and no copd patients had written action plans; asthma patients were more likely to need adherence counseling (53% vs 36%) for preventer medications. more patients with asthma required influenza vaccination (72% vs 39%; p = 0.003); pneumococcal immunization was low (36%) in copd patients. only 22% of asthmatics reported ever being referred to an asthma education program and 19% of the copd patients reported ever being referred to pulmonary rehabilitation. at ed presentation, 28% of the asthmatics required the addition of inhaled corticosteroids (ics) and 16% required the addition of ics/long-acting beta-agonist (ics/laba) combination agents. on the other hand, 36% of copd patients required the addition of long-acting anticholinergics while most (83%) were receiving preventer medications. finally, 31% of copd and 29% of asthma patients who smoked required smoking cessation counseling. conclusion: overall, we identified various care gaps for patients presenting to the ed with asthma and copd. there is an urgent need for high-quality research on interventions to reduce these gaps.

methods: this is an interim, sub-analysis of an interventional, double-blinded study performed in an academic urban-based adult ed. subjects with acute exacerbation of asthma with fev1 < 50% predicted within 30 minutes following initiation of "standard care" (including a minimum of 5 mg nebulized albuterol, 0.5 mg nebulized ipratropium, and 50 mg corticosteroid) who consented to be in a trial were included.
all treatment was administered by emergency physicians unaware of the study objectives. patients were randomly assigned to treatment with placebo or an intravenous beta agonist. all subjects had fev1 and ds obtained at baseline, 1, 2, and 3 hours after treatment. fev1 was measured using a bedside nspire spirometer, and ds was calculated using a modified borg dyspnea score. results: thirty-eight patients were included for analysis. spearman's rho test (rho) was used to measure correlations between fev1 and ds at 1, 2, and 3 hours post study entry and subsequent hospitalization. rho is negative for fev1 (higher fev1 correlates to lower rate of hospitalization) and positive for ds (higher ds correlates to higher rate of hospitalization). at each time point, ds was more highly correlated with hospitalization than was fev1 (see table). conclusion: dyspnea scores at 1, 2, and 3 hours were significantly correlated with hospital admission, whereas fev1 was not. in this set of subjects with moderate to severe asthma exacerbations, a standardized subjective tool was superior to fev1 for predicting subsequent hospitalization.

methods: this is an interim, subgroup analysis of a prospective, interventional, double-blind study performed in an academic urban ed. subjects who were consented for this trial presented with acute asthma exacerbations with fev1 ≤ 50% predicted within 30 minutes following initiation of "standard care" (includes a minimum of 2.5 mg nebulized albuterol, 0.5 mg nebulized ipratropium, and 50 mg of a corticosteroid). ed physicians who were unaware of the study objectives administered all treatments. subjects were randomized in a 1:1 ratio to either placebo or investigational intravenous beta agonist arms. blood was obtained at 1 and 1.25 hours after the start of the hour-long infusion. blood was centrifuged and serum stored at -80°c, and then shipped on dry ice for albuterol and lactate measurements at a central lab.
the treatment lactate and d lactate were correlated with 1 hr serum albuterol concentrations and hospital admission using partial pearson correlations to adjust for ds. results: 38 subjects were enrolled to date, 20 with complete data. the mean baseline serum lactate level was 18.1 mg/dl (sd ± 8.6). this increased to 32.7 mg/dl (sd ± 15.0) at 1.25 hrs. the mean 1 hr ds was 3.85 (sd ± 2.0). the correlations between treatment lactate, d lactate, 1 hr serum albuterol concentrations (r-, s-, and total) and admission to hospital are shown (see table). both treatment and d lactate were highly correlated with total serum albuterol, r-albuterol, and s-albuterol. there was no correlation between treatment lactate or d lactate and hospital admission. conclusion: lactate and d lactate concentrations correlate with albuterol concentrations in patients presenting with asthma.

fifty-one percent were <21 years old and 54% were female. we found a decline of 27% (95% ci: 23%-30%, p < 0.0001; r² = 0.73, p < 0.0001) in the overall yearly asthma visits to total ed visits from 1996 to 2010. when we analyzed sex and age groups separately, we found no statistically significant changes for females or for males <21 years old (r² ≤ 0.016, p ≥ 0.65). for females and males >21 years old, yearly asthma visits to total ed visits from 1996 to 2010 decreased 39% (95% ci: 33%-43%, p < 0.0001; r² = 0.90, p < 0.0001) and 20% (95% ci: 14%-26%, p < 0.0001; r² = 0.80, p < 0.0001), respectively. conclusion: we found an overall decrease in yearly asthma visits to total ed visits from 1996 to 2010. we speculate that this decrease is due to greater corticosteroid use despite the increasing prevalence of asthma. it is unclear why this decrease was seen in adults and not in children and why it was greater for adult females than males.
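rank correlations like the spearman's rho used in the fev1/dyspnea-score analysis above can be computed without external libraries: rank both variables (averaging ranks over ties), then take the pearson correlation of the ranks. a minimal python sketch; the toy data are invented, not the study's values:

```python
# minimal spearman rank correlation, pure python, with average ranks for ties.
def _ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j to cover the whole group of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """pearson correlation applied to the ranks of x and y."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# toy data (assumed): higher dyspnea score tends to go with admission, so rho > 0
ds = [2, 5, 3, 8, 7, 1, 6]
admitted = [0, 1, 0, 1, 1, 0, 1]
print(round(spearman_rho(ds, admitted), 3))
```

correlating a continuous score against a 0/1 outcome, as in the abstract, is handled naturally here because the tied 0s and 1s each receive their group's average rank.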
objectives: our objectives were to describe the use of a unique data collection system that leveraged emr technology and to compare its data entry error rate to traditional paper data collection. methods: this is a retrospective review of data collection methods during the first 12 months of a multicenter study of ed, anticoagulated, head injury patients. on-shift ed physicians at five centers enrolled eligible patients and prospectively completed a data form. enrolling ed physicians had the option of completing a one-page paper data form or an electronic "dotphrase" (dp) data form. our hospital system uses an epic-based emr. a feature of this system is the ability to use dps to assist in medical information entry. a dp is a preset template that may be inserted into the emr when the physician types a period followed by a code phrase (in this case ".ichstudy"). once the study dp was inserted at the bottom of the electronic ed note, it prompted enrolling physicians to answer study questions. investigators then extracted data directly from the emr. our primary outcomes of interest were the prevalence of dp data form use and rates of data entry errors. results: from 7/2009 through 8/2010, 883 patients were enrolled. dp data forms were used in 288 (32.6%; 95% ci 29.5, 35.7%) cases and paper data forms in 595 (67.4%; 95% ci 64.3, 70.5%). the prevalence of dp data form use at the respective study centers was 11%, 16%, 18%, 31%, and 85%. sixty-six (43.7%; 95% ci 35.8, 51.6%) of 151 physicians enrolling patients used dp data entry at least once. using multivariate analysis, we found no significant association between physician age, sex, or tenure and dp use. data entry errors were more likely on paper forms (234/595, 39.3%; 95% ci 35.4, 43.3%) than dp data forms (19/288, 6.6%; 95% ci 3.7, 9.5%), difference in error rates 32.7% (95% ci 27.9, 37.6%, p < 0.001). conclusion: dp data collection is a feasible means of study data collection.
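the reported difference in error rates (32.7%; 95% ci 27.9-37.6%) can be reproduced from the raw counts (234/595 paper vs. 19/288 dotphrase) with a standard wald interval for a difference in proportions, which appears to be the form the abstract reports. a minimal python sketch:

```python
import math

# wald 95% ci for a difference in two proportions, applied to the
# source-reported data-entry error counts: 234/595 paper vs 19/288 dotphrase.
def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    # standard error of the difference under independent binomial sampling
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

diff, lo, hi = prop_diff_ci(234, 595, 19, 288)
print(f"difference {diff:.1%} (95% ci {lo:.1%} to {hi:.1%})")
```

running this yields a difference of 32.7% with a 95% ci of roughly 27.9% to 37.6%, matching the abstract's figures.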
dp data forms maintain all study data within the secure emr environment, obviating the need to maintain and collect paper data forms. this innovation was embraced by many of our emergency physicians. we found lower data entry error rates with dp data forms compared to paper forms.

background: inadequate randomization, allocation concealment, and blinding can inflate effect sizes in both human and animal studies. these methodological limitations might in part explain some of the discrepancy between promising results in animal models and non-significant results in human trials. whereas blinding is not always possible in clinical or animal studies, true randomization with allocation concealment is always possible and may be as important in minimizing bias. objectives: to determine the frequency with which published emergency medicine (em) animal research studies report randomization, specific randomization methods, allocation concealment, and blinding of interventions and measurements, and to estimate whether these have changed over time. methods: all em animal research publications from 1/2000 through 12/2009 in ann emerg med and acad emerg med were reviewed by two trained investigators for a statement regarding randomization, and specific descriptions of randomization methods, allocation concealment, blinding of intervention, and blinding of measurements, when possible. raw initial agreement was calculated and differences were settled by consensus. the first (period 1 = 2000-2004) and second (period 2 = 2005-2009) 5-year periods were compared with 95% confidence intervals. results: of 117 em animal research studies, 109 were appropriate for review because they involved intervention in at least two groups. blinding of interventions and measurements were not considered possible in 37% and 3%, respectively. significant differences between period 1 and 2 were absent, although there was a trend towards less blinding of interventions and more blinding of measurements.
raw agreement was 91%. conclusion: although randomization is mentioned in the majority of studies, allocation concealment and blinding remain underutilized in em animal research. we did not compare outcomes between blinded and non-blinded, randomized and non-randomized studies, because of small sample size. this review fails to demonstrate significant improvement over time in these methodological limitations in em animal research publications. journals might consider requiring authors to explicitly describe their randomization, allocation, and blinding methods. background: cluster randomized trials (crts) are increasingly utilized to evaluate quality improvement interventions aimed at health care providers. in trials testing ed interventions, migration of eps between hospitals is an important concern, as contamination may affect both internal and external validity. objectives: we hypothesized geographically isolating emergency departments would prevent migratory contamination in a crt designed to increase ed delivery of tpa in stroke (the instinct trial). methods: instinct was a prospective, cluster-randomized, controlled trial. twenty-four michigan community hospitals were randomly selected in matched pairs for study. following selection of a single hospital, all hospitals within 15 miles were excluded from the sample pool. individual emergency physicians staffing each site were identified at baseline (2007) and 18 months later. contamination was defined at the cluster level, with substantial contamination defined a priori as >10% of eps affected. non-adherence, total crossover (contamination + non-adherence), migration distance and characteristics were determined. results: 307 emergency physicians were identified at all sites. overall, 7 (2.3%) changed study sites. one moved between control sites, leaving 6 (2.0%) total crossovers. 
of these, 2 (0.7%) moved from intervention to control (contamination) and 4 (1.3%) moved from control to intervention (non-adherence). contamination was observed in 2 of 24 sites, with 17% and 9% contamination of the total site ep workforce at follow-up, respectively. two of 6 crossovers occurred between hospitals within the same health system. average migration distance was 42 miles for all eps in the study and 35 miles for eps moving from intervention to control sites. conclusion: the mobile nature of emergency physicians should be considered in the design of quality improvement crts. use of a 15-mile exclusion zone in hospital selection for this crt was associated with very low levels of substantial cluster contamination (1 of 24) and total crossover. assignment of hospitals from a single health system to a single study group and/or an exclusion zone of 45 miles would have further reduced crossovers. increased reporting of contamination in cluster randomized controlled trials is encouraged to clarify thresholds and facilitate crt design. objectives: an extension of the lr, the average absolute likelihood ratio (aalr), was developed to assess the average change in the odds of disease that can be expected from a test, or series of tests, and an example of its use to diagnose wide qrs complex tachycardia (wct) is provided. methods: results from two retrospective multicenter case series were used to assess the utility of qrs duration and axis to assess for ventricular tachycardia (vt) in patients with undifferentiated regular sustained wct. serial patients with heart rate (hr) >120 beats per minute and qrs duration >120 milliseconds (msec) were included. the final tachydysrhythmia diagnosis was determined by a number of methods independent of the ecg. 
the aalr is defined as: aalr = (1/n_total)[Σ(n_i × lr_i) (for lr > 1) + Σ(n_k / lr_k) (for lr < 1)], where lr_i and lr_k are the interval lrs, and n_i and n_k are the number of patients with test results within the corresponding intervals. roc curves were constructed, and interval lrs and aalrs were calculated for the qrs duration and axis tests individually, and when applied together. confidence intervals were bootstrapped with 10,000 replications using the r boot package. results: 187 patients were included: 95 with supraventricular tachycardia (svt) and 92 with vt. optimal qrs intervals (msec) for distinguishing vt from svt were: qrs ≤ 130, 130 < qrs < 160, and qrs ≥ 160. qrs axis results were dichotomized to upward right axis (181-270 degrees) or not (-89 to 180 degrees). results are listed in the table. conclusion: application of the qrs interval and axis tests together for patients with wide qrs complex tachycardia changes the odds of ventricular tachycardia, on average, by a factor of 3.5 (95% ci 2.4 to 6.2), and this is mildly improved over the qrs duration test alone. both a strength and weakness of the aalr is its dependence on the pretest probability of disease. the aalr may be helpful for clinicians and researchers to evaluate and compare diagnostic testing approaches, particularly when strategies with serial non-independent tests are considered.

consultation for adults with metastatic solid tumors at an urban, academic ed located within a tertiary care referral center. field notes were grouped into barrier categories and then quantified when possible. patient demographics for those who did and did not enroll were extracted from the medical record and quantified. patients who did not meet inclusion criteria for the study (e.g., cognitive impairment) were excluded from the analysis. results: attempts were made to enroll 42 eligible patients in the study, and 23 were successfully enrolled (55% enrollment rate).
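the aalr defined above reduces to a patient-weighted average of each interval's fold-change in the odds of disease (lr itself when lr > 1, its reciprocal when lr < 1). a minimal python sketch; the interval counts and lrs below are invented for illustration (the total of 187 matches the cohort size, but the per-interval values are not the study's):

```python
# average absolute likelihood ratio (aalr): each interval lr contributes its
# fold-change in odds (lr if > 1, 1/lr if < 1), weighted by the number of
# patients falling in that interval. the example intervals are hypothetical.
def aalr(intervals):
    """intervals: list of (n_patients, interval_lr) pairs."""
    n_total = sum(n for n, _ in intervals)
    total = sum(n * lr if lr > 1 else n / lr for n, lr in intervals)
    return total / n_total

# hypothetical three-interval test over 187 patients (counts/lrs invented)
example = [(60, 0.25), (50, 1.0), (77, 4.0)]
print(round(aalr(example), 2))
```

note that an interval with lr = 1 contributes a factor of 1 per patient, so uninformative results pull the aalr toward 1, which is the behavior the abstract's "average fold-change in odds" interpretation requires.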
barriers to enrollment were deduced from the field notes and placed into the following categories from most to least common: patient refusal (6); diagnostic uncertainty regarding cancer stage (4); severity of symptoms precluding participation (4); patient unaware of illness or stage (3); and family refusal (2). conclusion: patients, families, and diagnostic uncertainty are barriers to enrolling ed patients with advanced illness in clinical trials. it is unclear whether these barriers are generalizable to other study sites and disease processes other than cancer.

objectives: the purpose of this study was to evaluate the use of a high-fidelity mannequin bedside simulation scenario followed by a debriefing session as a tool to improve medical student knowledge of palliative care techniques. methods: third-year medical students participating in a 12-week simulation curriculum during a surgery/emergency medicine/anesthesia clerkship were eligible for the study. all students were administered a pre-test to evaluate their baseline knowledge of palliative care and randomized to a control or intervention group. during week 3 or 4, students in the intervention group participated in and observed two end-of-life scenarios. following the scenarios, a faculty debriefer trained in palliative care addressed critical actions in each scenario. during week 10, all students received a post-test to evaluate for improvement in knowledge. the pre-test and post-test consisted of 12 questions addressing prognostication, symptom control, and the medicare hospice benefit. students were de-identified and pre- and post-tests were graded by a blinded scorer. results: from jan-dec 2011, 70 students were included in the study and 5 were excluded due to incomplete data. the mean score on the pre-test for the intervention group was 3.16, and for the control group was 3.45 (p = 0.90). the results indicate that educators identify the most important scenarios as protocol-based simulations.
respondents also suggested that scenarios of very common emergency department presentations bear a great deal of importance. emergency medicine educators assign priority to simulations involving professionalism and communication. finally, many respondents noted that they use simulation to teach the presentation and management of rare or less frequent, but important, disease processes. the identification of these scenarios would suggest that educators find simulation useful for filling in "gaps" in resident education.

background: prescription drug misuse is a growing problem among adolescent and young adult populations. objectives: to determine factors associated with past year prescription drug misuse, defined as using prescription sedatives, stimulants, or opioids to get high, taking them when they were prescribed to someone else, or taking more than was prescribed, among patients seeking care in an academic ed. methods: adolescents and young adults (aged 14-20) presenting for ed care at a large, academic teaching hospital were approached to complete a computerized screening questionnaire regarding demographics, prescription drug misuse, illicit drug use, alcohol use, and violence in the past 12 months. logistic regression was used to predict past year prescription drug misuse. results: over the study time period, there were 2156 participants (86% response rate) of whom 300 (13.9%) endorsed past year prescription drug misuse. specifically, rates of past year misuse were 8.7% for opioids, 5.4% for sedatives, and 8.0% for stimulants. significant overlap exists among classes, with over 40% misusing more than one class of medications. in the multivariate analysis, significant predictors of past year prescription drug misuse included female gender (or …). conclusion: approximately one in seven adolescents or young adults seeking ed care have misused prescription drugs in the past year. while opioids are the most common drug misused, significant overlap exists among this population.
given the correlation of prescription drug misuse with the use and misuse of other substances (i.e. alcohol, cough medicine, marijuana) more research is needed to further understand these relationships and inform interventions. additionally, future research should focus on understanding the differences in demographics and risk factors associated with misuse of each separate class of prescription drugs. prospective 10 objectives: this study aims to examine the association of depression with high ed utilization in patients with non-specific abdominal pain. methods: this single-center, prospective, cross-sectional study was conducted in an urban academic ed located in washington, dc as part of a larger study to evaluate the interaction between depression and frequency of ed visits and chronic pain. as part of this study, we screened patients using the phq-9, a nineitem questionnaire that is a validated, reliable predictor of major depressive disorder. we analyzed the subset of respondents with a non-specific abdominal pain diagnosis (icd-9 code of 789.xx). our principal outcome of interest was the rate of a positive depression screen in patients with non-specific abdominal pain. we analyzed the prevalence of a positive depression screen among this group and also conducted a chi-square analysis to compare high ed use among abdominal pain patients with a positive depression screen versus those without a positive depression screen. we defined high ed utilization as >3 visits in a 364-day period prior to the enrollment visit. background: numerous studies have found high rates of co-morbid mental illness and chronic pain in emergent care settings. one psychiatric diagnosis frequently associated with chronic pain is major depressive disorder (mdd). objectives: we conducted a study to characterize the relationship between mdd and chronic pain in the emergency department (ed) population. 
we hypothesized that patients who present to the ed with self-reported chronic pain will have higher rates of mdd. methods: this was a single-center, prospective, cross-sectional study. we used a convenience sample of non-critically ill, english-speaking adult patients presenting with non-psychiatric complaints to an urban academic ed over 6 months in 2011. we oversampled patients presenting with pain-related complaints (musculoskeletal pain or headache). subjects were surveyed about their demographic and other health and health care characteristics and were screened with the phq-9, a nine-item questionnaire that is a validated, reliable predictor of mdd. we conducted bivariate (chi-square) and multivariate analysis controlling for demographic characteristics (race, income, sex, age) using stata v. 10.0. our principal dependent variable of interest was a positive depression screen (phq-9 score ≥ 10). our principal independent variable of interest was the presence of self-reported chronic pain (greater than 3 months). results: of 77 patients enrolled, 2 did not meet all inclusion criteria. 50 had two or more assessments for comparison. their average age was 39 (range 21-59), 70% were male, and 74% were in police custody. 38% used methadone alone; 16% heroin alone; 4% oxycodone alone; and the rest used multiple opioids. the average dose of im methadone was 10.3 mg (range 5-20 mg); all but 3 patients received 10 mg. the mean cows score before receiving im methadone was 11.19 (range 3-23), compared to 4.83 (range 0-20) 30 minutes after methadone (p < 0.001; mean difference = -6.36; 95% ci = -4.57 to -8.15). the mean wss before and after methadone was -1.54 (range -1 to -2) and -0.755 (range -2 to 2), respectively (p < 0.001; 95% ci = -1.0 to -0.57). the mean physician-assessed wss was significantly lower than the patient's own assessment by 0.78 (p < 0.001).
adverse events included an asthmatic patient with bronchospasm whose oxygen saturation decreased from 95% to 88% after receiving methadone, a patient whose oxygen saturation decreased from 95% to 93%, and two patients whose amss decreased from -1 to -2 (indicating moderate sedation).

background: as the us population ages, the coexistence of copd and acute coronary syndrome (acs) is expected to be more frequent. very few studies have examined the effect of copd on outcomes in acs patients, and, to our knowledge, there has been no report on biomarkers that possibly mediate between copd and long-term acs patient outcomes. objectives: to determine the effect of copd on long-term outcomes in patients presenting to the emergency department (ed) with acs and to identify prognostic inflammatory biomarkers. methods: we performed a prospective cohort study enrolling acs patients from a single large tertiary center. hospitalized patients aged 18 years or older with acs were interviewed and their blood samples were obtained. seven inflammatory biomarkers were measured, including interleukin-6 (il-6), c-reactive protein (crp), tumor necrosis factor-alpha (tnf-alpha), vascular cell adhesion molecule (vcam), e-selectin, lipoprotein-a (lp-a), and monocyte chemoattractant protein-1 (mcp-1). the diagnoses of acs and copd were verified by medical record review. annual telephone follow-up was conducted to assess health status and major adverse cardiovascular events (mace) outcomes, a composite endpoint including myocardial infarction, revascularization procedure, stroke, and death.

background: aortic dissection (ad) is an uncommon life-threatening condition requiring prompt diagnosis and management. thirty-eight percent of cases are missed upon initial evaluation. the cornerstone of accurate diagnosis is maintaining a high index of clinical suspicion for the various patterns of presentation.
quality documentation that reflects consideration for ad in the history, exam, and radiographic interpretation is essential for both securing the diagnosis and for protecting the clinician in missed cases. objectives: we sought to evaluate the quality of documentation in patients presenting to the emergency department with subsequently diagnosed acute ad. methods: irb-approved, structured, retrospective review of consecutive patients with newly diagnosed non-traumatic ad from 2004 to 2010. inclusion criteria: new ad diagnosis via ed. exclusion criteria: ad diagnosed at another facility; chronic, traumatic, or iatrogenic ad. trained/monitored abstractors used a standardized data tool to review ed and hospital medical records. descriptive statistics were calculated as appropriate. inter-rater reliability was measured. our primary performance measure was the prevalence of a composite of all three key historical elements ((1) any back pain, (2) neurologic symptoms including syncope, and (3) sudden onset of pain) in the attending emergency physician's documentation. secondary outcomes included documentation of: ad risk factors, pain quality, back pain at multiple locations, presence/absence of pulse symmetry, mediastinal widening on chest radiograph, and migratory nature of the pain. results: 65/203 met our inclusion/exclusion criteria. the mean age was 58.4 years; 65% were male, 23 (35.4%) were stanford a. 32 (60%) presented with a chief complaint of chest pain. primary outcome measure: 6/65 (9.2%; 95% ci 3.5-19.0) documented the presence/absence of all three key historical elements. [back pain = 42/65; 64.6% (51.8, 76.1); neuro symptoms = 39/65; 60% (47.1, 72.0); sudden onset = 12/65; 18.5% (9.9, 30.0).] limitations: small number of confirmed ad cases. conclusion: in our cohort, emergency physician documentation of key historical, physical exam, and radiographic clues of ad is suboptimal.
although our ed miss rate is lower than that which has been reported by previous authors, there is an opportunity to improve documentation of these pivotal elements at our institution. objectives: this study assessed the opinions of iem and gh fellowship program directors, in addition to recent and current fellows regarding streamlining the application process and timeline in an attempt to implement change and improve this process for program directors and fellows alike. methods: a total of 34 current iem and gh fellowship programs were found through an internet search. an electronic survey was administered to current iem and gh fellowship directors, current fellows, and recent graduates of these 34 programs. results: response rates were 88% (n = 30) for program directors and 53% (n = 17) for current and recent fellows. the great majority of current and recent fellows (77%) and program directors (83%) support transitioning to a common application service. similarly, 88% of current and recent fellows and 83% of program directors support instituting a uniform deadline date for applications. however, only 47% of recent/current fellows and 33% of program directors would support a formalized match process like nrmp. conclusion: the majority of fellows and program directors support streamlining the application for all iem and gh fellowship programs. this could improve the application process for both fellows and program directors, and ensure the best fit for the candidates and for the fellowship programs. in order to establish effective emergency care in rural sub-saharan africa, the unique practice demographics and patient dispositions must be understood. objectives: the objectives of this study are to determine the demographics of the first 500 patients seen at nyakibale hospital's ed and assess the feasibility of treating patients in a rural district hospital ed in sub-saharan africa. 
methods: a descriptive cross-sectional analysis of the first 500 consecutive patient visits in the ed's patient care log was performed by an unblinded abstractor. data collected included age, sex, condition upon discharge, and disposition. all authors discussed uncertainties and formed a consensus. descriptive statistics were performed. results: of the first 500 patient visits, 254 (50.8%) occurred when the outpatient clinic was open. there were 275 (55%) male visits. the average age was 25.2 years (sd ± 22.2). pediatric visits accounted for 218 (43.6%) patients, and 132 (26.4%) visits were for children under five years old. only one patient expired in the ed, and 401 (80.2%) were in good condition after treatment, as subjectively defined by the ed physicians. one person was transferred to another hospital. after treatment, 180 (36%) patients were discharged home. of those admitted to an inpatient ward, 126 (25.2%) patients were admitted to medical wards, 97 (19.4%) to pediatrics, and 60 (12%) to surgical wards. only six (1.2%) patients went directly to the operating theatre. conclusion: this consecutive sample of patient visits from a novel rural district hospital ed in sub-saharan africa included a broad demographic range. after treatment, most patients were judged to be in "good condition", and over one third of patients could be discharged after ed management. this sample suggests that it is possible to treat patients in an ed in rural sub-saharan africa, even in cases where surgical backup and transfers to a higher level of care are limited or unavailable. background: communication failures in clinical hand-offs have been identified as a major preventable cause of patient harm. in italy, advanced prehospital care is provided predominantly by physicians who work on ambulances in teams with either nurses or basic rescuers. the hand-offs from prehospital physicians to hospital emergency physicians (eps) are especially susceptible to error, with serious consequences. 
there are no studies in italy evaluating the communication at this transition in patient care. studying this, however, requires a tool that measures the quality of this communication. objectives: the purpose of this study is to develop and validate a tool for the evaluation of communication during the clinical handoff from prehospital to emergency physicians in critically ill patients. methods: several previously validated tools for evaluating communication in hand-offs were identified through a literature search. these were reviewed by a focus group consisting of eps, nurses, and rescuers, who then adapted and translated the australian isbar (identification, situation, background, assessment, recommendation), the tool most relevant to local practice. the italian isbar tool consists of the following elements: patient and provider identification; patient's chief complaint; patient's past medical history, medications, and allergies; prehospital clinical assessment (primary survey, illness severity, vital signs, diagnosis); treatment initiated and anticipated treatment plan. we conducted and video-taped the hand-offs of care from the prehospital physicians to the eps in 12 pediatric critical care simulations. four physician raters were trained in the italian isbar tool and used it to independently assess communication in each simulation. to assess agreement we calculated the proportion of agreement among raters for each isbar question, fleiss' kappas for each simulation, as well as mean agreement and mean kappas with standard deviations. results: there was 100% agreement among the four physicians on 70% of the items. the mean level of agreement was 91% (sd 0.15). the overall mean kappa was 0.67 (sd 0.10). conclusion: the standardized tool resulted in good agreement by physician raters. this validated tool may be helpful in studying and improving hand-offs in the prehospital to emergency department setting. 
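the fleiss' kappa statistic used in the hand-off study above summarizes agreement among more than two raters. a minimal sketch of the standard computation follows; this is an illustration of the statistic itself, not the study's code, and the input format (per-item category counts) is our assumption:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for N items each rated by the same number of raters.

    ratings: list of per-item category counts, e.g. for 4 raters and
    yes/no items: [{"yes": 4}, {"yes": 3, "no": 1}, ...] (hypothetical format).
    """
    n = sum(ratings[0].values())  # raters per item (assumed constant)
    N = len(ratings)              # number of items
    # overall proportion of all assignments falling in each category
    totals = Counter()
    for item in ratings:
        totals.update(item)
    p_j = {cat: c / (N * n) for cat, c in totals.items()}
    # per-item agreement P_i, averaged over items
    P_bar = sum(
        (sum(c * c for c in item.values()) - n) / (n * (n - 1))
        for item in ratings
    ) / N
    # expected chance agreement
    P_e = sum(p * p for p in p_j.values())
    return (P_bar - P_e) / (1 - P_e)
```

with perfect agreement on every item the statistic equals 1 regardless of how items are distributed across categories, which is the behavior the rater-training exercise above relies on.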
objectives: we hypothesized that residents who were provided with vps prior to hfs would perform more thoroughly and efficiently than residents who had not been exposed to the online simulation. methods: we randomized a group of 30 residents from an academic, pgy 1-4 emergency medicine program to complete an online vps case, either prior to (vps group, n = 14 residents) or after (n = 16) their hfs case. the vps group had access to the online case (which reviewed asthma management) 3 days prior to the hfs session. all residents individually participated in their regularly scheduled hfs and were blinded to the content of the case - a patient in moderate asthma exacerbation. the authors developed a dichotomous checklist consisting of 33 items recorded as done/not done along with time completed. a two-sample proportion test was used to evaluate differences in the individual items completed between groups. a wilcoxon rank sum test was used to determine the differences in overall and subcategory performance between the two groups. median time to completion was analyzed using the log-rank test. results: the vps group had better overall checklist performance than the control group (p = 0.046). in addition, the vps group was more thorough in obtaining an hpi (p = 0.009). specific actions (related to asthma management) were performed better by the vps group: inquiring about last/prior ed visits (p = 0.038), total number of hospitalizations in the prior year (p = 0.029), prior intubations (p = 0.001), and obtaining peak flow measurements (p = 0.030). overall, there was no difference in time to event completion between the two groups. conclusion: we found that when hfs was primed with an educational modality such as vps, there was an improvement in trainee performance. however, the improved completeness of the vps group may have served as a barrier to efficiency, inhibiting our ability to identify a statistically significant difference in efficiency overall. 
vps may aid in priming learners and maximizing the efficiency of training using high-fidelity simulations. training using an animal model helped develop residents' skills and confidence in performing ptv. retention was found to be good at 2 months post-training. this study underscores the need for hands-on training in rare but critical procedures in emergency medicine. methods: in this cross-sectional study at an urban community hospital, 15 residents in their second or third year of training from a 3-year em residency program performed us-guided catheterizations of the ij on a simulator manufactured by blue phantom. two board-certified em physicians observed for the completion of pre-defined procedural steps using a checklist and rated the residents' overall performance of the procedure. overall performance ratings were provided on a likert scale of 1 to 10, with 1 being poor and 10 being excellent. residents were given credit for performing a procedural step if at least one rater marked its completion. agreement between raters was calculated using intraclass correlation coefficients for domain and summary scores. the same protocol was then repeated on an unembalmed cadaver using two different board-certified em physician raters. criterion validity of the residents' proficiency on the simulator was evaluated by comparing their median overall performance rating on the simulator to that on the cadaver and by comparing the proportion of residents completing each procedural step between modalities with descriptive statistics. results: em residents' overall performance rating on the simulator was 7.4 (95% ci: 6.0 to 8.8) and on the cadaver was 6.1 (95% ci: 4.7 to 7.5). the results for each procedural step are summarized in the attached figure. inter-rater agreement was high for assessments on both the simulator and cadaver, with overall kappa scores of 0.89 and 0.96, respectively. background: the environment in the emergency department (ed) is chaotic. 
physicians must learn how to multi-task effectively and manage interruptions. noise becomes an inherent byproduct of this environment. previous studies in the surgical and anesthesiology literature examined the effect of noise levels and cognitive interruptions on resident performance during simulated procedures; however, the effect of noise distraction on resident performance during an ed procedure has not yet been studied. objectives: our aim was to prospectively determine the effects of various levels of noise distraction on the time to successful intubation of a high-fidelity simulator. methods: a total of 45 emergency medicine, emergency medicine/internal medicine, and emergency medicine/family medicine residents were studied in background noise environments of less than 50 decibels (noise level 1), 60-70 decibels (noise level 2), and greater than 70 decibels (noise level 3). noise levels were standardized by a dosimeter (extech instruments heavy duty 600). each resident was randomized to the order in which he or she was exposed to the various noise levels and had a total of 2 minutes to complete each of the intubation attempts, which were performed in succession. time, in seconds, to successful intubation was measured in each of these scenarios, with the start time defined as the time the resident picked up the storz c-mac video laryngoscope blade and the finish time defined as the time the tube passed through the vocal cords as visualized by an observer on the storz c-mac video screen. analytic methods included analysis of variance, student's t-test, and pearson's chi-square. results: no significant differences were found in time to intubation across noise levels, nor did the order of noise level exposure affect the time to intubation (see table). there were no significant differences in success rate between the three noise levels (p = 0.178). 
a significant difference in time to intubation was found between the residents' second and third intubation attempts, with decreased time to intubation for the third attempt (p = 0.001). conclusion: noise level did not have an effect on time to intubation or intubation success rate. time to intubation decreased between the second and third intubations regardless of noise level. background: growing use of the emergency department (ed) is cited as a cause of rising health care costs and a target of health care reform. eds provide approximately one quarter of all acute care outpatient visits in the us. eds are a diagnostic center and a portal for rapid inpatient admission. the changing role of eds in hospital admissions has not been described. objectives: to determine whether admission through the ed has increased relative to direct hospital admission. we hypothesized that the use of the ed as the admitting portal increased for all frequently admitted conditions. methods: we analyzed the nationwide inpatient sample (nis), the largest us all-payer inpatient care database, from 1993-2006. nis contains data from approximately 8 million hospital stays each year, and is weighted to produce national estimates. we used an interactive, web-based data tool (hcupnet) to query the nis. clinical classification software (ccs) was used to group discharge diagnoses into clinically meaningful categories. we calculated the number of annual admissions and proportion admitted from the ed for the 20 most frequently admitted conditions. we excluded ccs codes that are rarely admitted through the ed (<10%) as well as obstetric conditions. background: the optimal dose of opioids for patients in acute pain is not well defined, although 0.1 mg/kg of iv morphine is commonly recommended. patient-controlled analgesia (pca) provides an opportunity to assess the adequacy of this recommendation, as use of the pca pump is a behavioral indication of insufficient analgesia. 
objectives: to assess the need for additional analgesia following a 0.1 mg/kg dose of iv morphine by measuring additional self-dosing via a pca pump. methods: a three-arm randomized controlled trial was performed in an urban ed with 75,000 annual adult visits. a convenience sample of ed patients ages 18 to 65 with abdominal pain of <7 days duration requiring iv opioids was enrolled between 4/2009 and 6/2010. all patients received an initial dose of 0.1 mg/kg iv morphine. patients in the pca arms could request additional doses of 1 mg or 1.5 mg iv morphine by pressing a button attached to the pump with a 6-minute lock-out period. for this analysis, data from both pca arms were combined. software on the pump recorded times when the patient pressed the button (activation) and when he/she received a dose of morphine (successful activation). results: 137 patients were enrolled in the pca arms. median baseline nrs pain score was 9. mean amount of supplementary morphine self-administered over the 2 hour study period subsequent to the loading dose was 5.7 mg and 6.7 mg for the 1 and 1.5 mg pca groups respectively. 124 patients activated the pump at least once (91%, 95% ci: 84 to 94%). figure 1 shows the frequency distribution of the number of times the pump was activated. of those who activated the pump, the median number of activations per person was 5 (iqr: 3 to 12). there were 1124 activations of the pump. 60% of activations were successful (followed by administration of morphine), while 40% were unsuccessful as they occurred during the 6-minute lock-out periods. 19% of the activations occurred in the first 30 minutes, 29% in the second 30 minutes, 25% in the third 30 minutes, and 27% in the last 30 minutes after the initial loading dose. conclusion: almost all patients requested supplementary doses of pca morphine, half of whom activated the pump five times or more over a course of 2 hours. 
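the split of pump activations into successful and unsuccessful under the 6-minute lock-out described above can be sketched as follows. this is an illustration only, not the pump's firmware; the function name is ours, and the assumption that the first press after the loading dose is always delivered is ours as well:

```python
def classify_activations(times_min, lockout_min=6.0):
    """Count successful (dose delivered) vs. unsuccessful (within the
    lock-out window after the last delivered dose) button presses.

    times_min: sorted button-press times, in minutes after the loading dose.
    Returns (successful, unsuccessful).
    """
    successful, unsuccessful = 0, 0
    last_dose = None
    for t in times_min:
        # a press delivers a dose only if no dose was given in the
        # preceding lock-out window (assumption: the first press qualifies)
        if last_dose is None or t - last_dose >= lockout_min:
            successful += 1
            last_dose = t
        else:
            unsuccessful += 1
    return successful, unsuccessful
```

for example, presses at 0, 3, 6, 7, and 13 minutes would yield three delivered doses and two lock-out presses, mirroring the roughly 60/40 split reported above.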
this frequency of pca activations suggests that the commonly recommended dose of 0.1 mg/kg morphine may constitute initial oligoanalgesia in most patients. marie-pier desjardins, benoit bailey, fanny alie-cusson, serge gouin, jocelyn gravel, chu sainte-justine, montreal, qc, canada. background: administration of corticosteroid at triage has been suggested to decrease the time to corticosteroid administration in the ed. objectives: to compare the time between arrival and corticosteroid administration in patients treated with an asthma pathway (ap) or with standard management (sm) in a pediatric ed. methods: chart review of children aged 1 to 17 years diagnosed with asthma, bronchospasm, or reactive airways disease seen in the ed of a tertiary care pediatric hospital. for a one-year period, 20% of all visits were randomly selected for review. from these, we reviewed patients who were eligible to be treated with the ap (≥18 months with a previous history of asthma and no other pulmonary condition) and who had received at least one inhaled bronchodilator treatment. charts were evaluated by a data abstractor blinded to the study hypothesis using a standardized datasheet. variables evaluated included age, respiratory rate and o2 saturation at triage, type of physician who saw the patient first, treatment prior to the visit, in the ed, and at discharge, time between arrival and corticosteroid administration, and length of stay (los). background: return visits comprise 3.5% of pediatric emergency department (ped) visits, at a cost of >$500 million/year nationally. these visits are typically triaged with higher acuity and admission rates and raise concern for lapses in quality of care and patient education during the first visit. objectives: the aim of this qualitative study was to describe parents' reasons for return visits to the ped. 
methods: we prospectively recruited a convenience sample of parents of patients under the age of 18 years who returned to the ped within 72 hours of their previous visit. we excluded patients who were instructed to return, had previously left without being seen, arrived without a parent, were wards of the state, or did not speak english. after obtaining consent, the principal investigator (ce) conducted confidential, in-person, tape-recorded interviews with parents during ped return visits. parents answered 12 open-ended questions and 9 closed-ended questions using a five-point likert scale. responses to open-ended questions were analyzed using thematic analysis techniques. the scaled responses were grouped into three categories of agree, disagree, or neutral. results: from the 49 closed-ended responses, 86% of parents agreed that their children were getting sicker, and 92% agreed that their children were not getting better. 80% agreed that they were unsure how to treat the illness; however, only 41% agreed they did not feel comfortable taking care of the illness. only 29% agreed that the medical condition and/or the instructions were not clearly explained in the first visit. some common themes from the open-ended questions included worsening or lack of improvement of symptoms. many parents reported having unanswered questions about the cause of the illness and hoped to find out the cause during the return visit. conclusion: most parents brought their children back to the ped because they believed the symptoms had worsened or were not improving. although a large proportion of parents believed that the medical condition was clearly explained at the first visit, many parents still had unanswered questions about the cause of their child's illness. 
while worsening symptoms seemed to drive most return visits, it is possible that some visits related to failure to improve might be prevented during the first ped visit through a more detailed discussion of disease prognosis and expected time to recover. background: experience indicates that it is difficult to effectively quell many parents' anxiety toward pediatric fevers, making this a common emergency department (ed) complaint. the question remains as to whether at-home treatment has any effect on the course of emergency department treatment or length of stay in this population. objectives: to determine whether anti-pyretic treatment prior to arrival in the emergency department affects the evaluation or emergency department length of stay of febrile pediatric patients. methods: a convenience sample of children, ages 0-12 years, who presented to a tertiary care ed with a chief complaint of fever were enrolled. parents were asked to participate in an eight-question survey. questions related to demographic information, pre-treatment of the fever, contact with primary care providers prior to ed arrival, and immunization status. upon admission or discharge, investigators recorded information regarding length of stay, laboratory tests and imaging ordered, and medications given. results: eighty-one patients were enrolled in the study. seventy-six percent of the patients were pre-treated with some form of anti-pyretic by the caregiver prior to ed arrival. there was no significant effect of pre-treatment on whether laboratory tests or medications were ordered in the ed or whether the patient was admitted or discharged. the length of ed stay was found to be significantly shorter among those who received anti-pyretics prior to arrival (184 ± 11 vs. 247 ± 36 minutes; p = 0.03). conclusion: among febrile children, those who received anti-pyretics prior to their ed visit had statistically significantly shorter lengths of stay. 
this also supports implementation of triage or nursing protocols to administer an anti-pyretic as soon as possible in the hope of decreasing ed throughput times. background: during the past two decades, the prevalence of overweight (bmi percentile ≥95) in children has more than doubled, reaching epidemic proportions both nationally and globally. the public health burden is enormous given the increased risk of adult obesity as well as the adverse consequences on cardiovascular, metabolic, and psychological health. despite the overwhelming prevalence, the effect of obesity on emergency care has received little attention. objectives: the goal of this study is to determine the relation of weight to reported emergency department visits in children from a nationally representative sample. methods: weight (as reported by parents) and height, along with frequency of and reason for emergency department (ed) use in the last 12 months, were obtained from children aged 10-17 y (n = 46,707) in the cross-sectional, telephone-administered national survey of children's health (nsch). bmi percentiles were calculated using sex-specific bmi-for-age growth charts from the cdc (2000). children were categorized as: underweight (bmi percentile ≤5), normal weight (>5 to <85), at-risk for overweight (85 to <95), and overweight (≥95). prevalence of ed use was estimated and compared across bmi percentile categories using chi-square analysis and multivariable logistic regression. taylor-series expansion was used for variance estimation of the complex survey design. results: the prevalence of at least one ed use in the past 12 months increased with increasing bmi percentiles (figure 1, p < 0.001). additionally, overweight children were more likely to have more than one visit. 
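the bmi percentile cut-points used to categorize children in the survey analysis above map directly to a small classification function. a minimal sketch, with the function name ours and the cut-points taken from the categories stated in the methods:

```python
def bmi_category(percentile):
    """Map a BMI-for-age percentile to the study's four categories:
    underweight <=5, normal >5 to <85, at-risk 85 to <95, overweight >=95."""
    if percentile <= 5:
        return "underweight"
    if percentile < 85:
        return "normal weight"
    if percentile < 95:
        return "at-risk for overweight"
    return "overweight"
```

ordering the checks from the lowest cut-point up keeps each branch a single comparison, so the boundary values (5, 85, 95) fall into the categories the methods specify.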
overweight children were also less likely to report an injury, poisoning, or accident as the reason for the ed visit compared to other bmi categories (47, 55, 59, and 54% in overweight, at-risk, normal, and underweight, respectively, p < 0.05). conclusion: as rates of childhood obesity continue to grow in the u.s., we can expect greater demands on the ed. this will likely translate into an increased emphasis on the care of chronic conditions rather than injuries and accidents in the pediatric ed setting. results: the mean pediatric satisfaction score was 84.1 (sd 3.9) compared with 81.4 (3.2) for adult patients (p < 0.001); monthly sample sizes ranged from 14-74 and from 30-125 for the two populations, respectively. both populations showed an increase in satisfaction after the opening of the ped-ed. for both populations there was no significant trend in patient satisfaction from the beginning of the study period to the opening of the ped-ed, but after the opening the models of the populations differed. the pediatric satisfaction model was an interrupted two-slope model, with an immediate jump of 3.5 points in november and an increase of 0.2 points per month thereafter. in contrast, adult satisfaction scores did not show a jump but increased linearly (two-slope model) after 11/2011 at a rate of 0.3 per month. prior to the opening of the ped-ed, mean monthly pediatric and adult satisfaction scores were 81.5 (2.4) and 79.5 (2.8), respectively (difference 2.0, 95% ci 0.1-3.8, p = 0.04). after the opening, the mean scores were 86.8 (3.1) and 83.2 (2.4), respectively (difference 3.6, 95% ci 2.1-5.0, p < 0.001). conclusion: opening of a dedicated ped-ed was associated with a significant increase in patient satisfaction scores both for children and adults. patient satisfaction for children, as compared to adults, was higher both before and after opening the ped-ed. background: there are racial disparities in outcomes among injured children. 
in particular, black race appears to be an independent predictor of mortality. objectives: to evaluate disparities among ed visits for unintentional injuries among children ages 0-9. methods: five years of data (2004-2008) from the national hospital ambulatory medical care survey were combined. inclusion criteria were defined as unintentional injury visits (e-code 800.0 to 869.9 or 888.0 to 929.9) and age 0-9 years. visit rates per 100 population (defined by the us census) were calculated by race and age group. weighted multivariate logistic regression analysis was performed to describe associations between race and specific outcome variables and related covariates. primary statistical analyses were performed using sas version 9.1.3. results: 21,524,000 of 585,294,000 weighted ed visits met our inclusion criteria (3.7%). per 100 persons, black children had 1.5 times as many ed visits for unintentional injuries as whites (table). there were no racial differences in the sex ratio (1.4 boy visits : 1 girl), proportion of visits by age, ed disposition, immediacy with which they needed to be seen, whether or not they were evaluated by an attending physician, metropolitan vs. rural hospital, admission length of stay, mode of transportation for ed arrival, number of procedures, diagnostic services, or ed medications. background: sudden cardiac arrests in schools are infrequent but emotionally charged events. little data exist that describe aed use in these events. objectives: the purpose of our study was to 1) describe characteristics and outcomes of school cardiac arrests (ca), and 2) assess the feasibility of conducting bystander interviews to describe the events surrounding school ca. methods: we performed a telephone survey of bystanders to ca occurring in k-12 schools in communities participating in the cardiac arrest registry to enhance survival (cares) database. the study period was from 8/2005-12/2010 and continued in one community through 2011. 
utstein-style data and outcomes were collected from the cares database. a structured telephone interview of a bystander or administrative personnel was conducted for each ca. a descriptive summary was used to assess for the presence of an aed, provision of bystander cpr (bcpr), and information regarding aed deployment, training, and use and perceived barriers to aed use. descriptive data are reported. results: during the study period there were 30,603 ca identified at cares communities, of which 73 were identified as educational institutions. of these, 46 (0.15%) events were at k-12 schools, with 21 (45.7%) being high schools. of the 46 arrests, a minority were children (15 (32.6%) < age 19), most (32, 84.8%) were witnessed, a majority (36, 76.1%) received bcpr, and 26 (56.5%) were initially in ventricular fibrillation (vf). most arrests (28/40, 70%) occurred during the school day (7a-5p). overall, 14 (30.4%) survived to hospital discharge. interviews were completed for 29 of 46 (63.0%) k-12 events. eighteen schools had an aed on site. most schools (84.2%) with aeds reported that they had a training program and personnel identified for its use. an aed was applied in 10 of 18 patients, and of these, 8 were in vf and 4 survived to hospital discharge. multiple reasons for aed non-use (n = 8) were identified. conclusion: cardiac arrests in schools are rare events; most patients are adults and received bcpr. aed use was infrequent, even when available, but resulted in excellent (4/10) survival. further work is needed to understand aed non-use. post-event interviews are feasible and provide useful information regarding cardiac arrest care. background: gastroenteritis is a common childhood disease accounting for 1-2 million annual pediatric emergency visits. current literature supports the use of anti-emetics, reporting improved oral re-hydration, cessation of vomiting, and reduced need for iv re-hydration. 
however, there remains concern that using these agents may mask alternative diagnoses. objectives: to assess outcomes associated with use of a discharge action plan using ed-dispensed ondansetron at home in the treatment of gastroenteritis. methods: a prospective, controlled, observational trial of patients presenting to an urban pediatric emergency department (census 22,400) over a 12-month period for acute gastroenteritis. fifty patients received ondansetron in the ed. twenty-nine patients were enrolled in the pediatric emergency department discharge action plan (ped-dap), in which ondansetron for home use was dispensed by the treating clinician. twenty-one patients were controls and did not receive home ondansetron. ped-dap patients were given instructions to administer the ondansetron for ongoing symptoms any time 6 hours post ed discharge. all patients were followed by phone at 7-14 days to assess for the following: time of emesis resolution, alternative diagnoses, unscheduled visits, and adverse events. results: all 50 patients were followed by phone. 24/29 ped-dap patients received home ondansetron. 21/29 patients had resolution of emesis in the ed. 7/29 had resolution of their emesis between the time of discharge and 24 hours. 1/29 ped-dap patients reported emesis more than 24 hours after ed discharge. five patients reported an unscheduled visit, all five to the ed (1/5 returned for emesis, 4/5 for diarrhea). 17/21 controls reported resolution of symptoms within the ed. 2/21 controls had resolution between the time of discharge and 24 hours. 1/21 control patients had resolution between 24 and 48 hours post discharge. 1/21 had an unscheduled appointment with the pmd at 72 hours post-discharge for ongoing fever and nausea. in follow-up there were no alternative diagnoses identified. the effect of the ped-dap on resolution of emesis between discharge and 24 hours appears to be statistically significant (p < 0.04). 
conclusion: ondansetron given in conjunction with a discharge action plan appears to provide a modest benefit in resolution of symptoms relative to a control population. objectives: to determine the repeatability coefficient of a 100 mm vas in children aged 8 to 17 years in different circumstances: assessments done either at 3 or 1 minute intervals, when asked to recall their score, or when asked to reproduce it. methods: a prospective cohort study was conducted using a convenience sample of patients aged 8 to 17 years presenting to a pediatric ed. patients were asked to indicate, on a 100 mm paper vas, how much they liked a variety of foods, with four different sets of three questions: (set 1) questions at 3 minute intervals with no specific instruction other than how to complete the vas and no access to previous scores, (set 2) same format as set 1 except for questions at 1 minute intervals, (set 3) same as set 1 except patients were asked to remember their answers, and (set 4) same as set 1 except patients were shown their previous answers. for each set, the repeatability coefficient of the vas was determined according to the bland-altman method for measuring agreement using repeated measures: 1.96 × √2 × s_w, where s_w is the within-subject standard deviation obtained by anova. the sample size required to estimate s_w to within 10% of its value, as recommended, was 96 patients if we obtained three measurements for each patient. results: a total of 100 patients aged 12.1 ± 2.4 years were enrolled. the repeatability coefficient for the questions asked at 3 minute intervals was 12 mm, and 8 mm when asked at 1 minute intervals. when asked to remember their previous answers or to reproduce them, the repeatability coefficient was 7 mm and 6 mm, respectively. conclusion: the conditions of the assessments (variation in intervals, or patients asked to remember or to reproduce their previous answers) influence the test-retest reliability of the vas. 
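the bland-altman repeatability coefficient used in the vas study, 1.96 × √2 × s_w with s_w the within-subject standard deviation from one-way anova, can be sketched as follows. this is an illustrative implementation under our assumptions (the function name is ours; s_w is taken as the square root of the pooled within-subject variance):

```python
import math

def repeatability_coefficient(scores):
    """Bland-Altman repeatability coefficient: 1.96 * sqrt(2) * s_w.

    scores: list of per-subject lists of repeated measurements.
    s_w is the within-subject SD from one-way ANOVA, i.e. the square
    root of the within-subject sum of squares over its degrees of freedom.
    """
    ss_within = sum(
        sum((x - sum(subj) / len(subj)) ** 2 for x in subj)
        for subj in scores
    )
    df_within = sum(len(subj) - 1 for subj in scores)
    s_w = math.sqrt(ss_within / df_within)
    return 1.96 * math.sqrt(2) * s_w
```

when every subject gives identical repeats the coefficient is 0; larger within-subject scatter inflates it, which is why the 3 minute interval condition above (12 mm) exceeds the recall condition (7 mm).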
depending on circumstances, the theoretical test-retest reliability in children aged 8 to 17 years varies from 6 to 12 mm on a 100 mm paper vas. background: skull radiographs are a useful tool in the evaluation of pediatric head trauma patients. however, there is no consensus on the ideal number of views that should be obtained as part of a standard skull series in the evaluation of pediatric head trauma patients. objectives: to compare the sensitivity and specificity of a two- and four-film x-ray series in the diagnosis of skull fracture in children, when interpreted by pediatric emergency medicine physicians. methods: a prospective, crossover experimental study was performed in a tertiary care pediatric hospital. the skull radiographs of 100 children were reviewed. these were composed of the 50 most recent cases of skull fracture for which a four-film radiography series was available at the primary setting and 50 controls, matched for age. two modules, containing a random sequence of two- and four-film series of each child, were constructed in order to have all children evaluated twice (once with two films and once with four films). board-certified or -eligible pediatric emergency physicians evaluated both modules two to four weeks apart. the interpretation of the four-film series by a radiologist, or, when available, the findings on ct scan, served as the gold standard. accuracy of interpretation was evaluated for each patient. the sensitivity and specificity of the two-film versus the four-film skull x-ray series, in the identification of fracture, were compared. this was a non-inferiority crossover study evaluating the null hypothesis that a series with two views would have a sensitivity (specificity) that is inferior by no more than 0.055 compared to a series with four views. a total of 50 controls and 50 cases were needed to establish non-inferiority of the two-film series versus the four-film series, with a power of 80% and a significance level of 5%. 
results: ten pediatric emergency physicians participated in the study. for each radiological series, the proportion of accurate interpretation varied from 0.20 to 1.00. the four-film series was found to be more sensitive in the detection of skull fracture than a two-film series (difference: 0.084, 95% ci 0.030 to 0.139). however, there was no difference in specificity (difference: 0.004, 95% ci −0.024 to 0.033). conclusion: for children sustaining a head trauma, a four-film skull radiography series is more sensitive than a two-film series, when interpreted by pediatric emergency physicians. objectives: we developed a free online video-based instrument to identify knowledge and clinical reasoning deficits of medical students and residents for pediatric respiratory emergencies. we hypothesized that it would be a feasible and valid method of differentiating the educational needs of different levels of learners. methods: this was an observational study of a free, web-based needs assessment instrument that was tested on 44 third- and fourth-year medical students (ms3-4) and 29 pediatric and emergency medicine residents (r1-3). the instrument uses youtube video triggers of children in respiratory distress. a series of case-based questions then prompts learners to distinguish between upper and lower airway obstruction, classify disease severity, and manage uncomplicated croup and bronchiolitis. face validity of the instrument was established by piloting and revision among a group of experienced educators and small groups of targeted learners. final scores were compared across groups using t-tests to determine the ability of the instrument to differentiate between different levels of learners (concurrent validity). cronbach's alpha was calculated as a measure of internal consistency. results: response rates were 19% among medical students and 43% among residents. 
the instrument was able to differentiate between junior (ms3, ms4, and r1) and senior (r2, r3) learners for both overall mean score (61% vs. 78%, p < 0.01) and mean video portion score (74% vs. 84%, p = 0.02). table 1 compares results of several management questions between junior and senior learners. cronbach's alpha for the test questions was 0.47. conclusion: this free online video-based needs assessment instrument is feasible to implement and able to identify knowledge gaps in trainees' recognition and management of pediatric respiratory emergencies. it demonstrates a significant performance difference between junior and senior learners, provides preliminary evidence of concurrent validity, and identifies target groups of trainees for educational interventions. future revisions will aim to improve internal consistency. results: the survey response rate was 87% (60/69). among responding programs, 40 (67%) reside within a children's hospital (vs. a general ed); 51 (85%) are designated level i pediatric trauma centers. forty-three (72%) programs accept 1-2 pem fellows per year; 53 (88%) provided at least some eus training to fellows, and 42 (70%) offer a formal eus rotation. on average this training has existed for 3 ± 1 years, and the mean duration of eus rotations is 4 ± 2 weeks. twenty-eight (67%) programs with eus rotations provide fellow training in both a general ed and a pediatric ed. there were no hospital- or program-level factors associated with having a structured training program for pem fellows. conclusion: as of 2011, the majority of pem fellowship programs provide eus training to their fellows, with a structured rotation being offered by most of these programs. background: ed visits are an opportunity for clinicians to identify children with poor asthma control and intervene. 
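cronbach's alpha, reported as 0.47 for the instrument above, measures internal consistency as alpha = k/(k−1) × (1 − sum of item variances / variance of total scores). a minimal sketch, with the function name and data layout assumed:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.
    item_scores: one list per item, each holding that item's score
    for every respondent (all lists the same length)."""
    k = len(item_scores)                       # number of items
    n = len(item_scores[0])                    # number of respondents
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

values near 1 indicate items that move together; 0.47, as in the abstract, is conventionally considered low, consistent with the authors' plan to revise for internal consistency.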
children with asthma who use eds are more likely than other children to have poor control, not be using controller medications, and have less access to traditional sources of primary care. one significant barrier to ed-based interventions is recognizing which children have uncontrolled asthma. objectives: to determine whether the pacci, a 12-item parent-administered questionnaire, can help ed clinicians better recognize patients with the most uncontrolled asthma and differentiate between intermittent and persistent asthma. methods: this was a randomized controlled trial performed at an urban pediatric ed. parents were asked to answer questions about their child's asthma, including drug adherence and history of exacerbations, as well as demographic questions. using a convenience sample of children 1-18 years presenting with an asthma exacerbation, attending physicians in the study were asked to complete an assessment of asthma control. physicians were randomized to receive a completed pacci (intervention) or not (control group). using an intent-to-treat approach, clinicians' ability to accurately identify 1) the four categories of control used by the national heart, lung, and blood institute (nhlbi) asthma guidelines, 2) intermittent vs. persistent asthma, and 3) controlled/mildly uncontrolled vs. moderately/severely uncontrolled asthma was compared between groups using chi-square analysis. results: between january and august 2011, 57 patients were enrolled. there were no statistically significant differences between the intervention and control groups for child's sex, age, race, or parents' education. conclusion: the pacci improves ed clinicians' ability to categorize children's asthma control according to nhlbi guidelines, and to determine when a child's control has been worsening. 
ed clinicians may use the pacci to identify those children in greatest need of intervention, to guide prescription of controller medications, and to communicate with primary care providers about children failing to meet the goals of asthma therapy. (see figure.) fewer than half of physicians reported that the parent of a 2-year-old being discharged from their ed following an mvc-related visit would receive either child passenger safety information or referrals (table). conclusion: emergency physician report of child passenger safety resource availability is associated with trauma center designation. even when resources are available, referrals from the ed are infrequent. efforts to increase referrals to community child passenger safety resources must extend to the community ed settings where the majority of children receive injury care. background: pediatric subspecialists are often difficult to access following ed care, especially for patients living far from providers. telemedicine (tm) can potentially eliminate barriers to access related to distance and cost. objectives: to evaluate the overall resource savings and access that a tm program brings to patients and families. methods: this study took place at a large, tertiary care regional pediatric health care system. data were collected from 1/2011-10/2011. metrics included travel distance saved (round trip between tm presenting sites and the location of the receiving sites), time savings, direct cost savings (based on $0.55/mile), and potential work and school days saved. indirect costs were calculated as travel hours saved per encounter (based on an average speed of 55 miles/hr). demographics and services provided were included. results: 690 tm consults were completed by 13 separate pediatric subspecialty services. 
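the per-encounter savings metrics in the telemedicine methods above reduce to simple arithmetic; the $0.55/mile and 55 miles/hr figures come from the abstract, while the function name and layout are assumptions:

```python
def encounter_savings(round_trip_miles, cost_per_mile=0.55, avg_mph=55.0):
    """Per-encounter direct cost savings (mileage) and travel-time
    savings (hours) for a telemedicine visit, per the abstract's
    $0.55/mile and 55 mph assumptions."""
    cost_saved = round_trip_miles * cost_per_mile
    hours_saved = round_trip_miles / avg_mph
    return cost_saved, hours_saved
```

summing these over the 690 consults would yield the program-level totals the study reports.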
most patients were school aged (86% ≥5 years old). objectives: to analyze test characteristics of the pathway and its effects on ed length of stay, imaging rates, and admission rate before versus after implementation. methods: children ages 3-18 presenting to one academic pediatric ed with suspicion for appendicitis from october 2010 to august 2011 were prospectively enrolled in a pathway using previously validated low- and high-risk scoring systems. the attending physician recorded his or her suspicion of appendicitis and then used one of two scoring systems incorporating history, physical exam, and cbc. low-risk patients were to be discharged or observed in the ed. high-risk patients were to be admitted to pediatric surgery. those meeting neither low- nor high-risk criteria were evaluated in the ed by pediatric surgery, with imaging at their discretion. chart review and telephone follow-up were conducted two weeks after the visit. charts of a random sample of patients with diagnoses of acute appendicitis or a chief complaint of abdominal pain undergoing a workup for appendicitis in the eight months before and after institution of the pathway were retrospectively reviewed by one or two trained abstractors. results: appendicitis was diagnosed in 65 of 178 patients prospectively enrolled in the pathway (37%). mean age was 9.6 years. of those with appendicitis, 63 were not low-risk (sensitivity 96.9%, specificity 48.7%). the high-risk criteria had a sensitivity of 73.8% and specificity of 77.0%. a priori attending physician assessment of low risk had a sensitivity of 100% and specificity of 49.6%. a priori assessment of high risk had a sensitivity of 58.5% and specificity of 90.2%. we reviewed 232 visits prior to the pathway and 290 after. mean ed length of stay was similar (256 minutes before versus 257 after). ct was used in 12.1% of visits before and 7.3% after (p = 0.07). use of ultrasound increased (44.8% before versus 55.9% after, p < 0.02). 
admission rates were not significantly different (48.3% before versus 42.7% after, p = 0.2). conclusion: the low-risk criteria had good sensitivity in ruling out appendicitis and can be used to guide physician judgment. institution of this pathway was not associated with significant changes in length of stay, utilization of ct, or admission rate in an academic pediatric ed. computer-delivered alcohol and driver safety behavior screening and intervention program initiated during an emergency department visit. mary k. murphy 1, lucia l. smith 2, anton palma 2, david w. lounsbury 2, polly e. bijur 2, paul chambers 2. 1 yale university, new haven, ct; 2 albert einstein college of medicine, bronx, ny. background: alcohol use is involved in 32% of all fatal motor vehicle crashes, and recent estimates show that at least 448,000 people were injured due to distracted driving last year. patients who visit the emergency department (ed) are not routinely screened for driver safety behavior; however, large numbers of patients are treated in the ed every day, creating an opportunity for screening and intervention on important public health behaviors. objectives: to evaluate patient acceptance of and response to a computer-based traffic safety educational intervention during an ed visit and at one month follow-up. methods: design: pre/post educational intervention. setting: large urban academic ed serving over 100,000 patients annually. participants: medically stable adult ed patients. intervention: patients completed a self-administered, computer-based program that queried patients on alcohol use and risky driving behaviors (texting, talking, and other forms of distracted driving). the computer provided patients with educational information on the dangers of these behaviors and collected data on patient satisfaction with the program. staff called patients one month post ed visit for a repeat query. 
results: 150 patients participated; average age 39 (range 21-70), 58% hispanic, 52% male. 96% of patients reported the program was easy to use and were comfortable receiving this education via computer during their ed visit. self-reported driver safety behaviors pre and post intervention (% change): driving while talking on the phone 45%, 16% (−29%, p = 0.001); aggressive driving 44%, 15% (−29%, p = 0.001); texting while driving 28%, 9% (−19%, p = 0.001); driving while drowsy 18%, 4% (−14%, p = 0.002); drinking in excess of nih safe drinking guidelines 15%, 7% (−8%, p = 0.039); drinking and driving 10%, 1% (−9%, p = 0.006). conclusion: we found a high prevalence of self-reported risky driving behaviors in our ed population. at 1 month follow-up, patients reported a significant decrease in these behaviors. overall, patients were very satisfied receiving educational information about these behaviors via computer during their ed visit. this study indicates that a low-intensity, computer-based educational intervention during an ed visit may be a useful approach to educate patients about safe driving behaviors and promote behavior change. prevalence of depression among emergency department visitors with chronic illness. janice c. blanchard, benjamin l. bregman, jeffrey smith, mohammad salimian, qasem al jabr. george washington university, washington, dc. background: persons with chronic illnesses have been shown to have higher rates of depression than the general population. the effect of depression on frequent emergency department (ed) use among this population has not been studied. objectives: this study evaluated the prevalence of major depressive disorder (mdd) among persons presenting to the george washington university ed. we hypothesized that patients with chronic illnesses would be more likely to have mdd than those without. methods: this was a single center, prospective, cross-sectional study. 
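the driver safety abstract above compares paired pre/post proportions in the same 150 patients but does not name the test used. one conventional choice for paired binary outcomes is mcnemar's test, sketched here with the 1-df chi-square p-value; the discordant-pair counts in the example are hypothetical:

```python
import math

def mcnemar(b, c):
    """McNemar test for paired pre/post binary outcomes.
    b: patients reporting the behavior pre but not post;
    c: patients reporting it post but not pre.
    Returns (chi-square statistic, p) using the 1-df chi-square
    survival function, p = erfc(sqrt(x / 2))."""
    stat = (b - c) ** 2 / (b + c)
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p
```

only the discordant pairs (behavior changers) contribute; patients whose answer did not change carry no information about the direction of change.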
we used a convenience sample of noncritically ill, english-speaking adult patients presenting with non-psychiatric complaints to an urban academic ed over 6 months in 2011. subjects were screened with the phq-9, a nine-item questionnaire that is a validated, reliable predictor of mdd. we also queried respondents about demographic characteristics as well as the presence of at least one chronic disease (heart disease, hypertension, asthma, diabetes, hiv, cancer, kidney disease, or cerebrovascular disease). we evaluated the association between mdd and chronic illness with both bivariate analysis and multivariate logistic regression controlling for demographic characteristics (age, race, sex, income, and insurance coverage). results: our response rate was 90.7%, with a final sample size of 1012. of our total sample, 525 (51.9%) had at least one of the chronic illnesses defined above. of this group, 162 (30.9%) screened positive for mdd, as compared to 82 (16.6%) of the group without chronic illnesses (p < 0.0001). in multivariate analysis, persons with chronic illnesses had an odds ratio for a positive depression screen of 1.80 (1.31, 2.50) as compared to persons without illness. among the subset of persons with chronic illnesses (n = 525), 46.9% of those with mdd had ≥3 visits in the prior 364 days, as compared to 34.4% of those without mdd (p = 0.007). conclusion: our study found a high prevalence of untreated mdd among persons with chronic illnesses who present to the ed. depression is associated with more frequent emergency department use among this population. initial blood alcohol level aids ciwa in predicting admission for alcohol withdrawal. craig hullett, douglas rappaport, mary teeple, daniel butler, arthur sanders. university of arizona, tucson, az. background: assessment of alcohol withdrawal symptoms is difficult in the emergency department. 
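the unadjusted association in the depression-screening results above can be reconstructed from the reported 2x2 counts: 162 of 525 chronically ill patients screened positive, versus 82 of the remaining 487 (a complement inferred from the sample size of 1012). the adjusted estimate (1.80) is lower because it controls for demographic covariates. a minimal sketch:

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    return (a * d) / (b * c)

# chronic illness vs. positive MDD screen, counts from the abstract
# (the 405 = 487 - 82 complement is inferred, not stated directly)
or_mdd = odds_ratio(162, 363, 82, 405)
```

the unadjusted value comes out near 2.2, noticeably higher than the adjusted 1.80, which is expected when covariates like age and insurance partly explain the association.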
the clinical institute withdrawal assessment (ciwa) is commonly used, but other factors may also be important predictors of withdrawal symptom severity. objectives: the purpose of this study is to determine whether the ciwa score at presentation to triage was predictive of later admission to the hospital. methods: a retrospective study of patients presenting to an acute alcohol and drug detoxification hospital was performed from july 2010 through january 2011. patients were excluded if other drug withdrawal was present in addition to alcohol. initial assessment included age, sex, vital signs, and blood alcohol level (bal), in addition to hourly ciwa scores. admission is indicated for a ciwa score of 10 or higher. data were analyzed by selecting all patients not immediately admitted at initial presentation. logistic regression using wald's criteria for stepwise inclusion was used to determine the utility of the initially gathered ciwa, bal, longest sobriety, liver cirrhosis, and vital signs in predicting subsequent admission. results: there were 123 patients who fit the inclusion criteria, with 9 admitted for treatment at initial intake and another 27 admitted during the following 10 hours. logistic regression indicated that presenting bal was a strong predictor (p = 0.01) of admission for treatment after initial presentation, as was presenting ciwa (p = 0.03). thus, presenting bal provided a substantial addition above initial ciwa in predicting later admission. no other variables added significantly to the prediction of later admission. to determine the interaction between presenting bal and ciwa scores, we ran a repeated measures analysis of the first five ciwa scores (from presentation to 4 hours later), using bal split into low (bal < 0.10) and high (bal > 0.10) groups (see figure). their interaction was significant, F(1, 93) = 11.86, p < 0.001, η² = 0.11. 
those presenting with higher initial bal had suppressed ciwa scores that rose precipitously as the alcohol cleared. those with low presenting bal showed a decline in ciwa over time. conclusion: initial assessment using the common assessment tool ciwa is aided significantly by bal assessment. patients with higher presenting bal are at higher risk for progression to serious alcohol withdrawal symptoms. objectives: to describe patient and visitor characteristics and perspectives on the role of visitors in the ed, and to determine the effect of visitors on ed and hospital outcome measures. methods: this cross-sectional study was done in an 81,000-visit urban ed, and data collection was attempted for all patients over a consecutive 96-hour period from august 25 to 28, 2011. trained data collectors were assigned to the ed continuously for the study period. patients assigned to a rapid care section of the ed (24%) were excluded. a visitor was defined as a person other than a health care provider (hcp) or hospital staff present in a patient's room at any time. patient perspectives on visitors were assessed in the following domains: transportation, emotional support, physical care, communication, and advocating for the patient. ed and hospital outcome measures pertaining to ed length of stay (los) and charges, hospital admission rate, and hospital los and charges were obtained from patient medical records and hospital billing. data analyses included frequencies, student's t-tests for continuous variables, and chi-square tests of association for categorical variables. all tests for significance were two-sided. objectives: to examine the effect of sunday alcohol availability on ethanol-related visits and alcohol withdrawal visits to the ed. methods: the study design was a retrospective before-after study using electronically archived hospital data at an urban, safety net hospital. all adult non-prisoner ed visits from 1/1/2005 to 12/31/2009 were analyzed. 
an ethanol-related ed visit was defined by icd-9 codes related to alcohol (291.x, 303.x, 305.0, 980.0). an alcohol withdrawal visit was defined by icd-9 codes for delirium tremens (291.0), alcohol psychosis with hallucination (291.3), and ethanol withdrawal (291.81). we generated a ratio of ethanol-related ed visits to total ed visits (ethanol/total) and a ratio of alcohol withdrawal ed visits to total ed visits (withdrawal/total). a day was redefined as 8 am to 8 am. the ratios were averaged within the four seasons to account for seasonal variation. data from summer 2008 were dropped as they spanned the law change. we stratified data into sunday and non-sunday days prior to analysis to isolate the effects of the law change. we used multivariable linear regression to estimate the association of each ratio with the law change while adjusting for time and season. each ratio was modeled separately. the interaction between time and the law change was assessed using p < 0.05. results: during the study there were a total of 212,189 ed visits, including 12,042 (6% of total) ethanol-related visits and 5,496 (3% of total) alcohol withdrawal visits. unadjusted ratios in seasonal blocks are plotted in the figure with associated 95% ci and best-fit regression lines for before and after the law change, respectively. after adjusting for time and season in the multivariable linear regression, we found no significant association of either ethanol/total or withdrawal/total with the law change. this remained true for both sunday and non-sunday data. all interactions assessed were not significant. conclusion: the change in colorado law to allow the sale of full-strength alcoholic beverages on sundays did not significantly affect ethanol-related or alcohol withdrawal ed visits. background: olanzapine is a second-generation antipsychotic (sga) with actions at the serotonin/histamine receptors. 
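the icd-9 visit classification in the sunday-alcohol study above (ethanol-related: 291.x, 303.x, 305.0, 980.0; withdrawal: 291.0, 291.3, 291.81) is a straightforward prefix/exact-code lookup; this is an illustrative sketch, with the names assumed:

```python
# per the abstract: 291.x and 303.x are prefix families,
# 305.0 and 980.0 are specific subcategories
ETHANOL_PREFIXES = ("291", "303", "305.0", "980.0")
WITHDRAWAL_CODES = {"291.0", "291.3", "291.81"}

def is_ethanol_related(icd9: str) -> bool:
    """True if the visit's ICD-9 code falls in the ethanol-related set."""
    return icd9.startswith(ETHANOL_PREFIXES)

def is_withdrawal(icd9: str) -> bool:
    """True if the code is one of the three withdrawal diagnoses."""
    return icd9 in WITHDRAWAL_CODES
```

every withdrawal code is by construction also ethanol-related, matching the study's nested ratios (withdrawal/total is a subset of ethanol/total).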
post-marketing reports and a case report have documented dangerous lowering of blood pressure when this antipsychotic is paired with benzodiazepines, but a recent small study found no larger decreases in blood pressure compared to another antipsychotic, haloperidol. decreases in oxygen saturation, however, were larger when olanzapine was combined with benzodiazepines in alcohol-intoxicated patients. it is unclear whether these vital sign changes are associated with the intramuscular (im) route only. objectives: the assessment of vital signs following administration of either oral (po) or im olanzapine, either with or without benzodiazepines (benzos), and with or without concurrent alcohol intoxication. methods: this is a structured retrospective chart review of all patients who received olanzapine in an academic medical center ed from 2004-2010 and who had vital signs documented both before medication administration and within four hours afterwards. vital sign changes were calculated as the pre-dose value minus the lowest post-dose value within 4 hours, and were analyzed in an anova with route (im/po), benzo use (+/−), and alcohol use (+/−) as factors. the significance level was set at 0.05. results: there were 482 patients who received olanzapine over the study period. a total of 275 patients (225 po, 50 im) met inclusion criteria. systolic blood pressures decreased across all groups as patients' agitation subsided. neither the route of administration, concurrent use of benzos, nor the use of alcohol was associated with significant changes in systolic bp (p = ns for all comparisons; see figure 1). decreases in oxygen saturation, however, were significantly larger for alcohol-intoxicated patients who subsequently received im olanzapine + benzos compared to other groups (route: p < 0.001; alcohol: p < 0.01; route × alcohol: p < 0.001; route × benzos × alcohol: p < 0.05; see figure 2). 
conclusion: alcohol and benzos are not associated with significant decreases in blood pressure after po olanzapine, but im olanzapine + benzos is associated with potentially significant oxygen desaturation in patients who are intoxicated. intoxicated patients may have differential effects with the use of im sgas such as olanzapine when combined with benzos, and should be studied separately in drug trials. patients with a psychiatric diagnosis. rasha buhumaid, jessica riley, janice blanchard. george washington university, washington, dc. background: literature suggests that frequent emergency department (ed) use is common among persons with a mental health diagnosis. few studies have documented risk factors associated with increased utilization among this population. objectives: to understand the demographic characteristics of frequent users of the emergency department and describe characteristics associated with their visits. it was hypothesized that frequent visitors would have a higher rate of medical comorbidities than infrequent visitors. methods: this was a retrospective study of patients presenting to an urban, academic emergency department in 2009. a cohort of all patients with a mental health-related final icd-9 coded diagnosis (axis i or axis ii) was extracted from the electronic medical record. using a standard abstraction form, a medical chart review collected information about medical comorbidities, substance abuse, race, age, sex, and insurance coverage, as well as the diagnosis, disposition, and time of each visit. results: our sample consisted of 109 frequent users (≥4 visits in a 365-day period) and 442 infrequent users (≤3 visits in a 365-day period). frequent users were more likely to be male (68% vs. 54.5%, p = 0.01) and black (86% vs. 59%, p < 0.0001), and had a higher average number of comorbid conditions (2.0, 95% ci 1.73-2.26) as compared to infrequent users (1.0, 95% ci 0.90-1.10). 
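the ≥4-visits-in-a-365-day-period definition of a frequent user above can be operationalized as a rolling-window count; the abstract does not specify whether the window is rolling or fixed to a calendar year, so this sketch shows one interpretation, with the function name assumed:

```python
def is_frequent_user(visit_days, threshold=4, window=365):
    """True if any rolling `window`-day period contains at least
    `threshold` visits. visit_days: visit dates as integer day
    offsets from an arbitrary origin, in any order."""
    days = sorted(visit_days)
    for i in range(len(days)):
        # count visits falling within `window` days of visit i
        j = i
        while j < len(days) and days[j] - days[i] < window:
            j += 1
        if j - i >= threshold:
            return True
    return False
```

a fixed calendar-year count is simpler but can miss clusters that straddle the year boundary, which is why the rolling form is often preferred for utilization studies.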
a higher percentage of visits in the infrequent user group occurred during the day (49% vs. 38.3%, p < 0.0001), while a higher proportion of visits by frequent users occurred after midnight (24.3% vs. 16.6%, p = 0.0003). visits in the frequent user group were less likely to be for a psychiatric complaint (34.3% vs. 81.2%) and less likely to result in a psychiatric admission (18.3% versus 56.7%) as compared to the infrequent user group (p < 0.0001). conclusion: our data indicate that among patients with psychiatric diagnoses, those who make frequent ed visits have a higher rate of comorbid conditions than infrequent visitors. despite their increased use of the ed, frequent visitors have a significantly lower psychiatric admission rate. many of the visits by frequent users are for non-psychiatric complaints and may reflect poor access to outpatient medical and mental health services. emergency departments should consider interventions to help address social and medical issues among mental health patients who frequently use ed services. background: the world health organization estimates that one million people die annually by suicide. in the u.s., suicide is the fourth leading cause of death between the ages of 10 and 65. many of these patients are seen in the ed, and outpatient visits for depression are also high. no recent analysis has compared these groups. objectives: to determine if there is a relationship between the incidence of suicidal and depressed patients presenting to emergency departments and the incidence of depressed patients presenting to outpatient clinics from 2002-2008. the secondary objective is to analyze trends in suicidal patients in the ed. methods: we used nhamcs (national hospital ambulatory medical care survey) and namcs (national ambulatory medical care survey), national surveys completed by the centers for disease control, which provide a sampling of emergency department and outpatient visits, respectively. 
for both groups, we used mental-health-related icd-9-cm codes, e codes, and reasons for visit. we compared suicidal and depressed patients who presented to the ed to those who presented to outpatient clinics. our subgroup analyses included age, sex, race/ethnicity, method of payment, regional variation, and urban versus rural distribution. results: ed visits for depression (1.14%) and suicide attempts (0.49%) remained stable over the years, with no significant linear trend. however, office visits for depression significantly decreased from 3.14% of visits in 2002 to 2.65% of visits in 2008. non-latino whites had a higher percentage of ed visits for depression (1.25%) and suicide attempt (0.57%) (p < 0.0001), and a higher percentage of office visits for depression than all other groups. among patients age 50-69 years, ed visits for suicide attempt significantly increased from 0.12% in 2002 to 0.44% in 2008. homeless patients had a higher percentage of ed visits for depression (6.5%) and suicide attempt ( background: for potentially high-risk ed patients with psychiatric complaints, efficient ed throughput is key to delivering high-quality care and minimizing time spent in an unsecured waiting room. objectives: we hypothesized that adding a physician in triage would improve ed throughput for psychiatric patients. we evaluated the relationship between the presence of an ed triage physician and waiting room (wr) time, time to first physician order, time to ed bed assignment, and time spent in an ed bed. methods: the study was conducted from 11/2009-2/2011 at an academic ed with 55,000 annual visits and a dedicated on-site emergency psychiatric unit. we performed a pre/post retrospective observational cohort study using administrative data, including weekend visits from noon-10 pm, 8 months pre and post addition of weekend triage physicians. 
after adjusting for patient age, sex, insurance status, emergency severity index score, mode of arrival, ed occupancy rate, wr count, boarding count, and average wr los, multiple linear regression evaluated the relationship between the presence of a triage physician and four ed throughput outcomes: time spent in the wr, time to first order, time spent in an ed bed, and total ed los. results: 565 visits met inclusion criteria, 280 in the 8 months before and 285 in the 8 months after physicians were assigned to triage on weekends. table 1 reports demographic data; multivariate analysis results are found in table 2. the presence of a triage physician was associated with an 8-minute (95% ci 0.6-15.2) increase in wr time and no associated change in time to first order, time spent in an ed bed, or overall ed los. conclusion: use of triage physicians has been reported to decrease the time patients spend in an ed bed and improve ed throughput. however, for patients with psychiatric complaints, our analysis revealed a slight increase in wr time without evident change in time to first order, time spent in an ed bed, or total ed los. improvements in ed throughput for psychiatric patients will likely require system-level changes, such as reducing ed boarding and improving lab efficiency to speed the process of medical clearance and reduce time spent in the unsecured wr. these findings may not be generalizable to eds without a dedicated ed psychiatric unit with full-time social workers to assist with disposition. initial assessment included ciwa scoring, repeated hourly, as well as other variables (see table 1). treatment and admission to the inpatient hospital were indicated for a ciwa score of 10 or higher. statistical analysis was performed utilizing repeated measures general linear modeling for ciwa scores and anova for all other variables. 
results: there were 123 patients who fit the inclusion criteria, with 9 admitted for treatment at initial intake and another 27 admitted during the following 10 hours. the table below compares the three most prevalent ethnic populations seen at our hospital. native americans presented at a significantly younger age (p < 0.05) than the other two ethnicities. initial ciwa scores taken on admission were significantly lower in the native american group than in the other two groups (p < 0.05), and at 1 hour a difference existed but failed to reach significance. repeated measures analysis indicates that ciwa scores progressed in a u-shaped curvilinear fashion (see figure 1). conclusion: initial assessment utilizing ciwa scores appears to be affected by ethnicity. care must be taken when assessing and making decisions on a single initial ciwa score. further research is needed in this area, as our numbers are small and differences might be seen in subsequent scoring. in addition, our study consists of primarily male patients and does not include african-american patients. background: age is a risk factor for adverse outcomes in trauma, yet evidence supporting the use of specific age cut-points to identify seriously injured patients for field triage is limited. objectives: to evaluate under-triage by age, empirically examine the association between age and serious injury for field triage, and assess the potential effect of mandatory age criteria. methods: this was a retrospective cohort study of injured children and adults transported by 48 ems agencies to 105 hospitals in 6 regions of the western u.s. from 2006-2008. hospital records were probabilistically linked to ems records using trauma registries, emergency department data, and state discharge databases. serious injury was defined as an injury severity score (iss) ≥16 (the primary outcome). 
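given that iss ≥16 definition, under-triage (seriously injured but triage-negative) and over-triage (triage-positive but not seriously injured) can be computed directly from the linked records; this is an illustrative sketch with an assumed function name and data layout, not the study's code:

```python
def triage_rates(patients):
    """Under-triage: fraction of seriously injured (ISS >= 16) patients
    who were triage-negative. Over-triage: fraction of not-seriously
    injured patients who were triage-positive.
    patients: iterable of (iss, triage_positive) pairs."""
    serious = [pos for iss, pos in patients if iss >= 16]
    not_serious = [pos for iss, pos in patients if iss < 16]
    under = sum(1 for pos in serious if not pos) / len(serious)
    over = sum(1 for pos in not_serious if pos) / len(not_serious)
    return under, over
```

the two rates trade off, which is the study's central point: mandatory age criteria lower under-triage while inflating over-triage.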
we assessed under-triage (triage-negative patients with iss ≥16) by age decile and by different mandatory age criteria, and used multivariable logistic regression models to test the association (linear and non-linear) between age and iss ≥16, adjusted for important confounders. results: 260,027 injured patients were evaluated and transported by ems over the 3-year period. under-triage increased markedly for patients over 60 years, reaching 58% for those over 90 years (figure 1). mandatory age triage criteria decreased under-triage while substantially increasing over-triage: one iss ≥16 patient identified for every 65 additional patients triaged to major trauma centers. among patients not identified by other criteria, age had a strong non-linear association with iss ≥16 (p < 0.01); the probability of serious injury steadily increased after 30 years, becoming more notable after 60 years (figure 2). conclusion: under-triage in trauma increases in patients over 60 years, which may be reduced with mandatory age criteria at the expense of system efficiency. among patients not identified by other criteria, the probability of serious injury steadily increased after 30 years, though there was no age at which risk abruptly increased.

background: although limited resuscitation with hemoglobin-based oxygen carriers (hbocs) improves survival in several polytrauma models, including those of traumatic brain injury (tbi) with uncontrolled hemorrhage (uh) via liver injury, their use remains controversial. objectives: we examine the effect of hboc resuscitation in a swine polytrauma model with uh by aortic tear ± tbi. we hypothesize that limited resuscitation with hboc would offer no survival benefit and would have similar effects in a model of uh via aortic tear ± tbi.
methods: anesthetized swine subjected to uh inflicted via aortic tear ± fluid percussion tbi underwent equivalent limited resuscitation with hboc, lr, or hboc+nitroglycerin (ntg) (vasoattenuated hboc) and were observed for 6 hours. comparisons were between tbi and no-tbi groups, with adjustment for resuscitation fluid type, using two-way anova with interaction and tukey-kramer adjustment for individual comparisons. results: there was no independent effect of tbi on survival time after adjustment for fluid type (anova, tbi term p = 0.59) and there was no interaction between tbi and resuscitation fluid type (anova interaction term p = 0.12). there was a significant independent effect of fluid type on survival time (anova, p = 0.005).

background: intracranial hemorrhage (ich) after a head trauma is a problem frequently encountered in the ed. an elevated inr is recognized as a risk of bleeding. however, in a patient with an inr in the normal range, a level associated with a lower risk of ich is not known. objectives: the aim of this study was to identify an inr threshold that could predict a decreased risk of an ich after a head trauma in patients with a normal inr. it is hypothesized that there is a threshold at which the likelihood of bleeding decreases significantly. methods: we did a study using data from a registry of patients with mild to severe head trauma (n = 3356) evaluated in a level i trauma center in canada between march 2008 and february 2011. all the patients with a documented scan interpreted by a radiologist and a normal inr, defined as a value less than 1.6, were included. we determined the correlation between inr value, binned by 0.1, and the proportion of patients with an ich. a threshold was defined by consensus as an abrupt change of more than 10% in the percentage of patients with ich. univariate frequency distribution was tested with pearson's chi-square test.
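the binning and "abrupt change" rule in the inr methods above can be sketched as follows; this is a hedged illustration only (we read "abrupt change of more than 10%" as a jump of more than 10 percentage points between adjacent 0.1-wide bins, and the function names and sample data are invented, not the study's):

```python
import math

def ich_rate_by_inr_bin(records):
    """records: iterable of (inr, had_ich) pairs; returns {bin_floor: ich_rate}."""
    bins = {}  # bin lower edge -> (n patients, n with ich)
    for inr, had_ich in records:
        b = math.floor(inr * 10 + 1e-9) / 10  # 0.1-wide bins: 0.9, 1.0, 1.1, ...
        n, k = bins.get(b, (0, 0))
        bins[b] = (n + 1, k + int(had_ich))
    return {b: k / n for b, (n, k) in sorted(bins.items())}

def abrupt_change_threshold(rates, jump=0.10):
    """return the lower edge of the first bin whose ich rate exceeds the
    previous bin's rate by more than `jump` (10 percentage points)."""
    keys = sorted(rates)
    for prev, cur in zip(keys, keys[1:]):
        if rates[cur] - rates[prev] > jump:
            return cur
    return None

# invented data: 20% ich in the 0.9-1.0 bin, 50% in the 1.0-1.1 bin
records = ([(0.95, False)] * 4 + [(0.95, True)]
           + [(1.05, True)] * 3 + [(1.05, False)] * 3)
rates = ich_rate_by_inr_bin(records)
print(abrupt_change_threshold(rates))  # 1.0
```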
logistic regression analysis was then used to study the effects of inr on ich with the following confounding factors: age, sex, and intake of warfarin, clopidogrel, or aspirin. results are presented with 95% confidence intervals. results: 751 patients met the inclusion criteria. the mean age was 55.3 ± 29.9 years and 65% were men. 267 patients (35.6%) had an ich on brain scan. we found a significantly lower risk of ich at a threshold of inr less than 1.0 (p < 0.001, univariate or = 0.37, 95% ci 0.25-0.54) and a strong correlation between inr value and the risk of bleeding (r² = 0.8987). in fact, after adjustment for confounding variables, every 0.1 inr increase was associated with an increased risk of having an ich (or 1.50; 95% ci 1.31-1.72). conclusion: we were able to demonstrate an inr threshold under which the probability of ich was significantly lower. we also found a strong association between the risk of bleeding and the increase in inr within the normal range, suggesting that clinicians should not be falsely reassured by a normal inr. our results are limited by the fact that this is a retrospective study and that a small proportion of traumatic brain injured patients in our database had no scan or inr at their ed visit. a prospective cohort study would be needed to confirm our results.

background: increasingly, patients with tbi are being seen and managed in the emergency neurology setting. knowing which early signs are associated with prognosis can be helpful in directing acute management. objectives: to determine whether any factors early in the course of head trauma are associated with short-term outcomes including inpatient admission, in-hospital mortality, and return to the hospital within 30 days. methods: this irb-approved study is a retrospective review of patients with head injury presenting to our tertiary care academic medical center during a 9-month period.
the dataset was created using redcap, a data management solution hosted by our medical school's center for translational science institute. results: the median age of the cohort (n = 500) was 26 (iqr 15-48 yrs), with 62% being male. 84% had a gcs of 13-15 (mild tbi), 3% 9-13 (moderate tbi), and 13% gcs < 8 (severe tbi). 39% of patients were admitted to the hospital. the median length of hospital stay was 2 days, with an iqr of 1-5 days. of those admitted, 53% had an icu stay as well. the median icu los was also 2 days, with an iqr of 1-6 days. twenty-nine (6%) patients died during their hospital stay. lower gcs was predictive of inpatient admission (p = 0.0003) as well as icu days (p < 0.0001). significant predictors of re-admission to the hospital within 30 days included hypotension (p = 0.002) upon initial presentation. the prehospital and ed gcs scores were not statistically significant. significant predictors of in-hospital death in a model controlling for age included bradycardia (p = 0.0042), hyperglycemia (p = 0.0040), and lower gcs (p = 0.0003). the incidence of bradycardia (hr < 60) was 4.4%. conclusion: early hypotension, hyperglycemia, and bradycardia, along with lower initial gcs, are associated with a significantly higher likelihood of hospital admission, including icu admission, as well as in-hospital death and re-admission.

background: over 23,000 people per day require treatment for ankle sprains, resulting in lost workdays and training for athletes. platelet rich plasma (prp) is an autologous concentration of platelets which, when injected into the site of injury, is thought to improve healing by promoting inflammation through growth factor and cytokine release. studies to date have shown mixed results, with few randomized or placebo-controlled trials. the lower extremity functional scale (lefs) is a previously validated objective measure of lower extremity function. objectives: is prp helpful in acute ankle sprains in the emergency department?
methods: prospective, randomized, double-blinded, placebo-controlled trial. patients with severe ankle sprains and negative x-rays were randomized to trial or placebo. severe was defined as marked swelling and ecchymosis and inability to bear weight. both groups had 50 cc of blood drawn. trial group blood was centrifuged with a magellan autologous platelet separator (arteriocyte, cleveland) to yield 3-4 cc of prp. prp, along with 0.5 cc of 1% lidocaine and 0.5 cc of 0.25% bupivacaine, was injected at the point of maximum tenderness by a blinded physician under ultrasound guidance. control group blood was discarded and participants were injected in a similar fashion, substituting sterile 0.9% saline for prp. both groups had visual analog scale (vas) pain scores and lefs recorded on days 0, 3, 8, and 30. all participants had a posterior splint and were made non-weight-bearing for 3 days, after which they were re-examined, had their splint removed, and were asked to bear weight as tolerated. participants were instructed not to use nsaids during the trial. results: 1156 patients were screened and 37 were enrolled. four withdrew before prp injection was complete. eighteen were randomized to prp and 15 to placebo. see tables for results. vas and lefs are presented as means with sd in parentheses. demographics were not statistically different between groups. conclusion: in this small study, prp did not appear to offer benefit in either pain control or healing. both groups had improvement in their pain and functionality and did not differ significantly during the study period. limitations include small study size and a large number of participant refusals.

methods: a structured chart review of all icd-9 radius fracture coded charts spanning march 18, 2010 to july 17, 2011 was conducted. specific variable data were collected and categorized as follows: age, moi, body mass index, and fracture location.
the charts were reviewed by two medical students, with 10% of the charts reviewed by both students to confirm inter-rater reliability. frequencies and inter-quartile ranges were determined. comparisons were made with fisher's exact test and multiple logistic regression. results: 187 charts met inclusion criteria. 46 charts were excluded due to one of the following reasons: no fracture or no x-ray (14), isolated ulnar fracture (19), or undocumented or penetrating moi (13). of the analyzed patients (n = 141), distal radius fractures were most common (66%), followed by proximal (32%) and midshaft (2%). chart reviewers were found to be reliable (κ = 1). age and moi were significantly associated with fracture location (see table). ages 18-54 and bike accidents were more strongly associated with proximal radius fractures (odds ratio: 12 [2-94] and 5 [2-13], respectively). conclusion: patients presenting to our inner city ed with a radius fracture are more likely to have a distal fracture. adults 18-54 and bike accidents had a significantly higher incidence of proximal fractures than other ages or mois.

background: trauma centers use guidelines to determine the need for a trauma surgeon in the ed on patient arrival. a decision rule from loma linda university that includes penetrating injury and tachycardia was developed to predict which pediatric trauma patients require emergent intervention, and thus are most likely to benefit from surgical presence in the ed. objectives: our goal was to validate the loma linda rule (llr) in a heterogeneous pediatric trauma population and to compare it to the american college of surgeons' major resuscitation criteria (mrc). we hypothesized that the llr would be more sensitive than the mrc for identifying the need for emergent operative or procedural intervention.
methods: we performed a secondary analysis of prospectively collected trauma registry data from two urban level i pediatric trauma centers with a combined annual census of approximately 115,000 visits. consecutive patients <15 years old with blunt or penetrating trauma from 1993 through 2010 were included. patient demographics, injury severity scores (iss), times of ed arrival and surgical intervention, and all variables of both rules were obtained. the outcome (emergent operative intervention within 1 hour of ed arrival, or ed cricothyroidotomy or thoracotomy) was confirmed by trained, blinded abstractors. sensitivities, specificities, and 95% confidence intervals (cis) were calculated for both rules. results: 8,079 patients were included, with a median age of 5.9 years and a median iss of 9. emergent intervention was required in 51 patients (0.6%). the llr had a sensitivity ranging from 59.4%-59.7% (95% ci: 25.9%-93.5%) and specificity ranging from 49.5%-86.5% (95% ci: 21.6%-82.1%) between the two institutions. the mrc had a sensitivity ranging from 73.6%-81.6% (95% ci: 54.7%-95.1%) and specificity ranging from 69.4%-84.7% (95% ci: 54.7%-90.1%) between institutions. conclusion: emergent intervention is rare in pediatric trauma patients. the mrc was more sensitive for predicting the need for emergent intervention than the llr. neither set of criteria was sufficiently accurate to recommend their routine use for pediatric trauma patients.

droperidol for sedation of acute behavioural disturbance
leonie a. calver 1, colin page 2, michael downes 3, betty chan 4, geoffrey k. isbister 1; 1 calvary mater newcastle and university of newcastle, newcastle, australia; 2 princess alexandra hospital, brisbane, australia; 3 calvary mater newcastle, newcastle, australia; 4 prince of wales hospital, sydney, australia

background: acute behavioural disturbance (abd) is a common occurrence in the emergency department (ed) and is a risk to staff and patients.
there remains little consensus on the most effective drug for sedation of violent and aggressive patients. prior to the food and drug administration's black box warning, droperidol was commonly used and was considered safe and effective. objectives: this study aimed to investigate the effectiveness of parenteral droperidol for sedation of abd. methods: as part of a prospective observational study, a standardised protocol using droperidol for the sedation of abd was used.

acute and delayed behavioral deficits were demonstrated in this rat model of co toxicity, which parallels the neurocognitive deficit pattern observed in humans (see figure). similar to prior studies, pathologic analysis of brain tissue demonstrated the highest percentage of necrotic cells in the cortex, pyramidal cells, and cerebellum. the collected data are summarized in the table. we have developed an animal model of severe co toxicity evidenced by behavioral deficits and neuronal necrosis. future efforts will compare neurologic outcomes in severely co poisoned rats treated with hypothermia and 100% inspired o2 versus hbo to normothermic controls treated with 100% inspired o2.

multi-day ultramarathons are increasing in popularity, attracting more than 70,000 annual participants worldwide. prior studies have consistently documented renal function impairment, but only after race completion. the incidence of renal injury during these multi-day ultramarathons is currently unknown. this is the first prospective cohort study to evaluate the incidence of acute kidney injury (aki) in runners during a multi-day ultramarathon foot race. objectives: to assess the effect of inter-stage recovery versus cumulative damage on resulting renal function during a multi-day ultramarathon. methods: demographic and biochemical data, gathered via phlebotomy and analyzed by istat® (abbott, nj), were collected at the start and finish of days 1 (25 miles), 3 (75 miles), and 5 (140 miles) during racing the planet's® 150-mile, 7-day self-supported desert ultramarathons.
pre-established rifle criteria using creatinine (cr) and glomerular filtration rate (gfr) defined aki as "no injury" (cr <1.5x normal, decrease of gfr <25%), "risk" (cr 1.5x normal, decrease of gfr by 25-49%), and "injury" (cr 2x normal, decrease of gfr by 50-75%). results: thirty racers (76% male) with a mean (± sd) age of 39 ± 10 years were studied during the 2008 sahara (n = 7, 23.3%), 2008 gobi (n = 10, 33%), and 2009 namibia (n = 13, 43.3%) events. the average decrease in gfr from day 1 start to day 1 finish was 28 ± 25 (p < 0.001, 95% ci 18.5-37.6); from day 1 start to day 3 finish, 29.6 ± 20.1 (p < 0.001, 95% ci 18.4-40.7); and from day 1 start to day 5 finish, 30.9 ± 17.5 (p < 0.001, 95% ci 20.8-41). the proportions of runners categorized as risk and injury for aki were 44.8% and 10% after stage 1; 67% and 13% after stage 3; and 57.1% and 7.1% after stage 5. conclusion: the majority of participants developed significant levels of renal impairment despite recovery intervals. given the changes in renal function, potentially harmful non-steroidal anti-inflammatory drugs should be minimized to prevent exacerbating acute kidney injury.

background: more than 10% of the elderly abuse prescription drugs, and emergency medicine providers frequently struggle to identify features of opioid addiction in this population. the prescription drug use questionnaire (pduqp) is a validated, 42-item, patient-administered tool developed to help health care providers better identify problematic opioid use, or dependence, in patients who receive opioids for the treatment of chronic pain. objectives: to identify the prevalence of prescription drug misuse features in elderly ed patients. methods: this cross-sectional, observational study was conducted between 07/2011 and 08/2011 in the ed of an urban, university-affiliated community hospital that serves a large geriatric population. all patients aged 65 to 89 inclusive were eligible, and were recruited on a convenience basis.
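referring back to the ultramarathon aki study above, the three rifle-based categories it describes can be sketched as follows (thresholds are taken from that abstract; the function name is ours, and the full rifle scheme includes further categories — failure, loss, end-stage — not used there):

```python
def rifle_category(cr_multiple, gfr_decrease_pct):
    """cr_multiple: measured cr / baseline cr; gfr_decrease_pct: 0-100."""
    if cr_multiple >= 2 or 50 <= gfr_decrease_pct <= 75:
        return "injury"    # cr 2x normal, or gfr decreased 50-75%
    if cr_multiple >= 1.5 or 25 <= gfr_decrease_pct < 50:
        return "risk"      # cr 1.5x normal, or gfr decreased 25-49%
    return "no injury"     # cr < 1.5x normal and gfr decreased < 25%

print(rifle_category(1.2, 10))  # no injury
print(rifle_category(1.6, 30))  # risk
print(rifle_category(2.3, 55))  # injury
```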
exclusion criteria included known dementia and critical illness. outcomes of interest included self-reported history of prior prescription opioid use, substance abuse history, aberrant medication-taking behaviors, and pduqp results. results: one hundred patients were approached for participation. two were excluded for inability to read english, three were receiving analgesia for metastatic cancer, 28 had never taken a prescription opioid, and seven refused to participate beyond pre-screening. sixty patients completed the study (see table 1). of those, 13.3% reported four or more visits within 12 months; chronic pain was reported by 56.7%; debilitating pain by 55.9%; prior pain management referral by 18.3%; and storing opioids for future use by 30%. seventeen patients reported current prescription opioid use and were administered the pduqp (see figure). in this population, 47.1% thought their pain was not being adequately treated; 41.2% reported having to increase the amount of pain medication they were taking over the prior 6 months; 35.3% saved up future pain medication; 11.8% had doctors refuse to give them pain medication for fear that the patient would abuse the prescription opioids; and 29.4% reported having a previous drug or alcohol problem. conclusion: screening instruments, such as the pduqp, facilitate identification of geriatric patients with features of opioid misuse. a high proportion of patients in this study save opioids for further use. interventions for safe medication disposal may decrease access to opioids and subsequent morbidity.

age extremes, male sex, and several chronic health conditions were associated with increased odds of heat stroke, hospital admission, and death in the ed by a factor of 2-3. chronic hematologic disease (e.g. anemia) was associated with a 10- to 12-fold increase in adjusted odds of each of these outcomes.
conclusion: hri imposes a substantial public health burden, and a wider range of chronic conditions confer susceptibility than previously thought. males, older adults, and patients with chronic conditions, particularly anemia, are likely to have more severe hri, be admitted, or die in the ed.

background: carbon monoxide (co) poisoning is a significant cause of death worldwide. co, produced by the incomplete combustion of hydrocarbons, has many toxic effects, especially on the heart and brain. co binds strongly to cytochrome oxidase, hemoglobin, and myoglobin, causing hypoxia of organs and tissues. co converts hemoglobin to carboxyhemoglobin, severely impairing oxygen transport through the body and causing severe hypoxia. objectives: the aim of this study is to investigate the levels of s100b and neuron specific enolase (nse) measured both at admission and at the sixth hour of hyperbaric or normobaric oxygen therapy in patients with a diagnosis of co poisoning. methods: the study is designed as a prospective observational laboratory study. forty patients were enrolled in the study: 20 underwent normobaric oxygen therapy (nbot) and the other 20 underwent hyperbaric oxygen therapy (hbot). levels of s100b and nse were measured both at admission and at the sixth hour after admission in all patients. demographic data, clinical characteristics, and outcome measures were recorded. all data were statistically analyzed. results: in both treatment groups, mean levels of nse after therapy were significantly lower than admission levels. although levels of nse measured before and 6 hours after treatment were higher in the hbot group, the difference between groups was not statistically significant (p > 0.05). in both treatment groups, mean levels of s100b after therapy were likewise significantly lower than admission levels.
although levels of s100b measured before and 6 hours after treatment were higher in the hbot group, the difference between groups was not statistically significant (p > 0.05). additionally, while levels of s100b measured after treatment in the hbot group were lower compared to the nbot group, the difference between groups was also not statistically significant (p > 0.05). conclusion: according to our study as well as previous studies, levels of s100b and nse, as markers of brain injury, are elevated in co poisoning and decrease with therapy. the decrease in levels of s100b is more pronounced. according to our results, s100b and nse may be useful markers in co poisoning; however, we did not find that they provide additional value in determining hbot indications or cohb levels in the management of patients with a diagnosis of co poisoning.

objectives: this study was conducted to determine if neurons in the dmh, and its neighbor the paraventricular hypothalamus (pvn), were likewise involved in mdma-mediated neuroendocrine responses, and if serotonin 1a receptors (5-ht1a) play a role in this regional response. methods: in both experiments, male sprague dawley rats (n = 5-12/group) were implanted with bilateral cannulas targeting specific regions of the brain, i.v. catheters for drug delivery, and i.a. catheters for blood withdrawal. experiments were conducted in raturn cages, which allow blood withdrawal and drug administration in freely moving animals while recording their locomotion. in the first experiment, rats were microinjected into the dmh, the pvn, or a region between, with the gabaa agonist muscimol (80 pmol/100 nl/side) or pbs (100 nl) and 5 min later were injected with either mdma (7.5 mg/kg i.v.) or an equal volume of saline. blood was withdrawn prior to microinjections and 15 minutes after mdma for ria measurement of plasma acth. locomotion was recorded throughout the experiment.
in a separate experiment of identical design, either the 5-ht1a antagonist way 100635 (way, 5 nmol/100 nl/side) or saline was microinjected, followed by i.v. injection of mdma or saline. in both experiments, increases in acth and distance traveled were compared between groups using anova. results: when compared to controls, microinjections of muscimol into the dmh, pvn, or the area in between attenuated plasma increases in acth and locomotion evoked by mdma. when microinjected into the dmh or pvn, way had no effect on acth, but when injected into the region of the dmh it significantly increased locomotion.

background: poor hand-offs between physicians when admitting patients have been shown to be a major source of medical errors. objectives: we propose that training of emergency medicine (em) and internal medicine (im) residents in a standardized admissions protocol would improve the quality and quantity of communication of vital patient information. methods: em and im residents at a large academic center developed an evidence-based admission handover protocol termed the '7ps' (table 1). em and im residents received '7ps' protocol training. im residents recorded prospectively how well each of the seven ps was communicated during each admission pre- and post-intervention. im residents also assessed the overall quality of the handover using a likert scale. the primary outcome was the change in the number of 'ps' conveyed by the em resident to the accepting im resident. data were collected for six weeks before and then for six weeks starting two weeks after the educational intervention. results: there were 78 observations recorded in the pre-intervention (control) group and 48 observations in the post-intervention group. for each of the seven 'ps', the percentage of observations where all of the information was communicated is shown in table 2. the communication of 'ps' increased following the intervention.
this rise was statistically significant for patient information and pending tests. in the control group, the mean number of total communicated ps was 5; in the intervention group, the mean increased to 6 (p < 0.005). the quality of the handover communication had a mean rating of 3.9 in the control group and 4.3 in the intervention group (p < 0.05). conclusion: this educational intervention in a cohort of em and im residents improved the quality and quantity of vital information communicated during patient handovers. the intervention was statistically significant for patient information transfer and tests pending. the results are limited by study size. based on our preliminary data, an agreed-upon handover protocol with training improved the amount and quality of communication during patients' hospital admission on simple items that had likely been taken for granted as routinely transmitted.

we recruited a convenience sample of residents and students rotating in the pediatric emergency department. a two-sided form had the same seven clinical decisions on each side: whether to order blood, urine, or spinal fluid tests, imaging, iv fluids, antibiotics, or a consult. the rating choices were: definitely not, probably not, probably would, and definitely would. trainees rated each decision after seeing a patient but before presenting to the preceptor, who, after evaluating the patient, rated the same seven decisions on the second side of the form. the preceptor also indicated the most relevant decision (mrd) for that patient. we examined the validity of the technique using hypothesis testing; we posited that residents would have a higher degree of concordance with the preceptor than would medical students. this was tested using dichotomized analyses (accuracy, kappa) and roc curves with the preceptor decision as the gold standard. results: thirty-one students completed 130 forms (median 4 forms; iqr 2,6) and 23 residents completed 206 (median 6; iqr 3,12).
preceptors included 24 attending physicians and 3 fellows (median 9 forms; iqr 4,21). students were concordant with preceptors on 70% of mrds (κ = 0.38), while residents agreed on 79.6% (κ = 0.59; p = 0.045). roc analysis revealed significant differences between students and residents in the auc for the mrd (0.84 vs 0.72; p = 0.03). conclusion: this measure of trainee-preceptor concordance requires further research but may eventually allow for assessment of trainee clinical decision-making. it also has the pedagogical advantage of promoting independent trainee decision-making.

background: basic life support (bls) and advanced cardiac life support (acls) are integral parts of emergency cardiac care. this training is usually reserved in most institutions for residents and faculty. the argument can be made to introduce bls and acls training earlier in the medical student curriculum to enhance acquisition of these skills. objectives: the goal of the survey was to characterize the perceptions and needs of graduating medical students in regard to bls and acls training. methods: this was a survey-based study of graduating fourth year medical students at a u.s. medical school. the students were surveyed before voluntarily participating in a student-led acls course in march of their final year. the surveys were distributed before starting the training course. both bls and acls training, comfort levels, and perceptions were assessed in the survey. results: of the 182 students in the graduating class, 152 participated in the training class, with 109 (72%) completing the survey. 50% of students entered medical school without any prior training and 49% started clinics without training. 83.5% of students reported witnessing an average of 3.0 in-hospital cardiac arrests during training (range of 0-20). overall, students rated their preparedness for adult resuscitations 2.0 (sd 1.0) on a 1-5 likert scale, with 1 being unprepared.
98% and 92% of students believe that bls and acls, respectively, should be included in the medical student curriculum, with a preference for teaching before starting clerkships. 36% of students avoided participating in resuscitations due to lack of training. of those, 95% said they would have participated had they been trained. conclusion: to our knowledge, this is one of the first studies to address the perceptions of and needs for bls and acls training in u.s. medical schools. students feel that bls and acls training is needed in their curriculum and would possibly enhance perceived comfort levels and willingness to participate in resuscitations.

background: professionalism is one of six core competency requirements of the acgme, yet defining and teaching its principles remains a challenge. the "social contract" between physician and community is clearly central to professionalism, so determining the patient's understanding of the physician's role in the relationship is important. because specialization has created more narrowly focused and often quite different interactions in different medical environments, the patient concept of professionalism in different settings may vary as well. objectives: we hoped to determine if patients have different conceptions of professionalism when considering physicians in different clinical environments. methods: patients were surveyed in the waiting room of an emergency department, an outpatient internal medicine clinic, and a pre-operative/anesthesia clinic. the survey contained 18 examples of attributes derived from the american board of internal medicine's eight characteristics of professionalism. participants were asked to rate, on a 10-point scale, the importance that a physician possess each attribute. anova was used to compare the sites for each question. results: of 604 who took the survey, 200 were in the emergency department, 202 were in the medicine clinic, and 202 were in the pre-operative clinic.
females comprised 56% of the study group, and the average age was 49 with a range from 18 to 94. there was a significant difference on the attribute of "providing a portion of work for those who cannot pay"; this was rated higher in the emergency department (p = 0.003). there was near-significance (p = 0.05) on the attribute of "being able to make difficult decisions under pressure," which was rated higher in the pre-op clinic. there was no difference for any of the other questions. the top four professional attributes at each clinical site were the same: "honesty," "excellence in communication and listening," "taking full responsibility for mistakes," and "technical competence/skill"; the bottom two were "being an active leader in the community" and "patient concerns should come before a doctor's family commitments." conclusion: very few differences between clinical sites were found when surveying patient perception of the important elements of medical professionalism. this may suggest a core set of values desired by patients for physicians across specialties.

emergency medicine faculty knowledge of and confidence in giving feedback on the acgme core competencies
todd guth, jeff druck, jason hoppe, britney anderson; university of colorado, aurora, co

background: the acgme mandates that residency programs assess residents based upon six core competencies. although the core competencies have been in place for a number of years, many faculty are not familiar with the intricacies of the competencies and have difficulty giving competency-specific feedback to residents. objectives: the purpose of the study is to determine the extent to which emergency medicine (em) faculty can identify the acgme core competencies correctly and to determine faculty confidence in giving general feedback and core competency-focused feedback to em residents.
Methods: Design and participants: At a single EM department, a survey of 28 faculty members assessed their knowledge of the ACGME core competencies and their confidence in providing feedback to residents. Confidence levels in giving feedback were scored on a Likert scale from 1 to 5. Observations: Descriptive statistics for faculty confidence in giving feedback, identification of professional areas of interest, and identification of the ACGME core competencies were determined. Mann-Whitney U tests were used to make comparisons between groups of faculty, given the small sample size of the respondents. Results: There was a 100% response rate among the 28 faculty members surveyed. Eight faculty members identified themselves as primarily focused on education. Although education-focused faculty scored higher than non-education-focused faculty for all types of feedback (general, constructive, and negative), the only statistically significant difference in confidence was for ACGME core competency-specific feedback: 4.57 versus 2.65 (p < 0.002). While education-focused faculty correctly identified all six ACGME core competencies 94% of the time, not one of the non-education-focused faculty identified all six correctly, and non-education-focused faculty correctly identified three or more competencies only 25% of the time. Conclusion: If residency programs are to assess residents using the six ACGME core competencies, additional faculty development specific to the competencies will be needed to train all faculty on the competencies and on how to give competency-specific feedback to EM residents.

Background: There is no clear consensus as to the most effective tool for measuring resident competency in emergency ultrasound.
Objectives: To determine the relationship between the number of scans performed and scores on image recognition, image acquisition, and cognitive skills as measured by an objective structured clinical examination (OSCE) and a written exam; secondarily, to determine whether image acquisition, image recognition, and cognitive knowledge require separate evaluation methodologies. Methods: This was a prospective observational study in an urban Level I ED with a 3-year ACGME-accredited residency program. All residents underwent an introductory ultrasound course and a one-month ultrasound rotation during their first and second years. Each resident received a written exam and an OSCE to assess psychomotor and cognitive skills. The OSCE had two components: (1) recognition of 22 images, and (2) acquisition of images. A registered diagnostic medical sonographer (RDMS)-certified physician observed each bedside examination. A pre-existing residency ultrasound database was used to collect data on the number of scans. Pearson correlation coefficients were calculated for number of scans, written exam score, and image recognition and image acquisition scores on the OSCE. Results: Twenty-nine residents were enrolled from March 2010 to February 2011, performing an average of 247 scans (range 118-617). There was no significant correlation between number of scans and written exam scores. Number of scans had a moderate correlation with image acquisition (r = 0.42, p = 0.029) and image recognition (r = 0.61, p < 0.01). There was no correlation between image acquisition and image recognition scores (r = 0.175, p = 0.383). There were moderate correlations of image acquisition scores with written scores (r = 0.541, p = 0.025) and of image recognition scores with written scores (r = 0.596, p = 0.019).
Conclusion: The number of scans does not correlate with written test scores but has a moderate correlation with image acquisition and image recognition. This suggests that resident education should include cognitive instruction in addition to scan numbers, and that multiple methods are necessary to examine resident ultrasound competency.

Background: Although emergency physicians must often make rapid decisions that incorporate their interpretation of an ECG, there is no evidence-based description of ECG interpretation competencies for emergency medicine (EM) trainees. The first step in defining these competencies is to develop a prioritized list of ECG findings relevant to EM contexts. Objectives: The purpose of this study was to categorize the importance of various ECG diagnoses and/or findings for the EM trainee. Methods: We developed an extensive list of potentially important ECG diagnoses identified through a detailed review of the cardiology and EM literature. We then conducted a three-round Delphi expert opinion-soliciting process in which participants used a five-point Likert scale to rate the importance of each diagnosis for EM trainees. Consensus was defined as a minimum of 75% agreement on any particular diagnosis at the second round or later. In the absence of consensus, stability was defined as a shift of 20% or less between successive rounds. Results: Twenty-two EM experts participated in the Delphi process, sixteen (72%) of whom completed it. Of those, fifteen were experts from eleven different EM training programs across Canada and one was a recognized expert in EM electrocardiography. Overall, 77 diagnoses reached consensus, 42 achieved stability, and one diagnosis achieved neither consensus nor stability. Of 120 potentially important ECG diagnoses, 53 (43%) were considered "must know" diagnoses, 62 (51%) "should know" diagnoses, and 7 (6%) "nice to know" diagnoses.
Conclusion: We have categorized ECG diagnoses within an EM training context, knowledge of which may allow clinical EM teachers to establish educational priorities. This categorization will also facilitate the development of an educational framework to establish EM trainee competency in ECG interpretation.

"Rolling Refreshers." Background: Cardiac arrest survival rates are low despite advances in cardiopulmonary resuscitation (CPR). High-quality CPR has been shown to impart greater cardiac arrest survival; however, retention of basic CPR skills by health care providers has been shown to be poor. Objectives: To evaluate practitioner acceptance of an in-service CPR skills refresher program, and to assess operator response to real-time feedback during refreshers. Methods: We prospectively evaluated a "rolling refresher" in-service program at an academic medical center. This program is a proctored CPR practice session using a mannequin and a CPR-sensing defibrillator that provides real-time CPR quality feedback. Subjects were basic life support-trained providers engaged in clinical care at the time of enrollment. Subjects were asked to perform two minutes of chest compressions (CCs) using the feedback system; CCs could be terminated once the subject had completed approximately 30 seconds of compressions with fewer than three corrective prompts. A survey was then completed to obtain feedback on the perceived efficacy of this training model. CPR quality was then evaluated using custom analysis software to determine the percentage of CC adequacy in 30-second intervals. Results: Enrollment included 88 subjects from the emergency department and critical care units (55 nurses, 17 physicians, 16 students and allied health professionals). All participants completed a survey, and 61 CPR performance data logs were obtained. Positive impressions of the in-service program were registered by 81% (71/88), and 74% (65/88) reported a self-perceived improvement in skills confidence.
Eighty-three percent (73/88) of respondents felt comfortable performing this refresher during a clinical shift. Thirty-nine percent (24/61) of episodes exhibited adequate CC performance within approximately 30 seconds of CC. In the remaining 37 episodes, 71.1 ± 29.2% of CCs were adequate in the first 30-second interval versus 80.1 ± 28.6% in the last 30-second interval (p = 0.1847). Of these 37 individuals, 30 improved or had no change in their CPR skills, and 7 individuals' skills declined during CC performance (p = 0.007). Conclusion: Implementation of a bedside CPR skill refresher program is feasible and well received by hospital staff. Real-time CPR feedback improved CPR skill performance during the in-service session.

Teaching Emergency Medicine Skills: Is a Self-Directed, Independent, Online Curriculum the Way of the Future? Tighe Crombie, Jason R. Frank, Stephen Noseworthy, Richard Gerein, A. Curtis Lee; University of Ottawa, Ottawa, ON, Canada. Background: Procedural competence is critical to emergency medicine, but the ideal instructional method for acquiring these skills is not clear. Previous studies have demonstrated that online tutorials have the potential to be as effective as didactic sessions at teaching specific procedural skills. Objectives: We studied whether a novel online curriculum teaching pediatric intraosseous (IO) line insertion to novice learners is as effective as a traditional classroom curriculum in imparting procedural competence. Methods: We conducted a randomized controlled educational trial of two methods of teaching IO skills. Preclinical medical students with no prior IO experience completed a written test and were randomized to either an online or a classroom curriculum. The online group (OG) was given password-protected access to a website and instructed to spend 30 minutes with the material, while the didactic group (DG) attended a lecture of similar duration.
Participants then attended a 30-minute unsupervised manikin practice session on a separate day without further instruction. A videotaped objective structured clinical examination (OSCE) and a post-course written test were completed immediately after this practice session. Finally, participants were crossed over into the alternate curriculum and asked to complete a satisfaction survey comparing the two curricula. Results were compared with a paired t-test for written scores and an independent t-test for OSCE scores. Results: Sixteen students completed the study. Pre-course test scores of the two groups were not significantly different prior to accessing their respective curricula (mean scores of 32% for OG and 34% for DG; p > 0.05). Post-course written scores were also not significantly different (both with means of 76%; p > 0.05); however, for the post-treatment OSCE scores, the OG scored significantly higher than the DG (mean scores of 92.6% and 88.1%; t(14) = 1.76, p < 0.05). Conclusion: This novel online curriculum was superior to a traditional didactic approach to teaching pediatric IO line insertion. Novice learners assigned to a self-directed online curriculum were able to perform an emergency procedural skill to a high level of performance. EM educators should consider adopting online teaching of procedural skills.

Background: Applicants to EM residency programs obtain information largely from the internet. Curricular information is available from a program's website (PW) or the SAEM residency directory (SD). We hypothesized that there is variation between these key sources. Objectives: To identify discrepancies between each PW and the SD, and to describe components of PGY1-3 EM residency programs' curricula as advertised on the internet. Methods: PGY1-3 residencies were identified through the SD.
Data were abstracted from individual SD and PW pages, identifying pre-determined elements of interest regarding rotations in ICU, pediatrics, inpatient services (medicine, pediatrics, general surgery), electives, orthopedics, toxicology, and anesthesia. Agreement between the SD and PW was calculated using Cohen's unweighted kappa. Curricula posted on PWs were considered the gold standard for a program's current curriculum. Results: A total of 117 PGY1-3 programs were identified through the SD and confirmed on the PW. Ninety-one of 117 programs (78%) had complete curricular information on both sites; only these programs were included in the kappa analysis for SD and PW comparisons. Of programs with complete listings, 66 of 91 (73%) had at least one discrepancy. The agreement of information between PW and SD yielded a kappa value of 0.26 (95% CI 0.19-0.33). Analysis of PWs revealed that PGY1-3 programs have an average of 4.15 (range 2-9), 3.1 (range 1-6), 1.7 (range 0-4), and 1.0 (range 0-4) blocks of ICU, pediatrics, elective, and inpatient rotations, respectively. Common but not RRC-mandated rotations in orthopedics, toxicology, and anesthesiology are present in 77%, 80%, and 93% of programs, respectively. Conclusion: Publicly accessible curricular information through the SD and PW for PGY1-3 EM programs has only fair agreement (using commonly accepted kappa value guides). Applicants may be confused by the variability of the data and draw inaccurate conclusions about program curricula.

Background: Left lateral tilt (LLT) is thought to relieve compression from the gravid uterus and improve cardiac output; however, this theory has never been proven. Objectives: We set out to determine the difference in inferior vena cava (IVC) filling when third-trimester patients were placed in supine, LLT, and right lateral tilt (RLT) positions using IVC ultrasound. Methods: Healthy pregnant women in their third trimester presenting to the labor and delivery suite were enrolled.
Patients were placed in three different positions (supine, RLT, and LLT), and IVC maximum (max) and minimum (min) measurements were obtained using the intercostal window in short axis, approximately two centimeters below the entry of the hepatic veins. The IVC collapse index (CI) was calculated for each measurement using the formula (max − min)/max. In addition, blood pressure, heart rate, and fetal heart rate were monitored. Patients stayed in each position for at least 3 minutes before measurements were taken. We compared IVC measurements using a one-way analysis of variance for repeated measures. Results: Twenty patients were enrolled. The average age was 25 years (SD 5.7), with a mean estimated gestational age of 39.5 weeks (SD 1.4). There were no significant differences in IVC filling among the positions (see Table 1), and there were no differences in hemodynamic parameters between positions. Ten (50%) patients had the largest IVC measurement in the LLT position, 7 (35%) in the RLT position, and 3 (15%) in the supine position. Conclusion: There were no significant differences in IVC filling between patient positions. For some third-trimester patients, LLT may not be the optimal position for IVC filling.

Background: Although the ACGME and RRC require competency assessment in ED bedside ultrasound (US), there are no standardized assessment tools for US training in EM. Objectives: Using published US guidelines, we developed four observed structured competency evaluations (OSCEs) for four common EM US exams: FAST, aortic, cardiac, and pelvic. Inter-rater reliability was calculated for overall performance and for the individual components of each OSCE. Methods: This prospective observational study derived four OSCEs that evaluated overall study competency, image quality for each required view, technical factors (probe placement, orientation, angle, gain, and depth), and identification of key anatomic structures.
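The collapse index defined in the IVC positioning study above is a simple ratio of maximum and minimum diameters; a minimal Python sketch, using hypothetical diameter values for illustration only, is:

```python
def collapse_index(ivc_max: float, ivc_min: float) -> float:
    """IVC collapse index: (max - min) / max, with diameters in any consistent unit."""
    if ivc_max <= 0:
        raise ValueError("maximum IVC diameter must be positive")
    return (ivc_max - ivc_min) / ivc_max

# Hypothetical example: max diameter 2.0 cm, min diameter 1.2 cm -> CI = 0.4
ci = collapse_index(2.0, 1.2)
```

A larger index indicates greater respiratory collapse of the vessel; a fully distended, non-collapsing IVC gives an index of 0.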
EM residents with varying levels of training completed an OSCE under the direct observation of two EM-trained US experts; each expert was blinded to the other's assessment. Overall study competency and the image quality of each required view were rated on a five-point scale (1 = poor, 2 = fair, 3 = adequate, 4 = good, 5 = excellent), with explicit definitions for each rating. Each study's technical factors (correct/incorrect) and anatomic structures (identified/not identified) were assessed as binary variables. Data were analyzed using Cohen's and weighted kappa, descriptive statistics, and 95% CIs. Results: A total of 185 US exams were observed, including 33 FAST, 53 cardiac, 53 aortic, and 46 pelvic. Total assessments included 185 ratings of overall study competency, 691 ratings of required-view image quality, 2998 ratings of technical factors, and 2978 ratings of anatomic structures. Inter-rater assessment of overall study competency showed excellent agreement: raw agreement 0.84 (0.77, 0.89), weighted kappa 0.87 (0.82, 0.91). Ratings of required-view image quality showed excellent agreement: raw agreement 0.75 (0.72, 0.79), weighted kappa 0.82 (0.79, 0.84). Inter-rater assessment of technical factors showed substantial agreement: raw agreement 0.96 (0.95, 0.97), Cohen's kappa 0.78 (0.74, 0.82). Ratings of identification of anatomic structures showed substantial agreement: raw agreement 0.86 (0.85, 0.88), Cohen's kappa 0.64 (0.60, 0.67). Conclusion: Inter-rater reliability is substantial to excellent using the derived ultrasound OSCEs to rate EM resident competency in FAST, aortic, cardiac, and pelvic ultrasound. Validation of this tool is ongoing.

Objectives: The objective of this study was to identify which transducer orientation, longitudinal or transverse, is the best method of imaging the axillary vein with ultrasound, as defined by successful placement in the vein with one needle stick, no redirections, and no complications.
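The agreement statistics in the OSCE reliability study above (raw agreement and unweighted Cohen's kappa) can be computed from paired ratings; a minimal sketch with made-up binary ratings (not the study's data):

```python
from collections import Counter

def raw_agreement(a, b):
    """Proportion of items on which two raters gave the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa: agreement between two raters corrected for chance."""
    n = len(a)
    po = raw_agreement(a, b)  # observed agreement
    fa, fb = Counter(a), Counter(b)
    # Expected chance agreement from each rater's marginal category frequencies
    pe = sum((fa[c] / n) * (fb[c] / n) for c in set(fa) | set(fb))
    return (po - pe) / (1 - pe)

# Made-up ratings, e.g. 1 = structure identified, 0 = not identified
rater1 = [1, 1, 0, 0, 1, 0, 1, 1]
rater2 = [1, 0, 0, 0, 1, 0, 1, 1]
```

Kappa discounts the agreement two raters would reach by guessing from their marginal rates, which is why a high raw agreement (here 0.875) can correspond to a lower kappa (here 0.75). Weighted kappa, used for the five-point scales in the study, additionally credits near-miss ratings.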
Methods: Emergency medicine resident and attending physicians at an academic medical center were asked to cannulate the axillary vein in a torso phantom model. Participants were randomized to start with either the longitudinal or the transverse approach and completed both sequentially after viewing a teaching presentation. Participants completed pre- and post-attempt questionnaires. Measurements of each attempt included time to completion, success, skin punctures, needle redirections, and complications. We compared proportions using a normal binomial approximation and continuous data using the t-distribution, as appropriate. A sample size of 57 was chosen based on the following assumptions: power, 0.8; significance, 0.05; effect size, 50% versus 75%. Results: Fifty-seven operators with a median experience of 85 prior ultrasounds (IQR 26 to 120) participated. First-attempt success frequency was 39/57 (0.69) for the longitudinal method and 21/57 (0.37) for the transverse method (difference 0.32, 95% CI 0.12-0.51); this difference was similar regardless of operator experience. The longitudinal method had fewer redirections (mean difference 1.8, 95% CI 0.8-2.8) and skin punctures (mean difference 0.3, 95% CI −2 to 0.18). Arterial puncture occurred in 2/57 longitudinal attempts and 7/57 transverse attempts, with no pleural punctures in either group. Among successful attempts, the time spent was 24 seconds less for the longitudinal method (95% CI 3-45). Although 93% of participants had more experience with the transverse method prior to the training session, 58% indicated after the session that they preferred the longitudinal method.

Methods: A prospective single-center study was conducted to assess the compressibility of the basilic vein with ultrasound. Healthy study participants were recruited.
Compressibility was assessed at baseline and then further assessed with one proximal tourniquet, two tourniquets (one distal and one proximal), and a proximal blood pressure cuff inflated to 150 mmHg. Compressibility was defined as the vessel's resistance to collapse under external pressure and was rated as completely, moderately, or mildly compressible after mild pressure was applied with the ultrasound probe. Results: One hundred patients were recruited into the study. Ninety-eight subjects had a completely compressible basilic vein at baseline. When one tourniquet and two tourniquets were applied, 64 and 58 participants, respectively, continued to have completely compressible veins; a Fisher's exact test comparing one versus two tourniquets revealed no difference between these two techniques (p = 0.46). Only two participants continued to have completely compressible veins following application of the blood pressure cuff; the difference in compressibility for this group was statistically significant by Fisher's exact test compared with both tourniquet groups (p < 0.0001). Furthermore, with the blood pressure cuff applied, 24 participants had moderately compressible veins and 72 had mildly compressible veins. Conclusion: Tourniquets and blood pressure cuffs can both decrease the compressibility of peripheral veins. While no difference was identified between using one and two tourniquets, a blood pressure cuff was significantly more effective at decreasing compressibility. These findings may be applied in the emergency department when attempting to obtain peripheral venous access, specifically supporting the use of blood pressure cuffs to decrease compressibility.

Background: Electroencephalography (EEG) is an underused test that can provide valuable information in the evaluation of emergency department (ED) patients with altered mental status (AMS).
In AMS patients with nonconvulsive seizure (NCS), EEG is necessary to make the diagnosis and initiate proper treatment, yet most cases of NCS are diagnosed more than 24 hours after ED presentation. Obstacles to routine use of EEG in the ED include space limitations, the absence of 24/7 availability of EEG technologists and interpreters, and the electrically hostile ED environment. A novel miniature portable wireless device (microEEG) is designed to overcome these obstacles. Objectives: To examine the diagnostic utility of microEEG in identifying EEG abnormalities in ED patients with AMS. Methods: An ongoing prospective study conducted at two academic urban EDs. Inclusion: patients ≥13 years old with AMS. Exclusion: an easily correctable cause of AMS (e.g., hypoglycemia, opioid overdose). Three 30-minute EEGs were obtained in random order from each subject, beginning within one hour of presentation: (1) a standard EEG; (2) a microEEG obtained simultaneously with conventional cup electrodes using a signal splitter; and (3) a microEEG using an electrode cap. Outcome: the operative characteristics of microEEG in identifying any EEG abnormality. All EEGs were interpreted in a blinded fashion by two board-certified epileptologists. Within each reader-patient pairing, the accuracy of EEGs 2 and 3 was assessed relative to EEG 1. Sensitivity, specificity, and likelihood ratios (LRs) are reported for microEEG with standard electrodes and with the electrode cap (EEGs 2 and 3). Inter-rater variability for EEG interpretations is reported with kappa. Results: An interim analysis was performed on 130 consecutive patients (target sample size 260) enrolled from May to October 2011 (median age 61, range 13-100, 40% male). Overall, 82% (95% confidence interval [CI] 76-88%) of interpretations were abnormal (based on EEG 1). Kappa values representing the agreement of the neurologists in interpreting EEGs 1-3 were 0.54 (0.36-0.73), 0.57 (0.39-0.75), and 0.55 (0.37-0.74), respectively.
Conclusion: The diagnostic accuracy and concordance of microEEG are comparable to those of standard EEG, and the unique ED-friendly characteristics of the device could help overcome the existing barriers to more frequent use of EEG in the ED. (Originally submitted as a "late-breaker.")

Background: Patients who use an ED for acute migraine are characterized by higher migraine disability scores and lower socio-economic status, and are unlikely to have used a migraine-specific medication prior to ED presentation. Objectives: To determine whether a comprehensive migraine intervention, delivered just prior to ED discharge, could improve migraine impact scores one month after the ED visit. Methods: This was a randomized controlled trial of a comprehensive migraine intervention versus typical care among patients who presented to an ED for management of acute migraine. At the time of discharge, for patients randomized to comprehensive care, we reinforced the diagnosis, shared a migraine education presentation from the National Library of Medicine, provided six tablets of sumatriptan 100 mg and 14 tablets of naproxen 500 mg, and, if the patient wished, provided an expedited free appointment to our institution's headache clinic. Patients randomized to typical care received the care their attending emergency physician felt was appropriate. The primary outcome was a between-group comparison of the HIT-6 score, a validated headache assessment instrument, one month after ED discharge. Secondary outcomes included an assessment of satisfaction with headache care and the frequency of use of migraine-specific medication within that one-month period. The outcome assessor was blinded to assignment. Results: Over a 19-month period, 50 migraine patients were enrolled. One-month follow-up was successfully obtained in 92% of patients. Baseline characteristics were comparable.
One-month HIT-6 scores in the two groups were nearly identical (59 vs 56; 95% CI for difference of 3: −5 to 11), as was dissatisfaction with overall headache care (17% versus 18%; 95% CI for difference of 1%: −22% to 24%). Not surprisingly, patients randomized to the comprehensive intervention were more likely to be using triptans or migraine-preventive therapy one month later (43% versus 0%; 95% CI for difference of 43%: 20% to 63%). Conclusion: A comprehensive migraine intervention, when compared with typical care, did not improve HIT-6 scores one month after ED discharge. Future work is needed to define a migraine intervention that is practical and useful in an ED.

Background: Lumbar puncture (LP) is the standard of care for excluding non-traumatic subarachnoid hemorrhage (SAH) and is usually performed following head CT (HCT). However, in the setting of a non-diagnostic HCT, LP demonstrates a low overall diagnostic yield for SAH (<1% positive rate). Objectives: To describe a series of ED patients diagnosed with SAH by LP following a non-diagnostic HCT and, by comparison with a set of matched controls, to determine whether clinical variables can reliably identify these "CT-negative/LP-positive" patients. Methods: Retrospective case-control chart review of ED patients in an integrated health system between 2000 and 2011 (an estimated 5-6 million visits among 18 EDs). Patients with a final diagnosis of non-traumatic SAH were screened for case inclusion, defined as an initial HCT without SAH by final radiologist interpretation and an LP with >5 red blood cells/mm³, along with either (1) xanthochromic cerebrospinal fluid, (2) angiographic evidence of cerebral aneurysm or arteriovenous malformation, or (3) head imaging showing SAH within 48 hours following LP. Control patients were randomly selected among ED patients diagnosed with headache following a negative SAH evaluation with HCT and LP. Controls were matched to cases by year and presenting ED in a 3:1 ratio.
Stepwise logistic regression and classification and regression tree (CART) analysis were employed to identify predictive variables. Inter-rater reliability (kappa) was determined by independent chart review. Results: Fifty-five cases were identified; all were Hunt-Hess grade 1 or 2. Demographics are shown in Table 1. Thirty-four cases (62%) had angiographic evidence of SAH. Five variables positively predicted SAH following a normal HCT with 98% sensitivity (95% CI 90-100%) and 25% specificity (95% CI 19-32%): age > 50 years, neck pain or stiffness, onset of headache with exertion, vomiting with headache, or loss of consciousness at headache onset. Kappa values for selected variables ranged from 0.75 to 1.0 (18% sample). The c-statistic (AUC) and Hosmer-Lemeshow test p-value for the logistic regression model were 0.87 and 0.74, respectively (Table 2). Conclusion: Several clinical variables can help safely limit the amount of invasive testing for SAH following a non-diagnostic HCT. Prospective validation of this model is needed prior to practice implementation.

Background: Post-thrombolysis intracerebral hemorrhage (ICH) is associated with poor outcomes. Previous investigations have attempted to determine the relationship between pre-existing antiplatelet (AP) use and the safety of intravenous thrombolysis but have been limited by low event rates, decreasing the precision of estimates. Objectives: Our objective was to determine whether pre-existing AP therapy increases the risk of ICH following thrombolysis. Methods: Consecutive cases of ED-treated thrombolysis patients were identified using multiple methods, including active and passive surveillance. Retrospective data were collected from four hospitals from 1996-2005, and from 24 distinct hospitals from 2007-2010 as part of a cluster randomized trial. The same chart abstraction tool was used during both time periods, and the data were subjected to numerous quality control checks.
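The operating characteristics reported for the SAH decision rule above, and the likelihood ratios in the microEEG study, all derive from a 2×2 table of test result versus disease status; a minimal sketch, using hypothetical counts rather than either study's data:

```python
def operating_characteristics(tp, fn, tn, fp):
    """Sensitivity, specificity, and likelihood ratios from 2x2 counts.

    tp/fn: diseased patients with positive/negative tests
    tn/fp: non-diseased patients with negative/positive tests
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

# Hypothetical counts for illustration only
sens, spec, lr_pos, lr_neg = operating_characteristics(tp=54, fn=1, tn=41, fp=124)
```

A rule like this one trades specificity for sensitivity: a high-sensitivity, low-specificity combination yields a low negative likelihood ratio, which is what makes a negative result useful for ruling out disease.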
Hemorrhages were classified using a pre-specified methodology: ICH was defined as the presence of hemorrhage in radiographic interpretations of follow-up imaging (primary outcome); symptomatic ICH (sICH, secondary outcome) was defined as radiographic ICH with associated clinical worsening. A multivariable logistic regression model was constructed to adjust for clinical factors previously identified as related to post-thrombolysis ICH. As there were fewer sICH events, that multivariable model was constructed similarly, except that variables divided into quartiles in the primary analysis were dichotomized at the median. Results: There were 830 patients included, 47% with documented pre-existing AP treatment. The mean age was 69 years, the cohort was 53% male, and the median NIHSS was 12. The unadjusted proportion of patients with any ICH was 15.1% without AP and 19.3% with AP (difference 4.2%, 95% CI −1.2% to 9.6%); for sICH, the proportions were 6.1% without AP and 9% with AP (difference 3.1%, 95% CI −1% to 6.7%). No significant association between pre-existing AP treatment and radiographic or symptomatic ICH was observed (Table). Conclusion: We did not find that AP treatment was associated with post-thrombolysis ICH or sICH in this cohort of community-treated patients. Pre-existing tobacco use, younger age, and lower severity were associated with lower odds of sICH. An association between AP therapy and sICH may still exist; further research with larger sample sizes is warranted to detect smaller effect sizes.

Background: Post-cardiac arrest therapeutic hypothermia (TH) improves survival and neurologic outcome after cardiac arrest, but the parameters required for optimal neuroprotection remain uncertain. Our laboratory recently reported that 48-hour TH was superior to 24-hour TH in protecting hippocampal CA1 pyramidal neurons after asphyxial cardiac arrest in rats.
Cerebellar Purkinje cells are also highly sensitive to ischemic injury caused by cardiac arrest, but the effect of TH on this neuron population has not been previously studied. Objectives: We examined the effect of post-cardiac arrest TH onset time and duration on Purkinje neuron survival in cerebella collected during our previous study. Methods: Adult male Long-Evans rats were subjected to 10-minute asphyxial cardiac arrest followed by CPR. Rats that achieved return of spontaneous circulation (ROSC) were block-randomized to normothermia (37.0°C) or TH (33.0°C) initiated 0, 1, 4, or 8 hours after ROSC and maintained for 24 or 48 hours (n = 21 per group). Sham-injured rats underwent anesthesia and instrumentation only. Seven days post-cardiac arrest or sham injury, rats were euthanized and brain tissue was processed for histology. Surviving Purkinje cells with normal morphology were quantified in the primary fissure in Nissl-stained sagittal sections of the cerebellar vermis. Purkinje cell density was calculated for each rat, and group means were compared by ANOVA with Bonferroni analysis. Results: Purkinje cell density averaged (±SD) 35.9 (2.4) cells/mm in sham-injured rats. Neuronal survival in normothermic post-cardiac arrest rats was significantly reduced compared with sham (10.7% (5.0%)). Overall, TH resulted in significant neuroprotection compared with normothermia (38.9% (15.7%) of sham). Purkinje cell density was 35.0% (11.2%) of sham with 24-hour TH and 43.3% (15.6%) of sham with 48-hour TH, both significantly improved relative to normothermia (p = 0.245 between durations). TH initiated 0, 1, 4, and 8 hours post-ROSC provided similar benefit: 44.6% (21.6%), 33.2% (8.1%), 36.6% (12.9%), and 41.1% (9.3%) of sham, respectively. Conclusion: Overall, these results indicate that post-cardiac arrest TH protects cerebellar Purkinje cells with a broad therapeutic window.
our results underscore the importance of considering multiple brain regions when optimizing the neuroprotective effect of post-cardiac arrest th. the effect of compressor-administered defibrillation on peri-shock pauses in a simulated cardiac arrest scenario joshua glick, evan leibner, thomas terndrup penn state hershey medical center, hershey, pa background: longer pauses in chest compressions during cardiac arrest are associated with a decreased probability of successful defibrillation and patient survival. having multiple personnel share the tasks of performing chest compressions and shock delivery can lead to communication complications that may prolong time spent off the chest. objectives: the purpose of this study was to determine whether compressor-administered defibrillation led to a decrease in pre-shock and peri-shock pauses as compared to bystander-administered defibrillation in a simulated in-hospital cardiac arrest scenario. we hypothesized that combining the responsibilities of shock delivery and chest-compression performance may lower no-flow periods. methods: this was a randomized, controlled study measuring pauses in chest compressions for defibrillation in a simulated cardiac arrest. medical students and ed personnel with current cpr certification were surveyed for participation between july 2011 and october 2011. participants were randomized to either a control (facilitator-administered shock) or variable (participantadministered shock) group. all participants completed one minute of chest compressions on a mannequin in a shockable rhythm prior to initiation of prompt and safe defibrillation. pauses for defibrillation were measured and compared in both study groups. results: out of 200 total enrollments, the data from 197 defibrillations were analyzed. subject-initiated defibrillation resulted in a significantly lower pre-shock handsoff time (0.57 s; 95% ci: 0.47-0.67) compared to facilitator-initiated defibrillation (1.49 s; 95% ci: 1.35-1.64). 
furthermore, subject-initiated defibrillation resulted in a significantly lower peri-shock hands-off time (2.77 s; 95% ci: 2.58-2.95) compared to facilitator-initiated defibrillation (4.25 s; 95% ci: 4.08-4.43). conclusion: assigning the responsibility for shock delivery to the provider performing compressions encourages continuous compressions throughout the charging period and decreases total time spent off the chest. this modification may also decrease the risk of accidental shock and improve patient survival. however, as this was a simulation-based study, clinical implementation is necessary to further evaluate these potential benefits. objectives: to determine the sensitivity and specificity of peripheral venous oxygen (po 2 ) to predict abnormal central venous oxygen saturation in septic shock patients in the ed. methods: secondary analysis of an ed-based randomized controlled trial of early sepsis resuscitation targeting three physiological variables: cvp, map, and either scvo 2 or lactate clearance. inclusion criteria: suspected infection, two or more sirs criteria, and either systolic blood pressure <90 mmhg after a fluid bolus or lactate >4 mmol/l. peripheral venous po 2 was measured prior to enrollment as part of routine care, and scvo 2 was measured as part of the protocol. we analyzed for agreement between venous po 2 and scvo 2 using spearman's rank correlation. sensitivity and specificity to predict an abnormal scvo 2 (<70%) were calculated for each incremental value of po 2 . results: a total of 175 patients were analyzed. median po 2 was 43 mmhg (iqr 32, 55). median initial scvo 2 was 79% (iqr 70, 88). thirty-nine patients (23%) had an initial scvo 2 < 70%. spearman's rank correlation demonstrated fair correlation between initial po 2 and scvo 2 (ρ = 0.26). a cutoff of venous po 2 < 57 was 90% sensitive and 20% specific for detecting an initial scvo 2 < 70%. twenty-seven patients (20%) demonstrated an initial po 2 of >56.
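The cutoff analysis described above (sensitivity and specificity computed at each incremental PO2 value against abnormal ScvO2 as the reference standard) can be sketched as follows. The function name and the data are purely illustrative, not taken from the study:

```python
def sensitivity_specificity(po2_values, abnormal_flags, cutoff):
    """Treat PO2 below the cutoff as 'test positive'; the reference
    standard is an abnormal ScvO2 (< 70%), given as boolean flags."""
    tp = sum(1 for p, a in zip(po2_values, abnormal_flags) if p < cutoff and a)
    fn = sum(1 for p, a in zip(po2_values, abnormal_flags) if p >= cutoff and a)
    tn = sum(1 for p, a in zip(po2_values, abnormal_flags) if p >= cutoff and not a)
    fp = sum(1 for p, a in zip(po2_values, abnormal_flags) if p < cutoff and not a)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic illustration only: low PO2 tends to accompany abnormal ScvO2.
po2 = [30, 35, 40, 50, 55, 58, 60, 65, 70, 80]
abnormal = [True, True, True, False, True, False, False, False, False, False]
sens, spec = sensitivity_specificity(po2, abnormal, cutoff=57)
```

Sweeping the cutoff over all observed PO2 values trades sensitivity against specificity, which is how a screening threshold like PO2 < 57 (high sensitivity, low specificity) is chosen.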
conclusion: in ed septic shock patients, venous po 2 demonstrated only fair correlation with scvo 2, though a cutoff value of 56 was sensitive for predicting an abnormal scvo 2 . twenty percent of patients demonstrated an initial value above the cutoff, potentially representing a group in whom scvo 2 measurement could be avoided. future studies aiming to decrease central line utilization could consider the use of peripheral o 2 measurements in these patients. sessions. ninety-two percent were rns, median clinical experience was 11-15 years, and 56% were from an intensive care unit. provider confidence increased significantly with a single session despite the highly experienced sample (figure 1 ). there was a trend for further increased confidence with an additional session and the increased confidence was maintained for at least 3-6 months given the normal sensitivity analysis. conclusion: high fidelity simulation significantly increases provider confidence even among experienced providers. this study was limited by its small sample size and recent changes in acls guidelines. background: recent data suggest alarming delays and deviations in major components of pediatric resuscitation during simulated scenarios by pediatric housestaff. objectives: to identify the most common errors of pediatric residents during multiple simulated pediatric resuscitation scenarios. methods: a retrospective observational study conducted in an academic tertiary care hospital. pediatric residents (pgy1 and pgy3) were videotaped performing a series of five pediatric resuscitation scenarios using a high-fidelity simulator (simbaby, laerdal): pulseless non-shockable arrest, pulseless shockable arrest, dysrhythmia, respiratory arrest, and shock. the primary outcome was the presence of significant errors prospectively defined using a validated scoring instrument designed to assess sequence, timing, and quality of specific actions during resuscitations based on the 2005 aha pals guidelines. 
residents' clinical performances were measured by a single video reviewer. the primary analysis was the proportion of errors for each critical task for each scenario. we estimated that the evaluation of each resident would provide a confidence interval less than 0.20 for the proportion of errors. results: twenty-four of 25 residents completed the study. across all scenarios, pulse check was delayed by more than 30 seconds in 56% (95%ci: 46%-66%). for non-shockable arrest, cpr was started more than 30 seconds after recognizing arrest in 21% (95%ci 7-42%) and inappropriate defibrillation was performed in 29% (95%ci 13-51%). for shockable arrest, participants failed to identify the rhythm in 58% (95%ci 37-78%), cpr was not performed in 25% (95%ci 10-47%), while defibrillation was delayed by more than 90 seconds in 33% (95%ci 16-51%) and not performed in one case. for shock, participants failed to ask for a dextrose check in 71% (95%ci 51-86%), and it was delayed by more than 60 seconds for all others. conclusion: the most common error across all scenarios was delay in pulse check. delays in starting cpr and inappropriate defibrillation were common errors in non-shockable arrests, while failure to identify rhythm, cpr omission, and delaying defibrillation were noted for shockable arrests. for shock, omission of rapid dextrose check was the most common error, while delaying the test when ordered was also significant. future training in pediatric resuscitation should target these errors. background: many scoring instruments have been described to measure clinical performance during resuscitation; however, the validity of these tools has yet to be proven in pediatric resuscitation. objectives: to determine the external validity of published scoring instruments to evaluate clinical performance during simulated pediatric resuscitations using pals algorithms and to determine if inter-rater reliability could be assessed. 
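The error proportions above are reported with 95% confidence intervals on small samples (e.g., 58%, 95% CI 37-78%, among 24 residents). One common way to compute such an interval is the Wilson score method, sketched here with hypothetical counts (14 of 24); the abstract does not state which interval method was actually used:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: 14 of 24 residents (~58%) committing a given error.
lo, hi = wilson_ci(14, 24)
```

Unlike the simple Wald interval, the Wilson interval behaves sensibly at small n and near 0% or 100%, which matters for cohorts of this size.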
methods: this was a prospective quasi-experimental design performed in a simulation lab of a pediatric tertiary care facility. participants were residents from a single pediatric program distinct from where the instrument was originally developed. a total of 13 pgy1s and 11 pgy3s were videotaped during five simulated pediatric resuscitation scenarios. pediatric emergency physicians rated resident performances before and after a pals course using standardized scoring. each video recording was viewed and scored by two raters blinded to one another's scores. a priori, it was determined that, for the scoring instrument to be valid, participants should improve their scores after participating in the pals course. differences in means between pre-pals and post-pals and pgy1 and pgy3 were compared using an anova test. to investigate differences in the scores of the two groups over the five scenarios, a two-factor anova was used. reliability was assessed by calculating an intraclass correlation coefficient for each scenario. results: following the pals course, scores improved by 8.6% (3.8 to 13.3), 15.7% (8.6 to 22.7), 6.3% (−1.8 to 14.3), 18.2% (9.3 to 27), and 4.1% (−3.0 to 11.2) for the pulseless non-shockable arrest, pulseless shockable arrest, dysrhythmia, respiratory, and shock scenarios respectively. there were no differences in scores between pgy1s and pgy3s before and after the pals course. there was excellent reliability for each scenario, with iccs varying between 0.85 and 0.98. conclusion: the scoring instrument was able to demonstrate significant improvements in scores following a pals course for pgy1 and pgy3 pediatric residents for the pulseless non-shockable arrest, pulseless shockable, and respiratory arrest scenarios only. however, it was unable to discriminate between pgy1s and pgy3s both before and after the pals course for any scenarios. the scoring instrument showed excellent inter-rater reliability for all scenarios.
a background: medical simulation is a common and frequently studied component of emergency medicine (em) residency curricula. its utility in the context of em medical student clerkships is not well defined. objectives: the objective was to measure the effect of simulation instruction on medical students' em clerkship oral exam performance. we hypothesized that students randomized to the simulation group would score higher. we predicted that simulation instruction would promote better clinical reasoning skills and knowledge expression. methods: this was a randomized observational study conducted from 7/2009 to 5/2010. participants were fourth year medical students in their em clerkship. students were randomly assigned on their first day to one of two groups. the study group received simulation instruction in place of one of the lectures, while the control group was assigned to the standard curriculum. the standard clerkship curriculum includes lectures, case studies, procedure labs, and clinical shifts without simulation. at the end of the clerkship, all students participated in written and oral exams. graders were not blinded to group allocation. grades were assigned based on a pre-defined set of criteria. the final course composite score was computed based on clinical evaluations and the results of both written and oral exams. oral exam scores between the groups were compared using a two-sample t-test. we used the spearman rank correlation to measure the association between group assignment and the overall course grade. the study was approved by our institutional irb. results: sixty-one students participated in the study and were randomly assigned to one of two groups. twenty-nine (47.5%) were assigned to simulation and the remaining 32 (52.5%) students were assigned to the standard curriculum. students assigned to the simulation group scored 5.34% (95% ci 2.78-7.91%) higher on the oral exam than the non-simulation group. 
additionally, simulation was associated with a higher final course grade (p < 0.05). limitations of this pilot study include lack of blinding and interexaminer variability. conclusion: simulation training as part of an em clerkship is associated with higher oral exam scores and higher overall course grade compared to the standard curriculum. the results from this pilot study are encouraging and support a larger, more rigorous study. initial approaches to common complaints are taught using a standard curriculum of lecture and small group case-based discussion. we added a simulation exercise to the traditional altered mental status (ams) curriculum with the hypothesis that this would positively affect student knowledge, attitudes, and level of clinical confidence caring for patients with ams. methods: ams simulation sessions were conducted in june 2010 and 2011; student participation was voluntary. the simulation exercises included two ams cases using a full-body simulator and a faculty debriefing after each case. both students who did and did not participate in the simulations completed a written post-test and a survey related to confidence in their approach to ams. results: 154 students completed the post-test and survey. 65 (42%) attended the simulation session. 48 (31%) attended all three sessions. 58 (38%) participated in the lecture and small group. 15 (10%) did not attend any session. post-test scores were higher in students who attended the simulations versus those who did not: 7 (iqr, 6-8) vs. 6 (iqr, 4-7); p < 0.001. students who attended the simulations felt more confident about assessing an ams patient (58% vs. 42%; p = 0.05), articulating a differential diagnosis (66% vs. 47%; p = 0.03), and knowing initial diagnostic tests (74% vs. 53%; p = 0.01) and initial interventions (79% vs. 56%; p = 0.003) for an ams patient. students who attended the simulations were more likely to rate the overall ams curriculum as useful (94% vs. 61%; p < 0.001). 
conclusion: addition of a simulation session to a standard ams curriculum had a positive effect on student performance on a knowledge-based exam and increased confidence in clinical approach. the study's major limitations were that student participation in the simulation exercise was voluntary and that effect on applied skills was not measured. future research will determine whether simulation is effective for other chief complaints and if it improves actual clinical performance. background: the acgme has defined six core competencies for residents including ''professionalism'' and ''interpersonal and communication skills.'' integral to these two competencies is empathy. prior studies suggest that self-reported empathy declines during medical training; no reported study has yet integrated simulation into the evaluation of empathy in medical training. objectives: to determine if there is a relationship between level of training and empathy in patient interactions as rated during simulation. methods: this is a prospective observational study at a tertiary care center comparing participants at four different levels of training: first-year (ms1) and third-year (ms3) medical students, incoming em interns (pgy1), and em senior residents (pgy3/4). trainees participated in two simulation scenarios (ectopic pregnancy and status asthmaticus) in which they were responsible for clinical management (cm) and patient interactions (pi). this was the first simulation exposure during an established simulation curriculum for ms1, ms3, and pgy1. two independent raters reviewed videotaped simulation scenarios using checklists of critical actions for clinical management (cm: 0-11 points) and patient interactions (pi: 0-17 points). inter-rater reliability was assessed by intra-class correlation coefficients (iccs). objectives: we explored attitudes and beliefs about the handoff, using qualitative methods, from a diverse group of stakeholders within the ems community.
we also characterized perceptions of barriers to high-quality handoffs and identified strategies for optimizing this process. methods: we conducted seven focus groups at three separate gatherings of ems professionals (one local, two national) in 2010/2011. snowball sampling was used to recruit 48 participants with diverse professional, experiential, geographic, and demographic characteristics. focus groups, lasting 60-90 minutes, were moderated by investigators trained in qualitative methods, using an interview guide to elicit conversation. recordings of each group were transcribed. three reviewers analyzed the text in a multi-stage iterative process to code the data, describe the main categories, and identify unifying themes. results: participants included emts, paramedics, physicians, and nurses. clinical experience ranged from 4 months to 36 years. recurrent thematic domains when discussing attitudes and beliefs were: perceptions of respect and competence, professionalism, teamwork, value assigned to the process, and professional duty. modifiers of these domains were: hierarchy, skill/training level, severity/type of patient illness, and system/ regulatory factors. strategies to improving barriers to the handoff included: fostering familiarity and personal connections between ems and ed staff, encouraging two-way conversations, feedback, and direct interactions between ems providers and ed physicians, and optimizing ways for ems providers to share subjective impressions (beyond standardized data elements) with hospital-based care teams. conclusion: ems professionals assign high value to the ed handoff. variations in patient acuity, familiarity with other handoff participants, and perceptions of respect and professionalism appear to influence the perceived quality of this transition. regulatory strategies to standardize the contents of the handoff may not alone overcome barriers to this process. 
epidemiology, public health) then developed an approach to assign ems records to one of 20 symptom-based illness categories (gastrointestinal illness, respiratory, etc). ems encounter records were characterized into these illness categories using a novel text analytic program. event alerts were identified across the state and local regions in illness categories using either change detection from baseline with cumulative sum (cusum) analysis (three standard deviations) or a novel text-proportion (tap) analysis approach (sas institute, cary, nc). results: 2.4 million ems encounter records over a 2-year period were analyzed. the initial analysis focused on gastrointestinal illness (gi) given the potential relationship of gi distress to infectious outbreaks, food contamination, and intentional poisonings (ricin). after accounting for seasonality, a significant gi event was detected in feb 2010 (see red circle on graph). this event coincided with a confirmed norovirus outbreak. the cusum approach (yellow circle on graph) detected the alert event on jan 24, 2010. the novel tap approach on a regional basis detected the alert on dec 6, 2009. conclusion: ems has the advantage of being an early point of contact with patients and providing information on the location of insult or injury. surveillance based on ems information system data can detect emergent outbreaks of illness of interest to public health. a novel text proportion analytic technique shows promise as an early event detection method. assessing chronic stress in the emergency medical services. elizabeth a.
donnelly 1 , jill chonody 2 ; 1 university of windsor, windsor, on, canada; 2 university of south australia, adelaide, australia. background: attention has been paid to the effect of critical incident stress in the emergency medical services (ems); however, less attention has been given to the effect of chronic stress (e.g., conflict with administration or colleagues, risk of injury, fatigue, interference in non-work activities) in ems. a number of extant instruments assess for workplace stress; however, none address the idiosyncratic aspects of work in ems. objectives: the purpose of this study was to validate an instrument, adapted from mccreary and thompson (2006), that assesses levels of both organizational and operational work-related chronic stress in ems personnel. methods: to validate this instrument, a cross-sectional, observational web-based survey was used. the instrument was distributed to a systematic probability sample of emts and paramedics (n = 12,000). the survey also included the perceived stress scale (cohen, 1983) to assess for convergent construct validity. results: the survey attained a 13.6% usable response rate (n = 1633); respondent characteristics were consistent across demographic characteristics with other studies of emts and paramedics. the sample was split in order to allow for exploratory and confirmatory factor analyses (n = 847/n = 786). in the exploratory factor analysis, principal axis factoring with an oblique rotation revealed a two-factor, 34-item solution (kmo = 0.943, χ² = 23344.38, df = 561, p ≤ .001). confirmatory factor analysis suggested a more parsimonious, two-factor, 20-item solution (χ² = 632.67, df = 168, p ≤ 0.001, rmsea = 0.06, cfi = 0.92, tli = 0.91, srmr = 0.04). the factors demonstrated good internal reliability (operational stress α = 0.877, organizational stress α = 0.868). both factors were significantly correlated (p ≤ 0.01) with the hypothesized convergent validity measure.
conclusion: theory and empirical research indicate that exposure to chronic workplace stress may play an important part in the development of psychological distress, including burnout, depression, and posttraumatic stress disorder (ptsd). workplace stress and stress reactions may potentially interfere with job performance. as no extant measure assesses for chronic workplace stress in ems, the validation of this chronic stress measure enhances the tools ems leaders and researchers have in assessing the health and well-being of ems providers. effect of naltrexone. background: survivors of sarin and other organophosphate poisoning can develop delayed encephalopathy that is not prevented by standard antidotal therapy with atropine and pralidoxime. a rat model of poisoning with the sarin analogue diisopropylfluorophosphate (dfp) demonstrated impairment of spatial memory despite antidotal therapy with atropine and pralidoxime. additional antidotes are needed after acute poisonings that will prevent the development of encephalopathy. objectives: to determine the efficacy of naltrexone in preventing delayed encephalopathy after poisoning with the sarin analogue dfp in a rat model. the hypothesis is that naltrexone would improve performance on spatial memory tasks after acute dfp poisoning. the sarin analogue dfp was used because it has similar toxicity to sarin while being less dangerous to handle. methods: a randomized controlled experiment on the effects of naltrexone on spatial memory after dfp poisoning was conducted at a university animal research laboratory. long evans rats weighing 250-275 grams were randomized to a dfp group (n = 4, rats received a single intraperitoneal (ip) injection of dfp 5 mg/kg) or a dfp+naltrexone group (n = 5, rats received a single ip injection of dfp (5 mg/kg) followed by naltrexone 5 mg/kg/day). after injection, rats were monitored for signs and symptoms of cholinergic toxicity.
if toxicity developed, antidotal therapy was initiated with atropine. background: one of the primary goals of management of patients presenting with known or suspected acetaminophen (apap) ingestion is to identify the risk for apap-induced hepatotoxicity. current practice is to measure the apap level at a minimum of 4 hours post-ingestion and plot this value on the rumack-matthew nomogram. one retrospective study of apap levels drawn less than 4 hours post-ingestion found a level less than 100 mcg/ml to be sufficient to exclude toxic ingestion. objectives: the aim of this study was to prospectively determine the negative predictive value (npv) for toxicity of an apap level of less than 100 mcg/ml obtained less than 4 hours post-ingestion. methods: this was a multicenter prospective cohort study of patients presenting to one of five tertiary care hospitals that are part of the toxicology investigator's consortium (toxic). eligible patients presented to the emergency department less than 4 hours after known or suspected ingestion and had the initial apap level obtained at greater than 1 but less than 4 hours post-ingestion. a second apap level was obtained at 4 hours or more post-ingestion and plotted on the rumack-matthew nomogram to determine risk of toxicity. the outcome of interest was the npv of an initial apap level less than 100 mcg/ml. a power analysis based on alpha = 0.05 and power of 0.80 yielded the requirement of 71 subjects. results: data were collected on 171 patients over a 30-month period from may 2009 to nov 2011. patients excluded from npv analysis consisted of: initial apap level greater than 100 mcg/ml (31), negligible apap level on both the initial and confirmatory apap level (31), initial apap level drawn less than one hour after ingestion (15), or an unknown time of ingestion (1). ninety-three patients met the eligibility criteria.
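The study's primary outcome, negative predictive value, is the fraction of screen-negative patients who are truly not at risk. A one-line sketch with illustrative counts (chosen to resemble, but not asserted to be, the cohort's figures):

```python
def negative_predictive_value(true_negatives, false_negatives):
    """NPV = TN / (TN + FN): of all screen-negative patients,
    the fraction who are truly not at risk."""
    return true_negatives / (true_negatives + false_negatives)

# Illustrative: 91 screen-negative patients truly not at risk,
# 2 screen-negative patients later judged at risk on the nomogram.
npv = negative_predictive_value(91, 2)
```

Note that NPV, unlike sensitivity, depends on how common toxicity is in the screened population, so it generalizes only to settings with similar prevalence.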
two patients (2.2%) with an initial apap level less than 100 mcg/ml (54 mcg/ml at 90 min, 38 mcg/ml at 84 min) were determined to be at risk for toxicity based on the rumack-matthew nomogram. s330, 2012 saem annual meeting abstracts. implementation of an emergency department sign-out checklist improves patient hand-offs at change of shift (nicole m ma). computer-assisted self-interviews improve testing for chlamydia and gonorrhea in the pediatric emergency department. is the australian triage system a better indicator of psychiatric patients' needs for intervention than the ena emergency severity index triage system? patients were given an initial dose of 10 mg droperidol intramuscularly followed by an additional dose of 10 mg after 15 min if required. inclusion criteria were patients requiring physical restraint and parenteral sedation. the primary outcome was the time to sedation. secondary outcomes were the proportion of patients requiring additional sedation within the first hour, over-sedation measured as −3 on the sedation assessment tool, and respiratory compromise measured as oxygen saturation <90%. results: droperidol was administered to 424 patients and 370 of these had sedation scores documented. presentations included 56% with alcohol intoxication. dose ranged from 2.5 mg to 30 mg, median 10 mg (interquartile range …). conclusion: droperidol is effective for rapid sedation for abd and rarely causes over-sedation. background: serum creatinine (scr) is widely used to predict risk; however, gfr is a better assessment of kidney function. objectives: to compare the ability of gfr and scr to predict the development of contrast-induced nephropathy (cin) among ed patients receiving cects. we hypothesized that gfr would be the best available predictor of cin. methods: this was a retrospective chart review of ed patients ≥18 years old who had a chest or abdomen/pelvis cect between 06/01/11 and 07/31/11. baseline and follow-up scr levels were recorded.
patients with initial scr >1.6 mg/dl were excluded, as per hospital radiology department protocol. cin was defined as a scr increase of either 25% or 0.5 mg/dl, or a gfr decrease of 25%, within 72 hours of contrast exposure. gfr was calculated using the ckd epi and mdrd formulae, and analyzed in original units and in categorized form (<60, ≥60). results: with each additional unit decrease in ckd epi, subjects were 3% more likely to develop cin (or = 1.03; p = 0.0281). additionally, subjects with ckd epi <60 were 3.20 times (or) more likely to have cin than subjects with ckd epi ≥60. in original units, ckd epi (p < 0.0001) and mdrd (p = 0.0016) both had a significantly higher auc than scr. conclusion: age, as an independent variable, is the best predictor of cin when compared with scr and gfr. due to the small number of cases with cin, the confidence intervals associated with the odds ratios are wide. future research should focus on patient risk stratification and establishing ed interventions to prevent cin. 694. a rat model of carbon monoxide-induced neurotoxicity (heather ellsworth). non-traumatic subarachnoid hemorrhage diagnosed by lumbar puncture following non-diagnostic head ct: a retrospective case-control study and decision … a dass score of >14 has been previously defined as an indicator of increased stress levels. multivariable logistic regression was utilized to identify demographic and work-life characteristics significantly associated with stress. results: 53.6% of individuals responded to the survey (34,340/64,032) and the prevalence of stress was estimated at 5.9%. the following work-life characteristics were associated with stress: certification level, work experience, and service type. the odds of stress in paramedics were 32% higher when compared to emt-basics (or = 1.32, 95% ci = 1.23-1.42). when compared to ≤2 years of experience, those with more experience (… 28-2.18) were more likely to be stressed.
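Odds ratios like OR = 1.32 (95% CI 1.23-1.42) above come from logistic regression, but the same quantity can be illustrated from a simple 2x2 exposure-outcome table with a Woolf (log-normal) interval. The counts here are hypothetical, not the survey's:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table (exposed cases a, exposed non-cases b,
    unexposed cases c, unexposed non-cases d), with a Woolf 95% CI."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts: 120/1520 stressed paramedics vs. 90/1540 stressed
# EMT-basics, giving an OR in the same neighborhood as the abstract's.
or_, lo, hi = odds_ratio_ci(120, 1400, 90, 1450)
```

The abstract's closing caveat follows directly from the standard-error formula: with few outcome events, the 1/a and 1/c terms dominate and the interval widens.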
ems professionals working in county (or = 1 ci = 1.07-1.51) and private services (or = 1 56) were more likely than those working in fire-based services to be stressed. the following demographic characteristics were associated with stress: general health and smoking status finally, former smokers (or = 1.34, 95% ci = 1.17-1.54) and current smokers (or = 1.37, 95% ci = 1.18-1.59) were more likely to be stressed than non-smokers literature suggests this is within the range of stress among nurses, and lower than physicians. while the current study was able to identify demographic and work-life characteristics associated with stress, the long-term effects are largely unknown methods: design: prospective randomized controlled trial. subjects: female sus scrofa swine weighing 45-55kg were infused with amitriptyline 0.5 mg/kg/minute until the map fell to 60% of baseline values. subjects were then randomized to experimental group (ife 7 ml/kg followed by an infusion of 0.25 ml/kg/minute) or control group (sb 2 meq/kg plus equal volume of normal saline). interventions: we measured continuous heart rate (hr), sbp, map, cardiac output (co), systemic vascular resistance (svr), and venous oxygen saturation (svo 2 ). laboratory values monitored included ph, pco 2 , bicarbonate, lactate, and amitriptyline levels. descriptive statistics including means, standard deviations, standard errors of measurement, and confidence limits were calculated. results: of 14 swine, seven each were allocated to ife and sb groups. there was no difference at baseline for each group regarding hr, sbp, map, co, svr, or svo 2 . ife and sb groups required similar mean amounts of tca to reach hypotension one ife and two sb pigs survived. conclusion: in this interim data analysis of amitriptyline-induced hypotensive swine, we found no difference in mitigating hypotension between ife and sb lipid rescue 911: a survey of poison center medical directors regarding intravenous fat emulsion therapy michael r. 
christian 1 , erin m. pallasch cook county hospital (stroger), chicago, il 745 reliability of non-toxic acetaminophen concentrations obtained less than 4 hours after ingestion evaluating age in the field triage of injured background: hiv screening in eds is advocated to achieve the goal of comprehensive population screening. yet, hiv testing in the ed is sometimes thwarted by a patient's condition (e.g. intoxication) or environmental factors (e.g. other care activities). whether it is possible to test these patients at a later time is unknown. objectives: we aimed to determine if ed patients who were initially unable to receive an hiv testing offer might be tested in the ed at a later time. we hypothesized that factors preventing testing are transient and that there are subsequent opportunities to repeat testing offers. methods: we reviewed medical records for patients presenting to an urban, academic ed who were approached consecutively to offer hiv testing during randomly selected periods from january 2008 to january 2009. patients for whom the initial attempted offer could not be completed were reviewed in detail with standardized abstraction forms, duplicate abstraction, and third-party discrepancy adjudication. primary outcomes included repeat hiv testing offers during that ed visit, and whether a testing offer might eventually have been possible either during the initial visit or at a later visit within 6 months. outcomes are described as proportions with confidence intervals. results: of 824 patients approached, initial testing offers could not be completed for 120 (15%). these 120 were 62% male, 52% white, and had a median age of 41 (18-64). a repeat offer of testing during the initial visit would have been possible for 99/120 (83%), and 52/99 (53%) were actually offered testing on repeat approach. 
of the 21 for whom a testing offer would not have been possible on the initial visit, 14 (67%) had at least one additional visit within 6 months, and 11/14 (79%) could have been offered testing on at least one visit. overall, a repeat testing offer would have been possible for 110/120 (93%, 95% ci 85-96%). conclusion: factors preventing an initial offer of hiv testing in the ed are generally transient. opportunities for repeat approach during initial or later ed encounters suggest that, given sufficient resources, the ed could succeed in comprehensively screening the population presenting for care. ed screening personnel who are initially unable to offer testing should repeat their attempt.

hiv adopt an ''opt-out'' rapid hiv screening model in order to identify hiv infected patients. previous studies nationwide have shown acceptance rates for hiv screening of 20-90% in emergency departments. however, it is unknown how acceptance rates will vary in a culturally and ethnically diverse urban emergency department. objectives: to determine the characteristics of patients who accept or refuse ''opt-out'' hiv screening in an urban emergency department. methods: a self-administered, anonymous survey is administered to ed patients who are 18 to 64 years of age. the questionnaire is administered in english, russian, mandarin, and spanish. questions include demographic characteristics, hiv risk factors, perception of hiv risk, and acceptance of rapid hiv screening in the emergency department. results: to date 145 patients have responded to our survey. of the 145, 102 (70.3%) did not accept an hiv test (group 1) in their current ed visit and 43 (29.7%) accepted an hiv test (group 2). the two major reasons given for opting out (i.e., group 1) were ''i do not feel that i am at risk'' (59.8%) and ''i have been tested for hiv before'' (25.5%).
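the proportions with 95% cis quoted above (e.g. 110/120 = 93%, 95% ci 85-96%) come from standard binomial interval formulas. the abstract does not state which interval the authors used; a wilson score interval, a common choice, reproduces the reported bounds closely. a minimal sketch, with the counts from the abstract used purely as illustration:

```python
import math

def wilson_ci(successes, n, z=1.96):
    # wilson score interval for a binomial proportion;
    # the abstract does not say which interval was actually used.
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = p + z ** 2 / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (centre - half) / denom, (centre + half) / denom

# 110/120 repeat-offer feasibility from the abstract above
lo, hi = wilson_ci(110, 120)
```

with these counts the interval works out to roughly 85-95%, in line with the reported 85-96%; any small discrepancy reflects whichever exact method the authors chose.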
there was no difference between the groups in regards to sex (p = 0.737), age (p = 0.351), religious affiliation (p = 0.750), marital status (p = 0.331), language spoken at home (p = 0.211), or whether they had been hiv tested before (73.2% in group 1 and 59.4% in group 2; p = 0.123). however, there was a statistically significant difference with regards to educational level and income. more patients in group 1 (69.0%) than in group 2 (46.1%) had less than a college-level education (p < 0.05). similarly, more patients in group 1 (58.3%) than in group 2 (34.8%) had an annual household income of ≤$25,000 (p < 0.05). conclusion: in a culturally and ethnically diverse urban emergency department, patients with a lower socioeconomic status and educational level tend to opt out of the hiv screening test offered in the ed. no significant difference in acceptance of ed hiv testing was found to date based on primary language spoken at home or religious affiliation.

background: antimicrobial resistance is a problem that affects all emergency departments. objectives: our goal was to examine all urinary pathogens and their resistance patterns from urine cultures collected in the emergency department (ed). methods: this study was performed at an urban/suburban community-teaching hospital with an annual volume of 40,000 visits. using electronic records, all cases of urine cultures received in 2009 were reviewed for data including type of bacteria, antibiotic resistance, and health care exposure (hcx). hcx was defined as no prior hospitalization within the previous six months, hospitalization within the previous three months, hospitalization within the previous six months, nursing home residence (nh), or presence of an indwelling urinary catheter (uc). one investigator abstracted all data, with a second re-abstracting a random 5%, yielding kappa statistics between 0.697 and 1.00.
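the inter-abstractor kappa statistics cited above (0.697 to 1.00) measure chance-corrected agreement between the two data abstractors. a minimal sketch of cohen's kappa for a single binary abstraction item, using illustrative counts rather than the study's data:

```python
def cohens_kappa(both_yes, yes_no, no_yes, both_no):
    # cohen's kappa from a 2x2 agreement table for two abstractors;
    # yes_no = rater 1 yes / rater 2 no, etc. counts are illustrative.
    n = both_yes + yes_no + no_yes + both_no
    po = (both_yes + both_no) / n                 # observed agreement
    pe = ((both_yes + yes_no) * (both_yes + no_yes)
          + (no_yes + both_no) * (yes_no + both_no)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

kappa = cohens_kappa(20, 5, 5, 70)  # hypothetical example table
```

values above roughly 0.6-0.7, as in the abstract, are conventionally read as substantial agreement.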
background: approximately 12-20% of patients treated with epinephrine for anaphylaxis receive a second dose, but the risk factors associated with repeat epinephrine use remain poorly defined. objectives: to determine whether obesity is a risk factor for requiring 2+ epinephrine doses in patients who present to the emergency department (ed) with anaphylaxis due to food allergy or stinging insect hypersensitivity. methods: we performed a retrospective chart review at four tertiary care hospitals that care for adults and children in new england between the following time periods: massachusetts general hospital (1/1/01-12/31/06), brigham and women's hospital (1/1/01-12/31/06), children's hospital boston (1/1/01-12/31/06), hasbro children's hospital (1/1/04-12/31/09). we reviewed the medical records of all patients presenting to the ed for food allergy or stinging insect hypersensitivity using icd-9-cm codes. we focused on anthropometric data and the number of epinephrine treatments given before and during the ed visit. among children, calculated bmis were classified according to cdc growth indicators as underweight, healthy, overweight, or obese. all patients who presented on or after their 18th birthday were considered adults.

background: transitions of care are ubiquitous in the emergency department (ed) and inevitably introduce the opportunity for errors. despite recommendations in the literature, few emergency medicine (em) residency programs provide formal training or a standard process for patient hand-offs. checklists have been shown to be effective quality improvement measures in inpatient settings and may be a feasible method to improve ed hand-offs. objectives: to determine if the use of a sign-out checklist improves the accuracy and efficiency of resident sign-out in the ed as measured by reduced omission of key information, communication behaviors, and time to sign out each patient.
methods: a prospective study of first- and second-year em and non-em residents rotating in the ed at an urban academic medical center with an annual ed volume of 55,000. trained clinical research assistants observed resident sign-out during shift change over a two-week period and completed a 15-point binary observable-behavior data collection tool to indicate whether or not key components of sign-out occurred. time to sign out each patient was recorded. we then created and implemented a computerized sign-out checklist consisting of key elements that should be addressed during transitions of care, and instructed residents to use this during hand-offs. a two-week post-intervention observation phase was conducted using the same data collection tool. proportions, means, and non-parametric comparison tests were calculated using stata. results: one hundred fifteen sign-outs were observed prior to checklist implementation and 72 after; one sign-out was excluded for incompleteness. significant improvements were seen in four of the measured sign-out components: inclusion of history of present illness increased by 18% (p < 0.001), likely diagnosis increased by 17% (p = 0.015), disposition status increased by 18% (p < 0.01), and patient/care team awareness of plan increased by 19% (p < 0.01) (figure 1). time data for 108 sign-outs pre-implementation and 72 post-implementation were available. seven sign-outs were excluded for incompleteness or spurious values. mean length of sign-out was 83 s (95% ci 65 to 100) pre-implementation and 71.7 s (95% ci 52 to 92) post-implementation per patient. conclusion: implementation of a checklist improved the transfer of information but did not affect the overall length of time for the sign-out.

objectives: to determine risk factors associated with adult patients presenting to the ed with cellulitis who fail initial antibiotic therapy and require a change of antibiotics or admission to hospital.
methods: this was a prospective cohort study of patients ≥18 years presenting with cellulitis to one of two tertiary care eds (combined annual census 120,000). patients were excluded if they had been treated with antibiotics for the cellulitis prior to presenting to the ed, if they were admitted to hospital, or if they had an abscess only. trained research personnel administered a questionnaire at the initial ed visit, with telephone follow-up 2 weeks later. patient characteristics were summarized using descriptive statistics, and 95% confidence intervals (cis) were estimated using standard equations. backwards stepwise multivariable logistic regression models determined predictor variables independently associated with treatment failure (failed initial antibiotic therapy and required a change of antibiotics or admission to hospital). results: 598 patients were enrolled, 47 were excluded, and 53 were lost to follow-up. the mean (sd) age was 53.1 (18.4) years and 56.4% were male. 497 (99.8%) patients were given antibiotics in the ed: 185 (37.2%) oral, 231 (46.5%) iv, and 81 (16.3%) both oral and iv. 102 (20.5%) patients had a treatment failure. fever (temp >38°c) at triage (or: 4.1, 95% ci: 1.5, 10.7), leg ulcers (or: 3.1, 95% ci: 1.4, 6.6), edema or lymphedema (or: 2.5, 95% ci: 1.4, 4.5), and prior cellulitis in the same area (or: 1.8, 95% ci: 1.1, 2.9) were independently associated with treatment failure. conclusion: this analysis found four risk factors associated with treatment failure in patients presenting to the ed with cellulitis. these risk factors should be considered when initiating empiric outpatient antibiotic therapy for patients with uncomplicated cellulitis.

background: children presenting for care to a pediatric emergency department (ped) commonly require intravenous catheter (iv) placement. prior studies report that the average number of sticks to successfully place an iv in children is 2.4.
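the cellulitis study above reports odds ratios adjusted via multivariable logistic regression; the unadjusted counterpart of such an or, with its wald 95% ci, is simple arithmetic on a 2x2 table. a minimal sketch on made-up counts (not the study's data), showing only that underlying arithmetic:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a = exposed with treatment failure, b = exposed without,
    # c = unexposed with failure, d = unexposed without (labels illustrative).
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # se of the log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)  # invented counts only
```

an adjusted or from a regression model will generally differ from this crude or whenever the covariates are associated with both exposure and outcome.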
successfully placing an iv requires identification of appropriate venous access targets. the veinviewer vision® (vvv) assists with iv placement by projecting a map of subcutaneous veins on the surface of the skin using near-infrared light. objectives: to compare the effectiveness of the vvv versus standard approaches, sight (s) and sight plus palpation (s+p), for identifying peripheral veins for intravenous catheter placement in children treated in a ped. methods: experienced pediatric emergency nurses and physicians identified peripheral venous access targets appropriate for intravenous cannulation in a cross-sectional convenience sample of english-speaking children aged 2-17 years presenting for treatment of sub-critical injury or illness whose parents provided consent. the clinicians marked the veins with different colored washable markers and counted them on the dorsum of the hand and in the antecubital fossa using the three approaches: s, s+p, and vvv. a trained research assistant photographed each site for independent counting after each marking and recorded demographics and bmi. counts were validated using independent photographic analyses. data were entered into sas 9.2 and analyzed using paired t-tests. results: 146 patients completed the study. clinicians were able to identify significantly more veins on the dorsum of the hand using vvv than s alone or s+p, 3.26 (p < 0.0001, ci 2.89-3.64) and 2.31 (p < 0.0001, ci 1.97-2.65), respectively, as well as significantly more veins in the antecubital fossa using vvv than s alone or s+p, 2.62 (p < 0.0001, ci 2.29-2.96) and 1.93 (p < 0.0001, ci 1.62-2.42), respectively. the differences in numbers of veins identified remained significant at the p < 0.05 level across all ages, races, and bmis of children and across clinicians and validating independent photographic analyses.
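the paired t-tests named above compare, within each patient, vein counts under two approaches. the t statistic itself reduces to the mean of the paired differences over its standard error; a minimal sketch with invented difference data (not the study's):

```python
import math

def paired_t(diffs):
    # t statistic and degrees of freedom for paired differences
    # (e.g. vvv count minus sight count per patient; data illustrative).
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

t, df = paired_t([1, 2, 3, 2, 2])  # hypothetical per-patient differences
```

the p-value then comes from the t distribution with df degrees of freedom, which statistical packages such as the sas used in the study supply directly.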
conclusion: experienced emergency nurses and physicians were able to identify significantly more venous access targets appropriate for intravenous cannulation in the dorsum of the hand and antecubital fossa of children presenting for treatment in a ped using vvv than with the standard approaches of sight or sight plus palpation.

background: mental health emergencies have increased over the past two decades, and contribute to the ongoing rise in u.s. ed visit volumes. although data are limited, there is a general perception that the availability of in-person psychiatric consultation in the ed and of inpatient psychiatric beds is inadequate. objectives: to examine the availability of in-person psychiatry consultation in a heterogeneous sample of u.s. eds, and typical delays in transfer of ed patients to an inpatient psychiatric bed. methods: during 2009-2011, we mailed a survey to all ed directors in a convenience sample of nine us states (ar, co, ga, hi, ma, mn, or, vt, and wy). all sites were asked: ''are psychiatric consults available in-person to the ed?'' (yes/no), with affirmative respondents asked about the typical delay. sites also were asked about typical ed boarding time between a request for patient transfer and actual patient departure from the ed to an inpatient psychiatric bed. ed characteristics included rural/urban location, visit volume (visits/hour), admission rate, ed staffing, and the proportion of patients without insurance. data analysis used chi-square tests and multivariable logistic regression. results: surveys were collected from 495 (91%) of the 541 eds, with a >80% response rate in every state. overall, only 30% responded that psychiatric consults were available in-person to the ed. in multivariable logistic regression, ed characteristics independently associated with lack of in-person psychiatric consultation were: location within specific states (e.g., ar, ga), rural location, lower visit volume, and lower admission rate.
among the subset of eds with psychiatric consults available, 48% reported a typical wait time of at least 1 hour. overall, 54% of eds reported that the typical time from request to actual patient transfer to an inpatient psychiatric bed was >6 hours, and 47% reported a maximum time in the past year of >1 day (median 3 days, iqr 2-4). in a multivariable model, location in ma and higher visit volume were associated with greater odds of a maximum wait time of >1 day. conclusion: among 495 surveyed eds in nine states, only 30% have in-person psychiatric consultants available. moreover, approximately half of eds report boarding times of >6 hours from request for transfer to actual departure to an inpatient psychiatric bed.

background: many emergency departments (ed) in the united states use a five-tiered triage protocol that has a limited evaluation of psychiatric patients. the australian triage scale (ats), a psychiatric triage system, has been used throughout australia and new zealand since the early 1990s. objectives: the objective of the study is to compare the current triage system, the emergency nurses association (ena) esi 5-tier, to the ats for the evaluation of psychiatric patients presenting to the ed. methods: a convenience sample of patients, 18 years of age and older, presenting with psychiatric complaints at triage were given the ena triage assessment by the triage nurse. a second triage assessment, performed by a research fellow, included all observed and reported elements using the ats protocol, a self-assessment survey, and an agitation assessment using the richmond agitation sedation scale (rass). the study was performed at an inner-city level i trauma center with 60,000 visits per year. the ed was a catchment facility for the police department for psychiatric patients in the area. patients were excluded if they were unstable, unable to communicate, or had a non-psychiatric complaint. results were analyzed in spss v16.
the analysis of data used frequencies, descriptive statistics, and anova. results: a total of 100 patients were enrolled in the study: 72% were african american, 14% caucasian, 13% hispanic, 1% asian, and 1% indian; 63% of subjects enrolled were male. the patients' level of agitation using the rass showed 59% were alert and calm, 22% were restless and anxious, 6% were agitated, and 5% were combative, violent, or dangerous to self. the only significant correlations found were between the ats and several self-assessment questions: ''i feel agitated on a 0 to 10 scale'' (p = 0.031) and ''i feel violent on a 0 to 10 scale'' (p = 0.001). there were no significant correlations found among the ena triage, rass scores, and throughput times. conclusion: the ats was more sensitive to the patient declaring that he or she was agitated or felt violent. this suggests that the ats might be a more useful system for determining the severity of need of psychiatric patients presenting to the ed.

background: hemoglobin-based oxygen carriers (hbocs) have been evaluated for small-volume resuscitation of hemorrhagic shock due to their oxygen-carrying capability, but have found limited utility due to vasoactive side-effects from nitric oxide (no) scavenging. objectives: to define an optimal hboc dosing strategy and evaluate the effect of an added no donor, we used a prehospital swine polytrauma model to compare the effect of low- vs. moderate-volume hboc resuscitation with and without nitroglycerin (ntg) co-infusion as an no donor. we hypothesized that survival time would improve with moderate resuscitation and that an no donor would add additional benefit. methods: survival time was compared in groups (n = 7) of anesthetized swine subjected to simultaneous traumatic brain injury and uncontrolled hemorrhagic shock by aortic tear. animals received one of three different resuscitation fluids: lactated ringer's (lr), hboc, or vasoattenuated hboc with ntg co-infusion.
for comparison, these fluids were given in a severely limited fashion (sl) as one bolus every 30 minutes up to four total, or in a moderately limited fashion (ml) as one bolus every 15 minutes up to seven total, to maintain mean arterial pressure ≥60 mmhg. comparison of resuscitation regimen and fluid type on survival time was made using two-way anova with interaction and tukey-kramer adjustment for individual comparisons. results: there was a significant interaction between fluid regimen and resuscitation fluid type (anova, p = 0.011), indicating that the response to sl or ml resuscitation was fluid type-dependent. within the lr and hboc+ntg groups, survival time (mean, 95% ci) was longer for sl, 323.5 min.

injuries are common and result from many different mechanisms of injury (moi). knowing common fracture locations may help in diagnosis and treatment, especially in patients presenting with distracting injuries that may mask the pain of a radius fracture. objectives: we set out to determine the incidence of radius fracture locations among patients presenting to an urban emergency department (ed).

background: carbon monoxide (co) is the leading cause of poisoning morbidity and mortality in the united states. standard treatment includes supplemental oxygen and supportive care. the utility of hyperbaric oxygen (hbo) therapy has been challenged by a recent cochrane review. hypothermia may mitigate delayed neurotoxic effects after co poisoning, as it is effective in cardiac arrest patients with similar neuropathology. objectives: to develop a rat model of acute and delayed severe co toxicity as measured by behavioral deficits and cell necrosis in post-sacrifice brain tissue. methods: a total of 28 rats were used for model development; variable concentrations of co and exposure times were compared to achieve severe toxicity.
for the protocol, six senescent long evans rats were exposed to 2,000 ppm of co for 20 minutes then 1,500 ppm for 160 minutes, followed by three successive dives at 30,000 ppm with an endpoint of apnea or seizure; there was a brief interlude between dives for recovery. a modified katz assessment tool was used to assess behavior at baseline and 2 hours, 1 day, and 1, 2, 3, 4, 5, and 6 weeks post-exposure. following this, the brains were transcardially fixed with formalin, and 5 μm sagittal slices were embedded in paraffin and stained with hematoxylin and eosin. a pathologist quantified the percentage of necrotic cells in the cortex, hippocampus (pyramidal cells), caudoputamen, cerebellum (purkinje cells), dentate gyrus, and thalamus of each brain to the nearest 10% from 10 randomly selected high-power fields (400x).

background: there remains controversy about the cardiotoxic effects of droperidol, and in particular the risk of qt prolongation and torsades de pointes (tdp). objectives: this study aimed to investigate the cardiac and haemodynamic effects of high-dose parenteral droperidol for sedation of acute behavioural disturbance (abd) in the emergency department (ed). methods: a standardised intramuscular (im) protocol for the sedation of ed patients with abd was instituted as part of a prospective observational safety study in four regional and metropolitan eds. patients with abd were given an initial dose of 10 mg droperidol followed by an additional dose of 10 mg after 15 min if required. inclusion criteria were patients requiring physical restraint and parenteral sedation. the primary outcome was the proportion of patients with a prolonged qt interval on ecg. the qt interval was plotted against the heart rate (hr) on the qt nomogram to determine if the qt was abnormal. secondary outcomes were the frequency of hypotension and cardiac arrhythmias. results: ecgs were available from 273 of 424 patients with abd given droperidol.
the median dose was 10 mg (iqr 10-15 mg; range: 5 to 30 mg). the median age was 33 years (range: 16 to 92) and 163 were males (60%). a total of four (1%) qt-hr pairs were above the ''at-risk'' line on the qt nomogram. transient hypotension occurred in 8 (3%), and no arrhythmias were detected. conclusion: droperidol appears to be safe when used for rapid sedation in the dose range of 5 to 30 mg. it rarely causes hypotension or qt prolongation.

background: soldiers and law enforcement agents are repeatedly exposed to blast events in the course of carrying out their duties during training and combat operations. little data exist on the effect of this exposure on the physiological function of the human body. both military and law enforcement dynamic entry personnel, ''breachers'', began expressing sensitivity to the risk of injury as a result of multiple blast exposures. breachers apply explosives as a means of gaining access to barricaded or hardened structures. these specialists can be exposed to as many as a dozen lead-encased charges per day during training exercises. objectives: this observational study was performed by the breacher injury consortium to determine the effect of short-term exposure to blasts by breachers on whole blood lead levels (blls) and zinc protoporphyrin levels (zppls). methods: two 2-week basic breaching training classes were conducted by the united states marine corps' weapons training battalion dynamic entry school. each class included 14 students and up to three instructors, with six non-breaching marines serving as a control group. to evaluate for lead exposure, venous blood samples were acquired from study participants on the weekend prior to and following training in the first training class, whereas the second training class had an additional level performed mid-training. blls and zppls were measured in a whole-blood sample using the furnace atomic absorption method and hematofluorimeter method, respectively.
results: analysis of these blast injury data indicated students demonstrated significantly increased blls post-explosion (mean = 7 mcg/dl, sd 2.42, p < 0.001) compared to pre-training (mean = 3 mcg/dl, sd 1.60) and control subjects (mean = 3 mcg/dl, sd 2.73, p < 0.001). instructors also demonstrated significantly increased blls post-explosion (mean = 6 mcg/dl, sd 1.95, p < 0.02) compared to pre-training (mean = 3 mcg/dl, sd 1.14) and control subjects (mean = 3 mcg/dl, sd 2.73, p < 0.001). student and instructor zppls were not significantly different post-training compared to pre-training or control groups. conclusion: the observation from this study that breachers are at risk of mild increases in blls supports the need for further investigation into the role of lead following repeated blast exposure with munitions encased in lead.

background: notification of a patient's death to family members represents a challenging and stressful task for emergency physicians. complex communication skills such as those required for breaking bad news (bbn) are conventionally taught with small-group and other interactive learning formats. we developed a de novo multi-media web-based learning (wbl) module of curriculum content for a standardized patient interaction (spi) for senior medical students during their emergency medicine rotation. objectives: we proposed that use of an asynchronous wbl module would result in students' skill acquisition for breaking bad news. methods: we tracked module utilization and performance on the spi to determine whether students accessed the materials and if they were able to demonstrate proficiency in its application. performance on the spi was assessed utilizing a bbn-specific content instrument developed from the griev_ing mnemonic as well as a previously validated instrument for assessing communication skills. results: three hundred seventy-two students were enrolled in the bbn curriculum.
there was a 92% completion rate of the wbl module despite students being given the option to utilize review articles alone for preparation. students interacted with the activities within the module as evidenced by a mean number of mouse clicks of 42.1 (sd 21.6). overall spi scores were 94.5% (sd 4.4), with content checklist scores of 92.8% (sd 5.7) and interpersonal communication scores of 97.9% (sd 4.7). five students had failing content scores (<75%) on the spi and had a mean number of clicks of 30.8 (sd 28.2), which is not significantly lower than those passing (p = 0.21). students in the first year of wbl deployment completed self-confidence assessments which showed significant increases in confidence (2.86 to

background: pelvic ultrasonography (us) is a useful bedside tool for the evaluation of women with suspected pelvic pathology. while pelvic us is often performed by the radiology department, it often lacks clinical correlation and takes more time than bedside us in the ed. this was a prospective observational study comparing the ed length of stay (los) of patients receiving ed us versus those receiving radiology us. objectives: the primary objective was to measure the difference in ed los. the secondary objectives were to 1) assess the role of pregnancy status, ob/gyn consult in the ed, and disposition in influencing the ed los; and 2) assess the safety of ed us by looking at patient return to the ed within 2 weeks and whether that led to an alternative diagnosis. methods: subjects were women over 13 years old presenting with a gi or gu complaint who received either an ed or radiology us. a t-test was used for the primary objective, and linear regression to test the secondary objective. odds ratios were performed to assess for interaction between these factors and type of ultrasound. subgroup analyses were performed if significant interaction was detected. results: forty-eight patients received an ed us and 85 patients received a radiology us.
subjects receiving an ed us spent 162 minutes less in the ed (p < 0.001). in multivariate analysis, even when controlling for pregnancy status, ob/gyn consult, and disposition, patients who received an ed us had a los reduction of 108 minutes (p < 0.05). in odds ratio analysis, patients who were pregnant were 11 times more likely to have received an ed us (p < 0.05). patients who received an ob/gyn consult in the ed were five times more likely to receive a radiology us (p < 0.05). there was no association between type of us and disposition. in subgroup analyses, pregnant and non-pregnant patients who received an ed us still had a los reduction of 140 minutes (p < 0.01) and 112 minutes (p < 0.05), respectively. sample sizes were inadequate for subgroup analysis of subjects who had ob/gyn consults. in patients who did not receive an ob/gyn consult, those who received an ed us had a los reduction of 139 minutes (p < 0.001). finally, 10% of subjects returned within two weeks, but none led to an alternative diagnosis. conclusion: even when controlling for disposition, ob/gyn consultation, and pregnancy status, patients who received an ed us had a statistically and clinically significant reduction in their ed los. in addition, ed us is safe and accurate.

background: although early surface cooling of burns reduces pain and depth of injury, there are concerns that cooling of large burns may result in hypothermia and worse outcomes. in contrast, controlled mild hypothermia improves outcomes after cardiac arrest and traumatic brain injury. objectives: the authors hypothesized that controlled mild hypothermia would prolong survival in a fluid-resuscitated rat model of large scald burns. methods: forty sprague-dawley rats (250-300 g) were anesthetized with 40 mg/kg intramuscular ketamine and 5 mg/kg xylazine, with supplemental inhalational isoflurane as needed.
a single full-thickness scald burn covering 40% of the total body surface area was created per rat using a mason-walker template placed in boiling water (100 deg c) for a period of 10 seconds. the rats were randomized to hypothermia (n = 20) and non-hypothermia (n = 20). core body temperature was continuously monitored with a rectal temperature probe. hypothermia was induced through intraperitoneal injection of cooled (4 deg c) saline. the core temperature was reduced by 2 deg c and maintained for a period of 2 hours, applying an ice or heat pack when necessary. the rats were then rewarmed back to baseline temperature. in the control group, room-temperature saline was injected into the intraperitoneal cavity and core temperature was maintained using a heating pad as needed. the rats were monitored until death or for a period of 7 days, whichever was greater. the primary outcome was death. the difference in survival was determined using kaplan-meier analysis and the log-rank test. results: the mean core temperatures were 32.5 deg c for the hypothermic group and 35.6 deg c for the normothermic group. the mean survival times were 124 hours for the hypothermic group (95% confidence interval [ci] = 98 to 150) and 100 hours for the normothermic group (95% ci = 68 to 132). the seven-day survival rates in the hypothermic and non-hypothermic groups were 67% and 53%, respectively. these differences were not significant (p = 0.33 for both comparisons). conclusion: induction of brief mild hypothermia numerically increased but did not significantly prolong survival in a resuscitated rat model of large scald burns.

objectives: we sought to determine levels of serum mtdna in ed patients with sepsis compared to controls, and the association between mtdna and both inflammation and severity of illness among patients with sepsis. methods: prospective observational study of patients presenting to one of three large, urban, tertiary care eds.
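the kaplan-meier estimate used in the scald-burn survival comparison above is a running product over event times: at each death the survival estimate is multiplied by the fraction of at-risk animals surviving that time, while censored observations only shrink the risk set. a minimal sketch assuming no tied times, with invented data rather than the study's:

```python
def kaplan_meier(times, events):
    # product-limit survival estimate; event = 1 for death, 0 for censored.
    # minimal sketch: assumes no two subjects share the same time.
    at_risk = len(times)
    surv, curve = 1.0, []
    for t, died in sorted(zip(times, events)):
        if died:
            surv *= (at_risk - 1) / at_risk   # step down at each death
        curve.append((t, surv))
        at_risk -= 1
    return curve

curve = kaplan_meier([2, 3, 5, 7], [1, 1, 0, 1])  # toy data in days
```

the log-rank test reported alongside it then compares the observed deaths in each group against expectation under a common survival curve.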
inclusion criteria: 1) septic shock: suspected infection, two or more systemic inflammatory response syndrome (sirs) criteria, and systolic blood pressure (sbp) <90 mmhg despite a fluid bolus; 2) sepsis: suspected infection, two or more sirs criteria, and sbp >90 mmhg; and 3) control: ed patients without suspected infection, no sirs criteria, and sbp >90 mmhg. three mtdnas (cox-iii, cytochrome b, and nadh) were measured using real-time quantitative pcr from serum drawn at enrollment. il-6 and il-10 were measured using a bio-plex suspension array system. baseline characteristics, il-6, il-10, and mtdnas were compared using one-way anova or fisher's exact test, as appropriate. correlations between mtdnas and il-6/il-10 were determined using spearman's rank. linear regression models were constructed using sofa score as the dependent variable, and each mtdna as the variable of interest in an independent model. a bonferroni adjustment was made for multiple comparisons. results: of 93 patients, 24 were controls, 29 had sepsis, and 40 had septic shock. we found no significant difference in any serum mtdnas among the cohorts (p = 0.14 to 0.30). all mtdnas showed a small but significant negative correlation with il-6 and il-10 (ρ = −0.24 to −0.35). among patients with sepsis or septic shock (n = 69), we found a small but significant negative association between mtdna and sofa score, most clearly with cytochrome b (p = 0.001). conclusion: we found no difference in serum mtdnas between patients with sepsis, septic shock, and controls. serum mtdnas were negatively associated with inflammation and severity of illness, suggesting that, as opposed to trauma, serum mtdna does not significantly contribute to the pathophysiology of the sepsis syndromes.

methods: we consecutively enrolled ed patients ≥18 years of age who met anaphylaxis diagnostic criteria from april 2008 to july 2011 at a tertiary center with 72,000 annual visits.
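spearman's rank correlation, used above to relate mtdna levels to il-6/il-10, reduces in the absence of ties to a closed form on squared rank differences. a minimal sketch on toy data (the study's measurements are not reproduced here):

```python
def spearman_rho(x, y):
    # spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)); assumes no ties.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, 1):
            r[i] = rank
        return r
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

rho = spearman_rho([1, 2, 3], [1, 3, 2])  # toy data
```

because it operates on ranks, this statistic is robust to the skewed distributions typical of cytokine and qpcr measurements, which is presumably why it was chosen over pearson correlation.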
we collected data on antihypertensive medications, suspected causes, signs and symptoms, ed management, and disposition. markers of severe anaphylaxis were defined as 1) intubation, 2) hospitalization (icu or floor), and 3) signs and symptoms involving ≥3 organ systems. antihypertensive medications evaluated included beta-blockers, angiotensin converting enzyme (ace) inhibitors, and calcium channel blockers (ccb). we conducted univariate and multivariate analyses to measure the association between antihypertensive medications and markers of severe anaphylaxis. because previous studies demonstrated an association between age and the suspected cause of the reaction with anaphylaxis severity, we adjusted for these known confounders in multivariate analyses. we report associations as odds ratios (ors) and corresponding 95% cis with p-values. results: among 302 patients with anaphylaxis, median age (iqr) was 44 (31-58) and 204 (67.5%) were female. eight (2.7%) patients were intubated, 57 (19%) required hospitalization, and 139 (46%) had ≥3 system involvement. forty-nine (16%) were on beta-blockers, 34 (11%) on ace inhibitors, and 22 (7.3%) on ccb. in univariate analysis, ace inhibitors were associated with intubation and ≥3 system involvement, and ccb were associated with hospital admission. in multivariate analysis, after adjusting for age and suspected cause, ace inhibitors remained associated with hospital admission and beta-blockers remained associated with both hospital admission and ≥3 system involvement. conclusion: in ed patients, beta-blocker and ace inhibitor use may predict increased anaphylaxis severity independent of age and suspected cause of the anaphylactic reaction. background: advanced cardiac life support (acls) resuscitation requires rapid assessment and intervention. some skills, like patient assessment, quality cpr, defibrillation, and medication administration, require provider confidence to be performed quickly and correctly.
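the unadjusted odds ratios and 95% cis of the kind reported in the anaphylaxis study above can be computed from a 2x2 table with the woolf (logit) method; a sketch with hypothetical counts (the abstract does not report the full contingency tables):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """odds ratio with 95% ci (woolf logit method) for a 2x2 table:
    a/b = exposed with/without outcome, c/d = unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # se of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: hospitalization among ace-inhibitor users vs non-users
or_, lo, hi = odds_ratio_ci(12, 22, 45, 223)
print(f"OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

a ci whose lower bound exceeds 1 corresponds to a univariate association; the study's adjusted estimates would come from a multivariable logistic model rather than this single-table calculation.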
it is unclear, however, whether high-fidelity simulation can improve confidence in a multidisciplinary group of providers with high levels of clinical experience. objectives: the purpose of the study was to test the hypothesis that providers undergoing high-fidelity simulation of cardiopulmonary arrest scenarios will express greater confidence. methods: this was a prospective cohort study conducted at an urban level i trauma center from january to october 2011 with a convenience sample of registered nurses (rns), licensed practical nurses, nurse practitioners, resident physicians, and physician assistants who agreed to participate in 2 of 4 high-fidelity simulation (laerdal 3g) sessions of cardiopulmonary arrest scenarios about 3 months apart. demographics were recorded. providers completed a validated pre- and post-test five-point likert scale confidence measurement tool before and after each session, ranging from not at all confident (1) to very confident (5), covering recognizing signs and symptoms of, appropriately intervening in, and evaluating intervention effectiveness in cardiac and respiratory arrests. descriptive statistics, paired t-tests, and anova were used for data analysis. sensitivity testing evaluated subjects who completed their second session at 6 months rather than 3 months. results: sixty-five subjects completed consent, 39 completed one session, and 23 completed at least two sessions. background: prehospital studies have focused on the effect of health care provider gender on patient satisfaction. we know of no study that has assessed patient satisfaction by both patient and prehospital provider gender. some studies have shown higher patient satisfaction rates when care is provided by a female health care provider. objectives: to determine the effect of ems provider gender on patient satisfaction with prehospital care. methods: a convenience sampling of all adult patients brought by ambulance to our ed, an urban level i trauma center.
a trained research associate (ra) stationed at triage conducted a survey using press ganey ems patient satisfaction questions. there were thirteen questions evaluating prehospital provider skills such as driving, courtesy, listening, medical care, and communication. each skill was assigned a point value between one and five; the higher the value, the better the skill was performed. the patient's ambulance care report was copied for additional data extraction. results: a total of 225 surveys were done. average patient age was 71, and 54% were female. the maximum possible total score across all questions was 65 (mean 62.63 ± 5.1). prehospital provider pairings were: male-male (n = 141), male-female (n = 71), and female-female (n = 13). there were no statistically significant differences in scores between our pairings (mean scores for male:male 19.3, male:female 19.1, and female:female 19.2; p = 0.73). we found no statistically significant differences in satisfaction scores based on the gender of the emt in the back of the ambulance: males had a mean score of 62.7 and females had a mean score of 62.6 (p = 0.91). we examined gender concordance by comparing the gender of the patient to the gender of the prehospital provider and found that male-male had a mean score of 62.8, female-female 62.2, and, when the patient and prehospital provider gender did not match, 62.5 (p = 0.71). conclusion: we found no effect of gender difference on patient satisfaction with prehospital care. we also found that, overall, patients are very satisfied with their prehospital care. objectives: we set out to determine the sensitivity and specificity of emergency physicians (eps) in determining the presence of recently ingested tablets or tablet fragments. methods: this was a prospective volunteer study at an academic emergency department. healthy volunteers were enrolled and kept npo for 6 hours prior to tablet ingestion. over 10 minutes, subjects ingested 800 ml of water and 30 tablets.
ultrasound video clips were recorded prior to any tablet ingestion, after drinking 200 ml of water, after 10 tablets, after 20 tablets, after 30 tablets, and 60 minutes after the final tablet ingestion, yielding six clips per volunteer. all video clips were randomized and shown to three eps who were fellowship-trained in emergency ultrasound. eps recorded the presence or absence of tablets. results: ten volunteers underwent the pill ingestion protocol and sixty clips were collected. results for all cases and each rater are reported in the table. overall there was moderate agreement between raters (kappa = 0.42). sub-group analysis of 10, 20, or 30 pills did not show any significant improvement in sensitivity and specificity. conclusion: ultrasound has moderate specificity but poor sensitivity for identification of tablet ingestion. these results imply that point-of-care ultrasound has limited utility in diagnosing large tablet ingestion. background: intravenous fat emulsion (ife) therapy is a novel treatment that has been used to reverse the acute toxicity of some xenobiotics with varied success. us poison control centers (pcc) are recommending this therapy for clinical use, but data regarding these recommendations are lacking. objectives: to determine how us pcc have incorporated ife as a treatment strategy for poisoning. methods: a closed-format multiple-choice survey instrument was developed, piloted, revised, and then sent electronically to every medical director of an accredited us pcc using surveymonkey in march 2011; addresses were obtained from the aapcc listserv, participation was voluntary and remained anonymous, and three reminder invitations were sent during the study period. data were analyzed using descriptive statistics. results: forty-five of 57 (79%) pcc medical directors completed the survey. all 45 respondents felt that ife therapy played a role in the acute overdose setting.
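the inter-rater agreement statistic in the tablet-ultrasound study above can be illustrated for a single pair of raters with cohen's kappa (the study's three-rater figure would normally use fleiss' kappa; the calls below are hypothetical):

```python
def cohens_kappa(r1, r2):
    """cohen's kappa for two raters making binary present/absent calls."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    p1, p2 = sum(r1) / n, sum(r2) / n              # per-rater "present" rates
    pe = p1 * p2 + (1 - p1) * (1 - p2)             # agreement expected by chance
    return (po - pe) / (1 - pe)

# hypothetical tablet-present calls from two readers over 10 clips
rater_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]
print(round(cohens_kappa(rater_a, rater_b), 2))  # → 0.4
```

kappa discounts the agreement two raters would achieve by guessing at their base rates, which is why a raw 70% agreement here yields only a "moderate" 0.4, in the same range as the study's 0.42.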
thirty (67%) pcc have a protocol for ife therapy: 29 (97%) recommend an initial bolus of 1.5 ml/kg of a 20% lipid emulsion, 28 (93%) pcc recommend an infusion of lipids, and 27/28 pcc recommend an initial infusion rate of 0.25 ml/kg of a 20% lipid emulsion. thirty-three (73%) felt that ife had no clinically significant side effects at a bolus dose of 1.5 ml/kg (20% emulsion). forty-four directors (98%) felt that the "lipid sink" mechanism contributed to the clinical effects of ife therapy, but 26 (58%) felt that a yet undiscovered mechanism likely contributed as well. in a scenario with cardiac arrest due to a single xenobiotic, directors stated that their center would always or often recommend ife after overdose of bupivacaine (43; 96%), verapamil (36; 80%), amitriptyline (31; 69%), or an unknown xenobiotic (12; 27%). in a scenario with significant hemodynamic instability due to a single xenobiotic, directors stated that their pcc would always or often recommend ife after overdose of bupivacaine (40; 89%), verapamil (28; 62%), amitriptyline (25; 56%), or an unknown xenobiotic (8; 18%). conclusion: ife therapy is being recommended by us pcc. protocols and dosing regimens are nearly uniform. most directors feel that ife is safe but are more likely to recommend ife in patients with cardiac arrest than in patients with severe hemodynamic compromise. further research is warranted. levels drawn at 4 hours or more (240 mcg/ml at 5 hours, 198 mcg/ml at 4 hours, respectively). npv for toxic ingestion of an initial apap level less than 100 mcg/ml was 97.8% (95% ci 92.3-99.7%). conclusion: an apap level of less than 100 mcg/ml drawn less than 4 hours after ingestion had a high npv for excluding toxic ingestion. however, the authors would not recommend reliance on levels obtained under 4 hours to exclude toxicity, as the potential for up to 6.7% false negative results is considered unacceptable.
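the npv and its ci in the apap abstract above follow directly from the 2x2 counts; a sketch using the wilson score interval and hypothetical counts chosen to match the reported 97.8% npv (the exact denominators are not given, and the study may have used a different interval method):

```python
import math

def npv_wilson(tn, fn, z=1.96):
    """negative predictive value TN/(TN+FN) with a wilson score 95% ci."""
    n = tn + fn
    p = tn / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return p, center - half, center + half

# hypothetical counts: 88 true negatives and 2 false negatives among
# levels < 100 mcg/ml drawn before 4 hours (88/90 = 97.8%)
npv, lo, hi = npv_wilson(88, 2)
print(f"NPV={npv:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

the wilson interval is preferred over the naive normal approximation when the proportion is near 1, as it stays inside [0, 1] and does not collapse to zero width at p = 1.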
background: genetic variations in the mu-opioid receptor gene (oprm1) mediate individual differences in response to pain and addiction. objectives: to study whether the common a118g (rs1799971) mu-opioid receptor single nucleotide polymorphism (snp) or the alternative splicing snp of oprm1 (rs2075572) was associated with overdose severity, we assessed allele frequencies of each, including associations with clinical severity, in patients presenting to the emergency department (ed) with acute drug overdose. methods: in an observational cohort study at an urban teaching hospital, we evaluated consecutive adult ed patients presenting with suspected acute drug overdose over a 12-month period for whom discarded blood samples were available for analysis. specimens were linked with clinical variables (demographics, urine toxicology screens, clinical outcomes) and then de-identified prior to genetic snp analysis. in-hospital severe outcomes were defined as either respiratory arrest (ra, defined by mechanical ventilation) or cardiac arrest (ca, defined by loss of pulse). blinded taqman genotyping (applied biosystems) of the snps was performed after standard dna purification (qiagen) and whole genome amplification (qiagen repli-g). the plink 1.07 genetic association analysis program was used to verify snp data quality, test for departure from hardy-weinberg equilibrium, and test individual snps for statistical association. results: we evaluated 178 patients (37% female, mean age 41.2) who overall suffered 13 ras and 3 cas (of whom 2 died). urine toxicology was positive in 33%: 32 positive for benzodiazepines, 26 for cocaine, 21 for opiates, 13 for methadone, and 6 for barbiturates. all genotypes examined conformed to hardy-weinberg equilibrium. the 118g allele was associated with 2.5-fold increased odds of ca/ra (or 2.5, p < 0.05). the rs2075572 mutant allele was not associated with ca/ra.
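a sketch of the hardy-weinberg equilibrium check mentioned above: a 1-df chi-square goodness-of-fit test comparing observed genotype counts with counts expected from the estimated allele frequency. the genotype counts below are hypothetical (the abstract reports only that the snps conformed to hwe, via plink):

```python
from scipy.stats import chi2

def hwe_chi2(n_aa, n_ab, n_bb):
    """1-df chi-square goodness-of-fit test for hardy-weinberg equilibrium."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)            # frequency of allele A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

# hypothetical a118g genotype counts (AA, AG, GG) for 178 patients
stat, pval = hwe_chi2(120, 52, 6)
print(f"chi2={stat:.3f}, p={pval:.2f}")
```

a non-significant p-value here (no departure from the p², 2pq, q² proportions) is the usual quality-control signal that genotyping errors are unlikely, which is what "conformed to hardy-weinberg equilibrium" asserts.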
conclusion: these data suggest that the 118g mutant allele of the oprm1 gene is associated with worse clinical severity in patients with acute drug overdose. the findings add to the growing body of evidence linking the a118g snp with clinical outcome and raise the question as to whether the a118g snp may be a potential target for personalized medical prescribing practices with regard to behavioral/physiologic overdose vulnerability. key: cord-006854-o2e5na78 authors: nan title: scientific session of the 16th world congress of endoscopic surgery, jointly hosted by society of american gastrointestinal and endoscopic surgeons (sages) & canadian association of general surgeons (cags), seattle, washington, usa, 11–14 april 2018: poster abstracts date: 2018-04-20 journal: surg endosc doi: 10.1007/s00464-018-6121-4 sha: doc_id: 6854 cord_uid: o2e5na78 nan purpose: to evaluate the efficacy of single-incision laparoscopic surgery for totally extraperitoneal repair (sils-tep) of incarcerated inguinal hernia. patients and methods: clinical setting a retrospective analysis of 14 patients undergoing sils-tep for incarcerated hernia from may 2016 to august 2017 at kinki central hospital was performed. exclusion criteria sils-tep was contraindicated for the following conditions in our hospital: a history of radical prostatectomy; a small indirect inguinal hernia in a young patient; and unsuitable for general anesthesia. surgical procedure laparoscopic abdominal exploration through a single, 2.5-cm, intraumbilical incision was performed. the incarcerated hernia content was gently retracted from the hernia sac into the abdominal cavity. in some cases, simultaneous manual compression on the incarcerated hernia from the body surface was required. if no bowel resection was needed, a standard sils-tep using mesh was performed following laparoscopic abdominal exploration and incarcerated hernia reduction. 
if bowel resection was required, inguinal hernia repair using mesh was not performed, to avoid postoperative mesh infection, and two-stage sils-tep was performed 2-3 months after the bowel resection. results: fourteen patients (11 men, 3 women) with irreducible inguinal hernias, including 11 with unilateral hernias and 3 with bilateral hernias, underwent surgery. the patients' median age was 74 years (range 38-83 years), and median bmi was 23.5 kg/m² (range 18.8-30.5 kg/m²). of the 14 patients, 7 had acute incarceration and 7 had a chronic irreducible hernia. seven patients with acute incarcerated hernias underwent emergency surgery, and two of the seven needed single-incision laparoscopic partial resection of the ileum, followed by two-stage sils-tep. twelve patients, excluding the two who required single-incision laparoscopic partial resection of the ileum, underwent laparoscopic exploration with hernia reduction followed by sils-tep. one case of chronic incarceration among the twelve patients who underwent sils-tep after hernia reduction required conversion to kugel patch repair. the median operative times were 102 min (range 52-204 min) for unilateral hernias and 165 min (range 83-173 min) for bilateral hernias. the median blood loss was minimal (range 0-177 ml). the median postoperative hospital stay was 1 day (range 1-3 days). the median follow-up period was 7 months (range 1-15 months). a seroma developed in 25% (3/12) of patients and was managed conservatively. no other major complications or hernia recurrence were noted during the follow-up period. conclusions: sils-tep, which offers good cosmetic results, could be safely performed for incarcerated inguinal hernia. objective: introduction of mis in the pediatric age group has proved feasible and safe. the technique has evolved considerably, with a number of innovations in mis pediatric inguinal hernia repair.
high ligation of the sac is the basic premise of surgical repair in pediatric inguinal hernias. mis techniques are broadly grouped into intracorporeal, or intracorporeal with an extracorporeal component, namely the suturing. each technique has its own complications. the main objective of our study was to focus on anatomical pointers that can lead to inadvertent complications, mainly bleeding and recurrence. methods and procedures: prospective review of 37 hernias (29 male and 2 female) (8 months-13 years) performed laparoscopically between september 2015 and june 2016. under laparoscopic guidance, the internal ring was encircled extraperitoneally using a 2-0 non-absorbable suture and knotted extraperitoneally. data analyzed included operating time, ease of procedure, occult patent processus vaginalis (ppv), contralateral inguinal hernia, complications, cosmesis, and recurrence. results: sixteen right (52%), 14 left (45%), and 1 bilateral hernia (3%) were repaired. five unilateral hernias (16.66%), all left, had a contralateral ppv that was repaired (p=0.033). mean operative times for unilateral and bilateral repair were 13.20 min (8-25 min) and 20.66 min (17-27 min), respectively. one hernia repair still recurred (2.7%) despite all precautions, and another patient had a postoperative hydrocele (2.7%). one case (2.7%) needed an additional port placement due to inability to reduce the contents of the hernia completely. with our technique we did not encounter any inadvertent intraoperative bleeding. there were no stitch abscesses/granulomas, obvious spermatic cord injuries, testicular atrophy, or nerve injuries. conclusion: the results confirm the safety, efficacy, and cost effectiveness of laparoscopic inguinal hernia repair. in our intraoperative analysis we focused on anatomical landmarks to minimize future recurrence and perioperative surgical complications. we identified and named a point, the "j point", at the tip of the triangle of "doom".
this is the most important point to address intraoperatively. there is a high chance of recurrence if that point is not encircled well, or is inadequately encircled because of fear of iliac vessel injury. we also concluded that the 'water dissection technique' is effective in inexperienced hands and in the early stages of laparoscopic hernia repair to prevent inadvertent iliac vessel injury. 1 medstar georgetown university hospital, 2 georgetown university school of medicine. introduction: incisional hernias following abdominal surgery can be associated with significant morbidity, leading to decreased quality of life, increased health care spending, and the need for repeat operations. patients undergoing gastrointestinal and hepatobiliary surgery for malignant disease may be at higher risk for developing incisional hernias. identifying risk factors for incisional hernia development can help decrease occurrence. this is the largest multi-institutional study of symptomatic incisional hernia rates for major abdominal operations, including colectomy, hepatectomy, pancreatectomy, and gastrectomy. methods and procedures: an irb-approved retrospective study within the medstar hospital database was conducted, incorporating all isolated colectomy, hepatectomy, pancreatectomy, and gastrectomy procedures performed across 11 hospitals between 2002 and 2016. all patients were identified using icd-9 and icd-10 codes for relevant procedures and then subdivided into those having benign or malignant disease. exclusion criteria comprised patients who had concomitant organ resection or those undergoing organ transplant. data validation was performed to verify the accuracy of the data set. symptomatic incisional hernia rates (ihrs) were determined for each cohort based on subsequent hernia procedural codes identified and repairs performed. descriptive statistics and the chi-squared test were used to report ihrs in each group.
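the chi-squared comparison just described can be sketched with the colectomy counts reported in the results that follow (193 symptomatic hernias among 2,792 benign-indication colectomies vs 104 among 2,178 malignant-indication colectomies):

```python
from scipy.stats import chi2_contingency

# colectomy 2x2 table: rows = (benign, malignant) indication,
# cols = (symptomatic hernia, no hernia); benign n = 4,970 - 2,178
table = [[193, 2792 - 193],
         [104, 2178 - 104]]
stat, p, dof, expected = chi2_contingency(table)
print(f"chi2={stat:.2f}, dof={dof}, p={p:.4f}")  # consistent with the reported p = 0.002
```

note that `chi2_contingency` applies yates' continuity correction by default for 2x2 tables, which slightly lowers the statistic relative to the uncorrected pearson chi-square.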
results: during this 15-year span, a total of 7,583 major abdominal operations were performed at the 11 institutions, comprising 4,970 colectomies, 1,122 hepatectomies, 1,165 pancreatectomies, and 326 gastrectomies. malignancy was the indication for surgery in 2,178 (43.8%) colectomies, 747 (66.6%) hepatectomies, 763 (65.5%) pancreatectomies, and 207 (63.5%) gastrectomies. ihrs in each cohort for benign vs malignant etiologies, respectively, were as follows: 193 (6.9%) vs 104 (4.8%) in colectomy (p=0.002), 12 (3.2%) vs 16 (2.1%) in hepatectomy (p=0.385), 17 (4.2%) vs 24 (3.1%) in pancreatectomy (p=0.431), and 4 (3.4%) vs 5 (2.4%) in gastrectomy (p=0.88) patients. conclusion: symptomatic incisional hernia rates following major gastrointestinal and hepatobiliary surgery ranged from 2.1 to 6.9%. there was no significant increase in hernia rates in patients undergoing surgery for malignancy. patients undergoing colectomy for benign disease had a high incidence of symptomatic ihrs. introduction: prosthetic infections, although relatively uncommon, are a major source of cost and morbidity. the study aimed to evaluate the influence of mesh structure, including polymer type and mean pore size, on bacterial adherence in a mouse model. methods: three commercially available hernia meshes were included in the study. for each mesh type, a 1-cm square was surgically placed intraabdominally in 6 mice. one mouse served as a control, while an enterotomy was made in the subsequent mice to introduce a bacterial load onto the mesh. after 24 hours the meshes were harvested. the inoculated meshes were then plated on agar plates and colonies were counted after 24 hours. the bacterial counts were compared between the mesh types. results: the mean bacterial adherence was 695 colonies for the large-pore mesh, 892 colonies for the small-pore mesh, and 504 colonies for the biologic mesh.
conclusions: through the use of a mouse model, the influence of mesh type and pore size on bacterial adherence was evaluated. meshes with larger pores and a lower prosthetic load, and interestingly the biologic mesh, had lower early bacterial colonization at 24 hours following an enterotomy. further evaluation with a longer incubation time could help determine the effect of bacterial colonization of mesh. hrishikesh salgaonkar, raquel maia, lynette loo, wee boon tan, sujith wijerathne, davide lomanto; national university hospital, singapore. laparoscopic repair of groin hernias is a widely accepted approach over open repair due to less pain, faster recovery, better cosmesis, and decreased morbidity. however, there is still debate on its use in large inguino-scrotal hernias, recurrent hernias, and patients with a history of lower abdominal surgery, where adhesions and difficulty in dissecting an extensive hernia sac are anticipated. a retrospective analysis of prospectively collected data was performed for patients undergoing laparoscopic repair of large inguino-scrotal, incarcerated, or recurrent groin hernias (after open or laparoscopic repair), and for patients with a history of previous lower abdominal surgery. between january 2013 and july 2015, 89 patients with large inguino-scrotal hernias, recurrent hernias, a history of lower abdominal surgery, or incarcerated femoral hernia underwent laparoscopic inguinal hernia repair. patient characteristics, operating time, surgical technique, conversion rate, complications, and recurrence up to 18 months were recorded. 51 patients had large inguino-scrotal hernias, 22 had recurrent hernias (17 previous open, 5 previous laparoscopic), 14 had a history of lower abdominal surgery (4 lscs, 6 appendectomy, 2 prostatectomy, 2 midline laparotomy), 1 had an incarcerated femoral hernia, and 1 underwent meshoma removal. 75 patients underwent total extraperitoneal (tep) repair, 9 underwent transabdominal pre-peritoneal (tapp) repair, and 5 needed conversion to open. mean operation time was 74 min for unilateral and 118 min for bilateral hernia.
seroma formation was seen in 19 patients, and 2 minor wound infections were treated conservatively. we conclude that the laparoscopic approach can be safely employed for the treatment of complex groin hernias; surgical experience in laparoscopic hernia repair is mandatory, with a tailored technique, in order to minimize morbidity and achieve good clinical outcomes with acceptable recurrence rates. mesh fixation in ventral incisional hernia is a topic of ongoing debate. permanent and absorbable tacks are acceptable and widely used methods for mesh fixation. the purpose of this study was to compare outcomes of permanent versus absorbable tack fixation when used alone or with suture fixation in laparoscopic incisional hernia repairs. a retrospective review of all patients undergoing laparoscopic ventral hernia repair using tack fixation (absorbable/permanent), alone or in conjunction with suture fixation, was queried from the ahsqc database. outcome measures included hernia recurrence rate, pain, quality of life, wound-related issues, and hospital length of stay. propensity score matching was performed to compare patients undergoing tack-only fixation versus tack-and-suture fixation, with a p-value < .05 considered significant. a total of 804 patients were identified after propensity score matching, with 402 who underwent repair with permanent tacks alone or with sutures and 402 who underwent repair with absorbable tacks alone or with sutures. following matching, there were no differences in bmi, age, hernia width/length, or baseline pain/quality of life. there were no significant differences in outcome measures including recurrence rates, pain and quality of life outcomes at 30 days, 6 months, and 1 year, surgical site infection (ssi), and postoperative length of stay (p > 0.05).
there was a significant increase in any postoperative complication in the permanent tack fixation group compared to the absorbable tack fixation group (21% vs 14%, p = .0003), which is likely due to the increase in surgical site occurrences noted in the permanent tack fixation group (14% vs 10%, p = .005). based on this large data set, there are no significant differences in postoperative outcomes between permanent and absorbable fixation in laparoscopic hernia repair except in surgical site occurrences. further study is needed, but at present there is no convincing evidence that one type of fixation is superior to another in laparoscopic ventral hernia repair. introduction: inguinal hernia repair is the most common procedure in general and visceral surgery worldwide. laparoscopic transabdominal preperitoneal mesh hernioplasty (tapp) has also become a popular surgical method in japan. single-incision laparoscopic surgery is one of the newest branches of advanced laparoscopy, and its indications have spread from simple surgery, such as cholecystectomy, to complex surgery. we report our experience with single-incision laparoscopic tapp (s-tapp) for japanese patients with inguinal hernia. case description: a consecutive series of 290 patients (247 male, 43 female) underwent s-tapp from june 2010 to september 2017 in a single institution. twenty-eight of the patients had bilateral inguinal hernia. the mean follow-up was 1192 days. the average age of the patients was 61.2±16.5 years. establishment of the ports: a 25-mm vertical intra-umbilical incision is made for port access. one 5-mm optical port and two 5-mm ports were placed side-by-side through the umbilical scar.
surgical procedure: the procedure was carried out in the conventional fashion, with a wide incision in the peritoneum to achieve broad and clear access to the preperitoneal space and appropriate placement of polypropylene mesh (3dmax™ light, bard) with fixation using a tacking device (absorbatack®, covidien). the hernia sac is usually reduced by blunt dissection, or is ligated and transected with an ultrasound-activated device. the peritoneal flap is closed with one 4-0 pds suture and 6-7 tacks using absorbatack®. discussion: in one patient, we encountered a large sliding hernia on the right side with sigmoid colon as the content of the sac, which required conversion to the conventional laparoscopic procedure. nine patients had recurrent hernias after previous laparoscopic or anterior-approach surgery, and two after prostatectomy. there was no intra-operative complication. the mean operative time was 87.4±31.1 min, and blood loss was minimal in all cases. the average postoperative stay was 5.4±2.7 days. there was one recurrence (0.3%) 16 months after surgery. there was no severe complication after surgery, but there were 15 seromas (4.7%) and one hematoma (0.3%). two patients had blunted tactile sensation in the area of the lateral femoral cutaneous nerve (0.9%), which improved within two months. conclusion: our results suggest that s-tapp is a safe and feasible method without additional risk. moreover, the cosmetic benefit is clear. however, further evaluation of postoperative pain and long-term complications compared to standard laparoscopic tapp mesh hernioplasty is required. manuel garcia, md, daniel srikureja, md, marcos j michelotti, md, facs; loma linda university health. introduction: prosthetic mesh use has become standard practice during ventral hernia repair to reduce the risk of recurrence.
the ideal mesh is macro-porous, which favors rapid cellular ingrowth and tissue integration; has limited tissue reactivity, low profile, and low weight; and has high tensile strength to add resilience to the repair. additionally, the material is expected to have good handling characteristics. currently, there is a wide variety of mesh options. biosynthetic material (polyglycolic acid/trimethylene carbonate, pga/tmc) has been shown to behave well in terms of early vascularization and ingrowth as well as adequate long-term tissue generation. gore® synecor® biomaterial is a composite mesh including two layers of absorbable biosynthetic material (pga/tmc) with one three-dimensional non-absorbable macro-porous knit of dense ptfe mesh. it has shown good vascularization and ingrowth at 30 days on animal examination. however, there is still no evidence of the long-term behavior of this mesh in human tissue. we present the first histologic analysis of this mesh 1 year after placement in a human. objective: to perform a histologic analysis of gore® synecor® biomaterial one year after placement in the human body. methods: during open bilateral inguinal hernia repair in a patient who had undergone ventral hernia repair 1 year earlier, incorporated gore® synecor® mesh was incidentally found; a sample of the mesh was taken and sent to the pathology lab for analysis. tissue healing, vascularization, and ingrowth of the composite mesh were analyzed. results: histologic findings were significant for a biomaterial consistent with a knitted ptfe material surrounded by mature fibrovascular tissue and foreign body inflammation, consistent with the expected healing response for this time frame. there was no evidence of any other biomaterial (pga/tmc) or of infection. conclusion: gore® synecor® biomaterial was shown to be well integrated into appropriately healed tissue, with pronounced vascularization and ingrowth. the pga/tmc layers were completely absorbed and replaced by collagen.
these findings, in a 12-month human sample, replicate what had been shown in animal specimens. method: from 2014 to 2017, 6 patients presented to our hospital with renal paratransplant hernia and were evaluated for this study. the following data were collected from their records: age, gender, weight, age at graft rejection, surgical complications, treatment method, and the results of treatment with composite ptfe mesh. results: for laparoscopic repair of incisional hernia after renal transplant, the median interval between kidney transplantation and development of the incisional hernia was 64 (range 12 to 425) days. predisposing factors were obesity, age over fifty years, and female gender. in six patients the hernia was large, and the repair was performed using composite ptfe mesh. one patient developed a serous collection at the surgical site, which was managed successfully with multiple punctures. hernia recurrence or infection was not noted in these patients during 3- to 36-month follow-up periods. conclusion: incisional hernia is not a rare entity after kidney transplantation. predisposing factors such as obesity, age over 50 years, and female gender have a role in its development. repeated surgeries in kidney recipients can increase the risk of incisional hernia. managing this complication by a laparoscopic approach is a safe and effective method. sujith wijerathne, raquel maia, hrishikesh salgaonkar, wee boon tan, lynette loo, davide lomanto; national university hospital, singapore. introduction: a femoral hernia is a less common type of hernia, estimated to account for less than 5% of all abdominal wall hernias; only about 1 in every 20 groin hernias is femoral. they are found more commonly in females due to the wider shape of the female pelvis. laparoscopy, by offering magnification and better vision, provides the opportunity for clear visualization of the myopectineal orifice.
laparoscopy seems to be a safe and feasible approach for femoral hernia repair in an asian population. case description: between 2013 and 2016, 70 consecutive patients with femoral hernia who underwent laparoscopic hernia repair were prospectively studied. patient demographics, hernia characteristics, operating time, conversion rate, intraoperative and postoperative complications, and recurrence were measured. discussion: a total of 83 femoral hernias were repaired, 45 in the right and 38 in the left groin. this included 52 patients with bilateral and 18 with unilateral hernia. 19 concomitant obturator hernias were found. there were 65 male and 5 female patients. no conversion was reported. one patient had an injury to bowel at the 10-mm port entry site, without contamination, identified and managed immediately. 10 patients developed seroma; all were managed conservatively except one who needed aspiration. peri-port bruising was noticed in 3 patients and 2 patients had hematoma. one patient with hematoma underwent excision of the organised hematoma; one of the hematoma patients was on aspirin pre-operatively. no wound infection, chronic groin pain or recurrence was documented during follow-up to date. conclusion: laparoscopic repair offers accurate diagnosis and simultaneous treatment of both inguinal and femoral hernia with minimal morbidity and good clinical outcomes. better visualisation and magnification give us an opportunity to identify occult hernias which can be repaired during the same setting, thereby reducing the chance of recurrence and the possible need for a second surgery. laparoscopic repair has become the procedure of choice for the treatment of the majority of groin hernias at our institution. introduction: totally extraperitoneal (tep) repair, which does not require peritoneal incisions, is a good procedure that involves minimal visceral damage. 
however, balloon- or camera-assisted blunt dissections that are performed in a haphazard manner do not follow precise dissection of the fascial layer. furthermore, they have the disadvantage of being difficult to understand anatomically. we therefore developed a novel preperitoneal approach to resolve this issue. methods: a 12-mm trocar is inserted into the rectus abdominis sheath cavity after a small incision is made below the umbilicus, and the posterior rectus sheath is exposed. a 5-mm trocar is inserted 5 cm towards the pubic bone from the umbilicus. using forceps from this position, narrow branches that enter the posterior rectus sheath from the inferior epigastric vessels are dissected, thereby broadly exposing the anterior surface of the posterior rectus sheath. the third 5-mm trocar is inserted near the lateral margin of the rectus abdominis. on the outside, local anesthetic is injected beneath the posterior rectus sheath and the preperitoneal cavity is separated by the fluid so that the peritoneum is not injured during posterior rectus sheath incision. a small incision is made in the posterior rectus sheath, or the attenuated posterior rectus sheath, one finger-width above the expected upper margin of the prosthetic mesh. due to the effects of the local injection, a sharp incision in the fascia can be made with an electric scalpel. utilizing this mechanism, the posterior rectus sheath aponeurosis, the lining transverse fascia and the superficial preperitoneal layer are individually identified. once the preperitoneal cavity is reached, the peritoneal margin is determined in the lateral direction, and the peritoneum that is pulled by the pneumoperitoneum is separated from the preperitoneal fascia on the outside, from the cranial side towards the deep inguinal ring. 
on the inside, the pneumoperitoneum pressure pushes the peritoneum inferiorly, leading to enlargement and increased visibility of the posterior rectus sheath deep fascia, which is dissected one layer at a time from the outside. the umbilical prevesical fascia is dropped inferiorly, and the dissection of the preperitoneal cavity necessary for mesh deployment is performed. results: by individually dissecting each fascia, using emphysema through pneumoperitoneum and enlargement through local injection, the preperitoneal cavity could be reached by following the dissection of the fascial layers without proceeding blindly, thereby eliminating intraoperative bleeding and postoperative hematoma. introduction: in the field of abdominal wall reconstruction, the utility of drain placement is of debatable value. we present outcomes evaluating drain placement vs no drain placement at the time of the robotic transversus abdominis release (rtar) technique with placement of mesh in the retromuscular position, a currently understudied subject. methods: a retrospective review of a prospectively maintained hernia patient database was conducted, identifying individuals who received either drain placement or no drain placement during abdominal wall reconstruction via the rtar technique from august 2015 to june 2017 at a single high-volume hernia center. perioperative data and postoperative outcomes between the two groups are presented, with statistical analysis for comparison, and quality of life (qol) measures were assessed using the carolinas comfort scale. results: thirty-five patients were identified for this study, of which 9 had drains placed intraoperatively in the retromuscular position at the conclusion of rtar (drn) and 25 underwent rtar without the placement of draining devices (nd). 
the drn cohort had a mean bmi, defect area, mesh area, and operative time of 37.1, 247 cm², 940 cm² and 248 minutes, respectively, compared to 31.8, 157 cm², 822 cm², and 305 minutes in the nd group. all cases utilized medium-weight macroporous polypropylene synthetic implantable mesh materials in both the drn and nd subgroups. there were no reported postoperative complications, including no development of hematoma, seroma, or surgical site infections in either group. hernia recurrence was not identified in either the drn or nd cohorts through a mean follow-up of 200 days (6.7 months). there were no statistically significant differences in postoperative qol outcomes. conclusion: our series review suggests that the use of intraoperative drains may not afford any benefits with the rtar technique when mesh is placed in the retromuscular position. additional postoperative management associated with drain care may be unnecessary. surg endosc (2018) 32:s130-s359 background: appendectomy is one of the most common operations performed during emergency surgery. although laparoscopic appendectomy (la) has become the treatment of choice, there is still a debate regarding the use of la for treating complicated appendicitis. in this retrospective analysis, we aimed to clinically compare la and open appendectomy (oa) for treating complicated appendicitis. methods: we retrospectively identified 339 patients who underwent an operation for complicated appendicitis at our hospital; these patients were operated on between 2011 and july 2017. in total, 222 patients underwent conventional appendectomy and 117 patients were laparoscopically treated. outcomes included operation time, blood loss, length of hospital stay, and postoperative complications. logistic regression analysis was performed to analyze the concurrent effects of various factors on the rate of postoperative complications. 
objective: small bowel perforation has conventionally been dealt with by open exploration, which frequently leads to many wound-related complications. wound infection is the major reason for increased morbidity in these patients and delayed recovery. laparoscopic surgery has various benefits over open surgery, such as a smaller wound, less pain and faster recovery. the aim of this study was to extend the advantages of minimally invasive surgery (mis) to patients with small bowel perforation to decrease postoperative wound complications and duration of hospital stay. methods: this is a retrospective study including 136 patients with small bowel perforation from 2013 to 2016. of these 136, 43 had a traumatic etiology, 28 had typhoid-related perforation and the remaining 65 had a duodenal perforation. 84 of them were male, and the average age was 30.4 years. only patients who presented within 96 hours of perforation were included in the study. laparoscopic exploration was done by introducing the camera through a 10-mm infraumbilical port after intraperitoneal carbon dioxide insufflation. the remaining two 5-mm working ports were then introduced depending on the site of perforation once identified. the perforations were then repaired by intracorporeal single-layer suturing using polydioxanone 3-0 suture. the peritoneal cavity was given a thorough lavage and an abdominal drain was placed in the pouch of douglas. fecal contamination was found in all the patients. a total of 6 patients underwent conversion to open surgery due to inability to find the site of perforation laparoscopically. of the 136 operated patients, 7 patients developed port-site infection, and there were no major postoperative complications in the 4-week follow-up period. 
conclusion: we conclude from our study that laparoscopic intervention in early small bowel perforation is a safe approach with favorable outcomes, especially with regard to wound complications, which are a major factor in increasing postoperative morbidity in such patients. the laparoscopic approach leads to early discharge and recovery postoperatively. with the emerging era of laparoscopic surgery, leading to its easy accessibility, more patients can benefit from this technique when they arrive in the emergency department with intestinal perforation. introduction: pneumatosis intestinalis (pi), or gas in the bowel wall, can be seen on various imaging modalities. the pathophysiology behind pi is unclear. one theory proposes a mechanical cause (e.g. small bowel obstruction) while another proposes a bacterial etiology. management of pi in adults is difficult as often there is a benign clinical course. however, when paired with specific clinical features such as hepatic portal venous gas (hpvg) on imaging, the course of management changes as the suspicion of bowel ischemia increases. hpvg alone has been associated with a high mortality rate and a poor prognosis. management in this case becomes surgical. case presentation: we present a case of a 59-year-old latino male who presented to the emergency room with abdominal pain and altered mental status. focused physical examination revealed a non-rigid abdomen, no rebound tenderness, no guarding, and diffuse tenderness only to deep palpation. ct scan of the abdomen and pelvis demonstrated moderate portal venous gas in the right and left hepatic lobes, an upper midline dilated small bowel loop with pneumatosis intestinalis, and a moderately distended stomach with gas and fluid. laboratory studies revealed metabolic acidosis and a lactic acid level of 2.9 mmol/l. due to these findings, bowel ischemia was suspected, and the patient was taken to the operating room for a diagnostic laparoscopy. 
the laparoscopy was converted to an exploratory laparotomy due to extensive adhesions. intraoperatively, there was no small bowel compromise and no identifiable transition point. extensive lysis of adhesions and repair of an iatrogenic enterotomy were performed. the patient tolerated the procedure well, clinically improved, and was discharged from the hospital. discussion: this case illustrates the difficulty in management of a patient with pneumatosis intestinalis and, specifically, hepatic portal vein gas seen on ct imaging. hpvg has traditionally been a harbinger of morbidity and mortality, but exploratory laparotomy revealed only diffuse abdominal adhesions and the absence of bowel ischemia despite high clinical suspicion. background: ventral hernia repair is one of the most common surgical procedures facing the general surgeon. there is little consensus as to the best surgical technique for complex scenarios. often these patients have complicating co-morbid conditions, such as radiation therapy, which has an inevitable effect on the abdominal wall structures and can lead to non-traditional repairs. case report: we present a case of a 62-year-old female who underwent a tah/bso and right hemicolectomy which was complicated by wound dehiscence. she underwent primary repair and adjuvant whole-pelvis radiation for her squamous cell carcinoma. subsequently, the patient developed acute obstructive symptoms due to a stricture within her small bowel and a large ventral hernia measuring 14×13 cm with non-reducible abdominal contents below the level of the fascia, more prominent in the suprapubic area. the patient's bmi was 15.3. various considerations are important in planning a surgical repair in a previously irradiated field with loss of domain, which include minimal dissection and the use of an atraumatic surgical technique with either external oblique release or transversus abdominis muscle release (tar). 
we chose a tar, as it provides wider myofascial release and dissection below the arcuate line towards the spaces of retzius and bogros, allowing for larger sublay mesh placement. it also avoids the need for skin flaps, reducing the risk of wound complications in under-perfused tissue. the tar was performed successfully and there were no intraoperative or postoperative complications. her follow-up at 6 months revealed no wound complications or hernia recurrence. conclusion: for patients with compromised tissue and loss of domain, a tar technique may be useful when reconstructing complex abdominal wall hernias. it provides the core principles of hernia repair, such as primary fascial closure and wide mesh overlap, and it provides a reliable approach for under-perfused tissue without the need for skin and soft tissue flap creation. outcomes in the management of cholecystectomy patients in the setting of a new acute care surgery service model: impact on hospital course larsa al-omaishi, bs, william s richardson, md; ochsner medical clinic foundation introduction: the acute care surgery (acs) model, defined as a dedicated team of surgeons to address all emergency department, inpatient, and transfer consultations, is quickly evolving within hospitals across the united states due to demonstrated improved patient outcomes in the non-trauma setting. the traditional model of call scheduling consisted of one senior attending and one senior resident on call per 24-hour shift. attendings were responsible for consults, previously scheduled operations, as well as clinic time. multiple recent studies have shown statistically significant improvements in several parameters of patient care by using acs, including but not limited to 1. time from emergency department to surgical evaluation 2. time from surgical evaluation to operating room 3. operative time 4. percent laparoscopic 5. length of hospital stay 6. intra-operative complications (blood loss, perforation rates) 7. 
post-operative complications (fever, infection, redo) 8. cost. one study demonstrated a statistically significant cost savings for the acute care surgery model with respect to appendectomies, but not cholecystectomies. study design: a retrospective analysis of patients who underwent cholecystectomy in the setting of non-traumatic emergent cholecystitis was performed to compare data from two cohorts, the traditional model and the acs model, between january 1, 2013 and december 1, 2016 at ochsner medical center, a 600-bed acute care center in new orleans. parameters gathered included 1. time from emergency department to surgical evaluation 2. time from surgical evaluation to operating room 3. operative time 4. percent laparoscopic 5. length of hospital stay 6. intra-operative complications (blood loss, perforation rates, conversion to open) 7. post-operative complications (fever, infection, redo). demographics were also collected, including age, weight, height, ethnicity, asa, etc. inclusion criteria included age >18 and having undergone cholecystectomy between january 1, 2013 and december 1, 2016. exclusion criteria included choledocholithiasis, gallstone pancreatitis, ascending cholangitis, gangrenous cholecystitis, septic complications precipitating further procedures and delays, or researcher discretion. results: 699 patients were initially identified as having undergone cholecystectomy within the allotted time period [2013: 178, 2014: 166, 2015: 157, 2016: 198]. 470 were excluded for one of the reasons above. median patient age was 53 years old and the average patient encounter was 3.9 days. conclusion: the acs model is better suited to manage emergent non-traumatic cholecystectomies than the traditional call service at our institution, as evidenced by several parameters. he nailed it background: nail guns are powerful tools and are widely used. injuries with these devices may be devastating due to the significant force they can deploy. 
patients and methods: we herein report the first case of a self-inflicted abdominal injury with a nail gun. results: a 55-year-old male with a history of coronary artery disease, type 2 dm and early signs of dementia attempted to refill a nail gun. he lodged the device against his right abdomen while the air hose was still attached and then accidentally fired 2 nails into his abdomen. after he unsuccessfully tried to pull the nails out, he drove himself 25 minutes to our emergency room. he was hemodynamically stable on arrival; pain control was achieved, antibiotics were given and he received tetanus immunization. ct scan showed the two foreign bodies penetrating from the ruq, with one reaching the transverse colon. on emergency laparoscopy, the nails were found to have penetrated the thick omentum, and the puncture site of one nail into the colon was identified. the omentum was resected off the colon and the right colon was completely mobilized. no additional injuries were found. the entrance area of the nails was then used to create a loop colostomy. the postoperative course was initially uneventful, but the patient developed a severe posttraumatic inflammatory reaction of the fat tissue in the right upper quadrant and had to be readmitted for pain control; antibiotics were again administered. he recovered and was discharged with a plan for laparoscopically assisted colostomy closure after 6 weeks. discussion: to the best of our knowledge this is the first reported isolated colonic injury by a nail gun. given the tremendous force of the device, with unknown collateral damage to the surrounding tissue, it was decided to manage the accident with a laparoscopically assisted colostomy using the entrance point of the nails for fecal diversion. introduction: it is difficult to diagnose obturator hernias by routine physical examination. obturator hernias are frequently complicated by ileus and the diagnosis is often first made from abdominal ct. 
obturator hernias are difficult to reduce, and often necessitate emergency surgery. they are common in elderly people, who often have a poor general condition, so the mortality rate has been high. at our hospital, we first attempt to reduce the hernia from the body surface under ultrasonographic guidance. after relieving the strangulation, we perform a radical operation electively in patients who are candidates for surgery under general anesthesia. we perform laparoscopic repair for obturator hernias. obturator hernias are often complicated by other types of hernia; in these cases, we perform total repair. herein, we present a review of the patients who underwent surgery for obturator hernia at our hospital. methods: we reviewed the data of 9 cases of obturator hernia encountered by us from february 2012 to december 2014. we performed total repair in three of the cases. however, it is difficult to procure a mesh that would be adequate for all the defects (internal inguinal ring, femoral ring, obturator). no single mesh can fit, because the inguinal and pelvic curves present opposing curves near the obturator. therefore, we placed two pieces of mesh available at our hospital (3d max [bard] and the onlay sheet of the kugel patch [bard]) together in the patients. we could successfully cover all the defects using these two pieces of mesh and could fit the mesh to the pelvic shape by devising an appropriate connection between the meshes. results: we reviewed a total of 9 operated cases of obturator hernia. the hernia was bilateral in 7 cases, and complicated by other hernias in 6 cases. we first determined the appropriate approach for the repair. we performed total repair in 3 cases. there were no complications and no cases of recurrence. conclusion: our approach to the repair of obturator hernias was very useful. we can tailor the exact area and shape of the mesh needed in individual patients by this method. we show the method of shaping the mesh to fit the pelvic form. 
demin aleksandr, do, ajit singh, do, noman khan, do; flushing hospital introduction: internal hernias are known complications that are well documented to involve petersen's defect. in bariatric patients post gastric bypass there is a high index of suspicion for internal hernias as well as a low threshold to operate. there has been some debate around closure of the potential petersen's space, with several studies advocating closure versus others showing no difference in the rate of symptomatic internal hernias. we present a case of an unusual cause of small bowel obstruction due to an internal hernia caused by a cecal volvulus. it was an atypical presentation; however, the patient was triaged and brought to the or within 5 hours of admission. although rare, there have been reports of internal hernias caused by other structures like congenital bands or natural potential spaces, including reports of unusual presentations of the cecum herniating through the foramen of winslow. the anatomical rearrangements after bypass create potential areas where an internal hernia can occur. in this case, a bowel resection was undertaken due to the anatomical variation of the cecal bascule and cecal volvulus, given the high rate of recurrence of this cecal pathology. the majority of internal hernias do not require bowel resection, especially when detected early and prompt surgical exploration is undertaken. mortality as a direct consequence of internal hernia is extremely rare. however, late diagnosis of internal hernias can lead to catastrophic gut loss and may require lifelong tpn and/or visceral transplantation or autologous reconstruction. conclusion: a careful history and physical examination of the bariatric patient can elicit the signs and symptoms of internal hernias and prevent the morbidity and mortality that can come with the complications of this condition. unusual presentations and causes are reasons for prompt diagnosis and complete exploration. 
shingo ishida 1 , naotsugu yamashiro 1 , satoshi taga 2 , koichi yano 2; 1 shinkomonji hospital, 2 shinmizumaki hospital symptomatic cholelithiasis is a common disease treated with laparoscopic cholecystectomy (lc). surgeons hesitate to operate if the patient is pregnant and in the third trimester. pregnant patients undergoing laparoscopic surgery have been reported increasingly; however, most case reports are confined to patients in the first and second trimesters. we report a patient who underwent lc in the third trimester and review the relevant literature. a 26-year-old woman in the third trimester (34w2d) of pregnancy was seen in the emergency department of our hospital with a history of upper abdominal pain. there had been no problem in the course of the pregnancy. examination showed an attack of gallstone colic. she was hospitalized the same day and underwent lc the next day. the fundus of the gravid uterus was 20 cm above the navel, so we needed to adapt the surgical approach, for example by inserting the first trocar in the left hypochondrium. operative duration was 63 minutes. she complained of abdominal distension on postoperative days (pod) 1 and 2, but there was no abnormality in the fetus. she was discharged on pod 4. she later gave birth to a healthy baby. lc in the third trimester of pregnancy was safely performed with obstetric backup. weekday or weekend hospital discharge: does it matter for acute care surgery? ibrahim albabtain 1 , roaa alsuhaibani 2 , sami almalki 2 , hassan arishi 1 , hatim alsulaim 1; 1 kamc, 2 background: hospitals usually reduce staffing levels over the weekend. this raises the question of whether patients discharged over a weekend may be inadequately prepared and possibly at higher risk for adverse events post-discharge. the aim of this study was to assess the outcomes of common acute care surgery procedures for patients discharged over the weekend, and to identify the key predictors of early readmission. 
methods: this retrospective cohort study was conducted at a tertiary care hospital between january and december 2016. surgical procedures included were cholecystectomy, appendectomy, and hernia repairs. patients' demographics, co-morbidities, complications, readmissions and follow-up details were collected from the electronic medical records. predictors and post-operative outcomes associated with weekend discharge were identified by multivariable analysis, using univariable and multivariable logistic regression models controlling for potential confounders. results: a total of 743 patients were included. overall median age was 35 years (iqr: 22, 58). the majority of patients were female (n=397, 53.4%). 361 patients (48.6%) underwent a cholecystectomy, 288 (38.8%) an appendectomy, and 94 (12.6%) hernia repairs. weekend discharges were 16.8% vs. 83.2% for weekday discharges. patients discharged during the weekend were younger (mean 34.2 vs. 41 years, p-value .001). post-discharge 14-day follow-up visits were significantly lower in the weekend discharge subgroup (83.1% vs. 91.2%, p-value 0.006). overall, the 30-day readmission rate was 3.2% (n=24), and did not differ between weekend and weekday discharge (or=0.28, 95% ci 0.52-9.70). conclusions: patients discharged on weekends tended to be younger in age and less likely to have chronic diseases. patients discharged over the weekend were less likely to follow up compared to weekday discharge patients. however, the readmission rate did not differ between the two groups. intrauterine device (iud) migration out of the uterine cavity is a serious complication. its incidence in the us has been reported to be about 0.001% annually. a previously published systematic review supports the use of laparoscopic surgery for elective removal of migrated iucds from the peritoneal cavity. we present the safety and efficacy of the laparoscopic approach to this complication in the acute care setting. 
depicted is an otherwise healthy 40-year-old female with no previous surgical history who presented to the ed with worsening abdominal pain for one week with no associated symptoms. on physical exam, the patient was non-toxic. the abdomen was moderately distended with guarding and rebound tenderness to palpation, but not rigid. the patient had been seen shortly prior to ed admission by her obgyn, and recent work-up with abdominal/pelvic x-ray and ultrasound had revealed a misplaced iud in the transverse position (sideways). the pregnancy test was negative. based on the patient's clinical presentation and recent radiologic findings, we decided to proceed with diagnostic laparoscopy. after systematic review of the cavity, the foreign body was found to be incorporated within the greater omentum. we proceeded laparoscopically with omentectomy and foreign body removal. there were no perioperative complications, and the patient was discharged on the following day. the use of laparoscopy in elective iud retrieval within the abdominal cavity has been considered the standard of care in surgical management to date. this poster demonstrates its use as an effective approach for safe removal of intra-abdominal foreign bodies also in the acute setting. symptomatic inguinal and umbilical hernias in the emergency department: opportunity lost? andrew t bates, md, jie yang, phd, maria altieri, chencan zhu, bs, salvatore docimo, jr., do, konstantinos spaniolas, md, aurora pryor, md; stony brook university hospital introduction: patients with symptomatic inguinal and umbilical hernias often present to the emergency department (ed) when their symptoms change or increase, usually not requiring emergent surgery. however, little is known about how often these patients present prior to eventual repair and whether they undergo surgery at the initial presenting institution. the aim of this study was to assess the clinical flow of patients presenting in the ed for inguinal and umbilical hernia. 
methods: all patients presenting to eds in new york state from 2005 to 2014 with symptomatic inguinal and umbilical hernias were identified using the new york state longitudinal hospital claims database (sparcs). patients were followed for records of hernia repair and subsequent inpatient and outpatient visits up to 2014. results: 42,950 patients presenting to the ed for symptomatic inguinal hernia were identified. 5.3% (2,297) of ed presentations resulted in inpatient admissions. 14,491 (33.7%) had repair later, and their average time from ed presentation to inguinal hernia repair was 158 (±351) days. 90.1% of patients who did not have subsequent surgery had only one ed visit. of those who underwent interval repair, 79.7% had only one ed visit prior to surgery. for those patients with only one ed visit before repair, 29.3% had repair at a different hospital, as opposed to 48.6% if multiple ed visits were made. 15,297 umbilical hernia patients presenting to the ed were identified. 7.2% (1,109) resulted in inpatient admission. 3,507 (22.9%) had interval repair, with the average time from ed presentation to umbilical hernia repair being 175 (±369.82) days. 92% of patients who had no record of later repair presented to the ed once. of those patients who underwent repair, 78.5% did so after one ed visit. for those patients with only one ed visit before repair, 32.9% had repair at a different hospital, as opposed to 48.6% if multiple ed visits were made. conclusion: a majority of patients with symptomatic inguinal and umbilical hernias who present to the ed do so once, with no subsequent follow-up or repair. a significant proportion of patients with acutely symptomatic inguinal/umbilical hernias who undergo interval repair after a previous ed visit will opt for definitive surgery at another hospital facility. 
this represents a missed opportunity for continuity of care for providers and healthcare systems. nikhil gupta, dr, himanshu agrawal, dr, arun k gupta, dr, dipankar naskar, dr, c k durga, dr; pgimer dr rml hospital, delhi introduction: peritonitis is the inflammation of the serous membrane that lines the abdominal cavity and the organs contained therein; it is one of the most common infections and an important problem that a surgeon has to face. reproducible scoring systems that allow a surgeon to determine the severity of intra-abdominal infections are essential to prognosticate the patient. this study was done to compare the apache ii and mpi scores for assessing prognosis in perforation peritonitis. methods: all patients admitted with hollow viscus perforation from 1st november 2015 till 31st march 2017 were included in the study. it was a cross-sectional observational study. apache ii and mannheim peritonitis index (mpi) scores were calculated for all the patients in order to assess their individual risk of morbidity and mortality. the outcome variables were studied postoperatively: post-operative wound infection, wound dehiscence, anastomotic leak, respiratory complications, duration of hospital stay, need for ventilator support and mortality. the inferences were drawn with the use of appropriate tests of significance. results: the study comprised 63 patients. neither apache ii nor mpi could predict postoperative wound infection. the mean apache ii score of the 63 subjects included in the study was 11.2±8.1 with a range of 0 to 35, and the mean mpi score was 26.9±7.2 with a range of 6 to 39. apache ii was able to predict postoperative respiratory complications, post-operative need for ventilatory support, hospital stay duration and mortality, while mpi was able to predict post-operative wound dehiscence, post-operative respiratory complications, post-operative need for ventilatory support and mortality. 
neither apache ii nor mpi could predict postoperative anastomotic leak or postoperative wound infection. conclusion: the mannheim peritonitis index is a useful and simple method to determine outcome in patients with peritonitis. mpi is comparable to apache ii in assessing prognosis in perforation peritonitis and can well be used in the emergency setting in place of apache ii scoring when time is a definite constraint. microrna-17 and the prognosis of human carcinomas: a systematic review and meta-analysis chengzhi huang, mengya yu; guangdong general hospital (guangdong academy of medical science) muhammad nadeem 1 , julian ambrus, md 1 , steven schwaitzberg, md 1 , john butsch, md 2; 1 university at buffalo, 2 introduction: the mitochondrion is a small energy-producing structure of the cell. mitochondrial myopathy (mm) is a clinically mixed disorder that can affect various systems besides skeletal muscle. mm starts with muscle weakness or exercise intolerance. mm patients have lower skeletal muscle mitochondrial function than healthy individuals, because of weakened intrinsic mitochondrial function and decreased mitochondrial volume density. the role of mm in gerd and constipation has not been studied so far. this study aimed to assess the effects of mm on the gastrointestinal system, specifically gastroesophageal reflux disease (gerd), gall bladder issues, and constipation. methods: between may 2011 and june 2016, 101 patients diagnosed with mm at buffalo general hospital were included in this retrospective study. we assessed the demeester score for gerd and wexner's constipation questionnaire for constipation; a demeester score >14 and a constipation score >15 were the cutoffs for gerd and constipation respectively. data were analyzed using spss version 24. mitochondrial enzymes were assessed from muscle biopsy reports. results: of 101 (85.1% female, 14.9% male) mitochondrial myopathy patients, 38.6% and 13.9% were suffering from gerd and constipation respectively. 
35.1%, 43.4% and 95.9% of patients had gall bladder issues, obstructive sleep apnea (osa) and fatigue respectively. mm gerd patients (87.2% female, 12.8% male) had a mean demeester score of 22.56 (sd: 6.49), above normal, although 76.3% of patients were on gerd medications. 29.2% of mm-associated gerd patients had abnormal nadh cytochrome c reductase, cytochrome c oxidase and citrate synthase mitochondrial enzymes, while 26.1% had an abnormal cytochrome c oxidase enzyme only. mm patients with constipation had a mean wexner's constipation score of 19.14 (sd: 2.568), above normal, although 94.9% were using enemas, medications or digital assistance; 50% of these patients had abnormal cytochrome c oxidase and nadh cytochrome c reductase enzymes. 29.4% of patients with mm-associated gall bladder issues had abnormal cytochrome c oxidase. 63.6% of patients with mm-associated gerd and constipation had gall bladder issues. conclusion: in this study, we found that mm affects the gastrointestinal system, causing gerd, constipation and gall bladder issues. gerd, constipation and gall bladder problems are common in mm patients even when patients are taking medications for gerd and constipation. cytochrome c oxidase, citrate synthase and nadh cytochrome c reductase are the most commonly impaired mitochondrial enzymes in mm patients and in patients with mm-associated gerd, constipation and gall bladder issues. objectives: gulf war illness (gwi) is a chronic, multisymptom illness marked by cognitive and mood dysfunction and disrupted neuroendocrine-immune homeostasis, affecting 30% of gw veterans. after 25+ years, useful treatments are lacking and its cause is poorly understood, although exposures to pyridostigmine bromide and pesticides are consistently identified among the strongest risk factors. 
previous work in our laboratory using an established rat model of gwi identified persistent elevation of microrna-124 (mir-124) levels in the hippocampus, whose gene targets are involved in cognition-associated pathways and neuroendocrine function, suggesting that mir-124 inhibition is a promising therapeutic approach to improve the complex symptoms exhibited in gwi. the purpose of this study was to identify broad effects of mir-124 inhibition in the brain by profiling the expression of genes known to play a critical role in synaptic plasticity, glucocorticoid signaling, and neurogenesis in gwi rats administered a mir-124 antisense oligonucleotide (mir-124 inhibitor). methods and procedures: nine months after completion of a 28-day exposure regimen involving gw-relevant chemicals and stress, rats underwent intracerebroventricular infusion of mir-124 inhibitor (n=9) or scrambled negative control oligonucleotide (n=8) and were implanted with 28-day osmotic pumps delivering 0.1 nmol/day. intranasal delivery of oligonucleotides was performed on additional rats (n=4 per group; daily for 10 days) to determine whether mir-124 inhibition is achievable using a noninvasive procedure. hippocampi were harvested and quantitative pcr arrays were used to profile the expression of focused panels of genes important for 1) synaptic alterations during learning and memory, 2) signaling initiated by the glucocorticoid receptor (a known mir-124 target), and 3) neurogenesis. hippocampi were also analyzed by quantitative pcr to examine expression levels of endogenous mir-124. results: upregulation (>2.5-fold change, p<0.05) of 8 synaptic plasticity genes, 11 glucocorticoid signaling genes, and 4 neurogenesis genes was observed in the hippocampus of gwi rats infused with mir-124 inhibitor compared to scrambled control, consistent with a significant reduction (p<0.001) in mir-124 levels detected in rats receiving mir-124 inhibitor. 
altered gene expression and a reduction in mir-124 levels were not observed in rats after intranasal delivery. conclusion: mir-124 antagonism in the hippocampus upregulates the expression of several downstream targets involved in synaptic plasticity, glucocorticoid signaling, and neurogenesis, and is a promising therapeutic approach to improve cognition, emotion regulation, and neuroendocrine dysfunction in gwi. further testing is being pursued to discover the optimal dose for intranasal administration to test the viability of this option for ill gw veterans. nikhil gupta, dr, ananya deori, dr, arun k gupta, dr, dipankar naskar, dr, c k durga, dr; pgimer dr rml hospital, delhi background: the ultrasonic dissector, commonly known as the harmonic scalpel, has been in use for achieving haemostasis in surgery for almost 20 years. its advantages in breast surgery, especially in dissection of the axilla, have been a matter of debate, as previous studies have shown inconsistent results. this study compares the outcomes of the ultrasonic dissector in axillary dissection with those of conventional electrocautery. methods: patients undergoing mrm and bcs with axillary dissection from november 2014 till march 2016 were included in the study. patients were randomized into two groups: group a underwent axillary dissection with the ultrasonic dissector and group b with electrocautery. operative time, intra-op bleeding, post-op pain, post-op drain volume, hospital stay and any other complications were noted in the two groups. results: there were 35 patients in each group. group a had a significantly shorter operative time, both for axillary dissection (30.86 min vs. 40.63 min, p<0.001) and for the total duration (77.20 vs. 90.20 min, p=0.001). blood loss, as measured by the mop count, was significantly less in group a. 
there was a significant reduction in total post-op drainage volume, which resulted in fewer days with the drain in situ and fewer total days stayed in the hospital. there was no significant difference in post-op complications such as haematoma, seroma, flap necrosis, oedema, etc. conclusion: with the use of the ultrasonic dissector, operative time, blood loss and axillary drainage were significantly reduced. the reduced axillary drainage, in turn, shortened the hospital stay. there was no significant difference in complications such as haematoma formation, seroma formation, skin flap necrosis or oedema. for the statistical analysis, χ2 or fisher's exact tests were used to compare proportions, and the nonparametric mann-whitney u test was used for values with non-normal distribution. discussion: the study included 579 patients. all preoperative laboratory indicators were elevated. the laboratory tests did not demonstrate any statistically significant difference between these two groups. the group of patients without stones in the cbd diagnosed by ioc was also divided into patients with cbd diameters <0.8 mm and ≥0.8 mm. in these two groups as well, statistical analysis of the laboratory tests did not demonstrate a significant difference. all patients underwent ioc. ioc showed stones in 84/113 patients (74.3%). a comparison of patients with and without stones at ioc showed similar mean times from hospitalization to surgery (5.9 background: housed in a high-volume tertiary referral center, our division receives a large number of transfers and referrals from outside institutions for patients who require completion cholecystectomies. in this study "completion cholecystectomy" refers to patients who meet one of three criteria: 1. previous subtotal cholecystectomy, 2. previously aborted cholecystectomy, or 3. previous cholecystectomy with an incidental finding of cancer on pathology. 
traditionally, exploration of a reoperative field in the right upper quadrant mandates an open approach due to dense adhesions and inflammation. over the past few years, we have found that robotic-assisted surgery has allowed us to perform these completion cholecystectomies in a minimally invasive fashion. methods: case logs and operating room billing logs were reviewed from 2010 to 2017 to identify all robotic-assisted cholecystectomies performed at our institution. review of all reports identified 30 completion cholecystectomies. all additional variables, including demographics, operative variables, and postoperative outcomes, were determined from manual chart review of all consultation notes, operative reports, anesthesia records, progress notes, discharge summaries, and postoperative office visits. results: of the 30 identified robotic-assisted completion cholecystectomies, 16 patients had a previous subtotal cholecystectomy, 11 patients had an aborted cholecystectomy, and 3 patients had an incidental finding of t2 gallbladder carcinoma on pathology. fifteen patients (50%) underwent preoperative ercp, either for choledocholithiasis or to define biliary anatomy. average time from the original procedure was 44 months, with 30.0% of previous procedures performed via an open approach. average or time was 142.1 minutes, average ebl was 102.1 cc, and average length of stay was 2.1 days. one patient (3.3%) was readmitted within 30 days for nausea that resolved with antiemetics. three patients (10.0%) had minor postoperative complications (clavien-dindo grade 1 or 2), which resolved with pharmacologic therapy. no patient suffered a 90-day mortality. all cases were completed in a minimally invasive fashion without conversion to an open procedure. conclusions: although rare, completion cholecystectomies present a challenging surgical scenario. 
although traditionally performed via an open approach, we have had success in recent years at our institution with a robotic-assisted approach to completion cholecystectomy. we feel that the robotic approach offers certain advantages in a hostile, reoperative field, allowing us to perform these procedures in a minimally invasive fashion with no conversions to an open procedure to date. previously limited to case reports, this report of 30 procedures represents, to our knowledge, the largest case series of robot-assisted completion cholecystectomies. s152 surg endosc (2018) 32:s130-s359 background: percutaneous cholecystostomy tube (pct) has been used as a bridge treatment for grade ii-iii moderate to severe acute cholecystitis (ac) to "cool" the gallbladder down over several weeks and allow the inflammation to resolve prior to performing interval cholecystectomy (ic) and removal of the pct, often laparoscopically. the aim of this study was to assess the impact of timing of ic after pct on operative success and outcomes. methods: a retrospective review of electronic medical records of patients who were treated for ac with a pct and subsequently underwent ic at our institution from january 2005 to december 2016 was performed. patients were divided into three groups (n=7 each) based on the duration of the pct prior to ic, and these groups were comparatively analyzed. a comparative sub-analysis of clinical outcomes between patients who underwent surgery within the first week vs. the third week or later after pct was also performed. results: a total of 21 patients met the study criteria. each group had 7 patients. there were no statistically significant differences between the 3 groups with regard to age, gender, bmi, imaging findings, and indications for cholecystostomy tube placement. overall, there was no statistically significant difference in outcomes between performing ic within the first 5 weeks, at 5-8 weeks, or >8 weeks after pct placement. 
the length of stay, overall morbidity, clavien-dindo grade of complications and mortality were similar between the 3 time intervals. however, a sub-analysis showed that patients who underwent ic within the first week of pct placement had a statistically significantly higher mortality rate (p=0.048) compared to those who underwent ic >3 weeks after pct placement. the two patients who died in our sample had ic within a week after pct placement. even though there was a statistically significantly higher morbidity rate in those who had ic >3 weeks after pct, the clavien-dindo grade of these complications was lower. conclusion: delaying ic to >5 weeks after pct placement for ac is not associated with any improvement in patient morbidity, length of stay or rate of conversion from laparoscopic to open cholecystectomy. cholecystectomy within the first week of pct placement is associated with a higher mortality rate than after 3 weeks, likely due to associated sepsis. introduction: the effect of intraoperative bile spillage during laparoscopic cholecystectomy (lc) on operative time (or time), length of stay (los), postoperative complication rates, and 30-day readmission rates was analyzed. laparoscopic cholecystectomy is the gold standard operation for gallbladder disease in the united states. a number of studies have shown that same-day discharge in elective laparoscopic cholecystectomy is feasible and safe. bile spillage during this procedure can be a common occurrence in teaching institutions; however, data on its effects on operative outcomes are lacking. methods: this is a retrospective study analyzing all laparoscopic cholecystectomies performed at the brooklyn hospital center (tbhc), both emergent and elective, from 2016 to 2017. patient data were collected on demographics, comorbidities, bile spillage, operative findings, complications, los, and 30-day readmission rates. statistical analysis was performed using ibm spss statistics v. 19. 
analysis of covariance (ancova) was performed on continuous variables and significance levels were calculated. pearson's chi-square significance levels were calculated for all binomial variables. results: of the 281 patients who underwent lc during this time period, intraoperative bile spillage was encountered in 32 patients. interestingly, bile spillage was significantly more likely to be seen in elective cases than in acute cases (11.8% vs 10.8%, p<0.05). there was a statistically significant increase in or time in cases where intraoperative bile spillage was encountered vs. cases where no bile spillage was encountered (146 vs. 124 min, p=0.007). there was a significant increase in the rate of conversion to an open procedure when bile spillage was encountered (3.1% vs. 0.4%, p<0.05). drain placement rates increased, not surprisingly, when bile spillage was encountered (34.4% vs. 5.6%, p<0.05). there was no statistically significant difference in los between cases with bile spillage and cases without (2.47 days vs. 1.75 days). there was no significant increase in complication rates or 30-day readmission rates. conclusions: intraoperative bile spillage significantly increases or time, conversion to an open procedure, and drain placement. however, no significant effect of intraoperative bile spillage on length of stay, complication, or 30-day readmission rates was observed. thus, intraoperative bile spillage appears to have little clinical significance for surgical outcomes, although it may have an impact on overall healthcare costs. larger prospective studies evaluating the effect of intraoperative bile spillage on los, or time, complication rates, and 30-day readmission rates are needed to analyze these effects further. tariq nawaz, md; rawalpindi medical university study design: prospective and observational study. place and duration: from january 2012 to july 2017, surgical unit ii, holy family hospital, rawalpindi. 
patients and methods: one thousand patients with a diagnosis of cholelithiasis were included. exclusion criteria were patients younger than 12 years or older than 80 years. calot's triangle dissection was done meticulously. cystic artery and hepatic artery anomalies and variations were observed and analyzed in spss 21. results: age varied from 12 to 80 years. in terms of distributional variation, the cystic artery was single in 90% of cases, branched in 7% and absent in 3%. in terms of positional variation, the cystic artery was superomedial to the cystic duct in 85% of cases, anterior in 7%, posterior in 3% and low lying in 5%. in terms of length variation, 800 (80%) cases had a normal cystic artery; a short cystic artery was found in 150 (15%) cases and a long cystic artery in 50 (5%) cases. other arterial variations involved the hepatic artery, i.e. moynihan's hump (3%) and a right hepatic artery present in calot's triangle (5%). conclusions: for the safety of laparoscopic cholecystectomy, one should be well aware of the anatomical variations of the cystic and hepatic arteries. keywords: cholelithiasis, cholecystitis, laparoscopic cholecystectomy. as small as it gets: micro-invasive laparoscopic cholecystectomy using only two 5 mm trocars and a needle grasper background: the majority of surgeons use four ports for laparoscopic cholecystectomy (lc). multiple efforts have been made to reduce the number and size of ports. patients and methods: of 114 lcs performed from 6/2014 to 4/2017, 109 (96%) were done using three instruments, including 55 cases in which 2 trocars and the teleflex needle grasper were used. in 26 cases only two 5 mm trocars were used (left upper quadrant (luq) and umbilicus), with the minigrasper placed between the two. the gallbladder (gb) serosa was incised on both sides and a window was created behind the gb midportion and widened towards the fundus and infundibulum. 
cystic artery (ca) and cystic duct (cd) were dissected out, obtaining the critical view, and after the last fundus adhesion was cut, ca and cd were secured with clips or an endoloop. results: the median age of the 19 women and 7 men was 42.4 (range 24.1-77.4) years. lc was done for acute cholecystitis (n=4), chronic cholecystitis (n=8), biliary dyskinesia (n=9) and choledocholithiasis (n=5). three patients had an ercp with bile duct clearance prior to the lc. in one case a keith needle was used to suspend the gb fundus for better exposure. twelve patients had additional procedures together with their lc (wedge liver biopsy (4), lysis of adhesions (3), umbilical hernia repair (1), mesenteric/lymph node biopsies (4)). median or time was 51 (range 34-129) minutes. the specimen was removed through the luq port site in 9 patients. there were no vascular or bile duct injuries in this series. 71% of cases were done as outpatient procedures, 25% of patients required only 23-hour observation, and three patients were hospitalized for medical reasons. conclusion: in selected cases with either small stones or biliary dyskinesia, lc with only two 5 mm ports and a needle grasper is possible. the teleflex minigrasper can completely replace a port-based grasper. introduction: the standard treatment for lithiasic acute cholecystitis remains laparoscopic cholecystectomy, although the timing of surgery is still controversial. the aim of this prospective study is to evaluate the advantages and limitations of early laparoscopic cholecystectomy in a district hospital. methods and procedure: all patients undergoing laparoscopic cholecystectomy at the surgical department of "carlo urbani" hospital in jesi (italy) from may to september 2017 were consecutively enrolled. clinical data such as gender, age, bmi, comorbidity, previous abdominal surgery and previous acute cholecystitis were collected. 
subsequently, the patients were arranged in two groups according to the timing of intervention (early versus elective surgery). for each group, we compared data concerning surgery, such as operative time, intraoperative and postoperative complications, length of hospital stay and cost analysis. results: this study is part of an ongoing research project. so far, we have collected 67 laparoscopic cholecystectomies. ten (15%) of the patients were admitted with acute cholecystitis and were operated on during the hospital stay (group a). group b included patients scheduled for elective surgery (n=57; 85%). the two groups were comparable with respect to clinical data. conversion to an open approach was performed in 3 cases, all of them in group b. mean surgical time was 67.5±22.01 minutes in group a and 62.4±19.77 minutes in group b (p=0.494). no significant differences in intraoperative and postoperative complication rates were seen between the two groups, with just a few in both. mean overall length of hospitalization was 6.4±3.89 days in group a and 2±1.63 days in group b (p=0.001), whereas the difference in length of postoperative hospitalization was not statistically significant. due to the extended hospitalization for group a, the cost increase compared to group b was statistically significant as well. conclusions: early laparoscopy is comparable to delayed laparoscopy in terms of postoperative hospitalization and complications in the management of acute cholecystitis. a longer hospital stay among patients scheduled for immediate surgery may be associated with a more time-consuming diagnostic work-up before surgery. however, in future research we expect to enhance our cost analysis with more data regarding the costs incurred in the first hospitalization for nonoperative treatment of group b inpatients with acute cholecystitis. 
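two-group comparisons of the kind reported in these abstracts (mean operative time or length of stay between an early and an elective group, conversion rates between groups) are conventionally tested with a two-sample test for the continuous variables and fisher's exact or chi-square test for the proportions. a minimal sketch using python's scipy, with invented illustrative numbers rather than any study's actual data:

```python
# illustrative sketch only: the group sizes, means and counts below are
# invented for demonstration and are NOT taken from any abstract above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical operative times (minutes) for an "early" (n=10) and an
# "elective" (n=57) group
early = rng.normal(67.5, 22.0, size=10)
elective = rng.normal(62.4, 19.8, size=57)

# welch's t-test for a continuous, roughly normal variable
# (does not assume equal variances between groups)
t, p_t = stats.ttest_ind(early, elective, equal_var=False)
print(f"welch t = {t:.2f}, p = {p_t:.3f}")

# mann-whitney u test for a non-normally distributed variable
# (e.g. length of stay)
u, p_u = stats.mannwhitneyu(early, elective, alternative="two-sided")
print(f"mann-whitney u = {u:.1f}, p = {p_u:.3f}")

# fisher's exact test on a 2x2 table of proportions
# (e.g. hypothetical conversions to open: 0/10 early vs 3/57 elective)
odds, p_f = stats.fisher_exact([[0, 10], [3, 54]])
print(f"fisher exact p = {p_f:.3f}")
```

fisher's exact test is preferred over chi-square here because the expected cell counts in such small 2x2 tables are well below the usual chi-square validity threshold.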
introduction: with improvements in healthcare access and technology, admissions of the octogenarian population with acute cholangitis (ac) are increasing. octogenarians are vulnerable to inferior outcomes, and there is no study evaluating factors predicting outcomes of ac in octogenarians. the aim of our study is to identify factors predicting outcomes, and to evaluate the quick sequential organ failure assessment (qsofa) score and tokyo guidelines 2013 (tg13) severity grading for octogenarian patients with ac. methods: a retrospective review of octogenarian patients admitted with ac from january 2010 to december 2016 was performed. demographic profile, clinical presentation and discharge outcomes were studied. systemic inflammatory response syndrome (sirs), qsofa and tg13 severity grading scores were calculated. mortality was defined as death within 30 days of admission or in-hospital mortality. statistical analysis was performed using spss version 21. results: there were a total of 1875 patients admitted for ac, of whom 284 (15%) were octogenarians. the majority (n=167, 59%) were female, with a mean age of 83 (range 80-86) years. most cases were secondary to gallstones (n=197, 69%), and 53 (19%) were due to malignancies. 140 (49%) and 8 (3%) patients fulfilled the sirs and qsofa criteria of severity respectively. 142 (50%) and 93 (33%) of patients had a tg13 severity grading of moderate and severe respectively. nine (3%) patients required inotropic support in the emergency department (ed) and 48 (17%) patients were admitted to the critical care unit (ccu). 166 (58%) patients underwent endoscopic retrograde cholangiopancreatography (ercp) and 33 (12%) underwent percutaneous transhepatic biliary drainage (ptbd) for biliary decompression. 8 patients underwent index cholecystectomy. length of stay was 11.5 (range 1-91) days, with a 30-day mortality of 11%. 
multivariate analysis showed that an abnormal glasgow coma score (p=0.017) and malignancy (p<0.001) predicted 30-day mortality. the use of ed inotropic support predicted ccu admission (p=0.034). a positive blood culture (p=0.005), presence of malignancy (p<0.001), use of ed inotropes (p=0.001), and index cholecystectomy (p=0.008) predicted a longer length of stay. qsofa (p<0.001) and tg13 severity grading (p=0.001) were predictive of 30-day mortality; the sirs criteria were not. conclusion: reduced consciousness and malignancy predicted 30-day mortality in octogenarian patients with ac. the qsofa and tg13 severity grading systems are superior to the sirs criteria in predicting mortality of octogenarians with ac. our group has performed needlescopic grasper-assisted silc (nsilc) to overcome these problems. we evaluated the technical feasibility, safety and benefit of nsilc versus three-port laparoscopic cholecystectomy (tplc). methods and procedures: this prospective randomized controlled study was conducted to compare the advantages, if any, between nsilc and tplc. one hundred and forty-eight patients were randomized into two groups, with one group undergoing nsilc (74 patients) and a control group undergoing tplc (74 patients). basic information about the patients and diagnoses was collected. surgical outcomes comprised critical view of safety (cvs) time, major procedure time and total operation time, and postoperative complications were also compared. results: the nsilc group consisted of 20 males (27.0%) and 54 females (73.0%), and the tplc group of 32 males (43.2%) and 42 females (56.8%) (p=0.038). the average age of the nsilc group was 44.5±13.2 years, and of the tplc group 52.5±15.2 years (p=0.003). 
cvs time was shorter in the tplc group than in the nsilc group (nsilc: 14.4±8.9 min, tplc: 10.0±7.1 min, p=0.002), as was major procedure time (skin incision to gb removal from the liver bed; nsilc: 21.7±15.3 min, tplc: 10.6±8.4 min, p=0.002). however, there was no significant difference in postoperative complications (nsilc: 3, tplc: 6, p=0.634). conclusion: although the cvs time, major procedure time, and operation time of nsilc were longer than those of tplc, overall clinical results were similar. nsilc is a feasible and safe surgical procedure in patients with benign gallbladder disease. introduction: management of malignant biliary obstruction not amenable to surgery is usually by means of ercp or pthc. however, on occasion these routes are not accessible, and the alternative decompressive technique of percutaneous cholecystostomy (pc) has to be adopted. the aim of this study was to evaluate the efficacy and outcomes of pc in a highly selected series at a tertiary referral center. methods: we retrospectively reviewed all patients who had undergone pc from 2000 to 2014. data collected included baseline demographics, comorbidities, details of pc placement and management, etiology of mbo, and post-procedure outcomes. the charlson comorbidity index (cci) was calculated for all patients at the time of pc. results: four hundred and eight patients underwent pc placement, of whom 28 patients, including 18 (64%) males and 10 (36%) females, had malignant biliary obstruction. the mean age at the time of pc placement was 63.5±11.7 years, and the mean cci was 8.03±2.82. mbo in the 28 patients was due to pancreatic malignancies (n=14), cholangiocarcinoma (n=6), primary hepatic malignancies (n=3), secondary hepatic tumors (n=4), and ampullary carcinoma (n=1). pc tube complications were reported in 7 (25%) patients. the mean number of tube exchanges was 3.4±2.65. mean duration from pc tube placement to death was 159±159.4 days. 
14 total deaths were recorded. conclusion: pc placement appears to be a viable option in mbo in elderly and frail patients. in this cohort, pc may be a potential definitive management option to improve quality of life. melanie boyle, daivyd palencia, philip leggett; houston northwest medical center background: there are very few studies assessing the relationship between gastroesophageal reflux and biliary disease. this is surprising, as they share presenting symptoms as well as risk factors, particularly obesity. our group previously produced a review of 36 patients in our practice who had undergone some type of reflux procedure. its conclusions showed that the prevalence of gallbladder disease in our severe reflux population is much higher than that found in the general population. the goal of this study is to expand on that data with a larger sample size, to investigate the incidence of biliary disease in our reflux population and to decide whether this should influence our pre-operative algorithm for anti-reflux surgery patients. methods: we expanded on our previously performed retrospective review of patients who underwent laparoscopic fundoplication for reflux disease. we previously reviewed data from 2015 to 2017; we are now looking at data from 2012 to 2017. our expected sample size will include approximately 150 patients, 75 of whom have currently been reviewed; our previous study included only 36. the surgery performed was either a toupet or nissen fundoplication, and one patient underwent a dor. demographic data, imaging studies, and pathology results were reviewed. results: we looked at whether each patient who underwent antireflux surgery had a prior cholecystectomy, either remotely or recently, underwent concomitant cholecystectomy, or had no biliary disease in their workup. the groups had similar ages and were predominantly women. 
we once again demonstrated that the prevalence of gallbladder disease in our severe reflux population is much higher than in the general population. when approaching a patient with gastroesophageal reflux disease, attention should be paid to gallbladder symptomatology as well. we recommend that it may be beneficial to include gallbladder ultrasound in the pre-operative workup for antireflux surgery so that concomitant cholecystectomy can be performed if indicated. steven schulberg, do, jonathan gumer, do, matt goldstein, vadim meytes, do, george ferzli, md; nyu langone hospital -brooklyn introduction: acute cholecystitis is a common surgical disease, with roughly 500,000 cholecystectomies performed in the us annually. the current dogma revolves around the "72-hour rule", advocating early cholecystectomy within the window and, if beyond 72 hours, conservative treatment and interval operation. in patients beyond the 72-hour window, as well as those with multiple comorbidities, advanced age, and other complicating factors, cholecystostomy has become an acceptable treatment as a bridge to interval cholecystectomy. while this has become an appropriate treatment modality, it does not come without its own set of complications. we aim to evaluate the rate of complications at our institution. methods: this is a retrospective review of all patients at our institution who underwent cholecystostomy placement between 2013 and 2016. we evaluated the comorbidities, readmission rate, overall rate of complications associated with cholecystostomy tubes, and eventual definitive cholecystectomy. results: our cohort includes 100 patients, 52% of whom were male, with a mean age of 71. we had an overall complication rate of 49.5%, including tube dislodgements, leaking tubes, and misplaced tubes. the all-cause readmission rate was 56%, and only 32% of patients who had cholecystostomy drains underwent interval cholecystectomy. 
conclusion: there has been much interest in the treatment of acute cholecystitis in patients with multiple comorbidities. in review of our data, a surprisingly large number of patients had mechanical complications involving the cholecystostomy drain. in an era focused on decreasing readmission rates and their associated costs, drains carry a high risk of malfunction which will, in turn, lead to increases in these two metrics. while there is more work to be done in the evaluation of early cholecystectomy versus cholecystostomy in this subgroup of patients, we suspect that early cholecystectomy in the medically optimized patient will lead to reduced length of stay and hospital costs as well as increased patient satisfaction. does selective use of hepatobiliary scintigraphy (hida) scan for diagnosis of acute cholecystitis, following equivocal nondiagnostic gallbladder ultrasonography, affect outcomes? fahad ali, ba, amir aryaie, md, eneko larumbe, phd, mark williams, md, edwin onkendi, md; texas tech university health sciences center introduction: acute cholecystitis (ac) is diagnosed by characteristic gallbladder ultrasonographic findings (high specificity, low sensitivity). hepatobiliary scintigraphy (hida) may be needed to confirm ac (higher sensitivity and specificity). the aim of this study was to assess the impact of the current selective use of hida scan for sonographically equivocal cases of ac on outcomes. methods: a retrospective chart review of patients treated for ac at our institution (1/2015 to 12/2016) was performed. patients were divided into 2 groups: the ultrasound only group (us-only) and the ultrasound-hida group (us-hida). timing of us and hida, and of intervention for ac since presentation to the emergency room (er), and their impact on outcomes were analyzed. ac severity was graded per the tg13 tokyo guidelines. results: a total of 110 patients were analyzed.
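timing variables such as time-to-ultrasound are typically skewed, which is why abstracts like this one summarize them as a median with interquartile range (iqr) rather than a mean. a minimal sketch of that computation, using python's standard library and hypothetical times (not the study's data):

```python
import statistics

# hypothetical hours from er presentation to ultrasound (not study data)
times = [1.3, 2.0, 3.0, 3.5, 5.0, 8.7, 10.0, 12.0]

median = statistics.median(times)
q1, _, q3 = statistics.quantiles(times, n=4)  # quartiles, default 'exclusive' method

print(f"median {median} (iqr {q1}-{q3})")
```

skewed summaries like these are then compared between groups with a rank-based test (e.g. mann-whitney u) rather than a t-test.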
the 2 groups were statistically similar with regards to age, body mass index, asa class ii, iii and iv, extent of leukocytosis at presentation and liver function test levels at presentation. in the us-only group, diagnostic ultrasound was obtained sooner [median of 3 (interquartile range, iqr 1.3-8.7) hours] from presentation to the er compared to the us-hida group, p=0.007. hida was obtained after a median delay of 11.5 (iqr 3.7-25) hours from a nondiagnostic ultrasound. the majority of patients (87%) in the us-only group had mild (tg13 grade i) to moderate (tg13 grade ii) ac, while 78% of the us-hida group had moderate (tg13 grade ii) to severe (tg13 grade iii) ac (p=0.003). despite this, more patients in the us-hida group (39%) had a "normal" non-diagnostic ultrasound compared to the us-only group (4.3%), p<0.001. seven patients in the us-hida group had no intervention, due to a normal hida scan (2), ac misdiagnosis due to liver cirrhosis (1), and severe medical comorbidities (4). more patients (74%) in the us-only group underwent laparoscopic cholecystectomy, compared to 39% in the us-hida group (p=0.006). between the two groups, there were no significant differences in 90-day morbidity, mortality and reoperations. however, the length of stay was longer by a median of 3.5 days in the us-hida group (p=0.003). conclusion: patients with moderate to severe ac are more likely to need a hida scan due to a "normal" non-diagnostic ultrasound, to have a delay in diagnosis, to have no intervention for ac due to severe medical comorbidities, and to have a lower chance of laparoscopic cholecystectomy. the length of hospital stay is significantly longer for these patients, by a median of 3.5 days. introduction: benign gallbladder disease is commonly treated with laparoscopic cholecystectomy (lc). gallbladder cancer (gbc) is a rare malignancy characterized by high invasiveness and poor survival.
in our institution, all gallbladder specimens are routinely sent to pathology to rule out gbc. the purpose of our study was to assess the efficacy of routine histopathology of gallbladder specimens after cholecystectomy (cly) for all gallbladder disease. methods and procedures: after obtaining approval from our institutional review board, a retrospective review was conducted of all patients who underwent cly from june 2012 to may 2016. the data obtained included gender, age, american society of anesthesiologists score (asa), body mass index (bmi), comorbidities, length of stay (los), radiological imaging and pathology results. independent t and chi-square tests were performed using ibm® spss® 24 software. results: there were 903 cly performed at our institution, of which 842 (93%) were lc. females comprised 675 (75%) of patients and the median age was 48. 7 (1%) gallbladder specimens were found to be cancerous. 896 (99%) gallbladder specimens were benign. the majority, 533 (59%), were chronic cholecystitis, 238 (27%) were acute cholecystitis and 22 (2%) were gangrenous cholecystitis. 29 (3%) were found to be acalculous cholecystitis and 5 (1%) were cholelithiasis. 69 (7%) were found to be adenomyomatosis and other findings. conclusion: in our institution, less than 1% (7) of all gallbladder specimens were found to be cancerous. it would decrease cost and workload if gallbladder specimens were selectively sent to pathology. emanuel a shapera, md 1 . we sought to determine clinical factors associated with recurrent cholangitis in two las vegas community hospitals to aid providers in the management of this disease. methods and procedures: retrospective, multi-center study. over 4000 ercps were analyzed between 2010 and 2017. 24 patients were identified as having multiple (60) admissions for cholangitis per tokyo criteria. univariate and multivariate analysis was conducted.
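the chi-square comparisons named above (run in spss in this study) reduce to a simple computation on a contingency table. a minimal pure-python sketch of pearson's chi-square statistic for a 2×2 table, with made-up counts rather than the study's data:

```python
def chi_square_2x2(a, b, c, d):
    """pearson chi-square statistic (no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    observed = [a, b, c, d]
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# made-up counts, not the study's data
stat = chi_square_2x2(10, 20, 30, 40)
```

the statistic is then compared against a chi-square distribution with 1 degree of freedom to obtain the p-value (e.g. via scipy.stats in practice).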
results: patients with a significantly (p<0.0001) higher albumin level on admission (3.7) were discharged home more often than patients discharged to a facility or hospice (2.7). on multivariate analysis, non-home discharge was associated with a lower albumin level at admission (p=0.0055) and a greater maximum temperature prior to decompression (p=0.0354). increased hospital stay was associated with a lower albumin level at admission (p=0.0019). a majority (31/60) of recurrent episodes involved stent placement, exchange or removal. 14 patients (58%) had either biliary malignancy, gallbladder or both. blood cultures were drawn in 52% of all episodes and were positive in 45%, e coli being the most common pathogen isolated. all patients had low hdl levels (6-36, mean 22). conclusions: high fevers and poor nutritional status were associated with increased length of hospital stay and fewer home discharges. tumors, gallbladders and malfunctioning stents contribute substantially to morbidity. close follow up for indicated gallbladder removal, stent management and nutritional optimization is critical to reduce the burden of this disease. we compared the surgical method for neonatal choledochal cyst between open excision of choledochal cyst (oec) and laparoscopic excision of choledochal cyst (lec). the perioperative and surgical outcomes reviewed included age, operative time, postoperative hospital stay, time to diet, and surgical complications. the patients were followed up for 42 months (range, 9-146 months). results: there was no difference in the range of bile duct excision or the manner of roux-en-y hepaticojejunostomy between the oec and lec groups. there was no intraoperative complication in either group and no open conversion in the lec group except one case, which was a ruptured choledochal cyst. the median ages of the oec and lec groups were 13 days (range, 2-30) and 12.5 days, and median body weights at the time of operation were 3.50 kg (range, 2.64-4.22) and 3.32 kg (range, 2.73-4.22), respectively.
the median operative time was 163 minutes (range, 126-336) in the oec group and 237.5 minutes in the lec group, and there was no significant difference between the oec and lec groups (p=0.116). intraoperative bleeding was minimal in both groups. postoperative hospital stay, time to start diet, and time to return to full feeding showed no significant differences between the groups. after discharge, 5 of 19 (26%) oec patients experienced readmission due to cholangitis and ileus, while there were none in the lec group. conclusions: this study revealed that lec had a better prognosis compared to oec. lec provided an excellent cosmetic result. we therefore suggest that lec could be the treatment of choice for neonatal choledochal cyst. this is a small series; future studies will have to include a larger number of patients and evaluate long-term follow-up. keywords: choledochal cyst, laparoscopy, neonate. laparoscopic narrow band imaging for intraoperative diagnosis of tumor invasiveness in gallbladder carcinoma: a preliminary study yukio iwashita, hiroki uchida, teijiro hirashita, yuichi endo, kazuhiro tada, kunihiro saga, hiroomi takayama, masayuki ohta, masafumi inomata; oita university faculty of medicine introduction: determining tumor invasiveness before operation is one of the most important unsolved issues in the management of gallbladder cancer. we hypothesized that the assessment of irregular vessels on the gallbladder wall may be useful for detecting subserosal infiltration. we present an initial report on the clinical usefulness of laparoscopic narrow band imaging (nbi) for the intraoperative diagnosis of tumor invasiveness in gallbladder carcinoma. methods: thirteen patients with gallbladder cancer were included in this study. patients with tumors located in the liver bed and those with definitive invasion observed on computed tomography findings were excluded from this study. gallbladders were observed using nbi and the microvasculature was evaluated.
according to previous reports of endoscopic nbi, we defined four findings as positive: vessel dilatation, tortuosity, interruption, and heterogeneity. the nbi findings were compared with postoperative pathological findings. the study protocol was approved by the institutional review board of oita university. results: the serosal surface of the tumor site and its microvasculature were successfully observed in all 13 patients. laparoscopic nbi detected at least one abnormal finding in seven patients, and postoperative pathology showed subserosal infiltration accompanied by vessel invasion. in contrast, six patients with no positive nbi findings showed mild or no subserosal infiltration and no vessel invasion. conclusions: our study indicated that laparoscopic nbi may be useful for diagnosing subserosal infiltration accompanied by vessel invasion. shuichi iwahashi, mitsuo shimada, satoru imura, yuji morine, tetsuya ikemoto, yu saito, hiroki teraoku; department of surgery, tokushima university introduction: laparoscopic cholecystectomy (lap-c) is the standard operation for benign gallbladder diseases. we have previously reported that reduced port lap-c (rpl-c) is a safe method comparable to sils-c and conventional lap-c (sages 2017). here, we examined the utility of rpl-c, including post-operative adverse events. procedures: the indication was benign disease including cholecystolithiasis; cases of advanced obesity and residual inflammation were excluded. an incision was made at the umbilical region and a camera port was inserted. we used a 5 mm flexible scope. 3 mm forceps, for holding the gallbladder fundus and for the operator's left hand, were inserted directly without ports. methods: rpl-c has been used in this department since july 2009. we have performed 224 cases of lap-c, including sils-c and american-style conventional lap-c; rpl-c has already been performed in 156 cases.
we compared patient background and operative factors among rpl-c, sils-c and conventional lap-c. operators were young surgeons who were not specialists in gastroenterological surgery or endoscopic surgery. results: no significant differences were observed in age, gender, physique, or disease, nor in postoperative hospital stay (rpl-c:sils-c:conventional lap-c=5.3±0.2 days:5.5±0.2 days:6.7±1.0 days), blood loss (rpl-c:sils-c:conventional lap-c=4.7±0.9 ml:9.0±1.9 ml:9.6±4.2 ml) or operation time (rpl-c:sils-c:conventional lap-c=129±3 min:118±6 min:136±3 min). the surgical wound after rpl-c was cosmetically acceptable. regarding post-operative adverse events, there were no cases of bile duct injury. conclusion: in patients undergoing reduced port lap-c, there were no bile duct injuries as a postoperative adverse event. reduced port lap-c is safe for young surgeons and comparable to the other methods. introduction: acute cholangitis is an ascending infection of the biliary tree secondary to obstruction and can be severe if proper intervention and treatment are not performed in a timely fashion. the most common management of cholangitis with ductal obstruction due to choledocholithiasis is intravenous hydration, empiric antibiotic therapy, endoscopic retrograde cholangiopancreatogram (ercp) with sphincterotomy and stone extraction with or without stent placement, followed by a delayed laparoscopic cholecystectomy. we present the case of a patient with blood clot obstruction of a common bile duct (cbd) stent after ercp with sphincterotomy and stone extraction. case presentation: a 58 year old male presented to the emergency department with jaundice, right upper quadrant abdominal pain, truncal pruritis, nausea, vomiting, and fever. biochemical analyses and liver profile demonstrated an elevated white blood cell count, hyperbilirubinemia, and elevated liver enzymes consistent with cholestasis.
biliary ultrasound demonstrated multiple gallstones and dilation of the cbd with a distal obstructing calculus. he proceeded to ercp, where biliary cannulation was achieved, sphincterotomy performed, and a large amount of sludge and pus drained. an 8 mm stone was removed from the cbd by balloon sweep, with completion cholangiogram demonstrating no filling defects. a stent was then placed in the cbd with adequate flow. following the procedure, the patient continued to have increasing hyperbilirubinemia. a repeat ercp revealed a large blood clot and continued bleeding at the previous sphincterotomy, which resolved with epinephrine injection. the former stent was visualized in the proper position, removed with a snare, and found to be fully occluded with blood clots. after retrieval of additional clots, a new stent was placed with adequate return of bile. the patient recovered, with resolution of his symptoms and hyperbilirubinemia, and underwent laparoscopic cholecystectomy. discussion: cholangitis is characterized by charcot's triad of right upper quadrant abdominal pain, fever, and jaundice due to an ascending bacterial infection of the biliary tree coinciding with obstruction of biliary flow, most commonly from gallstones. cholangiography via ercp with associated sphincterotomy, stone extraction, and stenting is both diagnostic and therapeutic. while debated by endoscopists, stent placement has been shown to reduce recurrent biliary complications, decrease length of hospital stay, and lessen morbidity. although pancreatitis is the most common cause of hyperbilirubinemia post-ercp, stent occlusion secondary to stones or blood clots should be considered to effectively treat patients. proper hemostasis is important in any procedure, and close patient follow-up should be performed to prevent further complications.
sarrath sutthipong, md, panot yimcharoen, md, poschong suesat, md; bhumibol adulyadej hospital background: choledochal cyst (cc) is a rare disease characterized by dilatations of the extra- and/or intrahepatic bile ducts. ccs occur most frequently in asian and female populations. cc is associated with biliary lithiasis and considered at risk of malignant transformation. todani's classification, dividing cc into 5 types, is the most useful in clinical practice. the current standard treatment is complete cyst excision with roux-en-y hepaticojejunostomy and cholecystectomy for extrahepatic disease (todani types i and iv). in this report we present our experience using a total laparoscopic technique to treat adult patients with cc over a 5-year period. methods: a retrospective review was carried out of the records of patients above 15 years of age who underwent laparoscopic cyst excision and roux-en-y hepaticojejunostomy in our hospital between january 2013 and may 2017. the data included clinical presentation, investigations, perioperative details and complications. the type of cc was classified according to todani's classification. results: seven cases of cc were reviewed, 6 females and 1 male, with a mean age of 33 years (range 20-65 years). these included 5 cases of todani type ib and 2 cases of type 4a. the predominant symptoms were chronic abdominal pain and jaundice. a case with both pancreatitis and cholangitis was also seen. investigations included ultrasound with mrcp in 6 cases and ercp in 1 case. the mean operative time was 4 hours and 20 minutes (range 3 hours 30 minutes to 5 hours), with mean intraoperative blood loss of 85 ml (range 20-200 ml). all the resected specimens showed chronic inflammation. malignancy was not seen in any patient. the early postoperative complications included bile leakage with intra-abdominal collection in 2 patients, which were managed conservatively (evidenced by clinical status and imaging); re-operation was not required.
the median duration of hospital stay was 8 days (range 6-23 days). there was no perioperative mortality. all patients were followed up at 1, 6, and 12 months postoperatively; late complications were not detected at any visit. conclusion: in our opinion, laparoscopic cyst excision and hepaticojejunostomy offer a feasible and safe method of treatment for ccs in adult patients, with potentially less postoperative morbidity, a shortened length of stay and lower blood loss when compared to the open approach. however, we would need to study a larger sample of patients to report on the efficacy and safety of the laparoscopic approach. endoscopic trans-papillary gallbladder drainage (etgbd) in acute cholecystitis: a single center experience arun kritsanasakul, chotirot angkurawaranon, jerasak wannapraset, thawee rattanachu-ek, kannikar laohavichitra; rajavithi hospital background: surgery is the mainstay of treatment for cholecystitis; however, it may not be safe or feasible in some circumstances, such as severe cholecystitis or cholecystitis in extremely high-risk patients. gallbladder drainage may be an appropriate alternative or a bridging option prior to cholecystectomy. endoscopic trans-papillary gallbladder drainage (etgbd) has been proposed as a modality that is feasible and effective in cholecystitis. objective: the primary outcome of this study is to evaluate the effectiveness of etgbd. the secondary outcomes are to evaluate the safety, early experience outcomes, and complications of this procedure. methods: retrospective medical records review between january 2014 and december 2016 from a single tertiary referral hospital center, rajavithi hospital, bangkok, thailand. a total of 6 patients were diagnosed with cholecystitis and underwent etgbd. the procedure was performed in the endoscopic suite under light sedation via total intravenous anesthesia. patient demographic data and procedure details were collected.
the technical success of etgbd was defined as decompression of the gallbladder by successful cystic duct stent placement. clinical success was defined as resolution of symptoms and/or improved laboratory data or ultrasonographic findings. results: a total of 6 patients underwent etgbd. among these patients, 4 were high risk for surgery due to age or comorbidity, 1 had concomitant jaundice and 1 had failed medical treatment. both technical and clinical success of etgbd was achieved in 4 of 6 cases (67%). technical failure occurred in two patients: one due to failure to cannulate the guidewire through the cystic duct, and the other due to a trans-cystic guidewire perforation that required surgical intervention. there were two intra-operative complications (33%): one was the patient who had the trans-cystic guidewire perforation, and the other had an anesthesia-related complication (hypoventilation requiring endotracheal intubation). there was no 30-day mortality. conclusion: endoscopic trans-papillary gallbladder drainage is an alternative treatment modality for patients with cholecystitis who are at high risk for surgery and/or those who are unsuitable for percutaneous gallbladder drainage. the technique is feasible; however, careful case selection and high endoscopic skill are needed. julia f kohn, bs 1 , alexander trenk, md 2 , woody denham, md 2 , john linn, md 2 , stephen haggerty, md 2 , ray joehl, md 2 , michael ujiki, md 2; 1 university of illinois at chicago; northshore university healthsystem, 2 northshore university healthsystem introduction: subtotal cholecystectomy, where the infundibulum of the gallbladder is transected to avoid dissecting within a heavily inflamed triangle of calot, has been suggested as a method to conclude laparoscopic cholecystectomy while avoiding common bile duct injury. however, some case reports have suggested the possibility of recurrent symptoms from the remnant gallbladder.
this retrospective case series reports a minimum of two-year follow-up on patients who underwent subtotal cholecystectomy within one four-hospital system. methods: a retrospective chart review database containing 900 randomly selected cholecystectomies, all of which occurred between 2009 and 2015, was reviewed to identify all instances of subtotal cholecystectomy. charts for these patients were reviewed through 09/2017, including any documentation from other providers, including primary care. results: six patients who underwent subtotal cholecystectomy with a remnant of infundibulum left following surgery were identified. surgical approach and the choice to perform subtotal cholecystectomy were dependent on the attending surgeon; all decisions were made intraoperatively. there was an average of 70 months of follow-up for these patients within our institution. discussion: this case series adds six cases to the literature surrounding long-term outcomes in patients who underwent subtotal cholecystectomy. although one patient was lost to follow-up, no patient had recurrent biliary colic or other complications arising from the remnant gallbladder. this may be encouraging to surgeons who feel that subtotal cholecystectomy with an infundibular remnant is the safest way to proceed with cholecystectomy in patients with severe inflammation. objective: this study aims to evaluate the utility and efficiency of icg as an alternative to routine intraoperative cholangiogram in patients undergoing cholecystectomy. introduction: common bile duct injury is an uncommon but serious complication associated with laparoscopic cholecystectomy. current guidelines state that, when used routinely, intraoperative cholangiogram (ioc) can decrease biliary injury; however, it is not routinely used due to increased operative time and inaccessibility of equipment.
indocyanine green (icg) has been found to be effective for identification of biliary anatomy during cholecystectomy; however, it has not yet been widely adopted. we aim to assess whether icg is able to overcome the obstacles of ioc while still effectively assessing biliary anatomy. methods: we performed a retrospective analysis of laparoscopic cholecystectomies performed in a single institution from january 2014 to september 2017. elective and emergent cases were included. we stratified patients into icg and non-icg groups. patients who had concomitant procedures performed were excluded. we analyzed patient demographic information, as well as bmi, asa classification and comorbidities in both groups. our primary outcomes were operation time (skin to skin) and laparotomy conversion rate. secondary outcomes were the effectiveness of icg in visualizing biliary anatomy, and cost. results: 145 patients were included in our study, 59 in the non-icg arm and 86 in the icg arm. both groups were similar in background. there were no statistical differences in patient demographics, asa classification, bmi, or comorbidities. there was no statistical difference in operation time (58.0 vs 54.5 minutes; p=0.202) or conversion rate (1.6 vs 0%; p=0.226). icg was able to delineate biliary anatomy in 100% of the patients. the cost of a 25 mg/vial kit of icg is approximately $70. conclusion: the use of icg does not increase operating time during laparoscopic cholecystectomy. icg is an inexpensive and effective tool for delineating biliary anatomy without the inherent burden and limitations of ioc. benefsha mohammad, md 1 , michele richard, md 1 , steve brandwein, md 2 , keith zuccala, md 3; 1 danbury hospital, 2 danbury hospital department of gastroenterology, 3 introduction: obesity is a prevalent issue in today's society, which has increased the number of gastric weight loss surgeries.
this presents an anatomical challenge for biliary disease requiring endoscopic retrograde cholangiopancreatography (ercp). in gastric bypass patients, traditional ercp via the mouth is technically more challenging, requiring a longer endoscope, with a reported success rate of less than 70%. a solution is laparoscopic assisted ercp (la-ercp) via gastrostomy. this minimally invasive technique has become increasingly prevalent and safe. we present our experience with la-ercp at our teaching community hospital in a large cohort of patients. methods and procedures: retrospective chart review was performed on all patients with a history of prior laparoscopic gastric bypass surgery who underwent la-ercp from april 2008 to april 2016. the procedure was performed by two different general surgeons and one gastroenterologist. a pursestring suture and transfascial stay sutures were used to bring the gastric remnant to the abdominal wall. a gastrostomy was then created and accessed by the duodenoscope to perform the ercp. biliary sphincterotomy, papillary or biliary dilation, lithotripsy, stent placement, and/or stone removal were performed as indicated. we observed the incidence of postoperative outcomes, including acute pancreatitis, reoperation, post-procedure infection, pain control, hospital re-admission and bile leak. results: thirty-two patients met inclusion criteria. six patients were male and twenty-six were female, with mean ages of 59 (std dev 7) and 53 years (std dev 15), respectively. indications for la-ercp included suspected choledocholithiasis (25/32), cholangitis with choledocholithiasis (2/32), acute pancreatitis (2/32), abdominal pain with abnormal lfts (1/32), cholangitis with cholecystitis (1/32), and bile leak (1/32). la-ercp was successfully performed in all thirty-two patients.
biliary cannulation, sphincterotomy and stone extraction were performed in 31/32 patients, and one patient underwent sphincterotomy and stent placement for a bile leak after recent laparoscopic cholecystectomy. one patient developed acute pancreatitis with elevated pancreatic enzymes, which resolved after conservative treatment. one patient required a second la-ercp for stent replacement due to a persistent bile leak. the median length of stay was 2 days (range 1-10 days). conclusions: la-ercp is a safe and feasible alternative to open surgery and can be safely implemented at community hospitals with adequately trained providers. obesity is a growing burden on society, increasing the incidence of weight loss surgery. our large series suggests that, in this minimally invasive era, la-ercp provides gastric bypass patients a safe alternative with less pain and increased satisfaction. ahmed elgeidie, elsayed adel; gastrointestinal surgery center background: endoscopic sphincterotomy (es) is an effective therapeutic procedure for common bile duct (cbd) stone clearance, but it carries a substantial risk of recurrent stones at long-term follow-up. aim of the study: to evaluate the rate of cbd stone recurrence after primary complete endoscopic clearance, and to identify risk factors for recurrence. methods: between january 2002 and december 2016, 2255 patients with cbd stones who underwent successful es and complete stone clearance were studied retrospectively. recurrent cbd stone was defined as confirmation of the presence of a cbd stone at least 6 months after previous complete cbd stone clearance by es. the risk factors for recurrent cbd stones and the mean time interval between initial es and stone recurrence were analyzed. results: in total, 2255 patients were included. the median follow-up period was 89 months. recurrent cbd stones appeared in 159/2255 (7.05%) patients after a median time interval of 22 (6-216) months following es.
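a recurrence rate such as 159/2255 (7.05%) is usually reported with a confidence interval. a minimal pure-python sketch of the 95% wilson score interval (the choice of interval is ours for illustration; the abstract does not state one):

```python
import math

def wilson_ci(events, n, z=1.96):
    """95% wilson score confidence interval for a proportion."""
    p = events / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# recurrence events / total patients from the abstract above
lo, hi = wilson_ci(159, 2255)
```

for 159/2255 this gives roughly 6.1% to 8.2%, i.e. the observed 7.05% rate is estimated with fairly narrow uncertainty given the large cohort.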
stone recurrences were observed on multiple occasions in 20 patients (0.88%). on univariate analysis, the significant risk factors related to recurrent cbd stones were male sex (p=0.001), previous history of cholecystectomy (p=0.001), multiple cbd stones (p=0.001), large cbd stone (p=0.001), the presence of a periampullary diverticulum (p=0.001) and stone crushing using mechanical lithotripsy (p=0.001). conclusion: recurrence of cbd stones is an identified long-term risk after es and stone clearance. background: laparoscopic cholecystectomy during advanced pregnancy is challenging due to the limited intraabdominal space. patients may be at increased risk for developing trocar site hernia. case report: a 35 year old hispanic female in her 22nd week of pregnancy came to the er with acute right upper quadrant pain. due to lack of access to care she had poor prenatal care. she had mildly elevated amylase but normal lfts, and ultrasound showed some gallbladder wall thickening suggestive of acute cholecystitis and no dilated biliary duct. fetal ultrasound was normal. she was admitted to the hospital and started on antibiotics, and obstetrics was consulted. her amylase peaked at over 600 u/l but then normalized, and the indication for laparoscopic cholecystectomy was made. mrcp and ercp were not performed, as it was assumed that the patient had passed a stone. five mm trocars were placed in the luq and the umbilicus, and a teleflex minigrasper between the two. the uterus was found at the umbilical level. the gb was pulled out, the serosa was incised on both sides, and a window was created behind the gb midportion and widened towards the infundibulum and fundus. there was gb wall thickening and edema. the critical view was obtained and the cystic artery and duct were clipped and divided. the common bile duct appeared normal and no ioc was done. the specimen was retrieved through the luq port site using a 5 mm endobag after dilatation to 1.5 cm due to the presence of two large stones.
the port site fascia was closed using a suture passer. the postoperative course was uneventful and both mother and baby were well at the two-week follow-up. discussion: in case of biliary pancreatitis during pregnancy, lc should be performed; if ultrasound shows a normal biliary system and amylase/lipase normalize, mrcp/ercp and ioc may be avoidable to protect the baby. lc with two ports is feasible during pregnancy. removal of the specimen through a lateral abdominal wall site may help prevent an umbilical port site hernia in this patient population. s160 surg endosc (2018) 32:s130-s359 introduction: splenic abscess is a rare, potentially lethal condition, with autopsy studies showing incidence rates between 0.14% and 0.7%. mortality rates range from 47 to 100%, making early diagnosis and prompt intervention vital. several case reports have documented post-surgical splenic abscess, most notably after laparoscopic sleeve gastrectomy. to the best of our knowledge, there have been no reported cases of splenic abscess arising after laparoscopic cholecystectomy. it is important to remember this disease process for expeditious targeted treatment in future cases. case presentation: a 69 year-old female with past medical history significant for cholelithiasis, hypertension, and hyperlipidemia presented to the emergency department (ed) with a chief complaint of abdominal pain for two days. labs and imaging were obtained, which confirmed the diagnosis of choledocholithiasis and pancreatitis. ercp was performed, which showed a 1.5 cm stone causing obstruction, with several other smaller filling defects. the stones were removed after sphincterotomy. post-procedurally, the patient underwent an uncomplicated laparoscopic cholecystectomy on hospital day (hd) #5. postoperatively, the patient had persistent leukocytosis peaking at 16.8 thousand on postoperative day (pod) #6.
a ct scan was performed which showed a rim-enhancing splenic collection measuring 6.6 × 2.2 cm suggestive of an abscess. interventional radiology was consulted and aspirated 50 ml of purulent fluid. cultures grew klebsiella pneumoniae and enterobacter cloacae complex, and the patient was discharged home on zosyn. discussion: laparoscopic cholecystectomy has become the cornerstone of treatment for symptomatic biliary colic and acute cholecystitis. of the many recognized complications of laparoscopic cholecystectomy, splenic abscess has not yet been reported in the current literature. the nonspecific signs and symptoms of splenic abscess make clinical diagnosis difficult. the classic triad of fever, palpable spleen and left upper quadrant pain is only seen in about two-thirds of patients. ct scan has been shown to be the most sensitive imaging modality for diagnosis of splenic abscess. current treatment options for splenic abscess fall into two subsets: percutaneous and surgical intervention. percutaneous treatment includes image-guided aspiration with or without placement of a drainage catheter. surgical intervention can be either laparoscopic or open and includes drainage of the abscess with splenectomy or splenic conservation. the best treatment option remains unclear, and prospective data demonstrating which modality is superior are lacking. introduction: laparoscopic subtotal cholecystectomy is widely accepted as a safe alternative to conventional laparoscopic cholecystectomy in cases of acute cholecystitis with a frozen calot's triangle. the remnant stump of the gallbladder may be either sutured or looped. however, there are limited studies comparing the outcomes of the two techniques. the present study aimed to compare loop and suture closure of the gallbladder stump. methods: a retrospective analysis of our prospectively maintained database revealed that between january 2013 and december 2016,
81 patients underwent laparoscopic subtotal cholecystectomy for acute cholecystitis, chronic cholecystitis or empyema of the gallbladder with a frozen calot's triangle. the decision to use an endoloop or sutures for stump closure was made intra-operatively after dividing the gallbladder through the infundibulum. a no. 20 drain was placed in all cases. the patients were discharged with the drain in situ and were reviewed on post-operative day 7, when an ultrasound was done and the drain removed if progress was satisfactory. the intra-operative and post-operative data of the two groups were recorded and analyzed. results: endoloop closure was performed in 45 patients and suture closure using 2.0 ethibond was done in 36 patients. three patients from the sutured group had post-operative bile leak, among whom one patient underwent endobiliary stenting. the other 2 were managed conservatively while the drain was retained for 2 weeks. two patients in the endoloop group were found to have a retained stone in the remnant gallbladder cuff, one of whom had recurrent cholecystitis requiring laparoscopic completion cholecystectomy. none of the patients had bile duct injury or surgical site infection. mean post-operative stay was 2.5 ± 1.2 days and did not significantly vary between the groups. suturing needed more surgical expertise and had a longer operative time than endoloop closure (endoloop 68 ± 22 min versus suture 84 ± 18 min, p=0.04). conclusion: suture and loop closure of the remnant gallbladder after subtotal cholecystectomy are equally effective. suturing the stump may be associated with an increased incidence of biliary leak, while endoloop closure may have a higher incidence of retained gallstones. the choice between the two may be made intra-operatively based on the surgeon's expertise and preference. background and aim: in recent years, due to the spread of laparoscopic cholecystectomy, bile duct injury as its complication has been reported at a certain frequency.
current surgical treatments include 1) suturing and closing the injured part laparoscopically during surgery, 2) converting to laparotomy and closing with sutures, 3) inserting a tube such as a t-tube at laparotomy, and 4) bile duct-intestinal anastomosis at laparotomy. none of these treatment methods is a definitive, ideal treatment. we have developed a bioabsorbable material (a caprolactone:lactic acid (50:50) polymer reinforced with polyglycolic acid fiber and designed to be absorbed in about 8 weeks). at this conference, we would like to present the current state and problems of developing minimally invasive therapy for the injured biliary area using the bioabsorbable material we developed. method: to overcome the problems of current bile duct injury repair methods, we have developed a) a method of closing the perforation endoscopically from the luminal side of the bile duct (a covered stent using a bioabsorbable material in the damaged part), and b) a method of closing the bile duct injury laparoscopically from the outside of the bile duct (adhering the bioabsorbable sheet to the bile duct perforation using a biocompatible adhesive). results: in open surgery, suturing the bioabsorbable material into the bile duct regenerated the bile duct without stenosis at the damaged area. however, various adhesives were tried to bond the sheet of this bioabsorbable material to the native bile duct under the endoscope, but at the moment there is no glue that allows the sheet to be adhered readily and reliably where a certain amount of moisture is present. a tool for delivering the sheet from the bile duct into the injured part is under development, with good results at present. conclusion: it is possible to regenerate the bile duct without constriction using a bioabsorbable material.
it is difficult to laparoscopically adhere the sheet to the injured part of the bile duct, but we hope that further adhesives will be developed in the near future to make this possible. patients were divided into four bmi groups (a-d), the upper two being 30-35 kg/m2 (c) and more than 35 kg/m2 (d). we made a 2.5-cm longitudinal skin incision within the umbilicus. a wound retractor and a surgical glove were applied at that incision. we used the three 5-mm ports technique. after retracting the gallbladder upward, the cystic duct and artery were dissected and identified using pre-bending forceps through the flexible port and laparoscopic coagulating shears (lcs). the cystic artery was divided using the lcs and the cystic duct was also divided after clipping. the gallbladder was freed from the liver bed using the lcs, and the specimen was retrieved through the umbilical wound. results: there were conversions to open laparotomy in 4 cases (1.3%) and a requirement for additional ports in 23 (7.7%). the mean age (years), operation time (min), blood loss (ml) and postoperative hospital stay (days) in groups a, b, c and d were 60.0, 55.5, 51.2 and 41.2 (p<0.05); 89.5, 101.7, 98.4 and 85.3 (p=0.206); 19.7, 18.5, 15.6 and 3.4 (p=0.935); and 3.5, 3.6, 3.2 and 3.0 (p=0.882), respectively. there was a significant difference in age only. the complications were bile duct injury in one case (0.3%) and pneumothorax in two (0.6%). conclusion: obesity had no influence on surgical outcomes of silc. introduction: recent studies have reported mixed outcomes when comparing surgeon case volume and laparoscopic cholecystectomy (lc) outcomes. formal minimally invasive surgical training (mist) has been shown to be associated with shorter post-operative length of stay (los), but no difference in major adverse events such as bile leak, bile duct injury, intra-abdominal abscess formation, and death.
we aim to determine 30-day rates of major adverse events after lc in a university hospital setting, to identify significant associated risk factors, and to determine if mist or surgeon volume are associated with differences in los and major adverse events. methods: we conducted a single-center retrospective review of 2,764 cholecystectomies performed over a seven-year period (2009-2016). characteristics and outcomes were compared using chi-squared or rank-sum tests. multivariable regression modeling was used to determine independent associations with the two main outcomes, major adverse events and los. results: we identified 2,764 adults who underwent lc during the study period, with a median age of 50, and 70% women. about 19% (n=531) of patients had a los >1 day and 4.3% (n=120) were re-admitted within the first 30 days after surgery for any reason. within 30 days of lc, 2.2% (n=60) of patients suffered one or more major adverse events. this includes 0.18% (n=5) of patients with bile duct injury, 1.3% (n=35) with bile leak, 0.3% (n=7) with intra-abdominal abscess, and 0.3% (n=9) who died for reasons related to their procedure or post-operative recovery. table 1 shows the characteristics of the patients and procedures with a comparison of the patients with an adverse event versus those without one. in univariate analysis, high annual surgical volume (40+ cases/year) and procedure urgency were found to be significant predictors of adverse events and los; however, mist was not. in multivariable analysis, controlling for significant univariate predictors, urgent or emergent cases were associated with a 3-fold increase in the odds of an adverse event. introduction: laparoscopic cholecystectomy is an extremely common procedure in the united states, with over 700,000 cases performed annually.
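the multivariable model above reports procedure urgency as roughly a 3-fold increase in the odds of an adverse event. as a minimal sketch of the underlying arithmetic only, the snippet below computes an unadjusted odds ratio with a wald 95% confidence interval from a 2x2 table; the counts used are hypothetical illustrations, not taken from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with event, b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    or_ = (a * d) / (b * c)
    # standard error of log(OR) by the Woolf method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 30/1000 urgent cases with an adverse event
# versus 18/1764 elective cases (chosen to give OR = 3.0)
or_, lo, hi = odds_ratio_ci(30, 970, 18, 1746)
```

a multivariable model would adjust this estimate for the other significant univariate predictors; the 2x2 computation shown here is the unadjusted starting point.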
despite the procedure's overall safety, there has been some evidence that tobacco use is associated with increased risk of wound infection after lc. this retrospective chart review sought to examine whether tobacco use is associated with increased complications following laparoscopic cholecystectomy within a high-volume healthcare system. methods: after irb approval, 900 of approximately 3,000 cholecystectomies performed within one four-hospital system between 2009 and 2015 were randomly selected, and patient charts were retrospectively reviewed. pre-, intra-, and postoperative data were collected, including all complications within 90 days. tobacco use cohorts were defined as follows: never, former (any historical tobacco use), and current (active tobacco use within 1 year of surgery) per the acs nsqip surgical risk guidelines. following preliminary data analysis, multivariable logistic regression models were generated to identify whether tobacco use was predictive of outcomes of interest. results: of the 900 cases analyzed, 535 patients (59.4%) were never smokers; 31.3% were former smokers, and 9.2% were current tobacco users or had quit less than 12 months prior to surgery. there were 17 surgical site infections, one wound dehiscence, one port site hernia, three common bile duct injuries, and 44 medical complications requiring prolonged hospitalization or readmission within 90 days. current tobacco users were significantly more likely to undergo urgent surgery (following emergency admission or direct admission to the hospital) than former or never smokers. however, there was no difference between cohorts in prolonged duration of surgery, conversion to an open procedure, surgical site infection, wound dehiscence or hernia, common bile duct injury, or other medical complications. there was no significant difference between cohorts when all postoperative complications were pooled.
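the never/former/current cohort rule described above (with "current" including anyone who quit within 12 months of surgery) can be sketched as a small classification function. the function name, field names, and example dates below are illustrative assumptions, not taken from the study's data dictionary.

```python
from datetime import date

def tobacco_cohort(ever_used: bool, quit_date, surgery_date: date) -> str:
    """Classify smoking status per the cohort definition in the abstract:
    'never'   = no historical tobacco use,
    'current' = active use, or quit < 1 year before surgery,
    'former'  = any historical use, quit >= 1 year before surgery.
    quit_date=None with ever_used=True means still smoking."""
    if not ever_used:
        return "never"
    if quit_date is None:
        return "current"
    days_quit = (surgery_date - quit_date).days
    return "current" if days_quit < 365 else "former"

# hypothetical surgery date for illustration
surgery = date(2015, 6, 1)
```

for example, a patient who quit in january 2015 before a june 2015 operation would still be classified as "current" under this rule, matching how the abstract pools recent quitters with active users.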
conclusions: there does not appear to be a significant difference in 90-day surgical outcomes or complications in active tobacco users vs. former or never users. although studies in other surgical settings have indicated a possible reduction in complications if patients abstained from smoking prior to surgery, this may not be the case in laparoscopic cholecystectomy. moreover, as current tobacco use appears to be associated with higher rates of urgent surgery, these patients may not be able to stop smoking prior to an elective procedure. prospective studies to further clarify whether there is any benefit to tobacco cessation prior to lc may be valuable. tumor marker levels were 9, >150 and >150, respectively (normal range 0-20), and cyfra 21-1 levels were 8.11, 9.22 and 6.36, respectively (normal range 0-3.5). afp and cea were negative. this patient is at high risk of hepatobiliary system diseases. introduction: thymoma is a rare tumor entity, benign or malignant, arising from the epithelial cells of the thymus gland, and frequently associated with the neuromuscular disorder myasthenia gravis. we present this rare case of thymoma with myasthenia gravis in our institute. methods: we operated on a single patient with thymoma in a case of myasthenia gravis by a video-assisted thoracoscopic approach. results: operative time was 78 min, intraoperative blood loss was 20 ml, and post-operative analgesia requirement in the form of nsaids was for 2 days; no ventilatory support was required post-operatively, with follow-up reduction in achr ab from 99 nmol/l to 15 nmol/l and reduction in symptoms in the form of reduced ptosis. conclusion: thoracoscopic thymectomy is feasible and safe in terms of operative time, post-operative pain and analgesia requirement, and no requirement for post-operative ventilatory support. carter c lebares, md, stanley j rogers, md; ucsf background: duodenal fistulas are uncommon but morbid complications of acute necrotizing pancreatitis.
if percutaneous drainage fails, surgical correction via roux-en-y diversion or pancreaticoduodenectomy can be required. while self-expanding metal stents have been tried, complications like migration and perforation have limited such use. endoscopic transmural stents have successfully treated fistulas of the stomach, particularly post-sleeve gastrectomy. here we present a case of endoscopic transmural stents used to treat a non-resolving duodenal fistula following acute necrotizing pancreatitis. methods: under general anesthesia, using a standard adult gastroscope, the fistula was identified in the second portion of the duodenum (fig. 1). a flexible-tipped guide wire was used to identify the fistula tract and two 7 fr 5 cm double pigtail biliary stents were deployed (fig. 2) with positioning verified under fluoroscopy. two weeks later these were removed and a single stent deployed into the visibly smaller tract (fig. 3). two weeks after that, the single stent was removed and contrast medium was injected under fluoroscopic visualization, demonstrating resolution of the fistula (fig. 4). case: this patient is a 72 year old woman with hypertension and congenital hearing loss who underwent a cholecystectomy for biliary colic and subsequent ercp with sphincterotomy for a retained stone. this was complicated by acute pancreatitis which progressed to severe necrotizing pancreatitis with infected retroperitoneal necrosis. percutaneous drainage yielded initial improvement, but a persistent moderate collection (300 cc per day) led to the identification of a fistula in the second part of the duodenum. repositioning and exchange of percutaneous drains over 8 weeks did not hasten resolution. endoscopic transmural pigtail stents were tried after visualization of a large (8-10 mm diameter) fistula tract.
stents were utilized as described in methods, with a total of three endoscopic interventions at 2-week intervals, resulting in resolution of the fistula as evidenced by contrast injection into the duodenum under fluoroscopy and a subsequent ct scan with oral contrast. the patient's symptoms resolved and she was tolerating a normal diet. she remained thus at 1-month follow-up. conclusion: this case demonstrates the benefit of endoscopic transmural stents for the resolution of duodenal fistulas, expanding the utility of this technique to address leaks and fistulas of the upper gastrointestinal tract. further study is warranted to clarify the timing and adjuncts that optimize the use of this promising approach. totally laparoscopic alpps combined with microwave ablation for a patient with a huge hcc hua zhang; department of hepatopancreatobiliary surgery, west china hospital, sichuan university introduction: associating liver partition and portal vein ligation for staged hepatectomy (alpps) is a novel technique for resecting hepatic tumors previously considered unresectable due to an insufficient future liver remnant (flr), which may result in postoperative liver failure (plf). the procedure has been accepted and modified in many medical centers worldwide, but reports of laparoscopic alpps are rare. this study aimed to report a totally laparoscopic alpps combined with microwave ablation for a patient with a huge hcc and to confirm the feasibility of laparoscopic alpps. methods: a 51-year-old man complained of a 1-year history of right upper abdominal pain, with symptoms worsening in the last month. abdominal enhanced computed tomography (ct) imaging revealed a 15 × 11 cm solid mass in the right lobe of the liver with a non-uniform and unclear boundary; the right posterior branch of the portal vein was invaded. in addition, a small lesion was simultaneously found in the left lateral lobe of the liver. the tumor was evaluated as unresectable because the flr was only 355 ml (25%).
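the flr percentage above is the remnant volume divided by the total functional liver volume. a minimal sketch of that arithmetic, assuming the total volume implied by 355 ml = 25% (about 1420 ml, derived here rather than reported in the abstract):

```python
def flr_percent(flr_ml: float, total_liver_ml: float) -> float:
    """Future liver remnant as a percentage of total functional liver volume."""
    return 100.0 * flr_ml / total_liver_ml

# the stated 355 ml = 25% implies a total functional liver volume
# near 1420 ml (a derived figure, not given in the abstract)
total = 355 / 0.25
stage1_flr = flr_percent(355, total)  # below the resection threshold
```

the point of the staged alpps procedure is to raise this percentage by inducing hypertrophy of the remnant between the two stages, so the same formula is re-applied to the pre-second-stage ct volumetry.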
we decided to perform the laparoscopic alpps procedure. the first stage included microwave ablation of the lesion in the left lobe, cholecystectomy, ligation of the portal vein and transection of the liver parenchyma. the second stage was done 11 days later and consisted of laparoscopic right hemihepatectomy. results: both stages were completed laparoscopically. the operation durations were 300 and 200 minutes, respectively, with estimated blood loss of 550 and 250 ml. the stays in the intensive care unit were 1 and 3 days. there was no need for transfusion in either stage. the patient was discharged 22 days after the second stage and the total hospitalization time was 38 days. recovery was uneventful apart from an incisional infection after the second stage, which resolved with conservative management. the patient did not show any signs of liver failure. the ct scan before the second stage showed enlargement of the left lobe; the flr was 533 ml (37.5%). there were no signs of residual liver disease on the ct scan 10 days after the operation. the patient showed no signs of recurrence or liver failure during six months of follow-up. conclusion: totally laparoscopic alpps combined with microwave ablation is safe and feasible for multiple hcc that would otherwise be unresectable. hypertrophy of the remaining liver was fast and achieved an adequate volume in a short time. introduction: chronic pancreatitis is a benign, irreversible inflammatory disorder characterized by the conversion of the pancreatic parenchyma into fibrous tissue. initial management should be conservative; surgery is applied in case of failure of medical treatment. the development of minimally invasive techniques has made it possible to perform these highly technical procedures laparoscopically.
materials and method: we present 2 patients, aged 19 and 42 years, with chronic pancreatitis and pancreatic lithiasis with intractable pain refractory to medical management, in whom surgical treatment was decided. the procedure was performed under general anesthesia, and an epidural analgesia catheter was placed. pneumoperitoneum was established by the cali technique at 14 mmhg, with access using a 12 mm umbilical port, two 12 mm working ports and one 5 mm port. the pancreas was exposed by sectioning the gastrocolic ligament with a 5 mm ultrasonic scalpel, with cephalic retraction of the stomach, opening the lesser sac and approaching through the transcavity of the omentum. the ventral surface of the pancreas was exposed from the neck. an incision was made in the pancreatic body with a monopolar hook. the main pancreatic duct lumen was identified and the incision was extended longitudinally from the neck to the tail of the pancreas (8 cm). a roux-en-y loop was prepared 50 cm from the ligament of treitz, with the jejunum divided with a 60 mm stapler; the roux loop was passed retrocolic through the transverse mesocolon, closing the mesocolic gap with monocryl. a jejuno-jejunal anastomosis was performed at 60 cm with an endo-gia stapler, with closure of the enterotomy by 2-0 polypropylene intracorporeal suture. the isoperistaltic jejunal (roux) loop was placed longitudinally at the opening of the main pancreatic duct, and an enterotomy was performed with monopolar cautery in the antimesenteric segment. the intracorporeal pancreatico-jejunal anastomosis was performed in a lower and an upper plane, with interrupted full-thickness sutures of 2-0 ethibond. one closed drain was placed toward each anastomosis. this procedure was performed in the 2 patients reported.
results: operative time was 180-300 min with no complications; hospital stay was 4-7 days with minimal bleeding; the drains were removed in both cases at 7 days; and at 1-year follow-up both patients had improved pain. conclusions: minimally invasive surgery is a fundamental tool for the approach and management of patients with biliopancreatic pathologies. the establishment of multidisciplinary groups offers an excellent alternative for the integral management of these patients. gallbladder anatomy is highly variable, and surgeons must be prepared to identify anomalies of form, number, and position. variants include gallbladder agenesis, diverticulum, duplication, bilobed, multiseptate, phrygian cap, ectopic, and hourglass gallbladder. the hourglass gallbladder has been described from the earliest days of cholecystectomy: morton described a congenital case in 1908, and else thoroughly described the acquired and congenital strictures leading to the hourglass deformity in 1914. we describe a case of an hourglass gallbladder found during one-step endoscopic retrograde cholangiopancreatography (ercp) and laparoscopic cholecystectomy. this 71 year old male presented to an outside hospital with one day of nausea and constant, severe, epigastric pain that radiated to his back. he endorsed a history of similar pain several times in the past. his abdomen was soft, nontender, and without murphy sign. laboratory evaluation revealed total bilirubin 2.0 mg/dl, alkaline phosphatase 195 u/l, ast 835 u/l, alt 800 u/l, and no leukocytosis. ct abdomen and pelvis revealed cholelithiasis, distal choledocholithiasis, intra- and extra-hepatic ductal dilation, and a 3.8 centimeter left liver hemangioma. he was transferred for management of choledocholithiasis, and an abdominal ultrasound revealed cholelithiasis, without gallbladder wall thickening or pericholecystic fluid, and a 7.7 millimeter common bile duct without choledocholithiasis.
he was taken to the operating room for a one-step ercp and laparoscopic cholecystectomy. upon laparoscopy, dense adhesions to the gallbladder were found. after initially attempting to obtain the critical view of safety, we embarked on the retrograde "top down" dissection. this isolated a spherical structure measuring 2.4 × 2.2 centimeters. two very thin tubular structures were identified, clipped, and transected after we found they were too small to accept a cholangiocatheter. the common bile duct appeared to be pulled anteriorly by surrounding inflammation, though this was later found to be the proximal segment of the gallbladder. the intra-operative ercp identified a remnant gallbladder with cholelithiasis and no extravasation of contrast. given the unusual anatomy, we completed the operation, ordered a post-operative ct liver and mrcp, and consulted a hepatopancreatobiliary surgeon. a small remnant gallbladder was identified on ct liver, though not on mrcp. completion laparoscopic cholecystectomy with intraoperative cholangiogram and ultrasound was performed on hospital day 4. this hourglass gallbladder variant likely occurred secondary to chronic fibrosis from cholecystitis, leading to separate proximal and distal gallbladder lumens. in the setting of anatomic uncertainty, the "top down" dissection, intraoperative cholangiography, ct liver, and expert consultation are safe methods to avoid iatrogenic injury. introduction: endoscopic entero-enteral bypass could change our approach to small bowel obstruction in patients with prohibitively high operative risk. magnetic compression anastomoses have been well-vetted in animal studies but remain infrequent in humans. isolated cases of successful use in humans include treatment of biliary strictures and esophageal atresia. while endoscopic gastro-enteric magnetic anastomoses have been described, the associated multicenter cohort study was terminated due to serious adverse events.
since then, the technology has evolved, and recently our own institution reported results of the first in-human trial of magnetic compression anastomosis (magnamosis), deployed through an open approach. here we present the first case of endoscopic delivery of the magnamosis device and the successful creation of an entero-enteral anastomosis for chronic small bowel obstruction in a patient with prohibitively high operative risk. methods: the magnamosis device has previously been approved by the food and drug administration (fda) for use in a clinical trial. our institutional review board approved emergency compassionate endoscopic use of the device in this patient due to a non-resolving small bowel obstruction and prohibitively high operative risk. case: this is a 59 year old man with advanced liver disease, chronic obstructive pulmonary disease, and a history of emergent right colectomy with end ileostomy for cecal perforation. he presented with multiple acute-on-chronic episodes of small bowel obstruction with a stable transition point in the distal ileum, radiographically estimated at 15 centimeters proximal to the ileostomy. endoscopic evaluation through the ileostomy revealed a traversable obstruction with proximally dilated small bowel. the magnets were delivered via endoscopic snare under fluoroscopic guidance and positioned in adjacent loops of bowel on either side of the obstruction (image 1). by 7 days post-procedure, healthy villi were visible through the central portion of the mated magnetic rings (image 2). by 10 days the magnetic rings were mobile and the anastomosis was widely patent, allowing easy passage of the gastroscope (image 3), and the patient's symptoms were completely resolved. the rings passed through the ileostomy 11 days post-procedure. at 1 month follow up, the anastomosis was unchanged (image 4).
conclusion: this case demonstrates the benefit of an endoscopically created magnetic compression anastomosis in a patient with small bowel obstruction and high operative risk. further studies are indicated to evaluate the use of this technique in similar patients or those with malignant obstruction. desiree raygor, md, ruchir puri, md; university of florida health jacksonville cholecystectomy is one of the commonest operations in general surgery [1]. occasionally chronic cholecystitis can lead to a small contracted gallbladder. this diagnosis can be misleading as it may represent congenital agenesis of the gallbladder [2]. a 28-year-old female with a past history of pancreatitis presented with a three day history of right upper quadrant pain associated with nausea and vomiting. upon exam she exhibited tenderness in the right upper quadrant. her leukocyte count and liver function tests were within normal limits. ultrasound revealed a poorly visualized, contracted gallbladder without stones and a dilated common bile duct (cbd). cholescintigraphy revealed non-visualization of the gallbladder after two hours, which was suggestive of acute cholecystitis. the decision was made to proceed with a laparoscopic cholecystectomy. the abdomen was entered by an open hasson technique and standard trocar placement for a cholecystectomy was performed. on initial inspection, the gallbladder was not readily visible. a structure appearing to be the cbd was present and was mobilized circumferentially (fig. 1). a 19 gauge butterfly cannula was utilized and multiple cholangiographic images were obtained (fig. 2). no cystic duct or gallbladder was identified, which was suggestive of congenital agenesis of the gallbladder. the patient did well postoperatively and was discharged home on postoperative day two. the patient's symptoms resolved and she continues to be pain free one month postoperatively. congenital agenesis of the gallbladder is a rare disorder.
a high index of suspicion is required, especially in the setting of a small contracted gallbladder. if preoperative imaging is inconclusive, then diagnostic laparoscopy should be the next step. cholangiogram should be performed routinely to confirm the diagnosis and to rule out an ectopic gallbladder. conversion to open does not offer any distinct advantage, and laparotomy should be avoided if possible given its associated morbidity. there are many reports of upper abdominal major arterial aneurysms; however, an aneurysm of the left inferior phrenic artery had never been reported. a 48-year-old woman with liver cirrhosis associated with hepatitis b viral infection was referred to the department of surgery for treatment of an aneurysm of the left inferior phrenic artery. she had previously undergone trans-arterial chemoembolization (tace) for treatment of hepatocellular carcinoma three times. 20 months after the last tace, a 7 mm highly enhancing nodular lesion of the gastric fundus was found on follow-up abdomen-pelvis computed tomography (a-p ct). one year later, the size of this lesion had increased to 18 mm, and an aneurysm was diagnosed. she underwent angiography with attempted embolization of the left inferior phrenic artery aneurysm, but access failed. we performed a laparoscopic vessel ligation. she recovered with no complication and was discharged on the 3rd postoperative day. yousef almuhanna, vatsal trivedi, fady balaa; university of ottawa a 34 year old female, g7 and 10 weeks pregnant, was brought to the hospital by ems after being found on the floor of her toilet surrounded by vomitus and urine. her mother-in-law, who happened to be at the house at the time, heard severe retching followed by a loud bang. firefighters found no pulse and therefore started cpr. return of spontaneous circulation was achieved, yet unfortunately she arrested again 5 minutes prior to arrival in the er.
pocus assessment showed a large rvot, and therefore tpa was started on the assumption of pulmonary embolism. when blood work returned, it was found that her hemoglobin had dropped from 110 to 54. the fast was repeated, showing a moderate to severe amount of free fluid in morrison's pouch and the pelvis. she was then taken to the operating theatre and underwent laparotomy, which showed a liver segment ii injury. pringle's maneuver and aortic clamping did not control the bleeding; therefore finger fracture and venous clips were used to temporarily minimize the bleeding, and she was taken to the interventional radiology suite. after multiple attempts to control the bleeding and massive transfusion, her vital signs could not be maintained, and she subsequently arrested. sarrath sutthipong, md, chumpunut chuthanan, md, chinnavat sutthivana, md, petch kasetsuwan, md; bhumibol adulyadej hospital, bangkok, thailand background: mesenteric panniculitis (mp) is a rare, benign and chronic fibrosing inflammatory disease that affects the adipose tissue of the mesentery of the small bowel and colon. the specific etiology is unknown and there is no clear information about the incidence. the diagnosis is suggested by ct and is usually confirmed by surgical biopsy. treatment is based on selected drugs. surgical resection is sometimes attempted for definitive therapy, although the surgical approach is often limited. we report a case of mp diagnosed with ct and surgical biopsy by a laparoscopic approach. case report: a 50-year-old woman presented with a 5-month history of chronic abdominal pain, mainly localized in the sub-epigastrium, intermittent and mild. she had anorexia but no weight loss or change in bowel habits. she had no history of medical illness or surgery. the physical examination was unremarkable except for palpation of an ill-defined mass of about 5 cm in the mid-abdomen, firm, with a smooth surface and mild tenderness. the laboratory profile and tumor markers were normal.
ct of the abdomen showed focal heterogeneous enhancement of the mesenteric fat with stranding (8.7×4.8×10 cm), with multiple internal subcentimeter lns in the supraumbilical area, probably inflammatory in origin and suggestive of mp. 18f-fdg pet/ct showed faint fdg uptake in multiple mesenteric lns. the patient subsequently underwent diagnostic laparoscopy with biopsy. intra-operative findings showed a yellowish mass with a fat-like surface at the mesentery of a jejunal segment; incisional biopsy was performed laparoscopically. the histology showed adipose tissue with areas of fat necrosis, fibrosis, foamy macrophage infiltration and predominant chronic inflammation, with no evidence of malignancy. ihc studies (including cd68, s-100, cd3 and cd20) were performed and the result was compatible with a reactive process. treatment was started with 40 mg prednisone once daily, with follow-up planned with a repeated ct scan. discussion: mp involves the small bowel mesentery in over 90% of cases. the diagnosis is made by 3 pathologic findings: fibrosis, chronic inflammation and fatty infiltration. the differential diagnosis is broad, and mp has been associated with malignancies such as lymphoma, well-differentiated liposarcoma and melanoma. the imaging appearance varies depending on the predominant tissue component. definitive diagnosis requires biopsy, but open biopsy is not always necessary. laparoscopic biopsy has not been reported previously. treatment has been reserved for symptomatic cases with a variety of drugs. our case was started on oral corticosteroid treatment and awaits response evaluation. background: laparoscopic appendectomy is the gold standard for treatment of acute appendicitis. stapled closure of the appendiceal stump is often performed and has been shown to have several advantages. 
few prior cases have been reported demonstrating complications from free staples left within the abdominal cavity after the laparoscopic stapler has been fired. case report: a previously healthy 29-year-old female initially underwent laparoscopic appendectomy for acute uncomplicated appendicitis, during which the appendix and mesoappendix were divided using laparoscopic gastrointestinal anastomosis (gia) staplers. her initial postoperative recovery was uncomplicated and she was discharged home the same day. the patient returned to the emergency department on postoperative day 17 with one day of sharp mid-abdominal pain, obstipation, and emesis. her abdomen was distended and mildly tender but without peritoneal signs. she was afebrile but was found to have a leukocytosis of 13.2. ct demonstrated twisted loops of dilated small bowel in the right lower quadrant with two transition points, suggestive of an internal hernia with closed-loop bowel obstruction. diagnostic laparoscopy was performed through the three prior appendectomy incisions. an adhesion was noted between the veil of treves and the mesentery of a more proximal loop of ileum, caused by a solitary free closed staple remote from the staple lines, resulting in an internal hernia containing several loops of ileum (fig. 1). the hernia was reduced, and the small bowel was noted to have early ischemic discoloration. the adhesion was lysed by removing the staple from both structures to prevent recurrence. through the remainder of the procedure, the compromised loops of bowel began to peristalse and their color normalized. the procedure was concluded without resection. the patient recovered on a surgical floor and was discharged home on postoperative day one. conclusion: gastrointestinal staplers are commonly used owing to their ease of use and low complication rate. it is not uncommon to leave free staples in the abdomen during laparoscopy, as retrieval can often be more difficult and time consuming. 
our case is only the second in the literature reporting an internal hernia with closed-loop bowel obstruction as a complication of a retained staple. choosing the most appropriate staple load size to reduce the number of extra staples after firing, and removing as many free staples as possible, can prevent potentially devastating complications. video-assisted thoracoscopic pulmonary wedge resection in a patient with hemoptysis and intralobar sequestration: a case report mary k lindemuth, md, subrato j deb, md; the university of oklahoma health science center case report: a 19-year-old male with a history of noonan's syndrome, bronchitis, and asthma presented with acute hemoptysis. while chest x-ray was unremarkable, a computed tomography angiogram of his chest was significant for intralobar pulmonary sequestration in the right lower lobe. the aberrant pulmonary artery originated from the abdominal aorta, immediately proximal to the celiac axis, and coursed through the hiatus in the retroperitoneum. flexible fiberoptic bronchoscopy revealed blood within the right lower lobe bronchus with no appreciable source. a right video-assisted thoracoscopic approach was taken for wedge resection of the sequestration. a two-portal technique was utilized with the patient on single-lung ventilation. the sequestration was easily identified; the anomalous pulmonary artery coursed directly to a large, focal area of hemorrhage noted within the lower lobe pulmonary parenchyma, as seen in the image [rectangle marking the aberrant artery and oval marking the sequestration]. pathologically, the specimen was noted to be benign lung parenchyma with bronchiectasis and abundant acute hemorrhage. discussion: pulmonary sequestration (ps) is a rare, congenital bronchopulmonary foregut malformation. the literature describes the incidence of ps to be only 0.15-6.4% of all pulmonary malformations. 
as ps is most frequently diagnosed during childhood, the occurrence of diagnosis during adulthood is estimated to be less than 3 per 10,000 adults. two types (intra- and extralobar) are described, with intralobar sequestration the most common and contained within the normal visceral pleura. both types have aberrant systemic arterial blood supply, most frequently from the thoracic aorta. likewise, both types are nonfunctioning lung tissue, as there is no direct communication with the bronchopulmonary tree. the most common presentation is pneumonia, and often patients will have had recurrent symptoms before diagnosis. it is rare to present with hemoptysis, which is understood to be secondary to elevated capillary pressure within the sequestration and then communication through the pores of kohn. while endovascular embolization of the aberrant pulmonary artery has been described as a safe alternative to surgical intervention, the subjects of these studies have primarily been children and long-term outcomes are unknown. the definitive treatment of ps continues to be surgical intervention. the surgeon should strive to leave as much normal lung parenchyma as possible. video-assisted thoracoscopic resection is well tolerated by patients when compared to thoracotomy. however, it is vital for the surgeon to be aware of the potential risk of life-threatening hemorrhage secondary to the sequestration having a systemic blood supply that must be controlled and ligated. case report: a 51-year-old female patient presented with a history of an enlarging mass and weight loss of 7 kilograms over 15 months, associated with vomiting and nausea for eight months. abdominal ultrasound showed an irregular cyst, without solid projections and without signs of flow on doppler, measuring 20×11×20 cm. investigation continued with a ct scan that showed a large homogeneous cystic lesion with no septum in the abdominopelvic region, possibly mesenteric, measuring 20.5×10.5×24 cm. 
a laparoscopic approach for resection of the cyst was then performed. the surgery was performed with the patient in dorsal decubitus, using three trocars: one in the umbilical region (11-mm) for the camera, where the pneumoperitoneum was created by the hasson open technique under direct vision; and another two located in the epigastrium (5-mm) and in the right upper quadrant (3-mm). in addition to the mesenteric cyst, a simple cyst in the right ovary and a solid nodule with a lipomatous characteristic of approximately 3 cm were visualized in the abdominal cavity. total resection of the mesenteric cyst with peripancreatic fibrous tissue was performed. the cyst was punctured and its contents fully aspirated. resection of the right ovarian cyst was also performed. at the end of the procedure the mesenteric and ovarian cysts, the nodule, part of the omentum, and the peripancreatic tissue were removed through the 11-mm trocar at the umbilicus. the patient had no further complications, being discharged four days after the procedure. the histopathologic result showed a serous cyst in the right ovary and a serous cyst in the peripancreatic mesentery with a chronic inflammatory process and signs of calcification; no signs of malignancy were observed in any specimen. we aim to present a successful therapeutic approach utilizing laparoscopy for safely removing a gastrointestinal stromal tumor. depicted is a 66-year-old jehovah's witness female who presented to the emergency department for evaluation of bitemporal headache and dizziness and was found to have profound anemia, with hemoglobin 5.4 and hematocrit 16.6 upon arrival to the ed. the patient refused blood transfusion, as her religious beliefs preclude her from taking blood products. as part of her work-up, endoscopy was performed and revealed a large, approximately 4×4 cm, prolapsed, ulcerated, nodular lesion with active bleeding in the cardia of the stomach. 
this was temporized, but the friable tissue, with no single identifiable lesion for clip placement, left the patient at high risk for re-bleeding. she was taken to the operating room, and laparoscopic partial gastrectomy with intraoperative esophagogastroduodenoscopy was successfully performed, with minimal blood loss and no intraoperative complications. the patient was discharged on postoperative day 3. we present the case of a 46-year-old male with a history of morbid obesity and an initial bmi of 44.7, who underwent an elective laparoscopic single anastomosis duodenal-ileal bypass with sleeve gastrectomy (sadi-s). postoperatively he developed an anastomotic leak at the duodeno-ileal anastomosis that would not resolve despite reoperation. he was then converted to a roux-en-y gastric bypass (rygb). postoperative imaging failed to reveal any signs of anastomotic leak and the patient was discharged tolerating an oral diet. he returned to the emergency department 11 days later with a 6×3×2 cm sub-hepatic collection arising from the duodenal stump from the surgical conversion. interventional radiology percutaneously drained the collection and found a connection between the cavity and the duodenum. using this connection, a percutaneous decompressive duodenostomy drain was successfully inserted into the duodenum using a guidewire through the abscess cavity, along with an extra-enteric drain placed within this cavity. the collection was obliterated and the duodenal leak was controlled successfully with percutaneous drainage, bowel rest with parenteral nutrition and broad-spectrum intravenous (iv) antibiotics. the patient was reintroduced to a bariatric clear diet after a week of bowel rest and the abscess drain was then discontinued during the same hospital admission. 
the patient was discharged with the percutaneous duodenostomy tube, which was removed in clinic 34 days later, after the patient tolerated capping trials and imaging failed to reveal any further collections, oral contrast extravasation or distal obstruction. in this article we analyze notable imaging from the case and review the current literature on the different management options for a duodenal stump blowout. we also discuss the basics of the sadi-s procedure and conversion of a sadi-s procedure to a rygb. keywords: anastomotic leak, duodenal stump blowout, sadi-s, duodenostomy tube. pancreatic heterotopia is often an incidental finding on autopsy, but in some cases can lead to abdominal pain, obstruction, or intussusception. we present a case of pancreatic heterotopia mimicking an internal hernia on radiologic imaging. a 47-year-old female with a seven-month history of chronic abdominal pain had been treated for low back pain and recurrent urinary tract infections. she was found to have a computed tomography (ct) scan concerning for internal hernia and labs consistent with acidosis. she was taken for a laparotomy and did not have an internal hernia, but rather an exophytic mass in the proximal jejunum. the mass was resected and a stapled side-to-side jejunojejunostomy was created. on pathologic review, the specimen was found to be pancreatic heterotopia. her postoperative course was complicated by an ileus, but she was discharged on postoperative day three. at her two-week follow-up she had minimal incisional pain, and at one-year follow-up she had resolution of her left upper quadrant abdominal pain. prior to this report, pancreatic heterotopia had never been described as presenting on ct scan as an internal hernia. although uncommon, it should remain in the differential when evaluating a patient presenting with abdominal pain and radiologic evidence of obstruction or internal hernia. 
case report: a 26-year-old male patient who was diagnosed with high blood pressure at 18 years of age presented with tetraparesis and intense asthenia for six months. blood tests showed hypokalemia, hypernatremia, and suppressed renin activity. ultrasound of the urinary tract was normal. ct scan of the abdomen showed a hypodense nodule with regular margins, measuring 1.4×1.0 cm, with a density of 18 hu in the non-contrast phase and heterogeneous uptake after injection of contrast, in the left adrenal gland. thus, the diagnosis of hyperaldosteronism secondary to the left adrenal nodule was confirmed, and surgical resection was indicated. the procedure was performed with the patient in the right lateral decubitus. two 3-mm and one 5-mm trocars were used on the left flank, as well as the 10-mm portal for the camera in the lower right quadrant under direct vision. the pneumoperitoneum was created by the hasson open technique through a transumbilical incision. the procedure consisted of the dissection, isolation and electrocautery of the left renal capsule and the left adrenal region with an ultrasonic device, as well as the periadrenal vessels, adjacent lymph nodes and periadrenal and adrenal fat tissue. the surgery was uneventful and the patient had no further complications, being discharged the next day. the histopathologic result showed a completely excised adrenocortical adenoma. conclusions: the hybrid minimally invasive approach proved to be safe and effective for this procedure, and the known advantages of minilaparoscopy such as less trauma, better visualization, better dexterity, better aesthetics, and reduced hospital stay were observed. background: coccidioidomycosis is a fungal infection endemic to the southwestern united states, central america and south america. coccidioides is ubiquitous in many of these endemic regions, with near 100% seroconversion in some communities. two-thirds of these mycotic infections may be asymptomatic. 
the most common presentation of coccidioidomycosis consists of "flu-like" symptoms or pneumonia. less than five percent of symptomatic cases progress to disseminated coccidioidomycosis, which may involve any organ system. very rarely, infection may include the peritoneum. we report a case of coccidioidomycosis with peritoneal involvement in an immunocompetent individual. case: a 36-year-old male presented to the emergency department with progressive abdominal pain. he had been seen and treated for pneumonia in the emergency department one week prior. the patient worked outdoors in arizona and was otherwise healthy, with a family history of malignancy and blood disorders. fever, leukocytosis and ascites on computed tomography scan prompted a diagnostic laparoscopy, which revealed peritoneal granulomas positive for coccidioides. the patient was treated as an outpatient with fluconazole. discussion: since 1939 this is, to our knowledge, the 38th reported case of peritoneal coccidioidomycosis. the patient described in this case report was an otherwise healthy 36-year-old male; this is incongruent with many of the previously recorded cases, which involved disseminated disease in immunocompromised patients. the patient's family history of malignancy and blood disorders suggests a potential underlying genetic predisposition that could account for this abdominal presentation. possible mutations include genes coding for the interleukin-12 β1 receptor and the signal transducer and activator of transcription 1, which have been implicated in increased coccidioidomycosis susceptibility. peritoneal infection presents a unique challenge in diagnosis. in these cases coccidioidomycosis may not be suspected due to nonspecific symptoms and imaging, the infrequency of this extra-pulmonary manifestation, and clinical characteristics that mimic the presentation of tuberculosis and malignancy. 
abdominal infections have been misdiagnosed as appendicular abscesses, iliopsoas abscesses, adnexal abscesses and pancreatic masses. consequently, the diagnosis of peritoneal coccidioidomycosis is often made after laparoscopic exploration of the abdomen and histopathology, as it was in this case report. conclusions: coccidioidomycosis incidence is on the rise in endemic areas and it often falls on the surgeon to make the diagnosis in extra-pulmonary cases. the peritoneal subset of coccidioidomycosis should be considered in endemic areas when a young, otherwise healthy patient presents with abdominal pain. failure to recognize the possibility of coccidioidomycosis may lead to unnecessary treatments and procedures. indocyanine green cholangiography to detect anomalous biliary anatomy steven d schwaitzberg, md, gabrielle yee, ms; university at buffalo jacobs school of medicine introduction: common bile duct injury is the most feared complication of cholecystectomy. imaging with indocyanine green (icg) is a safe and effective technique to detect biliary anatomy in open, laparoscopic and robotic surgery. several studies report detecting aberrant biliary anatomy with the use of icg in laparoscopic cholecystectomy with high success rates. by identifying the cystic duct-common hepatic duct confluence before dissecting calot's triangle, icg allows surgeons to perform "virtual" cholangiography at the start of procedures to identify either normal anatomy or possible anatomic variants. it is clear that icg use is an effective tool to achieve the critical view of safety. however, no reports have suggested icg cholangiography as the last operative step in cholecystectomy to identify hidden biliary anomalies and avoid postoperative bile leak complications. case report: we report a novel use of icg cholangiography in visualizing anomalous biliary anatomy prior to closing, thus avoiding potential bile duct leakage. 
in our case, icg cholangiography was used to fluoresce the common hepatic duct, common bile duct and cystic duct. the cystic duct was transected, and the gallbladder was removed using electrosurgery. at the completion of the gallbladder removal, the liver was elevated to inspect the clips on the cystic duct and artery. at this point, near infrared imaging was reinitiated, and a small 1 mm structure was noted to fluoresce next to the cystic artery. this structure was identified using white light and subsequently clipped. discussion: the use of icg in this context after the completion of the cholecystectomy facilitated the identification of a small hepatocystic or aberrant duct, which would have likely leaked bile sometime in the postoperative period. based on our experience, we recommend one additional routine near infrared viewing to identify small structures or potential leaks at the completion of cholecystectomy. improved visualization of the extrahepatic biliary anatomy by icg has the potential to translate into improved clinical outcomes. solitary fibrous tumors (sft) are uncommon fibroblastic mesenchymal neoplasms that display a wide range of histologic behaviors. these tumors, which are estimated to account for 2% of all soft tissue neoplasms, typically follow a benign clinical course. however, it is estimated that 10-30% of sfts are malignant and demonstrate aggressive behavior with local recurrence and metastasis up to several years after surgical resection. we report a case of sft arising from the stomach, which is an exceptionally rare finding and has been reported only six times in the literature. additionally, this tumor was associated with dedifferentiation into undifferentiated pleomorphic sarcoma. to our knowledge, there are no documented cases of a malignant sft arising from the stomach to demonstrate dedifferentiation into an undifferentiated pleomorphic sarcoma. 
a 68-year-old male presented to the emergency department with vague complaints of right-sided flank pain. the patient had a history of nephrolithiasis and underwent a ct of the abdomen. this scan revealed a large heterogeneous mass in the left upper quadrant. the patient underwent endoscopic ultrasonography with fine needle aspiration of the mass, which stained strongly for cd34. gastrointestinal stromal tumor (gist) was the favored diagnosis, as it is by far the most common mesenchymal neoplasm of the stomach, especially for a cd34-positive spindle cell neoplasm. accordingly, the patient began treatment with imatinib; however, after four weeks of therapy, there was no significant radiologic regression. a second biopsy was performed and the specimen was sent for stat6 immunohistochemistry, which revealed diffuse strong nuclear positivity. a diagnosis of solitary fibrous tumor was made. surgical resection of the tumor was performed; it measured 17×14×10.5 cm. the patient was to undergo surveillance imaging every 3 to 6 months post-operatively. a surveillance scan showed solitary metastatic disease in the left lateral segment of the liver. he underwent left lateral segmentectomy with an uneventful recovery. our case was complicated by a diagnostic dilemma with gist, highlighting the challenges of diagnosing and characterizing sfts. dedifferentiation, or the abrupt transition from a classic sft into a high-grade sarcoma, is a particularly concerning finding in our case, as it is associated with a worse prognosis than classic malignant sft. the stat6 marker by immunohistochemistry is very specific for sft and may have aided in the diagnosis earlier. therefore, it is imperative to keep solitary fibrous tumor, albeit exceedingly rare, in the differential diagnosis of mesenchymal neoplasms of the stomach. appendiceal diverticulitis is an uncommon pathology that can clinically mimic acute appendicitis. 
some radiographic distinctions have been reported, but final pathologic examination of the surgical specimen is required to confirm the diagnosis. symptoms are often milder, which can lead to a delayed diagnosis and increases the risk of severe complications such as perforation. a 48-year-old female presented with a three-day history of right lower quadrant pain. she described the pain as constant and radiating to the left lower quadrant. associated symptoms included nausea, vomiting, and decreased appetite; she denied fevers or diarrhea. the patient had no significant past medical history, and her surgical history was significant for a total nephrectomy for a living-donor kidney transplant to her mother. on physical exam she was tender in the right lower quadrant with rebound and a positive rovsing's sign. all laboratory results were unremarkable, and she was hemodynamically stable. a ct scan was performed and demonstrated a dilated, fluid-filled appendix with surrounding inflammatory change without abscess or free intra-peritoneal air. she was subsequently admitted to the hospital, made npo, started on iv antibiotics, and taken to the operating room, where she underwent an uncomplicated laparoscopic appendectomy. post-operatively, her hospital course was unremarkable. pathology revealed acute suppurative appendicitis secondary to an acutely inflamed appendiceal diverticulum, consistent with a final diagnosis of acute appendiceal diverticulitis. appendiceal diverticulitis should be considered in patients presenting with acute right lower quadrant abdominal pain. although some consider appendiceal diverticulitis a variant of acute appendicitis, it is important to distinguish between the two diagnoses. appendiceal diverticulitis has a higher rate of complications, including perforation, and is associated with a higher risk of neoplasm, particularly mucinous adenomas and carcinoid tumors. 
appendectomy should be performed in all cases in order to obtain appropriate pathological examination and rule out coexistent neoplasms. laparoscopic appendectomy is a safe and appropriate approach to the treatment of appendiceal diverticulitis. upper gi endoscopy and biopsy showed a gastrointestinal stromal tumor (gist) in the stomach. a videolaparoscopic partial gastrectomy was then proposed. the surgery was performed with the patient in the right lateral decubitus. two 3-mm minilaparoscopic trocars, a 5-mm conventional trocar for an ultrasonic instrument and a 10-mm trocar in the umbilical region for the camera were used. pneumoperitoneum was created using the hasson open technique under direct vision. trans-operative endoscopy was performed to identify the tumor easily. initially, the ultrasonic device released the greater omentum, and then the tumor was resected from the body of the stomach. the gastric wall was manually sutured with a 2-0 vicryl, and the tumor was removed in an endobag through the 10-mm incision at the umbilicus. the surgery was uneventful, with a total time of 72 minutes. the patient had no further complications, being discharged two days after the procedure in good clinical condition. the histopathological result showed a gist with free margins. conclusion: the minimally invasive approach proved to be safe and effective for this procedure. the known advantages of video-surgery such as less trauma, better visualization, increased dexterity, better esthetics, and less postoperative recovery time were confirmed. the upper gi endoscopy contributed to improving the safety and efficacy of the procedure, allowing a more precise resection of the gist, as well as intragastric review of the suture line at the end of the surgery. background: portal vein thrombosis (pvt) is a rare post-operative complication which has been associated with a wide range of precipitating factors. 
the most commonly described associated conditions include cirrhosis, bacteremia, myeloproliferative disorders and hypercoagulable states. pvt most frequently occurs as a complication after hepatobiliary surgery, and although possible, very few cases have been documented occurring after laparoscopic surgery of the gastrointestinal tract. herein, we describe a case of pvt in a patient who underwent elective laparoscopic right hemicolectomy and was treated successfully at our center. case: a 39-year-old female with a past medical history of depression, migraines and endometriosis underwent an uncomplicated laparoscopic right hemicolectomy at our facility for recurrent right-sided diverticulitis. she had suffered 4 previous episodes of diverticulitis and desired definitive surgical treatment. her hospital course was uneventful and she was discharged home on postoperative day 2. on post-operative day 9, she presented to the emergency department complaining of severe abdominal pain, back pain and nausea. computed tomography of the abdomen and pelvis revealed pvt. she was initiated on therapeutic anticoagulation with heparin. hematology was consulted for a hypercoagulable workup. further investigation revealed a family history of a brother who had had a lower extremity deep venous thrombosis, with a negative hypercoagulable workup. she had also previously been taking leuprolide and conjugated estrogen and medroxyprogesterone for her endometriosis. she was ultimately found to have a heterozygous prothrombin g20210a gene mutation. her anticoagulation was bridged to coumadin and she was discharged home. she has recovered as expected, without any further complications. discussion: although more common in patients with cirrhosis after hepatobiliary surgery, pvt is a rare complication that can occur after virtually all types of laparoscopic surgery, including elective right hemicolectomy. 
patients may be completely asymptomatic, or present with a broad spectrum of symptoms including severe abdominal pain, fever, diarrhea, or gastrointestinal bleeding. physicians should be aware of this possible complication, since early diagnosis and treatment are imperative to prevent life-threatening complications such as intestinal ischemia and perforation. a detailed medical and family history is imperative, and all patients with post-operative pvt should undergo a complete hypercoagulability workup. this is a case of a 37-year-old male with a history of a redo hiatal hernia repair 5 years prior who presented with two episodes of upper gastrointestinal bleeding with no identifiable source noted on either endoscopy or angiography. during his second admission, initial hemoglobin was 5.5 g/dl and endoscopy showed a massive amount of blood in the stomach. continuous oozing was seen originating in the fundus area, but no clear source could be identified. empiric epinephrine was injected into the area but failed to achieve hemostasis. angiography was also negative. repeat endoscopy showed no active bleeding; however, distention of the wrap into the gastric cavity was observed. the patient re-bled and was taken to the operating room emergently after a failed attempt at endoscopic control. the patient underwent proximal gastrectomy after intra-operative gastrostomy and exploration were unable to identify a bleeding source. the patient was left with an open abdomen and in discontinuity while resuscitation was performed in the surgical intensive care unit. he subsequently underwent roux-en-y reconstruction and gastrostomy tube placement via the distal gastric remnant. an upper gastrointestinal series demonstrated absence of a leak, and the patient was started on a liquid diet supplemented with tube feeding. his recovery was uneventful and he was discharged home in stable condition. 
pathology revealed gastric ischemia at the base of the wrap, making it impossible to visualize through endoscopy. on reviewing the literature, gastric ulcers and ischemia after fundoplication have been previously described. incidence was up to 3%, and onset of presentation ranged from the early post-operative period up to 5 years. most were located on the lesser curvature. the exact pathophysiology for their occurrence is not completely understood. hypothesized factors include technical aspects of the fundoplication causing inappropriate tension, vessel disruption and ischemia, and injury to the vagus nerve affecting gastric emptying, which was thought to increase gastrin secretion. treatment includes medical management with proton pump inhibitors; however, a few cases describe antrectomy with inclusion of the bleeding ulcer. our case presented failed medical and endoscopic management. we recommend takedown of the fundoplication in hemodynamically stable patients to completely evaluate the gastric mucosa, identify, and address the source of bleeding. otherwise, emergent cases will require staged gastrectomy including the wrap, followed by roux-en-y reconstruction. acalculous cholecystitis associated with a large periampullary duodenal diverticulum: a case report peng yu, md, phd, austin iovoli, aaron hoffman, md; department of surgery, suny buffalo, kaleida health system, buffalo, ny introduction: a periampullary diverticulum (pad) can compress the common bile duct (cbd) and consequently cause obstructive jaundice and cholangitis, as a few publications have documented. here we report the first case of acalculous cholecystitis associated with a pad-related cbd obstruction. case: the patient was a 60-year-old female with a past surgical history of laparoscopic sleeve gastrectomy who presented to the emergency room with upper abdominal pain and vomiting for one day, associated with leukocytosis and a left shift. serum total bilirubin rose to 6.1 mg/dl on hospital day (hd) 3. 
ct, ultrasound, and mrcp images confirmed a distended, wall-thickened gallbladder with pericholecystic fluid, and a significantly dilated cbd at 1.2 cm in diameter (fig. 1), without cholelithiasis or choledocholithiasis. ercp could not be completed due to the post-gastrectomy anatomy and the failure to cannulate the ampulla, which was embedded in a large food-impacted pad (fig. 2). on hd5, the patient underwent a diagnostic laparoscopy and an intra-operative cholangiogram, which confirmed a mildly inflamed edematous gallbladder and a large 3.8 × 3.8 cm pad with a narrow neck that was distorting the distal cbd (fig. 3). since the patient's bilirubin level had been improving, we decided to perform only a laparoscopic cholecystectomy. intraoperatively, an anatomic variation of the cystic artery encircling the cystic duct (fig. 4) was also identified. postoperatively the patient recovered well during the remainder of the inpatient course and at the 3-week outpatient follow-up. the pathology of the excised gallbladder confirmed cholecystitis without cholelithiasis. discussion: lemmel's syndrome is defined as obstructive jaundice due to a pad in the absence of cholelithiasis or other detectable obstruction. since lemmel described this duodenal-diverticulum-related obstructive jaundice in 1934, very few cases have been reported or investigated. to date there is no report describing the association of acalculous cholecystitis with lemmel's syndrome. this patient's mild acalculous cholecystitis was probably attributable to the biliary obstruction and consequent gallbladder hydrops. her symptoms could have arisen from either acalculous cholecystitis or intermittently worsening biliary obstruction. in this case, the contribution of the anatomic variation of the cystic artery is unclear. in the future, if this patient's symptoms recur, the treatment options for her will be sphincterotomy, removal of the impacted food in the pad, or diverticulectomy. 
accidental fish bone ingestion masquerading as acute abdomen aim: to report a case of fish bone ingestion masquerading as acute abdomen. case report: a 48-year-old female patient presented with complaints of severe abdominal pain for 5 days. there was no history of associated nausea or vomiting, fever, or altered bowel habits. on examination the patient had tenderness and guarding localized to the right iliac fossa. blood investigations revealed raised inflammatory markers. ultrasound of the whole abdomen and contrast-enhanced computed tomography (cect) were normal. the patient was managed conservatively but, in view of persistence of symptoms, a triple-puncture diagnostic laparoscopy was performed on day 3 of admission. omental inflammation with a soapy appendix was found and appendicectomy was performed. on further assessment a foreign body was also found in the ileum, which was removed and identified as a fish bone. the patient had a satisfactory post-operative recovery and was discharged in stable condition. discussion: acute abdomen due to fish bone ingestion is not a very common occurrence. unfortunately the history is often non-specific and these patients can be misdiagnosed with acute appendicitis and other pathologies. ct scans can be useful to aid diagnosis. ct is, however, not fully sensitive in detecting complications arising from fish bone ingestion. conclusion: any patient with acute abdomen, a non-specific history, and normal imaging may still benefit from a diagnostic laparoscopy. discussion: this patient presented with a bowel obstruction, partial cecal necrosis, and neuroendocrine carcinoma. literature suggests that cecal necrosis in the majority of cases is caused by a vascular event, occlusive or non-occlusive. the patient had atherosclerosis and an underlying malignancy, which can be associated with prothrombotic states and contributes to an overall risk of thrombosis. the cecum can sustain ischemic injury in the presence of severe or prolonged hypotension. 
the most frequent causes are decompensated heart failure, hemorrhage, arrhythmia, or severe dehydration, only 1 of which was present in this patient. the midgut neuroendocrine tumor is generally located in the terminal ileum, as a fibrotic submucosal tumor 1 cm or less. mesenteric metastases are often larger than the primary tumor and associated with fibrosis, which may entrap loops of the small intestine and cause bowel obstruction. this may eventually encase the mesenteric vessels with resulting venous stasis and ischemia in segments of the intestine, as seen in this patient. conclusion: cecal necrosis is a rare entity, but its incidence increases with age. isolated cecal necrosis may manifest as a ct-negative appendicitis or a small bowel obstruction in the absence of past surgical history. s178 surg endosc (2018) laparoscopic transection of the falciform and triangular ligament successfully released the entrapped loop with successful reperfusion by the end of the surgery. in the absence of any prothrombotic comorbidity, the patients were discharged asymptomatic without further anticoagulation. to date only a few similar cases have been reported, and most of them were described in neonates and pediatric patients. to our knowledge, these are the first cases reported in the elderly. in these patients the laparoscopic approach was both diagnostic and therapeutic, with transection of the ligament. roberto javier rueda esteban 1 , andres mauricio garcia sierra 2 , felipe perdomo 2; 1 universidad de los andes, 2 fundacion santa fe this is a rare case of spontaneous splenic rupture associated with chronic myeloid leukemia as an uncommon complication. the case report and a review of the relevant literature on symptomatology and clinical management are presented. 
emphasis is made on the importance of including splenic rupture as a differential diagnosis for acute abdominal pain, especially in a patient with neoplastic hematopathology, since early treatment improves patient survival and prognosis. esophagectomy is a complex operation associated with serious immediate complications and long-term chronic complications. gastric ulcers are a common chronic complication after esophagectomy with gastric conduit reconstruction. these are rarely complicated by significant bleeding or perforation. we report a case of delayed diagnosis of a fistula forming between a gastric conduit and the right bronchial tree 13 years after esophagectomy. this was successfully treated using multiple therapeutic approaches, including endoscopic localization and resection through a right thoracotomy. to the best of our knowledge, our patient is the only survivor of a chronic gastric conduit bronchial fistula. a 53-year-old male with type 1 diabetes mellitus, dyslipidemia, asthma, and a smoking history presented 15 years after an ivor-lewis esophagectomy for a gastrointestinal stromal tumor (gist) with a chronic cough starting 13 years after his esophagectomy, followed by multiple episodes of hemoptysis over the next 2 years. the patient was known to have ulcers in his gastric conduit with a massive bleed 1 year after his esophagectomy. repeat endoscopy revealed two large chronic ulcers that had increased in size based on comparison of pictures from endoscopies 3 to 6 years after his esophagectomy, despite maximal medical management. the patient presented to numerous specialists at tertiary care centers in canada and the united states. ultimately, in a clinic the patient was observed to cough immediately after the ingestion of water, but not solids, leading to a provisional diagnosis of a gastrobronchial fistula. a barium swallow failed to show a fistula (fig. 1). 
however at endoscopy, instillation of saline directed at an ulcer immediately induced a cough, but this was not reproduced when the saline was directed away from the ulcer. the fistula was ultimately demonstrated by placing a wire through the ulcer and visualizing it bronchoscopically in the right superior segmental bronchus. in an effort to pursue a minimally invasive approach, two attempts were made to close the fistula with over-the-scope clips (otsc). unfortunately, the patient's symptoms persisted. a wire was placed through the fistula and delivered through the patient's mouth and endotracheal tube. a right thoracotomy allowed access to the conduit, which was opened and the fistula localized using the wire. the fistula was resected and the bronchus closed. at twelve-month follow-up the patient did not have a recurrent cough or hemoptysis while tolerating a full diet. introduction: roux-en-y gastric bypass (rygb) is one of the original and most studied weight-reduction procedures and remains the gold standard for comparison in bariatric surgery clinical outcomes. although rygb is an effective procedure for weight loss, it has become less popular over the last several years because of increased morbidity compared to the more utilized vertical sleeve gastrectomy (vsg). early complications of rygb include bleeding, perforation, or leakage. late complications include internal hernias, small bowel obstruction, anastomotic stenosis, marginal ulcers, and gastrogastric fistulas. case report: a 50-year-old female with a past medical history of morbid obesity, diabetes mellitus type 2, hypertension, gerd, peptic ulcer disease, cholelithiasis, liver dysfunction with ascites, asthma, and a past surgical history of rygb (11 years ago) presented to our institution with acute-on-chronic abdominal pain associated with nausea, vomiting, dysphagia, inability to eat and maintain hydration, and an additional weight loss of about 100 lbs over the last year. 
in addition, the patient was a chronic opioid and nsaid user, had an extensive smoking history, and had not followed up with her surgeon for 11 years. at the time of presentation, the patient weighed 82 lbs (bmi: 13.2), had normal vital signs, and appeared cachectic. an upper gastrointestinal study followed by an upper endoscopic examination demonstrated complete obliteration of the gastrojejunal anastomosis and revealed a 2-cm-long gastrogastric fistula originating from the distal end of the gastric pouch to the lesser curvature of the excluded stomach. after conservative measures were initiated to hydrate and metabolically stabilize the patient, the decision was made to proceed with diagnostic laparoscopy and surgical placement of a gastrostomy tube into the gastric remnant. the patient was discharged after tolerating a full liquid diet and gastrostomy tube feedings, with a plan for future revision of the gastrojejunostomy when optimal nutritional status is achieved. conclusions: late complications of rygb occur at a rate of 15-20%. major risk factors for anastomotic complications include non-compliance, smoking, and opiate and nsaid abuse. though abdominal pain, anastomotic stenosis, marginal ulcers, and fistulas are relatively common late complications of rygb, complete obliteration of the gastrojejunal anastomosis has not been well described in the literature. this case demonstrates the importance of long-term follow-up post rygb for early diagnosis of late complications and brings attention to this rare but possible sequela that can arise in patients after rygb. contrast radiograms and upper endoscopic photographs will be presented. introduction: retroperitoneal sarcoma represents approximately 12-15% of all sarcomas and less than 0.5% of all neoplasia. radiotherapy and chemotherapy still do not represent valid therapeutic alternatives; therefore complete surgical resection is the only potentially curative treatment modality for retroperitoneal sarcomas. 
the ability to completely resect a retroperitoneal sarcoma, together with tumor grading, remains the most important predictor of local recurrence and disease-specific survival. in a patient with a large fibrosarcoma and associated hypoglycemia, assays for insulin-like activity (ila) were found to be high in the extract of tumor tissue, while insulin was not detected in significant concentration in either the same extract or his serum. laparoscopic surgery represents an alternative technique for radical resection of such tumors, as a minimally invasive rather than traditional approach. only a few cases have been reported in the literature. introduction: roux-en-y gastric bypass (rygb) is a frequently performed bariatric procedure, of which internal hernia (ih) is a known complication. we discuss a rare finding of occult gastric remnant perforation as a result of an obstructed ih in a post-bypass patient. methods: we present a case report of a single bariatric surgeon's experience at a tertiary care hospital. a literature review of pubmed confirms the unique presentation and operative findings in our patient, as few similar cases have been published. a 59-year-old male s/p rygb 12 years ago presented to the ed with right upper quadrant pain, nausea, vomiting, and a leukocytosis of 24,100. bmi was 31.7; weight was 254 lbs. workup included an abdominal ultrasound showing gallbladder distention without signs of cholecystitis. liver function tests were normal. further imaging included a ct scan, remarkable for a paraesophageal hernia (peh) containing the gastric pouch, and an elevated left hemidiaphragm. the scan showed no evidence of ih or bowel obstruction. an upper gi series was additionally obtained, which was also negative for small bowel obstruction. due to unclear etiology for this patient's symptoms or source of leukocytosis, diagnostic laparoscopy was planned. 
results: intraoperative findings were significant for an ih containing dilated small bowel with twisted and incarcerated omentum through the jejunojejunostomy site, as well as a distended gallbladder without acute inflammation. the ih was reduced and the defect closed without bowel resection. cholecystectomy was completed. subsequent inspection of the diaphragmatic hiatus revealed uncomplicated herniation of the gastric pouch. during attempts to dissect the left diaphragmatic crus, a large pocket of purulent material was encountered below the left diaphragm in the region of the remnant stomach fundus. a methylene blue test and intraoperative endoscopy did not demonstrate any connection to the gastric pouch. the purulence was attributed to an occult remnant stomach perforation related to the distal obstructed ih. a drain was left in the abscess and the peh was not surgically addressed. the patient was discharged on postoperative day 5. he has not suffered any further complications or recurrent complaints. conclusion: gastric perforation following rygb is an uncommon complication resulting from ih. this diagnosis was missed by preoperative imaging and was only found after thorough laparoscopic investigation. surgeons should maintain a high clinical suspicion of ih in post-rygb patients with otherwise unexplained abdominal symptoms, fever, and leukocytosis, even in the absence of confirmatory diagnostic testing. the threshold for operative exploration in this clinical setting should remain low. alejandro garza, md, robert alleyn, md, jose almeda, md, ricardo martinez, md; utrgv obesity is an epidemic condition worldwide carrying significant morbidity and mortality. surgical therapy is the only proven effective method to sustain weight loss. among the different surgical procedures, gastric bypass is the most effective. during this surgery, most of the stomach is excluded from the upper gastrointestinal tract, which makes future evaluation of it very challenging. 
this can potentially lead to a delay in the diagnosis of any pathology in the bypassed stomach. gastric cancer is the 14th most common cancer and cause of cancer death in the united states. we present a case report of a patient who underwent a roux-en-y gastric bypass and went on to develop adenocarcinoma in the gastric remnant 28 years after her surgery. she underwent an exploratory laparotomy, extended antrectomy, subtotal gastrectomy including the gastro-colic ligament, and incidental appendectomy. pathology showed grade 4 undifferentiated adenocarcinoma that penetrated the visceral peritoneum, with clear margins. there was angiolymphatic invasion and perineural invasion along with metastatic carcinoma in 5 out of 6 lymph nodes. introduction: polyarteritis nodosa (pan) is a systemic transmural inflammatory vasculitis that affects medium-sized arteries. inflammation of the vessel wall and intimal proliferation create luminal narrowing, which can lead to stenosis and insufficiency. the same inflammatory process causes disruption of the elastic lamina, leading to aneurysm formation and possible spontaneous rupture with life-threatening bleeding. multifocal segments of stenosis and aneurysm formation are characteristically identified as a "rosary sign" or "beads on a string". unlike other vasculitides, pan does not involve small arteries or veins, and is not associated with anti-neutrophil cytoplasmic antibodies. we present the case of a 66-year-old female with a significant intra-abdominal bleed that was explored and repaired primarily. she was subsequently found on angiogram and postmortem pathology to have findings consistent with pan. case presentation: a 66-year-old female presented to the emergency department with abdominal pain followed by hemorrhagic shock and was found to have a ruptured left hepatic artery aneurysm during exploratory laparotomy. this aneurysm was suture ligated with a successful outcome. 
a mesenteric arteriogram was performed the following day and demonstrated lesions consistent with pan, including aneurysms of the left gastric branches and the right and left hepatic arteries, and a beaded appearance of the iliac artery. however, 2 days after hospital discharge she developed a massive pulmonary embolism from which she did not recover. postmortem examination confirmed rupture of the left hepatic artery aneurysm in addition to gross anatomical and histological findings consistent with pan. discussion: polyarteritis nodosa is a systemic inflammatory vasculitis that causes intimal proliferation and elastic lamina disruption. this multifocal disruption of the vessel results in aneurysm formation alternating with stenosis, creating a characteristic "rosary sign" on imaging. spontaneous rupture of these aneurysms is rare and almost always fatal due to life-threatening hemorrhage. with acutely ruptured aneurysms, prompt diagnosis, aggressive resuscitation, and hemostasis through transarterial embolization or surgery are paramount for patient survival. while acute rupture of an aneurysm as the result of pan is exceedingly rare, it must be considered as a differential diagnosis in the setting of acute abdominal pain and hemodynamic instability. in a patient known to have a medical history of pan and aneurysm formation, routine monitoring for disease progression should be performed. introduction: 300,000 surgeries are done annually in the us for small bowel obstruction, which is most commonly caused by intraabdominal adhesions, malignancy, and hernias. 0.2 to 5.8% of small bowel obstructions are due to paraduodenal hernias. paraduodenal hernias carry a 50% lifetime risk of incarceration with a mortality of 20 to 50%. case report: the patient is a 78-year-old male who presented with severe upper abdominal pain for one day. he was passing flatus and had had a bowel movement the previous day. on examination, the patient was tender over the upper abdomen. 
computed tomography (ct) scan with iv contrast showed a mesenteric swirl sign. the decision was made to perform diagnostic laparoscopy with possible small bowel resection. intraoperatively, a mesenteric defect was noted posterior and to the right of the duodenum, through which bowel was herniating. the herniated bowel and its mesentery were edematous. the defect was sutured closed, taking seromuscular and mesenteric bites through the stomach, jejunum, and mesentery. the patient had an uneventful recovery postoperatively and was discharged on postoperative day 2. he returned on postoperative day 28 with periumbilical pain which resolved with conservative management. he was followed up 6 weeks postoperatively and was doing well. discussion: paraduodenal hernias are the most common internal hernias. they are seen more often in males. they are caused by failure of the counterclockwise rotation of the prearterial segment of the embryonic midgut in weeks 2 to 12 of embryonic development. paraduodenal hernias usually present with chronic intermittent abdominal pain, weight loss, nausea, and vomiting. they may present acutely with symptoms of bowel obstruction. peritoneal signs are often not appreciated due to retroperitoneal position of the hernia. ct scan of the abdomen often shows clustering of bowel loops, which cannot be displaced on repositioning the patient. if imaging is equivocal, diagnostic laparoscopy may be undertaken. surgical correction consists of reducing the bowel, resecting nonviable segments, and either closing the defect or opening the sac laterally into the general peritoneal cavity. in summary, paraduodenal hernias are a rare cause of bowel obstruction and as such present a challenge in diagnosis and early intervention. diverticulosis of the appendix is a rare disease found in 0.004-2.1% of appendectomies, first described in 1893. 
the clinical presentation may be acute inflammatory, with or without appendicitis, or it may be an incidental finding in an uninflamed appendix. the congenital type is rare and has all the bowel wall layers. it most frequently presents as a pseudodiverticulum, which lacks the muscularis layer. the pathogenesis of appendiceal diverticula is not completely elucidated. its symptoms are similar to those of early acute or chronic appendicitis and it is often misdiagnosed as such. while appendectomy is curative for both entities, it is important to distinguish diverticulum of the appendix from appendicitis, as it is four times more likely to perforate and may be a sign of an underlying neoplasm. we report a very rare giant pseudodiverticulum of the appendix in a 69-year-old male presenting with chronic abdominal discomfort for months. abdominal x-ray showed an abnormal gaseous finding. physical exam was significant for a soft rubbery mass in the periumbilical region. blood work revealed slight elevation of c-reactive protein. preoperative ct and mri showed a 9-centimeter cavity with a thin wall, located at the tip of the appendix, with periappendicular fat stranding. given the concern for impending obstructive symptoms and the chronic abdominal pain, we decided to perform the resection laparoscopically. the soft mass arose from the tip of the appendix. there were dense adhesions between the appendix, mesentery, and sigmoid colon. after adhesiolysis, laparoscopic appendectomy was performed with an endo-gia stapler. the specimen was extracted through a small incision without spillage. the hospital course was uneventful and the patient was discharged on post-operative day 4. the pathological finding was consistent with a pseudodiverticulum of the appendix, which lacked a muscularis layer; the inner wall of the cavity was lined with a scattered cuboidal epithelial layer in continuity with the appendiceal mucosal membrane. 
here we report a successful laparoscopic resection of an extremely rare giant chronic pseudodiverticulum of the appendix. yvette farran, ms, jorge a miranda, ms, benjamin clapp, md, elizabeth de la rosa, md; texas tech university health sciences center introduction: sigmoid colon intussusception is rarely encountered and, given its vague symptomatology, diagnosis and management can be difficult. the treatment of an intussusception in adults is different than in children. lipomas are the causative etiology for intussusception in up to 0.83% of cases, and up to 70%-90% of these patients require surgical resection for treatment. methods: this is a case report of a 62-year-old male who presented with two weeks of worsening abdominal pain and distention. physical exam was pertinent only for abdominal pain on light palpation, guarding, and moderate distress. ct scan of the abdomen and pelvis demonstrated a lipomatous mass causing complete obstruction of the sigmoid colon with intussusception. this was managed with laparoscopic sigmoidectomy. the patient had an uncomplicated post-operative period and was discharged on post-operative day 2. pathology of the lipomatous mass confirmed a benign lipoma. discussion: intussusception is rarely encountered in clinical practice in adults and constitutes 5% of all intussusception cases. lipoma-induced sigmoid intussusception with complete obstruction is rare. symptoms can be non-specific, as in this case. this case report highlights the importance of timely diagnosis and treatment of an intussusception in adult patients. ct scan is the gold standard for diagnosis and often shows a "target sign". other imaging techniques like ultrasound have shown adequate results but remain less effective than ct scan. the treatment in adults is not reduction by enema, as in pediatrics, but rather resection of the lead point. this can be appropriately done with a laparoscopic technique in most cases. conclusion: colonic intussusception is rare. 
surgery is the only treatment for an intussusception in adults, since the lead point needs to be removed, and can be attempted safely with a laparoscopic approach. joshua smith, md, brittany kern, md, amie hop, md, amy banks-venegoni, md; spectrum health case report: a 60-year-old female with no significant past medical history presented with a 10-year history of nocturnal cough that had worsened over the past 3 months, with associated regurgitation. she underwent esophagogastroduodenoscopy (egd) that showed a tortuous esophagus and a tight lower esophageal sphincter that required dilation. she received an upper gastrointestinal (ugi) contrast study that showed a dilated, tortuous esophagus with 'bird's beak' tapering, consistent with achalasia, as well as a large epiphrenic diverticulum measuring 7 × 7 cm. esophageal manometry confirmed "pan-esophageal pressurization" consistent with type ii achalasia. given her symptoms in the presence of these findings, she elected to proceed with surgery. she underwent laparoscopic, trans-hiatal epiphrenic diverticulectomy, heller myotomy and dor fundoplication. extensive dissection allowed for approximately 8 cm of retraction down from the chest and we were able to come across it with a single blue load of a 60 mm linear cutting stapler. post-operatively, she tolerated the procedure well with immediate improvement in her symptoms. her ugi on post-operative day 1 showed no evidence of leak; she tolerated a soft diet and was discharged home. she was seen at 2-week and 1-year follow-up appointments with complete resolution of symptoms. discussion: epiphrenic diverticula in the presence of achalasia have an occurrence rate of 25%. large diverticula (>5 cm) are even more rare, with only a handful of case reports in the literature. historically, thoracotomy or, more recently, thoracoscopic approaches have been required for resection. 
however, thoracic approaches are associated with a 20% increase in morbidity, namely due to staple line leak and the resulting pulmonary complications. only a single case report exists on our review of the literature that demonstrates successful trans-hiatal laparoscopic resection, without post-operative complications, of a diverticulum of this size. the shortest documented length of hospital stay postoperatively for similar cases is 4 days, while the average is 7-10 days or longer for those with complications. our patient was able to go home on post-operative day 1 after a normal ugi and was tolerating a soft diet. not only does this case show that a large epiphrenic diverticulum can be successfully resected via the trans-abdominal laparoscopic approach, it also makes the argument that patients undergoing any minimally-invasive epiphrenic diverticulectomy and myotomy, with or without fundoplication, may be successfully managed with early post-operative contrast studies and dietary advancement, thus decreasing their length of hospitalization and overall cost of treatment. kazuma sato, shunji kinuta, koichi takiguchi, naoyuki hanari, naoki koshiishi; takeda general hospital background: situs inversus totalis (sit) is a rare congenital condition in which the abdominal and thoracic organs are located opposite to their normal positions. few cases of laparoscopic surgery for gastric cancer with sit have been reported. we report a case of laparoscopic distal gastrectomy with d2 lymph node dissection performed for gastric cancer in a patient with sit. case description: an 80-year-old woman was admitted to our hospital for treatment of gastric cancer that was diagnosed by esophagogastroduodenoscopy (egd) at a local clinic after she experienced anemia and nausea. egd identified an irregularly shaped gastric ulcer located at the anterior side of the lesser curvature of the antrum. a biopsy revealed a moderately differentiated adenocarcinoma. 
she was then diagnosed with sit by chest radiography and abdominal computed tomography (ct). the abdominal ct showed that all organs were inversely positioned and that the wall of the antrum had thickened; it also showed the lymph nodes in the lesser curvature of the stomach, without distant metastasis or an abnormal course of vascularity. the patient was clinically diagnosed with t3n1m0 stage iiia gastric cancer according to the japanese classification of gastric carcinoma. a laparoscopic distal gastrectomy with d2 lymph node dissection in accordance with the japanese gastric cancer treatment guidelines as well as a roux-en-y anastomosis due to an esophageal hiatal hernia were performed. the surgery was safely and successfully performed, although it required more time than usual because the inverted anatomic structures were repeatedly examined during the surgery. the postoperative course was positive, and the patient was discharged on postoperative day 7 without any complications. the final stage of this case was pt1bn0m0 stage ia. currently, the patient is doing well without recurrent gastric cancer. conclusion: gastric cancer with sit is an extremely rare occurrence. we experienced a case of laparoscopic distal gastrectomy with d2 lymph node dissection performed for gastric cancer in a patient with sit. we simulated the operation for sit by viewing left-right reversed ordinary surgical videos. the abdominal ct angiography with a three-dimensional reconstruction helped reveal any variation and confirmed the structures and locations of vessels before the surgery. the operation could safely be performed following the standardized surgical technique by reversing the surgeon standing position and trocar position. sternum or chest wall resection is performed for a variety of conditions such as primary and secondary tumors of the chest wall or the sternum. 
sternum reconstruction has been a complex problem in the past due to intraoperative technical difficulties, surgical complications, and respiratory failure caused by chest wall instability and paradoxical respiratory movements. advances in the fields of surgery and anesthesia have resulted in more aggressive resections. nowadays neither the size nor the position of the chest wall defect limits surgical management, because resection and reconstruction are performed in a single operation that provides immediate chest wall stability. chest wall resection involves resection of the ribs, sternum, costal cartilages, and the accompanying soft tissues, and the reconstruction strategy depends on the site and extent of the resected chest wall defect. here i present the youngest case ever reported, a 2-year-old girl with rhabdomyosarcoma involving the sternum. i will present the management challenges and the reconstruction options. introduction: neuroendocrine malignancies constitute 0.5% of all cancers. the gastrointestinal tract is the commonest site, followed by the lung. the last decade has seen a steady increase in their incidence. this is a case series of twenty-five such tumours and their clinicopathological characteristics. materials and methods: twenty-five patients with neuroendocrine tumours of the gastrointestinal tract were studied with reference to their demographic and clinicopathological characteristics. apart from routine pathological examination, these tumours were also checked for e-cadherin expression as an independent marker of aggressive disease. results: the age of our patients ranged from 18 to 67 years. we had 13 female and 12 male patients, contradicting the female preponderance reported in the literature. the vast majority of the tumours we encountered were from the stomach and duodenum, with 5 and 12 patients, respectively. 
two tumours were at the gastroduodenal junction, two each from the appendix, small intestine and pancreas, and one each from the rectum and gall bladder. this is in contrast to literature showing that neuroendocrine tumours of the git most commonly arise from the appendix and small bowel, followed by the rectum, stomach and duodenum. two of these tumours were functional. the diagnosis was confirmed by immunohistochemistry staining for chromogranin a and synaptophysin. grading was done using who criteria, which take into account the mitotic count, ki-67 index and necrosis. 21 of our cases were grade i. further, immunohistochemistry for e-cadherin showed that absence of expression correlated with more aggressive clinical behavior. 18 of the twenty-five patients were operable at presentation, and standard resections depending on the organ of origin, with adjuvant therapies as required, were performed. 5 could only be given palliative care. the 2 functional tumours were treated with radiolabelled somatostatin analogues following uptake studies. conclusion: as neuroendocrine tumours are relatively rare, information about them is not as abundant as for other malignancies. absence of e-cadherin expression is associated with more aggressive disease. more studies are required that document the pathological characteristics and clinical behavior in order to offer well-rounded treatment protocols that treat not only the primary, but also the generalized effects of the secretions these tumours produce. targeted chemotherapy is gaining prominence, but more specific drugs directed at the plethora of receptors these tumours express could potentially revolutionize treatment. (1). unfortunately, there are no publications from denmark. we would like to present the first, to our knowledge, reported case of double gallbladder in denmark. double gallbladder is a rare anomaly with a prevalence of 1:3800 in autopsy studies, first described by boyden in 1926 (2). 
there are several classifications of double gallbladder that are based on the relation between the gallbladder, cystic duct and common bile duct (2, 3). non-specific symptoms and inadequate imaging are possible causes of a lack of awareness of the condition. removal of all gallbladders, preferably laparoscopically and with special attention to the biliary anatomy, is recommended (4). method: case report with review of the literature. a 55-year-old female patient of polish origin was hospitalized due to upper right quadrant pain. on admission, clinical manifestations and paraclinical abnormalities of pancreatitis were present. ultrasound scanning of the abdomen showed bile stones, ultrasonic manifestations of acute cholecystitis and normal intra- and extrahepatic bile ducts. because of elevated liver enzymes, mrcp was performed and showed a double gallbladder, double cystic duct and signs of annular pancreas. scheduled ercp confirmed bile stones in the cbd and a double gallbladder with double cystic duct, h-type according to the harlaftis classification (3). because of a minor retroperitoneal perforation, a second ercp was needed for removal of all stones. the patient was then scheduled for laparoscopic cholecystectomy with perioperative cholangiography. conclusion: anatomical variations of the gallbladder such as double gallbladder are rare and often remain unnoticed. they are most often identified because of clinical symptoms, on diverse imaging studies, during surgery or at autopsy. as most of them are not expected, they can contribute to complications during surgery. careful preoperative imaging is very important to prevent accidental bile duct injury. judging by the number of case reports, double gallbladder seems to be slightly more common than expected. an interesting question is whether a gallbladder discovered during an unrelated radiological investigation in a patient who previously underwent a cholecystectomy can represent an undetected case of double gallbladder. 
we would like to present a review of the literature as well as images from mrcp, ercp and laparoscopy. michael jaroncyzk, md, courtney e collins, md, ms, vladimir p daoud, md, ms, ibrahim daoud, md; st. francis hospital; hartford ct introduction: several decades ago, surgical training was saturated with procedures to treat peptic ulcer disease. since the introduction of histamine-2 blockers and proton pump inhibitors, these procedures have dwindled significantly. however, there are still instances where patients require surgical intervention for peptic ulcer disease. perforation is one of the indications for surgery. the surgical options to treat a perforated peptic ulcer are numerous. one of the most common options is a graham patch. we present a case of a patient with a perforated ulcer who did not have omentum available for the repair. methods and procedures: recently, a 64-year-old female with a past history of an open total abdominal hysterectomy and bilateral salpingo-oophorectomy presented as an outpatient with chronic lower abdominal pain. she underwent a work-up and imaging that did not reveal any pathology. at diagnostic laparoscopy, she had diffuse lower abdominal adhesions, which were lysed. she was discharged on the same day, but presented to the emergency department two days later with severe abdominal pain and fevers. the work-up revealed tachycardia, diffuse abdominal tenderness with peritoneal signs, leukocytosis and a large amount of free air on imaging. she was emergently brought to the operating room for a diagnostic laparoscopy. during laparoscopic exploration, the lower abdominal cavity appeared normal for a recent lysis of adhesions. attention was turned to the upper abdominal cavity to find the pathology. bile-stained free fluid and peri-gastric exudates were identified, but no perforation was visualized. intra-operative endoscopy revealed the site of perforation in the antrum on the lesser curvature. 
a biopsy was performed and the decision was made to perform a graham patch. however, the omentum was already densely adherent within the lower abdominal cavity from the enterolysis. given the close proximity of the falciform ligament, it was mobilized laparoscopically and the pedicle was used as a graham patch. the patient recovered without any additional issues. the biopsy was reported as a chronic gastric ulcer. conclusion: surgical history has given us many options to treat peptic ulcer disease that are not nearly as common as they were decades ago. perforated ulcers can be managed laparoscopically, and graham patches are a common choice for repair. however, the lack of omentum for a proper pedicle flap can pose a problem in some patients. we have shown in this patient that a falciform pedicle flap can be successfully used as a substitute. laparoscopic management of boerhaave's syndrome after a late presentation: a case report and literature review tahir yunus, hager aref, obadah alhallaq; imc background: boerhaave's syndrome involves an abrupt elevation in the intraluminal pressure of the oesophagus, causing a transmural perforation. it is associated with high morbidity and mortality. its nonspecific presentation may contribute to a delay in diagnosis and result in poor outcomes. treatment is challenging, yet early surgical intervention is the most important prognostic factor. case presentation: we present a case of a thirty-two-year-old male with a long medical history of dysphagia due to a benign oesophageal stricture. he presented with acute onset of epigastric pain after severe emesis. based on a computed tomography scan, he was diagnosed with boerhaave's syndrome. as he presented with signs of shock, immediate surgical exploration was mandated, and he was taken for laparoscopic primary repair, with an uneventful postoperative recovery. the golden period of the first 24 hours after the insult still applies for cases of oesophageal perforation. 
the rarity of these cases makes a comparison between the various treatment methods difficult. our data support the use of laparoscopic operative intervention with primary repair as the mainstay of treatment for the management of oesophageal perforation. lipomas of the gastrointestinal tract are rare benign soft tissue tumors that are often discovered incidentally. these lesions are often asymptomatic, but have occasionally been reported to have clinical significance, as described in this case report. a 40-year-old male initially presented to his primary care physician's office with a three-week history of vague intermittent abdominal pain. his pain was located in the mid epigastrium and was associated with mild nausea. past medical history was significant for hyperlipidemia and a right-sided goiter, and he denied any previous surgeries. outpatient work-up revealed a microcytic anemia, intermittent melena and hemoccult-positive stools. the patient was referred to hematology and gastroenterology. endoscopies revealed gastritis, and small internal and external hemorrhoids. he underwent an outpatient ct scan which demonstrated a 6.0 × 2.3 cm mass within the lumen of the jejunum causing a long-segment non-obstructing intussusception. subsequently, the patient was referred to surgery and underwent a diagnostic laparoscopy. at the time of surgery, an approximately twelve-centimeter segment of proximal jejunum was identified intussuscepting into a distal limb. laparoscopic reduction of this segment was attempted; however, there was significant mesentery within the intussusceptum and the segment could not be safely reduced. therefore, the section of bowel was delivered through a small periumbilical incision. the intussusceptum was then able to be manually reduced from the intussusception. at this point a large mass was palpated inside the lumen of the jejunum. a side-to-side, functional end-to-end small bowel resection and anastomosis was performed. 
the bowel was returned to the abdomen and the abdomen was re-insufflated. the remainder of the small bowel was run and no additional lesions were identified. final pathology revealed a 5.5 × 3.6 × 3.5 cm submucosal partially obstructing lipoma with ulceration at the tip. the patient recovered uneventfully and was discharged home on the second postoperative day. this case report describes a submucosal jejunal lipoma that was acting as a lead point for intermittent non-obstructing small bowel intussusception, while simultaneously causing a microcytic anemia due to ulceration at the tip of the lipoma. laparoscopic-assisted reduction and small bowel resection is a safe and effective treatment for gastrointestinal tract lipomas that are unable to be removed endoscopically. percutaneous endoscopic gastrostomy (peg) is an alternative to laparotomy for open gastrostomy tube placement to provide enteral nutrition for those who are unable to take nutrition orally. despite being less invasive, the procedure is not without its complications, one of which is the formation of a gastrocolocutaneous fistula. this case describes a 90-year-old female, with a peg placed 6 months prior, who presented with leakage of tube feeds from the gastrostomy site. as there was concern for possible ileus or obstruction, an upper gi series was completed, which seemed to indicate dislodgement of the g-tube. the g-tube was replaced and a follow-up gastrografin study was repeated, which now indicated that the g-tube was within the lumen of the colon. soon thereafter fecal matter was noted to be draining around the g-tube site; however, the patient was without clinical signs of peritonitis. the patient was managed non-surgically as she was a poor surgical candidate with multiple prohibitive co-morbidities. the g-tube was removed at the bedside by cutting it flush at the skin level, with the anticipation that the remainder of the tube would be excreted with bowel movements. 
the decision was then made to attempt closure of the gastric fistula endoscopically, which was accomplished with hemoclips. a follow-up upper gi study 72 hours later showed no extravasation of contrast through the gastric fistula. the colocutaneous fistula also resolved spontaneously over the next several days. placement of a peg tube through the transverse colon can present with varying ill effects, including diarrhea, pneumoperitoneum, peritonitis, gram-negative pulmonary infection or feculent vomiting with the formation of a gastrocutaneous fistula. historically, treatment for a gastrocolocutaneous fistula has been exploration and excision of the fistula tract with resection of the involved colonic segment. however, there is currently no gold standard for management, which ranges from conservative to surgical depending on the presenting symptoms. if the peg becomes dislodged, with spillage from the colon and resultant peritonitis, surgical exploration is needed, with removal of the g-tube and repair of the stomach and colon. on the other hand, non-surgical management has been suggested for a well-established fistula. fistula closure may be spontaneous; however, it can be inhibited by delayed gastric emptying or leakage of gastric secretions through the fistula. endoscopic clipping of the fistula tract employing hemoclips is a treatment option. median arcuate ligament syndrome (mals) is a rare etiology of abdominal pain caused by narrowing of the celiac artery at its origin by the median arcuate ligament, with relative hypoperfusion downstream. patients suffer from post-prandial abdominal pain, abdominal pain associated with exercise, nausea, and unintentional weight loss. diagnosis is historically made by demonstrating elevated celiac artery velocities and respiratory variation on dynamic vascular studies. 
standard of care for mals patients is laparoscopic celiac artery dissection with release of the median arcuate ligament. at our institution, we have encountered fourteen patients (eleven female, three male) diagnosed by elevated peak velocity in the celiac artery on duplex ultrasound in conjunction with ct angiogram, mr angiogram, arteriogram, or multiple modalities. all but one patient had multiple diagnostic imaging modalities, the most common being ct angiogram; eight patients had invasive imaging. the mean age at presentation was 58.7 years in men and 47.8 years in women. on average, male patients presented with a longer duration of symptoms, 17.7 years (range 3-30 years), as compared to women, 3.3 years (range 1-15 years). symptoms were fairly consistent between genders and included nausea, emesis, abnormal bowel habits, early satiety, post-prandial pain, and weight loss. all male patients reported at least two symptoms, most commonly nausea and post-prandial pain. among female patients, 82% reported three or more symptoms. notably, post-prandial pain was universal among men and women, while weight loss was exclusive to female patients, reported by 73%. pre-operative peak velocities were recorded in all but one patient, with mean values differing between female and male patients, 156 cm/s versus 345 cm/s. post-operative duplexes were obtained in seven patients; pooled data show a mean decrease of 210 cm/s, to an average of 112 cm/s after decompression. in all cases, the celiac artery trifurcation was visualized and noted to have a distinct change in artery caliber after division of the ligament. in total, 79% of patients reported significant improvement, with return to a normal diet and healthy weight gain post-operatively. of the three without complete resolution, two were diagnosed with motility disorders and one was lost to follow-up. 
our experience demonstrates that laparoscopic release of the median arcuate ligament in patients with significant flow limitation of the celiac artery on dynamic and anatomic imaging can be a successful treatment option for patients with recalcitrant pain and gastrointestinal dysfunction and no alternative diagnosis. matthew a goldstein, ma, kirill zakharov, do, sharique nazir, md; nyu langone brooklyn adhesions are fibrotic bands that form between and among abdominal organs. the most common cause of abdominal adhesions is previous surgery in the area, as well as radiation and infection, and they frequently occur with unknown etiology. these bands occur among abdominal organs, commonly the small bowel, and can lead to obstruction or remain asymptomatic, as in the patient discussed here. congenital abdominal adhesions are rare and have received little attention in research. the patient described in this case is a 25-year-old female with a past medical history of morbid obesity, a bmi of 45, and hypertension, and no past abdominal surgical procedures. the patient presented in august 2017 for bariatric surgical consultation and was ultimately taken for an attempted laparoscopic sleeve gastrectomy. upon entering the abdomen, significant adhesions were encountered and an additional attending was called to assist in identifying the stomach. the splenic flexure was found to be plastered to the diaphragm, and the descending and transverse colon were adhered to the anterior surface of the stomach. additionally, small bowel adhesions encased the area between the right and left hepatic lobes as well as the caudate lobe. after extensive enterolysis, the pylorus remained the only identifiable portion of the stomach. the patient also demonstrated significant hepatomegaly, and a wedge resection was performed. the extent of adhesions and matting of the small and large bowel obscured the view of the stomach, and the procedure was deemed too dangerous and terminated. 
this case represents the uncommon scenario in which an abdomen with no prior surgical history presents with extensive, obscuring adhesions. one recent study describes the influence of cytokines and proinflammatory states as contributors to obstruction and malrotation in children, but this patient demonstrated no such history. further investigation is needed to determine potential etiologies of symptomatic and non-symptomatic congenital adhesions among bariatric patients who fail conservative treatment. today the patient is doing well, and the surgical team will attempt to complete the procedure in the coming months. laparoscopic splenulectomy: an interesting case report riva das, md 1 , daniel a ringold, md 2 , thai q vu, md 2; 1 orlando health, 2 abington jefferson health introduction: splenules, or accessory spleens, are a rare disease entity. most often, they are asymptomatic and found incidentally during radiographic workup for an unrelated problem. torsion can cause a splenule not only to become symptomatic, but also to confound the results of usual diagnostic studies. case description: a 61-year-old female patient with a history of uncomplicated hypertension, hyperlipidemia, hysterectomy, cholecystectomy, spinal surgery, and partial left nephrectomy presented to the hospital with a two-week history of intermittent left upper quadrant abdominal pain. she denied any similar episodes in the past, or any associated symptoms. further investigation with a ct scan of the abdomen and pelvis showed an acute inflammatory process in the left upper quadrant in the same location as some colonic diverticulosis, as well as a 4.5 cm soft tissue mass. this indeterminate soft tissue mass was described as having decreased attenuation compared with the spleen. the differential diagnosis for this mass included malignancy, an atypical splenule, or an infectious/inflammatory mass. an mri was recommended for further evaluation, but did not reveal any additional significant findings. 
nuclear medicine liver/spleen scintigraphy was performed, which showed no focal activity associated with the indeterminate left upper quadrant mass, making it unlikely to reflect a splenule and making malignancy the diagnosis of exclusion. following a period of observation with analgesia, intravenous antibiotics, and bowel rest, her abdominal pain did not resolve, and the decision was made to proceed with operative exploration. diagnostic laparoscopy revealed an approximately 5 cm spherical mass in the left upper quadrant located just below the inferior aspect of the spleen. the superior aspect of the mass gave rise to a vascular pedicle which, upon tracing, seemed to originate from the splenic hilum. this pedicle was easily ligated, and the mass removed. pathology revealed an extensively infarcted hemorrhagic nodule with organizing thrombus and an attached thrombosed artery, consistent with an infarcted splenule due to torsion along its own axis. the patient had an uncomplicated postoperative course. discussion: this case report demonstrates the unusual presentation and workup of a patient who was ultimately diagnosed with an infarcted splenule, despite imaging findings that did not correlate with, and may even have confused, her diagnosis. scintigraphy, which is normally the gold standard for diagnosing and localizing accessory splenic tissue, was in this case unrevealing, due to the inability of the tracer to traverse the torsed vascular pedicle. operative exploration was both diagnostic and therapeutic. patients were treated with antibiotics guided by culture and sensitivity reports and with local wound care. one patient, who was in sepsis at presentation, died. conclusion: chikungunya virus was found circulating in rodents in pakistan as early as 1983. 
duodenal ulcer perforation, a common surgical emergency in our part of the world, usually presents as a pinpoint perforation in the anterior wall of the first part of the duodenum, unlike in already diagnosed cases of chikungunya disease, where a slit-like perforation is noted in the anterior wall of the first part of the duodenum. the literature and consensus relate this perforation to the excessive use of nsaids for the arthritis that is a usual presentation of chikungunya disease, but the unusual presentation is still to be explained. introduction: bouveret's syndrome is a rare form of gallstone ileus in which impaction of a gallstone in the duodenum results in a gastric outlet obstruction. gallstone ileus accounts for approximately 2-3% of all cases of small bowel obstruction. the terminal ileum is the most common location for a calculus to cause obstruction, followed by the proximal ileum, jejunum and duodenum/stomach, respectively. open and laparoscopic surgery has previously been the mainstay of treatment for bouveret's syndrome; however, with the advent of new endoscopic techniques and instruments there has been increasing success in endoscopic management. this case report looks at a patient with a gastric outlet obstruction from a gallstone, and discusses the current literature regarding diagnosis and management. case: a 69-year-old male presented with a several-day history of epigastric abdominal pain and multiple episodes of nonbloody, nonbilious emesis. he had previously been diagnosed with cholelithiasis, but had refused surgery at that time. on admission the patient was found to have a leukocytosis of 13.5. an ultrasound was performed, in which the images were limited due to pneumobilia. a subsequent ct scan revealed pneumobilia and a large 2 cm gallstone impacted in the first portion of the duodenum causing a gastric outlet obstruction. the patient underwent failed endoscopic attempts at removal and ultimately required a laparotomy and enterotomy with stone extraction. 
discussion: bouveret's syndrome is a rare variant of gallstone ileus. with newer endoscopic techniques and electrohydraulic lithotripsy, there has been increasing success with endoscopic retrieval of the impacted gallstones. there is some controversy regarding the need for definitive operative management. stone extraction without cholecystectomy and fistula repair has been shown to have fewer postoperative complications as well as lower mortality rates compared to when a cholecystectomy and fistula repair have been performed. total mesorectal excision (tme) with neoadjuvant chemoradiotherapy (nacrt) is standard treatment for rectal cancer and has resulted in a decrease in local recurrence. however, nacrt has shown no significant overall survival benefit and has some adverse effects, mainly caused by the radiation therapy. recently, the usefulness of neoadjuvant chemotherapy (nac) has been reported. we retrospectively assessed the efficacy and safety of neoadjuvant mfolfoxiri compared with nacrt, followed by laparoscopic surgery. a total of 76 patients undergoing laparoscopic surgery for lower rectal cancer (clinical stage: ii or iii) from july 2014 to february 2017 in our department were retrospectively evaluated. 40 patients underwent nac, and 36 patients underwent nacrt. the following data were collected: pathological complete response (pcr), histological grade, down-staging, radial margin (rm) and postoperative complications. histological grade was defined as follows: tumor cell necrosis or degeneration present in less than one third of the tumor area (grade 1a), between one and two thirds (grade 1b), more than two thirds with viable cells remaining (grade 2), and complete response (grade 3). the two groups were demographically comparable. down-staging did not differ between the two groups. histological grade (≥ grade 1b) and pcr were significantly higher in the nacrt group than in the nac group (p < 0.05). 
rm showed no significant difference between the groups, but there was a tendency toward securing a negative rm in the nac group (95% vs. 83.3%, p=0.06). aims: increasing evidence suggests that cme may improve overall and disease-free survival in colon cancer. our aims were to investigate the safety and efficacy of single-incision laparoscopic cme colectomy (silcc) compared to multiport cme laparoscopic colectomy (mpclc), providing the first meta-analytical evidence. methods: pubmed, scopus and the cochrane library were searched. studies comparing silcc to mpclc in adults with colon adenocarcinoma were included. the studies were critically appraised using the newcastle-ottawa scale. statistical heterogeneity was assessed with x2 and i2. the symmetry of funnel plots was examined for publication bias. results: one randomized and four case-control trials were included (540 silcc vs 609 sl). introduction: obesity has been associated with increased morbidity following total proctocolectomy with ileal pouch-anal anastomosis (tpc-ipaa). however, the incremental added risk of increasing obesity class is not known. the aim of this study was to evaluate the additional morbidity of increasing obesity class for tpc-ipaa. methods: after ethics board approval, the acs-nsqip database (2005-2015) was accessed to identify patients who underwent elective tpc-ipaa. body mass index (bmi, kg/m 2 ) was classified as normal (18.5-24.9), overweight (25.0-29.9), obesity class-i (30-34.9), obesity class-ii (35-39.9) and obesity class-iii (≥40). primary outcomes were overall surgical site infection (ssi) and organ-space infection (osi). secondary outcomes were 30-day major morbidity and length of hospital stay (los). aim: in curatively intended resection of sigmoid and rectal cancer, many surgeons prefer to perform ligation at the root of the inferior mesenteric artery (ima), a high tie, for oncological reasons. 
however, ligation of the ima has been known to decrease blood flow to the anastomosis. there are few reports of patients undergoing a reduced port laparoscopic approach (rps), including the single-incision laparoscopic approach (sils), even among those undergoing laparoscopic lymph node dissection around the ima with preservation of the left colic artery (lca). our objective was to evaluate the quality of this procedure regarding the application of rps for the treatment of sigmoid and rectal cancer. methods: the feasibility of this procedure was evaluated in 61 consecutive cases of rps for sigmoid and rectal cancer. a lap protector (lp) was inserted through a 2.5 cm transumbilical incision, an ez-access was mounted to the lp, and three 5-mm ports were placed. almost all procedures were performed with standard laparoscopic instruments using a flexible scope (sils). a 12 mm port was inserted in the right lower quadrant, mainly in rectal cancer surgery (sils+1). our method involves peeling off the vascular sheath from the ima and dissection of the ln around the ima together with the sheath. results: lymph nodes around the ima were dissected with preservation of the lca in 26 cases (group a). the ima was ligated at its root in 35 cases (high tie, group b). in group a, 11 patients were treated with sils and 15 patients with sils+1. in group b, 15 patients were treated with sils and 20 patients with sils+1. median operative time was 187.7 and 154.8 min for groups a and b, respectively. the operative time was significantly longer in group a. estimated blood loss was 13.7 and 13.0 g, and the mean numbers of harvested ln were 21.7 and 23.8, respectively. none of the other operative results differed statistically between groups a and b. in this series, there was only one anastomotic leakage, in group b. conclusion: our method allows laparoscopic lymph node dissection equivalent to the high tie technique. 
the operative time tends to be longer; however, this procedure may reduce anastomotic leakage. introduction: the routine mobilization of the left colonic flexure in colorectal surgery is still a matter of debate. we present our surgical approach with data. this technique may increase surgical expertise/confidence when the maneuver is necessary. up to 40% of all splenectomies are for surgery-related injuries; 80% of those splenic injuries are treated by splenectomy. the iatrogenic splenic injury rate during colorectal surgery is 0.96%. iatrogenic splenic injuries create an increased risk of mortality/morbidity, extended operative time and in-hospital stay, and increased healthcare costs. risk factors for iatrogenic splenic injury are advanced age, adhesions and underlying pathology. obesity is not a risk factor. it is debated whether left colonic flexure mobilization is a risk factor for splenic injury. over-traction of the ligament is the most frequent mechanism of damage. the most dangerous surgical maneuver is dissection of the spleno-colic ligament. moreover, laparoscopy decreases the splenic injury risk by almost 3.5 times. some surgeons are reluctant to routinely take down the splenic flexure. materials and procedures: 129 robotic left colonic/rectal cases with a routine splenic flexure mobilization technique have been performed: left colectomy (n=74), rectal surgery (n=45), transverse colectomy (n=6) and pancolectomy (n=4). the conversion rate was 1.6%, ebl < 100 ml, with 1 postoperative leak (0.8%) and 0% iatrogenic splenic injuries. 
results: in our approach, there are 4 pathways that need to be mastered for splenic flexure mobilization: a) medial-to-lateral dissection (underneath the inferior mesenteric vein); b) lateral-to-medial dissection (from the lateral peritoneal reflection); c) access to the lesser sac with omental detachment from the transverse colon; d) access to the lesser sac through the gastrocolic opening, following the inferior border of the pancreas. the dissection should be closer to the colon than to the spleen. in our experience, the routine mobilization of the splenic flexure may have some advantages: a) better (tension-free) distal anastomosis formation; b) better perfusion of the proximal stump; c) wider oncological dissection; d) no need to go back to the flexure when the proximal stump is too short; e) mastering a surgical maneuver useful in other procedures (e.g. distal pancreatectomy). the theoretical drawbacks of routine splenic flexure mobilization can be: a) longer operative time, which is on average increased by 35 minutes; b) risk of splenic injuries; in our experience, no splenic injuries have been registered. conclusions: technical accuracy with cautious dissection/visualization can reduce the rate of iatrogenic splenic damage. laparoscopy decreases the splenic injury rate. robotic surgery may have the potential to further reduce these complications. our data suggest that the routine mobilization of the splenic flexure has more advantages than drawbacks and can reduce the iatrogenic splenic injury rate. more trials are needed in order to confirm our findings. introduction: the robotic stapler with endowrist™ technology (intuitive surgical, inc.) offers a larger range of motion and articulation compared to the laparoscopic device, and may provide some benefits in difficult areas like the pelvis. to date, few studies have been published on the application of robotic endowristed stapling. 
we present our preliminary experience using the robotic stapler in low anterior rectal resection (larr) with total mesorectal excision (tme) for rectal cancer. methods and procedures: between march 2016 and september 2017, 24 patients underwent elective robotic larr with tme and primary colorectal anastomosis within the eras program. patient demographics, intra-operative data and post-operative outcomes were compared between the endowrist™ 45 robotic stapler group (rs group) and the laparoscopic stapler group (ls group). results: the two groups were homogeneous in terms of demographic and clinical characteristics. thirteen patients (10 males) and 11 patients (8 males) were included in the rs and ls groups, respectively. seven patients received preoperative chemoradiation in the rs group, 8 in the ls group. there was no difference in intra-operative blood loss or total operative time. the median number of stapler fires was 2 (range, 1-3) in the rs group and 3 (range, 2-4) in the ls group. a loop ileostomy was fashioned in 8 patients in the rs group (61.5%) and 8 patients in the ls group (72.7%). the 30-day mortality was nil. two anastomotic leaks were detected in the rs group (15.4%) and 2 (18.2%) in the ls group, all treated conservatively. the mean length of postoperative stay was 6.5±5.7 days in the rs group and 6.9±3.9 days in the ls group. conclusions: in our preliminary experience the application of the robotic stapler during larr with tme has been shown to be safe and feasible with acceptable morbidity. even though our case series is small, fewer stapler fires were required in the rs group compared to the ls group. we believe that the robotic stapler might allow more precise firing during pelvic surgery: this may explain the trend toward a decreased number of fires, which has been well documented in the literature to be related to a lower risk of anastomotic leak. further high-quality studies are required to confirm these findings. 
background and objectives: the present study aimed to investigate the safety and feasibility of laparoscopic ultra-low anterior resection (l-ular) with total mesorectal excision (tme) and transanal specimen extraction for rectal cancer located in the lower one-third of the rectum, and specifically to understand the oncological outcome of the operation. patients and method: a prospectively designed database of a consecutive series of patients undergoing laparoscopic ultra-low anterior resection for rectal malignancy with various tumor-node-metastasis (tnm) classifications from 1991 to 2012 at the texas endosurgery institute was analyzed. in this study ultra-low anterior resection is defined as low anterior resection for a malignant lesion in the distal 1/3 of the rectum. results: 51 ultra-low anterior resections were completed laparoscopically with tme and transanal specimen extraction. the operating time was 169.7 ± 31.1 minutes, and estimated blood loss during the procedure was 104.5 ± 72.1 ml. the length of the lesion from the anal verge measured with intraoperative colonoscopy ranged from 3.5 cm to 6.9 cm, and the shortest distance of the colorectal anastomosis from the anal verge was 1 cm. since a diverting ileostomy was routinely installed after l-ular, none of the patients was found to have anastomotic leakage; however, 3 patients developed anal stenosis within 6 months of follow-up. therefore the overall rate of postoperative complications was 5.9%. moreover, 4 patients were reported to have local recurrence within 2 years of follow-up, a rate of 7.8%. conclusions: l-ular is a safe and effective procedure for rectal cancer in the distal 1/3 of the rectum, with comparable local recurrence and postoperative complication rates, suggesting that l-ular can be considered a procedure of choice for rectal cancer at a very low location in the rectum. for rectal cancer, however, local full-thickness excisions are fraught with high local recurrence rates -even if limited to early and best-selected lesions. 
this corroborated observation is likely caused by a combination of missed nodal disease and direct implantation of tumor cells into the mesorectum, which upstages even early t1 lesions to at least a t3 lesion. the treatment of choice for invasive adenocarcinoma consists of an oncological total mesorectal resection, possibly with other modalities. rectal tumors of uncertain behavior can present a treatment dilemma between over-treatment vs under-treatment. concept: if the nature of a lesion is not certain or if contradictory results have been obtained, we propose a superficial local excision as a mucosal excisional biopsy to establish the diagnosis while avoiding interference with subsequent definitive treatment modalities by preserving the integrity of the external rectal wall and mesorectum. a benign final pathology concludes the treatment, whereas detection of invasive cancer will be managed with a subsequent oncological resection. methods: this is a case report of a 70-year-old woman found to have a 4.4 cm villous lesion in the mid to distal rectum without proven or disproven invasive cancer. a tems-guided mucosal resection of the rectal mass at 3 cm above the anal verge was performed, whereby the lesion was dissected off the underlying muscularis. results: with discrepant preoperative erus and mri staging (ut0-1 vs ct3), a technically successful mucosal resection of the large rectal mass was carried out. pathology revealed a tubulovillous adenoma without high-grade dysplasia or malignancy, and a complete resection. conclusion: tems mucosal excisional biopsy of rectal tumors of uncertain behavior allows for a less invasive diagnostic approach that may (a) be definitive treatment if the lesion is proven benign, or (b) confirm the need for more aggressive treatment without having burned any treatment bridges or upstaged an early tumor by violating the mesorectal plane. 
an oncologic resection with appropriate (neo-)adjuvant chemotherapy can be carried out while preventing the potential for tumor seeding at the initial operation. background: adequate visualization of the entire lumen of the large bowel is essential in detecting pathology and establishing diagnoses during colonoscopies. patients are provided dietary instructions and medications in order to achieve adequate bowel preparation. given the extensive amount of preparation required, some patients may be unable to adhere to the prescribed routine, resulting in rescheduling or repeat procedures and misallocation of limited resources. a number of previous quality-improvement efforts have been implemented to ensure adequate preparation prior to colonoscopy. objective: the objective of this study was to develop and assess the feasibility of a novel smart phone application for the delivery of bowel preparation instructions. methods: a novel smart phone application was developed to deliver bowel preparation instructions to patients undergoing colonoscopy for the first time. patients were included in the pilot phase of this project if they were undergoing a colonoscopy for the first time, had access to a smart phone, and had not previously had a bowel preparation for any reason. we excluded patients with a previous diagnosis of inflammatory bowel disease or colorectal cancer. patient surveys were administered at the time of colonoscopy. patients were questioned regarding the completeness of bowel preparation and adherence to bowel preparation instructions. patient questionnaires were completed to ascertain the ease of use of the smart phone application and any concerns that arose. quality of bowel preparation was assessed by the colonoscopist using the validated ottawa bowel preparation score. these are the pilot study results for the "coloprep" trial (nct03225560). results: a total of 20 patients were enrolled in the pilot phase of this study. 
patient satisfaction, adherence to instructions and ease of use of the smart phone application were ascertained. bowel preparation, as assessed by the colonoscopist, was reported. conclusions: this study assessed the feasibility of using a novel smart phone application for the delivery of bowel preparation instructions. this pilot study is the initial phase of a randomized controlled trial comparing a smart phone application vs. written instructions for the delivery of bowel preparation instructions. median follow-up was 44 months. there were no statistically significant differences found in clinical features and laboratory findings between the two groups. no statistically significant difference was found in the overall success rates or the complication rates between the conservative and the surgical arms (success rates: 90.1% and 86.5% (p=0.48) and complication rates: 8.6% and 12.2% (p=0.472), respectively). however, surgical treatment was better than conservative treatment in preventing recurrent diverticulitis (recurrence rates: 0% and 5.4% (p=0.031), respectively). conclusion: conservative management with bowel rest and antibiotics is a safe and effective treatment for right-sided uncomplicated colonic diverticulitis and may be considered as the initial option. on the other hand, laparoscopic diverticulectomy is also safe, effective and adequate. surgery is advocated to decrease the recurrence rate. introduction: it has been hypothesized that the structural and functional changes that develop in the defunctioned segment of bowel may contribute to the development of postoperative ileus (poi) after loop ileostomy closure (lic). as such, a longer interval between ileostomy creation and lic may increase poi. methods and procedures: after institutional review board approval, all patients who underwent lic at a single institution between 2007 and 2017 were identified. 
the primary endpoint, primary poi, was defined as either a) being kept nil-per-os on or after postoperative day 3 for symptoms of nausea/vomiting, distension, and/or obstipation or b) having a nasogastric tube (ngt) inserted, without postoperative obstruction or sepsis. secondary endpoints included length of hospital stay (los) and non-poi-related morbidity. patients who left the operating room with a ngt, had a planned laparotomy with a concomitant procedure at the time of lic, had a total proctocolectomy as their index operation, or had secondary poi, were excluded. patients were then divided into two groups based on timing from the index operation to lic (<6 months vs. objective: fecal incontinence can be a debilitating problem significantly diminishing productivity and quality of life. sacral neuromodulation has emerged as a first-line surgical treatment option in patients with fecal incontinence. though its efficacy has been rigorously evaluated in adult populations, there is scant data available on its use in pediatric patients with fecal incontinence. this case study discusses the management of fecal incontinence in a pediatric patient with a history of hirschsprung's disease utilizing sacral nerve stimulation. methods: our patient is a 15-year-old female with a history of hirschsprung's disease diagnosed in infancy and treated surgically with coloanal pull-through at the age of 1, who presented with complaints of fecal incontinence. the patient was wearing pads daily, noting frequent uncontrolled bowel movements as well as frequent missed days of school due to these symptoms. despite maximal medical management and pelvic floor physical therapy the patient continued to have 3-10 episodes of fecal incontinence daily. a ct scan with rectal contrast was used to establish her postoperative anatomy. anal manometry showed low rest/squeeze pressures, an absent rectoanal inhibitory reflex, and abnormal sensation. 
furthermore, during balloon expulsion testing the patient failed to pass the device. the patient was deemed a candidate for stage 1 testing with sacral nerve neuromodulation. during follow-up, the patient was noted to have resolution of her episodes of fecal incontinence and the second stage was completed. the patient continues to note 100% continence and dramatic improvement in her quality of life. conclusion: in this patient with a history of severe fecal incontinence due to hirschsprung's disease, sacral neuromodulation has had a significant impact on her quality of life. post-operatively she continues to have marked improvement in her symptoms, with 4-5 bowel movements a day and no recurrence of fecal incontinence. sacral neuromodulation is a promising treatment for fecal incontinence in the pediatric population. future research investigating the long-term efficacy of this treatment modality in the pediatric population is needed. cases of bowel obstruction caused by colorectal cancer recurrence and progression were excluded. 9 surgical cases (0.48%) were considered to be early bowel obstruction and 15 (0.81%) were classified as late bowel obstruction. left hemicolectomy (n=4, 3.03%) was a significantly more frequent procedure in early bowel obstruction, and abdominoperineal resection (n=5, 4.20%) was significantly more common in late bowel obstruction (p<.05). both early and late bowel obstruction included adhesive small bowel obstruction (n=19), internal hernia (n=3), and strangulation obstruction (n=2). internal hernia (n=3) and strangulation obstruction (n=2) occurred after left hemicolectomy and abdominoperineal resection, respectively. there was no apparent relationship between surgical procedures and adhesion regions (abdominal wall, intestinal tract, and pelvic cavity). the incidence rate of postoperative small bowel obstruction remained low, and laparoscopic colectomy was safely performed. 
however, countermeasures are needed because of the high frequency of both early and late bowel obstruction after left hemicolectomy and abdominoperineal resection, respectively. improved utilization of resources as an improvement. introduction: nowadays, treatment decisions about patients with rectal cancer are increasingly made within the context of a multi-disciplinary team (mdt) meeting. the outcomes of rectal cancer patients before and after the era of the multi-disciplinary team were analyzed and compared in this paper. the purpose of the present study is to evaluate the value of discussing rectal cancer patients in a multi-disciplinary team. methods and procedures: in our health institute, weekly mdt conferences were initiated in january 2015. meetings were attended by surgeons, radiologists, radiation and medical oncologists and key nursing personnel. all rectal cancer patients diagnosed and treated in 2014-2015 in the general surgery division of the "carlo urbani" hospital in jesi (an, italy) were included. data from rectal cancer patients treated in 2014, before the adoption of the mdt, and in 2015, after the adoption of the meetings, were then evaluated. datasets regarding demographics, tumor stage, treatment, and outcomes based on pathology after operation were obtained. during an mdt discussion patient history, clinical and psychological condition, co-morbidity, modes of work-up, clinical staging, and optimal treatment strategies were discussed. a database was created to include each patient's workup, treatments to date and recommendations by each specialty. ''demographic variables'' consisted of age at diagnosis, sex, body mass index, comorbidities, american society of anesthesiologists physical status classification system, clinical stage and pathological stage. 
other analyzed variables included baseline carcinoembryonic antigen (cea), the type of imaging, use of neoadjuvant chemo-radiation, restaging following neoadjuvant therapy, distance from the anal verge, operation type and use of adjuvant chemo-radiation. ''outcome variables'' consisted of a comparison for each group between clinical and pathological stage. results: sixty-five patients were included in this study: thirty patients in 2014 (pre-mdt) and thirty-five patients in 2015. demographic variables did not differ significantly between groups. preoperative clinical stage with baseline preoperative cea and postoperative pathological stage were analysed, too. with the mdt and the increased use of neoadjuvant therapy, a statistically significant reduction between clinical and pathological stage was verified in the patients of the mdt group. conclusions: the vast majority of rectal mdt decisions were implemented, and when decisions changed, it mostly related to patient factors that had not been taken into account prior to the adoption of the multi-disciplinary team. analysis of the implementation of team decisions is an informative process in order to monitor the quality of mdt decision-making. purpose: in japan, lateral pelvic node dissection (lpnd) is the standard treatment for locally advanced lower rectal cancer. there are few reports of patients undergoing single-incision plus one port laparoscopic (sils+1) lpnd even among those undergoing laparoscopic lpnd. the aim of this study is to describe our initial experience and assess the feasibility and safety of sils+1 lpnd for patients with advanced lower rectal cancer. methods: a lap protector (lp) was inserted through a 2.5 cm transumbilical incision, an ezaccess was mounted on the lp and three 5-mm ports were placed. a 12-mm port was inserted in the right lower quadrant. a single-institution experience of sils+1 lpnd for rectal cancer is presented. 
inclusion criteria, i.e. indications for lpnd, were lower rectal cancer with t3-4, or t1-2 rectal cancer with lateral lymph node metastasis, as described by the japanese society for cancer of the colon and rectum (jsccr) guidelines for the treatment of colorectal cancer. perioperative outcomes including operative time, operative blood loss, length of stay, postoperative complications, and histopathological data were collected prospectively. introduction: endoscopic stenting with a self-expandable metallic stent (sems) is a widely accepted procedure for malignant colorectal obstruction. we assessed the safety and efficacy of insertion of a sems followed by elective surgery as a 'bridge to surgery (bts)' in our institute. methods: this was a retrospective study in our institute. the data were collected from medical charts from january 2014 to june 2017. results: a total of 408 consecutive patients underwent radical surgery for colorectal malignancy during this period. in this series, 16 patients (3.9%) were diagnosed with malignant colorectal obstruction and intended for a bts. the stent was successfully placed in 13 patients, and all these patients were planned to undergo radical surgery. the 3 patients in whom stenting failed underwent stoma creation (2 patients) or hartmann's procedure (1 patient). the technical success rate was 81% and the clinical success rate was 100%. the median time from sems to surgery was 11 days (2-31 days). open and laparoscopic surgery was performed in 4 and 8 patients, respectively, except for one patient who refused radical surgery because of advanced age. the tumor could be resected in 12 patients (bts patients) with primary anastomosis. however, diverting stoma creation was needed in 3 patients and a decompression rectal tube was placed in 1 patient. there was no conversion to open surgery among the patients treated laparoscopically. there was no anastomotic leakage in bts patients. the median duration of postoperative hospital stay was 10 days (8-54 days). 
the overall postoperative complication rate was 23% (3/13), including 2 bowel obstructions and 1 anastomotic stricture. the median follow-up period was 580 days. during the follow-up period, 3 patients relapsed with peritoneal dissemination, ovarian metastasis, and liver and pulmonary metastases, respectively. the former 2 patients were diagnosed stage iva at the time of primary surgery. one patient died from sudden death. conclusions: our data suggest that routine use of sems insertion is a safe and effective procedure for malignant colorectal obstruction as a bts. moreover, the laparoscopic procedure was useful in bts patients. the short- and long-term surgical outcomes were also acceptable. introduction: serpin e1, also known as plasminogen activator inhibitor-1 (pai-1), is an inhibitor of urokinase-type plasminogen activator (upa) and tissue-type plasminogen activator (tpa). pai-1 plays a role in the regulation of angiogenesis, wound healing, and tumor cell invasion; overexpression has been noted in breast, esophageal, and colorectal cancer (crc). pai-1 is also a potent regulator of endothelial cell (ec) proliferation and migration in vitro and of angiogenesis and tumor growth in vivo. the plasminogen/plasmin system plays a key role in cancer progression by mediating extracellular matrix degradation and tumor cell migration. surgery's impact on plasma pai-1 levels is unknown. this study's purpose was to measure plasma pai-1 levels before and during the first month after minimally invasive colorectal resection (micr) for crc. objectives: retroflexion in the rectum at the end of a colonoscopy is a requirement for a complete endoscopic evaluation. retroflexion helps to visualize and detect polyps which would otherwise be missed. currently, new endoscopes are available which can perform retroflexion in the caecum. aim: our study aims to compare the polyp detection rate in the cecum and ascending colon with and without retroflexion in the cecum. 
methods: this is a single-center, single-operator, retrospective study. a total of two hundred patients were involved. a single-center irb waiver was obtained. patients were divided into two groups based on the presence/absence of retroflexion in the caecum during their colonoscopy. the data were obtained from 2017 records. group a (n=100) had colonoscopy without retroflexion in the caecum; group b (n=100) had colonoscopy with retroflexion in the caecum. inclusion criteria: patients undergoing screening colonoscopy between the ages of 40 and 85. results: group a: a total of 100 patients were screened and a total of 95 polyps were detected. the number of cecal polyps was 4 (4.2% of the total polyp count). the number of ascending colon polyps was 18 (19% of the total polyp count). on analyzing the pathology, 60% of the cecal polyps were tubular adenoma, 20% hyperplastic polyps and 20% lymphoid aggregates. of the 18 ascending colon polyps, 72% were tubular adenoma, 22% tubular adenoma and 6% tubulovillous adenoma. group b: a total of 100 patients were screened and a total of 80 polyps were detected. the number of cecal polyps detected was 5 (6.2% of the total polyp count). the number of ascending colon polyps was 11 (13%). on analyzing pathology, 80% of cecal polyps were tubular adenoma and 20% were sessile serrated. of the ascending colon polyps, 27% were tubular adenoma, 27% sessile serrated, 27% tubulovillous and 18% hyperplastic polyps. side events: two mass lesions were noted, in both group a and b. there was incomplete colonoscopy in groups a and b. conclusion: this retrospective analysis reveals a small increase in polyp detection in the cecum with retroflexion, especially in detecting sessile polyps, which have more malignant potential. however, a large multicenter analysis will be required to validate the above observation. background: while uncommon, rectal prolapse is a disabling condition affecting older females. 
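the abstract above contrasts polyp detection between two groups of 100 screened patients. as a purely illustrative sketch — not part of the study, and using invented placeholder counts rather than the study's data — a two-sided two-proportion z-test of the kind commonly used for such comparisons can be computed in plain python:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two proportions.

    x1/n1 and x2/n2 are events/trials in each group. Returns (z, p_value).
    Uses the pooled-proportion standard error and the normal approximation,
    so it is only appropriate when expected counts are reasonably large.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:  # degenerate case: all events or no events in both groups
        return 0.0, 1.0
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# hypothetical counts, NOT the study's data: 4/100 vs. 5/100 detections
z, p = two_proportion_z_test(4, 100, 5, 100)
print(round(z, 3), round(p, 3))
```

with counts this small and this close, the test is far from significance, which mirrors the abstract's caution that a much larger multicenter sample would be needed; for small expected counts an exact test would be preferred over the normal approximation used here.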
in a small subset of patients, concomitant organ prolapses with or without incarceration can lead to significant morbidity. as the field of laparoscopy has evolved, minimally invasive surgical options for rectal prolapse have led to improved quality and reduced morbidity for patients suffering this debilitating disease. methods: the 2012-2015 acs-nsqip databases were queried for patients undergoing a traditional or minimally invasive rectopexy based on cpt codes (45400, 45402, 45540, 45541 and 45550). emergent cases and patients with preoperative infections or inflammatory states were excluded. the primary outcome of interest was a 30-day postoperative composite morbidity score. statistical analysis incorporated multivariate analysis and binomial logistic regression, with p<.05 holding significance. results: these inclusion and exclusion criteria identified 2393 patients undergoing traditional (1113) and minimally invasive (1280) rectopexy for prolapse between 2012 and 2015. patients undergoing traditional rectopexy were older (p<.001), had a higher body mass index (p=0.018), more comorbid conditions (diabetes, copd, hypertension) and less functional independence (p=0.026). patients undergoing a traditional rectopexy had a higher composite morbidity incidence of 13.2% vs. 8% for minimally invasive rectopexy (p<.001). specifically, minimally invasive rectopexy patients had a 2.63% reduction in wound complications (p=0.002) and a shorter hospital stay (3.3 days vs. 4.3 days, p<.001) compared to traditional rectopexy. readmission rates were also 2.6% lower in the minimally invasive group (p=0.015). after controlling for the differences in the cohorts, a minimally invasive approach was a significant protective factor against the incidence of 30-day postoperative morbidity (or 0.476, p<.001). 
conclusion: minimally invasive rectopexy has improved 30-day postoperative morbidity compared to traditional rectopexy and should be strongly considered for the treatment of rectal prolapse. objectives: the short-term safety and efficacy of self-expandable metallic stent (sems) placement followed by elective surgery, "bridge to surgery (bts)", for malignant large-bowel obstruction (mlbo) have been well described. the aim of this study was to investigate the risk factors for postoperative complications and the optimal interval between sems placement and surgery in patients with mlbo. methods: retrospective examination of patient records revealed that the bts strategy was attempted in 49 patients with mlbo from january 2013 to march 2017 in our institution. two of these patients were excluded because they had undergone emergency surgery for sems migration; thus, 47 patients with mlbo who had undergone sems placement followed by elective surgery were included. of these patients, eight had developed postoperative complications (clavien-dindo grade ≥ii) (postoperative complication: poc group) whereas 39 patients had no such complications (no poc group). results: univariate analyses showed that asa score, number of lymph nodes resected, interval between sems and surgery, and preoperative albumin concentration were associated with postoperative complications. multivariate analysis identified only the interval between sems and surgery as an independent risk factor. furthermore, a cut-off value of 15 days for the interval between sems and surgery was identified by roc curve analysis. conclusions: the interval from sems placement to surgery is an independent predictive factor for postoperative complications in patients undergoing elective surgery in a bts setting. thus, an interval of over 15 days is recommended for minimizing postoperative complications. 
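the abstract above derives a 15-day cut-off by roc curve analysis. as a minimal illustration of one common way such a cut-off is chosen — maximizing the youden index (sensitivity + specificity − 1) across candidate thresholds — the sketch below scans a small invented dataset; the interval values and outcome labels are hypothetical placeholders, not the study's data:

```python
# hypothetical (interval_days, had_complication) pairs -- NOT the study's data
data = [
    (5, 0), (7, 0), (9, 0), (10, 0), (12, 0), (13, 1),
    (14, 0), (16, 1), (18, 1), (20, 0), (22, 1), (25, 1),
]

def youden_cutoff(pairs):
    """Return (threshold, J) maximizing J = sensitivity + specificity - 1,
    treating values >= threshold as 'test positive'."""
    positives = [v for v, y in pairs if y == 1]
    negatives = [v for v, y in pairs if y == 0]
    best_t, best_j = None, float("-inf")
    for t in sorted({v for v, _ in pairs}):       # candidate thresholds
        sens = sum(v >= t for v in positives) / len(positives)
        spec = sum(v < t for v in negatives) / len(negatives)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

cutoff, j = youden_cutoff(data)
print(cutoff, round(j, 3))  # -> 13 0.714
```

note that whether values above or below the threshold count as "positive" depends on the direction of the association, and that with only eight events (as in the abstract) any data-driven cut-off carries considerable uncertainty.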
haseeb kothar, ronan cahill; mater misericordiae university hospital. current clinical advances in operative near-infrared visualisation of cells, tissues and structures are predicated on the use of commercially available near-infrared cameras to excite and visualise emission energy from non-selective, approved compounds (predominantly indocyanine green (icg)). it is expected that new-generation compounds wholly selective for specific cellular components are now needed for further advance, and a variety of molecular targets have been proposed and are being developed primarily for oncological imaging purposes. recent publications have, however, suggested that icg itself is retained within malignant tissue differently from its uptake and clearance in surrounding non-malignant tissue, which is important for two reasons. firstly, it exploits and makes visible the increased vascular permeability and disordered clearance associated with carcinogenesis, which is a common endpoint of a variety of mediators including but not limited to vegf. this raises the useful option of targeting downstream effects of cancer compounds on a metabolic basis as opposed to tagging individual cell or antigen components. this means that a single agent could be used to target a variety of cancers rather than needing a specific one for each specific sub-type, as well as obviating the issue of cancer cell heterogeneity even in a single cancer deposit. second, it is very likely that some or all of the "localisation" effect of proposed selective compounds may well be due to a similar phenomenon rather than cell-specific binding, and may make distinction from other areas of similar metabolic behaviour (i.e. inflammatory regions) difficult. the crucial step-advance for such agent development may therefore relate to the timing of compound delivery and the "visualisation window" at the region of interest rather than highly selective oncocellular targeting. 
to illustrate this in more detail, we have been examining the tissue-specific effects and actions of near-infrared excitation in patients (n=7) with localised malignant colorectal primaries receiving an aliquot of icg before such examination at the time of resection. icg can be selectively apparent in the colorectal primary 15 minutes after its systemic administration, likely due to altered vascular dynamics. additional dose-related work has shown that early administration (40-180 minutes before examination) does not give useful information related to tumour fluorescence. interestingly, none of these patients had fluorescence within their regional lymphatics, but none had malignant lymph nodes associated with their large primaries on pathological examination either. however, this procedure is not usually performed in laparoscopic apr because of its technical difficulty, which may lead to increased rates of complications (fig. 1). here, we compared the feasibility and peri-operative outcomes of laparoscopic apr with and without pelvic peritoneum closure (ppc) for lower rectal cancer. introduction: there are reports of increased operative duration, blood loss and postoperative morbidity, caused by difficulties in obtaining good visualization and in controlling bleeding, when laparoscopic resection is performed in obese patients with colon cancer. purpose: the aim of this study was to investigate the impact of obesity on perioperative outcomes after laparoscopic colorectal resection performed by various operative methods in our department. patients and methods: we conducted a retrospective analysis of 435 patients with colorectal cancer who underwent laparoscopic surgery between january 2011 and december 2015. right colectomy was performed in 84 patients, sigmoidectomy in 73 patients, and low anterior resection in 50 patients. the surgical outcomes were compared between non-obese (body mass index [bmi] <25 kg/m2) and obese (bmi ≥25 kg/m2) patients. 
results: right colectomy cases: the amount of blood loss was significantly increased in the obese group compared with the non-obese group, but operation time did not differ significantly between the groups. there were no significant differences between the two groups in the rate of postoperative complications and duration of postoperative hospitalization. sigmoidectomy cases: there were no significant differences between the two groups in operation time and amount of blood loss. even though the preoperative asa score and the rate of postoperative complications were higher in the obese group, the mean postoperative hospital stay did not differ significantly between the two groups. low anterior resection cases: there were no significant differences between the obese and non-obese groups in operation time, amount of blood loss, rate of postoperative complications, and duration of postoperative hospitalization. discussion: although there are some reports of increased operative times in obese patients, the operative procedure was not prolonged in any of the present study patients. the amount of blood loss was significantly increased in the obese group compared with the non-obese group when right colectomy was performed. among the patients undergoing sigmoidectomy, the postoperative complication rate was higher in the obese group; however, the preoperative asa status was also higher in the obese group than in the non-obese group, indicating that factors other than obesity may be involved. conclusion: we conclude that laparoscopic colorectal resection appears to be safe and feasible in both obese and non-obese patients. however, bmi may not accurately reflect the amount of visceral fat present. background: for complete rectal prolapse (basically longer than 3 cm), we thought sling rectopexy the most reasonable way to suspend and fix the rectum, which had drooped down and prolapsed due to relaxation of the supporting tissue. 
we considered that the ripstein method provided sufficient fixation of the rectum to the sacrum. however, complications of rectal stenosis, constipation, mesh infection and mesh penetration have been reported. therefore, we modified the ripstein method to overcome these complications. aim: a prospective study beyond the randomized control trial (rct) between our modified ( introduction: the results of the japan clinical oncology group (jcog) 0212 study suggested that total mesorectal excision (tme) and lateral lymph node dissection (llnd) could become the standard treatment for lower rectal carcinoma. however, llnd must also be performed laparoscopically if surgery for lower rectal carcinoma is to be carried out as a completely laparoscopic procedure. transanal tme (tatme) is expected to provide better results than conventional tme, both oncologically and in terms of pelvic function, and its use has recently been spreading in japan. we started performing laparoscopic tatme+llnd in our department in july 2016 and here report the short-term outcomes. subjects and methods: we used laparoscopic tatme+llnd to treat 5 men and 3 women with ct3 or deeper rectal carcinoma in whom the inferior margin of the tumor was on the anal side of the peritoneal reflection. this was a retrospective study of short-term postoperative outcomes. surgical procedure: laparoscopic surgery was started simultaneously by two teams, one working transabdominally and the other working transanally. the transabdominal team performed the standard proximal llnd and mobilization of the splenic flexure via five ports. they then dissected the bilateral lateral lymph nodes, mainly in the obturator (#283) and internal iliac (#263) groups. during this time, the transanal team performed laparoscopic tatme. finally, both dissection layers were connected and the cancer was excised. results: six patients had clinical stage ii and two had clinical stage iii lower rectal carcinoma. 
all the patients underwent preoperative chemotherapy with s-1+l-ohp. five underwent sphincter-preserving surgery, and three underwent rectal amputation. the mean operating time was 335 minutes (range, 267-382 minutes), and the mean amount of hemorrhage was 136 g (20-440 g). the mean number of lymph nodes dissected was 24, and r0 resection was achieved in all cases. the mean length of hospital stay was 14 days, and a postoperative complication of clavien-dindo grade iii or higher occurred in one patient (anastomotic failure). conclusions: laparoscopic tatme+llnd performed by two teams simultaneously is an extremely useful procedure that not only reduces operating time, but also is less invasive than conventional laparoscopic surgery. it may also be effective for improving curability, nerve preservation, and anal function. objective: in laparoscopic appendectomy, the base of the appendix is usually secured by applying a roeder's knot. the aim of this study was to compare the advantages of using staplers and hem-o-loks for securing the base of the appendix. method: the study included 82 patients aged 12 to 75 years with acute appendicitis, randomly divided into two groups. in the first group, the base of the appendix was secured using a roeder's knot. in the second group, the mesoappendix was not dissected and was included in the endostapler jaws. the primary outcome was overall morbidity. secondary outcomes were total duration of surgery, total length of stay and ease in difficult cases. result: no morbidity was recorded in either group. the operative time was significantly longer in the cases with roeder's knot than in the stapler group (p<0.0001), as the mesoappendix was not dissected in the latter. 2 cases with an unhealthy base progressed to laparoscopic quadricolectomy. apart from the ease of applying a stapler, cases in the second group with a gangrenous base were easily tackled using the endostapler, avoiding the need for a hemicolectomy. 
conclusion: all forms of closure of the appendix base are acceptable, but the endostapler technique, apart from providing a secure base, reduces operative time and is an essential tool in cases of a gangrenous base. introduction: accurate staging is essential to estimate the prognosis of patients with colorectal cancer (crc), and lymph node evaluation is key to determining it. in non-metastatic crc, the number of harvested lymph nodes is the strongest prognostic factor for outcome and survival. additionally, it is thought that a higher lymph node yield may be representative of a higher quality of surgical care. given the importance of the association between lymph node evaluation and outcome in crc, it is necessary to evaluate factors which may affect lymph node harvest. introduction: hartmann's procedure is commonly performed to treat complicated diverticulitis, neglected rectal trauma with sepsis and sometimes malignancy. the traditional techniques to restore intestinal continuity after hartmann's procedure were for many years the standard of care, but in fact they carry considerable morbidity, and even mortality and failure. laparoscopic techniques carry not only the advantages of minimally invasive surgery, but also better visualization and magnification. the aim is to evaluate the outcome of using the laparoscope in reversal of hartmann's procedure as regards feasibility and safety. patients and method: forty patients underwent laparoscopic reversal of hartmann's procedure at tanta university hospital; their ages ranged from 25 to 70 years, and the time elapsed after the original operation ranged from 6 months to 5 years, excluding advanced malignancy. conversion occurred in 6 cases due to extensive adhesions and bleeding. results: there was no mortality or major morbidity in our study, and only a single leak, treated by a covering ileostomy. conclusion: laparoscopic reversal of hartmann's procedure is a feasible, promising technique with minimal morbidity. 
background: minimally invasive surgery is well established in elective colorectal surgery and has been shown to give better clinical outcomes than open surgery. in the emergency setting, the laparoscope is used mostly in cholecystectomy and appendectomy, but laparoscopic emergency colorectal surgery remains limited because of its complexity and difficulty. the aim of this study was to evaluate the feasibility of laparoscopic emergency colorectal surgery. methods: this is a prospectively collected, observational, single-center study of patients undergoing laparoscopic emergency colorectal surgery from 2011 to 2016. patient demographics, surgical indications and details, complications, clinical outcomes and hospital stay were collected and analyzed. results: there were a total of 130 emergency colorectal operations, and 57 patients were managed with a minimally invasive method. among these laparoscopic emergency operations, there were 33 male and 24 female patients. the mean age of the patients was 63.8 years (range 31-89 years). the main indications for operation were: perforation 49.1% (28/57), leakage after elective colorectal surgery 42.1% (24/57), obstruction 3.5% (2/57), ischemic colitis 3.5% (2/57), and bleeding 1.8% (1/57). there were 19 cases of asa 2, 32 of asa 3, and 6 of asa 4. the qsofa score for sepsis was 0 in 23 cases, 1 in 28 cases, 2 in 5 cases, and 3 in 1 case. there were 27 cases of laparoscopic lavage with diverting stoma, 15 hartmann procedures, 5 anterior resections, 4 right hemicolectomies, 3 perforation repairs, and 3 redo anastomoses. there were 6 conversions to open surgery: 3 due to bowel adhesions, 2 due to bowel distension, and 1 due to severe shock. mean operative time was 180.3 minutes. the overall mortality rate was 5.2% and the major complication rate (clavien-dindo grade above 2) was 24.5%. the re-operation rate was 15.7%. the mean hospital stay was 17.1 days. 
conclusions: this study presents evidence of initial clinical outcomes in emergency laparoscopic colorectal surgery. in the absence of large case series, the benefits of a laparoscopic approach should accrue to at least a minority of these patients. confocal laser endomicroscopy (cle) can provide real-time observation of cell structure and tissue morphology. in our study, we aimed to assess anastomotic perfusion using cle. method: the experimental rabbits were separated into two groups: group a (good anastomotic perfusion, n=6) and group b (poor anastomotic perfusion, n=6). partial colectomy and anastomosis were performed in groups a and b. detection of anastomotic perfusion using cle was then carried out after surgery. during continuous scanning, we counted the number of blood cells crossing a fixed point of the anastomotic stoma in the same period. results: with the assistance of fluorescein sodium, the blood vessels were highlighted. we saw a clear difference in imaging between group a and group b. the average number of blood cells was 34.7/min in group a and 6.0/min in group b (p<0.001), a significant difference. conclusion: cle allows real-time observation of the blood flow of the anastomotic stoma in vivo. therefore, it is feasible to assess anastomotic perfusion using cle in colorectal surgery. cigdem benlice, ahmet rencuzogullari, james church, gokhan ozuner, david liska, scott steele, emre gorgun; cleveland clinic background: intraoperative colonoscopy (ioc) is an adjunct in colorectal surgery (crs), especially in patients with malignancies, in order to detect the location of the primary or synchronous lesions as well as to assess anastomotic integrity. however, the effects of intraoperative colonoscopy on short-term outcomes during crs are a concern. 
this study aims to evaluate the safety, feasibility and postoperative outcomes of intraoperative colonoscopy in left-sided colectomy for colorectal cancer using a nationwide database. patients and methods: patients undergoing elective left-sided colectomy with low pelvic anastomosis without any proximal diversion for colorectal cancer were reviewed from the american college of surgeons national surgical quality improvement program (acs-nsqip) procedure-targeted database (2013-2015) according to their primary procedure current procedural terminology (cpt) code. subsequently, patients who underwent intraoperative colonoscopy were identified from concurrent cpt codes, and the cohort was divided into two groups based on simultaneous intraoperative colonoscopy. demographics, comorbidities and 30-day postoperative complications were evaluated and compared between the groups. multivariate logistic regression was conducted adjusting for significant factors between the groups. results: a total of 5579 patients were identified, and ioc was performed in 651 (11.7%) patients. objective: laparoscopic ileostomy is commonly performed for patients with colorectal obstruction due to cancer, peritonitis with colonic perforation, or other reasons. reduced port surgery is a novel technique that may be performed when considering minimally invasive surgery and desiring a cosmetic benefit. the aim of this study was to evaluate the safety and feasibility of reduced port laparoscopic ileostomy for patients with advanced colorectal cancer before chemotherapy. methods: between july 2012 and august 2017, 39 patients who underwent reduced port laparoscopic ileostomy were included (15 male and 14 female, age: 66 years). the outcomes were evaluated in terms of operation time, intraoperative blood loss and perioperative complications. surgical procedure: the patients were placed in the supine position and the operator stood on the left side. 
an access device with a wound protector (ez access, hakko, nagano, japan) was inserted at the future ileostomy site in the right lower abdomen, through which two 5-mm trocars were inserted, maintaining pneumoperitoneum at 10 mmhg with carbon dioxide. a 5-mm trocar was inserted in the left lower abdomen. a 5-mm flexible laparoscope was inserted through the access device port. after exploring the abdominal cavity, the end of the ileum was identified. a dye mark was then placed on the ileum 25 cm proximal to the ileal end. the marked ileum was grasped and extracted through the access device, and a brooke ileostomy was created. results: reduced port laparoscopic ileostomy was performed in 39 patients with colorectal obstruction due to cancer before chemotherapy. the mean operative time was 107 minutes, and the mean blood loss was 5.0 ml. three patients received one additional port. there were no intraoperative complications. five patients (12.8%) experienced postoperative complications (two deep surgical site infections, one pneumonia, one outlet obstruction and one renal dysfunction). there were no other intraoperative or postoperative complications. conclusion: reduced port laparoscopic ileostomy is a safe and feasible procedure for patients with advanced colorectal cancer before chemotherapy. methods: we performed elective lcr on 354 patients for primary colorectal cancers between june 2008 and june 2015. seventy-two patients were excluded from this study for the following reasons: 44 patients underwent multiple organ resection, and 28 patients had stage iv colorectal cancer. accordingly, 282 patients were eligible for comparative analysis, with 70 in group po (post operation) and 212 in group c (control). in group po, past operative procedures were as follows: appendectomy (57%), digestive tract (7%), hepato-biliary-pancreatic (7%), gynecologic (17%), urologic surgery (10%), and others (2%). 
results: there were no significant differences between the two groups in asa (grade≤2: 81 vs. 88%, p=0.14), bmi ( introduction: the treatment of rectal cancer requires highly skilled practice by the entire multidisciplinary team. important aims of treatment are to reduce the risk of residual disease in the pelvis, with lower morbidity, and to preserve good sphincter function. the tata procedure is transanal transabdominal radical proctosigmoidectomy with coloanal anastomosis. this technique was first developed in 1984 by dr. gerald marks to avoid a permanent colostomy for low-lying rectal cancer. this study reports the long-term results of the tata procedure for low rectal cancer. methods and procedures: a prospective study was conducted on 38 patients with low rectal cancer between april 2007 and july 2017 in a tertiary referral university-affiliated center specializing in laparoscopic surgery. all resections were carried out by a dedicated colorectal surgery team, and a standard protocol was used for all pre- and post-operative care. all the patients underwent total mesorectal excision. results: 38 consecutive patients (19 male, 19 female, mean age 57) underwent the tata procedure, 30 of them (78.9%) after neoadjuvant radiochemotherapy. the mean operation time was 201 min (range 90-360) and the mean estimated blood loss was 73 ml (range 10-500). the overall incidence of morbidity was 15.8% (6/38) and the mean hospital stay was 4.4 days. the mean follow-up period was 36.8 (range, 1-123) months, with a recurrence rate of 7.9% (3/38), an estimated overall 5-year survival of 78.2% and a disease-free survival rate of 89.5%. conclusion: laparoscopic total mesorectal excision with the tata procedure is safe, with excellent local recurrence and disease-free survival rates. 
jacek piatkowski, md, phd, marek jackowski, prof; clinic of general, gastroenterological and oncological surgery introduction: more than 10 years ago, the laparoscopic technique became a fully accepted surgical method for the treatment of rectal cancer. the years since have seen a continued search for new surgical methods that reduce invasiveness and improve treatment outcomes. transanal total mesorectal excision appears to be such a method. the aim of this study was to evaluate this new method of rectal cancer surgery (tatme) after 2 years of its use. methods: radicality of treatment (r0 resection, local recurrence), outcome of surgical treatment and quality of life of patients after surgery were evaluated. results: in the period from 10.03.2015 to 30.06.2017, 33 patients (19 men, 14 women) were operated on in the clinic. in 29 cases the indication for surgery was lower or middle rectal cancer, and in 4 cases high-grade dysplasia. all patients underwent laparoscopic rectal proctectomy with transanal access (tatme). in all cases, complete oncological radicality (r0 resection) was obtained. the average operation time was 156 minutes. we used a two-team approach (cecil approach) with 2 laparoscopic sets, abdominal and perineal, starting at the same time. in the postoperative course, 6 patients had signs of anastomotic leak (3 of them required reoperation). the follow-up period is 1-29 months. none of the patients has had any recurrence of cancer. conclusions: 1. transanal tme for rectal cancer is an alternative method to conventional laparoscopic surgery. 2. in a large proportion of patients with lower and middle tumors, abdomino-perineal resection with permanent colostomy can be avoided. background: the double stapling technique (dst) has become widespread for colorectal anastomosis, especially after low anterior resection. 
in colorectal cancer treatment, heald reported total mesorectal excision (tme) in 1982, and it has been accepted as the standard technique for rectal resection due to the decreased local recurrence rate and improved functional results. with the advent of dst, it has become possible to preserve the anus even in cases with a lesion in the lower rectum. laparoscopic surgery for colon cancer was introduced in the 1990s and has had promising results, including long-term outcomes. with the spread of laparoscopic surgery, it has been applied to rectal resection, with technical difficulty. one of the reasons for the difficulty is the high rate of anastomotic leakage, a critical adverse effect of low anterior resection (lar). thus, risk factors for anastomotic leakage have been widely discussed, including technical factors such as pre-compression and the number of firings. the decisive difference between conventional lar and laparoscopic lar in dst is the stapler used for transection of the rectum. the laparoscopic staplers currently available are thought not to be ideal, and there is little evidence on specific stapler specifications for laparoscopic surgery. materials and methods: all methods described in this study were approved by the institutional ethical review committee. we reviewed colon and rectal wall thickness on histological examination using h&e staining of the distal margin of resected specimens of the patients who conclusions: rstc for severe acute uc is at least as safe as the laparoscopic approach. although the robotic cohort had more comorbidities, rates of major postoperative complications, readmissions, and reoperations were lower than with lstc. rstc was also associated with an earlier return of bowel function and a shorter length of stay. a prospective study with larger numbers is needed to see if the superiority of robotic over laparoscopic approaches is reproducible. 
s198 surg endosc (2018) introduction: complete mesocolic excision (cme) has been advocated on the basis of oncologic superiority, but is not commonly performed in north america. furthermore, many data are limited to case series, with few comparative studies. the objective was therefore to systematically review studies comparing short- and long-term outcomes between cme and non-cme colectomy for colon cancer. methods: a systematic review was performed according to prisma guidelines of medline, embase, healthstar, web of science, and cochrane library. studies were only included if they compared conventional resection (non-cme) to cme for colon cancer. quality was assessed using the methodological index for non-randomized studies (minors). the main outcome measures were short-term morbidity and oncologic outcomes. study eligibility, data extraction and quality assessment were performed by two independent reviewers, and disagreements resolved by consensus. weighted pooled means and proportions with 95% ci were calculated using a random-effects model when appropriate. results: out of 825 citations, 23 studies underwent full-text review and 14 met the inclusion criteria, of which 10 were unique series. the mean minors score was 13.6 (range 11-16). the mean sample size was 1075 (range 45-3756) in the cme group and 785 (range 40-3425) in the non-cme group. of the 10 unique studies, 4 included only right-sided resections, and 44.2% (95% ci 35.8-52.6) of resections in the remaining 6 were right-sided colectomies. of the 5 studies that reported surgical approach, 52.2% (95% ci 31.0-73.3) of cme procedures were performed laparoscopically. there were 4 papers reporting plane of dissection, with the cme plane achieved in 87.4% (95% ci 79.7-95.2). mean operative time in the cme group was 167 minutes (range 163-171) and in the non-cme group 138 minutes (range 135-142). perioperative morbidity was reported in 6 studies, with pooled overall complications of 22.5% (95% ci 18.4-26.6) for cme and 19.6% (95% ci 13.6-25.5) for non-cme resections. 
anastomotic leak occurred in 6.0% (95% ci 2.2-9.7) of cme versus 6.0% (95% ci 4.1-7.9) of non-cme colectomies. cme surgery consistently resulted in more lymph nodes retrieved, a longer distance to the high tie, and greater specimen length. there were 7 studies that compared 3- or 5-year overall or disease-free survival, or local recurrence. only 2 studies reported significantly higher disease-free or overall survival in favour of cme. local recurrence was lower after cme in 1 of 4 reporting studies. conclusions: the quality of the current evidence is limited and does not consistently support the superiority of cme. more rigorous data are needed before cme can be recommended as the standard of care for colon cancer resections. gilberto lozano dubernard, md, facs, ramon gil-ortiz, md, gustavo cruz-santiago, md, bernardo rueda-torres, md, javier lopez-gutierrez, md, facs; hospital angeles del pedregal introduction: to assess the feasibility of single-stage colorectal laparoscopic re-intervention without ostomy. colonic laparoscopic intervention in patients who previously underwent a minimally invasive procedure constitutes the current frontier in the management of acute colorectal pathology. this includes patients with fecal peritonitis after diverting procedures already treated surgically. the outcome of our patients could improve significantly if the surgical procedure is performed in a single stage, with no stoma. method and procedures: from september 1995 to june 2016, one hundred thirty-two patients underwent colorectal laparoscopic surgery. five of these patients developed complications: three perforations due to colonoscopy and two due to dehiscence of the anastomosis. these five patients underwent a second laparoscopic procedure that included resection and anastomosis. no stoma was required. results: all five patients underwent a second laparoscopic procedure due to an anastomotic leak. no stoma was required. 
the procedure consisted of resection of the previous anastomosis, re-anastomosis, abdominal lavage, aspiration and drain placement. all patients were supported with parenteral nutrition. there were no surgical complications. only one patient developed pneumonic symptoms, which resolved. conclusion: the reported results, with no conversions and no mortality in our series, suggest that single-stage laparoscopic re-intervention is feasible despite fecal peritonitis. introduction: total mesorectal excision is known to be the gold standard surgical procedure for rectal cancer. subsequently, complete mesocolic excision (cme) has been recognized as an essential surgical procedure for colon cancer. the transverse colon is a relatively minor location for colon cancer. the variety of vessels, the mobilization of the splenic flexure and the dissection close to the pancreas make operations for transverse colon cancer complicated. laparoscopic transverse mesocolic excision in our hospital is presented. method: laparoscopic surgery is conducted with five trocars in the lithotomy position. the inferior mesenteric vein is cut after dissection of the descending colon with a medial approach. the lower edge of the pancreas is exposed near the inferior mesenteric vein and dissected along toward the tail of the pancreas. the splenic flexure is mobilized with a lateral approach, and the dissection between the transverse mesocolon and the lower edge of the pancreas is continued toward the head of the pancreas. on exposure of the superior mesenteric artery and vein, the origins of the middle colic artery and vein are cut. the transverse mesocolon is separated from the head of the pancreas and the duodenum, preserving the gastrocolic trunk of henle and the right gastroepiploic vein. the hepatic flexure is mobilized and cme for the transverse colon is completed. this method, which we call the 'tail to head of pancreas' approach, has been performed since september 2015. 
this method is performed within a single continuous surgical view and appears to be a simple procedure for cme with central vascular ligation for transverse colonic cancer. there were no intraoperative complications, and one postoperative pancreatitis of clavien-dindo grade ? occurred. conclusion: our method, the 'tail to head of pancreas' approach, with transverse mesocolic excision is simple, safe and feasible. introduction: anastomotic complication after stapled anastomosis in colorectal cancer surgery is a considerable problem. there are various types of anastomotic complications, with differing severity. this study aimed to evaluate the impact of intraoperative colonoscopy on the detection of anastomotic complications, and its effectiveness in the intraoperative treatment of anastomotic complications after anterior resection (ar) and low anterior resection (lar) for colorectal cancer. methods: from dec. 2016 to jul. 2017, a total of 72 patients who underwent anastomosis between the sigmoid colon and rectum after colorectal resection were reviewed retrospectively. intraoperative colonoscopy has been performed routinely in our hospital since december 2016 after anterior resection and low anterior resection. to assess the effectiveness of intraoperative colonoscopy, we compared postoperative complications with those in a non-intraoperative colonoscopy group from the previous 11 months. intraoperative colonoscopy was performed after anastomosis to visualize the anastomotic line and to perform an air leak test. if an anastomotic defect or moderate bleeding was found on intraoperative colonoscopy, it was managed by reinforcement suture or transanal suture repair. we used logistic regression to analyze anastomotic complications between the two groups with or without intraoperative colonoscopy. 
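the kind of logistic regression comparison described in the methods above can be sketched as follows. this is an illustrative example only: the group sizes (72 with intraoperative colonoscopy, 110 without) come from the abstract, but the 0/1 complication outcomes and the ~8% and ~12% event rates are invented assumptions, not the study's data.

```python
# illustrative sketch, not the study's analysis: logistic regression of a
# binary complication outcome on group membership, with the coefficient
# exponentiated to an odds ratio.
import numpy as np
from sklearn.linear_model import LogisticRegression

# group: 1 = intraoperative colonoscopy (n=72), 0 = historical controls (n=110)
group = np.concatenate([np.ones(72), np.zeros(110)])

# complication: synthetic 0/1 outcomes with assumed event rates
rng = np.random.default_rng(1)
complication = np.concatenate([
    rng.binomial(1, 0.08, 72),   # assumed ~8% rate with colonoscopy
    rng.binomial(1, 0.12, 110),  # assumed ~12% rate without
])

# fit complication ~ group; exp(coefficient) is the odds ratio
model = LogisticRegression().fit(group.reshape(-1, 1), complication)
odds_ratio = float(np.exp(model.coef_[0, 0]))
print(f"odds ratio (colonoscopy vs. none): {odds_ratio:.2f}")
```

note that sklearn applies l2 regularization by default, so for a real analysis a statistics package reporting confidence intervals would be the usual choice; this sketch only shows the shape of the comparison.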
results: of the 72 patients who underwent intraoperative colonoscopy after ar (n=50) or lar (n=22), abnormal findings including bleeding and air leak were found in 14 patients (19.4%). among those, 9 cases were observed without any procedure, and additional procedures were performed in 5 patients (6.9%; transanal suture (3), lembert suture (2)). postoperative complications developed in 12 patients: 6 patients had anastomotic bleeding (8.3%), 2 had ileus (2.8%), 1 had pneumonia (1.4%), and 3 had minor complications (4.2%; acute urinary retention, chylous drainage, laparoscopic port-site bleeding). among the 6 patients with anastomotic bleeding, 4 were treated by endoscopic clipping and 2 were cured by conservative treatment. there was no postoperative anastomotic leakage. there were 62 cases of ar and 48 of lar in the non-intraoperative colonoscopy group; there was no significant difference between the two groups (p=0.07). the proportion of laparoscopic surgery was 86.4% and 92.2% in the intraoperative colonoscopy and non-intraoperative colonoscopy groups, respectively, a statistically significant difference (p=0.02). however, there was no significant difference in the anastomotic complication rate between the two groups (rr=0.27, 95% ci 0.34-2.585). conclusions: although there was no significant difference in the postoperative anastomotic complication rate between the two groups, intraoperative colonoscopy may be a valuable method for decreasing postoperative complications by visualizing the anastomotic line and enabling additional procedures. conclusion: it was suggested that lymph node dissection of both the middle and left colic regions is necessary for splenic flexure colon cancer, because lymph node metastasis was recognized in both regions. aims: laparoscopic right hemicolectomy has become the standard of care for treating cecum, ascending and proximal transverse colon cancer in many centers. 
most centers use laparoscopic colectomy with extracorporeal resection and anastomosis (lc). single-incision laparoscopic colectomy with intracorporeal resection and anastomosis (sc) remains controversial. the aim of the present study was to compare these two techniques using propensity score matching analysis. methods: we analysed the data of 111 patients who underwent laparoscopic right hemicolectomy with lc or sc between december 2015 and december 2016. the propensity score was calculated from age, gender, body mass index, american society of anesthesiologists score, previous abdominal surgery and d3 lymph node dissection. short-term outcomes were recorded. postoperative pain was evaluated using a visual analogue scale (vas), with postoperative analgesic use as an outcome measure. results: the length of the skin incision in the sc group was significantly shorter than in the lc group: median (range) 3 (3.5-6) cm versus 4 (3-6) cm (p=0.007). the vas score on day 1 and day 2 after surgery was significantly lower in the sc group than in the lc group: median (range) 30 (10-50) versus 50 (20-69) on day 1 (p=0.037) and median (range) 10 (0-50) versus 30 (0-70) on day 2 (p=0.029). the number of analgesic requirements was also significantly lower in the sc group on day 1 and day 2 after surgery: median (range) 1 (0-3) times versus 2 (0-4) times on day 1 (p=0.024) and 1 (0-2) times versus 1 (0-4) times on day 2 (p=0.035). there were no significant differences in operative time, intraoperative blood loss, the number of lymph nodes removed or postoperative course between the groups. conclusions: sc for right colon cancer is safe and technically feasible. sc reduces the length of the skin incision and postoperative pain compared with conventional lc. 
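as a rough sketch of how a propensity score matching analysis like the one above proceeds: a logistic model of treatment on the covariates yields each patient's propensity score, and treated patients are then matched to controls with similar scores. the covariate list below follows the abstract, but the synthetic data and the greedy 1:1 nearest-neighbour matching are assumptions for illustration; the authors do not describe their matching algorithm.

```python
# illustrative propensity score matching sketch with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 111  # cohort size reported in the abstract
X = np.column_stack([
    rng.normal(68, 10, n),       # age
    rng.integers(0, 2, n),       # gender (0/1)
    rng.normal(23, 3, n),        # body mass index
    rng.integers(1, 4, n),       # asa score
    rng.integers(0, 2, n),       # previous abdominal surgery (0/1)
    rng.integers(0, 2, n),       # d3 lymph node dissection (0/1)
])
treated = rng.integers(0, 2, n)  # 1 = sc, 0 = lc (synthetic labels)

# step 1: propensity score = estimated P(treatment | covariates)
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# step 2: greedy 1:1 nearest-neighbour matching on the score, without replacement
controls = [i for i in range(n) if treated[i] == 0]
pairs = []
for i in np.flatnonzero(treated == 1):
    if not controls:
        break
    j = min(controls, key=lambda c: abs(ps[i] - ps[c]))
    pairs.append((i, j))
    controls.remove(j)

print(len(pairs), "matched pairs")
```

outcomes (incision length, vas scores, analgesic use) would then be compared only within the matched pairs, which is what lets an observational comparison approximate a randomized one.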
patients were divided into the following groups: cephalo-medial-to-lateral approach group (cml group, n=63) and medial-to-lateral approach group (ml group, n=74 introduction: laparoscopic technique has been widely used in the treatment of colorectal cancer, exploiting its minimally invasive advantages while achieving good radical oncological results. however, laparoscopic surgery is not recommended for t4 colorectal cancer. methods: we retrospectively collected pt4 colorectal cancer data from 2006 to 2015 at guangdong general hospital; all cases underwent radical surgery. results: a total of 211 cases were enrolled in the pt4 group, including 101 cases in the laparoscopic group and 110 in the open group; the conversion rate was 12.9%. there was no difference in baseline data (age, sex, bmi, asa, etc.) (p>0.05). there were significant differences between the two groups (p<0.05) in blood loss, postoperative complications and postoperative recovery indices. for pathologic t4a/b, combined-organ resection and postoperative recurrence, the laparotomy group had more cases, a statistically significant difference between the two groups (p<0.05). the 3- and 5-year overall survival rates were 74.9% and 60.5% for the lap group and 62.4% and 46.5% for the open group (p=0.060); the 3- and 5-year disease-free survival rates did not differ significantly either (p=0.053). iiic stage, lymph node status, ca19-9 and adjuvant chemotherapy were independent prognostic factors for overall survival. age, pt4a/b, iiic stage, ca19-9 and adjuvant chemotherapy were independent factors influencing disease-free survival. conclusions: laparoscopic surgery for pt4 colorectal cancer not only preserves its minimally invasive advantages but also achieves similar long-term outcomes. however, more multicenter, prospective, large-sample clinical studies are needed to validate our findings. introduction: lymph node (ln) retrieval after surgery is important. 
in the present study we evaluated the efficacy of a fat dissolution technique, using fluid containing collagenase and lipase, to avoid stage migration after laparoscopic colorectal surgery. methods: seventeen patients who underwent laparoscopic ln dissection for colorectal cancer were evaluated. first, unfixed lns within the resected mesentery were identified by visual inspection and palpation immediately after the operation by the surgeon, which is the most common practice in japan. subsequently, the fat dissolution technique was applied to the remnant fat tissue, and the lns were evaluated again. the primary endpoint was whether the second assessment increased the number of lns evaluated. results: the median numbers of lns identified at the first and second assessments were 14 and 6, respectively, resulting in a significant increase in the total number of lns evaluated (14 vs. 21, p<0.01, paired t-test). one positive node was identified among all the additional lns identified (1.0%; 1/96). although staging was not altered in any patient, the second assessment increased an originally insufficient number of lns evaluated (<12 for stage ii) in three patients, whose treatment may therefore be altered. tumor cells detected after the fat dissolution technique stained positive for carcinoembryonic antigen and cytokeratin-20. conclusion: using the fat dissolution liquid on remnant fat tissue of the mesentery of the colon and rectum enabled identification of additional lns. this method should be considered when the number of lns identified after conventional ln retrieval is insufficient, and may avoid stage migration.

aim: the aim of this study is to evaluate the pathological resection margin after laparoscopic intersphincteric resection for low rectal cancer. method: from 2010 to 2014, there were eight cases of laparoscopic intersphincteric resection for low rectal cancer. we evaluated the clinicopathological findings and the positivity of the pathological resection margin.
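the paired t-test cited in the lymph-node abstract above works on within-patient differences between the first and second counts. a minimal sketch follows; the 17 paired counts are synthetic stand-ins, not the study's data.

```python
import math

# paired t-test sketch: lymph-node counts at the first assessment vs. the
# cumulative total after the fat-dissolution step, per patient.
# these 17 pairs are made-up values, not the abstract's data.
first = [14, 12, 16, 10, 15, 13, 18, 11, 14, 17, 9, 13, 15, 12, 16, 14, 10]
total = [20, 18, 21, 15, 22, 19, 25, 16, 20, 24, 13, 19, 21, 18, 23, 20, 15]

diffs = [t - f for f, t in zip(first, total)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)  # compare against t(n-1) critical value
```

a t statistic above the two-sided 5% critical value for 16 degrees of freedom (about 2.12) rejects the null of no within-patient change, matching the direction of the abstract's p<0.01 result.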
results: the median distance from the anal verge to the tumor was 40 mm (range, 10-45), and the median diameter of the tumor was 27 mm (range, 15-60). there was no case with neoadjuvant therapy. the estimated tumor depth was ct1 in 5 cases (62.5%) and ct2 in 3 cases (37.5%), and the actual tumor depth was ptis in 3 cases (37.5%), pt1 in 2 cases (25.0%) and pt2 in 3 cases (37.5%). the median distal resection margin was 10 mm (range, 5-25). the pathological resection margins (proximal, distal and circumferential) were negative in all cases (100%). there was no mortality, but morbidity occurred in two cases (one anastomotic leakage and one small bowel obstruction). neither recurrence nor distant metastasis was observed during the follow-up period. conclusion: there was no case of positive resection margin in this series. our patient selection, indication and technique were considered precise and appropriate.

introduction: fistulas of the intestine to the vagina or the bladder constitute a highly morbid entity, with substantial functional limitation and loss of quality of life. their diagnosis is complex, and their treatment even more so, with a wide range of options ranging from a simple diverting colostomy, in the hope of spontaneous closure of the fistula, to complete correction of the pathology with resection, anastomosis and minimally invasive reconstruction. we report our experience with the minimally invasive laparoscopic treatment of entero-vaginal and entero-vesical fistulas over the last 3 years.
results: a total of 28 patients were operated on in this period, 26 women and 2 men, all by laparoscopy, with intestinal resection: the large bowel in 26 cases, the small bowel in one, and both in another. all had resection and intestinal anastomosis, and in no case was a colostomy required. primary closure of the fistula was required in 7 patients. there was one conversion to open surgery and no recurrence. 2 patients had prolonged hospitalization for localized infections, one of whom required reintervention for revision. one patient suffered an umbilical eventration at the extraction site, which was corrected one year after the laparoscopy. conclusion: minimally invasive surgery in patients with this type of pathology is an excellent strategy for their integral management. teamwork guarantees good results.

robbie sparks, dr, ronan cahill; mater misericordiae university hospital. background: precise preoperative localisation of colonic cancer is a prerequisite for correct oncological resection. effective endoscopic lesional tattoo is vital for small, radiologically unseen tumors planned for laparoscopic resection, but its practice may be imperfect. methods: retrospective review of consecutive patients with preoperative endoscopic lesional tattoo who underwent laparoscopic colonic resection, identified from our prospectively maintained cancer database with supplementary clinical chart and radiological, histological, endoscopic and theatre database/logbook interrogation. results: 169 patients (95 males, mean age 68 years, median bmi 27.8 kg/m2, 77 left-sided lesions, 36 screen-detected, 21 benign polyps, 23% conversion rate). in 104 operations (60%) tattoo visibility was documented, with tattoo absence noted in 9 (8.5%), although the tattoo was identifiable in the pathological specimen in four.
in those with "missing tattoos", six of the lesions were radiologically occult, and in three the tumor was found in a different colonic segment than had been judged at colonoscopy. four patients had on-table colonoscopy and five were converted to laparotomy (55% conversion rate, p<0.005). mean postoperative length of stay was 15.5 (range 4-38) days. one patient's segmental resection contained only benign pathology, requiring a second operation to remove the cancer. on univariate analysis, time between endoscopy and surgery (but not patient age, gender, bmi, endoscopist or surgeon seniority, tumor size or location) was significantly associated with absence of tattoo intraoperatively (p=0.006). conclusion: recording related to tattoo is variable, but definite lack of gross tattoo visualisation significantly impacts the procedure. the mechanism of tattoo absence is multifactorial and needs careful consideration, but is solvable.

the aim of the present study was to perform a systematic review of the literature to determine the role of antibiotics in the management of acute uncomplicated diverticulitis (aud). diverticular disease is the most common disease of the large bowel and poses a significant burden on healthcare resources. in the united states alone, the cost of diverticular disease has been estimated at over $3 billion, making it the fifth most important gastrointestinal disease economically. the use of antibiotics in the management of aud, however, is based primarily on expert opinion, as current high-quality evidence is lacking. recent studies have questioned not only the optimal type and duration of antibiotic regimens, but whether antibiotics provide any benefit in the treatment of aud. conclusions: antibiotic use in patients with acute uncomplicated diverticulitis is not associated with a reduction in major complications, readmissions, treatment failure, progression to complicated diverticulitis, or need for elective and emergent surgery.
however, it increases the length of hospital stay. given the risk of selection bias in the included studies, further randomized trials are needed to clarify the need for antibiotics in uncomplicated diverticulitis.

laparoscopic para-aortic lymph node resection for colorectal cancer

aim: we aim to demonstrate the feasibility of a totally laparoscopic sigmoidectomy with transanal extraction of the specimen. methods: the patient is a 34-year-old obese woman (bmi=34 kg/m2) with a history of laparoscopic cholecystectomy and chronic constipation. she was treated three months earlier for sigmoid diverticulitis complicated by a pelvic abscess; the evolution was favorable under antibiotic therapy and percutaneous drainage of the abscess. colonoscopy showed multiple diverticula located between 20 and 25 cm from the anal verge. prophylactic sigmoidectomy was performed laparoscopically using 3 trocars (10 mm supra-umbilical, 12 mm right iliac fossa and 5 mm right flank). the specimen was extracted transanally, thus avoiding a suprapubic incision. the steps of the intervention were: 1, mobilisation of the left colon; 2, closure of the distal left colon stump; 3, rectal stump lavage; 4, opening of the rectum; 5, transanal introduction of the anvil; 6, transanal extraction of the specimen; 7, closure of the rectal stump; 8, colonic positioning of the anvil; 9, colorectal anastomosis. results: operative time was 150 minutes, with no perioperative incidents. a liquid diet was authorized on the night of the intervention. the postoperative course was favorable, with discharge on postoperative day 2. pathological examination of the surgical specimen confirmed the presence of sigmoid diverticula. conclusion: laparoscopic sigmoidectomy with transanal extraction of the specimen for benign disease is an attractive technique with satisfactory results, avoiding a suprapubic incision and its parietal and aesthetic complications.
chengzhi huang; guangdong general hospital (guangdong academy of medical science). background: colorectal cancer (crc) is one of the most common malignant diseases worldwide. among the causes of death from crc, metastasis to the liver or lung is the major factor. however, there is still a lack of tumor biomarkers that precisely predict the clinical outcome of crc. salt-inducible kinase 1 (sik1) encodes a serine kinase of the amp-activated protein kinase (ampk) family, which may play critical roles in tumorigenesis and tumor progression. this study aimed to examine the expression and clinical significance of sik1 in crc patients. methods: the expression of sik1 protein was measured by western blot and immunohistochemical analysis. sik1 mrna expression in cancerous tissue was measured by rt-pcr. results: the expression level of sik1 correlated with the following factors: tumor invasion (t stage), lymph node metastasis, clinical stage (tnm) and tumor location. down-regulated sik1 implies poor clinical outcome by kaplan-meier analysis (p<0.05), and may act as an independent risk factor in crc patients.

background: surgical specimens for resected colon cancer vary in quality, and there remains no universally accepted technique to guide resection margins. a minimum of 12 lymph nodes provides some quality assurance; however, this remains a crude marker of optimal oncological surgery. a tool to precisely identify lymphatic drainage within the mesentery could improve the oncologic quality of resection and better guide adjuvant treatment through more optimal mesenteric lymphadenectomy. while fluorescence imaging (fi) has been described to identify nodal disease in several other cancers, feasibility and best practices have not been established in colon cancer. we describe a novel technique of fi using indocyanine green (icg) to identify lymphatic spread and potentially guide optimal mesenteric lymphadenectomy in colon cancer.
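the kaplan-meier survival analysis used in the sik1 abstract above relies on the product-limit estimator. the sketch below is self-contained and uses synthetic follow-up times (in months) with event flags (1 = death, 0 = censored); none of it is study data.

```python
# product-limit (kaplan-meier) estimator: at each distinct event time t,
# multiply the running survival by (1 - deaths_at_t / at_risk_at_t).
def kaplan_meier(times, events):
    """return a list of (event_time, estimated_survival) pairs."""
    s = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        n_t = sum(1 for ti in times if ti >= t)  # subjects still at risk at t
        if deaths:
            s *= 1 - deaths / n_t
            curve.append((t, s))
    return curve

# synthetic cohort: follow-up months and event indicators (0 = censored)
times = [5, 8, 12, 12, 20, 24, 30, 30, 36, 40]
events = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1]
curve = kaplan_meier(times, events)
```

censored subjects drop out of the risk set without forcing a step in the curve, which is what distinguishes this estimator from a naive fraction-surviving calculation.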
methods: three consecutive patients with colon cancer undergoing laparoscopic resection had peritumoral subserosal injection of icg for fi after exteriorization of the mobilized specimen. three concentrations of icg were injected: 5 mg/10 ml, 5 mg/5 ml, and 5 mg/3 ml; a total of 4 ml was given for each patient. using a modified laparoscopic camera, the icg was excited by light in the near-infrared (nir) spectrum for real-time visualization of the lymphatic drainage. the main outcome measure was identification of lymphatic drainage. results: three patients with right-sided primary colon cancer were evaluated. all three had successful identification of the lymphatic drainage pattern along the mesentery. the most successful protocol was 1 ml (concentration 5 mg/10 ml) injected subserosally at 4 points in close proximity (1 cm) to the tumor with a 23-gauge needle, then waiting 5 minutes for complete mapping. no intraoperative or injection-related adverse effects occurred at 30-day follow-up. the median lymph node yield was 31. all specimens had tumor-free margins. conclusion: from this small series, fluorescence imaging with icg is a potentially safe and feasible technique for identifying mesocolic lymphatic drainage patterns. this proof of concept and protocol will lead to future studies examining the utility of fluorescence imaging to guide more precise surgery in colon cancer.

introduction: anastomotic leakage in colon/rectal surgery is a dangerous event, with an occurrence rate ranging from 1 to 30% and an associated mortality rate between 6 and 22%. white-light intraoperative subjective surgical assessment (the most frequently used approach) underestimates the actual anastomotic leakage rate. intraoperative tissue perfusion assessment by indocyanine green (icg)-enhanced fluorescence has been reported in multiple clinical scenarios in laparoscopic/robotic surgery, including bowel perfusion assessment.
this technology can detect microvascular impairment, potentially preventing anastomotic leakage. we reviewed the literature and present our data to evaluate the feasibility and usefulness of icg-enhanced fluorescence in the intraoperative assessment of peri-anastomotic tissue perfusion in colorectal surgery. methods and procedures: a pubmed narrative literature review was performed. moreover, out of a total of 164 robotic colorectal cases, we retrospectively analyzed 28 icg-enhanced fluorescence robotic colorectal resections (15 left colectomies, 8 rectal resections, 3 right colectomies, 1 transverse colectomy, 1 pancolectomy). results: with icg technology, the biggest (n>100) case series reported changing the level of resection based on icg in 3.7-19% of cases. icg technology may variably reduce the anastomotic leak rate from 4 to 12%; however, the threshold values defining actual sub-optimal perfusion are still under investigation. in our experience with 28 icg cases, the conversion, intraoperative complication, dye allergic reaction and mortality rates were all 0%. postoperative surgical complications were 1 leak (3.6%) and 1 small bowel obstruction from an incarcerated hernia (3.6%). in 2 cases with normal white-light assessment, the level of the anastomosis was changed after icg showed ischemic tissue. despite the application of icg, 1 anastomotic leak was registered. conclusions: icg-enhanced fluorescence may intraoperatively change the white-light-assessed resection/anastomotic level, potentially decreasing the anastomotic leakage rate. our data show that this technology is safe and feasible and may prevent anastomotic leakage. however, the decision making is still subjective rather than data driven. at this stage icg, besides being a promising technique, does not have a high level of evidence (most reports are retrospective). randomized prospective trials with adequate statistical power are needed.
precise standardization of injection dose and timing is required. the main challenge is to develop a method to objectively obtain a real-time intensity assessment, which may provide objective metric thresholds for intraoperative evidence/data-based surgical decision making.

introduction: according to the world health organization, colorectal cancer is the 3rd most commonly diagnosed cancer in the world. one of the main risk factors for the development of colorectal cancer is obesity, which increases the risk of colorectal cancer by 9% in women and 24% in men per 5 kg/m2. bariatric surgery is a treatment that achieves and sustains a significant amount of intentional weight loss. given that bariatric surgery decreases obesity, this intentional weight loss might be expected to provide a favorable outcome in terms of diagnosis and prognosis of colorectal cancer. a systematic review of the literature was conducted via pubmed to identify relevant studies from january 2008 through may 2017. the main outcome for this study was whether patients who underwent bariatric surgery (restrictive and malabsorptive procedures) had an increased or decreased risk of colorectal cancer. all studies included in this meta-analysis are retrospective cohort studies. results were expressed as standardized difference in means with standard error. statistical analysis used a fixed-effects meta-analysis to compare the mean value of the two groups, bariatric surgery versus non-surgery, in patients with colorectal cancer (comprehensive meta-analysis version 3.3.070 software; biostat inc., englewood, nj). results: four out of 86 studies were quantitatively assessed and included in the meta-analysis. among the four studies, 22,857 patients underwent bariatric surgery and 78,536 did not.
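the fixed-effects meta-analysis named in the methods above pools study effects by inverse-variance weighting. the sketch below shows the mechanics; the four (effect, standard error) pairs are illustrative placeholders, not the values of the four included studies.

```python
import math

# inverse-variance fixed-effect pooling of per-study standardized mean
# differences. the numbers below are made up for illustration only.
effects = [0.10, 0.18, 0.12, 0.15]   # per-study effect estimates
ses = [0.08, 0.11, 0.09, 0.12]       # their standard errors

weights = [1 / se ** 2 for se in ses]            # precision of each study
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))          # se of the pooled estimate
z = pooled / pooled_se                           # wald z statistic
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

larger (more precise) studies get proportionally more weight, and the pooled standard error is always smaller than any single study's, which is how a meta-analysis can reach significance when individual studies do not.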
there was a significant decrease (0.139±0.057; p=0.016) in the risk of developing colorectal cancer in patients who underwent bariatric surgery compared to those who did not. conclusion: bariatric surgery patients appear to have a decreased risk of colorectal cancer compared to patients who did not have bariatric surgery.

guh jung seo, hyung-suk cho; department of colorectal surgery, dae han surgical clinic, gwangju, south korea. introduction: the incidence of rectal carcinoid tumors is increasing due to the widespread use of screening colonoscopy. endoscopic mucosal resection (emr) is a useful method for small rectal carcinoid tumors (≤10 mm) because of its simplicity, speed and low complication rate. we aimed to describe our experience and evaluate the outcomes of emr for rectal carcinoid tumors. the patients enrolled in this study were 13 patients with small rectal carcinoid tumors who underwent emr using a submucosal injection technique with an epinephrine-saline mixture between august 2010 and october 2016. all medical records, including characteristics of the patients and tumors and complications, were retrospectively reviewed. results: the patients were 6 men and 7 women, with a mean age of 40.8 years (range, 21-72 years). en bloc resection was achieved by emr in all cases. the mean endoscopic size of the tumors was 6.46 mm (range, 5-10 mm). the pathologically measured mean size of the resected specimens was 5.92 mm (range, 4-10 mm), and the mean size of the resected carcinoid tumors was 4.33 mm (range, 1.8-7 mm). the tumor presented as a submucosal tumor in 10 cases and as a polyp in 3. histological examination revealed a positive resection margin in 5 cases and an undetermined resection margin in 1 case. of these 6 patients, 4 underwent endoscopic treatment and 2 underwent transanal excision. no residual tumor was found in the additionally removed tissue.
there were 2 cases of emr-related complications: 1 early post-procedural bleeding and 1 post-polypectomy syndrome. there was no significant bleeding requiring blood transfusion and no perforation. conclusion: endoscopic mucosal resection is considered a relatively safe and useful method for the treatment of small rectal carcinoids in selected patients.

background: disturbance of sexual function frequently occurs after operations for rectal cancer. the relationship between the autonomic nerves and arteries in the pelvis was examined. methods: clinical studies of 15 male patients with resected rectal cancer were performed using the snap gauge method, the penile-brachial index and the evoked bulbocavernosus reflex. in 30 canine experiments, pelvic splanchnic nerve (psn) electrical stimulation, arterial flow measurement, corpus cavernosum pressure measurement and muscle strip studies using drugs were evaluated. results: in the clinical studies of 15 male patients, transection of the hypogastric nerve (hgn) and the sympathetic trunk did not affect erectile function in the postoperative course. in animal experiments, transection of these nerves did not affect the increase in inner pressure of the corpus cavernosum. in postoperative cases in which only one side of the lower-grade branches of the psn (s4) was preserved, erectile function was preserved. in animal experiments in which the psn of one side was disturbed, internal pudendal artery (ipa) flow on the same side decreased, while flow on the other side increased. we evaluated the role of adrenergic components of the psn in erectile function in the dog. the effect of norepinephrine hydrochloride on canine vascular smooth muscle was examined in vitro: vascular smooth muscle strips from the ipa relaxed longitudinally. electrical stimulation of the psn increased blood flow in the ipa and also elevated cavernous pressure. these increases were blocked in part by phentolamine, but not by propranolol or atropine.
the effects of cholinergic and adrenergic agonists and antagonists on mechanical responses were also examined in muscle strips obtained from various arteries in the intra-pelvic region, including the ipa. norepinephrine induced contraction in the iliac artery and relaxation in the ipa, and both responses were blocked by phentolamine but not by propranolol. these findings suggest that in the dog, α-adrenergic components projected through the psn may contribute to penile erection. conclusion: blood flow in the ipa was controlled chiefly by the ipsilateral psn, but compensated for by the contralateral psn. it is also conceivable that erectile function through the psn is controlled by the sympathetic nerve, not by the parasympathetic nerve.

introduction: currently, neoadjuvant chemo-radiotherapy (ncrt) followed by low anterior resection or abdominoperineal resection is the standard treatment for locally advanced rectal cancer. ncrt can improve resectability, achieve better sphincter preservation and reduce local recurrence. although total mesorectal excision is the standard treatment for advanced rectal cancer, recent trends in minimally invasive treatment have led to an increase in local excision or "watch and wait" in patients with an excellent response to ncrt. the purpose of this study, part of an ongoing research project, is to critically evaluate the feasibility of non-operative treatment for rectal cancer in a district hospital. methods and procedures: a total of 29 patients with rectal cancer, who were treated with ncrt from january to august 2017 at the "carlo urbani" district hospital in jesi (italy), were retrospectively reviewed. all patients had histologically confirmed primary adenocarcinoma of the rectum located within 12 cm of the anal verge.
the included patients completed ncrt and had no recurrent disease, distant metastasis or synchronous malignancies. they were classified according to mandard's tumor regression grade (trg) into two clusters: group a (trg 1-3) and group b. results: the average age was 67.2 years and 17 patients were male. five patients underwent abdominoperineal resection, and 76% fell within group a. six patients had involved lymph nodes. four patients suffered relevant complications, including wound complication, anastomotic leak, operative reintervention and death. univariate analysis showed that the main predictors of tumor regression were the absence of lymph-node involvement on initial imaging (p<0.05), a normal initial carcinoembryonic antigen level (p<0.05) and tumor downstaging on imaging (p<0.05). in addition, most relevant complications occurred in elderly patients, although they showed a good clinical response. moreover, 13% of patients were found to be complete pathologic responders on examination of the surgical specimen. conclusions: interest in non-operative management for patients with a complete clinical response after ncrt has been growing, but some studies have suggested a lack of oncologic safety in these patients. patients with a complete clinical response can expect good survival, but they may still harbor residual disease. no consensus on a "watch and wait" policy in rectal cancer has yet been reached. our data do not entirely support this policy, although it might be the best strategy, based on the predictors of tumor regression, to avoid the complications associated with surgery in elderly patients with significant medical comorbidities or fear of a permanent stoma.

introduction: the conventional 5-incision laparoscopic procedure for rectal cancer is now widely accepted as a successful alternative to laparotomy, bestowing specific advantages without detriment to oncological outcome.
evolving from this, single-incision laparoscopic surgery (sils) has been successfully utilized for the removal of colonic tumors, but the literature lacks sufficient data analyzing the suitability of sils for rectal cancer, especially for total mesorectal excision (tme) and particularly regarding oncological outcome. we report the short-term clinical and oncological outcomes from a large retrospective observational study of sils for the tme procedure in rectal cancer. methods: 95 rectal cancer patients who underwent transumbilical single-incision laparoscopic tme surgery were recruited into the current study. short-term perioperative clinical parameters and oncological outcomes were observed, and all patients were followed up after surgery. results: 87 operations were accomplished successfully with single-incision laparoscopy; 7 patients were converted to a multiport approach and 1 to laparotomy; no diverting ileostomy was performed. the mean operative time was (128.5±43.6) min, with a mean blood loss of (75.5±121.7) ml, and the mean postoperative hospital stay was (10.3±2.1) days. all patients received an r0 resection and the surgical margins were confirmed negative in all 87 cases; the mean number of harvested lymph nodes was (18.4±8.9), and the specimens met the requirements of tme. there were 3 postoperative complications; no operation-related mortality or postoperative anastomotic leakage was observed. no patient developed recurrence over a median follow-up of 14 months. conclusions: total mesorectal excision surgery for rectal cancer can be safely performed using a transumbilical single-incision laparoscopic technique, with acceptable short-term clinical and oncological outcomes.

surg endosc (2018). background: any surgical trauma induces an inflammatory response, which is considered a negative factor in the general immune response, especially in malignant disease.
c-reactive protein (crp) is an acute-phase protein often used as a marker of surgical trauma. stent treatment has been used for many years as a treatment option for colonic obstruction in palliative cases, and also as a bridge to surgery in selected cases. in a pilot study we compared the inflammatory response after acute stent treatment versus surgery for malignant colonic obstruction. method: we compared two consecutive series of treatment of acute malignant colonic obstruction, stent treatment or emergency surgery, during 2011-2012. all patients were admitted with acute colonic obstruction due to colorectal cancer. the choice of treatment was based on the attending senior colorectal surgeon's preference; patient comorbidities and disseminated disease were considered. patient age, crp, time to first defecation and length of stay were recorded. results: a total of 31 patients were identified in a retrospective analysis: 15 patients had acute stent treatment and 16 had acute surgical treatment for colonic obstruction, all due to colorectal cancer. median age was 77 y (30-95), with no difference between the groups. there was no difference in metastatic disease between the groups. median time to first defecation after treatment was significantly shorter for the stented patients (39 h (4-73)) compared with those operated on (96 h (24-168)) (p<0,001). median hospital stay was also shorter in the stent group, 6 days (2-32), versus 11 days (7-30) in the surgical group (p=0,016). crp did not differ between the groups before treatment.
both treatments resulted in increased crp levels on postoperative days 1 and 2, but the crp levels were significantly higher in the surgical group than in the stent group at both time points (pod 1 p=0,017, pod 2 p<0,001). conclusion: acute stent treatment in malignant colonic obstruction seems to induce a less pronounced inflammatory response than surgery, as shown by a significantly smaller rise in postoperative crp, together with a shorter time to first defecation and a shorter hospital stay.

introduction: meckel's diverticulum is the most common congenital abnormality in newborns, present in about 2-4% of them. diagnosis of meckel's diverticulum requires a high index of suspicion, and even with the use of modern imaging technologies it is often made intraoperatively. what to do when an asymptomatic diverticulum is found incidentally during surgery for other causes is a matter of discussion. objective: the aim of this article is to report 27 cases, symptomatic or asymptomatic and incidentally found, seen in a fourth-level hospital in colombia. the reports of the histopathologic examinations carried out in the hospital in the last 12 years were reviewed, searching for those containing meckel's diverticulum in their diagnosis. patients were divided into asymptomatic and symptomatic groups. the asymptomatic group was defined as patients who were operated on for a different indication and in whom a meckel's diverticulum was found incidentally. morbidity was divided into early and late complications after the initial surgery. results: from january 2004 to june 2017, a total of 42 pathology reports included the diagnosis of meckel's diverticulum, from which 27 adult patients were retrieved. of these patients, 22 were symptomatic, small bowel obstruction being the most common presentation, and required surgical removal; the remainder were removed incidentally.
conclusion: the correct approach to patients with diverticular pathology allows early identification and appropriate management of the surgical complications that can present.

robert j czuprynski, md, grace montenegro, md; saint louis university hospital. presacral masses are a rare entity, with an incidence of 0.014%, and can be classified in several categories, including inflammatory, neurogenic, congenital, osseous and miscellaneous. in this case, a neuroendocrine tumor was identified with concern for iliac-chain lymphatic and gluteal metastasis. the patient underwent abdominoperineal resection, excision of the presacral mass, lymph node biopsy and omental flap. final pathology returned a grade ii neuroendocrine tumor arising from a tailgut cyst. a 29-year-old female with a ten-year history of recurrent perianal, ischiorectal and deep postanal abscesses presented with a presacral mass, biopsy-proven to be a well-differentiated neuroendocrine tumor. octreotide scan demonstrated avidity in the presacral mass as well as a left intergluteal lymph node and two internal iliac lymph nodes. chromogranin a, neuron-specific enolase and serotonin markers were all negative. the patient was taken to the operating room and underwent abdominoperineal resection, resection of the presacral mass and internal iliac nodes, with an omental flap. neuroendocrine tumors arising from tailgut cysts of the presacral space are rare. in a retrospective study from great britain, four of thirty-one tailgut cysts had undergone malignant transformation, so it is generally recommended to resect these cysts. in this case, the patient's tumor was moderately differentiated, grade ii, with extensive lymphovascular and perineural invasion. there are no prospective studies of neoadjuvant therapy in neuroendocrine tumors of the presacral space. according to nccn guidelines, the patient is currently asymptomatic with low tumor burden.
Recommended treatment at this time is observation with surveillance tumor markers every 3-12 months, or octreotide. Anastomotic leakage has commonly been regarded as one of the toughest postoperative complications in laparoscopic mid/low rectal cancer surgery, attenuating the short-term clinical benefits. The left colic artery (LCA) has routinely been centrally ligated during dissection to guarantee the oncological result, which may contribute to postoperative ischemia-induced anastomotic leakage in patients with left-colic vessel variation, e.g. bypass or absence of the Riolan arch. However, no specific study has focused on the surgical benefits of LCA preservation compared with the conventional technique. Herein, we conducted a single-center randomized controlled trial demonstrating that the LCA-preserving technique shows a significant reduction in the rates of postoperative leakage and overall complications compared with the traditional central-ligation group. No difference in short-term survival or recurrence was found between the two groups. The LCA-preserving strategy proved safe and feasible, potentially reducing the risk of anastomotic leakage with comparable short-term outcomes. Further investigation is required into both the oncological safety and the long-term prognosis of this innovative technique. Background: three-photon imaging (TPI), based on nonlinear optics and femtosecond lasers, has been shown to provide the 3-dimensional (3D) morphological features of living tissues without the administration of exogenous contrast agents. The purpose of this study was to investigate whether TPI could provide a real-time histological 3D diagnosis for colorectal cancer compared with the gold standard, hematoxylin-eosin (H-E). Methods: this study was conducted between January 2017 and August 2017. A total of 30 patients diagnosed with colon or rectal carcinoma by preoperative colonoscopy were included.
All patients received radical surgery. The fresh, unfixed and unstained full-thickness cancerous specimens and the corresponding normal specimens from the same patient were prepared for TPI immediately after surgery. For 3D visualization, the z-stacks were reconstructed. All tissue then went through routine histological processing, and TPI images were compared with H-E by the same attending pathologist. Results: the schematic diagram of TPI is shown in Fig. 1a. Peak TPI signal intensity excited at 1300 nm was detected in living tissues. The field of view (FOV) was 500×500 µm and the imaging depth was 200 µm in each specimen. In normal specimens, glands were regularly arranged with a typical foveolar pattern, comparable to the H-E images (Fig. 1b and 1d). In cancerous specimens, irregular tissue architecture and shape were identified by TPI, which was also validated by the corresponding H-E images (Fig. 1c and 1e). TPI images can be acquired as z-stacks for 3D visualization. Based on rates of correlation with the pathological diagnosis, the accuracy, sensitivity, specificity, positive predictive value and negative predictive value were 95%, 90%, 100%, 100% and 90.9%, respectively. Conclusions: it is feasible to use TPI to make a real-time 3D optical diagnosis of colorectal cancer. With the miniaturization and integration of colonoscopy, TPI has the potential to provide real-time histological 3D diagnosis of colorectal cancer in the future, especially in low rectal cancer. Erica Pettke 1, Abhinit Shah 1, Vesna Cekic 1, Daniel Feingold 2, Tracey Arnell 2, Nipa Gandhi 1, Carl Winkler, MD 1, Richard Whelan 1; 1 Mount Sinai West, 2 Columbia University. Introduction: alvimopan (Alvim) is a peripherally acting µ-opioid receptor antagonist used to accelerate gastrointestinal functional recovery postoperatively (postop) after bowel resection.
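The diagnostic accuracy figures reported for TPI above follow from a standard 2×2 confusion table. A minimal Python sketch, using hypothetical counts chosen only because they reproduce the reported percentages (the abstract does not give the raw table):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic accuracy metrics from a 2x2 confusion table."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# hypothetical counts consistent with the reported 95%/90%/100%/100%/90.9%
m = diagnostic_metrics(tp=9, fp=0, tn=10, fn=1)
print({k: round(v, 3) for k, v in m.items()})
# accuracy 0.95, sensitivity 0.9, specificity 1.0, ppv 1.0, npv 0.909
```

Note that a perfect specificity of 100% forces the PPV to 100% as well, which matches the reported pattern.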
The purpose of this retrospective study was to compare the time to first flatus and bowel movement (BM) as well as length of stay (LOS) following elective minimally invasive colorectal resection (CRR) in a group of patients (pts) who received alvimopan perioperatively (periop) vs a group that did not receive this agent. Methods: a data review from 2000 to 2015 from 2 IRB-approved databases was carried out. Operative, hospital and office charts were reviewed. Routine use of Alvim for elective CRR cases was started in 2013. Besides GI data, preoperative comorbidities and 30-day postop complication rates were assessed. The results with periop Alvim were compared to a no-Alvim group. Student's t and chi-square tests were used. Results: a total of 902 pts underwent elective CRR. Alvim was administered periop to 262 pts (29%). The breakdown of indications between groups was similar. Alvim pts were younger (60.4 vs. 63.8 years old, p=0.002) and, as regards comorbidities, less likely to have heart disease (CAD 4.1% vs 13.9%, other heart disease 13.2% vs 19.5%), but the groups were otherwise similar. The rates of laparoscopic-assisted (Alvim, 80.9%; no Alvim, 68%) and hand-assisted or hybrid operations (Alvim, 19.1%; no Alvim, 32%) were similar. Alvim pts had significantly earlier return of flatus (2.4 vs 2.9 days) and first BM (2.6 vs 3.5 days, p<0.001 for both) than the no-Alvim group. There was also a trend toward a shorter LOS (6.1 vs 6.7 days, p=0.05) for the Alvim group. Overall complication rates were similar; however, Alvim pts had lower rates of postoperative ileus (5.3% vs 14.1%, p=0.0002), SSSIs (5.8 vs 10%, p=0.04), and blood transfusion (7.1 vs 13.0%, p=0.01) than the no-Alvim group. Conclusion: the two groups compared were largely similar (most comorbidities, indications, CRR type), with the differences in age and cardiac issues noted. The impact of the higher rates of SSSIs, blood transfusion, and MI in the no-Alvim group on GI function is unclear.
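The categorical comparisons above rest on a Pearson chi-square over a 2×2 table, which is easy to sketch in plain Python. The counts below are hypothetical, back-calculated only from the reported group sizes (262 vs 640) and ileus rates (5.3% vs 14.1%); they are not the study data:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, p-value) with df = 1.
    For df = 1, the chi-square survival function is erfc(sqrt(x/2))."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# hypothetical counts: ~5.3% of 262 Alvim pts vs ~14.1% of 640 controls with ileus
stat, p = chi2_2x2(14, 248, 90, 550)   # stat is about 13.9, p well below 0.001
```

With these rounded counts the p-value lands near the 0.0002 reported for postoperative ileus, which is a useful sanity check on the degarbled figure.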
Pts who received Alvim periop had an accelerated return of bowel function, decreased postoperative ileus and a shorter length of stay. These results suggest that Alvim is effective in reducing postoperative ileus, but further study is warranted. Background: laparoscopic total proctocolectomy (TPC) is selected for minimally invasive surgical treatment of familial adenomatous polyposis (FAP) and ulcerative colitis (UC). Our policy for TPC is no diverting ileostomy in FAP and a diverting ileostomy in IBD, because most of those patients have received steroid therapy. Objective: we examined the outcome of laparoscopic TPC according to disease, FAP versus IBD (UC and Crohn's disease). Methods: twenty-three consecutive patients who underwent laparoscopic TPC between April 2007 and March 2017 were examined. The patients were divided into an FAP group and an IBD group. Results: seven patients with FAP and 16 patients with IBD (UC 15, Crohn's disease 1) underwent laparoscopic TPC or total colectomy. Among them, 12 patients (FAP 3, IBD 9) were cancer-associated cases. The procedures in the FAP group were TPC with IACA in 6 patients and HALS total colectomy with IRA in 1 patient. The procedures in the IBD group were TPC with IACA in 11 patients, TPC with IAA in 2 patients, and total colectomy with IRA in 3 patients, of which 5 were HALS cases. The mean operative time and blood loss were 318 minutes and 32.0 g in the FAP group and 382 minutes and 86.8 g in the IBD group, respectively. A diverting ileostomy was constructed in 11 patients, all in the UC group. Early complications in the FAP group were observed in 3 cases (postoperative ileus 2, anastomotic leak with conservative treatment 1), and those in the IBD group were observed in 8 cases (ileus 4, anastomotic leak with conservative treatment 1, abdominal abscess 1, wound infection 1). The median postoperative hospital stay was 12 days in the FAP group and 14 days in the IBD group.
Complications requiring reoperation occurred in 2 cases (FAP 1: intestinal obstruction; IBD 1: inflammation of the stoma-closure site). No cancer recurrence or mortality was observed. One FAP case underwent additional transanal mucosal resection due to a new adenomatous lesion. Conclusions: laparoscopic total proctocolectomy for FAP and IBD was performed safely, with fewer complications in FAP patients even without a diverting ileostomy. In addition, follow-up of the remaining mucosa is important in IACA and IRA patients. Treatment of complex anal fistula has always been a nightmare for surgeons by conventional means. Even the lowest and simplest-looking fistula at times turns out to be a complex one, with an incidence of recurrence above 20%. Most of the available diagnostics, including MRI, are not conclusive, and many times the surgeon remains in a state of confusion as to what will be found at the operating table. The conventional treatment modalities also usually leave the patient with a wound needing almost 6 to 12 weeks to heal, with a risk of sphincter damage and a high risk of recurrence. We present the technical details and results of our series of 210 cases of complex anal fistula treated by video-assisted endoscopic therapy. Jun Higashijima, PhD, Mitsuo Shimada, Professor, Kozo Yoshikawa, PhD, Takuya Tokunaga, PhD, Masaaki Nishi, PhD, Hideya Kashihara, PhD, Chie Takasu, PhD, Daichi Ishikawa, PhD; Department of Surgery, the University of Tokushima. Background: one of the important causes of anastomotic leakage (AL) in anterior resection is insufficient blood flow at the stump. The HEMS (Hyper Eye Medical System) and SPIES (laparoscopic ICG system) can detect organ blood flow intraoperatively after injection of indocyanine green (ICG), and thermography can also evaluate blood flow less invasively. The aim of this study was to evaluate the usefulness of ICG systems and thermography in laparoscopic anterior resection.
Patients and methods: this study retrospectively included 86 patients who underwent laparoscopic anterior resection for colon cancer with a double-stapling anastomosis. Blood flow in the oral stump was evaluated by measuring fluorescence time (FT) using HEMS and SPIES, and blood flow was also evaluated by thermography. Results: evaluation by ICG system: overall, the AL rate was 8.1% (7/86 cases). In cases with FT over 60 s, the AL rate was 60%, higher than in cases under 60 s, and these patients needed additional management, a covering stoma or additional resection. In borderline cases (FT 50-60 s), the AL rate was 10.0%, higher than in cases under 50 s. In these borderline cases, if a covering stoma was performed in patients with three or more well-known risk factors, the AL rate was reduced to 2.6% with a false-positive rate of 6.9%. Cases under 50 s needed no additional management. Evaluation by thermography: the temperature of the residual intestine was significantly higher than that of the resected intestine (31.5 vs 29.0 °C, p<0.01), and the temperature in cases with FT under 50 s was significantly higher than in cases with FT over 50 s (26.3 vs 30.8 °C). Temperature and FT tended to be inversely correlated (r²=0.36). Conclusion: both ICG systems and thermography may be useful to avoid anastomotic leakage. Introduction: some patients who undergo neoadjuvant chemoradiation therapy (CRT) for rectal cancer achieve a pathologic complete response (pCR), in which no tumor cells are discovered during pathologic analysis of the resection specimen. Achievement of pCR is correlated with improved prognosis relative to non-pCR counterparts. Such correlations are not well established in the context of a community-based hospital. The study sought to examine response rates, recurrences, and survival in locally advanced rectal cancer patients and compare patient outcomes to those achieved at major academic institutions.
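The FT-based management described in the ICG results above reads as a simple decision rule. The sketch below is only a schematic restatement of the abstract's thresholds; the function, its name, and the reading of "three or more risk factors" are ours, not the authors':

```python
def anastomosis_management(ft_seconds, n_risk_factors):
    """Schematic reading of the FT-based rule suggested by the abstract:
    FT < 50 s: no additional management; FT 50-60 s: covering stoma only
    when at least three well-known risk factors are present; FT > 60 s:
    additional management (covering stoma or additional resection)."""
    if ft_seconds < 50:
        return "no additional management"
    if ft_seconds <= 60:
        # borderline zone: risk factors decide (assumed reading of the text)
        if n_risk_factors >= 3:
            return "covering stoma"
        return "no additional management"
    return "covering stoma or additional resection"

# example: a borderline FT with several risk factors triggers diversion
print(anastomosis_management(55, 3))   # covering stoma
```

This kind of explicit threshold function makes it easy to audit how many of the 86 cases each branch would have captured.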
Methods and procedures: a single-center retrospective chart review was performed at a local, community-based hospital. The study population consisted of 118 patients with locally advanced rectal cancer treated with neoadjuvant CRT followed by surgical resection. Patients with a history of metastasis, inflammatory bowel disease (IBD), hereditary cancer syndromes, concurrent or prior malignancy, or emergent surgery were excluded. Results: 24 patients (20.3%) achieved pCR in the test population. Across both groups, mean age (p=0.352), gender (p=0.254), and ethnicity (p=0.529) were comparable. Mean interval between CRT and OR (p=0.116), pre-op stage (p=0.736), number of nodes (p=0.208), radiation dose (p=0.094), tumor location (p=0.753), and days of follow-up (p=0.497) showed statistically insignificant differences between groups. At 5 years, 26 non-pCR patients (27.7%) had a recurrence, with zero recurrences in the pCR group. 5-year mortality comprised 25 non-pCR patients (26.6%) compared to 1 pCR patient (4.17%). Conclusion: a multidisciplinary approach to rectal cancer consisting of standardized preoperative treatment and surgical resection can achieve patient outcomes and survival similar to those of larger academic institutions, even in the context of a community-based hospital. Objective: the aim of this study was to assess the safety and feasibility of total mesorectal excision (TME) within the holy plane based on embryology for rectal cancer. Methods: prospectively collected data on 36 consecutive patients with rectal cancer who underwent TaTME from November 2014 to August 2017 were enrolled. Surgical outcomes including TME completeness, operative time to TME completion, blood loss, complications, pathological findings and length of hospital stay were assessed. Surgical procedure: after performing rectal lavage, a self-retaining anal retractor was set, and anal dilators were used for atraumatic introduction of the transanal access device (GelPOINT Path).
Three 10-mm trocars and one 15-mm trocar were inserted through the GelPOINT Path in a quadrant arrangement, and the GelPOINT Path was then introduced through the anus into the rectum. After the rectosigmoid colon was temporarily clamped using an atraumatic endo bulldog clip, pneumoperitoneum was maintained at 15 mmHg with carbon dioxide via an AirSeal platform. A purse-string suture using a 0 polypropylene with a 26-mm rounded needle was placed clockwise to tightly occlude the rectum with a 3-cm margin distal to the tumor. After irrigation with saline and marking the dissection line by tattooing the rectal mucosa distal to the mucosal folds, a mucosal transection of the rectum was initiated, followed by a circumferential full-thickness rectal transection. After dissection of the rectococcygeal muscle at 6 o'clock and the rectourethral muscle in the anterior wall, circumferential sharp dissection within the holy plane was performed. Dissection proceeded between the endopelvic fascia and the prehypogastric nerve fascia in the posterior plane, between Denonvilliers' fascia and the anterior mesorectum in the anterior plane, and between the pelvic nerve and the mesorectum, with recognition of the neurovascular bundle, in the lateral plane. The dissection was then connected to the abdominal plane, with the laparoscopic team working together until TME was completed. Results: TME was completed in 34 (94.4%) patients. Thirty-five (97.2%) patients had a negative circumferential resection margin. Mean TME completion time and blood loss were 146 min and 72 g, respectively. One (2.8%) patient had an intraoperative complication and 7 (19.4%) patients had postoperative complications. No other complications occurred. The length of hospital stay was 12 days. Conclusions: TaTME within the holy plane based on embryology is a safe and feasible procedure for rectal cancer. Abstract: acromegaly is a debilitating condition marked by excessive production of growth hormone.
This leads to disfiguration, cardiopulmonary complications, and increased risk for cancer. With up to a two-fold increased risk of developing colon cancer and a worse prognosis for diagnosed patients, earlier and more frequent screening has been recommended. We present the case of a 54-year-old Hispanic male with acromegaly who presented to our hospital with hematochezia and weight loss. A near-obstructing rectal adenocarcinoma with metastasis to the liver was discovered. After completing neoadjuvant chemoradiotherapy, he underwent laparoscopic low anterior colon resection and simultaneous open hepatic trisegmentectomy. In this case report, we review the literature and current guidelines for screening this high-risk group of patients. Introduction: in this study, we found that CME for laparoscopic right hemicolectomy starting at the ileocolic vessel and proceeding along the superior mesenteric artery (SMA) achieved a better oncologic outcome compared with the conventional approach proceeding along the superior mesenteric vein (SMV). Methods and procedures: 46 patients admitted to a Shanghai minimally invasive surgical center were included from September 2015 to January 2017 and were randomly divided into two groups: a study group (n=26) and a conventional group (n=20). Operation time, blood loss during surgery, liquid intake time, postoperative hospital stay, postoperative complications within 30 days after surgery, specimen length, and number of lymph nodes harvested as well as the positive lymph node rate were observed and studied. Results: there was no statistical difference between the two groups with the exception of the number of lymph nodes dissected and the positive lymph node rate for stage III colon cancer. The study group had more lymph nodes retrieved and also a higher positive rate compared with the conventional group. The mean number of lymph nodes retrieved in the study group was 21.8 ± 2.47, while in the conventional group it was 19.9 ± 2.24 (p<0.05).
The positive lymph node rate for the study group was 41.6%, versus 34.4% for the conventional group. Conclusion: when performing laparoscopic right hemicolectomy, dissecting the lymph nodes along the left side of the SMA is achievable, with no differences in surgical outcomes compared with the conventional approach, while the higher number of lymph nodes dissected and higher positive rate probably lead to a better oncologic outcome. Aims: we describe laparoscopic surgery for rectal cancer using needlescopic instruments performed at our department. Methods: from 2012 to 2016, 19 cases of rectal cancer underwent surgery using needlescopic instruments: 3 at the rectosigmoid colon, 5 at the upper rectum, and 11 at the lower rectum. An umbilical camera port (12-mm) and two needlescopic instruments (EndoRelief) were directly punctured into the assistant surgical sites. We started with 5 port sites. In low rectal cancer cases, we maintained good pelvic visualization by lifting the peritoneum of the bladder onto the ventral side using Lone Star retractor stays. Results: the median age was 70 years (56-91 years), with 9 males and 10 females, and body mass index was 21.1 kg/m² (16-25 kg/m²). Anterior resection was performed in 2 cases, low anterior resection in 7 cases, intersphincteric resection in 4 cases, abdominoperineal resection in 4 cases, Hartmann's procedure in 2 cases, and lateral lymph node dissection in 1 case. In addition, one case of T4b (bladder) was converted from laparoscopic to open surgery. However, there were no cases in which needlescopic instruments were replaced with conventional forceps. Moreover, intraoperative complications related to the forceps were not observed. Conclusions: in rectal cancer surgery, needlescopic instruments leave a small postoperative wound; healing is rapid and the cosmetic result is excellent. Surgical safety is comparable to that using conventional forceps.
There is no problem with the rigidity of needlescopic instruments. However, where the shaft is curved, operative control requires attention to mobility and directionality. In low rectal surgery, use of needlescopic instruments is limited due to the curvature of the shaft during dissection of the anterior rectal wall, but it is possible to maintain a good field of view by using auxiliary equipment. Therefore, more cases could be considered for surgery using needlescopic instruments with the help of auxiliary equipment. Introduction: anastomotic leaks are devastating complications of colorectal operations that lead to significant morbidity and potential mortality. Inadequate tissue perfusion is considered a key contributor to anastomotic failure following colorectal operations. Currently, clinical judgment is the most commonly used method for evaluating adequate blood supply to an anastomosis. More recently, intraoperative laser angiography using indocyanine green (ICG) has been utilized to assess tissue viability, particularly in reconstructive plastic surgery. This technology provides a real-time evaluation of tissue perfusion and is a helpful tool for intraoperative decisions, particularly in deciding to revise an intended colorectal anastomosis. Our study aimed to determine whether there is a statistically significant difference in colorectal anastomotic leak or abscess rate using ICG compared to common clinical practice. Methods and procedures: 126 patients undergoing left-sided colorectal operations between March 2012 and February 2015 were retrospectively reviewed. 55 patients' colorectal anastomoses were evaluated using ICG angiography (ICGA) to qualitatively assess tissue perfusion (ICG group). Peri-operative and post-operative outcomes, including anastomotic leak and abscess rates, were compared to 65 patients who had colorectal operations without ICGA (control group).
The primary outcomes of intra-abdominal leak rate and intra-abdominal abscess rate were compared using exact chi-square tests. The secondary outcomes of 30-day return to the operating room, mortality, and readmission rate were compared using chi-square tests. All statistical analyses were performed using SAS software. Results: the two leading indications for surgery were malignancy (n=57) and diverticulitis (n=48). The majority of patients had either a low anterior resection (n=75) or sigmoidectomy (n=42). All operations were primarily minimally invasive. No statistically significant difference was seen between the two groups in regard to patient demographics, rate of proximal diversion (p=0.112), or splenic flexure mobilization (p=0.200). Patients in the ICGA group were more likely to have high IMA ligation than in the control group (70.9% vs. 24.4%, p<0.001). Of the ICGA group, 16 of the 55 patients underwent additional colonic resection while 39 of the 55 did not. There was no statistically significant difference in primary or secondary outcomes between the two groups. Conclusion: ICG angiography has become a helpful adjunct in determining adequate perfusion of an intended colorectal anastomosis. These data are unable to support any difference in patient outcome utilizing this technology over surgeons' visual and clinical assessment. Our results may contribute to larger studies to determine whether there is a true difference in anastomotic leak or abscess rate using this technology. Objective: to investigate the feasibility and surgical strategy of complete mesocolic excision (CME) with a completely medial access by "page-turning" approach (CMAPA) for laparoscopic right hemicolectomy. The CMAPA is a modified medial approach to CME, which focuses on the exploration of the surgical plane instead of the recognition of vessels.
Surgical procedures: (1) start point: the anatomical projection of the ileocolic vessels; (2) expose the whole trunk of the SMV to the level of the inferior edge of the pancreas before ligating any branches, for the purpose of high tie and verifying their location; (3) enter the intermesenteric space (IMS) and right retrocolic space (RRCS) with cranial and rightward extension through the transverse retrocolic space (TRCS); (4) completely mobilize the mesocolon and remove the tumor en bloc. See Figures 1-2. Clinical outcome: from September 2011 to March 2017, 72 patients underwent CMAPA in Shanghai Ruijin Hospital. The average operation time was 135.9 ± 28.3 minutes, average blood loss was 63.2 ± 32.2 ml, number of lymph nodes was 20.6 ± 7.7, average specimen length was 23.9 ± 4.7 cm, flatus time was 2.5 ± 0.8 days, fluid intake time was 3.2 ± 0.8 days and average hospital stay was 8.9 ± 4.7 days. The overall complication rate was 6.94% (5/72). Compared to the traditional medial approach to CME performed in our center, blood loss, operation time and hospital stay were significantly reduced by performing CMAPA for laparoscopic right hemicolectomy. Conclusion: the advantages of the CMAPA are: (1) it avoids the laparoscopic "leverage effect" and "tunnel effect"; (2) it makes the branches of the superior mesenteric vessels more easily recognized; (3) it offers surgeons an alternative route entering the TRCS, IMS and RRCS; (4) it avoids repetitive flipping of the colon, complying with the "no touch" principle, and lowers the demands on assistants. Figure 1: anatomy and surgical planes concerning CMAPA.
Aim: we have reported the possibility of a "one-stop shop" simulation for liver surgery by MRI using gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid (EOB-MRI) (Emerging Technology, SAGES 2017), which is characterized by (1) one-time examination, (2) no radiation exposure, (3) demonstration of liver vasculature including the biliary tract, (4) diagnosis of tumors, (5) volumetry and (6) estimation of liver functional reserve in each segment. The aim of this study was to investigate the usefulness of "one-stop shop" simulation for liver surgery using EOB-MRI. Methods: accuracy of liver vasculature: 3D reconstruction of dynamic EOB-MRI imaging was done with Synapse Vincent software (Fujifilm Medical Co., Ltd., Japan), using a manual tracing method. Visualization of hepatic vessels in EOB-MRI was compared with that in dynamic CT in 10 patients. Assessment of liver functional reserve: the standardized signal intensity (SI) of each segment was calculated as the SI of each segment divided by the SI of the right erector spinae muscle. The standardized total liver functional volume (TLFV) was calculated as ∑[k=1 to 8] (standardized SI of segment k × volume of segment k) divided by body surface area. The following formula for the resection limit was established using 28 normal liver cases (70% of the liver is resectable) and 5 unresectable cirrhotic patients, such as recipients of liver transplantation (0% of the liver is resectable): estimated resection limit (%) = 70% × (standardized TLFV of the patient − 962)/1,076. This formula was validated in another 30 patients who underwent hepatectomy. Results: accuracy of liver vasculature: the liver simulation by EOB-MRI succeeded in demonstrating hepatic vasculature including the biliary tract, diagnosis of hepatic tumors, and volumetry without any radiation exposure. Regarding the vessel anatomy at the hilar area, the biliary tract was more clearly visualized in EOB-MRI.
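The resection-limit formula above is easy to sanity-check numerically against its two calibration anchors. A minimal sketch that simply encodes the formula (the function and variable names are ours):

```python
def resection_limit_pct(standardized_tlfv):
    """Estimated resection limit (%) = 70 x (standardized TLFV - 962) / 1,076.

    The calibration implies that TLFV = 962 (the unresectable cirrhotic
    anchor) maps to 0% resectable, and TLFV = 2,038 (= 962 + 1,076, the
    normal-liver anchor) maps to 70% resectable, with linear
    interpolation in between."""
    return 70.0 * (standardized_tlfv - 962.0) / 1076.0

# the two anchor points used to fit the formula
print(resection_limit_pct(962))    # 0.0
print(resection_limit_pct(2038))   # 70.0
```

Reading the formula this way also explains the fatality in the validation set: a planned resection beyond the line this function draws exceeded the functional reserve.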
Regarding the hepatic artery, the right and left hepatic arteries were well visualized in all cases; however, a small middle hepatic artery was visualized in only one out of 10 patients. Assessment of liver functional reserve: in the validation of the 30 patients, one patient whose resection volume exceeded the resection limit died of liver failure, whereas the other 29 cases within their resection limits did not suffer liver failure. Conclusion: "one-stop shop" liver surgery simulation could contribute to the safety of liver surgery such as laparoscopic hepatectomy, because of no radiation exposure, accurate assessment of anatomical variations (especially of the biliary tract), and help in deciding the resection volume. Key steps of the procedure were shown for viewing. The in-studio program was hosted by an education specialist from the science center and a surgical resident from our institution, with laparoscopic instruments available for manipulation by participants. Participants then viewed a video highlighting the roles of all healthcare providers involved in the featured specialty, including nurses, physicians, dietitians, psychologists, technologists, etc. Live questions and answers were then encouraged between students and surgeons during the surgery broadcast. The program also expanded from high schools to vocational-technical colleges and nursing schools. Results: during the 2008-2009 academic year there were 6 sessions presented to 11 schools, with 421 student participants. By the 2016-2017 year this increased to 19 sessions presented to 55 schools, with 1721 participants. In sum, throughout the first 9 years of the program, there were 395 schools attending, with a total of 11,351 participants. Of polled high school participants, 63% of responders acknowledged considering a career in healthcare after this experience.
Conclusion: over 10 years, our program has grown steadily in popularity, such that schools from several counties attend and regularly return, and we have been asked to expand the program to create a surgical summer camp for students interested in science and technology. Live broadcast surgery in an elective, minimally invasive format provides unique visibility and access to surgical procedures for student audiences and promotes future interest in healthcare careers. Surg Endosc (2018) 32:S130-S359. P296: improving trainees' self-assessment through gaze guidance. Introduction: effective learning to become competent in surgery depends on a trainee's ability to accurately recognize their strengths and weaknesses. However, a surgical trainee's self-assessment is poorly correlated with expert assessment. This study aimed to improve self-assessment via the visual gaze guidance provided through telestration in laparoscopic training. We hypothesized that visual conveyance of where to look or perform actions on the laparoscopic video enhances trainees' awareness of the gaps in their skills and knowledge. Methods and procedures: a lab-developed telestration system that enables the trainer to point or draw a freehand sketch over a laparoscopic video was used in the study (Fig. 1). Seven surgical trainees (1 surgical fellow, 1 research fellow, 2 PGY-2 and 3 PGY-1) participated in a counterbalanced, within-subjects controlled experiment comparing standard guidance with telestration-supplemented guidance. The trainees performed four laparoscopic cholecystectomy tasks (mobilizing the cystic duct and artery, clipping the duct, clipping the artery, and cutting the duct and artery) on a laparoscopic simulator. Performance assessment, adapted from the Global Rating Scale (GRS) instrument, was completed by the trainers and trainees at the end of each task.
The mean self-assessment scores were compared with the trainers' scores using a linear mixed model, with the trainees' performance (as indicated by the trainers' scores) as a control. Assessment alignment was evaluated by Spearman's rho. Results: the trainers' scores were significantly lower than the self-assessment scores under standard guidance, while the scores of trainers and trainees were much more similar under telestration guidance (Fig. 2). The correlation between the trainers' and trainees' assessments in telestration guidance was high (r=0.852, p<0.001) compared to standard guidance (r=0.569, p=0.03). The correlation comparison for each GRS criterion shows a significant increase (p=0.005) in assessment alignment for depth perception in telestration guidance (r=0.90, p<0.001) compared to standard guidance (r=0.30, p=0.31) (Fig. 3). The visual gaze guidance improved the alignment of assessment between the trainer and trainees, especially for depth perception. For visual gaze guidance to become an integrated part of training, further work is needed to understand how gaze guidance changes the nature of the training process. Applying to surgical residency: what makes the best candidates? Yann Beaulieu, BEng, Louis Guertin, MD, FRCSC, Ariane P Smith, MD, Margeret Henri, MD, FRCSC, FACS; University of Montreal. Objective: while quotas for Canadian surgical residency programs are at their lowest point in ten years, the number of graduating Canadian medical students is at an apogee. This year, only 288 spots in surgical residency programs were available for the 2893 students applying to CaRMS. Undergraduate medical students individually collect anecdotal information regarding what influences admission to the surgical subspecialties of their interest, as scarce literature covers the topic.
we thus surveyed surgeons and residents to analyze the relative importance of modifiable factors and innate attributes in the selection of new surgical residents. methods: an electronic survey was sent to all surgeons and surgical residents affiliated with the university of montreal. participants were asked to specify their surgical subspecialty, their status, their level of experience and whether they were an active member of a residency selection committee. the subjective importance of predefined application elements and candidate qualities was assessed using 5-point likert-type items. results: of the 510 surgeons and 207 residents to whom the survey was sent, 136 (26.9%) and 91 (44.0%) respectively completed the survey. evaluations of elective rotations and evaluations of core rotations were considered very important by 79.7% and 62.9% of responders respectively. regarding letters of recommendation, the content was rated very important (58.8%) more often than the notoriety of the author (25.6%). networking with key surgeons was considered the least important element to prioritize, with 23% of responders giving negative assessments. with regards to the fundamental qualities of surgical candidates, the extremes were "clinical judgement", rated very important by 90.1% of responders, and "innate technical ability", by 26.4%. no significant differences in responses were observed between staff and residents, between members and non-members of selection committees, between different levels of surgical experience, or between surgical subspecialties. conclusion: clinical judgement and performance in core and elective rotations, along with strong personalized letters of recommendation, should be prioritized by medical students aiming for a surgical career. kazuhiko shinohara, phd, md; school of health science, tokyo university of technology background and objective: many types of training devices have been proposed since the early days of endoscopic surgery.
however, they are too expensive for daily training of novices. we developed a simple and economical training device made of frozen fruit and agar. material and methods: to make this device, 6 g of agar powder was added to 300 ml of boiling water and boiled for 2 min. the solution was then poured into a stainless steel tray containing frozen blueberries and lychees and refrigerated for 2 h. basic maneuvers required during endoscopic dissection and resection of a tumor with laparoscopic forceps and electrosurgical devices were then performed using this agar model in a conventional laparoscopic training box. results: using this model, endoscopic dissection and enucleation of a tumor with an electrosurgical device could be practiced repeatedly with minimal expense and preparation. background: situs inversus totalis (sit) is a rare congenital anatomy and a challenging condition for laparoscopic surgery because no standardized strategy exists to overcome its anatomical difficulties. mirror-reversed video images of laparoscopic surgeries for patients with normal anatomy could help to develop surgical strategies for patients with sit. we had a chance to evaluate this idea in the treatment of a patient with early gastric cancer, and we describe the surgical results of the case. patient and methods: a seventy-two-year-old woman with sit was referred to our department for the treatment of early gastric cancer, and laparoscopic distal gastrectomy with d1+ lymphadenectomy was scheduled. a video record of the same surgery, performed previously on a patient with similar physical attributes, was retrieved and edited with a computer into a full-length, totally mirror-reversed video of the surgery. the designated operator and assistant simulated the operation using the video several times before surgery.
results: laparoscopic distal gastrectomy with d1+ lymphadenectomy was performed with the operator on the left side of the patient and the assistant on the other side, opposite to the usual positions. laparoscopic b-1 reconstruction then followed, using the "delta anastomosis" technique reported by kanaya et al. the totally laparoscopic procedure was completed with an operation time of 250 minutes and blood loss below measurable limits. no appreciable complications were observed after surgery and the patient was discharged on postoperative day 12. no recurrence of the disease was detected up to 5 years after surgery. conclusion: although further validation is unlikely because of the rare incidence of this anatomy, we would recommend the same technique as one of the preoperative preparations for similar cases. background: surgical simulation is thought to provide a basis for improvement of resident surgical skill training, in the safety of a simulation setting. it is unclear whether surgical skills learned in a simulation curriculum actually contribute to the improvement of surgical skills when transferred to the or. methods: a ten-question online survey was sent to attending surgeons and residents. the questionnaire focused on 5 domains: confidence, independence, transferable skills, improvement of skills/knowledge and time spent on the simulation curriculum. evaluation data was collected and anonymously analyzed. background: minimally invasive surgery poses a unique learning curve due to the requirement for non-intuitive psychomotor skills. programmes such as the fundamentals of laparoscopic surgery (fls) provide mandatory training and certification for many residents. however, predictors of fls performance and retention remain to be described. this single-centre observational study aimed to assess factors predicting the acquisition and retention of fls performance amongst a surgically naïve cohort.
methods: laparoscopically naïve individuals were recruited consecutively from the preclinical years of a medical university. participants completed five visuospatial and psychomotor tests followed by a questionnaire surveying demographics, extracurricular experiences and personality traits. individuals completed a baseline assessment of the five fls tasks evaluated by fls standards. subsequently, participants attended a 270-minute training course over weeks one and two on inanimate box trainers. a post-training assessment was performed in week three to evaluate skill acquisition. participants were withdrawn from laparoscopic exposure and retested at four one-month intervals to assess skill retention. introduction: bipolar energy can cause thermal injury to adjacent organs when used improperly. the sages fuse curriculum provides didactic knowledge on principles and best practices for safety, but there is no hands-on component to practice these skills. the objective of this study is to compare the effectiveness of the fuse curriculum with and without the vest™ bipolar training module. methods and procedures: the study was a mixed design with two groups, control and simulation. after a pre-test that assessed their baseline knowledge, the subjects were randomized to two groups. both groups were given a 10 min presentation, reading materials from the fuse manual and an online didactic module on bipolar energy. the simulation group also practiced on the simulator for one session that consisted of five trials on the effect of activation time on thermal damage and the importance of providing a margin of safety by sealing short gastric vessels. after one week the performance of both groups was assessed using a post-questionnaire. one week after the post-test both groups performed sealing of 10 vessels on an explanted porcine mesentery with the vessels perfused. their performance was videotaped and their activation times were recorded.
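the group comparisons and rater-agreement statistic reported below are rank-based. a pure-python sketch of the mann-whitney u statistic and cohen's kappa (illustrative data and function names are ours, not the study's):

```python
def mann_whitney_u(a, b):
    # u statistic for sample a: count pairs where a_i exceeds b_j; ties count 0.5
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

def cohens_kappa(r1, r2):
    # chance-corrected agreement between two raters scoring the same items
    n = len(r1)
    po = sum(1 for a, b in zip(r1, r2) if a == b) / n  # observed agreement
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)
```

scipy.stats.mannwhitneyu and sklearn.metrics.cohen_kappa_score compute the same quantities with p-values and edge-case handling.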
a total safety score was calculated by assessing the proximity of the location of activation to the intestine, rated by two independent raters. wilcoxon signed-rank and mann-whitney u tests were used to assess differences within and between groups. results: a total of 16 residents (8 in each group) participated in this irb-approved study. median test scores for both groups increased (simulation, p=0.041 and control, p=0.027). no difference was found between the two groups in their pre-test (p=1.0) and post-test (p=0.955) scores, indicating comparable learning. the median total activation time for the control group was higher (42.55 s) compared to simulation (30.6 s) but the difference was not statistically significant (p=0.336). there was moderate agreement between the two raters for margin of safety (kappa=0.58, p<0.001). total safety scores showed no difference between the two groups (p=0.573). conclusions: subjects with simulation training had lower activation time compared to control. training for margin of safety requires more simulation refinement. the small sample size and variations in the explanted models contributed to variability in the data, but even so, simulation training along with the fuse curriculum trended towards being more beneficial than the fuse curriculum alone. the general, that aims to build educational infrastructure and standardize training and education in laparoscopy throughout mexico. ilap participants engage in didactic and hands-on modules in educational theory, laparoscopic techniques, and simulation-based education (sbe), and then develop and implement a 1-day sbe course for local trainees. the purposes of this study were to understand the existing educational environment at a single institution in mexico and measure the changes in perceptions, attitudes, and engagement in surgical education after an intensive training course.
methods and procedures: all 13 faculty and 13 of 25 general surgery resident participants completed a survey that contained 7 items designed to assess the existing educational environment at a large, public hospital in mexico. using a 5-point likert scale, residents self-rated the quality of faculty feedback and the learning environment within their institution (1=strongly disagree, 3=neutral, 5=strongly agree). faculty rated their perceptions of the same educational themes. upon completion of a faculty-led simulation course, residents rated the educational environment during the course. faculty provided additional qualitative feedback. descriptive analyses were performed. irb exemption was obtained through lurie children's hospital. results: discordance existed in perceptions of the existing educational environment. the greatest disparities between resident and faculty perceptions included "faculty provide sufficient feedback in the operating room" (31% vs. 100%), "faculty promote an active learning environment" (38% vs. 85%), and "residents may ask questions without fear of negative evaluation" (46% vs. 100%). faculty and residents agreed with "residents are sometimes afraid to speak up in the operating room for fear of retaliation" (46% each). post-course evaluations (n=19) revealed universal improvement in all educational themes during the simulation course. qualitative feedback revealed that most faculty plan to incorporate open communication and safe learning into their practice. residents were equally positive, with 100% optimistic that they will see changes within the educational environment. conclusions: significant discordance exists in resident and faculty perceptions of the educational environment at a large teaching hospital in guadalajara, mexico.
after participation in the ilap course, residents noted demonstrable change in the faculty approach to education and feedback, and both faculty and residents expressed optimism for increased engagement in education. the immediate successes of the ilap initiative should be followed over time, as the ultimate measure of success is sustainability and scalability throughout mexico. background: laparoscopic anterior resection is technically challenging and the learning curve is long. well-designed formative assessments can provide trainees with effective and constructive feedback, an important element in efficient learning. previously reported assessments for laparoscopic colorectal procedures were developed for summative assessment. we aimed to develop a formative assessment tool to evaluate competence and provide trainees with effective feedback in laparoscopic anterior resection. methods: the assessment tool was developed by an expert panel from mcgill university affiliated hospitals. the procedure was deconstructed into a series of sequential steps including general domains, surgical principles, injury prevention and technical skills specific to laparoscopic anterior resection. the tool contains 12 discrete items with global rating scales for each step of the operation; each domain was scored using a 5-point likert scale, with anchors for scores of 1, 3 and 5. each operation was assessed through direct observation in the operating room by the attending, a trained observer, and the trainees themselves. intraclass correlation coefficients (iccs) were calculated to estimate interrater reliability for (1) attending surgeon and trained observer, (2) attending surgeon and self-assessment, and (3) trained observer and self-assessment. internal consistency was measured using cronbach's alpha. comparison between training levels was done using the mann-whitney u-test.
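cronbach's alpha, used here for internal consistency, compares the sum of per-item score variances to the variance of respondents' total scores. a minimal pure-python sketch (the data layout, one score list per item, is our assumption):

```python
def variance(xs):
    # population variance
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    # items: one list of scores per item, aligned by respondent
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))
```

items that all move together (high inter-item covariance) drive alpha toward 1; independent items drive it toward 0.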
the global operative assessment of laparoscopic skills (goals) was also used to assess trainees' general laparoscopic skills. spearman's correlation was used to determine the association between goals and this procedure-specific tool. overall usefulness of this tool was evaluated using a 10 cm visual analog scale. results: in this pilot study, fourteen operations, performed by 5 experienced surgeons and 5 trainees, were assessed. the icc between (1) attending surgeon and observer was 0.77 (95% ci 0.26 to 0.93), (2) observer and self-assessment was 0.74 (95% ci 0.30 to 0.92), and (3) attending surgeon and self-assessment was 0.43 (95% ci -0.11 to 0.79). the internal consistency of the items was excellent (cronbach's α=0.93). there was a significant difference in median total score between experienced surgeons and trainees (87.2±9.4 vs. 68.8±9.3; p=0.016). there was a strong correlation (r=0.884) between goals and this procedure-specific score. overall usefulness of this assessment tool was rated as 7.4±1.7. all assessments were completed in about 5 minutes. conclusions: we present a new procedure-specific formative assessment tool for laparoscopic anterior resection and provide preliminary evidence of its reliability and validity. this formative assessment tool could be used for constructive feedback and tracking performance in competency-based surgical training. cullen introduction: one of the key challenges to the proliferation of endoscopic submucosal dissection (esd) in the west has been a lack of training platforms. therefore, the virtual endoluminal surgery simulator (vess) is being developed as a training tool for esd. the aim of our study is to inform the design of vess using cognitive task analysis (cta), a human factors engineering framework used to describe practitioners' mental models and cognitive processes and incorporate insights into the simulator's design.
methods and procedures: cta-based interview questions were developed to probe the cognitive challenges and strategies employed at each stage of the esd procedure. six esd practitioners were interviewed for varying lengths of time. two of these interviews were conducted simultaneously during an observation of a training workshop where the cta participants were instructors (total observation time was five hours, and interview time was about 60 minutes). another interview was conducted during observation of esd procedures (total observation time was 22 hours, and interview time was about 110 minutes). participants had varying levels of experience in esd, with 4 of them being 'super-experts' (practicing esd exclusively), 1 an 'expert' and 1 a fellow. a cta of the data is currently being conducted to systematically inform the design of functionalities in the simulator. results: analysis of our data highlights a few prominent themes at each stage of esd: goals; challenges (e.g., avoiding perforation of the muscularis); points of decision-making (e.g., partial or full incision for boundary demarcation); skills involved (e.g., dissection); and ambiguity (e.g., unclear lesion boundaries). participants also described risks associated with each stage of esd and strategies to prevent or overcome them. conclusions: qualitative data for a cta were collected through observations and interviews of esd practitioners. preliminary analysis has indicated prominent themes to consider in the design of the training simulator. the next step in the study is to conduct a full-scale cta of esd based on the current data. the ultimate benefit of the cta would be to incorporate the results into the design of vess in a way that is compatible with the mental models of esd trainees, thus enhancing the fidelity and effectiveness of the simulator.
background: colonoscopy is an important diagnostic and therapeutic procedure in the management of colonic disease; achieving competence during residency is an integral part of performing high-quality colonoscopy in practice, regardless of specialty. there is debate and controversy, however, regarding what number of procedures, if any, achieves said proficiency. furthermore, there is significant heterogeneity in the current guidelines and in studies published to date on the definition of competence in colonoscopy. objective: to determine individualized learning curves as an alternative to 'number of procedures' for assessing colonoscopy competence. methods and procedures: this is a multi-institutional prospective cohort study involving eleven surgical trainees (novice endoscopists). the main outcome, colonoscopy competence, was assessed by determining the independent colonoscopy completion rate (iccr), the number of procedures required to reach a 90% independent colonoscopy completion rate, and the polyp detection rate. individual and overall iccr were calculated using moving average analysis. conclusions: while a benchmark for a minimum number of procedures may be necessary to allow supervisors to adequately assess performance, it is difficult to determine what number is optimal. there appears to be significant heterogeneity in the overall number of colonoscopies completed by each resident, as well as in the mean iccr and the number of procedures required to reach the current benchmark for competency. the use of learning curves allows real-time tracking of progress and training tailored to the individual, as we move forward in the era of competency-based medical education. background: with the growing popularity of robotic-assisted surgery, new methods for evaluation of technical skill are necessary to determine when a surgeon is qualified to perform an operation independently.
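the moving-average iccr analysis in the colonoscopy study above can be sketched as follows (the window size and the success data are illustrative; the study does not state its window):

```python
def moving_average(successes, window):
    # successes: 1 if the colonoscopy was completed independently, else 0
    return [sum(successes[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(successes))]

def procedures_to_benchmark(successes, window, target=0.9):
    # first procedure count at which the windowed completion rate reaches target
    for idx, rate in enumerate(moving_average(successes, window)):
        if rate >= target:
            return idx + window
    return None  # benchmark never reached in this series
```

plotting the moving average against procedure number gives the individualized learning curve; the benchmark crossing differs per trainee, which is the heterogeneity the study reports.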
current evaluation methods are limited to 5-point likert scales, which require a degree of subjective scoring. surgeons in training need an objective method of evaluation to view progress and target areas for improvement. one method of objectively evaluating surgical performance is a cumulative sum control chart (cusum). by plotting consecutive operative outcomes on a cusum chart, surgeons can view their learning curve for a given task. another method of objective evaluation is the dv logger®, or "black box," which records objective measurements directly from the da vinci® system. methods: we followed two hpb fellows during dry lab simulation of 40 robotic-assisted hepaticojejunostomy reconstructions using biotissues to model a portion of a whipple procedure. we simultaneously recorded objective measurements of dexterity from the da vinci® system and performed cusum analyses for each procedural step. we modeled each variable using machine learning (a self-correcting and autoregressive modeling tool) to reflect the fellows' learning curves for each task. statistically significant objective variables were then combined into a single formula to create an operative robotic index (ori). results: variables that significantly improved over the course of the simulation included completion time (p=0.017), economy of motion in arm 1 (p=0.001), number of times the head was removed from the console (p=0.001), total time the left master manipulator was active (p=0.005), total time the right master manipulator was active (p<0.001), and total time that any arm was active (p<0.001). the inflection points of our cusum charts and the plots of objective variables both showed improvement in technical performance beginning between trials 14 and 16 (figs. 1 and 2). the operative robotic index showed a strong fit to our observed data and improved with additional trials (r²=0.796) (fig. 3).
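a cusum learning curve of the kind described plots the running sum of deviations from a target outcome; an inflection toward a falling curve marks the trial where performance starts beating the target. a minimal sketch (the target value and times are illustrative, not the study's data):

```python
def cusum(outcomes, target):
    # cumulative sum of (observed - target); a falling curve means improvement
    total, curve = 0.0, []
    for value in outcomes:
        total += value - target
        curve.append(total)
    return curve

# hypothetical step-completion times (minutes) against a 10-minute target:
# the curve rises while the fellow is slower than target, then turns down
curve = cusum([12, 11, 9, 8], target=10)
```

the inflection points reported in the results correspond to where such a curve changes direction.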
conclusions: in this study we identified objective variables recorded by the da vinci® system which correlated with the technical dexterity of fellows during a robotics dry lab. we broke a complex procedure down in stepwise fashion with cusum analyses to determine targets for improvement. using variables which correlated with the improved performance of the fellows, we effectively modeled the learning curve with the creation of an operative robotics index (ori). this study successfully models the learning curve of novice robotic surgeons using a novel combination of objective measures. georg wiese, md, paula veldhuis, steve eubanks, md, facs, scott w bloom, md, frcsc, facs; florida hospital institute for surgical advancement introduction: robotic surgery is a specialized skill which requires time and resources to master. in a general surgery residency program that seeks to train surgeons competent in open, laparoscopic and endoscopic techniques, it is difficult to see where adding robotic training will be of benefit and at what cost to the remaining surgical skills. we therefore sought to ascertain robotic surgery's current role in the training of new general surgeons by soliciting the opinions of current general surgery program directors on the role of robotic surgery at their respective institutions. methods: an irb-approved survey was created and sent to general surgery program directors across the country to assess how robotic surgery training is being integrated into current surgical training. the survey was sent via email to publicly available email addresses of program directors from the acgme website. it was voluntary in nature and consisted of questions regarding the current status of robotic training in residency as well as future goals. results: the overall response rate for our pd survey was 12% of the 266 surgical programs with addresses available via acgme, though responses continue to be submitted at the time of this abstract.
approximately 48% of all respondents are from independent, university-based programs. 85% felt that robotics was an emerging skillset important for residents to master, versus 15% feeling that it was more appropriate for fellowship. all respondents noted that robotic surgeons were present at their institution, 90% within the core faculty, and 50% indicated that they were actively recruiting robotically trained surgeons. additionally, 95% of programs indicated that residents were exposed to robotic surgery, 81% of these on core general surgery rotations. 62% of respondents indicated that they had a formal robotic training curriculum, with 81% of programs taking measures to integrate robotics into the future curriculum, though 71% lacked specific milestones for such training. finally, opinion was evenly divided among respondents as to whether one could sign off on residents to perform robotic-assisted cases upon completion of the pgy-5 year, with 45% agreeing with that statement and the remainder indicating some additional training would be necessary. conclusions: our study highlights the emerging field of robotic-assisted mis surgery and its increasing role in residency training. it is evident from the data that robotic surgery is a growing part of the residency experience. importantly, however, milestones were significantly lacking for determining resident progress in robotic training. introduction: in chile, medical students have the opportunity to undertake a month-long medicine elective (me) in a community hospital, primary care center or emergency department within the country at the end of their first clinical year. due to the lack of opportunities to practice suturing in the first years, students usually do not perform optimally in this type of medical procedure during the me. simulation training programs in suturing improve technical skills, self-confidence and patient safety in the medical internship.
the objective of this study is to evaluate the impact of implementing a simulated suture training program earlier in the medical curriculum, before the me. methods: we conducted a prospective, randomized controlled trial with 50 medical students at the end of their first clinical year. they were randomized into two equal groups. the intervention group received an intensive suture training program consisting of one theory class, four practical sessions and effective feedback from an expert surgeon. the control group did not receive training, remaining with the classic opportunistic learning approach during the me. after the me, all students undertook an electronic survey. statistical analysis was performed on the answers of both groups. per-protocol analysis was applied. results: there were no statistical differences between groups in terms of age and sex. four students did not complete the training program. one student in the control group did not reply to the survey. higher self-confidence with regards to suturing was reported in the intervention group in comparison with the control group [10/21 (48%) vs 4/29 (14%), p<0.001]. also, a greater student desire to carry out suture-related procedures was reported in the intervention group than the control group [16/21 (76%) vs 11/29 (38%), p<0.001]. in addition, a lower rate of overseeing-physician intervention was reported in the intervention group [3/21 (14%) vs 14/29 (48%), p<0.001] (table 1). a greater number of patients requiring sutures were treated by the intervention group than the control group, with a median of 4 patients (3-7) against 2 (1-4). the intervention group also performed a higher number of sutures, with a median of 17 (6-31) vs 7 (2-16); the difference was statistically significant (p<0.05) in both cases (fig. 1).
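group proportions like 10/21 vs 4/29 form a 2x2 table that is commonly tested with fisher's exact test; the abstract does not name the exact test it used, so the following is only an illustration of how such a comparison can be computed from scratch:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    # two-sided fisher exact p-value for the 2x2 table [[a, b], [c, d]]
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # hypergeometric probability of a table with x in the top-left cell
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    # sum the probabilities of all tables at most as likely as the observed one
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# e.g. confident vs not-confident students in the two groups:
# fisher_exact_p(10, 11, 4, 25) for 10/21 intervention vs 4/29 control
```

scipy.stats.fisher_exact gives the same two-sided p-value along with the odds ratio.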
conclusion: a simulated suture training program prior to the me generates a positive impact on medical students by improving self-confidence and the desire to attend to patients who require sutures. this leads to a higher rate of both exposure to suture techniques and suture execution. introduction: measuring performance in the operating room (or) is challenging. performance is a multifaceted construct: a complex interaction of many behaviors and actions that reflect an individual's knowledge and skill. no assessment tool to date provides an expertise-based, comprehensive evaluation of the various aptitudes necessary to excel in the or, especially with respect to advanced cognitive skills. using qualitative methodologies, we previously defined behavioral themes that guide surgeons' behaviors, decisions, and actions, within a universal framework of 5 domains that reflect intra-operative performance. the purpose of this pilot study was to use this framework to derive a comprehensive assessment tool and to obtain evidence for its validity as a measure of intra-operative performance. methods: an assessment tool was developed by a panel of 9 surgeons and 5 surgical trainees based on the five-domain model of intra-operative performance: 1) psychomotor skills; 2) declarative knowledge; 3) interpersonal skills (two items); 4) personal resourcefulness; and 5) advanced cognitive skills (ten items). all items were rated on an ordinal scale of 1 (inadequate) to 5 (expert) and equally weighted. surgical residents and surgeons from a single academic center were evaluated on their performance during standard general surgery operations, for example, open inguinal hernia repair and laparoscopic cholecystectomy. for residents, there were 2 evaluators: the attending surgeon and an observing surgeon. attending surgeons evaluated their own performances and were also assessed by 2 observing surgeons.
internal consistency, inter-rater reliability, and correlation of total scores with training level (junior residents, senior residents, staff surgeons) were calculated. likert-scale questionnaires were administered to evaluate the tool's usability, feasibility, and educational value. results: fifteen subjects (5 junior residents, 5 senior residents, 5 surgeons) participated. the total score on the assessment demonstrated significant differences between training levels (figure). inter-rater reliability was high (intraclass correlation coefficient=0.87), as were internal consistency between each domain score (cronbach's alpha=0.95), internal consistency amongst items in the advanced cognitive skill domain (cronbach's alpha=0.99), and internal consistency amongst items in the interpersonal skills domain (cronbach's alpha=0.99). all assessments required less than five minutes to complete. overall, evaluators agreed that the assessment tool was easy to use, was comprehensive, and should be used routinely throughout training to track performance and provide formative feedback. conclusion: in this pilot study, we developed a comprehensive assessment tool for intra-operative performance and provide preliminary validity evidence for the score. introduction: the purpose of this study was to evaluate the validity of our developed system for assessing suturing skills in laparoscopic surgery (fig. 1). we have updated the number of participants and the comparison method relative to last year's report. methods and procedures: fig. 1 shows our developed computerized system for objective assessment of suturing skills using a laparoscopic intestinal suturing model, e-lap. the system includes a new artificial intestinal model that mimics living tissue, and pressure-measuring and image-processing devices. each examinee performs a specific skill using the artificial model, which is linked to a suture simulator instruction evaluation unit.
the model uses internal air pressure measurements and image processing to evaluate suturing skills. five criteria, scored on a five-grade scale, were used to evaluate participants' skills (fig. 2). the volume of the air pressure leak was determined from the volume of air inside the sutured artificial intestine. for example, for the criterion "air pressure leakage", the approximate midpoint of the acceptable range was grade 3. values lower than the minimum acceptable value received lower grades and those above the midpoint of the acceptable range higher grades. we enrolled 277 surgeons who participated in a simulator competition event at the 29th annual meeting of the japan society for endoscopic surgery (jses 2016). houston methodist hospital, baylor college of medicine introduction: the sages flexible endoscopy course for minimally-invasive surgery (mis) fellows has been shown to improve confidence and skills in performing gi endoscopy. this study evaluated the long-term retention of these confidence levels and investigated how fellows have changed practices within their fellowships as a result of the course. methods: participating mis fellows completed surveys six months after the course. respondents rated their confidence to independently perform sixteen endoscopic procedures (1=not at all; 5=very). while the pre- and post-course surveys identified anticipated endoscopy uses and barriers to use, the 6-month follow-up survey evaluated actual usage and barriers to use in each fellow's practice. respondents also noted participation in additional skills courses and the status of fundamentals of endoscopic surgery (fes) certification. comparison of responses from the immediate post-course survey to the 6-month follow-up survey was examined. mcnemar and paired t-tests were used for analyses. results: twenty-three of 57 (40%) course participants returned the 6-month survey. 26% had passed the fes skills examination and 17% had attended another flexible endoscopy course.
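mcnemar's test, named in the methods above, compares paired yes/no responses using only the discordant pair counts. a continuity-corrected pure-python sketch (the p-value uses the chi-square(1) upper tail via erfc; the counts below are illustrative, not the study's):

```python
from math import erfc, sqrt

def mcnemar_chi2(b, c):
    # b, c: discordant pair counts (yes->no and no->yes), continuity-corrected
    return (abs(b - c) - 1) ** 2 / (b + c)

def mcnemar_p(b, c):
    # chi-square with 1 df upper tail: P(X > x) = erfc(sqrt(x / 2))
    return erfc(sqrt(mcnemar_chi2(b, c) / 2))
```

statsmodels.stats.contingency_tables.mcnemar offers the same test with an exact binomial option for small discordant counts.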
no major barriers to endoscopy use were identified. in fact, fellows reported less competition with gi providers as a barrier to practice compared to their original post-course expectations (50% versus 86%, p<.01). in addition, confidence was maintained in performing the majority of the 16 endoscopic procedures, although fellows reported significant decreases in confidence in independently performing snare polypectomy (−26%; p<.05), control of variceal bleeding (−39%; p<.05), colonic stenting (−48%; p<.01), barrx (−40%; p<.05), and tif (−31%; p<.05). fewer fellows used the gi suite to manage surgical problems than was anticipated post course (26% versus 74%, p<.01). fellows without fes certification reported loss in confidence to independently perform barrx (−54%; p<.05) and colonic stenting (−63%; p<.01), and also a 58% decrease in the use of the gi suite to manage surgical problems (p<.05). fellows who passed fes noted no significant loss of independence, changes in use, or barriers to use. 18% of fellows made additional partnerships with industry after the course. 41% stated flexible endoscopy has influenced their post-fellowship job choice. 100% would recommend the course to other fellows. the sages flexible endoscopy course for mis fellows results in long-term practice changes, with participating fellows maintaining confidence to perform the majority of taught endoscopic procedures six months later, and over 40% reporting that flexible endoscopy influenced their career choice. additionally, fellows experienced no major barriers to implementing endoscopy into practice. materials and methods: at our center, we formulated a laparoscopic mentorship program where a senior consultant was paired with a particular trainee resident for a period of 6 weeks. 12 consultants & 12 residents were part of the study. the or schedules were rearranged to accommodate these pairs.
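the paired yes/no survey comparisons above were analyzed with mcnemar's test, which uses only the discordant pairs (respondents who changed their answer between the two surveys). a minimal sketch with hypothetical counts, not the study data:

```python
# minimal sketch of mcnemar's test statistic for paired yes/no survey
# responses (e.g. post-course vs 6-month follow-up); the counts below
# are hypothetical, not the study data.

def mcnemar_chi2(b, c):
    """b, c: discordant pair counts (yes->no and no->yes).
    returns the continuity-corrected chi-square statistic (1 df)."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# hypothetical: 9 fellows changed yes->no, 2 changed no->yes
chi2 = mcnemar_chi2(9, 2)
print(round(chi2, 3))  # compare against the 1-df chi-square cutoff 3.84 for p < .05
```

concordant pairs carry no information about a shift, which is why only the off-diagonal counts of the paired 2x2 table appear in the statistic.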
an evaluation of the residents' views was performed prior to the study and once at its completion, using a simple questionnaire with each parameter scored between 1 & 10. results and discussion: continuous, consistent evaluation by a consultant over an extended period of time allowed them to assess their assigned resident's laparoscopic skill set. all pairs observed an increased frequency of errors being noticed & improved upon. the consultants stressed shedding undesirable operative habits. there was a significant improvement in residents' scores at the end of the short study. conclusion: we found that the short-term mentorship program was easy to incorporate within our or schedule and was well received by the participants. continuous short rotations under senior consultants appear to allow residents not only to fully observe and absorb correct operative techniques, but also to shed unfavorable habits. we are currently amid the second cycle of our study & look forward to the results at the end of this academic year. introduction: colorectal cancer is one of the most common cancers in the united states. endoscopic submucosal dissection (esd) is an emerging minimally invasive technique that allows complete en-bloc resection and a much lower recurrence rate at long-term follow-up. however, performing colorectal esd is technically demanding, since the colorectal wall is thin and constantly moving, and complication rates (e.g., bleeding and perforation) are potentially higher. hence, adequate training for colorectal esd is needed to acquire basic proficiency with minimum complications. objectives: a virtual reality (vr)-based simulator with visual and haptic feedback for training in colorectal esd is being developed, with the aim of allowing trainees to attain competence in a controlled environment with no risk to patients.
in this work, we describe a newly developed application of the virtual simulator that enables endoscopists to perform and assess technical skills in esd. training tasks are built on physics-based computational models of human anatomy with tumors. methods: the main modules of the vr-based simulator for colorectal esd are: (1) rendering; (2) haptic interface; (3) physics-based simulation; and (4) performance recording and assessment metrics. the rendering engine allows surgical tasks to be performed in the three-dimensional virtual environment. haptic feedback mechanisms allow users to physically feel the interaction forces. physics-based simulation technologies are employed to enable the complicated simulation of virtual surgical tool-tissue interactions. the simulator can also collect learners' performance data to offer feedback based on the built-in metrics. results: four training tasks involving marking, solution injection, circumferential cutting, and submucosal dissection are designed to practice skills with different surgical tools. the marking task aims to identify the lesion. the solution injection task minimizes the risk of bleeding and perforation to protect the muscularis. in the circumferential cutting task, the objective is the initial incision of the lesion with the surgical tools. the objective of the dissection task is to remove the tumor from the connective tissue of the submucosa under the lesion. conclusions: the vr-based simulator enables realistic esd tasks, providing a means to develop, validate, and objectively evaluate performance metrics in colorectal esd training, and offers an opportunity to climb the learning curve before application to patients. background: the virtual translumenal endoscopic surgery trainer (vtest) simulator is a virtual reality system that was designed to train the hybrid-notes technique.
transfer of skill acquired while training on the vtest was measured in a near-real cholecystectomy procedure staged in the easie-r model. methods: sixteen medical students were divided randomly and evenly into 2 groups: control and training. all subjects performed the cholecystectomy procedure on the vtest simulator to establish a baseline (pre-test). the training group received 15 training sessions over a period of 3 consecutive weeks, each consisting of 5 trials or as many trials as could be accomplished in one hour, whichever came first. at the end of the training period, all subjects performed one trial on the vtest simulator (post-test), and again 2 to 3 weeks later (retention test). two months after that, subjects performed the hybrid-notes cholecystectomy procedure on an easie-r model. performance with the easie-r simulator was video-recorded, and three tasks within the cholecystectomy procedure were isolated for evaluation: clipping, cutting, and dissecting the gallbladder. objective performance measures, such as time and error, were extracted from the videos by two independent reviewers, while subjective performance was scored by four expert surgeons who were blinded to the training conditions. expert reviewers used a modified version of the operative performance rating system by the american board of surgery and the objective structured assessment of technical skills (osats) tool. results: there was no difference in task completion time between the control and training groups (t(10)=1.045, p=.161) in the cutting and clipping tasks. however, there was a significant difference in the number of errors, t(10)=−1.847, p=.047. there was no difference in subjective performance between the groups for the clipping and cutting tasks.
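the between-group comparisons above report statistics of the form t(10)=1.045, i.e. a two-sample t statistic with its degrees of freedom. a pure-python sketch of the pooled-variance version (statistic and df only, no p-value), using hypothetical completion times:

```python
# minimal sketch of the pooled two-sample t statistic behind comparisons
# like t(10)=1.045 above; the completion times are hypothetical.

def pooled_t(a, b):
    """returns (t statistic, degrees of freedom) for two samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)        # pooled variance
    t = (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2

# hypothetical completion times (minutes), control vs training, 6 per group
t, df = pooled_t([12, 14, 11, 15, 13, 12], [10, 11, 9, 12, 10, 11])
print(df)  # -> 10, matching the df in t(10) for two groups of six
```

with 6 subjects per group the df is na+nb−2=10, which is why the abstract's statistics carry 10 degrees of freedom.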
in the gallbladder dissection task, however, there was a statistically significant difference in "instrument handling" based on one of the surgeons' ratings (t(14)=1.919, p=.03), and a statistically significant difference in "time and motion" based on another surgeon's rating (t(14)=2.118, p=.03). conclusions: results indicate that 3 weeks of training on the vtest simulator did not allow the subjects to transfer their learned skills equally to the near-real environment, even though they retained the skills when tested for retention. this new insight suggests that modification of the training method for different types of surgical skills may be warranted to optimize their transfer to the real environment. conclusions: this study provides evidence to suggest that for bariatric surgeons, experience and skills acquired in performing non-bariatric surgery may not translate to improved outcomes in bariatric surgery. as seen in this study, improvement in bariatric surgical outcomes is likely more dependent on experience specifically performing bariatric procedures. as there may be no benefit acquired from performing surrogate procedures, this may have implications for the design of subspecialty training programs and for accreditation purposes. a universally adjustable cellphone holder was used so that smartphones could be placed inside the fls box to capture the task from a similar angle as the onboard camera. residents were able to use their own smartphones to record their performance on each of the five fls tasks in high definition (hd) quality. after each practice session, they would upload their videos to a designated folder on a password-protected computer in the simulation lab. this folder was linked to a cloud-based storage system to which the fls instructor had exclusive access. the faculty was able to review each video within the next 24 hours and provide immediate feedback to the residents via email, over the phone, or in person.
the video library of performance also allowed the instructor to track the progress of the residents and whether they had reached proficiency level in all five tasks to take the fls examination. this program was offered to all surgical trainees. results: utilization of the simulation lab to practice fls tasks increased significantly across all postgraduate years after implementation of this model. six residents took the fls examination. the passing rate of the residents remained the same (100% before and after), but their scores in fls manual skills improved significantly compared to the group prior to implementation. the residents evaluated this change positively and reported that the use of videos and immediate feedback by faculty was a valuable intervention in their learning experience. conclusions: smartphone cameras are readily available and can be used for telementoring. incorporation of telementoring into standard proficiency-based fls training can promote self-directed learning and improve access to experts for immediate feedback as a crucial element of effective training in the acquisition of laparoscopic skills. background: it is important to verbalize individual procedures and to evaluate laparoscopic training objectively and qualitatively. recently, task training and sham operations using a virtual simulator have been carried out for medical students as basic laparoscopic maneuver training, but there are few reports of objective qualitative evaluation of such training. in this study, we investigated rubric evaluation as a qualitative evaluation for laparoscopic training. materials and methods: one hundred and six 5th-year students of tokushima univ. participated. basic laparoscopic task training (gummy band ligation, beads transfer, delivery of beads, gauze excision) with a training box and sham laparoscopic cholecystectomy with a virtual simulator were performed.
task execution time and a rubric evaluation, which includes verbalized evaluation standards for each maneuver, were recorded before and after basic task training and the sham operation. students considered poor at laparoscopic maneuvers were defined as those whose post-practice execution time exceeded their pre-practice time in more than two tasks. the relationship between this group and the group whose rubric self-evaluation was higher than the tutor's was investigated. results: in basic task training, average task execution time across all students was shortened after practice compared with before practice, but examined individually, 6 students exceeded their pre-practice time in more than two tasks. rubric evaluation in basic task training showed no difference between self-evaluation and evaluation by the tutor before or after practice. in sham laparoscopic cholecystectomy, both students and the tutor gave higher rubric scores after practice compared with before practice. some students scored themselves higher than the tutor did, especially for extension of the operative field by elevation of the gallbladder, exposure of the triangle of calot, and exposure of the cystic duct. students who rated themselves highly on many maneuvers of the sham laparoscopic cholecystectomy also exceeded their pre-practice time in more than two basic tasks. conclusions: because rubric evaluation explicitly verbalizes the key points of each maneuver, it was useful as an objective qualitative evaluation for laparoscopic training. introduction: bariatric surgery candidates have the opportunity to research bariatric surgeons and hospitals prior to scheduling their elective surgery. pre-operative information sessions are important tools for bariatric surgeons to provide patient education while increasing their patient population. online education is becoming increasingly popular, but its utility over in-person education is uncertain.
our objective was to compare patients attending the two most commonly used educational formats, online (webinars) and in-person (seminars), and determine which were more likely to undergo bariatric surgery. methods: we conducted a retrospective cohort study of 2,700 patients who attended pre-operative information sessions from january 2014 to december 2016 by reviewing data maintained in the obesity, prevention, policy and management (oppm) database at our institution. the patients were divided into two groups: those who attended an in-person session (n=785) and those who attended an online session (n=1,915). the proportion of patients who went on to have bariatric surgery was compared between the two groups. to characterize the study sample, patient demographics, the surgeon providing the information session, and the procedure performed were compared between groups. a multivariate logistic regression model was applied to compare the effectiveness of in-person and online sessions. results: of 2,700 patients analyzed, 71% attended online information sessions (77% female, mean age 42). the remaining 29% attended in-person information sessions (73% female, mean age 46). analysis found that 21.1% of patients who attended online information sessions went on to have a bariatric surgical procedure, while 32.6% of patients who attended in-person sessions did. after controlling for differences in age and gender, results of multivariate logistic regression analysis indicate that patients who attended in-person sessions were 71% more likely to have a bariatric surgical procedure than patients who attended an online session. introduction: knot security is the ability of knots to resist slippage as force is applied, and knowing the optimal number of throws to ensure a secure knot improves efficiency and outcome.
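the "71% more likely" finding above corresponds to an adjusted odds ratio of about 1.71 from the logistic regression. a pure-python sketch of the unadjusted odds ratio and its 95% confidence interval, with cell counts reconstructed approximately from the percentages reported above (32.6% of 785 in-person, 21.1% of 1,915 online); since the published figure is age- and gender-adjusted, the unadjusted value here differs slightly:

```python
# minimal sketch: unadjusted odds ratio and 95% ci from a 2x2 table.
# counts are approximate reconstructions from the reported percentages,
# not the exact study data; the published or (~1.71) is adjusted.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed with/without outcome; c/d: unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # se of log(or)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# in-person: ~256 had surgery, ~529 did not; online: ~404 vs ~1511
or_, lo, hi = odds_ratio_ci(256, 529, 404, 1511)
print(f"or={or_:.2f}, 95% ci {lo:.2f}-{hi:.2f}")
```

the standard error is computed on the log scale (the sum of reciprocal cell counts), which is why the interval is exponentiated back at the end.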
the literature on the accepted number of throws per type of suture material has been largely anecdotal, often citing three throws for silk, four for polyglactin 910 (vicryl), five for polydioxanone (pds), and six for polypropylene (prolene). we report a pilot knot-tying study of four suture types to determine optimal numbers of throws. materials and methods: four senior general surgery residents (pgy-5 and above) and four attending surgeons participated. participants viewed a standardized instructional video and a one-handed knot-tying tutorial. they were instructed to tie one-handed knots, beginning each knot with two throws in the same direction, and to square the third and subsequent throws in the opposite direction. each surgeon tied 64 knots, using different types of 2-0 suture material: silk, polyglactin, polydioxanone, and polypropylene. suture types were evaluated using 3, 4, 5, or 6 throws. the participants were randomized to both suture type and order of throw numbers. the knots were then tested on the f.a.s.t. knot tester (sawbones, vashon island, wa) for slippage (insecure knot) or breakage (secure knot). generalized estimating equation (gee) analysis was used to determine the optimal throw number. results: 512 knots were individually tested on the knot tester for slippage and recorded as % slipped (see table). the percentage of slipped knots varied by participant and ranged from 5 to 67%. generalized estimating equation analysis suggested that the only significant variable when determining knot security was the number of throws (p=0.02), not suture type or participant training level. the optimal number of throws for 2-0 silk, polydioxanone, and polypropylene was five, whereas six throws were optimal for polyglactin. conclusion: knot security is dependent on the number of throws placed, and the optimal numbers were higher in our study than the commonly accepted numbers of throws.
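the gee analysis above accounts for the clustering of repeated knots within each surgeon, which needs a statistics package; the underlying raw tabulation (percent slipped by number of throws) can, however, be sketched in pure python. the records below are hypothetical, not the study's 512 knots:

```python
# minimal sketch of the raw tabulation underlying the gee analysis above:
# % slipped by number of throws. records are hypothetical (throws, slipped).
from collections import defaultdict

def slip_rates(records):
    """records: iterable of (throws, slipped_bool). returns {throws: % slipped}."""
    tally = defaultdict(lambda: [0, 0])          # throws -> [slipped, total]
    for throws, slipped in records:
        tally[throws][0] += int(slipped)
        tally[throws][1] += 1
    return {t: 100.0 * s / n for t, (s, n) in sorted(tally.items())}

knots = [(3, True), (3, True), (3, False), (4, True),
         (4, False), (5, False), (5, False), (6, False)]
print(slip_rates(knots))
```

a plain per-throw proportion like this ignores the within-surgeon correlation; the gee model is what lets the study claim significance for throw number while treating each surgeon's 64 knots as correlated observations.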
introduction: laparoscopic skills can be learned using portable simulators, and these skills are transferrable to the operating room. several training regions within the uk have therefore developed and delivered home-based laparoscopic training programmes for junior surgical trainees. although performance improved in some, overall engagement has been poor. similar results have been observed in north america. the aim of our study was to uncover the reasons for poor engagement with home-based simulation, with a view to developing a future, more successful programme. methods: this was a qualitative study utilising focus groups. interviews were undertaken with key stakeholders involved in various laparoscopic home-based simulation programmes throughout the uk. training equipment comprised the eosim portable simulator paired with online training tasks. the tasks were similar to those used in the fundamentals of laparoscopic surgery (fls) programme. basic metric feedback was provided (e.g. time to complete task). a total of 45 individuals were interviewed, including surgical trainees, consultant trainers, training directors and programme faculty. this generated approximately 7 hours of data, which was coded using nvivo software. a basic thematic analysis was performed. results: trainees cited multiple competing professional commitments as a barrier to engaging with home-based simulation. they tended to focus on scoring 'points' which contributed toward career progression rather than on tasks which were interesting or associated with personal development. this approach is perpetuated by the surgical training system, which rewards trainees with points for publications and exams, but not for operative skill. this leads to conflict between trainers and trainees, the former expecting trainees to instead focus upon developing their technical abilities.
trainees were unsatisfied with metric feedback and wanted individual feedback from consultant trainers (attending equivalent). trainees generally perceived consultants as lacking interest in the programmes and in training in general. however, some consultants were in fact unaware of the programmes being delivered, and others felt they lacked the confidence to deliver the necessary training to trainees. conclusions: our findings are widely generalizable and have implications for any institution delivering a similar programme. as a means of improving engagement, the inception of scheduled simulation study days, providing trainees with the opportunity for personalised feedback from consultants, has been suggested. equipping trainers with the necessary competencies to deliver training can be achieved by ensuring attendance at the relevant professional development courses. tackling the 'box-ticking' culture is more challenging and may involve a move toward restructuring the current surgical training scheme. introduction: to provide evidence for the face and content validity of a hybrid active-shooter team-training simulation and the impact of a hybrid curricular model on learners' engagement and performance. this study was conducted because hospitals are increasingly threatened by active-shooter incidents, and no dedicated training is currently available for hospital staff members. methods: thirty-five volunteers (medical students, residents and other allied health providers) from the university of minnesota affiliated medical centers were randomly selected and divided into control and experimental groups. the control group (n=14) was given a traditional lecture-style presentation. the experimental group (n=21) participated in the hybrid curriculum, which included augmented reality, kinesthetic simulation, and debriefing components. following both curriculum styles, nasa task load index (tlx) surveys were completed by each group member.
a final active-shooter simulation experience was presented and evaluated by active-shooter-trained raters using a checklist of critical actions from the department of defense. a post-simulation nasa tlx survey and post-test were provided. to assess the face and content validity of a hybrid team-training simulation exercise preparing healthcare personnel for a hospital-related active-shooter crisis, a 5-point likert-scale survey measured the realism, utility, and applicability of this type of training, while engagement and performance during the simulation were measured using a nasa tlx survey and contrasted with the raters' evaluation. our study provided evidence to support the face and content validity of an active-shooter simulation team-training curriculum as a useful adjunct to health care institutional safety planning. we demonstrated that this type of training requires an optimal level of cognitive activation to increase learners' engagement and performance. we concluded that the hybrid design of our curriculum was successful in delivering these optimal levels of cognitive stimuli by producing an engaging team-training simulation experience capable of motivating our learners to acquire the tactical skills and life-preserving behaviors consistent with better survival opportunities during a hospital-related active-shooter crisis. introduction: the virtual electrosurgical skill trainer (vest) provides surgeons and trainees with a hands-on approach to learning best practices in electrosurgery. it comprises five modules covering tissue effects, stray currents, bipolar tools, monopolar tools and or fire safety. the module in this study teaches the origins of stray currents and shows the learner how they can cause damage to non-target tissues via direct and capacitive coupling. the aim of this study was to assess learning using the vest system.
methods: this irb-approved study followed a single-group pretest-posttest design and was conducted at the sages 2017 learning center. thirty-eight subjects participated; 42% were attending surgeons, while the rest were medical students, residents and fellows. 37% of subjects had prior fuse exposure, while the remainder had none. subjects were asked to complete a five-question multiple-choice questionnaire before and after using the simulator. it assessed their knowledge of topics such as direct coupling, capacitive coupling and insulation failure. participants then used the simulator to complete three tasks. first, the subject used direct coupling to seal a vessel and observed the desired effects and potential pitfalls. in the second task, the subject was immersed inside the peritoneal cavity and was directed to use the active electrode to observe how the activation of energy can cause capacitive coupling. in the third task, the subject practiced evaluating the insulation of electrosurgical tools for defects. wilcoxon's signed-rank test was used to compare pre- and post-test scores, and the mann-whitney u test was used to compare groups of subjects as a function of fuse experience. results: the median score on the pre-simulator assessment was 60% and the post-simulator median score was 80% (p=0.035). there was no statistically significant difference in pre-assessment scores between attending surgeons and the others (p=0.148). subjects with prior fuse exposure scored significantly higher on the pre-module assessment compared to those with no prior fuse exposure (80% vs 40%, p=0.024). in the post-assessment, their median scores were 80% and 60%, respectively (p=0.019). conclusions: the vest simulator module successfully increased participants' overall knowledge of coupling in electrosurgery regardless of level of surgical experience.
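the wilcoxon signed-rank test used above for the paired pre-/post-test scores ranks the absolute score changes and compares the rank sums of increases versus decreases. a pure-python sketch of the statistic only (no p-value), with hypothetical quiz scores:

```python
# minimal sketch of the wilcoxon signed-rank statistic used above to
# compare paired pre-/post-test scores (statistic only, no p-value;
# the quiz scores are hypothetical).

def wilcoxon_w(pre, post):
    diffs = [b - a for a, b in zip(pre, post) if b != a]   # drop zero diffs
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):                      # average ranks over ties
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + 1 + j) / 2                  # mean of 1-based ranks i+1..j
        for k in range(i, j):
            ranks[order[k]] = avg
        i = j
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# hypothetical pre/post quiz scores (%); a small w favors a real shift
print(wilcoxon_w([60, 40, 60, 80, 60], [80, 60, 80, 80, 60]))
```

zero differences are discarded before ranking, so subjects whose score did not change contribute nothing to the statistic.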
participants with prior exposure to the fuse curriculum had greater knowledge of this topic at baseline compared to participants without any fuse exposure. introduction: the objective of this study was to assess the reliability of a modified notechs rating scale for the evaluation of medical students' non-technical (nt) skills. the importance of physician nt skills for the safe care of patients is receiving increasing attention in the literature. tools to assess nt skills, such as notechs, which addresses communication, situation awareness, cooperation, leadership, and decision-making, have been shown to be valid and reliable. despite its importance, the assessment of the nt skills of medical students, our future physicians, has received little attention. methods and procedures: twenty-seven medical students participated in 1 of 6 acute care simulated scenarios, each approximately 10 minutes long. video recordings of student performance were reviewed and assessed using a modified notechs rating tool adapted for these scenarios with input from a team of clinicians, nurses, and human factors specialists. the rating scale ranged from 0 to 6, 0 representing very problematic behavior (e.g., not vocalizing concerns or decision process) and 6 representing model behavior (e.g., identifies future problems and remains calm in response to unexpected events). two reviewers rated all videos independently on the 5 notechs domains and specific subscales. student scores in each nt skill domain and interrater reliability were assessed. results: a summary of the scores for each notechs domain is shown in table 1. the highest overall average score of a participant was 4.9 while the lowest was 1.5. the intra-class correlation (icc; two-way random model) was 0.66, and the cronbach's α coefficient was ≥0.62. the lowest icc agreement was in the situation awareness domain (0.59) while the highest agreement was in leadership (0.73).
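the two-way random single-measures icc reported above, often written icc(2,1), can be computed from the anova mean squares of a subjects-by-raters table. a pure-python sketch with hypothetical ratings:

```python
# minimal sketch: two-way random single-measures icc (icc(2,1), as
# reported above) from anova mean squares, pure python.
# the ratings are hypothetical: rows = students, columns = raters.

def icc_2_1(x):
    n, k = len(x), len(x[0])               # subjects, raters
    grand = sum(sum(row) for row in x) / (n * k)
    row_means = [sum(row) / k for row in x]
    col_means = [sum(row[j] for row in x) / n for j in range(k)]
    ss_total = sum((v - grand) ** 2 for row in x for v in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_r = ss_rows / (n - 1)                       # between-subjects
    ms_c = ss_cols / (k - 1)                       # between-raters
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

ratings = [[4, 4], [2, 3], [5, 4], [1, 2], [3, 3]]   # 5 students, 2 raters
print(round(icc_2_1(ratings), 2))
```

unlike cronbach's alpha, this icc penalizes systematic rater offsets through the ms_c term, which is why it is the appropriate agreement measure for the two-reviewer design described above.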
conclusion: medical students' nt skills during acute care simulated scenarios vary significantly as measured by a modified notechs assessment. this newly developed tool provides a framework for educators to evaluate medical students' nt skills during simulation training. it further identified domains where students scored lower, such as situation awareness, which could be targeted for education. the moderate icc, in the 0.5-0.75 range, shows that further refinement of the tool is needed to reliably assess the constructs. future steps to obtain validity evidence include additional raters and applying the tool in non-simulated settings. introduction: a general misperception of the real concept of robotic surgery seems to be revealed in our clinical practice. despite its introduction years ago, robotic surgery is still surrounded by many myths and beliefs. before designing a trial to see whether these false perceptions could impact outcome, we measured this misperception by a survey. moreover, we tested whether medical school today is able to give future doctors the necessary knowledge about robotic surgery. with the same survey, we explored feelings about the introduction of artificial intelligence in medicine and the perception of the consequences of a larger use of technology in medicine. methods and procedures: a multiple-choice survey was designed and anonymously administered via the platform surveymonkey (http://www.surveymonkey.com). a total of 55 questions were selected by the research team and included in the survey. the questionnaire was divided into three parts: the first gathered information on the participant population; the second asked specific questions about robotic surgery; the third focused on technology use in medical education. results: we received and analyzed 81 questionnaires, 70 of which were fully completed.
many undergraduates (ug) consider robotic surgery "experimental", would prefer open surgery on themselves, and see robotic surgery as a risk for damaging the patient-surgeon relationship. the situation is better for medical students (ms), but considerable diffidence was still encountered. 25% of ug consider robotic surgery "experimental" vs only 2.7% of ms (q22). most thought robotic surgery had been in use for only 10 years or less (q23); 12.12% of ug and 32.43% of ms gave the right answer (p=.03). almost 66% of ug see robotic surgery as a risk for damaging the patient-surgeon relationship; this is not seen among ms (q29) (p=.007). 40% of ug are fearful of robots being used to operate on them. this fear is significantly reduced among medical students (p=.05). ug were less familiar with the indications and uses for robotics; ms gave a correct response more frequently (q31, 15.15% vs 37.84%, p=0.04). conclusions: our results indicate that, nowadays, robotic surgery is subject to many misperceptions and a generally low level of information. this general picture is partially mitigated during medical school, but the level of knowledge is still low. a major effort seems mandatory to clarify every technical aspect, and an ethical debate about robotics, technology and ai as part of the medical curriculum is advisable. background: learning theory states that a certain level of physiological stress or cognitive activation is required to achieve optimal task engagement and performance by learners. our study sought to determine whether a hybrid team-training curriculum inclusive of a task-oriented interactive virtual environment could help achieve the optimal level of cognitive activation required to produce higher task engagement and performance. methods: a total of thirty-five medical professionals from the university of minnesota participated in several team-training simulations. participants were randomly assigned to experimental and control groups.
the experimental group (n=21) was exposed to a hybrid team-training module, consisting of a task-oriented augmented reality phase followed by second and third phases consisting of a kinesthetic simulation scenario and debriefing, respectively. the augmented reality phase presented trainees with an interactive 360-degree image of the same clinical room where the simulation would take place, allowing "situated learning" to occur. during the learning phase, trainees were encouraged to interact and communicate with each other while completing the tasks, allowing "social learning" to take effect. the control group's (n=14) educational component consisted of a traditional audiovisual lecture-style introductory presentation, a simulation, and debriefing. after completing their respective educational components, each group completed a nasa task load index survey to assess the cognitive load experienced under the individual educational models. subjects were then exposed to a final simulation (test simulation) similar in content and structure to the initial simulation. this was followed by a second nasa tlx survey. raters evaluated both groups' level of engagement and performance using a validated checklist of critical actions. results: the experimental group showed higher weighted overall nasa cognitive load index scores than the control group (p=0.0029) prior to the test simulation. the weighted nasa score remained elevated in the experimental group following the test simulation, whereas in the control group the post-simulation nasa assessment revealed a decrease in cognitive load (p=0.0079). expert raters using a validated checklist determined that 93.75±6.25% of the experimental (hybrid curriculum) group and 37.50±7.22% of the control group appeared more engaged and performed better during the simulation.
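the "weighted overall nasa cognitive load index" reported above combines six subscale ratings (0-100) with weights derived from 15 pairwise subscale comparisons, the weights summing to 15. a sketch with hypothetical numbers, not the study's survey data:

```python
# minimal sketch of the weighted overall nasa-tlx score reported above:
# six subscale ratings (0-100) combined with weights from 15 pairwise
# comparisons (weights sum to 15). all numbers below are hypothetical.

def nasa_tlx_weighted(ratings, weights):
    """ratings, weights: dicts keyed by the six tlx subscales."""
    assert sum(weights.values()) == 15, "weights must come from 15 pairings"
    return sum(ratings[s] * weights[s] for s in ratings) / 15

ratings = {"mental": 70, "physical": 20, "temporal": 60,
           "performance": 40, "effort": 65, "frustration": 35}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(round(nasa_tlx_weighted(ratings, weights), 1))
```

each weight counts how many of the 15 pairings a subscale won, so a subscale the respondent never chose (here "physical") drops out of the weighted score entirely.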
Conclusions: Pre-simulation task-oriented augmented-reality learning environments designed to incorporate situated- and social-learning virtual experiences can provide the optimal cognitive boost, resulting in higher participant engagement and performance during team-training simulation scenarios.

Introduction: Despite the great importance of laparoscopy, medical students in Brazil have only brief contact with this surgical specialty during medical school, usually during the surgery clerkship in the final years. Consequently, few students perform clinical research or develop interest in this area during graduation. Objective: To awaken medical students' interest in laparoscopy early in medical school, improve the development of clinical research projects, and prepare new generations of minimally invasive surgeons. Discussion: The Academic League of Videolaparoscopy was created in 2010 under the guidance of Dr. Gustavo Carvalho of the University of Pernambuco, Brazil. An academic league is a group of medical students guided by a tutor to develop three areas: research, teaching, and clinical practice. Every year, new students join the league after selection by a multiple-choice test and an analysis of the curriculum vitae. The students are encouraged to participate in laparoscopic procedures as observers, learning about the techniques and instruments, and minimally invasive surgery lectures and courses are held throughout the year. General surgery residents can also take part in the program as tutors; they are encouraged to present lectures and to assist with research projects. Fifty medical students participated in this program over 7 years: 50% pursued a surgical specialty after graduation, and 30% completed a minimally invasive surgery fellowship.
Conclusions: Students who participate in the league's activities show increased interest in pursuing the path to becoming a laparoscopic surgeon.

Background: Surgical education is an active and adaptive process of developing knowledge and technical and non-technical skills. The rise of social media has created a paradigm shift in surgical education, with online learning platforms offering exposure to real-time content, expert instruction, and global collaboration. While these disruptive technologies evolve, their influence on surgical education has not been investigated. Our goal was to evaluate the growth and impact of an online surgical education model, the Advances in Surgery (AIS) Channel. Our hypothesis was that utilization of and engagement with the platform continues to grow, providing novel measures of successful education. Methods: The platform's membership demographics, user activity, and engagement were assessed from its inception in 2013 to the second quarter of 2017. The AIS Channel provides free, high-quality, innovative content from elite surgeons in scheduled and continuously available formats across colorectal, bariatric, and endocrine surgery service lines. Users log in to access content, with demographics, time spent, and content accessed recorded as measures of active account utilization and engagement. The main outcome measures were overall membership trends and utilization patterns by region, content type, and surgical specialty. Results: Users were predominantly male (81.2%) and surgeons (92.9%), with the largest age group 47-56 years (24.6%). The main surgical subspecialty represented was colorectal (52.6%). Active account usage/weekly recurrence was 60.1% (vs. a 10% industry benchmark), with users engaged for a mean of 32 minutes per session (excluding live events). Since inception, steady exponential growth has been seen across several dimensions.
Registered users and unique IP addresses increased from over 3,000 and 190,128 in 2013 to over 43,000 and 2.1 million in 2017, respectively. The number of countries represented increased to 183 across 6 continents. To date, over 76 live surgeries and 16 live congresses have been broadcast from 26 countries, with over 2,000 surgical videos available on demand to facilitate surgical education. The greatest engagement is seen with live surgical broadcasts. Conclusion: Our analysis demonstrated proof of concept for a unique online model able to provide effective surgical education. Success was validated by the increase in overall users, sustained active account usage, and global penetration. A user preference for live surgical broadcasts was observed. Knowing the utilization and preference patterns, the platform can continue to evolve and enhance the learner experience. With this growth and penetrance, there is the potential to globally improve patient outcomes and the quality of care provided.

Background: A realistic simulator for transabdominal preperitoneal (TAPP) inguinal hernia repair would enhance surgeons' training before they enter the operating theater. The purpose of this study was to evaluate the efficacy of a 3D-printed TAPP simulator in assessing preoperative skill. Methods: 15 surgeons at our institution were enrolled in this study. They performed a simulated TAPP repair, and performance was measured using a TAPP checklist; the simulator allows performance of all procedures required in TAPP repair. The correlations between postgraduate year (PGY), age, number of laparoscopic operations performed (more than 100 vs. fewer than 100), number of TAPP repairs performed, and the performance score were evaluated. Results: A strong correlation was found between the number of TAPP inguinal hernia repairs performed and the performance score (r=0.705).
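The experience-score relationship above is summarized by a Pearson correlation coefficient (r=0.705). A from-scratch sketch of that computation; the case counts and simulator scores below are hypothetical, invented only to illustrate the formula:

```python
import math

# Pearson correlation coefficient from first principles:
# r = covariance(x, y) / (sd(x) * sd(y)), computed over paired observations.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: prior TAPP repairs vs. simulator checklist score.
tapp_cases = [0, 2, 5, 10, 25, 60, 120]
sim_score = [35, 40, 48, 55, 70, 82, 90]
r = pearson_r(tapp_cases, sim_score)  # strong positive correlation
```

An r above roughly 0.7, as reported in the abstract, is conventionally read as a strong positive linear association.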
However, the correlations between PGY or age and the score were weak.

Introduction: As the field of laparoscopic surgery grows, the need for standard measures of complex laparoscopic surgical skills is apparent. Fundamentals of Laparoscopic Surgery (FLS) testing is required to complete general surgery residency, but there is no standard metric to convey expertise in advanced laparoscopic procedures. In an effort to develop a standardized assessment of laparoscopic suturing expertise, a group of experts was surveyed using Delphi methodology to reach consensus on observed laparoscopic suturing skills reflective of expert-level performance. Methods: Expert laparoscopic surgeons participated in serial surveys via REDCap (Research Electronic Data Capture). Experts included surgeons who perform >25 laparoscopic procedures per year involving intracorporeal suturing, recruited from the authors' personal and professional networks. Using a 5-point Likert scale, participants were asked to agree or disagree that each of 30 observed laparoscopic suturing skills indicates expert-level performance. These skills were chosen from prior assessment instruments in the literature and the authors' previously published work. Items were considered to meet criteria for consensus, and were eliminated from the next round of the survey, after reaching 80% consensus as "strongly agree". Results of each round were shared with participants at the start of the next round. The predefined endpoints for the Delphi were a maximum of 4 rounds, 80% consensus on each skill, or failure of >50% of initial respondents to return for subsequent surveys. Results: After the first round of the Delphi survey, 17 respondents met inclusion criteria. Preliminary data showed 4 skills that reached consensus (>80% of respondents chose "strongly agree"): forehand suturing, avoiding tissue trauma, having a technically acceptable final product (i.e.,
tight closure), and tying a secure knot at the end of suturing. Four items did not approach consensus (<80% of respondents chose "strongly agree" or "agree"): alternating hands for each throw while tying, never missing a target when grabbing needle or suture, alternating the direction of throws when tying, and backhand suturing. Data from all four survey rounds, as well as the final draft of the assessment instrument, will be available at the time of presentation. Conclusion: Preliminary data from this Delphi study allowed us to reach consensus among a group of expert laparoscopic surgeons on the characteristics of expert laparoscopic suturing, which will allow creation of a comprehensive assessment tool for this domain. Validation of such a tool will help advance the surgical field toward true competency-based credentialing and promotion.

This study was designed to assess knowledge of the Safe Cholecystectomy Program (SCP) among European surgeons (specialists and residents). Additionally, surgeons' opinions on the usefulness of each of the SCP rules were gathered, and the data were analyzed for differences between residents and specialists. The aim is to lay the groundwork for an educational program and to increase the safety of elective laparoscopic cholecystectomy by minimizing the occurrence of common bile duct injury (CBDI). Methods: Data on knowledge of the SCP and opinions on the usefulness of its rules were gathered with an anonymous questionnaire distributed among participants of several surgical conferences in Poland. The questionnaire asked about the surgeon's experience in terms of cholecystectomies performed and the number of complications in the form of CBDI, then listed the SCP rules and asked the surgeon to rate the usefulness of each rule on a 10-point scale. Gathered data were subjected to statistical analysis, and a comparison between specialists and residents was performed. The study has been registered at ClinicalTrials.gov (NCT03155321). Results: Some surgeons reported familiarity with the SCP, although these numbers are still low.
Significant differences in mean usefulness scores between residents and specialists were observed for two rules: rule 2 was found more useful by residents (mean score 7.07 vs. 6.01, p=0.008), whereas rule 3 was found more useful by specialists (mean 8.74 vs. 8.36, p=0.009). Awareness of the SAGES Safe Cholecystectomy Program in Poland is still low and needs to be promoted. Both surgical residents and specialists consider the SCP rules useful during surgery, although there are slight differences in usefulness scores between the groups. An educational program to promote and further implement the SCP should be established.

Introduction: Transanal total mesorectal excision (TaTME) has attracted substantial interest among colorectal surgeons throughout the world. The technical challenges of the technique, however, have been acknowledged by early adopters and may underpin the early reports of visceral injuries occurring during the perineal phase. Evidence from previous surgical training programs suggests that a structured proctorship programme can shorten the learning curve, reduce operative time, and, most importantly, reduce major complications. The aim of this study was to report on the first national pilot training initiative, developed in the UK to ensure safe introduction of this technique. Methods: A pilot training programme for the UK was established in partnership with the healthcare industry and supported by the Association of Coloproctology of Great Britain and Ireland. The programme consists of three phases: (i) development of a consensus on the optimal TaTME training curriculum with all relevant stakeholders, including experts, early adopters, and potential learners; (ii) piloting of the curriculum; and (iii) assessment and quality-assurance mechanisms to monitor training and measure outcomes.
Results: A cohesive multi-modal training curriculum was developed, providing clear guidance on case selection and supporting multi-disciplinary and multi-modal training, including online modules, dry-lab and purse-string simulators, cadaveric training, and a formal clinical proctoring programme. The UK pilot programme opened for applications in May 2017 and, after a rigorous selection process, was launched in September 2017 with 10 trainers mentoring 10 consultant colorectal surgeons from five centres. Learners were selected on the basis of suitable case volume and prior experience in laparoscopic rectal surgery, and objective assessment tools were applied to an unedited video of a laparoscopic rectal surgery case from each applicant. The selected centres were given access to the iLapp TaTME app, with educational content including operative video footage, prior to attending a bespoke cadaveric workshop. Each learner will then benefit from a structured, centrally organised and funded proctorship programme at their own institution. A global assessment score form has been specifically designed to monitor training, and a formal accreditation process using a competency assessment tool will be used to sign off each learner. Data on the cadaveric workshop and initial outcomes of the clinical mentorship will be presented at the conference. Conclusion: A competency-based pilot training programme for transanal total mesorectal excision has been launched in the UK to support safe introduction of this technique.

Practicing on an FLS trainer box is effective but requires a large amount of consumables and is scored subjectively. The purpose of this study was to evaluate the face validity of the intracorporeal suturing task on a virtual Fundamentals of Laparoscopic Surgery simulator (virtual FLS). We hypothesized that the virtual FLS would demonstrate face validity.
Methods and procedures: After a video demonstration and a practice period, twenty-three medical students and residents completed an evaluation of the simulator. Participants were asked to perform the standard intracorporeal suturing task on each of the virtual FLS and the traditional FLS box trainer, with the presentation order of the devices balanced. Performance scores on each device were calculated based on time (seconds), deviation from the black dots (mm), and incision gap (mm). Participants then completed a 13-question questionnaire on the face validity of the simulator, answering on a scale from 1 (not realistic/useful) to 5 (very realistic/useful). A Wilcoxon signed-rank test was performed to identify differences in performance on the virtual FLS compared with the traditional FLS box trainer. Results: Responses to 10 of the 13 questions (76.9%) averaged above 3.0 out of 5. The highest-rated items included the degree of realism of the target objects in the virtual FLS compared with the FLS (3.87).

Presently, most training methods for thoracoscopic esophagectomy use live porcine models, which present several problems, including cost, long preparation times, and ethical issues, and these problems prevent frequent training. No alternative models for thoracoscopic esophagectomy training currently exist. We report, for the first time, the development and use of a non-biomaterial training model for thoracoscopic esophagectomy. Methods: We collaborated with Sunarrow Co., Ltd. (Tokyo, Japan) to develop the training model. We created organ models for the esophagus, trachea, bronchus, aorta, vagus nerve, recurrent nerve, bronchial artery, lymph nodes, vertebrae, azygos vein, and thoracic duct, and filled the models with a polyvinyl alcohol hydrogel. The gaps between organs were filled with a filler material mimicking connective tissue.
We chose synthetic resins that closely mimic the characteristics (rigidity or elasticity) of each organ. After each organ was fixed in place, the model was covered with a filler to create a pleural membrane, allowing training in peeling (dissection) maneuvers. In addition, because a patient plate was attached to the rear of the training model, excision with an energy device was possible, more closely simulating surgical conditions. Results: Use of the training model resulted in a highly satisfactory experience for three trainees, who were able to learn anatomical positions and the sequence of surgical procedures, including endoscope handling.

Centre for Rural Health, Aberdeen University. Introduction: As doctors become expert in a complex procedure, they develop automatic nuances of performance that are difficult to explain to a peer or trainee (so-called "unconscious competence"). Traditional methods for sharing expertise have limitations: concurrent reporting alters the flow of the task at hand, while retrospective reporting is subject to bias and often incomplete. iView Expert is a technique validated in the aerospace domain that externalises an expert's cognitive processes without disrupting the task at hand. The aim of this project was to assess the feasibility of adapting the technique to medical training. Methods: This was an observational case study in which an expert endoscopist wore a head-mounted camera to capture a complex procedure (colonoscopy). The captured video was reviewed during a facilitated debrief that externalised the expert's cognitive processes; the debrief was recorded to form an audio commentary. The video and accompanying audio commentary formed a learning package that was watched by a specialty trainee. The technique differs from standard procedural videos in providing more detailed insight into the expert's cognitive processes.
This is achieved through the debrief, which encourages reflection on kinaesthetic (head movement) as well as auditory and visual cues, resulting in a higher level of experiential immersion. Questionnaires examined the acceptability and educational value of the technique using Likert scales and free-text answers. Quantitative data were presented as basic descriptions of agreement with statements; qualitative data from free-text responses were coded to identify key themes. Results: The expert agreed that wearing the camera was acceptable and interfered with neither the procedure nor usual decision-making processes. Qualitative analysis revealed the debrief process to be associated with a high level of experiential immersion: "as if they were there". Both the expert and the trainee strongly agreed that the process was educationally valuable and that they learned something new, and qualitative analysis demonstrated that the technique revealed useful and unique nuances of the procedure. The intervention could represent a powerful adjunct to existing training methods, especially among more experienced practitioners. We are currently undertaking a larger study involving a greater range of procedures and more learners.

Introduction: Endoscopy is an important skill for general surgeons to possess, yet training within surgery residency programs is lacking. We implemented a one-day endoscopic surgery course with the aim of improving surgical residents' confidence in performing endoscopic procedures, and examined the effect of exposure to this course on self-reported confidence. Methods and procedures: The Fundamentals of Endoscopic Surgery course at Texas Tech University Health Sciences Center is a one-day course consisting of both didactic and lab training.
The didactic part of the course is taught by attending physicians and focuses on the basics of endoscopy, management of upper and lower gastrointestinal (GI) bleeding, and techniques for performing a variety of GI endoscopic procedures on swine esophagus and stomach explants. The lab portion allows residents to perform different endoscopic procedures with guidance from attending physicians. Residents from PGY-1 to PGY-5 participated in the course. A 14-item questionnaire measuring self-reported confidence in performing several endoscopic procedures on a 1-5 Likert scale was administered before and after the course. Results: Twenty-two participants completed the training and the questionnaires. A significant improvement was observed in overall confidence in performing a variety of endoscopic procedures (1.231±0.384, p<.001), and the improvement remained significant after controlling for years of postgraduate surgical training (p<.001). Conclusion: The one-day Fundamentals of Endoscopic Surgery course made residents more confident with endoscopic procedures. Overall, residents felt the course was helpful and would like to attend more than one session per year. The course should be held at least annually to allow general surgery residents to become still more confident in this important skill; by being more confident in their surgical endoscopy skills, they will ultimately be able to provide better care for patients.

Introduction: A course evaluation study on the effectiveness of improving surgical residents' laparoscopic skills using swine models was conducted via a self-report questionnaire administered before and after course completion. The purpose of the training is to give surgical residents opportunities to practice and advance their laparoscopic proficiency.
Methods and procedures: Participating residents at all postgraduate year levels (PGY1 through PGY5, n=17) were provided anesthetized pigs on which to perform a variety of simple to complex laparoscopic cases. Prior to training, residents were given a questionnaire of eleven questions asking them to rate their confidence in performing various laparoscopic procedures on a 1-5 Likert scale. After completion of the course, an identical questionnaire was distributed with two additional questions on the overall impact of the course. All statistical analyses were conducted using R statistical software (version 3). Conclusion: Overall, one-day hands-on training using swine models improved residents' skills, confidence, and understanding of laparoscopic surgery. The information acquired through the questionnaire emphasized the importance of providing a laparoscopic training course as a standard requirement at all medical institutions. Giving surgical residents opportunities to practice their laparoscopic skill set will not only help their individual academic advancement but will also allow them to provide optimal care for their patients.

Background: Learning laparoscopy is difficult, and many educational tools, including simulation training, are required. Feedback plays a crucial role in motor-skill training but requires expert tutors and is time-consuming. E-learning increases knowledge acquisition through a more interactive multimedia experience and reduces the costs of learning. In the last decade, multiple applications (apps) have been developed for mobile medical training. A new iOS app was developed using specially designed educational videos that explain the main technical aspects of advanced laparoscopy through simulation training. The aim of this study is to present the first results of its incorporation into a surgical simulation lab as a complement to effective feedback.
Methods: Twenty-five consecutive residents were trained in our simulation lab in a 15-session validated training program for acquisition of the advanced laparoscopic skills needed to perform a laparoscopic hand-sewn jejuno-jejunostomy. Every session had written instructions and a basic tutorial video. The app consists of two main sections: the first explains the essential techniques needed for intracorporeal suturing, and the second is a complete walkthrough of the validated training program. The trainees were divided into two groups, one trained without the app (NAPP) and one trained with it (YAPP); both groups could ask for feedback at any time. Trainees were assessed before and after the training program using validated rating scales, and the number of tutor-feedback sessions required was registered. Finally, the YAPP group answered a survey on the strengths and weaknesses of the app for learning advanced laparoscopic skills. Results: Twenty-five residents completed the training program, 15 YAPP and 10 NAPP. Both groups finished their training with no statistically significant difference in scores (p=0.32). The number of tutor-feedback sessions needed to complete the training was 4 (3-6) for YAPP vs. 13 (10-14) for NAPP (p<.001). In the questionnaire, all participants considered the app effective for learning advanced laparoscopy, and over 4,000 downloads have been registered since the app was published in the Apple App Store in 2013. We present a novel smartphone app that guides laparoscopic training using simulation-based educational videos, with very good results; app-guided learning reduces the need for expert tutor feedback, lowering the costs of simulated training.

Jemin Choi, Young-Il Choi; Kosin University Gospel Hospital. Purpose: Laparoscopic appendectomy (LA) has been widely performed for acute appendicitis.
In addition, minimally invasive surgery such as LA is a common surgical technique for surgical residents. However, single-incision laparoscopic surgery (SILS) is a challenge for inexperienced surgical residents. We describe our initial experience in teaching the SILS procedure for appendectomy at our medical center. Methods: Twenty-nine cases of single-incision laparoscopic appendectomy (SILA) were performed by a single surgical resident, and 110 cases of LA were performed by 4 surgical residents and 5 board-certified surgeons. The study was reviewed retrospectively.

(4) clinical stressors (i.e., vitals of a coding patient). We developed a stress-simulator testbed by integrating an FLS box trainer with a Linux computer running custom C++ code. The code generated the various stressor conditions while recording sensor data from the trainer and the human operator. We tested 3 groups of participants in an IRB-approved trial: novices (non-medical students), intermediates (medical students), and experts (PGY4 residents and fellows). The study consisted of subjects performing the peg transfer and the pattern cut six times each (baseline, four randomized stressors, post-test). After each task, the NASA-TLX survey was administered to determine the overall workload of that stressor condition, and an analysis of variance was conducted to identify significant trends by stressor type. Results: Compared with baseline NASA-TLX scores, the intermediate group showed greater changes in overall workload than novices and experts (p=0.0005). Additionally, the change between baseline and post-test workload was significantly lower than for the environmental, negative-evaluative, and clinical stressors (p=0.0006).
For pattern cutting, subjects reported a significantly lower perception of failure (p=0.0479) in both the positive-evaluative (mean=8.5556) and post-test conditions (mean=8.222); yet, though not statistically significant (p=0.0564), measured accuracy during the positive-evaluative condition was actually worse (33.3%), second only to the pre-test accuracy (31.1%). The best pattern-cutting accuracy across all expertise levels was 62% in the post-test, followed by 54.4% in the negative-evaluative condition. These results are interesting: despite perceived improvement under positive feedback, performance actually degrades, and is better in the negative-feedback condition, which is perceived as more difficult. These results were not found in the peg transfer, which is arguably an easier task. Conclusion: The evidence gathered in the study shows a clear correlation between distractors and performance. Further analysis is needed to identify the relationship between stressor type and the inherent difficulty of the tasks, in terms of which type of stressor best improves learning and outcomes.

Surg Endosc (2018). Each of the three residents received credentials to perform diagnostic and therapeutic ERCP from their respective hospitals in Nevada, Minnesota, and Idaho. One continues to teach ERCP to general surgery residents, and another taught the skill to fellows in an advanced endoscopy fellowship. All three continue to use ERCP in their practice (2 to 5 times per month), as each specialized in a field that utilizes ERCP routinely. Choledocholithiasis is the most frequent indication, though ERCP is also performed for iatrogenic biliary duct leaks, traumatic biliary or pancreatic duct leaks, chronic pancreatitis, and malignancy.
Conclusions: Training in esophagogastroduodenoscopy and colonoscopy is required for general surgery residents, but adding ERCP to select residents' training enables them to manage their patients' surgical disease completely. Training select general surgery residents in this skill has been successful, as evidenced by the continued use of ERCP in the practices of the three residents who completed this training program at our institution. The decision to train residents in this skill should be left to individual program directors and department chairs. We recommend that residents selected for this additional training plan to practice in specialties where ERCP can be implemented.

Conclusion: Same-day discharge after Nissen fundoplication and hiatal hernia repair is feasible for select patients. One major challenge for same-day discharge is the current insurance provisions required for hospital reimbursement. Within the parameters of this study, BMI and ASA score did not differ between discharged and admitted patients, while older age and longer procedure duration were associated with need for admission.

Premkumar Anandan, MS, FACS; Bangalore Medical College and Research Institute. Introduction: Minimal access surgery is an imperative element of enhanced recovery programs and has significantly improved outcomes. The enhanced recovery program (ERP), synonymous with "fast-track" surgery, was first conceived by Dr. Henrik Kehlet. It has largely been described for colorectal surgery and is reported to be feasible and useful for maintaining physiological function and smoothing the progress of recovery. Most patients who present as surgical emergencies are not adequately prepared, and many are not in a normal physiological state, so the feasibility of enhanced recovery protocols in emergency minimal access surgery remains unclear. This study was designed to validate an enhanced recovery program in patients undergoing emergency minimal access surgery.
Introduction: Pathways for enhanced recovery after surgery (ERAS) have been shown to improve length of stay and postoperative complication rates across various surgical fields; however, there is a relative lack of evidence-based studies in bariatric surgery. The objective of the current study was to determine whether starting a bariatric full-liquid diet on postoperative day (POD) 0 was associated with shorter length of stay (LOS) for patients who underwent laparoscopic sleeve gastrectomy (LSG) or Roux-en-Y gastric bypass (RYGB). Methods: A retrospective review of a prospectively collected dataset was conducted at a single institution before and after implementation of a new diet protocol for LSG and RYGB, in which postoperative diet orders were changed from a full-liquid diet on POD 1 to POD 0. Length of stay and 30-day readmissions were reviewed from June 2016 to August 2017. Independent-samples t-tests were used to compare continuous variables, and chi-squared tests categorical variables, before and after the diet change. Patients were excluded if they were undergoing revision surgery, were discharged directly from the PACU, or had significant intraoperative complications or required reoperation during the same admission.

Introduction: Data suggest value in using TAP (transversus abdominis plane) neural blockade in abdominal surgical procedures. We deploy TAP blockade using liposomal bupivacaine under ultrasound (US) guidance as part of a narcotic-sparing pain-management pathway for patients undergoing abdominal surgery in our rural community setting. Our goal was to evaluate the adequacy of postoperative pain control and the success in avoiding narcotic use. Methods and procedures: Records of patients undergoing abdominal surgical procedures performed by one surgeon over an 18-month period were reviewed under IRB approval.
patients taking narcotics prior to the procedure (except for discomfort due to the condition being surgically treated) were excluded from analysis, as were those admitted to the hospital for postoperative treatment. us-guided lateral tap blocks were performed by the surgeon using 266 mg of liposomal bupivacaine and 50 mg of bupivacaine in the or prior to the incision. a unilateral block was performed for unilateral procedures (e.g. inguinal hernia) and a bilateral block for laparoscopic or midline procedures. incisional sites were treated with a field block of 50 mg of bupivacaine. prescriptions included 1,000 mg of acetaminophen qid and 220 mg of naproxen sodium tid for 7 days. a prescription for tramadol (50 to 100 mg prn up to 4 times daily; 40 tablets with no refill) was given. patients were seen in follow-up two weeks postoperatively. patient-reported-outcome data (following standard scales/metrics), e.g. pain, nausea-vomiting and fatigue, will be analyzed with the above data, and the analysis and conclusions will be presented and discussed.

federico sertic, md, ashwin gojanur, dr, ahmed hammad, md; guy's and st thomas' hospital

introduction: the aim of this project is to assess the quality of post-operative pain relief in colorectal surgery and identify patients in whom pain management has not been effective, in order to improve the quality of post-operative care. effective management of post-operative pain has long been recognised as important in improving the post-operative experience, reducing complications and promoting early discharge from hospital. standards: all patients should be pain free at rest, 100% of elective patients should be told what analgesia they will have post-operatively, 100% of patients should be satisfied with their pain management and 100% of patients should feel staff did everything they could to control their pain. methods and procedures: questionnaires were given to 20 patients on the day prior to discharge. 
13 questions about pre-operative and post-operative pain experience were asked. data regarding post-operative analgesia were collected from medication charts and medical notes. data were collected over a period of two months (august/september 2017). range of procedures: 4 elective laparoscopic abdominoperineal excisions of rectum with igap flaps, 1 elective laparoscopic right hemicolectomy, 7 laparotomy + bowel resection/stoma formation (5 elective, 2 emergency), 1 elective repair of parastomal hernia, 5 appendicectomies (2 laparoscopic elective, 2 laparoscopic emergency, 1 laparotomy emergency) and 2 elective reversals of ileostomy. pain scores (1-10) were recorded for: immediate post-operative pain, day 1 post-operative pain, post-operative pain after day 1, and pain on moving/coughing/straining. results: mean immediate post-operative pain score was 4.0 (10% of patients with score 8+), mean day 1 post-operative pain score was 4.8, mean post-operative pain score after day 1 was 4.25, mean pain score on moving was 6.2 (30% of patients with score 8+), and mean pain score on coughing/straining was 6.8 (30% of patients with score 8+). 90% of patients were satisfied with their post-operative pain management and felt that the staff had done everything they could to manage their pain. 25% of patients were not aware of their post-operative analgesia regimen and 50% did not know how regularly they could request analgesia. conclusions: effective management of post-operative pain is a key part of post-operative care and an important component of enhanced recovery programmes. patient satisfaction with pain management has been found to correlate with received pre-operative information. increasing ward nurses' and acute pain teams' knowledge is important in improving patients' pain experiences. interestingly, those patients who had a background of long-term opioid requirements reported that they were satisfied with their pain management. 
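the summary statistics reported in the audit above (mean pain scores and the proportion of patients scoring 8+) are simple to reproduce from raw questionnaire responses. the scores below are hypothetical, for illustration only; the abstract does not publish the raw data.

```python
from statistics import mean

# hypothetical day-1 pain scores (0-10) for 20 patients -- illustration only;
# the audit's actual raw responses are not given in the abstract.
day1_scores = [3, 5, 4, 6, 2, 7, 5, 4, 8, 4, 5, 3, 6, 5, 4, 6, 5, 4, 5, 5]

mean_score = mean(day1_scores)                                          # mean pain score
pct_severe = 100 * sum(s >= 8 for s in day1_scores) / len(day1_scores)  # % scoring 8+

print(f"mean day-1 pain score: {mean_score:.1f}")  # 4.8 for this sample
print(f"patients scoring 8+: {pct_severe:.0f}%")   # 5% for this sample
```

with only 20 respondents, a single extra patient scoring 8+ shifts the "severe pain" proportion by 5 percentage points, which is worth keeping in mind when reading the 10% and 30% figures above.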
methods and procedures: a patient undergoing a standard ultrasound-guided ql3 block by an anesthesiologist established the baseline anticipated response and procedure time. that procedure, performed under sedation preoperatively, required over 60 minutes. for this study, patients undergoing laparoscopic colorectal surgery were administered a lateral ql block (modified ql1) under ultrasound guidance by the operating surgeon. 40 ml of a mixture (10 ml injectable liposomal bupivacaine suspension, 15 ml 0.25% bupivacaine hydrochloride and 15 ml normal saline) was injected bilaterally after induction, skin preparation and draping, prior to the operation. postoperative narcotic use and pain vas scores were documented. results: six patients were administered a bilateral ql block intraoperatively. procedures were: 3 laparoscopic sigmoid colectomies, one end ileostomy reversal, one laparoscopic completion proctectomy with ileal pouch anal anastomosis, and one laparoscopic descending colectomy. among the narcotic-naïve patients, mean pain vas on post-op days 0, 1 and 2 was 4.5, 3.2 and 2.3 respectively within a multimodality pain management/enhanced recovery program, in which the standing order prompting narcotic administration by nursing staff is a pain vas of 5. all were discharged on pod 2 or 3 without narcotic prescriptions. two of the 6 patients were chronic narcotic users, and they were discharged on their baseline narcotics, i.e. without additional narcotics. all intraoperative blocks were performed in less than 20 minutes. conclusion: a novel, surgeon-administered lateral ql block under ultrasound guidance is feasible and provides post-operative pain control. patients are discharged home on no/baseline narcotics. a randomized controlled trial is being constructed based on these striking findings.

keywords: lc-laparoscopic cholecystectomy, ga-general anaesthesia, sa-spinal anaesthesia. 
nikhil gupta, rachan kathpal, dr, arun k gupta, dr, dipankar naskar, dr, c k durga, dr; pgimer dr rml hospital, delhi

introduction: cholecystectomy has shown some advantages when done under spinal anaesthesia (sa), being associated with less intra-operative and post-operative morbidity and mortality. reports of laparoscopic cholecystectomy (lc) under regional anaesthesia alone have included patients with coexisting pulmonary disease who are deemed high risk for ga. the aim of the present study is to assess the efficacy and safety of laparoscopic cholecystectomy under sa. materials: this prospective, interventional study was conducted on 60 patients with chronic calculous cholecystitis attending the general surgery out-patient department of our institution. results: in our study, the intraoperative complications recorded were hypotension, bradycardia, intra-op shoulder tip pain, bleeding from the liver bed, bile spillage, post-op pain and vomiting. 10% of patients had intraoperative pain, 5% had shoulder tip pain, 3.3% had bradycardia, 3.3% had hypotension, 1.7% had bile spillage and 1.7% had bleeding. laparoscopic cholecystectomy under spinal anaesthesia should be promoted more, even in developing countries, but we need to establish well-evaluated safety guidelines that can be followed faithfully to minimize the risk of complications.

background: the "opioid crisis" has taken over headlines, with increasing public attention brought to the drastically increasing rates of addiction to prescription narcotics. in 2015, the american society of addiction medicine reported 2 million americans with an addiction to prescription pain relievers and a four-fold increase in overdose-related deaths. in a medical setting, increased opiate use is associated with increased rates of delirium, ileus, urinary retention, and respiratory depression. these risks are increased in the obese/bariatric population. 
transversus abdominis plane (tap) block is a safe and effective approach to achieve optimum pain control. it reduces the use of opiates in patients undergoing major abdominal surgery. however, there are currently no data in the literature examining its use in the bariatric population. our study examines the use of liposomal bupivacaine for tap block in patients undergoing laparoscopic sleeve gastrectomy (lsg). methods: sixteen patients undergoing lsg with tap block were compared with a historical cohort of sixteen patients undergoing lsg without tap block (standard group). the primary outcome measured was post-operative in-hospital opiate use (morphine equivalents). statistical analysis was performed using student's t test for continuous variables and fisher's exact test for categorical variables. results: both groups were well matched with regard to bmi, age, and asa class. there was a significant decrease in the post-operative use of opiates with the use of the tap block (11.4 mg in the tap block group vs. 43 mg in the standard group; p = 0.00002). there was no difference in the mean length of stay between the two groups. there was an increase in the mean operative time with use of the tap block (76 minutes in the tap block group vs. 58 minutes in the standard group; p = 0.05). conclusions: the use of liposomal bupivacaine for tap block provides substantial analgesia, allowing for significant reduction in post-operative opiate use in our bariatric patients. this can be an important adjunct in pain control for the bariatric population and aid in post-operative complication risk reduction.

introduction: the objective of this study was to identify variation in weight and demographics in the distribution of pre-operative clinical characteristics between super-obese females and males who were about to undergo bpd/ds surgery. 
as the american obesity epidemic increases, morbidly obese patients have become integral to every surgical practice; they are no longer limited to bariatric surgeons. every clinical insight helps the surgeon to optimize outcomes when operating on and managing these medically fragile individuals. in this context, however, clinically and statistically significant differences in demographics, body mass, and the distribution of weight-related medical problems between super-obese women and men are unknown.

introduction: a transversus abdominis plane (tap) block is an ultrasound-guided injection of local anesthetic in the plane between the internal oblique and transversus abdominis muscles to interrupt innervation to the abdominal skin, muscles, and parietal peritoneum. currently there are inconsistent findings on the benefit of this regional anesthetic to surgical patients, particularly the obese population. we hypothesized that the addition of a tap block to an enhanced recovery pathway (eras) for bariatric patients would decrease opioid use and shorten hospital length of stay. methods: a retrospective review of all patients who underwent bariatric surgery at a single institution from january to december 2016 was performed. patients were identified as: no tap block (no tap), or tap blocks performed after induction either pre-surgery (pre-tap) or post-surgery (post-tap). the primary outcomes were time to first opioid (min) and total morphine equivalents (mg) in the pacu.

objective: prolonged postoperative ileus increases hospital length of stay and therefore impacts healthcare costs. although many surgeons recommend ambulation in the postoperative period to hasten return of bowel function, little evidence exists to support this practice. our hypothesis is that early ambulation reduces the time to return of bowel function after intestinal surgery. 
methods: a subset of 16 patients undergoing intestinal surgery from an ongoing, prospective trial evaluating perioperative physical activity was analyzed. preoperatively, patients wore an activity tracker for a minimum of three days to establish a baseline activity level, measured by daily steps. postoperatively, steps were recorded for 30 days. patients were included in this study if they underwent an operation on the small bowel, colon, or rectum. resolution of postoperative ileus was defined as the postoperative day when patients were noted to meet all of the following criteria on review of nursing documentation: passing flatus, stooling or having ostomy output, and tolerating a regular diet without intravenous fluids. "early" postoperative activity was defined as the average number of daily steps during the first two postoperative days.

discussion: these results suggest that patients who received an intraoperative block laparoscopically were more likely to spend less time in the post-anesthesia care unit and to be discharged home the same day. based on these results, additional process improvement ideas will be implemented in an attempt to improve outcomes.

riley d stewart, md, msc, frcsc, james ellsmere, md, msc, frcsc; dalhousie university division of general surgery

introduction: oropharyngeal and gastrointestinal (gi) perforations from bbq brush bristles are being reported in the literature with increasing frequency. media attention to this problem has increased awareness by the public. most commonly, bbq bristles lodged in the gi tract can be removed endoscopically or pass without complication. rarely, surgical intervention is required for removal of the bristle or drainage of an associated abscess. we report a case of gastric perforation by a bbq bristle leading to a pancreatic abscess. case report: a 41-year-old male presented to a regional center with epigastric pain and malaise. 
his medical history included hypertension, dyslipidemia, gerd, and smoking. his surgical history included a tonsillectomy, excision of a branchial cleft cyst, and an umbilical hernia repair. on presentation, his laboratory investigations were unremarkable aside from an elevated white blood cell count. investigations including abdominal x-rays and an abdominal ultrasound were unremarkable. he was initially treated with a proton pump inhibitor for presumed peptic ulcer disease. he returned to the local emergency room, no better than before. a ct scan was arranged, which demonstrated a foreign body at the pylorus consistent with a bbq bristle and a peripancreatic fluid collection (figs. 1 & 2). a gastroscopy failed to identify the bristle. he was admitted, placed on iv antibiotics and referred to our center. despite several days of antibiotics prior to arrival, the collection size on repeat ct scan had increased and the patient had ongoing pain. we repeated the endoscopy with a side-viewing endoscope. the perforation was identified posteriorly at the pylorus. the bristle had migrated into the peripancreatic space. the perforation was cannulated with a jagtome, and fluoroscopy was used to confirm the position of a wire in the fluid collection (figs. 3 & 4). pus was drained from the collection into the stomach by placement of a 5 french pigtail catheter (fig. 5). the patient was discharged pain free the following day and was asymptomatic at 6 weeks' follow-up. a repeat ct scan showed resolution of the abscess and safe migration of the bristle and stent out of the gi tract (fig. 6). conclusion: to our knowledge, this is the first reported transgastric endoscopic drainage of a peripancreatic abscess caused by a bbq bristle gastric perforation. this case is a demonstration of the ever-expanding role of therapeutic endoscopy in a surgical practice. 
andrew w white, md, carl westcott, md; wake forest baptist medical center

introduction: endoscopic balloon dilation of the gastroesophageal junction (gej) is generally limited to 20 mm in diameter. in many stenotic or spastic disorders of the gej, 20 mm is just not big enough. larger balloon sizes are available (30 and 40 mm), although these are deployed under fluoroscopy without endoscopy. thus, these larger dilations are often not feasible at the time of the diagnostic endoscopy because different facilities and/or equipment are needed. also, fluoroscopic 30 mm balloon dilations are associated with a 5 percent perforation rate. to address these shortcomings, we present an experience with a retroflexed "against the scope" balloon dilation of the gej. in detail, the gej is visualized while retroflexed, a balloon is placed through the scope, the gej is cannulated with the balloon alongside the scope, and the balloon is deployed. please see the attached image for an example. methods and procedures: a retrospective chart review was performed for a single surgeon during the past five years. we identified those who had retrograde dilations and evaluated the indications, repeat dilations, complications and symptomatic response. results: a total of 24 retrograde dilations were performed on 15 patients with gej-related dysphagia. the average age was 54.2 years. 17 of 24 dilations were with a 20 mm balloon, while the others used balloons as small as 14 mm. 19 dilations were performed for persistent dysphagia after cardiomyotomy, between 57 and 5971 days after surgery. other indications for dilation were dysphagia after fundoplication (3/24), dysphagia after paraesophageal hernia repair (1/24) and achalasia during pregnancy (1/24). 5 patients required a total of 9 repeat retrograde dilations at an average of 488 days after the previous dilation. there were 2 instances in which the dilation did not improve symptoms. 
mucosal breakdown was noted in 7 instances, although there were no perforations. bleeding was noted in 5 instances, although this was always minimal and self-resolving. conclusions: retrograde endoscopic dilation is safe and effective in this small series. the 20 mm balloon against a 10 mm scope gives a 30 mm diameter, but a different shape and a decreased total circumference. there is a possible added safety advantage given that the balloon is inflated under visualization: it can be inflated in steps or stopped if it appears too aggressive. in addition, these larger dilations were provided at the time of the initial diagnostic egd without extra equipment. more studies are needed to compare retrograde endoscopic dilation to other methods of management of gej stenosis.

introduction: robot-assisted surgery allows surgeons to perform many types of complex laparoscopic surgical procedures, and more and more patients are treated with this sophisticated system. however, all the instruments used in the currently available surgical robot system are rigid, which limits reach into deeper surgical fields. in order to overcome this difficulty, we are developing a novel flexible endoscopic surgery system (fess), which has a flexible single-port platform 3 cm in diameter, an independently controlled endoscope and instruments, an open architecture that is compatible with existing flexible devices, and a magnified 3d hd camera with both rgb and infrared sensors. furthermore, the system is smaller and would be more cost-effective than existing robotic surgical systems. a preliminary experiment was performed in surgical procedures using a porcine model to evaluate the effectiveness and feasibility of fess. methods and procedures: experimental protocols were approved by the animal research committees of our institution. we used a female swine of 25 kg. 
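the circumference point in the retrograde-dilation conclusion above can be made concrete with a little geometry: modeling the inflated balloon and the scope as two externally tangent circles of 20 mm and 10 mm diameter, the tissue stretched around the pair follows roughly the convex hull ("belt") of the two circles, whose perimeter falls well short of the ~94 mm circumference of a true 30 mm circle. the sketch below assumes idealized circular cross-sections, which is our simplification, not a claim from the abstract.

```python
import math

def belt_perimeter(d1_mm: float, d2_mm: float) -> float:
    """Perimeter of the convex hull ("belt") around two externally
    tangent circles with diameters d1_mm and d2_mm."""
    R, r = d1_mm / 2, d2_mm / 2
    d = R + r                                # center-to-center distance for tangent circles
    theta = math.asin((R - r) / d)           # tilt of each external tangent line
    straight = math.sqrt(d**2 - (R - r)**2)  # length of each straight tangent segment
    # the large circle is wrapped over pi + 2*theta radians,
    # the small circle over pi - 2*theta radians.
    return R * (math.pi + 2 * theta) + r * (math.pi - 2 * theta) + 2 * straight

balloon_plus_scope = belt_perimeter(20, 10)  # ~78.8 mm
single_30mm_circle = math.pi * 30            # ~94.2 mm
print(f"{balloon_plus_scope:.1f} mm vs {single_30mm_circle:.1f} mm")
```

so the balloon-plus-scope configuration reaches the same 30 mm effective diameter while stretching the mucosa over roughly 16% less circumference than a single 30 mm balloon, which is consistent with the safety argument made in the conclusion.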
an assistant forceps lifted the fundus of the gallbladder to create good visualization of the surgical field. the cystic duct was ligated with a laparoscopic clip device from the assistant port. blunt dissection was performed by pushing the forceps, and sharp dissection by monopolar electrocoagulation. results: the fess accomplished the dissection of the gallbladder from the liver bed successfully. the two 5 mm forceps had enough grasping and dissecting force and dexterity. the gallbladder was removed from the single port site easily. conclusions: this experiment showed that it is feasible to intuitively operate single-site cholecystectomy with fess. in order to realize a pure fess procedure, an additional novel device to create good visualization of the surgical field is necessary for the fess platform. a prototype has already been developed for evaluation in securing the surgical field. the optimal working range, or "sweet spot", of fess is relatively small. to address this issue, the easy-setup feature is being improved to enable more efficient positioning and shifting of the sweet spot over the surgical field. this mechanism could enhance the expansion of procedures suitable for fess. the target procedures of fess are those specifically suitable for single-port surgery, such as transanal surgeries and transcervical mediastinoscopic surgeries. intraluminal procedures and natural orifice translumenal endoscopic surgery (notes), which are not considered suitable for rigid surgical robots, are also good applications of fess.

regression of anal and scrotal squamous cell carcinoma (hpv related) with imiquimod

the index patient is a 48-year-old hiv-positive homosexual man with anal-scrotal condylomas (ain) initially resected in 2012, then treated with radiation in 2014 for recurrence. the lesions recurred in 2016 with changes severe enough to "…consider diagnosis of invasive squamous cell carcinoma…". 
the patient elected a trial of imiquimod 5% cream three times per week to defer the recommendation of abdominoperineal resection. imiquimod has no antiviral effect but stimulates interferon and cytokines to suppress hpv subtypes 6 and 11, among other immune effects. no data exist as to the systemic effects of imiquimod. after three months of therapy, the lesions had largely regressed, with only one specimen showing "…concern for squamous cell carcinoma in situ…". the patient has elected to continue treatment pending further biopsy. this report is in keeping with a number of other reports of small numbers of cases of neoplasia regression with imiquimod 5% cream, including melanoma-in-situ, basal cell cancer of the skin and other cutaneous malignancies, as well as vin. a second patient, a 38-year-old hiv-positive female with hpv lesions (ain3) including urethral lesions, is being treated with vulvar application of imiquimod to determine if the urethral lesions will regress. there is no fda-approved indication for mucosal application of imiquimod. biopsies are pending at completion of a six-month trial of imiquimod.

surg endosc (2018)

introduction: training in flexible endoscopy remains a critical skill for surgeons, as therapeutic endoscopy procedures continue to evolve and to supplant standard surgical operations. the role of endoscopy across surgical subspecialties is shifting, as endolumenal procedures (like per-oral endoscopic myotomy and endolumenal bariatric interventions) have become commonplace. while surgical residency minimum case volumes are mandated, little is known about the volume of endoscopic procedures surgical fellows participate in. we aimed to characterize the volume of flexible endoscopy cases logged by surgical subspecialty fellows as a measure of endoscopic platform use by surgeons. 
methods: operative case logs for fellows enrolled in post-graduate training programs participating in the fellowship council were de-identified (no patient- or program-specific information) and provided for analysis. the case log is an online, mandatory, self-reported collection of all surgeries, procedures and endoscopies performed during the fellowship year. all cases listed within the category of "gi endoscopy" in which the fellow designated their role as "primary" surgeon for the procedure were further sorted by subcategory and linked to the year of fellowship graduation. rigid endoscopy, trans-anal endoscopic procedures, and those in which the fellow's role was "first assistant" were excluded.

introduction: complex pancreatic and duodenal injuries due to trauma continue to present a formidable challenge to the trauma surgeon, with a described mortality of 5-30% and morbidity of 22-27%. duodenal fistula formation subsequent to failure of attempted primary repair is associated with significant morbidity and mortality. we present the first reported series of four patients with complex trauma-related duodenal injuries and failure of primary repair who were managed with duodenal stenting, and compare outcomes to a matched case-control cohort of patients with trauma-related duodenal injuries. the aim of this study is to document our experience with enteral stents in patients with complex duodenopancreatic traumatic injuries. methods: a retrospective review at a level i trauma center identified 4 patients who underwent endoscopic placement of indwelling covered metal stents after failure of primary duodenal repair manifesting as high-output duodenal fistulas. a matched case-control cohort was identified, including 6 patients with duodenal fistulas who were not treated with stents. drainage volumes were collected and classified according to source and phase of intervention (i.e. admission to fistula diagnosis, diagnosis to stent insertion, after removal, and until discharge). 
results: there was a decrease in the mean combined drain output of 497 ml/day (p=0.16) after stent placement. when comparing the sum of all output sources, there was a statistically significant difference across phases (p=0.03), and the "after removal" phase was significantly lower when compared to the reference phase (p=0.05). there was also a change in the direction of the slope of the sum of all drain outputs, from an increase of 13 ml/day² prior to stent placement to a decrease of 13 ml/day² (p=0.26) after stent placement. the stenting group demonstrated a decrease in mean drain output (1063 ml/day vs 1446 ml/day, p=0.24) and an increase in distal gastrointestinal output (700 ml/day vs 223 ml/day, p=0.16). one patient in the stent group required later operative repair. all other patients in the stenting and control groups had resolution of their fistulas over time. there were 2 late mortalities in the control group. the stent-treated patients demonstrated diversion of approximately 500 ml/day of enteral contents distally. while all patients eventually healed their fistulas, the stent-treated patients demonstrated an accelerated abatement of drain outputs when compared to the control cohort, though this did not reach statistical significance. indwelling covered enteral stents appear to be an effective rescue method for an otherwise inaccessible duodenal fistula after failure of primary repair.

kevin l chow, md, hassan mashbari, md, mohannad hemdi, md, eduardo smith-singares, md; university of illinois at chicago

introduction: esophageal trauma represents an uncommon but potentially catastrophic injury, with a reported overall mortality of up to 20%. the management of iatrogenic and spontaneous perforations has been previously described, with well-established guidelines which have been mirrored in the trauma setting. esophageal leaks are the most feared complication after primary surgical management and present a challenge to salvage. 
there have been increasing reports in the literature supporting the use of removable covered metal stents to treat esophageal perforations and leaks in the non-trauma setting. we present the first reported case series of four patients presenting with esophageal injuries from external penetrating trauma, complicated by failure of initial primary surgical repair and leak development, successfully managed with the use of esophageal stents. materials and methods: a retrospective review was performed at a level i trauma center, identifying four patients who underwent endoscopic placement of removable covered metal stents, either by a surgical endoscopist or an interventional gastroenterologist, after failure of primary surgical repair of traumatic esophageal injuries. demographic information, hospital stay, additional interventions, complications, imaging studies, iss scores, and outcomes were collected. results: our cohort consisted of 4 patients with penetrating injuries to the chest and neck (3 thoracic and 1 cervical esophageal injury) managed with esophageal stenting after leaks were diagnosed following primary surgical repair. the initial esophageal injuries included grades 1, 3 and 5. leaks were diagnosed on average on post-operative day 9. two patients underwent an additional attempted surgical repair with subsequent leak development. esophageal stents were placed under endoscopic and fluoroscopic guidance within 3 days of leak diagnosis. there was resolution of the esophageal fistulas, with all patients resuming oral intake (on average 72 days after stent placement). three patients (75%) required further endoscopic interventions to adjust the stent due to migration or for dilations due to strictures. mortality was 0%; all patients survived to be discharged from the hospital, with an average icu length of stay of 30 days. 
conclusion: the use of esophageal stenting has progressed over the last few years, with successful management of both post-operative upper gastrointestinal leaks and benign, spontaneous, or iatrogenic esophageal perforations. while the mainstay of management of external penetrating traumatic esophageal injuries remains surgical exploration, debridement, and repair with perivisceral drainage, our case series illustrates that the use of esophageal stents is an attractive adjunct that can be effective in the management of post-operative leaks in the trauma patient.

results of the ovesco-over-overstitch technique for managing bariatric surgical complications

introduction: since 1980, the preferred method of enteral access has been the percutaneous endoscopic gastrostomy tube (peg). accidental removal is a common complication associated with excessive cost and possible significant morbidity. removal prior to 14 days is considered "early removal." early removal carries more significant risk and can necessitate emergent operation to prevent peritonitis and sepsis. some patients who do not exhibit signs of peritonitis may simply be observed. for these patients, peg replacement would typically be delayed 5-10 days to ensure closure. this delay results in prolonged npo status and worsened nutritional status. presented below is a case of early accidental removal followed by endoscopic clip closure and immediate peg replacement. case report: a 43-year-old male presented after a large left middle cerebral artery infarct. peg placement was completed without complication. eleven hours after the procedure, the patient pulled the peg tube out of the abdominal wall. at this time, the patient appeared to have no abdominal pain and no signs of peritonitis. twelve hours following the accidental removal of his peg tube, the patient was taken back to the endoscopy suite and an egd was performed. the previous peg site was identified and appeared closed and ulcerated. 
the mucosal defect was closed with two endoscopic metallic clips, and a peg tube was then placed at an adjacent site. the following day, the patient was restarted on trickle feeds and advanced to regular tube feeding over a period of 24 hours. since that time, his peg has been functioning well. discussion: we propose that in the case of early accidental peg removal, the patient should be examined first for evidence of peritonitis. if the initial physical exam and radiographic investigation do not reveal peritonitis or significant pneumoperitoneum, the patient should undergo urgent repeat endoscopy. at this time, the gastrotomy can be closed endoscopically via metallic clips and the peg can be replaced immediately. tube feeds can be initiated after a 12-24 hour period of dependent drainage with serial abdominal exams.

introduction: since its inception in 2008, poem has become a viable procedure for the treatment of achalasia and esophageal dysmotility disorders; however, many institutions are in the beginning stages of implementing the procedure into their programs. with a view to training, we report the successful dissection and identification of common landmarks during poem procedures performed by trainees under supervision in a high-volume poem center. methods: 23 posterior poem procedures performed by trainees under experienced proctor guidance between february and july 2017 were evaluated for the frequency of identification of the 2 perforating vessels and of the sling fibers, and for myotomy position on the lesser curvature of the stomach as assessed by the double-scope method during creation of the tunnel and myotomy. results: all 23 poem procedures were successfully completed by trainees (gi and surgery fellows). the average length of procedure was 79 minutes. indications included 13 patients with type 1 achalasia (56%), 9 with type 2 achalasia (40%) and 1 with des (4%). average length of myotomy for all procedures was 10.4 cm. 
during these procedures 1 or 2 perforator vessels were identified in 11 (48%) of patients, and sling muscle was identified in 10 (43%) of patients. myotomy extended to the anterior lesser curvature of the stomach on double-scope exam in 100% of patients. no patient had a serious complication requiring intervention. conclusion: trainees performing a posterior poem procedure were able to correctly dissect and identify the sling muscle and/or perforating vessels in approximately 43% and 48% of procedures, respectively. however, the myotomy was correctly positioned in all procedures. this indicates that while ideally the sling fibers and perforating vessels should be identified, a correctly positioned myotomy can still be successfully performed by trainees without identification of these landmarks. introduction: gastroparesis is a rapidly increasing problem with sometimes devastating patient consequences. surgical treatments, particularly laparoscopic pyloroplasty, have recently gained popularity but require general anesthesia and advanced skills and create a risk of leaks. peroral pyloromyotomy (pop) is a less invasive alternative but is technically demanding and not widely available. we propose a hybrid laparo-endoscopic collaborative approach using a novel gastric access device to allow an endoluminal stapled pyloroplasty as an alternative treatment option for functional gastric outlet obstruction. methods and procedures: under general anesthesia six female pigs (mean weight 33 kg) had endoscopic placement of 2 or 3 5-mm intragastric ports (taggs, kansas, usa) using a technique similar to percutaneous endoscopic gastrostomy. a 5 mm laparoscope was used for visualization. endoflip (crospon, inc., galway, ireland) was used to measure cross-sectional area (csa) and compliance of the pylorus before intervention, immediately after, and at 1-week survival. pyloroplasty was performed using a 5 mm articulating laparoscopic stapler (dextera microcutter).
after removing the taggs ports, the gastrotomies were closed by either endoscopic clip, endoscopic suture or suture under laparoscopic vision. the animals were survived for 1 week. after 6-8 days, a second laparo-endoscopic procedure was performed to verify healing of the pyloroplasty as well as intraluminal dimensions. at the end of the protocol, animals were euthanized. results: six endoluminal linear stapled pyloroplasties were performed. the mean operative time was 112 min. in all cases, this technique was effective in achieving optimal pyloric dilatation. median pyloric diameter (d) and median cross-sectional area (csa) pre-pyloroplasty were 8 mm (4.9-11.6 mm) and 58.6 mm² (19-107 mm²). after the procedure, these values were increased to 13.41 mm (9.8-17.6 mm) and 147.7 mm² (76-244 mm²) respectively (p=0.0152). the quality of endoscopic examination depends on the quality of endoscopic equipment, experience of the endoscopist and preparation of the patient. contemporary electronic endoscopes make it feasible to transfer the image directly to an external device that is linked to a computer network, from which it can be transferred further. a dynamic image viewed in real time is more accurately interpreted by a physician than a static one. the possibility of simultaneous voice contact makes teleconsultation complete. the aim of this study was to present our own experience regarding endoscopic teleconsultations. materials and methods: the analysis enrolled examinations performed in endoscopic centers located in the lesser poland district and in denmark. consultations took place in real time; consulting physicians had more than 10 years of experience in endoscopic procedures, with over 10,000 colonoscopies and therapeutic procedures performed. there were 84 teleconsultations via a standard 10 mb/s internet connection. endoscopic centers were equipped with olympus 180 and 190 series endoscopes linked to a video card.
each card had its own ip address, and the image was accessible through internet login from anywhere. consulting physicians used computers connected to the internet for tracing the image synchronously and giving advice. results: teleconsultations were undertaken in 0.67% of all endoscopic procedures. teleconsultations concerned difficulties in endoscopic image interpretation in 17 cases and decisions regarding further treatment in 67 cases. the consulting physician solved all problems concerning proper endoscopic image interpretation. in 57 cases the elective procedure was rejected. the elective treatment was continued in the remaining cases. 3 patients had a complication of polypectomy that was endoscopically treated. conclusions: the opinion of an independent consulting physician in difficult clinical cases regarding endoscopic procedures helps in understanding the endoscopic image in real time and may contribute to a decrease in complications after endoscopic procedures. michelle ganyo, md, robert lawson, md; naval medical center san diego introduction: a presacral phlegmon is a contained collection of infected fluid and inflammation within the bony pelvis, posterior to the rectum and anterior to the sacrum, that usually arises as a complication of surgery, malignancy, inflammatory bowel disease, ischemic colitis or perforated viscus. symptoms include low-back pain, pelvic pain and fevers. antibiotics and supportive therapy are the mainstay of treatment. however, if abscess develops, drainage is required, usually by trans-gluteal percutaneous and/or surgical methods, both of which are associated with significant morbidity and mortality. endoscopic ultrasound (eus)-guided drainage of perirectal and presacral abscesses is a well-described minimally invasive approach that permits clear definition of anatomy, real-time access to the abscess and creation of an internalized fistula through placement of one or more transluminal stents.
however, to date there is no published report describing endoscopic treatment of the more complicated, clinically challenging presacral phlegmon. here we present a case of a symptomatic presacral phlegmon recalcitrant to medical management that was successfully treated with an endoscopically placed retrievable, transmural, lumen-apposing metal stent. case report: this is a case report of a 21-year-old post-partum female who presented with fevers and recurrent lower back pain radiating to her rectum and vagina. her spontaneous vaginal delivery was notable for a second-degree laceration that was primarily repaired at the time of delivery 3 months prior to presentation. her past medical history was otherwise unremarkable. radiographic imaging revealed several perirectal and presacral abscesses that were considered too small for percutaneous drainage. iv antibiotics were started and the largest abscess was targeted for eus-guided aspiration. unfortunately, her pain became constant and progressed in severity. a follow-up mri a week later revealed a 7-cm presacral phlegmon. results: colonoscopy revealed a luminal bulge in the rectum but was otherwise normal. to permit drainage and multiple sessions of endoscopic necrosectomy, a 15 mm lumen-apposing metal stent (lams) was placed transrectally under eus-guidance into the presacral phlegmon. endoscopic debridement with forceps and copious irrigation was performed. over the following 2 weeks the patient reported purulent rectal drainage and resolution of her fevers and pain. repeat endoscopy revealed a normal rectum and no sign of the stent. a follow-up mri showed a 3-cm area of heterogeneous tissue in the presacral area. conclusions: although not previously described for management of a presacral phlegmon, lams appears to be a safe, effective and minimally invasive treatment option.
introduction: flexible endoscopy has evolved to include multiple endoluminal procedures such as anti-reflux procedures, pyloromyotomy, and mucosal and submucosal tumor resections. however, these remain technically demanding procedures, as they are hindered by the state of flexible technology, which offers difficult imaging, limited energy devices, no staplers, and cumbersome suturing abilities. an alternative approach is transgastric laparoscopy, which for almost 2 decades has been shown to be a good procedure for pancreatic pseudocyst drainage and full-thickness and mucosal resection of various lesions. we propose to expand the indications of transgastric laparoscopy by using novel endoscopically placed transgastric laparoscopy ports (taggs, kansas, usa) to replicate endoscopic procedures such as endoluminal antireflux surgery. methods and procedures: under general anesthesia 5 female pigs (mean weight 27.6 kg) had endoscopic placement of 3 5-mm intragastric ports (taggs, kansas, usa) using a technique similar to percutaneous endoscopic gastrostomy. a 5 mm laparoscope was used for visualization. endoflip (crospon, inc., galway, ireland) was used to measure cross-sectional area (csa) and compliance of the gastroesophageal junction (gej) before and after intervention. laparoendoscopic-assisted suture plication of the gej was performed using 3-0 sutures (polysorb®). once the taggs ports were removed, the gastrotomies were closed using endoscopic clips. at the end of the protocol, animals were euthanized. results: five laparoendoscopic-assisted suture plications were performed. the mean operative time was 65.6 min (endoscopic evaluation: 3.2 min, taggs insertion: 11 min, endoflip evaluation + gej plication: 43.25 min, gastric wall closure: 15 min). in all cases, this technique was effective in achieving adequate gej plication. median gej diameter (d) and median cross-sectional area (csa) pre-plication were 11.42 mm (8.6-13.6 mm) and 104.8 mm² (58-146 mm²).
after the procedure, these values were decreased to 6.14 mm (5.7-6.6 mm) and 29.8 mm² (25-34 mm²) respectively (p=0.0079). median distensibility (d) and median compliance (c) pre-plication were 7.87 mm²/mmhg (2.4-22.69 mm²/mmhg) and 190.56 mm³/mmhg (70.9-502.8 mm³/mmhg). after the procedure, these values were decreased to 1.5 mm²/mmhg (0.7-2.2 mm²/mmhg) and 52.17 mm³/mmhg (21.9-98.7 mm³/mmhg) respectively (p=0.0317). no intraoperative events were observed. conclusion: a hybrid laparoendoscopic approach is a feasible alternative for performing intragastric procedures with the assistance of conventional laparoscopic instruments, especially in cases where the location of the intervention limits the access of standard endoscopy or where endoscopic technology is inadequate. further evaluation is planned in survival models and clinical trials. introduction: due to previous manipulation or submucosal invasion, colonic lesions referred for endoscopic mucosal resection (emr) frequently have flat areas of visible tissue that cannot be snared. current methods for treating residual tissue may lead to incomplete eradication or not allow complete tissue sampling for histologic evaluation. our aim is to describe dissection-enabled scaffold-assisted resection (descar): a new technique combining circumferential esd with emr for removal of superficial non-lifting or residual "islands" with suspected submucosal involvement/fibrosis. methods: from 2015 to 2017, lesions referred for emr were retrospectively reviewed. cases were identified where lifting and/or snaring of the lesion was incomplete and the descar technique was undertaken. cases were reviewed for location, prior manipulation, rates of successful hybrid resection and adverse events. results: 29 lesions underwent descar due to non-lifting or residual "islands" of tissue. patients were 52% male and 48% female, with an average age of 66 (sd ± 9.9 years).
lesions were located in the cecum (n=10), right colon (n=12), left colon (n=4) and rectum (n=3). average size was 31 mm (sd ± 20.6 mm). previous manipulation occurred in 28/29 cases (83% biopsy, 34% resection attempt, 52% tattoo). the technical success rate for resection of non-lifting lesions was 100%. there was one delayed bleeding episode but no other adverse events. approximately 22% of patients have been followed up endoscopically to date with no evidence of residual adenoma. conclusions: descar is a feasible and safe alternative to argon plasma coagulation and avulsion for the endoscopic management of non-lifting or residual colonic lesions, providing en-bloc resection of tissue for histologic review. further studies are needed to demonstrate long-term eradication and for comparison with other methods. results: 15 patients underwent 21 fully covered stent placement procedures. indications for stent placement were leak in 8 patients (1 sleeve; 7 bypass) and stricture in 7 patients (4 bypass, 3 sleeve). five patients had stent migration: three required surgical removal, one patient underwent endoscopic repositioning and one passed the stent per rectum. all eight patients with enteric leak successfully underwent stent placement in conjunction with diagnostic laparoscopy and drainage. all but one of these patients developed the enteric leak in the perioperative period of the index procedure. the average duration of stent treatment in these patients was 21 days (14-47 days). of the 7 patients treated for a stricture, 3 patients (2 sleeve, 1 bypass) failed treatment and required subsequent definitive operative revision. the average length of stent treatment in these patients was 3 days (range, 1-14 days), and five had severe intolerance. conclusions: endoscopic stent placement for leak may require multiple procedures and carries the risk of migration; however, this therapy seems to be an effective treatment. failure rates are higher with strictures, which are also less well tolerated by patients.
background: colonoscopy is the most commonly performed endoscopic examination worldwide and is considered the gold standard for colorectal cancer screening. the quality of examination and endoscopic treatment is affected by a number of factors that are verified by recognized parameters such as cecal intubation rate and time (cir, cit), withdrawal time, adenoma detection rate (adr) and polyp detection rate (pdr). advanced endoscopic imaging improves accurate recognition of the nature and variety of pathologic lesions, while endoscope tips, the third eye retroscope and wide-angle endoscopy allow detection of lesions located on the proximal side of the intestinal folds. the aim of the study was to assess the suitability of wide-angle colonoscopy for the detection of colorectal lesions and to analyze the functionality of a special endoscope series regarding cir, cit and withdrawal time. introduction: leak is an uncommon but serious complication of gastrointestinal surgery. when identified post-operatively, percutaneous drains are used to manage abscesses and prevent further peritoneal contamination. if drain position is suboptimal, however, the consequences of persistent leak may necessitate a formal surgical intervention in a hostile abdomen. in select situations, we have utilized natural orifice transluminal endoscopic surgery (notes) methods to enter the abdominal cavity and place/reposition drains under direct endoscopic visualization as part of our comprehensive endoscopic management algorithm for leaks. methods and procedures: a prospectively collected database was queried for patients who had undergone transluminal endoscopic drain repositioning (tedr) as part of multimodal endolumenal therapy for leak (including interventions like defect closure, enteral feeding access, or endolumenal stent placement). inadequate drainage was identified pre-procedurally by undrained fluid collections in conjunction with clinical signs of sepsis.
translumenal access was obtained via the leak site and carbon dioxide insufflation was used in all cases. the peritoneal cavity was surveilled and cleared of gross debris by irrigation and suction. intraabdominal drains were located endoscopically and fluoroscopically, grasped with an endoscopic snare or grasper and repositioned adjacent to the leak site to ensure better drainage. results: four patients (3 female), average age 50 (range 52-60), average body mass index 34 (range 29-39) were managed with tedr as a component of endoscopic treatment of full-thickness gastrointestinal leak. two patients developed leak following revisional bariatric surgery. one patient had an acutely dislodged gastrostomy tube with intraperitoneal leak after multiple laparotomies recently closed with a granulating vicryl mesh. one patient developed a leak at an esophagojejunostomy following total gastrectomy. three patients had adequate drainage after the initial tedr, while one patient required tedr on two occasions. all patients had improved drainage demonstrated by resolution of clinical signs of sepsis and resolution of fluid collections. drains were removed as clinically indicated. conclusion: intraabdominal drains are an essential element in the management of full-thickness gastrointestinal leaks, but are not always able to be adequately positioned percutaneously. transluminal endoscopic drain repositioning via a gastrointestinal defect is a viable option to avoid surgical intervention in an otherwise hostile field and is a novel practical notes application. background: epiphrenic diverticula (ed) arise from increased intraluminal pressures, often secondary to achalasia or another underlying esophageal motility disorder which causes "pulsion" physiology. ed are traditionally thought to contribute to patients' symptoms of regurgitation and dysphagia, and are frequently resected at time of heller myotomy and fundoplication done for treatment of the primary motility disorder. 
ed excision carries significant risks (staple line leak, pulmonary complications, mortality), and little is known regarding patients with ed and esophageal motility disorder who undergo surgical myotomy without ed resection. the goal of this study was to compare outcomes of patients with ed and esophageal motility disorder who did and did not undergo diverticulectomy at time of myotomy and fundoplication. methods: retrospective analysis of a prospectively collected database from 2004 to 2017 was performed. patients with a diagnosis of ed undergoing surgical treatment of symptomatic esophageal motility disorder were included. all patients underwent laparoscopic heller myotomy with toupet fundoplication by a single surgeon at a tertiary referral hospital. patients were stratified according to whether ed was excised or not excised at time of primary surgery. patient-reported symptoms were obtained from pre/post-operative clinic evaluations and mailed surveys during the follow-up period. independent samples t-test and fisher's exact test were used to compare continuous and categorical variables respectively. results: ed was identified in 15 patients prior to surgery. primary diagnoses included achalasia (n=11), nutcracker esophagus (n=3), and diffuse esophageal spasm (n=1). ed was excised in five patients (33.3%) and not excised in ten patients (66.7%), with no significant difference in frequency of preoperative dysphagia (80% vs. 90%, p=1.00) or regurgitation (40% vs. 60%, p=0.61) between groups respectively. reasons for non-resection included ed being too proximal (n=7), patient/surgeon preference (n=2), and small ed size (n=1). the resection group did not experience any leaks and there were no mortalities in either cohort during the follow-up period. at mean clinic follow-up of 198 days, there was no difference in frequency of residual dysphagia in patients who did or did not undergo ed resection (20% vs.
20%, p=1.00) and neither cohort reported residual regurgitation symptoms. conclusions: this study suggests that leaving ed in place during surgical treatment of an esophageal motility disorder may achieve similar rates of postoperative symptom control. while ed excision in this study did not cause significant excess morbidity, ed resection introduces risk of leak and requires more extensive surgery that may not provide significant benefit to patients. introduction: median arcuate ligament syndrome (mals) has been described in the literature as presenting with a constellation of symptoms including nausea, vomiting, weight loss, and post-prandial epigastric pain. while many of these symptoms are consistent with foregut pathology, a cohort of patients with mals presenting with delayed gastric emptying has not been described in the literature. in this study we report on the possible association of mals with delayed gastric emptying. methods: cases of mal release were collected between 2013 and 2017. eight patients were identified who presented with mals and underwent subsequent mal release. all 8 patients underwent laparoscopic or robotic surgery. patients were compiled into a retrospective database and their demographic, symptomatic, imaging, and outcomes data were analyzed. background: laparoscopic fundoplication (lf) is often performed to treat paraesophageal hernia and/or gerd. care is taken to select the right patients for the operation. some patients may not improve, and others experience dysphagia or bloating after surgery. factors associated with patient satisfaction after fundoplication would be helpful during the patient selection process. methods: a retrospective review of a prospectively collected database was performed. queried patients underwent lf from 2009 to 2015. non-elective operations and fundoplications after heller myotomy were excluded. 
of this cohort, patients were included only if they responded to a two-year postoperative quality of life survey. surveys were distributed preoperatively, at three weeks, at one year, and at two years. the surveys include the reflux severity index, gerd-hrql, and dysphagia score. the gerd-hrql asks about patient satisfaction with their current state (1 = dissatisfied, 2 = somewhat satisfied, 3 = very satisfied). the cohort was divided according to their answer to this question at two years. demographics and preoperative factors were compared between the groups with kruskal-wallis and fisher's exact tests. univariable and multivariable ordinal logistic regression was performed to identify preoperative symptoms associated with satisfaction at two years. scores on the surveys over time were also analyzed. results: a total of 94 patients were included in the analysis (dissatisfied = 26, somewhat satisfied = 17, very satisfied = 51). the only significant demographic or preoperative difference was a higher number of paraesophageal hernias in the 'very satisfied' cohort (p = 0.017). on univariable regression, younger age and paraesophageal hernia predicted satisfaction. several variables negatively predicted satisfaction with an or <1. multivariable regression, controlled for age and hernia type, identified throat clearing, post-nasal drip, and globus sensation as preoperative symptoms less likely to result in patient satisfaction (p = 0.001, 0.001, and 0.02, respectively). subgroup analysis of patients with paraesophageal hernias revealed that patients with bloating preoperatively are less likely to be satisfied at two years. survey scores over time showed all groups improving over three weeks, but while satisfied patients continued to improve, dissatisfied patients symptomatically worsened over time. conclusion: this study confirms previous reports stating that atypical symptoms of gerd are less likely to improve after lf.
it also shows individuals with paraesophageal hernia tend to do quite well, unless they report bloating preoperatively. patient-centered analysis such as this can be useful when discussing postoperative expectations with patients, and may reveal opportunities to individualize the operative approach. objective: the study was performed to assess whether sutured crural closure or mesh reinforcement for hiatal closure yields better results with regard to symptom resolution and recurrence post-operatively. material and methods: a prospective randomized controlled trial was carried out at grant medical college and sir j. j. group of hospitals, mumbai, india. patients were randomized to receive either sutured repair or mesh reinforcement of hiatal closure. outcomes of interest were symptom resolution, quality of life scores and recurrence in the postoperative period. results: 160 patients were recruited for the trial (80 sutured repair, 80 mesh reinforcement). the two groups were comparable in terms of demographic profiles, symptom severity and findings at esophagogastroscopy and manometry in the pre-operative period as well as size of the hiatal defect measured intra-operatively. post-operatively the mesh repair group had significantly better symptom resolution in terms of early satiety, chest pain and regurgitation (p<0.05), while with respect to heartburn, dysphagia and post-prandial pain there was no significant difference between the improvements demonstrated. improvement in quality of life scores after either procedure was not significantly different. recurrence was higher in the suture repair group (8 vs 0, p<.001). recurrence led to poorer symptom severity scores as well as quality of life scores, and one patient underwent re-operation. the change in the symptom severity score from baseline after the procedure at 6 months in the subgroup population.
conclusion: mesh reinforcement results in a reduced rate of recurrence and offers excellent symptom control in the short term without a rise in complications when compared to sutured repair for the closure of hiatal defects in laparoscopic hiatal hernia repairs. material and methods: in a period from 2005 to 2016, 72 patients underwent laparoscopic resection (67 gastric resection, 5 duodenal resection), using different techniques. all patients were investigated with upper gi endoscopy, eus and abdominal contrast ct, which allowed complete evaluation of the tumor, including size, location, type of growth and the gi layer. based on the findings the decision on the type of resection was made. the majority of resections were wedge or partial resections, performed using endoscopic staplers or ultrasound scissors followed by double suturing of the gastro/duodenotomy. in cases of tumor location on the posterior gastric wall we mobilized the greater curvature to get a direct approach to tumors with extraluminal growth. in cases with intraluminal growth we used a transgastric approach with a small 1.5 cm incision on the anterior gastric wall for the endoscopic stapler. technically the most complex procedures were in cases of tumor location close to anatomically narrow places and muscle sphincters (gastroesophageal junction, pylorus, duodenal bulb, duodenal flexure), with high risk of stenosis and dysfunction of anatomical sphincters. in such cases we used a "lifting technique" in which we dissect the serous and muscle layers circumferentially around the tumor, performing partial enucleation of the lesion followed by total resection, preserving almost all normal tissue with minimal suturing and deformity at the site of surgery. (1:1), mean age was 60.13 years (sd ± 11.9), 59 patients (72%) had mis.
the type of reconstruction was predominantly a "pull-up" technique (n = 43, 51.2%) followed by the kirschner-akiyama procedure (n = 25, 29.8%); stapled gastroplasty was performed in 12 patients. all the anastomoses were performed at the level of the neck and only one of the patients had a stapled anastomosis. mean operative time was 374 min (sd ± 92 min) including resection of the specimen. primary neoplasms were predominantly hypopharynx (n = 34, 40.5%), distal esophagus (n = 21, 25%), cervical esophagus (n = 12, 14.3%) and thoracic esophagus (n = 11, 13.1%). histologic types were mainly squamous cell carcinoma (n = 63, 77.4%) and adenocarcinoma (n = 12, 14.3%). mean hospitalization was 14.76 days (sd ± 9.374). no complications were observed in 38 patients and major complications (dindo-clavien ≥iiib) were found in 18 patients. anastomotic leak was present in 6 patients (7.1%) and perioperative mortality (30 days) was 2.4%. a progressive shift to laparoscopic surgery was evidenced through the years (2006-2009: 35.29%, 2010-2013: 70.27% and 2014-2017: 96.43%; p < 0.001) and a reduction in major complications (p = 0.021) was observed. anastomotic leaks (p = 0.545) and perioperative mortality (p = 0.373) did not show significant differences in the present study. conclusions: results in our center show that major complications decrease with time after application of minimally invasive surgery and no differences in anastomotic leaks and mortality were seen. current data has led us to abandon open total esophagectomy as a first-choice procedure. introduction: minimally invasive three-field esophagectomy is the surgical standard for oncological procedures and benign diseases. cervical dissection carries a risk of 2 to 59% in some series of lesion or paralysis of the rln, but the standard in the mckeown approach is 14%. a high level of suspicion is needed because this type of lesion has an impact on postoperative evolution and the hospital stay.
aim: to describe three cases of rln paralysis after minimally invasive three-field esophagectomy. methods: in a period of 3 years, january 2015 to june 2017, 10 esophagectomies for benign disease were performed. three patients (2 male, 1 female), 2 with a diagnosis of terminal achalasia and 1 with stenosis secondary to caustic ingestion, consulted at the minimally invasive surgery service of fundacion valle del lili. they were scheduled for minimally invasive three-field esophagectomy. one patient had no complications and early discharge (postoperative day 5) but occasional dysphagia; the other two required early reintubation after the surgery with ards: 1 patient required tracheostomy, and the second patient could be extubated after 2 days but had occasional dysphagia. all three had mild hoarseness after surgery. the patient who required tracheostomy was decannulated at 20 days without complication. results: the three patients underwent endoscopy without complication, with no stenosis of the cervical anastomosis or disorder in the emptying of the gastric tube, a swallowing study without alteration, and laryngoscopy showing paralysis of the left vocal cord. these patients underwent speech therapy with total recovery of the paralysis at 6 months, corroborated by laryngoscopy, without dysphagia or hoarseness. conclusion: the rln innervates the larynx and upper esophageal sphincter; therefore lesion or paresis causes symptoms such as hoarseness, dysphagia, difficulty swallowing, aspiration, difficulty coughing, pneumonia and ards. injury is a predisposing factor for pulmonary complications and prolongation of the hospital stay. 14% of these patients may require some surgical procedure to restore the function of the rln. noninvasive monitoring of the laryngeal nerve decreases the risk of injury. case report: multiple esophageal diverticula associated with achalasia. introduction: achalasia is a well-defined disorder of increased lower esophageal sphincter tone (1).
epiphrenic esophageal diverticula are a rare disorder believed to result from increased intraesophageal pressure, often in conjunction with a motility disorder causing functional outflow obstruction. they are pulsion-type pseudo-diverticula with mucosal bulging most frequently from the right posterior esophageal wall (2). we present a very rare case of achalasia associated with multiple esophageal diverticula successfully treated with laparoscopic heller myotomy with dor fundoplication. case presentation: a 75-year-old woman presented with 4 years of dysphagia, chest discomfort, regurgitation, and weight loss. esophagoscopy showed a patulous esophagus with multiple esophageal diverticula (figure 1). barium esophagram demonstrated 5 esophageal diverticula in the distal esophagus and delayed clearance of esophageal contrast (figure 2). high resolution manometry revealed a hypertensive mean les, an aperistaltic body on 10 of 10 wet swallows, and panesophageal pressurization in 7 of 10 wet swallows, consistent with type ii achalasia by chicago classification (1). we performed a laparoscopic heller myotomy with dor fundoplication. the myotomy was extended 6 cm above the gastroesophageal junction and 3 cm onto the gastric cardia. an anterior diaphragmatic defect with a moderate type 1 hiatal hernia was repaired with two sutures, taking care not to impinge the esophagus (figure 3). at 10 weeks postoperatively the patient reported excellent results. her dysphagia and chest discomfort had entirely resolved. her eckardt score improved from seven preoperatively to one postoperatively. discussion: type ii achalasia is successfully treated in the majority of cases with laparoscopic heller myotomy and partial fundoplication (3). however, esophageal diverticula typically require both myotomy as well as diverticulectomy for successful treatment (4). there is little experience with the surgical management of multiple esophageal diverticula.
we propose a two-stage surgical approach for these patients. we reason that the risk of esophageal leak or stenosis in the case of multiple esophageal diverticulectomies outweighs the proposed benefit. indeed, epidemiologic studies indicate that the majority of esophageal diverticula are asymptomatic (4). in the event the patient remains symptomatic after myotomy, a second-stage operation with diverticulectomies would be possible. this single experience suggests that diverticulectomy may not be necessary in the case of multiple diverticula associated with achalasia. instead, treatment may be directed at relieving the functional obstruction responsible for the symptoms by performing laparoscopic heller myotomy with dor fundoplication. takahiro kinoshita, md, facs, masanori tokunaga, md, akio kaito, md, masahiro watanabe, md, shizuki sugita, md; national cancer center hospital east, japan objective: the optimal surgical approach for siewert type ii cancer is still controversial due to the anatomical complexity of the region. potential advantages of the laparoscopic transhiatal approach have not been fully investigated. methods and procedures: we retrospectively analyzed 55 consecutive patients with siewert type ii cancer who underwent laparoscopic transhiatal resection. the indication for surgery was siewert type ii cancer with less than 3 cm of esophageal invasion. regarding the extent of resection, proximal gastrectomy with lower esophageal resection was generally selected, aiming at preservation of gastric reservoir function. for reconstruction after proximal gastrectomy, the double-tract method was performed. intraoperative peroral endoscopy was routinely employed to determine the appropriate resection level of the stomach. esophagojejunostomy was performed by the overlap method using a 45 mm linear stapler. in order to obtain a wider operative field in the lower mediastinum, the diaphragmatic crus was dissected to widen the esophageal hiatus.
results: in 55 patients (38 males and 17 females), median operation time was 282 minutes, and estimated blood loss was 18 g. the rate of surgical morbidity was 18%, and that of anastomotic leakage was 4%. there was no mortality. the mean length of the proximal margin was 10 mm, and no positive margin was recorded. the 3- and 5-year overall survival rates were 96.1% and 75%, respectively. conclusions: laparoscopic transhiatal resection for siewert type ii cancer is technically challenging, but appears feasible and safe when performed by an experienced surgical team. a large-scale prospective study is necessary for a definitive conclusion. introduction: mesh use for reinforcement of primary crural closure is controversial. synthetic mesh use poses a risk of erosion, but there is no evidence that non-synthetic mesh is useful to minimize the risk of hernia recurrence. we evaluated a fully bioresorbable mesh made from poly-4-hydroxybutyrate (p4hb) for crural reinforcement after para-esophageal hernia (peh) repair. the aim of this study was to evaluate the safety and efficacy of p4hb mesh at the hiatus in patients undergoing peh repair. this was a review of prospectively collected data on 50 consecutive patients that had repair of a peh with reinforcement of the crural closure with p4hb mesh. to be considered a peh, at least 50% of the stomach was herniated into the chest. a collis gastroplasty or crural relaxing incision was added for short esophagus or crural tension when necessary. routine follow-up consisted of esophagogastroduodenoscopy (egd) at 3 months for patients that had a collis gastroplasty, and a barium upper gi study (ugi), high-resolution manometry (hrm) and ph test in all patients at 12 months. a hernia of any size identified during objective follow-up testing was considered a recurrence. overall, there was a significant difference in mean measured tension between the three subjective suture ratings by the surgeons.
however, there was substantial variability and overlap among the surgeons' ratings (figure). the tension necessary to approximate the crura during peh repair can be objectively measured and, as expected, increases progressively with anterior movement up the hiatus. while there was some correlation between a surgeon's subjective assessment of the tension necessary to bring the crura together and actual measured tension, there was wide variability and imprecision from one stitch to another. objective tension measurement may provide a more reliable assessment of when excessive force is being used to re-approximate the crura and potentially improve peh recurrence rates. ahmed introduction: paraesophageal hernia repairs are increasing in prevalence, and unfortunately carry a high recurrence rate. consequently, reoperation is expected to increase in frequency. published data on the outcomes of recurrent paraesophageal hernia (rpeh) repair are very limited. because of the technical difficulties of revisional surgery, we hypothesize that laparoscopic revisional paraesophageal hernia repairs are associated with high perioperative morbidity and poor patient outcomes. methods: all rpeh repairs performed by the foregut surgical service at our institution from 2012 to 2015 were reviewed. patients were included if their index operation was a true pehr (initial type 1 hiatal hernia repairs were excluded, as well as multiply recurrent hernias). demographics, medical and surgical history, and operative notes from the index surgery were reviewed. details from standardized pre-operative symptom assessment, objective testing and operative details for the revisional surgery were collected. patients were routinely offered 12-month post-operative upper gastrointestinal contrast evaluation. postoperative outcomes included a standardized symptom assessment and results of objective testing at any time after surgery. results: twenty-six patients were identified who underwent repair of rpeh.
demographic, operative and perioperative data were available for all patients (table 1). twenty-four patients underwent follow-up symptom evaluation (two were lost to follow-up after the initial hospitalization). sixteen patients underwent follow-up objective testing by radiographic evaluation with contrast, endoscopy or both. these subgroups were used to calculate symptomatic and objective outcomes (table 1). conclusion: reoperative laparoscopic surgery for recurrent paraesophageal hernias is technically challenging, as evidenced by long operative times. despite this, perioperative outcomes at a high-volume center are good, with low morbidity and no mortality. importantly, symptomatic outcomes for this difficult problem are excellent. introduction: hypotension of the lower esophageal sphincter (hles) and the presence of hiatal hernia (hh) have both been associated with gastroesophageal reflux disease (gerd). the exact likelihood with which hles or a hiatal hernia predicts gerd continues to be defined. we hypothesize a synergistic interaction in those with hles and hh in predicting gerd as defined by a positive ph study. methods and procedures: between 2012 and 2013, 148 consecutive patients presenting to a surgical practice with symptoms most concerning for gerd, without prior antireflux surgery, were evaluated by high-resolution manometry (hrm), esophagogastroduodenoscopy (egd), videoesophagography (veg) and an ambulatory ph study. hles was defined as a residual les pressure of <15 mmhg; hh was defined as having been noted and measured by the radiologist, and these were further categorized into any hh, 1-3 cm, and >3-5 cm. background: while clinical outcomes have been reported for antireflux surgery, there is limited data on postoperative outpatient encounters and their associated costs. the aim of this study is to evaluate the utilization of healthcare and its associated costs during the 90-day postoperative period following antireflux surgery.
methods: we analyzed data from the truven health marketscan® research databases. patients ≥16 years with an icd-9 procedure code or cpt code for antireflux surgery and a primary diagnosis of gerd during 2012-2014 were selected. only patients with continuous enrollment six months prior to the date of surgery and 90 days after surgery were analyzed. patients with a diagnosis of esophageal cancer or achalasia during the six-month period prior to antireflux surgery, a length of stay >30 days following the index procedure, a capitated plan, or patients who underwent emergency surgery were excluded. outpatient endoscopy was defined using icd-9 and cpt codes, and related readmission was defined by clinical classification software. introduction: the development of postsurgical gastroparesis following nissen fundoplication is poorly understood. in this study, we analyze the development of gastroparesis requiring intervention and other subsequent procedures following fundoplication and paraesophageal hernia (peh) repair procedures in the state of new york. methods: using a comprehensive state-wide administrative database (sparcs), we examined all in-patient and outpatient records for adult patients who underwent fundoplication or peh repair as a primary procedure for the treatment of gerd between 2005 and 2010. patients with an initial gastroparesis diagnosis were excluded from the analysis. through the use of a unique identifier, each patient was followed until 2015 for the subsequent diagnosis of gastroparesis or reoperation. surgical procedures for the treatment of gastroparesis included pyloroplasty, pyloromyotomy, or gastroenterostomy procedures. multivariable logistic regression models were used to identify independent predictors of subsequent reoperation. results: a total of 6,438 patients were analyzed. this included 3,961 fundoplication patients (61.52%) and 2,477 (38.48%) peh repair patients.
in the fundoplication group, 388 (9.80%) patients had a follow-up diagnosis of gastroparesis or a secondary procedure. 211 (8.52%) of the patients who underwent a primary peh repair procedure had a follow-up procedure or gastroparesis diagnosis (table 1). mean time to follow-up procedure or diagnosis was 2.81 years for the fundoplication group and 2.16 years for the peh repair group. the majority of the follow-up procedures in the fundoplication group were revisional procedures (fundoplication or peh repair) (n = 254, 6.41%), while 134 (3.38%) patients were newly diagnosed with gastroparesis and/or underwent a secondary procedure for its treatment. conclusion: fundoplication and peh repair procedures have a relatively low post-operative incidence of gastroparesis following the initial procedure for treatment of gerd. secondary fundoplication or peh repair was more commonly performed than any of the surgical procedures for gastroparesis for both procedures. further analysis of association with subsequent procedures is needed. during this procedure, gastroesophageal reflux was evaluated and assigned to a severe, moderate or slight category. if the reflux was observed slightly up to the cervical esophagus, the case was assigned to the moderate category. if the reflux was observed intensely up to the cervical esophagus, the patient was returned to a head-high position for safety and the case was assigned to the severe category. anti-reflux surgery was considered in the moderate and severe categories. results: we have performed the laparoscopic nissen procedure in 87 cases. the mean operation time was 115 min. the outcome was assessed by a reflux test performed on postoperative day 4-5, and the results showed the reflux had disappeared in every case. the median follow-up period of this study was 38 months (7-95 months). in 13 cases (14.9%) ppi was restarted before 6 months after the anti-reflux surgery.
in 25 cases (28.7%) ppi was restarted after the anti-reflux surgery during the whole follow-up period of this study. the patients' bmi had no relationship to the need to restart ppi. to evaluate the degree of esophagitis objectively before and after the anti-reflux surgery, we designed "the esophagitis score". in this scoring method, a number from 0-5 was assigned according to the degree of esophagitis based on the la classification. the results of the study showed that the reflux esophagitis was obviously improved after the anti-reflux surgery, even in the ppi-restarted group (p<0.001). discussion: the number of gerd patients who need anti-reflux surgery appears to be high, so identifying the patients who truly need it is important. anti-reflux surgery is most effective for the patients who really have obvious reflux. the reflux test is feasible because of its convenience and its visual impact for patients. the results of laparoscopic nissen fundoplication were good and largely satisfactory to patients. surg endosc (2018) 32:s130-s359 introduction: fundoplication at the time of giant paraesophageal hernia repair is controversial. the proposed advantages are better reflux control and lower recurrence. disadvantages are that it adds fundoplication-specific complications, might be unnecessary, and may not decrease recurrence. we retrospectively reviewed giant paraesophageal hernia (peh) repairs with two-point gastropexy in the fundus and body, and no antireflux procedure. data collected included postoperative gerd symptoms, postoperative proton pump inhibitor (ppi) therapy and recurrence. methods: a retrospective review of patients who underwent repair of giant peh from 2012 to december of 2016. giant was defined as a hernia with 50% or more of the stomach above the diaphragm. follow-up consisted of an upper gi (ugi) study one year postoperatively and a reflux symptom questionnaire.
patients were followed every 4 months in the surgery clinic and a ppi wean was initiated at the second postoperative visit. the primary outcome we evaluated was discontinuation of ppis. in addition, we utilized a standardized reflux scale, and recurrence rates were collected. the chi-squared test was used for statistical analysis. background: gastroesophageal reflux disease (gerd) is a highly prevalent disorder with a multitude of treatment options ranging from lifestyle modifications and medical management to surgical options. despite the numerous treatments available, there is still debate over which approach is most appropriate and effective for patients. this study aims to examine the effect of robotic hiatal hernia repair (rhhr) with the novel addition of esophagopexy in patients with gerd. methods: a single-institution, single-surgeon, prospectively maintained database was used to identify patients who underwent rhhr with a partial fundoplication and concomitant esophagopexy for gerd from november 2015 to july 2017. patient characteristics, operative details and postoperative outcomes were analyzed. the primary endpoint was resolution of subjective gerd symptoms and discontinuation of proton pump inhibitor (ppi) therapy. recurrence of hiatal hernia was a secondary endpoint. results: eleven patients were identified meeting the inclusion criteria (rhhr + esophagopexy), with a mean follow-up of 9.5 ± 19.4 weeks. with regard to the rhhr, 91% underwent a partial fundoplication and the remaining 9% underwent a redo wrap. this patient cohort was 81.8% female with a mean age of 61.5 ± 11.9 years. preoperative esophagogastroduodenoscopy (egd) was performed in 100% of patients, showing a hiatal hernia in 91.0%, gastritis in 45.4% and esophagitis in 63.6% of patients. manometry was performed in 54.5% of the patients, showing esophageal dysmotility in 50% of those tested. esophagograms and ph studies were performed preoperatively in 36.4% and 45.5% of patients, respectively.
preoperatively, 100% of patients had a documented diagnosis of gerd and were taking a ppi and/or h2 blocker. after rhhr with esophagopexy, 81.8% of patients had resolution of their gerd symptoms while 18.2% (n = 2) remained symptomatic. however, one of the two patients reported a subjective decrease in symptom severity following the procedure. despite resolution of symptoms, 81.1% remained on ppis. another 9% switched to h2 blockers, and one patient discontinued all antisecretory therapy. none of the patients experienced recurrence of their hiatal hernia. conclusion: based on our data, rhhr with esophagopexy results in resolution of gerd symptoms in over 80% of symptomatic patients. in patients with hiatal hernias and gerd, rhhr with esophagopexy does lead to resolution of symptoms; however, the majority of patients remained on ppis. long-term follow-up is needed to investigate whether these patients are able to discontinue ppis and remain symptom free. chaya shwaartz, nadav zilka, mustapha siddiq, yuri goldes, md; sheba medical center, israel background: d2 gastrectomy for gastric carcinoma is a well-established procedure in patients undergoing surgery for gastric cancer and is the standard of care in our institution. reduced pain, early ambulation, and better cosmetics are some of the benefits of minimally invasive surgery for early gastric cancer. we aimed to describe our experience with laparoscopic d2 gastrectomies undertaken by a single surgeon in our institution. methods: this is a single-center retrospective review of prospectively collected data on d2 gastrectomies performed by a single surgeon. between november 2011 and february 2017, 45 laparoscopic subtotal/total gastrectomies were performed at sheba medical center, a tertiary center for foregut cancer. clinicopathological characteristics of the patients, surgical performance, postoperative outcomes and pathological data were collected. results: forty-five patients underwent laparoscopic gastrectomy.
of these, 38 had subtotal gastrectomy and 7 had total gastrectomy. the median age in our series was 65 (range, 43-89) years. most of the patients in our series had early gastric cancer (t1-2) (80%). the mean number of dissected lymph nodes was 25 ± 13. the mean operative time was 249 ± 48 minutes. postoperative complications were classified using the clavien-dindo classification; the rate of severe complications (≥ cd iiia) was 11%. conclusions: laparoscopic d2 gastrectomy for invasive gastric cancer is safe and feasible when carried out in high-volume centers by an experienced surgeon as part of a multidisciplinary team, with careful case selection and appropriate high-quality postoperative support. minimally invasive management of diaphragmatic hernias after esophagectomy: a case report introduction: esophagectomy is a common treatment for both benign and malignant pathologies of the foregut. hiatal paraconduit hernias are rare complications following esophagectomy. in this study, we review our experience with these rare diaphragmatic hernias. methods: a retrospective analysis of all patients presenting with hiatal hernia after esophageal resection at the university of oklahoma health science center between 2014 and 2017 was performed. data were abstracted from the medical record for evaluation and included demographics, symptoms, repair techniques and outcomes. no patients were excluded. results: a total of ten patients were identified to have paraconduit hernias. during this time interval, there were a total of 130 esophageal resections performed. all patients had esophagectomy for malignant disease. seven of the 10 patients have undergone surgery. two patients are asymptomatic and are being followed at their request, and one patient is pending elective correction. of the seven patients who underwent surgery, the median age was 58 years, with five males and two females. six of the seven patients underwent minimally invasive ivor lewis esophagectomy and one had an open mckeown procedure.
the median time from esophagectomy to hernia repair was 12 months (range, 1-120 months). the most common presenting complaints were abdominal pain and nausea. one patient was noted to have a paraconduit hernia on postoperative day 5 and was taken to surgery for repair during the hospitalization. there was one death, in a patient who presented with necrosis of the small bowel. the remaining 6 patients all had a laparoscopic approach. one patient required a hand port to reduce an incarcerated colon, and one patient was noted to have a cecal perforation during port closure requiring repair. all patients had herniated colon, with small intestine or pancreas herniation noted in three. repair was performed by reducing the viscera, a left phrenic relaxing incision, closure of the hiatus around the conduit and then closure of the diaphragmatic defect with mesh. at a median follow-up of 6 months, there are no recurrences. conclusion: hiatal paraconduit hernias are becoming a frequent finding among survivors of esophageal cancer surgery. our study demonstrates that there is a propensity for patients who undergo minimally invasive esophagectomy to develop these hernias. the vast majority of patients can undergo laparoscopic repair. our recommendation is to perform a diaphragmatic relaxing incision and liberal use of mesh. early results appear to be favorable regarding recurrence. aim: there have been several reports illustrating the safety and efficacy of various surgical techniques in performing laparoscopic esophagojejunostomy (ej). this study aims to compare two established methods of ej anastomosis, circular stapling with purse-string suture ("lap-jack") and the linear stapling technique, in laparoscopic total gastrectomy. methods: 314 patients diagnosed with gastric cancer underwent intracorporeal ej anastomosis in laparoscopic total gastrectomy from january 2013 to october 2016.
254 cases used the circular stapler with the purse-string "lap-jack" method, and 60 patients used the linear stapling method for ej anastomosis. 59 pairs were matched using propensity scores, and retrospective data on patient characteristics, surgical outcome, and post-operative complications were reviewed. the two groups showed no significant difference in age, bmi, or other clinicopathological characteristics, and there was no conversion to an open procedure. after propensity score matching analysis, the linear group had a significantly shorter operating time (252.6 ± 72.3 vs 200.1 ± 61.7 min, p≤0.001) and a longer proximal margin (3.9 ± 3.5 vs 4.9 ± 3.0, p = 0.022). no significant difference was found in estimated blood loss, retrieved lymph nodes, hospital stay, or time to first flatus. there was no postoperative mortality. early postoperative complications in the circular and linear groups occurred in 11 (18.6%) and 16 (27.1%, p = 0.381) patients, respectively. ej leakage occurred in 2 (3.4%) cases in each group, with 1 (50%) case in each group needing radiologic or surgical intervention. no other significant difference in early complications was found. late complications were observed in 7 (3.3%) cases (circular = 4, linear = 3, p = 1.000), with 1 ej anastomosis stricture in the linear group, but there was no statistical significance. conclusion: both the circular stapling and linear stapling techniques are feasible and safe for performing intracorporeal ej anastomosis during laparoscopic total gastrectomy. the linear stapling technique had a longer proximal margin and shorter operating time. there was no significant difference in anastomosis-related complications between the two groups.
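the propensity-score matching used above pairs each linear-stapling case with the most similar circular-stapling case before comparing outcomes. a minimal sketch of the greedy 1:1 nearest-neighbor matching step follows; the scores and the `greedy_match` helper are hypothetical illustrations (in practice the scores would be estimated from baseline covariates, e.g. by logistic regression), not the authors' actual procedure:

```python
# Greedy 1:1 nearest-neighbor matching on precomputed propensity scores.
# Scores below are placeholders; a real analysis estimates them from covariates.

def greedy_match(treated, control, caliper=0.1):
    """Pair each treated score with the closest unused control score
    within `caliper`; returns a list of (treated_idx, control_idx) pairs."""
    unused = dict(enumerate(control))  # control index -> score
    pairs = []
    # process treated cases in score order so matching is deterministic
    for t_idx, t_score in sorted(enumerate(treated), key=lambda kv: kv[1]):
        if not unused:
            break
        c_idx = min(unused, key=lambda i: abs(unused[i] - t_score))
        if abs(unused[c_idx] - t_score) <= caliper:
            pairs.append((t_idx, c_idx))
            del unused[c_idx]  # match without replacement
    return pairs

# toy example: 4 "linear" cases matched against 6 "circular" cases
linear = [0.32, 0.58, 0.41, 0.77]
circular = [0.30, 0.35, 0.60, 0.80, 0.10, 0.44]
pairs = greedy_match(linear, circular)
```

matching without replacement and a caliper on the score difference are the standard safeguards against poor-quality matches; unmatched cases are simply dropped, which is why 60 linear cases yield only 59 matched pairs here.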
masahiro watanabe, masanori tokunaga, akio kaito, shizuki sugita, takahiro kinoshita; national cancer center hospital east, gastric surgery division background: although the current standard treatment for advanced gastric cancer (agc) is open gastrectomy, laparoscopic gastrectomy (lg) is increasingly performed, especially in the east. however, it is a technically demanding procedure, and its feasibility remains unclear. the aim of the present study was to clarify the feasibility of lg for agc. patients and methods: the present study included 266 patients who underwent lg for agc between 2010 and 2017. the indication for lg has gradually expanded in our institute, and is currently any stage of gastric cancer except for gastric cancer obviously invading adjacent organs or gastric stump carcinoma. we retrospectively reviewed short- and long-term surgical outcomes of the patients. results: the male/female ratio was 2:1, and the median age (range) was 68 (23-90) years. distal gastrectomy was most frequently performed (62%), followed by total gastrectomy (33%). median operation time and intraoperative blood loss were 251 (156-529) minutes and 15 (0-505) g, respectively. the rate of clavien-dindo grade iii or higher complications was 8.6%. with a median follow-up period of 18 months, the 3-year recurrence-free survival rates of pstage ii and iii patients were 98% and 91%, respectively. conclusion: the outcomes of lg for agc are satisfactory, provided that an experienced team performs the surgery. introduction: the present study aims to evaluate the predictive value of indocyanine green (icg) for the detection and prevention of anastomotic leak following esophagectomy. anastomotic leak is a highly morbid and potentially fatal complication of esophagectomy. ensuring adequate perfusion of the gastric conduit can minimize the risk of postoperative leak.
intraoperative evaluation with fluorescence angiography using icg offers a dynamic assessment of gastric conduit perfusion, and can guide anastomotic site selection. methods: a search of the electronic databases medline, embase, scopus, web of science and the cochrane library using the search terms "indocyanine/fluorescence" and esophagectomy was completed to include all english articles published between 1946 and august 2017. articles were selected by two independent reviewers based on the following major inclusion criteria: (1) esophagectomy with gastric conduit reconstruction; (2) use of fluorescence angiography with indocyanine green to assess perfusion; (3) age ≥18 years; (4) sufficient outcome data for the calculation of leak rates and (5) sample size ≥5. the quality of included studies was assessed using the quality assessment of diagnostic accuracy studies-2. results: our literature search yielded 146 potential studies, of which 14 studies were included for meta-analysis after screening and exclusions. there were eleven prospective and three retrospective studies. the pooled anastomotic leak rate when icg was used was found to be 10%. pooled sensitivity and specificity for leak detection were 0.83 (0.70-0.93) and 0.60 (0.55-0.66), respectively. when studies involving intraoperative modifications were removed, pooled sensitivity and specificity were only marginally changed, to 0.75 (0.51-0.91) and 0.67 (0.55-0.77), respectively. the diagnostic odds ratio was found to be 5.68 (2.29-14.10) across all studies and 5.06 (0.93-27.55) when intraoperative interventions were excluded. only three trials included a control group, giving a sample size of 251. in studies with a comparator group, icg was associated with an 87% reduction in the risk of anastomotic leak [or: 0.13 (0.03-0.50)]. conclusions: in non-randomized trials, the use of icg as an intraoperative tool for visualizing vascular perfusion and guiding conduit site selection is promising.
however, poor data quality and heterogeneity in reported variables limit cross-study comparisons and generalizability of findings. randomized, multi-center trials are needed to account for independent risk factors for leak rates and to better elucidate the impact of icg in predicting and preventing anastomotic leaks. objective: robotic assistance for bariatric surgery represents a novel application of a rapidly emerging technology. its safety and efficacy remain primarily characterized by smaller, single-institution studies. in this investigation, the influence of robotic assistance on short-term perioperative outcomes is contrasted with the more established primary multi-port laparoscopic approach for patients undergoing roux-en-y gastric bypass (rygb), using data from a national bariatric database. methods: a retrospective analysis of 2,976 robotic-assist and 38,716 laparoscopic rygb patients from the 2015 metabolic and bariatric surgery accreditation and quality improvement program national database was reviewed for differences in patient characteristics and short-term outcomes. on bivariate analysis, variables associated with the primary outcomes of 30-day reoperation, readmission and reintervention were entered into multivariate analyses to determine independent significance. results: robotic-assist bypass patients were older (p<0.001), had a higher prevalence of comorbidities, and more frequently had concomitant operations performed during surgery (p<0.001). on bivariate analysis, robotic-assist patients had a higher rate of readmission than laparoscopic patients (7.5% vs. 6.4%; p=0.03), but no differences in 30-day reoperation. conclusion: robotic assistance does not confer an increased rate of morbidity and mortality after rygb, and represents a feasible surgical modality for the surgeon willing to adopt the technology and accept its limitations.
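the bivariate readmission comparison above (7.5% vs. 6.4%, p = 0.03) is a standard 2×2 chi-squared test of proportions. a minimal sketch follows; the event counts are reconstructed approximately from the reported percentages and cohort sizes, so the resulting p-value will differ slightly from the published one (which may also have used a continuity correction):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic and p-value (1 df, no continuity
    correction) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # chi2 with 1 df is the square of a standard normal, so its
    # survival function can be written with the complementary error function
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# counts reconstructed approximately from the reported rates:
# robotic: ~223/2,976 readmitted (7.5%); laparoscopic: ~2,478/38,716 (6.4%)
stat, p = chi2_2x2(223, 2976 - 223, 2478, 38716 - 2478)
```

with cohorts this large even a 1.1-percentage-point absolute difference clears the 1-df critical value of 3.84, which is why the multivariate step matters: it checks whether the readmission difference survives adjustment for the older, sicker robotic cohort.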
alicia m bonanno, md, brandon tieu, md, farah husain, md; oregon health and science university introduction: marginal ulcer is a common complication following roux-en-y gastric bypass, with incidence rates between 4 and 16%. most marginal ulcers resolve with medical management and lifestyle changes, but in the rare case of a non-healing marginal ulcer there are few treatment options. revision of the gastrojejunal (gj) anastomosis carries significant morbidity and mortality, with complication rates ranging from 10 to 50%. thoracoscopic truncal vagotomy (ttv) may be a safer alternative with decreased operative times. the purpose of this study is to evaluate the safety and effectiveness of ttv in comparison to gj revision for treatment of recalcitrant marginal ulcers. methods and procedures: a retrospective chart review of patients who required surgical intervention for non-healing marginal ulcers was performed from 1st september 2012 to 1st september 2017. all underwent medical therapy along with lifestyle changes prior to intervention and had preoperative egd that demonstrated a recalcitrant marginal ulcer. revision of the gj anastomosis or ttv was performed. data collected included operative time, ulcer recurrence, morbidity rate, and mortality rate. statistical analysis was performed using the t-test and fisher's exact test. results: a total of fifteen patients were identified who underwent either gj revision (n=8) or ttv (n=7). there were no 30-day mortalities in either group. mean operative time was significantly lower in the ttv group in comparison to gj revision (95.7 ± 16 vs. 197.8 ± 89 minutes, respectively; p=0.0141). recurrence of the ulcer did not differ significantly between groups and occurred following 2 gj revisions and 1 ttv. the overall complication rate was not significantly different, at 88% in the gj revision group and 57% in the ttv group.
complications included anastomotic leak (1 gj), anastomotic stricture (2 gj), aspiration (1 ttv), dysphagia (1 gj and 3 ttv), and dumping syndrome (2 gj). conclusions: our results demonstrate that thoracoscopic vagotomy may be a better alternative, with decreased operative times and similar effectiveness. however, further prospective observational studies with a larger patient population would be beneficial to evaluate complication rates and ulcer recurrence rates between groups. we present a case of a 59-year-old female with a history of thyroid cancer who initially presented to an outside hospital complaining of reflux, abdominal pain, early satiety, and a 35-pound unintentional weight loss. endoscopy demonstrated a 2 cm pre-pyloric mass, with initial biopsies of the mass demonstrating only gastric mucosa. endoscopic ultrasound and fna of the lesion also failed to elucidate its pathology. due to the pyloric location of the mass and the inability to rule out invasive malignancy, we recommended a robotic-assisted transgastric submucosal resection with possible distal gastrectomy. intraoperatively we found a 270-degree circumferential pre-pyloric exophytic sessile tumor. frozen sections suggested a benign papillary tumor; therefore we proceeded with submucosal resection. the resulting mucosal defect and gastrotomy were closed primarily with absorbable suture. final pathology showed the tumor to be a tubulovillous adenoma with high-grade dysplasia arising against a background of intestinal metaplasia. the resection margins were negative for dysplasia. the postoperative course was complicated by a minor leak, which did not require operative intervention, and subsequent gastric outlet narrowing, which required endoscopic dilation and feeding tube placement. however, the patient has recovered well and has advanced to a diet as tolerated. gastric adenoma has a prevalence of 0.5-3.75% in the western hemisphere.
the risk of carcinomatous transformation in gastric adenomas is related to size, degree of dysplasia, and villosity. gastric adenomas are considered precancerous lesions. pre-operative pathologic diagnosis of dysplasia is often elusive, as biopsies will often miss or under-grade the lesion. guidelines advocate for complete resection with either endoscopic submucosal dissection or surgical resection depending on surgeon preference and local expertise. endoscopic resection has been shown to be safe and efficacious in the removal of adenomas with good long-term outcomes. in this case the pathology of the lesion was unclear after multiple unsuccessful biopsies and required a surgical diagnosis to rule out invasive malignancy. management of gastric adenomas, while rare, may require a multidisciplinary approach between surgical endoscopy, minimally invasive surgery, and surgical oncology to achieve local control in an oncologically sound manner. we show that transgastric submucosal resection can be achieved in a minimally invasive fashion using robotic assistance.

objective: parahiatal hernia is a rare type of diaphragmatic hernia with an incidence of 0.2-0.35%. parahiatal hernias arise lateral to the left crural musculature, adjacent to but separate from the oesophageal diaphragmatic hiatus. in view of its rare occurrence and little clinical suspicion, it is almost never diagnosed clinically. the current case report is intended to depict the clinical profile of an intraoperatively diagnosed parahiatal hernia and the feasibility of laparoscopic repair of parahiatal hernias. method: laparoscopic fundoplication is frequently performed at grant medical college and sir j. j. group of hospitals, india. during one such case, a parahiatal hernia was diagnosed intraoperatively. discussion: primary or true parahiatal hernias occur as a result of a congenital weakness, and secondary defects follow hiatal surgery. the primary treatment of parahiatal hernia is mesh-plasty.
this is coupled with fundoplication in cases of large hernia and in those symptomatic for gastroesophageal reflux disease. laparoscopic repair of these uncommon hernias is safe, effective and provides all of the benefits of minimally invasive surgery. conclusion: due to its rare occurrence, knowledge about this condition among laparoscopic surgeons is important to avoid diagnostic dilemma. knowledge about its management aids intraoperatively in avoiding an incomplete procedure.

introduction: extended indications of endoscopic resection for early gastric cancer (egc) have been widely accepted. according to current japanese guidelines, additional gastrectomy with lymph node dissection (lnd) is recommended for patients proven to have potential risks of lymph node metastasis (lnm) on histopathological findings. on the other hand, the frequency of lnm in these patients is extremely low. the aim of this study was to elucidate the accurate risk of lnm based on the number of risk factors (rf) for possible lnm, and to compare the stratified risk of lnm with the predicted risk from additional radical resection. methods and procedures: we enrolled 589 egc patients who did not meet absolute or extended indications of endoscopic resection, and investigated the risk stratification of lnm according to the total number of the lnm rfs described below: (1) sm2, (2) lymphatic vessel invasion, (3) undifferentiated adenocarcinoma and >20 mm in diameter, and (4) >30 mm in diameter and ulcer formation. we compared the stratification risk to the surgical risk that was calculated based on the japanese national clinical database (ncd) risk calculator in 52 patients with additional gastrectomy after esd. results: the total number of lnm rfs and the frequency of lnm were significantly correlated (0/1 rf: 0.85%; 2 rfs: 10.88%; 3 rfs: 31.40%; 4 rfs: 53.57%; p<.05, fisher's exact test).
the estimated frequency of lnm was found to be lower than the predicted value of the in-hospital mortality rate based on the ncd in 24.3% of 0/1-rf patients who underwent additional gastrectomy with lnd after esd. the present study suggested that some patients must be over-indicated for additional gastrectomy with lnd, and that no additional surgical treatment or less invasive surgery, such as local lnd (sentinel node navigation surgery or lymphatic basin resection), might be indicated for some patients with a low number (0/1 rf) of lnm risk factors after esd.

aims: laparoscopic proximal gastrectomy has been applied for early gastric cancer in the upper third. we previously reported outcomes of laparoscopic total gastrectomy in managing this condition. in this study, we applied this modified technique for upper-third early gastric cancer with double tract reconstruction. it is expected that our technique could be useful for treating these cases. methods: from april of 2004 to june of 2017, 69 consecutive patients with upper-third early gastric cancer were assigned to undergo surgical treatment with proximal gastrectomy at our hospital. we had 195 cases of total gastrectomy for upper-third early gastric cancer in the same study period.

background: laparoscopic total gastrectomy for remnant gastric cancer is much more difficult than common laparoscopic total gastrectomy due to severe adhesions to adjacent organs and displacement of anatomical structures. purpose: the aim was to analyze 10 cases of laparoscopic total gastrectomy for remnant gastric cancer at the department of surgery of juntendo university urayasu hospital between november 1999 and april 2017. method: we analyzed the outcome and feasibility of laparoscopic total gastrectomy surgery for remnant gastric cancer, and we compared laparoscopic total remnant gastrectomy (10 cases) versus laparoscopic total gastrectomy (101 cases) in our hospital. results: in the previous laparoscopic surgeries,
we performed laparoscopic distal gastrectomy in 5 cases, laparoscopic proximal gastrectomy in 2 cases, and open distal gastrectomy in 3 cases. all cases underwent laparoscopic total gastrectomy with r-y reconstruction. 1 case was converted to open surgery due to severe adhesions. the mean operative time was 271 min and the mean blood loss was 189 ml. there were no intraoperative complications, and there were 2 postoperative complications: a pancreatic fistula and a bowel obstruction. however, there were no complications of grade 3 or higher according to the clavien-dindo classification. the mean postoperative hospital stay was 22.4 days. all cases were without recurrence. thus, there were no significant differences in operative time, bleeding volume, intra- and postoperative complications, or hospital stay compared with laparoscopic total gastrectomy. conclusions: laparoscopic total remnant gastrectomy can be performed with similar short-term outcomes to laparoscopic total gastrectomy, may be a feasible and safe procedure, and can become an option in the therapeutic strategy.

although this study was not powered to show lower recurrence rates with synthetic absorbable mesh as compared to biologic, the 8.51% recurrence rate is consistent with other series utilizing this mesh. it is interesting to note the difference in time to recurrence. these results suggest that while synthetic absorbable mesh may result in lower recurrence rates, recurrence seems to occur earlier. the results also suggest that deconditioning (lower bmi) and difficult cases and/or recovery may predispose to recurrence. these findings can help inform lf mesh selection and predict which patients are at higher risk of recurrence.

introduction: little discussion of gastroparesis (gp) following laparoscopic paraesophageal hernia repair (lphr) has been reported in the literature.
we wished to examine the incidence at our institution and identify potential risk factors for the development of gastroparesis following lphr. methods and procedures: a single-institution retrospective chart review was performed using cpt codes corresponding to paraesophageal hernia repair and fundoplication to identify patients undergoing laparoscopic paraesophageal hernia repair over a five-year period (1/1/2012-12/31/2016) by three surgeons. emergency procedures and reoperations were excluded. in total, 93 patients undergoing non-emergent first-time lphrs were identified. size of the hiatal defect was identified when able, via either measurement between the diaphragmatic crura on ct or by medical record documentation. data obtained included sex, age, hernia type, mesh usage, and the existence of specific comorbidities associated with gastroparesis. presence of gastroparesis was identified either by documentation of diagnosis via clinical judgment, or by results of gastric emptying nuclear medicine studies, with timing being no longer than 6 months from the date of surgery. the independent student's t-test and fisher's exact test were used to determine statistical differences between the groups. results: 93 patients undergoing non-emergent first-time lphrs were identified. of these, we were able to obtain the size of the hiatal defect in 72 patients. 10 patients overall were diagnosed with gastroparesis, an overall incidence of 11.0%. when comparing all patients who developed gastroparesis to those who did not, only females comprised the group which did develop gastroparesis (0 males/10 females with gp, 28 males/55 females without gp, p=0.029). age was also found to be greater in the group which developed gastroparesis.
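The sex comparison above (0 males/10 females with gastroparesis vs. 28 males/55 females without, p=0.029) can be reproduced exactly from the counts. A minimal standard-library sketch of the two-sided Fisher's exact test follows; the 2×2 layout is inferred from the reported counts, not stated in the abstract:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
    with all margins fixed, sum the hypergeometric probabilities of every
    table as extreme as (no more probable than) the observed one."""
    r1 = a + b                    # row 1 total (e.g., patients with GP)
    c1 = a + c                    # column 1 total (e.g., males)
    n = a + b + c + d             # grand total
    denom = comb(n, r1)

    def prob(k):                  # P(k column-1 members land in row 1)
        return comb(c1, k) * comb(n - c1, r1 - k) / denom

    p_obs = prob(a)
    lo = max(0, r1 - (n - c1))    # feasible range for k given the margins
    hi = min(r1, c1)
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# males/females with GP vs. males/females without GP, as reported
p = fisher_exact_two_sided(0, 10, 28, 55)
# two-sided p ≈ 0.029, matching the abstract
```

The exact test is the right choice here because one cell is zero and the expected male-with-GP count is about 3, well below the usual chi-squared validity threshold.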
for patients in which the size of the hernia defect was identified, the average age was 9 years older in the group diagnosed with gastroparesis.

step 1: under laparoscopic view, the left part of the lesser omentum was cut, preserving the hepatic branch of the vagus nerve. the right crus of the diaphragm was dissected free from the soft tissue around the stomach and abdominal esophagus. in this step the fascia of the right crus should be preserved and the soft tissue should not be damaged, to avoid bleeding. after cutting the peritoneum just inside the right crus, the soft tissue was dissected bluntly to the left side. then the inside margin of the left crus of the diaphragm was recognized from the right side. in this part of the procedure, the laparoscope uses trocar (a), the assistant uses trocar (b) to pull the stomach to the left lower side, and the operator's right hand uses trocar (c). step 2: the branches of the left gastroepiploic vessels and the short gastric vessels were divided with an ultrasonic coagulation and dissection device. the left crus of the diaphragm was exposed and the window at the posterior side of the abdominal esophagus was widely opened. in this part of the procedure, the laparoscope uses trocar (a) at the beginning of dividing the left gastroepiploic vessels, and trocar (b) when dividing the short gastric vessels. step 3: the right and left crura are sutured with interrupted stitches to reduce the hiatus. from the right side, the fundus of the stomach is grasped through the widely opened window behind the abdominal esophagus. then the fundus of the stomach is pulled to obtain a 360-degree ''stomach-wrap'' around the abdominal esophagus (fundoplication). using 2-0 non-absorbable braided suture, stitches are placed between both gastric flaps.

purpose: laparoscopic gastrectomy has been widely adopted as the treatment of choice by many countries and institutions. internal hernia is a well-known complication after roux-en-y gastric bypass in the field of bariatric surgery.
however, there have been only a few reports of internal hernia after gastrectomy in gastric cancer patients. the purpose of this study was to analyze the incidence and clinical features of internal hernia after gastric cancer surgery in a high-volume center. method: 2,931 gastric cancer patients who underwent curative gastrectomy at seoul national university bundang hospital between january 2013 and december 2016 were retrospectively reviewed in this study. internal hernia was classified into two types, mesenteric hernia and petersen's hernia. result: 2,201 patients who underwent distal gastrectomy (dg) with reconstruction by billroth ii, roux-en-y gastrojejunostomy or uncut roux-en-y gastrojejunostomy, total gastrectomy (tg) with esophagojejunostomy, or proximal gastrectomy with double tract reconstruction (pg dtr) with esophagojejunostomy and gastrojejunostomy had potential space for internal hernia. among these patients, 31 (1.4%) were diagnosed with internal hernia by computed tomography and 29 patients (1.3%) underwent surgical treatment of internal herniation. two patients were conservatively managed. all patients suffered from abdominal pain, and 13/31 (42%) patients showed nausea and vomiting. the median interval between the initial gastrectomy and surgery for internal hernia was 450 days. mesenteric hernia was observed in 18 cases and petersen's hernia in 12 cases. since we started closing the mesenteric and petersen's defects in may of 2015, there have been only 5 cases (16%) observed afterwards, whereas there were 24 cases (84%) before closure of the defects. conclusion: internal hernia after gastrectomy is likely underreported. although we analyzed 31 patients with internal hernia, there might be more patients with mild symptoms who were managed conservatively on their own.
a high degree of suspicion for internal hernia should be maintained in patients presenting with symptoms such as nausea, vomiting and abdominal pain after gastrectomy with potential space for internal hernia. in our experience, closure of the mesenteric and petersen's defects is helpful in reducing internal hernia. however, due to the low incidence, a multicenter retrospective study is necessary.

introduction: the increased incidence of anemia in patients with a hiatal hernia (hh) has been clearly demonstrated, as has resolution of anemia after hh repair in these patients. despite this, the implications of preoperative anemia on postoperative outcomes have not been well described. in this study, we aimed to identify the incidence of preoperative anemia in patients undergoing hh repair at our institution and sought to determine whether preoperative anemia had an impact on postoperative outcomes. methods and procedures: using our irb-approved institutional hh database, we retrospectively identified patients undergoing hh repair between january 2011 and april 2017 at our institution. we identified all patients with anemia, defined as serum hemoglobin levels less than 13 g/dl in men and 12 g/dl in women, measured within two weeks prior to surgery, and compared this cohort to those that had normal hemoglobin values preoperatively. specific perioperative outcomes analyzed included: estimated blood loss (ebl), operative time, need for blood transfusion, failure to extubate postoperatively, intensive care unit (icu) admission, postoperative complications, length of stay (los), and 30-day readmission. results: we identified 266 patients undergoing hh repair, of which 233 had preoperative bloodwork available for review. the average age was 64 years and the majority of patients were female (79%, n=208). most were treated electively (75%, n=196) and with a minimally invasive approach (97%, n=255). 70 patients (26.6%) had preoperative anemia.
compared to patients without anemia, patients with anemia had increased rates of failed extubation postoperatively (7.1% vs. 1.5%, p=0.033), increased icu admissions (12.9% vs. 5.1%, p=0.034), increased need for perioperative blood transfusions (11.4% vs. 0%, p=0.0003), and increased rates of postoperative complications (41.4% vs. 18.1%, p<.0001). although mean los (4.3 days vs. 3.2 days, p=0.077), mean operating time (262 mins vs. 252 mins, p=0.10), and ebl (52 ml vs. 38 ml, p=0.38) were greater in the anemic group, these did not reach statistical significance, and there was no significant difference in 30-day readmission rate (8.6% vs. 8.8%, p=0.95). conclusions: anemia diagnosed on preoperative bloodwork appears to be associated with increased failure to extubate postoperatively, need for icu admission, need for perioperative blood transfusion, and increased overall complication rate after hh repair. however, we found no significant difference in los or 30-day readmissions between anemic and non-anemic patients. since the majority of patients in this analysis underwent elective repairs, these results would support the preoperative treatment of anemia in patients undergoing hh repair.

few studies have compared the procedures' long-term effectiveness, with none looking beyond 5 years. this study sought to characterize the efficacy of laparoscopic toupet versus nissen fundoplication for types iii and iv hiatal hernia using a telephone survey. methods and procedures: with irb approval, a review of all laparoscopic hiatal hernia repairs with mesh reinforcement performed over seven years at a single center by one surgeon was conducted. patient demographics and perioperative characteristics were recorded. hiatal hernia was classified per published sages guidelines as type iii or iv using operative reports and preoperative imaging. patients with type i or ii or recurrent hiatal hernia and patients receiving concomitant procedures were excluded.
the gerd-health related quality of life survey was administered by telephone no earlier than 18 months postoperatively. patients responded to items concerning symptom severity using a 5-point scale (0=no symptoms to 5=symptoms are incapacitating to do daily activities). symptoms surveyed included heartburn (6 items), difficulty swallowing (1 item) and regurgitation (6 items).

introduction: as thoracic esophageal carcinoma has a high metastatic rate to the upper mediastinal lymph nodes, especially along the recurrent laryngeal nerve (rln), it is crucial to perform complete lymph node dissection along the rln without complications. although intraoperative neural monitoring (ionm) during thyroid and parathyroid surgery has gained widespread acceptance as a useful tool for visual nerve identification, the utilization of ionm during esophageal surgery has not become common. here, we describe our procedures, focusing on a lymphadenectomy along the rln utilizing the ionm. methods and procedures: we first dissect the ventral and dorsal sides of the esophagus, preserving the membranous structure (meso-esophagus), which contains the tracheoesophageal artery, rln and lymph nodes. we next identify the location of the rln, which runs in the meso-esophagus, using ionm before visual contact. after that, we perform lymphadenectomy around the rln, preserving the nerve. this technique was evaluated in 30 consecutive cases (neural monitoring group; nm) of esophagectomy in the prone position, and compared with our historical 56 cases (conventional method group; cm).

background: laparoscopic hiatal hernia repair, particularly of large type 1 and type 3 hernias, is associated with high recurrence rates. various uses of overlay mesh reinforcement have been described in an attempt to improve outcomes. unfortunately, overlay use of biologic mesh continues to result in high recurrence rates, and more effective repairs employing permanent mesh raise serious erosion concerns and are therefore rarely used.
we theorize that employing an interlay technique with permanent mesh (positioned between both crura) will help enhance crural closure and improve rates of hiatal hernia recurrence with minimal risk of erosion. methods: we reviewed all patients who underwent a laparoscopic hiatal hernia repair from april 2015 to august 2017 by a single surgeon from a prospectively maintained database at a tertiary care referral center (n=72). patients who underwent surgery for achalasia with concurrent hiatal repair were excluded. during this time frame, a new interlay technique of polypropylene mesh was employed upon suture closure of the crura. outcomes of repair were retrospectively reviewed. recurrence of hernia was identified by positive work-up of patients' symptoms (new-onset dysphagia, gerd, pain). results: a total of 72 consecutive laparoscopic hiatal hernia repairs were performed in a period of 28 months. interlay polypropylene mesh was utilized in all repairs. the majority of patients were female (74.0%), with a median age of 61 and a mean bmi of 31.3. eleven (15.0%) were redo repairs. the majority of patients received a nissen fundoplication (n=54, 75.0%), followed by a toupet fundoplication (n=14, 19.4%). median length of stay after surgery was 1 day. median follow-up was 43 days (range: 11-659 days). there were zero reported recurrences. conclusion: laparoscopic hiatal hernia repair with interlay polypropylene mesh appears in the short term to be a safe and durable technique to reduce the incidence of hiatal hernia recurrence. further studies are needed to assess longer-term outcomes of this novel technique.

zia kanani 1, melissa helm 1, max schumm 2, jon c gould, md 1; introduction: laparoscopic fundoplication remains the current gold-standard surgical intervention for medically refractory gastroesophageal reflux disease. studies suggest that on average 5-10% of patients undergo reoperative surgery due to recurrent, persistent, or new symptoms.
the primary objective of this study was to characterize the long-term symptomatic outcomes of primary and reoperative fundoplications in a clinical series of patients who have undergone one or more fundoplications. methods: patients who underwent laparoscopic primary or reoperative fundoplication between 2011 and 2017 by a single surgeon were retrospectively identified using a prospectively maintained database. patients undergoing takedown of a failed fundoplication and conversion to roux-en-y gastric bypass (for morbid obesity, severe gastroparesis, or 3 or more prior failed attempts) were excluded from the current analysis. all procedures were performed laparoscopically. patients were asked to complete the validated gerd-health related quality of life (gerd-hrql) survey prior to surgery and postoperatively at standard intervals to assess long-term symptomatic outcomes and quality of life. gerd-hrql composite scores range from 0 (highest disease-related quality of life) to 50 (lowest disease-related quality of life, most severe symptoms). conclusions: patients who need to undergo reoperative fundoplication have more severe gerd-related symptoms at 2 years post-op compared to patients undergoing primary fundoplication. however, good outcomes and morbidity rates of laparoscopic reoperation that approximate those of a primary fundoplication are possible in the hands of an experienced surgeon.

adenocarcinoma of duodenum: surgical or endoscopic treatment? introduction: it is well known that adenocarcinoma of the duodenum (adc) is quite a rare lesion; in fact it represents 40% of cancers of the small bowel, and 30% of these are localized in the periampullary area: 7% affect the sub-papillary tract and only 3% the supra-papillary segment of the duodenum. adc may arise from duodenal polyps (familial polyposis or gardner's syndrome) or be associated with coeliac disease.
until now the treatment has been pancreatoduodenectomy (for anatomo-surgical reasons and for the possibility of regional lymph node resection); in fact, in my series of 476 such procedures, 102 were performed for duodenal cancer. in the last 4 years, 18 patients with adc of the supra-papillary segment of the duodenum underwent endoscopic submucosal dissection (esd). the purpose of this study was to check the feasibility of esd in treating such cases. in our experience this kind of endoscopic operation was feasible but with a high complication rate: perforation in 3 cases (0.54%) and bleeding in 1 case (0.18%). all the complications were successfully treated endoscopically and the long-term outcomes were favorable. considering the high rate of complications, the difficult and long procedure, the compliance of patients (co2), and the general anesthesia, a very skilled endoscopist is needed. conclusions: esd represents a new endoscopic approach established in clinical practice: esd is performed following the intraluminal path (3rd space) which, unlike the others, remains virtual and has to be created by dissecting and expanding the tissue layer between the mucosa and the muscularis propria, allowing the endoscope to gain access. the benefit of esd for treating adc of the supra-papillary segment of the duodenum, according to our experience, must be validated in the future; a pre-operative pet-ct scan examination must be performed in order to demonstrate the lesion of the duodenum and whether there is any lymphatic involvement and no infiltration of the head of the pancreas.

yoontaek lee, md, sa-hong min, md, young suk park, md, sang-hoon ahn, md, do joong park, md, phd; seoul national university bundang hospital purpose: this study summarizes the single-institution experience of laparoscopic gastrectomy in advanced gastric cancer and evaluates the postoperative morbidities and long-term oncologic outcomes.
methods: a total of 1,597 laparoscopic gastrectomies for advanced gastric cancer were performed at seoul national university bundang hospital between may 2003 and may 2017. the characteristics of patients, surgical techniques, postoperative morbidities, and long-term oncologic outcomes were retrospectively reviewed using electronic medical records. results: 109 patients required conversion to open surgery. the reasons for conversion to open surgery were advanced stage (n=59), intraoperative bleeding (n=19), adhesion due to previous abdominal operation (n=10), small abdominal cavity (n=4), associated disease (n=4), and intraoperative pleural injury (n=2). the mean hospital stay was 7.0 days for distal gastrectomy, 9.6 days for total gastrectomy, 8.3 days for proximal gastrectomy, and 6.5 days for pylorus-preserving gastrectomy. the mean number of collected lymph nodes was 58.7 for distal gastrectomy, 70.1 for total gastrectomy, 43.0 for proximal gastrectomy, and 46.5 for pylorus-preserving gastrectomy. the rate of postoperative complications of grade ii or more was 9.4%. there was one case of postoperative mortality due to delayed bleeding after discharge. old age was the only independent predictor of surgical morbidities.

background: intrathoracic gastric volvulus is a life-threatening complication of paraesophageal hernia. treatment is challenging because acute volvulus may lead to gastric strangulation and necrosis. most patients are elderly and have significant associated medical illness, which carries higher morbidity and mortality with major surgery. we present a case in which laparoscopic surgery was safe in paraesophageal hernia with acute intrathoracic gastric volvulus in a high-risk patient. case presentation: an 80-year-old woman with underlying diabetes mellitus and hypertension was transferred from an outlying hospital with anemia, dysphagia, urinary tract infection and aspiration pneumonia. she had severe recurrent emesis after admission.
ct scan of the chest and abdomen revealed a large esophageal hiatal hernia, with most of the stomach in the inferior mediastinum with organoaxial gastric volvulus. endoscopy revealed a flat pigmented-spot gastric ulcer compatible with a cameron lesion and twisting of the gastric folds without evidence of ischemia. endoscopic reduction was unsuccessful. laparoscopic surgery was performed and the herniated stomach was successfully reduced. the hernia sac was excised. the crura were approximated and reinforced with composite mesh. nissen fundoplication was performed along with gastropexy of the greater curve of the stomach to the abdominal wall. there was no perioperative complication. she tolerated an enteral diet on postoperative day 3. she had an uneventful recovery and was discharged 2 weeks after treatment of her associated medical illnesses. she had no relapse of previous symptoms at her six-month follow-up assessment. discussion: endoscopic reduction of acute gastric volvulus may be the first option in a patient with severe comorbidities. however, if there is evidence of ischemia or failure of endoscopic reduction, surgical treatment should be considered. laparoscopic reduction and gastropexy may be a less-invasive and viable alternative to the more aggressive surgical procedure, but definitive surgery with repair of the hiatal hernia can be done in selected patients. conclusion: minimally invasive treatments of acute gastric volvulus with paraesophageal hernia, either endoscopic or laparoscopic, offer the option of reducing morbidity and mortality in the elderly with significant comorbidities. definitive laparoscopic surgery can be accomplished successfully and safely when it is performed with meticulous attention to surgical technique and perioperative care.
reid fletcher, md, mph, emily ramirez, rn, alfonso torquati, md, philip omotosho, md; rush university medical center introduction: the objective of this study was to evaluate the impact of an enhanced recovery after surgery (eras) program on post-operative length of stay following laparoscopic sleeve gastrectomy. eras programs have been demonstrated to improve outcomes and decrease length of stay in multiple surgical disciplines; however, relatively little has been published regarding the impact of eras programs in bariatric surgery. methods: an eras program for all patients undergoing bariatric surgery was implemented in february 2017 at a single institution. we retrospectively reviewed all patients undergoing laparoscopic sleeve gastrectomy between february 2017 and august 2017. as a pre-eras historical control, we also reviewed all patients undergoing laparoscopic sleeve gastrectomy between january 2016 and december 2016. baseline patient characteristics, additional concomitant operative procedures, as well as 30-day readmission and complication rates were reviewed. logistic regression analysis was used in univariate and multivariate models to identify factors that predicted early post-operative discharge. data analysis was completed using stata 12 se software (statacorp lp; college station, tx). results: eighty-five patients underwent laparoscopic sleeve gastrectomy after implementation of the eras program, while 169 patients were included in the pre-eras control group. there were no statistically significant differences in the baseline characteristics between the two groups and there were no differences in the rate of concomitant procedures performed. there was a statistically significant decrease in post-operative length of stay following implementation of the eras program from 2.

it has been reported that laparoscopic redo surgery is effective for recurrent gerd and/or hiatal hernia after surgery. however, there have been very few reports from japan.
we report an initial experience of laparoscopic surgery for japanese patients with recurrent gerd and/or hiatal hernia. among 177 patients who had undergone laparoscopic fundoplication in our hospital from 1997 to 2016, 15 patients with recurrent gerd/hiatal hernia underwent redo surgery. preoperative work-up included upper gi series, endoscopy, ct, 24-h ph-impedance and manometry. the patients consisted of 8 women and 7 men with a mean age of 65.8 years. the interval from the initial surgery was 26.7 months (4 days-60 months). the types of initial fundoplication were nissen: 10, toupet: 4, anterior: 1. the types of recurrence were sliding hernia: 11 and paraesophageal hernia: 4. one patient with recurrent sliding hernia had poor gastric motility. laparoscopic redo surgery was performed on 14 patients. redo surgery included crural repair with mesh reinforcement: 3, refundoplication: 10 (nissen-nissen: 3, nissen-toupet: 5, toupet-toupet: 1, toupet-lateral: 1) and reduction of the incarcerated paraesophageal hernia: 1. additional procedures included mesh reinforcement: 4 and pyloroplasty: 1. open partial gastrectomy was performed for one patient with an incarcerated and strangulated hernia. operation time was 226 min. 3 patients were converted to open surgery. oral intake was started on the 1st pod and postoperative stay was 6.5 days. two patients recurred after redo surgery, one of whom underwent re-redo surgery; during that surgery, the ivc was injured but was rescued by open surgery. eleven patients had a good outcome and 4 patients required ppi after redo surgery. our morphological fundoplication score significantly improved after redo surgery. symptom score and acid exposure time were also significantly improved after redo surgery. laparoscopic redo surgery for recurrent gerd and/or hiatal hernia after surgery is safe and effective, although attention should be paid during surgery to avoid injury to adjacent organs.
surg endosc (2018) introduction: cameron ulcers (cu) are linear erosions or ulcerations in the gastric mucosa at the level of the diaphragmatic hiatus in patients with a hiatal hernia (hh) and are frequently associated with anemia. perioperative outcomes of patients with cu undergoing hh repair are not well described. we sought to identify the incidence of cu in patients undergoing hh repair at our institution and determine whether the presence of cu impacted postoperative outcomes. methods and procedures: using our irb-approved institutional hh database, we retrospectively identified patients undergoing repair between january 2011 and april 2017. we identified all patients with cu found on preoperative esophagogastroduodenoscopy (egd). we compared patients with and without cu to determine if they differed in terms of preoperative anemia (defined as hemoglobin levels less than 13 g/dl in men and 12 g/dl in women). lastly, we compared outcomes between the cu group and the non-cu group, focusing on need for perioperative blood transfusion, failure to extubate postoperatively, intensive care unit (icu) admission, postoperative complications, length of stay (los), and 30-day readmission. conclusions: the presence of cu on preoperative egd is associated with an increased rate of preoperative anemia, increased los, and increased icu admission after hh repair. although the cause of anemia in patients with hh is commonly attributed to cu, only 38% of cu patients were anemic, indicating that differences in outcomes may not be attributed solely to a higher incidence of anemia in cu patients. the implications of cu in patients undergoing hh repair need to be further elucidated.

laparoscopic heller myotomy as treatment for achalasia. objective: the aim of this study was to review our experience with laparoscopic heller-dor myotomy. dysphagia constitutes the main symptom. diagnosis is performed by means of esophageal manometry.
materials and method: over a period of 15 years, 180 patients were treated with heller myotomy plus dor fundoplication laparoscopically. all patients had lost weight, and there was a prevalence of females, with an average age of 46. twenty-five patients had chagas disease. they were all assessed with serial x-rays, endoscopy, and esophageal manometry, and their symptoms were assessed with a 0-4 score, 4 being the most severe. results: there was no conversion or mortality. in 3 patients the mucosa was perforated during myotomy. the mucosa was sutured without altering the result of the treatment. average hospital stay was 36 hours. one patient had to be reoperated on because of esophageal perforation with peritonitis. sixty patients were followed up with manometric control and ph-probe testing, and only 10% of those had pathologic reflux. conclusions: laparoscopic treatment of achalasia is possible and reproducible, reducing the morbidity of laparotomy while relieving patients' symptoms.

introduction: stent treatment in the gastrointestinal tract is emerging as a standard therapy for overcoming strictures and sealing perforations. we have started to treat patients with perforated duodenal ulcers using a partially covered stent and external drainage, achieving good clinical results. stent migration is a serious complication that may require surgery. pyloric physiology during stent treatment has not been studied and the mechanisms of migration are unknown. the aims of this study were to investigate the pyloric response to distention mimicking stent treatment, using the endoflip, investigating changes in motility patterns due to distention at baseline, after a pro-kinetic drug and after food ingestion. methods: a non-survival study in five pigs was carried out, followed by a pilot study in one human volunteer. a gastroscopy was performed in anaesthetized pigs and the endoflip was placed through the scope straddling the pylorus.
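a common way to summarize paired endoflip readings of this kind is the distensibility index (cross-sectional area divided by intra-balloon pressure). the numbers below are invented for illustration only, shaped like the pattern the study reports (the pylorus opening with distention until pressure rises sharply at 50 ml):

```python
import numpy as np

# hypothetical endoflip-style readings at stepwise balloon distention
volumes = np.array([20, 30, 40, 50])           # ml
csa = np.array([28.0, 55.0, 90.0, 110.0])      # mm^2, pyloric cross-sectional area
pressure = np.array([14.0, 16.0, 19.0, 34.0])  # mmhg, pyloric pressure

# distensibility index: cross-sectional area per unit of distending pressure
di = csa / pressure
for v, d in zip(volumes, di):
    print(f"{v} ml: di = {d:.2f} mm^2/mmhg")
```

with these invented values the index rises up to 40 ml and then falls at 50 ml, mirroring the sphincter-to-pump transition described in the results.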
baseline distensibility readings were performed at stepwise balloon distention to 20 ml, 30 ml, 40 ml and 50 ml, measuring pyloric cross-sectional area and pyloric pressure. measurements were repeated after administration of a pro-kinetic drug (neostigmine) and after instillation of a liquid meal. in the human study, readings were performed under conscious sedation at baseline and after stimulation with metoclopramide. results: during baseline readings the pylorus was shown to open more with increasing distention, together with higher-amplitude motility waves. on reaching the maximum distention volume (50 ml), pyloric pressure increased significantly (p=0.016) and motility waves disappeared. after prokinetic stimulation, pyloric pressure decreased and motility waves increased in frequency and amplitude at 20, 30 and 40 ml distentions. after food stimulation, pyloric pressure stayed low and motility waves showed an increase in amplitude at distentions of 20, 30 and 40 ml. during both tests the pylorus showed higher pressure and a lack of motility waves at the maximum probe distention of 50 ml. similar results were found in the human study. the pylorus seems to act as a sphincter at low distention but, when further dilated, starts acting as a peristaltic pump. when fully distended, pyloric motility waves almost disappeared and the pressure remained high, leaving the pylorus open and inactive. stent placement in the pylorus results in pyloric distention, possibly changing motility. this study indicates that a duodenal stent placed over the pylorus should have a high radial force in the pyloric part in order to dilate the pylorus and diminish the contraction waves; this might reduce stent migration.

introduction: cutting-edge technology in the field of minimal invasive surgery allows the application of single-incision laparoscopic surgery on gastric cancer.
however, single-incision distal gastrectomy (sidg) is still technically difficult due to the limited range of motion and unstable field of view, even in the hands of an experienced scopist. solo surgery using a passive scope holder may be the key to allowing sidg to be safer and more efficient. we report our initial experience of 100 consecutive cases of solo sidg. methods: a prospectively collected database of 100 patients clinically diagnosed with early gastric cancer who underwent solo sidg from october 2013 until july 2016 was analyzed. all the operations were performed by a single surgeon and a scrub nurse. a passive laparoscopic scope holder was controlled by the surgeon to fix the field of view. results: the mean operation time (sd) was 122.8 (±34.9) min, and the average estimated blood loss was 30.5±57.0 ml. average body mass index was 23.4±2.9 kg/m2. the median hospital stay (range) was 5 (4-14) days, and the mean number of retrieved lymph nodes was 56.0±22.8. there was no conversion to multiport or open surgery. early postoperative complications occurred in 7%, with three delayed gastric emptying, two postoperative pneumonia, one pancreatitis, and one wound complication. conclusion: solo sidg using a passive scope holder makes sidg more feasible by providing a stable field of view.

there were no peri-operative deaths in either group. in the elective group, age was not an independent risk factor for complications (or 1.05, 95% ci 0.98-1.12). conclusions: the incidence of major complications and mortality in this series were much lower than those previously reported for elective lpehr, while morbidity after emergency repair remains high. the paradigm of watchful waiting for elderly and/or minimally symptomatic patients with giant peh should be revisited.
the impact of vagal nerve integrity testing in the surgical kamthorn yolsuriyanwong, md, eric marcotte, md, mukund venu, bipan chand, md; loyola university chicago, stritch school of medicine. background: thoracic and gastric operations can cause vagal nerve injury, either accidental or intentional. the most common procedures that can lead to such an injury include fundoplication, lung or heart transplantation and esophageal or gastric surgery. patients may present with minimal symptoms or some degree of gastroparesis. gastroparetic symptoms include nausea, vomiting, early satiety, bloating and abdominal pain. if these symptoms occur and persist, the clinician should have a high suspicion of a possible vagal injury. investigative studies include endoscopy, esophageal motility, contrast imaging and often nuclear medicine gastric emptying studies (ges). however, ges in the post-surgical patient has limited sensitivity and specificity. if a vagal nerve injury is encountered, subsequent secondary operations must be planned accordingly. methods: from january 2014 to august 2017, patients who had a previous surgical history of a foregut operation, with the potential risk of a vagal nerve injury, had vagal nerve integrity (vni) test results reviewed. the vni test was measured indirectly by the response of plasma pancreatic polypeptide to sham feeding. the data collected and analyzed included age, gender, previous surgical procedures, clinical presentation, results of vni testing and the secondary procedure planned or performed. vni testing was compared to other testing modalities to determine if outcomes would have changed. results: eight patients (5 females) were included. the age ranged from 37 to 73 years. two patients had prior lung transplantation and six patients had prior hiatal hernia repair with fundoplication. seven patients presented with reflux and delayed gastric emptying symptoms.
one lung transplantation patient had no symptoms, but his lung biopsy pathology showed chronic micro-aspiration with rejection. the vni testing results were compatible with vagal nerve injury in 6 patients. according to these abnormal results, the plans for nissen fundoplication in 2 patients were modified by an additional pyloroplasty, and the plans for redo-nissen fundoplication in 4 patients were changed to redo-nissen fundoplication plus pyloroplasty in 1 patient and partial gastrectomy with roux-en-y reconstruction in 3 patients. the operative plans in 2 patients with a normal vni test were not altered. all patients who had secondary surgery had improvement in symptoms and/or improvement in objective tests (i.e., signs of rejection). conclusion: the addition of vni testing in patients with previous potential risks of vagal nerve injury may help the surgeon select the appropriate secondary procedure.

we present a single-center experience with a "myotomy first" approach for all patients, regardless of diverticular size. the hypothesis is that cardiomyotomy alone will provide satisfactory symptom abatement in some patients; because mis cardiomyotomy causes minimal scarring, a staged mis diverticulectomy is feasible at a later date if diverticular retention/stasis continues. in order to discuss this treatment algorithm we present our experience with cardiomyotomy alone for patients with epiphrenic diverticula. methods: the electronic medical record was queried for patients with esophageal diverticula who were managed with cardiomyotomy and dor fundoplication alone. pre- and post-operative reflux/dysphagia questionnaires were gathered; imaging studies, operative data, complications and follow up were reviewed. results: from march of 2016 until the present, 7 patients with esophageal diverticula were treated using the "myotomy first" approach. intraoperative esophagoscopy was done to internally visualize the elimination of the inciting spastic esophageal muscle.
preoperatively, all patients complained of regurgitation, followed by dysphagia in 6 (85%) and weight loss in 3 (42%). postoperatively, dysphagia and weight loss resolved in all subjects. regurgitation symptoms resolved in 6 (85%) patients. the average size of the diverticula was 22.7 cm2 (range 2-62 cm2). postoperative esophagrams showed persistent diverticula; however, most had decreased in size. there were no perioperative complications, average length of stay was 2.1 days, and there were no icu admissions or returns to the or. the average length of follow up for these patients was 116 days, at which point all patients reported being satisfied with their results and none of them had yet desired to pursue diverticulectomy. discussion: a "myotomy first" approach resulted in excellent short-term symptomatic control. none of the 7 have retained or re-experienced symptoms of diverticular retention worthy of surgical intervention. in the age of laparoscopic surgery, an esophageal epiphrenic diverticulectomy should be staged. this stepwise approach seeks to assure surgical necessity for a morbid endeavor.

background: the two-stage oesophagectomy (ivor-lewis procedure) remains the mainstay of curative surgery for oesophageal cancers in the uk. gastro-oesophageal anastomotic leak is a potentially devastating complication of this procedure affecting perioperative morbidity and mortality. although leak rates have improved over the years, they still remain widely variable. intraoperative reinforcement of the gastro-oesophageal anastomosis with an 'omental wrap' has been proposed as a measure to reduce anastomotic leak rates. there is some data to suggest that this additional technique reduces anastomotic leak. we reviewed our single institution data to assess if the omental wrap indeed had a 'cocoon' effect in maturing the anastomosis and reducing leak rates.
methods: data for all cancer oesophagectomies (ilog) performed in our institute from april 2013 to 2017 were retrospectively analysed from a prospectively maintained database. the patients were categorised into two groups.

masafumi ohira; department of gastroenterological surgery, hokkaido university graduate school of medicine. background: in laparoscopic surgery, both surgical technique and adequate support and traction by an assistant are highly important. this study assessed the impact of the first assistant on short-term outcomes of laparoscopic distal gastrectomy (ldg) and laparoscope-assisted distal gastrectomy (ladg). methods: patients who underwent ldg or ladg for gastric cancer at our hospital between november 2013 and august 2017 were included. ldg and ladg cases with billroth i reconstruction, performed by a single surgeon accredited in endoscopic procedures, were analyzed. the cases were categorized into the following 4 groups according to the first assistant's postgraduate years (pgy) of experience: group a, 3-5 years; group b, 6-10 years; group c, 11-15 years; and group d, ≥16 years. short-term outcomes were compared between the groups. results: we examined 48 cases. operative time was significantly longer in group a than in group b (p=0.029). no significant differences in operative time were found between groups b, c, and d. the cases were recategorized into 2 groups as follows: group a, the young assistant group (group y, n=8), and groups b, c, and d, the senior assistant group (group s, n=40). significant differences in operative time and method of anastomosis (circular stapler or delta anastomosis) were observed between the 2 groups (p=0.0054 and p=0.0028, respectively), but no significant differences in complication rates were found (p=1.0000). the unadjusted analysis revealed that group, method of anastomosis, and body mass index (bmi) were significant factors associated with longer operative time.
multivariate linear regression analysis with stepwise model selection using akaike's information criterion (aic) revealed that bmi and group were significant factors associated with longer operative time (p=0.0075 and p=0.0024, respectively). multivariate analysis using these 2 variables and the method of anastomosis confirmed the significance of bmi and group for longer operative time, but no significance was found for the method of anastomosis (p=0.0088, p=0.021, and p=0.51, respectively). conclusions: our study showed that operative time tended to be longer when the first assistant had less than 6 pgy of experience, but morbidity did not increase. as with the operator, the first assistant needs adequate training to ensure a smooth operation.

steven g leeds, md, marc ward, md, brittany buckmaster, pa, estrellita ontiveros, ms; baylor university medical center at dallas. background: gastric contents can reach beyond the esophagus into the larynx and pharynx, causing an increasingly prevalent disease called laryngopharyngeal reflux (lpr). magnetic sphincter augmentation (msa) has been used as an alternative treatment for gerd with good success, but there is no data to support its use in lpr. methods: forty-five patients with msa implants for symptomatic relief of both gerd and lpr symptoms were examined. all patients experienced at least one typical gerd symptom as well as at least one extra-esophageal symptom. this was assessed using the gerd-hrql, which is 15 questions graded 1-5 on each question, and the reflux symptom index (rsi), which is 9 questions graded 1-5 on each question. patients filled out questionnaires preoperatively, one month postoperatively (early follow up), and at 6 months to 1 year postoperatively (late follow up).
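the stepwise aic-based model selection used in the gastrectomy assistant study above can be sketched as a forward selection over candidate predictors. the data, predictor names, and effect sizes below are invented for illustration; only predictors that lower the aic enter the model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
bmi = rng.normal(23, 3, n)
young_assistant = rng.integers(0, 2, n)   # 1 = first assistant with <6 pgy
circular_stapler = rng.integers(0, 2, n)  # anastomosis method
# assumed true model: bmi and assistant group drive operative time
op_time = 120 + 4 * (bmi - 23) + 25 * young_assistant + rng.normal(0, 15, n)

def aic(y, cols):
    """aic of an ols fit (gaussian likelihood, up to a constant)."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1  # coefficients plus the error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

candidates = {"bmi": bmi, "young_assistant": young_assistant,
              "circular_stapler": circular_stapler}
selected, current = [], aic(op_time, [])
while True:
    best = None
    for name, col in candidates.items():
        if name in [s for s, _ in selected]:
            continue
        trial = aic(op_time, [c for _, c in selected] + [col])
        if trial < current:
            current, best = trial, (name, col)
    if best is None:
        break
    selected.append(best)

names = [name for name, _ in selected]
print(names)
```

with these simulated effects, the two genuine predictors enter the model, while a predictor that adds nothing is penalized by the 2k term and tends to be left out.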
the responses on the gerd-hrql were clustered into questions inquiring about heartburn (6), dysphagia (2), and regurgitation (6).

like all surgical fields, there is a push towards standardization of the post-operative course while maintaining safe practices. other surgical fields have streamlined recovery processes in an effort to standardize care and minimize costs. laparoscopic hiatal hernia repair is a complex procedure, but with experience and a team approach, this operation can become a streamlined process. methods: a retrospective review was done for over 250 laparoscopic hiatal hernia repairs at a single institution. aspects of post-operative care such as hospital floor, nursing ratio utilized, pain medication, diet advancement, use of foley catheters and length of hospital stay were tracked. statistical analysis was done to compare utilization of resources over the years, along with complications and readmissions. results: a total of 258 hiatal hernia repairs were performed between 2011 and 2017. improvements were noted in nearly every field over time, including faster foley removal, decreased length of hospital stay, decreased use of patient-controlled analgesics (pcas) and faster advancement of diet. furthermore, these patients are now treated on a surgical floor rather than the intensive care unit or step-down unit with a higher nurse-to-patient ratio, decreasing hospital cost. there were no changes in complications, reoperations or readmissions over the course of the study. conclusions: cost, length of stay and so-called "advanced recovery pathways" are all the rage in the surgical literature. anytime a procedure and its post-operative course can become less of a "major undertaking" and more routine, the more streamlined it becomes. this comes from making a standard protocol that de-escalates treatment based on what is actually needed.
nearly every aspect of post-operative care was simplified; length of stay and cost to the hospital were decreased while no additional complications or readmissions were accrued. the foundation of a formalized advanced recovery pathway will be implemented from these factors which were studied.

background: the obesity epidemic continues to worsen. bariatric surgery remains the most effective way to achieve weight loss and resolution of comorbidities. laparoscopic sleeve gastrectomy has become the most common bariatric operation due to excellent efficacy and low morbidity and mortality. the most common complication of sleeve gastrectomy is gastroesophageal reflux disease (gerd), which can adversely impact quality of life and lead to additional esophageal complications. recently, esophageal magnetic sphincter augmentation (linx®) has become an acceptable alternative to fundoplication for certain patients with gerd. the use of linx® in patients who previously underwent laparoscopic sleeve gastrectomy was described in a case series in 2015. the known complications of these devices include dysphagia, need for endoscopic dilation, and device erosion. the complication profile of linx® in the setting of sleeve gastrectomy has not been reported heretofore. methods: we present a case of a patient with prior sleeve gastrectomy who received a linx® device one year after her bariatric operation due to severe gerd refractory to medical management. initial evaluation demonstrated a hypotensive lower esophageal sphincter and hiatal hernia, but no evidence of stricture or twisting. soon after linx® implantation, the patient developed progressive dysphagia and worsened reflux. repeat evaluation showed esophagitis, a moderate stricture with angulation at the incisura, and a large amount of retained food. discussion: the patient was recommended for conversion to roux-en-y gastric bypass, but was deemed a poor candidate due to heavy smoking.
thus, laparoscopic removal of the linx® device was performed with hiatal hernia repair and gastric stricturoplasty. post-operative fluoroscopic evaluation revealed improvement in the stricture, but persistent gastroesophageal reflux. the patient experienced a significant improvement in her symptoms of dysphagia, nausea, and vomiting. however, once smoking cessation is achieved, she may still need a conversion to roux-en-y gastric bypass in order to address persistent gerd. conclusion: conversion to roux-en-y gastric bypass remains the standard approach to treatment of gerd post sleeve gastrectomy. new approaches to this problem, including placement of linx®, are promising but have not been evaluated for long-term safety and efficacy in the setting of prior bariatric surgery. careful diagnostic evaluation prior to placement of a magnetic sphincter augmentation device should be routinely undertaken. postoperatively, close long-term follow up is imperative, particularly in patients with prior sleeve gastrectomy. presence of linx® in a patient with prior bariatric surgery may lead to worsening symptoms if complications of the initial operation are present.

kazuto tsuboi, md 1 , nobuo omura, md 2 , fumiaki yano, md 3 , masato hoshino, md 3 , se-ryung yamamoto 3 , shunsuke akimoto, md 3 , takahiro masuda 3 , hideyuki kashiwagi, md 1 , norio mitsumori, md 3 , katsuhiko yanaga, md 3; 1 fuji city general hospital, shizuoka, japan, 2 nishisaitama-chuo national hospital, saitama, japan, 3 the jikei university school of medicine, tokyo, japan. background: esophageal achalasia is one of the primary esophageal motility disorders, and patients suffer from dysphagia, vomiting and chest pain. timed barium esophagogram (tbe) is a convenient method to assess esophageal clearance, which we usually perform before and after surgery. meanwhile, the laparoscopic heller-dor operation (lhd) has been considered worldwide as the gold standard for the surgical management of esophageal achalasia.
the aim of this study is to examine the effect of the preoperative clearance rate at the lower part of the esophagus on surgical outcomes in patients with esophageal achalasia. patients and method: between august 1994 and april 2017, patients who underwent lhd at our institution were extracted from the database. out of 557 patients, 398 patients met our inclusion criteria: patients who underwent lhd as an initial operation with complete evaluation of preoperative esophageal clearance by tbe. these patients were divided into three groups by the degree of esophageal clearance (group a: clearance rate <10%; group b: 10% ≤ clearance rate <50%; group c: clearance rate ≥50%). patients' background, pre- and post-operative symptom scores, and surgical results were compared. before and after surgery, a standardized questionnaire was used to assess the frequency and severity of symptoms (dysphagia, vomiting, chest pain and heartburn). moreover, satisfaction with the operation was evaluated using the standardized questionnaire. statistical analysis was performed using the kruskal-wallis test or chi-square test, and a p-value less than 0.05 was defined as statistically significant. results: the mean age was 44.3 years and 204 patients (51.3%) were male. one hundred and sixty-eight patients (42.2%) were in group a, 149 (37.4%) in group b, and 81 (20.4%) in group c. the maximum width of the esophagus in group c was smaller than that in the other groups (p=0.0258). as to the pre-operative symptom scores, the frequency score of dysphagia was significantly lower in group c (p=0.026), whereas the severity score of chest pain was significantly higher in group c (p=0.0465). surgical outcomes, including the incidence of mucosal injury, were not different among the groups. moreover, patient satisfaction with lhd was excellent regardless of preoperative esophageal clearance.
conclusion: the preoperative clearance rate at the lower part of the esophagus in patients with esophageal achalasia did not affect the surgical outcomes of lhd, but the preoperative symptom profile of patients with poor esophageal clearance was characterized by less frequent dysphagia and more severe chest pain.

(3.5 cm × 2.5 cm) was made by dissecting between the submucosal and muscular layers at the anterior remnant gastric wall. after creation of the double flap, the posterior esophageal wall (5 cm from the edge) and the anterior gastric wall (superior edge of the mucosal window) were sutured for fixation, 1.0 cm from the inferior edge of the mucosal window was opened, and the wall of the esophageal edge and the opening of the remnant gastric mucosa were sutured continuously. the anastomosis was fully covered by the seromuscular flaps with suturing. in latg, roux-en-y reconstruction was performed through a small incision using a circular stapler.

introduction: the purpose of this study was to clarify the long-term and short-term outcomes of 330 consecutive patients who underwent thoracoscopic esophagectomy in the prone position using a preceding anterior approach for the resection of esophageal cancer at a single institution. this method was established to make esophagectomy easier to perform and to achieve better outcomes in terms of safety and curability. methods and procedures: we retrospectively reviewed a database of 673 patients with thoracic esophageal cancer who had undergone a thoracoscopic esophagectomy (te, 330 patients) or an esophagectomy through thoracotomy (oe, 343 patients) between january 2003 and august 2017. to compare the long-term outcomes of te and oe, we used a propensity score matching analysis and a kaplan-meier survival analysis. to analyze the short-term outcomes of te, patients were chronologically divided into three groups: a first period group (110 patients), a second period group (110 patients), and a third period group (110 patients).
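the propensity score matching step mentioned above can be sketched as a logistic propensity model followed by greedy 1:1 nearest-neighbour matching. the cohort, covariates, and assignment coefficients below are invented for illustration; real analyses typically also check balance on every covariate:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 600
age = rng.normal(65, 8, n)
stage = rng.integers(1, 4, n)  # tumour stage 1-3 (invented covariate)
# assumed assignment model: older, higher-stage patients get te more often
logit = -0.5 + 0.08 * (age - 65) + 0.4 * (stage - 2)
treated = rng.random(n) < 1 / (1 + np.exp(-logit))

def propensity_scores(X, y, iters=25):
    """newton-raphson logistic fit; returns the linear propensity score."""
    X = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ beta))
        hessian = (X * (mu * (1 - mu))[:, None]).T @ X
        beta += np.linalg.solve(hessian, X.T @ (y - mu))
    return X @ beta

score = propensity_scores(np.column_stack([age, stage]), treated.astype(float))

# greedy 1:1 nearest-neighbour matching on the score, without replacement
controls = list(np.where(~treated)[0])
pairs = []
for t in np.where(treated)[0]:
    j = min(range(len(controls)), key=lambda k: abs(score[controls[k]] - score[t]))
    pairs.append((t, controls.pop(j)))
    if not controls:
        break

matched_t = [t for t, _ in pairs]
matched_c = [c for _, c in pairs]
gap_before = abs(age[treated].mean() - age[~treated].mean())
gap_after = abs(age[matched_t].mean() - age[matched_c].mean())
print(round(gap_before, 2), round(gap_after, 2))
```

matching shrinks the age imbalance between the treated and control arms, which is the point of the technique: outcome comparisons (e.g. kaplan-meier curves) are then made within the matched pairs.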
as for the thoracoscopic procedure, the esophagus was mobilized from the anterior structures during the first step and from the posterior structures during the second step. the lymph nodes around the esophagus were also dissected anteriorly and posteriorly. the intraoperative factors, the number of dissected lymph nodes, and the incidence of adverse events were compared among the three period groups using a one-way anova or chi-square test. results: one hundred and twenty-three patients from each group, for a total of 246 patients, were completely selected and paired.

background: it is difficult to anastomose using a circular stapler in the narrow neck field. to overcome this problem we modified circular stapling for anastomosis. gastric juice reflux is frequently observed at the esophagogastric anastomosis; we developed and report a trapezoidal tunnel method to reduce the incidence of reflux. (1) patients: one hundred thirteen cases (27 in the left lateral and 93 in the prone position) with esophageal carcinomas underwent vats-e. esophago-gastric anastomosis was performed in 80 cases by modified circular stapling and 3 cases by the trapezoidal tunnel method. (2) methods: at first the patients are fixed in the semi-prone position, and esophagectomy is performed in the prone position, which can be set by rotating; 5 ports are used at the intercostal spaces (ics). esophagectomy and the lymph node dissection are performed with pneumothorax maintained by co2 insufflation. esophago-gastric anastomosis is performed as follows. i) trapezoidal tunnel method: the sero-muscular layer of the anterior wall near the top of the gastric conduit is peeled from the submucosal layer after a parallel horizontal incision of the sero-muscular layer, and then a trapezoidal tunnel of the sero-muscular layer is created. the edge of the proximal esophagus is drawn into the tunnel and esophago-gastric submucosal anastomosis is performed. to wrap the anastomosis, the distal side of the parallel line is closed.
ii) modified circular stapling: at first the circular stapler is introduced into the gastric conduit and joined to an anvil, and closed a little. then the joined anvil is placed into the proximal esophagus and secured by means of a purse-string suture. the gastric conduit opening is closed by a linear stapler.

purpose: mesh utilization and its impact on postoperative hernia recurrence following paraesophageal hernia repair remains a polarizing topic. this analysis evaluates the recent trends in laparoscopic paraesophageal hernia repairs and analyzes the impact of operative time on postoperative morbidity. methods: the 2013-2015 acs-nsqip database was queried for the primary cpt codes for laparoscopic paraesophageal hernia repair with and without mesh (43282/43281). only elective cases performed by a general surgeon were included. operative time was grouped into quartiles (80-110, 111-142, 143-185, 186-360 min) and statistical analysis was performed using univariate anova with post-hoc testing and multivariate regression modeling controlling for age, diabetes, renal disease and weight loss. this analysis was powered to detect a greater than 2% difference in outcomes based on mesh utilization. the outcomes of interest were composite morbidity scores and readmission rates within 30 days of surgery. results: the database identified a cohort of 6,234 laparoscopic paraesophageal hernia repairs performed between 2013 and 2015. average patient age was 64 years and average patient body mass index was 31. mesh was utilized in 42% of cases per year and did not change over the study period (p=0.367); however, mesh utilization was 37%, 40%, 43%, and 49% within operative time quartiles 1-4, respectively (p<.001). postoperative morbidity and readmission rates for each operative time quartile were 2.8%, 4.1%, 5.42%, and 6.13% (p<.001) and 4.4%, 5%, 6.2%, and 7.6% (p=0.001), respectively.
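the quartile analysis above (binning cases by operative time and testing whether morbidity differs across bins) can be sketched as follows. the cohort and event rates are simulated for illustration, not the nsqip data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 6000
op_time = rng.uniform(80, 360, n)  # minutes, simulated
# simulated morbidity risk rising with operative time
p_morbidity = 0.02 + 0.04 * (op_time - 80) / 280
morbid = rng.random(n) < p_morbidity

# assign each case to an operative-time quartile (0..3)
edges = np.quantile(op_time, [0.25, 0.5, 0.75])
quartile = np.searchsorted(edges, op_time)

rates = [morbid[quartile == q].mean() for q in range(4)]
table = [[int((morbid & (quartile == q)).sum()),
          int((~morbid & (quartile == q)).sum())] for q in range(4)]
chi2, p, _, _ = stats.chi2_contingency(table)
print([round(r, 3) for r in rates], f"p={p:.2e}")
```

the chi-square test on the quartile-by-outcome contingency table detects the rising morbidity rate across quartiles; the abstract's multivariate step would additionally adjust for covariates such as age and diabetes.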
post-hoc testing indicated statistically significant differences in postoperative morbidity and readmission rates between quartiles 1 and 3/4. multivariate regression analysis documented operative time as a risk factor for postoperative morbidity and readmission, even after controlling for covariates. mesh utilization was only significant for a reduction in the rate of venous thromboembolic complications (or 0.493, p=0.027) but did not impact other morbidities or readmission rates. conclusion: this analysis suggests that patients with longer operative times have increased postoperative morbidity and readmission, while mesh utilization does not impact postoperative outcomes after accounting for the longer operative time of a paraesophageal hernia repair with mesh.

introduction: gastroparesis is a chronic gastric motility disorder defined by delayed gastric emptying and symptoms such as nausea, vomiting, bloating and abdominal pain. surgical options for refractory gastroparesis include pyloroplasty, gastric stimulator insertion, and gastrectomy. the palliation from a pyloroplasty and gastric stimulator may be synergistic; however, concerns remain regarding the possibility of stimulator infection when performing both procedures simultaneously. we present our initial experience of combined laparoscopic pyloroplasty and insertion of a gastric stimulator. methods: gastroparesis patients diagnosed by solid gastric scintigraphy or endoscopic evidence of retained food after prolonged npo status who underwent combined laparoscopic heineke-mikulicz pyloroplasty and gastric stimulator insertion between july 2016 and july 2017 were reviewed. patient demographics, pre- and post-operative symptom scores and outcomes were collected. results were analyzed using statistical tests as appropriate. p values <.05 were considered significant. results: seven patients underwent simultaneous pyloroplasty and gastric stimulator insertion.
six patients (86%) were idiopathic and one patient (14%) was diabetic. one patient was male and six patients were female. charleen yeo, enming yong, danson yeo, kaushal sanghvi, aaryan koura, jaideepraj rao, myint oo aung; tan tock seng hospital introduction: gastric cancer is one of the most common cancers in the asian population, with recent literature supporting the laparoscopic approach in early disease. however, the minimally invasive approach in advanced disease is still controversial. the outcomes of laparoscopic gastrectomy in the elderly have also not been extensively studied. we aim to evaluate our institution's short-term outcomes of laparoscopic versus open gastrectomy for gastric cancer, with particular focus on advanced disease and elderly patients. methodology: we prospectively collected the data of all patients who underwent gastrectomies for stomach cancer from 2008 to 2015. all patients underwent a partial or total gastrectomy with d2 lymphadenectomy. the decision for an open or laparoscopic approach was made between surgeon and patient. we excluded patients who underwent palliative resection. all patients were followed up for at least one year post-operatively. introduction: it was an eye-opener when the lancet brought attention to global surgery. it is estimated that deaths due to lack of access to surgery are far greater than deaths due to malaria, tuberculosis and hiv/aids put together. there is a greater need to stress its importance in developing countries. medical schools have a responsibility to enlighten students about this necessity and arouse interest in the concept of global surgery. today's students and surgical residents are a great resource to solve this major problem in the future. the first step would be to educate surgical residents. we need to assess the existing awareness of the global surgery problem among surgical residents. we can then plan a program to train the next generation of surgeons.
methods and procedure: all the surgical residents at our institution (victoria hospital, bangalore, india) were enrolled in this study. a total of 212 residents were enrolled. a multiple-choice questionnaire regarding global surgery was designed. the completed questionnaires were analyzed to assess the depth of knowledge about global surgery. there were 20 multiple choice questions (mcq), and an option was provided at the end for feedback and suggestions to improve global surgery in our country. each question carried one mark. a score of more than 10 was considered the cutoff for a pass, and those students were termed 'informed'. results: 91 (42.9%) students cleared the cutoff score of 10 and were termed 'informed'. among this group, 21 (9%) residents scored 20 marks. 121 (57.07%) students did not cross the cutoff and were termed 'non-informed'. among these, 57 (26.8%) students scored 0 marks and did not know anything about the topic. 43 students provided relevant suggestions and opinions to improve the global surgery issue. conclusion: there is a great lacuna in knowledge about global surgery among surgical residents. we need to plan a program integrating global surgery into the syllabus of surgical training. the awareness among residents would arouse interest and participation in the future. introduction: minimally invasive surgical techniques (mists) could have tremendous applications and benefits in resource-poor environments. these include, but are not limited to, short hospital stay, reduced cost of care, and reduced morbidity, especially related to postoperative infections. there is growing interest in mists in most low- and middle-income countries (lmic), but their adoption has remained limited largely due to the high cost of initial set-up, lack of technological backup and limited access to training, among others. one of the most limiting factors is the maintenance of the vision system.
an affordable laparoscopic set-up as an example will therefore go a long way in improving access to mists. methods and procedures: a common zero-degree 10 mm scope is attached to the camera of a low-price smartphone (samsung galaxy j3 2016, samsung®, seoul, south korea). two elastic bands are used to fix the scope right in front of the main camera of the smartphone. the device is covered with sterile transparent drapes (tegaderm®, 3m corporate, st. paul, mn, usa). a light source is connected with a fiber optic cable for endoscopic use. the image can be seen in real time on a common tv screen through an hdmi connection to the smartphone, with a sterile drape. holding the vision system by the scope keeps the camera in place without issues. to operate in full screen, the image was digitally zoomed ×1.6 without losing quality (which is more related to the intensity of the light). as a collateral project, we built a low-cost simulator training box with the same camera to train surgeons, obtaining a high-fidelity and affordable simulation setting. results: we were able to perform the 5 tasks of the fundamentals of laparoscopic surgery curriculum using our vision system with proficiency. in a pig model, we performed a tubal ligation to simulate an appendectomy, and we were able to perform basic laparoscopic suturing. no major issues were encountered, and only small adjustments were required to obtain an acceptable, stable and clear view. conclusion: there is growing interest in minimally invasive surgery among surgeons in lmic, but its adoption has remained limited due to reasons such as the high cost of initial set-up, lack of technological backup and limited access to training, among others. an affordable laparoscopic camera system will therefore go a long way in improving access to mis in such settings. open. there were no deaths or bile duct injuries in our series. two patients undergoing the laparoscopic approach were converted to open (7.1%).
complications, los, and gender were similar between the two groups. the laparoscopic group was significantly younger and had a significantly longer operative duration (table). long-term outcomes were not available for analysis. laparoscopic and open cholecystectomy appear safe in the setting of short-term surgical missions. neither group suffered major complications. both had similar immediate outcomes. los for both groups was surprisingly similar and shorter than in larger series, which may possibly be due to patient selection. given similar immediate outcomes and the large burden of disease, the open approach should be considered. however, this cost may be extracted in terms of greater pain or longer recovery time for patients, which may outweigh the benefits. further data are needed to study pain, long-term outcomes, and return to work. introduction: minimally invasive surgery relies on optimal camera control for the successful execution of operations. one disadvantage of laparoscopic surgery is that camera control is dependent on a surgical assistant's interpretation of visual cues and ability to predict the next field of focus, in addition to verbal commands from the operating physician to provide the optimal view. robot-assisted minimally invasive surgery provides the operating surgeon the advantage of dictating their field of view. this study aims to utilize a video processing algorithm to determine the incidence of improperly centered fields of view in laparoscopic vs. robot-assisted surgery. methods: in this study, 8 recordings of minimally invasive resection of rectal cancer (4 laparoscopic and 4 robot-assisted) were evaluated. recordings were input into matlab® video processing to generate single frames at one-second intervals.
a single reviewer indicated the pixel which best determined where the camera should be centered, based on positioning of instruments, the current action (dissection/hemostasis/traction) depicted in the frame, and previous review of the recordings. pixel locations were recorded for subsequent analysis. centered views were defined as those with the identified centered-position pixel lying within the center cell when frames were split into a uniform 3×3 grid. in addition, the distance of each point to the absolute center of the frame was calculated based on the pixel's x and y positions. results: individual operation data were analyzed for percent of centered pixel locations and pixel distance from the center pixel of the frame. robot-assisted surgery demonstrated a higher percentage of centered views than laparoscopic surgery (61.5±5.1 vs. 49.7±7.8; p<.05). robot-assisted surgery also demonstrated shorter distances to frame center than laparoscopic surgery (123.3±9.8 vs. 144.8±13.9; p<.05). conclusion: robot-assisted surgery aims to resolve conflicts of cooperation that occur between surgeon and assistant in laparoscopic surgery by enabling manual visual control of the operative field by the operating surgeon. this study demonstrates that by eliminating such conflicts, an optimal surgical view is more frequently obtained. surg endosc (2018) background/objective: valveless laparoscopic insufflator systems are marketed for their ability to prevent abdominal collapse and desufflation during laparoscopy. however, community surgeons raised concern for possible entrainment of room air, including oxygen (o2), with these systems. this study seeks to quantify o2 and non-medical air entrainment by a laparoscopic valveless cannula system to understand the risk of intraoperative air embolism. a community-university collaborative was created to design a model and test this hypothesis.
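the centering metric from the video-analysis study above (center cell of a 3×3 grid, plus euclidean distance to the absolute frame center) is simple enough to sketch. this is an illustrative sketch, not the authors' matlab pipeline; the function names and frame dimensions are assumptions:

```python
import math

# Illustrative sketch (assumed helpers, not the study's MATLAB code).
# A reviewer-marked "ideal center" pixel counts as centered when it falls
# in the middle cell of a uniform 3x3 grid over the frame.

def is_centered(px, py, width, height):
    """True if pixel (px, py) lies in the center cell of a 3x3 grid."""
    return (width / 3 <= px < 2 * width / 3) and (height / 3 <= py < 2 * height / 3)

def distance_to_center(px, py, width, height):
    """Euclidean distance from (px, py) to the absolute frame center."""
    return math.hypot(px - width / 2, py - height / 2)

def percent_centered(pixels, width, height):
    """pixels: one (x, y) mark per sampled frame; returns percent centered."""
    hits = sum(is_centered(x, y, width, height) for x, y in pixels)
    return 100.0 * hits / len(pixels)
```

running these two metrics per recording and averaging across cases would yield summary figures of the form reported above (percent centered, mean pixel distance).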
methods: an artificial abdomen was developed and calibrated to the equivalent compliance and intraoperative volume of an average adult abdomen. it was connected to a flow meter, an oxygen concentration sensor, and a commercially available laparoscopic valveless cannula system. background: further advance of near-infrared (nir) imaging capability into greater clinical usefulness will be helped by the development of new targetable agents. to avoid issues related to dose timing and contamination, compounds that become fluorescent only at the site being targeted would be a significant advance. here we build on earlier laboratory work to show step-wise advance of the agent toward clinical trialling. methods: a novel agent (nir-aza) was tested in ex vivo colorectal specimens using two commercially available systems to determine its characteristics in biological tissue. it was then trialled in a large animal cohort (n=4) to determine its performance for both intestinal perfusion assessment and lymph node mapping (both stomach and colon), again using a commercially available optical imaging system and including a direct comparison with indocyanine green. results: the novel agent was easily detectable in biological tissue at the near-infrared wavelengths relevant to commercial instrumentation, both as a local depot tattoo and as a lymphatic tracing agent. porcine model trialling again showed excellent detection and tracking characteristics, both in the circulation and in gastrointestinal tissue, with clear tracking to relevant lymph nodes within minutes evident with the latter. while these studies were non-survival, there was no evidence of local tissue or systemic toxicity in any case. direct qualitative and quantitative comparison between in situ nir-aza and icg at both intestinal and lymph basin regions showed similar levels of fluorescence. conclusion: the trial compound underwent successful testing, indicating proof of earlier projected potential.
this is encouraging for further work to advance to first-in-human testing. introduction: enhanced imaging systems (ies) have been developed to alter laparoscopic camera output to facilitate visualization during laparoscopic surgery using several novel imaging modes: clara mode reduces overexposure and reflections while brightening darker areas of the image; chroma mode intensifies color contrast to more clearly delineate blood vessels; and a combined chroma-clara mode. the ies also allows the surgeon to change imaging modes throughout the procedure as needed to facilitate different portions of the operation. we hypothesized that this technology would enhance visualization of critical structures during laparoscopic cholecystectomy (lc) compared to standard laparoscopic imaging. methods: videos and still images from an ies (karl storz endoscopy) were assessed in 12 patients undergoing lc using the four imaging modalities. three time points were assessed: 1) after adhesions were taken down but before any other dissection; 2) after partial dissection of the hepatocystic triangle; and 3) after establishment of the critical view of safety (cvs). seven surgeons blinded to the imaging modalities ranked each modality from 1 (best) to 4 (worst) for each of 36 time points (3 dissection points for 12 cases). structures identified on achievement of the cvs were also analyzed. all statistics were performed using spss. rank data were analyzed with the friedman and wilcoxon signed rank tests. results: the median ranks of the chroma and chroma-clara imaging modalities (median [iqr] 2 (1-3) vs 2 (1-2), p=0.07) were not significantly different from each other, but both ranked significantly higher than the clara and standard modalities (median [iqr] 4 (3-4) and 3 (2-3), respectively, p<.001). individual surgeon preferences varied; four surgeons preferred chroma-clara, two preferred chroma, one preferred clara, and none preferred the standard mode.
in addition, the cystic artery and cystic duct were visible in all cases after achieving the cvs, but the common bile duct was visible in only 13% of cases. conclusion: enhanced imaging system technology provides modalities that were significantly preferred over standard laparoscopic imaging on retrospective review of still and video images during lc. enhanced imaging modalities should be evaluated further to assess their impact on outcomes of lc and other laparoscopic procedures. introduction: cholangiocarcinoma is often diagnosed at an unresectable stage. endoscopic stent placement is generally performed to relieve the tumor-induced biliary obstruction. however, stent misplacement and migration, tumor tissue ingrowth and cholangitis are relatively frequent complications. energy-based techniques (radiofrequency ablation and photodynamic therapy) have been proposed as alternatives or in addition to stent placement, showing controversial results. the use of laser sources in ablation of the biliary wall has not been investigated so far. this study aims to evaluate the optimal power and exposure time to achieve a controlled circumferential intraluminal laser ablation of the common bile duct (cbd). methods: through a laparotomy access, the cbd of 4 pigs was exposed and a small choledochotomy was made. confocal endomicroscopy (ce) scanning (cellvizio) was performed through the choledochotomy after injection of 5 ml of sodium fluorescein. a 1.2 mm diameter circumferentially-emitting diode laser probe (940 nm wavelength) was introduced into the cbd. laser ablation was performed at 7 w for 180 s (n=2) or 360 s (n=2). the power setting was predetermined in preliminary ex-vivo tests on porcine liver specimens. local temperature was monitored through a fiber bragg grating embedded in the laser probe. ce scanning was then repeated. the extent of ablation was measured on hematoxylin-eosin and nadh stained slides.
results: the diameter of the probe was too small to enable a single-shot circumferential ablation. there were no full-thickness perforations. by 50 s after turning the laser on, the temperature at the application site reached a plateau with minimal oscillations, and remained at mean values of 61.5±6.7°c during both the 3 and 6 min applications. histology revealed that the mucosal ablation, at the contact areas, induced consistent cellular necrosis (nadh-). ce scanning provided real-time images with a specific appearance of the post-ablation mucosa, including an alteration of the normal glandular structure and a general lack of enhanced imaging. in this experimental trial, the local application of a circumferential laser source induced a precise and safe mucosal ablation with a long-standing increase in temperature in the cbd. however, an adapted probe, better fitting the diameter of the cbd, is needed to enable a single-shot circumferential treatment. goutaro katsuno, md, phd1, yasuhiko nakata, md, phd1, nobuyuki kubota, md, phd1, teruo kaiga, md, phd1, takao mamiya, md1, masahiro yan, md1, naoaki shimamoto, md1, shuichi sakamoto, md, phd2; 1 department of gastrointestinal and minimally invasive surgery, mitsuwadai general hospital, 2 introduction: recently, major developments in video imaging have been achieved for performing complete mesocolic excision (cme) or total mesorectal excision (tme). indocyanine green (icg) fluorescence imaging is already contributing greatly to intraoperative decision-making for keeping an intact visceral fascial layer, making suitable mesentery division lines and identifying anastomotic perfusion. the aim of this study is to present our experience with laparoscopic procedures for colorectal cancers using icg fluorescence imaging (lap icg-fi). patients and methods: we usually use near-infrared (nir) laparoscopy (stryker corporation, michigan, usa) for lap icg-fi.
[indocyanine green fluorescent imaging] visualization of lymph flow: icg (2.5 mg/1.0 ml) was injected into the submucosal layer around the tumor at 2 points via 23-gauge localized injection before the lymph node dissection. visualization of blood flow: after complete colorectal mobilization, the mesocolon was completely divided at the planned proximal or distal transection line. indocyanine green was injected intravenously, and the transection location(s) and/or distal rectal stump, if applicable, were re-assessed in fluorescent imaging mode. results: we performed lap icg-fi in 32 colorectal cancer patients. the tumor was located in the rectum in 12 of them, in the sigmoid colon in 10, in the transverse colon in 2, in the descending colon in 2, in the ascending colon in 4, and in the cecum in 2. tnm stage was 0-i in 10 patients, ii in 9, iii in 8, and iv in 5. the median (range) age of the patients was 68 (55-77) years, with a median (range) bmi of 24.8 (20-36.4) kg/m². the lymph flow was visualized intraoperatively in 30 patients (94%). however, a high-quality intraoperative icg lymphangiogram was achieved in 22 patients (73%). in high-quality lymphangiograms, the lymphatic ducts and lymph nodes were clearly visualized in real time, and this proved useful in keeping an intact visceral fascial layer as well as in making a suitable mesentery division line, even in the bmi>30 patients. a high-quality intraoperative icg angiogram was achieved in all patients. anastomotic perfusion was satisfactory in all cases. in 2 patients (6.3%), the use of nir+icg resulted in revision of the proximal colonic transection point before formation of the anastomosis. there were no postoperative anastomotic leakages. no injection-related adverse effects were reported. conclusion: lap icg-fi is a simple, safe and useful tool to help us complete lap cme or tme and check real-time anastomotic tissue perfusion.
introduction: recently, the spread of laparoscopic surgery as a standard treatment and the development of information & communication technology have yielded abundant video data of laparoscopic procedures. these data have been accumulated and we can access them anytime, anywhere. however, the direction of how to use these abundant video data is still unclear. conventionally, surgical procedures have been performed based on the surgeon's subjective decisions and skills, so-called "tacit knowledge". for the purpose of objective analysis of laparoscopic procedures in video data, automatic recognition of surgical tools and understanding of the surgical workflow must be the first critical step. we used a convolutional neural network (cnn), the current trend in machine learning and computer vision tasks. methods: using the video database of laparoscopic sigmoid colectomy at our institute, we annotated tools and phases in every frame of the operative videos. for tool detection, we annotated bounding boxes for both left and right tools in the videos. furthermore, phase annotation was performed by watching the videos in consultation with laparoscopic surgeons. the laparoscopic sigmoid colectomy operation passes through 10 phases: 1-placement of ports and preparation, 2-dissection of retrorectal space, 3-medial approach to ima, 4-isolation and division of ima, 5-medial-to-lateral retromesenteric dissection, 6-lateral mobilization of left colon, 7-rectosigmoid mobilization, 8-division of mesorectum, 9-rectosigmoid resection and anastomosis, 10-finishing. we used a cnn architecture to perform surgical tool detection and workflow recognition. results: in total, we labeled 8 tools used in the procedures of laparoscopic sigmoid colectomy and successfully developed a tool detection system using cnn. as for surgical workflow, the average times of phases 1-10 were 11.3, 9.9, 8.7, 5.9, 11.5, 10.2, 8.7, 11.6, 17.8, and 2.7 min, respectively.
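per-phase average times like those above fall out directly from per-frame phase annotations. the sketch below is illustrative only (the data layout and function name are assumptions, not the authors' cnn pipeline): given one phase label per sampled frame, it recovers each phase's duration in minutes:

```python
# Illustrative sketch (assumed data layout, not the study's code).
# frame_labels holds one phase id (1-10) per sampled frame; with frames
# sampled at `fps` per second, a phase's duration is its frame count
# divided by (fps * 60) minutes.

def phase_durations_min(frame_labels, fps=1.0):
    """Return {phase_id: duration_in_minutes} from per-frame annotations."""
    counts = {}
    for label in frame_labels:
        counts[label] = counts.get(label, 0) + 1
    return {phase: n / (fps * 60.0) for phase, n in sorted(counts.items())}
```

averaging these per-video durations across the annotated case library would give the per-phase averages reported above; the cnn's job is then to predict the per-frame labels automatically.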
a workflow recognition system using cnn was also successfully developed, although we needed to extract pure operating scenes in advance for efficient recognition outcomes. we have developed tool detection and phase recognition systems using cnn. we need more datasets to improve detection ability for future clinical use. introduction: surgical environments require special aseptic conditions for direct interaction with the preoperative images and surgical equipment, which hampers the use of traditional input devices. we presented the feasibility of using a natural user interface (nui) for gesture control combined with voice control to directly interact in a more intuitive and sterile manner with the preoperative images and the integrated operating room (or) functionalities during laparoscopic surgery. in this study, the efficiency and face validity of using this nui for medical image navigation and remote control during the performance of a set of basic tasks in the or were assessed. methods and procedures: twenty experienced laparoscopic surgeons participated in this study. they performed 25 basic tasks in the or focused on interaction with a medical image viewer (osirix; pixmeo) and with the functionalities of the integrated or (or1; karl storz). these tasks were carried out by means of traditional manual interaction, using a computer keyboard, mouse and touch screen, and using a gesture control sensor (myo armband) in combination with voice commands. this nui is controlled by the tedcube system (tedcas medical systems). the time required to complete the tasks using each interaction method was recorded. at the end of the tasks, participants completed a questionnaire for face validation and usability assessment. results: use of the nui required significantly less time than conventional manual control to display preoperative studies and information for surgical support.
however, the interaction with the medical image viewer was significantly faster using the traditional input devices. participants evaluated the nui as an intuitive, simple and versatile tool that improves sterility during surgical activity. seventy-five percent of the participants would choose the gesture control system as a method of interaction with the patient's preoperative information during surgery. conclusions: the presented gesture control system allows surgeons to directly interact with preoperative imaging studies and the functionalities of an integrated or during surgery while maintaining aseptic conditions. for traditional manual interaction, it is necessary to take into account the possible reaction time and displacement time of the technician executing the surgeon's requests. a more personalized medical image viewer, with higher integration with the capabilities of the presented gesture control system, is required. emma k gibson, bs, jacqueline j blank, md, timothy j ridolfi, introduction: following a generous left hemicolectomy, an anastomosis between the transverse colon and rectum may be required. extensive mobilization and retroileal routing are sometimes necessary to create a tension-free anastomosis. retroileal routing is a technique in which a window is created in the ileocolic mesentery. the colon is routed through this window, beneath the ileum, prior to entering the pelvis. retroileal routing is uncommon, and there are no data on this technique when performed using a hand-assisted laparoscopic approach. the aim of this study was to review our experience with hand-assisted laparoscopic left-sided colon resections including retroileal routing of the proximal colon to the rectum. methods and procedures: we performed a retrospective review of a single surgeon's experience with hand-assisted laparoscopic left-sided resections over a seven-year period from 2008-2015.
indication for operation, basic demographics, bmi, procedure time, short- and long-term morbidity, and mortality were recorded. results: a total of 340 patients underwent a hand-assisted laparoscopic left-sided resection with a colorectal or coloanal anastomosis. of these, 13 underwent hand-assisted laparoscopic procedures with retroileal routing of the proximal colon. in each case, operations included a midline hand port incision and two 5 mm ports in the lower abdomen. the indications for operation were diverticular disease and neoplasm in nine and four patients, respectively. procedures took an average of 188.6 (128-221) minutes to complete. postoperative morbidity included intubation for co2 retention in one patient and an rll effusion in another patient. there were no anastomotic leaks, and there were no 30-day or 90-day mortalities. conclusion: retroileal routing of the colon following left hemicolectomy occurs infrequently. a hand-assisted laparoscopic approach appears to be safe and efficient in these technically challenging cases. objective: approximation of the diaphragmatic crural pillars is a key step in hiatal hernia repair. the dogma of successful hernia repair requires tension-free approximation of tissue. there are no described techniques to measure tension across the crural closure. the aim of this study is to describe a novel technique for measuring the tension exerted on crural sutures and to report initial findings. methods: data were collected at 2 institutions by the same surgeon. after hiatal dissection was complete, the crural defect was measured in both the antero-posterior and transverse dimensions. the crural closure sutures were placed posterior and then lateral to the esophagus. the initial suture was placed posteriorly in a figure-of-eight fashion (#1), with each subsequent stitch placed anteriorly (#2 and #3) or laterally (l1, l2) until adequate hiatal closure was achieved. we measured tension on each suture placed as follows.
conclusions: the autolap system provides improved image stability, improved staff interactions, and enhanced ergonomic comfort for the surgical team. it also offers cost savings from decreased staffing requirements for hospitals that routinely use staff camera holders. the system set-up time of 7-8 min became less variable after 20 cases, representing the learning curve. in addition, our approach identified problems with the system that require improvement by the manufacturer. notably, we identified significant ergonomic problems for human camera holders, which have been previously described and can be addressed by this device. background: gastric leaks continue to be a troubling predicament for physicians and patients alike. they are especially concerning after bariatric surgery. electrolyte abnormalities and dehydration continuously pose a life-threatening problem in these patients. methods: this is an irb-approved retrospective review of our experience with biologic tissue mesh plug closure of gastric leaks. our interventional radiology colleagues percutaneously accessed the perigastric collection with a wire, and a straight catheter was guided through the gastric wall defect and advanced over the wire until it was intraluminal. the surgeon then placed an endoscope down to the level of the gastric defect. the wire was then retrieved with the endoscope, achieving percutaneo-oral wire access. the biologic tissue matrix was then measured, cut to a square, and inverted into a cone-like structure with a flat straight piece at the open end. the cone patch was then secured to the wire with a 0 braided polyglactin suture loop. the wire was then withdrawn back through the gastric defect, pulling the plug and patch into position, and placement was confirmed by endoscopy. results: we attempted closure of a gastric leak arising after bariatric surgery in six patients. five underwent successful deployment, while one had premature disconnection of the plug from the wire and the plug could not be deployed.
the five who had successful deployment had immediate success and within days resumed enteral intake of liquids, with resolution of the leak. two of the six patients additionally underwent covered stent placement to stent a stenotic area at the incisura angularis, from the esophagus to the antrum. the stent was typically removed 1-2 weeks later. there were no complications related to the procedure or the plug. only one patient has undergone repeat endoscopy to evaluate the status of the plug. in that patient, an ulcer at the plug site was visualized one month after the procedure. three months later, endoscopy showed the clean ulcer had shrunk to half its original size. conclusion: this novel minimally invasive technique utilizing ir and endoscopic placement of a biologic mesh plug into gastric leaks after bariatric surgery has been highly successful in treating chronic and subacute gastric leaks. we recommend that these endoscopic techniques be used to close gastric defects prior to operative intervention. introduction: laparoscopic surgery has spread worldwide and become a standard procedure in many abdominal surgical fields. the incidence of postoperative adhesion, a typical postoperative complication, is considered low compared with that after laparotomy, but once complications such as adhesion-induced intestinal obstruction and chronic abdominal pain develop, the benefit of the low invasiveness of laparoscopic surgery may decrease markedly. while we have previously used a sheet-type absorbable barrier to prevent adhesion, it is often technically demanding to apply in the abdominal cavity. in this study, we used a spray-type absorbable barrier, which is considered simple to apply, as an adhesion-preventing absorbable barrier following laparoscopic surgery.
subjects and methods: a spray-type absorbable barrier for prevention of adhesion (ad spray type l®) was applied to the dissected surface, port region, and beneath the small incised wound in 5 patients who underwent laparoscopic surgery of the large intestine after february 2017. the nozzle is long (334 mm in length) and the angle of the tip is adjustable to some extent, so that the spray could be applied easily to the target region, even in areas in which it would be difficult to secure a working space, by rotating the shaft and finely adjusting the angle of the tip. in order for the barrier to remain in the target region, the preparation must remain viscous after application. discussion: approaches for the insertion and affixing of a conventional sheet-type absorbable barrier for the prevention of adhesion have been reported previously by various researchers. the adhesion-preventing absorbable barrier used in this study was a spray type with a long nozzle, which may have been useful because it made laparoscopic application easy. however, its application requires some experience and preparation time compared with the sheet type, which could be disadvantageous. further accumulation of cases, including evaluation of adhesion prevention after use of the adhesion-preventing absorbable barrier, may be necessary. christopher g yheulon, md, priya rajdev, md, s. scott davis, md; introduction: evidence has demonstrated that biosynthetic glue for laparoscopic inguinal hernia repair results in decreased pain. however, the two glue subtypes (biologic: fibrin-based; synthetic: cyanoacrylate-based) have never been compared. this study aims to assess the outcomes of those subtypes. methods and procedures: a systematic review of the medline database was undertaken. randomized trials assessing the outcomes of laparoscopic inguinal hernia repair with penetrating and glue fixation methods were considered for inclusion and data analysis.
thirteen trials involving 1633 patients were identified, with eight trials utilizing fibrin and five trials utilizing cyanoacrylate. results: there were no differences in recurrence or wound infection between the glue subtypes when compared individually to penetrating fixation alone or indirectly to each other. there was a significant reduction in urinary retention with fibrin glue when compared to penetrating fixation (or 0.31, 95% ci 0.12-0.81). no studies utilizing cyanoacrylate analyzed urinary retention as an outcome. there were non-significant trends toward reduction of hematoma and seroma for both glue subtypes when compared to penetrating fixation (or 0.71, 95% ci 0.50-1.01). conclusions: glue fixation in laparoscopic inguinal hernia repair reduces the incidence of urinary retention and may reduce the rate of hematoma or seroma formation. as there are no differences in outcomes when comparing fibrin and cyanoacrylate glue, surgeons should choose the glue that is available at the lowest cost at their respective institution. however, improvement of the optical system is necessary to further utilize this advantage. we are developing an optical lens system covering the range from macroscopic to microscopic. methods: we developed a handheld prototype created by combining the objective lens system of an optical microscope with a telescope lens. a feasibility study using a porcine model was conducted. macroscopic observation was done at a distance, followed by microscopic observation in contact with tissue. first, we observed the operative field macroscopically. we then observed the serosa of the small intestine microscopically, and the effects of blood flow occlusion were studied. results: (fig. 1 and fig. 2) the same visual field as ordinary laparoscopy was achieved during macroscopic observation, while microscopic observation made it possible to observe the complex peristaltic movements of the intestine. 
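As a back-of-the-envelope check on the meta-analysis figures above, an approximate two-sided p-value can be recovered from each reported odds ratio and its 95% confidence interval using the normal approximation on the log-odds scale. This is an illustrative sketch, not the authors' analysis, and the function name is my own:

```python
import math

def p_from_or_ci(or_, lo, hi, z_crit=1.96):
    """Back-calculate an approximate two-sided p-value for a pooled
    odds ratio from its reported 95% CI, assuming the CI was built
    on the log-odds scale (the usual meta-analysis convention)."""
    se = (math.log(hi) - math.log(lo)) / (2 * z_crit)  # SE of log(OR)
    z = math.log(or_) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

# values reported in the abstract
p_retention = p_from_or_ci(0.31, 0.12, 0.81)  # urinary retention, fibrin vs penetrating
p_hematoma = p_from_or_ci(0.71, 0.50, 1.01)   # hematoma/seroma trend

print(f"urinary retention: p ~ {p_retention:.3f}")
print(f"hematoma/seroma:   p ~ {p_hematoma:.3f}")
```

The first interval excludes 1 (p ≈ 0.016), consistent with the reported significant reduction; the second crosses 1 (p ≈ 0.056), consistent with the "non-significant trend" wording.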
the minute blood vessels of the visceral peritoneum and larger, deeper blood vessels were also observed. when the mesenteric vessels were occluded, changes in peristaltic movement were seen directly. congestion in blood vessels in the deep layers of the serosa was observed. improvement in peristalsis and congestion was confirmed by restoring blood flow. this system enables direct visual observations not possible with conventional optics. this system can be utilized in both laparoscopic and open surgery. the microscopic visual information obtained by this system may help with intra-operative decision making and serve to facilitate safe and precise surgery. introduction: accurate, real-time visualization is critical for efficient, effective and safe surgery. although optical imaging using near-infrared (nir) fluorescence has been used for visualization of anatomic structures and physiologic functions in open and minimally invasive surgeries, its efficacy and adoption remain suboptimal due to the lack of specificity and sensitivity. herein, we report a novel class of compounds, which are exclusively metabolized in the liver or kidney, are rapidly excreted into the biliary or urinary system, and emit two different nir fluorescence spectra. methods: novel, water-soluble heptamethine cyanines, compound x (biliary) and compound y (urinary), unreactive towards glutathione and the cellular proteome, were synthesized and visualized using a real-time, dual-color nir imaging device. sprague-dawley rats (n=12) and yorkshire pigs (n=9) were used to demonstrate and validate their usefulness, distributed into a control group (icg: rat n=3; irdye800cw: rat n=3), a biliary group (compound x: rat n=3, pig n=3), a urinary group (compound y: rat n=3, pig n=3), and a dual-labeling group (compound x&y: rat n=3, pig n=3). 
each rat and pig received one or two of the compounds at an optimized dose of 0.09 mg/kg intravenously; fluorescence signals and bio-distributions were monitored and recorded over time. the target to background ratio (tbr) was calculated in each target system and compared to assess sensitivity and specificity. results: compound x was rapidly cleared from the liver within 15 min after intravenous injection, while the fluorescence signals in the biliary system lasted up to 1 h in both rats and pigs. compound y showed significant renal excretion up to 4 h and the urinary signals remained up to 2 h. both were highly specific to their target organs, with tbr values of 4.23 (biliary), 6.32 (urinary) and 1.23 (cf. icg) at peak signals. these new compounds have approximately 2-3 times higher quantum yields than icg and 1.75-2.5 times higher specificity to kidney and liver than irdye800cw. one-way anova showed significant differences between the control, biliary, and urinary groups (p<.0001). dual-labeling results also showed a complete separation of these two metabolic systems (p=0.008), and a real-time display of the two systems was clearly visualized with pseudo-colored labeling inside the animal body. conclusion: we report a new generation of organ-specific, real-time fluorescent markers for intraoperative visualization, navigation and potential geo-fencing. these new compounds have significantly higher quantum yields and higher specificity for visualizing kidney and/or liver than any currently available reagents. background: porcine models have been widely accepted for gastrointestinal surgery studies, due to their similarities to human anatomy, histology and physiology. devices such as laparoscopic staplers have been widely used in bariatrics and are currently the cornerstone of bariatric surgery. there are currently few published articles regarding surgical stapler testing in porcine models by means of a survival design. 
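The target to background ratio used above is conventionally the mean fluorescence intensity within the target region of interest divided by the mean intensity of adjacent background. A minimal sketch; the pixel values below are hypothetical, chosen only to reproduce the peak biliary (4.23) and urinary (6.32) ratios reported in the abstract:

```python
def target_to_background_ratio(target_roi, background_roi):
    """TBR = mean intensity in the target ROI divided by the
    mean intensity in the background ROI (arbitrary units)."""
    return (sum(target_roi) / len(target_roi)) / (sum(background_roi) / len(background_roi))

# hypothetical intensities, for illustration only
biliary_target, biliary_bg = [820, 850, 868], [198, 202, 200]
urinary_target, urinary_bg = [1270, 1258, 1264], [201, 199, 200]

print(round(target_to_background_ratio(biliary_target, biliary_bg), 2))  # → 4.23
print(round(target_to_background_ratio(urinary_target, urinary_bg), 2))  # → 6.32
```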
the purpose of this study is to present a new model for stapler testing in porcine models. we present the following study, in which we assess a novel stapler's feasibility and safety, and its compatibility with currently used stapler reloads. this novel stapler, the aeon™ endoscopic linear stapler (lexington medical inc., billerica, ma; pending fda approval), has been previously tested in vitro and in vivo by the lexington medical engineering department in matters of mechanical function, staple line bursting pressure, staple formation and hemostasis. duffy et al. used this instrument for small bowel anastomoses in a two-week survival study in porcine models. methods and procedures: four porcine animal models were used under an iacuc protocol for a 29-day survival study held at the fiu (doral, fl, u.s.a.) research facility. all animals underwent sleeve gastrectomy using the novel stapler handle, combined with endo gia™ (medtronic, mansfield, ma) 4-mm staple reloads in two of the animals and aeon™ 4-mm staple reloads in the remaining two. no reinforcement or oversewing of the staple line was done. these procedures were performed by two bariatric surgeons. animals were monitored perioperatively by the facility staff as per protocol. the animals were euthanized at day 29. post-mortem assessments were done blindly. gross evaluation and comparison of the gastric tubes and their staple lines was done, assessing patency, strictures, and staple line integrity. results: stapler function was equivalent with both reload brands; no technical issues were encountered. 3-5 firings were used per animal. no intraoperative complications related to stapler function ensued. no postoperative complications were encountered. all animals survived the full length of the study (29 days). all sleeves were patent; no strictures or bowel obstructions were present. conclusions: in an animal survival study, a follow-up period of 4 weeks appears to be a good benchmark for stapler testing. 
the use of the novel stapler for gastric resections appears feasible and safe. further studies, such as microscopic examination of the staple lines, might help confirm equivalence, safety and feasibility of these products for the sleeve gastrectomy procedure. jason m samuels, md 1 , peter einersen, md 1 , krzysztof j wikiel, md 2 , heather carmichael 1 , douglas m overby 1 , john t moore 2 , carlton c barnett 2 , thomas n robinson, md 2 , teresa s jones 2 , edward l jones, md 2; 1 university of colorado denver, 2 denver va medical center introduction: the purpose of our study was to evaluate the impact of smoke evacuation devices on operating room fires caused by surgical skin preps. surgical fires are rare but preventable events that cause devastating injuries. alcohol-based surgical skin prep serves as the fuel for a fire ignited by electrosurgical instruments. we hypothesized that increasing air exchanges near the tip of the active electrode would reduce the concentration of alcohol, thus reducing the incidence of surgical fires. methods: a standardized, ex vivo model was created with a 15×15 cm section of clipped, porcine skin. surgical skin preparations tested were 70% isopropyl alcohol with 2% chlorhexidine gluconate (chg-ipa) and 74% isopropyl alcohol with 0.7% iodine povacrylex (iodine-ipa). based upon previous studies, a high-risk situation was replicated with immediate energy activation in the presence of pooled alcohol-based prep. the site was draped to simulate a small surgical procedure with approximately 25 square cm exposed (figure 1). a standard and a smoke-evacuating electrosurgical pencil were activated for 3 s on 30 w coagulation mode in the presence of 21% oxygen. standard wall suction was also tested with the tip held 5 cm from the tip of the electrosurgical pencil. a chi-square test was used to compare differences between groups. 
results: surgical fires were created in 80% (16/20) of the tests with chg-ipa and 95% (19/20; p=0.34) of the tests with iodine-ipa. continuous wall suction did not change the incidence of fire. the smoke evacuation electrosurgical pencil significantly decreased the incidence of fire when compared to the standard pencil and continuous wall suction for both preparations (table 1). with chg-ipa, the smoke evacuation electrosurgical pencil decreased the frequency of fire by 81% (figure 2, p<.001). similarly, when using iodine-ipa, the electrosurgical pencil with integrated smoke evacuation demonstrated a 73% decrease in fires (figure 2, p<.001). conclusion: alcohol-based skin preps fuel surgical fires. the use of a smoke evacuator electrosurgical pencil reduces the occurrence of surgical fires. elimination of alcohol-based preps and the use of smoke evacuation devices decrease the risk of operating room fires. brian bassiri-tehrani, md, netanel alper, md, jeffrey s aronoff, md, yaniv larish, md; lenox hill hospital ureteral stents have historically been used in pelvic surgery when anatomical or clinical considerations warrant urological expertise to aid in identifying the ureters. in the colorectal and gynecologic surgery literature, prophylactic ureteral stents appear to increase the ability to detect ureteral injuries while not having been shown to prevent such injuries. with the increasingly widespread use of laparoscopy and the robotic platform in complex colorectal and pelvic surgery, the utility of stents remains unclear. one of the limiting factors regarding the use of ureteral stents in minimally invasive surgery is the lack of tactile feedback, i.e., the inability of the surgeon to directly palpate the stents. one proposed method to overcome this deficiency has been the use of lighted ureteral stents. increased operating time, increased cost, and the need for specialized equipment are potential drawbacks of lighted stents. 
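The fire-incidence comparison reported earlier (16/20 with chg-ipa vs 19/20 with iodine-ipa, p=0.34) can be reproduced with a chi-square test using Yates' continuity correction, which is standard for small 2×2 tables; the sketch below assumes that correction was applied, since it recovers the reported p-value. Stdlib-only; not the authors' code:

```python
import math

def yates_chi2_p(a, b, c, d):
    """Two-sided p-value for a 2x2 table [[a, b], [c, d]] via the
    chi-square test with Yates' continuity correction (1 df).
    The 1-df chi-square survival function is erfc(sqrt(x/2))."""
    n = a + b + c + d
    num = n * max(abs(a * d - b * c) - n / 2, 0) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    chi2 = num / den
    return math.erfc(math.sqrt(chi2 / 2))

# fires / no-fires: chg-ipa 16/20 vs iodine-ipa 19/20
p = yates_chi2_p(16, 4, 19, 1)
print(f"p = {p:.2f}")  # → p = 0.34, the non-significant difference reported
```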
an alternative to using lighted stents in minimally invasive surgery is to directly inject indocyanine green (icg) into the ureters after cystoscopy-guided placement of ureteral stents. intraoperative visualization of the ureters is achieved by using either the pinpoint endoscopic fluorescence imaging system in laparoscopy, or firefly integrated with the robotic platform. it is hoped that the risk of inadvertent ureteral injuries during colorectal and pelvic operations will be minimized using this technique, due to improved visualization of the ureters throughout the procedure. in this case presentation, we describe a novel use of icg in a patient undergoing laparoscopic surgery for resection of a 6.7×8.0×5.1 cm pelvic mass abutting the bladder, sigmoid colon and left ureter. preoperatively, there was concern that the mass would be intimately adherent to, or even invading, the bilateral ureters based on ct scan findings. after ureteral injection of icg, visualization of both ureters was easily achieved at the time of operation, and the procedure proceeded with careful and safe dissection of the mass with visualization of the ureters at all times. though there is a paucity of studies evaluating the use of icg in the laparoscopic modality, this technique was safe, easy to employ, inexpensive and very useful for visualizing the ureters intraoperatively. indeed, larger studies with appropriate sample sizes would help to further validate this novel use of icg. university of colorado - denver, 2 va eastern colorado healthcare system introduction: operating room fires are "never events" that expose the patient to the risk of devastating complications. our group has previously demonstrated that alcohol-based surgical skin preparations fuel operating room fires. manufacturer guidelines recommend a three-minute delay after application of alcohol-based preps to decrease the risk of prep pooling and surgical fires. 
the purpose of this study was to evaluate the efficacy of the three-minute dry time in reducing the incidence of surgical fires. methods and procedures: a standardized, ex vivo model was used with a 15×15 cm section of clipped, porcine skin. alcohol-based surgical skin preparations tested were 70% isopropyl alcohol (ipa) with 2% chlorhexidine gluconate (chg) and 74% ipa with 0.7% iodine povacrylex (iodine-ipa). nonalcohol-based solutions included 2% chlorhexidine gluconate and 1% povidone-iodine "paint." an electrosurgical "bovie" pencil was activated for 3 seconds on 30 watts coagulation mode in 21% oxygen, both immediately and 3 minutes after skin preparation application, with and without solution pooling. results: no fires occurred with immediate testing of nonalcohol-based preparations (0/40). alcohol-based preps created flames on immediate testing in 83% (33/40) of cases when pooling was present. without pooling, flames occurred in 40% (16/40) of cases on immediate testing. after a 3-minute delay, there was no difference in the incidence of fire when pooling was present (33/40 vs. 33/40, p=1). similarly, there was no difference when pooling was not present (16/40 vs. 14/40, p=1). (table 1) conclusions: alcohol-based surgical skin preparations fuel surgical fires. waiting 3 minutes for drying of the surgical skin prep did not change the incidence of surgical fire (regardless of whether there was pooling of the prep solution). the use of nonalcohol-based skin preps eliminated the risk of fire. introduction: laparoscopic port sites are associated with a significant incidence of long-term hernia formation. in addition, closure with closed-loop suture may lead to increased postoperative pain, thereby limiting patient mobility. the development of novel trocar closure systems could offer a pathway toward quality improvement and warrants investigation. 
we performed a randomized controlled trial comparing a novel anchor-based system (neoclose®) versus standard suture closure. methods: a prospective randomized controlled trial of 70 patients undergoing port site closure following robotic-assisted laparoscopic sleeve gastrectomy or gastric bypass was completed (35 with the neoclose® device and 35 with standard laparoscopic suture closure). each patient had both the camera port and stapling port closed (70 port sites in each group). primary outcome measures included the incidence of hernia (6-week ultrasound), time for port site closure, and depth of needle penetration. secondary outcome measures were analog pain scoring at postoperative day 1, week 1 and week 6. results: physical exam as well as ultrasound evaluation showed no hernias in either group at 6 weeks. when compared to suture closure, the neoclose® device was associated with shorter closure times (20.2±1.2 versus 30.0±2.4 s, p<.001) and less needle depth penetration (3.3±0.1 versus 5.2±0.2 cm, p<0.001). the neoclose® device was associated with decreased pain at 1 week after the operation (analog pain score 0.3±0.1 versus 0.9±0.2, p<.01). no difference in pain scoring was observed on postoperative day 1 or at week 6. conclusions: trocar site closure with the neoclose® device is associated with decreased closure times and needle depth penetration. no difference in the incidence of hernias was identified very early after operation. the neoclose® device led to decreased pain 1 week after trocar closure, which is potentially secondary to decreased tension when compared to closure with closed-loop suture. long-term hernia data (1 year) are pending, with patients scheduled for follow-up physical exams and ultrasounds. federico gheza, md, mario a masrur, md, simone crivellaro, md; uic introduction: robotic instruments provide better ergonomics during suturing compared to standard laparoscopy. 
minimally invasive procedures requiring only a few sutures may benefit from an economically affordable device able to overcome some limitations of laparoscopic suturing. flexdex surgical recently obtained fda approval for human use of its articulated laparoscopic needle driver. the official training provided by the company (available at https://flexdex.com/register-for-training) is a 3-h basic dry lab. the training curriculum as well as the accreditation process are not well structured. no literature is available today on this matter. our goal was to build a dedicated training module to allow a safe and predictable early use in humans. methods and procedures: the training module design and implementation were done in our minimally invasive laboratory. in the preliminary phase, we defined, with a small group of residents and research specialists, a short list of mandatory concepts to cover when demonstrating the instrument. a simple suturing task was then performed by the same group with the new device, laparoscopically and with the robot, available in our lab for training only. a more complex task, based on a dedicated self-designed high-fidelity model of urethral anastomosis, was then proposed, exploring different options (one flexdex only vs two flexdex, surgeon vs assistant holding the camera). lastly, we applied the new device in animals to evaluate the usefulness of including simple tasks or entire procedures in the training curriculum. results: we were able to define a multilevel, adaptable training module including a basic information session, a dry lab with inanimate low- and high-fidelity models, and a pig lab. subjects with different levels of expertise (medical student, resident, fellow, expert and very expert surgeon) were involved to obtain extensive feedback. however, our main focus was to design a training module for laparoscopic and robotic surgeons, to safely introduce the flexdex into their practice. 
the only outcome for this preliminary work was collected through a "post exposure" survey. the expert surgeon who completed the entire training was able to give feedback after his first application of the device in humans as well. conclusions: flexdex is a promising device, available in the united states in approved facilities only. a minimally invasive lab with extensive laparoscopic and robotic training experience is the ideal setting to build a curriculum. a first adaptable, multilevel, original, high-fidelity training module is proposed; it should be validated in further studies and could be implemented for accreditation purposes. surg endosc (2018) 32:s130-s359 augmenting spatial awareness in laparoscopic surgery by immersive holographic mixed reality navigation using hololens objectives: endoscopic minimally invasive surgery provides a limited field of view, thus requiring a high degree of spatial awareness and orientation. because the endoscopic field of view is 2d, a surgeon's spatial awareness is diminished. this study aims to evaluate the efficacy of our novel surgical navigation system of immersive holographic mixed reality (mr), using the head-mounted smart-glass display hololens, to enhance spatial awareness of the operating field in laparoscopic surgery. the authors describe a method of registering and overlaying the preoperative mdct imaging localization of tumors, vessels, and organs onto the real world in the operating theatre through holographic smartglasses in augmented reality (ar). methods: in this study we included 20 laparoscopic gi, hpb, urology, and gynecologic surgeries using this system. we developed a ct-based patient-specific holographic mr surgical navigation application using hololens, which is a head-mounted display with a pair of built-in see-through monitors. 
by reconstructing the patient-specific 3d surface polygons of tumors, vessels, and organs from the patient's mdct, mr anatomy was displayed on the see-through glasses three-dimensionally during actual surgery. the hololens features an inertial measurement unit which includes an accelerometer, gyroscope, and a magnetometer for environment understanding sensors, an energy-efficient depth camera, a photographic video camera, and an ambient light sensor. results: the accurate surgical anatomy of size, position, and depth of the tumors, surrounding organs, and vessels during surgeries could be measured using the built-in dual infrared light sensors. the exact location of surgical devices relative to the patient's anatomy could be traced on the pair of mr smart-glasses by satellite tracking. the gesture-controlled manipulation by surgeons' hands with surgical gloves was useful for intraoperative anatomical reference of tumor and vascular position in the sterilized environment. it allowed the user to manipulate the spatial attributes of the virtual and real anatomies. this system reduced the length of the operation and discussion time. this could support complex procedures with the help of pre- and intra-operative imaging, with better visualization of the surgical anatomy and spatial awareness through visualization of surgical instruments in relation to anatomical landmarks. conclusions: the immersive holographic mr system provides a real-time 3d interactive perspective of the inside of the patient, accurately guiding the surgeon. this helps the spatial awareness of surgeons in the operating field and has illustrative benefits in surgical planning, simulation, education, and navigation. enhancing scene visualization is a feasible strategy for augmenting spatial awareness in laparoscopic surgery. francisco miguel sánchez margallo, phd 1 , juan a. 
sánchez-margallo, phd 1 , andreas skiadopoulos, phd 2 , konstantinos gianikellis, phd 3; 1 minimally invasive surgery centre, cáceres, spain, 2 university of nebraska at omaha, 3 university of extremadura, spain introduction: new handheld devices have been developed in order to address the technical limitations and ergonomic issues present in laparoscopic surgery. the aim of this study is to analyze the surgeon's performance and ergonomics using the radius r2 drive instruments (tubingen scientific medical, germany) during the execution of laparoscopic cutting and suturing tasks. methods and procedures: three experienced laparoscopic surgeons performed both an intracorporeal suturing task and a cutting task on a box trainer. both tasks were repeated three times. a maryland dissector and a pair of scissors were used for the cutting task. for the suturing task, a maryland dissector and needle holder were used. conventional laparoscopic instruments and their equivalent r2 drive instruments were used. the order of the instrument types was randomized. execution time and the surgeon's ergonomics were assessed. for the latter, surface electromyography (trapezius, deltoid and paravertebral muscles) and the nasa-tlx index were analyzed. for the cutting task, the percentage of the area of deviation from the cutting pattern (% of error) was assessed. suturing performance was assessed by means of a task-specific validated checklist. results: surgeons required more time to perform both laparoscopic tasks using the r2 drive instruments. both types of instrument showed a similar percentage of deviation from the outer part of the cutting pattern. however, the deviation from the inner part was significantly higher using the r2 drive instruments (conv: 7.9±1.3% vs r2 drive: 10.8±2.1%; p<.05). needle driving was scored lower using the r2 drive instruments, but the quality of knot tying was similar to conventional instruments. 
the use of the r2 drive increased the muscle activity of the trapezius muscles bilaterally for both laparoscopic tasks. this muscle activity also increased for the left deltoid muscle during the cutting task. surgeons stated that the use of r2 drive instruments leads to a higher mental and physical workload when compared to traditional laparoscopic instruments. conclusions: despite the novel and ergonomic design of the r2 drive laparoscopic instruments, the results of this study suggest that an improvement in surgical performance and physical workload is required prior to their use in an actual surgical setting. further studies should be done to analyze the use of these instruments during other laparoscopic tasks and procedures. we believe that surgeons need a longer and more comprehensive training period with these laparoscopic instruments to reach their full potential in laparoscopic practice. background/objectives: 3d printing has been shown to be a useful tool for preoperative planning in various surgical disciplines. however, there are only a few single case reports in the field of liver surgery. this is because of problematic visualization of anatomy, difficulties in methodology and, most importantly, high costs limiting implementation of 3d printing. the goal of this study is to evaluate the utility of personalized 3d-printed liver models as routinely used tools in planning and guidance of laparoscopic liver resections. materials and methods: contrast-enhanced computed tomography images of 6 consecutive patients who underwent laparoscopic liver resections in a single centre were acquired and processed. proper segmentation algorithms were used to obtain virtual models of anatomical structures, including vessels, tumor, gallbladder and liver parenchyma, in stl (stereolithography) format. after file processing, model parts were subsequently printed with a desktop ultimaker 2+ (ultimaker, netherlands) 3d printer, using polylactic acid filament as printing material. 
all parts were matched together to create a mold, which was later cast with transparent silicone. models were delivered to the surgical teams prior to surgery and were also used in patient education. results: to date, six full-sized, transparent, personalized liver models have been created before laparoscopic liver resections and used as tools for preoperative planning and intraoperative guidance. the usefulness of these models has been evaluated qualitatively with surgeons. operative data were obtained for each patient and will be used for quantitative analysis in further study phases. the cost of one model varied between $100 and $150, and the whole development process took approximately 5 days in every case. conclusions: 3d-printed models allow precise planning in complex cases of minimally invasive liver surgery by providing high-quality visualization of patient-specific anatomy. implementation of this technology might potentially lead to clinical benefits, such as reduction of operative time or improvement of short-term outcomes. having said that, more data are needed to decisively prove these hypotheses. introduction: modern laparoscopic graspers risk inadvertent injury to tissues, and have been shown to produce crush and puncture injuries. in addition, the force transmitted to the tissues by grasper handles can be highly variable, dependent on the orientation and amount of tissue engaged by the grasper. we have developed a novel vacuum-based laparoscopic grasper designed to reduce tissue injury from grasping. the aim of this study is to compare the incidence and severity of tissue trauma caused by vacuum-based graspers versus standard compressive graspers while manipulating tissue. we performed an in vivo surgical porcine study to assess gross and histologic tissue injury after grasping trials. 
grasping trials were divided equally between two adult porcine models; 43 samples of small bowel were grasped with a standard atraumatic laparoscopic grasper (aesculap double-action atraumatic wave grasper) and 85 were grasped with our novel vacuum grasper with varying vacuum head designs (45 for head a, 20 each for heads b and c). following grasping, the porcine model was allowed to dwell for 2 hours prior to harvest. gross injury was graded as follows: 1) no injury, 2) ecchymosis only, 3) serosal injury, 4) seromuscular injury, and 5) perforation. histologic injury was graded as follows: 1) serositis, 2) partial-thickness injury to the muscularis propria (mp), 3) full-thickness mp injury, and 4) full-thickness mp and mucosal injury. the mann-whitney u test was performed to compare both gross and histologic injury scores between the groups. results: on gross assessment, no samples were noted to have injury more severe than ecchymoses following grasping. the vacuum grasper was found to cause more ecchymosis (median=2) than the compressive laparoscopic grasper (med.=1, u=2591, p<.001). on histologic assessment, the compressive grasper caused significantly more severe injury (med.=3) compared to the vacuum grasper (med.=2, u=1355, p=0.008). subgroup analysis showed that heads a (med.=2, u=741.5, p=0.04) and b (med.=2, u=558, p=0.047) caused significantly less injury compared to the compressive grasper. head c (med.=2, u=311.5, p=0.065) also showed less injury but did not reach statistical significance. conclusion: this study demonstrates that our novel laparoscopic vacuum grasper produces less tissue trauma than standard compressive graspers. vacuum-based grasping is a viable alternative for reducing inadvertent tissue injury in laparoscopy. 
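The abstract above compares ordinal injury grades with the mann-whitney u test; in practice one would call scipy.stats.mannwhitneyu, but the statistic itself is easy to sketch with tie-averaged ranks. The grade lists below are hypothetical illustrations, not the study data:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistics for two samples, assigning the
    average rank to tied values. Returns (U1, U2); no p-value,
    this only illustrates how the statistic is built."""
    combined = sorted(x + y)
    ranks = {}  # value -> average rank over its tied block
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    r1 = sum(ranks[v] for v in x)           # rank sum of sample x
    u1 = r1 - len(x) * (len(x) + 1) / 2
    u2 = len(x) * len(y) - u1               # identity: U1 + U2 = n1 * n2
    return u1, u2

# hypothetical histologic injury grades (1-4), illustration only
compressive = [3, 3, 2, 3, 2]
vacuum = [2, 1, 2, 2, 1]
u1, u2 = mann_whitney_u(compressive, vacuum)
print(u1, u2)
```

The smaller U for the vacuum group reflects its lower injury grades, the same direction as the significant difference the study reports.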
minimally invasive surgery centre, cáceres, spain, 2 university of nebraska at omaha, 3 university of extremadura, spain introduction: the aim of this study is to analyze the surgeon's performance, workload and ergonomics using an ergonomically designed handheld robotic needle holder during laparoscopic urethrovesical anastomosis in an animal model, comparing it with the use of a conventional laparoscopic needle holder. methods and procedures: six experienced surgeons performed a urethrovesical anastomosis in a porcine model using a handheld robotic needle holder and a conventional laparoscopic axial-handled needle holder (karl storz gmbh). the robotic instrument (dex®, dextérité surgical) has an ergonomic handle and a flexible tip with unlimited rotation, providing seven degrees of freedom. the order of instrument use was randomized. for each procedure, an expert surgeon evaluated the surgical performance in a blinded fashion using the global operative assessment of laparoscopic skills rating scale. in addition, the quality of the intracorporeal suture was assessed by a validated suturing-specific checklist. the surgeon's posture was recorded and analyzed using the xsens mvn biomech system, based on inertial measurement units. the surgeon's workload was evaluated by means of the nasa task load index, a subjective, multidimensional assessment tool. the patency of each anastomosis was assessed using methylene blue. results: all urethrovesical anastomoses were completed without complications. only one anastomosis with the robotic device failed the patency test. surgeons showed similar surgical skills with both instruments, although they presented greater autonomy with the conventional instruments (p=.048). for suturing performance, the use of the robotic device led to an increase in the number of movements during needle driving and a lower tendency to follow the needle's curvature during the withdrawal maneuver (p=.007). 
the level of workload increased with the robotic device. however, the surgeon's satisfaction with the surgical outcome did not differ between the instruments. the use of the robotic instrument led to a similar posture of the shoulder and wrist and a better posture of the right elbow (p=.026) when compared to the conventional instrument. conclusions: the use of the robotic needle holder obtained similar results for surgical performance and the surgical outcome of the urethrovesical anastomosis when compared to the conventional instrument. we consider that aspects such as the surgeon's autonomy, dexterity in driving the needle, and workload could be improved with comprehensive training with the new device. inertial sensors can be an alternative for real, crowded surgical environments. surgeons acquired a better body posture using the novel robotic needle holder. surg endosc (2018) 32:s130-s359 introduction: the temporal and spatial tissue temperature profiles of electrosurgical devices, such as ultrasonic scissors and a bipolar vessel sealing system, were experimentally measured, and the incidence of postoperative complications after thoracoscopic esophagectomy was assessed according to the electrosurgical devices used. methods and procedures: experiment on thermal spread: sonicision (sonic) was used as the ultrasonic scissors and ligasure (ls) as the bipolar vessel sealing system. each device was activated in order to cut porcine muscle at room temperature. temperatures of both the device blade and the porcine tissue beside the device were measured using a temperature probe. each experiment was performed at least three times. room temperature was 25 degrees. clinical analysis: the 46 patients who underwent thoracoscopic esophagectomy with 3-field lymph node dissection in the prone position were included in the study. the incidence of postoperative complications after thoracoscopic esophagectomy was compared according to the electrosurgical devices. 
bronchoscopy was used for diagnosis of recurrent laryngeal nerve paralysis (rlnp). sonic and ls were employed in 6 and 40 patients, respectively. material: we compared 50 consecutive cases of 3d laparoscopic surgery versus 50 cases of conventional 2d laparoscopic surgery from january to june 2017. all surgical procedures were performed by experienced laparoscopic surgeons using 3d (einsteinvision system) and conventional hd laparoscopic optics. 3d laparoscopic surgery offers the depth perception of the surgical field that is lost with conventional (2d) laparoscopic surgery, and in many series it is reported to be better in terms of surgical performance. outcome measures were operation time, surgical performance, blood loss, complications, and surgeon satisfaction with the procedure. results: cholecystectomy was the most frequent surgery performed, with 19 cases (38%); hernia surgery, 12 cases (24%); fundoplication, 6 cases (12%); appendectomy, 4 cases (8%); left colon excision with colorectal anastomosis, 3 cases (6%); and other, 6 cases (12%), which included ovarian cyst excision, liver biopsy, prostatectomy and pediatric surgery. we compared each 3d procedure with a standard laparoscopy case performed by the same surgeon during the study period. 3d vs 2d surgical procedure outcome measures are shown in table 1. we found better results in operation time and surgical performance, and less blood loss, in favor of three-dimensional laparoscopy (p<.05). conclusion: 3d laparoscopy reduces operation time, related to better performance during the procedure. depth perception facilitates dissection, intracorporeal knotting, mesh placement and colorectal anastomosis. surgeons reported better surgical performance and comfort during 3d laparoscopy; there were no reported side effects such as headache or dizziness. background: social media (some) uniquely allows international collaboration, with immediacy and ease of access and communication. 
in areas where surgical management is contentious, this could be a valuable tool to frame the current state, propose best practices, and possibly guide management on a rapid, cost-effective, global scale. our goal was to determine the ability to use twitter, a some platform, as an alternative surgical research tool. methods: twitter was used to host an online poll on a pre-selected controversial topic with no current consensus guidelines: pathological complete response in rectal cancer. an influential colorectal surgeon published the survey "t3n1 rectal cancer undergoes a complete response" on two separate occasions. both polls were open for a duration of three days. two methodologies were tested to increase exposure and direct the poll toward relevant participants: first, tagging several worldwide experts; then, using the well-established hashtag #colorectalsurgery and publishing during an international surgical conference. the main outcome measures were the feasibility, validity, reproducibility, and methods to further participation of a twitter survey. results: the tweet polls were posted three weeks apart. there was no cost, and the time required for the process was three minutes, demonstrating feasibility. providing three closed options to select from supported validity. the poll's anonymity limited knowledge of the participants' qualifications, but public comments and "retweets" came from surgeons with experience ranging from trainee to department chair. a robust volume of respondents was observed. the 1st post received 169 votes, 14 "likes", 13 "retweets", and 18 comments from a diverse international group (9 countries). all tagged members participated in the forum. the 2nd received 125 votes, 13 "likes", 14 "retweets", and 3 comments. the results were reproducible, with the majority favoring one option on both occasions (69% and 75%, respectively; p=0.4312). treatment recommendations, their rationale, and open questions were identified in the thread. 
conclusions: some can be used as a research tool, with valid, reproducible, and representative survey results. while exposure was comparable across the two methods, tagging specific members guided experts to provide more opinions than using conference and specialty hashtags. this could expand awareness and education, and possibly affect management, in a transparent, cost-effective manner. the anonymous nature of respondents limited the ability to draw conclusions, but interest and opinion leaders for further study can be easily identified. this demonstrates the potential for some to facilitate international collaborative research. background: despite the technological advancement of the minimally invasive approach to pylorus-preserving pancreaticoduodenectomy (pppd), morbidity is still high. among the many complications, postoperative pancreatic fistula (popf) is reported at a high incidence rate, which varies from researcher to researcher, and a fistula risk score (frs) has been developed to predict popf. the aim of this study is to validate the fistula risk score in the minimally invasive approach to pppd and to find other meaningful parameters for prediction of popf. method and materials: from january 2008 to august 2017, laparoscopy-attempted right-sided pancreas resection was performed on 142 patients, including robotic reconstruction, in the division of hepatobiliary and pancreas surgery at yonsei university health system. among them, 43 patients were excluded due to total pancreatectomy (n=15), open conversion (n=12), pancreaticogastrostomy and hybrid manual anastomosis (n=12), or non-measurable drain and missing data (n=4). conclusions: the fistula risk score is a significant prediction factor for popf, including biochemical leaks. in addition to the previously known frs variables, our data showed that bmi is an important predictor of popf with clinical relevance in a minimally invasive approach to pppd. 
laparoscopic hemi-hepatectomy for liver tumor satoru imura, hiroki teraoku, yuji saito, shuichi iwahashi, tetsuya ikemoto, yuji morine, mitsuo shimada; tokushima university introduction: with progress in surgical techniques and devices, laparoscopic liver resection has become a realizable option for patients with liver tumors. major liver resection such as anatomical left or right hemi-hepatectomy has also been introduced in many centers. herein, we evaluate the surgical results of laparoscopic hemi-hepatectomy for liver tumor. patients and methods: until march 2017, 27 consecutive patients who underwent laparoscopic or laparoscope-assisted hemi-hepatectomy (left: 18, right: 9) were reviewed, and surgical data such as operation time, blood loss, and postoperative complications were analyzed retrospectively. results: of the 18 patients who underwent left hemi-hepatectomy, 6 cases were primary liver cancer, 6 cases were metastatic tumor, and 6 cases were benign tumor. pure laparoscopic surgery was performed in 5 cases. the mean blood loss was 203 (30-995) ml, mean operating time was 315 (204-578) minutes, and mean postoperative hospital stay was 18 (8-52) days. the rate of postoperative complications was 5.6% (wound infection; n=1). all right hemi-hepatectomies were performed by the laparoscope-assisted method. of the 9 patients who underwent right hemi-hepatectomy, 3 cases were primary liver cancer, 3 cases were metastatic tumor, and 3 cases were benign tumor. the mean blood loss was 188 (10-600) ml, mean operating time was 382 (290-514) minutes, and mean postoperative hospital stay was 19 (8-48) days. the rate of postoperative complications was 22.2% (biliary stenosis; n=2). the patients with hepatocellular carcinoma were followed up for a median of 68 (29-92) months. recurrence occurred in 4 cases, and none of them had died at the time of follow-up. conclusion: laparoscopic hemi-hepatectomy is a safe and effective procedure for the treatment of benign and malignant liver tumors. 
ibrahim a salama, professor; department of hepatobiliary surgery, national liver institute, menoufia university abstract background: iatrogenic biliary injuries are considered the most serious complications of cholecystectomy. better outcomes of such injuries have been shown in cases managed in a specialized center. objective: evaluation of biliary injury management in a major referral hepatobiliary center. patients and methods: four hundred seventy-two consecutive patients with post-cholecystectomy biliary injuries were managed by a multidisciplinary team (hepatobiliary surgeon, gastroenterologist and radiologist) at a major hepatobiliary center in egypt over a 10-year period, using endoscopy in 232 patients, percutaneous techniques in 42 patients, and surgery in 198 patients. results: endoscopy was a very successful initial treatment in 232 patients (49%) with mild/moderate biliary leakage (68%) and biliary stricture (47%), with increased success through the addition of a percutaneous (rendezvous) technique in 18 patients (3.8%). however, surgery was needed in 198 (42%) for major duct transection, ligation, major leakage and massive stricture. surgery was urgent in 62 patients and elective in 136 patients. hepaticojejunostomy was performed in most cases, with transanastomotic stents. there was one mortality after surgery, due to biliary sepsis; postoperative stricture occurred in 3 cases (1.5%) and was treated with percutaneous dilation and stenting. conclusion: management of biliary injuries was much better with a multidisciplinary care team, escalating from initial minimally invasive techniques to major surgery in major complex injuries, encouraging early referral to a highly specialized hepatobiliary center. introduction: a simple liver cyst is a solitary non-parasitic cystic lesion of the liver. treatment of symptomatic liver cysts varies from simple aspiration to hepatic resection. each treatment has its own merits and associated complications. 
laparoscopic unroofing (fenestration) offers the best balance between efficacy and safety. the results of this method in polycystic liver disease (pcld) are less clear because of a high failure rate. liver resection, though more effective, carries higher risks. treatment of hydatid disease is controversial. materials and method: simple cysts may be asymptomatic and picked up as incidental findings on ultrasound examination for other abdominal complaints. a few cysts cause symptoms of mass effect or complications due to haemorrhage, rupture, or infection. on examination the liver is palpable. compression of the bile duct gives rise to jaundice. the commonest symptoms are pain, early satiety, nausea and vomiting. simple cysts are more common in females after 50 years of age. cysts located anteriorly, inferiorly and laterally are the ideal cases. investigation with ultrasonography is important: it helps to detect the nature of the cyst and to differentiate a simple cyst from polycystic liver disease and from neoplastic liver lesions. in areas where hydatid liver disease is endemic, serological testing is mandatory. ct scanning is important for detailed information: to localise the cyst, to identify the liver tissue around the cyst, the relationship of the cyst with nearby vital structures, the number of cysts, and calcification and carcinomatous changes in its wall. aspiration of cyst fluid, with biological and cytological examination, rules out the presence of infection, biliary communication and malignancy. recently, ca 19-9 estimation has been helpful for differentiating a simple cyst from cystadenoma or carcinoma. for jaundiced patients, ercp is important to locate an intraductal polyp causing biliary obstruction, or a cyst causing compression of the biliary tree. for bleeding into a cyst, mri is helpful. carcinoma of the epithelial lining may occur. result: laparoscopic de-roofing (fenestration), a less radical procedure, ensures adequate drainage of cyst contents into the peritoneal cavity. 
the cyst wall can be removed using a harmonic scalpel so that smoke production and fogging of the lens are minimized. the interior surface is inspected with care to exclude neoplastic growth and biliary communication. the whole operative procedure, duration of postoperative recovery, and hospital stay are much shorter with this procedure. a large chevron incision can be avoided. there was no recurrence over a two-year follow-up period. liver resection and total cystectomy theoretically minimize the recurrence risk but carry a real risk of postoperative complications and death. conclusion: careful case selection and meticulous surgical skill are the two major determinants of outcome. in the llr group, the first port was placed with an alexis® wound retractor (applied medical, usa) and free access® (top corporation, japan) at the abdominal defect made by the previous sc. an additional 2 or 3 trocars were placed as needed. results: all patients in the llr group were treated using the laparoscopic approach. there were no other significant differences in patient background and characteristics. operative duration was similar for these groups. blood loss, complication rate, and hospital stay in the llr group were significantly decreased compared with the olr group. conclusion: in concurrent liver resection and sc, the open approach may require multiple large incisions, but the laparoscopic approach can complete the procedures with a stoma wound and a few port wounds. additionally, use of a platform on the wound for sc enhances safety and efficacy for dissection of intraabdominal adhesions and a clear operative view. primary hepatic lymphoma: the importance of liver biopsy diego t enjuto 1 , carlos ortiz 2 , laura casanova 2 , jose luis castro 1 , pablo sánchez 1 , jaime vázquez 1 , norberto herrera 1 , benjamín tallon 2 , carmen jimenez 3; 1 hospital severo ochoa, 2 hospital san rafael, 3 hospital henares primary hepatic lymphoma (phl) is a very uncommon lymphoproliferative malignancy. 
it accounts for only 0.4% of all extranodal non-hodgkin lymphoma and 0.016% of all cases of non-hodgkin disease. the diagnosis is made when there is only liver involvement or when there is minimal non-liver disease. bone marrow, spleen, or hematologic involvement should be excluded to confirm the diagnosis. we present our experience with two phls that were correctly diagnosed thanks to laparoscopic liver biopsy. a 67-year-old male was admitted because of a 2-month history of right upper quadrant pain and unmeasured weight loss. liver function tests and cholestatic enzymes showed normal values. serologic tests were negative for both hbv (hepatitis b virus) and hcv (hepatitis c virus). ct (computed tomography) scan showed three intrahepatic lesions in segments v, vi, and vii. ct-guided fine needle biopsy did not reach the diagnosis, so a laparoscopic hepatic biopsy was performed. the final diagnosis was burkitt-like lymphoma. chemotherapy with the r-chop (rituximab, cyclophosphamide, adriamycin, vincristine, and prednisone) regimen was started and completed after 6 cycles. it is currently 2 years since the patient was diagnosed, and there are no clinical or radiological signs of recurrence. a 54-year-old male complained of diarrhoea and abdominal pain. chronic hbv infection with no viral load was detected. ultrasound showed heterogeneity of the whole left hepatic lobe, and an mri was performed. a ten by seven centimeter lesion occupying the left hepatic lobe, enhancing in the arterial phase, was seen, suggesting adenoma. laparoscopic hepatic biopsy was completed to reach a definitive diagnosis. non-hodgkin lymphoma, follicular type, has just been confirmed by histology and immunohistochemistry. chemotherapy with r-chop should be started in the following weeks. phl diagnosis is hard to achieve. fine needle biopsies are frequently negative because of the large area of necrosis. surgical biopsies are sometimes indispensable to obtain enough tissue to reach the diagnosis. 
phls are sometimes misdiagnosed as hepatocellular carcinoma because of their relation to hcv, leading to a major hepatic resection. that is the reason why we consider that all diagnostic measures should be undertaken to rule out a different type of tumor. surgical resection is normally not needed in phls, as they are chemosensitive lesions. surgical options usually add unnecessary morbidity and mortality for these patients. standard chemotherapy treatment for phl consists of the r-chop combination. pancreatic neoplasm enucleation - when is it safe? case report and review of the literature elaine jayne buckley 1 , k molik 2 , j mellinger 1; 1 siu-som, 2 hshs pediatric surgery introduction: solid pseudopapillary tumors are rare neoplasms accounting for 2-3% of pancreatic malignancies, with a low risk of recurrence and metastasis. pancreatic malignancies are less common in pediatric populations, though small case series have identified that pseudopapillary tumors comprise between 20 and 70% of pediatric pancreatic neoplasms. as these tumors have a low risk of metastasis, the mainstay of treatment has remained surgical excision. several surgical approaches have been described, from extensive resections such as pancreaticoduodenectomy to local enucleation. we present a case of enucleation of a large pseudopapillary tumor from the pancreatic head complicated by pancreatic fistula. given the rarity of this tumor, a literature review was performed to review surgical approaches, to compare complications and long-term outcomes, and to identify specific strategies to decrease the risk of pancreatic fistula. case description: a 13-year-old female presented with 6 months of abdominal pain. computed tomography identified a right upper quadrant mass felt to be consistent with a lipoma. follow-up ct at 6 months suggested the mass was more likely a gastrointestinal stromal tumor (gist), and surgical resection was recommended. 
enucleation of the mass was chosen in view of its well-circumscribed appearance, clear operative tissue planes, and concern for the long-term morbidity of a more extensive resection given the patient's young age. pathology demonstrated an 8.5 cm pseudopapillary tumor with negative margins. her post-operative course was complicated by a grade b pancreatic fistula, managed with nutritional support, external drain maintenance, and endoscopic stenting. the patient achieved healing of the pancreatic fistula after four months. results: our literature review demonstrates no difference in recurrence, mortality or morbidity between types of surgery. pancreatic fistula contributed to the majority of postoperative morbidity in all cases. recommendations for enucleation include small (2-4 cm) tumors with a 2-5 mm margin from the main pancreatic duct. techniques identified to minimize post-operative pancreatic fistula include preoperative imaging of the duct anatomy, preoperative pancreatic stent placement, and intraoperative ultrasound to identify the pancreatic duct. some literature supports preservation of pancreatic parenchyma, particularly in younger patients, to reduce endocrine and exocrine dysfunction, given the low rates of recurrence and metastasis with this rare neoplasm. conclusion: our case demonstrates complications of enucleation of a large pseudopapillary tumor with successful multidisciplinary post-operative management. with the risk reduction strategies identified, we suggest that enucleation may be considered for pseudopapillary tumors in younger patients to preserve pancreatic parenchyma and long-term pancreatic function. introduction: recent advancements in minimally invasive techniques have led to increased effort and interest in laparoscopic pancreatic surgery. laparoscopic distal pancreatectomy is a widely accepted procedure for left-sided pancreatic lesions. 
in other cases, the adoption of laparoscopic pancreaticoduodenectomy has been hindered by the technical complexity of laparoscopic reconstruction. hybrid laparoscopy-assisted pancreaticoduodenectomy (hlapd), in which pancreaticoduodenal resection is performed laparoscopically while reconstruction is completed via a small upper-midline minilaparotomy, combines the efficacy of the open approach with the benefits of the laparoscopic approach. the purpose of this study is to report our experience with hlapd and to define the learning curves. methods: 90 patients with benign and malignant periampullary lesions who underwent hlapd by a single surgeon between july 2007 and may 2017 were retrospectively reviewed. the clinicopathologic variables were prospectively collected and analyzed. the learning curve for hlapd was assessed using cumulative sum (cusum) and risk-adjusted cusum (ra-cusum) methods. results: the most common histopathology was pancreatic ductal adenocarcinoma (n=27, 27.8%), followed by intraductal papillary mucinous neoplasms (n=16, 16.5%), ampulla of vater cancer (n=16, 16.5%), and common bile duct cancer (n=15, 15.5%). the median operation time was 540 min (range, 300-865 min) and the median estimated blood loss was 550 ml. based on the cusum and ra-cusum analyses, the learning curve for hlapd was grouped into four phases: phase i was the initial learning period (cases 1-10), phase ii the technical stabilizing period (cases 11-37), phase iii the second learning period (cases 38-70), and phase iv the second stabilizing period (cases 71-90). there was a statistical difference in surgical indication between phases ii and iii (p=0.002). conclusions: hlapd is a technically feasible and safe procedure in selected patients. this procedure has the benefits of both open and minimally invasive procedures and could be a stepping-stone for the transition from open to purely minimally invasive pancreaticoduodenectomy. 
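the cusum learning-curve analysis above can be sketched as the running sum of each case's operative time minus the overall mean: the curve rises while cases run longer than average (the learning phase) and falls once they run shorter (stabilization). a minimal illustration with synthetic operative times, not the study's data:

```python
# synthetic per-case operative times (minutes) for an illustrative series
times = [620, 650, 600, 580, 560, 540, 530, 520, 500, 490, 480, 470]
mean_t = sum(times) / len(times)

# cusum_i = sum over cases 1..i of (time_j - overall mean)
cusum, running = [], 0.0
for t in times:
    running += t - mean_t
    cusum.append(running)

# the peak of the curve marks where case times cross below the mean,
# i.e. a crude estimate of the end of the learning phase
peak_case = cusum.index(max(cusum)) + 1
print(f"mean = {mean_t:.1f} min, learning phase ends around case {peak_case}")
```

risk-adjusted cusum (ra-cusum) replaces the raw deviation with a per-case term weighted by predicted risk; this sketch shows only the unadjusted form.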
in silico investigation of the background: wilson's disease is a rare autosomal recessive genetic disorder of copper metabolism, characterized by hepatic and neurological disease. the atp7b gene (on chromosome 13), defects in which lead to wilson's disease, is highly expressed in the liver, kidney, and placenta and encodes a transmembrane protein (atp7b) that functions as a copper-dependent p-type atpase. methods: here, the rare codons of the atp7b gene and their location in the structure of the atp7b protein were studied with the rare codon calculator (racc) (http://nihserver.mbi.ucla.edu/racc/), atgme (http://atgme.org/), latcom (http://structure.biol.ucy.ac.cy/latcom.html) and the sherlocc program (http://bcb.med.usherbrooke.ca/sherlocc.php). the racc server identified arg, leu, ile, and pro codons as rare codons. results: results showed that the cyp152a1 gene has 35 single rare codons of arg. additionally, racc detected two rare codons of leu, 13 single rare codons of ile and 28 rare codons of pro. atp7b gene analysis with the minmax and sliding_window algorithms resulted in the identification of 16 and 17 rare codon clusters, respectively, which shows the different features of these algorithms in the detection of rccs. analysis of the 3d model of the atp7b protein shows that the arg816 residue forms hydrogen bonds with glu810 and glu816; with mutation of this residue to ser816, these hydrogen bonds are disrupted, which may interfere with the proper folding of this protein. moreover, the side chain of arg1228 does not form any bonds with other residues, whereas mutation to thr1228 forms a new hydrogen bond with the side chain of arg1228. these additions and deletions of hydrogen bonds affect the folding mechanism of the atp7b protein and interfere with its proper function. his1069 forms a hydrogen bond with his880; this hydrogen bond brings together two regions of the protein and seems to have a critical role in the final folding of the atp7b protein. 
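the underlying idea of the rare-codon scans above (tools such as racc compare a coding sequence against genome-wide codon usage tables) can be sketched as splitting a cds into codons and flagging those on a rare list. the sequence and the rare-codon set below are illustrative assumptions, not atp7b data or racc's actual tables:

```python
# example rare codons for arg/leu/ile/pro -- an assumed illustrative set,
# not racc's real usage-frequency thresholds
RARE_CODONS = {"CGA", "CGG", "CTA", "ATA", "CCC"}

def rare_codon_positions(cds, rare=RARE_CODONS):
    """return 1-based codon positions whose codon is in the rare set."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    return [i + 1 for i, c in enumerate(codons) if c in rare]

# toy 8-codon open reading frame
seq = "ATGCGACCCATTCTAGGGATATAA"
print(rare_codon_positions(seq))
```

clustering of the flagged positions (as in the minmax and sliding-window analyses) would then be a windowed count over this position list.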
conclusions: computational study of diseases such as wilson's disease and the genes involved (atp7b) helps us understand the disease's physiopathology and find new approaches for detection and treatment. pancreatic stump leak and fistula formation are significant causes of morbidity in patients undergoing distal pancreatectomy (dp), with an incidence of 15% to as high as 64% in a large systematic review. we present the case of a 58-year-old female, four months status post distal pancreatectomy and splenectomy for a pseudopapillary neoplasm of the pancreatic tail. the patient presented to our institution with a 7-day history of left upper quadrant pain and general malaise. the differential diagnosis on admission was abdominal wall abscess vs incarcerated incisional hernia. physical exam was positive for severe tenderness to palpation over a 4 cm × 4 cm non-reducible mass in the left upper quadrant with surrounding skin erythema. the patient underwent a diagnostic laparoscopy, and intraoperative findings revealed extensive adhesions to the anterior abdominal wall; a loop of small bowel was found adhered to the previous incision site in the left upper quadrant. upon further dissection we entered a large 10 × 8 cm cavity containing saponified caseous material. the saponified material and thick tan fluid were evacuated into an endocatch bag, and two large-bore jackson-pratt drains were left within the cavity. further examination showed that the small intestine was normal, with no signs of obstruction or ischemia. fluid studies and cultures showed yeast-like organisms and were negative for acid-fast bacilli. we report an unusual presentation of a distal pancreatectomy stump leak in the form of an intra-abdominal saponified fluid collection four months after the primary procedure. 
given the high incidence of pancreatic stump leak and fistula formation after distal pancreatectomy, much effort has been made to identify factors associated with a higher incidence of leaks and their usual and unusual presentations, which are reviewed in this report. initial concerns regarding healthy donor safety and graft integrity, and the need to acquire surgical expertise in both laparoscopic liver surgery and living donor liver transplantation (ldlt), have delayed the development of laparoscopic donor hepatectomy in adult-to-adult ldlt. however, decreased blood loss, less postoperative pain, shorter length of hospital stay, and excellent cosmetic outcome are well-validated advantages of laparoscopic hepatectomy. hence, the safety and feasibility of laparoscopic donor hepatectomy should be further investigated. we present initial experiences with, and the safety of, totally laparoscopic living donor right hepatectomy. in 20 cases of elective living donor right hepatectomy for adult-to-adult ldlt, a totally laparoscopic approach was applied from may 2016 to august 2017. anatomical variation of the portal vein was not considered an exclusion criterion, but all donors had type i portal vein anatomy. bile duct anomalies were preoperatively evaluated with magnetic resonance cholangiopancreatography (mrcp) and were never grounds for exclusion from the totally laparoscopic approach. a conventional 2d rigid 30° laparoscopic system was used in 2 cases, and the remaining 18 cases used a 3d flexible laparoscopic system. in about 40%, hepatic duct anomalies (types 2, 3a, 3b) were identified. the operation time was 6 to 7 hours, and the time for graft removal was within 15 minutes. hepatic duct transection was performed under operative cholangiography via a cystic duct, and the patency of the left hepatic duct was also confirmed by operative cholangiography. 
however, during the postoperative period, bile leakage was identified in only 1 case and resolved after biliary stent insertion by ercp. during the operations there were no transfusions, and inflow control such as the pringle maneuver was not used at all. v5 or v8 was reconstructed in 19 cases, and a large right inferior hepatic vein was prepared for anastomosis in 6 cases. all grafts were removed through a suprapubic transverse incision. most donors were discharged 7 days after hepatectomy. during the short-term follow-up period, no complications were identified in the donors apart from this case. conclusively, totally laparoscopic right donor hepatectomy in elective adult-to-adult ldlt can be attempted after sufficient experience with laparoscopic hepatectomy and ldlt. however, the true benefits of totally laparoscopic living donor right hepatectomy should be fully assessed through further experience from multiple institutes. background: the role of neoadjuvant chemotherapy in the treatment of pancreatic cancer remains widely controversial. studies have evaluated its effect on resectability and survival; however, few have studied the consequences of neoadjuvant therapy for surgical outcomes and complications. methods and procedures: a retrospective analysis was performed utilizing the targeted pancreas module of the national surgical quality improvement project (nsqip) for patients undergoing pancreaticoduodenectomy. neoadjuvant therapy was defined as chemotherapy and/or radiation in the 30 days before surgery. patient demographics, operative characteristics, and 30-day outcomes were compared among patients undergoing neoadjuvant chemotherapy, radiation, chemoradiation, and no neoadjuvant therapy. both univariable and multivariable analyses were completed. results: pancreaticoduodenectomy was completed in 3,114 patients. 
2,635 patients had no neoadjuvant therapy; 207 underwent both chemotherapy and radiation; 256 underwent chemotherapy alone; and 16 underwent radiation alone. there were no differences in demographics or comorbidities. no difference in 30-day mortality was found; however, pancreatic fistula formation was affected by neoadjuvant therapy. neoadjuvant radiation increased fistula formation (or: 2.4, 95% ci: 1.1-5.2), while neoadjuvant chemotherapy (or: 0.5, 95% ci: 0.3-0.99) was protective. conclusion: neoadjuvant therapy significantly impacts surgical outcomes following pancreaticoduodenectomy. given that pancreatic fistula formation can delay post-operative chemotherapy, it may be reasonable to refrain from neoadjuvant radiation therapy for patients with resectable and borderline-resectable disease. the influence of thickest background: the use of stapling devices for distal pancreatectomy remains controversial, due to concerns about the development of postoperative pancreatic fistula (popf). pancreas thickness might be associated with popf, but the suitable stapler thickness for reducing popf also remains inconclusive. methods: we have routinely used the thickest endo gia™ reloads with tri-staple™ (covidien, north haven, ct) for pancreas closure during laparoscopic left-sided pancreatectomy (lp) since 2013. we compared the short-term surgical results of ten consecutive patients who underwent lp using the new stapler (ns) and 20 patients who underwent lp using other types of staplers (os), focusing on popf. results: no patient in the ns group developed clinically relevant (cr)-popf, and two patients (10.0%) in the os group experienced cr-popf; however, there was no significant difference in cr-popf between the two groups. pancreas thickness at the stapling point was not different between the two groups (15.9 mm vs 18.9 mm, p=0.246). in the ns group, 3 patients (30.0%) developed a popf, whereas in the os group, 12 patients (60.0%) did; there was also no significant difference in popf between the two groups. 
conclusion: the gia™ reloads with the thickest tri-staple™ allow effective prevention of cr-popf after distal pancreatectomy. however, there was no advantage over thinner staplers for lp. introduction: single-incision laparoscopic hepatectomy (silh) has been shown to be feasible and safe in experienced hands for selected patients with benign or malignant liver diseases. only small series have been reported, and most of the procedures were minor liver resections. we herein present our experience with silh during a period of 13 months. methods and procedures: thirteen consecutive patients underwent silh performed by two experienced laparoscopic surgeons with straight instruments. patient characteristics and surgical outcomes were analyzed by reviewing the medical charts. results: the patient age was 62.7±9.2 (47-78) years with male predominance (8 patients, 61.5%). six patients (46.2%) had liver cirrhosis proved by pathologic examination. nine procedures (69.2%) were indicated for malignancy. four major hepatectomies (over two segments) and nine minor ones were performed, including seven anatomical resections. the abdominal incisions were para- or trans-umbilical except one, which was along an old operative scar at the lower midline, and most of them (n=12, 92.3%) were within 5 cm in length. inflow control was carried out by either individual hilar dissection or an extraglissonian approach instead of the pringle maneuver. the operations were all accomplished successfully without additional ports or open conversion. the operative time was 436.5±178.4 (163-673) min and the estimated blood loss was 435.0±377.2 (75-1400) ml. five (38.5%) patients encountered complications, and four of them were classified as clavien-dindo grade i. the postoperative length of hospital stay was 6.1±2.2 (4-10) days. there was no mortality. conclusion: silh can be performed safely and efficaciously for selected patients with benign and malignant liver diseases, including cirrhosis.
not only minor but also major liver resections are feasible. this innovative procedure provides low postoperative pain and fast recovery. before adopting this demanding technique, surgeons should be familiar with both single-incision laparoscopic surgery and laparoscopic hepatectomy. better outcomes can be anticipated after the learning curve. background: laparoscopic distal pancreatectomy (ldp) has been replacing the open procedure for benign or malignant diseases of the pancreas. however, it is often difficult to apply ldp for pancreatic ductal adenocarcinoma (pdac) because of its aggressive invasion into adjacent organs or major vessels. objectives: the objective of this study was to report our experience with laparoscopic extended pancreatectomy with en-bloc resection of adjacent organs or major vessels for left-sided pdac. methods: we reviewed data for all consecutive patients undergoing ldp for left-sided pdac at asan medical center (seoul, south korea) between april 2006 and december 2016. patients who underwent laparoscopic extended pancreatectomy with en-bloc resection of adjacent organs or major vessels were included in the analyses. results: of a total of 257 patients, 21 underwent laparoscopic extended pancreatectomy. there were 14 male and 7 female patients with a median age of 64.1 years. resected adjacent organs or vessels were as follows: stomach in 6, duodenum in 1, colon in 4, kidney in 2, superior mesenteric vein in 4, and celiac axis in 4. median operative duration was 280 minutes, and median length of hospital stay was 9 days. pathological reports revealed the following: a median tumor size of 3.5 cm, tumor differentiation (well differentiated in 2, moderately differentiated in 17, and poorly differentiated in 2), t stages (t1 in 1, t3 in 18, and t4 in 2), and n stages (n0 in 10 and n1 in 11). r0 resection was achieved in 6 patients, and most r1 resections involved tangential retroperitoneal margins.
postoperatively, clinically relevant postoperative pancreatic fistula occurred in 2 patients, and there was no 90-day mortality. median overall survival was 19.6 months and the 1-year survival rate was 71.1%. conclusions: although laparoscopic surgery has limitations in treating extensive disease, selected patients may be candidates for laparoscopic extended pancreatectomy with acceptable complication and survival rates. who underwent hepatic resection was included. these patients were divided into llr or olr. demographics, tumor characteristics, recurrence rates and overall survival were compared between the 2 groups. results: 49 patients were included and grouped into llr (n=28) and olr (n=21). the average tumor number was 2±1 for both groups, while the mean tumor size was 4.1 cm and 4.9 cm for the llr and olr groups, respectively. when compared with olr, llr had lower post-operative complication rates (14.3% vs 33.3%, p=0.118) and shorter hospital stay (9 vs 21 days, p=0.103), although the differences were not statistically significant. overall, recurrence-free and disease-free survival were comparable between llr and olr. introduction: single-port surgery has been described since 2009 for cholecystectomy, colectomy, gastrectomy, and other procedures. nevertheless, few cases have been reported in the field of hbp surgery. herein, we report single-port pancreatic surgery developed from our previous experience. we started single-port surgery in 2009; since then we have done more than 850 cases of single-port surgery using a surgical glove port, including cholecystectomy, appendectomy, and colectomy. building on this experience toward pancreatic surgery, 73 cases of single-port staging laparoscopy for potentially resectable and borderline resectable pancreatic cancer and 15 cases of single-port plus one-port distal pancreatectomy (spop-dp) have been done in our institution. single-port staging laparoscopy for pancreatic cancer.
resectability was confirmed in 63 (86%) of 73 patients, while 10 patients had unresectable disease, such as small liver and peritoneal metastases, that could not be detected preoperatively. the length of hospital stay was 5.0±4.8 days and the time to chemotherapy was 33.1±2.8 days. single-port plus one-port distal pancreatectomy (spop-dp): spop-dp starts with a 1.5 cm skin incision on the umbilicus. subsequently, a wound retractor is installed at the umbilical wound. then, a non-powdered surgical glove (5.5 inches) is put on the wound retractor, through which three 5-mm slim trocars and one 12-mm trocar are inserted via the fingertips. a semi-flexible laparoscopic camera is inserted via the middle-finger port. the 12-mm port is used when a laparoscopic us probe, mechanical stapler, endo intestinal clip or retrieval bag is needed. an additional 5-mm port is inserted at the left subcostal region, mainly used for the surgeon's right-hand instrument. the gastric posterior wall is fixed to the abdominal wall by suture instead of manual retraction. pre-compression before transection of the pancreas was done using an endo intestinal clip before firing. discussion: as we have seen over these two decades, surgery has been dramatically changed by laparoscopic and robotic surgery. nevertheless, because of technical difficulty and a relatively high post-operative complication rate, the introduction of reduced-port surgery to hbp surgery has only just started. spop-dp using an endo intestinal clip, glove port and gastric wall hanging method is feasible, but its advantage is not clear so far; a multicenter rct is highly desirable to clarify the benefit of reduced-port surgery for the pancreas. introduction: scoring systems (ss) are an essential pillar of care in acute pancreatitis (ap) management.
we compared six ss (acute physiology and chronic health evaluation (apache-ii), bedside index for severity in ap (bisap), glasgow score, harmless ap score (haps), ranson's score and sequential organ failure assessment (sofa) score) for their utility in predicting severity, intensive care unit (icu) admission and mortality. methods: ap patients treated between july 2009 and september 2016 were studied retrospectively. demographic profile, clinical presentation and discharge outcomes were recorded. predictive accuracy of the six ss was assessed using areas under the receiver operating characteristic curve (auc) with pairwise comparisons. results: 675 patients were treated for ap. twenty-two (3.3%) patients were excluded for insufficient data. 383/653 (58.7%) were male and mean age was 58.7 (20-98) years. the most common aetiology was gallstones (61.9%). mean length of stay was 6.8 (2-92) days. 81 (12.4%) patients had severe ap, 20 (3.1%) required icu admission and 12 (1.8%) died. the table below shows the positive predictive value (ppv), negative predictive value (npv) and auc of the six ss in predicting outcomes. pairwise comparisons revealed that ranson's (p=0.016) and sofa (p=0.024) scores were superior to the other ss in predicting all three outcomes. the auc of sofa was greater than that of ranson's score in predicting severity (p=0.001), but similar in predicting icu admission (p=0.933) and mortality (p=0.150). conclusion: the sofa score is superior to classical ss in predicting severity, icu admission, and mortality in ap. introduction: necrotizing pancreatitis is often a devastating sequela of acute pancreatitis. historically, several approaches have been described with variable outcomes. open necrosectomy is associated with high morbidity (95%) and mortality (25%). endoscopic necrosectomy is often well tolerated but associated with stent migration and multiple procedures.
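the auc-based comparison of scoring systems described above can be sketched as follows. this is a minimal illustration on synthetic data, since the study's patient-level data are not available; `score_a` and `score_b` are hypothetical stand-ins for two admission scores, engineered so that one discriminates severity better than the other.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400

# hypothetical data: binary severity outcome plus two admission scores;
# score_a is built to separate severe from non-severe better than score_b
severe = rng.integers(0, 2, size=n)
score_a = severe * 2.0 + rng.normal(0, 1.0, size=n)
score_b = severe * 0.7 + rng.normal(0, 1.0, size=n)

auc_a = roc_auc_score(severe, score_a)
auc_b = roc_auc_score(severe, score_b)
print(round(auc_a, 3), round(auc_b, 3))
```

formal pairwise auc comparisons as reported in the abstract would additionally need a paired test such as delong's method, which is not part of this sketch.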
video-assisted retroperitoneal debridement is well tolerated but associated with severe bleeding if adjacent blood vessels are injured during the procedure, leading to severe complications. methods: in our series, we perform a step-up approach involving a multidisciplinary group consisting of general surgeons, gastroenterologists, infectious disease physicians, a critical care intensivist, interventional radiologists and nutritional services to formulate a management plan. the necrotic pancreas is initially drained with an ir-guided drain, fluid cultures are sent for microbiology, and appropriate antibiotics are given if deemed necessary. the drain is gradually upsized to a 24 fr drain to form a well-defined tract for surgical debridement; a preoperative ct scan of the abdomen with iv contrast is obtained to assess the location and proximity of the vasculature around the necrotic pancreas. possible ir embolization of the splenic artery prior to surgical debridement is discussed in collaboration with the interventional radiologist. the patient then undergoes video-assisted retroperitoneal pancreatic necrosectomy with a sump drain left in situ at the pancreatic fossa. post-operative management in the surgical icu is led by the critical care intensivist. results: three patients were managed by this multidisciplinary approach with excellent outcomes. one patient underwent preoperative ir embolization followed by surgical debridement; the second patient underwent embolization immediately following debridement; one patient did not require any embolization but had ir on standby to intervene if needed. post-operatively all three patients recovered well. all were tolerating good oral intake and were discharged to rehabilitation facilities. conclusion: our preliminary experience demonstrates that an early multidisciplinary plan involving various subspecialties can result in a pragmatic and successful approach to this potentially catastrophic condition.
introduction: liver resection preserving as much liver parenchyma as possible is called parenchymal sparing hepatectomy (psh). psh has been shown to improve overall survival by increasing the re-resection rate in patients with colorectal liver metastases (crlm) and recurrence. the caudal-cranial perspective in laparoscopy makes the cranial segments (2, 4a, 7, and 8) more difficult to access. the objective of this systematic review is to analyze the feasibility, safety, morbidity, and oncologic outcomes of laparoscopic psh. methods: a systematic review of the literature was performed. medline/pubmed, scopus, and cochrane databases were searched. the search strategy was registered with prospero. a systematic review was conducted on all cases reported; they were categorized by area of resection, and quantitative meta-analysis of operative time, blood loss, length of hospital stay, complications, and r0 resection was performed. results: of the 351 studies screened for relevance, 48 studies were selected. thirty-eight were excluded because interventions or endpoints were noncontributory or reporting was incomplete. only 10 publications remained, reporting data from 579 patients who underwent laparoscopic psh. the highest oxford evidence level was 2b, and selective reporting bias was common due to single-center and noncontrolled reports. among them, 132 (21.5%) resections were in the cranial segments 2 (1.1%), 4a (5.2%), 7 (6%), and 8 (9.1%), which previously would have required laparoscopic hemi-hepatectomies or sectorectomies. the most common tumor type was crlm (58%) and the second most common was hepatocellular carcinoma (16%). feasibility of laparoscopic psh was 93%, the conversion rate was 7%, and complications were seen in 17% of cases. no perioperative mortality was reported. no standardized reporting format for complications was used across studies.
meta-analysis revealed a weighted average operating time of 385 minutes, estimated blood loss of 463 ml, and length of stay of 8 days. r0 resection was achieved in 91% of cases. conclusion: laparoscopic psh of difficult-to-reach liver tumors is feasible with acceptable conversion and complication rates, but relatively long operating times and relatively high blood loss. in future studies, data on long-term survival and tumor-type-specific recurrence should be reported and bias reduced. yangseok koh 1 , eun-kyu park 2 , hee-joon kim 2 , young-hoe hur 1 , chol-kyoon cho 1; 1 chonnam national university hwasun hospital, 2 chonnam national university hospital. purpose: laparoscopic surgery has become a mainstream surgical approach due to its safety and feasibility. even for liver surgery, the laparoscopic approach has become an integral procedure. according to the recent international consensus meeting on laparoscopic liver surgery, laparoscopic left lateral sectionectomy (lls). conclusion: this study showed that laparoscopic lls is safe and feasible, because it involves less blood loss and a shorter hospital stay. for left lateral lesions, laparoscopic lls might be the first option to be considered. keywords: laparoscopy, left lateral sectionectomy. outcome analysis of pure laparoscopic hepatectomy for hcc and cirrhosis by icg immunofluorescence: a propensity score analysis. introduction: in laparoscopic hepatectomy, the surgeon cannot use their hand to palpate the liver lesion and estimate the margin of resection. the icg immunofluorescence technique can show the liver tumour and has the potential to facilitate a thorough assessment during the operation. method: between 2013 and 2016, 182 patients underwent pure laparoscopic liver resection for hcc in our hospital. 162 patients underwent surgery by the conventional laparoscopic approach. 20 patients had laparoscopic hepatectomy with the additional icg immunofluorescence augmented technique.
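the pooled estimates reported in the meta-analysis above (weighted average operating time, blood loss, length of stay) are sample-size-weighted means across studies. a minimal sketch, with invented per-study numbers (the abstract does not report the study-level data):

```python
import numpy as np

# hypothetical per-study means weighted by sample size; the study-level
# numbers below are invented for illustration only
n_patients = np.array([120, 300, 159])          # patients per study
mean_op_time = np.array([410.0, 360.0, 413.0])  # mean operating time, minutes

# weighted mean = sum(n_i * mean_i) / sum(n_i)
pooled_op_time = np.average(mean_op_time, weights=n_patients)
print(round(pooled_op_time, 1))
```

the same weighting applies to blood loss and length of stay; a full meta-analysis would also report between-study heterogeneity, which this sketch omits.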
the surgical outcomes were compared using propensity score analysis in a 1:3 ratio. results: 20 patients had icg immunofluorescence-assisted laparoscopic hepatectomy (group 1). 60 propensity-matched patients undergoing conventional laparoscopic liver resection were selected for comparison (group 2). the median operation time was 200 minutes vs 164 minutes (p=0.679), and the median blood loss was 125 ml vs 100 ml (p=0.928). 3 additional tumours were identified by the icg technique. 3 patients had suspicious lesions picked up by the icg technique but proven to be benign pathology on frozen section examination. the sensitivity of tumour detection in group 1 was 90%. 100% r0 resection was achieved in both group 1 and group 2. hospital stay was 5 days vs 4 days (p=0.824), and post-operative complications occurred in 0 (0%) vs 5 (8.3%) (p=0.424); none of the patients developed icg-related complications. conclusion: in the current study, the new technique showed equally good short-term outcomes when compared with conventional laparoscopic hepatectomy. icg immunofluorescence augmented reality is a promising technique that might facilitate easier identification of tumours during laparoscopic hepatectomy. surg endosc (2018) 32:s130-s359. taking the training wheels off: transitioning from robotic-assisted to total laparoscopic whipple. introduction: there is a substantial learning curve in performing minimally invasive pancreatoduodenectomy (mis-pd) for surgeons who are trained in open pd. the learning curve to transition from robotic-assisted pd (rapd) to total laparoscopic pd (tlpd) is not well established. methods: mis-pds performed between january 2014 and june 2017 by sc as surgeon or co-surgeon were included for analysis. mis-pds were performed using a robotic-assisted technique prior to august 2016, and tlpds were performed subsequently. rapds performed prior to 2014 were excluded to limit the comparison to rapds after the initial learning curve.
demographics, clinical and pathologic outcomes, and operative and post-operative outcomes were compared. results: a total of 28 rapds and 12 tlpds were scheduled during the study period. there was no statistically significant difference in age, body mass index, or prior abdominal surgery. median time from initial clinic consultation to surgery was 35 days for the rapd group versus 15 days in the tlpd group (p=0.005). conversion to laparotomy was required in 4 of 28 patients ( there were no operative complications or mortality. the mean hospital stay was 28±17.8 hours. there was no postoperative jaundice, bile leak, intra-abdominal collection or mortality. conclusion: when surgery is indicated for difficult acute calculous cholecystitis, laparoscopic subtotal cholecystectomy with control of the cystic duct is safe, with excellent outcomes. however, if the critical view of safety cannot be achieved due to obscured anatomy at calot's triangle, conversion to open surgery or cholecystostomy must be performed to prevent bile duct injury. scott revell, md 1 , joshua parreco 1 , rishi rattan, md 2 , alvaro castillo, md 1; 1 u. miami - jfk gme consortium, 2 university of miami, miller school of medicine. introduction: over the last two decades, the increasing incidence of benign liver tumors has led to an expanded need for clinicians to make therapeutic decisions regarding the utilization of open, minimally invasive and ablative techniques. the purpose of this study was to compare outcomes of the management of benign liver disease based on operative approach and pathology. methods: patients aged 18 years or older who underwent liver surgery for benign liver tumors from 2010 to 2014 were identified in the nationwide readmissions database. patients were compared based on liver pathology, resection versus ablation, and an open versus laparoscopic/robotic approach.
the outcomes of interest were in-hospital mortality, prolonged length of stay (los) (>7 days), and readmission within 30 days. univariable analysis was performed for these outcomes, and multivariable logistic regression was performed using the variables with a p-value <0.05 on univariable analysis. results were weighted for national estimates. results: there were 6,173 patients undergoing surgery for benign hepatic tumors in the us during the study period. the most common pathology was benign neoplasm (62.1%) followed by hemangioma (28.9%) and congenital cystic disease (9.1%). resection alone was performed in 72.8%, ablation alone in 21.1%, and resection with ablation in 6.1%. a laparoscopic/robotic approach was used in 10.3% of cases. the overall mortality rate was 0.3%, a prolonged los was found in 14.7%, and readmission within 30 days occurred in 8.1%. an increased risk for mortality was found with hemangioma (or 12.34, p=0.03) and congenital cystic disease (or 11.43, p=0.03). resection with ablation was associated with an increased risk of prolonged los (or 2.22, p<0.01), while a laparoscopic/robotic approach was a protective factor for prolonged los (or 0.39, p<0.01). patients treated with ablation alone were at decreased risk for readmission (or 0.59, p<0.01). omar m ghanem, md 1 , desmond huynh, md 2 , tomasz rogula, md 3; 1 mosaic life care, 2 cedars sinai, 3 introduction: laparoscopic sleeve gastrectomy is the most commonly performed weight loss procedure worldwide. as such, there is great diversity in the techniques utilized. this study aims to identify and categorize the differences in techniques and assess the need for guidelines in this field. case description: surgeons were surveyed on the techniques they employ on a biweekly basis using the international bariatric club facebook group.
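the two-stage analysis used in the liver-tumor study above (univariable screening, then multivariable logistic regression restricted to variables with p<0.05) can be sketched on synthetic data. the variable names, effect sizes, and cohort below are hypothetical, not the study's actual nrd data:

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

# hypothetical binary predictors: one truly associated with prolonged LOS,
# one pure noise
ablation = rng.integers(0, 2, n)
noise_var = rng.integers(0, 2, n)
p_long = 1 / (1 + np.exp(-(-2.0 + 1.0 * ablation)))  # true OR = e^1 ≈ 2.7
prolonged_los = rng.random(n) < p_long

# stage 1: univariable screen, keeping predictors with chi-square p < 0.05
candidates = {"ablation": ablation, "noise_var": noise_var}
kept = []
for name, x in candidates.items():
    table = np.array([[np.sum((x == i) & (prolonged_los == j))
                       for j in (0, 1)] for i in (0, 1)])
    _, p, _, _ = chi2_contingency(table)
    if p < 0.05:
        kept.append(name)

# stage 2: multivariable logistic regression on the screened predictors
X = np.column_stack([candidates[k] for k in kept])
model = LogisticRegression().fit(X, prolonged_los)
odds_ratios = np.exp(model.coef_[0])
print(kept, odds_ratios)
```

note that the study also applied survey weights for national estimates, which this sketch does not attempt.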
the survey included sleeve staple line reinforcement, preoperative work-up, intraoperative hiatal dissection, bougie size, distance from the pylorus to the distal staple line, and intraoperative leak testing. surveys were conducted between may 2017 and july 2017. each survey was active for 2 weeks, after which data were collected. participants were required to select a single answer per question. discussion: when surveyed on staple line reinforcement (n=305), 122 surgeons used no reinforcement, 103 over-sewed, 43 buttressed, 19 clipped as necessary, and 10 over-sewed as necessary. for preoperative work-up (n=188), 125 utilized routine endoscopy, 9 routinely obtained upper gi series, 2 routinely obtained both endoscopy and upper gi series, and 43 employed endoscopy or upper gi series only in patients who were symptomatic. for hiatal dissection (n=168), 14 surgeons dissected the hiatus routinely, 116 dissected only when an obvious hernia was found intraoperatively, 32 dissected only if a hernia was detected on preoperative work-up, and 1 dissected in the setting of gerd symptoms. for sleeve caliber sizing (n=275), a bougie <32 f was used by 1 surgeon, bougie sizes 32f, 34f and 36f were utilized by 86, bougie sizes 38f and 40f were utilized by 171, bougies >40f were used by 4, and gastroscopes (34f) were used by 9. with regard to the distance from the pylorus to where the sleeve staple line was initiated (n=207), 44 participants started <4 cm away from the pylorus, 159 between 4 and 6 cm, and 4 started >6 cm from the pylorus. finally, for the preferred intraoperative leak test during sleeve gastrectomy (n=268), methylene blue was used by 133 surgeons, an air leak test by 50, 4 used both, and 78 opted for none. conclusion: this study characterizes the wide variety of techniques used during sleeve gastrectomy. a great number of variations exist in every parameter surveyed; however, there is little evidence comparing the effectiveness and safety of these variations.
in this setting, further randomized controlled trials are necessary and should be used to construct guidelines to best optimize outcomes in this extremely common and necessary operation. yen-yi juo, md, mph, yas sanaiha, md, yijun chen, md, erik dutson, md; ucla. introduction: bariatric surgeries are commonly performed in accredited centers of excellence, but no consensus exists regarding the optimal readmission destination when complications occur. our study aims to examine the impact of care fragmentation on post-operative outcomes and to evaluate its causes and consequences among patients undergoing 30-day readmission after bariatric surgery. methods: the metabolic and bariatric surgery accreditation and quality improvement program (mbsaqip) 2015 database was used to identify patients who experienced 30-day unplanned readmission following bariatric surgery. non-index readmission was defined as any readmission occurring at a hospital other than the one where the initial surgery was performed. the primary outcome was 30-day mortality after surgery. logistic regressions were used to identify risk factors for non-index readmission and to adjust for confounders in the association between non-index readmission and 30-day mortality. results: a total of 5,276 patients were identified as experiencing 30-day unplanned readmission following bariatric surgery, among whom 359 (6.8%) were non-index readmissions. occurrence of a postoperative complication during the initial hospitalization was the most significant risk factor for non-index readmission (or 1.36, 95% ci 1.06-1.75, p=0.02) in our multivariate logistic regression. the three most common reasons for readmission were similar between the two comparison groups: nausea/vomiting, abdominal pain and anastomotic leakage. a similar proportion of patients underwent reoperation in the two comparison groups (22.7 vs 20.6%, p=0.362).
even after adjusting for the occurrence of complications, being readmitted to a non-index facility was still associated with a 5.2-fold odds of 30-day mortality (95% ci 2.50-10.85, p<0.001). conclusion: non-index readmission significantly increases the risk of 30-day mortality following bariatric surgery. patients were more likely to visit a non-index facility if complications occurred during their initial hospitalization. further patient education is required to reinforce the importance of continuity of care during management of bariatric complications and to guide patients' decision-making in choosing readmission destinations. introduction: sleeve gastrectomy has become the most performed bariatric surgery. removing part of the stomach causes weight loss by restricting food intake and regulating the production of gut hormones, particularly ghrelin. however, prognostic factors for weight loss after sleeve gastrectomy have been difficult to find. the goal of this research was to study the correlation between the volume of resected stomach and weight loss. methods and procedures: the volume of the resected stomach was measured in 217 patients undergoing sleeve gastrectomy. a standard laparoscopic technique was used. calibration was performed tightly around a 28 fr bougie, and stapling started 4-6 cm from the pylorus. the standardized technique for measurement involved insufflation through a 14g catheter with saline solution to a pressure of 18 cm h2o immediately after removal of the specimen. resected stomach volume, gender, age, bmi, height and % total weight loss (%twl) at 6 months and 1 year were prospectively recorded. correlation between variables was analyzed with pearson's test and linear regression models. conclusion: the resected stomach was larger in men than in women, and its size correlated slightly with height. however, the volume of the resected stomach did not seem to have an influence on short-term weight loss.
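the correlation analysis in the gastric-volume study above (pearson's test plus linear regression) can be sketched as follows. the data are simulated to mimic the reported pattern (resected volume tracks height slightly, %twl does not track volume); they are not the study's actual measurements:

```python
import numpy as np
from scipy.stats import pearsonr, linregress

rng = np.random.default_rng(1)
n = 217  # same cohort size as the study, but simulated values

height_cm = rng.normal(168, 9, n)
volume_ml = 4.0 * height_cm + rng.normal(0, 120, n)  # weak height effect
twl_pct = rng.normal(30, 8, n)                       # independent of volume

# pearson correlations: volume vs height, volume vs %TWL
r_height, p_height = pearsonr(volume_ml, height_cm)
r_twl, p_twl = pearsonr(volume_ml, twl_pct)

# simple linear regression of %TWL on resected volume
fit = linregress(volume_ml, twl_pct)  # slope, intercept, rvalue, pvalue
print(round(r_height, 2), round(r_twl, 2), round(fit.pvalue, 2))
```

on data with this structure, the volume–height correlation comes out modest but significant while the volume–%twl correlation hovers near zero, matching the study's conclusion.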
gastric size should not be considered a prognostic factor for weight loss in patients undergoing sleeve gastrectomy. revisional bariatric surgery after initial laparoscopic sleeve gastrectomy: what to choose. salman alsabah, eliana al haddad, ahmad almulla, khaled alenezi, shehab akrouf, waleed buhamid, mohannad alhaddad, saud al-subaie; amiri hospital. introduction: bariatric surgery has been shown to produce the most predictable weight loss results, with laparoscopic sleeve gastrectomy (lsg) being the most performed procedure as of 2014. however, inadequate weight loss may present the need for a revisional procedure. the aim of this study is to compare the efficacy of laparoscopic re-sleeve gastrectomy (lrsg), laparoscopic roux-en-y gastric bypass (lrygb) and mini gastric bypass (mgbp) in attaining successful weight loss following initial lsg. methods: a retrospective analysis was performed on all patients who underwent lsg at amiri and royale hayat hospitals, kuwait, from 2008 to 2017. a list was obtained of those who underwent revisional bariatric surgery after initial lsg, and their demographics were analyzed. introduction: the aim of this study is to identify potential risk factors or early indicators, specifically related to perioperative blood pressure, associated with perioperative hemorrhage in the bariatric population. laparoscopic bariatric surgery in the united states has been steadily increasing over the past several years. between 2011 and 2015, the annual number of cases increased by 24%. although rare, hemorrhagic complications (hc) occur at a rate of 1-5% and can lead to significant morbidity and mortality. by identifying factors which may place a patient at higher risk of hc, surgeons can potentially mitigate those risks. these modifications could reduce morbidity and limit the requirement for transfusions or reoperations.
methods and procedures: a retrospective case-control series was performed to include all patients who underwent either laparoscopic sleeve gastrectomy (sg) or laparoscopic roux-en-y gastric bypass (gb) in 2016 at a single bariatric center of excellence. a total of 8 patients were identified with perioperative hc. each patient was matched 2:1 for procedure, body mass index, and medical comorbidities. peak systolic, diastolic, and mean arterial pressures were compared between groups at the time of admission, intraoperatively, and during the remainder of the initial hospital stay. welch's t-tests were used for comparisons between groups. results: a total of 467 procedures were performed, with 383 de novo sg and 84 de novo gb. revisional bariatric cases were excluded from the study. hc occurred in 8 (1.7%) total patients, 5 sg and 3 gb. four patients required operative treatment for hc: 3 were treated laparoscopically and 1 required laparotomy. the mean diastolic pressure at the time of arrival on the day of surgery was higher in patients who developed hc (p=0.04), and the mean peak diastolic pressure intraoperatively was lower in patients who developed hc (p=0.01). there was no statistical difference in peak systolic or mean arterial pressures throughout the hospital stay. conclusions: bariatric surgical patients with elevated preoperative diastolic blood pressures are at an increased risk of postoperative hc. additionally, decreased peak diastolic blood pressures may be an early indication of an hc in bariatric patients. introduction: bariatric surgery in the adult population is recognized as one of the most effective treatments for obesity and its comorbidities. nonetheless, the safety, efficacy, and substantive outcomes of bariatric surgery in young adults are still not well documented. the aim of our study is to evaluate the safety and efficacy of laparoscopic sleeve gastrectomy (lsg) in young adults (≤29 years old) versus older adults (≥30 years old).
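the group comparison in the hemorrhage study above relies on welch's t-test, which does not assume equal variances and suits the unequal group sizes produced by 2:1 matching. a minimal sketch with invented diastolic pressure values (8 cases vs 16 matched controls; the study's actual measurements are not reported):

```python
from scipy.stats import ttest_ind

# hypothetical admission diastolic pressures (mmHg): 8 hemorrhage cases
# matched 2:1 to 16 controls; all values are invented for illustration
hc_cases = [92, 85, 88, 95, 84, 90, 87, 91]
controls = [76, 80, 75, 79, 78, 77, 81, 74,
            79, 76, 80, 77, 75, 78, 82, 73]

# equal_var=False selects Welch's t-test rather than Student's t-test
t_stat, p_value = ttest_ind(hc_cases, controls, equal_var=False)
print(round(t_stat, 2), p_value < 0.05)
```

with these invented values the case group mean is clearly higher and the test rejects at the 0.05 level, mirroring the direction of the study's admission-pressure finding.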
methods: we retrospectively reviewed all patients who underwent bariatric surgery at our institution from 2010 to. propensity score matching was used in order to balance covariates, matching for common demographics and comorbidities between the younger patient population (≤29 years old) and the control group (≥30 years old). all tests were two-tailed and performed at a significance level of 0.05. statistical software r, version 3.3.1 (2016-06-21), was used for all analyses. results: of 1330 patients, 40.07% (n=533) met our inclusion criteria after matching. there were 12.63% (n=119) patients 29 years old or younger and 43.94% (n=414) patients 30 years old or older (control group). our younger population was predominantly caucasian and female, 70.58% (n=84) and 77.31% (n=92) respectively. the mean age was 24.63±3.49 years with a preoperative body mass index (bmi) of 45.93±7.3 kg/m2 in the younger group, compared to 50.08±3.49 years and a bmi of 44.88±6.16 kg/m2 in the control group. diagnoses of diabetes and hypertension were present in 22.68% (n=27) and 10.08% (n=12) of our younger group, respectively. no statistical significance was found when assessing the percentage of excess bmi loss (%ebmil) at 3 and 6 months follow-up, as shown in table 1 . when comparing the %ebmil at 12 months follow-up, the younger group had 10.39% more ebmil than the control group (p=0.0231). when assessing post-operative complications, we observed no statistically significant difference. conclusions: bariatric surgery is equally effective and safe in the young adult population, demonstrating a significantly better %ebmil at 12 months following bariatric surgery. further prospective studies are needed to elucidate the resolution and behavior of comorbidities in a younger bariatric population.
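the propensity score matching step described above (performed in r in the study) can be sketched in python: fit a logistic regression for group membership on the covariates, then greedily pair each younger patient with the unmatched control whose propensity score is closest. the covariates, coefficients, and cohort below are synthetic, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500

# hypothetical covariates for an LSG cohort: BMI and diabetes status;
# in this fake cohort, younger patients tend to have lower BMI
bmi = rng.normal(46, 7, n)
diabetes = rng.integers(0, 2, n)
p_young = 1 / (1 + np.exp(1.2 + 0.15 * (bmi - 46)))
young = rng.random(n) < p_young

# step 1: propensity score = P(young | covariates)
X = np.column_stack([bmi, diabetes])
ps = LogisticRegression().fit(X, young).predict_proba(X)[:, 1]

# step 2: greedy 1:1 nearest-neighbor matching on the propensity score
treated = np.where(young)[0]
pool = list(np.where(~young)[0])
pairs = []
for t in treated:
    if not pool:
        break
    j = min(pool, key=lambda c: abs(ps[c] - ps[t]))
    pairs.append((t, j))
    pool.remove(j)  # match without replacement

# matching should shrink the baseline BMI gap between the groups
gap_before = abs(bmi[young].mean() - bmi[~young].mean())
t_idx = [t for t, _ in pairs]
c_idx = [c for _, c in pairs]
gap_after = abs(bmi[t_idx].mean() - bmi[c_idx].mean())
print(round(gap_before, 2), round(gap_after, 2))
```

production matching would typically add a caliper on the score distance and check standardized mean differences for all covariates, which this sketch omits.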
minimally invasive conversion of sleeve gastrectomy to roux-en-y gastric bypass for intractable gastroesophageal reflux disease: short term outcome background: surgical management recommendations for intractable gastroesophageal reflux disease (gerd) after sleeve gastrectomy (sg) remain controversial. this case series demonstrates our experience with treatment of post-operative intractable gerd using minimally invasive conversion of sg to roux-en-y gastric bypass (rygb). patients and methods: this is a retrospective review of a prospective data registry (mbsaqip) from jan 2016 through sept 2017. eleven patients, 10 female and 1 male, were evaluated. of the 11 surgeries, 7 were laparoscopic, 3 were assisted with the da vinci xi robot, and 1 was assisted with the da vinci si robot. all patients presented with intractable reflux on high-dose ppi. three had a history of aspiration pneumonia. 1±14.88%, respectively. one was omitted due to pending results. conclusion: several solutions exist for operative management of intractable gerd after sg, including redo sleeve gastrectomy, combined gastrectomy with fundoplication, conversion to gastric bypass, or anti-reflux procedures such as linx. reported series remain small and require further study to evaluate the consistency of results. we found minimally invasive conversion of sg to rygb to be a highly effective and safe option for treatment of intractable gerd. setthasiri pantanakul, chotirot angkurawaranon, ratchamon pinyoteppratarn, poochong timrattana; rajavithi hospital background: obesity is an important health problem affecting more than 500 million people worldwide. esophageal dysmotility is a gastrointestinal pathology associated with obesity; however, its prevalence and characteristics remain unclear. esophageal dysmotilities have a high prevalence among obese patients regardless of gastrointestinal symptoms. objective: to identify the prevalence of esophageal motility disorders in asymptomatic obese patients. 
materials and methods: a prospective study was performed between june 2014 and march 2017. a total of 47 morbidly obese patients who visited the bariatric and metabolic clinic at rajavithi hospital (bangkok, thailand) underwent preoperative evaluation with a high-resolution esophageal manometric test with the manoscan™ eso (smith medical). tracings were retrospectively analyzed and reviewed according to the chicago classification criteria for esophageal motility disorders. results: among 47 asymptomatic obese participants, twenty-five were female. the mean age was 32.94 (range 16-68) years. most of the participants were classified as class iii obesity or over. the mean bmi was 53.83 kg/m². no hiatal hernia was found and the anatomy of the esophagus was normal in all patients. the mean irp was 14.59 mmhg. twenty-one patients (44.68%) demonstrated high irp over the normal limit (>15 mmhg). four patients demonstrated premature contraction (dl<4.5 seconds). hypercontractile esophagus was identified in 2 patients and ineffective motility disorder was found in 5 patients. two patients were diagnosed with distal esophageal spasm (des). two patients were compatible with type 3 achalasia and 19 patients (40.42%) had esophageal outflow obstruction. none of the patients demonstrated incomplete bolus clearance, even with high irp or abnormal motility. conclusion: this study reveals a high prevalence of esophageal dysmotility in asymptomatic thai obese patients. the most common abnormalities were esophageal outflow obstruction and ineffective motility. the chicago classification of esophageal motility disorders may not be suitable for the obese population. sitembile lee, ms 1 , chike okolocha 1 , aliu sanni, md, facs 2 ; 1 philadelphia college of osteopathic medicine ga campus, 2 eastside bariatric and general surgery introduction: roux-en-y gastric bypass (rygb) is the most popular bariatric procedure performed worldwide, accounting for 45% of all bariatric procedures. 
however, in patients with a body mass index (bmi) ≥60 kg/m² (super-super obese) the rygb procedure can be technically challenging. this has led to the adoption of single-stage treatments such as one anastomosis (mini) gastric bypass (oagb/mgb) in super-super obese patients. proponents of the oagb/mgb claim the clinical outcomes are comparable to the rygb. the aim of this study is to compare the outcomes of the two procedures by examining the literature. methods: a systematic review was conducted through pubmed to identify relevant studies from 2001 to 2015 with comparative data on rygb versus oagb/mgb in super-super obese populations. the primary outcome was the percentage excess weight loss (%ewl). other outcomes included operative times, complication rates and length of hospital stay. results were expressed as standard difference in means with standard error. statistical analysis was done using random-effects meta-analysis to compare the mean value of the two groups (comprehensive meta-analysis version 3.3.070 software; biostat inc., englewood, nj). introduction: obesity is becoming more prevalent in patients with inflammatory bowel disease (ibd). the obese body habitus increases the complexity of surgeries that are often needed to treat ibd. some surgeons may delay definitive surgical treatment because of obesity. little data exist on bariatric surgery in obese patients with ibd. methods: we retrospectively identified 17 patients with a known diagnosis of ibd who underwent bariatric surgery from 2006 to 2016. demographics and post-operative outcomes were assessed. results: 17 patients were identified: 8 with ulcerative colitis (uc) and 9 with crohn's disease (cd). of the 8 uc patients, none had surgery for uc and only one was on a biologic. of the 8 uc patients, 2 had adjustable gastric band (agb), 1 had gastric bypass and 5 had sleeve gastrectomy. one patient with agb had it replaced for slippage and subsequently removed for dysphagia. 
uc patients' preoperative bmi average was 43.5. postoperative bmi was 32.4 with excess weight loss (ewl) of 57%. average follow-up was 23 months. of the 9 cd patients, 4 had ileocolic resections and one had total proctocolectomy with end ileostomy. one was on remicade and one on 6mp. of the cd patients, 5 had agb, 1 had gastric bypass and 3 had sleeve gastrectomy. one agb patient had conversion to gastric bypass because of dysphagia and poor weight loss. a second agb patient had band removal because of dysphagia. cd patients' preoperative bmi average was 43.1. postoperative bmi was 37.0 with average ewl of 30%. average follow-up was 37 months. overall, agb patients had 17% ewl, sleeves 51% and gastric bypass 74%. two uc patients had post-operative flares, one immediately post-op and one a month post-operatively. four of the 7 band patients had dysphagia, with one replacement, two removals and one conversion to bypass. there were no leaks, intraabdominal infections, fistulas or wound infections. conclusions: uc patients appear to have higher excess weight loss compared to crohn's patients (ewl 57% vs 30%), but the difference was not statistically significant. agb had poor results in both uc and cd patients. sleeve gastrectomy and gastric bypass result in effective weight loss for obese patients with ibd. gastric bypass in ibd patients is controversial, but may be appropriate in the right clinical setting. introduction: previous studies suggest that modest preoperative weight loss is associated with improved weight loss following bariatric surgery. however, there remains a need to investigate factors which may successfully predict preoperative weight loss among bariatric patients. methods and procedures: this analysis included patients who underwent laparoscopic roux-en-y gastric bypass (rygb), sleeve gastrectomy, or gastric banding at an academic medical center in california. data were measured at patients' consult and preoperative clinical visits. 
preoperative weight loss outcomes were categorized as follows: no weight loss, lost weight, or gained weight. associations between categorical sociodemographic and surgical characteristics and preoperative weight loss outcomes were assessed using the chi-square test of association. associations between continuous measures and preoperative weight loss outcomes were assessed using anova. a sub-group analysis was completed among participants who lost weight prior to bariatric surgery. wilcoxon rank-sum and kruskal-wallis tests were used to evaluate associations between patient characteristics and the number of pounds lost. results: patients (n=2,597) were predominately ages 45-65 (56%), female (80%), white (53%), and privately insured (68%). patient race was significantly associated with weight loss outcomes (p=0.013): whereas 62% of white patients lost weight prior to surgery, only 54% of black patients lost preoperative weight. among privately insured patients, 59% lost weight. in contrast, 64% of patients insured by medi-cal/medicaid lost weight (p=0.049). on average, lower baseline excess body weight was associated with no weight loss. patients who lost preoperative weight (n=1,570) were included in the sub-group analysis. male sex (p<0.001), black race (p<0.001), undergoing laparoscopic rygb (p=0.003), no previous abdominal surgeries (p=0.038), upper-tertile baseline weight (p<0.0001), waist circumference (p<0.0001), percent body fat (p<0.01), bmi (p<0.0001), excess body weight (p<0.0001), and systolic blood pressure (p=0.001) were associated with more pounds lost. conclusions: this study demonstrates various associations between sociodemographic and clinical patient characteristics and preoperative weight loss. given previous literature indicating the positive relationship between preoperative and postoperative weight loss following bariatric surgery, the results of this study suggest an opportunity to improve preoperative weight loss among specific groups. 
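the chi-square test of association used above compares observed cell counts in a contingency table with the counts expected under independence. a minimal sketch with invented counts chosen to mirror the reported proportions (62% vs 54% losing weight); the actual group sizes behind those percentages are not given in the abstract:

```python
def chi_square(table):
    """pearson chi-square statistic for an r x c contingency table,
    given as a list of rows of observed counts"""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    # sum over cells of (observed - expected)^2 / expected,
    # where expected = row_total * col_total / grand_total
    return sum((table[i][j] - row[i] * col[j] / total) ** 2
               / (row[i] * col[j] / total)
               for i in range(len(row)) for j in range(len(col)))

# hypothetical 2x2 table: race (white/black) x lost preoperative weight (yes/no)
obs = [[620, 380],   # white: 62% lost weight
       [108, 92]]    # black: 54% lost weight
stat = chi_square(obs)
```

the p-value would come from the chi-square distribution with (rows-1)(cols-1) degrees of freedom, here 1.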
yen-yi juo, md, mph 1 , usah khrucharoen, md 2 , yijun chen, md 1 , yas sanaiha, md 1 , peyman benharash, md 1 , erik dutson, md 1 ; background: besides the rate and extent of weight loss, little is known regarding factors predicting interval cholecystectomy (ic) following bariatric surgery, which are important considerations in a surgeon's decision-making regarding whether to perform prophylactic cholecystectomy. in addition, no previous studies have quantified the incremental costs associated with ic. we aim to identify risk factors predicting ic following bariatric surgery and to quantify its costs. methods: a retrospective cohort study was performed using the national readmission database 2010-2014. cox proportional hazard analyses were used to identify risk factors for ic. linear regression models were constructed to examine associations between cholecystectomy timing and cumulative hospitalization costs. background: patient-reported outcomes after bariatric surgery are important in understanding the longitudinal effects of surgery. the impact of hospital practices and surgical outcomes on follow-up rates remains unexplored. objective: to assess the effect of hospital-level practices and 30-day complication rates on 1-year follow-up rates of a standardized patient-reported outcomes survey. methods: bariatric surgery program coordinators in a statewide quality improvement collaborative were surveyed in june 2017 about their practices for obtaining patient-reported outcomes data one year after surgery. hospitals were ranked based on their follow-up rates between 2011 and 2015 (accounting for overall performance and improvement). univariate analysis was used to identify hospital practices associated with higher follow-up rates. multivariable regression was used to identify independent associations between 30-day outcomes and follow-up rates after adjusting for patient factors. 
results: overall, follow-up rates improved from 2011 (33.9±14.5%) to 2015 (51.0±13.0%), though there was wide variability between hospitals (21.1% vs 77.3% in 2015). the coordinator survey response rate was 100%. sixty-one percent of all surveyed coordinators perceived that surgeons prioritize high follow-up rates. when asked how long their patients were followed, 78% of coordinators noted their programs provided lifelong follow-up. patient reminders about the 1-year survey were used by 67% of programs, mostly during clinic visits (75%). most programs (83%) had implemented strategies to improve follow-up rates, such as handing out the survey (73%) during clinic visits. follow-up providers included surgeons (86%), nurse practitioners (56%), and/or registered dietitians (47%). patient disinterest (81%), loss to follow-up (44%), survey length (36%), and lack of staff/resources (33%) were the factors most commonly perceived as barriers to high follow-up rates. when compared to programs in the bottom quartile of follow-up rates, those in the top quartile were more likely to hand out the survey to patients during clinic visits (100% vs 44.44%; p=0.0106) and had lower rates of risk-adjusted severe complications (1.79% vs 2.60%; p=0.0481), readmissions (3.96% vs 5.08%; p=0.0157), and reoperations (0.75% vs 1.50%; p=0.0216). conclusions: hospitals vary considerably in their 1-year follow-up rates when seeking patient-reported outcomes data after bariatric surgery. there were also significant differences in program-specific practices for obtaining these data. hospitals with higher 1-year follow-up rates were more likely to physically hand surveys to patients during a clinic visit and had lower 30-day severe complication, readmission, and reoperation rates. improved 1-year patient-reported outcomes follow-up after bariatric surgery may be a proxy for higher quality perioperative care. 
david merkle 1 , kazim mohommed 1 , danielle r rioux 2 , dilendra weerasinghe, md, facs 3 ; 1 nova southeastern university, 2 herbert wertheim college of medicine, 3 bariatric surgery is gaining popularity not only for its weight loss benefits, but also for its metabolic effects. we present a 44-year-old female patient with symptoms of neuroglycopenia occurring 11 years post roux-en-y gastric bypass surgery. during one of her syncopal episodes, her blood sugar was noted to be 21 mg/dl. continuous glucose monitoring demonstrated postprandial hypoglycemia, averaging 4 episodes per day, with a maximum of 6 episodes in one day. upon further evaluation, the lab results for hba1c, chromogranin a, somatostatin, and urinary sulfonylurea levels were all normal, with the c-peptide level within the upper limit of normal. ct scan of the abdomen and pelvis did not show any obvious masses in the pancreas, and since the chromogranin a level was normal, this led to the empiric diagnosis of nesidioblastosis by exclusion. we placed the patient initially on medical management, which included a carbohydrate-restricted diet of 30 g per meal, eating 6-8 small meals per day, and taking 50 mg of acarbose three times per day. overall, symptoms have improved, and she has 1-2 episodes per month, compared with about 4 episodes per day. we will also present data with regard to other invasive treatment options, which are available when medical treatment options have failed, such as gastric bypass reversal versus distal gastrectomy. vertical banded gastroplasties (vbgs) were a common bariatric procedure in the 1980s but have largely fallen out of favor due to unsatisfactory weight loss and a relatively high incidence of long-term complications such as dysphagia and severe gastroesophageal reflux disease (gerd). one of the ways to address these undesirable effects is to convert to a roux-en-y gastric bypass (rygb). 
the aim of this study was to assess the safety and efficacy of vbg-to-rygb conversion. outcomes of vbg revisions performed at an academic center between 2008 and 2017 were reviewed. of the 54 vbg revisions, gastrogastrostomies were created in two patients, two underwent a planned 2-stage conversion, and 50 vbgs were converted to rygbs. patients were operated on an average of 24 years after their initial vbg. presenting symptoms were weight regain (n=30, 55.6%), dysphagia (n=29, 53.7%), or severe gerd (n=23, 42.6%). fourteen patients (26%) had a gastric staple line dehiscence. of the 50 vbg-to-rygb conversions, 39 were laparoscopic, 5 were converted to open, 4 were open, and 2 were robotic-assisted. average operative time and length of hospital stay were 305.4 minutes and 9.2 days, respectively. within the first 3 months post-operatively, twelve (24%) patients required readmission directly related to surgery, while eight (16%) visited the emergency department. eight patients (16%) required at least one unplanned operation due to complication(s) during the entire follow-up: small bowel obstruction (n=3, at 1 week, 12 months, and 14 months), necrosis/leak of the remnant stomach requiring remnant gastrectomy (n=3), tracheostomy for prolonged respiratory failure (n=2), bleeding (n=1), anastomotic leak (n=1), and hemothorax requiring vats (n=1). four patients (8%) had a contained perforation that was medically managed and five (10%) developed a gastrojejunal anastomosis stricture requiring endoscopic intervention. one patient (1.8%) developed pulmonary embolism. there was no mortality directly related to surgery. complete resolution or improvement of gerd/dysphagia was appreciated in all patients in short-term follow-up. patients who presented with weight regain had a mean bmi loss of 13.2±8.2 points over a median follow-up of 8.5 months up to a year after conversion to rygb. 
in summary, reoperative bariatric surgeries after vbg are complex, requiring longer operative times and lengths of stay. our study found a 16% risk of severe complications requiring reoperation, compared to the previously cited 38% for short- and long-term complications. conversion of vbg to rygb provides excellent relief of severe gerd and dysphagia and is a viable option for significant weight reduction. introduction: bariatric surgery is a safe and effective treatment for severe obesity and its comorbidities. however, concomitant splenectomy is sometimes required due to uncontrolled bleeding during the surgery. limited literature exists regarding the effects of concurrent splenectomy on outcomes of bariatric surgery. this study aimed to determine these outcomes. methods: adult patients with obesity who underwent primary, elective laparoscopic roux-en-y gastric bypass (lrygb) or laparoscopic sleeve gastrectomy (lsg) with concomitant splenectomy were identified from the metabolic and bariatric surgery accreditation and quality improvement program (mbsaqip, 2015) and national surgical quality improvement program (nsqip, 2005-2014) datasets. using propensity scores (based on 14 baseline variables), patients who underwent primary bariatric surgery were matched 1:10 to a control group (primary lrygb/lsg without concomitant splenectomy) and thirty-day postoperative outcomes were compared. continuous and categorical variables were reported as medians with interquartile range (iqr) and counts with percentages, respectively. background: several previous studies have suggested a correlation between weight loss and age after bariatric surgery. objective: the aim of our study is to further address age as a preoperative factor in determining the amount of weight loss after bariatric surgery. 
materials and methods: we performed a retrospective analysis of outcomes from a prospectively maintained database of 1,244 obese patients who underwent either sleeve gastrectomy (sg) or roux-en-y gastric bypass surgery (rygb) at our hospital between 2011 and 2015. we analyzed the 3-month, 6-month, and 1-year postoperative percent total body weight loss (%tbwl) of obese patients who underwent bariatric surgery based on their preoperative age. results: the average age of patients included in the study was 45 years, with a range of 21-78 years. an inverse relationship between preoperative age and postoperative weight loss was observed. younger patients achieved a higher %tbwl than older patients at the 3-month, 6-month, and 1-year postoperative follow-up. the average %tbwl for all patients at the 3-month, 6-month, and 1-year postoperative follow-up periods was 15.5%, 23.6%, and 28.9%, respectively. at the 1-year follow-up, for every decade increase in age (above the average age of 45), patients lost 4% less tbwl. conclusion: in our study, younger patients tended to lose a greater %tbwl than older patients after bariatric surgery. results: 51 patients participated in the survey. the median age was 45 years (iqr: 36-51) and 74.5% were female. when asked about the importance of surgery-related factors and about expectations from magnetic surgery compared to conventional laparoscopy, there was no significant evidence of different responses by demographic group. additionally, 90.2% of the population indicated that a surgeon performing magnetic surgery should be more skillful than a surgeon performing conventional laparoscopy. conclusion: this study represents the first report of bariatric patients' perceptions regarding surgery-related factors. 
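the %tbwl figures above, and the %ewl figures used throughout these abstracts, follow standard definitions; a small sketch of the arithmetic, where the ideal-weight convention of bmi 25 for %ewl is an assumption on our part, since the abstracts do not state which reference weight they used:

```python
def pct_tbwl(initial_kg, current_kg):
    """percent total body weight loss: weight lost as a share of starting weight"""
    return 100.0 * (initial_kg - current_kg) / initial_kg

def pct_ewl(initial_kg, current_kg, height_m, ideal_bmi=25.0):
    """percent excess weight loss; excess weight is weight above the
    ideal weight at bmi 25 (a common convention, assumed here)"""
    ideal_kg = ideal_bmi * height_m ** 2
    return 100.0 * (initial_kg - current_kg) / (initial_kg - ideal_kg)

# example: a 130 kg, 1.65 m patient who reaches 100 kg
# ideal weight = 25 * 1.65^2 ≈ 68.1 kg, so excess weight ≈ 61.9 kg
tbwl = pct_tbwl(130, 100)
ewl = pct_ewl(130, 100, 1.65)
```

because excess weight is smaller than total weight, %ewl for the same patient is always larger than %tbwl, which is worth remembering when comparing abstracts that report different measures.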
notably, nearly 80% of the cohort indicated that cosmesis after surgery is an important factor, whereas responses regarding the rest of the factors were as expected. the bariatric population included in this study had a positive perception of magnetic surgery. furthermore, the population perceived that this technique is associated with better outcomes, better cosmetic results, and higher surgeon dexterity. introduction: although much is known regarding the medical outcomes of metabolic surgery, less is known regarding quality-of-life outcomes. we hypothesized that the collection of patient-reported outcomes (pros) could help us understand quality of life in this patient population. we chose to primarily use patient-reported outcomes measurement information system (promis) instruments because of their broad applicability, low cost, and ability to use computer-adapted technology to survey. methods: we implemented the routine collection of pros as part of clinical care in december 2015. patients were offered tablets in clinic and were asked to complete the surveys at most of their visits. we used computer-adapted technology to decrease the length of time needed to survey. we collected the following promis instruments: depression, pain interference, physical function, and satisfaction with social roles. we also collected the gerd-hrql, a general health question, and a current health visual analog scale (vas). we retrospectively reviewed our results from december 2015 through september 2017. results: our response rate was 70% over the last year of collection. in total, 2166 assessments were completed by 1026 patients. the mean scores in our total patient population were as follows: vas 59, gerd-hrql 8, general health 51, depression 52, pain 56, physical function 43, and social roles 46. for promis instruments, the mean for the national population is 50, with a standard deviation of 10. 
for the depression and pain scores a higher score is worse, while for social roles and physical function a higher score indicates better quality of life. conclusions: routine collection of patient-reported outcomes can be implemented in a metabolic surgery clinic. health-related quality of life appears to be decreased in this patient population compared to the general public. further work is ongoing to learn about postoperative trends, as well as differential effects of metabolic procedures. the effect of peri-operative antibiotic drug class on the resolution rate of hypertension after roux-en-y gastric bypass and sleeve gastrectomy. results: in total, 123 rygb and 88 sg were included in our analysis. no significant differences were found between cefazolin and clindamycin regarding hypertension resolution rates after sg. there was a significant difference in the resolution of hypertension after rygb with the use of prophylactic clindamycin or cefazolin. as shown in figure 1, patients who underwent rygb and received clindamycin had a significantly higher rate of hypertension resolution compared to cefazolin. this effect started at 2 weeks post-operatively (52.4% vs 23.5%, respectively; p=0.008) and persisted up to 1 year (57.9% vs 33.3%, respectively; p=0.05). we found no significant differences in patient age, sex, number of pre-operative hypertensive medications, pre-operative bmi, or %bmi change after 1 year to account for the significant effect of antibiotic choice on hypertension resolution. conclusion: this study represents the first clinical report to suggest an impact of the type of antibiotic administered at the time of rygb on co-morbidity resolution, specifically hypertension. future studies will be needed to confirm that the mechanism of action for this novel finding is due to differing modifications of the gastrointestinal microflora based on the specific peri-operative antibiotic administered. 
introduction: laparoscopic adjustable gastric band with plication (lagbp) is a novel bariatric procedure which combines the adjustability of the laparoscopic adjustable gastric band (lagb) with the restrictive nature of the vertical sleeve gastrectomy (vsg). the addition of plication of the stomach to lagb should provide better appetite control, more effective weight loss, and greater weight loss potential. objective: the purpose of the study was to analyze the outcomes of lagbp at 18 months. setting: this is a retrospective analysis from one surgeon at a single private institution. methods: data from all patients who underwent a primary laparoscopic lagbp procedure from december 2011 to june 2016 were retrospectively analyzed. data collected from each patient included age, gender, weight, body mass index (bmi), and excess weight loss (ewl). results: sixty-six patients underwent lagbp. the mean age and bmi were 44.6±12.7 years and 42.1±5.1 kg/m², respectively. all 66 patients were beyond the 18-month postoperative mark. no patient was lost to follow-up. the patients lost an average excess weight loss (ewl) of 49% and 46.8% at 12 months (77.2% follow-up) and 18 months (66.1% follow-up), respectively. the patients also lost a mean bmi of 7.7 kg/m² and 7.6 kg/m² at 12 months and 18 months, respectively. the total number of fills during the study period was 201, and the mean fill volume was 0.6±1 cc. dysphagia was the most common long-term complication. the mortality rate was 0%. conclusions: lagbp is a relatively safe and effective bariatric procedure. in light of recent studies demonstrating poor outcomes following lagb, lagbp may prove to be the future for patients desiring a bariatric procedure without resection of the stomach. the median interval between lrygb and reoperation was 53 months in group a and 26 months in group b. the median percentage of excess weight loss (%ewl) was 61% vs 67%, respectively (p=0.79). 
fourteen patients (70%; 5 in group a) were admitted emergently with acute abdominal pain. ct scan was performed in 8 patients (40%) and showed signs of occlusion in all cases. the most common symptoms were abdominal pain and vomiting. the surgery was performed by laparoscopy in 8 patients (40%) and by laparotomy or conversion in 12 patients (60%). in all cases the internal hernia was reduced and all defects were closed. in only one patient (in group a) was small bowel at the jja resected. there was no mortality, and one patient had pneumonia with acute respiratory distress which was treated medically. conclusions: closure of mesenteric defects at lrygb with tight non-absorbable continuous sutures is recommended because it is associated with a significant reduction in the incidence of internal hernia. introduction: laparoscopic roux-en-y gastric bypass (rygb) is a common and effective form of bariatric weight loss surgery. however, a subset of patients will fail to achieve the expected total body weight loss (tbwl) of greater than 20% after 12 months or will experience significant weight regain despite dietary, psychiatric, and behavioral counseling. although alternative procedural interventions exist for operative revision after suboptimal rygb weight loss, laparoscopic adjustable gastric banding (lagb) provides an option with short operative time, low morbidity, and effective results. we have previously demonstrated that short-term (12-month) and mid-term (24-month) weight loss is achievable with lagb for failed rygb. the objective of this study is to report the long-term 5-year outcomes of lagb after rygb failure. methods and procedures: a retrospective review of prospectively collected data before and after rygb when available, and before and after revision with lagb, was performed. background: saline-filled intragastric balloons have become a common outpatient procedure for the treatment of obesity. 
acute dilation, ischemia and necrosis of the stomach have been described in the medical literature. gastric necrosis from acute gastric dilation is a rare but life-threatening condition which requires timely diagnosis and management. we present a case of partial gastric ischemia with necrosis 72 hours following placement of a saline-filled intragastric balloon. postoperative complaints of bloating, nausea and vomiting are common following placement of saline-filled intragastric balloons and can lead to a delay in diagnosis. early diagnosis and management are essential in avoiding this life-threatening complication. case report: a 59-year-old woman, bmi 33, with a comorbid condition of diabetes mellitus, underwent uncomplicated placement of a saline-filled intragastric balloon for treatment of obesity. 24 hours after placement the patient complained of cramping and bloating. 48 hours following placement the patient developed vomiting and presented to an emergency room for evaluation. she was found to have a blood glucose exceeding 400 and a severely dilated stomach with pneumatosis on ct evaluation. ng tube decompression and icu management of the severe hyperglycemia were initiated. removal of the intragastric balloon was delayed 12-14 hours until an appropriate endoscopic retrieval kit could be obtained. endoscopic retrieval was performed without incident and near-complete necrosis of the gastric mucosa was noted. the antrum was the only area spared. 48 hours after retrieval, laparoscopic evaluation of the stomach revealed full-thickness necrosis of the entire fundus and greater curve. indocyanine green (icg) fluorescent dye was used to assess vascular integrity of the remaining stomach and to define lines of resection. resection of the greater curvature was performed using icg fluorescent dye to ensure that the angle of his was viable and well perfused. the patient had a full recovery and subtotal gastrectomy was avoided. 
conclusions: spontaneous gastric distension exacerbated by gastric outlet obstruction following placement of a saline-filled intragastric balloon can occur. unrecognized, this condition can lead to ischemia, necrosis and perforation of the stomach. appropriate evaluation of patients following placement of intragastric balloons is essential. recognition of this condition can be delayed because the complaints of cramping, bloating and vomiting are typical following placement of saline-filled intragastric balloons. untreated, gastric ischemia and necrosis can lead to early perforation, which is associated with a high mortality rate. introduction: morbid obesity has become a growing health risk in the united states, with up to 40% of americans suffering from obesity. bariatric surgery remains the best treatment for morbid obesity. the recent use of laparoscopic sleeve gastrectomy (lsg) as a single-stage procedure has met with great success because of its quick learning curve and minimal postoperative complication rates. however, there are concerns about whether lsg is an effective procedure for long-term weight loss. although criticized at first, mini-gastric bypass (mgb) surgery has become a strong option for morbidly obese patients because of the ability to lose weight with minimal post-op complications. the aim of this review is to assess the outcomes of lsg as compared to mgb for the management of morbid obesity. introduction: we hypothesize that a jejunoileal anastomosis and partial diversion using magnamosis, a novel magnetic compression device, is technically feasible and will improve insulin resistance and metabolic syndrome similarly to bariatric surgery. metabolic surgery has demonstrated improvements in various parameters including insulin resistance, triglyceride levels, and cholesterol. 
it may be technically feasible to perform a less-invasive operation through partial diversion, and thereby stimulate an increase in incretins from the l-cells of the ileum to glean these benefits. methods and procedures: we performed a laparotomy and jejunoileal partial diversion using magnamosis in five rhesus macaques with insulin resistance induced through dietary modifications. after surgery, weight was monitored and a metabolic laboratory evaluation was performed weekly. timed tests were performed at baseline and again at 3 and 6 weeks postoperatively for triglyceride levels, glp-1, insulin, glucose, and bile acids. the primates were followed for 8 weeks prior to euthanasia. results are represented as mean±sem and all p-values were calculated using a two-sample student's t-test. introduction: many studies of individuals seeking bariatric surgery indicate a higher prevalence of psychiatric disorder in this population, both before and after surgery; however, results are not conclusive. the aim of this study was to investigate changes in psychiatric health after gastric bypass surgery. methods: patients within the catchment area of the department of psychiatry of the south alvsborg hospital, operated on with gastric bypass surgery during 2011-2012, were identified through the scandinavian quality registry (soreg). patients' files were examined, and psychiatric diagnoses and alcohol/drug abuse were recorded preoperatively and over a follow-up time of 5 years. results: a total of 148 operated patients were identified. 48 of these patients had been in contact with the psychiatric department before or after surgery. 7 patients, all women, had attempted suicide preoperatively, but no attempts were made postoperatively. 5 patients attempted suicide postoperatively without a previous history of suicide attempts, 4 men and 1 woman. four patients with a preoperative history of alcohol abuse were identified, all women. 
these individuals did not seem to abuse alcohol/drugs postoperatively. postoperatively, 9 patients with an alcohol/drug abuse were identified, 3 men and 6 women; none of them had a former history of abuse. 4 of the patients attempting suicide postoperatively, 3 men and 1 woman, had a postoperatively emerging alcohol/drug abuse. conclusion: preoperatively known alcohol/drug abuse or suicide attempts do not seem to predispose to postoperative abuse problems or suicidal behavior. preoperative identification of individuals prone to alcohol/drug abuse or suicide attempts seems difficult. introduction: in the past, our group has popularized models for gastric bypass, sleeve and gastric imbrication. there are currently no models to predict weight loss following single anastomosis duodenal switch. surgeons who offer this procedure are left to guess, based on their limited experience, how their patients will do following surgery. we have developed a simple office-based algorithm to predict weight loss following this procedure. method: 161 patients met the criteria for this study. these patients underwent surgery at a single institution from june 2013 to december 2016. non-linear regression analysis was performed to interpolate weight loss at one year. a multilinear regression was run to determine the significant variables. a model was then constructed to predict weight loss after single anastomosis duodenal switch. results: bmi, htn, gender, and the interaction between htn and dm were found to affect weight loss. the model achieved an r value of 0.616 and the average error of prediction in the model was 12.5% ewl. conclusion: today too many surgical practices offer procedures tailored to the surgeon instead of the needs of the patient. using our models, predicting postoperative weight loss can be a straightforward process using easily gathered data. 
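the abstract above reports only the model's fit (r value 0.616, average error 12.5% ewl), not its coefficients. the sketch below shows only the general shape of fitting such a multilinear model by ordinary least squares in plain python; the toy data, the 0/1 coding of htn, gender and the htn×dm interaction, and every number here are illustrative assumptions, not the authors' model.

```python
def solve(a, b):
    # gaussian elimination with partial pivoting for a small linear system a x = b
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_ols(rows, y):
    # ordinary least squares via the normal equations (X'X) beta = X'y
    rows = [[1.0] + r for r in rows]  # prepend an intercept column
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    return solve(xtx, xty)

# toy predictors per patient: [bmi, htn (0/1), female (0/1), htn*dm (0/1)]
# outcome: %ewl at one year (all values invented for illustration)
X = [[45, 1, 1, 1], [50, 0, 1, 0], [55, 1, 0, 0],
     [42, 0, 0, 0], [60, 1, 1, 1], [48, 0, 1, 0]]
y = [70.0, 65.0, 55.0, 80.0, 50.0, 60.0]
beta = fit_ols(X, y)
# prediction for a new patient with the same predictor coding
pred = beta[0] + sum(b * x for b, x in zip(beta[1:], X[0]))
```

in practice one would use a statistics package rather than hand-rolled normal equations; the point is only that an office-based predictor needs nothing more than a handful of fitted coefficients applied to easily gathered data.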
all surgeons should be doing this in their own practice to allow patients to choose targeted healthcare interventions based on their personal goals. surg endosc (2018) introduction: there is a long-standing practice of testing anastomoses in both upper and lower gi surgery. post-operative leaks in bariatric surgery are an uncommon but serious complication, increasing morbidity and risk of mortality. the present study looks at the practice of performing an intra-operative leak test during roux-en-y gastric bypass (rygb) and sleeve gastrectomy (sg). methods and procedures: the study was divided into two independent phases of six months and 12 months. data were collected from all patients undergoing sg, rygb or revision rygb within those two periods. to confirm the integrity of the staple line, all patients underwent a methylene blue and air test intra-operatively. this was followed by a gastrografin swallow the morning after the procedure. results: the total number of patients in the study was 219. there were four positive intraoperative tests. one patient was a primary rygb and three patients were undergoing revision rygb. all were reinforced, and subsequent recovery and gastrografin swallow showed no leak. one revision rygb had an undetected small bowel injury distal to the jejuno-jejunostomy that was not identified on intraoperative or next-day imaging. we used multivariate statistical analysis to study our population sample and classified the impact of each factor or their combination with the use of principal component analysis. we used systematic clustering to identify subpopulations that have significant differences in statistical distribution. result: the main determinant of total operative time was the surgeon and the level of his assistant. prior surgeries, bmi and smoking history had a statistically significant impact on the laparoscopic time (p<0.05). 
removing the impact of the various surgeons, we detected four clusters of patients based on more than 15 patient characteristics. we noticed total or time had two different clusters: one with a standard deviation of 17-21 min while the other had over 50 min. conclusion: this study may have practical implications for improving scheduling. the different comorbidities of these bariatric patients helped to stratify patients into these 2 main cluster groups. better predictability of the length of an operative procedure can lead to more efficient use of or time and staff, ultimately leading to savings for the hospital. in addition, we used automated noninvasive tracking methods to identify phases of bariatric procedures that will allow more accurate estimation of or time to efficiently schedule cases. the smart or, which is equipped with multiple noninvasive sensors, allows for error-free tracking and monitoring without human interference. objectives: successful outcomes after bariatric surgery (bs) require a comprehensive educational program (cep) focused on post-surgical dietary and lifestyle changes. at our institution, patients must comply with a 4-week life-after-surgery program prior to surgery. since many patients are not able to participate in person, an online cep was created to improve accessibility. to evaluate comprehension, a 16-question test is administered at the last preoperative visit to participants of both classes. the primary objective of this study is to evaluate the effectiveness of online versus in-person cep in terms of comprehension and post-operative weight loss. methods: patients who underwent bs from august 2016 to may 2017 were retrospectively reviewed at a single institution. all patients who underwent the in-person or online cep, completed the 16-question test, and had post-operative follow-up for at least 6 months were included. baseline demographic, operative, and weight data were obtained using the electronic medical record. 
background: body weight loss after bariatric surgery is affected by several factors. diabetes status and preoperative body mass index (bmi) may affect body weight loss after surgery; age and sex may also be predictors. furthermore, malabsorptive procedures are considered more effective for body weight loss than restrictive procedures alone. we investigated the contribution of preoperative background data and procedures to body weight loss after surgery. methods: this was a multicenter, retrospective study to validate the efficacy of bariatric surgery for morbidly obese patients in japan. patients who underwent sleeve gastrectomy (lsg) or lsg with duodenal-jejunal bypass (lsg/djb) in each institution from january 2005 to december 2015, and whose bmi was 35 kg/m2 or more at the first visit, were included in this study. we investigated the percent excess body weight loss (%ewl) at 12 months after surgery. univariate and multivariate analyses were done to evaluate the predictive factors of body weight loss. we defined %ewl of more than 50% as well response (wr). background: despite its known safety and efficacy, bariatric surgery is an underutilized treatment for morbid obesity in the united states. objective: our goal was to identify factors associated with failing to proceed with surgery despite being considered an eligible candidate by a bariatric surgery program. methods: this is a retrospective study that includes all patients (n=486) who attended a bariatric surgery informational session (bis) at a single-center academic institution in 2015. eligible candidates were identified after clinical evaluation and multidisciplinary candidacy review (mcr). we compared patients who underwent surgery to those who did not (i.e. dropped out) by evaluating patient-specific, insurance-specific, and bariatric surgery program-specific variables. 
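the %ewl metric used throughout these abstracts can be computed as follows. this is a minimal sketch: the function names are ours, and taking ideal body weight as the weight at a bmi of 25 is one common convention, which individual studies may vary.

```python
def ideal_weight_kg(height_m, ideal_bmi=25.0):
    # ideal body weight taken as the weight at a bmi of 25 (a common convention)
    return ideal_bmi * height_m ** 2

def percent_ewl(preop_kg, current_kg, height_m):
    # %ewl = (weight lost) / (excess weight above ideal) * 100
    excess = preop_kg - ideal_weight_kg(height_m)
    return (preop_kg - current_kg) / excess * 100.0

def well_response(preop_kg, current_kg, height_m):
    # the japanese multicenter study above defines %ewl > 50% as "well response" (wr)
    return percent_ewl(preop_kg, current_kg, height_m) > 50.0

# example: 1.65 m patient, 110 kg preoperatively, 80 kg at 12 months
ewl = percent_ewl(110.0, 80.0, 1.65)
```

the same arithmetic underlies the %ebwl figures quoted in the internal hernia abstract later in this section; only the terminology differs.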
univariate analysis and multivariable regression were performed to identify risk factors associated with failing to undergo surgery among eligible candidates. introduction: the elderly are a special subset of the population due to their limited physiological reserve with aging. revisional bariatric surgery is becoming more common with the increase in primary bariatric procedures. data on the safety, weight loss, and metabolic effects of revisional bariatric surgery in the elderly are limited. the aim of this study was to assess the safety and efficacy of revisional bariatric surgery in the elderly. methods: clinical data of all elderly patients (65 years and above) who underwent elective revisional bariatric surgery at an academic institute between 2008 and 2014 were reviewed. demographic data, perioperative variables, and postoperative outcomes were studied. results: a total of 52 patients were identified, with a female predominance (3:1). mean age was 68±2.8 years. mean bmi at the time of revisional surgery was 39.3±10.3 kg/m2. the primary indications for revisional surgery were management of postoperative adverse events (n=32, 61.5%) and weight recidivism (n=20, 38.5%). in patients with postoperative complications, the most common indications for revisional surgery were dysphagia (n=8, 15.4%), marginal ulcer (n=7, 13.5%), gastric outlet obstruction (n=7, 13.5%), and fistula formation (n=5, 9.6%). the most common types of revision included conversion of vertical banded gastroplasty to roux-en-y gastric bypass (rygb, n=18), revision of rygb (n=13), conversion of adjustable gastric banding to sleeve gastrectomy (sg, n=6), and sg to rygb (n=4). two of seven (28.6%) patients with 30-day postoperative readmissions had serious complications that required reoperation. one of them underwent small bowel resection for ischemia and the other had thoracotomy for evacuation of a hemothorax developing secondary to a gastropleural fistula. 
while there was no mortality over the first 30 days postoperatively, two patients died 6 months after surgery due to infectious complications. over a median follow-up time of 20 (interquartile range, 10-38) months, mean weight and bmi changes of −15.8 kg and −5.6 kg/m2 were observed. twenty-three (44.2%) patients had diabetes at the time of revisional surgery. a mean reduction of 12.6 mg/dl in fasting blood glucose and 1.1% in glycated hemoglobin was noted between baseline and last follow-up. conclusion: revisional bariatric surgery in the elderly is associated with high complication rates. our data indicate that revisional bariatric surgery can potentially alleviate symptoms and resolve complications of primary bariatric surgery. elderly patients should have their risk stratified and weighed against the benefits of surgery. anne-marie carpenter, bs, alexander l ayzengart, md, mph; university of florida. introduction: bariatric surgery is the most effective treatment for morbid obesity. of all available procedures, laparoscopic sleeve gastrectomy (lsg) is now the most popular worldwide. common complications of lsg include gastroesophageal reflux, stricture, and staple-line leak. although rare, portomesenteric venous thrombosis (pmvt) and liver retractor-induced injuries are increasingly reported. we present a case of isolated left portal vein thrombus after routine lsg that was likely caused by prolonged compression of the left liver lobe by the nathanson retractor. case presentation: a 55-year-old female with a bmi of 39 and biliary colic due to cholelithiasis underwent lsg with hiatal hernia repair and cholecystectomy. she tolerated the procedure without complication and was discharged home on the following day. on postoperative day 9, she presented to the emergency department with fever and epigastric pain. 
contrast ct revealed an isolated filling defect within the proximal left portal vein; abdominal doppler demonstrated an acute thrombus occluding the left portal vein with normal flow in the main and right portal veins. the patient was treated with a 3-month course of therapeutic anticoagulation with lovenox. a complete hematologic workup did not uncover any hypercoagulable conditions. the patient recovered well and remained asymptomatic at her follow-up visit 12 weeks after operation. discussion: pmvt is a rare surgical complication with multifactorial etiology. in bariatric surgery, evidence suggests lsg elicits pmvt more frequently than roux-en-y gastric bypass. a 2017 systematic review cited an incidence rate of pmvt of 0.3-1% after lsg. the mechanisms are thought to involve pneumoperitoneum, the procoagulant obese state, manipulation of the portomesenteric venous system during division of the gastrocolic ligament, and postoperative dehydration. liver retraction is paramount during laparoscopic bariatric surgery to provide adequate visualization of the upper stomach and diaphragmatic hiatus. most methods of liver retraction produce significant pressure on the liver parenchyma by compressing it against the diaphragm. three types of liver injury have been documented in the literature: minor congestion, traumatic parenchymal rupture, and delayed liver necrosis. uniquely, we propose an additional type of injury: left portal vein thrombosis due to compression of the left liver lobe with the nathanson retractor. conclusion: the case described herein represents the first documented report of isolated left portal vein thrombosis after lsg. this is a unique presentation of retraction-related liver injury causing pmvt by mechanical compression of liver parenchyma. as surgical procedures increase in duration, intermittent release of liver retraction should be performed at regular intervals. 
introduction: up to 11% of patients experience internal hernia (ih) after laparoscopic roux-en-y gastric bypass (rygb). studies have shown that antecolic roux limb orientation and closure of the mesenteric defect reduce, but do not eliminate, the incidence of ih. we hypothesize that despite operative differences, ih occurs more frequently in patients who experience significant weight loss. this study aims to determine whether patients who present with ih following rygb experience greater than 70% excess body weight loss (ebwl). methods: a retrospective chart review of all patients who underwent ih repair following rygb at our institution between sept 2014 and sept 2017 was performed. all applicable cpt codes to encompass ih repair were reviewed (n=412). 17 patients with ih repair after rygb were identified. results: of the 17 patients, 16 were female. the mean pre-rygb weight was 279 lbs (sd±54.5), bmi 37.8 kg/m2 (sd±8.7). all procedures but one were performed in an antecolic configuration; the other was retrocolic-antegastric. fifteen cases were laparoscopic and two were open; nine had the jejunal mesenteric defect closed, eight did not. the average weight loss from the time of rygb to ih presentation was 91.82 lbs (sd±38.18), and %ebwl from rygb to the nadir weight was 77% (sd±24). when evaluated by t-test, there was no statistical difference in bmi at the time of program initiation, rygb, or ih presentation, nor in number of pounds lost, %ebwl, or time to ih presentation, when comparing patients for whom the mesenteric defect was closed or not. average time from rygb to ih presentation was 4.5 years (range 190-4655 days). conclusion: in our limited cohort of patients who presented with internal hernia after rygb, there was an average of 77% ebwl. this is greater than the average expected %ebwl at our institution and others, suggesting that ih may occur at a higher frequency in patients with greater weight loss. 
mesenteric defect closure did not appear to have any influence in this limited cohort, suggesting that weight loss is a stronger factor in ih development. we plan a more extensive evaluation in a larger cohort of patients to determine if greater %ebwl is a predictor of ih formation in patients undergoing rygb. introduction: introduction of enhanced recovery after surgery (eras) pathways has led to early recovery and shorter hospital stay after laparoscopic roux-en-y gastric bypass (lrygb) and laparoscopic sleeve gastrectomy (lsg). this study aims to assess the feasibility and outcomes of postoperative day (pod) 1 discharge after lrygb and lsg from a national database. methods: patients who underwent elective primary lrygb and lsg and were discharged on pod 1 or 2 were extracted from the metabolic and bariatric surgery accreditation and quality improvement program (mbsaqip) 2015 dataset. a 1:1 propensity score matching was performed between cases with pod 1 vs pod 2 discharge, and the 30-day outcomes of the propensity-matched cohorts were compared. high-risk patients were excluded from the analysis. purpose: the aim of this study was to evaluate a large-volume, multi-surgeon bariatric surgery center, producing the largest sample size to date demonstrating the efficacy (% weight loss) and safety of sleeve gastrectomy following band removal in one- or two-step procedures. methods: all patients undergoing conversion of lagb to lrygb (33) or lsg (291), regardless of one-step vs two-step conversion, from january 2006 to january 2017 were included. a retrospective analysis of our prospectively maintained database was performed to compare outcomes in patients undergoing conversion to lrygb vs lsg after lagb. introduction: the purpose of the study was to describe the use of intraoperative indocyanine green (icg) fluorescence angiography to identify the blood supply patterns of the stomach and gastroesophageal junction (gej). 
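the mbsaqip study above states only that a 1:1 propensity score matching was performed; the abstract does not describe the algorithm. a common implementation is greedy nearest-neighbor matching on precomputed propensity scores within a caliper, sketched below in plain python. the function name, the caliper of 0.05, and the toy scores are all our assumptions for illustration, not the study's method.

```python
def greedy_match(treated, control, caliper=0.05):
    """1:1 greedy nearest-neighbor matching on precomputed propensity scores.

    treated, control: lists of (patient_id, propensity_score) tuples.
    returns a list of (treated_id, control_id) pairs; each control is used once,
    and treated patients with no control within the caliper go unmatched.
    """
    pairs = []
    available = dict(control)  # control id -> score, removed once matched
    # match the hardest-to-match (highest-score) treated patients first
    for tid, ts in sorted(treated, key=lambda p: -p[1]):
        best, best_d = None, caliper
        for cid, cs in available.items():
            d = abs(ts - cs)
            if d <= best_d:
                best, best_d = cid, d
        if best is not None:
            pairs.append((tid, best))
            del available[best]
    return pairs

# toy cohorts: pod 1 discharges (treated) vs pod 2 discharges (controls)
pod1 = [("a", 0.62), ("b", 0.35), ("c", 0.80)]
pod2 = [("x", 0.60), ("y", 0.33), ("z", 0.79), ("w", 0.10)]
matches = greedy_match(pod1, pod2)
```

after matching, 30-day outcomes would be compared only across the matched pairs, which is what makes the pod 1 and pod 2 cohorts comparable despite baseline differences.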
we hypothesized that identifying these vascular patterns may help modify the surgical technique to prevent ischemia-related postoperative leaks. methods: 86 patients underwent laparoscopic sg and were examined intraoperatively with icg fluorescence angiography at an academic center from january 2016 to september 2017. prior to construction of the sg, 1 ml of icg was injected intravenously and pinpoint® technology was used to identify the blood supply of the stomach. afterwards, the sg was created with attention to preserving the identified blood supply to the gej and gastric tube. finally, 3 ml of icg were injected and pinpoint® technology was used again to ensure that all the pertinent blood vessels were preserved. results: 86 patients successfully underwent the procedure with no complications. the following blood supply patterns to the gej were found: accessory blood supply in addition to the dominant right-sided pattern was more common than expected. in about half of the cases where an accessory vessel was found in the gastrohepatic ligament, the blood flow was toward the stomach (and not the liver). furthermore, accessory blood supply from the left side was found in 34% of the cases. 10% of patients had both the left-side accessory and accessory gastric artery patterns. in these particular patients, if a concurrent hiatal hernia repair is performed, these accessory blood supplies are at risk of being injured if care is not taken to preserve them, rendering the gej relatively ischemic. conclusion: icg fluorescence angiography allows determination of the major blood supply to the proximal stomach prior to any dissection during sleeve gastrectomy, so that an effort can be made to avoid unnecessary injury to these vessels. background: morbid obesity, a common medical concern with significant health risks, has a prevalence of 10.4% among u.s. adults. 
bariatric surgery provides effective weight loss for morbidly obese patients with improvement in their comorbid conditions. traditionally, routine intraoperative drain placement (idp) and postoperative esophagram (ugis) were thought to identify early postoperative complications. recently, these interventions have been scrutinized for their effectiveness. we hypothesized that idp and postoperative ugis do not alter outcomes in bariatric surgery and only increase hospital length of stay (los). methods: two cohorts, each consisting of 100 patients, from either 2015 or 2017 were analyzed from our institution. in the 2015 cohort, all patients had idp and an ugis on postoperative day 1, prior to starting a clear liquid diet. in the 2017 cohort, no patients had idp or ugis; instead they were started on a clear liquid diet on postoperative day 1, in the absence of vomiting. all patients in each cohort underwent either a laparoscopic sleeve gastrectomy or a roux-en-y gastric bypass. a retrospective study was performed to analyze whether there was a significant difference in postoperative complications, length of stay, and operating room time between these two cohorts. those who experienced t2dm remission were less likely to be vdd at all time points. the rates of vdd appear to be slightly higher after rygb at each time point. the rates of macrocytic anemia, microcytic anemia and hypoalbuminemia were low and varied depending on surgical procedure, with no relevant increase following surgery (see figure 1). conclusions: vitamin d deficiency is prevalent among diabetic patients with obesity presenting for bariatric surgery. postoperative management was successful in addressing vdd following surgery; those who experienced t2dm remission after surgery were less likely to be vdd. further prospective studies are needed to explore this relationship. surg endosc (2018) 32:s130-s359. introduction: it is well known that morbid obesity is strongly associated with high blood pressure. 
cardiovascular risk reduction is a well-studied and described result of bariatric surgery. the objective of this study is to quantify hypertension resolution in patients who underwent bariatric surgery at our institution. methods: we retrospectively reviewed all patients who underwent either laparoscopic sleeve gastrectomy (lsg) or laparoscopic roux-en-y gastric bypass (lrygb) at our institution between 2010 and 2015. we selected those patients who were on antihypertensive medical treatment and had a 12-month follow-up. hypertension resolution was defined as the interruption of all blood pressure medications within the follow-up period. we compared the patients who had resolution of hypertension (group 1) with patients who did not (group 2) based on demographics, comorbidities, and outcomes. chi-square and student's t-test were used for categorical and continuous variables respectively. results: out of 1330 patients, 185 (13.9%) met the inclusion criteria, of whom 73 (39.5%) had a complete resolution of hypertension within 12 months. the included patient population was predominantly female (n=114, 61.8%) and diabetic (n=87, 47%), with a mean bmi of 30.31±4.45 kg/m2, a mean age of 52.6±10.7 years, and a mean preoperative systolic blood pressure of 131±14.31 mmhg. the most common procedure performed was lsg (n=105, 57%). comparison between group 1 and group 2 based on age, gender, bmi, and diabetes showed no statistically significant difference. estimated bmi loss at 12 months, type of procedure and %ebmil showed no statistically significant difference between the groups. conclusions: rapid weight loss is associated with a drastic reduction of blood pressure. beyond weight loss, we did not identify a clear correlation with risk factors when we compared patients who had resolution of hypertension with patients without resolution. further prospective studies should be done to better understand these findings. 
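the chi-square test used above for categorical comparisons has a simple closed form for a 2×2 table. the sketch below computes the pearson statistic in plain python; the example counts are invented for illustration and are not the study's actual cross-tabulation of resolution by procedure.

```python
def chi_square_2x2(a, b, c, d):
    # pearson chi-square statistic for the 2x2 contingency table [[a, b], [c, d]]
    # equivalent to summing (observed - expected)^2 / expected over all four cells,
    # using expected = (row total * column total) / n under independence
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# illustrative only: rows = hypertension resolved yes/no, columns = lsg/lrygb
stat = chi_square_2x2(45, 60, 28, 52)
# with 1 degree of freedom, stat > 3.84 would indicate significance at p < 0.05
```

in practice one would rely on a statistics package (e.g. the chi-square routines in scipy or spss, as the abstracts here do) rather than hand-coding the formula; the point is to make the comparison being reported concrete.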
the mount sinai hospital, university of chicago. introduction: for many patients, hiv has transformed from a life-threatening illness into a manageable chronic disease. reflecting trends in the general population, obesity is increasingly prevalent among hiv-positive patients. surgical intervention has shown the greatest effectiveness in treating obesity. it is unknown, however, whether physician attitudes reflect the changing trends in obesity care for hiv-positive patients. methods and procedures: medical students from the first, second, and fourth years of training were invited to participate in an irb-approved survey, handed out during didactic sessions, which was designed to assess their knowledge and attitudes regarding bariatric surgery in hiv-positive patients. self-reported demographic information of respondents was also collected. the outcome of interest was the proportion of correct responses. univariate and multivariate regression analyses were performed. results: surveys were completed by 127 medical students. demographic covariates included the following: age, sex, race, bmi, and year of training. age, sex, race, and bmi were not statistically significant in the multivariate model. however, in both univariate and multivariate models, each additional year of training was associated with a significant increase in the proportion of correct responses (multivariate model beta coefficient=0.440, p<.001). conclusions: obese and hiv-positive patients suffer from well-documented stigma in health care. these findings suggest that medical training corrects common misperceptions of obese and hiv-positive patients, and may lead to a better understanding of the appropriateness of bariatric surgery for hiv patients. whether these attitudes are predictive of referral practices remains to be seen. introduction: obesity is a common problem worldwide with numerous associated comorbidities and is associated with an increased risk of developing some cancers. 
despite bariatric surgery being associated with a risk reduction for cancer development, some patients will develop cancer after surgery, and little is known about complications which might arise during multimodality cancer treatment. here we report the case of a 55 year-old female who developed an unusual giant marginal ulcer (mu) post laparoscopic roux-en-y gastric bypass (lrygb) while receiving systemic chemotherapy for an early-stage breast cancer. case report: in summary, a 55 year-old female with a preoperative bmi of 40 kg/m2 had an uncomplicated lrygb one year prior to her presentation. she was a non-smoker, was abstinent from alcohol and did not use nsaids, steroids or other ulcerogenic medications. eight months post procedure, with a bmi of 29.1 kg/m2, she was diagnosed with and treated with bcs plus slnb for a pt2n0m0 er/pr +ve her2 −ve breast cancer. one week following her third cycle of docetaxel and cyclophosphamide, she presented with two days of melena, small-volume hematemesis and abdominal discomfort. the patient was resuscitated with prbc, started on a ppi infusion and had free air ruled out on a cxr. upper endoscopy was completed, showing a giant mu at the gastrojejunal anastomosis; biopsies ruled out malignancy and h. pylori. subsequent ct abdomen/pelvis identified contrast extravasation from the anastomosis, confirming a free perforation. broad-spectrum antibiotics were started and a diagnostic laparoscopy completed. a graham patch repair utilizing omentum and an abdominal washout were completed, with placement of surgical drains. the patient was supported with parenteral nutrition while npo. diet was advanced after an upper gi series on postoperative day 7 showed no ongoing leak. the patient was discharged on postoperative day 13 and recovered; although further chemotherapy was discontinued, she completed whole-breast radiotherapy. conclusion: leaks and hemorrhage are early postoperative complications that are not seen intraoperatively in our experience. 
furthermore, endoscopy significantly increases mean operative time. routine use should be left to the discretion of the surgeon but should not be considered an essential step of the sleeve gastrectomy. the objective of the study: surgical site infection (ssi) following bariatric surgery contributes to patient morbidity and additional use of health care resources. we investigated whether an ssi quality control initiative in the form of a refined preoperative antimicrobial protocol affected the rate of ssi following laparoscopic roux-en-y gastric bypass (lrygb). we reviewed all lrygb procedures performed between june 2015 and december 2016 at a single bariatric surgery centre of excellence. two preoperative antimicrobial protocols were compared. patients undergoing surgery prior to february 2016 received 2 g of cefazolin, whereas patients undergoing surgery after february 1, 2016 received a new antimicrobial protocol consisting of 2 g cefazolin, 500 mg metronidazole and 30 ml oral chlorhexidine rinse. the primary outcome was 30-day ssi, including superficial ssi, deep incisional ssi and organ/space infection as defined by the centre for disease control. clinic charts and provincial electronic medical records were reviewed for emergency department visits, microbiology investigations and physician dictations diagnosing ssi. outcomes were assessed using a student's t-test. results: two hundred seventy-six patients underwent lrygb, of whom 167 received the refined antimicrobial protocol and 109 received cefazolin alone. the refined antimicrobial protocol significantly decreased the rate of deep incisional ssi compared to cefazolin (n=1, 0.6% vs n=5, 4.6%; p<0.05). the refined antimicrobial protocol resulted in a nonsignificant overall reduction in the rates of superficial ssi (n=12, 7.2% vs n=13, 11.9%; p>0.05) and organ/space infection (n=0, 0.0% vs n=2, 1.8%; p>0.05) respectively. 
conclusions: a preoperative antimicrobial protocol using cefazolin, metronidazole and chlorhexidine oral rinse appears to reduce the rate of ssi following lrygb. this protocol may be most effective in preventing deep incisional ssi. additional patient cases or an alternative study design, including a randomized controlled trial, are required to better understand the efficacy of this protocol. background: for many years, the roux-en-y gastric bypass (rygb) was considered a good balance of complications and weight loss. according to several short-term studies, single anastomosis duodenal switch or stomach intestinal pylorus sparing surgery (sips) offers weight loss similar to rygb with fewer complications and better diabetes resolution. however, no one has substantiated complication and nutritional differences between these two procedures over the midterm. this paper seeks to substantiate previous studies and compare complication and nutritional outcomes between rygb and sips. methods: a retrospective analysis of 798 patients who had either sips or rygb from 2010 to 2016 was performed. complications were gathered for each patient. nutritional outcomes were measured for each group at 1, 2, and 3 years. regression analysis was applied to interpolate each patient's weight at 3, 6, 9, 12, 18, 24, and 36 months. these were then compared with t-tests, fisher exact tests, and chi-squared tests. results: rygb and sips have statistically similar weight loss at 3, 6, 9, 12, and 36 months. they statistically differ at 18 and 24 months. at 36 months, there is a trend toward a weight loss difference. there were statistical differences in nutritional outcomes between the two procedures only for calcium at 1 and 3 years and vitamin d at 1 year. there were statistically significantly more long-term major complications, minor complications, reoperations, ulcers, small bowel obstructions, nausea, and vomiting with rygb than with sips. 
conclusion: with comparable weight loss and nutritional outcomes, sips has fewer short- and long-term complications than rygb and better type 2 diabetes resolution rates. introduction: the purpose of this study is to determine the risk factors that contributed to the increased postoperative complications noted in prior studies within the publicly funded insurance population undergoing bariatric surgery. methods and procedures: data were collected via a retrospective review of the medical records of patients who underwent laparoscopic roux-en-y gastric bypass or laparoscopic sleeve gastrectomy from 2010 to 2014 at a single institution. for each patient, data were collected in the following categories: baseline demographics, insurance status, medical comorbidities, immediate complications, re-admissions and associated complications, and follow-up out to 3 years. results: a total of 553 patient charts were reviewed; 513 patients were categorized as private insurance and 40 patients as public insurance. there was no statistically significant difference in mean patient age (private 46.6 years vs public 48 years), sex (male:female 22%:78% for both groups), or bmi (48 vs 50). there was a statistically significant difference in relationship status in the categories of single (21% vs 30%), married (61% vs 35%) and living with a partner (3% vs 10%), as well as employment status (78% vs 12%). when comparing comorbid conditions preoperatively, there was no difference except for diabetes, which was less common in the private insurance group (32% vs 50%). readmission rates for complications were also significantly different at 35% vs 55%, with public insurance patients having increased complication rates and readmissions. there was no difference in follow-up percentages at each time point for the two groups. interestingly, postoperative bmi was significantly different in the two groups until 1 year out (32 vs 34), when the difference disappears.
conclusions: our current data set confirms prior research that documented higher complication rates in public insurance patient populations without differences in long-term results in regard to weight loss. it also shows that the public insurance group is possibly at higher risk for complications and readmissions postoperatively due to the lack of social support at home, given that a much higher percentage of them are single or divorced and lack employment. it is likely that this lack of support at home prompts more frequent readmissions and associated complications. introduction: gastric bypass has been an acceptable treatment for the morbidly obese patient, with proven efficacy on weight loss and remission of co-morbidities, especially diabetes (t2dm). laparoscopic sleeve gastrectomy (lsg) is gaining momentum as an alternative procedure for the morbidly obese patient. the aim of this study is to assess the resolution of t2dm by examining hba1c, bmi, fat %, and % excess weight loss in t2dm patients in our lsg patients. methods: we performed a retrospective chart review of 33 t2dm patients before and after lsg, analyzing hba1c, bmi, % weight loss, fat %, and diabetic medications. data were analyzed using spss version 24. a paired t-test was applied to assess the significance of changes in bmi, weight, fat % and hba1c before and after the procedure. introduction: gastroesophageal reflux disease (gerd) is a known risk following laparoscopic sleeve gastrectomy (lsg), with up to 50% of patients affected by the disease postoperatively. of these patients, an unknown number progress to medically refractory gerd. due to their postsurgical anatomy, these patients have limited options for intervention. while endoluminal therapies are available, surgical revision to roux-en-y gastric bypass (lrygb) has become an accepted revisional treatment. despite this therapeutic option, many payors deny coverage for this treatment.
in this study, we report outcomes of revision of lsg to lrygb and difficulties in obtaining insurance approval for the operation. methods: we conducted a retrospective review of all patients who underwent a revisional bariatric operation at a single institution between january 2015 and august 2017. we analyzed all patients who underwent conversion of lsg to lrygb. we collected data on 30-day mortality and morbidities, pre- and postoperative antacid use, and the insurance approval process. results: within the study period, we identified 164 patients undergoing revisional bariatric surgery. seventeen patients had undergone conversion of lsg to lrygb. all of these patients underwent revision due to gerd refractory to maximal medical therapy. the average body mass index was 37 kg/m2, and the average operative time was 184 minutes. one patient required laparoscopic cholecystectomy within 30 days due to acute cholecystitis, and another patient required reoperation for control of staple line bleeding. there were otherwise no 30-day morbidities or readmissions. fifty-nine percent stopped all antacid medication by six months, and 65% stopped by 24 months. of the 35% of patients still on proton pump inhibitor therapy, none complained of reflux symptoms. of non-medicare patients, 69% were initially denied insurance coverage for revision. only one plan accounted for all initial approvals. twenty-five percent of denied patients eventually paid out of pocket, and the remaining 75% ultimately secured coverage after an appeal process. with no significant differences in mortality or hospital stay. significantly shorter operative times were observed in the adolescent group (83.6±46 vs 88.1±51 min, p < .001). in univariate analysis, blood transfusion and vte rates were significantly lower in the adolescent group, but there was no difference after risk-adjusted logistic regression analysis.
analysis of readmission data showed lower rates in adolescents compared to young adults (3.67% vs 4.44%, p=0.06). however, adolescents are more frequently readmitted secondary to gallstone disease (6.3% vs 1.9%, p < .05). the most common reason for readmission in both groups was nausea and vomiting with fluid/electrolyte depletion, followed by abdominal pain. conclusion: adolescent bariatric surgery is feasible and safe, with outcomes similar to those of young adults. lsg is currently the most common bariatric procedure performed in adolescents, which is reasonable given the relative lack of co-morbid conditions within this group. nausea and vomiting are the most common reasons for readmission in both groups, but gallstone disease is significantly more common in adolescents, suggesting that this population should be carefully screened for gallbladder disease preoperatively. further studies are needed to elucidate long-term outcomes, such as the durability of comorbidity resolution in adolescent patients. introduction: revision bariatric surgery is generally considered to be associated with higher complication rates. there is currently controversy in the literature regarding one-stage and two-stage revisions. methods: the present study is an ongoing longitudinal prospective analysis of data on revision surgery in a single unit. revision surgery was offered after an initially failed or complicated gastric band, sleeve gastrectomy or roux-en-y gastric bypass (rygb). results: forty-two individuals had revision bariatric surgery. the age of the cohort ranged from twenty-six to seventy-five years. thirty-three were female and nine male. all patients who were hypertensive or diabetic at the time of their initial bariatric operation had a relapse of their co-morbidity prior to their revision surgery.
the two-stage revision patients had their band removed at another facility, had a complication from the band itself, or did not wish for revision surgery initially. of the two failed bypasses, one had a large pouch and very short limbs. the other had a gastro-gastric fistula and ultra-short limbs. there were no deaths in this study. one patient who underwent one-stage revision of a gastric band to bypass had an iatrogenic small bowel injury that required a second operation. amelioration of diabetes and hypertension was seen in all who had relapsed. weight loss was good in all patients except for those undergoing revision from a short-limbed to a long-limbed bypass. conclusion: there is enough evidence that revision surgery is feasible and can ameliorate metabolic co-morbidities after a failed band or sleeve. two-stage surgery is not necessarily safer than one-stage revision. in the present study an inadvertent iatrogenic injury occurred in the one-stage revision group, but this is not a true reflection of increased complications. the association between preoperative endoscopic esophagitis and postoperative gerd in sleeve gastrectomy patients samer elkassem, md; medicine hat regional hospital introduction: gerd is a common complication after sleeve gastrectomy (sg). the purpose of this study is to assess the relationship between pre-operative findings of endoscopic esophagitis and postoperative gerd in sg patients. the hypothesis of this study is that patients with pre-op esophagitis are more likely to have gerd post-op than patients with no esophagitis pre-op. methods: a retrospective review of 103 sg patients who had pre-operative endoscopy and were followed prospectively for at least one year was performed. patients were divided into two groups based on pre-op endoscopic findings: those with no findings of esophagitis (ne), and those with endoscopic esophagitis, including barrett's (ee).
patients were followed for at least one year and assessed for proton pump inhibitor (ppi) usage. the two groups were compared using both student's t-test and chi-square test. results: a total of 63 patients did not have any findings of esophagitis on pre-op endoscopy (ne group), and 38 patients had findings of endoscopic esophagitis (ee). there was no difference in preoperative demographics and post-op weight loss at one year (table i). follow-up ranged from one to 4 years post-op. the dependency on ppi usage and de novo reflux are shown in table ii. introduction: patients with "super-super obesity", defined as a bmi≥60, are at higher risk of weight-related health problems and might benefit more than others from metabolic and bariatric surgery. however, these benefits need to be weighed against the potential for increased operative and perioperative risks. accurate data regarding these patients is critical to guide procedure choice and informed, shared decision-making. the metabolic and bariatric surgery accreditation and quality improvement program (mbsaqip) is a national accreditation and quality improvement program, which captures clinically rich specialty-specific data for the majority of all bariatric operations in the united states. this is the first analysis of the mbsaqip participant use file (puf) focusing on this at-risk subpopulation. introduction: sleeve gastrectomy represents one of the most common surgical procedures used in bariatric surgery. the most feared complication following laparoscopic sleeve gastrectomy is a leak at the staple line. one method to reduce the risk of leak is the use of reinforcement material at the suture line. in this study, the efficacy of sutures and fibrin glue in the prevention of staple line leak was compared retrospectively.
materials and methods: a total of 250 patients undergoing lsg between october 2011 and august 2015 at the medical faculty of firat university were retrospectively assessed using the hospital database system records. results: there were 77 males (31%) and 173 females (69%), with a mean age of 34 years (range: 16-65 y) and body mass index of 45 kg/m2. while no reinforcement material was used at the suture line in 61 patients (24%), reinforcement sutures or fibrin glue were used in 54 (22%) and 135 (54%) patients, respectively. postoperative leak occurred in 8 patients (3.2%); 6 of these (9.8%) had received no reinforcement material for leak prevention, while additional sutures or fibrin glue had been used in 2 patients, one in each group (0.7%). one patient died due to leak and the consequent development of sepsis (0.4%). discussion: lsg is increasingly frequently used in bariatric surgery practice. however, an increase also occurs in the rate of complications. a discrepancy exists in the published literature regarding the benefit of reinforcing the suture line on the risk of leak. in our patient series, patients without the use of additional material at the staple line had a significantly increased risk of leak. conclusion: despite some controversy, strong evidence exists on the effectiveness of fibrin glue in the prevention of leaks in patients undergoing laparoscopic sleeve gastrectomy. background: laparoscopic bariatric surgery has been performed safely since 1991. in a persistent search for fewer and smaller scars, single-port and acuscopic surgery or even notes have been implemented. the goal of this study is to analyze the safety and feasibility of a low-cost incisionless liver retraction compared to a standard laparoscopic retractor for sleeve gastrectomy. methods and procedures: candidates for sleeve gastrectomy who fulfilled the 1991 nih criteria for bariatric surgery were selected.
those younger than 18 and/or with prior upper-left quadrant surgery were excluded. all patients signed written consent. patients were randomized 1:1 to either a standard 5-port technique with a fan-type liver retractor through a 5 mm port (group a); or a 4-port technique with the liver retracted by a polypropylene 1 suture passed through the right crura and retrieved at the epigastrium with the use of a fascia closure needle (group b). all surgeries were performed by the same surgeon. surgery length from insertion of the first port to withdrawal of the last was the primary endpoint. anthropometric data, % of pre-surgical total weight loss (%ptwl), visualization of the surgical field, complications inherent to liver retraction and postoperative morbidity were recorded. background: comprehensive web- and hospital-based preoperative patient education allows morbidly obese patients to understand weight loss surgery, its benefits, the necessity of follow-up and the risk of weight regain. while the in-house seminars provide a face-to-face interaction with the bariatric program staff, the online seminars are easily accessible and more cost effective. the primary objective of this study is to compare demographics and weight loss surgery outcomes between patients who participated in the online vs in-house preoperative seminars. methods: after obtaining institutional review board approval, a retrospective chart review was performed involving patients who underwent bariatric surgery between january 2015 and december 2016 at a tertiary care center. the patients were divided into two groups based on their choice of educational seminar, online or in-house, prior to their initial consult with a surgeon. data were collected on age, type of insurance, length of stay (los), longest follow-up and change in bmi to assess weight loss. results: one hundred and eighteen patients were included in this study. eighty patients attended the in-house seminar while 38 completed the online seminar.
the various types of surgery (laparoscopic gastric bypass, sleeve gastrectomy, and band) were similarly represented between the two groups. there was no difference in the type of insurance policy between the groups. patients who elected to take the in-house seminar were on average 5 years older than those who chose the online course, which was statistically significant (p < .05). there were no differences in los, longest follow-up after surgery, or weight loss at 12 months between the groups. conclusions: based on mbsaqip registry data, patients aged 65 or over did not have higher odds of a 30-day readmission compared to younger patients after lsg or lrygb. rates of 30-day readmission, reoperation, and death were similar, but rates of complications (e.g. pneumonias, unplanned intubations) were higher in the older group. bariatric surgery in the elderly should therefore be performed only after a careful and patient-centered selection process. introduction: revisional bariatric surgery has become more common in recent years. it is used to address short- and long-term complications of primary bariatric surgery as well as the issue of weight regain. the aim of this study was to retrospectively analyze the indications for reoperation and short-term outcomes in our institution. methods and procedures: between 2011 and 2017, patients who underwent bariatric surgery in our center were included in a prospectively collected database. demographic data, primary and revisional bariatric procedures, reasons for revision and outcomes were recorded and reviewed retrospectively. results: a total of 527 patients underwent bariatric surgery at our institution, and 22% of these (n=119) were revisional bariatric procedures. we identified 4 groups of patients according to their primary procedures: adjustable gastric band (agb), roux-en-y gastric bypass (rygbp), vertical band gastroplasty (vbg), and sleeve gastrectomy (sg). of the 119 patients, 51 (43%) had agb as the primary procedure.
of those, 55% had their band removed due to food intolerance and severe dysphagia, and 37% had a conversion to either rygbp or sleeve gastrectomy (sg) due to weight recidivism. in the rygbp group (n=38), 53% of the patients presented with late complications. of these, 45% had an acute presentation (small bowel obstruction, internal hernia, or perforated marginal ulcer) requiring emergency surgery. only 8% of patients needed gastric bypass takedown due to severe hypoglycemia. weight recidivism was noted in 47% of the patients, necessitating either revision of the anastomosis, trimming of the gastric pouch or gastrogastric fistula takedown. in the vbg group (n=14), 79% of the patients experienced weight recidivism that required conversion to rygb, and 21% of the patients required the vbg to be taken down due to obstructive symptoms. in the sg group (n=14), 21% of the patients experienced early complications needing a second procedure. weight recidivism was the most common reason for conversion (50%) to rygbp. twenty-nine percent of the patients in this group underwent conversion to rygbp due to severe de novo gerd. introduction: our aim was to systematically review the literature to compare weight loss outcomes and safety of secondary surgery after sleeve gastrectomy (sg), particularly between roux-en-y gastric bypass (rygb) and biliopancreatic diversion with duodenal switch (bpd-ds). sg was originally developed as the first part of a two-stage procedure for bpd-ds. however, it is now the most common standalone bariatric surgery performed in the united states. the majority of sg are done as the sole bariatric operation, but in 3% a second operation is necessary due to insufficient weight loss, weight regain or reflux. the most common second-stage operations are rygb at 46% and bpd-ds at 24%. there are a few small case series comparing rygb to bpd-ds as a secondary surgery after sg.
these studies suggest that after failed sg, bpd-ds results in greater weight loss but higher early complication rates than rygb. we had one mortality, related in part to supra-therapeutic anticoagulation perioperatively. one patient underwent successful heart transplantation and 2 additional patients were reactivated on the transplant list. conclusion: laparoscopic sleeve gastrectomy is effective in advanced heart failure patients for meaningful weight loss, reactivation to the transplant wait list, and ultimately cardiac transplantation. however, this complex population carries a high perioperative risk and close multidisciplinary collaboration is required. more data are needed to optimize perioperative management of these patients. introduction: bariatric surgery is a highly effective treatment for severe obesity. while its effect on improvement of the metabolic syndrome is well described, its effect on intrinsic bone fragility and fracture propagation is unclear. therefore, the aims of this systematic review of the literature were to examine (1) the incidence of fracture following bariatric surgery, (2) the association of fracture with the specific bariatric surgical procedure (3) conclusion: it appears that the overall risk of sustaining a fracture of any type after undergoing bariatric surgery is approximately 5 percent after an average follow-up of 3.6 years. the greatest risk of fractures is associated with the bpd, with the rygb being the most favorable. fractures following bariatric surgery tend to follow osteoporotic and fragility patterns. post-operative supplementation of vitamin d and calcium and weight-bearing exercise need to be optimized, and long-term follow-up studies will be needed to confirm that these interventions will indeed reduce fracture risk following bariatric surgery. background: the effect of sleeve gastrectomy on gastroesophageal reflux (gerd) remains controversial.
it is currently common practice to perform a hiatal hernia repair (hhr) at the time of the sleeve gastrectomy; however, there are few data on the outcomes of gerd symptoms in these patients. the aim of this study was to evaluate the effect of performing an esophagopexy hiatal hernia repair on gerd symptoms in morbidly obese patients undergoing robotic sleeve gastrectomy (rsg). methods: a single-institution, single-surgeon, prospectively maintained database was used to identify patients who underwent rsg and concomitant esophagopexy for hiatal hernia repair from november 2015 to july 2017. patient characteristics, operative details and postoperative outcomes were analyzed. the primary endpoint was subjective gerd symptoms and recurrence of hiatal hernia. results: thirty-seven patients were identified meeting the inclusion criteria (rsg+hhr+esophagopexy) with a mean follow-up of 28. over the past 4 years there have been several bariatric surgeries cancelled secondary to abnormal pre-operative test results within eastern health. these surgeries are often cancelled the day before their scheduled date, which does not provide sufficient time to book other patients. the end result is that the or gets underutilized and the bariatric surgery waitlist grows. prior to any major surgery, patients are subjected to a routine screening process, which includes a history and physical along with diagnostic screening tests and screening blood work. a preliminary analysis was done of the first 50 patients through the bariatric surgery program at eastern health, assessing coagulation study results and outcomes. analysis showed that of the first 50 patients, 2% were found to have a history of bleeding, 10% were using anticoagulants preoperatively, and another 2% were noted to have a family history of bleeding. in the preoperative blood work, 30% were found to have an elevated ptt/inr, for which hematology was consulted in 4% of the patients.
overall this did not change the preoperative management of these patients and they went on to have their surgery. intraoperatively, 1 patient was noted to have excessive bleeding, which was found not to be associated with any preoperative elevation in coagulation studies or family history of bleeding disorders. postoperatively there was bleeding in 1 patient which required transfusion; this too was found not to be associated with any preoperative elevation in coagulation studies or family history of bleeding disorders. overall this initial analysis showed no difference in operative management or delay in surgery secondary to abnormal preoperative assessment findings. further analysis of a larger population of the bariatric surgery program patients is needed in order to determine whether any changes should be made to the preoperative assessment protocol. introduction: patients undergoing bariatric surgery frequently present with various obesity-related psychiatric comorbidities, including depression. furthermore, previous literature has demonstrated a positive association between depression and cardiovascular disease, and obesity serves as an independent risk factor for cardiovascular disease. however, the relationship between preoperative depression and cardio-metabolic risk factors following bariatric surgery remains unknown. methods and procedures: this retrospective analysis utilized data obtained from patients (n=2,420) who underwent bariatric surgery at a single academic medical center in california. patients underwent either laparoscopic roux-en-y gastric bypass or sleeve gastrectomy. using medical record data, patients were preoperatively categorized as follows: not depressed, history of depression but not currently on anti-depressive medication, and history of depression and presently taking anti-depressive medication. patient demographic characteristics were obtained preoperatively.
clinical and biochemical risk factors for cardiovascular disease were evaluated preoperatively and 6 and 12 months following bariatric surgery. anova, kruskal-wallis, and chi-square tests were applied where appropriate. results: in this sample, 59% of patients were not depressed, 21% had a history of depression but were not taking anti-depressive medication preoperatively, and 20% had a history of depression and were taking anti-depressive medication preoperatively. at baseline, depressive history was positively associated with female sex (p < .0001), older age (p < .0001), white race (p < .0001), medicare insurance (p < .0001), previous abdominal surgery (p < .0001), length of stay (p < .0001), requiring an inferior vena cava filter (p = .009), total cholesterol (p < .0001), and triglycerides (p = .003). on average, patients with a history of depression taking anti-depressive medication weighed less than patients with a history of depression not on medication and patients without depression preoperatively (p = .002) and at 6 (p = .024) and 12 (p = .004) months after surgery. after six months of follow-up, preoperative depressive history was positively associated with total cholesterol (p = .039), triglycerides (p < .0001), hba1c (p = .039), and fasting serum concentrations of insulin (p = .017). after 12 months of follow-up, preoperative depressive history was positively associated with higher levels of total cholesterol (p = .013), ldl cholesterol (p = .021), and triglycerides (p = .016). conclusion: a history of depression prior to surgery was associated with higher levels of total cholesterol and triglycerides at baseline and at 6 and 12 months postoperatively. after 12 months, preoperative depressive history was also associated with higher levels of ldl cholesterol. this study suggests that, on average, bariatric patients with comorbid depression have worse lipid profiles prior to, and up to one year after, bariatric surgery relative to counterparts without depression.
yen-yi juo, md, mph, yas sanaiha, md, erik dutson, md, yijun chen, md; ucla introduction: anastomotic leak is one of the most morbid complications of roux-en-y gastric bypass (rygb), yet its risk factors are ill-defined due to the rarity of the complication. we aim to identify both patient- and operative-level risk factors for anastomotic leak after rygb using a national clinical database. methods: a retrospective cohort study was performed using the 2015 metabolic and bariatric surgery accreditation and quality improvement program (mbsaqip) database. all adult patients who underwent laparoscopic or open rygb were included. multivariate logistic regression models were used to identify patient- and operative-level variables associated with the development of anastomotic leakage. clinically relevant anastomotic leaks were defined as those requiring readmission, intervention, or reoperation. introduction: hyperammonemia secondary to ornithine transcarbamylase (otc) deficiency is a rare and potentially lethal disorder. the prevalence of otc deficiency is reported to be 1:14,000 to 1:70,000 in the general population. otc deficiency has been reported in patients presenting with neurological symptoms after roux-en-y gastric bypass (rygb), and fewer than 30 cases have been reported in the literature. the aims of this study are to examine the apparent incidence of this uncommon disorder in patients after bariatric surgery and to examine potential predictors of mortality. methods and procedures: this is a single-center, retrospective study in a large, urban teaching hospital of post-bariatric surgery patients who developed hyperammonemia from january 2012 to august 2017. elevated plasma ammonia with an elevated urinary orotic acid level is accepted as consistent with a diagnosis of otc deficiency. all patients in our program are instructed on a post-operative diet containing 60 grams/day of protein. descriptive and correlative statistics were calculated for all variables.
results: between january 2012 and august 2017, 1597 bariatric surgical procedures were performed at this single medical center. seven women with neurological symptoms had plasma ammonia levels above the upper limit of the normal range. their average bmi was 45 kg/m2. two patients underwent vertical sleeve gastrectomy (vsg), 1 underwent vsg with duodenal switch, and 4 underwent rygb. all patients were hospitalized. the mean peak plasma ammonia level was 142 umol/l (range: 57-235). the mean urinary orotic acid level was 3.3 mmol/mol creatinine (range: 1.6-7.9). two patients had no orotic acid level checked, secondary to demise. no patient had clinical features or findings of progressive hepatic failure. there were four mortalities (57.1%). serum folate and peak lactic acid levels were predictors of mortality, with p-values of 0.048 and 0.006, respectively. the apparent incidence of otc deficiency was 1:319 in post-operative patients. conclusions: in our post-operative population, hyperammonemia results in a high mortality. its apparent incidence, secondary to otc deficiency, amongst bariatric surgery patients is higher than that reported in the general population. since otc deficiency is identified after multiple bariatric surgical procedures, further investigation will be important to examine potential mechanisms for its development, which may include a genetic predisposition (possibly triggered by nutritional deficiencies), upper gut bacterial overgrowth (supported by elevated serum folate levels), or preexisting, subclinical hepatic dysfunction. introduction: the use of closed suction drains is associated with poor outcomes in many anastomotic operations and routine use is not recommended. in this context, intraoperative drain placement for primary bariatric surgery remains controversial. recent studies demonstrate that drains confer no benefit to patients; however, data are limited to descriptive single-center experiences with low sample sizes.
in order to characterize this practice gap and implement evidence-based recommendations, we sought to evaluate the use of closed suction drains and outcomes following primary bariatric cases using the mbsaqip registry. methods: we used data from the 2015 metabolic and bariatric surgery accreditation and quality improvement program (mbsaqip) public use file for patients who underwent a non-revisional laparoscopic roux-en-y gastric bypass (rygb), laparoscopic sleeve gastrectomy (lsg), or laparoscopic adjustable gastric banding (lagb). we excluded patients with asa status greater than 3 or conversion to an open procedure. we analyzed demographics, preoperative comorbidities, and procedure type for patients who did and did not undergo drain placement. adjusted rates of postoperative complications and mortality were then compared based on receipt of drain placement. results: of the 141,404 included patients who underwent laparoscopic bariatric surgery, 33,618 (23.8%) underwent intraoperative drain placement. drains were more often placed in patients who underwent lrygb, were older, had higher preoperative bmi, had higher preoperative asa status, and had more comorbid conditions. after patient-level risk adjustment, there was no difference in rates of leaks requiring intervention (0.32% versus 0.26%, p=0.065) or mortality (6.5% versus 5.4%, p=0.206) for patients with and without drains. in patients who underwent drain placement, there were higher rates of transfusion (9.2% versus 5.6%, p < .001), reoperations for bleeding (0.30% versus 0.18%, p=0.001), all reoperations (4.8% versus 3.9%, p < .001), and surgical site infections (ssi) (1.0% versus 0.6%, p < .001). conclusion: our analysis demonstrates that nearly one quarter of all laparoscopic bariatric surgery patients undergo drain placement.
we found that drain placement is more common in preoperatively higher-risk patients and following higher-complexity procedures, as suggested by the associated increased rates of transfusion and reoperations for bleeding. we found no benefit to drain placement in terms of interventions for clinically significant leaks or mortality. finally, patients who underwent drain placement were more likely to develop ssi, suggesting routine placement is not without risk. although further prospective studies are warranted, our analysis demonstrates that drains have the potential for harm with minimal protective benefit for patients after primary bariatric surgery. sleeve gastrectomy (50%, n=24) and laparoscopic roux-en-y gastric bypass (50%, n=24) were the two types of surgeries done in our population. the risk of developing atrial fibrillation, calculated preoperatively, was 7-fold higher in females and 4-fold higher in males when compared with the ideal risk for each category. at 12 months follow-up the preoperative risk was 11.14±15.45%, with an absolute risk reduction of 2.03% corresponding to a relative risk reduction of 18.22%, with males having a more significant change at 12 months follow-up. these findings and the electrocardiographic changes at 12 months follow-up are better described in background: the sleeve gastrectomy (lsg) is the most popular procedure worldwide to treat obesity. among those who are obese, gerd has a prevalence of 39.8 percent. many surgeons do not perform lsg in these patients because only 34.6 percent of symptomatic patients showed resolution of gerd-like symptoms after concomitant sleeve gastrectomy with hiatal hernia repair. many surgeons perform the gastric bypass on gerd patients with hiatal hernias because they believe it is superior for the resolution of gerd. when they do this they overlook the many long-term complications associated with gastric bypass. also, many patients do not want the gastric bypass under any circumstances.
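the atrial fibrillation risk-reduction figures above are internally consistent; a quick sketch using only the reported numbers (relative risk reduction = absolute risk reduction / baseline risk):

```python
# relative risk reduction (rrr) from the reported baseline risk and
# absolute risk reduction (arr) at 12 months follow-up
baseline_risk = 11.14  # preoperative risk, %
arr = 2.03             # absolute risk reduction, %
rrr = arr / baseline_risk * 100  # relative risk reduction, %
print(f"rrr = {rrr:.2f}%")  # rrr = 18.22%, matching the reported value
```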
surgeons need to be open to finding better ways to reduce the high recurrence rates of gerd after lsg. materials and methods: this is a single-institution, multi-surgeon, retrospective study involving 73 morbidly obese patients in a prospectively kept database from january of 2015 through july of 2017. these patients all had gerd with preoperatively identified hiatal hernias on egd. all patients were dependent on anti-reflux medications. there were 9 (12.4%) males and 64 (87.6%) females. bmi ranged from 35 to 63. hiatal hernias measured from 2 cm to 8 cm. all lsg patients received a primary crural closure, with or without gore bio a mesh placement, at least 6 weeks prior to the sleeve gastrectomy. post-operatively, patients were interviewed for gerd symptomatology and anti-reflux medication dependency. results: of the 73 patients, 53 (72.60%) had resolution of gerd-like symptoms and were off all anti-reflux medications after the staged hiatal hernia repair and sleeve gastrectomy. 13 patients (17.80%) had improvement of gerd but remained dependent on anti-reflux medication. 7 patients (9.60%) had no resolution or improvement of gerd. there was one post-operative complication of laryngospasm with pulmonary edema status post extubation. there were no mortalities in the series. conclusions: in this study, staged hiatal hernia repair, at least 6 weeks prior to sleeve gastrectomy, doubled the published rate of gerd resolution from 34% to 73%. 90% showed improvement in symptoms at one year. this rate is comparable to gerd resolution after gastric bypass. this may be an alternative approach to hiatal hernias in morbidly obese patients with gastroesophageal reflux disease who do not want a gastric bypass. background: bariatric surgery is a common procedure in general surgery. gastric bypass has been performed laparoscopically for over two decades and multiple techniques are described.
the circular stapled anastomosis, one of the earliest methods for gastrojejunostomy, is performed in two ways: a transoral method to introduce the anvil and a transabdominal approach developed later. the former technique requires passing the anvil of the circular stapler through the mouth, down the esophagus, and into the gastric pouch. in the latter method, a gastrotomy is made, the anvil is introduced, and the gastrotomy is stapled off, creating the gastric pouch. this study aims to objectively compare the two methods of circular stapled gastrojejunostomy in terms of surgical site infection (ssi) rate. methods: a retrospective chart review of patients undergoing laparoscopic roux-en-y gastric bypass with one of two surgeons at a bariatric center of excellence in an academic hospital from january introduction: laparoscopic sleeve gastrectomy (lsg) has become the most commonly performed procedure in the treatment of morbid obesity, but there is significant variability in its performance. from national database analysis, more restrictive sleeve construction, based on smaller bougie size, has not correlated with greater weight loss. we hypothesize that bougie size is not reflective of actual restriction, or that sleeve restriction does not correlate with weight loss. we performed qualitative and volumetric analysis of immediate post-sleeve contrast studies to determine the association of sleeve restriction with post-operative weight loss and complications. methods: between 2010 and 2015, 222 patients underwent immediate post-sleeve contrast studies. based on standardized vertebral body height assessment by preoperative chest radiograph, sleeve diameter at intervals (including the narrowest point) was measured in mm, and the volume above the narrowest point of the sleeve was calculated. sleeve shape was assumed to be a dual-tiered or simple truncated cone based on morphology.
sleeve restriction, morphology and volumetric analysis were associated with clinical outcomes including complications, post-op symptoms, and weight loss at 6 months. background: variability in surgical technique resulting in narrowing at the incisura angularis, twisting along the staple line, and retention of the gastric fundus has been implicated in increased gastroesophageal reflux disease (gerd) following laparoscopic sleeve gastrectomy (lsg). standardizing creation of the sleeve based on anatomic landmarks may help produce more consistent sleeve anatomy and improve outcomes. methods: a retrospective review of all patients undergoing lsg from january 2016 to november 2016 at a single institution specializing in bariatric surgery was performed (n=271). patients underwent either traditional lsg with use of a 40f suction bougie to guide creation of the sleeve (n=156) or anatomy-based sleeve gastrectomy (abs, n=115). abs was performed using a gastric clamp to maintain predetermined distances from key landmarks (1 cm from the gastroesophageal junction, 3 cm from the incisura angularis, 6 cm from the pylorus) during stapling. patient demographics, perioperative characteristics, and post-operative outcomes were compared using chi-square and student's t-tests as required. helicobacter pylori (hp) is prevalent in up to 50% of the population worldwide, with increased rates observed in the bariatric population. bariatric surgery has seen a rapid expansion over the last 20 years with the growing rates of severe obesity. higher hp rates are thought to be associated with increased rates of postoperative complications, including increased marginal ulceration and leak rates. accordingly, some bariatric centers have adopted routine pre-operative screening and hp eradication programs. yet, while hp's correlation with gastritis and malignancy has now been well defined, its impact on patients undergoing bariatric surgery remains unclear.
background: the risk of developing a hiatal hernia in the obese population is 4.2-fold that of patients with a bmi <30. most hiatal hernias after bariatric surgery are asymptomatic, and when symptoms are present they may be difficult to differentiate from overeating or maladaptive eating habits. the aim of this study was to define the risk and symptoms associated with a hiatal hernia in the post-bariatric surgery cohort. methods: a retrospective review of prospectively collected data for patients who underwent laparoscopic hiatal hernia repair and who previously had primary roux-en-y gastric bypass (rygb) or sleeve gastrectomy (sg). data collection spanned a five-year interval (7/2012-6/2017). preoperative and follow-up data were collected from medical records and questionnaires in the clinic or by telephone. variables obtained include age, gender, psychiatric history, pre-index procedure bmi, pre-hiatal hernia repair bmi, post-hernia repair bmi, pre- and post-operative symptoms, and associated morbidity. all hiatal hernia repairs were done laparoscopically, with posterior cruroplasty after circumferential hiatal dissection. results: we identified 30 patients with a symptomatic hiatal hernia who had previously (range: 1-23 years) undergone bariatric surgery. fourteen rygb patients presented at a mean of 10.7 years compared to 16 sg patients who presented at a mean of 3.4 years after the index procedure. diagnosis was by a combination of ugi (67%), ct scan (50%) and egd (27%). mean follow-up was 8.6 months (range: 1-32 months). laparoscopic hiatal hernia repair was successfully performed in all 30 patients with 0% mortality. dysphagia and regurgitative symptoms markedly improved in >85% of patients; however, nausea, vomiting and abdominal pain were not changed in 20-30% of patients (figure).
conclusion: hiatal hernia following bariatric surgery is a rare but important cause of bloating manifested as nausea and vomiting, abdominal pain, regurgitation or reflux, and food intolerance or dysphagia (barf), and should be further evaluated with imaging or endoscopy when present. laparoscopic repair of hiatal hernia is warranted and results in resolution of symptoms in the majority of symptomatic patients. mid-term outcomes of sleeve introduction: obese patients suffer from multiple organ comorbidities which contribute to a shortened lifespan. one of the effects of obesity is thought to be pseudotumor cerebri, which is secondary to an increase in intracranial pressure (icp) in the absence of an obstruction. over the past two years, we have measured icp after insufflating with a laparoscopy device. we found that icp increases dramatically and that it correlates with the amount of insufflation in the abdomen. over the years, there have been studies on obese patients and intra-abdominal pressure. these studies have shown that some obese patients have an intra-abdominal pressure of 15-18 mmhg. increasing intra-abdominal pressure is thought to escalate intracranial pressure (icp). the objective of this pilot study was to observe the change in icp after raising intra-abdominal pressure. method: in this retrospective chart review preliminary study, icp was measured by manometer in patients with either normal-pressure or high-pressure hydrocephalus receiving a ventricular shunt. once the shunt was placed into the ventricle, we attached a manometer to measure the opening pressure. after we accessed the abdominal cavity using the standard optiview technique, we created a pneumoperitoneum. after achieving an intra-abdominal pressure of 15 mmhg, we measured the icp using the manometer. spss software version 24 was used for data analysis. a paired t-test was applied to icp before and after the procedure.
introduction: postoperative bleeding represents an infrequent, yet serious complication after bariatric surgery. differences in the rate of postoperative bleeding reported for the two most common weight loss procedures-laparoscopic roux-en-y gastric bypass (lrygb) and laparoscopic sleeve gastrectomy (lsg)-are ostensibly confounded by patient- and surgeon-specific preoperative, intraoperative and postoperative factors, in particular by the utilization of staple line reinforcement or oversewing. with this understanding, we aim to use a large national database to definitively characterize differences in bleeding rates between lsg and lrygb. conclusions: after appropriate risk-matching, lsg patients have a reduced likelihood of a postoperative bleeding event compared to those undergoing lrygb. this difference is likely more pronounced with intraoperative securing of the staple line via oversew, buttress or an alternative method. these findings from a large national database represent an important consideration for surgeons and patients alike when evaluating the appropriate bariatric operation. background: bariatric surgery has been shown to be the most effective treatment, with documented improvement in obesity-related comorbidities. the type of health insurance coverage plays an important role in access to bariatric surgery, but might also affect postoperative outcomes. the objective of this study is to determine whether there is a difference in outcomes based on the type of insurance 12 months after bariatric surgery. methods: we retrospectively reviewed all the patients that underwent bariatric surgery at our institution from 2010 to 2016. we divided the patients into two groups based on the type of insurance: private (group one) and public (group two). we compared demographics and 12-month outcomes between the groups, using t-test for continuous variables and chi-square for categorical variables.
we also compared 12-month estimated bmi loss between 8 different private insurances using anova. introduction: bariatric surgeons are now performing primary and revisional procedures on the extremes of age. there is controversy surrounding the safety and effectiveness of bariatric surgery among older age groups compared to younger age groups. to address this knowledge gap, we designed a study assessing short-term bariatric surgery outcomes among various age groupings across a large national database. methods and procedures: de-identified patient data from 2015 from the mbsaqip registry were used. age groupings were organized into young, middle-aged, and older adults (in years) as follows: <40, 40-60, and >60, respectively. the following 30-day outcomes were evaluated between all possible pairwise age groupings: mortality, surgical site infection (ssi), and readmission; logistic regression was used to compare outcomes between age groupings controlling for primary vs. revisional index operation, patient factors, and procedure factors. a p value of less than .05 was deemed statistically significant. results: a total of 168,058 patients were identified (age range: 13 to >80); 86% (n=144,507) underwent primary bariatric operations while 14% (n=23,551) underwent revisional cases. older adults had significantly worse outcomes than middle-aged and younger adults, respectively, for over 100 comparisons across all 3 outcomes; in contrast, younger adults had significantly worse outcomes than middle-aged adults for only 14 comparisons across ssi and readmission. for primary bariatric cases, older adults had significantly higher mortality rates than middle-aged and younger adults, respectively, in the following categories: asa 3, laparoscopic sleeve gastrectomy (lsg), or laparoscopic roux-en-y gastric bypass (lrygb).
for revisional cases, older adults had significantly higher mortality rates than middle-aged and younger adults, respectively, in the setting of female gender, caucasian race, or asa 3. regarding ssi, older adults undergoing primary lrygb had significantly higher organ space infections compared to younger adults. in addition, older adults who had revisional lrygb had significantly higher deep surgical site infections compared to middle-aged adults. following primary bariatric cases, older adults had significantly higher readmission rates compared to younger adults in the presence of male gender, caucasian race, asa 3, copd, or after lsg. following revisional cases, older adults had significantly higher readmission rates than middle-aged and younger adults, respectively, in the setting of pre-operative chronic steroid use. conclusions: overall, older adults had worse short-term outcomes compared to their younger counterparts following primary and revisional cases. further research is required to investigate these findings with the goal of targeting interventions to improve outcomes among bariatric surgical patients. background: the obesity epidemic in the united states has been accompanied by a surge in bariatric surgery. nearly 200,000 bariatric procedures were performed in the us in 2015, 23% of which involved roux-en-y gastric bypass (rnygb). while rnygb has proven an effective tool in combating obesity, it also alters a patient's anatomy in a way that makes traditional ercp a difficult, if not impossible, option for interrogating the common bile duct. one way to approach the post-rnygb patient with obstructive jaundice is to access the peritoneal cavity via a laparoscopic/robotic approach followed by direct cannulation of the gastric remnant with a laparoscopic port, allowing passage of an endoscope. the aim of this study was to evaluate our single-center experience with minimally invasive transgastric ercp (tg-ercp) from 2010 to 2017.
methods: we compiled a list of all patients who underwent laparoscopically or robotically assisted tg-ercp at our institution from 2010-2017. we then examined patient demographics, procedural details, postoperative outcomes, and success rate, with success defined as cannulation of the ampulla, clearance of obstruction if present (stones/sludge/stenotic ampulla), and completion imaging of the biliary and pancreatic ducts. results: 40 patients were included in the study. 2 cases were performed robotically (5%) and 38 laparoscopically (95%). ercp was successful in 36 cases (90%). all 4 unsuccessful attempts were aborted when the endoscopist was unable to pass the scope through a tight pylorus. median time of operation was 163 minutes (199 minutes if concomitant cholecystectomy was performed, 159 minutes if not). median length of stay after operation was 2 days (range 1-14 days). median estimated blood loss (ebl) was 50 ml. post-ercp pancreatitis occurred in 3 patients (8.3%), and was mild and self-limited in all cases. 2 patients had postoperative bleeding requiring transfusion; both of these had concomitant cholecystectomy. discussion: in patients with biliary obstruction and anatomy not suitable for traditional ercp, tg-ercp is a viable option. it can be performed in a minimally invasive fashion (either laparoscopically or robotically) with a high success rate and low morbidity. as the population of patients who have undergone rnygb continues to grow, so does the likelihood of encountering one with obstructive jaundice. tg-ercp, therefore, should be thought of as an essential tool in the armamentarium of the general surgeon. introduction: primary palmar hyperhidrosis (ph) is a pathological condition in which the body produces an excessive amount of sweat. this disorder decreases patients' quality of life. thoracoscopic sympathectomy is a minimally invasive and effective procedure to treat hyperhidrosis.
the optimal level of sympathectomy for the best outcomes has been debated. many researchers have studied short-term outcomes, but no empirical research has evaluated the long-term outcomes of thoracoscopic sympathectomy in thailand. this study aimed to evaluate and compare the long-term clinical outcomes between patients who underwent t3 and t4 thoracoscopic sympathectomy for ph, with particular attention to patient satisfaction and quality of life. methods and procedures: sixty patients with ph underwent thoracoscopic sympathectomy. patients were divided into two groups by the level of thoracoscopic sympathectomy, a t3 group and a t4 group. they were assessed for improvement of sweating, compensatory sweating, satisfaction and quality of life. the long-term investigation was designed to examine clinical outcomes before surgery, six months after surgery, 1 year after surgery, 3 years after surgery, and at last follow-up, compared within and between the t3 and t4 groups. patients were subjected to telephone interview using multiple questionnaires to investigate surgical outcomes, degree of satisfaction, and quality of life improvement. results: sixty patients responded to the telephone interview. patient demographic data and the recurrence rate of ph between the t3 and t4 groups were not significantly different (p=0.353). both groups improved in severity of sweating without any statistically significant difference. however, t4 thoracoscopic sympathectomy led to a significantly lower incidence of compensatory hyperhidrosis when compared with the t3 group at back and trunk sites. the t4 group had higher overall satisfaction than the t3 group, which was not significantly different. long-term results were followed after 3 years. conclusions: there was no difference in decreasing severity of sweating between the t3 and t4 levels of thoracoscopic sympathectomy. both groups equally achieved patient satisfaction.
however, the t4 level of thoracoscopic sympathectomy had significantly lower severity of compensatory hyperhidrosis and better quality of life over the long term. introduction: acute pancreatitis due to trauma is the commonest cause of pseudocyst in the pediatric age group. due to the limited literature available and underdiagnosis by pediatricians, the true incidence of pseudocyst in the 4-12 age group is not known. material and methods: a retrospective analysis of 10 pediatric (4-12 years) patients who underwent laparoscopic cystogastrostomy at a district teaching hospital was done. patient data, presentation, investigations, operation done and post-operative course were studied. result: the 10 patients (8 males & 2 females) had a mean age of 6.5 years and a mean weight of 25 kg. etiologies included blunt abdominal trauma (6), idiopathic (3), and gallstones (1). average cyst diameter was 6.5 cm. laparoscopic cystogastrostomy by the transgastric approach was successfully completed in all 10 cases with no conversion. cystogastrostomy was performed using sutures in 5 patients and an ultrasonic energy device in 5 patients. the gastrotomy was closed with sutures in all 10 cases. mean operative time was 98 minutes. post-operative imaging at 3 months revealed no persistence or recurrence of the cyst. conclusion: the minimally invasive laparoscopic approach for chronic pancreatic pseudocyst in the pediatric age group is a safe and effective strategy and should be adopted as the primary modality of treatment. introduction: videoscopic neck surgery is developing despite the fact that only potential spaces exist in the neck. gagner first described endoscopic subtotal parathyroidectomy with constant co2 gas insufflation for hyperparathyroidism in 1996. the cervical approach utilizes small incisions in the neck, thus making it cosmetically unacceptable, and cannot be used for lesions greater than 4 cm. the axillary approach makes it difficult to visualize the opposite lobe.
the anterior chest wall approach utilizes port access at various positions on the anterior chest wall depending on the surgeon. this technique also allows bilateral neck exploration. hence we have been able to perform total thyroidectomies with central compartment clearance for papillary carcinoma and near-total thyroidectomies for large multinodular goiters. materials and methods: three incisions; subplatysmal plane pneumoinsufflation with carbon dioxide (co2); ports created in the subplatysmal plane; dissection begins at the inferior pole; posterior dissection; clipping of the superior thyroid vessels; specimen freed up. thyroid lobectomy was performed in the twenty cases. the average blood loss was 40 ml, and mean operative time was 85 min. there were no complications and no cases were converted to open. there were no cases of recurrent laryngeal nerve injury or postoperative tetany. no subcutaneous emphysema, ecchymosis or hypercarbia was observed in any patient. all patients were discharged on the second postoperative day except the first, on the fifth day. in conclusion, this approach seems to be safe in the case of unilateral lobectomy, but it is early to say it is superior to conventional thyroidectomy, especially in total thyroidectomy. introduction: laparoscopic sleeve gastrectomy (lsg) is one of the most commonly performed weight loss surgeries. prolonged hospital admissions are associated with both increased morbidity and mortality and increased strain on the health care system; studies are now investigating the safety and feasibility of outpatient lsg. this study examined a single surgeon's postoperative admission trends for patients who underwent lsg. the patients were divided into two cohorts based on the date of surgery, and we hypothesize institutional experience has a significant impact on postoperative stay and hospital readmission rate. methods: this is a retrospective study of lsgs performed by a single surgeon in a tertiary center from 2012-2017.
inclusion criteria: patients >18 years old, bmi >35 with comorbidities or bmi >40, and patient approval by the bariatric surgical program in victoria, british columbia. patients with prior weight-loss surgery were excluded. patients were discharged home on a care plan involving nurse and surgeon telephone follow-ups within one week post-surgery. patients were divided into two cohorts: cohort a (procedures between 2012-2014 inclusive) and cohort b (procedures between 2015-2017 inclusive). results: 323 patients were included in this study: 265 females (82.0%) and 58 males (18.0%). the mean preoperative age was 46.8±10.5 years, and the mean preoperative bmi was 45.4±5.72 kg/m2. the average postoperative discharge day for the population was day 1.69±0.85 and the average or time was 53.9±20.6 minutes. one patient in cohort b was re-admitted pod8 with a diagnosis of postoperative edema managed conservatively and is included in the analysis as pod1. a second patient in cohort b returned to hospital (pod21) for abdominal pain and was managed conservatively as an outpatient. conclusion: there was a significant difference in the average postoperative discharge day between patients in cohort a and cohort b who underwent lsg, with patients in cohort b requiring a shorter average admission time. this study suggests that with increasing institutional experience and a postoperative discharge plan, patients undergoing lsg may be discharged on postoperative day one safely. surg endosc (2018) introduction: minimally invasive techniques have revolutionized the art of surgical practice. the laparoscopic approach to cholecystectomy has become the gold standard and is the most common laparoscopic general surgery procedure worldwide. in an effort to further enhance the advantages of laparoscopic surgery, even less invasive methods have been attempted, including smaller and fewer incisions.
the objective of this study was to describe our results from 22 years of needlescopic cholecystectomy. methods: since march 1995, all patients who underwent the needlescopic cholecystectomy micro-laparoscopic procedure with 3 mm instruments were included in this study in a prospective database and the information was analyzed. results: between march 1995 and september 2017, 638 needlescopic cholecystectomies were done at texas endosurgery institute in san antonio, texas by a single surgeon. 86% of the patients were female. the average age was 41.9 (range of 14-82 years old). average operating time was 59.3 minutes (range of 30-200 minutes). the 200-minute operation required laparoscopic cbd exploration, accounting for the extended time. average estimated blood loss (ebl) was 15 cc (range of 5-50 cc). 2% of cases required conversion to standard 5 mm cholecystectomy and were completed without incident. all patients were followed up at 2 weeks, 4 weeks, and 6 months after the procedure. only 1 patient presented with a hernia at the umbilical site. otherwise no wound, bile duct, bile leak, bleeding or thermal injury complications were identified. conclusions: the micro-laparoscopic approach with 3 mm instruments in this specific procedure of needlescopic cholecystectomy is safe and feasible, and is a cosmetic alternative to standard laparoscopic cholecystectomy. there are still few reports about thyroid cancer cases in toetva. this study reviews all cases of thyroid cancer in which surgery was performed. there were 47 cases of toetva in thyroid cancer and 7 cases of opened thyroidectomy. objective: to review and report the surgical outcome, complications, post-surgical treatment and recurrence in all cases of thyroid cancer surgery, especially with the toetva technique.
material and methods: from march 2014-july 2017 in police general hospital, a total of 680 patients underwent toetva, with 47 cases of toetva in thyroid cancer and 7 cases of opened thyroid surgery in thyroid cancer. all patients were recorded across multiple parameters. results: this study had a total of 54 thyroid cancer cases, of which 7 (13%) were male and 47 (87%) were female, with an average age of 38. the most common clinical presentation was a thyroid mass or nodule, seen in 52 cases (96.3%); 1 case (3.7%) was non-toxic goiter and 1 case (3.7%) was graves disease. the mean duration of clinical presentation was 2.6 years (2 weeks-13 years). there were 36 cases (66.7%) with a mass in the right lobe, 15 cases (27.8%) with a mass in the left lobe, and 3 cases (5.6%) with masses in both lobes. the size of the thyroid mass was 3.5±2.3 centimeters (1-15 centimeters). 49 cases (90.7%) were euthyroid, 1 case (1.85%) had subclinical hyperthyroidism, 2 cases (3.7%) had subclinical hypothyroidism, and 2 cases (3.7%) had hyperthyroidism. for type of surgery, there were 47 cases (87.04%) of toetva and 7 cases (12.96%) of opened total thyroidectomy. most patients, 41 cases (75.9%), did not have any post-operative complication; there were 5 cases (9.35%) of hypothyroidism, 6 cases (11.1%) of asymptomatic transient hypocalcemia, and 2 cases (3.7%) of transient hoarseness. after toetva was performed, 24 cases (44.4%) underwent redo completion thyroidectomy: 19 cases (79.2%) transaxillary completion thyroidectomy, 4 cases (16.7%) redo toetva, and 1 case (4.2%) declined reoperation. 18 cases (75%) did not have any complication after redo surgery; 3 cases (12.5%) had hypothyroidism, 2 cases (8.32%) had hypocalcemia and hypoparathyroidism, and 1 case (4.2%) had transient hoarseness. after thyroidectomy, neck ultrasound showed that 47 cases had no residual or recurrent thyroid mass and 7 cases had residual thyroid tissue. all cases received radioactive iodine ablation.
radionuclide total body scan showed no evidence of distant functioning metastasis. conclusion: three-year short-term follow-up of toetva in thyroid cancer has shown few complications and no cancer recurrence. objective of the study: sentinel node navigation surgery (snns) in gastric cancer has been investigated for almost two decades in an effort to reduce operative morbidity. indocyanine green (icg) with enhanced infrared visualization is one technique with increasing evidence for clinical use. we are the first to systematically review and perform meta-analysis to assess the diagnostic utility of icg and infrared electronic endoscopy (iree) or near infrared fluorescent imaging (nifi) for snns exclusively in gastric cancer. methods and procedures: a search of the electronic databases medline, embase, scopus, web of science and the cochrane library using search terms "gastric/stomach" and "tumor/carcinoma/cancer/neoplasm/adenocarcinoma/malignancy" and "indocyanine green" was completed in may 2017. all human, english language randomized control trials, non-randomized studies, and case series were evaluated. articles were selected by two independent reviewers based on the following major inclusion criteria: (1) diagnostic accuracy study design; (2) indocyanine green was injected at the tumor site; (3) iree or nifi was used for intraoperative visualization. the primary outcomes of interest were identification rate, sensitivity and specificity. 327 titles or abstracts were screened after removing duplicates. the quality of all included studies was assessed using the quality assessment of diagnostic accuracy studies-2. results: ten full text studies were selected for meta-analysis. a total of 643 patients were identified, with the majority of patients possessing t1 tumors (79.8%). pooled identification rate, diagnostic odds ratio, sensitivity and specificity were 0.99 (0.97-1.0), 380.0 (68.71-2101), 0.87 (0.80-0.93) and 1.00 (0.99-1.00) respectively.
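as context for the pooled figures above, the diagnostic odds ratio combines sensitivity and specificity from a 2x2 table; a minimal sketch with illustrative counts (not the study's data) chosen only to reproduce the pooled sensitivity of 0.87:

```python
# diagnostic odds ratio (dor) from a 2x2 diagnostic-accuracy table
# counts below are illustrative, not taken from the meta-analysis
tp, fn = 87, 13   # true positives, false negatives -> sensitivity 0.87
fp, tn = 1, 99    # false positives, true negatives -> specificity 0.99

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
dor = (tp / fn) / (fp / tn)  # odds of a positive test in disease vs. no disease

print(sensitivity, specificity, round(dor, 1))
```

note that as specificity approaches 1.00 (as in the pooled estimate), the false-positive odds shrink and the dor grows very large, which is why pooled dor confidence intervals in such meta-analyses are wide.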
the summary receiver operator characteristic for icg+iree/nifi demonstrated a test accuracy of 98.3%. subgroup analysis found improved test performance for studies with low-risk quadas-2 scores, studies published after 2010, and submucosal icg injection. iree had improved diagnostic odds ratio, sensitivity and identification rate compared to nifi. heterogeneity among studies ranged from low (i² < 25%) to high (i² > 75%). conclusions: the idea of snns in gastric cancer is intriguing because of the potential to limit operative morbidity. we found encouraging results regarding the accuracy, diagnostic odds ratio and specificity of the test. the sensitivity was not optimal but may be improved by a carefully planned and strict protocol to augment the technique. given the limited number and heterogeneity of studies, our results must be viewed with caution. objective: to evaluate the feasibility, cost effectiveness and safety of single incision laparoscopic surgery using routine laparoscopy instruments. method: 64 cases of acute appendicitis and 56 cases of symptomatic gallstone disease were included; in total, 120 cases were enrolled and a prospective observational study was performed. ruptured appendicitis/abscess formation was excluded from the study; similarly, empyema of the gallbladder/gallbladder perforation was also excluded. results: a total of 120 cases were included: 64 cases of appendicitis and 56 cases of symptomatic cholelithiasis. mean age of the appendectomy group was 28.71±9.69 years and mean age of the cholecystectomy group was 36.71±10.48 years. in our study, mean operative time for sil appendectomy was 42.04±5.74 min. post-operative fever was noted in 10 cases (14.25%). mean post-operative pain per vas score taken after 24 hours, on pod 2, was 2.14. average post-op stay in hospital was 2.14 days; port site infection occurred in one case (4.17%). patient satisfaction score obtained on a scale of 1-10 at one-month follow-up was 7.95, while the scar cosmesis score was 7.9. 
in our study, 56 cases underwent sil cholecystectomy, of which 21 were male (36.8%) and 35 were female (41.2%), and the mean age of patients was 36.71 years. mean operative time in our study was 75.21 min, mean post-operative pain taken on pod 2 per vas score was 2.91, mean post-operative hospital stay was 2.1 days, and port site infections occurred in 2 cases. post-op fever was noted in 6 cases; the post-operative patient satisfaction score obtained at 1-month follow-up was 7.73, with a scar score of 7.84 on a scale of 0-10. no case required drain placement or conversion. conclusion: sils can be performed using conventional laparoscopic instruments, especially in a government setup where the per capita economic burden to the patient will be less. though it has a longer operative time, it has a comparably shorter post-operative hospital stay, causes less pain, and has significantly more patient satisfaction regarding the post-operative scar and cosmesis. since sils has more patient acceptance and satisfaction, it can be offered to all patients undergoing laparoscopic surgery. it is very useful in a government setup where patients of lower economic class will also benefit, irrespective of unavailability of special instruments and financial constraints, as it can be performed using routine laparoscopic instruments. in the year 2009 we started to perform the pericardial window by laparoscopy for diagnosis of occult cardiac injury in precordial trauma; although, fortunately for our society, this type of injury has decreased considerably, we have accumulated an important number of patients, and in the last year we have performed the procedure for other types of pathologies and also diversified the approach route according to the case. objective: sharing experience accumulated over 8 years in the practice of the pericardial window by laparoscopy or thoracoscopy. 
material and methods: description of cases. results: during this period, we have performed 65 cases of laparoscopic pericardial window with two single ports for the diagnosis of cardiac injury in precordial trauma; additionally, 15 windows were performed for trauma, of which 4 led to treatment of cardiac injury by this route, without an open approach. in another scenario, we have performed 8 pericardial effusion treatments for different causes by a minimally invasive route. no complication or mortality associated with the procedure has been encountered. conclusions: the pericardial window performed by minimally invasive surgery is an effective, replicable strategy for diagnosis and for the medical and traumatic treatment of this pathology. patient selection is key, and working in multidisciplinary groups guarantees good results. introduction: for transabdominal preperitoneal repair (tapp) of groin hernia, single port surgery (sps) has been reported to reduce abdominal wall damage. to reduce the length of the umbilical scar and to keep the view of triangulation, we use one needle forceps plus sps. patients and methods: from may 2014 to july 2017, 168 consecutive tapp patients were retrospectively investigated. there were 139 males and 29 females. we use two 5 mm ports (1 for the scope and 1 for the operator's right-hand forceps) through an umbilical multi-channel port, and an additional 3 mm needle instrument is pierced above the pubic bone. a 5 mm flexible scope allowed us to keep the triangular formation easily. we studied the safety and usefulness of this method from the viewpoints of operation time and complications. results: median operation time for single-side hernia (135 cases) was 77 min (38-152) and for bilateral cases (33 cases) was 139 min (91-269). 
five cases needed one or two additional 5 mm ports, and one case with severe preperitoneal adhesion due to previous prostate cancer surgery was converted to the open method because of venous bleeding. other complications were 2 spermatic cord injuries and 3 postoperative seromas that required percutaneous puncture. umbilical scars and the pierced needle instrument scars became gradually invisible within 1 or 2 months. there were no incisional hernias or wound infections in our series. these data were comparable to conventional laparoscopic hernia repairs. conclusions: operation scars of this method had better cosmesis than conventional tapp or sps tapp, and there were no differences between our sps-tapp with one needle forceps and the conventional method in operation time and complication rate. our method was demonstrated to be a less invasive approach for laparoscopic groin hernia repair. clinical application: the fj clip is a stainless steel clip that can be used to hold organs in the abdominal cavity. it is available in two sizes: 5 mm and 12 mm. the device is short, it has a strong grasp, and it causes no or only negligible organ damage. we have used the fj clip in the performance of local gastric excision (n=13), colectomy (n=8), and cholecystectomy (n=50) with no resulting difficulty. the f loop plus is a 21g stainless steel loop-like device into which we can insert φ0.1 mm nt alloy thread, which we draw out extracorporeally via simple puncture. laparoscopic total and proximal gastrectomy: we made a small incision at the umbilicus and inserted a 12-mm camera port and 6-mm metal cannula. we placed two (left and right) epigastric ports. retraction of the left hepatic lobe was easy with use of the 12-mm fj clip and a 6-mm penrose drain. for #4 lymph node dissection, we used the fj clip to grasp the upper part of the stomach and inserted the f loop plus from the upper right abdomen. for #6 dissection, we grasped the pyloric vestibule and pulled it leftward. 
for dissection of the upper edge of the pancreas, we grasped the left gastric arteriovenous pedicle and pulled it toward the abdomen. the fj clip's grasp and traction exerted on the stomach wall were strong and effective, and there was little organ damage. reconstruction (roux-y or double tract) was performed within the abdominal cavity by hand-sewn purse string suture of the esophageal stump, insertion of an anvil, and use of an automated anastomosis device. we have experienced 2 total and 3 proximal gastrectomy cases to date, but there have been no complications, and both intraoperative bleeding and operation time were within normal limits. conclusion: we believe the fj clip and f loop plus will replace conventional forceps for various tasks in reduced port gastrectomy. introduction: pulmonary anatomical resection is considered the standard treatment for early-stage lung cancer. uniportal video-assisted thoracoscopic surgery (uvats) has recently shown favorable surgical outcomes but remains technically demanding, especially in a complex procedure such as anatomic segmentectomy. needlescopic instruments facilitate complex laparoscopic surgeries with nearly painless and scarless postoperative outcomes; however, their utilization in thoracoscopic surgery has mostly been for minor procedures such as bullectomy and sympathectomy. we present our initial experience of lung cancer surgery performed by uniportal vats with additional needlescopic instruments, and we also compare the operative results with conventional uniportal vats. methods: from december 2016 to august 2017, 75 consecutive patients with lung cancer undergoing anatomical lung resections, including lobectomies and segmentectomies, were reviewed retrospectively. of these 75 patients, 39 received conventional uniportal vats (uvats) and 36 received needlescopic-assisted uniportal vats (na-uvats). we compared the peri- and post-operative outcomes in these 2 groups. 
results: there was no significant difference in demographic, anesthetic, or operative characteristics between the two groups except for age. the mean operation time was significantly shorter in the na-uvats group (198.8±86.8 min vs 159.3±55.4 min, p=0.023). intraoperative blood loss was significantly less in the na-uvats group (143.2±298.1 ml vs 40.9±56.7 ml, p=0.047). there were two major pulmonary arterial bleeding events and one conversion to thoracotomy in the uvats group. hospital stay, duration of chest tube drainage and post-operative pain scale were comparable in the two groups. conclusion: with the assistance of additional needlescopic instruments, uniportal vats can be performed more efficiently and safely without compromising its benefit of less postoperative pain and early recovery. purpose: we have applied the v-loc 90 to abdominal wall closure in single incision laparoscopic appendectomy (sila) since 2014. the aim of our study is to present our experience of the abdominal wall wound closure technique using barbed suture in sila and a comparison of perioperative outcomes with the conventional method of layer-by-layer abdominal wall closure after sila. methods: from august 2014 to june 2015, sila was performed on 160 patients with acute appendicitis at the department of surgery, hallym sacred heart hospital. under approval of the institutional review board, data concerning demographic characteristics, operative outcomes and postoperative complications were compared between the v-loc closure group and the conventional layer-by-layer closure group. in the v-loc closure group, after removing the appendix, the divided linea alba was closed using unidirectional absorbable barbed suture (v-loc 90 2-0) in a continuous running fashion, beginning at the end of the incision and coming back with a reinforced running suture. subcutaneous closure was also done using the same thread, and the subcuticular suture along the incision line was performed with the remaining portion of the v-loc. 
results: the demographic data of patient characteristics were similar between the two groups. the use of barbed suture significantly reduced the suturing time for abdominal wall closure (p=0.014) compared with conventional suture. the postoperative incision length was significantly shorter in the v-loc group than the conventional group (p=0.034). the rate of surgical site infection was similar in both groups. no incisional hernias were noted in either group with a median follow-up period of 25.2 months. the total costs of the procedure were comparable in both groups under the korean drg system. the use of barbed suture for abdominal wall closure in single port laparoscopic appendectomy is a safe and feasible method that reduces the suturing time, thereby decreasing the total operation time, and the incision length, with cosmetic benefit. angela m kao, md, michael r arnold, md, julia e marx, paul d colavita, md, b todd heniford; carolinas medical center. introduction: morgagni hernia is an anteromedial congenital diaphragmatic hernia seen in approximately 1 in 3000 live births and rarely identified in adulthood. patients may be asymptomatic, have intermittent symptoms, or present acutely with incarceration/obstruction. given this, surgical repair is recommended, but a standardized technique has not yet been described. methods: a prospectively collected hernia-specific database was queried for all adult morgagni hernias repaired at a tertiary hernia center. demographics and peri-operative data were compared. the most common (66.7%) method of repair included suturing mesh to the diaphragmatic portion of the defect and securing the anterior-inferior edge to the anterior abdominal wall with transfascial sutures and/or tacks. four patients (26.7%) underwent primary repair. average defect and mesh size was 37.2 cm² and 226.4 cm², respectively. three patients (20%) underwent a concomitant paraesophageal hernia repair. 
mean ebl and length of stay were 31 ml (range 10-125 ml) and 2.7 days (range 1-7 days). postoperative morbidity included transient postoperative hypoxemia (2 patients) and pleural effusion (1). there was no mortality, and no mesh complications or recurrences with a mean follow-up of 36 months. conclusions: morgagni hernia patients were more often older, obese, and women. these hernias remained unrepaired in 87% of patients despite their having had previous abdominal surgery. a laparoscopic or robotic approach offers an effective hernia repair with minimal complications, short hospital stay, and excellent long-term results for both elective and acute operations. mesh repair, sutured to the diaphragm and sutured/tacked to the abdominal wall, appears to be a very successful means of repairing larger defects. introduction: hydatidosis is a zoonotic disease caused by echinococcus granulosus. it is endemic in the mediterranean, south america and the middle east. it is a systemic disease wherein the lungs are the second most common organ involved, after the liver. radio-imaging plays an important role in diagnosing and determining the extent of the disease. surgical enucleation of the cyst has been the classical treatment for this disease. bilateral lung involvement has traditionally been treated by median sternotomy or a bilateral thoracotomy. video assisted thoracoscopic surgery (vats) is an effective surgical approach in such settings. materials and methods: at our center, we have operated on 67 cases of pulmonary hydatidosis thoracoscopically over the past 3 years. in all cases, the area around the cyst was cordoned off with 0.5% cetrimide-soaked gauze pieces. a pericystotomy is performed with ultrasonic shears and the germinal membrane is delivered en masse into an endo-bag. an air leak test after saline instillation into the cavity is a standard part of the procedure. for those cases with cysto-bronchiolar communications, the defect was sealed by either suturing or glue application. 
traditionally, bilateral cases and cysts larger than 10 cm in size were tackled by an open approach; but, in our experience, cyst size, bilaterality and presence of complications are not contraindications for vats. all cases are administered perioperative albendazole (400 mg twice a day, administered for three cycles of 21 days each, with a gap of 7 days in between), which helps in preventing recurrence and also takes care of any inadvertent intra-operative spillage. introduction: minimally invasive surgery (mis) is the standard approach for most of the surgical procedures performed by general surgeons. traditionally the majority of operations for trauma are performed open due to the complexity of the cases; however, trauma surgeons are expanding their armamentarium to include mis in a variety of acute procedures. we report our experience with the application of laparoscopy in a variety of trauma cases. methods: a retrospective review of trauma cases performed between 1/2012 and 1/2016. during that time, 52 laparoscopic cases were performed after traumatic injury. patient demographics, injury severity (iss), injury mechanisms, types of procedures and outcomes are described. means and standard deviations were calculated and t tests were performed. a p value of < .05 was considered statistically significant. results: demographics - a total of 52 trauma cases were performed laparoscopically during the study period. the majority were male (n=43) and the age was 29 sd 11. obesity was documented in 30%, hypertension or cad in 20%, and substance abuse in 44%. blunt trauma occurred in 35% and penetrating in 65%. the iss was 15 sd 9. surgical procedures - the majority, 85%, of the procedures were completed laparoscopically. non-therapeutic laparoscopy was performed in 36%. repair of diaphragmatic or traumatic abdominal wall hernias comprised 29%. hematoma evacuation and control of bleeding, 15%. control of solid organ bleeding and repair was performed in 11%. intestinal repair occurred in 9%. 
for the cases that required open conversion, iss was 20 sd 7 vs. 12 sd 9 for laparoscopic cases, p=0.04. outcomes: the overall length of stay was 5 days sd 6. there was n=1 late death in a poly-trauma patient who required open conversion for complex solid organ and intestinal injuries. there was n=1 case of community-acquired pneumonia and n=1 case of recurrent pneumothorax. conclusions: a descriptive series of trauma operations approached with mis techniques is described. this cohort had high injury severity and a predominance of comorbid conditions. laparoscopy was successfully applied in the majority of cases for a variety of therapeutic procedures, and mortality and morbidity were low. mis is safe and is gaining momentum for application in traumatic injury. objectives: laparoscopic distal gastrectomy for early gastric cancer is a standard treatment in japan, described in guidelines. the surgical procedure has been shifting from laparoscopy-assisted to completely laparoscopic surgery. in this study, we evaluated the outcomes and safety of laparoscopy-assisted distal gastrectomy. methods: for marking of the oral-side transection line, clipping at the oral side of the cancer lesion was performed by gastro-endoscopy before surgery. the lymph node dissection (d1+/d2) is performed laparoscopically. for dissection of the pancreatic superior region, the assistant holds the left gastric artery and keeps a good view by retracting the pancreas. the common hepatic artery and proximal side of the splenic artery are exposed. both sides of the left gastric artery and vein are exposed. the left gastric vein and left gastric artery are cut after clipping and sealing. lymph node dissection of the hepato-duodenal ligament is done and the right gastric artery is cut after clipping and sealing. the minor curvature of the upper gastric wall is exposed (no. 1, 3 dissection). billroth i reconstruction with the circular stapler (cdh) is performed. 
through an upper median incision of 5 cm, the operator pulls out the stomach and transects the oral side of the stomach with a linear stapler after palpating the clips. the duodenum is transected after purse string suture. gastroduodenal anastomosis is performed by cdh. results: two hundred cases were analyzed. the operation time, blood loss and conversion rate to open surgery were 175 minutes, 40 ml, and 1.0%, respectively. as postoperative complications, anastomotic failure, pancreatic fistula and postoperative bleeding were 2%, 1.5% and 1%, respectively. the reoperation rate was 2%. one surgical death due to cerebral infarction was experienced. there were no patients with positive ppm (pathological proximal margin) or excessive pm distance. frequency of abdominal wall incisional hernia and ileus were 1% and 0%, respectively. conclusion: although there is the disadvantage that a small laparotomy must be made in the upper abdomen, laparoscopy-assisted distal gastrectomy with billroth i reconstruction by our procedure is good enough from the viewpoint of the precision of the proximal margin and the incidence of serious complications. introduction: minilaparoscopy (mini) is a modality of minimally invasive surgery that attempts to produce less surgical trauma to the abdominal wall by reducing the diameter of surgical instruments to 3 mm. searching for better outcomes in inguinal hernia repair, surgeons have looked for new and less invasive alternatives such as single-incision surgery, single-port surgery and mini. minilaparoscopic transabdominal preperitoneal hernia repair (mini-tapp) demonstrates some of the known advantages of mini general surgery procedures such as enhanced visualization, improved dexterity and great cosmetic outcome. it is safe and reproducible since it does not differ from standard laparoscopy. 
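the diagnostic accuracy measures pooled in the sentinel-node meta-analysis above (identification rate, sensitivity, specificity and diagnostic odds ratio) all derive from the standard 2×2 contingency table. a minimal python sketch of these formulas, using hypothetical counts rather than the study's pooled data:

```python
# diagnostic accuracy metrics from a 2x2 contingency table, as pooled in
# the icg/iree meta-analysis above; the counts below are hypothetical.
def diagnostic_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    # diagnostic odds ratio: odds of a positive test with disease
    # divided by odds of a positive test without disease
    dor = (tp * tn) / (fp * fn)
    return sensitivity, specificity, dor

# e.g. 87 true positives, 13 false negatives, 1 false positive, 99 true negatives
sens, spec, dor = diagnostic_metrics(tp=87, fn=13, fp=1, tn=99)
```

note that a pooled specificity of exactly 1.00 (zero false positives) makes the raw dor undefined, which is why meta-analyses typically apply a continuity correction before pooling.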
introduction: the celiac plexus is a structure located in the retroperitoneum at the level of the lumbar vertebrae, in the prevertebral region, and has sympathetic fibers. in patients with advanced gastrointestinal cancer and associated pain, one of the management strategies is pain control. neurolysis of the celiac plexus by laparoscopy was first reported in humans in 2006 in patients with advanced pancreatic adenocarcinoma, with excellent results. experience will be shown in the simplification of the technique for the procedure. method: neurolysis of the celiac plexus was performed in 89 patients with advanced gastrointestinal cancer (stomach 52%, pancreas 23%, liver 14%, other 11%), with no complications associated with the procedure; pain improvement was achieved in 80% of patients after the procedure. the standardization of the technique by laparoscopy and its simplification has made this a procedure that is replicable and safe. description of the technique: patient in the french position, 3-trocar technique: a 10 mm umbilical trocar and 2 paraumbilical 5 mm trocars. staging laparoscopy is performed, with sampling if necessary. in the region of the lower curvature of the stomach, the celiac trunk and the emergence of the left gastric artery are identified, and 20 cc of 90% alcohol diluted by half are instilled into the lateral fatty tissue through a 22g pericranial needle under direct vision, verifying non-arterial instillation of the alcohol. there were no complications related to the procedure. results: we report the experience of one group who underwent celiac plexus neurolysis in 89 patients with advanced gastrointestinal cancer: gastric cancer 52%, pancreatic cancer 23%, liver cancer 14% and other 11%. the most frequent pathology report was adenocarcinoma. 80% of the patients had pain control at 24 hours, with sustained effects up to 6 months of follow-up, with a significant decrease in pain medication. 
only 1 patient required repeat laparoscopic neurolysis because of difficult-to-manage pain. the operative time of this procedure was 30 minutes. the standardization of the technique and the use of low-cost inputs make this type of procedure easily replicable, with good results in pain management in cancer patients. conclusions: mis is offered as one of the fundamental tools for the management of palliative procedures in gastrointestinal cancer. neurolysis of the celiac plexus, with standardization of the technique, use of low-cost elements, and the surgeon's skills, makes this procedure an option for the management and control of pain in patients with advanced gastrointestinal cancer; it is easily replicable, economical and safe. background: the non-absorbable polymer clip offers a solution to the disadvantages of the traditional metallic clip, which, due to its metallic properties, is not only expensive but also causes artifacts on imaging studies and often migrates into the cbd. this study compares the traditional standard metallic clip with the hem-o-lok clip used in laparoscopic cholecystectomy (lc) with regard to safety and efficacy. material and methods: this study includes 40 patients who underwent lc with metallic clips (mc) and 40 patients with hem-o-lok clips (hc). both clips were applied to the cystic duct and artery, then the gallbladder was dissected from the liver bed by diathermy. intraoperative and postoperative parameters were collected, including duration of the operation and complications. results: the median operative time was not statistically different between the mc and the hc group (89.33 vs 86.17 minutes, respectively; p=0.96), with no significant difference in the incidence of bile spillage (9 vs. 8, p=0.956). no statistically significant difference was found in the incidence of postoperative complications between the groups (1 vs. 2, p=0.97). no postoperative bile leakage was encountered in either group. 
conclusion: the hem-o-lok clip provides complete hemobiliary stasis and secure cystic duct and artery control. its cost effectiveness is also attractive, while it provides efficacy equivalent to that of the standard metallic clip. introduction: most blunt thoracoabdominal injury patients have multiple organ injuries. the plan of definitive treatment depends on the preoperative diagnosis. in isolated traumatic diaphragmatic injury without other organ injury, a laparoscopic approach is helpful, decreasing the length of hospital stay as well as wound complications. the authors describe the laparoscopic treatment of a patient who had rupture of the diaphragm from blunt trauma in an emergency setting. methods and procedures: a 56-year-old man presented after a motor vehicle accident; the mechanism of injury was blunt thoracoabdominal injury. he complained of chest tightness and tachycardia. complete evaluation and ct scan were performed. the stomach was herniated into the left chest and diaphragmatic rupture was found, with injury to neither great vessels nor solid organs. the laparoscopic approach was chosen and the left diaphragm was repaired with non-absorbable sutures without intraoperative complication. results: the patient was discharged 4 days post-operatively with full recovery. chest x-rays taken before discharge, in the out-patient department at 2 weeks, and 6 months after discharge showed no diaphragmatic herniation. conclusion(s): the laparoscopic approach in isolated traumatic ruptured diaphragm patients is safe and should be considered. short-term outcome of laparoscopy-assisted distal gastrectomy with roux-en-y reconstruction through mini-laparotomy for gastric cancer: since 1991, we have performed laparoscopy-assisted distal gastrectomy (ladg) with b-i reconstruction through mini-laparotomy. regarding reconstruction, roux-en-y reconstruction is also one of the choices in ladg; however, its technical feasibility has not been well documented so far. 
the purpose of this study was to compare the short-term outcome of ladg with roux-en-y reconstruction through mini-laparotomy to that of ladg with b-i anastomosis. between 1994 and 2014, 440 patients who underwent ladg for gastric cancer at oita university were enrolled in this retrospective study. since 2005, roux-en-y reconstruction has been performed as the standard method in our department. these patients were divided into two groups based on anastomosis: the roux-en-y (r-y) group (n=246) and the billroth i (b-i) group (n=194). baseline characteristics, operative results (including complications) and pathological results were evaluated. there was a considerably greater number of patients with advanced clinical stage and having ≥t3 invasion in the r-y group. estimated blood loss was lower in r-y than in b-i (p<.001) and operative time was longer in r-y than in b-i (p<.001). there were no significant differences in all-grade intra-operative complications (p=0.441). in addition, there were no significant differences in all-grade post-operative complications between the two groups except internal hernia. hospital mortality was 0% in each group. ladg with r-y reconstruction through mini-laparotomy was as technically feasible as ladg with b-i anastomosis. utilization of laparoscopy associated with blunt abdominal trauma: the nationwide inpatient sample 2004-2014. kenneth w bueltmann 1, marek rudnicki 2; 1 advocate illinois masonic medical center, chicago, il, 2 university of illinois. introduction: the incidence of trauma and its heavy burden upon the healthcare system remain high. paradigm shifts in the management of these cases have, however, improved mortality in such cases. it can be expected that improvements in management, when combined with the benefits of laparoscopy, will demonstrate positive impacts upon treatment outcomes. methods: the nationwide inpatient sample was referenced for inpatient stays for the years 2004 to 2014. 
abdominal trauma cases were selected and identified as hollow (ho) or solid organ (so) type, and as blunt or penetrating. the trauma subset was then scanned for the presence of discrete laparoscopic procedures, laparotomy, and converted cases, and flagged accordingly. conclusion: utilization of laparoscopy in the treatment of intra-abdominal solid and hollow organ injury increases over time. although the current analysis based on available hcup nis data includes any procedures done during post-traumatic hospitalization, its results can lead to the conclusion that minimally invasive techniques are being utilized in increasing fashion. introduction: single incision laparoscopic (sil) surgery is a laparoscopic procedure which leaves a single small incision in the navel, and has been reported to be less invasive than, and as safe and efficient as, conventional multiport laparoscopic (mpl) surgery. the long-term rate of incisional hernia after sils colectomy is unknown, and the risk factors for incisional hernia formation are not fully elucidated. methods and procedures: this is a retrospective study from a prospectively collected database. the investigation took place in a high-volume multidisciplinary tertiary private hospital in japan. introduction: the laparoscopic approach in the acute surgical care setting continues to be underutilized. we aim to report the successful diagnostic and therapeutic use of laparoscopy in the management of a non-toxic patient presenting with acute abdomen, and to highlight the benefits of a minimally invasive approach without added morbidity. case report: presented is a 52-year-old male with a history of cad s/p cabg x4 two years prior and no abdominal surgical history, who presented to the ed with sudden-onset severe, diffuse abdominal pain of six-hour duration with n/v. there was no trauma to the abdomen. he had mild-moderate hypertension but was otherwise hemodynamically stable. on examination, the patient was in severe distress and writhing in pain. 
a fast exam could not be performed secondary to pain. cta of the abdomen revealed mesenteric abnormalities with associated small bowel edema in the rlq, suspicious for small bowel ischemia. he was taken to the or for diagnostic laparoscopy. he was found to have an omental adhesive band to the abdominal wall with herniation of the small bowel through the small opening. approximately 70 cm of ischemic, nonviable small bowel was resected and anastomosed intracorporeally. he tolerated the procedure well and was discharged home on post-operative day 3. discussion: primary omental-related internal herniation of small bowel is exceedingly rare. there have been only a few cases reported in the literature (1, 2, 3, 4). two were diagnosed on exploratory laparotomy, one on diagnostic laparoscopy and one at autopsy. the patient who underwent diagnostic laparoscopy did not require bowel resection. in presenting this case, we hope to illustrate the role of laparoscopy in the management of acute abdominal pain due to bowel compromise. introduction: morgagni hernias are a rare finding in the adult population, and represent 1-3% of all congenital diaphragmatic hernias. multiple approaches to these rare hernias have been described in the literature. here we present a novel technique of laparoscopic trans-abdominal repair using a combination of the endo-close device (medtronic, minneapolis, mn) and the ti-knot (lsi solutions, victor, ny). methods: in a patient with a large left anterior diaphragmatic defect we performed trans-abdominal suturing utilizing the endo-close to perform primary closure of the defect, using the ti-knot to secure the pledgeted sutures along the anterior fascia. due to the size of the defect (7 × 10 cm) this primary repair was buttressed with polyester mesh. 
in a second patient with a smaller (6×8 cm) classic right-sided anterior diaphragmatic defect we similarly performed laparoscopic trans-abdominal suturing using the endo-close to traverse both the anterior and posterior fascia and the ti-knot to secure the sutures in order to perform a primary repair of the hernia. both patients had an uneventful postoperative course and no indication of recurrence at 4 months. conclusions: morgagni hernias present unique technical challenges. in our experience the combined use of trans-abdominal sutures with a laparoscopic knot replacement device allowed for completion of both cases laparoscopically with minimal tension on the repairs. feasibility of concomitant laparoscopic splenectomy and cholecystectomy in situs inversus totalis: first case report worldwide ibrahim a salama, md, phd; department of hepatobiliary surgery, national liver institute, menoufia university introduction: situs inversus totalis is a rare anomaly characterized by transposition of organs to the opposite side of the body. combined laparoscopic splenectomy and cholecystectomy in these patients is technically more demanding and needs reorientation of visual-motor skills. presentation of case: herein, we report a 16-year-old girl who presented with yellowish discoloration and left hypochondrial and epigastric pain, diagnosed as hereditary spherocytosis (hs). the patient had not previously been diagnosed with situs inversus totalis. the patient exhibited a left-sided "murphy's sign" and a palpable spleen in the right hypochondrium. the diagnosis of situs inversus totalis was confirmed with ultrasound, computerized tomography (ct) and magnetic resonance imaging (mri), showing an enlarged right-sided spleen and multiple gallbladder stones with no intra- or extra-biliary duct dilatation. the patient underwent combined laparoscopic splenectomy and cholecystectomy as treatment of hereditary spherocytosis (hs). 
discussion: the feasibility and technical difficulty of diagnosis and treatment in such a case pose a challenging problem due to the contralateral disposition of the viscera. difficulty in the laparoscopic technique is encountered in skeletonizing the structures in calot's triangle, which consumes more time than with a normally located gallbladder; the surgeon stands on the right side, changing to the left side during splenectomy. in a review of the up-to-date medical literature, this is the first case reported worldwide. conclusion: provided that the technique is performed by an experienced surgical team, concomitant laparoscopic splenectomy and cholecystectomy in situs inversus totalis is a safe and feasible procedure and may be considered for coexisting spleen and gallbladder disease, as in hereditary spherocytosis (hs). changes in the anatomical disposition of organs not only influence the localization of symptoms and signs arising from a diseased organ but also impose special demands on the diagnostic and surgical skills of the surgeon. objective: to identify the preference among medical students of the following surgical approaches: open surgery, conventional laparoscopy, minilaparoscopy (mini), single incision laparoscopic surgery (sils), natural orifice transluminal endoscopic surgery (notes), and robotic surgery. methods: an online google questionnaire was completed by 111 medical students of different years in medical school. before answering the questionnaire, they watched an online video showing the different techniques and their advantages and disadvantages. the questionnaire consisted of 18 questions about a hypothetical situation in which the participants were to undergo an elective cholecystectomy and could decide which technique they would prefer. all statistical analysis was performed using the r software program, version 3.3.1. the chi-squared test was performed for categorical variables where appropriate. 
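as an illustration of the chi-squared comparison named above, here is a minimal pure-python sketch (the study itself used r). the 2×2 cell counts are an assumption, back-calculated from the reported percentages of women and men choosing mini:

```python
# illustrative chi-squared test (2x2, yates-corrected) in pure python.
# cell counts are assumptions back-calculated from the reported
# percentages (85% of 60 women and 62.75% of 51 men chose mini).
a, b = 51, 9    # women: chose mini / chose another technique
c, d = 32, 19   # men:   chose mini / chose another technique
n = a + b + c + d
chi2 = n * (abs(a * d - b * c) - n / 2) ** 2 / (
    (a + b) * (c + d) * (a + c) * (b + d)
)
# df = 1, so chi2 > 3.841 corresponds to p < .05
print(round(chi2, 2), chi2 > 3.841)
```

with these toy counts the statistic lands near 6.1, i.e. significant at the .05 level; rounding in the reconstructed counts explains any difference from the exact p value the authors report.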
a p value <.05 was considered statistically significant. results: one hundred and eleven medical students answered the survey. 60 (54.05%) were female and 51 (45.95%) male. most of the students were between 19 and 22 years old (54.95%). they were in the first four years of medical school. when asked if they would consider notes or single incision even knowing that these are new procedures without completely established safety standards, 84.68% (94) answered that they would not consider them, with no difference between genders (p=0.920). when asked which one they would choose if only conventional laparoscopy, robotics or mini were offered, 85% of women and 62.75% of men chose mini first (p=0.025). regarding the factors they would consider most important when choosing the surgical technique, they answered safety first (57.66%), followed by the surgeon's experience with the procedure (29.73%), with no statistically significant difference between genders (p=0.529). when asked if they would consider an open technique even with the other techniques available, and compared according to their year in medical school, students closer to finishing medical school would not consider it, a statistically significant result (p=0.036). regarding the most important factors they would consider, compared by year in medical school, safety and experience of the surgeon ranked highest, a statistically significant result (p<.05). conclusion: among the available surgical approaches, minilaparoscopy tends to be the preference among women medical students, who considered safety the most important aspect. the closer students get to the end of medical school, the less they consider the open technique. background: extension of the single incision for the purpose of specimen removal in single-incision plus one additional port laparoscopic surgery (sils+1) can undermine the merits of sils+1, either by increasing wound-related morbidity or by destroying cosmesis. 
methods: we retrospectively analyzed the clinical outcomes of patients who underwent elective sils+1 anterior resection, either with transanal specimen extraction (tase, n=25) or transumbilical specimen extraction (tuse, n=77), for colorectal cancer from january 2014 to june 2017. this study included patients with a tumor diameter less than 5 cm, measured by preoperative computed tomography. results: both groups were similar in baseline and oncologic characteristics. most surgical data and postoperative clinical variables were comparable between the tase and tuse groups, except for increased operative time in tase (210.2±45.7 vs. 167±43.4 min, p=0.032) and reduced wound complications in tase (0% vs. 14.6%, p=0.043). the dosage requirement of narcotic analgesics in the tase group was not inferior to that in the tuse group. no significant differences were observed in conversion rate, perioperative and overall morbidity between the two groups. conclusion: although sils+1 with tase prolonged operative time compared to tuse, implementation of tase is expected to provide the benefit of reduced wound-related morbidity in patients with a tumor diameter less than 5 cm. medhat ibrahim, md; al-azhar university, naser city, cairo, egypt purpose: morgagni hernia (mh) is a rare condition, accounting for less than 6% of surgically treated diaphragmatic hernias in infants. there is no specific symptom of morgagni hernia. open surgical repair was the gold standard before the introduction of laparoscopic surgery in children and infants. many different laparoscopic techniques for mh repair have been reported. i report laparoscopic repair of mh in five infants using primary suture closure with intra-corporeal knot tying and the ethicon secure strap device. this study is an evaluation of the safety and efficacy of this new laparoscopic technique of mh repair in infants, with its short-term follow-up outcomes. 
patients and methods: five infants with mh underwent laparoscopic repair: hernia sac excision, followed by two primary non-absorbable prolene sutures placed through the full thickness of the anterior abdominal wall and the posterior rim of the defect with intra-corporeal knot tying, and the ethicon secure strap device used to complete closure of the defect. no chest tube or drain was inserted. results: five infants with mh were operated upon, 4 males and 1 female. all cases were left-sided mh; the male-to-female ratio was 4:1. intraoperative and postoperative analgesia requirements were minimal (paracetamol 100 mg/kg rectal suppository every 12 hours for the first 24 hours). a single dose of ceftriaxone 50 mg/kg was given at anesthesia induction. all operations were completed laparoscopically. all infants started and tolerated regular oral feeding within 24 hours of surgery. none of the patients developed intraoperative or postoperative complications. the maximum follow-up was 36 months (mean, 17 months). all patients are in good health without recurrence or port-site complication. conclusion: this easy, safe technique of mh repair reduces operative time and postoperative hospital stay. it minimizes the need for postoperative analgesia and antibiotics. early oral feeding is also a good benefit. introduction: transumbilical single port laparoscopic appendectomy (tspla) is the most popularized single port surgery in the world. it provides more cosmetic benefit than conventional laparoscopic surgery. however, single port appendectomy requires longer operation time and advanced surgical skills. we aimed to investigate the learning curve for tspla. material and methods: data were collected from patients who underwent tspla by a single surgeon between march 2013 and february 2016. the learning curve was analyzed using a cumulative sum control chart (cusum) for operation time and complications. results: a total of 109 patients were included in this study. 
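the cusum learning-curve analysis described above can be sketched as follows. the operation times are hypothetical, and taking the breakpoint at the peak of the cusum curve is a common convention, not necessarily the exact method used in this study:

```python
# cusum learning-curve sketch with hypothetical operation times (minutes).
# the cusum accumulates deviations from the overall mean: it rises while
# cases run longer than average and peaks where performance stabilizes.
times = [80, 78, 75, 72, 70, 60, 58, 55, 54, 52]

mean = sum(times) / len(times)
cusum, s = [], 0.0
for t in times:
    s += t - mean
    cusum.append(s)

# the peak of the cusum curve marks the end of the learning phase
breakpoint_case = cusum.index(max(cusum)) + 1
print(breakpoint_case)  # case 5 in this toy series
```

by construction the cusum returns to zero at the last case; in the study's data the same procedure located the peak at case 31.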
mean operation time was 61.6±20.61 minutes. there was no open or multi-port conversion. based on the cusum for operation time, the learning curve was 31 cases. conclusions: tspla is a safe and effective alternative procedure. the learning curve could be overcome safely without major complications. our results suggest that 31 cases are sufficient to achieve surgical skills for tspla. introduction: anastomotic leakage (al) is a life-threatening complication after minimally invasive ivor lewis esophagectomy (tmie ile) and has diverse treatment strategies such as conservative treatment, endoscopic treatment and surgery. however, there is no consensus on which treatment strategy is best. the aim of this study was to analyse various therapeutic strategies for al and their outcomes. methods and procedures: this retrospective multicentre study was performed in three high-volume hospitals. all patients who developed al after tmie ile in the period january 2011-july 2016 were included. the different endoscopic (stenting, clipping and suction-drainage) and surgical treatments and their success rates were described; success was defined as clinical improvement after primary treatment. the primary endpoint was the time until oral feeding was resumed. secondary endpoints were hospital stay and the total number of surgical, endoscopic and radiologic interventions. results: in total, 83 patients who developed al were identified; four patients received antibiotics only. in the remaining 79 patients, endoscopic treatment was performed as primary treatment in 53%; 47% received primary surgical treatment. basic variables were similar in these groups. the median postoperative day of diagnosis of al was day 7 in the endoscopic group and day 5 in the surgical group (p=0.038). admission to the icu as a result of the leakage was necessary in 52% of the endoscopic group versus 95% of the surgical group (p<.001). 
however, median icu stay was significantly shorter in the endoscopic group (7 days versus 12 days, p=0.020). the success rate of the primary treatment was similar: 76% and 73%, respectively (p=0.743). primary and secondary endpoints were comparable for both the endoscopic and surgical groups; the median time until oral feeding was resumed was 36 days and 31 days, respectively (p=0.232), median total hospital stay was 36 days and 40 days, respectively (p=0.378), and the median number of interventions was 5 in both groups (p=0.378). conclusion: endoscopic treatment appears to be a safe and efficient therapy for al after tmie ile. a patient-tailored approach based on the condition of the patient and the morphology of the leak can be adopted to avoid surgery in a selection of patients. this may prevent surgical reoperations and reduce icu admissions. background: lymph node (ln) dissection around the recurrent laryngeal nerve (rln) is one of the most important and difficult procedures in esophageal cancer surgery because of the high rate of ln metastasis and the risk of rln palsy. especially around the left rln, the surgical field is deep and narrow via the thoracic approach, which tends to result in insufficient ln dissection. therefore, we tried to remove these nodes by visualizing the lymphatic chain, so as to achieve sufficient ln dissection. surgical procedure: we perform thoracoscopic esophagectomy in the semi-prone position using 6-10 mmhg thoracic air pressure. after dissecting the right rln ln and the middle and lower esophagus, encircle the esophagus at the level of the bronchial bifurcation and pull it toward the right side with a tape to dissect the dorsal and left sides of the upper esophagus. dissect the tissue including the left rln ln from the trachea by pulling the esophagus up dorsally, and move this tissue toward the dorsal side of the left rln so that the rln ln tissue can be recognized as the "lymphatic chain". to increase the mobility of the esophagus, cut it at the level of the aortic arch and pull the upper esophagus further dorsally. 
cut the esophageal branch of the rln and separate this lymphatic chain from the rln. at the end of the thoracic procedure, this lymphatic chain is attached to the upper esophagus. after the upper esophagus has been pulled out from the cervical site, the lymphatic chain can easily be recognized at the esophageal wall. result: we performed this lymphatic chain procedure in 88 cases. to evaluate this procedure, 106 cases of the conventional method with the same prone-positioned esophagectomy were used as controls. there was no statistical difference between these two groups in blood loss (lymphatic chain:conventional = 45 ml:55 ml, p=0.524) or rate of rln palsy (14.8%:14.2%, p=1.00). although the thoracic operation time was extended to some degree (291 min:270 min, p=0.005), the number of dissected ln was increased (2.9:1.9, p=0.004) and recurrence along the left rln has been relatively less frequent with this method (4.5%:7.5%, p=0.552). conclusion: ln dissection around the left rln can be made easy and sufficient by visualizing the lymphatic chain. further improvement is needed to secure this procedure, and further evaluation should be done to support these data. introduction: to evaluate the role of robotic-assisted surgery as part of an appropriate patient work-up and treatment of ipmn, and its consistency in terms of perioperative and long-term results. few reports have described singular minimally invasive procedures for ipmn. this study aims to describe a comprehensive, oncologically adequate treatment of ipmn in a minimally invasive unit with an extremely high robotic penetrance. methods and procedures: we retrospectively analyzed our database of resected ipmn between 2008 and 2017. this case series includes consecutive, unselected patients: all candidates with a preoperative diagnosis of ipmn were approached robotically. results: among 142 robot-assisted pancreatic resections, we identified 13 patients with ipmn. one was excluded for having less than 6 months of follow-up, so 12 patients were included and analyzed. 
they underwent duodenopancreatectomy in 7 cases, distal pancreatectomy in 4 cases and central pancreatectomy in 1. all but one indication followed the most up-to-date available guidelines (sendai from 2008 to 2012 and fukuoka from 2012 to 2017; american gastroenterology association guidelines were used for comparison only). one patient was operated on even though the guidelines suggested follow-up, because of a strong familial cancer history. the final pathology for this patient was high-grade dysplasia. in another patient we were inside fukuoka's recommendations but outside aga guidelines, and the final pathology was adenoma in chronic pancreatitis. postoperative morbidity was 16.7% (2 low-grade complications; one grade a pancreatic fistula, now considered a biochemical leak only) and mortality was zero. only one conversion to open surgery occurred: a dp in a jehovah's witness with a bulky mass behind the portal vein. the mean follow-up was 40 months (range: 10-68), with only one loss to follow-up after 12 months, for a high-grade dysplasia. conclusion: in minimally invasive hepatobiliary-pancreatic centers the treatment of ipmn can be carried out following the same principles as major cancer centers, with comparable results. large unbiased studies are needed to evaluate whether a minimally invasive approach could modify the ratio between operated and surveilled patients. reducing the use of catheters, tubes and imaging after hiatal hernia surgery significantly reduces length of hospital stay sophia s oswald, candice l wilshire, md, brian e louie, md, ralph w aye, md, alexander s farivar, md; swedish medical center introduction: historically, standard post-operative management of patients undergoing laparoscopic hiatal hernia surgery has been placement of a foley catheter and nasogastric tube (ngt) at the time of surgery, with removal early on postoperative day (pod) one, at which time an upper-gastrointestinal series (ugi) would be performed. 
we initiated a quality improvement project, seeking to assess whether we could safely forego placement of the foley and ngt, along with the ugi, unless clinically indicated. our aim was to determine if this decreased overall length of stay (los), and how often and in which demographic of patients a foley or ngt needed to be placed postoperatively. methods and procedures: we reviewed patients who had undergone laparoscopic hiatal hernia surgery between 2010 and 2016 under a single thoracic surgeon. patients were excluded for poor esophageal motility (peristalsis <70%), previous esophageal surgery, and presence of a paraesophageal hernia (peh) with over 50% of the stomach contained in the chest. eligible patients were further stratified into two groups: fast track and non-fast track. fast track was defined as patients who left the operating room (or) with no foley or ngt, and did not receive a routine ugi on pod one. non-fast track was defined as patients who left the or with a foley and ngt and received a routine ugi on pod one. los was measured in hours from the start of surgery to the time of discharge. results: of the 75 patients included, 42 were categorized as fast track and 33 as non-fast track. the two groups were similar in terms of age, gender, bmi and asa; however, the fast track group had fewer paraesophageal hernias and shorter surgery times [table]. the hospital los, however, was significantly shorter in the fast track group, even though more postoperative urinary catheters were utilized. no patient in the fast track group needed an ngt placed or a ugi ordered during the initial stay. conclusion: in more straightforward laparoscopic hiatal hernia surgery, surgeons can safely forego ngt and foley placement, as well as ugi evaluation the following morning. these initiatives may translate to quicker discharge from the ward, and may allow safe transition to performing these cases in a 24-hour ambulatory outpatient setting. 
further evaluation of additional interventions and patient education to decrease los is underway. conclusion: laparoscopic surgery seems to be a safe and feasible option, with long-term benefit, for primary tumor resection in metastatic colorectal cancer, but optimal treatment has yet to be defined. the canadian association of gastroenterology (cag) has implemented the colonoscopy skills improvement (csi) program across canada with the goal of improving colonoscopy quality. the program's efficacy has not yet been formally assessed. this retrospective cohort study was performed on fourteen endoscopists practicing in a tertiary referral center who underwent csi training between october 2014 and december 2015. procedural data were collected before and after csi training. data were extracted from the electronic medical record (emr) and entered into spss version 20.0 for analysis. student's t-test was used to compare groups for continuous data; chi-squared tests were used for categorical data. data were collected for a total of 3783 procedures; 2383 were done before csi training and 1400 procedures since csi training. our sample size provided 80% power to detect a mean difference in adr improvement of 5%. the most common indication for colonoscopy was family history of colorectal cancer, in 970 (25.6%) patients. age (58.0 yrs vs. 60.1 yrs, p<.001) and gender (43.4% male vs. 46.9% male, p=0.035) differed statistically between groups, although the differences were small. groups were comparable in terms of indication and completion rate (92.6% vs. 94.2%). adr improved significantly after completing the course (23.5% vs. 35%, p<.001). an improvement was also noted in both polyp detection (37.6% vs. 52.9%, p<.001) and polyp removal (36.1% vs. 50.4%, p<0.001). we have seen a significant increase in adr at our institution since implementing the csi program. 
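the adr improvement above can be checked with a standard two-proportion z-test; a minimal sketch, with adenoma counts back-calculated from the reported rates (23.5% of 2383 and 35% of 1400 procedures; the exact counts are an assumption, and the study itself used spss):

```python
import math

# two-proportion z-test on the reported adr improvement.
# counts are assumptions back-calculated from 23.5% of 2383
# and 35% of 1400 procedures.
x1, n1 = 560, 2383   # procedures with adenomas before csi training
x2, n2 = 490, 1400   # procedures with adenomas after csi training
p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
# |z| > 3.29 corresponds to a two-sided p < .001
print(round(z, 1), abs(z) > 3.29)
```

with these counts the statistic is around 7.6, consistent with the reported p<.001.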
gastric cancer is a major cause of cancer-related death globally; it has a higher incidence in men and is notable for its heterogeneity. many studies have set out the molecular basis of this cancer, including pathogenesis, invasion and metastasis. the advent of new technologies has helped to bring out several novel biomarkers that have diagnostic and prognostic value. therefore, this review centers on biomarkers for the early diagnosis, treatment and prognosis of gastric cancer, elaborating the clinical importance of serum tumor markers in patients with this cancer as well as the monitoring of growth and prognosis, together with epigenetic changes and genetic polymorphisms. a deep and rigorous search was carried out in pubmed/medline using the specific terms "gastric cancer" and "tumor marker". our search yielded 4947 relevant reports from books and articles published before the end of september 2016. in conclusion, scientists are devoting time and resources to combating this disease of global burden. classical and novel biomarkers are important for treatment as well as pre- and post-diagnosis of gc. major risk factors for this disease are cigarette smoking, infection by helicobacter pylori, atrophic gastritis, male sex, and high salt intake. early diagnosis is important to management; treatment after pathological diagnosis, guided by stage, prognosis and metastatic setting, although the outcomes have proved not so good, includes chemotherapy with agents such as oxaliplatin, capecitabine, cisplatin and 5-fluorouracil (5-fu). introduction: emergent appendectomy is the standard of care in the usa, based on a tradition rooted in the theory that delaying surgery allows for progression of disease and poorer outcomes. antibiotic treatment alone has been shown to be feasible in the treatment of uncomplicated appendicitis. in clinical practice surgical treatment can be delayed due to a multitude of medical and logistical reasons. 
this study evaluates the relation between timing of surgery and outcomes. methods and procedures: 120 consecutive adult patients undergoing appendectomy in a teaching community hospital were risk-stratified using the acs risk calculator. time from imaging to incision defined the early and delayed groups. statistical analysis was used to determine the association between risk level, timing of surgery and outcomes. results: 79% of patients in this study were considered high risk. average time to incision was 9.7 hours. shorter time to incision was associated with a statistically significantly shorter length of stay (p<.05). for every 12 hours of surgical delay, one day was added to the length of stay. no statistical difference was found between time to incision and the other outcome variables of clinical complications, conversion to open appendectomy or frequency of complicated appendicitis. length of stay was longer than predicted by the acs risk calculator in both high- and low-risk groups. a multidisciplinary, obesity-focused approach improves diagnosis of obesity-related illnesses: a new paradigm for the care of patients with obesity roderick olivas, aaron brown, md, racquel s bueno, md, cedric s lorenzo, md; university of hawaii - department of surgery introduction: patients suffering from the burden of obesity are at significant risk for medical problems that lead to premature death and disability. we hypothesize that a multidisciplinary bariatric team is better equipped to recognize and diagnose these conditions. this study hopes to demonstrate that a patient-focused approach leads to increased recognition of obesity-associated comorbidities, thus improving quality of care and surgical outcomes. 
methods and procedure: a retrospective medical chart review of patients who underwent bariatric surgery from 12/1/15 to 12/1/16 was performed, comparing patient problem lists obtained from their primary care providers upon entry into the bariatric program with the final problem list generated after evaluation by the program's multidisciplinary team. the total number and the specific comorbidities identified before and after multidisciplinary team evaluation were analyzed with a paired t-test and manova, respectively. comparison of the number of comorbidities identified against specific patient demographics was conducted using a paired t-test. results: a total of 120 patient charts were selected and 100 met inclusion criteria. the sample consisted of 68% women and 32% men; the mean age was 46.5; the mean bmi was 51.2; 87% were morbidly obese (bmi ≥40) and 13% were obese. the total number of comorbidities identified after evaluation by the multidisciplinary team was significantly greater (p<.001), with the average number of comorbidities diagnosed before and after being 3.65 and 6.61, respectively. a significant increase (p<.05) in the identification of comorbidities before versus after evaluation was noted for all demographics, and no disparities regarding gender, age, marital status, employment status, bmi, or ethnicity were identified. conclusion: patients with obesity unknowingly suffer from many obesity-associated comorbidities simply because their health care providers have failed to recognize the existence of these conditions. surprisingly, this includes diseases that are highly associated with obesity, such as osa and t2dm, for which obese patients should be screened. although the root of this dereliction is yet to be determined, insufficient obesity-focused education and inherent weight bias among providers must be considered. 
assessment by a multidisciplinary bariatric team resulted in the identification and treatment of an increased number of comorbidities in this patient population. increased recognition of obesity-related comorbidities improves quality of care, which can translate into improved surgical outcomes. introduction: it is known that surgical residents suffer from sleep deprivation. no recent study has evaluated the type and number of calls received at night. lately, burnout, depression and suicide have been subjects of interest in studies and the media because of their higher rates among residents compared to the general population. the objective of our study is to evaluate junior residents' level of fatigue and the quantity and quality of calls received during on-call nights in general surgery at the chus. methods and procedure: cross-sectional study conducted on 17 junior residents who were on call in general surgery at the chus between april 25 and august 27, 2017. the participants detailed all the calls received between 11 pm and 6 am in a database created with the handbase application and completed a daily calendar of their on-call night, noting all the tasks they performed every half hour (surgery/consultation/sleep). the level of fatigue was evaluated at the end of the night, at 8 am, with a visual analog sleep scale scored out of 5 points. results: a level of fatigue of 4/5 (tired) or 5/5 (exhausted) was reached in close to 50% of the on-call nights. the median number of calls per night was 3 and the median duration of sleep was only 3.3 hours. the median length of uninterrupted sleep was 2.5 hours per night. among the total of 110 nights and 384 calls analyzed, 15% of calls were "not pertinent" and 10% were "reportable in the morning". more than 28% of the nights had at least one "not pertinent" or "reportable in the morning" call that interrupted the junior resident's sleep. 
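the fatigue-call association reported next uses spearman's rank correlation; a minimal pure-python sketch on hypothetical per-night data (with no tied values, the classic rho = 1 − 6Σd²/(n(n²−1)) formula applies directly):

```python
# spearman rank correlation in pure python; the call counts and
# fatigue scores below are hypothetical per-night values.
calls = [1, 2, 3, 5, 6, 8]            # calls received per night
fatigue = [2, 1, 2.5, 4.5, 3, 5]      # fatigue score (out of 5)

def ranks(xs):
    # rank 1 = smallest value (no ties in this toy data)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

rc, rf = ranks(calls), ranks(fatigue)
n = len(calls)
d2 = sum((a - b) ** 2 for a, b in zip(rc, rf))
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))
print(round(rho, 3))  # positive rho: more calls, more fatigue
```

because rho is computed on ranks rather than raw values, it captures the monotone association without assuming the fatigue scale is interval-valued.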
the level of fatigue was significantly correlated with the number of calls received during the night (spearman's rho=+0.380, p<.001) and with the number of uninterrupted hours of sleep (spearman's rho=−0.687, p<.001). conclusion: the level of fatigue is very high among junior residents in general surgery. many of the calls received during the night are not pertinent or could have been delayed to the morning. our results lead us to the conclusion that interventions and recommendations should be made to raise nurses' and residents' awareness of the situation, to reduce unnecessary calls and the residents' level of fatigue. we hope that on-call residents' sleep will be better preserved and that this will result in fewer health issues for them (burnout, depression, suicide). without interruptions: does twitter level the playing field? heather j logghe, md 1 , laurel milam, ma 2 , natalie tully, bs 3 , arghavan salles, md, phd 2; 1 thomas jefferson university, 2 washington university, 3 introduction: frequent interruption of women in conversation has long been noted anecdotally, and studies confirm that women are interrupted more often than men. such interruptions can diminish perceptions of authority and compromise women's self-confidence. on twitter, users cannot be interrupted in the same way they can be in live conversation. thus the platform may provide a means for women to overcome this obstacle. to determine the degree to which women surgeon leaders utilize twitter compared to their male colleagues, we examined the twitter accounts and activity of the leaders of three national surgical societies. methods and procedures: lists of surgeons holding leadership positions in three surgical societies (the american college of surgeons, the academic association of surgery, and the society of american gastrointestinal and endoscopic surgeons) were obtained and duplicate names were deleted. table 1 details the organizations and leadership positions included. 
the twitter accounts of these leaders were then identified and confirmed by reviewing the accounts for surgical content. account duration was calculated from the join date. the number of tweets, accounts following, followers, and likes were recorded for each account. outliers were defined as two standard deviations from the mean. results: one hundred sixty-eight men and 64 women surgeon leaders were identified. forty-nine percent of the men and 66% of the women were found to have twitter accounts. mean account durations for men and women were similar, 4.6 years and 4.1 years, respectively. outliers for total tweets (7 men, 1 woman), accounts following (5 men), followers (2 men), and likes (3 men) were excluded from analyses. almost all positive outliers were men. there were no negative outliers. overall, excluding the outliers, there were no significant differences between men and women in any metric. conclusion: among leaders in the surgical organizations analyzed, a higher percentage of women than men have twitter accounts. those with the greatest number of tweets, accounts following, followers, and likes, however, are overwhelmingly male. thus, although women in this sample were more likely than men to have twitter accounts, men were more likely to gain influence through their accounts. increasing women's influence in this public forum may position them as much-needed role models for the current and next generations. surgical societies may help reduce the disparity in women's representation in surgical fields through education of their members on how to use social media. introduction: the aim of this study was to report the perioperative morbidity and short-term outcomes of a case series of robotic-assisted laparoscopic transabdominal preperitoneal (tapp) inguinal hernia repairs. 
methods and procedures: a retrospective review (january through december 2015) of 104 patients who underwent either unilateral or bilateral robotic-assisted laparoscopic tapp inguinal herniorrhaphy by two attending surgeons was performed. patient demographics, perioperative morbidity, operative time, and follow-up data were analyzed. results: patient demographics are summarized in table 1. mean operative times for unilateral and bilateral inguinal herniorrhaphy were 87.5±20.8 and 129.0±37.6 minutes, respectively. mean robot console times for unilateral and bilateral inguinal herniorrhaphy were 70.0±25.1 and 113.0±39.8 minutes, respectively. postoperative complications included urinary retention (6.7%), conversion to open repair (1%), and delayed reoperation (1.9%). no major bleeding, surgical site infection (ssi), or mortality was observed. at the first follow-up visit (19±6 days), symptoms/signs included groin/scrotal swelling (8%), seroma (7%), groin pain (3%), burning (3%), numbness (1%), and persistent urinary retention (1%). 12% of patients required a second follow-up visit. two patients underwent reoperation for suspected recurrence, but a cord lipoma without a hernia defect was found instead. conclusions: robotic-assisted tapp inguinal herniorrhaphy can be performed with operative times and short-term outcomes similar to those published for the open technique. robotic-assisted tapp inguinal herniorrhaphy is a safe and efficient minimally invasive surgical option with lower ssi risk and better cosmetic results. gunnar nelson, nathan lau, phd; virginia polytechnic institute & state university introduction: the fundamentals of robotic surgery (frs) and fundamental skills of robotic surgery (fsrs) are universal curricula covering a range of topics to assure a high level of surgical skill for optimal patient outcomes. this assurance of skills should include management of and response to adverse events.
thus, we reviewed frs and fsrs to identify any gaps in educational content pertaining to how surgical teams are trained to handle adverse events in robotic surgery. methods and procedures: we conducted a literature search through google scholar, journal of robotic surgery, and plos one on frs and fsrs from 2010 to 2017. we reviewed 65 articles on preparing medical professionals to handle adverse events during robotic surgeries. besides the two curricula, we also surveyed the literature on the characteristics of adverse events and the responses of the medical team. this literature survey provided a basis for recommending additional educational content for frs and fsrs. results: in our review, the frs contains 4 modules consisting of an introduction to robotic surgery together with cognitive, psychomotor, and team training/communication skills. meanwhile, the fsrs contains 16 different tasks, half of which concern human-machine interaction and the other half operative interaction. both curricula appear to lack content on managing adverse events in robotic surgery. according to fda data, 4,798 adverse events were reported per 100,000 surgeries, of which (i) 40% related to broken pieces of surgical instruments falling into patients, (ii) 19.1% pertained to burning holes in tissue from electrical arcing, and (iii) 16.9% related to unexpected operation of the instrument, such as power outages and issues with electrosurgical units. thus, medical professionals should be trained to manage common adverse events in robotic surgery. for frs, augmenting the five current scenarios in the communication section with common adverse events (i.e., broken pieces falling into patients) would minimize complications under abnormal circumstances. for fsrs, the most logical method would be to augment the operative interaction tasks with adverse events to train medical professionals.
conclusion: we discovered that universal curricula on robotic surgery lack educational content for training medical professionals to manage adverse events; of the 4,798 adverse events, 4,382 (91.3%) pertained to device malfunction. to protect patients' health, universal curricula must incorporate content preparing medical professionals to respond to adverse events, particularly device malfunctions, during robotic surgeries. introduction: this retrospective study was performed to evaluate the safety and feasibility of the new senhance robotic system (transenterix) for laparoscopic cholecystectomies. we report the first single-institution experience utilizing this new robotic platform. methods: approximately 20 robotic cholecystectomies were performed using the senhance robotic system. the senhance surgical system is a new robotic platform that consists of a cockpit, manipulator arm, and a connection node (figure 1). this new system provides robotic surgery with numerous advantages, including an eye-tracking camera control system, haptic feedback, reusable endoscopic instruments, and high configurational versatility due to the total independence of the manipulator arms. patients were between 18 and 80 years of age, were eligible for a laparoscopic procedure with general anesthesia, had no life-threatening disease with a life expectancy of less than 12 months, and had a bmi <40. a retrospective review of a variety of prospectively collected pre-, peri-, and postoperative data, including but not limited to patient demographics and intraoperative as well as postoperative complications, was performed. cholecystectomies were performed by expert-level laparoscopic surgeons. results: the standard laparoscopic technique and setup were easily applicable to the senhance robotic system for this particular surgery. operative time and perioperative complications were comparable to reports of standard laparoscopic cholecystectomies.
there was no significant learning curve detected in our case series. conclusion: we report the first experience with laparoscopic cholecystectomies using the new senhance robotic system. there were no major perioperative complications, and operative time was comparable to that of standard laparoscopic cholecystectomies well reported in the literature. this case series suggests that the senhance robotic system can be safely and easily used for laparoscopic cholecystectomies by experienced laparoscopic surgeons. background: the ergonomic benefits of robotic surgery for the health of the surgeon are widely touted as advantages of this technique, though concern remains over a perceived increased risk of injury to patients, particularly with the novice robotic surgeon. injury to the bedside surgeon and assistants due to robotic movement can also occur, though it has not previously been reported. we describe a finger fracture sustained by the bedside surgeon due to entrapment between robotic arms, and we discuss potential risks to the surgeon in robotic procedures. procedure: a distal pancreatectomy and splenectomy was performed utilizing the da vinci si system (intuitive surgical, inc., sunnyvale, ca). during the operation, hemorrhage was encountered, which required an instrument exchange that was delayed by self-testing failures. after the instrument was validated and advanced into the field by the bedside surgeon, the operator abruptly took control of the device to reposition it. the external portion of the active arm was then rapidly and forcefully propelled laterally toward a stationary retracting arm. the bedside surgeon's hand was still engaged on the instrument being inserted and became trapped between the two arms, leading to a right middle finger crush injury. results: the bedside surgeon sustained a fracture of the distal phalanx at the insertion of the flexor tendon, with significant hyperextension of the joint. there was temporary paresthesia of the fingertip.
while flexor tendon function was preserved and surgery was not required, the surgeon had to maintain continuous splinting and was unable to return to full duty for a total of 13 weeks. the surgeon has mild residual hyperextension. conclusions: while complications to the patient have previously been attributed to the robotic platform, this case demonstrates that there are other inherent hazards to members of the operative team. as is natural with all indirect visual surgical techniques, the operator becomes intensely focused on the internal view and the instruments in the field. this spatial separation is accentuated on the robotic platform, as the isolated console provides complete visual-field immersion, no tactile feedback, and a disconnect between the rapid, sizeable outward arm motions needed to produce small internal movements. given the need for maximum dexterity internally, the device does not have external proximity sensors to prevent arm-arm or arm-operator collisions. while many bedside operators report anecdotes of collisions with the device, this case reveals that the forces involved at the human-machine interface can lead to more significant injuries. robotic approach to non-midline abdominal wall hernias: a single institution experience from a high volume center emily benzer, do, j. stephen scott, md, facs; university of missouri introduction: the objective of our study was to evaluate our experience with robotically repaired non-midline abdominal wall hernias at a high-volume robotic surgery program. we also discuss the technical advantages of the use of robotic technology in the repair of these unusual hernias, which have typically had higher recurrence rates than midline hernias. a laparoscopic approach for lateral ventral abdominal wall (spigelian) and lumbar hernias has been described; however, the success of robotic-assisted repair for these hernias has yet to be determined.
methods: a retrospective case analysis of all robotic abdominal hernia cases between june 2016 and june 2017 at an academic institution with a single high-volume robotic surgeon was performed. the operative details of robotic repair of non-midline abdominal hernias, patient demographics, length of stay, and smoking status were recorded and analyzed. the technical advantages of the use of robotic technology (for example, circumferential fixation of the mesh, ease of intracorporeal suturing, and the use of wristed instruments to gain better angles for posterior fascial release) were evaluated. results: a total of 11 cases were identified. the average age of the patients was 54.3 years (range 25-74 years), and patients were predominantly female (91%). spigelian hernias represented 73% (n=8) and lumbar hernias 27% (n=3). all patients had primary closure of their defect, and 7 patients (64%) had a posterior myofascial release performed. mesh types placed included uncoated polypropylene (n=7), coated polypropylene (n=3), and biologic (n=1). patients with uncoated polypropylene mesh had the peritoneum closed over the mesh. the average length of stay was 1.9 days (range 0-6 days). there were no recurrences identified over a mean follow-up period of 3.1 months (range 0.5-13.2 months). conclusion: robotic-assisted repair of non-midline abdominal wall hernias is a viable option in the elective setting, with no recurrences noted in this case series. the technical advantages of using robotic technology were identified and discussed in detail. these advantages theoretically improve outcomes in these patients; however, further analysis of long-term outcomes and costs will have to be performed in future studies. inguinal hernia repair has seen several critical improvements in recent times due to the implementation of new techniques, including laparoscopic as well as robotic repair.
with over 600,000 inguinal hernia repairs performed annually, it is important to identify the safest and most patient-friendly method. for surgeons, robot-assisted laparoscopic surgery is gaining popularity for its dexterity and 3d visualization. but despite the growing interest in robotic hernia repairs, there is a scarcity of literature to support its superiority over open inguinal hernia repair. this study hypothesizes that patients who undergo robot-assisted laparoscopic inguinal hernia repair will have decreased immediate post-operative pain, shorter recovery room stays, decreased narcotic requirements, and overall decreased pain at follow-up compared to open inguinal hernia repair. in this study, we performed a retrospective analysis of patients who underwent either an open or a robot-assisted laparoscopic inguinal hernia repair at stamford hospital from july 2015 to july 2017. the following characteristics were analyzed for both subsets of patients: gender, bmi, type of repair, operative time, recovery room time, immediate post-operative pain, and post-operative pain at follow-up. our study demonstrated a longer average operative time for patients undergoing robotic hernia repair compared to open repair, which was statistically significant (p value=.05). patients who underwent robotic inguinal hernia repair spent less time in the recovery room compared to patients who underwent open repair. in addition, patients in the robotic hernia group required fewer narcotics in the recovery room compared to patients who underwent open repair (p value=.05). there was no statistically significant difference in length of hospital stay between the two groups. this study highlights several possible advantages of robotic inguinal hernia repair, including lower post-operative pain scores, less narcotic usage in the post-operative period, and shorter recovery room time.
the results from this study should increase interest in investigating the superiority of robotic inguinal hernia repair. future plans for study involve comparing robotic to laparoscopic repair. in addition, we plan to continue to follow the study patients to examine additional qualitative metrics, including time to return to work and time to return to daily activities. introduction: buccal mucosal grafts (bmg) are traditionally used in urethral reconstruction. there may be insufficient bmg for applications requiring large amounts of graft, such as urethral stricture after gender-affirming phalloplasty. rectal mucosa is an alternative with less post-operative pain, no impairment of eating and speaking, and larger graft dimensions. laparoscopic transanal minimally invasive surgery (tamis) has been described by our group. due to the technical challenges of harvesting a sizable graft within a confined space, we adopted a new approach using the intuitive da vinci xi® system. we demonstrate the feasibility and safety of a novel technique of robotic tamis (r-tamis) for the harvest of rectal mucosa for the purpose of onlay graft urethroplasty. methods and procedures: irb approval was obtained. three female-to-male transgender adults (age range: 33-53 years) presenting with post-phalloplasty urethral strictures underwent robotic rectal mucosal harvest. the procedure was first rehearsed on an inanimate model using bovine colon. the surgery was performed under general anesthesia with the patient in the lithotomy position. the gelpoint path transanal access platform was used. the rectal mucosa was harvested with the robotic instruments after submucosal hydrodissection. the specimen size harvested correlated with the clinical surface area needed for urethral reconstruction. following specimen retrieval, flexible sigmoidoscopy was used to ensure hemostasis. the rectal mucosa graft was placed as an onlay for urethroplasty. results: there were no intraoperative or postoperative complications.
average graft size was 39 12 cm (range: 8-15 cm). every case had excellent graft take for reconstruction. all patients recovered without morbidity or mortality. they reported minimal postoperative pain, and all regained bowel function on the first postoperative day. all reported significantly less postoperative pain and greater quality of life in comparison to prior bmg harvests. the procedure has been refined to increase efficiency and decrease operative time by maintaining adequate insufflation, retracting the mucosal graft, and maintaining graft integrity. conclusions: to our knowledge, this is the first use of r-tamis for harvest of a rectal mucosal graft. our preliminary series indicates that the robotic approach is feasible and safe. it constitutes a promising minimally invasive technique for urethral reconstruction. the demonstrated feasibility and the avoidance of the challenging recovery associated with bmg harvest warrant further application and long-term evaluation of this procedure. prospective studies evaluating graft success, donor site morbidity, and long-term outcomes are needed. introduction: the proportion of robotic minimally invasive procedures performed annually is growing rapidly, specifically in the field of general surgery. a robotic approach to minimally invasive procedures potentially confers a number of benefits, ranging from a magnified viewing field to greater attenuation and translation of hand movements, leading to improved stability and maneuverability. it is paramount that a robust curriculum be designed for training surgical residents in robotic techniques. the aim of this project is to assess the current state of robotic surgery training at the ohio state university, with specific regard to whether it is currently temporally effective, in addition to establishing a baseline against which the robotic surgery curriculum can be compared.
methods and procedures: data were obtained for 199 cases performed at the ohio state university hospital east between january and september of 2017. case time, date, type, and attending surgeon were recorded and tracked for review. of the 199 cases, 72 were cholecystectomies, 40 were unilateral inguinal hernia repairs, and 36 were bilateral inguinal hernia repairs, for a total of 148 procedures included in the analysis. chief residents were trained in two-month blocks, beginning in january of 2017. mean console operative times for the first and second months were compared for cholecystectomies as well as unilateral and bilateral inguinal hernia repairs. results: mean console time decreased for cholecystectomies (−9.0%; n=72) and for bilateral (−16.0%; n=36) and unilateral (−1.5%; n=40) inguinal hernia repairs from month one to month two. there was a large amount of variance across training blocks, but there was a systematic improvement in operative time across the training period. average operation length was shortest for cholecystectomies (m=66.8 min), followed by unilateral inguinal hernia repairs (m=85.3 min), and finally bilateral inguinal hernia repairs (m=111.2 min). discussion: these preliminary data suggest that residents are able to decrease their robotic operation time over the course of the two-month rotation. although sample sizes were relatively small for each block, the consistency of the trend supports this conclusion. further data collection will allow for more precise estimates in the future and for stronger conclusions to be drawn. these results show that rapid improvement is possible and provide motivation to establish robotic surgery curricula for general surgery residents nationally.
robotic pancreas-sparing treatment of pancreatic neuroendocrine tumors: three case reports and review of the literature alessandra marano, giorgio giraudo, stefano giaccardi, desiree cianflocca, diego sasia, felice borghi; santa croce e carle hospital introduction: pancreas-sparing resections would be the ideal procedure in cases of small pancreatic neuroendocrine tumors (p-nets), reducing the risk of exocrine and endocrine insufficiency. compared to standard resection, this type of surgery is safe and feasible without increasing the risk of postoperative complications, except for the overall rate of clinical pancreatic fistula (pf), which has not resulted in higher mortality or overall morbidity. robotic surgery for p-net enucleation has rarely been described, but initial experiences have shown that this approach is associated with favorable outcomes. the aim of this study is to describe three cases of dv®si™ pancreatic enucleation for p-nets located in the uncinate process, in the body, and in the posterior aspect of the tail of the pancreas, respectively. a brief review of the literature regarding the application of robotics for p-net enucleation is also included. methods and procedures: this study includes patients undergoing dv®si™ enucleation for p-nets with a maximum diameter of no more than 2 cm and a distance between the tumour and the main pancreatic duct (mpd) greater than 2 mm. at surgery, exposure of the pancreas was achieved by separation and traction of the gastrocolic and gastropancreatic ligaments. the pancreas was explored, and intraoperative ultrasound was used to ensure negative margins and leave the mpd intact. a cross-stitch through the tumour was made routinely in order to pull on the tumour. enucleoresection was carried out with monopolar scissors and bipolar forceps. the tumour was placed into a specimen bag and removed through the trocar port. a drain was always left. results: median total operative time was 178 min.
no conversions or intraoperative complications occurred. median length of stay was 4.6 days. two patients presented a grade a pf (isgpf classification), while a grade b pf occurred in the case of the pancreatic tail net enucleation. final pathology revealed two insulinomas and one non-functioning net of the pancreatic body. at a median follow-up of 15 months, no pancreatic insufficiency, reoperation, or tumour recurrence was observed in any case. the robotic approach for the treatment of p-nets is safe and feasible and, in selected cases, may extend the indications of minimally invasive pancreas-sparing surgery. in particular, the robotic approach provides a more precise dissection and may ensure negative margins and an intact mpd. these preliminary results are consistent with literature data on over 100 robotic pancreatic enucleations for p-nets, which show favourable surgical outcomes, especially when compared with those of open surgery. introduction: rectal cancer continues to be a surgical challenge. new technologies must be incorporated into practice and, at the same time, oncologic surgery and overall outcomes must be improved. the use of da vinci robotic surgery systems has spread rapidly in the field of rectal cancer treatment, showing several technical advantages and favorable outcomes compared to laparoscopy. since the introduction of the robotic platform in our institution in 2013, we have adopted a single-docking robotic technique for rectal resection. the aim of this study is to present our standardized technique and to analyse the clinical outcomes of the first 100 robotic rectal procedures. methods and procedures: prospectively collected data were reviewed from 100 consecutive patients who underwent single-docking totally robotic (da vinci® si™) dissection for rectal cancer resection between june 2013 and august 2017 under an eras program.
robotic rectal surgery was performed without changing the position of the robotic cart; only the robotic arms were repositioned between the two phases: 1) vascular ligation and sigmoid colon to splenic flexure mobilization; and 2) pelvic tme. results: there were 66 men (66%), and the median age was 68 years (range 24-92). thirty-five patients had neo-adjuvant chemoradiotherapy, whilst 15 patients had a bmi ≥30. procedures performed included anterior resection (n=95) and abdominoperineal resection (n=5). protective ileostomy was performed in 50 patients. the median operating time was 270 min (range 160-604). there was one conversion and two intra-operative complications (one bladder lesion and one ureteral lesion). median length of stay was 3.5 days, and the readmission rate was 7%. thirty-day mortality was zero. the anastomotic leak rate was 7%, and all patients except one were managed conservatively. the mean lymph node harvest was 14 (sd±8.3). the radial margin was negative in all patients. at a median follow-up of 21 months, there were no local recurrences. the single-docking robotic technique is a safe and feasible approach for rectal surgery: in our study it has demonstrated favourable clinical outcomes, and the adoption of a standardized stepwise approach was useful, especially during the initial learning phase. to the best of our knowledge, this is the largest series from italy to report this standardized approach and the short-term clinical and oncological outcomes. in complex laparoscopic surgical procedures, the laparoscope and the surgical instruments can interfere with each other because multiple instruments are concentrated in one place. this problem appears most significantly in laparoendoscopic single-site surgery.
we therefore propose a multi-degrees-of-freedom (dof) manipulator with a mantle tube for assisting laparoendoscopic surgery; the manipulator has two flexion mechanisms and one telescopic mechanism, all actuated by wire. any thin surgical instrument, such as an endoscope, can be inserted through the mantle tube of the multi-dof manipulator, which allows those instruments to access the operative field along a different axis from the other instruments. the use of this manipulator has two advantages: it avoids fighting between the instruments and the laparoscope, and it makes it possible to secure a satisfactory field of vision in the operative field. in this report, the multi-dof manipulator was used as a laparoendoscope. to evaluate the performance of this manipulator, operation time was measured in an abdominal cavity simulator (fasotec inc.) using a multiple-target contact test, in which a forceps is brought into contact with multiple targets in the simulator according to a defined pattern. as a general comparison and evaluation target, the same access method as a conventional rigid endoscope was tested. in this test, the number of contacts between the forceps and the laparoendoscope was recorded using an electrical device. subjects (n=10) were adult men trained in peg transfer in the above simulator. total operating times and the field of vision obtained with each device were compared. the results show that using the suggested manipulator rather than the rigid laparoscope yields a satisfactory field of vision, shortens operating time by approximately 4 seconds, and significantly reduces the number of contacts. the effectiveness of the suggested manipulator was therefore demonstrated, and its use is expected to facilitate complex surgical operations.
additionally, an ablative operation on swine liver tissue was performed in the abdominal cavity simulator as a step preceding clinical testing. the operative field in this test was surveyed, and refinements of the manipulator to improve its performance are described in this report. yoshiyuki usui, md, phd, ichiro akiyama, md, phd, hironori kunisue, md, phd, hideaki mori, md, phd, tetsuya ota, md, phd; okayama medical center background and methods: we have performed approximately 200 cases of gasless endoscopic thyroid surgery over the 17 years since 1999. this surgery was performed through a small subclavian incision, using wire traction and an inserted endoscope. we have modified and improved our surgical techniques by inventing various surgical instruments. here we introduce four newly invented surgical instruments, chronologically. results: we made the u-retractor (2000), u-trocar (2005), u-kelly forceps (2008), and u-suction tube retractor (2013). all of these instruments were modified from conventional surgical instruments. the u-retractor was a piercing retractor, each end of which had a sharp tip and a retractor. this retractor was inserted from the 3-cm working port outside the body and retracted the muscles effectively. the u-trocar was set reversely, from inside to outside, to make the working space wider. the u-kelly forceps, which have a special ratchet, were made to dissect loose connective tissue around the thyroid gland while avoiding injury to the recurrent laryngeal nerve. the u-suction tube retractor facilitated a wider working port and effectively eliminated the mist created by the ultrasonically activated scalpel. recent data showed no difference in operative time, hoarseness, blood loss, or hospital stay between conventional thyroid lobectomy and gasless endoscopic lobectomy. conclusion: gasless endoscopic thyroid surgery has been improved over the last 17 years.
this procedure has made possible the excision not only of benign thyroid tumors but also of small thyroid carcinomas. this operation is still cost-effective, because almost all the surgical instruments are reusable, and it is a satisfactory experience for both patients and surgeons. objective: to put forward the importance of complete (r0) resection in the treatment of retroperitoneal tumors for increasing overall survival. methods: in this study, 30 patients with a diagnosis of retroperitoneal tumors of different histopathological subtypes who were hospitalized in the emergency surgery department of istanbul medical faculty between 2009 and 2017 were evaluated retrospectively. the database of the department was analyzed. operative histories, histopathological results, radiological evaluations, and assessments of relapses and overall survival were obtained from the medical archive. results: the average follow-up time was 2.5 years. all of the patients included in the study underwent operations. the average hospital stay was 15 days. 4 of the patients were found to have positive surgical margins on histopathological evaluation. the overall mortality rate of the study was 20% (6/30). we observed a direct correlation between complete (r0) resection and disease-free survival. patients with relapses had a worse prognosis in terms of overall survival (44% mortality rate). on statistical evaluation, surgery was found to be the main determining factor for overall survival. conclusion: referral to an experienced, multidisciplinary surgical center after early diagnosis is of utmost importance in the treatment of retroperitoneal tumors. the surgical approach constitutes the main element of management. overall survival is directly correlated with complete (r0) resection.
novel fluorescent dyes for real-time, intraoperative, organ-specific visualization of biliary and urinary systems using dual-color near-infrared imaging 1; 1 children's national health system, 2 nih/nci
p536 multidisciplinary approach for management of necrotizing pancreatitis: a case series prabhu senthil-kumar
university of alberta, 2 centre for the advancement of minimally invasive surgery introduction: the objective of this study was to systematically review the bariatric surgery literature to understand how weight loss is reported. the incidence of obesity has increased globally. according to the world health organization, more than 600 million adults were obese in 2014. in the last decade, bariatric surgery has been increasingly utilized as an effective treatment option for severely obese patients. currently, bariatric surgeries are among the most commonly performed operations. the primary outcome of such procedures is weight loss, which has been shown to vary according to the type of surgery. however, different methods are used to report weight loss, which makes it difficult to directly compare outcomes between studies. a previous review by dixon et al. in 2004 revealed wide heterogeneity in weight-loss reporting. however, there have been no recent reviews of the reporting of weight loss in bariatric surgery. methods: a search of the medline electronic database was performed for studies published in 2016 using the search terms gastric bypass/sleeve gastrectomy, weight, human, and english.
articles were selected by two independent reviewers based on the following inclusion criteria: (1) adult participants ≥18 years predictive factors for excess body weight loss after bariatric surgery in japanese obese patients takeshi naitoh hypertension resolution after rapid weight loss: a single institution experience cristian milla matute reoperative bariatric surgery: analysis of indications and outcomes: a single center experience iman ghaderi objective: to observe the effects of duodenal-jejunal transit on glucose tolerance and diabetes remission in a gastric bypass rat model. method: to verify the effect of duodenal-jejunal transit on glucose tolerance and diabetes remission in gastric bypass, twenty-two type-2 diabetic sprague-dawley rats, modeled through a high-fat diet and low-dose streptozotocin (stz) administered intraperitoneally, were assigned to one of three groups: gastric bypass with duodenal-jejunal transit (gb-djt, n=8), gastric bypass without duodenal-jejunal transit (rygb, n=8) and sham (n=6). body weight, food intake, blood glucose, as well as meal-stimulated insulin and incretin hormone responses, were assessed to ascertain the effect of surgery in all groups. oral glucose tolerance test (ogtt) and insulin tolerance test (itt) were conducted three and seven weeks after surgery. results: comparing our gb-djt to the rygb group, we saw no differences in the mean decline in body weight, food intake, and blood glucose 8 weeks after surgery. the gb-djt group exhibited immediate and sustained glucose control throughout the study, whereas outcomes in the sham group did not differ from preoperative levels. conclusion: preserving duodenal-jejunal transit does not impede glucose tolerance and diabetes remission after gastric bypass in a type-2 diabetes sprague-dawley rat model. is bariatric surgery effective for comorbidity resolution in super obese patients? 
methods: a retrospective analysis of outcomes of a prospectively maintained database was done on 723 obese patients with a diagnosis of at least one or more of the following comorbidities (t2dm, htn, osa, or hld) at the time of initial visit who had undergone either a sleeve gastrectomy (sg) or a roux-en-y gastric bypass (rygb) at our hospital between 2011 and 2015. the patients were stratified based on their preoperative body mass index (bmi) class: bmi methods: we retrospectively reviewed all patients who underwent laparoscopic sleeve gastrectomy (lsg) at our institution from 2010-2015. common demographics and comorbidities were collected, as well as creatinine preoperatively and up to 48 hours after surgery. renal function was calculated using the ckd-epi formula, derived and validated by levey et al. acute kidney injury was defined as an increase in serum creatinine by ≥0.3 mg/dl within 48 hours after surgery. all tests were two-tailed and performed at a significance level of 0.05. statistical software r, version 3.3.1 (2016-06-21), was used for all analyses. results: of the 1330 patients reviewed conclusion: the impact of laparoscopic sleeve gastrectomy on renal function is evident within the first 48 hours after surgery. patients undergoing lsg, especially patients with baseline chronic kidney disease stage ≥2, are at increased risk of developing acute kidney injury in the perioperative setting. the body mass index (bmi), fasting plasma glucose (fpg), glycosylated hemoglobin (hba1c), serum triglyceride, serum cholesterol and blood pressure of all patients were measured before and at 6 months after surgery. the results were collected and analyzed. results: 32 patients who suffered from metabolic disease underwent lsg surgery successfully (mean age 34 years); 12 were male and 20 were female. all 32 patients were obese, with a mean bmi of 40.61±7.66 kg/m² before surgery. 
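the two renal calculations named above (ckd-epi estimated gfr and the ≥0.3 mg/dl creatinine rise defining aki) can be sketched in a few lines; a minimal illustration, assuming the 2009 ckd-epi creatinine equation with the race coefficient omitted, not the study's own code:

```python
# minimal sketch (not the study's code) of the two renal calculations
# mentioned in the abstract: ckd-epi estimated gfr (2009 creatinine
# equation, race coefficient omitted) and the aki definition used
# (serum creatinine rise >= 0.3 mg/dl within 48 h of surgery).

def ckd_epi_egfr(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """estimated gfr in ml/min/1.73 m^2 (ckd-epi 2009)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    return egfr * 1.018 if female else egfr

def has_aki(baseline_cr: float, postop_creatinines_48h: list) -> bool:
    """aki per the abstract: any 48-h value >= baseline + 0.3 mg/dl."""
    return max(postop_creatinines_48h) - baseline_cr >= 0.3
```

for example (illustrative ages and sexes, not cohort data), a 60-year-old man with creatinine 1.25 mg/dl maps to an egfr below the 90 ml/min threshold used for ckd stage ≥2, while a 40-year-old woman at 0.68 mg/dl maps above it.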
among them, 19 patients had type 2 diabetes mellitus (t2dm), 23 patients had hypertriglyceridemia (htg), 7 patients had hypercholesterolemia (hc) and 16 patients had hypertension. the mean bmi of the 32 patients at 6 months after surgery was 30.78±5.51 kg/m² and had decreased significantly (p<0.05). the mean excess weight loss (ewl%) of the 32 patients was 68.97%±26.68% (range 17%-120%) at 6 months after surgery. the average levels of fpg and hba1c in the 19 t2dm patients at 6 months after surgery were 6.52±2.15 mmol/l and 6.89%±1.34% methods: we retrospectively reviewed all patients who underwent bariatric surgery from 2012 to 2015. we assessed kidney function using the chronic kidney disease epidemiology collaboration (ckd-epi) equation and cardiovascular risk using the framingham risk score (frs) equation pre-operatively and at 3 and 12 months of follow-up. our population was divided into two groups: patients with ckd stage ≥2 (gfr <90 ml/min) and patients with normal gfr. significance. results: of the 1,330 patients reviewed, 22.48% (n=299) met the criteria for ckd-epi glomerular filtration rate (gfr) and framingham risk score (frs) calculations. after matching, 200 patients (15.03%) were left to analyze, 70% (n=140) of whom had a laparoscopic sleeve gastrectomy. eighty-six patients (43%) had impaired kidney function (ckd≥2) (group 1) and 114 patients (57%) had a normal gfr (group 2). common demographics and comorbidities after matching are described in table 1. the mean creatinine in group 1 was 1.25±1.23 mg/dl versus 0.68±0.13 mg/dl in group 2 (p<0.05). glomerular filtration rate was 66.70±20.36 ml/min in group 1 and 101.81±9.79 ml/min in group 2. furthermore, when the frs was calculated at 12 months of follow-up, patients with impaired kidney function had an absolute risk reduction of 13.05%, corresponding to a relative risk reduction (rrr) of 37% compared to group 2. the percentage of estimated bmi loss was found to be similar in both groups (69.05±23.86 and 67.06±64.59, respectively; p=0.786). 
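the excess weight loss percentage (ewl%) quoted above follows the standard formula, weight lost divided by excess weight above an "ideal" weight; a minimal sketch, assuming the common convention that ideal weight corresponds to a bmi of 25 kg/m² (the abstract does not state which reference it used):

```python
# %ewl sketch; ideal weight = bmi 25 is an assumption, since the
# abstract does not say which ideal-weight reference was used.

def pct_ewl(preop_weight_kg, current_weight_kg, height_m, ideal_bmi=25.0):
    ideal_weight_kg = ideal_bmi * height_m ** 2
    excess = preop_weight_kg - ideal_weight_kg
    return 100.0 * (preop_weight_kg - current_weight_kg) / excess

# for an illustrative 1.65 m patient at the cohort's mean bmis
# (40.61 preop, 30.78 at 6 months):
preop = 40.61 * 1.65 ** 2
at_6_months = 30.78 * 1.65 ** 2
```

note that the cohort mean of 68.97% cannot be reproduced exactly from mean bmis alone, since %ewl averages are taken per patient; the sketch above lands in the same general range.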
conclusions: bariatric surgery, especially lsg, has a positive impact on kidney function, particularly in patients with chronic kidney disease stage 2 or greater. despite these patients having a higher preoperative cardiovascular risk, they showed similar risk reduction when compared to patients with normal kidney function at 12 months of follow-up the impact of socioeconomic factors and indigenous status jerry t dang only 2 (2.3%) patients underwent urgent conversion for management of complications after sg. three patients had intraoperative complications necessitating blood transfusion. fourteen (16.1%) patients required readmission within 30 days postoperatively. six patients (6.9%) required surgical interventions, including 2 for gastrointestinal leak, 2 for hemodynamic instability, 1 for a cecal perforation, and 1 for a small bowel obstruction. there were no mortalities within the first year of revisional surgery. in 62 patients with bmi >35 kg/m² at the time of revisional surgery, at a median postoperative follow-up of 30 (interquartile range, 14-72) months, a median 6 (interquartile range, 2-9) kg/m² reduction in bmi was observed. overall, 19 (21.8%) patients had persistent type 2 diabetes at the time of revisional surgery. improvement of diabetes was observed in 15 patients (78.9%) after conversion of sg to rygb. among 14 patients with gerd symptoms, subjective symptomatic relief was reported at the last follow-up. conclusion: weight recidivism is the most common indication for revision of sg objective: to evaluate laparoscopic mini-gastric bypass in the treatment of morbid obesity. method: three hundred patients with a mean bmi of 41.8±4.5 kg/m² underwent a laparoscopic mini-gastric bypass between 2011 and 2016. a laparoscopic approach with five trocar incisions was used to create a long narrow gastric tube; this was then anastomosed antecolically to a loop of jejunum 200 cm. 
distal to the ligament of treitz peri-operative and short-term follow-up results up to does age or preoperative bmi influence weight loss after bariatric surgery? one-way anova or the kruskal-wallis test was used to compare continuous data across all groups. subsequent analysis of categorical data was achieved by chi-square or fisher's exact test. statistical significance was accepted as p<0.05. results: a total of 160 patients (20% male) were analyzed. average age and preoperative bmi were 45.8 (10.9) years and 44.8 (8.2) kg/m², respectively. preoperative comorbidities included: diabetes (20.6%), hypertension (46.3%), hyperlipidemia (29.4%), previous myocardial infarction (1.9%), obstructive sleep apnea (30.0%), chronic obstructive pulmonary disease (2.5%), gastroesophageal reflux (30.0%), and tobacco use (8.8%). the asa classes of patients undergoing sg were ii (14.4%), iii (84.4%), and iv (1.3%). the follow-up rate at 6, 12 and 24 months was 86.9%, 44.4%, and 18.8%, respectively. the 30-day mortality and readmission rates were 0% and 4.4%, respectively. the %ewl was not different among age groups at 6, 12 or 24 months for the total, male, or female cohorts. among preoperative bmi groups, %ewl was not different in any cohort at 12 or 24 months, but was different at 6 months for the total cohort (p<0.001) and female cohort (p<0.001), and trended toward significance in the male cohort (p=0.051). the highest %ewl was found in patients with a preoperative bmi of 35-40. there was no difference in 30-day mortality or readmissions among groups. a crp≥5 mg/dl had a sensitivity for a complication of 27% and a specificity of 88%. primary bariatric surgery patients with a post-operative complication had higher crp levels compared to those who did not (4.9±4.9 mg/dl vs 2.8±1.9 mg/dl; p=0.008). there was no difference in crp levels for patients with a 30-day reoperation or readmission. there were no mortalities. 
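the crp figures above (sensitivity 27%, specificity 88% for a ≥5 mg/dl cutoff) are the usual 2×2 confusion-table quantities; a minimal stdlib sketch with illustrative counts chosen only to reproduce those rates (not the study's raw data):

```python
# sensitivity/specificity of a crp cutoff; the counts below are
# illustrative, picked only to reproduce the quoted 27%/88%.

def sens_spec(crp_values, had_complication, cutoff=5.0):
    tp = sum(1 for c, y in zip(crp_values, had_complication) if c >= cutoff and y)
    fn = sum(1 for c, y in zip(crp_values, had_complication) if c < cutoff and y)
    fp = sum(1 for c, y in zip(crp_values, had_complication) if c >= cutoff and not y)
    tn = sum(1 for c, y in zip(crp_values, had_complication) if c < cutoff and not y)
    return tp / (tp + fn), tn / (tn + fp)

# 100 patients with a complication (27 above cutoff),
# 100 without (12 above cutoff)
crp = [6.0] * 27 + [3.0] * 73 + [6.0] * 12 + [3.0] * 88
comp = [True] * 100 + [False] * 100
```

as the abstract notes, the low sensitivity means a crp below the cutoff does little to rule a complication out, even though a value above it is fairly specific.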
conclusions: bariatric surgery patients with elevated post-operative crp levels are at increased risk for 30-day complications. the low sensitivity of a crp≥5 mg/dl suggests that a normal crp methods and procedures: the 28 patients who formed the previously published cohort were contacted and their charts were reviewed. follow-up visits, symptom severity scores, and any subsequent medical or surgical interventions were collected. symptoms were assessed using the symptom severity score (sss) and the gastroparesis cardinal symptom index (gcsi) questionnaires. success was defined as a sss of 2 or less. results: out of 28 original patients, 15 patients (2 males, 13 females) were available for follow-up (2 patients declined participation, 9 were lost to follow-up, 1 patient was deceased, and 1 was excluded after undergoing esophagectomy for an unrelated indication) mbbs 1 ; 1 grant government medical college and sir jj government hospitals methods and procedures: twenty-six nh patients with dm were prospectively randomized to undergo either lrygb or lsg. patients were followed for 2 years with primary end points consisting of total weight loss (twl), percent excess body weight loss (%ebw) and impact on dm as measured by fasting blood glucose (fbs) and hba1c. in addition, baseline, 1-week, and 1-, 6-, 12-, 18-, and 24-month post-operative levels of glucagon-like peptide (glp-1), peptide yy (pyy), leptin, and ghrelin were collected. results: a total of 25/26 patients completed follow-up. the %ebw at 1 year for lrygb and lsg was 54% and 49%, respectively. resolution of dm occurred in 22/25 patients; the remaining three subjects were in the lsg arm. pre-operative fbs in the lrygb and lsg groups was 127 and 131, respectively. pre-operative hba1c in the lrygb and lsg groups was 7.06 and 7.15, respectively. fbs at 1 year for lrygb and lsg was 93 and 110, while hba1c for lrygb and lsg was 5.89 and 6.54, respectively. 
a consistent post-operative decrease in fbs was only seen in lrygb. lrygb ghrelin percentages increased at 6, 12, and 18 months, while levels decreased in lsg. leptin percentages decreased in both groups. the pyy levels remained relatively unchanged in both groups. lrygb glp-1 levels increased at 1 week, 6, 12, and 18 months. lsg glp-1 trends were similar except at 18 months, where glp-1 levels decreased. conclusion: lrygb and lsg resulted in equivalent post-surgical weight loss and resolution of dm in the nh population video-assisted thoracoscopic thymectomy (vats) has emerged as a minimally invasive alternative to the standard transsternal approach. we present herewith the surgical and neurological outcomes after vats. their operative time, blood loss, conversion rate and post-operative parameters like intensive care unit (icu) stay, intercostal drainage (icd) indwelling time, and hospital stay were recorded. neurological outcomes were assessed based on the myasthenia gravis foundation of america (mgfa) post-intervention status classification. statistical analysis was done using stata 14 software. results: ninety patients underwent thoracoscopic thymectomy during the study period. vats was done through a right approach in 47 (53.4%), left approach in 33 (38%), bilateral approach in 6 patients (7%) and subxiphoid approach in 2 (2.2%). there was conversion to an open approach in 2 (2.2%) patients due to dense adhesions at west china hospital of sichuan university were included. all of the operations were performed by a single skilled surgeon. we divided our patients into two groups based on whether isao was used. of them, 28 patients received isao for lps and 26 patients received lps without isao. surgical skills and safety were evaluated. results: there were no significant differences in the preoperative characteristics of the two groups. 
significantly less intraoperative blood loss (78.1±34.0 ml vs 177.5±81.3 ml; t=−6.4, p=0.001) was observed in the isao group conclusions: isao is a technically feasible, safe technique for patients receiving lps, and it represents an effective method to decrease intraoperative blood loss. p686 modular laser-based endoluminal ablation of early cancers: in-vivo dose-effect evaluation and predictive numerical modelling giuseppe endoscopic submucosal dissection enables en-bloc removal of early gastrointestinal neoplasms. however, it is technically demanding and time-consuming. laser-based ablation (la) techniques are limited by the lack of depth penetration control and thermal damage (td) prediction. our aim was to evaluate a predictive numerical modelling (pnm) of the td to preoperatively select the optimal power and exposure time enabling a controlled ablation down to the submucosa (sm). additionally, the ability of confocal endomicroscopy (ce) to provide information on the td was assessed. at histology, there was increased damage depth with higher energy (j) applications. the r value at 0.5 j was 0.57±0.21, and was significantly lower when compared to energies from 15 j (r=1.2±0.3; p<0.001) up to 30 j (1.33±0.31; p<0.001). safe m and sm ablations were achieved applying lower power settings (0.5 and 1 w) at different exposure times, leading to an mp impairment in only 5 and 20% of the cases, respectively. ce provided relevant images of the td, consisting of architectural distortion and disappearance of gland contours. the predicted damage depth we also analyzed 10 early gastric cancer patients who received lpg-ip with 8 cm jejunal interposition. the anastomosis procedure was the overlap method for esophagojejunostomy and gastrojejunostomy, and feea for jejuno-jejunostomy. results: the comparison between otg/opg-ip shows no significant difference in perioperative complications and qol scores, and significantly smaller body weight loss in the opg-ip group. 
the lpg-ip group also shows good results in short-term outcomes. consideration: as the comparison in open surgery implies superiority of jejunal interposition, we have introduced lpg-ip. esophagogastrostomy after proximal gastrectomy is simple but carries a risk of severe gerd symptoms; no optimal procedure for reconstruction after proximal gastrectomy has yet been established. although laparoscopic jejunal interposition is a relatively complicated procedure, it can be safely performed in combination with common anastomosis techniques. conclusion: body weight loss in the otg-ip group is smaller compared to the otg group 938 consecutive patients with early gastric cancer underwent solo spdg (n=103) and mldg (n=835) performed by the same surgical team. solo spdg can be defined as practice in which a surgeon operates alone using a camera holder. mldg usually requires two or three surgical assistants. the inclusion criteria in this study were (i) pathologically proven stage i-ii gastric cancer, (ii) no other malignancy, (iii) more than d1 lymph node dissection, and (iv) r0 surgery. one-to-two propensity score matching was performed to compensate for the differences between the two groups. results: after propensity score matching, solo spdg (n=99) and mldg (n=198) patients were selected. mean operation time (120±35.3 vs 178±53.4 mins, p=0.001) and estimated blood loss (ebl) (24.6±47.4 vs 46.7±66.5 ml, p=0.001) were significantly lower in the solo spdg group than in the mldg group. the hospital stay and the use of pain control were similar between the two groups. 
although the initiation of a semi-fluid diet was similar, the time to first flatus was earlier in the solo spdg adhesional omental hernia: a case report an unexpected cause of small intestinal obstruction in crohn's disease strangulation inguinal hernia due to an omental band adhesion within the hernia sac: a case report omental adhesion, intestinal herniation, and unexpected death in the elderly small bowel obstruction secondary to greater omental encircling band-unusual case report the median operative time was 281 min. the median postoperative hospital stay was 12.6 days. histological examination of the tumors revealed 27 carcinomas, 12 adenomas, and 1 carcinoid. complications occurred in 8 (23%) patients, viz., ssi (two patients), pancreatic fistula (two patients), bleeding (two patients), passing failure (one patient), and cholangitis (one patient). however, no severe postoperative complications (clavien-dindo classification grade 3 or higher) were reported in these cases. conclusion: our cases showed that duodenal tumor resection using lecs enables curability through a minimally invasive approach. this study aimed to compare the outcomes of tltg with those of latg by using a meta-analysis. methods: we searched pubmed, embase, and the cochrane library in may 2016 to locate prospective or retrospective studies on surgical outcomes of tltg versus latg. the outcome measures were postoperative complications such as anastomosis leakage and anastomosis stenosis, operation time, blood loss, time to flatus, time to first oral intake, and postoperative hospital stay endoscopic thyroid lobectomy: our early experience at tertiary care hospitals of lahore univariate analysis was performed followed by logistic regression to identify independent predictors for the primary outcome. results: forty-six out of 555 (8%) patients referred for gp required jt insertion to treat malnutrition. etiology of gp included: 67% idiopathic, 22% diabetic, 11% post-surgical. 
thirty-six patients (78%) reported severe daily symptoms. twenty-five patients (55%) had a successful return to oral intake while 21 (45%) required prolonged feeding access, reinsertion of a jt or tpn initiation. on multivariate analysis, pyloroplasty (p=0.003, or 6.6) and being married (p=0.043, or 3.8) were found to be independent predictors of successful discontinuation of tube feedings. on subgroup analysis, 4-hour gastric emptying time normalized after pyloroplasty (p=0.008) in patients who had a successful re-initiation of oral intake, while persistent gastric emptying refractory to pyloroplasty was associated with failure. the group of patients who underwent pyloroplasty did not differ in terms of demographics, marital status (p=0.192) or preoperative gastric emptying (p=0.492) from those who did not. gp etiology (p=0.585), psychiatric conditions (p=0.277) and substance abuse laparoscopic transabdominal repair of morgagni hernia rebekah macfie average procedure length was 68.6 minutes. average hospital length of stay was 0.99 days, with all patients tolerating a regular diet prior to discharge. our 30-day readmission rate was 1/75 (1.3%). 5/75 (6.7%) patients required repeat egd evaluation for either recurrence of symptoms or an impacted food bolus. at 6-week follow-up, 25/75 patients (33%) complained of dysphagia and 65/75 patients (87%) had eliminated ppi from their daily medication regimen. at 6-month follow-up, 13/62 patients (21%) complained of dysphagia and 54/62 patients (87%) had eliminated ppis. at 1-year follow-up, 5/44 patients (11%) complained of dysphagia and 39/44 patients (89%) had eliminated ppis. 
conclusion: as a recently introduced surgical option, no long-term data exist detailing the linx procedure's ultimate success rates and complication profile mini-laparoscopic vs traditional laparoscopic cholecystectomy: preliminary report deniz atasoy since the introduction of minilaparoscopic cholecystectomy (mlc) in 1997, it has gained little interest, which could be attributed to the decreased durability of the reduced-size instruments, poorer optical resolution and the smaller jaws of the instrument tips. our aim was to compare the outcomes of mlc with traditional laparoscopic cholecystectomy (tlc) one patient developed choledocholithiasis on postoperative day one and, after ercp, the course was uneventful. the other patient developed choledocholithiasis and acute pancreatitis on the sixth postoperative day and was treated conservatively. the stone in the ampulla had passed by itself without a need for ercp single-incision plus one additional port laparoscopic surgery for colorectal cancer with transanal specimen extraction: a comparative study two patients had a previous attempt at hernia repair, one with mesh. one patient did not have any immunosuppression due to hiv infection, whereas the others were on cyclosporine, tacrolimus and/or mycophenolate mofetil. there were two laparoscopic and two open cases; mean operative time was 169.25 minutes (111-311) and mean blood loss was 85 ml (20-200). meshes used were biological porcine dermis in one case and polypropylene with an absorbable hydrogel barrier in three cases. mean mesh length and width were 27 cm (20-33) and 28.25 cm (25-33), respectively. one patient underwent a component separation, though none of the patients had the fascial defect closed. there were no intra-operative complications. three patients were readmitted, for hyperkalemia, abdominal pain, and seroma respectively. neither recurrences nor reoperations were reported. 
mean follow-up was 75.5 days (17-136) conclusion: post liver transplant incisional hernia repair is feasible either laparoscopically or in an open fashion. because of the size and location of the defect, fascial closure is unlikely to be achievable. the use of standard techniques and materials gives results similar to those in the non-transplant population. p722 technique of esophagojejunostomy using orvil after laparoscopy assisted total gastrectomy for gastric cancer shinichi sakuramoto there was a significant difference in mortality between the two time-periods: 10/38 patients died during 2000-2005 and 1/28 died during 2010-2015 (p=0.01). those who died were significantly older than the survivors (p=0.01). five of the patients who died in the earlier group died without any intervention. 4/5 of those who had an acute open necrosectomy died. surgical necrosectomy correlated significantly with mortality (p=0.002). the only patient who died in the recent group died without any intervention. none of the 11 patients receiving minimally invasive drainage in this group died. until now, only 94 cases in adults and fewer than 20 cases in children have been reported in the world literature, with surgical management being the only option. an innovative, minimally invasive laparoscopic excision of the abdominal sac was performed and the scrotal component was managed by jaboulay's procedure. this is probably the first case report in the world literature describing laparoscopic management of hydrocele-en-bissac. case report: a 50-year-old male presented with complaints of bilateral hydrocele and swelling in the right lower abdomen for one year. 
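the mortality comparison above (10/38 vs 1/28 deaths, p=0.01) is a 2×2 comparison of proportions, typically tested with fisher's exact test when counts are small; a stdlib-only two-sided sketch (an illustration of the standard test, since the abstract does not say which test was used):

```python
# two-sided fisher exact test for a 2x2 table, summing hypergeometric
# probabilities no larger than that of the observed table. layout:
#   [[died_group1, survived_group1], [died_group2, survived_group2]]
from math import comb

def fisher_exact_2x2(a, b, c, d):
    row1, col1, n = a + b, a + c, a + b + c + d
    def p_table(k):  # probability of k "deaths" falling in group 1
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))
```

applied to the abstract's counts, `fisher_exact_2x2(10, 28, 1, 27)` yields a p value below 0.05, consistent with the reported significance.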
computed tomography of the abdomen revealed an encysted hypodense lesion with enhancing walls along the right side of the pelvis, anterior to the psoas muscle and extending through the internal ring into the right inguinal region up to the scrotal sac, measuring 14.1 cm × 3.6 cm, suggestive of an encysted hydrocele of the cord associated with hydrocele of both scrotal sacs excessive gastric resection may result in postoperative deformity of the stomach, with consequent gastric stasis of food. to minimize the resection of stomach tissue, especially for lesions close to the esophagogastric junction or pyloric ring, we have developed laparoscopic wedge resection (lwr) with the serosal and muscular layers incision technique (samit) for gastric gastrointestinal stromal tumors. samit is simple and does not require special devices. purpose: the purpose of this study was to clarify whether lwr with samit for gastric gists is technically feasible in terms of short-term outcomes methods: all patients who underwent lsg in our department between 4/2014 and 12/2016 were evaluated for bleeding complications after implementation of an anti-bleeding policy: blood pressure was controlled to 140 mmhg during stomach resection and the staple line was reinforced throughout its length with a running 3-0 absorbable v-loc suture. drains were used selectively. results: out of 308 patients who underwent the procedure, 9 (2.9%) suffered hemorrhagic complications: 7 patients had a hb drop >2 g%. 7 patients received 1-3 packed red blood cell units. no patients were re-operated for bleeding. 2 patients were readmitted for infected hematoma and had ct-guided drainage. one patient (0.3%) suffered from a leak. conclusion: implementation of an anti-bleeding policy in lsg is very effective. there is no need to use expensive buttress material to achieve these results. drains can be used selectively. 
the impact of this policy on the leak rate needs to be fifty procedures immediately prior to, immediately after, and eight months after completion of training were included for each endoscopist. data were extracted from the electronic medical record and entered into spss for analysis. student's t-test was used to compare groups for continuous data, and chi-squared tests were used for categorical data. data were collected for 2533 procedures. patient groups pre, post, and eight months after csi training were comparable in terms of age (60.1 yrs, 60.3 yrs, and 60.1 yrs), sex (56 it's in the bag; can stoma output predict acute kidney injury in new ostomates? robert fearn colostomy output stabilised rapidly, whilst ileostomy output increased progressively throughout the first 7 postoperative days, as can be seen in chart 1. twelve patients (18%) developed aki during the index admission. length of stay was significantly greater in the aki group at 34 (95% ci 30-38) days vs 15 (11-19) days. the highest daily stoma output was non-significantly higher in the aki group, 1,612 ml (95% ci 636-2,588 ml) vs 1,122 ml (857-1,387 ml), as was mean daily stoma output, at 800 ml (337-1,263 ml) vs 549 ml (312-786 ml) (chart 2). seventeen patients (25%) were readmitted for any reason, 7 (9%) specifically for aki. in total, 13 patients (19%) developed aki within three months of their stoma surgery, only 3 of whom had developed aki during their index admission. all patients who developed aki following their index admission were ileostomy patients. conclusion: acute kidney injury in new stoma patients is associated with prolonged hospital stay and readmissions with associated morbidity and healthcare costs 2315 consecutive laparoscopic bariatric operations were performed, including 706 primary roux-en-y gastric bypasses (lrygb), 429 primary adjustable gastric bands (lagb), 901 primary sleeve gastrectomies (lsg) and 279 secondary bariatric surgeries and revisions. 
all bariatric procedures were approached laparoscopically (1814 procedures were stapled and 501 were non-stapled). the mean patient age was 38 years (16-73), females represented 85%, and the mean bmi was 48.2 kg/m² (35-73). there were no perioperative mortalities, no conversions to open surgery and no intraoperative blood transfusions. there were two major intraoperative complications (hypopharyngeal perforation-1, malignant hyperthermia-1). mean hospital stay was 1.45 days (1-40 days). eleven patients (0.47%, 10 in the gastric bypass group and one in the lsg group) required 30-day reoperations for postoperative complications (staple line gastrointestinal bleeding-5, anastomotic leak-1, strangulated port site hernia-1, unexplained severe abdominal pain-1, intestinal obstruction-2, and intraabdominal abscess-1). there were no long-term (1-year) mortalities in patients who required reoperation. there was one transfer to another institution. safety continued to improve, with no complications in the most recent 127 consecutive stapled procedures and a mean hospital stay of 1.1 days (1-4 days). detailed subgroup analyses will be provided. conclusions: with well-controlled and structured pre-, intra-, and post-operative care, laparoscopic bariatric surgery can be performed with minimal reoperations and zero mortality in a teaching institution does concomitant placement of a feeding jejunostomy tube during esophagectomy affect quality outcomes? md, facs; icahn school of medicine at mount sinai background: placement of a feeding jejunostomy tube (fj) is often performed during esophagectomy. few studies, however, have sought to determine whether concomitant placement affects postoperative outcomes of esophagectomy of these, ldg was performed in 280 patients and odg in 159. we compared elderly patients (aged 75 years or more) with younger patients for each operative procedure. 
(ldg: elderly 71, younger 209; odg: elderly 73, younger 86) preoperative comorbidity and surgical results were analyzed. multivariate analysis was performed to detect predictive factors for postoperative complications. results: in both the ldg and odg groups, the operative time and amount of blood loss did not differ, while comorbidity was more common in elderly patients than in the non-elderly, and there were fewer retrieved lymph nodes in elderly patients. the incidence of all postoperative complications did not differ between the groups for either procedure, and there were no significant differences in the time to first flatus or postoperative hospital stay. however, in terms of specific postoperative complications, respiratory complications were observed significantly more frequently in the elderly group with odg (p=0.034), but not with ldg. on multivariable analysis, age was not an independent predictor of postoperative complications. conclusion: odg for elderly patients requires attention, particularly regarding postoperative respiratory complications. ldg is a safe and less invasive treatment for gastric cancer in elderly patients, who have greater comorbidity. p739 examining the role of preoperative ineffective esophageal motility in laparoscopic fundoplication outcomes tyler hall there were no significant differences in complications or recurrence rates. preoperative quality of life measures did not vary between the cohorts, nor did postoperative scores at three weeks or six months. patients with 100% ineffective clearance exhibited worse gerd-hrql scores one and two years postoperatively conclusion: preoperative ineffective esophageal motility was shown to result in comparable short-term quality of life following ars. 
however, gerd-hrql scores at one and two years showed worse outcomes in patients with preoperative iem robotic surgery as part of oncologically adequate ipmn treatment: indications, short and long term results federico gheza eligible patients who had minimally invasive surgery were stratified into multiport laparoscopic and robotic cohorts, and included if they had poi/sbo after surgery. comparative analysis assessed the demographic, perioperative, and postoperative outcomes. the main outcome measures were the incidence rate, associated variables, and time to ileus/sbo across the mis platforms. results: during the study period 4161 total patients were reviewed-3856 laparoscopic and 305 robotic. postoperatively, 512 (13.28%) laparoscopic and 49 (16.07%) robotic patients suffered from poi/sbo laparoscopic sbo occurred significantly later after the index procedure than robotic sbo (24 conclusions: the rate of poi/sbo is considerable and comparable across laparoscopic and robotic approaches. however, there are distinct differences in the severity, time to occurrence, and impact on quality measures, such as los and readmissions, between laparoscopic and robotic approaches. this information could be an important factor in which approach the surgeon chooses laparoscopic surgical procedure was standard, using a laparoscopic linear stapler. responses to surgery were evaluated a month after the operation based upon the american society of hematology 2011 evidence-based practice guidelines for itp. results: there was no open conversion in this study. the mean operation time and blood loss were 151 min and 64 g, respectively. no case required blood transfusion during or after the operation. with regard to complications, one patient (4%) had a postoperative pancreatic fistula that did not require percutaneous drainage. positive responses, including complete and partial remissions, were achieved in 78% (18/23). 
the mean follow-up duration was 89 months, and the 5-, 10-, and 15-year relapse-free survival rates were all 94%. conclusions: the present study demonstrated that ls for itp can provide good long-term outcomes. two cases of conversion from sp-c to open surgery were excluded. all procedures were followed postoperatively for a minimum of 6 months, and wound complications such as bleeding, fat lysis, infection, or hernia were recorded. patients were classified as having a wound complication or not. results: pure transumbilical sp-c was completed in 94.6%, additional trocars were used in 5.0%, and the rate of conversion to open surgery was 0.4%. after a median follow-up of 32.1 (range, 6-50) months. few cases were performed with hand assist, notes, or single-incision. utilization of robotics was highest for bpd/ds (227 of 1,051 cases, 21.6%). the greatest numbers of robotic-assisted cases were sleeve gastrectomy (5,539 of 92,406, 5.99%) and gastric bypass (2,904 of 36,076 cases, 7.18%). relatively few operations were converted to a different approach (see table). operative time was longer when using robotic approaches for both sleeve (74.01 vs 102.39 minutes, p<0.0001) and bypass (116.62 vs 152.68, p<0.0001). postoperative los was no shorter when using robotic assistance (see table). unadjusted 30-day outcomes revealed slightly higher rates of readmission for both operations when using robotic assistance (see table), and slightly higher rates of complications after robotic sleeve gastrectomy. p756 comparison of perioperative and survival outcomes of laparoscopic versus open gastrectomy after preoperative chemotherapy: a propensity score-matched analysis. adjustment for potential selection bias in the surgical approach was made with propensity score-matched (psm) analysis. perioperative and survival outcomes were compared between the lag and og groups. results: in total, 174 patients were identified from the database.
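the one-to-one propensity score matching described in the abstract above can be illustrated with a short sketch. this is a minimal, hypothetical example, not the authors' actual method: the function name, the caliper value, and the toy scores are assumptions. each lag patient is greedily paired with the og patient whose fitted propensity score is closest, within a caliper.

```python
def propensity_match(treated, control, caliper=0.05):
    """greedy 1:1 nearest-neighbor matching on a precomputed
    propensity score. `treated` and `control` are lists of
    (patient_id, score) pairs; all names here are illustrative."""
    unmatched = list(control)
    pairs = []
    # match each treated patient to the closest-scoring control
    for pid, score in sorted(treated, key=lambda p: p[1]):
        if not unmatched:
            break
        best = min(unmatched, key=lambda c: abs(c[1] - score))
        # only accept the pair if the scores are within the caliper
        if abs(best[1] - score) <= caliper:
            pairs.append((pid, best[0]))
            unmatched.remove(best)
    return pairs

# toy example: 3 lag and 4 og patients with made-up fitted scores
lag = [("lag1", 0.30), ("lag2", 0.52), ("lag3", 0.71)]
og = [("og1", 0.29), ("og2", 0.55), ("og3", 0.90), ("og4", 0.50)]
print(propensity_match(lag, og))  # lag3 stays unmatched (outside caliper)
```

real psm analyses typically fit the score with logistic regression and check covariate balance afterwards; this sketch only shows the pairing step.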
after psm analysis, 45 patients who underwent og were one-to-one matched to 45 patients who underwent lag in the setting of nact. these two groups had similar outcomes in terms of intra- and postoperative complications and 3-year overall survival. however, the lag group had a longer operation time (p=0.031) and lower estimated blood loss (p=0.001). moreover, compared with patients in the og group, those in the lag group had fewer days until first ambulation. conclusion: the present study indicates that lag performed by well-qualified surgeons for treatment of locally advanced gastric cancer after preoperative chemotherapy is as acceptable as og in terms of oncological outcomes. p757 outcomes of laparoscopic antireflux surgery for gastroesophageal reflux disease: effectiveness and economic benefits. kyung won seo, phd; kosin university college of medicine. purpose: laparoscopic antireflux surgery (ars) is an alternative treatment option for gastroesophageal reflux disease (gerd) worldwide. however, the effectiveness and economic feasibility of ars versus medical treatment are unknown. this study was performed to evaluate the effectiveness and economic benefits of ars. methods: nine patients with gerd were treated using laparoscopic ars between 2012 and 2016. surgical results and total cost for surgery were reviewed. results: seven men and 2 women were enrolled. preoperatively, typical symptoms were present in 9 patients, while atypical symptoms were present in 5 patients. one patient underwent partial fundoplication due to absent peristalsis and the others underwent nissen fundoplication. postoperatively, typical symptoms were controlled in 9 of 9 patients, while atypical symptoms were controlled in 4 of 5 patients. overall, at 6 months after surgery, 3 patients reported partial resolution of gerd symptoms, with 6 achieving complete control. the average cost of ars for the nine patients was 5840 usd.
conclusion: laparoscopic ars is effective for controlling typical and atypical gerd symptoms. the cost of ars may be more economical over the long term compared to medical treatment. since laparoscopic surgery is reported to affect respiration and circulation, the indication of lag for elderly patients should be considered carefully. the indication of lag for elderly patients, however, is still controversial. the aim of this study is to assess the safety and validity of lag for elderly patients. method: medical records were retrospectively reviewed for 94 patients who underwent lag for gastric cancer between 2009 and 2016. in this study, patients over 75 years of age were defined as elderly. patients were divided into two groups according to age: group a (age ≥75, n=28) and group b (age <75, n=66). preoperative characteristics and postoperative outcomes were analyzed. two-tailed student's t-test and/or pearson's chi-square test were used for statistical analysis. results: there were no significant differences in male/female ratio or body mass index between the two groups. the number of patients whose asa physical status was ≥3 and/or performance status was ≥3 did not differ, nor did the rates of total gastrectomy (14.3 vs 22.7%, p=0.351) and proximal gastrectomy (0 vs 1.5%, p=0.246). intra-operative blood loss, operating time, and number of harvested lymph nodes did not differ between the two groups. as for postoperative complications, no significant differences were observed between the two groups in intra-abdominal abscess (7.4 vs 6.1%, p=0.844) or anastomotic leakage (0 vs 3.0%, p=0.352). in addition, respiratory and cardiovascular complications were not observed in elderly patients. the incidence of clavien-dindo classification ≥grade 3 (3.6 vs 3.0%, p=0.891) and postoperative hospital stay (10.5 vs 10.0 days, p=0.985) did not differ.
conclusion: short-term outcomes of lag in elderly patients were not different from those in young patients. the essential role of the transcystic duct tube (c-tube) during laparoscopic common bile duct exploration (lcbde). towakai hospital. introduction: laparoscopic common bile duct exploration (lcbde) is a standard surgical procedure for the treatment of common bile duct stones (cbds). however, there are some problems associated with cbd drainage after operations, even when performing primary closure. therefore, we developed a new drainage tube, the c-tube, which contributes to shorter drainage periods and reduces perioperative complications. method: the c-tube is a type of bile drainage tube which is fixed to the cystic duct with an elastic band. closing the duct with the elastic band as soon as the c-tube is removed prevents bile leakage from the stump of the cystic duct. the essential roles of this tube include: 1. assisting suturing during operations; 2. use during intra- and post-operative cholangiography; 3. assisting post-operative endoscopic sphincterotomy when necessary. we included patients from 2 years prior to our intervention and compared these with patients who had follow-up after implementation. we excluded patients having revisions, gastric banding, and patients whose primary surgeon had left during the data collection period. we analyzed demographics and follow-up rates at 1, 3, 6, 12, and 24 months. the chi-square test was used to evaluate significance, and results were corrected for multiple comparisons. results: 435 patients met inclusion criteria in the pre-intervention group, and 836 in the post-intervention group. of those, 418 were analyzed for the 2-year follow-up visit. the pre-intervention group had 62 males, 373 females, and an average age of 37. approximately 1/3 of the surgeries performed were sg, 2/3 were rygb. the post-intervention group had 127 males, 709 females, and an average age of 38.
approximately half of the post-intervention cases were sg while the rest were rygb. conclusion: bariatric surgery is a useful tool in aiding weight loss and improving comorbidities. it is essential that patients receive long-term follow-up and monitoring to achieve these goals. our program now uses a system of phone call reminders for scheduled visits, as well as calls and letters for annual visits. surgeon's evaluation of an intraoperative microbreaks web-app. workload questions were modified nasa task load index items (physical demand, mental demand, and complexity) and procedural difficulty on 0-10 (10=maximum impact) scales. primary outcomes were the impact of microbreaks on surgeons' physical performance, mental focus, pain/discomfort and fatigue, with checkboxes for improved, no change and diminished. secondary outcomes were the impact of microbreaks on distraction level and workflow disruption using a 0-10 (10=maximum impact) scale. descriptive statistics were calculated for medians and interquartile ranges (iqr) of these responses. results: seven surgeons (3 male, 4 female), with a median (iqr) surgical experience of 8 (5.5, 17) years, completed ten surgical days with a median (iqr) operative duration of 367 (283, 533) minutes/surgical day. the median number of microbreaks/surgical day was 6. the median (iqr) for mental demand, physical demand, surgical complexity and difficulty are shown in table 1. following each surgical day, surgeons reported 10/10 improved physical performance. situs inversus totalis (sit) is inherited in an autosomal recessive fashion with complete abnormal transposition of thoracic and abdominal viscera. its incidence varies from 1 in 1400 to 1 in 35000 live births. for those undergoing surgery, the laparoscopic approach is preferred as it avoids inappropriate incisions. however, due to mirroring of the viscera, the surgeon faces constant visuospatial disorientation during laparoscopy.
p764 ''how to be a surgeon and not die trying'': control of basic physiological parameters in the perioperative phase. second main variable: blood pressure (bp) with a manual measurement cuff. preoperative bp and immediate postoperative bp were measured; we were not able to measure intraoperative bp due to the lack of consent of the surgeons involved for the use of devices other than the heart rate band. secondary variables: years from graduation, years of practice, age, body mass index (bmi), number of medical co-morbidities, number of jobs, sleeping hours the night before. we took measurements from surgeons during a laparoscopic cholecystectomy. results: the mean preoperative heart rate was 77.8 bpm. the mean minimum intraoperative heart rate was 86 bpm. the mean maximum intraoperative heart rate was 115.2 bpm (86% with tachycardia during surgery). the mean immediate postoperative heart rate was 89.5 bpm. the mean heart rate 15 minutes after the postoperative phase was 80.1 bpm. in the immediate preoperative phase, 53% of surgeons had elevated bp levels (usually normotensive). articles were randomly selected and the gender of the first and last authors determined. results: of the bariatric surgery publications reviewed, only 5% of first authors and 7.5% of last authors were female surgeons. even though the proportion of female authors has increased over time, this is not proportional to the increase in the number of female surgeons or surgery residents (figure 1). discussion: female surgeons are under-represented in bariatric surgery research. the number of female surgeons and residents has shown a continuous upward trend over the last few decades. our survey also included the validated quick-dash (disabilities of the arm, shoulder, and hand) questionnaire for upper-limb symptoms and the ability to perform certain physical activities.
the quickdash is scored in two components: the disability/symptom score and the optional work module, which represent the impact of disability on daily activities and work responsibilities, respectively. both scores range from 0-100, with a higher score indicating greater disability. surgeons were grouped according to surgical focus (open, lap, or ra), and comparisons were made between groups. surveys with more than 10% of responses missing were excluded. statistical analyses were done using spss 23.0, with α=0.05. results: 156 completed surveys were evaluated (open: n=23, lap: n=96, ra: n=37). the survey response rate was 50%. 76.9% of respondents were general surgeons, and the mean age was 45±9.49 years. surgeons reported an average of 30±16.7 cases performed per month (ra: 51.4%, p=0.253). likewise, there were no differences in the mean disability scores. similarly, there was a positive correlation between mean work scores and reported pain in the upper limb for lap and ra, both p<0.001. conclusions: this nationwide survey revealed a similar prevalence of upper-limb pain among surgeons performing open, laparoscopic and robotic-assisted procedures. likewise, similar disability scores were reported between the three surgical groups. older surgeons performing laparoscopic and robotic-assisted approaches reported a higher impact of upper-limb problems interfering with their daily activities, unlike open surgeons. among all surgeons who reported upper-limb pain, laparoscopic and robotic surgeons were more likely to report that this pain interferes with their work activities. an analysis of subjective and objective fatigue between laparoscopic and robotic surgical skills practice. p771 3d laparoscopic versus robotic gastrectomy for gastric cancer: comparisons of short-term surgical outcomes. lin chen, xin guo. 164 patients who underwent 3d-lag (n=99) or rag (n=65) for gastric cancer were enrolled.
the clinicopathological factors and short-term surgical outcomes were compared with retrospective analysis. results: the clinicopathological factors between the two groups were well matched. postoperative recovery factors, including days to first flatus, days to eating a liquid diet, and hospital stay, were similar. the rates of postoperative complications between the two groups showed no statistical differences. in the subgroup of patients with total gastrectomy, 3d-lag had less blood loss and shorter operative time than rag (p=0.006 and p<0.001), while for distal gastrectomy, blood loss and operative time showed no statistical differences. conclusions: this study suggests that 3d-lag is a novel and acceptable surgical technology in terms of surgical and oncological outcomes. 3d-lag is a promising approach for gastric cancer therapy. methods: patients who underwent robotic surgery in turkey between the beginning of 2013 and the first half of 2017 were included. data were obtained from a prospectively maintained database. patient, surgeon and hospital identifiers were encrypted. parameters were operation type, operation year, robotic system used (s, si, xi), hospital volume and surgeon volume. high-volume robotic colorectal hospitals and surgeons were defined as those with caseloads within the fourth quartile (75th-100th percentile) based on the median value. results: there were 799 colorectal procedures. 47 surgeons performed robotic colorectal surgery at 25 hospitals. 341 (42.7%) and 458 (57.3%) procedures were performed with the s-si and xi platforms, respectively. 2 hospitals have both the si and xi platforms; 4 hospitals currently use the si and 8 the xi platform. the number of robotic colorectal operations increased gradually by year (figure 1). the median numbers of colorectal procedures were 13 (range 1-171) per hospital and 5 (range 1-151) per surgeon. among the hvrcs, the numbers of si and xi users were 7 and 5, respectively.
the surgeons who performed more than 11 procedures continued to use the robot in their practice, except one surgeon who stopped at 27. only 2 left colectomies and no right colonic resections were performed before introduction of the xi platform. first 100 robotic cases and implementation of a robotics curriculum in a general surgery residency. domenech asbun. armonk ny) and utilized student's t-test and chi-square. we also performed a linear regression analysis to determine the effect of or time, robotic surgery, and diagnosis on operating room costs and postoperative length of stay. results: 37 laparoscopic and 14 robotic cholecystectomies were performed. demographic parameters (age, gender, medical comorbidities, preoperative albumin and bmi, surgical history and smoking) were comparable. primary diagnosis was significantly different (chi-square p<0.05), driven by more acute cholecystitis in the laparoscopic group. 0/14 robotic cases and 5/37 (13.5%, p=0.305) laparoscopic cases were converted to open (2 for adhesions, 2 for failure to progress, and 1 for visualization of anatomy). after adjusting for or time and diagnosis, robotic surgery was associated with a $980 increase in costs. robotic surgery is independently associated with increased or cost, but individual hospital systems must decide if this additional cost outweighs increased robot utilization and training benefits for physicians and staff. robotic abdominal wall hernia repairs: technical considerations and lessons learned. inguinal hernia repairs (ihrs) comprised the majority (59.3%) of cases (71.9% male, mean age 55.5, mean bmi 26.2). there were 103 unilateral ihrs with an average operative time of 97.2±55.5 min and an average ebl of 19.8 ml. there were 18 bilateral ihrs with an average operative time of 132.4±49.9 min and average ebl of 19.8 ml. thirteen ihrs were combined with umbilical hernias and two with incisional hernias.
average operative time for combined procedures was 152.8 min and average ebl was 29.7 ml. fifty-five incisional hernias were repaired robotically (56.3% male, mean age 54.5, mean bmi 28.9), four of which were retrorectus and two of those required transversus abdominis release. median hernia size was 6 cm (2-13 cm). mean operative time was 132.9±57.4 min and average ebl was 31.5 ml. twenty-three ventral/umbilical hernias were repaired robotically (52.2% male, mean age 45.4, mean bmi 28.8, median size 2.5 cm (1-4 cm), mean operative time 89.7±29.5 min, average ebl 13.3 ml). one spigelian hernia (operative time 99 min, ebl 20 ml) and one parastomal hernia (operative time 117 min, ebl 200 ml) were repaired robotically. there were no major complications and only 1 groin seroma requiring percutaneous aspiration. nine patients required. conclusion: this study demonstrates improved outcomes of robotic inguinal hernia repair compared to an open or laparoscopic approach. robotic hernia repair showed overall lower 30-day complication and readmission rates, and shortened los, while the open approach had the highest rate of opiate use. we retrospectively investigated 186 consecutive overweight gc patients (bmi ≥24) who underwent distal gastrectomy with d2 lymphadenectomy (81 rag and 105 lag) performed by two surgeons. the clinicopathological and surgical features were compared between groups. the cutoff point between the initial phase (phase i) and stable phase (phase ii) was determined by the cumulative sum (cusum) curve of operation time. results: generally, surgical outcomes including postoperative complication rate, duration of postoperative hospital stay and lymph node harvest in the overweight patients were comparable between the rag and lag groups. the cutoff determining phases i and ii according to the cusum curve for the rag group was 15 and 10 cases for surgeons a and b, respectively.
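the cusum learning-curve cutoff described above can be sketched briefly. this is a minimal illustration under the conventional definition (the cumulative sum of each case's deviation from the overall mean operation time, with the cutoff taken at the peak of the curve); the function names and toy data are hypothetical, not the study's.

```python
def cusum(times):
    """cumulative sum of deviations of each case's operation time
    (minutes) from the overall mean; by construction the curve
    returns to zero at the last case."""
    mean = sum(times) / len(times)
    curve, total = [], 0.0
    for t in times:
        total += t - mean
        curve.append(total)
    return curve

def cutoff_case(times):
    """1-based case number at which the cusum curve peaks,
    marking the end of the initial (learning) phase."""
    curve = cusum(times)
    return max(range(len(curve)), key=lambda i: curve[i]) + 1

# toy series: early cases run long, later ones shorten
ops = [260, 250, 245, 230, 200, 190, 185, 180]
print(cutoff_case(ops))  # prints 4: the peak ends the initial phase
```

early cases above the mean push the curve up; once per-case times drop below the mean the curve turns down, so the peak separates phase i from phase ii.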
comparison analysis showed that the operation time of phase ii rag was significantly shorter. robotic-assisted transabdominal preperitoneal inguinal herniorrhaphy: a single-center experience including perioperative morbidity and short-term outcomes. patient factors, treatment factors, and outcome measures were collected in an attempt to gain insight and to generate ideas to potentially improve outcomes. results: there were no operative complications. six patients (40%) had failed gastric pacemaker placement prior to intervention. nine patients (60%) reported improvement in their symptoms and overall quality of life. four patients (26%) reported no improvement in symptoms and required additional intervention for symptom control and supportive care (one underwent roux-en-y gastric bypass, three underwent laparoscopic jejunostomy feeding tube placement to maintain nutrition). conclusion: robotic-assisted pyloroplasty is a safe option that improves symptoms and quality of life in 60% of our patients. patients were matched into cohorts by procedure type. outcomes were analyzed using unpaired t-test and fisher's exact test. results: cost data were available for 447 patients undergoing ras or la procedures. significant increases in equipment, labor, and overhead costs resulted with ras vs. la. variable-labor and variable-overhead costs were significantly higher in la procedures. higher supply costs and longer procedure times were seen with ras in all cohorts; however, total 30-day costs were not significantly different in any group. conclusion: ras led to significant increases in fixed costs. clinical, operative and pathologic factors were reviewed and analyzed. results: seventy patients underwent robotic surgery for rectal cancer during the study period. the tumor locations were 26 upper rectum and 44 lower rectum. the procedures were as follows: high anterior resection in 6, low anterior resection in 51, isr in 6, and apr in 7 patients.
eight patients underwent lateral lymph node dissection (llnd). the procedures were performed successfully in all cases. mean age was 66.5 years, 70% of the patients were men, and the mean body mass index was 22.5 (range, 18.5-29.4) kg/m2. median operative duration was 321 (190-666) minutes. median blood loss was 15 (0-270) ml. median postoperative stay was 13 (6-16) days. mean harvested lymph node number was 17.0 (5-37). surgical margins were negative in all cases. there was one conversion due to bleeding during the llnd, and anastomotic leakage occurred in two patients. morbidity was 17%. there was no postoperative mortality in this series. conclusion: in this early series of selected patients, the technique appears to be feasible and safe when performed by surgeons skilled in laparoscopic colorectal surgery. the inactive electrode was placed touching small bowel to simulate accidental thermal injury. the bowel tissue at the site of temperature change was immediately resected and examined histologically for tissue injury. student t-tests were used for all comparisons, with a p-value less than 0.05 considered statistically significant. results: comparisons of the laparoscopic and robotic techniques are displayed in table 1. energy transfer was quantified using energy leak (per ma), which in these tests averaged a 1.18 degree celsius change (95% ci 1.05-1.31) at the inactive electrode. surface temperature rose by a maximum of 5.5 degrees celsius, more in the robotic system than in laparoscopy but still clinically negligible. pathology results from in vivo testing showed only thermal injury to the serosa without deeper mural injury. conclusions: stray energy transfer occurs in both laparoscopic and robotic surgery in amounts that are measurable but without clinical relevance. the average change in tissue temperature is less than 2 degrees celsius laparoscopically and less than 6 degrees robotically.
while robotic surgery appears to transfer more stray energy, no significant bowel injuries were caused in either group. p789 robot assistance can improve the performance of laparoscopic extensive concomitant adhesiolysis: results from a large observational study. federico gheza. outcomes compared were operative time, conversion rate, overall complications, gastrointestinal (gi) related complications (wound infection, abdominal abscess, anastomotic leak, ileus and small bowel obstruction), hospital length of stay, and 30-day re-admission rate. the two-sample t-test was used and p<0.05 was considered statistically significant. results: fifty-five robotic colectomies were matched with 55 laparoscopic counterparts based on type of operation: right colectomy (n=28), sigmoidectomy (n=46), low anterior resection (n=26), proctocolectomy (n=4), transverse colectomy (n=2), abdominoperineal resection (n=2), and total abdominal colectomy (n=2). we assessed whether technical obstacles of laparoscopic suturing were decreased and whether laparoscopic skills overall were improved. surgical outcomes were compared relative to our historic values; we assessed procedure time and operating room efficiency, including set-up and turnover times. results: overall, the 3d/flexdex system permitted a greater improvement in working speed, superior optical visualization, and better suture handling compared to standard laparoscopy. all surgeries were completed without any complications. historically, we considered laparoscopic suturing to be complicated and inefficient. we relied on tacking devices for mesh fixation, and suturing was previously completed with large, cumbersome, straight laparoscopic devices. however, with flexdex and endoeye flex 3d, tacking devices have been eliminated and suturing technique improved. the mean total procedure times remained comparable for inguinal and hiatal hernia surgeries, and slightly longer for ventral hernias.
operating room efficiency, including mean set-up and turnover times, also remained unchanged. the acquisition cost for both the olympus endoeye flex 3d laparoscopic imaging system. we performed a cost analysis which showed an average total cost of $7,024 for laparoscopic sleeve gastrectomy and an average of $11,680 for robotic-assisted. the total reimbursements were $21,587 for laparoscopic sleeve gastrectomy and $18,310 for robot-assisted. this translated to an average contribution margin of $14,564 for laparoscopic vs $6,630 for robot-assisted. we analyzed these differences for bypasses as well. bypasses averaged 193 minutes laparoscopically vs 330 robotically. we found an average cost of $11,366 laparoscopic vs $17,032 robot-assisted, with a contribution margin of $13,734 laparoscopic vs $5,701 robot-assisted. conclusions: in our study we noted increased operative times with robot-assisted operations, especially bypasses, which could be explained by increased use of the robotic system for difficult cases such as revisional bypasses. the impact of cost is especially important in this financial climate, and judicious use of resources becomes important when determining surgical approach. average or time for rih was 127 minutes compared to 85 minutes for lih. average intraoperative cost for rih was $1,110 compared to $890 for lih. of note, one lih was converted to open, whereas none of the rih required conversion. average los was 9.16 hours for rih compared to 11.6 hours for lih. postoperative pain at one-week follow-up was the same between both groups. two postoperative surgical site occurrences (sso) occurred in the lih group (2 groin seromas), whereas no ssos occurred in the rih group. eleven ventral hernia repairs were examined; 7 were robotic (rvh) and 4 were laparoscopic (lvh). average or time for rvh was 132 minutes compared to 65 minutes for lvh. average intraoperative cost for rvh was $1,492 compared to $1,264 for lvh.
no procedure in either group required conversion to open. average los was 9.86 hours for rvh and 13.5 hours for lvh. again, postoperative pain was the same at one-week follow-up for both groups. there were no postoperative complications noted in either cohort. conclusion: operative times and procedural costs for rvh and rih repairs were longer and higher when compared to their laparoscopic counterparts. however, with increased operative experience using the robotic platform, surgical time did show a decreasing trend. does the robotic system have advantages over the laparoscopic system for distal pancreatectomy? results: a total of 91 consecutive patients underwent minimally invasive distal pancreatectomy (ldp n=61; ra-ldp n=30). the most common pathologic finding was pancreatic ductal adenocarcinoma (36 cases). there was no in-hospital mortality or conversion to open surgery in this study. a spleen-preserving approach was performed more often in the ra-ldp (95%) than in the ldp (77.8%) group (p=0.132). both groups showed no significant differences in the total number of lymph nodes, number of positive lymph nodes, tumor differentiation, tumor stage, and resection margins. conclusions: ra-ldp is a safe and feasible approach that has the advantage of allowing spleen-preserving distal pancreatectomy, with perioperative and short-term oncologic outcomes comparable to those of ldp. p797 robot-assisted alpps technique. mike fruscione. right portal vein embolization was not feasible secondary to the proximity and size of the right hemi-liver tumor burden relative to the right portal vein. the pre-operative planned procedure was a right trisectionectomy and microwave ablation of the segment 2 lesion. results: using the da vinci xi surgical system (intuitive surgical, inc.), the right portal vein was dissected, doubly ligated, and divided.
the liver parenchyma was split from the inferior edge to the dome, 5 mm medial to the falciform ligament and down to the middle hepatic vein, which was preserved to maintain adequate venous outflow. the patient was discharged home on post-operative day two. on post-operative day six, ct volumetrics demonstrated a flr of 47%. on post-operative day seven, a second-stage alpps procedure was performed in which the right hepatic artery, middle and right hepatic veins and right hepatic duct were ligated and divided. segments 4a/b, 5, 6, 7 and 8 were removed. the patient was discharged home on post-operative day five. they were asked to answer demographic questions and rate their comfort level (0=not comfortable, 10=very comfortable) with aspects of robotic surgery. paired t-tests and wilcoxon tests were used to assess whether there were changes in comfort level before and after labs, and chi-square goodness-of-fit tests were used to assess whether dry lab (using inanimate objects), wet lab (using a porcine model), or simulator modules were thought to be most helpful in obtaining specific robotic skills. results: the survey response rate was 73% (n=32). ninety-one percent of residents felt that robotic surgery is not intuitive. prior to simulation, 94% of residents felt inadequately prepared to safely operate on the robotic console. following simulation, 100% felt better prepared and more confident to participate in robotic surgery. for the first 4 patients whom we treated (the first-stage group), we invited a visiting expert from a high-volume center to perform the procedure jointly with our hospital's surgeons by using a dual console. for the subsequent 6 patients (the second-stage group), the procedure was performed by our hospital staff alone. in this report, we describe our experience of the introduction of robot-assisted colectomy and discuss issues for the future.
patients and methods: the operative procedures were sigmoid colectomy, low anterior resection, and intersphincteric resection. the median number of lymph nodes dissected was 15.6. the mean operating time was 337 minutes for the first-stage group and 365 minutes for the second-stage group. the median console time was 206 minutes for the first-stage group and 193 minutes for the second-stage group, with no significant difference between the two groups. the mean operating time other than console time was 127 minutes for the first-stage group and 171 minutes for the second-stage group, significantly longer in the latter group. the mean amount of hemorrhage was 15.5 g in the first-stage group and 31 g in the second-stage group. no significant differences were found between the two groups in the mean length of postoperative hospital stay. none of the patients in either group developed a complication of clavien-dindo grade iii or higher. conclusions: the use of the dual console system was particularly useful for the introduction of robot-assisted surgery in our hospital. for the patients whom we treated, we found almost no difference in console time between the first- and second-stage groups. the high-quality instruction received via the dual console was considered to have had a beneficial effect on the operators' learning curve. however, the operations other than console time, such as roll-in and docking, took significantly longer in the second-stage group when the proctor was not present. select specimens from each trial were immediately resected and evaluated for histologic thermal injury. experiments were repeated 20 times to detect an expected difference of five degrees. student t-tests were used for all comparisons with significance set at 0.05. results: stray energy transfer was higher in the single-incision setup compared to the traditional setup (figure 1).
stray energy in the assistant grasper caused 8.4±1.6°c of temperature change in the standard configuration and 11.6±3.3°c in the single-incision configuration (p=0.015). doubling energy output to 60w amplified the same finding.

robotic single-site cholecystectomy of 520 cases: surgical outcomes and comparison with the laparoscopic single-site procedure. jae hoon lee. incisional hernia occurred in one case in each group. rssc is a safe and feasible procedure. with accumulating experience, rssc had a shorter operative time than sslc. compared to sslc, rssc is relatively suitable for acute gallbladder disease and high bmi, and requires a minimal learning curve to transition from traditional multiport to single-port robotic cholecystectomy.

p805 initial experience using the da vinci xi robot in colorectal surgery. anna r spivak, do, john marks, md; lankenau medical center. introduction: the xi robot has been developed to facilitate multiquadrant abdominal surgery. this report presents initial experience to evaluate the feasibility and safety of the xi robot in colorectal surgery. methods: all cases performed on the xi robot were prospectively entered into a robotic database that was queried for colorectal cases performed from. intraoperative complications were encountered in 2 cases (1.9%), requiring conversion to laparoscopy. none were converted to open. mean length of the largest incision was 4.7 cm. median ebl was 55 ml. there was no mortality. there were 10 (9.6%) immediate postoperative morbidities: postoperative abscess, bowel perforation, two postoperative bleeds, two hernias, two hematomas, smv thrombosis, and small bowel obstruction. perioperative blood transfusions were required in 2.8% of cases. there was one anastomotic leak. median time from surgery to low-residue diet and discharge was 3 days. 
conclusion: initial experience shows robotic colorectal resection with the da vinci xi.

learning curve for robotic sleeve gastrectomy and roux-en-y gastric bypass: achieving equivalence to laparoscopy. residents and fellows participated in an analogous fashion in both arms of the study, and patients undergoing re-operative bariatric surgery were excluded. results: a total of 109 patients undergoing rsg (n=84) or rrygb (n=25) were included. for the overall robotic cohort, median age was 38 (range 19-69), 36% were american society of anesthesiologists (asa) score 2, 60% were asa score 3, and mean body mass index (bmi) was 46±7, with no differences between procedures. there were no conversions to open. there was one patient with portal vein thrombosis after rsg, which occurred in the 84th rsg, and one patient who underwent re-operation in the immediate post-operative period for hemorrhage at the gastro-jejunal anastomosis in the rrygb group; this occurred in the 8th rrygb. there were no leaks, strictures, or mortalities in either group. mean length of stay was 2±1 days for rsg, with no difference based on number of procedures performed. in the rrygb group, los decreased after the first five procedures from 3 days±1 to 2 days (p=0.04). for both procedures, operative time decreased by number of procedures performed (figure). equivalence to lsg in operative time (118±40 minutes) was reached after eight robotic procedures were included.

the da-vinci xi® was used for the operations. age, gender, body mass index (bmi), asa score, indication for surgery, urgency of procedure, type of procedure, docking number, operation time, estimated blood loss, and short-term (≤30 days) and long-term (>30 days) complications were evaluated. results: 19 patients (7 females) were included. median age was 28. median bmi was 23, and median asa score was 2. total and completion rrp-ipaa were performed for 9 and 10 patients, respectively. 
the indications were as follows: medically refractory uc (n=12), cancer/dysplasia (n=2), fulminant colitis (n=2), toxic megacolon (n=1), medical treatment resulting in growth retardation (n=1), and medical-treatment-refractory bleeding (n=1). 1 patient with toxic megacolon had an emergent operation. the median docking number was 1 and 3 for completion and total rrp-ipaa, respectively. median operative time was 330 minutes. median blood loss was 100 ml. all patients had a stapled ileal j-pouch anal anastomosis. all patients had a diverting loop ileostomy at the time of ipaa creation. no intraoperative complications were observed. no conversion to open surgery was needed. the median time to flatus was 1 day. the median time to oral intake was 1 day. 1 patient had a laparotomy on postoperative day 12 due to intra-abdominal bleeding. 1 patient had bleeding from the ileostomy, which was treated endoscopically. superficial surgical site infection was observed in 3 patients. 1 patient had pouchitis managed with oral antibiotics. 1 patient had an ileus that responded to conservative treatment. 1 patient had per-anal bleeding that stopped spontaneously. 1 patient had a urinary tract infection that responded to antibiotics. in long-term follow-up, 2 patients had pouchitis, 1 patient had a perianal fistula requiring a loop ileostomy, and a parastomal hernia developed in another patient.

) were significantly different between the two groups. 2,858 pairs undergoing primary and 354 pairs undergoing revisional procedures were successfully matched. robotic gastric bypass was associated with a significantly longer operation length than laparoscopic gastric bypass for both primary (median difference 31 minutes, p<.0001) and revisional (median difference 47 minutes, p<.0001) procedures. overall, there were no significant differences in anastomotic/staple-line leak, 30-day readmission, reoperation, re-intervention, total event, and mortality rates between matched cohorts. 
conclusion: when controlling for patient characteristics, those undergoing primary and revisional lrygb and rrygb had no difference in early morbidity. despite the prolonged operative duration, the robotic approach was not associated with any clinical benefit or increased complications for primary or revisional gastric bypass surgery.

preoperative risk factors were collected. we focused on perioperative outcomes and the in-hospital complication rate. results: thirty-three patients underwent robot-assisted giant hiatal hernia repair at our institution. 13 patients (40%) were 70 years and older, and 15 patients (46%) had a bmi higher than. there were no significant differences in patient characteristics between the groups. no patient underwent conversion to open or standard laparoscopy. no mortality was observed and no transfusions were needed. four patients (12%) had a complication; two of them were older than 70 years. three of the four patients (75%) who had a complication were obese. there were no statistical differences in mortality.

5% and 43.5% of them were with the s-si and xi platforms, respectively. the median numbers of procedures were 33 (range 3-290) and 7 (range 1-276) cases per hospital and per general surgeon, respectively. the high-volume surgeons (higher than the 75th percentile) performed 1462 (77%) of the cases. the xi platform has been the main tool for colorectal surgery only (figure 1). conclusions: while the xi platform significantly increased caseload in general surgery by facilitating performance of colorectal surgery, its preference in other general surgical fields is not superior to the si.

laparoscopic inguinal hernia repair (tapp) - first experience with the new senhance robotic system. robin schmitz 2; 1 intuitive surgical inc, 2 loma linda university medical center. introduction: crohn's disease is an incurable inflammatory disorder that can affect the entire gastrointestinal tract. 
while medical management is considered first-line treatment, approximately 70% of patients with crohn's disease require surgery within 10 years of their initial diagnosis. traditionally, surgery has been performed via an open approach, with poor adoption of minimally invasive techniques. the aim of this study is to demonstrate the feasibility of a robotic-assisted approach as a minimally invasive option for the surgical management of crohn's disease and to compare perioperative outcomes with traditional laparotomy. methods: patients who underwent elective resection of the intestine for crohn's disease by a robotic-assisted or laparotomy approach from 2011 to q3 2015 were identified using icd-9 codes from the premier healthcare database. all procedures were performed by either general surgeons or colorectal surgeons. since hospital characteristics were comparable between the two cohorts before propensity-score matching, 1:1 matching was performed using patient characteristics such as age, gender, race, charlson index score and year of the surgery to create comparable cohorts. sample selection and creation of analytic variables were performed using the instant health data (ihd) platform (bhe).

methods: we conducted a retrospective analysis of 102,241 mis inguinal hernia repairs (1,096 robotic, 101,145 laparoscopic) from 2010 through 2015 with data collected in the premier hospital database. patient, surgeon, and hospital demographics of robotic and laparoscopic inguinal hernia repairs were compared. the adjusted odds ratio of receiving a robotic procedure was calculated for each of the demographic factors using a multivariable logistic regression model. statistical significance was defined as p<.05. sas software version 9.4 was used for statistical analysis. results: the odds of a procedure being robotic increased from

inguinal hernia repair is one of the most common general surgery procedures, with over 600,000 performed annually in the united states. 
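the 1:1 propensity-score matching step described above can be sketched as greedy nearest-neighbour matching on a precomputed score. the ids, scores, and caliper below are illustrative assumptions; the abstract does not state which matching algorithm or caliper was used:

```python
def greedy_match(treated, control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.
    treated/control map subject id -> propensity score; each control
    is used at most once, and pairs outside the caliper are dropped."""
    available = dict(control)
    pairs = []
    for tid, score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # nearest remaining control by score distance
        cid = min(available, key=lambda c: abs(available[c] - score))
        if abs(available[cid] - score) <= caliper:
            pairs.append((tid, cid))
            del available[cid]
    return pairs

# toy scores, as if fitted on age, gender, race, charlson index and year
treated = {"r1": 0.31, "r2": 0.62}
control = {"l1": 0.30, "l2": 0.60, "l3": 0.90}
print(greedy_match(treated, control))  # [('r1', 'l1'), ('r2', 'l2')]
```

in practice the scores would come from a logistic regression of treatment assignment on the listed covariates; greedy matching is one common choice, with optimal matching as an alternative.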
when compared to traditional open inguinal hernia repair (oihr), laparoscopic inguinal hernia repair (lihr) has been associated with faster postoperative recovery and lower postoperative pain. with advances in the robotic platform, robotic inguinal hernia repair (rihr) is an available technique that is currently being explored. this study examines lihr and rihr as described in the literature to see if one is superior to the other. study design: search terms: ''inguinal hernia repair''.

surgical complications including hematomas (3.9%), seromas (2.6%), and trocar site infection (1.3%, resolved with antibiotics) gave a 2.6% postoperative complication rate. conclusion: rihr is a safe alternative to lihr, with fewer postoperative complications and a faster recovery time. however, operative time as well as operating room time is significantly longer, which may increase overall cost.

laparoscopic or robotic approach was chosen on a schedule-availability basis. data was collected prospectively and included anthropometric data, presence of type 2 diabetes mellitus (t2dm), % of preoperative total weight loss (%ptwl), surgical time, postoperative length of stay, 30-day complications, and need for readmission or reoperation. comparison between groups was carried out with the t-test for continuous data and with the chi-square test for dichotomous variables. a p lower than 0.05 was considered significant. results: overall, 131 sagb were performed, 111 laparoscopic and 20 robotic. 
a long and thin gastric pouch was created, calibrated by a 27 fr bougie, and a 2.5 cm antecolic antegastric gastrojejunal (gj) anastomosis was performed. groups (laparoscopic vs robotic) were comparable regarding age (46 vs 45.3 years, p=0.77), bmi (48.1 vs 47 kg/m2, p=0.53), %ptwl (13.6 vs 16.9%, p=0.29) and % with t2dm (51 vs ). there were fewer men in the laparoscopic group (20.2 vs 45%). there were 6 (5.4%) major complications in the laparoscopic group: 3 bleedings from the gj anastomosis, one of which required reoperation, 1 severe dumping syndrome, 1 gerd requiring revision, and 1 gj stricture that underwent relaparoscopy. the only complication (5%) in the robotic group was an acute pancreatitis. readmission rate was 5% in both groups, and reoperation rate was 3% for laparoscopic and 0% for robotic surgeries. conclusions: totally robotic sagb with manual gastrojejunal anastomosis was safe and feasible in this early experience compared to the laparoscopic approach.

multi-degrees-of-freedom manipulator with mantle tube for assisting endoscopic and laparoscopic surgical operations. masataka nakabayashi, phd 1, yuta hoshito, masters student 1.

p823 step by step anatomic mapping during laparoscopic transabdominal adrenalectomy, lateral flank approach. ranbir singh. steps analyzed were: right adrenalectomy: step 1) mobilize liver; 2) medial dissection; 3) adrenal vein isolation; 4) inferior dissection; 5) adrenal off kidney; 6) detachment. left adrenalectomy: step 1) division of splenorenal ligament; 2) develop plane pancreas/kidney; 3) mobilization of medial/lateral borders of adrenal; 4) adrenal vein isolation; 5) dissection of adrenal off kidney; 6) detachment. structures were identified as yes/no and results expressed as percentage of total n of cases seen at each step. results: structures identified at each step are shown (table).

incisions were made at the oral vestibule under the inferior lip. a 10-mm trocar was inserted through the center of the oral vestibule with two 5-mm trocars above the incisors. 
the subplatysmal space was created down to the sternal notch, and carbon dioxide was insufflated at a pressure of 6 mmhg to maintain the working space. parathyroidectomy was performed using laparoscopic instruments. intraoperative parathormone levels were measured 10 minutes after excision of the gland. primary end-points were the success rate in achieving cure of the hyperparathyroid state and the hypocalcemia rate. secondary end-points were operating time, scar length, pain intensity assessed by the visual-analogue scale, analgesia request rate, analgesic consumption, quality of life within 7 postoperative days (sf-36), cosmetic satisfaction, duration of postoperative hospitalization, and cost-effectiveness analysis. result: one patient experienced a transient recurrent laryngeal nerve palsy which spontaneously resolved within 1 month. no permanent recurrent laryngeal nerve injury was found. no mental nerve injury or infection was found. conclusion: with highly sensitive localising sestamibi and ct scans, focussed exploration is the current standard of treatment. among all minimally invasive surgeries, toepva is a feasible, safe, and almost pain-free surgical option when combined with intraoperative parathormone monitoring for patients with hyperparathyroidism.

indocyanine green is a water-soluble, nontoxic compound exhibiting near-infrared fluorescence. renal function and long-term survival. indocyanine fluorescence helps in assessing vascular flow, tissue perfusion and aberrant anatomy, and thereby leads to lower conversion rates in partial nephrectomy. we aim to present our experience in 44 patients who underwent partial nephrectomy over 7 years. 
materials and methods: of the 44 partial nephrectomies performed at our institution, 24 were done by a laparoscopic approach alone and the remaining 20 by

of 260 patients who underwent llr for hepatoma in our facility, 176 underwent llr for a solitary hepatoma and were divided into "before standardization" (bs; n=147) and "after standardization" (as; n=29) groups. patient background, characteristics, and perioperative outcomes were compared between these groups. procedure: we chose the devices according to the phase of liver transection. a soft-coagulation monopolar device was used for marking the surface. an ultrasonically activated device was used for transection of the liver surface within a 2-cm depth. crush and sealing with biclamp were indicated for deep-phase transection. the cavitron ultrasonic surgical aspirator was used if the lesion was close to the major glisson's sheath or the major hepatic vein. results: no significant differences in the patients' background were found between the two groups. the operative durations were 128 min (60-312 min) and 203 min (50-470 min) in the as and bs groups, respectively, with a significant difference (p<.001). the blood loss volumes were 5 cc (0-150 cc) and 30 cc (0-850 cc), respectively (p=0.0548). the lengths of hospital stay after llr were 5 days (range, 3-7 days) and 6 days (2-21 days), respectively, with a significant difference.

iwao kitazono, phd 1, kentaro gejima 1, hizuru kumemura 1, akira hiwatashi 1, yuichiro nasu 2, fumisato sasaki 2, akio ido 2, yutaka imoto 1; 1 cardiovascular and gastroenterological surgery, kagoshima university graduate school of medical and dental science, 2 digestive and lifestyle disease, kagoshima university graduate school of medical and dental science. introduction: in locally-treatable gastrointestinal tumors, laparoscopic endoscopic cooperative surgery (lecs) is a minimally-invasive technique that can avoid excessive resection of the gastrointestinal tract. 
objective: to share our therapeutic guidelines and surgical technique of lecs for gastroduodenal tumors. subjects: nineteen patients who underwent lecs for gastroduodenal tumors (10 patients with gastric tumor and 9 patients with duodenal tumor). results: 1) gastric tumors (9 gist, 1 glomus): 1. site of lesion was u (4 patients), m (3), or l (2); 2. operative procedure was acquired in a stepwise manner from classical lecs (4 patients) to inverted lecs (2) to non-exposed endoscopic wall-inversion surgery, news (4); 3. operative outcome revealed no postoperative complications. 2) duodenal tumors (6 adenoma, 2 m cancer, 1 ectopic pancreas): 1. site of lesion was bulbus duodeni (1 patient), superior part (2), or descending part (6); 2. operative procedure was esd followed by laparoscopic continuous suture in a single seromuscular layer for patients with preoperatively confirmed or suspected cancer, or full-thickness resection followed by albert-lembert suture along the short axis for patients unable to undergo esd. in all cases, a c-tube was placed to prevent bleeding and perforation at the site of resection due to exposure to bile; 3. operative outcome included successful endoscopic hemostasis upon bleeding from an exposed vessel on postoperative day 4 in 1 patient, and anastomotic leak in 1 patient. the anastomotic leak resolved after 14 days of bile drainage through the c-tube and conservative therapy. compared with 26 patients who underwent esd alone, those who underwent lecs had significantly larger diameters of resected specimens and tumors (p<.05) but no significant difference in the incidence of postoperative bleeding and delayed perforation. conclusion: for gastroduodenal tumors, lecs is a minimally-invasive and safe therapeutic option as it combines advantages of both laparoscopy and endoscopy. in particular, c-tube placement for bile drainage was effective in reducing exposure of the suture site to bile as well as supporting drainage after anastomotic leak. 
introduction: in japan, transurethral balloon catheters (tuc) are currently inserted in most surgical patients to maintain a urine outflow route and to measure urine output both intraoperatively and postoperatively. however, tuc insertion not only causes postoperative pain but can also lead to urinary tract infections. temporary suprapubic catheters (spc) are used in the field of obstetrics and gynecology as a method of postoperative management that avoids transurethral procedures. in the field of surgery, especially laparoscopic surgery, an spc has also been considered a useful way to reduce patient suffering. here we report our prospective study on whether an spc can be safely inserted as a substitute for a tuc during laparoscopic-assisted colectomy. subjects and methods: the subjects in this study were patients who underwent laparoscopic surgery for primary colorectal cancer from 2014 to 2015, and who would normally have had their urinary balloon catheter removed early after surgery. during surgery, an angiomed cystostomy set was installed, as an alternative to a urinary balloon catheter, for patients who gave their consent to participate in this study. we prospectively collected patient information including sex and age, in addition to other perioperative data such as time required for cystostomy, complications accompanying cystostomy, sense of discomfort or pain associated with the vesical fistula after surgery, time of removal of the vesical fistula, frequency of releasing the vesical fistula, and postoperative complications. results: our subjects included 52 cases who gave their informed consent to have an spc inserted. an spc was inserted in the remaining 45 cases. the mean surgical duration was 229 min, and spc insertion was performed at a mean of 137 min after the start of surgery. insertion required a mean duration of 158.2 s. 
the bladder of one case (2.2%) was perforated, and hematuria was observed at the time of insertion in two cases (4.4%), but surgery was completed without incident. six out of 42 cases (13.3%) demonstrated neither urinary urgency nor independent urination on the day the catheter was clamped. however, the clamp was released two to four times, and drainage of an average of 586 ml of urine, urinary urgency, and independent urination were confirmed 2-4 days later. conclusion: spc is a procedure that avoids crossing the urethra and its associated disadvantages. here we were able to demonstrate that the procedure can be safely used in laparoscopic surgery patients.

our objective is to devise methods for proper port placement to overcome the ergonomic challenges. procedure: 3 patients with sit were operated on laparoscopically in our hospital in the period of may 2016 to november 2017: 2 males suffering from cholelithiasis without cholecystitis and 1 female with acute appendicitis. after a thorough review of the literature and proper planning, the patients were posted for surgery. for laparoscopic appendectomy, a thorough initial diagnostic survey is performed on introducing a scope through the umbilical port and confirming the exact location of the appendix. the two working ports are introduced accordingly, usually as a mirror image of the standard port sites. the appendix was visualised in the left iliac fossa and, after meticulous dissection, the appendix and mesoappendix were divided using an endostapler. the operative time was 43 minutes and there were no intraoperative or postoperative complications. the port placement for laparoscopic cholecystectomy in such a case is trickier, as the anatomical variation and the contralateral disposition of the biliary tree demand accurate dissection and exposure of the biliary structures to avoid iatrogenic injuries. it is important to conform to the principles of triangulation during port placement. 
the mirror image of 4-port placement is convenient for left-handed surgeons, whereas, to make the procedure comfortable for right-handed surgeons, the working ports need to be shifted caudally with the surgeon standing between the patient's legs. the mean operative time was 54 minutes and there were no minor or major intraoperative or postoperative complications. conclusion: ergonomic comfort is vital to a smooth procedure. while mirroring ports suffices for appendectomy, all other procedures require forethought for port placement. it should be noted that ambidexterity is a desirable skill in the operating room for a laparoscopic surgeon.

priscila r armijo, md, chun-kai huang, phd, gurteshwar rana, md, dmitry oleynikov, md, ka-chun siu, phd; university of nebraska medical center. introduction: the aim of this study was to determine how objectively-measured and self-reported fatigue of the upper limb differ between laparoscopic and robotic surgical training environments. methods: surgeons at the 2016 sages conference learning center, and at our institution, were enrolled. two surgical skills practice environments were utilized: 1) a laparoscopic training-box environment (fls) and 2) the mimic® dv-trainer (mimic). two standardized surgical tasks were chosen for both environments: peg transfer and needle passing. each task was performed twice. objective fatigue was evaluated by muscle activation and fatigue, and comparisons were made between fls and mimic for each surgical task. muscle activation of the upper trapezius, anterior deltoid, flexor carpi radialis, and extensor digitorum was recorded during practice using surface electromyography (emg; trignotm, delsys, inc., boston, ma). the maximal voluntary contraction (mvc) was obtained to normalize muscle effort as %mvc. the median frequency (mdf) was calculated to assess muscle fatigue. 
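the mdf used here is the frequency that splits the emg power spectrum into two halves of equal power; a downward drift of mdf during a task is the usual spectral marker of fatigue. a minimal sketch on a precomputed spectrum (the frequencies and powers below are illustrative toy values, not study data; in the study the spectrum would be estimated from the surface-emg recordings):

```python
def median_frequency(freqs, power):
    """Median frequency (MDF): the frequency at which cumulative
    spectral power first reaches half of the total power."""
    total = sum(power)
    cumulative = 0.0
    for f, p in zip(freqs, power):
        cumulative += p
        if cumulative >= total / 2:
            return f
    return freqs[-1]

# toy spectrum with power shifted toward low frequencies, as in fatigue
freqs = [20, 40, 60, 80, 100]          # Hz
power = [4.0, 3.0, 1.0, 1.0, 1.0]      # arbitrary units
print(median_frequency(freqs, power))  # 40
```

comparing the mdf of early versus late task segments then gives the fatigue measure: a lower late-segment mdf indicates more fatigue, which is how the fls-versus-mimic comparisons below are read.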
subjective fatigue was self-reported by completing the validated 10-point piper fatigue scale-12 (pfh-12) before and after practice. statistical analysis was done using spss v23.0, with α=0.05. results: this abstract represents the performance of 15 trainees (fls: n=8, mimic: n=7) as part of the larger cohort of the study. for peg transfer, emg analysis revealed that mimic had a significant increase in mean muscle activation for the upper trapezius and anterior deltoid, both p<0.001. conversely, practice with fls led to significantly more muscle fatigue than mimic for the same muscle groups (upper trapezius: p=0.028, anterior deltoid: p=0.015), represented by a significantly lower mdf. similarly, for needle passing, mimic had a significant increase in mean muscle activation for the upper trapezius (p=0.034) and anterior deltoid (p=0.031), but practice with fls induced significantly more muscle fatigue for the anterior deltoid (p=0.004). survey analysis revealed a significant decrease in self-reported fatigue after performing fls tasks (before: 3.85±1.66, after: 3.05±1.54, p=0.044), but no difference after mimic tasks (before: 4.00±2.27, after: 4.22±2.56, p=0.417). conclusions: although different muscle groups are preferentially required in the performance of fls and mimic, our analysis for both surgical tasks showed that practice with mimic required more activation of the shoulder muscles, whereas practice with fls could lead to more muscle fatigue for the same muscle groups. interestingly, surgeons reported improved or unchanged perceived fatigue after the tasks, despite having an increase in muscular activation and effort. subjective self-reported fatigue might not truly reflect the level of fatigue when trainees practice surgical tasks using fls or mimic.

objective: to investigate the prevalence of musculoskeletal (msk) injuries in bariatric surgeons around the world. 
background: as the popularity of bariatric surgery increases, efforts to improve its patient safety and decrease its invasiveness have also been on the rise. however, this shift towards minimal invasiveness has imposed ergonomic constraints on surgeons, with a recent report showing a 73-88% prevalence of physical complaints in surgeons performing laparoscopic surgeries. methods: a web-based survey was designed and sent out to bariatric surgeons around the world. participants were queried about professional background, primary practice setting, and various issues related to bariatric surgeries and msk injuries. results: there were 113 responses returned from surgeons from 34 countries around the world. 68.5% of the surgeons had more than 10 years of experience in laparoscopic surgery, 65.8% in open and 0.9% in robotic surgery. 66% of participants reported that they have experienced some level of discomfort/pain attributed to surgical reasons, causing the case load to decrease in 27.2% of the surgeons. the back was the most affected area in those performing open surgery, while the shoulders and back were equally affected in those performing laparoscopic surgery, and the neck in those performing robotic surgery, with 29.4% of the surgeons reporting that this pain has affected their task accuracy/surgical performance. a higher percentage of females than males reported pain in the neck, back and shoulder area when performing laparoscopic procedures. supine positioning of patients evoked more discomfort in the wrists, while the french position caused more discomfort in the back region. only 57.7% sought medical treatment for their msk problem, of which 6.35% had to undergo surgery for their issue, and 55.6% of those felt that the treatment resolved their problem. conclusion: msk injuries and pain are a common occurrence among bariatric surgeons and can hinder performance at work. 
therefore, it is important to investigate ways to improve ergonomics for these surgeons so as to improve quality of life.

introduction: the use of robotic technology is rapidly increasing among general surgeons but is not being routinely taught in general surgery residency. we aimed to evaluate our first 100 robotic cases, during which time we developed a robotic surgery curriculum incorporating residents. methods: the first 100 robotic cases performed at our institution from 2016-2017 by two surgeons were analyzed. a residency curriculum was developed and instituted after the first 6 months. it consisted of online modules offered by intuitive surgical resulting in certification, simulator training, and hands-on workshops for cannula placement, docking, instrument exchange, camera clutching and other introductory tasks. patient demographics, type of procedure, resident involvement, total operative and console times, comorbid conditions and complications were evaluated. unpaired t-tests were performed for statistical analysis. results: 66 females and 34 males comprised this series, with an average age of 44±12 years. the majority of patients, 71%, had comorbidities, with a predominance of hypertension (59%) and diabetes (37%). the bariatric patients had an average bmi of 48±10. a variety of procedures were performed, including hernias, foregut and bariatric. residents participated in 40% of cases. there were no differences in total operative and console times in cases with residents, except for bariatric procedures. there were 3 complications in this series: postoperative ileus, gallbladder fossa hematoma and an enterotomy. there was one early conversion to open in a complex foregut case and no deaths in this series. conclusions: we report our initial experience of robotics in a variety of general surgery and complex foregut cases. 
the implementation of a robotic surgery program and residency curriculum was safe, with similar outcomes related to operative times and complications. as mis expands with the application of robotics in general surgery, residency curriculums will need to be revised. further data are needed to determine residency learning curves between robotics and laparoscopy.

background: robotic surgery has made a large impact in the fields of urology and gynecology. its use is significantly increasing in the fields of general and bariatric surgery. evidence remains unclear as to the clinical impact on outcomes, and significant questions remain as to the impact on cost. our goal was to evaluate the economic impact of robotic surgeries in general and bariatric surgery at our institution. methods: this study is a retrospective analysis of minimally invasive general and bariatric procedures done at a single institution from january 2016 through june 2017. we performed a cost and reimbursement analysis of robotic versus conventional laparoscopic surgery. the cost evaluation included operative time, operating room costs, length of stay and overall hospital expenses. in addition, we looked at reimbursement and the contribution margin per cpt code. results: our study included a total of 1927 patients who underwent 1716 laparoscopic and 211 robot-assisted general and bariatric surgeries. the average operative duration was 138 minutes for laparoscopic surgeries vs 248 minutes for robot-assisted. our cost analysis showed an average total cost of $8,955 for laparoscopic and $15,319 for robot-assisted. total reimbursements were $19,631 for laparoscopic and $21,949 for robot-assisted. this translated to an average contribution margin of $10,676 for laparoscopic vs $6,630 for robot-assisted. for general surgery we found an average cost of $7,675 laparoscopic vs $9,436 robot-assisted, with a contribution margin of $7,761 laparoscopic vs $3,473 robot-assisted. 
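the reported margins follow directly as reimbursement minus total cost; a quick arithmetic check on the overall averages quoted above:

```python
def contribution_margin(reimbursement, cost):
    """Contribution margin as used in the abstract: average
    reimbursement minus average total cost, per approach (USD)."""
    return reimbursement - cost

# overall averages reported above
lap = contribution_margin(19631, 8955)
robot = contribution_margin(21949, 15319)
print(lap, robot)  # 10676 6630, matching the reported margins
```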
for bariatric surgeries we found an average contribution margin of $14,149 for laparoscopic vs $6,165 for robot-assisted. conclusions: robotic surgery has been associated with higher costs and longer operative times. in this economic climate of increased cost awareness, with institutions under increasing financial pressures, judicious use of resources becomes important when determining the surgical approach. although the cost of robot-assisted surgery may decrease with time, other quality factors may be important in patient selection. although there is no clear evidence that institutions lose money with robot-assisted surgery, in our experience the contribution margin is lower with robot-assisted surgery than with conventional laparoscopy.

introduction: this retrospective study was performed to evaluate the safety and feasibility of the new senhance robotic system (transenterix) for inguinal hernia repairs using the transabdominal preperitoneal approach. our series is the first experience in the field of general surgery utilizing this new robotic platform. methods: from march to september 2017, 76 inguinal hernia repairs in 64 patients were performed using the senhance robotic system. the senhance surgical system is a new robotic platform that consists of a cockpit, manipulator arm and a connection node (figure 1). this new system provides robotic surgery with numerous advantages, including an eye-tracking camera control system, haptic feedback, reusable endoscopic instruments, and high configuration versatility due to the total independence of the manipulator arms. patients were between 18 and 90 years of age, eligible for a laparoscopic procedure with general anesthesia, had no life-threatening disease with a life expectancy of less than 12 months, and had a bmi < 40.
a retrospective chart review was performed for a variety of pre-, peri- and postoperative data, including but not limited to patient demographics, hernia characteristics, and intraoperative and postoperative complications. results: 54 male and 10 female patients were included in the study. median age was 56.5 years (range 22-86 years), and median bmi was 25.9 (range 19.5-31.8 kg/m2). median docking time was 7 minutes (range 2-21 minutes), and median operative time was 48 minutes (range 18-142 minutes). two cases were converted to standard laparoscopic surgery, due to robot malfunction and intraoperative bleeding respectively. one patient developed a postoperative seroma that did not require any further intervention. conclusion: we report the first series of laparoscopic inguinal hernia repairs using the new senhance robotic system. compared to previously published conventional laparoscopic or robotic tapp hernia repairs, these data suggest similar outcomes in operative time and perioperative complications. additionally, no significant learning curve was detected, owing to the system's intuitive applicability. therefore, the senhance robotic system can be safely and easily used for tapp hernia repairs by experienced laparoscopic surgeons.

this is a video presentation of a 51-year-old female who presented to the gynecology office with suprapubic pain and a mass. she has a history of robotic hysterectomy and bladder sling operation 4 years ago. this was complicated by peritonitis and a long icu stay attributed to what she was told was a ''bowel injury'', treated only conservatively with antibiotics and subsequent abscess drainages at that time. she has an occasionally appearing nodule and pain at the left suprapubic region. a ct ordered by gynecology was read as an abdominal wall hernia with a long sigmoid diverticulum in the hernia. there was also a small amount of subcutaneous air at the tip of the herniated diverticulum.
after antibiotic treatment and improvement, colonoscopy showed that the ''diverticulum'' was actually the limb of the sling, passing through the sigmoid and anchored in the subcutaneous fat of the abdominal wall, representing a colocutaneous fistula when infected. a clip was placed on the sling, and repeat imaging confirmed that the location of the sling matched the location of the so-called ''hernia''. the sling limb was resected robotically, and the colon was repaired with side stapling of the colonic wall. the abdominal wall defect was repaired with long-term absorbable suture. as far as we have found, the presentation and treatment of this complication are unique; we could not find a similar case to guide our plan.

background: robot-assisted surgery using the da vinci surgical system (dvss) is thought to have many advantages over conventional laparoscopic surgery. as it was reported that the use of the surgical robot might reduce surgery-related complications, a multi-institutional historically controlled prospective cohort study on the feasibility, safety, effectiveness and economic efficiency of robotic gastrectomy (rg) for resectable gastric cancer was conducted in japan. this study evaluated the safety of rg using dvss xi. methods: this single-center, prospective phase ii study included patients with resectable gastric cancer (umin000019366). the primary endpoint was the incidence of post-operative complications greater than grade iii according to the clavien-dindo classification during one month after surgery. the secondary endpoints included all adverse events and the completion rate of robotic surgery. results: from oct 2015 to jan 2017, 22 patients were enrolled in this study. the incidence of post-operative complications greater than grade iii was 0%. the overall incidence of adverse events was 18.1% (grade i, 13.6%; grade ii, 4.5%). no patient required conversion to laparoscopic or open surgery; thus, the rg completion rate was 100%.
conclusion: this study suggested that the introduction of rg using dvss xi for gastric cancer is safe and feasible.

priscila r armijo, md 1, dmitry oleynikov, md 1, sages robotic task force* 2; 1 university of nebraska medical center, 2 sages robotic task force. introduction: while robotic companies continue to aggressively market and promote the use of robots in general surgery, little is known about how this technology is employed by general surgeons, and what is expected of this technology by both novices and experts in the field. the aim of this study is to evaluate the needs of general surgeons who are new to robotic surgery and the needs of established robotic surgeons. methods: the sages robotic task force survey, a one-page survey, was designed and sent electronically to all sages members. questions regarding fellowship training, area of expertise, robotic simulation and clinical case use, services offered in the current hospital, mentorship, likelihood of switching to a different approach, and expectations for the robot were included in the survey. two groups were created based on whether members had previously used the davinci® system in a clinical scenario. statistical analysis was conducted using ibm spss v.23.0.0, using fisher's exact and pearson's chi-squared tests where appropriate. results: 201 sages members answered the survey. surprisingly, 157 respondents (78%) had used the davinci® in a clinical setting. among these, 122 (78%) had additional fellowship training, compared to 27 (63%) in the non-clinical-use group, p=0.048. of all surgeons with additional fellowship training, the largest group (26%) had specialized in advanced gi, mis and bariatric surgery, followed by colorectal (10%). most surgeons are performing fewer than 10 cases per month using the robotic system, with the majority of cases performed on the platform being hernia repairs (24%), followed by foregut-related procedures (20%).
interestingly, of all the surgeons who replied to the survey, only 11.3% are planning to switch from open procedures to their robotic counterparts, whereas 38.1% are planning to adopt robot-assisted procedures rather than laparoscopy. conclusions: the majority of sages members who responded to the survey have used the davinci® in a clinical setting. surgeons who stated they perform mainly laparoscopic procedures were likely to continue to adopt robotic techniques, whereas those who perform open hernia repair, for example, were not very likely to switch to a robotic approach. while the robot may be enabling surgeons who used to perform mostly open procedures in the urology or gynecology fields, laparoscopic skills predict robotic utilization in general surgery. hernia and foregut appear to be the most common procedures being performed with the robot.

aim: while conventional multiport laparoscopic splenectomy has become the gold standard for some hematological or splenic diseases, reduced-port laparoscopic splenectomy (rpls), including single-incision laparoscopic splenectomy (sils), is regarded as highly challenging. herein, we describe technical refinements for safe rpls, especially for patients with splenomegaly. methods: in all cases, access was achieved via a 2.5-cm mini-laparotomy at the umbilicus, into which a sils tm port or e-z access® with three 5-mm trocars was placed. a 5-mm flexible scope, an articulating grasper, and straight instruments were used. our rpls is characterized by the following: a) early ligation of the splenic artery to shrink the spleen, b) application of our original ''tug exposure technique,'' which provides good exposure of the splenic hilum by retracting (tugging) the spleen with a cloth tape, and c) safe introduction of the stapler into the splenic hilum under the guidance of a flat drain.
results: 27 rpls patients (12 men and 15 women, 43±19 years old) comprised hematological disorder (n=12), splenic disease (n=12), and liver cirrhosis (n=3). in 24 patients (89%), rpls was successfully completed: sils in 22 and sils plus one additional port in 2 patients. conversion to open surgery was necessary in 3 patients, including 1 with liver cirrhosis and remarkable collateral varicose veins around the spleen. operation time and blood loss were 214±78 min and 166±312 g, respectively. the weight of the extracted spleen was heavier than normal, at 341±286 g (maximum 960 g). no intra- or postoperative complication occurred. the postoperative scar was nearly invisible. conclusions: rpls might be performed safely even for splenomegaly (up to 1,000 g). however, care should be taken in cirrhotic patients with collateral veins. rpls can be the procedure of choice even in patients with splenomegaly and those concerned about postoperative cosmesis.

the aim of this feasibility study was to evaluate laparoscopic sn biopsy for laparoscopic sentinel node navigation surgery (snns) in early gastric cancer patients. subjects and methods: this study included 13 patients with ct1n0m0 (primary tumor < 4 cm) gastric cancer who underwent laparoscopic sn biopsy in conjunction with radioisotope and dye methods between jan. 2010 and jul. 2011. first, we looked for green-dyed sns after injection of indocyanine green (icg) without a near-infrared light system, and then tried to detect the radioactivity of sns using a hand-held gamma probe inserted through a small incision at the umbilical port. after the areas where sns were distributed were resected, a gastrectomy with prophylactic lymphadenectomy was performed according to the gastric cancer treatment guidelines of the japanese gastric cancer association. we looked for undetected sns in the resected specimen at the back table. results: among the 13 cases, there were 11 (85%) in which sns were not detected in the resected specimen.
there were 2 cases in which sns were detected in the resected specimen. in both cases, the primary tumors were located in the middle portion and greater curvature of the stomach. in case 1, laparoscopic sn biopsy identified the left (4sb) and right (4d) greater-curvature lymph nodes (lns) as sns; however, lesser-curvature (3) and infrapyloric (6) lns remained as sns in the resected specimen. in case 2, the left (4sb) and right (4d) greater-curvature lns were identified as sns intraoperatively, while the lesser-curvature (3) ln remained as an sn in the resected specimen. the sns overlooked by the laparoscopic sn biopsy method were detected by radioisotope only. no cases had ln metastasis, and the 5-year relapse-free survival rate of these 13 patients was 100%. conclusions: our feasibility study of laparoscopic sentinel node biopsy for early gastric cancer showed that we should search carefully for sns of the lesser curvature even if the primary lesion is located at the greater curvature.

key: cord-006849-vgjz74ts title: 27th international congress of the european association for endoscopic surgery (eaes), sevilla, spain, 12–15 june 2019 date: 2019-09-13 journal: surg endosc doi: 10.1007/s00464-019-07109-x doc_id: 6849 cord_uid: vgjz74ts

laparoscopic cholecystectomy is one of the most commonly performed operations worldwide. bile duct injury (bdi) is a rare but very serious complication of the procedure, with a significant impact on quality of life and overall survival. the high frequency of bdi with laparoscopic cholecystectomy was first considered to be a consequence of the initial learning curve of the surgeon, but it later became clear that the primary cause of bdi is misinterpretation of the biliary anatomy. intraoperative cholangiography (ioc) has been advised by many authors as a technique that reduces the risk of bdi. however, the procedure has inherent limitations and is therefore reserved for select cases.
fluorescent cholangiography using indocyanine green (icg) is a novel approach that offers real-time intraoperative imaging of the biliary anatomy. a comparative study was conducted by administering icg intravenously or into the biliary tree during the operation. forty patients scheduled to undergo elective laparoscopic cholecystectomy were randomly divided into two groups: in group a, icg was administered intravenously at a dose of 2.5 mg in 2 ml solution 1 hour before surgery. in group b, icg was injected into the biliary tree in a 0.025 mg/ml solution mixed with the patient's bile. we also observed and analysed the following parameters before and after operation: liver function, bmi, asa score and possible complications. results: group a: intravenous icg was administered in 20 patients. there was no adverse reaction, and the extrahepatic biliary anatomy was identified well. there was no bdi or any complication related to the procedure. group b: icg was injected into the biliary tree in 20 patients during the laparoscopic procedure. in all but one patient, the extrahepatic biliary tree was delineated very well. in one patient, part of the icg solution was injected into the gallbladder wall, which resulted in a partially confusing image. there was no bdi and no postoperative complication. conclusions: fluorescence cholangiography can be used during laparoscopic cholecystectomy to obtain fluorescence images of the bile ducts following either intrabiliary injection during the operation or intravenous injection 1 h before the procedure. the latter technique is easier to perform and does not require catheterization of the biliary tree.

endoscopia digestiva chirurgica, policlinico universitario ''a.
gemelli'', rome, italy; 2 ihu, strasbourg, france; 3 camma group, icube, university of strasbourg, cnrs, ihu strasbourg, strasbourg, france; 4 ircad, strasbourg, france; 5 digestive and endocrine surgery, nouvel hopital civil, university of strasbourg, strasbourg, france; 6 digestive and endocrine surgery, ihu-strasbourg, strasbourg, france. aim: surgical societies are united in promoting the critical view of safety (cvs) during laparoscopic cholecystectomy (lc). nonetheless, reports have shown a discrepancy between operative reports and the correct application of cvs, which may explain the stability of bile duct injury rates. therefore, surgeons and computer scientists at our institution are developing a machine-learning algorithm to automate cvs assessment. however, the lack of a consistent cvs video assessment framework limits the ability to generate data to train the artificial intelligence. here we describe and test a method for cvs evaluation in videos. method: between march and july 2016, 100 consecutive videos of lc performed at nouvel hopital civil (strasbourg, france) were recorded. two independent reviewers assessed the achievement of cvs in the 60-s video sequences preceding clipping of the cystic duct and artery. in addition to the 'doublet view' method, a 'binary' video evaluation method was tested: each of the 3 criteria composing the cvs (2 structures entering the gallbladder, clearance of the hepatocystic triangle, and lower part of the cystic plate) was classified as achieved or not. if the 3 criteria were met, then the cvs was considered achieved. inter-rater agreement for cvs and for each of the 3 criteria was evaluated. results: twenty-two videos (12 fundus-first and 5 partial lc, and 5 broken videos) were excluded from the cvs analysis. cvs elements were assessable in all but one of the 60-s video sequences (98.72%). after mediation, cvs was achieved in 32/78 (41.03%) of lc. the cystic plate was identified in only 52.56% of videos.
inter-rater agreement using the doublet view vs. the binary method was as follows: 83.33% (κ = 0.54) vs. 88.46% (κ = 0.75) for cvs achievement, 66.66% (κ = 0.48) vs. 93.59% (κ = 0.79) for the 2 structures, 65.38% (κ = 0.45) vs. 82.05% (κ = 0.62) for the hepatocystic triangle, and 61.53% (κ = 0.36) vs. 88.46% (κ = 0.77) for the cystic plate (fig. 1). conclusions: reliable cvs assessment is crucial to generate consistent data for machine-learning algorithms aiming to decrease bile duct injury after cholecystectomy. our binary cvs video assessment method showed higher inter-rater reliability than the doublet view, originally described for the assessment of photos. further studies are ongoing to validate cvs assessment in videos and support our initial results.

the vital role of surgeries in healthcare requires constant attention to improvement. surgical process modeling is an innovative and rather recently introduced approach for tackling the issues in today's complex surgeries, which involve complex logistics, substantial technology, and large teams. surgical process modeling allows for evaluating the introduction of new technologies and tools prior to actual development and is beneficial in optimizing treatment planning and treatment performance in the operating room. in this study, we first discuss the concepts associated with surgical process modeling, aiming to clarify them and to promote their use in future studies. next, we apply these concepts to analyze the procedure of challenging interventions, minimally invasive liver treatment (milt) methods, with the ultimate goal of improving and optimizing the treatment procedure. the procedure models of current treatment activities and planning of various milt methods, and the associated techniques, are analyzed and combined into a generic procedure model of milt, which provides a firm foundation for qualitative and quantitative analysis of different milt procedures.
the generic procedure model is validated with data from erasmus medical center (rotterdam, the netherlands) and oslo university hospital (oslo, norway). the proposed procedure model is designed to be a basis for improvement of the procedure and to determine how and where new technologies can best be employed, effectively and efficiently, in clinical practice prior to and/or during actual development of new technologies for milt. in conclusion, the current work illuminates the importance of surgical process modeling for improving different aspects of treatment procedures and provides an overview of various modeling strategies that can be used to establish surgical process models. the generic procedure model of various milt methods, including laparoscopic liver resection, laparoscopic liver ablation and percutaneous ablation, is introduced and validated as a basis for the introduction of an optimized procedure model of milt.

objective: to determine the most appropriate time to start total laparoscopic living donor right hepatectomy (tldrh) based on experience with laparoscopic liver resection (llr). summary background data: accumulation of experience in llr is essential before starting tldrh to ensure donor safety. methods: we retrospectively reviewed data of 567 and 78 consecutive patients who underwent llr and donor hepatectomy, respectively, between 2003 and 2017. operative outcomes of laparoscopic major hepatectomy (lmh) were compared between two periods based on tldrh introduction (phase i, 2003-2009 vs. phase ii, from 2010). the learning curve of llr was evaluated using the cumulative sum (cusum) method to determine the optimal time for tldrh introduction. conclusion: accumulating experience of at least 73 lmh cases is needed in low-volume lt centers before starting tldrh to ensure donor safety.

introduction: the number of surgical adverse events is still too high.
an important number of these adverse events occur within the operating room (or) and are in fact preventable. in order to reduce adverse events in the or, we simply need to know what went well and what can be done better. the aim of this study was to analyze and debrief a predefined selection of surgical procedures, with the use of an operating room 'black box', to identify commonly observed safety threats and resilience support events. methods: in the period 2017-2018, 35 predefined gastro-intestinal laparoscopic cases were recorded by the or 'black box'. the postoperative surgical team assessment record (star) questionnaire was used. the recordings were analyzed by specifically trained raters, using the systems engineering initiative for patient safety (seips) model of work system and patient safety to identify relevant safety threat and resilience support events. qualitative data analysis was used to identify the most commonly discussed events during the team debriefings. results: when asked directly after surgery, or team members indicated in only 26.5% (n = 65) of instances that they had noticed aberrations (n = 234) during the case. a mean number of 52.5 (sd 15.0) relevant positive and negative events (i.e., aberrations) per surgical procedure were identified using the black box performance report. on average, 11.5 (sd 4.2) of the events identified by the black box were rated as safety threats. most events discussed during the team debriefings were related to communication. conclusion: these results once again highlight the importance of clear and closed-loop communication in the operating room. theatre staff underestimated the number of aberrations occurring in the or when asked to retrieve them from memory. postoperative structured team debriefing may be important for resolving incorrect assumptions between operating team members and avoiding future unnecessary miscommunication.
background: the eaes has recently published an intraoperative adverse event classification to assist the direct measurement and routine reporting of minimal access surgery interventions. we aimed to explore the clinical validity and reliability of the classification. methods: a prospective evaluation utilising case videos and clinical data from a completed multi-centre laparoscopic total mesorectal excision surgery randomised controlled trial was performed (isrctn59485808). enacted adverse events identified with the observational clinical human reliability analysis technique were graded with the eaes classification by two blinded, independent assessors. test-retest reliability was explored using grades previously applied during the development of the classification, with intraclass correlation coefficients calculated. clinical validity was assessed using 30-day morbidity events, the clavien-dindo classification and the highest eaes grade per case. results: 77 laparoscopic cases (419 h of surgery) contained 1393 error events, which were all successfully categorised. excellent inter-rater and test-retest reliability was seen (icc 0.957, 95% ci 0.952-0.961, p < 0.001 and icc 0.893, 95% ci 0.88-0.904, p < 0.001 respectively). 61% of patients experienced post-operative morbidity (median 1 event, range 0-5). labelling analysed cases by their highest eaes classification grade gave 53% grade 2, 43% grade 3 and 4% grade 4 procedures. 51% of grade 2 cases developed a morbidity event, but this significantly increased in grade 3 and 4 operations (70% and 100%, p = 0.043). the number of complications and highest recorded clavien-dindo grade increased with each additional grade (1.05 ± 1.3 vs. 1.48 ± 1.3 vs. 2.33 ± 0.6, p = 0.145 and median 1 vs. 2 vs. 3, p = 0.023 respectively). anastomotic leak and re-operation were correctly captured by the allocated eaes grade (2.5% vs. 3.3% vs. 100%, p < 0.001 and 5% vs. 0% vs. 66%, p < 0.001 respectively).
there was a significant rise in length of stay observed with increasing eaes grade (median 6 vs. 7 vs. 61 days, p < 0.001). conclusion: in the context of major laparoscopic surgery, the eaes intraoperative adverse event classification is seen to be a clinically valid and reliable assessment method.

psychological medicine, nuhs, singapore, singapore. aims: neurobiological feedback in surgical training could translate to better educational outcomes, such as measures of the learning curve. the variation in brain activation of medical students when performing laparoscopic tasks before and after a training workshop has not been properly studied before, and we planned to do this using functional near-infrared spectroscopy (fnirs), a non-invasive optical brain imaging tool that measures cortical oxygenation change, which is used as a marker of pre-frontal cortex activity (pfca). methods: this randomised controlled trial examined the pfc activity differences in two groups of novice medical students during the acquisition of 4 basic laparoscopic tasks. the 'trained group' had standardised one-to-one training on the tasks, while the 'untrained group' had no prior training and was just shown a video of the tasks. the pfca was measured pre- and post-intervention using a portable fnirs device. the primary outcome was the difference in pfca pre- and post-intervention. secondary outcomes were the differences in pfca between the 4 tasks and between the sexes. results: 16 trained and 16 untrained medical students with an equal sex distribution and a comparable age distribution were involved in the study. all students were right-handed. the trained group had significantly attenuated pfca in the 'precision-cutting' (p = 0.011) and 'suture-insertion' (p = 0.025) tasks compared to the untrained group.
subgroup analysis based on sex revealed significant attenuation in pfca in trained females compared to untrained females across 3 of the 4 laparoscopic tasks: 'peg-transfer' (p = 0.013), 'precision-cutting' (p = 0.034), 'suture-insertion' (p = 0.03). no significant pfca attenuation was found in male students who underwent training compared to untrained males. conclusion: a standardised laparoscopic training workshop promoted greater pfca attenuation in female medical students compared to males. this suggests that female and male students respond differently to the same instructional approach. these results may have implications for surgical training and education, such as a greater focus on one-to-one surgical training for female students and the use of pfca attenuation as a form of neurobiological feedback to measure the learning curve in surgical training.

robot-assisted versus laparoscopic advanced suturing learning curve. e. leijte 1, i. de blaauw 2, c. rosman 1, s.m.b.i. botden 2; 1 surgery, radboudumc, nijmegen, the netherlands; 2 pediatric surgery, radboudumc, nijmegen, the netherlands. aims: compared to conventional laparoscopy, robot-assisted surgery is expected to have the most potential in difficult areas and for demanding technical skills such as minimally invasive suturing. this study was performed to identify the differences in the learning curves of laparoscopic versus robot-assisted advanced suturing. method: novice participants with knowledge of basic surgical procedures were recruited and performed three suturing tasks on the eosim laparoscopic augmented reality simulator or the robotix robot-assisted virtual reality simulator. each participant performed an intracorporeal suturing task, a tilted plane needle transfer task and an anastomosis needle transfer task. to complete the learning curve, all tasks were repeated for a maximum of twenty repetitions or until a plateau was reached three consecutive times.
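the stopping rule described here (repeat up to twenty times, or stop once performance has plateaued for three consecutive repetitions) can be sketched as follows; the plateau tolerance is a hypothetical parameter, not specified in the abstract:

```python
def completed_learning_curve(times, max_reps=20, plateau_len=3, tol=0.10):
    """Return the repetition count at which the curve is considered complete.

    A plateau is assumed here to mean `plateau_len` consecutive repetitions
    whose task times each differ from the previous repetition by less than
    `tol` (10%) -- an illustrative definition, not the study's own.
    """
    streak = 0
    for i, t in enumerate(times, start=1):
        if i > 1 and abs(t - times[i - 2]) <= tol * times[i - 2]:
            streak += 1
        else:
            streak = 0
        if streak >= plateau_len or i >= max_reps:
            return i
    return len(times)

# Task times (s) that stabilise: the plateau is detected at repetition 7,
# well before the 20-repetition cap.
print(completed_learning_curve([600, 450, 300, 210, 200, 195, 192]))  # 7
```

a participant who never stabilises would simply exhaust the twenty-repetition cap, matching the abstract's "maximum of twenty repetitions" rule.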
clinically relevant and comparable parameters regarding time (seconds), movements and safety were recorded. intracorporeal suturing was used to visualize and compare the learning curves between the groups. results: forty-six participants completed the learning curve, of which 16 laparoscopically and 30 robot-assisted. when comparing suture time, the plateau was reached much faster in the robot-assisted group (7-9 repetitions) than in the laparoscopic group (10-12 repetitions), as shown in figure 1. there was a significant difference in 'time per suture' during the whole learning curve, with median values of 637 versus 251 seconds (first knot), 450 versus 147 (fifth) and 186 versus 115 (eighteenth), all with p < 0.05. however, an 'adequate surgical knot' was achieved earlier in the laparoscopic group than in the robot-assisted group: first knot 69% versus 60%, fifth 100% versus 70%, and eighteenth 100% versus 83%. when assessing the 'needle out of view' parameter, the robot-assisted group scored a median of 0.3 and 0.0 s during the first and eighteenth knot respectively, whereas the laparoscopic participants had their instruments out of view for 41 and 17 s during the first and eighteenth knot respectively. conclusion: the learning curve of minimally invasive suturing can be reduced with the use of robot-assisted surgery, with a specific reduction in operation time. the rate of adequate knots seemed to remain lower in robot-assisted surgery, although this could be due to the virtual reality aspect of the simulator.

introduction: endoscopic sleeve gastroplasty (esg) is a novel, promising bariatric endoscopy treatment. gastric volume reduction and delayed gastric emptying are the mechanisms driving weight loss. however, little is known about the factors influencing the effectiveness of weight loss over time. the present study aims at evaluating the correlation between endoscopic suture appearance and excess weight loss (ewl%) at 6 and 12 months follow-up.
patients and methods: all patients who underwent follow-up endoscopy at 6 and 12 months after esg were included. esgs were classified into 3 groups according to the endoscopic appearance of the gastric sutures: optimal (group 1) when all stitches were in place and tight; suboptimal (group 2) when one or more stitches were displaced; loose (group 3) when all the sutures were completely disrupted. bmi at enrollment and ewl% at 6 and 12 months were recorded and compared to the endoscopic appearance. results: a total of 53 patients were included in the analysis. at 6 months, 25 (47.2%) patients had an optimal esg, 24 (45.3%) had a suboptimal sleeve and 4 (7.5%) had complete suture failure. bmi at enrollment and ewl% were respectively 37.7 ± 4.2 and 36.6 ± 21.3% for group 1, 43.6 ± 6.7 and 22.77 ± 18.7% for group 2, and 50.7 ± 14.4 and 7.8 ± 16.5% for group 3. twenty-five patients had 12-month egds: 5 (20%) presented an intact esg and were classified in group 1, 15 (60%) in group 2 and 5 (20%) in group 3. twelve-month ewl% was respectively 47.6 ± 9.1%, 31.3 ± 29.3% and 12 ± 14.4%. initial bmi significantly correlated with suture status at both 6 months (rho -0.528; p < 0.001) and 12 months (rho -0.423; p = 0.035) of follow-up. furthermore, the sutures' appearance itself correlated with ewl% at both time points (rho +0.416; p = 0.002 and rho +0.439; p = 0.028 respectively). conclusion: our preliminary results show that the aspect of the endoscopic suture has a significant impact on ewl% at 6 and 12 months after esg. furthermore, bmi at enrollment seems to predict endoscopic suture durability over time. larger studies and longer follow-up are needed to further validate our preliminary findings.

background and aim: endoscopic sleeve gastroplasty (esg) is a relatively novel endoscopic procedure that reduces the gastric lumen, with proven fewer complications but less 6-month weight loss compared to laparoscopic sleeve gastrectomy (lsg).
at present there are no studies investigating the role of a multidisciplinary approach in esg. the aims of the present study were to evaluate the role of multidisciplinary assessment (ma) prior to esg, weight loss outcomes, quality of life improvements and adverse events. material and methods: from may 2016 to may 2018 all patients who underwent esg were retrospectively evaluated from a prospective database. until september 2017, only psychiatric evaluation was requested before esg, while after this date we adopted the guidelines of the italian society for obesity surgery and all patients were evaluated in a multidisciplinary fashion prior to esg. the multidisciplinary team was composed of a gastroenterologist, surgeon, psychiatrist, endocrinologist and dietitian. patients were divided in two groups: group 1 were patients with esg before ma and group 2 were patients with esg after ma. we compared these two groups in terms of weight loss outcomes, quality of life improvements and adverse events. quality of life was measured with the bariatric analysis and reporting outcome system (baros). all procedures were done with the apollo overstitch suturing system (apollo endosurgery) and a double channel gastroscope olympus 2tgif-160 (olympus, japan). all procedures were done under general anesthesia and with insufflation of co2. all patients had ambulatory visits at 1, 3 and 6 months after esg, and weight loss outcomes were measured in terms of excess weight loss (%ewl) and total body weight loss (%tbwl), and the baros scale was assessed. statistical analysis was done with the chi-square test and a p value < 0.05 was considered significant. results: 31 patients were identified (20 female; mean age 45.4, range 23-73). mean bmi at inclusion was 41.6 (range 31.6-62.4). mean %ewl and %tbwl at 6 months was 37.1 and 16.7 respectively (table 1). no procedure-related complications were observed.
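the %ewl and %tbwl outcomes reported in these abstracts follow the standard bariatric definitions; a minimal sketch, assuming the common convention of taking ideal weight at bmi 25 kg/m² (the abstracts do not state which reference weight they used).

```python
# Standard bariatric weight-loss metrics used in the abstracts above.
# Ideal weight at BMI 25 kg/m^2 is a common convention, assumed here.

def pct_tbwl(initial_kg, current_kg):
    """% total body weight loss."""
    return 100.0 * (initial_kg - current_kg) / initial_kg

def pct_ewl(initial_kg, current_kg, height_m):
    """% excess weight loss; excess = weight above BMI 25."""
    ideal_kg = 25.0 * height_m ** 2
    return 100.0 * (initial_kg - current_kg) / (initial_kg - ideal_kg)

# example: a 120 kg, 1.70 m patient who is down to 100 kg
print(round(pct_tbwl(120, 100), 1))       # → 16.7
print(round(pct_ewl(120, 100, 1.70), 1))  # → 41.9
```

%ewl is always numerically larger than %tbwl for the same patient, which is why the two figures reported above (e.g. 37.1 vs. 16.7) differ so much.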
comparing the two groups there was a significant (p < 0.05) difference in terms of %ewl and %tbwl (table 2), with better results in group 2. there was also a significant improvement in the baros scale in the patients in group 2. conclusions: ma before esg has a fundamental role in terms of better procedure outcomes for both weight loss and quality of life in obese patients. 2 gastroenterology, hadassah medical center, jerusalem, israel aims: the over-the-scope clip (ovesco) is a novel endoscopic tool that enables non-surgical management of gastrointestinal defects. the aim of this study was to report our experience with ovesco for patients with staple line leaks following laparoscopic sleeve gastrectomy (lsg). methods: a prospectively maintained irb-approved institutional database was queried for all patients treated with ovesco for staple line leaks following lsg from 2010 to 2018. primary outcome was complete resolution of leak following ovesco as defined by return to complete oral nutrition and no evidence of leak on imaging. secondary outcome was the number of additional endoscopic or surgical procedures needed following ovesco. results: twenty-five patients (12 males, 13 females) were treated with ovesco for staple line leaks following lsg. the median age was 35 years (range 18-62), and mean body mass index was 44 kg/m². nine patients (35%) were referred from an outside hospital. the median time from index operation to leak diagnosis and from leak diagnosis to ovesco was 18 days (range 2-118) and 6 days (range 1-120), respectively. all patients had upper staple-line leaks near the gastroesophageal junction. initial treatment included antibiotics in 6 patients; computed tomography guided drainage and antibiotics in 7 patients; and laparoscopic drainage in 12 patients. ovesco led to final resolution of the leak in 8 patients (33%) within 70 days of clip deployment (range 41-136).
leaks which persisted following ovesco were eventually resolved with a combination of ovesco and stent in 5 patients (21%), total gastrectomy and esophago-jejunostomy in 10 patients (42%), and endoscopic suturing in 1 patient (4%). one mortality was noted in a patient who suffered multiorgan failure. the number of additional endoscopic sessions ranged from 1 to 10 (median 2). no procedure related complications were noted. all patients were treated with total parenteral nutrition and the total length of stay was 49 days (range 13-127). conclusions: despite its low success rate, ovesco should be part of the bariatric surgeon's non-surgical armamentarium in treating staple line leaks following lsg. r. bademci 1 , r. vilallonga 2 , p. alberti 2 , r. renato 2 , c. yuhamy 2 , s.s. cordero 2 , l. posadas 2 1 general surgery, istanbul medipol üniversitesi, istanbul, turkey; 2 bariatric surgery, vall d'hebron, barcelona, spain background: in cases of morbid obesity, treatment is generally applied as either a surgical or endoscopic approach. the number of primary obesity surgery endolumenal (pose) procedures is increasing, but their reliability and effectiveness are as yet unclear. the aim of this study was to present a series of cases that required revision surgery due to pose failure and to reveal possible alternative surgeries. materials and methods: a retrospective comparison was made of the data of obese patients with pose failure and conversion to surgical procedures between 2016 and 2018 in respect of operation, medical illness and bmi results. results: the patients comprised 60% females, 40% males with a mean age of 44.8 ± 12.4 years and mean follow-up period of 12.6 ± 8.3 months. on average, patients lost 24.1 ± 8.9 kg, with an average excess weight loss of 47.6%. conclusion: no firm conclusions can be drawn from such a small group.
although sg seems to be a safe procedure and should be considered as the first technique to be applied following pose failure, it is possible to perform gastric bypass on patients with this endoscopic precursor. introduction: the population of post bariatric surgery patients is rapidly increasing worldwide. due to the altered anatomy post roux-en-y gastric bypass (rygb), conventional endoscopic management of choledocholithiasis is challenging. these patients are now commonly managed by means of a laparoscopic-assisted ercp. although effective, this requires significant resource utilization and carries potential morbidity related to the need for surgical intervention. we present our preliminary experience with a purely percutaneous management of choledocholithiasis in bariatric patients post-rygb. methods: a retrospective single center review identified five patients with choledocholithiasis after bariatric rygb who underwent percutaneous cbd access and treatment by interventional radiology. four patients underwent percutaneous transhepatic cbd access while one patient underwent percutaneous trans-cholecystic cbd access. in three of the five patients conscious sedation alone was sufficient to perform the procedure. results: all patients had radiologically confirmed choledocholithiasis and were clinically symptomatic prior to intervention. the biliary tree was successfully accessed percutaneously and cleared in all five patients. in the four patients where a percutaneous transhepatic access was utilized, three patients required only fluoroscopic balloon sphincteroplasty and sweep of the cbd to clear the ductal stones, while the fourth required percutaneous cholangioscopy assisted lithotripsy for clearance. in the fifth patient, with non-dilated intrahepatic bile ducts, a trans-cholecystic approach into the cbd was utilized with percutaneous cholangioscopic assistance to clear the ductal stones. all procedures were completed successfully with no post procedure complications.
conclusion: percutaneous clearance of cbd stones in bariatric patients presents a minimally invasive alternative to current surgical practice. the use of conscious sedation and the purely percutaneous approach may potentially reduce morbidity and resource utilization for this increasingly common clinical scenario. laparoscopic narbona-arnau procedure to control gerd after lsg - 3 years results of a prospective study i.c. hutopila, c. copaescu background: after laparoscopic sleeve gastrectomy (lsg), alone or associated with calibration of the esophageal hiatus, for some patients the reflux symptoms worsen postoperatively due to development of a hiatal hernia (hh) or due to the recurrence of a hh previously repaired. for these situations, when conservative treatment fails, several surgical solutions have been proposed, one of them cardiopexy with the teres ligament (narbona-arnau). objective: to establish a standardized laparoscopic technique for cardiopexy using the teres ligament (narbona-arnau technique) and to analyze the procedure's outcomes. methods: the study was performed in a bariatric and metabolic center of excellence-ponderas academic hospital. all the patients undergoing the narbona-arnau procedure to control gerd after lsg since 2014 were included and prospectively analyzed. the selection criteria included lsg patients presenting hh and symptomatic gerd. preoperative investigations were upper gastrointestinal endoscopy, radiological contrast study, ph-metry, and computed tomography with oral contrast. results: 28 patients were included in the study. gerd and hh were preoperatively documented in all the cases. one patient was excluded after 2 years of follow up after being converted to a laparoscopic roux-en-y gastric bypass for intense reflux symptoms. there were no incidents during surgery. in 8 cases the laparoscopic narbona-arnau technique was performed concurrently with re-sleeve gastrectomy and gastric curvature plication. there were no postoperative complications.
at postoperative follow-up at 6 months, 1, 2 and 3 years, the percentage of patients without gerd symptoms and free of treatment with ppis was 64.28%, 82.14%, 71.42% and 66.66%, respectively. at 3 years postoperatively the upper gi endoscopy showed remission/improvement of the degree of esophagitis for 17 patients. for the same period of follow-up, the ph-metry highlighted a normal value of the demeester score for 62.96% of patients (all the patients had preoperatively high demeester scores). no objective signs of hiatal hernia recurrence were encountered on imaging investigations and upper gastrointestinal endoscopy. conclusions: complete preoperative evaluation is mandatory for choosing the optimal intervention. the laparoscopic narbona-arnau technique after lsg proved to be a good option for the treatment of symptomatic gerd, but further studies with larger patient volumes are necessary. introduction: the aim of this study was to investigate the influence of baseline glycated hemoglobin (hba1c) level in bariatric patients on postoperative outcomes. clinical data regarding the influence of baseline hba1c on postoperative morbidity and readmission after bariatric surgery are scarce, which inspired this multicenter retrospective study. methods and procedures: this retrospective cohort study analyzed patients who underwent laparoscopic sleeve gastrectomy (sg), roux-en-y gastric bypass (rygb) or mini-gastric bypass (mgb) for morbid obesity in seven referral bariatric centers. patients were divided into groups depending on preoperative hba1c: hba1c < 5.7%; 5.7-6.4%; and ≥ 6.5%. primary endpoints: influence of hba1c level on perioperative (30-day) and postoperative (12-month) morbidity rates, operation time, length of hospital stay (los) and readmission rate. results: the study group included 2125 patients, 68% females and 32% males. median age was 43 (35-52) years. median hba1c was 5.7 (5.3-6.1).
hba1c < 5.7% was present in 49% of patients, hba1c 5.7-6.4% in 35%, and hba1c ≥ 6.5% in 16%. the percentage of male patients increased significantly across groups, from 26% in hba1c < 5.7% to 47% in hba1c ≥ 6.5%. the same tendency across groups was observed for bmi and age. uncontrolled diabetes (hba1c ≥ 6.5%) was present in 8.7% of patients, while 7.62% of patients were not on antidiabetic medications despite having hba1c ≥ 6.5%. median operative time in patients with hba1c ≥ 6.5% was significantly longer than in hba1c < 5.7% and hba1c 5.7-6.4%. the 30-day morbidity rate was 5.27% and did not differ significantly between groups, nor did the 12-month morbidity rate (excluding 30 days) of 2.02%. los did not differ significantly between groups. patients with hba1c in the range of 5.7-6.4% and with hba1c ≥ 6.5% did not have significantly increased odds for perioperative morbidity or 12-month postoperative morbidity as compared with those with hba1c < 5.7%. patients with hba1c ≥ 6.5% had an increased or for prolonged los as compared to those with hba1c < 5.7% (or 1.45; 95% ci 1.07-1.97). hba1c did not influence the or for readmissions. patients with baseline hba1c ≥ 8% had significantly increased odds for hospital readmission (or 3.53, 95% ci 1.35-9.21). conclusion: baseline hba1c level did not influence the chance of perioperative morbidity, 12-month postoperative morbidity or prolonged los. patients with hba1c ≥ 8% have an increased chance of hospital readmission. surg endosc (2019) 33:s485-s781 introduction: surgical resection is crucial for curative treatment of rectal cancer. through improvements in treatment and minimally invasive techniques, 5-year survival has improved to over 60%. the most recently introduced surgical technique is robotic-assisted surgery (ras). ras and conventional laparoscopy (cl) seem equally effective in terms of oncological control. however, ras possibly provides further advantages, e.g.
3d vision or the endowrist function, which have the potential to maximize the precision of surgery and thus may benefit functional outcomes such as sexual function as well as continence. therefore, the aim of this systematic review and meta-analysis was to compare functional outcomes of cl and ras for rectal cancer. materials and methods: this review was done according to the prisma and amstar guidelines and registered with prospero (crd42018104519). the search was planned with the pico criteria and conducted on medline (via pubmed), web of science and central. two independent reviewers first screened titles and abstracts and then eligible full-texts. inclusion criteria were original studies, comparative studies of cl vs. ras for rectal cancer as well as reporting of functional outcomes. quality assessment was done with the newcastle-ottawa scale for non-randomized studies and the cochrane risk-of-bias tool for randomized trials. results: the search retrieved 9603 hits, of which 51 studies with 26225 patients met inclusion criteria. preliminary results yielded a lower rate of urinary retention for ras (odds ratio (or) [95%-confidence interval (ci)] 0.64 [0.45, 0.91]), while there were no differences for ileus (or [ci] 0.90 [0.77, 1.04]). erectile function (iief) showed no differences after 3 (mean difference (md) [ci] 0.80 [-1.63, 3.21]), 6 (md [ci] 1.60 [-0.69, 3.89]) and 12 months (md [ci] 1.11 [-1.70, 3.93]). in terms of urinary problems (ipss) there were no differences at 3 months postoperatively (md [ci] -0.96 [-2.16, 0.23]) and 6 months postoperatively (md [ci] -0.92 [-1.96, 0.11]), but advantages for the cl group after 12 months (md [ci] -1.05 [-1.89, -0.21]). discussion: ras and cl seem to provide similar functional outcomes after rectal cancer surgery. however, the results need to be interpreted carefully as none of the studies had any functional outcome defined as primary endpoint.
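the or [95% ci] figures reported above are derived from 2 × 2 event counts; a minimal sketch of the standard log-scale (woolf) interval for a single 2 × 2 table, with made-up counts (not the review's data), reading significance off whether the interval crosses 1.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Woolf 95% CI from a 2x2 table:
    a = events in group 1, b = non-events in group 1,
    c = events in group 2, d = non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# illustrative counts: 10/100 events vs. 20/100 events
or_, lo, hi = odds_ratio_ci(10, 90, 20, 80)
print(round(or_, 3), round(lo, 3), round(hi, 3))  # → 0.444 0.196 1.006
```

here the upper bound just crosses 1, so the difference would not be significant at the 5% level; an interval entirely below 1 (like the urinary retention result above) indicates a significant effect favoring the first group.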
future studies should evaluate both surgical approaches in terms of functional outcomes and should be appropriately powered. methods: from april 2014 to november 2018, 252 laparoscopic right colectomies with intracorporeal anastomosis were performed in our surgical department. all patients in both groups were perioperatively managed using an eras pathway. seventy-two patients had the enterotomy closed with a single layer running suture of filbloc™ (assut europe). these patients were matched with 72 patients who underwent intracorporeal right colectomy with the enterotomy closed with a 'hybrid' double layer technique (first layer interrupted stitches in maxon™ 3-0 (covidien), second layer a running suture in pds™). intraoperative variables, anastomotic leak rate, morbidity and mortality rates were analyzed. results: the two groups were homogeneous with respect to demographics, body mass index (bmi) and american society of anesthesiologists (asa) score, as well as tumor stage. in the barbed group, median operating time was 121.5 min vs 140.7 min in the hybrid group (p = 0.02). anastomotic leak occurred in 5 (6.7%) patients in the hybrid vs 2 (2.7%) patients in the barbed group (p = 0.24); all leak patients required a reoperation. intraoperative findings at reoperation showed in 2 (0.4%) cases in the hybrid group a leak at the enterotomy closure, while an intact stapler access was observed in both patients in the barbed group. no difference was observed with respect to non-infectious complications between the two groups (p = 0.55). patients in the hybrid group experienced a longer hospital stay when compared to the barbed group (p = 0.03). a re-admission occurred in the hybrid group due to an intra-abdominal collection, while no re-admission was observed in the barbed group. no patient died in the postoperative period.
conclusion: our results show that the use of knotless barbed suture for enterotomy closure after laparoscopic intracorporeal right colectomy is safe, reproducible and associated with shorter operative time. aims: the accurate measurement and staging of rectal cancer, in particular the distal margin of low rectal tumours, is of paramount importance to optimise oncological surgical resection whilst preserving function. it is well recognised that the lower the tumour, the greater the technical challenges, operative time and the possibility of a temporary or permanent stoma. accurate localisation of the tumour is also essential to assist the multi-disciplinary team when considering neo-adjuvant chemoradiotherapy (crtx). the objective was to compare tumour height as reported on magnetic resonance imaging (mri) with endoscopic measurement. methods: a retrospective analysis of rectal tumour heights on pre-operative endoscopy and mri in patients undergoing radical colorectal surgery with curative intent over 3 years from january 2015. rectal tumours were identified as within 15 cm of the anal verge (av). all mri measurements were reported by one of two specialist gastrointestinal radiologists. measurements were taken from the lowermost point of the tumour to the av. endoscopic measurements were as recorded by 11 endoscopists, including 2 rectal surgeons, 4 general surgeons, 4 gastroenterologists and a clinical nurse specialist endoscopist. results: records of eighty-one patients with histologically confirmed rectal adenocarcinoma were reviewed. median age was 64 years (35 to 93). twenty-three patients had 2 or more endoscopies. on mri the median tumour height from the av was 10.75 cm (3.5-18 cm). on endoscopy the median tumour height was 23 cm (1-45 cm). on comparing endoscopy with mri, the median difference was 12 cm (0-24 cm). for over a third of patients (36%) tumours were lower on mri than endoscopy, median difference 12.25 cm (0.5-24 cm).
only rectal surgeons documented tumour height in relation to the rectal folds. the majority of the repeat endoscopies were performed by surgeons to locate tumours more accurately pre-surgery. on no occasion was it documented whether the tumour had been measured during insertion or withdrawal of the endoscope. conclusions: precise localisation of rectal tumours is imperative to plan complex surgery and give informed counsel to patients. this study demonstrates the urgent need for a standardised protocol for all endoscopists to use while recording the distal extent of rectal tumours. objectives: the aim of the present rct was to compare the incidence of genitourinary (gu) dysfunction after elective laparoscopic low anterior rectal resection and total mesorectal excision (lar + tme) with high or low ligation (ll) of the inferior mesenteric artery (ima). secondary aims included the incidence of anastomotic leakage and oncological outcomes. background: the criterion standard surgical approach for rectal cancer is lar + tme. the level of artery ligation remains an issue related to functional outcome, anastomotic leak rate, and oncological adequacy. retrospective studies failed to provide strong evidence in favor of one particular vascular approach, and the specific impact on gu function is poorly understood. methods: between june 2014 and december 2016, patients who underwent elective laparoscopic lar + tme in 6 italian nonacademic hospitals were randomized to high ligation (hl) or ll of the ima after meeting the inclusion criteria. gu function was evaluated using a standardized survey and uroflowmetric examination. the trial was registered under the clinicaltrials.gov identifier nct02153801. results: a total of 214 patients were randomized to hl (n = 111) or ll (n = 103). gu function was impaired in both groups after surgery. the ll group reported better continence, fewer obstructive urinary symptoms and improved quality of life at 9 months postoperatively.
sexual function was better in the ll group compared to the hl group at 9 months. urinated volume, maximum urinary flow, and flow time were significantly (p < 0.05) in favour of the ll group at 1 and 9 months from surgery. ultrasound-measured post void residual volume and average urinary flow were significantly (p < 0.05) better in the ll group at 9 months postoperatively. time of flow worsened in both groups at 9 months compared to baseline. there was no difference in anastomotic leak rate (8.1% hl vs 6.7% ll). there were no differences in terms of blood loss, surgical times, postoperative complications, and initial oncological outcomes between groups. conclusions: ll of the ima in lar + tme results in better gu function preservation without affecting initial oncological outcomes. hl does not seem to increase the anastomotic leak rate. introduction: robotic single-site cholecystectomy (rssc) has been known to have some advantages, such as reducing stress on the surgeon, compared to single incision laparoscopic cholecystectomy (silc). however, there are few studies comparing the perioperative outcomes of these two operative methods. patients and methods: between march 2014 and february 2018, 145 rssc and 268 silc were performed for benign gallbladder disease in our center. propensity score matching was performed to control for variables including sex, age, body mass index (bmi), diagnosis and american society of anesthesiologists (asa) score, and 145 patients were selected from the silc group through 1:1 matching. the perioperative data of these 290 patients were analyzed retrospectively. the diagnosis was classified into acute cholecystitis, chronic cholecystitis, and gallbladder polyp. results: patient demographics between the two groups were evenly matched. total operation time including docking time was slightly longer in the rssc group (48.1 min vs. 42.6 min, p < 0.001), but real working time excluding docking or set-up was shorter in the rssc group (19.2 min vs.
23.5 min, p < 0.001). conversion to an additional robotic arm or additional port was more frequent in the silc group (0 vs. 5 cases, p = 0.03). intraoperative bile spillage rate (13.8% vs. 11.7%, p = 0.725) and postoperative hospital stay (1.8 days vs. 1.7 days, p = 0.091) were comparable in both groups. conclusion: both surgical procedures were performed safely, but rssc demonstrated better operative performance, with a shorter working time and the advantage of overcoming unexpected difficulties during surgery with a low conversion rate compared to silc. even though laparoscopic cholecystectomy (lc) is the gold standard procedure for cholelithiasis, patients still suffer from various causes of pain. one of the main causes is the high pressure of the pneumoperitoneum, which causes peritoneal stretching and diaphragmatic irritation. however, there are few well-designed studies evaluating pneumoperitoneum. therefore, we conducted a study to compare postoperative pain after lc at different pneumoperitoneum pressures. a prospective randomised double blind study was done in 147 patients with benign gallbladder disease. they were divided into 3 groups. in each group 49 patients underwent lc with a different pneumoperitoneum pressure; group a: far-low (6-8 mmhg), group b: low (9-11 mmhg) and group c: standard pressure (12-14 mmhg). the three groups were compared for pain intensity, duration, analgesic requirement and complications. post-operative pain score was significantly lowest in the far-low pressure group as compared to the low or standard pressure group during late periods (12, 24 h). however, there was no pain score difference between the far-low and low groups during the early period (1, 2, 4, 8 h), even though scores of the standard group were significantly higher than those of the low group. the number of patients requiring rescue analgesic doses and intraoperative complications were not significantly different among the 3 groups.
this study demonstrates that reducing the pressure of the pneumoperitoneum results in a reduction in the intensity of post-operative pain. this study also shows that the low pressure technique is safe, with a comparable rate of intraoperative complications. however, in the immediate postoperative period, there is a limitation to pain relief after low pressure surgery. therefore, new alternatives for pain control may be needed. background and aim: anatomical hepatectomy with the glissonian approach is widely accepted as an important technique to ensure surgical safety and curability of the carcinoma. however, the histomorphological structure of the hepatic connective tissue is not sufficiently understood by surgeons. this study aimed to clarify the hepatic connective tissue structure using modern tissue imaging and analytical techniques. materials and methods: in total, 5000 stained thin slices were loaded onto the computer and were reconstructed as 3d images and analyzed. results: when the liver capsule enters the liver at the hepatic hilum, it becomes a sheath which envelops the portal pedicle. the hepatocytes in a row that constitute the periportal limiting plate at the edge of the hepatic lobule are firmly supported by the framework of the reticular fiber. the hepatic lobule and the portal area are in contact via the periportal space of mall. the framework of the limiting plate plays the role of a capsule of the hepatic lobule (proper hepatic capsule) on the side in contact with the portal area. the binding site between the hepatic capsule and the proper hepatic capsule (ppbs) is loosely bound and is a layer that is easy to exploit in surgical procedures. in order to enter between the liver capsule, which becomes the sheath of the portal pedicle, and the proper hepatic capsule at the hepatic hilum, the liver capsule must be dissected to reach the surface of the proper hepatic capsule.
then, on the one hand, the portal pedicle is firmly gripped and pulled; on the other hand, the hepatic parenchyma covered by the proper hepatic capsule is pushed to expand the space between the portal pedicle and the liver parenchyma. at this time, the portal area (glisson's sheath) branched from the sheath of the portal pedicle into the gap of the hepatic lobule breaks like a string. with this dissecting plane, the dissecting layer can reach the next branch of the portal pedicle without entering the portal pedicle or liver parenchyma. conclusion: understanding the connective tissue constituting the liver and conducting surgery accordingly turns laparoscopic systematic hepatectomy into a standardized procedure. background: postoperative pancreatic fistula (popf) is the primary contributor to morbidity after distal pancreatectomy (dp). to date, no technique used for the transection and closure of the pancreatic stump has shown a clear superiority over the others. this study aimed to compare the rate of popf after pancreatic transection conducted with the reinforced stapler (rs) and ultrasonic dissector (ud) following dp. method: consecutive patients who underwent dp from 2014 to 2017 were retrospectively reviewed. we included dps where pancreatic transection was performed by rs or ud and excluded dps extended to the pancreatic head. to overcome the absence of randomization, we conducted a propensity matching analysis according to risk factors for popf. results: overall, 200 patients met the inclusion criteria. the rs was employed in 108 patients and the ud in 92 cases. after the one-to-one propensity matching, 92 patients were selected from each group. the matched rs and ud cohorts had no differences in baseline characteristics except for the minimally invasive approach, which was more common in the ud group (34% vs. 51%, p = 0.025). overall, 48 patients (26%) developed a popf, 46 a grade b (25%) and 2 (1%) a grade c.
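the one-to-one propensity matching described in the popf study above pairs each patient in one group with the patient in the other group whose estimated propensity score is closest; a greedy nearest-neighbour sketch, assuming the scores have already been estimated (e.g. by a logistic model on the popf risk factors — the scores and caliper below are illustrative, not the study's).

```python
def greedy_match(treated_scores, control_scores, caliper=0.1):
    """1:1 nearest-neighbour propensity matching without replacement.
    Returns (treated_index, control_index) pairs whose score
    difference is within the caliper."""
    available = dict(enumerate(control_scores))  # unmatched controls
    pairs = []
    for ti, ts in enumerate(treated_scores):
        if not available:
            break
        # nearest remaining control by absolute score difference
        ci = min(available, key=lambda i: abs(available[i] - ts))
        if abs(available[ci] - ts) <= caliper:
            pairs.append((ti, ci))
            del available[ci]  # match without replacement
    return pairs

treated = [0.30, 0.55, 0.80]
controls = [0.32, 0.50, 0.79, 0.95]
print(greedy_match(treated, controls))  # → [(0, 0), (1, 1), (2, 2)]
```

matching without replacement and enforcing a caliper is what yields the balanced 92-vs-92 cohorts described above; unmatched patients are simply dropped from the comparison.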
in the rs group the rate of popf was 12% (n = 11) and in the ud group 38% (n = 35), with p < 0.001. conclusion: the results of this study suggest that the use of rs for pancreatic transection reduces the risk of postoperative pancreatic fistula. a randomized trial is needed to confirm these preliminary data. aim: this study compares clinical and cost outcomes of robot-assisted single port and open longitudinal pancreaticojejunostomy (ralpj and olpj) for chronic pancreatitis. single incision mis demands more manual skill than conventional multiport operation. the advantages for the course of the operation are 3d vision and dedicated instruments. this paper aims to evaluate the feasibility and safety of the robot-assisted single incision with single port platform for chronic pancreatitis. materials and methods: clinical and cost data were retrospectively compared between open lpj and ralpj. we collected 21 patients from july 2015 to september 2018. the patient was placed supine in reverse trendelenburg position. the assistant surgeon was located between the patient's legs. under general anesthesia a trans-umbilical 4.0 cm skin incision was made. a single incision advanced access platform with lagis port, glove port® (nelis, s. korea) and gelpoint, combined with the da vinci si and xi surgical system (intuitive surgical, sunnyvale, ca, usa), pure or plus one, was used. the three arms, no. 1, no. 2, and the da vinci scope, were indwelled through the glove port®. pneumoperitoneum of 12 mmhg was established through the port. a rigid 30-degree up scope was used during the operation. results: twenty-one patients underwent lpj: 5 open and 16 ralpj. no robot-assisted cases converted to open were noted. patients undergoing ralpj had less intraoperative blood loss, a shorter surgical length of stay, less postoperative pain and lower medication costs. operative supply cost was higher in the ralpj group. no obvious difference in hospitalization cost was found.
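the p < 0.001 comparison of popf rates above (11 of 92 with rs vs. 35 of 92 with ud) can be checked with a plain pearson chi-square statistic on the 2 × 2 table; 10.83 is the 1-df critical value for p = 0.001.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# POPF vs. no POPF: RS 11/92, UD 35/92 (counts from the abstract above)
chi2 = chi2_2x2(11, 81, 35, 57)
print(round(chi2, 2))  # ~16.7, well above 10.83, consistent with p < 0.001
```

the statistic far exceeds the 1-df critical value, consistent with the reported significance of the rs-vs-ud difference.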
conclusions: versus the open approach, ralpj performed for chronic pancreatitis shortens hospitalization, causes less postoperative pain and reduces medication costs; hospitalization costs are equivalent. the higher operative cost for ralpj is mitigated by a shorter hospitalization and less pain control. the robot-assisted puestow procedure using a single port platform is a feasible and safe method. procedures performed entirely with the da vinci robotic system are safe and easily performed in highly selected patients. 2 general surgery, hospital universitario infanta sofia, madrid, spain; 3 general surgery, hospìtal quirón la luz, madrid, spain aims: the concomitant presence of abdominal wall midline hernias and diastasis recti is frequent. diastasis recti might be a risk factor not only for having midline hernias but also for their recurrence. most open surgical procedures do not consider the treatment of both pathologies, nor do the most widespread laparoscopic approaches. the author presents a novel endoscopic, extraperitoneal and retromuscular hernioplasty technique and its preliminary results. methods: a series of 15 patients is presented. a ct abdominal wall study was performed preoperatively. they all presented abdominal wall midline hernias in the presence of a > 3.5 cm concomitant diastasis recti. there were 8 females and 8 males. a totally endoscopic, extraperitoneal and retromuscular repair was performed that included midline anatomic restoration, tension-free hernia gap closure, omphaloplasty and skin treatment, if needed, in every case. the tension-free massive-meshed hernioplasty included a bilateral totally endoscopic posterior component separation when needed. no drainages were used. all procedures included bladder catheterization. results: all patients were discharged within 48 h. no reoperations were needed in the postoperative period. postoperative pain was measured by a visual analogue scale. 85% of the patients needed no pain medication 24-48 h after discharge from hospital.
25% of the patients had a skin suffusion or hematoma. a male patient presented a temporary abdominal asymmetry due to a unilateral posterior component separation added to his technique. mean follow-up was 6 months (range 1-12 months). no recurrence was observed. conclusions: preliminary results demonstrate this new approach to be a safe, feasible and reproducible procedure. the 'terra' novel technique could provide a new minimally invasive approach to abdominal wall midline hernia repair in the presence of diastasis recti. only time and new results can predict the spread of this 'third way'. results: this study comprised 15 males and 12 females. mean age was 62 years (range 33-82 years) and mean body mass index was 30. gh and mh were found intraoperatively in 22 and 5 cases respectively. mean operative time for all hernias (gh/mh) was 122 min (range 70-240 min); 113 min for gh (range 70-155 min); and 169 min for mh (range 105-240 min). in 51.8% of cases, intraoperative hernia measurement was larger than preoperative size, especially in cases of incisional hernias (64.2%). in 29.6% of cases, laparoscopy found additional abdominal wall defects previously undetected by physical examination and by us and/or ct scan. a composite mesh and a non-composite mesh (up to 30 cm in size) were used in 96.3% and 3.7% of cases respectively. the ethicon securestrap absorbable fixation device straps for sm fixation were employed in 77.8% of cases. mean length of hospital stay was 2.8 days. mean follow-up time was 33 months (range 1-109 months). in our study, there was one early (< 30 days) postoperative seroma (3.7%), plus one late, small (2 cm) symptomless recurrence, but neither needed reoperation. conclusion: the sutureless sm technique facilitates intra-abdominal introduction, as well as the handling and fixation of large/very large meshes. this new approach is safe and fast, even in cases of gh/mh repair.
aims: any ventral hernia (vh) combined with rectus muscle separation (rms) must be repaired along with the rms, otherwise there is a high risk of hernia recurrence. open rms repair is vast and traumatic surgery, and laparoscopy is not effective. in 2015 a new era of repairing abdominal wall hernias by assisted endoscopy started with wolfgang reinpold's milos procedure. these procedures are somewhat complex, and real reconstruction of the linea alba (la) was limited, which is done better by ferdinand koeckerling's elar technique. we perfected the elar technique to be fully endoscopic, with a wide mesh fused to the muscles immediately by fibrin glue: extended endoscopic hernia & linea alba reconstruction glue (eehlarglue), achieving a low-trauma mis for vh and rms with excellent surgical and cosmetic results. methods: our eehlarglue is a totally endoscopic technique used since 2017. penetration with an optiview trocar and co2 pressure to the anterior rectus sheath (ars) level is followed by an extensive endoscopic dissection of the subcutaneous fat tissue from the ars. three trocars are inserted at the supra-pubic line enabling dissection up to the xiphoid and costal margins laterally. any hernia sac is dissected, and the content reduced back to the abdominal cavity. relaxing incisions of the ars are performed longitudinally in the lateral aspect. the la is reconstructed by running two layers of non-absorbable sutures from xiphoid to pubis. a light mesh of 30 × 15 cm is applied over the repair and fused immediately to the muscles by fibrin glue. results: 25 patients underwent eehlarglue with a follow-up of 24 months. all had significant rms of 5-10 × 14-26 cm combined with primary or recurrent vh. recovery was smooth, with 1-3 days of simple analgesics and return to regular activity within 4-10 days. no one had recurrent vh, but two males had limited residual rms and two early cases had seroma formation.
conclusions: our eehlarglue enables endoscopic vh repair and la reconstruction with extra strength provided by immediate mesh fusion to the muscles with fibrin glue, achieving a low-trauma mis, easy recovery and very effective results: a perfect solution for patients with vh combined with rms. results: twelve blinded prospective rcts were used. when compared to tep repair, tapp repair has comparable seroma formation rates (χ² = 7.94; p = 0.02; ci -4.31 to 0.55; i² = 75%) and post-op pain at 24 h (χ² = 30.28; p = 0.00001; ci -0.31 to 0.06; i² = 87%). however, tep repair is associated with a significantly shorter operative time (χ² = 502.95; p = 0.00001; ci 0.24 to 0.48; i² = 98%), less post-op pain at 1 h (χ² = 11.26; p = 0.004; ci 0.05 to 0.30; i² = 82%) and a shorter hospital stay (χ² = 455.14; p = 0.00001; ci 0.72 to 1.07; i² = 99%). conclusion: tep is significantly better than tapp repair with regard to operative time, post-op pain at 1 h and hospital stay. there is no significant difference with regard to post-op pain at 24 h and seroma formation. background: primary hyperhidrosis (ph) is a neurological condition characterized by excessive sweating, most often of the face, palms or axillae. palmar hyperhidrosis is treated through sympathetic chain clipping or transection. we aimed to compare the efficacy and results obtained with both techniques. patients and methods: sixty-four patients underwent 128 sympathetic procedures from march 2013 to february 2017. the patients were categorized into two groups: right-sided transection sympathectomy and left-sided clipping. patients were evaluated to compare the rates of success, satisfaction, compensatory sweating and recurrence with either transection or clipping of the t3 and t4 ganglia. mean follow-up was 15 ± 7 months. results: sixty-four patients, 24 males and 40 females, underwent electro-coagulation sympathectomy on the right side and clipping on the left side.
mean age was 15 years (range 13 to 18 years). all patients had balanced demographic data. there was no statistical difference between the two groups in rate of success. compensatory sweating was observed in 28 patients (43.75%) overall, with 4 cases of severe, unsatisfactory compensatory sweating. recurrence was reported in one case with transection and 2 cases with clipping. satisfaction was reported in 63 cases in the transection group and 61 cases in the clipping group. pneumothorax occurred in 2 cases in the transection group compared to one case with clipping. no gustatory sweating or over-dryness was reported in either group. conclusion: both thoracoscopic sympathetic transection and clipping of the t3-t4 ganglia are safe and effective procedures in palmar hyperhidrosis treatment, with no differences regarding recurrence rate, satisfaction and incidence of compensatory sweating. keywords: thoracoscopic sympathectomy, palmar hyperhidrosis, clipping, compensatory hyperhidrosis. introduction: primary ventral hernias and ventral incisional hernias have posed a challenge for surgeons throughout the ages. even though minimally invasive surgery and hernia repair have evolved rapidly, there is no standardized method that adequately decreases postoperative complications. hybrid hernia repair is a surgical repair which has not been adopted widely. it combines both a laparoscopic and an open component, allowing sac excision and primary defect repair as well as laparoscopic mesh insertion. aims: to evaluate the short-term and long-term outcomes of hernia recurrence for patients undergoing hybrid ventral repair (hvr) for the treatment of primary and incisional ventral hernias. methods: between october 2012 and june 2013, hybrid vhr was performed in 24 patients at st mary's hospital, imperial college london.
the medical records of these patients were reviewed retrospectively for demographics, comorbidities, prior surgeries, body mass index (bmi), hernial defects, hybrid technique used, mesh selection, operative time, complications and recurrences over a 5-year follow-up. results: twenty-four patients who underwent hybrid vhr were included, with surgery performed by two surgeons. the mean age was 48 years with a mean bmi of 33.1 kg/m². 88% had incisional hernias and 12% had primary hernias. the number of hernia defects ranged from 1 to 4, and the average mesh size used was 15 × 17 cm. extensive adhesiolysis was performed in 58% of patients. 30-day postoperative complications: 2 patients developed postoperative seroma, 1 paralytic ileus, 1 pain control problems and 1 urinary retention. there were no conversions to open procedures. the mean length of hospital stay was 2 days. none of the patients developed chronic pain, and there was only one recurrence over the 5-year follow-up period. conclusions: the hybrid technique for vhr is safe and feasible, and has important benefits over an open or purely laparoscopic approach, including low rates of seroma formation, chronic pain and five-year hernia recurrence. future investigation may include randomized controlled trials to fully evaluate the benefits of hybrid vhr, with careful assessment of patient-centred end-points including quality of life and postoperative pain. surgery, medical faculty-university of tetove, tetove, macedonia; 2 general medicine, medical faculty-university of tetove, tetove, macedonia; 3 anesthesiology, medical faculty-university of tetove, tetove, macedonia; 4 surgery, clinical hospital-tetove, tetove, macedonia laparoscopic cholecystectomy is a widely used operative technique characterized by shorter postoperative hospitalization and fewer side effects. the duration of hospitalization after laparoscopic surgery depends on several factors, of which pain and physical weakness are the most important.
dexamethasone is well known not only for its anti-inflammatory effects but also for its analgesic and antiemetic effects, although the mechanisms of these effects are not yet clarified. objectives: the aim of our study is the evaluation of the analgesic effect of dexamethasone in reducing postoperative pain after laparoscopic surgery. patients and methods: in this study, 200 patients aged 25-74 years undergoing laparoscopic surgery were classified into two groups of 100 patients each. the first group was treated with an intravenous injection of 8 mg dexamethasone preoperatively and another dose the day after the operation. the second group received an intravenous injection of normal saline. we evaluated the dose of analgesic and antiemetic drugs consumed during the first 24 h in both groups. results: the total dose of tramadol in the postoperative period in the dexamethasone group was smaller than in the normal saline group. postoperative pain was assessed using a paper-based vas scale. our results show that the intensity of postoperative pain during the first 36 h after surgery was lower in the group of patients treated with dexamethasone than in the group treated with normal saline. nausea and vomiting during the first 36 h were significantly lower in the dexamethasone group than in the normal saline group. 2 surgery, hospital quiron sagrado corazon, sevilla, spain; 3 surgery, hospital virgen macarena, sevilla, spain; 4 surgery, hospital virgen del rocio sevilla, sevilla, spain aims: closing the defect (cd) during laparoscopic ventral hernia repair (lvhr) could be related to a reduction of seroma formation or bulging (hernia mesh) compared to conventional lvhr.
however, some authors suggest that tension on the midline may contribute to a higher incidence of pain and recurrence in medium-size defects, and propose performing a component separation (cs) to restore the midline in medium-large defects. we have developed a new technique for restoring the midline in medium ventral hernias (lira technique) and analyzed our results in terms of pain and recurrence compared to our conventional cd series (ccd). methods: we conducted a prospective controlled study of lvhr with ccd from january 2014 to december 2016 and a prospective controlled study of the lira technique from january 2015 to january 2017. we analyzed and compared both techniques in medium-size defects (4-8 cm) in terms of postoperative pain (1 and 7 days, 1 and 3 months, and 1 year) using a visual analogue scale (vas), bulging (return to the prior distance between rectus muscles with the mesh in the sac on ct, not needing surgical treatment) and recurrence (by physical examination and tomography). results: ccd was performed in 42 patients (mean age 58.10 ± 13.15 years, mean bmi 33.11 ± 6.61 kg/m²) and the lira technique in 12 patients (mean age 56.5 ± 10.5 years, mean bmi 30.12 ± 5.30 kg/m²). mean follow-up in both series was 1 year. mean vas in ccd was 5.35 ± 2.49 (1 day), 2.01 ± 2.13 (7 days), 0.62 ± 1.45 (1 month), 0.10 ± 0.43 (3 months) and 0 at 1 year. in the lira series vas was 3.9 (24 h), 1.08 ± 1.78 (7 days), 0.08 ± 0.28 (1 month), 0 (3 months) and 0 (1 year). there were 6 cases of bulging in the ccd series and 1 recurrence. bulging and recurrence were absent in the lira series. conclusions: the lira technique might be a safe procedure in medium-size defects for restoring the midline in lvhr, and could be related to a lower pain rate compared to ccd, with no recurrence or bulging.
surg endosc (2019) 33:s485-s781 background: the desire of pediatric surgeons to reduce incision-related morbidity and pain while achieving good cosmetic results has recently led to the introduction of single-incision pediatric endo-surgery (sipes) and needlescopic surgery. intracorporeal suturing and knot tying during sipes remain challenging. the aim of this study is to introduce a novel and simple technique for intracorporeal suturing of the pediatric inguinal hernia after needlescopic disconnection of the hernia sac using just needles rather than laparoscopic instruments; it imitates the principles of a sewing machine. methods: the first author discussed the idea of the technique with the co-authors and a demonstration was done on a silicone pad before applying the technique to children with congenital inguinal hernia (cih) for peritoneal closure after needlescopic disconnection of the hernia sac. the main outcome measurements were: feasibility of the technique, knot quality, suture placement accuracy, performance and suturing time, and recurrence rate. results: the sutures were snugly applied to the ridges of the silicone pad with good approximation, and the knot was firmly tightened in all experiments. after applying and mastering the technique on a silicone pad, we shifted to using it in 373 children with 491 hernia defects. all operations were completed by the needlescopic technique without the need for insertion of any laparoscopic instruments. the time required for suturing of the peritoneum around the internal inguinal ring (iir) and knot tying decreased considerably from 5 min 27 s in the first operation to less than 3 min after the fifth operation, and stabilized at approximately 2 min 30 s. there were no major intraoperative complications and no recurrences. the primary end-point was to compare clinical outcome as well as cost-effectiveness between both groups. results: a total of 148 patients were enrolled (70 of them underwent tapp and 78 olr).
drop-out occurred in 5 cases (2 in the tapp and 3 in the olr group). patient characteristics were statistically similar between the 2 groups. the tapp procedure had less early postoperative pain (p = 0.037), a shorter length of stay (p = 0.031) and fewer postoperative complications (p = 0.002) when compared with the olr approach. a slightly higher recurrence rate in the tapp group was found. additionally, there is a trend towards higher postoperative quality of life and less chronic pain in the tapp group. conclusions: the tapp procedure for bilateral inguinal hernia effectively reduces early postoperative pain, hospital stay and postoperative complications. cannizzaro hospital, catania, italy aim: the purpose of this study was to evaluate the long-term results, in terms of safety and efficacy, of a new technique to repair incisional ventral hernias with a self-gripping mesh, after a mean follow-up period of 15 months. methods: a retrospective, single-centre study was performed from june 2016 to june 2018. all patients undergoing elective incisional ventral hernia repair were included. hernias were diagnosed based on clinical examination at the outpatient clinic. in cases of doubtful diagnosis, ct scan was used to confirm the diagnosis. the component separation technique and, when needed, tar were performed. the self-gripping mesh was placed in sublay position (overlap 5 cm) with the self-gripping surface face down. in all cases drainage tubes were placed in retromuscular and supra-aponeurotic positions. the following characteristics were collected: age, sex, body mass index (bmi), smoking, comorbidities, number of previous surgical operations, defect size (ehs classification), mesh size, postoperative complications and duration of follow-up. all patients were interviewed by telephone every six months. when patients complained of recurrence or other symptoms, visits were organized, and when recurrence was in doubt a ct scan was performed.
results: a total of 40 patients were included in this study, 21 of them males; mean age was 59 years. 83% of patients had bmi > 25; smokers and diabetics were 28% and 9% respectively. the mean defect size was 115 cm². the component separation technique was associated with tar in 6 patients. in 11 cases the mesh size was 20 × 15 cm, in 7 patients 30 × 15 cm and in 11 cases 15 × 15 cm. in the other patients the mesh sizes were tailored to defect dimensions. subcutaneous seromas occurred in 7 patients; they were treated conservatively in 5 cases and with percutaneous puncture in 2 cases. long-term follow-up demonstrated recurrence in one case, while in another a ct scan revealed bulging. there were no cases of mesh infection, pain or mesh sensation. conclusions: this study, with a mean follow-up period of 15 months, demonstrated that the use of self-gripping mesh in sublay position is safe and effective to treat incisional ventral hernias. aim: morgagni hernias present technical challenges. the laparoscopic approach was first described in 1992; however, as these hernias are uncommon in adult life, little data exist on the optimal method of surgical management. the purpose of this study was to analyse a method for laparoscopic repair of giant morgagni hernias using laparoscopic primary closure. methods: this case series describes a method of laparoscopic morgagni hernia repair using primary closure. in all patients a laparoscopic transabdominal approach was used. the content of the hernia was reduced into the abdomen and the diaphragmatic defect was closed with a running laparoscopic suture using a self-fixating suture. clips were placed at the edges of the suture to secure the pledgeted sutures to both the anterior and posterior fascia. demographic data such as age, gender and bmi were collected.
operative data (operative time, rate of conversion, blood loss) and postoperative data (short- and long-term complications, length of hospital stay, need for readmission and reoperation) were recorded. results: retrospectively collected data on 9 patients were analysed. there were 1 (11.1%) male and 8 (88.8%) females. the median bmi was 29.14 ± 5.2 kg/m². median operative time was 80 ± 25 min. there were no intraoperative complications nor conversions to open surgery. patients began a fluid diet on the first postoperative day and were discharged after a median hospital stay of 3 ± 1.87 days. over a median follow-up of 36 months we did not observe any recurrences. conclusions: the transabdominal laparoscopic approach with primary closure of the diaphragmatic defect is a viable approach for repair of morgagni hernia. in our experience, the use of laparoscopic transabdominal sutures fixed to the fascia allowed the closure of the defect laparoscopically with minimal tension on the repairs. can we predict the success of the laparoscopic approach in adhesive small bowel obstruction? c. tellez marques, e. sebastian valverde, e. membrilla fernandez, l. grande posa, i. poves prim general surgery, parc de salut mar-hospital del mar, barcelona, spain aims: the laparoscopic approach to acute adhesive small bowel obstruction and internal hernias (asbo) has been shown to be superior to laparotomy in terms of morbidity and hospital stay, especially in patients who present simple adhesions or internal hernias. accordingly, the aim of the study is to determine the preoperative factors associated with simple adhesions and internal hernias, and consequently improve the success of the laparoscopic approach in asbo. methods: a retrospective study of patients who underwent urgent surgery for asbo was conducted from january 2007 to may 2016. we compared preoperative variables between single adhesions and internal hernias vs complex adhesions.
a p value < 0.05 was considered statistically significant. results: we analysed 262 patients who underwent surgery for asbo, 78 (30%) by laparoscopy and 184 (70%) by laparotomy. the conversion rate in laparoscopy was 38.5%. 49.2% of patients presented a single adhesion or internal hernia, and 50.8% were considered complex adhesions. sex and age did not correlate with the type of adhesions. previous surgery (p < 0.001), number of previous surgeries (p < 0.001), asa (p < 0.001) and previous abdominal wall mesh (p = 0.002) were significantly associated with complex adhesions. laparoscopy as the only surgical history was significantly associated with simple adhesions (p = 0.033). appendectomy (p = 0.139) or supramesocolic (p = 0.076) previous surgeries tended to present single adhesions but did not reach statistical significance. the need for intestinal resection was not related to the type of adhesions (p = 0.743). there was a significant correlation between the findings on ct (computed tomography) and the type of adhesion found (p = 0.001). signs of ischaemia on ct were related to the need for intestinal resection (p < 0.001). in the multivariate analysis, the number of previous surgeries, asa and ct scan findings were identified as independent factors related to the type of adhesion. conclusions: according to our study, a lower number of previous surgeries, asa i-ii and internal hernia on the ct scan are associated with single adhesions and internal hernias. patient selection is a key factor for the success of the laparoscopic approach in asbo. aims: the aims of this study were: (i) to compare england with the united states in the utilisation of minimal access surgery (mas) and in-hospital mortality from four common abdominal surgical emergencies (appendicitis, incarcerated or strangulated abdominal hernia, small or large bowel perforation and peptic ulcer perforation).
(ii) within england, to evaluate the influence of mas upon in-hospital and long-term mortality. methods: between 2006 and 2012, the rates of mas and in-hospital mortality for four abdominal surgical emergencies were compared between the united states and england. univariate and multivariate analyses were performed to adjust for underlying differences in baseline patient demographics. results: 132,364 admissions in england for four abdominal surgical emergencies were compared to an estimated 1,811,136 admissions in the united states. after adjustment for patient demographics, mas was used less commonly in england for three conditions: appendicitis (odds ratio (or) 0.30, 95% ci 0.30-0.31), abdominal hernia (or 0.18) and small or large bowel perforation (or 0.48). on multivariate analysis, in-hospital mortality was increased in england compared to the united states for three conditions: abdominal hernia (or 1.91, 95% ci 1.81-2.01), small or large bowel perforation (or 2.33) and peptic ulcer perforation (or 2.02, 95% ci 1.91-2.14). in england, after adjustment for patient demographics, open surgery was associated with increased in-hospital mortality for three conditions: abdominal hernia (or 1.80, 95% ci 1.26-2.71), small or large bowel perforation (or 1.59, 95% ci 1.37-1.87) and peptic ulcer perforation (or 2.31). similarly, open surgery was associated with increased long-term mortality for three conditions: abdominal hernia (hr 1.32, 95% ci 1.15-1.52), small or large bowel perforation (hr 1.30, 95% ci 1.18-1.43) and peptic ulcer perforation (hr 1.69, 95% ci 1.50-1.89). conclusions: minimal access surgery was used less commonly, and in-hospital mortality was increased, in england compared to the united states for common abdominal surgical conditions.
given the benefits of mas shown in this large study, strategies to enhance the adoption of mas in emergency conditions in england need to be optimised, including appropriate patient selection and improved surgeon mas training and experience. background: in the treatment of inguinal hernias, there is little hard evidence concerning economic reimbursement in the diagnosis-related-group (drg) era. the factors that determine whether a hospital earns or loses financially depending on an open or laparoscopic approach are still underexplored. the aim of this study is to provide a reliable analysis of in-hospital costs and reimbursements in inguinal hernia surgery. methods: this retrospective study analysed the 1-year experience in inguinal hernia repair in patients undergoing open lichtenstein (ol), laparoscopic totally extraperitoneal unilateral (utep) or bilateral (btep) hernia repair. demographics, results, costs and drg-based reimbursements were recorded and analysed. results: during the study period, 39 patients underwent ol, 82 patients utep and 16 patients btep. the average total cost amounted to 4126 eur in the ol, 5134 eur in the utep and 7082 eur in the btep group (p < 0.001). the hospital reimbursement amounted to 5486 eur, 5252 eur and 6555 eur in the ol, utep and btep groups respectively (p < 0.001). finally, the mean hospital earnings were 1360 eur, 118 eur and -527 eur per patient in ol, utep and btep respectively (p < 0.001). conclusions: in-hospital costs were higher in utep and btep as compared to ol. the drg-based reimbursement provided adequate compensation for patients with unilateral inguinal hernia, whereas hospital earnings were profitable in the ol group only, and there was an overall financial loss in the btep group. surgeons should be conscious that the clinical advantages of the laparoscopic approach are not adequately compensated for from an economic point of view.
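the per-patient earnings reported above follow directly from subtracting the average total cost from the drg reimbursement in each group. a minimal sketch of that arithmetic (group labels and eur figures taken from the abstract; variable names are illustrative, not from the study):

```python
# per-patient hospital earnings = drg reimbursement - average total cost
# figures in eur, as reported for the ol, utep and btep groups
groups = {
    "ol":   {"cost": 4126, "reimbursement": 5486},
    "utep": {"cost": 5134, "reimbursement": 5252},
    "btep": {"cost": 7082, "reimbursement": 6555},
}

earnings = {name: g["reimbursement"] - g["cost"] for name, g in groups.items()}
print(earnings)  # {'ol': 1360, 'utep': 118, 'btep': -527}
```

only btep yields a negative balance, which is the basis for the abstract's conclusion that laparoscopic bilateral repair leads to an overall financial loss under this drg scheme.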
aims: umbilical hernias are common anatomical defects in swine, which makes the pig a suitable model for surgical training and research in the field of surgical meshes. the aim of this study was to develop a surgical protocol for successful laparoscopic implantation of stem cell-coated surgical meshes. methods: 9 large white pigs, weighing 25-68 kg and with congenital abdominal hernias, were anesthetized for the surgical procedures. non-absorbable polypropylene surgical meshes were coated with fibrin glue (fg) (control group) or with fg admixed with porcine bone marrow-derived mesenchymal stem cells (fg/bm-mscs). approximation of the hernia's borders was performed by intracorporeal suture. the meshes were carefully rolled inside the trocar for laparoscopic implantation, and were implanted laparoscopically using helicoidal staples. laparoscopic inspections and biopsies of the tissue surrounding the mesh were performed at 7 and 30 days post-implantation. at day 30, the animals were euthanized and macroscopically evaluated. ultrasonography was used at days 0, 7 and 30 to evaluate the size of the hernia. the biopsies were then processed for histological analysis. results: ultrasonography demonstrated that the mean size of the umbilical hernias before mesh implantation was 2.49 ± 0.99 cm. a decrease in mean hernia size was observed at days 7 and 30. the laparoscopic procedures allowed successful mesh implantation in all animals. in most cases, the implantation site did not show excessive inflammation or tissue adhesions, but one animal showed hernia persistence. one animal had peritoneal and implant-site infection. foreign body reaction was noted in the histological analysis, although no significant difference was found between the control and bm-msc groups.
conclusions: the anatomical similarities between humans and pigs in umbilical hernias make this animal model useful to: i) improve minimally invasive surgical procedures for hernia treatment; ii) evaluate new surgical meshes; and iii) introduce stem cell therapy to hernia surgical repair. the laparoscopic approach is efficient and safe for the implantation of stem cell-coated meshes. gene and protein expression analyses are required to evaluate the molecular changes between the conventional and the stem cell surgical approach. aims: fluorescence angiography with indocyanine green (icg) is used as a marker in the assessment of tissue perfusion, most frequently in colorectal procedures. this technology has been shown to be a good technique to reduce complications related to the vascular supply of the anastomosis. in esophagogastric procedures, blood supply to the gastric pouch, jejunum and esophagus can be evaluated by icg fluorescence imaging. it can also be used in bariatric surgery to evaluate the anastomoses during gastric bypass, and blood supply to the gastroesophageal junction and the angle of his during sleeve gastrectomy. methods: we collected data on 8 gastric resections for adenocarcinoma and 53 bariatric procedures performed by the same surgeon, using icg fluorescence to evaluate blood supply. the icg was infused before performing the anastomosis in order to evaluate the need to change the transection line (tl). we analyzed the cases in which the tl was changed and the number of leaks among them. results: all 61 cases were performed by laparoscopic approach: 5 subtotal gastrectomies (sg), 3 total gastrectomies (tg), 26 gastric sleeves (gs) and 27 gastric bypasses. there were no changes to the tl before performing the anastomosis in any of the four types of procedures (sg, tg, gs, gb). in the analyzed data there was 1 anastomotic leak in one sg procedure (1.6%).
conclusions: icg fluorescence angiography could be helpful in assessing blood supply during gastrointestinal anastomosis, although we did not find an influence on the results of bariatric and gastric procedures. we do not yet have sufficient evidence to determine the value of this technology in these entities; more volume and data are needed to improve the significance of the results. aims: hyperspectral imaging (hsi) combines a spectrometer with a camera to analyze the optical properties of tissues over a broad wavelength range, without the need for a contrast agent. it provides extensive real-time information about tissue physiology, including oxygen saturation (sto2). fluorescence-based enhanced reality (fler) is a software solution providing a dynamic, quantitative analysis of the signal evolution of a systemically administered fluorophore during fluorescence angiography (fa). the aim of this study was to compare the performance of hsi and fler in assessing bowel perfusion in a porcine, non-survival model of bowel ischemia. methods: in 6 pigs, an ischemic small bowel segment was created and imaged after 1 h of ischemia. the imaging modalities were applied sequentially to the same area. hsi was performed first, to acquire the sto2 spectra, by means of the tivita system (diaspective vision, pepelow, germany), which provides a spectral range of 500-1000 nm and a 5 nm resolution. subsequently, fa was performed using a nir-capable laparoscopic camera (d-light p, karl storz, germany) after intravenous injection of 0.2 mg/kg of indocyanine green (icg; infracyanine, serb, paris, france). the fluorescence flow was recorded for 40 s, then the slope of the fluorescence signal was analyzed using proprietary software to obtain a virtual perfusion cartography. the virtual cartography was overlaid onto real-time images to obtain the enhanced reality effect.
Ten adjacent regions of interest (ROIs) were selected from HSI datasets and superimposed onto FLER-generated cartographies using a custom plug-in software function, allowing for a quantitative comparison of both imaging modalities. HSI was repeated after ICG injection. Results: The R² correlation coefficient between HSI-StO2 and the FLER slope was 0.79. At control HSI after ICG injection, the correlation coefficient dropped significantly (R² = 0.45). The interference of ICG with HSI imaging was clearly identified in the spectral curves. Conclusion: StO2 given by HSI provided results comparable to those obtained with FLER in our bowel ischemia model, without the need to inject a contrast agent. ICG interferes with HSI datasets, disrupting StO2 values. Surgical treatment is one of the most effective options for giant hiatal hernia. The laparoscopic approach has become the gold standard over time, demonstrating all the advantages of minimally invasive techniques over open procedures. However, the utility of robotic operations remains controversial. Aim of the study: To evaluate our initial experience with robotic fundoplication compared with laparoscopic procedures. Materials and methods: Between January and December 2017, thirty operations were performed. Mean age was 57.2 years (44-76); 12 (65%) patients were female and 6 (35%) were male. Mean BMI was 29.4 (24.1-41.0). Laparoscopic procedures were performed in 8 patients (first group); robotic procedures with the da Vinci system were performed in 10 patients (second group). Modified Nissen fundoplication was performed in 14 patients, and Toupet fundoplication was used in 4 patients. Results: The median operative time was 150 min in the laparoscopic group and 131.2 min in the robotic group; there was no statistical difference between the two groups (p = 0.93). Blood loss was minimal in both groups. Mean postoperative hospital stay was 4.08 days (2-7 days) in the first group and 3.6 days (2-6 days) in the second.
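The HSI/FLER comparison above reduces to a Pearson R² between paired ROI measurements. A minimal sketch of that calculation, using illustrative ROI values (not the study's data):

```python
# Hedged sketch (not the authors' code): R-squared between paired ROI
# measurements, as used to compare HSI-StO2 against the FLER slope.

def r_squared(xs, ys):
    """Square of the Pearson correlation coefficient for paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

# Ten adjacent ROIs: StO2 (%) from HSI vs. normalized FLER slope.
# These values are hypothetical placeholders, not study data.
sto2 = [12, 18, 25, 33, 41, 52, 60, 71, 80, 88]
fler = [0.10, 0.14, 0.22, 0.35, 0.38, 0.55, 0.58, 0.74, 0.77, 0.90]
print(round(r_squared(sto2, fler), 2))
```

Repeating the same calculation on post-ICG HSI data would show the reported drop in R² caused by ICG interference with the spectral curves.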
There was no statistical difference between the two groups (p = 0.19). The postoperative course was uneventful in all patients of both groups. The surgical stress response is associated with systemic inflammatory response syndrome, sepsis and multiorgan dysfunction syndrome. Robot-assisted surgery has been introduced to overcome the limitations of conventional laparoscopy. This technique has potential advantages over laparoscopy, such as increased dexterity, a three-dimensional view, and a magnified view of the operative field. These advantages could result in less intra-abdominal trauma and hence in an attenuated surgical stress response compared with conventional laparoscopy. Aims: This study aimed to synthesize data on the effect of robot-assisted surgery on the surgical stress response. Methods: Electronic databases were searched with the terms 'surgical stress', 'stress response', 'oxidative stress', 'robotic assisted surgery', 'C-reactive protein', 'interleukin 6', 'interleukin 10', 'cortisol', 'oxidative stress markers', 'antioxidants', 'antioxidant status', 'MDA', 'glutathione' and 'acute phase response', up to and including March 2018. Results: One hundred forty studies were identified and their titles and abstracts reviewed. One randomized controlled trial, six non-randomized comparative studies, one experimental study and one case report met the inclusion criteria. The data were discordant. One prospective trial concluded that cortisol and IL-6 were lower in laparoscopy-assisted distal gastrectomy than in robot-assisted distal gastrectomy. In another study, comparing robot-assisted laparoscopic radical prostatectomy with open radical prostatectomy on the basis of plasma measurements of IL-6, IL-1a and C-reactive protein, it was demonstrated that robot-assisted laparoscopic radical prostatectomy induces less tissue trauma than open radical prostatectomy.
Another study reported reduced expression of genes associated with the surgical stress response in patients treated with robot-assisted radical prostatectomy compared with patients treated with open prostatectomy. The case report concerned a case of polymyalgia rheumatica after robot-assisted laparoscopic prostatectomy. The experimental trial demonstrated that cortisol and substance P were significantly higher with an open thoracic approach than with robot-assisted thoracoscopic oesophageal surgery. Conclusion: Further research, based on a well-standardized protocol for the measurement of the surgical stress response, is needed to elucidate the effect of robotic surgery on surgical stress. Purpose: Tissue compression is essential to prepare tissue for proper staple formation. This study evaluated the risk factors for compression injury along the circular stapling line in vitro. Methods: To reproduce an artificial bowel wall, collagen plates were prepared by mixing porcine-derived collagen with glycerin. Artificial collagen plates 4 mm and 6 mm in thickness were made to represent a dry, healthy condition; plates immersed in tap water for 10 min represented a wet, edematous condition. A circular stapler (CDH25A, Ethicon, USA) was applied to the collagen plates (dry and wet conditions) at optimal compression. The compression line was evaluated with a compression injury score. Risk factors for excessive compression and unacceptable injury were analyzed. Results: In the dry condition, optimal compression did not cause unacceptable injury. In the wet condition, excessive compression occurred in 47.1% of cases despite optimal approximation. Unacceptable injury differed significantly between proper and excessive compression cases, at 18.8% and 5.6%, respectively. On univariate analysis, thickness (6 mm), wet condition, proximal side, maximal compression and excessive compression were associated with unacceptable injury.
On multivariate analysis using a logistic regression model, excessive compression was a significant independent factor for tissue injury (p < 0.001), and this significance was also demonstrated in the optimal compression group (p = 0.021). Background: Minimally invasive appendectomy has gained much popularity due to better cosmesis, early recovery and fewer wound site infections. Single-incision laparoscopic appendectomy (SILA) has several disadvantages, such as long operative time, poor ergonomics, surgical site infections, a high conversion rate and port-site hernia. Needlescopic appendectomy (NA) using the Mediflex® fascial closure needle is expected to be superior to SILA. Herein we compare our results of needlescopic appendectomy with the single-incision approach. Material and methods: One hundred and twenty children with acute non-complicated appendicitis were randomly assigned to NA and SILA (60 children per group) during the period between January 2015 and October 2018. The main outcome measurements included demographics, operative time, intraoperative complications, conversion rate, postoperative hospital stay, surgical site infection, port-site hernia and cosmetic results. Results: A total of 120 children underwent appendectomy: 60 underwent NA and 60 underwent SILA. There were no differences in age (11.5 vs. 11.98 years, p = 0.35), weight (42.98 vs. 43.46 kg, p = 0.76) or hospital stay (1.51 vs. 1.55 days, p = 0.92) between the two groups. There were no intraoperative complications with either surgical approach. Operative time in the NA group was significantly shorter than in the single-incision group (20.7 vs. 38.2 min, p = 0.0001). There was no conversion in the NA group, whereas 18 cases required conversion in the SILA group. Seven SILA cases showed surgical site infection, and 2 cases in the SILA group presented with port-site hernia. The NA group was superior with regard to ergonomics. The two groups showed equally excellent cosmetic results.
Conclusion: Needlescopic appendectomy and SILA are comparable with regard to cosmetic results and hospital stay. NA proved to be safe, applicable and reproducible, and superior to SILA with regard to ergonomics, shorter operative time, and absence of surgical site infection and port-site hernia. Aims: To objectively analyze surgical performance and the surgeon's ergonomics in the use of a novel flexible laparoscopic instrument during intracorporeal suturing, and to compare it with a conventional laparoscopic needle holder. Methods: Three experienced laparoscopic surgeons performed five laparoscopic sutures on organic tissue using the novel flexible instrument (FlexDex®) and five sutures using a conventional needle holder with an axial handle. The new device is based on a mechanical design with no electrical components, which transfers the surgeon's hand, wrist and arm movements to the instrument tip in an intuitive manner. The order of instrument use was randomized. Prior to the study, participants completed a 15-minute training session with the new flexible instrument. Execution time and suture quality were assessed for each repetition. In addition, flexion and radioulnar deviation of the wrist were recorded using an electrogoniometer (Biopac Systems, Inc.) attached to the surgeon's hand and forearm. The intensity of forearm muscle activation was also analyzed by means of a Myo armband (Thalmic Labs). Results: Surgeons required more time to perform the intracorporeal suture with the novel laparoscopic instrument (87.8 ± 23.333 s vs. 56.467 ± 8.733 s; p < 0.001), but suture quality was similar with both instruments. Wrist flexion (9.976 ± 7.513° vs. 15.440 ± 4.049°; p < 0.01) and wrist ulnar deviation (21.565 ± 5.19° vs. 27.401 ± 3.19°; p < 0.01) were significantly lower when using the flexible instrument.
During the suturing tasks, the use of the FlexDex® instrument led to higher muscular activation of the flexor (32.614 ± 3.437 vs. 25.23 ± 3.076 RMS; p < 0.001) and extensor (23.341 ± 1.869 vs. 20.017 ± 1.307 RMS; p < 0.001) muscle groups of the forearm. Conclusions: The presented novel instrument allows surgeons to perform robotic-like laparoscopic suturing. We believe that, with a longer training period, surgeons could potentially reduce surgical times with this device. The preliminary results of this study suggest that the new instrument provides suture quality similar to that obtained with a conventional laparoscopic needle holder, together with an ergonomically more adequate wrist posture. Aims: Intraoperative real-time evaluation of tissue perfusion is one key element of successful visceral surgery. Traditionally, tissue evaluation is performed visually by surgeons. Newer devices for objective quantification have mostly been based on the application of the fluorescent dye indocyanine green (ICG). A novel method derived from geographic research is hyperspectral imaging (HSI). The aim of this study was to evaluate HSI as a promising method for the evaluation of tissue perfusion, and to implement it in the evaluation of the gastric conduit during esophagectomy in a porcine model. Methods: The HSI camera records a three-dimensional data cube from a two-dimensional surgical situs, covering wavelengths between 500 and 1000 nm. Absorption at different wavelengths is tissue-specific and is influenced by the amount of oxygenated haemoglobin and other pigments. Software calculates 4 different indices in real time, including oxygen saturation. A porcine model (n = 24) is used for esophagectomy with gastric conduit formation. Ischemia is induced artificially by magnets simulating staplers.
Different shapes of the gastric conduit and of anastomosis formation are evaluated with perfusion metrics in order to derive recommendations for the optimal formation of the esophagogastrostomy. Conclusion: HSI is a promising method for intraoperative evaluation of tissue perfusion that does not require the application or injection of any agent. The preliminary results of this study showed that the gastric conduit receives its main blood supply from the gastroepiploic arteries and not via the mucosa. Further results from the current evaluations will enable formation of an optimized gastric tube and esophagogastrostomy in esophagectomy. Surg Endosc (2019). 1 Pediatric Surgery, Al Azhar University, Giza, Egypt; 2 Pediatric Surgery, Beni Suef University, Beni Suef, Egypt. Background: Varicocele is one of the most common causes of infertility. Many surgical interventions are used for varicocele ligation, including open and conventional laparoscopic multiport or single-incision techniques. The aim of this study is to present a new needlescopic lymphatic-sparing varicocele ligation using the Mediflex® fascial closure needle and a 14-gauge vascular access cannula. Material and methods: Twenty-two male children with bilateral varicocele of grade II-III were included. All children were assessed by clinical examination, Doppler ultrasonography, abdominal ultrasonography and routine laboratory investigations. Testicular lymphatics were delineated by subcutaneous injection of 0.5 cm³ of methylene blue into the anterior wall of the scrotum 20 min prior to surgery. The testicular vessels (both vein and artery) were ligated one cm above the deep inguinal ring using two Mediflex needles, with preservation of the lymphatics. The main outcome measurements included operative time, hospitalization, testicular atrophy, hydrocele formation, recurrence of varicocele, and intra- or postoperative complications. Results: A total of twenty-two male children with grade II-III varicocele underwent the needlescopic lymphatic-sparing technique.
Twenty-one were bilateral. Background and aims: Even though the clinical outcomes of robotic rectal resection are under investigation, the related robotic costs have not yet been well addressed, and the differences in cost between robotic rectal resection and the laparoscopic approach are still not well known. We have therefore performed a prospective comparative study of robotic rectal resections (RRR) and laparoscopic rectal resections (LRR) performed at our centre, with the aim of evaluating the cost-effectiveness of robotic versus laparoscopic surgery. Study design: This is an observational, comparative, prospective, non-randomized study of patients who underwent laparoscopic or robotic rectal resection and reached a minimum of 6 months of follow-up, from February 2014 to March 2018, at the Sanchinarro University Hospital, Madrid. An independent company performed the financial analysis, and fixed costs were excluded. Outcome parameters included surgical and postoperative costs, quality-adjusted life years (QALY), and the incremental cost per QALY gained, or incremental cost-effectiveness ratio (ICER). The primary endpoint was to compare clinical outcomes as well as cost-effectiveness between the two groups. Results: A total of 86 RRR and 112 LRR were included. The mean operative time was significantly lower in the LRR approach (336 versus 283 min; p = 0.001). The main preoperative data, overall morbidity, hospital stay and oncological outcomes were similar in both groups, except for the readmission rate (RRR: 5.8%, LRR: 11.6%; p = 0.001). The mean operative costs were higher for RRR (4285.16 versus 3506.11 €; p = 0.04); however, the mean overall costs were similar (7279.31 € for RRR and 6879.80 € for LRR; p = 0.44). Mean QALYs at 1 year for the RRR group (0.5624) were higher than for LRR (0.5066) (p = 0.018).
At willingness-to-pay thresholds of 20,000 € and 30,000 €, there was a 61.18% and 64.09% probability, respectively, that the RRR group was cost-effective relative to the LRR approach. Conclusion: This study provides data on the cost-effectiveness differences between the RRR and LRR approaches, showing a benefit for RRR. Aim: Efforts have been directed at the introduction of novel surgical technologies to overcome the intrinsic anatomical and technical constraints of rectal surgery. This was the case with the introduction into clinical practice of laparoscopy and, later, of robotic surgery for rectal surgery. However, whether robotic surgery is actually superior to laparoscopy is still debated. The aim of this study was to compare 3D laparoscopy and robotic surgery for rectal cancer in terms of technical and oncological outcomes. Methods: This was a single-center, prospective, randomized controlled trial. All patients more than 18 years of age undergoing elective surgery for rectal cancer situated 5 to 10 cm from the anal verge were included. Patients undergoing abdominoperineal amputation and/or with T4 and/or M1 tumours were excluded, as were patients with a follow-up shorter than 24 months. Patients were randomized before surgery into two arms, arm A (3D laparoscopy) and arm B (robotic), and gave their consent to the study. Demographic data, tumour data, and operative and postoperative data were collected. Results: Twenty patients were enrolled in arm A and 20 in arm B over a period of one year. The patient populations of the 2 arms were homogeneous as concerns demographic characteristics and disease stage. Robot-assisted rectal resection resulted in comparable operative time (125.70 vs. 170 min; p = 0.068). The conversion rate was significantly lower for arm B (2 vs. 0; p = 0.0). Postoperative morbidity was comparable between groups. Hospital stay was comparable, but the time required to resolve postoperative ileus was shorter in arm B (2.5 vs. 1.2 days, p = 0.048).
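The cost-effectiveness comparison of RRR versus LRR above hinges on the ICER, i.e. the incremental cost divided by the incremental QALYs. A minimal sketch of that arithmetic, using the mean costs and QALYs reported in the abstract (the probabilistic willingness-to-pay analysis itself would require the underlying patient-level data):

```python
# Hedged sketch of the ICER arithmetic: ICER = delta cost / delta QALY.
# Means are taken from the abstract; this is illustrative, not the study's
# actual cost-effectiveness model.
mean_cost_rrr, mean_cost_lrr = 7279.31, 6879.80   # mean overall costs (EUR)
qaly_rrr, qaly_lrr = 0.5624, 0.5066               # mean QALYs at 1 year

icer = (mean_cost_rrr - mean_cost_lrr) / (qaly_rrr - qaly_lrr)
print(f"ICER = {icer:.0f} EUR per QALY gained")

# RRR is cost-effective at a willingness-to-pay threshold if ICER < WTP.
for wtp in (20_000, 30_000):
    print(f"WTP {wtp} EUR: cost-effective = {icer < wtp}")
```

With these point estimates the ICER falls well below both thresholds quoted in the abstract, consistent with the reported probabilities of cost-effectiveness.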
Overall survival and disease-free survival were comparable between arms (98.6% vs. 98.3%, p = 0.989, and 97.4% vs. 97.6%, p = 0.856, respectively). Conclusions: 3D laparoscopy and robotic surgery are two viable options for rectal surgery. Robotic surgery may add some benefit in terms of postoperative outcomes and ergonomics. Aim: Robotic surgery for rectal cancer is currently performed worldwide, and we introduced it in 2015. However, after robotic surgery we observed a rise in creatine kinase (CK) that is unlikely to occur after other surgeries. We studied the postoperative complications of rectal cancer patients who underwent either robotic surgery or laparoscopy during the same period. Methods: From January 2016 to November 2018, 23 patients underwent robot-assisted rectal resection (da Vinci Si, 20 cases; Xi, 3 cases) and 33 patients underwent laparoscopic rectal resection. Abdominoperineal resection, intersphincteric resection and lateral lymph node dissection were excluded. Results: The operation time in the robotic surgery group was significantly longer than in the laparoscopic group (424 min vs. 305 min; p < 0.001). The CK value in the robotic surgery group on POD 1 was significantly higher than in the laparoscopic group (525 IU/L vs. 160 IU/L; p < 0.001). In addition, one case of compartment syndrome was observed in the laparoscopic group. There were no significant differences in age, body mass index, intraoperative bleeding, tumor invasion depth, urination disorder or postoperative hospital stay. In robotic surgery, the increase in CK value is considered to be caused by the extended operation time, contact of the patient cart with the patient's left thigh, and the extra force applied to the abdominal wall by displacement of the remote center.
Conclusion: In robotic surgery, measurement of the postoperative CK value appears important. Efforts to shorten the operation time and attention to the surgical field are therefore necessary to improve outcomes. Aims: Anastomotic leak remains one of the most important and life-threatening postoperative complications in colorectal surgery. This complication has important consequences, both acute and long-term: longer hospital stay, re-intervention, and increased morbidity and mortality. Among the different factors that have been related to this entity, blood supply is an important one. Fluorescence with indocyanine green (ICG) is used as a marker in the assessment of tissue perfusion in colorectal surgery, which might reduce the number of leaks. Methods: A multicenter analysis of the experience of 5 centers in Spain was conducted in order to assess the value of ICG in colorectal anastomoses. 379 colorectal procedures were performed using ICG to evaluate the vascular supply of the anastomosis. ICG was infused before performing the anastomosis, and we analyzed the number of cases in which the transection line (TL) was changed, as well as the number of leaks in those cases. Results: Of the 379 cases, 15 were performed by open surgery, 319 by laparoscopy, 35 by single-port and 10 by transanal total mesorectal excision (TaTME). The following procedures were performed: 94 right colonic resections (RC), 9 splenic flexure partial resections (SF), 149 left colonic resections (LC), 3 subtotal colectomies (SC), 2 total colectomies (TC), 6 Hartmann reversal surgeries (HR), 63 low anterior resections with partial mesorectal excision (LAR) and 47 ultra-low anterior resections with total mesorectal excision (ULAR). The leak rate (LR) was 6.59% (3.19% RC, 5.36% LC, 33.33% SC, 11.11% LAR, 11.32% ULAR). Overall, the LR was 4.3% in colonic surgery and 11.2% in rectal surgery.
The TL was changed on the basis of ICG in 12.13% of cases (4.25% RC, 11.1% SF, 16.77% LC, 50% TC, 7.93% LAR, 18.86% ULAR): 11.9% in colonic resections and 12.9% in rectal resections. The leak rate among the cases in which the TL was changed was 20% (33.3% RC, 25% LC, 33.3% ULAR). Conclusion: ICG fluorescence may play a role in the assessment of anastomotic tissue perfusion. The LR after colorectal surgery might decrease when ICG is used to detect the proper TL before performing the anastomosis. However, we do not have sufficient evidence to determine that changing the transection line avoids leaks. Aims: To analyse the value of postoperative day 2 CRP as an early predictor of safe discharge in robotic rectal cancer surgery. Methods: A retrospective analysis was performed, including patients who had undergone robotic total mesorectal excision (TME) in a single centre over a 4-year period (May 2013 to September 2017). Patients who had a permanent stoma (abdominoperineal resection or Hartmann's procedure) were excluded from the study, leaving 144 patients for analysis. As length of stay (LOS) is currently used as a performance tool in assessing outcomes in colorectal surgery (with a cut-off established at 5 days), we compared the CRP values in these 2 groups. Results: Forty-one percent of patients were discharged home within 5 days. They had an earlier CRP peak, on postoperative day (POD) 2 (median 94.5, IQR 80). The patients discharged home after 5 days (59%) had a CRP peak on POD 3 (median 151, IQR 168). On POD 3, the group that went home within 5 days had a lower CRP than the group discharged after 5 days (median 83 [IQR 70] vs. 151 [IQR 168]; p = 0.001). Conclusions: A CRP peak on POD 2 after robotic TME can predict an early and safe discharge (LOS within 5 days).
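The CRP comparison in the discharge-prediction study above rests on per-group median and IQR summaries. A minimal sketch of that summary, using hypothetical CRP values (not trial data):

```python
# Hedged sketch: median and IQR per discharge group, as in the CRP analysis
# above. The CRP values are hypothetical placeholders, not study data.
from statistics import median

def median_iqr(values):
    """Return (median, IQR) using the median-of-halves quartile convention."""
    s = sorted(values)
    n = len(s)
    q1 = median(s[: n // 2])          # lower half
    q3 = median(s[(n + 1) // 2 :])    # upper half
    return median(s), q3 - q1

early = [60, 72, 85, 90, 94, 99, 110, 130]       # discharged within 5 days
late = [95, 120, 140, 151, 160, 180, 210, 260]   # discharged after 5 days

for name, group in (("early", early), ("late", late)):
    m, iqr = median_iqr(group)
    print(f"{name}: median CRP {m}, IQR {iqr}")
```

Note that quartile conventions vary (median-of-halves vs. interpolation); published IQRs depend on which convention the statistics package used.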
Background: Lateral pelvic lymph node dissection (LPND) is suggested to treat suspected lymph node metastasis in the pelvic side-wall in patients with rectal cancer who have undergone preoperative chemoradiotherapy (CRT). However, technical difficulties mean that lateral pelvic lymph nodes (LPNs) may not be dissected completely and may thus remain in the narrow pelvis. Near-infrared fluorescence imaging (FI)-guided surgery is expected to help visualize and completely excise non-visible lymph nodes during cancer surgery. This study aimed to evaluate the efficacy of FI using indocyanine green (ICG) to identify LPNs during robotic LPND. Methods: 31 rectal cancer patients with suspected LPN metastasis who had received preoperative CRT were prospectively enrolled. A dose of 2.5 mg of ICG was injected around the tumor preoperatively. All procedures were performed with a totally robotic approach. After completing LPND, FI was checked again to identify remaining LPNs and resect them completely. Results: LPNs were successfully detected in 25 (80.6%) of the 31 patients. However, after excluding the first eight cases, during which the ICG injection was being adjusted, LPNs were successfully detected in 22 (95.7%) of 23 patients. The FI-guided LPND group (n = 25) showed a similar mean operative time for unilateral pelvic dissection and a similar complication rate compared with patients who underwent conventional robotic LPND (n = 62). However, the mean number of unilateral harvested LPNs was 10.2 in the FI-guided LPND group, greater than the mean of 6.6 in the conventional group. LPN metastasis was identified in 40% of the FI-guided LPND group, higher than in the conventional group (31.7%). Conclusion: FI-guided LPND identifies lymph nodes in the pelvic side-wall with great reliability. This contributes to an increased LPN yield compared with conventional robotic LPND.
This technique should be considered in order to dissect LPNs completely and prevent non-visible nodes from being missed. Aims: To compare the medium-term oncological outcomes of laparoscopic total mesorectal excision (L-TME) vs. robotic total mesorectal excision (R-TME) for rectal cancer. Methods: A retrospective analysis was performed, including patients who underwent L-TME or R-TME resection between 2011 and 2017. Patients with stage IV disease at diagnosis or R1 resection were excluded. 680 patients were initially included, and 136 cases of R-TME were matched on age, gender, stage and length of follow-up with an equal number of patients who underwent L-TME. We compared 3-year disease-free survival (DFS) and overall survival (OS). In addition, a multivariate analysis was performed in order to identify independent prognostic factors for 3-year DFS and OS. Results: Pathological outcomes were similar between groups. However, major complications were less frequent in the robotic group (13.2% vs. 22.8%, p = 0.04), notably the anastomotic leakage rate, which was 7.4% in the R-TME group vs. 16.9% in the L-TME group (p = 0.01). Overall, the 3-year DFS rate was 69% in the laparoscopic group and 84% in the robotic group (p = 0.02). The 3-year OS rate was 70% in the L-TME group and 97% in the R-TME group (p < 0.001). For stage III disease, 3-year DFS was significantly higher in the R-TME group. OS was also significantly superior in the robotic group for every stage, reaching 86% in stage III. In the multivariate analysis, R-TME was a significant positive prognostic factor for distant metastasis (OR 0.2, 95% CI 0.1-0.6, p = 0.001) and OS (OR 0.2, 95% CI 0.07-0.4, p < 0.001). Conclusions: R-TME for rectal cancer can achieve better oncological outcomes than L-TME, especially in stage III rectal cancer. The robotic approach was a significant positive prognostic factor for local recurrence and overall survival, owing to better postoperative outcomes.
However, a longer follow-up period is needed to confirm the oncologic findings. 1 University Hospital for Visceral Surgery, University of Oldenburg, Oldenburg, Germany; 2 Bremen Spatial Cognition Center, University of Bremen, Bremen, Germany. Aims: In clinical settings, realistic assessment of one's own abilities can enhance performance and promote patient safety, especially for surgical residents, who inevitably have to acquire skills during real surgery. This study therefore implemented the Global Assessment of Laparoscopic Skills (GOALS) questionnaire with the aim of exploring divergences between resident self-evaluation and specialist evaluation of laparoscopic performance, as a first step towards implementing the GOALS questionnaire as a tool for constructive and objective feedback. Methods: Between July and October 2018, seven residents from the University Hospital for Visceral Surgery at the Pius-Hospital Oldenburg participated in this study. At the end of every laparoscopic operation in which the resident acted as the primary surgeon, the resident and the supervising surgeon independently evaluated the resident's operative performance using the GOALS questionnaire. The five dimensions evaluated were depth perception, bimanual dexterity, efficiency, tissue handling and autonomy. A cumulative GOALS score (with 25 being the highest possible score) was calculated for n = 46 laparoscopic operations. The resident's year of training, the level of case difficulty and the type of laparoscopic procedure performed were also analysed. Results: Residents overestimated their laparoscopic abilities in 64.4% of the operations (GOALS scores: residents: median = 16, mean = 16.51; specialists: median = 15, mean = 14.60; p < 0.001).
Residents in the first three years of surgical training were more likely to overestimate their performance (residents: median = 16.5, mean = 16.82; specialists: median = 13, mean = 13.14; p < 0.001) than those with more than three years of surgical experience (residents: median = 16, mean = 16.22; specialists: median = 15, mean = 16.00; p = 0.613). GOALS score differences did not depend on case difficulty or laparoscopic procedure. Conclusions: Surgical residents tend to overestimate their intraoperative laparoscopic performance when compared with specialist evaluation. Overestimation was found to depend on one's own laparoscopic experience and seems to diminish with gained expertise. These results signify the importance of individually adapted training and the greater need for objective feedback for surgical residents. This approach could in turn increase residents' rate of skill acquisition and thereby contribute to enhancing patient safety. Introduction: The delivery of safe surgical care depends on various complex and interrelated factors. Substantial data exist regarding the impact of training in human factor skills on surgical outcomes. However, except for the standardized time-out process, the best way to improve these skills remains unclear. The aim of this study was to gain more insight into theatre staff's perception of human factors and their importance for surgical outcomes in the operating theatre. Methods: The Surgical Team Assessment Record (STAR) questionnaire was used to study the role of human factors, such as communication, situational awareness and organization, in surgical team performance. The self-assessment questionnaire was filled out by theatre staff directly after the surgical procedure. Conditional logistic regression was used to identify the impact of the role in the operating theatre on yes-versus-no answers. Results: Some 507 questionnaires were completed.
Theatre staff rated their team's performance with a median of 4 (IQR 0.0; 5-point Likert scale). The surgical fellows (n = 76) rated their personal factors significantly lower than the rest of the operating team (median 3 versus 4, p < 0.0001). The staff surgeons (n = 119) indicated significantly more often that there were many distractions (51.3%, yes n = 61) and noticed aberrations (60.5%, yes n = 72) during the surgical procedure (p < 0.0001) compared with the rest of the operating team. Most aberrations reported by the surgeons were related to technical performance. Conclusions: Human factors play an important role in the surgical environment. Situational awareness may be less developed in members of operating teams than in the surgeon of that team. Further work is needed to elucidate the impact of human factor skills on team performance. A team-based approach to safety interventions is recommended. Future studies should determine what types of aberrations and distractions are most relevant and valuable to address with team training. 1 Dept. of Digestive Surgery, School of Medicine, Tokushima University, Tokushima, Japan; 2 Dept. of Digestive Surgery, Tokushima University, Tokushima, Japan. Background: Qualitative evaluation of laparoscopic training of medical students was performed using rubric evaluation, and weak points connected with a lack of anatomical knowledge were identified. To overcome these weak points, virtual reality (VR) + augmented reality (AR) training for understanding regional anatomy was investigated. Materials and methods: One hundred and six fifth-year students of Tokushima University participated in basic laparoscopic task training (gummy band ligation, bead transfer, delivery of beads, gauze excision) with a training box, and in sham laparoscopic cholecystectomy with a virtual simulator.
rubric evaluation, a qualitative evaluation that includes the evaluation standards for each maneuver, was performed before and after basic task training and the sham operation. the group whose self-evaluation was higher in rubric evaluation was investigated. the 3d image of vessels and bile duct obtained from mdct of a real patient was projected into real space with microsoft hololens. training with the ar image using hololens was performed for understanding of regional anatomy. after training of regional anatomy with hololens, sham laparoscopic cholecystectomy was performed again, and the quality of the procedure was evaluated by rubric. anatomical questions were also asked. results: rubric evaluation in basic task training showed no difference between self-evaluation and evaluation by tutor before and after practice. in sham laparoscopic cholecystectomy, several students showed higher scores than the tutor, especially in extension of the operative field by elevation of the gall bladder, exposure of the triangle of calot, and exposure of the cystic duct. after ar training, all students scored highly on questions related to regional anatomy during the operation. in particular, rubric evaluation of students who showed high self-evaluation in the sham operation matched the tutor's score. conclusions: rubric evaluation revealed weak points in detailed parts of each maneuver, and vr + ar was useful for understanding details of regional anatomy in laparoscopic training. background: the eaes has recently published an intraoperative adverse event classification to aid reporting of minimal access surgery events. this includes capture of non-consequential errors. we aimed to investigate the clinical impact of these apparent 'near miss' events. methods: case videos and clinical data from a completed multi-centre laparoscopic total mesorectal excision randomised controlled trial were utilised (isrctn59485808).
the eaes classification was applied by two blinded assessors to all enacted adverse events identified on video analysis using the observational clinical human reliability analysis technique. the total number of grade 1 (non-consequential) errors was compared with the number and nature of 30-day morbidity events (graded with the clavien-dindo system) and length of stay. results: 77 cases (419 h of surgery) contained 1377 error events, of which 809 (58.8%) were classified as eaes grade 1 (median 10 per case, interquartile range 7-13, range 1-28). significantly more inconsequential errors were recorded in patients who developed any early morbidity event than in those who had an uneventful post-operative recovery (median 11 (iqr 9-14) vs. 8.5 (6-12), p = 0.005). a stepwise increase in the sum of eaes grade 1 errors was seen for each additional 30-day morbidity event reported (8.5 vs. 11 vs. 11 vs. 12, p = 0.047) and for the highest clavien-dindo grade experienced (9 vs. 10 vs. 11 vs. 12, p = 0.067). positive correlation was observed between the sum of eaes grade 1 errors and length of post-operative stay (r_s = 0.36, p = 0.001). conclusion: in the context of major laparoscopic surgery, near misses are commonplace and correlate with surgical outcomes. this may represent a novel surrogate assessment method for intraoperative performance. aims: diagnostic laparoscopy (dl) is an under-utilised procedure that can replace non-therapeutic exploratory laparotomies in many contexts. to date, no validated education programme for dl exists. this study sought to evaluate the feasibility, acceptability, and face, content and construct validity of the laplat curriculum (laparoscopic learning for abdominal trauma; a simulation-based curriculum for trauma dl), in addition to the development of a novel 3d-printed bench-top model for abdominal inspection. methods: this prospective, observational pilot study involved 39 novice medical students and junior doctors.
uk and international surgeons (n = 8) were involved in a two-stage delphi process to determine the components of the training course, which were used to formulate a final curriculum. in the absence of an adequate model for abdominal inspection, a novel 3d-printed abdominal inspection model was designed and produced. after an introductory familiarisation session and pre-course cognitive lectures, the novices performed 6 tasks on a virtual reality and bench-top simulator with 5 repetitions of each in a half-day session. outcome measures for construct validity were total time to complete task, accuracy, percentage of horizon maintained and economy of movement. face and content validity as well as acceptability were evaluated by a qualitative and quantitative survey. results: face, content and construct validity as well as acceptability were established. face validity was demonstrated in all components of the course (including pre-course cognitive content and technical tasks), in addition to content validity. all also met an acceptability threshold of 3/5 on a 5-point likert scale. one-way anova tests demonstrated construct validity in all tasks (p < 0.0002), with learning curves of reducing time observed. using a performance improvement metric, one-way anova tests showed similar rates of improvement per participant between most tasks (p > 0.05). the course was rated overall mean 8.86/10 (± 1.05). conclusion: this pilot study has demonstrated the feasibility, acceptability and face, content and construct validity of the laplat curriculum as well as of the novel 3d-printed abdominal inspection model. randomised controlled trials are needed to establish higher-quality evidence, as part of a wider curriculum with transfer to the clinical environment.
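the construct-validity analyses reported in abstracts like the one above rest on a one-way anova across repeated attempts. a minimal sketch of the underlying f-statistic computation, using entirely hypothetical task-completion times (in seconds, three participants per repetition) rather than any data from the study, could look like this:

```python
def one_way_anova_f(*groups):
    """one-way anova f statistic: between-group variance over within-group variance."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    # between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # within-group sum of squares around each group mean
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical completion times (s) for repetitions 1, 3 and 5 of one task
f = one_way_anova_f([120, 135, 128], [95, 102, 99], [70, 74, 78])
print(round(f, 1))
```

a large f relative to the f-distribution with (k-1, n-k) degrees of freedom indicates that mean times differ across repetitions, i.e. a learning effect; `scipy.stats.f_oneway` returns the same statistic together with the p-value.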
surgery, regional institute of gastroenterology and hepatology, cluj-napoca, romania; 2 anesthesiology-surgical propedeutics, university of agricultural sciences and veterinary medicine, cluj-napoca, romania; 3 radiology, regional institute of gastroenterology and hepatology, cluj-napoca, romania; 4 urology, training and research center, prof. dr. sergiu duca, cluj-napoca, romania; 5 general surgery, training and research center, prof. dr. sergiu duca, cluj-napoca, romania aims: to evaluate the benefits of systematic use of an ex vivo liver model and ct imaging in the planning process for swine laparoscopic liver resections performed by residents during training programs. methods: twenty-four general surgery residents were equally divided into two groups: the first performed laparoscopic liver resections without a planning stage, and the second systematically used anatomical data from a swine liver model and interactive 3d reconstructions of ct scans. the planning stage included an interactive tutorial for establishing the strategy for the next resection, followed by open liver dissection and the same resection on an ex vivo swine model. a total of twelve models were used during this step. afterwards, laparoscopic procedures were performed on sixteen anesthetized domestic pigs, two swine for every team, each composed of three residents. both groups were part of a dedicated and continuous training program and used the same 'step by step' protocol for resections. results: the average time for imaging-based planning was 36.7 min, and for open dissection and resection 57.9 min. all teams successfully completed the interventions and followed the standardized protocol without trainers' interventions and with no conversions. the second group obtained better results regarding time to completion and blood loss. also, when the planning stage was applied, the resection was more accurate and less functional parenchyma was removed.
the 'warming up' achieved by adding imaging and anatomical data to the core protocol offers more clarity before laparoscopic liver resections. it also upgrades our 'step by step' protocol and provides sufficient data to regard this planning stage as mandatory for laparoscopic liver resection on swine during a training program. introduction: submucosal tunnel endoscopic resection (ster) has been increasingly performed for treatment of gastric subepithelial tumors. one of the limitations of ster is the risk of incomplete tumor resection due to close dissection and bridging of the tumor capsule. endoscopic full thickness resection (eftr) allows complete resection of the tumor with margins to prevent recurrence. this study aimed to review the techniques and outcomes of eftr for treatment of gastric subepithelial tumors. method: patients who received endoscopic resection for gastric subepithelial tumors were recruited. gastric subepithelial tumors of size < 40 mm were considered eligible for endoscopic resection. all patients received preoperative assessment, including eus and ct scan, to define the extent of the tumors and the proportion of extra- and intraluminal components. all procedures were performed under general anesthesia with co2 insufflation. eftr started after injection, with mucosal incision up to 50% of the tumor circumference, followed by submucosal dissection to identify the tumor margin. further dissection was performed using esd devices. after adequate exposure of the lateral margins, incision into the muscularis propria was performed to achieve full thickness resection. luminal defects were closed by clips, the clip-loop crown method or overstitch suturing. results: from 2012 to 2018, 10 patients received eftr for gastric subepithelial tumors. the mean age was 60.6 years, and 4 were male. the gists were located at the greater curvature (4), cardia (3), lesser curve (2) and antrum (1). the mean size was 20.5 mm (10-50 mm).
most of the eftr procedures were performed in the operating theatre, while two were done in the endoscopy unit. the mean hospital stay was 4.5 days, and mean operative time was 98 min (34-180 min). there was no conversion to laparoscopy. closure of the luminal defect was performed mostly with clips (5), followed by overstitch (4) and clip-and-loop crown closure (1). most patients resumed a full diet on day 3, and all pathologies confirmed gist tumors with clear resection margins. conclusion: endoscopic full thickness resection is a technically feasible and safe procedure for treatment of gastric gist. future research should focus on refining the techniques of eftr and closure of the defect. next generation endoscopic intervention (project engine), osaka university, suita, japan; 2 gastroenterological surgery, osaka university, suita, japan; 3 research & development, 3-d matrix, ltd., chiyoda-ku, tokyo, japan; 4 research & development, fuso pharmaceutical industries, ltd., chuo-ku, osaka, japan background: hemostatic peptides have received increased attention. self-assembling peptides (tdms) comprise synthetic amphipathic peptides that react immediately to changes in ph and/or inorganic salts to transform into a gelatinous state. since tdms do not carry a risk of infection, their clinical application as a new hemostatic agent is expected to increase. the first generation of these peptides (tdm-621) is currently used as a hemostatic agent in europe. however, tdm-621 exhibits slow gel formation and low retention on tissue surfaces. the second generation (tdm-623) was therefore developed for faster gel formation and better tissue-sealing capability, and we subsequently verified its usefulness and increased performance relative to tdm-621 in preclinical open surgery. aim: the aim of this study was to verify the efficacy of tdm-623 in terms of its hemostatic effect in endoscopic surgery.
materials and methods: evaluation of the hemostatic effect in endoscopic surgery (animal study) was performed using eight female pigs (35 kg) in supine position. following systemic heparinization, we established a bleeding model by using flexible endoscopic grasping forceps on the anterior wall of the stomach and duodenum. for hemostasis, an endoscope with a distal hood was brought into contact with the bleeding point, and 1 ml of tdm-623 was applied to the wound. after tdm-623 gelation, the endoscope was removed, and the acute hemostatic effect (after 2 min) was confirmed. histologic evaluation was subsequently performed on resected specimens. results: in the endoscopic bleeding model, 17 of 23 cases (73.9%) showed complete hemostasis on the anterior wall of the stomach, whereas on the anterior wall of the duodenum, 18 of 20 cases (80%) showed complete hemostasis. moreover, none of the gels were displaced from the anterior walls of the stomach and duodenum, and histologic evaluation confirmed no infiltration of inflammatory cells. the new self-assembling peptide (tdm-623) displayed improved hemostatic effects relative to the previous generation (tdm-621) in endoscopic surgery. tdm-623 has potential usefulness for upper gastrointestinal bleeding. our future work will assess its usefulness for laparoscopic surgery. objective: indocyanine green (icg) is a dye used in medicine since the mid-1950s for different applications in ophthalmology, cardiology and hepatobiliary surgery; thanks to its selective hepatic uptake and biliary excretion, it can be used to evaluate hepatic function in patients scheduled for hepatic resection surgery. the aim of this study is to evaluate the efficacy and feasibility of icg-guided surgery for the intra-operative localization of liver tumors, comparing the pre-operative radiological aspect, the intra-operative visualization and the post-operative histopathological features of the tumors.
materials and methods: icg was intravenously injected for a routine liver function test (limon) in 58 patients who underwent hepatic resection surgery for primary and secondary liver tumors between november 2016 and september 2018. for each patient, intraoperative visualization of the stain was performed both in vivo and ex vivo, using a near-infrared imaging system. all images were recorded. results: correct differentiation between liver parenchyma and tumor area was obtained in 89.1% of cases. five patients were not evaluable due to widespread uptake or complete absence of uptake; these were probably among the first cases enrolled in the study, for which doses and timing of icg administration had not yet been established. in patients in whom the method was feasible, we observed a prevalence of nodular pattern in patients with hepatocellular carcinoma (63%) and a predominance of rim pattern in both cholangiocarcinoma (80%) and metastasis (83%). furthermore, well-to-intermediately differentiated hccs (g1-g2) predominantly showed a nodular pattern (82.9%), whereas poorly differentiated ones predominantly showed a rim appearance (60%). regarding radiological correlations, the only patient who presented an atypical radiological feature on pre-operative evaluation showed a lesion with no icg uptake on intra-operative visualization. conclusions: icg fluorescence imaging is a safe, minimally invasive and quite inexpensive method that can be easily administered for routine evaluation of pre-operative liver function. it can be a useful support tool in the intra-operative detection of liver tumors, especially in laparoscopic surgery, where it is not possible to directly touch the tissue.
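a detection rate like the 89.1% above is a binomial proportion whose precision depends on the number of evaluable cases. a minimal sketch of a wilson score confidence interval, assuming purely for illustration a hypothetical split of 41 correct differentiations out of 46 evaluable cases (≈ 89.1%; the abstract does not report the exact counts), could be:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """wilson score confidence interval for a binomial proportion (z=1.96 for 95%)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# hypothetical split: 41/46 ≈ 89.1% correct tumor/parenchyma differentiation
lo, hi = wilson_ci(41, 46)
print(f"89.1% (95% ci {lo:.1%} to {hi:.1%})")
```

the width of the resulting interval (roughly 77% to 95% under this assumed split) makes explicit how much uncertainty a cohort of this size carries; `statsmodels.stats.proportion.proportion_confint(..., method='wilson')` implements the same formula.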
surgery, bundang cha medical center, seongnam-si, korea; 2 surgery, severance hospital, seoul, korea; 3 surgery, nhimc ilsan hospital, ilsan, korea; 4 surgery, seoul national university bundang hospital, seongnam, korea; 5 surgery, asan medical center, seoul, korea backgrounds & aims: robotic surgical systems have been widely accepted in various surgical fields with the expectation of overcoming the limitations of laparoscopic surgery. however, robotic liver resection has not yet become widespread. thus, this study aimed to evaluate the feasibility and safety of robotic major liver resection in a prospective multicenter study. methods: from july 2017 to december 2018, five surgeons in five tertiary hospitals, all novices in robotic liver resection but highly experienced in open and laparoscopic liver resection, performed 46 cases of robotic major anatomical liver resection. perioperative clinical and surgical data were prospectively collected. results: 22 cases of left hemihepatectomy, 1 case of extended left hemihepatectomy, 14 cases of right hemihepatectomy, 2 cases of right anterior sectionectomy, 6 cases of right posterior sectionectomy, and one case of central bisectionectomy were performed. the most common indication was hepatocellular carcinoma (21 cases), followed by intrahepatic cholangiocarcinoma (7 cases), liver metastases (3 cases), sarcoma (1 case), intraductal papillary neoplasms (2 cases), mucinous cystic neoplasm (1 case), hemangioma (1 case), and intrahepatic duct stones (10 cases). surgical resection margins for all tumor cases were negative. total average operation time was 378.58 ± 124.31 min and estimated intraoperative blood loss was 276.67 ± 397.41 ml (minimal to 2600 ml). in terms of severe surgical complications, there were 3 cases of postoperative fluid collection treated with drainage and one case of bile leakage treated with percutaneous trans-hepatic biliary drainage.
only one case out of 46 was converted to conventional open left hemihepatectomy because of bleeding. conclusions: in this study, robotic anatomic major liver resection might be safely performed even by beginners in robotic surgery, provided they are advanced open and laparoscopic liver surgeons. surgical technique: with the patient at 30° in right lateral decubitus, access is gained through the path of the percutaneous drainage catheter after opening the aponeuroses of the oblique and transverse muscles of the abdomen. a 15 mm laparoscopic trocar is inserted and a cavity is created with pneumoretroperitoneum at 15 mmhg. the cavity is accessed with a 5 mm 0° optic, and the working space is extended with aspiration and hydrodissection. with 5 mm graspers, the necrotic material is removed, washed and drained. a double-lumen silicone drain is left, one lumen for drainage and the other for irrigation. results: the mean age was 52. background: minimally invasive surgery has achieved worldwide acceptance in various fields; however, pancreatic surgery remains one of the most challenging abdominal procedures. in fact, the indication for robotic surgery in pancreatic disease has been controversial. the present study aimed to assess the safety and feasibility of robotic pancreatic resection. methods: we retrospectively reviewed our experience of robotic pancreatic resection at sanchinarro university hospital. clinicopathologic characteristics and perioperative and postoperative outcomes were recorded and analyzed. aim: this work aims to study the contact pressure between a moving capsule and a synthetic small intestine in order to provide design guidance for prototyping a self-propelled capsule robot for small-bowel endoscopy. method: since small-bowel peristalsis consists of peristaltic contraction and wave distension, the contacts between the capsule and the small intestine are multimodal. we consider three contact cases for the capsule robot.
case 1: the capsule moves on a flat small intestinal surface; case 2: the capsule moves in a collapsed intestine with flat surface support; and case 3: the capsule moves fully surrounded by the small intestine. for these three contact cases, experimental testing and finite element analysis (fea) were conducted by measuring the contact pressure between the small intestine and the capsule. introduction: traditional laparoscopic instruments have limited degrees of freedom and are not ergonomic. this results in severe limitations in performing complex, and even simple, tasks in surgery, preventing many surgeons from performing a variety of minimally invasive procedures. handx™ is a hand-held, electromechanical smart instrument with robot-like features. the instrument is composed of a sophisticated user interface that enables unrestricted hand movement, and a novel, motor-driven articulating tool that is controlled by the interface. the instrument is 5.5 mm in diameter, lightweight, and can be easily moved between laparoscopic trocars and perform complex motions in the surgical field. after the regulatory process was completed, we tested the device clinically in a structured, approved clinical trial. materials and methods: after irb approval, 30 patients were recruited to the trial. we included a variety of procedures that require suturing and complex tissue manipulation. two experienced surgeons performed all procedures. after completing each procedure, the surgeons completed a detailed system usability scale (sus) questionnaire. results: 30 procedures were completed successfully without complications or device malfunction. there were 15 female and 15 male patients with an average bmi of 27. procedures performed were 6 right hemicolectomies with intra-corporeal anastomosis, 3 paraesophageal hernia repairs with fundoplication, 3 diagnostic laparoscopies, 2 tapp procedures, 10 ventral hernias with fascial suturing, and 6 laparoscopic cholecystectomies.
the average performance score was 84.70/100. the results suggest that the handx device is safe and easy to use and may offer a simple solution for enhancing minimally invasive surgery capabilities and possibly reducing conversion rates while maintaining the current standard surgical flow. the handx could potentially extend the surgeon's ability to access hard-to-reach anatomy and perform complex maneuvers, and presents a cost-effective alternative to large console-based robotic systems. objective: endoscopic submucosal dissection (esd) has become a widely accepted treatment for rectal neuroendocrine neoplasms. the aim of this study is to evaluate the safety and efficacy of esd with dental floss-assisted suspension traction for rectal neuroendocrine neoplasms. methods: we retrospectively reviewed the medical records of patients who underwent esd for rectal neuroendocrine neoplasm at the endoscopy center of zhongshan hospital, fudan university. data on operation time, r0 resection and adverse events were collected and analyzed. in the dfs-esd group, after the mucosa was partly incised along the marker dots, the next step was to construct the traction device, as in other traction-assisted esd techniques, with dental floss and a hemoclip. the dental floss was tied to one arm of the metallic clip. the hemoclip was attached onto the incised mucosa, and another hemoclip was attached onto normal mucosa opposite the lesion in the same way. the submucosa was clearly exposed with the traction of the dental floss and the resection could proceed. results: 37 patients were enrolled in the study. there were 17 patients treated by esd with dental floss-assisted suspension traction and 20 patients treated by conventional esd. the average tumor size was 0.74 ± 0.2 cm in both groups. the operation time was 17.9 ± 6.6 min in the conventional esd group and 14.7 ± 3.3 min in the dfs-esd group (t = 1.776, p = 0.084).
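the operation-time comparison above is a two-sample t-test, which can be recomputed directly from the published summary statistics. a hedged sketch using a pooled-variance t statistic follows; the small difference from the reported t = 1.776 is consistent with rounding of the published means and standard deviations, or use of a slightly different test variant:

```python
import math

def pooled_t_from_summary(n1, m1, s1, n2, m2, s2):
    """two-sample pooled-variance t statistic from summary statistics (n, mean, sd)."""
    # pooled variance across both groups
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m1 - m2) / se

# conventional esd: n=20, 17.9 ± 6.6 min; dfs-esd: n=17, 14.7 ± 3.3 min
t = pooled_t_from_summary(20, 17.9, 6.6, 17, 14.7, 3.3)
print(round(t, 2))
```

`scipy.stats.ttest_ind_from_stats` performs the same computation and additionally returns the p-value for n1 + n2 - 2 degrees of freedom (35 here).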
according to pathological grading of rectal neuroendocrine neoplasms, there were 17 grade 1 (g1) and 3 grade 2 (g2) in the conventional esd group and 13 grade 1 (g1) and 4 grade 2 (g2) in the dfs-esd group (χ² = 0.436, p = 0.509). among the 37 cases in this study, all basal resection margins were negative, the en bloc resection rate was 100% and the curative resection rate was 100%. however, pathological results showed tumor tissue close to the burning margin in 5 cases of the conventional esd group and in 2 cases of the dfs-esd group (χ² = 0.364, p = 0.546). conclusions: esd with dental floss-assisted suspension traction for rectal neuroendocrine neoplasm can assist in exposing tumor borders, provide good vision during the procedure and offer clearer anatomic structure, so as to simplify the operation, reduce operation time and ensure a negative basal margin. it is especially suitable for promotion in primary hospitals. surg endosc (2019) aims: force feedback and assessment provides detailed insight into tissue manipulation skills. the aim of this study is to evaluate learning curves for basic laparoscopic skills based on force and motion learning curve patterns. moreover, we aimed to detect the favourable time span of this curriculum for each individual trainee. methods: in this prospective cohort study, first-year surgical residents participated in a three-week at-home training course. a mobile box trainer was equipped with the forcesense system for objective force-, motion- and time-based assessment. the system provides seventeen unique metrics. the training goal was set by the mean score of proficient laparoscopic surgeons. each repetition was captured and made available for analysis. continuous force feedback was provided during training. curve fitting was used to estimate the learning curve plateau, the number of repetitions needed to approach the plateau phase, and the number needed to reach proficiency level. finally, a comparison between novices and experts was executed.
results: a total of 2007 attempts, executed by 20 residents, were captured and analyzed. significant improvement of motion analysis parameters (e.g. path length and time) was observed for all training tasks except the fifth task. tissue manipulation skills (i.e. maximum and mean applied force) significantly improved in training tasks 2, 3 and 6. learning curve analysis revealed various shapes and lengths of the individual learning curves. a large range in learning curve plateaus was found between trainees and between tasks. each trainee managed to accomplish the preset goals within three weeks. conclusion: force- and motion-based assessment provides insight into both tissue manipulation and instrument handling skills. when combined in learning curve analysis, these parameters effectively show progression towards proficiency for each individual trainee over time. we emphasize the variation in learning curves between trainees. therefore, we recommend individually tailored courses provided with objective force- and motion-based learning curve tracking. aims: the posterior retroperitoneoscopic adrenal access represents a challenge in orientation and working-space creation. the aim of this experimental acute study was to evaluate the impact of computer-assisted quantitative fluorescence imaging on adrenal gland identification and perfusion assessment in the posterior retroperitoneoscopic approach. methods: six pigs underwent synchronous (n = 5) or sequential (n = 1) bilateral posterior retroperitoneoscopic adrenalectomy (pra, n = 12). fluorescence imaging was obtained via intravenous administration of 3 ml of indocyanine green (icg) using two near-infrared camera systems. fluorescence-based visualization of the adrenal glands before vascular division (n = 4), after main vascular pedicle ligation (negative control, n = 1) or after adrenal division (n = 7) was followed by completion adrenalectomy.
one of the animals had undergone icg injection 3 h previously, during another study. the dynamic evolution of fluorescence signal intensity over time was recorded and analyzed using proprietary software. the computed color-coded perfusion cartography was superimposed onto real-time images obtained by the corresponding left (l) and right (r) camera systems. the slope of fluorescence signal intensity evolution over time in the regions of interest (roi) served to assess adrenal perfusion by means of quantitative fluorescence signal analysis. results: in the retroperitoneum, the adrenal glands were promptly highlighted after primary intravenous icg administration or showed an increase in fluorescence signal intensity upon reinjection (both glands in the recovery pig and one gland in the sequential approach). after left adrenal main vascular pedicle ligation, the gland displayed low perfusion (blue; rois a1-a2 in figure 1), while a weak fluorescence signal after completion adrenalectomy suggests perfusion via collateral vessels. with intact vascular supply, the caudal segment of the right adrenal gland (a3) showed a significantly higher perfusion rate (red) than the ischemic cranial segment (a4). quantitative analysis of logarithmic fluorescence intensity showed a statistically significant difference between perfused and ischemic zones (p = 0.005), allowing assessment of gland vascularity. kidneys (k) and adrenal glands showed distinct perfusion curves (figure 1). conclusions: prior to dissection, fluorescence imaging allows easy discrimination of the adrenal gland from surrounding retroperitoneal structures. during adrenal gland surgery, icg injection complemented by computer-assisted quantitative analysis helps to distinguish between well-perfused and low-perfused segments. giant adrenal tumors: technical considerations and surgical outcome a. giordano, g. alemanno, c. bergamini, p. prosperi, v. iacopini, a. dibella, a.
valeri sod chirurgia d'urgenza, aou careggi, firenze, italy objectives: giant adrenal tumors are tumors larger than 6 cm. these are rare lesions, associated with malignancy in 25% of cases. the size of these tumors is an important topic in the literature because of their higher probability of malignancy and the possible technical limitations of the laparoscopic approach. we report our center's experience with laparoscopic adrenalectomy. materials and methods: in the last ten years we performed 242 adrenalectomies for benign and malignant adrenal tumors, 45 of which were giant tumors. the average size was 9.9 cm (7-22 cm). 23 tumors were on the left adrenal gland and 22 on the right. there were 20 women and 25 men; the average age was 55 (21-81 years). 29 of these tumors were removed laparoscopically and 16 with an open approach, with 2 cases of open conversion. results: among the 29 tumors removed laparoscopically, we recorded 6 cases of carcinoma, 2 endothelial cysts, 6 adenomas (3 with aldosterone and 2 with cortisol hypersecretion), 2 myelolipomas, 10 pheochromocytomas and 3 metastases from lung carcinoma. the surgical outcomes in these patients were optimal in terms of pain control and hospital stay (median 3 days). the average operative time was 110 min with very low blood loss (90 ml). no postoperative complications were recorded. removal of the adrenal gland required 3 or 4 trocars. in the dissection and resection phases we always used a radiofrequency scalpel. follow-up at 12 and 24 months showed no local recurrences. conclusions: laparoscopic adrenalectomy offers significant advantages over the open approach. the size of these tumors is still at the center of debate in the choice of technique. tumor size is only a predictive parameter of possible malignancy.
the laparoscopic approach is a safe and feasible method in terms of surgical and oncological outcomes, also for giant adrenal tumors, but only if performed by expert surgeons in high-volume centers. vascular or adjacent-organ infiltration is a contraindication to the laparoscopic approach. aims: adrenal gland size greater than 6 cm is considered a contraindication to laparoscopic adrenalectomy (la). the aim of the present case-control study is to compare surgical outcomes in patients undergoing la for adrenal glands measuring ≥ 6 cm versus ≤ 5.9 cm in diameter. methods: from january 1994 to august 2018, 552 las were performed in the two authors' centers, which follow an identical treatment protocol. eighty-one patients with an adrenal gland size ≥ 6 cm (intervention group) were included in the study. matched on body mass index (bmi) class (> 40 kg/m²), lesion side (right or left), surgical technique (anterior transperitoneal for right- and left-sided lesions, anterior transperitoneal submesocolic for left-sided lesions) and lesion type (conn-cushing, pheochromocytoma, primary adrenal cancer or metastases, other type of lesion), 81 patients with an adrenal gland lesion measuring ≤ 5.9 cm in diameter were included (control group) and paired to the intervention group. results: comparing the intervention and control groups, statistically significant differences were observed in mean lesion size. conclusions: the only significant difference between the two groups was the operative time, which was longer in the intervention group. conversion and complication rates were also higher in the intervention group, but the difference was not statistically significant. based on the present data, adrenal gland size measuring more than 6 cm in diameter is not a contraindication to a laparoscopic approach. ; and orthopaedics and urologists for the remaining 6.6%. the costs from these claims differed from 2 to 13% of the total damage burden per year.
the review of medical charts of claims related to laparoscopic gynaecologic surgery showed that 82% of claims were filed for visceral and/or vascular injuries (40% bowel injuries, 20% ureter). 38% of the injuries were entry-related. a delay in diagnosing injuries was the primary reason for financial compensation. conclusion: evaluating and learning from complications and claims will improve medical health care. in contrast to overall trends in medical claims, claims concerning laparoscopic surgery decreased, possibly due to a rising learning curve. in laparoscopic surgery, extra caution is required at the moment of entry, in the early recognition of complications and in the pre-operative counselling of patients. the aim of the study was to determine indications and contraindications for laparoscopic splenectomy in abdominal trauma patients and to analyze the results of the operations. patients and methods: the study involved 112 patients with grade iii spleen injury who were admitted to our institute in the years 2013-2018. the patients were divided into two groups. laparoscopic splenectomy was performed in 62 patients (group i) and 'traditional' splenectomy was carried out in 50 patients (group ii). there was no difference in demographic data and trauma severity between the two groups. non-invasive investigations, such as laboratory investigations, serial abdominal ultrasound examinations (us), x-ray in multiple views and computed tomography (ct), had been performed before the decision about the necessity of an operation was made. results: patients after laparoscopic operations recovered better than patients with the same injury after 'traditional' splenectomy. neither surgery-related complications nor mortalities were registered in either group. laparoscopic splenectomy was a more time-consuming operation than 'traditional' splenectomy. 
we suggest that as experience with laparoscopic splenectomy is gained the operation time will be reduced. conclusion: laparoscopic splenectomy is a safe and feasible operation in patients with spleen injury. the operation is indicated in patients with a spleen laceration of more than 3 cm of parenchymal depth with moderate continuing bleeding or an expanding hematoma, and contraindicated in patients with hemodynamic instability and a high bleeding rate (more than 500 ml/h on serial us examinations). isolated hydatid disease of the spleen is a quite rare condition, the liver and lungs being the most common locations. treatment usually requires splenectomy, open or laparoscopic; there are few reports in the literature describing a spleen-preserving type of surgery. we present the case of a female patient, 51 y.o., with a large cystic lesion of the spleen, 11 cm in diameter. lab tests and ct scan confirmed it was a hydatid cyst. after albendazole treatment and vaccination the patient was referred to us for surgical treatment. the procedure was performed under general anesthesia and a laparoscopic approach was used with the intention to preserve the spleen. after the cyst was identified and adhesiolysis was done, the area was isolated from the rest of the abdominal cavity with sponges soaked in a betadine solution in order to prevent contamination. needle aspiration of the cyst allowed the evacuation of 550 ml of purulent content, an indicator of a dead cyst. betadine solution was injected into the lesion. laparoscopic excision of the cyst was performed using advanced electrocoagulation devices and spleen removal was not deemed necessary. two drainage tubes were placed in the remnant cavity. an abdominal ultrasound performed on the third postoperative day identified no collections. the postoperative outcome was uneventful; the patient was discharged on the 6th postoperative day. 
the conclusion is that in selected cases, with the cyst located in the anterior part of the spleen, with proper equipment and an experienced laparoscopic team, the cyst can be successfully treated without splenectomy. deep neuromuscular block was induced with rocuronium 1.2 mg/kg. in group 1, forty patients were enrolled for reversal of profound neuromuscular block during thyroid surgery (sugammadex 2 mg/kg, after identification of the vagus nerve). in group 2, thirty-five patients were enrolled with profound neuromuscular block during thyroid surgery (without reversal of nmbd). tof-watch acceleromyography of the adductor pollicis muscle in response to ulnar nerve stimulation was recorded in both groups; recovery was defined as a train-of-four (tof) ratio ≥ 0.9. to prevent laryngeal nerve injury during the surgical procedures, all patients were monitored neurophysiologically using ionm. results: the total duration of surgery was higher in group 2 than group 1 (63.7 ± 5.6 vs. 82.5 ± 6.1; p < 0.001). the mean time to recovery of the tof ratio to 0.9 was higher in group 2 than group 1 (22.3 ± 2.6 vs. 74.3 ± 5.0; p < 0.001). the mean duration of vagus reverse (v1: 3.5 ms) was higher in group 2 than in group 1 (21.3 ± 1.7 vs. 42.9 ± 5.1; p < 0.001). no significant difference was found between left and right v1-v2 and r1-r2 values in group 1 following nerve monitoring, whereas in group 2 a significant difference was found between left v1-v2, left r1-r2 and right v1-v2 values. introduction: oesophagogastric oncology trials have often lacked robust methods of monitoring and surgical quality assurance (sqa), leading to difficulty in interpretation of trial results. this study aims to assess expert opinion regarding challenges to sqa in oncology trials and potential mitigating strategies. 
method: a purposive international cohort of 71 expert stakeholders with experience in oncology trials was recruited, including 35 surgeons, 17 oncologists, 10 trial methodologists and 9 trial managers. semi-structured interviews were thematically analysed using grounded theory. spss was utilised to assess differences between trial stakeholders' opinions. results: 389 emergent themes were identified and 74 consensus themes emerged on qualitative analysis of stakeholder responses. key consensus challenges to implementation of sqa in oncology trials included: insufficient resources; limitations of surgical volume in centre selection; differing oncological beliefs and resistance to change adoption; overly prescriptive protocols and standardisation contributing to difficulty in surgeon recruitment; and cultural factors leading to difficulties in providing and receiving feedback. seminal consensus mitigating strategies to overcome challenges to sqa in oncology trials included: trial centre selection according to case volume (n = 31, 44%); requirement for specific centre attributes for inclusion in trials, including specialist centre designation and participation in national audit (n = 29, 41%); consideration of surgeons' learning curve in surgeon selection (n = 33, 46%); flexible standardisation of trial operating (n = 22, 31%); operation manual utilisation to aid standardisation of surgical interventions (n = 34, 48%); case monitoring using video (n = 22, 31%) or photographs (n = 11, 16%); direct intraoperative observation by an expert (n = 15, 21%); and histopathological assessment of resected specimens (n = 10, 14%). other advocated methods of monitoring surgical quality included: recording post-operative outcomes; lymph node yield; case report forms; and real-time data monitoring (n = 32, 45%). 
oncologists were significantly more likely to state the importance of standardisation of surgery in oncology trials (p < 0.05), and trial methodologists were significantly more likely to advocate consideration of surgeons' learning curve in surgeon selection (p < 0.05). conclusion: surveying international expert stakeholder opinion revealed a wide variety of perceived challenges across all domains of surgical quality assurance. proposed mitigating solutions require consensus opinion to formulate a framework to aid the design of sqa measures within future oncology trials. the research group did not register a single case of ega leakage, while 2 patients in the control group had a leakage, which was stopped by means of the 'endovac' system (p < 0.05). there were 2 cases of postoperative esophageal stricture developing 3 months after surgery in the research group, fewer than in the control group, which saw 6 cases of strictures of the ega (p < 0.05). 6 months after surgery, the number of post-operative strictures had increased in both groups, but was lower in the research group: 4 cases in the research group versus 11 cases in the control group (p < 0.05). at 12 months after surgery there were 5 cases of postoperative esophageal stricture in the research group, fewer than the 13 cases of strictures of the ega in the control group (p < 0.05). neither group had any cases of post-operative mortality. purpose: to investigate the prognostic effects and risk factors of the omission and delay of postoperative chemotherapy for stage ii/iii gastric cancer (gc), with the goal of providing a reference for interventions by related departments. methods: the clinicopathological data of 1520 patients undergoing radical gastrectomy for stage ii/iii gc were collected and retrospectively analyzed. 
we defined chemotherapy delayed until more than 60 days after radical gastrectomy, or completely omitted, as unacceptable chemotherapy initiation (uac group), while chemotherapy conducted within 60 days of radical gastrectomy was defined as acceptable chemotherapy initiation (ac group). survival between the two groups was compared, and the trends and risk factors of uac were analyzed. results: the total numbers of patients who underwent totally laparoscopic distal gastrectomy with uncut roux-en-y and delta-shaped billroth-i anastomosis were 244 and 214, respectively. the mean reconstruction time was longer for uncut roux-en-y than for delta-shaped billroth-i (30.5 ± 14.5 vs. 13.6 ± 10.3 min, p < 0.001). the uncut roux-en-y used more cartridges than the delta-shaped billroth-i anastomosis (6.9 ± 1.2 vs. 6.2 ± 1.0, p < 0.001). however, there were no significant differences in operation time, estimated blood loss, number of retrieved lymph nodes or postoperative course between the reconstruction methods. postoperative complications of clavien-dindo grade iiia or higher occurred as 22 cases (4.8%) of early postoperative complications and 14 cases (3.1%) of late complications. endoscopic findings showed excellent short- and long-term outcomes in terms of a very low incidence of bile reflux and reflux-induced remnant gastritis with uncut roux-en-y compared with delta-shaped billroth-i anastomosis. conclusions: uncut roux-en-y gastrojejunostomy was a useful reconstruction method with totally laparoscopic distal gastrectomy for cancer, especially for diverting enteral contents from the remnant stomach and preventing remnant gastritis. it is therefore recommended for young patients with early-stage disease who have a long time to live after distal gastrectomy for cancer. operative technique: the seromuscular layer above the tumor is dissected while the mucosa is kept unbroken. 
when the seromuscular layer is dissected all around the tumor, the full layer is lifted and the mucosa is stretched. the mucosa is then transected with a stapling device to execute full-thickness resection of the specimen. finally, the seromuscular defect is repaired by hand-sewn suture. results: since december 2015, clean-net has been performed in 57 patients with gastric smts. all tumors were resected en bloc without rupture. the operation time ranged from 50 to 220 min with an average of 101.7 min. the postoperative course was uneventful. microscopically the surgical margin was tumor-negative (r0 resection) in all cases. the margin width was small, with an average of 5.4 ± 2.5 mm. conclusions: clean-net is a useful option in the laparoscopic surgical treatment of gastric smt when excessive sacrifice of the healthy gastric wall surrounding the endophytic tumor should be avoided. background: the type of fundoplication, complete or partial, is still controversial in the surgical treatment of gerd. laparoscopic toupet (270° wrap) fundoplication has less post-op dysphagia and gas bloating compared to nissen fundoplication (360° wrap) and is advised as the procedure of choice when esophageal manometry findings are abnormal; however, it is considered by some less effective and more difficult to perform. the aim of this research was to determine the functionality and efficacy of the different types of fundoplication. methods: stomachs explanted from pigs weighing 45-60 kg were studied. two different studies of the les were performed: distensibility and failure point (occurrence of reflux according to volume added to the stomach). for both studies we first disrupted the lower esophageal sphincter using a rigiflex™ dilating balloon. we then performed three different fundoplications (nissen, toupet, dor) and measured the distensibility of the egj after each fundoplication. the failure point was determined following each fundoplication type. 
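the study design above yields paired measurements per explanted stomach (each specimen measured after each fundoplication type). the abstract does not name the statistical test used; for small paired samples like this, a wilcoxon signed-rank test is one plausible choice. a minimal sketch with invented readings (all values hypothetical, units arbitrary):

```python
# Hypothetical paired distensibility readings for 12 explanted stomachs,
# one value after a Nissen wrap and one after a Dor wrap on each specimen.
# These numbers are invented for illustration; they are not the study's data.
from scipy.stats import wilcoxon

nissen = [1.21, 1.48, 1.10, 1.42, 1.33, 1.57, 1.19, 1.50, 1.41, 1.28, 1.12, 1.54]
dor    = [1.80, 1.95, 1.62, 2.10, 1.91, 2.24, 1.70, 2.06, 2.12, 1.83, 1.60, 2.20]

# Paired, nonparametric comparison of the two wraps on the same specimens
stat, p = wilcoxon(nissen, dor)
print(f"W = {stat}, p = {p:.4f}")
```

with every dor value exceeding its paired nissen value, the signed-rank statistic is 0 and the exact two-sided p-value is small, mirroring the kind of paired significance the abstract reports.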
results: we used 12 pig stomachs for the distensibility study and 11 pig stomachs for the failure point study. there was no statistically significant difference between the nissen and toupet fundoplications when distensibility was measured; however, the egj was more distensible following dor fundoplication (p = 0.008 vs. nissen, p = 0.016 vs. toupet). when the failure point was measured, nissen fundoplication was significantly more effective than toupet, and toupet was significantly more effective than dor (p = 0.016 and p = 0.017, respectively). conclusions: we studied the differences in the mechanical effects on the egj following three different fundoplications, encompassing 360°, 270° and 180° of the esophagus. we demonstrated that there is a significant difference between dor fundoplication and nissen/toupet when distensibility is measured, but no difference in the distensibility of the egj following a 360° or 270° wrap. there was, however, a significant difference in effectiveness between all three fundoplications. these findings suggest that the 360° and 270° fundoplications have similar functionality while the 360° wrap mechanically prevents possible reflux, and support proponents of toupet fundoplication over nissen given the similar functional results with fewer post-op dysphagia and gas-bloating complications. surg endosc (2019) aim: to describe patients undergoing surgical treatment of incident gastro-oesophageal reflux disease and the use of anti-reflux treatment in a danish population-based cohort. methods: all adult danes undergoing upper endoscopy in 2000-2015 and receiving a diagnosis of gerd within 90 days were identified. patients with previously diagnosed gerd, peptic ulcer disease, barrett's oesophagus or cancer of the gastrointestinal tract were excluded. in this study, only patients undergoing anti-reflux surgery within two years of gerd diagnosis were subsequently included. 
age, sex, charlson comorbidity index (cci), anti-reflux surgery (primary and re-operative) and endoscopic dilatation were identified using the danish national patient registry. mortality was identified using the national civil registry. pharmacological treatment of gerd (proton pump inhibitors, h₂-blockers and other prescription anti-reflux drugs) as well as use of non-steroidal anti-inflammatory drugs (nsaids) and anti-thrombotic treatment were identified using the danish national prescription registry. all data were linked on an individual level using the unique identification number that all danish citizens are assigned at birth or first immigration. results: a total of 674 first-time fundoplications were performed, of which 98.1% were performed laparoscopically (n = 661) and 1.9% using an open technique (n = 13). at one-year follow-up, 4.9% (n = 33) had undergone endoscopic dilatation and 2.1% (n = 14) had undergone reoperation. the 90-day mortality was < 0.5%. patients had a median age of 46 years (18-80 years) and were predominately male (57.9%, n = 390). a total of 93.9% had cci 0 (n = 633). diagnoses were gerd with esophagitis (66.9%, n = 451), gerd without esophagitis (31.5%, n = 212) and gerd without specification (1.6%, n = 11). before the initial endoscopy, 91.7% (n = 618) used at least one type of anti-reflux drug, dropping to 32.2% (n = 217) in the year after anti-reflux surgery. however, even when censoring patients with barrett's esophagus or peptic ulcer disease after initial endoscopy and patients undergoing concomitant treatment with nsaids or antithrombotic drugs, 27.7% still used at least one type of anti-reflux drug after surgery. conclusion: in this population-based study, anti-reflux surgery was safe and lowered the use of pharmacological treatment. however, even when adjusting for competing reasons for use of anti-reflux drugs, 27.7% used at least one type of anti-reflux drug one year after surgery. 
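the individual-level registry linkage described above (a surgical cohort joined to prescription records on the unique personal identifier, then flagging anti-reflux drug use in the year after surgery) can be sketched as follows. all registry names, columns and rows here are invented for illustration; this is not the study's data model:

```python
# Toy sketch of registry linkage on a unique personal identifier.
# Tables and values are hypothetical stand-ins for national registries.
import pandas as pd

patients = pd.DataFrame({
    "person_id": [1, 2, 3],
    "surgery_date": pd.to_datetime(["2005-03-01", "2010-06-15", "2012-09-30"]),
})
prescriptions = pd.DataFrame({
    "person_id": [1, 1, 2, 4],
    "drug": ["PPI", "PPI", "H2-blocker", "PPI"],
    "dispense_date": pd.to_datetime(["2006-01-10", "2004-12-01", "2011-01-05", "2010-02-02"]),
})

# Link every patient to their prescriptions via the personal identifier
linked = patients.merge(prescriptions, on="person_id", how="left")

# Keep only dispensings in the year after surgery
after = linked[
    (linked["dispense_date"] > linked["surgery_date"])
    & (linked["dispense_date"] <= linked["surgery_date"] + pd.DateOffset(years=1))
]
users = after["person_id"].nunique()
print(f"{users} of {len(patients)} patients redeemed an anti-reflux drug within one year")
```

the left join keeps patients with no prescriptions at all (so non-users still count in the denominator), which is the usual design choice for this kind of cohort question.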
the new approach to perform nissen fundoplication m. paranyak, v. grubnyk surgery, odessa national medical university, odessa, ukraine nearly 10% of patients who undergo laparoscopic anti-reflux surgery need surgical reintervention at long-term follow-up, mostly because of hiatal hernia (hh) recurrence, wrap migration or disruption. purpose: the aim of our prospective study was to evaluate and compare several techniques of wrap fixation and determine whether modified nissen fundoplication (mnf) reduces the failure rate in long-term follow-up. materials and methods: this was a prospective, randomized, controlled trial. from november 2012 to october 2014, one hundred and thirty-eight gerd patients who underwent anti-reflux surgery were divided into two groups. the exclusion criterion for our study was diagnosed type iii hiatal hernia (hh). in group i, which included 87 patients, we performed the following manipulations: nf was supplemented by suturing the wrap to the diaphragmatic crura (52 patients) on each side using two non-absorbable stitches, a technique that permits creation of a more symmetrical wrap. in cases of weak crura or short esophagus (35 patients), the fundoplication wrap was sutured to the body of the stomach using two non-absorbable stitches on each side. the control group (51 patients) underwent classic nissen fundoplication (nf) without wrap fixation. all patients were assessed before and after surgery using validated symptom and quality-of-life (gerd-hrql) questionnaires, 24-h impedance-ph monitoring and barium swallow. results: baseline characteristics were similar between groups. there were no conversions to open procedure and no mortality. mean hospitalization was 2.7 ± 1.4 days. at 41.6 months (range 18-57) of follow-up, the overall rate of complications was 1.14% after mnf (1 hh recurrence) and 7.84% after nf (3 hh recurrences, 1 slipped wrap). 
patients in the mnf group showed significant improvement in gerd-hrql score, from 19.3 ± 13.2 preoperatively to 4.3 ± 3.9 postoperatively (p < 0.001). complete ppi independence was achieved in 91%. in the control group the mean gerd-hrql score declined from 18.7 ± 11.9 preoperatively to 9.3 ± 7.7 postoperatively, and postoperative ppi treatment was necessary in 29%. conclusions: according to our study, mnf minimized the risk of a slipped wrap and intrathoracic migration of the wrap, and can have a positive impact on reducing the failure rate of laparoscopic anti-reflux surgery. aims: comparative evidence across laparoscopic antireflux procedures does not exist. the aim of this project was to identify direct comparative evidence between laparoscopic antireflux procedures and synthesize evidence using network meta-analytical methods. methods: the databases of medline, amed, central and opengrey were interrogated. pairwise meta-analyses for each pair of interventions using a random-effects model and network meta-analysis in stata were performed using the mvmeta command and self-programmed stata routines. differences between direct and indirect evidence were explored by comparing direct and indirect estimates through computing the inconsistency factor within each closed loop of evidence. the ranking probabilities for all treatments of being at each possible rank for each intervention were computed using the mvmeta command in stata. a hierarchy of the competing interventions was obtained using rankograms. quality of evidence was assessed using grade-nma and the cinema application. results: forty-three publications reporting on 32 randomized trials and some 1892 patients were identified. the network of treatments formed a closed loop between 270°, 360° and anterior 180°; a star network between 360° and other treatments; and between anterior 180° and other treatments. 
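the rankogram step described in the methods above can be illustrated generically: given repeated draws of each treatment's effect, rank the treatments within each draw and tabulate how often each treatment occupies each rank. the sketch below uses the wrap names from the abstract but entirely invented effect values, and is a conceptual stand-in rather than the mvmeta routine itself:

```python
# Toy rankogram: ranking probabilities from simulated treatment-effect draws.
# Means/SDs are invented; lower effect = better (e.g. log-odds of dysphagia).
import random

random.seed(1)
treatments = ["360", "270", "ant180", "ant90"]
effects = {"360": (0.0, 0.3), "270": (-0.4, 0.3),
           "ant180": (-0.5, 0.3), "ant90": (-0.8, 0.3)}

n_draws = 5000
rank_counts = {t: [0] * len(treatments) for t in treatments}
for _ in range(n_draws):
    draw = {t: random.gauss(mu, sd) for t, (mu, sd) in effects.items()}
    ranked = sorted(treatments, key=lambda t: draw[t])  # rank 1 = best
    for rank, t in enumerate(ranked):
        rank_counts[t][rank] += 1

for t in treatments:
    probs = [c / n_draws for c in rank_counts[t]]
    print(t, [f"{p:.2f}" for p in probs])
```

plotting each treatment's row of rank probabilities gives the rankogram; statements like "anterior 120° had a 49% probability of being the best treatment" are read directly off the rank-1 column of such a table.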
laparoscopic 360°, 270°, anterior 180° and anterior 90° wraps were equally effective in the control of heartburn, supported by low quality of evidence according to grade-nma. the odds for dysphagia were lower for anterior 90° (high quality evidence), anterior 120° (moderate quality evidence), 270° (moderate quality evidence) and proton-pump inhibitors (moderate quality evidence) compared to 360°. the odds for gas-bloat were lower for 270° and anterior 90° compared to 360° (low quality evidence). the odds for regurgitation, morbidity and reoperation were similar across treatments, albeit associated with very low quality evidence. anterior 120° had a 49% probability of being the best treatment in terms of dysphagia. conclusion: under consideration of treatment effect estimates, evidence quality as assessed with grade-nma and other parameters, anterior 90°, anterior 120° and 270° should be preferred over 360°. further research needs to focus on the comparison between 90° and 120°/270°. aims: we have recently demonstrated that the tension of crural closure can be reliably measured intraoperatively (alsgbi conference december 2018). the aims of this study were to further characterise tension at the diaphragmatic hiatus from our prospective pilot study of 72 patients. methods: a prospective analysis was performed of patients undergoing laparoscopic hiatal hernia repair between april 2017 and december 2018. 72 patients underwent crural tension measurement intra-operatively. 24 patients had a pre-operative ct scan of the abdomen within one year of surgery. hiatal surface area (hsa) was measured intraoperatively and a sauter fh50 universal digital force gauge was used to measure the tension of crural closure during cruroplasty. outcome measures included the mean tension of the crural closure and the presence of muscle splitting during the cruroplasty. results: for all patients, the mean crural tension measurement was 2.93 n and the mean hsa was 543 mm². 
pre-operative ct was positively correlated with post-dissection intra-operative hsa (r = 0.5402, p = 0.0064); however, the strength of association was weak (r² = 0.2918) and ct consistently overestimated the size of the hiatal defect intra-operatively (mean of differences 404 mm², p = 0.0016). crural tension was positively correlated with age (r = 0.3321, p = 0.0044), hiatal height (r = 0.6023, p < 0.0001), hiatal width (r = 0.766, p < 0.0001) and hsa (r = 0.7753, p < 0.0001). crural tension was correlated with the hiatal width-to-height ratio to assess the shape of the defect, and there was positive correlation (r = 0.4072, p = 0.0004). tension was calculated for the posterior and anterior halves of the suture cruroplasty; anterior tension was significantly higher than posterior tension (3.26 n vs 2.59 n, p < 0.0001). 16 patients had evidence of muscle splitting during the cruroplasty. the group with muscle splitting was significantly older (66 vs 53, p = 0.0029), had larger hsa (910 mm² vs 347 mm², p < 0.0001) and higher crural tension (5.69 n vs 2.14 n, p < 0.0001). the lowest observed mean crural closure tension causing muscle splitting was 3.52 n. conclusion: there is now a possibility to optimise this operation with objective measures, 100 years after it was first described. initial findings suggest that crural closure up to ~4 n could be the permissible tension threshold for suture cruroplasty, and higher tensions may benefit from the use of mesh reinforcement. background: endoscopic submucosal dissection (esd) and endoscopic full-thickness resection (eftr) are advanced endoscopic techniques which can be time-consuming using traditional endoscopic instruments. a new endosurgery platform, designed by fortimedix surgical, was developed featuring flexible articulating instruments for use in combination with a standard flexible endoscope. the platform is intended to perform endoscopic cutting, dissecting and hemostasis. 
aim: evaluate feasibility of the platform in the upper gi tract. project description: the platform was tested in a dry esophageal model as well as in a second series with a porcine esophagus and stomach. the system has an external docking station affixed to the operating table to stabilize both flexible instruments for the right and left hand of the surgeon. at the tip of the endoscope, a cap containing instrument lumens is attached to allow advancing and removing the flexible instruments. the endoscope with the cap and instrument lumens attached is advanced via an overtube with an outer diameter of 18.5 mm. in the first series, flexibility and range of motion of the end-effectors were assessed. additionally, the ability to advance the instruments to the intraluminal target area from the docking station and along the scope was evaluated. in the second series, the functional capabilities of the system and instruments were evaluated in a porcine model. preliminary results: in the dry model, the platform was adequately deployed to the target; range of motion was then tested, as well as cutting and grasping of the gastric wall, with instrument triangulation achieved. the grasping forceps provided enough force to pull the mucosal wall and expose the dissection plane. in the pig model, the distal esophagus and stomach could successfully be accessed and the platform deployed. esd was performed using newly designed flexible articulating scissors, dissection hook and graspers with good triangulation and sufficient grasping force with traction/counter-traction. the new fortimedix surgical endo-surgery platform applied to a standard flexible endoscope is feasible for performing esd. future studies are planned to determine the learning curve and compare it to traditional endoscopic instruments. background: in laparoscopic surgery, we usually observe the organs in the same direction to avoid a mirror-image situation. 
therefore, we are unable to recognize how far the dissection has proceeded on the other side of the target organs or lesions, especially when the plane of dissection is under the mesentery or organs. this makes it difficult to understand how far the dissection has progressed and how much more dissection is needed. aim: to solve this problem, we developed a laparoscopic device with tip illumination. project description: the device consists of a long, narrow part made of polycarbonate resin and a battery-powered light-emitting diode that illuminates the tip by shining light through the polycarbonate resin. during surgery, the tip of the device is inserted into the deepest part of the dissection area, and the transmitted light indicates how far the dissection has progressed. the tip of the device has a prism structure and light is emitted in a direction perpendicular to its axis. tip position can thus be clearly identified even with insertion in the same direction as the laparoscopic view. to verify the utility of this instrument, laparoscopic surgeries were performed in a porcine model and cadavers. preliminary results: we performed several laparoscopic operations, such as the medial-to-lateral approach to the white line of the left side of the descending colon for sigmoidectomy, dissection of the posterior surface of the pancreas to the upper edge of the pancreatic body or splenic artery for distal pancreatectomy, and separation of the anterior surface of the inferior vena cava from the liver to the area between the right and middle hepatic veins for right hepatectomy. we quickly and easily identified the deepest part of the dissection area even when identification would have been difficult using other techniques, such as placing gauze in the deepest position, inserting forceps into the dissection area or simply depending on the experience of the operator. 
background: recent advancements within surgery have seen artificial intelligence transform traditional approaches. robotic assistive devices have demonstrated particular success as safe and cost-effective, and are widely supported by industry and local government as a step closer to the future standard of practice. an example of seamless and touchless robotic assistive technology is based on interactive eye-tracker glasses worn by the surgical team, enabling the team to perform wider surgical tasks more efficiently and with reduced human error. we introduce a perceptually-enabled, smart operating room (smart-or) based on a novel real-time framework for theatre-wide 3d gaze localisation in a mobile fashion. this framework enables dynamic gaze-based user interaction with a robotic scrub nurse to facilitate meaningful practical integration of human and technology intra-operatively. aims: we tested participant acceptability of a novel robotic scrub nurse during simulated surgery. project description: surgeons performed segmental resection of pig colon and handsewn end-to-end anastomosis while wearing eye-tracking glasses to select surgical instruments on a screen. the robotic scrub nurse (rn) picked up and transferred the instrument to the surgeon. the study compared a human nurse (hn) vs the rn. gaze-screen interaction was based on a 3d gaze framework we developed with a synergy of conventional wearable eye-tracking, a motion capture system and fixed-in-space rgb-d cameras for real-time 3d reconstruction of the environment. nasa-tlx and van der laan's technology acceptance questionnaires were collected and analysed using anova. preliminary results: overall, 7 teams of surgeons (st) and scrub nurses (sn) participated. nasa-tlx feedback for st and sn revealed no significant difference in mental, physical or temporal demand. importantly, st and sn reported no significant difference in overall task performance. 
st reported significantly more frustration with the rn vs the hn. van der laan's scores showed positive usefulness and satisfaction scores in using the rn platform. overall, all outcomes were more positive by sn vs rn. conclusions: this is the first platform of its kind. overall, quantitative and qualitative feedback was positive. the source of frustration has been understood and we believe it can be improved by appropriately modifying robot behaviour. importantly, there was no difference in perception of performance. background: endoscopic tumor resections in the gi tract may be facilitated by more advanced instruments for dissecting and suturing. we have focused on developing an endoscopic suturing technique using a standard flexible pediatric endoscope with new, flexible instruments allowing for complex end-effector movements. aim: perform flexible endoscopic suturing using a standard flexible scope in the gi tract. project description: a standard flexible pediatric endoscope and a standard gastroscope were used for testing the new technique. via an overtube, the endoscope and newly designed fortimedix surgical flexible instruments (needle holder; grasper) with a diameter of 5 mm were inserted into the esophagus. suture training was performed in an experimental setting in a box in the dry lab and in a porcine model. the flexible needle holder was advanced into the esophagus next to the scope, and a suture of the esophageal wall was performed, followed by extracorporeal knot-tying with 3 knots. the test series consisted of training with both resident trainees and surgeons to evaluate the learning curve. each participant performed sutures on the box model and in the pig esophagus. feasibility, duration of the different steps and handling problems were documented. preliminary results: test series 01 (box training on esophago-gastric explant) with prototype 01 showed good feasibility. suturing was possible in 9 out of 10 attempts. 
median duration for a single bite: 6 min (5-30); knot-tying: 5 min (2-8). test series 02 (training in pig model) with prototype 02 showed improved feasibility with better flexibility of the instrument shaft: median duration of a double bite: 8 min (7-15); knot-tying: 2 min (1-5); overall duration of an intraluminal esophageal double-bite suture and closing with 3 knots: median 13 min (12-20). the new flexible endosuture instruments seem feasible to use and perform dependable intraluminal sutures. the training period and learning curve are short, and the objective is to apply this system clinically for closure of perforations and fistulas. 1 school of mechanical and aerospace engineering, nanyang technological university, singapore, singapore; 2 general surgery, national university hospital, singapore, singapore; 3 gastroenterology, national university hospital, singapore, singapore; 4 surgery, chinese university of hong kong, hong kong, hong kong. background: ideally, endoscopic suturing should mimic surgical closure, as the latter is stronger than most endoscopic closure devices. however, endoscopic suturing is challenging due to the confined endoluminal space and the lack of dexterity of current endoscopic instruments. we have developed a novel robotic suturing device to overcome these problems. aim: this animal study aims to demonstrate the feasibility of this device in closing perforations. method: the trial was conducted on an anaesthetized live pig. a double-channel colonoscope was first inserted into the rectum. following saline lift, a 10 mm submucosal incision was created in the rectum to simulate a perforation. the robotic suturing device and grasper were inserted into the two colonoscope channels, allowing the endoscope to remain in position for tool exchanges or needle reloading. both end-effectors were intuitively tele-operated by the user via a robotic master console.
this robotic suturing device manipulated a curved, double-pointed needle (with a 10 cm 3-0 vicryl suture) to penetrate tissue at the desired orientations. the needle could be switched between the two jaws of the device through a locking mechanism. this facilitated passing the needle through tissue to form stitches or through suture loops to form surgical knots. the articulated joints and five degrees of freedom allowed dexterous steering to reach targets and triangulation with other tools in a confined space. the robotic grasper facilitated handling of tissue and suture. result: a total of four running stitches were performed and secured with a surgical knot by passing the needle through suture loops. the suture was cut and the needle was removed by the robotic grasper through the channel. stitching and knot-tying required 11 min and 4 min, respectively. there were no complications. conclusion: our novel endoscopic robotic device can suture perforations resulting from complex endoscopic procedures. as our suturing method is similar to laparoscopic and robotic suturing, closure using our device is expected to be as strong as a surgical through-and-through closure. when developed further, this device could be used to close full-thickness resection sites and orifices in transluminal endoscopic surgery. modelling a collaborative robot with the ieee 11073 sdc standard for combined focused ultrasound and radiation therapy. j. berger, m. unger, l. landgraf, a. melzer, medical faculty, university hospital leipzig, innovation center computer assisted surgery, leipzig, germany. background: surgical robotics require smooth integration into the operating room (or). for this purpose, the ieee 11073 sdc (service-oriented device connectivity) standard has been developed in the or.net project.
in preparation for combined focused ultrasound and radiation therapy (fus-rt), we have presented concepts and evaluations for positioning ultrasound and interventional devices with collaborative kuka arms. however, safe intraoperative cooperation with multiple different or devices (e.g. an irradiation unit) requires a more sophisticated exchange of the robot's information and functionality. aim: to realize safe clinical integration, the aim of this work is to implement and evaluate a dynamic connection between the kuka robots and other devices using the vendor-independent sdc communication standard. project description: a kuka lbr iiwa 7 r800 robot (kuka ag, germany) was modeled in the sdc standard for medical device communication. the interconnection with other devices was implemented and evaluated on a mobile platform to position a clarius l7 wireless ultrasound transducer (clarius mobile health corp, canada). all necessary information about the robot was represented in the medical device description of the sdc standard to be shared via the network. for each joint of the robot arm, the position, torque, stiffness, damping, velocity and functional state were represented, resulting in a total of 42 parameters. the software was implemented in c++ on a standard pc, accessing the kuka controller cabinet with ros (robot operating system) via ethernet. the accessibility of each parameter, as well as activation commands for planning and movement, was tested with an sdc-consumer application. preliminary results: the sdc-provider functionality of the robot was successfully implemented, allowing for dynamic changes of the robot state during interventions. all sdc-compatible appliances in the robot's network can react to state changes and send movement and planning commands to the robot via activations. after testing, 100% of the 42 defined parameters were safely accessible.
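the 42 shared parameters follow directly from the device model: 7 joints, each exposing 6 metrics. as a hypothetical sketch (plain python, not the sdc mdib/xml representation the standard actually prescribes), the metric handles can be enumerated like this:

```python
# hypothetical sketch: enumerating per-joint metrics as in the device description
JOINT_METRICS = ("position", "torque", "stiffness",
                 "damping", "velocity", "functional_state")

def build_device_description(n_joints=7):
    """List the metric handles a 7-axis arm would expose in its
    medical device description: 7 joints x 6 metrics = 42."""
    return [f"joint{j}.{m}"
            for j in range(1, n_joints + 1)
            for m in JOINT_METRICS]

handles = build_device_description()   # 42 handles, e.g. "joint1.position"
```

in the real system each handle would map to an sdc metric descriptor whose state is updated by the provider and observed by consumers on the network.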
implementing medical device communication for the kuka robot enables its integration into any networked operating room that supports the sdc standard. it is, therefore, ready to be set up and evaluated for the application of fus-rt in a clinical environment. background: assessment of perfusion of the left colon with fluorescence during anterior resection for cancer changes surgical decisions in up to 19% of cases. use of fluorescence has been shown to be associated with lower leak rates and improved short- and long-term outcomes with reduced costs. given the high incidence of colorectal cancer, fluorescence-guided perfusion assessment could be of great importance in contemporary surgical practice. however, there is currently no standardisation of this technique, which represents a significant limitation to widespread adoption. aim: to standardise fluorescence-guided perfusion assessment in rectal anterior resection through a computer vision algorithm. project description: videos were collected by a single surgeon in a referral centre for colorectal cancer treatment. perfusion assessment was used before proximal colon division to identify the best location for transection. a bolus of indocyanine green was injected intravenously and a near-infrared camera was used to assess perfusion through fluorescence. photographs of fluorescent imaging of the colon were analysed using an unsupervised learning algorithm, k-means clustering. the first step was to digitally subtract all background pixels, leaving only the area of interest of the colon. this area was then sub-segmented into 2 clusters corresponding to perfused and non-perfused areas. a mathematical model based on the two cluster centres was applied to select the area for transection with optimal perfusion of the proximal colon. preliminary results: representative images of the proximal colon under perfusion assessment were presented to 8 expert surgeons.
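the clustering step described above, k-means with 2 clusters on the foreground fluorescence pixels, can be sketched with a minimal 1-d k-means; the function name and initialisation are our own assumptions, not the study's code:

```python
import numpy as np

def perfusion_clusters(intensities, iters=50):
    """Split foreground fluorescence intensities into two clusters
    (non-perfused vs perfused) with a minimal 1-D k-means."""
    x = np.asarray(intensities, dtype=float)
    centres = np.array([x.min(), x.max()])      # initialise at the extremes
    for _ in range(iters):
        # assign each pixel to the nearest centre
        labels = np.abs(x[:, None] - centres[None, :]).argmin(axis=1)
        new = np.array([x[labels == k].mean() for k in (0, 1)])
        if np.allclose(new, centres):
            break
        centres = new
    return labels, centres   # label 1 = brighter (better perfused) cluster
```

the two returned centres correspond to the sub-cluster centres from which the transection area is then derived.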
the optimal point for transection was selected based on their clinical judgement among previously delimited areas indicated by random letters. this was compared with the results of the automated segmentation using the algorithm (fig. 1). the area identified for section by the algorithm included the area selected by the expert surgeons in 87.5-100% of test cases. these results need further validation due to the high risk of overfitting. next steps include the collection of multicentre data with a standardised fluorescence perfusion assessment. after robust training, the algorithm will be validated on real-time clinical data to ensure improved outcomes for patients, which is our ultimate goal. background: endoscopic submucosal dissection (esd) is a flexible endoscopic technique that allows en bloc removal of lesions of the gastrointestinal (gi) tract. these procedures are typically time consuming due to the difficult control of the tools, often requiring around 95 min to remove lesions that can reach 3-4 cm in diameter. the probability of intestinal perforation exceeds 18% and the hemorrhage risk ranges from 3.5% to 15.5%. a flexible robotic endoscope may offer a solution to overcome these limitations by improving the degrees of freedom (dof) and operational efficiency. aim: within this clinical panorama, the aim of this project is to present the development of a novel miniaturized robotic device to be coupled to the tip of a traditional endoscope for the surgical dissection of gi neoplasms. project description: the robotic platform consists of the miniaturized robot, the actuator housing (hereafter called the external platform), the control unit and the master console (i.e., two geomagic touch phantoms) to allow user driving and control (figure 1a). during the operation, one surgeon stands close to the patient to maneuver the endoscope, exploring the gi tract and reaching the target area.
another surgeon operates the miniaturized robot through the master console, carrying out the surgical procedure. the robot has been designed to be coupled to the tip of traditional flexible endoscopes of 14.5 mm in diameter. it exploits the flexibility of the endoscope for navigation through the intestine and integrates two active robotic arms (i.e., cautery and gripper), extending the dofs and thus enhancing efficiency during complex tasks such as manipulation and surgical tissue dissection. furthermore, the endoscope provides the optical system for visual feedback and one working channel for conventional instruments. preliminary results: firstly, a mock-up that faithfully reproduces the miniaturized robot was realized using a 3d printer (projet mjp 3600, 3d systems, inc.) to verify the feasibility of the design solution. after verifying the potential of the 3d printed prototype, a final device with the same features (i.e., dof and geometry) as the 3d printed prototype was designed, fabricated and assembled (figure 1b). background: virtual and augmented reality have been widely used in many fields, mainly for entertainment purposes. we think that it could be beneficial to use augmented reality in medical practice. aim: the aim of this study was to evaluate the usefulness of 3d holographic images of patient anatomy displayed using augmented reality goggles during endovascular aortic repair (evar). project description: one of the major challenges during endovascular procedures is working from two-dimensional x-ray images of three-dimensional vascular anatomy. using 3d holograms of patient anatomy could be beneficial during the evar procedure and could make orientation in the vascular anatomy easier for the surgeon. we performed two endovascular aortic repairs with the assistance of microsoft hololens smart glasses using augmented reality. we used the carna life application created by the polish company medapp.
this was one of the first uses of holograms during vascular procedures in the world (the second and third stent-graft implantations using holographic imaging worldwide). results: two patients with abdominal aortic aneurysms, a 79-year-old male and a 74-year-old female, were operated on. holograms of the patients' anatomy, made from preoperative angio-ct scans by the polish company medapp, were displayed during the procedures using microsoft hololens. the holograms could be displayed in any place and configuration using augmented reality, which means that the images did not interfere with the surgeon's field of vision. microsoft hololens uses voice commands, which permits the surgeon to stay sterile. the stent-graft implantations were successful. both patients were discharged three days after the procedure and the hospitalization was uneventful. seeing precise three-dimensional reconstructions of the patient's vascular anatomy certainly helped us navigate the vascular tree. we believe that in the future this technology will enable reduced operation time and radiation exposure. background: interaction with electronically controlled operating room (or) systems embedded in modern surgical environments is everyday practice for surgeons performing minimally invasive surgery (mis). while there is a non-sterile operating nurse available in the or, capable of interacting with these systems upon request by the surgeon, this indirect control is mostly slow, prone to error and disruptive to surgical workflow. facing an unanticipated and unwanted outcome may cause distress emotions. distress emotions are undesirable when performing surgery, since they may impact available cognitive workload. furthermore, they may result in negative communication, hampering or-team empowerment and effective leadership. both factors are known to negatively influence quality and safety in the or.
aim: the aim of the ted trial is to investigate which setup best enables surgeons to interact with the endoscopic operating room setup during surgical procedures. as a result, disruptions of workflow, delays and errors may be reduced. outcome parameters will be objectified using medical data recorder (mdr) derived output and biometric analysis using hexoskin. subjective evaluation of outcome parameters is done using questionnaires. project description: the tedcube system is a plug-and-play device enabling wearable sensors to act as a wireless alternative to a regular computer mouse, thereby enabling direct hands-free and sterile control of the or. the study is an observational trial with three arms: intervention group 1) direct interaction by the surgeon with the or environment using tedcube and a myo armband; intervention group 2) direct interaction of the surgeon with the or environment using tedcube and a plantronics wireless microphone headset. the third arm is the control group, using indirect interaction of the surgeon with the or environment via third-person computer interaction. the main endpoint of the study is the number of workflow disruptions due to the operation of laparoscopic or equipment. secondary endpoints are error rate, delay, team communication, subjectively reported frustration and satisfaction with the system, and objectively measured stress as a symptom of frustration and anger as distress emotions. preliminary results: primary and secondary endpoints of the study are compared among groups. it is anticipated that reduction of miscommunication, error and delay may result in a reduction of distress emotions. trial start is expected q1 2019. anticipating automated intraoperative tissue recognition: intraoperative tissue classification using hyperspectral imaging and machine learning. background: iatrogenic injuries may occur despite sound expertise in surgical anatomy.
hyperspectral imaging (hsi) is an emerging optical method combining a camera system with a spectrometer. hsi analyzes the optical properties of tissues and acquires 3d data sets with two spatial dimensions (x, y) and one spectral dimension (λ). the data sets contain information about tissue physiology, composition, and perfusion. these spectral features, coupled with machine learning algorithms, might allow for automatic tissue recognition. aim: to assess the ability of an hsi-based machine learning model to discriminate the hyperspectral features of different tissues during neck and abdominal surgical procedures. methods and procedures: fourteen pigs underwent laparotomy (n = 6) or neck dissection (n = 8). twenty data sets were acquired in vivo from abdominal organs and 20 from neck structures by means of a customized hyperspectral camera (diaspective vision, germany). different anatomical structures were manually outlined by a surgeon using an image manipulation software (gimp). each pixel contained a hyperspectral curve and each curve was composed of 100 bands (from 500 to 1000 nm with 5 nm resolution). the curves were normalized using the standard normal variate method. a logistic regression machine learning (ml) algorithm was used to train the model to discriminate tissues based on the hsi spectral features. the efficacy of the prediction model was tested using k-fold (k = 10) cross-validation. results: a large number of tissue-related hyperspectral curves could be extracted (4675 thyroid, 9417 vagal nerve, 48546 fatty tissue, 30486 cartilage, 16001 carotid artery, 81567 muscle, 5149 carotid vein, 7148 portal vein, 22973 biliary tract, 73940 gallbladder, 1874 hepatic artery, 16712 pancreas, 2412 duodenum, 34313 abdominal adipose tissue). the algorithm took 4 min to 'learn' all data sets, and prediction was provided as an immediate output. overall, prediction accuracy was 92% and 89% for neck and abdominal structures, respectively.
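the standard normal variate normalization named above is a simple per-curve operation: each spectrum is centred on its own mean and scaled by its own standard deviation, removing multiplicative scatter effects before classification. a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each hyperspectral
    curve independently (rows = pixels, columns = spectral bands)."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)   # per-curve mean
    std = spectra.std(axis=1, keepdims=True)     # per-curve spread
    return (spectra - mean) / std
```

after snv, curves that differ only by an overall brightness or scale factor become identical, so the classifier sees spectral shape rather than intensity.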
in particular, biliary ducts could be identified with 93% accuracy and the vagal nerve with 89% accuracy (see figure 1 for details). background: a gaze-controlled robotic endoscope is an innovative technology with myriad potential applications in the rapidly advancing field of flexible endoscopy. improvements to the current flexible device that allow examination of the gastrointestinal tract whilst minimising procedural discomfort and complications are desirable. aim: to use a gaze-contingent framework to manipulate a flexible endoscope through a simulated upper gastrointestinal tract (ugit) model. description: a flexible gastroscope (karl storz 13801 pks) was attached to a ur5 6-axis robotic arm (universal robots), mounted onto a rail and placed on top of a surgical table. two cogwheel-shaped dials were 3d printed and placed onto the up/down and left/right wheels on the head of the gastroscope (figure 1). robotization of these controls was achieved by using two motors (dynamixel rx-24f) to steer the distal tip. this system allows users to operate a robotised flexible endoscope using gaze control. gaze interaction with the screen was based on a 3d gaze framework we developed through the synergy of conventional wearable eye-tracking, a motion capture system and fixed-in-space rgb-d cameras for 3d reconstruction of the environment. users are able to control endoscope movements without handling the device. the distal tip of the gastroscope was controlled using eye-gaze technology. the ur5 robot was used to enable shaft rotation (initiated by fixed head movements), and linear movements were triggered using a joystick handle (up for forward movement, down for endoscope withdrawal). pause and retroflexion of the endoscope are achieved by moving the joystick left and right, respectively.
users were asked to navigate an endoscope through a ugit model (chamberlain group), simulating a diagnostic gastroscopy using gaze control and targeting ten points scattered through the stomach. results: four expert endoscopists and one novice used gaze control to successfully navigate a gastroscope through the simulated ugit. all were able to intubate the oesophagus and accurately locate ten targets placed in the fundus, body, antrum and pylorus of the stomach. conclusion: gaze-controlled endoscopy is a feasible concept. it allows ergonomic, user-friendly and intuitive control whilst maintaining the benefits of a flexible endoscope. background: image-guided needle biopsies and histopathological evaluation are the gold standard for the diagnosis of liver neoplasms. most often, however, these are reserved for suspicious, but not diagnostic, situations. radiomics may help to characterize tumor biology by correlating imaging features with relevant tumor-biology information. features derived from radiomic analysis may provide complementary information to support clinical decisions, especially in situations where tissue analysis cannot be performed or is inconclusive. aim: the goal of our technology is to exploit computational capabilities for image analysis in order to identify radiomic features useful for characterizing liver lesions and to identify relevant information related to patient prognosis. project description: 17 patients derived from an internal database and 12 patients randomly extracted from the cancer archive liver dataset were included in this study. 56 lesions were extracted from those volumes using expert annotations (31 secondary vs 25 primary; 34 well differentiated vs 22 non-well differentiated). lesions were then split into training and testing sets. first-order statistical features were computed and a lasso regression step was performed to reduce the number of features.
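the lasso step used here for feature reduction keeps only features whose coefficients survive an l1 penalty. as a generic illustration (iterative soft-thresholding, not the study's code; names and the threshold are our assumptions):

```python
import numpy as np

def lasso_select(X, y, lam=0.1, iters=500):
    """Pick informative radiomic features via a minimal lasso solved
    with ISTA: gradient step on least squares + soft-thresholding."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n, p = X.shape
    w = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n            # least-squares gradient
        w = w - step * grad
        # soft-threshold: the l1 penalty zeroes out weak coefficients
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return np.flatnonzero(np.abs(w) > 1e-6)     # surviving feature indices
```

the surviving indices are then the reduced feature set fed to the downstream classifiers.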
both logistic regression and random forest models were built using cross-validation to predict the target classes on the test set. preliminary results: only 2 features, namely the energy and the volume of the lesion, were sufficient, when combined in either model, to predict the differentiation grade on the test set with an f1-score of 0.74 (± 0.07). we are currently working on adding higher-order statistical features to the analysis in order to differentiate primary from metastatic tumors and to identify complementary features that may assist clinical decisions in patients with inconclusive hepatic lesions. objective of the technology or device: ideally, the use of medical simulators could provide trainees with initial background information about indications for procedures and endoscopic technique, and early hands-on training experience that could shorten the initial critical learning curve. the rationale for using ex vivo models is that at the beginning of the learning curve, the most important issue is having initial exposure to the basic movements and maneuvers. our objective is to create a stomach model from renewable polymer which would closely simulate a normal human stomach with gastric pathology for endoscopic diagnostic or interventional skill acquisition/evaluation. description of the technology and method of its use or application: the stomach model is built in several steps; the first is the in-silico design of the overall shape, after which we 3d print the two positive halves of it. the interior detail is obtained by shaping the 3d-printed parts with ceramic putty. once concluded, this elaborated part serves as a template to build injection bleeding moulds. in the injection bleeding moulding, a mesh is placed between layers in order to provide structural attachment points, such as stitches, or several pathological models that are incorporated after the casting process.
for this purpose we have developed polyp moulds and fistula structures to which endoscopic clamps can be attached. the two halves are closed once the pathological models are placed inside, via thermic fusing and stitching, creating a leak-proof stomach model. preliminary results: our models were evaluated by 8 international experts at ircad/ihu france during an interventional endoscopy course and were favorably accepted for further trials at these prestigious institutions. conclusions/future directions: a new endoscopic training model of the stomach was made and will be evaluated and validated for feasibility in mastering diagnostic and interventional endoscopic skills. clinical trials will be necessary to compare training on the simulator with traditional methods of training in endoscopic procedures. background: endoscopes are the eyes of surgeons in minimally invasive surgery (mis). conventional endoscopes are mostly chopstick-like and are steered by an assistant. this limits the field of view and results in issues such as endoscope-instrument fencing and surgeon-assistant coordination problems. existing robotic endoscope holders enable solo surgery; however, the endoscope still blocks instrument movement and impairs operational safety. a flexible endoscope such as the endoeye provides angulation at the tip and can enlarge the field of view; however, steering its view is much more complex than with a rigid endoscope. aim: to provide an intuitive robotic flexible endoscope with enhanced safety. project description: in this work, we present a robotic flexible endoscope for mis with enhanced safety. the proof-of-concept system comprises a flexible endoscope module and a robot manipulator. the endoscope contains a proximal rigid shaft and a distal flexible bending section. it is installed onto the patient side manipulator (psm) of the da vinci research kit (dvrk). visual servoing is adopted to achieve autonomous instrument tracking.
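in image-based visual servoing of this kind, the control loop drives the bending tip so that the tracked instrument stays near the image centre; adding a deadband keeps the scope still when the error is small, which limits unnecessary motion. a simplified proportional sketch under our own assumptions (gain, deadband and units are illustrative):

```python
import numpy as np

def servo_command(target_px, image_size=(640, 480), gain=0.5, deadband=20):
    """One image-based visual servoing step: return a proportional
    tip-velocity command that re-centres the tracked instrument;
    inside the pixel deadband the endoscope holds still."""
    centre = np.array(image_size, float) / 2
    error = np.asarray(target_px, float) - centre   # pixel error (u, v)
    if np.linalg.norm(error) < deadband:
        return np.zeros(2)                          # within deadband: no motion
    return -gain * error                            # proportional correction
```

a full implementation would map this image-space command through the endoscope's kinematics to the bending-section actuators.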
during the tracking process, movements of the manipulator as well as of the endoscope are minimized to save space for the operation and avoid instrument-endoscope fencing. the endoscope can also be controlled by the surgeon; a foot pedal is used to switch between tracking mode and control mode. preliminary results: a prototype was developed and tested experimentally. in tracking a volume of 200 × 200 × 100 mm³, the spaces required by the flexible endoscope are 15.55% (inside the trocar) and 9.83% (outside the trocar) of those occupied by the rigid endoscope. evaluation with the fls tasks involved 10 subjects. all participants completed the tasks under tracking mode without failure. in the ex-vivo test with a porcine stomach, the endoscope successfully guided the detection, dissection and knotting autonomously. background: fluorescence imaging allows visualization of deep-seated anatomical structures, exploiting the deeper tissue penetration of near-infrared (nir) light compared to visible light. the most commonly used fluorescent substance, indocyanine green (icg), is not naturally excreted by the urinary system and requires retrograde stent placement and injection. lighted catheters have been proposed to help visualise the ureter. fluorescent dye-coated ureteral catheters could well represent a more effective and less expensive solution, but icg is unsuitable for coating materials. aim: to develop a stable fluorescent coating for catheters to be used intraoperatively, working in the same nir window as icg, to facilitate its use with clinically available systems. project description: the coating was developed based on poly(methyl methacrylate) (pmma), a biocompatible polymer, and on specifically designed fluorescent dyes exhibiting icg-like optical properties.
three nir dyes (substances a, b, and c) were tested in order to find the optimal one in terms of fluorescence signal intensity, and were compared to icg in a polymer form and to an icg-based reference card (green balance™). the fluorescent coating was applied onto 3 common ureteral stent materials: hydrophilic-coated ultrathane, silicone-coated latex, and pvc. the coating process involved 3 cycles of immersion into the respective dyes blended in pmma polymer (icg, substances a, b, and c), followed by a drying phase. the various tubes were partly inserted into a porcine ureter, next to the icg-based reference card. images were taken in white light and nir modes using the d-light p camera system (karl storz), at a fixed camera-to-target distance. the fluorescence signal intensity was measured for the different regions of interest (each material/coating combination inside and outside the ureter, plus the reference card) using proprietary software and normalised against the reference card. preliminary results: the signal intensity was significantly higher for all new substances compared to icg. substance a showed the strongest fluorescence signal intensity among the tested coatings in all tested conditions and materials and was identified as the ideal candidate to undergo further evaluation and in vivo testing. background: endoscopic resection (er) of early gastric cancers provides tremendous patient advantages. however, post-resection findings of deeper submucosal (sm) and/or lymphovascular invasion can necessitate a second, surgical intervention. we propose that pre-resection evaluation of the submucosal architecture under the tumour can provide critical information for staging and operative planning. we evaluate three techniques to assess the submucosal architecture underlying the gastric mucosa in a pig model. aim: to evaluate three needle-based methods of evaluating the sm before er. project description: 6 acute pigs were used.
sub-mucosal tumours were simulated (as endoscopically and eus-visible blebs) by injecting the sm with 20 cc of undyed nac. a linear eus was used for all procedures. the tumours were marked and labelled according to geography. methodology: after creating the tumours, anterior lesions were evaluated using the following 19g needle-based modalities: confocal microscopy (cm) using the through-the-needle cellvizio (mauna kea) system; mini-biopsy (mb) using the moray micro-biopsy forceps (us endoscopy); and fine-needle biopsy (fnb). results: 18 cm examinations were video recorded in all anterior positions. submucosal vascular visualisation was possible in all cases, and excellent in 17/18. mb was performed in 18 lesions, with a total of 2 biopsies obtained from each lesion (total = 36). fnb was performed once in the anterior lesions and twice in the posterior lesions with different needle brands; therefore, a total of 54 biopsies were collected. 2 passes were performed for each biopsy (total = 108). each pass comprised 20-25 insertion/withdrawal movements combined with fanning, the slow-pull technique, no suction, and suction (10-20 cc negative air pressure) to collect the material. all material was sent to an animal anatomopathologist blinded to the acquisition method. the mean time of confocal examination was 15 min 8 s (range 6 min 02 s to 30 min 59 s). mb took a mean of 5 min and fnb a mean of 10 min per biopsy. cm identified different patterns of vessels in relation to the probe position (superficial/reticular, middle cross-roads, or deep/longitudinal). conclusion: eus-fnb, cm and mb are three potential methods to assess the sub-mucosal space underlying the gastric mucosa. cm offered the most architectural information but required more time to perform. these methods may have a role in better staging of patients for appropriate er. background: the overall and disease-free survival of patients with rectal cancer is dependent on its staging and adequate selection of the treatment strategy.
mri has proven efficacy in rectal cancer local staging and recognition of adverse prognostic features. however, it can be difficult to utilise as a navigation tool for surgeons, as it represents a complex three-dimensional pelvic space with a series of individual two-dimensional images. 3d image reconstruction has been successfully adopted in other surgical fields to overcome these limitations. aim: our primary aim is to develop bespoke automated generation of patient-specific 3d pelvic models, which will improve surgical planning and navigation, patient interaction, and surgical education. true-size, rotatable 3d models will offer a more realistic three-dimensional representation of the surgical space and its complex relationships, allowing for more confident surgical rehearsal and potentially better utilisation of minimally invasive techniques in rectal cancer management. our secondary aim is to develop a large multipurpose database of 3d models of the male and female pelvis in health and disease. project description: our multidisciplinary team consists of colorectal surgeons, radiologists specialising in pelvic mri, and computer scientists. virtual 3d pelvic models are generated from the standard 2d dicom mri images routinely used for rectal cancer staging, which guarantees high fidelity of cancer delineation. segmentation of the pelvic anatomy is performed with itk-snap, an open-access, multi-platform software. machine learning technology is then employed to automate the 3d model generation, making it time-efficient and allowing for its clinical application. preliminary results: in the initial stage, using manual segmentation, we have created ten models of normal male and female pelvic anatomy. a good inter-rater agreement level was found, demonstrating the reproducibility of the approach.
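as a simplified illustration of how a segmented label volume becomes a true-size 3d model, the boundary voxels of a binary segmentation (e.g. exported from itk-snap) can be extracted and scaled by the dicom voxel spacing. a production pipeline would use a surface-meshing algorithm such as marching cubes; the sketch below, with assumed names, shows only the boundary-and-scaling idea:

```python
import numpy as np

def surface_voxels(mask, spacing=(1.0, 1.0, 1.0)):
    """Extract boundary voxels of a binary segmentation and scale
    them by the voxel spacing so the surface is true-size (mm)."""
    mask = np.asarray(mask, bool)
    padded = np.pad(mask, 1)                     # zero-pad so edges count as outside
    interior = np.ones_like(mask)
    # a voxel is interior only if all six face-neighbours are foreground
    for axis in range(3):
        for shift in (-1, 1):
            interior &= np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
    boundary = mask & ~interior
    coords = np.argwhere(boundary).astype(float)
    return coords * np.asarray(spacing)          # world-space (mm) points
```

the resulting point set (or a mesh built from it) inherits the patient's real dimensions from the mri voxel spacing, which is what makes the model "true-size".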
various machine learning algorithms are being explored to fully automate the process of 3d model generation, which will allow for their use in clinical practice and in the development of the 3d colorectal database. the technology will be further implemented in the creation of dynamic models of functional pelvic floor disorders. 4 surgery, toho university omori medical center, tokyo, japan; 5 surgery, neuchâtel hospital, neuchâtel, switzerland background: laparoscopic gastrojejunostomies are time-consuming and require specific training. alternatively, sutureless anastomosis can be achieved by means of endoscopically delivered magnetic rings. objective of the study: to assess the feasibility and reproducibility of an endo-laparoscopic gastrojejunostomy technique, using magnets coated with a fluorescent biocompatible polymer. methods and procedures: four pigs (2 acute, 2 survival models) and one cadaver were included in this study. the anastomotic device was composed of two magnetic rings (25x8x6 mm; attraction force 30 newtons), each one attached to a 75 cm long thread. the distal ring was inserted endoscopically into the first duodenum, and the extremity of the thread was clipped to the gastric mucosa. twenty-four hours later, a two-port laparoscopy (12 mm, 5 mm) was performed, using a near-infrared (nir) laparoscope (d-light-p; karl storz). the magnet's position in the jejunum was detected thanks to the transluminal fluorescence of the dye. magnetic interaction with the metallic tip of the laparoscopic grasper made it possible to catch the ring and bring the bowel loop to the future anastomotic site on the gastric wall. simultaneously, the proximal magnet was delivered to the gastroesophageal junction using a flexible endoscope. the magnet was carefully advanced into the stomach allowing precise connection with the distal ring. in one cadaver the procedure was repeated. 
the sole variation was that, in order to reach the second jejunal loop, the distal magnet was placed using a gastroscope inserted through a transgastric port. in two acute animals, the distal magnetic ring was introduced into the jejunum via an enterotomy. the anastomotic procedure (from the distal magnet detection via fluorescence to the magnetic connection using a hybrid approach) was reiterated 40 times. survival animals were followed up for 10 days and underwent control endoscopies and ct scans. results: the procedure was easy to standardize and reproducible, with a mean anastomotic procedure time of 2.62 ± 1.42 min. there were no technical problems and magnetic connection could be precisely directed in all cases, at both the anterior and posterior gastric wall. no complications occurred during the survival period and the anastomoses were patent by day 5. transluminal fluorescence allowed for a rapid detection of the magnet. colorectal cancer is the fourth most common cancer in high-income countries, accounting for more than 700,000 deaths worldwide. survival rate reaches 94% in case of early diagnosis, falling to 11% in case of advanced stage. conventional colonoscopy screening is limited by invasiveness, pain and often the need for sedation. wireless capsule endoscopy enables inspection without discomfort, but passive locomotion often leads to incomplete and/or false negative results. the european endoo project (grant agreement 688592) aims to develop a novel system that overcomes most of the drawbacks of conventional colonoscopy, maintaining accurate and reliable diagnosis and therapy. the system is composed of an active robotic platform that magnetically drives a soft-tethered capsule; magnetic guidance is achieved through the magnetic localization of the capsule in combination with a closed-loop control that maintains an optimal and safe link between the capsule and the magnetic end-effector. 
a stereoscopic camera is integrated in the capsule for enhanced diagnosis through 3d reconstruction and automated detection of lesions/pathologies. the different modules of the endoo medical platform are illustrated in the figures. the robotic guidance system consists of an anthropomorphic manipulator that controls the capsule through an external permanent magnet. the robot, positioned on a dedicated trolley, is equipped with sensors for performing safe human-robot collaboration. the medical workstation incorporates: screens, buttons and pedals for visualization and command initiation, a joystick for system teleoperation and a back-end for fluidic control and data communication. the soft-tethered capsule embeds an internal permanent magnet, magnetic sensors, an accelerometer, white and infrared illumination and an hd stereoscopic vision system with two wide-angle customized optics. a controller serves as the main control unit for performing real-time communication and closed-loop control of the robot, localization system, capsule and physician commands. the synergistic cooperation of academic, industrial and clinical partners within the project allowed the system to be developed and validated in in vitro, ex vivo and preliminary cadaver sessions, performing comparisons with state-of-the-art commercial colonoscopes. in conclusion, the endoo medical platform provides: reduced procedural pressures, user-friendly procedures, similar functionalities and performances to commercial devices, comparable procedural times and considerably lower costs with a new painless approach. background: this study is aimed at the comparison of the process of manual and robotic-assisted positioning of the electrode performing radiofrequency ablation with the use of a multifunctional robot-assisted surgical platform under the control of a surgical navigation system. 
the main hypothesis of this experiment was that the use of a collaborative manipulator would allow positioning of the active part of the electrode relative to the center of the tumor more accurately and at the first attempt. we also checked the stability of the electrode's velocity during insertion and considered some advantages in ergonomics of using the robotic manipulator. methods: sphere-shaped tumor phantoms measuring 8 mm in diameter were filled with contrast and inserted in cow livers. 10 livers were used for the robotic experiment and an equal number for the manual one. the livers were encased in silicone phantoms. analysis of ct data made it possible to find the entry and the target point for each tumor phantom. these data were loaded into the surgical navigation system, which was used to track and record the position of the rf electrode during the operation for further analysis. results: standard deviation of points from the programmed linear trajectory averaged 0.3 mm for the robotic experiment and 2.33 mm for the manual operation, with a maximum deviation of 0.55 mm and 7.99 mm respectively. standard deviation from the target point was 2.69 mm for the collaborative method and 2.49 mm for the manual method. the average velocity was 2.97 mm/s for the manipulator and 3.12 mm/s for the manual method, but the standard deviation of the velocity relative to the value of the average velocity was 0.66 mm/s and 3.05 mm/s respectively. thus, in two criteria out of three, the manipulator is superior to the surgeon, and equality is established in one. surgeons also noticed advantages in ergonomics when performing the procedure using the manipulator. conclusions: this experiment was conducted as part of the work on developing the robotic multifunctional surgical complex. we can confirm the potential advantages of using robotic manipulators for minimally invasive surgery in collaborative practice for cancer treatment. 
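the deviation metric reported above (distance of each tracked electrode position from the planned linear trajectory through the entry and target points) can be computed as a 3d point-to-line distance. a minimal sketch, assuming navigation-system samples as (x, y, z) coordinates in mm; the sample values are illustrative, not data from the experiment:

```python
import math

def deviation_from_trajectory(point, entry, target):
    """perpendicular distance of a recorded electrode position from the
    straight line through the planned entry and target points (3d)."""
    # direction vector of the planned trajectory and vector to the point
    d = [t - e for t, e in zip(target, entry)]
    v = [p - e for p, e in zip(point, entry)]
    # |v x d| / |d| is the point-to-line distance
    cx = v[1] * d[2] - v[2] * d[1]
    cy = v[2] * d[0] - v[0] * d[2]
    cz = v[0] * d[1] - v[1] * d[0]
    cross_norm = math.sqrt(cx * cx + cy * cy + cz * cz)
    d_norm = math.sqrt(sum(c * c for c in d))
    return cross_norm / d_norm

# hypothetical tracked positions (mm) against a trajectory along the x-axis
entry, target = (0.0, 0.0, 0.0), (100.0, 0.0, 0.0)
samples = [(10.0, 0.2, 0.0), (50.0, 0.0, 0.4), (90.0, 0.3, 0.0)]
deviations = [deviation_from_trajectory(p, entry, target) for p in samples]
print(max(deviations))  # worst-case deviation for this insertion
```

the per-insertion mean and maximum of these deviations correspond to the summary figures quoted in the results.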
surg endosc (2019) background and aims: laparoscopy has reduced tactile feedback compared to open surgery. in the neuropsychological literature there is increasing evidence that visual and haptic information converge to form a mental representation of an object. through the combination of these inputs, this representation is believed to be more refined and robust. we investigated whether tactile exploration of a lifelike anatomical object before executing a laparoscopic action on this object in a laparoscopic box trainer improves performance of this action. description: a randomized prospective cohort study with two groups (a and b) of ten laparoscopically naïve medical students was conducted. we compared the groups for baseline characteristics and performance, using a basic laparoscopic task (post and sleeve). to investigate the effect of haptic exploration, students performed ten repetitions of a laparoscopic needle action on a lifelike silicone caecum model (applied medical, rancho santa margarita, usa). group a did a pre-test visual exploration of the model. in group b manual exploration of the anatomical model was added to the visual exploration before executing the task. the box trainer was equipped with the forcesense (medishield, delft, the netherlands) system for skill assessment using objective force, motion and time parameters. results: baseline characteristics and laparoscopic performance were comparable (p > 0.05). performances of 200 trials on the anatomical model were captured and parameter outcomes were compared between groups. significantly less force (maximal force, maximal impulse, mean force and force volume) was exerted by the 'touch' group (p < 0.001) (fig. 1). this group also completed the task with less distance travelled by the instruments (p < 0.003). there was no significant difference in time needed to complete the task (p = 0.695). 
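objective skill parameters of the kind listed above (maximal force, impulse, mean force, instrument path length) can be derived from the force and position streams a box-trainer sensor system records. a minimal sketch under stated assumptions; the sampling rate, signal values and metric names are illustrative and not taken from the forcesense system's actual output:

```python
import math

def force_motion_metrics(times, forces, positions):
    """summary metrics of the kind used for objective skill assessment:
    maximal force, mean force, impulse (trapezoidal integral of force
    over time) and instrument path length from 3d tip positions."""
    max_force = max(forces)
    mean_force = sum(forces) / len(forces)
    # trapezoidal integration of force over time gives the impulse (N*s)
    impulse = sum((forces[i] + forces[i + 1]) / 2 * (times[i + 1] - times[i])
                  for i in range(len(times) - 1))
    # total distance travelled by the instrument tip (mm)
    path = sum(math.dist(positions[i], positions[i + 1])
               for i in range(len(positions) - 1))
    return {"max_force": max_force, "mean_force": mean_force,
            "impulse": impulse, "path_length": path}

# hypothetical 1 hz samples: force in newtons, tip position in mm
t = [0.0, 1.0, 2.0, 3.0]
f = [0.5, 1.5, 1.0, 0.5]
pos = [(0, 0, 0), (3, 4, 0), (3, 4, 12), (3, 4, 12)]
print(force_motion_metrics(t, f, pos))
```

lower values of these metrics for the 'touch' group, as reported, would indicate more economical tissue handling and instrument movement.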
conclusion: this study showed that, when performing a laparoscopic task on an anatomical model, pre-task haptic exploration of the model results in the use of significantly less force and less movement. adding haptic exploration to a laparoscopic training curriculum could therefore result in more efficient and more refined learning of laparoscopic actions. this, in turn, could lead to better, quicker and safer performance of laparoscopic operations. esophagogastroscopy was performed before gabe and 1 week post-procedure to assess gastric abnormalities. weight and fasting plasma ghrelin were obtained at baseline and 1, 3, 6 and 12 months post-index procedure. after 6 months, the sham group was unblinded and received gabe. both the gabe and sham-crossover-to-gabe groups were followed for 12 months and received lifestyle therapy (behavioral-diet education). preliminary results: gabe was successful in all patients with no serious complications. significant, progressive weight loss was observed at 6 months and maintained at 12 months. ghrelin in the gabe group decreased by 22% (67.91 pg/ml) compared to baseline and 12-month levels. weight loss was approximately 6.5% greater in the gabe group versus sham at 6 months (table 1). itt = intent-to-treat, pp = per-protocol; analyses performed using independent-sample and paired-sample t-tests. conclusions: gabe using eles is safe, accompanied by significant and so far maintained weight loss. gabe using the eles demonstrated a reduction in ghrelin levels. aims: transanal total mesorectal excision (tatme) is the latest colorectal approach that continues to be in the spotlight. this study aims to describe the technique in depth by identifying and understanding technical advantages, errors and adverse events. methods: detailed video analysis using observational clinical human reliability analysis (ochra) was completed on 100 clinical tatme cases performed by 27 international surgeons. 
error frequency and error pathways leading to adverse events were described. tatme expert surgeons were interviewed and engaged in a workshop to elicit error-reducing mechanisms. results: overall, technical errors and adverse events per procedure on average occurred 49 ± 32.9 (range 6-194) and 9 ± 6.1 (range 1-45) times respectively. inadequate insufflation and poor camera optics were the most frequent set-up problems. instrument handling errors consisted most commonly of excessive grasper movement during the purse-string phase (321 times in total), inappropriate force applied (79 times) with the energy device during the rectotomy, inappropriate force with the grasper (74 times) and excessive movement with the energy device (117 times) during tme dissection. incorrect dissection planes were created during tme dissection mostly due to insufficient retraction (127 times), which did not allow adequate exposure of the tissue planes. the most frequently occurring consequence was bleeding (mean: 6 times per procedure). rectal perforation (7 cases), vaginal wall injury (4 cases), and prostatic injury (7 cases) were also recorded. adverse events regularly occurred as a result of poor set-up/exposure, inappropriate retraction and/or instrument movement and incorrect-plane surgery. error-reducing mechanisms and 'technical tips' describe specific steps and actions, both set-up/equipment-related and technique-related, that aim to prevent errors from occurring and avoid adverse consequences. ochra and individual feedback with error-reducing mechanisms developed by this study have been implemented into the national training programme for tatme. conclusion: tatme is an advanced complex procedure during which technical errors and their consequences are not infrequent. tatme requires knowledge of anatomy 'bottom-up', familiarity with its specialised equipment and technical skill working in a narrow space. appropriate structured training and mentorship are therefore recommended. 
objective: insufficient vascular supply is one of the main causes of anastomotic leak in colorectal surgery. icg has been shown to provide information on tissue perfusion, identifying a well-perfused location for colonic and rectal transections and thus possibly reducing the leak rate. the objective of this study was to evaluate the usefulness of intraoperative assessment of anastomotic perfusion using intraoperative indocyanine green (icg) angiography in patients undergoing left-sided colon or rectal resection with colorectal anastomosis. methods: this randomized trial involved 252 patients undergoing laparoscopic left-sided colon and rectal resection randomized 1:1 to intraoperative icg or to subjective visual evaluation of the bowel perfusion without icg (clinicaltrials.gov nct02662946). the primary aim was to assess whether icg angiography could lead to a reduction in anastomotic leak rate. secondary outcomes were possible changes in the surgical strategy and postoperative morbidity. results: after randomization, 12 patients were excluded. accordingly, 240 patients were included in the analysis; 118 in the study group, and 122 in the control group. icg angiography showed insufficient perfusion of the colonic stump, which led to extended bowel resection, in 13 cases (11%). an anastomotic leak developed in 11 patients (9%) in the control group and in 6 patients (5%) in the study group (p = n.s.). conclusion: intraoperative icg fluorescent angiography can effectively assess vascularization of the colonic stump and anastomosis in patients undergoing colorectal resection. this method led to further proximal bowel resection in 13 cases; however, its role in reducing anastomotic leak rate should be studied in further research. endoscopic sleeve gastroplasty (esg) is a promising endoscopic bariatric procedure carried out with the application of transmural sutures resulting in a gastric reduction and gastric shortening. 
sutures are placed in a u-shaped fashion, from the incisura to the fundus, which is preserved, using an over-the-endoscope suturing platform (overstitch, apollo endosurgery, austin, texas, usa). the choice of the right landmarks for suturing the gastric wall is extremely important for the efficacy and safety of the procedure. flexible endoscopy offers few anatomical reference points. correct spatial relation to precisely target the insertion of the helix device used for retraction and correct orientation of the full-thickness tissue bite require a good understanding of the anatomy of the stomach and surrounding organs, including vascular structures that could be inadvertently injured (left lobe of the liver, gallbladder, spleen, short gastric vessels, pancreas, transverse colon). surgeons by training can 'see' the anatomy beyond the gastric wall and understand whether they are working in a safe layer or whether an underlying structure should be spared. this video illustrates all the potential risks related to a wrong choice of endoscopic landmarks when performing esg with respect to gastric and abdominal anatomy. introduction: central bisectionectomy, anterior sectionectomy, and posterior sectionectomy are technically demanding procedures in a minimally invasive approach because of difficult exposure and extensive parenchymal transection planes. with limited robotic instruments, including the absence of cusa, these procedures have rarely been performed by a robotic approach. method: consecutive robotic central bisectionectomy, anterior sectionectomy, and posterior sectionectomy were performed. patients were all males and were 67, 71, and 41 years old, respectively. pathologic diagnoses were all hepatocellular carcinomas of 4.4, 4.2, and 3.2 cm diameter, respectively. operative settings were identical for the three kinds of procedure. the patients were placed supine with reverse trendelenburg and right-side elevation. 
an umbilical 12-mm camera port, three 8-mm ports and an additional 12-mm assistant port were used. the glissonian approach and icg fluorescence imaging clearly demarcated the resection planes. parenchymal transection was performed using the maryland bipolar dissector and harmonic scalpel. the rubber-band self-retraction method and the third arm of the robot system helped achieve stable and excellent exposure of the surgical planes. results: there were no conversions to laparoscopic or open surgery. the operative time was 320, 330, and 290 min and estimated intraoperative blood loss was 200, 330, and 250 ml. the pathologic surgical margin was 2.5, 0.5, and 3.6 cm. the length of stay after surgery was 7, 8, and 6 days and there were no postoperative complications. conclusion: robotic central bisectionectomy, anterior sectionectomy, and posterior sectionectomy are still demanding procedures with long operative times. however, these procedures could be performed safely in regard to short-term perioperative outcomes. the robot surgical system provided several benefits for anatomical hepatectomies, including a stable and excellent operative field and clear surgical planes. suprapubic hernias (less than 5 cm above the pubic arch in the midline) require thorough anatomical knowledge because of the complexity of their repair and their low incidence of approximately 2% of all hernias. the problem in repairing this type of hernia is that the inferior margin of the defect is very close to the pubic symphysis; consequently, mesh overlap is often inadequate. treatment of suprapubic hernias is controversial because of limited evidence in the literature. this video shows the case of a 40-year-old female patient with a suprapubic hernia with a defect of 3x3 cm. we performed a laparoscopic repair with a bilateral peritoneal flap of the groin region (as is performed during tapp) for a proper view of the pubic symphysis, cooper's ligaments, epigastric and major vessels and nerves, and meticulous dissection of the space of retzius. 
the defect was repaired by reconstructing the midline with a running suture. subsequently, titanium helical tacks were used to fix the mesh to the pubis and cooper's ligament following the double-crown technique, with special attention when fixing the mesh near the inguinal canal, due to the possibility of causing chronic pain. the peritoneal flap was fixed over the mesh with absorbable fixation devices and sealed with fibrin glue. laparoscopic repair of suprapubic hernias can be considered the first option in treatment, because it combines the advantages of a minimally invasive approach and is associated with low recurrence. the main advantages are that it allows proper visualization of the anatomy and proper fixation of the mesh. background and aim: thoracoscopic esophagectomy has been performed for two decades and has become widespread. we evaluated our cases of thoracoscopic esophagectomy and consider the future perspective of this operation. transient recurrent laryngeal nerve palsy after lymphadenectomy in this surgery is not rare and induces not only hoarseness but also aspiration or pneumonia. a new method to avoid this complication is desired. patients and methods: 702 patients who underwent thoracoscopic esophagectomy in our institute from march 1995 to october 2017 were enrolled and studied retrospectively. the operative indication was all clinically resectable cases, including those with neoadjuvant treatment or definitive chemoradiotherapy before surgery. the overall survival rate of patients with the thoracoscopic approach and with thoracotomy until 2001 was analyzed. the long-term outcome of patients with thoracoscopic esophagectomy was compared to results from the comprehensive registry of esophageal cancer in japan. short-term results of the perioperative parameters were analyzed between the left lateral decubitus position and the prone position. we introduced an intraoperative nerve-monitoring system for prone esophagectomy in 2014. 
results: there was no significant difference in survival rate between the thoracoscopic group and the thoracotomy group based on pathological stage. 5-year survival without neoadjuvant treatment was 88.9% (pstage i), 71.5% (pstage iia), 68.1% (pstage iib), and 40.9% (pstage iii), respectively. the 5-year survival rate of cstage ii and iii with neoadjuvant chemotherapy was 65.7% and the 5-year survival rate of salvage esophagectomy after failure of definitive chemoradiotherapy was 31.4%. all outcomes are comparable to reported results for esophagectomy. in the comparison of the lateral position with the prone position, total blood loss was significantly lower in the prone position. the inflammatory response after surgery improved more rapidly in the prone group; therefore, the prone position is recommended as a minimally invasive procedure for thoracoscopic esophagectomy. transient recurrent laryngeal nerve palsy was observed in 30% of patients. conclusion: thoracoscopic esophagectomy will develop further as a standard operation for esophageal cancer. nerve monitoring is useful for detecting the recurrent nerve and avoiding nerve injury. background: laparoscopic total mesorectal excision (tme) in a wide female pelvis is usually technically easier than in a narrow male pelvis. however, this is not always the case, as the uterus and adnexae may obscure the views and hinder safe dissection, especially in obese patients. techniques such as graspers through additional ports or suspension with sutures through the broad ligament may potentially cause injury or need additional ports/assistants. aim: we present a novel technique using a self-retaining gynaecological uterine manipulator to improve access during deep pelvic laparoscopic surgery in female patients. technical tip: the operation is commenced in the standard manner for a laparoscopic rectal excision. 
once pelvic dissection is commenced, whenever it is felt that uterine retraction would be advantageous (depending on the level of the rectal tumour, size of the uterus and ovaries, obesity etc.), a self-retaining uterine manipulator (as shown in the video) is used. the tip of this disposable device is introduced into the uterus after dilatation of the uterine cervix. once the balloon at the tip has been inflated, the instrument is secure and hence there is no need for active manipulation by an assistant. the shaft can be rotated to allow anteversion/retroversion of the uterus to varying degrees as required to aid dissection. as the video depicts clearly, it acts as a self-retaining retractor for the uterus and is removed at the end of the operation. though the procedure is demonstrated by a gynaecologist in the video, the instrument is quite easy to insert and some of our colorectal team have been trained as well. conclusion: the self-retaining uterine manipulator is an efficient tool for uterine retraction in laparoscopic rectal surgery and we have been using it routinely in tme in females for the past 8 years, with no complications. this was previously published as a technical tip in the journal of minimal access surgery but has never been submitted for peer review as a video. the authors present a video of two clinical cases treated by a trans-axillary endoscopic approach. methods: a 74-year-old male and a 73-year-old male presented with intermittent dysphagia and frequent reflux (class ii of lahey). one had a history of recurrent respiratory infections. the disease was characterized by oesophagogastroscopy (egd) and oesophagogram. a trans-axillary approach with an areolar port was used. 
step-by-step as follows: (i) dissection anteriorly to the pectoralis major muscle, (ii) isolation of the anterior border of the sternocleidomastoid muscle, (iii) isolation of the omohyoid muscle, (iv) identification of the thyroid's upper pole, (v) zd isolation, (vi) myotomy of the cricopharyngeal muscle, (vii) zd resection with a stapler and its withdrawal in a sac. results: both cases progressed without complications. complete local recovery was verified in both cases one month after the procedure. conclusion: this technique seems feasible and reproducible, allowing zd diverticulectomy with a better cosmetic result and perhaps lower surgical site infections (ssi). to the authors' knowledge, this approach to zd has never been published. background: gastric leak occurs in 1-6% of patients who undergo roux-en-y gastric bypass (rygb) for morbid obesity. the pathophysiology may be related to gastric ischemia, fistula, or ulcer. gastric leak is a severe complication of gastric bypass (gbp) that is associated with significant morbidity and mortality. fistula may have several clinical impacts, depending on patient-related factors, fistula characteristics, onset time, and therapy proposal. abdominal drainage, gastrostomy, and revisional surgery constitute the traditional approaches to dehiscence and fistula closure, with variable results. methods: we present a video of a clinical case of a 44-year-old woman with a body mass index of 45 kg/m2 who underwent roux-en-y gastric bypass and 48 h later presented with tachycardia and right quadrant pain. the ct scan showed an apical leak at the gastric pouch level. the video shows the relevant aspects of a revisional surgery and the key points to drain the fistula and close the defect laparoscopically. results: after 6 months, the patient achieved successful results, defined as a stable clinical situation with imaging evidence of gastric fistula remission. 
conclusions: gastric bypass (gbp) is one of the most efficient bariatric interventions in morbidly obese patients. the most severe risk of this procedure seems to be the staple-line leak, and the management of this complication can be very arduous. without any guidelines it is very difficult to determine the right procedure for addressing the staple-line leak after gbp. laparoscopic sleeve gastrectomy (lsg) has become the most commonly performed operation worldwide as a primary bariatric/metabolic procedure. however, conversion to other surgical procedures such as roux-en-y gastric bypass (rygb) or one-anastomosis gastric bypass (oagb) has been described as a treatment option for inadequate weight loss after lsg and unresolved co-morbidities or complications such as leak, stricture, and severe gastroesophageal reflux disease (gerd). we present two clinical cases of weight regain and severe gerd and dysphagia, which account for the main indications for reversal of lsg to either oagb or rygb. aims: we show in the video the surgical technique that we perform by laparoscopic approach in order to construct a roux-en-y polypropylene-banded gastric bypass (lrygb-b). methods: we are performing this procedure within a prospective randomized trial designed to compare the long-term results of lrygb-b versus the standard laparoscopic roux-en-y gastric bypass. the video shows our technique in the case of a 46-year-old female with a bmi of 46 kg/m2. first we create a vertical gastric pouch of about 25-30 ml, and a polypropylene mesh (10x65 mm) is placed 20-30 mm proximal to the anastomosis around the gastric pouch, with the help of a laparoscopic band retractor. after that, a 150 cm roux-en-y limb is constructed in an antegastric antecolic fashion, the length of the biliary limb being 100 cm. a 25 mm gastrojejunal anastomosis is performed with a linear stapler, and the enterotomy and gastrotomy are closed with a 3/0 barbed running suture. 
the jejunojejunostomy anastomosis is constructed in a similar fashion, but with a length of 30-45 mm. the petersen space and the mesenteric defect are closed with polypropylene 0/0 sutures. results: 31 patients have been operated on following this technique, and there have been no complications related to the polypropylene band (the randomized prospective trial is still ongoing). conclusions: the video shows a reproducible, easy way to perform lrygb-b using a polypropylene mesh. introduction: a 23-year-old female patient presented at our clinic two years after initial roux-en-y gastric bypass. she had had a preoperative bmi of 31.5 and had a significant weight loss which resulted in a bmi of 21.4 at two years postoperatively. she currently suffered from severe dumping with glycaemia levels dropping to 30 mg/dl. pharmacological treatment with metformin, sandostatin and acarbose did not yield any results. on top of these problems she felt less restriction, could eat large portions and had gained 9 kg in the last three months. objective: the usual approach for severe dumping-related hypoglycemia would be to undo the gastric bypass. this patient however was extremely anxious not to regain weight, so we sought other options. we assumed that by adding more restriction and slowing down the emptying of the gastric pouch we could alleviate some, if not all, of the dumping-related symptoms and prevent further weight regain. methods: in this video we present the banding of a gastric pouch for severe dumping after roux-en-y gastric bypass. results: although unconventional, the banding of the pouch yielded excellent results. the slower pouch emptying and reduced portions resulted in a near-complete remission of all symptoms. as an additional benefit we found a slight weight loss of four kilograms six weeks postoperatively. conclusion: the usual treatment of severe dumping-related hypoglycemia would be an undoing of the gastric bypass. 
in this case, however, the patient was extremely anxious not to regain weight, being very pleased with the results her gastric bypass had yielded. in agreement with both the patient and the treating endocrinologist we attempted a different approach. the slower pouch emptying and increased restriction offered another way to alleviate the dumping and deep hypoglycemia while concomitantly resulting in weight maintenance. aim: the aim of this video is to present a novel surgical technique to avoid stent migration after endoscopic placement in patients with leakage subsequent to laparoscopic sleeve gastrectomy (lsg). methods: this video shows the case of a patient (bmi 46.6 kg/m2) who developed an upper gastric leakage 2 days after lsg. a ct scan showed a small leakage at the eg junction complicated by an intra-abdominal abscess. a ct-guided percutaneous drainage of the abscess was performed. stent placement was attempted endoscopically three times and failed because of migration. we decided to place laparoscopically a non-adjustable gastric ring (nagr) around the stomach, in order to avoid stent migration. first of all, the stent is replaced endoscopically in order to cover the fistula tract. the patient is placed in a half-sitting position and the pneumoperitoneum is obtained using a veress needle in the left subcostal space. a 4-port technique is used as in standard laparoscopic sleeve gastrectomy. the procedure starts with the mobilization of adhesions; the fistula is identified in the upper part of the tubule. the gastric tubule is isolated and the lesser omentum is opened. the blunt needle at the tip of the ring is passed retrogastrically; a tourniquet can be useful if the positioning turns out to be difficult. the nagr is then closed over the gastric tubule containing the stent. a drain is finally placed. results: the stent was removed after 4 weeks. a gastrointestinal ct scan with oral contrast showed a complete resolution of the leakage. 
after 6 months the patient was in good condition with a bmi of 29.4 kg/m2. conclusions: this new technique is feasible and effective, as shown in this video; however, the nagr can lead to complications, so strict follow-up is needed and, if any complication appears, laparoscopic removal of the ring should be considered. introduction: in this case we will discuss a 72-year-old male patient who underwent a laparoscopic cruraplasty and gastric plication resulting in a weight loss of 12 kg. other medical history included insulin-dependent diabetes, reflux esophagitis and sleep apnea with cpap. two years after gastric plication the patient presented with passage problems, gastro-esophageal reflux and epigastric pain. a swallow test was performed, revealing a large fundus with restricted passage of contrast. due to the persistent complaints and the abnormal findings on barium swallow, a surgical re-intervention was needed. objectives: despite the current bmi of 27 and the age of the patient, conversion from a gastric plication to a roux-en-y gastric bypass was performed. several other surgical options were considered, including an undo of the gastric plication or a dilatation with resizing of the fundus. methods: in the video we describe the laparoscopic approach for a conversion of a gastric plication to a roux-en-y gastric bypass. results: at 6 months' follow-up the patient showed a weight loss of 8 kg and resolution of his earlier symptoms. the patient had normal oral intake without any gastro-esophageal reflux or epigastric pain. conclusion: after a gastric plication, partial loosening of the sutures and stenosis are both well-known complications. 
as presented in the video, it is apparent that laparoscopic undoing of a gastric plication is not as straightforward as it seems. firm adhesions between folds can compromise the procedure and prevent complete separation of the tissues. we believe that in these cases the best surgical approach is to convert to a roux-en-y gastric bypass. laparoscopic sleeve gastrectomy (lsg) is a relatively new surgical approach in the weight-loss surgeon's armamentarium. in the literature there is a consensus on the importance of completely mobilizing the gastric fundus before transection. the resg (revised sleeve gastrectomy, or re-sleeve) may be a valid option for failure of a primary lsg. we focused attention on the consequences that an incomplete resection of the gastric fundus during a sleeve gastrectomy can have, and on how they can be solved by repeating the procedure. a sleeve gastrectomy was performed in an obese 34-year-old woman (bmi = 40). three days after the operation, an upper gi x-ray with gastrografin did not show any abnormalities. three months after the surgical procedure, the woman reported frequent episodes of vomiting and a significant weight loss (42 kilos). an upper gi x-ray with gastrografin demonstrated the presence of multiple communicating cavities of the gastric fundus. esophagogastroduodenoscopy (egd) showed that the gastric tube close to the esophagogastric junction was separated from a recess (2-3 cm in diameter) by an incomplete septum. severe hypokalemia and consequent ecg abnormalities were treated with intravenous infusion of potassium. then we performed a laparoscopic operation. the gastric tube was completely released along the suture line of the previous operation and, especially, along the posterior surface of the upper part, until the left crus of the diaphragm became evident. under the guidance of the bougie, the recess was removed. 
results: the clinical course was uneventful, and the patient was discharged on the third postoperative day after an upper gi x-ray with gastrografin demonstrated the absence of leakage and a normal gastric tube. after 1 year, the patient was very satisfied with the operation. conclusions: complete mobilization of the gastric fundus allows the surgeon to see clearly which part should be resected to obtain an adequate gastric tube, and facilitates correct placement of the stapler. in our experience, in patients with a residual fundus, an upper gi x-ray with gastrografin and an egd are needed to exclude the presence of stenosis. a resleeve gastrectomy is then an efficient and safe procedure to treat this post-lsg complication. weight regain is one of the main problems in bariatric surgery. we have many surgical options, but when we evaluate patients with long follow-up and the pre-operative bmi of superobese patients, weight recidivism can reach 50-70% at 5 years. in most cases the first surgery is a restrictive procedure, often a sleeve gastrectomy. here we present a case of weight regain after a laparotomic super-magenstrasse (which we consider equivalent to a sleeve gastrectomy except for removal of the remnant) with a large incisional hernia. after a complete multidisciplinary re-evaluation we decided to perform an oagb (one anastomosis gastric bypass), but in this case we decided to create a functional exclusion of the duodenal transit by positioning a minimizer ring. this solution is effective in food diversion and guarantees gastric and duodenal endoscopic exploration in case of need. we think that this technique can represent an option to take into account in selected cases. at the end of the bariatric procedure we performed a laparoscopic repair of the incisional hernia with mesh, in the hope of avoiding future surgery and postoperative small intestine herniation. the patient rejected additional bariatric procedures and in fact has gained 10 kg two years later (bmi 39.14). 
conclusions: lagb gastric erosion is uncommon (1.46-3%). intraoperative factors (such as the perigastric approach) and patient-related factors (smoking, alcohol…) have been described as risk factors. the most frequent clinical presentation is weight-loss failure; band and port issues (such as infection) are also frequent. erosion infrequently presents as an acute event (<5%: peritonitis, abscess…) or asymptomatically (<1%). diagnosis is mostly made on upper endoscopy. the most common therapeutic technique is removal of the band (by endoscopy or surgery), repair of the stomach if needed, and band replacement (at least three months later). some authors have performed immediate replacement, but the incidence of recurrent erosion seems to be higher. other options are lagb removal alone or conversion to a different bariatric procedure. for endoscopic removal, it has been advised to wait until the band buckle is in the stomach, and it is sometimes very difficult. replacement of the band is not associated with weight regain. she reports 5 years of evolution with moderate-intensity heartburn that was exacerbated during the night, as well as occasional regurgitation. the intensity of the symptoms is attenuated by maintaining a diet without irritants and improving feeding times. she denies hematochezia, unintentional weight loss, dysphagia or early satiety. the patient has suffered from obesity since childhood; after pregnancy she had progressive weight gain and difficulty controlling blood sugar, so she was scheduled for a roux-en-y gastric bypass. preoperative endoscopy was performed, showing a submucosal tumor at the gastroesophageal junction, 37 cm from the dental arch and approximately 3 cm in diameter. endoscopic ultrasound was performed, demonstrating a subepithelial lesion of the gastroesophageal junction, hypoechoic, with well-defined borders, pseudobilobulated, 2.4 x 1.3 cm, and dependent on the external muscular layer. 
fine-needle aspiration was performed, in which spindle cells were identified; leiomyoma was the likely diagnosis. laparoscopic resection of the submucosal gastric tumor, gastric bypass and laparoscopic cholecystectomy were scheduled. at surgery, a tumor of approximately 2.5 cm was identified at the level of the gastroesophageal junction, which could be resected by laparoscopy without complications. the patient was discharged after 2 days of postoperative stay. the final histopathological result: leiomyoma of 3.3 cm with free edges; cd4 (-), gog1 (-), caldesmon (?), s100 (?). background: fifty percent of patients who have undergone gastric bypass, posterior reversal and sleeve gastrectomy, and finally complete hiatoplasty, present symptomatic gastroesophageal reflux disease. surgical reinforcement of the lower esophageal sphincter is necessary to prevent acid reflux. here we describe ligamentum teres cardiopexy, a surgical technique that reinforces the lower esophageal sphincter and restores its competence with a new valve, in patients with previous conversion of sleeve gastrectomy to gastric bypass and hiatal hernia repair. methods: we present the surgical technique performed in a patient with an initial gastric bypass who underwent sleeve gastrectomy for hypoglycemia and hiatoplasty for severe gerd. persistent gerd prompted her to undergo ligamentum teres cardiopexy. in this procedure, the ligamentum teres is released from its umbilical connection and the hernia reduced by manual traction, freeing the last 3-5 cm of esophagus in the abdomen. the distal ligamentum teres is fixed with one stitch to the apex of the angle of his, one at the gastroesophageal junction, and one joining the gastric fundus to the esophagus. the remainder of the ligamentum teres is fixed over itself with four to six stitches, forming a necktie cardiopexy. the procedure concludes with diaphragmatic crus closure. 
results: after 3 months, the patient achieved successful results, defined as resolution of gerd, no proton-pump inhibitor (ppi) use, and a manometry measurement over 12 mmhg after surgery. conclusions: ligamentum teres cardiopexy combined with closure of the diaphragmatic crura is a late alternative treatment for gastroesophageal reflux disease in patients with previous sleeve gastrectomy and hiatal hernia. general surgery, ponderas academic hospital, bucharest, romania. introduction: as metabolic surgery techniques have evolved over the years, we have to face more and more patients with complications and suboptimal results after the older/initial procedures. vertical banded gastroplasty (vbg) is one of those procedures that gained momentum during the initial experience in bariatric surgery but has proven to have disappointing results and many complications, with surgeons nowadays having to deal with difficult revisional operations. aim of this video: we want to present, from our experience, the difficulties encountered during the revisional surgery, roux-en-y gastric bypass (rygbp) after vbg, and the tips and tricks that make this a safer and easier procedure. objective: after thorough preoperative assessment and a review of the literature, multiple treatment options were considered. the procedure of choice ended up being laparoscopic adjustable gastric banding, with the objective of achieving optimal weight loss with the lowest risk of complications. methods: in this video we present the placement of an adjustable gastric band in a patient with a cirrhotic liver and portal hypertension, and the possible pitfalls. results: postoperatively there were no complications, and the patient had satisfying weight loss at both 6 months and 1 year postoperatively. in a short review of the literature we found that bariatric surgery is feasible in patients with portal hypertension as long as the patient is not decompensated and does not have bleeding varices. 
conclusion: cirrhosis and portal hypertension are not absolute contraindications for banding, sleeve or rny gastric bypass as long as the patient is not decompensated and does not have bleeding varices. the type of surgery depends on patient- and surgeon-related factors. the aim should be to achieve optimal weight loss with the lowest possible surgical risk in this type of patient. surg endosc (2019) 33:s485-s781. introduction: in this case we will discuss a 54-year-old female patient who had undergone a laparoscopic nissen fundoplication 5 years ago for gerd grade b. because of morbid obesity, an n-sleeve gastrectomy was performed 1 year ago, resulting in a weight loss of 12 kg. at presentation she had regained all the lost weight, resulting in a bmi of 42.8. the patient history also included insulin-dependent diabetes and obstructive sleep apnea with cpap. gastroscopy was performed, showing a large residual fundus but no esophagitis. on the subsequent upper gi series a relatively wide sleeve with an intact nissen collar was detected. objectives: a laparoscopic conversion to a roux-en-y gastric bypass was performed. other potential surgical treatment options are a sadi procedure or a sleeve gastrectomy with transit bipartition (santoro procedure). methods: in the video we describe the laparoscopic approach for a conversion of an n-sleeve to a roux-en-y gastric bypass. results: at 4-month follow-up the patient presented with a weight loss of 12 kg. the patient had good restriction of oral intake and did not have any reflux-related symptoms or complaints. conclusion: conversion from an n-sleeve to a roux-en-y gastric bypass is a challenging procedure. the largest pitfall during the creation of the gastric pouch is stapling a double fold of the nissen fundoplication. we believe that in these rare cases of weight regain after an n-sleeve, the best surgical approach is to convert to a roux-en-y gastric bypass. 
four years later, in 2010, a laparoscopic conversion to roux-en-y gastric bypass was performed because of weight regain. she now presents with satisfactory and stable weight loss over the last few years. she was recently diagnosed with a brca-1 mutation, for which she underwent bilateral ovariectomy and mastectomy. the patient's brother was also diagnosed with this mutation and died of pancreatic cancer at the age of 39. genetic counseling advised two-yearly follow-up because of an increased risk of up to 10% of developing pancreatic cancer. control gastroscopy showed a normal esophagus and gastric pouch. a control ct scan revealed hypertrophic stomach creases in the excluded stomach. these results prompted a laparoscopy-assisted gastroscopy of the excluded stomach, which uncovered hypertrophic stomach glands and intestinal metaplasia on biopsy. methods: in this video we demonstrate the laparoscopic approach for complex revisional bariatric surgery: conversion from rny gastric bypass to a sleeve gastrectomy in a patient who had already undergone a vbg. the focus of the video is on a manual gastro-gastrostomy with partial gastrectomy of the fundus and the part of the stomach where the old vbg band was placed. results: after 1.5 months' follow-up the patient had no complaints and a stable weight. upper gi series showed normal passage of contrast through the sleeve gastrectomy. conclusion: endoscopic surveillance of the remnant stomach and echo-endoscopy of the pancreas are no longer possible after rny gastric bypass. in cases where the need for such surveillance arises after an rny bypass, a patient-tailored approach is necessary. in our patient a laparoscopic conversion from an rny gastric bypass to a sleeve gastrectomy was performed. this approach keeps the patient's wish for weight loss intact while enabling further surveillance through natural-orifice endoscopy. 
a 47-year-old morbidly obese japanese woman with a body mass index of 41 kg/m2 suddenly complained of swallowing difficulty 4 months after laparoscopic roux-en-y gastric bypass surgery with a retrocolic roux limb route. an internal hernia through the defect of the transverse mesocolon was suspected on computed tomography, and emergency intervention was performed. surgery revealed no internal hernia. however, strong inflammation and adhesion were observed between the transverse mesocolon and the retrocolic roux limb. in addition, the roux limb on the oral side of the adhesion site was dilated and bent. the adhesion between the transverse mesocolon and the flexed roux limb was dissected, and the limb was linearized and re-fixed by suturing to the transverse mesocolon. however, since the difficulty of oral intake persisted, re-do surgery was performed. after resecting the roux limb involved in the severe inflammation, a 'new' roux limb was lifted cephalad via the antecolic route. finally, the gastric pouch and roux limb were re-anastomosed with 3-0 absorbable sutures in an interrupted, full-thickness, single-layer manner. in the present case, we experienced difficulty with both adhesiolysis and determining the accurate target line for resection at the 'old' gastrojejunostomy. however, blocking the blood flow of the 'old' roux limb facilitated accurate recognition of the target line. esophagogastric surgery, general and digestive surgery, hospital regional universitario de málaga, malaga, spain; 2 hepatobiliopancreatic surgery, hospital regional universitario de málaga, malaga, spain. introduction: marginal ulcer is one of the serious complications after a bariatric gastric bypass. tobacco, non-steroidal anti-inflammatory drugs (nsaids) and helicobacter pylori (hp) infection are known risk factors. 
methods: we present a 29-year-old woman who had undergone bariatric surgery 3 years earlier with a gastrojejunal (gy) bypass technique, performed because of intraoperative dehiscence of the staple line after attempting a vertical (sleeve) gastrectomy. she had persistent vomiting and epigastralgia from 3 months after the intervention, affecting her quality of life. upper gastrointestinal endoscopy (uge) was performed, describing an ulcer at the gy anastomosis. she started hp eradication treatment and treatment with proton pump inhibitors (ppis); tobacco and nsaids were discontinued, but she had only slight improvement. after 6 months the uge was repeated, showing peptic esophagitis and 2 marginal ulcers. the plasma gastrin level was normal. due to the persistence of symptoms despite conservative treatment, we decided on laparoscopic reoperation. we found herniated bowel in the petersen space, which was reduced, and the space was closed. we proceeded to truncal vagotomy. the gy anastomosis was resected (fig. 1) and performed again. finally, we performed an antrectomy. the pathology specimen showed ulceration. she was discharged home on the 5th postoperative day without any complications. results: a marginal ulcer after bariatric surgery appears in the jejunal mucosa of the gy anastomosis. the symptoms are epigastric pain, nausea and vomiting. acid, tobacco, nsaids and hp infection have an important role in its development. the first treatment is medical, eliminating the risk factors, but if it is not effective, treatment will be surgical, resecting the previous anastomosis. the usefulness of vagotomy is debatable, but it increases the percentage of success. in our case, we performed an antrectomy to avoid retained antrum syndrome. hernia through the petersen space is a cause of intestinal obstruction and abdominal pain, as this case shows. although we believe that the symptoms were mainly caused by the marginal ulcer, the internal hernia was probably also a symptomatic cause. 
conclusion: the treatment of a marginal ulcer is medical, eliminating the risk factors, but if it is not effective, surgery is indicated. results: bowel measurements and creation of the gastric pouch are identical in both cases. in the first case, the intestinal anastomosis is performed in the inframesocolic compartment once the small bowel has been divided. in the second case, this anastomosis is made next to the gastrojejunal anastomosis with the bowel uncut, performing the section once no leakage has been found. conclusions: laparoscopic roux-en-y gastric bypass is currently considered one of the techniques of choice in the surgical treatment of morbid obesity. there are variations and alternatives for its realization; knowing them allows the technique to be individualized for each type of patient. we present the clinical case of a 47-year-old female. she had a vertical banded gastroplasty procedure (in another clinic) 9 years ago, with an initial weight loss of 30 kg in a period of 2 months. she was seen at our clinic because she had been suffering from dysphagia to solids and diffuse abdominal pain for the last month. at physical exam we found a bmi of 39 and nothing else that called our attention. we performed an upper gi endoscopy and an egd transit study; we concluded that a gastric bypass would offer her the best results. therefore, we converted her vertical banded gastroplasty into a gastric bypass laparoscopically. she had an uneventful postoperative period and was discharged home without complications. aims: sadis emerged as a modification of biliopancreatic diversion with duodeno-ileal switch (bpdds) in which, after sleeve gastrectomy (sg), the duodenum is anastomosed to an ileal loop in a billroth-ii fashion. sadis has promising outcomes for weight loss and comorbidity resolution in morbidly obese patients, avoiding the high morbidity of biliopancreatic diversion with duodenal switch. 
clinical case: a 50-year-old patient who had undergone bariatric surgery two years earlier, consisting of a sleeve gastrectomy (sg). despite this operation and dietary and hygienic modifications, the patient gained weight in recent months, reaching a bmi of 56 kg/m2 and an excess weight of 78 kg. an endoscopy was carried out, which provided evidence of a gastric remnant of moderate size with flexible tissue, normal peristalsis, and fast emptying. the case was discussed in a joint session, leading to the decision to perform revision surgery. the decision was taken to apply sadis, a novel technique that had never been used before in andalucia. the rate of weight regain after the use of classical techniques such as sleeve gastrectomy (sg) or roux-en-y gastric bypass (rygb) is considerably high. revision surgery due to weight regain is necessary in many of these cases. sadis emerged as a simplified alternative to the use of bpdds as revision surgery following an sg due to weight regain, with good short-term results in terms of both weight control and comorbidity control. since only one anastomosis needs to be created, operative time diminishes, as does the rate of surgery-related complications. moreover, it can be used, through laparoscopy, in patients who have undergone previous complex abdominal surgery. conclusion: sadis showed a promising short-term weight-loss outcome and comorbidity resolution rate, but long-term data are missing and there is currently a high level of technical variability. on the other hand, further studies are required to measure its cost-effectiveness compared to the currently popular bariatric procedures, sg and rygb. aims: laparoscopic sleeve gastrectomy is the most common bariatric surgery technique because it has low surgical complexity and acceptable weight-loss results. however, 5-11% of patients present with insufficient weight loss, weight regain, reflux or dysphagia. 
in these cases, it is recommended to perform a second bariatric operation adding a malabsorptive component, such as a gastric bypass or duodenal switch. the video describes the technique of a laparoscopic biliopancreatic diversion with duodenal switch after a previous laparoscopic sleeve. the objective is to describe the safety of the technique and its subsequent success. methods: a 41-year-old female patient presented with morbid obesity and a bmi of 49 after undergoing a laparoscopic sleeve gastrectomy in 2010. initially, she had a percentage of excess weight loss of 62%, reaching a bmi of 33 after two years of follow-up. after this, she regained all the weight lost despite diet and exercise, presenting a bmi of 49. a contrast study (tegd) was performed, in which no complications of the previous surgery and no signs of gastroesophageal reflux or dysphagia were observed. the laparoscopic duodenal switch was proposed by the obesity unit committee in 2016 and performed without immediate postsurgical complications. the patient had a favorable postoperative course and was discharged three days postoperatively. results: at the present time, the patient has achieved 98% excess weight loss and has a bmi of 26.5. she presents good oral tolerance, with 3 stools a day without urgency. she does not present protein deficiencies; vitamin deficiencies are orally supplemented. the laparoscopic duodenal switch is a technique that can be performed safely after a sleeve gastrectomy in cases of insufficient weight loss or weight regain. patients presented greater weight loss after the duodenal switch than after the gastric bypass, with an excess weight loss of 74% compared to 64%, the difference being statistically significant. weight regain after gastric bypass is a challenging problem. a number of revisional surgical options have been reported. this is the case of a 48-year-old woman 10 years after lrygb. her initial bmi was 67, the lowest after surgery 28, and at presentation 48. 
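the outcome figures quoted in this and the neighbouring abstracts (bmi, percentage of excess weight loss) follow standard definitions; a minimal sketch of the arithmetic, assuming the common convention that "ideal weight" corresponds to a bmi of 25 kg/m2 (the patient figures in the example below are hypothetical, not taken from any of the reported cases):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """body mass index in kg/m2: weight divided by height squared."""
    return weight_kg / height_m ** 2


def excess_weight_loss_pct(initial_kg: float, current_kg: float,
                           height_m: float, ideal_bmi: float = 25.0) -> float:
    """%ewl = (initial - current) / (initial - ideal) * 100,
    where ideal weight is taken at the assumed ideal bmi."""
    ideal_kg = ideal_bmi * height_m ** 2
    return (initial_kg - current_kg) / (initial_kg - ideal_kg) * 100


# hypothetical example: a 1.65 m patient, 130 kg before surgery, 85 kg after
print(round(bmi(130, 1.65), 1))                      # initial bmi
print(round(excess_weight_loss_pct(130, 85, 1.65), 1))  # percentage of excess weight lost
```

note that %ewl depends on the ideal-weight convention chosen, which is one reason reported series are not always directly comparable.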
the video shows a robot-assisted laparoscopic conversion of rygb to loop duodenal switch. the roux limb is transected and dissected to the gastrojejunostomy. the gastrojejunostomy is resected and the gastric pouch is recreated over a bougie. the gastric blood supply is confirmed with icg. a gastro-gastrostomy is created to restore gastric continuity and a sleeve gastrectomy is performed. the duodenum is divided and a duodeno-ileostomy is created 300 cm from the ileocecal valve. the remaining roux limb is resected. the patient recovered uneventfully. conversion of rygb to loop duodenal switch requires the creation of as few as two anastomoses, in comparison to standard ds, which requires four. it is a safe option for patients with weight regain after lrygb. methods: irb approval and informed consent were obtained. a dissection is conducted to separate the descending mesocolon from gerota's plane, from the medial aspect to the peritoneal lining of the left parietal gutter. the peritoneal layer is incised parallel to the vessel and close to the colonic wall. the dissection is continued anteriorly up to the dissected parietal gutter. a passage into the mesentery of the upper rectum is created for placement of the stapler and dissection of the rectum. these maneuvers make it possible to straighten the mesentery, simplifying the identification and division of the sigmoid arteries. a caudal-to-cranial dissection of the mesentery is performed, from the sectioned rectum to the proximal descending colon, with a vessel-sealing device. it can be very useful to mobilize the colon in any direction: laterally, medially, or upward. the dissection is performed along the course of the vessel up to the proximal colon, with progressive sectioning of the sigmoid arterial branches. the specimen is extracted through a pfannenstiel incision. the anastomosis is performed transanally with a circular stapler according to the knight-griffin technique. 
results: we performed laparoscopic segmental colectomy using this approach in 21 patients with benign sigmoid lesions: 13 diverticulitis, 3 flat polypoid lesions (no lift-up sign), and 5 bowel endometriosis. the mean operative time and blood loss were 161.4 ± 15.7 min and 50 ± 40 ml, respectively. there was not a single conversion to open surgery and no leakage or stricture. only 2 cases of intraluminal bleeding and 1 case of wound infection (treated conservatively) were observed. conclusion: we consider this approach safe and useful for segmental colectomy performed with division of the sigmoid arteries close to the colonic wall. aims: to show, with a video, the clinical case of a patient operated on for colon cancer of the hepatic angle through a single suprapubic incision (ssilrh). methods: a 44-year-old male was assessed for abdominal pain and weight loss. on physical examination, a painful mass was detected in the upper right quadrant. colonoscopy revealed an ulcerated lesion in the hepatic angle, and the biopsy revealed a moderately differentiated adenocarcinoma. on abdominal ct a mass of 3 x 4 cm was observed (figure). the patient was operated on with the ssilrh technique, as shown in the attached video. results: the patient was placed in the supine position with the legs apart. the surgeon stands between the patient's legs. a 3.5 cm transverse skin incision was made in the midline, 1 cm above the pubis. the underlying fascia was divided transversely, the rectus abdominis muscle was exposed, and a purse-string suture was placed in the fascia. an 11 mm reusable trocar was inserted for the camera, a 6 mm reusable flexible trocar was placed at the 9 o'clock position and another trocar was placed at the 3 o'clock position. the ileocecal valve was released from the parietal peritoneum, as was the right mesocolon, by a lateral-to-medial approach up to the second portion of the duodenum. 
the hepatic angle was also dissected from lateral to medial. for the anastomosis, the 11 mm trocar was replaced with a 13 mm trocar and a stapler was introduced. a 5 mm 30° camera was inserted through the 6 mm flexible trocar. the small intestine was divided, as well as the proximal transverse colon, with an endo-gia stapler. an intracorporeal ileocolic anastomosis was performed. the specimen was removed through the suprapubic incision. he was discharged after 5 days without complications. the histological studies confirmed a differentiated adenocarcinoma of 8 x 7 x 6 cm. the surgical margins were free, without infiltrated lymph nodes (0/26), stage pt3n0. the ssilrh technique allows a complete resection of the mesocolon and complies with oncological principles. they can present with abdominal pain, nausea, acute abdomen, symptoms of intestinal obstruction, or asymptomatically with incidental diagnosis. their diagnosis can be difficult. the objective is to demonstrate the safety and efficacy of the laparoscopic approach in this infrequent pathology. material and methods: we present a video of the surgical intervention in a 32-year-old patient with functional dyspepsia and an incidental diagnosis of a pseudocystic mass of the right colon on ct scan: a giant diverticulum of the hepatic angle of the colon with fecaloid content under tension. the patient went to the emergency room for acute abdominal pain while awaiting colonoscopy; antibiotic treatment was established, and a laparoscopic approach was decided upon after the patient's evolution. results: intervention: complete laparoscopic approach, 4 trocars. a large tumor in the right colon, diverticular in appearance, with stony content inside and locoregional adenopathies; oncological radical right hemicolectomy, manual intracorporeal anastomosis, uneventful postoperative course, hospital discharge on the 4th day. definitive pathology: giant diverticulum with areas of intense mucosal ulceration, free edges. 
conclusion: the laparoscopic approach to symptomatic diverticula of the right colon is safe and effective. introduction: transanal minimally invasive surgery (tamis) is a surgical technique whose established indications are the complete exeresis of rectal polyps that are not resectable endoscopically or early rectal neoplasms with good prognostic criteria. transanal devices with a gel platform facilitate dissection in this field. however, one of the drawbacks of this approach is the oscillation of the pneumorectum, which hinders dissection and prolongs the surgical time. material and methods: we present the case of a male patient with a neoformation with central depression, located 8 cm from the anal margin on the posterior aspect of the rectum. the lesion occupied 25% of the circumference and was considered unresectable endoscopically. the endoscopic biopsies showed a tubulovillous adenoma with moderate dysplasia. results: a full-thickness excision of the rectal wall was performed, with subsequent suture of the defect. we show in the video the use of a glove interposed in the insufflation line to maintain the stability of the pneumorectum, the technique of dissection and suture, and the stability of the pneumorectum with this technique throughout the procedure. the use of a glove as a reservoir to stabilize the pneumorectum is an economical and easy-to-use method that can safely replace extra devices. aims: endometriosis is a gynecologic disorder defined by the presence of endometrial glands and stroma outside the uterine cavity. deep infiltrating endometriosis (die) invades more than 5 mm into the retroperitoneum of the pelvic sidewalls, the rectovaginal septum, or the muscularis of the bowel, bladder or ureters, the rectum being the most common bowel site of involvement. for symptomatic die, medical therapy should always be the first-line treatment. 
therefore, a minimally invasive approach using laparoscopy is considered the gold standard option, although challenging, aiming at complete disease excision. also, there are several advantages of natural orifice specimen extraction compared with an abdominal incision that may directly impact the postoperative results of these young patients. methods: we report the case of a 36-year-old female with a 12-month history of chronic pelvic pain, dyschezia and rectal bleeding. these symptoms were refractory to hormonal, antispasmodic and opioid therapy. magnetic resonance imaging reported a 2 x 2 cm nodule invading the rectal wall 10 cm from the dentate line. we performed a laparoscopy and found the nodule at the posterior uterine wall invading the anterior rectal wall. the nodule was invading the rectum over a large area, so we proceeded with segmental resection and added hysterectomy and salpingectomy at the patient's preference. the anastomosis was created intracorporeally and the specimen was removed through the vagina, performing in this way a totally laparoscopic procedure with natural orifice specimen extraction. results: the total operative time was 3 h, the postoperative stay was uneventful and the patient was discharged on day four. the pathological report showed an endometrioma of 4 x 4 cm predominantly involving the colonic muscularis propria. conclusion: laparoscopic surgery is a safe and feasible approach for the surgical management of deep infiltrating endometriosis of the rectum and the gold standard for young female patients, who often need multiple surgeries. in addition, natural orifice specimen extraction avoids potential complications of abdominal incisions.

week-day surgery, university sapienza, ospedale sant'andrea, rome, italy; urology, clinica mater dei, rome, italy. aims: we describe the case of a patient affected by a mass in the left kidney and a diverticular stenosis of the sigma.
methods: a 65-year-old woman complained of abdominal pain in the left flank and the left iliac fossa radiating to the hypogastrium, with fever and no passing of flatus. contrast-enhanced computed tomography (ct scan) showed a 7 cm mass of the superior pole of the left kidney and colonic diverticulitis with thickening of the wall and a microperforation of the sigma. she underwent medical therapy with resolution of the diverticulitis. after 4 weeks a laparoscopic nephrectomy and sigmoidectomy was planned. the patient was positioned on the right flank, and this position was kept for both procedures. we performed four trocar accesses along the left subcostal region and a periumbilical incision for specimen extraction. results: the postoperative course was uneventful. the patient was discharged on postoperative day 7. histopathological examination showed a renal cell carcinoma confined to the kidney with no positive lymph nodes and a diverticular stenosis of the sigma. laparoscopy allowed us to perform two procedures in a critical situation using few trocar incisions and obtaining good results.

background: the hartmann procedure consists of a sigmoidectomy followed by a terminal colostomy. a stoma is associated with complications and suboptimal quality of life, so the restoration of colonic continuity should at least be considered in every case. open restoration has been associated with significant morbidity and mortality. many authors have described the advantages of laparoscopic hartmann reversal. we want to go a step further, showing our experience using a combined laparoscopic and transanal approach in an attempt to improve the surgical technique in a patient with 5 previous abdominal surgeries and a rectovaginal fistula. methods: the transanal and laparoscopic teams work simultaneously.
by the abdominal approach, a pericolostomic incision is made, the distal affected colon is resected and a purse-string suture is performed around the anvil of the eea 31 mm single-use stapler with 4.8 mm staples (autosuture, covidien). a 12 mm umbilical trocar is placed for a 30° camera, and a gelport laparoscopic system (applied medical) with two 12 mm trocars is introduced through the colostomy wound. hard pelvic adhesiolysis was performed and the splenic flexure was also mobilized. the gelpoint path transanal access platform (applied medical) is introduced through the anal canal with three trocars in a triangle position. the proximal rectum and mesorectum are dissected up to the peritoneal reflection. the previous staple line with the resected tissue is then exteriorized through the anus. the distal rectum is prepared with a circumferential purse-string suture. the vaginal defect was sutured transanally. the proximal colon and the anvil are extracted through the rectal stump and connected to the circular stapler, performing an end-to-end anastomosis. results: the total operative time was 5 h. the postoperative stay was uneventful and the patient was discharged on day 5. conclusions: as in patients with rectal cancer, dissection of the stump in the hartmann reversal procedure may be better and associated with shorter operative time. as with any new surgical procedure, it is probably too early to draw conclusions, but nowadays the transanal approach combined with laparoscopy seems to be a safe and feasible technique to perform a hartmann reversal, especially in challenging cases.

intravenous and endoluminal contrast-enhanced ct revealed the presence of a large retroperitoneal fluid and gas collection due to diverticular perforation, extending from the pelvis to the iliac bifurcation and involving the left ureter. no water-soluble contrast media leakage or massive pneumoperitoneum was present.
after initial conservative treatment without significant improvement, an emergency laparoscopic left colectomy with primary anastomosis and laparoscopic drainage of the retroperitoneal collection was performed. the laparoscopic approach was very challenging due to the obesity of the patient and the presence of the abscess. the patient was discharged on pod 12 after requiring re-intervention for dehiscence of the left iliac mini-laparotomy on pod 7. conclusion: diverticular perforation in obese patients adds a further challenge to its laparoscopic treatment and deserves an aggressive surgical approach from its outbreak.

although intracorporeal anastomosis has been demonstrated to be safe and effective after right colectomy, limited data are available about its efficacy after left colectomy for colon cancer located at the splenic flexure. there are few studies comparing patients who underwent laparoscopic left colectomy with intracorporeal versus extracorporeal anastomosis. nevertheless, the literature shows no significant difference between intracorporeal and extracorporeal anastomosis in terms of oncological results. as for right hemicolectomy, intracorporeal anastomosis seems to show a trend towards a faster recovery after surgery due to the shorter time to flatus and lower postoperative pain expressed on the mean vas score. laparoscopic left colectomy with intracorporeal anastomosis is associated with a lower rate of postoperative complications, as for right colectomy. these results could suggest that a complete laparoscopic approach may be considered a safe method to perform laparoscopic left colectomy with the advantage of a faster recovery after surgery. as usual, further randomized clinical trials are needed to reach a more definitive conclusion. we show a video of a 58-year-old patient with a pure splenic flexure colon cancer who underwent a laparoscopic left hemicolectomy with intracorporeal anastomosis.
case presentation: here we describe the case of an 81-year-old asthmatic and hypertensive lady with an asa score of iii who presented to the emergency department after a right knee replacement with a four-day history of lower abdominal pain. she was septic upon arrival in the resuscitation room, immediately prompting the hospital's local sepsis management protocol. a ct scan of her abdomen showed a rectosigmoid perforation with free intra-abdominal air and fluid. the patient underwent a laparoscopic hartmann's procedure within 4 h of admission. after an uneventful postoperative recovery the patient was discharged home after a total of 4 days of hospitalisation. she was followed up in surgical outpatients with no adverse events over the subsequent months. conclusion: this case exhibits the feasibility of laparoscopic hartmann's procedure as a surgical modality for hinchey stage iv diverticulitis. the positive outcome supports the claim that, for experienced surgeons, laparoscopic hartmann's procedure remains a safe and viable option for elderly comorbid patients in the emergency setting.

introduction: mesenteric cysts are a very infrequent pathology; they usually present with an anodyne clinical picture, and their diagnosis is reached casually. objectives: to demonstrate the safety and efficacy of the laparoscopic approach in cases of intra-abdominal cysts of benign etiology, using mini-instruments, reducing surgical aggression while maintaining safety and efficacy. material and method: clinical case: a 23-year-old man with no personal history of interest. in the last two months he presented episodes of pain in the right hypochondrium; examination was without findings. us and ct scan: a 6 cm cystic tumor at the hepatic colon angle compatible with an uncomplicated benign mesenteric cyst; tumor markers and colonoscopy were normal. the relevant findings are shown. given the evolution, elective surgical treatment was decided.
result: intervention: laparoscopic approach, 4 trocars, two of 3.5 mm, 5 mm 30° optics; benign cystic tumor with colloid content, more than 7 cm in diameter, on the antimesenteric border of the colon, which could not be separated; mobilization and resection were carried out with an endo gia, including a portion of the colonic wall, plus appendectomy and extraction in a pouch. good postoperative course, discharge on the 2nd day. definitive pathology: mesenteric cyst, absence of malignancy. the laparoscopic approach is a valid and effective alternative in cases of benign intra-abdominal cystic pathology; the use of mini-instruments reduces surgical aggression, favoring the recovery of the patient.

a 70-year-old male with double postoperative coloanal stenosis and a woman with ultralow rectal neoplastic stenosis (4 cm from the anal verge) were treated. both patients were discharged 3 days after prosthesis positioning, without pain or complications. the first patient, with a protective ileostomy, showed fecal incontinence before the operation, and prosthesis positioning was performed because of rectal discharge of infected material and fever. fecal incontinence persisted after the procedure, but the fever resolved. the second patient, 93 years old, with an ultralow rectal tumor, underwent radiotherapy after prosthesis positioning; she decided not to be operated on and survives after 6 months in full well-being. conclusion: endoscopic prosthesis positioning is a consolidated procedure for the treatment of bowel obstruction. this study demonstrates that the procedure is safe and that this kind of prosthesis is suitable for correct positioning.

results: we present the case of a 67-year-old man with faecal occult blood test positivity in whom colonoscopy revealed a villous lesion at 9 cm from the anal verge. biopsies showed a tubulovillous adenoma with high-grade dysplasia. a rectal mri showed the lesion fixed to the postero-lateral left side of the lumen at 9 cm from the anal verge.
no pathological lymph nodes were reported. the extension study was negative. the case was presented to the multidisciplinary committee, which agreed on local excision. in october 2018 the procedure was performed without incident. the patient was placed in the lithotomy position, and a lesion occupying part of the lumen was found. resection was done without incident, with posterior closure using 2 continuous barbed sutures. he had an uneventful recovery and was discharged on the 3rd postoperative day. definitive pathological findings showed a ptis lesion with negative margins. after three months of follow-up the patient remains with good functional results and is awaiting the first endoscopic revision. conclusions: tamis is a safe and feasible technique with low morbidity that gives us an alternative for early rectal cancer or large rectal lesions, much less invasive than the techniques used until now.

complete mesocolic excision and d3 lymphadenectomy are two fundamental points in the oncological surgery of right colon cancer. most of the adenopathic recurrences of colon neoplasia, in tumors located at the hepatic angle and the ascending colon, are found near the head of the pancreas and the vascular axis of the superior mesenteric vein, due to an alleged incomplete dissection. we present a case of right colon neoplasia in which we performed a laparoscopic right hemicolectomy associated with a d3 lymphadenectomy. we used medial-to-lateral dissection of the mesocolon focused on the dissection of the superior mesenteric vein, with identification of the ileocolic vascularization, the right colic vessels and henle's trunk. this approach is safe and facilitates a correct resection of the mesocolon, which is approached following the embryological planes and with vascular ligature near the bifurcation. the performance of an extended lymphadenectomy allows a wider resection of the mesocolon and the excision of a greater number of lymph nodes, all of which can contribute to greater survival.
the efficacy of treatment of peritoneal carcinomatosis (pc) depends on a proper preoperative imaging diagnosis of the disease, but the poor sensitivity for identifying small peritoneal metastases is the major obstacle to achieving a complete resection, which leads to peritoneal recurrence. image-guided surgery using icg could represent an advance in the detection of small peritoneal nodules. only a few clinical studies have analyzed the role of icg in the staging of pc, especially from colorectal cancer, and in nearly all of them the selected approach was exploratory laparotomy. this study presents a laparoscopic case, as a minimally invasive way of cs in selected patients with limited pc.

a new category, tis, was created by the ajcc cancer staging manual, 8th edition, for low-grade appendiceal mucinous neoplasms (lamns) that invade or push into the muscularis propria. management of these tumors depends on stage and histology. traditionally, laparotomy was the most recommended approach; however, if laparoscopy is safe, it can be used. the laparoscopic appendectomy should be done with a 'no touch' technique, and a radical approach has recently been proposed for its treatment. the laparoscopic radical appendectomy should start by exploring the complete abdominal cavity. grasping of the appendix should be avoided. complete resection of the mesoappendix is mandatory. cecectomy with a stapled endo gia is necessary. the specimen must be extracted in an endobag. methods: we report the case of a 64-year-old female patient with a personal history of three caesarean sections. this patient was studied for chronic abdominal pain. computed axial tomography showed an appendix increased in size with a thickened wall. colonoscopy evidenced a lesion protruding from the appendiceal base, which was biopsied. results: a laparoscopic approach was used, and a large, wide appendix (10 x 2 cm) was seen. furthermore, a rounded right adnexal tumor was also found.
a radical no-touch laparoscopic appendectomy with stapled cecectomy was done. the intraoperative study showed a mucinous appendiceal tumor without serosal involvement. the final result was ptisnx (lamn) without affected resection margins. after 48 h of admission, the patient was discharged without incident. conclusion(s): minimally invasive surgery in lamns is possible if performed with sufficient experience, following specific rules and tips to manage these tumors. a correct follow-up should be carried out using tumor markers and computed tomography (ct).

introduction: resection of both benign and malignant colovesical fistulae can be particularly challenging and carries specific surgical considerations. often there is a large inflammatory mass sitting within a narrow pelvis, limiting specimen mobility and consequently access to dissection planes. additionally, with the underlying inflammatory process, the ureters may be displaced anatomically and be at risk of injury. aim: to demonstrate a streamlined and reproducible approach to the laparoscopic management of both benign and malignant colovesical fistula, with specific emphasis on the different modalities for bladder repair. method: the following method portrays an overall technique which is adapted depending on the clinical scenario and specific intra-operative findings: approach to the abdominal cavity in standard fashion. identification of the right ureter. postero-medial mobilisation of the mass to facilitate delivery out of the pelvis, followed by visualisation of the left ureter on the medial and lateral sides before division of the fistula. division of the fistula in benign disease, or resection of the bladder dome in malignant disease. transverse suprapubic skin incision. vertical incision through the linea alba to deliver the bulky specimen. intra/extracorporeal repair of the bladder dome.
results: all of the considered cases were successfully completed with a laparoscopic approach, irrespective of the malignant status of the disease in question. conclusion: both benign and malignant colovesical fistula disease can make the laparoscopic approach to resection challenging, especially when encountering a bulky mass in a narrow male pelvis. the stepwise and streamlined approach considered here can help facilitate successful and safe laparoscopic completion without the necessity to convert to open surgery.

background: primary neoplasms of the retrorectal space are very rare. they are located in an anatomically difficult area, hence a complete evaluation of the lesion is required to determine the extent of resection and the appropriate surgical approach, which includes posterior, abdominal and combined abdominoperineal approaches, depending on the characteristics of the lesion. objective: to show a combined laparoscopic abdominoperineal approach to a retrorectal tumor. method: we present a video of a combined laparoscopic abdominoperineal resection of a low-lying retrorectal tumor in a 73-year-old female without prior abdominal surgery. conclusion: retrorectal tumors are infrequent. their anatomical location can make the surgical approach difficult. preoperative imaging can provide useful information for surgical planning. in recent years, a minimally invasive surgical approach has been proposed. the laparoscopic approach is feasible and safe, but it is important to select patients adequately.

background: adult intussusception is a rare clinical event representing only 1-5% of all bowel obstruction cases and 5% of all intussusceptions, and the occurrence of adult intussusception due to colonic cancer is even rarer. aim: we present this case of malignant colo-colic intussusception and a literature review to increase awareness of the incidence of colo-colic intussusception due to colonic cancer.
case report and literature review: our patient is a 70-year-old female admitted to our hospital due to central abdominal pain, with a cea level of 4. she was further investigated with a ct scan of the abdomen and pelvis, which raised the suspicion of a mid transverse colon intussusception due to a large polypoid lesion. she was then assessed with urgent colonoscopy, which confirmed a mid transverse colon tumour, with biopsies confirming adenocarcinoma. laparoscopic extended right hemicolectomy with lymph node dissection was performed. upon laparoscopic exploration, the colo-colic intussusception was evident as described on the ct scan and as clearly shown in the video. histologically, the transverse colon carcinoma was a moderately differentiated adenocarcinoma with no lymph node involvement (0 out of 15 lymph nodes), tnm staging pt3 pn0 pm0, and an r0 resection. intussusceptions of the colon in adults are frequently found in the ileocecal portion or sigmoid colon, but rarely in the transverse colon. only two cases of adult intussusception of the transverse colon caused by colonic cancer have been reported; overall, 12 cases of colo-colic intussusception due to colonic malignancy have been reported in the literature. conclusion: colo-colic intussusception due to colorectal cancer is a rare clinical event; however, it should be included in the differential diagnosis of colonic obstruction. laparoscopic surgery is safe in malignant colo-colic intussusception.

aims: single-incision laparoscopic colectomy (silc) aims to achieve better cosmetic outcomes, less pain, and faster recovery compared to multi-port laparoscopic colectomy, but it also has several limitations, especially technical difficulties. we report our experience with single-incision robotic right hemicolectomy via video presentation. methods: we arranged a robotic-assisted single-incision right hemicolectomy for a 78-year-old female patient with an ascending colon tumor.
the operation was performed with a gloveport single-port device and a three-arm da vinci robotic surgical system through a small midline umbilical incision. the colectomy proceeded by a medial-to-lateral approach, along with one or two accessory instruments for maintaining sufficient bowel traction and surgical field exposure. after vessel ligation, complete colon mobilization and right-side omentum division, the robotic arms were undocked to perform the anastomosis extracorporeally. results: the operation was performed successfully without drainage tube placement. the total operative time was 193 min. bowel movement returned on post-operative day 5, and the patient tolerated a normal soft diet on post-operative day 7. she was hospitalized for 8 days after the operation. the pathology report revealed colon adenocarcinoma (t1n0m0, tumor size 1.8 cm), and 19 lymph nodes were harvested. conclusions: the single-incision robotic colectomy (sirc) approach seems feasible and safe in the treatment of ascending colon cancer. this surgical option provides less pain and less wound scarring for the patient. moreover, it also achieves further benefits for the surgical procedure compared to silc. first, it has better instrument flexibility and precision with the endowrist, as well as less instrument clashing. second, the improved camera stability achieved through the use of the robotic arm is unattainable with manual hand-controlled methods. third, the robotic-assisted approach gives us an ergonomic environment, which enables the operator to control the arms while sitting at the console, and to reassign them whenever they cross each other or block the surgical view. in spite of the advantages above, we still need to carefully consider each patient's situation for proper management.
recently, indocyanine green (icg) fluorescence has been introduced in laparoscopic colorectal surgery to provide detailed anatomical information. the aim of our study is the application of icg imaging during laparoscopic colorectal resections: to identify the sentinel lymph node (sln) and study its prognostic value on nodal status, to facilitate vascular dissection when the vascular anatomy of the tumor site is unclear, and to assess anastomotic perfusion to reduce the risk of anastomotic leak. after tumor identification, 5 ml of icg solution (0.3 mg/kg) is injected subserosally around the tumor. a full hd image1 s camera, switched to nir mode, displays fluorescence in about 10 min: the sln is identified and sln biopsy (slnb) is performed. when the tumor is in a difficult site, such as the hepatic or splenic flexure, 5 ml of icg solution (0.3 mg/kg) is injected intravenously. in about 30-50 s a real-time angiography of the tumor area is obtained; on this guide, vascular dissection and pedicle ligation are performed. after the anastomosis, another 5 ml of icg solution is injected to confirm anastomotic perfusion. if there is an ischemic area, a new anastomosis is performed. from november 2016, 70 patients were enrolled: 22 left colectomies, 38 right colectomies, 2 transverse resections, and 8 resections of the splenic flexure. in ten cases, intraoperative angiography led to the identification of the vascular anatomy. in two cases the anastomotic perfusion was inadequate and the surgical strategy was changed. four postoperative complications occurred, of which one was an anastomotic leak due to a mechanical problem. from november 2017, 40 patients were enrolled for slnb: 23 right colectomies, 11 left colectomies, 1 transverse resection and 5 splenic flexure resections. the sln was identified in 37 cases. 17 cases were found to be n0 on conventional examination and were subjected to ultrastaging. icg-enhanced fluorescence imaging is a safe, cheap and effective tool to increase visualization during surgery.
it is recommended to reduce the incidence of anastomotic leak, to facilitate the assessment of vascularization in order to perform oncological resections, and to perform slnb to study its clinical role on nodal status and for sln ultrastaging in order to identify micrometastases.

background: surgical emptying of the lateral pelvic lymph nodes (llnd) is a strategy used differently in western and eastern approaches to rectal cancer. there is evidence that lymph nodes ≥ 5 mm in the lateral compartment should be removed, even in the setting of neoadjuvant chemoradiation. minimally invasive surgery with a nerve-sparing technique and sharp dissection with minimal bleeding may help overcome the significant complexity of the procedure, which may have been a technical obstacle to implementation in the past. standardization of the technique may help implementation, with shorter learning curves and excellent surgical outcomes. methods: a 56-year-old male with distal rectal cancer underwent neoadjuvant crt for mrt3cn2m0, mremvi-positive, mrcrm-positive disease. there was one left obturator node of 7 mm prior to crt. twelve weeks after crt completion, the patient underwent tatme for the primary disease followed by left lateral node dissection by laparoscopy. results: the present video illustrates the most relevant surgical steps to perform lateral node dissection. the procedure has been didactically divided into 7 steps. the left ureter is identified and retracted using a vessel loop (step 1). identification of the common iliac vein and dissection with subsequent identification of the psoas and internal obturator muscles (step 2). identification and dissection of accessory vessels (step 3). identification of the obturator nerve and obturator vessels (step 4). blunt dissection of the obturator nerve (step 5). identification and ligation of the obturator vessels (step 6). the umbilical artery is skeletonized to allow identification and clearance of fatty tissue along the superior vesical arteries, internal iliac artery/vein, inferior vesical artery and internal pudendal artery (step 7). the postoperative course was uneventful. conclusion: standardization of lateral node dissection for rectal cancer is of paramount importance. laparoscopic lateral node dissection for rectal cancer provides an optimal anatomical view and allows safe dissection of the nodes of interest.

aims: the aim of this video is to describe our technique using fluorescence to assess the lymph flow, ensuring a complete mesocolic excision and central vascular ligation, in order to provide expertise contributing to the standardization of this new tool. methods: laparoscopic right colectomy with total excision of the mesocolon was proposed in all cases. for detection of the lymph flow, we injected indocyanine green dye (1 milliliter of a dilution of 25 milligrams of dye in 10 milliliters of distilled water) into the subserosal to submucosal layer around the tumor at 1 point with a 21-gauge needle laparoscopically after trocar insertion, and observed the lymph flow using a near-infrared system (visera elite ii, olympus) after injection. we also performed a total mesocolic excision with central vascular ligation in the region where the lymph flow was fluorescently observed. results: 7 (100%) patients were included. no intraoperative or postoperative complications occurred. no adverse effects were reported due to the infusion of indocyanine green. the lymph flow was visualized intraoperatively in a satisfactory way, helping the surgeon decide an appropriate separation line of the mesentery. the section line of the mesocolon was modified in 1 (14%) case based on the findings obtained by fluorescence. the mean operative time was 160 (42) min.
the morphometric laboratory data of the specimens, audited for correct complete mesocolic excision, were satisfactory according to oncological standards. conclusion: fluorescence lymphography during colorectal surgery was feasible and reproducible with a minimum of added complexity. fluorescence-guided surgery may be a helpful technique for determining an appropriate total mesocolic excision in colon neoplasms.

aims: this video shows our technique for complete mesocolic excision (cme) during right colectomy for cancer. methods: in this video, a 62-year-old patient underwent a laparoscopic right colectomy with cme for a cancer of the ascending colon diagnosed on a colonoscopy performed after a positive fecal occult blood test (fobt). after ct scan staging, we obtained 3d printed models to clarify the patient's vascular anatomy. the patient was placed in the supine position; 4 trocars were inserted in the left quadrants, as for a standard right colectomy. cme is performed by sharp dissection between the visceral fascia that covers the posterior layer of the mesocolon and the parietal fascia that covers the retroperitoneum (toldt's fascia). the ileocolic vessels are used as a landmark to identify the right anterior surface of the superior mesenteric vessels. with a caudo-cranial approach, the mesocolon is sharply dissected and the roots of the venous tributaries are ligated, up to the inferior margin of the pancreas. the gastrocolic trunk is dissected out with ligation of the right colic vein, while the gastroepiploic vein is preserved (harvesting the sixth group lymph nodes). the pancreas-duodenum fascial plane is entered and all the lymphoid tissue around the vessel surface is harvested. the procedure is completed with an ileo-transverse intracorporeal stapled anastomosis. results: in our experience, between april 2017 and december 2018, 46 laparoscopic right hemicolectomies with cme were performed. we had no major intraoperative vascular lesions.
no patients needed intraoperative blood transfusion. compared to our series of standard right colectomies, we did not notice any significant difference in post-operative complications. the follow-up is too short to demonstrate whether the cme approach has a better oncological outcome compared to standard right colectomy. conclusions: laparoscopic cme is feasible, although it requires a higher level of surgical expertise. the quality of evidence is limited and does not consistently support the superiority of cme compared to standard right colectomy. better data are needed before cme can be recommended as the standard of care for colon cancer resections.

h. bando, gastroenterological surgery, ishikawa prefectural central hospital, kanazawa, japan. aim: in right-sided transverse colon cancers, it is necessary to dissect the lymph nodes around the root of the middle colic vessels. but in this area there are dangerous organs, for example the pancreatic head, the duodenum, and the gastrocolic trunk. the point of our technique is that we resect the accessory right colic vein and the middle colic vein, and then dissect the pancreatic head and duodenum at an early step of the operation. method: we perform the operation with five trocars. the first step is to transect the great omentum and confirm the lower edge of the pancreas. there are many adhesions between the mesocolon of the transverse colon and the stomach and great omentum, and it is very important to dissect these adhesions accurately. secondly, the mesocolon is incised at the lower edge of the pancreas; in this way it is possible to detect the lower edge of the pancreas even in obese people. the anterior surface of the superior mesenteric vein is exposed. the accessory right colic and middle colic veins are resected, and then the front face of the surgical trunk, pancreas and duodenum is dissected caudally as far as possible. the superior mesenteric artery is dissected below the mesocolon after flipping up the transverse colon.
This approach is safe and feasible because the vulnerable organs are handled under direct vision, and extraction of the bowel through a small incision is easy. After the transverse colon is flipped up, the mesentery of the ileum is incised. The root of the ileocecal vessels is exposed and these are divided. The peritoneum in front of the superior mesenteric artery is incised, and the lymph nodes around the surgical trunk are dissected. This dissected area is easily connected with the one prepared beforehand. Uniquely, we resect the mesocolon and greater omentum from the root of the divided vessels to the resected side of the transverse colon, and the right colon is then mobilized by a medial approach. Conclusion: We dissect around the vulnerable organs in advance, which prevents major injury to them. Background: Good visualisation of the operative field is a fundamental requirement for safe laparoscopic colorectal surgery. Over the past 25 years of the senior author's experience, camera systems have evolved from single-chip to three-chip, high definition (HD) and, most recently, the 4K system. In parallel, the rest of the infrastructure, such as cables, processors and monitors, has also undergone improvements, resulting in better image quality. Aim/Methods: We present a video of a case of laparoscopic total mesorectal excision (TME), performed with strict adherence to our previously published 'stepwise approach to laparoscopic colorectal surgery', which places particular emphasis on safety. TME was performed in a 54-year-old male patient with a history of previous abdominal as well as robotic prostatic surgery. The procedure was filmed with all components, including the camera head, cables, processing unit, screens and the recording/mixing decks, being 4K. Multiple external 4K cameras were also used. Live transmission to a remote audience as part of our masterclass was achieved using appropriate bandwidth and projection onto 4K screens.
Results: Feedback from the operating team as well as from the live audience was that the image quality was far superior to HD systems. The 4K system afforded a degree of clarity well beyond usual expectations. The depth of field also appeared different initially, but within a few minutes of starting the procedure and acclimatising, the effects were appreciable. The clarity of the image, which showed the fine details of the dissection planes and anatomical landmarks, as well as the vibrancy of the vasculature, gave a distinct three-dimensional effect to the picture. This excellent visualisation added one more layer of safety and complemented our stepwise approach for a successful procedure. Conclusion: The laparoscopic 4K system, in our practice, proved to be a beneficial visualisation tool that enhanced the accuracy of dissection. Vital structures appeared more vivid and clearer, with dissection planes more easily apparent. In our opinion, the laparoscopic 4K system, when combined with a systematic approach, enhances safety, especially in complex laparoscopic colorectal surgery. Accumulating evidence suggests that laparoscopic surgery for colon cancer has feasibility and efficacy equal to or greater than conventional laparotomy. For patients with a past history of laparotomy, however, especially a history of colon resection, there is almost no evidence for laparoscopic re-colectomy for metachronous colon cancer. Since 2016, we have used submucosal local injection of indocyanine green (ICG) around the primary colorectal cancer using intraoperative endoscopy, and complete mesocolic excision (CME) has been convincingly carried out, as verified by complete resection of the ICG-positive area. Although the oncological efficacy of ICG-guided surgery has not yet been clarified, since it allows easy judgement of whether CME has been achieved, we consider ICG-guided surgery for primary colon cancer useful for education.
Recently, we have applied this approach to ensure convincing CME in patients with colorectal cancer and a history of colonic resection. A representative case is as follows. A 60-year-old female had been diagnosed with advanced sigmoid colon cancer, and laparoscopic sigmoidectomy with high tie of the inferior mesenteric artery had been performed 10 years earlier. She was then diagnosed with metachronous descending colon cancer. The feeding artery of the new tumor should have been the left colic artery; however, it had already been resected, and the true feeding artery could not be identified on preoperative examination. By injecting ICG into the submucosa endoscopically during the operation, it was clearly observed that the lymphatic flow from the tumor was directed to the inlet portion of the inferior mesenteric vein (IMV). Re-CME was performed by ligating the inlet of the IMV. Intraoperative ICG was also useful for clarifying the border for detaching the adhesions from the previous operation between the mesentery and the retroperitoneum (figure). Interestingly, ICG flow in the mesentery directed to the anal side was clearly disrupted at the previous anastomotic site. We believe that laparoscopic surgery under ICG guidance is a potentially useful tool that can confirm the evidence to date more intuitively and in real time. Further studies, ideally randomized controlled trials, are required to define the oncological usefulness of ICG-guided surgery for re-do colectomy. The operative video will be presented at the meeting. Background: Laparoscopic lateral pelvic node dissection (LLPND) is a minimally invasive alternative to open surgical therapy for advanced low rectal cancer patients. In this video, we demonstrate the technique of LLPND for rectal cancer patients with suspected lateral lymph node metastases after neoadjuvant chemoradiation. Methods: The principle of this approach is en bloc resection with the bilateral peritoneum.
The peritoneum is incised lateral to the ureter, following the line between the external and internal iliac vessels. In the next step, LLPND dissection of the regional lymph nodes and high ligation of the inferior mesenteric vessels were performed. A contralateral LLPND was performed in the same manner as a mirrored technique. After extracting the specimen, an end-to-end double-stapled circular anastomosis was performed. Results: The procedure was done safely without any complications. The surgical duration was 245 min, and the blood loss was 50 ml. The number of harvested lateral pelvic lymph nodes was 15. The TNM stage was ypT4aN2M0. Conclusion: This approach enables extended resection during lymph node dissection while allowing autonomic nerve preservation. It may be a helpful approach in the treatment of locally advanced rectal cancer with lateral lymph node metastasis. Aims: The aim is to present an inspection method in which anastomotic vascularity is tested simultaneously, using intraluminal and intraperitoneal indocyanine green fluorescence angiography. Methods: A sixty-five-year-old female patient underwent standard laparoscopic-assisted low anterior rectal resection for rectal carcinoma. The proximal end of the bowel and the stump of the distal rectum were checked using near-infrared fluorescence imaging with a D-Light camera. After confirming adequate perfusion of the bowel, the end-to-end stapled anastomosis was performed under laparoscopic visualisation. The D-Port proctoscope was inserted into the anus and a second ICG injection was administered. The perfusion of the anastomosis was evaluated transabdominally, and the viability of the mucosa transanally, with two D-Light cameras simultaneously. The anastomosis was located 4 cm from the anal verge. An air-water leak test and a bowel tension test were performed.
After evaluation of anastomotic viability with fluorescence imaging, and after negative air-water leak and tension testing, the surgeon decided not to perform a preventive ileostomy. Results: The patient had no complaints for the first three days postoperatively. Nevertheless, the CRP level was rising: 69.6 mg/l on the second postoperative day and 103.5 mg/l on the 4th postoperative day. The patient complained of pain in the right iliac area and below the symphysis on the 4th postoperative day. Abdominal and pelvic computed tomography with oral contrast was performed, which ruled out anastomotic leakage. Intravenous cefuroxime and metronidazole were prescribed. The CRP level was 16.3 mg/l on the 10th postoperative day. The patient was discharged on the 11th postoperative day without a preventive ileostomy. Conclusion: Using this original, standardized colorectal anastomosis inspection method, we can determine which patients do not need a preventive ileostomy after low colorectal anastomosis. The two important causes of anastomotic leak are local ischemia and staple-line defects. The purpose of this study was to investigate a combination of methods aimed at reducing the risk of anastomotic leak after anterior resection for rectal cancer. Methods: We retrospectively analyzed perioperative outcomes of the first 30 patients who underwent modified laparoscopic anterior resection with partial mesorectal excision for rectal cancer without a preventive stoma. The operative technique was modified to include routine preservation of the left colic artery (Fig. 1) (aimed at improving anastomotic blood supply), manual suture invagination of the 'dog ears' (Fig. 2) (aimed at reducing the risk of staple-line defects), and transperineal pelvic drainage with pelvic peritoneum reconstruction (aimed at reducing the risk of reoperation in case of leakage).
Anastomotic leak rate, reoperation rate, left colic artery preservation rate, additional operative time (time required for left colic artery preservation, 'dog ear' invagination and pelvic peritoneum reconstruction), blood loss, morbidity and mortality were analyzed. Results: One patient (3.3%) developed an asymptomatic leak, which was managed conservatively. There was no postoperative mortality and there were no reoperations. Median additional operative time was 56 min for the first 15 procedures and 41 min for the last 15. Left colic artery preservation was successful in 26 (86.7%) patients. Median blood loss was 35 ml. Conclusions: The additional techniques used in our modification of laparoscopic anterior resection are safe and may lead to improved perioperative outcomes. However, they are associated with increased operative time, which may be reduced as the learning curve progresses. Introduction: Parastomal hernias are a significant cause of morbidity after abdominal ostomy formation, with an overall lifetime incidence exceeding 80%. The complications can range from a bulge resulting in stoma bag leakage to life-threatening bowel obstruction. The PREVENT trial sought to determine whether prophylactic use of polypropylene mesh would decrease the incidence of parastomal hernias, with initial results demonstrating that it was safe to use in permanent end stomas. Aim: To demonstrate a reproducible and streamlined technique for laparoscopic parastomal hernia repair with intraperitoneal funnel mesh, and to assess the outcomes with the Clavien-Dindo (CD) classification tool.
Method: 10 parastomal hernia repairs (7 colostomy, 3 ileostomy) were considered, with the following approach adopted for each: a swab is sutured into the stoma orifice to prevent wound contamination; sharp dissection of the stoma using the parachute technique; the stoma end is refreshed, followed by a change of gloves and instruments; lateral stay sutures are placed to tighten the sheath later on; pneumoperitoneum is temporarily created to assess/divide adhesions; the funnel mesh is placed in situ, orientated in the optimal intra-abdominal position, and sutured to the pericolic fat to prevent slippage; a medial suture is placed to narrow the sheath further; pneumoperitoneum is re-created and the mesh fixed in place with double-crown laparoscopic tacks; the redundant portion of the end stoma is excised and the stoma formed. Results: At a median follow-up of 12 months: no recurrence; no reported symptoms of pain or decreased stoma functionality; one superficial wound infection treated with bedside drainage (CD grade 1). Conclusion: Laparoscopic parastomal hernia repair with intraperitoneal funnel mesh for permanent end stomas yielded good outcomes in our patient cohort. A streamlined and reproducible approach ensures that the technique can be adopted for prophylactic, primary and recurrent repair. Parastomal hernias are common and can be associated with significant morbidity. Taking this into account, in conjunction with the recommendations from the initial results of the PREVENT trial, one may consider prophylactic use of a mesh in patients receiving a permanent end stoma. General Surgery, Rambam Medical Center, Haifa, Israel. A 29-year-old female patient was referred to our institution with a common bile duct stricture caused by iatrogenic injury during laparoscopic cholecystectomy. During the last year, the patient had suffered recurrent episodes of ascending cholangitis. Recently, she underwent ERCP and a severe stricture of the middle CBD was diagnosed. A plastic stent was inserted through the CBD.
MRCP also showed a severe stricture of the CBD with dilatation of the biliary tree proximal to the stricture. Due to the severe and resistant stricture of the middle CBD (it did not resolve after repeated dilatation), she was referred for operation. The patient underwent da Vinci robot-assisted excision of the CBD stricture, hepaticojejunostomy and extracorporeal jejunojejunostomy of the Roux-en-Y limb. Total operating time was 320 min. On day three after the operation the patient started a regular diet, and she was discharged home on day four. Final pathology showed a segment of CBD with severe inflammation. Aims: Extrahepatic bile duct resection is the standard surgical technique for the treatment of Bismuth stage I and II Klatskin tumors [1]. Methods: An 85-year-old patient presented at the emergency room (ER) with right upper abdominal pain, fever and elevated inflammatory markers on blood tests. The patient underwent computed tomography (CT), which showed a tumor involving the lower tract of the main bile duct. Endoscopic retrograde cholangiopancreatography (ERCP) with biopsy (intraductal papillary neoplasm of the bile duct, IPNB, with high-grade dysplasia) and stent placement was performed. Considering the good general condition of the patient and the absence of vascular and nodal invasion on preoperative imaging, a minimally invasive surgical resection of the biliary tract with cholecystectomy was performed. Results: A four-port laparoscopic biliary tract resection with cholecystectomy was performed, with lymphadenectomy of the hepatic hilum. No vascular or liver infiltration was found. The hepatic hilum was completely skeletonized. Resection of the bile duct was performed with an adequate free margin. Biliary reconstruction with a Roux-en-Y technique was performed, with a fully laparoscopic hepaticojejunal anastomosis, and an abdominal retro-anastomotic drain was placed. The operative time was 350 min.
The postoperative course was complicated by a low-output biliary leak that was treated conservatively. The patient was discharged on the 25th postoperative day in good general condition. Histological examination revealed a moderately differentiated in situ cholangiocarcinoma of the main bile duct involving the cystic duct, with free resection margins (pT1bN0R0). Conclusions: Laparoscopic resection of the biliary tract is a challenging procedure that, in expert hands and in selected cases, allows a negative pathological margin, complete lymph node retrieval and entero-biliary bypass. Injury to the extrahepatic bile duct during bile duct or hepatic surgery can be reduced by better real-time visualization. Recently, indocyanine green (ICG) fluorescence imaging has been used in laparoscopic hepatobiliary surgery. We applied ICG fluorescence imaging in a patient with a huge hepatic cyst that severely deviated the extrahepatic bile duct. The patient had previously undergone laparoscopic cholecystectomy, and the huge hepatic cyst adhered firmly to the perihepatic structures, including the bile duct. ICG fluorescence imaging correctly identified the common hepatic duct and the remnant cystic duct and allowed a more meticulous and easier dissection. Therefore, ICG fluorescence imaging may guide safe and accurate dissection and excision in hepatobiliary surgery. Results: A total of 2,321 patients underwent ERCP; 3.2 percent (75 cases) had a first failed ERCP, and 13 of them were unsuccessful at the second ERCP attempt. In-hospital stay was more than 7 days in 11 percent and 4 to 7 days in 89.2 percent, with an average of 6 days. Conclusions: Before, during or after LCBDE, ERCP remains the gold standard for the management of choledocholithiasis confirmed by clinical, laboratory and imaging findings. LCBDE is a very good option, but it requires experience, specific skills and specialized equipment.
Over 9 years, the success rate in our hospital was 95.3%, and there were no postoperative complications such as biliary peritonitis, pancreatitis or liver abscess. Aims: Easier intraoperative recognition of the biliary anatomy may be accomplished by using near-infrared (NIR) fluorescence imaging after an injection of indocyanine green (ICG). Neither radiological support nor additional intervention, such as opening the cystic or common bile duct, is required, making it an easy, real-time technique to use during surgery. The aim of this video is to describe our experience with fluorescence-guided cholangiography in different clinical situations. Methods: Intravenous injection of ICG is used to illuminate the extrahepatic biliary anatomy. However, the simultaneous enhancement of the liver parenchyma can obscure clinical details. The key lies in the ICG dose used, the route of administration and the time since its infusion. The first case shows a scheduled cholecystectomy in which a dose of 1 ml of a dilution of 25 mg of dye in 10 ml of distilled water was administered intravenously 3 h before the intervention. The second case shows an urgent cholecystectomy in which a dose of 30 ml of a dilution of 25 mg of dye in 1000 ml of distilled water was administered into the gallbladder during surgery. All patients underwent laparoscopic cholecystectomy with the traditional four-port technique. All procedures were performed using a 30-degree 10 mm laparoscope with NIR imaging capability (Visera Elite II, Olympus). Results: There were no intraoperative or postoperative complications. There was no increase in operative time due to the use of ICG. In the first case, clear identification of the cystic duct and the main bile duct was obtained thanks to the biliary excretion of ICG and its intravenous clearance. In the second case, identification of the cystic duct, the main bile duct and the cystic artery was possible due to the intragallbladder absorption of ICG.
Conclusion: Fluorescence-guided cholecystectomy clarifies the dissection plane and can be considered a means of increasing the safety of laparoscopic cholecystectomy. Awareness of the doses, timing and possible routes of administration is essential to generalize the technique and make it useful in different scenarios. Introduction: Mirizzi syndrome type 2 is an uncommon cause of obstructive jaundice caused by an inflammatory response to an impacted gallstone in Hartmann's pouch or the cystic duct, with a resultant cholecystocholedochal fistula. The obstructive biochemical changes can be caused by direct extrinsic compression from the impacted gallstone, by the fibrosis caused by advanced chronic cholecystitis, or by the established fistula. Objective: We present a case of Mirizzi type 2 syndrome with choledocholithiasis that was resolved by a laparoscopic approach. Material and methods: A 28-year-old female patient with no past medical history. The present illness began with jaundice over the preceding 3 days; she received symptomatic treatment with poor improvement. Liver and biliary tract ultrasound reported a 12 mm common bile duct and a gallbladder with a 5 mm wall. Endoscopic retrograde cholangiopancreatography was then performed, with successful endoscopic sphincterotomy and removal of gallstones, but the patient's jaundice persisted after the procedure. The patient underwent cholecystectomy and laparoscopic common bile duct exploration, where the findings were Mirizzi type 2 according to the Csendes classification, chronic cholecystitis and choledocholithiasis. Results: In this laparoscopic approach we performed a partial cholecystectomy and bile duct exploration with removal of residual gallstones. Closure of the choledochotomy was performed with simple knots using Vicryl 3.0. A subhepatic drain was left. The patient showed an adequate clinical course and was discharged after 4 days.
Conclusions: It is important to properly identify the anatomy at the time of surgery to avoid injury to the common bile duct. Operative treatment of Mirizzi syndrome type 2 includes laparoscopic or open subtotal cholecystectomy, placement of a T-tube, or choledochoplasty. Near-infrared fluorescence cholangiography (NIRF-C) is an innovative intraoperative imaging technique that allows real-time enhanced visualization of the extrahepatic biliary tree by fluorescence. Thanks to the development of laparoscopes/endoscopes with light sources emitting infrared frequencies, it is possible to visualize anatomical structures (vessels, ureters, bile ducts, etc.) through the luminous intensity of substances (fluorescein, methylene blue, indocyanine green) injected into the patient. This technology may be considered an important teaching tool for laparoscopic surgery, especially for young surgeons on their learning curve, and it could reduce the risk of iatrogenic bile duct injuries during laparoscopic cholecystectomy. The following video presents a series of intraoperative fluorescence images of biliary anatomy of educational interest, also detecting anatomical variations of the cystic duct. A. Umezawa, Minimally Invasive Surgery Center, Yotsuya Medical Cube, Tokyo, Japan. Aims: Laparoscopic cholecystectomy (Lap-C) for cholecystolithiasis has become standard. However, serious bile duct injury has been reported as a complication. Repeated colic and chronic inflammation in cholecystolithiasis lead to so-called difficult gallbladder conditions, such as dense fibrosis and scarring of the tissue, and dissection of Calot's triangle carries a risk of bile duct injury. The critical view of safety (CVS) is the best-known landmark for safe cholecystectomy. In the revised Tokyo Guidelines 2018 (TG18), important landmarks and bailout procedures have been proposed.
These are intended for the difficult gallbladder in which the CVS cannot be achieved. Methods. Landmarks: the baseline of segment 4 of the liver and Rouviere's sulcus should be confirmed; the gallbladder wall itself is also a useful landmark. Bailout procedures: when dissection of Calot's triangle is considered impossible, bailout procedures should be considered. Subtotal cholecystectomy leaving the neck is one option; the fundus-first technique is another approach. However, because the fundus-first technique can lead to serious bile duct injury, dissection should stop at the neck. In this video, the first case shows the importance of landmarks, drawn from near-miss cases of misidentification injuries; the second case shows a bailout procedure, subtotal cholecystectomy with the fundus-first technique. Results: In the atrophic gallbladder (case 1, near miss), it is easy to misidentify the junction with the common bile duct as the gallbladder neck; the neck and the common hepatic duct were easily lifted together. By confirming the landmarks, the misidentification was corrected and bile duct injury was avoided. In case 2, since Calot's triangle was obscured due to repeated cholecystitis, dissection of the gallbladder was performed from the fundus to the neck, and the gallbladder was excised with the cervical portion remaining; the remaining neck was closed. In each case, intraoperative cholangiography was performed and confirmed that there was no bile duct injury. Without postoperative complications, these patients were discharged on postoperative day 2, as after a usual Lap-C. Conclusion: During Lap-C for the difficult gallbladder, the most troublesome issue is bile duct injury. Confirming landmarks and switching to bailout procedures can contribute to avoiding bile duct injury and achieving a safe Lap-C. Aims: Choledocholithiasis is an important cause of morbidity and is present in about 18% of patients undergoing cholecystectomy.
Its treatment should be done in the same operative session, avoiding the morbidity, hospitalization time and costs of multiple procedures. The transcystic approach is preferable to avoid the morbidity associated with choledochotomy, but large stones can preclude this procedure. The use of laser lithotripsy for stone fragmentation is an option to enable transcystic extraction. Methods: We present a video of laparoscopic transcystic common bile duct (CBD) exploration for choledocholithiasis. Results: A 65-year-old female patient with a previous hospitalization for acute cholangitis with choledocholithiasis underwent laparoscopic cholecystectomy with intraoperative cholangiography, which showed a 1 cm stone in the distal CBD. Holmium laser lithotripsy achieved stone fragmentation and enabled extraction by the transcystic route using a basket. The patient was discharged on the 4th postoperative day with no complications. Conclusion: The use of laser lithotripsy for large CBD stones is safe and effective, making the transcystic approach possible and avoiding the morbidity of choledochotomy. Surg Endosc (2019) 33:S485-S781. Gallbladder adenocarcinoma is rare and extremely aggressive. Its incidence is higher in elderly females, and its progression is rapid and silent, with a dismal prognosis if diagnosed at advanced stages. We present the case of a 77-year-old female with dyspeptic complaints. Abdominal ultrasound revealed a 2 cm solid lesion of the gallbladder suspicious for malignancy. CT confirmed the presence of a 28 x 13 mm vegetant mass on the free border of the gallbladder fundus. We performed a radical cholecystectomy with lymphadenectomy and liver bed excision. The postoperative period was complicated by a urinary tract infection, with full recovery after antimicrobial treatment. The histological specimen revealed an adenocarcinoma of the gallbladder (T1bN0M0), and the patient remains asymptomatic and tumour-free 9 months after surgery.
Gallbladder cancer treatment depends on the stage and clinical presentation of the disease. Complete surgical excision is the only curative treatment and should include a limited hepatectomy and portal pedicle lymphadenectomy. Laparoscopic surgery might be an option in early stages, although it is challenging and requires expertise in both hepatobiliary and laparoscopic surgery. The patient was seen at the emergency room for a two-month history of abdominal pain associated with jaundice. She was evaluated by the surgical team and diagnosed with acute cholecystitis and moderate risk of choledocholithiasis. The initial surgical plan was cholecystectomy with intraoperative cholangiography. During surgery, firm adhesions were found from the gallbladder to the omentum, with friable, edematous tissue that bled easily, and difficulty was encountered during dissection of Calot's triangle. An intraoperative cholangiogram was done through Hartmann's pouch without correctly identifying the biliary tract. Therefore, an endoscopic retrograde cholangiopancreatography (ERCP) was done to visualize the correct anatomy. During the ERCP, a stenotic common hepatic duct was found and no stones were visualized. A biliary endoprosthesis was placed and she was discharged asymptomatic. A month later, the patient returned to the emergency room with abdominal pain. On abdominal CT, we found that the endoprosthesis had migrated to the 4th portion of the duodenum. A second ERCP was done, and this time we found a large stone (1.5-2 cm). Aims: During residency training you watch your teacher perform laparoscopic cholecystectomy with ease, and even perform several steps yourself. But as a young surgeon confronted with a patient with acute cholecystitis, you are filled with emotions and do not know where to start the gallbladder dissection.
The aim of this presentation is to show young surgeons that you can, and must, achieve the critical view of safety when performing laparoscopic cholecystectomy for acute cholecystitis. Methods: We present the case of a 42-year-old female patient, BMI 36.3, who presented with grade II (moderate) acute cholecystitis. Following the Tokyo Guidelines, we initiated antibiotics and general supportive care, but without clinical improvement, and the patient was scheduled for laparoscopic cholecystectomy. Results: At initial exploration we identified a 20 cm long gallbladder with a thick wall that was difficult to manipulate. We opted for an antegrade cholecystectomy, in our opinion the best option in acute cholecystitis. The dissection was started with hook electrocautery and then continued with a combination of blunt dissection with the aspirator and dissection with the hook. On reaching the pedicle, blunt dissection was used in order to appreciate the anatomy of the cystic duct and cystic artery. After correct identification, these structures were clipped and divided. A drainage tube was then placed and the abdomen deflated. Conclusion(s): As a young surgeon dealing with acute cholecystitis, you must keep calm and try to achieve the critical view of safety before transecting the cystic duct and cystic artery. This can be achieved with a combination of blunt and sharp dissection, keeping the camera clean, and good collaboration with the assisting surgeon. Conclusions: Here, an easy and reproducible method is described for subsequent macroscopic analysis by the surgeon following cholecystectomy. In addition, we depict several frequent macroscopic abnormalities in order to provide surgical colleagues with examples of macroscopically abnormal gallbladders. Left hepatectomy is a demanding and difficult procedure, still limited to reference centers. The caudal approach with exposure of the middle hepatic vein is a reliable way to achieve a safe and reproducible left hepatectomy.
With this technique of exposing the middle hepatic vein, we believe we can perform a safe and feasible laparoscopic left hepatectomy, increasing the quality of the hepatectomy. We present a 47-year-old woman with intrahepatic and common bile duct lithiasis who had previously undergone ERCP. With unresolved intrahepatic lithiasis, the patient was scheduled for a laparoscopic left hepatectomy. The minimally invasive approach for ALPPS in a patient with a large hepatocellular carcinoma in a liver with severe steatosis is shown. During the first stage a partial ALPPS is performed, and PVE is performed on postoperative day one. Fifteen days after the first stage, both liver volume and function (by HIDA scan) are re-assessed. Right hepatectomy (the second stage of ALPPS) is then conducted by a laparoscopic approach. Hepato-Bilio-Pancreatic Surgery, Centro Hospitalar São João, Porto, Portugal. A 68-year-old woman with a previous history of anxiety and catheter ablation to treat cardiac arrhythmias was studied for multiple pancreatic cysts incidentally discovered on routine ultrasound. MRI showed multiple cystic tumors throughout the pancreas, the largest of which was 15 mm. This led to a suspicion of multifocal side-branch intraductal papillary mucinous neoplasm (IPMN), with minimal dilatation of the main pancreatic duct. Echoendoscopy was subsequently performed, indicating probable multifocal IPMN. FNA was carried out during this procedure, with aspiration of cystic content that was sent for CEA analysis and cytology. Cytology was compatible with a mucinous neoplasm with mild atypia, and CEA was 98 U/ml. A spleen-preserving, totally laparoscopic pancreatoduodenectomy was proposed. The procedure was uneventful and the patient was discharged on the 5th postoperative day. Pathology revealed a 19 mm IPMN with severe dysplasia and 3 foci of microinvasive ductal adenocarcinoma of 1 mm (pT1N0R0).
indocyanine green immunofluorescence guided laparoscopic partial hepatectomy y. tai obtaining a negative tumor margin during laparoscopic hepatectomy has always been a very challenging topic for surgeons, in that the surgeon is not able to palpate the tumor during laparoscopic surgery. although intraabdominal echo is available, it demands great experience and skill. with the guidance of icg immunofluorescence, surgeons can avoid failing to obtain adequate negative margins or resecting too much healthy liver. icg has traditionally been used to estimate liver function prior to hepatectomy. it binds to plasma protein, has a peak absorbance at 780 nm, and emits fluorescence with a wavelength of approximately 800 nm. icg is preferentially retained in or around biliary malignancies due to impaired biliary excretion of hepatocytes in the affected area. we performed icg immunofluorescence guided laparoscopic partial hepatectomy on a 57-year-old male who suffered from hcc located at segments 5 and 6. icg was injected 3 days prior to the operation. while allowing evaluation of the liver, it also enabled us to use a high-end laparoscopic camera system equipped with integrated filters for detection of near-infrared fluorescence. during the surgery, we were able to clearly locate the borders of the malignancies through the use of the integrated filters combined with icg injection. the pathology study also confirmed that an adequate tumor-free margin (> 0.5 cm) was obtained in both tumors, and the patient's condition was stable as well. icg immunofluorescence guidance enables surgeons to obtain an optimum result in tumor resection through laparoscopic surgery. it also has the ability to detect bile leakage. with the use of icg immunofluorescence, surgeons will have higher chances of achieving adequate negative margins. 
background: parenchymal-sparing hepatic resection has the advantage of preserving valuable tissue in chemotherapy-treated livers, assuring an adequate future remnant volume without compromising long-term survival. moreover, the laparoscopic approach offers the decreased postoperative morbidity of minimally invasive surgery. whenever technically feasible, this kind of procedure should be considered a suitable alternative to the classic major hepatectomy for the treatment of multiple colorectal liver metastases. methods: a 69-year-old male with a previous history of laparoscopic sigmoidectomy in november 2014 for a pt2n0m0 sigmoid adenocarcinoma. a control ct scan three years later showed liver metastases in segments v, viii, ii and the caudate lobe. after chemotherapy (xelox), control mri and pet scans showed a good response. he was proposed for a laparoscopic parenchymal-sparing liver resection. results: total operative time was 3 h and 45 min with no intraoperative complications. the patient presented right atelectasis as the only postoperative complication, which resolved with respiratory therapy. he was discharged in 4 days. the pathology report showed that the lesions in segments v and viii had no viable tumor (100% fibrosis) and the lesions in segment ii and the caudate lobe had moderately differentiated adenocarcinoma. margins were free in all the lesions. after a 6-month follow-up, the patient has no recurrence and normal liver function tests. conclusion: minimally invasive liver resection is possible in patients with multiple bilobar liver metastases and allows parenchymal-sparing surgery to be performed safely. difficult localizations of lesions, such as the caudate lobe, are not a contraindication for this type of surgery. the laparoscopic approach for perihilar cholangiocarcinoma is still poorly reported in the literature due to technical challenges secondary to the combination of major hepatectomy, lymphadenectomy and biliary confluence resection. 
despite this, in selected cases it can be a good option to provide short-term benefit to patients. the video reports the case of a perihilar cholangiocarcinoma with involvement of the left bile duct and therefore requiring left hepatectomy. komagome hospital, bunkyo-ku, tokyo, japan aims: segmentectomy is an anatomic liver resection in which the tertiary branches of the glissonean pedicles are selectively transected. however, the branching pattern of the tertiary branches varies depending on the case, particularly in segment 7 (s7) and segment 8 (s8). the extrahepatic approach to the glissonean pedicle from the hepatic hilum is very difficult depending on the branching pattern. furthermore, the distance over which the secondary branches to be preserved are exposed becomes longer, and there is an increased risk of biliary leakage and delayed biliary stricture due to excessive traction in laparoscopic surgery. therefore, laparoscopic s7 and s8 segmentectomy are considered technically difficult. methods: we standardized the intrahepatic glissonean pedicle approach for laparoscopic s7 and s8 segmentectomy. we identify the targeted glissonean pedicle intrahepatically after parenchymal transection along the major hepatic vein or its branch running on the intersegmental plane, referring to the preoperative simulation by 3d imaging. (a) s7 segmentectomy: after mobilization of the right lobe, the glissonean pedicle of s7 (g7) can be approached from the dorsal side by transecting the parenchyma between the ivc and the right hepatic vein. after division of the g7, the parenchyma is transected along the demarcation line and the rhv from the root side to the peripheral side. (b) s8 segmentectomy: first, the parenchyma is transected along the middle hepatic vein (mhv) from the root side to the periphery. g8 is typically detected on the right dorsal side of the mhv. 
after division of the g8, the liver parenchyma is transected along the demarcation line and the rhv from the root side to the peripheral side. results: we have experienced 11 cases of laparoscopic s7 segmentectomy and 26 cases of laparoscopic s8 segmentectomy. conclusion: our approach to the g7 and the g8 is safe and very useful. laparoscopic anatomical segmentectomy of the right anterior section is technically demanding because it is difficult to dissect the deep tertiary branches of the right anterior portal pedicle (rapp). we present three cases of laparoscopic anatomical segmentectomy using the extrafascial and transfissural approach: 1) anatomical resection of segment 5, 2) anatomical resection of the ventral area, 3) anatomical resection of the segment 8 dorsal area. in the extrafascial and transfissural approach, the liver parenchyma along the fissure lines is opened, so that the surgeon can confirm the glissonean pedicles and their territory directly. the extrafascial and transfissural approach in laparoscopic anatomical segmentectomy of the right anterior section is feasible and effective because the deep tertiary branches of the rapp can easily be approached with this technique. repeated liver resection has a significant role in patients with recurrent hepatocellular carcinoma (hcc) in several situations. laparoscopic redo surgery is becoming safer along with advances in surgical technique. we have performed laparoscopic re-resection for limited intrahepatic hcc recurrence. the aim of the present study was to investigate its significance in comparison with first laparoscopic liver resections. subjects: patients with limited intrahepatic hcc recurrence after open hepatectomy underwent laparoscopic liver re-resection (n = 12). methods: adhesions between the abdominal wall and visceral organs were carefully divided after the first laparoscopic port was safely inserted. 
adhesions between the diaphragm and liver surface or between the previous liver cut surface and the colon or duodenum were also minimally dissected. approach to the glisson's pedicles at the hepatic hilum was often difficult due to the previous surgical procedure, thus pringle's maneuver was generally applied. dissection of hepatic parenchyma approaching the target glisson's branch often proceeded under ultrasound guidance. liver resection was performed using lcs, biclamp, and cusa with intermittent block of the hepatic inflow. operation time, intraoperative bleeding, morbidity, mortality, and postoperative hospital stay were compared with those in patients who underwent first laparoscopic liver resection during the same period (n = 20). results: operation time was significantly longer in the re-resection group, possibly due to the adhesiolysis. meanwhile, no significant difference was detected in intraoperative bleeding, morbidity, mortality or postoperative hospital stay between the first and the redo surgeries. methods: the donor was a 32-year-old gentleman who decided to donate part of his liver to his wife, suffering from viral liver cirrhosis and hepatocellular carcinoma. his bmi was 20.3 kg/m² and the preoperatively estimated donor's right liver volume was 836 ml, representing 63.6% of his entire liver. with the recipient's weight of 57 kg, the graft-to-recipient weight ratio (grwr) was 1.6%. the liver had classic hilar anatomy except that the right posterior intrahepatic duct separately joined the left main hepatic duct. after isolation and clamping of the right hepatic artery and portal vein, indocyanine green 2.5 mg was injected intravenously. results: the total operation time was 370 min and the estimated blood loss was 150 ml without transfusion. the indocyanine green fluorescence image clearly demonstrated the anatomical demarcation between the lobes and visualized the running of the biliary tree. 
his postoperative course was uneventful and he was discharged on postoperative day 7. conclusion: real-time indocyanine green fluorescence imaging may be particularly helpful to delineate the anatomical surgical plane and to determine the appropriate division point of the hepatic duct during laparoscopic living donor hepatectomy. surg endosc (2019) the correct management of intraoperative volemic status is essential in laparoscopic liver resection in order to control bleeding and to perform even complex procedures with a good safety profile. central venous pressure is not really reliable in laparoscopy, due to the presence of the pneumoperitoneum and patient position. monitoring of haemodynamic parameters via the vigileo system is a minimally invasive method to control stroke volume variation, cardiac output, cardiac index and oxygen delivery in order to optimize anaesthesiological management by controlling venous bleeding and avoiding tissue ischemia. introduction: non-hydatid liver cysts represent a heterogeneous group of disorders that differ in their etiology, prevalence and clinical manifestations. among them, the simple hepatic cyst is the most frequent. the majority of simple cysts are an incidental finding during the performance of an imaging test for another, unrelated cause; few of them are symptomatic or associated with complications, and surgery is not necessary in most of them. although various therapeutic approaches have been described, so far there is no consensus about the optimal treatment of simple liver cysts that are symptomatic, complicated or show growth during follow-up. currently the laparoscopic approach is widely used for the management of hepatic cysts, with results similar to open surgery but with the advantages of laparoscopy. 
objectives: to demonstrate the safety and efficacy of the laparoscopic approach to complicated simple hepatic cysts. material and method: clinical case: a 68-year-old female patient with a history of giant hiatus hernia surgery with laparoscopic nissen fundoplication, fibromyalgia, and previous ischemic colitis. hospital admission due to pneumonia and right pleural effusion, with ultrasound showing a simple cyst of 64 x 90 x 99 mm in hepatic segment v, with dilatation of biliary radicals adjacent to the cyst and a distended gallbladder with irregular walls on the hepatic side. ct: cystic lesion in segment iv-v of the liver, which had increased in size, with small microabscesses adjacent to the lesion and thickening of the gallbladder wall, to assess for cholecystitis. antibiotic treatment was established with good evolution, and surgery was decided. results: intervention: complete laparoscopic approach, 4 trocars, edematous cholecystitis, large retro-juxtavesicular cyst with thickened walls and serous content. cholecystectomy maintaining the cyst wall, puncturing and taking samples for cytology and biochemistry of the contents, resection of the cyst wall, partial fulguration of its internal surface, negative intraoperative biopsy, epiploplasty, with drainage placement. correct postoperative course. pathological anatomy: simple biliary cyst with negative cytology, ck7+, ck20-, calretinin-. conclusion: the treatment of choice for complicated simple hepatic cysts is laparoscopic. we recommend performing an intraoperative biopsy of all resected liver cysts to confirm their nature, and we propose cyst enucleation as the best surgical treatment. objective: the objective of the following case is to present a patient with symptomatic polycystic liver disease, which was solved by a laparoscopic approach, and the management of its complications. material and methods: the case reported is of a 62-year-old female patient with abdominal pain in the upper right quadrant associated with asthenia, adynamia and hyporexia. 
a ct scan reported a heterogeneous liver with multiple ovoid images with regular, defined edges, the biggest measuring 102 x 99 x 137 mm with a volume of 723 cc in segments 2 and 3, compressing the stomach; another in segment 8 with a volume of 1453 cc; and other small ones located in segments 6, 7 and 4b. results: in this laparoscopic approach, we performed cyst unroofing of the two biggest cysts as well as cholecystectomy because of firm and lax adhesions. the patient evolved with fever on the 5th postsurgical day and biliary leakage in a volume of 270 cc in 24 hrs. an ercp (endoscopic retrograde cholangiopancreatography) was requested and carried out, finding a leak at the intrahepatic biliary duct; therefore, sphincterotomy with placement of a plastic endoprosthesis was performed. the patient evolved without complication and was discharged on the 10th day. conclusions: only symptomatic polycystic liver disease needs to be treated. the choice of treatment is not yet standardized; for voluminous cysts, unroofing, ideally by laparoscopy, is the gold standard, and ercp is the elected treatment when a biliary leak appears as a complication. introduction: laparoscopic liver resection (llr) for tumors located in the posterosuperior segments of the liver (segments (s) 7 or 8) is a challenging procedure. llr for s7 is especially difficult because the access of instruments is limited, bleeding control is not easy, major llr is sometimes required, and obtaining a sufficient resection margin is not easy. to overcome these obstacles, we performed llr in s7 with a lateral approach using intercostal trocars. to obtain a competent resection margin, llr with a right hepatic vein (rhv) first approach was performed for a 1.8 cm mass located near the rhv in a 58-year-old female. case: after full mobilization of the right liver including all short hepatic veins and the caudate lobe, the whole liver was rotated completely to the left side to approach the root of the rhv. 
one intercostal trocar was inserted to access the lesion. parenchymal transection started from the confluence of the hepatic vein and then followed along the rhv, ligating several small branches from the rhv. the resection margin was demarcated after localization using laparoscopic ultrasonography. after completion of parenchymal dissection using cusa and ultrasonic shears, hemostatic agents were applied and a drain was inserted. operation time and estimated blood loss were 120 min and 400 ml. the patient was discharged without any complication on postoperative day 7. final pathological assessment confirmed a clear resection margin (safety margin: 1.5 cm). conclusion: laparoscopic s7 segmentectomy with a hepatic vein first approach technique is safe and recommended to obtain a better resection margin. aims: simple liver cysts are the most common cystic lesions of the liver. most are diagnosed incidentally on imaging tests such as ultrasound or computerized tomography, most are asymptomatic, and they do not require treatment. in symptomatic patients (abdominal distension with palpable mass, abdominal pain, dyspnea, jaundice, etc.) the clinical manifestations are usually due to the growth of the cysts or the compression of neighboring structures. liver function tests are usually not altered. intracystic complications occur in less than 5% of cases and malignancy is exceptional. in this video, we present the case of a symptomatic patient with polycystic liver disease including a large hepatic cyst. material and methods: a 65-year-old woman with a personal history of arterial hypertension, obstructive sleep apnea, and partial hysterectomy due to endometrial cancer, who was referred to our department complaining of supraumbilical pain and abdominal distension with palpable masses. abdominal ultrasound showed cholelithiasis and multiple simple hepatic cysts. on ct scan, multiple hepatic cysts were found, the largest of about 20 cm in larger diameter. echinococcus granulosus serology was negative. 
there was also no evidence of malignant disease on pet scan. results: a laparoscopic approach was performed with four trocars, three of 5 mm and a hasson trocar inserted through a small umbilical incisional hernia. aspiration and wide unroofing of the large cyst and of the smaller accessible ones was done. the patient also underwent cholecystectomy with intraoperative cholangiography and umbilical hernia repair. the patient recovered uneventfully and is asymptomatic one year after surgery. conclusion: simple liver cysts rarely require treatment. in some cases, especially in large, complicated and symptomatic simple liver cysts, surgery is indicated. laparoscopic fenestration is the best choice of treatment. aims: liver resection is the preferable initial treatment option for solitary or limited multifocal hepatocellular carcinomas. surgical indications for laparoscopic liver resection (llr) are the most important consideration, including liver function, tumor size (diameter less than 5 cm) and location (easy technical access, as in the left lateral section or on the surface of the inferior region). partial liver resection or left lateral sectionectomy are the typical procedures for such tumors and are considered the best way to begin llr. with accumulating experience and technical advancement, llr has been performed for tumors larger than 5 cm and in other locations. some requirements to perform llr are experience in both liver surgery and laparoscopy, adequate technology, and intraoperative ultrasound. methods: a 69-year-old male smoker, ex-parenteral drug user, with chronic hcv liver disease, child a stage. he was diagnosed with a single 7 cm lesion in segment iii of the liver, biopsied twice without conclusive diagnosis, and with a three-phase ct suggestive of hepatocarcinoma li-rads 4 with signs of portal hypertension (pht) and mild ascites. after the work-up, the case was discussed by the tumor committee, which decided on surgical intervention. 
results: a laparoscopic resection of segment iii was performed with 5 trocars. the liver was explored by intraoperative laparoscopic ultrasound. vascular control was performed using the pringle technique. liver transection was done with sonostar until identification of the intraparenchymal segment iii vascularization, which was sectioned with an endo-gia (45 mm) with seamguard. after the resection, we performed hemostasis control with electrocoagulation and hemostatic material. intraoperative bleeding of 300 ml. favorable postoperative evolution, with discharge on the 5th postoperative day. pathology: 7 cm trabecular hepatocarcinoma, moderately differentiated, pt1b, r0 resection. conclusions: llr allows major liver resections with low morbidity and mortality and the advantages of laparoscopic surgery. an efficient learning curve can be achieved by a parallel evolution of procedures and indications (according to the modified bclc staging system and treatment strategy). studies suggest that llr results in less blood loss, shorter postoperative hospital stays, lower abdominal wall trauma and lower incidences of ascites accumulation and postoperative liver failure. with respect to oncological considerations, tumor margins are adequately maintained during llr. v. drakopoulos, s. voulgaris, i. iliadis, k. botsakis, p. trakosari, v. vougas 1st department of surgery and transplantation unit, district general hospital of athens « evangelismos » , athens, greece introduction: laparoscopic surgery is gaining acceptance in the treatment of liver metastases. laparoscopic treatment of liver metastases often presents technical difficulties and requires an extensive learning curve. material-method: we present the case of a 62-year-old woman who presented with a liver metastasis in segment 3 of the liver. the patient had been submitted to a laparoscopic low anterior resection in february 2018. 
the patient underwent laparoscopic left lateral hepatectomy with the use of three trocars (one umbilical 10 mm, and two in the midclavicular line bilaterally). left lateral hepatectomy was conducted with the use of a linear stapler. the postoperative period was uncomplicated and the patient remains in good condition three months after surgery. conclusion: the laparoscopic approach seems to be safe for the treatment of liver metastases, offering a better surgical field view and fewer postoperative complications. the 5-year survival rate after laparoscopic hepatectomy is comparable to the open approach. general surgery, chang gung memorial hospital kaohsiung division, kaohsiung, taiwan purpose: laparoscopic hepatectomy is a quickly growing method for liver tumors because of modern technology. but for ihd thrombosis, it is still technique dependent. this video shares our experience with a special case. material and method: a 68 y/o female patient suffered from a fever episode, and imaging showed a 3 cm hcc in s5/6 with right anterior ihd obstruction r/o tumor thrombosis, hilum lymph node enlargement, double right portal vein, hilum adhesion with the duodenum, and no ascites. lab data: hbv-negative, hcv-negative, child a, afp 1199, icg clearance rate 4.5%, plt 174000. heart and lung function exams were normal. laparoscopic right hepatectomy and hilum lymph node dissection was conducted. results: the laparoscopic approach was performed. the hilum lymph node dissection was done with vessel and bile duct isolation. hilum lymph node frozen section showed negative malignancy. hemi-vessel control was done with resection of the vessel. right hepatectomy was done preserving the middle hepatic vein. the right anterior and posterior ihd were opened and the tumor thrombosis was carefully removed from the right anterior ihd. the stumps of the ihd were closed by suture separately. the total operative time was 630 min with 345 cc blood loss. postoperatively, minimal bile leakage was found in the drain at day 6. the patient was discharged at day 14 with the drain. 
conclusions: laparoscopic hepatectomy may be a feasible method for hcc even with ihd tumor thrombosis. introduction: progressive laparoscopic learning in gastric surgery and the great development of instruments and laparoscopic material that facilitate advanced procedures have led to an increase in the use of laparoscopy in the treatment of gastric cancer. material and methods: we present the case of a 75-year-old man with no known drug allergies and a history of ischemic heart disease who was admitted to our surgery department for cholangitis secondary to choledocholithiasis. ercp was requested during his admission, which described a gastric lesion from which a biopsy was taken; access to the papilla of vater to perform sphincterotomy and stone extraction was impossible due to the existence of duodenal diverticula. the pathology result for the gastric lesion was compatible with adenocarcinoma. the extension study was negative. the clinical case was presented to a multidisciplinary tumor committee and it was decided to perform surgical intervention for both pathologies. a subtotal gastrectomy was performed with a roux-en-y reconstruction. surgical time was 300 min. choledochotomy was performed with stone extraction, as well as intraoperative exploration of the bile duct and main conduits by means of a choledochoscope. results: hospital stay of 9 days, with a clavien ii complication. the definitive pathology was a stage ia tumor with a total of 22 isolated nodes without evidence of neoplasia in any of them; therefore, adjuvant treatment was not required. the patient is asymptomatic, with nutritional supplementation and follow-up in the surgical outpatient clinic. conclusions: in our case, there were no serious postoperative complications when performing gastric resection and bile duct exploration with drainage. 
from the oncological point of view, the number of lymph nodes extracted and the surgical margins are similar to those obtained in patients in whom we perform open surgery; therefore, although this is a single clinical case, laparoscopy performed by expert surgeons is a safe and effective technique. the puestow procedure was initially proposed to alleviate pain in patients with chronic pancreatitis and a dilated wirsung duct. its objective is to provide efficient drainage of the pancreatic fluids and, at the same time, to preserve the pancreatic tissue and minimize the risk of endocrine and exocrine pancreatic insufficiency. aims: to describe the particular technical aspects and the efficacy of the totally laparoscopic puestow procedure in patients with cystic duodenal dystrophy. methods: a 37-year-old patient presenting with diffuse epigastric pain, vomiting and weight loss was diagnosed at endoscopic ultrasound and biopsy with cystic duodenal dystrophy. conservative treatment with octreotide and opioids was decided on. however, due to the persistence of symptoms, surgery was performed. results: due to the association of a dilated wirsung duct, the patient was submitted to a puestow procedure. the surgical procedure was completed in a minimally invasive manner; after dissecting the anterior surface of the pancreas, an intraoperative ultrasound was performed in order to identify the wirsung duct. the pancreatic parenchyma was then transected along the wirsung duct, and a totally laparoscopic pancreato-jejunostomy on a roux-en-y limb was performed. the early postoperative outcome was uneventful, the patient being discharged on the sixth postoperative day. at one-month and six-month follow-up, the need for opioid treatment had significantly diminished. a kinking of the enteral anastomosis required a laparoscopic intervention one year later, with a very good subsequent evolution. 
conclusions: the totally laparoscopic puestow procedure seems to be a safe and efficient method to treat symptomatic patients with cystic duodenal dystrophy in whom a dilated wirsung duct is present. aims: the approach to intraductal papillary mucinous neoplasm (ipmn) varies, from radiological follow-up with magnetic resonance imaging (mri) to surgical treatment with a pancreatic resection [1]. the surgical approach varies and depends on the localization of the lesion and on the surgical skills [2]. methods: a 67-year-old patient was admitted at the chi poissy-saint germain-en-laye with acute pancreatitis. echo-endoscopy found a pancreatic cyst at the junction of the pancreatic body and tail with a wirsung diameter of 5 mm. a second episode of acute pancreatitis occurred a few months later. after that episode the patient underwent computed tomography (ct), which found a cystic lesion of 2 cm with an increasing dilatation of the wirsung duct. the serum ca19-9 was 452 ui/ml. a laparoscopic sils distal pancreatectomy with spleen conservation was performed. results: a trans-umbilical incision was performed with positioning of the gelpoint sils platform and placement of 3 trocars. a distal pancreatectomy with spleen preservation and without a standard lymphadenectomy was performed. the pancreatic stump was closed with an endo-gia 60 mm with a seamguard device. no drain was placed. the post-operative course was uneventful. a ct scan performed on the …. post-operative day did not show collections. the patient was discharged on the …… post-operative day. the histological examination showed an ipmn with low-grade dysplasia. no invasive carcinomatous cells were found. distal pancreatic sils resection with spleen conservation is a feasible and safe technique that combines all the advantages of the minimally invasive laparoscopic approach with the esthetic advantages of the sils approach. 
pancreato-duodenectomy is a complex surgery, requiring several anastomoses to reconstruct the digestive tract. due to its technical complexity, the laparoscopic approach is not yet the gold standard and there remains some controversy about its oncological safety. worldwide experience is limited, and its safety and effectiveness are still under evaluation. we present the clinical case of a 70-year-old woman with a prior history of epilepsy. she was studied due to painless obstructive jaundice, and a 2 cm pancreatic head tumour was diagnosed on imaging, causing dilatation of the cbd and wirsung ducts. the tumour was considered locally resectable and she was proposed for a radical pancreato-duodenectomy. we present the main steps of the surgery, including the oncological resection with lymphatic basin clearance and totally laparoscopic reconstruction. the post-operative course was uneventful, and the histologic sample revealed a ductal adenocarcinoma (t2) with an r0 resection and 0/30 lymph nodes invaded. although technically demanding, laparoscopic pancreato-duodenectomy is safe and effective, requiring teams with experience in both pancreato-biliary and laparoscopic surgery. chronic pancreatitis is characterized by progressive pancreatic fibrosis with loss of endocrine and exocrine function. one of its main symptoms is debilitating pain. surgical drainage of a dilated pancreatic duct is an option to consider in cases of refractory pain. longitudinal pancreato-jejunostomy allows an effective decompression of the pancreatic duct and a significant improvement in quality of life. we present the clinical case of a 56-year-old lady with a prior history of gallstones. she was treated for acute pancreatitis in may 2018, followed by recurrent relapses of pain and enzymatic elevation. she required opioids for partial pain control and suffered a significant 20 kg decrease in body weight due to 'fear of eating'. 
endo-ultrasonography and mri revealed chronic pancreatitis with an 8 mm wirsung duct with ductal stones and an atrophic body and tail. we proposed a laparoscopic longitudinal pancreato-jejunostomy. the surgery was performed with 4 trocars, with the surgeon on the right side of the patient. we performed a trans-mesocolic 6 cm pancreato-jejunostomy. the post-operative course was uneventful, and the patient was discharged on the 8th post-operative day, asymptomatic. laparoscopic longitudinal pancreato-jejunostomy, although technically demanding, is effective and brings the benefits of a minimally invasive approach. background: preservation of the spleen in distal pancreatectomy is also useful for the maintenance of platelet counts and the prevention of overwhelming post-splenectomy infection. we have performed laparoscopic spleen-preserving distal pancreatectomy (lspdp) for benign and low-grade tumors of the pancreatic body and tail. the aim of this study was to report our surgical experience with the svp (splenic vessel preservation) and wt (warshaw technique) methods of lspdp, and to describe our techniques with videos. method: there are three points to our surgical technique: 1) perform the pancreatic dissection first to improve the mobility of the pancreas; 2) confirm the course of the splenic artery, classified into two major types; 3) preserve the left gastro-epiploic vessels and short gastric vessels. the postoperative cases of lspdp performed from april 2012 to september 2018 were retrospectively studied. result: of 19 consecutive patients who underwent lspdp at our institute, 12 had svp and 7 had wt. age, gender and bmi were similar for the two groups. there were no significant differences in operative time, blood loss or length of stay after surgery. comparing pathological findings, wt was associated with a slightly larger tumor lesion (median 31 mm vs. 12.5, p = 0.08). 
over a median observation period of 27 months, splenic infarction was observed in 1 case in svp and 2 cases in wt; however, these were focal splenic infarctions and did not require surgery or drainage. there were no cases of late-onset splenic artery occlusion or esophageal/gastric varices. conclusion: after lspdp, splenic function was good in all cases. both svp and wt were safe and feasible procedures.

this is the case of a 61-year-old woman presenting with recurrent intractable abdominal pain from which she had been suffering for the last 7 years. msct revealed pancreatic calcifications from 1 mm to 5-8 mm and dilatation of the main pancreatic duct in the body of the pancreas up to 4 mm. the patient underwent laparoscopic local resection of the head of the pancreas combined with longitudinal roux-en-y pancreaticojejunostomy, a technique known as frey's procedure. it is recognized as an effective therapeutic option for the surgical treatment of patients with persistent pain caused by chronic pancreatitis. after completing the posterior wall of the pancreaticojejunal anastomosis, we faced an intraoperative complication: volvulus of the roux limb causing serious ischemia of the limb. we were forced to remove all previous sutures in order to untwist the roux limb, after which the pancreaticojejunostomy was started anew. the purpose of this video is to demonstrate that frey's procedure can be performed in a minimally invasive fashion, which provides all the well-known advantages of this approach. we demonstrate that even such a serious intraoperative complication as volvulus of the roux limb can be managed without conversion.
our center has experience of over 30 laparoscopic frey's procedures; however, this is the first case in which we encountered such a complication, and we believe it is an experience worth sharing. we would nevertheless like to underline that this approach should be used by highly skilled minimally invasive surgeons experienced in intracorporeal suturing, which is the most challenging stage of frey's procedure.

v. tomulescu, i. hutopila, c. copaescu

spleen-preserving distal pancreatectomy (spdp) is commonly applied in patients with benign or low-grade malignant tumors in the body and tail of the pancreas. two surgical techniques for spdp have been described. the first, described by kimura (spleen-preserving distal pancreatectomy with splenic vessel preservation, spdp-svp), preserves the main splenic artery and vein and excises the tail of the pancreas together with the small, short vascular connections to the body; the second, described by warshaw, involves resection of the splenic vein and artery before distal pancreatectomy, conserving the short spleno-colic and gastric vessels to maintain normal blood flow to the spleen (spleen-preserving distal pancreatectomy with splenic vessel resection, spdp-svr). we present the case of a 50-year-old female with a 40/50 mm tumor of the pancreatic tail on ultrasonography. ct scan confirmed the tumor, and endoscopic ultrasonography with fna showed a solid pseudopapillary tumor. given the low-grade malignancy, we decided to perform a laparoscopic spleen-preserving distal pancreatectomy with splenic vessel preservation (lspdp-svp). for lspdp-svp the difficulty is related to splenic vessel dissection and manipulation. primary dissection and control of the main trunks of the splenic artery and vein help to quickly control bleeding from vascular rupture during small-vessel dissection.
optimal stapling of any tissue requires adequate tissue compression time to allow elongation of the tissue being compressed, smooth firing of the instrument, and consistent staple line formation, balanced against the risk of increased tissue tearing and excessive tensile strength. this is why, for pancreatic division, we prefer a cartridge loaded with taller staples. the pancreatic stump transection line is evaluated for bleeding and, when needed, hemostatic clips are applied. the histology report confirmed a solid pseudopapillary tumor t3n0mxl0v0r0, with good follow-up at 12 months. in conclusion, lspdp-svp is safe, reproducible and demonstrated very good outcomes when certain indications are respected. surg endosc (2019)

aim: advances in minimally invasive surgery have permitted complex techniques to be performed by this approach, laparoscopic duodenopancreatectomy (lpd) being one of them. the aim of this communication is to present a surgical technique video of a complete laparoscopic pd, showing the most important steps of the resective and reconstructive phases, with the anastomoses performed completely by laparoscopy. methods: a surgical technique video is presented showing the main steps of lpd and a complete laparoscopic reconstruction with a hepaticojejunostomy, duct-to-mucosa pancreatico-jejunostomy and a gastrojejunostomy. results: an 82-year-old woman with a past medical history of arterial hypertension, dyslipidemia, type ii diabetes mellitus and a breast cancer treated in 2009 with lumpectomy and axillary lymphadenectomy plus radiotherapy was recently diagnosed with an adenocarcinoma of the head of the pancreas. the ct scan showed a neoplasia localized in the head of the pancreas without extension to other organs. a laparoscopic pd was indicated after multidisciplinary committee evaluation. a supraumbilical hasson trocar was used for the pneumoperitoneum; three 12 mm trocars and two 5 mm trocars were used. lpd was performed.
the resective phase followed the conventional steps of the open whipple procedure, and for the reconstructive phase a child limb was used for a termino-lateral hepatico-jejunostomy with an absorbable 4/0 monofilament, a duct-to-mucosa pancreatico-jejunostomy with an absorbable 5/0 monofilament and, finally, a latero-lateral mechanical gastro-jejunostomy. surgical time was 480 min. the postoperative course was without complications and the patient was discharged on the 7th postoperative day. definitive anatomopathological exam: intraductal tubulopapillary neoplasia, 16x16x13 mm, with wide high-grade epithelial dysplasia, free margins, ptisn0 (0/12). conclusion: laparoscopic pd is a feasible procedure with high technical requirements which, because of its high morbidity and mortality, should be performed in specialized centres with extensive experience in hepatobiliary surgery and advanced laparoscopic procedures.

conclusions: robotic assistance in the whipple procedure may overcome the limitations of laparoscopy and offer a minimally invasive approach to this procedure, potentially resulting in lower blood loss and less morbidity. further prospective randomized trials are needed to determine the exact role of robotics in pancreatic surgery.

aims: distal pancreatectomy is the standard curative treatment for symptomatic benign, premalignant, and malignant disease of the pancreatic body and tail. the most obvious benefits of a laparoscopic approach to distal pancreatectomy include earlier recovery and shorter hospital stay. spleen-preserving distal pancreatectomy should be attempted in cases of benign disease. laparoscopic spleen-preserving distal pancreatectomy (lspdp) is expected to be less invasive than laparoscopic distal pancreatectomy with splenectomy. however, there are few reports on the details of the procedure, and its safety remains unclear. this study aimed to evaluate the feasibility and safety of lspdp.
methods: a retrospective analysis of the surgical treatment of 48 patients was performed. lspdp was conducted between 2014 and 2017 in the department of laparoscopic surgery of the state institution o. shalimov national institute of surgery and transplantology. the average age was 45 ± 13.4 years and the body mass index (bmi) was 28.7 ± 1. results: laparoscopic distal pancreatectomy was completed in 100% of cases, attempted in 36 female and 12 male patients. postoperative pathological examination revealed 17 cases of serous cystadenoma of the body and tail of the pancreas, 2 cases of serous oligocystic adenoma, 20 cases of mucinous cystadenoma, 3 cases of neuroendocrine tumor (insulinoma), and 6 cases of solid pseudopapillary neoplasm. complications related to the surgery were acute pancreatitis with a 3-fold increase over normal plasma amylase confirmed by ct (7 cases), fluid collection (4 cases), and pancreatic fistula grade a (3 cases). the operative time was 195.6 min (range 157-250 min), blood loss was 50.1 g (range 0-110 g), and mean hospital stay was 6.8 days (range 5-11 days). conversion to laparotomy occurred in 1 case. mortality was 0. conclusion: laparoscopic spleen-preserving distal pancreatectomy is minimally invasive, safe, and feasible for the management of benign pancreatic tail tumors, with the advantages of earlier recovery and less morbidity from complications.

aims: a pancreatic pseudocyst is an encapsulated, mature fluid collection occurring within the pancreas, with a well-defined wall and minimal or no necrosis, secondary to pancreatic injury and mediated by enzymatic and inflammatory disruption of pancreatic tissue. it is a common complication of acute and chronic pancreatitis. we present the case of a pancreatic pseudocyst located within the body of the pancreas due to recurrent necrotic pancreatitis. the objective of this video is to show the minimally invasive surgical approach to this entity.
methods: a 47-year-old man with no medical history was admitted to the digestive service three times for acute necrotizing pancreatitis. after work-up, which revealed cholelithiasis, a pseudocyst in the pancreatic body of 6 cm maximum diameter, and two peripancreatic collections without signs of superinfection, cholecystectomy was indicated. magnetic resonance cholangiography performed after surgery showed an increase in the size of the pancreatic pseudocyst, raising suspicion of wirsung's duct disruption. therefore, endoscopic retrograde cholangiopancreatography (ercp) was performed, placing a plastic pancreatic prosthesis and performing a sphincterotomy. after hospital discharge, the patient was re-admitted due to recurrent abdominal pain without analytical alteration. abdominal ct showed an increase in the pseudocyst from 6 to 8 cm. the case was discussed in a multidisciplinary committee and surgical intervention was decided. results: a laparoscopic approach was chosen and four trocars were placed. initially, a gastrostomy was performed with fluid outflow. the fluid was aspirated and a cystogastrostomy was made with a 45 mm endo-gia stapler. the patient progressed favorably and was discharged on the tenth postoperative day without complications. conclusions: almost every pancreatic pseudocyst improves spontaneously and needs no specific treatment. drainage is indicated when symptoms secondary to compression, complications or rapid enlargement are found. depending on the complexity of the pseudocyst, its communication with wirsung's duct and the existence of ductal injury, drainage may be performed percutaneously, endoscopically or surgically.

the goal of pancreatic debridement is to excise all dead and devitalized pancreatic and peripancreatic tissue while preserving viable functioning pancreas, controlling resultant pancreatic fistulas, and limiting extraneous organ damage. only the surgical procedure is definitive.
case: a 29-year-old male presented with intermittent low retrosternal pain and progressive dyspnea on exercise for a couple of months. cardiac investigation was negative and gastroscopy showed a grade b esophagitis. he was treated medically, but with only partial response. on a thoraco-abdominal ct scan the diagnosis of a left-sided bochdalek's hernia was made. the hernia included the left kidney (with blood vessels and ureter), transverse colon and small intestine, positioned in the left lower thoracic cavity with the left lung considerably compressed. method: given the clear correlation between the patient's complaints and these anatomical findings, he was referred to our service of abdominal surgery. we performed a laparoscopy with the patient in lithotomy position and the surgeon between the legs. the patient was tilted to his right side. mobilization of the spleen was necessary to gain maximal access to the hernia. we were able to reduce all the herniated content, freed the margins of the defect, reduced the hernia sac and repositioned the kidney intra-abdominally. the defect was manually closed with non-absorbable stitches and covered with a mesh secured with tackers. result: postoperatively the patient recovered well with adequate pain relief and pulmonary support. he could leave the hospital after 6 days. a control ct scan on postoperative day 5 showed an intact lining of the diaphragm with normal positioning of the intra-abdominal organs. at follow-up 6 weeks after surgery the patient had regained normal activities and was symptom-free. conclusion: a symptomatic left-sided bochdalek's hernia in adults with an ectopic intrathoracic kidney is extremely rare. we hereby state that, during laparoscopic repair, the kidney can also be safely reduced, which has rarely been described in the literature, enhancing pulmonary recovery, improving access for mesh placement and thus diminishing the recurrence rate.
aims: repair of large incisional hernias remains a real challenge for surgeons. anterior component separation has been an important method, allowing closure of fascial defects without tension together with underlay mesh reinforcement. we therefore present a case of incisional hernia repair using endoscopic anterior component separation, with advantages compared with the open approach. method: we present the case of a 31-year-old woman, bmi 40 kg/m², with a previous laparoscopic gastric sleeve and a subsequent reintervention using an open approach. the patient presented a 10 cm incisional hernia, m3w3. a ct scan was performed, confirming a midline incisional hernia containing colon, with a hernia defect of 11 cm. fully minimally invasive abdominal wall repair was proposed. a 2 cm incision was made in the left iliac region to reach the aponeurosis of the external oblique muscle. we placed a balloon trocar and subcutaneous pneumo-dissection at 8 mmhg pressure was performed; then, we placed a 5 mm trocar in the left lumbar space. the aponeurosis of the external oblique muscle was incised and anterior component separation from the inguinal to the subcostal area was achieved. an extensive intermuscular dissection was performed to achieve complete midline closure. we performed the same procedure on the right side. then, with a laparoscopic approach using v-loc n°0 suture, we completely closed the midline. finally, we placed a 30x15 cm ptfe-c mesh fixed with a double crown of tackers and fibrin glue. results: the postoperative course was uneventful and the patient was discharged 24 h after surgery without any remarkable event during her postoperative stay. the patient has been followed up for 12 months without any complication or recurrence on ct scan, confirming correct minimally invasive reconstruction of the abdominal wall. conclusions: trends in abdominal wall reconstruction and complex-hernia repair have advanced rapidly in recent years.
the goal is to perform a complete abdominal wall repair with no tension in midline incisional hernias. endoscopic anterior component separation and laparoscopic eventroplasty with closure of the defect lead to complete wall reconstruction without tension and avoid the drawbacks of primary defect closure in patients with hernia defects wider than 10 cm.

aims: the endoscopic technique is a valid and safe approach for the treatment of abdominal wall defects. to combine the advantages of complete endoscopic extraperitoneal surgery with those of sublay mesh repair, we propose totally endoscopic sublay anterior repair (tesar), a safe and feasible approach for the treatment of ventral and incisional midline hernias. methods: from may to september 2018, 12 patients were referred to our unit with a clinical and radiological diagnosis of midline ventral or incisional hernia and selected for tesar. exclusion criteria were: complicated ventral or incisional hernia (i.e. incarcerated hernia), maximum defect width > 5 cm, and contraindications to general anesthesia. the procedure consisted of suprapubic access with 3 trocars, complete endoscopic pre-aponeurotic dissection, isolation and reduction of the hernia sac, bilateral incision of the medial rims of the recti aponeurosis and dissection of the retromuscular plane to create the retromuscular space, sublay non-absorbable mesh positioning and anterior aponeurosis reconstruction. one drain was always placed in the retromuscular space and one in the subcutaneous space. results: all procedures were completed with the endoscopic approach, with no conversion to laparoscopy or open surgery. no intraoperative complications were registered. total mean operative time was 148 ± 18.5 min. no major post-operative complications were registered. only one subcutaneous seroma occurred (8.3%), treated conservatively. the mean postoperative stay was 3.6 ± 0.6 days.
at post-discharge clinical checkups drains were checked and removed when indicated. no wound complications or recurrences have been registered to date. cosmetic and functional results were successful in all patients. conclusions: tesar is a safe and feasible technique for extra-peritoneal sublay repair of ventral hernias with a totally endoscopic approach. it provides accurate hernia repair with good outcomes in terms of resolution of symptoms and post-operative complications.

r. mizuno, m. kondo

background: abdominal incisional hernia is found in more than 10% of patients after abdominal surgery, and risk factors such as wound infection, obesity, advanced age and high abdominal pressure have been pointed out. laparoscopic hernia repair using intraperitoneal onlay mesh (standard ipom) has become widespread in japan since its insurance coverage in 2012, and our hospital is actively working on it. recently, the ipom plus procedure, which adds fascial suture to laparoscopic mesh placement, has been introduced. aims: we report the clinical results of laparoscopic abdominal incisional hernia repair in our hospital. methods: we performed hernia repairs using a mesh in 36 cases from january 2014 to september 2018. of these, 21 cases were standard ipom and 15 cases were ipom plus. there was no significant difference between the groups in patient background such as gender, age and bmi, or in intraoperative findings such as hernia orifice diameter and adhesions. surgical time, postoperative hospital stay, and the rates of complications such as seroma, mesh bulging, postoperative pain and hernia recurrence were compared between the two groups. results: in the ipom plus group, the operation time was longer and the incidence of postoperative pain was higher, but the incidence of mesh bulging was significantly lower. also, in some cases since 2018, the 'u reverse stitch method' has been used as a refinement of the fascial suture in ipom plus.
conclusions: laparoscopic abdominal incisional hernia repair has the advantage of allowing reliable confirmation of the hernia orifice from the intraperitoneal side; it is excellent for identification of the fragile part of the abdominal wall and for visibility of the repair area. with regard to the ipom plus procedure introduced in the last few years, although the operation time is longer, it has benefits such as reduction of mesh bulging. from a cosmetic point of view, usage of ipom plus will increase in the future.

introduction: incisional hernia is one of the most common complications after abdominal surgery. several methods have been introduced, and yet there is no consensus on the best method of repair. we present a novel method for hernia repair which uses retromuscular sublay mesh repair through a single incision at the pubic area to improve cosmesis. methods: medical records of patients who underwent single-port retrorectal incisional hernia repair from may 2018 to december 2018 were reviewed. patients were placed in the supine position and a 3 cm incision was made in the pubic area below the panty line. a flap was made upwards until the defect was found and the bilateral rectus sheaths were dissected. a mesh was then placed between the posterior rectus sheath and the muscle. results: a total of 30 patients with midline incisional hernia underwent single-port retrorectal incisional hernia repair. mean age was 59.0 ± 12.5 years with an average bmi of 23.4 ± 2.7. all patients had a midline hernia defect, with an average size of 3.4 ± 2.2 cm. mean operation time was 59.6 ± 30.1 min and estimated blood loss was 32.6 ± 36.5 ml. there were no postoperative complications, and 27 (90%) patients were discharged on the day of surgery. conclusion: single-port retrorectal incisional hernia repair is safe and effective while providing good cosmesis to selected patients with incisional hernia.
aims: closing the hernia defect during laparoscopic hernia repair is a widely extended technique nowadays. however, it is associated with intraabdominal mesh placement in contact with the abdominal content. there is now a trend to reconstruct the midline and to avoid an intraabdominal mesh in suitable cases, as a new step forward in minimally invasive abdominal wall reconstruction. a laparoscopic sublay approach with retromuscular placement of a mesh without mechanical fixation after reconstruction of the linea alba might be considered an option in primary hernias of the midline. methods: we present the case of a 47-year-old male with a 4 cm diameter umbilical hernia associated with rectus diastasis. a laparoscopic approach was performed, using one 12 mm and two 5 mm trocars placed on the left flank. the first step was to open the lateral side of the posterior fascia of the left rectus muscle, dissecting the retromuscular plane until we reached the linea alba and entered the preperitoneal space, where the sac was dissected preserving the integrity of the peritoneum. the contralateral posterior fascia was also dissected all the way to the semilunar line. the midline was closed, including the hernia defect, using a running double-loop suture (maxon-loop®). a self-gripping mesh (progrip®) was placed in the retromuscular space in a sublay position (21 cm long, 9 cm wide). lastly, we closed the fascia of the left rectus muscle using a barbed suture (v-loc®). results: surgical time was 80 min, and the patient was discharged from the hospital on postoperative day 1. pain was controlled with conventional analgesia, and no postoperative complications or seroma were detected. conclusions: the sublay approach for ventral hernia can provide midline reconstruction, reestablishing abdominal function and avoiding the use of intraabdominal meshes and traumatic fixation, decreasing postoperative complications and pain.
aims: lumbar hernia is one of the rare entities that most surgeons are not exposed to; hence the diagnosis can easily be missed. it is often related to previous surgery such as lumbotomy, or primary in the superior lumbar triangle. this leads to delay in treatment, causing increased morbidity. we report a case of acquired lumbar hernia in a middle-aged woman repaired by a laparoscopic approach. methods: a 60-year-old woman with a surgical history of myelomeningocele surgery by posterior approach over 40 years ago, a laparoscopic left nephrectomy 2 years ago with a left colostomy due to a left colon injury during that procedure, and a laparoscopic hartmann reversal 6 months later. the patient showed a large lumbar mass of over 6 cm in the left lumbar region and a large scar near the spinal cord. it was soft in consistency, reducible and expansible on coughing and straining, with defined borders. computed tomography showed a large defect of over 6 cm in the superior lumbar fascia in the grynfeltt-lesshaft triangle, with the left colon inside. results: the patient was placed in a full lateral decubitus position. in order to optimize exposure, a lumbar roll was placed under the lumbar region. capnoperitoneum (12-15 mmhg) was established. one 11 mm and two 5 mm trocars were used, positioned in the left mid-axillary line. a 30° optic was used. adhesions were removed and toldt's fascia was opened in order to expose the hernia defect bounded by the quadratus lumborum, erector spinae muscles, 12th rib and serratus. the hernia content was carefully extracted from the sac using a ligasure maryland (covidien medtronic, usa). the hernia defect was measured and an intraperitoneal mesh (dynamesh-ipom, feg textiltechnik mbh, aachen, germany) was positioned and fixed with tackers to the margins, including the bone. the patient was discharged in 48 h with a low pain score and without complications. there has been no recurrence at 10 months' follow-up.
conclusion: laparoscopy might be a safe and feasible approach for repairing lumbar hernias, either primary or acquired, with a low rate of pain and complications. s582 surg endosc (2019)

after pneumoperitoneum is established, three 5 mm trocars are placed on the left flank. the defect is delimited by drawing it on the patient's skin with the aid of an intramuscular needle and intraabdominal vision. the posterior fascia is opened longitudinally at its medial edge and the retromuscular space is dissected. the arcuate line of douglas and the epigastric vessels are identified. from this point, the transversus abdominis fascia is sectioned cranially 1 cm medial to the semilunar line, preserving the neuro-vascular pedicles that reach the rectus abdominis laterally. at the supraumbilical level, transversus abdominis fibers advance behind the rectus abdominis, so they need to be sectioned to access the space below the ribs. lateral dissection of this space enables a tension-free closure at the midline. once the procedure is repeated on the contralateral side using two 5 mm and one 12 mm trocars on the right flank, a continuous suture of the posterior fascia is performed with a barbed suture. the anterior fascia is closed with a slowly absorbable monofilament loop-type suture. finally, a double-layer polypropylene mesh is placed at the retromuscular level without any suture, and fibrin glue is applied. results: the patient was discharged 24 hours after surgery. no recurrence has been observed to date. conclusions: sectioning the aponeurotic plane from the arcuate line of douglas enables a more accurate dissection of the retro-transversus plane without sectioning its fibers except at its cranial end, preserving the innervation and vascularization of the abdominal wall. this technical modification aims to simplify a complex laparoscopic procedure, allowing its standardization.
aims: the authors present a video of their standardized laparoscopic ventral hernia intraperitoneal mesh (ipom) hernioplasty, introducing a novel laparoscopic technique for tension release during hernia gap closure and midline anatomical restoration. methods: a 64-year-old male patient with a bmi of 31 presented a symptomatic ventral hernia recurrence after open surgery for sigmoid colon cancer. a ct scan showed a midline ventral hernia of 5 cm transverse diameter. a laparoscopic ipom hernia repair was performed using 5 mm instruments and a 10 mm camera. when checking tension during midline restoration suturing, we decided to add a tension-releasing maneuver: a totally laparoscopic transversus abdominis muscle release (taltar). this maneuver allows the right rectus posterior sheath to advance some distance towards the midline, in order to provide a tension-free midline closure. a double-faced mesh ready for visceral contact was then placed and fixed. the case and technical details are shown in the video. results: the patient was discharged from hospital within 5 h with a score of 4 on the visual analogue acute pain scale. at 2-year follow-up, there has been no anatomical or clinical recurrence. no chronic pain, anatomical recurrence, lateral asymmetry, or umbilical or abdominal wall complications have been reported with this technique. conclusions: depending on patient characteristics, anatomical hernia factors and the surgeon's minimally invasive experience, the taltar maneuver could be a safe and feasible option for releasing tension during midline anatomical laparoscopic closure. more studies are needed in order to standardize this approach.

aims: when a primary ventral hernia and simultaneous diastasis recti are diagnosed, there is no consensus among the international surgical community on the surgical treatment regarding indications or surgical technique.
however, if diastasis recti is symptomatic or associated with midline hernias, corrective surgery of both pathologies at the same time could be the most recommended option. if we only correct the hernia defect, we risk performing a repair on anatomically weak tissue, so the rate of hernia recurrence may increase. we propose minimally invasive access using totally endoscopic retromuscular hernioplasty. this technique provides several advantages, such as no peritoneal opening or intraabdominal access, no mesh fixation needed and simultaneous treatment of both pathologies. method: we present the case of a 50-year-old man, with a bmi of 35 kg/m² and no previous medical history, complaining of a ventral hernia with associated recti diastasis. a 4 cm umbilical hernia was diagnosed with an associated 5 cm supraumbilical diastasis recti. fully endoscopic retromuscular hernioplasty was proposed. a 2 cm incision was made in the left hypochondrium, the anterior rectus sheath was opened and the rectus muscle retracted. we placed a balloon trocar and opened the homolateral retromuscular space after placing two 5 mm trocars in the left lumbar space and epigastric position. we crossed over the linea alba and reached the contralateral retromuscular space. after this step, the hernia sac was reduced and we extended the dissection 5 cm caudal to the hernia ring. both medial posterior rectus sheaths were sutured with a running barbed suture n°0 and a 20x20 cm lightweight, large-pore polypropylene mesh was placed in the retromuscular space and unrolled properly with enough overlap. a drain was placed and the anterior rectus sheath incision was closed. results: the patient was discharged 24 h after surgery without remarkable events during his postoperative stay. he has been followed up for 8 months, remaining asymptomatic.
conclusions: totally endoscopic retromuscular ventral hernia repair in men with umbilical hernia and associated diastasis recti is a feasible and reproducible procedure with several advantages compared to traditional laparoscopic ipom in terms of pain and mesh position.

aims: parastomal hernia (ph) is one of the most frequent long-term complications of stoma formation, occurring in 35-50% of patients. surgical treatment is the only cure for parastomal hernia but remains a fairly difficult field, with a recurrence rate ranging from 24% to 54% of cases. due to its advantages, the number of laparoscopic mesh repairs for parastomal hernia has gradually increased over the past decade. in view of this common complication, we report a case of laparoscopic repair of ph using the sugarbaker technique. method: we present the case of a 65-year-old patient with a surgical history of laparoscopic low anterior resection for rectal cancer; in the postoperative period an anastomotic leak with severe peritonitis was identified and a laparotomy with end colostomy was performed. the postoperative course was uneventful. during follow-up the patient showed a 6 cm parastomal hernia, an m3w2 incisional hernia confirmed by ct scan. the patient underwent full laparoscopic hernia repair using the sugarbaker technique, exposing the parastomal hernia completely to measure the hernia ring size (6 cm) and the associated midline defect (5 cm). a 26x36 cm ptfe-c mesh was selected to allow a 5 cm overlap over the two defects. results: using this approach, the bowel loop was pushed against the abdominal wall and an appropriate space between the mesh edge and the abdominal wall was left to allow the bowel loop to pass through. the postoperative course was uneventful and the patient was discharged 48 h after surgery without any remarkable event during his postoperative stay.
he has been followed up for 18 months without any clinical signs or alterations on ct scan. compared with traditional open surgical repair, laparoscopic repair has certain advantages, including safe operation, rapid postoperative recovery, fewer complications, and a lower recurrence rate. however, it still faces challenges in parastomal hernia treatment, and there is a need to improve existing surgical techniques. aims: nowadays, the principal disadvantages of the laparoscopic approach in hernia repair are the use of intraabdominal meshes and traumatic fixation. first, intraabdominal meshes involve contact of the prosthesis with the intestinal loops, with the consequent risk of adhesions and fistula. also, using helicoidal sutures for prosthetic fixation produces adhesions to the tackers and a non-negligible incidence of chronic pain. to achieve better results, placing the mesh in the retromuscular space avoids the drawback of contact with the loops, and using self-fixating meshes may decrease the rate of acute and chronic pain. according to these facts, we present a case of laparoscopic ventral hernia repair with transabdominal retromuscular mesh placement without traumatic fixation. methods: we present a 50-year-old patient with a 7 cm diameter hernia shown on preoperative ct scan, m3w2, with associated diastasis recti. the patient underwent laparoscopic surgery using the transabdominal retromuscular route. one 11 mm and two 5 mm trocars were placed in the left flank. the posterior rectus sheath on the left side was opened starting 5 cm from the left edge of the defect. once the retromuscular space was dissected, the hernia ring dissected and the hernia sac reduced, we continued the dissection in the retromuscular space on the contralateral side. craniocaudal dissection was achieved 5 cm beyond the defect margins. the hernia defect with the anterior rectus sheath and the diastasis recti were closed using a v-loc running suture. 
a self-adhesive mesh was subsequently placed. the mesh should overlap the defect margins by 5 cm, covering the defect widely, with the grips facing upwards. finally, we closed the posterior rectus sheath with the peritoneum on the left side with a v-loc running suture. results: the postoperative course was uneventful and the patient was discharged 24 h after surgery. after 18 months of follow-up, no clinical or radiological recurrence was observed. conclusions: the combination of a laparoscopic approach, retromuscular mesh placement and the use of self-fixating meshes seems to be a useful solution, combining the advantages of each element and avoiding the use of intraabdominal meshes and helicoidal sutures. aims: laparoscopic ventral hernia repair has clear advantages over open repair, including less post-operative pain and earlier return to normal activity. however, a prolonged surgeon learning curve is necessary to perform this technique effectively. robot assistance may improve the outcomes of minimally invasive ventral hernia repair through improved three-dimensional visualization and enhanced dexterity with articulating instrumentation. we report a case of robotic rives-stoppa epigastric hernia repair in order to demonstrate the feasibility of the robotic approach. methods & results: a 58-year-old man came to our attention for a palpable mass in the epigastric region. abdominal ct scan showed an epigastric hernia with herniation of omental content, and the presence of diastasis recti. the patient then underwent a robotic rives-stoppa hernia repair under general anesthesia. the da vinci si surgical system (intuitive surgical inc., sunnyvale, ca, usa) was brought into position over the head of the patient and docked after placement of the ports. three trocars were placed in the hypogastric region along the transtubercular line. a fourth trocar was placed in the left iliac fossa and used by the assistant. 
the operation started with extended adhesiolysis and hernia reduction. the retromuscular dissection then began by incising the posterior sheath starting 4 cm above the pubic symphysis. an extended dissection of the rives space was performed to create a correct housing for the mesh. the hernia defect and the diastasis recti were closed using a 1-0 absorbable barbed suture. a phasix st™ mesh (bard inc./davol inc., warwick, ri) was positioned in the retromuscular plane and anchored with absorbable sutures and glue. the midline incision was closed using a 2-0 absorbable barbed suture. the operative time was 250 minutes. the postoperative period was uneventful, and the patient was discharged home on the second post-operative day. conclusions: robotic rives-stoppa ventral hernia repair is feasible, safe, and effective when a standardized approach is performed. whether robotics may improve the outcomes of minimally invasive ventral hernia repairs, including lower recurrence rates, decreased post-operative pain, or a shorter surgeons' learning curve, will require careful prospective investigation. aims: the authors present a video of a classical hernioplasty repair of a left chronic bochdalek hernia, performed via a minimally invasive thoracoscopic approach with 3 mm instruments. methods: a 73-year-old female patient came to hospital due to chronic left dorsolumbar pain. a ct scan showed a chronic left diaphragmatic bochdalek hernia. a right lateral decubitus thoracoscopic repair was performed using 3 mm instruments and a 5 mm camera. the case and technical details are shown in the video. results: the patient was discharged from hospital within 48 h with no pain and a clean chest x-ray. over a 2-year follow-up, no anatomical or clinical recurrence has been reported, and neither chronic pain nor respiratory complications occurred within this period. 
conclusions: depending on patient characteristics, anatomical factors and the surgeon's minimally invasive experience, left bochdalek hernia repair by minimally invasive thoracoscopic hernioplasty using 3 mm instruments could be a safe and feasible option. more studies are needed in order to standardize this approach. surg endosc (2019) abdominal wall surgery has expanded exponentially in the last decade. many techniques have been developed, mainly in minimally invasive surgery. laparoscopic ventral and incisional hernia repair (lvihr) has become a common procedure because of its feasibility and safety, but unfortunately it is not free of complications. chronic postoperative pain and bleeding are frequent complications, prolonging hospital stay and altering patients' quality of life. absorbable or non-absorbable tacks are the usual method of mesh fixation, sometimes combined with transfascial sutures to secure the mesh. these two mechanical fixations pierce the abdominal wall, causing nerve or vessel injuries. some studies showed no differences between absorbable tacks, non-absorbable tacks or transfascial sutures concerning postoperative pain, which remains remarkably high. some authors consider that a non-penetrating fixation of the mesh, achieving an effective mesh-abdominal wall interface, will significantly reduce postoperative pain after laparoscopic ventral hernia repair. tissue glues are used in different medical treatments and have also been used successfully for extraperitoneal mesh fixation in laparoscopic inguinal hernia repair and open ventral hernia repair, but not so in laparoscopic ventral hernia repair, in spite of good results published in the literature. cyanoacrylate and its derivatives are 'synthetic glues' classified as medical devices, with stronger adherent properties than fibrin glues. experimental studies have reported good results compared with suture fixation, and tissue toxicity does not lead to an increased foreign body reaction. 
some authors have studied the use of cyanoacrylate in laparoscopic inguinal hernia repair, but unfortunately no clinical trial reports in ventral and incisional hernia repair were found in the literature, owing to the lack of experimental studies guaranteeing the safety of intra-abdominal mesh fixation and the interaction of the glue with intra-abdominal tissue. our group developed an experimental study demonstrating the feasibility, safety and effectiveness of cyanoacrylate for intraperitoneal mesh fixation and, following this conclusion, started a clinical study. this video shows the methodology for laparoscopic mesh fixation with glue alone in our first cases. aims: small epigastric hernias, associated or not with rectus abdominis diastasis, and small umbilical hernias are common in middle-aged women, particularly those with a history of pregnancy. the aim of this video is to illustrate a new extraperitoneal approach to these clinical situations. methods: patients between 40 and 60 years of age, with an epigastric hernia orifice up to 2 cm, with or without an associated umbilical hernia (up to 2 cm), were chosen for this procedure. the surgery begins with a vertical umbilical incision for correction of the umbilical hernia, and dissection of the pre-aponeurotic plane. two 3 mm trocars (mini-laparoscopy instruments) are introduced at both flanks to enlarge the pre-aponeurotic plane towards the xiphoid process. in this way the epigastric hernia defects are isolated. the surgery proceeds with defect suturing with braided suture, midline invagination and mesh placement if necessary. results: all patients had an uneventful post-operative period and were discharged home on postoperative day 1. the aesthetic and functional results are optimal. conclusion: for selected cases with high aesthetic motivation, this technique seems feasible and offers optimal cosmetic results. 
this technique allows mesh placement both inlay and onlay, protecting it from the surgical site infections often present with the classical approach. bochdalek hernia is a rare entity in adults. fewer than 200 cases have been reported in the medical literature, the majority of which were diagnosed incidentally. as such, the optimal repair of a symptomatic hernia is unknown. we present a case of adult bochdalek hernia repair. methods: a 30-year-old obese male patient presented with 2 years of chronic dry cough and left lung opacity on chest x-ray. a large posterior and lateral bochdalek hernia with herniation of intestinal loops and fat into the left hemithorax was seen on chest and upper abdominal ct scan. the hernia extended to the mid-thorax and caused significant atelectasis of the left lung. eighteen months later, owing to the appearance of chest and abdominal pain following a recent motor vehicle accident, a repeat chest ct was performed and slight enlargement of the hernia was shown. results: the patient was operated on laparoscopically, positioned in a semi-right lateral decubitus with double-lumen intubation. a large left posterior and lateral diaphragmatic hernia containing the transverse and descending colon with omental fat was seen. the contents were pulled back into the intraperitoneal space carefully. the defect measured 10x8 cm. it was reduced to 7x7 cm by suturing with a non-absorbable 0 v-loc suture. advancing the camera into the thoracic cavity showed the left lung to be severely atelectatic. after selective recruitment the lung was well expanded. a 20x25 cm symbotex composite mesh was fixed to the defect area with sutures and a laparoscopic tacker. the operation and post-operative course were uneventful. chest x-ray demonstrated the bowel below the diaphragm. the patient was discharged on pod 3. at 8-month follow-up, the chest x-ray was normal. objective: to demonstrate the safety and efficacy of a standardized laparoscopic approach in the treatment of large parastomal hernia. 
currently, this approach is recognized as the approach of choice in parastomal hernia pathology, although the best technique remains controversial: keyhole vs. sugarbaker. material and method: clinical case: a 76-year-old woman with a history of laparoscopic abdominoperineal resection for rectal neoplasia (pt2n0) one year earlier, with a symptomatic parastomal hernia with incarceration episodes and inflammatory changes at the stomal orifice. ct: large parastomal hernia with intestinal content inside. surgical treatment was decided. results: intervention: complete laparoscopic approach, partial right lateral decubitus, 4 trocars, dissection of the hernia defect and reduction of the content, partial mobilization of the pre-stomal colon, with bleeding at the level of the vascular origin requiring careful hemostasis to avoid ischemia of the colostomy, herniorrhaphy with stitches with extracorporeal knotting, and placement of a polypropylene/pvdf mesh fixed with non-absorbable tacks and biological glue administered at the edges of the mesh. uneventful postoperative course, with discharge on the 3rd day. asymptomatic and without hernia recurrence at one year of follow-up. conclusions: the sugarbaker technique using a laparoscopic approach is a safe and effective alternative in the treatment of parastomal hernias. objectives: laparoscopic ventral hernia repair provides advantages in terms of low infection rates and postoperative stay when compared with open repair. the trend in laparoscopic abdominal wall surgery is complete defect closure without tension in the midline. closing the defect in ventral hernias wider than 8-9 cm creates high tension in the midline and postoperative pain. different techniques have been proposed to solve this drawback. laparoscopic posterior component separation makes defect closure easier, with no tension, and places the mesh extraperitoneally. 
methods: a 65-year-old woman with a previous total hysterectomy; an m3m4w3 midline incisional hernia was clinically diagnosed and confirmed with ct scan. fully laparoscopic abdominal wall repair with defect closure was proposed. three trocars were placed on the left side and the right posterior rectus sheath was freed at the defect margin. once the lateral edge of the rectus sheath was reached, the posterior rectus sheath was incised, dividing the posterior aponeurotic sheath of the internal oblique muscle. this allows access to the plane between the internal oblique and the transversus abdominis muscles. the same steps were performed on the left side with 3 trocars in the right flank. the posterior rectus sheaths of both sides were reapproximated in the midline and a 20x20 cm polypropylene mesh was placed and unfolded properly. it was fixed using cyanoacrylate glue. one drain was left in the retromuscular position and the 10 mm trocar wounds were sutured. results: the postoperative course was uneventful. hospital stay was 24 h. the drain was removed on day 3 after surgery. after 9 months of follow-up, no complication or recurrence was identified. methods: this video shows evidence of a gangrenous jejunal segment due to superior mesenteric vein thrombosis in a patient with a history of breast cancer on hormonal treatment. in this video, the gangrenous segment was resected and primary anastomosis was performed using a 60 mm endo gia stapler. results: a second look after 48 h was negative for any further ischemic bowel. conclusion: laparoscopy in the acute abdomen is therefore both diagnostic and therapeutic. introduction: gastric pseudo-volvulation is a rare entity of paraesophageal hernia characterized by migration of the stomach into the posterior mediastinum. this clinico-radiological picture has severe complications, so in certain cases it should be operated on urgently. another small group of patients are asymptomatic, although the current literature recommends their elective surgical intervention. 
we present a gastric pseudo-volvulation in the mediastinum treated via a laparoscopic approach, showing that by systematizing the surgery it is possible to perform this type of intervention with relative ease and safety. material and methods: we present a video of an urgent laparoscopic approach in an 80-year-old female patient with a personal history of hypertension, smoking and dyslipidemia, and a hiatus hernia diagnosed more than ten years ago. she presented to the emergency department with significant symptoms of heartburn and reflux, as well as intractable vomiting and difficulty feeding of one week's evolution. plain abdominal and posteroanterior chest radiographs were performed, showing a paraesophageal hiatus hernia with almost the entire stomach included in the mediastinum. thoraco-abdominal axial tomography confirmed a giant hiatus hernia with signs of pseudovolvulation and incarceration. urgent intervention by laparoscopic approach was decided, in which hiatus hernia reduction and esophageal abdominalization were performed, with closure of the pillars and reinforcement with a bioabsorbable mesh, a toupet fundoplication, and gastropexy of the anterior gastric surface to the anterior peritoneum of the abdominal wall. results: the patient had an uneventful 48-h postoperative course and was discharged on a blended diet. follow-up and evolution have been acceptable, without notable complications. conclusion: the laparoscopic approach, in extreme cases of paraesophageal hiatus hernia with incarceration and pseudovolvulation of the stomach, is a correct, safe and effective alternative in experienced groups. case report of incarcerated hiatal hernia: a 30-year-old female was admitted to the hospital due to severe chest pain and vomiting for about six hours. physical examination and lab tests showed no aberrations. chest x-ray revealed an incarcerated stomach above the diaphragm. she was rushed to the or. 
a laparoscopic approach was used; the stomach was reduced from the chest and a nissen fundoplication was performed. the day after surgery the patient was asymptomatic and received a full oral diet. she was discharged on postoperative day two, without the need for any analgesics. gastroduodenoscopy performed 6 weeks after surgery showed a normal appearance of the oesophagus, stomach and duodenum; neither signs of hiatal hernia nor inflammation were present. the laparoscopic approach is a good way to treat incarcerated hiatal hernias and is associated with shorter length of stay, less postoperative pain and better patient comfort, and it should be the procedure of choice in these cases. she was operated on by open technique using a 2 cm long incision in the right iliac fossa, and the appendix was phlegmonous. the patient began to feel unwell from the second postoperative day, with a temperature over 38°c, pain and rising crp. her general condition worsened the next day, with the temperature rising to 39.5°c, extreme generalized pain and a crp of 343. ct of the abdomen indicated signs of generalized peritonitis and raised the suspicion of a forgotten large gauze. the patient was operated on using a laparoscopic technique: identifying and removing the foreign body, performing adhesiolysis and extensive lavage, and finally inserting a drain in the pouch of douglas. the video presents what kind of special graspers can be used, as well as tips and tricks for identifying the anatomy and dissecting in an acute, inflamed environment. postoperatively the patient began to feel better and on the 5th day was discharged home. conclusion: this case illustrates that even after open surgery, laparoscopy is a viable solution, provided that experience in minimally invasive surgery is available. introduction: foreign bodies can enter the human body by different mechanisms such as ingestion, aspiration, trauma or, in some cases, medical procedures. 
they are potentially life-threatening events; the diagnosis can be challenging and management depends on their location. case report: a 64-year-old male was referred to our hospital due to chronic abdominal pain. he had cholelithiasis, a medical history of acute pericarditis, and a past surgical history of left adrenalectomy, left nephrectomy, distal pancreatectomy and colon resection due to an adrenal adenocarcinoma (stage t4n0m0). abdominal radiograph showed a foreign body in the left lower quadrant of the abdomen as an incidental finding. this had not been detected on ct scans during ten years of oncology follow-up. ct scan revealed an extraintestinal metallic curved object in the right lower quadrant. this finding was not related to any surgical intervention or trauma. diagnostic laparoscopy was performed: the foreign body appeared to be a guidewire, included in the omentum and almost stuck to the abdominal wall. the guidewire was reached and carefully extracted through a 10 mm trocar without any evidence of intra-abdominal organ injury. an elective cholecystectomy was then also performed, given his history of symptomatic cholelithiasis. the procedure lasted 60 min. the patient was discharged on the third postoperative day and no complication was registered. conclusion: it is extremely rare to discover a guidewire that has migrated into the peritoneal space without abdominal injury. this case report demonstrates the technical feasibility, safety and minimal postoperative morbidity associated with minimally invasive laparoscopic removal. aims: the authors present a video of their standardized laparoscopic transabdominal preperitoneal (tapp) groin hernioplasty procedure, performed with 3 mm instruments and a 5 mm camera. methods: a 45-year-old male patient with a bmi of 30 presented with a symptomatic bilateral groin hernia of 5 months' duration. ultrasound showed bilateral indirect inguinal hernias. 
a laparoscopic tapp hernia repair was performed using 3 mm instruments and a 5 mm camera. a self-gripping mesh preperitoneal hernioplasty and hermetic peritoneal flap closure with barbed suture were performed. the case and technical details are shown in the video. results: the patient was discharged from hospital within 4 h with a score of 2 on the visual analogue acute pain scale. over a 2-year follow-up, there has been no anatomical or clinical recurrence. no chronic pain, anatomical recurrence, or umbilical or abdominal wall complications have been reported within this period. conclusions: depending on patient characteristics, anatomical factors and the surgeon's minimally invasive experience, laparoscopic bilateral hernia repair using 3 mm instruments could be a safe and feasible option. more studies are needed in order to standardize this approach. results: during the tapp approach a direct hernia recurrence was identified; the previous mesh was included in the preperitoneal space and some non-absorbable sutures to the inguinal ligament were identified. the stitches were removed and a near-total mesh removal (only the part surrounding the cord elements was left in place) was performed. a 15x15 cm heavyweight polypropylene mesh was employed, fixed with glubran 2®, and the flap was closed with running sutures. the patient was discharged uneventfully the same day. seven months later he did not need analgesics and had no physical impairment. conclusions: chronic pain after inguinal hernia repair can be severe and disabling, and is becoming more prevalent. its origin is complex, and meshes and sutures may play a role. management is multimodal and demanding. for refractory patients, surgery may be an option. laparoscopic, open and mixed approaches have been employed. they usually combine mesh removal and substitution (often in different planes) with groin nerve therapies. nowadays, triple neurectomy seems to be the most effective treatment (more than 90% pain relief). 
generally, removal of the mesh alone does not lead to lasting pain relief and has worse outcomes compared with associated neurectomy. introduction: mesh repair of inguinal hernia is sometimes followed by adverse effects such as mesh migration, chronic groin pain or recurrence. removal of the mesh is necessary in selected cases. we approach these cases via tapp intervention. methods: we present a video with two laparoscopic (tapp) interventions for recurrent inguinal hernia. we highlight the points used to decide whether or not to explant the mesh. the conditions determining explantation were proximity to the main vessels in the inguinal area (epigastric and femoral vessels) and plication of the mesh. results and conclusion: as shown in the video, explantation of the mesh is conditioned only by plication of the mesh due to its migration and recurrence, usually accompanied by pain. we never remove the mesh or plug if it lies in the triangle of doom with firm adhesions to the main vessels. we cover the previous mesh with a new lightweight 3d mesh, closing the peritoneum over the new repair at the end. introduction: the tep technique is no longer a controversial area in surgical practice for inguinal hernias, but a fully accepted method. general anesthesia has been the mainstay of laparoscopic hernia repair, but epidural anesthesia is not a contraindication in properly selected patients. material-method: access to the extraperitoneal space was achieved without the use of a dilation balloon, via introduction of the camera and dissection of the regional structures. three trocar ports were used: a 10 mm trocar through the umbilicus for the camera, exactly as in sils (single incision laparoscopic surgery); a 5 mm trocar placed in the midline between the umbilicus and pubis; and a final 5 mm trocar placed in the midclavicular line ipsilateral to the hernia. 
the key in every operation was the tension-free technique, with placement and fixation of a 10x5 cm mesh. in 20/25 cases the mesh was fixed with tacks medial to the inferior epigastric artery-vein complex. all patients were discharged from the hospital within 24 h; no drain was placed and no major postoperative complications occurred. conclusion: tep is a demanding technique with a serious learning curve. the use of a dilation balloon for access to the extraperitoneal space is not a prerequisite. tep is an appropriate method both for primary and recurrent inguinal hernias. epidural anesthesia instead of general anesthesia is not a contraindication in properly selected patients. aims: the aim of this study was to investigate the effects of preperitoneal carbon dioxide (co2) insufflation during tapp (transabdominal preperitoneal) repair. materials and methods: 20 male patients with inguinal hernia were included in our study. we obtained laparoscopic access at the umbilicus and introduced a 10 mm port. two 5 mm working ports were placed laterally. diagnostic laparoscopy of the entire abdomen is necessary to rule out other pathology or contraindications for surgery. using an aspiration needle, we insufflated carbon dioxide (14 mmhg) preperitoneally at the level of the anterior superior iliac spine while decreasing the abdominal gas pressure to 8 mmhg. the same procedure was performed lateral to the umbilical artery. results: we found that preperitoneal carbon dioxide (co2) insufflation during tapp facilitates subsequent parietalisation and may even reduce operating time with future improvements of the technique. there were no intraoperative complications related to this procedure. we did not find any potential risk of the technique when used by trained surgeons. aims: laparoscopic inguinal hernia repairs (lihr) are performed more and more frequently because they offer certain advantages; however, we cannot forget their specific complications. 
lihr is sometimes associated with peritoneal tears that can lead to bowel obstruction. we present two cases of bowel obstruction related to peritoneal defects after tapp procedures, and review peritoneal closure, bowel obstruction and options to repair defects. a 79-year-old male was scheduled for tapp due to bilateral recurrence. two 10x15 cm tio2mesh™ meshes were placed, fixed with securestrap®, which was also employed for peritoneal flap closure. three days later he was readmitted with bowel obstruction, with ct suggesting 'adhesions'. a 56-year-old male had bilateral tapp in another centre. seven days later he presented with bowel obstruction. ct showed metallic tackers and suggested 'adhesions'. results: first case: after four days of failed conservative treatment, a revisional laparoscopy showed ileal herniation through a peritoneal defect and firm adhesions to the mesh. the bowel was laboriously separated and the peritoneal defect closed with two running sutures. he was discharged on the 7th postoperative day and three years later is asymptomatic. second case: after two days of failed conservative treatment, laparoscopy showed the ileum adhered by filmy adhesions to the polypropylene mesh through a big defect in the flap closure. the defect was closed with interrupted sutures. as tears persisted, an omental flap was created to cover the area. the patient was discharged on the 5th day and remains asymptomatic three years later. conclusions: lihr bowel obstructions can be divided into adhesive disease and herniation. herniation can be early (through peritoneal defects) or late (trocar site). international guidelines recommend thorough closure of the peritoneal incision and of larger tears (grade b). closure can be achieved with staples, tacks, running suture, or glue. these last two methods are more time-consuming but less painful. running suture seems to be the best option, owing to its low cost, tightness and low pain, but it can sometimes be technically difficult. 
low intra-abdominal pressures (≤ 8 mmhg) facilitate suturing. when a herniation appears, careful bowel management is needed and running sutures are recommended. if tears persist, an omental flap can be useful. aims: application of a single port robotic platform to perform an entirely transanal tatme/tata. methods: the following video demonstrates how a novel single port (sp) robotic platform was used to carry out a totally transanal proctosigmoidectomy: single port robotic tatme/tata. a 38-year-old female patient with a clinical t3n1b rectal cancer at the 3 cm level, status post neoadjuvant chemoradiotherapy (5580 cgy, xeloda), is presented. shown here is the open transanal dissection followed by docking of the sp robot and introduction of the single port instruments (fenestrated bipolar forceps, cadiere forceps, scissors, camera, clip applier) through a gelpoint path to complete a totally transanal proctosigmoidectomy, including transanal tatme, ima/imv transection, splenic flexure release, left colonic mobilization, loop ileostomy, and handsewn coloanal anastomosis. results: blood loss was 100 cc. pathology demonstrated a moderately differentiated rectal adenocarcinoma. the total mesorectal excision was complete (grade 3), margins were negative, and all 17 lymph nodes were negative for metastatic carcinoma. the patient was discharged on postoperative day 4 after an uncomplicated hospital course. there was no postoperative morbidity or mortality. conclusions: application of the single port robot to transanal tatme/tata (sp rtatme) is presented here. while much work remains to be done to validate the sp robot's safety, this first demonstration of a totally transanal tatme/tata establishes its feasibility and utility. this single port platform stands to greatly expand the application of natural orifice transluminal endoscopic surgery (notes). 
as shown, the sp robot offers more than sufficient visualization, technical control, and adequate reach to perform such an operation. we present an exciting new avenue by which to complete operations in an entirely transanal fashion that are classically performed via a combined transanal and transabdominal approach. methods: this video shows the use of a new robotic platform to perform robotic transanal endoluminal microsurgery (rtem). presented here is a 53-year-old woman with a recurrent rectal adenoma at the 6 cm level, status post a previous tem resection in october 2017. demonstrated is the use of the sp robot through a gelpoint path to perform a partial full-thickness and full-thickness resection. the robot is introduced through a 25 mm diameter cannula via a four-channel face-plate. the instruments' two-jointed mobility at the elbows and wrists, as well as the novel navigation system, are well demonstrated. docking of the sp robot, use of the dissecting devices, and closure of the defect are shown. results: sp rtem was performed with a blood loss of 5 cc, and the patient was discharged on postoperative day 1. there was no postoperative morbidity, mortality, or moderate/severe pain. pathology showed tubular adenoma with low-grade dysplasia in a non-fragmented specimen with circumferentially negative margins. conclusion: initial experience using the sp robot for rtem is demonstrated here. the robot provides excellent visualization and operative control for the surgeon. articulation of the robot's wrists and arms has the potential to facilitate technical aspects of the procedure. rtem stands as an exciting development in the field of transanal endoluminal surgery. introduction: the application of the robotic approach in the esophageal surgical field is in its first phase. 
The microsuturing and microdissection capabilities of the robotic system can potentially overcome the traditional limitations of laparoscopic surgery, thus expanding the indications for minimally invasive surgery. Methods: We performed a retrospective analysis of our prospectively maintained database, which included 16 patients who underwent robotic-assisted esophagectomy for malignant disease between 2014 and 2017. Results: Ten of sixteen patients had squamous cell carcinoma and six had adenocarcinoma. Ten McKeown and six Ivor Lewis esophagectomies were performed. The mean operative time was 525 min (332–688) and the median blood loss was 155 ml (70–220). No patient required conversion or intraoperative transfusion. The morbidity rate was 3/16 (18.7%): a transitory laryngeal nerve paresis, a pneumothorax, and a pneumonia. The mean hospital stay was 8 (range 7–23) days. An R0 resection rate of 93.7% was achieved, with a mean lymph node yield of 16 (13–21). The 1-year disease-free survival was 82.8%, whereas the 1-year overall survival was 88.5%. Conclusions: Robotic-assisted minimally invasive esophagectomy (RAMIE) is safe and feasible; it offers promising results while preserving good oncologic adequacy.

This video shows our technique for the treatment of an esophageal diverticulum using a robotic left-sided transthoracic approach, followed by a Heller myotomy and Dor fundoplication using a transabdominal approach. Our case is a 75-year-old male who suffered from severe dysphagia, halitosis, and gastric reflux, and who on endoscopic and radiological investigations was found to have a low-grade, 3-cm-wide esophageal diverticulum, 7 cm from the lower esophageal sphincter. Initially conservative management was attempted; however, following poor compliance and persistence of symptoms after 1 year of therapy, surgical intervention was indicated. The operation was performed using the minimally invasive da Vinci Si® robotic system, starting with the thoracic phase.
The patient is positioned in left lateral decubitus. The camera trocar is inserted into the thorax via the fifth intercostal space; two 8-mm and one 12-mm robotic trocars are added. The lung is freed from pleural adhesions and the esophagus is then prepared, exposing the diverticulum, which is successfully removed with an Endo GIA™ stapler. The esophageal muscle layer near the suture line is reinforced with interrupted Vicryl stitches, and the resected specimen is extracted in an endo-bag. A 16-Fr thoracic drainage tube is then placed and the trocar accesses are repaired. The patient is then placed in the supine position with a 15° anti-Trendelenburg angle. Three robotic trocars (two 8-mm and one 12-mm) are placed and robot docking is performed from over the patient's left shoulder. The lesser omentum is divided to visualize and prepare the gastro-esophageal junction (GEJ), sparing the vagus nerve. The Heller myotomy is then performed for 4 cm above the GEJ and 3 cm below it. Mucosal integrity is verified by simultaneous laparoscopic and gastroscopic views. The gastric fundus is attached to the distal esophagus, completing the Dor fundoplication. Postoperative care comprises removal of the thoracic drain on the first postoperative day, pain management, and progressive refeeding. The hospital stay lasted 6 days and the patient was discharged without complications.

Uniportal video-assisted lung lobectomies have gained popularity all over the world during the last 10 years. The technique is safely applied for peripheral pulmonary lesions under 6 cm, but more and more complex cases are being approached as the indications continue to evolve. Our aim is to present the particular aspects of this technique in an 11-year-old female patient with a giant bullous lesion located in the lower lobe of the right lung. The preoperative work-up for this case is presented and commented upon.
A multidisciplinary surgical team consisting of thoracic and pediatric surgeons was involved. A single 3.5-cm incision in the fourth intercostal space was used for access. Because the lesion involved almost the entire lobe and the margins were very close to the hilum, we decided on and performed a right lower lobectomy. Dissection and stapling were quite difficult: all the anatomical structures had small dimensions, forcing us to perform an 'artery-first approach' in a very narrow space. No complications during or after surgery were encountered. The patient was discharged after four days and went to school on the sixth day. Histopathological examination showed that the lesion was a type 1 CCAM (congenital cystic adenomatoid malformation). Conclusion: Uniportal video-assisted lung lobectomy was safely applied for a giant bullous lesion of the right lung.

Aim: Dunbar syndrome, celiac trunk (CT) compression syndrome caused by the median arcuate ligament, is a rarely diagnosed disease because of its nonspecific symptoms, which cause a delay in correct diagnosis. The aim of the study was to demonstrate the usefulness and advantages of the laparoscopic approach in the treatment of Dunbar syndrome. Methods: We performed three laparoscopic releases of the CT in the Department of General, Minimally Invasive and Elderly Surgery in Olsztyn in 2018. All three patients suffered from severe abdominal pain before surgery. Results: In two cases there was complete remission of symptoms; in one case there was an improvement. All patients reported relief of symptoms in the first days after the operation. There were no postoperative complications. Conclusions: Laparoscopic treatment of Dunbar syndrome seems to be a safe and feasible procedure. Laparoscopic surgery alone can often eliminate discomfort, so that angioplasty and stent implantation are no longer necessary.
Introduction: Advances in robotic surgery have permitted the application of this technology to various surgical fields, one of the latest being hernia surgery. We present a video case of the treatment of a dual hernia using robotic retromuscular ventral hernia repair (rRVHR) with the da Vinci Si® robotic system. The case demonstrates an evolution of the trans-abdominal robotic umbilical prosthetic (TARUP) repair in that it uses a 'double docking' technique to allow the positioning of a large retromuscular mesh. Methods: Our patient is a 50-year-old male who presented with chronic epigastric pain. Abdominal CT confirmed two abdominal wall hernias: an epigastric and a supra-umbilical hernia with visceral contents and wall-defect diameters of 6 cm and 2.5 cm, respectively. Using the minimally invasive da Vinci Si® robotic system, we adapted the well-known retromuscular mesh technique. The operation began intraperitoneally, with access to the retromuscular preperitoneal space through a right-sided longitudinal incision (as per the standard TARUP technique). We proceeded with dissection of the retromuscular space to the left lateral edge of the rectus sheath, creating a preperitoneal space for the placement of a specifically modified Ultrapro® polypropylene 25 × 22 cm mesh. We then repositioned the da Vinci Si® in a symmetrical manner, with ports placed in the retromuscular space. The mesh was positioned and the peritoneum subsequently closed with a V-Loc™ suture. Finally, we opted for a negative-pressure Jackson-Pratt drain, inserted preperitoneally. Results: The patient was discharged on the 2nd postoperative day without complication. Follow-up continued until 12 months postoperatively, during which the patient remained asymptomatic, without signs of hernia recurrence. Conclusion: The technique highlighted in our video demonstrates the utility of the robotic system in hernia repair.
Specifically, the approach proved successful in that it facilitates placement of the mesh totally extraperitoneally, with tension-free closure of the posterior sheath. Added advantages are that the port sites are distant from the mesh, reducing infective risk, and that this technique allows the treatment of large peritoneal defects. Surg Endosc (2019)

Aim: To analyse the performance of a robotic fellow during robotic total mesorectal excision (TME) at the end of the fellowship, and to compare it with that of the mentor. Methods: The fellow is exposed to two robotic colorectal lists per week. During the fellowship, assessment of performance is recorded in a structured proforma covering aspects of autonomy, tissue handling, and dissection. At the end of the fellowship, a review of cases performed by the fellow and the mentor was carried out in a blinded manner (video footage). Results: Robotic TME training was divided into modules in order of complexity, and the trainee had to achieve sequential proficiency in each module before progression: docking of the da Vinci robotic system; inferior mesenteric artery exposure and ligation; development of the medial-to-lateral plane and inferior mesenteric vein division; left colonic and splenic flexure mobilization; pancreas identification; rectal dissection (TME). Qualitative assessments were recorded by the mentor; the fellow was 'able to perform with verbal help' most of the steps from early on. By the end of the fellowship, all steps were performed in a similar manner in terms of quality and oncological integrity when compared with the mentor. Conclusions: At completion of an advanced robotic colorectal fellowship, high-quality trainees can perform every step of the TME dissection in a manner similar to the trainer, when assessed blindly, without compromising oncological integrity.
Aims: To find a safe and simple method for robotic rectal low anterior resection with low-tie arterial ligation and lymph node dissection around the root of the inferior mesenteric artery. Methods: We performed robotic rectal low anterior resection (RLAR) with the da Vinci Si system in eight patients with rectal cancer. We applied low-tie arterial ligation, just caudal to the origin of the left colic artery, in all cases. During the procedure we used the TilePro™ function of the da Vinci Si system, which displays two additional visual inputs beneath the normal three-dimensional surgeon-console view. A preoperative 3D-CT vessel-branching simulation video and an intraoperative real-time ultrasound navigation view were displayed simultaneously beneath the normal operative camera view in the surgeon console. Results: Left colic artery preservation was achieved in all 8 cases. The mean time to find and expose the left colic artery from the first incision in the sigmoid mesentery was 5 min, drastically shorter than with the conventional method. This method required less mobilization of the inferior mesenteric artery (IMA), and may be less invasive to the autonomic nerves around the root of the IMA, which are very important for ejaculatory function. Conclusion: Robotic rectal low anterior resection with low-tie arterial ligation was performed safely and quickly using the TilePro™ intraoperative navigation method. The preoperative 3D-CT vessel-branching simulation video and intraoperative real-time ultrasound navigation view were very useful during the procedure. We present the method in the video.

Nerve-sparing TME and pelvic neuroanatomy for colorectal surgeons. P. Tejedor, F. Sagias, J.S. Khan. Aim: To describe the critical points at which the pelvic nerves can be damaged during total mesorectal excision (TME) for rectal cancer, and the benefits of robotic surgery in identifying these points.
Methods: There are four critical points in pelvic neuroanatomy. (1) Superior hypogastric plexus (SHP): located in front of L5–S1. The ganglionic sympathetic fibres form the right and left sympathetic trunks, travel along the anterior surface of the aorta, and coalesce in the SHP at the level of the inferior mesenteric artery (IMA). (2) Superior hypogastric nerves: they take an anterolateral course into the pelvis; there is an avascular 'holy plane' around the rectum between these two nerves. (3) Inferior hypogastric plexus (IHP): lies over the posterolateral pelvis, almost parallel to the internal iliac arteries; it can be identified at the lower end of the rectum. (4) Neurovascular bundles (of Walsh): in front of Denonvilliers' fascia, at the 2 and 10 o'clock positions; they are responsible for erectile function. Results: Lack of knowledge or failure to identify the key structures at these four points can lead to increased risk of nerve damage and translate into poor functional outcomes. The IMA is dissected up to its origin from the aorta, where the SHP can be seen; care is taken to avoid any damage to these structures. The TME plane is found at the back of the IMA as the innermost dissectible layer between the mesorectum and the pelvic fascia. The right and left superior hypogastric nerves are identified. Dissection is carried out posteriorly, laterally, and anteriorly. The IHP is identified at the lower third of the rectum, when the dissection is about to reach the pelvic floor; care should be taken not to go too far laterally and damage this plexus. In the anterior dissection, the plane is developed in front of Denonvilliers' fascia. The neurovascular bundles can be seen at the 2 and 10 o'clock positions, and the surgeon must be careful to stay inside that plane to avoid damage. Conclusions: The precise dissection of robotic surgery results in minimal tissue damage and better visualization and preservation of the pelvic nerves.
Aims: To describe and evaluate the contributions and possible advantages of ICG fluorescence in performing an ICG-guided bilateral pelvic lymph node dissection in a patient who underwent low anterior resection for rectal carcinoma. We also present the basic steps to avoid ileostomy during rectal surgery, in which ICG and ghost ileostomy play an important role. Methods: A 68-year-old male patient was referred to our hospital due to abdominal pain and significant changes in usual bowel habits. Colonoscopy showed a non-obstructing 5-cm middle rectal mass, which was reported as an adenocarcinoma. CT scan and MRI revealed a 63 × 52 mm polyp in the anterior rectal wall, located 7 cm from the anal verge. It involved the mucosa and submucosa, with muscularis propria invasion. No pathological lymphadenopathy or hepatic metastatic disease was found (stage T2N0). A laparoscopic ultra-low anterior resection with total mesorectal excision plus ICG-guided lateral lymphadenectomy was performed. Complete splenic flexure mobilization was performed to achieve a safe, tension-free anastomosis. The transection line of the proximal rectum was checked after ICG intravenous injection. ICG was injected around the tumor through an anoscope just before surgery. After dissection of the rectum, lateral lymphadenectomy was performed assisted by ICG. An end-to-side anastomosis was made, and a vascular loop was passed around the terminal ileum to create a ghost ileostomy. The procedure lasted 120 min. C-reactive protein was monitored to detect an early leak. The patient was discharged on postoperative day 7 and no complication was detected. Results: Pathological examination reported a rectal adenocarcinoma. Pelvic lymphadenectomy yielded 2 negative nodes, 2 negative nodes, and 10 negative nodes from the right lymph node dissection, left lymph node dissection, and rectosigmoid resection specimen, respectively. No metastatic disease was found (stage T1N0M0).
Conclusions: In our experience, the ICG fluorescence imaging system offers important contributions to rectal surgery beyond evaluating the vascular supply to the anastomosis. Lymphatic mapping of the lateral lymph nodes and avoidance of ileostomy could be important uses in the future.

Background: Robotic surgery for colorectal cancer is an emerging technique, and potential benefits compared to conventional laparoscopic surgery have been demonstrated. Innovative robotic technologies have helped surgeons overcome many technical difficulties of conventional laparoscopic surgery, such as hand-eye coordination, a two-dimensional view, and a restricted range of motion. Robotic-assisted surgery was established as a new approach to minimally invasive surgery, overcoming these limitations. The following video shows a total robotic sigmoidectomy step by step, on the basis of our experience. Intervention: A 52-year-old male patient with no previous medical history and a colon adenocarcinoma 22 cm from the anal verge, with no distant metastases. It was decided to perform a robotic sigmoidectomy. The target anatomy was located and we proceeded to expose the mesenteric vessels from medial to lateral. Cautery was used to open the peritoneum up to the origin of the inferior mesenteric artery and caudally past the sacral promontory. The vessels were transected with the LigaSure™. We performed complete release of the colon, taking care to avoid injury to retroperitoneal structures. We used the LigaSure™ to divide the mesocolon in preparation for transection of the proximal colon. Indocyanine green was used to check correct vascularization. An Endo GIA™ Tri-Staple™ was used to divide the colon. Subsequently, we transected the rectum and extracted the specimen through it, with no need for any auxiliary incisions.
We introduced the anvil of the suture device to perform the anastomosis. We transected and closed the rectum with an Endo GIA™ Tri-Staple™. Finally, we opened the proximal colon to introduce the anvil, placing a purse-string suture to fix it and create a side-to-end anastomosis. Outcome: The surgery took 110 min. The patient started oral intake 6 h after surgery and left the hospital on the 3rd postoperative day. Pathological examination showed a colon adenocarcinoma, pT1N0. Conclusion: Total robotic sigmoidectomy is safe and feasible and can be a procedure of choice to achieve good surgical quality and avoid assistance incisions in patients with colon cancer.

More and more data now advocate a watch-and-wait policy for these patients, which requires close radiological and endoscopic follow-up; unfortunately, around 30% of them have regrowth of the tumour, which requires surgical intervention. The use of the robot for cancer resections is becoming more frequent, especially in narrow spaces such as an obese male pelvis, owing to better three-dimensional views, greater instrument angulation, and exclusion of tremor, which in turn lead to better dissection and preservation of the hypogastric nerves. In this video we present a robotic low anterior resection for rectal regrowth in an obese 55-year-old male patient. He was offered neoadjuvant chemoradiotherapy after discussion in the MDT. He had a complete response to chemoradiotherapy, and it was decided to offer him a watch-and-wait regime. Unfortunately, he developed rectal regrowth in the first year of his follow-up. Imaging showed a T2 lesion with no distant metastasis, later confirmed on histology. After MDT discussion he was offered robotic low anterior resection. The video starts by showing the clinicopathological features of the patient, including his radiological and endoscopic images. Robotic port sites are shown.
The edited video starts with rectal dissection after ligation of the inferior mesenteric artery and vein, with emphasis on the narrow pelvis and preservation of the hypogastric nerves, seminal vesicles, and an intact presacral fascia. Postoperative histology was ypT2N0, and the patient was discharged home after 3 days with no postoperative complications.

Background: Minimally invasive surgery for colon resection has improved patient outcomes; however, a minilaparotomy is still necessary to extract the specimen. This report describes a new approach that combines laparoscopic parallel overlap stapling left colectomy with natural orifice specimen extraction surgery, with the aim of minimizing abdominal wall trauma. Method: Laparoscopic left colectomy for malignant disease was performed using a standard five-port technique. After releasing the left colon laparoscopically, the specimen was divided proximally and distally with a 60-mm Echelon™ stapler, and the distal sigmoid colon and proximal transverse colon were brought together. The sigmoid colon was opened 6 cm from the distal margin, and the transverse colon was incised at the proximal margin. A side-to-side anastomosis between the transverse colon and sigmoid colon was created with the 60-mm Echelon™. The posterior vaginal fornix was incised to enter the abdominal cavity, and the specimen was extracted transvaginally. Outcome parameters such as complications, conversions, operative time, postoperative recovery, and postoperative pain were prospectively recorded in a database. Results: Surgery was performed in 17 patients with left-colonic carcinoma. No perioperative complications or conversions occurred. The median operating time was 157 min. The median visual analogue scale score for postoperative pain was 1, and 2 of 17 patients needed analgesia on postoperative day 1. The median postoperative hospital stay was 6 days. Tissue margins were oncologically adequate, and the average number of harvested lymph nodes was 16.9. The 4-week follow-up period was uneventful.
Conclusion: The described technique, a combination of laparoscopic parallel overlap stapling and natural orifice surgery, has the potential to avoid the incision-related morbidity of the minilaparotomy in laparoscopic left colon resections.

Background: Open surgical skills training has been well established over centuries; laparoscopic surgical skills training, however, differs in important ways. It is an obvious advantage that the trainee and the trainer share the same view, but hurdles include differences in tactile feedback, hand-eye coordination, spatial awareness, depth perception, and maximizing assistance. Aim: We present a video highlighting some of the key challenges faced in laparoscopic colorectal surgical training, showcasing our systematic, structured approach. Our approach: We have developed a structured approach starting with junior surgical trainees and progressing through to consultant level, as per the levels below. Level 1: attend courses/workshops. Level 2: master camera work. Level 3: contralateral assisting. Level 4: intermediate-level trainee; starts operating with the trainer scrubbed, where the trainer is an additional member of the scrub team and stands on the same side as the trainee (not replacing any assistant). Level 5: advanced-level trainee; gradual progression from level 4, with the trainer unscrubbed but standing next to the monitor throughout the procedure. Level 6: trainer in theatre but out of sight of the trainee, with little interference. Level 7: progression to trainer; once proficiency is achieved at level 5/6, the trainee is trained to become a trainer for junior and intermediate-level trainees. Within each level, the complexity of the procedure increases as the trainee progresses. Junior trainees (years 1–3 of surgical training) are taken through levels 1–3, intermediate trainees (middle years of training) through level 4 or 5, and advanced trainees (last 2–3 years) up to level 7.
This way of training allows multiple members of the team to be trained simultaneously in every case. Each operating list is preceded by a team briefing in which the role of every member of the team is clearly identified, and is followed by individual and collective feedback. Conclusion: This training ladder has proved very successful through the years. Feedback from trainees at all stages has been consistently positive. Several trainees who have progressed to independent consultant practice, in the UK and abroad, are adopting this approach in their own practice.

Introduction: Despite the potential microsuturing capabilities of robotic surgery, most esophago-jejunostomies after robotic total gastrectomy are still performed extracorporeally or with mechanical staplers. This can increase the cost of the procedure and the risk related to improper functioning of the stapler. Methods: We reviewed our prospectively maintained database, analyzing patients from April 2015 to September 2017 who underwent robotic total gastrectomy with hand-sewn esophago-jejunostomy for gastric cancer. Results: A total of 18 patients were included in the study. The mean estimated blood loss was 140 ml (60–257). The overall operative time was 365 min (277–421). Length of hospital stay was 6 days (range 5–13). No conversion was necessary, and no anastomotic leakage occurred. The morbidity rate was 2/18 (11.1%) and included a subhepatic abscess and a wound infection through the Pfannenstiel incision. An R0 resection was achieved in all cases. The mean lymph node yield was 32 (14–39). The 1-year disease-free survival was 74%, and the 1-year overall survival 82.3%. Robotic-assisted hand-sewn esophago-jejunostomy is a safe and not time-consuming technique. It avoids the complications related to stapler firing and offers cosmetic benefit to the patient in terms of the extraction site.
Introduction: Colorectal endoscopic submucosal dissection (ESD) is increasingly practiced for the treatment of early colorectal neoplasia. However, colorectal ESD is difficult to perform due to lack of retraction and instability, especially over the hepatic flexure. The DiLumen™ EIP is an external flexible sheath introduced during colonoscopy to stabilize the environment for ESD. This video demonstrates the use of the DiLumen™ EIP for colonic ESD in the ascending colon. Method and results: A 60-year-old woman underwent screening colonoscopy, which found a 20-mm laterally spreading tumor (LST), type 0-IIa, in the ascending colon distal to the ileocecal valve. Under general anesthesia, the patient underwent colonic ESD using the DiLumen™ EIP. Due to significant looping, the DiLumen™ device was introduced using double-balloon enteroscopy techniques. After identification of the LST, the front balloon was deployed proximal to the lesion, and both balloons were insufflated to create a stable environment. The ESD procedure started after submucosal injection of normal saline mixed with indigo carmine, epinephrine, and hyaluronate. Mucosal incision was performed on the anal side of the lesion, and after adequate submucosal dissection, clips were applied to attach the mucosal flap to the sleeve of the proximal balloon and achieve retraction. The submucosa was adequately exposed for dissection using the DualKnife J. This enhanced the submucosal dissection, especially in one area with significant fibrosis. After the procedure, complete closure of the mucosal defect was performed with clips, assisted by the front balloon. Pathology confirmed intramucosal adenocarcinoma with clear resection margins. Discussion: The DiLumen™ EIP device stabilized the environment within the colon with its double balloon and provided adequate retraction for the performance of colorectal ESD.
Surgery, Kobe City Medical Center General Hospital, Kobe, Japan. Background: Robotic surgery has spread widely all over the world, but robotic gastrectomy is not yet common and is difficult because of the complex anatomy and wide-ranging operative field. In addition, it had been performed in only a few high-volume centers because of the limitations of national health insurance in Japan, meaning the expenses were not covered by insurance. The situation changed this April, so we started robotic gastrectomy with the aim of reducing complications relative to laparoscopic gastrectomy. We report our results and present the methods in detail using the da Vinci Si Surgical System. Methods: We place five trocars: one umbilical endoscopy port, and four ports placed in a reverse trapezoid, almost fan-shaped. Using arm number 3, the organ can be lifted so that sharp lymphadenectomy can be performed, mostly with the scissors in arm number 1, while countertraction is applied by arm number 2. To achieve clear and bloodless lymph node dissection while maintaining oncological safety, we believe not only the ultrasonic coagulating shears but also the electrocautery of the scissors is essential in robotic surgery. Fewer postoperative complications, such as pancreatic fistula or pancreatitis, might derive from robotic surgery because we can avoid pressing on the pancreas during suprapancreatic lymph node dissection. The Billroth I reconstruction can be performed using the da Vinci EndoWrist® stapler in a stable, rigid surgical field without needing the help of a surgical assistant. Results: From October 2017 to December 2018, 25 patients with gastric cancer underwent robotic gastrectomy, including 3 total gastrectomies. There was no conversion to open surgery and no conversion to other procedures due to intraoperative complications, and the overall operation time has been gradually decreasing from the 14th case.
We are still on the learning curve, shortening operation time, but robotic gastrectomy is no less safe and adequate than laparoscopic surgery. In our video we show our robotic procedures, including lymphadenectomy around the subpyloric and suprapancreatic areas, and reconstruction, with several important points.

Purpose: This report describes the benefits and drawbacks of a novel articulating device (ArtiSential®), which has multi-degree wrist freedom like the da Vinci EndoWrist®, in performing complete single-port D2 lymph node dissection (LND) in single-incision distal gastrectomy (SIDG). Methods: The ArtiSential® was used in performing SIDG with D2 LND for patients with advanced gastric cancer. All operations were performed by a single surgeon using a three-dimensional camera and a passive scope holder in place of a scopist. The ArtiSential® was used mainly in the 4sb and suprapancreatic LND, an area relatively far from the single port. In certain cases when the pancreas needed to be pushed down, such as in obese male patients, an intra-abdominal organ retractor was used to lift the tissue and the ArtiSential® to push the pancreas. Operative results and short-term outcomes were analyzed. Results: Twelve patients underwent the procedure without any intraoperative events, conversion to conventional laparoscopy, or surgery-related complications, including postoperative pancreatic fistula. All patients underwent single-port D2 LND with complete exposure of the portal and splenic veins. Mean operation time was 181.9 ± 42.5 min, and the mean number of retrieved lymph nodes was 61.8 ± 11.4. The ArtiSential® was found to be useful in grasping the tissues behind the pancreas and the major arteries throughout most of the LND. The articulating motion also allowed the narrow single-port field of view to remain clear, without the instrument body obstructing the camera.
Conclusion: The use of the ArtiSential® in SIDG appears feasible and reproducible, and is mandatory in performing a complete D2 LND in SIDG.

The video shows a case of a laterally spreading tumour of the rectum with preoperative benign histology: Paris classification 0-Is G (granular type), uT0N0 on EUS, Kudo type IV, NICE type 2. The neoplasm measured 6 × 7 cm and extended from 6 to 12 cm from the anal verge, mainly located on the posterior wall. According to our local policy, the indication was a transanal full-thickness excision. This was performed with the Medrobotics Flex® Robotic System, used here for the first time outside the United States. The system uses an articulated multi-linked scope that can be steered along non-linear, circuitous paths in a way that is not possible with traditional straight scopes. The maneuverability of the scope derives from its numerous mechanical linkages with concentric mechanisms. This enables surgeons to perform minimally invasive procedures in places that were previously difficult, or impossible, to reach. With the Flex® Robotic System, surgeons can operate through a single access site and direct the scope to the surgical target. Once positioned, the scope can become rigid, forming a stable surgical platform through which the surgeon can pass flexible surgical instruments. The system includes on-board 3D HD visualization. The Flex® Robotic System contains two working channels that accept a number of different surgical and interventional instruments, including monopolar and bipolar electrodes, scissors, and graspers for tissue manipulation. The video shows the introduction of the dedicated rectoscope, the connection of the flexible robot, and the operation of the device to perform a full-thickness excision, including suturing of the rectal defect with two running V-Loc™ 3/0 sutures. While illustrating the technique, the authors comment on the pros and cons of the device.
background: hepatobiliary procedures using a minimally invasive approach are demanding, especially major hepatectomies. the use of the da vinci surgical system allows some of the kinematic limitations of direct manual laparoscopy to be overcome while maintaining the potential advantages of a minimally invasive approach. we herein present a case of left hepatectomy and local lymphadenectomy for hepatocellular carcinoma, carried out with the da vinci xi. methodology: a 72-year-old man with a long-standing chronic hbv infection and ct and mri findings of a 4-cm solid neoplasm of the left hepatic lobe, plus gallbladder stones, was operated on with the da vinci xi platform. the patient was placed in a supine position with 15° anti-trendelenburg inclination. the trocars were positioned according to the intuitive indications for upper-quadrant surgery. results: the procedure was successfully completed in 360 min. at first, an intraoperative ultrasound scan with the use of tile-pro technology was done to determine the tumor extension. the hepatic parenchyma transection and the local lymphadenectomy were performed with monopolar scissors and bipolar graspers. the left hepatic vein section was performed with an endoscopic vascular stapler. there were no surgical complications or need for conversion to laparoscopy or laparotomy. the post-operative course was uneventful and the patient was discharged 5 days after surgery. conclusion: the da vinci xi can facilitate some technically demanding procedures and ultimately widen the range of application of minimally invasive surgery, such as hepatic surgery. 
besides the well-known advantages provided by robotic surgery in 3d imaging, increased range of motion and augmented surgical dexterity, one of the most interesting and innovative features of robotic technology is the digitalization of the operative view; furthermore, the tile-pro multi-input display allows the surgeon a 3d view of the operative field along with the ultrasound exam, for a precise understanding of anatomy, vascularity and tumor location. during the last few years, robotic surgery, as the latest innovation in minimally invasive procedures, has taken its position in this particular field, with the benefit of overcoming the limitations of conventional laparoscopy. our aim is to demonstrate the advantages of robotic surgery in hepatectomy procedures, on the occasion of a robotic hepatectomy performed by our team. methods: we present video fragments of a robotic left lateral hepatectomy in an elderly female patient with a symptomatic giant haemangioma of the left hepatic lobe. we emphasize the technical aspects and the advantages that the surgeon gains in applying robotic techniques in such procedures. results: the procedure was completed with minimal blood loss and the patient had an uncomplicated post-operative course, with discharge on the third postoperative day, minimal need for analgesics and full recovery. conclusions: the excellent three-dimensional, high-quality visualization that the robotic system offers, combined with the flexibility and accuracy of the robotic instruments (especially in suturing), provides the surgeon an important aid in avoiding serious complications such as intraoperative bleeding and post-operative bile leaks. overcoming the limitations of conventional laparoscopy is far more beneficial and promising for the evolution and the future of minimally invasive liver surgery. 
aims: the new da vinci xi surgical cart allows multi-quadrant and complex surgical interventions in a minimally invasive fashion. we present a case of robotic appleby left pancreatectomy using this platform and its specific operating bed. methods: a 73-year-old woman with a ct finding of a 30-mm hypo-vascular neoplasm of the pancreatic body underwent surgery with the new da vinci xi, using a four-arm upper-quadrant trocar disposition. results: the procedure was successfully completed in 285 min. the pancreatic body was mobilized in order to expose the portal-mesenteric axis. the gland was transected using a robotic endo-stapler, as was the splenic vein. after evaluating the patency of the collateral circulation with intra-operative ultrasound, the common hepatic artery and the celiac artery were transected. we then increased the right tilt of the position and the neoplasm was detached from the gastric body by a tangential gastric resection using the robotic endo-stapler. finally, the operation was completed with the transection of the posterior attachments of the spleen and the pancreatic tail. no conversion or intra-operative complications were recorded. the post-operative course was uneventful and the patient was discharged 6 days after surgery. the da vinci xi with its specific tools helps in performing challenging procedures such as the appleby operation for locally advanced pancreatic cancer. in our experience, the robotic endo-stapler permits the operating surgeon to directly control the transection phase, whereas the specific operating bed allows minimally invasive multi-quadrant surgery to be performed and a better exposure of the operating field to be obtained. results: the whipple procedure was successfully completed in 570 min. thanks to the dvtm, the patient's position was changed during the intervention to improve the exposure, with the instruments left inside the abdomen and without undocking the robot. 
the dissection of the pancreatic head from the portal vein and the section of the retroportal lamina were performed with the use of the endowrist vessel sealer device. a personal modified end-to-side pancreatojejunostomy was carried out, with 5/0 prolene and gore-tex double-layer sutures. no intra-operative complications occurred and no conversions to laparoscopy or laparotomy were required. the postoperative course was uneventful. conclusions: the use of the new fully wristed vessel sealer extend makes difficult maneuvers easier, such as the fine dissection of the pancreatic head from the portal vein and the section of the retroportal lamina, enabling an optimized approach for the sealing and cutting of vessels and tissue bundles. moreover, the dvtm allows the patient to be moved without undocking the system or removing instruments from the abdomen, enhancing the surgical workflow. background: necrotizing pancreatitis is a devastating illness which can develop in up to 20% of patients who suffer from pancreatitis. it carries great morbidity, with an associated mortality rate between 8 and 39%. many of these patients require drainage of fluid collections to treat sequelae related to pain, per-os tolerance, and source control of sepsis if infected. the step-up approach to treatment of this disease has trended towards minimally invasive techniques, considering the morbidity of open debridement. as such, many centers have implemented the use of transgastric debridement via endoscopic cystogastrostomy. this technique, while effective in draining fluid and particulate necrotic tissue, has difficulty with resection of large necrotic tissue due to instrument and anatomic limitations. current endoscopic accessories designed for polypectomy or foreign-body extraction, for example, are not optimal for performing necrosectomy. to overcome this obstacle, additional access sites can be utilized to assist debridement. 
we describe the first laparoscopic-assisted transgastric endoscopic necrosectomy through a percutaneous gastrostomy, in a 59-year-old male with infected pancreatic necrosis secondary to biliary pancreatitis. aim: to investigate the feasibility of utilizing gastrostomy access to assist in debridement during endoscopic necrosectomy. methods: the patient had previously undergone an open necrosectomy and gastrostomy tube placement for acute emphysematous pancreatitis. post-operatively, there was a persistent and enlarging 12 cm infected walled-off necrosis (won). therefore, endoscopic cystogastrostomy was performed using a lumen-apposing metal stent. results: frank pus was evacuated. the initial endoscopic necrosectomy was technically challenging due to the large volume of solid necrotic tissue. repeat endoscopic debridement utilized a surgical laparoscopic grasper via the gastrostomy site to aid solid debris extraction (video). this allowed complete necrosectomy and resolution of the won. the patient did well and was subsequently discharged. conclusion: this is another emerging minimally invasive technique in the step-up approach for debridement and drainage of won. the use of the gastrostomy as a utility port for accessory instruments not only enhanced the technical aspects of the procedure but increased its efficacy as well. further experience is needed to validate the utility and reproducibility of this technique. objective: the presentation of the minimally invasive surgical approach for pancreatic necrosectomy guided by video-retroperitoneoscopy, or var (video-assisted retroperitoneoscopy), established in our center as one of the options of the step-up approach for the treatment of acute necrotizing pancreatitis (anp). methods: the patient should be placed on the operating table in decubitus, with right lateral inclination, at 20-30° from the horizontal surface. 
the pancreatic compartment is approached using, as a guide, the drainage catheter previously placed under radiological control (ultrasound or ct), which allows safe access to the cavity. an incision of 3-5 cm is made around the previously placed catheter, crossing the subcutaneous cellular tissue and muscular fasciae and splitting the musculature. blunt dissection continues until a loss of resistance is appreciated, which generally coincides with the outflow of necrotic or purulent material. once the retroperitoneal compartment is accessed, a 15-mm trocar is placed and a pneumoretroperitoneum is created. the 15-mm trocar allows the joint use of a 5-mm, 0° optic and the surgical instruments that allow debridement and cleaning. aspiration and hydrodissection of the necrotic material, and extraction of the solid component of the necrosis, are then performed. once the collection is drained and the necrotic material removed, a wash-and-drain system is placed, such as a 3-way foley-type catheter. conclusions: in conclusion, var is an alternative surgical technique, valid and reproducible in the treatment of anp, which offers results comparable, and in some series even superior, to those of open surgery, with satisfactory results in terms of morbidity and postoperative mortality. aim: lung subsegmentectomy is suitable for small, deep, non-palpable lung nodules. since it is difficult to intraoperatively detect the arteries, veins and bronchi of the subsegment, as well as the intersubsegmental borders, complete video-assisted thoracic surgery (vats) for lung subsegmentectomy is challenging. we use preoperative three-dimensional ct to detect the arteries, veins and bronchi of the subsegment before conducting complete vats subsegmentectomy, and perform intraoperative bronchoscopy to detect the bronchi and intersubsegmental borders. i would like to describe our experience of complete vats combined subsegmentectomy for a non-palpable lung nodule. 
methods and results: the patient was a 67-year-old woman. during health screening, a small ground-glass opacity was observed in her right lung on chest ct. the nodule was 15 mm in diameter and was located in s2b (the horizontal subsegment of the posterior segment) near s3 (the anterior segment). we preoperatively diagnosed the lesion as well-differentiated adenocarcinoma, and planned combined subsegmentectomy of s2b and s3a (the lateral subsegment of the anterior segment) of the right upper pulmonary lobe. before the operation, the locations of the vessels were confirmed by three-dimensional ct angiography. video-assisted thoracoscopic surgery was performed using four ports: two 1-cm ports in the 8th intercostal space, in the posterior axillary line and in the angulus inferior scapulae line, for the operator; a 4-cm port in the 4th intercostal space in the mid-axillary line for the assistant; and a 1-cm port for the camera in the 6th intercostal space in the mid-axillary line. the 4-cm port was also used for removal of the resected specimen. intraoperative bronchoscopy was used for detecting the subsegmental bronchi. she was diagnosed with primary lung cancer (adenocarcinoma in situ, non-mucinous) postoperatively. the tumor was pathologically staged as tisn0m0. no tumor recurrence has been noted over twenty-two months of follow-up. conclusions: the combination of preoperative three-dimensional ct angiography, intraoperative bronchoscopy and complete video-assisted thoracoscopic surgery can be used for performing combined lung subsegmentectomy. aims: minimally invasive surgery is increasingly widespread for the diagnosis and treatment of abdominal pathology. laparoscopy is a diagnostic resource for those cases in which a mass biopsy is not approachable through image-guided puncture, and is often therapeutic in the same act. it avoids the morbidity and mortality associated with laparotomy, favoring the early treatment of malignant processes. 
methods: we present the case of a 71-year-old male who was incidentally diagnosed with an oval-shaped pelvic mass in the right lateral wall of the pelvis, adjacent to the vascular bundle of the right external iliac at its origin (5 × 3 × 5 cm), without signs of infiltration of surrounding structures. no other pathological findings were found on abdominal computerized tomography and magnetic resonance imaging. due to its localization, it was not accessible to percutaneous biopsy. the first diagnostic impression was a benign tumor of the nerve sheath (schwannoma), without being able to rule out other diagnostic possibilities. to provide a definitive diagnosis, the patient underwent elective laparoscopic resection of the tumor. the surgical procedure was performed using one 12-mm trocar at the umbilicus for the optical system and two 5-mm operative trocars in the hypogastrium and left iliac fossa, respectively. a cleavage plane between the tumor and the right iliac vessels was found. exeresis of the mass was achieved, and it was extracted using an endo-bag® through the umbilical port site. a drain was placed in the surgical bed. results: the patient had a short, uneventful post-operative course, being discharged on postoperative day 1. pathological examination revealed a lymph node with metastasis of poorly differentiated carcinoma, with suspected urothelial lineage. cystoscopy was performed, with the finding of a 1-centimeter lesion on the right ureteral orifice with calcifications on the surface. biopsies were taken, confirming the bladder origin of the tumor. conclusions: laparoscopy, both diagnostic and therapeutic, is useful for pelvic masses because of the direct vision into this narrow anatomical space, especially in obese patients, providing a detailed view that makes it easier to isolate and spare the anatomical structures surrounding the tumor, minimizing the risk of tumor rupture and bleeding. 
surg endosc (2019) aim: indocyanine green (icg)-enhanced fluorescence was initially introduced in laparoscopic surgery to provide detailed anatomical information during laparoscopic cholecystectomy and to evaluate vascular supply, to guarantee correct anastomotic perfusion and reduce the risk of anastomotic leak. the uses of icg are increasing, especially in hepatic and oncological surgery, to identify the sentinel lymph node and for lymphatic mapping. we propose the use of icg imaging during complex laparoscopic colorectal resection in cases presenting ureteral obstruction, to prevent iatrogenic ureteral injury. methods: we present the case of a 42-year-old female previously diagnosed with pelvic endometriosis, with severe pain and symptoms related to episodes of pseudo-occlusion. a colonoscopy was performed, finding a sigmoid cancer in an area of endometriosis in a narrow colon, with difficulties in performing a complete colonoscopy that could be related to the process of pseudo-occlusion. the biopsy was reported as an adenocarcinoma. the ct scan showed a dilated left ureter in an area next to the sigmoid colon. we proposed a preoperative strategy with bilateral double-j stent insertion, finding a ureteral obstruction caused by the endometriosis. icg was injected through the ureteral catheter, guiding us during the surgery to avoid an iatrogenic ureteral injury. results: a laparoscopic left colectomy was performed. the icg allowed us to follow the ureter during the surgery, dissecting the colon properly from the area attached to the ureter. the prestenotic area of the ureter was markedly dilated, up to two centimeters, allowing the icg to distinguish it from the surrounding anatomic structures and guaranteeing that there was no spill of icg out of the ureter, avoiding a postoperative leak of urine. 
conclusions: when tumors, or other entities like endometriosis, produce a ureteral occlusion, icg can be injected through a double-j stent, allowing us to identify the ureter and avoid an injury. icg fluorescence imaging is a safe, cheap, and effective tool to increase visualization during surgery, offering additional information on the anatomy in colorectal surgery. in this video we show three cases of robotic treatment of splenic artery aneurysm and the evolution of the technology that we relied on for preoperative planning and intraoperative navigation. our preoperative evaluation evolved from a three-dimensional virtual reconstruction with augmented reality to patient-specific anatomical 3d-printed models, initially made of rigid materials and afterwards of malleable materials, in order to reproduce hollow anatomical structures such as vessels, making it possible to simulate the planned surgical procedure. the choice of a robotic approach, in selected cases, allowed the continuity of the splenic artery to be restored after the exclusion or excision of the aneurysm, in order to preserve the spleen. aims: tumor-induced osteomalacia (tio) is a rare paraneoplastic syndrome in which patients present bone pain, fractures and muscle weakness caused by fibroblast growth factor 23 (fgf-23), a phosphate- and vitamin d-regulating hormone. in tio, fgf-23 is secreted by mesenchymal tumors that are usually benign but small and difficult to locate. when medical treatment is unsuccessful, surgical treatment is indicated and conclusive. this video shows our technique for tio surgical treatment guided by indocyanine green (icg) fluorescent angiography. 
methods: the patient is an 81-year-old woman with tio confirmed by blood sampling of fgf-23, gallium-68 pet/ct and abdominal ct, which identified a highly vascularized 8-mm nodule in the mesentery along the ileo-colic vascular axis. the patient was scheduled for a diagnostic laparoscopy with tio removal. the patient was placed in a supine position. three trocars were introduced in the left quadrants. after identifying the last ileal loop and following the ileo-colic vessels, a small mesenteric bulge was found. icg fluorescent angiography confirmed the localization of the hypervascularized nodule and helped to define its edges. the nodule was removed with monopolar energy and hemostasis was optimized with bipolar energy. the specimen was extracted in an endobag through one of the trocar accesses. results: the postoperative course was uneventful and the patient was discharged on postoperative day 4. histopathological examination showed an extra-adrenal paraganglioma. conclusion: tio lesions are often difficult to locate for surgical removal. icg fluorescent angiography facilitates tio localization and removal. the minimally invasive technique decreases perioperative morbidity and mortality. laparoscopic removal guided by icg angiography should be considered when a tio needs to be removed and is difficult to locate. aim: this video shows our technique for performing laparoscopic resection of a voluminous left paraaortic paraganglioma. methods: the patient is a 16-year-old man with a recent medical history of fever, lumbar pain and haematuria. an abdominal ct scan, performed during admission at the emergency department, revealed an 8 × 7 cm left paraaortic retroperitoneal mass with a pseudo-aneurysm. after a procedure of angiographic embolization (with spermatic artery sparing), the patient was scheduled for laparoscopic resection of the paraaortic tumor. the patient was placed in the right flank position. 
three trocars (10 mm) in the abdominal midline and one trocar in the left hypochondrium were placed. at initial examination of the abdominal cavity, a voluminous left paraaortic mass arising in the context of the left mesocolon was found, displacing the kidney vessels posteriorly. the parietal peritoneum was divided and the paraaortic lesion was dissected on the aortic plane, from medial to lateral and from down to up, preserving the inferior mesenteric vessels; the mobilization was carried on to the splenic vein. the vessels supplying the mass, arising directly from the aorta, were isolated and secured with vascular clips. on the inferior margin of the lesion, a large vessel, probably connected with the previously embolized pseudoaneurysm, was divided with a vascular linear stapler. the mobilization was completed through a difficult dissection from the aortic plane and the posterior mesocolic surface. colonic perfusion was verified with fluorescence angiography. the specimen was extracted in an endobag. a drain was left in the pelvis. the patient was discharged on postoperative day 4. histopathological examination showed a morphological and immunohistochemical pattern of benign paraganglioma. conclusion: laparoscopic resection of paraaortic paragangliomas is feasible by skilled surgeons. the minimally invasive technique decreases perioperative morbidity and mortality. careful preoperative planning and the surgeon's experience with vascular dissection and visceral mobilization are mandatory for a good outcome. aims: the posterior retroperitoneal endoscopic approach has been considered for many years as a very complex and unsafe surgical technique, often attributed to the difficult location and visualization of retroperitoneal structures. in addition, surgeons were forced to work in a small and easily altered space due to discontinuous flow, with constant changes of the retroperitoneal view. 
lately this approach is re-emerging thanks to technological advances, mainly better laparoscopic cameras and high-definition screens, as well as continuous-flow co2 insufflators, maintaining a stable, smoke-free cavity uninterruptedly. methods: this video shows the management of a potentially serious complication and the reproducibility of the technique through the retroperitoneal approach. results: operating with a high pneumoretroperitoneal pressure allows the hemorrhage to be contained and the best surgical option for repairing the injury to be assessed with relative calm and safety, although the repair is laborious due to the reduced workspace. conclusions: the posterior or retroperitoneal approach is feasible, safe and fast, although the possibility of injuring the vena cava in right adrenalectomy remains one of the most serious and feared complications. as shown in the video, the posterior retroperitoneal endoscopic approach allows a vascular injury to be repaired correctly and safely. methods: four patients underwent adrenalectomy, two of them with right adrenal pathology and two with left. minimally invasive, endoscopic access is shown in all of them. results: in the first two surgeries, the right gland is shown. initially, the transabdominal approach, which requires mobilization and separation of the liver to access the retroperitoneal space before proceeding to adrenal extirpation. then, the right retroperitoneal approach is observed, with meticulous sealing of the adrenal vein prior to completing the dissection of the gland, despite the small cavity created by co2. in the second part, both left adrenal approaches are shown. the transabdominal route requires mobilization of the left colon and spleen to access a narrow space above the upper edge of the pancreas to locate the adrenal gland. this is very different in the posterior adrenal approach. 
conclusions: the posterior or retroperitoneal approach is feasible and safe, allowing access to the adrenal glands, located in the retroperitoneal space, without crossing the peritoneal cavity and incurring its disadvantages. colon and small intestine mobilization is not necessary, with a lower rate of intestinal lesions and postoperative ileus. in the same way, liver or spleen mobilization is avoided. aims: when performing a laparoscopic adrenalectomy, especially in the setting of pheochromocytoma, one of the most important steps is to gain control of the adrenal vein early in the procedure, before great manipulation of the adrenal gland. we present the case of a 78-year-old female with episodic headaches, tachycardia and severe uncontrolled hypertension, found to have elevated plasma and urine metanephrines, with a ct scan localizing a 1.2-cm right-sided adrenal nodule. the patient was prepared preoperatively with phenoxybenzamine until mildly orthostatic with dry mucous membranes, and was taken for laparoscopic right adrenalectomy. methods: after positioning our patient in left lateral decubitus, ports were placed inferior to the costal margin. the right lobe of the liver was mobilized and retracted cephalad, and the ivc was exposed. careful and meticulous dissection was carried out along the ivc; however, no main adrenal vein was encountered. the adrenal gland was then dissected circumferentially and removed in an endoscopic retrieval bag. there was no difficulty with hemostasis and the patient was deemed to be hemostatic prior to withdrawal of the ports and extubation. results: our patient had no issues with hemodynamic stability and her blood pressure was within normal ranges during and following the case. her hemoglobin was stable postoperatively, at 11.1 immediately post-op and 9.9 on discharge; her pre-op hemoglobin was 11.7. conclusions: our video demonstrates a right adrenal gland that was congenitally missing a main adrenal vein. 
it is very possible that small venous branches were taken with dissection; however, we believe this report is important to note in the literature for surgeons performing adrenalectomy. aims: adrenal cysts are a rare entity. typically they present with abdominal pain or a palpable mass, but nowadays cystic lesions of the adrenal gland are more often discovered incidentally on radiologic studies. adrenal cysts have an extensive differential diagnosis, which makes definitive diagnosis difficult and complicates later management. the management of an adrenal cyst can be summarized in three fundamental pillars: ruling out functional status of the cyst, evaluation of eventual malignancy by imaging, and avoidance of possible complications (hemorrhage, infection), especially in large cysts. methods: clinical case: a 45-year-old male patient, with no relevant history, studied for nonspecific pain in the right hypochondrium without other accompanying symptoms. an abdominal ultrasound was performed, showing a cystic lesion in the right hypochondrium without being able to identify its origin. complementary studies of interest are shown (ct); the biochemical study ruled out functionality of the lesion, and serology for hydatidosis was negative. the minimally invasive approach is the gold standard in the surgical treatment of adrenal pathology, so a laparoscopic approach was proposed for this patient. aims: endometriosis is a high-incidence disease (approximately 10% of women) with a large impact on women's quality of life and fertility. surgical treatment of endometriosis nodules is necessary whenever there is evidence of active disease. the aim of this video is to present a minimally invasive technique for the resection of an endometriosis nodule from the abdominal wall. 
methods: a 41-year-old woman, with a past history of endometriosis and a c-section, presented at the office with a palpable nodule at the left lateral border of the rectus abdominis, close to the umbilical scar. she had complaints of exuberant catamenial pain, and magnetic resonance imaging (mri) showed a 33-mm nodule compatible with an endometriosis deposit. this technique uses 3 trocars (1 × 10 mm + 2 × 5 mm) placed at the pfannenstiel scar, step-by-step as follows: (i) dissection of the pre-aponeurotic plane and isolation of the lesion; (ii) excision of the lesion and its removal in a sac; (iii) closure of the aponeurotic defect with braided suture. results: the post-operative period was uneventful and the patient was discharged home on post-operative day one. the aesthetic result was excellent and the patient was asymptomatic one month after the procedure. conclusion: endometriosis of the abdominal wall is related to previous c-section, is a rare event (incidence 0.1-0.4%) and is usually located in the subcutaneous fat underlying the scar. the presence of nodules in the depth of the muscle is much more uncommon, and particularly in this clinical case the nodule was located 8 cm cephalad from the previous pfannenstiel scar. this technique seems easy and reproducible in the authors' opinion. aims: general surgeons often face gynecological pathological findings, either along with other abdominal pathology or as primary cases that need laparoscopic expertise. with this particular presentation, our goal is to demonstrate the essential laparoscopic skills and the basic operative strategy that a general surgeon should be familiar with in order to manage such cases. the presentation is made on the occasion of a woman with multiple uterine fibromatosis of the pelvis, who was treated by our team. 
methods: we present video fragments of the laparoscopic excisional procedure for multiple uterine fibromyomatosis of the pelvis, highlighting the proper strategy in order to conclude the operation effectively and uneventfully in a minimally invasive fashion. results: patients with multiple, large or otherwise complex forms of uterine or pelvic fibromas can effectively be treated with a minimally invasive approach, with minimal blood loss, very fast recovery and minimal postoperative pain and complications. 2% of pregnancies require emergency surgery for a non-obstetric indication, including acute appendicitis, cholecystitis, adnexal torsion, choledocholithiasis, hernias, intestinal obstructions, oncologic pathology or other less frequent indications. the laparoscopic approach is the preferred surgical option for the pathologies presented above. aims: to present the technical particularities and to analyze the outcomes of emergency operations in pregnant women operated on in our hospital. method: a retrospective study including all the pregnant women operated on in our hospital between 2015 and 2018 was performed. the preoperative workup and the surgical indication were discussed by a multidisciplinary medical team. the anesthetic and obstetrical risks and their management were evaluated and specifically planned for each patient. the intraoperative and post-operative outcomes were recorded. results: 12 patients with gestational ages between 16 weeks and 32 weeks who underwent emergency laparoscopic procedures were included in the study. out of the 12 cases, we performed 5 appendectomies, 3 cholecystectomies and 4 adnexal torsion procedures. with a 75-min mean operating time, we had no major intraoperative complications; the technical challenges are presented and discussed. the hospital stay was 1.5 days (1-3 days). no major complications were associated with the laparoscopic approach in this cohort. one pre-term labour was encountered post-operatively in a patient at 31 weeks' gestation. 
conclusion: laparoscopic surgery can be the first option for pregnant women with non-obstetrical surgical emergencies; challenges in diagnosis, management and surgical technique for the multidisciplinary team are expected. the objective of this presentation is to demonstrate, step by step, the technique to the oncologic surgeon and gynaecologist in training, including some tips and pitfalls. this is a laparoscopic transperitoneal approach in a woman with advanced cervical cancer (figo ib2) who will be treated with exclusive radio-chemotherapy. the purpose of the laparoscopic lumbo-aortic lymph node staging is to define the irradiation field. in this indication, the false-negative rate of pet-ct ranges from 12 to 22% (depending on the existence of pelvic fixation or not). the limits of this lymphadenectomy are: both ureters as the lateral limits of the dissection, the iliac bifurcation as the caudal limit and the renal veins as the cranial one. since the tumour is cervical and not ovarian, the ovarian veins are not resected. in the pathology report, 25 lymph nodes were examined, all free of cancer spread. the patient had radio-chemotherapy with restriction of the irradiation field to the pelvis. lymphocele is a frequent complication that only sometimes needs treatment, ranging from dietary changes to percutaneous drainage. if conversion to laparotomy for bleeding is required, this technique loses its benefit, but this is a rare complication. this technique is feasible and safe but requires advanced laparoscopic skills. objectives: although extremely rare, isolated splenic metastases are being increasingly diagnosed due to improvements in imaging, survival times, and surveillance of oncologic patients. this video alerts to the growing diagnostic dilemma with primary lesions of the spleen, particularly in patients with a history of cancer, and reviews laparoscopic splenectomy step by step. 
Case report: A 58-year-old male patient was diagnosed with rectal cancer (G2 adenocarcinoma at 4 cm from the anal verge) after a colonoscopy for rectal bleeding. Thoracic and abdominal CT scan and pelvic MRI showed a cT3N1 lesion without distant metastases, except for a 15-mm suspicious splenic lesion. CEA: 12.2 ng/ml. After neoadjuvant therapy, a complete response was verified at the 8th week post-CRT with a stable splenic lesion, and a 'watch-and-wait' program was initiated, with no evidence of disease at the 3rd month. PET-CT scan did not show active metabolic features, despite an increase in the size of the splenic lesion. At MDT, elective laparoscopic splenectomy was proposed and subsequently performed uneventfully. With the patient in semi-right lateral tilt, we approached the spleen inferiorly by dividing the splenocolic ligament. We then continued upwards, dividing the gastrosplenic ligament and exposing the splenic hilum, which was carefully dissected, clipped, and divided. Finally, the splenorenal ligament was divided and the spleen was extracted within an endobag through a small Pfannenstiel incision. The pathology report revealed a splenic lymphangioma. The patient is currently under 'watch-and-wait' protocol surveillance, with no signs of regrowth or relapse after 1 year and 3 months of follow-up. Conclusion: One out of five colorectal carcinomas is metastatic at presentation. Isolated metastases to sites other than the liver, lung, or axial skeleton are extremely rare but can be found in the spleen. Although rare secondary splenic involvement is usually associated with breast, lung, melanoma, and gynecologic malignancies, if we consider solitary splenic metastases, colorectal and ovarian carcinomas are important sources. Moreover, imaging, including percutaneous biopsy, is frequently insufficient to clarify the nature of splenic lesions.
For all these reasons, the decision-making process regarding this issue can be a true challenge and will probably end with laparoscopic splenectomy. Therefore, surgeons must be familiar with a standardized technique. Sarcoidosis is a multisystem disease of unknown etiology characterized by the formation of non-caseating granulomas. Sarcoidosis should be considered in the differential diagnosis of lymphoid disease. Indications for diagnostic splenectomy include suspicion of a neoplastic process. The less invasive laparoscopic approach is the gold standard. Case report: A 64-year-old female, followed by hematology for cytopenias, was referred to a general surgery department to complete a workup to rule out lymphoid neoplasia. Biopsies of bone marrow and adenopathies were negative for a lymphoid process. CT showed multiple solid (8-10 mm) lesions in the spleen; the thorax showed no pathological changes. Laparoscopic splenectomy was performed: access with an optical trocar in the mammary line; three 5-mm trocars placed in a triangle under vision after pneumoperitoneum; section with LigaSure of the gastrosplenic ligament with the short vessels and of the phrenosplenic ligament; identification and preservation of the pancreatic tail; section of the splenic vessels (branches) at hilar level with LigaSure; release of the lower pole; release of the posterior aspect from Gerota's fascia and the diaphragm; extraction in a bag, without fragmentation, through an assistance incision; review of hemostasis and extraction of trocars under direct vision. Intraoperative findings: spleen with normal external appearance, not enlarged. Postoperative evolution was satisfactory: the first hours passed without incident, analytical controls showed no anemization, and oral tolerance and mobilization began uneventfully. The histopathology report showed granulomas formed by epithelioid histiocytes with multinucleated giant cells of the foreign-body type; in some perisinusoidal granulomas the giant cells contained asteroid bodies.
The material was reviewed with additional special studies. The conclusion was a non-necrotizing epithelioid granulomatosis, suggestive of sarcoidosis. The procedure lasted 180 min. Hospital discharge took place on the next postoperative day and no complication was registered. Conclusion: Splenectomy can be performed in the classic open fashion, but at present the less invasive laparoscopic approach is the gold standard. Indications for splenectomy include splenic tumours of unknown origin, suspicion of a neoplastic process, and splenomegaly. Sarcoidosis should be considered in the differential diagnosis of lymphoid disease. Postoperative pathological examination confirms the diagnosis. Week-day surgery, University Sapienza, Ospedale Sant'Andrea, Rome, Italy. Aims: We describe an interesting case of a female patient with a suspected large Echinococcus granulosus cyst of the spleen. Methods: A 42-year-old woman complained of abdominal pain and a sense of heaviness in the upper left abdominal quadrant. Computed tomography (CT) scan showed a 13-centimetre (cm) cyst of the spleen with wall thickening and contrast-enhancement uptake, suggestive of an Echinococcus granulosus cyst. The serological blood test assessment (antigen and antibody markers) for Echinococcus granulosus infection was negative. A laparoscopic procedure was planned. The patient was positioned on the right flank, and four trocars were inserted along the left subcostal region of the abdomen: one 12-millimetre (mm) trocar for the camera, one 10-mm trocar for the assistant, and two 5-mm trocars for instruments. A periumbilical minilaparotomy was performed for specimen extraction. Results: The postoperative course was uneventful. The patient was discharged on the third postoperative day. Histopathological examination showed a simple epithelial cyst of the spleen. Conclusions: Laparoscopy is safe and feasible in the case of a large splenic cyst of unclear nature.
Laparoscopy permits exploration of the abdominal cavity and assessment of the cyst characteristics when the radiological and blood test findings do not concur. Surg Endosc (2019). We describe laparoscopic splenectomy for a recurrent splenic cyst after laparoscopic marsupialization and partial resection of a splenic cyst. The patient was a 31-year-old woman with abdominal discomfort and a 24-cm palpable mass in the left upper and lower quadrants. Nine years earlier, in another country, she had undergone a laparoscopic operation for a splenic cyst. Abdominal computed tomography revealed a cystic lesion of the spleen with concomitant huge splenomegaly. Serology and oncological markers were negative. We performed laparoscopic splenectomy for the recurrent splenic cyst. The operation took 180 min. Histologic examination of the resected spleen revealed a chronic hematoma. The patient had no abdominal symptoms during 12 months of follow-up. Long-term postoperative follow-up with ultrasound or computed tomography is required after surgical treatment of a splenic cyst to exclude the possibility of recurrence after spleen-preserving surgery. Hand-assisted surgery is a recognized technique that combines the advantages of the laparoscopic approach with the tactile feedback of the laparotomic one. It has proved beneficial especially in the treatment of megaspleens due to lymphoma localization, thanks to safer handling of the splenic vessels, better control of major bleeding, and more effective detachment of the superior splenic pole from the diaphragmatic dome. Here we show a hand-assisted splenectomy for a megaspleen reaching the ipsilateral anterosuperior iliac spine due to lymphoproliferative disease, in which the hand, inserted through a right subcostal minilaparotomy, was very useful during the dissecting manoeuvres, the recognition and ligation of the splenic artery, and the isolation of the superior pole of the spleen from the gastric fundus and diaphragm.
In any case of a huge spleen, bagging the specimen is very difficult to perform in a purely laparoscopic way, not to mention the lack of an endobag of sufficient capacity; besides, a minilaparotomy would in any case be necessary for spleen extraction. The hand-assisted approach allows surgeons to overcome this considerable technical difficulty, reducing operative time with aesthetic and functional results similar to those of the laparoscopic approach. Aim: The evolution of technology and its application to minimally invasive surgery of the thyroid gland offers new surgical techniques, such as the transaxillary approach. This new procedure is still being implemented in our environment and has recently begun to be incorporated into our surgical practice. The objective of this case is to explain step by step how to carry out a right transaxillary endoscopic thyroidectomy and to emphasize the most relevant tips to take into account. Current indications and limitations of this technique will also be addressed. Methods: A 49-year-old woman was referred for evaluation of a right thyroid nodule without any associated symptomatology. Blood tests showed a normal thyroid profile. Cervical ultrasound identified a 3.5-cm single right nodule with well-defined edges and peripheral vascularization. No other nodules were identified. Fine-needle aspiration (FNA) of the nodule was reported as Bethesda III. After evaluation, a right transaxillary endoscopic thyroidectomy was performed. Results: Dissection begins in the subcutaneous plane above the pectoralis major muscle until identification of the sternocleidomastoid muscle. Dissection continues towards the prethyroid muscles in order to perform a lateral approach to the thyroid gland. Section of the upper pole allows better exposure of the recurrent laryngeal nerve (RLN), which is monitored intermittently. Identification and preservation of the parathyroid glands is the next step.
Surgery is completed with the section of the inferior pole of the thyroid along with the isthmus. The postoperative period was uneventful and the patient was discharged 24 h after surgery. Final pathology revealed a 3-cm nodule without malignancy. Conclusion: Surgical treatment of the thyroid gland by the transaxillary approach may be indicated in previously selected patients with benign pathology, offering the advantages of minimally invasive techniques (shorter recovery time, shorter incision length, etc.). Further research is required to better assess minimally invasive approaches in thyroid surgery. We present the video of a thoracoscopic esophageal leiomyoma enucleation. The advantages of the minimally invasive approach in surgery have been widely demonstrated. Esophageal thoracoscopic surgery has been suggested as an alternative to open procedures, presenting less surgical trauma, lower risk of bleeding, less postoperative pain, lower wound infection, and lower pulmonary morbidity, while showing similar oncologic outcomes. Although leiomyomas are the most common benign tumors of the esophagus, they are relatively rare, with an incidence of 10-40 per 10,000 in autopsy series. In our case, the patient was diagnosed with a leiomyoma located in the middle third of the esophagus. He reported a 6-month history of dysphagia for solids and liquids and retrosternal pain. The complementary studies were esophagoscopy, esophagography, CT, and endoscopic ultrasonography. The patient was operated on by a thoracoscopic approach using 3 ports. Enucleation of the tumor was completed, followed by closure of the muscular layer. A methylene blue test confirmed no leaks. The patient was discharged on the third postoperative day without incident. Pathology report: leiomyoma, 6 cm in size; actin and desmin positive; S-100, CD34 and CD117 negative. We wish to demonstrate the advantages of a minimally invasive approach in this kind of pathology.
Aims: This video shows our technique for thoracoscopic enucleation of a large esophageal leiomyoma. Methods: The patient is a 53-year-old woman with a six-month history of progressively worsening dysphagia. Chest CT scan revealed a 6-cm lesion of the middle esophagus with extrinsic compression of the mucosa and no increased FDG uptake on FDG-PET scan. A barium swallow study showed lateral deviation of the thoracic esophagus due to extrinsic compression. Endoscopic ultrasound confirmed the suspicion of esophageal leiomyoma. The patient was scheduled for thoracoscopic enucleation of the esophageal tumor. She was placed in the prone position and one-lung ventilation was employed. Three trocars were placed in the intercostal spaces of the right hemithorax. The azygos vein was identified and transected between vascular clips. The esophagus was circumferentially isolated from the mediastinal structures. After myotomy, the lesion was dissected from the submucosal-mucosal layer. Since an air-leak test excluded injury to the internal layer, the muscular layer was closed with a continuous suture. The specimen was extracted in an endobag. A drain was left in place. Results: The postoperative course was uneventful and the patient was discharged on postoperative day 7. Final pathological examination confirmed esophageal leiomyoma. Conclusion: Thoracoscopic surgery in the prone position allows removal of large esophageal tumors with several advantages. The minimally invasive technique decreases perioperative morbidity and mortality. Introduction: Spontaneous esophageal perforation is a life-threatening disease and requires emergency surgical treatment. Recently, the efficacy of minimally invasive surgery, such as laparoscopic and thoracoscopic surgery, for esophageal perforation has been reported. We report a novel technique of minimally invasive abdominal and left thoracic approach (MALTA) for spontaneous esophageal perforation.
Case presentation: A 64-year-old male, who had been on hemodialysis due to IgA nephropathy, complained of chest pain after vomiting several times. Since CT showed left hydropneumothorax and pneumomediastinum, and a gastrografin study demonstrated extravasation from the left side of the esophagus, we diagnosed spontaneous esophageal perforation and planned emergency surgery. The patient was placed in the reverse Trendelenburg position with the legs split and the left side of the upper body lifted, in order to perform thoracoscopy and laparoscopy simultaneously. First, we explored the thoracic cavity through a 12-mm port in the left 8th intercostal space and added 3 further ports. We identified a 30-mm rupture site on the left wall of the lower esophagus and sutured the mucosa and the muscle layer with running sutures, respectively. We covered the perforation site with pericardial fat and irrigated the cavity with physiological saline. We then turned to the abdominal cavity, where no contamination was found. A feeding tube was inserted into the stomach through the round ligament of the liver and the operation was completed. The total operative time was 173 min and intraoperative blood loss was 1300 ml, including pleural effusion. Postoperatively, the patient experienced a left pleural empyema but no other severe complications, and was discharged on postoperative day 25. Conclusion: We experienced a rare case of spontaneous esophageal perforation in a patient on hemodialysis. MALTA is an effective procedure for emergency esophageal operations because it provides an excellent visual field of the chest and abdominal cavities without spreading contamination. Introduction: Digestive caustic injury is associated with high morbidity and mortality, with stenosis in the long term. Surgical treatment involves resection of the esophagus and reconstruction with the stomach, colon, or jejunum.
Coloplasty provides several advantages, but its vascularization is complex and it involves 3 anastomoses. Classically, vascular assessment was achieved by palpation through laparotomy and evaluation of colour. Indocyanine green (ICG) allows a minimally invasive intraoperative angiography in real time. Methods: A 43-year-old female with a medical history of caustic ingestion and subsequent esophagogastric stenosis, carrier of a feeding jejunostomy. 1. Thoracoscopy (prone position): dissection of the esophagus from the hiatus to the upper thoracic inlet. 2. Laparoscopy: with the patient in the supine position and five trocars placed, total non-oncological gastrectomy, post-pyloric section of the duodenum, and omentectomy were completed. Mobilization of the right, transverse, and descending colon. Measurement of the transverse colon with a tape (distance from the neck to the esophageal hiatus). Individualization of the right, middle (with its branches), and left colic arteries, and placement of clamps on the right colic artery, the right branch of the middle colic artery, and the left colic artery. 3 cc of ICG were injected, allowing assessment of the colonic vascularization. Section of the right branch of the middle colic artery. Proximal section of the ascending colon and distal section near the splenic flexure, preserving the marginal arch. A silk stitch was placed to join the staple line of the descending colon and the pylorus. Side-to-side mechanical antiperistaltic anastomosis between the distal end of the coloplasty and the jejunum. Finally, a side-to-side mechanical anastomosis between the ascending and descending colon was performed through an assistance incision in the left flank. 3. Cervical dissection: extraction of the surgical specimen under laparoscopic control. Vascular assessment with ICG was performed before and after the side-to-side anastomosis. Results: There were no intraoperative complications. The patient was discharged on postoperative day 11.
Discussion: We describe the first case of totally minimally invasive colonic interposition with ICG assessment of the vascularization. This technique, although technically demanding, avoids the drawbacks of open surgery and allows a precise assessment of the vascularization of the graft. Surg Endosc (2019). Introduction: Large pedunculated fibrovascular polyps are uncommon, mostly benign, intraluminal masses, usually located in the upper esophageal tract. The most frequently reported clinical manifestation is dysphagia, followed by regurgitation, chest pain, and intestinal bleeding. CT scan and MRI are key in the diagnostic work-up, revealing a sausage-shaped intraluminal mass. Endoscopy with ultrasonography and biopsy adds important information for diagnosis and pedicle location. Surgical excision is warranted because of the potentially life-threatening complication of airway obstruction. Polyp resection is most frequently performed through cervical esophagotomy or by direct esophagectomy. However, this approach is associated with high morbidity and mortality rates. In recent years, a few excisions have been reported via an endoscopic approach, with fewer postoperative complications. Material and methods: This video shows the surgical steps of a trans-oral endoscopic surgical resection of a giant (23-cm) pedunculated polyp in a 43-year-old man. The procedure was performed under general anesthesia. A flexible endoscope was used and the distal end of the polyp was extracted through the oral cavity with a loop. An Endo GIA stapler was used to cut the base of the polyp, which was then removed. The anatomopathological study confirmed the diagnosis of a fibrovascular polyp with no evidence of malignancy. Results and conclusions: The patient had an uneventful recovery with no recurrence at 3 years of follow-up.
This minimally invasive approach is a safe and feasible procedure to treat large esophageal fibrovascular polyps, avoiding the complications related to more aggressive procedures. Introduction: Leiomyomas are the most common mesenchymal tumors affecting the esophagus, and they usually grow in the middle to distal third of it. They tend to be asymptomatic, but sometimes they can grow to enormous size and produce dysphagia. Case report: A 61-year-old asymptomatic male patient was referred to our hospital because of an incidental finding. CT scan revealed a 41 × 35-mm rounded submucosal tumor on the dorsal side of the lower third of the esophagus. Upper gastrointestinal endoscopy revealed a cystic lesion in the lower esophagus, 37 cm from the incisor teeth, with normal overlying mucosa. Endoscopic-ultrasound-guided fine-needle aspiration of the mass was performed and reported as a likely leiomyoma. Conservative treatment was chosen, and no growth was detected during eleven years of follow-up. The lesion then became symptomatic: the patient complained of progressive dysphagia caused by compression, so surgical resection was decided. Laparoscopic enucleation of the esophageal leiomyoma was performed. The tumor was reached by transhiatal dissection. Careful dissection of the mass was performed, preserving the vagal branches. Intraoperative endoscopy was performed to verify the integrity of the esophageal mucosa and complete resection of the tumor. The muscular layer was sutured after enucleation using absorbable suture material, and the hiatus was closed with non-absorbable suture material. A Dor fundoplication was also performed. A swallow test with water-soluble contrast was obtained on postoperative day one. As no pathological findings were found, the patient was allowed to drink. Histopathological examination revealed a tumor measuring 70 × 33 × 17 mm, consistent with leiomyoma. The procedure lasted 160 min.
Hospital discharge took place on the third postoperative day and no complication was registered. Conclusion: Surgical excision is the mainstay of treatment and is recommended for symptomatic leiomyomas and those greater than 5 cm. This case report demonstrates the technical feasibility, safety, and minimal postoperative morbidity associated with minimally invasive esophageal surgery. Introduction: Total esophagectomy by means of minimally invasive surgery has proven to be a valid and effective alternative for performing this procedure. However, it is not implemented in most centers. Objective: To demonstrate the technique of total esophagectomy by endoscopic surgery for a benign esophageal stenosis. Material and methods: Clinical case: a 75-year-old female patient diagnosed with double esophageal peptic stenosis, treated on several occasions with endoscopic dilation by the gastroenterology department, the last endoscopy showing severe esophagitis with an impassable stenosis at 22 cm. Additional studies of interest are presented. Intervention: right thoracoscopy in the prone position, with dissection and complete mobilization of the thoracic esophagus, section of the azygos vein, and pleural drainage. Laparoscopic stage: 5 trocars, gastrolysis respecting the right gastroepiploic vessels, wide Kocher maneuver until the vena cava is identified, vascular section of the left gastric vessels, full mobilization of the stomach, subxiphoid minilaparotomy. Cervical stage: dissection of the cervical esophagus, its section and fixation to a tube, externalization of the specimen by the abdominal route, creation of the gastric tube with successive GIA stapler firings, ascent through the posterior mediastinum with a hand-sewn esophago-tubular anastomosis, placement of drains, and a feeding jejunostomy. The postoperative course was uneventful, with radiological control with gastrografin on the 6th day and discharge from hospital on the 11th day. The patient remains asymptomatic one year after surgery, with radiological controls showing no alterations.
Conclusions: The approach to esophageal peptic stenosis with minimally invasive surgery is safe and effective, adding the advantages inherent to this type of technique. (Figs. 1, 2), showing a well-defined cystic lesion in the gastric submucosa, medial and superior to the lesser curvature of the stomach, with exophytic growth. It causes extrinsic compression of the cardia at the gastric body. The differential diagnosis includes gastric duplication cyst and gastrointestinal stromal tumor. A biopsy was taken, ruling out the presence of neoplastic cells. Finally, a digestive transit study showed extrinsic compression at the cardial level, causing difficulty in the passage of contrast. A laparoscopic approach was performed, beginning with the dissection of the abdominal esophagus. A cystic lesion on the anterior aspect of the abdominal esophagus was identified (Figs. 3, 4). We proceeded to complete resection of the cyst. The surgery was completed with a 180° anterior Dor fundoplication. The patient went home on the 3rd day without incident. Results: Duplication cysts are congenital malformations of the gastrointestinal tract contiguous with the esophagus, which can communicate with the esophageal lumen. Most are diagnosed in childhood, but when diagnosed in adults they are usually symptomatic. They are more frequent in men. Although the pathogenic mechanism is unknown, they are caused by an anomaly during embryonic development. They are located in the thoracic esophagus, at the level of the lower and posterior mediastinum, and, less frequently, in the abdominal esophagus, as in our case. They can give digestive symptoms (epigastralgia, vomiting) or respiratory symptoms. The diagnosis is made with EGD, CT, and, as the investigation of choice, endoscopic ultrasound, although it may be incidental. The treatment of choice in symptomatic patients is complete resection or cystic enucleation.
In asymptomatic patients the surgical indication is not defined, since surgery itself can cause complications and malignant degeneration is very infrequent. Conclusion: Most duplication cysts are diagnosed in childhood, although those found in adults are more frequently symptomatic. Surgical treatment can cure this disease. However, the choice between these options becomes difficult in young patients, where the low incidence does not allow long patient series to be gathered, and decisions must be based on results achieved in adults. Objectives: To demonstrate the safety and efficacy of the laparoscopic approach in this infrequent pathology, pointing out the importance of having standardized the procedure to achieve better results. Material and methods: Case report: a 19-year-old man with progressive dysphagia up to almost complete aphagia, with clinical, endoscopic, radiological, and manometric findings compatible with typical primary achalasia. Chagas serology was negative; we show the complementary studies of interest. Dilatation was not performed; preoperative symptomatic treatment was given with calcium channel blockers. Intervention: laparoscopic approach with 5 trocars; an aberrant left hepatic artery and signs of severe esophagitis were found; opening of the gastrosplenic and gastrohepatic ligaments, without creation of a retroesophageal window; dissection of the hiatus and inferior mediastinum; preservation and mobilization of the left hepatic artery and the anterior vagus; meticulous dissection of the cardia. Standardized myotomy: first proximal, 8 cm, with adequate simultaneous traction of both edges of the myotomy, then distal, including 2-3 cm and selectively the distal oblique fibers of the LES; calibration with a Fouché bougie and methylene blue to confirm good passage and absence of leakage; Dor-type fundoplication with 4 stitches on each side, fixed to both pillars; hiatal-mediastinal drainage. EGD on the 1st postoperative day was normal, with discharge on the 3rd day; the patient was asymptomatic, with normal radiological control at 1 year.
Conclusions: Laparoscopic Heller myotomy should be the first therapeutic option in patients with primary achalasia, even in young patients. The length of the myotomy, especially distal to the esophagogastric junction, is one of the most important aspects of the surgery; most authors (Pellegrini) recommend that the myotomy extend 1-2 cm onto the stomach, or even up to 3 cm below the esophagogastric junction, to achieve effective disruption of the LES. Standardization of the procedure is fundamental to increase safety and effectiveness in these more complex cases. Aims: The surgical treatment of giant hiatal hernias is a complex and demanding procedure, not only in terms of performing the operation in a minimally invasive abdominal fashion by avoiding thoracic approaches, but also concerning the management of large hiatal defects, which contribute to high recurrence rates. Our aim is to present our surgical technique for the reconstruction of such hiatal hernias, exploiting the benefits of the robotic approach, and also to highlight the technical aspects of non-absorbable mesh placement in order to bridge the hiatal defects effectively. Methods: We present video fragments from a procedure selected from a series of cases of robotic reconstruction of giant hiatal hernias performed by our team, in which non-absorbable meshes were used to restore the hiatal gap. We emphasize the clear benefits of robotic surgery in these cases and the strategy for avoiding high recurrence rates. Results: All of our patients who underwent reconstruction of giant hiatal hernias with this particular technique experienced very good early postoperative results, a very short hospital stay, and no recurrence at 12 months of follow-up. Conclusions: The robotic approach for the treatment of large hiatal hernias offers great advantages to both surgeons and patients, by eliminating the restrictions of conventional laparoscopic surgery and minimizing intraoperative incidents and postoperative complications.
Large hiatal defects are very effectively closed with the use of advanced suturing techniques and non-absorbable meshes in a tension-free bridging fashion. Aims: McKeown esophagectomy is commonly used for invasive esophageal carcinoma. As the morbidity and mortality rates of esophagectomy remain persistently high, minimally invasive esophagectomy in the prone position is expected to reduce postoperative respiratory complications. There is still limited experience with minimally invasive approaches in patients undergoing surgery after neoadjuvant chemoradiation, and many concerns remain about the feasibility, safety, and oncological outcomes of these procedures. Methods: We present the case of a 55-year-old female with a middle-third esophageal squamous cell carcinoma who received neoadjuvant chemoradiation. She underwent laparoscopic and thoracoscopic (prone position) McKeown esophagectomy with a hand-sewn esophagogastric anastomosis through a left lateral cervical incision. Results: The operation was completed successfully, with no conversion to open surgery. The operative time was 6 h with minimal blood loss; the patient was fed on day 5 and discharged on postoperative day 7. R0 resection was achieved and the total number of harvested lymph nodes was 44 (3 positive nodes, N2). Conclusions: Minimally invasive McKeown esophagectomy in patients with esophageal cancer and prior chemoradiation is a feasible and safe procedure with acceptable oncological outcomes. Results: Preliminary results demonstrated that the minimally invasive Ivor Lewis esophagectomy procedure provided better postoperative pain control and fewer respiratory complications. In order to standardize our procedure, the video shows how three different types of esophagogastric anastomosis are performed, depending on patient characteristics, anatomical factors, and the safety and comfort of the surgeon: hand-sewn end-to-end, mechanical end-to-end, and mechanical side-to-side.
Conclusions: On our way to standardization, we are still looking for the best type of anastomosis; nevertheless, we found that hand-sewn anastomoses are easier to perform when the esophageal section is lower, involving the middle and inferior thirds. On the other hand, the mechanical end-to-end anastomosis seemed to be the ideal option for upper sections. More studies are needed in order to standardize one anastomosis for all cases. Because esophagectomy with radical lymphadenectomy is highly invasive, thoracoscopic esophagectomy (TE) is attracting attention as a less invasive procedure. We first performed TE in the left decubitus position in 1996. In 2009 we developed a hybrid of the prone and left lateral decubitus positions for TE, and a total of 470 patients have undergone TE in the hybrid position. We introduced TE in a hybrid position for the following three reasons: (1) mobilization and lymphadenectomy around the middle and lower esophagus are easier in the prone position; thanks to the artificial pneumothorax and gravity, the middle and lower mediastinum are opened, giving a good surgical field. (2) Lymphadenectomy along the left recurrent laryngeal nerve (RLN) is more reliable and precise when performed in the left lateral decubitus position; we can dissect the lymph nodes around the RLN at a higher position in the upper mediastinum. (3) Unexpected events requiring conversion to thoracotomy (e.g., massive bleeding, injury of other organs, dense intrathoracic adhesions, resection of adjacent organs) are easier to deal with in the left lateral decubitus position. The patient is fixed on the operating table in the semi-prone position, and we can easily change the patient's position from the left lateral decubitus to the prone position and vice versa using the rotation system of the operating table.
the upper mediastinal procedure including lymphadenectomy along the right and left rln is performed with the patient in the left lateral decubitus position, while the middle and lower mediastinal procedures are performed with the patient in the prone position with artificial pneumothorax (7 mmhg). the abdominal procedures were performed by hand-assisted laparoscopic surgery (hals), and gastric tube reconstruction through a posterior mediastinal route was performed as the standard surgical procedure in our institution. the magnifying effect of the thoracoscope enables us to perform more precise surgery and preserve nerves and vessels, and the hybrid position is thought to be a feasible and effective method. ivor lewis esophago-gastrectomy is a standard procedure for the treatment of distal esophageal cancer. over the years, the surgical community has standardized the minimally invasive abdominal phase. the thoracic phase is much more complex, mainly because of the esophago-gastric anastomosis. in fact, it is still very difficult and tricky to perform a mechanical circular anastomosis due to problems with the correct handling of the circular stapler through the minithoracotomy, and it is also difficult to place the anvil in the proximal esophagus. the linear (side-to-side) anastomosis is a little easier but not as effective as the circular anastomosis in terms of leak rate. we think that the robotic approach, with its endowrist, can allow us to overcome these limits and that a tailored double-layer hand-sewn esophago-gastric anastomosis could be the right choice. we treated 4 patients with this approach and the postoperative period was uneventful in all, except for a case of chylothorax that we treated successfully with lipiodol injection into inguinal lymph nodes. we need more cases to analyze the technique in terms of leak rate and major complications, but we think this is a promising and cost-effective procedure for the robotic approach. 
aim: anastomotic leakage is one of the most dreaded complications after esophagectomy. intraoperative use of indocyanine-green near-infrared angiography (nir-icga) has recently been introduced for visceral perfusion evaluation. in this video we present our technique for gastric conduit fashioning according to nir-icga blood supply evaluation in a totally minimally invasive ivor-lewis esophagectomy. methods: a 48-year-old man affected by a siewert 1 adenocarcinoma (ct3, n2, m0) underwent preoperative neoadjuvant treatment according to the cross protocol. at restaging ct scan no more pathologic nodes were evident (ct3, n0, m0). the patient underwent a totally minimally invasive ivor-lewis esophagectomy. surgical procedure: after pneumoperitoneum induction and insertion of 4 trocars, the lesser omentum was opened and a lymphadenectomy at stations 12a, 8, 7, 9 and 11 performed. the esophagus was dissected at the diaphragmatic hiatus with lymphadenectomy at stations 1, 2 and 111. the greater omentum was opened along the right gastroepiploic arcade, which was preserved, the short gastric vessels divided and the gastric fundus mobilized. after confirming the presence of intense fluorescence on nir-icga, a tailored partial tubulization of the stomach was performed with multiple linear stapler firings. a right thoracoscopy was performed through 4 trocars. the azygos vein was ligated and divided. the mediastinal pleura was opened and the esophagus was dissected entirely with en-bloc excision of nodes at stations 110, 112 and 108. nodes at stations 105, 106, 107 and 109 were removed separately. the esophagus was sectioned above the azygos vein level and a purse-string fashioned. the cardia and the gastric tube were pulled up and a minithoracotomy performed. nir-icga was repeated to verify good blood supply and to tailor the site of anastomosis on a well-perfused area. the stomach was opened and a circular stapler inserted. 
after fashioning the end-to-side esophago-gastric anastomosis, the tubulization was completed by 2 linear stapler firings and the specimen removed. results: the post-operative course was uneventful and the pathologic examination revealed a cardial adenocarcinoma (ypt2, n0, r0). conclusions: nir-icga is an interesting and easy-to-use tool for surgeons. nevertheless, it is still not clear in the literature which parameter best measures blood supply. large studies are needed. aim: uses and applications of indocyanine green (icg) fluorescence in the field of surgery are growing exponentially. the safety and feasibility of its usage have been proven in several areas and various surgical pathologies, and surgeons are starting to incorporate it into their common practice. however, there are still several aspects to define regarding this technology. we present different uses of icg in the specific area of esophageal cancer. methods: we used icg fluorescence at different moments of a two-field minimally invasive esophagectomy. first of all, peritumoral injection of icg may offer lymphatic mapping, both in the abdominal phase of the surgery and in the thoracic one, improving lymph node dissection by allowing a more targeted and less morbid approach that includes all relevant nodal stations. at the moment of gastric section, intravenous injection provides assessment of gastric conduit perfusion, thereby optimizing the construction of the graft to avoid the inclusion of poorly perfused areas that may increase the risk of anastomotic leak. besides that, the esophagogastric anastomosis can be tested in the thoracic phase of the operation in order to check adequate perfusion and prevent further complications. results: we consider that icg fluorescence is a promising technology that could be easily introduced into the surgical routine of the esophageal surgeon as an instrument to assess anastomotic perfusion. 
icg is also feasible for detecting lymph node drainage from the esophagus, although its technique of application needs to be defined. conclusions: icg fluorescence has opened a new world of possibilities in all the different surgical specialties. its use in esophagectomy is safe, simple and feasible. in the near future, its application to esophageal cancer surgery could improve survival by predicting and preventing anastomotic leak and guiding a tailored lymphadenectomy. further research is needed to demonstrate these promising applications. introduction: oesophagectomy currently remains mandatory in the curative treatment of malignant oesophageal pathology. this procedure carries important morbidity and mortality. the minimally invasive approach aims to reduce complications without repercussions on oncological outcomes; however, being a demanding surgical technique, it is not exempt from them. aim: we present the video of three complications after a three-stage oesophagectomy (mckeown-like) with the thoracic stage via thoracoscopy, and the minimally invasive surgical solution for each of them. methods: all three cases represent a three-stage oesophagectomy for malignant esophageal pathology. the first one was a 50-year-old male who suffered an intraoperative left main bronchus injury. the second case was a 62-year-old male with no intraoperative complications whatsoever; nonetheless, on the second postoperative day, milky drainage started to appear through the thoracic tube. the third and final case represents an intraoperative hemorrhage, which is the most common complication of this kind of surgery. results: the first case was diagnosed and treated intraoperatively with the use of an adhesive matrix. during the postoperative period the patient showed no further complications. the second case was a chylothorax, diagnosed on the second postoperative day. it was treated initially with conservative measures. 
due to poor evolution, he underwent surgery on the tenth postoperative day. we can see how we ligated the stump of the thoracic duct in the original surgery and then how we repaired the unexpected leak. after the second surgery, the patient was discharged on the sixth day. the last patient was also diagnosed and treated intraoperatively successfully, with no repercussions whatsoever in the postoperative period. conclusions: minimally invasive surgery has many advantages in the upper gastrointestinal field. it is a demanding technique, so it is important to be able to treat the complications that may arise with this approach. surg endosc (2019) aims: the use of icg fluorescence is increasing in surgery, mainly as a test of vascular supply in colonic anastomoses. during the last years, other potential uses have been described, such as the identification of the sentinel lymph node and lymphatic mapping in oncological surgery. these advances could allow better staging in order to decide the most appropriate treatment for each patient. gastric cancer is one of the fields where this could play a key role in the near future. we present the case of a patient who underwent a laparoscopic total gastrectomy with icg-guided d2 lymphadenectomy, where a personalized lymphatic mapping was performed. methods: a 37-year-old male patient underwent gastroscopy for gastric discomfort, and a gastric carcinoma was detected at the greater curvature of the gastric body. the endoscopic biopsy was reported as diffuse-type gastric adenocarcinoma. the preoperative staging was completed with echoendoscopy and ct scan (t1bn0m0). we decided to perform a laparoscopic total gastrectomy with icg-guided lymphadenectomy. on the preoperative day, a gastroscopy was performed to inject 0.75 mg of icg in four submucosal areas around the tumor. 
results: intraoperatively, the lymphatic mapping marked by icg was checked, allowing the identification of the territory of drainage of the tumor to lymph nodes at the lesser curvature, the greater curvature and the splenic artery. a d2 lymphadenectomy and a total laparoscopic gastrectomy with roux-en-y reconstruction were performed. during the lymphadenectomy, we were able to observe marked lymph nodes in territory 11, and also observed that the paraaortic lymph node behind the celiac trunk did not become green, so the lymphadenectomy in this area was not continued. the patient presented no postoperative complications and was discharged on the seventh postoperative day. the histological results showed a diffuse-type gastric adenocarcinoma pt2n1 and 26 isolated lymph nodes, one of them positive (corresponding to the adenopathy marked at the greater curvature). conclusions: lymphatic icg-mapping in gastric cancer is a potentially revolutionary advance that could ensure a correct lymphadenectomy, avoiding lymph node understaging. it is necessary to continue carrying out studies that will allow developing protocols to define appropriate lymphadenectomy based on icg-mapping. introduction: petersen's hernia is a severe postoperative complication after gastrectomy, which may result in massive resection of small intestine. closure of the petersen's defect is considered an essential procedure in all cases after reconstruction following gastrectomy, such as the roux-en-y method. we report a case of petersen's hernia after radical gastrectomy, which was repaired laparoscopically. patient: the patient was a 59-year-old male, who had undergone laparoscopic distal gastrectomy for gastric cancer (d2 lymph node dissection followed by roux-en-y reconstruction) two years earlier. closure of the petersen's defect was not performed in the initial operation. he became aware of abdominal pain and visited the emergency unit in our hospital. 
abdominal ct scan showed a petersen's internal hernia. surgical procedure: on laparoscopic examination, dilation of the small intestine and ascites were observed. a large amount of small intestine, including the y-limb, had entered the petersen's defect from left to right. we carefully pulled back the small intestine and confirmed the absence of ischemic change in the whole small intestine. then the petersen's defect was closed by continuous suturing with a 3-0 non-absorbable barbed suture. results: the operation time was 95 min and the estimated blood loss was 10 ml. oral intake was started the day after the operation. there were no postoperative complications. the patient was discharged on the 10th postoperative day. conclusion: we could safely perform laparoscopic repair of a petersen's hernia. regarding technical points of the procedure, it is important to judge the direction of the small intestine into the petersen's defect, to manage the dilated small intestine gently, and to close the petersen's defect by laparoscopic suturing. introduction: in this case we present a 52-year-old male with a history of morbid obesity, sleep apnea and psychiatric affliction including alcohol and nicotine abuse. in 2009 he underwent a laparoscopic roux-en-y gastric bypass. the results were satisfactory, with no complications post-surgery, and a steady weight loss over time (pre: 162 kg, post: 125 kg). in november 2017, he presented with complaints of dysphagia and weight loss (12 kg in 3 months). laryngoscopic examination by the otorhinolaryngologist was negative. he was referred to gastro-enterology for gastroscopy. biopsies showed a mildly differentiated adenocarcinoma of the gastro-esophageal junction with submucosal invasion. objective: after a negative staging assessment, multiple treatment options were considered. the route of choice was a laparoscopic radical gastrectomy with esophago-jejunostomy, with the objective of achieving optimal oncological results. 
methods: the procedure is demonstrated in this video. the gastric pouch as well as the remnant stomach and the greater and lesser omentum were resected laparoscopically. due to invasion of the carcinoma into the distal esophagus, a segment of the esophagus was resected as well. following anatomopathological examination on frozen section, the resection margins were reported malignancy-free. results: postoperatively, there were no complications. a ct scan with contrast showed no signs of leakage. anatomopathological examination confirmed the tumor to be a mildly invasive and poorly differentiated adenocarcinoma with local signet-ring cell differentiation (pt1bn0). there was no need for adjuvant therapy. oral intake was sound. conclusion: adenocarcinomas of the gastric pouch are rarely seen following gastric bypass. this patient presented with complaints of dysphagia, and an adenocarcinoma was diagnosed. consequently, the patient had a total gastrectomy at our hospital. the surgery was performed laparoscopically and was executed with success. to conclude, it is feasible to treat adenocarcinomas after gastric bypass laparoscopically via total gastrectomy and omentectomy. an 80-year-old female patient presented with upper abdominal discomfort and microcytic anemia. an ulcerative lesion was found on gastroscopy in the body of the stomach (near the greater curvature). a biopsy was done and the pathology result showed poorly differentiated adenocarcinoma. chest computed tomography (ct) was without any significant findings. abdominal ct showed the lesion in the stomach without enlargement of regional lymph nodes. her blood laboratory examinations were within normal limits, including serum cea. the patient underwent laparoscopic total gastrectomy with modified d2 lymphadenectomy and roux-en-y esophagojejunostomy. total operating time (ort) was 218 min. three days after the operation, the patient developed a non-st-elevation myocardial infarction and respiratory failure. she was intubated. 
on day 6 after the operation she was extubated, on day 9 the patient started a regular diet, and she was discharged home on day 13. the final pathology result confirmed poorly differentiated adenocarcinoma of the stomach. this video shows our favourite technique for laparoscopic d2 subtotal gastrectomy. we usually perform the procedure with 4 or 5 trocars; after the coloepiploic detachment we perform the gastric transection first! this manoeuvre provides a perfect view for the lymphadenectomy. at the end of our dissection we transect the duodenum with seamguard reinforcement. before going on with the reconstructive phase, we prepare the roux-en-y with the double loop technique, usually without dividing the mesentery. then we remove the specimen through a periumbilical 3-4 cm minilaparotomy; we think it is important in any case to check margins and distance from the tumor before going on with the reconstructive phase. through the same minilaparotomy we retrieve the prepared limb for the roux-en-y and we perform a side-to-side mechanical linear anastomosis externally. then we proceed with performing the anastomosis between the gastric pouch and the alimentary limb by laparoscopy. we very much like this technique for the increased exposure of tissues during lymphadenectomy. laparoscopic roux-en-y d2. a body ct was performed in which axillary and mediastinal adenopathies and images suggestive of hepatic metastases were identified. the biopsy confirmed a gastrointestinal stromal tumor. the case was discussed in a multidisciplinary committee and the pet-ct study was completed; a subcardial gist t4n1m0 was diagnosed. neoadjuvant treatment with imatinib for one month was decided and surgery was performed using a laparoscopic approach. the approach was performed with 5 trocars (11 mm supraumbilical, two 12 mm subcostal left and right, 5 mm subxiphoid and 5 mm left flank). gastrectomy was performed with d1 lymphadenectomy following the oncological principles of subcardial tumors. 
the specimen was removed in a bag by extending the 11 mm port to a mini-laparotomy. esophagogastric anastomosis was performed by hand-assisted circular mechanical suture. a methylene blue test was carried out. no nasogastric tube was left, but a drain tutoring the esophagogastric anastomosis was left. results: the postoperative evolution was favorable. oral tolerance was achieved without incident on the fourth postoperative day. the patient was discharged without incident on the seventh postoperative day. the pathological study of the specimen was reported as a subcardial gastrointestinal stromal tumor of 3 cm with respected surgical margins and 11 lymph nodes free of malignancy, with a postoperative diagnosis of t2n0m0. one month after surgery, the patient had adequate oral tolerance. she does not report gastroesophageal reflux and at 6 months remains asymptomatic and with good evolution. conclusions: laparoscopic proximal gastrectomy is a technique that is not widely used at present but can be performed through a laparoscopic approach. it is a safe technique with good clinical and oncological results, especially in early gastric cancer and gastrointestinal stromal tumors. however, long-term studies are necessary. laparoscopic gastrectomy is nowadays a perfectly safe option for the treatment of gastric cancer. every year the percentage of the laparoscopic approach is rising, not only in the east but also in the west. we present the case of a 44-year-old female patient with a gastric tumor of the antrum, a g3 adenocarcinoma with ct2n0m0 staging. we performed a subtotal laparoscopic gastrectomy with d2 lymphadenectomy and roux-en-y anastomosis. the patient began clear liquids on the first postoperative day and was discharged on the 5th. the final anatomopathological result of the specimen was an adenocarcinoma (g3), pt1n0. there were 37 nodes resected, all negative. the case was discussed in a multidisciplinary team and clinical follow-up with no further treatments was decided. 
the patient was evaluated one month after surgery with no complaints and will continue follow-up. upper gi surgery, university of verona, verona, italy. laparoscopic endoscopic cooperative surgery is an option in medium-size submucosal cancers invading the muscular layer, mainly in border areas where wedge resections are not feasible. in this video we report a case of a prepyloric gist treated with the news technique (non-exposed endoscopic wall-inversion surgery). we think that this technique is feasible and safe and should be considered a valid option with a view to preserving the organ. aim: laparoscopic wedge resection or partial resection is a safe and feasible stomach-preserving approach to gastric submucosal tumors (smt) such as gastrointestinal stromal tumors (gist), and it has been widely performed recently. however, it should not be applied to tumors at the cardia in order to avoid stenosis or disruption of the anti-reflux mechanism. we have performed percutaneous endoscopic intragastric surgery (peigs) for smt at the cardia since 2013 to preserve the function of the cardia. we report the tips, techniques, and clinical results of our peigs. methods: from september 2013 to august 2018, seven patients with smt at the cardia underwent peigs in our hospital. we insert a 12 mm port at the umbilicus and investigate the abdominal cavity first. then the incision is extended to 2.5 cm and a lap-protector tm is fitted to the site to perform a mini-laparotomy. using peroral endoscopy, the stomach is insufflated and an incision is made in the anterior wall of the gastric body under direct vision. an additional lap-protector tm is placed into the stomach so that the stomach is fixed to the abdominal wall. this enables us to keep direct access to the gastric lumen and a stable operative field. an ez access tm is attached to the lap-protector tm , and the intragastric operation is started. subsequently, two trocars are inserted using funada's gastropexy instrument. 
the tumor is dissected using energy devices, avoiding injury to the capsule of the tumor. the defect of the gastric wall is closed by intragastric suturing. we take particular care not to damage the egj during the suturing, moving the peroral endoscope in and out. results: the mean operation time was 145 (117-149) min and the amount of intraoperative bleeding was 5.0 (3.0-7.5) ml. the maximum diameter of the tumors was 30 (15-30) mm. one case was a low-risk gist and the others were leiomyomas. the postoperative course was uneventful in all cases, without leakage or stenosis. total hospital stay was 9 (8.5-9) days. no patient had symptoms of esophagitis. conclusions: peigs for smt at the gastric cardia is a feasible and safe approach, preserving the function of the cardia. our procedure achieves great stability and excellent visualization during the operation, which may have led to the fine surgical results. laparoscopic lymphadenectomy in gastric cancer is considered a feasible and safe procedure. data on the compliance of lymphadenectomy in the various lymph node stations are not yet well understood; moreover, it is not clear whether there are particular patient-related conditions impairing the oncological results. this video reports the use of icg for the lymph node dissection of station number 6 in the case of an obese patient and the case of a cirrhotic patient. the first patient, m.a., was a 67-year-old man with distal cancer ct2n0 and a bmi of 32. the second patient, t.d., was a 56-year-old man with distal cancer ct2n0 and alcoholic cirrhosis child b7. in both cases, intraoperative endoscopy was performed 20 to 30 min before the onset of lymph node dissection. 0.1 mg of icg was injected into the submucosal layer in the 4 quadrants of the primary tumor. a laparoscopic subtotal gastrectomy was conducted with d2 lymphadenectomy. lymph node navigation was analyzed with the novadaq detector. using the navigation system we removed the n6 basin. 
in both cases dissection was effective and the pancreatic surface was easily detectable. the number of lymph nodes retrieved was 8 in the obese patient and 3 in the cirrhotic patient. pathological tnm was pt3n0 (0/50 nodes positive) in the first case and pt3n1 (2/40 nodes positive) in the second. no nodal metastases were detected in the n6 station in either case. no pancreatic fistula was recorded. the icg lymph node navigation system should be considered a valid support for the surgeon for completion of a correct lymphadenectomy in surgically challenging cases. aims: morgagni hernia is the rarest of the congenital diaphragmatic hernias (2-3%). its presentation is rare in adults and its finding is usually incidental. it was first described by giovanni b. morgagni in 1769. it is located in the anterior region of the diaphragm. it is caused by a congenital defect in the fusion of the transverse septum of the diaphragm and the costal arches. the need for surgery depends on the presentation; early repair is recommended before the development of complications. classically, the surgical approach was thoracotomy or laparotomy. currently, the tendency is to use a minimally invasive approach, which has shown good results, lower morbidity and faster recovery. the need to repair the defect with a mesh is controversial; it is recommended when it is not possible to close the defect without tension. the objective of this video is to demonstrate the safety and efficacy of the laparoscopic approach for the repair of this type of hernia, as well as the different types of mesh that can be used. aims: the treatment of non-metastatic gastrointestinal stromal tumour (gist) is surgical resection [1] . duodenal gastro-intestinal stromal tumour (gist) is a relatively rare clinical entity. of all resected gists, only 5% are duodenal [2] . 
the clinical presentation can vary from the most common acute gastrointestinal (gi) bleeding and chronic anaemia to acute abdomen caused by tumour rupture and intestinal obstruction. methods: a 69-year-old patient presented to the emergency department of the chi poissy-st germain-en-laye (paris, france) with acute gastrointestinal bleeding. on laboratory exams the haemoglobin was 9.1 g/dl. the patient underwent computed tomography (ct), which showed two hypervascularised lesions at the fourth portion of the duodenum with intra- and extraluminal growth. the ct scan did not show any other abdominal lesions. the patient underwent a minimally invasive surgical operation with a multiport laparoscopic technique. results: resection of the third and fourth portions of the duodenum and of the first 5 centimetres of jejunum was performed with a four-trocar laparoscopic technique. a latero-lateral duodeno-jejunal mechanical anastomosis was performed. the operative time was 90 min. the patient started oral alimentation on the third post-operative day after a ct scan with oral contrast that was negative for anastomotic dehiscence and collections. the post-operative course was globally uneventful and the patient was discharged on the fifth post-operative day without complications. the histological examination showed a low-risk gist, cd117 positive and with a ki-67 below 2% (tnm classification, 7th edition: pt1). aims: gastric volvulus is a rare condition that sometimes represents a diagnostic challenge for surgeons. here we present the video of a recent case of gastric volvulus in our area that was treated with a minimally invasive approach. methods: we report the case of a 58-year-old woman who was admitted to the emergency room (er) with epigastric transfixing pain and inability to vomit that had started 8 h prior to admission. 
the physical exam showed good vital signs, and her abdomen was soft, with a tendency to tenderness on palpation in the epigastrium without guarding or rigidity. her lab tests were normal and conventional radiology showed a double gastric bubble. we ran an urgent computed tomography scan (ct scan) and an upper gastrointestinal (gi) endoscopy, which showed a big type ii hiatal hernia complicated with a gastric volvulus. results: first, a nasogastric (ng) tube was placed for decompression of the stomach at the time the endoscopy was performed. the patient experienced great improvement of the pain with that initial measure and remained stable. after almost a day of appropriate resuscitation, she underwent urgent surgery: we performed a hernia reduction, resection of the hernia sac, hiatal closure, a gastropexy and a nissen fundoplication. the patient suffered no complications in the immediate postoperative period and was discharged after six days. conclusion: gastric volvulus is an uncommon emergency that we need to keep in mind. a simple abdominal x-ray can be very helpful, given that the double gastric bubble is a typical sign of this condition. it is always mandatory to perform an upper gi endoscopy in order to reach the diagnosis and place an ng tube promptly. surgery can be safely delayed in stable patients with no signs of ischemia, and a laparoscopic approach is associated with a shorter hospital stay and good long-term outcomes in these patients. aims: during laparoscopic treatment of hiatal hernias the dissection can be complicated, but even more so the closure of the pillars, especially in giant hiatus hernias with a large defect. the use of prostheses is controversial due to the lack of long-term studies and the possibility of secondary complications. the aim of this video is to demonstrate the safety of mesh hiatoplasty in hiatus hernia surgery. 
methods: we present the case of a 78-year-old woman with hypertension, hypothyroidism and a right hemicolectomy for neoplasia 18 years ago. the patient presented with malnutrition, with a weight loss of 15 kg in the last months. a gastroscopy was performed which showed a large hiatus hernia causing gastric volvulation. the upper gastrointestinal oral contrast study revealed esophageal tertiary waves and good passage to the gastric chamber, with an organoaxial volvulation of the stomach, which was completely included in the thoracic cavity. results: through a five-trocar laparoscopic approach, a large paraesophageal type iv hiatal hernia (7 × 6 cm hiatal orifice) with complete herniation of the stomach and greater omentum into the mediastinum was observed. after reduction of the hernia content, complete dissection with partial resection of the sac was performed. an extended mediastinal dissection of the esophagus, with descent of the esophagogastric junction until achieving an abdominal esophagus of 3-4 cm, was carried out. after posterior and anterior phrenorrhaphy with nonabsorbable sutures, displacement of the right pillar without diaphragmatic opening was evidenced. it was decided to reinforce the hiatorrhaphy using a c-shaped composite mesh fixed with nonabsorbable sutures. the procedure was completed with a nissen-type fundoplication. the postoperative course was uneventful and the patient remains without hernia recurrence 9 months after the intervention. conclusion: prosthetic reinforcement in hiatal hernia repair can be a very useful resource in large hiatal defects in which a tension-free hiatal closure cannot be achieved. however, its use must be individualized according to the characteristics of the patient, the quality of the tissues involved and the result of the simple hiatorrhaphy. aims: heller myotomy is an advanced laparoscopic surgical technique for the treatment of achalasia, a rare disease in which a long time is needed to complete the learning curve. 
surgical simulation, using animal models with an anatomy similar to humans, could help improve surgeon performance in a shorter time. the aim of our video is to show an ex-vivo and in-vivo animal model for heller myotomy training. methods: a cadaveric porcine model was developed using a tissue block including the esophagus and the stomach, without the diaphragm, mounted in a physical laparoscopic simulator. training procedures were also performed in an in-vivo porcine model. experiments were carried out in the 'jesús usón' minimally invasive surgery centre in cáceres. results: the surgical technique is described step by step, first in the esophagus-stomach ex-vivo model and later in the live animal model. conclusion: surgical simulation using cadaveric and live animals offers a realistic representation, allows training in a safe environment, and can be very useful for advanced laparoscopic training in low-incidence pathologies. introduction: esophageal perforation is one of the least frequent complications after laparoscopic nissen fundoplication, but it remains one of the most dreaded because of its morbidity and mortality rates and its technically difficult repair. aims: to present how the authors dealt with an iatrogenic esophageal perforation after laparoscopic nissen fundoplication. methods: the authors report the clinical case of a 65-year-old woman who underwent a laparoscopic nissen fundoplication for a symptomatic large hiatus hernia in a different hospital. on the second postoperative day, after resuming oral intake, she presented respiratory and hemodynamic instability. she needed a chest tube, which drained purulent content. the patient was referred to our hospital for clinical management. an urgent ct scan with oral contrast was performed without showing any leakage. results: in spite of the results, as the patient was unstable, she underwent an emergent diagnostic laparoscopy. 
a small anterior esophageal perforation with a right mediastinal collection was found. a running suture of the perforation was carried out and the nissen fundoplication was converted to a dor fundoplication. the operative time was 120 min. she stayed in the intensive care unit for ten days. five days after the surgery, she was given methylene blue, with no exteriorization through the drains. as postoperative morbidity, she suffered from a right pneumonia and a thoracic collection that was treated by thoracic surgeons. the patient was finally discharged on the 64th postoperative day. she did well at home. she attended follow-ups without clinical reflux or any other particular condition. conclusions: esophageal perforation is a life-threatening complication that can be managed laparoscopically if it is detected soon after surgery and expertise is available. surgical treatment of achalasia fails in 10-20% of patients. the most frequent cause is a previous incomplete myotomy, followed by fibrosis and aspects related to the antireflux procedure. revisional surgery can represent a greater difficulty and a challenge. in this video we show 3 revisional surgeries after heller's myotomy with dor fundoplication for achalasia. all cases presented a recurrence of the symptomatology and a revisional surgery was proposed. surgeries were characterized by the presence of a herniation of the previous fundoplication, fibrosis around the prior myotomy and/or the formation of a pseudodiverticulum. we show the steps followed and the aspects to consider during the dissection. these cases demonstrate that laparoscopic reoperation for achalasia is feasible, even after open surgery. aims: upside-down stomach (uds) is a rare type of large paraesophageal hernia, characterized by migration of the entire stomach, or large parts of it, into the posterior mediastinum. 
uds is associated with severe complications like strangulation or volvulus development, possibly leading to acute gastric outlet obstruction and incarceration. surgical repair is the only curative treatment option and is therefore recommended for these patients. the standard procedure includes a hiatoplasty followed by an anti-reflux procedure. in a variety of studies, the use of mesh proved to be superior with respect to reduction of anatomical recurrences. methods: a 78-year-old woman presented to us with reflux symptoms, dysphagia, dyspnea and tachyarrhythmias. she reported a weight loss of 14 kg in the last 6 months. ct scan and esophagogastroscopy showed a large paraesophageal hernia, measuring approximately 10 cm, with intrathoracic uds. results: we performed a laparoscopic hernia repair with dissection of the hernia sac from the posterior mediastinum, tension-free intra-abdominal reposition of the stomach and distal esophagus, and hiatoplasty with biologic mesh (tutomesh™) augmentation. finally, a toupet fundoplication was performed to recreate the antireflux valve. as a consequence of pronounced adhesions, a lesion of the muscularis of the distal esophagus occurred during surgery. the esophageal mucosa was unaffected. we treated the lesion laparoscopically with a simple interrupted suture (vicryl™ 3-0). an intraoperative patent blue v leak test did not identify any leaks. the recovery was uneventful. the patient was discharged on day 12 after surgery and was free of symptoms at her follow-ups. conclusions: patients with large hiatal hernias and uds can be treated successfully and effectively with laparoscopic mesh repair. intraoperative complications can be handled laparoscopically in a safe manner. hiatoplasty followed by nissen fundoplication represents the gold standard treatment of hiatal hernia; however, a high hernia recurrence rate has been reported, especially in the case of giant hiatal defects. 
in order to reduce the recurrence rate, various techniques of hiatoplasty reinforcement have been implemented, such as the apposition of prosthetic materials. however, various mesh complications have been reported in the literature, such as esophageal and proximal stomach erosion and late esophageal perforation after ischemia, especially in the case of synthetic non-absorbable materials. in this video we show the repair of a huge hiatal hernia by hiatoplasty and positioning of an absorbable biosynthetic 'bio-a' mesh, which is replaced by connective soft tissue over six months, thereby decreasing complications and the recurrence rate. as usual, we start with the mobilization of the gastric fundus and isolation of the diaphragmatic pillars by sectioning the adhesions between them and the herniated viscera. we then proceed with intra-abdominal esophagus mobilization and higher mediastinal dissection in order to obtain an adequate esophageal length. after exposure of the hiatus, we approximate the pillars with non-absorbable stitches and reinforce the hiatus by positioning a 'u'-shaped bio-a mesh over the esophagus, simply fixed to the crura with single stitches. we then perform a nissen fundoplication. the use of this prosthetic material appears to be cost-effective; the first series in the literature show a very low complication rate and fewer recurrences of hiatal hernia than hiatoplasty without reinforcement. this video demonstrates the technique of laparoscopic identification and complete dissection of the sac of a totally intrathoracic stomach. identification of the sac is performed centrally by scoring the peritoneum overlying the arch of the diaphragmatic hiatus. the inferior edge is retracted and a series of blunt dissections is undertaken. once the areolar tissue of the avascular plane is visualized, a raytec sponge is placed and advanced cephalad to expose the extra-saccular space. this raytec is kept in place to allow carbon dioxide to infiltrate and further delineate the anatomy. 
we then proceed with dissection at the left crus and right crus. complete reduction of the stomach can be achieved without grasping it. this can be performed by applying caudal retraction on the sac. this maneuver exposes the plane outside the sac. this space is divided into two compartments (right and left) separated by a septum, which indicates the proximal extent of the sac. this plane is avascular and blunt dissection can easily free the hernia sac from the mediastinal structures and the pleura. this was followed by excision of the sac and hiatal repair, which was reinforced with bioabsorbable mesh. the proximal short gastric vessels were then divided and a standard toupet fundoplication was performed. v257-upper gi-reflux-achalasia introduction-objectives: mixed giant hiatus hernias with a paraesophageal component are hernias of difficult surgical correction; the laparoscopic approach to these implies greater experience of the surgical team, given the complexity involved in their management, recurrence being a complication that not even the use of meshes in this surgery has been able to avoid. in very high-risk patients, anterior gastropexy (boerema) may be an alternative to treat or alleviate the symptoms of these large hiatal hernias, although the frequency of recurrence with this technique is very high. material and methods: the objective of the following case is to present a patient with a type 1 hiatal hernia, barrett's esophagus without dysplasia and situs inversus. method: a laparoscopic partial fundoplication was performed on a 47-year-old male patient with a history of situs inversus totalis, who was seen in our general surgery service presenting a clinical history of retrosternal pain, heartburn and regurgitation. an endoscopy was performed, which reported a type 1 hiatal hernia and incompetence of the lower esophageal sphincter, with squamocolumnar junction biopsies reporting barrett's esophagus without dysplasia. 
results: surgery was scheduled for a toupet-type fundoplication; there were lax adhesions from the omentum to the wall, a lax hiatus and the already known situs inversus. partial fundoplication was performed, with the usual technique adapted for the anatomical structural abnormality. the postoperative course was without complications; oral intake was initiated the next day, a penrose-type drain was placed, draining only a small amount of serohematic fluid, the diet was progressed and the patient was discharged on the third postoperative day without complications. conclusion: situs inversus is the mirror image of situs solitus; its subdivision situs inversus totalis, the most usual form, is characterized by the mirror location of the intra-abdominal and thoracic organs, including the heart. in the case presented, the patient was referred with situs inversus and barrett's esophagus, and laparoscopic fundoplication was performed. the laparoscopic approach is the gold standard of surgical management in benign esophageal pathology. gastroesophageal reflux disease (gerd) is a condition that reduces quality of life and can cause disorders associated with acid reflux, such as bronchial asthma, barrett's esophagus and esophageal adenocarcinoma. gerd is often caused by an existing hiatal hernia. in rare instances gerd is associated with type iv hiatal hernias, in which part of the stomach and other organs migrate into the mediastinum. nowadays this condition can cause problems for some surgeons. the patient was a 64-year-old man. he was diagnosed with a hiatal hernia in 1992. the condition had an asymptomatic course until 2003. the patient has taken omeprazole 20 mg for 15 years. he started experiencing chest pain on inhalation and dyspnea in june 2018. co-morbidities were: arterial hypertension, chronic obstructive pulmonary disease (copd) and obesity (body mass index 43.9 kg/m²). the posterior mediastinum contained part of the stomach, large bowel and small bowel according to chest roentgenography. 
the esophageal hiatus measured 145 × 98 mm. in our clinical center, laparoscopic removal of the hernia, cruroraphy, mesh repair of the esophageal hiatus and nissen fundoplication were performed in july 2018. during the surgery, the stomach, part of the small intestine, the greater omentum and the transverse colon were relocated into the abdominal cavity. after cruroraphy, repair of the esophageal hiatus with a prolene mesh was performed. the patient was in intensive care for 9 h. total enteral feeding was initiated on the second day. the patient was discharged within 5 days after surgery. the patient was re-examined by a surgeon in september 2018; there were no signs of recurrence. this case report shows the efficiency and feasibility of the laparoscopic approach to the treatment of gerd associated with a large defect in the phrenoesophageal membrane, allowing other organs, such as the colon, greater omentum and small intestine, to enter the hernia sac. aims: the authors present a video with their standardized laparoscopic hiatal hernia repair and anti-reflux nissen procedure, but using 3 mm instruments and a 5 mm camera approach. methods: a 42-year-old female patient with a bmi of 29 presented with a 20-year history of symptomatic gastroesophageal reflux disease (gerd). manometry and ph-metry showed no esophageal motor disorders and a severe acid pattern with a demeester score of 96.35. the panendoscopy study showed a 3 cm hiatal hernia and a los angeles grade 2 esophagitis. she decided not to continue with ppi treatment. a laparoscopic hiatal hernia repair and standardized nissen procedure was performed using 3 mm instruments and a 5 mm camera. the case and technical details are shown in the video. results: the patient was discharged from hospital within a period of 6 h with a score of 3 on a visual analogue (eva) acute pain scale. in a 2-year follow-up, there has been no anatomical or clinical recurrence. no chronic dysphagia, anatomical recurrence or abdominal wall complications have been reported within this period of time. 
conclusions: depending on patient characteristics, anatomical factors and the surgeon's minimally invasive experience, standardized minimally invasive hiatal hernia and anti-reflux repair using 3 mm instruments could be a safe and feasible option. more studies are needed in order to standardize this approach. background and aims: parastomal herniation is a very frequent complication in stoma patients. in isolated parastomal hernias (type i or iii)* a laparoscopic sugarbaker repair with intraperitoneal mesh is our preferred technique. if a concomitant incisional hernia is present (type ii or iii)* we currently opt for a retromuscular mesh repair as described by pauli. we adopted a minimally invasive approach using the robotic platform. methods: we performed a robot-assisted laparoscopic pauli repair for a wide incisional hernia needing component separation and a small parastomal hernia (type ii)*. a non-slit retromuscular mesh was placed after a bilateral tar (transversus abdominis release) and lateralization of the colon. results: the operation was performed robot-assisted laparoscopically with 2 × 3 trocars without the need to convert to an open procedure. the procedure lasted 300 min. the patient was discharged from the hospital on post-operative day three. one month after the repair our patient presented with a parastomal skin infection, for which she received surgical cleaning and wound dressing. a ct scan three months postoperatively showed good positioning of the mesh with a reinforced abdominal wall. conclusions: the modified sugarbaker repair in parastomal herniation is feasible following a pauli approach (retromuscular mesh positioning) completely laparoscopically, albeit robotically assisted. short-term follow-up is promising. long-term postoperative follow-up in these patients is needed. 
methods: case presentation. results: a 67-year-old caucasian female patient was admitted as an emergency due to a progressive alteration of her physical condition and weight loss, caused by intermittent and intense epigastric pain and vomiting. symptoms had occurred several years prior to admission, but used to be mild and rare. during the last months, the described episodes became more intense, lasted longer and occurred more frequently. percutaneous ultrasound raised the suspicion, while thoraco-abdominal ct revealed an enormous intrathoracic morgagni hernia and gastric volvulus inside the hernial sack. after careful preoperative preparation (restoration of nutritional and hydric imbalances, amelioration of respiratory parameters), laparoscopy confirmed the diagnosis; we gently reintegrated the herniated organs from the thoracic hernia into the abdominal cavity (small bowel, greater omentum, transverse dolichocolon and twisted stomach); a laparoscopic exploration of the hernial cavity was followed by thorough hemostasis. due to the patient's frailty, we decided to leave the hernial sack in situ. direct surgical repair of the defect was done using extracorporeally tied separate sutures through separate skin incisions. the postoperative outcome was completely uneventful; the patient was discharged on postoperative day 5. barium contrast at 3 months follow-up showed a slight esophageal dyskinesia but normal gastroduodenal passage; due to aerocolia, the normal position of the transverse colon could be confirmed as well; no signs of recurrence were detected. conclusions: although very rare, a morgagni hernia should be suspected and included in the differential diagnosis of patients with dyspeptic syndrome and epigastric/thoracic symptoms. thoraco-abdominal ct scanning is the imaging technique of choice. the laparoscopic approach has become the gold standard, being far more advantageous compared to laparotomy or thoracoscopy. 
direct suture with extracorporeally tied separate sutures through separate skin incisions was chosen for being less time-consuming; for the same reasons, the hernial sack was left in situ, with no consequences so far. aims: the giant fibrovascular polyp is one of the rarest benign lesions of the oesophagus. the most common locations of origin are the upper oesophagus or crico-pharyngeal region. the lesion is more common in the elderly population, particularly men. symptoms include dysphagia and odynophagia. management options include surgical resection or endoscopic removal with an endoloop. we aim to demonstrate the optimal management of these rare lesions using an endoscopic approach. method: we demonstrate the management of a 75-year-old patient with a giant oesophageal polyp, excised by minimally invasive endoscopic resection. the patient was placed in the supine position and tracheal intubation was performed under general anaesthetic before an endoscopic approach was taken. the oesophago-duodenoscopy visualised a crescentic lumen due to an intraluminal mass occupying two thirds of the oesophageal diameter. the procedure was a multidisciplinary approach with the upper gi surgical and gastroenterology consultants. the polyp stalk was located in the oesophagus at the level of t2 and an endoloop was manipulated to surround the polyp. the polyp was then separated from the stalk by cauterisation and resected from the patient. the stalk was then injected with adrenaline to prevent haemorrhage. results: the excised specimen was a 15 cm polyp with a stalk originating from the t2-t3 level. histology confirmed diagnostic suspicions of a benign pedunculated fibrovascular polyp. the polyp was covered by non-keratinising squamous epithelium with a discoloured tip demonstrating ulceration. there was no evidence of dysplasia or a neoplastic process. 
the video shows a case of a laterally spreading tumour of the rectum with preoperative benign histology, paris classification 0-is g (granular type), ut0n0 eus stage, kudo type iv, nice type 2. the neoplasm measured 6 × 7 cm and extended from 6 to 12 cm from the anal verge, mainly located on the posterior wall. according to our local policy the indication was a transanal full-thickness excision. this was performed with the medrobotics flex® robotic system, used here for the first time outside the united states. the system technology utilizes an articulated multi-linked scope that can be steered along nonlinear, circuitous paths in a way that is not possible with traditional, straight scopes. the maneuverability of the scope is derived from its numerous mechanical linkages with concentric mechanisms. this enables surgeons to perform minimally invasive procedures in places that were previously difficult, or impossible, to reach. with the flex® robotic system, surgeons can operate through a single access site and direct the scope to the surgical target. once positioned, the scope can become rigid, forming a stable surgical platform from which the surgeon can pass flexible surgical instruments. the system includes on-board 3d hd visualization. the flex® robotic system contains two working channels to accept a number of different surgical and interventional instruments, including monopolar and bipolar electrodes, scissors and graspers for tissue manipulation. the video shows the introduction of the dedicated rectoscope, the connection of the flexible robot, and the way to operate the device performing a full-thickness excision, including suturing of the rectal defect by means of two running sutures with a v-loc 3/0 thread. while illustrating the technique the authors comment on the pros and cons of the use of the device. surg endosc (2019) the video shows the use of a barbed suture of a novel concept. 
this first prototype thread, developed together with assut europe (roma, italy), is characterised by a bidirectional 3/0 barbed suture, 24 cm in overall length, with two needles of 22 mm diameter. in order to fix the thread to the tissue, on the exact midline of the thread a small (2 mm) ball of the same material as the thread is fused with heat. this button, like the entire thread, is made of polydioxanone, a self-retaining monofilament material for long-term absorption. this small modification offers a considerable advance in endorectal surgery, making transverse wound closures easier than ever before. the first stitch is placed on the transverse midline of the rectal wall defect, in this way approximating the proximal and distal edges. the button keeps the thread under tension, allowing the completion of a running suture towards one of the two lateral ends of the wound. when the first end is reached the needle is cut and the other needle is grabbed in order to perform the other half of the running suture, keeping the right tension on the thread with no risk of lumen stenosis. the two lateral ends of the suture are self-blocked by passing them through the last stitch. there is no need for clips, knotting or buttonholes to pass through. while illustrating the technique the authors comment on the pros and cons of the use of the device. background: the usefulness of robotic surgery has been largely reported; however, there are not enough reports on the treatment of gists. the aim of this study is to report a single-center experience of robotic resection of gastric gists. gastrointestinal stromal tumors (gists) are the most common mesenchymal tumors (overall incidence 1-3% of all gastrointestinal malignant tumors). they are most frequently located in the stomach. complete surgical resection still remains crucial for patients with gists. in cases of difficult tumor localization (e.g. 
posterior wall, his angle) and bigger tumors (more than 5 cm in diameter), there are still apparent difficulties and limitations in conducting laparoscopic resection. robotic assistance provides wide movements of its arms and is recognized as particularly useful in cases of difficult tumor localization, especially for tumors positioned on the posterior side of the stomach wall or around the lesser curvature. methods: six consecutive patients were analyzed, focusing on the safety (conversion/complication rate; hospital stay), oncological (resection margin, recurrence rate) and feasibility (operative time, technical tips and tricks) profile of the robotic approach. results: the mean operative time was 173 ± 39 min and the mean hospital stay 3 ± 1 days. the conversion rate was nil. no intra- or post-operative (mean follow-up 12 months) complications were registered. in all cases, the resections were classified as r0. conclusions: our experience supports the usefulness of the robotic system in cases of gists located in anatomically difficult gastric portions, such as the lesser curvature or the fundus close to the gej, confirming the excellent oncological outcomes (100% r0) and the encouraging safety profile (0% complications). regarding the operative time, our data are similar to or better than those reported in the previous literature and did not differ from the most recent published data for laparoscopic gastric resection. the anatomical reconstruction performed by precise hand-sewn suture instead of the use of mechanical staplers represents a real advantage of robotic resection. in our series, no patients suffered from stenosis or leakage after the operation. background: we describe the use of a different suture from those historically used in a widely adopted surgical technique, the nissen fundoplication, for the treatment of pathological gastroesophageal reflux or symptomatic hiatal hernias. 
in our unit we have implemented the use of a 2/0 non-absorbable barbed suture to close the diaphragmatic pillars and to perform the 360° fundoplication with the same continuous suture. aim: the objective of using the non-absorbable barbed suture in the nissen fundoplication is to shorten surgical times, which would achieve benefits for the patient and the institution, increasing the number of ambulatory patients and the number of patients who can be operated on in the same surgical session. objective: the goal of the present study was to demonstrate that intraoperative icg fluorescent imaging is a safe technique that can be used in laparoscopy, establishing the exact location of the lymphocele and reducing intraoperative risks. method: fifty milligrams of icg dissolved in 20 ml of saline solution was injected via a percutaneous drain placed into the lymphocele to decompress the transplanted kidneys 2 weeks before a laparoscopic lymphocele marsupialization procedure. results: during the first exploratory laparoscopy, in the flank and right iliac fossa, near the 2 renal grafts, fluorescence was identified in 3 raised areas that were the internal side of the lymphocele lobes. the lymphocele wall was dissected and 300 ml of serous fluid was aspirated after puncture. a 5 cm breach was then made in the cyst wall using the ultracision harmonic scalpel (ethicon us). afterwards, a pedicle of the omentum was interposed in the lymphocele core and fixed by 2 stitches. conclusions: laparoscopic surgery seems to be the preferred surgical option for the treatment of primary symptomatic lymphocele after kidney transplantation. intraoperative icg fluorescent imaging is a safe technique to establish the exact location of the lymphocele and reduces the risk of damaging urinary structures during surgery. conclusions: this method proved safe and risk-free, easily reproducible and without the need for a different toolkit than that of a modern operating theatre. 
the preliminary analysis shows a strong correlation between the perfusion data obtained so far and the early outcome of the graft, thus opening the way to further analysis aimed at a future better management of post-operative immunosuppressant and support therapy. these results are quite encouraging, even if our study is at an initial stage. conclusions: results show a persistent dread across specialities regarding ai. rather than being seen as a threat, ai should be embraced by clinicians, as it will ease the ever-increasing daily workload they face. this will enable clinicians to focus their skills on patient-centred activities, interventional procedures and development. despite current regulatory hurdles, ai implementation in medicine is unavoidable. this coming revolution presents a unique opportunity for endoscopists and radiologists to refocus their expertise in novel areas. project description: from february to july 2017 a three-armed proof-of-concept study was conducted at three hospitals, in three groups of patients: recurrent ventral hernia, aortic aneurysm, and healthy controls. patients were measured once at the outpatient clinic using an electronic nose based on three metal oxide sensors. measurement data were compressed to low-dimensional vectors using a tucker-3-like algorithm, and used to train an artificial neural network (ann) to provide a classification between patients (+1) and healthy controls (-1). preliminary results: a total of 64 patients (hernia n = 29, aneurysm n = 35) and 37 controls were included in the study. based on receiver operating characteristic (roc) curve analysis, the ann could differentiate between recurrent hernia patients and controls with the following details: area under the curve (auc) 0.74, sensitivity 0.79, specificity 0.65. aortic aneurysm patients and healthy controls could be differentiated with an auc of 0.84, a sensitivity of 0.83, and a specificity of 0.81. 
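the roc figures of this kind (auc, sensitivity, specificity) follow directly from the classifier's scores and the ±1 labels. as a minimal illustrative sketch in python, using toy data rather than the study's measurements:

```python
def auc(labels, scores):
    """area under the roc curve, computed as the probability that a
    randomly chosen positive case (+1) scores higher than a randomly
    chosen negative case (-1); ties count as 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == -1]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold):
    """sensitivity = tp / (tp + fn), specificity = tn / (tn + fp),
    calling a case positive when its score reaches the threshold."""
    tp = sum(l == 1 and s >= threshold for l, s in zip(labels, scores))
    fn = sum(l == 1 and s < threshold for l, s in zip(labels, scores))
    tn = sum(l == -1 and s < threshold for l, s in zip(labels, scores))
    fp = sum(l == -1 and s >= threshold for l, s in zip(labels, scores))
    return tp / (tp + fn), tn / (tn + fp)

# toy example: 3 patients (+1) and 2 controls (-1)
labels = [1, 1, 1, -1, -1]
scores = [0.9, 0.8, 0.4, 0.5, 0.3]
print(auc(labels, scores))                            # 5/6 ≈ 0.833
print(sensitivity_specificity(labels, scores, 0.45))  # (0.667, 0.5)
```

the auc is threshold-independent, whereas the sensitivity/specificity pair quoted in such abstracts corresponds to one chosen operating point on the roc curve.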
the aeonose® electronic nose can reliably distinguish patients with weak collagen (recurrent hernia and aortic aneurysm patients) from healthy controls. validation of these results in a prospective cohort study is required before clinical application of the device. background: laparoscopic and endoscopic cooperative surgery (lecs) has been performed for gastric submucosal tumors (smt) and duodenal tumors. although minimally invasive surgery using thoracoscopy has been the usual approach for esophageal smt, a treatment method combining thoracoscopy and endoscopy has not been established. in addition, submucosal endoscopic tumor resection (set) for esophageal smt has been reported using the submucosal tunnel technique. aim and project description: we planned to resect a large esophageal smt located in the upper or middle thoracic esophagus by a combined endoscopic and thoracoscopic approach, because set is only recommended for tumors up to 40 mm in size owing to the limited submucosal space available, and the left thoracic approach is restricted by the aortic arch and the trachea. preliminary results (case presentation): a 47-year-old woman was diagnosed with a benign schwannoma of 60 mm length originating from either the submucosal or the muscular layer of the middle thoracic esophagus by endoscopic ultrasonography, enhanced computed tomography, and ultrasound-guided fine-needle aspiration biopsy. since the tumor had grown over the previous 5 years and she had a symptom of dysphagia, we planned to resect it by a combined endoscopic and thoracoscopic approach. on endoscopy in the supine position, a submucosal tunnel was created 40 mm proximal to the cranial edge of the tumor, and only its oral end was dissected from the mucosal and muscular layers. this was followed by resection of the entire tumor by a left-sided thoracoscopic procedure in the prone position. endoscopic mucosal closure was achieved using clips. no postoperative complications were observed. 
large benign esophageal tumors can be safely excised with minimally invasive surgery using a combination of endoscopy and thoracoscopy. background: haemorrhage remains a major cause of morbidity and death in all surgical specialties. in laparoscopic surgery, relatively small amounts of blood can obscure the view of the operative field and increase the risk of injury to surrounding structures. excessive bleeding often leads to longer hospital stays, increased healthcare service utilisation, and higher healthcare costs, among other negative consequences. aim: the aim of this study was to analyse the feasibility of purastat®, a new synthetic haemostatic device made of self-assembling peptides, in laparoscopic colorectal surgery. project description: this was a prospective observational non-randomised study. consecutive patients undergoing laparoscopic colorectal surgery were enrolled. informed consent was obtained from all patients. the inclusion criterion was the need to employ a secondary method of haemostasis when traditional methods, such as conventional pressure or the use of energy devices to control the bleeding, were either insufficient or not recommended/appropriate due to proximity to the ureter, pelvic/sacral veins or other important structures. preliminary results: twenty patients were enrolled (12 males (60%) and 8 (40%) females). mean age was 61 years (± 2.4 years). all patients were undergoing elective laparoscopic colorectal cancer surgery (4 right hemicolectomy, 5 sigmoid colectomy, 11 anterior resection). we utilised 3 ml of purastat® in all patients. the mean area of application was 5.35 cm² (± 2.25 cm²), with the amount of purastat® needed per square centimetre being 0.56 ml. the mean time to apply the product was 40 s (± 17 s), whereas the mean time to achieve haemostasis was 17.5 s (± 3.5 s). there were no postoperative complications in this cohort of 20 patients. mean operative time overall was 185 min (± 45.2 min). 
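the per-area dosing quoted above is consistent with the other figures reported: 3 ml applied over a mean area of 5.35 cm² gives roughly 0.56 ml per square centimetre. a one-line arithmetic check in python:

```python
# arithmetic check of the reported purastat dosing figures
volume_ml = 3.0     # mean volume of haemostatic agent applied per patient
area_cm2 = 5.35     # mean area of application (cm²)
dose_per_cm2 = volume_ml / area_cm2
print(round(dose_per_cm2, 2))  # 0.56, matching the reported value
```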
none of the patients experienced delayed post-operative bleeding and the mean hospital stay was 5 days (± 3.4). in conclusion, in line with the purpose of this preliminary study, we have demonstrated that purastat® can be easily used in laparoscopic surgery and is a safe, effective haemostatic agent. this is a feasibility study and additional controlled studies would be useful in the future. during the last three years the laparoscopic method of surgery in the presence of common bile duct stones was carried out. after performance of intraoperative cholangiography and visualization of stones in the common bile duct, laparoscopic choledochotomy and bile duct stone extraction was undertaken in 56 patients under flexible choledochoscopy control. laparoscopic cholecystectomy was then performed in all patients with gallbladder stones. results: laparoendoscopic intervention on the common bile duct was successfully performed in 48 patients (85.7%) and the operation was completed by common bile duct drainage with a kehr tube. in 8 patients, conversion to open surgery was carried out due to technical difficulties. postoperative morbidity in the form of bile leakage was diagnosed in 9 patients (16.1%). in three cases the leaks stopped spontaneously 5-6 days after the operation. 6 patients were reoperated on and additional suturing of the choledochotomy was carried out. postoperative mortality was 2.4%. the death of a 92-year-old patient was caused by acute cardiovascular failure. institute for image guided surgery, ihu-strasbourg, strasbourg, france; 2 hepato-digestif, nouvel hôpital civil, strasbourg, france; 3 image guided surgery, nouvel hôpital civil, strasbourg, france; 4 gastroenterology, chu-besancon, besancon, france; 5 gastroenterology, chu-lyon, lyon, france; 6 gastroenterology, clinic de trocadero, paris, france. background: eus is difficult to learn and has a steep learning curve. therapeutic eus (teus) is even more so. 
Simulators, ex vivo models and phantoms are the most common current teaching modalities but are felt by many to be unsatisfactory for high-level training. Aim: We designed a training curriculum for TEUS that uses high-fidelity animal models, and we present a validation study performed by 4 TEUS experts. Project description: 3 different simulated pathologies were created in each of 9 acute pigs. 4 TEUS experts performed 15 therapeutic procedures in two or more animals over two days. Each intervention was evaluated simultaneously, using a structured survey, by a non-expert observer. Data included demographics and procedure details as well as Likert-scale evaluation of the quality, realism and educational utility of the simulations. A global evaluation of the experience was captured from the experts as written comments. All data were concurrently registered and subsequently analysed by two blinded surgical educators. Methodology: Three types of models were created using surgical access: (1) tumors (injection of 4 types of hydrogel); (2) retrogastric collections (5-7 cm long intestinal loops filled with oatmeal, oil-water and gel); (3) obstructions (bile duct and ureteral ligations 2 days prior to the experiment). Gastric, pancreatic and liver tumor models were used for FNA and FNB practice. Retrogastric fluid collections and choledochal/ureteral obstructions were used for cystgastrostomy, hepaticogastrostomy, gallbladder drainage and kidney drainage. Results: Experts' ages: 45-63; median intervention time 22 min; a total of 60 interventions were evaluated. Overall quality of experience: 37 (68%) ranked 8-10 (excellent), 14 (27%) ranked 4-7 (good), 3 (5%) ranked 1-3 (poor); 54/60 procedures were successfully completed. Models were rated good to excellent quality (7-10) in 42 (70%) and poor quality in 8 (13%). For 17% (10) of the interventions, the model was considered not good enough to be repeated (solid retrogastric tumor and peripheral hepatic lesion).
Conclusion: High-fidelity live animal models with simulated pathologies are considered excellent training tools by experts and may provide a better learning experience for TEUS. In our short videos, we present scenarios in which this technology helped: to distinguish a significantly dilated cystic duct from the CBD, to identify an anterior cystic artery in the context of acute inflammation, to identify a duct of Luschka, and to exclude a leak after Endoloop placement on the cystic duct. Intraoperative augmented visualisation of biliary anatomy with indocyanine green cholangiography is an essential technological tool with the potential to extend the 72-hour window of safety for emergency cholecystectomies, with significant logistical benefits. Introduction: Endoscopic resection of subcardial polyps has its limitations, especially when it is necessary to resect the entire gastric wall. Uniportal intragastric surgery is a good alternative for the exeresis of subcardial premalignant lesions with gastric preservation. Patients and method: We present a video with two cases. Technique: We perform a laparoscopy to explore the entire abdominal cavity, then we open a 1.5 cm hole in the greater curvature; a 2 cm incision is made in the left hypochondrium and the uniportal device is placed inside the stomach. We inject serum into the submucosa with an endoscopic needle. When the submucosa is completely separated from the muscular layer, a submucosal exeresis can be made; when there is not a complete separation after injection, we perform a full-thickness wall resection with a 1 cm circular margin. If a complete wall resection is made, we close the defect with a barbed suture. Results: Case 1: a 45-year-old male with a 1.5 cm subcardial polyp; the preoperative biopsy was reported as severe dysplasia. During laparoscopy we saw a previously unknown lesion in the greater curvature that looked like a GIST. We placed the uniportal device intragastrically and proceeded to submucosal serum injection.
As the submucosal layer was completely separated from the muscular layer, we performed a submucosal exeresis. We closed the gastrotomy and resected the greater-curvature lesion with an endostapler. The pathological analysis confirmed severe dysplasia in the subcardial lesion and a GIST in the greater-curvature lesion. The patient was discharged with no complications after 2 days. Case 2: a 58-year-old male with a 2 cm subcardial polyp. Preoperative echoendoscopy suggested muscular-layer infiltration, but only severe dysplasia was found on the previous biopsy. Laparoscopy did not find further lesions, and the uniportal intragastric device was placed. A complete wall resection was made, and the defect was closed with a manual barbed suture. The pathologist confirmed severe dysplasia with unaffected margins. The patient was discharged with no complications after 4 days. Conclusion: Uniportal intragastric surgery is feasible and safe and may be useful for subcardial premalignant lesions when endoscopic resection is very difficult or not feasible. Introduction: The role of ICG in bariatric surgery is still unclear. Knowing the extent of perfusion of the gastric pouch could be of great interest in revisional surgery, but the question remains: is current ICG technology reliable enough to support intraoperative decisions in bariatric surgery? Methods: We checked tissue perfusion with ICG fluorescence in several cases of primary and revisional bariatric surgery. A solution of 0.1 mg/kg was injected intravenously and ICG fluorescence imaging was performed. We looked for correct staining of the entire gastric pouch and the intestinal loop, trying to rule out areas of tissue ischemia. Results: The 15 cases in which the test was performed showed a minimal delay of 10-15 s between staining of the intestinal loop and of the pouch. Correct staining was observed in all but one case, shown in the video.
This is the case of a 46-year-old male patient who had been operated on 8 years earlier in another center, where he underwent a sleeve gastrectomy. We evaluated the patient for persistent gastroesophageal reflux of 5 years' evolution with esophagitis. We offered revisional surgery to perform a gastric bypass and hiatal closure. The fundus was dilated, so a funduplasty was performed instead of using an endostapler on the vertical side of the pouch. A hand-sewn gastric bypass anastomosis was performed. When the ICG test was carried out, a corner of the pouch (an area of 1.5 cm) did not stain green. The decision was either to resect that part and redo the anastomosis, or to wait and see. It was decided not to resect, and the patient was discharged two days later with no complications and a good outcome at 12 months of follow-up. Conclusion: ICG fluorescence may be useful in bariatric surgery in the future, but more evidence of its usefulness in making intraoperative decisions is needed. Background: Lymph node status is one of the key prognostic factors in patients with colorectal cancer and remains the most important selection criterion for adjuvant chemotherapy. It is believed that at least 30% of node-negative patients will suffer disease recurrence within the first 5 years after surgery; this may be due to understaging of lymph node status. Sentinel lymph node mapping is widely used for staging of breast cancer and melanoma, with injection of Tc-99 colloid and isosulfan blue (IB). Indocyanine green (ICG) fluorescence guidance, however, is a new technical approach to this issue, with promising results for the detection of aberrant lymphatic drainage outside of the planned resection. ICG lymphography has the advantage of offering good visualization of the lymphatic channels, but there are problems in identifying the lymph nodes.
Aim: The objective of this experimental study is to investigate the possibility of detecting sentinel lymph nodes after injection of different indocyanine green solutions into the subserosal colonic layer in the pig. Project description: Twelve female Large White pigs were operated on with a laparoscopic approach and the SPIES optical filter (Karl Storz, Germany). Indocyanine green was injected into the subserosa of the colonic wall (1 ml at 2 points). Lymphatic flow was observed at 1, 3, 5, 10, 15 and 20 min, looking for migration of the ICG along the lymphatic channels and its uptake in the sentinel nodes. Preliminary results: Identification of the sentinel nodes is very difficult with the ICG-sterile water solution; with this technique we can see the lymphatic channels but not the lymph nodes. The addition of 5% human albumin as a transporter of the ICG is very helpful for correct identification of the lymphatic channels at 5, 10 and 15 min and correct visualization of the lymph nodes at 20 min after bowel injection. The addition of other transporters, such as dextran solutions, may also be helpful, but the time to correct visualization is longer. There was a significant difference among the three groups as regards postoperative LES pressure and postoperative pH-metry. The incidence of persistent dysphagia was significantly higher in group I. Postoperative GERD symptoms were significantly higher in group III (23.3%, p < 0.0001). Recurrent achalasia was significantly more frequent in group I (11 patients, 15.9%) than in group II (8 patients, 7.1%) and group III (nil) (p < 0.02). In conclusion: a longer myotomy on the gastric side (> 2.5 cm) ensures complete division of the LES with better outcomes in terms of resolution of dysphagia but may be associated with more postoperative GERD. Therefore, a myotomy length of 1.5 to 2.5 cm on the gastric side provides a balance between relief of dysphagia and development of postoperative GERD.
A CT scan of his abdomen revealed a large groin hernia with signs of small bowel obstruction and collapsed distal bowel. Emergency theatre was organised for this patient, with anaesthetic assessment prior to surgery. The initial plan of local exploration, with the possibility of a small resection, was changed once he was under full anaesthesia with muscle relaxation. His abdominal girth provided an opportunity to utilise laparoscopic intervention as the initial approach. Laparoscopy with a 0-degree lens revealed moderately distended loops of small bowel and a large omental mass, along with a loop of small bowel incarcerated in a right direct inguinal hernia. Background: Robotically assisted surgery is a rapidly developing modality of minimally invasive surgery with proven advantages in the management of cancer. Despite its increasing prevalence, there is still an ongoing debate regarding its future role in colorectal surgery. While prospective randomised multicentre studies provide research evidence for its potential efficacy, an assessment of its effectiveness and realistic outcomes in everyday clinical practice can add an important perspective to this discussion. The International Robotic Colorectal Registry will allow the international robotic colorectal experience to be compiled and pooled. Aims: The aim of the International Robotic Colorectal Registry is to monitor the safety and outcomes, as well as the quality of specimens, of robotically assisted colorectal surgery for malignant and benign diseases of the colon and rectum. The primary endpoint is composite oncological failure. The secondary endpoints include anastomotic leak, resection margin involvement, conversion rate, operative time, postoperative 30-day morbidity and mortality, long-term oncological outcomes, quality of life, functional outcomes and cost-effectiveness. Project description: The International Robotic Colorectal Registry is a multicentre, web-based, online secure database.
The registry has been awarded ethical approval by a relevant national committee. All surgeons performing robotic or robotically assisted surgery are invited to participate. The data collected include patient demographics, cancer characteristics, operative details, histology of the specimen, wound healing, postoperative therapy, readmission, quality of life and functional (bowel, urinary and sexual) outcomes. All sensitive patient information is encrypted before its introduction into the database. Preliminary results: So far, twenty robotic colorectal centres have joined the International Robotic Colorectal Registry. The preliminary results will be published once 600 patients have been enrolled. Univariate and multivariate analyses will be performed to identify possible risk factors for poor outcome. The possibility of recording open, laparoscopic or other minimally invasive colorectal procedures will facilitate comparison of the outcomes of robotically assisted surgery and other modalities. The registry will also allow each enrolled surgeon to monitor their skill progression and outcomes over time. Results will be published in the surgical literature and presented internationally. Background: Gastric outlet obstruction (GOO) due to benign strictures is an uncommon surgical entity today. This paucity relates to the decrease in its aetiological factors in the modern era as well as to advances in both prevention and medical and endoscopic treatment of the condition. The most common cause, peptic ulcer disease, has been subdued for decades with quality acid-control medications; advances in gastroscopic dilatation have further reduced the frequency of cases reaching surgical intervention. Aim: This presentation gives an update on the standing of this pathology and its surgical management today. It will also shed light on our early experience with this condition at the Royal Hospital of Muscat in Oman.
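The abstract states only that sensitive patient information is encrypted before entry into the database. One common way to implement that kind of protection in a web registry is to pseudonymize direct identifiers before storage; a minimal sketch of that idea (the keyed-hash approach, the `pseudonymize` helper and the field names are illustrative assumptions, not the registry's actual scheme):

```python
import hashlib
import hmac

# Registry-wide secret key; in practice this would live in a key store,
# never in source code (hard-coded here only for illustration).
SECRET_KEY = b"registry-demo-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {
    "patient_id": pseudonymize("NHS-1234567"),  # hypothetical identifier
    "procedure": "robotic anterior resection",
    "anastomotic_leak": False,
}
# The same input always maps to the same token, so follow-up
# entries for one patient can still be linked without storing the identifier:
assert record["patient_id"] == pseudonymize("NHS-1234567")
```

Keyed hashing is one-sided: the registry can link records, but the token cannot be reversed to the identifier without the key.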
Project description: A case series of all patients with GOO who were surgically managed between 2010 and 2015. Results: There were a total of 16 patients, 10 males and 6 females. The cause of obstruction was peptic ulcer disease in 10, corrosive injury in 2, iatrogenic perforation in 1, and idiopathic hypertrophic stenosis in 2. Emergency presentation was seen in 4. Management included a Jaboulay pyloroplasty in 2, resection in 10 (distal gastrectomy in 9, total gastrectomy in 1), or a bypass (gastrojejunostomy) in 4. In 14 of the above, the procedure was done laparoscopically. Postoperatively, temporary gastric paresis delayed recovery in 5; however, all symptoms resolved in 15. There were no recurrences at a minimum of 2 years of follow-up. In spite of advances in medications and gastroscopic interventions, we still identify this condition within our population. Although infrequent, these cases demand awareness from surgeons, since they can be managed successfully, especially laparoscopically, with minimal morbidity and early recovery. The introduction of advanced laparoscopy to the unit's setup in recent years made this option feasible, with satisfactory and durable outcomes. Background: GISTs of the upper GI tract are found mainly in the stomach (60-70% of cases) and small intestine (30%). Duodenal GISTs, however, comprise a smaller subset, with a frequency of 6 to 21%. The optimal surgical procedure for duodenal GIST is still evolving. Since wide margins and extensive lymphadenectomy are not required, refraining from more radical resections in this area would be a valid option.
Aim: This is a video case report of a patient with a GIST involving the third part of the duodenum treated by laparoscopic lateral duodenectomy and end-to-side Roux-en-Y duodenojejunostomy. Case report: A 55-year-old lady presented with recurrent mid-abdominal postprandial pain with anorexia, nausea and occasional vomiting. An ultrasound showed a well-defined hypoechoic mass of 3 × 2.5 × 2.2 cm in the right para-aortic region. A CT scan defined the mass as retroperitoneal, intimately related to the uncinate process of the pancreas and the third part of the duodenum, with no clear cleavage line between them. An MRI endorsed the diagnosis of a duodenal GIST. She was operated upon through a laparoscopic lateral duodenectomy including the GIST at the third part of the duodenum. A frozen section confirmed clear margins. Reconstruction was done by a Roux-en-Y duodenojejunostomy, with the alimentary limb taken 30 cm from the DJ flexure. She had an uneventful postoperative recovery and was discharged well. The histology confirmed a low-grade GIST, hence no further treatment was needed. At follow-up six months later, she was doing well and gaining weight. Conclusion: The complex anatomy of the pancreatico-duodenal area makes conserving the duodenum, rather than performing a major resection, a challenging option for tumours here. In our case, however, with the disease in the third part being of moderate size, a lateral duodenal wall resection including the mass was possible rather than a segmental resection. This procedure could be an ideal choice for benign, moderate-sized tumours in the third and fourth parts of the duodenum. Background: During laparoscopic colectomy, laparoscopic lymph node dissection and extracorporeal intestinal anastomosis are commonly performed. An umbilical incision of 4-6 cm and wide-range mobilization of the intestinal tract are required for extracorporeal anastomosis. We therefore introduced intracorporeal overlap anastomosis in June 2017 as a minimally invasive treatment.
Here, we report its short-term outcomes. Aim: We retrospectively compared the surgical outcomes of 21 cases of extracorporeal anastomosis and 8 cases of intracorporeal anastomosis, all performed between June 2017 and May 2018. Procedures: After lymph node dissection and sufficient mobilization of the intestinal tract, the proximal and distal intestines were divided perpendicularly to the intestinal tract with a 60-mm linear stapler. The anastomosis was performed after the specimen was extracted through an umbilical incision. The antimesenteric sides, 3 cm from the staple line of one intestinal limb and 7 cm from the staple line of the other, were marked, and the limbs were positioned to bring the antimesenteric sides together. An insertion hole was made in the intestinal tract at the marked site. Side-to-side anastomosis with a linear stapler was performed, and the insertion hole was then closed with a linear stapler after several temporary sutures. Preliminary results: In the extracorporeal anastomosis group, the mean operation time, blood loss, and postoperative stay were 270 min, 127 ml, and 13.2 days, respectively; there were three cases of intraoperative bleeding (14.3%) and two postoperative cases of lymphorrhea (9.5%). In the intracorporeal overlap anastomosis group, the mean operation time, blood loss, and postoperative stay were 284 min, 75 ml, and 12.5 days, respectively; there were no intraoperative complications and only one postoperative case of lymphorrhea (12.5%). Conclusion: Intracorporeal overlap anastomosis in laparoscopic colectomy is safe and feasible and can be used as a minimally invasive treatment. Nowadays, 3D printing is no longer a new technology, but it is developing exponentially; it has been predicted that by 2027, 10% of everything produced will be 3D-printed.
In the medical field this technology shows the same exponential development. First used in orthopedics and maxillofacial surgery, 3D printing is now used in many other fields for different purposes, such as preoperative training models, special surgical instruments, and medical education. Liver surgery is continuously developing, and this is why we need an experimental liver model for training and testing. An ideal experimental liver model should have the consistency of liver, be flexible, give the same ultrasound feedback, and be cheap and easy to reproduce. This is why we developed an experimental liver model made of gelatin, using a simple recipe and a 3D-printed mold created from a human liver CT scan. First, segmentation of the liver was performed. After segmentation, we created the 3D virtual liver model and the negative image of the liver, which was used to create the two pieces of the liver mold, with connections between them and a hole on the top through which to pour the gelatin solution. Two issues shape surgical education: (1) ethical concerns regarding learning and training on real patients and substitutes such as animals, and (2) reconciling time devoted to learning with clinical practice, considering the European working-time directives. Simulation in medical education is, and has been, the preferred route to address both pedagogical needs. Virtual simulation has proven to be a valid tool for training; however, current systems restrict usage to the tasks and modules offered, with no possibility of personalization. We present the Minimally Invasive Surgery Simulator Scenario Editor (MIS-SIM), an environment where users can create, edit and run virtual reality tasks designed for medical training. The environment features an editor allowing users to develop learning tasks, defining their learning objectives and task goals in an easy way.
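The mold pipeline described above (CT segmentation → virtual liver model → negative image → two-piece mold) can be sketched in voxel form: the negative is simply a solid block with the segmented organ carved out. A toy illustration with NumPy boolean masks (the grid size and the spherical stand-in for the liver are invented for the example; a real pipeline works on the segmentation mesh):

```python
import numpy as np

# Toy volume: a 3D boolean mask standing in for the segmented liver.
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
liver = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 <= 12 ** 2

# The mold "negative" is the solid block with the organ carved out.
mold_block = np.ones_like(liver)
negative = mold_block & ~liver

# Split into two halves along the mid-plane, mirroring the two-piece mold.
upper_half, lower_half = negative[:20], negative[20:]

# Sanity checks: organ voxels and negative voxels exactly partition the block.
assert not np.any(negative & liver)
assert np.all(negative | liver)
```

The same subtraction is what a mesh-based tool performs when generating the printable mold from the segmented surface.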
A first proof of concept has been implemented for surgical training, and training activities (demonstrators and short courses) have been carried out at three European sites: Spain, the Netherlands and Hungary. During these training activities, 10 different exercises were created and uploaded to the contents database. The technical skills trained include hand-eye and bimanual coordination, instrument handling and pulling. Preliminary results with 30 users have shown MIS-SIM's training potential, although some functionalities should be made easier. Personalization has been highlighted as the key added value of MIS-SIM with respect to current competitors in the market: the ability for target users to use virtual-reality-based learning tools while remaining in complete control of the learning process. MIS-SIM aspires to break the barrier between VR and medical education by empowering users to create their own tasks. With MIS-SIM, teachers/course creators and learners (healthcare professionals and future healthcare professionals) will benefit from an innovative tool to (1) create personalised medical learning contents tailored to preferred learning styles, allowing the creation of individualized learning paths; (2) improve the efficiency of training by focusing on the training needs of the learners; and (3) share and sell VR-based didactic contents. C. Tiu 1, P. Sánchez-González 2, M. Chmarra 3, D. Gutiérrez 4, C. Guzmán-García 5, L. Sánchez-Peralta 6, G. Wéber 7, F. Sánchez-Margallo 8, B. Pagador 9, J. Dankelman 10. Aims: Currently, surgical training relies largely on the improvement of technology-enhanced learning solutions. The progress of engineering and the diversification of training facilities outside the operating theater will result in an even greater contribution of technology in the future. The main reasons to encourage these changes are the increased efficiency of simulators and, directly, increased patient safety.
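An editor of the kind MIS-SIM describes typically represents a training task as structured data: learning objectives plus measurable goals that the runtime checks against the trainee's metrics. A minimal sketch of such a task definition (the field names, metrics and the `is_passed` rule are assumptions for illustration, not the MIS-SIM format):

```python
from dataclasses import dataclass, field

@dataclass
class TaskGoal:
    metric: str                   # e.g. completion time, error count
    threshold: float              # pass/fail cut-off set by the course creator
    lower_is_better: bool = True  # most MIS metrics are "lower is better"

@dataclass
class TrainingTask:
    name: str
    learning_objectives: list[str] = field(default_factory=list)
    goals: list[TaskGoal] = field(default_factory=list)

    def is_passed(self, results: dict[str, float]) -> bool:
        """A task is passed when every goal's metric meets its threshold."""
        return all(
            (results[g.metric] <= g.threshold) == g.lower_is_better
            for g in self.goals
        )

task = TrainingTask(
    name="bimanual peg transfer",
    learning_objectives=["hand-eye coordination", "bimanual coordination"],
    goals=[TaskGoal("time_s", 120.0), TaskGoal("errors", 2.0)],
)
print(task.is_passed({"time_s": 95.0, "errors": 1.0}))  # True
```

Keeping tasks as plain data rather than code is what makes the personalization the abstract emphasises possible: course creators edit definitions, not the simulator.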
The goal of the EASIER project is to develop a multi-skill online platform for minimally invasive surgical (MIS) procedures, based on common pedagogical principles, with reference value in a multinational space. The platform will allow the connection of external assets (such as simulators) to centralize all training data from residents. This work presents the milestones of the project during its first year. Methods: The consortium's activity started with a knowledge-elicitation process, organizing brainstorming sessions and workshops with experts in MIS and interventional techniques from Spain, Romania and Hungary. This experience led to the formulation of a questionnaire that was implemented online and sent via email to surgeons and residents from the participating countries. Results: The accumulated experience was used to define the pedagogical needs of the platform. These pedagogical needs form the starting point for defining the technical requirements and specifications. Based on them, the design of the platform has been achieved, including its architecture and the communication protocol for connecting external assets, facilitated by the use of state-of-the-art educational standards. Discussion and conclusions: Next steps include the implementation of the EASIER platform, as well as the definition of case studies, selected by the clinical partners in Spain, Romania and Hungary, to demonstrate applications of the platform in cholecystectomy, lumbar puncture and arthroscopy. A pedagogical model, built from the experience of the consortium, is being used to guide the instructional design of the course. Finally, the results will be validated in a multi-center validation study. The EASIER project will create a training platform with reference value for European surgery.
The structure of the consortium, based on the confluence of groups with clinical, technological and pedagogical experience, will generate a comprehensive learning tool for surgery, combining technology-based training systems with clinical experience. Background: The treatment of groin hernia is an important part of our daily surgical activity. Aim: We set out to evaluate the outcomes of laparoscopic transabdominal preperitoneal (TAPP) repair of groin hernia. Project description: One hundred and fifty patients who underwent a TAPP repair for a groin hernia between January 2014 and November 2018 were included in a retrospective study. Results: The gender ratio was 5. The average age was 57.4 years. Twenty percent of patients had a history of abdominal surgery. The operative indication was a unilateral hernia in 25% of cases, associated with an umbilical hernia in 40% of cases, a recurrent groin hernia in 30% of cases, and a bilateral inguinal hernia in 45% of cases. The conversion rate was 1.33%. The hernias were classified according to the EHS classification as type L3 in 31% of cases, L2 in 24%, L1 in 19%, M2 in 19%, L2R in 20%, and F1R in 15%. A contralateral inguinal hernia was discovered in 15% of patients. A 15 × 15 cm polypropylene mesh was fixed by stapling in 59% of cases and by suture in 41%. The average operating time was 55 min. The average hospital stay was 0.7 days. Analgesic treatment was prescribed in 15% of patients. The average time to return to normal physical activity was 5 days. A postoperative seroma was noted in 14% of patients. No cases of mesh suppuration were noted. Chronic pain was noted in two patients. No recurrence was noted, with an average follow-up of 25 months. Conclusion: Laparoscopic TAPP repair of groin hernia gave good results with regard to postoperative pain, early recovery of physical activity, and cosmesis.
A longer follow-up is needed in our study to evaluate the recurrence rate. Background: Surgeons' training in ultrasound is viewed and understood differently in different parts of the world. While in the United States the need for surgeon training was accepted and taken over by the American College of Surgeons, in Europe practice differs completely from one country to another, from one city to another, and from one department to another within the same hospital. In some European countries, surgeons currently use ultrasound for diagnosis (especially in emergencies), for follow-up, intraoperatively, or as guidance for many surgical gestures. Meanwhile, access to ultrasound by other surgical specialties (gynecology, urology, ophthalmology) is considered natural. Material and method: Once the decision to initiate an ultrasound course for surgeons was taken, a team of experts with technical or clinical expertise in ultrasound was organized. At the initiative of the Technology Commission, the courses were to be organized at EAES congresses or other communication events endorsed by the EAES. Starting from the importance of each ultrasound application in surgery, it was decided to develop different modules to address different training needs. At this time, the course offered in Seville covers chapters such as abdominal ultrasound, guided punctures, and trauma. A module dedicated to intraoperative ultrasound is under construction and will be available in November 2019. The course is predominantly skills-oriented, two thirds of it being conceived as hands-on application, on stationary, classical ultrasound machines with large screens and also on small, wireless modern devices. Results: After its debut in Frankfurt in 2017, the course was repeated in London and, twice, in Bucharest. In this process, new modules and better teamwork skills have been developed. The participants' satisfaction questionnaires, coming from all continents, were really encouraging.
For the intraoperative ultrasound module, the team approach is unique: students will have the opportunity to practice both laparoscopic and open abdominal procedures on live animals. Conclusions: The Ultrasound for Surgeons course initiated by the EAES was received with interest. The team will continue to assess the real training needs of surgeons in this field and will complement and diversify the current platform. Preliminary results: A total of 26 procedures (3-12, ± 5.2) were carried out in 5 patients. The indications included acute conditions (2 leaks following esophageal resection, 1 rupture of a strictured anastomosis following pneumatic dilatation) and 2 chronic conditions (esophagopleurobronchial and gastropleurobronchial fistulas following resection of an esophageal diverticulum and sleeve gastrectomy). Therapy was initiated at 13, 18 and 1 days in the acute conditions, and after 2 years of unsuccessful therapy in the 2 chronic cases. Successful closure was observed in 2 patients; 1 patient died of MODS and ARDS. In 1 case, EVAC was initiated as a combined surgical and endoscopic intervention (CT-proven distant intra-abdominal abscesses). Of the 2 chronic cases, 1 was discontinued owing to a haemophagocytic syndrome of unknown etiology; in the second, a durable reduction of the lesion and symptomatology was observed after just 3 applications of EVAC, despite minimal remnant leakage. In our experience, success is linked to early initiation of the therapy and presumes complex intensive care. Future investigation should specify the timing, including preemptive use of EVAC, and the combination of EVAC with other endoscopic, interventional and surgical therapeutic modalities. The aim of this feasibility study is to investigate complete robotic esophagectomy with total mediastinal lymph node dissection (RETM) performed only with the robotic arms.
Methods: The patient is placed in a hemi-prone position with one-lung ventilation via a bronchial blocker tube under general anesthesia. The robotic trocar for the 1st arm of the da Vinci Xi surgical system is placed in the 10th intercostal space (ICS) on the scapular line, the 2nd-arm trocar in the 7th ICS on the posterior axillary line, the 3rd-arm trocar in the 5th ICS on the middle axillary line, and the 4th-arm trocar in the 3rd ICS on the middle axillary line; an assistant trocar, used only for passing gauze in and out, is placed in the 8th ICS on the middle axillary line. For the upper mediastinal lymph node dissection, the robotic camera is moved from the 2nd to the 3rd robotic arm for a close view identifying the anatomical structures. Esophagectomy with lymph node dissection proceeds from the middle and lower mediastinum to the upper mediastinum, including along the bilateral recurrent laryngeal nerves. All procedures are performed under a close-up view of the robotically enhanced anatomy to preserve organ function. Background: Complete stenosis of the duodenal lumen secondary to a surgical suture in the treatment of a duodenal ulcer is a rare complication. The usual surgical resolution is a gastrojejunostomy, with or without antrectomy and vagotomy, as treatment for the peptic disease. Endoscopic resolution of this complication requires the use of complex maneuvers and specific therapeutic instruments. Aim: To describe the endoscopic resolution of an iatrogenic occlusion after suture repair of a perforated duodenal ulcer. Description: A 43-year-old man was admitted to the emergency service with four days of pain and abdominal distension associated with abundant retention vomiting. He had undergone surgery ten days earlier for a perforated duodenal ulcer, in which manual suture repair in two planes and drainage were performed. CT of the abdomen and pelvis showed great distension and diffuse thickening of the gastric wall.
the endoscopy showed abundant gastric retention content, pylorus and bulb edema, and complete closure of the duodenal lumen secondary to suture material; it was possible to count three suture threads. with a tipped papillotome and electrocautery, all the suture threads were sectioned, identifying a filiform opening through which a hydrophilic guide was inserted under fluoroscopy until it was certain to have overcome the stenosis; we dilated the trajectory with a dilator 10 mm in diameter and 4 cm in length. with a contrast medium, we observed an adequate trajectory and installed a partially covered duodenal metal stent (hanaro stent), 14 cm in length by 20 mm in diameter, in order to sustain the dilation. he was sent home on proton pump inhibitors. after two weeks, the stent was removed without complications. he was controlled two weeks after withdrawal and, with pharmacotherapy, the patient was asymptomatic, leading a normal life. conclusion: in this case, the result was positive. satisfactory results can prevent major surgery, reducing risk and possible complications. a new range of non-invasive medical tools offers a remarkable improvement on the existing market: a manual laparoscopic instrument with the important novelty of a bending head with a high degree of movement. this head can reach multiple spatial positions to work in surgery and is very easy to use, with the same scissor thimble that controls it, so its use and learning are very simple and intuitive. the tools are easily reusable and can be cleaned and sterilized by ordinary methods; they are very ergonomic and lightweight. the generic-type models we have initially developed are focused on general surgery, but we will gradually develop new applications and different heads for specific medical fields such as arthroscopy, laryngoscopy, otolaryngology, ophthalmology and orthopedics.
its operation is simple and functional: by moving the scissor thimbles, which have a dual role, both the head tilt and its opening and closing action are controlled. the design of this tool allows us to work with degrees of freedom unmatched by existing tools. our instruments replicate the movements of the robot with a simple handheld mechanical instrument; our philosophy is to position our instruments in the long empty space between the surgeon and the robot. the tips of our instruments provide surgeons with angulations impossible to reach with traditional instruments without huge movements of the surgeon's hands. we consider that these devices will gain very fast market acceptance, as these robot-like movements can be managed by the surgeon through traditional 5 mm trocars, without the need to change to a new surgical technique, just the traditional method and very brief training. background: diverticula of the middle thoracic esophagus are infrequent; their etiology may be secondary to traction or pulsion mechanisms. when the etiology is a pulsion mechanism, they are associated with esophageal motor disorders, and their prevalence is estimated between 0.02% and 0.77% of the population. they are rarely symptomatic and the diagnosis is usually incidental. the most common symptoms are episodes of food impaction, chest pain or bronchoaspiration. diverticulectomy and esophageal myotomy by a minimally invasive approach is the treatment of choice in those with large size or associated symptoms. aim: to describe a clinical case of esophageal diverticulum solved by a minimally invasive surgical approach. clinical case description: a 74-year-old patient with a history of epilepsy and hypothyroidism consulted for atypical chest pain and dysphagia to liquids and solids.
a study with esophagogastroduodenoscopy was performed: 27 cm from the dental arch, a large wide-mouthed diverticulum was identified. we completed the study with a barium esophagram, showing a voluminous diverticulum on the right lateral face of the middle esophagus, and a thoracic ct scan showing an esophageal diverticulum located at the carina, measuring 5.6 x 4.5 cm, 13 cm from the esophagogastric junction. due to suspicion of an associated motor disorder, high-resolution manometry was performed, showing a significant motor disorder with alteration of peristalsis and outflow obstruction with incomplete relaxation of the inferior sphincter and a hypertonic superior sphincter. preliminary results: the patient underwent surgery: diverticulectomy and complete esophageal myotomy by a minimally invasive thoracoscopic approach. the patient evolved favorably and was discharged after 5 days with a control esophagram without pathological findings. currently, the patient remains asymptomatic 6 months after surgery. rectus muscles are re-approximated from xiphoid to pubis using laparoscopic running self-locking pds sutures to restore the anatomy and physiologic function of the abdominal wall. unlike the standard access to the abdominal cavity executed with 3 lateral ports, the lap-t technique is performed through 3 supra-pubic aesthetic approaches, using one 3 mm and one 5 mm bariatric (45 cm) instrument laterally and one 8 mm camera in the middle. the entire procedure is performed in gas-less laparoscopy, with a laryngeal mask and intra-peritoneal liquid anesthesia. the repair is consolidated by placing an intra-peritoneal semi-absorbable mesh. preliminary results: in all cases abdominal function was successfully restored; no higher pain related to the continuous laparoscopic suturing has been reported compared to bridged ipom laparoscopic repair, while allowing a more physiologic outcome and a stronger repair.
the use of miniaturized instruments allowed minimal tissue trauma and accurate surgical gestures; the tiny trocar sites did not require skin suturing and might reduce the risk of trocar hernias. no intra-operative bleeding, seroma formation, chronic pain, or mesh infection has been recorded. follow-up was 98% at 24 months and 89% at 36 months, with no recurrences observed. the lap-t technique allowed a sound and anatomic reconstruction, reduced trauma, faster recovery and more satisfactory aesthetic results. background: anastomotic leakage is a serious complication, associated with significant morbidity and mortality. one possible cause of anastomotic leakage is insufficient vascular supply. markers of sufficient perfusion include pink color of the bowel wall, visible peristalsis, palpable pulsations and bleeding from the marginal arteries. these signs are subjective and may be misinterpreted even by experienced surgeons. aim: the assessment of bowel perfusion with indocyanine-green fluorescence angiography might help decrease the number of anastomotic leaks. project description: we report the case of a middle-aged patient without significant medical history who was treated by transanal total mesorectal excision (tatme) for rectal carcinoma. the patient underwent neoadjuvant radiochemotherapy. during the surgical procedure, indocyanine-green fluorescence angiography showed adequate perfusion of the bowel. the postoperative phase was uneventful and the patient was discharged home on the 9th postoperative day. preliminary results: indocyanine-green fluorescence angiography is a safe, cost-effective and feasible tool for the assessment of tissue perfusion during colorectal resections. background: to properly learn how to perform a laparoscopic suture, along with safe tissue handling, applying an appropriate magnitude of force on the tissue is essential.
for this reason, it is fundamental to investigate and validate whether training with real-time visual force feedback improves the suturing performance of novice laparoscopic surgeons. capturing all of the forces applied in laparoscopic surgery in an unobtrusive way has been difficult in the past. sensor has supplied a novel force-sensing film (forcefilm) that can detect all of the forces applied with laparoscopic instruments without changing the surgical workflow or the operation of the instruments. aim: to evaluate the effect of visual force feedback on surgical performance, applied force and the surgeon's ergonomics during training of laparoscopic suturing using the sensor technology (sensor medical laboratories ltd.). methods: twenty novice laparoscopic surgeons participate in this study. they perform a laparoscopic suture on ex vivo porcine stomach tissue. participants are randomly assigned to either the group that receives visual force feedback (a) or the control group (b) without visual force feedback. five training trials (t1-t5) are carried out in order to assess the learning curve. in addition, an evaluation pretest (t0) and posttest (t6), without visual force feedback but with the applied force recorded, are performed before and after the training trials. the force applied to the tissue and the visual force feedback of each instrument are provided by means of the sensor technology, which accurately measures the forces exerted on the tissue by the instrument tip and wirelessly communicates the force information to the surgeon via visual force feedback. during each trial, several parameters are evaluated, such as execution time, applied force, surgical performance, and mental and physical workload. preliminary results: laparoscopic training using visual force feedback leads to an improvement of suturing skills with a reduction of the applied force, thus providing a potentially positive effect on patient outcomes and the surgeon's ergonomics.
background: hiatal hernia is a frequent disorder, characterized by protrusion of any abdominal structure other than the esophagus into the thoracic cavity through a widening of the diaphragmatic hiatus. the current anatomic classification is mainly based on the location of the gastroesophageal junction and the presence of a true hernial sac, differentiating sliding from paraesophageal hernias (peh). there is no solid evidence to support an association between gastric carcinogenesis and peh. however, chronic reflux is considered one of the strongest risk factors for developing adenocarcinoma of the esophagus and proximal stomach. aim: herein, we report the case of an 83-year-old caucasian female with dysphagia, regurgitation and heartburn accompanied by iron deficiency anemia, remarkable total body weight loss and recurrent lower respiratory tract infections due to microaspiration. subsequent work-up with ct, upper endoscopy and barium esophagram confirmed the presence of a synchronous distal gastric adenocarcinoma and a giant paraesophageal hernia with complete intrathoracic stomach. after mdt discussion, and keeping in mind the patient's age and comorbidity, a 3d laparoscopic distal gastrectomy with synchronous hernia reduction and posterior cruropexy was scheduled. project description: the patient was placed in a supine position, five laparoscopic ports were introduced, and a diagnostic laparoscopy of the abdominal cavity was performed. the stomach was identified through the dilated hiatus in the left thorax. the hernia sac was dissected away from mediastinal structures, then excised to untwist the stomach. after reduction of the stomach into the abdominal cavity, a total d1+ gastrectomy with roux-en-y reconstruction was performed. maintenance of optimal vision during minimally invasive surgery is crucial to operative awareness, efficiency and safety.
hampered vision is commonly caused by laparoscopic lens fogging (llf) and lens condensation, which has prompted the development of various antifogging fluids and warming devices. numerous tricks have been proposed to overcome this issue, such as heating the scope in a sterile thermos flask filled with hot water, or using one of the commercially available antifogging solutions. however, whether one method is superior to another remains elusive. as most surgeons know, none of these tips is totally efficient, as they do not treat the cause: the temperature difference. taking this need into account, we have developed ehs (endoscopy heater system), a microcontroller-based thermo-adjustable system implemented in the manufacturing process of rigid endoscopes for laparoscopy. the technology enables self-modulation of the temperature of the endoscope under the different conditions during surgery, entirely avoiding laparoscopic fogging. with the adoption of ehs, surgeons get a clear field of vision, avoiding continual extraction and reinsertion of the endoscope during the intervention. in this way, patient risk is reduced with a more efficient and shorter procedure. ehs also represents an alternative that meets sustainability criteria by reducing energy costs and eliminating much of the waste currently generated by this procedure. therefore, this innovation could disrupt the laparoscopic device market by enhancing safety and effectiveness without introducing new components that could complicate surgical procedures. case report: we present the case of a 63-year-old woman with a chronic coloenteric fistula. conservative treatment was unsuccessful. the orifice was then closed with two subsequent clips, and the patient recovered well.
to our knowledge, this is the first successful case of coloenteric fistula treatment with ovesco. discussion: the ovesco system is a technique that enables the closure of gastrointestinal defects (perforation sites, leaks, fistulas). after system application, the patient can be treated at home, as was the case with our patient. successful closure of a leak or fistula is possible when no extraluminal abscess is present. in our case, we had a cavity (a previous sinus or abscess) that drained into the small bowel, thereby forming the coloenteric fistula. this allowed us to succeed with fistula closure, as the cavity could drain into the small bowel. conclusions: looking through the reports, one notes that the success rate of the otsc system procedure for anastomotic insufficiency or colorectal fistula was 57-100%, but only nine successful reports of chronic colorectal fistula were found. a 100% success rate is reported if the clip is placed within a week of occurrence of the leak. on the financial side, clips could reduce costs and time of hospitalization and avoid patients having to undergo surgical repair. the major advantage of ovesco clips seems to be their ability to grasp more tissue compared to standard clips and their strong grip on the wound margins because of their sharpened teeth. the drawback of the clips in fistula sealing is their incomplete grasp when the tissue is fibrotic. most authors agree that ovesco is not very appropriate for fistulas larger than 12-15 mm. inguinal hernia repair is one of the most frequently performed interventions in minimally invasive surgery.
in this report we present a new technique using innovative devices such as the robotic clamp and magnetic devices. with this technique, thanks to the magnetized devices and the robotic clamp, we have demonstrated a reduction in surgical time of 10-20 min as well as optimized surgeon ergonomics. we explain the technique with a demonstrative video and an exposition of the devices necessary to perform it. with this new technique we obtain greater maneuvering capacity for dissection of the peritoneum and subsequently greater ease in suturing it during repair of the inguinal hernia. a motorized and computerized laparoscopic tool that can be customized to the specific surgeon and procedure a. szold aim: surgeons have different levels of skill and use instruments for different tasks, but laparoscopic instruments are commonly simple mechanical instruments that allow limited degrees of motion, and the same instruments are used regardless of the surgeon or the task. robotized articulating instruments have so far added degrees of freedom, but perform in a standard way for all users and procedures. technology: human xtensions has developed a hand-held motorized smart laparoscopic instrument that was recently introduced in human procedures. the device has several features that enable customizing it to the user and procedure. the degrees of freedom can be reduced from 7 to 5, and the scale of rotation motion has 3 options that control both the speed and range of rotation, a feature especially useful for the various types of suturing tasks. results: the variable features were tested in different procedures requiring suturing and grasping. the combination of all optional settings made the instrument customizable to the different skill levels of the surgeons. as such, it made it possible to control the complexity of the device and take the surgeon through the learning curve until full control of all features was achieved.
in addition, the combination of different controls was used for performing specific tasks requiring different levels of maneuverability. in september 2016, the results of a bomss survey regarding the routine use of pre-operative ogd in bariatric surgery were published. they found that 10% of units surveyed considered routine preoperative ogd completely unnecessary. as part of newly launched bariatric services in a single isolated centre, we established a protocol that all bariatric patients had to undergo pre-operative ogd, including a clo test, and reviewed whether the ogd findings had influenced our surgical choice of operation and any necessary treatment before surgery. all patients embarking on the bariatric programme since its launch in jan 2017 to sept 2018 were included and had an ogd. the results of these ogds and all the clo tests were reviewed. these ogds were all performed by a single consultant to minimise any potential subjective differences. of the 45 patients, 7 (16%) tested clo positive, of whom 3 had normal findings on ogd. 9 patients had a hiatus hernia, 5 gastritis, 8 oesophagitis, 12 gastritis and oesophagitis, and 9 had other findings, e.g. ulcers, polyps or nodules. the 7 clo-positive patients underwent eradication of h. pylori. studies have shown that this is a treatable and preventable cause of gastritis/gastric cancers and of potential surgical complications causing prolonged hospital stay in 22% of patients. knowing about the presence of a hiatus hernia prior to surgery also contributed to the surgical planning, including allowing time for concurrent correction of the hiatus hernia in the operation. all patients with demonstrable oesophagitis (44%) had their operative choice changed to roux-en-y gastric bypass, aiming to prevent post-operative reflux, which would have been exacerbated had they undergone a sleeve gastrectomy instead.
carrying out a pre-operative ogd had a significant impact on operative choice and additional treatment before surgery and should therefore be advised in all patients. general surgery department, ahievran university, kirsehir, turkey. the majority of fatalities worldwide in people under the age of 35 years are caused by trauma [1]. blunt mechanisms account for 78.9 to 95.6% of injuries [2] [3] [4] [5], with the abdomen being affected in 6.0 to 14.9% of all traumatic injuries. this case contributes to the literature: a patient with sleeve gastrectomy has distorted anatomy at the duodenogastric junction, so after blunt abdominal trauma (bat), small bowel perforation (sbp) may occur in a more distal segment. this is a unique, previously unpublished case. a 33-year-old female who had undergone sleeve gastrectomy 3 years earlier presented to the emergency department after sustaining bat. on arrival, physical exam (pe) revealed abdominal guarding and tenderness with normal vital signs, but these worsened 8 h later. wbc values also increased 8 h later; hb was normal. fast initially showed fluid 3 cm thick in the pouch of douglas (dp), and 4 h later 50 mm in the supravesical region and 33 mm in diameter in the dp. abdominal ct at the 5th hour after the accident showed fluid 37 mm in diameter in the interloop region and free air in the dp. a diagnostic laparoscopy (dl) was performed for a diagnosed acute abdomen. there was an sbp located 60 cm from the ligament of treitz, with intraperitoneal fibrin deposits and fluid, repaired with a primary suture. the patient was discharged on day 5 without any adverse event. repeat ct scans are recommended for patients with initially suspected bowel injury. we could not do this, because a ct exam could be taken only during rush hours; instead we repeated the pe, and as peritoneal irritation signs increased, this resulted in a dl and surgical therapy. according to the literature, dl may be a good treatment option in these patients to reduce morbidity or mortality, and the importance of time to surgery has been emphasized. a long interval between presentation and surgery was found to be associated with complications.
very few reports of isolated jejunal transection following blunt abdominal trauma have been published in the literature. in the literature, patients with sbp are hemodynamically stable on arrival at the hospital, as our case was, and rupture of the jejunum is seen just distal to the duodenojejunal flexure; here, however, the perforation was 60 cm below the ligament of treitz, which led us to think that the patient's sleeve gastrectomy had created adhesions around the gastric, duodenal and proximal jejunal parts of the intestine and, in effect, a newly descended ligament of treitz. normally, external forces across the spine produce a blast effect on the small bowel between the ligament of treitz and the ileocolic junction. introduction: sasi bypass is a novel metabolic/bariatric operation based on mini-gastric bypass and santoro's operation. it can be offered to patients with weight regain after sleeve gastrectomy. sleeve gastrectomy (sg) is a commonly performed bariatric procedure. weight regain following sg is a significant issue, yet the understanding of this phenomenon is still unclear. rates of regain range from 5.7% at 2 years to 75.6% at 6 years. sasi bypass was an option for some candidates who had undergone sg 2 years previously and failed to achieve the required weight loss or had weight regain. in sasi bypass, re-sleeve gastrectomy of the dilated gastric pouch is performed, followed by a side-to-side gastro-ileal anastomosis. the aim of this study is to report the clinical results and outcomes of sasi bypass as a therapeutic option for patients with weight regain after sg. methods: we conducted a retrospective study of 25 morbidly obese patients with a history of sg performed more than 2 years previously who failed to achieve and/or maintain the required bmi. exclusion criteria: patients with a recent history of laparotomy (less than 12 months). the procedure was performed at sidra hospital in kuwait from november 2016 to november 2018.
using 5 ports, re-sleeve gastrectomy was performed over a 36 fr bougie tube starting 6 cm above the pylorus; then a side-to-side gastro-ileal anastomosis was performed 6 cm above the pyloric ring to an ileal loop counted 250 cm from the ileocaecal valve. data collected from the patients included weight loss progress and full laboratory results. discussion and results: during the study period, 25 morbidly obese patients with a mean bmi of 44 ± 6 kg/m2 were evaluated. %ewl (excess weight loss) reached 85% at one year. diabetes was cured in the 2 known type 2 diabetic patients within 6 months, and the one known type 1 diabetic patient had better control and lower daily insulin doses (results were guided by glycated haemoglobin measurements every 3 months). follow-up laboratory results were normal in 88% of patients (all were kept on regular vitamin and protein supplementation). one patient had a postoperative leak (day 1) from the anastomotic line that was treated conservatively. conclusion: sasi bypass is a promising operation that offers good weight loss for morbidly obese patients with weight regain after sg. conclusions: our study demonstrates good agreement between the degree of liver steatosis and monocyte fat accumulation, as well as between plin2 levels in liver and circulating monocytes. this suggests that ectopic fat deposition is a generalized feature of insulin resistance in obesity. sg reverses monocyte fat accumulation and restores insulin signalling, which correlates well with insulin sensitivity. moreover, circulating mmp9 levels dropped significantly after sg, suggesting that the state of generalized inflammation characterizing obesity normalizes.
her stomach was stapled; a foreign tubular body was seen on the cut surface of the stomach. the foreign body in the dissected stomach wall was the insufflation tube of a gastric band left in place. a laparotomy was performed, the tube was extracted, the stomach was sutured primarily, and nasogastric decompression and peritoneal drainage were established. she developed a septic condition and remained in the icu for a long period. her general status improved and she was discharged from hospital after 50 days. we learned only after the operation, not before, that she had had an adjustable gastric band and extraction of the band, but the tube of the gastric band had not been removed. alkhaffaf et al. present a case of fistulation of lagb tubing into the jejunum and a review of the published data to identify the salient learning points of this and similar rare complications. fistulation from lagb tubing is a rare complication that tends to follow removal of an infected port. the clinical presentation is nonspecific, rendering the preoperative diagnosis difficult. the tube and band can be removed laparoscopically, with closure of the small bowel fistula site. securing the tubing to the abdominal wall fascia after intentional detachment from the port might reduce the incidence of this complication. katherine j. et al. report a late and rare complication of small bowel obstruction in a 52-year-old woman from an lagb placed 2 years earlier. although not a common complication, it is one that could easily tarnish the safety record of lagb if this small subgroup of patients is not acted upon promptly by emergency departments unfamiliar with lagb surgery. in our case, we had already performed esophagogastroduodenoscopy before the operation and, of course, taken a past medical history from the patient; the patient hid the past operation (gastric banding and removal of the band and port, but leaving the insufflation tube). there was no difference between the two groups regarding follow-up rate.
basic demographics were the same, and other long-term results were similar between the groups. regression models for both post-op complications and failure as defined by the baros score did not show that gender is a risk factor. discussion and conclusions: in our study, revisional sleeve surgery results were similar between genders. we did not see any significant difference in post-op complications, success of the operation as defined by baros, or the subjective feeling of the patients. we believe that gender-specific outcomes should be taken into consideration in optimizing patient selection and preoperative patient counseling, and that in the case of a sleeve after a band, gender is not a risk factor for complication or failure of the procedure. objective: internal hernia is a rare but potentially fatal complication of laparoscopic roux-en-y gastric bypass (lrygb). the aims of this study are: (1) to determine the impact of mesenteric defect closure on the incidence of internal hernia after lrygb; (2) to determine the symptoms, characteristics and management of internal hernias after lrygb. the median interval between lrygb and reoperation was 53 months in group a and 26 months in group b. the median percentage of excess weight loss (%ewl) was 61% vs 67%, respectively (p = 0.79). the median percentage of total weight loss (%twl) was 39% vs 37%, respectively (p = ns). 14 patients, 70% (5 in group a), were admitted to the emergency room with acute abdominal pain. a ct scan was performed in 8 patients (40%) and showed signs of occlusion in all cases. the most common symptoms were abdominal pain and vomiting. surgery was performed by laparoscopy in 8 patients (40%) and by laparotomy or conversion in 12 patients (60%). conclusions: the closure of mesenteric defects during lrygb is recommended because it is associated with a significant reduction in the incidence of internal hernia.
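for readers unfamiliar with the weight-loss metrics reported throughout these abstracts, %ewl and %twl can be computed as follows. this is a minimal illustrative sketch, not from any of the abstracts; taking ideal weight at bmi 25 is one common convention, and all numbers are hypothetical:

```python
# illustrative sketch of the bariatric weight-loss metrics %ewl and %twl
# (assumption: "ideal weight" is defined as the weight at bmi 25 kg/m2,
# one common convention; other definitions exist)

def pct_ewl(initial_kg: float, current_kg: float, height_m: float) -> float:
    """percentage of excess weight lost, with ideal weight taken at bmi 25."""
    ideal_kg = 25 * height_m ** 2
    return 100 * (initial_kg - current_kg) / (initial_kg - ideal_kg)

def pct_twl(initial_kg: float, current_kg: float) -> float:
    """percentage of total body weight lost."""
    return 100 * (initial_kg - current_kg) / initial_kg

# e.g. a hypothetical 1.70 m patient going from 120 kg to 80 kg has lost
# about a third of total body weight but a much larger share of excess weight
print(round(pct_twl(120, 80), 1))        # %twl
print(round(pct_ewl(120, 80, 1.70), 1))  # %ewl
```

this illustrates why %ewl figures (61-85% in the abstracts above) run much higher than the corresponding %twl figures (37-39%): the denominator of %ewl is only the excess over ideal weight, not the whole starting weight.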
our study intends to analyze the long-term results of 112 sleeve gastrectomies performed by a 3d laparoscopic approach. materials and methods: a prospective cohort study of gastric sleeve for morbid obesity was conducted. all surgeries were performed by the same surgeon over a period of two years. the operating surgeon is a senior laparoscopic surgeon with vast experience in laparoscopic surgery. during the two-year period, 112 cases were operated on using a 3d laparoscopy system. statistical calculation was done using spss release 18.0 for windows. results: 112 patients, 74 female (66%) and 39 male (43%), with a median age of 42.3 ± 12.6 (15-70). the excess weight loss (ewl) was 68% in the first year, 72% in the second, 73% in the third, 71% in the fourth, 70% in the fifth, 68% in the sixth and 67% in the seventh. postoperative complications were 3 stenoses of the sleeve, always located at the incisura and treated with endoscopic dilatation, except one that required conversion to oagb, and three leaks, all of them reoperated with drainage and endoscopic placement of a prosthesis in the same act. we have never had postoperative bleeding of the sleeve. conclusions: 3d laparoscopic gastric sleeve is a safe and feasible technique for morbid obesity and related pathologies. the ewl holds up over the long term. complications are rare, but a good level of suspicion is necessary for a rapid solution. the worst complication is the leak of the sleeve. oversewing of the gastric section is a good technique to avoid this complication. aims: leak is one of the common complications of laparoscopic sleeve gastrectomy, resulting in prolongation of hospital stay, morbidity and even mortality. methods: i report a new approach for the treatment of 17 leaks presenting after laparoscopic sleeve gastrectomy, with laparoscopic roux-en-y bypass to the leak site at the level of the gastroesophageal area.
this new approach is possible and feasible, and avoids stenting, with its high failure rate and prolonged hospitalization, and saves patients' lives. results: all leaks healed within 7 days of surgery thanks to the well-vascularized small intestinal patch, except for 2 leaks that healed after 2 weeks of conservative treatment. aims: to analyse the effect of one-anastomosis (mini) gastric bypass (mgb/oagb) in the treatment of gastro-esophageal reflux in patients previously submitted to laparoscopic sleeve gastrectomy (sg). methods: a retrospective analysis was performed on the data of patients who underwent mgb/oagb after a previous sg at policlinico san marco, italy, from january 2014 to june 2017. a total of 40 patients, 36 female and 6 males (85% f/15% m), underwent mgb/oagb after sg due to the development of significant gastro-esophageal reflux disease (gerd), refractory to proton pump inhibitors (ppi), detected with the gerd questionnaire (gerd-q) and esophagogastroduodenoscopy (egds). in three patients (5%) weight regain was also observed (mean bmi 41.3 kg/m2, range 39.3-42.5 kg/m2). mean patient age was 40.6 years (35-60). before sg, none of the patients had declared symptoms of gerd or was on ppi therapy, and preoperative egds did not show signs of esophagitis. the mean bmi of the 37 patients who developed gerd without weight regain was 31.4 kg/m2 (28-33.7 kg/m2) at the time of surgery, with a mean %ewl of 51% (42.5-68.3%). patients had been treated unsuccessfully with ppi for at least six months before revisional surgery was planned. the mean gerd-q score was 12. results: after mgb/oagb, with a mean follow-up of 19 months (15-24 months), mean bmi was 28.5 kg/m2 and the gerd-q score was 5. however, five patients out of 40 (13%) developed an anastomotic ulcer or grade c esophagitis. we did not observe any immediate post-operative complication nor any death.
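to make the gerd-q figures above (a mean score falling from 12 to 5) easier to interpret, the following is a minimal sketch of how a gerd-q total is tallied. this is not taken from the abstract; the item mapping assumes the standard six-item gerdq instrument as commonly described, with four positive-predictor items scored 0-3 by symptom-day frequency over the past week and two negative-predictor items reverse-scored, giving a 0-18 range (a total of 8 or more is commonly cited as suggestive of gerd):

```python
# illustrative gerd-q tally (assumption: standard six-item gerdq design;
# each argument is a frequency category for the past week:
# 0 = 0 days, 1 = 1 day, 2 = 2-3 days, 3 = 4-7 days)

def gerdq_total(heartburn: int, regurgitation: int,
                epigastric_pain: int, nausea: int,
                sleep_disturbance: int, otc_medication: int) -> int:
    # positive predictors of gerd score directly with frequency
    positive = heartburn + regurgitation + sleep_disturbance + otc_medication
    # negative predictors are reverse-scored: absence of the symptom
    # contributes the maximum 3 points each
    negative = (3 - epigastric_pain) + (3 - nausea)
    return positive + negative

# a fully asymptomatic respondent still scores 6 (from the two reversed items);
# a maximally symptomatic reflux pattern scores 18
print(gerdq_total(0, 0, 0, 0, 0, 0))
print(gerdq_total(3, 3, 0, 0, 3, 3))
```

on this scale, the fall from a mean of 12 to 5 reported above crosses the commonly cited positivity threshold of 8.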
Conclusion: MGB/OAGB is a simple, effective and safe surgical procedure for patients who underwent a previous SG and developed GERD, with satisfactory results in the short and medium post-operative term, even if there is still concern regarding the complications linked to biliary reflux. V.S. Kyosev. Aims: laparoscopic adjustable gastric banding (LAGB) was one of the common techniques in bariatric surgery worldwide. Its advantages included adjustability, ease of placement, acceptable weight loss and a low rate of perioperative complications. A late complication of LAGB is penetration of the gastric band through the gastric wall and migration into the lumen of the stomach. Here we present three cases of gastric band migration following LAGB. Methods: from 2013 to 2017 we observed 3 cases of gastric band migration between 5 and 7 years after LAGB placement. The patients were hospitalized in the surgical department complaining of sudden sharp epigastric pain, nausea and vomiting, with symptom onset in the preceding few days. All patients underwent abdominal ultrasound examination, x-ray investigation of the abdomen with oral contrast administration, and fibrogastroscopy. In 2 cases the imaging studies revealed gastric band migration into the stomach's lumen, and in 1 case obstruction of the jejunum by the gastric band. All patients underwent laparoscopic surgery. Results: two of the patients underwent gastrotomy, extraction of the gastric band and Roux-en-Y gastric bypass. The patient with jejunal obstruction underwent laparoscopic enterotomy, extraction of the gastric band and cholecystectomy due to concomitant cholecystitis. Two of the patients had no additional perioperative complications and were discharged on the 5th postoperative day. One patient developed fever, left pleural effusion and partial insufficiency of the gastrointestinal anastomosis in the early postoperative period, without the need for surgical treatment.
The patient was discharged on the 20th postoperative day. All patients were prescribed a diet and monthly blood tests of electrolyte balance. Conclusions: LAGB was one of the most common treatment methods for the epidemic spread of morbid obesity in western countries. Detailed knowledge of possible LAGB complications is essential for the treatment of these patients. The diagnosis of LAGB complications is often delayed due to their relative rarity and nonspecific clinical manifestations, but in most cases emergency surgery is required for management of life-threatening conditions. Results: there was conversion in 2 patients (short mesentery of the small intestine). Postoperative complications comprised anastomotic leak in 1 patient (2.7%) and staple-line bleeding in 1 patient (2.7%), which was managed laparoscopically. Compensation for type 2 diabetes was achieved in 9 (50%) patients and improvement was recorded in 7 (38.9%); dyslipidemia improved in 4 (23.5%) and 7 (41.1%) patients, and arterial hypertension in 16 (55.1%) and 9 (31%) patients respectively, which led to metabolic syndrome resolution in 11 (55%) patients. Oral liquids were allowed after 1 day. Average postoperative hospitalization was 4.1 ± 1.9 days. %EWL at 48 months was 69%. Conclusions: laparoscopic mini gastric bypass is an effective method of surgical correction of body weight and metabolic disorders in patients with morbid obesity, and allows adequate and stable correction of arterial hypertension and of lipid and carbohydrate metabolism, which are components of the metabolic syndrome. Introduction: over time, laparoscopic sleeve gastrectomy (LSG) has become the most popular bariatric operation worldwide. A critical step during LSG is ensuring sleeve-size consistency. The GastriSail device (gastric positioning system) is a three-in-one surgical device replacing the standard bougie used in LSG, providing suction, decompression, and a sizing guide for gastric sleeve creation.
The aim of this study is to evaluate the possible merits of the GastriSail device in LSG over standard laparoscopic sleeve gastrectomy. Methods: a prospective study of 40 patients randomly divided into two groups: group A, twenty patients who underwent LSG with the GastriSail, and group B, twenty patients who underwent LSG with the standard bougie, comparing both according to operative time, consistent sleeve formation, delineation and visualization, intraoperative and post-operative complication rates, length of hospital stay, gastric pouch design and percentage of excess weight loss (%EWL). Results: mean operative time was 72.0 ± 13.58 min for group A and 79.0 ± 11.74 min for group B. Twelve patients (60%) in group A had consistent sleeve formation, while no patient in group B did. Delineation and visualization were accomplished in 100% of group A patients and not accomplished at all in group B. Alignment of the stomach was achieved in 12 patients in group A but in no patient in group B. Mean hospital stay was 2.20 ± 0.42 and 2.40 ± 0.84 days for groups A and B respectively. A smaller tube design, illustrated by Gastrografin x-ray on the 3rd post-operative day, was accomplished in 8 patients (40%) and 2 patients (10%) in groups A and B respectively. There was no significant difference in %EWL between the groups. Conclusion: the GastriSail device is superior to standard LSG in consistent sleeve formation, visualization, delineation, good alignment and accomplishment of a small tube design, with no significant difference in %EWL. Bariatric surgery has spread all over the world. Since Japan has few patients with morbid obesity compared with western countries, it has been implemented only in limited facilities. However, bariatric surgery in Japan is spreading rapidly, and many facilities are about to introduce it.
The effects of bariatric surgery are known to last for a long time, but some cases require reoperation, called revision surgery, due to late complications or weight regain. Because of thick subcutaneous and visceral fat, open surgery is not always a good solution in morbidly obese patients, and all procedures must be completed laparoscopically. Therefore, especially in revision surgery, the incidence of complications tends to be increased. As the number of bariatric cases increases in Japan, cases requiring revision surgery are likely to increase. In revision surgery, it is necessary to select the procedure according to the patient's condition and to be well familiar with those procedures. We present cases that underwent revision surgery in our department and show the clinical outcomes. We have performed four revision surgeries after sleeve gastrectomy so far. Operative indications were 2 mid-gastric stenoses and 2 cases of weight regain. For the stenosis cases we performed Roux-en-Y gastric bypass with distal stomach resection, and for the weight-regain cases we performed re-sleeve gastrectomy with duodenal-jejunal bypass. The average interval from initial operation to revision surgery was 69 months in the weight-regain cases and 9 months in the stenosis cases. Duration of operation was 269 min on average, and mean estimated blood loss was 18 ml. No postoperative mortality was observed. In the weight-regain cases, excess BMI loss at 1 year after surgery was 40.9% on average, and both cases achieved diabetes remission at 1 year. One case of mid-gastric stenosis required temporary nutritional support with a formula diet. In particular, after sleeve gastrectomy, revision to Roux-en-Y gastric bypass, re-sleeve gastrectomy, and addition of a duodenal-jejunal bypass will be the main techniques. Along with the increase of bariatric surgery in Japan, it is necessary to acquire sufficient knowledge and skills to carry out revision surgery.
Methods: we present the case of a 45-year-old woman who underwent LSG after LAGB removal and LGCP. The patient underwent preoperative endoscopy and barium swallow, with no sign of stomach perforation or erosion. We emphasize that the patient had undergone three operations (gastric band placement, gastric band removal and gastric plication) before sleeve gastrectomy. Nevertheless, a successful LSG was achieved. Results: no severe postoperative complications were noted; weight loss in the first year was 70% of the excess weight. Conclusion: sleeve gastrectomy after gastric band removal and gastric plication for morbid obesity seems to be safe and efficient, especially in cases of absence of gastric erosion. In our department of surgery, 19 patients were observed with serious septic complications many years after gastric banding. We noted a female dominance (16 female, 3 male) in patients with a mean age of 41.6 years. The leading symptoms were dysphagia, upper abdominal tenderness and pain, spontaneous fistula formation, fever, masked septic signs, and bowel and urinary obstruction. Patients underwent video-endoscopy, chest and abdominal CT (computed tomography), fistulography and cystoscopy. Results: in still morbidly obese patients, laparoscopic procedures were performed with a conversion rate of 50%: atypical gastric and cardia resection in 4 cases, gastric suture in 9 cases, and small bowel resection and suture in 4 cases each. In one case, fistulectomy, abscess evacuation and combined urinary bladder suture and drainage were carried out. The duration of the surgeries was over 2 h, with minimal blood loss (< 200 ml). The foreign bodies were completely removed in every case. No intraoperative complications occurred. Early physiotherapy was promoted, and oral feeding was gradually built up from the 2nd postoperative day depending on the type of operation.
Early postoperative complications included recurrent fistula formation (n = 2) and wound infection (n = 11). All the fistulas closed after conservative treatment. Average hospital stay was 8 days, and regular check-ups were held at the 3rd, 6th and 12th months of follow-up. Conclusion: gastric banding is a common, routine and safe technique for the treatment of morbid obesity. The development of late, severe septic complications draws attention to the crucial importance of follow-up. The surgical management of these patients is recommended in specialized centers in view of difficult operative conditions and atypical treatment options. Aims: single-anastomosis duodeno-ileal bypass with sleeve gastrectomy (SADI-S) has been proposed as an alternative to biliopancreatic diversion with duodenal switch (BPD-DS) in order to maintain the outcome of the original procedure while simplifying the technical complexity and avoiding potential complications. Moreover, it potentially represents the more natural second-step bariatric procedure after sleeve gastrectomy (SG). We aimed to report the initial experience with SADI-S of our high-volume bariatric center. Methods: a retrospective analysis of patients who underwent a bariatric procedure between July 2016 and November 2018 was conducted. The primary aim was evaluation of the safety of SADI-S, defined as the rate of postoperative complications. The secondary endpoint was the bariatric efficacy of the procedure, defined as percentage excess weight loss (%EWL). Results: among 813 patients who underwent bariatric procedures at our institution, 36 (4.4%) were scheduled for SADI-S. All patients had multiple comorbidities. The indication for SADI-S was failed SG in 8 patients (median pre-SG BMI 52.1 kg/m²; median 39 months after the initial operation) and primary procedure in 28 patients (median pre-operative BMI 56.5 kg/m²).
The surgical procedure was accomplished with a robotic-assisted approach in 4 cases (median operative time 198 min) and with a standard 4-trocar laparoscopic approach in the remaining 32 cases (median operative time 130 min). The duodeno-ileal anastomosis was fashioned using double-layer hand-sewn running sutures. No patient showed early post-operative complications; the median postoperative stay was 3 days. At a mean follow-up of 12 months the median %EWL was 66.1%. To date no patient has experienced surgical complications. One patient developed Wernicke encephalopathy 6 months after surgery, but he was non-compliant with multivitamin supplementation. Conclusions: at least in a high-volume bariatric center, SADI-S, both as a second step after SG and as a primary surgical option, seems to be a safe and effective bariatric metabolic procedure based on solid physiopathologic principles. On the other hand, longer follow-up is necessary to support the use of this procedure as a better alternative to BPD-DS. M.R. Elkeleny 1, A. Abo Khozima 2; 1 GIT and Bariatric Surgery, Faculty of Medicine, Alexandria University, Alexandria, Egypt; 2 GIT Surgery Department, Faculty of Medicine, Alexandria University, Alexandria, Egypt. Four bariatric cases: 1. A female patient with an intragastric balloon had a minor leak from the balloon leading to balloon migration into the jejunum; hence, small bowel obstruction occurred. Emergency diagnostic laparoscopy was done, with enterotomy, direct repair of the enterotomy, and balloon extraction through a 10 mm port site. 2. A male patient presented 5 days after LSG with small bowel obstruction due to entrapment of a small bowel loop in one of the port sites; therefore, emergency laparoscopy was done with reduction of the herniated segment and closure of the port site. 3.
A female patient presented with stricture of the OGJ after re-sleeve gastrectomy, managed by balloon dilatation, which recurred after 2 weeks. She was then managed with an expandable metallic stent for 6 weeks with good response, and the stent was removed. 4. A 38-year-old male patient presented with severe peripheral neuropathy 5 months after sleeve gastrectomy; the patient worsened to the point of using a wheelchair. He has been making good progress on vitamin B complex injections. The aim of our study was to compare histopathological findings of gastric specimens with preoperative clinical symptoms and to draw conclusions about the need for routine UGI endoscopy prior to surgery. Methods: over the last two years, 44 morbidly obese patients were selected to undergo laparoscopic sleeve gastrectomy (LSG) in our institution. For the purposes of our study, all of them had UGI endoscopy and were reviewed for upper GI symptoms. Histopathological reports were obtained according to our protocol after surgery. Results: gastric histology from the specimens revealed no findings in 16/44 patients (36.4%), gastritis in 19/44 patients (43.2%), and foci of incomplete intestinal metaplasia without dysplasia in 8/44 patients (18.1%). Finally, two minor leiomyomas with a low cellular proliferation rate were fully excised in one patient's specimen. There was no inconsistency between preoperative symptoms and gastric histology, while the leiomyomas found were not reported on UGI endoscopy due to their size. Conclusions: some of the patients with clinical features of food intolerance, gastroesophageal reflux disease, and peptic ulcer disease ultimately had findings on histopathology of their stomachs. A history of Helicobacter pylori infection implies a raised incidence of mucosal pathology as well. Because only one case carried significant pathology (leiomyomas), we consider it safe to proceed with surgery in an otherwise asymptomatic patient based on previous medical records and blood tests.
Aims: splenic abscess following laparoscopic sleeve gastrectomy (LSG) is a rarely seen complication. The aim of our study was to present a case of splenic abscess in a morbidly obese patient who underwent LSG. As the main concern in these cases is leakage from the staple line, we present our diagnostic and treatment approach. Methods: a 42-year-old female morbidly obese patient (BMI 56.6 kg/m²), without any predisposing risk factors, underwent elective LSG in our department. Following an uneventful course, she was discharged on the 2nd postoperative day. However, on the 20th postoperative day she was readmitted to our unit with a high temperature of 38.4 °C, left upper quadrant tenderness and leukocytosis. Contrast computed tomography (CT) revealed an abscess at the upper pole of the spleen, 4.5 cm in maximum diameter, without leakage from the staple line. Results: the patient was treated with broad-spectrum antibiotics and radiological percutaneous drainage of the abscess. Although there was partial clinical improvement, a week later a new CT scan revealed the continuous presence of the abscess. Despite the stable general condition of the patient, a laparoscopic splenectomy was performed, followed by gradual recovery. The presence of splenic abscess without splenic trauma or leakage from the gastric staple line is an extremely rare complication, and only a few cases have previously been reported. The cause has not yet been clarified, but the proposed mechanism involves infarction of the spleen due to vascular compromise and subsequent infection. Most of the reported splenic abscesses were diagnosed during the late postoperative period. In our report we present a case of early onset, highlighting the need for clinical awareness for early diagnosis and treatment. Introduction: obese surgical patients with obstructive sleep apnea (OSA) have a higher risk of peri- and postoperative desaturations and subsequent morbidity and mortality.
Currently, the best perioperative management of patients without known OSA remains unclear. Although routine OSA screening has been advocated, sleep studies are costly and time consuming. We hypothesized that bariatric patients can be safely monitored on a surgical ward by continuous postoperative pulse oximetry without preoperative screening for OSA. Objectives: to evaluate outcomes of continuous postoperative pulse oximetry without preoperative OSA screening, and to compare the results with outcomes of patients with OSA on continuous positive airway pressure (CPAP) treatment. Methods: all patients who underwent bariatric surgery between 2011 and 2017 were included in this single-center retrospective cohort study. All patients were postoperatively monitored with continuous pulse oximetry on the surgical ward. Patients with fewer than two documented saturation measurements were excluded. Patient files were reviewed for OSA diagnosis, CPAP usage and perioperative details. Primary outcomes were 30-day complication rates, intensive care unit admissions due to cardiopulmonary causes, and postoperative desaturations of SpO2 < 90%. Secondary outcomes were ICU admissions from all causes, length of stay, and rates of reoperation and readmission. Results: in total, 5203 patients were included. 675 patients (13%) were preoperatively diagnosed with OSA, and 511 (9.8%) were CPAP users. Complications occurred in 7.2% of patients without OSA and in 9.6% with OSA (p = 0.028). Desaturations were documented in 1.4% and 4% (p < 0.001), respectively. In both groups, 1 patient was admitted postoperatively to the ICU for cardiopulmonary causes that could be related to OSA (p = 0.119); both recovered without further complications. ICU admissions, regardless of cause, occurred in 0.42% of patients without OSA and in 1.18% with OSA (p < 0.001).
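Group comparisons of this kind (7.2% vs. 9.6% complications, p = 0.028) are typically pooled two-proportion z-tests. A minimal sketch follows; the event counts (≈326/4528 without OSA, ≈65/675 with OSA) are reconstructed from the reported percentages for illustration and are not taken from the study:

```python
import math

def two_proportion_z(success1, n1, success2, n2):
    """Two-sided two-proportion z-test with pooled variance.
    Returns (z, p_value), using the normal CDF via math.erf."""
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Approximate counts reconstructed from the abstract's percentages.
z, p = two_proportion_z(65, 675, 326, 4528)
print(round(z, 2), round(p, 3))
```

With these approximate counts the test reproduces a p-value close to the reported 0.028; small rounding differences in the reconstructed counts shift the result slightly.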
No significant difference between groups was observed in complications based on Clavien-Dindo classification, length of stay, or reoperation and readmission rates. Conclusions: these findings suggest that continuous postoperative pulse oximetry without preoperative OSA screening is a safe perioperative management strategy for bariatric surgical patients. Future studies are needed to assess the cost-effectiveness of pulse oximetry vs. routine preoperative OSA screening in a prospective clinical setting. Background: the pathology of the colon is one of the most pressing and socially significant problems of modern health care, because it reduces the working population employed in manufacturing and, in some cases, leads to disability and reduced quality of life. Minimally invasive surgery of the colon has great advantages: speedy recovery, shorter hospital stay, better cosmetic results, and the quickest return of patients to work. As a result, minimally invasive endovideosurgery is firmly established in the clinical practice of coloproctology. Objective: the choice of the optimal surgical method for treatment of colostasis and achievement of favorable treatment outcomes. Introduction: laparoscopic Roux-en-Y gastric bypass (RYGB) is one of the most important bariatric surgical procedures performed worldwide, and it can produce substantial weight loss with reversal of metabolic disorders such as diabetes and dyslipidemia. Even though it has good results, some complications occur after gastric bypass. A rare but serious complication of RYGB is so-called postprandial hyperinsulinemic hypoglycemia. Its prevalence has been estimated at less than 1% of cases, and its pathophysiology remains unclear. Methods: the aim is to present a case series of reversal surgery in patients with severe hyperinsulinemic hypoglycemia after RYGB at the Hospital General Universitari de la Vall d'Hebron.
Unit of Endocrine-Metabolic and Bariatric Surgery (EAC-BS Center of Excellence for Bariatric and Metabolic Surgery by IFSO). It is a retrospective analysis of a prospective database of the same surgical team. We present in this study the main features of those patients. Results: between 2011 and 2018, 13 patients underwent a laparoscopic reversal procedure to normal anatomy; mean age was 57 years (22 to 70 years). Mean preoperative body mass index (BMI) was 31.2 kg/m² (range 28-39.4 kg/m²), and 10 were women. All patients presented hypoglycemia symptoms from 5 years after the procedure, the longest at 15 years after. The first step of the standard approach was a laparoscopic reversal to normal anatomy with resection of the alimentary RYGB limb in 7 cases; a concomitant sleeve-like gastrectomy (SG) was added. Four patients presented postoperative complications: gastrogastric anastomosis leak (1). Introduction: laparoscopic sleeve gastrectomy is the most performed bariatric procedure, but complications might interfere with the patient's long-term evolution depending on compliance and tolerance, surgical attitude and unpredictable evolution. Materials: we present the case of a female obese patient with type II diabetes mellitus and hypertension, with multiple sequential bariatric minimally invasive interventions: sleeve gastrectomy in 2012, complicated by postoperative acute gastric dilation and mediogastric stenosis; reoperation for viscerolysis and cholecystectomy, with endoscopic gastric dilations; initial conversion to a functional one-anastomosis gastric bypass (200 cm limb), with a non-adjustable gastric ring positioned instead of stapled division. The last operation was complicated 12 months later by persistent biliary gastro-esophageal reflux, chronic abdominal pain, and gas bloat syndrome. In 2018 the patient underwent conversion to laparoscopic Roux-en-Y gastric bypass, with gastro-enteral anastomosis resection, band removal and viscerolysis.
Results: conversion to Roux-en-Y was complicated by biliary leakage post-viscerolysis, treated with a laparoscopic approach on the 9th postoperative day. After multiple surgical and endoscopic interventions, the patient presents short-term favorable outcomes, with no reflux or abdominal pain, further weight loss and diabetes improvement. Conclusion: bariatric surgery has an unpredictable evolution in some cases, and conversion to Roux-en-Y seems to be the best solution. LGCP is widely used in developing countries due to its lower cost and good results. Materials and methods: we performed 120 LGCP procedures for morbid obesity in our department. Excess weight loss (%EWL) was 55% at 6 months after surgery and 65% at one year. In 12 cases revision surgery was needed for various complications, and in 22 cases for inadequate weight loss or weight regain after 18 months of follow-up. In 8 cases we performed sleeve gastrectomy (in 3 cases after taking down the plication), and in 14 cases we performed re-plication in one row. Results: the rate of revision surgery was 28% overall and 18% for inadequate weight loss (excess weight loss < 50%) or weight regain. Major complications occurred in only one patient (leak with abscess) and were resolved by laparoscopy. Minor complications such as vomiting and nausea appeared in 5 patients (22%) and were resolved with medication. After one year of follow-up, %EWL in these cases was 85%. Conclusions: revision surgery after LGCP is possible. A new plication or SG was the option in our series, with good results. Further studies are needed to evaluate the place of LGCP in the armamentarium of bariatric surgery. Background: Roux-en-Y gastric bypass (RYGB) is one of the most commonly performed bariatric procedures around the world. However, RYGB sometimes carries the risk of rare but serious long-term complications such as malnutrition and liver failure. We report a case of laparoscopic reversal of RYGB.
Methods: in March 2017, a laparoscopic RYGB was performed on a 53-year-old female without comorbidities and with a BMI of 54 kg/m². All laboratory test results at the preoperative evaluation were within the normal range. Abdominal ultrasound revealed moderate hepatic steatosis, and oral endoscopy a hiatal hernia with grade B esophagitis. One year later, the patient had experienced a weight loss of 75 kg (from 155 to 80 kg), with a BMI of 28 kg/m². However, she presented general weakness, abdominal pain, ascites, lower extremity edema, anemia, progressive caloric and protein malnutrition, vitamin (A, D), mineral (copper) and folic acid deficiencies, and nonalcoholic steatohepatitis (NASH), with progressively worsening liver function. Results: a laparoscopic reversal of the gastric bypass was successfully performed. Operating time was 70 min. The postoperative course was uneventful, and the patient was discharged home on day 6. Hepatic biopsy revealed NASH with steatohepatitis of 80% (fibrosis F2-3/4). Eight months after reversal of the gastric bypass, the patient has improved clinically (no asthenia), maintains her weight (80 kg), and has improved nutritional status and liver function parameters. Conclusion: laparoscopic reversal of RYGB is technically feasible and can be performed safely after thorough preoperative evaluation in carefully selected patients with malnutrition and liver failure. Conclusion: laparoscopic sleeve gastrectomy is a safe obesity procedure before major abdominal hernia repair. It is a minimally invasive technique without anastomoses; these factors mean fewer complications, no use of the small bowel, and fewer skin problems, and allow resolution of obesity-associated co-morbidities. Body weight loss after surgery may provide an opportunity to repair a severe loss-of-domain incisional hernia.
General Surgery, Benghazi Medical Center, Benghazi, Libya; 2 General Surgery, Royal Bahrain Hospital, Manama, Bahrain. Obesity is a worldwide epidemic with an increasing incidence, and as a consequence obesity-related health problems have become a priority for healthcare authorities in all countries. Laparoscopic gastric plication is an emerging restrictive procedure claimed to be low-cost, because it does not need staplers, and to carry fewer complications compared with laparoscopic sleeve gastrectomy. We present a 37-year-old female who had been operated on for morbid obesity four months earlier, undergoing laparoscopic gastric plication with no immediate post-operative complication and adequate weight loss. Two days before presentation to our emergency department she started to complain of severe attacks of upper abdominal pain and vomiting. Clinical examination was unremarkable apart from abdominal tenderness in the left upper abdomen. All routine blood tests were normal and all inflammatory markers were within the normal range. CT of the abdomen showed a large cystic lesion around the greater gastric curvature containing fluid, raising the possibility of a collection. The patient was admitted to hospital; despite medical treatment her pain persisted and necessitated immediate laparoscopic exploration. This revealed a gastro-gastric hernia at the greater curvature through a loosened Ethibond suture that had been used to plicate the stomach in the previous surgery. We released the suture to liberate the strangulated stomach, which was not gangrenous. Re-plication was not possible because of the extensive gastric wall edema, and, as the patient had refused conversion to sleeve gastrectomy in the preoperative discussion, no further intervention was done. After surgery the patient was free of symptoms and tolerating an oral diet, and was discharged home on the third post-operative day with no complications. Gastro-gastric herniation can progress to gastric wall gangrene, which results in high morbidity and even mortality.
A high index of suspicion is required to diagnose the condition. Preoperative patient counseling is important to explore the surgical options if it is deemed necessary to convert to another bariatric procedure. K. Chouillard, A. D'Alessandro, L. Chahine. Background: bariatric surgery is the best available long-term treatment for morbid obesity. Currently, laparoscopic sleeve gastrectomy (SG) is the most commonly performed bariatric procedure in France. Despite its safety and efficacy, long-term complications of SG are not rare, including gastro-esophageal reflux disease (GERD), twisting, stenosis, insufficient weight loss, and weight regain. The goal of this study was to analyze the pattern and short-term results of surgical revision in patients with SG. Methods: revisional bariatric surgery, regardless of its motivation, was always a multidisciplinary decision after clinical, biological, endoscopic, and radiological assessment. Patients who had revisional surgery after SG were retrospectively identified and subsequently divided into 4 subgroups according to preoperative body mass index. We aim to present the management and the particular aspects of the surgical technique in a gastrobronchial fistula after gastric sleeve. The mean time between intervention and diagnosis is 6.7-7.2 months. Methods: between 2011 and 2018, 4253 laparoscopic gastric sleeve resections were performed in our bariatric center. We had one case of gastrobronchial fistula associated with an inferior lobe abscess of the left lung, diagnosed 3 months after the gastric sleeve. The patient received medical treatment for 24 h, then a laparoscopic intervention was performed in order to drain the lung abscess and the gastric fistula and to place a feeding jejunostomy. 2.5 months after this intervention (5.5 months after the gastric sleeve), a laparoscopic Roux-en-Y fistulojejunostomy was performed. The evolution was monitored with blood tests, upper GI contrast series and CT scans.
Results: the surgical drainage of the lung abscess, along with antibiotic therapy, controlled the infection and allowed the lung cavity to reduce in size, so that the drainage tubes introduced into the thorax through the diaphragmatic orifice could be progressively withdrawn. The feeding jejunostomy also allowed proper nutrition for the patient, with a good recovery. However, 2.5 months after the drainage intervention the gastric fistula had not healed, and a decision was made to interrupt the communication with the lung cavity by creating a laparoscopic fistulojejunostomy. After this, the evolution was favorable, with healing of the lung cavity; oral feeding was permitted and the jejunostomy was removed. Conclusions: the treatment of gastrobronchial fistula is complex (medical, endoscopic or surgical), phased and long-lasting until healing. Surgery was our initial choice of treatment due to the existence of the lung abscess, which needed to be drained. Key words: gastrobronchial fistula, lung abscess, laparoscopy, fistulojejunostomy. S.I. Filip, I. Hutopila, C. Copaescu. Introduction: leakage remains one of the most dreadful complications in metabolic surgery. The main cause of leakage is poor tissue oxygenation due to inadequate vascular perfusion. The study of intraoperative tissue perfusion in real time with ICG-enhanced fluorescence could provide valuable information for the surgeon in order to prevent postoperative fistula. Aim: to present our experience in using ICG-enhanced fluorescence in laparoscopic bariatric surgery. Material and method: in 30 cases of gastric sleeve, 12 cases of gastric bypass and 10 cases of revisional or redo surgery, we used intraoperative ICG-mediated fluorescence to ensure the optimal vascularization of the involved tissues. In our video we present intraoperative aspects before and after using ICG in different cases.
results: in all cases of primary gastric sleeve and gastric bypass with intraoperative use of icg we did not encounter inadequate perfusion. in one case of redo gastric bypass after failed vertical banded gastroplasty for morbid obesity, despite a normal laparoscopic aspect of the gastro-jejunal anastomosis, icg-mediated fluorescence allowed us to identify an unexpectedly ischemic anastomosis, and we could prevent a consecutive postoperative leak. discussion: the presented cases are discussed and the results are compared with the literature. conclusion: intraoperative use of icg is a valuable tool in assessing tissue perfusion and provides essential information for the surgeon in order to avoid postoperative leakage. during the first study period, including 1270 patients, hemostasis with clips was performed in all cases. however, among these cases nine patients required reoperation for early postoperative bleeding. in five cases a bleeding source from the stapled line was identified, while in 4 cases no identifiable source was found. during the second period (2015 to present) 2967 patients were submitted to bariatric surgery and hemostasis was performed by oversewing with a running suture. among these cases reoperation for postoperative bleeding was needed in 13 cases (0.4%), but no bleeding from the staple line was encountered (0%). the difference was statistically significant. no significant complications related to the use of this type of reinforcement were encountered. conclusions: oversewing the gastric stapled line in bariatric surgery is superior to hemostatic clip application in preventing postoperative bleeding from the stapled line. a protocol of active search for bleeders during the bariatric procedure should be implemented and respected in all cases.
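the significance claim for staple-line bleeding above (5 events among 1270 clip patients vs. 0 among 2967 oversewn patients) can be checked with a small-count test; this is an illustrative sketch of a one-sided fisher (hypergeometric) test on the counts reported in the abstract, not the authors' actual analysis:

```python
from math import comb

# counts reported in the abstract:
# clip period: 5 staple-line bleeds among 1270 patients
# oversewing period: 0 staple-line bleeds among 2967 patients
a, n1 = 5, 1270
b, n2 = 0, 2967
total_events = a + b          # 5 events overall
n = n1 + n2                   # 4237 patients overall

# one-sided fisher p-value: probability of seeing at least `a` of the
# 5 events in the clip group if events were distributed at random
p_value = sum(comb(n1, k) * comb(n2, total_events - k)
              for k in range(a, total_events + 1)) / comb(n, total_events)
print(f"one-sided p = {p_value:.4f}")
```

a p-value well below 0.05 here would be consistent with the abstract's statement that the difference is statistically significant; fisher's exact test is the usual choice when a cell count is zero.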
gastroenterological surgery, saitama medical university international medical center, hidaka-shi, saitama, japan intestinal endometriosis is a rare disease which affects about 10 to 30% of patients with endometriosis, and it favors the rectum and sigmoid colon. here we report 5 cases (shown in the table) that underwent laparoscopic resection for intestinal endometriosis. there were no postoperative complications in any case, and all patients were discharged on the 5th-8th postoperative day. before the operation, 2 of 5 patients were diagnosed with intestinal endometriosis; preoperative diagnosis was difficult. among them, symptoms clearly related to menstruation were present in only one case. in the case of a submucosal tumor, preoperative diagnosis seems difficult. additional imaging examination at menstruation may be useful for diagnosis. d2 dissection was performed for cases 1, 2 and 4 because malignant disease could not be excluded on preoperative diagnosis. 2 of them were strongly suspected of endometriosis on surgical findings. in intestinal endometriosis surgery, pelvic adhesions and fibrosis are often advanced. in the sigmoidectomy cases, the average operation time was 152 min and the blood loss was 10 ml. in the rectal resection cases, the average operation time was 282 min and the blood loss was 17 ml. in cases 1 and 5, pelvic adhesion was severe, the residual rectum could not be straightened, and side-to-side anastomosis was performed. in intestinal endometriosis surgery, the intestinal anastomosis method should be considered flexibly. conclusion: laparoscopic surgery for intestinal endometriosis was safe, but technically difficult because of fibrosis and adhesion. it is important to diagnose accurately from clinical symptoms, imaging and intraoperative findings. the anastomotic method should be decided according to the case.
aim: the aim of the study was to identify and highlight some of the complications one can encounter in bariatric surgery, specifically sleeve gastrectomy, and discuss the therapeutic options at one's disposal. methods: the study was retrospective. we identified 260 patients who had a sleeve gastrectomy performed in our clinic over a 2-year period. of these, 10 had important surgical complications encountered during the surgery or in postoperative care. results: the group included 260 patients, with an average bmi of > 40 kg/m². average hospital stay was 7 days, with an average of 4.5 days that increased to 30 days when fistulas were encountered. the most frequent surgical complications were bleeding from the gastric suture (6 cases) and gastric fistula (4 cases). other complications encountered were wound hematomas. surgery was required in 4 of the 6 cases of bleeding, and 3 of the fistula cases required reintervention. one case was resolved with endoscopic stenting. conclusions: laparoscopic sleeve gastrectomy is considered a safe procedure with good results for the patient. although complications are rare, they pose a series of technical difficulties for the surgeon due to the weight of the patient and the frequent comorbidities that come with obesity. a thorough understanding of the symptoms and good follow-up ensure the best results. aims: to achieve additional weight loss or to resolve band-related problems, a laparoscopic adjustable gastric banding (lagb) can be converted to a laparoscopic roux-en-y gastric bypass (rygb). there is limited data on the feasibility and safety of routinely performing a single-step conversion. we assessed the efficacy of this revisional approach in a large cohort of patients operated in a high-volume bariatric institution.
to the best of our knowledge this series represents the largest single-center study on conversion from lagb to rygb. methods: between october 2004 and december 2017, a total of 1383 patients who underwent lagb removal with rygb were identified from a prospectively collected database. in all cases, a single-stage conversion procedure was planned. the feasibility of this approach and the peri-operative outcomes of these patients were evaluated and analyzed. results: a single-step approach was successfully achieved in 920 (86.5%) of the 1383 patients. during the study period, there was a significant increase in performing the conversion from lagb to rygb as a single-stage procedure. no mortality or anastomotic leakage was observed in either group. only 49 patients (3.6%) had a 30-d complication: most commonly hemorrhage (n = 23/49), with no significant difference between the groups. conclusion: converting a lagb to rygb can be performed with very low morbidity and zero mortality in a high-volume revisional bariatric center. with increasing experience and full standardization of the conversion, the vast majority of operations can be performed as a single-stage procedure. only a migrated band remains a formal contraindication for a one-step approach. surg endosc (2019). six months after surgery the mean hrql score was 2 (0-6) in 5 patients who underwent lsg and 0.6 (0-2) in 3 patients who underwent lgb. twelve months after surgery the mean postoperative questionnaire score was 6 (0-12) in 2 patients who underwent lsg. at ph-manometry the mean percentage time of acid reflux in orthostatism was 7.3 (range 6-8.6) and in clinostatism 5.3 (range 0.3-10.4). the mean demeester score at the distal electrode was 33.1 (13.7-52.5). conclusions: in asymptomatic patients, complete gerd evaluation before bariatric surgery allows better selection of the surgical procedure, to reduce the postoperative occurrence of severe or de novo gerd.
postoperative gerd evaluation provides useful data regarding the impact of lsg on gastroesophageal reflux. a larger patient sample size is required. aims: vertical calibrated gastrectomy (usually known as gastric sleeve) as a unique technique gives better results than the roux-en-y bypass in terms of improvement of anthropometric measures, reduces comorbidities and has a lower rate of postsurgical complications, with an improvement in quality of life. material and methods: an observational, longitudinal, retrospective and comparative study with 95 patients, aged 18-65 years, over a period of 3 years. all patients had to comply with the protocol of the unit. demographics of the population and anthropometric data were measured at the presurgical consultation and at one month and one year after surgery: weight, height, bmi, weight loss percentage, bmi percentage and percentage of excess weight lost. we collected data on cardiovascular risk by the framingham score. quality of life was measured by the baros scale. major comorbidities were hypertension, diabetes and dyslipidemia. complications were measured in absolute frequencies. for the statistical study, we applied student's t test or chi-square, with p equal to or less than 0.05 considered statistically significant. results: there was no statistically significant difference between the 2 surgical techniques at one month (p = 0.83), but differences were evident at one year (p = 0.003). no gender or age differences were apparent. major complications did not appear in gastrectomy (no leaks), highlighting the number of bleeds with this surgical technique. in the bypass group there were two leaks. there was no statistically significant difference in cardiovascular risk (p = 0.07) between the two techniques. there was a more significant decrease in the number of comorbidities in gastrectomy versus the bypass, with a total disappearance of patients with dyslipidemia.
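the anthropometric outcome measures listed above (bmi, weight loss percentage, percentage of excess weight lost) follow standard formulas; a minimal sketch, assuming the common convention of defining ideal weight at a bmi of 25 kg/m² (the abstract does not state which reference it used, and the example patient is invented):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

def percent_excess_weight_loss(initial_kg: float, current_kg: float,
                               height_m: float, ideal_bmi: float = 25.0) -> float:
    """%ewl: weight lost as a fraction of the excess over ideal weight."""
    ideal_kg = ideal_bmi * height_m ** 2
    return 100.0 * (initial_kg - current_kg) / (initial_kg - ideal_kg)

# hypothetical patient: 120 kg, 1.70 m, reaching 90 kg after surgery
print(round(bmi(120, 1.70), 1))                              # 41.5
print(round(percent_excess_weight_loss(120, 90, 1.70), 1))   # 62.8
```

the choice of reference (bmi 25 vs. a weight table) changes the %ewl denominator, which is one reason series report total weight loss percentage alongside %ewl.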
there were no statistically significant differences in baros score, although it was higher in gastrectomy. conclusions: vertical gastrectomy as a unique technique can be considered superior in the short term, as well as safe, according to the aec quality parameters. we think it will be necessary to continue the study into the medium-long term. aims: to analyze the impact of different bariatric surgery techniques on carbohydrate metabolism and the pancreatic beta-cell population of non-obese adult wistar rats. methods: we used twenty healthy non-obese adult wistar rats divided into five randomly assigned groups, each with n = 4. the control groups were divided into fasting control (f) and sham (surgical control). the surgical groups were separated into vertical gastrectomy (gs), 50% resection of the middle small bowel (ri50) and gastric bypass (gb). in each group we assessed: beta-cell mass modifications; pancreatic islet histomorphometry; proliferation, apoptosis and neogenesis in the pancreatic beta-cell population; intraperitoneal glucose tolerance test; body weight and food intake. statistical analysis was performed using the mann-whitney test. results: the malabsorptive and restrictive groups had a significantly smaller weight increase than the control groups. the intraperitoneal glucose tolerance test showed that the incremental glucose area under the curve (auc) was significantly higher in the malabsorptive group and lower in the restrictive group compared to the control groups during the second (p < 0.01) and third (p < 0.05) months of the study. the beta-cell mass was significantly higher in the ri50 group compared with the control groups. there was a significantly increased number of beta cells per pancreatic insulin-positive area in gs and gb. proliferation was significantly increased in the ri50 and gb groups, and significantly decreased in gs. there was no significant difference in apoptosis among surgical and control groups.
in neogenesis, differences between groups were assessed qualitatively by the presence of pdx-1 expression, which was higher in rygb. the endocrine pancreas in our model is altered by the anatomical and functional conditions arising from the surgical techniques. carbohydrate metabolism is affected by temporary adaptive processes due to the surgical alternatives. there is hyperplasia and hypertrophy of the beta cells in surgeries with a malabsorptive component, as well as greater neogenesis. these results could explain part of the relationship between the enteropancreatic axis and incretins. m. buza, c. copaescu introduction: nowadays, we have high volumes of obese patients for whom surgery is the answer, but unfortunately psychological evaluation has no standard recommendation in the preoperative evaluation of bariatric patients. it is argued that surgical success, in addition to the operation itself, relies on behavioral changes and that one of the goals of the preoperative assessment is to prepare the patient for the postoperative period, aiming to optimize surgical results. aim: although no formal standard exists in the literature, there is growing recognition of the important elements to be addressed and the appropriate means of collecting the necessary data to determine psychological readiness for these procedures. methods: information regarding the components of the clinical interview and the specific measures used for psychological testing is discussed. given the limited data on predicting success after surgery, determining psychological contraindications for surgery is addressed. additionally, the multiple functions served by the psychologist during this assessment procedure are highlighted, along with the value of this procedure in the patients' preparation for surgery as well as in the postoperative follow-up.
in our center of excellence for bariatric and metabolic surgery (coe) we introduced in 2013 a mandatory pre- and postoperative psychological evaluation for all patients entering the metabolic program. results: psychological evaluation of patients before bariatric surgery is a critical step, not only to identify contraindications for surgery, but also, and more so, to better understand their motivation, readiness, behavioral challenges, and the emotional factors that may impact their coping and adjustment through surgery and the associated lifestyle changes. postoperative follow-up is necessary. the psychological evaluation of the patient undergoing bariatric surgery is an invaluable piece of the larger pre- and post-surgical assessment, aiming at better results in the short and long term after bariatric surgery. introduction: a mesenteric cyst is defined as a benign abdominal tumor located in the mesentery of the gastrointestinal tract, identified in about 1 of 100,000 hospital admissions. mesenteric chylous cysts are rare pathologic entities that often present with unspecific symptoms. the preoperative diagnosis requires all the common abdominal imaging techniques. usually the correct diagnosis may be made only at operation or during histological examination. all mesenteric cysts should be resected in order to avoid their complications; complete surgical resection is recommended and curative in the majority of cases, with a low risk of local recurrence. the laparoscopic approach is the gold standard in the treatment of intraabdominal mesenteric chylous cysts. laparoscopic resection provides less pain, a shorter hospital stay, and early recovery for the patient. case report: we report the case of a 28-year-old saudi woman who presented to our clinic complaining of upper abdominal pain and a mass in the epigastrium for one week, with no history of nausea, vomiting, or recent changes in bowel habits.
her medical and family histories were clear and she had never had any abdominal interventions. abdominal palpation revealed a smooth-surfaced mass palpable in the left upper quadrant. ultrasonography and computed tomography of the abdomen revealed an approximately 93 × 72 × 66 mm unilocular cyst closely related to the mesentery in the left side of the upper abdomen, not related to the pancreas. the cyst was completely excised by laparoscopy within healthy borders to avoid recurrence; it contained milky white fluid. the histopathological findings were of a chronically inflamed mesenteric cyst. a review of the literature considering this rare entity was also performed to evaluate our treatment strategy. conclusion: mesenteric chylous cysts represent a diagnostic challenge and they should be considered when a physician encounters an intraabdominal mass. usually the correct diagnosis may be made only at operation or during histological examination. the treatment of choice is complete surgical excision, which can be safely performed by laparoscopy. background: diverticulum of the appendix is relatively rare, and appendiceal diverticulitis was reported to have a higher risk of perforation than appendicitis. in the us and europe, because of the high risk of perforation, preventive appendectomy is recommended for appendiceal diverticulosis, even if the patient has no abdominal pain. methods: we retrospectively reviewed the records of 672 post-operative patients who were diagnosed with appendicitis or appendiceal diverticulitis on pathological findings in our institution from january 2012 to october 2018. all patients underwent computed tomography (ct) before operation. 652 patients underwent laparoscopic surgery, including appendectomy, cecal resection, ileocecal resection and right hemicolectomy, while 20 patients underwent open surgery. a total of 12 cases of appendiceal diverticulitis were analyzed in our study.
result: 11 patients had abdominal pain before surgery. 4 patients were diagnosed with appendiceal diverticulitis by preoperative ct. all patients underwent laparoscopic surgery (10 appendectomy, 1 cecal resection, and 1 ileocecal resection). on pathological findings, perforation of the appendix was found in 5 patients, and the pseudo type of diverticulum with no muscle layer was found in all patients. 660 patients with appendicitis were treated surgically during the same period. among them, a perforation of the appendix was found in 78 cases; the perforation rate was 11.8%. on the other hand, the perforation rate of appendiceal diverticulitis was 58.3% in our study. conclusion: the perforation rate of appendiceal diverticulitis was higher than that of appendicitis in our study. for examination of the treatment strategy, including preventive appendectomy, further accumulation of cases is needed. case presentation: a 59-year-old man was referred to our hospital with right lower quadrant abdominal pain for 2 days. his temperature was 38.4°c. his white blood cell count was 30,200/µl, and c-reactive protein level was 8.3 mg/dl. ct revealed multiple diverticula of the cecum and appendix. a micro-abscess and free air were found around the appendix. we diagnosed this case as appendiceal diverticulitis and laparoscopic appendectomy was performed. a perforation was found in the resected appendix. microscopic study revealed a pseudo-diverticulum. the inflammation of the appendix was stronger on the serosal side than on the mucosal side. this finding accorded with appendiceal diverticulitis. introduction: in order to reduce abdominal trauma and the length of scar incisions (also during laparoscopic surgery), many approaches have been proposed during the last decade, such as single access laparoscopic surgery (sals).
the aim of our paper was to update the data of our previous paper with a greater cohort of patients and a longer follow-up, also showing the single access laparoscopic left colectomy (salc) technique, in particular with inferior mesenteric artery preservation, imap (valdoni's technique). materials and methods: we made a retrospective analysis, from october 2009 to october 2016, of all patients who underwent a sals approach for colorectal disease in the department of general and mininvasive surgery of san camillo hospital of trento. statistical analysis was performed using ibm spss statistics 23. continuous data were expressed as mean ± standard deviation (sd). categorical data were expressed as absolute number and percentage. the results are presented as 2-tailed values with statistical significance if p values < 0.05. results: from october 2009 until october 2016, 72 salc for colorectal surgery were performed in our unit. of these 72, 58 were for left colectomy. in 12 cases we performed an imap. salc with imap was performed only in cases of benign disease. the mean operative time was 149.74 ± 27.93 min. only one intraoperative complication was recorded, a splenic capsule tear, resolved with apposition of fibrillar haemostats. according to the clavien-dindo classification there were 2 grade ii complications (a bleed resolved with blood transfusion and one pancreatitis resolved with medical therapy), 2 grade iiia complications (anastomotic bleeding resolved endoscopically; both arose in patients with imap) and 2 grade iiib complications due to anastomotic leakage which needed reoperation. the mean length of incision was 3.64 ± 0.86 cm. logistic regression did not show any correlation between imap and any complications. conclusion: in conclusion, salc is a safe but very challenging technique which needs a longer learning curve than the conventional laparoscopic one.
in laparoscopic colectomy, also, imap seems to be safe and effective, without correlation with post-operative complications, even if performed with a single access laparoscopic approach. aims: to describe an infrequent anatomical variation that can give rise to diagnostic and therapeutic difficulties. methods: a patient with ivemark syndrome (situs ambiguus and polysplenia) with acute appendicitis, and a bibliographic review. results: a 41-year-old male consulted for flank and right hypochondrium pain of 18 h of evolution, associated with nausea without vomiting, no fever nor other symptoms. on physical examination he was in good general condition, with selective tenderness to palpation in the flank and right hypochondrium, with involuntary guarding and positive decompression at this level. the rovsing and psoas signs were negative. laboratory tests showed leukocytes of 17,000 with 82% neutrophils and crp (c-reactive protein) of 22 mg/l. abdominal ct (computed tomography): the cecum and ileo-cecal valve were visualized at the subhepatic level, with an adjacent tubular structure that seemed to correspond to the cecal appendix, which was increased in size (12 mm), with findings suggestive of acute appendicitis. the sigmoid and descending colon were located in the right hemiabdomen. the second duodenal portion was located anterior to the superior mesenteric artery. the superior mesenteric vein was located to the left of the superior mesenteric artery, rotating around it (radiological signs compatible with intestinal malrotation). no free fluid, collections or pneumoperitoneum. laparoscopic appendectomy for phlegmonous acute appendicitis was performed without incident. the post-operative course was uneventful, with discharge at 48 h; the pathology report was phlegmonous acute appendicitis.
conclusions: ivemark syndrome is a genetic alteration with a multifactorial inheritance pattern, characterized by an alteration in the position of the mesenteric vessels, which leads to abnormal rotation of the intestine during the embryonic period and alteration of the position of different intra-abdominal organs, without a specific pathognomonic pattern; it is associated with congenital heart anomalies in 50 to 90% of cases, with only 5 to 10% of patients reaching adulthood. a case of acute appendicitis is presented in a patient with this anomaly, which can lead to diagnostic and therapeutic difficulties due to the anatomical variations involved. abdominal tomography is the imaging method that provides the best performance for the diagnosis of acute pathologies in this type of patient. background: the clinical manifestations which occur in relation to decompression during scuba diving are variable. mild symptoms have often been reported in the gastrointestinal tract. this is one of the severe cases of gastrointestinal barotrauma. ischemic colitis caused by air embolism is very rare, and is therefore reported and discussed. case presentation: a 58-year-old man visited our emergency room with diffuse abdominal pain and bloody diarrhea of 2 days' duration. the patient was a skilled diver who had harvested seafood by diving for 30 years. two days before presenting, the patient had severe abdominal pain just after diving for 2 h at a depth of 30 meters. he was immediately transferred to a local hospital for hyperbaric oxygen therapy, but there was no improvement in the symptoms. abdominal ct angiography showed decreased enhancement with wall thickening of the terminal ileum, ascending colon, sigmoid colon and rectum. sigmoidoscopy showed diffuse huge ulcerative lesions and ischemic changes in the mid rectum and sigmoid colon.
emergent subtotal colectomy and temporary loop ileostomy were performed, and pathologic findings revealed diffuse mural infarct with serosal abscess formation in the whole colon and transmural infarct in the terminal ileum. conclusion: a surgical approach could be one of the treatment options, though it depends on the severity of the symptoms and the patient's condition. colonic lipomas are extremely uncommon benign tumours, with an incidence ranging between 0.035% and 4.4%. although they are most frequently asymptomatic, when colonic lipomas are > 2 cm they may present symptoms such as constipation, abdominal pain or rectal bleeding. most colonic lipomas typically occur in middle-aged women and are located in the ascending colon and the caecum, while occurrence in other parts of the colon and rectum is rare. in this case report, we describe a lipoma that caused descending colon intussusception. a 55-year-old male presented with a longstanding history of constipation. relevant personal history included active smoking, hypertension, hypercholesterolemia, psoriasis with joint involvement and reiter syndrome. he had had no previous surgery. he attended the emergency services on 17th july 2018 with a two-day bowel obstruction, without fever or nausea, and was attended by our surgical emergency unit. he had been assessed during the previous months by gastroenterology, with a colonoscopy that showed a 4 cm submucosal lesion that partially occluded the descending colon, with inconclusive biopsy. an abdominal contrast-enhanced computed tomography (ct) was performed, confirming a well-defined mass located at the splenic flexure of the descending colon, causing a large bowel intussusception, though without acute obstructive signs. the surgery was scheduled a few weeks later, performing a laparoscopic segmental resection with primary anastomosis including oncologic margins.
the patient evolved satisfactorily in the postoperative period and was discharged six days after the surgery without any complications. likewise, he was monitored on a regular basis at our outpatient department and was free of symptoms at the 1-month follow-up visit. the histological analysis revealed a 5 cm ulcerated lipoma affecting 60% of the bowel circumference. the molecular study, using fluorescent in situ hybridization (fish), showed no mdm2 gene amplification. laparoscopic segmental resection of the large bowel is a safe and feasible technique for the treatment of large bowel intussusception caused by a colonic lipoma. complete removal of the lipoma determines the prognosis. furthermore, in the future, endoscopic surgery using colonoscopy could be employed when there is a certain preoperative diagnosis of lipoma. introduction: acute appendicitis is one of the most common abdominal surgical emergencies, the diagnosis of which mostly relies on conventional methods such as physical examination and blood tests. the use of ultrasonography and abdominal ct aids in more precise diagnosis, especially in patients with atypical presentation or in the elderly. aim: this study aims to evaluate the ability of the neutrophil/lymphocyte ratio (nlr), platelet/lymphocyte ratio (plr) and mean platelet volume (mpv) in predicting the diagnosis of acute appendicitis. methods: retrospective analysis of prospectively maintained data of all patients (n = 98) admitted with acute appendicitis to the emergency department at a tertiary hospital in the middle east between january 2016 and september 2016. medical records and the database of patients who had appendicectomy for clinically and radiologically proven appendicitis were reviewed. the retrieved data included patients' demographics and laboratory values of white blood cells (wbc), neutrophils (n), lymphocytes (l), and platelets (p), along with their ratios for comparison. results: spss version 23 was used for tabulating the data.
the recommended cutoff values of the nlr, plr and mpv in predicting the diagnosis of acute appendicitis were determined using receiver operating characteristic (roc) curve analyses. for nlr, the point estimate was 0.47 (47% of the positive values), with confidence limits between 35 and 58%. our results showed that the laboratory parameters were fairly predictive of the diagnosis in our population. conclusion: although appendicitis is a clinical diagnosis, laboratory parameters, especially nlr, plr and mpv, can be used as an adjunct in the diagnosis of acute appendicitis. literature is scarce concerning the validity of such parameters in our part of the world, and prospective randomized controlled trials are needed to prove the efficacy of such a rationale. objective: tumors of the cecal appendix represent a subset of colonic neoplasms whose early diagnosis is a real clinical challenge. they correspond to 0.5% of all gastrointestinal tumors and their prognosis depends on the type of lesion, the carcinoid type being the most frequent variety. appendix involvement in endometriosis is rare, accounting for 3% of all endometriosis cases, and sometimes mimicking cecal tumors. methods: a 43-year-old woman with a history of hypothyroidism due to autoimmune thyroiditis and atrophic gastritis with gastric neuroendocrine tumors resected by endoscopy. during follow-up in the digestive unit, a double-contrast ct was requested, showing a lobulated, contrast-enhancing lesion of approximately 27 × 21 × 20 mm in the cecum adjacent to the ileocecal valve, suggestive of tumor. colonoscopy showed a protruding appendicular ostium with an inflammatory aspect, which was biopsied.
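the roc-based cutoff selection described in the appendicitis study above is typically done by maximising youden's j (sensitivity + specificity − 1) over candidate thresholds; a self-contained sketch on hypothetical nlr values (all data below is invented for illustration, not taken from the study):

```python
def best_cutoff(values, labels):
    """scan candidate cutoffs and return the one maximising youden's j.
    values: marker values (e.g. nlr); labels: 1 = appendicitis, 0 = control."""
    pos = [v for v, l in zip(values, labels) if l == 1]
    neg = [v for v, l in zip(values, labels) if l == 0]
    best = None
    for c in sorted(set(values)):
        sens = sum(v >= c for v in pos) / len(pos)   # true positive rate
        spec = sum(v < c for v in neg) / len(neg)    # true negative rate
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (c, j, sens, spec)
    return best

# hypothetical nlr values and diagnoses, for illustration only
nlr    = [2.1, 3.0, 3.4, 4.8, 5.5, 6.2, 7.9, 8.4, 9.1, 10.3]
labels = [0,   0,   0,   0,   1,   1,   1,   1,   1,   1]
cutoff, j, sens, spec = best_cutoff(nlr, labels)
print(cutoff, sens, spec)
```

in practice a library such as sklearn's `roc_curve` would be used on the real patient data; the brute-force scan above just makes the cutoff logic explicit.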
the pathological anatomy of the biopsy reported chronic congestive colitis with edema of the lamina propria and minimal acute activity, with moderate local eosinophilia. the case was presented to the multidisciplinary oncology committee and it was decided, given the patient's background, to operate on the lesion. laparoscopic right hemicolectomy was performed, with extracorporeal latero-lateral mechanical anastomosis using a 60 mm endo gia signia stapler. results: the patient evolved favorably, with good oral tolerance and bowel function. she was discharged on the sixth postoperative day. the pathological anatomy reported a tumor lesion at the appendicular ostium compatible with endometriosis at the base of the cecal appendix, ruling out malignant tumor pathology. conclusions: gastrointestinal tract endometriosis represents 3-15% of cases, most frequently located in the rectosigmoid region. appendix involvement in endometriosis is rare, accounting for 2-3% of all endometriosis cases, and presents a preoperative diagnostic challenge because it sometimes mimics a carcinoid cecal tumor. in our case, due to the patient's history, we assumed that the cecal lesion was a carcinoid tumor, so we performed a laparoscopic right colectomy; had we known that it was endometriosis, we could have performed an appendectomy, although in both cases the laparoscopic approach offers some benefits compared to the open approach. aims: the natural history and predictive factors associated with chronic anastomotic complications have not been clearly studied. the aim of this study was to evaluate the predictive factors related to chronic anastomotic complications. methods: from january 2010 to december 2016, a total of 53 patients who developed anastomotic leakage were enrolled in this study. all patients underwent anterior resection with or without defunctioning stoma due to colorectal cancer.
the patients received follow-up by clinical examination and abdominopelvic computed tomography (ct). they underwent a follow-up ct every 6 months for the first year and then every 12 months for the next 2 years. the complicated group (cg) developed chronic anastomotic complications such as stricture, fistula or chronic sinus; the normal group (ng) did not. results: there were no significant differences in gender, age, preoperative chemoradiotherapy or operation type between the two groups. a low rectal lesion and a defunctioning stoma at the time of primary surgery were more frequent in the cg (p = 0.013, 0.021). there were no significant differences in type of anastomotic leakage, international leakage grade or ct findings at the time of diagnosis of anastomotic leakage. however, abnormal ct findings at 6 months were more frequent in the cg group (p < 0.0001). in multivariate analysis, abnormal ct findings at 6 months were the only significant factor related to chronic anastomotic complications. conclusions: abnormal ct findings at 6 months were associated with prediction of chronic anastomotic complications. aims: acute appendicitis is the most common cause of acute abdomen requiring surgical intervention in the world. nowadays, standard treatment of acute appendicitis involves a surgical approach, either laparoscopic or open. the purpose of the present study is to evaluate the safety of discharge within less than 24 h after appendectomy for uncomplicated acute appendicitis. conclusions: patients who undergo appendectomy (open or laparoscopic) for acute uncomplicated appendicitis, without surgical incidents and with an adequate social/family network, can be discharged in less than 24 h without a higher risk of post-operative complications or readmissions than patients with longer postoperative stays.
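the imaging follow-up protocol above (ct every 6 months for the first year, then every 12 months for 2 more years) can be expressed as a small schedule generator; a minimal sketch with hypothetical helper names, clamping the day of month to 28 to keep the month arithmetic simple:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """shift a date forward by whole months, clamping the day to 28."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

def ct_schedule(surgery: date) -> list:
    """ct at 6 and 12 months post-surgery, then every 12 months for 2 years."""
    return [add_months(surgery, k) for k in (6, 12, 24, 36)]

# example: surgery on 15 january 2016 yields visits in jul 2016,
# jan 2017, jan 2018 and jan 2019
for visit in ct_schedule(date(2016, 1, 15)):
    print(visit)
```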
it will be necessary to conduct more prospective studies with a higher level of evidence to corroborate our results. aims: median arcuate ligament syndrome (mals), also known as celiac axis compression syndrome, is a rare condition caused by compression of the celiac trunk and the nerves located in this area (celiac plexus) by the median arcuate ligament. it is believed that mals is caused by the median arcuate ligament compressing the celiac plexus nerves over the celiac trunk; another probable cause may be lack of blood flow to the organs supplied by the celiac artery, although this theory is controversial. the first clinical sign of mals is the appearance of postprandial abdominal pain in the upper abdomen. this typical pain forces patients to avoid eating, which can lead to weight loss (often more than 20 pounds). other associated symptoms may include nausea, diarrhea, vomiting and delayed gastric emptying (a delay in food moving from the stomach into the small intestine). in relation to this uncommon condition, we present a clinical case of laparoscopic management of mals. methods: we present a 23-year-old patient with complaints of recurrent epigastric pain, postprandial vomiting and weight loss. blood tests and gastroscopy were performed to help rule out more common causes of his symptoms, such as gastroesophageal reflux disease (gerd), gastritis or gastroparesis. as part of the differential diagnosis, mals was suspected and a mesenteric doppler ultrasound was ordered to check blood flow through the celiac trunk and evaluate possible compression of the celiac plexus. an angio-ct scan was also performed to confirm the diagnosis. once mals was diagnosed, we decided to perform a laparoscopic approach as the definitive surgical procedure. results: the patient was discharged 48 h after surgery with no remarkable events during his postoperative stay. he has been followed up for 6 months, remaining asymptomatic. 
conclusions: the laparoscopic approach in mals offers superior visualization during surgery and involves lower morbidity compared to the open approach, which makes it an optimal treatment for this condition. aim: pilonidal sinus is a common disease with annoying and often painful symptoms. traditional surgical techniques for its treatment are characterized by either intense postoperative pain and prolonged wound-healing periods (wide resection, marsupialization) or unsatisfying aesthetic results (advancement or rhomboid flaps). 'endoscopic pilonidal sinus treatment' (epsit) is a new minimally invasive technique which utilises the meinero scope, primarily designed for the endoscopic treatment of complex perianal fistulas in a technique known as vaaft. we present our experience and outcomes in three treatment centers in northern greece. methods: between july 2015 and november 2018 we treated 61 patients with pilonidal sinus using the epsit technique. the mean age of patients was 30, and 85% of them were male. 4 patients were treated in the acute phase with the presence of a pilonidal abscess. all operations were performed by two laparoendoscopic surgeons specifically trained in the technique. most patients were treated on a day-case basis. postoperative wound care included daily tract irrigation with 10 ml of saline for a total of 10 days. results: there were no immediate postoperative complications. median postoperative pain was 2.8 on a vas scale. 91% of patients were discharged on the same day; 4 patients remained in hospital for one day, mainly for social reasons. return to daily activities was immediate. in a maximum follow-up of 24 months we observed 5 recurrences. conclusions: epsit is a promising minimally invasive technique for the treatment of pilonidal sinus. what makes it most attractive is the minimal postoperative pain, the excellent cosmetic result and the fast recovery with return to daily activities. 
introduction: isolated acute chylous peritonitis is a rare event. when it presents as an acute abdomen warranting surgical intervention, it is often difficult to determine the cause preoperatively. here, we report a case of acute chylous peritonitis due to meckel's diverticulitis presenting with clinical features suggestive of acute appendicitis. presentation of the case: a 32-year-old female who presented with abdominal pain and clinical features consistent with acute appendicitis underwent diagnostic laparoscopy. she was found to have four-quadrant chylous peritonitis and ileus caused by an inflamed meckel's diverticulum adhered underneath a loop of small bowel and mesentery leaking chyle. after uneventful postoperative recovery, she was discharged on postoperative day two with oral antibiotics and was advised to take a low-fat diet. aims: perforated diverticulitis with purulent peritonitis (hinchey iii) has traditionally been treated with surgery including colon resection and stoma (hartmann procedure), with considerable postoperative morbidity and mortality. laparoscopic lavage has been suggested as a less invasive surgical treatment. methods: a 78-year-old woman with a 10-day history of abdominal discomfort exacerbated during the last 48 h. ct scan showed pneumoperitoneum accompanied by free fluid and a 6 cm collection adjacent to the descending colon, showing diverticula suggestive of covered perforation. after 48 h of non-response to medical treatment, and given the impossibility of percutaneous drainage due to interposition of intestinal loops, colon and lumbar vessels, urgent surgical intervention was decided. results: laparoscopic lavage of all 4 quadrants was performed with saline, 3 l or more, at body temperature, until clear fluid was returned. two non-suction j-pratt drains were placed. intravenous antibiotics were continued for a minimum of 72 h, then oral antibiotics were continued for 1 week. 
oral fluids were commenced on the first postoperative day and solids were subsequently introduced, depending on clinical progress. conclusion: laparoscopic management is a reasonable alternative to traditional open resection for hinchey grade ii-iii perforated diverticulitis with generalized peritonitis. this approach has a low mortality rate despite patient co-morbidity and disease severity. benefits include stoma avoidance and minimal wound infection. subsequent elective resection is probably unnecessary and readmission in the medium term is uncommon. background: constipation and fecal incontinence are common annoying complications after pull-through procedures for hirschsprung disease (hsd). many causes could underlie these problems; perineal descent syndrome could be the major hidden cause. the aim of this study is to evaluate the role of perineal descent syndrome in the development of post-pull-through constipation and fecal incontinence, and to evaluate the role of laparoscopic rectopexy in treating these problems. patients and methods: 380 patients were treated with pull-through for hsd over a period of five years. 62 of the 380 patients presented with constipation or fecal incontinence: 23 patients with constipation and 39 patients with fecal incontinence. rectal exam, anorectal manometry, defecography, contrast enema, rectal biopsy, emg, proctoscopy and endorectal ultrasound were performed in all patients. patients with stricture, missed aganglionic segment, injured internal anal sphincter, or loss of the sensory mucosa above the dentate line were excluded from the study. anterior wall rectopexy was performed for anterior wall rectocele. posterior wall rectocele was treated by retrorectal mesh rectopexy. emg was repeated 7 weeks and 7 months after surgery. outcome measurements included constipation, fecal incontinence and pudendal nerve latency. 
results: 62 cases of post-pull-through constipation and fecal incontinence: 23 patients with constipation and 39 patients with fecal incontinence. 7 patients with stricture, 3 patients with missed aganglionic segment, 2 patients with loss of anal sensory sensation and 2 patients with injured anal sphincter were excluded from the study. defecography showed 40 patients with anterior rectocele (22 males and 18 females) and 8 patients with posterior rectocele (2 males and 6 females). the patients' mean age was 8.93 ± 2.4 years. emg showed prolonged pudendal nerve conduction in all cases. anterior wall and retrorectal rectopexy were performed laparoscopically without complications. constipation resolved in all patients after surgery. all patients showed full control of defecation. pudendal nerve latency decreased in all patients. conclusion: perineal descent syndrome proved to be a major hidden cause of post-pull-through constipation and fecal incontinence. laparoscopic rectopexy provided a good solution for these complications. cystic lymphangioma is a rare entity. the surgical indication is determined by size and symptomatology, and consists of complete excision of the tumor. the laparoscopic approach is feasible in these cases, allowing broad visualization of the anatomy and access to the retroperitoneum in the context of a minimally invasive approach, with better patient recovery and no increase in morbidity compared to the conventional approach. we therefore advocate laparoscopic surgery as the technique of choice for the general surgeon facing these rare abdominal tumors, favoring minimally invasive surgery. aims: laparoscopic posterior sutured rectopexy is one of the accepted treatment options for full-thickness rectal prolapse. recently, reduced port surgery (rps) has been an emerging concept that, compared with conventional multiple port surgery (mps), yields reduced postoperative pain and improved cosmesis. 
the aim of the study is to evaluate the feasibility and safety of rps for full-thickness rectal prolapse. methods: rps was performed by single incision plus one puncture, using an internal organ retractor (ior) to secure the operative field. pulling one ior with 3-4 strings in 3-4 directions makes it possible to retract the internal organs three-dimensionally. this multi-directional flexible retraction could secure a good operative field. from 2012 to 2018, 32 patients (rps: 22 cases, mps: 10 cases) underwent laparoscopic posterior suture rectopexy for total rectal prolapse. short-term outcomes were compared between the two procedures. results: there was no significant difference between rps and mps in median operative time (175 vs 167.4 min, respectively, p > 0.05). the median blood loss volume was not significantly different between the rps and mps groups (7.7 vs 8.0 ml, p > 0.05). the median duration of hospital stay after surgery was not significantly different between the two groups (12.9 vs 13 days, respectively, p > 0.05). the frequency of complications after surgery was not different between them. conclusions: reduced port laparoscopic rectopexy can be a good therapeutic option for total rectal prolapse. a prospective, randomized, controlled trial should be conducted to confirm the superiority of this procedure over mps. the piccolo project proposes a new compact, hybrid and multimodal photonics endoscope based on optical coherence tomography (oct) and multi-photon tomography (mpt) combined with novel red-flag fluorescence technology for in vivo diagnosis and clinical decision support. its development includes different phases of validation. within this framework, the main objective of the present study is to characterize a model of rat colonic hyperplasia, which will be used for the development and validation of the aforementioned endoscopic technology. 
secondary objectives: ensure the reproducibility of the chosen model and determine the optimal time after induction of the model. material and methods: 12 animals (rattus norvegicus), wistar, males and females < 1 year old, randomly distributed. group 1 (n = 2): by laparotomy, a non-resorbable, non-stenosing suture (silk 4/0) is placed through the wall of the colon. group 2 (n = 2): by endoscopy, a 0.3 mm long segment of a polymeric catheter is inserted and fixed to the wall of the colon by means of a suture. group 3 (n = 2): by endoscopy, a self-expanding, uncoated metallic stent is placed in the colon. group 4 (n = 2): a superficial laser resection of the colonic mucosa is performed by endoscopy. group 5 (n = 4): as an extension of the optimal model. weekly, the animals were anesthetized again to perform a colonoscopy, which determined the degree of mucosal growth in the descending colon, and colonic biopsies were taken weekly (4 weeks). results: group 1 showed growth around the sutures after the second follow-up, diagnosed as hyperplastic polyps after histopathological analysis. aim: the role of laparoscopy in the management of generalized appendicular peritonitis is controversial, mainly due to the lack of scientific data. through this study and a laborious bibliographic search, we set out to report our experience, in terms of postoperative results, with the laparoscopic treatment of generalized appendicular peritonitis, to try to identify the risk factors associated with the occurrence of global morbidity, and to conclude on the feasibility of this technique in its treatment. methods: we conducted a retrospective study including all cases of generalized appendicular peritonitis managed laparoscopically in the general surgery department of charles nicolle hospital between january 2006 and december 2016. results: we identified 93 patients. the mean age was 31.7 years. one fifth of the cases required midline conversion (20.4%). 
the mean operative time was 146.6 ± 36.7 min. the overall morbidity rate was 15%, including 7 surgical complications. there were no deaths. in univariate analysis, comorbidity, crp > 200 mg/l, operative time exceeding 170 min and midline conversion were significantly associated with postoperative morbidity. co-morbidity, diabetes, asa score > 2, delay of consultation > 3 days, intra-abdominal abscess and operative time exceeding 170 min were significantly associated with medical complications. the univariate analysis also revealed that crp > 200 mg/l and midline conversion were predictive of surgical complications. the multivariate analysis identified midline conversion as the only independent factor significantly associated with postoperative morbidity (odds ratio = 6.57, 95% confidence interval [1.28-33.7]). conclusion: based on our results, it appears reasonable to continue the laparoscopic management of diffuse appendicular peritonitis. however, improving this technique is essential in order to reduce the midline conversion rate and to shorten operative time, both of which can lead to postoperative complications. aims: currently, acute appendicitis is the most common surgical emergency. laparoscopic appendectomy is the usual procedure to treat acute appendicitis. the aim of this study is to evaluate the safety of electrocoagulation in the treatment of the mesoappendix in laparoscopic appendectomy. methods: we retrospectively studied a prospective database of patients operated on for appendectomy in the emergency surgery unit. we reviewed laparoscopic appendectomies from june 1st, 2014 to december 31st, 2017. the mesoappendix was electrocoagulated in every laparoscopic appendectomy. the statistical analysis was done with spss version 24.0. results: our group consists of 294 patients, of which 59.2% were male and 40.8% were female. the average age was 37.94 years with a standard deviation of 18.14, and p75 was 51.5 years. 
the most common total stay was 1 day (100 patients). the usual postoperative stay was one day (144). we classified the diagnoses into complicated appendicitis (82 patients) and uncomplicated appendicitis (198 patients). the conversion rate was 3.1% (9). the main surgical complications were: surgical wound infection (1.4%); intraabdominal abscess (6.8%); and bleeding (1%). only one of the patients who suffered bleeding had complicated appendicitis. the medical complications were catheter sepsis (0.7%); respiratory infection (0.3%); cardiological (0.3%); and paralytic ileus (4.1%). the treatment of the mesoappendix with electrocoagulation is safe and effective, since the complication rate is very low. even so, it would be necessary to conduct more prospective randomized studies in order to obtain sufficient evidence about the treatment of the mesoappendix with monopolar electrocoagulation. introduction: the difficulty of resection of the rectum is determined by its anatomical relationships, intimately in contact with the bladder, seminal vesicles, prostate and urethra in men, the vagina in women, and nerve structures that provide defecatory, genital and urinary functionality. these relationships create a major impediment due to problems of visualization and difficult dissection, such that conventional surgical techniques entail a series of complications derived from this difficulty. we propose a new approach in rectal surgery in patients with inflammatory bowel disease. material and methods: a 49-year-old man with a history of ulcerative colitis developed a severe acute flare refractory to treatment. a total laparoscopic colectomy with a terminal ileostomy was performed in 2016. in 2018 he was scheduled for reconstruction. we found a rectal stump of about 10 cm with signs of inflammatory disease at the mucosal level. 
a transanal proctectomy was performed with creation of a j-pouch and ileoanal anastomosis about 4 cm from the anal margin by laparoscopy. the postoperative course was favorable, and he was discharged on the sixth day. currently in follow-up in digestive and general surgery, he is asymptomatic and has an optimum level of quality of life as assessed by the sf-36 12 weeks after the intervention. conclusions: our service introduces the transanal approach for proctectomy in cases of inflammatory disease, a technique that provides clear advantages by improving visualization and the identification of anatomical structures. in this way, a safe dissection of the pelvis is achieved, close to the serosa of the rectum, with preservation of the mesorectum and the hypogastric plexus, and with consequent improvement of genital and urinary function. the result is an equally safe surgery, which implies little increase in operative time and a better, shorter postoperative recovery. the conservation of pelvic innervation avoids disorders of ejaculation, vaginal lubrication, and bladder and rectal motility. the transanal approach for proctectomy provides benefits in terms of preservation of the hypogastric plexus, minimizing the anatomical difficulties involved in rectal surgery and maintaining urinary and sexual function. aims: to evaluate the feasibility and outcomes of laparoscopic appendicectomies in both simple and complicated appendicitis, given the increasing trend towards a laparoscopic approach over the last four decades for the treatment of acute appendicitis. we present data from a district general hospital over a 7-year period. methods: we retrospectively analysed a single consultant's continually updated database of laparoscopic appendicectomies between 01/03/2012 and 15/12/2018 (82 months). patient demographics, investigations, intraoperative findings and postoperative outcomes were recorded and analysed. 
complicated appendicitis was defined as the formation of an appendiceal mass or abscess with or without perforation and peritonitis. results: 81 cases of laparoscopic appendicectomy were identified during the specified period. the median patient age was 30 (range 10-89 years). true positive rates for uss and ct were 33% and 84%, respectively. the rate of negative appendicectomies was 14%. transanal minimally invasive surgery (tamis) has been used for the treatment of rectal neoplasms such as benign polyps and early rectal cancer. when the tumour is located in the upper rectum or close to the rectosigmoid junction, this approach may be technically difficult. we present a video of a tamis resection of a large polyp located 20 cm from the anal verge. after preoperative examination and ct and mri were performed, the patient was prepared for surgery, and transanal minimally invasive surgery was proposed. resection of the polyp was performed with the aid of an endo gia stapler and conventional laparoscopic materials. total resection of the polyp with a free margin was possible. the postoperative pathology report confirmed a villo-tubular adenoma with high-grade dysplasia and a lesion-free margin. tamis resection of tumours located above the rectosigmoid junction may be a safe and feasible technique in selected patients. aims: pelvic organ prolapse (pop) is a very relevant problem for women's quality of life and has a prevalence of about 5% as defined by symptoms and up to 50% when established by physical examination. nowadays, sacrorectopexy for posterior pop and sacrocolpopexy for apical pop are considered the gold standard techniques. recently, we have seen that laparoscopic lateral suspension is a feasible procedure for apical pop, obtaining a success rate higher than 90% at one year. these results are similar to what we can achieve with sacrocolpopexy. 
methods: we herein present the case of a 70-year-old woman with apical and posterior pop, which had an important impact on her quality of life, with obstructive defecation (requiring digitation) and urinary incontinence. we proposed sacrorectopexy for her posterior pop and laparoscopic lateral suspension for her apical pop. in the video we show how we perform a ventral mesh sacrorectopexy, following d'hoore's technique, and a laparoscopic lateral suspension with preperitoneal dissection, following the technique described by the team headed by dubuisson and veit-rubin. we used 4 laparoscopic ports (12, 5, 2.9 and 2.5 mm). results: the patient was discharged home on the second postoperative day and has not had any sign of recurrence or extrusion after more than two years of follow-up. in addition, she has not suffered lower urinary tract symptoms, constipation or pain. conclusions: we present a case in which we carried out a laparoscopic lateral suspension instead of a sacrocolpopexy for an apical pop, obtaining good short-term and long-term results. we consider it too soon to assess this technique's efficacy, and it has to be validated in studies with larger patient samples. nevertheless, we think this procedure might become an excellent alternative to sacrocolpopexy for apical pop. aims: laparoscopy is a minimally invasive approach with low morbidity. the aim is to show the usefulness of the laparoscopic approach for massive intra-abdominal abscesses, which is controversial. we report three patients who underwent emergency laparoscopy for peritonitis or massive intra-abdominal abscesses not amenable to a percutaneous approach that were suspected to be caused by acute diverticulitis. methods: all patients had a diagnosis of acute diverticulitis (hinchey grade ii-iii) with pelvic abscesses situated between the sigmoid and bladder, or diffuse peritonitis. the patients with hinchey grade ii had failed conservative management with antibiotics. 
they underwent emergency laparoscopy under general anaesthesia, with three abdominal ports. intra-abdominal abscess cavities were exposed and the purulent exudate was sampled and aspirated. copious irrigation was performed under direct vision, with thorough examination revealing no other findings. the procedure was completed laparoscopically in all cases. results: all patients had a favourable evolution. one of them had a properly drained faecal fistula which changed to a purulent fistula on the twentieth postoperative day; this patient underwent laparoscopic left colectomy three months later because he had had a new episode of acute diverticulitis. the other two cases showed very good clinical evolution, without evidence of fistula in the postoperative period, and they were completely asymptomatic one month later. conclusion: in our experience, laparoscopic drainage is a feasible, safe and effective treatment for pelvic abscesses and diffuse peritonitis secondary to acute diverticulitis. n. pinheiro, a. ziegler introduction: solitary rectal ulcer syndrome (srus) is a rare disease whose pathophysiology remains uncertain. it was first described in 1829 by cruveilhier, and its clinicopathological features were reported in 1969 by madigan and morson, who associated it with defecatory disorders, internal rectal prolapse and psychological changes. according to the literature, about 26% of patients are asymptomatic. when symptomatic, the diagnosis can be made through physical examination and clinical history, and is often confirmed by endoscopy with biopsies. treatment depends on the severity of the symptoms and the existence of associated rectal prolapse. according to the literature, conventional surgical options include local excision, rectal mucosectomy, rectopexy and segmental colonic resection. case report: a 28-year-old male complaining of anal bleeding at bowel movements for 10 years. he underwent conservative treatment several times, but without improvement. 
he sought proctological care and underwent colonoscopy, which showed an ulcerated lesion on the anterior wall of the distal rectum. further investigation with videodefecography revealed colorectal intussusception with associated mucosal prolapse, considered the factor causing the ulcer. sacropromontofixation was elected. he evolved with improvement of anal bleeding, mucorrhea and anal discomfort. after a proctological examination, which was normal, a control colonoscopy performed 5 months after surgery revealed rectal mucosa with residual scarring and disappearance of the submucosal nodule present in the initial examination. reassessed after 12 months, the patient is asymptomatic. conclusion: a solitary rectal ulcer whose causal factor was colorectal prolapse (intussusception) with mucosal exteriorization through the anal canal, treated with sacropromontofixation. j.p. mali, p.j. mentula, a.k. leppäniemi, v.j. sallinen approximately 15-20% of patients diagnosed with colonic diverticulitis have an intra-abdominal abscess as a complication. an abscess diameter of 3-6 cm is generally accepted as a cut-off determining the choice of treatment between antibiotics alone and percutaneous drainage. the aim of this study was to analyze the treatment choices and outcomes of patients with diverticular abscesses. this was a retrospective cohort study conducted in helsinki university hospital, an academic teaching hospital functioning as a secondary and tertiary referral center. patients with computed tomography-verified acute left-sided colonic diverticulitis with intra-abdominal abscess were collected from a database containing all patients treated for colonic diverticulitis in our institution during 2006-2013. altogether, 241 suitable patients were included in the analyses. those treated primarily with percutaneous drainage or antibiotics alone (29 and 150 patients, respectively) were further compared with regard to treatment results. 
the main measured outcomes were the need for emergency surgery and 30-day mortality. abscesses under 40 mm were mostly treated with antibiotics alone, with a high success rate (93 out of 107, 87%). for abscesses over 40 mm, the use of emergency surgery increased and the use of antibiotics alone decreased with increasing abscess size, but the proportion of successful drainage remained at 13-18% regardless of abscess size (figure 1). there were no differences in failure rate, 30-day mortality, need for emergency surgery, permanent stoma, recurrence, or length of stay in patients treated with percutaneous drainage versus antibiotics alone, even when groups were adjusted for potential confounders. white blood cell count ≥ 15.0 × 10^9/l, abscess diameter ≥ 50 mm, and corticosteroid medication were independent risk factors for failure of treatment with antibiotics alone. patients without these risk factors had 95% success with antibiotics alone, and patients with one risk factor had 78% success. percutaneous drainage as treatment for large abscesses does not seem to be superior to treatment with antibiotics only. the majority of patients with abscesses over 60 mm in diameter undergo surgery as the primary intervention. introduction: even today, 'chronic appendicitis' is a clinical term that is neither widely accepted nor well documented in the medical community. its etiology is the presence of a mass (e.g. fecal mass, hyperplasia of lymphatic tissue, etc.) that continuously and partially obstructs the appendix lumen. it presents as low-intensity, intermittent abdominal pain, with exacerbations and remissions, located in the right iliac region. the pain lasts up to several months and is usually underestimated by the patient. its diagnosis is based on imaging examination. appendectomy is the treatment of choice for chronic appendicitis. the operation is challenging for the surgeon, who has to cope with an intensely inflamed area around the appendix without ease of access to that area. 
purpose: to present our laparoscopic approach to a case of chronic appendicitis and to review the literature. case report: a 56-year-old woman was hospitalized due to chronic appendicitis. the patient had been treated conservatively with intravenous antibiotics in two separate hospital admissions, dated 2 and 4 months back respectively. eight weeks after the last exacerbation, she underwent a laparoscopic appendectomy. results: even though the procedure was planned six months after the first episode, the laparoscopy revealed severe inflammation of the appendix, which extended to the caecum and the surrounding preperitoneal tissues. despite the difficulty of the operation, it was completed successfully laparoscopically. the histological examination confirmed beyond doubt the existence of 'chronic appendicitis'. the patient was discharged uneventfully on the third postoperative day. conclusions: chronic appendicitis is an existing clinical entity that the surgeon may encounter during his career. in the hands of an experienced laparoscopic surgeon, the laparoscopic approach is feasible and safe. introduction: ventriculo-peritoneal shunting (vps), used in the treatment of hydrocephalus, is associated with several complications. the exact cause of shunt extrusion is not known. visceral perforation is an unusual but serious complication, with consequences such as peritonitis, meningitis or encephalitis. management involves prompt removal of the shunt, intravenous antibiotics, an adequate recovery interval so that the cerebrospinal fluid culture is sterile, followed by shunt replacement on the opposite side. aim: multidisciplinary management of extrusion of a vps through the anus by laparoscopy and external ventricular drainage. case exposure: a 49-year-old woman had a vps inserted 11 years ago after excision of a gangliocytoma due to lhermitte-duclos disease. she was admitted to the emergency department, without symptoms, after trans-anal protrusion of the vps catheter. 
the neurological and abdominal evaluation was normal. laboratory tests did not reveal abnormalities, and the abdominal ct scan suggested perforation, showing the insertion of the end of the catheter in the sigmoid, without pneumoperitoneum or intraabdominal free fluid. cranial ct scan did not describe signs of hydrocephalus. the patient underwent emergency surgical intervention. first, antibiotic therapy was initiated, the neurosurgery team placed an external ventricular drain, and they disconnected the proximal catheter side. after that, an exploratory laparoscopy was performed. it revealed a microperforation and a collection adjacent to the appendix base due to proximity with the catheter. additionally, the catheter was freed from adhesions at the point of entry into the colon and, after careful dissection, we released the vps from the colon, which had a 1.5 cm transmural trajectory at the sigmoid level. no free fluid was seen and the rest of the bowel appeared normal. the distal end was removed through the anus and the proximal end through a laparoscopic port. we performed a laparoscopic segmental cecum resection, and an extracorporeal colo-colonic anastomosis was performed through a mini-pfannenstiel assistance laparotomy. there were no complications in the postoperative period, and the patient was discharged on the 5th day. conclusion: a multidisciplinary approach with laparoscopic support in the diagnosis and treatment of patients with colon perforation caused by a vps catheter is a feasible and safe option in tertiary centers. background: acute appendicitis continues to be the most common source of complicated intraabdominal infection worldwide. the high incidence of postoperative complications and dissatisfaction with the results of treatment in cases of complicated appendicitis and peritonitis were the reasons for conducting this study. 
aim: to evaluate the effect of different laparoscopic trocar positions in laparoscopic appendectomy for diffuse appendicular peritonitis on the incidence of postoperative complications. methods: the results of laparoscopic treatment of 116 patients with acute appendicitis complicated by diffuse peritonitis were analyzed. the first group consisted of 37 (32%) patients operated by triangulation access (type 1 trocar placement according to the sages guidelines for laparoscopic appendectomy (sages qla)). the second group consisted of 79 (68%) patients operated by sectorisation access (type 4 sages qla). postoperative complications were classified by the clavien-dindo classification. results: the duration of the operation for the analyzed groups was 91.1 ± 29.9 vs 84.5 ± 24.9 min. there were no deaths among these patients. the overall incidence of postoperative complications was 36.2%. postoperative complications in the triangulation and sectorisation groups were 43% and 21.6% respectively (p = 0.043). clavien-dindo iiib complications were noted in 4.3% (n = 5) of patients and presented as intra-abdominal abscesses (iaa). all patients with iaa were operated in the sectorisation group. conclusion: sectorisation trocar placement increases the incidence of intra-abdominal complications in laparoscopic appendectomy for diffuse appendicular peritonitis. introduction: diverticular disease of the colon is a chronic entity with a variety of abdominal symptoms that can present with recurrent episodes of acute diverticulitis (ad). the prevalence of diverticulosis is not influenced by gender and increases with age, which, given the increase in life expectancy, explains the accumulation of cases in western countries. the classic diagnostic-therapeutic algorithm of the disease has been based on the hinchey classification, the use of antibiotics and hartmann's intervention (ih) in the acute setting, and elective colectomy in multirecurrent cases.
the use of laparoscopy with lavage and drainage is currently more widespread in cases with peritonitis. objectives: to demonstrate the safety and efficacy of the laparoscopic approach in cases of diverticular disease complicated by a severe inflammatory plastron with 'covered' perforation and several recurrent episodes. material and method: case report: a 46-year-old man with acute diverticulitis 10 years ago, with complete resolution and a normal control colonoscopy. in the last two months he presented three episodes compatible with acute diverticulitis; examination showed a plastron-like mass in the hypogastrium without guarding, and ct showed marked thickening of a 10 cm segment of the mid sigmoid and a non-drainable 3 cm collection in the mesosigmoid that loses the cleavage plane with neighboring small-bowel loops, with a linear tract suggesting fistulization. images of interest are presented. given the evolution, elective surgical treatment was decided. results: intervention: preoperative double ureteral catheterization and a laparoscopic approach, shown on video: rectosigmoid resection for diverticular plastron, with negative intraoperative biopsy, and mechanical colorectal anastomosis. good postoperative course, with discharge on the 5th day. definitive pathology: perforated diverticulitis, absence of malignancy. the laparoscopic approach is a valid and effective alternative in cases of complex and severe diverticular disease. aim: tamis resection has been described for the treatment of rectal neoplasms, whether benign or early malignant tumours. since the appearance of tamis, many different indications have been reported. we aim to show a special indication, as seen in this video of tamis resolution of a rectal stenosis not treatable by endoscopy. method: we present a video of a female patient, previously treated for a large rectal adenoma by a transanal approach, with postoperative sepsis which required a lateral colostomy and transanal drainage.
after surgery, the patient suffered from a rectal stenosis which could not be solved by endoscopy, so she was referred back for surgical treatment. we decided to perform a transanal approach by tamis, and a long, circumferential stenosis around 6 cm from the anal verge was seen. we performed a rectotomy by electrocautery in the posterior rectal wall until the perirectal fat was seen and the stenosis was passed. dilatation with a foley catheter was also performed. results: the postoperative course was uneventful; after 6 months she underwent colostomy closure with no complications and remains asymptomatic to this day. conclusion: the tamis approach to rectal stenosis may be a safe and feasible technique in selected cases if conservative treatments fail. iatrogenic endoscopic colon perforation is a severe but rare complication of colonoscopy. its incidence is estimated to be 0.016-0.8% for diagnostic colonoscopies and 0.02-8% for therapeutic colonoscopies. the management of these complications depends on the size of the lesion, the time elapsed between production and diagnosis of the lesion, and associated pathology. the treatment can be conservative, endoscopic or surgical (open/laparoscopic). in our service, in the last 10 years, we treated 5 cases of iatrogenic colon perforation after diagnostic colonoscopies. all lesions were at the sigmoid level. one patient was admitted to our service 3 days after a diagnostic colonoscopy; he was operated on by open surgery, in emergency, and we found fecaloid peritonitis and a perforation at the sigmoid level. we performed a colostomy, lavage and drainage, but the patient died after 4 days. in 4 cases the operation was performed within a maximum of 2 h after the lesion was diagnosed by the endoscopist (direct visualization); no radiologic investigation was done. these patients were operated on laparoscopically, with suture, lavage and drainage. their evolution was good.
conclusion: iatrogenic colonic perforation is a rare but severe complication. laparoscopic surgery can be a choice in the treatment of this complication. introduction: complicated diverticulitis with fistula is responsible for about 20% of surgical procedures in diverticular disease and is commonly found in patients with diverticulitis of the sigmoid colon. colovesical fistulas are the most frequent (65%), with the highest incidence in males. only a third of these patients have a history of diverticulitis. in most cases, treatment is surgical, and colectomy is performed, with or without vesical resection. case report: a 61-year-old male with pneumaturia and fecaluria for the preceding 4 months. colonoscopy identified diverticulitis of the sigmoid colon, and the subsequent pelvic mri suggested a colovesical fistula. cystoscopy was not able to identify any fistulous opening, but a double-j catheter was placed in the left ureter, as surgical treatment had been proposed. subsequent abdominal pain motivated a preoperative ct scan, which revealed a pneumoretroperitoneum and a fluid collection near the left ureteral tract. the multidisciplinary team on the case decided to perform a percutaneous nephrostomy, followed by an exploratory laparoscopy. the fistula tract was identified and a laparoscopic sigmoidectomy with partial cystectomy was performed, as well as a ureterorenoscopy (with double-j replacement). there were no intra- or postoperative complications, and the pathology report showed no signs of malignancy. a video of the surgical procedure is presented. conclusion: a laparoscopic approach to complicated diverticulitis with colovesical fistula is safe and effective when performed by experienced colorectal surgeons. introduction: diverticular disease is characterized by its high prevalence, being one of the most frequent causes of hospital admission when it comes to gastrointestinal pathology.
even though it is more frequent in older patients, there has been an increase in incidence amongst lower age groups. the approach to the disease has also changed in the last few years, showing a tendency toward less invasive options and deferring elective surgery to later in the course of the disease. this study examines the therapeutic approach to diverticular disease in our hospital. methods: retrospective analysis of demographic data, therapeutic options and surgical outcomes in patients admitted for diverticular disease between january 2015 and june 2018. results: 154 patients (n = 154) were included in the study: 75 (49%) were male and 79 (51%) were female, with an average age of 61 years. 135 patients (88%) underwent medical treatment, with surgery reserved for the remaining 19 patients, 11 of which were emergencies, the other 8 being elective (5%). about 89% of patients were only admitted once, 9% were admitted twice and 2% had 3 or more episodes. for the single-admission group, the most common treatment was medical (93% of cases), as was the case for the group with 3 or more episodes (in which 58% of cases received medical treatment). in the patient group with 2 episodes, 53% underwent surgery, most of it elective. as far as the surgically treated group is concerned, no statistically significant differences were found with regard to patient sex. age, however, was significantly greater for the group that underwent emergency surgery versus elective surgery (65.2 years vs 45.2 years, p < 0.001). the most common procedure overall was colonic resection, with hartmann's operation standing out as the most frequent in the emergency surgery group. length of hospital stay was again higher for the emergency group (vs elective; 14 vs 5 days, p < 0.001), as was the morbidity rate. no statistically significant differences were found with regard to mortality rate.
conclusion: knowledge of the natural history of diverticular disease has led to changes in the approach to treatment, with a tendency to adopt a less aggressive therapeutic strategy. despite controversy around aspects such as the selection of patients for elective surgery, among others, it is key to the approach to diverticular disease that existing recommendations are taken into account, treatment is individualized and outcomes are closely monitored. surg endosc (2019) aims: the objective of this analysis is to establish whether there are differences after laparoscopic appendicectomy comparing the use of endoloop (el) vs. endostapler (es) in complicated and non-complicated acute appendicitis. methods: we performed a retrospective analysis of a prospective database of 499 patients from february 2012 to june 2018. we divided the patients into two groups, according to whether the appendicectomy was performed with endoloop or endostapler. the groups were created by selecting 229 patients so as to be homogeneous with respect to the appendiceal perforation rate; a propensity score was therefore computed for sex, age and perforation rate. a univariate analysis was carried out regarding differences between el and es in the occurrence of abdominal complications, as well as hemorrhage, ileus, surgical wound infection, collection, reintervention or hospital readmission. qualitative variables were expressed in terms of absolute frequencies and percentages, and mean values and standard deviations were used to express quantitative variables. introduction: intramural haematomas can develop anywhere within the gastrointestinal tract [3]. they are most frequently associated with blunt trauma above the level of the sigmoid colon and very rarely occur in the rectum [1]. spontaneous, non-traumatic haematomas are a rare clinical condition, usually secondary to haematological blood disorders or anticoagulant therapy [2].
case summary: a 56-year-old gentleman presented to the emergency department with a 2-day history of worsening lower abdominal pain and bloody stool. he had presented twice within the previous week with worsening, generalised abdominal pain. the patient had been taking regular aspirin and clopidogrel following insertion of coronary artery stents. on clinical examination, he was guarding, with a distended, generally tender lower abdomen, but all observations were stable and he was afebrile. an initial computed tomography of the abdomen reported pneumoperitoneum with haemorrhagic ascites; a differential diagnosis being a perforated sigmoid colon with a large localised haematoma. the patient underwent an emergency laparotomy and hartmann's procedure (appendicectomy, sigmoid colostomy and rectal stump). he recovered well with no significant post-operative complications. histology reported the rectal perforation macroscopically associated with an opened haematoma and no evidence of malignancy; the appendix showed reactive appendicitis with serosal inflammation. background: ulcerative colitis (uc) is one of the risk factors for developing sporadic colorectal cancer. approximately 15% of uc patients develop an acute attack of severe colitis, and 30% of these patients require colectomy. one third of patients will not respond to steroid therapy. thus, long-term follow-up has been recommended. case report: we report a single case of a completed 5-year follow-up of colorectal cancer related to ulcerative colitis in a 54-year-old female patient who underwent emergency surgery (2-staged total colectomy and j-pouch ileo-rectal anastomosis) after 1 year of no response to prior medical treatment, presenting with bloody diarrhea and anemia. no postoperative complications were reported.
the pathologic finding was early adenocarcinoma; close follow-up was done each year, and for another five years no progression of the disease was found, and the patient has a good quality of life after these procedures. laparoscopic colorectal surgery (lcs) is becoming a standard and feasible surgical method worldwide. 20% of patients with crohn's disease (cd) and 80% of patients with ulcerative colitis (uc) will require an operation during their life. over the last decade, there have been many studies documenting the safety and feasibility of the laparoscopic approach for ibd in well-selected patients. methods: patients with cd with tight stenosis in the distal ileum and/or ileo-colon or various colonic and rectal stenoses, and patients with uc with ineffective medical therapy, steroid dependence or dysplasia underwent lcs. from 2009 to 2018, 247 ileocolic resections, 63 hemicolectomies, 74 subtotal colectomies and 35 restorative proctocolectomies with ileo-pouch-anal anastomosis were performed either totally laparoscopically or laparoscopically assisted (n = 419). the average time of the procedure was 105 min (55-295 min), average blood loss was 125 ml (0-350 ml) and conversion to laparotomy occurred in 8.2%. the average return time of bowel function was 3.5 days (2-9 days) and the average hospital stay was 7.1 days (6-11 days). complications occurred in 24 patients (5.7%): 3 cases of early ileus due to adhesions, 2 cases of anastomotic bleeding treated conservatively, 1 case of instrumental perforation of the small bowel, 7 cases of incisional hernia in the minilaparotomy and 11 wound infections. conclusion: in well-selected patients with ibd, thanks to superior short- and long-term outcomes, the laparoscopic approach should be considered a safe and effective method when performed by experienced surgeons. supported by mo1012.
aim: over 80% of patients with crohn's disease (cd) will require a surgical resection within 10 years of their diagnosis, and one quarter will have another resection for disease recurrence. laparoscopy should be the preferred approach in surgery for cd due to reduced morbidity, faster recovery time, shorter hospital stay and a reduction in adhesions and hernia formation. methods: patients with cd with tight stenosis in the distal ileum and/or ileo-colon or various colonic stenoses were indicated for laparoscopy. from january 2009 to november 2018 we performed 246 ileocolic resections, 63 hemicolectomies and 72 subtotal colectomies, either totally laparoscopically or laparoscopically assisted. the average time of the procedure was 92 min (55-240 min). the average return time of bowel function was 3.5 days (2-9 days) and the hospital stay was 6 to 10 days. complications occurred in 14 patients (3.7%). in 3 cases early ileus developed due to adhesions, in 1 case anastomotic bleeding was treated conservatively, incisional hernia in the minilaparotomy occurred in 6 cases and there were 4 wound infections. conclusion: minimally invasive surgery is becoming the gold standard in cd. it is safe and feasible in well-selected patients thanks to its short- and long-term outcomes. the laparoscopic approach for recurrent disease is still under debate. supported by mo1012. aim: the aim of the study was to observe when laparoscopy is avoided in treating the surgical complications of crohn's disease. methods: we conducted a retrospective study which included all patients diagnosed and operated on in our clinic for complications of crohn's disease during a period of 3 years. results: we identified 62 patients operated on for complications of crohn's disease. of these, 15 were operated on by minimally invasive procedures. we observed that laparoscopy was avoided in the case of intestinal fistulas (p = 0.02).
laparoscopy was also avoided when sepsis was associated with the surgical complication (p = 0.01). age under 43 years represented another factor for avoiding laparoscopy (p = 0.04). conclusions: although laparoscopy offers numerous advantages, careful selection of patients is of utmost importance so that the safety of the procedure can be ensured. retroperitoneal sarcoma represents approximately 12-15% of all sarcomas and less than 0.5% of all neoplasms. radiotherapy and chemotherapy still do not represent valid therapeutic alternatives; therefore radical surgery remains the only valid option. complete surgical resection is the only potentially curative treatment modality for retroperitoneal sarcomas. complete resection of a retroperitoneal sarcoma, together with tumor grading, remains the most important predictor of local recurrence and disease-specific survival. hypoglycemia is a rare but potentially life-threatening presentation of soft tissue tumors, and its etiology may be difficult to diagnose; assays for insulin-like activity (ila) were found to be high in the extract of tumor tissue, while insulin was not detected in significant concentration either in the same extract or in the serum. the most likely mechanism of hypoglycemia appears to be production of an insulin-like substance and increased utilization of glucose by the tumor. laparoscopic surgery represents an alternative technique for radical resection of such tumors rather than traditional surgery. only a few cases of retroperitoneal tumors resected laparoscopically have been reported in the literature. we report a rare case of a 53-year-old male who presented to the ed unconscious due to hypoglycemia. he was resuscitated and admitted for further investigations. the hypoglycemic attack recurred during the same evening of admission. initial investigations were within normal limits except for a serum glucose of 35 mg/dl (2.0 mmol/l).
his tsh, glucagon and cortisol levels were within normal limits; insulin and c-peptide levels were undetectable. the only other abnormality was hypokalemia (2.3 meq/l). he tested negative for anti-insulin antibodies. his abdominal ultrasound as well as his ct scans showed the presence of a large retroperitoneal tumor (15 cm × 12 cm × 7 cm) with heterogeneous contrast enhancement. a glucose supplement was required to maintain the plasma glucose level within normal limits until complete resection of the tumor, which was performed laparoscopically. diagnosis of such a hypoglycemia-inducing retroperitoneal fibrosarcoma represents a great challenge, especially when the patient presents only with hypoglycemia and no other abdominal symptoms; management using a minimally invasive technique to resect and remove such tumors from the retroperitoneal region shows superiority in recovery and limitation of complications when done by experienced surgeons. solitary fibrous tumor (sft) is a rare fibroblastic mesenchymal neoplasm, typically arising from the pleura and less frequently from other anatomic sites. sft is an indolent neoplasm, but cases of greater aggressiveness in terms of local recurrence and, more rarely, distant metastases have been described. among the various extrapleural sites, the intra-abdominal/retroperitoneal localization is the most common, followed by the pelvic soft tissues and parenchymatous organs. the most common clinical finding of intra-abdominal localization is a palpable mass, and pain is the most frequently associated symptom. the diagnosis is suggested by imaging, but the histological as well as immunohistochemical characterization of the lesion is the ultimate goal. furthermore, histological features are used to attempt to identify patients with a high risk of malignant evolution of the tumor. the gold standard treatment is the surgical approach; meanwhile, there is no evidence of the efficacy of any adjuvant treatment.
we present the case of a 62-year-old man affected by a symptomatic sft arising from the mesosigmoid, treated by radical surgical excision. finally, we propose a review of the literature of the last decade. background: laparoscopic right hemicolectomy involves making an additional incision to remove the specimen and perform the anastomosis. recently, natural orifice specimen extraction surgery (noses) has been reported as an alternative approach without any additional incisions or extensions, which may lead to better outcomes compared to conventional laparoscopic right hemicolectomy. in this video, we aimed to evaluate the safety and feasibility of noses for laparoscopic right hemicolectomy. methods: in this video we describe the technique of transvaginal specimen extraction with d3 lymph node dissection in laparoscopic right hemicolectomy. we performed an intracorporeal anastomosis combined with a transvaginal route of specimen extraction after medial-to-lateral mobilization. a transverse transvaginal posterior colpotomy was performed under visual guidance. the specimen was pulled into a sterile plastic bag and passed transvaginally. the vaginal incision was then closed with a running suture. results: the operation time was 230 min and the hospital stay was 6 days. an excellent postoperative recovery was demonstrated, showing future potential for fewer incisions. the pathologic tnm stage was t3n0m0. conclusions: this video has shown that laparoscopic right hemicolectomy with the noses technique is feasible and safe for selected cases. the long-term benefits of this procedure need further evaluation.
recently, indocyanine green (icg) fluorescence has been introduced in laparoscopic colorectal surgery to provide detailed anatomical information. the aim of our study is the application of icg imaging during laparoscopic colorectal resections: to identify the sentinel lymph node (sln) in order to search for micrometastases that can be missed with the conventional pathological exam, and to assess anastomotic perfusion to reduce the risk of anastomotic leak. after tumor identification, 5 ml of icg solution (0.3 mg/kg) is injected subserosally around the tumor. a full hd image1 s camera, switched to nir mode, displays fluorescence in about 10 min: the sln is identified and sln biopsy (slnb) is performed. after the transection, 5 ml of icg solution is injected to confirm perfusion of the stumps. if there is an ischemic area, a new resection is performed. after the anastomosis is fashioned, another bolus of icg is injected intravenously to confirm anastomotic perfusion. when the sentinel node is negative for cancer metastases on conventional histological examination, ultrastaging is performed by serial sections. when no micrometastases are identified on these sections, immunohistochemical techniques are applied. from november 2016, 70 patients were enrolled: 22 left colectomies, 38 right colectomies, 2 transverse resections, and 8 splenic flexure resections. in two cases, one left colectomy and one right colectomy, the anastomotic perfusion was not good and the surgical strategy was changed. four postoperative complications occurred, including one anastomotic leak due to a mechanical problem. from november 2017, 40 patients were enrolled for slnb: 23 right colectomies, 11 left colectomies, 1 transverse resection and 5 splenic flexure resections. the sln was identified in 37 cases. 17 cases were found to be n0 on conventional examination and were subjected to ultrastaging. the serial sections showed micrometastases in two cases.
in the other cases immunohistochemistry was performed, but the examination is still in progress. icg-enhanced fluorescence imaging is a safe, cheap and effective tool to increase visualization during surgery. it is recommended to assess anastomotic perfusion in order to reduce the incidence of anastomotic leak, and to perform slnb for sln ultrastaging in order to identify micrometastases. methods: over the last 3 years, tem was performed on 28 patients with early rectal cancer. there were 19 women and nine men, aged 68 to 87. tumors were located 4-12 cm from the anus. the mean size of the tumors was 3.8 cm. full-thickness excision was performed in all patients, with suturing of the mucosa. during follow-up, metastases in the lymph nodes of the mesorectum were detected in three patients. all of these patients were re-operated: laparoscopic colectomy with total mesorectal excision (tme) was done. over the last year, in 9 patients with early-stage rectal cancer we used indocyanine green (icg) with fluorescent imaging for mapping the sentinel lymph node. icg was injected into the submucosa in four quadrants around the tumor. during laparoscopy, the sln was detected and removed for morphological examination. results: among the nine patients, the sln was negative in 8. tem was performed in these patients with good results. after 10-12 months, no recurrence or metastasis was detected in these patients. in the two patients with a positive sln, laparoscopic tme was performed with a low colorectal anastomosis. an anastomotic complication occurred in one patient. conclusion: the tem procedure is highly effective in a selected group of patients with early rectal cancer. mapping and examination of the sln can clarify the indication for tem in patients with early rectal cancer. purpose: laparoscopic surgery for colorectal cancer provides better short-term benefits and similar long-term outcomes compared with conventional open surgery.
beyond minimally invasive surgery alone, natural orifice specimen extraction (nose) can provide additional advantages by reducing morbidity and postoperative pain related to the surgical extraction site. this study aimed to evaluate the efficacy and safety of a nose procedure using needlescopic instruments for colon cancer surgery. methods: between november 2013 and february 2018, 6 patients underwent laparoscopic nose using needlescopic instruments. the first port, for the camera, was placed at the umbilicus. a 5-mm or 12-mm port was inserted in the right lower quadrant. a 3-mm or 5-mm port was inserted in the right upper quadrant. individual needlescopic forceps for the assistant were inserted through left upper and lower quadrant ports. thus, a total of 5 ports were placed. the superior rectal artery and inferior mesenteric vein were ligated with clips, and colonic mobilization was performed using a medial-to-lateral approach. after rectal stump irrigation, the distal rectum was transected using an endoscopic linear stapler. the proximal colon and associated mesentery were transected. after the rectal stump was opened, a wound retractor was pulled through the anus and inserted into the rectal lumen. the resected specimen was extracted transanally through this route. an anvil was attached intracorporeally to the proximal colon, and the open rectal stump was reclosed using an endoscopic linear stapler; colorectal anastomosis was then performed using a double-stapling technique. results: of the 6 patients, 4 were male and 2 were female, with a median age of 71 years (44-76 years). median body mass index was 21.8. the tumor site was the sigmoid colon in 4 patients and the rectosigmoid colon in 2 patients. median operative time was 309 min and blood loss was 13 ml. there was no conversion to open surgery. no postoperative complications were observed. median postoperative hospital stay was 8 days (7-12 days).
conclusions: nose surgery using needlescopic forceps is an easily performed type of reduced-port surgery with a conventional port arrangement. this procedure is feasible for selected patients. introduction: splenic flexure colon cancer with obstruction is usually managed with stent insertion as a bridge to surgery followed by left hemicolectomy, or with subtotal colectomy. however, stent insertion can fail more often than in the sigmoid colon, because it requires a longer colonoscopic approach in circumstances where bowel preparation is impossible. although subtotal colectomy has the advantage of being a 1-stage treatment, it requires open surgery in most cases, the right colon has to be sacrificed without oncologic necessity, and preoperative staging and evaluation can be insufficient. although colostomy is a reluctantly chosen procedure when considering quality of life, in splenic flexure colon cancer obstruction it provides prompt stabilization of the patient's state and sufficient time for preoperative staging and evaluation, and minimally invasive surgery can still be achieved by using the colostomy site as the mini-laparotomy and closing the colostomy before discharge. the colostomy site, tumor location, and mini-laparotomy site for the subsequent radical surgery have to be considered comprehensively before making the colostomy incision. the colostomy site has to be appropriate as a mini-laparotomy site for the feasibility of laparoscopic left hemicolectomy, and the colostomy has to be included in the specimen, with care to prevent unnecessary lengthening of the specimen. we experienced 3 cases which were treated successfully with this strategy and report them. results: temporary loop transverse colostomy and laparoscopic left hemicolectomy via the colostomy site in splenic flexure colon cancer obstruction has the advantages of quick stabilization of the patient's status, sufficient preoperative staging and evaluation, achievement of minimally invasive surgery, and rapid colostomy closure before discharge.
our tatme procedure for locally advanced low rectal cancer following chemoradiotherapy. y. nakamoto 1, r. okamoto 1, f. kimura 1, h. yanagi 1, t. nakajima 1, h. yoshie 2, n. yamanaka 1; 1 surgery, meiwa hospital, nishinomiya, japan; 2 surgery, yoshie clinic, itami, japan. background: short-course chemoradiotherapy using a hyperfractionation method (scrt; 25 gy/10 fractions/5 days + s-1 or xeloda) is performed to secure the circumferential resection margin (crm) through tumor shrinkage, reduction of viable cancer cells, and reduction of radiation hazard for resectable locally advanced lower rectal cancer (t3 or higher, or n1). the patient undergoes radical surgery one month after scrt. for more locally advanced lower rectal cancer (t4 or n2), induction chemotherapy is performed before scrt. for patients in whom chemotherapy has poor efficacy, we also use conventional 25-fraction 45 gy radiotherapy. methods: we introduced tatme last august, and 18 cases have been performed so far. in all cases a temporary stoma was constructed, and intersphincteric resection (isr) was based on partial isr, avoiding total isr in consideration of postoperative anal function. if possible, a colonic j-pouch is added, and pelvic floor repair may be added for esr cases and older people. at first, one team proceeded with the anal operation and then shifted to the abdominal procedure; now it is done with two teams, with the advantage of obtaining a good visual field from both sides when there is difficulty identifying the right dissection layer. tatme is very useful in cases such as large tumors, obesity, and a narrow pelvis. furthermore, when it is difficult to identify the dissection layer because of scarring after crt, it is easier to control the crm/drm of the cancer. results: 14 cases of isr, 3 cases of apr and 1 case of tpe were performed, and in 10 of these cases lateral lymph node dissection was also performed (unilateral in 5, bilateral in 5).
postoperative complications comprised 2 anastomotic leakages, 4 pelvic floor infections, 2 perineal infections, and 1 bowel obstruction. conclusions: tatme for locally advanced lower rectal cancer is useful even after chemotherapy and scrt. background: although many studies have demonstrated similar perioperative outcomes for single-incision laparoscopic surgery (sils) and conventional laparoscopic surgery (cls) for colon cancer, few have directly compared their costs. we aimed to compare costs between sils and cls for colon cancer. methods: we analyzed the clinical outcomes and overall hospital costs of patients who underwent laparoscopic surgery for colon cancer from july 2009 to september 2014 at severance hospital; 288 patients were used for analysis after propensity score matching. the total hospital charge, including fees for the operation, anesthesia, preoperative diagnosis, and postoperative management, was analyzed. results: the total hospital charges were similar in both groups ($8770.40 vs. $8352.80, p = 0.099). however, the patients' total hospital bill was higher in the sils group than in the cls group ($4184.82 vs. $3735.00, p < 0.001), mainly due to the difference in the cost of access devices. there was no difference in the additional costs associated with readmission due to late complications between the two groups ($2383.08 vs. $2288.33, p = 0.662). conclusions: sils for colon cancer yielded similar costs as well as similar perioperative and long-term outcomes compared with cls. therefore, sils can be considered a reasonable treatment option for colon cancer in selected patients. aims: technological improvements in medicine allow the development of new minimally invasive approaches. despite their many advantages, these new devices can also cause technical problems and difficulties for the surgical team. 
well known for the last few years, laparoscopic-assisted transanal total mesorectal excision for distal rectal cancer is a perfect example of a relatively new procedure based on the combination of forgotten old surgical principles and technological advances. the aim of the study is to analyze the rate of technical problems during the procedure and to measure their impact on operative time. methods: we conducted a prospective observational study of technical problems during the procedure. in the period between september 2017 and november 2018, 25 laparoscopic-assisted transanal total mesorectal excisions were performed in the department of endoscopic endocrine surgery and coloproctology at military medical academy-sofia. we used standard local preoperative work-up and postoperative care protocols. we defined a technical problem as an intraoperative event, distinct from a complication, leading to a delay in operative time. every technical problem during the procedure was recorded and the time needed to resolve it was measured in seconds. results: overall, technical problems occurred in 11 of the cases. most were related to insufficient smoke evacuation, seen in 6 cases. the second most common technical problem was excessive rectal stump spasm during the procedure, which occurred in 3 of the patients. the mean delay of the procedure related to technical problems was 21 min. in our series we experienced only one intraoperative complication, which was specimen perforation during the dissection. three complications occurred in the postoperative period: two urinary retentions and one perianastomotic abscess, without need for reoperation. conclusion: technical problems during the procedure can be a source of delay in operative time. correct use of devices in the operating room is the key to reducing technical issues. technical problems can increase the rate of intraoperative near-miss events and complications during transanal total mesorectal excision. 
surg endosc (2019) aims: anastomotic leak after rectal cancer surgery constitutes a severe complication associated with poorer oncologic outcome and quality of life. preoperative assessment of the risk of anastomotic leak is a key component of surgical planning, including the opportunity to create a defunctioning stoma. methods: studies on rectal cancer surgery published between 2000 and 2015 were systematically reviewed according to the preferred reporting items for systematic reviews and meta-analyses of individual participant data (prisma-ipd) guidelines. with the aim of generating a score for anastomotic leak, all available perioperative covariates were used as independent factors in a logistic regression model with anastomotic leak as the dependent variable. a receiver operating characteristic (roc) curve analysis was generated. we selected as threshold the value that allowed a missed anastomotic leak rate < 2%. the predictive power of the previously selected cut-off was validated in an independent set of patients. results: twenty-six centers provided individual data on 9735 patients. with a threshold value of the roc corresponding to 0.0791 in the training set, the area under the roc curve (auc) was 0.585 (p < 0.0001). the sensitivity and specificity of the model's probability > 0.0791 to identify anastomotic leak were 79.1% and 32.9%, respectively. the accuracy of the threshold value was confirmed in the validation set, with 77.8% sensitivity and 35.2% specificity. conclusions: we trust that, with further refinement using prospective data, this nomogram based on preoperative risk factors may assist surgeons in decision making. the score is now available online (http://www.real-score.org). in 7 (12.5%) cases laparoscopic interventions were performed in patients with diverticular colon disease. 
in the group of patients with colorectal cancer, the tumor was located in the right colon in 27 (36%) patients, in the left colon in 30 (39%), and in the rectum in 19 (25%). results: for adenocarcinoma of the sigmoid colon, we performed left laparoscopic hemicolectomy (19 cases) and resection of the sigmoid colon (11). high clipping and division of the inferior mesenteric vessels with aorto-iliac lymphatic dissection were carried out. in the standard scope, lymph node dissection was performed with removal and testing of not less than 12 epi-, para- and mesocolic lymph nodes (max 21). the average length of the laparoscopic stage was 125 ± 22 min. laparoscopic right hemicolectomy (27 cases) was performed in accordance with the principles of cvl (central vascular ligation) and cme (complete mesocolic excision). an intracorporeal ileotransverse anastomosis was formed by a semi-manual method with an endogia universal stapler and v-loc suture material. the average length of the laparoscopic stage was 125 ± 25 min, and of the open phase 56 ± 14. for tumors of the lower and middle ampullary parts of the rectum (19 cases), laparoscopic total mesorectal excision was performed after neoadjuvant chemoradiotherapy. conclusions: the use of minimally invasive technologies in colorectal surgery provides complete revision of the abdominal organs and an adequate scope of resection and lymph node dissection. background: it is thought that complete mesocolic excision (cme) improves the oncologic outcomes for colon cancer. however, precise mesenteric mobilization from the retroperitoneum and safe ligation at the origins of the central vessels are considered technically difficult in single-port surgery (sps). to resolve this problem, we utilize a retro-mesenteric medial approach for right-sided colon cancer. herein, we introduce this technique and assess its outcomes. operative procedure: the multi-trocar platform is placed in the umbilical site. 
a 3d laparoscope is inserted through one of these channels. the surgeon manipulates instruments via the other 2 channels. 1st step: the right colonic mesentery is mobilized medial to lateral from the head of the pancreas and retroperitoneum along the embryonic plane. 2nd step: the origins of the ileocolic and right colic vessels are divided and central lymph node dissection is achieved. 3rd step: the hepatic flexure is taken down from the cranial side, the right lateral attachment is dissected away, and cme is achieved. 4th step: the specimen is extracted and a functional end-to-end anastomosis is performed extracorporeally. results: from april 2009 to december 2018, 125 consecutive patients with right-sided colon cancer underwent sps-cme. there were 52 in stage i, 32 in stage ii, 30 in stage iii and 11 in stage iv. the mean operative time was 207 min. the mean estimated blood loss was 42 ml. there was no conversion to open surgery. an additional port was placed in 4 patients (3.2%). intraoperative bleeding occurred in 1 patient. anastomotic leakage was observed in 1 patient (0.8%), intestinal obstruction in 1 (0.8%) and wound infection in 3 (2.4%). conclusion: these results suggest that the retro-mesenteric medial approach in single-port surgery for right-sided colon cancer is a useful and safe technique. aims: this multicenter, randomized controlled trial (simple trial) aimed to investigate the quality of life (qol) and patient satisfaction of single-port laparoscopic surgery (spls) for colon cancer, compared with multiport laparoscopic surgery (mpls). methods: patients with histologically diagnosed adenocarcinoma in the cecum, ascending colon or sigmoid colon were eligible for this trial. eligible patients were randomly assigned to the spls or mpls group at a ratio of 1:1. qol was measured with the eortc qlq-c30 third edition (korean version) preoperatively and postoperatively at months 1, 3, 6 and 12. 
in addition, patient satisfaction was surveyed with a five-point questionnaire at postoperative month 12. to exclude the impact of adjuvant chemotherapy on qol, subgroup analyses for patients with or without adjuvant chemotherapy were carried out. (clinicaltrials.gov identifier: nct01480128) results: a total of 359 patients were randomly allocated to the spls group (n = 179) and the mpls group (n = 180). overall, the global health status and the five functional scales steadily increased, and the nine symptom scales also gradually improved over time. however, nausea/vomiting and appetite loss temporarily deteriorated at postoperative month 3. the pain score was significantly worse in the mpls group (11.6 in the spls group vs. 17.6 in the mpls group, p = 0.002) at postoperative month 1, and the appetite loss score was significantly worse in the spls group (19.9 vs. 13.5, p = 0.017) at postoperative month 3. except for these domains, all other items of qol did not differ between groups up to postoperative month 12. patient satisfaction was significantly higher regarding the operation (p = 0.025) and the abdominal wound (p = 0.025) in the spls group. in patients without adjuvant chemotherapy, some items of qol (global health status, physical functioning, role functioning, emotional functioning, fatigue and pain) were significantly better in the spls group at postoperative month 1. from postoperative month 3 onward, all qol domains (except the pain score) were similar between groups. conclusion: although postoperative pain was temporarily better with spls, most qol domains were similar between the spls and mpls groups up to postoperative month 12. in patients without adjuvant chemotherapy, spls showed better outcomes in some functional scales and symptom scores at postoperative month 1. 
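as a side note on method, 1:1 allocation in a two-arm trial such as the one above is commonly implemented with random permuted blocks, which keep the arms balanced throughout accrual. the following is a generic, hypothetical sketch of that idea (not the simple trial's actual randomization procedure; arm labels and block size are illustrative):

```python
import random

def permuted_block_allocation(n_patients, block_size=4, seed=0):
    """Assign each enrolled patient to 'spls' or 'mpls' using random
    permuted blocks, so the two arms stay balanced as patients accrue."""
    rng = random.Random(seed)  # fixed seed for a reproducible sequence
    allocation = []
    while len(allocation) < n_patients:
        # each block holds an equal number of assignments per arm...
        block = ["spls"] * (block_size // 2) + ["mpls"] * (block_size // 2)
        rng.shuffle(block)  # ...shuffled so the order is unpredictable
        allocation.extend(block)
    return allocation[:n_patients]

arms = permuted_block_allocation(359)
```

with an even block size, the arm counts can never differ by more than half a block at any point in the sequence, which is why 359 patients split almost exactly 179/180.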
coloproctological surgery, juntendo university, tokyo, japan; 2 gastroenterological surgery, juntendo university, tokyo, japan introduction: laparoscopic surgery causes less postoperative pain than laparotomy, and its low invasiveness should be considered in pain control. we have previously controlled postoperative pain with epidural anesthesia. in this study we compared postoperative multimodal analgesia centering on acetaminophen with the conventional method in patients who underwent laparoscopic colorectal cancer surgery. subjects: the subjects were 39 patients who underwent laparoscopic colorectal cancer surgery between january 2018 and june 2018. surgery was performed under epidural anesthesia in 24 patients and multimodal analgesia in 15: periodic acetaminophen administration + transversus abdominis plane (tap) block in 6, periodic acetaminophen administration + local anesthesia of the wound in 2, and periodic acetaminophen administration + intravenous patient-controlled analgesia (ivpca) in 7. the operating room-occupying time, postoperative pain (nrs), frequency of taking analgesics as needed, and postoperative nausea were investigated for 3 days after surgery, and the duration of urethral catheter placement and postoperative intestinal movement were investigated in the epidural anesthesia and multimodal analgesia groups. results: while the time from entering the operating room to initiation of surgery was significantly shorter, the time from completion of surgery to leaving the room was significantly longer in the multimodal analgesia group. there was no difference in the operating room-occupying time. the frequency of postoperative pain was significantly lower in the multimodal analgesia group on postoperative day (pod) 2. the frequency of taking analgesics as needed was significantly lower in the multimodal analgesia group on pod 1, 2, and 3. 
no significant difference was noted in the duration (number of days) of urethral catheter placement or postoperative nausea between the 2 groups. regarding postoperative intestinal movement, discharge of gas occurred significantly earlier in the epidural anesthesia group. the total number of complications in the epidural anesthesia group was 12. discussion: in laparoscopic colorectal cancer surgery, the effect of multimodal analgesia centering on periodic administration of acetaminophen without epidural anesthesia was sufficient compared with the effectiveness of epidural anesthesia. this approach to analgesia may be useful because none of the potential complications of epidural anesthesia occur. in recent years, the application of new technologies such as 3d vision and virtual reality has given surgeons the ability to establish a preoperative surgical plan for each operation and each patient. these advances are especially useful in minimally invasive colorectal surgery because of the variability in tumor location, anatomical relationships with other organs, and vascular variants in this type of surgery. the aim of our work is to build a digital 3-dimensional virtual model from the colorectal ct scan images of patients with colorectal cancer. the virtual models are obtained from the preoperative ct scan. the ct scanners used for this work are the general electric healthcare revolution gsi® and siemens somatom perspective 64®, and the slice thickness of each image is 1 mm. medical software lets us build a reconstruction of the colorectal digital images on which a radiologist has marked the exact image of the tumor, so we obtain a 3d reconstruction which can provide an enhanced understanding of crucial anatomical details, such as the exact location of the tumor and its relationship with other organs and structures of the patient, which can be selectively displayed or hidden. 
this information has important applicability in clinical practice, since it lets surgeons assess the colorectal anatomy, tumor size and relationships, providing key landmarks to choose the most appropriate surgery and the best trocar placement, and to achieve a safer dissection, especially in cases whose location can change the kind of surgery radically. we present some cases where virtual models were crucial for the preoperative and intraoperative surgical plan, showing the potential interest of these 3d reconstructions in colorectal surgery. in conclusion, ct colorectal image reconstruction can provide an enhanced understanding of crucial anatomical details of the colon, tumor location and relations, which could contribute to choosing the best surgical option and improving safety in colorectal surgery. background: anastomotic leakage (al) after colorectal procedures is a common surgical experience and represents a significant burden both for patients and surgeons. the incidence of al has been reported to vary between 0.5% and up to 21%, with rates for the colon and rectum of 3-7% and 13-18%, respectively. leaks not only add to potential postoperative patient morbidity and to the overall costs of postoperative patient care, but are also considered a quality indicator in colorectal surgery. aim: we aimed to evaluate the clinical burden associated with anastomotic leaks following colorectal surgery. methods: we conducted a retrospective analysis of 641 colorectal patients who underwent conventional or laparoscopic colorectal surgery for colorectal cancer (crc), from january 1st, 2013 to december 31st, 2016 in a single colorectal centre (centro hospitalar de leiria). patient demographics and intraoperative and postoperative aspects were collected and analysed. all statistical analysis was conducted using stata software (statacorp lp). results: in our cohort of 641 patients, 35 developed a clinical al (5.46%), mostly males (90%), with an average age of 71 ± 10.57 years. 
male gender and conversion were independent risk factors. the group with al had a higher lohs (25.2 vs. 6.59 days; p < 0.0001). 6 of the 35 als were detected after discharge. the mean day of diagnosis was the eighth, with the mode estimated at day 5. when compared with a control group, wcc, eosinophils and crp were statistically significantly different in the al group at days 3 and 5. conclusion: in the present study, no statistically significant risk factors for al in crc surgery were detected, except for male gender and conversion. clinical methods and biomarkers were useful for early diagnosis. technology combined with experience and common sense may be the embodiment of the clinical method. conclusions: our regional screening program has significantly improved early diagnosis and expedited surgical treatment of crc. thanks to this, we obtained an earlier stage at diagnosis, a less invasive surgical approach, and a lower rate of complications and need for emergency surgery, together with an improvement in both os and dfs. introduction: surgeons are increasingly faced with the problem of treating elderly colon cancer patients. we evaluated the outcome of single-incision laparoscopic colectomy (silc) in patients over 80 years of age with colon cancer, using a propensity score matched comparison to assess its perioperative and long-term oncological outcomes. methods: this retrospective cohort study analyzed our experience with silc for colon cancer over 5 years. eighty-seven patients over 80 years of age with colon cancer who electively underwent silc were included in this study (elderly group). eighty-seven patients were then chosen out of a collective of 257 patients less than 80 years old in a propensity score matched design (younger group). short-term clinical outcomes in the two groups were compared and the long-term oncological outcomes were verified. results: the american society of anesthesiologists score and post-operative complication rate were significantly higher in the elderly group. 
however, the other short-term clinical outcomes, including post-operative hospital stay, were equivalent in the two groups. the 5-year cancer-specific survival rates were 78.0% in the elderly group and 70.9% in the younger group, and the 5-year overall survival rates were 64.6% and 66.8%, respectively. no significant differences were seen between the two groups. conclusions: our initial experience suggests the oncological and clinical safety of silc in patients over 80 years of age with colon cancer. however, further studies are needed to demonstrate the advantages of this procedure compared to conventional laparoscopic colectomy. aim: some clinical trials have reported the safety and efficacy of laparoscopic colectomy for colon cancer. on the other hand, transverse colon cancer was excluded from these trials because of the difficulty of laparoscopic colectomy for transverse colon cancer. in this presentation, we report tips for laparoscopic colectomy for transverse colon cancer. tips: in our department, 87 transverse colon cancers have been resected laparoscopically so far. to complete cvl and cme, the lymph nodes around the middle colic artery should be dissected; however, many important structures (the duodenum, pancreas, superior mesenteric vein (smv) and so on) may be obstacles. this is the most difficult point of this surgery. our surgery proceeds as follows: mobilization of the ileum and ascending mesocolon from the caudal side; confirmation of the duodenum and pancreas; exposure of the smv and ligation of the root of the ileocecal artery and vein; dissection of the lymph nodes around the smv and ligation of the middle colic artery and accessory right colic vein; confirmation of the pancreas from the caudal side of the transverse mesocolon and incision of the peritoneum along the caudal side of the pancreas; and sufficient dissection of the lymph nodes from both sides of the transverse mesocolon, which is the most important point. 
to dissect the lymph nodes safely, confirmation from both sides of the transverse mesocolon is necessary, and dissection should be performed along the important structures (smv, pancreas and so on). introduction: we have developed and previously reported single-incision plus one port laparoscopic anterior resection of the rectum (sils + 1-ar) as a reduced port surgery in which the incision for drainage can be utilized as an additional access route for laparoscopic procedures, including transection of the lower rectum. a consecutive experience with sils + 1-ar for rectal cancer from its introduction is reviewed, and its 5-year oncological outcomes are evaluated retrospectively. methods: one hundred and forty-one patients (53 female) with a mean age of 67.6 years underwent the sils + 1 procedure for rectal cancer. a lap protector (lp) was inserted through a 2.5 cm transumbilical incision; an ez-access was mounted on the lp and three 5-mm ports were placed. a 12-mm port was inserted in the right lower quadrant. results: one hundred and thirty-six patients (96.5%) completed sils + 1-ar. the tumor locations in the rectosigmoid, rectum above the peritoneal reflection (ra), and rectum below it (rb) numbered 44, 63 and 29, respectively. the median follow-up interval was 42 months. aims: colovesical fistulae arise from inflammatory disease or cancer and carry significant morbidity. the most common location is the sigmoid colon and the most common aetiology is diverticulitis. the treatment of choice is surgical. the aim of this study was to compare the laparoscopic approach in patients diagnosed with benign (diverticulitis) and malignant (colon adenocarcinoma) colovesical fistulae. methods: from january 2001 to march 2005, the characteristics of all surgical patients with diverticular and colon adenocarcinoma colovesical fistulae were reviewed. patient details (sex, age, symptoms, diagnosis, medical history and anaesthetic risk), surgical approach, hospital stay and complications were recorded. 
both groups were compared with the significance level set at p < 0.05. results: nine laparoscopic (71%) and 4 open approaches (29%) were performed for diverticular colovesical fistulae, with a conversion rate of 33%. the procedure performed was sigmoidectomy. three laparoscopic (14%) and 16 open approaches (72%) were also performed for colon adenocarcinoma colovesical fistulae. the procedures performed were sigmoidectomy, pelvic exenteration, left colectomy, low anterior resection and loop colostomy. comparison between the two groups didn't show significant differences in characteristics but did show significant differences regarding the approach, with more cases performed by the open approach in colon adenocarcinoma colovesical fistulae (p = 0.03). the conversion rate didn't show significant differences. patients diagnosed with malignant colovesical fistulae had more complications, 15 cases (68%): 10 (45%) grade i-ii and 5 (23%) grade iii-iv-v according to the clavien-dindo classification, showing significant differences (p = 0.03). the laparoscopic approach didn't show significant differences regarding complications. conclusions: generally, a surgical approach with colonic resection and partial or total cystectomy is the treatment of choice for colovesical fistulae, although vesical resection can be avoided if a benign aetiology is suspected. whenever the laparoscopic approach is performed by experienced surgeons, it is feasible in colovesical fistulae and the morbidity and mortality figures are acceptable. the laparoscopic approach offers the advantages of minimally invasive treatment but requires clinical trials to establish stronger evidence. aims: laparoscopic right hemicolectomy has become the standard of care for treating cecum, ascending and proximal transverse colon cancer in many centers. most centers use multiport laparoscopic colectomy with extracorporeal resection and anastomosis (mce). single-incision laparoscopic colectomy with intracorporeal resection and anastomosis (sci) remains controversial. 
the aim of the present study is to compare these two techniques using propensity matching analysis. methods: this study analyzed 171 patients who underwent laparoscopic right hemicolectomy, including 119 mce surgeries and 52 sci surgeries, from december 2015 to december 2017. short-term outcomes were recorded. postoperative pain was evaluated using a visual analogue scale (vas) and postoperative analgesic use as outcome measures. results: the length of the skin incision in the sci group was significantly shorter than in the mce group: median (range) 3 (2-10) cm versus 4 (3-8) cm (p < 0.0001). the vas score after surgery was significantly lower in the sci group than in the mce group. significantly fewer patients required analgesia after sci surgery. there were no significant differences in operative time, intraoperative blood loss, the number of lymph nodes removed or postoperative course between the groups. cost effectiveness was significantly better for sci than for mce. conclusions: sci for right colon cancer is safe and technically feasible. sci reduces the length of the skin incision and postoperative pain compared with conventional mce. aim: this study was designed to clarify the utility of laparoscopic surgery for advanced lower rectal cancer after neoadjuvant chemoradiotherapy (ncrt). patients and methods: we investigated the 3-year disease-free survival rate, operative outcomes and recurrence risk factors in 73 patients with lower rectal cancer (ct2-4, n0-2) who underwent laparoscopic surgery after ncrt from 2010 to december 2017 at kitasato university hospital. results: of the 73 patients, 43 underwent low anterior resection (lar), 4 underwent intersphincteric resection (isr) and 27 abdominoperineal resection (apr). there were 7 anastomotic leakages, 1 urinary disorder and 2 sexual dysfunctions. the ypcr rate was 24.7%, but 15 patients (20.5%) had recurrence (7 liver, 7 lung, 2 lymph node and 1 local recurrence; with some overlap). 
ypt4 status and lymph node metastasis were identified as risk factors for recurrence. the 3-year relapse-free survival rate (rfs) was 79.5% and the 3-year overall survival rate (os) was 92%. conclusion: in this study, ypt4 status and lymph node metastasis were risk factors for recurrence. the operative outcomes, 3-year rfs and 3-year os are relatively good results. we will conduct further follow-up, and it is necessary to investigate the long-term prognosis. laparoscopic surgery is warranted for rectal cancer after ncrt. introduction: synchronous colorectal neoplasia has a reported incidence ranging from 2% to 7%. classically its surgical treatment consisted of a subtotal colectomy (stc); however, several authors have proposed that on certain occasions two segmental resections with two anastomoses are not accompanied by an increased risk of anastomotic failure. the objective of this study was to compare the feasibility and safety of the laparoscopic approach to synchronous colorectal neoplasia using two different techniques: stc versus two segmental resections with two anastomoses. methods: we retrospectively reviewed the clinical data of patients over 18 years of age who underwent colorectal surgery between 1998 and 2018 at a single center. we included patients with synchronous colorectal neoplasia who underwent laparoscopic surgery, either stc or double resection (dr). results: a total of 24 patients met the inclusion criteria. they were mainly males (86%) with an average age of 75 years, an american society of anesthesiologists score greater than ii in 53%, and an average body mass index of 29 kg/m2. 
the mean operative time was 251 min for dr and 281 min for stc. stc resulted in a higher conversion rate (23% vs. 11%) and more intraoperative bleeding (39% vs. 22%), in addition to a postoperative period with more complications: only 15% of the patients undergoing stc had no complication, while 67% of the patients with a dr had none. 38% of the stc patients presented anastomotic failure versus only 11% of the dr patients. the mean hospital stay was 8 days for dr and 18.5 for stc. in dr, an average of 47 cm of colon was resected with an average of 24.8 lymph nodes, while in stc 127 cm of colon was resected with an average of 24.2 nodes. conclusions: double resection with two anastomoses is a less aggressive surgery, with fewer complications and a shorter hospital stay, providing similar oncological results. there were no differences in morbidity, re-operations or hospital stay. regarding tumor stage there were no differences between the three groups. as for the resected nodes, we found a mean of 21 in stc, 16 in lc and 14 in sr, with no statistical difference. there were no differences in the affected nodes among the groups. in our patients we didn't find differences in the recurrence rate or in the distant metastasis rate. the average follow-up was 76 months (range: 30-114), with no differences in overall survival. conclusion: segmental resection of splenic flexure neoplasias is safe and feasible, with no differences in morbidity or oncological outcomes compared with more aggressive surgeries. introduction: the evaluation of perfusion in colorectal anastomosis is still a field of study and progress for the development of new modalities to reduce the rate of dehiscence or anastomotic leakage (al) in this surgery. our objective with this work is to highlight the utility of indocyanine green (icg) in this evaluation in colorectal surgery. 
methods: we present a series of 85 cases of colorectal surgery (benign and malignant disease) operated on in the period between 2014 and 2018. the population sample was homogenized according to age criteria, risk factors and comorbidity. a retrospective database was developed with spss v.22 software for the evaluation of the results obtained. the primary outcome measure was the al rate with at least 1 month of follow-up. results: a significant reduction in the incidence of al was observed in patients who underwent colorectal surgery (p = 0.005). low al rates were shown in rectal cancer surgery (p = 0.02). there was no significant decrease in the al rate when colorectal procedures for benign and malignant disease were combined. conclusions: fluorescence imaging with indocyanine green is a safe, reproducible and relatively simple method with which to evaluate the perfusion of the colorectal anastomosis and reduce the rate of anastomotic leak in the postoperative period. large, well-designed randomized controlled trials are needed to provide evidence for its routine use in colorectal surgery. introduction: currently, colonoscopy is the gold standard investigation for colonic evaluation. although caecal intubation is one of its quality indicators, it is not attained in up to 20% of cases. this remains a significant concern. limited data are available on the follow-up of patients with incomplete colonoscopy. aims: to assess the colonoscopy completion rate, the reasons for incomplete colonoscopy, and the methods used to complete colonic evaluation after incomplete colonoscopy. methods: we performed a retrospective study of incomplete colonoscopies in our unit over a one-year period (2017). these results compare favorably with published data. the few statistically significant differences between groups suggest that varying modalities of treatment broadly result in similar qol. 
These data highlight a need for well-delivered support programmes for specific issues, for example stoma care and sexual dysfunction. Future studies will need to include a baseline questionnaire to truly measure the impact of surgery and to measure quality in an increasingly elderly and comorbid population.

Splenic flexure cancer (SFC), comprising tumours arising in the distal transverse colon and proximal descending colon, accounts for 2 to 5% of all surgically treated colorectal cancers. In CME for SFC, dissection of both the transverse and descending mesocolon must be considered. However, laparoscopic surgery as a curative treatment for SFC has never been investigated in adequate controlled trials, because of difficulty in deciding on the appropriate operative procedure, as well as technical difficulties with laparoscopic lymph node dissection. The aim of this multicenter study is to evaluate the oncologic effectiveness of laparoscopic segmental resection with CME for cancer located at the splenic flexure. We performed a retrospective analysis of all cases of SFC treated with laparoscopic segmental resection with CME in five different institutions. Intra- and postoperative outcomes were evaluated. 112 patients were evaluated; the mean operative time was 155.17 ± 48.54 min. A total of 6 (5.4%) conversions occurred: 2 due to splenic artery injury, one due to difficult adhesiolysis and three due to locally advanced tumour. Recurrence was observed in 13 (11.6%) patients. There was a significant association between disease stage and recurrence (p < 0.001), with a higher proportion of stage IV patients in the recurrence group (46.1% vs 7.1%). At 30 days of follow-up no mortality was recorded. During a median follow-up of 43 months (range 12-149), 13 deaths occurred (all from disease progression). Kaplan-Meier curves showed survival comparable with other colorectal cancers.
In conclusion, laparoscopic segmental resection with CME and CVL seems to be an oncologically safe and effective procedure for the treatment of SFC. It may be regarded as the standard surgical method for elective management of this disease. In the future, more tailored patient- and tumor-specific segmental resection might be achieved with the use of routine lymph node road mapping.

It is very important to establish a minimum number of lymph nodes to analyse for correct staging; this has been established as 12. The treatment of colorectal cancer is essentially surgical. Review of the medical literature indicates that laparoscopic colorectal surgery is a safe procedure, with no significant differences in survival rate from open surgery. Aim: The aim of our study is to compare the outcomes of laparoscopic and open resection for colorectal cancer, evaluating lymph node assessment. Methods: Patients were collected in our hospital during the period from 1/11/2017 to 1/12/2018, and the number of lymph nodes obtained at lymphadenectomy was studied, comparing the laparoscopic and laparotomy approaches. Results: 81 interventions were performed: 55 laparotomic, 20 laparoscopic and 5 converted laparoscopic (Fig. 1). The average number of nodes found in these interventions was 15.36. The current recommendation for a proper lymphadenectomy is to obtain more than 12 lymph nodes. Analysing our procedures, 61 surgeries achieved a good lymphadenectomy. According to approach, 62.3% of these interventions (38) were laparotomy, 31.2% (19) laparoscopic and 6.5% (4) converted laparotomy (Fig. 2). The average number of lymph nodes isolated was similar: the laparotomy approach yielded 16.45 nodes, laparoscopy 13.5 nodes, and converted laparoscopy 12.2 (Fig. 3). Conclusion: The treatment of colorectal cancer is essentially surgical.
Today, many studies support that laparoscopic surgery has a survival rate similar to that of open surgery. According to our study, the number of isolated lymph nodes is very similar in both approaches. To sum up, laparoscopic colorectal surgery is safe and has demonstrated oncological adequacy comparable to the open approach, with better short-term outcomes due to its less invasive nature.

Background: Laparoscopic low anterior resection highlights the advantages of laparoscopic surgery (better surgical field, less blood loss, less postoperative pain, better cosmetic result). Defunctioning ileostomy prevents anastomotic leakage in low rectal cancers, but increases morbidity, degrades quality of life and requires a second surgery for its closure. Method: In the last 24 months we performed 8 laparoscopic low anterior resections for rectal cancer without any protective ileostomy, after checking the anastomosis intraoperatively (5 men, 6 women; average age 65 years). The typical trocar placement included one supraumbilical 5 mm trocar, two right-sided 10 mm trocars in the midclavicular line, one 5 mm trocar in the left midclavicular line and one 5 mm trocar in the suprapubic midline, which was also used for specimen removal after a 2 cm transverse extension of the incision. We present the main stages of the procedure (dissection and mesorectal excision, division of the rectum with a linear stapler using the 'Chinese hat-Parnex' technique, creation of an end-to-end intracorporeal anastomosis using a circular stapler under direct laparoscopic vision). Results: No major postoperative complication was observed. The mean operative time was 250 min (180-300) and free surgical margins were achieved. In one case a conversion to open surgery occurred. The average length of hospital stay was 8 days (7-9).
Conclusions: The laparoscopic approach facilitates access to the middle and lower rectum, total mesorectal excision and, where possible, avoidance of ileostomy. It is a demanding operation with an extended learning curve, and requires adequate experience in laparoscopic surgery and colorectal surgical oncology.

Background: In colorectal cancer, local excision is an attractive treatment option, but additional resection is considered when lymph node metastasis (LNM) is expected at a high rate. In lower rectal cancer, advanced surgical techniques are required, so this judgment is often difficult. The aim of the current study is to assess the reliability of laparoscopic surgery for submucosally invasive rectal adenocarcinoma (pT1) by analyzing short-term outcomes and long-term survival. Method: This cohort study analyzed 217 patients who underwent laparoscopic rectal resection for submucosally invasive rectal adenocarcinoma (pT1). Conversion rate and functional and oncologic outcomes were analyzed. Data on long-term results and survival were evaluated. Results: The surgical procedure was low anterior resection / intersphincteric resection / abdominoperineal resection in 190/23/4 cases, and conversion to open surgery was needed in 6 (2.8%) patients. Sphincter-preserving procedures were performed in 204 (97.2%) patients. There were no perioperative mortalities and no positive resection margins. The mean length of hospital stay was 10.5 days. Complications of Clavien-Dindo grade III or beyond occurred in 14 (6.4%) patients; the anastomotic leakage rate was 3.6% (8/217). The lymph node metastasis rate was 12.9% (28/217). High tumor budding (p = 0.006), lymphatic invasion (p < 0.0001), and mucinous/poor histological differentiation (p = 0.01) were significantly associated with lymph node metastasis on univariate analysis.
On multivariate analysis, only lymphatic invasion was associated with lymph node metastasis (p < 0.001). The median follow-up time was 50 months (range 6-151 months); the recurrence-free survival rate was 96.3% (209/217). Conclusion: The outcomes of this study suggest that laparoscopic surgery allows safe and radical resection of submucosally invasive rectal adenocarcinoma (pT1), and that the absence of lymphatic invasion, budding, and mucinous/poor histological differentiation are each associated with a low risk of LNM. Risk stratification models integrating these factors need to be investigated further.

Conclusions: This study highlights the complex nature of sarcopenia, as well as its common incidence. Minimally invasive surgery had a higher incidence of sarcopenia than open surgery when both were performed within an enhanced recovery setting. Although colorectal patients are typically a well-nourished cohort at low risk of complications, there may well be benefit from interventional strategies such as perioperative immunonutrition or prehabilitation to reduce the incidence of this poor prognostic indicator.

Background: Urinary dysfunction is frequently observed after rectal resection and justifies urinary drainage. The concept of enhanced recovery after surgery (ERAS) has spread widely since the early 2000s. However, the optimal duration of postoperative urinary drainage is unknown. Aims: The aim of this study was to assess the short-term outcome of early removal of the urinary catheter after robotic rectal surgery (RRS). Patients and methods: The data of 44 consecutive patients who underwent RRS at two hospitals between April 2015 and November 2017 were retrospectively reviewed. The main indication for RRS was the need for rectal mobilization with autonomic nerve preservation, regardless of benign or malignant disease. Perioperative management: None of the patients received epidural anesthesia for postoperative analgesia.
Our basic principle was to remove the urinary catheter on postoperative day (POD) 1. After removal, transurethral catheterization (TUC) was performed in the following situations: 1) no autonomous urination for over 6 h after removal; 2) decreased urine volume (< 150 ml/6 h); 3) subjective symptoms such as abdominal distension. When TUC was required even once, residual urine volume was measured thereafter with an ultrasound device. Results: Twenty-seven men and 17 women were included. The median age and BMI were 67 years and 22.7 kg/m2, respectively. The surgical procedures included anterior resection (n = 33), intersphincteric resection (n = 4), abdominoperineal resection (n = 5), Hartmann's procedure (n = 1), and total coloproctectomy (n = 1). Only one patient underwent lateral pelvic lymph node dissection. The urinary catheter was removed on POD 1 in 40 cases (90.9%) and on POD 2 in 4 cases (9.1%). Although TUC was needed in three cases (6.8%) immediately after removal, it was no longer needed within three days in all three patients. Late dysuria was observed in two cases (4.5%), and bladder overdistension was suspected in both. Conclusions: Our study showed that the urinary catheter can be safely removed on POD 1 after RRS. However, careful follow-up to avoid bladder overdistension is essential after removal.

Introduction: Intersphincteric low rectal resection is a valid alternative for lower rectal cancers located about 4-7 cm from the anus. Methods: We present 19 cases from our personal experience, for tumors localized 4-7 cm from the anus. 13 of them required preoperative radiochemotherapy. In 12 cases the abdominal surgery was performed laparoscopically, with the surgical specimen extracted transanally in 7. The Lone Star device was used for the perineal procedure in all cases.
6 cases required a hand-sewn anastomosis with separate sutures; the other 13 cases benefited from a mechanical anastomosis performed endoanally with a 29-31 mm circular stapler. We performed complete mesorectal excision in all cases, ligation of the inferior mesenteric artery at its origin, complete mobilization of the splenic flexure and a protective lateral ileostomy. All patients underwent inspection rectoscopy before transit reintegration, and 16 cases were reintegrated over a period of 3-12 weeks, except for 3 cases which developed a colo-anal fistula that closed under conservative treatment over a period of 3-9 months. Results: There was no postoperative anal incontinence. In one case a relative anal stenosis occurred, which required endoscopic dilation. There was 1 case of tumor recurrence, which required abdominoperineal resection. Conclusion: Literature data support a 3-4/1 ratio of very low rectal resection versus rectal amputation. The accepted resection limit below the tumor is 0.5 cm. Very good functional results, while respecting oncological principles, are a sustainable argument for choosing this kind of procedure as an alternative to rectal amputation.

In the few studies conducted on CRCs, the reported rate of SLN micrometastases is up to 20-30%. The aim of this ongoing prospective study is to assess the predictability of ex-vivo NIRF SLN mapping and of the search for micrometastases in node-negative (NND) CRC patients, in order to propose adjuvant chemotherapy. Materials and methods: Fifty-eight patients undergoing standard oncological laparoscopic CRC resection have been prospectively enrolled in two centres. As previously described by the authors, the intact surgical specimen was extracted and opened longitudinally, and 1 ml of indocyanine green (ICG; 5 mg/ml) was injected submucosally at four corners around the tumor in order to identify the lymphatic pathway and the SLNs.
Each SLN that was negative on conventional histological analysis was further investigated with ultrastaging techniques, including serial sectioning and additional immunohistochemistry, to detect the presence of micrometastases. Results: Thirty patients were N+ and 28 were NND. Overall, a total of 1085 lymph nodes were retrieved. A total of 117 SLNs were identified (mean 2.01 per case), and 54 of those were from NND patients. After ultrastaging, 4 micrometastatic cases were found in NND patients; these patients were upstaged to N1. SLNs located deeper in the mesenteric and mesorectal fat could easily be identified by NIRF (even after nCHRT). Conclusions: In our preliminary series, ex-vivo NIRF SLN mapping correctly predicts the status of loco-regional nodes, as confirmed by the histological investigations. The identification of micrometastases allows selected patients to undergo adjuvant treatment with the aim of reducing the risk of recurrence.

Local recurrence was 3/13 (23.1%) in the lateral-node-positive group and 3/16 (18.8%) in the lateral-node-negative group. Four of the 6 local recurrences were lateral lymph node recurrences. Two patients recurred on the lateral side opposite the previous LPL; they were laparoscopically resected, with no further recurrence (52 and 58 months). Two patients who recurred on the same side after LPL were not curable because of liver metastasis and extensive invasion of the common iliac vessels. Conclusion: Selective LPL for rectal cancer was safe and provided good local control in lateral-lymph-node-positive patients. Curative resection of local recurrence was also possible for untreated lateral lymph node recurrence.

Intestinal malrotation is an embryologic anomaly generally discovered in the first months of life due to bowel obstruction. Adult presentation is rare, and its association with colon cancer is far rarer.
We report the case of a 70-year-old man with asymptomatic intestinal malrotation found incidentally on abdominal computed tomography (CT) performed for retroperitoneal colonic perforation, in a patient with an endoscopically diagnosed adenocarcinoma of the caecum and a large polyp of the descending colon. Preoperative vascular anatomic study allowed us to plan a laparoscopic approach safely, with adequate lymphadenectomy. The abdominal cavity was entered through a right-flank 12 mm optical trocar on the transverse umbilical line. Three additional 5 mm trocars were placed in the right iliac fossa and the right and left hypochondrium, respectively. Exploratory laparoscopy confirmed midgut malrotation and a fresh inflammatory area at the descending colon perforation site. The caecum and ascending colon were on the midline, attached to the sacral promontory by adhesions. The ileocolic artery (ICA), middle colic artery (MCA) and IMA were selectively ligated, but not at their origins, due to the aberrant anatomy. Laparoscopic subtotal colectomy with intracorporeal stapled ileosigmoid anastomosis was carried out (EndoGIA 45 mm, double-layer 3/0 polyglycolic acid suturing of the breech). The antiperistaltic orientation of the anastomosis was due to the disposition of the mesentery, which did not allow an isoperistaltic orientation of the two resected stumps. The specimen was extracted through a Pfannenstiel incision. The postoperative course was complicated by intestinal obstruction, treated conservatively with slow restoration of bowel function. The patient was discharged on the 15th postoperative day. Unexpectedly, specimen histology revealed two villous adenomas with high-grade dysplasia; 17 lymph nodes were retrieved from the specimen (pTisN0). To date, our case is the only fully laparoscopic colonic resection reported in the literature in malrotation, as well as the first intracorporeal stapled ileosigmoid anastomosis for such disease.

The median hospital stay was 6 days.
In-hospital mortality was nil. The overall morbidity was 20%. The median length of follow-up was 23 months. Conclusions: Our preliminary results suggest that robotic-assisted surgery for colorectal cancer can be carried out safely and according to oncological principles. Robotic surgery is advantageous for both surgeons (it facilitates dissection in a narrow pelvis) and patients (it affords a very good quality of life through preservation of sexual and urinary function in the vast majority of patients, with low morbidity and good midterm oncological outcomes). In rectal cancer surgery, the robotic approach is a promising alternative and is expected to overcome the low penetration rate of laparoscopy in this field.

Aims: Postoperative inflammation has been reported as an independent prognostic factor in several types of malignancy. The aim of this study is to clarify the impact of the laparoscopic approach on postoperative inflammatory status after surgery for colorectal cancer, and to analyze the association between postoperative inflammation and prognosis in patients with colorectal cancer. Methods: A total of 636 patients with stage I-III colorectal cancer (CRC) who underwent curative surgery were retrospectively analyzed. The maximum CRP value measured between surgical resection and discharge was defined as 'max CRP'. The optimal cut-off value of max CRP that best predicts RFS was determined to be 10 mg/dl by the minimum p-value approach.

Methods: Trainees working in this firm were responsible for data collection. For patients who underwent emergency surgery during the calendar year 2018, the following details were collected: the presence or absence of a complication in the 30-day postoperative period, the type and description of the complication, and the grade of the complication (see Fig. 1).
Patients who underwent intermediate to major surgery were followed up at outpatients and were specifically asked about the occurrence of complications from the point of discharge up until the outpatient appointment. With one centralised national hospital, patients who were discharged and subsequently experienced considerable or major complications invariably re-presented to hospital via the A&E department. Results: A total of 148 emergency surgeries were performed by this surgical firm in 2018, 63% of them laparoscopically. Of these 148 cases, 29 patients experienced postoperative complications within the first 30 days after their procedure, a complication rate of 19.59%. The most common complications were abdominal pain, nausea and vomiting, and wound infection, with 8 complications in each of these 3 categories. Postoperative bleeding occurred in 5 cases, and fistula or anastomotic leak in 3 cases. Death occurred in 3 instances: once as a result of postoperative bleeding from the anastomotic site after a Whipple's procedure; the second subsequent to postoperative bleeding from a peptic ulcer; and the third in an instance of faecal peritonitis resulting from anastomotic failure after a Roux-en-Y bypass in a patient with pancreatic malignancy. Conclusion: The Clavien-Dindo classification proved to be simple, efficient and useful in analysing postoperative outcomes. The results indicate that, despite the emergency setting and an elderly cohort of patients, minimally invasive surgery proved to be a safe and viable option.

Conclusions: In this prospective study, we observed greater rates of adenoma detection among endoscopists. Screening colonoscopy in symptomatic and/or high-risk groups for CRC is valuable in the early detection and prevention of CRC. A larger sample size and a longer period of screening colonoscopy are needed.
A limitation of our study was the small sample size and the lack of high-definition endoscopy.

Results: The mean intraoperative blood loss volume was significantly less in the lap group than in the open group (735 vs. 4447 ml, respectively; p < 0.01). The mean operative time was not significantly different between the lap group and the open group (738 vs 679 min, respectively; p = 0.276). The incidence of severe postoperative complications (grade 3 or higher in the Clavien-Dindo classification) was lower in the lap group (4/17 (24%) vs 16/35 (46%)). The mean postoperative hospital stay was significantly shorter in the lap group than in the open group (39 vs. 454 days, respectively; p = 0.022). Conclusions: Lap-TPE can be a safe and feasible procedure.

Background: Amyloid light chain (AL) amyloidosis is a rare protein deposition disorder with an incidence of 3-8 cases per million people. It can present insidiously with localized or multisystem symptoms and usually occurs later in life. Prognosis is poor, as AL typically presents at an advanced stage. Intestinal pseudo-obstruction is a rarely reported complication of AL amyloidosis. Here we report a case of AL amyloidosis identified during surgery for intestinal pseudo-obstruction. Case presentation: A 56-year-old man presented to the emergency department with a 4-month history of abdominal pain and distension, as well as marked swelling of his lower limbs. This had worsened in the previous 2 weeks and he had developed intermittent diarrhoea. CT showed ileitis with marked dilation of the proximal small bowel. Laparotomy revealed grossly distended small bowel that rapidly developed multiple petechiae and subsequent haematomas upon handling. Two days later a repeat laparotomy was performed and 3.45 m of ischaemic small bowel was resected. Histology showed amyloid deposition with positive Congo red staining.
Subsequent cardiac events led to an echocardiogram, which showed concentric left ventricular hypertrophy attributed to amyloid deposition within the myocardium. The free serum light chain ratio confirmed the diagnosis of AL amyloidosis. He has recently started a treatment regimen of cyclophosphamide and dexamethasone. Discussion: Systemic AL amyloidosis frequently involves the gastrointestinal tract, typically presenting with chronic diarrhoea and associated malabsorption. Only 1 case presenting with pseudo-obstruction has been reported in the literature. AL amyloidosis presents insidiously with non-specific symptoms depending on which organs are affected. Treatment aims to prevent further deposition of protein within the organs. Prognosis is determined by the organs affected and the extent of protein deposition within them. Cardiac involvement carries the worst prognosis, ultimately causing sudden cardiac death. The mainstays of management are early identification and treatment to prevent protein build-up and subsequent organ failure. Conclusion: A diagnosis of amyloidosis should be considered in patients with intestinal pseudo-obstruction to expedite the diagnosis of AL amyloidosis and improve survival.

Aim: In the management of locally advanced rectal cancer (LARC), achievement of a complete total mesorectal excision (TME) with clear resection margins has been demonstrated to be the main predictor of overall and disease-free survival. Predicting surgical difficulty in LARC patients may be of particular importance in choosing the best surgical approach. This study proposes an MRI-based score to identify preoperatively those LARC patients at high risk of difficult surgery.
Methods: This is a retrospective study based on the European MRI and Rectal Cancer Surgery (EuMaRCS) database, including patients with mid-low LARC who were treated with neoadjuvant chemoradiation therapy and laparoscopic TME with primary anastomosis. Pre-treatment and restaging magnetic resonance imaging data were available for all patients. Surgical difficulty was defined as high or low grade, taking into account operative (e.g. duration of surgery) and postoperative factors (e.g. hospital stay). Score accuracy was evaluated by estimating sensitivity, specificity and the area under the receiver operating characteristic curve (AROC). Results: Seventeen (12.5%) of 136 LARC patients were graded as high surgical difficulty. The EuMaRCS score was developed using the following significant predictors of surgical difficulty: BMI > 30, interspinous distance < 96.4 mm, ymrT stage = T3b, and male sex. The score ranged from 0 to 10. The cut-off score that best differentiated patients with a high probability of difficult surgery was 3 points; this cut-off showed the best balance of sensitivity and specificity. The EuMaRCS score demonstrated high accuracy (AROC: 0.802). Conclusions: The EuMaRCS score was found to be sensitive and specific in predicting surgical difficulty in LARC patients who were candidates for laparoscopic TME. The score has the advantage of considering patient- and cancer-related characteristics that can all be assessed preoperatively, and it can be useful in the decision-making process. This score has not yet been externally validated.

Background: Two recently published non-inferiority randomised controlled trials have raised questions about laparoscopic surgery for rectal cancer, showing lower-quality pathological specimens than those achieved with an open technique. Locally advanced rectal cancers add to the level of difficulty of the laparoscopic approach.
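The mechanics of a point-based preoperative score such as the EuMaRCS score described above can be sketched in code. Note the abstract gives the four predictors and the ≥ 3-point cutoff but not the published per-predictor point weights, so the weights below are hypothetical placeholders, not the real score:

```python
# Illustrative sketch of a point-based difficulty score. The four predictors
# and the >= 3 cutoff come from the abstract; the point WEIGHTS below are
# hypothetical placeholders (the abstract does not report them).

WEIGHTS = {
    "bmi_over_30": 2,               # placeholder weight
    "interspinous_under_96_4mm": 3, # placeholder weight
    "ymrT_stage_T3b": 3,            # placeholder weight
    "male_sex": 2,                  # placeholder weight
}
CUTOFF = 3  # from the abstract: >= 3 points flags likely difficult surgery


def difficulty_score(patient: dict) -> int:
    """Sum the points of every predictor present in the patient record."""
    return sum(w for key, w in WEIGHTS.items() if patient.get(key))


def predicted_difficult(patient: dict) -> bool:
    return difficulty_score(patient) >= CUTOFF


def sensitivity_specificity(patients, labels):
    """labels[i] is True when surgery was actually graded high difficulty."""
    tp = fn = tn = fp = 0
    for patient, actual in zip(patients, labels):
        pred = predicted_difficult(patient)
        if actual and pred:
            tp += 1
        elif actual:
            fn += 1
        elif pred:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)
```

Evaluating such a score on a validation cohort then reduces to computing sensitivity and specificity at the chosen cutoff, as the authors report doing.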
Our study aimed to assess the feasibility of laparoscopic rectal surgery, comparing short-term outcomes, quality of the surgical specimen, morbidity and mortality between propensity-score-matched groups of locally advanced and early rectal cancers. Methods: Prospectively acquired data from consecutive patients undergoing laparoscopic surgery for rectal cancer at a minimally invasive colorectal unit in the United Kingdom between 2006 and 2014. Locally advanced rectal tumours were identified as T3b or T4 on pre-operative MRI scans. All patients were operated on by the same team and all procedures were performed laparoscopically. 1:1 propensity score matching was performed to create an exact match in terms of tumour height. Results: A total of 369 laparoscopic rectal resections were performed during the study period, of which 87 patients had locally advanced (LA) disease and were propensity-score matched for tumour height with non-locally advanced (NLA) patients. Median operative time was higher in the LA group (270 min vs 250 min, p = 0.024). However, conversion to open surgery (p = 0.621), readmission (p = 0.295), re-operation (p = 0.747), clinical anastomotic leak (p = 0.589) and 30-day mortality rates (p = 0.497) were all equivalent between the two groups. R0 resection was achieved in 89% of the LA group compared with 94% of the NLA group (p = 0.177). Conclusion: This study demonstrates that a standardised approach to laparoscopy is safe and feasible in locally advanced rectal cancers, with comparable postoperative short-term clinical and pathological outcomes between the LA and NLA groups.

Aims: The application of colorectal cancer screening programs has been shown to decrease recurrence and mortality.
For this reason, these programs are being implemented at a national level in the different Spanish regions, as has happened in our community. We present the initial short-term results on morbidity in the immediate postoperative period to 90 days for colon cancer, together with mortality and hospital stay, after the implementation of a screening program in our center. Methods: A retrospective study was performed. 73 patients aged between 60 and 69 years, diagnosed with colon cancer, were included. They underwent minimally invasive surgery, in most cases, with some type of colonic resection, from January 2010 to December 2017. All patients were diagnosed either conventionally or through a screening program, the latter according to the plan implemented in our community. The sample was divided into two groups according to the mode of diagnosis (screening group = 25 patients, no-screening group = 48 patients), which were compared across different variables: patient-dependent factors, tumour-type factors, resection factors and follow-up. Results: The two groups were comparable in all study variables. Regarding the follow-up variables, no statistically significant differences were found in postoperative mortality (Clavien-Dindo V). However, we found statistically significant differences in postoperative morbidity (p = 0.006) and in its Clavien-Dindo I-IV classification (p = 0.018). Complications analyzed independently, such as anastomotic dehiscence (p = 0.023) and postoperative ileus (p = 0.033), also showed significant differences, unlike surgical wound infection (p = 0.115). Conclusion: At our center, the application of the screening program has not influenced the initial stage of colon cancer or its surgical approach. However, we have found a lower overall morbidity rate and fewer minor complications, explained by a lower incidence of anastomotic dehiscence and postoperative ileus.
Background: Colorectal carcinoma is one of the most common malignancies. Surgery is the only definitive method of achieving cure for this illness and can be performed via an open or a laparoscopic approach. The pros and cons of each approach have been discussed extensively, with the oncologic efficiency of the laparoscopic approach one of the leading topics. Objective: The aim of this study was to establish the oncological non-inferiority of the laparoscopic approach for colorectal cancer. The primary outcome measure was the number of harvested lymph nodes. Secondary outcome measures were medium-term disease-free and overall survival, as well as length of hospital stay, time to oral feeding, and short- and long-term complication rates. Methods: This was a single-center retrospective chart review. All consecutive patients who underwent colon or rectal resection for colorectal carcinoma at Hadassah Medical Center between 2014 and 2017 were included. Patients operated on for recurrent disease or with metastatic disease at the time of surgery were excluded. Patients were divided into three groups according to surgical approach: laparoscopic, open or converted. Medium-term oncological outcomes were the same for all groups. Time to oral feeding, length of hospital stay, and short- and long-term complication rates were all significantly improved in the laparoscopic group. Conclusions: We were unable to prove non-inferiority of the laparoscopic approach regarding the number of harvested lymph nodes. However, all surgical approaches yielded a high number of harvested lymph nodes, which is most probably oncologically sufficient, as reflected by the absence of any difference in medium-term oncological follow-up. This study supports previous studies showing the superiority of the laparoscopic approach in short-term recovery and overall complication rates.
aims: two non-inferiority randomised controlled trials have questioned the utility of laparoscopic surgery for rectal cancer by failing to prove that pathological markers of high-quality surgery are equivalent to those achieved by the open technique. we present short- and long-term postoperative outcomes from the largest single-surgeon series of consecutive patients undergoing laparoscopic tme for rectal cancer. we describe the standardised laparoscopic technique developed by the principal surgeon, and the short-term outcomes from three surgeons who were trained in and subsequently adopted the same approach. methods: prospectively acquired data from consecutive patients undergoing surgery for rectal cancer by the principal surgeon (ap) at the minimally invasive colorectal unit in portsmouth between 2006 and 2014 were analysed, along with data acquired between 2010 and 2017 from surgeons (tq, nf, ah) at three further international centres. end-points were overall and disease-free survival at 5 years, and early postoperative clinical and pathological outcomes. results: 263 consecutive patients underwent laparoscopic tme surgery by the principal surgeon (ap). at 5 years, overall survival was 82.9% (dukes' a = 94.4%; b = 81.6%; c = 73.7%); disease-free survival was 84.0% (dukes' a = 93.3%; b = 86.8%; c = 72.6%). postoperative length of stay, lymph node harvest, mean operating time, rate of conversion, incomplete resection, major morbidity and 30-day mortality were not significantly different between the principal surgeon and those he had trained once they were subsequently in independent practice. conclusion: laparoscopic tme produces excellent long-term survival outcomes for patients with rectal cancer. a standardised approach has the potential to improve outcomes by setting benchmarks for surgical quality, and providing a step-by-step method for surgical training. 
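5-year overall and disease-free survival figures like those above are typically kaplan-meier estimates, which account for patients censored before 5 years of follow-up. a minimal sketch of the estimator on hypothetical follow-up data (assuming distinct event times; real analyses group tied events):

```python
def kaplan_meier(times, events):
    # kaplan-meier survival curve for right-censored follow-up data.
    # times:  follow-up time per patient; events: 1 = death, 0 = censored.
    # assumes distinct event times (real analyses group tied events).
    at_risk = len(times)
    s = 1.0
    curve = []
    for t, event in sorted(zip(times, events)):
        if event:
            # a death at t scales survival by the fraction surviving t
            s *= (at_risk - 1) / at_risk
            curve.append((t, s))
        # deaths and censorings both leave the risk set
        at_risk -= 1
    return curve

# hypothetical follow-up of 8 patients (months); 1 = death, 0 = censored alive
times = [6, 12, 18, 24, 30, 42, 54, 60]
events = [1, 0, 1, 0, 0, 1, 0, 0]
for t, s in kaplan_meier(times, events):
    print(f"t = {t:2d} months: s(t) = {s:.3f}")
```

censoring is why the estimate differs from naively dividing deaths by the cohort size: patients lost to follow-up still contribute to the risk set up to the time they were last seen.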
results: analysis of the association of tumor location (sigmoid, right or left colon), operation time, blood loss, extraction site and type of surgical sutures used for wound closure with postoperative complications or specimen quality either showed no significant correlation or could not be conducted due to the nature of the data. unexpectedly, a significant difference was demonstrated between the two surgical teams in terms of hernias. the majority of cases, 44 (64.7%), were performed by surgeon 1 (s1); surgeon 2 (s2) operated on 22 (35.3%) patients; nevertheless, minilaparotomy closure was usually performed by junior members of the team. the conversion rate was 4.5% for s1 and 18.2% for s2 (p = 0.089). operation time and blood loss were smaller in the s1 group compared to s2 (153.6 ± 62.5 min vs 179.3 ± 55.2 min, p = 0.037, and 59.7 ± 45.7 ml vs 100.0 ± 74.0 ml, p = 0.027, respectively). specimen quality and early postoperative complications did not differ. the postoperative hernia rate was 2.3% for s1 and 22.7% for s2 (p = 0.013). both surgeons used the same specimen extraction sites and materials for wound closure. hernias were more frequent after vertical minilaparotomy (25%; 1 of 4 patients) and in converted patients (33.3%; 2 of 6 patients), compared to 5.5% (3 of 56) in the transverse minilaparotomy group. there was no association between hernias and wound infections. conclusions: our study demonstrates that, besides consultant-dependent surgical surrogates, steps which are often performed by other members of the surgical team (such as wound closure) may contribute to the complication rate as well. more thorough supervision of wound closure may be needed. aims: laparoscopic complete mesocolic excision (cme) right hemicolectomy is considered a demanding procedure and is currently adopted in few western centers. the aim of the present study is to analyze the safety of laparoscopic cme right hemicolectomy and to compare its short-term results with standard right hemicolectomy in a single western center. 
methods: prospectively collected data from 56 patients who underwent laparoscopic cme right hemicolectomy between june 2014 and november 2017 were retrospectively analyzed (cme group) and compared with data from 49 patients who underwent standard laparoscopic right hemicolectomy between april 2013 and november 2017 (s group). results: no differences were observed between the cme and the standard right hemicolectomy groups in terms of clinical characteristics. in the cme group, 39.3% of patients were ≥ 75 years old, 28.6% were asa class 3, 46.4% had ≥ 2 comorbidities, 30.4% had bmi > 28 and 14.3% had ≥ 2 previous abdominal surgeries. no differences were observed in terms of duration of surgery (215 ± 59 min vs. 208 ± 58 min; p = 0.573) and intraoperative complications (5.4% vs. 4.1%; p = 0.759) between the cme and s groups; mean blood loss was lower in the cme group (50.5 ± 45.9 ml vs 75.7 ± 62.6 ml, p = 0.029). the percentages of overall (42.9% vs. 46.9%; p = 0.412) and severe (clavien-dindo ≥ 3) complications (8.9% vs. 8.2%; p = 0.875), redo surgery (3.6% vs. 8.2%; p = 0.414) and readmission (3.6% vs. 6.1%; p = 0.662) were comparable between the cme group and the s group. a significant difference was observed in the length of the specimen (329 ± 79 mm vs. 270 ± 98 mm; p < 0.001) as well as in the length of the proximal (159 ± 96 mm vs. 121 ± 70 mm; p = 0.028) and distal margins (134 ± 64 mm vs. 110 ± 61 mm; p = 0.05) in favor of the cme group. the number of lymph nodes harvested was slightly higher in the cme group (21.9 ± 9.6 vs. 25.7 ± 10.2; p = 0.055), as was the percentage of cases with fewer than 12 retrieved lymph nodes (8.2% vs. 1.8%; p = 0.143), although these differences did not reach statistical significance. conclusions: this study represents one of the few western experiences demonstrating the safety of laparoscopic cme right hemicolectomy. 
cme technique showed good short-term results and better-quality specimens when compared with the standard procedure. aim and background: peritoneal dissemination of colorectal cancer (pc) makes the complete resection of cancer lesions impossible. in such cases, multidisciplinary therapy, mainly chemotherapy, is essential. preoperative diagnosis of pc by ct or mri imaging is usually uncertain, and diagnosis of pc requires surgical specimens obtained at laparotomy. however, laparotomy and resection of pc under general anesthesia tend to preclude immediate chemotherapy. a less invasive diagnosis of pc is therefore necessary. endocytoscopy (ec) provides histological diagnosis with precise images obtained at high magnification (×520). as a preliminary examination, ec diagnosis of resected specimens of pc was evaluated. methods: two cases of pc diagnosed intraoperatively were evaluated. under general anesthesia, laparotomy was conducted. peritoneal dissemination lesions obviously diagnosed as pc were resected. the lesions were immediately stained with methylene blue solution for 120 to 180 s. ec observation was performed according to the ec classification and the ec-v classification. results: in both cases, ec observation was successfully performed. images of dilated surface microvessels of nonhomogeneous caliber or arrangement were observed on nbi ec, corresponding to ec-v3. the histopathological diagnosis of the resected specimens was metastatic colorectal carcinoma in the peritoneum in both cases. conclusions: histological diagnosis of pc can be obtained by ec on resected specimens. based on this investigation, ec examination via a camera port during laparoscopic operation might make in vivo diagnosis of pc of colorectal cancer possible. aims: the aim of this presentation is to demonstrate and analyze surgical complications arising during laparoscopic colorectal resections for cancer and to analyze the reasons for adverse events. 
methods: we demonstrate videos from our surgeries in which different types of complications occurred, and share our classification of the types of mistakes that may lead to intraoperative complications and ways to prevent them. results: we divide mistakes in laparoscopic colorectal resections into two large groups: 'false strategy' and 'dangerous techniques'. the first includes poor diagnosis, too extensive or insufficient extent of surgery and improper enthusiasm in using platforms. prevention of the first type of mistake lies in thorough training and peer review of each consultant's practice. the second type of mistake includes two subtypes: 'faulty habits', the use of unsafe techniques (blind port insertion, poor vascular exposure prior to clipping, not obtaining 'critical views', unsafe use of energy and stapling devices, etc.), and 'failure in a certain case', when despite a correct general approach a complication occurs (misinterpretation of fascial layers or vessels). prevention of 'faulty habits' lies in supervised training in high-volume colorectal departments, including dedicated surgical device training. to avoid 'failure in a certain case', standardization of the surgical procedure is essential, as the most efficient way to prevent this type of mistake is 'pattern recognition': the ability of a surgeon to compare the picture seen during a procedure with the 'standard' view from previously performed standard surgeries; this is apparently impossible when every procedure is done differently. regular review of recordings of one's own surgeries and other surgeons' procedures may also facilitate pattern formation. in case a complication occurs, we use a four-step course of action: preservation of the view, temporary control, decision on conversion, permanent control. conclusion: as the popularity of laparoscopic colorectal resections grows rapidly, the number of intraoperative complications is increasing as well. 
we demonstrate videos of complications and our approach to the classification of possible mistakes. a systematic approach to the reasons underlying certain mistakes helps to produce a strategy to reduce the intraoperative complication rate. introduction: drain placement inside the abdominal cavity has traditionally been carried out to evacuate blood remnants or postoperative collections. there is no scientific evidence that the prophylactic use of drainage in elective colorectal cancer (crc) surgery avoids anastomotic or other complications; however, it is traditionally used. when an anastomotic leak occurs, it is generally agreed that a drainage system should be used for therapeutic purposes. aims: the aim of this study is to evaluate the effectiveness of the use of prophylactic drainage in elective crc surgery. we assessed whether drains avoided the appearance of complications, and whether they were useful when an anastomotic leak appeared. methods and results: we analyzed the data collected in our hospital from 1/11/17 to 12/12/18. we studied the number and type of interventions in which prophylactic drains were placed, the appearance of anastomotic complications and whether these drains were effective. 93 interventions were performed during this period. prophylactic drainage was used in 72% of these procedures (67 interventions); this percentage rose to 100% in patients who underwent left colon surgery such as sigmoidectomy or a rectal procedure. during this period, there were 10 cases of anastomotic leakage. drains had been placed in all of them, but only 3 were effective. conclusions: we have seen that prophylactic drainage is a common practice independently of the location of the anastomosis. the latest multimodal rehabilitation guidelines recommend, with a high level of scientific evidence, against the systematic use of drains above the peritoneal reflection. they cause discomfort to the patient and delay early mobilization. 
however, it may be useful to use drains in the first 24 h after a pelvic floor procedure. there is not enough evidence to show that systematic drainage after colorectal anastomosis prevents complications of the anastomosis or other complications. aims: colonic cancer of the splenic flexure is uncommon and associated with poor prognosis. several studies have been published aiming to identify the optimal surgical option for the best oncological outcomes. however, whether an extended colectomy or a segmental resection is required is still controversial. the aim of this study is to analyse the outcomes of the two different approaches through the experience of a single centre. materials and methods: retrospective data of consecutive patients with a diagnosis of colonic cancer situated at the splenic flexure treated in our department between 2004 and 2017 were analysed. based on the type of surgical procedure, patients were enrolled in arm a (segmental resection) or arm b (extended resection). arm a patients were treated with segmental resections with wide mobilisation of the transverse and descending colon and ligation of the left colic artery, sparing the middle colic artery and the inferior mesenteric artery. a functional side-to-side anastomosis was performed extracorporeally. arm b patients were treated with more extended colectomies, both associated with central vascular ligation. results: of the 200 patients included, 141 were allocated to arm a and 59 to arm b. the patient populations of the 2 arms were homogeneous as concerns demographic characteristics and stage of disease. operative time was comparable (108.9 min vs 119 min, p = 0.332). the length of the specimen was significantly shorter in arm a (15.7 vs 32.1, p = 0.0351). the number of harvested lymph nodes did not differ between the two groups (12.5 vs 17, p = 0.167). postoperative short-term complications were comparable in both arms (17 vs 1, p = 0.692). no postoperative mortality was observed. 
overall 5-year survival and disease-free survival rates were similar in arms a and b (81.3% vs 83.05%, p = 0.321, and 78.6% vs 80.5%, p = 0.534). hospital stay was similar in the two groups (p = 0.99). conclusions: despite a shorter surgical specimen after limited resections, postoperative complications, lymph node harvest and survival were comparable in both arms. in our opinion, the extracorporeal anastomosis contributes both to the achievement of a cleaner operative field and to better control of the resection margins. the incidence of neuroendocrine tumours in the rectal area has increased in recent years. before the advent of minimally invasive colorectal surgery, these lesions had to be treated by a more radical technique when not suitable for endoscopic resection. selection of cases is mandatory in order to achieve good results, not only surgical but also oncological. we present our series of 3 neuroendocrine tumours treated by the tamis approach, including technical aspects, defect closure techniques and data regarding pathological findings. all cases were low-grade carcinoid tumours. resection with free margins was obtained in all cases, and defect closure was performed in all cases. the tumours were located 9, 10 and 15 cm from the anal verge. the postoperative course was uneventful, and no adjuvant therapy was needed. the tamis approach for rectal neuroendocrine tumours is a safe and feasible technique. proper selection of cases is mandatory in order to achieve good results. surg endosc (2019) aim: to assess the safety and efficacy of a single layer of barbed suture vs a double-layer 'hybrid' (interrupted and running) suture for the closure of the anastomotic stapler access enterotomy after laparoscopic right colectomy with intracorporeal anastomosis. methods: from april 2014 to november 2018, 252 laparoscopic right colectomies with intracorporeal anastomosis were performed in our surgical department. all patients in both groups were perioperatively managed using an eras pathway. 
seventy-two patients had the enterotomy closed with a single-layer running suture of filbloc tm (assut europe). these patients were matched with 72 patients who underwent intracorporeal right colectomy with the enterotomy closed with a 'hybrid' double-layer technique (first layer: interrupted stitches in maxon tm 3-0 (covidien); second layer: a running suture in pds tm 3-0 (ethicon)). intraoperative variables, anastomotic leak rate, morbidity and mortality rates were analyzed. results: the two groups were homogeneous with respect to demographics, body mass index (bmi) and american society of anesthesiologists (asa) score, as well as tumor stage. in the barbed group, median operating time was 121.5 min vs 140.7 min in the hybrid group (p = 0.02). anastomotic leak occurred in 5 (6.7%) patients in the hybrid vs 2 (2.7%) patients in the barbed group (p = 0.24); all of these patients required reoperation. intraoperative findings showed a leak at the enterotomy closure in 2 (0.4%) cases in the hybrid group, while an intact stapler access was observed in both patients in the barbed group. no difference was observed with respect to noninfectious complications between the two groups (p = 0.55). patients in the hybrid group experienced a longer hospital stay when compared to the barbed group (p = 0.03). a readmission occurred in the hybrid group due to an intra-abdominal collection, while no readmission was observed in the barbed group. no patient died in the postoperative period. aims: lymph node status is one of the key prognostic factors in patients with colorectal cancer, and remains the most important selection criterion for adjuvant chemotherapy. it is believed that at least 30% of node-negative patients will suffer disease recurrence within the first 5 years after surgery. this may be due to understaging of lymph node status. sentinel lymph node mapping is widely used for staging of breast cancer and melanoma, with injection of colloid tc99 and isosulfan blue (ib). 
however, indocyanine green (icg) fluorescence guidance is a new technical approach to this issue, with promising results, as it is not influenced by body mass index or lymphatic invasion. intraoperative fluorescence icg navigation also aims at the detection of aberrant lymphatic drainage outside the planned resection. icg lymphography has the advantage of offering good visualization of the lymphatic channels, but identification of the lymph nodes can be problematic. our objective with this study is to assess the use of the intraoperative lymphogram in elective colorectal surgery, to evaluate whether it led to changes in the surgical attitude regarding the performance of lymphadenectomy. methods: indocyanine green was injected into the submucosal layer around the tumor at 2 points with a 23-gauge localized injection before lymph node dissection, and the lymph flow was observed at 1, 3 and 5 min after injection using a near-infrared camera system. in addition, a complete mesocolic excision with central vascular ligation was guided by the region where the lymph flow was observed to be fluorescent. the following table summarizes the 10 procedures carried out as well as the lymphadenectomy performed before and after the use of icg. in brief, after the application of intraoperative icg, additional lymph nodes were obtained in 20% of patients after expansion of the surgical plan; moreover, affected lymph nodes were spotted in 10% after the expansion of the surgical plan. conclusions: intraoperative real-time visualization of the lymph flow using indocyanine green fluorescence imaging during laparoscopic colon cancer surgery is feasible and a helpful technique for lymph node mapping, which may lead to intraoperative changes in lymphadenectomy. 
tamis resection of rectal tumours has proven to be a safe and feasible technique, especially for lesions located in the mid and low rectum. when the tumour is located in the upper rectum, and especially near the rectosigmoid junction, tamis resection may be more difficult, not only due to technical aspects but also due to the risk of a free perforation, especially when a full-thickness resection is performed. we present our results of 6 tamis resections of lesions located around the rectosigmoid junction. four resections were performed with the aid of an endostapler in order to achieve full resection without the risk of a free colonic perforation. in 3 cases, a combined abdominal laparoscopic exploration was made, in order to help ensure proper resection of the lesion as well as to avoid intraoperative complications. the distance from the anal verge ranged from 12 to 20 cm. the postoperative course was uneventful in all cases, and a complete specimen resection was obtained in all cases. tamis resection of tumours located in the rectosigmoid junction may be a safe and feasible technique in selected patients. methods: between january 2015 and april 2018, 83 patients with a diagnosis of right colon adenocarcinoma underwent right hemicolectomy. the data were analysed for patient demographics, histology, type of surgical approach, intraoperative details (length of surgical procedure, blood loss, blood transfusion, conversion rate) and short-term postoperative outcomes including complications. introduction: postoperative ischemic colitis is a life-threatening vascular gastrointestinal condition that mainly occurs after cardiovascular surgery. we present a surprising case following a laparoscopic rectal resection. case report: a 77-year-old diabetic patient with upper rectal adenocarcinoma underwent laparoscopic anterior rectal resection (partial mesorectal excision) and mechanical anastomosis following chemotherapy/radiotherapy. 
48 h after surgery he presented with abdominal pain, distension and fever. an abdominal computed tomography (ct) scan (contrast enema) revealed no findings of anastomotic leakage (al). neither digital palpation nor proctosigmoidoscopy (3rd day) showed signs of al. the patient's clinical situation improved with conservative treatment (antibiotics, digestive rest …), c-reactive protein levels decreased and the blood cultures were negative. on the 11th day he was discharged, passing semiliquid stools. eight days later he needed hospital readmission due to air and feculent/purulent discharge from the previous abdominal drainage orifice. ct scan: no evidence of dehiscence was found, although distention of the rectum and sigmoid colon and an image of a 'large fecaloma' were observable. on the 4th day of hospitalization he expelled through the anus a large malodorous segment of tissue with necrotic appearance (image) with surprising histologic features: 'full-thickness necrotic colonic wall'. further rectosigmoidoscopy showed an intact anastomosis, signs of ischemic colitis proximal to the anastomosis and a fistulous orifice. surprisingly, the patient progressed favorably, being discharged on the 12th day for ambulatory control with a low-output enterocutaneous fistula. histopathological diagnosis: ypt3 n0 m0. follow-up: the fistula output increased, with persistent diarrheal stools through the anus along with persistent anemia and malnutrition. an exploratory laparotomy was scheduled. a fistulous tract toward a small stenotic segment of colon immediately proximal to the colorectal anastomosis was identified and resected. finally, a terminal colostomy was performed. the subsequent postoperative course was without incident. currently the patient is asymptomatic. comments: it seems indisputable that a colon segment proximal to the anastomosis necrosed and was expelled through the colorectal anastomosis. the mechanism seems inexplicable to us. 
it is even more disconcerting that there was no disruption of the anastomosis. objectives: fluorescence-guided surgery has emerged as a new imaging modality to improve the detection of liver and lymph node metastasis in colorectal cancer. in right-sided colon cancer, the standard lymphadenectomy should reach the ileocolic vessels and the right branch of the middle colic vessels. the purpose of this study is to perform an objective estimation of lymphatic drainage and metastatic lymph nodes in right-sided colon carcinoma through indocyanine green (icg) lymphography. methods: patients with right-sided colon adenocarcinoma were included, excluding those with stage iv or t4 disease and those who underwent urgent surgery. 2 cc of icg were injected peritumorally using a peripheral intravenous catheter at the beginning of the intervention. the lymphatic drainage map of the tumor was identified. lymphadenectomy of the ileocolic vessels and the right branch of the middle colic vessels was performed, extending it to the left branch and origin of the middle colic vessels if drainage there was shown in the mapping. results: 16 patients were included. the average age was 58. in 10 patients the tumor was located in the ascending colon and in 6 patients in the hepatic flexure. in 11 patients, the mapping showed lymphatic drainage to the ileocolic vessels and the right branch of the middle colic vessels. in 5 patients (31%) it showed drainage to the left branch and origin of the middle colic artery, and therefore extended lymphadenectomy was performed at that level. in 14 patients, the postoperative period was uneventful. 1 patient presented infection of the surgical wound and another patient developed a 6 cm perianastomotic collection treated with percutaneous drainage. the anatomopathological report showed nodal metastasis in 4 of the 5 patients (80%) in whom lymphatic drainage was observed in the territory of the middle colic vessels with icg. these patients had tumors in the hepatic flexure. 
therefore, 4 of the 16 patients with right-sided colon carcinoma (25%) presented nodal metastasis in the territory of the middle colic vessels. conclusions: fluorescent lymphography may improve the results of lymphadenectomy in colon cancer. in patients with tumors of the hepatic flexure, lymphadenectomy extended to the left branch and origin of the middle colic vessels could be an adequate alternative. introduction: over the last decade, the common principles of surgical treatment in colon surgery have been central vascular ligation (cvl) and complete mesocolic excision (cme). however, the anatomy of the superior mesenteric vessels encountered while performing right colectomy is characterized by wide variability, which can lead to complications, especially during minimally invasive surgical intervention. objective: the purpose of this study is to describe vascular variations around the superior mesenteric artery and vein (middle colic, right colic and ileocolic vessels, and the henle trunk) in laparoscopic right colectomy. materials and methods: the study was conducted at the 'dobrobut' clinic and the department of general surgery, o. o. bohomolets national medical university (kyiv, ukraine) during the 2016-2018 period. 24 patients were included in the study, 13 females (54.2%) and 11 males (45.8%), with a mean age of 71.4 ± 9.8 years. all patients underwent laparoscopic right colectomy (cme + cvl) with d3 lymph node dissection. recorded video material from each laparoscopic right colectomy was analyzed during the study. results: the ileocolic vessels were the most stable, with a typical anatomical position in all cases. in 58.3% of cases the ileocolic vein was identified anterior to the ileocolic artery, and in 41.7% posterior to it. the right colic vein was absent in 29.1% of cases. the right colic vein drained to the henle trunk and the superior mesenteric vein in 62.5% and 37.5% of cases respectively. 
the right colic artery was present in 75% of patients; its origin was the superior mesenteric artery in 94.4% of these cases and the middle colic artery in 5.6%. the middle colic vein was present and drained to the superior mesenteric vein in 100% of cases; likewise, the middle colic artery originated from the superior mesenteric artery in all cases. the henle trunk was present in 91.7% of cases: a gastro-pancreato-colic trunk in 45.5% of cases, a gastro-pancreatic trunk in 40.9% and a gastro-colic trunk in 13.6%. conclusions: knowledge of the variations of surgical vessel anatomy encountered while performing right colectomy, together with the surgeon's preparation and the use of ct-scan data, can reduce the risk of iatrogenic damage and complications. introduction: the enhanced recovery after surgery (eras) protocol was designed to accelerate convalescence, reduce morbidity and shorten the length of hospital stay (los). one of its major interventions is balanced perioperative fluid therapy. the impact of this single intervention on short-term outcomes is widely discussed. aim: the aim of this study was to assess the impact of perioperative fluid therapy on short-term outcomes. material and methods: the analysis included consecutive prospectively registered patients operated on laparoscopically for colorectal cancer between november 2012 and january 2018. patients were divided into two groups: balanced (≤ 2500 ml) or unbalanced (> 2500 ml) perioperative fluid therapy. all patients were treated according to the eras protocol. study outcomes were recovery parameters, morbidity rate, los and 30-day readmission rate. results: group 1 consisted of 361 and group 2 of 80 patients. there were no statistically significant differences between the groups in terms of demographic and operative parameters. morbidity was lower in group 1 (27.4% vs 38.8%, p = 0.044). patients in group 1 were discharged home earlier than in group 2 (4 vs 5 days, p < 0.001). 
moreover, we observed differences in recovery parameters between the groups: tolerance of an oral diet on the 1st postoperative day (76% vs. 59%, p = 0.002) and patient mobilization on the day of surgery (90% vs. 78%, p = 0.005). the 30-day readmission rate was lower in group 1 (7.8% vs. 15%, p = 0.041). conclusion: balanced perioperative fluid therapy on the day of surgery may be associated with faster convalescence, a lower morbidity rate, shorter los and a lower 30-day readmission rate. methods: a retrospective analysis was performed including patients who underwent lcs or ocs for cancer treated as an emergency in a single centre between 2014 and 2018. patients who underwent palliative surgery were excluded. lcs were 1:1 propensity score-matched with ocs based on p-possum and stage of disease. short-term outcomes included oncological quality, length of hospital stay (los) and postoperative mortality. for long-term outcomes, 3-year overall and disease-free survival (os and dfs) rates were analyzed. results: during the study period, a total of 406 emergency colorectal resections were performed. of them, 25% (n = 101) were colonic cancers. 38 lcs were matched to an equal number of ocs. median age was 71 (21) years and 62% were females. median follow-up was 18 (22) months. the majority of resections were right hemicolectomies (47%), followed by sigmoid resections (36%) and subtotal colectomies (17%). operative time (188 (90) background: total mesorectal excision (tme) offers the best reported rates for local recurrence and survival in patients with rectal cancer. our series from a single high-volume center assessed the feasibility, safety and long-term oncologic adequacy of laparoscopic total mesorectal excision. methods: we reviewed the prospective database of 266 consecutive unselected patients undergoing laparoscopic tme for rectal cancer between 1995 and 2009 at the department of general surgery, onze-lieve-vrouwziekenhuis hospital (olv), campus aalst, belgium. 
the objective of the present study was to evaluate the effectiveness of laparoscopic tme, with an emphasis on perioperative variables and long-term oncological outcomes. results: 266 pts with mid and distal rectal cancer up to 10 cm from the anal verge had laparoscopic tme resection. 161 patients (60.5%) underwent sphincter-preserving surgery and the remaining 105 patients (39.5%) had an abdominoperineal resection. end-to-end anastomoses: 68 pts (42%); j-colonic pouch: 92 pts (58%). introduction: the rica clinical pathway (intensified recovery in abdominal surgery), also called surgical multimodal rehabilitation, is the application of a series of perioperative measures and strategies in patients who are going to undergo a surgical procedure, with the objective of reducing the stress secondary to the surgical intervention. in this way, we achieve a better recovery of the patient and significantly reduce complications and morbidity. objectives: to analyze, through our database of patients undergoing crc surgery, the percentage of postoperative ileus and the following quality indicators: postsurgical hospital stay, anastomotic leak and surgical site infection; and to check whether the implementation of the rica pathway has brought an improvement in our postoperative hospital stay and, with that, a lower healthcare cost. methods and results: we analyzed the data collected from patients who underwent crc surgery in our hospital between 01/11/2017 and 12/12/18, during which time we implemented the rica clinical pathway. the average hospital stay was 8 days. of the 78 patients, 8.97% presented anastomotic leak, 12.82% infection of the surgical wound and 19.23% paralytic ileus. we have verified that the average hospital stay increases with the appearance of anastomotic leak (18.86 days), infection of the surgical wound (16.10 days) and paralytic ileus (16.80 days). 
when we divided this 12-month period into two halves to see the impact of the implementation of the clinical pathway, we obtained the following results: the post-surgical hospital stay in the period from 01/11/2017 to 05/01/2018 was 8.33 days; the stay from 05/01/2018 to 12/12/2018 was 5.33 days. the implementation of the rica clinical pathway is providing us with important advantages in our clinical practice, with greater postoperative comfort and an improvement in our quality indicators, such as the decrease in the average hospital stay of our patients. on the other hand, after starting its implementation we have encountered resistance to changing clinical habits and the need for multidisciplinary participation, so adherence is progressive and requires periodic audits to reinforce and consolidate our achievements and identify our points for improvement. however, tem has not yet achieved widespread use. recently, transanal minimally invasive surgery (tamis) using single-port surgery devices has been reported. initially facilitated by existing single-port surgery devices, two platforms for transanal access have emerged: the gelpoint® path (applied medical, rancho santa margarita, ca, usa) and the sils™ port. the gelpoint® path is the only platform specifically designed for tamis and tatme. objective: in the present study, tamis using a gelpoint® path was performed in 35 patients with lower rectal neoplasms. results: complete full-thickness excision was performed in all cases of tamis, with free margins over rectal cancer. in two cases no neoplasm was visualized. the patient characteristics, operative techniques and operative outcomes were evaluated. the mean age of the patients was 63.0 years (range 48-76). the mean operating time was 186 min (range 55-110). 23 patients were selected for tatme, 10 for tamis and two patients for evaluation and biopsy if necessary. 
additional transabdominal rectal resection was not performed, and adjuvant chemoradiotherapy was performed in all cases. tamis using a gelpoint® path was revealed to be easy and safe to perform. although only a small number of cases were treated, the operation was demonstrated to be sufficiently feasible. conclusion: the gelpoint® path is a good tool for colorectal surgery in tatme, tamis and evaluation of anastomoses or de novo lesions. introduction: several improvements in rectal cancer treatment in the last decades have resulted in markedly increased survival. nevertheless, surgery remains the prevalent treatment and 60 to 90% of operated patients experience some kind of functional abnormality. as nowadays we acknowledge the importance of focusing not only on survival rates but also on quality of life, we sought a precise, reproducible, simple, clear and user-friendly tool for evaluating bowel function in rectal cancer patients after sphincter-saving operations. therefore, we performed a thorough translation with cultural adaptation of the patient-reported outcome tool, the low anterior resection syndrome (lars) score, to the portuguese language (lars-pt) and population. methods: according to current international recommendations, we designed this study encompassing three main phases: (i) cultural and linguistic validation to european portuguese; (ii) feasibility and reliability tests of the version obtained in the previous phase; and (iii) validity tests to produce a final version. the questionnaire was completed by 154 patients from six portuguese colorectal cancer units, and 58 completed it twice. results: the portuguese version of the lars score showed high construct validity. regarding the test-retest, the global intraclass correlation showed very strong test-retest reliability. looking at all five items, only items 3 and 5 presented a moderate correlation. 
the lars score was able to discriminate symptoms, showing worse quality of life in patients submitted to preoperative radio- and chemotherapy. conclusion: the lars questionnaire has been properly translated into european portuguese, demonstrating high construct validity and reliability. this is a precise, reproducible, simple, clear and user-friendly tool for evaluating bowel function in rectal cancer patients after sphincter-saving operations; therefore, its systematic use should be implemented. oesophagectomy is the mainstay of curative treatment for oesophageal cancer, and post-oesophagectomy diaphragmatic hernia (podh) represents a potentially life-threatening surgical complication characterized by an underestimated occurrence rate and unknown related risk factors. this study analyses the experience of two tertiary designated centers in order to evaluate key elements concerning the development and treatment of podh. a cohort of consecutive patients affected by clinically resectable oesophageal cancer (any t, any n and m0) underwent ivor-lewis oesophagectomy between march 1997 and april 2017 according to three different approaches: totally open (oilo), hybrid (hilo) and totally minimally invasive oesophagectomy (milo). the whole population was retrospectively observed in the context of a postoperative calendarised follow-up in order to record the incidence and post-repair results of podh. 414 patients underwent ivor-lewis oesophagectomy for cancer and 22 (5.3%) developed podh within a median follow-up period of 16 months (6-177). surgical repair was generally performed by means of laparoscopic cruroplasty (77%) with a conversion rate of 24%. postoperative morbidity did not include early recurrences but exclusively cardio-pulmonary complications (5 patients), with one case of respiratory failure leading to death. discharge was reached after a median hospital stay of 6 days (2-95), while 3 recurrences (14%) occurred over a median follow-up period of 10.1 months. 
a wide univariate analysis identified statistically significant associations between podh occurrence and the administration of preoperative chemoradiotherapy, complete pathological response (cpr) and a lymph node harvest (lnh) larger than 33 nodes (p values of 0.016, 0.001 and 0.024, respectively). the strong influence of an extended lnh was confirmed by the multivariable analysis (p = 0.026), along with cpr, which should however be considered a longer-survival-related bias. minimally invasive surgery and neoadjuvant chemoradiotherapy represent a considerable part of the multimodal treatment for oesophageal cancer and presented no statistically significant association with podh development, while an lnh including more than 33 nodes proved to be an independent risk factor mirroring the extent of surgical demolition in oesophagectomy. l. barbulescu. aim: to assess the safety and effectiveness of robotic total mesorectal excision vs laparoscopic total mesorectal excision and to analyse the primary outcomes. methods: the operative, post-operative and oncological outcomes were evaluated to assess the effectiveness of both techniques of tme. in our center, 30 robotic rectal resections and 48 laparoscopic resections were performed from january 2018 to present. results: rtme was associated with longer operation time, earlier bowel movements, lower risk of conversion and shorter hospitalization. statistical equivalence was seen between rtme and ltme for non-oncological variables like blood loss, morbidity and reintervention risk. the oncological variables, such as number of harvested nodes and positive circumferential resection margin risk, were also comparable in both groups. the length of distal resection margins was similar in both groups. conclusion: rtme in patients with rectal cancer was associated with a lower rate of conversion and a lower incidence of urinary retention. the operative time in rtme was significantly longer than in ltme. 
the initial oncological and functional outcomes of rtme seem to be equivalent to ltme. c. athanasiou. aims: two randomized controlled trials failed to show non-inferiority of laparoscopic total mesorectal excision (ltme) compared to open. ltme becomes particularly challenging in low rectal cancers and in narrow pelves. many surgeons report that robotic tme (rtme) may be beneficial in that setting. our aim was to systematically review the literature and compare the pathologic outcomes of open, laparoscopic and robotic tme for rectal cancer. methods: medline, embase, scopus, cochrane library and web of knowledge databases were searched for randomized controlled trials (rcts) reporting pathologic outcomes of open, laparoscopic or robotic tme, with no language restriction. our primary outcome was quality of tme on macroscopic assessment of the specimen. secondary outcomes included positive circumferential resection margin, distance to radial margin, number of lymph nodes and positive radial margin. the included studies were quality assessed and the jadad score was reported. the grade approach was used to rate the certainty of each network estimate. results: fourteen rcts were included in our study. seven rcts compared otme to ltme, six compared ltme and rtme, and one study compared otme to rtme. no statistically significant difference was found in quality of tme when the ltme was compared to the otme, or = 1.36 (0.99, 1.85), or the rtme, or = 1.33 (0.82, 2.16). no difference was found in pcrm for the laparoscopic, or = 1.23 (0.90, 1.69), or the robotic approach, or = 0.87 (0.44, 1.75), when compared to open. distance to radial margin and number of lymph nodes did not differ between the groups. conclusions: no significant advantage in pathologic specimen quality has been found with the robotic approach. ltme does not seem to compromise the quality of the specimen. h. samura 1, j. arakaki 2, k. sugata 2, y. hori 2, y. nagamine 2, f. kohagura 2, h. 
motonari 2, s. kameyama 2, t. ishimine 2. 1 division of digestive and general surgery, urasoe general hospital, okinawa, japan; 2 department of surgery, urasoe general hospital, okinawa, japan. colorectal cancer often invades adjacent organs, and it is known that prognosis improves with resection of the involved organ. we report our experience of resection of invaded adjacent organs, including the seminal vesicle, uterus and bilateral appendages, posterior wall of the vagina, and bladder wall. method: although the range of resection is predicted preoperatively by imaging, at the time of operation it was decided by palpation with a forceps. each operation was evaluated by operation time, blood loss, blood transfusion volume, postoperative complications, postoperative hospital stay, and short-term prognosis. results: resection cases of seminal vesicle, posterior vaginal wall, uterus and bilateral appendages, and bladder wall numbered 5, 4, 4 and 1, respectively. the results are shown in the order of seminal vesicle / posterior vaginal wall / uterus / bladder. median age was 70, 67, 54 and 74 years. the median operation time was 541, 630, 623 and 249 min, the median blood loss was 580, 340, 310 and 10 ml, and only one case of uterus and bilateral appendages resection required blood transfusion. the average postoperative hospital stay was 40, 38, 45 and 15 days. nine cases had postoperative complications, including delayed wound healing, anastomotic leakage and rectovesical fistula, postoperative ileus, chylous ascites and neurogenic bladder. all of these resolved with conservative treatment. the mean hospital stay was 48 days (38-88) in cases with complications and 19 (14-29) days without complications. the median observation period was 773 days (166-2166), and there was no local recurrence. all stage iv patients died. as there was no local recurrence and all patients without stage iv disease are alive, it seems that the resection range was sufficient. 
conclusion: even in colorectal cancer with adjacent organ invasion, it was possible to determine the resection line by palpation with laparoscopic forceps manipulation, and to resect with margins free of cancer. laparoscopic low rectal resection with/without diverting ileostomy. p. ihnát, m. tesar, p. ostruszka, p. gunková, p. vávra. background: the construction of a diverting ileostomy (di) is recommended to avoid septic complications of anastomotic leakage. the aim of our study was to assess the benefits and risks of di constructed during laparoscopic low anterior resection (lar). methods: a retrospective clinical cohort study was conducted in university hospital ostrava, czech republic. all patients undergoing laparoscopic lar with tme because of rectal cancer within a 6-year study period were assessed for study eligibility. results: a total of 151 patients (73 patients without di, 78 patients with di) after laparoscopic lar were enrolled into the study and underwent analysis. both study subgroups were comparable in terms of demographic and clinical features. postoperative 30-day morbidity was significantly lower in patients without di (23.3% vs. 42.3%, p = 0.013). anastomotic leakage frequency was higher in patients without di (9.6% vs. 2.5%, p = 0.090); surgical intervention was necessary in 6.8% of patients without di. stoma-related complications were noted in 53.8% of patients with di; some patients had more than one complication. surgical intervention because of stoma-related complications was needed in 9 patients (11.5%). a distinctive complication of laparoscopic di construction (small bowel obstruction due to di semi-rotation around its longitudinal axis) was noted in 3 patients (3.8%). the mean stoma period (interval between lar and di reversal) was more than 10 months in our study; only 19.2% of patients were reversed without delay (≤ 4 months). postoperative morbidity after di reversal was 16.6%; re-laparotomy was needed in 2.5% of patients. 
conclusions: despite the benefits of di in protecting a low rectal anastomosis, ileostomy construction remains fraught with many stoma-related complications and long stoma periods associated with significantly decreased quality of life. aims: single-port laparoscopy is a minimally invasive surgical technique that joins cosmetic advantages with the well-recognized benefits of the standard laparoscopic approach [1]. we describe a laparoscopic single-port hartmann reversal in a patient using the umbilical colostomy site for surgical access [2]. methods: a 42-year-old patient underwent a laparoscopic single-port hartmann procedure with a trans-umbilical colostomy for a recurrent sigmoid volvulus that was initially treated by endoscopic derotation. after three months the patient was re-evaluated for a hartmann reversal with a laparoscopic single-port technique. after routine skin preparation and laparoscopic setup, the colostomy is mobilized from its mucocutaneous border, and the anvil of a circular stapler is secured to the distal lumen. using a gelpoint system with 3 trocars, intra-abdominal adhesiolysis is performed. the splenic flexure is mobilized to achieve sufficient mobilization of the left colon to allow a tension-free anastomosis to be fashioned. the rectal stump is mobilized to the mid rectum, starting from the posterior mesorectal fascia around to the anterior rectal wall. a tension-free colorectal anastomosis is secured with a standard circular 31 mm stapling device inserted transanally. the colostomy wound is closed. the operative time was 60 min. results: the postoperative course was uneventful; oral intake started on postoperative day three and the patient was discharged on the fourth postoperative day. conclusions: single-port laparoscopic hartmann reversal through the umbilical stoma site is a minimally invasive surgical option that is safe in selected patients and offers the best cosmetic results. 
the progressive evolution of surgical techniques and oncologic protocols in rectal cancer disease has encouraged surgeons to hone their skills for anus preservation in low rectal cancer surgery. laparoscopic surgery is already one of the best ways to reach the pelvic floor and to attempt procedures which were previously difficult to perform through open surgery. anastomotic leakage has a particularly high occurrence if the anastomosis is performed in the anal or distal rectal area. it is evident that, although fecal diversion does not decrease postoperative mortality, it significantly reduces the risk of anastomotic leak and the risk of a second major surgery when a leak occurs. diverting stomas are low-risk procedures from a technical point of view, but they potentially expose patients to postoperative morbidity, impacting the patients' quality of life. it is not easy to decide whether fecal diversion is needed or not. this decision must be made on a case-by-case basis, applying stomas only when they are really needed. we report our initial experience of leaving a transmesenteric cotton loop around the pre-terminal ileum, whose extremities are brought out, usually through the lateral trocar wound in laparoscopy or through a dedicated mini-incision in open surgery. the purpose is to perform, in case of suspected fistula, a minimally invasive diverting procedure by widening the loop wound and pulling up the ileum into a lateral loop ileostomy. we applied this procedure to 12 consecutive patients with low colorectal anastomosis, and in two of them we performed a lateral loop ileostomy with good results. we believe this can be an alternative that needs to be standardized. purpose: sarcoidosis is a chronic, multisystem inflammatory disorder of unknown aetiology characterised by noncaseating granulomas within involved organs. gallbladder involvement in sarcoidosis is extremely rare, and a literature review revealed only 7 reported cases to date. 
in this paper, we present a case of gallbladder-associated sarcoidosis. method: a 67-year-old lady was known to the clinic for regular surveillance of liver steatosis and incidental gallbladder polyps. the largest polyp was 4 mm at presentation in 2008 and had grown to 7 mm by 2017. in view of worsening symptoms of biliary colic and growing polyps, a laparoscopic cholecystectomy was performed. results: the laparoscopic cholecystectomy was unremarkable, and specimens of the gallbladder and lymph nodes were sent for histology. histological examination revealed chronic cholecystitis with polypoid cholesterolosis of the gallbladder and noncaseating granulomata within a lymph node, which strongly suggests sarcoidosis. conclusion: in conclusion, we report a case of an incidental finding of gallbladder sarcoidosis over the course of treatment of biliary colic and symptomatic gallbladder polyps. the definitive treatment for patients with symptomatic gallbladder sarcoidosis is cholecystectomy. the surgical management of cholelithiasis can be associated with significant morbidities. despite the relatively low incidence of bile duct injuries during laparoscopic cholecystectomy, the total number is large due to the high frequency of the operation. subtotal cholecystectomy with its variants is a well-known bailout strategy in the surgical community. however, there is no agreement on when and how to perform these procedures. indeed, the majority of surgeons will adopt these solutions when there is a struggle to identify the critical view of safety, and this struggle increases the risk of injuries. we hypothesize that a primary-intent gallbladder lithotomy and disconnection (glad), performed when dissection of the gallbladder pedicle is anticipated to be difficult, is a safe and feasible strategic option. 
methods: of 347 patients electively admitted to aberdeen university hospital with gallstone disease between march 2017 and november 2017, 75 consecutive patients were operated on with the glad procedure based on intraoperative criteria. the primary outcome was the operative time; secondary outcomes were length of hospital stay. the criteria for performing this procedure will be explained and the outcomes will be listed. indocyanine green (icg) is a molecule that becomes fluorescent when excited by light of a specific wavelength in the near-infrared (nir) spectrum, allowing the visualization of anatomical structures in which it has accumulated. the aim of the study is the application of icg-enhanced fluorescence in laparoscopic cholecystectomy in order to identify the anatomy of the biliary tract and to reduce the risk of iatrogenic lesions and the conversion rate. the study involves laparoscopic cholecystectomy for cholecystitis and gallstones of the main biliary tract. the evening before surgery, a vial of icg (25 mg) diluted in 100 ml of saline solution was injected intravenously. during the procedure, after opening the calot triangle and switching the camera to nir mode, the anatomy of the biliary tract, and in particular of the main biliary tract, is visualized. the cystic duct and cystic artery are isolated and divided between clips, and the cholecystectomy is performed. from january 2018, 11 patients were enrolled: 9 cases of acute cholecystitis and 2 cases of gallstones of the main biliary tract undergoing preoperative ercp. in 8 cases of cholecystitis, the fluorescence cholangiography allowed visualization of the main biliary tract. in one case, an abnormal course of the cystic duct was identified. in the two cases of gallstones of the common bile duct, it favoured the visualization of the biliary tract anatomy. all cases were completed with the laparoscopic technique. there were no intra- or post-operative complications. 
icg-enhanced fluorescence is a safe, effective, cheap and rapid tool that can also be applied in small hospitals with no need for training. its use does not extend the time of surgery and allows the visualization of the anatomy of the biliary tract, especially in situations where it may be altered, reducing the conversion rate and potentially the risk of iatrogenic lesions of the main biliary tract. case presentation: the patient is a 22-year-old female with no significant past medical or surgical history who presented to the emergency department with a 2-day history of worsening sharp right upper quadrant pain with associated nausea, vomiting, and po intolerance. the pain had started a few months prior; however, it was self-limited with diet modifications. an ultrasound demonstrated a contracted gallbladder with a 15 mm gallbladder wall. white blood cell count was within normal limits and total bilirubin was slightly elevated at 1.8 mg/dl. no palpable mass was noted on physical exam. mr cholangiopancreatography was performed, which demonstrated a dilated gallbladder measuring 11.5 x 2.5 cm, a severely thickened gallbladder wall with a small intramural collection, and multiple gallstones. the patient proceeded with a laparoscopic cholecystectomy. intraoperatively, the omentum was densely adhered to the gallbladder, and needle decompression of the gallbladder was unsuccessful due to the wall thickness. the gallbladder was subsequently removed without any complications. the patient's remaining hospital course was uncomplicated. surgical pathology returned demonstrating acute on chronic cholecystitis. discussion: cholecystomegaly or 'giant gallbladder' disease is a rare pathology encountered in the surgical world. there have been few reported cases, most of which occurred in the elderly (> 65 years). kuznetsov et al. defined an enlarged gallbladder as having a volume of 200-300 cc and a giant gallbladder as exceeding 1500 cc (the average weight of the liver). 
the etiology remains unknown; however, certain factors exist that allow the gallbladder to reach this size without life-threatening sequelae. preoperative imaging, such as mr cholangiopancreatography, is important to differentiate biliary pathology and delineate anatomy. removal of the gallbladder is recommended to prevent the development of complications like cholangitis or bowel obstruction. the cause of cholecystomegaly still remains uncertain and warrants further research. the management and treatment remain similar to acute cholecystitis. aims: mini-laparoscopic cholecystectomy (mlc) is considered to be the best variant for minimizing surgical trauma and improving cosmesis in laparoscopic cholecystectomy. the most challenging technical step of mlc is clipping the cystic duct. it may be impossible or unsafe when the diameter of the cystic duct exceeds 3 mm, which is common in severe chronic cholecystitis or acute cholecystitis. there is very limited data in the literature about the use of mlc in acute cholecystitis. the aim of the study was to assess the first results of a new technique of mlc. methods: five women with a mean age of 39 years (32-48) underwent mlc. the 1st 10-mm trocar was inserted in the umbilicus and used for the camera and removal of the gallbladder. the 2nd 5-mm trocar was inserted in the subxiphoid area and used for the main working instruments, including a medium-large polymer clip applier (hem-o-lok type). the 3rd and 4th 3-mm trocars were placed in the right subcostal area and used for mini-graspers (karl storz). in the initial 4 procedures we used a conventional 5-mm clip applier with adapted medium-large titanium clips. to improve safety, we applied a 5-mm hem-o-lok type clip applier for the last patient, who had acute cholecystitis. in this case the diameter of the cystic duct was 3.5 mm. the clipping was performed successfully. a 3-mm drain was placed via the subcostal trocar incision. 
also, in this case we applied an original technique for removal of the gallbladder using a wound retraction instrument (karl storz). results: in all cases there were no intra- or postoperative complications. the mean duration of the procedures was 120 min (100-180 min). the postoperative stay was 2 days in every patient. the patients rated their pain on postoperative day 2 as 'almost absent' and the cosmetic results at 1 month postoperatively as 'excellent'. conclusions: 1. the new technique of mlc allowed the clipping of the cystic duct to be performed safely, which is essential in acute calculous cholecystitis. this descriptive study was conducted in the department of surgery, lumhs jamshoro. all patients aged ≥ 18 years, of either gender, who presented with a history of abdominal pain, nausea and vomiting and were diagnosed with cholelithiasis were included in the study, were planned for either mini-laparoscopic cholecystectomy or conventional laparoscopic cholecystectomy and were explored for outcome, while patients with empyema of the gallbladder, gangrene, mucocele of the gallbladder and adhesions were excluded from the study. results: during the one-year study period, a total of five hundred patients were diagnosed with cholelithiasis, with a mean age of 53.21 ± 6.83 (sd). of the five hundred, 127 (25.4%) underwent mini-laparoscopic cholecystectomy, of whom 15 (11.8%) were males and 112 (88.1%) were females. the outcomes measured were postoperative pain (vas) 1.61 ± 0.92, size of wound (umbilical 10 mm, epigastrium 5 mm and subcostal 2 mm), excellent cosmetic results, mean ± sd hospital stay (hrs) and operative time (minutes) of 12.86 ± 5.73 and 30.83 ± 10.85, early return to work 115 (90.5%), minor oozing 7 (5.5%), and port-site hernia 2 (1.5%). the remaining 373 (74.6%) underwent conventional laparoscopic cholecystectomy, of whom 38 (10.1%) were males and 335 (89.8%) were females. 
the outcomes measured were postoperative pain (vas) 3.53 ± 1.95, size of wound (umbilical 10 mm, epigastrium 10 mm and subcostal 5 mm), mean ± sd hospital stay (hrs) and operative time (minutes) of 49.95 ± 8.95 and 30.83 ± 7.72, early return to work 310 (83.1%), and port-site hernia 10 (2.6%), along with zero (0%) mortality. conclusion: it has been concluded that mini-laparoscopic cholecystectomy is superior to and as feasible as conventional laparoscopic cholecystectomy: it decreases early postoperative incisional pain, avoids late incisional discomfort, and is a safe procedure with nearly scarless wounds and superior cosmetic effect, especially for young female patients. objective: to determine the outcome of immediate versus late laparoscopic cholecystectomy in acute cholecystitis at a tertiary care hospital, hyderabad / jamshoro, sindh, pakistan. patients and methods: this descriptive case series study of one year (2016-2017) was conducted in the department of surgery, lumhs jamshoro. all patients aged ≥ 18 years, of either gender, who presented with a history of abdominal pain, nausea and vomiting and were diagnosed with acute cholecystitis (cholelithiasis) were included in the study, were planned for laparoscopic cholecystectomy, and were explored for outcome in immediate (within 48 h) and late (> 6 weeks) components. frequencies and percentages were calculated for categorical variables and mean ± sd for numerical variables. as this was a descriptive case series, no statistical test of significance was applied. results: during the one-year study period, a total of one hundred patients were diagnosed with acute cholecystitis, with a mean age of 55.72 ± 8.95 (sd). of the one hundred, 80% were females and 20% were males. 
the immediate outcomes reported were friable tissue 10%, pancreatitis 2%, slippage of the cystic duct ligature 5%, empyema of the gallbladder 5%, mucocele 15% and gangrenous gallbladder 2%, while the late outcomes reported were adhesions 70%, cholecystoduodenal fistula and mirizzi syndrome 1% and 6%, gallstone ileus 2%, perforated gallbladder 8% and choledocholithiasis 20%. the mean ± sd hospital stay (days) was 1.51 ± 0.32 in the immediate group, while in the late group it was 4.50 ± 0.55 during acute cholecystitis and 2.85 ± 1.21 after surgery (6 weeks later). conclusion: it has been concluded that early lc for acute cholecystitis with cholelithiasis is a safe, low-cost and feasible intervention, offering the additional benefit of a shorter hospital stay, and reduces the economic burden. surg endosc (2019) 33:s485-s781. general surgery, chang gung memorial hospital kaohsiung division, kaohsiung, taiwan. background: the treatment of common bile duct (cbd) stones is challenging when the hepatic hilum anatomy is unclear, especially after previous laparotomy. a minimally invasive approach to choledocholithotomy is feasible but can be difficult, with conversion due to unclear anatomy of the biliary tree. near-infrared (nir) cholangiography by systemic administration of indocyanine green (icg) can enhance the visualization of the biliary tree anatomy but is limited by the high intensity of the background fluorescence signal coming from the liver. nir fluorescence cholecysto-cholangiography by direct administration of icg into the biliary tree can enhance the biliary tree without the background noise signal. we performed nir cholangiography via different routes according to the patient's situation, systemic circulation or biliary tree injection, to assess the feasibility of each application. 
material and method: ten patients who suffered from obstructive jaundice due to cbd stones were included; 5 patients received percutaneous biliary tree drainage as first treatment and 2 patients received endoscopic biliary tree drainage. these patients received laparoscopic choledocholithotomy as definitive treatment after the acute infection phase. 5 patients received biliary tree icg injection via the drain tube and 5 by systemic injection. visualization and fluorescence patterns around the cbd were recorded. results: in our series, one patient had undergone previous gastrectomy and 4 patients had previous biliary tree surgery. background: laparoscopic cholecystectomy (lc) has become the gold standard for the treatment of gallstone disease. although multiple studies have confirmed its safety, lc at index admission is still not widely practiced in ireland. we present our experience in performing index cholecystectomy at cuh after the start of the acute care surgery program in may 2017. aim: the aim of this study is to determine the safety of laparoscopic cholecystectomy at index admission, complications, re-admissions, and los. methods: electronic records, theatre records and imaging reports were searched to enroll all patients who underwent lc for gallstone disease at index admission from may 2017 to october 2018. patient demographics, indication for surgery, postoperative complications, readmission and conversion rate were recorded. in addition, timings of mrcp and ercp, imaging findings, and los were also noted. results: a total of 117 patients underwent lc during the study period. median age was 47 years (18-79). the male to female ratio was 1:1.78. 75 (64%) patients had acute cholecystitis, 12 (10%) had acute biliary pancreatitis, 10 (8.5%) biliary colic and 9 (7.6%) had cholecystitis with signs of cbd obstruction. 7 (5.9%) patients had obstructive jaundice and one had adenomyomatosis. 50 patients (42%) had preoperative mrcp while 23 (19%) underwent pre-op ercp. 
all except 3 patients undergoing ercp had pre-procedure mrcp. 2 patients had pre-op cholangiograms. in terms of complications, 2 (1.7%) patients had bile leak and one (0.85%) had re-operation. one patient had a post-op hematoma which was drained percutaneously, and one patient had the procedure abandoned because of bradycardia upon induction of anesthesia. there was no common bile duct injury, no conversion to open, and no 30-day mortality was reported. the average length of hospital stay was 6 days (2 to 18 days). conclusions: laparoscopic cholecystectomy at index admission for cholecystitis, choledocholithiasis, and biliary pancreatitis has been a safe and feasible treatment option in our hospital. a safe practice can be ensured by adherence to a care pathway and a multidisciplinary, consultant-led service. an index cholecystectomy service can be provided safely across the country to prevent disease-related morbidity and multiple re-admissions in patients awaiting interval surgery. when to use the two-stage surgery to treat choledocholithiasis: the size. aims: the treatment of choledocholithiasis has been examined by various studies worldwide. the most commonly accepted minimally invasive treatments are the two-stage treatment using endoscopic retrograde cholangiopancreatography before or after laparoscopic cholecystectomy (ercp + lc), and the one-stage treatment with laparoscopic exploration of the common bile duct (lcbde). in fact, although several large studies have been published in recent years, the debate over the ideal treatment of choledocholithiasis is far from concluded. we aim to find the proper treatment option for patients with variable sizes of choledocholithiasis. methods: we retrospectively analyzed 136 patients who underwent treatment for choledocholithiasis in our institute between january 1, 2011 and july 31, 2016.
the patients who received ercp and lc in the same admission, and the patients who received lcbde, whether trans-cystic (ltcbde) or via choledochotomy (lcd), were included. the data were analyzed with the chi-square test and mann-whitney u test. results: the stone size in the ercp + lc group was significantly smaller than in the lcbde group. we further analyzed the ercp failure cases, and the group with stone size ≥ 9.5 mm had a significantly higher rate of procedure failure. the failure rate increased with stone size. conclusions: both lcbde and ercp + lc have similar safety and success rates, and the rate of residual stones was also similar in both groups. however, the failure rate for ercp is significantly increased when the stone size is larger than 9.5 mm in this study. aims: xanthogranulomatous cholecystitis (xc) is a rare entity that can cause doubt in the choice of surgical treatment because of the differential diagnosis with gallbladder carcinoma (gc). methods: a 70-year-old patient presented with acute abdominal pain in the right upper quadrant, nausea and low-grade fever with signs of peritonitis. he had elevated crp and leukocytosis with neutrophilia. abdominal ultrasound showed an acute xanthogranulomatous cholecystitis. a laparoscopic cholecystectomy was decided on but was converted to open surgery due to the difficulty of the dissection, with the fundus embedded in the hepatic bed and an intraoperative finding of a hilar adenopathic conglomerate. the postoperative period was torpid, with abdominal pain, jaundice, elevated bilirubin and cholestatic enzymes. postoperative abdominal tomography showed a lesion in segment iv of the liver suggestive of neoplasia. a metastatic adenopathic conglomerate at the hepatic hilum caused extrinsic biliary obstruction with subsequent hepatic failure, so an internal-external drain was placed in the bile duct. the patient died a week later. the pathological anatomy reported stage iv gc.
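the stone-size analysis above dichotomizes ercp outcomes at a 9.5 mm cutoff. as an illustrative sketch only (the records below are hypothetical and are not the study's data), comparing the failure rate on either side of a candidate cutoff can be done like this:

```python
# hypothetical (stone size in mm, ercp failed?) records -- illustrative only,
# not the data from the study above
records = [(4, False), (6, False), (7, False), (8, True), (9, False),
           (10, True), (11, False), (12, True), (14, True), (16, True)]

def failure_rate(recs):
    """fraction of records flagged as ercp failures."""
    return sum(failed for _, failed in recs) / len(recs) if recs else 0.0

def rates_around_cutoff(recs, cutoff):
    """failure rate below vs. at-or-above the stone-size cutoff."""
    below = [r for r in recs if r[0] < cutoff]
    at_or_above = [r for r in recs if r[0] >= cutoff]
    return failure_rate(below), failure_rate(at_or_above)

low, high = rates_around_cutoff(records, 9.5)
```

with this toy data the rate above the cutoff exceeds the rate below it, mirroring the pattern the abstract reports; the actual per-patient data and the formal tests (chi-square, mann-whitney u) are the study's own.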
results: xc is a rare, non-neoplastic, inflammatory and destructive entity of the gallbladder wall, considered a variant of chronic lithiasic cholecystitis. it may be due to extravasation of bile or ulceration of the mucosa, causing an inflammatory reaction and fibrosis, with xanthomatous cells. the prevalence is 1 to 2% in resected gallbladders. it is more frequent in females aged 60-70 years. its clinical presentation does not have specific characteristics that differ from cholelithiasis, except for weight loss. radiologically it is characterized by nodular thickening and increased attenuation of the vesicular wall with signs of cholecystitis, indistinguishable from gc. the xanthogranulomatous inflammatory foci infiltrate the hepatic parenchyma, having an invasive behavior; hence, it mimics a neoplastic disease. the diagnostic confusion and the risk of gc (up to 10%) make treatment contentious. conclusions: xc can simulate an advanced gc, which sometimes makes us wonder if we should perform a radical surgical treatment; when it presents in an emergency situation, our therapeutic decision can focus on solving the acute problem and be conditioned by the patient's general condition. single port transumbilical laparoscopic surgery (sptls) is a technique that has been around for about 18 years. although the enthusiasm for this type of surgery seems to have diminished in recent years, it is expected to rise considering the recent development of sophisticated devices for its execution. we report retrospectively our 8-year experience with 253 procedures performed by the sptls technique in a private practice setting in mexico city. procedures include cholecystectomy (107), appendectomy (69), inguinal hernia tapp and tep (26), hiatus and esophageal (15), sleeve gastrectomy (2), colon (6), gyn (28). 4 different access platforms were employed.
we explain our selection criteria for the application of the technique and describe the evolution of the instruments employed during the past 8 years, from conventional laparoscopic to curved and bendable instruments, and from regular scopes to extra-long telescopes with different angles. or time, bleeding, conversion rate, the need to employ an extra trocar, complications, pathology reports, scheduled or urgent surgery and length of hospital stay were recorded from the beginning; patient variables such as bmi, asa status, tep risk, satisfaction with the procedure and others were also recorded. we describe the evolution of our technique and our learning curve with cholecystectomies. we compare our group of sptls transvaginal assisted laparoscopic hysterectomy (tvalh) patients vs. tvalh multiport patients. we explain the feasibility and efficiency of the procedure in our hands compared to other series. background: in japan, the severity of acute cholecystitis (ac) is assessed by the severity classification of the tokyo guidelines 2018 (tg18). the value of c-reactive protein (crp) is not included in the severity classification criteria. the first-line treatment, according to tg18, for mild (grade i) to moderate (grade ii) ac is laparoscopic cholecystectomy, but laparoscopic surgery may not be feasible in some cases due to adhesion or local inflammation of the gallbladder. aim: the aim of this study is to assess the effect of crp on the open conversion rate in laparoscopic cholecystectomy for acute cholecystitis. method: we conducted a retrospective study. 41 patients who were diagnosed with ac and treated with emergent laparoscopic cholecystectomy between june 2017 and may 2018 in our institution are included. we set the cutoff value for crp at 20 mg/dl and compared the open conversion rate.
secondary endpoints are amount of bleeding, operation time, post-operative course (peak in body temperature and inflammatory markers) and the frequency of complications according to the clavien-dindo classification. results: 10 out of 41 patients had a crp value greater than or equal to 20 mg/dl. the median crp values for the crp < 20 group and the crp ≥ 20 group were 4.4 and 23.1, respectively. the open conversion rate of the crp ≥ 20 group was significantly higher than that of the crp < 20 group (3/10 vs. 1/31, p = 0.01). the most common reason for these conversions was local adhesion (3/4). there were no differences in the amount of bleeding, operation time, post-operative course, or frequency of complications of clavien-dindo grade ii or higher. background: reports about the clinical value of fluorescent cholangiography using indocyanine green (icg) during single-incision laparoscopic cholecystectomy (silc) are increasing. we report the clinical value and pitfalls of fluorescent cholangiography during silc for patients with the infraportal type of the right posterior bile duct. methods: our silc procedure utilized the sils-port with an additional 5-mm forceps through the umbilical incision. before silc, 1 ml of icg (2.5 mg) was administered by intravenous injection. for fluorescent cholangiography, an icg fluorescent laparoscope system was used. results: we performed fluorescent cholangiography during silc in 13 patients with the infraportal type of the right posterior bile duct. all procedures were completed successfully. the interval from the injection of icg to the first obtained fluorescent cholangiography before the dissection of calot's triangle ranged from 40 to 60 min. detectability of the infraportal type of the right posterior bile duct before dissection of calot's triangle was 23.1% (n = 3) and during dissection of calot's triangle was 53.8% (n = 7).
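the conversion-rate comparison above (3/10 vs. 1/31, p = 0.01) is consistent with an uncorrected pearson chi-square test; a minimal stdlib sketch using only the counts given in the abstract (note that fisher's exact test, often preferred when expected cell counts are this small, gives a somewhat larger p of about 0.04):

```python
from math import comb, erfc, sqrt

def fisher_two_sided(a, b, c, d):
    """two-sided fisher's exact test for the 2x2 table [[a, b], [c, d]]."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def pmf(k):
        # hypergeometric probability of k events in the first row
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)
    p_obs = pmf(a)
    return sum(pmf(k) for k in range(max(0, c1 - r2), min(c1, r1) + 1)
               if pmf(k) <= p_obs + 1e-12)

def chi2_two_by_two(a, b, c, d):
    """pearson chi-square (no continuity correction) for a 2x2 table, 1 df."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return stat, erfc(sqrt(stat / 2))  # survival function of chi-square, 1 df

# conversions / non-conversions: crp >= 20 group vs. crp < 20 group
stat, p_chi2 = chi2_two_by_two(3, 7, 1, 30)
p_fisher = fisher_two_sided(3, 7, 1, 30)
```

the abstract does not state which test produced p = 0.01; this check simply shows that the uncorrected chi-square reproduces it, while the exact test remains significant at the 5% level but not at 1%.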
the infraportal type of the right posterior bile duct could be identified under fluorescent cholangiography only when it joined the common hepatic duct. conclusions: utilization of fluorescent cholangiography can make silc safe even for patients with the infraportal type of the right posterior bile duct. its benefit is emphasized when the infraportal type of the right posterior bile duct joins the common hepatic duct. aims: due to the development of laparoscopic surgery and the progress made in the surgical treatment of hydrocephalus, surgeons may come across patients with a ventriculoperitoneal (vp) shunt as candidates for laparoscopic procedures. accordingly, we report a case of an unusual complication of laparoscopic surgery that can appear in these patients. methods: we present the case of a 66-year-old man with a medical history of normotensive hydrocephalus with a vp shunt, who came to the emergency room complaining of abdominal pain and fever for two days. blood tests showed an elevation of infection parameters and inflammatory markers, and the ultrasound study revealed an emphysematous cholecystitis. therefore, we decided to carry out an emergency laparoscopic cholecystectomy. the patient did not present any adverse event during the surgery or the immediate postoperative period, being discharged on the third postoperative day and evaluated in the outpatient clinic one month after surgery with no complications. two months after surgery, the patient returned to the emergency room presenting altered consciousness and fever. results: during the workup of the patient, an abdominal ct was performed, showing a complete section of the vp shunt in the subcutaneous space of the upper abdominal wall and intraperitoneal migration of the remaining catheter. the patient was transferred to neurosurgery for emergent replacement of the ventriculoperitoneal shunt.
after surgery and intravenous antibiotic treatment, the patient evolved favourably and was discharged a few days later. conclusions: the rate of serious complications associated with a laparoscopic approach is low overall, and up to 50% of them occur during abdominal access for camera or port placement and may not be recognized until the postoperative period. vp shunts should not be a contraindication for laparoscopic surgery. however, the laparoscopic approach must be carried out with good anesthetic and monitoring facilities and with several prior considerations, such as verifying the proper functioning of the vp shunt, identifying the path of the catheter within the abdominal wall to avoid inadvertent damage to the catheter during trocar placement, and ensuring that the intraperitoneal portion of the catheter is not twisted or obstructed prior to decompression of the abdomen. introduction: since the advantages of robotic surgery are being increasingly emphasized, robotic cholecystectomy (rc) cases are increasing. the ajou group had introduced a technique that places the trocars transversally on the bikini line, which is beneficial for cosmesis and pain. however, rc with a low incision port has several limitations. therefore, we changed the port placement, which may be one of the safe techniques for rc. method: this study retrospectively reviewed data for patients who received rc with the port changing method (rcpc, n = 33) and rc with a low incision port (rcli, n = 81) from february 2016 to february 2017, and surgical variables were analyzed. results: patients in both groups had similar demographic features and indications for surgery. the rcpc group required no conversions to conventional robotic surgery and no additional operations, whereas the rcli group had one incisional hernia (1.2%) and two bowel perforation (2.4%) cases. length of stay (4.29 ± 0.72 vs. 5.13 ± 0.93 days, respectively; p = 0.123) did not significantly differ between the rcpc and rcli groups.
however, the rcpc group had a shorter operative time (71.30 ± 48.88 vs. 74.70 ± 30.16 min; p = 0.772) than the rcli group, although the parameters mentioned above were not statistically significant. conclusion: robotic cholecystectomy with a bikini line incision has some limitations even though it has cosmetic benefits, whereas robotic surgery with the changing port method is a safe and feasible procedure for performing robotic cholecystectomy; it also offers a cosmetic benefit and avoids complications. mini surgery, odessa medical university, odessa, ukraine. the aim of the study was to optimize the diagnostic and therapeutic tactics for iatrogenic injuries of the extrahepatic bile ducts. methods: 15 patients were examined. typical manifestations were jaundice, cholangitis, biliary peritonitis, external biliary fistula and subhepatic abscess; cholecystectomy was the main cause of damage. a visual, manual and x-ray examination of the hepato-choledochus and cholangioscopy were performed. ultrasound, endoscopic retrograde cholangiopancreatography, fistulocholangiography or percutaneous transhepatic cholangiography play a leading role in diagnosis. results: high damage to the bile duct was detected in 53.3% of patients, low in 46.7%. percutaneous transhepatic drainage under ultrasound control was performed in 66.7% of patients. emergency laparotomy, sanation of the abdominal cavity and external drainage of the bile ducts were performed for bile peritonitis. recovery operations were performed in 60.0% of patients. reconstructive interventions were performed in 40.0% of patients 6-8 weeks after the first stage. the recovery operations were successful in 66.7% of patients. 33.3% of patients had complications in the form of biloma. a scar stricture formed in 33.3% of patients after 4-6 months. 1 patient underwent recanalization of the stricture zone with a dilatation balloon through interchangeable transhepatic drainage.
balloon dilatation was performed retrogradely through the major duodenal papilla in 2 patients. no deaths were observed in the postoperative period. conclusions: the surgical team should be strengthened by an experienced surgeon when iatrogenic damage to the bile ducts is diagnosed intraoperatively. in the absence of an experienced specialist, the operation should be completed by external drainage of the bile duct and the abdominal cavity. recovery operations are indicated only for lateral injury of the ducts. the patient must be sent to a specialized institution for radical surgical treatment after stabilization of his general condition. aim of the study: sub-hepatic bile collections, biloma and hematoma are rare complications, and we present our experience in treating these complications. material and methods: of 750 laparoscopic cholecystectomies performed in our clinic, three patients (two women and one man) who had undergone laparoscopic cholecystectomy came back two weeks after they were released from the hospital because of epigastric discomfort, fever and nausea. results: clinical examination after rehospitalization showed tenderness in the epigastrium and right subcostal region. in all patients high levels of leukocytosis and crp were measured. an ultrasound examination of the abdomen revealed a large hypoechoic collection in the sub-hepatic space; after an abdominal ct scan was performed, the density of the collection did not indicate the presence of blood in two patients. percutaneous drainage of the collection in these two patients was performed under us guidance and an 8-10 fr catheter was inserted in the sub-hepatic region. in the first patient 800 cc of bile-stained liquid, and in the second patient 650 cc of biliary liquid, was drained. in a third patient, 16 h after surgery, signs of significant hypotension and limited tenderness at the right subcostal region occurred. a complete blood count (cbc) showed a decrease in the level of haemoglobin to 10.4 g%.
ultrasound examination revealed a fluid collection in the sub-hepatic space, which was also confirmed by computed tomography. laparotomy was performed and the large sub-hepatic hematoma was evacuated. after that an 18 fr abdominal drain was inserted into the sub-hepatic space. the postoperative course of all three patients was uncomplicated. conclusion: sub-hepatic biloma and hematoma are rare complications of laparoscopic cholecystectomy, while early diagnosis followed by percutaneous drainage or open laparotomy is the only way to resolve these complications. (3), hemoperitoneum 3.4% (1). the average number of days of hospitalization was 7.6 days. there was no mortality at 30 days. conclusion: in the emergency setting the rendezvous technique has an adequate success rate of cannulation and clearance of the bile duct, an acceptable surgical time and few complications, these being more frequent in those patients with inflammation of the gallbladder, and no associated mortality at 30 days. there is a need for controlled randomized studies with a greater number of recruited patients and follow-up to determine the usefulness of this technique. intraoperative cholangiography could serve as a fundamental solution to avoid bile duct injury during laparoscopic cholecystectomy. however, it is difficult to identify the cystic duct, into which the contrast catheter should be inserted, in cases with high degrees of adhesion around calot's triangle. in these cases, it is not possible to conduct cholangiography from the cystic duct. for these types of cases, intraoperative cholecystography may serve as an option. however, since the gallbladder is a bag-like organ that expands when liquid enters, directly injecting contrast dye into the gallbladder would make the gallbladder itself expand, which makes it impossible to maintain enough pressure for the contrast dye to flow into the cystic duct, extrahepatic bile duct, and intrahepatic bile duct.
also, since it is difficult to control leakage of the contrast dye from the catheter insertion site, it is not possible in many cases to obtain enough images to sufficiently understand the anatomical characteristics of the bile duct. therefore, cholecystography is not generally recognized as a method to be used during surgery. in our facility, we insert the contrast catheter through the gallbladder after stretching the gallbladder neck as much as possible, hold the gallbladder neck with a removable intestinal clamp, and then apply the contrast dye to the bile duct. through this method, it is possible to deliver enough contrast dye into the cystic duct, extrahepatic bile duct, and intrahepatic bile duct to understand the anatomical characteristics of the bile duct, allowing us to obtain appropriate images of the biliary tract. because this method uses highly versatile equipment, we believe that it is inexpensive and convenient. during this presentation, we will also present cases illustrating the method of gallbladder contrast imaging that we utilize in our facility during laparoscopic cholecystectomy. introduction: retrieval of a thick-walled gallbladder during a difficult laparoscopic cholecystectomy (lc) for acute or chronic calculous cholecystitis can be exasperating. it increases operative time and often necessitates enlargement of the 10 mm port to deliver the specimen. the 'in-situ cholecystotomy', which we wish to call the 'delhi maneuver', is very helpful in improving the ergonomics of specimen retrieval, saves time and conserves cosmesis. patients & methods: one hundred and ten patients with acute or chronic calculous cholecystitis were placed randomly in 2 groups. a disposable transparent plastic bag was used in all cases to retrieve the gallbladder specimen through the 10-12 mm port using a rampley's sponge-holding forceps. retrieval was done using the conventional technique in 60 patients (group b).
the delhi maneuver was used in the remaining 50 patients (group a). it involved cutting the gallbladder inside the plastic bag in a certain fashion, delivering the gallstones into the bag, and removal of the gallbladder before the stones. the retrieval time, number of insertions of the sponge holder, any rupture of the plastic bag and the number of cases needing port enlargement were noted. results: the average time taken by the delhi maneuver (group a) was 9 min as compared to 14 min by the conventional method (group b). the number of insertions of the sponge holder ranged from 3-11 in group a (mean 5) and 5-18 in group b (mean 13). four patients needed port enlargement in group a (8%) while 17 patients needed enlargement in group b (28.3%). there were 2 incidences of bag rupture in group a (4%) and 3 in group b (5%). the delhi maneuver improved the ease and speed of specimen extraction at laparoscopic cholecystectomy for thick-walled gallbladders. it also decreased the need for port enlargement for specimen retrieval. bile duct injuries are a very complex disease to confront; the initial management is to classify the injury and identify its mechanism. for the optimal recovery of the patient it is important to have a multidisciplinary approach including internal medicine, surgery, endoscopy and interventional radiology specialists. laparoscopic cholecystectomy is responsible for 80-85% of them. this is a retrospective study on the incidence, classification and management of bile duct injuries in a private sector hospital in monterrey, nl, mexico. in this study, 17 bile duct injuries were identified in 10 years of experience in a single center. they were categorized using the strasberg classification. variables were evaluated such as type of injury, mechanism of injury, hospital stay, whether the surgery was scheduled or an emergency, the moment at which the surgeon evidenced the injury, and the way in which the surgeon became aware of the injury.
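the port-enlargement comparison reported above (4/50 in group a vs. 17/60 in group b) can be checked with a standard two-proportion z-test; this is our choice of test for illustration, as the abstract does not state which test, if any, was applied. a minimal stdlib sketch:

```python
from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    """two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)           # pooled event rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))         # two-sided p-value

# port enlargement: group a (delhi maneuver) vs. group b (conventional)
z, p = two_proportion_z(4, 50, 17, 60)
```

on these counts the difference comes out significant at the 1% level, which supports the abstract's conclusion that the maneuver reduced the need for port enlargement.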
the type of management given to the lesion was also studied, and the days of in-hospital stay and the number of reinterventions or procedures performed were compared. the average age of the patients was 53 years and 10 patients were female. although there were lesions of all kinds in this work, there was a greater incidence of strasberg type a lesions, which represented 41% of the lesions. the most common diagnosis presented was cholecystolithiasis. in 7 surgeries the identification and repair of the bile duct occurred in the same intervention. aims: bile leak is a rare but recognised complication after laparoscopic cholecystectomy. it usually occurs after a difficult procedure complicated by adhesions or unusual anatomy, or if the surgeon is inexperienced or unfamiliar with the anatomy. this video aims to demonstrate the laparoscopic diagnosis and treatment of this complication, particularly for surgical trainees. methods: we report a case of significant bile leak occurring soon after a straightforward laparoscopic cholecystectomy due to a very short cystic duct (cd). the procedure was carried out uneventfully but the cd was clipped flush with the bile duct. the patient was discharged on the day of surgery feeling well but was readmitted with abdominal pain 48 h later. results: after readmission the patient underwent a ct demonstrating only a small amount of fluid suggestive of a small collection. she was treated conservatively but suddenly deteriorated, and a repeat ct confirmed significant intraperitoneal fluid. a diagnostic laparoscopy was carried out urgently, confirming a cd stump bile leak where the clips had sloughed off causing the leak. two litres of bile were aspirated with copious irrigation and a latex t-tube was inserted into the cbd. the patient made a full and rapid recovery. conclusions: this is a rare complication and learning opportunities for trainees are therefore infrequent.
this video demonstrates a successful laparoscopic approach to the management of postoperative bile leak, showing the t-tube insertion technique and highlighting the need for careful cd closure techniques during laparoscopic cholecystectomy when the duct is very short. about 10-15% of bile duct stones cannot be extracted using conventional endoscopic techniques (balloon, sphincterotomy). there is a lower success rate in elderly patients; among the biggest challenges are intrahepatic stones, large stone size, etc. aims: to present the case of a recurrent intrahepatic lithiasis and its management using spyglass choledochoscopy; to present other difficult cases of bile duct stones solved using this choledochoscope vs. the traditional one, with their main outcomes, complications and benefits. we present the case of an 85-year-old male who presented with cholangitis caused by an intrahepatic stone that required multiple sessions of endoscopic retrograde cholangiopancreatography with spyglass for clearance. one year later, he presented again with cholangitis, which required another session of spyglass lithotripsy and cholecystectomy. conclusions: besides ercp, there are different approaches to treat difficult bile duct stones, such as transhepatic percutaneous drainage, surgical techniques, or other endoscopic techniques (double-balloon enteroscopy). ercp and sphincterotomy are the first step of endoscopic treatment, with more than a 90% success rate and low mortality and morbidity rates; other steps include lithotripsy techniques or the use of a biliary stent as a bridge before definitive treatment. spyglass is a visualization and intervention system used when common ercp has been unsuccessful, and it is first line for better and direct imaging of the biliary ducts, with a 12° range of motion and multiple advantages like the concomitant use of lithotripsy devices.
aims: the number of elderly people has increased; because of the strong association between age and gallstone disease, both the prevalence and incidence of this disease are increasing. this presentation aims to review our current management options for octogenarian patients with acute cholecystitis. methods: we retrospectively analyzed 173 octogenarian patients who were admitted to our hospital with the diagnosis of acute cholecystitis between january 2013 and october 2018. the patients were initially allocated to four different treatment groups as follows: immediate surgery, delayed surgery, medical treatment and cholecystostomy. differences in the outcomes between the treatment groups were evaluated. results: there were 67 males (38.8%) and 106 females (61.2%) with a mean age of 85.7 years (range 80-90 years). the patients had different co-morbid diseases, especially hypertension (65, 37.5%), cardiovascular disease (43, 24.8%) and diabetes mellitus ( methods: a retrospective observational study analyzing patients older than 75 years who underwent urgent surgery for ac and fulfilled an indication for surgery according to the tokyo guidelines 2018. the type of cholecystitis, stay and postoperative complications, the type of intervention, the conversion rate, and the need for reoperation and re-admission in patients older than 75 years were analyzed and compared with those of patients younger than 75 years operated on for cholecystitis. outcomes: a total of 289 patients were registered, 55 older than 75 years (19%) and 234 younger (81%). in 128 cases cholecystitis was complicated (44.3%): 34 cases older than 75 years (26.56%) and 94 cases younger than 75 years (73.44%). the approach was laparoscopic in 89% of the cases older than 75 years, with a conversion rate of 10.2%, with no statistically significant difference from those younger than 75 years (91% laparoscopies with 4.2% conversions).
18% of patients older than 75 years had some type of postoperative complication, with no statistically significant difference from patients younger than 75 years (17%); the most frequent complication was intra-abdominal abscess (3.66% of patients > 75 years and 4.27% of those < 75 years), not statistically significant at the 95% ci. no patient older than 75 years required re-admission after discharge, compared to 8 patients younger than 75 years who were re-admitted, which was not statistically significant; and no patient older than 75 years required reintervention, while it was necessary to reoperate on 3 patients younger than 75 years (1%), which was not statistically significant. mortality was very low, with 1 case in those older than 75 years (1.8%) and 1 case in the younger group (0.4%), with no statistically significant difference. the postoperative stay had a median of 3 days in patients younger than 75 years and a median of 4 days in those older than 75 years, with no statistically significant difference at the 95% ci. conclusions: laparoscopic cholecystectomy is safe and effective in the treatment of elderly patients with ac, with no differences from younger patients. introduction: significant bile leak is an uncommon but serious complication of laparoscopic cholecystectomy. our study aims to evaluate the efficacy of relaparoscopy in treating symptomatic bile leak and biloma formation. material and methods: 125 patients presenting with postoperative bile leak after different operations on the extrahepatic biliary tree from january 1993 to december 2018 were reviewed retrospectively (in total, 23,590 laparoscopic surgical interventions were performed in the period under study). the sites of the bile leaks were the cystic duct stump in thirty-seven patients, the bile ducts of luschka in fifty-two, the liver bed in 16 cases after hepatectomy, a small injury of the cbd in 13, and tubular stenosis of the common bile duct in seven patients.
results: three main approaches to mini-invasive treatment of bile leakage were used: (1) percutaneous puncture with or without a drain under ct or ultrasound guidance in 45 patients; (2) endoscopic management in 50 patients (35 patients (70.0%) were managed with ercp alone and fifteen (30.0%) were treated with a percutaneous intervention followed by ercp; endobiliary stent placement was performed after es in 23 patients and without es in twenty-seven patients); (3) relaparoscopy, performed in 30 patients, in cases of biliary peritonitis. conclusions: relaparoscopy was the ultimate method of treating postoperative complications of laparoscopic surgery in 94.3% of patients. in general, this method, like laparoscopic intervention, is highly effective in the diagnosis and correction of postoperative complications, with minimal surgical trauma for the patient, great therapeutic effect and subsequent rapid social rehabilitation of patients. introduction: laparoscopic operations have already become routine, even pancreatoduodenectomy for periampullary cancer. for unresectable cases, endoscopic biliary stenting or hepaticojejunostomy is usually used. these methods are quite expensive and may be accompanied by complications. materials and methods: laparoscopic cholecystogastroanastomosis was performed in 72 patients with unresectable periampullary cancer. there were 34 females and 38 men, and the average age was 72.6. the indication for surgery in all patients was unresectable periampullary cancer and biliary hypertension with preserved patency of the cystic duct. the level of bilirubinemia ranged from 89 to 520 µmol/l (the average level was 179.8 µmol/l). we used a 3-port technique.
the optical trocar was placed in the right iliac region, one 10 mm trocar above the navel and one 5 mm trocar in the right hypochondrium. after puncture of the gallbladder and aspiration of bile, we cut the apex of the gallbladder and the gastric antrum up to 2.5 cm and performed cholecystogastroanastomosis with a barbed v-loc suture. results: there was no conversion to open surgery. the average operation time was 37 min. average postoperative stay was 4 days, with a median follow-up of 12 months. post-operatively, there was no major morbidity and no mortality. we had 2 cases of bile leakage through the drain for up to 3-5 days, which stopped spontaneously. all patients showed a decrease in the level of bilirubinemia. 12 patients later underwent radical operation (pancreatoduodenectomy), and they did not show phenomena such as cholangitis, pancreatitis, or inflammation of the hepatoduodenal ligament elements, which we often observe after endoscopic biliary stenting. conclusions: laparoscopic cholecystogastroanastomosis is safe, effective and feasible for patients with periampullary cancer and obstructive jaundice.

aims: surgeons with the expertise and resources to perform laparoscopic common bile duct exploration (lcbde) often prefer the 'one stage approach' over endoscopic retrograde cholangio-pancreatography (ercp) for the management of common bile duct (cbd) stones. this case series aims to evaluate the effectiveness of lcbde in a single benign upper gastrointestinal (gi) unit. methods: all patients with pre-operatively suspected or confirmed cbd stones who underwent lcbde between january 2015 and october 2018 were included. lcbde was performed on the basis of pre-operative suspicion of cbd stones confirmed by intra-operative imaging. results: 187 patients with confirmed choledocholithiasis underwent lcbde during this period. the indications for lcbde were deranged liver function tests, dilated cbd, or confirmed stones on preoperative imaging. median age was 63 (range 19-91), and 67% were female.
36% of patients had confirmed cbd stones pre-operatively. 70% of cases were performed as emergencies, and the conversion rate to open surgery was 6.5%. choledochotomy was performed in 60% of cases; in 17% of these a t-tube was left in situ. the transcystic approach was used in the remaining 40%. despite positive intraoperative imaging, no stones were found on cbd exploration in 11 cases (6%). in 3 patients stones could not be cleared with lcbde. overall morbidity was 20%; 11% of patients had gallstone-related complications. overall mortality was 1% (due to bile leak). 19/187 patients required re-intervention with re-look laparoscopy (n = 6) or ercp (n = 13). 3 patients re-presented within 3 months with cbd stones. overall median length of stay was 5 days. conclusions: our case series demonstrates that lcbde is an effective and safe treatment for choledocholithiasis in both the elective and emergency settings. complication rates are comparable with those of therapeutic ercp (10% specific complications) followed by laparoscopic cholecystectomy (10% 30-day morbidity).

the variability in the anatomic location of subvesical bile ducts puts them in danger during hepato-biliary operations. their prevalence varies between 3% and 10%. the origin and drainage of these ducts are limited mainly to the right lobe of the liver, but great variation can be seen. some authors regard them as small bile ducts that drain directly into the body of the gallbladder; others consider them networks of minuscule bile ducts between the liver capsule and the gallbladder. recent studies suggest that clinically relevant bile leaks complicate approximately 0.4-1.2% of cholecystectomies. injury to a subvesical duct is one of the most common causes of cholecystectomy-associated bile leak and occurs as often as major bile duct injuries and leaks from the cystic duct stump. indeed, recent studies suggest that about 27% of clinically relevant bile leaks are caused by inadvertent injury to a subvesical bile duct.
there are four types of subvesical bile ducts: (1) superficial variations of segmental and sectorial bile ducts, (2) superficial or intercommunicating accessory bile ducts, (3) hepaticocholecystic ducts, and (4) aberrant bile ducts. we present the case of a 73-year-old patient who developed choleperitoneum after a routine day-case cholecystectomy due to the inadvertent injury of a hepaticocholecystic duct. a thorough comprehension of ductal anatomy is essential in preventing and managing operative injury to the subvesical ducts, although such injury is sometimes unavoidable.

nowadays, the diagnosis of liver cancer is primarily radiological, as recommended by the principal international societies. in doubtful cases, or according to the clinician's needs, diagnostic evaluation can be completed with a liver biopsy. the goal is to perform the examination, or examinations, that guarantee the highest sensitivity and specificity while being as little invasive as possible. nevertheless, even using the best radiological tools, the diagnosis is not certain, owing both to device limitations and to radiologist experience. recently, various diagnostic algorithms have been proposed, relating contrast enhancement characteristics, different radiological techniques, blood examinations and cross evaluations by different radiologists. one of the most recently proposed algorithms is the liver imaging reporting and data system (li-rads), which evaluates ct and mri imaging to classify hepatic lesions into different diagnostic categories, in order to achieve a better and more precise diagnosis of hcc or of other benign or malignant liver lesions. through a retrospective study, we evaluated and compared preoperative imaging and post-operative histological reports. results reveal that routine li-rads use increases hcc diagnosis up to 95%.
background: we previously developed a modified difficulty scoring system (dss-ihd) of laparoscopic liver resection (llr) for patients with intrahepatic duct (ihd) stones. we validated dss-ihd in patients who underwent llr for hepatolithiasis. methods: dss-ihd was based on the extent of liver resection (2 to 4), stone location (1 to 5), atrophy of liver parenchyma (0 to 1), ductal stricture < 1 cm from the bifurcation (0 to 1), and combined choledochoscopic examination for remnant ihd (0 to 1). results: the dss-ihd ranged from 3 to 12 and was divided into 3-level groups: low (score 3-5; n = 26), intermediate, and high.

objective: improving the surgical treatment of patients with cholangiogenic abscesses of the liver through the application of minimally invasive technologies. material and method: this study presents the results of treatment of 49 patients with biliary liver abscesses. surgical interventions for hepatic abscesses were performed simultaneously with the elimination of the primary pathological process of the biliary system that caused the cholangitis, or shortly thereafter (up to 3 days) after biliary drainage. among the 49 patients with biliary liver abscesses treated with minimally invasive methods, 29 had abscesses of the right hepatic lobe, 15 of the left hepatic lobe, and 5 of both the right and left hepatic lobes. single abscesses were detected in 39 patients, and two or more abscesses in 10. regarding size, liver abscesses larger than 3 cm were detected in 43 patients and larger than 5 cm in 6 patients. drainage of the biliary tract was carried out endoscopically transpapillary or (if the endoscopic approach was unsuccessful) by the transcutaneous transhepatic approach. results: drainage under ultrasound guidance was performed in 21 patients with solitary abscesses and in 7 patients with two or more cholangiogenic abscesses of the liver. laparoscopic interventions were performed in 21 patients.
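the dss-ihd described above is an additive score over five components. a minimal sketch of such a score, for illustration only: the component names are paraphrased from the abstract, the example values are hypothetical, and the intermediate/high cut-off is an assumption since the grouping thresholds are truncated in the source (only low = 3-5 is given).

```python
# illustrative sketch of an additive difficulty score like dss-ihd;
# point ranges follow the abstract, example values are hypothetical.

# (component, allowed point range) as described in the abstract
COMPONENTS = {
    "extent_of_resection": range(2, 5),    # 2 to 4 points
    "stone_location": range(1, 6),         # 1 to 5 points
    "parenchymal_atrophy": range(0, 2),    # 0 to 1 point
    "ductal_stricture": range(0, 2),       # 0 to 1 point
    "choledochoscopic_exam": range(0, 2),  # 0 to 1 point
}

def dss_ihd(points: dict) -> int:
    """Sum the five component points after validating each range."""
    for name, allowed in COMPONENTS.items():
        if points[name] not in allowed:
            raise ValueError(f"{name}={points[name]} outside {allowed}")
    return sum(points.values())

def difficulty_group(score: int) -> str:
    """Map a total score (3-12) to the three-level grouping.
    Low is score 3-5 per the abstract; the 8/9 cut-off is assumed."""
    if score <= 5:
        return "low"
    if score <= 8:  # assumed cut-off, not given in the truncated text
        return "intermediate"
    return "high"
```

with all minimum component values the total is 3 (low group); with all maximum values it is 12 (high group), matching the 3-12 range reported in the results.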
among the patients operated on using minimally invasive technologies, complications occurred in 7 (14.3%). 1 patient died from biliary sepsis (2.0%). conclusion: percutaneous drainage of liver abscesses under ultrasound control is appropriate not only for single abscesses but also for multiple ones, and has many advantages over other interventions. the possibility of simultaneous drainage of the liver abscess and the bile duct was demonstrated. percutaneous drainage of the liver abscess, drainage of the biliary tract, and laparoscopic surgical intervention are complementary aspects of the treatment of liver abscesses of biliary origin. after laparoscopy, residual calculi can be removed endoscopically in more favorable conditions, once the patient's condition has stabilized and the infection-associated disorders have been eliminated. in the case of abscesses localized in the marginal segments of the liver, laparoscopic atypical resection of the liver with the abscess is most desirable.

general surgery, rambam medical center, haifa, israel. background: recently, robotic surgery has emerged as one of the most promising surgical advances. despite its worldwide acceptance in many different surgical specialties, the use of robotic assistance in the field of hepatobiliary (hbp) surgery remains relatively unexplored. our study presents a single institution's initial experience of robotic assisted surgery for the treatment of benign hepatobiliary pathologies. methods: a retrospective analysis of a prospectively maintained database of clinical outcomes was performed for 26 consecutive patients who underwent robotic assisted surgery for benign hbp disease at rambam medical center during 2013-2015. results: there were 26 robotic assisted surgical procedures performed for benign hbp pathologies during the study period.
there were 3 anatomical robotic liver resections for symptomatic hemangiomas, 9 cases of giant liver cyst, 5 robotic assisted surgeries for type i choledochal cyst, 2 cases of benign (iatrogenic) common bile duct (cbd) stricture, 3 cases of robotic cbd exploration due to large intracholedochal stones, and 6 cases of cholecystectomy for cholelithiasis. the median postoperative hospital stay for all procedures was 3.5 days (range 1-6 days). general (minor) morbidity was 2%. there was no mortality in our series. conclusion: robotic surgery is feasible and can be safely performed in patients with different benign hbp pathologies. further evaluation with clinical trials is required to validate its real benefits.

most liver cysts are asymptomatic and tend to have a benign clinical course. however, symptomatic or complicated liver cysts sometimes require surgical intervention. needle aspiration is safe and may be the least invasive procedure; however, it is associated with a high failure rate and rapid recurrence. a surgical approach is crucial and provides definitive treatment for such cysts. thirteen cases were selected from shonan kamakura general hospital between january 2015 and december 2018. mean age and body mass index (bmi) were 67.8 and 20.8, respectively. all patients had some complaint, such as upper abdominal pain, dyspnea, or fever. two cases were clinically diagnosed as infectious cysts, with serum crp elevated before surgery. additional cholecystectomy was planned for one case of chronic cholecystitis with gallbladder stones. all cases were offered reduced port surgery (rps); 4 cases underwent rps with a trans-vaginal approach (hybrid notes) and 4 cases single port surgery. cyst unroofing was performed in all cases. mean operation time and blood loss were 122.3 min and 62.7 ml, respectively.
no surgical complications occurred in any case; however, one infectious cyst case required additional drainage for infection control after surgery. although the difference was not statistically significant, less blood loss and a shorter hospital stay were seen in non-infectious cases compared to laparotomy cases. mean hospital stay after surgery for all cases, non-infectious cases, and infectious cases was 5.5, 2.5, and 22.5 days, respectively. no recurrence of any symptom was seen in any case during the observation period (10-1392 days). laparoscopic unroofing is the definitive treatment for complicated or symptomatic liver cysts. however, for infectious cysts, infection control such as intensive drainage and/or administration of antibiotics before surgery may be needed to avoid additional treatment leading to a longer hospital stay. laparoscopic unroofing of liver cysts can be the first choice for symptomatic or complicated liver cysts, and reduced port surgery can be offered to achieve less invasiveness.

introduction: laparoscopic liver resection (llr) has been increasing since it was first reported in 1991. three international expert consensus conferences on llr were held in louisville, ky, usa, in 2008, morioka, japan, in 2014, and southampton, uk, in 2017. while most initial minimally invasive liver resections were typically done for benign lesions in anterior or left segments, llr is currently being applied to major anatomic resections, malignancy, cirrhosis and living donor hepatectomy. clinical case report: this is a 78-year-old male patient with a history of arterial hypertension and liver cirrhosis due to hepatitis b virus. hepatocarcinoma was diagnosed in liver segment vi with a size of 3 cm. on workup the patient presented child a stage, meld < 9, without signs of portal hypertension. complete blood analysis showed normal afp and ca 19.9 markers.
after presentation of the patient to a multidisciplinary committee, and the tumor being early stage according to the bclc classification, laparoscopic surgery with segment vi resection was decided. discussion: laparoscopic liver resection is becoming widely accepted for the treatment of hepatocellular carcinoma. liver resection is a first-line option in very early and early-stage disease. many meta-analyses have shown that llr is better than open liver resection in terms of short-term outcomes for patients with child-pugh a cirrhosis, solitary tumors, and minor resections. in the long-term setting, the results demonstrate that a minimally invasive approach is comparable to an open approach in terms of overall survival. in conclusion, the current evidence indicates that llr for hcc is safe and may be considered standard practice in specific settings.

results: there were 6 women (43%) and 6 men (57%). the age of patients ranged from 43 to 82 years. patients underwent complex examination including abdominal ultrasound, esophagogastroduodenoscopy, and in some cases computed tomography (ct). in all patients, the first stage was antegrade external drainage of the biliary tract with x-ray imaging of the biliary tract, specifying the level and extent of the block. a total of 28 mini-invasive interventions were performed. two patients, owing to separation of the lobar ducts, underwent antegrade bilobar stenting with preliminary split external bile drainage. complications after the interventions occurred in 10 cases: dislocation of the cholangiostomy drainage in 5 patients (35.7%); acute cholecystitis in 1 patient (7.1%); hydrothorax in 2 patients (14.2%); and perihepatic biloma in 1 case (7.1%). 1 patient (7.1%) had a recurrence of obstructive jaundice due to ingrowth of the endobiliary stent in the late period after stenting. a lethal outcome occurred in 1 patient.
conclusions: ultrasound examination allows us to determine the level of obstruction of the biliary tract and to substantiate the tactical approach when applying mini-invasive technologies. antegrade mini-invasive technologies in the treatment of tumor lesions of the proximal bile ducts allow biliary hypertension to be relieved in a timely and effective manner and the further treatment strategy to be determined. acknowledgements: this study was supported by the russian science foundation under project no. 18-15-00201.

background: repeat hepatectomy is an effective treatment, with good long-term surgical outcomes, for recurrent hcc and colorectal liver metastasis (crlm). however, the efficacy of a minimally invasive surgical approach for recurrent liver tumors is not yet confirmed. the purpose of this study is to examine the efficacy of laparoscopic repeat hepatectomy (lrh) compared with open repeat hepatectomy (orh) for recurrent liver tumors. we retrospectively analyzed the clinicopathological features and short-term surgical outcomes of lrh versus orh. methods: from 2006 to 2018, 158 patients with liver cancer underwent repeat hepatectomy. of those, 113 patients underwent partial hepatectomy: 37 laparoscopically and 76 by open hepatectomy. we compared the clinicopathological and surgical parameters of the lrh group with those of the orh group. results: there were no significant differences in gender, age, viral infection status, child-pugh classification, tumor size, tumor number, or tumor location between the two groups. operative times were similar, but blood loss was significantly lower in the lrh group (68 vs. 310 ml, p < 0.001). postoperative hospital stay was significantly shorter in the lrh group (9.0 vs. 11.5 days, p = 0.016). postoperative complications (clavien-dindo ≥ 3a) were observed only in the orh group, with a complication rate of 9.2%.
conclusions: we demonstrate that lrh reduces blood loss and postoperative complications compared with orh. lrh may be a feasible and effective procedure for selected patients.

background: the liver is the most common site of metastatic disease, with up to 40-50% of all cancers having the potential to send liver metastases during the course of disease. consequently, with the increasing role of surgical resection of hepatic deposits from different types of cancer, accurate evaluation of the extent of hepatic metastasis is needed to choose the most suitable patients for surgery and to plan the extent of hepatic resection. the aim of this work is to evaluate the role of intra-operative ultrasound (ious) in the detection of hepatic deposits in intra-abdominal malignancies, with special emphasis on its accuracy, sensitivity and specificity. patients and methods: this study was carried out on thirty patients admitted to the gastrointestinal surgery unit, main alexandria university hospital, with intra-abdominal malignancies for whom elective open surgical intervention was recommended, in the period from 1 september 2017 to 31 march 2018. results: the study included 17 males (56.7%) and 13 females (43.3%); their mean age at admission was 52.77 ± 9.12 years. six of the included patients (20%) were found to have hepatic lesions on ious, including the four cases (13.3%) already detected by preoperative imaging; two cases (6.67%) were newly discovered in the operating room by ious. conclusion: the current study shows that ious offers superior lesion detection over the various non-invasive preoperative imaging modalities, with significant impact on change of the planned surgical strategy.

the laparoscopic approach to the liver has become an integral part of surgery. two consecutive international consensus meetings recommend that major hepatectomy remain in expert hands.
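the accuracy, sensitivity and specificity evaluated for ious above are simple ratios over a 2x2 confusion table of test result against the intraoperative reference standard. a minimal sketch (the counts below are hypothetical illustrations, not the study's data):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic-test metrics from a 2x2 table:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    return {
        "sensitivity": tp / (tp + fn),          # detected / all with lesions
        "specificity": tn / (tn + fp),          # negative / all without lesions
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# hypothetical counts for a 30-patient cohort, for illustration only
m = diagnostic_metrics(tp=5, fp=1, tn=23, fn=1)
```

with these hypothetical counts, sensitivity is 5/6 and specificity 23/24; the study's own per-modality counts would be substituted to reproduce its reported figures.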
tumors located in the right posterior section are considered difficult for laparoscopic resection. patients and methods: from 2005 to 2017, cnuhh performed 260 laparoscopic hepatectomies, including 67 major hepatectomies. among the 67 major ones, there were 36 rh, 13 lh, 16 rps, 2 ch, and 1 as. we retrospectively analyzed data on patient demographics, tumor characteristics, operative data, and postoperative outcomes. results: during 2014-2017, 16 laparoscopic rps were performed. the diagnosis was hcc in 13 and crlm in 3 patients. median operative time was 405 min, and median blood loss was 850 ml. no blood transfusion was required. median tumor size was 38 mm, and median resection margin was 12.3 mm. six of the 16 patients (38%) were cirrhotic on pathology. there was no conversion and no postoperative mortality. median hospital stay was 11.6 days. conclusion: laparoscopic rps is known to be a challenging procedure. strict preoperative planning and operative procedure are mandatory. even though it should be performed by hands experienced both in hepatic surgery and in laparoscopic skills, it can be a good option for the treatment of tumors located in the right posterior section.

purpose: we previously developed a new sponge (named endoractor) as an organ retraction device in laparoscopic surgery in 2009 and have reported that it is useful in various surgical procedures, including rectal surgery. we confirmed that it is also useful in laparoscopic radiofrequency ablation of the liver in terms of pulling and protecting organs, so we report it here. materials and methods: the case is an 82-year-old female with liver cirrhosis. she had primary hepatocellular carcinoma in segment 8, with a diameter of 1.8 cm, very close to the inferior vena cava and the root of the middle hepatic vein, and in segment 3, with a diameter of 2.0 cm. we thought she could not tolerate hepatic resection because of her poor hepatic reserve capacity.
furthermore, we could not expect a treatment effect from embolization therapy, since the contrast effect was poor, so we decided to select ablation therapy. in the puncture and ablation of the s8 tumor, since there was concern about thermal damage to the middle hepatic vein and the cooling effect of the inferior vena cava, we dissected the right coronary ligament sufficiently and pulled the liver apart from the inferior vena cava and the middle hepatic vein as much as possible using our endoractor. also, in the puncture and ablation of the s3 tumor, there was a fear that the stomach would be thermally damaged, so we placed the endoractor between the liver and the stomach to protect the stomach. results: when ablating the s8 tumor, we could pull the liver securely without slipping, so we did not cause thermal damage to the middle hepatic vein, and there was no cooling effect from the inferior vena cava, so we could obtain a sufficient cautery margin. in ablation of the s3 tumor, we were able to puncture by stabilizing the lateral segment of the liver on our endoractor, and we avoided thermal damage to the stomach. conclusion: it appears possible to perform safe and reliable puncture and ablation by using our endoractor in laparoscopic radiofrequency ablation as well.

surg endosc (2019)

surgical reinterventions in patients with complicated hepatic hydatid cysts usually occur as a result of diagnostic or technical failures during the initial procedure. according to recent studies, the most common complications after liver hydatid cyst surgery are local sepsis at the residual cavity and long-term biliary leak. we report the case of a 21-year-old male with a history of liver hydatid disease four years before the current episode, admitted to our surgical department for intense right upper quadrant pain. abdominal ultrasonography, ct and mri scans revealed three cysts, in the gastrosplenic ligament, in liver segments vii-viii, and in segments ii-iii respectively, sized between 4 and 8 cm.
the intraoperative aspect during laparoscopy was strongly suggestive of liver hydatid disease. laparoscopic fenestration with tunneling for the hepatic cyst in segment viii, partial cystectomy in the left liver lobe, and ideal cystectomy in the gastrosplenic ligament were performed. postoperatively, the patient displayed a constant biliary drainage output of 500-600 ml from the cavity remnant in segment viii. conservative therapy for the external biliary fistula and concomitant treatment with albendazole for 3 months were initiated. evolution was slowly favorable, with biliary drainage decreased to 200 ml two months after surgery and complete symptom resolution five months after hospital discharge.

aims: this study aimed to evaluate the effectiveness of fluorescence imaging with indocyanine green (icg) during laparoscopic deroofing of hepatic cysts. methods: this was a single-center, case-control study. we included 14 patients who underwent laparoscopic deroofing between november 2008 and october 2018. imaging with and without icg fluorescence was performed in 10 (icg group) and 4 (non-icg group) patients, respectively. icg was intravenously administered between 15 min and 6.5 h before surgery. we performed a standard laparoscopic procedure. we detected thin bile ducts on the hepatic cyst using intraoperative icg fluorescence imaging, adjusted the resection line of the cyst wall, and ligated the bile duct at the point at which it crossed the resection line. data on age, sex, cyst size, resected cyst size, operative time, estimated blood loss, post-operative hospital stay, complications, and recurrence were compared between the groups. results: the mean cyst size was 139 ± 30.9 and 138 ± 32.5 mm, the mean resected cyst size was 139 ± 51.1 and 86 ± 39.9 mm, and the mean operative time was 95.8 ± 30.8 and 152 ± 72.8 min in the icg and non-icg groups, respectively. using icg fluorescence imaging, the bile duct was detected on the cyst wall in 4 patients (40%).
all surgeries were completed laparoscopically, and no post-operative complications occurred in either group. recurrence of the hepatic cyst occurred in one patient (25%) of the non-icg group. conclusions: fluorescence imaging with icg is widely used in hepatobiliary surgery for intraoperative identification of biliary and vascular anatomy. the method does not require complicated techniques or instruments. icg fluorescence imaging may facilitate the prevention of intra- or post-operative complications, such as biliary leakage, in laparoscopic surgery. in this study, icg fluorescence imaging was found to be effective in detecting the bile duct on the cyst wall intraoperatively, allowing for wider resection of the cyst and avoiding inadvertent injury. our study suggests that wider resection of the cyst wall might prevent recurrence of hepatic cysts and that icg fluorescence imaging could ensure procedural safety.

abdominal ct showed a large hepatic cyst (18 x 17.8 x 22 cm), with no signs of malignancy, occupying practically the whole right liver, causing subsegmental atelectasis of the middle lobe, compression of the superior and inferior vena cava, and displacement of the right kidney, pancreas and right atrium. owing to respiratory involvement, percutaneous drainage was performed, achieving clinical improvement and a reduction in the size of the lesion. the patient was discharged, but cyst superinfection occurred; once this problem was resolved, the drain was removed. results: in light of this complication, surgical treatment was decided, which confirmed the large cyst located in the right posterior hepatic segments with tight diaphragmatic adhesions. we carried out cyst evacuation and a wide laparoscopic resection of the cyst walls up to the posterior area of the vena cava, combining supra- and infrahepatic access. the patient was discharged on the sixth postoperative day and remains asymptomatic.
conclusions: simple cysts can be approached non-surgically (puncture-aspiration with or without injection of sclerosing products) or surgically (cyst wall fenestration, cystectomy or liver resection). conservative treatment obtains symptomatic relief but with a high risk of recurrence. recurrence is the main drawback of unroofing. cystectomy is the better option but may be too complicated depending on the cyst's location. in our patient, we carried out a wide laparoscopic unroofing (despite its posterior localization) to minimize the possibility of recurrence. in conclusion, laparoscopic resection of the cyst wall is a simple and effective approach in symptomatic or complicated cases.

background: single-incision laparoscopic surgery, or laparoendoscopic single-site surgery, is emerging as an alternative to conventional multiple-incision laparoscopic surgery. it has a potential benefit of less postoperative pain and faster recovery compared with conventional multiple-incision laparoscopic surgery. single-incision laparoscopic hepatectomy (silh) has been reported in only a few small series, and the majority were minor resections. case report: a 54-year-old male patient is a case of chronic viral hepatitis b and early cirrhosis of the liver. two atypical hepatocellular carcinomas (up to 2.4 cm in diameter), located at the junctions of segments 6 and 7 and segments 5 and 6, were suspected on liver magnetic resonance imaging (mri). we performed single-incision laparoscopic anatomical hepatic resection of the right posterior section via a 5-cm transverse incision on the right middle abdominal wall. inflow control was carried out with an extra-glissonian approach before parenchymal transection. the glissonean pedicles of segments 6 and 7 were divided by linear staplers, as was a major branch of the right hepatic vein in segment 7. the operative time was 580 min and the estimated blood loss was 150 ml.
the pathologic examination revealed two foci of hepatocyte dysplasia with a safe margin of 4 cm. the patient was discharged eight days after surgery, having recovered uneventfully. conclusion: single-incision laparoscopic anatomical right posterior sectionectomy is feasible and safe in the hands of experienced laparoscopic surgeons. it provides a fast recovery but needs a long operative time.

mortality in patients with liver cirrhosis is very high. the aim of this work was to decrease mortality and morbidity by using endoscopic local hemostasis and laparoscopic operations in patients with cirrhosis complicated by variceal bleeding. methods and material: we observed 692 patients with cirrhosis complicated by variceal bleeding over 12 years. there were 260 patients with child-pugh a, 279 with child-pugh b, and 153 with child-pugh c. all patients underwent prolonged endoscopic hemostasis with conservative therapy. the main methods used were ligation in 345 cases, sealing in 50 cases, and sclerotherapy in 158 cases. in 18 cases we could not stop the bleeding with the band ligation method, so we introduced danis stents into the esophagus and stopped the bleeding successfully. to prevent re-bleeding, in 67 patients we performed laparoscopic dissection of the abdominal part of the esophagus with suturing of the venous vessels, coagulation and dissection of the short gastric vessels between the stomach and spleen, and clipping of the left gastric artery and vein. in 29 patients we performed laparoscopic suturing of the variceal veins by introducing the laparoscopic trocars into the stomach. in 35 cases with gastric varices, where local endoscopic hemostasis was ineffective, we performed laparoscopic resection of the fundal part of the stomach. results: endoscopic local hemostasis was successful in 588 cases (85%). re-bleeding occurred in 85 patients. 25 patients died. there was no mortality after laparoscopic operations.
there were 7 cases of trocar wound infection and 3 cases of subphrenic abscess.

goals: the advance of laparoscopic surgery now includes the more complex procedures of abdominal surgery, such as those affecting the liver and pancreas. there are multiple indications for laparoscopy in hepatobiliopancreatic surgery, in both benign and malignant pathologies. material and methods: we present the video of a 78-year-old male patient with a history of right hemicolectomy for disease-free intestinal lymphoma in whom, at a control analysis by his attending physician, elevated tumor markers were detected. an extension study showed a hepatic lesion in the caudate lobe with pathological anatomy suggestive of hepatocarcinoma, and an adenopathy suspicious for malignancy adjacent to the right renal vein. the clinical case was presented to a multidisciplinary tumor committee, and it was decided to perform surgery. a laparoscopic caudate lobe resection was performed, preceded by intraoperative ultrasound, together with lymphadenectomy of the portal territory and vena cava and excision of the adenopathy of the right renal vein.

introduction: major vascular complications during laparoscopic surgery occur in approximately one in 1000 cases, but the mortality rate can reach 8-17%. most major vascular injuries lead to conversion to laparotomy, but successful laparoscopic repair is also possible. simulation training improves laparoscopic performance and possibly reduces surgeons' mental strain. materials & methods: during two editions of an advanced laparoscopic training course, 12 participants had the task of controlling a major vessel injury ('damage'). before the task, an educational video explaining the methods of obtaining haemostasis was shown. the algorithm of the 'damage' task was as follows: without previous preparation, a 1 cm injury of a major vessel was made with an l-hook electrocautery. after the injury, participants were free to control the damage in the way they wanted.
Heart rate of the participants was measured with an ear electrode. Measurements were carried out three times: before the injury, immediately after, and after obtaining vessel control. Afterwards, participants were interviewed about their feelings during the 'damage' task. Results: there were 12 vessel injuries in 10 animals. One animal died during the 'damage' task, 20 min after desufflation, due to relapse of bleeding. There was no conversion to an open procedure. Temporary vessel control was obtained with different methods. All participants used Vicryl 2-0 or PDS II 3-0 suture for final hemostasis. Participants' heart rates before injury were 52-85 ± 3.33 bpm; immediately after the injury they rose to 75-120 ± 4.31 bpm, and after obtaining vessel control they were in the range 50-100 ± 4.83 bpm. A statistically significant difference was found between the first and second HR measurements (p = 0.01, t = -9.727), and between the second and third (p = 0.02, t = 4.177). Participants judged their experience on a 5-point scale (1 = not helpful at all; 5 = extremely educative). The educational value of the task received 5 points in 11 cases and 3 points in one case. Conclusion: participants feel stress during major vessel bleeding even in an animal model, and this stress can result in serious intraoperative mental strain and a significant increase in heart rate. Participants found the 'damage' task very useful for their daily practice. The aim of this study was to improve the results of treatment of patients with hepatic echinococcal cysts by using argon plasma coagulation. Methods: the analysis of the treatment results of 66 patients formed the basis of this study: 12 (18.2%) men and 54 (81.8%) women in total, with an average age of 47.7 ± 15.9 years. The main difference between the groups was the method of liver parenchyma coagulation used to achieve reliable hemostasis.
In the main group, argon plasma coagulation was the final stage of the surgical intervention on the liver; it was performed in 45 (68.2%) patients. Alternatively, monopolar coagulation was performed in 21 (31.8%) patients (comparison group). Results: in the main group, pericystectomy was conducted in 86.6% of cases and resectional surgery in 13.4%. In the comparison group, it was conducted in 28.6% of cases. In the early postoperative period, complications were observed in 4.4% of cases in the main group; the corresponding figure was 4.8% in the comparison group, and these led to relaparotomies. The formation of external biliary fistulas was observed in 2 (4.4%) patients in the main group and in 3 (14.3%) patients in the comparison group; however, all the fistulas closed spontaneously by the 7th-10th day in both groups. Abdominal wall hernias and peritoneal adhesions manifesting as intestinal obstruction of varying degree were considered complications of the late postoperative period; these occurred in 0% and 4.4% in the main group versus 19% and 14.3% in the comparison group, respectively. Resection of hepatic echinococcal cysts with subsequent application of argon plasma coagulation to the cyst bed was accompanied by a decrease in the number of complications in both the early and the late postoperative period, together with more positive dynamics of functional liver values. Aims: indocyanine green (ICG) fluorescence imaging has been reported as a reliable and safe navigation tool in laparoscopic hepatectomy. However, the factors affecting the sensitivity of tumor detection with ICG fluorescence imaging are relatively unclear. The aim of the present study was to analyze the factors associated with successful ICG fluorescence in laparoscopic hepatectomy. Methods: this is a retrospective single-center study. The study population consisted of 80 laparoscopic hepatectomies undertaken at Kurashiki Central Hospital from January 2018 to November 2018.
We excluded patients whose tumors were located more than 10 mm from the liver surface, those who did not receive ICG fluorescence imaging, and those who were not injected with ICG dye (0.5 mg/kg) intravenously within 7 days of surgery. The PINPOINT endoscopic fluorescence imaging system was used to detect the tumor location. We evaluated the relationship between successful fluorescence and the timing of ICG injection before the operation, tumor size, ICG R15, liver damage, and BMI. Results: following exclusion, 15 patients were eligible for analysis. Among the 16 tumors resected, ICG fluorescence imaging detected 9 (56.3%), including 6 hepatocellular carcinomas and 3 liver metastases. ICG fluorescence imaging detected all 9 tumors in the patients injected with ICG 2 to 5 days before hepatectomy, and all 9 detected tumors were more than 10 mm in diameter. There was no relationship between ICG fluorescence and ICG R15, liver damage, or BMI. Conclusions: injection of ICG 2 to 5 days before the operation and a tumor size of more than 10 mm can be factors in successful fluorescence in laparoscopic hepatectomy. Introduction: cysts in the liver have a wide variety of etiologies, and it is important to characterize a cystic lesion before treating it. The simple cyst has a low prevalence and is more frequent in women. Fenestration is a useful option for the treatment of simple cysts in selected patients. Case presentation: a 40-year-old woman was referred to our hospital with a one-year history of intermittent right upper quadrant pain, with no other associated symptoms. Computed tomography and magnetic resonance imaging showed a large cyst (14.4 × 13.2 cm) in the right lobe of the liver. The cyst had a lobulated morphology and smooth, well-delimited edges. There were other smaller cysts in the left lobe. Hepatic function on blood analysis was normal. Biomarkers, tumor markers, and hepatitis virus markers were negative.
Outpatient follow-up and symptomatic treatment of the pain were decided. After six months of follow-up the pain persisted, so surgical treatment was proposed. A laparoscopic fenestration was performed, widely resecting the free wall of the cyst. There was no evidence of a connection to the bile duct, and there were no complications. She was discharged on day 3. Discussion: some giant hepatic cysts become symptomatic due to mass effect. Persistence of pain is an indication for surgical treatment. Laparoscopic fenestration is an alternative for the management of simple hepatic cysts. Aim: laparoscopic liver resection for malignant pathology such as colorectal cancer metastases has been a matter of discussion for several groups in recent years. It has been proposed as a safe and feasible treatment, but issues such as short- and long-term outcomes and oncologic results have not been adequately assessed. Methods: we performed an observational retrospective study of patients undergoing laparoscopic liver resection for colorectal metastases in our center. From November 2007 to November 2018, a total of 113 patients underwent laparoscopic liver resection. Data on resection margin, hepatic and extrahepatic recurrence, and both disease-free and overall survival were collected. Patients were discussed in a multidisciplinary group with oncologists, radiation oncologists, and surgeons. The surgical procedures were performed by the same team in all cases to minimize bias. Results: a total of 9 patients (7.9%) were non-resectable at the time of surgery. Mean overall survival was 19 months, with a maximum of 132 months. Mean disease-free survival was 11.7 months. Hepatic recurrence was 28%, most of it in high-risk patients, and from this group 67.74% underwent a new liver resection.
Major complications took place in 9 patients (7.96%): two biliary leaks, one bowel perforation, two hepatic failures, one evisceration, and three cases of respiratory insufficiency, requiring urgent surgery in three of the cases. Mean hospital stay was 5.84 days, of which a mean of 2 days were in an intensive care unit. Conclusions: laparoscopic liver resection for colorectal liver metastases can be a feasible technique when performed by trained surgeons. It improves postoperative recovery, with a reduction of hospital stay and less postoperative pain, without increasing the development of major complications or mortality in the first 30 days after surgery. We obtained good oncological results that have been improving as the surgical team acquires experience. Patients aged 29 to 71 underwent surgery for cirrhosis with massive refractory ascites, Child C (9-10), without obvious signs of hepatic encephalopathy. The major etiological factors were: viral hepatitis C (47 patients, 48.0%), B (29 patients, 29.6%), B + D (17 patients, 17.3%), and toxicity (5 patients, 5.1%). To prevent possible bleeding, at the first stage endoscopic filling of esophageal varices with fibrin glue was performed in 81 patients (82.7%). After testing the effectiveness of the varices filling, in the following 5-7 days decompression surgery of the thoracic lymphatic duct was performed under local anesthesia to improve lymphatic drainage from the liver and abdominal organs. Simultaneously, laparoscopic sanitation of the abdominal cavity was performed, with complete evacuation of ascitic fluid, rinsing, and drainage. Fractional post-surgery rinsing was repeated daily for 3-5 days to remove peritoneal edema and improve its absorptive properties. Evaluation of the results was performed 3, 6, and 12 months after surgery, based on criteria of liver reserves and ascites volume. Results: post-surgery mortality from liver failure was 5.1% (5 patients); 7 other patients died of the same cause over the following 3-6 months.
Annual survival rate was 87.6%. Complete ascites regression over 3-12 months after surgery was noted in 53 patients (55.8%), significant regression and stabilization in 25 (25.6%), and moderate regression with a need for periodic decompressive laparocentesis in 8 cases. In all patients, functional liver reserves and quality of life improved significantly. Conclusions: this technique of refractory ascites correction in patients with cirrhosis and depleted liver reserves, by laparoscopic sanitation with post-surgery fractional rinsing of the abdominal cavity and simultaneous decompression of the thoracic lymphatic duct, showed very high efficiency and deserves to be established in clinical practice. T. Urade, hepato-biliary-pancreatic surgery, Kobe University, Kobe, Japan. Aim: anatomical liver resections guided by a demarcation line after portal staining or inflow clamping of the target territory were established as essential methods for the curative treatment of hepatocellular carcinoma (HCC) and subsequently applied to other malignancies. However, in laparoscopic anatomical liver resection (LALR) it is much more difficult to reproduce these procedures and to confirm the demarcation of the hepatic segment visually on the monitor. Recently, a laparoscopic fluorescence imaging system has been used as a tool for real-time intraoperative navigation in LLR. The aim of this study is to demonstrate how to perform LALR using indocyanine green (ICG) fluorescence imaging. Methods: three patients underwent pure LALR using ICG fluorescence imaging. The following operative procedures were performed: 1 partial liver resection for HCC, 1 segmentectomy for liver metastasis, and 1 right anterior sectionectomy for HCC. In all patients, preoperative 3D simulation images from dynamic CT were reconstructed on a 3D workstation to decide on the cutting points of the Glissonean branches.
After mobilization of the liver, intraoperative ultrasonography was performed to identify the location of the tumor and the Glissonean pedicles corresponding to the tumor-bearing hepatic region. We dissected or transected the hepatic parenchyma to encircle the Glissonean pedicles. After clamping or closing them, 2.5 mg of ICG was injected intravenously to identify the boundaries of the hepatic segments under near-infrared light. Parenchymal transection was started according to the demarcation on the liver surface. The lateral aspect of the parenchymal transection was carried out based on the demarcation between non-fluorescing and fluorescing liver parenchyma as far as possible. Results: in all 3 cases, demarcation lines on the liver surface could be visualized clearly after injection of ICG. In addition, the boundaries of cone units, segments, and sections could be recognized to some extent because the tumor-bearing hepatic region became non-fluorescing parenchyma during parenchymal transection. These procedures were completed successfully, and the postoperative courses were almost uneventful. Aim: intrahepatic cholangiocarcinoma is the second most common primary liver cancer after hepatocellular carcinoma (HCC). Although the laparoscopic approach to these tumours is not frequent due to its complexity, it is increasingly performed by hepatic surgeons. Traditionally, abdominal surgery in cirrhotic patients has been reserved for selected cases because of the high rate of complications. Advances in the treatment of HCC on liver cirrhosis, and the greater safety of the laparoscopic approach, have encouraged some surgeons to extend surgery to Child B-C or portal hypertension patients. Methods: we present a 56-year-old male diagnosed in 2010 with liver cirrhosis accompanied by portal hypertension. An MRI in 2012 showed a 25-mm solid lesion located in hepatic segment II. Biopsy confirmed the diagnosis of intrahepatic cholangiocarcinoma.
After liver function evaluation (Child C, MELD 17), hepatic chemoembolization was performed. Sequential CT scans indicated a complete radiologic response. After 6 years of follow-up, MRI showed a 25-mm recurrence between segments II and III of the liver. The multidisciplinary committee decided on liver resection given the suitable liver function and the low aggressiveness of the tumour. A laparoscopic left lobe liver resection was performed. SonaStar™ and LigaSure™ were used for the liver transection and an Endo GIA™ for division of the portal and hepatic veins. The surgery was complicated by a tendency to bleed, which was finally controlled by cauterization. Results: early after the surgery, the patient produced 900 cc of blood through the drain, accompanied by hypotension, so emergency surgery was indicated. An exploratory laparoscopy found hemoperitoneum and diffuse bleeding from the liver surface, which was controlled. The patient recovered well and was discharged on the 11th day after surgery. Analysis of the specimen showed a 6.5-cm cholangiocarcinoma with a 0.4-cm resection margin. Conclusion: there is an increased risk of complications in liver resection in cirrhotic patients with portal hypertension. The laparoscopic approach allows a reduction of potential complications and, although bleeding continues to jeopardize this surgery, this option could be proposed in selected patients. Introduction: an accessory spleen is found in approximately 7% to 15% of the population. Most (80%) are located near the splenic hilum, but the intrapancreatic location is the second most frequent (16.8%) for accessory spleens. In adults, intrapancreatic accessory spleens (IPAS) are clinically silent. They may become clinically important because of their radiographic similarity to cancer. An intrapancreatic accessory spleen is a rare cause of pancreatic pseudotumors and is located in the pancreatic tail in approximately 1% to 2%.
IPAS can be difficult to differentiate radiologically from hypervascular pancreatic tumors such as pancreatic endocrine neoplasms because they can share a similar enhancement pattern. As a result, most of the reported cases of IPAS have been diagnosed only after distal pancreatectomy was completed. Material and methods: we present the case of a 55-year-old male patient with a history of large-vessel vasculitis followed up by rheumatology, in whom a control CT showed a pancreatic nodule, for which he was referred to the digestive department for study. Echoendoscopy showed, in the distal third of the tail, a 16 × 12 mm hypoechoic lesion with rounded morphology and well-defined edges, which could not be biopsied given the absence of an adequate window for fine-needle aspiration biopsy (FNAB). Based on these radiographic findings, the differential diagnosis included a pancreatic endocrine tumor. Due to the high suspicion of malignancy and the absence of a biopsy, he was referred to general surgery for scheduled surgery. A laparoscopic corporocaudal pancreatectomy was performed without incident, and the definitive histology showed an intrapancreatic accessory spleen in the pancreatic tail, excluding the presence of cancer. Conclusion: an intrapancreatic accessory spleen is a challenging diagnosis to make, and it should be included in the differential diagnosis of pancreatic neoplasms; its early identification precludes surgical resection. However, the preoperative diagnosis of IPAS may be difficult, and distal pancreatectomy is a safe and relatively simple operation, most of the reported cases of IPAS being diagnosed correctly only after surgery. There are various options for treating pancreatic pseudocysts (PPs). This paper describes our tailored and methodological approach to laparoscopic drainage of pancreatic pseudocysts based on an anatomical classification. Methods: we adopted the laparoscopic approach in 28 patients who had PPs requiring surgical drainage.
The laparoscopic method was decided according to preoperative computed tomography (CT) and intraoperative findings. The results shown represent median (range). The aim of this work was to decrease mortality and morbidity in patients with combined trauma. Methods and material: over 5 years, 667 patients were brought to our clinic with combined trauma. All underwent CT and ultrasound examination. 286 patients underwent open laparotomy due to massive liver rupture, spleen rupture, or massive trauma of the bowel, pancreas, or kidney with massive bleeding. In 106 cases, CT showed no trauma of the abdominal organs and no massive abdominal bleeding; those patients were treated conservatively. In 275 cases of combined trauma, after CT examination we performed a laparoscopic operation. In 97 of the 275 patients in whom we started a laparoscopic operation, we converted to laparotomy due to massive liver rupture or trauma of the spleen and hollow organs. In those 112 cases we performed urgent laparotomies with suture ligation of bleeding points, suturing of the liver and hollow organs, and drainage of the abdominal cavity. Results: we performed laparoscopic operations in 178 patients. In 107 cases with liver trauma we performed laparoscopic electrocoagulation and argon plasma coagulation. In 66 cases with liver trauma we performed electrocoagulation with packing of omentum onto the surface. In 5 cases with splenic trauma we performed argon plasma coagulation and used fibrin glue. After laparotomy, mortality occurred in 35 cases and morbidity in 87 patients. After laparoscopic operations, mortality occurred in 5 cases of severe combined trauma with multiple abdominal trauma, and morbidity in 11 patients. Conclusion: laparoscopic operations in patients with combined trauma decrease mortality and morbidity.
Aims: in laparoscopic distal pancreatectomy, retracting the liver and stomach away from the surface of the pancreas is sometimes difficult. When separating the pancreatic body from the retroperitoneum, we must not injure the pancreas, to avoid breaching the tumor. When cutting the dorsal side of the spleen away from the retroperitoneum, we occasionally cut into the spleen accidentally. Based on our experience, we gradually developed a set of procedural steps to resolve these problems. Our three-step maneuver simplifies the procedure and improves the efficiency and safety of laparoscopic distal pancreatectomy. Methods: as the first step, to retract the liver we sutured the round ligament of the liver and the crus of the diaphragm using 3-0 PDS, and both ends were tugged from outside the body through both sides of the xiphoid process. The stomach was also suspended from the outside using two nylon threads like a bridge, so the surface of the pancreatic body could be seen with a good view. The second step was a rolling-up maneuver of the pancreas: when separating the pancreatic body and tail from the retroperitoneum, we rolled the pancreas in laparoscopic gauze and lifted the gauze up with a single assistant's forceps; then the correct line of dissection could be found clearly. The last step was a hanging maneuver of the spleen: when cutting the dorsal side of the spleen away from the retroperitoneum, we hung the splenic hilum with cotton tape, with which the correct line of dissection could easily be found. Results: the operation time was 4 h 7 min and the estimated blood loss was minimal. We did not injure the tumor or the spleen during the operation, and the patient recovered uneventfully after a short hospitalization. Conclusion: our three-step maneuver can be effective in performing laparoscopic distal pancreatectomy. About 10-20% of patients with pancreatic collections will develop walled-off necrosis, with an associated 8-39% mortality.
There are multiple options for intervention and drainage; outcomes after endoscopic drainage are usually related to the nature of the collections. Aims: to evaluate and present the role of endoscopy in the treatment of pseudocysts and walled-off necrosis, and its favorable outcomes. Methods and results: we present the case of a 48-year-old male who had biliary pancreatitis treated with cholecystectomy and intraoperative cholangiogram 6 weeks earlier. He continued with persistent abdominal pain, and his CT scan showed a large walled-off necrosis. He was taken to surgery for an endoscopy-assisted laparoscopic cystogastrostomy with necrosectomy and was discharged 14 days postoperatively. Conclusions: the step-up management of walled-off necrosis has proven to be a better option than conventional surgical or endoscopic techniques alone, reducing complications and mortality versus conventional necrosectomy. The use of endoscopic treatment reduces the pro-inflammatory response. Drainage of walled-off necrosis can be done by a transpapillary or transmural endoscopic approach, each with its own advantages. Some authors avoid the use of endoscopy in walled-off necrosis because of a higher rate of complications, re-interventions, and a greater length of hospital stay. In our experience, we have achieved excellent results with this combined technique. Patients underwent chemotherapy after the electroporation procedure. 90-day mortality was 4.3% (n = 1) in the electroporation group. It was found that irreversible electroporation improved local recurrence-free survival (12 and 6 months, respectively, p = 0.01) and distant recurrence-free survival (15 and 8 months, respectively, p = 0.03). Overall survival was 18 and 11 months, respectively (p = 0.03). Conclusion: irreversible electroporation of locally advanced pancreatic cancer is safe. Four months of chemotherapy followed by the surgical procedure is associated with a good local response and better overall survival compared with chemotherapy alone.
These data will be validated in a further multicenter study. Introduction: pancreatic pseudocysts are the most frequent complication of acute or chronic pancreatitis. Usually asymptomatic, they can be managed conservatively or, in case of complications, by several methods: endoscopic, percutaneous, or surgical. Material and method: we present the case of a 41-year-old patient with an episode of acute pancreatitis five years earlier, now hospitalised for upper gastrointestinal bleeding with hematemesis. Upper endoscopy showed subcardial bulging with an erosion of the posterior gastric wall and signs of recent bleeding, managed by clipping. The patient work-up showed a 12-cm pancreatic pseudocyst on endoscopic ultrasound. Taking into consideration the patient's history and the size and complication of the cyst, a drainage intervention was proposed. Results: a minimally invasive approach was decided. Using ultrasonographic guidance, a posterior gastrotomy was performed with the cystotome, establishing communication with the pancreatic pseudocyst. The tract was dilated with an 8-mm CRE balloon, with partial evacuation of turbid liquid. The drainage consisted of two 10 Fr pigtail plastic stents. The patient was discharged the following day in good health. Endoscopic ultrasound control at 3 weeks showed complete resolution of the pancreatic cyst and was followed by stent removal. Endoscopic drainage of a pancreatic pseudocyst represents the first treatment option as an alternative to surgical intervention, being minimally invasive, with low risk and fast recovery. Clinical case report: a 69-year-old man was admitted to the hospital with a diagnosis of severe acute pancreatitis and multi-organ failure. During the first month the patient was in the ICU and noninvasive procedures were attempted: enteral feeding by a nasoduodenal tube was started and antibiotics were administered to control sepsis.
On day 30, percutaneous drainage was performed for a large retroperitoneal abscess. On day 67, endoscopic transgastric necrosectomy was performed and the left collection resolved. Due to the persistence of multi-organ failure and the evidence of an increase in size of the right retroperitoneal collection, a VARD (video-assisted retroperitoneal debridement) was decided. The right collection was accessed following the previously placed pigtail catheter. A 12-mm trocar was placed to create a retro-pneumoperitoneum at a pressure of 6-8 mmHg. A 5-mm trocar was placed, purulent content was aspirated, and debridement was performed. Irrigation and aspiration were performed with normal saline and povidone-iodine solution. The drain was used to perform washes with physiological saline and urokinase. On day 146, CT confirmed resolution of the collection, and on day 190 the patient was discharged. After 8 months, the patient is in good clinical condition. Discussion: drainage of retroperitoneal abscesses via laparotomy is highly invasive and risky. VARD enables radical necrosectomy and drainage less invasively. In this patient, complete resolution of the right collection was obtained with retroperitoneal debridement without complications. We conclude that careful retroperitoneal necrosectomy is a valid alternative for the management of right-sided collections. Aims: in this study we analyze the laparoscopic approach for hepatocellular carcinoma in order to clarify whether we can gain advantages in outcomes such as complications, postoperative recovery, or long-term survival. Methods: a retrospective consecutive-case study was carried out, analyzing age, sex, body mass index, comorbidity, surgical extension, and tumor size.
The outcomes analyzed were operation time, intraoperative blood loss, blood transfusion, postoperative morbidity and mortality, intensive care stay, hospital stay, tumor size, R0 resection, conversion rate, early reintervention, disease-free survival rate, and overall survival rate. Results: 15 patients were analyzed, 12 males and 3 females, aged between 54 and 73 years (mean age 62), with diverse comorbidities: arterial hypertension (7/15; 46%), diabetes (2/15; 13.3%), dyslipidemia (5/15; 33.3%), and hepatopathy in the form of liver cirrhosis (14/15; 93.3%). All underwent laparoscopic liver surgery; in 9 cases a non-anatomical resection was performed, while in the other 6 a segmentectomy was performed. In 12 cases the approach was purely laparoscopic; in 3, an assistance incision was needed. Operative time was 120-525 min (mean 282 min). Mean blood loss was 1.2 g/dl, and only 2 intraoperative transfusions were needed; massive blood loss was reported in 1 case. Postoperative medical complications were observed: hepatic failure and renal insufficiency, and in 1 case a postoperative hemorrhage that needed urgent reintervention. Mean intensive care stay was 1 day and hospital stay was 4.93 days. Regarding oncological outcomes, R0 resection was achieved in 9/15 (60%) and R1 in 6/15 (40%). At 3 years, 9/15 patients were disease-free, 3 had died of disease progression, and 2 had died of other causes. Aim: the purpose of this study is to analyze our initial experience with laparoscopic duodenopancreatic resection. Introduction: laparoscopic procedures have advanced to represent the new gold standard in many surgical fields. Laparoscopic pancreatoduodenectomy and laparoscopic distal pancreatectomy (LDP) are advocated to improve perioperative outcomes, including decreased blood loss, shorter length of stay, reduced postoperative pain, and expedited time to functional recovery.
However, the indication for a minimally invasive approach in pancreatic surgery is often benign or low-grade malignancy. Material and method: the steps of the LDP procedure are similar to the open procedure. We perform the resective part of the procedure totally laparoscopically, and we prefer to do the reconstructive part using hand-assisted techniques. For the period 2014-2017, we performed 76 PDs, 24 (32%) of them with a laparoscopic approach; 6 (31%) patients were operated on totally laparoscopically and 18 (69%) with hand-assisted techniques. Results: a significantly higher conversion rate was encountered when LC was done 2-6 weeks after ES, as compared to 1 week after ERCP. It is estimated that pancreatitis after ERCP affects roughly three to 10 percent of patients, and many endoscopists quote a post-ERCP pancreatitis rate of 3-5%; however, 10-15% is probably a more realistic figure for the majority of ERCP endoscopists. Wise endoscopists inform their patients that there is a spectrum of post-ERCP pancreatitis severity, from mild (> 95% of cases) to severe (1-5% of cases). In mild forms, pancreatitis after ERCP may resolve itself. Conclusion: endoscopic retrograde cholangiopancreatography is a procedure used to diagnose and treat disorders involving the pancreatic and bile ducts. Acute pancreatitis is the most common and feared complication of endoscopic retrograde cholangiopancreatography. The assumption is that the duration of the laparoscopic method is longer, but on the other hand the patient has better wound healing and a lower likelihood of developing a postoperative hernia. The postoperative period is much simpler due to the significantly shorter hospitalization and the faster recovery, and according to patients the level of pain is much smaller as well; however, the oncologic results are the same. Introduction: Spiegel hernias are rare, representing only between 0.1% and 2% of all abdominal wall hernias.
Due to its location below the Spiegel line, its diagnosis requires a high index of suspicion. Physical examination detects only 50% of Spiegel hernias, and on many occasions imaging tests are necessary for the diagnosis. Goals: our objective is to describe a case of urgent laparoscopic repair of high-grade bowel obstruction secondary to a Spiegel hernia. Material and methods: we present the case of a 75-year-old male patient with no medical history who came to the emergency department of our center due to eight hours of abdominal discomfort associated with nausea, without vomiting or other symptoms. The patient was afebrile and hemodynamically stable at all times. On physical examination, the abdomen was soft and depressible, painful in the left flank, where a mass compatible with a Spiegel hernia was palpable. The blood count showed no leukocytosis or alteration of inflammatory parameters. An abdominal computed tomography (CT) scan requested from the emergency department demonstrated a high-grade small bowel obstruction caused by an entrapped loop of distal jejunum within a left-sided Spiegel hernia. Given the situation, informed consent was obtained and the patient was taken to the operating room for emergency laparoscopic repair. We performed a laparoscopic hernioplasty with a ventral patch mesh between the external oblique and transversus muscles, with primary closure of the defect using a continuous suture. The patient's evolution was favorable, with good oral tolerance and re-establishment of intestinal transit, and he was discharged 48 h after surgery. The Spiegel hernia is a rare entity that requires a high index of suspicion for its diagnosis.
despite the limited evidence published in the literature on laparoscopic repair of incarcerated spiegel hernias, the studies published so far suggest that laparoscopic repair is a valid alternative to the classic approach when performed by a well-trained laparoscopic surgeon. introduction: repair of lateral abdominal wall hernias (both primary and incisional) can be challenging due to the complexity of the anatomy, issues with fixation and the low incidence of such cases. a good understanding of abdominal wall and retroperitoneal anatomy, coupled with proficient laparoscopic technique, is essential for successful repair via the minimally invasive approach. methods: a retrospective review of a prospectively maintained database was performed to identify patients with lateral abdominal wall hernias who underwent laparoscopic repair from january 2015 to july 2018. results: 11 patients with 12 hernias were identified (6 primary, 6 incisional). mean patient age was 60 (range 22-81) and mean bmi was 27.1 kg/m2 (range 18.2-34.6). according to the ehs classification, the incisional hernia defects were located at subcostal (l1, n = 1), flank (l2, n = 3), iliac (l3, n = 1) and lumbar (l4, n = 1) regions. background: it is commonly accepted that laparoscopic surgery has the advantage of abdominal wall preservation. however, the increased use of laparoscopy has resulted in certain complications specifically associated with the laparoscopic approach, such as trocar-site incisional hernia. to date, the 'patient-dependent' factors contributing to the occurrence of postoperative hernia after laparoscopic abdominal surgery have not been fully clarified. methods: between 1996 and 2017, 256 patients were operated on due to trocar-site incisional hernia in one surgical centre. data on the 'patient-dependent' factors which caused postoperative trocar-site incisional hernia were collected and retrospectively analysed. 
results: port-site incisional hernia occurred in 98% (250 patients) after the use of trocars of 10 mm or larger diameter. the presence of metabolic syndrome was the decisive factor in the development of postoperative incisional hernia in 62% (159 patients). in 15% (38 patients) the postoperative hernia occurred on the background of long-standing cough symptoms caused by chronic obstructive pulmonary disease. in 13% (33 patients) the cause of postoperative hernia was a single episode of heavy lifting or heavy physical work. in 10% (26 patients) the postoperative hernia developed due to prolonged constipation from chronic inflammatory colon disease. conclusions: thus, when the aponeurosis at the trocar sites is adequately closed, the occurrence of postoperative hernias is caused by patient-dependent factors which increase intra-abdominal pressure. for this method, a small midline incision, 3 cm in length and 5-6 cm away from the hernia orifice, was carried out initially. dissection of intraperitoneal adhesions was carried out by sils with a sils device. subsequently, after closure of the initial laparotomy, the unilateral anterior rectus sheath was incised from the same incision and dissection of the retro-rectus space up to the preperitoneal space was done under laparoscopic vision. dissection of the other side was carried out in the same fashion. initial dissection of the linea alba could be done by open surgery from the initial incision. further dissection of the linea alba, retro-rectus space, and hernia orifice was carried out by sils. defect closure of the anterior and posterior rectus sheath using barbed suture was also done by sils, and self-gripping mesh was inserted. an additional trocar to assist retro-rectus dissection, defect closure, and decompression of the intraperitoneal cavity was inserted as required. 
aims: the laparo-endoscopic approach to inguinal hernia continues to bring many clarifications concerning the inter-parieto-peritoneal space of this region through in vivo exploration, obtained by magnification by means of specific optic instrumentation. our study aimed to revalue the fascias in vivo, to establish their embryological correspondences and to reconcile the variable nomenclature existing in the classical anatomy of this region. these observations find their applicability in tapp and tep hernia procedures, as the old anatomical descriptions are no longer operative. methods: we tried to identify the structures that delimit the anatomical regions of retzius and bogros in 20 recordings of tapp procedures performed on men, on the right side, for small indirect hernias, in patients with a clear view of the structures. additionally, a review of the literature on this subject was performed through a search of the databases using the following keywords: bogros space, retzius space, preperitoneal approach, urogenital fascia. results: retzius and bogros are the medial and lateral compartments of the inter-parieto-peritoneal space, located between the transversalis fascia and the parietal peritoneum. these narrow, virtual spaces are best highlighted today with the help of insufflation techniques during laparo-endoscopic procedures. a competent and careful dissection confirms a 'deep and superficial' stratification, highlighting embryonic relics derived from the uro-genital fascia: the urinary-prevesical fascia and the spermatic fascia. in addition, the real retzius space is located anteriorly and the real bogros space is located behind this structure. the confluence area of the two spaces is a critical point of laparo-endoscopic dissection; failure to recognize it may lead the dissection astray. 
conclusions: the literature on this topic reflects a certain terminological confusion, using general terms such as 'preperitoneal tissue' or 'areolar tissue' to denote what we consider to be the urogenital fascia or its prolongations. the data obtained were synthesized in several drawings and diagrams, very useful in training surgeons in tapp/tep techniques. aim: spigelian hernia containing an epiploic appendage is a really rare entity. in this paper, we present a very rare case of spigelian hernia involving an epiploic appendage, treated with laparoscopic hernia repair. case report: a 60-year-old woman presented to the emergency department with sudden-onset abdominal pain in the left lower quadrant. on physical examination, she had a small, palpable tender mass in the left lower abdominal quadrant. temperature and white blood cell count were normal. an inflamed epiploic appendage with an oval shape, a fatty core, and a central thin hyperdense line in the hernia sac was detected on abdominal computed tomography. its intraabdominal relationship with the normal wall of the sigmoid colon was well appreciated (figure 1a, 1b). diagnostic laparoscopy was performed (figure 2). adhesions between the sac and the epiploic appendage were released using sharp dissection. a peritoneal flap was then created (figure 3). laparoscopic tapp repair was used without closing the defect (figure 4). the patient was discharged on the 3rd day uneventfully. aims: morgagni's hernia is an infrequent, congenital, anterior or retrosternal diaphragmatic defect. the right side is the most frequently affected, in up to 90% of cases. it represents between 2 and 5% of congenital diaphragmatic hernias. in childhood, patients usually present asymptomatically or with respiratory symptoms. up to 5% are diagnosed in adulthood, incidentally or after presenting with gastrointestinal obstruction. 
the treatment is surgery, by either a laparoscopic or an open approach. we present a case of a laparoscopic approach with intra-abdominal mesh placement for a giant morgagni's hernia diagnosed in old age. methods: an 84-year-old woman with a history of advanced alzheimer's dementia, partially dependent in daily life activities and institutionalized, consulted for intermittent episodes of oral diet intolerance associated with vomiting of one month's duration. abdominal examination was unremarkable. a chest radiograph revealed a right lower lung field mass with fluid. a thoracoabdominal ct scan showed a small bilateral pleural effusion and a large, right anterolateral morgagni's hernia containing a dilated segment of transverse colon and greater omentum. results: a laparoscopic approach was performed. the hernia was reduced and the hernia sac was removed. the defect was repaired with a dual-component (absorbable and non-absorbable) mesh anchored with intracorporeal sutures. the patient recovered and was discharged 4 days after surgery. conclusion: the laparoscopic approach for morgagni's hernia repair is safe and offers the advantages of less postoperative pain, faster recovery and a shorter postoperative stay. introduction: recently, laparoscopic operations for ileus have been increasing. we have performed laparoscopic operations for adhesive ileus with an umbilical incision at the beginning. the initial umbilical incision makes it possible to secure the laparoscopic field by dissecting the adhesions under direct view, and makes it easy to repair damage to the intestinal tract. surgical procedure: at first, a 3-4 cm umbilical incision was made and the adhesions were dissected as much as possible under direct vision. secondly, an ez access was set and one 12 mm port was inserted, and the laparoscopic operation was performed with 2 or 3 additional 5 mm ports. when repair or resection of small intestine due to damage was necessary, it was pulled out through the ez access. 
objective: to investigate possible problems of laparoscopic surgery for adhesive ileus with an initial umbilical incision. introduction: small bowel obstruction (sbo) during pregnancy is a rare condition with an incidence of 0.001-0.003%, and in around 70% of cases it is caused by adhesions from previous abdominal surgery. other diagnoses, such as hernias, malignancy, volvulus or intussusception, are extremely rare. when sbo occurs in pregnancy, it carries a significant risk to mother and fetus. its diagnosis can be difficult to make, as symptoms are often attributed mistakenly to the pregnancy. goals: a case report of congenital bowel obstruction during the second trimester of pregnancy handled by laparoscopy. material and methods: we report the case of a 38-year-old woman with a history of chronic lung disease, pregnant by in vitro fertilization (17 + 3 weeks), who attended the emergency department with abdominal pain and bloating accompanied by nausea and vomiting for two days. on physical examination she showed a distended, soft, depressible and painful abdomen without peritonism. laboratory tests were normal. a nasogastric tube was placed, with generous output of fecaloid intestinal contents. abdominal ultrasound by radiologists expert in the abdomen showed a moderate amount of free abdominal fluid, a normal uterus, and sbo at the level of the ileum due to an intestinal adhesion. these results were confirmed with magnetic resonance imaging (mri). results: the patient was operated on by a laparoscopic approach with three trocars. we found a congenital adhesion which caused the obstructive syndrome. postoperative recovery was uneventful and the patient was discharged 48 h after surgery. conclusion: the non-obstetrical acute abdomen in the pregnant patient is a reality that occurs in one of every 500 pregnancies. 
its diagnosis is more difficult than in nonpregnant patients, requiring a high index of suspicion. the laparoscopic approach to the acute abdomen during pregnancy is a valid and safe option, even in the early hours after diagnosis of bowel obstruction, when it is performed by a well-trained laparoscopic surgeon. aim: intestinal malrotation (im) without midgut volvulus in adults is a rare clinical entity, which is the result of an incomplete rotation of the small bowel during embryogenesis, due to the non-lysis of the ladd bands. these ligaments spread between the duodenum and caecum and do not allow the gastrointestinal tract to take its normal position in the peritoneal cavity. im appears in 1 in 300-500 newborns and is usually asymptomatic. diagnosis is usually made in the first month, presenting with findings of an acute abdomen, small bowel ileus and volvulus. im in adults is a rare entity. most of the time it is asymptomatic, but it can cause chronic abdominal discomfort and constipation. we present the laparoscopic management of an adult patient with intestinal malrotation. methods: our patient, a 22-year-old female, presented to the emergency room with a 3-month history of abdominal pain and nausea. all blood tests were normal. an abdominal mri showed intestinal malrotation without volvulus. due to persisting symptoms, she underwent a diagnostic laparoscopy with complete lysis of the ladd bands. the only unusual finding was a slight oedema of the duodenum. results: her symptoms settled postoperatively and she was discharged on the 2nd postoperative day. since her discharge, she has not developed any similar abdominal pains or complaints. conclusions: symptomatic intestinal malrotation in adults is an unusual clinical entity, but it is definitely one of the differential diagnoses we need to consider in cases of chronic abdominal symptoms. the management consists of the division of the ladd bands, and this procedure can be performed safely with laparoscopy. 
many small intestinal obstructions are due to adhesions after laparotomy, but small bowel obstructions without a history of open surgery are relatively few. in diagnostic imaging such as preoperative ct examination, the cause can be diagnosed to some extent, but details are sometimes unknown unless operative observation is actually made. in many institutions, laparoscopic surgery is being actively introduced for the relief of bowel obstruction, and its effectiveness is beginning to be recognized. we examined the usefulness of laparoscopic surgery for patients with small bowel obstruction without a history of laparotomy from our hospital's experience. aim: from december 2000 to october 2018, we reviewed cases of laparoscopic surgery for small bowel obstruction without previous laparotomy at our hospital, and examined the clinical findings, surgical results, and postoperative course. results: there were ten cases: eight men and two women. the median age was 57 years (15-90 yrs). the causes of intestinal obstruction were adhesions in 5 cases, internal hernia in 3 cases, persimmon bezoar in 1 case, and small intestine tumor in 1 case. four cases of adhesions were emergency surgery. there were 7 cases of emergency surgery and 3 of elective surgery. five laparoscopic operations were completed and five cases were converted to laparotomy. the median operation time was 103 min (75-302 min), and the median blood loss was 10 g (5-230 g). there was no fatal case after operation, and only one complication, an ileus. the median length of hospital stay was 14 days (10-29 days). conclusion: laparoscopic surgery for intestinal obstruction with no history of laparotomy was thought to be a safe and effective procedure. although conversion to laparotomy would be more frequent in emergency cases, there was no case requiring a large laparotomy incision. 
conclusions: laparoscopic surgery for sbo reduces postoperative complications and contributes to shortening the postoperative hospital stay and decreasing the rate of recurrences; although this is a retrospective study, laparoscopy appears to be a safe and useful approach. furthermore, a first episode of sbo without a previous operation seems to be an appropriate indication for laparoscopic surgery. background: postoperative adhesions after abdominal surgery may cause intestinal obstruction, chronic pain, or female infertility, which constitute major problems after surgery. adhesion formation is reported to be reduced by laparoscopic surgery and the use of anti-adhesion barriers. seprafilm, composed of a sodium hyaluronate carboxymethylcellulose bioresorbable membrane, has been widely used to date, especially in open surgery. the characteristics of seprafilm, which sticks easily when wet and conversely is brittle when dry, make it difficult to deliver into the abdominal cavity via the small incisions of laparoscopic surgery. therefore, seprafilm is not much used in laparoscopic surgery. although various methods of insertion of seprafilm have been reported, some need special devices and some require skill. methods: we adopted the pre-moistening technique for the placement of seprafilm in 210 consecutive cases of laparoscopic gastrointestinal surgery. a sheet of seprafilm was cut into 4 equal pieces. to soften the sheets, one of the pieces was placed on a folded wet gauze until it became naturally curled; then it was reversed, and the same procedure was repeated. a softened sheet is easy to deliver into the abdominal cavity via a small incision by pushing with a finger. the moistened sheet expands naturally in the abdominal cavity. one or two pieces were needed to cover the incision. this process took only a few minutes. results: in all cases, the sheets were successfully introduced into the abdomen and spread widely enough to cover the incision. 
there have been no adverse effects, no postoperative complications, and no gastrointestinal obstruction due to adhesions in the observation period (median two years). conclusions: short-term outcomes were good after applying this technique. however, to record the incidence of intestinal obstruction and chronic pain, over 10 years of observation is indispensable. long-term follow-up studies are required to clarify the usefulness of the anti-adhesive barrier in gastrointestinal surgery. b. east, 3rd department of surgery, motol faculty hospital, prague, czech republic. aim: since 1994, when the ipom acronym was used for the first time, our view of intraperitoneal mesh positioning has changed several times. despite growing evidence on its possible long-term consequences, it is still the preferred method at some centres for a large number of patients. the aim of this study is to point out the pitfalls of this method but also to show that ipom is a good technique, but only for a highly selected cohort of patients. methods: this is a review of the literature focusing on the indications and complications of ipom, pointing out controversies among the published articles over the last two decades. some mesh material characteristics are discussed, as they are basic to understanding this complex and highly sensitive issue. results: a wide range of indications for ipom, from small umbilical to large incisional hernias, is advocated by many. however, some opinion leaders who promoted this technique as universal and ideal for everyone just a few years ago are lately advising to avoid it if possible. the necessary overlap has also been questioned recently. despite improving anti-adhesion barriers and methods of fixation, in may 2017 surgical mesh was classified as risk class iii by the eu parliament and council regulation on medical devices, in the hope of preventing physiomesh-like incidents in the future. the need for post-market registries and long-term follow-up is obvious. 
conclusion: we as surgeons implant a mesh in our patients and therefore we should be aware of its possible long-term effects. no mesh on the market has long-term safety evidence, especially in the intraperitoneal space. ipom is a good technique but poses a significant risk of lifelong complications, and therefore should be reserved only for those unfit for other methods of repair: patients with too high a mesh infection risk, and obese or older patients. introduction: acute appendicitis in elderly patients is relatively uncommon and could represent an underlying neoplasm. hence patients over the age of 40 are often referred for a follow-up colonoscopy after management of acute appendicitis. the current routine use of computed tomography (ct) scans in the evaluation of suspected acute appendicitis in elderly patients prior to surgery, coupled with intra-operative findings at laparoscopy, questions the role of follow-up colonoscopy for these patients. aims: to determine the role and optimal timing of colonoscopy in early detection of colorectal neoplasia after treatment of acute appendicitis in elderly patients. methods: all patients aged 40 years and above with confirmed appendicitis admitted to our hospital during the period 1/1/15 to 30/9/17 were included. follow-up colonoscopy, diagnosis of colorectal neoplasia and its location in this patient cohort were evaluated. results: the number of people aged 40 and above in olol who had appendectomies from 1/1/15 to 30/9/17 was 184. of them, 44/184 (24%) had a full colonoscopy within 2 years of the appendectomy; 25 of the 44 patients who underwent colonoscopy were male and 19 were female. 30/44 (68%) of these colonoscopies were completely normal. 1 colonoscopy identified a colorectal carcinoma in the ascending colon (2.3%). 
other pathologies identified included: benign polyp 3 (7%), polyp with low-grade dysplasia 4 (9%) and others 6 (13.6%) (lymphocytic colitis, ulcerative colitis, medication-related ulceration, diverticulosis, melanosis coli, haemorrhoids). conclusions: in elderly patients above 40 years of age there may be an increased risk of colorectal cancer after acute appendicitis. only 24% of this patient cohort underwent colonoscopy after appendectomy. the current recommendations suggest the need for follow-up colonoscopy in elderly patients after acute appendicitis. further studies are needed to decide whether routine colonoscopy is indicated after acute appendicitis in patients over 40 years. introduction: it is generally accepted that the main aetiology of appendicitis is obstruction, due to appendicoliths in adults and lymphoid hyperplasia in children. in contrast, incidental appendicoliths have been reported to occur in up to 32% of the asymptomatic population. controversy still exists regarding the association of appendicolith and appendicitis. is the appendicolith a causative factor or merely an incidental finding? aims: to determine the association between the presence of appendicolith and acute appendicitis (perforated or non-perforated) vs a healthy appendix. methods: we collected the data retrospectively from the electronic records of all appendicectomies performed between january 2012 and december 2016 in our institution. data collected included: age, sex, appendix histology and the presence of appendicolith. interval or incidental appendicectomies were excluded from this study. we analysed the data using spss software version 2010. results: during the study period 2348 appendectomies were performed (males: 1137, females: 1211, age range: 2-89 years). 1794 cases were histologically confirmed cases of acute appendicitis and of these, 61 were perforated. a normal appendix was identified in 288 cases. 
the remaining 266 cases were due to chronic appendicitis, sub-acute appendicitis, lymphoid hyperplasia, parasitic infestation, and neoplasm. an appendicolith was found in 43 cases, of which 33 were in a normal appendix and 10 in an inflamed appendix. of the 33 cases of appendicolith with a normal appendix, 23 were aged between 1 and 20 years, 9 between 21 and 40 years, and 1 between 40 and 60 years. of the 10 cases of appendicolith with acute appendicitis, 7 were aged between 1 and 20 years, 2 between 21 and 40 years, and 1 over 41. conclusions: the appendicolith may merely be an incidental finding and not the primary cause of appendicitis. no significant correlation was found between gangrenous/perforated appendicitis and the presence of appendicolith. contrary to popular belief, appendicoliths are more common in paediatric appendicitis than in adult cases. further research is recommended. over the last 20 years, patient satisfaction surveys have gained increased popularity. nowadays, respect for patients' needs is central to our health care system. hospitals use patient satisfaction surveys to assess quality of care. many hospitals routinely survey patient satisfaction, but relatively little data has been published. our acute surgical assessment unit operates from 8 am to 6 pm, monday to friday, and in its first year saw 2079 surgical patients, of whom 1308 were discharged and 771 were admitted to the hospital for further management. aims: to assess the levels of satisfaction of patients attending the asau at our lady of lourdes hospital. methods: a random sample of patients seen in the asau was surveyed to determine their level of satisfaction and the experience they had whilst attending the asau. 
a novel self-reported patient satisfaction questionnaire was developed and used to assess patients' opinions regarding the treatment they received, the doctor's explanation of their condition, the waiting time and the service in the asau. the questionnaire also encouraged patients to suggest improvements to the service. aim: intestinal obstruction is a very common cause of presentation to an emergency department. the most common cause in patients with prior abdominal surgery is adhesions, but the list of differential diagnoses is long. internal hernia is a very rare cause of obstruction, with a reported incidence of between 0.2 and 0.9%. herniation related to broad ligament defects is even more uncommon. methods: we report the case of a 63-year-old woman with antecedents of liver transplant, tubal ligation and appendectomy. the patient was admitted reporting abdominal pain in the epigastrium of 24 h duration, accompanied by nausea and vomiting. on physical examination, the abdomen was depressible, tender in the right lower quadrant, without evidence of peritoneal irritation. laboratory studies were normal except for an elevated leukocyte count with a left shift. computed tomography (ct) revealed dilated small bowel loops with a transition point in the right lower quadrant. the radiological diagnosis was intestinal obstruction, with fibrous adhesion as the most probable aetiology. management was conservative at the beginning, with intravenous hydration, a nasogastric tube and administration of gastrografin (diatrizoate), without a good response. results: at 24 h, an exploratory laparoscopy was performed, finding dilatation of small bowel loops and a 3 cm defect in the right broad ligament in which a segment of ileum was herniated. the ileal segment was freed without evidence of ischemia. the hernial defect was closed by laparoscopy with simple silk stitches. the postoperative course was excellent, with the patient tolerating oral feeding the next morning. the patient was discharged 72 h after surgery. 
conclusions: internal hernias of the broad ligament are an extremely rare cause of intestinal obstruction, but must be added to the differential diagnosis in female patients due to the risk of intestinal strangulation and perforation. even if clinical and radiological diagnosis is difficult, ct is the best tool to delineate the cause and location of the obstruction. laparoscopy allows reduction of the hernia and closure of the defect with minimal invasiveness. because of that, the laparoscopic approach to bowel obstruction should be considered as the first choice if there is suspicion of an internal hernia, without signs of necrosis or perforation. the laparoscopic approach is a safe and effective tool in the management of postoperative complications. it is well tolerated in critically ill patients and avoids the respiratory and wound-related morbidity associated with laparotomy. it also reduces diagnostic delay and a considerable number of unnecessary laparotomies, with a high resolution rate and minimal morbidity. it thus represents a valid and necessary alternative in the surgeon's armamentarium. in the management algorithm of our institution we always choose the laparoscopic technique as the first tool in case a reoperation is necessary. the rate of laparoscopy for small bowel obstruction (11.20% vs 9.09%) and colorectal cancer obstruction (9.09% vs 8.23%) was found to be higher for the acs unit group, and also progressively higher during the last years. conclusion: according to our study, the laparoscopic approach in abdominal emergencies shows an upward trend, and surgeons from acs units seem to have higher rates of laparoscopy than general surgeons in emergency procedures. background: incarcerated and strangulated hernias present a major problem in emergency medicine. there is scarce data about the role of laparoscopy in the management of these patients. 
laparoscopic repair offers the benefit of being able to survey the incarcerated organ and evaluate its viability, apart from the obvious advantages of laparoscopic surgery. the use of mesh repair in these emergent operations is also a major concern, due to the unsterile conditions in which they are performed. objective: to evaluate the safety and short-term efficacy of laparoscopic emergent repair of incarcerated hernias. methods: retrospective review of prospectively collected data on all patients who underwent emergent laparoscopy due to an incarcerated hernia between november 2017 and october 2018. results: during the study period, 13 patients underwent emergent laparoscopy due to incarcerated hernias (5 females, 8 males); 10 had incarcerated inguinal hernias, and 3 had incarcerated umbilical hernias. mean age was 63.3. all inguinal hernias were repaired with the tapp approach, using an absorbable mesh. all umbilical hernias were repaired using the ipom approach. 7 patients had bowel obstruction, 5 had incarcerated omentum, and one patient had an incarcerated urinary bladder. 3 patients underwent resection of an ischemic organ (1 bowel, 1 urinary bladder, 1 omentum). mean hospital los was 2.6 days. during the follow-up period there were no mortalities and no recurrences. one patient had a wound infection that resolved with antibiotics. conclusion: laparoscopic emergent repair of incarcerated hernias is a safe and feasible approach. further studies with longer follow-up need to be conducted in order to evaluate the added benefit of the laparoscopic approach. gibraltar is a small british overseas territory with a residential population of approximately 30,000 inhabitants, which increases to up to 50,000 daily due to incoming tourists and cross-frontier workers. as a geographically isolated centre we have to provide a varied service including emergency surgery and elective operating such as colectomies, gastrectomies, etc. 
one of the challenges faced is the limited stock of red blood cell (rbc) units within gibraltar and reliance on platelets (plt) from across the border with spain. given the imminent brexit, we need to prepare for the challenges we will face in these times of political and distribution uncertainty. a prospective audit of all blood use within gibraltar was carried out over 4 months. the number and type of units requested, the number of units given, and the speciality, location and indication for requests were recorded. introduction: the use of laparoscopic surgery in abdominal emergencies, such as in trauma, has had a slow acceptance. the advantages of this approach include less postoperative pain, faster recovery, quicker return to everyday activities, and fewer complications. we have collected the cases and indications of laparoscopy in abdominal trauma in the main hospitals of the andalusian capitals and compared them with the national registry. material and methods: a total of 25 patients who underwent laparoscopic surgery in the main hospitals of seville, cordoba, malaga, cadiz, huelva, jaen, granada and almeria were analyzed. they were compared with the 567 traumas archived nationally by the spanish association of surgeons, taking into account age, sex, american society of anesthesiologists score, hemodynamic stability and mechanism of injury. the intra- and postoperative variables were compared between groups. results: at the national level, the main cause of abdominal trauma was traffic accidents, and these patients accounted for the greatest number of laparoscopies (35.3%), followed by stab wounds (9.03%) and pedestrian injuries (11.2%). in our series, the average age of the patients was 41 years and 68% were male. fast ultrasound alone was performed in 45% of the patients, being positive in 15.9% of the cases. as they were stable patients, a ct scan was possible in 95% of the cases. 
in our data, 60% of the laparoscopies were performed for therapeutic as well as diagnostic purposes, thus avoiding a subsequent laparotomy. conclusion: laparoscopic surgery for abdominal trauma, either blunt or penetrating, is safe and technically feasible in hemodynamically stable patients. we found that laparoscopic surgery was associated with shorter operative time, lower estimated blood loss and faster return to a normal diet. based on our findings we establish the indications for laparoscopy in these patients. aims: submucosal aneurysm of the small intestine is extremely rare, but its rupture can be life-threatening. due to unstable hemodynamics and an unknown site of bleeding, emergency laparotomy has been widely performed for the rupture. we present case reports and show our strategy for minimally invasive treatment of ruptured aneurysms. methods: we experienced two cases of ruptured submucosal aneurysm resected by laparoscopic surgery. case 1 is a 16-year-old male who was taken to our er with massive hematochezia. ct showed arterial bleeding in the small intestine and angiography revealed bleeding from the ileal artery. selective embolization using gelatin sponge and a micro coil was performed and hemostasis was obtained. video capsule endoscopy found a hemispheric elevated lesion with a protrusion at the top in the ileum. using balloon-assisted enteroscopy, the site of the aneurysm was marked by injecting india ink, which allows surgeons to accurately and easily identify the part of the small intestine with the aneurysm. subsequently, a single-incision laparoscopic-assisted partial ileectomy was performed for the purpose of definitive diagnosis and prevention of re-bleeding. the ileum with the aneurysm was easily identified on laparoscopic exploration owing to the marking, and it was taken out through the incision to perform resection. case 2 is a 21-year-old female who was transferred to our emergency department with sudden onset of massive melena.
ct and angiography were performed, and bleeding from the 3rd jejunal artery was confirmed. subsequently, therapeutic embolization was performed in the same way as in case 1. enteroscopy revealed a submucosal elevation similar to case 1 in the jejunum. we carried out endoscopic tattooing, followed by single-incision laparoscopic-assisted partial jejunectomy. results: the operative times in case 1 and case 2 were 130 min and 68 min, respectively, and blood loss was 5 ml in both. the postoperative course was uneventful in both cases. case 1 was discharged on postoperative day 7, and case 2 on postoperative day 6. conclusions: our experience indicates that ruptured submucosal aneurysm of the small intestine can be effectively managed by laparoscopic surgery in combination with therapeutic embolization and enteroscopic evaluation, which is safe and minimally invasive. background: laparoscopic bilateral inguinal hernia repair may be completed with one large self-fixating mesh crossing the midline in front of the bladder. no studies have investigated in detail whether preperitoneal mesh placement induces temporary or more lasting urinary symptoms. methods: urinary and hernia-related symptoms were evaluated preoperatively and postoperatively at 1, 3 and 12 months in 100 patients using the iciq-mluts questionnaire and eurahs-qol score. results: voiding symptoms and bother scores were unchanged at 1 and 3 months, but there was significant improvement at 12 months compared with preoperative findings (symptoms p < 0.001; bother score p < 0.01). incontinence symptoms improved at 1 month (p < 0.05) but not at 3 or 12 months, with a bother score significantly improved at 1 month (p < 0.01) and 12 months (p < 0.01). diurnal and nocturnal frequency did not change significantly postoperatively, but the 12-month nocturnal bother score was decreased (p < 0.05).
eurahs-qol scores showed significant improvement in all 3 domains for all measurements compared to previous measurements. postoperative symptoms were improved at 12 months compared with preoperative pain scores (-6.1), restriction of activity (-10.1) and cosmetic scores (-4.7); these findings were statistically significant (p < 0.001). at 12 months, there were no patients with severe discomfort (score = 5) in any of the 3 domains. no recurrences were diagnosed, with 95% clinical follow-up at 12 months. conclusion: placing a large preperitoneal self-fixating mesh for bilateral groin hernia repair did not cause new urinary symptoms and demonstrated significant improvement in voiding symptoms at 12 months. incontinence and nocturnal bother scores were significantly improved. introduction: tep/tapp hernia repair is an increasingly widely used surgical method for minimally invasive treatment of inguinal hernia. tep's advantages over tapp are no incision of the parietal peritoneal sheet, therefore no need for its recovery by sewing or sticking at the end of the procedure, and no need for attachment of the prosthetic mesh to the structures of the anterior abdominal wall, which results in a reduction in the financial cost of the operation. various types of meshes with different characteristics are used, depending on the surgeon's preferences. the aim of this study is to highlight mesh-related postoperative complications, which can be serious and life-threatening. material and methods: a retrospective cohort study of 68 cases of unilateral or bilateral tep and tapp hernia repair performed at the university hospital over the period 2016-2018, with a study of early and late postoperative complications potentially causally related to the implanted prosthetic mesh and the methods of their treatment. results: over a 3-year period, 22 tapp (12 bilateral) and 46 tep (33 bilateral) repairs were performed.
three complications (clavien-dindo iva, ivb and v) were found, of which 2 were early postoperative (up to 20 pod). one occurred after tapp: on pod 5, small bowel adhesive ileus due to suture dehiscence of the peritoneal sheet and adhesion of a bowel loop to the surface of the polypropylene mesh. one occurred after tep: on pod 2, a large preperitoneal hematoma with haemorrhagic shock in an 86-year-old female on anticoagulant therapy, treated with open revision of the preperitoneal space and definitive haemostasis; on pod 12 a bladder lesion was established, caused by erosion from the edge of the self-locking polypropylene mesh. suture and drainage were performed, but the patient died of decompensation of concomitant diseases. a late complication occurred 11 months after bilateral tep: erosion of the soft polypropylene mesh into the sigmoid colon (probably an undetected lesion of the peritoneum) with faecal peritonitis, managed by hartmann procedure with laparostoma followed by restitution, but with persistent chronic sepsis and an established abscess in the space of retzius. 24 months later, revision with abscess incision and extraction of the infected meshes was performed. discussion: the use of biologic meshes is quite expensive; however, synthetic non-resorbable meshes implanted in a preperitoneal layout are a prerequisite for specific severe postoperative complications. inguinal hernia repair is one of the most performed procedures all over the world; with more than 20 million procedures performed each year, it represents one of the top three most performed procedures. the lichtenstein procedure is one of the first procedures that a young trainee in general surgery learns, not only for its reproducibility and for the great number of procedures that can be done in each department, but also because during inguinal hernia repair the trainee learns many skills which are the basis of major surgical interventions.
the surgeon's performance for any procedure can be evaluated by way of established learning curves that predict the minimum number of procedures required to reach the same intra- and post-operative outcomes as an experienced surgeon performing the same technique. the aim of our multicentre study was to analyse how many cases are required to stabilize operating time (ot) and intra- and post-operative complication rates over the course of the learning curve period for a lichtenstein procedure. from january 2014 to december 2018 all lichtenstein procedures from four different institutions were recorded in a prospectively maintained computer database. the results of the first 100 consecutive procedures performed by three different trainees (group a; group b; group c) were compared with the same number of procedures by two senior surgeons of the same institutions (group e; group f). cusum analysis was performed to evaluate achievement of the learning curve. no differences in terms of biometrics and hernia type were recorded between the five groups. cusum analysis showed that the trainees achieved the learning curve after 37-41 procedures. no intra- or post-operative complications were recorded during the training period. in conclusion, after our analysis we found that at least 40 procedures are needed for trainees to achieve the learning curve for the lichtenstein procedure. background: since its first description in the 1990s, the total extraperitoneal (tep) technique has established itself as a popular endoscopic method for the repair of inguinal hernias. tep repair is generally viewed as a technically demanding procedure requiring adequate experience to minimize and handle complications. in this case report, we describe an uncommon complication of urethral injury, which was successfully repaired laparoscopically.
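the cusum approach used in the learning-curve study above can be sketched as follows; this is a minimal illustration with invented per-case operating times and an assumed target (the senior surgeons' mean ot), not the study's actual data or code.

```python
# minimal cusum learning-curve sketch; `op_times` and `senior_mean`
# are invented for illustration, not taken from the study.

def cusum(values, target):
    """running cumulative sum of deviations from the target.
    the curve climbs while the trainee is slower than the target
    and flattens once performance stabilises around it."""
    total, curve = 0.0, []
    for v in values:
        total += v - target
        curve.append(total)
    return curve

# invented operating times (minutes) for one trainee's consecutive cases
op_times = [70, 68, 66, 63, 60, 55, 52, 50, 49, 51, 50, 49]
senior_mean = 50.0  # assumed target: senior surgeons' mean ot

curve = cusum(op_times, senior_mean)
# the first case at which the curve peaks marks where the trainee
# stops losing time relative to the target
peak_case = curve.index(max(curve)) + 1
print(f"cusum peaks at case {peak_case}")
```

in the study, the plateau of each trainee's curve (reported after 37-41 procedures) would be read off a plot of `curve` against case number.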
case report: mr r is a 25-year-old gentleman with no significant past medical history who presented to the department of general surgery, tan tock seng hospital, with a two-month history of a reducible right inguinal hernia, associated with some tenderness. ultrasonography confirmed the diagnosis of a fat-containing indirect right inguinal hernia. in view of persistent pain, mr r was counseled for a laparoscopic repair of his right inguinal hernia. as mr r was able to empty his bladder just prior to surgery, no urinary indwelling catheter (idc) was inserted. an infra-umbilical incision was made to access the posterior rectus sheath and a balloon was used to bluntly dissect the pre-peritoneal plane. on inspection of the operating field, persistent pooling of blood was noted in the retropubic space. careful inspection revealed a defect in a tubular structure just inferior to the bladder neck. an idc was inserted, which confirmed a 1.5 cm defect in the pre-prostatic urethra. the decision was made for primary repair using absorbable sutures in two layers. the bladder was subsequently filled via the idc, which did not reveal any leak. we then completed the right inguinal hernia repair using a mesh. mr r made an uneventful recovery and was discharged on post-operative day 1 with instructions to keep the idc in situ for two weeks. the idc was removed after two weeks and a micturating cystourethrogram was performed, which showed no filling defects along the urethra and no contrast leaks. discussion: though uncommon, urethral injuries can be a complication of laparoscopic tep repair. the key to managing these complications is early identification of such injuries intra-operatively. with early recognition and careful assessment, such complications can be managed laparoscopically with minimal post-operative morbidity.
aim: the purpose of this study is to report the surgical technique and outcome of the hybrid tapp procedure (a combination of tapp and ipom) for inguinal hernia patients with preperitoneal space adhesion. methods: the hybrid tapp procedure is applied if peritoneal dissection or closure of the peritoneum is difficult due to severe adhesion. the peritoneum should be dissected as much as possible. at the site where adequate dissection was achieved, the collagen mesh is placed outside the peritoneum. in the part where dissection was difficult, it is placed inside the peritoneal cavity. in order to prevent mesh migration, the mesh should be directly fixed to cooper's ligament with a tacker. for this purpose, the peritoneum around cooper's ligament must be well dissected, even if it is strongly adhered, so that the ligament can be exposed. the crucial points in the hybrid tapp procedure are fixation of the mesh and prevention of bowel herniation into the preperitoneal space. at the site where peritoneal dissection is possible, the mesh is directly fixed on the fascia using a tacker. if this is difficult, the mesh is placed in the peritoneal cavity and fixed over the peritoneum. if there is a risk of migration along with the peritoneum, transcutaneous full-thickness fixation can be performed using non-absorbable sutures. the preperitoneal space should be closed tightly as soon as possible in order to prevent bowel herniation into the preperitoneal space. at closure of the preperitoneal space, the peritoneum is fixed on the collagen mesh using non-absorbable sutures. objective: to show a tapp approach using a self-fixating mesh (15x10 cm, progrip tm laparoscopic self-fixating mesh, medtronic) with bipolar peritoneal defect sealing, avoiding the use of tackers and performing an easy and sutureless peritoneal closure. material and methods: a 62-year-old male, asa ii, with a medical history of beta-lactam allergy, high blood pressure, dyslipidemia and bilateral knee surgery.
diagnosed with bilateral inguinal hernia at consultation due to inguinal discomfort. surgical site infection prophylaxis with iv vancomycin. balanced general anesthesia. supine decubitus position with shoulder support to allow a forced trendelenburg. 30-degree optical device with a 3-trocar disposition: one 11 mm umbilical trocar and two 5 mm trocars in both flanks, at the same distance and height as the umbilical trocar. peritoneal opening and flap creation with monopolar energy, blunt maneuvers and pneumoperitoneum dissection. anatomical landmark identification (cooper's ligament, epigastric and iliac vessels, hernia defect and spermatic cord elements). reduction of hernia sac content (a pseudosac in this case, direct hernia) and complete peritoneal dissection to achieve correct mesh placement. the mesh is folded in 3 parts (one inferior part, two superior parts) along the vertical axis outside the abdomen to facilitate the subsequent intraabdominal maneuvers. introduction into the abdominal cavity with grasping forceps and assessment of correct mesh unfolding: medially (pubic bone), caudally (cooper's ligament), cranially (more than 5 cm from the hernia defect/deep inguinal ring) and laterally (anterior superior iliac spine). finally, we use bipolar forceps to close the peritoneal defect. in order to facilitate this step, it is necessary to decrease the pneumoperitoneum pressure and to use the grasping forceps to bring together both peritoneal flap edges prior to bipolar energy sealing. results: 60 min surgical procedure. hospital discharge at 24 h, no complications. routine outpatient follow-up (week, month, 3 months and 6 months later) with an epididymitis episode 2 months after surgery (treated with oral ciprofloxacin). conclusions: this procedure is an easy-to-implement technique once control of the intraabdominal mesh unfolding procedure is reached. the use of a self-fixating mesh avoids the use of tackers and their potential disadvantages (e.g.
increased postoperative pain). bipolar peritoneal sealing offers a quick, easy, cheap and safe peritoneal closure, avoiding contact of the mesh with the viscera in the same manner. results: we performed 53 procedures in 42 patients. the average age was 54 years. twenty-six percent of hernias were bilateral, 11.3% were inguinoscrotal and 56% were on the right side. the median asa score was 1. the conversion rate was 3.7%. the average duration of the procedure was 88.21 ± 34 min. overall morbidity was 19%. there were 5 seromas (9.4%). at 2-year follow-up, one recurrence (1.8%) was found, and chronic postoperative pain in one case. we had no mortality. in the univariate analysis, male sex, inguinoscrotal hernias and hernias classified as nyhus 3a were significantly associated with overall postoperative morbidity. chronic obstructive pulmonary disease was the only variable significantly associated with the occurrence of medical complications. conclusion: given these results, the tapp technique is a good alternative in the treatment of groin hernias. however, enhancing this approach is essential to reduce the operating time and improve the postoperative outcomes. introduction: studies have emphasized the impact of a strong safety culture on patient outcomes. consequently, many interventions focus on improving the safety culture, of which teamwork and safety climate are important ingredients. it is known that differences in culture and safety attitudes may also impact teamwork. implementations of safety interventions, such as a 'black box', are dependent upon these differences. the aim of this study was to assess the safety culture at the operating theatre complex, along with the theatre staff's attitude towards a specific quality improvement intervention, a black box in the operating room as a tool for structured team debriefing.
methods: the validated dutch version of the hospital survey on patient safety culture was administered to all healthcare professionals working in the operating room complex at one academic medical centre. this survey was supplemented with 10 questions regarding the use of a 'black box', a medical data recorder in the operating room, to measure the staff's attitude towards this quality improvement tool and its potential contribution to patient safety. aims: the aim of the study was to compare two methods of treatment of dunbar syndrome: laparoscopic release of the median arcuate ligament alone, and a hybrid method consisting of surgery and percutaneous stent implantation in the celiac trunk. methods: we performed 6 laparoscopic releases of the ct in the department of general, minimally invasive and elderly surgery in olsztyn in 2016-2018. all of the patients suffered from severe abdominal pain before the surgery. three patients underwent doppler percutaneous angioplasty of the ct with stent implantation one month after the laparoscopy. results: all patients reported relief of symptoms in the first days after the operation. in two cases from both groups, there was complete remission of the symptoms. in one case respectively, there was an improvement. there were no postoperative complications. the results of both methods do not show differences; therefore the surgery alone seems to be a safe and feasible procedure. it increases the comfort of the patient and brings the opportunity for normal functioning. a method of wedge resection of the lung in patients with limited forms of chemo-resistant pulmonary tuberculosis was developed. in order to evaluate its efficacy, 80 patients underwent surgery (the main group). for comparison, data on similar operations in 100 patients performed according to the traditional method (using the yo-60 stapling device) were selected.
we compared the duration of the resection stage itself, the frequency of need for additional hemostasis of the parenchymal sutures, the degree of deformation of the pulmonary tissue in the suture area, the frequency of postoperative complications and reoperations, and the duration of postoperative inpatient treatment. the developed method, in comparison with the traditional one, has the following advantages: leak-proofness and hemostasis are provided simultaneously with minimal electrothermal damage to tissues and there is no need for additional hemostasis; there are no negative effects of manual stitching of the lung parenchyma with retained foreign material; a significant reduction in the duration of wedge resection of the lung from 27.5 to 9.2 min; the number of postoperative pulmonary-pleural complications decreased by 96.4% and the reoperations caused by them by 99.1%; and the duration of the postoperative inpatient treatment period was shortened from 20.7 to 14.5 days. introduction/aims: laparoscopy is a diagnostic and therapeutic resource that is largely used in elective gastrointestinal surgery due to its well-known advantages over the classic open approach. nevertheless, there is still some discussion about its application in emergency surgery. our aim is to analyze the use of the laparoscopic approach by the members of the surgical emergency unit of our medical center. methods: a descriptive study based on the data of 12920 patients who required emergency surgery, performed by the members of the surgical emergency unit of a spanish hospital between november 2000 and may 2018, was conducted. these data were analyzed according to the pathology that motivated the surgical procedure and the chosen form of surgical approach (open versus laparoscopic). results: of the 12920 patients in whom emergency surgery was performed, 9712 suffered from a pathology that actually allowed laparoscopic treatment.
laparoscopy was used in 38.8% of these patients. by pathology, the most common were acute appendicitis and cholecystitis, in which the laparoscopic approach was used in 56% and 62% of the cases, respectively. regarding other less frequent pathologies, such as gastroduodenal perforation, bowel obstruction, diverticulitis and pancreatitis, laparoscopy had a less significant role. by year, a general tendency to increased use of the laparoscopic approach was found, most notably in the cases of acute appendicitis and cholecystitis (with rates above 90% in 2018). conclusions: despite our positive results in terms of the implementation of the laparoscopic approach in emergency surgery, there is still room for improvement, especially with regard to the less common pathologies. furthermore, additional studies are needed in order to identify the factors that have had an effect, favourable or detrimental, on the development of emergency laparoscopy in our center. aims: laparoscopic surgery, which produces small scars, has become widespread. when performing surgery through small laparoscopic incisions, a surgeon manipulates tools inserted into the abdomen through ports. for a minimally invasive, accurate procedure, the port, as the pivot point, should be stabilized on the abdominal wall. however, these laparoscopic incisions are loaded during manipulation because it is difficult to fix the port in place. thus, for patient-friendly manipulation, the port needs to be fixed mechanically. we developed a new pivot restraint device (prd) attached to a trocar for guiding the tool. the purpose of this study was to evaluate experimentally both the reduction in operating time and in the load on the port with the prd. methods: the prd uses a gimbal mechanism for the two rotation axes and a linear guide mechanism for the insertion axis of the forceps.
in the experiment, the left-hand forceps, with or without the prd, and the right-hand forceps, without the prd, were set on the training box. the box had a measuring system created with a pressure-sensitive sensor for the continuous force (resolution 0.1 n, 30 fps) applied to the abdominal wall fulcrum. the experimental task was performed in the following three steps. (1) the surgeon lifted a 225 g weight for 5 s at the initial position using the right-hand forceps. (2) the weight was transferred from the right-hand forceps to the left-hand forceps, and held for 5 s. (3) the weight was moved to the predetermined position, held for 5 s, and returned to the initial position. the surgeons were five endoscopic specialists and five non-specialists. the operating time and the proportion of time the force exceeded 1 n for the left-hand forceps were measured. the two grouped datasets, with and without the prd, were compared using a two-sided t-test. results: the prd was associated with a reduction in both the operating time (33.2 s vs. 38.2 s; p < 0.01) and the load on the port (13.3% vs. 58.0%; p < 0.01) on statistical analysis. conclusion: the prd could be used to reduce the operating time and the load on the port in minimally invasive, accurate procedures. background: pathophysiological changes during laparoscopic surgery and positive-pressure pneumoperitoneum (pp) may include (besides cardiovascular changes) elevated intra-thoracic as well as intracranial pressures. however, the possibility of physiological and functional cerebral impairment under pp is still debated. aim: to study the effects of pp on brain activity during different modes of anesthesia and ventilation during laparoscopic cholecystectomy (lc). patients and methods: thirty patients undergoing elective lc were divided into those who were ventilated by intermittent positive pressure ventilation (ippv, 16 pt.) and by high-frequency jet ventilation (hfjv, 16 pt.). in those under hfjv we used total intravenous anesthesia (tiva).
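the two-sided t-test used to compare the prd groups above can be sketched as below. the per-trial times are invented for illustration (only the group means echo the reported 33.2 s vs. 38.2 s), and welch's unequal-variance form is assumed; this is not the study's raw data or analysis code.

```python
# sketch of a two-sided (welch) t-test on operating times;
# sample data are invented for illustration only.
import math
from statistics import mean, variance

def welch_t(a, b):
    """welch t statistic and degrees of freedom for two independent
    samples with possibly unequal variances."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)
    se2 = va / na + vb / nb
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # welch-satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

with_prd = [32.0, 34.1, 33.5, 31.9, 34.5]     # invented, mean 33.2 s
without_prd = [37.8, 39.0, 38.5, 37.2, 38.9]  # invented, mean 38.28 s

t, df = welch_t(with_prd, without_prd)
print(f"t = {t:.2f}, df = {df:.1f}")
```

a large |t| at these degrees of freedom corresponds to a small two-sided p-value; the p-value itself would come from the t-distribution cdf (e.g. via `scipy.stats`).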
in those under ippv we used either inhalational anesthesia or tiva. intra-ocular pressures were measured in both eyes, trans-cranial doppler was used to measure changes in flow of the middle cerebral artery, and cerebral oxygenation (o2 saturation) was measured as well. each parameter was recorded during anesthesia before surgery, several times during surgery under pp, and after co2 evacuation. a novel computerized signal analysis with continuous recording through a single electrode was done to explore cerebral cognitive activity during surgery. results: all surgeries were uneventful and without complications; pp was set to 14 mmhg, and each patient was positioned in a 15-degree anti-trendelenburg posture. cerebral perfusion and oxygenation did not change significantly during pp. intra-ocular pressures decreased during anesthesia and increased during pp, but to a lesser extent under tiva. however, pressures during pp did not exceed pre-surgical values. we did not observe changes in cognitive brain activity during pp, although enhanced cerebral activity was seen under hfjv. conclusions: increased intra-abdominal pressure during laparoscopic surgery was not accompanied by decreased cerebral function, possibly due to cerebral circulatory auto-regulation. changes in cerebral cognitive function under hfjv might be explained either by the different cerebral effects of tiva in comparison to inhalational anesthesia, or by dissimilar hemodynamic changes during hfjv. aims: gallstone ileus (gi) is a rare complication of cholelithiasis and accounts for 0.1-5% of small bowel obstructions. intermittent and non-specific presentation often results in late diagnosis. the triad of rigler (pneumobilia, small bowel obstruction and ectopic gallstones) is pathognomonic, so an imaging test is usually mandatory in order to confirm the diagnosis. our aim is to present our experience on this topic and show that a minimally invasive approach is feasible in selected cases.
methods: since january 2016 we have treated 12 cases of gi, 5 of whom (42%) underwent laparoscopic surgery. in all cases a ct was performed to reach the diagnosis. enterolithotomy alone is our preferred procedure for the resolution of this pathology. here we present a descriptive analysis of our data in those cases where laparoscopic treatment was attempted. epidemiological variables, surgical technique, postoperative complications, days until hospital discharge, recurrence, etc. were collected. results: 80% of patients were female (4) and 20% male (1). mean age was 68. the size of the gallstones varied from 20 to 34 mm and ct located all of them in the ileum. two conversions to open surgery were made (40%), in one case because the gallstone could not be found and in the other due to the need for an intestinal resection. in two cases (40%) a laparoscopic-assisted surgery was performed using a pfannenstiel incision for the gallstone extraction and enterorrhaphy. only one case was a totally laparoscopic approach (20%). two cases needed an intestinal resection and anastomosis, one of which was complicated by a leak that needed reintervention. there were two cases of recurrence during the follow-up time. hospital stay varied from 4 to 27 days, with a mean of 10 days. conclusion: the widespread use of ct facilitates early diagnosis, with high sensitivity for detecting rigler's triad. a totally laparoscopic procedure might be ideal for patients, especially those with solitary stones, although a laparoscopic-assisted approach is an easier technique for surgeons with less experience in laparoscopic surgery. although experience in the minimally invasive surgical treatment of gi is still developing, it may be recommended in selected cases and experienced hands. introduction: in most surgical interventions around the world where a laparoscope is used, vision inside the human body is constantly interrupted by fogging of the laparoscope tip.
laparoscope fogging is caused by the difference in temperature between the optic tip and the abdominal cavity. material and method: we replaced the traditional laparoscope with the ehs (endoscope heater system), with a resistance between the internal and external tubes that maintains the temperature of the laparoscope at 30-50 °celsius without modifying the external architecture of the traditional laparoscope. results: the ehs does not generate any waste, unlike other anti-fog systems such as liquids, plastic covers or electric heaters. it reduces intervention time and allows the same instruments and accessories to be kept for the intervention. all of the above means a saving of resources, which has a positive environmental impact. conclusions: the discomfort expressed by surgeons about fogging of the laparoscope tip supports the success of the product, which may replace current laparoscopes prone to fogging. aim: synchronous locally-advanced low rectal cancer and prostate adenocarcinoma represent a rare condition and a challenging situation for colorectal surgeons and urologists. the simultaneous resection of both adenocarcinomas after long-course chemoradiation therapy combines two major surgical procedures associated with a potentially increased postoperative morbidity. on the other hand, simultaneous resections minimize the risk of difficult dissections, which are expected if the two procedures are scheduled sequentially. in the past decade, robotic-assisted minimally-invasive surgical techniques have been increasingly used to treat both rectal and prostatic malignancies. especially in the case of prostatic malignancy, the robotic approach is considered the treatment of choice because it is associated with significantly lower blood loss and transfusion rates, and much better functional outcomes compared to laparoscopy.
methods: we present the case of a 66-year-old male patient (bmi: 30.6) diagnosed with a histologically proven locally-advanced rectal adenocarcinoma (ct3an0) located at 5 cm from the anal verge and a concurrent histologically proven prostatic adenocarcinoma [gleason score of 8 (4 + 4)] located in the postero-basal right lobe. the preoperative total-body computed tomography (ct) scan showed no evidence of metastatic disease. after discussion in a multidisciplinary meeting, the patient received long-course neoadjuvant chemoradiation therapy (ncrt). at the restaging positron emission tomography / magnetic resonance imaging (pet-mri), the rectal lesion was classified as ymrt0n0. preoperatively, the surgical difficulty was assessed as high, based on the calculation of the eumarcs score (equal to 6/10). moreover, due to the high-risk status of the prostate cancer (gleason 8), it was decided not to preserve the neurovascular bundles during the radical prostatectomy. results: the patient was operated on 12 weeks after completion of ncrt using the da vinci si robotic system with a single-docking approach, as previously described, in order to address both cancers. conclusions: this video shows the main surgical steps of the simultaneous robotic resection of the low rectal adenocarcinoma first, then of the prostatic carcinoma, followed by the mechanical colo-anal anastomosis, drain positioning and ileostomy. this video demonstrates the perioperative safety and feasibility of the minimally invasive robotic approach in the case of extended and challenging oncologic resections. general surgery, rambam medical center, haifa, israel. a 75-year-old male patient presented with melena, without abdominal pain, nausea or vomiting. the patient underwent colonoscopy and a tumor was found in the ascending colon (near the hepatic flexure). biopsy of the tumor showed moderately differentiated adenocarcinoma.
his blood laboratory examinations were within normal limits except for an hgb level of 11.0. cea and ca 19-9 were normal. abdominal computed tomography was normal. the patient underwent da vinci robot-assisted right hemicolectomy with extracorporeal anastomosis. total operating time was 150 min. the patient started a regular diet three days after the operation and was discharged home on day four. the final pathology result confirmed the diagnosis of moderately differentiated adenocarcinoma.

introduction: one of the goals of colorectal surgery is to decrease the number of leaks once an anastomosis has been performed. this life-threatening entity after elective surgery has been related to the clinical history of the patient, the location of the tumor and technical reasons, especially tension on the anastomosis or lack of vascularization. tension can be identified during surgery, while vascular supply is evaluated by surgeons based on a subjective analysis of the color of the colon/ileum. fluorescence tries to make this subjective parameter more objective in order to avoid an anastomosis with poor vascularization, decreasing the number of leaks related to this factor. patients and method: the study presents a quasi-experimental analysis from january 2009 to october 2017 of two hundred and eighty-five patients who underwent elective colorectal surgery with either a colo-rectal, ileo-rectal or intracorporeal ileo-colic anastomosis. vascular supply was evaluated using indocyanine green (icg) in one hundred and forty-five patients, while one hundred and forty subjects were operated on in a previous period without this technology and were considered the control group. the number of times the surgical plan changed and the number of leaks were recorded. results: of the 285 cases performed, 80 were right colectomies (rc), 162 left colectomies (lc) and 43 rectal excisions (re). in 20% the transection line was changed (2.8% in rc, 11.1% in lc and 6.3% in re).
the icg group had a significantly lower incidence of anastomotic leak than the control group (2.8% vs. 8.6%, p = 0.04), a lower rate of terminal stoma after reoperation (0.7% vs. 5.7%, p = 0.018), a shorter length of hospital stay (4 vs. 5 days, p = 0.02) and low morbidity and mortality. conclusions: the rate of leaks after colorectal surgery decreases when icg is used to determine the proper transection line before performing the anastomosis. these findings may influence final results, although a system that provides greater objectivity by quantifying icg is still needed.

aims: anastomotic leaks remain one of the most important complications of colorectal surgery. this complication is usually related to the level and type of resection, the patient's clinical history and the surgical technique, where tension and vascular supply are the most important factors. indocyanine green (icg) fluorescence angiography appears helpful for evaluating the vascularization at the resection margins. methods: we collected data on 187 colorectal procedures performed by the same surgeon using icg fluorescence angiography to evaluate vascular supply to the anastomosis. in order to assess in which type of colorectal procedure it has the most value, we analyzed the type of surgical procedure, the percentage change in the resection margin and the number of anastomotic leaks (al). results: all 187 cases were performed by a laparoscopic approach: 77 left colonic resections (lc), 66 right colonic resections (rc), 9 splenic flexure partial resections (sf), 15 low anterior resections with partial mesorectal excision (lar), 19 ultra-low anterior resections with total mesorectal excision (ular) and 1 total colectomy (tc).
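the two-group leak-rate comparisons reported above (e.g. 2.8% vs. 8.6%, p = 0.04, in roughly 145 icg vs. 140 control patients) rest on fisher's exact test for a 2x2 table. a minimal stdlib python sketch of the test follows; the cell counts of 4/145 and 12/140 are our approximation of the reported percentages, not figures taken from the abstract:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """two-sided fisher's exact test for the table [[a, b], [c, d]].

    returns the p-value: the sum of hypergeometric probabilities of all
    tables with the same margins whose probability does not exceed that
    of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    total = comb(n, col1)

    def prob(x):  # probability of a table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = prob(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    # small tolerance guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# leaks: ~4/145 in the icg group vs. ~12/140 in the control group
p = fisher_exact_2x2(4, 141, 12, 128)
```

scipy.stats.fisher_exact computes the same two-sided p-value; the hand-rolled version is shown only to make the hypergeometric arithmetic explicit.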
there was a change of transection line (ctl) in 21 lc (27.2%), 4 rc (6%), 1 sf (11.1%) and 10 (28.5%) rectal anastomoses (lar, ular and tc). as for al, we found: 1 in lc (1.2%), 2 in rc (3%) and 2.8% in rectal procedures. lc, sf and rectal procedures showed more ctl and fewer al, while rc showed less ctl and more al. conclusion: icg fluorescence angiography as an additional tool to reduce the anastomotic leak rate seems to have most value in procedures involving the left colon and the rectum, since that is where we observed the greatest number of ctl; this could be explained by riolan's arcade and the variability of the vascular anatomy. however, this line of research should continue with longer and larger studies so that more significant results can be obtained.

retrorectal tumors are rare and often found incidentally. the majority of retrorectal tumors are benign, but they have potential for malignant transformation and therefore should be resected when found. we present the case of a 44-year-old female patient with a retrorectal tumor. the tumor was found incidentally on a ct scan of the abdomen performed for evaluation of nonspecific right-sided abdominal pain. an mri was also performed, and imaging was reported as a probably congenital retrorectal tumor (tailgut cyst). there was no evidence of involvement or invasion of other structures. the tumor was palpable on rectal examination. a transanal minimally invasive surgery (tamis) approach was proposed. preoperative preparation was done with full mechanical and oral antibiotic bowel preparation. preoperative parenteral antibiotics were administered. under general anesthesia, the patient was placed in the lithotomy position. the contour of the tumor was not visible due to its small size; the tumor was palpated and clips were placed to locate it. the gel point path was placed and the rectum insufflated. a longitudinal incision was made in the posterior left side of the rectal wall.
insufflation of the perirectal extraperitoneal space allowed excellent exposure of the tumor. the tumor was dissected with ligasure and then extracted transanally. the proctotomy was closed in a single layer with an absorbable monofilament continuous suture (pds). there were no complications after the procedure, and the patient was discharged after 2 days. discussion: traditionally, retrorectal tumors have been resected using a posterior parasacrococcygeal approach, an abdominal approach or a combined abdominal and posterior approach. with the advent of minimally invasive surgery, a laparoscopic approach has also been described. however, the tamis approach is feasible, with little pain, low morbidity, faster recovery and excellent cosmetic (no scar) results. it can be accomplished using standard laparoscopic equipment with transanal access. we think that it could perhaps become the gold standard approach for these tumors.

aims: robotic-assisted laparoscopic surgery (rals) is a promising advanced technology that can overcome the inherent limitations of conventional laparoscopic surgery (cls). its advantages include free-moving multijoint forceps, a motion scaling function, high-quality three-dimensional imaging, and stable camera work by an operator. this study aimed to clarify the short-term outcomes of rals for rectal tumors. methods: the study group comprised 25 patients who underwent rals for rectal tumors (cancer in 24 patients and gastrointestinal stromal tumor in 1 patient), excluding those with distant metastasis, from november 2016 through december 2018. the clinicopathological findings and short-term outcomes were analyzed. results: the median operative time was 372 min (309-682). the median console time was 207 min, and the median blood loss was 5 ml (5-394). the conversion rate was 0.0% (0/25). the median postoperative hospital stay was 11 days (6-17). 2 patients (8.0%) had postoperative complications. 9 patients (36.0%) had lymph node metastases.
the mean number of harvested lymph nodes was 17.6. the r0 resection rate was 96% (24/25). conclusions: these results suggest that rals for rectal tumors is safe and feasible, and the perioperative outcomes are acceptable.

introduction: anastomotic healing defects are a feared complication which can have a fatal impact on the patient. fundamental conditions for proper anastomotic healing include sufficient blood supply. fluorescence angiography using indocyanine green in the near-infrared spectrum facilitates the monitoring of tissue perfusion during surgery. aim: a presentation of the results of our non-randomized study, in which we assessed prospectively obtained data from perioperative assessment of anastomosis perfusion by fluorescence angiography using indocyanine green during robotic rectal cancer surgery. method: thirty patients with rectal cancer who underwent robotic resection with primary anastomosis were consecutively included in the study between april 1, 2017 and june 21, 2018. the study included patients undergoing minimally invasive surgery with guaranteed payment by a health insurance company. during surgery, we monitored and assessed the quality of the perfusion of the resection line of the sigmoid colon and the subsequent anastomosis by means of fluorescence angiography using indocyanine green in the near-infrared spectrum. the data were obtained prospectively and subsequently analyzed. results: between april 1, 2017 and june 21, 2018, we consecutively included 30 rectal cancer patients in the project: 16 men and 14 women. monitoring of the perfusion of the resection line and anastomosis was successful in all cases, and perfusion quality was satisfactory across the sample. perfusion insufficiency requiring a change in the resection line level or anastomosis adjustment was not detected in any patient. in two cases (6.7%) of tme, we abandoned the planned protective ileostomy owing to good perfusion of the anastomosis.
one patient (3.3%) suffered defective anastomosis healing without clinical symptomatology (type a). we found no technical complications related to fluorescence angiography or undesirable effects due to the application of indocyanine green. conclusion: even though we did not register insufficient perfusion in our sample, and hence did not have to change the resection line level or adjust the anastomosis, we may state that fluorescence angiography performed by an experienced colorectal surgeon may potentially reduce the frequency of complications linked to defective anastomosis healing. supported by mo 1012.

aims: the aim of our study is to determine whether robotic surgery has any influence on the reduction of complications in the aged population undergoing rectal cancer surgery. methods: we performed a retrospective analysis of a prospective database of 151 patients who underwent robotic surgery for rectal cancer. we divided our population into 3 groups: under 65 years old, between 65 and 80 years old, and above 80 years old. we recorded intra- and postprocedure complications in each group. qualitative variables were expressed in terms of absolute frequencies and percentages, and mean values and standard deviations were used to express quantitative variables. data were analyzed applying fisher's exact test or the chi-squared test for qualitative variables and analysis of variance or student's t test for quantitative variables. statistically significant values (p < 0.05) underwent multivariate logistic regression analysis. results: the present study included 151 patients (94 males). seventy-seven patients were under 65 years old, 73 patients were between 66 and 80 years old, and 11 patients were above 80 years old. the analysis showed conversion rates of 10.38%, 13.69% and 27.27%, and complication rates of 23.4%, 23.8% and 27.3% in each group, respectively. univariate analysis showed no differences between the three groups.
nevertheless, there were statistical differences in bmi, asa and neoadjuvant therapy. in multivariate analysis only neoadjuvant therapy was significant. conclusions: the robotic approach does not decrease complications in the elderly population.

introduction: the advantages of total transanal mesorectal excision (tatme) have been described, with better visualization of and access to the lower rectum. we used this access, with the gel point path device, to repair a rectovaginal fistula with stenosis of a low rectal anastomosis in two patients, which would have been difficult by a conventional abdominal approach. method: we show our surgical technique for repairing a rectovaginal fistula with stenosis of a low rectal anastomosis in two female patients operated on for rectal neoplasia. one of the patients had undergone prior chemoradiotherapy. rectoscopy and imaging tests were performed before the intervention; no signs of recurrence were recorded on mri. we describe the operative technique: a new anterior rectal resection was performed with a combined transanal (gel point path) and abdominal minimally invasive approach. a redo anastomosis with an eea 31 stapler was performed, together with vaginal repair and epiploplasty. the intervention was especially laborious due to the fibrous tissue. pathology showed a fistulous tract without tumor infiltration in both patients. at two months, a contrast enema showed patency and absence of leaks in both patients. the ileostomy was closed at three months. discussion: we believe that transanal access through the gel point path can be a good option for rectovaginal fistula with stenosis of a low rectal anastomosis, allowing better visualization and access and making a very difficult intervention easier.

introduction: tamis, or transanal minimally invasive surgery, for polyp resection has gained popularity for several situations in which adenomas with or without dysplasia cannot be removed by conventional colonoscopy.
in this video we show the step-by-step technique performed with the da vinci xi system. material and methods: in this video we show the setting and the location of the patient-side cart and the arms to perform polyp resection in different patients, and how to carry out the procedure. results: after placing the patient-side cart, the arms are connected to 3 ports, and the camera, double fenestrated grasper and scissors are connected to the arms through a transanal gel-port device. a line is marked around the polyp with monopolar energy to determine the plane of dissection. the scissors are then exchanged for a robotic harmonic wrist instrument and the complete dissection is performed. the wound is closed using a robotic needle holder and a suture. conclusions: transanal robotic surgery can be safely performed once a standardized technique is established.

aims: robotic rectal cancer surgery has been shown to obtain at least the same results as laparoscopic surgery. however, robotic surgery is associated with high costs, especially when conversion to open surgery occurs. the goal of this study is to create a nomogram predicting conversion for robotic rectal cancer surgery. methods: we performed a retrospective analysis of a prospective database of patients who underwent robotic surgery for rectal cancer from october 2008 to november 2017. we performed a bivariate analysis and detected the variables related to conversion: body mass index (bmi) and t stage. we divided the patients into two groups according to obesity (bmi of 30 kg/m2) and t stage (t1-2/t3-4). we registered conversions in each group, calculating the pretest risk. we calculated likelihood ratios (lr+/-) for bmi under and above 30 kg/m2, adding in a second step the lr of t stage, thereby obtaining the prediction index for four groups using a standardized nomogram. results: the present study included 194 patients (128 males). 143 were under a bmi of 30 kg/m2 and 51 above.
regarding t stage, 54 patients had t1-2 tumors and 150 had t3-4. the analysis showed an overall conversion rate of 14%. univariate analysis showed significant differences in bmi (p = 0.005) and t stage (p = 0.022). a nomogram was constructed: regarding bmi, in the bmi > 30 group the predicted conversion rate was 50% (lr+ 4.95), and in the bmi < 30 group it was 5% (lr- 0.52). adding the t-stage data, for bmi > 30 and t1-2 the predicted conversion rate is 3.5% (lr- 1.2); for bmi > 30 and t3-4 it is 92% (lr+ 5.6); for bmi < 30 and t1-2 it is 2% (lr- 1.2); and for bmi < 30 and t3-4, it is 30%. conclusion: a standardized nomogram with the variables bmi and t stage facilitates the selection of patients for robotic rectal cancer surgery, avoiding conversion to open surgery.

background: 3d-laparoscopy has been proven to improve performance in dry laboratory settings, especially for novice surgeons, due to better depth perception. however, the benefits for experienced laparoscopic surgeons are still debated. aim: the aim of this study is to compare the results of right hemicolectomy (rc) using a conventional (2d hd) laparoscopic system with rc performed using a 3d laparoscopic system in terms of duration, complications and results. material and methods: from all laparoscopic right hemicolectomies performed in our clinic, we selected all procedures performed by the same team of 2 consultant surgeons using the same technique and divided them into 2 groups. the study group comprised all patients operated on using our 3d einstein vision 2.0 system; all other patients, operated on using our standard wolf hd laparoscopy system, comprised the control group.
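the likelihood-ratio arithmetic in the conversion-nomogram abstract above is the odds form of bayes' theorem: post-test odds = pre-test odds × lr. a minimal sketch follows; the 14% pre-test conversion rate and lr+ of 4.95 come from the abstract, while the helper name is ours:

```python
def posttest_probability(pretest: float, lr: float) -> float:
    """convert a pre-test probability and a likelihood ratio into a
    post-test probability via the odds form of bayes' theorem."""
    pre_odds = pretest / (1.0 - pretest)   # probability -> odds
    post_odds = pre_odds * lr              # apply the likelihood ratio
    return post_odds / (1.0 + post_odds)   # odds -> probability

# overall conversion rate 14%, lr+ 4.95 for bmi > 30:
# posttest_probability(0.14, 4.95) ≈ 0.45, close to the ~50% reported
```

this is the same calculation a paper fagan nomogram performs graphically, which is why sequentially applying the bmi lr and then the t-stage lr yields the four-group prediction indices the authors report.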
all patients were retrospectively analyzed in terms of patient characteristics, or time, duration of operation, intra- and postoperative complications, length of hospitalization, pain score, need for analgesics and number of lymph nodes retrieved. risk factors for complications (bmi, smoking, diabetes, copd, bph) were also registered. results: there were 54 patients in the study group, while the control group comprised 98 patients. mean operation time in the study group was 123.3 min, with a mean or time of 157.4 min; mean operation time in the control group was 128.1 min, with a mean or time of 145.4 min. one reintervention was noted in the control group and two in the study group; no conversion to open surgery was noted. there were no significant differences regarding patient characteristics, pain score, wound complications, hernia rate, length of hospitalization or number of lymph nodes removed. conclusions: there were no significant differences regarding the outcome of rc using 3d laparoscopy; total or time was significantly higher in the study group due to the time needed to set up the 3d-laparoscopy unit. this is biased by the fact that the 3d system needs to be set up manually, while the conventional hd system is integrated in the or. there was also no significant difference in complication rate.

background/purpose: the robotic approach can be a treatment option for patients with pelvic recurrence after primary resection for rectal cancer. however, data regarding patient selection, complication rates and oncologic outcomes are rarely reported. we aimed to present our initial experience and to evaluate the feasibility, safety and oncologic outcomes of robotic salvage surgery for recurrent rectal cancer.
methods: ten patients who underwent robotic salvage surgery for local recurrence at the anastomotic site, lateral pelvic side-wall or lateral pelvic lymph nodes (lpns) were retrospectively evaluated from a prospectively maintained database. results: two patients underwent pelvic mass excision with en bloc resection of the anastomosis and redo-anastomosis, and eight patients underwent lateral pelvic lymph node dissection (lpnd) for lpn metastasis; one of these eight patients underwent additional en bloc resection of the anastomosis. all patients achieved r0 resection. the median operation time was 165 min and the median estimated blood loss was 50 ml. there were no conversions. as for intraoperative complications, one patient experienced a ureter injury during lpnd because the metastatic lpn was closely abutting the ureter. the median hospital stay was 7 days. in six patients who underwent lpnd, the median number of harvested lymph nodes was 7 (range 2-13) and the median number of metastatic lymph nodes was 1 (range 0-2). with a median follow-up of 26 months, one patient developed lung and pelvic recurrence at 36 months after the salvage operation, and seven patients remained disease-free at the last follow-up. conclusion: our initial experience of robotic salvage surgery for pelvic recurrence in rectal cancer indicates that it is safe and feasible. therefore, the robotic approach can be considered a treatment option for local recurrence in selected patients.

introduction: there is uncertainty regarding the effects of simulated patient death. several reports showed increased cognitive load and poorer learning outcomes, while others showed increased performance without causing stress to learners. we have not found any report studying the impact of animal death in the simulation lab. methods: this was an observational cohort study to assess the emotional and cognitive load of surgeons who experienced animal death in the simulation lab.
seventy-four faculty and residents from different surgical specialties training in minimally invasive surgery participated in the study. one cohort consisted of surgeons whose animal died during surgery, and the other of those whose animal survived. emotions were assessed using the scale for mood assessment, and cognitive load with the nasa task load index. results: twenty percent of participants experienced mortality while training in anti-reflux surgery (11 cases) and other procedures (3 cases). causes of death included intraoperative pneumothorax (n = 10), hemorrhage (n = 1) and cardiac dysrhythmias (n = 3). participants exposed to animal death had higher levels of sadness and anxiety and lower levels of happiness (p > 0.05). cognitive load was slightly higher in the exposed cohort (p > 0.05). conclusions: these findings suggest that mortality in the animal lab does not have a significant effect on the cognitive workload and emotions of surgeons training in complex laparoscopic procedures.

introduction: the visuospatial profiles of expert laparoscopic surgeons remain unaccounted for in the current literature, as the influence of visuospatial ability on laparoscopic learning has mainly been investigated in medical students or novice surgeons, using simulators as the means of performance measurement. such knowledge is critical: without understanding how clinical experience may impact visuospatial processes in surgeons, we hinder our efforts to utilize the available knowledge to support surgical education for the future. this study aims to explore the development and influence of visuospatial processes on intraoperative laparoscopic learning. method: the study reports interim baseline results from an ongoing 2-year longitudinal study of laparoscopic surgery training.
data from 35 surgeons were captured: 17 residents undergoing training were compared to 18 specialists working in departments of general and visceral surgery at two large hospitals. the mean experience of the surgical residents was 4 years. the mean laparoscopic experience among the senior surgeons was 17 years, with each surgeon performing an average of 6 laparoscopic procedures per week. visuospatial ability was tested using the mental rotation test (mrt), the guay visualization of views test (gvvt), the spatial perspective taking and spatial orientation test (ptost) and pictorial surface orientation (picsor). the spearman correlation coefficient was used in this study, with a p-value of significance at < 0.05. results: senior surgeons had an overall good visuospatial profile, in the sense that they performed close to optimum on all measurement scales. the spearman rho revealed a significant correlation between scores on the gvvt and picsor (r = 0.607, p = 0.012) and between the ptost and picsor (r = -0.686, p = 0.003). a significant correlation between years of laparoscopic experience and ptost score was also observed (r = 0.587, p = 0.035). when comparing residents and senior surgeons, no significant difference on the mrt was observed (m = 10.2, sd = 4.33), nor between baseline scores of senior surgeons and resident surgeons on any test. conclusion: the results of this study carry important clinical and theoretical implications, as they hint that intraoperative laparoscopic experience has little to no influence on the development of visuospatial ability.

learning models and laparoscopic technical skills: how to adapt each case to improve. objectives: according to d. a. kolb, learning is the result of how people perceive and then process what they have perceived.
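the spearman rank correlations reported in the visuospatial study above (e.g. r = 0.607 between gvvt and picsor scores) are obtained by ranking each variable and taking pearson's correlation of the ranks. a stdlib python sketch with average ranks for ties; the function name and any example data are ours, for illustration only:

```python
def spearman_rho(xs, ys):
    """spearman rank correlation, assigning average ranks to ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(vals):
            j = i
            # extend j over a run of tied values
            while j + 1 < len(vals) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0  # average 1-based rank across the tie
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5  # pearson correlation of the ranks
```

scipy.stats.spearmanr applies the same tie-handling; the explicit version shows why the statistic is robust to monotone but non-linear relationships between test scores.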
the aim of this study is to identify the personal learning characteristics of the participants in a course of laparoscopic technical skills according to the styles described by kolb. methods: between june 2016 and november 2017, 35 participants completed a 50-h course distributed over five consecutive days, performing laparoscopic manual intestinal anastomoses in an endotrainer. they all filled in kolb's learning style test adapted to spanish. the anastomoses were performed in ex-vivo swine intestines. for each anastomosis we evaluated final quality and execution time. the test and quality variables were statistically analyzed. results: in our study, 69% of the participants were women and 31% were men. 49% were staff surgeons and 58% were residents. the median age among residents was 29 years and among staff 39 years. the most frequent learning model in the sample studied was converging (31%). the predominant model among women was assimilating (37%), which, however, represented only 8% in men. in men, the converging model was predominant (39%). among the staff, the most frequent model was diverging (35%). the accommodating style prevailed among residents (39%), being rare among the staff (12%). the mean anastomosis time was 74 min for both the accommodating and assimilating models, and 68 min for the converging and diverging models. the quality of the anastomosis was 80% for the accommodating model, 37% for the assimilating model, 42% for the converging model and 45% for the diverging model. the predominant style in our study was converging. among women, the most frequent model was assimilating, whereas in men it was the least frequent. among residents, the most frequent model was accommodating; however, it was very rare among staff surgeons. among residents we did not find diverging styles. the highest quality of anastomosis was achieved by those who worked with an assimilating style.
knowing the learning style in advance, we can individualize the teaching methodology in order to improve competences.

aims: to assess whether laparoscopic appendicectomy (la) is a superior option to open appendicectomy (oa); specifically, comparing the time taken and complication rates, and whether it is more appropriate to perform an la overnight, as opposed to an oa; and finally, to find out how a range of outcomes differs between different grades of surgeon. methods: an information request was sent to the clinical coding department to derive patient identification numbers for all appendicectomies over a ten-month period (180 total surgeries). these numbers were then entered into the hospital information system, where the electronic operation note is held, and specific outcomes were derived and analysed. results: 68% of operations were oa and 32% were la. mean la times for consultants, sas and spr grades were 88.4, 78 and 92 min respectively, and oa times were 63, 57 and 59 min respectively. their respective conversion rates were 27%, 16% and 0%. oa had a complication rate of 16.3%; la, 10.2%. conclusion: oa are performed more often than la. spr doctors had the slowest completion times for la but the lowest conversion rates. sas doctors had the fastest completion times for la and oa but higher conversion rates. la takes longer than oa but has lower complication rates: key factors when operating at night. key statement: laparoscopic appendicectomies require more surgeon-hours and have the potential to be converted to open; however, the rates of complications and serious complications are significantly lower.

background: paper-based resources have been the standard sources of information for centuries. however, more and more people (patients and staff alike) are looking online for information. while the internet often provides excellent resources, there is often conflicting and confusing material of doubtful veracity.
trainee staff and patients/carers should be able to access reliable resources whenever and wherever they are. the aim of this project was to create a high-quality resource fulfilling these needs. aim: we present a video demonstrating our integrated colorectal education website ( http://www.colorectaleducation.com/). our approach: high-quality health care provision requires highly trained staff as well as well-informed patients. information resources for these two groups are usually held in different repositories. our integrated website provides a common platform for all those involved in colorectal surgery to use, learn and reflect on. users are directed to separate sections for patients and colorectal professionals. multiple disclaimers prevent patients accidentally stumbling across clinical/operative information, whilst providing access to those who wish to see it. trainees struggle to balance their educational needs with their service commitments. this website gives them the opportunity to view detailed operative training videos on the go. many of the videos are chapter-based, allowing them to stop and restart with ease. modules are also available for nurses, providing access to relevant educational material. the modular design of the website allows us to build upon it, with more topics planned to be added over the next eighteen months. the resource also has detailed chapterised videos for patients due to undergo various colorectal procedures. all have been approved by a multi-professional panel including patients and are designed to provide information, offer support and allay anxiety. videos covering the care pathway and previous patients' experiences are accessible on demand. conclusion: on-demand information has become the norm with the use of smartphones/tablets. this website provides patients, surgical trainees and other healthcare professionals access to information and education in a clear and reliable format anywhere in the world.
colorectal education: on demand and just a click away!

objective: in the last decade, growing interest in robotic surgery has been evident, as shown by several published articles. the aim of the present study is to evaluate the main outcomes of a single-center experience and to describe the organizational system we have progressively established in our center in order to support the development of the robotic program in all surgical areas. materials and methods: we report a case series of patients who underwent robot-assisted surgery at sanchinarro university hospital from the beginning of the program (october 2010) until november 2018. main patient demographic characteristics, type of surgery, peri- and postoperative data and follow-up were evaluated. results: a total of 326 robotic procedures were performed in 323 patients. the prevalence of malignant disease was 86%. a total of 72 pancreatic procedures were performed, together with 22 liver resections (mean operating time: 190 min); 33 gastrectomies (mean operating time: 310 min); 18 esophagectomies (mean operating time: 490 min); 152 colorectal resections (100 rectal resections, 23 sigmoidectomies, 19 right hemicolectomies, 10 left colectomies; mean operating time: 220 min); 6 nissen procedures (mean operating time: 130 min); 2 esophageal myotomies for achalasia (operating time: 90 min); 3 adrenalectomies (mean operating time: 240 min); 3 biliary procedures for benign disease; and 2 splenectomies. eight partial resections of the duodenum, one jejunal resection, one mesenteric cyst resection and 3 retroperitoneal tumor resections were also performed. the conversion rate was 6% and total morbidity was 17%. there was no peri- or postoperative mortality up to 30 days after surgery. the average hospital stay and intensive care stay were, respectively, 16 days (range 6-45 days) and 1.9 days (range 0-12 days). conclusions: the organizational model defined in our center is facilitating the constant and progressive development of the robotic program.
a broad and flexible availability of the robotic system, a progressive increase in young surgeons joining this technology, as well as the institutional and departmental economic effort, are the points with which the robotic system may increase its development in a surgical department. aims: endoscopic surgery has become widespread in the field of general surgery. however, in japan there is no standard program for endoscopic surgery training, and competency in it is not considered in the acquisition of board-certified surgeon status. the purpose of this survey was to investigate the current situation of endoscopic surgery training and the autonomy of young surgeons in endoscopic surgery in japan. methods: the survey targeted general surgery members of the japan society for endoscopic surgery (jses) who were postgraduate year 10 or less. after approval by the ethics committee of jses, the request to participate in the survey was mailed to 2296 eligible members. questionnaire responses were available in print or online media. the questionnaire consisted of 19 items about the conditions of endoscopic surgical training, the number of cases experienced, and self-assessment of autonomy from 1 to 4 points on the zwisch scale in 9 specific procedures of endoscopic surgery. results: the total response rate was 28.5% (645/2296). sixty-five answers were excluded due to inadequate responses and 580 answers were analyzed. of the questionnaire respondents, 87% were male and 13% were female. the proportion of board-certified surgeons was 67%. although 87% of the teaching hospitals had simulators for basic training in endoscopic surgery and 94% of the respondents practiced basic skills of endoscopic surgery, only 34% of teaching hospitals had specific training programs for endoscopic surgery.
surgeons who had performed 20 cases of laparoscopic appendectomy and inguinal hernia repair, and 50 cases of laparoscopic cholecystectomy, right hemicolectomy and sigmoidectomy, felt confident to perform each procedure independently. regarding laparoscopic rectal resection and gastrectomy, even surgeons with 50 cases of experience did not feel confident to perform those procedures independently. conclusions: this study is the first national survey to investigate the status of endoscopic surgery training in japan and the autonomy of young surgeons in endoscopic surgery. in order to develop a training system covering not only basic skills but also advanced procedures of endoscopic surgery, cooperation of each teaching hospital, the academic surgical societies and the medical specialty board is necessary. currently there is debate about the optimal work schedule for residents of general surgery; it is important to respect the free time of residents to avoid burnout, but it is also important to have enough exposure to clinical cases to allow satisfactory development in clinical practice. this becomes even more important when we talk about the learning of surgical skills. this is where the laparoscopic simulation industry opens a large area of opportunity: for a reasonable price it is possible to practice basic laparoscopic skills without compromising patient safety. this pilot study was carried out from january 2018 to june 2018 in a public hospital in monterrey, nl, mexico. a comparison of the execution of the standardized exercises of the fls (fundamentals of laparoscopic surgery) program in an endoscopic simulator was performed in 20 residents of general surgery (from first to fifth year) 24 h before being on call vs the same residents post call.
a series of questions was asked of each resident at each measurement, so they answered the same questions twice, and a comparison of the results of both questionnaires was then made. the results of the exercises were assessed and rated by the same person using the criteria established in the fls for the scores of each exercise and for the final grade. the average age was 27 years; measurements were taken of 20 residents, of whom 16 were male and 4 female. on average, the residents before being on call performed the exercises having slept 7 h, while post call they performed the exercises having slept 2.15 h; the residents before being on call had on average been awake for 10.9 h, while post call they had been awake for 25 h. the average number of hours worked per week was 111 h, measured by the time in and out of the hospital. in this study, conclusive results were obtained showing no relationship between sleep deprivation and the performance of laparoscopic skills in surgical residents. aim: 'precision cutting' is one of the skills tasks of the fundamentals of laparoscopic surgery (fls) program, which involves cutting a circle from a piece of gauze under the laparoscope and is assessed by completion time (maximum time limit: 300 s). there is no definition of the quality of the final product. the aim of this study was to develop an assessment tool for laparoscopic precision cutting and test its reliability. method: an assessment tool for laparoscopic precision cutting was developed through experts' meetings, with four items based on completion, degree of deformation, degree of being pulled, and overall appearance of the final product. each item was scored on a 5-point likert scale. a descriptive sheet with a legend and a text description for each scale (fig) was attached for assessors' reference.
our high-school-entry medical students gained hands-on experience of laparoscopic skills for the first time by attending a 1-hour course at the minimally invasive surgery training center, national taiwan university hospital (ntuh). we invited students to participate in this study after this training. we collected participants' final products from the 'precision cutting' station and assessed them using this assessment tool. this study was approved by the institutional review board, ntuh (irb no: 201512051rinb). results: 35 students were enrolled between february 2016 and june 2016. two non-medical assessors and a senior surgeon were invited to assess the products. the mean score and cronbach's alpha value of each item were as follows: completion 2.2 ± 0.9, 0.91; degree of deformation 2.11 ± 0.9, 0.96; degree of being pulled 3.1 ± 1.1, 0.85; and overall appearance 2.6 ± 0.9, 0.95. conclusions: in summary, we successfully developed an assessment tool for laparoscopic 'precision cutting' and showed its reliability. the tool could provide qualitative descriptions for objective feedback. validation of this tool on a larger scale is under way. purpose: to evaluate whether participants who had experienced an interventional scenario for testing trainees' situational awareness and intra-operative decision making could recall it when they participated in this training again. methods: we have run an iodm training course for junior surgical trainees and nurses using live pigs since sep 2016. in the first simulation, we created an interventional scenario and then provided an educational session. a researcher disconnected the ekg monitor on purpose to create a scenario in which the pig would lose vital signs as the team was nearly finishing a diagnostic laparoscopy. if the team was not aware of the situation after 1.5 min, a researcher would remind the team (fig).
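the item reliabilities quoted above are cronbach's alpha coefficients. as a minimal sketch of how such a coefficient is computed for k raters scoring the same n products (the scores below are invented for illustration, not the study's data):

```python
def cronbach_alpha(ratings):
    # ratings: one list per rater, each scoring the same products in order
    k = len(ratings)            # number of raters (the "items")
    n = len(ratings[0])         # number of scored products

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    rater_var = sum(var(r) for r in ratings)                 # sum of per-rater variances
    totals = [sum(r[i] for r in ratings) for i in range(n)]  # per-product total score
    return k / (k - 1) * (1 - rater_var / var(totals))

# three raters in perfect agreement give alpha = 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

values near the 0.85-0.96 range reported above indicate that the raters' scores vary together rather than independently.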
we used a newly developed assessment tool for iodm and an assessment tool for non-technical skills for surgeons (notss) for self-evaluations and objective assessments. we also discussed with the participants their reactions on encountering this interventional scenario. results: between sep 2017 and june 2018, 14 teams participated in this training and experienced this interventional scenario. fourteen 2nd-year surgical trainees had experienced it before. only one participant (7%) recalled it and made a quick decision when encountering this interventional scenario again. the results of the iodm assessment and notss showed no statistical difference between their self-assessments in the first and second year. based on the analysis of the discussions, most of them remembered this interventional scenario and reminded themselves to react to it properly before the simulation. however, when they were the primary surgeon of the diagnostic laparoscopy, they focused on performing the procedure and tutoring their junior trainee. they had no spare mental capacity to notice the change in vital signs. in addition, although they increased their situational awareness in clinical settings after the first iodm training, they did not show this ability in the simulation. conclusions: recall of an interventional scenario for testing situational awareness of surgical trainees was very poor (1/14, 7%) among the 2nd-year surgical trainees. qualitative analysis of the discussions showed their mental capacity was occupied by performing new procedures and tutoring others. how to enhance trainees' situational awareness should be addressed. aims: a well-designed learning curve is essential to measure the progress of surgical abilities. learning curves are very important for testing the skills of trainees. however, there are still no well-defined criteria for developing good learning curves. as a result, many authors use subjective evaluation criteria.
the purpose of this review is to analyse this field of surgical education and to identify the key criteria for good learning curves. methods: learning curves were investigated in the field of laparoscopic and robotic minimally invasive surgery. appendectomy, cholecystectomy, colectomy, inguinal hernia repair and gastrectomy were considered. the type of surgery, the year of publication, the design of the study, the surgeon's experience (resident, young or senior), the surgical technique, the number of patients involved in the study and the learning curve suggested by the different studies were taken into account. in the selection of articles, more importance was given to those based on the activity of young surgeons or residents. results: the literature analysis showed conflicting results. the different learning curves for the same surgery may be due to the different evaluation criteria considered. only a few studies investigate the learning curves of young surgeons and residents. conclusions: the data available in the literature on learning curves are contradictory. several factors need to be evaluated in order to create more accurate learning curves. we suggest the introduction of checklists with a score for each parameter to be examined, in order to develop more objective and standardized learning curves. aim: the uk training programme for transanal total mesorectal excision (tatme) has completed its first round of training. the study aim was to design a reporting platform that provided trainees with video-assisted feedback in a clear, concise and useful manner to support their training. methods: an established method of video analysis called observational clinical human reliability analysis (ochra) was used to assess the surgical performance of the trainees during their clinical tatme cases. a reporting form for the ochra results was designed, identifying areas of difficulty in each procedure and providing error-reduction mechanisms.
this was piloted during the national training programme for tatme in the uk. results: the ochra reporting form underwent three modifications before the content and format were agreed upon. the final version is divided into three sections: a. case details, b. ochra findings, and c. suggested error-reducing mechanisms. for part b the tatme procedure was divided into four phases of the operation: 1. pursestring, 2. rectotomy, 3. tme dissection, and 4. connected phase, when the abdominal and transanal teams work together synchronously. for each phase, ochra findings described the most frequently occurring technical inaccuracies/errors, the number of consequential errors/adverse events and the most frequent and serious consequences encountered. the suggested error-reducing mechanisms in part c were developed and established by an expert workshop and individual interviews with international surgeons experienced in tatme. trainee and mentor feedback stated that the reporting form had a clear format and was easy to follow and understand. the error-reducing mechanisms were particularly useful and allowed the trainee to focus on improving specific technical aspects in their subsequent cases. conclusion: video analysis using ochra can provide a wealth of information on surgical performance, especially for trainees at the start of their learning curve. as an exploratory study, validation of the reporting platform is required; however, its potential to offer detailed, individualised feedback to enhance training is promising. laparoscopic pelvic surgery training program using a new-concept 3d-printed versatile pelvi-trainer. r.c. elisei 1, f. graur 2, c. popa 2, e. mois 2, l. furcea 2, n. al hajjar 2. 1 general surgery, bistrita emergency county hospital, bistrita, romania; 2 general surgery, regional institute of gastroenterology and hepatology "prof. o.
fodor", cluj-napoca, romania. pelvic laparoscopic surgery (rectal, urological, or gynecological laparoscopic surgery) is advanced surgery which requires advanced skills that are not easy to acquire. there are many training programs for advanced laparoscopic skills, but many of them are not affordable for most surgery residents in eastern europe, where the training programs lag far behind those in western europe. because of that, those training programs need to be improved and optimized: in the european union we want equally and highly skilled surgeons. this is why we designed a new concept of pelvi-trainer, a versatile one, in order to offer residents the possibility to achieve advanced laparoscopic skills such as perfect coordination, precise movements, and the ability to cut and suture along a well-defined route, all within the tight space of the pelvis. we 3d-printed this pelvi-trainer, which has multiple advantages: it is cheap and easy to produce, easy to use, and versatile, because it offers the possibility to achieve the skills named above and many others, and also to train on real ex vivo animal rectum (swine model). we also believe that, with proper training, a medical student or a young surgery resident is able to achieve the same skills as experienced surgery residents or specialists. to demonstrate that, we need a study comparing the time to perform 4 or more exercises in this new-concept pelvi-trainer among medical students, young and experienced residents, and surgery specialists. what we want to achieve with this training program project is to have more and more surgeons skilled in advanced laparoscopy and uniform laparoscopic surgery training all over the country, close to the level of training in western europe. we also want this training program to standardize pelvic laparoscopic surgery training, first in our country and then, if possible, in other countries.
aims: the objective of this systematic review is to provide an evidence-based overview of the different components of laparoscopic training curricula, emphasizing the value of objective force-based assessment and how this is implemented in modern laparoscopic training. methods: the bibliographic databases pubmed and embase were searched until april 2018 to identify studies reporting on evidence-based laparoscopic skills training. abstracts of retrieved studies were reviewed by two authors independently and those meeting the inclusion criteria were selected for full-text review. results: the search yielded a total of 2010 individual records. a total of 96 articles were included. the articles were divided into nine different categories: 'metrics', 'benchmark criteria', 'measurement systems', 'timetable', 'training modalities', 'camera settings', 'training tasks', 'serious gaming', and 'competition'. a descriptive analysis of the data is provided. motion analysis parameters, such as path length and time, are frequently validated and used for assessment. validation studies on tissue manipulation parameters, such as maximum force and mean force, have proved their discriminating power between different levels of proficiency. however, implementation of these metrics remains limited. conclusions: numerous studies on laparoscopic skills training have been conducted over the years. nevertheless, no consensus has been reached on the use of objective assessment tools. although the value of validated metrics is well described, implementation of objective metrics is limited. we recommend considering objective force and motion metrics for feedback and assessment during laparoscopic skills training.
surgery, regional institute of gastroenterology and hepatology, cluj-napoca, romania; 2 anesthesiology, university of agricultural sciences and veterinary medicine, cluj-napoca, romania; 3 radiology, regional institute of gastroenterology and hepatology, cluj-napoca, romania. aims: the aim of the study was to create a new, easy method of learning swine liver anatomy for residents in training. based on human liver surgical anatomy, we put 'face to face' the similar structures and also the differences, using ex vivo porcine models and ct reconstructions from live pigs. methods: with human liver anatomy in mind, in the first stage we used data obtained from dissection of twelve porcine liver models to create an anatomical pattern summarizing the most important surgical information. in the second stage, anatomical data obtained from ct scans of twelve living anesthetized pigs were analyzed. the ct reconstructions and volumetry data were added to the gross anatomy pattern to create a more complex learning module. results: the residents established the most frequent description of swine liver anatomy by putting together the information from ex vivo model dissection. the liver parenchyma is divided into four main anatomic lobes: left lateral, left medial, right medial and right lateral. these lobes are connected only in the posterior part, which allows a very good separation between them by deep fissures. just as in humans, we found eight distinct segments with independent vascularization and biliary drainage. the portal vein has a specific 's' shape; in most cases the hepatic artery was found as a trifurcation, and the extrahepatic biliary tree has a very thin wall. in the right hemi-liver, the inferior vena cava passes through the liver parenchyma. most frequently, we found five hepatic veins running completely intraparenchymally.
the imaging data offered a very useful 3d reconstruction with the anatomical positions of the vascular-biliary tree and liver segmentation, and gave us the possibility to create practical scenarios for resections. perhaps the most important information was to discover and see the section plane and to calculate the volume of the remaining liver after resection. conclusions: the anatomical-imaging pattern based on ex vivo model dissections combined with imaging data offers a unique mindset before intervention. the concept 'human vs swine', used to create an easy method of learning for residents in training, can be applied to swine liver anatomy. the learning of surgery is traditionally based on the behaviourist model. goals are set and standards of care fixed, with regular assessments of the level achieved. the teacher exercises control over the student, imposing rules and models, supported by 'reinforcing' actions (reward or punishment). the theory of skinner's program of education, from 1954, is reflected in surgical learning. it foresees a gradual progression by level of difficulty, following a transmission-imitation model. these theories seem outdated today in the face of the new challenges of medicine and surgery and cannot keep up with technological developments. bruner, one of the theorists of the constructivist model, proposed in 1990 a method of collaborative learning between those who teach and those who learn. the goal of the method was to improve strategic problem solving. the comparison between various perspectives (between teacher and student) allowed the learner to better absorb knowledge and improve critical thinking. in 2008 kapur published on the theory of 'productive failure'. this model makes the error of a single person useful for all of their colleagues, and privileges the practice of theoretical knowledge, contextualised learning as opposed to abstract learning, and 'guided' practice over 'guided' theory.
bruner's and kapur's systems favour creativity, critical analysis of a problem's origin, and the practical use of knowledge. they represent a hypothesis of learning based on constructive discussion and a continuous 'give-and-take' feedback system. in order to put these new models into practice in the clinical context, one may propose the adoption of a formal discussion of clinical cases that are complicated or difficult, thereby making the theoretical lessons more collaborative, intuitive and inclusive. in the surgical field, one could adapt such a concept to surgery simulation, virtual reality and anatomical models. aim: large hiatal hernias have a surgical indication when patients suffer disabling symptoms such as anaemia, dyspnea, chest pain or gastric reflux. several studies have shown that in the case of large hernias the placement of a prosthesis is safe and can protect against recurrence. mini-invasive surgery is the preferred approach for hiatal hernia repair and anti-reflux procedures, and the toupet fundoplication has been shown to be the best surgical technique for hiatal hernia repair. the laparoscopic approach is currently the surgical gold standard but is burdened by technical difficulties, especially in the case of large hiatal hernias. the robotic system is designed to overcome some technical difficulties of laparoscopy, and the studies available in the literature report the safety and effectiveness of the robotic approach in complex hiatal hernia repair. methods: we present the case of a grade iv hiatal hernia treated with a robotic approach in a 73-year-old woman (bmi: 31 kg/m2). the medical history consisted of a road accident with a probable deceleration mechanism three years earlier. the patient had been suffering from dyspnea for three years. due to the recent discovery of anaemia, the patient underwent an endoscopic examination, with identification of a voluminous grade iv hiatal hernia.
a subsequent computed tomography (ct) scan also showed partial herniation of the transverse colon. results: the patient underwent surgery using the da vinci si® robotic system (intuitive surgical, sunnyvale, usa) with a single-docking approach. the surgery consisted of the liberation of the hernial sac, the placement of a gore-tex prosthesis and the creation of a toupet fundoplication. the surgery was performed without complication. conclusions: the robotic approach in hiatal hernia surgery seems to be a valid alternative to laparoscopy, especially in complex cases. surgical ability in robotic surgery is of paramount importance. general thoracic surgery, kawasaki municipal hospital, tokyo, japan. aim: video-assisted thoracoscopic surgery (vats) with carbon dioxide (co2) insufflation for mediastinal surgery is known to improve the visualization of the mediastinal space. we report our experiences with two cases that underwent vats thymectomy using co2 insufflation under one-lung ventilation general anesthesia via a double-lumen tube. methods: the only instruments used for vats thymectomy were a 5-mm 30-degree rigid thoracoscope, a maryland jaw energy device, cotton dissectors, and straight endoscopic grasping forceps. they were used through sealed ports designed for laparoscopic surgery. low-pressure co2 insufflation set at 8 mmhg was used for compression of the tissue surrounding the mediastinal tumor during the releasing procedure. results: the patients were an 81-year-old male and a 54-year-old female. thoracoscopy with 8 mmhg co2 insufflation provided excellent visualization of the mediastinal space, and the operations could be done smoothly without any hemodynamic compromise. the pathological diagnoses were thymic cancer and thymoma, type b1. the operative times were 115 min and 66 min. the postoperative courses were uneventful and the patients were discharged on days 10 and 3, respectively.
conclusion: we have just begun to routinely use co2 insufflation for mediastinal tumorectomy and present our early experiences of successful vats thymectomy utilizing co2 insufflation. aims: this retrospective study aims to evaluate the feasibility of single-incision thoracoscopic surgery (sits) for primary spontaneous pneumothorax (psp), using a novel multichannel port (x gate®). methods: between october 2015 and november 2018, ten patients underwent sits using x gate®. nine patients were male and 1 was female, with a mean age of 22.6 ± 8.7 years. a 2.5 cm incision was placed in the middle axillary line on the 4th or 5th intercostal space, depending on the lesions. postoperative outcomes of these patients were compared with those of 33 patients with psp who underwent conventional three-port video-assisted thoracic surgery (vats). results: there were no conversions from sits to vats. mean operative time in the sits group was significantly shorter than in the three-port vats group (55.4 ± 14.3 min vs 79.8 ± 26.0 min, p = 0.003). the mean number of staplers used in surgery was 2.5 (range 1-4) in the sits group and 3 (range 1-5) in the vats group (p = 0.698). mean duration of postoperative drainage was also shorter in the sits group (1.0 ± 0 days vs 1.3 ± 0.6 days, p = 0.05). no recurrence or wound infection was observed in the sits group. conclusion: sits using x gate® is feasible when performed for selected patients with psp. x gate® provides good visualization of the intrapleural space and esthetic outcomes, as well as superb maneuverability, by decreasing mutual interference of surgical instruments. although conventional three-port vats for psp is well established, sits using x gate® can be a permissible alternative. further examinations are required to evaluate the efficacy of sits using x gate®. aims: haemorrhage remains a leading cause of potentially preventable death in trauma.
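the operative-time comparison above rests on a two-sample t-test of summary statistics. the abstract does not state which test variant was used, so as a rough consistency check the sketch below assumes welch's unequal-variance form, computing the t statistic and satterthwaite degrees of freedom from the reported means, sds and group sizes:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    # welch's unequal-variance t statistic and satterthwaite df
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# operative times from the abstract: sits 55.4 +/- 14.3 min (n = 10)
# vs three-port vats 79.8 +/- 26.0 min (n = 33)
t, df = welch_t(55.4, 14.3, 10, 79.8, 26.0, 33)
print(round(t, 2), round(df, 1))  # t about -3.81, df about 28
```

the p-value then follows from the t distribution with that df (e.g. via scipy.stats.t.sf); a |t| near 3.8 on about 28 df is consistent with the significant difference reported.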
in particular, non-compressible torso haemorrhage is estimated to cause 60-70% of mortality in civilian trauma patients with otherwise survivable injuries, and 80% in the war setting. we performed a literature review to assess the potential for using endovascular stenting in traumatic venous injuries and to explore the evidence of their efficacy and safety with different venous injury patterns. methods: a systematic online search of pubmed was performed using the key words 'endovascular stent', 'venous injury', trauma, penetrating, blunt, abdominal and pelvic. inclusion criteria included all studies that explored the use of endovascular stents following traumatic abdominopelvic venous injuries. english-language studies were used. results were presented according to prisma guidelines. results: of the 112 studies generated by the search, there were only four case reports in the literature documenting the use of endovascular stents in traumatic venous injuries, dating back to 1997 and most recently 2009. the four cases included three retrohepatic ivc injuries, two secondary to blunt trauma and one penetrating, whilst the final case was a blunt injury at the ilio-caval bifurcation. all four cases reported successful deployment of stents via the femoral or internal jugular veins, with subsequent resolution of haemorrhage. the length of time taken for stent insertion ranged from 9 to 52 min. three of four patients made full recoveries and were discharged from hospital, with one patient subsequently dying of a brain injury independent of the successful venous stent insertion. no complications, including stent leak, stenosis or migration, were reported at up to 8 months of follow-up in the remaining cases. conclusion: endovascular venous stents have been used successfully in managing complex abdominopelvic traumatic venous injuries, in particular retrohepatic venous injuries refractory to hepatic packing and vessel embolization, which are not amenable to direct surgical repair due to anatomical location.
however, before endovascular stenting can be added to the arsenal of interventional radiologists for abdomino-pelvic trauma, further development of stents custom made for venous injuries, as well as prospective studies examining their long-term safety and outcomes, is needed. tracheal papilloma is a rare neoplasm growing from the tracheal or bronchial epithelium and has no specific clinical presentation. this is a 40-year-old female who complained of progressive dyspnea for about 2 months. physical examination was unremarkable and there was no abnormal finding on the plain chest film. chest computed tomography was arranged and revealed a mass lesion in the tracheal lumen with more than 80% luminal obstruction. we used fiberoptic bronchoscopy to evaluate the airway and found a mass lesion with a pedicle originating from the posterior tracheal wall. cryotherapy was chosen for removal of the tumor mass to establish a patent airway. the pathology report revealed tracheal papillomatosis without any malignant component. dyspnea improved immediately and the patient chose close observation after the bronchoscopic cryotherapy. aims: recent advances in laparoscopic surgery, both in techniques and instrumentation, have led to the emergence of innovative technological fields, among which robotic surgery stands out. one of the handicaps of this surgery is its high cost as well as the long learning curve. at this stage a new tool arises, the flexdex semi-robotic arm, which combines the precision and range of movements of robotic surgery with the greater availability, simplicity of use and ease of learning of conventional laparoscopic surgery. the objective of this study is to evaluate the efficacy and safety of the flexdex device in different laparoscopic procedures.
methods: flexdex is a three-axis gimbal device integrated into a conventional laparoscopic instrument that translates the surgeon's hand, wrist, and arm movements from outside the patient into corresponding movements of an end-effector inside the patient's body. the greater accessibility provided by the flexdex allows the surgeon to perform sutures in areas of difficult access where mobility with conventional laparoscopic instruments is not optimal. the comfort of the surgeon remains fundamental in any type of surgery, even more so in anatomical locations with complex access, especially for the performance of sutures. this is where surgical innovation instruments such as flexdex provide ergonomic comfort for the surgeon and improve the patient's safety, especially in high-risk situations, such as when performing anastomoses. results: we present a prospective series of 10 laparoscopic procedures carried out by the same surgical team, representing the initial experience in our environment in the use of the flexdex semi-robotic arm for the performance of complex anatomical sutures. this is a case series of 10 patients in whom different surgical techniques requiring manual suture were performed: 2 tapp procedures, 6 nissen-type fundoplications and 2 reinforcements of colorectal anastomoses. it is important to note that no complications were recorded in any of the cases. conclusions: flexdex can provide an excellent alternative to robotic systems in complex surgical procedures, offering surgeons the precision and control they desire while maintaining the balance of cost, outcome and patient benefit. background: a new single-port device (fsis-flexible-single-incision-surgery) is presented. this new platform has three working channels, two for rigid instruments and one for the flexible endoscope. the channel for flexible instruments offers pneumatic sealing to avoid air leakage from the cavity (abdomen, rectum, vagina).
in this study the preclinical data testing the feasibility and safety for laparo-endoscopic instruments are shown. methods: experimental evaluation of feasibility and safety in two stages. in the first stage a working channel with pneumatic sealing was tested in simulators with a flexible endoscope. in the second stage (animal model) the single-incision device, which makes it possible to use laparoscopic instruments and flexible endoscopes, was tested. the measured variables were: time of the procedure, co2 employed, adverse intraoperative events, loss of grip, loss of pneumatic sealing, and feasibility and safety of the procedure for the surgeon. results: the hysterectomy and double adnexectomy was done with a median time of 7.1 min. the median co2 consumption was 32.5 litres. in only one case (16.6%) did the surgeon have problems with the abdominal navigation of the endoscope, which were easily solved. loss of grip was not a major problem. the median size of the skin incision was 5.4 cm. the median surgeon score for feasibility was 10 and for safety 9.6. conclusions: the surgeons considered the use of the device very feasible and safe. the fsis device is a universal platform for single-incision surgery for surgeons and gastroenterologists and for abdominal, rectal and vaginal access. aim: although near-infrared fluorescence (nirf) via the intravenous administration of indocyanine green (icg) improves the visualisation of the cystic duct (cd) and the extrahepatic biliary tract (ebt), the background fluorescence of the liver reduces the signal-to-noise ratio. we have modified the technique of nirf cholecystocholangiography with intragallbladder icg injection by using the arrow-karlan balloon cholangiography catheter instead of the purse string at the gallbladder fundus.
this procedure allows a high rate of visualisation of the ebt, with few cases of icg leakage. the aim of this study is to confirm the feasibility of this modified technique, to analyse icg spillage from the gallbladder and to identify the ebt. methods: we enrolled nine patients undergoing laparoscopic cholecystectomy for cholelithiasis. the gallbladder was perforated with the cholangiogram catheter, the balloon inflated with 0.5 ml of saline and tightened. the bile was drained and the icg bolus injected. a titanium clip was then placed on the catheter close to the gallbladder in order to prevent catheter dislocation. results: the cd and the ebt were visible before dissection in 6/9 and 8/9 patients respectively. after dissection the cd was visible in all the patients and the ebt again in 8/9 patients. there was only one icg spillage, due to late positioning of the clip. in a case of inflamed gallbladder this technique helped in the identification of the dissection plane. conclusions: our preliminary results of this ongoing study confirm the feasibility of this modified approach as a possible alternative to the purse string, with good visualisation of the ebt. introduction: robotic-assisted surgery is a promising technique for overcoming the limitations of laparoscopic surgery, especially with regard to complex and advanced surgical procedures. here, we describe the establishment and implementation of our robotic upper gastrointestinal (gi) and hepato-pancreato-biliary (hpb) surgery program within our center of excellence for minimally invasive surgery, as well as the first-year results. method: robotic-assisted surgery was performed using the davinci xi surgical system and was performed by two surgeons specialized in minimally invasive surgery (db and tk).
our robotic surgery program of upper gi and hpb surgery was established in three steps: (1) first, surgical procedures with an easier degree of difficulty were performed robotically, including cholecystectomy, minor gastric resections and fundoplications. (2) then, distal pancreatic resections, enucleations, adrenalectomies and atypical liver resections were performed robotically, as procedures with a moderate degree of difficulty. (3) finally, advanced and highly complex procedures were performed, including right hemihepatectomy, complex pancreatic head resections (including portal vein resections), total gastrectomy and esophagectomy. data collected from july 2017 to july 2018 were retrospectively analyzed with regard to conversion rate, morbidity (clavien-dindo grade ≥ 3) and mortality. results: within the first year, a total of 66 robotic-assisted upper gi and hpb resections were performed. the first step of establishing our robotic surgical program included eight procedures; here, conversion rate, morbidity and mortality were 0%. within the second step 31 procedures were performed; conversion rate, morbidity and mortality were 27%, 10% and 0%. the last step included 27 advanced and highly complex procedures, which resulted in a conversion rate of 48%, 30% morbidity and 0% mortality. conclusion: our stepwise approach enables a safe implementation of a robotic surgical program for upper gi and hpb surgery with low morbidity and no mortality even for highly complex procedures. however, highly complex procedures showed a high conversion rate, which might be caused by the early stage of experience. the standard surgical procedure for choledochal cyst is complete excision of the cyst with roux-en-y hepaticojejunostomy, and laparoscopic surgery has been increasingly used. performing the anastomosis remains challenging due to the small diameter of the bile duct and the possibility of bile leak or stricture.
robotic systems can overcome the shortcomings of laparoscopy by providing a three-dimensional view, magnification, and articulated instruments. from jan 2014 to dec 2017, 22 patients underwent robotic cyst excision and hepaticojejunostomy by a single surgeon. we reviewed the clinical data and compared them retrospectively with the laparoscopic outcomes of an early (2009 to 2011) and a late (2014 to 2017) group. patients in the robotic series were all female, with mean age 35.3 years and bmi 23.7. the mean size of the cyst was 3.2 × 5.0 cm, with todani type ia in 10, ic in 6 and iva in 6, respectively. a total of 5 trocars were used, with 3 robotic working arms, 1 assistant and 1 camera. the mean operative time was 248.5 ± 52.9 min, similar to the late laparoscopic group (236 ± 62.9 min) and significantly shorter than the early group (395 ± 85.9 min). there were no open conversions in the robotic and late laparoscopic groups; however, the early laparoscopic group had a 35% conversion rate. hospital stay was 7 ± 3.8 days in the robotic group, similar to the late group (7 ± 3.5) and shorter than the early group (9.3 ± 6.8). in the robotic series, postoperative complications occurred in 3 patients. one case was cholangitis, which resolved after conservative treatment. bile leakage developed in 1 patient and was treated with a drain inserted intraoperatively. the last case showed an incisional hernia at postoperative 4 months, which was corrected by laparoscopic herniorrhaphy. complications (n = 7) in the late laparoscopic group included hepaticojejunostomy stricture and stone, bleeding of a jejunal branch, portal vein thrombosis, acute pancreatitis, and adhesive ileus. there were no mortality cases in any group. robotic surgery for choledochal cyst is a safe and feasible option with short-term results comparable to the laparoscopic approach.
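the operative-time comparison above (robotic 248.5 ± 52.9 min, n = 22, vs. early laparoscopic 395 ± 85.9 min) can be checked from the summary statistics alone with welch's t-test; the early-group size used below is a placeholder assumption, since the abstract does not report it:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom computed from
    group summaries (mean, standard deviation, sample size)."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# robotic group as reported; early laparoscopic n = 20 is an assumption
t, df = welch_t(248.5, 52.9, 22, 395.0, 85.9, 20)
```

with any plausible early-group size near 20, |t| exceeds 6, consistent with the abstract's "significantly shorter" claim; the exact p-value would require the true group sizes.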
general surgery, sanchinarro university hospital, madrid, spain. background: the incidental detection of benign to low-grade malignant small pancreatic neoplasms has increased in the last decades. the surgical management of these patients is still under debate. the aim of this paper is to evaluate the safety and feasibility of robotic enucleation. methods: we retrospectively reviewed our prospectively maintained database from november 2018. demographics, pathological characteristics, perioperative outcomes, and medium-term follow-up of patients who underwent robotic pancreatic enucleation were collected. results: 18 patients were included. the mean age of the patients was 61 years (48-74). the median body mass index was 26 (24-29). ten lesions were located in the pancreatic head, 4 in the pancreatic body and 4 in the pancreatic tail. operative time was 250 min (range 114-356), no intraoperative transfusions were needed, and in one patient conversion to an open approach was needed. grade b pancreatic fistula occurred in three patients. the mean postoperative stay was 8.4 days. conclusions: robotic enucleation is a feasible and safe approach, with a low incidence of morbidity. the results of surgical treatment of patients with pulmonary tuberculosis were evaluated depending on the extent of the tuberculosis process and the type of surgical intervention used. according to the results of the questionnaire, in 556 patients operated on for pulmonary tuberculosis 1 to 9 years earlier, the frequency of tuberculosis reactivation, the complicated course of the late postoperative period, and the mortality and causes of lethal outcomes were assessed. it was found that after sublobar resection and lobectomy, treatment failure was noted in 3.2%, relapse of tuberculosis in 2.2%, pleural empyema in 1.1%, bronchial fistula in 0.8%, and cardiovascular insufficiency in 1.3% of those operated. the mortality rate was 3.2%, with a total clinical efficacy of 96.0%.
after combined resection and bilobectomy, treatment failure was noted in 8.9%, relapse of tuberculosis in 15.0%, pleural empyema in 12.0%, bronchial fistula in 6.3%, and cardiovascular failure in 5.0% of those operated. the mortality rate was 12.6%, with a total clinical efficacy of 87.4%. after pneumonectomy, treatment failure was noted in 5.6%, relapse of tuberculosis in 5.6%, pleural empyema in 3.8%, bronchial fistula in 3.8%, and cardiovascular failure in 3.8% of those operated. the mortality rate was 10.0%, with a total clinical efficacy of 90.0%. robotic reduced-port splenectomy using the single-site platform. j.h. lee. background: in the era of minimally invasive surgery, single-incision laparoscopic splenectomy can offer some advantages compared to conventional laparoscopic splenectomy, but it requires expertise in minimally invasive techniques due to technical difficulties. the da vinci robotic reduced-port splenectomy using the single-site platform permits greater freedom of movement and higher levels of accuracy than previous laparoscopic surgery through two small incisions. methods: we performed a retrospective review of all patients who underwent robotic reduced-port splenectomy using the single-site platform at our institution between january 2015 and november 2018. one 3 cm periumbilical incision was made for glove-port insertion, and another incision was made on the left side of the abdomen for an additional 8 mm port. the surgical technique is much the same as the open procedure. the short gastric artery was ligated first. the splenic artery and vein were ligated individually. no stapling device was used during surgery. a vessel sealer was used for hemostasis and mobilization of the spleen. the specimen was removed through the umbilical port site within a lap-bag. result: eight patients (6 female and 2 male) with a median age of 33.5 years underwent robotic reduced-port splenectomy using the single-site platform (one case with combined robotic cholecystectomy for gallbladder stones without an additional trocar).
the indications were: hematological disease (n = 3) and splenic mass (benign n = 4, malignant n = 1). preoperatively measured spleen size ranged from 5.5 cm to 16 cm (mean 11 cm). there were no intraoperative complications and no open conversions. mean operative time was 132 min (range 74-206 min), including docking (mean 19 min) and console time (mean 62 min). mean blood loss was under 10 ml. mean hospital stay was 5.2 days after surgery. one patient underwent oral anticoagulation therapy for an asymptomatic portal vein thrombosis, which had resolved at the 1-month follow-up ct scan. there were no clavien-dindo class iii or above postoperative complications. conclusions: robotic reduced-port splenectomy using the single-site platform seems to be feasible and effective. it seems to overcome certain limits of previous robotic or conventional single-site laparoscopic splenectomy and single-site-only robotic splenectomy. we think the additional 8 mm port allows the use of endowrist da vinci instruments such as the vessel sealer, which enhances dissection efficiency and the safety of the procedure. aims: inguinal lymph node dissection carries an important risk of postoperative complications, mainly related to wound complications and long-term lymphedema. the minimally invasive approach aims to reduce the morbidity of this procedure, avoiding the traditional groin incision while still allowing full access to the lymph node basin. the authors aimed to describe their video-assisted inguinal lymph node dissection (vilnd) cases, comparing the surgical outcomes with a sample of open inguinal lymph node dissection (oilnd) cases. methods: we performed a retrospective descriptive study that compared the data from patients who underwent vilnd since 2017 (the year in which this technique was first performed in our institution) with the patients who underwent oilnd in 2015 and 2016. gynaecologic and urologic malignancies were excluded.
the statistical analysis was performed using spss v25, with a p value < 0.05 indicating statistical significance. results: a total of 62 cases of inguinal lymph node dissection were analysed, 33.9% of which were vilnd (none of them requiring conversion to the open approach). melanoma was the primary tumour in 87% of patients. the vilnd and oilnd groups had no statistically significant difference between them regarding age, body mass index, smoking status or the reason for lymph node dissection (clinically detected lymph node vs. positive sentinel node biopsy). the mean number of isolated lymph nodes in the vilnd (7.71) and oilnd (9.63) groups was also not statistically different (p = 0.109). there was no difference in the rate of postoperative seroma, wound dehiscence or lymphedema. the rate of surgical site infections was higher in the oilnd group: 34% vs. 9.5% during postoperative hospital admission (p = 0.045), and 29.3% vs. 4.8% after discharge (p = 0.036). conclusions: in our population of patients we conclude that the main advantage of the video-assisted approach regarding surgical morbidity lies in the reduction of the infection rate, as the published literature also confirms. the equivalent number of lymph nodes retrieved in both groups points toward the oncological safety of the minimally invasive procedure, which we hope to study further after a longer follow-up period. objectives: to evaluate the clinical feasibility of a tumor localization technique with a radio-frequency identification (rfid) clip marker. methods: we developed a prototype rfid-integrated endoscopic clip (rfid-clip) and a probe to detect it on the serosal surface during laparoscopic surgery. a pig weighing 40 kg was used as the specimen for the in-vivo test. the endoscopist applied the rfid-clip on the porcine gastric mucosa. the surgeon then tried to find the location of the rfid-clip using the detection probe and marked it with electrocautery.
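the in-hospital infection-rate comparison above (34% vs. 9.5%, p = 0.045) can be re-derived approximately from the published percentages; the counts used below (14/41 oilnd vs. 2/21 vilnd) are reconstructed assumptions, not figures stated in the abstract, and a one-sided fisher's exact test is used rather than whichever test spss applied:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    probability of observing a top-left count >= a under the
    hypergeometric null of no association."""
    n = a + b + c + d
    row1 = a + b              # size of group 1
    col1 = a + c              # total number of events
    denom = comb(n, row1)
    # upper hypergeometric tail: P(X >= a)
    return sum(comb(col1, k) * comb(n - col1, row1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# assumed counts: 14/41 infections in oilnd vs. 2/21 in vilnd
p = fisher_one_sided(14, 27, 2, 19)
```

on these assumed counts the tail probability falls below 0.05, in line with the significance the abstract reports.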
after gastrectomy with a 3 cm margin (proximal and distal), we confirmed the predicted rfid-clip location and the accuracy of resection. results: the rfid-clip location was detected and recorded at the exact site of clip application. the detection range was very short, and we confirmed that there was almost no difference between the actual clip location and our prediction. this result might arise from using a low-frequency rfid tag, which increases accuracy by reducing the range. however, some rfid-clips were not detected because of clipping problems, not because of the rfid tag. conclusions: this is a basic study to evaluate the clinical usefulness and feasibility of the new localizing technique. we confirmed the possibilities of this system, and it could be a helpful option for providing exact location information in minimally invasive surgery for early gastrointestinal tumors. background: the advantages of laparoscopic posterior retroperitoneal adrenalectomy (lpra) have been described in the literature. the aim of this study was to compare the clinical outcomes of lpra and robotic posterior retroperitoneal adrenalectomy (rpra) and determine the differences that could affect the outcomes. methods: we retrospectively analyzed 253 adrenalectomy cases at asan medical center from 2014 to 2017. there were 190 lpra and 63 rpra cases, and their clinicopathological features and surgical outcomes were compared. results: in lpra, there was a positive relationship between operation time and male gender, early period of experience, adrenal tumor size, and pheochromocytoma. in rpra, adrenal tumor size and pheochromocytoma were the only factors affecting the operation time. when the adrenal tumor size was ≤ 5.5 cm, the operation time of lpra was shorter than that of rpra (p = 0.001). when the tumor size was > 5.5 cm, there was no significant difference in the operation time of lpra and rpra (p = 0.102).
conclusions: rpra is a feasible and technically safe approach for benign adrenal diseases. the use of rpra could benefit patients with large tumors and provide comfort by overcoming the factors contributing to a longer operation time in the laparoscopic technique. methods: twenty years' experience at the american university of beirut medical center with laparoscopic adrenalectomy. a total of 65 cases were done laparoscopically with no conversion and minimal complications. the average operative time is 40 min. the video will show the various steps used for laparoscopic redo (lt) adrenalectomy for a 15 cm pheochromocytoma using the lateral position and through 3 trocars. an attempt to remove the pheochromocytoma in iraq was complicated by cardiac arrest, treated successfully, and the patient was referred to the american university of beirut medical center. results: the patient had a smooth postoperative course following laparoscopic adrenalectomy and was discharged 3 days later with no complications. conclusions: even large adrenal masses can be completed laparoscopically in centers with advanced experience in laparoscopy. surg endosc (2019). aims: the adrenocortical neoplasm of uncertain malignancy is a spectrum of classification for adrenal tumors whose histopathological diagnosis is uncertain. clinical case: we present a 65-year-old patient with constitutional syndrome, severe hypercortisolism and hypokalemia, for which she was admitted to the icu for episodes of ventricular fibrillation. no other medical history of interest except hypertension refractory to treatment. ct showed a left adrenal mass of 6.5 × 4.5 × 5 cm with microcalcifications and areas of necrosis and hemorrhage, non-infiltrating, without distant disease. the surgery was a laparoscopic left adrenalectomy with no evidence of infiltration and no lymph nodes.
the histopathology showed a dense cellular proliferation of cortical type, with an incomplete fibrous capsule, without vascular or capsular invasion, with a ki67 of 30% and positivity for vimentin and cd56. all epithelial markers were negative. all this led to the diagnosis of an adrenocortical neoplasm of uncertain malignant potential. during the postoperative period, the patient presented a crisis of adrenal insufficiency that was treated with intravenous corticosteroid replacement and later orally, with good clinical response. discussion: adrenal carcinoma has a low incidence (0.1%), with an incidence peak around 50 years of age; the mixed secretory type is the most frequent. these tumors represent 2-5% of adrenal incidentalomas. it usually presents at diagnosis as a locally advanced tumor with metastases (to liver, lung, retroperitoneal nodes and bone). it may present clinically with hormonal hyperproduction or as a non-functioning tumor. adrenal carcinoma poses great difficulty at the time of pathological diagnosis, and its differential diagnosis includes other abdominal tumors. the distinction between adrenocortical adenoma and adrenal carcinoma is sometimes difficult, so a spectrum of intermediate category has been defined, called adrenocortical neoplasm of intermediate or uncertain malignancy. the diagnosis is based on the weiss criteria, at least 3 of which are necessary to confirm adrenal carcinoma. this category has a low risk of local recurrence or metastasis, but it needs close follow-up. conclusion: adrenal carcinoma of uncertain malignancy implies a new category for those tumors that are difficult to classify. aims: multiple endocrine neoplasia type 2 (men2) is an autosomal dominant disorder with an estimated prevalence of 1 per 30,000 in the general population. among patients suspected to have a pheochromocytoma, the diagnosis is rarely confirmed, and only 10% present bilaterally.
we present bilateral laparoscopic adrenalectomy in a patient with men2. method: a 77-year-old woman with a family history of medullary thyroid cancer and breast cancer. personal history: hypertension, medullary thyroid cancer, breast cancer, laparoscopic cholecystectomy, appendectomy. after endocrinological work-up and suspicion of bilateral pheochromocytoma, and discussion of the case in a multidisciplinary committee, bilateral adrenalectomy by laparoscopic approach was decided. a selective alpha-1-adrenergic blocking agent (doxazosin) was used before surgery. under general anesthesia, left adrenalectomy was performed first in the right lateral decubitus position. a 15 mmhg pneumoperitoneum was established with the veress needle and 3 trocars (11 mm umbilical, 5 mm subxiphoid and 12 mm left subcostal). once dissection was completed, the gland was placed in a plastic bag and extracted through one of the trocar incisions; then the position of the patient was changed to left lateral decubitus for the right adrenal approach. another right subcostal 5 mm trocar was used. adhesiolysis from the previous cholecystectomy was performed for the right adrenal approach. the adrenal veins were divided between metallic clips. no drainage was employed. results: the procedures were successfully performed without conversion. surgical time was 150 min and hospital stay was 2 days. the patient had clinical reversion, with blood pressure control monitored by endocrinology. conclusions: currently, the laparoscopic approach is the technique of choice for the management of adrenal pathology. the lateral decubitus transperitoneal approach is the procedure of choice in most cases. bilateral laparoscopic synchronous adrenalectomy is feasible and safe, with good results as in our patient. traditionally, the treatment of hyperparathyroidism for patients with familial hyperparathyroidism was subtotal parathyroidectomy or total parathyroidectomy with autotransplantation.
in the era of minimally invasive parathyroidectomy, the removal of only the abnormal glands guided by preoperative localizing studies has been suggested. aims: this systematic review aimed to investigate the role of focused minimally invasive parathyroidectomy in the treatment of patients with familial hyperparathyroidism. methods: electronic databases were searched with the search terms 'men i', 'familial hyperparathyroidism', 'men2a', 'hyperparathyroidism-jaw tumor syndrome', 'parathyroidectomy' and 'minimally invasive', for the period up to and including december 2018. full publications that provided relevant data, including clinical trials (randomized or not), retrospective studies, case series and case reports, met the inclusion criteria. results: thirty-five possibly relevant studies were identified. abstracts were reviewed and fifteen articles were excluded. twenty studies that met the inclusion criteria were retrieved in full text and included in the systematic review: three retrospective cohort studies, two presenting data on men i-associated hyperparathyroidism and the third on familial hyperparathyroidism, and seventeen small case series or case reports. the two retrospective studies on men i hyperparathyroidism included 125 patients treated either with focused minimally invasive parathyroidectomy or with the conventional approach. these studies presented conflicting data, with one supporting and the other negating focused minimally invasive parathyroidectomy, because localization studies failed to identify enlarged parathyroid glands in a great number of patients. conclusion: undoubtedly, the idea of minimally invasive parathyroidectomy in patients with hereditary and familial hyperparathyroidism is interesting. this idea is especially challenging in the case of men i. existing data suggest that focused minimally invasive parathyroidectomy is feasible under the condition of exact preoperative localization studies.
the main advantage of this approach is the minimization of the risk of postoperative hypoparathyroidism. however, data are limited and further research is needed before valid conclusions can be drawn on the suitability of this approach. objective: resection of pheochromocytomas is a challenging procedure due to hemodynamic lability, tumor vascularity and malignant potential. given the technical challenges of resecting large pheochromocytomas, there was hesitation about using the laparoscopic approach for these tumors during the first decade of laparoscopic surgery. however, improvements in imaging modalities, better pharmacological preparation, and advances in anaesthesia and laparoscopic surgery have rendered laparoscopic surgery for pheochromocytomas safe and efficient. our aim was to evaluate surgical outcomes in 86 patients with pheochromocytoma and to validate the role of laparoscopic surgery in the treatment of these tumors. design: a total of 85 procedures for pheochromocytoma were performed between january 1998 and september 2018. the preoperative diagnosis, operative details, complications, length of hospital stay, morbidity and follow-up were retrieved from the hospital records of 668 patients who underwent 686 adrenalectomies for benign and malignant adrenal tumors in the same period. preoperative localization was established in all patients with computerized tomography (ct) or magnetic resonance imaging (mri), while iodine-123-metaiodobenzylguanidine (mibg) scanning was reserved for ambiguous cases where paraganglioma or metastatic disease was suspected. endocrinological evaluation and complete adrenal dynamic testing were performed to determine whether the tumor was functional or not. results: eighty-seven tumors were removed from 85 patients. one patient with men iia underwent bilateral resection of pheochromocytomas in two stages. tumor size in laparoscopic procedures ranged from 1.2 cm to 11.0 cm (mean 5.87 cm).
forty-three patients had benign disease, 41 potentially malignant (based on pass), and 1 malignant with metastasis. eight were in the context of a familial syndrome. sixty-eight patients underwent laparoscopic adrenalectomy, 8 patients had an open approach from the start for recurrent pheochromocytoma or large benign tumor, 1 patient had an open approach due to inoperable malignant pheochromocytoma, and 10 patients had conversions from laparoscopic to open procedures. nine patients received sodium nitroprusside intraoperatively to treat hypertension. one patient developed pulmonary embolism and succumbed 1 month later. there were no recurrences of the benign tumors during the follow-up period. conclusions: laparoscopic resection of pheochromocytomas, despite its increased level of difficulty compared to that of other adrenal tumors, is a safe and effective procedure. aim: the concept of 'large' in transperitoneal lateral laparoscopic adrenalectomy (tlla) has been evolving over time, ranging from 5 to 8-10 cm depending on the author. on the other hand, some authors discourage laparoscopic surgery in larger tumors due to the increased risk of malignancy in those larger than 5-6 cm, with malignancy reported in 1 out of 3 or 4 cases. paragangliomas are rare tumors originating in extra-adrenal chromaffin cells, with an incidence of 2-8 cases per million inhabitants. they can appear in any location between the neck and the pelvis. sympathetic paragangliomas are usually functional and catecholamine producers. we present a video of the surgical intervention in a 22-year-old patient who, during work-up for refractory hypertension, presented a norepinephrine-producing paraganglioma, which was approached laparoscopically. a 22-year-old woman was studied by nephrology for refractory hypertension. on physical examination, only obesity stood out. blood tests showed plasma normetanephrine levels of 1950 pg/ml and aldosterone of 832 pg/ml.
abdominal scintigraphy was performed, with no evidence of increased activity at the adrenal level. abdominal ct showed a retroperitoneal extra-adrenal tumor in the inter-aortocaval location, immediately below the renal vessels, with dimensions of 3.6 × 2.1 × 6 cm. after preparation, she underwent surgery. laparoscopic access was performed under exhaustive monitoring. a heterogeneous, polylobulated tumor of 6 cm, located inter-aortocaval and intimately adherent to the left renal vascular pedicle, was observed. cattell-braasch and kocher maneuvers were performed, with exposure of the inferior vena cava and aorta down to the iliac bifurcation. complete tumor excision was performed after clipping the arterial and venous tributary branches. after the operation, the patient had a favorable evolution and was discharged on the second postoperative day with good control of blood pressure levels. the laparoscopic approach to retroperitoneal paragangliomas is a safe technique, which allows minimally invasive access, with consequent improvement in postoperative results. the exact location of the lesions and their relationships with surrounding structures, as well as their functional behavior, are very important when considering the best therapeutic strategy for these patients. we present the case of a 50-year-old obese male patient referred for adrenalectomy after being diagnosed with a left adrenal incidentaloma. abdominal mri showed a 4.7 × 4.1 × 3.9 cm left adrenal mass with normal hormonal levels. after preoperative workup, the patient underwent standard laparoscopic adrenalectomy. the lateral-to-medial dissection and mobilization of the spleen and pancreatic tail was difficult due to the abundance of peritoneal and pararenal fat. the anatomy was peculiar: the bulky pancreatic tail was located well inferior to the splenic hilum and was visible throughout the intervention, and the spleen was quite elongated (long axis 15 cm). the exposure of the adrenal gland was therefore cumbersome.
the operating time was 128 min and blood loss 170 ml. the abdominal drainage was maintained for 48 h. before discharge the patient underwent a control abdominal us examination that showed only a thin line of left pleural fluid. the patient was readmitted 6 days after discharge for chest pain, fever (38.9°c) and malaise, with no abdominal signs. the emergency ct scan diagnosed left basal pneumonia with minimal pleural effusion and a 7 × 1 cm fluid collection between the spleen and diaphragm, while the blood test showed leukocytosis. the patient was treated for pneumonia with apparent clinical benefit for three days and a lowered white cell count, but his condition worsened during the fourth day. repeat abdominal us demonstrated that the abdominal collection had increased in size, therefore the patient underwent emergency surgery. during laparoscopic exploration, the collection was revealed to be pancreatic juice (more than 7 times the normal serum levels of lipase and amylase). after thorough lavage, two drainage tubes were positioned in the left subphrenic space. the postoperative course was uneventful under antibiotic treatment for pneumonia and pancreatic antisecretory medication. the patient was discharged after 7 days with minimal pancreatic drainage, and the drainage tube was extracted after 5 more days. the aim of the study was to develop an algorithm for the choice of the method of endoscopic treatment of combined uterine leiomyoma and adenomyosis depending on reproductive plans. methods: the study involved 60 patients with combined uterine leiomyoma and adenomyosis. indications for conservative myomectomy were: uterine size more than 13 weeks' pregnancy; multiple leiomyomatous nodes and adenomyotic foci up to 5 cm in size; hemorrhagic and pain syndromes, anemia, compression of adjacent organs; suspected node malignancy; submucous leiomyoma deforming the uterine cavity with foci of adenomyosis; subserous or cervical-isthmic nodes with foci of adenomyosis; the presence of endometrial hyperplasia or tumors of the uterine appendages; growth of the uterine leiomyoma by more than 4 weeks' pregnancy size per year; growth of the uterine leiomyoma on the background of drug treatment; infertility associated with leiomyoma and uterine adenomyosis. laparoscopic myomectomy of a pedunculated subserous node larger than 6 cm, or of intramural nodes larger than 8 cm, is indicated when preservation of the organ is desired. hysterectomy is indicated for women after 43 years of age who insist on hysterectomy, and for the combination of uterine leiomyoma with atypical endometrial hyperplasia. results: conservative myomectomy and removal of adenomyotic foci were performed in 24 (40.5%) patients with reproductive plans: by hysteroscopic access in 4, vaginal access in 3, laparoscopic access in 13, and abdominal access in 5. in the absence of reproductive plans, hysteroscopic myomectomy was performed in 8 (21.1%) patients and hysterectomy in 30 (78.9%) patients: by laparoscopic access in 23, vaginal access in 3, and abdominal access in 4. conclusions: the choice of surgical treatment of uterine leiomyoma and adenomyosis depends on the reproductive plans of the woman and the severity of the lesion. the laparoscopic method of treating combined uterine leiomyoma and adenomyosis, both in the presence and in the absence of reproductive plans, is the priority for women. surgery, policlinico 'paolo giaccone', palermo, italy. background: breast cancer in females represents the most frequent neoplasm in all age groups. the risk of developing breast cancer (mc) increases with age.
the brca1 and brca2 genes (tumor-suppressor genes, autosomal dominant transmission with high penetrance) alone account for 30% to 70% of cases of hereditary breast cancer. methods: from 1 january 2011 to 1 june 2017 we analyzed 18 patients with a brca mutation. all 18 patients had in common a germline mutation of the brca1 or brca2 tumor-suppressor genes. results: the frequency of germline mutation of brca1 (9 patients: 50%) was identical to that of brca2 (9 patients: 50%). 13 of the analyzed patients were women (72.2%), 9 brca1 and 4 brca2, and 5 were men (27.8%), all with a brca2 mutation. conclusions: prophylactic surgery must be seen as a way to put the patient in a position to implement the most appropriate treatment. further studies will be necessary to support the validity of prophylactic surgery in patients with mutations in the brca1 and brca2 genes. introduction: laparoscopic hysterectomy is a safe surgical technique for removing the uterus, with or without the ovaries and fallopian tubes. laparoscopic surgery for endometrial cancer is a safe method, with a mean recovery time of only two days. material-method: the case of a 55-year-old woman with metrorrhagia and anemia (ht 24.5%) due to adenocarcinoma of the endometrium is presented. the patient underwent a laparoscopic hysterectomy and oophorectomy. 4 trocar ports were used during the procedure (a 10 mm transumbilical port, similar to the port used in single-incision laparoscopic operations, two 5 mm ports at the level of the anterior superior iliac spines, and a 10 mm port in the middle of the imaginary line between the pubic symphysis and the umbilicus). the uterine vessels and the uterine ligaments were ligated and dissected using a thermal energy source. the patient's postoperative course was uneventful. the patient continues to be in good condition 6 months after surgery.
conclusion: laparoscopic hysterectomy seems to be a safe method for addressing endometrial cancer, as it offers the surgeon a better surgical field, is tissue friendly and causes fewer postoperative complications. it is considered a less traumatic operative method, as the magnified image allows greater accuracy in handling the tissue, and blood loss is minimal. m. shahin. background: hysterectomy is one of the most frequently performed surgical procedures. though there are three approaches to hysterectomy (open, vaginal and laparoscopic), there are still controversies regarding the optimal route for performing it. methods: this prospective comparative study included 42 obese patients scheduled for pan-hysterectomy as treatment. the forty-two patients were allocated into two groups: group (a), subjected to laparoscopic pan-hysterectomy, and group (b), subjected to open pan-hysterectomy. results: there was a significant difference between the two groups regarding mean operative time, blood loss, analgesic requirements and hospital stay, while there was no significant difference regarding intra-operative complications. conclusions: laparoscopic hysterectomy in obese patients has emerged as a viable, safe and better alternative to open hysterectomy in the hands of appropriately trained surgeons. general: endometriosis in the inguinal region is rare. the usual presentation is that of a woman of reproductive age. it accounts for 0.3-0.6% of patients affected by endometriosis. the groin swelling is usually slow growing and painful, with exacerbations during menses. the incidence of inguinal endometriosis on the right side is 90-94%, as compared to the left. aim: to present our laparoscopic approach for the treatment of this diagnostic dilemma. case presentation: a 40-year-old woman presented with a palpable mass in the right groin. the swelling was associated with a dull aching pain. the patient suffered from increasing pain over the swelling during menstruation.
she had undergone a cesarean section some years before and the scar had healed by primary intention. mri revealed a nodular hypoechoic lesion at the level of the internal inguinal ring with an absence of vascular flow around the lesion. results: since inguinal endometriosis was in the differential diagnosis, and it may be associated with pelvic or intraperitoneal endometriosis, a laparoscopic approach was decided upon. the procedure was successfully completed laparoscopically following the transabdominal preperitoneal approach. after dissection of the internal inguinal ring, the endometriosis was found firmly adherent to the round ligament. it was excised en bloc with the round ligament. a preperitoneal polypropylene mesh was inserted to protect against future inguinal hernias given the extensive dissection at the level of the internal inguinal ring. no intraperitoneal endometriosis was appreciated. histopathology revealed endometriosis of the round ligament. the patient was uneventfully discharged the next day. on follow-up the patient was asymptomatic. conclusions: round ligament endometriosis is a rare entity. it is a disease of specific interest to the physician. it can be confused with an inguinal hernia and thereby pose a diagnostic dilemma. we recommend considering endometriosis in the differential diagnosis of groin swellings in women. the transabdominal preperitoneal approach is feasible and safe in the hands of an advanced laparoscopic surgeon. introduction: sentinel node biopsy is the newest accepted method for surgical staging of early-stage endometrial and cervical cancer. aim: to evaluate the role of the technique of indocyanine green (icg) identification of the sentinel lymph nodes in cases of early endometrial cancer. material and method: five patients with early endometrial and cervical cancer were enrolled in a prospective study. icg was injected locally during the laparoscopic exploration. novadaq pinpoint near-infrared technology was used.
guided biopsies were performed on the marked sentinel nodes and the histological results were evaluated. results: sentinel lymph nodes were easily identified using icg and near-infrared technology. technical details are described. no associated complication was encountered. conclusion: sln mapping using icg in uterine cancers is demonstrated to be an effective and safe procedure. laparoscopic extraction of an intraperitoneal gossypiboma following c/s and a retroperitoneal gossypiboma following pyeloplasty. n. ozlem, general surgery department, ahievran university, kirsehir, turkey. gossypibomas are forgotten, iatrogenic foreign bodies. their symptoms differ according to where they are located. in the past they were extracted by laparotomy, but some articles now report extraction by laparoscopy. case 1: a 33-year-old female had abdominal pain for 2.5 years after a c/s. a gossypiboma was extracted laparoscopically from above the umbilicus. a superficial surgical site infection occurred, was drained, and subsided. case 2: a 46-year-old male had undergone a pyeloplasty 8 years earlier. a gossypiboma was extracted by retroperitoneoscopy, with no postoperative event. basibuyuk et al. first reported retroperitoneoscopic extraction of a gossypiboma through a single port. although every effort is taken, the incidence of foreign bodies retained in the body is about 0.03-0.1%. they are most frequently localized in the intra-abdominal cavity, followed by the tracheobronchial area, pleural cavity, pararenal area, vagina, spinal cord, neck, femur, breast, bladder and pancreas, and they may cause local irritation and infection. tactile sense is absent in laparoscopy. all radiologic examinations (us, ct, pet, mri, etc.) can be used for detection; we used us and ct. in the end, laparoscopy made the diagnosis and removed the gossypibomas in our cases, with less postoperative pain and better cosmesis. according to justo et al., the computerized tomography (ct) scan is the most useful method for diagnosis; however, sometimes the preoperative diagnosis remains uncertain even after the imaging exam.
in that case, laparoscopy arises as a valuable diagnostic tool, as well as a prompt treatment option. concerning gossypiboma, prevention is preferable to treatment. notwithstanding, there is no highly reliable prevention system. counting sponges is a method based on staff communication during the surgery, with only 77% sensitivity. routine surgical postoperative x-ray (spox) constitutes an early detection system, but the need to incorporate a radiopaque marker and to expose the whole surgical field to maximize its efficacy limits its use. more recently, electronic devices based on barcode detection and other technological adjuncts for counting sponges are being developed. none of these prevention systems is reliable when used alone. our education and research clinic was previously a state hospital, where none of the above measures was followed; now we use them all. multiple procedures and surgical teams, long operations and non-elective operations are the evidenced risk factors. the c/s operation was learned to have been performed with the patient's cervical ostium fully open. urology, nagoya, japan. aims: some scoring systems have been suggested to standardize renal tumor characteristics. among them, the renal score is widely used in partial nephrectomy, whereas the diameter-axis-polar (dap) score was developed to be more significantly related to postoperative renal function. our study compared the dap score with the renal score for robotic partial nephrectomy (rpn) outcomes. methods: records of patients who underwent rpn at nagoya daini red cross hospital between april 2016 and october 2018 were analyzed retrospectively. these included three oncocytomas. we calculated the estimated glomerular filtration rate (egfr) just before rpn and 1 month postoperatively in 51 patients. we compared the two nephrometry scores with warm ischemic time and change in egfr. results: in our institution, four surgeons performed rpn.
according to the dap score, 14 patients were high, 19 middle and 18 low. according to the renal score, 1 was high, 26 middle and 24 low. the median warm ischemic time was 20 min (11-35). the median egfr decreased from 67.9 (23.2-127.3) to 57.8 (10.7-116.9) ml/min/1.73 m². there were no significant differences in warm ischemic time or percentage change in egfr between renal score groups (p = 0.38 and 0.87), but there were significant differences between dap score groups (p < 0.05 and p < 0.05). univariate and multivariate analyses were used to identify factors influencing postoperative renal function, and confirmed that the dap score was an independent predictor of change in egfr after rpn. conclusions: the dap score is a simpler scoring system than the renal score. our study suggests that the dap score is useful for the preoperative evaluation of renal tumors before rpn. further investigation is needed to better understand the preoperative dap score. aims: retroperitoneal primary tumors comprise a great variety of neoplasms with different histological typologies, with insidious clinical symptoms and little specificity in most cases. diagnosis is established through imaging tests together with anatomopathological study, and complete surgical resection is the treatment of choice. the aim of the video is to demonstrate the safety and efficacy of the minimally invasive approach in patients with retroperitoneal lesions. methods: a 66-year-old female patient who, in the course of abdominal pain in the right iliac fossa suspicious for acute appendicitis, was diagnosed on ct with a right retroperitoneal tumor compatible with a primary neurogenic tumor. radiographic imaging is a key component of the evaluation of a patient with a retroperitoneal mass; a ct scan is necessary to evaluate the primary site as well as to rule out metastatic disease. after a complete biochemical study, a nonfunctioning tumor was determined.
the study was completed with mri, where the lesion was located below the right kidney, in front of the right psoas muscle and lateral to the inferior vena cava, without contact with these structures. it was in intimate contact with the ovarian vein. the complementary tests and iconography of interest of the case are presented. surgical intervention was proposed with a laparoscopic approach. results: full minimally invasive approach in the left lateral decubitus position: 4 trocars, lateral laparoscopic transabdominal approach. laparoscopic liberation of the right colon and a kocher maneuver until the inferior vena cava was visualized; identification of a tumor of approximately 5 cm in the right infrarenal region, lateral to the right ureter, which involved the gonadal vessels. resection of the tumor en bloc with margins after dissection and clipping of the proximal and distal gonadal vessels with ligasure®. the patient had a successful postoperative recovery, being discharged 24 h after the intervention. definitive result of the specimen: leiomyosarcoma, grade 2 of the fnclcc, with a negative margin. the laparoscopic approach is a safe and effective technique for retroperitoneal tumors; a radical oncological criterion is always needed, with correct margins of resection, especially in those of uncertain etiology. we started endoscopic thyroidectomy using the lifting method in 2001 and have developed single-incision endoscopic thyroidectomy (siet) via a chest (c-) or axillary (a-) incision using our original retractor since 2007. we created a new approach in 2010. recently, we have applied this method to parathyroid surgery. in this study, we present our method and results in parathyroid surgery with regard to surgical outcome and patients' complaints. method: endoscopic parathyroidectomy by c-siet was performed with the new approach in 6 patients with hyperparathyroidism (primary 4, secondary 2; mean age 69, male 1, female 5).
a single parathyroid adenoma was diagnosed preoperatively by ultrasound. the patient is placed in a supine position with the neck extended. a 30 mm vertical incision is made in the anterior chest. a flexible endoscope (olympus co., japan) is used through a 5 mm trocar attached to the retractor. in the new approach, the parathyroid and thyroid are exposed through the avascular space between the sternal head and clavicular head of the sternocleidomastoid muscle. both the skin and the sternal head are lifted up by our original retractor (takasago medical co., japan). the parathyroid adenoma behind the thyroid is resected using an ultrasonic scalpel. i would like to present our c-siet procedure. results: no scars were left in the neck in any case. benign, unilateral parathyroid adenomas sized from 8 mm to 25 mm (mean 16.5 mm) were operated on. the mean operation time was 123 min in the new approach. there were no complications. parathyroid hormone levels decreased in all patients immediately after the operation. conclusion: there is little risk of recurrent nerve palsy with this approach. the new approach is useful for operating and widens the working space, making it easier to find the parathyroid adenoma without stress. our original retractor can be introduced easily in most hospitals, because it is not expensive. most women were satisfied with the cosmetic results because the scars are hidden. objectives: radiofrequency ablation (rfa) is a novel and developing technique for the treatment of parathyroid hyperplasia/adenoma in the context of hyperparathyroidism (hpt) secondary to chronic kidney disease (ckd), and there is little literature on the subject. the purpose of this study is to determine its usefulness by contributing a case carried out in our hospital. methods: we selected a case of secondary hpt in a 62-year-old patient with ckd who presented a parathyroid adenoma detected clearly by ultrasound scanning. the patient was ruled out for surgery due to the high surgical risk from his comorbidities.
rfa of a right inferior parathyroid adenoma was performed. intact parathyroid hormone (ipth) was measured before rfa and 10 min after the procedure; calcium and phosphorus were measured the day after. the treatment was considered effective if ipth levels decreased by at least 50% 10 min after rfa and calcium levels decreased the day after. results: the ipth level before rfa was 1985 pg/ml. the ipth level 10 min after rfa was 835 pg/ml, a 58% reduction (normal values 15-65 pg/ml). calcium levels went from 10.2 at baseline to 8.6 the day after (normal values 8.5-10.5 mg/dl) and phosphorus from 4.5 to 5.4 mg/dl (normal values 2.5-4.5 mg/dl). the patient presented dysphonia as a complication, which improved with corticosteroid therapy. we are currently awaiting the next analytical controls at 3, 6 and 12 months after the procedure. conclusions: rfa of parathyroid adenomas for treating secondary hpt in patients with ckd is feasible in selected patients. this treatment may reduce the morbidity associated with surgery; it is performed on an outpatient basis, avoiding hospital admission, and this contributes to a reduction in health costs. however, longer follow-up is necessary to verify the good results in our case. splenectomy is one of the treatment strategies for advanced portal hypertension due to liver cirrhosis. after splenectomy, thrombocytopenia is dramatically ameliorated, and liver function parameters have also improved in several clinical settings. however, the mechanism underlying this phenomenon remains unclear. the aim of the present study was to analyze histological changes of the liver after splenectomy in humans, and to speculate on the underlying mechanism. subjects and methods: cirrhotic patients with hepatocellular carcinoma (hcc) who had undergone laparoscopic splenectomy prior (4 weeks-52 months) to hepatic resection were analyzed (n = 15). non-tumorous liver specimens obtained at hepatectomy were histologically investigated.
liver tissues from cirrhotic hcc patients who underwent only hepatectomy were used as controls (n = 15). results: after splenectomy, significant leukocytosis, especially an increase in monocytes, was observed in addition to thrombocytosis. in the non-cancerous liver tissues, many round-shaped cd68-positive macrophages accumulated after splenectomy, while this phenomenon was rarely observed in patients without splenectomy. the macrophages were cd163+ (m2 marker) and cd14-cd16+, suggesting an anti-fibrotic population. the accumulated macrophages were found around fibrous scars as well as around ck19+ epcam+ cells spreading out from the ductular reactions (dr). as a result, the number of ki67-positive hepatocytes significantly increased after splenectomy. the amount of platelets detected in the liver did not change even after splenectomy. finally, marked attenuation of the established liver fibrosis was detected after a relatively long duration. the accumulated macrophages expressed matrix metalloproteinase (mmp)-1 and fibroblast growth factor (fgf)-7, suggesting these molecules may participate in the resolution of established fibrosis and in hepatocyte proliferation. conclusion: splenectomy in cirrhotic patients with portal hypertension ameliorates liver fibrosis and stimulates liver regeneration. the mechanism possibly includes hepatic accumulation of anti-fibrotic cd163-positive macrophages and stimulation of dr-derived ck19+ epcam+ progenitor-like cells. in patients with advanced splenic fibrosis, splenectomy could be a feasible therapeutic modality. the paper tries to establish the role and the opportunity of using laparoscopy in abdominal contusions, as well as its indications and contraindications, combined in a therapeutic algorithm. we analyzed two groups of patients with abdominal contusions divided over two 5-year periods, 2008-2012 (51 patients) and 2013-2017 (60 patients), respectively.
we have separated the two periods because, starting from 2013, we established a strategy for dealing with cases of abdominal contusion that included diagnostic and/or therapeutic laparoscopy and nonoperative management. the investigation was done by fast ultrasonography, ct scan, plain abdominal radiography, diagnostic peritoneal lavage, and sometimes arteriography. in the second period we defined the indications for diagnostic and therapeutic laparoscopy: suspicion of hollow or parenchymal organ injury or mesenteric injury; the presence of hemoperitoneum or fluid in the peritoneal cavity in a stable patient without major hemorrhage; patients with apparently isolated injuries, without immediate vital risk and without other associated severe trauma. in this latter period we also applied nonoperative management for patients with grade 1 and 2 lesions of parenchymal organs who had no fluid in the peritoneum, or only a very small quantity. in the first period, all 51 patients were treated by open surgery, resulting in 4 unnecessary laparotomies in which no visceral lesions were revealed. in the second period, we applied nonoperative management to 8 patients out of 60: 2 patients with grade 1 and 2 splenic injuries, and 6 patients with grade 1 and 2 hepatic lesions. diagnostic laparoscopy was performed in 5 cases, in 2 of them without evidence of lesions, and in 3 other cases grade 1 lesions required no therapeutic action. therapeutic laparoscopy was required in one case of splenectomy and one of hepatorrhaphy. diagnostic laparoscopy is useful in abdominal contusions if certain indications are followed and in selected patients. in our study, with the introduction of modern therapeutic strategies, unnecessary laparotomies were completely avoided, and some lesions were even treated by laparoscopy. the new algorithm allowed 23% of patients to avoid laparotomy.
aims: about 150 cases of splenic hamartoma have been described in the literature since it was first described by rokitansky in 1861; it is a rare benign tumor. it is usually an incidental finding at laparotomy or autopsy. hamartomas are usually asymptomatic, but a few are symptomatic and can be associated with haematological alterations, in some cases with spontaneous splenic rupture and acute abdomen; two thirds of patients have multiple tumors. there are no specific findings that allow the preoperative diagnosis of this entity, which is made after the anatomopathological study of the surgical specimen; the specimen must be extracted entirely, which, together with the size of the spleen, makes the laparoscopic approach difficult. the aim of this video is to demonstrate the surgical technique of a complete laparoscopic approach for this type of lesion, without the need for assisting laparotomies (handport). methods: clinical case: a 44-year-old man admitted to internal medicine due to fever and left lumbar pain. additional explorations of interest are discussed, including: thrombocytopenia of probable peripheral origin, secondary to hypersplenism (bone marrow fna); ct: splenomegaly with 4 splenic masses deforming the splenic contour, compatible with atypical hemangiomas, without being able to rule out other vascular splenic tumors. results: complete semi-laparoscopic approach, 4 trocars, multilobulated splenomegaly (19 × 16 cm), mechanical vascular section, complete extraction in a bag through a minilaparotomy in the left flank. the patient had a successful postoperative recovery, being discharged on the 4th postoperative day. abdominal ultrasound in the 1st week showed portal vein thrombosis, which resolved after treatment with heparin. definitive result of the specimen: multiple splenic hamartomas. asymptomatic one year after surgery. the laparoscopic approach is a valid and effective alternative for benign splenic tumor lesions.
the size does not contraindicate this type of approach, although complete extraction of the spleen is recommended for its pathological study. we recommend weekly doppler ultrasound surveillance, given the existing risk of portal thrombosis after laparoscopic splenectomy. objectives: splenic cysts are a rare entity, with between 800 and 1000 cases currently described in the literature. the case of a female patient with a giant splenic cyst treated by conservative laparoscopic surgery with good results is presented here. method: a 30-year-old female without any relevant medical history, examined for abdominal pain in the left hypochondrium, nausea, postprandial swelling and a sensation of a mass. on examination the presence of such a mass was confirmed; the rest of the examination yielded no relevant findings, with no record of previous trauma or any other relevant incident. diagnosis was made by ultrasound and computerized tomography, which confirmed the existence of a large splenic cyst, 19 cm by 14 cm, in the superior portion of the spleen; parasite tests were negative, and haemogram, coagulation and biochemistry were normal. the patient underwent laparoscopic surgery, performing the deroofing technique on the cyst (two liters of orange-amber serous liquid, which was sent for analysis) as well as excision of the superior wall of the cyst, which was sent to pathology; the cavity was cleansed with saline solution, and omentum and drainage were then set in place. results: the patient evolved satisfactorily, with hospital discharge and drain withdrawal after 48 h. at regular check-ups after 12 and 24 months, the patient presents no symptoms or recurrence. pathology confirmed a primary splenic cyst, and the extracted liquid as cystic. conclusion: splenic cysts are primary (25%) or secondary (75%). diagnosis is performed through imaging tests, the ct scan being the standard test used.
regarding treatment, there is no clear consensus: until a few years ago complete splenectomy was the recommended treatment, but spleen-preserving techniques performed laparoscopically are now widely recommended in the literature. among the conservative techniques are percutaneous aspiration, with or without the injection of a sclerosing agent, partial splenectomy, marsupialization, cystectomy, decapsulation, unroofing and fenestration. the main issue is the recurrence rate. few cases of primary giant splenic cysts treated by laparoscopic decapsulation can be found in the literature; this treatment is simple and quick to perform, presenting a recurrence rate lower than other techniques such as aspiration and marsupialization. introduction: technological progress and its application in minimally invasive surgery of the thyroid gland offers new surgical approaches such as the transaxillary approach. this technique is still unusual in our environment and has recently begun to be incorporated into our surgical practice. the objective of this case is to explain step by step how to carry out a right transaxillary thyroidectomy and to emphasize the most relevant tips to take into account. we also review the main limitations we have observed so far. statement of the case: we present the case of a 49-year-old woman referred for evaluation of a left thyroid nodule without associated symptomatology. the blood test showed a normal thyroid profile. cervical ultrasound was performed, identifying a 3.5 cm single right nodule with well-defined edges and peripheral vascularization. no other nodules were identified. fna of the nodule described a bethesda iii. after evaluation we decided to perform a left transaxillary thyroidectomy.
discussion: surgical treatment of the thyroid gland by the transaxillary approach may be indicated in previously selected patients, offering the advantages of minimally invasive techniques (shorter recovery time, shorter incision length, etc.). surely, more evidence and experience are required to better assess minimally invasive approaches in thyroid surgery. surgery, taipei city hospital, yang-ming branch, taipei, taiwan; 2 surgery, taipei city hospital, taipei, taiwan. the first endoscopic thyroidectomy was performed in 1997 using a cervical approach. since then, various remote-access methods have been developed for thyroid surgery to avoid scarring of the neck. the transaxillary approach (taa), bilateral axillo-breast approach (baba) and retroauricular approach (raa) are in common use. the main benefit of these procedures is that there is no visible scar, which is one of the drawbacks of the conventional kocher's incision. however, these methods require more dissection and a longer operation time than conventional thyroidectomy. transoral thyroidectomy (tovet) is a new approach that has become popular in recent years; however, most surgeons have performed it as a single procedure because of the limited number of patients and the learning period. since 2017, more than 100 patients have received an endoscopic thyroidectomy (et) procedure at our hospital. we compare the surgical procedure of the bilateral axillo-breast approach (baba) with the transoral vestibular approach (tovet) in our hospital, both performed by a single surgeon who has spent equal amounts of time with the two procedures. the patient selection process, operation time, operative procedure and approach, learning experience, cosmetic effect, oncologic considerations and surgical outcome are discussed thoroughly. presenting a case of a thyroid metastasis from an ovarian carcinoma, we conducted a review of the literature without finding similar reported cases.
case: a 43-year-old woman consulted for progressive asthenia, weight loss and ascites. abdominal ct found a conglomerate in the pelvis involving the ovaries and peritoneal implants, the largest up to 10 cm. a biopsy of an omental epigastric lesion and paracentesis were performed, resulting in adenocarcinoma and omental metastasis from an ovarian neoplasm, associated with a ca125 of 4553. the patient started neoadjuvant therapy with carboplatin-paclitaxel. imaging controls showed a favorable response. three months later, intervention was carried out: laparotomy, hysterectomy + double adnexectomy + omentectomy + appendectomy + pelvic and paraaortic lymphadenectomy. the anatomopathological study showed a poorly differentiated endometrioid carcinoma, omental infiltration and absence of metastatic lymphatic involvement. while on maintenance treatment with bevacizumab, the patient presented symptoms of arthritis, and hypercalcemia was detected (11.4) with a pth of 268. scintigraphy was performed and an area of increased uptake was detected in the lower pole of the rtl, suggestive of a parathyroid adenoma. we initially proposed radiofrequency ablation, but a prior thyroid ultrasound visualized 3 nodular lesions in the rtl compatible with adenoma and a mass in the superior mediastinum that seemed to correspond to the area of greatest uptake on scintigraphy, so the procedure was finally dismissed and surgery proposed. during the intervention we found a nodule of hard consistency in the inferior pole of the rtl and lymphadenopathies of hard consistency in the right level vi area, which were sent for intraoperative anatomopathological study, with the result of adenocarcinoma metastasis without an identified origin. a total thyroidectomy, parathyroidectomy and central node dissection were performed, with the result of a parathyroid adenoma, lymphatic invasion by the ovarian carcinoma and extensive vascular permeation of the thyroid by carcinoma.
the patient remains on oncological treatment with carboplatin-caelyx. at the last follow-up, pth and calcemia remain normal. conclusion: although some cases of neoplastic thyroid involvement associated with struma ovarii have been published, no cases similar to the one described were found, either in the literature or in our experience, which is why this is an exceptional case. the aim of the study was to evaluate the effectiveness of embolization of the splenic artery to prevent portal bleeding. methods: the study included 96 patients with esophageal variceal bleeding that developed as a result of decompensated liver cirrhosis of various etiologies, child-pugh classes b and c. patients were divided into 2 groups. the main group included 71 (73.95%) patients who underwent endoscopic ligation of the bleeding varix and, to prevent recurrence of bleeding, embolization of the splenic artery with gianturco coils. the comparison group consisted of 25 (26.05%) patients who received only drug therapy. to assess the effectiveness of treatment, the patients' condition was monitored for 6 months. results: the average age of patients in the comparison group was 56.8 ± 4.4 years. using only drug therapy, we stopped bleeding in 54 (76.1%) patients. in all cases, at the end of treatment, we observed an improvement in clinical and laboratory parameters. 17 (23.9%) patients died. the duration of treatment was 10.1 ± 2.4 days. the average age of patients in the main group was 55.2 ± 5.6 years. performing endoscopic ligation of bleeding varices, we stopped bleeding in 23 (92.0%) patients. in all cases, at the end of treatment, we observed an improvement in clinical and laboratory parameters. 2 (8.0%) patients died. the duration of treatment was 6.5 ± 2.7 days. a statistical analysis of mortality and duration of treatment revealed a significant difference (p < 0.01) between the groups in both indicators. 
after splenic artery embolization, a 60-80% reduction in blood flow was achieved in all cases. after 6 months, among 54 patients in the comparison group, bleeding relapse occurred in 12 (22.2%) cases. in the main group, this indicator was 8.7% (2 patients), significantly (p < 0.01) different from the comparison group. conclusion: performing embolization of the splenic artery in patients after endoscopic hemostasis of variceal bleeding reduces the pressure in the portal system, which in turn leads to a decrease in the frequency of bleeding recurrences. thoracoscopic esophagectomy for aortoesophageal fistula y. ebihara, t. shichinohe, y. kurashima, s. murakami, surgery ii, hokkaido university, sapporo, japan. background: aortoesophageal fistula (aef) is an uncommon but highly fatal condition. there are surgical, endoscopic and interventional radiological treatment options; however, the definitive treatment is surgical intervention. video-assisted thoracoscopic surgery (vats) has gradually been accepted as a substitute for thoracotomy to reduce the invasiveness of radical surgery for esophageal cancer. we aimed to evaluate the feasibility of vats esophagectomy (vats-e) for aef in this study. introduction: achalasia is the most common motility disorder of the esophagus. heller cardiomyotomy associated with an antireflux technique is the treatment of choice in patients with this disease; however, a small group of patients may present a recurrence of symptoms, making a new surgery necessary, which is an important challenge for most surgeons. we report a case of recurrence after laparoscopic myotomy and dor fundoplication as a paradigm for the appropriate management of this kind of patient. 
methods: a 63-year-old female underwent a previous myotomy and dor fundoplication in 2011 for achalasia. six years after surgery, the patient presented epigastric pain and dysphagia. the work-up was performed with barium swallow, ph-metry, manometry, ct scan and mri, showing a recurrence of her disease. the patient was transferred to our center, where she underwent a new surgery. the key points of the new surgery include the following steps: dissection of the previous adhesions, dissection of the dor partial fundoplication, avoiding dissection of the anterior esophageal wall at the level of the hiatus (the area of the previous myotomy) in order to avoid perforation of the esophagus, lateral and posterior dissection of the distal esophagus, lateral myotomy at the right wall of the esophagus, and a toupet fundoplication. all of these steps are done under intraoperative endoscopic supervision in order to confirm good passage to the stomach and to identify any perforation. results: following these steps, several patients have been operated on in our center with excellent results. in all of these cases, including the patient presented previously, the symptoms have disappeared. conclusions: achalasia is a rare motility disorder of the esophagus, and recurrences are an important challenge for surgeons. a proper therapeutic strategy using the different diagnostic exams and supervision by a group of experts in this entity are the basis for obtaining good results in these situations. aims: re-do fundoplication is usually performed for recurrent reflux symptoms due to wrap failure or recurrent hiatus hernia. conversely, persistent dysphagia may occur early due to a tight wrap/crural repair, which should be avoided by good surgical technique. a small group of patients, however, may suffer progressive dysphagia due to weakening motility (especially in older patients), fibrosis of the wrap, or a combination of the two. 
this video demonstrates the successful treatment of this problem with laparoscopic conversion from nissen to posterior toupet fundoplication. a 72-year-old man underwent an uncomplicated laparoscopic nissen fundoplication in 2015 with complete resolution of reflux symptoms. he re-presented 2 years later, still free of reflux but suffering progressive dysphagia and troublesome regurgitation. investigations demonstrated an intact wrap and no mechanical obstruction, but confirmed low-amplitude peristalsis. a trial endoscopic dilatation improved symptoms for 11 days before recurrence, suggesting likely wrap fibrosis (which would reduce elasticity and impede passage of the food bolus), justifying consideration of a conversion from nissen to toupet. results: this video demonstrates the expected adhesions between the fundoplication and the inferior surface of the left lobe of the liver, mobilisation and division of the nissen fundoplication, and reconstitution of a posterior toupet fundoplication. the patient made a good recovery and was discharged the following day. three- and six-month follow-up confirmed complete resolution of symptoms with no recurrence of reflux. conclusion: laparoscopic re-do surgery for late-onset progressive dysphagia is a safe and viable option. patients must be thoroughly investigated and carefully selected for an appropriately tailored procedure. they should also be advised of the increased risks associated with re-do surgery. the anatomy can be unpredictably distorted by variable adhesions, and this operation should therefore only be performed by laparoscopic surgeons experienced in both primary and re-do fundoplication. methods: i report an unusual iatrogenic injury of the cervical esophagus that resulted in complete resection after total thyroidectomy for papillary carcinoma of the thyroid. the patient presented to our center 4 days post surgery. 
the video will show the steps used to treat this unusual complication: neck exploration, laparoscopic transhiatal esophagectomy with creation of a gastric tube preserving the right gastroepiploic artery, and a cervical anastomosis between the cervical esophagus and the stomach. were open and 13 were minimally invasive esophagectomies. of the 27 patients, 10 were squamous cell carcinoma, 13 adenocarcinoma, and 3 other histological diagnoses such as gastrointestinal stromal tumor and schwannoma. the median length of stay for patients who underwent minimally invasive esophagectomy was 9 days (range 9 to 120 days), while the median length of stay for patients who underwent open esophagectomy was 15 days (range 9 to 45 days). the minimally invasive group had a shorter icu stay of 1 day. for 30-day morbidity, the minimally invasive esophagectomy group had 2 patients with anastomotic leaks and 1 with postoperative pneumonia, while the open esophagectomy group had 1 patient with anastomotic leak, 1 with postoperative stricture and 1 with delayed gastric emptying. there were 2 mortalities in the minimally invasive group and none in the open group. conclusion: our data show that patients who underwent minimally invasive esophagectomy had a shorter duration of hospitalization with similar perioperative morbidity rates. minimally invasive esophagectomy is a viable surgical option for a select group of patients. aims: there has been an increasing tendency towards minimally invasive surgery for esophageal cancer. our aim was to evaluate the results of the thoracoscopic approach (ta) and compare them with those of the open approach (oa) at our institution. methods: retrospective review of all patients who underwent esophagectomy for esophageal cancer (adenocarcinoma or squamous cell) between 2013 and 2017. patients with siewert iii tumors and those who did not need a thoracic approach were excluded. 
results: during the study period, 83 esophagectomies were performed, 23 through ta. in 43.5% of these, the abdominal stage was done by laparoscopy. when comparing ta versus oa, there were no statistically significant differences in the baseline characteristics of the two groups (mean age, median body mass index, ecog performance status, asa score, smoking status, diabetes mellitus, pulmonary disease, histologic type, clinical staging, and neoadjuvant chemo- and radiotherapy). regarding outcomes, there were no significant differences in the need for intraoperative transfusion, median intraoperative blood loss, operative time or length of stay. although not significant, in the ta group there was a tendency towards higher overall morbidity (69.6% versus 58.3%, p = 0.347); major morbidity, ctcae 3-5 (56.5% versus 38.3%, p = 0.135); anastomotic leak (34.8% versus 16.7%, p = 0.084); and re-intervention rate (17.4% versus 15%, p = 0.748). on the other hand, in the ta group there was a tendency (although not significant) towards a lower rate of respiratory complications (17.4% versus 33.3%, p = 0.152), a lower rate of r1 margins (4.3% versus 13.3%, p = 0.433) and a higher median number of lymph nodes removed (22 versus 18, p = 0.083). conclusions: in our series, outcomes of ta were similar to oa, with a tendency towards fewer respiratory complications, a lower rate of r1 margins and a higher number of lymph nodes removed in the ta group. the impact of these findings on survival remains to be seen. the tendency towards higher morbidity may be related to the learning curve, since these were the first cases performed at our center. background: esophagectomy is a surgical procedure burdened by a high morbidity rate. the effect of the minimally invasive (mi) approach in elderly patients is still not clear. aim: the aim of this study was to analyze the impact of the mi approach on the postoperative course according to patient age. 
methods: a consecutive series of 692 patients underwent elective oncological esophagectomy between 1997 and 2017. all data were entered into a prospective database. patients submitted to 3-field or transhiatal esophagectomy were excluded, and only ivor-lewis open, hybrid or totally minimally invasive esophagectomies were included. patients were stratified according to age into 3 groups: group a (≤ 50 years), 53 patients; group b (51-70 years), 269; and group c (≥ 71 years), 126. clinical and pathological factors influencing surgical outcome were evaluated. complications were classified according to clavien-dindo (cd). results: as expected, outcomes worsened with patient age: complications (cd ≥ 3b: 7.5% group a, 13% group b and 21% group c; p = 0.001), mortality (0% group a, 3% group b and 5.5% group c; p = 0.035) and length of stay (10 days group a, 11 days group b and 13 days group c; p = 0.001). a statistically significant higher incidence of anastomotic leaks was observed among patients submitted to totally mi esophagectomy in group c versus groups a and b (12.5%, 0% and 7%, respectively). major respiratory complications were not statistically different among the three subgroups. conclusions: old age has a significant impact on outcomes after esophagectomy. in this subset of patients, a mi approach could also increase postoperative morbidity. elderly patients should be carefully selected before being submitted to mi esophagectomy. introduction: esophagectomy is a major surgical procedure with morbidity and mortality related to the patient's condition, stage of the disease, complementary treatments, and surgical experience. minimally invasive esophagectomy (mie) may lead to a reduction in perioperative morbidity and mortality with very good quality of life. 
material and method: we present the experience of our center of excellence in esophageal surgery with totally mie through a thoracolaparoscopic modified mckeown three-stage approach, followed by esophageal reconstruction by gastric intrathoracic pull-up and cervical esophagogastric anastomosis, used for the treatment of thoracic esophageal cancer. results: in the last 4 years, mie was initially performed in our clinic with extracorporeal preparation of the gastric conduit, with reduced lung complications and hospital stay. we then introduced totally minimally invasive esophagectomy with laparoscopic-assisted feeding jejunostomy using a 3d high-definition camera. operative times were: thoracic, 120 min; abdominal, 130 min; and cervical, 50 min, with a total of 300 min. the augmented 3d high-definition image provided an excellent visual field that allowed accurate identification of dissection planes and extensive periesophageal and perigastric lymphadenectomy. the short-term outcomes of the totally minimally invasive esophagectomy procedure were very encouraging, with early feeding via jejunostomy; the control of the cervical anastomosis was usually performed on the 5th postoperative day, and the patients were discharged on the 9th postoperative day without any symptomatology. no major complications were reported at the first- and third-month follow-up. the long-term oncological results are being evaluated. conclusions: the totally minimally invasive approach using advanced endoscopic surgical technology allowed these patients a simple postoperative evolution, no major complications, and a good recovery after an extensive surgery. the solid experience in open esophageal surgery of upper gastrointestinal surgeons provides a fast learning curve for complex minimally invasive surgical procedures with reduced perioperative morbidity. 
long-term follow-up should confirm the results from the literature regarding survival, which is expected for these patients to be at least equivalent to outcomes after open esophagectomy. introduction: esophageal fistulas, benign or malignant, represent a real challenge for surgeons and gastroenterologists regarding treatment and outcome. in these cases, endoscopic treatment is the first-line approach, being less invasive and sometimes avoiding the need for surgery. this includes clips, stents, glue and even suture. material and method: we analyzed 9 esophageal fistulas in patients with benign or malignant pathology, diagnosed and treated in the first 6 months of 2018. the management of this complication included a self-expandable esophageal metallic stent. we evaluated the diagnosis, the surgical intervention, the time until development of the leak, and the localization and management of the fistula. results: 5 were postoperative leaks and 4 were spontaneous esophageal fistulas. the localization was cervical in one case, thoracic in 5 cases and abdominal in 3 cases. for the postoperative fistulas, in 4 patients the treatment included at least one surgical reintervention with lavage and drainage, besides the insertion of an esophageal metallic stent. in the other cases, endoscopic treatment and antibiotic therapy were enough. in 2 cases, the stent migrated and needed repositioning. 30-day mortality was 22%, both patients from the postoperative group. conclusions: esophageal fistulas represent a severe complication, usually in patients who are already immunocompromised. endoscopic management, including a self-expandable esophageal metallic stent, can be the main approach, stopping the contamination and permitting early oral feeding. disadvantages include the possibility of migration and the need for removal after 6-8 weeks. 
methods: five hundred and one patients with esophageal cancer who underwent mie from 2010 to 2016 at our department were eligible. we considered the risk factors for the complications of pneumonia, anastomotic leakage, and hoarseness after surgery, and the risk factors for difficulty of surgery. results: the risk factors for postoperative complications in univariate analysis were age over 75 years (odds ratio: 2.1, p = 0.01), asa-ps above ii (odds: 3.1, p < 0.01), more than 300 g of bleeding (odds: 2.1, p = 0.01), more than 450 min of operation time (odds: 2.2, p < 0.01), and colon reconstruction (odds: 3.2, p = 0.02). the only risk factor in multivariate analysis was asa-ps above ii (odds: 3.2, p = 0.01). the risk factors for major bleeding were colon reconstruction (odds: 6.5, p < 0.01) and dissection of more than 50 lymph nodes (odds: 1.5, p = 0.05). the risk factors for long operation time without cervical lymph node dissection were neoadjuvant therapy (odds: 8.1, p < 0.01), dissection of more than 60 lymph nodes (odds: 3.0, p = 0.01), and colon reconstruction (odds: 8.1, p < 0.01); with cervical lymph node dissection, they were pstage iii or higher (odds: 2.4, p < 0.01) and dissection of more than 60 lymph nodes (odds: 2.8, p = 0.03). conclusions: considering these risk factors, we should perform perioperative management more carefully. methods: a 74-year-old man with tobacco and alcohol habits suspended for years, under treatment for arterial hypertension, consulted for progressive dysphagia of 4 months' evolution. he was diagnosed with a stenosing epidermoid carcinoma of the distal third of the esophagus, txn1m0. it was decided to place a prosthesis, which was effective, followed by neoadjuvant chemoradiotherapy; 6 weeks after its completion, surgery was performed. results: the surgery was performed in 2 stages, initially by laparoscopy. 
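as an aside on the univariate odds ratios reported above: an odds ratio from a 2x2 table is simply the odds of the event in the exposed group divided by the odds in the unexposed group. the sketch below is illustrative only; the counts are invented for the example and are not data from the study.

```python
# minimal sketch (invented counts, not study data): a univariate odds ratio
# from a 2x2 table, as reported in risk-factor analyses like the one above.

def odds_ratio(exposed_event, exposed_no_event, unexposed_event, unexposed_no_event):
    """odds of the event among exposed divided by odds among unexposed."""
    return (exposed_event / exposed_no_event) / (unexposed_event / unexposed_no_event)

# example: 10/20 events among exposed vs. 5/25 among unexposed
print(odds_ratio(10, 10, 5, 20))  # (10/10) / (5/20) = 4.0
```

a significance level (the p-values quoted in the abstract) would come from a chi-square or fisher's exact test on the same table, which libraries such as scipy provide.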
the esophageal hiatus and the greater curvature are dissected, preserving the right gastroepiploic artery, with lymphadenectomy of the celiac trunk and section of the left gastric pedicle. gastric plasty is performed with a section from the lesser curvature towards the fundus. the procedure continues by thoracoscopy: section of the azygos vein, dissection of the middle and lower thirds of the esophagus, and lymphadenectomy. the gastric plasty is pulled up, the proximal esophagus is sectioned, and a latero-lateral intrathoracic gastro-oesophageal anastomosis is performed. the anatomopathological study reported ypt3 and pn2 with 4/33 positive nodes, and disease-free surgical margins. the patient was discharged without complications on the 12th day and did not require readmission. conclusions: ivor-lewis endoscopic surgery is safe and meets oncological criteria in selected patients with distal esophageal neoplasia when performed by an experienced esophagogastric unit. background: the rates of thoracoscopic esophagectomy performed in the prone and left lateral decubitus positions are similar in japan. we retrospectively reviewed short-term outcomes of thoracoscopic esophagectomy for esophageal cancer performed in the left lateral decubitus position under artificial pneumothorax by co2 insufflation in a single institution. this study aimed to evaluate the feasibility of this procedure. methods: between july 2013 and december 2017, 124 patients with esophageal cancer underwent thoracoscopic esophagectomy in the left lateral decubitus position under artificial pneumothorax by co2 insufflation. the thoracic procedure is performed as follows: the lymph nodes around the right recurrent laryngeal nerve are dissected. on the cranial side, the lymph node dissection is advanced to the level of the inferior thyroid artery. then, the assistant rotates the trachea toward the ventral side, and the lymph nodes around the left recurrent laryngeal nerve are dissected. 
the middle and inferior mediastinal lymph nodes are dissected, including the supradiaphragmatic lymph nodes and the dorsal lymph nodes around the thoracic descending aorta. then, the esophagus is transected using an automatic suture device. finally, the lymph nodes of the tracheal bifurcation area are dissected. we retrospectively analyzed these patients. results: the completion rate of thoracoscopic esophagectomy was 92.0%, and the procedure was converted to thoracotomy in five patients due to hemorrhage or severe adhesion. the mean intrathoracic operative time, intrathoracic blood loss, and number of dissected mediastinal lymph nodes were 210.5 min, 120.5 ml, and 23.0, respectively. postoperative complications included pneumonia (13.7%), anastomotic leakage (16.9%), and recurrent nerve paralysis (16.1%). postoperative (30-day) mortality was 2/124 (1.6%), due to ards and non-occlusive mesenteric ischemia (nomi), respectively. conclusions: standardization of the procedure for thoracoscopic esophagectomy in the left lateral decubitus position under artificial pneumothorax by co2 insufflation, with a standardized clinical pathway for perioperative care, led to favorable surgical outcomes. introduction: recently, thoracoscopic surgery has become widespread even for the chest procedure in thoracic esophageal cancer surgery. as an advantage of minimally invasive esophagectomy, it is possible to perform sophisticated procedures owing to its magnified visual effects. on the other hand, short-term perioperative safety and oncological safety are still unclear. in cases where abnormal anatomy or comorbidity in the thoracic cavity is observed, it is necessary to carry out thoracic surgery that ensures safety while keeping in mind possible conversion to transthoracic surgery. here, we report esophageal resection of a thoracic esophageal cancer accompanied by a 20 mm saccular aneurysm on the inside of the aortic arch. patient: a 67-year-old man visited a nearby doctor with a chief complaint of discomfort during swallowing. 
upper gastrointestinal endoscopy revealed esophageal cancer, and the patient was referred to our hospital. ct revealed a 20 mm saccular aneurysm on the inside of the descending aorta in contact with the thoracic esophagus. the preoperative diagnosis was middle thoracic esophageal cancer, 0-iic ct1bn0m0 stage ia (uicc 8th). we performed thoracoscopic esophagectomy and lymph node dissection as curative surgery. the anterior surface of the aorta was exposed from the lower mediastinum, and dissection proceeded cranially, reaching the lower end of the saccular aneurysm at the level of the lower pulmonary vein. the dorsal side of the esophagus was peeled off along the margin of the aneurysm, and esophageal resection was performed. conclusion: we reported thoracoscopic esophageal resection for thoracic esophageal cancer with a saccular aneurysm of the descending thoracic aorta. thoracoscopic surgery, which can fully exploit its magnification effect, seemed to be useful for anatomically challenging cases. introduction: anastomotic leakage from an oesophagojejunal (oj) anastomosis after total gastrectomy is associated with a high morbidity and mortality rate. reported leakage rates vary between 3% and 11%, but there is a lack of consensus on management. in the past, it often required surgical intervention or radiological abscess drainage, which kept patients fasted with an external drain for a long duration. recently, various endoscopic options (oesophageal stents, clips, fibrin glue and endoluminal vacuum therapy) have been introduced, with variable outcomes. here, we present a case of oj anastomotic leak managed with a combination of innovative endoluminal and radiological techniques to insert a double pig-tailed catheter. aim: to introduce the feasibility of a double pig-tailed catheter for drainage and management of oj anastomosis leak. a 70-year-old man presented with a two-month history of dysphagia. upper endoscopy (ogd) showed a suspicious cardio-oesophageal lesion. histology of the biopsy confirmed adenocarcinoma. 
ct scan of the thorax, abdomen and pelvis showed irregular thickening at the cardio-oesophageal junction with regional lymphadenopathy and no distant metastases. he underwent an uneventful d2 total gastrectomy. on the 5th postoperative day, the patient had spiking fever and newly developed atrial fibrillation. urgent ct of the thorax, abdomen and pelvis with oral omnipaque showed lower mediastinal gas-containing fluid adjacent to the oj anastomosis within the left retrocrural space, suspicious for a leak. ogd evaluation showed a pin-hole oj leak. a guidewire was inserted via endoscopy into the left retrocrural space under radiological guidance. a double pig-tail 7fr 5 cm catheter was subsequently inserted via a seldinger approach over the guidewire. the proximal end of the pig-tail was pushed into the left retrocrural space and the distal end positioned into the efferent jejunal limb with crocodile-jaw forceps through the endoscope. diluted contrast was injected and passed down the efferent limb with minimal leak. outcome: after double pig-tail insertion, the patient started clear feeds on the 1st day post-insertion. one week later, he was started on full feeds. repeat upper endoscopy and stent removal were done two weeks later. contrast injection showed a small blind-ended sinus tract from the anastomosis toward the left pleural space without obvious leak. conclusion: this radio-endoscopic approach is a novel minimally invasive technique that allows insertion of a double pig-tailed internal drain to control an oj anastomosis leak. it allows early enteral feeding and avoids external drainage. background: the number of gastric cancer (gc) survivors, especially long-term survivors, is increasing. how best to evaluate the disease-specific survival (dss) of gc survivors over time is unclear. we aimed to assess changes in the conditional survival of patients with gc after curative-intent gastrectomy and the evolution of the impact of well-known risk factors. 
methods: clinicopathological data from 22,265 patients who underwent curative-intent resection for gc at four specialized centres (three in china and one in italy) and from the surveillance, epidemiology, and end results (seer) database were retrospectively analysed. changes in the patients' 3-year conditional disease-specific survival (cs3) were analysed. we used time-dependent cox regression to analyse which variables had long-term effects on dss and devised an accurate, dynamic dss predictive model based on the length of survival. results: the median follow-up time was 74 months, and disease-specific death occurred in 9,927 cases (44.6%). the dss of the patients after surgery was dynamic, and most disease-specific deaths occurred within the first 3 years after surgery. based on 1-, 2-, 3-, 4- and 5-year survivorships, the cs3 of the population increased gradually from 62% to 68.1%, 77.3%, 83.7%, 87.6%, and 90.6%, respectively. subgroup analysis showed that patients who had poor prognostic factors initially demonstrated the greatest increase in cs3 with postoperative survival time (e.g., n3b: 26.6% to 84.1%, +57.5%, vs. n0: 84.1% to 93.3%, +9.2%). time-dependent cox regression analysis showed the following predictor variables constantly affecting dss: age, number of examined lymph nodes, t stage, n stage and site (all p < 0.05 at 5 years after gastrectomy). the influence of prognostic factors on dss and cs3 changed dramatically over time. based on data from several large global centres, we developed an effective model for predicting the dss of gc patients based on the length of survival time. this model can provide personalized long-term follow-up strategies for patients. methods: we retrospectively analyzed clinicopathological data for 253 rgc patients who underwent radical gastrectomy at 6 centers. the prognosis prediction performances of the ajcc 7th and ajcc 8th tnm staging systems and the trm staging system for rgc patients were evaluated. 
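as a worked illustration of the conditional-survival idea used above: 3-year conditional survival is cs3(t) = s(t+3) / s(t), the probability of surviving three more years given survival to year t. the survival values in the sketch below are invented for the example, not the study's data; the point is that cs3 can rise over time even though the unconditional curve s(t) falls.

```python
# illustrative sketch (values invented, not study data): 3-year conditional
# survival computed from an unconditional survival curve s(t).

def cs3(survival, t):
    """conditional 3-year survival given the patient is alive at year t."""
    return survival[t + 3] / survival[t]

# hypothetical disease-specific survival probabilities s(t) by year
s = {0: 1.00, 1: 0.80, 2: 0.68, 3: 0.62, 4: 0.58, 5: 0.56, 6: 0.54, 7: 0.53, 8: 0.52}

print(round(cs3(s, 0), 3))  # s(3)/s(0) = 0.62 at baseline
print(round(cs3(s, 3), 3))  # higher for patients who already survived 3 years
```

this matches the abstract's observation that most disease-specific deaths occur early, so survivors accumulate an increasingly favourable outlook.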
web-based prediction models based on independent prognostic factors were developed to predict the survival of the rgc patients. external validation was performed using a cohort of 49 chinese patients. results: the mean number of retrieved lymph nodes was 16.1, and in 54.2% of patients the number was ≥ 15. the predictive abilities of the ajcc 8th and trm staging systems were no better than those of the ajcc 7th staging system (c-index: ajcc 7th vs. ajcc 8th vs. trm, 0.743 vs. 0.732 vs. 0.744; p > 0.05). within each staging system, the survival of adjacent stages was not well discriminated (p > 0.05). multivariate analysis showed that age, tumor size, t stage and n stage were independent prognostic factors for overall survival (os), disease-specific survival (dss) and disease-free survival (dfs). based on the above variables, we developed 3 web-based prediction models, the huang os model, the huang dss model and the huang dfs model, which were superior to the ajcc 7th staging system in discriminatory ability (c-index), predictive homogeneity (likelihood ratio chi-square), predictive accuracy (aic, bic), and model stability (time-dependent roc curves). stratified analysis showed that regardless of whether more or fewer than 15 lymph nodes were retrieved, the predictive performance of the web-based prediction models was still better than that of the other staging systems. a decision curve analysis showed that the huang models provided better net benefits than the other staging systems. external validation showed predictive accuracies of 0.780, 0.822 and 0.700 in predicting os, dss and dfs, respectively. conclusion: the ajcc tnm staging systems and the trm staging system did not discriminate well among the rgc patients. we have developed and validated visual web-based prediction models that are superior to these staging systems. 
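the c-index used above to compare staging systems is harrell's concordance index: among usable pairs of patients, the fraction where the model's higher risk score corresponds to the shorter observed survival. a minimal sketch, with invented follow-up data (not the study's), ignoring tied event times:

```python
# minimal sketch (invented data) of harrell's concordance index for
# right-censored survival data; ties in risk score count as 0.5.

def c_index(times, events, risk_scores):
    """concordance over usable pairs: i had the event before j's observed time."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:  # pair (i, j) is usable
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / usable

# hypothetical follow-up times (months), event flags, and model risk scores
times = [12, 30, 45, 60, 80]
events = [1, 1, 0, 1, 0]
scores = [0.9, 0.7, 0.4, 0.5, 0.2]
print(c_index(times, events, scores))  # 1.0 -- every usable pair is concordant
```

a c-index of 0.5 is chance-level discrimination, so the 0.73-0.74 values reported for the staging systems indicate moderate discrimination.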
objective: to perform competing risk analysis and evaluate cancer- and noncancer-specific mortality in patients with gastric cancer after radical surgery. methods: a total of 5051 patients from our department (as the training set) and 7123 patients from the surveillance, epidemiology, and end results (seer) database (as the validation set) were enrolled in the study. the cumulative incidence of cancer- and noncancer-specific mortality was determined by univariate and multivariate competing risk analysis. results: the five-year cancer- and noncancer-specific cumulative incidence of death (cid) in the training set was 36.9% and 2.5%, respectively, significantly lower than in the validation set (48.2% and 8.6%, respectively). multivariable analysis showed that age, tumor site, tumor size and ptnm stage were independent predictors of gastric cancer-specific mortality and overall survival, whereas age was an independent predictor of gastric noncancer-specific mortality. noncancer-specific cid surpassed cancer-specific cid for ptnm stage i patients approximately 8 years after surgery, but never for stage ii and iii patients. moreover, for stage i patients, the time point at which noncancer-specific cid surpassed cancer-specific cid became earlier with increasing age, at only 3.5 years after surgery for patients more than 74 years of age. conclusions: age is an independent predictor of gastric cancer- and noncancer-specific mortality and overall survival for patients after radical surgery. for patients with stage i gastric cancer, noncancer-specific mortality is a significant competing event, with an increasing impact as age increases. aim: the aim of the study was to analyse the possibility of function-preserving gastrectomy based on the sentinel lymph node (sln) concept. methods: during the last 5 years, in two clinics of odessa national medical university, we used mapping procedures in 25 patients with early gastric cancer. 
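the cumulative incidence of death (cid) used in the competing-risks analysis above counts deaths from one cause while treating deaths from the other cause as competing events rather than as censoring. in the simplest fully observed case (no censoring), the estimator reduces to a cause-specific proportion; the sketch below uses an invented toy cohort, not study data (a real analysis with censoring would use an aalen-johansen estimator).

```python
# illustrative sketch (invented data, no censoring): nonparametric cumulative
# incidence of death from each cause, cid_k(t) = (deaths from cause k by t) / n.

def cumulative_incidence(times, causes, cause, t):
    """fraction of the cohort dead from `cause` by time t (no censoring)."""
    n = len(times)
    return sum(1 for ti, ci in zip(times, causes) if ci == cause and ti <= t) / n

# hypothetical cohort: follow-up time in years, cause of death (None = alive)
times = [1, 2, 2, 3, 4, 5, 6, 8, 9, 10]
causes = ["cancer", "cancer", "other", "cancer", None, "other", "cancer", None, "other", None]

print(cumulative_incidence(times, causes, "cancer", 5))  # 0.3
print(cumulative_incidence(times, causes, "other", 5))   # 0.2
```

the abstract's key observation corresponds to the time t at which the "other" curve crosses above the "cancer" curve, which for stage i patients happened around 8 years after surgery.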
there were 11 men and 14 women, aged 52 to 85 years, mean age 56.8 ± 8.2 years. blue dye was injected into 4 quadrants of the submucosal layer surrounding the primary lesion using an endoscopic puncture needle in 16 patients. blue lymphatic vessels and blue-stained lymph nodes could be identified by laparoscopy within 15 min of the blue dye injection. we used 0.5% indocyanine green in 9 patients, injected by intraoperative endoscopy. the new technology of indocyanine green (icg) fluorescent imaging was used for sln mapping in these 9 patients. results: among the 16 patients in whom we used blue dye for sln mapping, slns were positive in 5 patients and negative in 11 patients. in all 16 patients, distal gastrectomy (dg) was performed with d2 lymph node dissection. of the 11 patients with negative slns, metastases in other lymph nodes were detected in 3 patients. among the 9 patients in whom we used icg fluorescent mapping, positive slns were detected in 2 patients. laparoscopic-assisted distal gastrectomy with d2 lymph node dissection was performed in these patients. of the 7 patients with negative slns, partial wedge resection was performed in 2 patients and segmental pylorus-preserving gastrectomy in 5 patients. during a follow-up period of 3 to 24 months, no recurrences or metastases were detected in this group of patients. qol in this group of patients was much better than in patients with conventional distal gastrectomy. conclusions: the icg fluorescent method is highly effective for detection of slns. in patients with early gastric cancer, function-preserving gastrectomy based on sln navigation may be a promising strategy to achieve better results. 
laparoscopic procedure taking advantage of robotic gastrectomy for gastric cancer to prevent pancreatic fistula. gastrointestinal surgery and surgical oncology, ehime university, toon-city, japan. backgrounds and aims: analysis of the japanese national clinical database (ncd) showed that laparoscopic gastrectomy (lg) had a rather increased rate of pancreatic fistula (pf) compared with open gastrectomy. on the other hand, last year, a multicenter collaborative study of robotic gastric cancer surgery (rg) showed that complications including pf were significantly decreased as compared with lg. in this study, we employed a new easy-to-use device in lg to minimize pf during suprapancreatic lymph node dissection requiring pancreatic retraction, and compared it with conventional lg and rg. materials and methods: an internal organ retractor (aesculap®), used to grasp the gastropancreatic fold and the suprapancreatic peritoneum to imitate the da vinci forceps, was guided with a thread outside the body. 104 patients (jan. 2016 to nov. 2018) were divided into three groups as follows: group lg-1 (n = 40), lg using the standard devices; group lg-2 (n = 40), lg using the organ retractor; group rg (n = 24). amylase value in the drain (d-amylase) and drainage volume, intraoperative bleeding, postoperative hospital stay, and incidence of cd (≥ grade iii) complications were compared among the three groups. results: data are indicated as lg-1/lg-2/rg (mean ± sd), respectively. on the day after surgery and the third day after surgery, d-amylase was 1203 ± 260/645 ± 148/608 ± 285 and 383 ± 228/323 ± 136/176 ± 98 (iu/l). d-amylase was significantly lower in the lg-2 and rg groups than in lg-1 on the day after surgery. the operation time was significantly longer in rg: 318 ± 38/290 ± 64/396 ± 47 (min). bleeding volume and hospital stay did not differ among the 3 groups. pancreatic fistula (cd ≥ grade iii) was observed only in the lg-1 group, at 5%. discussion: pf (cd ≥ grade iii), which may lead to mortality, occurred in the lg-1 group. 
a significant elevation of d-amylase on the 1st postoperative day was prevented in lg-2, just as in rg, which seemed to prevent pf afterwards. the multijoint forceps is known to be an advantage of rg, and it cannot be reproduced by lg using linear forceps. however, another advantage, such as vertical grasping and lifting of the gastropancreatic fold at rest, could be mimicked by lg using this device, which seemed to enable a safe lymph node dissection and reduce pancreatic damage. conclusion: this inexpensive and easy-to-use method taking advantage of rg seems to reduce surgeon's fatigue and tissue damage (pf). the study presents a comparison of perioperative outcomes between different surgical approaches for gastric adenocarcinoma (ac). methods: a retrospective cohort of 85 patients who underwent gastrectomy for ac at rambam hospital during 2012-2016. patient data were collected on demographic characteristics, bmi, operating room time (ort), number of lymph nodes (ln), length of hospitalization (loh), and perioperative complications. results: the study population included 55 patients after total gastrectomy, 10 of them robotic, and 30 partial gastrectomies, 12 of them robotic. age, gender and bmi were similar between patients who underwent any type of procedure. median length of hospitalization for robotic total gastrectomy was 4.5 days, significantly shorter than both laparoscopic total gastrectomy (ltg), 7.0 days (p = 0.003), and open total gastrectomy (otg), 9.0 days (p < 0.001). similar significant differences in loh between the groups were observed among patients who underwent partial gastrectomy, but the comparison between robotic and laparoscopic procedures was limited due to the small number of laparoscopic partial gastrectomies (lpg). 
median ort was significantly longer among robotic gastrectomies compared to open: the difference was 64 min in the total gastrectomy group and 145 min in the partial gastrectomy group (p < 0.001 for both differences), but the differences in ort between laparoscopic and robotic procedures were smaller and non-significant. the number of dissected ln was similar between the 3 procedures in total gastrectomies. in partial gastrectomies, the number of dissected ln was even higher among both laparoscopic and robotic gastrectomies compared to open (p < 0.001). conclusions: robotic total and partial gastrectomies for gastric ac are associated with oncologically adequate lymphadenectomy and faster patient recovery, but longer ort. objectives: during esophagojejunostomy using a circular stapler after latg, placement of the anvil head via the transabdominal approach proved difficult. the authors report a modified method for laparoscopy-assisted esophagojejunostomy performed by placing the pretilted anvil head (orvil) via the transoral approach. methods: between january 2013 and november 2018, esophagojejunostomy was performed using the orvil in 99 patients after latg. the anesthesiologist introduced the anvil while observing its passage through the pharynx. during the anastomosis, we kept the jejunum fixed in position with silicone band lig-a-loops, thereby preventing the intestine from slipping off the shaft of the stapler. results: esophagojejunostomy using the orvil was achieved successfully in all patients. no other complications, such as hypopharyngeal perforation and/or esophageal mucosal injury, occurred during passage. the postoperative complications of the anastomosis were leakage in two patients and stenosis in 5 patients, in whom relief was achieved using a bougie. conclusions: esophagojejunostomy using the orvil is a simple and safe technique. 
gastrointestinal tract surgery, fukushima medical university, fukushima-shi, japan; 3 surgery, ohara general hospital, fukushima-shi, japan. background: juvenile polyposis of the stomach is a very rare disease; its malignant potential has been reported previously, and total gastrectomy has been recommended as the standard treatment. recently, the usefulness of laparoscopic surgery for such cases has been reported; however, maintaining the surgical space in this type of surgery is thought to be difficult because of the distended and thickened stomach. case presentation: eight years ago, a 64-year-old woman with no family history of gastrointestinal polyposis had been diagnosed with gastric polyposis and polyp-related anemia and had twice received endoscopic submucosal dissection for early gastric cancer at another hospital. she had received an annual upper gastrointestinal endoscopy and had taken iron supplements for anemia caused by occasional bleeding from the polyps. however, the number of polyps had increased over time. because she had a loss of appetite, she was admitted to our hospital. enhanced computed tomography showed gastric wall thickening and multiple gastric polyps without lymphadenopathy or distant metastasis. colonoscopy showed no specific findings. she was diagnosed with juvenile polyposis of the stomach, and she received laparoscopic total gastrectomy with roux-en-y esophagojejunostomy. in the operative findings, although there was excessive distention and congestion of the stomach, standard laparoscopic surgery could be performed. the resected specimen revealed multiple variously sized polyps throughout the stomach except for the lesser curvature and fundus, and histopathological examination revealed that all polyps were hyperplastic polyps without containing cancer. she was discharged on postoperative day 10. we successfully performed laparoscopic surgery to treat a rare case of juvenile gastric polyposis. 
introduction: we report a novel technique for the combined use of laparoscopy and thoracoscopy for far-advanced adenocarcinoma of the esophagogastric junction (aeg). case presentation: a man in his 50s presented with far-advanced aeg. esophagogastroduodenoscopy revealed a type 2 lesion involving the entire circumference of the esophagogastric junction (egj). contrast radiography revealed a severe stenosis in the egj and wall irregularity from the egj to the cardia. computed tomography revealed a stenosis of the egj, suspected invasion into the left side of the diaphragm, and some lymph node metastases in the abdomen. we diagnosed siewert type ii aeg (ct4an1m0, cstage iiia: japanese classification of gastric carcinoma ver. 14). surgical technique: the patient was placed in the reverse-trendelenburg position with the left upper body lifted and legs spread, under general anesthesia. the tumor was huge, exposed from the serous membrane, and invaded the left crus. first we performed laparoscopic proximal gastrectomy using five ports. then, three ports were added in the 8th, 9th, and 11th intercostal spaces with the patient in the same body position, and we performed thoracoscopic lower esophagectomy under artificial pneumothorax with an intrathoracic pressure of 8-10 mmhg, which allows ventilation of both lungs. the lower esophagus was resected under the thoracoscopic view to ensure an adequate margin. following this resection, intrathoracic esophagojejunostomy was performed using the laparo- and thoracoscopic techniques. the operative time was 439 min, and the blood loss was 15 g. he was discharged on the 15th day after the operation without any postoperative morbidity. the histopathological diagnosis was pt4bn3am1, p1, pstage iv. after adjuvant chemotherapy with capecitabine and oxaliplatin, ramucirumab monotherapy is now underway. ct revealed a solitary lung metastasis 24 months after the operation. 
conclusion: malta for locally advanced aeg invading the surroundings could be performed safely. introduction: despite being the pioneer in laparoscopic surgery, europe has not had surgical experience comparable to east asia owing to lower exposure to gastric cancer. several studies on minimally invasive gastrectomy for gastric cancer have been conducted in europe. however, some of them did not analyse total gastrectomy as a distinct entity, combining both distal and total gastrectomies; moreover, most of them do not provide data on a full five-year follow-up for each patient. the baltic countries stand between east and west in terms of gastric cancer incidence: the incidence rate per 100,000 is 10.6 in the united kingdom, 26.3 in lithuania and 85.3 in japan. this exposure to gastric cancer provides a unique opportunity to investigate the role of laparoscopic gastrectomy. therefore, a case-control study was designed to evaluate laparoscopic (ltg) versus open total gastrectomy (otg), comparing short-term surgical and long-term oncologic outcomes. surgery, jeju national university, school of medicine, jeju, korea; 2 surgery, chosun university, school of medicine, gwangju, korea. objective: although mcv (mean corpuscular volume) levels are known to be associated with the prognosis of various diseases, few studies have investigated mcv as a prognostic factor after gastric cancer surgery. the aim of this study is to address the prognostic value of mcv in gastric cancer patients who underwent curative gastric cancer surgery. methods: 286 patients (june 2009-december 2015) with stage i, ii, and iii cancer were consecutively included in this study. all patients underwent curative gastric cancer surgery, including subtotal gastrectomy or total gastrectomy. overall survival (os), disease-free survival (dfs) and postoperative complication rates were compared between the mcv > 94 group and the mcv ≤ 93 group. results: of all patients, the mean mcv was 89 fl (normal range, 80 to 100 fl). 
the dfs was significantly higher in the high-mcv (> 94) group than in the low-mcv (≤ 93) group (p < 0.05). there was no significant difference in postoperative complications when compared with the clavien-dindo scale. the survival rate of the high-mcv group was higher, but there was no significant difference. conclusions: mcv may be a predictive factor after gastric cancer surgery. unlike previous studies, the low-mcv group showed lower dfs. more research is needed on the significance of mcv in a variety of diseases. methods and materials: for 6 years we observed 11 cases of gist of the stomach and duodenum. seven patients were brought to the clinic with bleeding and two patients with vomiting and compensated stenosis. in all cases we performed ct, mri and endoscopic examination of the stomach and duodenum with biopsy. in two cases we performed an endoscopic operation. in one case we successfully removed the gist from the duodenum endoscopically, using endoscopic instruments during the operation. in the other case, after endoscopic excision of the tumor, bleeding occurred, which was stopped by endoscopic local hemostasis, placing clips on the vessels. in 9 cases the tumors were in the stomach. in 4 cases we performed laparoscopic wedge resection of the tumors with staplers. in 3 cases, when the tumor was very big and situated in the fundus of the stomach, we performed laparoscopic resection of the fundal part of the stomach using laparoscopic staplers and ligasure sealing. in 2 cases we removed the tumor by placing laparoscopic trocars inside the stomach for instruments and for visualization of the tumor. after excising the tumor and removing it from the stomach, we sutured the holes in the stomach. we had no mortality after laparoscopic operations. there was no malignancy in any of the 9 cases. we had 3 cases of morbidity: in 2 cases, bleeding from the stomach, which was stopped endoscopically. 
in 1 case there was a wound infection. the aim of the study was to decrease morbidity in patients with perforated ulcers of the stomach and duodenum. we observed 107 patients with perforated ulcers of the stomach and duodenum: 45 women and 62 men, average age about 45 years. 91 patients had a perforated ulcer of the stomach or duodenum; 26 patients had perforations with bleeding. all patients were divided into two groups: the first group of 59 patients was operated on laparoscopically, the second group of 48 patients was operated on traditionally. results: there was no mortality in the group operated on laparoscopically. in the group operated on traditionally, one patient died after rebleeding. the average hospital stay in the laparoscopically operated group was about 2 days; in the group with traditional operations it was about 8 days. morbidity in the first group occurred in 5 cases: pneumonia in 2 cases and suppuration of the trocar sites in 3 cases. in the second group there were pneumonia in 3 cases, suppuration of the operative wound in 5 cases, and a subdiaphragmatic abscess in 1 case. conclusion: laparoscopic operation decreases mortality, morbidity and hospital stay in patients with perforated ulcer of the stomach and duodenum. of the 32 patients of the third group, 22 (68.75%) were operated on for ulcer rebleeding in the hospital, and 10 (31.25%) for profusely bleeding ulcer. no patient had recurrent bleeding. the average treatment time for patients in group 2 was 12.5 ± 3.2 days. conclusions: the development of hemorrhagic shock in patients with peptic ulcer bleeding significantly increases the risk of rebleeding and mortality. the application of endoscopic hemostasis reduces the risk of rebleeding and mortality compared with conservative antiulcer therapy. surgical treatment can achieve reliable hemostasis, but is accompanied by higher mortality and longer duration of hospital treatment. 
tan tock seng hospital is the second largest hospital in singapore. it is affiliated with two medical schools in singapore and is a training hospital for both undergraduates and postgraduates. minimally invasive surgery for both benign and malignant diseases of the upper gastrointestinal tract is becoming more and more popular nowadays. in our department, all the residents have to view the step-by-step instructional videos of minimally invasive surgeries before they can assist in the cases or perform them on their own under the supervision of consultant surgeons. viewing the instructional videos helps them better understand the procedures, the importance of the steps, and the standardization of the steps. with the help of the instructional video, they can not only assist better in the surgery but also reduce the learning curve when they start doing the procedure themselves after graduation from the residency programme. this is the step-by-step instructional video of laparoscopic repair of perforated duodenal ulcer for surgeons-in-training rotated to our department. in general, duplication cysts are rare developmental congenital disorders of the gi tract. three morphological criteria should be met in order to confirm the pathological diagnosis: 1. they should be attached to the stomach's wall and should be a continuation of it; 2. at least one of the muscle layers of the stomach's wall should be included; and 3. it should have normal gastric mucosa. the treatment is either enucleation or partial gastrectomy. aim: to present our minimally invasive approach to a rare prepyloric submucosal cystic lesion causing gastric outlet obstruction. case report: a 27-year-old female with vomiting, weight loss and in poor general condition was diagnosed after a full work-up (blood tests, endoscopies, eus, ct and mri) with a submucosal cystic tumor. this cyst was first thought to be a duplication cyst. 
since the patient was young, our intention was to offer the least invasive surgical technique in order to spare her a gastrectomy and billroth anastomosis. results: the procedure was completed laparoscopically with enucleation of the cyst through a gastrotomy on the anterior wall of the stomach. after the enucleation of the cyst, the gastric mucosa was sutured back and then the gastrotomy was closed with continuous sutures. the pathological report confirmed a rare case of a heterotopic pancreatic cystic lesion. the postoperative course of the patient was uneventful and she was discharged with instructions for her diet on the 4th postoperative day. at 3 months postoperatively the patient has no symptoms. conclusion: in such benign conditions, and especially in young patients, gastrectomies could be avoided if possible and give their place to less invasive approaches in order to reduce lifelong risks and morbidity. transgastric enucleation of the cyst, although a demanding approach, is safe and could be considered a 'gentler' technique with reduced morbidity. background: pancreatoduodenectomy is considered to be very invasive for early superficial duodenal tumors (sdts), which have a lower risk of lymph node metastasis. partial duodenal resection with endoscopic submucosal dissection for sdts is an attractive technique, but it is associated with a high risk of complications. full-thickness resection of the duodenal wall, including laparoscopic and endoscopic cooperative surgery, carries a risk of spreading tumor cells and digestive juices into the abdominal cavity. we have developed a novel technique for sdts to decrease the risk of exposing the abdominal cavity to tumor cells and digestive juices, called non-exposed duodenum laparoscopic and endoscopic cooperative surgery (neo-dlecs). aim: the aim of this study is to evaluate the feasibility and safety of neo-dlecs for sdts. 
surgical procedure: the attachment of the transverse mesocolon was freed from the head of the pancreas and the retroperitoneal tissues under laparoscopy. the duodenum and the head of the pancreas were mobilized from the retroperitoneum using the kocher maneuver. a standard esd was performed for the sdt using an endoscope. the serosa of the esd ulcer bed was reinforced using a laparoscopic hand-sewn suturing technique in the seromuscular layer around the resected area. after completing the procedure, the endoscope was inserted and passed over the resected area to confirm that there was no stenosis or leakage. methods: ten consecutive patients with sdts underwent neo-dlecs in our institute between march 2015 and march 2017. the clinicopathological features of the patients and surgical outcomes were prospectively collected and retrospectively analyzed. results: the pathological diagnosis was adenocarcinoma for six patients, adenoma for three patients, and neuroendocrine tumor grade 1 for one patient. the median tumor size was 36 (20-54) mm. the median operative time was 227.5 (180-390) min. the median blood loss was 0 (0-175) g. there were no conversions to open surgery in this series. intraoperative perforation was found in two cases during the esd procedure; however, all perforations were closed and reinforced using hand-sewn sutures. no postoperative complications were above grade 2 in the clavien-dindo classification system. conclusions: neo-dlecs is safe and feasible and can be an option for surgical sdt resection. aims: wilkie's syndrome is caused by entrapment of the 3rd part of the duodenum between the aorta and the superior mesenteric artery (sma). surgery is indicated for chronic cases and failure of conservative management, with laparoscopic duodenojejunostomy reported as a minimally invasive option. methods: all cases treated by laparoscopic duodenojejunostomy in our centre for chronic wilkie's syndrome were recorded. 
results: 3 females and 1 male underwent laparoscopic duodenojejunostomy, with a mean age of 32 years (range 19-47). all patients presented with abdominal pain, and weight loss was identified in most of them. a reduced aortomesenteric angle measured by ct scan was the key to the diagnosis (mean angle 22.5 degrees, range 21-24). a conventional laparoscopic approach was performed in two patients; the other two patients underwent a sils port approach. mean time of surgery was 62.5 min (range 35-100) and length of stay was 5 days (range 2-13). after a mean follow-up of 47.5 months (range 11-69), 3 patients improved their symptoms. conclusions: surgery is the mainstay in complicated or refractory cases of sma syndrome. laparoscopic duodenojejunostomy has the advantages of the laparoscopic approach (including rapid recovery time, reduced post-operative pain and shorter hospital stay) and is feasible, safe and effective. in mexico in 2013, gastric cancer represented the 3rd cause of death; it may manifest in a variety of histologic, anatomic, and genetic patterns, which influences the surgical approach. until now, gastrectomy with curative intent is the only treatment that offers potential cure in gastric cancer. in recent years, laparoscopy has emerged as an important modality in surgical management. in multiple trials, no significant difference in recurrence, long-term survival and disease-free survival was observed when compared to standard open gastrectomy. we present the case of a 62-year-old man with a smoking history of 30 pack-years, suspended 12 years earlier. he presented with unspecific upper gastrointestinal symptoms; an upper endoscopy was performed, observing a suspicious depressed lesion of 3 cm located in the greater curvature between the body and the antrum; the biopsy revealed a poorly differentiated signet-ring cell carcinoma of the stomach. an endoscopic ultrasound and a thoracoabdominal ct scan showed no evidence of enlarged adenopathies or metastatic disease. 
initially a diagnostic laparoscopy was performed; there was no evidence of carcinomatosis or free intraperitoneal fluid, so the greater omentum was dissected towards the splenic and hepatic flexures, a d2 lymph node dissection was performed, and a subtotal gastrectomy with roux-en-y reconstruction was done; intraoperative endoscopy was done to identify the lesion so that adequate margins could be obtained. the patient had a good postoperative evolution and was discharged home on the 4th day, tolerating oral intake. minimally invasive techniques have proved equivalent oncologic results when compared to the conventional approach; these techniques are becoming the preferred approach in the treatment of well-selected patients with gastric cancer and have a role in definitive staging, curative resection, and lymphadenectomy. appropriate selection of patients and an optimal technical approach are paramount for good outcomes. most data on laparoscopic gastrectomy come from eastern countries, where the prevalence is higher; however, western experience is growing along with the evolution and development of surgical instruments and new technology. wilkie syndrome is a rare cause of high intestinal obstruction, resulting from compression of the duodenum between the abdominal aorta and the superior mesenteric artery. the main symptoms are nausea and vomiting, weight loss, early satiety, abdominal distension and epigastric pain. historically, the barium study and arteriography were the diagnostic tests used; more recently, ct angiography has shown greater sensitivity. the diagnostic criteria are: dilated duodenum, duodenal compression by the superior mesenteric artery, and an aortomesenteric angle of less than 20 degrees. patients with an acute condition usually respond to conservative treatment (decompression, correction of hydroelectrolyte alterations, nutritional support…). 
however, those with chronic symptoms usually require surgery, preferably with laparoscopic approaches: duodenojejunostomy or strong's procedure. the strong procedure mobilizes the duodenum by dividing the ligament of treitz. once the duodenal-jejunal junction is mobilized, the duodenum is positioned to the right of the superior mesenteric artery; it is preferred because it provides less morbidity by maintaining the integrity of the gastrointestinal tract, but it has a failure rate of 25%. gastrojejunostomy allows gastric decompression but does not relieve the duodenal compression, so digestive symptoms may persist, leading to the appearance of a blind loop syndrome or recurrent peptic ulcers. on the other hand, duodenojejunostomy, which according to some series may be the procedure of choice, may obtain a success rate higher than 90%. we advocate initiating the surgical approach with the strong procedure and, if it fails, performing a duodenojejunostomy. during this procedure, gastro-esophageal reflux was evaluated and assigned to a severe, moderate or slight category. if the reflux was observed slightly up to the cervical esophagus, the case was assigned to the moderate category. if the reflux was observed intensely up to the cervical esophagus, the position was returned to the head-high position for safety and the case was assigned to the severe category. anti-reflux surgery was considered in the moderate and severe categories. results: we have performed the laparoscopic nissen procedure in 95 cases. the outcome was assessed by a reflux test performed on postoperative day 4-5, and the results showed that the reflux had disappeared in all cases. the median follow-up period of this study was 56 months (3-110 months). in 11 cases (11.6%) ppi was restarted within 6 months after the anti-reflux surgery. in 25 cases (26.3%) ppi was restarted after the anti-reflux surgery during the whole follow-up period of this study. 
the bmi of the patients had no relationship to the need to restart ppi. to evaluate the degree of esophagitis objectively before and after the anti-reflux surgery, we designed 'the esophagitis score'. in this scoring method, a number from 0-5 was assigned according to the degree of esophagitis along with the la classification. the results of the study showed that the reflux esophagitis was obviously improved after the anti-reflux surgery, even in the ppi-restarted group (p < 0.001). discussion: it is important to identify the gerd patients who really need anti-reflux surgery. the reflux test is feasible because of its convenience and visual effects for the patients. the results of the laparoscopic nissen fundoplication were good. background: laparoscopic paraesophageal hernia repair with fundoplication has become more and more popular nowadays due to less morbidity and mortality with a shorter length of hospital stay. discussion: tan tock seng hospital is the second largest hospital in singapore. it is affiliated with two medical schools in singapore and is a training hospital for both undergraduates and postgraduates. in our department, all the residents have to view the step-by-step instructional videos of minimally invasive surgeries before they can assist in the cases or perform them on their own under the supervision of consultant surgeons. viewing the instructional videos helps them understand the procedures better. the videos can also help them recognize the important steps and a standardized safe approach. with the help of the instructional video, they can not only assist better in the surgery but also reduce the learning curve when they start performing the procedure themselves during their training period. this is the step-by-step instructional video of laparoscopic paraesophageal hernia repair with fundoplication for surgeons-in-training who are posted to our department. 
conclusion: the step-by-step instructional video on laparoscopic paraesophageal hernia repair with fundoplication can help surgeons in training reduce their learning curve and improve their surgical skills so that they can perform the procedure safely. the human immunodeficiency virus (hiv) is a neurotropic virus. there have been reports of patients with hiv who have esophageal motility problems, sometimes associated with opportunistic infections. absence of contractility is defined as a major motility disorder according to the chicago classification v 3.0, characterized by normal esophagogastric junction relaxation and 100% peristalsis failure. we present the case of a 56-year-old male patient with a history of acquired immunodeficiency on treatment with efavirenz, emtricitabine and tenofovir. he presented with progressive dysphagia, gastroesophageal reflux and pyrosis of 4 months' evolution. physical examination showed no alterations. upper endoscopy was performed, reporting a normal esophagus and diffuse chronic gastritis. the esophagogram reported inadequate esophageal motility with contrast stasis and delayed emptying. esophageal manometry reported an upper esophageal sphincter with high resting pressure. the middle and distal esophagus showed absence of peristalsis with a pan-esophageal pressurization pattern. the lower esophageal sphincter presented normal resting pressure and borderline relaxation (41%). the integrated relaxation pressure was less than 15 mmhg. the diagnostic impression was absence of contractility (chicago classification v 3.0). medical management was initiated with proton pump inhibitors, isosorbide dinitrate and injections of botulinum toxin, without success. it was decided to schedule the patient for a heller myotomy with toupet fundoplication. intraoperative endoscopy revealed a complete myotomy with no leakage or obstruction. 
the patient went home on the second postoperative day tolerating a solid diet. heller myotomy by laparoscopy with partial fundoplication is safe in the treatment of patients with hiv and esophageal motility disorders, with a reported mortality of 0.1%. the effect of endoscopic treatments prior to surgery is controversial. aims: epiphrenic diverticulum represents an infrequent entity and is usually associated with esophageal motility disorders, such as achalasia, distal esophageal spasm, nutcracker esophagus or hypertensive lower esophageal sphincter. nowadays, epiphrenic diverticulectomy, esophageal myotomy and partial fundoplication constitute the gold standard technique, although it is a challenging procedure and may provoke numerous complications. the approach for diverticulectomy usually depends on the distance from the upper border of the diverticulum's neck to the gastroesophageal junction, considering that thoracoscopy should be carried out when this distance is more than 5 cm. methods: we present the case of a 57-year-old male patient, with a body mass index of 30 and a medical history of diabetes, smoking and alcoholism. his symptoms were mainly regurgitation and dysphagia. upper endoscopy showed esophageal dilatation and the presence of a diverticulum with its neck 2 cm above the gastroesophageal junction. ct scan confirmed these findings and manometry showed achalasia. in the video we can see how we perform a laparoscopic diverticulectomy with esophageal myotomy and dor fundoplication. results: the patient was discharged home on the second postoperative day with no complications. after more than two years of follow-up, he has not suffered regurgitation, heartburn, dysphagia or chest pain. conclusions: we present a case of an epiphrenic diverticulum secondary to achalasia in which we performed a laparoscopic diverticulectomy, esophageal myotomy and dor fundoplication.
some authors suggest that correction of the underlying motility disorder is the key in the management of these patients and do not recommend concomitant diverticulectomy for all cases. however, we consider that the complete procedure, including diverticulectomy, represents the gold standard and is feasible for teams skilled in esophageal and gastric laparoscopic surgery, despite its high morbidity rates. purpose: laparoscopic wedge resection for a gastric submucosal tumor close to, or involving, the gastroesophageal junction is technically challenging and more aggressive compared with tumors at other sites of the stomach. gastroesophageal reflux disease may be more prevalent after laparoscopic wedge resection of a gastric submucosal tumor at the gastroesophageal junction because of damage to the lower esophageal sphincter. we hypothesized that a prophylactic anti-reflux procedure after this surgery would reduce the prevalence of gastroesophageal reflux disease (gerd) and improve the quality of life of the patients. the aim of this study is to analyze our experience with prophylactic anti-reflux surgery after laparoscopic wedge resection for a gastric submucosal tumor of the gastroesophageal junction. materials and methods: we retrospectively collected data from 51 patients diagnosed with a submucosal tumor near the gastroesophageal junction who underwent laparoscopic wedge resection between january 2000 and december 2017. the patients were divided into 2 groups according to operation with prophylactic anti-reflux surgery (group a) or without it (group b). results: there was no difference in the frequency of preoperative gerd symptoms between the 2 groups, whereas postoperative gerd symptoms and postoperative use of acid-suppressive medications were more frequent in group b (p = 0.032, p = 0.036).
however, there were no differences in the follow-up endoscopic findings in terms of reflux esophagitis and hill's grade between the 2 groups. in group a, the mean postoperative lower esophageal sphincter (les) pressure was 22.0 ± 13.0 mmhg. the les pressure dropped to 15 mmhg in only one patient; however, this patient had no reflux symptoms. conclusions: prophylactic anti-reflux surgery after laparoscopic gastric wedge resection at the gastroesophageal junction is an effective method of preventing gastroesophageal reflux symptoms. background: the most critical obstacle is pancreatic leakage (pl). the main cause of pl might be activation of pancreatic juice by the mixing of pancreatic juice and intestinal fluid, owing to the anastomosis technique, the mismatch between the pancreatic duct and the caliber of the jejunum, and inversion of the jejunal mucosa. aim: in this study, we devised a new anastomotic method of pancreato-jejunostomy, the so-called 'pancreatic stent sliding guide' (pssg) method, using a pancreatic duct stent. we would like to demonstrate the method and its results. (operative procedure) 10 cases of hybrid laparoscopic pancreatico-duodenectomy (pd) were performed via shuriken-shaped umbilicoplasty with pssg. a pancreatic duct stent that fits the diameter of the pancreatic duct is used for direct puncture without any cautery. the aims of direct puncture are to avoid both enlargement of the anastomotic opening and disturbance of blood flow. the contralateral side of the anastomotic opening is also punctured and the stent is pulled out of the jejunum. a 6-0 pds suture with needles at both ends is used as the anastomotic thread. firstly, the eversion anastomosis of the posterior wall is done by sliding the needle on the stent, and then the anastomosis of the anterior wall is done in the same way. the stent on the contralateral side is cut and the hole is closed.
materials and methods: 10 cases of pancreato-jejunostomy by the pssg method had been performed by february 2019. the average patient age was 72 years. the diagnoses were pancreatic cancer (n = 4), bile duct cancer (n = 5), and papilla of vater cancer (n = 1). pancreatic leakage by the isgpf classification was grade 0 in 10 cases and grades a, b and c in none. in the same period, we performed ten more cases of open pd by the pssg method; there was only one case of grade a pl and no clinical pl. conclusion: our new device of pancreato-jejunostomy by pssg might be very effective for decreasing pl, from the viewpoint of the mechanisms of pl, even for laparoscopic pd. a male patient presented with upper abdominal discomfort and pain, without nausea, vomiting or weight loss. a submucosal lesion was found on endoscopic examination in the first part of the duodenum. endoscopic ultrasound showed a 2.5 cm submucosal lesion in the first part of the duodenum (anterior wall, close to the pylorus). cytological examination of the lesion showed a neuroendocrine tumor. computed tomography of the abdomen and chest was normal. his blood laboratory examinations were within normal limits. the patient underwent da vinci robotic partial gastrectomy with intracorporeal billroth ii gastrojejunostomy. total operating time (ort) was 255 min. three days after the operation the patient started a regular diet and was discharged home on day five. the final pathology report confirmed the diagnosis of carcinoid tumor with ki67 less than 1%. surg endosc (2019) 33:s485-s781. p426-robotics & new techniques-education: integrated education for colorectal disease-a digital solution for a digital age (united kingdom). aims: surgical plume poses problems of poor visibility of the operative field, inclusion of harmful chemical substances, and biological risk. it is desirable that plume be removed appropriately to minimize these risks.
we assessed whether these problems can be solved by using a commercialized evacuator. (2) semi-quantification of residual chemicals in the abdominal cavity: performed using an industrial smoke tester by aspirating the intra-abdominal plume onto filter papers and digitizing the stains. (3) detection of dna in the exhausted gas from the evacuator: the hepa filter, which was interposed at the inlet or outlet of the evacuator, was analyzed by pcr to detect any dna derived from porcine tissues. results: (1) laparoscopic visualization: judgement scores were 2.2 vs. 1.0 for ec and 3.8 vs. 1.8 for us (evacuator on vs. off; both p < 0.0001), indicating that visualization was significantly better with the evacuator on both devices. general surgery, royo villanova hospital; jesús usón minimally invasive surgery centre. methods: i report my experience at the american university of beirut medical center with 65 cases of laparoscopic adrenalectomy: 35 left and 30 right. three of the series were large adrenals of 15 cm, and all of these were completed laparoscopically. the video will show the steps of this procedure. a large right adrenal mass measuring 15 cm and weighing 750 g was removed laparoscopically using a 4-trocar technique. the lateral position facilitated the exposure and ease of dissection. the mass was removed by extending one of the trocar sites with muscle splitting, using a 15 mm endocatch. results: the patient was discharged home 3 days after surgery. the operative time was 1 hour. pathology revealed carcinoma with no involvement of the capsule and no vascular invasion. 163 patients (male: n = 39; female: n = 124) underwent minimally invasive adrenalectomy (tp: n = 135; rp: n = 28) at our institute. mean patient age was 53.8 years (21-82 years).
besides comparing operative factors (intraoperative blood loss, previous abdominal surgeries, conversion rate, operative time, tumor size) and perioperative factors (time of hospitalization, time to oral intake, histology, postoperative complications) in each group, perioperative outcomes of the learning curve (lc)-the first 28 procedures in both groups-were also analyzed. in terms of tumor size, significantly larger lesions were removed with tp (tp: 48.18 ± 22.8 mm vs rp: 34.8 ± 11.2 mm; p = 0.028). the number of asa (american society of anesthesiologists) ii patients was significantly higher in the tp group, while there were significantly more asa iii patients in the rp group; conversions showed no significant difference. the analysis of the lc showed a significant difference in previous abdominal surgeries and in operative time (rp: 134.5 ± 12.4 min; p = 0.023), all favoring the tp approach. conclusion: both methods proved to be feasible and safe for minimally invasive adrenalectomy. based on our own experience, the tp approach resulted in improved operative time and conversion rates. aim: to demonstrate the safety and efficacy of the laparoscopic approach in the treatment of large splenomegaly. currently, this approach is recognized as the one of choice in benign splenic pathology, being controversial in the face of massive splenomegaly or neoplastic pathology. material and method: clinical case: a 38-year-old man followed in the dept. of internal medicine for hepatosplenomegaly of probable lymphoproliferative origin. additional explorations of interest are provided. result: intervention: complete laparoscopic approach, right lateral partial decubitus, massive splenomegaly of
23 cm. a splenuncle of 3-4 cm is resected, the short vessels are divided, the splenic hilum is dissected, the vessels are divided with endogia staplers, and splenectomy is completed with full extraction in a bag through a small laparotomy in the left flank for anatomopathological study. the aim of this video is to demonstrate the safety and efficacy of the laparoscopic approach in the treatment of large splenomegaly. currently, this approach is recognized as the one of choice in benign splenic pathology, being controversial in the case of massive splenomegaly or neoplastic pathology. barrett's esophagus (be) can transform into adenocarcinoma. patients and methods: between 2001 and 2008 we performed laparoscopic nissen fundoplication (lars) in 254 cases of gerd. in 78 gerd patients, be was proved by endoscopy and histological examination. the demeester score was higher among the be patients (41.9 versus 18.9, p < 0.001), and bile reflux was measured more frequently among the be patients. on the other hand, during the 8.5-year endoscopic follow-up, early barrett carcinoma developed in 2 patients, 38.5 months after lars. both patients underwent a limited surgical resection of the distal esophagus and esophagogastric junction, regional lymphadenectomy, and reconstruction by interposition of an isoperistaltic jejunal segment. there were no complications. histological examination showed pt1n0 stage disease in both cases. oncological follow-up was 82 months (6.8 years) and both patients are still disease free. conclusions: although lars can effect regression in a part of be patients, progression to adenocarcinoma can also occur.
endoscopic surveillance is important in the case of be to recognize early cancer and to perform limited surgical resection with low morbidity and long overall and disease-free survival. gastric cancer: development of a nomogram for predicting the conditional probability of survival after d2 lymphadenectomy for gastric cancer. this study aimed to devise a nomogram to predict the conditional probability of cancer-specific survival (cpcs) in gastric cancer (gc) patients after gastrectomy with d2 lymphadenectomy. methods: clinicopathological data for 2,596 gc patients who underwent d2 lymphadenectomy in a large-volume eastern institution (the training cohort) were analysed. cancer-specific survival (css) was predicted using cox regression models. a conditional survival nomogram was constructed to predict cpcs at 3 and 5 years post-gastrectomy. two external validations were performed using a cohort of 2,198 chinese patients and a cohort of 504 italian patients. results: in the training cohort, the 5-year cpcs was 59.2% immediately post-gastrectomy and increased to 68.8%, 79.7%, 88.8% and 95.1% at 1, 2, 3 and 4 years post-gastrectomy, respectively. multivariate cox regression analyses showed that age; tumour site, size and invasion depth; numbers of examined and metastatic lymph nodes; and surgical margins were independent prognostic factors of cancer-specific survival (all p < 0.05) and formed the nomogram predictor variables. internal validation showed that the conditional nomogram exhibited good discrimination ability at 3 and 5 years post-gastrectomy (concordance index, 0.794 and 0.789, respectively). gastric cancer: does non-compliance in lymph node dissection affect oncological efficacy in gastric cancer patients undergoing radical gastrectomy? univariate and multivariate analyses revealed that non-compliance was an independent risk factor for os.
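the conditional-survival arithmetic behind the nomogram study above reduces to a single ratio: the probability of surviving to year s + t, given survival to year s, is S(s + t)/S(s). a minimal sketch, with illustrative survival values rather than the study's actual curve:

```python
# Sketch of conditional survival: P(survive to s + t | alive at s) = S(s + t) / S(s).
# The survival probabilities below are illustrative, not the study's data.
def conditional_survival(S, s, t):
    """S maps whole years to overall survival probability."""
    return S[s + t] / S[s]

# Illustrative Kaplan-Meier-style survival probabilities by year.
S = {0: 1.00, 1: 0.90, 2: 0.80, 3: 0.72, 4: 0.66, 5: 0.60}

# Unconditional 5-year survival vs. surviving to year 5 given 2 years already survived.
print(conditional_survival(S, 0, 5))            # 0.6
print(round(conditional_survival(S, 2, 3), 3))  # 0.75  (0.60 / 0.80)
```

this is why the reported 5-year cpcs rises with each year already survived: the denominator shrinks as early deaths drop out of the at-risk population.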
logistic regression analysis demonstrated that the extent of gastrectomy, primary tumour site, history of intraperitoneal surgery, bmi and open gastrectomy were independent preoperative predictive factors for non-compliance. cox analysis demonstrated that age, pt, pn, and the extent of gastrectomy independently affected os in patients with non-compliant lymphadenectomy. however, os was significantly better in the compliant group than in the non-compliant group regardless of the recommendation for chemotherapy. stratified analysis demonstrated that os was significantly better in chemotherapy patients than in patients without chemotherapy, including stage ii patients (pt1n2/n3m0 and pt3n0m0) in whom chemotherapy was not recommended. conclusion: non-compliance is an independent risk factor after radical gastrectomy for gc. we prospectively collected and retrospectively analysed the medical records of 398 patients with proximal gc who underwent lspsd. the data were split 75/25, with one group used for model development and the other for validation testing. results: of the 398 patients enrolled in this study, 174 (43.7%) required laparoscopic haemostasis treatment. a multivariate analysis determined the following preoperative adverse risk factors in the model group: gender, preoperative n stage, and terminal branches of the splenic artery (spa), and we developed a scoring system based on these findings. each of these factors contributed 1 point to the risk score. the intraoperative laparoscopic haemostasis rates were 11.5%, 33.6%, 58.5%, and 73.5% for the low-, intermediate-, high-, and extremely high-risk categories, respectively. there were statistically significant differences among groups (p < 0.001). with increasing risk, both blood loss volume (blv) and operative time (min) of lspsd increased significantly (p < 0.001). the area under the receiver operating characteristic curve for the score for intraoperative laparoscopic haemostasis was 0.700.
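the additive scoring described above (one point per adverse factor, with the total mapping to a risk category) can be sketched as follows; the factor names are illustrative assumptions, not the study's exact variable definitions:

```python
# Minimal sketch of an additive risk score: three binary preoperative factors
# (illustrative names) each contribute 1 point, and the total maps to a
# risk category, as in the abstract above.
RISK_CATEGORIES = {0: "low", 1: "intermediate", 2: "high", 3: "extremely high"}

def risk_score(male_gender: bool, advanced_n_stage: bool,
               spa_terminal_branches: bool) -> int:
    """Sum of binary risk-factor indicators (0-3 points)."""
    return int(male_gender) + int(advanced_n_stage) + int(spa_terminal_branches)

def risk_category(score: int) -> str:
    return RISK_CATEGORIES[score]

# Example: a patient with two of the three factors falls in the high-risk group.
s = risk_score(True, True, False)
print(s, risk_category(s))  # 2 high
```

the appeal of such scores is exactly what the abstract reports: each extra point corresponds to a monotonically rising haemostasis rate, at the cost of coarse discrimination (auc 0.700).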
the observed and predicted incidence rates of intraoperative laparoscopic haemostasis were parallel in the validation set. conclusions: this simple score predicts the need for intraoperative laparoscopic haemostasis. we compared the survival of src patients with that of tubular adenocarcinoma patients according to bmi. results: the 5-year survival of src was significantly worse than that of wmd (p < 0.001) but superior to that of pd (p < 0.001). bmi-stratified analysis showed that in the high-bmi group, the prognosis of src was similar to that of wmd (p > 0.05) and better than that of pd (p < 0.001). in normal-bmi patients, src had a worse prognosis than wmd (p < 0.001) but a more favorable prognosis than pd (p < 0.001). src among low-bmi patients displayed much poorer survival than both wmd (p < 0.001) and pd (p = 0.005). multivariate analysis indicated that the risk of death was lowest for src patients with a high bmi and highest for src patients with a low bmi. baseline characteristics were compared in a 35-patient rspshl cohort and a 608-patient lspshl cohort. one-to-four propensity score matching was performed to determine between-group differences. result: in total, 175 patients were matched, including 35 patients who underwent rspshl and 140 who underwent lspshl. no significant differences in baseline characteristics were observed between these groups after matching. significant differences in total operative time, estimated blood loss (ebl), splenic hilar blood loss (shbl), splenic hilar dissection time (shdt), and splenic trunk dissection time were detected between these groups (all p < 0.05). furthermore, no significant differences were evident between rspshl and lspshl in the overall non-compliance rate of lymph node (ln) dissection. the highest body temperature within 1 week after operation was used to establish diagnostic thresholds for high and low body temperature, obtained with x-tile software.
the study used cox regression to analyze the influence of high body temperature on 5-year dfs. results: a total of 1396 patients were included in the analysis. the diagnostic threshold for high body temperature was defined as 38°c; 370 patients with a high postoperative body temperature were allocated to the high temperature group (htg), while the other 1026 patients were allocated to the low temperature group (ltg). cao, department of gastric surgery, fujian medical university union hospital, fuzhou, china. background: laparoscopic surgery for remnant gastric cancer. third step: baring of the right side of the esophagus. fourth step: exposure of the left gastroepiploic vessels and ln dissection in the splenic hilar area. fifth step: baring of the left side of the esophagus. the above procedure was performed for 45 rgc patients with stage ct1-4an0/+ disease. results: there was no conversion to open surgery. mean operation time was 195.0 ± 52.5 min, mean blood loss was 104.3 ± 90.4 ml, and mean time to first flatus. p526-upper gi-gastric cancer: a novel prognosis prediction model after gastrectomy for remnant gastric cancer: development and validation using international multicenter databases (fuzhou, china; department of gastrointestinal surgery). the model calibration was accurate in predicting 5-year survival. dca showed that the model has a greater net benefit. the results were also confirmed by bootstrap internal validation. in external validation, c-statistics and dca showed good prognostic performance in patient datasets from 2 participating institutions. moreover, we verified the reliability of the model in an analysis of patients with different eln counts. p527-upper gi-gastric cancer: a novel abdominal negative pressure lavage-drainage system for anastomotic leakage after r0 resection for gastric cancer. the risk of gastric cancer for ppi users was higher than for non-ppi users for durations between 1-3 years, ≥1 year, ≥3 years and ≥5 years.
the risk of gastric cancer for durations ≥5 years (rr = 2.03) and ≥3 years (rr = 1.95) was higher than the risk for durations between 1-3 years (rr = 1.74). according to the location subgroup meta-analysis, the risk of non-cardia gastric cancer for ppi users was higher than for non-ppi users. conclusion: based on a systematic review with meta-analysis, we found a correlation between long-term use of ppi and the risk of gastric cancer, and long-term use of ppi may increase the risk of non-cardia gastric cancer when the duration is ≥1 year. p534-upper gi-gastric cancer: age-adjusted charlson comorbidity index (acci) is a significant factor for predicting survival. results: there were 1476 patients included in the analysis. the high-acci and low-acci groups had significant differences in preoperative abdominal surgery history, asa grade, tumor size, tumor stage, histologic type, age and comorbidity (all p < 0.05). the incidence of postoperative complications was 17.9% in the high-acci group, significantly higher than that in the low-acci group (p = 0.001). the overall survival (os) and cancer-specific survival (css) rates in the low-acci group were both higher than those in the high-acci group (p < 0.05). univariate and multivariate analyses showed that the acci was an independent risk factor for os and css (p < 0.05). furthermore, a combination of the tnm staging system and acci showed a trend toward higher prognostic value and higher auc for os and css than the tnm staging system alone (p < 0.05). conclusions: the acci is an independent prognostic factor. we aimed to investigate the clinicopathological features and prognosis of patients with mgc and the impact of postoperative adjuvant chemotherapy on long-term survival.
methods: the clinical and pathological data of patients diagnosed with gastric adenocarcinoma and undergoing radical gastrectomy were analysed. stratified analysis showed that, in advanced gastric cancer (agc), the 5-year os rates of mgc without adjuvant chemotherapy and sgc without adjuvant chemotherapy were 34.0% and 46.1%, respectively, a statistically significant difference (p = 0.025). the 5-year os rates of advanced mgc after adjuvant chemotherapy and of advanced sgc after adjuvant chemotherapy were 48.0% and 53.3%, respectively, and the difference was not statistically significant (p = 0.292). the 5-year os rate of advanced mgc after adjuvant chemotherapy was significantly higher than that of patients without adjuvant chemotherapy (48.0% vs. 34.0%, p = 0.026). conclusions: mgc is a poor prognostic factor after radical gastrectomy for gastric cancer. background: whether the tumor-node-metastasis (tnm) staging system is suitable for patients with node-negative gc is still controversial. the modified staging system established by recursive partitioning analysis (rpa) has shown good prognostic performance in a variety of cancers, but the application of rpa has not been reported for the prognostic prediction of gc. methods: node-negative gc patients who underwent radical resection at fujian medical university union hospital (n = 862) and sun yat-sen university cancer center (n = 311) with at least 5-year follow-up information were selected as the training set. rpa was used to develop a modified staging system. patients from the surveillance, epidemiology, and end results database (n = 1415) were selected as the external validation set. results: the 5-year overall survival (os) rates of patients with 8th ajcc-tnm stage ia-iiia in the training set were: ia 95%, ib 87%, iia 78%, iib 76% and iiia 73%. multivariate analysis (mva) showed that larger tumor size, older age, and deeper depth of invasion were independent risk factors for os in patients with node-negative gc (all p < 0.05).
patients were reclassified into rpa i, rpa ii, rpa iii, and rpa iv stages based on rpa; the 5-year os rates were 96%, 87%, 81%, and 64%, respectively, with significant differences (p < 0.05). two-step mva showed that the rpa staging system was an independent predictor of os (p < 0.05). data were retrospectively collected. patients were classified into two groups according to bmi: < 25 kg/m2 (332 patients; low bmi group) and ≥ 25 kg/m2 (108 patients; high bmi group). for these 440 patients, clinicopathological variables were analyzed using propensity score matching to mitigate selection bias: sex, age, asa physical status, clinical stage, laparoscopy-assisted total gastrectomy (latg) or totally laparoscopic total gastrectomy (tltg), d2 lymph node dissection, combined resection of other organs, method of anastomosis, and jejunal pouch reconstruction. the surgical results and postoperative outcomes were compared between the two groups. results: a total of 152 patients were matched for the analysis. contrary to our expectations, there were no differences in surgical results with respect to operative time and estimated blood loss (low bmi 336.3 ± 72.0 min vs high bmi 354.5 ± 85.3 min, p = 0.479; low bmi 144.2 ± 300.8 g vs high bmi 112.6 ± 155.8 g, p = 0.695). furthermore, there was no significant difference in the postoperative outcome of complications (clavien-dindo > iiia) or the length of postoperative hospital stay (low bmi 13 cases, high bmi 10 cases). baiocchi, general surgery, university of brescia-spedali civili, brescia, italy. background and aim: recently, indocyanine green (icg) was introduced into clinical practice as a fluorescent tracer. the use of icg for sentinel lymph node (ln) mapping has been investigated in many fields, such as breast surgery. methods: we conducted a single-center prospective trial. we included patients with gastric cancer who were candidates for surgery. icg was injected intraoperatively or the day before surgery, via the submucosal or subserosal route.
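propensity score matching, used in several of the comparisons above, can be sketched as greedy 1:1 nearest-neighbour matching on a precomputed propensity score. this is an illustrative simplification (the splenic-hilar study used 1:4 matching, and real scores would come from a logistic model on the listed covariates); patient ids and scores below are toy values:

```python
# Greedy 1:1 nearest-neighbour propensity-score matching without replacement,
# assuming propensity scores were already estimated (e.g. logistic regression
# on sex, age, ASA status, clinical stage, ...). Toy data, not study data.
def greedy_match(treated, controls, caliper=0.05):
    """treated/controls: lists of (patient_id, propensity). Returns matched pairs."""
    pairs, available = [], list(controls)
    for pid, ps in sorted(treated, key=lambda x: x[1]):
        if not available:
            break
        cid, cps = min(available, key=lambda c: abs(c[1] - ps))
        if abs(cps - ps) <= caliper:          # enforce a caliper on score distance
            pairs.append((pid, cid))
            available.remove((cid, cps))      # match without replacement
    return pairs

treated = [("t1", 0.30), ("t2", 0.62)]
controls = [("c1", 0.28), ("c2", 0.60), ("c3", 0.90)]
print(greedy_match(treated, controls))  # [('t1', 'c1'), ('t2', 'c2')]
```

treated patients with no control inside the caliper simply go unmatched, which is why matched cohorts (152 of 440 patients above) are smaller than the source cohorts.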
total or subtotal gastrectomy was performed by open, laparoscopic or video-assisted access. during standard lymphadenectomy for gastric cancer we studied lymphatic flow and ln brightness in vivo and ex vivo. (japan) introduction: in japan, the number of elderly patients with gastric cancer has been increasing in correlation with the increase in the average age of the population. the aim of this study is to assess the safety and efficacy of laparoscopic gastrectomy for cancer in elderly patients compared with the short-term outcome in the non-elderly. method: we reviewed 231 patients who underwent laparoscopic gastrectomy (distal gastrectomy, proximal gastrectomy, total gastrectomy). the incidence of advanced cancer (stage ii or more) was higher in elderly patients. there were no significant differences in operating time, blood loss and postoperative hospital stay. there were no significant differences in the incidence of postoperative morbidity. conclusion: in elderly patients, there was a tendency for reduced surgery to be selected according to individual condition, but there was no significant difference in short-term outcome. hence, we conclude that laparoscopic gastrectomy is indicated even in elderly patients. p548-upper gi-gastric cancer: improved technique of vacuum therapy. we carried out ltg. a patient factor (gender, age and bmi), an operation factor (operation time, bleeding amount, lymph node dissection and reconstruction), anastomosis-related complications (clavien-dindo classification, suture insufficiency greater than grade 2, anastomotic stricture, anastomotic bleeding and reflux esophagitis) and the postoperative length of stay were considered. result: the cs (circular stapler) group comprised 98 cases (85.2%) and the ls (linear stapler) group 17 cases (14.8%).
the median age was 70 years (38-84); 93 patients (83.4%) were men and 22 (16.6%) were women, and the median bmi was 22.5 (16.2-31.1); no significant difference in patient factors was found between the two groups. the mean postoperative length of stay was 13 days in the cs group and 11 days in the ls group. conclusion: in a comparison of esophagoenterostomy techniques in ltg, operation time was shorter for anastomosis with the linear stapler than with the circular stapler. on the day before the operation, we endoscopically clipped several points located 2 cm proximal to the tumor edge to cover about half of the tumor. after lymph node dissection, we incised the stomach with an endoscopic linear stapling device, including the previously placed clips. reconstruction was performed in all patients, who underwent billroth i or roux-en-y procedures. result: no complications were observed during preoperative endoscopic clipping or intraoperatively. p558-upper gi-gastric cancer: small intestinal tumors after laparoscopic surgery in our hospital. small intestinal tumors are rarely observed, accounting for about 3-6% (malignant cases: 1-2%) of all gastrointestinal tumors; therefore, their diagnosis can occasionally be difficult. however, capsule and balloon endoscopes have recently been widely employed. the patients were examined regarding background, diagnostic methods, pathological findings, postoperative course, and prognosis. results: the subjects consisted of 15 males and 5 females, with a mean age of 63 years. their chief complaints were black stools. the median distance from the ligament of treitz or the bauhin valve was 50 cm (2-200). postoperative complications were abdominal abscess (2 cases; 10.0%) and surgical site infection (ssi), hemorrhage, and paralytic ileus (1 case each; 5.0%).
pathological diagnoses were lymphoma and metastatic small intestinal tumor (2 cases; 10.0%), and granuloma, lipoma, peutz-jeghers polyp, clear cell sarcoma, malignant mesothelioma, and ectopic pancreas. most patients were diagnosed on the basis of bleeding, complicated by anemia and black stools. however, as most tumors were relatively close to the ligament of treitz or the bauhin valve, almost half could be diagnosed with small intestinal endoscopy before surgery. patients were classified as popf and no-popf according to their grade b or c popf status. popf was diagnosed according to international study group of pancreatic fistula (isgpf) criteria or clinical findings. patient characteristics, intraoperative parameters, electrosurgical device type, pathological findings, and early postoperative outcomes were compared. electrosurgical devices were classified as thunderbeat (tb) or laparosonic coagulating shears (lcs) based on energy source. results: eighteen patients developed grade b or c popf. among them, 12 (66.7%) and 6 (33.3%) were diagnosed with popf according to isgpf criteria and clinical findings, respectively. operation time (p = 0.03) and electrosurgical device type (p = 0.005) were significant risk factors for popf following lag, and operation time and tb device (or, 6.80) were independent risk factors for popf following lag. conclusions: operation time and tb use significantly affect the risk of popf and should be considered in future clinical studies. p560-upper gi-gastric cancer: feasibility and nutritional benefits of double flap with no-knife stapler reconstruction after laparoscopic proximal gastrectomy for gastric cancer were analyzed. receiver operating characteristic curves were generated, and by calculating the areas under the curve (auc) and the c-index, the discriminative abilities of crp during different periods were compared, including pre-crp, postoperative days 1, 3 and 5, and postoperative maximum crp (post-crpmax).
a decision curve analysis was performed to evaluate clinical utility. result: ultimately, 401 patients were included in this study and the median follow-up time was 29 (3-41) months. for postoperative recurrence, the auc and c-index of pre-crp were 0.692 and 0.678, respectively, significantly higher than those of the other crps (all p < 0.05). among the post-crps, similar findings were observed for overall survival. conclusion: both pre-crp and post-crpmax, cheap and easily obtained, are independent predictors of recurrence for gc. act significantly prolonged the rfs of stage ii/iii gc patients with high pre-crp. p566-upper gi-gastric cancer: robot-assisted gastroduodenal surgery: a single center experience. robot-assisted gastroduodenal surgery (ras) was introduced to overcome the technical limitations of conventional laparoscopy. it provides a 3d-magnified view to the surgeon and an increased ability to control the operative field by manipulating the optics, as well as enhanced mobility and precision of instruments. the aim of the present study is to evaluate the main outcomes of a single-center experience in gastroduodenal robotic surgery. materials and methods: we report a case series of patients who underwent robot-assisted gastroduodenal surgery at sanchinarro university hospital. conclusions: robot-assisted gastroduodenal surgery is a safe and feasible technique in experienced centers with advanced robotic skills. in the literature, there are only a few reports of robot-assisted gastroduodenal resection. further studies are necessary to better confirm our results. p567-upper gi-gastric cancer: atypical. methods: retrospective review of ogd reports before and after the introduction of the new guidelines. inclusion criteria: all elective ogds. exclusion criteria: emergency ogds and elective therapeutic ogds.
data recorded: patient demographics, endoscopist, indication, number of photos, anatomical site photographed, pathology identified and whether pathology was photographed or not. results: 1099 ogds were reviewed, 790 before the guidelines (group 1) and 309 afterwards (group 2). the most common indication was reflux, 166 (21%), in group 1 and anaemia, 67 (22%), in group 2. clinical utility of systematic pretreatment staging laparoscopic exploration. methods: all locally advanced gastric adenocarcinomas managed in the surgical oncological unit between 1st january and 30th november were prospectively enrolled in the study. in the absence of emergency surgery or preoperative contraindications, all patients with curative intent underwent either preoperative chemotherapy followed by surgical exploration in the intent of curative gastrectomy (g) or systematic pretreatment laparoscopic exploration (l). benkabbou surgical department. the patient background (age, gender, bmi) and c-stage among the preoperative factors were matched using the propensity score matching method, and the surgical results were compared and examined. results: thirty lg cases were matched with 30 rag cases. the operation time (rag/lg) was significantly longer in the rag group, at 309.2 ± 47.9 min / 220.6 ± 55.2 min (p < 0.05). the amount of blood loss was not significantly different, 10 ml / 9 ml (p = 0.693). pathologically t4a cases were involved in 4 cases in rag and 5 cases in lg. the extent of lymph node dissection (d1+/d2) was 23/7 cases in both groups. conclusions: rag in our clinical experience can be safely introduced and short-term results are comparable to those of lg. verification of the superiority of robotic surgery, including long-term results, seems likely to influence the future of robotic surgery. conclusions: totally laparoscopic gastrectomy is a feasible method in terms of surgical outcomes.
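the robotic-vs-laparoscopic gastrectomy comparison above matches 30 rag to 30 lg cases by propensity score. a hedged sketch of only the matching step, greedy 1:1 nearest-neighbour within a caliper on precomputed scores; the scores themselves would come from a logistic model of age, gender, bmi and c-stage (not shown here), and all patient ids and values below are hypothetical:

```python
def match_pairs(treated, control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour propensity-score matching.
    treated / control: lists of (patient_id, propensity_score).
    Returns matched (treated_id, control_id) pairs whose score
    difference is within the caliper; each control is used once."""
    pool = list(control)
    pairs = []
    # match the highest-score treated patients first (hardest to match)
    for tid, ts in sorted(treated, key=lambda x: x[1], reverse=True):
        if not pool:
            break
        cid, cs = min(pool, key=lambda x: abs(x[1] - ts))
        if abs(cs - ts) <= caliper:
            pairs.append((tid, cid))
            pool.remove((cid, cs))
    return pairs

# hypothetical scores for robotic (rag) and laparoscopic (lg) patients
rag = [("r1", 0.62), ("r2", 0.40), ("r3", 0.90)]
lg = [("l1", 0.60), ("l2", 0.41), ("l3", 0.10), ("l4", 0.88)]
print(match_pairs(rag, lg))  # → [('r3', 'l4'), ('r1', 'l1'), ('r2', 'l2')]
```

the caliper keeps poorly comparable patients unmatched rather than forcing a pair, which is the usual trade-off of matched designs: better balance at the cost of sample size.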
furthermore, totally laparoscopic total gastrectomy is not more technically difficult in advanced gastric cancer than in early gastric cancer, and is a safe method. key words: gastrectomy, reconstruction, laparoscopic surgery, stomach neoplasm. aims: meckel's diverticulum (md) is one of the most common congenital anomalies of the small intestine, caused by an obliteration defect of the omphalomesenteric duct. the objective of this study was to review surgical treatment and clinical outcomes of md, and evaluate the safety and feasibility of minimally invasive surgery (mis) in md. methods: we performed a retrospective analysis of medical records for patients who underwent meckel's diverticulectomy at six hallym-university-affiliated hospitals (18 d), as well as the average drainage stay. patients who underwent laparoscopic repair required significantly less parenteral analgesics than the open group. the mean postoperative stay was significantly shorter for the laparoscopic group (mean, 5.48 d) than the open one. morbidity from medical and surgical complications was higher in the open group (23 vs 2). the most common complication in both groups was medical; more cases of pneumonia occurred in the open group compared to the laparoscopic group. methods: a retrospective study using our prospective database was designed to analyse all the resected md in our centre. epidemiological data, clinical setting, diagnostic tests and histological results were reported. results: md was resected in 112 patients, 80 males and 32 females, with a mean age of 13.00 years (3.25-59.50). in 35 cases, a laparoscopic approach was chosen. eighty-seven percent of the patients had a presurgical imaging test (ultrasound, ct-scan or meckel's scan). background: perforated peptic ulcer (ppu) is a substantial health problem with significant postoperative morbidity up to 63% and mortality up to 40% worldwide.
aims: this study aimed to estimate the sensitivity of scoring systems for the prognosis of morbidity in patients operated on for ppu with diffuse peritonitis. methods: a total of 153 patients underwent emergency repair for ppu with diffuse peritonitis in pirogov russian national research medical university's surgical clinics during the years 2014-2016. different scoring systems used to predict outcome in ppu patients were identified: boey score, peptic ulcer perforation (pulp) score, asa, mannheim peritonitis index (mpi), and world society of emergency surgery sepsis severity score (wses score). to quantify the strength of the association between prognostic score and morbidity we used the odds ratio (or) with 95% ci. pulp score and asa score have good prognostic value in relation to morbidity, but less than boey, mpi and wses sss. patients with pulp > 7 had an or of 27 with a 95% ci of p596-upper gi-gastroduodenal diseases: gastrostomy tube placed by laparoscopy as a new therapeutic option for continuous intestinal infusion treatment with levodopa/carbidopa. we present 20-year outcomes of our initial consecutive patient cohort. methods: patients were identified in a prospectively maintained irb-approved database (1993-1998). post-operative eckardt scores and 5-point validated system questionnaires were obtained via telephone interviews. one patient required reoperation for failed myotomy. the mean eckardt score at 20 years was 0.81 (± 0.21), with all fourteen patients having an eckardt score < 3. all patients reported significant improvement in their quality of life. classic gerd symptoms (heartburn and regurgitation) were present in 3 (21.4%) patients. proton-pump inhibitors are being used by 35% of patients, with excellent symptom control. seven patients returned for a repeat egd (median 18.3 yrs), with 5 patients having normal anatomy and 2 having la grade a esophagitis (1 patient on ppi). barrett's esophagus was not detected.
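the ppu study above quantifies the score-morbidity association with an odds ratio and 95% confidence interval. a small sketch of that computation using woolf's logit method; the 2×2 counts below are hypothetical, not the study's:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (logit) 95% CI from a 2x2 table:
    a = high-score with morbidity,  b = high-score without,
    c = low-score with morbidity,   d = low-score without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts for a PULP > 7 cut-off
or_, lo, hi = odds_ratio_ci(20, 10, 15, 108)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # → OR = 14.4 (95% CI 5.7-36.6)
```

the interval is built on the log scale because ln(or) is approximately normal; exponentiating the bounds back gives the asymmetric ci typical of reported odds ratios.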
conclusion: long-term results from our early experience with lhm are excellent and durable, with only one patient requiring re-intervention in 20 years. until recently esophagectomy was the only choice in the treatment of patients with end-stage achalasia. the development of minimally invasive techniques such as laparoscopic heller myotomy and peroral endoscopic myotomy (poem) has allowed their use as treatment options. aim: to present an experience of treatment of patients with end-stage cardiac achalasia. materials and methods: since 07.2013, laparoscopic heller myotomy was performed in 3 patients, and esophagectomies were performed in 3 patients with failed previous myotomy made in other clinics. a gastric tube was used to replace the esophagus in the patients who underwent esophagectomy. after skeletization of the crura posteriorly to the esophagus, two separated rectangular patches of parietene progrip mesh (covidien) measuring 1 × 2.5-3 cm were attached to the posterior surfaces of the crura. the patches fixated themselves due to special hooks. then a continuous two-direction suture was placed through both crura along with the patches using self-gripping v-loc 2-0 suture (covidien). the same suture was used for construction of a nissen fundoplication wrap 3.5 cm long. an additional anchoring stitch through the wrap and esophageal wall was placed using ti-cron 2-0 suture (covidien). 3d laparoscopy was used while suturing, using the richard wolf epic system. results: all the procedures were performed successfully. there were no cases of bleeding from the suturing points, either from the crura or the fundus wall. there was no crural dehiscence while suturing, even if the distance between the crura was more than 4 cm. the mean duration of suturing facilitated by 3d laparoscopy was 15 min (range, 12-20 min) for crural repair, and 10 min (range, 8-25 min) for fundoplication. there was no excessive postoperative pain in any of the patients. there was no dysphagia 1 month postop in any patient.
conclusions: 1. the new technique of posterior buttress of crural repair using small patches of parietene progrip mesh and v-loc suture showed feasibility and safety. 2. the use of 3d laparoscopy facilitated suturing in such cases. the most commonly manifested symptoms are cough, sore throat, hoarseness, dysphonia and globus, and only 40% of patients with lpr have typical gerd symptoms. also, ppi therapy is less effective in patients with lpr in comparison with patients who have typical features of gerd. purpose: to compare the outcomes between surgical treatment and conservative therapy in patients with laryngopharyngeal reflux. materials and methods: for the period chesarev faculty surgery #1, federal state autonomous educational institution of higher education i.m. sechenov, moscow, russia. p614-upper gi-reflux-achalasia: a case of a primary parahiatal hernia associated with a type i hiatal hernia, emergency county hospital. parahiatal hernia is a rare disease that occurs when an abdominal organ protrudes through an opening adjacent to an anatomically intact esophageal hiatus. the herniated organ is usually the stomach, although cases of omental and colonic herniation exist. we report the case of a 60-year-old woman who complained of epigastric pain, starting 2 years prior, pseudo-angina, heartburn and bloating. based on imaging findings the patient was diagnosed with a parahiatal hernia and an associated type i hiatal hernia. the patient underwent surgery and a 7 cm diameter defect in the diaphragm lateral to the left crus was discovered, through which 40-50% of the stomach had herniated. the hiatal orifice was slightly enlarged but anatomically intact, with an associated small sliding hiatal hernia. we performed closure of the defect, hiatoplasty and a floppy-nissen fundoplication. pneumatic dilation with a 30 mm balloon was performed under general anesthesia. radiological contrast control and endoscopy reevaluation revealed a perforation just above the squamo-columnar junction.
a minimally invasive approach was decided. a fully covered esophageal stent was inserted. radiological control after 2 days revealed left pleurisy and migration of the stent. the same day an endoscopic repositioning of the stent with clip fixation was performed. left pleural puncture was performed and clear fluid was extracted. the condition of the patient worsened and she was transferred to the icu (08.07). we performed left pleurostomy and initial exploratory laparoscopy, with no intraperitoneal lesions. due to difficult transhiatal access to the inferior mediastinum the surgery was converted to open; a periesophageal mediastinal abscess was found and evacuated, and drainage and jejunostomy were performed. after a week, the patient presented progressive altered condition and febrile syndrome. thoraco-abdominal ct-scan showed left pleural effusions. left pleurostomy was performed, with extraction of fetid fluid. continuous lavage was instituted. on the 1st of august, the pleurostomy tube drained gastric content, and the clinical examination revealed signs of generalized peritonitis. laparotomy was performed with lavage, drainage and posterior decompression gastrostomy. results: postoperative evolution was favorable, with the suppression of pleural drainage on 13.08 and discharge on 29.08 with alimentation exclusively via jejunostomy. one month later, she had normal clinical and radiological examinations. evaluation of efficacy was performed with the reflux symptom index (rsi), specific for extraesophageal symptoms, subjective satisfaction and the occurrence of dysphagia and gas-bloat syndrome. an rsi score > 12 was considered pathological.
results: rsi significantly decreased after surgery (14) and with msa as compared with total fundoplication; 83.3% of patients were satisfied with surgery; a comparison between techniques showed superiority of msa. objective: to demonstrate the efficacy of hiatorraphy without the use of meshes in giant paraesophageal hiatus hernia, as well as the standardization of our technique, with the technical steps that we perform successively. material and method: clinical case: a 68-year-old man with a symptomatic hiatal hernia, with progressive intolerance and dyspnea. egd: the stomach rotated in a giant hiatal hernia; gastroscopy was not completed due to endoscope loop formation within the giant hiatal hernia with gastric volvulation. ct: large hiatal hernia, combined volvulation (organo-axial and mesenterico-axial), the stomach in a right subpulmonary situation. results: intervention: laparoscopic approach. hh of large paraesophageal size, double organoaxial-and-mesenteric volvular component, gastric walls very thickened and adhered to the mediastinum. reduction of all content and of the sac, which was adhered to the pleura; extended mediastinal esophageal dissection, up to the pulmonary vein, to get enough abdominal esophagus and rule out the presence of a short esophagus; posterior-anterior and left tutorized modified hiatorraphy with stitches in "u" with non-absorbable suture on teflon reinforcement patches; nissen fixed to both pillars; intramediastinal drainage. egd on the 1st day showed esophageal stenosis due to inflammation of the nissen, resolved with medical treatment. discharge on the 5th day. asymptomatic and without radiological recurrence after 10 months of follow-up.
conclusions: in giant and paraesophageal hiatus hernias, modified primary hiatorraphy together with extended mediastinal esophageal dissection can be an effective and safe alternative, and can be advised as a technical gesture prior to a collis-nissen and/or placement of a hiatal hiatoplasty mesh. united states of america. aim/background: prescribed opioids for pain control have been implicated as major contributors to addiction through their illicit use. efforts to reduce opioid prescriptions and measure their impact on outcomes are novel. we analyzed how patient outcomes are affected by reduced opioid prescriptions following laparoscopic foregut surgery (narcr: 89%), length of hospital stay (narcs: 1.60 days vs. narcr: 1.62 days), 30-day readmission rates (narcs: 1% vs. narcr: 4%) and perioperative complication rates. additionally, no significant qol outcome differences between the groups were reported at one month postoperatively. conclusion: our study supports reducing opioid prescriptions as a strategy to counter illicit drug use and addiction. 50 patients who underwent paraesophageal hernia repair at a tertiary referral center were analyzed retrospectively. demographic data, asa classification, characteristics of peh, onset of symptoms, dysphagia severity score, characteristics of fundoplication (partial vs. total; laparotomy vs. laparoscopy; emergency vs. elective) and surgical outcomes (length of stay, complications and 30-day mortality) were recorded and reviewed. results: 50 patients were included; 88% were female (mean age of 76.8 years and mean body mass index of 21.5). mean onset of symptoms was 2.0 weeks after peh repair; dysphagia severity scores changed from 3.7 to 1.8.
conclusion: in our series, the dysphagia severity scores were reduced after surgery. upper gi surgery, the catholic university of korea, mary's hospital, incheon city, korea. p634-upper gi-gastric cancer, general surgery: biopsy from the mass showed poorly differentiated signet ring cell adenocarcinoma. chest computed tomography revealed a 38 mm thoracic aortic aneurysm. abdominal computed tomography showed a 43 mm infrarenal aortic aneurysm and no evidence of metastatic disease. general surgery: a 70-year-old female patient presented with upper abdominal pain and weight loss (10 kg during the last three months), without nausea or vomiting. biopsy was done and the pathology result showed intestinal-type, her2-negative adenocarcinoma of the stomach. chest and abdominal computed tomography (ct) were normal. endoscopic ultrasound (eus) revealed a 3 cm lesion with invasion to the muscularis propria (mp). she was treated by neoadjuvant chemotherapy (3 cycles carboplatin + 5fu). the patient underwent laparoscopic partial gastrectomy with modified d2 lymphadenectomy and billroth ii gastrojejunostomy. total operating time (ort) was 320 min. three days after the operation the patient started a regular diet and was discharged home on day five. the final pathology result confirmed intestinal-type, moderately differentiated adenocarcinoma of the stomach. economou, 1st surgical department; 1 general, visceral and transplant surgery, section minimally invasive surgery, heidelberg university hospital, heidelberg, germany; 2 department of surgery, iuliu hatieganu university, cluj-napoca, romania; 3 general and visceral surgery, klinikum mittelbaden, baden-baden, germany; 1 surgery, toyonaka municipal hospital, toyonaka city, osaka, japan; 2 gastroenterological surgery, osaka university, osaka, japan; 3 next generation endoscopic intervention, osaka university, osaka, japan. aims: uncomplicated healing of anastomoses in colorectal surgery is the basis for early adjuvant oncological therapy.
the basis for proper healing is good blood flow. in robotic surgery we use the firefly system by intuitive for this control. since january 2018, icg has been used to assess blood flow in laparoscopic bowel surgery with the d-light system by storz. method: the use of icg aims to accurately determine the resection line for free colon operations based on good blood circulation. we use icg in two boluses, with color detection using d-light by storz, to verify blood flow. the first dose is given after the skeletalisation of the intestines intraabdominally and the second after the colic anastomosis to verify its vitality. results: in the period under review we performed 85 laparoscopic operations on the free colon; 82% of the operations were elective. we had 5.88% leakage across the set; however, in the subset of elective operations, we had a leak rate of only 0.142%. conclusion: in an unselected set of colorectal operations, leakage was 5.88%, but only 0.142% for elective operations. in our group there was a clear effect of using icg in elective laparoscopic resections with an intracorporeal anastomosis; the effect was not shown in the others, probably because the leaks were due to factors other than blood flow. objective: right hemicolectomy (rhce) is the first choice in treating right colon cancer. complete mesocolic excision with extended lymph node dissection at the roots of the superior mesenteric artery (sma) branches enables removal of all lymphatic tissue and prevents local recurrence. variability of the sma branches has previously been demonstrated. the aim of the presented study was to compare the distribution of sma branches in two ethnically different cohorts. methods: preoperative ct scans with vascular 3d reconstruction were assessed in 100 patients (28-93 years) from russia and 95 patients (24-88 years) from turkey with right colon cancer operated on in 2015-2018. the distribution of the ileocolic artery (ica), right colic artery (rca) and middle colic artery (mca) was investigated.
results: ica and mca could be found on ct scans in all patients, whereas rca had a significantly different distribution between patient cohorts: it was visible in 93 (98%) of turkish patients and only in 32 (32%) of russian patients (p = 0.001). conclusion: these results suggest that there might be ethnic differences in sma branch distribution. in turkish patients all named sma branches are visible on ct scans in 98%, whereas in russian patients only in 32%. the majority of patients from russia do not have an rca. ica and mca could be found in all patients regardless of ethnicity. knowing the variant of sma branching before the operation can help plan extended lymph node dissection. the national training programme in laparoscopic colorectal surgery (s-micras-lapserb) in serbia was set up to introduce standardized and structured training in laparoscopic colorectal surgery. method: an assessment-based structured training programme (lapserb) started in 2015. a series of hands-on supervised workshops was conducted for four different hospitals using structured training by a single trainer. this study aims at retrospective analysis of prospectively collected data for patients undergoing colorectal resections. we look at the short-term clinical and pathological outcomes of patients within laparoscopic colorectal resections performed in the national training program. results: during the period november 2015 until november 2018, laparoscopic colorectal resection was performed in 746 (426 male and 320 female) patients. the mean age of patients was 65.6 (21-88). the most common indication was colorectal cancer (645 patients, 86.4%), 80 (10.7%) patients were operated on due to colorectal polyps not suitable for endoscopic resection and 21 (2.8%) due to ibd. there were 174 (23.3%) right colonic, 557 (74.6%) left colonic/tme and 13 (1.7%) other resections. the average number of lymph nodes harvested in patients with colorectal carcinoma was 16.5.
there were 6/645 (0.9%) r1 resections. mean duration of hospital stay was 6.5 days (range 2-18). postoperative complications were encountered in 103/746 patients (13.8%). overall, the mortality rate was 1.2% (9/746). conclusions: this study demonstrates successful and safe adoption of the laparoscopic technique for colorectal resections. short-term clinical and pathological outcomes are comparable to published data and show wider adoption at the national level. standardization of operative technique and structured training remain the key to success. introduction: female adnexal tumor of probable wolffian origin (fatwo) was first described in 1973. it is a tumor of mesonephric (wolffian) duct origin. fatwo is a rare tumor which is usually benign. 71 cases have been reported in the literature, and only 8 cases of recurrent disease. another rare tumor in a pelvic localisation is the sex cord-gonadal stromal tumor of sertoli cells. methods: we present a case report of a woman who presented with metastatic fatwo together with a sertoli tumor. results: a 60-year-old woman underwent extirpation of a tumor in the left broad ligament 15 years ago. histologically it was a rare benign fatwo. this year adnexectomy was indicated at the gynecology department. during the operation bilateral adnexectomy was done and a tumor of the anterior wall of the upper rectum was discovered. microscopic examination showed a sertoli tumor in the left ovary. afterwards we completed further examinations. colonoscopy was without any abnormality. on ct scan there was a 7 cm tumor without contact with the rectal wall and without distant metastasis. the same was described on rectal ultrasonography: normal rectal wall, tumor probably from the uterus. at diagnostic laparoscopy there was a 7 cm tumor mass, with necrosis, arising from the anterior wall of the rectum, plus small metastases on the pelvic peritoneum.
we performed debulking of this big tumor and metastasectomy; there was no infiltration into the muscularis propria of the rectum. the patient did not have any postoperative complications. microscopic examination of the rectal tumor and small peritoneal metastases showed metastatic fatwo. after 6 weeks she underwent a laparoscopic second-look operation. there were small metastases on the pelvic peritoneum. we removed the two biggest metastases and the rest were destroyed with j-plasma. microscopic examination showed in these metastases a sertoli tumor. conclusion: our patient has metastatic fatwo and a sertoli tumor. fatwo is so rare that there is not enough information in the literature on observation or adjuvant therapy. in one case imatinib mesylate (gleevec) therapy was described with good results. surgeons must be ready to meet new diagnoses. bochdalek hernia is a type of congenital diaphragmatic hernia. in most cases, it is diagnosed during the neonatal period. we present a case of laparoscopically treated congenital bochdalek hernia that led to jejunal strangulation in an adult. case: a fifty-eight-year-old obese (bmi = 38.3) female was admitted for gradually worsening right flank pain, vomiting and respiratory distress for one day. there was no history of trauma to the chest or abdomen. her past medical history included old cva, well-controlled hypertension and dm. she was hemodynamically stable. her right flank was very tender. there was no abdominal distension. initial cbc tests showed leukocytosis (15,200/ul), but electrolytes were normal. chest pa revealed a right-side diaphragmatic hernia. there was poorly enhanced, herniated small bowel in the right hemithorax on the chest ct scan. the patient was taken for emergency operation. on laparoscopy, the normal liver was displaced leftward because of the herniated bowel. there was incarcerated jejunum and omentum which could not be reduced.
so we widened the 3 cm posterolateral diaphragmatic defect first, and then we could reduce the strangulated jejunum (45 cm in length) and omentum. there was no hernial sac. the defect was closed with 2-0 prolene. finally, the strangulated jejunum was resected and anastomosed extracorporeally. the hospital course of the patient was uneventful. on post-operative day 4, the patient was allowed a soft diet. the patient was discharged on post-operative day 8 without any complication. conclusion: congenital diaphragmatic hernia is an uncommon condition in adults, but diaphragmatic hernia should be kept in mind as a cause of intestinal obstruction and respiratory distress in an adult. prompt surgical intervention is required for a favorable outcome. laparoscopic repair of bochdalek hernia is a good management option. aims: intestinal obstruction is one of the most frequent abdominal conditions in the emergency department (ed). up to 93% of patients having undergone a laparotomy will have an episode during their lives, of which 60% to 70% will respond to conservative management. the laparoscopic approach is widely accepted and supported by the studies published to date. it is recommended in patients with a suspected single band, who have no more than one previous laparotomy and less than 24 h of clinical evolution. our objective is to validate from our experience that these premises are the appropriate ones for the selection of candidates for a minimally invasive approach. methods: we present a series of 26 cases admitted with symptoms compatible with adhesive intestinal obstruction in the ed of a third-level hospital during 16 months. all patients underwent abdominal ct to rule out the possible causes of obstruction. emergency surgery was indicated because of failure of conservative medical treatment or because of the findings of the complementary tests.
results: an initial laparoscopic approach was performed in the 26 patients, with a conversion rate of 38% of the cases (resection and anastomosis was required in 7 patients, due to a compromised loop or an intestinal tumor not seen on the ct). among the patients who required laparotomy, 80% had more than 24 h of clinical evolution before the surgery and 20% had free fluid on the ct. as surgical complications, 5 intestinal perforations were produced secondary to manipulation. there were 3 recurrences of obstruction in the following 3 months. conclusion: the laparoscopic approach is feasible in selected cases and experienced hands. according to our results it is recommended to perform it only in patients with less than 24 h of evolution and with a single band image on the ct without free fluid. the intestine should be explored avoiding the manipulation of the most distended loops to prevent complications, keeping in mind the possible conversion to laparotomy in case of complications. aim: the aim of the study was to evaluate whether the physiologic and operative severity score for the enumeration of mortality and morbidity (possum) is useful to predict the risk of complications in patients older than 80 years. methods: we performed a retrospective study of 13 patients older than 80 years diagnosed with acute abdomen who were admitted to the department of general, minimally invasive and elderly surgery in olsztyn between may and october 2017. results: the most common diagnosis was ileus. the mortality rate in the surgery department was 31%. after relocation to the intensive care unit, the overall mortality rate was 53.9%. the patients who died a short time after surgery had mortality rates greater than 95% and morbidity rates greater than 60% according to possum. conclusions: this study shows that possum seems to be a valuable scale to predict the risk of death after surgery in older patients.
patients with higher mortality and morbidity scores should be very carefully selected for surgery. aims: diaphragmatic hernia in adulthood is rare. the most common causes are blunt and penetrating trauma. we present an intraoperative video of the laparoscopic repair of an adult-onset, non-traumatic diaphragmatic hernia in a patient with splenomegaly. method: a 66-year-old woman was referred to upper gastrointestinal surgery with epigastric burning and pain in the left side of her chest, radiating to the left shoulder, for one year. there was no recent or distant history of trauma. she has a past medical history of treated hepatitis c, cirrhotic liver disease, splenomegaly, thrombocytopenia and iron deficiency anaemia. gastroscopy was interpreted as showing a fundal diverticulum. ct abdomen/pelvis with intravenous (iv) contrast showed this to be a left diaphragmatic defect with herniated stomach causing a volvulus, which lay immediately above an enlarged spleen. a ct three years prior to this had shown no diaphragmatic hernia. the patient had some symptomatic relief with a proton pump inhibitor and oral antacids; however, due to her persistent symptoms surgery was undertaken. the patient had laparoscopic repair of the diaphragmatic hernia. ports were as follows: 10 mm umbilical, 12 mm left upper quadrant, 5 mm right upper quadrant, 5 mm left iliac fossa. a left posterior diaphragmatic defect was found, just above the enlarged spleen, containing the incarcerated fundus of the stomach. the hernia was reduced by gradual dissection of the sac. a 15 × 15 cm nonabsorbable polypropylene mesh (proceed) was used to cover the defect with a 4 cm margin. this was tacked in place with protack. a single 16-french robinson drain was left in situ. results: the procedure was uncomplicated. oral diet was introduced on post-operative day 1, and the drain was removed and the patient discharged on day 4. there were no post-operative complications.
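the possum audit described earlier converts physiological and operative severity scores into predicted risks via logistic equations. a sketch assuming the commonly cited copeland coefficients (these should be verified against the original publication before any clinical use); the patient scores below are hypothetical:

```python
import math

def possum(physiological, operative):
    """Predicted morbidity and mortality probabilities from the commonly
    cited POSSUM logistic equations (Copeland et al.):
      morbidity: ln(R/(1-R)) = -5.91 + 0.16*PS + 0.19*OS
      mortality: ln(R/(1-R)) = -7.04 + 0.13*PS + 0.16*OS
    PS = physiological score, OS = operative severity score."""
    def logistic(x):
        return 1 / (1 + math.exp(-x))
    morbidity = logistic(-5.91 + 0.16 * physiological + 0.19 * operative)
    mortality = logistic(-7.04 + 0.13 * physiological + 0.16 * operative)
    return morbidity, mortality

# hypothetical elderly emergency patient
morb, mort = possum(physiological=36, operative=20)
print(f"morbidity {morb:.0%}, mortality {mort:.0%}")  # → morbidity 97%, mortality 70%
```

this shape explains the audit's finding: elderly acute-abdomen patients accumulate high physiological scores, so the logistic curve quickly pushes predicted mortality above 95% for the sickest cases.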
conclusions: the video shows an effective dissection of a left-sided diaphragmatic hernia and mesh repair, overcoming multiple technical challenges secondary to splenomegaly and portal hypertension. aims: creating a laparoscopic anastomosis is a challenging surgical skill with high clinical relevance. to assure efficient training and enhanced learning curves, constructive and objective feedback is essential. currently there is no appropriate instrument to assess surgical performance while creating a laparoscopic anastomosis. the aim of this study is to develop and validate the anastomosis-objective structured assessment of technical skill (a-osats) score. methods: to obtain an international expert consensus for a procedure-specific checklist (psc) for laparoscopic anastomosis, a modified delphi survey with an integrated analytic hierarchy process is currently being performed. each a-osats sub-step is assigned a specific weight to determine its importance to the final outcome of the anastomosis. to validate the a-osats score, a laparoscopic side-to-side small bowel anastomosis with a linear stapler and hand-sewn closure of the enterotomy was chosen and is performed by surgeons with varying degrees of laparoscopic experience on a live porcine model. all performances are recorded and rated twice using the a-osats by two blinded experts. results: the final a-osats score includes a weighted psc developed by the modified delphi survey and the already validated global rating scale of previously published osats scores. four key steps (bowel placement, creation of enterotomies, stapling, closure of enterotomy) and sub-steps, as well as their definitions, were established during the delphi survey. to validate the a-osats, 16 surgeons (4 experts, 9 intermediates, 3 novices) have participated in the study so far.
preliminary results showed significant differences between all three levels of laparoscopic experience (novices: 73.6 ± 8.9; intermediates: 89.1 ± 8.7; experts: 110.2 ± 5.6; p < 0.001) for the overall a-osats score as well as for the psc itself (novices: 52.0 ± 7.8; intermediates: 63.4 ± 6.2; experts: 76.5 ± 4.6; p = 0.001). conclusions: the a-osats is a weighted score that objectively assesses surgical skill during the creation of a laparoscopic anastomosis. preliminary results confirm the construct validity of the proposed score. furthermore, by offering the possibility to differentiate single aspects during the procedure, the a-osats allows focused feedback to enhance one's performance. minor changes in weights are expected after the last round of the delphi survey. inter- and intrarater reliability will be assessed after final inclusion of all participants. aims: to apply augmented reality technology from three-dimensional colon models as a preoperative planning method in colorectal surgery. method: from three-dimensional anatomical models of the colon we have developed holograms of augmented reality. the models were obtained from ct images (siemens somatom perspective 64®) with 1 mm thick abdominal image cuts. the images were retrieved in dicom format and the processing to achieve the three-dimensional reconstruction was performed with the programs osirix® and horos®, which made a complete segmentation of the colon surface and a modification of the image density. in this way 3d models were obtained of the isolated colon, and in relationship with the bone structure. the application colon 3d ar was designed (increased hyper experience-visualizer with slam technology), creating a hologram of augmented reality at 1:1 scale from each three-dimensional model to make a projection of it on the abdomen of the patient by modifying the position in height of the reconstruction, using the bony pelvis as an anatomic reference point to calibrate the placement of the hologram.
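the a-osats abstract above assigns each delphi-derived sub-step a weight before summing ratings. a minimal sketch of such a weighted checklist score; the step weights and ratings below are hypothetical, not the survey's:

```python
def weighted_osats(ratings, weights):
    """Weighted procedure-specific checklist score: each sub-step rating
    (e.g. 1-5) is multiplied by the weight the Delphi/AHP process assigned
    to that step, then summed into the overall score."""
    assert ratings.keys() == weights.keys(), "every rated step needs a weight"
    return sum(ratings[step] * weights[step] for step in ratings)

# hypothetical weights for the four key steps named in the abstract
weights = {"bowel placement": 1.0, "enterotomies": 2.0,
           "stapling": 3.0, "enterotomy closure": 4.0}
ratings = {"bowel placement": 5, "enterotomies": 4,
           "stapling": 4, "enterotomy closure": 3}
print(weighted_osats(ratings, weights))  # → 37.0
```

because the total is a weighted sum, an error on a heavily weighted step (here, enterotomy closure) costs more points than the same error on a lightly weighted one, which is what lets the score deliver the focused feedback the authors describe.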
results: in the preliminary phase (from october to december 2018), augmented reality holograms were developed in 6 patients with colorectal cancer (right colon, left colon, transverse colon and rectum) to complement the radiological reconstruction with the virtual model. in the application phase (from january 2019) the holograms developed are going to be applied as a method to improve the preoperative study. conclusions: three-dimensional reconstruction of the tumor in the preoperative planning of colorectal surgery combined with hyperreality technology allows the development of augmented reality models in order to improve colon anatomy knowledge and to plan the surgical technique.

aims: laparoscopic adrenalectomy has become the standard of care for most adrenal masses. we report a case of laparoscopic adrenalectomy for a left adrenal adenoma. methods: we present the case of a 72-year-old caucasian female patient with an asymptomatic, left-sided adenoma that was incidentally detected during abdominal ultrasound. no headaches, palpitations, tachycardia, tremor, dizziness or vomiting were reported. pre-operative blood tests confirmed that the tumor was non-secreting, and a ct scan revealed a 2.9 × 2.2 cm left adrenal mass. laparoscopic surgical excision was proposed. the patient was placed in a semilateral right-sided decubitus position. four trocars (1 epigastric, 10 mm; 3 subcostal, 1 × 10 mm and 2 × 5 mm) were used, without the use of a liver retractor. the adrenal vessels were clipped not only with standard laparoscopic clips but also with the hem-o-lok ligation system. results: the operation lasted 2 h with minimal blood loss. the patient's post-operative course was uneventful and she was discharged four days post-operatively. the histology report confirmed an adenoma of the adrenal cortex.

aims: since the first laparoscopic adrenalectomy in 1992 (gagner), the laparoscopic lateral transabdominal approach has proved to be the one of choice.
it provides easy anatomical orientation, and overall the technique is similar to other traditional laparoscopic procedures. on the other hand, posterior retroperitoneoscopic adrenalectomy (pra), described in 1995 (walz), has proven to be a safe and effective technique for the surgical management of several adrenal pathologies. the advantages include direct access to the adrenal gland, without the need for visceral mobilization or lysis of adhesions from previous abdominal operations, and the ability to perform a bilateral adrenalectomy without repositioning the patient. currently there is controversy about which is the approach of choice, taking into account the learning curve necessary for the retroperitoneal approach and the reduced number of patients with adrenal pathology amenable to surgical management. the objective is to demonstrate the safety and efficacy of the standardized laparoscopic approach to the left adrenal gland with 3 trocars for selected cases. methods: clinical case: a 43-year-old man with resistant hypertension despite concurrent use of three antihypertensive agents, with a biochemical and radiological diagnosis of left adrenal adenoma with primary hyperaldosteronism. demonstrative video of the technical steps, in the standardized way that we propose, for laparoscopic left adrenalectomy using only 3 trocars. results: full laparoscopic surgical approach in right lateral decubitus position: 3 trocars, lateral transabdominal approach. steps: 1. laparoscopic liberation of the splenic flexure of the colon for the colo-spleno-pancreato-gastric en bloc mobilization until identification of the left pillar; 2. dissection of the medial border of the gland, identification of the left renal and diaphragmatic veins, as well as the adrenal vein, which is dissected and clipped; 3. dissection of the lateral edge of the adrenal gland; 4. lower-pole dissection of the gland, completing the resection with ligasure®.
the patient presented a successful postoperative recovery, being discharged 24 h after the intervention. asymptomatic, the patient does not need antihypertensive drugs at 1-year follow-up. conclusion(s): standardization of the procedure allows reducing the number of trocars while maintaining the safety and effectiveness of the minimally invasive approach.

aims: cortical-sparing adrenalectomy is a suitable treatment for hereditary and sporadic bilateral pheochromocytoma in cases of low risk of malignancy, to reduce the possibility of adrenal insufficiency while accepting the chance of local recurrence. the aim of the study is to analyze the functional results of partial adrenalectomy by the retroperitoneal endoscopic approach in single-adrenal patients or patients requiring bilateral adrenalectomy. methods: prospective study between january 2015 and october 2017 including pheochromocytoma patients diagnosed with mutations carrying a low risk of malignancy. all patients agreed to be included in the study. experienced endocrine surgeons trained in minimally invasive endocrine surgery performed the procedure using the same surgical technique. demographic variables and clinical characteristics were collected, and a descriptive analysis of the data was subsequently carried out. results: a total of eight patients were registered, five associated with men type 2 syndrome and three in the context of vhl syndrome. retroperitoneoscopic resection was performed without laparoscopic or open conversion and without postoperative complications; the average hospital stay was 2.2 days. preservation of a functional cortex without corticosteroids was achieved in 7 (87.5%) out of 8 cases, with a follow-up of 37.5 ± 4 months. today, these seven patients have preserved adrenal function without hormone replacement.
conclusions: cortical-sparing adrenalectomy by the retroperitoneal endoscopic approach, in expert hands, is safe and feasible for the treatment of hereditary and sporadic pheochromocytoma in a context of low malignancy, making it possible to avoid the need for corticoid replacement in most cases.

biomedical sciences, university of west attica, athens, greece. partial adrenalectomy has been suggested for patients with benign adrenal tumors, especially in the case of hereditary syndromes like multiple endocrine neoplasia type 2, von hippel-lindau disease and neurofibromatosis type i. aims: this systematic review aimed to investigate the role of partial adrenalectomy in the treatment of hereditary pheochromocytoma. methods: electronic databases were searched with the search terms 'men ii', 'von hippel lindau', 'neurofibromatosis', 'laparoscopic partial adrenalectomy' and 'robotic assisted partial adrenalectomy' for the time period up to and including december 2018. full publications, including clinical trials (randomized or not), retrospective studies, case series and case reports that provided relevant data met the inclusion criteria. results: thirty-five possibly relevant studies were identified. abstracts were reviewed and fourteen articles were excluded as they were review articles or articles presenting data on open partial adrenalectomy. twenty-one studies that met the inclusion criteria were retrieved in full text and included in the systematic review. eight studies presented data on partial adrenalectomy in patients with von hippel-lindau disease, including two case series with median follow-up ranging from 5 to 7.2 years and six case reports. thirteen studies presented data on partial adrenalectomy in patients with men ii, including two case series and eleven case reports. the recurrence rate was estimated at about 10% for pheochromocytoma. the overall steroid dependence rate was estimated at 90%.
conclusion: minimally invasive partial adrenalectomy is a therapeutic option, especially in patients with heritable pheochromocytoma, given that tumors are often bilateral, tumors are commonly benign, and severe morbidity and mortality may be associated with life-long steroid replacement therapy, such as the possibly lethal addisonian crisis. however, data are limited, follow-up is not standardized or appropriately reported, and rcts are difficult to conduct due to the rarity of the disease. a multinational registry on the short-term and long-term outcomes of partial adrenalectomy in hereditary pheochromocytoma would be a significant source of knowledge.

results: patients were operated on after an average of 31 months with complaints. in both groups, the leading symptoms were severe dysphagia and severe regurgitation. no intraoperative complication was detected. in the transoral group, one patient had to be reoperated on for bleeding; in the transcervical group, one patient developed pneumonia. the average duration of the surgeries (42.5 vs. 98 min, p < 0.001), the time to oral feeding (2.9 vs. 4.6 days, p < 0.001) and the mean hospital stay (7.3 vs. 9.7 days, p < 0.001) were significantly shorter in the transoral group than in the transcervical group. 15 patients were completely symptomless postoperatively. after transcervical treatment, complaints developed in 2 cases (moderate dysphagia and hoarseness). after transoral surgery, recurrent symptoms were observed in 6 patients; 4 had to be reoperated transcervically due to severe regurgitation. conclusion: transoral stapler diverticulostomy is a fast procedure and offers a short hospital stay, especially in comorbid, aged patients with an intermediate diverticulum size. in the long term, some of the patients may require reintervention due to persistent regurgitation. the transcervical approach has higher perioperative morbidity and can be performed in patients with diverticula smaller than 3 cm or of large size.
aims: complex hiatal hernias, involving either large hiatal defects or cases of recurrence, often need, apart from the primary closure of the hiatal gap, reinforcement of the crura with the use of meshes. our aim is to demonstrate the surgical technique for the on-lay placement of an absorbable mesh (phasix™ st mesh, bard) in challenging cases, presenting both the laparoscopic and the robotic approach. methods: we present video fragments from procedures of laparoscopic and robotic reconstruction of complex hiatal hernias, performed by our team, in which an absorbable mesh was utilized in an on-lay fashion. results: patients who had undergone a minimally invasive surgical approach (laparoscopic or robotic) for the treatment of complex hiatal hernias with the use of an absorbable mesh had an uneventful post-operative course and very short hospital stay and recovery time. the 6-month follow-up revealed no recurrences or late complications. conclusions: treating complex cases of hiatal hernias with a minimally invasive approach can prove quite challenging, with high rates of recurrence and possible complications. a proper surgical technique, either laparoscopic or, better (based on our early experience), robotic, performed by experienced surgical teams, together with the use of meshes with the right strategy, minimizes complications, offers all the benefits of minimally invasive surgery and reduces recurrence rates.

aims: several flexible endoscopic techniques for symptomatic zenker's diverticulum have been developed during the last decade. the thulium laser has limited tissue penetration and may decrease the risk of perforation. this study reports the first use of the thulium laser through flexible endoscopy for cricopharyngeal (cp) myotomy. the aims were the safety and efficacy of flexible endoscopic thulium laser myotomy and quality of life (qol) changes after treatment.
methods: a retrospective review of a prospectively collected database of 19 patients who underwent thulium laser septum division for symptomatic zenker's diverticulum was performed. demographic data, presenting symptoms, diverticulum characteristics and intraoperative data were analyzed. the functional outcome swallowing scale (foss) and m.d. anderson dysphagia inventory (mdadi) questionnaires were administered to determine the severity of dysphagia and its effect on qol, both preoperatively and during follow-up visits. all the operations were carried out under general anesthesia. a continuous laser configuration and an emission power of 9 w were used in non-contact mode. once the mucosa was opened, the fibers of the cricopharyngeal muscle were divided until the buccopharyngeal fascia was visible. results: between march 2017 and september 2018, 19 patients (12 males) underwent flexible endoscopic cp myotomy with the thulium laser. mean age was 72 ± 10.6 years, and patients were mostly male (68.4%). seven patients (36.8%) presented with a recurrent diverticulum after previous transoral or open treatment. mean diverticulum size was 2.5 ± 0.8 cm. the main preoperative symptoms were dysphagia (94.7%), regurgitation (68.4%) and cough (47.3%). the foss score was ≥ 2 in 12 patients (66.7%). mean mdadi global and composite scores were 47.8 ± 25.8 and 59.7 ± 9.4. complete division of the septum was achieved in all patients. mean hospital stay was 2.83 ± 1.62 days. there was only one perforation, treated conservatively. no 90-day mortality was observed. at a median follow-up of 7 months, the foss score was ≥ 2 in 1 (5.6%) patient, and the mdadi global and composite scores were 90.0 ± 12.4 and 89.5 ± 7.7. all main symptoms were significantly reduced and qol significantly increased. conclusions: the flexible endoscopic approach with the thulium laser is a safe and effective treatment option for zenker's diverticulum, either as a primary treatment or as rescue therapy.
objective: this study sought to explore prognostic factors for patients with borrmann type iv gastric cancer and to establish a predictive model for the survival benefit of postoperative adjuvant chemotherapy in such patients. method: this study reviewed the clinical data of patients who underwent curative surgery at fujian medical university union hospital from 2006 to 2014 for borrmann type iv gastric cancer using a prospective database. cox regression analyses were performed to identify prognostic factors, which formed the basis for a nomogram and risk groups. risk groups were established to identify patients with borrmann type iv gastric cancer who would benefit from adjuvant chemotherapy. results: 265 patients who underwent r0 resection were included in this study. multivariate analysis showed that bmi, tumour differentiation, pt stage, pn stage and asa score were independent prognostic factors. patients in the adjuvant chemotherapy (act) group had longer os than patients in the surgery-alone (sa) group, although the p-value for this difference was marginally above the threshold for statistical significance (23.8% vs. 10.9%, p = 0.057). stratified analysis showed that there was no significant difference in os between the act group and the sa group for each ajcc stage (stage ii: 40.6% vs. 29.8%, p = 0.44; stage iii: 21.4% vs. 9.7%, p = 0.056). a nomogram was established based on these independent risk factors, and nomogram scores were used to divide all patients into a high-risk group (score > 16), an intermediate-risk group (8 < score ≤ 16) and a low-risk group (score ≤ 8). further stratified analysis based on ajcc stage showed that the 3-year survival rate was higher in the adjuvant chemotherapy group than in the surgery-alone group for low- and intermediate-risk patients in each ajcc stage, while high-risk patients in stage iii did not significantly differ.
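the risk grouping reported above follows directly from the stated nomogram-score cut-offs (score > 16 high risk, 8 < score ≤ 16 intermediate, score ≤ 8 low). a minimal sketch of that mapping, with the function name chosen here for illustration:

```python
# risk grouping by nomogram score, using the cut-offs stated in the abstract:
# > 16 -> high, 8 < score <= 16 -> intermediate, <= 8 -> low.

def risk_group(nomogram_score: float) -> str:
    if nomogram_score > 16:
        return "high"
    if nomogram_score > 8:
        return "intermediate"
    return "low"

print([risk_group(s) for s in (5, 12, 20)])  # ['low', 'intermediate', 'high']
```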
objective: this study sought to explore the prognostic factors for smoking patients with gastric cancer and to establish a predictive model for the survival benefit of postoperative adjuvant chemotherapy in such patients. methods: we studied 2081 patients who were diagnosed from september 2009 to september 2014 at union hospital of fujian medical university. cox regression analyses were performed to identify prognostic factors. the kaplan-meier method was used to assess the effect of smoking history on the benefit of adjuvant chemotherapy after gastric cancer surgery. a decision tree algorithm was used to identify smoking patients who benefited from postoperative adjuvant chemotherapy. results: the median follow-up time for the whole group was 42.5 months, and the average age of all the included patients was 61.5 years. multivariate analysis showed that age (p < 0.001), bmi (p < 0.001), degree of tumor cell differentiation (p < 0.01) and ajcc stage (p < 0.001) were independent risk factors for the prognosis of smoking patients. based on these independent risk factors, a decision tree model for the benefit of adjuvant chemotherapy for smokers with gastric cancer was established, and the smoking patients were divided into low-risk patients (3-year os, 78.7%), medium-risk patients (3-year os, 51.3%) and high-risk patients (3-year os, 28.4%) (p < 0.001). conclusion: cigarette smoking may reduce the efficacy of adjuvant chemotherapy after gastric cancer surgery. our decision tree model is simple and effective for identifying smokers who would benefit from adjuvant chemotherapy.

objective: our study investigated the effect of lymph node (ln) noncompliance on the long-term prognosis of patients after laparoscopic total gastrectomy (ltg) and explored the risk factors for ln noncompliance.
methods: the clinicopathological data of gastric cancer (gc) patients who underwent ltg with d2 lymphadenectomy from june 2007 to december 2014 were prospectively collected and retrospectively analyzed. the effects of ln noncompliance on the long-term prognosis of patients with gc after ltg were explored. results: the overall ln noncompliance rate was 51.9%. ln noncompliance was significantly correlated with age, bmi, asa score, tumor size, macroscopic tumor type and tnm staging (p values < 0.05). the survival rate of patients after ltg with ln compliance was significantly superior to that of patients with ln noncompliance (p = 0.013). the analysis stratified by tnm stage indicated that there was no difference in os between stage i patients with ln compliance and those with ln noncompliance; the os of stage ii/iii patients with ln compliance was significantly better than that of those with ln noncompliance. cox regression analyses showed that ln noncompliance was an independent risk factor for os. logistic regression analysis showed that high bmi (> 25 kg/m²) was an independent risk factor for the preoperative prediction of ln noncompliance in cstage ii/iii patients. compared with patients with a low bmi (< 25 kg/m²), those with a high bmi were more likely to show ln noncompliance during surgery, especially during the dissection of the #6, #8a and #12a ln stations. conclusion: ln noncompliance was an independent risk factor for poor prognosis in patients with advanced gastric cancer (agc) after ltg. patients with high bmi were more likely to have ln noncompliance, especially during the dissection of the #6, #8a and #12a ln stations. ln tracing is recommended for these patients to reduce the rate of ln noncompliance.

aim: to study the differences in pathology, survival and recurrence between special remnant gastric cancer (srgc) and nonspecial rgc (nrgc).
method: a total of 366 rgc patients from 7 hospitals in china were analyzed from january 2003 to july 2015. we compared the 3-year overall survival (os) and disease-free survival (dfs) rates and used two-step regression to explore the influence of the rgc category on patient outcomes. results: all of the patients were divided into the srgc group (group s) (n = 200) and the nrgc group (group n) (n = 166). the r0 resection rate and lymph node (ln) dissection number of group s were significantly higher than those of group n (p < 0.05). the difference in 3-year os was not significant (p = 0.282), but the 3-year dfs of group s was worse than that of group n (p = 0.042). two-step multivariate analyses showed nrgc was an independent risk factor for poor dfs. of the 225 patients who had undergone r0 resection, 74 patients (32.89%) suffered recurrence, and the recurrence rate of group s was significantly higher than that of group n (p = 0.039); moreover, the ln recurrence rate of group s was significantly higher than that of group n (p = 0.027). cox regression analysis showed that age, ca19-9 level, n stage and category of rgc were independent risk factors for rgc recurrence. conclusion: srgc has a higher r0 resection rate and ln dissection number than nrgc, but among patients who had undergone radical gastrectomy, srgc patients had worse dfs and a higher tendency for ln recurrence; thus, they should be treated differently in the clinic.

objective: the aim of this study was to report our institution's experience with a novel abdominal negative pressure lavage-drainage system (anplds) for anastomotic leakage (al) after radical gastrectomy (rg) for gastric cancer (gc). background: al is a severe complication associated with high morbidity and mortality after rg for gc. the optimal creation of drainage in al patients after rg remains controversial. methods: the study enrolled 4173 patients who underwent r0 resection for gc at our institution between 2009 and 2016. anplds was routinely used for patients with al after january 2014.
al rates and postoperative outcomes were compared before and after the introduction of anplds therapy. we used multivariate analyses to evaluate clinicopathological and perioperative factors for associations with al and failure-to-rescue (ftr) after al. results: al occurred in 83 patients (83/4173, 2%), leading to 7 deaths. the al rate was similar before (2009-2013, period 1) and after (2014-2016, period 2) the implementation of anplds (1.7% vs 2.3%, p = 0.121). age and malnutrition were independently associated with al. the ftr rate and abdominal bleeding rate after al were 8.4% and 9.6%, respectively, for the entire period, but compared with period 1, both significantly decreased in period 2 (16.2% vs 2.2%, p = 0.041; 18.9% vs 2.2%, p = 0.020, respectively). moreover, only anplds therapy was an independent protective factor for ftr after al. conclusion: our experience demonstrates that anplds is feasible and cost-effective for the management of al after rg for gc.

objective: to apply the principles of the 'metro-ticket' paradigm to develop a novel tnm staging system (ntnm) for gastric cancer (gc). background: the 'metro-ticket' prognostic tool for hepatocellular carcinoma has been proven to predict outcome, but a similar concept has not been investigated for gc. methods: the ntnm considered the distance from the origin on a cartesian plane incorporating the pn (x-axis) and pt (y-axis) stages. gc patients undergoing radical resection at fujian medical university union hospital (fmuuh) (n = 4267) were included. the ntnm was validated using 2 external cohorts from the sun yat-sen university cancer center (sysucc) (n = 1800) and surveillance, epidemiology, and end results (seer) (n = 3227) databases. results: ntnm classes with the same distance from the origin have the same stage; the stage increases with this distance.
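the 'metro-ticket' distance rule just described — pn and pt treated as coordinates, with classes at equal distance from the origin sharing a stage — can be sketched as below. treating the ordinal pt/pn categories as plain integers and using the euclidean distance are assumptions of this sketch; the abstract does not specify the exact metric, and the actual stage cut-offs are derived from survival data.

```python
import math

# sketch of the 'metro-ticket' idea: (pt, pn) as cartesian coordinates,
# grouped by distance from the origin. euclidean distance is assumed here.

def metro_distance(pt: int, pn: int) -> float:
    """distance of a (pt, pn) class from the origin of the staging plane."""
    return math.hypot(pt, pn)

# classes with the same distance would share a stage, e.g. pt1n2 vs pt2n1:
print(metro_distance(1, 2) == metro_distance(2, 1))  # True
# and the distance (hence the stage) grows with either axis:
print(metro_distance(3, 2) > metro_distance(2, 2))  # True
```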
among all patients, 48.0% (n = 2049) were restaged in the ntnm compared with the 7th edition of the ajcc-tnm classification; 26.2% (n = 1116) were downstaged in the ntnm compared with the 8th edition. the ntnm provides significant survival differences between stages (all p < 0.001). the survival difference between stages ib and iia was especially large for the ntnm (p < 0.001) compared to the 7th and 8th editions (p = 0.073). the concordance index and hazard ratio increased successively with the ntnm stage. similar findings were observed in both external cohorts. conclusion: compared with the ajcc-tnm classification, the 'metro-ticket' ntnm for gc is easier to remember and provides some improvements; therefore, the ntnm may be considered for adoption in future editions of the ajcc-tnm classification.

objective: to investigate the prognostic value of complete blood count (cbc)-based biomarkers for patients with resectable gastric cancer (gc). methods: patients with gc who underwent curative resection between december 2008 and december 2013 were included. estimated area under the curve (auc) and multivariate cox regression models were used to identify the best cbc-based biomarker. time-dependent receiver operating characteristic (t-roc) analysis was used to compare the prognostic impact. results: based on multivariate analysis, the lymphocyte-monocyte ratio (lmr) and hemoglobin (hb) level were independent prognostic factors (both p < 0.05). based on the lmr and hb level, we established the cbc-based inflammatory score (cbcs). a higher cbcs was associated with older age, female sex, higher american society of anesthesiologists (asa) score, proximal tumor location, larger tumor size, later stage and vascular involvement (all p < 0.05). univariate analyses showed that a higher cbcs was also associated with poorer overall survival (os), which was consistent in each stage (all p < 0.05).
multivariate analysis revealed that the cbcs was a significant independent biomarker (p < 0.05). furthermore, the t-roc curve of the cbcs was superior to those of the prognostic nutritional index (pni), systemic immune-inflammation index (sii), modified glasgow prognostic score (mgps) and c-reactive protein/albumin ratio (crp/alb) throughout the observation period. conclusion: preoperative lmr and hb were the optimal cbc-based biomarkers for predicting os in gc patients after curative resection. based on the lmr and hb, we developed a novel and easily obtainable prognostic score called the cbcs, which may improve the prediction of clinical outcomes.

purpose: the aim of this study was to evaluate the prognostic value of the eighth ajcc tnm staging classification for patients with gastric cancer who had already survived for 5 years. patients and methods: patients who underwent radical gastrectomy at a large eastern center were considered. the prognostic value of the staging systems was assessed and compared. additional external validation was performed using a dataset from the surveillance, epidemiology, and end results (seer) database. results: the 5-year overall survival (os) rate for patients in the training set was 59.4%. with the prolongation of survival time after surgery, the 5-year os improved significantly (p < 0.05). however, there were no significant differences in survival curves among patients who had survived 5 years after surgery. the auc and c2 of the eighth ajcc classification for the prediction of 5-year os decreased gradually after surgery and appeared stable after 5 years. for patients who survived 5 years after surgery, we constructed a new tnm staging system (ntnm) according to the survival curves of t stage and n stage. a two-step multivariate analysis showed that ntnm, age and sex were independent prognostic factors.
the ntnm demonstrated superior prognostic stratification, with higher c-statistic and likelihood ratio chi-square scores and lower aic values than those of the ajcc classification. similar results were observed in the external validation set. conclusion: the ntnm predicted additional survival more accurately than did the ajcc classification for patients who have survived 5 years after surgery; this may guide decisions regarding surveillance.

objective: to investigate the relationship between preoperative sarcopenia and systemic inflammation and to evaluate the prognostic impact of these factors in patients with resectable gastric cancer (gc). methods: patients with gc who underwent radical gastrectomy between december 2009 and december 2013 were included. a multivariate cox regression analysis was performed to identify the prognostic factors. a novel prognostic score (slmr) was developed based on preoperative sarcopenia and the lymphocyte-monocyte ratio (lmr), and its prognostic value was evaluated. results: in total, 1167 patients with resectable gc were included in the study. on multivariate analysis, preoperative sarcopenia and the lmr were shown to be independent prognostic factors (both p < 0.001). a low lmr was an independent predictor of sarcopenia (p < 0.001). based on preoperative sarcopenia and the lmr, we established the slmr. an elevated slmr was associated with older age, higher asa scores, larger tumor size, advanced stages and vascular invasion (all p < 0.05). multivariate analysis revealed that the slmr was a significant independent predictor (p < 0.001). we incorporated the slmr into a prognostic model that included tumor size and tnm stage and generated a nomogram, which accurately predicted 3- and 5-year survival for gc patients.

objective: to explore whether adjuvant chemotherapy is still needed in patients aged less than 50 years with pt1n0-3 and pt2/3n0 gastric cancer.
methods: multi-center cohort data of patients with gastric cancer who underwent radical gastrectomy were analyzed. kaplan-meier curves and cox regression were used to analyze the relationships between chemotherapy and prognosis. additionally, nomograms to predict the benefit of chemotherapy were established. results: in total, 1,432 patients with pt1n0-3 and pt2/3n0 gastric cancer were included. 217 patients (15.2%) were aged < 50 years. the 5-year overall survival (os) was not significantly different between the < 50 years of age group and the ≥ 50 years of age group (92.6% vs. 90.4%, respectively; p = 0.249). lymph node (ln) metastases (hr 5.054; p = 0.002) and ln dissection number < 15 (hr 6.944; p < 0.001) were independent risk factors for the os of patients aged < 50 years. adjuvant chemotherapy did not improve the 5-year os for patients aged < 50 years with pt1n0-3 and pt2/3n0 gastric cancer (p = 0.218). however, chemotherapy showed a significant benefit (p = 0.042) when there were ln metastases and/or the ln dissection number was < 15. two nomograms were constructed, and the calculated difference was the potential benefit of adjuvant chemotherapy for patients aged < 50 years. conclusions: ln metastases and ln dissection number < 15 were independent prognostic risk factors for patients aged < 50 years with pt1n0-3 and pt2/3n0 gastric cancer. patients with these risk factors may benefit from the addition of adjuvant chemotherapy.

objective: the choice of reconstruction after distal gastrectomy remains controversial. we have performed the roux-en-y (r-y) method after laparoscopic distal gastrectomy (ldg) as a standard since 2008, but we have performed the billroth ii (b-ii) method in an increasing number of cases, depending on the patient. we retrospectively investigated the outcomes of patients with the b-ii method after laparoscopic distal gastrectomy in our hospital.
methods: patients who underwent b-ii and r-y reconstruction after ldg from january 2008 to december 2015 were included. the patient characteristics, surgical outcomes and postoperative outcomes of the 2 procedures were retrospectively analyzed. we also compared the extent of gastritis on endoscopy and the loss of body weight at 1 year after surgery. results: b-ii/r-y: 110/307. b-ii was selected in elderly patients with poor asa-ps (p < 0.001). regarding surgical outcomes, operative time was shorter for b-ii than for r-y (p < 0.001), and blood loss was also smaller (p = 0.023). regarding postoperative outcomes, there were significant differences in complications (≥ grade 3) (b-ii vs. r-y: 0.9 vs. 6.8%, p = 0.013) and length of stay (b-ii vs. r-y: median 10.5 vs. 14 days, p < 0.001). there was a significant difference in the presence of gastritis between b-ii (35.4%) and r-y (11.0%) (p < 0.001), but no significant difference in loss of body weight (p = 0.105). conclusion: b-ii reconstruction may be an adequate procedure for high-risk cases because of its shorter operative time and the absence of severe complications.

background: numerous studies have shown that the short-term efficacy of three-dimensional (3d) laparoscopic radical gastrectomy (lg) is comparable to that of two-dimensional (2d) lg. whether 3d-lg affects the recurrence pattern after surgery has not been investigated. using data from a prospective clinical trial, the present study compares the recurrence patterns between 2d-lg and 3d-lg. methods: from january 2015 to april 2016, a total of 419 patients were recruited for the clinical trial (nct02327481). the recurrence types, the time to first recurrence and recurrence-free survival (rfs) were compared between the two groups. multivariate analyses of factors associated with rfs were performed to identify whether 3d-lg affects the recurrence patterns.
Results: Ultimately, 401 patients were analyzed (197 in the 2D-LG group and 204 in the 3D-LG group), and there were no differences in the clinicopathological data between the two groups. Distant metastasis was the most common type of recurrence. There were no significant differences between the two groups in the recurrence types, the time to first recurrence, or RFS (all p > 0.05). According to the 7th American Joint Committee on Cancer tumor-node-metastasis (TNM) staging system, both groups were stratified into pathological (p) stages I, II, and III. The stratified analysis showed no statistically significant differences in RFS between the 2D group and the 3D group in any subgroup (all p > 0.05). The multivariate analysis of RFS showed that pathological TNM (pTNM) stage and lymphovascular invasion were independent risk factors (all p < 0.05). The multivariate analysis of post-recurrence survival (PRS) showed that adjuvant chemotherapy was an independent protective factor (p = 0.043). Conclusions: Distant metastasis was the most common type of recurrence after LG. The postoperative recurrence patterns, RFS, and PRS after 3D-LG were similar to those after 2D-LG. Purpose: The aim of this study is to evaluate the efficacy of delta-shaped anastomosis compared to circular-stapler anastomosis in laparoscopic distal gastrectomy with Billroth I reconstruction (LADG-BI). Method: This is a single-center randomized controlled study. Eligibility criteria included histologically proven gastric adenocarcinoma in the lower third of the stomach and a clinical stage I tumor. Patients were preoperatively randomized to circular-stapler anastomosis or delta-shaped anastomosis. The primary endpoint is the number of analgesics used during the 3 days after surgery. We compared the surgical outcomes of the two groups. Postoperative QOL was evaluated using the Postgastrectomy Syndrome Assessment Scale-45.
This trial was registered at the UMIN Clinical Trials Registry as UMIN000016496. Results: Between December 2016 and September 2018, 39 patients (delta-shaped anastomosis 18, circular-stapler anastomosis 21) were enrolled. There was no difference in the number of analgesics used during the 3 days after surgery (median 9 for delta-shaped anastomosis vs. 7 for circular-stapler anastomosis, p = 0.91). There was no difference in the overall proportion of in-hospital grade II-IIIb surgical complications (11% for delta-shaped anastomosis, 14% for circular-stapler anastomosis). There was no operation-related death in either arm. Regarding postoperative QOL evaluated 1 month after surgery, the diarrhea subscale was significantly worse for delta-shaped anastomosis than for circular-stapler anastomosis. Conclusion: We did not demonstrate an advantage of delta-shaped anastomosis in terms of postoperative pain. Since delta-shaped anastomosis tended to cause postoperative abdominal symptoms related to diarrhea, we should apply it to LADG with caution. Introduction: The use of a three-dimensional (3D) camera for laparoscopic surgery has been reported in the literature. However, there are only a few comparative studies demonstrating its benefits, and no reports on the application of 3D vision to single-incision laparoscopic surgery. This study aims to compare 3D vision to the previous two-dimensional (2D) system in solo single-incision laparoscopic distal gastrectomy (SIDG). Methods: Medical charts of 179 gastric cancer patients who underwent solo SIDG from February 2014 to December 2017 were retrospectively reviewed. Patients were grouped into either the 2D group or the 3D group depending on the type of camera used. All the operations were performed by a single surgeon using a flexible camera (Olympus, Japan) fixed onto a passive scope holder, without a scopist or an assistant. Operative data, postoperative outcomes, and early complications were analyzed.
Results: Ninety patients had their operations under 2D vision and 89 under the 3D scope. There was no difference between the groups in age, body mass index, staging, or other demographic or histopathologic criteria. Operative time was significantly shorter in the 3D group (115.6 ± 34.0 vs. 129.4 ± 38.5 min, p = 0.012), and estimated blood loss was also lower (20.7 ± 30.0 vs. 35.1 ± 56.0 ml, p = 0.034). Patients in the 3D group started a fluid diet sooner (2.5 ± 0.9 vs. 3.0 ± 1.1 postoperative days, p = 0.006) and were discharged sooner (4.4 ± 1.7 vs. 5.2 ± 3.1 postoperative days, p = 0.024). Early complications were also less frequent in the 3D group (2.2% vs. 6.7%), but without statistical significance (p = 0.140). Conclusion: The use of a 3D camera improves operative outcomes and hospital stay in patients undergoing solo SIDG. The frequency of anastomotic leakage after gastrectomy reaches 7-8%. At the same time, mortality in this group of patients reaches 30%, and the use of aggressive surgical methods for the treatment of anastomotic leakage increases the mortality rate from 20% to 64%. Since 2006, vacuum-assisted closure has been used to treat anastomotic leakage of various localizations. The essence of this method is the creation of local negative pressure, which is transmitted to the leakage cavity through a special porous sponge system. The negative pressure created in the closed cavity allows removal of exudate, helps to reduce tissue swelling, and improves microcirculation, which in turn contributes to the development of granulation tissue and wound healing with closure of the fistulous tract. Failures with vacuum therapy for anastomotic leakage are associated with the great difficulty of delivering a polyurethane sponge with a drainage tube to the leakage zone.
In this regard, we developed an improved method of endoscopic local vacuum therapy in which the polyurethane sponge is delivered with the help of a thread passed through the pharyngeal ring and the leakage zone and brought out through a drainage tube. This technique has been successfully used in the treatment of four patients with anastomotic leakage after operations on the upper digestive tract. Complete healing of the leakage cavity and the defect of the organ wall required 6, 9, 10, and 5 sessions of VAC-system replacement, respectively (average 7.5 ± 2.4). There were no complications during the endoscopic local vacuum therapy. On control endoscopy 3 months after completion of treatment, tender scar tissue had formed at the site of the anastomotic suture defects, without signs of narrowing of the organ. Aims: Enhanced recovery after surgery (ERAS) pathways are safe and effective for patients undergoing gastrectomy. This study aimed to identify perioperative factors influencing adherence to the protocol, the postoperative course, and the consequent length of stay. Methods: Between 2014 and 2017, 201 patients were referred to our institution for gastric cancer. Among these, 21 patients underwent atypical gastric resection and were excluded from this analysis; the remaining 187 were assigned to either total or distal gastrectomy and represent the study population. All patients were managed with a standardised perioperative pathway according to ERAS principles. According to data from the literature and based on our clinical experience, patients with optimal adherence to the ERAS protocol may fit the criteria for discharge by the ninth postoperative day, which was considered our ideal threshold for hospital discharge. Data were retrospectively collected and analysed from a prospectively maintained database. Statistical analyses were performed using SPSS version 24 for Macintosh.
The χ² test, with a significance level of 0.05, was used to investigate the association between the outcome and perioperative categorical variables. When parametric assumptions were met, Student's two-tailed t-test was used to compare the means of continuous variables; otherwise, the Mann-Whitney test was performed. A significance level of 0.05 was chosen. Logistic binary regression with a backward selection procedure and a selection criterion of p < 0.05 was used to determine significant predictors. Results: 44 preoperative, intraoperative, and early postoperative variables were considered. Multivariate regression analysis revealed that incomplete preoperative immunonutrition, failure to extubate the patient at the end of surgery, intraoperative crystalloid infusion >2150 ml, blood transfusion >268 ml, surgery duration >195 min, and failure to mobilise the patient within 24 h of surgery were associated with delayed discharge. The logistic regression model was statistically significant (p < 0.001) and correctly classified 73.6% of cases. Sensitivity and specificity were 74.1% and 73.2%, respectively. Conclusions: The results seem clinically rational and focus attention on the importance of certain perioperative clinical issues for the management of the postoperative course. These variables could be considered clinical goals to be reached in order to achieve early discharge. Objectives: The purpose of this study is to confirm the safety of laparoscopic gastrectomy with intraperitoneal (IP) cisplatin administration as a treatment for advanced gastric cancer with potential for peritoneal seeding. Methods: From July 2014 to August 2018, 56 patients with advanced gastric cancer who underwent IP chemotherapy after diagnostic laparoscopy were retrospectively studied. All patients underwent laparoscopic gastrectomy with IP chemotherapy or IP chemotherapy alone after a diagnostic laparoscopy.
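The classification figures reported for the logistic model (73.6% correctly classified, sensitivity 74.1%, specificity 73.2%) all derive from the model's confusion matrix. A short sketch with illustrative counts (the study does not report the raw matrix, so the numbers below are assumptions chosen only to show the arithmetic):

```python
# How the abstract's classification metrics relate. TP/FN/TN/FP are taken
# from a confusion matrix: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
# accuracy ("correctly classified") = (TP+TN)/total. The counts below are
# illustrative, not the study's data.

def classification_metrics(tp, fn, tn, fp):
    total = tp + fn + tn + fp
    return {
        "sensitivity": tp / (tp + fn),    # delayed discharges correctly flagged
        "specificity": tn / (tn + fp),    # timely discharges correctly cleared
        "accuracy":    (tp + tn) / total, # overall proportion correctly classified
    }

m = classification_metrics(tp=40, fn=14, tn=60, fp=22)
print({k: round(v, 3) for k, v in m.items()})
```

Note that accuracy alone can mislead when the two outcome classes are imbalanced, which is why the abstract reports sensitivity and specificity separately.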
Gastrectomy was performed for palliative purposes even in the presence of seeding. Results: The average age of the patients was 56 years. Eight patients (14.3%) had preoperative chemotherapy. Curative resection (R0) was performed in 31 patients (55.4%). At diagnostic laparoscopy, cytology was performed in 38 patients (67.9%) and was CY1 in 10 (26.3%). Peritoneal metastasis was detected in 35 patients (62.5%). For the total cohort, the 2-year OS rate was 54.5% and the median survival time was 19 months. For stage IIIB and below, the 2-year OS rate was 83%, but it was 42% in the stage IIIC-IV group. When the R0 resection group and the R1-2 resection group were compared, the 2-year OS rates were 70.7% and 26.7%, respectively. Hematological toxicity such as neutropenia was not seen in any patient. The mean hospital stay was 8.2 days, and adjuvant chemotherapy was performed in 35 patients (62.5%). Background: Radical proximal gastrectomy (PG) and lymph node dissection are indicated for selected gastric cancers of the upper third of the stomach. With the advent of laparoscopic surgery, more and more PGs are performed by laparoscopic approaches. In the past 5 years, our team has accomplished and reported the oncological outcomes of laparoscopic distal gastrectomy in 100 cases of clinical stage I gastric cancer in Taiwan. Through the evolution of surgical techniques and teamwork, we have climbed the learning curve of laparoscopic gastrectomy and reconstruction. Materials and methods: In this report, we present our surgical experience with laparoscopic proximal gastrectomy for gastric cancer patients. From 2005 to 2018, 192 patients with gastric cancer underwent laparoscopic gastrectomies by the same surgical team at the National Taiwan University. Among them, six consecutive patients (male:female = 3:3) with gastric adenocarcinoma of the upper stomach underwent laparoscopic PG in 2018.
The demographics, dissection and reconstruction methods, and perioperative outcomes are presented. All six patients tolerated the procedure well; one patient had mild anastomotic stenosis that improved with one session of endoscopic dilatation. One patient needed a temporary proton pump inhibitor to control acid reflux. Four of the six patients had pathological stage I disease, and the remaining two patients had stage IIA and IIIA disease. There has been no tumor recurrence to date. Summary: Laparoscopic proximal gastrectomy is technically safe for treating upper-third gastric cancers. The long-term oncological outcome deserves further observation. Introduction: Open gastrectomy (OG) has long been the preferred surgical approach worldwide for the treatment of gastric cancer (GC). Several randomized, prospective trials have now confirmed improvements in postoperative outcomes for laparoscopic gastrectomy (LG) compared to open procedures, with similar oncologic outcomes. However, most of these studies come from Eastern countries. Material and methods: A prospective non-randomized study was conducted of all patients operated on for GC at Ramón y Cajal University Hospital from January 2015 to December 2017. Of the 96 patients enrolled, 47 underwent LG and 49 OG. Textbook outcome was defined as the percentage of patients who underwent a complete tumour resection with at least 15 lymph nodes in the resected specimen and an uneventful postoperative course, without hospital readmission. Results: A textbook outcome was achieved in 51.04% of patients operated on for GC. The outcome parameter 'no severe postoperative complication' had the greatest negative impact on the textbook outcome. A statistically higher number of patients with early cancer (40% vs. 16.3%) and subtotal gastrectomy (57.5% vs. 34.7%) were found in the laparoscopic group.
No statistically significant differences were found between the open and laparoscopic approaches regarding operating time, rate of microscopic margin positivity, hospital stay, number of retrieved lymph nodes, complications, reinterventions, mortality, or readmissions. No statistically significant difference in textbook outcome was found between the two groups (57.14% vs. 45%; p = 0.25). Conclusions: Laparoscopic gastrectomy for the treatment of gastric cancer seems to be safe and feasible, with textbook outcomes similar to those of open gastrectomy. Introduction: Laparoscopic surgery has been increasing for the treatment of gastric cancer. However, standardization of this minimally invasive approach has not yet been reached because of its technical difficulties and concern about oncological safety. The aim of the study was to analyze the outcomes of our learning curve in this complex surgical technique. Material and methods: The first 100 consecutive cases of laparoscopic gastrectomy (LG) performed at our hospital from November 2008 to February 2018 were enrolled. Patients were divided into two groups based on the period in which they were operated: the training phase (TP) between 2008 and 2014 (46 cases) and the more-developed phase (MDP) between 2015 and 2018 (54 cases). Conversion, lymphadenectomy and retrieved lymph nodes (LN), hospital length of stay, mean operative time, complications, reintervention, and mortality rates were compared between the two phases of the learning curve. Results: The number of retrieved LN was higher in the MDP (17 ± 8.6 vs. 23.3 ± 10.4; p = 0.004). Furthermore, we also found fewer complications (47.8% vs. 27.8%; p = 0.038), a decreased reintervention rate (15.2% vs. 1.85%; p = 0.023), and lower overall mortality (8.7% vs. 0%; p = 0.003) in the MDP. There were no significant differences in conversion rate, mean operative time, or hospital length of stay between the phases.
Conclusion: Although we consider that our learning curve is not yet complete, because the monitored parameters have not reached a steady state, the improvement in surgical parameters and postoperative course over the last two years shows that our results are close to the best results published in the literature. Aims: Lymph node (LN) dissection is essential in oncological gastrectomy, given that the rate of LN metastases is very high even for early gastric cancer (4.9% for T1a and 21.4% for T1b). Thus, D2 dissection for advanced gastric cancer and D1+ for early gastric cancer are the gold-standard procedures. Some teams are using indocyanine green (ICG) lymphography to improve their LN dissections, claiming that this technique facilitates the harvesting of small fluorescent LN that would otherwise be difficult to identify by conventional laparoscopic methods. Methods: We present the case of a 60-year-old man with a T1b distal gastric cancer. Endoscopic ultrasound discarded the presence of metastatic LN, and CT scan showed no distant metastases. ICG was administered endoscopically the day before surgery; 6 mg was injected along the submucosal layer around the tumour. In the video we show how we perform a laparoscopic distal gastrectomy with D1+ LN dissection and Roux-en-Y reconstruction. ICG lymphography helped us to complete our expected LN harvesting, especially for stations 6 (infrapyloric) and 7 (left gastric artery). Thanks to this technique, we could resect LN that we might have missed during a usual laparoscopic procedure. Results: The patient was discharged home on the sixth postoperative day without complications and with adequate oral tolerance. Conclusions: We present a case in which we performed a laparoscopic distal gastrectomy with D1+ dissection and Roux-en-Y reconstruction, using ICG lymphography to improve our LN harvesting.
Although it is too soon to assess whether this technique increases the number of retrieved LN and at which stations it might be most useful, we consider it a harmless method that may help gastric teams to complete their expected LN dissections. Introduction: Gastrointestinal stromal tumor (GIST) represents around 0.1% to 3% of gastrointestinal neoplasms and is the most frequent mesenchymal tumor of the digestive tract. GIST can arise at any point from the esophagus to the anus, the most frequent sites being the stomach (39-60%) and the small intestine (30-42%). It is characterized by expression of the tyrosine kinase growth factor receptor CD117, differentiating it from other mesenchymal tumors, which do not express it. It is accepted that its origin corresponds to the interstitial cells of Cajal, which act as a pacemaker for intestinal motility. These are very heterogeneous tumors, varying in size, morphology, and biological behavior, and are neoplasms of uncertain malignant potential. The peak incidence is between the fourth and sixth decades, with a similar distribution by gender. Clinical case: A 70-year-old female was referred to the general surgery service after an upper GI endoscopy performed for dyspepsia. The endoscopy reported fundus mucosa without alterations and, at the level of the greater curvature, a 5 cm tumor, hard to the touch with the biopsy forceps, slightly irregular, covered with mucosa of normal appearance. Computed tomography: in the gastric body, a rounded nodular image without heterogeneous enhancement after intravenous iodinated contrast administration, extending to the peritoneal region, measuring 44 x 33 x 32 mm. Liver: a hypodense image without heterogeneous enhancement and, adjacent to this, a 10 mm rounded image for which MRI study was suggested. Gadolinium-enhanced MRI: a hepatic lesion with well-defined limits, without heterogeneous enhancement, of cystic aspect.
Gastric fundus: a heterogeneous formation that enhances with gadolinium, 37 x 38 x 40 mm, requiring a GIST to be ruled out. Surgical technique: laparoscopic partial gastrectomy. Pathological anatomy and immunohistochemistry: a 1.5 cm lesion with well-defined edges; a spindle-cell nodule of uncertain malignancy, CD117 (+), actin (−), DOG1 (+), S100 (−); no mitoses or invasion of the mucosa were observed. Conclusion: A case of gastric GIST is presented in which the main symptom was dyspepsia, the clinical presentation being very variable depending on the location of the tumor. The Fletcher criteria stratify the risk of malignancy: this lesion, at less than 2 cm (1.5 cm), was very low risk. The patient evolved favorably, without surgical complications. Aims: To present the surgical procedure of laparoscopic resection of the lesser gastric curvature and its pedicle, fulfilling oncological criteria, carried out in the general surgery service of the Hospital de Torrecárdenas. Methods: An 85-year-old man with prostate cancer treated with complete hormonal blockade, and COPD, who consulted for rectal bleeding of 1 week of evolution. He was diagnosed with a GIST of the gastric lesser curvature, 9 x 8 x 11 cm, highly vascularized, infiltrating the wall and producing a marked imprint on the fundus, supplied by the left gastric artery. He required blood transfusion; given hemodynamic stability, scheduled surgical resection was decided. Results: The surgery was performed laparoscopically, finding a tumor of approximately 10 cm dependent on the lesser curvature. The esophageal hiatus and the lesser curvature were dissected, with section of the left gastric pedicle. An atypical gastrectomy of the lesser curvature including the GIST was performed, creating a gastric sleeve based on the greater curvature. The anatomopathological study reported pT4 pN0, with 22 lymph nodes all free of disease, and disease-free surgical margins. He was discharged without complications on the 6th day and did not require readmission.
Conclusions: Laparoscopic atypical gastrectomy of the lesser curvature is safe and meets oncological criteria in selected patients when performed by an experienced esophagogastric unit. Aims: To present the surgical procedure of laparoscopic total gastrectomy with D2 lymphadenectomy, fulfilling oncological criteria, carried out in the general surgery service of the Hospital de Torrecárdenas. Methods: A 35-year-old male smoker consulted for epigastric pain and constitutional syndrome of 6 months of evolution. He was diagnosed with gastric adenocarcinoma T3N2M0. Neoadjuvant chemotherapy was decided, and scheduled surgery was performed 4 weeks after its completion. Results: The surgery was performed laparoscopically, showing a stenosing tumor of the gastric antrum of approximately 8 cm. Dissection of the greater curvature with section of the right gastroepiploic vessels at their origin, and duodenal section, were performed, followed by dissection of the lesser curvature with D2 lymphadenectomy and section of the left gastric pedicle and the distal esophagus. Transit was restored with a latero-lateral esophago-jejunal anastomosis and a jejunojejunostomy. The anatomopathological study reported ypT4a pN2, with 3/36 positive lymph nodes, and disease-free surgical margins. He was discharged without complications on the 7th day and did not require readmission. Conclusions: Laparoscopic total gastrectomy with complete D2 lymphadenectomy is safe and meets oncological criteria in selected patients when performed by an experienced esophagogastric unit. Background: In gastric cancer surgery, to secure the surgical margin it is necessary to judge the position of the tumor accurately. However, with conventional marking clips it is difficult to identify the exact location of the tumor during laparoscopic surgery.
Purpose: We investigated whether the ICG (indocyanine green) fluorescence navigation method is effective and safe for determining the cutting line in laparoscopic gastrectomy. Patients and methods: 428 subjects underwent laparoscopic gastrectomy (including robot-assisted surgery) based on the ICG method for gastric cancer in the period from April 2017 to December 2018. The day before surgery, ICG diluted 50 times (0.2 ml of reagent + 9.8 ml of distilled water) was injected into the submucosal layer, 0.5 ml at each of four points 1 cm from the tumor edge, and clips were placed at the same sites. Gastrectomy based on standard surgery was performed; the position of the tumor and the spread of ICG were confirmed by ICG fluorescence navigation during the operation, and the cutting line was determined. The extent of ICG spread from the tumor was measured again on the excised specimen and compared with the pathological margin. Results: Among the patients who underwent intraoperative pathological examination, margins were negative in all cases except one. The spread of ICG was 2.5 cm on average; considering the marking position (1 cm from the tumor edge), a margin of 3.5 cm or more could be secured. The operation time was 230.0 ± 92.7 min and the estimated blood loss was 24.6 ± 120.9 ml. Conclusion: Laparoscopic gastrectomy with the ICG method allows the tumor position and spread to be evaluated easily and in real time during the operation, and it was effective for determining the cutting line in laparoscopic gastrectomy. Epstein-Barr virus (EBV) is known as one of the causal viruses of gastric cancer. EBV-related gastric cancer is considered to account for about 10% of all gastric cancers, and it is rare for EBV-related gastric cancer to present with multiple lesions. The patient was a 60-year-old female.
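The dilution and injection-volume arithmetic in the ICG protocol above can be checked directly; a trivial sketch (only restating the abstract's figures):

```python
# Sanity check of the ICG protocol arithmetic from the abstract:
# 0.2 ml of reagent made up with 9.8 ml of distilled water is a 50-fold
# dilution, and 0.5 ml at each of four submucosal points consumes 2 ml
# of the working solution.

reagent_ml, diluent_ml = 0.2, 9.8
dilution_factor = (reagent_ml + diluent_ml) / reagent_ml

per_site_ml, sites = 0.5, 4
total_injected_ml = per_site_ml * sites

print(dilution_factor, total_injected_ml)  # 50.0 2.0
```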
On upper gastrointestinal endoscopy she was found to have lesions on the lower greater-curvature side of the gastric body, the lower anterior wall of the gastric body, the posterior wall of the mid-gastric body (two lesions), and the lesser curvature of the gastric angle; on biopsy, adenocarcinoma was observed in the first four. The patient underwent a robot-assisted total gastrectomy. Including a newly found lesion, the histopathological diagnosis was pT1b for the lesion on the lower greater-curvature side of the gastric body, pT1b for the lower anterior wall of the gastric body, pT1b for each of the two posterior-wall lesions of the mid-gastric body, and pT1a for the lesser curvature of the gastric body, pN0, pStage IA. Pathological examination showed that the four lesions were positive for tumor cells on EBER in situ hybridization and were considered to be EBV-related gastric cancers. She was discharged on the 12th day after the operation without any postoperative morbidity, and there has been no sign of recurrence, without postoperative therapy, for 12 months. Results: A 35-year-old female with no medical history of interest or drug allergies consulted for a palpable mass at the mesogastric level, to the left of the midline, associated with abdominal pain of 3-6 months of evolution, with no relationship to food intake, Valsalva manoeuvres, or physical effort, and without change in bowel habit or constitutional syndrome. Abdominal CT (computed tomography) revealed a cystic mass in the jejunal mesentery, with defined edges, about 6 cm in diameter, that did not enhance with contrast; likewise, there was no ascites, retroperitoneal adenopathy, or other intra-abdominal or pelvic masses. Radiology recommended completing the study with abdominal MRI (magnetic resonance imaging), which reported a possible lymphangioma at the level of the jejunal mesentery.
Surgical excision was decided and carried out by a laparoscopic approach, with emptying and enucleation of the lesion without incident; the postoperative evolution was favorable and the patient was discharged at 48 h. Pathology reported fibro-adipose tissue with lymphatic dilatations associated with a cystic lesion without epithelial lining, containing serous fluid and an abundant macrophagic reaction, compatible with mesenteric lymphangioma. Conclusions: The mesenteric cyst is a rare pathology, with an incidence ranging from 1/27,000 to 1/250,000, predominating in the fourth decade of life. It is defined as any cystic lesion in the mesentery and is subdivided according to its origin into lymphatic, mesothelial, urogenital, dermoid, enteric, and pseudocysts. Most of the time they are asymptomatic, although they can (as in our case) present with abdominal pain and even produce complications such as intestinal obstruction, volvulus, intracystic hemorrhage, infection, rupture, and even malignant transformation. For diagnosis, palpation can be of great help, revealing a mass with well-defined limits that is partially mobile. The imaging test of choice is abdominal ultrasound/abdominal CT, supplemented by magnetic resonance imaging. The recommended treatment is surgical excision, with laparoscopy as the first option; if complete, it can be considered curative. Purpose: Open gastrostomy (OG) is an alternative method of nutritional support, especially for patients with obstruction or dysfunction of the oral-esophageal route. Most of these operations are performed by young surgeons or residents. Laparoscopic gastrostomy (LG) is a newer procedure requiring skillful suturing techniques, for which most residents are not yet qualified. We designed a method of laparoscopic gastrostomy that provides an opportunity for suture-skill training while guaranteeing patient safety.
Material and method: The laparoscopic gastrostomy procedure was done with two 5 mm trocars. The lower body of the stomach was chosen, and four points around the gastrostomy wound were selected for subcutaneous fixation. A straight needle with 2-0 Prolene was inserted into the peritoneal cavity from the upper point and then passed through the seromuscular layer of the stomach; the needle was retrieved through the same point under the guidance of an 18-gauge needle. The same technique was used for the other three points. A purse string around the gastrostomy was created by a one-hand suture method and fastened with a Köckerling knot tier after insertion of a 20 Fr Foley tube. Finally, peritonization was completed by external hand-tying, with the knots kept in the subcutaneous layer. Material-method: We present the case of an 85-year-old woman who presented with melena, hematemesis, anemia (Ht 12.5%), and haemodynamic instability. After stabilization of the patient, gastroscopy revealed a tumor of the fundus (adenocarcinoma). The patient underwent laparoscopic total gastrectomy with an omega-type oesophagojejunal anastomosis and a Braun intestinal anastomosis, using 3 trocars (a 10 mm umbilical trocar, inserted as in single-incision laparoscopic surgery, and two 12 mm trocars in the midclavicular lines bilaterally). The oesophagojejunal anastomosis was constructed with a linear stapler for the posterior wall and closure of the anterior wall with laparoscopic sutures in two layers. The patient remains in good condition 6 months after the operation. Conclusion: The laparoscopic approach seems safe for the treatment of gastric cancer of the fundus and of the gastroesophageal junction, as it offers a better view of the surgical field and fewer postoperative complications. Restoration of gastrointestinal continuity with an omega-type anastomosis is considered a safe alternative to the classic Roux-en-Y anastomosis.
GIT and Bariatric Surgery, Faculty of Medicine, Alexandria University, Alexandria, Egypt; 2 GIT Surgery, Faculty of Medicine, Alexandria, Egypt. Background: Superior mesenteric artery syndrome is best described as compression of the third part of the duodenum by the superior mesenteric artery, resulting in obstruction. This rare medical condition has been studied for decades yet remains obscure. This study aimed to analyze the different clinical presentations, diagnostic modalities, treatment approaches, and outcomes, as well as to emphasize the importance of long-term follow-up. Methods: Thirty-five superior mesenteric artery syndrome cases were collected retrospectively from a Facebook group called 'Superior Mesenteric Artery Syndrome Awareness & Support'. A questionnaire was designed using Google Forms to obtain the demographics, presenting symptoms, risk factors and co-morbidities, investigations, means of treatment, and outcomes. Data were entered into Microsoft Office Excel for statistical analysis. Results: The median age at diagnosis was 22 years. The median body mass index was 20.8 kg/m². The median time interval from symptom onset to initial diagnosis was 22 months. The major presenting symptoms were abdominal pain (82.86%), nausea (77.14%), and vomiting (65.71%). Contrast-enhanced abdominal computed tomography (82.86%) was the modality most commonly used to confirm the diagnosis. Thirteen cases (37.14%) were congenital. Thirty patients (85.71%) had received treatment. The overall management success rate was only 13.33%. Surgical management (34.29%) was the most used regimen. Conclusion: The diagnosis of superior mesenteric artery syndrome is established after thorough assessment of the clinical presentation and confirmation with suitable imaging modalities. The choice of treatment should depend on the cause and severity, as different patients respond differently to therapy. Recurrence is possible in all patients, thus long-term follow-up is required.
aims: in the last hundred years much has been written on peptic ulcer disease and the treatment options for one of its most common complications: perforation. laparoscopic repair of perforated peptic ulcer has been gaining popularity in recent years. treatment for perforated ulcer can be performed laparoscopically in 85% of cases, making it possible to avoid a median laparotomy, which can lead to wound infection and late eventration. methods: a 77-year-old male presented to the emergency room with a three-hour history of progressively worsening epigastric pain and nausea. physical examination revealed rebound tenderness compatible with an acute abdomen. a ct scan showed significant pneumoperitoneum without an identifiable perforation site, and a distended stomach with abundant fluid and dense content in the fundus/body suggestive of active arterial bleeding. results: the patient was emergently taken to the operating room for diagnostic laparoscopy. a perforation was found in the greater gastric curvature with associated blood remnants. gastrotomy for clot removal was performed without observing active bleeding. the gastrotomy was repaired using standard stitches. all exudate was aspirated and the peritoneal cavity was irrigated with warm saline solution. the patient had an uncomplicated post-operative course. the jp drain was removed and he was discharged one week after surgery. conclusion: the role of laparoscopic surgery in emergencies is well documented. the laparoscopic approach is indicated in any case of suspected gastroduodenal perforation and seems to offer the same advantages as for the vast majority of laparoscopic procedures. laparoscopic surgery may therefore have a real place in the treatment of perforated peptic ulcer.
aim: the aim of our study was to evaluate the effectiveness of local injection of platelet-rich plasma for treatment of peptic ulcer bleeding with hemorrhagic shock in an experimental model. 
methods: the study was performed on 60 wistar rats according to local and international rules for working with experimental animals. the average weight of the animals was 183 ± 16 grams. in all animals our modification of the type 2 acetic acid ulcer (susumu okabe, 2005) was modeled. we randomly divided the animals into 3 groups. 20 rats with only the modeled ulcer were included in group 1. 20 rats with modeled ulcer and hemorrhagic shock after 3-3.5 ml blood sampling were included in group 2. in group 3 we included 20 rats with modeled ulcer and hemorrhagic shock that received local periulcerous injection of 0.1 ml of autologous platelet-rich plasma. on the 1st, 7th and 14th day, measurement of ulcer area and morphological study were performed. results: our data demonstrate a tendency toward decreasing ulcer area in all groups over time. we also compared the size of the ulcerative defects in all groups at every time point of the study. on the 1st day there were no differences (p > 0.05) between ulcer areas in the groups. on the 7th day we found a more rapid decrease of size in group 3; however, this tendency did not reach statistical significance (p > 0.05). on the 14th day the difference was larger and this time statistically significant (p < 0.01). group 3 also demonstrated better stimulation of fibroblast activity and revascularization in the young connective tissue, with improved oxygenation in the ulcers, enhanced cell proliferation and differentiation, and accelerated maturation of connective tissue and healing of ulcers. 
conclusion: platelet-rich plasma reduces the inflammatory response and stimulates proliferation of gastric epithelial cells by the 7th day, with restoration of secretory activity and epithelialization of ulcers in 71.4% of experimental animals by the 14th day, activation of the fibroblastic reaction throughout the experiment, and a decrease in ulcer area.
h. fujii, dept. of surgery, japanese red cross fukui hospital, fukui, japan. introduction: in conjunction with charmant, a local eyeglass frame manufacturer, we developed novel devices called the fj (free jaw) clip, to grasp organs in the abdominal cavity, and the f (free) loop plus, to pull thread extracorporeally from within the abdominal cavity. product summary: the fj clip is a removable stainless-steel forceps for laparoscopic surgery, used to grasp organs in the abdominal cavity. it provides a strong grip but rarely crushes organ tissue. the clip comes in two sizes, one for use in a 5-mm port and the other for use in a 12-mm port, with lengths of 29.4 mm and 35.6 mm, respectively. to pull out thread tied to the fj clip, we developed the f loop plus, a 21-g, 90-mm-long special stainless needle with 0.1-mm-diameter niti alloy thread, which is used to pull suture threads from inside the abdominal cavity to outside the body. cases: we performed 9 cases of reduced port laparoscopic and endoscopic cooperative surgery (lecs). we performed reduced port surgery (rps) by making a 1.5-cm incision at the umbilicus, inserting 2 trocars (12 mm and 5 mm), and inserting another trocar (5 mm) at the left side of the abdomen. we elevated the left hepatic lobe with a 12-mm fj clip for penrose drain placement, grasped the front wall of the gastric body with a 12-mm fj clip, applying traction toward the legs to pull up the tissue around the tumor, and resected all layers of the tumor via an oral endoscopic submucosal dissection technique. 
the resected area was closed with a suturing device or interrupted sutures in the abdominal cavity.
a 75-year-old female was admitted to the emergency department with complaints of abdominal cramping pain, back pain and diarrhea for one day. she also had fever, up to 39 °c. over the preceding two weeks, she had felt occasional epigastric pain. her past medical history included hypertension. on physical examination, she was conscious and alert. abdominal examination revealed diffuse tenderness and knocking pain over the right flank. laboratory tests indicated a decreased white cell count of 2890/cumm with 22% band forms, c-reactive protein of 25 mg/dl and abnormal liver function tests (alanine aminotransferase: 149 u/l, alkaline phosphatase: 249 u/l, gamma-glutamyl transferase: 175 u/l) without hyperbilirubinemia. abdominal x-ray showed paralytic ileus. our presumptive diagnosis was acute peritonitis, based on the patient's symptoms. empirical antibiotics were administered immediately, and a computed tomography (ct) imaging study was performed. the ct scan showed a stick-like foreign body, about 1.5 cm in length, between the ventral side of the pylorus and the smv lumen, associated with perifocal infiltration and segmental smv thrombus formation (fig. 1); however, there was no obvious pneumoperitoneum and no evident ascites. an emergency exploratory laparotomy was performed, revealing a stomach perforation at the posterior wall with a 3-cm fish bone passing through the pancreas into the smv. localized inflammation and fibrosis were identified without obvious fluid accumulation (figs. 2-4). removal of the fish bone and simple closure of the stomach perforation were performed. blood cultures revealed bacteroides thetaiotaomicron. three weeks later, a follow-up ct scan showed smv obliteration with chronic pylephlebitis.
aim: here we present a case report of endoscopic treatment for iatrogenic gastric perforation secondary to chest tube insertion. 
methods: a case report of a 24-year-old male with a history of a road traffic accident. the described injuries were severe brain injury with gcs < 8 at pre-hospital care arrival, and thoracic injury with several rib fractures on the left hemithorax and hypoventilation on the left side. prior to hospital transfer a chest drain was inserted on the left side, and the patient was intubated. results: at hospital admission, the patient was hemodynamically stable and connected to a mechanical ventilator. thoracic examination showed persistent hypoventilation of the left chest. no other abdominal or pelvic injuries were found on physical examination. a frontal chest x-ray revealed pneumothorax, and the chest tube was not visible. a further ct scan showed the chest drain placed in the abdominal cavity, into the stomach, besides a subdural hematoma, comminuted pelvic fractures of the pubic rami and a left sacroiliac fracture. during the first 24 h in the icu, neurological worsening was observed, and a new cranial ct revealed enlargement of the subdural hematoma, for which the patient underwent decompressive craniectomy, with improvement thereafter. following a five-day period of stabilization after surgery, the patient was evolving satisfactorily, and removal of the intragastric chest drain was considered. endoscopy was performed to confirm the placement of the drain, and it was removed under direct vision. approximately twenty-five centimeters of the catheter were visualized in the gastric lumen and then successfully removed. the patient recovered well and was discharged from the icu to the medical ward after fourteen days, and a week later he was discharged home. conclusion: endoscopic management of gastric perforation after chest drain insertion may be effective and can prevent the morbidity of open surgery. 
aims: intestinal infusion treatment with levodopa/carbidopa (duodopa) is a therapeutic option for advanced parkinson disease cases with no response to conventional treatment. the drug requires a gastrostomy, performed either percutaneously under endoscopic guidance or, if that is not possible, by laparoscopy. later, a duodenojejunal tube is placed in order to infuse the duodopa gel continuously via a portable pump. in this report, we explain the laparoscopic gastrostomy technique. methods: in this report, we include two patients with advanced parkinson disease: the first a 61-year-old female patient suffering from an important gait disorder, and the second a 71-year-old male patient with uncontrolled motor fluctuations. in both cases, a percutaneous endoscopic gastrostomy was proposed, but neither was feasible because of non-transillumination between the gastric and the abdominal wall. under general anesthesia, pneumoperitoneum was established with a veress needle. three main trocars and one accessory trocar were placed. at the level of the gastric antrum, a 1-cm incision was made to insert a gastrostomy tube, to serve as the guide for the drug infusion catheter. next, the gastrostomy was fixed to the abdominal wall by the stamm technique, externalizing the catheter through the accessory trocar in the midline. results: on the first post-operative day, a duodenojejunal tube was placed under endoscopic control through the gastric device. both patients recovered satisfactorily, no complications were described, and they lead a fully normal life within the limitations of their underlying disease. conclusions: duodopa intestinal infusion shows a significant improvement in the symptoms of advanced parkinson disease compared with oral medication, with positive results regarding quality of life. 
when catheter placement by the endoscopic route does not seem possible, laparoscopic gastrostomy constitutes a valuable surgical option for the treatment of these patients.
perforated peptic ulcer (ppu) is a common surgical emergency, and laparoscopic repair has been introduced as an alternative to open repair. it has shown good results and allows closure and peritoneal lavage, just as open repair does, but with the advantages of minimally invasive surgery. the objective is to report the outcomes of laparoscopic ppu repair in our hospital. methods: from january 2015 to october 2018, 16 patients with a clinical diagnosis of ppu were assigned to undergo laparoscopic repair. this retrospective study included all husm patients who underwent laparoscopic ppu repair by emergency surgeons. a minimum follow-up of 3 months was carried out. results: of the 16 patients in this series, 70% were men and 30% were women, between 15 and 80 years of age at the time of surgery, with an average of 48 years. the time between the manifestation of symptoms and surgery was > 24 h in 70% of patients. six patients had a history of previous ulcer or non-steroidal anti-inflammatory intake, and up to 50% were smokers. a ct scan was performed in all cases to reach the diagnosis. primary closure with simple suture plus omental patch was the elected technique (90%). the approach was performed with 3 trocars in 44% of cases, 4 trocars in 50% and 5 trocars in 1 case. 13 cases (81%) were gastric ulcers, 2 (13%) were duodenal, and in one case no perforation was found. the conversion rate was 19%: two cases due to technical difficulty, and one case because the site of the perforation was not found. the median postoperative stay was 7 days, although there were 2 cases with intra-abdominal complications. there was one death, due to a metastatic pulmonary neoplasia diagnosed in the immediate postoperative period. there were no cases of recurrence during the follow-up period. 
conclusion: in most centers, including ours, the rate of laparoscopic management has gradually increased along with the improvement of technical skills. improvements in the outcome of laparoscopic ppu repair are to be expected with more experienced surgeons and good selection of cases.
general surgery, jzu hospital 'sveti vracevi', bijeljina, bosnia-herzegovina. introduction: a diverticulum is an outpouching of a hollow organ. gastric diverticulum is a rare form of this disease. the incidence of detection varies depending on the investigation method: it has been reported in 0.02% of autopsies, 0.04% of gastroduodenal contrast roentgenographies, and 0.01-0.11% of upper endoscopies. small diverticula are usually asymptomatic, but larger diverticula can cause variable symptoms such as abdominal pain, a feeling of epigastric fullness right after meals, discomfort in the upper abdomen, and severe 'foetor ex ore'. the diagnosis is usually established by procedures such as gastroduodenal contrast roentgenography, upper endoscopy and abdominal ct. case report: a 57-year-old woman came to our hospital because of discomfort and mild pain in the upper abdomen that had lasted for the past year. the diagnosis was established after an abdominal ct scan and upper endoscopy. initially she was prescribed conservative therapy (proton pump inhibitors). since the symptoms persisted, laparoscopic resection of the gastric diverticulum was performed using an endo gia stapler. as the discomfort and abdominal pain disappeared, the patient was discharged from hospital on the fourth postoperative day. conclusion: asymptomatic gastric diverticula do not require treatment, but since gastric diverticula can have complications such as bleeding, perforation and neoplasia, patients without symptoms should be monitored. initial therapy for symptomatic diverticula is conservative (proton pump inhibitors). 
if conservative therapy does not produce the expected results, laparoscopic resection of the diverticulum should be considered.
introduction: acute perforation of a gastric ulcer is a serious entity that requires urgent surgical treatment on most occasions; it is increasingly accepted that the approach of choice is laparoscopic, depending, above all, on the time of evolution of the process. objectives: to demonstrate the safety and efficacy of the laparoscopic approach to perforation of a pyloric peptic ulcer, even in cases of severe peritonitis, by means of a standardized procedure, insisting on sequential thorough washing of the cavity. material and methods: we present a video of the surgical intervention of a patient with acute abdomen, with a history of nsaid ingestion, and examination and ct findings compatible with perforation of a hollow viscus, probably of gastric origin. results: intervention: complete laparoscopic approach, 4 trocars. severe biliopurulent peritonitis due to acute pyloric perforation; fluid culture, suture of the perforation, omentoplasty, sequential thorough washing of the cavity with physiological saline and placement of drains. correct postoperative course, with discharge from the hospital on the 7th day after completing antibiotic treatment. endoscopy and helicobacter testing were performed on an outpatient basis with normal results. conclusion: the laparoscopic approach is safe and effective in acute and complicated gastric ulcer disease, even in cases of severe peritonitis.
surgical procedure: the clean-net procedure involves selective dissection of the serosa and muscle layer using a laparoscopic monopolar endoscopic scissor. the preserved mucosal layer provides a mechanical barrier between the gastric lumen and peritoneal cavity that aids in the prevention of peritoneal contamination with gastric contents. 
tumors are observed with an upper gastrointestinal endoscope after injection of indocyanine green (icg) into the peri-tumoral submucosal layer at 4 points. selective seromuscular dissection is performed using a laparoscopic monopolar electrocautery scissor. the mucosa surrounding the gist is then resected using an endoscopic mechanical stapler to prevent exposure of the gastric lumen to the peritoneal cavity and peritoneal tumor cell seeding. results: there were 5 males and 1 female, and the average age was 65 years. the mean operation time was 186 min, the average bleeding volume was 14.6 ml, and the mean postoperative hospital stay was 7.8 days. the mean tumor diameter was 32.3 mm; the final histopathological diagnosis was gist in 5 cases and schwannoma in 1. there were no postoperative complications of clavien-dindo grade 2 or higher. conclusion: clean-net was found to be safe and useful for the treatment of gastric smt with ulceration.
year outcomes: laparoscopic heller myotomy stands the test of time. aims: laparoscopic cardiomyotomy leads to excellent relief of dysphagia in 95% of patients and avoids thoracotomy or laparotomy. methods: we present a video illustration of the procedure as modified at the american university of beirut medical center. so far, 129 patients have undergone laparoscopic cardiomyotomy, with an age range of 14 to 76 years, 56 males and 73 females. most of them had had previous balloon dilatation. results: all cases were successfully completed laparoscopically without complications. follow-up of 2 months to 15 years revealed excellent results, with complete resolution of symptoms and no need for further medications. the procedure results in minimal post-operative pain and a very short recovery period and is associated with a low complication rate. conclusion: cardiomyotomy for achalasia is ideal for the laparoscopic approach. magnification allows precise division of muscle fibers. 
the new technique of hydrodissection and use of enseal for division of the esophageal muscle allows completion of the procedure without injury to the mucosa. therefore, adequate release of the obstructing segment followed by an anti-reflux procedure (toupet) will lead to excellent results with minimal morbidity and no mortality.
aims: laparoscopic repair of huge hiatus hernia. methods: twenty-two cases of huge hiatus hernia presented to the american university of beirut medical center. patients underwent, through 5 trocars in the upper abdomen, reduction of the hernial sac from the chest. special care was taken in the dissection of the mediastinum to keep the thoracic fascia and pleura intact. the defect was sutured primarily with 0 ethibond sutures, reinforced by a u-shaped onlay prolene mesh fixed at the right and left crus, and a floppy nissen fundoplication was performed. results: the video presentation includes the technical aspects and the method of reducing and repairing huge hiatus hernia.
aim: nowadays, there is little experience worldwide in applying a robotic surgical system (rss) in the treatment of patients with hiatal hernia (hh) and reflux esophagitis (re). the aim of this study was to determine the possibility and feasibility of using an rss in the treatment of patients with hh. materials and methods: a total of 41 patients underwent robot-assisted hh repair without mesh, followed by fundoplication with our original method (360° full symmetric wrap). clinical and technical analysis did not reveal any advantages over similar laparoscopic procedures for hh type i, so we abandoned the use of the rss for hh type i, and these 4 patients were excluded. there were 32 (86%) patients with hh type iii and 5 (14%) with type iv. the surgeries were performed by an experienced robotic upper gastrointestinal surgeon and conducted with the da vinci si surgical system (intuitive surgical, sunnyvale, ca). results: average operation time was 118 ± 37 (62-173) min. 
the patients' mean age was 56.2 ± 10.9 years (range 29-68) and bmi was 30.8 ± 7.1 (range 17.1-44.3) kg/m². average blood loss was 20 ± 9 (5-70) ml. average hospital stay was 5 ± 1.3 (1-15) days. the average follow-up time was 14 ± 3.6 (6-24) months. postoperative x-ray imaging and upper gi endoscopy were conducted in all 37 (100%) patients. no hh recurrence was diagnosed. we did not observe a relapse of hh or clinical manifestations of re in the early (less than 30 days) or long-term (more than 6 months) postoperative periods. conclusion: we conclude that robot-assisted surgery is safe, appropriate and justified in patients with hh types iii and iv. all procedures performed in patients with giant hh revealed clear technical advantages of the rss over similar laparoscopic operations: an enlarged 3d hd image and bendable instruments with endowrist technology allowed precise dissection of tissues (hernial sac, cicatricial adhesions) in a narrow anatomical space, the posterior mediastinum, without damage to the pleura, pericardium or vagus nerves. we believe that use of the rss in the treatment of patients with reflux esophagitis and/or hh type i is unjustified, due to the lack of proven advantages over laparoscopy.
introduction: the presence of major anatomical obstacles, such as a massive caudate lobe, in the confined operative field of laparoscopic hiatal hernia repair (lhhr) poses a significant challenge to the foregut surgeon. aim: to provide a safe alternative for lhhr using a laparoendoscopic approach. method: this patient is a 60-year-old female with a bmi of 32.2. her past medical history includes diabetes, hypertension and hyperlipidemia. she had had gerd for 20 years. her egd showed a 5 x 4 cm hiatal hernia and class b esophagitis. manometry showed ineffective esophageal motility. we used the classic five-port approach for lhhr. 
we found a massive caudate lobe comparable in size to an already enlarged left lobe of the liver. the operative strategies were: terminating the procedure; proceeding with the standard approach and taking the risk of bleeding from the caudate lobe itself or the inferior vena cava (ivc), with possible catastrophic outcome; or using the laparoendoscopic approach. the following three steps facilitated the performance of safe and effective surgery. (1) additional liver retractor: this improved exposure and minimized manipulation of the caudate lobe. (2) extracorporeal sliding arthroscopic knots (esak): esak are similar to the knots used in endoloop; they are tied extracorporeally and require a single insertion of the knot pusher, as they do not unravel. (3) transoral incisionless fundoplication (tif): we performed tif to avoid a limited operative field and to prevent the excessive tissue manipulation associated with laparoscopic fundoplication. tif also preserves the angle of his and produces a partial fundoplication, which has fewer side effects of dysphagia and gas bloat syndrome. results: the operative time was 98 min (lhhr 78 min and tif 20 min). there were no complications. the patient discontinued omeprazole, which she had used daily for 20 years. at 6 months follow-up, her gerd-related quality of life (hrql) and gerd symptom (gerss) scores had improved (30 vs 5 and 40 vs 8, respectively). conclusion: laparoendoscopic repair of hiatal hernia in the presence of anatomical obstacles is safe and effective. longer follow-up is needed to assess the durability of this repair.
gastroesophageal reflux disease (gerd) is a condition that reduces quality of life and can cause disorders associated with acid reflux, such as bronchial asthma, barrett's esophagus and esophageal adenocarcinoma. gerd is often caused by an existing hiatal hernia. nowadays, some surgeons have difficulties with the laparoscopic approach to treatment of recurrent hiatal hernias. the patient was a 30-year-old man. 
he requested medical assistance with dysphagia, nausea after eating and heartburn worsening in a horizontal position. conservative treatment was not effective. transthoracic nissen fundoplication was performed in 2017. the patient's main complaints persisted during the postoperative period. the upper half of the stomach and an s-shaped curved esophagus were located in the mediastinum according to multislice computed tomography of the thorax in august 2018. in our clinical center, laparoscopic crurorrhaphy, cardiomyotomy and nissen fundoplication were performed in november 2018. during the surgery the normal anatomical position of the stomach was restored and the s-shaped curve of the esophagus was removed; a gastric cuff (collis-nissen) was created and anterior and posterior crurorrhaphy was performed. the patient was in intensive care for 8 h. nasogastric tube feeding was continued during the first 48 h. passage of contrast through the esophagogastric junction was free within 72 h after surgery. the patient was discharged within 5 days after surgery. this case report shows that, at the current stage of surgery, the laparoscopic approach can be useful not only for treatment of primary hiatal hernias but also for treatment of recurrent ones.
aims: the laparoscopic heller myotomy procedure, completed with an anti-reflux procedure, is technically demanding. we report a case of laparoscopic heller myotomy followed by a dor anterior fundoplication. methods: this is the case of a 57-year-old caucasian woman with gradual dysphagia for solids and liquids, accompanied by severe regurgitation and chest pain. an initial diagnosis of achalasia was made in 2010, with the use of manometry and barium swallow. endoscopic dilatations were attempted pre-operatively with no clinical improvement. the decision was made to perform a laparoscopic heller myotomy, combined with a standard dor anterior fundoplication. 
a four-port operation took place (one intra-umbilical 10-mm trocar, as in the single-incision laparoscopic surgery (sils) technique, two 5-mm subcostal trocars, and another 10-mm subcostal trocar for the liver retractor). the operation lasted 2 h and 15 min. results: no post-operative complications were noted. the post-operative swallow test showed improvement of esophageal patency. the patient started a liquid diet three days later and was discharged six days post-operatively. two months later the patient presented no complications. conclusions: heller's myotomy has demonstrated good long-term results in the treatment of esophageal achalasia, and the laparoscopic approach has been well established over the last two decades. it is a very demanding operation to perform, and the disease is relatively rare, making the learning curve difficult to achieve.
aims: achalasia is a motor disorder of the esophagus due to degeneration of ganglion cells in the myenteric plexus, leading to failure of relaxation of the lower esophageal sphincter, accompanied by a loss of peristalsis in the distal esophagus. the association of long-term achalasia with a large hiatus hernia is an infrequent entity. the therapeutic options include medical treatment, endoscopic treatment and surgical treatment associated with an antireflux procedure, with the laparoscopic approach being the most indicated due to its better results in terms of morbidity, mortality and recurrence. the aim of the video is to show the effectiveness and safety of the laparoscopic approach in this infrequent pathology, pointing out the importance of performing a standardized procedure. methods: a 73-year-old male patient, with a personal history of chronic ischemic heart disease and obesity, diagnosed with long-term achalasia with moderate dilatation of the esophagus associated with giant hiatus hernia. the complementary examinations and iconography of interest are presented. 
results: intervention: complete laparoscopic approach, 5 trocars. reduction of the hernial content into the abdominal cavity, dissection of the hernial sac and esophageal lipoma. extended mediastinal esophageal dissection. complete resection of both the sac and the lipoma, respecting the posterior vagus. heller's myotomy of 10 cm, including 3 cm distal to the esophagogastric junction (egj); a 3-mm perforation of the mucosa at the egj level, suture and methylene blue verification of the seal. hiatorrhaphy and dor-type anterior fundoplication as the antireflux technique. correct postoperative course, with egd control on the 3rd post-operative day and discharge on the 6th. asymptomatic at 24 months after surgery. conclusion(s): for achalasia, laparoscopic heller myotomy with a partial fundoplication should be the treatment of choice in patients at low surgical risk. the length of the myotomy, especially distal to the egj, is one of the most important aspects of the surgery to achieve effective disruption of the les. the presence of a giant hiatus hernia makes the procedure difficult, increasing the risk of complications such as perforation. standardization is essential to increase safety and efficacy in these complex cases.
purpose: there is evidence that mesh-reinforced hiatal repair has resulted in a significant reduction in recurrence rates in comparison with primary suture repair, at least in short-term follow-up. despite this, the standard of care for repairing large paraesophageal hiatal hernias (lphh) remains controversial, because no clear guidelines are given regarding indications, mesh type, shape and position. the aim of this study is to evaluate our short-term outcomes in the management of lphh with a biosynthetic monofilament polypropylene mesh surrounded by a high-purity, adherent titanium dioxide surface coating to enhance biocompatibility (tio2 mesh™). methods: a retrospective study was conducted at our institution between december 2014 and october 2018. 
data were collected on 27 patients with lphh greater than 5 cm in which laparoscopic repair was carried out by primary suture and additional reinforcement with a tio2 mesh™. clinical and radiological recurrences, dysphagia and mesh-related complications were investigated. results: there were 17 females and 10 males with a mean age of 73 years (range, 63-79 years). all operations were completed laparoscopically. median postoperative stay was 3 days. after a mean follow-up of 16 months, 3 patients (11.1%) developed clinical recurrence of reflux symptoms and 2 (7.4%) had radiological recurrences. there were no mesh-related complications. conclusions: the use of tio2 mesh™ for laparoscopic repair of lphh is suitable, with a reasonably low recurrence rate in this short-term study. additional long-term studies with larger numbers and follow-up over years will be necessary to establish whether this mesh is useful in the prevention of recurrences and mesh-related complications.
background: surgery for refractory gastroesophageal reflux disease (gerd) has a satisfactory outcome; however, fundoplication sometimes fails and redo surgery is required. several publications have investigated the feasibility of performing reoperative fundoplications using laparoscopic techniques. the aim of this study was to describe our experience with laparoscopic redo fundoplications over the last 4 years. material and methods: we retrospectively reviewed 26 consecutive patients who required laparoscopic redo fundoplication from january 2014 to august 2018. the indications were recurrent symptoms of gastroesophageal reflux disease (gerd) (15.4%), recurrent symptomatic paraesophageal hernia (42.3%), dysphagia (30.8%) and acute volvulus (11.5%). results: all redo fundoplications (toupet 69.2%, nissen 26.9%) were completed laparoscopically. the mean operative time was 120 min (range, 100-136.25 min). a mesh was placed in 31% of cases. 
intraoperative and postoperative complication rates were 23.1% and 3.8%, respectively. the mean hospital stay was 4 days (range 3-5 days). one patient (3.8%) required a third operation, for acute recurrent paraesophageal herniation of the redo wrap one month after surgery, which was again repaired laparoscopically. symptomatic outcome was successful in 84.6% without any proton pump inhibitor therapy. conclusion: laparoscopic redo fundoplication is technically feasible and clinically effective, with a reasonably low rate of postoperative complications.

p620-upper gi-reflux-achalasia. objectives: in recent years, balloon dilatation (bd) for diseases requiring correction of impaired patency of the sphincter zones of the esophagogastroduodenal region has become widespread. purpose: to assess the effectiveness of balloon dilatation in patients with impaired sphincter zones of the esophagogastroduodenal region. materials and methods: in the institute's department of surgery, bd was performed in 245 patients over the period from 2006 to 2018. of these, 210 were diagnosed with achalasia of the cardia (ac): stage 1 in 17, stage 2 in 86, stage 3 in 62 and stage 4 in 45. 7 patients were diagnosed with pylorospasm, 7 had compensated stenosis and 21 had subcompensated ulcerative pyloroduodenal stenosis. there were 87 males and 158 females, with a mean age of 45.3 ± 5.2 years. bd was performed under endoscopic and/or x-ray control with 'boston scientific' balloons 18-20 mm, 35 mm and 40 mm in diameter, in courses of 3-6 sessions at intervals of 1-3 days with a balloon exposure of 3-6 min. the effect of bd was evaluated by esophagogastroscopy, balloon manometry and x-ray barium passage. results: in the course of the study, existing indications were refined and new indications were developed for endoscopic bd in pyloroduodenal stenosis and in ac.
in patients with stage 1-2 ac, a positive result was noted in 94.3% of cases after the first bd session. recurrences of ac within 5 years of bd were established in 49 (23.3%) patients: in 12.2% at stage 1, 16.3% at stage 2, 24.5% at stage 3 and 47.0% at stage 4. repeat bd courses for ac recurrence were ineffective in 29 (13.8%) cases. recurrence of subcompensated pyloroduodenal stenosis was diagnosed in 2.8% of cases within 24 months of bd. conclusions: bd is an effective method for restoring patency of the sphincter zones in pathology of the esophagogastroduodenal region. keywords: balloon dilatation, achalasia of cardia, pylorospasm, ulcerative pyloroduodenal stenosis, recurrences.

introduction: reoperation in antireflux surgery significantly increases morbidity and mortality, by up to 75-85%, reaching rates of 42% in patients undergoing 3 or more surgeries. the advantages of laparoscopic surgery applied to this technique have broadened its acceptance and use, with feasibility, safety and efficacy comparable to those of open surgery. objective: to evaluate the current literature on antireflux surgery reintervention, focusing on the main indications for reintervention, the type of approach, and the morbidity and mortality of laparoscopic antireflux surgery. material and methods: a literature search was conducted in two electronic databases, medline and embase, limited to the period 2009 to 2016. search terms related to the procedure or intervention and the underlying disease were used. we selected observational studies (cohort, case-control and case series) in which the main indication for antireflux surgery was gastroesophageal reflux disease. results: a total of 19 studies were selected: case series (57.9%), cohort studies (31%) and case-control studies (10.5%), comprising a total of 1940 patients.
the main indications were anatomical failures, among which recurrent and sliding hiatus hernia accounted for the highest percentage; among physiological failures, impaired esophageal and gastric motility occurred most frequently. the approach was laparoscopic in 85%, with a conversion rate of 5.3%; the open approach (12.9%: abdominal 8.6%, thoracic 3.5%) was reserved for complex cases with more than one reintervention, the thoracic route being used for high esophageal lesions that cannot be repaired transabdominally. the main complications were injuries to hollow viscera, such as the esophagus and the stomach; these complications are related to the complexity of the procedure. mortality has remained low, up to 0.05%; the causes of death were medical complications not related to the procedure. conclusions: this systematic review of reoperation in reflux surgery confirms that morbidity after reoperative surgery is higher than after primary surgery, that indications for reoperation increase with the use of new technologies (manometry), and that the laparoscopic approach continues to rise, with wide adoption and improving results.

aims: the eras protocol is not commonly used in acute emergency procedures. elective lc is commonly performed as day surgery, while in the emergency setting of acute cholecystitis the in-hospital stay averages 4.5 days. the aim of this trial is the application of an eras protocol in patients with acute cholecystitis undergoing laparoscopic cholecystectomy. methods: a randomized prospective trial was conducted in the first surgical department of sismanogleion g.h.a. the study included 96 patients who were admitted with acute cholecystitis and underwent lc within 24 h of admission. preoperatively, they all received crystalloid isotonic solutions and antibiotics. 5.3% underwent ercp preoperatively due to choledocholithiasis.
postoperative care included early mobilization (within 2 h of surgery), early fluid intake (within 4 h) and early liquid food intake (within 6 h). all received antibiotics systematically, with analgesics and antiemetics on demand. asa score was not an exclusion criterion. results: conversion to an open procedure was necessary in 6.5% of patients, who were excluded from the study. all the rest were discharged within 24 h of surgery with instructions to take oral antibiotics for 3 more days. readmission was necessary for 2 patients one week after the operation: the first presented with a bile leak and underwent ercp with stent placement and percutaneous drainage of the intra-abdominal collection; the second presented with choledocholithiasis and underwent ercp with balloon catheterization. conclusion: it is commonly accepted that an eras protocol in elective procedures enhances postoperative recovery while reducing in-hospital stay and cost. in the emergency setting, eras cannot be applied preoperatively; however, a modified postoperative application seems to offer advantages equal to those observed in elective procedures.

aim: laparoscopic cholecystectomy is the gold standard for the treatment of symptomatic cholelithiasis. administration of a single dose of chemoprophylaxis before elective laparoscopic cholecystectomy is a broadly accepted practice; however, its value is currently questioned, especially in low-risk patients. method: this study was conducted in a high-volume surgical department. one hundred and twelve patients undergoing elective laparoscopic cholecystectomy were included in this research. written consent was obtained after thorough patient briefing. half of the patients received one dose of antibiotics 30 min prior to the incision and the other half did not receive any chemoprophylaxis. results: ages ranged from 16 to 81 years.
the commonest concomitant diseases were arterial hypertension, type 2 diabetes, hypothyroidism and respiratory insufficiency. approximately 30% of patients were smokers and 11% were obese (bmi > 30). the duration of the operations was between 20 and 85 min. intraoperative gallbladder rupture was observed in 36 patients (32%). all patients were discharged on the first postoperative day and were monitored for 30 more days. in the chemoprophylaxis group, no surgical site infection or other major complication was observed. in the group that did not receive antibiotics, one patient developed a surgical site infection, specifically of the epigastric port site, which was treated with drainage of the abscess and oral antibiotic administration. no other complications were recorded. conclusion: our study found no statistically significant difference between the two patient groups, suggesting that chemoprophylaxis may not be necessary in elective cholecystectomy operations. moreover, antibiotics increase the cost of hospital stay and are often accompanied by multiple mild or severe side effects.